<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Poor Disk performance for cp, in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/poor-disk-performance-for-cp/m-p/4054188#M304992</link>
    <description>The problem is that you really don't know what you are measuring because there are too many variables. Copying cooked files throws too many things into the mix. Is it filesystem performance? Is it buffer cache? Is it LVM or VxVM throughput? Is it disk throughput?&lt;BR /&gt;&lt;BR /&gt;You need to drop down to a lower level so that you are measuring no more than absolutely necessary at one time. Moreover, when you read from a cooked file and write to a cooked file, what are you actually measuring? You are going through two stacks of the items listed above.&lt;BR /&gt;&lt;BR /&gt;I would approach it like this:&lt;BR /&gt;1) timex dd if=/dev/rdsk/c2t1d0 bs=256k \&lt;BR /&gt;   count=2000 of=/dev/null # measures raw disk performance&lt;BR /&gt;2) timex dd if=/dev/dsk/c2t1d0 bs=256k \&lt;BR /&gt;   count=2000 of=/dev/null # measures "luke-warm" (buffered) disk performance&lt;BR /&gt;3) timex dd if=/dev/vg05/rlvol1 bs=256k \&lt;BR /&gt;   count=2000 of=/dev/null # raw LVM&lt;BR /&gt;4) timex dd if=/dev/vg05/lvol1 bs=256k \&lt;BR /&gt;   count=2000 of=/dev/null # "luke-warm" LVM&lt;BR /&gt;&lt;BR /&gt;For VxVM, use if=/dev/vx/rdsk/dgxxx/volnn and if=/dev/vx/dsk/dgxxx/volnn in steps 3-4.&lt;BR /&gt;&lt;BR /&gt;5) Create a 500 MiB cooked file and test it:&lt;BR /&gt;   timex dd if=/dev/zero bs=256k count=2000 \&lt;BR /&gt;   of=/aaa/bbb/myfile # cooked write test&lt;BR /&gt;6) timex dd if=/aaa/bbb/myfile bs=256k of=/dev/null # cooked read test&lt;BR /&gt;&lt;BR /&gt;Repeat the tests several times, average the results, and compare them to the old box; you should then have some idea where the bottleneck(s) lie.</description>
    <pubDate>Mon, 13 Aug 2007 21:10:47 GMT</pubDate>
    <dc:creator>A. Clay Stephenson</dc:creator>
    <dc:date>2007-08-13T21:10:47Z</dc:date>
    <item>
      <title>Poor Disk performance for cp,</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/poor-disk-performance-for-cp/m-p/4054187#M304991</link>
      <description>&lt;!--!*#--&gt;Hi,&lt;BR /&gt;&lt;BR /&gt;I am having some problems with cp (copy) on our new system. We have an rx3660 (8 GB RAM, 1 CPU) with an MSA30 DB attached via U320 (both buses). I have a filesystem /data, 30 GB (distributed, extent-striped over 3 drives (pvg1), mirrored (pvg2)); the primary is sitting on one channel, the mirror copy on the second channel.&lt;BR /&gt;A second filesystem, /eom, 30 GB, is striped across the mirror disk group (pvg2) on the second channel as well. I cp /data to /eom.&lt;BR /&gt;The speed is very low, around 6-7 MB/s; it takes 30-40 minutes on an idle system to copy 16 GB of data. In comparison, my old system, an RP5440 with U160, is 50% faster. I can't put my finger on why. I have included all my system.perf.sh output from 5 minutes into the copy. What bugs me the most is the sar -d 1 600 info. Here are tails from both the old (same disk setup) and new systems.&lt;BR /&gt;&lt;BR /&gt;New system:&lt;BR /&gt;Average    c5t0d0    1.77    0.60       4      68    0.39    8.61&lt;BR /&gt;Average    c5t1d0    1.24    0.62       3      52    0.56    8.23&lt;BR /&gt;Average    c1t0d0   15.26    0.50     287    4534    0.00    0.64&lt;BR /&gt;Average    c1t1d0   32.17    0.50     285    4504    0.01    1.24&lt;BR /&gt;Average    c1t2d0   15.13    0.50     290    4548    0.00    0.62&lt;BR /&gt;Average    c2t0d0   26.94  393.47     385    6041   17.61    2.94&lt;BR /&gt;Average    c2t1d0   25.26  394.05     380    5941   18.22    2.88&lt;BR /&gt;Average    c2t2d0   24.18  397.63     371    5768   17.60    2.92&lt;BR /&gt;&lt;BR /&gt;Old system:&lt;BR /&gt;Average    c1t2d0    2.12    0.50       5      50    4.60    6.21&lt;BR /&gt;Average    c2t2d0    1.19    0.50       2      24    4.75    7.30&lt;BR /&gt;Average    c4t8d0   26.18   22.99     579    9190   11.62    5.99&lt;BR /&gt;Average   c4t10d0   19.82    0.61     438    6914    5.14    1.31&lt;BR /&gt;Average   c4t12d0   15.25    0.51     365    5780    5.06    1.17&lt;BR /&gt;Average    c5t8d0   37.48  139.12     592    9314  158.76   10.08&lt;BR /&gt;Average   c5t10d0   43.05  111.74     558    8771  126.57   10.34&lt;BR /&gt;Average   c5t12d0   43.25  123.32     563    8855  131.06   10.26&lt;BR /&gt;&lt;BR /&gt;Why is avque 3-4 times larger?&lt;BR /&gt;&lt;BR /&gt;This one is killing me; theoretically, the new disk system should be 2x faster, not slower.&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Danilo&lt;BR /&gt;</description>
      <pubDate>Mon, 13 Aug 2007 20:30:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/poor-disk-performance-for-cp/m-p/4054187#M304991</guid>
      <dc:creator>Danilo Mihailovic</dc:creator>
      <dc:date>2007-08-13T20:30:52Z</dc:date>
    </item>
    <item>
      <title>Re: Poor Disk performance for cp,</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/poor-disk-performance-for-cp/m-p/4054188#M304992</link>
      <description>The problem is that you really don't know what you are measuring because there are too many variables. Copying cooked files throws too many things into the mix. Is it filesystem performance? Is it buffer cache? Is it LVM or VxVM throughput? Is it disk throughput?&lt;BR /&gt;&lt;BR /&gt;You need to drop down to a lower level so that you are measuring no more than absolutely necessary at one time. Moreover, when you read from a cooked file and write to a cooked file, what are you actually measuring? You are going through two stacks of the items listed above.&lt;BR /&gt;&lt;BR /&gt;I would approach it like this:&lt;BR /&gt;1) timex dd if=/dev/rdsk/c2t1d0 bs=256k \&lt;BR /&gt;   count=2000 of=/dev/null # measures raw disk performance&lt;BR /&gt;2) timex dd if=/dev/dsk/c2t1d0 bs=256k \&lt;BR /&gt;   count=2000 of=/dev/null # measures "luke-warm" (buffered) disk performance&lt;BR /&gt;3) timex dd if=/dev/vg05/rlvol1 bs=256k \&lt;BR /&gt;   count=2000 of=/dev/null # raw LVM&lt;BR /&gt;4) timex dd if=/dev/vg05/lvol1 bs=256k \&lt;BR /&gt;   count=2000 of=/dev/null # "luke-warm" LVM&lt;BR /&gt;&lt;BR /&gt;For VxVM, use if=/dev/vx/rdsk/dgxxx/volnn and if=/dev/vx/dsk/dgxxx/volnn in steps 3-4.&lt;BR /&gt;&lt;BR /&gt;5) Create a 500 MiB cooked file and test it:&lt;BR /&gt;   timex dd if=/dev/zero bs=256k count=2000 \&lt;BR /&gt;   of=/aaa/bbb/myfile # cooked write test&lt;BR /&gt;6) timex dd if=/aaa/bbb/myfile bs=256k of=/dev/null # cooked read test&lt;BR /&gt;&lt;BR /&gt;Repeat the tests several times, average the results, and compare them to the old box; you should then have some idea where the bottleneck(s) lie.</description>
      <pubDate>Mon, 13 Aug 2007 21:10:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/poor-disk-performance-for-cp/m-p/4054188#M304992</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2007-08-13T21:10:47Z</dc:date>
    </item>
    <item>
      <title>Re: Poor Disk performance for cp,</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/poor-disk-performance-for-cp/m-p/4054189#M304993</link>
      <description>&lt;!--!*#--&gt;Clay, I attached the results from the tests you suggested; each test was run 3 times on the new box, and these look good. I ran these to measure performance when we first got the system. The problem appears when I try to use cp to copy about 17 GB of data. On the old system the same operation completes in 45% less time, on a bus half that speed.&lt;BR /&gt;&lt;BR /&gt;dd times for cooked 500 MB file&lt;BR /&gt;              New system      Old system&lt;BR /&gt;read (avg)       8s              5s&lt;BR /&gt;write (avg)     25s             22s&lt;BR /&gt;&lt;BR /&gt;But once I start the cp, the speed at the end on the new system averages 6-7 MB/s, while the old system is around 11-12 MB/s.&lt;BR /&gt;&lt;BR /&gt;One thing that is fast on the new system is Ignite tapes; they are written at 17-20 MB/s on an LTO-3 drive.</description>
      <pubDate>Tue, 14 Aug 2007 13:38:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/poor-disk-performance-for-cp/m-p/4054189#M304993</guid>
      <dc:creator>Danilo Mihailovic</dc:creator>
      <dc:date>2007-08-14T13:38:18Z</dc:date>
    </item>
    <item>
      <title>Re: Poor Disk performance for cp,</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/poor-disk-performance-for-cp/m-p/4054190#M304994</link>
      <description>Running those tests I mentioned only makes sense when you compare each step along the way on both the old and new boxes. I did notice that your buffer cache is working well: note the dramatic reduction in time between the first read or write of a file and subsequent operations. Anyway, you should run each of those steps on both boxes and look for significant differences. The other things I would look at are buffer cache tuning differences between the two boxes; also check the scsi_max_qdepth setting on each disk and compare the values between the old and new systems.&lt;BR /&gt;&lt;BR /&gt;e.g. scsictl -a /dev/rdsk/c1t5d0&lt;BR /&gt;&lt;BR /&gt;See man scsictl for details. If not overridden, the value defaults to the global value set by the kernel tunable scsi_max_qdepth.</description>
      <pubDate>Tue, 14 Aug 2007 14:23:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/poor-disk-performance-for-cp/m-p/4054190#M304994</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2007-08-14T14:23:58Z</dc:date>
    </item>
  </channel>
</rss>