<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Improving Disk Read Performance with blocksize=256k in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037772#M633441</link>
    <description>I'm suspicious of sar -d reports from a SAN or other advanced drive array, or of no-load benchmarking.&lt;BR /&gt; If writing async, the access time would reflect the time it took the SAN's cache to respond. Real-world response would be influenced by the competition for cache space, and more closely related to physical disk I/O.</description>
    <pubDate>Fri, 01 Aug 2003 12:43:26 GMT</pubDate>
    <dc:creator>doug mielke</dc:creator>
    <dc:date>2003-08-01T12:43:26Z</dc:date>
    <item>
      <title>Improving Disk Read Performance with blocksize=256k</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037765#M633434</link>
      <description>Hi,&lt;BR /&gt;   I found something really bizarre with my filesystem when I did some dd tests on a one-gigabyte file on a disk subsystem that was Fibre Channel attached to my HP rp7400 8-way running HP-UX 11i. The dd read test was 6 times faster with a bs of 256k-1 or less than with a bs of 256k. I re-did the same tests on a local SCSI-3 disk; the difference was 1.3 times. Are there any system parameters I can change to improve read performance with bs=256k? Any help is appreciated. Thanks.&lt;BR /&gt;&lt;BR /&gt;bigcat:/ 1253# umount /essc1r6                                    &lt;BR /&gt;bigcat:/ 1254# mount /dev/ess/c1r6 /essc1r6                       &lt;BR /&gt;bigcat:/ 1255# timex dd if=/essc1r6/1gbfile of=/dev/null bs=262143&lt;BR /&gt;4096+1 records in&lt;BR /&gt;4096+1 records out&lt;BR /&gt;&lt;BR /&gt;real       10.19&lt;BR /&gt;user        0.02&lt;BR /&gt;sys         9.65&lt;BR /&gt;&lt;BR /&gt;bigcat:/ 1256# umount /essc1r6                                    &lt;BR /&gt;bigcat:/ 1257# mount /dev/ess/c1r6 /essc1r6                       &lt;BR /&gt;bigcat:/ 1258# timex dd if=/essc1r6/1gbfile of=/dev/null bs=262144&lt;BR /&gt;4096+0 records in&lt;BR /&gt;4096+0 records out&lt;BR /&gt;&lt;BR /&gt;real     1:02.07&lt;BR /&gt;user        0.02&lt;BR /&gt;sys         5.39&lt;BR /&gt;bigcat:/ 1259# df -g /essc1r6 | grep fragment&lt;BR /&gt;           8192 file system block size            1024 fragment size&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 31 Jul 2003 00:03:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037765#M633434</guid>
      <dc:creator>Ken Law</dc:creator>
      <dc:date>2003-07-31T00:03:37Z</dc:date>
    </item>
    <item>
      <title>Re: Improving Disk Read Performance with blocksize=256k</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037766#M633435</link>
      <description>Real-world tests depend on what kind of data you have in the real world. Oracle might like this setup, or it might not.&lt;BR /&gt;&lt;BR /&gt;For reading small files, this is horribly inefficient. Whether it is a good idea really depends on the kind of work your system does in real life.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 31 Jul 2003 00:33:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037766#M633435</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-07-31T00:33:26Z</dc:date>
    </item>
    <item>
      <title>Re: Improving Disk Read Performance with blocksize=256k</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037767#M633436</link>
      <description>Note that in your 2nd test there was an extremely high probability that a very large fraction of your data was already cached. That instantly makes your data suspect. Also note that sequential reads are not the norm in most I/O, so your tests may not be of much value. In general, vxfs filesystems don't care about block sizes because the filesystem is extent-based. I find that 64k operations tend to be optimal for most applications, and vxfs filesystems tend to write in about those chunks regardless of block or fragment size. You might play with the disk_sort_seconds kernel tunable to better optimize a mixture of sequential and random I/O.</description>
      <pubDate>Thu, 31 Jul 2003 00:56:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037767#M633436</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2003-07-31T00:56:01Z</dc:date>
    </item>
    <item>
      <title>Re: Improving Disk Read Performance with blocksize=256k</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037768#M633437</link>
      <description>While the filesystem layout defaults to 8k, this has little to do with physical I/O. The dd command induces an artificial task, that of reading large blocks of data. As mentioned, reading from a mountpoint will use the buffer cache, so subsequent test runs will be much faster.&lt;BR /&gt;&lt;BR /&gt;The kernel has a rather complex method of creating physical I/O, which is a significant topic in advanced HP-UX internals course material. The kernel tries to maximize I/O into 128k chunks when possible (sequential data), but there are a number of non-sequential tasks that can't be optimized. So while dd shows significant improvement with bs=128k or bs=256k, these values are meaningless to a database that reads and writes 12 KB records randomly scattered throughout the disk.&lt;BR /&gt;&lt;BR /&gt;You'll see a significant random-access performance improvement by using several disks (more than 2) in striped volumes.</description>
      <pubDate>Thu, 31 Jul 2003 01:13:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037768#M633437</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2003-07-31T01:13:10Z</dc:date>
    </item>
    <item>
      <title>Re: Improving Disk Read Performance with blocksize=256k</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037769#M633438</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Some applications require a certain block size. Oracle, for example, works with an 8K block size (for SAP R/3 applications).</description>
      <pubDate>Thu, 31 Jul 2003 04:55:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037769#M633438</guid>
      <dc:creator>Khalid A. Al-Tayaran</dc:creator>
      <dc:date>2003-07-31T04:55:50Z</dc:date>
    </item>
    <item>
      <title>Re: Improving Disk Read Performance with blocksize=256k</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037770#M633439</link>
      <description>What SAN storage system does the server have attached? Most probably you need to tune it, or the OS access methods, for performance.&lt;BR /&gt;Eugeny</description>
      <pubDate>Thu, 31 Jul 2003 05:12:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037770#M633439</guid>
      <dc:creator>Eugeny Brychkov</dc:creator>
      <dc:date>2003-07-31T05:12:00Z</dc:date>
    </item>
    <item>
      <title>Re: Improving Disk Read Performance with blocksize=256k</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037771#M633440</link>
      <description>Unless your application does lots of dd's or sequential scans, you are unlikely to get this speed-up, if any at all.&lt;BR /&gt;&lt;BR /&gt;What kind of storage do you have? If it is a SAN/intelligent array (say a VA7410, etc.), the data will be cached on the array, so the second pass will be quicker.&lt;BR /&gt;&lt;BR /&gt;There is actually a heated debate where I work... we use a 4K stripe &amp;amp; some say that increasing it to 16K or 64k will improve performance, and some say it will destroy performance. Doing dd tests will definitely show 64k is better than 4k, but the proof of the pudding is how the application/users respond. If it ain't broke, don't fix it, because the amount of work needed to optimise your system for 256k would be quite a lot.&lt;BR /&gt;&lt;BR /&gt;If you need more proof that the disks need tuning, then&lt;BR /&gt; o look at "sar -d 60 5" results. Check whether the service times are high (all relative: 1-3 excellent, 3-6 good, 6-10 OK, 10+ there may be problems).&lt;BR /&gt; o Also look at your average block size, or block size per disk. You can do this using MeasureWare &amp;amp; do per-disk extracts:&lt;BR /&gt;extract -xt -v -d -r &lt;REP_FILE&gt; -b &lt;MM&gt; -e &lt;MM&gt;&lt;BR /&gt;&lt;BR /&gt;This will create a file called xfrdDISK.asc; look at the Phys IO/s &amp;amp; Phys kB/s for an idea of the average block size.&lt;BR /&gt;&lt;BR /&gt;We have an average block size of 2.5k, which implies a 4k stripe is about right.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Fri, 01 Aug 2003 09:17:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037771#M633440</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2003-08-01T09:17:38Z</dc:date>
    </item>
    <item>
      <title>Re: Improving Disk Read Performance with blocksize=256k</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037772#M633441</link>
      <description>I'm suspicious of sar -d reports from a SAN or other advanced drive array, or of no-load benchmarking.&lt;BR /&gt; If writing async, the access time would reflect the time it took the SAN's cache to respond. Real-world response would be influenced by the competition for cache space, and more closely related to physical disk I/O.</description>
      <pubDate>Fri, 01 Aug 2003 12:43:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-disk-read-performance-with-blocksize-256k/m-p/3037772#M633441</guid>
      <dc:creator>doug mielke</dc:creator>
      <dc:date>2003-08-01T12:43:26Z</dc:date>
    </item>
  </channel>
</rss>

