Operating System - HP-UX

Improving Disk Read Performance with blocksize=256k

 
Ken Law
Occasional Contributor

Improving Disk Read Performance with blocksize=256k

Hi,
I found something really bizarre with my filesystem when I ran some dd tests on a one-gigabyte file on a disk subsystem that is Fibre Channel attached to my HP rp7400 8-way running HP-UX 11i. The dd read test was six times faster with a block size of 256k-1 (262143 bytes) or less than with a block size of exactly 256k (262144 bytes). When I repeated the same tests on a local SCSI-3 disk, the difference was only 1.3 times. Are there any system parameters I can change to improve read performance with bs=256k? Any help is appreciated. Thanks.

bigcat:/ 1253# umount /essc1r6
bigcat:/ 1254# mount /dev/ess/c1r6 /essc1r6
bigcat:/ 1255# timex dd if=/essc1r6/1gbfile of=/dev/null bs=262143
4096+1 records in
4096+1 records out

real 10.19
user 0.02
sys 9.65

bigcat:/ 1256# umount /essc1r6
bigcat:/ 1257# mount /dev/ess/c1r6 /essc1r6
bigcat:/ 1258# timex dd if=/essc1r6/1gbfile of=/dev/null bs=262144
4096+0 records in
4096+0 records out

real 1:02.07
user 0.02
sys 5.39
bigcat:/ 1259# df -g /essc1r6 | grep fragment
8192 file system block size 1024 fragment size
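
One place to look for such a parameter (a hedged suggestion; the threshold is the usual VxFS default, not verified on this system): HP-UX VxFS turns reads at or above the discovered_direct_iosz tunable, 256 KB by default, into direct I/O that bypasses the buffer cache, which would match a sharp slowdown at exactly bs=262144. The vxtunefs command can inspect and adjust this per mounted filesystem; the value below is illustrative only:

# Print the VxFS tunables for this filesystem; look for
# discovered_direct_iosz, read_pref_io and read_nahead
vxtunefs -p /essc1r6

# Illustrative only: raise the direct-I/O threshold above 256k so a
# bs=262144 read stays in the buffer cache (setting tunables may
# require the OnlineJFS product)
vxtunefs -o discovered_direct_iosz=524288 /essc1r6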


7 REPLIES
Steven E. Protter
Exalted Contributor

Re: Improving Disk Read Performance with blocksize=256k

Real-world results depend on what kind of data you have; Oracle might like this setup or it might not.

For reading small files, a 256k block size is horribly inefficient. Whether this is a good idea really depends on the kind of work your system does in real life.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
A. Clay Stephenson
Acclaimed Contributor

Re: Improving Disk Read Performance with blocksize=256k

Note that in your second test there was an extremely high probability that a very large fraction of your data was already cached; that instantly makes your numbers suspect. Also note that sequential reads are not the norm for most I/O, so your tests may not be of much value. In general, VxFS filesystems don't care about block sizes because the filesystem is extent-based. I find that 64k operations tend to be optimal for most applications, and VxFS filesystems tend to write in roughly those chunks regardless of block or fragment size. You might play with the disk_sort_seconds kernel tunable to better optimize a mixture of sequential and random I/O.
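
A minimal sketch of both points, assuming the tunable name is roughly as given above (verify it against kmtune's listing on your release, since the spelling may differ):

# Unmount/remount so the next timing run starts with a cold cache
umount /essc1r6
mount /dev/ess/c1r6 /essc1r6
timex dd if=/essc1r6/1gbfile of=/dev/null bs=65536

# Look for the disk-sort tunable in the kernel parameter listing
kmtune -l | grep -i sort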
If it ain't broke, I can fix that.
Bill Hassell
Honored Contributor

Re: Improving Disk Read Performance with blocksize=256k

While the filesystem layout defaults to 8k, this has little to do with physical I/O. The dd command induces an artificial task, that of reading large blocks of data. As mentioned, reading from a mount point will use the buffer cache, so subsequent test runs will be much faster.

The kernel has a rather complex method of building physical I/O, which is a significant topic in the advanced HP-UX internals course material. The kernel tries to coalesce I/O into 128k chunks when possible (sequential data), but there are a number of non-sequential tasks that can't be optimized. So while dd shows significant improvement with bs=128k or bs=256k, these values are meaningless to a database that reads and writes 12 KB records randomly scattered throughout the disk.
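
A hedged sketch of how to see this for yourself: a cold-cache sweep over block sizes, reusing the mount point from the original post (plain POSIX sh; the timings on any other system will of course differ):

# Time the same 1 GB read at several block sizes, remounting first
# so the buffer cache never serves the data back
for bs in 8192 65536 131072 262143 262144
do
    umount /essc1r6
    mount /dev/ess/c1r6 /essc1r6
    echo "bs=$bs"
    timex dd if=/essc1r6/1gbfile of=/dev/null bs=$bs
done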

You'll see a significant random-access performance gain by using several disks (more than two) in striped volumes.
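
A minimal LVM sketch of that layout, assuming a hypothetical volume group vg01 with at least four disks (the stripe count, stripe size, volume size and names are illustrative values, not recommendations):

# Stripe a 4096 MB logical volume across 4 disks in 64 KB chunks
lvcreate -i 4 -I 64 -L 4096 -n lvstriped /dev/vg01

# Put a VxFS filesystem on it and mount it
newfs -F vxfs /dev/vg01/rlvstriped
mkdir -p /striped
mount /dev/vg01/lvstriped /striped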


Bill Hassell, sysadmin
Khalid A. Al-Tayaran
Valued Contributor

Re: Improving Disk Read Performance with blocksize=256k



Hi,

Some applications require a certain block size. Oracle, for example, works with an 8K block size (for SAP R/3 applications).
Eugeny Brychkov
Honored Contributor

Re: Improving Disk Read Performance with blocksize=256k

What SAN storage system is attached to the server? Most probably you need to tune it, or the OS access methods, for performance.
Eugeny
Tim D Fulford
Honored Contributor

Re: Improving Disk Read Performance with blocksize=256k

Unless your application does lots of dd's or sequential scans, you are unlikely to see this speed-up, if any at all.

What kind of storage do you have? If it is a SAN/intelligent array (say a VA7410, etc.), the data will be cached on the array, so the second pass will be quicker.

There is actually a heated debate where I work... we use a 4K stripe, and some say that increasing it to 16K or 64K will improve performance while others say it will destroy performance. Doing dd tests will definitely show 64K beating 4K, but the proof of the pudding is how the application/users respond. If it ain't broke, don't fix it, because the kind of changes needed to optimise your system for 256k would be quite extensive.

If you need more proof that the disks need tuning, then:
o Look at "sar -d 60 5" results. If the service times are high, there may be problems (it's all relative: 1-3 ms excellent, 3-6 good, 6-10 OK, 10+ potential problems).
o Also look at your average block size per disk. You can do this using MeasureWare and running per-disk extracts:
extract -xt -v -d -r -b -e

This will create a file called xfrdDISK.asc; look at the Phys IO/s and Phys kB/s figures for an idea of the average block size.

We have an average block size of 2.5k, which implies a 4k stripe is about right.
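
The arithmetic behind that figure is just throughput divided by request rate; a trivial sketch, with the Phys kB/s and Phys IO/s numbers typed in by hand (500 and 200 are made-up values that happen to give 2.5k):

# Average I/O size = (Phys kB/s) / (Phys IO/s)
echo "500 200" | awk '{ printf "avg block size = %.1f kB\n", $1 / $2 }'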

Regards

Tim
-
doug mielke
Respected Contributor

Re: Improving Disk Read Performance with blocksize=256k

I'm suspicious of sar -d reports from a SAN or other advanced drive array, and of no-load benchmarking.
If writing async, the access time would reflect the time it took the SAN's cache to respond. Real-world response would be influenced by competition for that cache space, and so more closely related to physical disk I/O.