Operating System - HP-UX

JR_3
Advisor

vxfs direct IO question

Question: How can I adjust the direct I/O size of vxfs?

I ran dd to read a 500MB file and dump it to /dev/null, with bs=64K, 128K, 256K, and 512K:

#dd if=/DB2/1GB of=/dev/null bs=65535

I monitored the I/O statistics using "sar -d" and glance.
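For reference, this is roughly how the average physical I/O size falls out of the sar output (the 5-second interval and sample count below are just arbitrary choices):

#sar -d 5 6

sar -d reports r+w/s (I/O requests per second) and blks/s (512-byte blocks per second) for each device, so the average physical I/O size is approximately (blks/s ÷ r+w/s) × 512 bytes.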

The results are very interesting:
- with bs=64K, 128K, or (256K-1): it takes around 6 seconds, and the average physical I/O size is ~30KB
- with bs=256K or 512K: it takes 34 seconds, and the average physical I/O size is ~8KB

Why do bs=64K and 128K get much better performance than bs=256K and 512K?

My guess is that when bs >= 256KB, vxfs switches to direct I/O (discovered_direct_iosz = 262144). But why is each direct I/O only 8KB?

I also ran dd against the raw device with bs=512K, and the result is good: ~5 seconds, with ~150KB per physical I/O.
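For completeness, the raw-device test was just dd against the character device of the logical volume; the device path below is only a placeholder, not the actual device I used:

#dd if=/dev/vg01/rlvol1 of=/dev/null bs=512k    (placeholder device path)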



#vxtunefs -p /DB2
Filesystem i/o parameters for /DB2
read_pref_io = 65536
read_nstream = 1
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 1
write_unit_io = 65536
pref_strength = 10
buf_breakup_size = 131072
discovered_direct_iosz = 262144
max_direct_iosz = 1048576
default_indir_size = 8192
qio_cache_enable = 0
max_diskq = 1048576
initial_extent_size = 1
max_seqio_extent_size = 2048
max_buf_data_size = 8192

Steven E. Protter
Exalted Contributor

Re: vxfs direct IO question

Shalom,

A few ideas:

1) You don't say your OS, but check kmtune/kctune, and maybe look in SAM to see whether there is a kernel parameter for this; SAM provides kernel documentation.
2) You may be able to influence this with the PE (physical extent) size, the -s parameter of vgcreate, if you are willing to re-create the volume group (rough examples below).
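Rough examples of both (the device name and PE size below are placeholders only, adjust to your configuration):

#kmtune -l
(lists the current kernel tunable parameters and their values)

#vgcreate -s 16 /dev/vg01 /dev/dsk/c2t1d0
(re-creates the volume group with a 16MB physical extent size; -s takes the PE size in MB)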

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
JR_3
Advisor

Re: vxfs direct IO question


My OS is HP-UX 11i v1.

Actually, I have an Oracle database server (Oracle9iR2 / rp3440 / HP-UX 11i v1), and I/O performance is poor while doing an RMAN incremental backup:
- from Oracle statspack, physical reads are ~20/sec
- from glance, logical reads are ~20 reads/s, physical reads are ~2500 reads/s, and the average size per physical read is 8KB

It seems that when Oracle requests a large read (~1MB), HP-UX/vxfs breaks the request into ~125 physical reads of 8KB each, and of course performance is poor.
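(As a sanity check on those numbers: 2500 physical reads/s ÷ 20 logical reads/s ≈ 125 physical reads per logical read, and 125 × 8KB ≈ 1MB, which matches a ~1MB Oracle multiblock read being split into 8KB pieces.)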

JR_3
Advisor

Re: vxfs direct IO question


I tried tuning "discovered_direct_iosz":

### discovered_direct_iosz = 256K
dd if=/DB2/tt/1GB of=/dev/null bs=600k
Result: 30 seconds, 8KB per physical read

### discovered_direct_iosz = 1024K
The same dd command gives:
5 seconds, 48KB per physical read
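The tunable can be changed on the mounted filesystem with vxtunefs -o, along these lines (it takes effect immediately but does not persist across a remount):

#vxtunefs -o discovered_direct_iosz=1048576 /DB2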

This shows that "discovered_direct_iosz" is the key factor, but I still don't know how to tune the direct I/O size!
JR_3
Advisor

Re: vxfs direct IO question


"max_buf_data_size" seems the answer

I set max_buf_data_size = 65536, and the performance improved for both small bs (64K, 128K) and large bs (512K).
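For reference, the value can be changed on the mounted filesystem with vxtunefs -o, and put into /etc/vx/tunefstab so it is re-applied at mount time (the device path in the tunefstab line below is only a placeholder for the volume behind /DB2):

#vxtunefs -o max_buf_data_size=65536 /DB2

/etc/vx/tunefstab:
/dev/vg01/lvol1 max_buf_data_size=65536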

Does anybody know whether there is any disadvantage to raising max_buf_data_size from 8K to 64K?