04-25-2007 05:13 PM
vxfs direct IO question
I used the dd command to read a 500MB file and dump it to /dev/null, with bs=64K, 128K, 256K, and 512K:
#dd if=/DB2/1GB of=/dev/null bs=65535
I monitored I/O statistics using "sar -d" and glance.
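A minimal sketch of the monitoring commands (the sar interval and count are just example values):
# sar -d 5 12
# glance
The average physical I/O size can be estimated from the sar output as blks/s divided by r+w/s, times 512 bytes per block.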
The results are very interesting:
- with bs=64K, 128K, or (256K-1): it takes around 6 seconds, and the average physical I/O size is ~30KB
- with bs=256K or 512K: it takes 34 seconds, and the average physical I/O size is ~8KB
Why do bs=64K and 128K get so much better performance than bs=256K and 512K?
I guess that once bs >= 256KB, vxfs switches to direct I/O (discovered_direct_iosz = 262144). But why is each direct I/O only 8KB?
I also ran a dd read from the raw device with bs=512K; the result is good: ~5 seconds, with ~150KB per physical I/O.
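The raw-device comparison was along these lines (the raw logical volume path is hypothetical; use the character device that backs the filesystem):
# dd if=/dev/vg01/rlvol_db2 of=/dev/null bs=512k count=1000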
#vxtunefs -p /DB2
Filesystem i/o parameters for /DB2
read_pref_io = 65536
read_nstream = 1
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 1
write_unit_io = 65536
pref_strength = 10
buf_breakup_size = 131072
discovered_direct_iosz = 262144
max_direct_iosz = 1048576
default_indir_size = 8192
qio_cache_enable = 0
max_diskq = 1048576
initial_extent_size = 1
max_seqio_extent_size = 2048
max_buf_data_size = 8192
04-25-2007 06:29 PM
Re: vxfs direct IO question
A few ideas:
1) You don't say which OS version you're on, but check kmtune/kctune, and maybe look in SAM to see if there is a kernel parameter for this; SAM provides kernel documentation.
2) You may be able to influence this with the physical extent size (the -s parameter of vgcreate) if you are willing to re-create the volume group; see the sketch below.
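A rough sketch of both ideas. The volume group and disk names are hypothetical, the vgcreate line assumes the usual pvcreate and group-file setup has already been done, and re-creating a volume group destroys its data:
# kmtune | more
# vgcreate -s 16 /dev/vg01 /dev/dsk/c2t1d0
Here -s sets the physical extent size in MB (16 is just an example value).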
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
04-25-2007 06:49 PM
Re: vxfs direct IO question
My OS is HP-UX 11i v1.
Actually, I have an Oracle database server (Oracle9iR2 / rp3440 / HP-UX 11i v1), and the I/O performance is poor while doing an RMAN incremental backup:
- from Oracle Statspack, physical reads ~ 20/sec
- from glance, logical reads ~ 20 reads/s, physical reads ~ 2500 reads/s, and the average size per physical read = 8KB
It seems that when Oracle requests a large read (~1MB), HP-UX/vxfs breaks the request into ~125 physical reads of 8KB each, which of course gives poor performance.
04-25-2007 07:31 PM
Re: vxfs direct IO question
I tried tuning "discovered_direct_iosz":
### discovered_direct_iosz = 256K
dd if=/DB2/tt/1GB of=/dev/null bs=600k
Results: 30 seconds, 8KB per physical read
### discovered_direct_iosz = 1024K
The same dd command got:
5 seconds, 48KB per physical read
This shows that "discovered_direct_iosz" is the key factor, but I still don't know how to tune the direct I/O size!
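For reference, these tunables can be changed on a mounted VxFS filesystem with vxtunefs -o and persisted in /etc/vx/tunefstab, which is read at mount time. A sketch of the two test settings above (the device path in the tunefstab line is hypothetical):
# vxtunefs -o discovered_direct_iosz=262144 /DB2
# vxtunefs -o discovered_direct_iosz=1048576 /DB2
# echo "/dev/vg01/lvol_db2 discovered_direct_iosz=1048576" >> /etc/vx/tunefstab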
04-25-2007 08:16 PM
Re: vxfs direct IO question
"max_buf_data_size" seems the answer
set max_buf_data_size = 65536, and the performance improved for both small bs ( 64K, 128K ) and large bs ( 512K ).
Do anybody know is there any disadvantage to set max_buf_data_size from 8K to 64K ?
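For anyone trying the same change, a sketch of how it can be applied to the mounted filesystem (or added to the /etc/vx/tunefstab entry shown earlier to persist it):
# vxtunefs -o max_buf_data_size=65536 /DB2
As far as I know, the only documented values for max_buf_data_size are 8192 and 65536; it is the largest buffer VxFS allocates for buffered file data, so it caps the size of buffered physical reads.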