disk performance on hpux 11.11
08-28-2006 09:20 PM
Hi
We have a 9000/800 rp3440 running HP-UX 11.11, and during an Oracle import of a full 6.5 GB dump file we get the following numbers with iostat.
The disks are on an EMC DMX1000 in a RAID 5 configuration. The connection goes over two 2 Gb fibre links, one configured as an alternate path.
Without SRDF link:
bps     sps
17000   2000
With SRDF link:
bps     sps
3800    450
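For reference, assuming the figures came from plain interval sampling: on HP-UX 11.11, iostat's bps column is KB transferred per second and sps is seeks per second, so the numbers above work out to roughly 17 MB/s without the SRDF link versus under 4 MB/s with it. A minimal way to capture such a sample while the import runs:
# sample all disks every 5 seconds; watch the bps and sps columns
iostat 5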
I wonder what kind of numbers are feasible, or whether we are already at the top for a standard HP 9000/EMC configuration.
Paul
2 REPLIES
08-30-2006 06:46 AM
Re: disk performance on hpux 11.11
We also use EMC and SRDF. A number of factors will affect your performance, including which other applications use the shared physical disks the LUNs are built from. We found that the size of the I/O block has a significant effect on the performance of the SRDF copy.
Assuming /oradata/dmsp/fs002 is the mount point for a file system on disks that use SRDF, enter this command as root: vxtunefs /oradata/dmsp/fs002. You should see something like:
read_pref_io = 65536
read_nstream = 10
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 1
write_unit_io = 65536
pref_strength = 10
buf_breakup_size = 131072
discovered_direct_iosz = 262144
max_direct_iosz = 1048576
default_indir_size = 8192
qio_cache_enable = 0
max_diskq = 1048576
initial_extent_size = 1
max_seqio_extent_size = 2048
max_buf_data_size = 65536
Except yours will probably have read_pref_io = 8192. The SRDF link is affected by the number of I/O requests; it works more efficiently with fewer, larger I/O blocks. Read the vxtunefs man page and study it. A recommendation would be to issue the command vxtunefs -o read_pref_io=65536 /oradata/dmsp/fs002, and do the same for write_pref_io.
This will in effect make each I/O block 8 times larger, requiring 8 times fewer I/Os. Making this change will not produce an 8-fold increase in SRDF link throughput, but it will produce a noticeable increase.
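Putting that together, a minimal sketch as root (the mount point is the example from above; the /etc/vx/tunefstab step is the usual VxFS mechanism for making values survive a remount, and /dev/vg01/lvol2 is only a placeholder for your own volume, so verify both against your OnlineJFS version):
# check the current values
vxtunefs /oradata/dmsp/fs002
# raise the preferred I/O sizes to 64 KB (takes effect immediately, lost at remount)
vxtunefs -o read_pref_io=65536 /oradata/dmsp/fs002
vxtunefs -o write_pref_io=65536 /oradata/dmsp/fs002
# to persist, add a line like this to /etc/vx/tunefstab (placeholder device name):
#   /dev/vg01/lvol2 read_pref_io=65536,write_pref_io=65536
# then apply the file with: vxtunefs -s -f /etc/vx/tunefstab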
08-30-2006 07:40 AM
Re: disk performance on hpux 11.11
Shalom,
Note that Oracle should perform well on reads with your configuration. If you have a write-intensive environment, you may find that RAID 5 on the EMC leads to long I/O waits.
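As a rough worked example of why (the standard RAID 5 small-write penalty, not a measurement from this array): each small host write costs four back-end I/Os, read old data + read old parity + write new data + write new parity. If the 450 sps seen above were mostly small writes, that would be on the order of 4 x 450 = 1800 back-end I/Os per second against the RAID 5 group.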
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com