Operating System - HP-UX

disk performance on hpux 11.11

 
IT Operations Unix Admi
Occasional Advisor

disk performance on hpux 11.11


Hi

We have a 9000/800 rp3440 running HP-UX 11.11, and during an Oracle import of a full 6.5 GB dump file we get the following numbers from iostat.
The disks are on an EMC DMX1000 in a RAID 5 configuration. The connection goes over 2 x 2 Gb fibre, one as an alternate path.

without SRDF link

bps     sps
17000   2000

with SRDF link

bps     sps
3800    450
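
(For reference, assuming a 5-second sample interval, figures like these can be collected while the import runs with:)

# sample all disks every 5 seconds; bps is kilobytes transferred per
# second, sps is seeks per second, msps is milliseconds per average seek
iostat 5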

I wonder what kind of numbers are feasible, or whether we are already at the top end for a standard hp9000/EMC configuration.


Paul
2 REPLIES
George Neill
Trusted Contributor

Re: disk performance on hpux 11.11

We also use EMC and SRDF. There are a number of factors that will affect your performance, including what other applications are using the shared disks the LUNs are built from. We found that the size of the I/O block has a significant effect on the performance of the SRDF copy.

Assuming /oradata/dmsp/fs002 is the mount point for a file system that is on disks that use SRDF, enter this command as root: vxtunefs /oradata/dmsp/fs002. You should see something like:

read_pref_io = 65536
read_nstream = 10
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 1
write_unit_io = 65536
pref_strength = 10
buf_breakup_size = 131072
discovered_direct_iosz = 262144
max_direct_iosz = 1048576
default_indir_size = 8192
qio_cache_enable = 0
max_diskq = 1048576
initial_extent_size = 1
max_seqio_extent_size = 2048
max_buf_data_size = 65536

Except yours will probably have read_pref_io = 8192. The SRDF link is affected by the number of I/O requests; it works more efficiently with fewer, larger I/O blocks. Do a man on vxtunefs and study it. A recommendation would be to issue the command: vxtunefs -o read_pref_io=65536 /oradata/dmsp/fs002, and do the same for write_pref_io.
This will in effect make each I/O block 8 times larger (65536 / 8192 = 8), thereby requiring 8 times fewer I/Os. Making this change will not cause an 8-fold increase in SRDF link throughput, but it will cause a noticeable increase.
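
A minimal sketch of the change and a quick check afterwards (the mount point is the example path from above; the note about /etc/vx/tunefstab is an assumption about how to persist the values and should be checked against the vxtunefs man page on your system):

# raise the preferred read and write I/O sizes from 8 KB to 64 KB
vxtunefs -o read_pref_io=65536 /oradata/dmsp/fs002
vxtunefs -o write_pref_io=65536 /oradata/dmsp/fs002

# confirm the new values took effect
vxtunefs /oradata/dmsp/fs002 | grep pref_io

# values set with -o last only until the file system is unmounted; to make
# them permanent, an entry for the underlying device can be added to
# /etc/vx/tunefstab (see the vxtunefs man page for the format)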
Steven E. Protter
Exalted Contributor

Re: disk performance on hpux 11.11

Shalom,

Note that Oracle should perform well on reads with your configuration. If you have a write-intensive environment, you may find that RAID 5 on the EMC leads to long I/O waits.
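
As a rough illustration of why (standard RAID 5 write-penalty arithmetic, not anything specific to the DMX):

random write on RAID 5 = read old data + read old parity
                       + write new data + write new parity
                       = 4 back-end I/Os per host write
random write on RAID 1 = 2 back-end I/Os (one per mirror)

So a write-heavy load puts roughly twice as many I/Os on the back-end disks under RAID 5 as it would on mirrored devices, which shows up as longer I/O wait.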

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com