
wio % high

 
SOLVED
Jane-Marie Smith
Occasional Advisor

wio % high

I have just installed an L1000 with HP-UX 11.
The output from sar shows that the wait I/O (%wio) is high.

sar -u samples:
            %usr  %sys  %wio  %idle
Average       10     9    55     25

swapinfo -mta:
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev         512      73     439   14%       0       -    1  /dev/vg00/lvol2
dev         512      68     444   13%       0       -    1  /dev/vg00/lvol9
dev         512      72     440   14%       0       -    1  /dev/vg00/lvol10
reserve       -     571    -571
memory      725     599     126   83%
total      2261    1383     878   61%       -       0    -

Is there a kernel parameter that needs changing?
Thank you
Jane

Just another day
10 REPLIES
Praveen Bezawada
Respected Contributor
Solution

Re: wio % high

Hi

What are the values for %rcache, %wcache, file-sz, proc-sz, and inod-sz?
You can use the commands

sar -o sarfile <interval> <count>
sar -Af sarfile > sar.report
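Once the sar data has been captured as above, the %wio figure can be pulled out of the report non-interactively. A minimal sketch, assuming the HP-UX sar -u column order (%usr %sys %wio %idle); the sample figures and file path below are made up for illustration:

```shell
# Hypothetical sar -u style report; on HP-UX the columns are
# time %usr %sys %wio %idle.
cat > /tmp/sar_u.txt <<'EOF'
12:00:01    %usr    %sys    %wio   %idle
12:05:01      12       8      52      28
12:10:01       9      10      58      23
Average       10       9      55      25
EOF

# Print the %wio value (4th field) from the Average line.
awk '/^Average/ { print $4 }' /tmp/sar_u.txt
```

Anything sustained above roughly 20-30% here usually means processes are sitting idle waiting on disk.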

...BPK...
Jane-Marie Smith
Occasional Advisor

Re: wio % high

Hi

I have attached a copy of the sar report.

thanks

Jane
Just another day
Praveen Bezawada
Respected Contributor

Re: wio % high

Hi
What I can see from the sar report is that for the disks
c4t1d0
c5t1d0
c4t2d0
c5t2d0
c2t2d0
the %busy is continuously above 60%, which means these disks have become a bottleneck.
You could move one or more files or filesystems from these disks to a less heavily used disk. That should reduce the load on them.
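To spot disks like these automatically, the %busy column of a sar -d style report can be filtered with awk. A minimal sketch; the device names, figures, and the 60% threshold are illustrative, not taken from the attached report:

```shell
# Hypothetical sar -d style averages; on HP-UX the columns are
# device %busy avque r+w/s blks/s avwait avserv.
cat > /tmp/sar_d.txt <<'EOF'
Average   c4t1d0    72    1.2   45   380   5.0   12.1
Average   c3t0d0    18    0.5   10    80   1.0    6.2
Average   c5t1d0    65    1.0   40   350   4.1   11.8
EOF

# Flag any disk whose %busy (3rd field) exceeds 60.
awk '$3 > 60 { print $2, $3 "% busy" }' /tmp/sar_d.txt
```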

...BPK...
Jane-Marie Smith
Occasional Advisor

Re: wio % high

Thanks for that, Praveen. I will investigate further and see what I can move.

thanks again for your help.
Just another day
Bill Hassell
Honored Contributor

Re: wio % high

Something you might want to do before moving physical disks around is to identify the logical volumes that are the busiest and then the files that are the busiest on the lvol. That way, you can see about just moving a directory (if possible) or creating a symlink to another volume so the busy files are on another channel.

About the only way to find busy lvols and files is with Glance. Glance will show disk I/O by hardware channel, mounted filesystem, or lvol. Glance can also show you the program(s) that are performing the most I/O.

Once you identify a really busy program, use Glance's ability to list all the open files for a single program and browse through the list looking for files where the offset jumps every time you refresh Glance. These are likely candidates as the busy files. See what it takes to move the file(s) to other locations. Repeat for other busy programs. Glance is available as a free trial version on your Application CDs (usually #2 CD).


Bill Hassell, sysadmin
Neale Machin
Advisor

Re: wio % high

Are you running a database? If so, check the tablespace and index layout within the VGs and LVs.
Just cos I look after Unix Boxes doesnt mean I wear sandals
Jane-Marie Smith
Occasional Advisor

Re: wio % high

Hi,
Yes, these disks do have a database on them, and it is only one filesystem, striped, with a three-way mirror.

I have found the problem. The users are running 'end of year' reports during the day instead of using the overnight batch job facility.

Thanks to all who helped.

JMS
Just another day
Dan Rosen
Frequent Advisor

Re: wio % high

While you have found the most common error, I have been working on resolving this as well. My WIO floats around a 40% average every day.

I believe I have the database and system tuned as best I can, given the requirements the database places on the kernel (it wants some weird values, even though it doesn't come close to utilizing them).

Anywho... with a %wio over 40% and a belief that the only fix is a physical upgrade, which direction would you go: additional memory, or additional hard drives for the AutoRAID?

Current stats:
512 MB RAM
5 x 9 GB 10K rpm drives in HP AutoRAID
Database size: 3+ GB
Progress database, if that matters (if anyone recognizes it...!)

TIA
Paul McCleary
Honored Contributor

Re: wio % high

Hi,

I know you say your app/database are tuned as well as they can be, but is the AutoRAID set up correctly? This can cause a lot of problems, especially high WIO.

There is a lot of debate over the best way to set up an AutoRAID. An effective, performant way to set it up, if you have 2 controllers attached via 2 HBAs and require 2 VGs (vg00 and vgapp), is to create 4 LUNs, each half the total size you require (2 for each VG). Each LUN will have 2 paths. Create each VG using the primary paths of its 2 LUNs, e.g. c3t6d0 and c2t5d1.

Then vgextend the alternate paths to the LUNs, e.g. c2t5d0 and c3t6d1.

Then create your logical volumes by striping across the LUNs, e.g. lvcreate -i 2 -I 64 -L ?? -n lvol1 /dev/vg00. Set up your filesystems, then perform the same procedure for your second VG.

This method ensures that you utilise both channels for your data, giving you a maximum throughput of 20 MB/s on each path.
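The steps above might look like the following on an HP-UX 11 host. This is a sketch only, not a tested procedure: the device paths, VG name, group-file minor number, and lvol size are example values, and the commands can only actually run on a real HP-UX/AutoRAID system.

```shell
# Sketch of the two-path AutoRAID layout described above (HP-UX 11 LVM).
# All device names and sizes are examples.

# Prepare the primary path of each LUN as a physical volume.
pvcreate /dev/rdsk/c3t6d0
pvcreate /dev/rdsk/c2t5d1

# Create the VG on the primary paths.
mkdir /dev/vgapp
mknod /dev/vgapp/group c 64 0x010000     # example minor number
vgcreate /dev/vgapp /dev/dsk/c3t6d0 /dev/dsk/c2t5d1

# Add the alternate (second controller) path for each LUN.
vgextend /dev/vgapp /dev/dsk/c2t5d0 /dev/dsk/c3t6d1

# Stripe an lvol across the two LUNs: 2 stripes, 64 KB stripe size,
# 1024 MB size (example).
lvcreate -i 2 -I 64 -L 1024 -n lvol1 /dev/vgapp
```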

The AutoRAID uses a working set, which ensures the most regularly used data sits in the fastest storage, be it cache or RAID 0/1. As you use more of the space, data is migrated to RAID 5 to provide increased capacity; the default ensures a minimum of 10% remains RAID 0/1. Unallocated space within your LUNs is used as free space for RAID 0/1. However, once you have used space in the LUNs it will not be used for RAID 0/1 again, even if you delete the data, because of how the data is stored.

One thing that does concern me is that you only have 5 disks; the AutoRAID performs at its optimum with all 12 disks, so I would definitely recommend adding more. As for memory, that is a separate issue from WIO: check how your buffer cache is behaving and whether any processes are waiting on memory, but I don't think it will help with this issue.

Hope this helps, Paul