any problem with the high wio%
01-14-2003 07:17 AM
Here is my sar output (averages):
%usr %sys %wio %idle
Average 35 8 55 2
runq-sz %runocc swpq-sz %swpocc
Average 1.2 6 0.0 0
bread/s lread/s %rcache bwrit/s lwrit/s %wcache pread/s pwrit/s
Average 0 4521 100 7 44 84 3519 293
swpin/s bswin/s swpot/s bswot/s pswch/s
Average 0.00 0.0 0.00 0.0 14173
scall/s sread/s swrit/s fork/s exec/s rchar/s wchar/s
Average 35173 8372 3540 13.08 11.67 9280218 29881
iget/s namei/s dirbk/s
Average 4 557 0
rawch/s canch/s outch/s rcvin/s xmtin/s mdmin/s
Average 0 0 0 0 0 0
msg/s sema/s
Average 1.12 973.43
Due to the limited space in the question window, please find a complete sar log attached. Please note that the I/O of all disks is very good. This is a typical picture:
device %busy avque r+w/s blks/s avwait avserv
Average c13t4d1 24.92 0.50 36 632 5.00 8.98
Can anyone help me find out why %wio is so high while the I/O of the disks is good, %rcache is 100%, and %wcache is 84%?
Thank you in advance. Do I need to provide any additional logs, for example vmstat, iostat, or top?
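(For reference, the figures above come from the standard sar collectors, roughly these invocations; the 5-second/60-sample intervals are arbitrary:)
sar -u 5 60   # CPU: %usr %sys %wio %idle
sar -q 5 60   # run/swap queue sizes and occupancy
sar -b 5 60   # buffer cache activity and hit rates (%rcache, %wcache)
sar -d 5 60   # per-disk: %busy avque r+w/s blks/s avwait avserv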
01-14-2003 07:40 AM
Re: any problem with the high wio%
A rough rule of thumb for %wio:
<5 is perfect
5-20 is busy but not excessive
>20 is completely I/O bound
Your average of 55 is far beyond that last band.
01-14-2003 07:42 AM
Re: any problem with the high wio%
Can't open the zip archive; it says it is corrupt.
Do you have Glance on your server? I believe there is a free trial version on the Application CDs (probably CD #2 or #1). If you install it, you can examine the running processes to find out which ones are waiting on I/O.
Without Glance, I would suggest the following:
'vmstat' will show the count of processes blocked on resources, and memory activity (pages freed and allocated).
How big is swap and how full is it (swapinfo -ta)? If you see deactivations in vmstat, you are running out of memory.
Buffer cache should be no larger than 400 MB (rule of thumb), but if you have some spare memory, bump it up by 100 MB or so and see what happens.
Check top and see how much time the system spends in system mode: is user-mode work driving the wait I/O, or is it the OS managing itself that causes the wait?
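For reference, a minimal sketch of those checks (kmtune and the dbc_* tunables assume an 11.x kernel):
vmstat 5               # 'b' column = processes blocked, 'de' column = deactivations
swapinfo -ta           # total swap vs. reserved and used
kmtune -q dbc_min_pct  # current dynamic buffer cache bounds, in % of RAM
kmtune -q dbc_max_pct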
This is a big area to examine. Check out previous posts on performance, and look at training or a good book (HP-UX Tuning and Performance by Sauers and Weygant is a fantastic reference).
Best of luck Ian
01-14-2003 07:43 AM
Re: any problem with the high wio%
I will upload the last part of the sar log, the average numbers.
Thank you.
01-14-2003 07:43 AM
Re: any problem with the high wio%
ioscan -fknCdisk
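(For anyone following along: -f gives the full listing, -n adds device file names, -k reads the kernel I/O tree without reprobing the hardware, and -C disk restricts the scan to the disk class, so the output maps each LUN to its controller/HBA path.)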
01-14-2003 07:51 AM
Re: any problem with the high wio%
What's going on with these disk devices??
device %busy avque r+w/s blks/s avwait avserv
Average c13t5d0 14.80 32767.50 33 549 5.00 5.81
Average c13t5d1 12.33 32767.50 31 555 4.99 4.72
Average c14t4d4 12.63 32767.50 31 554 5.00 5.03
Average c14t4d5 13.38 32767.50 31 558 4.95 5.38
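(Worth noting: 32767.50 sits exactly at the 16-bit signed-integer limit, 2^15 - 1 = 32767, so this looks more like a counter overflow or a sar accounting bug than a real queue depth; that is one reason the patch level matters.)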
What OS release are you running and what is the latest patch bundle you have installed?
live free or die
harry
01-14-2003 08:45 AM
Re: any problem with the high wio%
I don't know why the queue lengths are so high for those 4 disks. I will figure out which application is running on them. Do you think those 4 devices are causing the high %wio?
01-14-2003 09:10 AM
Re: any problem with the high wio%
Thank you for your help.
I found there are 4 HBAs in host hbsd1. But c13, c14, c17, and c18 are actually two HBAs; they are bound to FA ports 3aa and 14aa and see about 144 devices.
And c21 and c23 are the other two HBAs; they are bound to FA ports 3ba and 14ba and see about 40 devices.
I will confirm these tomorrow.
Since I am not onsite, I have to rely on my onsite colleague to get the necessary logs. Sorry that I cannot provide them quickly.
01-14-2003 09:19 AM
Re: any problem with the high wio%
OS level: B.11.11.
Model: 9000/800/SD16000.
As for the patch level, can you find it in my attachment?
The zip file is all that I can give; it is produced by a script named emcgrab. Yes, we are dealing with an EMC Symmetrix connected to an HP Superdome.
From the zip file you can see many, many things, but I definitely have to get more from the customer.
01-14-2003 10:11 AM
Re: any problem with the high wio%
Just thought I'd point that out.
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
01-15-2003 03:04 AM
Re: any problem with the high wio%
If it's running on EMC, the performance should be OK. It seems you have uneven performance: some devices are running very heavy compared to others.
Are you using EMC Optimizer? It is like a load balancer: it monitors all devices on the EMC and then moves the data around to even out the load across all devices. Once this is done, your stats on the HP side will show nice, even (low) disk usage across all devices, performance will be a lot better, and your %wio should drop considerably. As a guide, our EMC devices on HP servers with EMC Optimizer configured all run with a %wio of 5 or less, even with heavy I/O.
01-15-2003 05:39 AM
Re: any problem with the high wio%
How many disk adapters do you have?
Most EMC arrays are also set so that the 2nd outstanding write waits on the 1st. With many writes, this can cause a lot of waiting. Most microcode levels allow this limit to be raised.
I am curious how much cache is in the EMC; it may need to be larger.
01-15-2003 05:54 AM
Re: any problem with the high wio%
On the Symmetrix side, there are 4 fibre channel ports in total for the Superdome, but we can see that only c13 and c14 are in use; I can see from our log (sorry, I am unable to upload it due to the size limit) that 2 FA ports are used by the HP Superdome.
I do not think channel throughput is a problem, since we are using 2 Gb adapters.
I have seen some documents saying that %wio is another kind of idle, which can indicate either that I/O is too sluggish or that the CPU is too fast. I can say that the I/O for each disk is very good, judging by the very nice avwait and avserv numbers.
If the customer adds one or more Oracle instances on the server, I think %wio could go down, due to continual I/O being fed by parallel processes/threads. Is that correct?
Another thing I wish to clarify is the extremely high avque (about 32000) for the 4 devices Harry mentioned. It is such a weird number. I have examined the LVM layout: the 4 disks are no different from the others, the LVs are created with host-based striping, and the 4 devices have many volumes residing on them. I am really puzzled by this monstrous number.
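(For reference, the layout can be checked along these lines; the vg/lv names are placeholders:)
vgdisplay -v /dev/vg01         # PVs in the group and which LVs live on them
lvdisplay -v /dev/vg01/lvol1   # stripe count/size and extent-to-PV mapping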
Please cast some light on this, especially on whether the high %wio is a bad thing or a usual thing.
Thank you.
01-15-2003 06:19 AM
Re: any problem with the high wio%
http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x3da54a988422d711abdc0090277a778c,00.html
With a %wio that high, it's definitely an I/O issue; the question is what to do to isolate or fix it.
With 2 Gb connections to the EMC, they're not going to be the problem. The problem is either that the EMC is having trouble keeping up with the large number of I/O requests (EMC can check this for you and tell you whether certain physical disks in the Symmetrix are thrashing; Optimizer can report and fix it), or, in my opinion, you have too many I/O requests going to particular EMC LUNs (/dev/dsk/... entries).
Certainly something to try is striping your lvols across all available channels and devices to even out the I/O load; this should increase throughput considerably, unless the problem is at the EMC end. You need to investigate both possibilities.
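(A hypothetical sketch of that striping; vg01, lvdata, the 4 GB size, and the four device files are placeholders, so pick LUNs that sit behind different HBAs/channels:)
pvcreate /dev/rdsk/c13t5d0      # initialize each LUN for LVM (repeat for each disk)
vgextend /dev/vg01 /dev/dsk/c13t5d0 /dev/dsk/c14t4d4 /dev/dsk/c21t0d0 /dev/dsk/c23t0d0
lvcreate -i 4 -I 64 -L 4096 -n lvdata /dev/vg01   # 4-way stripe, 64 KB stripe size, 4 GB
The stripe cannot be wider than the number of PVs in the volume group, so all four disks must be added before the LV is created.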