Operating System - HP-UX

System & I/O Performance: Should we still believe %WIO

 
Alzhy
Honored Contributor

System & I/O Performance: Should we still believe %WIO

Analyzing an I/O situation on a system hooked up to an EVA array that is shared amongst servers on a SAN... On one particular server, %WIO has been consistently above 30% even though my IOPS and blocks/sec throughput are relatively low. (The server is an Oracle DB/app combo; the Oracle mounts are tuned for direct I/O, the buffer cache is kept to 800MB, and the system has 16GB of memory.) I do notice that the per-LUN access times reported by "sar -d" show average service times of 10-20ms on some LUNs, while avwait is consistently below 10ms (5ms average).
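For reference, the numbers above come from sampling along these lines (the interval and count are just what I happened to use):

    sar -u 5 12    # %usr/%sys/%wio/%idle, 12 samples at 5-second intervals
    sar -d 5 12    # per-device %busy, avque, r+w/s, blks/s, avwait, avserv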

Should I trust sar's %WIO, or should I believe that we are already swamping the EVA, considering that it is a shared front-end array? Its architecture is as follows:

EVA array to switch: 2 controllers, each with 2x2Gbps FC ports, for a total of 4 FC links to the switches. About 6 hosts, each with 2x2Gbps FC HBAs, connect to the switches (fabric). So when all 6 hosts are hitting the fabric, the theoretical maximum load offered to the back-end FC links into the EVA controllers is 24 Gbps (6 hosts x 4 Gbps each). And since the switch-to-EVA bandwidth is effectively a shared pipe of only 8 Gbps, we may be swamping the links or the array itself. Unfortunately, my SAN administrator does not have the tools on the switches or the array to validate this...
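Spelling out the arithmetic (nominal link speeds, ignoring protocol overhead, and assuming all paths are active):

    Host side:  6 hosts x 2 HBAs x 2 Gbps = 24 Gbps worst-case offered load
    Array side: 2 controllers x 2 ports x 2 Gbps = 8 Gbps
    Oversubscription: 24 Gbps / 8 Gbps = 3:1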

Hakuna Matata.
2 REPLIES
Sridhar Bhaskarla
Honored Contributor

Re: System & I/O Performance: Should we still believe %WIO

Hi Nelson,

%wio (idle with some process waiting for I/O to complete) is one metric that I constantly keep track of.

However, I will use 'glance' to look at the queue length before I consider it a problem (press the 'u' key in glance; Qlen is the second column). In most cases, seeing a queue build up on a disk corresponded to high %wio in my sar output. This may be because of the layout of the tablespaces and the tables on them. We usually run into the recurring problem of one heavily accessed table with millions of unnecessary rows and no index, where full table scans mess up the performance.
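If you want a comparable number outside of glance, sar's avque column shows the queue length per device. Something like the below should flag LUNs where a queue is building; it's untested, and the field positions assume the standard HP-UX 'sar -d' layout (time, device, %busy, avque, r+w/s, blks/s, avwait, avserv):

    sar -d 5 6 | awk 'NF == 8 && $4+0 > 0.5 { print $2, "avque=" $4, "avwait=" $7, "avserv=" $8 }'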

-Sri
You may be disappointed if you fail, but you are doomed if you don't try
doug mielke
Respected Contributor

Re: System & I/O Performance: Should we still believe %WIO

Nelson,
Sar has been my friend for so long I'd be truly hurt if I ever thought it was leading me astray.
But here's my thought:
I have an EVA w/ 2 controllers serving 2 servers. WIO times are always in the single digits, and often <1. Times are lower when the system is busy, I assume because a higher % of requests are served from the EVA's cache.
Could it be that the server you are looking at makes the fewest requests of the EVA, so its data is seldom in the EVA's cache? That would raise the access times on that one server up to the normal physical I/O times (20+ ms) that I see on my direct-attached storage.

A test could be to do some large I/O against data that you know is in the EVA's cache, such as something that you are sure another server is accessing.
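For instance (the file names here are made up; substitute a datafile you know the other server reads constantly), something like:

    # read a file the other server keeps hot, then one nobody touches,
    # and compare elapsed times; note the second pass of each may also
    # hit the host buffer cache unless the mount forces direct I/O
    timex dd if=/oracle/data/hot_table.dbf of=/dev/null bs=1024k count=512
    timex dd if=/oracle/data/cold_table.dbf of=/dev/null bs=1024k count=512

If the 'hot' read runs at cache speed and the 'cold' one shows the 20ms-class times, that would support the theory.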