Disk Arrays

High wio% in XP512, everything else normal

SOLVED
José Enrique González
Frequent Advisor

High wio% in XP512, everything else normal

Hi, guys:

I have two N4000 boxes in an Oracle database cluster (v8.1.7) sharing LUNs on an XP512 disk array in an active-active configuration. sar -u consistently reports a high wio% (around 35%, sometimes more). avque is always 0.5 for all LUNs (fine), avwait averages around 7 ms (fine), and avserv is around 10 ms (fine, I think, according to the XP512 docs).
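For reference, this is roughly how I'm averaging the %wio column; the sample output below is made up for illustration, and on HP-UX the real command would be something like `sar -u 5 12`:

```shell
# Made-up sar -u sample (header line plus three 5-second samples)
cat > /tmp/sar_u.txt <<'EOF'
10:00:00    %usr    %sys    %wio   %idle
10:00:05      20      10      35      35
10:00:10      18      12      38      32
10:00:15      22       8      33      37
EOF

# Average the %wio column (4th field) across the samples
awk 'NR > 1 { sum += $4; n++ } END { printf "avg %%wio = %.1f\n", sum / n }' /tmp/sar_u.txt
# prints: avg %wio = 35.3
```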

Is there a good starting point for diagnosing this behavior? Unfortunately I don't have Glance on my boxes for in-depth analysis. Any help you can give me is truly appreciated.

Jose Enrique
6 REPLIES
Vincent Fleming
Honored Contributor
Solution

Re: High wio% in XP512, everything else normal

Jose,

Is this something new, or has the system been like that for a while?

A couple of things to check...

Look at the service times and I/O rates of the internal disks (if any). If the system is paging or doing other I/O to an internal drive, it will likely skew numbers like wio%. Don't forget that wio% is a system-wide estimate, not a truly measured number.
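For example, here's a quick filter to flag suspicious devices in `sar -d` output. The sample lines and the avwait/avserv thresholds below are made up for illustration; run `sar -d 5 12` on the host to get real numbers, and pay attention to the internal boot/swap disks:

```shell
# Made-up sar -d sample: an internal boot disk (c0t6d0) next to an XP LUN
cat > /tmp/sar_d.txt <<'EOF'
device   %busy   avque   r+w/s  blks/s  avwait  avserv
c0t6d0    85.0     1.2     120    1900    15.0    22.0
c5t0d1    30.0     0.5      80    1200     7.0    10.0
EOF

# Flag any device whose avwait > 10 ms or avserv > 15 ms
awk 'NR > 1 && ($6 > 10 || $7 > 15) { print $1, "avwait=" $6, "avserv=" $7 }' /tmp/sar_d.txt
# prints: c0t6d0 avwait=15.0 avserv=22.0
```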

Depending on your IO rate, those service times are a little high. You don't mention how the XP is configured - # of array groups, drive type, LUN emulation, LUSE use, LVM configuration, and port congestion can all cause higher latencies.

Some things to consider, if you're sure it's not the host system:
- can you add more array groups? (more drives = more performance)
- can you move the DB to 15k drives?
- avoid LUSE
- review your LVM configuration - if striping, look closely, and compare with the XP lun config - you may be striping within an array group, thrashing the heads.
- check the port performance numbers; you may benefit from spreading the LUNs around on more ports.
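To review the host side of the points above, the standard HP-UX commands look roughly like this (the volume group and LV names are hypothetical; the XP-side mapping comes from the xpinfo utility, if you have it installed):

```shell
# Show the volume group layout and every LV/PV in it
vgdisplay -v /dev/vgora

# Show an LV's stripe count, stripe size, and which PVs it spans
lvdisplay -v /dev/vgora/lvol1

# Map each device file back to its XP LDEV / array group / port
xpinfo
```

Comparing the lvdisplay stripe map against the XP LDEV-to-array-group mapping is what tells you whether two stripe columns landed in the same array group.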

I hope this helps!

Vince
No matter where you go, there you are.
Alzhy
Honored Contributor

Re: High wio% in XP512, everything else normal

I don't think what you have is a disk I/O problem. Please be aware that what sar reports in %wio is not solely waiting on disk I/O. Have you checked Oracle statistics to see whether it is indeed starved for disk I/O?
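One way to sanity-check the Oracle side is to query the cumulative wait-event statistics. This is just a sketch: the connection string is a placeholder, and the listed events are the usual disk-I/O suspects on 8i (time_waited is in centiseconds):

```shell
# Show where Oracle sessions actually spend their wait time
sqlplus -s "system/manager" <<'EOF'
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event IN ('db file sequential read',
                 'db file scattered read',
                 'log file sync')
ORDER  BY time_waited DESC;
EOF
```

If those event totals are small relative to uptime, the high %wio is an accounting artifact rather than a starved database.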
Hakuna Matata.
José Enrique González
Frequent Advisor

Re: High wio% in XP512, everything else normal

Thank you, guys, for your time. Since the database seems to be running well, I will conduct a closer inspection of the LUN configuration with the help of experts, and I will also look at internal disk activity as you suggest.
Alzhy
Honored Contributor

Re: High wio% in XP512, everything else normal

To make the XP512 (aka HDS 9960) fly, our recipe is to always stripe across LUNs (LDEVs) from different array groups on different ACPs and CHIP/HBA ports. 4- or 8-way stripes with a 64-128 KB stripe size should be sufficient for general DB usage.
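In HP-UX LVM terms, the recipe above comes down to something like this (the volume group, LV name, and size are made up; the four PVs in the VG should each come from a different array group and port):

```shell
# 4-way stripe, 64 KB stripe size, 4 GB logical volume,
# spread across four XP LUNs from different array groups
lvcreate -i 4 -I 64 -L 4096 -n lv_oradata /dev/vgora
```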
Hakuna Matata.
José Enrique González
Frequent Advisor

Re: High wio% in XP512, everything else normal

Excellent, Nelson! Are your DB partitions raw or filesystems?
Alzhy
Honored Contributor

Re: High wio% in XP512, everything else normal

We are using both cooked (VxFS with Direct I/O) and raw. On some we use the Veritas QuickIO product with the XP512, which allows us to get raw-I/O-like performance on filesystems (VxFS).

The key here is SAME - stripe and mirror everything. The "mirror" part you do not have to worry about, as the array already takes care of it. It is the "stripe" that you need to plan and implement.

Hakuna Matata.