
High IO wait on HP Itanium Systems.

 
Manoj (SAP Technology)
Occasional Contributor

High IO wait on HP Itanium Systems.

Hello,

I'm experiencing the following issue with some customers who are running HP Integrity systems. All of them are Oracle/SAP application servers.

We are seeing high CPU I/O wait on all of these servers. The storage varies from local disks to EVA arrays on different systems. The customers are NOT facing any performance problems because of this, but we are just curious why the I/O wait is so high.

Oracle release : 9.2.0.6

All of these systems use HP-UX filesystems, NOT raw devices, so asynchronous I/O is not possible by default.

The only thing we could find was the following trace file entry:

Ioctl ASYNC_CONFIG error, errno = 1
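
For reference, errno 1 is EPERM. One quick check tied to that message, assuming the async driver device lives at its usual /dev/async path (an assumption about our setup):

ls -l /dev/async    # async disk driver device file; EPERM usually points at missing privileges rather than a missing device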

As per the SAP and Oracle notes, we have done the following to avoid this:

1. The dba user has been granted the MLOCK privilege (see the sketch after this list).
2. Changed the following Oracle parameters:

disk_asynch_io = FALSE
filesystemio_options = DIRECTIO
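
For completeness, this is roughly how the MLOCK privilege was set up, following the HP-UX notes (the group name dba is an assumption about the oracle user's primary group):

echo "dba MLOCK" >> /etc/privgroup      # persist the privilege across reboots
/usr/sbin/setprivgrp -f /etc/privgroup  # apply it to the running system
/usr/bin/getprivgrp dba                 # verify: should report MLOCK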

But the sar output still shows high I/O wait during normal operation.
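
For what it's worth, the figures come from the plain CPU report, for example:

sar -u 5 10    # %wio = CPU idle while block I/O is outstanding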

Has anybody faced a similar issue?
Your help on this is much appreciated.

Regards, Manoj




3 REPLIES
Alzhy
Honored Contributor

Re: High IO wait on HP Itanium Systems.

Can you send/post the output of the following? (A quick collection sketch follows the list.)

mount -p
kmtune
sar -d 5 10 (at time you notice I/O waits)
sar 5 10 (coincident with above)
vxlicrep
vgdisplay -v (if using LVM)
vxprint -Aht (if using VxVM)
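
If it helps, something like this rough sketch gathers everything in one place (output paths are just placeholders; the two sar samples are started together so they cover the same interval):

#!/sbin/sh
OUT=/tmp/iowait_info ; mkdir -p $OUT
mount -p     > $OUT/mount.txt     2>&1
kmtune       > $OUT/kmtune.txt    2>&1
vxlicrep     > $OUT/vxlicrep.txt  2>&1
vgdisplay -v > $OUT/vgdisplay.txt 2>&1
vxprint -Aht > $OUT/vxprint.txt   2>&1
sar -d 5 10  > $OUT/sar_d.txt     2>&1 &   # disk stats in the background
sar -u 5 10  > $OUT/sar_u.txt     2>&1     # CPU stats over the same window
wait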

That error in your Oracle trace seems to indicate that async I/O is still being declared in the Oracle instance's init file. It should simply be ignored if your storage is not actually on raw devices.
Hakuna Matata.
Manoj (SAP Technology)
Occasional Contributor

Re: High IO wait on HP Itanium Systems.

Hello,

The attached file contains all of the command outputs.

Thanks for your help.

Regards, Manoj
Alzhy
Honored Contributor

Re: High IO wait on HP Itanium Systems.

Manoj,

Can you check whether your Oracle data file mount points are mounted with Direct I/O? Your attachment does not show it -- it was cut off. A Direct I/O mount should show options like:

vxfs ioerror=mwdisable,largefiles,mincache=direct,delaylog,convosync=direct 0 0
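
For reference, Direct I/O can be set up on a VxFS filesystem roughly like this (the volume and mount point names below are placeholders, not taken from your attachment):

# /etc/fstab entry (hypothetical device and mount point):
/dev/vg_ora/lv_oradata /oracle/data vxfs delaylog,mincache=direct,convosync=direct,largefiles 0 2

# or, if your VxFS version supports the remount option, change it online:
mount -F vxfs -o remount,mincache=direct,convosync=direct /oracle/data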

Also, it appears your dbc_max_pct is set to its default, which can tax the system. Set it to a percentage (the value depends on your RAM) so that the resulting buffer cache is around 800 MB. You may also want to set dbc_min_pct to the same value so you have a fixed-size buffer cache.
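
As a rough example of the arithmetic (the 16 GB figure below is just an assumed RAM size): on a 16 GB server, 5 percent gives a buffer cache of about 800 MB.

kmtune -s dbc_max_pct=5   # upper bound on the buffer cache (~800 MB with 16 GB RAM)
kmtune -s dbc_min_pct=5   # same value gives a fixed-size buffer cache
# Depending on the HP-UX release, the change may need a kernel rebuild and reboot to take effect.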

As to the high I/O wait: the sar -d stats are certainly all within bounds for the EVA. If your customers are not (yet?) experiencing performance issues, then expect that to happen in the future. One other issue I see is the EVA array in the mix.

Try the above changes (set the Oracle mounts to Direct I/O and adjust the buffer cache) and see if anything changes.
Hakuna Matata.