Gary Betts
Occasional Contributor

EVA Performance

We currently have an EVA3000.

On the EVA I have a disk group containing 20 Physical Disks and 8 Vdisks configured.

One vdisk (configured as raid1, 150GB) is assigned to a database server, and I am seeing high I/O on this device; sar -d shows it up to 100% busy at times, as below:

c21t0d0 99.06 50.53 1944 31018 27.52 3.94
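
For reference, the line above is from periodic sar -d sampling, something along the lines of the following (the 5-second interval and 12-sample count are arbitrary):

# Sample disk activity every 5 seconds, 12 times; watch %busy together
# with avque, avwait and avserv rather than %busy on its own.
sar -d 5 12

# HP-UX sar -d column order for the line above:
# device  %busy  avque  r+w/s  blks/s  avwait  avserv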

I am presuming that the EVA will have spread the data across the physical disks within the disk group.

We are aware of some application issues with regard to I/O, but I also need to look at possible tuning of the EVA, and I am stuck on how to progress.

I am aware that disk group occupancy needs to be lower than 90% (ideally 85%) to aid performance.

Also, adding more physical disks to the group may potentially increase performance. Does anyone have an opinion on this?

Any suggestions?





2 REPLIES
Jonathan Harris_3
Trusted Contributor

Re: EVA Performance

First things first: the 100% disk busy figure doesn't actually mean that much (well, not in the way you're implying). It only indicates how much of the time the disk was in use, not how hard it was being driven. Theoretically you could be pushing minimal I/O, but as long as it's a constant stream it will show 100% utilization.

However, it is useful when compared with the wait times. My knowledge of HP-UX (I'm assuming that's where the output is from) is minimal (I work with Solaris), so I can't really tell you if your wait times are high or not.

Anyway, the basic rule to remember is: if %busy is high and wait times are low, the infrastructure isn't struggling; high %busy and high wait times indicate problems.
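
As a purely illustrative reading of the line you posted (assuming the standard HP-UX sar -d column order of device, %busy, avque, r+w/s, blks/s, avwait, avserv): that would put avwait at about 27.5 ms against an avserv of about 3.9 ms, with an average queue of roughly 50, which by the rule above would point towards requests queuing rather than the disk simply being slow. Treat that as a sanity check only and confirm the column order on your own system.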

You've only shown host-side figures; you need to monitor your SAN infrastructure to get an overall, end-to-end picture. What throughput are you getting through the switch ports (MB/s)? Through the array ports (MB/s)? What do the disk group and vdisk figures look like (MB/s, latency, read+write req/s)?
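
If your fabric switches happen to be Brocade (an assumption on my part; other vendors have equivalents), the per-port figures can be read straight from the switch CLI:

# Show per-port throughput, refreshing every 5 seconds (interval is arbitrary)
portperfshow 5

# Error counters are worth a look at the same time
porterrshow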

Compare your stats with the 3000's capabilities: http://h18006.www1.hp.com/products/quickspecs/11619_div/11619_div.html#Technical%20Specifications - are you getting close to those figures?

On to generic EVA performance tweaks.

There are things that can be done to tweak db performance. You don't say what db you're using, but the Oracle 10G / EVA 5000 Best Practice document can be found here: http://h71028.www7.hp.com/ERC/downloads/4AA0-5620ENW.pdf - although it is Oracle specific, some of the practices can be implemented regardless of the db being used.

Some generic guidelines on SAN performance:
VRAID1 gives better write performance than VRAID5, because no parity updates are needed and so fewer back-end operations are required per write. Read performance is similar. Generally, logs go on VRAID1, data on VRAID5.
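
To put rough numbers on that (the usual rule-of-thumb figures, not EVA-specific measurements): a small random write to VRAID5 typically costs four back-end operations (read old data, read old parity, write new data, write new parity), while the same write to VRAID1 costs two (one copy to each mirror). That is why write-heavy workloads such as transaction logs usually sit on VRAID1.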

More spindles = better performance. Faster spindles = better performance (ie, 15K RPM beats 10K RPM).

Make sure you're using the caches on the array.

Possible problems:
What are the other Vdisks doing? If you're hosting more than one I/O-intensive application on the same set of disks, you compromise performance, because different systems end up competing for the same resources. In fact, even hosting systems with only average I/O requirements on the same set of disks as an I/O-intensive system can cause a significant degradation in performance.

Continuous Access / Business Copy can cause large decreases in performance.
Amar_Joshi
Honored Contributor

Re: EVA Performance

Hi Gary,

You could look at the EVA's performance using EVAPerf, which should be installed on your SAN Management Appliance (SMA); if your Command View version is too old, you may need to install the EVAPerf component on the SMA separately.

It's very much true that BC and CA can affect performance, and a synchronous CA setup will affect it more, because it depends on how good the link is between the two EVAs.

Back to EVAPerf: you can look at a particular Vdisk's performance, and the very first thing you should check is the queue depth; a low queue depth is a healthy sign (usually below 1). Also check the preferred path to controller for all the Vdisks (from Command View). If, once the performance has been studied properly, you see that only one Vdisk is I/O intensive, you can prefer it to controller A and all the other Vdisks to controller B. Similarly, you can load balance on the host side as well (if using Secure Path, use spmgr to set the LB_POLICY).
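
As a rough sketch of the sort of commands involved (subcommand names and options vary between EVAPerf versions, so treat these as illustrations and check the tool's built-in help; the file names are made up):

# On the SMA: virtual disk statistics (queue depth, latency, req/s),
# sampled continuously every 10 seconds and written out as CSV
evaperf vd -cont 10 -csv > vdisk_stats.csv

# Array host port statistics (MB/s per controller port)
evaperf hps -cont 10 -csv > hostport_stats.csv

# On the HP-UX host: show Secure Path device/path status and the
# current load-balancing state
spmgr display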


If you do see anything suspicious, post it back to the forum and we can take it from there.