HPE EVA Storage

Help calculating I/O Load for EVA

Ivan Ferreira
Honored Contributor

The attached document gives a simple formula for calculating the I/O load on a storage device.

I have some doubts about applying it to our installation.

In my SAN, all hosts have dual HBAs and are zoned with all EVA controller ports (4-port active-active configuration).

So there are 4 paths to each LUN. The operating system is Tru64; in /etc/ddr.dbase I can see that the queue depth is 25.

This is a two-node cluster with 23 LUNs presented.

So, if we apply the formula I get:

For one host: P*q*L = 4*25*23 = 2300
For the two hosts: 2300 + 2300 = 4600

4600 because both hosts access the same LUNs through the cluster file system.

The value for a single host, 2300, is already higher than the Service-Q for the EVA indicated in the document (2048). And this is only one of the several clusters connected to the EVA.
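The arithmetic above can be sketched as a small script (the formula load = P * q * L and the 2048 Service-Q limit are taken from this post; the function name is my own):

```python
# Worst-case outstanding-I/O load per the document's formula: P * q * L.
# Values are the ones from this post; 2048 is the per-port Service-Q limit.

def outstanding_io(paths, queue_depth, luns):
    """Worst-case outstanding I/Os a single host can queue to the array."""
    return paths * queue_depth * luns

SERVICE_Q = 2048

per_host = outstanding_io(paths=4, queue_depth=25, luns=23)
cluster = 2 * per_host  # both cluster nodes access the same LUNs

print(per_host)             # 2300
print(cluster)              # 4600
print(per_host > SERVICE_Q) # True: one host already exceeds the per-port limit
```

As the reply below notes, this is a worst case: it assumes every LUN has its full queue depth outstanding on every path at the same time.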

So, my question is:

- Is the formula in the document reliable? I assume it is.
- Am I missing something in the calculation?
- Not all LUNs in the cluster have constant I/O activity; is this a factor I should account for somewhere?

What I want to know is how many additional hosts I can connect to this storage without affecting performance (too much).

Thanks for your time and help in advance.
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
Sameer_Nirmal
Honored Contributor
Solution

Re: Help calculating I/O Load for EVA

Yes, the formula is reliable for understanding and assessing the I/O load on array devices, provided actual known values are put in. It is helpful for estimating how many hosts a storage array can handle in an environment without port overloading. When you use predefined values like q=25, as in this case, you are calculating with a value that may be a maximum or a low estimate and need not reflect actual practice.

The Service-Q depth of 2048 mentioned for the EVA is per port, meaning 2048 outstanding (simultaneous) I/Os at a time. Thus with one EVA controller with 4 ports, you can have 2048 x 4 = 8192 outstanding I/Os handled at a time without any port overloading. With 2 controllers you get 16384.
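Using the figures above, the array-wide capacity works out as follows (a rough sketch; the headroom division at the end is my own illustration, not a sizing rule):

```python
# Per-array Service-Q capacity, per the figures in this reply:
# 2048 outstanding I/Os per port, 4 ports per controller, 2 controllers.

SERVICE_Q_PER_PORT = 2048
PORTS_PER_CONTROLLER = 4
CONTROLLERS = 2

per_controller = SERVICE_Q_PER_PORT * PORTS_PER_CONTROLLER
array_total = per_controller * CONTROLLERS

print(per_controller)  # 8192
print(array_total)     # 16384

# Illustrative headroom check: how many clusters like the poster's
# (4600 worst-case outstanding I/Os each) fit within the array total?
print(array_total // 4600)  # 3
```

Note this only bounds queue slots at the ports; as discussed below, the backend (disks, cache, VRAID) determines how quickly those queues drain.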

The number of paths (P) considered is supposed to be the number of "active" host paths, from a single controller port perspective. I'm not sure whether the dual HBAs in this case are single- or dual-port, but the value of P would vary accordingly. The formula assumes that a host is issuing "q" (as defined) outstanding I/Os at a time for a specific LUN. In practice this need not be the case, as the outstanding I/Os could be fewer than "q" depending on application/DB I/O behaviour.

Another important aspect to consider is the I/O performance of the disks and any other I/O devices between the controller port and the disks. The rate at which the array controller clears its I/O queue depends on these devices, and that is what makes it possible to service the outstanding I/Os from the host side. This is where the number of disks, cache settings, VRAID type used, etc. come into the picture as important factors for the EVA and its I/O performance.

The number of hosts that could be connected to the EVA without an I/O performance penalty depends on the application/DB I/O pattern/behaviour, an optimised layout of the databases on the EVA, and efficient load balancing of LUNs across the EVA controllers (avoiding MP port loading with fewer proxy I/Os, and path load-balancing settings through software like DMP, PowerPath, etc.). The best way to track performance is to look at the host port and vdisk statistics in EVAperf. As I said before, the EVAperf values could be used with this formula to determine the current port loading.