Help calculating I/O Load for EVA
HPE EVA Storage forum
11-22-2006 03:01 AM
The attached document contains a simple formula for calculating the I/O load on a storage device.
I have doubts about how to apply it to our installations.
In my SAN, every host has dual HBAs and is zoned with all EVA controller ports (4 ports in an active-active configuration), so each LUN has 4 paths. The operating system is Tru64 UNIX; in /etc/ddr.dbase I can see that the queue depth is 25.
This is a two-node cluster with 23 LUNs presented.
If I apply the formula, I get:
For a single host: P * q * L = 4 * 25 * 23 = 2300
For the two hosts: 2300 + 2300 = 4600
It is 4600 because both hosts access the same LUNs through the cluster file system.
The per-host value, 2300, is already higher than the Service-Q for the EVA indicated in the document (2048), and this is only one of several clusters connected to the EVA.
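To make the arithmetic explicit, here is a minimal Python sketch of the calculation (the variable names are mine; the 2048 figure is the per-port Service-Q limit quoted in the document):

# Nominal outstanding-I/O load per the document's formula: P * q * L.
paths_per_lun = 4    # P: active paths per LUN (dual HBA, 4 controller ports)
queue_depth = 25     # q: per-LUN queue depth from /etc/ddr.dbase on Tru64
luns = 23            # L: LUNs presented to the cluster

load_per_host = paths_per_lun * queue_depth * luns  # 4 * 25 * 23 = 2300
cluster_load = 2 * load_per_host                    # both nodes see all LUNs: 4600

SERVICE_Q = 2048  # per-port Service-Q limit from the document
print(load_per_host, cluster_load, load_per_host > SERVICE_Q)  # 2300 4600 True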
So, my questions are:
- Is the document's formula reliable? I assume it is.
- Am I missing something in the calculations?
- Not all LUNs in the cluster carry I/O constantly; should I factor that in somewhere?
I want to know how many additional hosts I can connect to this storage without affecting performance (too much).
Thanks in advance for your time and help.
Why do it the hard way, when you can do it the easy way?
11-22-2006 06:11 PM
Solution
Yes, the formula is reliable for understanding and assessing the I/O load on array devices, provided you put in actual, known values. It is useful for estimating how many hosts an environment can attach to a storage array without overloading its ports. When you use a predefined value such as q = 25, as in this case, you are calculating against that configured figure, which may be a maximum or a rough estimate and need not match what actually happens in practice.
The Service-Q depth of 2048 quoted for the EVA is per port; it means 2048 outstanding (simultaneous) I/Os at a time. So one EVA controller with 4 ports can handle 2048 x 4 = 8192 outstanding I/Os at a time without overloading any port, and with 2 controllers you get 16384.
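As a rough sanity check of the headroom, here is a small sketch (a worst-case bound, since the nominal demand uses the configured q rather than measured outstanding I/Os):

# Aggregate Service-Q capacity of the array vs. nominal host demand.
SERVICE_Q_PER_PORT = 2048
ports_per_controller = 4
controllers = 2
array_capacity = SERVICE_Q_PER_PORT * ports_per_controller * controllers  # 16384

demand_per_host = 4 * 25 * 23  # nominal P * q * L per host = 2300

# Worst-case count of such hosts before nominal demand exceeds
# the array-wide Service-Q capacity:
print(array_capacity // demand_per_host)  # 7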
The number of paths (P) should count active paths, viewed from a single controller port's perspective. I am not sure whether your dual HBAs are single-port or dual-port, but the value of P would vary accordingly. The formula assumes that a host keeps q (as configured) outstanding I/Os in flight at a time for a given LUN; in practice the outstanding I/O count can be lower than q, depending on the application/database I/O behaviour. The hypothetical comparison below illustrates the difference.
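For example, substituting a measured average of outstanding I/Os for the configured q changes the picture considerably (the 3.5 figure is made up purely for illustration; use what you actually observe):

# q in the formula is a configured ceiling, not a measurement.
paths = 4
luns = 23
configured_q = 25
measured_outstanding = 3.5  # hypothetical observed average per path per LUN

nominal = paths * configured_q * luns            # 2300
effective = paths * measured_outstanding * luns  # 322.0
print(nominal, effective)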
Another important aspect is the I/O performance of the disks and of any other I/O devices sitting between the controller ports and the disks. How quickly the array controller clears its I/O queues depends on those devices, and that is what allows it to absorb the outstanding I/Os from the host side. This is where the number of disks, the cache settings, the VRAID type used, and so on become important for the EVA and its I/O performance.
How many hosts can be connected to the EVA without an I/O performance penalty depends on the application/database I/O pattern and behaviour, on an optimised layout of the database on the EVA, and on efficient load balancing of LUNs across the EVA controllers (avoiding mirror-port (MP) loading by keeping proxy I/Os low, and configuring path load balancing through multipathing software such as DMP or PowerPath). The best way to track performance is to look at the host port and vdisk statistics in EVAperf. As I said before, the EVAperf values can be plugged into this formula to gauge current port loading.
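As an illustration of that kind of check, here is a sketch that flags ports approaching the per-port limit (the port names and figures are hypothetical; fill the mapping from EVAperf host-port statistics, whose actual output format differs):

# Flag host ports whose measured outstanding I/Os approach the
# per-port Service-Q limit of 2048.
SERVICE_Q_PER_PORT = 2048
WARN_AT = 0.80  # warn at 80% of the limit

outstanding_by_port = {  # hypothetical measurements
    "Ctrl-A/port1": 1900,
    "Ctrl-A/port2": 600,
    "Ctrl-B/port1": 450,
    "Ctrl-B/port2": 300,
}

for port, q in sorted(outstanding_by_port.items()):
    pct = q / SERVICE_Q_PER_PORT
    status = "overloaded" if q >= SERVICE_Q_PER_PORT else ("warning" if pct >= WARN_AT else "ok")
    print(f"{port}: {q} outstanding ({pct:.0%}) {status}")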