
Trying to get better understanding of IOPS

 
mgtow
Occasional Advisor

I'm trying to get a better understanding of IOPS within our environment. However, there are a few things that I need to confirm in my findings. I appreciate your assistance, and sorry for the long-winded and hopefully not confusing description.

Environment:

EVA8100 with one disk group (DG), populated with [32] 15k and [40] 10k drives.

Going forward I know we need more 15k drives to support our initiative, but I'm not sure how many more. We're in the process of virtualizing most servers and most desktops with VMware vSphere and View.

Things I believe I need to figure out:
1. How many IOPS were initially available and how many are currently being used, taking the Vraid 5 penalty into consideration. This website seems to provide a good source for at least determining the initial IOPS figure (Vraid penalty as well as 20% wiggle room per disk): http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&objectID=c01671044 (there's a rough sketch of how I'm estimating this below the list).

2. The second thing, which I'm a little shaky on, is determining how to figure out the %R and %W of the DG, vDisk, or VM. I have some EVAPerf captures, and I see that I could also use the Windows Performance counters on my Command View EVA server, such as HP EVA Virtual Disk and HP EVA Physical Disk Group, which might work as well. What I'm wondering is: would you sum the counters Avg Read kb/sec and Read Req/s to give you the "total" read, and then do the same for the respective write counters? Then use these two "totals" to derive the %R and %W values? (See the second sketch below the list.)

3. The formula I'm referring to is for figuring out how many spindles I need to handle X number of IOPS:
[(%R + %W * Raid penalty) * IOPS required] / per-disk IOPS value (taken from the URL above)
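
To put some numbers on point 1, here is a rough sketch (Python) of how I'm estimating the raw IOPS the disk group can supply. The per-drive IOPS figures below are just rule-of-thumb placeholders, not the values from the HP document, so they'd need to be swapped for the real ones:

# Rough estimate of the raw back-end IOPS the disk group can supply.
# Per-drive numbers are assumed rules of thumb, NOT the figures from the
# HP document linked above -- substitute its values.
DRIVES_15K = 32
DRIVES_10K = 40
IOPS_PER_15K = 170   # assumed rule-of-thumb value for a 15k drive
IOPS_PER_10K = 125   # assumed rule-of-thumb value for a 10k drive
HEADROOM = 0.20      # the 20% per-disk wiggle room mentioned in the HP doc

raw_iops = DRIVES_15K * IOPS_PER_15K + DRIVES_10K * IOPS_PER_10K
usable_iops = raw_iops * (1 - HEADROOM)
print(f"raw: {raw_iops} IOPS, usable with {HEADROOM:.0%} headroom: {usable_iops:.0f} IOPS")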
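For point 2, this is how I'd derive %R and %W if the request-rate counters (Read Req/s and Write Req/s) are the right basis for the split; the counter values plugged in here are made up for illustration:

# Sketch of deriving %R / %W from EVAPerf (or the equivalent perfmon)
# counters, assuming the split should be based on the Req/s counters
# (I/O counts) rather than the kb/sec throughput counters.
def rw_split(read_req_per_s: float, write_req_per_s: float):
    total = read_req_per_s + write_req_per_s
    if total == 0:
        return 0.0, 0.0
    return read_req_per_s / total, write_req_per_s / total

# Example values only, not from a real capture.
pct_read, pct_write = rw_split(read_req_per_s=1800.0, write_req_per_s=600.0)
print(f"%R = {pct_read:.0%}, %W = {pct_write:.0%}")   # 75% / 25% in this example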
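And for point 3, the formula as I understand it, spelled out so the bracket placement is unambiguous. The Vraid 5 write penalty of 4 and the per-disk IOPS value are assumptions to be replaced with the figures from the HP document:

import math

# spindles = ceil( IOPS_required * (%R + %W * raid_penalty) / per_disk_iops )
def spindles_needed(iops_required: float, pct_read: float, pct_write: float,
                    raid_penalty: float, per_disk_iops: float) -> int:
    backend_iops = iops_required * (pct_read + pct_write * raid_penalty)
    return math.ceil(backend_iops / per_disk_iops)

# Example: 5000 front-end IOPS, 75/25 read/write, assumed Vraid 5 penalty of 4,
# assumed 170 IOPS per 15k spindle.
print(spindles_needed(iops_required=5000, pct_read=0.75, pct_write=0.25,
                      raid_penalty=4, per_disk_iops=170))
# -> ceil(5000 * (0.75 + 0.25 * 4) / 170) = ceil(8750 / 170) = 52 spindles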

Also, I would be curious to hear from those who have tested or are currently testing View. What kind of IOPS are you seeing with your virtual Windows 7 desktops for task and power users?

thanks,
Mike