
Thoughts on evaperf counters

Pall Sigurdsson
Occasional Advisor


We are currently working on integrating evaperf with Cacti (via the built-in WMI counters). We are producing nice graphs, but have a few issues:

1) evaperf quite frequently stops working via WMI and starts returning empty results.

We did a bit of googling around and found a workaround: running "wmiadap /f" and then restarting the WMI service fixes the problem, at least for a couple of hours.

Running evaperf and perfmon locally works fine; reading the WMI counters remotely returns empty results. Other WMI counters work fine remotely, only the data from evaperf is broken.
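The workaround above can be wrapped in a small script so a scheduled task can reapply it whenever the counters go empty. This is a hypothetical sketch, not an HP-supported tool: the command names come from this thread, and it has to run on the Windows management host with administrator rights.

```python
import subprocess

# Repair steps from the thread: re-parse the performance libraries into
# WMI, then restart the WMI service (Windows-only, needs admin rights).
REPAIR_COMMANDS = [
    ["wmiadap", "/f"],                # force re-parse of perf libraries
    ["net", "stop", "winmgmt", "/y"], # stop the WMI service (and dependents)
    ["net", "start", "winmgmt"],      # start it again
]

def repair_wmi(dry_run=True):
    """Run the repair steps (or just list them) and return them as strings."""
    for cmd in REPAIR_COMMANDS:
        if not dry_run:
            subprocess.run(cmd, check=True)
    return [" ".join(cmd) for cmd in REPAIR_COMMANDS]
```

A poller could call `repair_wmi(dry_run=False)` whenever an evaperf WMI query comes back empty, instead of waiting for graphs to flatline.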

2) We notice that there seems to be quite a difference between the evaperf results for the physical disk group (evaperf pdg) and for the total storage (evaperf as).

The EVA we are using for testing has only one physical disk group, and it reports much lower figures (both requests/sec and MB/sec) than are indicated by the results of as and by the vdisks involved.

For example, running both concurrently we see:
* pdg write and read requests return: 35 requests/sec

And at the same time we get:

* evaperf as reports a total of 700 host requests/sec

If we look at MB/sec we see a similar trend.

The problem can be visualized by comparing these two images:
http://pall.sigurdsson.is/evaperf/eva-throughput.png
http://pall.sigurdsson.is/evaperf/physicaldiskgroup-throughput.png

The attached graphs show that the trend is consistent, but something seems to be wrong with the scale. Our estimates put the total-array figures closer to actual performance.


3) As stated above, the info from evaperf as seems to be more or less accurate, but sometimes we see huge spikes for a short while. It claims to be delivering a total of 6 GB/sec (that's gigaBYTES), which is way more than this EVA4400 should be able to handle.

Here is a link that demonstrates the problem:
http://pall.sigurdsson.is/evaperf/eva-anomalies.png

Does anyone have solutions to any of these?
8 REPLIES
Prokopets
Respected Contributor

Re: Thoughts on evaperf counters

Pall, I had similar problems with perfmon - it showed me strange results for host speed, IOPS and other parameters. Unfortunately, I haven't resolved this issue, and as it wasn't urgent I gave up. Maybe it was a problem with an old CV version (I tested on 8.0), but I'm not sure. Maybe HP is trying to move users to Storage Essentials :)
Bert Zefat
Advisor

Re: Thoughts on evaperf counters

Do you have the csv files that evaperf creates? And have you loaded them into PerfMonkey, to compare the Cacti output to the PerfMonkey output?

The spikes could be a result of a high cache read.
Pall Sigurdsson
Occasional Advisor

Re: Thoughts on evaperf counters

For issue #2 (the difference between physical-disk-group and total-array throughput), I have compared them only by running evaperf as and evaperf pdg manually.

The difference is vast (20-fold) and obvious, so I did not do a detailed comparison.


Regarding the huge spikes, big cache reads seemed plausible, but I ruled that out because 6 GB/sec is way too much for two 4 Gbit Fibre Channel ports to deliver, cache or no cache.

Also, I compared the results with the activity of each virtual disk (because the vdisk counters show both cache hits and cache misses); there are indeed spikes in the traffic, but they do not add up anywhere close to these figures.
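The port-math argument above can be made concrete. This is a back-of-the-envelope sketch: the two 4 Gbit/s Fibre Channel host ports and the 8b/10b encoding (10 line bits per data byte) are assumptions taken from the post, not a verified configuration.

```python
# Assumed host-side configuration (from the thread, not measured):
PORTS = 2            # Fibre Channel host ports
LINE_RATE_GBIT = 4   # Gbit/s per port

def max_throughput_mb_s(ports=PORTS, gbit=LINE_RATE_GBIT):
    """Usable MB/s across all ports: Gbit/s * 1000 Mbit, 10 line bits/byte."""
    return ports * gbit * 1000 / 10

reported_mb_s = 6 * 1024            # the 6 GB/s spike, in MB/s
ceiling_mb_s = max_throughput_mb_s()  # roughly 800 MB/s
```

The reported spike exceeds the physical ceiling by a factor of more than seven, which is why a cache effect alone cannot explain it.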

PS: Does anyone know how long the sample period is for the counters gathered via WMI? I suspect they are 1-second averages, but I would like to see some confirmation of this.
Pall Sigurdsson
Occasional Advisor

Re: Thoughts on evaperf counters

I will give ITRC support a try. Will post results here, if there are any.
Pall Sigurdsson
Occasional Advisor

Re: Thoughts on evaperf counters

OK, I have talked with ITRC support. I decided to post updates here for the search engines to crawl.

1) After three weeks of tedious debugging, HP Support kindly excused themselves with "technically we don't support evaperf over WMI". In other words, this was a known issue, but HP has no plans to fix it. A very big disappointment, that was.

2) According to ITRC, the physicaldiskgroup counters only count "backend I/O", but I have yet to get a definition of what counts as a backend I/O. I know this is not just a matter of cached vs. uncached traffic, because my uncached traffic is 20 times higher than what the physicaldiskgroup counters indicate.

In other words, physicaldiskgroup (pdg) counters are completely useless.

3) Still working on this issue; this is probably an error in the evaperf program, though.
Prokopets
Respected Contributor

Re: Thoughts on evaperf counters

Pall, it's quite predictable. What's the point of improving the free evaperf instead of trying to sell Storage Essentials... :(
Cmorrall
Frequent Advisor

Re: Thoughts on evaperf counters

This may be against the terms of use, but I'd like to promote a service we are currently offering: EVAperfect. Full disclosure: I work for the company that provides this service.

In short, it's an online service to collect and display EVA performance data.

You can see a rough edit of the service in action at http://www.youtube.com/watch?v=noNlggjjFAk

Pall Sigurdsson
Occasional Advisor

Re: Thoughts on evaperf counters

Another update for the search engines.

1) evaperf over WMI is in fact very broken, and the official answer from HP is that they don't support it!

2) ITRC has changed their answer... Now they claim the PhysicalDiskGroup counters are in fact the "average of every disk in the group", not total counters for the group. Their reply was: "Since there is no documentation available that claims otherwise, this is not a bug."
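If that explanation is right, the pdg figures should scale up to the array totals by multiplying by the number of disks in the group. A minimal sketch; the 20-disk group size below is an assumption chosen to match the ~20-fold gap reported earlier in this thread, not a known fact about my array.

```python
def pdg_group_total(avg_per_disk, disks_in_group):
    """Scale the per-disk average that evaperf pdg reports up to a group total."""
    return avg_per_disk * disks_in_group

# With the figures from earlier in the thread (35 req/s from pdg vs.
# 700 req/s array-wide), a 20-disk group would reconcile the two readings.
```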

3) Total counters for a specific array are in fact wildly off the charts (i.e. they sometimes claim the EVA is delivering 700 GBytes/sec)... This is a confirmed bug, and ITRC has confirmed that they are NOT planning on fixing it.

Have a nice day everyone!