
evaperf sample durations

Simon Hargrave
Honored Contributor

evaperf sample durations


I'm setting up evaperf scripts to collect 5-minute averages with -cont 300 from our EVA 8100s. At the moment I do this for each 24-hour period, but if it falls over for any reason I lose a whole day's data. So I'm now rewriting it to collect for an hour, every hour, and amalgamating the results.

However, one thing I've noticed worries me a little. Say we run a command something like:

evaperf mof -cont 300 -dur 86100 -csv -od c:\evaperf_data

The first sample (e.g. midnight) is returned immediately.

Am I correct in thinking that this first sample is a "point in time" sample, and not representative of the previous 5 minutes? My testing seems to suggest that this is the case.

The reason this is worrying is that we amalgamate the MB/s figures for each of our vdisks and multiply by the number of seconds in the sample to get an estimate of total throughput, for CA journal log sizing purposes. If this first sample isn't representative of the average over the full 5 minutes and instead reflects only, say, 1 second, it will throw the figures out. That's not so bad when it happens once a day, but if we run a collection every hour then the first 5 minutes out of every 60 could be bogus, putting the overall figures out by roughly 8%.
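To make the sizing arithmetic concrete, here is a minimal sketch of the calculation described above, with made-up sample values (the vdisk names and rates are purely illustrative, not real evaperf output):

```python
# Each evaperf sample reports an average MB/s over the sample interval,
# so total data moved is the sum of (rate * interval length).
INTERVAL_SECS = 300  # matches -cont 300

# Hypothetical per-interval average MB/s figures for one vdisk
samples_mbps = [12.0, 15.5, 9.25]

total_mb = sum(rate * INTERVAL_SECS for rate in samples_mbps)
print(total_mb)  # 11025.0 MB moved over these three intervals
```

If the first figure is really a point-in-time reading rather than a 300-second average, its contribution of rate × 300 can be badly wrong, which is exactly the skew described above.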

Has anyone accounted for this before?
Simon Hargrave
Honored Contributor

Re: evaperf sample durations

Right, I've done some more investigation, and it is indeed the case that the first sample is inaccurate: it represents only a point in time rather than an average. You can see this from the attached graph.

I basically scheduled a 60-second continuous collection, then started another 5 minutes later, and another, and another, ending up with 4 separate collections running. You can see that the first point of each subsequent collection is off the line, and from then on they match perfectly.

So it seems that to get truly accurate data from evaperf, you need to ignore the first time point of any collection. For me that means collecting every 60 minutes for 65 minutes and binning the overlapping time in each subsequent file via a script.
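A minimal sketch of that post-processing step might look like the following. This assumes a simple layout of one header row followed by one row per sample in each hourly CSV; the function name and file pattern are my own, not anything evaperf produces:

```python
import csv
import glob

def merged_samples(pattern):
    """Concatenate hourly evaperf-style CSV files, dropping the first
    data row of each file (the point-in-time sample) and keeping a
    single header row from the first file."""
    header, rows = None, []
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader)
            if header is None:
                header = file_header
            data = list(reader)
            rows.extend(data[1:])  # bin the inaccurate first sample
    return [header] + rows if header else []
```

Collecting for 65 minutes each hour, as described above, leaves enough overlap that dropping the first sample of each file still gives continuous coverage.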

Hopefully this will help anyone searching for similar issues in future.