Disk Enclosures

Re: EVA 4000 poor performance

 
Christ 33
Advisor

Re: EVA 4000 poor performance

Thanks for your answer.
All the LUNs have write-back cache enabled.
During the tests the controllers aren't busy (Cpu use.gif).
During this test I also checked the latency (Time latency.gif).
You can see the write latency of my operations (purple line). I get good results.
During this test, there was parasitic read traffic coming from other servers (blue line). As you can see, the read latencies are sometimes very long.
If I check the read and write block sizes during this time (Block_size 1, 2 and 3.gif),
you can see my write block size (64 KB), and you can also see, at times, some blocks with small sizes.

There is a correlation between block size and latency at the same timestamps.
What do you think about these block sizes?
All the servers are in load-balancing mode with MPIO drivers.
I don't know if we have a problem with block size or with the SAN architecture.
Amar_Joshi
Honored Contributor

Re: EVA 4000 poor performance

Your EVA is probably just delivering whatever is coming from the hosts. In other words, the hosts themselves may not be able to throw enough data at the EVA to load it. Check the queue depth at the hosts (you may need a Microsoft expert) and see if data is queuing up at the HBA itself. Also check CPU and memory usage. You could also try switching between the MPIO load-balancing policies (round robin, shortest queue depth, shortest service time) and see if any gives you better results.
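To see why host-side queue depth alone can cap throughput even when the array is idle, a quick Little's-law sketch helps: the best a single path can do is the number of outstanding I/Os times the I/O size, divided by the round-trip service time. The numbers below are illustrative assumptions, not measurements from this environment.

```python
# Little's-law upper bound on throughput from one host path.
# queue_depth, io_size_kb and latency_ms are assumed example values.

def max_throughput_mb_s(queue_depth, io_size_kb, latency_ms):
    """Outstanding bytes divided by round-trip service time, in MB/s."""
    return queue_depth * io_size_kb / 1024 / (latency_ms / 1000)

# 64 KB writes at a 4 ms service time:
print(max_throughput_mb_s(1, 64, 4.0))    # 15.625  -> queue depth 1 starves the array
print(max_throughput_mb_s(32, 64, 4.0))   # 500.0   -> plenty of headroom
```

If the HBA queue depth is set very low, the host simply cannot keep enough I/O in flight, no matter how fast the EVA is.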

In my personal opinion, with such a low HSV controller usage, EVA is under utilized and something else is broken along the way.

Hope this helps
Rödel
New Member

Re: EVA 4000 poor performance

Hi,

On which type of drive do you get the low performance: copying to the FATA or to the FC drives?

I think the bottleneck is the blade or the network.


Server:
First, check the performance of the blade servers.
Which models (e.g. Blade 20p) do you have installed?


Network:
Please check the config rules written in the SAN Design Reference Guide, and check the firmware:
(http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00403562/c00403562.pdf)

Storage:
1.) Check with EVAPerf:
http://h71036.www7.hp.com/enterprise/downloads/HPStorageWorksCommandViewEVAPerf.pdf

If nothing helps, please contact HP support.

Jonathan Harris_3
Trusted Contributor

Re: EVA 4000 poor performance

Firstly, let's get rid of a few red herrings.

It's unlikely to be a host problem (unless you're running old hardware).

It's almost certainly not going to be a network problem - modern fabric switches aren't going to struggle with 90MB/s.

Port throughputs on the EVA aren't going to be a problem.

The theoretical throughputs from HP are based on an optimum setup (ie, a single server, load-balancing, sequential read / write throughput, fully-loaded EVA with 15K FC disks). A typical environment won't come close to these figures.


So, on to your 'problem'. Except that it's not really much of a problem - 90MB/s actually seems like a decent throughput for your setup.

A quick look at your environment indicates several basic issues.

You need to look at disk contention. Because you're sharing your EVA with lots of servers accessing lots of LUNs, that nice sequential write that you're trying to do is getting interrupted by read/write requests from other servers. Look at it from the EVA's point of view - it's writing for one server on to one bit of disk and is just about to write to the next part of the disk when it gets a request from another server to read a bit of data. The disk heads have to stop their write operation and go off to another part of the disk to do some reading for another server. Multiply this instance by 20 different servers and it's clear that performance is going to suffer.

However, this is the trade-off you get when you buy a SAN - you're not going to get the best performance all the time for all servers. The idea is just to work the layout so that you get contention levels that give acceptable performance levels on all servers. To this end, don't host I/O intensive apps on the same disk group. Currently, it looks like you're hosting 5 Oracle DB's and a file cluster on the same disk group - I'd expect the performance of all of them to be pretty poor.

A really important point to make is that you cannot just fill up all the space on an EVA and expect performance to hold up across all your servers. It may seem a strange concept, leaving available space unused, but expect performance to decrease if you use it all.

Secondly, 26 disks isn't going to help an EVA fulfil its potential. Performance in a disk group is linearly bound to the number of spindles available. For example, with 50% reads, you could expect a disk group of 26 x 15k disks (VRAID5) to handle an average of 1750 random IOPS. If you fully loaded your EVA with 56 x 15k disks and put them in a disk group, you could expect potential average throughput to more than double to 3800 IOPS (peaks of traffic can easily exceed this - it's just for example).

And then there's VRAID5. VRAID5 has to perform parity writes as well as just the normal write to disks, so each write command requires four writes as opposed to the two required for VRAID1. How does this translate? Well, taking the example above (50% reads, 15k disks, single disk group), in a 26 disk disk group, you'd be looking at 2950 IOPS and with a 56 disk disk group, this goes up to 6350.
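The spindle-count and RAID write-penalty figures above follow a simple model: each host write costs multiple back-end I/Os (roughly 2 for VRAID1, 4 for VRAID5), so the disk group's raw IOPS get divided by a blended cost per host I/O. As a sanity check, here is a small sketch that reproduces the quoted numbers; the ~170 random IOPS per 15k FC spindle is my assumption reverse-engineered from those figures, not an HP specification.

```python
# Rough EVA disk-group IOPS model. per_disk_iops=170 for a 15k FC
# spindle is an assumed figure, back-calculated from the numbers above.

def diskgroup_iops(disks, read_fraction, write_penalty, per_disk_iops=170):
    """Estimate host-visible random IOPS for a disk group.

    write_penalty: back-end I/Os per host write (2 for VRAID1, 4 for VRAID5).
    """
    raw = disks * per_disk_iops                                  # back-end capability
    cost_per_host_io = read_fraction + (1 - read_fraction) * write_penalty
    return raw / cost_per_host_io

# 50% reads, VRAID5: matches the ~1750 / ~3800 figures above
print(round(diskgroup_iops(26, 0.5, 4)))   # 1768
print(round(diskgroup_iops(56, 0.5, 4)))   # 3808
# 50% reads, VRAID1: matches the ~2950 / ~6350 figures above
print(round(diskgroup_iops(26, 0.5, 2)))   # 2947
print(round(diskgroup_iops(56, 0.5, 2)))   # 6347
```

The model also makes the VRAID1 vs VRAID5 trade-off obvious: the heavier the write mix, the more the VRAID5 parity penalty hurts.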

Some other points of note:
Don't use FATA drives for random I/O.
Make sure your block size doesn't exceed 512 KB (because this will flush the cache).
Christ 33
Advisor

Re: EVA 4000 poor performance

Thanks for your answer Jonathan.

Your comments are very interesting.
If we have a host with I/O-intensive apps (like Oracle), we need, for example, to separate the databases from the redo, archive and control logs on different disk groups.
To get the best performance from a disc group, we see that it's worth having at least 56 discs.
For our Oracle servers, do we need an EVA with 112 discs to get its full potential?

We use the FATA discs only for "Disc to Disc to Tape", with the standard Windows copy command.
Jonathan Harris_3
Trusted Contributor

Re: EVA 4000 poor performance

Hi Christian,

My comments were based on generic assumptions of hosting multiple databases and a cluster on a relatively small disk group. You're now making some pretty big jumps in logic!

First of all, disks. The more disks (spindles) you have available to a particular application(s), the better disk I/O throughput you will get. A disk group with 12 spindles will perform better than one of 8. 16 will be better than 12, 56 better than 16, 112 better than 56, etc, etc. The trick is calculating what you actually need. There's no point having an EVA 6000 with 112 disks for your database if it could happily run on a disk group of 16 disks.

An Oracle server on its own isn't I/O intensive - that depends purely on how much data it's processing. You're obviously in a much better position to actually collect data on this. Once you have an idea of the traffic coming from all your servers (and perhaps what you expect your servers to push through), you can begin to design a solution.
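That sizing step can be sketched by inverting the spindle-count reasoning from earlier in the thread: given a measured workload, work out how many spindles it needs rather than how many you happen to have. As before, the ~170 IOPS per 15k FC spindle is an assumption, and the workload numbers below are hypothetical.

```python
import math

# Hypothetical sizing sketch: how many spindles does a measured workload
# need? per_disk_iops=170 for a 15k FC drive is an assumed figure.

def spindles_needed(required_iops, read_fraction, write_penalty, per_disk_iops=170):
    """Minimum spindle count for a random-I/O workload.

    write_penalty: back-end I/Os per host write (2 for VRAID1, 4 for VRAID5).
    """
    backend_iops = required_iops * (read_fraction + (1 - read_fraction) * write_penalty)
    return math.ceil(backend_iops / per_disk_iops)

# e.g. a database pushing 2000 IOPS at 70% reads:
print(spindles_needed(2000, 0.7, 4))   # 23 spindles on VRAID5
print(spindles_needed(2000, 0.7, 2))   # 16 spindles on VRAID1
```

Worked this way round, it quickly shows whether a workload really justifies a 112-disk EVA or would be happy on a 16-disk group.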

For Oracle design on a small EVA, your first reading should be this: http://www.oracle.com/technology/deploy/performance/pdf/EVA_ORACLE_paper.pdf

After that, you have to work out which way to go - expansion, splitting of disk groups, etc. You'd probably do well to get HP Storage involved - they'll be better placed to analyse your needs and the best possible solution.