07-25-2007 08:16 PM
EVA 4000 poor performance
We have an EVA4000 2C4D with 54 disks (a mix of 146 GB FC, 250 GB FATA and 500 GB FATA).
Same slot in each enclosure for each disk category.
We get poor performance in copy mode: less than 90 MB/s from Windows servers. I ran a test from the Manap, copying 6 files of 1.7 GB each. Copying from C: to C: was faster than from C: to the EVA! During the test, nobody else was on the SAN. I redid the test with the evaperf command running. I copied from C: to the 146 GB FC group, to the 250 GB FATA group, to the 500 GB FATA group, and from 250 GB FATA to 146 GB FC (entirely over the SAN). The result for the EVA was between 53 and 103 MB/s (a poor value). Latency was between 2 and 6 ms (a very good value).
These latencies were only for my copy. During one test, I sent requests from another server to an Oracle database which is also on the SAN. In that case, the latency for these requests was between 20 and 40 ms! During all these tests, controller CPU usage stayed under 15%.
I ran SanHealth on the Brocade switches at the same time, but I don't see anything unusual there.
It seems there is a bottleneck somewhere, but I don't see where!
How do you explain this strange result?
See the attached SAN drawing.
07-27-2007 12:35 AM
Re: EVA 4000 poor performance
I've learned that an EVA performs best (for generic workloads) with more than 32 drives per disk group.
From everything I've read, don't rely on FATA disk groups for anything other than disk-to-disk backup or archival.
-tjh
07-27-2007 01:35 AM
Re: EVA 4000 poor performance
I have 56 disks in 4 enclosures:
26 FC disks, 146 GB 15k
19 FATA, 250 GB
11 FATA, 500 GB
I made an error when I said "same slot in each enclosure for each disk category". In places, there is a mix.
We have 3 disk groups, one for each disk type.
We use the FC disks for the Oracle 10g database, and the FATA disks for disk-to-disk-to-tape backup.
07-27-2007 01:51 AM
Re: EVA 4000 poor performance
(26) 146GB drives is not too bad. When you set up your test, did you use vraid5 or vraid1?
How many virtual disks are configured in that disk group (146GB)?
-tjh
07-27-2007 02:15 AM
Re: EVA 4000 poor performance
The HP user guide says: "Disk drives should be installed in vertical columns within the disk enclosures." Do you think this will impact performance if that's not the case?
07-27-2007 02:24 AM
Re: EVA 4000 poor performance
In your case, if you have (6) 146GB drives in each shelf, with two shelves each containing a 7th drive, you should be fine.
Why (50) vdisks? Are they all presented to different hosts?
-tjh
07-27-2007 02:35 AM
Re: EVA 4000 poor performance
07-29-2007 06:21 PM
Re: EVA 4000 poor performance
Initiate a collection of the EVA performance objects with the following command from a DOS prompt on the SMA: evaperf all -cont -csv > data.csv
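Once the capture is finished, the resulting CSV can be summarised offline; a minimal sketch in Python. Note that evaperf's column names vary by firmware and object set, so "Rd Latency (ms)" below is a hypothetical placeholder; check the header row of your own data.csv.

```python
import csv
from statistics import mean

def column_average(path, column):
    """Average one numeric column of an evaperf CSV capture.

    Skips rows where the column is missing or empty.
    """
    with open(path, newline="") as f:
        rows = csv.DictReader(f)
        return mean(float(r[column]) for r in rows if r.get(column))

# e.g. column_average("data.csv", "Rd Latency (ms)")
```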
07-30-2007 04:53 AM
Re: EVA 4000 poor performance
Let me start by stating what the maximum rated performance figures are for the EVA4000.
For a single controller, the maximum throughput is 335 MB/s. This is from the quickspecs located at http://h18000.www1.hp.com/products/quickspecs/Division_01-2006/12234_div.PDF
From the same quickspecs, the host ports are 2 Gb/s, which provides approximately 200 MB/s per host port. The EVA4000 has 4 ports in total, two on each controller.
This is where you need to investigate exactly what sort of multipathing you have set up, i.e. are you load balancing across both controllers down two different paths, or are you perhaps only using one path? Because if you are using only one path, the maximum theoretical throughput is only 200 MB/s.
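The path arithmetic above can be sketched as a quick back-of-envelope calculation (the 200 MB/s per-port and 335 MB/s per-controller figures are the quickspecs numbers quoted above; this is illustrative only, not a performance model):

```python
# Theoretical throughput ceiling for an EVA4000, using the quickspecs
# figures quoted above: a 2 Gb/s host port gives ~200 MB/s, and each
# controller is rated at 335 MB/s maximum.
PORT_MBPS = 200          # approx. usable MB/s per 2 Gb/s host port
CONTROLLER_MBPS = 335    # rated maximum throughput per controller

def max_throughput(active_paths, controllers=1):
    """Ceiling is the lesser of summed port bandwidth and controller limit."""
    return min(active_paths * PORT_MBPS, controllers * CONTROLLER_MBPS)

print(max_throughput(1))                  # one path: 200 MB/s
print(max_throughput(2))                  # two paths, one controller: 335 MB/s
print(max_throughput(2, controllers=2))   # one path per controller: 400 MB/s
```

The point of the sketch: with a single active path you hit the 200 MB/s port limit long before the controller limit, which is why the multipathing configuration matters.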
In regards to your file copy test, there are a couple of points to consider that will stop it from achieving the maximum value:
- the block size of the file systems you are using and of the files themselves being copied;
- the I/O throughput that a file copy generates (I would expect it to be significantly less than what an Oracle DB can request);
- file system and logical volume disk buffers and queues.
All of these will be playing a part, and a value of 90 MB/s from a straight file copy off the FC disk group is reasonable.
More advanced tests such as those in this white paper are better able to stress an EVA.
http://h71028.www7.hp.com/ERC/downloads/4AA0-5452ENW.pdf
Regards
Owen
07-30-2007 07:01 AM
Re: EVA 4000 poor performance
Also, a very basic thing to check is the write-back cache setting on all the vdisks you have created.
07-31-2007 12:52 AM
Re: EVA 4000 poor performance
All the LUNs have write-back cache on.
During the tests the controllers aren't busy (Cpu use.gif).
During this test I also checked the latency (Time latency.gif).
You can see the write latency of my operations (purple line). I get good results.
During this test, there was "parasite" read traffic coming from other servers (blue line). As you can see, the read latencies are sometimes very long.
If I check the block sizes read and written during this time (Block_size 1, 2 and 3.gif), you can see my write block size (64 KB), and you can also see, at times, some blocks with small sizes.
There is a correlation between block size and latency at the same timestamps.
What do you think about these block sizes?
All the servers are in load-balancing mode with MPIO drivers.
I don't know if we have a problem with block size or with the SAN architecture.
07-31-2007 09:09 AM
Re: EVA 4000 poor performance
In my personal opinion, with such low HSV controller usage, the EVA is underutilised and something else is broken along the way.
Hope this helps
08-01-2007 02:23 AM
Re: EVA 4000 poor performance
On which type of drive do you get the low performance: copying to the FATA or to the FC drives?
I think the bottleneck is the blade or the network.
Server:
First, check the performance of the blade servers. Which models (e.g. Blade 20p) do you have installed?
Network:
Please check the configuration rules in the SAN Design Reference Guide, and check the firmware.
(http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00403562/c00403562.pdf)
Storage:
1.) Check with Command View EVAPerf:
http://h71036.www7.hp.com/enterprise/downloads/HPStorageWorksCommandViewEVAPerf.pdf
If nothing helps, please contact HP support.
08-02-2007 11:23 PM
Re: EVA 4000 poor performance
It's unlikely to be a host problem (unless you're running old hardware).
It's almost certainly not going to be a network problem - modern fabric switches aren't going to struggle with 90MB/s.
Port throughputs on the EVA aren't going to be a problem.
The theoretical throughputs from HP are based on an optimum setup (ie, a single server, load-balancing, sequential read / write throughput, fully-loaded EVA with 15K FC disks). A typical environment won't come close to these figures.
So, on to your 'problem'. Except that it's not really much of a problem - 90MB/s actually seems like a decent throughput for your setup.
A quick look at your environment indicates several basic issues.
You need to look at disk contention. Because you're sharing your EVA with lots of servers accessing lots of LUNs, that nice sequential write that you're trying to do is getting interrupted by read/write requests from other servers. Look at it from the EVA's point of view - it's writing for one server on to one bit of disk and is just about to write to the next part of the disk when it gets a request from another server to read a bit of data. The disk heads have to stop their write operation and go off to another part of the disk to do some reading for another server. Multiply this instance by 20 different servers and it's clear that performance is going to suffer.
However, this is the trade-off you get when you buy a SAN - you're not going to get the best performance all the time for all servers. The idea is just to work the layout so that you get contention levels that give acceptable performance levels on all servers. To this end, don't host I/O intensive apps on the same disk group. Currently, it looks like you're hosting 5 Oracle DB's and a file cluster on the same disk group - I'd expect the performance of all of them to be pretty poor.
A really important point is that you cannot simply use up all the space on an EVA and expect performance to hold up across all your servers. It may seem a strange concept, leaving available space unused, but expect performance to decrease if you fill it.
Secondly, 26 disks isn't going to help an EVA fulfil its potential. Performance in a disk group is linearly bound to the number of spindles available. For example, with 50% reads, you could expect a disk group of 26 x 15k disks (VRAID5) to handle an average of 1750 random IOPS. If you fully loaded your EVA with 56 x 15k disks and put them in a disk group, you could expect potential average throughput to more than double to 3800 IOPS (peaks of traffic can easily exceed this - it's just for example).
And then there's VRAID5. VRAID5 has to perform parity writes as well as just the normal write to disks, so each write command requires four writes as opposed to the two required for VRAID1. How does this translate? Well, taking the example above (50% reads, 15k disks, single disk group), in a 26 disk disk group, you'd be looking at 2950 IOPS and with a 56 disk disk group, this goes up to 6350.
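The IOPS figures above follow from a standard write-penalty calculation; here is a minimal sketch, assuming roughly 170 random IOPS per 15k spindle (an assumed per-disk figure, chosen because it reproduces the ballpark numbers quoted, not an HP specification):

```python
# Back-of-envelope host IOPS for a disk group. Each read costs one
# back-end disk I/O; each write costs `write_penalty` back-end I/Os
# (4 for VRAID5 with its parity writes, 2 for VRAID1 mirroring).
DISK_IOPS_15K = 170   # assumed random IOPS for one 15k FC spindle

def host_iops(spindles, read_fraction, write_penalty,
              disk_iops=DISK_IOPS_15K):
    backend = spindles * disk_iops            # total back-end IOPS budget
    cost_per_host_io = read_fraction + (1 - read_fraction) * write_penalty
    return backend / cost_per_host_io

print(round(host_iops(26, 0.5, 4)))  # VRAID5, 26 disks -> ~1768
print(round(host_iops(56, 0.5, 4)))  # VRAID5, 56 disks -> ~3808
print(round(host_iops(26, 0.5, 2)))  # VRAID1, 26 disks -> ~2947
```

The outputs line up with the ~1750/3800 (VRAID5) and ~2950/6350 (VRAID1) averages quoted above, which is the point: doubling spindles roughly doubles throughput, and halving the write penalty helps almost as much at a 50% read mix.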
Some other points of note:
Don't use FATA drives for random I/O.
Make sure your block size doesn't exceed 512 KB (because this will flush the cache).
08-03-2007 02:38 AM
Re: EVA 4000 poor performance
Your comments are very interesting.
If we have an I/O-intensive app (like Oracle), we need, for example, to separate the databases and the redo, archive and control logs onto different disk groups.
To get the best performance from a disk group, we see that it's worth having at least 56 disks.
So to serve Oracle, do we need an EVA with 112 disks to get its full potential?
We use the FATA disks only for "disk to disk to tape", with the standard Windows copy command.
08-03-2007 05:05 AM
Re: EVA 4000 poor performance
My comments were based on generic assumptions of hosting multiple databases and a cluster on a relatively small disk group. You're now making some pretty big jumps in logic!
First of all, disks. The more disks (spindles) you have available to a particular application(s), the better disk I/O throughput you will get. A disk group with 12 spindles will perform better than one of 8. 16 will be better than 12, 56 better than 16, 112 better than 56, etc, etc. The trick is calculating what you actually need. There's no point having an EVA 6000 with 112 disks for your database if it could happily run on a disk group of 16 disks.
An Oracle server on its own isn't I/O intensive - that depends purely on how much data it's processing. You're obviously in a much better position to actually collect data on this. Once you have an idea of the traffic coming from all your servers (and perhaps what you expect your servers to push through), you can begin to design a solution.
For Oracle design on a small EVA, your first reading should be this: http://www.oracle.com/technology/deploy/performance/pdf/EVA_ORACLE_paper.pdf
After that, you have to work out which way to go - expansion, splitting of disk groups, etc. You'd probably do well to get HP Storage involved - they'll be better placed to analyse your needs and the best possible solution.