Community Home > Storage > Entry Storage Systems > Disk Enclosures > Re: EVA 4000 poor performance
07-31-2007 12:52 AM
Re: EVA 4000 poor performance
All the LUNs have write-back cache enabled.
During the tests the controllers aren't busy (Cpu use.gif).
During this test I also checked the latency (Time latency.gif).
You can see the write latency of my operations (purple line). I get good results.
During this test, there was parasitic read traffic coming from other servers (blue line). As you can see, the read latencies are sometimes very long.
If I check the block sizes read and written during this time (Block_size 1, 2 and 3.gif),
you can see my write block size (64 KB), and you can also see, at times, some blocks with a small size.
The block sizes and the latencies are correlated over time.
What do you think about these block sizes?
All the servers are in load-balancing mode with MPIO drivers.
I don't know whether we have a problem with block size or with the SAN architecture.
07-31-2007 09:09 AM
Re: EVA 4000 poor performance
In my personal opinion, with such low HSV controller usage, the EVA is under-utilized and something else is broken along the way.
Hope this helps
08-01-2007 02:23 AM
Re: EVA 4000 poor performance
On which type of drive do you get the low performance: copying to the FATA or to the FC drives?
I think the bottleneck is the blade or the network.
Server:
First, try to check the performance of the blade servers.
Which models (e.g. Blade 20p) do you have installed?
Network:
Please check the configuration rules written in the SAN Design Reference Guide, and check the firmware:
(http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00403562/c00403562.pdf)
Storage:
1.) Check with Command View EVAPerf:
http://h71036.www7.hp.com/enterprise/downloads/HPStorageWorksCommandViewEVAPerf.pdf
If nothing helps, please contact HP support.
08-02-2007 11:23 PM
Re: EVA 4000 poor performance
It's unlikely to be a host problem (unless you're running old hardware).
It's almost certainly not going to be a network problem - modern fabric switches aren't going to struggle with 90MB/s.
Port throughputs on the EVA aren't going to be a problem.
The theoretical throughputs from HP are based on an optimum setup (ie, a single server, load-balancing, sequential read / write throughput, fully-loaded EVA with 15K FC disks). A typical environment won't come close to these figures.
So, on to your 'problem'. Except that it's not really much of a problem - 90MB/s actually seems like a decent throughput for your setup.
A quick look at your environment indicates several basic issues.
You need to look at disk contention. Because you're sharing your EVA with lots of servers accessing lots of LUNs, that nice sequential write that you're trying to do is getting interrupted by read/write requests from other servers. Look at it from the EVA's point of view - it's writing for one server on to one bit of disk and is just about to write to the next part of the disk when it gets a request from another server to read a bit of data. The disk heads have to stop their write operation and go off to another part of the disk to do some reading for another server. Multiply this instance by 20 different servers and it's clear that performance is going to suffer.
However, this is the trade-off you get when you buy a SAN - you're not going to get the best performance all the time for all servers. The idea is just to work the layout so that you get contention levels that give acceptable performance levels on all servers. To this end, don't host I/O intensive apps on the same disk group. Currently, it looks like you're hosting 5 Oracle DB's and a file cluster on the same disk group - I'd expect the performance of all of them to be pretty poor.
A really important point to make is that you cannot simply consume all the available space on an EVA and expect performance to hold up across all your servers. It may seem a strange concept, leaving available space unused, but expect performance to decrease if you fill it.
Secondly, 26 disks isn't going to help an EVA fulfil its potential. Performance in a disk group is linearly bound to the number of spindles available. For example, with 50% reads, you could expect a disk group of 26 x 15k disks (VRAID5) to handle an average of 1750 random IOPS. If you fully loaded your EVA with 56 x 15k disks and put them in a disk group, you could expect potential average throughput to more than double to 3800 IOPS (peaks of traffic can easily exceed this - it's just for example).
And then there's VRAID5. VRAID5 has to perform parity writes as well as just the normal write to disks, so each write command requires four writes as opposed to the two required for VRAID1. How does this translate? Well, taking the example above (50% reads, 15k disks, single disk group), in a 26 disk disk group, you'd be looking at 2950 IOPS and with a 56 disk disk group, this goes up to 6350.
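The spindle and VRAID arithmetic above can be reproduced with a back-of-envelope model. A rough sketch, assuming ~170 random IOPS per 15K FC spindle (an assumed figure, not an HP specification) and counting back-end I/Os per host I/O (2 per write for VRAID1 mirroring, 4 per write for VRAID5 parity updates):

```python
def random_iops(n_disks, per_disk_iops, read_frac, write_penalty):
    """Estimate average random IOPS for a disk group.

    Each host read costs one back-end I/O; each host write costs
    write_penalty back-end I/Os (2 for VRAID1, 4 for VRAID5).
    """
    backend_per_host_io = read_frac + (1 - read_frac) * write_penalty
    return n_disks * per_disk_iops / backend_per_host_io

# 50% reads, ~170 IOPS per 15K spindle: this lands close to the
# 1750 / 3800 (VRAID5) and 2950 / 6350 (VRAID1) figures above.
for n in (26, 56):
    print(n, round(random_iops(n, 170, 0.5, 4)),   # VRAID5
             round(random_iops(n, 170, 0.5, 2)))   # VRAID1
```

The exact per-spindle figure varies with drive model and seek pattern; the point is that throughput scales linearly with spindle count and inversely with the write penalty.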
Some other points of note:
Don't use FATA drives for random I/O.
Make sure your block size doesn't exceed 512k (because that will flush the cache).
08-03-2007 02:38 AM
Re: EVA 4000 poor performance
Your comments are very interesting.
If we host an I/O-intensive app (like Oracle), we need, for example, to separate the databases from the redo, archive and control logs on different disk groups.
To get the best performance out of a disk group, we see that it's worth having at least 56 disks.
So to serve Oracle, we'd need an EVA with 112 disks to get its full potential?
We use the FATA disks only for "Disk to Disk to Tape", with the standard Windows copy command.
08-03-2007 05:05 AM
Re: EVA 4000 poor performance
My comments were based on generic assumptions of hosting multiple databases and a cluster on a relatively small disk group. You're now making some pretty big jumps in logic!
First of all, disks. The more disks (spindles) you have available to a particular application(s), the better disk I/O throughput you will get. A disk group with 12 spindles will perform better than one of 8. 16 will be better than 12, 56 better than 16, 112 better than 56, etc, etc. The trick is calculating what you actually need. There's no point having an EVA 6000 with 112 disks for your database if it could happily run on a disk group of 16 disks.
An Oracle server on its own isn't I/O intensive; that depends purely on how much data it's processing. You're obviously in a much better position to collect actual data on this. Once you have an idea of the traffic coming from all your servers (and perhaps how much throughput you expect them to need), you can begin to design a solution.
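That sizing step can be sketched by inverting the usual IOPS estimate: given a target workload, work out how many spindles the disk group needs. A minimal sketch, under the same assumptions as before (~170 IOPS per 15K spindle and 4 back-end I/Os per host write on VRAID5; both assumed figures, not HP specifications):

```python
import math

def spindles_needed(target_iops, per_disk_iops, read_frac, write_penalty):
    # Back-end I/Os generated per host I/O, then round up to whole disks.
    backend_per_host_io = read_frac + (1 - read_frac) * write_penalty
    return math.ceil(target_iops * backend_per_host_io / per_disk_iops)

# A 1750-IOPS, 50%-read VRAID5 workload comes out at ~26 spindles,
# consistent with the 26-disk group discussed earlier in the thread.
print(spindles_needed(1750, 170, 0.5, write_penalty=4))
```

Feed in your measured read/write mix and target IOPS per application, then add headroom for peaks before settling on a disk-group size.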
For Oracle design on a small EVA, your first reading should be this: http://www.oracle.com/technology/deploy/performance/pdf/EVA_ORACLE_paper.pdf
After that, you have to work out which way to go - expansion, splitting of disk groups, etc. You'd probably do well to get HP Storage involved - they'll be better placed to analyse your needs and the best possible solution.