Please share your EVA Disk Array Experiences!

Hello ITRC Forums!

I am about to embark on a disk array refresh project for our customer, and would like feedback from the community on your experiences with StorageWorks EVA 5000 arrays: specifically, real-world stories about ease of management, performance, configuration gotchas, ...

For background, we have a development SAN comprising an XP256, HP Edge 2/32 switches and HP-UX 11.0 (and eventually Tru64 5.1). We will ultimately have 6 x HP servers (L-class and N-class) and 2 x Tru64 hosts connected, supporting our development and DR environments.

The XP256 is reaching capacity and is very expensive to upgrade, so we are considering our options. One option is to replace the XP256 with an EVA 5000; however, I have the following concerns.

We currently own an old AutoRaid 12H connected to a V class server - the AutoRaid suffers from woeful performance. From what I can gather, the EVA sounds a lot like a glorified AutoRaid. Is this too cruel an assessment?

From what I've read, the EVA doesn't give the storage manager enough control to place I/Os on specific spindles, which makes it difficult or impossible to manage, diagnose and fix I/O problems. Compaq's answer to I/O issues is to throw more spindles at the virtual array; the EVA then stripes and redistributes the existing I/O onto these new spindles (possibly further complicating the I/O layout for future problems). Possibly the answer to very poor I/O is to overconfigure storage/spindles (e.g. 1 TB of storage overlaid onto 2 TB of spindles), or to purchase multiple underconfigured EVAs and let the storage administrator lay the I/O pattern out using traditional methods.
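To make that overconfiguration idea concrete, here is a rough back-of-envelope sketch (the IOPS numbers are my own assumptions, not EVA figures) of how the load per spindle falls as the same workload is spread over more disks:

```python
# Back-of-envelope look at the "overconfigure the spindles" option:
# spreading the same workload over more disks lowers the load per disk.
# All numbers here are illustrative assumptions, not EVA figures.
workload_iops = 4000        # assumed total host workload
per_disk_iops = 120         # assumed random IOPS one spindle can sustain

for spindles in (14, 28, 56):
    load = workload_iops / spindles
    print(f"{spindles:3d} spindles: {load:6.1f} IOPS/disk "
          f"({per_disk_iops - load:6.1f} IOPS headroom per disk)")
```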

Disk allocations appear to be spread across all spindles available to the EVA. For instance, if we create an 8 GB filesystem and the EVA has 20 disks available to it, each spindle on the back end holds its share of the 8 GB. If one of the 20 disks fails, the EVA appears to reallocate its storage across the 19 remaining disks... The larger the number of disks available to the EVA, the more complex (and messy) the rebuild process becomes. I imagine that a disk rebuild on a busy or highly configured EVA would be very expensive in terms of performance.
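As a rough illustration of that 8 GB / 20 disk example (ignoring Vraid overhead and any internal grouping the EVA does), the per-spindle share before and after a failure looks like this:

```python
# The 8 GB / 20 disk example from above: roughly how much of the
# virtual disk lives on each spindle before and after one disk fails.
# Ignores Vraid overhead and RSS grouping; purely illustrative.
vdisk_mb = 8 * 1024
for disks in (20, 19):
    print(f"{disks} disks: ~{vdisk_mb / disks:.0f} MB of this vdisk per spindle")
```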

Does performance degrade over time as the EVA manages itself? Does internal fragmentation become too much of an I/O problem, and if so, how do you manage it?

Are there any real world experiences which would help me to avoid configuration nightmares I have experienced in the past when implementing "new" hardware such as AutoRaid, XP, ...?

I imagine cache plays an important part in the performance of the EVA - what factors should I consider when placing an order?

Thanks to anyone who can give me any sort of feedback on any of the above - points will be awarded!
Leif Halvarsson_2
Honored Contributor

Re: Please share your EVA Disk Array Experiences!

Hi,
I am perhaps not the right person to answer as we don't have our EVA in production yet, but I will try to comment on some of your questions.
It is possible to configure several (max. 16) independent disk groups, but the general rule (for performance reasons) is to use as few disk groups as possible (perhaps just one). A difference from ordinary RAID systems is that the virtual disks (LUNs) within a disk group can be configured with different RAID levels (Vraid0, 1 or 5).

The rebuild process does not necessarily become more complex with more disks. The EVA divides a disk group into subgroups (RSS = Redundant Storage Sets). The redundancy is kept within each RSS.
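As a purely illustrative sketch of the RSS idea (the target of 8 disks per RSS is my assumption; check the EVA best-practice guide for the real rules), carving a 20-disk group into subgroups might look like this:

```python
# Illustrative sketch of the RSS idea: a disk group is carved into
# subgroups and redundancy is kept within each subgroup, so a failed
# disk only triggers a rebuild inside its own RSS. The target size of
# 8 disks per RSS is an assumption, not a documented EVA rule.
def split_into_rss(disk_count, target=8):
    return [list(range(start, min(start + target, disk_count)))
            for start in range(0, disk_count, target)]

for i, rss in enumerate(split_into_rss(20)):
    print(f"RSS {i}: disks {rss}")
```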
Johan Magnusson
Occasional Advisor

Re: Please share your EVA Disk Array Experiences!

Hi,

I will try to help you out with some of your issues.

Cache:
"Best practice" would be to use mirrored cache in a fault tolerance scenario. My experince though is that when you crate a lun and performance is an issue you will be better of with out it. You can still create another lun with mirrored cache and have them coexist. Still, buy as much cache you can.

Disk allocation:
A few weeks ago 2 disks "burned" within 24 hours in our EVA. We have 1.8 TB of data and a mix of RAID 1 & 5 with Continuous Access to another site. To my surprise we didn't experience that much of a performance issue during the rebuild process; the EVA did a fantastic job in the background.

Performance:
In my experience you will achieve the best performance if you present a LUN to 2 ports (Active-Active). Make the host access the LUN on both paths with the option "least I/O".
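Just to illustrate what "least I/O" means conceptually (this is not SecurePath code, only a sketch of the balancing behaviour, with made-up path names):

```python
# Conceptual sketch of a "least I/O" multipath policy: each new request
# goes down the path with the fewest outstanding I/Os. Not SecurePath
# code, just an illustration of the balancing behaviour.
outstanding = {"ctrl_A_port": 0, "ctrl_B_port": 0}

def pick_path():
    return min(outstanding, key=outstanding.get)

for req in range(6):
    path = pick_path()
    outstanding[path] += 1      # request issued and still in flight
    print(f"request {req} -> {path}")
```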

Management:
Management through Command View EVA perhaps doesn't give you as much control as older/other systems. On the other hand, you get a system that is easy to manage and troubleshoot.

Regards
Joh

Re: Please share your EVA Disk Array Experiences!

Are there any allocation rules I should be aware of? For instance, my EVA will be SAN-attached via (?4?) FC channels. Can a disk group/LUNs only be allocated to a specific channel, or can I allocate LUN 1 to one security zone, LUN 2 (of the same group) to another zone/host, ...? Can I allocate a mix of LUNs from the same disk group to different operating system types (e.g. HP-UX and Tru64, possibly Windows/Novell in future)?

With our XP512, each FC port has a configured "host mode", so that if I want to allocate storage to HP it has to go via port 1, Tru64 via port 2 (simplified example; everything is dual-pathed for the XPs). I'm assuming that a SAN-attached EVA will have a similar host mode setup for its FC ports?

Thanks a lot guys - very helpful so far.
Leif Halvarsson_2
Honored Contributor

Re: Please share your EVA Disk Array Experiences!

Hi,
I am not sure if it is possible to restrict a LUN to a specific FC port (I doubt it is). The normal configuration is that all LUNs are visible to all ports and the two ports on each controller are connected to different switches. If the host has two HBAs, there will be four paths for each LUN (the host will see each LUN as four different disks). To not confuse the OS you also need software on the host (SecurePath) which handles these redundant paths and presents them to the OS as one single volume.
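To illustrate where the four paths come from (a sketch only, with hypothetical device names):

```python
# Why a dual-HBA host sees each LUN four times: each HBA (on its own
# switch) reaches one port on each of the two HSV controllers.
from itertools import product

hbas = ["hba0", "hba1"]
array_ports = ["controller_A_port", "controller_B_port"]

paths = list(product(hbas, array_ports))
print(len(paths), "paths per LUN:")
for hba, port in paths:
    print(f"  {hba} -> {port}")
```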

If you want to restrict access to one single path (controller port) you must do this with zoning in the switches or LUN masking in the HBAs. This is necessary if you do not have SecurePath installed on a host.

Yes, you can allocate LUNs for different hosts and different OS in the same disk group.

One of the configuration steps on the EVA is to create host information, e.g.:
- LAN name
- IP address
- WWID of the HBAs
- Operating system.

After a LUN (Virtual disk) is created, the LUN must be "presented" to the host. This means:
- The LUN is only visible to that host.
- The EVA array has all relevant information about that host.
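As a conceptual sketch of the information involved (not Command View or SSSU syntax; all names, addresses and WWIDs below are hypothetical):

```python
# Conceptual model of the host record and LUN presentation described
# above -- not Command View or SSSU syntax, just the information involved.
# All names, addresses and WWIDs are hypothetical.
host = {
    "name": "devhost1",
    "ip_address": "10.0.0.10",
    "hba_wwids": ["5006-0b00-0012-3456", "5006-0b00-0012-3458"],
    "operating_system": "HP-UX",
}

# Presenting a virtual disk makes it visible only to this host.
presentation = {"vdisk": "dev_vdisk_01", "host": host["name"], "lun": 1}
print(presentation)
```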

Mike Naime
Honored Contributor
Solution

Re: Please share your EVA Disk Array Experiences!

I come from the VMS/Compaq side. We have HSGs, EVAs, and MSAs in our data center.

We use our EVA for backups: 168 x 146 GB disks in 16 shelves (2C12D + 1/2 of 0C12D). We have over 100 systems (one per VMS or AIX cluster) that are sending RMAN/image backups to the EVA on a nightly basis. This totals about 15 TB of actual data backed up each day. Since we snap this data, and then use a backup cluster to send these snaps to tape, we are passing at least 30 TB of data each day through our EVA in about a 20-hour period. That comes out to about 1.5 TB per hour through the EVA.
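Just to sanity-check those figures:

```python
# Sanity check of the throughput figures quoted above.
nightly_backup_tb = 15                      # data written to the EVA each night
through_eva_tb = nightly_backup_tb * 2      # snapped, then read back out to tape
window_hours = 20
print(f"about {through_eva_tb / window_hours:.1f} TB/hour through the EVA")
```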

When we first got our EVA, we saw a single system go from a 35 GB/hour transfer rate to a 75 GB/hour transfer rate on a 1 Gb SAN fabric where the source data is on HSGs. I suspect that with an end-to-end 2 Gb setup this number would go higher.

You tell me if that is fast enough?

As has been pointed out elsewhere, the EVA uses all 4 ports for all OSs that are using the EVA. You set the OS type on the host connections, not the EVA ports.

Manageability:
Right now, we can't really tell what the EVA is doing performance-wise. There are no tools to really tell us what it is doing.

Since we were used to the CLI on the HSGs, we rapidly learned how to use the scripting tool for CLI-style programming of the EVA. (YES, WE HATE GUIs!)

The SAN appliance that is a required purchase along with the EVA is OK to use. It is kinda slow with all of the point/click... wait... wait some more... that you wind up doing. Learn the scripting; it's faster if you have a LOT of disks/hosts to set up!

When we have had drive failures/additions (3 in the first 6 months) there was no noticeable performance degradation.

We have another 10 TB on order to complete this EVA. I'm not sure if we will add it to the same disk group or make a new disk group. After that, I will be able to comment on performance degradation from adding 70 disks.

As a general rule of thumb, you will lose about 25-30% of your raw storage space to redundancy, sparing and overhead. Engineering recommends that you never exceed 85% occupancy, although 90% is more the max you can have without affecting performance.
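As a rough worked example of those rules of thumb (the 24 TB raw figure is purely hypothetical):

```python
# Rough usable-capacity estimate from the rules of thumb above:
# 25-30% of raw space goes to redundancy, sparing and overhead, and
# occupancy should stay at or below 85% (90% absolute max).
# The 24 TB raw figure is just a hypothetical example.
raw_tb = 24.0
for overhead in (0.25, 0.30):
    usable = raw_tb * (1 - overhead)
    print(f"{overhead:.0%} overhead: {usable:.1f} TB usable, "
          f"fill to at most {usable * 0.85:.1f} TB (85%)")
```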

The 5000 version that we have is a SAN within a SAN. The 2 HSV110 storage controllers talk to the 18 storage shelves by way of 4 proprietary FC SAN switches (made by Brocade) that are internal to the EVA itself.
VMS SAN mechanic