Disk Arrays

LUN Distribution Disk Group Striping in EVA 5000

rccmum
Super Advisor

LUN Distribution Disk Group Striping in EVA 5000

Hi Guys,

I am in the process of implementing an EVA 5000 2C6D storage box. It will be connected to a Superdome and to L- and N-class servers (10 servers max.) running HP-UX 11.00 and 11.11, with Secure Path V3.0E, VCS V3.0D, Business Copy EVA, 56 x 72 GB 15K disks, and 2 SAN switches. I would appreciate your replies to some of the following questions.

1. How would one decide on an equal distribution of LUNs across the two HSV110 controllers for load balancing?
2. Does a LUN get load balanced by default across the two front-end ports FP1 and FP2? If so, how would one know about it - maybe through Secure Path?
3. I doubt we will get the best performance from the EVA 5000 if we keep the SCSI queue depth at 8 (the HP-UX default). What would be an optimal queue depth for EVA LUNs in general, or maybe a rule of thumb, without compromising LUN performance or overloading the EVA? (We are certainly not going to come anywhere near 512 LUNs.)
4. Is it OK to have host-based striping, i.e. LVM striping, on vdisks from different disk groups (DGs) on the EVA? Would that justify having more than one DG, against the single-group recommendation in the EVA best practices? Any performance gain or penalty?
(We opted for only two DGs, though.)
18 REPLIES
Mark Poeschl_2
Honored Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

Using your numbers:
1. Load balance across the HSV110 controllers by selecting the preferred path ('A' or 'B') on alternating vdisks. That way each controller will be servicing requests from the same number of vdisks.
2. Yes, access to vdisks is always load balanced across both ports of a given HSV110.
3. No specific comment on HP-UX settings - not my area of expertise - but in general an EVA takes good advantage of OS queue depths higher than most OS defaults.
4. I doubt host-based striping across DGs will gain you much unless the spindle count in each DG is very low. In a config like yours, the two reasons I might go with two DGs would be to separate sequential and random I/O streams, or to keep log files separate from a database when one is being really fussy about availability.
Uwe Zessin
Honored Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

1.) Make a fixed assignment as Mark wrote. If the operating system allows it, leave off the preference and the EVA will balance automatically during boot, or set the preferred path on the host (Secure Path).

2.) No, I do not agree. An EVA controller allows a host to send I/O requests to a single virtual disk concurrently via FP1 and FP2, but if the host uses only one path, there is no balancing! It is the host's responsibility to balance.

3.) Unfortunately, there is no 'optimal' setting. In the unlikely case of single, synchronous I/Os you would not even need a queue depth, because there would be only one I/O in flight at a time.

The EVA supports up to 2048 outstanding I/Os on a single port, but I would not design the solution that way. If one controller goes down, the remaining one must be able to cope with its I/Os, too. As a rough sanity check: ten hosts each allowed to queue 16 I/Os against a dozen LUNs could, in the worst case, put 10 x 16 x 12 = 1920 requests behind one port - already close to that limit.

4.) Remember that each disk group has its own need for distributed sparing capacity. On the other hand, I agree with Mark that keeping a database and its journals in different DGs might help a little should one DG go down (I have never seen a single DG go down yet).
.
Tom O'Toole
Respected Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

2. Yes, access to vdisks is always load balanced across both ports of a given HSV110.


Mark (or anyone),

Do you know whether a given LUN will only use one path at a time (even with multiple hosts accessing it), or whether, for example, two different hosts can access the same vdisk using both ports on the controller serving that LUN? Is there any documentation on this?
I believe with HSG80 storage only one of the two paths per controller is active for each LUN. Thanks.

p.s.

rccmum,

Can't give you a set number, but you will definitely want to raise your SCSI queue depth above 8 to get the best performance out of an EVA. Performance should go up until you hit 1-2 outstanding I/Os per physical disk, if your app and system can drive that much. The EVA has a (large) limit on the number of outstanding I/Os (I think it is 2048 per controller port, but don't quote me on it).

Host-based striping across groups - with striping, if you lose either of the disk groups, you're hosed. The only reason I can think of for striping is to get LUNs larger than 2 TB, and even then you would be better off striping within the same group, because then you are not compounding your failure modes.
Can you imagine if we used PCs to manage our enterprise systems? ... oops.
Mark Poeschl_2
Honored Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

Tom -

Well, I don't know if it's the host or the controller doing it - I'll bow to Uwe's wisdom that it's the host. What I do know is that when doing some benchmark testing against a single vdisk, I see roughly the same amount of activity on both ports of the controller serving the disk (by looking at the switch port counters).
Uwe Zessin
Honored Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

To me, a "LUN" is an entity in the SCSI address space, but in some environments and/or some documents it is also used to refer to the storage object (EVA virtual disk, HSG container) - always a great way for confusion. Strictly speaking, a virtual disk on the EVA is presented on 4 different LUNs through the 4 controller ports (A-FP1, A-FP2, B-FP1, B-FP2).

You _can_ do I/O to an EVA virtual disk through both ports of a controller, no matter if one host or multiple hosts. You can also do this through both ports of a HSG controller as well if it operates in multibus-failover mode; I have seen it on Tru64 Unix, for example.

Tru64 Unix uses all active paths. Last time I checked, OpenVMS does not.

On hosts with Secure Path installed - it depends whether you have enabled 'load balancing'. But do not confuse this load balancing of I/Os between active controller ports with assigning the management of multiple virtual disks between both controllers by setting a preferred path.
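
On HP-UX you can check this with Secure Path's spmgr utility. A quick sketch from memory - the option letters vary between Secure Path versions, so verify against the spmgr(1M) man page before relying on them:

# list virtual devices, controllers, and the state of each path
spmgr display

# enable load balancing ('-b' is from memory - check spmgr(1M)
# for your Secure Path version)
spmgr set -b on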
.
rccmum
Super Advisor

Re: LUN Distribution Disk Group Striping in EVA 5000

Let me restate my understanding based on your feedback. Please correct me if I am wrong and advise.


1. If the preferred path (either A or B) is set in Command View EVA, I guess load balancing would be achieved at the EVA HSV controller level. The same should then be done at the OS level on the hosts accessing the LUNs, so that we have complete load balancing from the OS and EVA perspectives as a whole. One has to maintain sufficient paths between the two through the HBAs and Secure Path with the preferred-path option. I am not sure about turning on the load-balancing option in Secure Path and its performance effects in combination with the above.
Moreover, Secure Path will take care of switching to alternate paths in case of an HBA or controller failure. Am I right?

2. I do agree we should have two paths from a host to a controller so that a LUN can be load balanced across both FP1 and FP2.

3. I believe that a queue depth of 8 is not sufficient at all for storage like the EVA and may be underutilizing it. I think the 2048 outstanding I/Os mentioned are nothing but a queue depth of 2048 per EVA port. Hence I would increase the HP-UX queue depth in increments (16, 32, ...) depending on the need, and I would consider changing it to 16 first.
I am not sure whether this increase in queue depth is honored by the Secure Path virtual device; I believe it should be!
How do I ensure this at the HBA level? Is there anything I can do at the HBA level for better throughput?
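
For what it's worth, this is how I was planning to check and change it per LUN with scsictl, if I understand scsictl(1M) correctly (the device file below is just an example):

# show the current device settings, including the queue depth
scsictl -a /dev/rdsk/c12t0d1

# raise the queue depth to 16 for this LUN
# (takes effect immediately, but is not persistent across reboots)
scsictl -m queue_depth=16 /dev/rdsk/c12t0d1

I understand there is also a system-wide default (the scsi_max_qdepth kernel tunable, default 8), but I have not verified how it interacts with the Secure Path virtual devices.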

4. I was thinking about leveraging the performance of the EVA (which it delivers optimally with only one DG) by using LVM-based striping of vdisks across two DGs. But I fear the consequences if this striping breaks: if we have two Vraid1 vdisks with LVM striping on top of them, losing either DG takes out the whole volume. It may be a bad idea. Any views here?
Our ultimate goal is to have sufficient redundancy and optimal performance using the EVA.

Can anyone out there who has used double striping with an EVA shed more light on this?

generic_1
Respected Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

Well, first off, an EVA is probably a poor choice for a Superdome unless you have low I/O requirements. If you can send it back, I would :) - not because the EVA is a bad product, but you just bought a Pinto engine for your Caddy :). An XP1024 or EMC DMX would do it more justice.

I would load the Secure Path for EVA product even though HP-UX has PVLinks.

Don't chop your EVA into many disk groups; that will kill your performance.

Use single-initiator zones, as is standard.

Carve your LUNs/virtual disks up as small as possible and all the same size, keeping in mind that you don't want to run into the 255-disk limit per volume group when planning your disk size. Multiple disks keep more requests in flight in parallel, which helps keep your I/O from bottlenecking.

Stripe your logical volumes with something like lvcreate -L <size_in_MB> -i <number_of_disks> -I 1024, where -I is the stripe size.
This will stripe your data across as many disks as possible :). A fuller sketch follows below.
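
For example, carving four vdisks into one striped volume group would look roughly like this (device names invented for illustration; adjust -i and -I to your own layout):

# make the presented vdisks into LVM physical volumes
pvcreate /dev/rdsk/c10t0d1
pvcreate /dev/rdsk/c10t0d2
pvcreate /dev/rdsk/c10t0d3
pvcreate /dev/rdsk/c10t0d4

# create the volume group
mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x010000
vgcreate /dev/vgdata /dev/dsk/c10t0d1 /dev/dsk/c10t0d2 /dev/dsk/c10t0d3 /dev/dsk/c10t0d4

# 8 GB logical volume striped across all 4 vdisks with a 64 KB stripe size
lvcreate -L 8192 -i 4 -I 64 -n lvdata /dev/vgdata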

The main idea here is to use as many initiators and as many disks as possible. You do not want your disks to be the bottleneck.

Make sure this thing is being monitored, and see if you can keep a few disks on hand; HP has had some stocking issues lately. The EVAs I have like to munch disks.

Also, I had posted a question about the disk enclosures. If this is a new purchase, I would make sure you have the newer version, or demand it. I have seen bad disks bring an EVA down on the older setups. Not pretty.
Uwe Zessin
Honored Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

PVlinks does not work properly with VCS V3.

VCS V3 has an array-wide limit of 512 virtual disks, but snapshots are counted against it, too.

If you want more initiators, then you need more Fibre Channel adapters - otherwise I don't understand what you mean by 'initiator'.
.
rccmum
Super Advisor

Re: LUN Distribution Disk Group Striping in EVA 5000

Jeff,
Did you see the limitation of the EVA with a Superdome documented somewhere?
What sort of problems did you face with the disk enclosures?
I do agree with you about having small and equal-sized vdisks to reduce I/O contention,
but I wonder how efficiently the EVA can really handle many small vdisks.

Uwe,
I guess you are the EVA expert here! Can you shed more light on the concerns in my second message and here?
Tom O'Toole
Respected Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

Uwe,

Sorry, I've been throwing around the term LUN to mean 'a presented vdisk' when I really mean (e.g.) a VMS os_unit; your usage is of course correct. VMS (as of 7.3-2) still only uses one path per host. My impression until now was that even multiple hosts were all using the same controller port for the same unit (at least on HSG80s). I guess it is easy enough for me to check this on the HSG80 and EVA 5000.

Jeff,

I'm curious about the justification for creating numerous small storage sets and striping them at the host. The EVA will stripe over all the disks in the group, so what is the point? If it is to load balance across paths, wouldn't 4 vdisks per host device be enough?

Also, couldn't one buy several EVAs for the price of the other arrays you recommend? Just adding front-end fibre ports approaches the price of another EVA system. Anyway, it is strange that a single small EVA is specified in this config.

rccmum,

To follow up on Jeff's comment, you do have a fairly small configuration - a 2C6D to connect all those systems - and you have only 56 15K drives. If you are not filling out the configuration (240 drives), you get a faster system for the same price by buying 10K drives, since you get so many more of them per dollar. This is mentioned in the best-practices paper.
Can you imagine if we used PCs to manage our enterprise systems? ... oops.
generic_1
Respected Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

Having many small disks in a SAN environment lets the firmware algorithms detect patterns like database reads, so prefetching works better; with larger disks it becomes more likely that unrelated I/O requests hit the same LUN, confusing the software, making the stream look random, and killing your prefetch. Also, with more disks, I/O requests won't stack up on a single initiator trying to reach one or two huge disks/LUNs/virtual disks. The EVA by definition is supposed to move all this around for you, but how efficient it is in reality you'd have to test, or go by the whitepaper.

Why would I suggest something like a DMX? Because you get 15K drives with more and faster cache, directly linked controllers that can share their I/O, a UPS inside the frame, more fibres off the controllers, and higher reliability in practice - and you've probably spent a million dollars on a Superdome. Why cheap out on the biggest bottleneck you have left, the disk?
generic_1
Respected Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

I have seen Fibre Channel disks fail badly on EVA 5000s in ways that caused the controllers to shut down or crash, at several of our sites. It's not pretty. Also, they should really have a built-in UPS with firmware management if they are being used in a large, expensive, highly available system. If you have low availability requirements and low I/O, the EVA will work, but it's not designed to be HP's high-end, high-availability product. It's more mid-tier; NAS would be your low end.

High-end locally attached disks would be ideal because they don't have to share ;). SAN is supposed to bring cost efficiency, ease of management, and virtualization of large disks, although sometimes I wonder about the cost-efficiency part :).

You can get way more fibres and controllers off a DMX or XP even though they cost more, plus better prefetch software, more and faster cache, and, more importantly, more reliability.

I run DMX3000, Symmetrix, and XP1024 on our 7 Domes at my location, with PVLinks.
rccmum
Super Advisor

Re: LUN Distribution Disk Group Striping in EVA 5000


Waiting for smart answers to the questions mentioned!

Any suggestions for putting Oracle 8i on the EVA 5000? We will be migrating from 8i to Oracle 9i later, on the EVA. Can anyone suggest some proactive steps we could take right away to ease that migration at a later stage?
rccmum
Super Advisor

Re: LUN Distribution Disk Group Striping in EVA 5000

Is anybody out there who can add more comments on this thread and my questions?

Waiting for replies..
generic_1
Respected Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

If you assign points on your previous replies, I'm sure you'll see a lot more information :). Click 'assign points' at the bottom if you are not familiar with the forums. I am guessing you are a new member, so welcome.

Cheers, Jeff
Bill Costigan
Honored Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

One caution on LVM striping: I would do extent-based striping, not native striping. If you stripe across 16 drives and then want to add more, you need to add another 16 drives (LUNs), because the stripe routines cannot handle changing the number of drives in a stripe.

If you did extent-based striping, you could start striping across 16 LUNs and later extend the VG by striping across an additional 4 LUNs - roughly as in the sketch below.
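
A minimal sketch of the extent-based (distributed) approach using physical volume groups - the device names are invented, and you should double-check the options against lvcreate(1M) and lvmpvg(4):

# /etc/lvmpvg - define a physical volume group over the LUNs
VG /dev/vgdb
PVG pvg0
/dev/dsk/c10t0d1
/dev/dsk/c10t0d2
/dev/dsk/c10t0d3
/dev/dsk/c10t0d4

# distributed, PVG-strict allocation: extents are laid out
# round-robin across the physical volumes of the PVG
lvcreate -D y -s g -L 16384 -n lvdb /dev/vgdb

Later you should be able to vgextend the VG (adding the new LUNs to the PVG) and lvextend the volume, with the new extents distributed across the larger set.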

Tom O'Toole
Respected Contributor

Re: LUN Distribution Disk Group Striping in EVA 5000

rccmum,

Seems to me you have gotten pretty good answers to all four questions you asked. Could you re-ask, in more detail, the questions you still have? Thanks.

Can you imagine if we used PCs to manage our enterprise systems? ... oops.
rccmum
Super Advisor

Re: LUN Distribution Disk Group Striping in EVA 5000

Guys, I have been eagerly waiting for feedback from the gurus out there on this thread.
It has helped me understand the EVA much more closely...

Thanks to all of you for your comments on this thread; I appreciate your time and effort.

Well, I have a lot more questions on the EVA 5000.
I will be creating new threads for them.