07-14-2005 01:35 AM
LUN Distribution Disk Group Striping in EVA 5000
I am in the process of implementing an EVA 5000 2C6D storage box, to be connected to SuperDome, L-class, and N-class servers (10 servers max.) running HP-UX 11.00 and 11.11, with Secure Path V3.0E, VCS 3.0D, Business Copy EVA, 56 x 72 GB 15K disks, and 2 SAN switches. I would appreciate your replies on some of the following questions.
1. How would one decide on an equal distribution of LUNs across the two HSV110 controllers for load balancing?
2. Does a LUN get load balanced by default across the two front-end ports FP1 and FP2?
If so, how would one know about it, maybe through Secure Path?
3. I doubt we will get the best performance from the EVA 5000 if we keep the SCSI queue depth at 8 (the HP-UX default). What would be an optimal queue depth for EVA LUNs in general, or maybe a rule of thumb, without compromising LUN performance or overloading the EVA? (We are certainly not going to have anywhere near 512 LUNs.)
4. Is it OK to have host-based striping, i.e. LVM striping, on vdisks from different DGs on the EVA? Would it justify having more than one DG against the single group recommended by EVA best practices? Any performance gain or penalty?
(We opted for only two DGs, though.)
07-14-2005 01:49 AM
Re: LUN Distribution Disk Group Striping in EVA 5000
1. Load balance across HSV110 controllers by selecting preferred path ('A' or 'B') on alternating vdisks. That way each controller will be servicing requests from the same number of vdisks.
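That alternating assignment can be sketched as a quick shell loop (the vdisk names here are made up; the actual preferred-path setting is applied per vdisk in Command View EVA):

```shell
# Alternate the preferred controller (A/B) across vdisks so each
# HSV110 ends up managing half of them. Names are illustrative only;
# apply the real setting per vdisk in Command View EVA.
i=0
for vd in vd01 vd02 vd03 vd04; do
    if [ $((i % 2)) -eq 0 ]; then ctrl=A; else ctrl=B; fi
    echo "$vd: preferred controller $ctrl"
    i=$((i + 1))
done
```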
2. Yes, access to vdisks is always load balanced across both ports of a given HSV110.
3. No specific comment on HP-UX settings - not my area of expertise - but in general an EVA takes good advantage of OS queue depths higher than most OS defaults.
4. I doubt host-based striping across DGs will gain you much unless your spindle count in each DG is very low. In a config like yours the two reasons I might go with two DGs would be to separate sequential and random I/O streams; or to keep log files separate from a database when one is being really fussy about availability.
07-14-2005 02:45 AM
Re: LUN Distribution Disk Group Striping in EVA 5000
2.) No, I do not agree. An EVA controller allows a host to send I/O requests to a single virtual disk concurrently via FP1 and FP2, but if the host uses only one path, there is no balancing! It is the host's responsibility to balance.
3.) Unfortunately, there is no 'optimal' setting. In the unlikely case of single, synchronous I/Os you would not even need a queue depth, because there would be only one I/O in flight at a time.
The EVA supports up to 2048 outstanding I/Os on a single port, but I would not design the solution that close to the limit. If one controller goes down, the remaining one must be able to cope with its I/Os, too.
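As a rough back-of-the-envelope check against that 2048-per-port figure (all numbers below are illustrative, not a recommendation):

```shell
# Worst-case outstanding I/Os if every LUN fills its queue at once:
# 10 hosts x 8 LUNs each x queue depth 16 (illustrative figures).
HOSTS=10
LUNS_PER_HOST=8
QDEPTH=16
TOTAL=$((HOSTS * LUNS_PER_HOST * QDEPTH))
echo "worst case: $TOTAL outstanding I/Os vs 2048 per port"
```

If that total sits well under the port limit even with one controller lost, the queue depth leaves some safety margin.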
4.) Remember that each disk group has its own need for distributed sparing. On the other hand, I agree with Mark that keeping a database and its journals on different DGs might provide a little help should one DG go down (I have never seen a single DG go down yet).
07-14-2005 03:00 AM
Re: LUN Distribution Disk Group Striping in EVA 5000
Mark (or anyone),
Do you know if a given LUN will only use one path at a time (even with multiple hosts accessing), or if, for example, two different hosts can access the same vdisk using both ports on the controller serving that LUN? Is there any documentation on this?
I believe with HSG80 storage only one of the two paths per controller is active for each LUN. Thanks.
p.s.
rccmum,
Can't give you a set number, but you will definitely want to raise your SCSI queue depth above 8 to get the best performance out of an EVA. Performance should go up until you hit 1-2 outstanding I/Os per physical disk, if your app and system can provide it. The EVA has a (large) limit on the number of outstanding I/Os (I think it is 2048 per controller port, but don't quote me on it).
On host-based striping across groups: with striping, if you lose either of the disk groups, you're hosed. The only reason I can think of to be striping is to have LUNs larger than 2 TB, and you would be better off striping within the same group, because then you are not compounding your failure modes.
07-14-2005 03:05 AM
Re: LUN Distribution Disk Group Striping in EVA 5000
Well, I don't know if it's the host or the controller doing it - I'll bow to Uwe's wisdom that it's the host. What I do know is that when doing some benchmark testing to a single vdisk I see roughly the same amount of activity on both ports (by looking at switch port counters) of the controller serving the disk.
07-14-2005 05:06 AM
Re: LUN Distribution Disk Group Striping in EVA 5000
You _can_ do I/O to an EVA virtual disk through both ports of a controller, no matter whether from one host or multiple hosts. You can also do this through both ports of an HSG controller if it operates in multibus-failover mode; I have seen it on Tru64 Unix, for example.
Tru64 Unix uses all active paths. Last time I checked, OpenVMS does not.
On hosts with Secure Path installed, it depends on whether you have enabled 'load balancing'. But do not confuse this load balancing of I/Os between active controller ports with distributing the management of multiple virtual disks between both controllers by setting a preferred path.
07-14-2005 06:11 AM
Re: LUN Distribution Disk Group Striping in EVA 5000
1. If the preferred path (either A or B) is set in Command View EVA, I guess load balancing would be achieved at the EVA HSV controller level. The same should then be done at the level of the OS accessing the LUN, so that we have complete load balancing from the OS and EVA perspectives as a whole. One has to maintain sufficient paths between the two through HBAs and Secure Path with the preferred path option. I am not sure about turning on the load balancing option in Secure Path and its performance effects combined with the above option.
Moreover, Secure Path will take care of switching to alternate paths in case of an HBA or controller failure. Am I right?
2. I do agree on having two paths from host to controller so that a LUN can get load balanced across both FP1 and FP2.
3. I believe that a queue depth of 8 is not sufficient at all for storage like the EVA - it may well be underutilising it. I think the 2048 outstanding I/Os mentioned are nothing but the queue depth, which is 2048 per EVA port. Hence increasing the HP-UX queue depth in increments of 16, 32, ... would depend on the need. I would consider changing it to 16 first.
I am not sure whether this increase in queue depth would be honoured by the Secure Path virtual device. I believe it should be!
How do I ensure this at the HBA level? Is there anything I can do at the HBA level for better throughput?
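On HP-UX 11.x the per-LUN queue depth can be changed with scsictl; a minimal sketch, with hypothetical device paths, printed as a dry run (drop the echo to actually apply it):

```shell
# Print the scsictl commands that would set the queue depth per LUN.
# Device paths are hypothetical; substitute your own EVA device files.
# This is a dry run -- remove the echo to apply the setting.
QDEPTH=16
for dev in /dev/rdsk/c4t0d1 /dev/rdsk/c4t0d2; do
    echo scsictl -m queue_depth=$QDEPTH "$dev"
done
```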
4. I was thinking about leveraging the performance of the EVA (which it gives optimally with only one DG) with LVM-based striping of vdisks across two DGs. But I fear the consequences if this striping breaks: if we have two vdisks with Vraid1 and LVM striping on top of them, it may be a bad idea. Any views here?
Our ultimate goal is to have sufficient redundancy and optimal performance using the EVA.
Can anyone out there who has used double striping with the EVA shed more light on this?
07-14-2005 10:16 AM
Re: LUN Distribution Disk Group Striping in EVA 5000
I would load the Secure Path for EVA product even though HP-UX has PV links.
Don't chop your EVA into many disk groups; that will kill your performance.
Use single-initiator zones, standard.
Carve your LUNs/virtual disks up as small as possible and all the same size, keeping in mind that you don't want to run into the 0-255 disk limit in your volume groups when planning your disk size. Multiple disks will keep the number of initiators up, which is good to keep your I/O from bottlenecking.
Stripe your logical volumes with lvcreate -L <size_in_MB> -i <number_of_disks> -I 1024, where -I is the stripe size in KB.
This will stripe your data across as many disks as possible :).
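For example, as a dry run (volume group, LV names, sizes, and stripe count are all hypothetical; check lvcreate(1M) on your HP-UX release for the valid -I range):

```shell
# Dry run: print lvcreate commands for several equal-size striped LVs
# (4 GB each, striped across 4 LUNs, 1024 KB stripe size).
# All names and numbers are illustrative -- remove the echo to apply.
VG=/dev/vgeva01
for lv in lvdata01 lvdata02 lvdata03; do
    echo lvcreate -L 4096 -i 4 -I 1024 -n "$lv" "$VG"
done
```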
The main idea here is to use as many initiators and as many disks as possible. You do not want your disks to be the bottle neck.
Make sure this thing is on monitoring, and see if you can keep a few disks on hand; HP has had some stocking issues lately. The EVAs I have like to munch disks.
Also, I had posted a question about the disk enclosures. If this is new, I would make sure that you have the newer version, or demand it. I have seen bad disks bring an EVA down on these older setups. Not pretty.
07-14-2005 03:08 PM
Re: LUN Distribution Disk Group Striping in EVA 5000
VCS V3 has an array-wide limit of 512 virtual disks, but snapshots are counted against it, too.
If you want more initiators, then you need more Fibre Channel adapters - otherwise I don't understand what you mean by 'initiator'.
07-14-2005 11:16 PM
Re: LUN Distribution Disk Group Striping in EVA 5000
Did you see this limitation of the EVA documented somewhere for use with SuperDome?
What sort of problems did you face with the disk enclosures?
I do agree with you about having small and equal-sized vdisks to reduce I/O contention.
But I wonder how far the EVA is smart enough to handle small vdisks efficiently.
Uwe,
I guess you are the EVA expert here! Can you shed more light on my concerns in my second message and here?