Best Practices on Volume Groups and LUNS
06-14-2007 07:22 AM
I have an HP-UX 11i system that I will be hooking up to an HP EVA4000 SAN: dual-pathed fibre from the HP-UX host to the switch, and from the switch to the SAN. We are moving from an older XIOTECH SAN to the EVA. I am going to allocate about 200 GB to support about 120 GB of data on the SAN. Other platforms will be using the SAN; I just get a piece.
The HP-UX application does mostly smallish I/Os. There are no big databases, just 250-plus users, mostly on telnet sessions, hitting multiple discrete tables. I still hit some fairly high numbers for QLEN and UTIL on the current SAN in glance.
I have a chance here to either ask for several LUNs (5 or 6), as it is now on the older SAN, or just get it all as one LUN.
My question is: does throughput suffer if I do all the I/O through one LUN, is it improved, or is there no difference?
We have been managing the multiple LUNs with one VG per LUN and 1 to 3 LVs per VG, and I can keep doing that, but I would like to stop if there is no throughput advantage.
I frequently get I/O bound now on the older SAN, and glance keeps alternating between disk and CPU as my biggest bottleneck, so I can't sacrifice any throughput for convenience.
Any input will be appreciated.
Dan
06-14-2007 07:47 AM
Re: Best Practices on Volume Groups and LUNS
I tend to choose a middle-of-the-road approach these days; it may not help much (though it certainly did in the past), but it will never hurt.
1) Determine the total needed capacity of the VG; let's pretend 600 GiB.
2) Divide the total capacity into as many equally sized LUNs as you have separate SCSI paths from the host to the array. Let's pretend you have 2, for brevity's sake.
Make LUN0 use primary path A and alternate path B; make LUN1 use primary path B and alternate path A.
3) Now stripe each LVOL in the VG over both LUNs, using a stripe size somewhere in the 64 KiB to 256 KiB range.
That will efficiently spread the I/O from the host to the array while leaving only a small number of LUNs to manage.
NOTE: This still doesn't mean that Glance won't complain. If you divided those 2 LUNs into 16 LUNs, Glance would probably be very happy and things would APPEAR to be great, but you would probably see no significant difference in actual throughput.
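For what it's worth, here is a minimal sketch of how that layout might be built with HP-UX LVM commands. The device files, VG name (vg01), group minor number, stripe size, and LV size are assumptions for illustration only; your actual ctd paths will differ.
# Assumed device files: c4... is path A, c6... is path B; LUN0 and LUN1 are the two array LUNs.
pvcreate /dev/rdsk/c4t0d1                             # LUN0
pvcreate /dev/rdsk/c4t0d2                             # LUN1
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000                   # unique minor number per VG
vgcreate /dev/vg01 /dev/dsk/c4t0d1 /dev/dsk/c6t0d2    # primaries: LUN0 via path A, LUN1 via path B
vgextend /dev/vg01 /dev/dsk/c6t0d1 /dev/dsk/c4t0d2    # alternate links: LUN0 via path B, LUN1 via path A
lvcreate -i 2 -I 128 -L 102400 -n lvol1 /dev/vg01     # 100 GB LV striped over both LUNs, 128 KB stripes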
06-14-2007 07:56 AM (Solution)
Re: Best Practices on Volume Groups and LUNS
/v00 is one filesystem made up from a single 74.125 GB LUN.
/v01 - /v0X are each a 518.875 GB filesystem whose lvol is striped across seven 74.125 GB LUNs.
Yes, we noticed a performance improvement striping at the lvol level as well as on the array.
We connect to EMC DMXes.
Here's part of one of my lvols:
--- Logical volumes ---
LV Name /dev/vg61/lvsapd01
VG Name /dev/vg61
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule striped
LV Size (Mbytes) 531328
Current LE 33208
Allocated PE 33208
Stripes 7
Stripe Size (Kbytes) 128
Bad block on
Allocation strict
IO Timeout (Seconds) default
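For reference, a hedged sketch of the lvcreate invocation that could produce an LV like the one above; it assumes the seven LUNs have already been added to /dev/vg61, and the numbers are taken from the listing:
lvcreate -i 7 -I 128 -L 531328 -n lvsapd01 /dev/vg61   # 7-way stripe, 128 KB stripe size, 531328 MB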
Will you see a performance improvement with 200 GB split into multiple LUNs? Maybe; it's best to try, and if not, just make one large LUN.
Rgds...Geoff
06-14-2007 08:40 AM
Re: Best Practices on Volume Groups and LUNS
1) Does the glance queue length number have any meaning in this case, or are those requests not really queued up? (They do clear pretty quickly at times.)
2) Does the sar wio number (waiting for I/O) have any meaning at all, and can it be related to queue length?
3) My paths from the host to the array are not SCSI; it's all fibre. Does that matter? When I look in SAM, it just says I have 4 H/W paths to the given LUN(s). How much do I have to manage it? I thought LVM alternate links are not supported on EVA disk arrays?
06-14-2007 06:14 PM
Re: Best Practices on Volume Groups and LUNS
> 2) Does the sar wio number (waiting for I/O) have any meaning at all, and can it be related to queue length?
When it comes to SANs, I don't think the sar output is all that relevant. It is better to monitor at the controller level.
Kind regards,
yogeeraj
06-14-2007 06:42 PM
Re: Best Practices on Volume Groups and LUNS
Re your Q3...
The EVA didn't support LVM PVLinks originally, but it has for a couple of years now, so go ahead and add those alternate links to your LUN(s). For best performance, make sure your primary link is the path to the LUN's owning controller.
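As a minimal sketch, assuming vg01 already exists, the LUN is new to it, c4t0d1 is a hypothetical path through the owning controller, and c6t0d1 is the same LUN through the other controller:
pvcreate /dev/rdsk/c4t0d1             # initialize the LUN once, via either path
vgextend /dev/vg01 /dev/dsk/c4t0d1    # first path added becomes the primary link (owning controller)
vgextend /dev/vg01 /dev/dsk/c6t0d1    # same LUN via the other controller becomes the alternate link
vgdisplay -v /dev/vg01                # shows the PV and its alternate link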
HTH
Duncan
I am an HPE Employee

06-14-2007 06:46 PM
Re: Best Practices on Volume Groups and LUNS
It would depend on the following things:
1. Number of physical disks in your EVA.
2. Number of disk groups you have in your EVA.
3. Number of FC HBAs on your HP box.
If you have a larger number of disks (say around 10) in your EVA disk group, then you should create a single RAID5 LUN of 200 GB and export this LUN through both FC ports.
The EVA will internally take care of distributing the LUN for the best performance, since the LUN will be spread across all 10 physical disks, and the extra spindles keep reads and writes from suffering.
Perform the zoning such that the HP box can see this LUN over 2 paths, and create lvols as suggested above.
But if you have fewer disks in a disk group (fewer than 5), then you should consider creating more than one RAID5 LUN, with each LUN coming from a different disk group.
Hope that helps.
-Suraj
06-14-2007 06:52 PM
Re: Best Practices on Volume Groups and LUNS
You wrote:
"How much do I have to manage it? I thought LVM alternate links are not supported on EVA disk arrays?"
The old releases of the EVA used active/passive configurations, so PVlinks were not supported and the Secure Path software was needed.
The EVA has been active/active for several years now (the EVA4000 from the beginning), so there is no need for Secure Path and PVlinks are supported.
Doing as Clay suggested and using multiple LUNs accessed over different paths will give you "load balancing" even with PVlinks.
Hope this helps!
Regards
Torsten.
__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.
__________________________________________________
No support by private messages. Please ask the forum!
If you feel this was helpful please click the KUDOS! thumb below!

06-15-2007 01:26 AM
Re: Best Practices on Volume Groups and LUNS
Points assigned.
I'll have to learn more about setting up multiple paths. Another topic.
Closing thread.
Dan
06-15-2007 01:30 AM
Re: Best Practices on Volume Groups and LUNS
You asked ...
2) Does the sar wio number (waiting for I/O) have any meaning at all, and can it be related to queue length?
The wio metric is a host-based metric, so it has *some* relevance no matter what storage is used. It is a subset of idle CPU: if a CPU is idle but at least one process is waiting for I/O, then that measurement interval is recorded as wio%.
A high wio% does not necessarily mean there is host or array queueing; there could just be a lot of processes doing I/O (perhaps slowly).
The sar metrics avque and avwait are direct measures of host-based queueing, and they are accurate.
I usually look at wio% over time to detect changes in workload or I/O performance.
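To make that concrete, here is a quick way to watch both sets of metrics from the host; the interval and count are just examples:
sar -u 5 12    # CPU breakdown including %wio, sampled every 5 seconds, 12 samples
sar -d 5 12    # per-device stats: avque (average queue length), avwait and avserv in ms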
Ken Johnson