Operating System - HP-UX

Best Practices on Volume Groups and LUNS

 
SOLVED
Dan Maschmeier_1
Occasional Advisor

Best Practices on Volume Groups and LUNS

Hello all,
I have an HP-UX 11i server that I will be hooking up to an HP EVA4000 SAN, dual-pathed fibre from the host to the switch and from the switch to the SAN. We are moving from an older Xiotech SAN to the EVA. I am going to allocate about 200 GB to support about 120 GB of data on the SAN. Other platforms will also use the SAN; I just get a piece.
The HP-UX application is mostly smallish I/Os: no big databases, 250-plus users mostly on telnet sessions hitting multiple discrete tables. I still hit some fairly high numbers for QLEN and UTIL on the current SAN in Glance.
I have a chance here either to ask for several LUNs (5 or 6), as it is now on the older SAN, or to get it all as one LUN.

My question is: Does throughput suffer if I do all the I/O through one LUN, or is it improved, or is there no difference?

We have been managing the multiple LUNs with one VG per LUN and 1 to 3 LVs per VG, and I can keep doing that, but I would like to stop if there is no throughput advantage.
I frequently get I/O bound now on the older SAN, and Glance keeps alternating between disk and CPU as my biggest bottleneck, so I can't sacrifice any throughput for convenience.
Any input will be appreciated.

Dan
12 REPLIES
A. Clay Stephenson
Acclaimed Contributor

Re: Best Practices on Volume Groups and LUNS

First, Glance and any other host-based performance tool can be very misleading about the %busy values for array LUNs; all it can really know is that a lot of I/O is going to and from what it considers to be one physical disk, which may in fact be composed of 10 physical disks. In short, Glance may make your disk devices APPEAR to be severely loaded.

I tend to choose a middle-of-the-road approach these days; it may not help much (though it certainly did in the past), but it will never hurt.

1) Determine the total needed capacity of the VG; let's pretend 600 GiB.
2) Divide the total capacity into as many equally sized LUNs as you have separate SCSI paths from the host to the array. Let's pretend you have 2 for brevity's sake.
Make LUN0 use primary path A, alternate path B; make LUN1 use primary path B, alternate path A.
3) Now stripe each LVOL in the VG over both LUNs using a stripe size somewhere in the 64 KiB-256 KiB range.

That will efficiently spread the I/O from the host to the array while leaving only a small number of LUNs to manage.
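
To make that concrete, here is a minimal sketch of the LVM commands involved, assuming two hypothetical LUNs that appear as /dev/dsk/c4t0d1 and /dev/dsk/c6t0d2 on their primary paths and as /dev/dsk/c6t0d1 and /dev/dsk/c4t0d2 on their alternates (your device files, VG name and sizes will differ):

# create the VG control file and initialize the two LUNs
mkdir /dev/vgsan
mknod /dev/vgsan/group c 64 0x010000
pvcreate /dev/rdsk/c4t0d1
pvcreate /dev/rdsk/c6t0d2

# primary paths: LUN0 via path A (c4), LUN1 via path B (c6)
vgcreate /dev/vgsan /dev/dsk/c4t0d1 /dev/dsk/c6t0d2

# alternate links: the same LUNs seen through the other path
vgextend /dev/vgsan /dev/dsk/c6t0d1 /dev/dsk/c4t0d2

# a 100 GB LVOL striped across both LUNs with a 128 KB stripe size
lvcreate -i 2 -I 128 -L 102400 -n lvol1 /dev/vgsan

With PVLinks, the path a PV is first added on becomes its primary link and paths added later with vgextend become alternates, which is what implements the "LUN0 primary A / LUN1 primary B" split above.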

NOTE: This still doesn't mean that Glance won't complain; if you divided those 2 LUNs into 16 LUNs, then Glance would probably be very happy and things would APPEAR to be great, but you would probably see no significant difference in actual throughput.
If it ain't broke, I can fix that.
Court Campbell
Honored Contributor
Solution

Re: Best Practices on Volume Groups and LUNS

My two cents is to just have one large RAID5 LUN on the EVA. Even though it will look like physical I/O to the server, you will probably get a lot of requests filled from the EVA's cache. I still see some people suggest creating a number of small LUNs and distributing the extents via PVGs, but I can't say that I have seen much more throughput by doing this. Other than that I can only suggest reading through the responses and making an educated decision.
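
For reference, the PVG-based extent distribution mentioned above is configured with an /etc/lvmpvg file plus lvcreate's distributed, PVG-strict allocation options; a rough sketch, with hypothetical VG and device names:

# /etc/lvmpvg: one physical volume group per LUN (or per path/controller)
VG   /dev/vgsan
PVG  PVG0
/dev/dsk/c4t0d1
PVG  PVG1
/dev/dsk/c6t0d2

# distributed (-D y), PVG-strict (-s g) allocation spreads extents round-robin across the PVGs
lvcreate -D y -s g -L 102400 -n lvol2 /dev/vgsan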
"The difference between me and you? I will read the man page." and "Respect the hat." and "You could just do a search on ITRC, you don't need to start a thread on a topic that's been answered 100 times already." Oh, and "What. no points???"
Geoff Wild
Honored Contributor

Re: Best Practices on Volume Groups and LUNS

Our current setup for an Oracle DB server is this:

/v00 is one filesystem made up from one 74.125 GB LUN.

/v01 - /v0X are each 518.875 GB filesystems whose lvol is made up of 7 striped 74.125 GB LUNs.

Yes, we noticed a performance improvement striping at the lvol level as well as on the array.

We connect to EMC DMXes.

Here's part of one of my lvols:

--- Logical volumes ---
LV Name /dev/vg61/lvsapd01
VG Name /dev/vg61
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule striped
LV Size (Mbytes) 531328
Current LE 33208
Allocated PE 33208
Stripes 7
Stripe Size (Kbytes) 128
Bad block on
Allocation strict
IO Timeout (Seconds) default
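
For what it's worth, an lvol with that geometry would be created with something along these lines (a sketch, not the actual command used; the size and names are taken from the listing above):

# 7-way stripe, 128 KB stripe size, 531328 MB
lvcreate -i 7 -I 128 -L 531328 -n lvsapd01 /dev/vg61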


Will you see a performance improvement with 200 GB split into multiple LUNs? Maybe. Best to try, and if not, just make one large LUN.

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Court Campbell
Honored Contributor

Re: Best Practices on Volume Groups and LUNS

Oh, and you might also want to look into getting Secure Path and setting its balancing policy to least I/O across controllers. That way you can get away from managing PVLinks.
"The difference between me and you? I will read the man page." and "Respect the hat." and "You could just do a search on ITRC, you don't need to start a thread on a topic that's been answered 100 times already." Oh, and "What. no points???"
Dan Maschmeier_1
Occasional Advisor

Re: Best Practices on Volume Groups and LUNS

Questions.
1) Does the Glance queue length number have any meaning in this case, or are those requests not really queued up? (They do clear pretty quickly at times.)

2) Does the sar number wio (waiting for I/O) have any meaning at all, and can it be related to queue length?

3) My paths are not SCSI from the host to the array; it's all fibre. Does that matter? When I look in SAM, it just says I have 4 H/W paths to the LUN(s) given. How much do I have to manage it? I thought LVM alternate links were not supported on EVA disk arrays?


Yogeeraj_1
Honored Contributor

Re: Best Practices on Volume Groups and LUNS

hi Dan,

> 2) Does the sar number wio (waiting for I/O) have any meaning at all, and can it be related to queue length?

When it comes to SANs, I don't think the sar output is very relevant. Better to monitor at the controller level.

kind regards
yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave. (Calvin Coolidge)

Re: Best Practices on Volume Groups and LUNS

Hi Dan,

Re your Q3...

The EVA didn't use to support LVM PVLinks, but it has for a couple of years now, so go ahead and add those alternate links to your LUN(s). Make sure your primary link is the path to the LUN's owning controller for best performance.
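
Adding the alternate link is just a vgextend of the same LUN's device file on the second path; a small sketch with hypothetical names (vgdisplay -v should then show that path flagged as an alternate link):

# primary path already in the VG as /dev/dsk/c4t0d1
vgextend /dev/vgsan /dev/dsk/c6t0d1
vgdisplay -v /dev/vgsan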

HTH

Duncan

I am an HPE Employee
Suraj Singh_1
Trusted Contributor

Re: Best Practices on Volume Groups and LUNS

Well, the answer to your query is "it depends".

It would depend on the following things:
1. The number of physical disks in your EVA.
2. The number of disk groups you have in your EVA.
3. The number of FC HBAs on your HP box.

If you have a fair number of disks (say around 10) in your disk group on the EVA, then you should create a single RAID5 LUN of 200 GB and present this LUN through both FC ports.
The EVA will internally take care of distributing the LUN so that it gives the best performance, since the LUN will be spread across 10 physical disks and the extra spindles keep reads and writes from interfering with each other.

Perform the zoning such that the HP box is able to see this LUN from 2 paths, and create lvols as suggested above.

But if you have fewer disks in a disk group (less than 5), then you should consider creating more than one RAID5 LUN, with each LUN coming from a different disk group.
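
Whichever layout you pick, once the LUN(s) are presented and zoned, the host-side check is the usual device scan; a short sketch (output will vary):

# rescan; each LUN should show up once per path
ioscan -fnC disk
# create device files for any newly discovered paths
insf -e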

Hope that helps.
-Suraj
What we cannot speak about we must pass over in silence.
Torsten.
Acclaimed Contributor

Re: Best Practices on Volume Groups and LUNS

Hi Dan,

you wrote
"How much do I have to manage it? I thought LVM alternate links are not supported on EVA disk arrays?"

The old releases of the EVA used active/passive configurations, so PVLinks were not supported and the Secure Path software was needed.

For several years now the EVA has been active/active (the EVA4000 from the beginning), so there is no need for Secure Path and PVLinks are supported.

Doing as Clay suggested and using multiple LUNs accessed over different paths will give you "load balancing" even with PVLinks.

Hope this helps!
Regards
Torsten.

__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.

__________________________________________________
No support by private messages. Please ask the forum!

If you feel this was helpful please click the KUDOS! thumb below!   
Dan Maschmeier_1
Occasional Advisor

Re: Best Practices on Volume Groups and LUNS

Thanks to all for the responses.
Points assigned.
I'll have to learn more about setting up the multiple paths. Another topic.
Closing thread.

Dan
Dan Maschmeier_1
Occasional Advisor

Re: Best Practices on Volume Groups and LUNS

Closing my first thread; I didn't know it would ask for a parting comment when I made the one above.
kenj_2
Advisor

Re: Best Practices on Volume Groups and LUNS

Hi Dan -

You asked ...

2) Does the sar number wio (waiting for I/O) have any meaning at all, and can it be related to queue length?

The wio metric is a host-based metric, so it has *some* relevance no matter what storage is used. It is a subset of idle CPU: if a CPU is idle but there is at least one process waiting for I/O, then that measurement interval is recorded as %wio.

A high %wio does not necessarily mean there is host or array queueing; there could just be a lot of processes doing I/O (perhaps slowly).

The sar metrics avque and avwait are direct measures of host-based queueing. They are accurate.

I usually look at %wio over time to detect changes in workload or I/O performance.
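
For reference, those numbers come from the standard sar options; for example (interval and count are arbitrary):

# per-device statistics including avque, avwait and avserv: 12 samples, 5 seconds apart
sar -d 5 12
# CPU utilization including %wio
sar -u 5 12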

Ken Johnson