
Liviu I.
Frequent Advisor

Multiple LUNs vs single LUN in HPUX 11iv3

Hello,

Considering an EVA 8100 (or any EVA with active-active capability) and an HP-UX 11i v3 server, what would be the best design principle to maximize storage access performance (IOPS and bandwidth): one 500 GB vdisk (LVM: 1 PV in 1 VG, with one or more LVs) or five 100 GB vdisks (LVM: 5 PVs in 1 VG, with one or more LVs)?

Since the server has 2 dual-port HBAs (dual-fabric SAN), the OS supports native multipathing and the storage is active-active, I think the best approach is to exploit parallelism at every level when multiple I/O operations are in flight, so I would adopt the latter ...
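To make the comparison concrete, this is roughly how I picture the two layouts with LVM (only a sketch; the disk numbers, VG/LV names and the -s/-e/-I values are placeholders, not my real configuration, and would need adjusting to the actual PV sizes):

    # Option A: one 500 GB vdisk = one PV
    mkdir /dev/vgdata
    mknod /dev/vgdata/group c 64 0x020000     # version 1.0 VG group file, unique minor per VG
    pvcreate /dev/rdisk/disk10
    vgcreate -s 32 -e 16000 /dev/vgdata /dev/disk/disk10   # larger PE size / max PE for a big PV
    lvcreate -L 460800 -n lvdata /dev/vgdata                # one 450 GB LV

    # Option B: five 100 GB vdisks = five PVs in one VG
    for d in 11 12 13 14 15
    do
        pvcreate /dev/rdisk/disk$d
    done
    vgcreate -s 32 -e 16000 /dev/vgdata /dev/disk/disk11 /dev/disk/disk12 \
        /dev/disk/disk13 /dev/disk/disk14 /dev/disk/disk15
    lvcreate -i 5 -I 64 -L 460800 -n lvdata /dev/vgdata     # LV striped across the 5 PVs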

I would appreciate any opinion or suggestion, and possibly documentation.

Thank you
L.
Stephan._1
Trusted Contributor

Re: Multiple LUNs vs single LUN in HPUX 11iv3

Hi,
I would use multiple LUNs and paths to get the benefit of LVM striping and of using both controllers/caches.

Remember that the EVA uses multiples of 8 for expansion, and the good old binary system.

For example, 4 paths and 8 disks of 64 GB each sounds nice.
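To check that each LUN really comes in over all 4 paths under the 11i v3 native multipathing, something like this (just a sketch, disk10 is a placeholder):

    ioscan -m lun /dev/disk/disk10      # lists the LUN and all its lunpaths (should show 4 here)
    ioscan -m dsf /dev/disk/disk10      # maps the agile DSF to the legacy device files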

Some Hands on for the EVA:
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA0-2787ENW.pdf
Share what you know, learn what you don't.
Zinky
Honored Contributor

Re: Multiple LUNs vs single LUN in HPUX 11iv3

Liviu,

If said disk is to be used under ASM in an Oracle environment, I would suggest going for, say, 5 x 100 GB vdisks, with the preferred path alternating between the A and B controllers. That way you fully harness the bandwidth to your EVA and both controllers.
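On the 11i v3 host side you can also check or adjust how the native multipathing spreads I/O over the paths, roughly like the below (sketch only, the device file is made up; check the scsimgr man page for the policies valid on your box):

    scsimgr get_attr -D /dev/rdisk/disk10 -a load_bal_policy                   # show the current path load-balancing policy
    scsimgr set_attr -D /dev/rdisk/disk10 -a load_bal_policy=least_cmd_load    # example: switch the policy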

But you can also do a comparative performance test, since it is quite EASY to collapse and re-carve vdisks (LUNs) on an EVA.

I suggest you use Oracle's ORION tool, which is very easy to use, generates comparable I/O and even simulates the striping that happens in ASM. The tool is at:

http://www.oracle.com/technology/software/tech/orion/index.html


Let us know your results.
Hakuna Matata

Favourite Toy:
AMD Athlon II X6 1090T 6-core, 16GB RAM, 12TB ZFS RAIDZ-2 Storage. Linux Centos 5.6 running KVM Hypervisor. Virtual Machines: Ubuntu, Mint, Solaris 10, Windows 7 Professional, Windows XP Pro, Windows Server 2008R2, DOS 6.22, OpenFiler
Liviu I.
Frequent Advisor

Re: Multiple LUNs vs single LUN in HPUX 11iv3

Thank you for your ideas.
Well, unfortunately ORION testing is not an option for now, since I am talking about a production environment (and the latest tests I ran, with IOmeter, made the storage unresponsive). ASM is not used at the moment either. But I certainly appreciate the idea.
After reading the HP whitepaper Stephan suggested, I drew a conclusion, and I would appreciate any comments on it. Page 22 (controller balancing influences performance): I would split the disk group into 5 vdisks (LUNs) in order to have flexibility in balancing the workload. But the following sentence can wreck all your plans:

"HP Continuous Access EVA requires that all LUNs within a DR group be owned by the same controller. Load balancing is performed at the DR group level and not the individual LUN. Additional configuration information can be found in the HP StorageWorks Continuous Access EVA implementation guide."

Thus the flexibility is reduced when using Continuous Access.
As an example: if one has 2 applications, each needing 500 GB of space, and a DR group for each, then it makes no sense to split each 500 GB into 5 LUNs, since all of them will be owned by the same controller.
The whitepaper also notes that there is no performance improvement from splitting on the same controller:

Striping LUNs within a disk group on the same controller provides no additional performance value. The EVA automatically stripes each LUN across all disks in a disk group.

Since the discussion has somewhat turned into a storage design issue (well, the two are very much related, and I'd be glad to have members here with experience in both domains), I would like to ask what you think about my question from the LVM point of view. I am referring to your experience in balancing filesystem access for better performance. Logical volumes striped or not?

Thank you.
L.
Stephan._1
Trusted Contributor

Re: Multiple LUNs vs single LUN in HPUX 11iv3

Hi,
in my opinion you will only get a final answer by testing and comparing the results.

Following the whitepaper, splitting and striping doesn't make sense, so the only reason to use LVM this way would be the flexibility if you need to change it later (increase, ...).
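For example, growing later would be something like this (a sketch only, names and sizes made up):

    vgextend /dev/vgdata /dev/disk/disk16      # add another vdisk (PV) to the volume group
    lvextend -L 560000 /dev/vgdata/lvdata      # grow a (non-striped) LV to 560000 MB total
    # then grow the filesystem on top, e.g. online with OnlineJFS (fsadm) or offline with extendfs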

In our environment we use the EVAs as described above, even though, according to the whitepaper, we get no benefit from it.

If you look at http://communities.vmware.com/thread/73745 - these guys tested a lot of EVAs with IOmeter without any problems, so perhaps give it a second try.
Share what you know, learn what you don't.
Liviu I.
Frequent Advisor

Re: Multiple LUNs vs single LUN in HPUX 11iv3

Well, LVM is a must nowadays, especially in a 24x7 mission-critical environment. And since we use LVM, I picture the I/O (from the OS perspective) to a single LUN (one PV) as a sequential stream, whereas with multiple LUNs (multiple PVs in a VG) I picture it as parallel streams (especially if the LV is striped).
Am I right, or is it just my imagination?

Thank you.
L.
Wim Rombauts
Honored Contributor

Re: Multiple LUNs vs single LUN in HPUX 11iv3

My experience is limited to HP-UX 11i v2; I have no v3 running here yet.
I also use LVM, multipathing, mirroring, ... and noticed that HP-UX has an I/O queue per LUN.

So, if you have 1 big LUN of 500 GB, all I/O is done sequentially through that single queue. If you have 5 LUNs of 100 GB, 5 I/Os can be done in parallel. So, in theory, you can flush your I/O to the EVA's NVRAM 5 times faster with 5 LUNs than with 1 LUN, provided that your I/O card is not a bottleneck and the EVA's disks can keep up with the throughput.
Reading is the same: 5 reads can be done in parallel. But at the bottom line, the EVA has to get the data from the same set of disks, so if the EVA itself is heavily loaded, it probably won't make any difference.
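You can watch this queueing from the host with sar, and on v3 you could also query the per-LUN queue depth (disk10 below is just a placeholder):

    sar -d 5 3                                              # per-device avque/avwait: one busy LUN shows the queue building up
    scsimgr get_attr -D /dev/rdisk/disk10 -a max_q_depth    # 11i v3 only: per-LUN queue depth limit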
Liviu I.
Frequent Advisor

Re: Multiple LUNs vs single LUN in HPUX 11iv3

Thank you for your reply; you confirm what I thought. I can conclude that it's better to use as much parallelism as possible: in the worst case, the bottleneck will be somewhere else (SAN, EVA, etc.), but from the point of view of the HP-UX admin's design, it should be the best architecture one could achieve.
Liviu.
Zinky
Honored Contributor
Solution

Re: Multiple LUNs vs single LUN in HPUX 11iv3

L - the following may provide ye with good reading.

http://www.oracle.com/technology/deploy/performance/pdf/EVA_ORACLE_paper.pdf
Hakuna Matata

Favourite Toy:
AMD Athlon II X6 1090T 6-core, 16GB RAM, 12TB ZFS RAIDZ-2 Storage. Linux Centos 5.6 running KVM Hypervisor. Virtual Machines: Ubuntu, Mint, Solaris 10, Windows 7 Professional, Windows XP Pro, Windows Server 2008R2, DOS 6.22, OpenFiler
Liviu I.
Frequent Advisor

Re: Multiple LUNs vs single LUN in HPUX 11iv3

Now this is a great paper, although a little old. It would be great to find one covering newer versions of HP-UX, Oracle and the EVA. But it comes as a confirmation of what I almost knew (or guessed), and it can possibly be used in a real-world design scenario.

Anyway, I still have to understand the LVM stripe-size mechanics and how to fine-tune the stripe size for different data types.
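As a starting point (just a sketch, names are placeholders), I can at least check what an existing LV uses and experiment from there:

    lvdisplay -v /dev/vgdata/lvdata | grep -i stripe     # shows "Stripes" and "Stripe Size (Kbytes)"
    # stripe size is fixed at creation time, e.g.:
    #   lvcreate -i 5 -I 64 -L 460800 -n lvdata /dev/vgdata
    # 64 KB is only an example value, to be benchmarked against the application I/O size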

L.