Multiple LUNs vs single LUN in HPUX 11iv3
08-11-2009 10:42 PM
Considering an EVA 8100 (or any EVA with active-active capability) and an HP-UX 11iv3 server, what would be the better design principle to maximize storage access performance (IOPS and bandwidth): one vdisk of 500 GB (LVM: 1 PV in 1 VG with single or multiple LVs) or 5 vdisks of 100 GB (LVM: 5 PVs in 1 VG with single or multiple LVs)?
Since the server has two dual-port HBAs (dual-fabric SAN), the OS supports native multipathing, and the array is active-active, I think the best approach is to exploit parallelism at every level for concurrent I/O operations, so I would adopt the latter (a rough sketch of that layout follows).
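As a minimal sketch of the 5-vdisk variant, assuming HP-UX 11i v3 agile device files (the disk numbers, the VG name vgdata and the group-file minor number are made-up examples, not an actual configuration):
# initialize each 100 GB vdisk as an LVM physical volume
pvcreate /dev/rdisk/disk10
pvcreate /dev/rdisk/disk11
pvcreate /dev/rdisk/disk12
pvcreate /dev/rdisk/disk13
pvcreate /dev/rdisk/disk14
# create the VG device file (minor number must be unique per VG) and the VG on all 5 PVs
mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x020000
vgcreate /dev/vgdata /dev/disk/disk10 /dev/disk/disk11 /dev/disk/disk12 /dev/disk/disk13 /dev/disk/disk14
The 1 x 500 GB variant is the same with a single pvcreate and a single PV on the vgcreate line.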
I would appreciate any opinions/suggestions and, possibly, documentation.
Thank you
L.
Solved!
08-11-2009 11:22 PM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
I would use multiple paths and disks to get the benefit of LVM striping and to use both controllers/caches.
Remember that the EVA uses multiples of 8 for expansion and the good old binary system.
For example, 4 paths and 8 disks of 64 GB each sounds nice (a hypothetical striping example follows).
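Purely as an illustration of the LVM striping mentioned above (the volume group, LV name, size and stripe size are hypothetical), a logical volume striped over 8 PVs could look like this:
# stripe the LV over all 8 PVs, 64 KB stripe size, size given in MB
lvcreate -i 8 -I 64 -L 409600 -n lvol_stripe /dev/vgdata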
Some hands-on material for the EVA:
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA0-2787ENW.pdf
08-12-2009 10:38 AM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
If the disk is to be used under ASM in an Oracle setup, I would suggest, say, 5 x 100 GB vdisks with the path preference alternating between the A and B controllers. That way you fully harness the bandwidth to your EVA and the dual controllers.
But you can also do a comparative performance test, since it is quite EASY to collapse and recarve vdisks (LUNs) on an EVA.
I suggest you use Oracle's ORION tool, which is very easy to use and simulates comparative I/O and even the striping that happens in ASM. The tool is at:
http://www.oracle.com/technology/software/tech/orion/index.html
Let us know your results.
Favourite Toy:
AMD Athlon II X6 1090T 6-core, 16GB RAM, 12TB ZFS RAIDZ-2 Storage. Linux Centos 5.6 running KVM Hypervisor. Virtual Machines: Ubuntu, Mint, Solaris 10, Windows 7 Professional, Windows XP Pro, Windows Server 2008R2, DOS 6.22, OpenFiler
08-13-2009 12:08 AM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
Well, unfortunately ORION testing is not an option so far, since I am talking about a production environment (and the latest tests I ran with IOmeter made the storage unresponsive). ASM is also not in use at the moment. But I certainly appreciate your idea.
After reading the HP whitepaper svo suggested, I drew a conclusion, and I would appreciate any comments on it. Page 22 (controller balancing influences performance): I would split the disk group into 5 vdisks (LUNs) in order to have flexibility in balancing the workload. But the following sentence can invalidate such plans:
"HP Continuous Access EVA requires that all LUNs within a DR group be owned by the same controller. Load balancing is performed at the DR group level and not the individual LUN. Additional configuration information can be found in the HP StorageWorks Continuous Access EVA implementation guide."
Thus the flexibility is reduced when using Continuous Access.
As an example: if one has 2 applications, each needing 500 GB of space, and a DR group for each, then it makes no sense to split each 500 GB into 5 LUNs, since all of them will be owned by the same controller.
Another note states that there is no performance improvement from splitting on the same controller:
Striping LUNs within a disk group on the same controller provides no additional performance value. The EVA automatically stripes each LUN across all disks in a disk group.
Since the discussion has somewhat turned into a storage design issue (well, the two are very much related, and I'd be glad to have members here with experience in both domains), I would like to ask what you think about my question from the LVM point of view. I am referring to your experience in balancing filesystem access for performance. Logical volumes striped or not? (A quick way to check an existing layout is sketched below.)
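Just as a pointer for inspecting an existing layout (the LV path is an example), lvdisplay shows whether an LV is striped and how its extents are spread over the PVs:
# show stripe count / stripe size and the per-PV extent distribution
lvdisplay -v /dev/vgdata/lvol1 | more
Look at the "Stripes" and "Stripe Size (Kbytes)" fields and at the "Distribution of logical volume" section.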
Thank you.
L.
08-13-2009 02:22 AM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
In my opinion, you will only get a final answer by testing and comparing the results.
According to the whitepaper, splitting and striping doesn't make sense, so the only reason to use LVM would be the flexibility if you need to change things later (increase, etc.).
In our environment we use EVAs as described above, even if, according to the whitepaper, we get no benefit from it.
If you look at http://communities.vmware.com/thread/73745 - these guys tested a lot of EVAs with IOmeter without any problems, so perhaps give it a second try.
08-13-2009 03:21 AM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
Am I right, or is it just my imagination?
Thank you.
L.
08-18-2009 10:45 PM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
I also use LVM, multipathing, mirroring, etc., and noticed that HP-UX has an I/O queue per LUN.
So, if you have 1 big LUN of 500 GB, all I/O is funneled through a single LUN queue. If you have 5 LUNs of 100 GB, 5 I/Os can be issued in parallel. So, in theory, you can flush your I/O to the EVA's NVRAM up to 5 times faster with 5 LUNs than with 1 LUN, provided that your I/O card is not a bottleneck and the EVA's disks can keep up with the throughput.
Reading is the same: 5 reads can be done in parallel, but at the end of the day the EVA has to get the data from the same set of disks, so if the EVA itself is heavily loaded, it probably won't make any difference.
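To look at (or experiment with) that per-LUN queue on 11i v3, scsimgr exposes the queue depth per LUN; the device file and the value below are examples only:
# current maximum queue depth for one LUN
scsimgr get_attr -D /dev/rdisk/disk14 -a max_q_depth
# change it for the running system (lost at reboot)
scsimgr set_attr -D /dev/rdisk/disk14 -a max_q_depth=32
# make the change persistent across reboots
scsimgr save_attr -D /dev/rdisk/disk14 -a max_q_depth=32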
08-18-2009 11:22 PM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
Liviu.
08-19-2009 05:54 AM
Solution: http://www.oracle.com/technology/deploy/performance/pdf/EVA_ORACLE_paper.pdf
08-19-2009 07:31 AM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
Anyway, I still have to understand the LVM stripe-size mechanisms and how to fine-tune them for different data types.
L.
08-19-2009 07:42 AM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
Under ASM it is the same -- 100 or 200 GB vdisks. We expect ASM to simply stripe/balance across these volumes.
Several years ago, though, I ran a mini experiment with an adventurous DBA. I had him compare the performance of a DB on a single 1 TB LUN versus 5 x 200 GB LUNs. For the 1 TB LUN run I set my queue depth to 128. There was hardly any difference in performance. The back-end RAID setup was VRAID10.
08-19-2009 10:50 AM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
My conclusion to this thread would be that it is better to have parallel access at every layer of the I/O stack, even if it brings no performance improvement in some situations. It introduces no notable performance penalty, and it keeps you on the safe side.
Thank you to all who replied.
L.
08-24-2009 08:26 PM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
01-20-2010 07:24 AM
Re: Multiple LUNs vs single LUN in HPUX 11iv3
device %busy avque r+w/s blks/s avwait avserv
c14t0d0 98.01 2.48 481 8234 2.14 4.43
c16t0d0 97.51 2.80 499 8554 2.23 4.25
device %busy avque r+w/s blks/s avwait avserv
c14t1d0 80.00 0.50 768 75824 0.00 1.61
c16t1d0 75.00 0.50 713 70936 0.00 1.54
c14t1d1 25.00 0.50 208 7216 0.00 2.03
c16t1d1 30.50 0.50 226 8960 0.00 2.15
c14t1d2 40.50 0.50 461 15776 0.00 1.77
c16t1d2 47.00 0.50 528 18032 0.00 1.87
c14t1d3 30.50 0.50 386 13288 0.01 1.81
c16t1d3 35.00 0.50 430 14360 0.00 1.89
c14t1d4 0.50 0.50 2 40 0.00 2.93
c16t1d4 1.50 0.50 3 48 0.00 6.45
c14t1d5 7.50 0.50 101 2557 0.00 1.66
c16t1d5 9.00 0.51 100 2115 0.00 1.78
Application performance is improved, and tools like sar and GlancePlus no longer indicate a disk bottleneck. This was achieved with no hardware changes.
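For reference, figures like the above come from the sar disk report (interval and count are arbitrary here):
# disk activity, 5-second samples, 12 iterations
sar -d 5 12
With the standard HP-UX sar -d columns, an avque around 0.50 with avwait near 0 means requests are not piling up in the queue, while the higher avque/avwait values in the first set point to requests waiting in the queue for those devices.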