
Re: Multiple LUNs vs single LUN in HPUX 11iv3

 
Zinky
Honored Contributor

Re: Multiple LUNs vs single LUN in HPUX 11iv3

My HP-UX (11.11/11.31) on EVA8XXX / Oracle recipe is to provide vdisks of 100 to 200 GB, with a volume or a filesystem sitting on top of each vdisk. I set the SCSI queue depth to 32. I leave it to the DBAs to stripe their datafiles across these filesystems.
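A minimal sketch of one such vdisk being prepared on the HP-UX side, assuming a legacy 11.11-style device file; the device, VG, LV and mount point names are placeholders:

scsictl -m queue_depth=32 /dev/rdsk/c10t0d1      # per-LUN SCSI queue depth (not persistent across reboots)
pvcreate /dev/rdsk/c10t0d1
mkdir /dev/vgora01
mknod /dev/vgora01/group c 64 0x010000
vgcreate /dev/vgora01 /dev/dsk/c10t0d1
lvcreate -L 200000 -n lvora01 /dev/vgora01       # ~195 GB logical volume
newfs -F vxfs -o largefiles /dev/vgora01/rlvora01
mkdir -p /oradata/ora01
mount -F vxfs -o delaylog /dev/vgora01/lvora01 /oradata/ora01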

Under ASM it is the same -- 100 or 200 GB vdisks. We expect ASM to simply stripe and balance across these volumes.
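A minimal sketch of handing two such vdisks straight to ASM instead, using the standard CREATE DISKGROUP syntax; device paths, ownership and the diskgroup name are placeholders:

chown oracle:dba /dev/rdsk/c12t0d1 /dev/rdsk/c12t0d2
chmod 660 /dev/rdsk/c12t0d1 /dev/rdsk/c12t0d2

# run against the ASM instance; external redundancy because the EVA VRAID already protects the data
sqlplus / as sysdba <<'EOF'
CREATE DISKGROUP dgdata EXTERNAL REDUNDANCY
  DISK '/dev/rdsk/c12t0d1',
       '/dev/rdsk/c12t0d2';
EOF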

Several years ago, though, I ran a small experiment with an adventurous DBA. I had him compare the performance of a database on a single 1 TB LUN versus five 200 GB LUNs. For the 1 TB LUN test I set the queue depth to 128. There was hardly any difference in performance. The back-end RAID setup was VRAID10.
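On 11iv3 with agile device files, the per-LUN queue depth used in that comparison can be inspected and changed with scsimgr (disk28 is a placeholder device):

scsimgr get_attr -D /dev/rdisk/disk28 -a max_q_depth        # show the current setting
scsimgr set_attr -D /dev/rdisk/disk28 -a max_q_depth=128    # change it for the running system
scsimgr save_attr -D /dev/rdisk/disk28 -a max_q_depth=128   # make the change persistent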

Hakuna Matata

Favourite Toy:
AMD Athlon II X6 1090T 6-core, 16GB RAM, 12TB ZFS RAIDZ-2 Storage. Linux CentOS 5.6 running KVM Hypervisor. Virtual Machines: Ubuntu, Mint, Solaris 10, Windows 7 Professional, Windows XP Pro, Windows Server 2008R2, DOS 6.22, OpenFiler
Liviu I.
Frequent Advisor

Re: Multiple LUNs vs single LUN in HPUX 11iv3

I did the same, using 100 GB vdisks, but the drawback I find is the harder management: having to present many "small" vdisks instead of one "big" one, and having to scan and import many PVs. But this is perhaps my own fault, since I should have had scripts for this (see the sketch below).
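Something like the following hypothetical loop would have covered the "scripts for this" part -- it brings a batch of newly presented vdisks into one volume group (disk names and the VG are placeholders):

#!/sbin/sh
# Absorb a batch of newly presented vdisks into an existing VG.
ioscan -fnC disk                  # rescan for the new vdisks
insf -e                           # create any missing device files

VG=/dev/vgora01
for d in c12t0d1 c12t0d2 c12t0d3 c12t0d4
do
    pvcreate /dev/rdsk/$d || exit 1
    vgextend $VG /dev/dsk/$d
done
vgdisplay -v $VG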

My conclusion to this thread would be that it is better to have parallel access at every layer of the I/O path, even if it does not improve performance in every situation. At worst it carries no notable performance penalty, while keeping you on the safe side.

Thank you all who replied.
L.
Liviu I.
Frequent Advisor

Re: Multiple LUNs vs single LUN in HPUX 11iv3

Thank you.
R Bray
New Member

Re: Multiple LUNs vs single LUN in HPUX 11iv3

I realize this is an old thread, but I haven't seen this sort of information posted elsewhere despite searching for it. We have seen favorable performance from splitting the filesystems of an Oracle database onto multiple LUNs backed by the same physical disk group on an EMC SAN. The original queue length of 2-3 per path (c14 and c16 are separate FC cards) led us to believe that Oracle and the filesystem were parallelizing read requests well enough, but somewhere in the block-transfer layer I/Os were being serialized: the SAN never reported a queue depth greater than 1 on the LUN, and the backing disks were not saturated. This is HP-UX 11iv2, Oracle 10g, on LVM/VxFS with EMC PowerPath 5.1.0. Some sar -d output before and after the split:

Before the split:

device %busy avque r+w/s blks/s avwait avserv
c14t0d0 98.01 2.48 481 8234 2.14 4.43
c16t0d0 97.51 2.80 499 8554 2.23 4.25


After the split:

device %busy avque r+w/s blks/s avwait avserv
c14t1d0 80.00 0.50 768 75824 0.00 1.61
c16t1d0 75.00 0.50 713 70936 0.00 1.54
c14t1d1 25.00 0.50 208 7216 0.00 2.03
c16t1d1 30.50 0.50 226 8960 0.00 2.15
c14t1d2 40.50 0.50 461 15776 0.00 1.77
c16t1d2 47.00 0.50 528 18032 0.00 1.87
c14t1d3 30.50 0.50 386 13288 0.01 1.81
c16t1d3 35.00 0.50 430 14360 0.00 1.89
c14t1d4 0.50 0.50 2 40 0.00 2.93
c16t1d4 1.50 0.50 3 48 0.00 6.45
c14t1d5 7.50 0.50 101 2557 0.00 1.66
c16t1d5 9.00 0.51 100 2115 0.00 1.78

Application performance is improved, and tools like sar and GlancePlus no longer indicate a disk bottleneck. This was achieved with no hardware changes.
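For reference, per-path numbers like the ones above can be gathered with something along these lines (the interval and count are arbitrary, and powermt is the PowerPath view of the same paths):

sar -d 5 12 | grep -E 'c14t|c16t'       # per-path %busy, avque, avwait, avserv
powermt display dev=all                 # path state as PowerPath sees it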