Operating System - HP-UX

Re: Large LUN vs more luns

 
Sunil Sharma_1
Honored Contributor

Re: Large LUN vs more luns

Hi,

I think that by creating a larger number of smaller LUNs you will have more flexibility to manage disk space: you can take some space back out of the volume group later if it is needed elsewhere, and you can just as easily add more.
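As a rough sketch of what that flexibility looks like under HP-UX LVM (the device paths and VG name here are made up for illustration), adding and later reclaiming one of the smaller LUNs might be:

   # bring a new 36GB LUN into the volume group
   pvcreate /dev/rdsk/c5t0d1
   vgextend /dev/vg01 /dev/dsk/c5t0d1

   # later, free that LUN again: migrate its extents elsewhere, then drop it
   pvmove /dev/dsk/c5t0d1
   vgreduce /dev/vg01 /dev/dsk/c5t0d1

With one huge LUN you cannot hand part of the space back this way; it stays committed to that VG.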

Sunil
*** Dream as if you'll live forever. Live as if you'll die today ***
Brian Watkins
Frequent Advisor

Re: Large LUN vs more luns

You'll be better off using multiple 36GB or 72GB LUNs in the VG. This will give you more physical spindles to spread the IO across, which will help prevent IO bottlenecks and improve IO performance.

If you use 36GB or 72GB LUNs along with striping, your new DBs should really scream.

We just migrated a customer off of an old FC60 onto our Symmetrix. Their DB was around 300GB, so we gave them a 360GB volume group comprised of 10x36GB LUNs striped across all members at 128K and they've done nothing but marvel at how much better the DB performs now.
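In case it helps, a minimal sketch of that kind of layout under HP-UX LVM (VG name, device paths, sizes and the group-file minor number are all illustrative; the valid -I stripe sizes and any -s/-e tuning you need depend on your HP-UX/LVM version):

   # initialise each of the ten 36GB LUNs (repeat per LUN)
   pvcreate /dev/rdsk/c4t0d0

   # create the volume group over all ten members
   mkdir /dev/vg_ora
   mknod /dev/vg_ora/group c 64 0x010000      # pick an unused minor number
   vgcreate /dev/vg_ora /dev/dsk/c4t0d0 /dev/dsk/c4t0d1 ...   # list all ten LUNs

   # one LV striped across all ten LUNs with a 128K stripe size
   lvcreate -i 10 -I 128 -L 300000 -n lvol_data /dev/vg_ora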

John Payne_2
Honored Contributor

Re: Large LUN vs more luns

You are using 1 HBA now, but is there a possibility that you will have 2 later? If so, you want to set up Secure Path now to manage the LUNs so that it doesn't come along later and stomp on things. Also, Secure Path can do load balancing on the controller side, even with 1 HBA.

We have an EVA here, and I had the HW/Mass Storage guys set LUNs to 250GB, but that was just for conformity's sake. We have 1 machine with 1TB divided into 4 LUNs. We really didn't get much out of it; performance seems to be the same as with one gigantic LUN. (We tested this before the EVA was in production here...) With Oracle, the DBAs just wanted a bunch of filesystems to point to anyway. We have our redo logs on a separate LUN from the data, etc. This seems to have mitigated a risk issue in the minds of the DBAs.

With our configuration we are very happy, but we really didn't see a difference between a large LUN and several smaller LUNs, except that you have more VGs to deal with.

The secure path failover with 2 HBA's is almost transparent, by the way.

Hope it helps

John
Spoon!!!!
Alzhy
Honored Contributor

Re: Large LUN vs more luns

I am for one large LUN.

RAID groups on modern arrays (EVA, HDS, etc.), from which LUNs are carved and assembled, now usually deal with 73GB physical disks at a minimum, with 144GB and 288+GB physical disks on the way. The most common array configs these days are 2x2 stripe-mirrors or 3+1P RAID5. So we're talking about 140GB to 210GB RAID groups here, which could be presented as single LUNs or assembled into larger LUNs depending on your array -- or carved up into smaller LUNs ("LUNlets", as I would call them).

Presenting RAID groups as large LUNs (TB-sized, for that matter) should pose no performance issues. If anything it simplifies things: there are far fewer small disk/LUN objects to deal with in your volume manager of choice. Your VM can carve this gigantic LUN up into smaller slices or volumes if you wish.
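A rough sketch of that carving with HP-UX LVM (device path, VG name, LV names and sizes are invented for illustration; the extent-size value you actually need depends on your LVM defaults and the LUN size):

   pvcreate /dev/rdsk/c6t0d0                     # the one big LUN
   mkdir /dev/vg_big
   mknod /dev/vg_big/group c 64 0x020000         # any unused minor number
   vgcreate -s 32 /dev/vg_big /dev/dsk/c6t0d0    # larger PE size so a TB-sized PV fits the extent limits

   lvcreate -L 102400 -n lvol_data /dev/vg_big   # 100GB slice
   lvcreate -L 20480  -n lvol_redo /dev/vg_big   # 20GB slice
   newfs -F vxfs /dev/vg_big/rlvol_data          # filesystem on the raw LV device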

It is still a religious debate whether presenting one gigantic LUN (say a 1.5 TB LUN) as a single filesystem, say for Oracle or fileshare use, is a good idea. The most commonly cited issues are alleged performance problems and the inherent risk. The other one is backups: it will obviously be a challenge for traditional backup systems to back up one gigantic filesystem, especially for those that still use tapes. I will decline to take a side there, but I am for large (or larger) LUN presentations from modern arrays, and somewhat selective about whether I would present such a large LUN as just one filesystem.
Hakuna Matata.
doug mielke
Respected Contributor

Re: Large LUN vs more luns

I'm a big lun fan, if it comes down to vote counting.

My real reason for this post is a "bug?" (as the salesman described it) in the last version of Secure Path prior to the March '03 release.

All my LUNs failed to be presented on the first reboot. Since we have multiple arrays, this resulted in hardware paths pointing to the wrong devices. A mess.

I was told to run a Secure Path update command (sorry, my notes are at home) and relink the kernel, and that this is fixed in the March version of Secure Path.

Tim D Fulford
Honored Contributor

Re: Large LUN vs more luns

I seem to be at odds with some of the above replies. I would go for 24x 64GB LUNs and extent-stripe across them. If you go for 2x 768GB you will get enormous disk queues. BTW, you may also need to raise the scsi_max_qdepth kernel parameter to 64 or more (the default is 8).
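A quick sketch of checking and raising that tunable (the per-device scsictl line is an alternative; whether you use kmtune or kctune depends on your HP-UX release):

   # check the current value (default is 8)
   kmtune -q scsi_max_qdepth

   # raise it system-wide (use kctune instead on later 11i releases)
   kmtune -s scsi_max_qdepth=64

   # or set the queue depth on a single device only
   scsictl -m queue_depth=64 /dev/rdsk/c4t0d0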

I suspect you will be trying to get something like 20,000 IO/s. If you have 2 LUNs you will be trying to push 10,000 IO/s down each LUN. That would mean an effective service time for EACH LUN of 0.1 ms. With 24 LUNs you will be asking for a more reasonable 1.2 ms (still pretty impressive).
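Spelling that arithmetic out (the 20,000 IO/s figure is just my assumed target, not a measurement):

   20,000 IO/s /  2 LUNs  = 10,000 IO/s per LUN  ->  1/10,000 s  = 0.1 ms per IO
   20,000 IO/s / 24 LUNs ~=    833 IO/s per LUN  ->  1/833 s    ~= 1.2 ms per IO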

The other main reason for going with more reasonably sized LUNs is the disk queue: even assuming you do get 0.1 ms per LUN, you will still be queuing 12 times more IOs per LUN. This will show up on the HP-UX system. OK, it will munch through the queue quickly, but when you consider that queue lengths grow roughly with the square of the load (something I remember from a stats lecture), you will be getting vast queues very quickly.

The last thing: you may want to consider reducing your VG size from 1.5TB. The reason I say this is that LVM has a hard limit of 255 LVs per VG. This means that to use all of the space you will need to create LVs of just over 6GB each. We use Informix, which means we are limited to 2GB LVs/chunks, so the biggest VG we can have is 0.5 TB. If you have a similar limit, or will be creating LVs smaller than 6GB, you will end up with unusable space that you CANNOT recoup (OK, backup/destroy/restore will do it)!
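To put numbers on that (a back-of-the-envelope check against the 255-LV-per-VG limit):

   1.5 TB / 255 LVs ~= 6 GB                the minimum LV size if you want to be able to use the whole VG
   255 LVs x 2 GB    = 510 GB ~= 0.5 TB    the biggest VG you can fully use with 2GB chunks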

Just my advice based on experience

Tim
-