06-09-2003 08:17 PM
Re: Large LUN vs more luns
I think that by creating a larger number of LUNs you gain flexibility in managing disk space: you can take some space back out later if required, and you can add space as well.
Sunil
06-23-2003 12:31 PM
If you use 36GB or 72GB LUNs along with striping, your new DBs should really scream.
We just migrated a customer off an old FC60 onto our Symmetrix. Their DB was around 300GB, so we gave them a 360GB volume group made up of 10x36GB LUNs striped across all members at 128K, and they've done nothing but marvel at how much better the DB performs now.
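The striping described here can be sketched numerically: with a 128K stripe unit across 10 LUNs, consecutive 128K chunks of the volume round-robin across the members, which is what spreads the DB's I/O over every LUN. A minimal illustration (the 10-LUN and 128K figures are from the post; the function name is mine):

```python
STRIPE_KB = 128   # stripe unit from the post
N_LUNS = 10       # 10 x 36GB LUNs in the volume group

def lun_for_offset(offset_kb):
    """Return (lun_index, offset_within_lun_kb) for a logical offset.

    Consecutive 128K stripe units round-robin across the 10 LUNs,
    so sequential I/O touches every member of the volume group.
    """
    stripe_no = offset_kb // STRIPE_KB     # which stripe unit we are in
    lun = stripe_no % N_LUNS               # round-robin member selection
    # Each LUN holds every N_LUNS-th stripe unit, packed contiguously.
    offset_in_lun = (stripe_no // N_LUNS) * STRIPE_KB + offset_kb % STRIPE_KB
    return lun, offset_in_lun

# The first ten 128K chunks land on ten different LUNs:
print([lun_for_offset(i * 128)[0] for i in range(10)])  # -> [0, 1, ..., 9]
```

The same round-robin layout is why a sequential table scan keeps all ten spindle sets busy at once instead of hammering a single LUN.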
06-23-2003 12:55 PM
We have an EVA here, and I had the HW/mass-storage guys set the LUNs to 250GB, but that was just for conformity's sake. We have one machine with 1TB divided into 4 LUNs. We really didn't get much out of it; performance seems to be the same as with one gigantic LUN. (We tested this before the EVA was in production here.) With Oracle, the DBAs just wanted a bunch of filesystems to point to anyway. We have our redo logs on a separate LUN from the data, etc. This seems to have mitigated a risk issue in the minds of the DBAs.
With our configuration we are very happy, but we really didn't see a difference between one large LUN and several smaller LUNs, except that you have more VGs to deal with.
The Secure Path failover with 2 HBAs is almost transparent, by the way.
Hope it helps
John
07-01-2003 07:42 AM
RAID groups on modern arrays (EVA, HDS, etc.), from which LUNs are carved and assembled, now usually deal with 73GB physical disks at minimum, with 144GB and 288+GB physical disks on the way. The most common array configs these days are 2x2 stripe-mirrors or 3+1P RAID 5, so we're talking about 140GB to 210GB RAID groups here, which could be presented as single LUNs or assembled into larger LUNs depending on your array -- or carved up into smaller LUNs ("LUNlets", as I would call them).
Presenting RAID groups as large LUNs (TB-sized, for that matter) should pose no performance issues; rather, it simplifies things, since there are fewer disk/LUN objects to deal with in your volume manager of choice. Your VM can carve this gigantic LUN up into smaller slices or volumes if you wish.
It is still a religious debate whether to present one gigantic LUN (say, a 1.5TB LUN) as a single filesystem, say for Oracle or fileshare use. The most commonly cited issues are performance and the inherent risks. Another is backups: it will obviously be a challenge for traditional backup systems to back up one gigantic filesystem, especially for those that still use tapes. I would decline to comment, but I am for large or larger LUN presentations from modern arrays, and somewhat selective about whether I would want to present such a large LUN as just one filesystem.
07-01-2003 08:01 AM
My real reason for this post is a "bug?", as the salesman described it, in the last version of Secure Path (prior to March '03).
All my LUNs failed to be presented upon first reboot. Having multiple arrays, this resulted in hardware paths pointing to the wrong devices. A mess.
I was told to run an SP update command (sorry, my notes are at home) and relink the kernel, and that this is fixed in the March version of Secure Path.
07-01-2003 08:13 AM
I suspect you will be trying to get something like 20,000 IO/s. If you have 2 LUNs, you will be trying to push 10,000 IO/s down each LUN, which means an effective service time for EACH LUN of 0.1 ms. With 24 LUNs you will be asking for a more reasonable 1.2 ms (still pretty impressive).
The other main reason for going with more reasonably sized LUNs is the disk queue: even assuming you do get 0.1 ms per LUN, you will still be queuing 12 times more IOs per LUN, and this will show up on the HP-UX system. OK, it will munch through the queue quickly, but when you consider that queues build up as a power of 2 (squared -- something I remembered from a stats lecture), you will be getting vast queues very quickly.
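The service-time arithmetic above can be written out explicitly. This is just the post's own numbers in a small sketch (the 20,000 IO/s target is the poster's assumption, not a measured figure):

```python
TARGET_IOPS = 20_000  # total I/O rate assumed in the post

def per_lun_service_time_ms(n_luns, total_iops=TARGET_IOPS):
    """Effective service time each LUN must sustain to hit the target rate.

    Spreading the same total IO/s over more LUNs relaxes the per-LUN
    service time each device has to deliver.
    """
    per_lun_iops = total_iops / n_luns
    return 1000.0 / per_lun_iops  # milliseconds per I/O

print(per_lun_service_time_ms(2))   # 0.1 ms per I/O -- unrealistically fast
print(per_lun_service_time_ms(24))  # 1.2 ms per I/O -- achievable
```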
The last thing: you may want to consider reducing your VG size from 1.5TB. The reason I say this is that LVM has a hard limit of 255 LVs per VG, which means you would need to create LVs of just over 6GB each. We use Informix, which limits us to 2GB LVs/chunks, so the biggest VG we can have is 0.5TB. If you have a similar limit, or will be creating LVs smaller than 6GB, you will end up with unusable space that you CANNOT recoup (OK, backup/destroy/restore would do it)!
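The 255-LV limit works out as follows; a quick sketch of the arithmetic (255 LVs per VG is the HP-UX LVM limit cited in the post, and the 2GB chunk ceiling is Informix's):

```python
MAX_LVS_PER_VG = 255  # HP-UX LVM hard limit cited in the post

# A 1.5TB VG forces each LV to be just over 6GB
# if you want to address the whole VG with 255 LVs:
vg_gb = 1.5 * 1024
min_lv_gb = vg_gb / MAX_LVS_PER_VG
print(round(min_lv_gb, 2))  # ~6.02 GB per LV

# With Informix's 2GB chunk limit, the usable VG tops out at:
max_vg_gb = MAX_LVS_PER_VG * 2
print(max_vg_gb)  # 510 GB, i.e. roughly the 0.5TB the post mentions
```

Anything in the VG beyond that 510GB ceiling is the unusable space the post warns about, short of a backup/destroy/restore cycle.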
Just my advice, based on experience.
Tim