Optimal LUN sizing
05-14-2003 11:35 AM
When building a 150GB mountpoint, is it "better" to use one 150GB LUN on a va7410 or to create three 50GB LUNs and put them in a common volume group?
An HP tech suggested to me that he had never seen a 150GB or 200GB LUN in production from a va array and suggested that we use 50GB (or smaller) LUNs. This seems counter-intuitive to me, as the three LUNs would require three times the overhead on the array.
Thoughts?
05-14-2003 11:46 AM
Re: Optimal LUN sizing
I have no experience with the VA7410 myself, but I don't understand why this advanced array couldn't handle a 150GB LUN. I use 400GB LUNs on our rather simple disk array without problems.
If all LUNs are in the same RAID set (as I understand is the case on the VA7410), there is no advantage to several small LUNs compared to one large one.
05-14-2003 12:01 PM
The VA7410 can have a LUN as large as all of its available space if you want. But for performance reasons you can do the following: create two LUNs in different Redundancy Groups (RG1 and RG2), then create the VG so that the primary path for the RG1 LUN goes through VA controller C1, and for the RG2 LUN through C2.
BUT: I was told that the VA7x10 family has an improved intercontroller bus, so what I mentioned above should not make a big performance difference.
The best approach I see is simply to load balance LUNs between RGs (that is, distribute all disks between both RGs and allocate approximately the same LUN space in each). This ensures that both disk-group-owning controllers are engaged equally.
Eugeny
05-14-2003 08:31 PM
Re: Optimal LUN sizing
However, operating systems are a different matter. Windows is good about automatic queue depth management, but HP-UX is not. On HP-UX you must pay a little attention to ensure the queue depth is sufficient to allow maximum performance.
You didn't indicate the OS or application, so I'll guess. The queue depth defines the number of outstanding commands the array can process concurrently per LUN. For write activity, the queue depth is not very important; the write cache allows 1000s of active IOs. But for reads it is very important, and more important for small-block random workloads than for large sequential ones. So, if your workload has a large component of small random reads, you'll need to adjust the HP-UX queue depth.
The goal is to have the total LUN queue depth per RG be about 2 or 3 times the number of disks in the RG (for sequential read workloads, a total queue depth of 8 is sufficient). So if an RG has 20 disks, then the LUNs created on that RG need a total queue depth of about 50 or 60. The default queue depth on HP-UX is 8 per LUN. So, for this example, if you have 7 or 8 LUNs (that are striped), you're fine. If you have fewer, you'll need to adjust the queue depth manually.
To do this, use the scsictl command. It sets a new queue depth temporarily; the setting is not persistent through a reboot, so script it into your boot process. This also explains why some people recommend multiple LUNs: to create a greater total queue depth.
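A rough sketch of that arithmetic (the disk count, LUN count, and device file below are assumed values for illustration, not taken from this thread):

```shell
# Hypothetical example of the queue-depth sizing rule above.
DISKS_PER_RG=20                      # disks in the Redundancy Group (assumed)
NUM_LUNS=3                           # LUNs carved from that RG (assumed)

TOTAL_DEPTH=$((DISKS_PER_RG * 3))    # target: ~2-3x the disk count; 3x here
PER_LUN=$((TOTAL_DEPTH / NUM_LUNS))  # share of the total for each LUN

echo "set each LUN's queue depth to about $PER_LUN"

# On HP-UX you would then apply it per device file (path is hypothetical),
# and rerun this from a boot script since the change does not survive reboot:
#   scsictl -m queue_depth=$PER_LUN /dev/rdsk/c4t0d1
```

With 3 LUNs the per-LUN depth (20) is well above the default of 8, which is why fewer, larger LUNs need the manual adjustment.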
Got it?
05-15-2003 04:11 AM
Re: Optimal LUN sizing
Steve.
05-15-2003 04:47 AM
Re: Optimal LUN sizing
This is exactly what I needed to think this through. BTW, this is an array serving several HP-UX 11i systems (rp5430 x 2, rp2470, K370), all running Oracle and Oracle Financials. We're soon planning to add some legacy Oracle systems to our SAN. We're load balancing by splitting the paths of high-I/O systems evenly across the controllers.
Again, thank you muchly... :)