Disk Enclosures

Optimal LUN sizing

Simon Galton
Frequent Advisor

Optimal LUN sizing

Folks -- I'm polling for wisdom on this one... :)

When building a 150GB mountpoint, is it "better" to use one 150GB LUN on a VA7410 or to create three 50GB LUNs and put them in a common volume group?

An HP tech suggested to me that he had never seen a 150GB or 200GB LUN in production from a VA array and suggested that we use 50GB (or smaller) LUNs. This seems counter-intuitive to me, as the three LUNs would require three times the overhead on the array.

Leif Halvarsson_2
Honored Contributor

Re: Optimal LUN sizing

I have no experience with the VA7410 myself, but I don't understand why this advanced array could not handle a 150GB LUN. I use 400GB LUNs on our rather simple disk array without problems.

If all LUNs are in the same RAID set (as I understand is the case on the VA7410), there is no advantage to several small LUNs compared to one large one.
Eugeny Brychkov
Honored Contributor

Re: Optimal LUN sizing

The VA7410 can have a LUN as large as all of its available space if you want. But for performance reasons you can do the following: create two LUNs in different Redundancy Groups (RG1 and RG2), then create the VG so that the primary path for the RG1 LUN goes through VA controller C1 and for the RG2 LUN through C2.
BUT: I was told that the VA7x10 family has an improved intercontroller bus, so the scheme I mentioned above should no longer give a big performance reason to implement it.
The best approach I see is simply to load-balance LUNs between RGs (that is, distribute all disks between both RGs and allocate approximately the same space for LUNs in each RG). This will ensure that both disk-RG-owning controllers are engaged equally.
Trusted Contributor

Re: Optimal LUN sizing

LUN size and count are unimportant to the VA for performance, the one exception being that you need at least one LUN per RG to access that RG's data.

However, operating systems are a different thing. Windows is good about automatic queue depth management, but HP-UX is not. On HP-UX you must pay a little attention to ensure the queue depth is sufficient to allow maximum performance.

You failed to indicate the OS or application, so I'll guess. The queue depth defines the number of outstanding commands the array can be processing concurrently per LUN. For write activity, the queue depth is not very important; the write cache allows 1000s of active IOs. But for reads it is very important, and more important for small-block random than large sequential workloads. So, if your workload has a large component of small random reads, you'll need to adjust the HP-UX queue depth.

The goal is to have the total LUN queue depth per RG be about 2 or 3 times the number of disks per RG (for sequential read workloads, a total queue depth of 8 is sufficient). So if an RG has 20 disks, then the LUNs created on that RG need a total queue depth of about 50 or 60. The default queue depth for HP-UX is 8 per LUN. So, for this example, if you have 7 or 8 LUNs (that are striped), then you're cool. If you have fewer, then you'll need to manually adjust the queue depth.
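The sizing arithmetic above can be sketched in a few lines of shell. The numbers here are the worked example from this post (20-disk RG, three LUNs), not values probed from a real array:

```shell
# Back-of-envelope queue-depth sizing for one redundancy group.
disks_per_rg=20     # disks in the RG (example value from the post)
multiplier=3        # target total depth = 2-3x the disk count per RG
num_luns=3          # LUNs carved from this RG (e.g. three 50GB LUNs)

total_depth=$((disks_per_rg * multiplier))
# Round up so the per-LUN depths sum to at least the target.
per_lun_depth=$(( (total_depth + num_luns - 1) / num_luns ))

echo "target total depth: $total_depth, per-LUN depth: $per_lun_depth"
```

With these inputs each of the three LUNs would want a depth of 20, well above the HP-UX default of 8.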

To do this, use the scsictl command. It will temporarily set a new queue depth; however, the setting is not persistent through a reboot, so script it into your boot process. This also explains why some will recommend multiple LUNs: to create a greater total queue depth.
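A boot-time snippet could look something like the sketch below. The device files are placeholders for your VA LUNs, and I'm assuming the usual HP-UX 11i scsictl invocation; the loop only echoes each command so you can review the list before dropping the leading `echo` to actually apply it:

```shell
# Reapply a non-default queue depth at each boot (scsictl settings are
# lost on reboot).  Device paths below are placeholders, not real LUNs.
QDEPTH=20
for dev in /dev/rdsk/c4t0d1 /dev/rdsk/c4t0d2 /dev/rdsk/c4t0d3; do
    # Drop the leading 'echo' to actually set the depth on each LUN.
    echo scsictl -m queue_depth=$QDEPTH "$dev"
done
```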

Got it?

Re: Optimal LUN sizing

I don't see the server type mentioned, but if you are attaching something like an HP server (with active/passive redundant paths) and haven't invested in multipathing software, it can also be to your advantage to have at least two LUNs so you can send load down both paths to the disk, i.e. make the primary path different for each LUN. Call it poor man's load balancing.
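As a sketch of that idea: with HP-UX LVM the first device path supplied for a physical volume becomes its primary link and later paths become alternates, so reversing the path order per LUN splits the load. All device names here are hypothetical placeholders, and the function only prints the commands rather than running them:

```shell
# plan_pv_links PRIMARY ALTERNATE: print the LVM commands that make
# PRIMARY the first (primary) link and ALTERNATE the backup link.
# Use vgcreate instead of vgextend for the very first PV in the VG.
plan_pv_links() {
    echo "vgextend /dev/vg01 /dev/dsk/$1"   # first path added = primary
    echo "vgextend /dev/vg01 /dev/dsk/$2"   # second path = alternate
}

# LUN 1: primary through the c4 controller path, alternate through c6
plan_pv_links c4t0d1 c6t0d1
# LUN 2: reversed order, so its load goes down the other controller
plan_pv_links c6t0d2 c4t0d2
```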

Simon Galton
Frequent Advisor

Re: Optimal LUN sizing

Folks, thanks for the outstanding replies, especially Eugeny and Roger (excellent background on queue depth, thank you).

This is exactly what I needed to think this through. BTW, this is an array serving several HP-UX 11i systems (rp5430 x 2, rp2470, K370) all running Oracle and Oracle Financials. We're soon planning to add some legacy Oracle systems to our SAN. We're load balancing by splitting the path of high-I/O systems evenly across controllers.

Again, thank you muchly... :)