mathias.d
New Member

[XP20000] LUSE and performance

Hello,

I've read here that for SQL databases, which generate a lot of random read access, the best solution is to spread the data across as many disks as possible.

Perhaps I haven't understood correctly, but if that is true, I would like to know whether creating a LUSE of seven LDEVs, choosing each LDEV from a different CU, is the best way to get the best read performance.

Regards,
Steven Ruby
Occasional Advisor

Re: [XP20000] LUSE and performance

I wouldn't suggest using LUSE for this. As a side note, though, I do use LUSE for a lot of different apps.

A LUSE is just a concatenation, so in reality your reads wouldn't be hitting all the disks, just the set of disks in the parity group(s) that the data for that I/O is on.
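
To make the "concat" point concrete, here is a minimal Python sketch (my own illustration, not anything from the array firmware) of how a concatenated volume maps a logical block address onto exactly one member LDEV; the member count and sizes are made up:

    # Minimal sketch of concatenated (LUSE-style) address mapping.
    # Member sizes are hypothetical.
    def concat_lookup(lba, member_sizes_blocks):
        """Return (member_index, offset) for a logical block address:
        walk the members in order until the address fits."""
        for i, size in enumerate(member_sizes_blocks):
            if lba < size:
                return i, lba
            lba -= size
        raise ValueError("LBA beyond end of volume")

    members = [100_000] * 7                   # e.g. seven LDEVs, one per parity group
    print(concat_lookup(250_000, members))    # -> (2, 50000): only member 2's disks do the work

Any single I/O resolves to one member, so it only keeps that member's parity group busy.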

You could always try it and see how it performs for you though.

IBaltay
Honored Contributor

Re: [XP20000] LUSE and performance

Hi,
For Windows apps, in the classical XP concept with no ThP (Thin Provisioning), there is the possibility to concatenate array groups of 7D+1P into 14D+2P or even 28D+4P with interleaving/striping. That can then be combined with LUSE (the only option for distribution, even though it is not striped, only spanned; the effective maximum recommended for performance reasons is up to 8 LDEV LUSE chunks). Hence the LUSE concept seems to be better than big LDEVs within one PG. BTW, there should not be any performance penalty in the newer firmware for LUSE versus single LDEVs within one PG...
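
To illustrate the difference between the two layouts, here is a rough Python sketch (my own, with an assumed chunk size and group count, not actual firmware behaviour) of interleaved/striped placement across four 7D+1P groups versus spanned placement:

    # Rough sketch: interleaved (striped) vs spanned placement across array groups.
    CHUNK_BLOCKS = 512        # hypothetical interleave chunk size
    GROUPS = 4                # e.g. 4 x (7D+1P) = 28D+4P

    def interleaved_group(lba):
        # Round-robin: consecutive chunks land on consecutive array groups.
        return (lba // CHUNK_BLOCKS) % GROUPS

    def spanned_group(lba, group_size_blocks):
        # Spanned: an address stays on one group until that group is full.
        return min(lba // group_size_blocks, GROUPS - 1)

    for lba in (0, 600, 1200, 1800):
        print(lba, "interleaved ->", interleaved_group(lba),
              "spanned ->", spanned_group(lba, group_size_blocks=1_000_000))

With interleaving, even a small working set touches all four groups; with spanning it stays on one group until that group is exhausted.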
the pain is one part of the reality
Steven Ruby
Occasional Advisor

Re: [XP20000] LUSE and performance

Can you explain this a little better? You are saying that LUSE has no performance penalty; what version of microcode is this? Not that I don't believe what you are saying, just wondering when there were changes to the microcode specifically related to this.

>>>> BTW, there should not be any performance penalty in the newer firmware for LUSE versus single LDEVs within one PG...
>>>>
IBaltay
Honored Contributor

Re: [XP20000] LUSE and performance

Hi,
HP internal testing proved that OPEN-V LUSE volumes have at least the same or even better performance than single big LDEVs, and thus the LUSE concept has been demystified... It is logical, because it is a form of static load balancing, but the "magical/performance LUSE" member count is said to be 10-12, ...
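
As a back-of-the-envelope illustration of the static load balancing idea (my own Python sketch, not HP's test), uniformly random reads over a concatenation of equal members land roughly evenly on every member, so more parity groups end up sharing the work even though the LUSE itself is only a concat:

    # Quick simulation: how random reads spread over a 10-member concat LUSE.
    # Member count and size are assumptions.
    import random
    from collections import Counter

    N_MEMBERS = 10                        # the suggested 10-12 range
    MEMBER_BLOCKS = 100_000
    hits = Counter()
    for _ in range(100_000):
        lba = random.randrange(N_MEMBERS * MEMBER_BLOCKS)
        hits[lba // MEMBER_BLOCKS] += 1   # which member the read lands on
    print(sorted(hits.items()))           # counts per member come out roughly equal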
the pain is one part of the reality
IBaltay
Honored Contributor

Re: [XP20000] LUSE and performance

It certainly does not mean that we use LUSE where LVM is available, because there the host-based load balancing/distribution is much more flexible... and it is also not used with ThP...
the pain is one part of the reality
Steven Ruby
Occasional Advisor

Re: [XP20000] LUSE and performance

10-12 LDEVs in a LUSE is the magic number?

thanks for the info.


IBaltay
Honored Contributor

Re: [XP20000] LUSE and performance

The theoretical max for e.g. the XP12000 is 36 LDEVs in a LUSE, but 10-12 is the max for LUSE performance/management reasons. Once more to the topic, though: the main point here for Windows, and for all OSes that have no LVM, is using RAID5 concatenation of 7D+1P with interleaving (the max is 4x(7D+1P)), especially for the random-read-heavy load of SQL and Exchange, where e.g. 3D+1P could be an I/O bottleneck...
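
To put rough numbers on why spindle count matters for random reads, here is a hypothetical Python estimate (the per-disk figure is an assumed ballpark for a 15k spindle, not a measured XP value):

    # Hypothetical random-read IOPS estimate by spindle count.
    IOPS_PER_DISK = 150     # assumed ballpark, not a measured figure

    for layout, data_disks in (("3D+1P", 3), ("7D+1P", 7), ("4x(7D+1P)", 28)):
        # In RAID5 the parity is distributed, so all spindles can serve reads;
        # counting only the data disks keeps this a conservative floor.
        print(f"{layout:>10}: ~{data_disks * IOPS_PER_DISK} random read IOPS")

Roughly an order of magnitude separates 3D+1P from 4x(7D+1P) for a purely random read load, which is the point about SQL and Exchange above.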
the pain is one part of the reality
Steven Ruby
Occasional Advisor

Re: [XP20000] LUSE and performance

I agree with that.

Thanks for the info on LUSE; it makes me feel even better about my VMware config using them.
Nigel Poulton
Respected Contributor

Re: [XP20000] LUSE and performance

De-mystifying and handing out "magic" numbers with no explanation is hardly de-mystifying, right!? The performance tests can show one thing, but the "mystery" still remains if there is no explanation of "why".....

I'd be interested to hear more if you have it to hand. Share and share alike ;-)

My thoughts on the topic are as follows -

I rarely use LUSE. However, I see where it can have performance benefits over non-LUSE volumes, especially with random workloads, as you bring more spindles into play as long as the LDEVs in your LUSE come from different Array Groups. So I'm not a LUSE basher.... Of course RAID5 (7+1) is better, as you state.

Anyway, on earlier subsystems (the XP1024, and it's probably the same now) the way that LUSE-related metadata was handled was not as good as it is now.

The CHIPs would spend too much time reading LUSE metadata and the like. E.g. if a LUSE vol was heavily utilised, the array would spend a lot of time working out which LDEV the blocks were actually on, etc. For this reason, back then I believe best practice was a max of 8 LDEVs per LUSE. This was a breaking point where adding any more would see a nose-dive in performance due to the significant overhead of managing the layout and location of blocks within the LUSE volume. The more LDEVs in the LUSE, the more overhead....

LUSE metadata was stored in the reserved control cylinders on the "top LDEV" in the LUSE (OPEN-V volumes do not have control cylinders). This helps demystify the following -
1. Adding additional LDEVs to a LUSE volume "can" lose customer data. Reason: the amount of metadata required to address the extra LDEVs in the LUSE might exceed the reserved area for control cylinders and overwrite customer data at the beginning of the top LDEV (see the toy sketch after this list).
2. It is also the reason you cannot add a new LDEV to a LUSE that has a lower CU:LDEV address than the existing top LDEV. Doing so would change the top LDEV and move the metadata to an unsupported area of the LUSE volume.
3. It's also partly why you can destroy a LUSE, recreate it, and have all of your data intact (at least back on the XP1024), despite the message saying all data would be destroyed. As long as you create it exactly the same and have not formatted the LDEVs in between.
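
A toy Python sketch of the overflow risk in point 1, with entirely hypothetical sizes (both figures are made up for illustration, and the cut-off they happen to produce is not the real XP limit):

    # Toy model: does per-member metadata still fit in a fixed reserved area?
    RESERVED_BYTES = 8 * 1024          # assumed size of the control-cylinder area
    METADATA_PER_MEMBER = 1024         # assumed bytes of mapping data per member

    def metadata_fits(member_count):
        return member_count * METADATA_PER_MEMBER <= RESERVED_BYTES

    for n in (2, 8, 9, 16):
        print(n, "members:", "fits" if metadata_fits(n) else "would spill past the reserved area")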

Anyhow, you could often see a hot spot appear on that "top LDEV" as it was constantly referenced for LUSE metadata etc. That hot spot would become the bottleneck with more than 8(?) LDEVs in the LUSE.

Like I say, I don't use LUSE much these days, so things may have changed slightly. I have to assume that LUSE metadata for OPEN-V volumes is stored in SM, as they do not have reserved control cylinders. This might increase the number of LDEVs per LUSE before hitting the breaking point, as reading SM is way faster than reading an LDEV. But it's still an overhead and can become a bottleneck under certain scenarios, probably hence the current magic number of 10-12.

HTH, and I'd appreciate anybody else's thoughts.

Talk about the XP and EVA @ http://blog.nigelpoulton.com