02-18-2009 07:13 AM
[XP20000] LUSE and performance
I've read here that for SQL databases, which generate a lot of random read access, the best solution is to spread the data across as many disks as possible.
Perhaps I haven't understood it well, but if that is true, I would like to know whether creating a LUSE of seven LDEVs, choosing each LDEV from a different CU, is the best way to get the best read performance or not.
Regards,
02-18-2009 01:58 PM
Re: [XP20000] LUSE and performance
A LUSE is just a concatenation, so in reality your reads wouldn't be hitting all the disks, only the set of disks in the parity group (PG) that the data in the I/O is on.
You could always try it and see how it performs for you though.
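To make the "it's just a concat" point concrete, here is a minimal Python sketch (hypothetical member sizes and PG names, not actual XP firmware logic) for a 7-member LUSE like the one in the question: any single read offset falls inside exactly one member LDEV, so only that parity group's spindles service it.

```python
# Illustrative sketch only -- not XP firmware logic. It models a LUSE as a
# simple concatenation of member LDEVs and shows that any single I/O offset
# falls inside exactly one member, i.e. one parity group's disks serve it.
from bisect import bisect_right

# Hypothetical 7-member LUSE; each member LDEV is 50 GiB and sits in its own
# parity group (PG1..PG7). Sizes and names are made up for the example.
GIB = 1024 ** 3
members = [("LDEV %d (PG%d)" % (i, i), 50 * GIB) for i in range(1, 8)]

# Cumulative end offsets of each member within the concatenated address space.
ends = []
total = 0
for _name, size in members:
    total += size
    ends.append(total)

def member_for_offset(offset: int) -> str:
    """Return the member LDEV (and its PG) that holds this logical offset."""
    idx = bisect_right(ends, offset)
    return members[idx][0]

# A few scattered read offsets: each one lands on a single member / single PG.
for off in (10 * GIB, 120 * GIB, 333 * GIB):
    print("read @ %4d GiB -> %s" % (off // GIB, member_for_offset(off)))
```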
02-18-2009 02:01 PM
Re: [XP20000] LUSE and performance
For Windows apps, in the classical XP concept with no ThP (thin provisioning), there is the possibility to concatenate the array groups from 7D+1P to 14D+2P or even 28D+4P with interleaving/striping. That can then be combined with LUSE (the only option for distribution even though it is not striped, only spanned; the effective maximum recommended for performance reasons is up to 8 LDEV LUSE chunks). Hence the LUSE concept seems to be better than big LDEVs within one PG. BTW, there should not be any performance penalty in the newer firmwares for LUSE versus single LDEVs within one PG...
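A rough way to picture the difference between an interleaved (striped) array group and a spanned LUSE is the chunk-to-parity-group mapping. The Python sketch below uses made-up numbers (4 interleaved groups, tiny per-group capacities), not XP defaults; it only shows that striping rotates consecutive chunks across parity groups, while a spanned layout moves to the next group only once the previous one fills up.

```python
# Illustrative sketch only -- the stripe width and chunk counts below are made
# up, not XP defaults. It contrasts how consecutive chunks land on parity
# groups when array groups are interleaved (striped) versus when the space is
# merely spanned (concatenated), as with a plain LUSE.

PGS = 4            # e.g. four 7D+1P groups interleaved into a 28D+4P layout
CHUNKS_PER_PG = 8  # capacity of each PG, in chunks (tiny, for illustration)

def pg_striped(chunk_no: int) -> int:
    """Interleaved layout: chunks rotate round-robin across the PGs."""
    return chunk_no % PGS

def pg_spanned(chunk_no: int) -> int:
    """Spanned (concatenated) layout: PG changes only when one PG fills up."""
    return chunk_no // CHUNKS_PER_PG

print("chunk  striped-PG  spanned-PG")
for c in range(12):
    print(f"{c:5d}  {pg_striped(c):10d}  {pg_spanned(c):10d}")
```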
02-18-2009 02:12 PM
Re: [XP20000] LUSE and performance
> BTW, there should not be any performance penalty in the newer firmwares for LUSE versus single LDEVs within one PG...
02-18-2009 02:37 PM
Re: [XP20000] LUSE and performance
HP internal testing proved that OPEN-V LUSE volumes have at least the same or even better performance than single big LDEVs, and thus the LUSE concept has been demystified... It is logical, because it is a form of static load balancing, but the "magical/performance" LUSE member count is said to be 10-12, ...
02-18-2009 02:43 PM
Re: [XP20000] LUSE and performance
02-18-2009 03:15 PM
Re: [XP20000] LUSE and performance
Thanks for the info.

> IBaltay wrote (Feb 18, 2009 22:37:49 GMT):
> Hi,
> HP internal testing proved that OPEN-V LUSE volumes have at least the same or even better performance than single big LDEVs, and thus the LUSE concept has been demystified... It is logical, because it is a form of static load balancing, but the "magical/performance" LUSE member count is said to be 10-12, ...
02-18-2009 03:22 PM
Re: [XP20000] LUSE and performance
02-18-2009 03:26 PM
Re: [XP20000] LUSE and performance
Thanks for the info on LUSE; it makes me feel even better about my VMware config using them.
02-19-2009 03:55 AM
Re: [XP20000] LUSE and performance
I'd be interested to hear more if you have it to hand. Share and share alike ;-)
My thoughts on the topic are as follows -
I rarely use LUSE. However, I can see where it can have performance benefits over non-LUSE volumes, especially with random workloads, as you bring more spindles into play, as long as the LDEVs in your LUSE come from different array groups. So I'm not a LUSE basher.... Of course RAID5 (7+1) is better, as you state.
Anyway, on earlier subsystems (the XP1024, and it's probably the same now) the way that LUSE-related metadata was handled was not as good as it is now.
The CHIPs would spend too much time reading LUSE metadata and the like. For example, if a LUSE volume was heavily utilised, the array would spend a lot of time working out which LDEV the blocks were actually on, etc. For this reason, back then I believe best practice was a maximum of 8 LDEVs per LUSE. This was a breaking point where adding any more would see a nose-dive in performance due to the significant overhead of managing the layout and location of blocks within the LUSE volume. The more LDEVs in the LUSE, the more overhead....
LUSE metadata was stored on the reserved control cylinders of the "top LDEV" in the LUSE (OPEN-V volumes do not have control cylinders). This helps demystify the following -
1. Adding additional LDEVs to a LUSE volume "can" lose customer data. Reason: the amount of metadata required to address the extra LDEVs in the LUSE might exceed the reserved control-cylinder area and overwrite customer data at the beginning of the top LDEV.
2. It is also the reason you cannot add a new LDEV to a LUSE that has a lower CU:LDEV address than the existing top LDEV. Doing this would change the top LDEV and move the metadata to an unsupported area of the LUSE volume.
3. It's also partly why you can destroy a LUSE and recreate it with all of your data intact (at least back on the XP1024), despite the message saying all data would be destroyed, as long as you recreate it exactly the same and have not formatted the LDEVs in between.
Anyhow, you could often see a hotspot appear on that "top LDEV" as it was constantly referenced for LUSE metadata etc. That hot-spot would become the bottleneck with more than 8(?) LDEVs in the LUSE.
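To picture that hotspot, here is a toy Python simulation of the mechanism described above. It assumes each I/O to the LUSE costs one metadata reference on the top LDEV plus one data access on whichever member holds the block; the member count and I/O count are arbitrary, and this is not measured XP behaviour.

```python
# Toy simulation only -- it assumes, as described above, that every I/O to the
# LUSE needs a metadata reference against the "top LDEV" in addition to the
# data access itself. Member count and I/O count are arbitrary; this is just
# an illustration of why the top LDEV can become a hotspot.
import random
from collections import Counter

MEMBERS = 8          # LDEVs in the LUSE; member 0 is the "top LDEV"
IOS = 10_000         # random I/Os spread evenly across the LUSE

accesses = Counter()
for _ in range(IOS):
    accesses[0] += 1                          # metadata reference on top LDEV
    accesses[random.randrange(MEMBERS)] += 1  # data access on the member hit

for member, count in sorted(accesses.items()):
    label = "top LDEV" if member == 0 else f"member {member}"
    print(f"{label:9s}: {count} accesses")
```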
Like I say, I don't use LUSE much these days, so things may have changed slightly. I have to assume that LUSE metadata for OPEN-V volumes is stored in SM (shared memory), as they do not have reserved control cylinders. This might increase the number of LDEVs per LUSE before hitting the breaking point, as reading SM is way faster than reading an LDEV. But it's still an overhead and can become a bottleneck under certain scenarios, probably hence the current magic number of 10-12.
HTH, and I'd appreciate anybody else's thoughts.