[XP20000] LUSE and performance
02-19-2009 07:20 AM
Re: [XP20000] LUSE and performance
The logical block addresses within a LUSE (or any logical volume) are mapped linearly from beginning to end: the head LDEV holds the first address range, the next LDEV the following range, and so on. If the first LDEV within a LUSE (the head LDEV) is receiving more I/O operations than the other LDEVs in the LUSE, that is simply because the host application is issuing more I/Os to the address range near the beginning of the logical volume.
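To make that mapping concrete, here is a minimal sketch in plain Python (the LDEV size is a made-up illustrative number, not XP geometry) of how a concatenated layout sends an LBA to a member LDEV:

```python
# Hypothetical illustration of LUSE-style concatenation: LBAs fill the
# head LDEV first, then spill into the next LDEV, and so on.
LDEV_BLOCKS = 1_000_000  # assumed size of each member LDEV, in blocks

def luse_ldev_for_lba(lba: int, num_ldevs: int) -> int:
    """Return the index of the member LDEV that holds this LBA."""
    ldev = lba // LDEV_BLOCKS
    if ldev >= num_ldevs:
        raise ValueError("LBA beyond end of LUSE")
    return ldev

# An application working near the start of the volume only ever
# touches LDEV 0 (the head LDEV):
for lba in (0, 500_000, 999_999):
    print(lba, "->", luse_ldev_for_lba(lba, num_ldevs=4))  # all map to 0
```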
If you configured a LUSE in order to spread the I/O load over more than one parity group, you might want to look at using "concatenated parity groups" instead. With concatenated parity groups, each stripe goes across all the parity groups, so this does a much better job of distributing random I/O (which tends to be clustered in space: locality of reference) than a LUSE does. For the same reason, concatenated parity groups can give you higher sequential throughput if you have an application that steadily drives a port at 400 MB/s to a single LDEV.
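For contrast, a sketch of the interleaved case (again with a made-up stripe size, just to show the distribution, not actual XP stripe geometry): successive stripes rotate round-robin across all the parity groups, so even spatially clustered I/O spreads out:

```python
STRIPE_BLOCKS = 512  # assumed stripe size in blocks, for illustration only

def concat_pg_for_lba(lba: int, num_parity_groups: int) -> int:
    """Parity group holding this LBA when stripes are interleaved
    round-robin across all parity groups."""
    stripe = lba // STRIPE_BLOCKS
    return stripe % num_parity_groups

# A cluster of nearby LBAs still lands on different parity groups:
for lba in range(0, 4 * STRIPE_BLOCKS, STRIPE_BLOCKS):
    print(lba, "->", concat_pg_for_lba(lba, num_parity_groups=4))
```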
02-19-2009 07:34 AM
Re: [XP20000] LUSE and performance
I'm not sure if your last response was directed at me or at everyone in general?
I'm comfortable with concatenated parity groups and the like, which is why I don't use LUSE much these days.
I'm hoping you can provide some up-to-date theory on the inner workings of the XP to back up the 10-12 LDEV limit you mention.
02-19-2009 07:41 AM
Re: [XP20000] LUSE and performance
So to summarize: LUSE isn't the solution for increasing performance when there is a lot of random read access, because there is no guarantee that all LDEVs (and therefore all parity groups, and therefore all disks) are used.
To increase performance in this case, the best solution is to use concatenated parity groups.
This raises another question: I have read several times here that for performance we should use "at least 7+1 disks in a parity group", so is it possible to create a parity group larger than 7+1?
For now we are using parity groups composed of 2 columns of 4 disks; is it possible to create parity groups horizontally (13+1)?
That way we would have 4 parity groups of 14 disks, so I would be able to use all 56 disks of our disk unit.
Thank you again for helping me to understand.
Kind regards
02-19-2009 08:05 AM
Re: [XP20000] LUSE and performance
The RAID5 concatenation options are the following:
2x(7D+1P) = 14D+2P, interleaved = striped
4x(7D+1P) = 28D+4P, interleaved = striped
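A quick worked check against the 56-disk question above (my arithmetic, not from the thread): the building block stays 7D+1P, so concatenation changes how groups are striped together, not the geometry of a single group:

```python
# Disk counts for the RAID5 concatenation options listed above (7D+1P base).
disks_per_group = 7 + 1            # 8 disks per 7D+1P parity group

for groups in (2, 4):              # the 2x and 4x interleave options
    data = 7 * groups
    parity = 1 * groups
    total = disks_per_group * groups
    print(f"{groups}x(7D+1P) = {data}D+{parity}P, {total} disks")

# With 56 disks you get 56 // 8 = 7 parity groups of 7D+1P; a 13+1
# "horizontal" group is not among the options listed above.
print("7D+1P groups from 56 disks:", 56 // disks_per_group)
```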
02-19-2009 11:59 AM
Re: [XP20000] LUSE and performance
1. On OSes without an LVM, one problem is that with concatenated PGs the volume size is still limited to that of a 7D+1P LDEV, so you cannot create VOLs larger than e.g. 1 TB without LUSE even here.
2. At the same time, I had better correct my magic number from 10-12 back down to 8, to reduce the impact of the overhead on the first LDEV of a LUSE.
3. The overhead is mainly due to the host's single SCSI queue (lock control, etc.).
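As an aside on that single-queue point: on a Linux host you can see the per-device SCSI queue depth that all LDEVs behind one LUSE LUN have to share. A minimal sketch (the sysfs path is standard Linux for SCSI devices, but the device name sda is just an example):

```python
# Read the SCSI queue depth of a block device on Linux: every I/O to a
# LUSE goes through this one per-LUN queue, which is the overhead the
# post above refers to.
from pathlib import Path

def scsi_queue_depth(device: str = "sda") -> int:
    path = Path(f"/sys/block/{device}/device/queue_depth")
    return int(path.read_text().strip())

if __name__ == "__main__":
    print("queue depth:", scsi_queue_depth("sda"))  # example device name
```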
02-19-2009 12:15 PM
Re: [XP20000] LUSE and performance
General addition:
on the XP12k/24k the LUSE performance impact is minimal, approx. 1%;
some performance tests that exploit the static concurrency across a LUSE's LDEVs give better results on LUSE volumes than on a single LDEV in concatenated PGs.