Disk Enclosures

Re: [XP20000] LUSE and performance

 
IBaltay
Honored Contributor


Some more notes on LUSE and concatenation (limited to 32 physical disks):

The logical block addresses within a LUSE (or any logical volume) are mapped from beginning to end. If the first LDEV within a LUSE (the head LDEV) receives more I/O operations than the other LDEVs, it is simply because the host application is issuing more I/Os to the address range near the beginning of the logical volume.
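To illustrate that mapping, here is a minimal sketch (not XP firmware logic; LDEV sizes are made up) of how a concatenated LUSE resolves an address: blocks fill the head LDEV completely before the next LDEV is touched, so a workload clustered near address 0 lands entirely on the head LDEV.

```python
def luse_ldev_for_lba(lba, ldev_sizes):
    """Return (ldev_index, offset_within_ldev) for a logical block address
    in a concatenated LUSE: addresses fill each LDEV in order."""
    offset = lba
    for i, size in enumerate(ldev_sizes):
        if offset < size:
            return i, offset
        offset -= size
    raise ValueError("LBA beyond end of LUSE")

# Three equal LDEVs of 1000 blocks each (illustrative sizes):
sizes = [1000, 1000, 1000]
print(luse_ldev_for_lba(0, sizes))     # (0, 0)   -> head LDEV
print(luse_ldev_for_lba(999, sizes))   # (0, 999) -> still the head LDEV
print(luse_ldev_for_lba(1000, sizes))  # (1, 0)   -> second LDEV
```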

If you configured a LUSE in order to spread the I/O load over more than one parity group, you might want to look at using "concatenated parity groups" instead. With concatenated parity groups, each stripe goes across all the parity groups, which does a much better job of distributing random I/O (which tends to be clustered in space: locality of reference) than a LUSE does. For the same reason, concatenated parity groups can give you higher sequential throughput if you have an application that steadily drives a port at 400 MB/s to a single LDEV.
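The difference is easy to see in a toy model (illustrative block counts and stripe size, not XP internals): with concatenation, an address-clustered hot spot hits one parity group; with striping, consecutive chunks rotate across all groups.

```python
from collections import Counter

N_GROUPS = 4           # parity groups backing the volume (illustrative)
GROUP_BLOCKS = 10_000  # blocks per parity group's share (illustrative)
STRIPE_BLOCKS = 64     # blocks per stripe chunk (illustrative)

def concat_target(lba):
    """LUSE-style concatenation: fill group 0 first, then group 1, ..."""
    return lba // GROUP_BLOCKS

def stripe_target(lba):
    """Parity-group striping: consecutive chunks rotate across all groups."""
    return (lba // STRIPE_BLOCKS) % N_GROUPS

# Random I/O clustered near the start of the volume (locality of reference):
hot_lbas = range(5_000)
print(Counter(map(concat_target, hot_lbas)))  # every hot block hits group 0
print(Counter(map(stripe_target, hot_lbas)))  # load spread over all 4 groups
```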

the pain is one part of the reality
Nigel Poulton
Respected Contributor


IBaltay,

I'm not sure if your last response was directed at me or at everyone in general?

I'm cool with concatenated parity groups and the like, which is why I don't use LUSE much these days.

I'm hoping you can provide some up-to-date theory on the inner workings of the XP to back up the 10-12 LDEV limit you mention.
Talk about the XP and EVA @ http://blog.nigelpoulton.com
mathias.d
New Member


Thanks a lot to all of you who gave me so much useful information.

So to summarize: LUSE isn't the solution for increasing performance when there is a lot of random read access, because there is no guarantee that all LDEVs (and therefore all parity groups and all disks) are used.

To increase performance in this case, the best solution is to use concatenated parity groups.

This raises another question. I have read here several times that for performance we should use "at least 7+1 disks in a parity group". Is it possible to create a parity group larger than 7+1?

For now we are using parity groups composed of 2 columns of 4 disks; is it possible to create parity groups horizontally (13+1)?

That way we would have 4 parity groups of 14 disks each, so I would be able to use all 56 disks of our disk unit.

Thank you again for helping me understand.

Kind regards
IBaltay
Honored Contributor


Hi,
the RAID5 concatenation options are the following:
2x(7D+1P) = 14D+2P, interleaved = striped
4x(7D+1P) = 28D+4P, interleaved = striped
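As a quick sanity check of those disk counts (plain arithmetic, not an XP algorithm; the helper name is made up), n interleaved 7D+1P groups give n*7 data disks and n parity disks, and the 4x option reaches the 32-physical-disk limit mentioned earlier:

```python
def concat_layout(n_groups, data_disks=7, parity_disks=1):
    """Disk counts for n interleaved RAID5 (7D+1P) parity groups."""
    return {
        "data_disks": n_groups * data_disks,
        "parity_disks": n_groups * parity_disks,
        "total_disks": n_groups * (data_disks + parity_disks),
    }

print(concat_layout(2))  # {'data_disks': 14, 'parity_disks': 2, 'total_disks': 16}
print(concat_layout(4))  # {'data_disks': 28, 'parity_disks': 4, 'total_disks': 32}
```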

IBaltay
Honored Contributor


Nigel,
1. On OSes without an LVM, one problem is the fact that even with concatenated PGs the LDEV size is still limited to that of a 7D+1P LDEV, so e.g. you cannot create volumes larger than 1TB without LUSE even here.

2. At the same time, I had better correct my magic number from 10-12 back to 8, to reduce the impact of the overhead on the first LDEV of a LUSE.

3. The overhead is mainly due to the host's single SCSI queue (lock control, etc.).
IBaltay
Honored Contributor


Addition to paragraph 1: it certainly applies to 146GB disks.
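Back-of-the-envelope arithmetic for why the 1TB point above applies to 146GB disks (raw GB figures; real usable capacity is somewhat lower after formatting): a single 7D+1P LDEV exposes at most seven data disks' worth of capacity.

```python
disk_gb = 146    # raw capacity per disk
data_disks = 7   # data disks in a 7D+1P RAID5 group

max_ldev_gb = disk_gb * data_disks
print(max_ldev_gb)  # 1022 -> barely over 1 TB raw, so anything
                    # larger needs LUSE on an OS without an LVM
```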

General addition:
on the XP12K/XP24K the LUSE performance impact is minimal, approx. 1%;
some performance tests exploiting the static concurrency of LUSE LDEVs give better results on LUSE volumes than on a single LDEV in concatenated PGs.