03-28-2008 05:33 AM
Logical volume size reduces performance on 6404: why?
If I create my volume with the first command below, I get 450MB/sec. If I create the volume with the second command (size incremented by 1MB), the performance drops to 110MB/sec. Why?
hpacucli ctrl slot=2 create type=ld drives=all raid=6 size=261119 stripesize=64
hpacucli ctrl slot=2 create type=ld drives=all raid=6 size=261120 stripesize=64
Background:
- DL585 G1
- SA 6404 in a PCI-X 133 slot
- 28 U320 drives on ports A1 and A2 (same controller)
All firmware has been updated today using the HP Firmware 8.0 CD.
Test procedure:
I have the following command running in one window:
iostat -k /dev/cciss/c?d0
and I do a read test with this command:
cat /dev/cciss/c2d0 > /dev/null
while watching the iostat output.
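For reference, here is how the two requested sizes sit relative to a full RAID 6 stripe. This is only a sketch under assumptions not stated in the post: RAID 6 reserves two drives' worth of parity, so 26 of the 28 drives carry data, making one full stripe 26 x 64 KB = 1664 KB. Note that neither size lands exactly on a full-stripe boundary, so alignment alone may not explain the difference:

```shell
# Assumption: RAID 6 across 28 drives leaves 26 data drives,
# so one full stripe is 26 * 64 KB = 1664 KB.
data_drives=$((28 - 2))
full_stripe_kb=$((data_drives * 64))
for size_mb in 261119 261120; do
  size_kb=$((size_mb * 1024))
  echo "$size_mb MB: $((size_kb % full_stripe_kb)) KB past the last full stripe"
done
```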
04-01-2008 06:45 AM
Re: Logical volume size reduces performance on 6404: why?
Further information:
- The 28 drives are in MSA30 enclosures
- The drives are 300GB U320 10k RPM drives
We want to create 2TB logical volumes on the array.
04-01-2008 07:31 AM
Re: Logical volume size reduces performance on 6404: why?
The only thing that comes to mind is block alignment.
In the first case, for whatever reason, you got lucky and your RAID array sectors are aligned with your file system, so a single block write request from the OS generates just one write to the array.
In the second case you are unlucky: the sectors are not aligned, and each single block write request actually spans two blocks on the array (because of the offset).
If that's true, you should see roughly 50% performance degradation - still not a perfect explanation for your 4x drop!
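The "spans two blocks" effect can be sketched numerically. The offsets below are illustrative assumptions, not measured values from this array:

```shell
# Count how many 64 KB stripe units a single 64 KB request touches
# at an aligned vs. a misaligned starting offset.
stripe=$((64 * 1024))
req=$((64 * 1024))
for offset in 0 512; do
  first=$((offset / stripe))
  last=$(((offset + req - 1) / stripe))
  echo "offset $offset: touches $((last - first + 1)) stripe unit(s)"
done
```

An aligned request touches one stripe unit; shifting it by even 512 bytes makes it touch two, doubling the back-end I/O.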
Rgds.
04-04-2008 08:07 AM
Re: Logical volume size reduces performance on 6404: why?
I've made a further discovery about this problem:
- if the logical volume is partitioned, and I read from a partition, the problem does not occur:
cat /dev/cciss/c2d0p1 > /dev/null
- however, when I read the same logical volume directly, the problem does occur:
cat /dev/cciss/c2d0 > /dev/null
In the first case (reading c2d0p1), I get 400MB/sec. In the second case (reading c2d0), I get 100MB/sec. The logical volume is 2TB, and the partition fills the logical volume.
04-04-2008 08:31 AM
Re: Logical volume size reduces performance on 6404: why?
I've now created an LVM volume on top of the partitions; after raising the read-ahead buffer size, I can read the LVM volume at 400MB/sec.
Basically, this means we can now use the array, but I'm still very curious about the poor performance of the c2d0 device node.
hpacucli ctrl slot=2 array A create type=ld raid=6 stripesize=64
hpacucli ctrl slot=2 array A create type=ld raid=6 stripesize=64
hpacucli ctrl slot=2 array A create type=ld raid=6 stripesize=64
hpacucli ctrl slot=2 array A create type=ld raid=6 stripesize=64
hpacucli ctrl slot=2 logicaldrive all show
fdisk /dev/cciss/c2d0
fdisk /dev/cciss/c2d1
fdisk /dev/cciss/c2d2
fdisk /dev/cciss/c2d3
pvcreate /dev/cciss/c2d0p1
pvcreate /dev/cciss/c2d1p1
pvcreate /dev/cciss/c2d2p1
pvcreate /dev/cciss/c2d3p1
vgcreate vg_array2_test /dev/cciss/c2d?p1
vgscan
lvscan
pvscan
lvcreate -n vol0 -l 1859630 vg_array2_test
cat /dev/mapper/vg_array2_test-vol0 > /dev/null
# the read speed observed with iostat is 150MB/sec, slower than reading the partition
blockdev --getra /dev/mapper/vg_array2_test-vol0
# default value, 256, is too low
blockdev --setra 8192 /dev/mapper/vg_array2_test-vol0
cat /dev/mapper/vg_array2_test-vol0 > /dev/null
# iostat -k reveals a read speed of 400MB/sec while the cat command executes
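As a rough sanity check on those read-ahead values: `blockdev` counts read-ahead in 512-byte sectors, so the default of 256 is only 128 KB, far less than one full stripe of this array, while 8192 sectors is 4 MB. The 26-data-drive full-stripe figure below is an assumption about this RAID 6 layout:

```shell
# blockdev read-ahead is measured in 512-byte sectors.
full_stripe_kb=$((26 * 64))   # assumed: 26 data drives x 64 KB stripe = 1664 KB
for ra_sectors in 256 8192; do
  ra_kb=$((ra_sectors * 512 / 1024))
  echo "ra=$ra_sectors sectors -> $ra_kb KB (~$((ra_kb / full_stripe_kb)) full stripes)"
done
```

With the default, each read-ahead window cannot even fill one full stripe, so the drives are never kept busy in parallel; at 8192 sectors the window covers a couple of full stripes.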