02-21-2011 09:19 AM
Strange Benchmark Results on Smart Array P410 and what Raid Level to choose
I have four ProLiant DL180 G6 servers with the following specs:
2x Xeon E5620
16 GB RAM
2x 73 GB SAS
10x 2 TB SATA
Smart Array P410 with 512 MB BBWC
I was in a hurry when I got them, so I immediately deployed three of them with RAID 5 across the 10x 2 TB drives without any testing.
Now the I/O performance is pretty bad in certain cases.
These servers are used as download servers for a fairly big website; they run Debian/lighttpd with a 1 GbE uplink each.
Usually there are about 2,500 open file handles reading large files (500 MB to 4 GB) from the RAID 5 array.
Performance is fine as long as there are no writes to the array.
Every now and then I need to transfer new files to the servers, and whenever I do, the read performance drops badly.
Some figures to further explain what's happening:
Normal usage:
roughly 2,500 file handles
purely reading from the array
pushing out about 950 Mbit/s through the 1 GbE uplink
almost no strain on the server
iowait < 6%
virtually no load (< 3)
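(For reference, the iowait and load numbers above come from the standard /proc counters; below is a minimal Python sketch of how they can be sampled on a Debian box. Nothing in it is specific to the P410, and the 5-second sampling window is arbitrary.)

```python
# Sample the 1-minute load average and an approximate iowait percentage,
# the same way top/iostat derive them: read /proc/stat twice and diff.
import time

def cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq ..."
    values = [int(v) for v in open("/proc/stat").readline().split()[1:]]
    return sum(values), values[4]          # (total jiffies, iowait jiffies)

total1, iowait1 = cpu_times()
time.sleep(5)                              # arbitrary sampling window
total2, iowait2 = cpu_times()

iowait_pct = 100.0 * (iowait2 - iowait1) / max(total2 - total1, 1)
load_1min = float(open("/proc/loadavg").read().split()[0])
print(f"load (1 min): {load_1min:.2f}  iowait: {iowait_pct:.1f}%")
```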
Now when I start some file transfers to the servers, say four at a time coming in at 300 Mbit/s, the output immediately drops to 500 Mbit/s and below.
I am using these settings:
Scheduler: noop (letting the controller handle the scheduling)
Read-ahead: 16 MB
Drive Write Cache: enabled
Cache ratio: 25/75 (read/write)
Stripe size: 256 KB
Filesystem: JFS
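(A minimal sketch of how the two OS-side knobs above, scheduler and read-ahead, can be checked or set through sysfs; the device name sda is an assumption for whatever block device the logical drive appears as. The controller-side options, cache ratio, stripe size and drive write cache, are normally changed with HP's ACU/hpacucli tooling rather than through sysfs.)

```python
# Inspect and apply the OS-side I/O settings via sysfs (writing needs root).
DEV = "sda"                                   # assumed name of the logical drive
QUEUE = f"/sys/block/{DEV}/queue"

def show(path):
    with open(path) as f:
        return f.read().strip()

print("scheduler:    ", show(f"{QUEUE}/scheduler"))      # e.g. "[noop] deadline cfq"
print("read_ahead_kb:", show(f"{QUEUE}/read_ahead_kb"))  # 16384 KB == 16 MB

with open(f"{QUEUE}/scheduler", "w") as f:
    f.write("noop")                           # hand scheduling to the controller
with open(f"{QUEUE}/read_ahead_kb", "w") as f:
    f.write(str(16 * 1024))                   # 16 MB read-ahead
```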
Since I still have one server that is not deployed yet, I used it to benchmark different RAID levels and settings.
Using bonnie++ (testing with 32 GB files) on Debian 6 I got these results:
RAID 5, DWC disabled, 16 MB read-ahead cache: Seq Output (block) 318 MB/s, Rewrite 216 MB/s, Seq Input (block) 961 MB/s
RAID 5, DWC enabled, 16 MB read-ahead cache: Seq Output (block) 303 MB/s, Rewrite 199 MB/s, Seq Input (block) 955 MB/s
RAID 5+0, DWC disabled, 16 MB read-ahead cache: Seq Output (block) 297 MB/s, Rewrite 225 MB/s, Seq Input (block) 881 MB/s
RAID 5+0, DWC enabled, 16 MB read-ahead cache: Seq Output (block) 223 MB/s, Rewrite 172 MB/s, Seq Input (block) 726 MB/s
RAID 1+0, DWC disabled, 16 MB read-ahead cache: Seq Output (block) 214 MB/s, Rewrite 151 MB/s, Seq Input (block) 678 MB/s
RAID 1+0, DWC enabled, 16 MB read-ahead cache: Seq Output (block) 187 MB/s, Rewrite 120 MB/s, Seq Input (block) 469 MB/s
I restarted the server several times after the array initialization had completed.
Each test was repeated three times and the results averaged.
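(For reference, a sketch of how that run-three-times-and-average procedure can be scripted around bonnie++. The test directory, the -u user and the CSV column positions are assumptions; the columns in particular assume the classic 1.03-style CSV layout, so check them against the bonnie++ version Debian actually ships.)

```python
# Run bonnie++ three times and average the three block-I/O figures from its
# machine-readable CSV output (-q sends the CSV line to stdout).
import subprocess

RUNS = 3
CMD = ["bonnie++", "-d", "/mnt/test", "-s", "32g", "-n", "0", "-u", "root", "-q"]
COLS = {"Seq Output (block)": 4, "Rewrite": 6, "Seq Input (block)": 10}  # assumed 1.03 layout, KB/s

totals = {name: 0.0 for name in COLS}
for _ in range(RUNS):
    out = subprocess.run(CMD, capture_output=True, text=True, check=True).stdout
    fields = out.strip().splitlines()[-1].split(",")
    for name, col in COLS.items():
        totals[name] += float(fields[col])

for name in COLS:
    print(f"{name}: {totals[name] / RUNS / 1024:.0f} MB/s")
```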
I really do not understand why RAID 5 got better results than RAID 1+0, or why RAID 5+0 did not clearly outperform RAID 5.
Given these results, I also don't understand why the servers bog down completely when new files are transferred to them while they are heavily reading; according to the benchmarks they should handle that easily.
Please help me understand these results and advise me on which RAID level and settings to use for my usage scenario.
Thank you
Kind regards
Nikolaus S.
2 REPLIES
02-22-2011 07:24 AM
Re: Strange Benchmark Results on Smart Array P410 and what Raid Level to choose
For SATA RAID 5, those are not bad scores. If you don't need all that space, do RAID 1+0 on those drives: eight drives in RAID 1+0 with two spares.
RAID 5 can be very slow. Agonizingly slow. I have seen 10 MB/s per disk on writes. If your workload does a lot of writes, don't use RAID 5.
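(A rough sketch of the small-write penalty arithmetic behind that advice: a small random write costs about four disk I/Os on RAID 5, read old data, read old parity, write data, write parity; two on RAID 1+0; and RAID 5+0 behaves like RAID 5 within each parity group. The ~75 IOPS per 7.2k SATA drive is purely an assumption.)

```python
# Back-of-the-envelope random-write capacity of the 10-drive array per RAID level.
DRIVES = 10
IOPS_PER_DRIVE = 75                      # assumed figure for a 7.2k rpm SATA disk
WRITE_PENALTY = {"RAID 5": 4, "RAID 5+0": 4, "RAID 1+0": 2}

for level, penalty in WRITE_PENALTY.items():
    iops = DRIVES * IOPS_PER_DRIVE / penalty
    print(f"{level}: ~{iops:.0f} random write IOPS across the array")
```

The 512 MB BBWC can absorb short write bursts, which may be one reason a purely sequential benchmark does not show the problem as clearly as a mixed read/write production load does.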
02-22-2011 11:23 AM
Re: Strange Benchmark Results on Smart Array P410 and what Raid Level to choose
I need the space, but I would rather have better performance and just get a few more servers.
What I don't get are the benchmark results.
I can't figure out why RAID 1+0 and RAID 5+0 came out worse than RAID 5.
Never mind, I just deployed the last server with RAID 5+0; no more time for testing, I need the download servers live now.
I will follow up with an update on how RAID 5+0 is doing.