ProLiant Servers (ML,DL,SL)

Strange Benchmark Results on Smart Array P410 and what Raid Level to choose

Nikolaus S.
Occasional Visitor

Strange Benchmark Results on Smart Array P410 and what Raid Level to choose

I have four ProLiant DL180 G6 servers with the following specs:
2x Xeon E5620
16 GB RAM
2x 73 GB SAS
10x 2 TB SATA
Smart Array P410 with 512 MB BBWC

I was in a hurry when I got them, so I immediately deployed three of them with the ten 2 TB drives in RAID 5, without any testing.
Now the I/O performance is pretty bad in certain cases.

These servers are used as download servers for a fairly large website; they run Debian with lighttpd on a 1 GbE uplink each.

Usually there are about 2,500 open file handles reading large files (500 MB to 4 GB) from the RAID 5.
Performance is good as long as there are no writes to the array.
Every now and then I need to transfer new files to the servers, and whenever I do, read performance drops sharply.

Some figures to illustrate what is happening:
Normal usage:
roughly 2,500 file handles
purely reading from the array
pushing about 950 Mbit/s through the 1 GbE uplink
almost no strain on the server
iowait < 6%
virtually no load (< 3)

When I start some file transfers to the servers, say four at a time arriving at 300 Mbit/s, the output immediately drops to 500 Mbit/s and below.
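One way to see what the array is doing while such a transfer is in flight (device name is an assumption here, and iostat comes from the sysstat package):

```shell
# Watch per-device latency and utilization in 5-second samples, 6 samples.
# Assumes the RAID 5 logical drive appears as /dev/sda; adjust for your system.
iostat -x -d /dev/sda 5 6
```

The columns to watch are await (average I/O latency in ms) and %util; if await jumps from a few ms into the hundreds as soon as writes start, the reads are being starved by the RAID 5 write penalty rather than by the network.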

I am using these settings:
Scheduler: noop (letting the controller handle the scheduling)
16 MB readahead
Drive write cache enabled
Cache ratio 25/75 (read/write)
256 KB stripe size
JFS filesystem
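For reference, a sketch of how settings like these might be applied on Debian; the device name and controller slot are assumptions, and hpacucli syntax can vary between versions:

```shell
# Assumptions: the logical drive is /dev/sda and the P410 is in slot 0.

# I/O scheduler: noop, so the Smart Array controller does the reordering
echo noop > /sys/block/sda/queue/scheduler

# 16 MB readahead; blockdev takes the value in 512-byte sectors (32768 * 512 B)
blockdev --setra 32768 /dev/sda

# Controller cache split of 25% read / 75% write via HP's CLI
hpacucli ctrl slot=0 modify cacheratio=25/75

# Physical drive write cache on (risky on power loss; the BBWC only
# protects the controller cache, not the drives' own caches)
hpacucli ctrl slot=0 modify drivewritecache=enable
```

The stripe size cannot be changed in place; it is set when the logical drive is created.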

Since I still have one server that is not deployed yet, I used it to benchmark different RAID levels and settings.
Using bonnie++ with 32 GB test files on Debian 6, I got these results:

RAID 5, DWC disabled, 16 MB readahead cache
Seq output, block: 318 MB/s
Rewrite: 216 MB/s
Seq input, block: 961 MB/s

RAID 5, DWC enabled, 16 MB readahead cache
Seq output, block: 303 MB/s
Rewrite: 199 MB/s
Seq input, block: 955 MB/s

RAID 5+0, DWC disabled, 16 MB readahead cache
Seq output, block: 297 MB/s
Rewrite: 225 MB/s
Seq input, block: 881 MB/s

RAID 5+0, DWC enabled, 16 MB readahead cache
Seq output, block: 223 MB/s
Rewrite: 172 MB/s
Seq input, block: 726 MB/s

RAID 1+0, DWC disabled, 16 MB readahead cache
Seq output, block: 214 MB/s
Rewrite: 151 MB/s
Seq input, block: 678 MB/s

RAID 1+0, DWC enabled, 16 MB readahead cache
Seq output, block: 187 MB/s
Rewrite: 120 MB/s
Seq input, block: 469 MB/s

I restarted the server several times after the initialization of each array completed.
Each test was repeated three times and the results averaged.
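For anyone reproducing this, the runs were along these lines; the mount point and exact options here are my reconstruction, not the precise command used:

```shell
# Hypothetical bonnie++ invocation matching the description above:
# 32 GB test files (twice the 16 GB of RAM, so the page cache cannot
# absorb the whole working set), file-creation tests skipped.
bonnie++ -d /mnt/array -s 32g -n 0 -u root
```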

I really do not understand why RAID 5 scored better than RAID 1+0, or why RAID 5+0 did not clearly outperform RAID 5.

Given these results, I also do not understand why the servers bog down completely when new files are transferred to them during heavy reading; according to the benchmarks, they should handle it easily.

Please help me understand these results and advise me on which RAID level and settings to use for my usage scenario.

Thank you
Kind regards
Nikolaus S.
2 REPLIES
Michael A. McKenney
Respected Contributor

Re: Strange Benchmark Results on Smart Array P410 and what Raid Level to choose

For SATA RAID 5, those are not bad scores. If you don't need all that space, do RAID 1+0 on those drives: eight in the RAID 1+0 array with two spares.

RAID 5 can be very slow, agonizingly slow; I have seen 10 MB/s per disk on writes. If your workload does a lot of writes, don't use RAID 5.

Nikolaus S.
Occasional Visitor

Re: Strange Benchmark Results on Smart Array P410 and what Raid Level to choose

I need the space, but I would prefer better performance and would just buy a few more servers.

What I don't get are the benchmark results.
I can't figure out why RAID 1+0 and RAID 5+0 came out worse than RAID 5.

Never mind; I just deployed the last server with RAID 5+0. There is no more time for testing, as I need the download servers live now.

I will follow up with an update on how RAID 5+0 is doing.