System Administration

Is Raid 5 really this bad? or...

wobbe
Respected Contributor

Is Raid 5 really this bad? or...

Hi

I've set up an old ProLiant DL380 G2 as an NFS server that I use to back up VMware images.
But I think something is wrong, because this server with its 6x 144 GB 15K disks and BBWC gets outperformed (big time) by a single 160 GB ATA disk in a 7-year-old Dell workstation.

Some info.

O.S.: Debian Lenny netinstall on both machines, running pretty much the same packages and no GUI. NFS exports are on XFS, with XFS under LVM control. Mount options and export settings are all the same.
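For reference, the export and client mount are shaped roughly like this (paths, network, and option values here are illustrative, not my exact settings):

```shell
# /etc/exports on the server (illustrative values):
#   /export/backup  192.168.1.0/24(rw,async,no_subtree_check)
#
# Matching /etc/fstab line on the client, with large transfer sizes
# for big sequential files such as VMware images:
#   server:/export/backup  /mnt/backup  nfs  rsize=32768,wsize=32768  0  0
```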

Workstation: a 6 or 7 year old Dell workstation with 512 MB Rambus memory, a 1.8 GHz P4, and a 160 GB 7200 rpm ATA disk. Intel PCI gigabit Ethernet controller.

Server: ProLiant DL380 G2. 2x 1.4 GHz PIII (Xeon? not sure), RA5300 (Ultra3) with BBWC, 6x 144 GB 15K U320 hard disks, 2 GB RAM, NC7131 PCI-X gigabit Ethernet controller. BBWC set to 75% write. (Parity data has been calculated.)


I've attached some vmstat screenshots that clearly show the superior write performance of the Dell PC (tested with a direct link). I also added a screenshot of the server's boot log; there are some errors that might be related (I tried noapic). Or perhaps RAID 5 is just this slow and I'm wasting my time and yours.
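For anyone who wants a number instead of screenshots, a quick local sequential-write check of this shape (path and size are examples) takes NFS out of the picture:

```shell
# Write 64 MB of zeros to the array; conv=fdatasync forces the data to
# disk before dd exits, so the reported MB/s reflects the disks rather
# than the page cache.
dd if=/dev/zero of=/tmp/raidtest bs=1M count=64 conv=fdatasync
# In a second terminal, watch the 'bo' (blocks out) column while it runs:
#   vmstat 1
rm -f /tmp/raidtest
```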

WB

11 REPLIES
Steven E. Protter
Exalted Contributor

Re: Is Raid 5 really this bad? or...

Shalom WB,

RAID 5 is going to be slow on heavy write applications. The data has to be written to more places.

RAID 5 parity means every small write touches several different places: the old data and old parity are read, then the new data and new parity are written. This takes time.
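In rough numbers (spindle count and per-disk IOPS below are assumptions for illustration, not measurements of your box):

```shell
# RAID 5: each small random write costs 4 disk I/Os (read old data,
# read old parity, write new data, write new parity).
# RAID 10: each write costs 2 I/Os (one per mirror side).
N=6        # assumed: six spindles
D=170      # assumed: ~170 random IOPS per 15K drive
echo "RAID 5  random-write IOPS ~ $(( N * D / 4 ))"   # -> 255
echo "RAID 10 random-write IOPS ~ $(( N * D / 2 ))"   # -> 510
```

Large sequential writes can do better than this if the controller coalesces full stripes, which is one reason BBWC matters.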

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
wobbe
Respected Contributor

Re: Is Raid 5 really this bad? or...

Yeah, but this slow?
These are 6x 15,000 rpm disks!
Writes are on average not even 1/4 of the Dell PC's speed.
Viktor Balogh
Honored Contributor

Re: Is Raid 5 really this bad? or...

The underlying XFS might also cause this. I've done several tests with XFS (it's recommended especially for small read/write operations, like databases do), but the overall performance was worse than with the default ext3 settings... I had to revert back.
(OK, maybe it would have needed fine-tuning by a more competent person.)
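For what it's worth, the usual tuning is to tell XFS the array's stripe geometry at mkfs time. The device path and values below are illustrative only (and mkfs is destructive, so don't run it on a live volume):

```shell
# Align XFS to the RAID stripe: su = per-disk chunk size, sw = number
# of data disks (illustrative values for a 6-disk RAID 5, 64k chunk):
#   mkfs.xfs -d su=64k,sw=5 /dev/cciss/c0d0p2
# Typical mount options for large sequential files:
#   mount -o noatime,logbufs=8 /dev/cciss/c0d0p2 /export
```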

****
Unix operates with beer.
wobbe
Respected Contributor

Re: Is Raid 5 really this bad? or...

No, I can't believe that. I've done a few tests and XFS performs better: lower CPU usage and quick deletion of large files (I only have large files). Besides, both the server and the workstation are using XFS.
I need to reconfigure a production server tonight. Maybe I'll have time to convert the server to RAID 10.
wobbe
Respected Contributor

Re: Is Raid 5 really this bad? or...

OK, I've converted the system to RAID 10.
Performance is a bit better, but still less than half the speed of the Dell workstation.
So something is clearly wrong.
I've run an iperf test between the server and the workstation and it only scores about 300 Mbit/s. So perhaps the NC7131 NIC is the bottleneck. Perhaps the NIC needs proprietary firmware installed?
Viktor Balogh
Honored Contributor

Re: Is Raid 5 really this bad? or...

>So perhaps the NC7131 NIC is the bottleneck.

Good point. I didn't realize it, because your config wasn't really clear until now. It is a Gbit card with a 1000 Mbit/s _theoretical_ max speed, which is 125 MByte/s _theoretical_; without the TCP headers, collisions, and other overhead it is around 100 MByte/s, and that is also a theoretical max. If you have any other elements on the subnet the speed is much less. So don't expect too much from it, or go get a second NIC and do bonding.
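Spelled out (the overhead estimate is rough):

```shell
# Line rate to bytes: 1000 Mbit/s over 8 bits/byte = 125 MB/s ceiling,
# before TCP/IP and Ethernet framing overhead.
echo "theoretical: $(( 1000 / 8 )) MB/s"   # -> 125
# The 300 Mbit/s iperf result converted to bytes:
echo "measured:    $(( 300 / 8 )) MB/s"    # -> 37
```

So the measured 300 Mbit/s is well below even a pessimistic gigabit ceiling, which points at the NIC or its driver rather than the wire.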

(BTW, how is it connected to the backup system? Switch/hub? Speed? What's on the other side?)


Anyway, I wouldn't choose RAID 5 unless I had a high-quality RAID controller, which runs around 500 USD. IMHO software RAID 5 doesn't deliver the expected results, and a cheap HW RAID 5 is also not a solution worth configuring. The parity has to be computed live, on the fly. The high-end RAID controllers have enough cache to read from and write to, which is a serious performance gain. So RAID 1+0 seems a good decision. ;)
****
Unix operates with beer.
wobbe
Respected Contributor

Re: Is Raid 5 really this bad? or...

I'll try to install the same NIC in the server that I also use in the workstation.
But I'm not sure if this is going to work, since it will mean mixing PCI (the NIC) with PCI-X (the RA5300).
But performance is so bad it would probably fit a 100 Mbit connection.
If things don't improve I'll try installing CentOS 4.8. HP has some drivers for RHEL 4, so those will probably also work with CentOS.
wobbe
Respected Contributor

Re: Is Raid 5 really this bad? or...

Replaced the NIC. Network performance is better now: a little over 700 Mbit/s in the iperf test between workstation and server.
But write performance to the server still sucks.

Question: I'm mixing PCI with PCI-X on the server. Will there be a performance penalty for the PCI-X device?
wobbe
Respected Contributor

Re: Is Raid 5 really this bad? or...

Reinstalled the server with CentOS 4.8 plus the HP support pack. The performance is now better than the Dell workstation, though still not as good as I hoped it would be. It's about 25% better than the workstation.
Also, the PCI-X NIC is performing much better.
Viktor Balogh
Honored Contributor

Re: Is Raid 5 really this bad? or...

>Still not as good as I hoped it would be.

If you read back through your posts, you can see that you never gave exact figures for the performance. Performance has to be a measurable fact, not just an "expectation". Otherwise you cannot be sure that your system is underperforming.
****
Unix operates with beer.
wobbe
Respected Contributor

Re: Is Raid 5 really this bad? or...

Well, I posted those vmstat screenshots that clearly show the difference between the workstation and the server. Attached is a new screenshot of the server load when copying data to the server. As you can see, things have improved a lot with CentOS 4.8. But I guess I would expect more from 6x 144 GB 15K RAID 10 drives in comparison with a single ATA disk.