Edwin Eefting
Occasional Contributor

MegaRAID (megaide.o) driver slow in linux

Can someone help me with the problem described below? Our server is still painfully slow and LSI can't seem to help me.


----------------------------
Hi Edwin

If you are using onboard ATA (MegaRAID IDEal), please note:

1. It is software RAID, not hardware RAID.
2. LSI does not provide support for any LSI components embedded on a
motherboard. Support is the responsibility of the motherboard vendor or
OEM, as LSI does not carry out tests on this combination.

I am afraid that I will not be of much help to you in this case.

You ought to contact Compaq for more support.

Regards
Saba

---------- Forwarded message ----------
Date: Tue, 11 Nov 2003 13:00:23 +0100 (CET)
From: Edwin Eefting
To: Saba Nesar
Subject: MegaRaid (megaide) very slow

Hi,

I still have trouble with my HP ProLiant and MegaRAID configuration. I use
two 80 GB hard disks in a RAID 1 (mirror) configuration. The individual disks
get 54 MB/s in the hdparm -t test.

However, the mirrored configuration gets anywhere between 3 MB/s and 54 MB/s.
In the beginning this problem wasn't a big issue, as the very slow speeds only
occurred once in a while (we assumed this was normal and the system was busy
with something). However, after extensive use the system seems to get slower
and slower, and the load gets very high. Here is some test output from our
HP ProLiant DL320:

root@pizza:/home/erwin/Maildir/new# hdparm -t /dev/sda

/dev/sda:
Timing buffered disk reads: 64 MB in 14.34 seconds = 4.46 MB/sec

root@pizza:/home/erwin/Maildir/new# uptime
12:39:59 up 10 days, 21:00, 11 users, load average: 3.83, 3.97, 4.04

When the server was first booted, it would show 54 MB/s when I ran the
test again; however, after this uptime it stays that slow. When I run "top"
the box is idle most of the time, and both system and user CPU usage
are low:

12:40:59 up 10 days, 21:01, 11 users, load average: 3.92, 3.97, 4.04
166 processes: 161 sleeping, 5 running, 0 zombie, 0 stopped
CPU states: 2.8% user, 2.3% system, 0.0% nice, 94.9% idle
Mem: 515476K total, 447996K used, 67480K free, 80408K buffers
Swap: 979956K total, 105736K used, 874220K free, 138804K cached


I also tried disabling swap, and I tried running default kernels with the
default binary driver. Currently I use kernel 2.4.22 with the compiled
megaide driver I received from you. It seems that the bottleneck is I/O;
maybe the megaide driver isn't communicating with the IDE controller the
right way, somehow?

Of course I understand that a mirrored configuration is slower, but not
_this_ slow. On a system with limited I/O bandwidth I would expect up to a
50% drop in disk write performance (every block has to be written to both
disks) and no change in read performance, since only one disk is needed
while reading.

I hope you can help me, since our systems are currently slower than their
predecessors (Pentium 200s). Even a simple "ls" can take 10-15 seconds when
the system is "busy". Please let me know if I can provide you with any more
useful information. I'm even willing to give one of you root access to one
of our machines so you can try some tests yourself.

Thank you very much,
Edwin

>
> On Mon, 27 Oct 2003, Saba Nesar wrote:
>
> > Hi Edwin
> >
> > LSI Logic's shim driver has its RAID intelligence in the binary file
> > megaide_lib.o; the rest of the driver is open. megaide_lib.o can be built
> > against the open source to produce the driver image megaide.o. A source
> > tree is also provided, with a Makefile for the same. The following
> > instructions explain how to build the driver module.
> >
> > The above information can be used without an NDA. The files are attached.
> >
> > Regards
> > Saba
> >
> >
> > -----Original Message-----
> > From: Edwin Eefting [mailto:edwin@datux.nl]
> > Sent: 27 October 2003 13:00
> > To: eurosupport@lsil.com
> > Subject: MegaRaid (megaide) support for linux 2.4.22 kernel?
> >
> >
> > Hello,
> >
> > We're having trouble getting the default binary driver to work with a
> > couple of our customers' HP ProLiant servers. The drivers on the
> > lsilogic.com site are all for 2.4.20 kernels and don't seem to work with
> > newer versions.
> >
> > However, it's imperative that we use a 2.4.22 kernel because of other
> > technical issues. (2.4.20 is almost a year old now, and a lot has
> > happened in the meantime.)
> >
> > My question is: is it possible to get a newer binary driver, or is it
> > possible to get the source code of the driver after signing a
> > non-disclosure agreement? If this isn't possible, we're forced to look
> > into another solution with hardware from a different vendor for our new
> > customers.
> >
> > Thanks,
> > Edwin
> >
> > --
> >
> > //||\\ Edwin Eefting
> > || || || DatuX - Linux solutions and innovations
> > \\||// http://www.datux.nl
> >
> >
>
>

--

//||\\ Edwin Eefting
|| || || DatuX - Linux solutions and innovations
\\||// http://www.datux.nl
4 REPLIES
erik petersen_1
Occasional Advisor

Re: MegaRAID (megaide.o) driver slow in linux

Hi Edwin,
I'm not a fan of software RAID, but try ditching the LSI RAID configuration for JBOD at the controller and using standard Linux software RAID, with the mirror formatted using JFS.
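Roughly along these lines with mdadm (a sketch from memory, not tested here; the device names and mount point are assumptions, so match them to your own setup):

# mirror the two disks the controller now exposes as plain JBOD drives
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# format the mirror with JFS and mount it
mkfs.jfs /dev/md0
mount /dev/md0 /mnt/data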

Cheers!
-erik
The mind is like a parachute -- if it doesn't open, it doesn't work.
Edwin Eefting
Occasional Contributor

Re: MegaRAID (megaide.o) driver slow in linux

I figured that out myself by now. I had already used Linux software RAID (MD), but I figured it would be better to use LSI's software RAID so that I can rebuild a disk in the BIOS in case of failure.

With Linux MD you have to go through much more trouble, especially if you want to have multiple partitions: you can't create a partition INSIDE an MD device, so you have to create several MDs, one for every partition :(
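So for every mount point you end up doing something like this (the partition layout here is just an example):

# both disks partitioned identically beforehand, then one mirror per filesystem
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1   # /
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdc2   # /home
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hda3 /dev/hdc3   # swap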
erik petersen_1
Occasional Advisor

Re: MegaRAID (megaide.o) driver slow in linux

Have you tried LVM?
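It would let you put everything in one big mirror and carve out as many logical volumes as you like, instead of one MD per mount point. Something like this (sketch only; the volume group name and sizes are made up):

pvcreate /dev/md0              # turn the mirror into an LVM physical volume
vgcreate vg0 /dev/md0          # one volume group on top of it
lvcreate -L 20G -n home vg0    # carve out a logical volume per mount point
lvcreate -L 10G -n var vg0
mkfs.jfs /dev/vg0/home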
The mind is like a parachute -- if it doesn't open, it doesn't work.
Edwin Eefting
Occasional Contributor

Re: MegaRAID (megaide.o) driver slow in linux

No, I didn't test it with LVM; right now I just want the current setup to work correctly. (I don't want to take the server down and reinstall it at this point.)