HPE EVA Storage

Redhat 5 Linux I/O performance question

 
SOLVED
J Peak
Frequent Advisor

Redhat 5 Linux I/O performance question

We are running Red Hat Enterprise Linux 5 with an EVA 8000 connected through a Cisco MDS 9506 switch.

Our disk group contains ~144 x 76 GB drives. We're presenting a single LUN to our database server. This LUN has 16 paths managed by dm-multipath with round-robin path selection, and we are using the Linux CFQ I/O scheduler for the multipath device and all of its slave paths.
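For illustration, a minimal sketch of that kind of multipath and scheduler setup; the WWID, alias, and dm-7 device name below are placeholders, not values from this system:

    # /etc/multipath.conf (device-mapper-multipath on RHEL 5)
    multipaths {
        multipath {
            wwid   3600508b4000156d70001200000b00000   # placeholder WWID
            alias  dbdata
            path_grouping_policy multibus   # all 16 paths in one group, round-robin across them
        }
    }

    # Check which I/O scheduler each underlying path (slave) is using:
    for s in /sys/block/dm-7/slaves/*; do
        echo "$s: $(cat $s/queue/scheduler)"
    done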

The issue we are seeing is that iostat reports 100% utilization for the dm-7 device, and for whichever slave path it is currently using, while pushing only ~20 Mb/s of throughput.
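Figures like these come from extended iostat output; the device name and interval here are just examples:

    iostat -x dm-7 5

    # Columns of interest: r/s and w/s are IOPS, avgrq-sz is the average
    # request size in 512-byte sectors, await is queue-plus-service time
    # in ms, svctm is service time in ms, and %util is the fraction of
    # time the device had at least one request outstanding.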

These are 4 Gb Fibre Channel cards in PCIe x8 slots, four cards in total being used to access this LUN. The EVA controllers peak at about 50% utilization.

The only other thing that shows as a possible bottleneck is the CPUs, which are at 100%: 14 of them doing database work and 2 of them doing system-related activity. Our svctm is 0.35 ms and our average wait time is in the single-digit milliseconds.
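To tell whether that 100% is real work or time spent waiting on storage, per-CPU breakdowns along these lines are useful (the 5-second interval is arbitrary):

    # Per-CPU utilization: high %usr/%sys means real work; a high
    # %iowait would instead point at the storage path.
    mpstat -P ALL 5

    # Cross-check system-wide: the 'wa' column in vmstat is I/O wait.
    vmstat 5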

Any ideas why we're only seeing 20 Mb/s written while iostat shows 100% utilization on the device?

Thank you for any help,

I say thanks with pts.

 

P.S. This thread has been moved from Disk to Storage Area Networks (SAN) (Enterprise). -HP Forum Moderator

3 REPLIES
Víctor Cespón
Honored Contributor
Solution

Re: Redhat 5 Linux I/O performance question

The server CPU at 100% and the EVA controller CPU at 50% indicate a lot of I/Os being requested.
144 disks can give you roughly 7,000 - 15,000 IOPS, depending on whether they are 10K or 15K rpm and whether the vdisk is VRAID1 or VRAID5.
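As a rough illustration of where that range comes from (rule-of-thumb figures, not measurements from this array): a 10K rpm FC drive sustains on the order of 100-150 random IOPS, so 144 drives give very roughly 14,000-21,000 raw back-end IOPS; VRAID1 costs 2 back-end writes per host write and VRAID5 costs 4, which pulls the host-visible number down into that 7,000-15,000 band depending on the read/write mix.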

You should check the average size of the I/Os you're sending to the EVA.
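One way to read that from the host side (dm-7 is assumed, and the arithmetic below takes a ~20 MB/s figure as a working assumption): iostat's avgrq-sz column is in 512-byte sectors, so

    iostat -x dm-7 5

    # avgrq-sz of 16 means 16 x 512 B = 8 KB per I/O.
    # At 8 KB per I/O, 20 MB/s of throughput is 20480 / 8 = 2560 IOPS:
    # small requests mean high IOPS (and high CPU) at low bandwidth.

If the average request turns out to be small, the links can look idle on bandwidth while the host is still issuing a very large number of operations.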

You can use EVAperf to log performance data and see exactly what's happening. Run evaperf -cont -dur 300 -csv -fo data.csv.
Compress the file and attach it here.
Tom O'Toole
Respected Contributor

Re: Redhat 5 Linux I/O performance question


You are saying you have 16 CPUs all at 100%? That sounds like your problem. Is this I/O wait time?
Can you imagine if we used PCs to manage our enterprise systems? ... oops.
J Peak
Frequent Advisor

Re: Redhat 5 Linux I/O performance question

What we've found is that all of our systems report 100% utilization in iostat regardless of how much data is being pushed through them. iostat's %util appears to be a poor indicator here.
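That matches how the statistic is defined: %util only measures the fraction of time the device had at least one request in flight, so a device that can service many requests in parallel (a multipath LUN backed by 144 spindles) can sit at 100% while far from saturated. A minimal sketch of where the number comes from (dm-7 and the 5-second interval are assumptions):

    # The 13th field of /proc/diskstats is io_ticks: milliseconds the
    # device spent with at least one I/O in flight. %util is just
    # delta(io_ticks) / interval -- it says "busy", not "at capacity".
    t1=$(awk '$3 == "dm-7" {print $13}' /proc/diskstats)
    sleep 5
    t2=$(awk '$3 == "dm-7" {print $13}' /proc/diskstats)
    echo "util: $(( (t2 - t1) / 50 ))%"   # (ms busy / 5000 ms) * 100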

On the controller side, our EVAs reach up to 50% controller CPU utilization, but only during our heaviest work. Nothing seems amiss there; we have a large spindle count and hit cache about 80% of the time.

Thank you for the help with this problem. It appears our iostat figures are misleading and we are bound only by our CPUs, which is a known issue.