HPE EVA Storage
Redhat 5 Linux I/O performance question
08-25-2008 12:08 PM - last edited on 03-27-2014 06:38 PM by Lisa198503
We are running Red Hat Linux 5 with an EVA 8000 connected through a Cisco MDS 9506 switch.
Our disk array group is ~144 drives of 76 GB each. We're presenting a single LUN to our database server. This LUN has 16 paths managed by device-mapper multipath with round-robin path selection. We are using the Linux CFQ I/O scheduler for the multipath device and all of its slaves.
The issue we are seeing is that iostat shows the dm-7 device, and whichever slave path it is currently using, at 100% utilization while sustaining only ~20 Mb/s of throughput.
These are 4 Gb Fibre Channel cards in PCI-E x8 slots; four cards in total are used to access this LUN. The EVA controllers peak at about 50% utilization.
The only other thing that shows as a possible bottleneck is the CPUs, which are at 100%: 14 of them running the database and 2 handling system-related activity. Our svctm is 0.35 ms and our average wait time is in the single-digit milliseconds.
Any ideas why we're only seeing 20 Mb/s written while iostat shows 100% utilization on the device?
Thank you for any help,
I say thanks with pts.
P.S. This thread has been moved from Disk to Storage Area Networks (SAN) (Enterprise). -HP Forum Moderator
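For reference, the mismatch described above (100% utilization at low throughput) is consistent with small random I/O, since throughput is just IOPS times average request size. A minimal sketch of that arithmetic, using an assumed IOPS figure that is not from this thread:

```python
# Back-of-envelope check: a device can sit at 100% busy with low MB/s
# if the requests are small and random. The 5000 IOPS figure below is
# an assumption for illustration, not a measurement from this system.

def implied_request_size_kb(throughput_mb_s: float, iops: float) -> float:
    """Average request size (KiB) implied by a throughput/IOPS pair."""
    return throughput_mb_s * 1024 / iops

# 20 MB/s at an assumed 5000 IOPS works out to ~4 KiB per I/O, i.e.
# small random I/O, which saturates spindles long before the FC links.
print(implied_request_size_kb(20, 5000))  # -> 4.096
```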
08-26-2008 01:36 AM
Solution: 144 disks can give you 7000 - 15000 IOPS, depending on whether they are 10K or 15K rpm and whether the vdisk is vraid1 or vraid5.
You should check the average transfer size for the transfers you're doing to the EVA.
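A rough sketch of the spindle-count arithmetic behind an estimate like the 7000 - 15000 IOPS above; the per-disk rates and write penalties are common rule-of-thumb values, not measurements from this EVA:

```python
# Rule-of-thumb per-spindle rates and vraid write penalties (assumed
# typical values, not measured on this array).
PER_DISK_IOPS = {"10K": 50, "15K": 105}     # conservative per-spindle IOPS
WRITE_PENALTY = {"vraid1": 2, "vraid5": 4}  # back-end I/Os per host write

def backend_iops(disks: int, rpm: str) -> int:
    """Aggregate back-end IOPS the spindles can sustain."""
    return disks * PER_DISK_IOPS[rpm]

def host_write_iops(disks: int, rpm: str, vraid: str) -> float:
    """Host-visible write IOPS after the vraid write penalty."""
    return backend_iops(disks, rpm) / WRITE_PENALTY[vraid]

print(backend_iops(144, "10K"))                 # -> 7200
print(backend_iops(144, "15K"))                 # -> 15120
print(host_write_iops(144, "10K", "vraid5"))    # -> 1800.0
```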
You can use EVAperf to log performance data and check exactly what is happening. Run: evaperf -cont -dur 300 -csv -fo data.csv
Compress the file and attach it here.
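Once you have the CSV, the samples are easy to scan with a script. A minimal sketch of filtering for small average transfer sizes; note the header names below are hypothetical placeholders, since the real columns depend on the EVAperf version and the counters logged:

```python
import csv
import io

# Inline sample standing in for an EVAperf CSV export. The column
# names (ReadMBs, WriteMBs, AvgTransferKB) are hypothetical; check
# the actual header row of your data.csv.
sample = io.StringIO(
    "Timestamp,ReadMBs,WriteMBs,AvgTransferKB\n"
    "12:00:00,10.2,19.8,4.1\n"
    "12:00:05,11.0,20.3,4.0\n"
)

# Flag samples with a small average transfer: small random I/O can
# drive device utilization to 100% while total MB/s stays low.
small = [
    (row["Timestamp"], float(row["AvgTransferKB"]))
    for row in csv.DictReader(sample)
    if float(row["AvgTransferKB"]) < 8
]
print(small)  # -> [('12:00:00', 4.1), ('12:00:05', 4.0)]
```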
08-26-2008 09:49 AM
Re: Redhat 5 Linux I/O performance question
You are saying you have 16 CPUs all at 100%? That sounds like your problem. Is that I/O wait time?
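One way to answer that question is to compare the iowait share against the rest of the CPU time. A minimal sketch that samples /proc/stat twice (Linux only; tools like mpstat or top report the same figure):

```python
import time

def cpu_fields():
    # First line of /proc/stat: aggregate jiffies per CPU state
    # (user, nice, system, idle, iowait, ...).
    with open("/proc/stat") as f:
        parts = f.readline().split()
    return [int(x) for x in parts[1:]]

def iowait_percent(interval: float = 1.0) -> float:
    """Share of CPU time spent in iowait over the sampled interval."""
    before = cpu_fields()
    time.sleep(interval)
    after = cpu_fields()
    delta = [b - a for a, b in zip(before, after)]
    total = sum(delta)
    return 100.0 * delta[4] / total if total else 0.0  # field 4 = iowait

print(round(iowait_percent(0.5), 1))
```

A high iowait share would point at storage; CPUs pinned in user/system time with near-zero iowait would point at the workload itself.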
08-27-2008 10:42 AM
Re: Redhat 5 Linux I/O performance question
On the controller side, our EVAs reach up to 50% controller CPU utilization, but only during our heaviest workloads. Nothing seems amiss here: we have a large spindle count and an 80% cache hit rate.
Thank you for the help with this problem. It appears our iostat output gives misleading information and we are bound only by our CPUs, which is a known issue.