p4500 high queue depth, poor performance
01-05-2013 02:52 PM
Hello,
I have a 4-node P4500 G2 cluster with 600GB SAS drives, running about 15 VMs.
I also have a 5-node VSA cluster with 4 VMs, running on old ML350 G5 servers with six 750GB SATA drives.
During off hours, running IOmeter on a Windows 2003 guest on the same ESX host, I can't get more than 25MB/sec out of the P4500, but on the VSA I can get 90-100MB/sec.
I'm ruling out ESX or the switch/network config as the problem, because both of these SANs are connected to the same pair of 4948 switches and to the same 5 ESX hosts. Load on the P4500 is a bit higher, but during off hours it's maybe 200 IOPS, which should be nothing for a cluster of that size.
What I do notice is that the queue depth jumps up quite high when trying to do anything with the P4500 cluster, reads or writes.
Everything is also running the same SAN/iQ version, 9.5.00.1215.0. I also don't see any hardware issues on the P4500, so all I can think of is that it needs an update or needs to be restarted... because it wasn't slow like this before.
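In case it's useful for comparison, this is roughly how I've been watching the queue depth from the ESX side: esxtop in batch mode, then looking at the disk device columns (DQLEN, ACTV, QUED, DAVG/cmd). The exact column names can differ a bit between ESX versions, and the interval/count below are just what I happened to use:

    # capture 10 minutes of stats at 5-second intervals to a CSV
    esxtop -b -d 5 -n 120 > p4500-stats.csv
    # or interactively: run esxtop, press 'u' for the disk device view,
    # and watch DQLEN / ACTV / QUED and DAVG/cmd for the P4500 volumes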
01-06-2013 08:12 AM
Re: p4500 high queue depth, poor performance
01-13-2013 02:17 PM
Re: p4500 high queue depth, poor performance
Did the update to version 10.0.00.1888.0. It looks like it solved all of the latency issues I was seeing on my ESX hosts, but the slowness continues.
The high queue depth is also less frequent, but it still shows up when running I/O tests.
Anyone else seeing this?
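For anyone comparing notes, this is roughly how I re-checked the network path before pointing at the SAN. It's just a sketch: the address is a placeholder for the cluster VIP, and the vmkping flags may vary by ESX version (8972 = 9000-byte jumbo frame minus IP/ICMP headers; drop the -s if you're on standard 1500 MTU):

    # from the ESX console, test the iSCSI vmkernel path
    # with a large packet and the don't-fragment bit set
    vmkping -d -s 8972 192.168.10.50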
01-13-2013 04:29 PM
Re: p4500 high queue depth, poor performance
Queue depth also seems better on version 10. I did another test by taking a Windows 2003 server and connecting it directly to the SAN with the iSCSI initiator, and I get the same results, so this appears to be an issue with the SAN rather than with ESX.
If anyone else can confirm that they get good results with this version of SAN/iQ, I'd be interested in knowing more about your setup.
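For anyone wanting to repeat the direct-attach test, this is more or less what I ran with the Microsoft iSCSI initiator command line on the 2003 box. The portal address and target IQN below are placeholders for your own cluster VIP and volume:

    iscsicli AddTargetPortal 192.168.10.50 3260
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2003-10.com.lefthandnetworks:mygroup:42:testvol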
01-30-2013 05:38 PM
Solution