HPE StoreVirtual Storage / LeftHand

p4500 high queue depth, poor performance

SOLVED
danletkeman
Frequent Advisor

p4500 high queue depth, poor performance

Hello,

I have a 4-node P4500 G2 cluster with 600GB SAS drives, running about 15 VMs.

 

I also have a 5-node VSA cluster with 4 VMs, running on old ML350 G5 servers with 6 750GB SATA drives.

 

During off hours, using IOMeter on a 2K3 guest on the same ESX host, I cannot get more than 25MB/sec on the P4500, but on the VSA I can get 90-100MB/sec.
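Not the poster's actual IOMeter configuration, but a minimal sketch of the kind of sequential-throughput check being described, assuming a writable test path on the volume under test:

```python
import os
import time

def seq_write_throughput(path, total_mb=100, block_kb=1024):
    """Write total_mb of data in block_kb chunks and return MB/sec."""
    block = b"\0" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data to the array, not just the page cache
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

# e.g. seq_write_throughput("/vmfs/volumes/p4500-vol/testfile")  # hypothetical path
```

A single sequential stream like this is only a rough proxy for IOMeter; the point is comparing the same test against both clusters from the same host.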

 

I am ruling out ESX or the switch/network config as the problem because both of these SANs are connected to the same pair of 4948 switches and to the same 5 ESX hosts.  Load on the P4500 is a bit higher, but off hours it is maybe 200 IOPS, which should be nothing for a cluster that size.

 

What I do notice is that the queue depth jumps quite high when trying to do anything with the P4500 cluster, reads or writes.

 

Everything is also running the same version, 9.5.00.1215.0.  I also don't see any hardware issues on the P4500, so all I can think is that it needs an update or a restart... because it wasn't slow like this before.

4 REPLIES
danletkeman
Frequent Advisor

Re: p4500 high queue depth, poor performance

Just to add: I am seeing 1000-2000ms latency in the graphs from my VMware hosts for the P4500 SAN, whereas I am only seeing 20-30ms for the VSA SAN.
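Per-I/O latency like this can also be sanity-checked from a guest, outside the vSphere graphs. A rough sketch (not the poster's tooling) that times small synchronous writes to a file on the affected volume:

```python
import os
import time

def avg_write_latency_ms(path, iters=200, block=4096):
    """Issue iters synchronous 4KB writes and return mean latency in ms."""
    buf = b"\0" * block
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    try:
        start = time.time()
        for _ in range(iters):
            os.write(fd, buf)
            os.fsync(fd)  # each write must complete on the array before the next
        return (time.time() - start) / iters * 1000.0
    finally:
        os.close(fd)
        os.remove(path)
```

On a healthy array this should land in the low tens of milliseconds at worst; sustained 1000-2000ms readings point at something between host and disk.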
danletkeman
Frequent Advisor

Re: p4500 high queue depth, poor performance

Did the update to version 10.0.00.1888.0; it looks like it solved the latency issues I was seeing on my ESX hosts, but the slowness continues.

 

The high queue depth is also less frequent, but it still shows up when running I/O tests.

 

Anyone else seeing this?

danletkeman
Frequent Advisor

Re: p4500 high queue depth, poor performance

Queue depth also seems better on version 10.  I did another test by taking a 2003 server and connecting it directly to the SAN using the iSCSI initiator, and I get the same results.  So this appears to be an issue with the SAN, not with ESX.

 

If anyone else can confirm that they get good results with this version of SAN/iQ, I would be interested in knowing more about your setup.

 

 

danletkeman
Frequent Advisor
Solution

Re: p4500 high queue depth, poor performance

Found my issue: I had a bad ASIC on one of my switches. After fixing that we set up some I/O test servers, and I can now get 5000+ IOPS and 450MB/sec on a 4-node P4500 G2 cluster with a queue depth of around 80. I'm sure it will do more, but this was with only 3 test servers running against it.
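For what it's worth, those numbers hang together under Little's Law (outstanding I/Os = IOPS × service time): ~80 outstanding at 5000 IOPS implies about 16ms per I/O, and 450MB/sec over 5000 IOPS works out to roughly a 92KB average I/O size. A quick check with the figures from the post:

```python
# Little's Law: queue_depth = iops * service_time
iops = 5000
queue_depth = 80
service_time_ms = queue_depth / iops * 1000.0   # average latency per outstanding I/O

throughput_mb = 450
avg_io_kb = throughput_mb * 1024 / iops          # implied average I/O size

print(service_time_ms, avg_io_kb)                # 16.0 ms, 92.16 KB
```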