Array Setup and Networking

Maximum iSCSI Port Queue Depth

Occasional Visitor

Maximum iSCSI Port Queue Depth

Dear All,

We are getting a lot of alerts from VMware vCOps about volume latency lately.

I suspect that the queue depth on our Nimble array's iSCSI ports is filling up at certain times of the night. We have a 2-port CS460.

Is there a way to report the queue depth usage on the array?

What is the max queue depth setting on the array?


Occasional Contributor

Re: Maximum iSCSI Port Queue Depth

Hi KimYong,

Nimble uses a queue depth of 128 per iSCSI connection. The details are explained at the link below. The vCOps alerts could be due to threshold settings, but Nimble Support can check the system status for you.

esxtop shows these parameters.

Where does the "128" value come from?

The 128 is actually set by Nimble. iSCSI has an advantage over FC here: the target can tell the initiator how much queue depth it has, and the initiator can then use that value (or a lower one). In our case we respond with 128. That value is per session, so if you have multiple paths you’ll actually get (#_paths * 128) as the total queue depth for that LUN.

Is "128" the Queue Depth at LUN level?

The queue depth is actually at the connection level. If there are 2 connections/sessions, the total would be 256 per LUN/volume.
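To make the arithmetic above concrete, here is a minimal sketch. The 128 per-connection value comes from this thread; the function name and the example path counts are just illustrations, not anything from the array's software:

```python
# Per-connection queue depth advertised by the Nimble target (per this thread).
PER_CONNECTION_QDEPTH = 128

def effective_lun_qdepth(num_connections: int) -> int:
    """Total queue depth for a LUN/volume: connections x per-connection depth."""
    return num_connections * PER_CONNECTION_QDEPTH

# Example: 2 iSCSI sessions/paths to a volume -> 256 outstanding commands.
print(effective_lun_qdepth(2))  # -> 256
# Example: 4 paths -> 512.
print(effective_lun_qdepth(4))  # -> 512
```

This is also why adding paths (e.g. via MPIO) raises the effective queue depth available to a volume, assuming the initiator honors the advertised per-session value.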

What is the typical queue depth at the array port in Nimble storage? 128 seems high; is that because Nimble is flash storage and the LUNs can process a high number of commands?

The value of 128 was selected so that queue depth would not generally be a performance limitation, so yes, because the Nimble arrays are high performance. The cost of a deeper queue is also not high in our architecture. It should be noted that the queue depth is not per port but per iSCSI session/connection.