
NCM iSCSI target counts differ for volume vs VVOL?

EliteX2Owner
Advisor

Hi all, running vSphere 6.5 with vCenter (VCSA) 6.5 and Nimble Connection Manager 4.1.  I have both traditional volumes and VVol space defined, and I'm using iSCSI targets with dynamic discovery.  For the VVol datastore, I've found that NCM increases the iSCSI path count to six once some heavy load occurs, and seems to leave it there.  For the normal LUN with VMFS on top, which I've confirmed is also managed by NCM, it is only using two paths.  We have more bandwidth available on the network side as well as on the Nimble array itself, so I'm curious why there's a difference, and how to get the traditional storage to use all the paths.
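
For reference, here's how I've been checking the counts from the ESXi shell; the adapter name and device ID below are just examples from my environment, so substitute your own:

  # one entry per active iSCSI connection on the adapter
  esxcli iscsi session list --adapter=vmhba64

  # paths behind a specific device (VMFS LUN or VVol protocol endpoint)
  esxcli storage core path list --device=naa.xxxxxxxxxxxxxxxxxxxx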

chris24
Respected Contributor

Re: NCM iSCSI target counts differ for volume vs VVOL?

Hello,

Please raise a support case regarding this; we will be very keen to review the information! The behaviour appears out of step with how both NCM and VVols work: more paths will not improve performance unless you have multiple (slow) 1Gb links, multiple subnets (more than two), or a storage pool spanning multiple arrays.

Definitely a question to raise with support. Drop them an email with the serial number, timestamp, and your observations, and I will follow up this thread with their findings.

Many thanks,

Chris

EliteX2Owner
Advisor

Re: NCM iSCSI target counts differ for volume vs VVOL?

Will do.  In our case we do only have two subnets, but the host side has more than 20 Gbps for storage traffic, and we're talking to an AF7k with six 10 GbE interfaces, so I'm not clear on why we wouldn't achieve greater than 20 Gbps of throughput if NCM can spread I/O across more than two interfaces.

Thanks

EliteX2Owner
Advisor

Re: NCM iSCSI target counts differ for volume vs VVOL?

Oh, one additional note.  I just noticed the iSCSI targets for the VMFS volume happen to also be the discovery IPs, while the targets for the VVol volume are the six specific interface IPs.  I believe the VVol datastore was created in vCenter using the Nimble plugin, but the regular storage may have been created on the Nimble side and just rescanned on the vCenter side to make it show up.  Maybe the plugin adds the targets as static instead of relying on discovery?
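
For comparison, both target lists can be pulled on the host like this (vmhba64 is just an example adapter name):

  # dynamic (send targets) discovery addresses
  esxcli iscsi adapter discovery sendtarget list --adapter=vmhba64

  # statically configured targets, if any
  esxcli iscsi adapter discovery statictarget list --adapter=vmhba64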

chris24
Respected Contributor

Re: NCM iSCSI target counts differ for volume vs VVOL?

Hello David,

Creating multiple paths will not increase throughput (the bottleneck will be the 2 x 10Gb switches). The target will be the discovery IP; the connections (shown as 'static', and rebalanced by NCM over time or by IOPS) are then redirected to the actual data interfaces (6x). You can see this for all volumes in Monitor > Connections; they all (VVols and datastores) have one-to-one connections.

However, what's interesting here is that spreading large I/O over multiple interfaces is absolutely the right way to go (don't saturate a single incoming adapter, after all you have multiple hosts), and the behaviour does tally with what the array would do with a host running NCM.

All volumes use the discovery IP and get redirected accordingly, so the question here is why the behaviour is different. Support will have the answer.

EliteX2Owner
Advisor

Re: NCM iSCSI target counts differ for volume vs VVOL?

The more detailed setup: Cisco UCS on the hosting side, so the networking is virtualized and we've got about 80 Gbps to each server blade.  The storage traffic leaves via two sets of 4 x 10 Gb links to two dedicated storage switches, and the AF7k is connected across the same switches, with one subnet per switch.

This same setup is also talking to an EMC XtremIO with PowerPath managing the connections, and we routinely see throughput as high as 30 to 40 Gbps from a single brick with 4 x 10 Gb.  The switches have 200 Gbps between them, so as long as the connection manager (PowerPath or NCM) is load balancing properly, the LACP hashing regularly lets us exceed 20 Gbps, given there are multiple sources and multiple targets.  If NCM is only going to load balance across two targets, we're going to be stuck at the connection speed of those two targets, and we're seeing throughput plateau right where you'd expect it from the wire speed of 2 x 10 Gb with jumbo frames and delayed ACK.

The VVol datastore achieves higher throughput, and we're seeing NCM load balance it across four of the six targets, so I know the array can go faster.  I can always add static targets to see what happens.
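
If I do try the static route, I'd expect it to look something like the following; the adapter name, portal address, and IQN are placeholders rather than my real values:

  # add one static target per array data IP
  esxcli iscsi adapter discovery statictarget add --adapter=vmhba64 --address=192.168.10.11:3260 --name=iqn.2007-11.com.nimblestorage:example-volume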

I'll get the ticket going now with more detail.

mamatadesaiNim
HPE Blogger

Re: NCM iSCSI target counts differ for volume vs VVOL?

"/etc/nimble/nimble-mode.sh list" will show all iSCSI connections.

The VVol data path is to the Protocol Endpoint (PE) device on the array, and the connections go to the group-scoped iSCSI target, and hence connect directly to the data IPs on the array.  NCM does not manage connections to the data IPs, i.e. it will not scale these connections up or down.  The number of connections will be equal to the number of data IPs on the array.
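
To confirm which device is the PE on a given host, this should work (output formatting varies by ESXi build):

  # list the VVol protocol endpoints visible to this host
  esxcli storage vvol protocolendpoint list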

On the other hand, if you want to scale iSCSI connections for VMFS volumes: these go to the discovery IPs of the array and are automatically redirected, and the connections are managed by NCM on the ESXi host.  To increase the number of connections, you can edit /etc/nimble/ncm.conf and change min_vol_sessions.  Please keep ESXi maximums in mind, and limit your connections by setting an appropriate value for max_vol_sessions.  After making this change, wait 3-4 minutes and then re-run "/etc/nimble/nimble-mode.sh list".
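
As a rough sketch of that workflow (the session values here are illustrative only, not recommendations; check the iSCSI session maximums for your ESXi release before raising them):

  # edit the NCM configuration on the ESXi host, e.g.
  vi /etc/nimble/ncm.conf
  #   set min_vol_sessions (e.g. 4) and, to cap growth, max_vol_sessions (e.g. 6)

  # wait 3-4 minutes, then confirm the new session counts
  /etc/nimble/nimble-mode.sh list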

Thanks,

Mamata

HPE Nimble Storage