Array Setup and Networking

Nimble Network - 2 isolated switches

Occasional Visitor

Nimble Network - 2 isolated switches

Is anyone running a configuration where each Nimble 10G connection goes into an isolated switch?

Most hosts are ESX.

Would I have a software initiator for each "fabric"?

Or a single software initiator with multiple vmkernel interfaces? (Each interface would end up being on a separate subnet/network.)



Re: Nimble Network - 2 isolated switches

Yes, we have two isolated switches and networks for the iSCSI traffic.

Your ESXi hosts can have only a single software initiator, but you need to create two separate virtual networks (unless you're splitting things up with VLANs).
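On the ESXi side, that design can be sketched with standard esxcli commands: one software initiator with two vmkernel ports bound to it, one per isolated switch. The adapter name (vmhba64), vmk numbers, and discovery IPs below are placeholders for illustration; check your own host before using them.

```shell
# Enable the single software iSCSI initiator (one per host).
esxcli iscsi software set --enabled=true

# Bind one vmkernel port per "fabric" to that initiator.
# Find your adapter and vmk names with:
#   esxcli iscsi adapter list
#   esxcli network ip interface list
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Add a Nimble discovery address reachable from each fabric (example IPs).
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.10.10.50:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=10.10.20.50:3260
```

With port binding in place, the single initiator presents one path per bound vmkernel interface, which is what gives you a path through each switch.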

Here is a screen shot from vCenter.

2 separate virtual switches (note that I set up the vSwitches for local VM iSCSI traffic too)


Here is a rough logical wiring diagram I made up for our networks way back.

Let me know if you have any other questions.


Occasional Visitor

Re: Nimble Network - 2 isolated switches

Hi, thanks for the post.

I have experimented with 2 isolated switches and discovered issues. For example:

ControllerA, eth1a to switch 1
ControllerA, eth1b to switch 2
ControllerB, eth1a to switch 1
ControllerB, eth1b to switch 2

I tried both same-subnet and separate-subnet configurations.

If I connect a single PC on the same subnet into switch 1, I can ping the active controller's management IP and connect to the web admin.

If I connect to switch 2, I can neither ping the management IP nor access the web admin.

Is this normal behaviour? If so, at what point or in what scenario would access be available on switch 2?

I appreciate that under normal circumstances two NICs would be used on the ESXi host (which I will be adding after failover testing), but that still does not explain the loss of communication. I was under the impression that load balancing would occur across both controllers' Ethernet management ports. The same behaviour also occurs with the iSCSI data configuration.

Any ideas?

Many thanks



Re: Nimble Network - 2 isolated switches

Very normal.

The Nimble storage runs in active-standby mode: one controller is active, the other on standby. The management IP address is virtual and stays with the currently active controller. If you initiate a failover, or something happens to switch 1 and connectivity is lost, the other controller takes over and the management IP moves with it.

As for iSCSI traffic, the data IP addresses are also virtual and likewise stay with the currently active controller. The host's multipathing will use iSCSI paths as needed. As a best practice, get the Nimble Connection Manager (NCM) for your OS and install it on every host; expect a reboot to be needed after NCM is installed. NCM sets host parameters correctly and monitors the host's traffic, steering paths to the active Nimble controller as needed.
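A rough way to sanity-check this behaviour from an ESXi host is with the standard esxcli storage and iSCSI views (output will vary by environment; these are read-only diagnostic commands):

```shell
# List iSCSI sessions to confirm the host has logged in over both fabrics.
esxcli iscsi session list

# Show the multipathing view of each device and its path states.
esxcli storage nmp device list

# List all paths; re-run after a controller failover to confirm the
# paths have followed the newly active controller.
esxcli storage core path list
```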

Each time you go to the Hardware page, it will run a system check to see whether the other controller can be made active. If there is a problem, a banner will appear at the top of the page; in that case, contact Nimble Support to resolve the issue.

Note: While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company


Occasional Visitor

Re: Nimble Network - 2 isolated switches

Hi Sheldon, thanks for the well written explanation. Very informative.

I understand that the SAN controllers are in active/standby configuration and only one is active at any one time.

My confusion is over the active controller and the 2 Ethernet management ports connected to 2 separate switches. The management IP can only be pinged from 1 isolated switch.

If I turn off the switch that allows pings to the management IP and connect to the other switch, it is then able to ping. This tells me that management isn't running in a load-balanced/mirrored fashion as the documentation suggests?

Many thanks


HPE Blogger

Re: Nimble Network - 2 isolated switches


This doesn't sound right, especially with iSCSI data, as you say.

If your switches are not stacked, you should not operate in a single-subnet environment. You must create separate subnets - one for each switch - in order for the IP addresses to be reachable.

For example:

10.10.10.x - NIC 1 (server) - Switch 1 - eth0a (on both controller A & B)
10.10.20.x - NIC 2 (server) - Switch 2 - eth0b (on both controller A & B).

You must also have Nimble Connection Manager installed on your hosts.

In the above configuration, you will be able to ping the 10.10.10.x subnet and its iSCSI discovery address only through NIC 1. NIC 2 will only be able to ping and connect to addresses on 10.10.20.x.
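You can prove that isolation from the ESXi host with vmkping, which forces the ping out of a specific vmkernel interface. The vmk numbers and target IPs below are placeholders matching the example subnets above.

```shell
# Ping out of the vmkernel port on each fabric in turn.
vmkping -I vmk1 10.10.10.50   # switch 1 fabric: should succeed
vmkping -I vmk2 10.10.20.50   # switch 2 fabric: should succeed

# Crossing fabrics is expected to fail: different subnet, isolated switch.
vmkping -I vmk1 10.10.20.50
```

If the cross-fabric ping succeeds, your networks are not actually isolated (or a route exists between the subnets).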

If you get stuck, I highly recommend you engage with Nimble Support.


Nick Dyer
Nimble Field CTO & Evangelist

twitter: @nick_dyer_