StoreVirtual Storage

VSA storage networking with HyperV

Occasional Contributor

VSA storage networking with HyperV

I have read a few posts already on the topic I am about to raise. The posts, however, never seem to reach a clear resolution on which direction I should go. Even HP support seems confused about it, so I would love to hear from someone who has faced this same issue. I was a VMware/Dell guy for many years, but I moved into this position with a company that is all Windows/HP. So far so good, but I am experiencing some networking issues, and we believe they stem from how Hyper-V, the VSA, and the user network are set up.

I have read the HPE StoreVirtual VSA design document for Hyper-V. It seems to contradict itself, and here is what I mean. In one section it says that four NICs teamed in two pairs is all that is needed: one pair for the public network and the second pair for VSA/iSCSI. However, it also says that ideally six NICs are needed: one pair for public, one pair for management traffic, and the remaining two left unteamed for iSCSI. Reading further down, it then says that for Hyper-V you should use the six-NIC setup, or go all out with an eight-NIC setup. So I really need clarification here, because the document describes four different ways to do it. I also do not know what is meant by the term "management traffic" or why it needs teamed NICs. I get that StoreVirtual means the VSA and public means users, but is management traffic the quorum, the CMC, or does it just mean something used with MPIO?

What I have is 3 x HP ProLiant DL380 Gen9 hosts running Windows Server 2016 Datacenter with Hyper-V. Each of them has four 10Gb NICs teamed into two pairs. The first pair goes to subnet 1 and is used for our public network, i.e. computers, users, applications, Wi-Fi, printers, etc. The second pair goes to subnet 2 and is used only for iSCSI and VSA traffic. The two subnets cannot talk to each other, and each subnet has its own set of switches.

From what I have read, I need to un-team the NICs that iSCSI uses and switch to HPE MPIO for redundancy, because when I look at the active connections in the CMC we have only one per volume to the server it is attached to. When I look at other setups there are many connections, and I gather those come from MPIO. But that would mean the VSA also loses its teamed NIC.
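In case it helps anyone comparing notes: a rough, non-authoritative sketch of that change on Windows Server 2016, assuming placeholder team/adapter names and made-up iSCSI subnet addresses (substitute your own), might look like this in PowerShell:

```powershell
# Break the existing iSCSI NIC team so each adapter stands alone
# ("iSCSI-Team", "iSCSI-NIC1", "iSCSI-NIC2" are placeholder names)
Remove-NetLbfoTeam -Name "iSCSI-Team" -Confirm:$false

# Give each former team member its own address on the iSCSI subnet
New-NetIPAddress -InterfaceAlias "iSCSI-NIC1" -IPAddress 10.0.2.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "iSCSI-NIC2" -IPAddress 10.0.2.12 -PrefixLength 24

# Install MPIO and let it claim iSCSI devices
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Log in to the VSA cluster VIP once per NIC so MPIO sees two paths
# (10.0.2.100 is a placeholder for the cluster virtual IP)
New-IscsiTargetPortal -TargetPortalAddress 10.0.2.100
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.2.11
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.2.12
```

After that, each volume should show multiple iSCSI sessions in the CMC, one per initiator portal.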

Alright, it's a lot to take in, I know, so here are my questions.

1. What is meant when the document speaks of "hypervisor management traffic"? Is that the VSA's iSCSI traffic?

2. Do I really need one team for "user traffic," one team for the "StoreVirtual VSA," and two individual NICs for "iSCSI"?

3. If I do need six NICs, then I have to decide which traffic gets only the 1Gb network, because I have just four 10Gb and four 1Gb NICs.

I know it's a big read and I didn't mean to ramble, but any help would be appreciated.


Re: VSA storage networking with HyperV


Indeed, a very big post.

As I understand it, NIC bonding is not supported for the VSA, so two NICs for the VSA are more than enough when they are on different subnets: one for management traffic (configuration and scheduled tasks) and one for VSA iSCSI traffic.


See below (it clearly says that if your platform is used only for the VSA, two NICs are enough):

Network adapters
The number of network adapters available in a platform affects your options for configuring virtual
switches. Platforms that will host only StoreVirtual VSAs only need two ethernet (minimum 1
GbE) network adapters. Platforms that will host StoreVirtual VSAs and other virtual machines
should have at least four ethernet (minimum 1 GbE) network adapters so that two adapters can
be dedicated to the StoreVirtual VSA and iSCSI traffic



The other two or more NICs, if present in the host, can be used for other VMs not related to the VSA at all.
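To make the quoted two-adapter minimum concrete, here is a hedged PowerShell sketch; the switch, adapter, and VM names are all placeholders, not anything from the StoreVirtual documentation:

```powershell
# Minimal two-adapter layout: one external vSwitch shared with the
# management OS, one dedicated to the VSA's iSCSI traffic
New-VMSwitch -Name "Mgmt-vSwitch"  -NetAdapterName "NIC1" -AllowManagementOS $true
New-VMSwitch -Name "iSCSI-vSwitch" -NetAdapterName "NIC2" -AllowManagementOS $false

# Attach the VSA's network adapter to the dedicated iSCSI switch
# ("StoreVirtualVSA" is a placeholder VM name)
Connect-VMNetworkAdapter -VMName "StoreVirtualVSA" -SwitchName "iSCSI-vSwitch"
```

With four or more adapters, the same pattern extends: the extra NICs back additional vSwitches for the other VMs, keeping them off the VSA/iSCSI path.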



