StoreVirtual Storage
Hyper-V host NIC configuration for VSA

Frequent Visitor

Hyper-V host NIC configuration for VSA



I have read through all the guides and papers I could find on setting up the VSA, but I'm still left with one unclear question: the NIC configuration for the "storage" connection.


As per Microsoft best practice, they do not recommend using NIC teaming for host connections to iSCSI targets. The recommendation is to use MPIO for redundancy.


In light of this, I'm not quite sure how to configure the NICs for the VSA. I know that if there are other VMs running on the same host, 2 NICs need to be reserved for the VSA and 2 for the other VM traffic. Do we create a virtual switch (vSwitch) on each of the physical NICs (pNICs), then create a virtual NIC (vNIC) on the host for the iSCSI connections, and then connect the VSA VM to one of these vSwitches?

Or do we create a NIC team from the pNICs, create 2 vNICs for host iSCSI connected to this team, and then link the VSA VM's network connection to the team as well?


Not sure if I described it well enough; maybe the attached diagram is easier to understand.


Still, the questions remain: if we go with option 1 (VSA connected to the virtual switch on one of the physical NICs), do we not lose redundancy/performance? And if the correct setup is the one with the NIC team, is that a solution supported by both HP and Microsoft?


Any information would be greatly appreciated.

Valued Contributor

Re: Hyper-V host NIC configuration for VSA

I had to deal with this issue when setting up my VSAs.


Without a doubt, MPIO/DSM is the best way to set up the initiator; that's what the Microsoft best practices guide is addressing. The extra logic in the MPIO/DSM driver is what boosts performance and redundancy, allowing multiple paths to multiple targets from a single initiator. So you always want multiple vNICs at the initiator, if possible.
So, as long as you've mapped your multiple vNICs to multiple pNICs on your VMs, you should be OK. This is done by creating multiple 'networks' in 2008 R2 or vSwitches in 2012.
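As a rough illustration of that mapping on 2012 (this is a sketch only; the adapter, switch and VM names are made up for the example, not from the thread):

```powershell
# Sketch only - adapter, switch and VM names are placeholders.
# One external vSwitch per dedicated storage pNIC:
New-VMSwitch -Name "iSCSI-Switch1" -NetAdapterName "NIC3" -AllowManagementOS $false
New-VMSwitch -Name "iSCSI-Switch2" -NetAdapterName "NIC4" -AllowManagementOS $false

# Give the initiator VM one vNIC on each switch, so MPIO in the guest
# sees two independent paths to the target:
Add-VMNetworkAdapter -VMName "GuestVM" -SwitchName "iSCSI-Switch1" -Name "iSCSI-A"
Add-VMNetworkAdapter -VMName "GuestVM" -SwitchName "iSCSI-Switch2" -Name "iSCSI-B"
```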


Setting up the target - in this case, the VSA - is a different issue altogether.

However, it seems that in your case, you're only allocating one pNIC to the VSA(s?). 

So in effect, there's no redundancy at each VSA node.


HP recommends ALB bonding on the VSA/NSM node; this way you get the best combination of performance and redundancy at the target.

This requires at least two vNICs - i.e. two pNICs.


Sure, if you have the ability to dedicate a separate set of multiple pNICs to each of (1) the targets and (2) the initiators, this should not be a problem.


However, if you're using a totally virtualized solution, then you might just create two networks/vSwitches and connect each VM (initiators and targets) to each network/vSwitch with a pair of vNICs.


In my case, I've used HP NCU teaming on my VSA targets, and multiple vNICs presented to my VM initiators when possible, as we have a mixture of StoreVirtual NSM appliances and VSAs. On the NSMs we use the HP built-in ALB bonding.

Valued Contributor

Re: Hyper-V host NIC configuration for VSA

I should have been clearer in my post and said the following:
As VSAs do not support bonding, I used HP NCU teaming on my VSA VMs (LACP Active/Active) to get redundancy and increased performance.
Frequent Visitor

Re: Hyper-V host NIC configuration for VSA

Thanx for the reply.


It cleared up a few questions, but I'm still not quite there yet. I tried to set up a test server to go through the setup.

Let's say I have 4 pNICs. I dedicate 2 pNICs to the VMs running on the host, and 2 pNICs to the storage network.


In the text below, the 2 pNICs dedicated to VMs will be ignored.

I created 2 virtual switches for storage (StorageSwitch1 & StorageSwitch2), each mapped to one of the pNICs dedicated to storage. Then I created 2 vNICs connected to these 2 virtual switches. These will be called iSCSI1 & iSCSI2, and will be used by the host's iSCSI initiator to access the storage.
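(For reference, the steps above can be sketched in PowerShell on 2012 roughly like this; the pNIC names and IP addresses are placeholders I've made up for the example:)

```powershell
# Sketch only - pNIC names and addresses are placeholders.
New-VMSwitch -Name "StorageSwitch1" -NetAdapterName "Storage-pNIC1" -AllowManagementOS $false
New-VMSwitch -Name "StorageSwitch2" -NetAdapterName "Storage-pNIC2" -AllowManagementOS $false

# Host-side vNICs for the iSCSI initiator, one per storage switch:
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI1" -SwitchName "StorageSwitch1"
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI2" -SwitchName "StorageSwitch2"
New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI1)" -IPAddress 10.0.10.11 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI2)" -IPAddress 10.0.10.12 -PrefixLength 24
```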


The next step is what I'm not sure about.

Say I run the VSA setup, connect it to StorageSwitch1, finish the setup and restart the VSA VM. I can then install the CMC on my host and add the VSA I just installed. But I'm not quite clear on how to achieve redundancy/multipathing for the VSA. The installation documentation for the VSA says that in a Hyper-V VM it does not support NIC bonding (teaming) within the VSA VM.

So I tried to create 2 additional vNICs on the host (VSA1 & VSA2), connected these to the 2 virtual switches, then created a NIC team from VSA1 & VSA2 named "VSA Team", and used this to connect the VSA VM. In theory this should provide redundancy, but if I do it this way, I don't seem to be able to communicate between the host and the VSA.


Would you mind giving some more detail on how you set up the NIC bonding to achieve redundancy for the VSA VM?


Thank you

Frequent Advisor

Re: Hyper-V host NIC configuration for VSA

Here are the best practices for the VSA, including networking.


They explain it better than I can... 8-)))

Frequent Visitor

Re: Hyper-V host NIC configuration for VSA

Hi Manfri


Thank you for the link.

As mentioned previously, I have read through all the documents I could find, including the one you linked to.

Some of the information in there sounds a bit contradictory, and I just want to find out what the supported solution is, so that if I have a problem neither HP nor Microsoft will turn around and say they can't help because it's an unsupported configuration.


In the quoted document, under the networking section, there's the following information:

Four network interfaces
With four NICs on the server, the recommended configuration is to dedicate two NICs to a vSwitch for StoreVirtual. In
addition it can be shared by the host to access volumes via iSCSI. The remaining two NICs on a second vSwitch are used for  application traffic (“user network”) as well as hypervisor management traffic. Configurations with four network interfaces as shown in figure 10 are considered the minimum configuration in terms of host side networking.


Six or more network interfaces
Six or more network interfaces per server are the ideal configuration. This allows for segregation of iSCSI traffic, application traffic, and management traffic (including network services as vSphere/Windows cluster communication and advanced features, such as vMotion). With six network interfaces, three virtual switches should be created; one vSwitch for iSCSI traffic, one vSwitch for management traffic, and one vSwitch for application traffic (see figure 11). 

For solutions based on Hyper-V, it may be required to add more network connectivity to accommodate iSCSI connectivity for host, guest and the StoreVirtual VSA itself. Per best practices, it is recommended to use individual network interfaces for Multipath I/O to connect to iSCSI targets instead of teaming network interfaces. Even though solutions using teamed adapters have not reported any problems, these configurations have also not been fully qualified by either Microsoft or HP. For more details, see the Hyper-V chapter in this document on page 28.


The highlighted parts are what's doing my head in. In the 4-NIC setup, it says the 2 NICs are shared between the VSA and host iSCSI. In contrast, the 6-NIC setup says 2 NICs should be dedicated to host iSCSI.

I know Microsoft's best practice says you want to dedicate 2 NICs to host iSCSI using MPIO. That would then suggest that the 4-NIC option is not a solution according to best practice, and that for a fully supported solution you are really looking at using 6 NICs, as specified above.

And this is the information I couldn't find a clear-cut answer to, and which the document doesn't answer either, as it lists both options as possible.
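For what it's worth, the "dedicated NICs + MPIO" part of the Microsoft recommendation would look roughly like this on 2012 (a sketch only; the target portal address and initiator IPs are placeholders):

```powershell
# Sketch only - addresses are placeholders for the example.
Install-WindowsFeature Multipath-IO
Enable-MSDSMAutomaticClaim -BusType iSCSI   # let MPIO claim iSCSI disks

# One session per dedicated host NIC to the same target = two paths:
New-IscsiTargetPortal -TargetPortalAddress 10.0.10.50
$iqn = (Get-IscsiTarget).NodeAddress
Connect-IscsiTarget -NodeAddress $iqn -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.10.11
Connect-IscsiTarget -NodeAddress $iqn -IsPersistent $true -IsMultipathEnabled $true -InitiatorPortalAddress 10.0.10.12
```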


The other factor I was looking at was the hardware used. I am looking at an HP DL380 Gen8 with the HP Ethernet 10Gb 2-port 533FLR NIC adapter. This adapter supports network partitioning:

Network Partitioning The HP 533FLR-T supports NIC partitioning for ProLiant Gen8 rack servers. Allowing administrators to configure a 10Gb port as four separate partitions or physical functions. Each PCI function is associated with a different virtual NIC. To the OS and the network, each physical function appears as a separate NIC port.

This could be the answer to the dilemma: if this adapter shows up as 4 NICs in the host, these 4 NICs could be used for storage (2 for the VSA and 2 for host iSCSI), and then an additional NIC adapter could be used for Hyper-V traffic (VM network, live migration, heartbeat, etc.). If anyone has any experience with these adapters, can you confirm whether this would be a viable solution?

Thanx in advance for all replies

Occasional Visitor

Re: Hyper-V host NIC configuration for VSA

I'm experiencing the same problem trying to follow the HP best practices.


HP tells us that we shouldn't use teaming on Hyper-V for iSCSI and should use MPIO instead, but with only one NIC on the VSA VM, how can I do that?


It's written in this manual, page 21, that "For the VSA for Hyper-V, dedicate a unique network adapter for iSCSI traffic".


Anyway, even with teaming, which by the way does support iSCSI (see this blog), each vNIC created on it should be on a different subnet/VLAN, so we are again limited by the single NIC on the VSA (on which VLAN should I put my cluster VIP?).


All of this is very confusing, and after a day of googling I get the feeling that nobody really uses the VSA on Hyper-V, right?


Any comment/advisory about this would be very appreciated,


Frequent Advisor

Re: Hyper-V host NIC configuration for VSA

When the host where you run the VSA must also access the VSA itself, you must implement 2 kinds of connectivity.


1) One Hyper-V switch (where you can use teaming) for the VSA to access the iSCSI network.

2) One or more NICs dedicated to host access to iSCSI; if you use more than 1 NIC, you must use MPIO.
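A minimal sketch of those two parts (the team, switch and VM names here are my own assumptions, not from the manual):

```powershell
# Sketch only - names are made up for illustration.
# 1) Teamed vSwitch for the VSA's iSCSI access:
New-NetLbfoTeam -Name "VSATeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent
New-VMSwitch -Name "VSASwitch" -NetAdapterName "VSATeam" -AllowManagementOS $false
Connect-VMNetworkAdapter -VMName "VSA01" -SwitchName "VSASwitch"

# 2) Dedicated host NIC(s) for the initiator; with more than one, use MPIO:
Enable-MSDSMAutomaticClaim -BusType iSCSI
```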


Another case is if a VM must also access the VSA; in that case I do not remember exactly what the manual says.


All these NICs must be in the same VLAN and use the same IP subnet, and the VIP of the cluster must be in the same IP subnet, unless you implement some kind of remote site where you cannot have the same VLAN, but that is well explained in the doc (feel free to ask, though).


There are also other considerations. If you want decent redundancy you must implement a Failover Manager, and (if I have an even number of VSA nodes) I implement site placement so I can split my SAN into 2 logical sites (which can be racks on different power feeds, or different rooms), so it's guaranteed that if an entire site crashes my SAN stays up. But this is a design issue for every LeftHand (sorry, StoreVirtual) setup.


There is someone using it on Hyper-V 8-))



Occasional Visitor

Re: Hyper-V host NIC configuration for VSA

Hello Manfri,



Happy to know that I'm not alone there :)


Thanks for sharing your knowledge. I was planning to create 2 different VLANs for iSCSI; I'll change that.


I'm stuck with 4 NICs on my Hyper-V host, so with the information you just gave me, here is my new design:


- Converged team on NIC1 and NIC2, with a vSwitch for VMs and the VSA + vNICs for cluster/LM/mgmt in the management OS

- NIC3 -> iSCSIvSwitch + vNIC for iSCSI in Management OS

- NIC4 -> iSCSIvSwitch + vNIC for iSCSI in Management OS


In the end, I got something similar to the second of Aidan's designs on this page.
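If it helps anyone later, that design might translate to something like the following (I'm assuming the two iSCSI vSwitches get distinct names, since a vSwitch can only have one uplink; all NIC, team and switch names are placeholders):

```powershell
# Sketch only - all names are placeholders.
# Converged team on NIC1+NIC2 for VMs, the VSA and management vNICs:
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2"
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" -AllowManagementOS $true
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "ConvergedSwitch"

# NIC3/NIC4 each get their own iSCSI vSwitch plus a host vNIC:
New-VMSwitch -Name "iSCSIvSwitch1" -NetAdapterName "NIC3" -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI1" -SwitchName "iSCSIvSwitch1"
New-VMSwitch -Name "iSCSIvSwitch2" -NetAdapterName "NIC4" -AllowManagementOS $false
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI2" -SwitchName "iSCSIvSwitch2"
```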



New questions :

- How do I configure MPIO in the VSA for the 2 new vNICs I've just added?

- Will iSCSI traffic between a VM and the VSA have to go outside my host to the hardware switch and back?


Occasional Visitor

Re: Hyper-V host NIC configuration for VSA

Any news? I actually have the same problem... Regards