
VSA Nics

cheazell
Advisor

VSA Nics

I've recently upgraded to a 4-node system. I want to move from 1GbE to 10GbE and I have a couple of questions about this for the VSA. I find the documentation for the VSA vs. the physical StoreVirtual range problematic, as HP doesn't adequately distinguish between the two, particularly around some of the VSA's limitations.

 

How many of you use both eth0 and eth1 on your VSAs?

 

Do you use the same subnet for both?

 

I am a VMware shop, so do you split eth0 onto its own vSwitch and run eth1 as the iSCSI NIC through another vSwitch? I have a dual-port 10GbE NIC; would you map the VSA's NICs 1:1 to the physical ports, or would you dedicate a 1GbE NIC to the VSA's VM Network and use both 10GbE ports for iSCSI? If you could show a capture of your VM networking screen, it would be very helpful.

 

Thanks

4 REPLIES
5y53ng
Regular Advisor

Re: VSA Nics

Unfortunately, you can only use one of the two NICs. I tried to use the second NIC in versions 9 and 9.5 and I could never get it to work. If you have multiple 1GbE or 10GbE NICs you "may" get some performance benefit from VMXNET3 and jumbo frames. I don't have any numbers to back that up, but I have been experimenting with 10GbE, trying to use the VSA in a high-performance environment.

 

Depending on your workload, you might consider separating the VSA and the VMware iSCSI vmkernel port(s) onto separate vSwitches and pNICs if you run into latency issues.
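The separation described above can be sketched with esxcli on the ESXi host. This is only an illustration of the idea, not anyone's actual config: vSwitch5, vmnic4, the port-group name, and the IP address below are all placeholders for your own environment.

```shell
# Create a standard vSwitch dedicated to software-iSCSI traffic
esxcli network vswitch standard add --vswitch-name=vSwitch5

# Attach a dedicated physical uplink (placeholder: vmnic4)
esxcli network vswitch standard uplink add --vswitch-name=vSwitch5 --uplink-name=vmnic4

# Port group and vmkernel interface for the iSCSI initiator
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch5 --portgroup-name=iSCSI-A
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static \
    --ipaddr=192.168.10.12 --netmask=255.255.255.0
```

The VSA's own vNICs would then sit in a separate port group on a different vSwitch, so latency-sensitive iSCSI traffic never shares an uplink with the VSA's management/VM Network traffic.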

cheazell
Advisor

Re: VSA Nics

That is exactly how I have approached it. I have never used both eth0 and eth1 but I'm curious to see whether trying it would help.

 

I experimented with jumbo frames in the 9.x days and determined there was little benefit, if any, so I have abandoned them for this setup in 10.5.
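For anyone who does want to retest jumbo frames, the MTU has to be raised end-to-end (physical switch ports included); on the ESXi side that means both the vSwitch and the vmkernel interface. The vSwitch and vmk names below are placeholders:

```shell
# Raise the MTU on the vSwitch carrying iSCSI traffic
esxcli network vswitch standard set --vswitch-name=vSwitch4 --mtu=9000

# ...and on the vmkernel interface bound to the iSCSI adapter
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify with a non-fragmenting ping: 9000 bytes minus 28 bytes of IP/ICMP headers
vmkping -d -s 8972 192.168.10.1
```

If the vmkping fails, some hop in the path is still at MTU 1500, which is a common reason jumbo-frame tests show no benefit or outright problems.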

 

 

I've set mine up like this:

 

I have a 10GbE card with dual ports. I've created one vmkernel port (vmk1) in a vSwitch (vSwitch4), connected to port 1 of the pNIC, which plugs into a dedicated 10GbE port on a physical switch for iSCSI. I have created a second vmkernel port (vmk2), also on vSwitch4, bound to the second port of the pNIC and connected to a different 10GbE physical switch. I've done this for resiliency's sake on all 4 nodes. For each node I've used a separate vSwitch and pNIC (1GbE) for the VSA's VM Network.
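With two vmkernel ports on one vSwitch as described, the usual pattern for multipathed software iSCSI is to override the NIC teaming so each port group uses exactly one uplink, then bind both vmks to the iSCSI adapter. A hedged sketch; the port-group names, vmnic numbers, and vmhba33 are placeholders for whatever your hosts actually show:

```shell
# Override failover order so each iSCSI port group has exactly one active uplink
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=iSCSI-2 --active-uplinks=vmnic3

# Bind both vmkernel ports to the software iSCSI adapter for multipathing
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
```

Without the one-uplink-per-port-group override, port binding is not supported and path failover behaves unpredictably.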

 

My question is: am I actually better off in this configuration? Or, given the VSA's limitations vs. the physical StoreVirtual models, is my setup pointless, and should I just connect 1 pNIC to 1 pSwitch and call it a day?

 

Cheers

5y53ng
Regular Advisor

Re: VSA Nics

From what you described I suspect you cannot configure any type of link aggregation using both of your physical switches. Does your VMware environment support distributed virtual switches?

 

Are there any other guests on your hosts besides the VSA?

 

cheazell
Advisor

Re: VSA Nics

Yes, I have other VM guests on the hosts that run the VSAs, but they use separate pNICs, pSwitches, vNICs, and vSwitches.

 

I could probably set up a trunk, but I don't think I could stack them (although I could be wrong). I am using an 8-port 10GbE module in each E5406 chassis; the chassis are physically separated in our building.

 

I don't have enterprise plus - just enterprise so no distributed vSwitches.

 

This is what it looks like right now, but I could easily change it. I'm considering adding the VM Network port group to the same vSwitch (vSwitch4) so that it has a 10GbE connection. I doubt this would do much, though.