StoreVirtual Storage

How to get more than 1Gbps per StoreVirtual 10.5 VSA running on vSphere

 
SOLVED
025208
Visitor

In planning my HP StoreVirtual SAN design I read through the latest HP StoreVirtual Installation and Configuration white paper and found that a VSA does not support NIC bonding at the LeftHand OS level the way the hardware versions of LeftHand OS do. Because the single NIC attached to the VSA for iSCSI traffic is a VMXNET3 adapter, my understanding is that I should be able to get more than 1Gbps to that VSA by using multiple 1Gbps physical NICs in some type of active-active configuration at the vSphere vSwitch level. There is little documentation on what HP recommends for a configuration at that level other than using two 1Gb NICs.

 

Is there a supported way to get more than 1 Gbps of bandwidth per HP StoreVirtual 10.5 VSA running on vSphere without using 10Gbps?

 

The closest way I can think of to do this is to create a 2-NIC EtherChannel team using IP hash load balancing on the vSwitch where the VSA's VM network will reside. The reason I think this may work is that I have 4 vSphere hosts, each using 2 NICs with MPIO round robin for iSCSI traffic. If I understand EtherChannel with IP hashing correctly, each of the 8 paths to a VSA will hash to an uplink, which should result in both NICs in the EtherChannel team being used for the VM network of the VSA.
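For reference, the IP-hash policy VMware documents selects the uplink from (source IP XOR destination IP) modulo the number of active uplinks. Below is a minimal Python sketch of that calculation; the VSA and initiator addresses are hypothetical placeholders, so the real mapping depends on whatever IP scheme gets assigned:

import socket
import struct

def ip_to_int(ip: str) -> int:
    """Convert a dotted-quad IPv4 address to a 32-bit integer."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

def ip_hash_uplink(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    """vSphere IP-hash teaming: (src IP XOR dst IP) mod active uplinks."""
    return (ip_to_int(src_ip) ^ ip_to_int(dst_ip)) % n_uplinks

# Hypothetical addressing: one VSA and 8 initiator vmkernel ports
# (4 cluster hosts x 2 iSCSI vmkernels using MPIO round robin).
vsa_ip = "10.0.50.10"
initiators = [f"10.0.50.{octet}" for octet in range(21, 29)]

for init_ip in initiators:
    print(init_ip, "-> uplink", ip_hash_uplink(init_ip, vsa_ip, 2))

With these example addresses the 8 paths land 4/4 across the two uplinks, which is the behavior I am hoping for.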

 

Does anyone know if I am right or wrong in my understanding of this configuration?
Is there a better way to obtain my goal?
Does anyone know of additional documentation that I haven't seen for this?

 

Details of Planned Environment:

 

Overview:
I am planning a 2-node StoreVirtual SAN using Network RAID-10 that will have about 8TB of usable space. Each of the 2 vSphere hosts will run 1 VSA and nothing more. Four other vSphere 5.1 hosts, configured as a vSphere Enterprise cluster, will consume the storage from the 2-node VSA SAN via iSCSI and will host all of my virtual machine workload. The iSCSI configuration will be 2 NICs per cluster host using VMware MPIO round robin across both iSCSI switches, with no network teaming configured.

 

2 VSA Nodes:
Hardware:
Server: Dell R710, 8GB RAM, PERC H700i 512MB NV Cache
Internal storage: 8x 300GB 10k SAS, RAID 5, capacity 1953GB (Datastore1)
2x PERC H810 with 1GB NV Cache
Dell MD1220: 24x 146GB 15k SAS, RAID 50, capacity 3212GB (Datastore2)
Dell MD1220: 24x 146GB 15k SAS, RAID 50, capacity 3212GB (Datastore3)
Each MD1220 will be attached to its own PERC H810 RAID controller; the internal storage will use the PERC H700i.

 

Software:
Hypervisor: vSphere 5.1
VSA VM: HP StoreVirtual VSA 10.5

VSA Hardware:
CPU: 2 vCPU
RAM: 5GB
Storage: ~8TB
Disk 0: OS drive for VSA, SCSI 0:0, Capacity: default size at deployment
Disk 1: Datastore1, SCSI 1:0, Capacity: ~1900GB
Disk 2: Datastore2, SCSI 1:1, Capacity: ~1600GB
Disk 3: Datastore2, SCSI 1:2, Capacity: ~1600GB
Disk 4: Datastore3, SCSI 1:3, Capacity: ~1600GB
Disk 5: Datastore3, SCSI 1:4, Capacity: ~1600GB
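As a rough sanity check on the ~8TB figure (a sketch from the disk sizes above; Network RAID-10 on 2 nodes keeps one mirror copy of every page per node, so usable capacity works out to roughly one node's worth):

# Per-node raw capacity presented to the VSA (GB), from the disk list above
disks_gb = [1900, 1600, 1600, 1600, 1600]
per_node_gb = sum(disks_gb)              # 8300 GB presented per node

# Network RAID-10 keeps 2 copies of each page (one per node), so the
# usable cluster capacity is the raw total divided by two.
n_nodes = 2
usable_gb = per_node_gb * n_nodes // 2   # 8300 GB, i.e. "about 8TB"
print(f"usable ~ {usable_gb / 1000:.1f} TB")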

 

iSCSI Initiators:
4-node vSphere 5.1 Enterprise cluster
4x Dell R710: 2 CPUs, 144GB RAM, 8x 1Gbps NIC ports per node
Mgmt/vMotion, vSwitch0, 2 NICs, Server Stack, EtherChannel team with IP hashing
VM Networks, vSwitch1, 2 NICs, Server Stack, EtherChannel team with IP hashing
iSCSI, vSwitch2, 2 NICs, iSCSI Stack (separate switches), no teaming, Jumbo Frames enabled
--2 iSCSI vmkernel ports
--each iSCSI vmkernel port statically bound to a different NIC
--VMware MPIO round robin enabled
--Jumbo Frames enabled

 

iSCSI Stack:
2x Cisco 3750-X switches (configured in a stack and used only for iSCSI)
--Jumbo Frames & Flow Control Enabled

 

Server Stack:
6x Cisco 3750-X switches (configured in a stack, not used for iSCSI)

2 REPLIES
025208
Visitor
Solution

Re: How to get more than 1Gbps per StoreVirtual 10.5 VSA running on vSphere

I ran across this HP Communities post that I believe answers the question I had.

http://h30499.www3.hp.com/t5/HP-StoreVirtual-HP-LeftHand/VSA-ESX-Multiple-PNIC-s/m-p/6001243#M6508

Using an EtherChannel with 2 or more physical NICs for the VM network on the vSphere host that runs the VSA should let me load balance I/O traffic across multiple NICs coming into the VSA. The better the load balancing achieved, the better the chance of getting more than 1Gbps of bandwidth per VSA.

 

For IP hash to load balance well, you want as many iSCSI initiator IP addresses as possible communicating with the VSA: the more source addresses, the better the distribution across uplinks. The two best ways I see of doing that are listed below, with a short sketch after the list showing the effect:

 

1. Adding more vSphere hosts that use the VSA storage.

 

2. Configuring MPIO (round robin) on each vSphere host whose iSCSI initiators use the VSA storage. The more physical NICs used in the MPIO configuration per vSphere host, the better the load balancing gets.
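To illustrate why more initiator addresses help, here is a rough simulation under the same assumed IP-hash formula and made-up addressing as in my first post: with only a couple of source IPs, an unlucky scheme can hash every path onto one uplink, while the full set of 8 MPIO vmkernel IPs tends toward an even split.

from collections import Counter
import socket
import struct

def uplink(src: str, dst: str, n: int = 2) -> int:
    """vSphere IP-hash uplink choice: (src IP XOR dst IP) mod uplinks."""
    a = struct.unpack("!I", socket.inet_aton(src))[0]
    b = struct.unpack("!I", socket.inet_aton(dst))[0]
    return (a ^ b) % n

vsa = "10.0.50.10"  # hypothetical VSA address

# Two initiator IPs with an unlucky scheme: both hash to the same uplink.
unlucky = ["10.0.50.21", "10.0.50.23"]              # both odd last octets
print(Counter(uplink(ip, vsa) for ip in unlucky))   # Counter({1: 2})

# 8 initiator vmkernels (4 hosts x 2 MPIO vmkernels): even 4/4 split.
many = [f"10.0.50.{o}" for o in range(21, 29)]
print(Counter(uplink(ip, vsa) for ip in many))      # Counter({1: 4, 0: 4})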

5y53ng
Regular Advisor

Re: How to get more than 1Gbps per StoreVirtual 10.5 VSA running on vSphere

I wrote the post you referenced and just wanted to add that you should choose your IP scheme carefully. The operation of IP hash load balancing is documented on various sites. You can calculate which NIC is used for communication between the different nodes to ensure you get the best performance.
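For example, assuming the commonly documented formula of (source IP XOR destination IP) mod uplinks: with two uplinks only the low bit of the result matters, so on a shared subnet the choice comes down to the parity of the last octets. Assigning initiator last octets that alternate odd/even relative to the VSA address guarantees a 50/50 split. A quick check with hypothetical octets:

# With 2 uplinks, (src ^ dst) % 2 == (src_last_octet ^ dst_last_octet) % 2,
# so only the odd/even of the last octets matters on a shared subnet.
vsa_last = 10                        # hypothetical VSA last octet (even)
for init_last in (21, 22, 23, 24):   # alternating odd/even initiator octets
    print(init_last, "-> uplink", (init_last ^ vsa_last) % 2)
# odd initiator octets -> uplink 1, even -> uplink 0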