StoreVirtual Storage

Paul Hutchings
Super Advisor

Performance (load balancing etc.)

Something I'm not at all clear on with the VSA:

Let's assume I build a cluster comprising either two or four nodes, with dedicated hardware for the VSA and a dedicated iSCSI network.

I would have a single Virtual IP, and each VSA can only have a single gigabit vNIC for iSCSI?

That means I have potentially 4gbps worth of connectivity in and out of the cluster - correct?

So, if I have my production ESX boxes or physical file servers, each with a pair of dedicated gigabit NICs connected to the iSCSI network, how is the balancing of connections handled?

For example, if a file server has 4 LUNs assigned to it, does the "law of averages" mean that each LUN target gets its own 1Gbps connection via a different gateway VSA, or could a single physical file server end up accessing all of its targets via the same gateway?

Is there any way to get more than a single gigabit connection to a target?

So far I like the clustering and failover; I'm just trying to understand the performance side of things a little more once you move away from any disk bottleneck at the physical storage.

Thanks.
4 REPLIES
kghammond
Frequent Advisor

Re: Performance (load balancing etc.)

As far as I am aware, MPIO is the only way to get an individual target to use more than one NIC.

That requires either vSphere in the VMware world, or HP/LeftHand's Windows MPIO provider.

Without MPIO, an individual iSCSI target maintains a persistent path across your network interfaces, so it only ever uses one NIC.
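Roughly, the difference looks like this (just an illustrative model I knocked up, not anything from HP/LeftHand):

```python
# Toy model of per-target bandwidth, with and without MPIO.
# Assumption: each iSCSI session is pinned to a single 1 Gbps NIC.

NIC_GBPS = 1.0

def max_target_gbps(num_nics: int, mpio: bool) -> float:
    """Theoretical ceiling for traffic to one iSCSI target."""
    # Without MPIO the initiator keeps one persistent session on one
    # NIC, so extra NICs add nothing for that particular target.
    sessions = num_nics if mpio else 1
    return sessions * NIC_GBPS

print(max_target_gbps(2, mpio=False))  # 1.0 -- single persistent path
print(max_target_gbps(2, mpio=True))   # 2.0 -- round-robin over both NICs
```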

I hope that helps,
Kevin
Paul Hutchings
Super Advisor

Re: Performance (load balancing etc.)

Thanks for the reply.

If my ESX host running the VSA had several physical NICs into the iSCSI switch running at 1Gbps, what would the VSA be able to use?

Or if it had a single 10Gbps NIC, what would it be able to use?

Basically, is the VSA limited to 1Gbps, or just to one vNIC which works at the speed of the underlying physical NIC?
teledata
Respected Contributor

Re: Performance (load balancing etc.)

The VSA will only use one vNIC (for iSCSI traffic). It is listed as 1GbE, but as I understand it, a virtual NIC is only speed-limited (unless you are using throttling) by the bus speed of the server. The underlying vSwitches and physical NICs are what will limit your connectivity.

Theoretically, if you had multiple physical NICs on the vSwitch that the VSA is connected to, you should be able to exceed 1Gb to the VSA, although I've never tested this theory to see how much throughput you can actually get using that method.
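As a rough back-of-envelope (my assumptions, untested as I said, and it presumes an IP-hash teaming policy so different initiator/target IP pairs can land on different uplinks):

```python
# Back-of-envelope ceiling for traffic into the VSA's single vNIC.
# Assumptions (mine): IP-hash teaming spreads sessions across the
# physical uplinks, but any one session is still capped at one uplink.

def vsa_ceiling_gbps(uplink_gbps: list[float], sessions: int) -> float:
    per_session_cap = max(uplink_gbps)   # one session -> one uplink
    aggregate_cap = sum(uplink_gbps)     # all uplinks combined
    return min(aggregate_cap, sessions * per_session_cap)

print(vsa_ceiling_gbps([1.0, 1.0], sessions=1))  # 1.0 -- one session can't exceed one uplink
print(vsa_ceiling_gbps([1.0, 1.0], sessions=4))  # 2.0 -- enough sessions to fill both uplinks
```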

The previous post accurately describes the per target limitations unless using MPIO.

http://www.tdonline.com
Gauche
Trusted Contributor

Re: Performance (load balancing etc.)

"The previous post accurately describes the per target limitations unless using MPIO."

This is an important part to understand too.
Using the native ESX 3.5 iSCSI initiator and only a single LUN/volume, you are limited to 1Gb because there is no MPIO in effect and there is only one LUN.

If you have more than one LUN, each LUN can use 1Gb off the SAN, and each gets redirected to a different node for good load balancing across the SAN nodes. So in your four-node example, 4 LUNs could achieve 4x1Gb because they get load balanced across the SAN. That still does not require MPIO, though.
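To put numbers on that four-node example (a toy model of the redirection behaviour described above, assuming one 1Gb NIC per storage node):

```python
# Toy model: the cluster redirects each LUN's session from the virtual
# IP to a storage node, round-robin. Aggregate SAN-side bandwidth is
# then one NIC's worth per node actually serving a LUN.

def san_side_gbps(num_luns: int, num_nodes: int, nic_gbps: float = 1.0) -> float:
    luns_per_node = [0] * num_nodes
    for lun in range(num_luns):
        luns_per_node[lun % num_nodes] += 1   # round-robin placement
    # A node contributes its NIC's bandwidth if it serves at least one LUN.
    return sum(nic_gbps for count in luns_per_node if count > 0)

print(san_side_gbps(4, 4))  # 4.0 -- four LUNs spread over four nodes
print(san_side_gbps(4, 2))  # 2.0 -- two nodes cap you at 2 x 1Gb
```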

From the ESX server side, in 3.5 you are basically limited to 1Gb because there is no NIC load balancing for iSCSI in ESX 3.5.

In ESX 4.0 you can get NIC load balancing for a single LUN by enabling the native round-robin MPIO across more than one NIC.

So the ideal on the SAN side is just to make sure "load balancing" is enabled (it's the default anyway), and if the nodes are physical, to bond their network interfaces.

The ideal on the ESX side is to run ESX 4 and enable multi-NIC MPIO.
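For example, something like this from a script. The esxcli syntax below is the ESX 4.0-era form as I remember it (it changed in later releases), and the device ID is a placeholder, so check both against your build first:

```python
# Hedged sketch: switch an iSCSI device to the native round-robin path
# selection policy (VMW_PSP_RR) on ESX 4.0.
import subprocess

def set_round_robin(device_id: str) -> None:
    subprocess.run(
        ["esxcli", "nmp", "device", "setpolicy",
         "--device", device_id, "--psp", "VMW_PSP_RR"],
        check=True,
    )

# Hypothetical device ID -- substitute your volume's actual naa identifier.
set_round_robin("naa.xxxxxxxxxxxxxxxx")
```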

This rather long post is honestly a major simplification. To understand ESX better, I'd suggest you read these two posts:

For ESX 3.5, read this:
http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html
For ESX 4, read this:
http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html
Adam C, LeftHand Product Manager