Array Setup and Networking

jrhall
Occasional Advisor

CS210 NICs and VMware Config

I've been using the CS210 now for a few months shy of a year and love it. It was my first SAN (and our company's first) and I can't imagine using anything else now - it's fast and simple. It took me a few months to get our (smallish) environment entirely onto the Nimble, as I was learning and testing along the way.

Anyway, over the past few months I've played with different NIC configurations, trying to determine the "best" setup for performance.

I started out using the recommended setup of Eth1 and 2 as Management and Eth3 and 4 for iSCSI. In that configuration, on the VMware side each host had two VMkernel vSwitches, each with one vmnic, bound to iSCSI traffic. On the Nimble side, while monitoring NIC performance, both NICs showed identical traffic patterns (good).
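
For anyone following along, here's roughly how I sanity-check that 1-to-1 binding from the ESXi shell; the vmhba number below is just a placeholder for whatever your software iSCSI adapter is:

# list the standard vSwitches and their uplinks (each iSCSI vSwitch should show a single vmnic)
esxcli network vswitch standard list
# list the VMkernel ports and the portgroups/MTU they use
esxcli network ip interface list
# show which vmk ports are bound to the software iSCSI adapter (vmhba33 is just a placeholder)
esxcli iscsi networkportal list --adapter vmhba33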

I then tried adding a third Nimble NIC to the iSCSI network, and on the VMware side I added a 3rd iSCSI vSwitch with a dedicated vmnic. The NIC traffic patterns on the Nimble were not consistent. I would have expected all 3 NICs to show the same traffic as before, but this wasn't happening. Traffic seemed to match across 2 random NICs, with the 3rd showing something different or barely used. I also didn't notice any improved throughput, but I didn't run any tests to confirm that either; that assumption is based solely on what I was seeing in the Nimble GUI.
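
In case anyone wants to see what I was looking at, these are the sorts of checks I ran on a host to see how sessions and paths were spread out (the adapter and device IDs are placeholders):

# roughly one iSCSI session per bound vmk port per discovered portal
esxcli iscsi session list --adapter vmhba33
# list the paths to a volume and their state (the naa ID is a placeholder)
esxcli storage core path list --device naa.xxxxxxxxxxxxxxxx
# for live per-vmnic throughput, run esxtop and press "n" for the network view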

Since that last setup didn't get me the results I was after, my current config is back to 2 vSwitches with 1 vmnic each, but I've left the 3 NICs on the Nimble dedicated to iSCSI. There didn't seem to be a difference whether I had 2 or 3 vSwitches back to the Nimble.

I know that in the VMware iSCSI setup there's a choice to assign one vmnic per vSwitch, or multiple vmnics per vSwitch (the Nimble docs mention either method). I haven't done any testing to see whether the other method would work better than the 1-to-1 setup I have now.
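
For reference, here's a rough sketch of the other method as I understand it from the docs - one vSwitch with multiple vmnics, each iSCSI portgroup overridden to a single active uplink, then port-bound to the software iSCSI adapter. The portgroup, vmnic and vmhba names are placeholders, not my actual config:

# pin each iSCSI portgroup to exactly one active uplink
esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-1 --active-uplinks vmnic2
esxcli network vswitch standard portgroup policy failover set --portgroup-name iSCSI-2 --active-uplinks vmnic3
# then bind each VMkernel port to the software iSCSI adapter
esxcli iscsi networkportal add --nic vmk1 --adapter vmhba33
esxcli iscsi networkportal add --nic vmk2 --adapter vmhba33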

I guess my questions are: am I missing something in setting this up to get better (or more balanced) NIC performance out of the Nimble, or is this a design "issue" somewhere? Or will the Nimble never balance throughput across 3 NICs back to VMware? Is this a fruitless experiment?

For reference, here are some other settings along the way:

Nimble OS 2.2.6

3x ESXi 5.5 hosts

NCM plugin on hosts

2x HP2920 dedicated to iSCSI, stacked, flow-control, jumbo

Jumbo is turned on in vSwitches, vmnics, Nimble.
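
In case it's useful, this is how I verified jumbo end to end from each host (the vmk name and the data IP are placeholders):

# confirm the VMkernel ports report MTU 9000
esxcli network ip interface list
# send an 8972-byte payload with don't-fragment set to a Nimble data IP
vmkping -I vmk1 -d -s 8972 <nimble-data-ip>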

I'd be happy to provide other settings if anyone thinks it necessary.

4 REPLIES
Valdereth
Trusted Contributor

Re: CS210 NICs and VMWare config

Are you experiencing some slowness or issues that have led you down this path?  Do you have a performance need that isn't being met?  Just curious, because with 3 Hosts I would imagine two iSCSI interfaces per Host would suit you just fine.

Adding a 3rd iSCSI interface on the Nimble is definitely an option, but that leaves you with a single management port per controller (assuming 4 interfaces per controller). I doubt the controllers will fail over if that single management interface drops, so you might want to take that into consideration as well.

jrhall
Occasional Advisor

Re: CS210 NICs and VMWare config

I'm not experiencing slowness or issues that I'm aware of; I'm just trying to squeeze as much performance and utilization out of the setup as I can. Adding the 3rd NIC was suggested by a Nimble systems engineer (Glenn Stewart) when he went over my initial install some months ago.

I was also under the impression that a NIC failure would trigger a failover.

Valdereth
Trusted Contributor

Re: CS210 NICs and VMWare config

Are you seeing latency or network utilization numbers that lead you to believe adding another interface would increase performance?

Not trying to be difficult, but I'd caution against relying on controller failover to handle management network failures rather than using redundant interfaces on the same controller.

Back to your original question though - Nimble Connection Manager will utilize the optimal number of iSCSI sessions and paths to the volumes on the array. I've worked with similar host extensions, and they didn't actually utilize the new NICs until I generated enough IO. If you really wanted to utilize all NICs evenly, you'd probably want to use a modified Round Robin policy on your hosts. Since you have NCM installed though, just verify your hosts are using it and let it do its thing.
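
A quick way to check that from the ESXi shell is something like the following; the device ID is a placeholder, and the last two commands are only relevant if a host were NOT running NCM's own path policy:

# confirm the Nimble NCM vib is installed
esxcli software vib list | grep -i nimble
# confirm Nimble volumes are claimed by Nimble's path selection policy rather than the default
esxcli storage nmp device list | grep -i -A 2 nimble
# only without NCM: switch a volume to Round Robin and lower the IOPS limit per path switch
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
esxcli storage nmp psp roundrobin deviceconfig set --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1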

jrhall
Occasional Advisor
Solution

Re: CS210 NICs and VMWare config

I have noticed that my latency has increased as I moved both my DB2 and Exchange servers onto the Nimble (from physical machines with local storage), but nothing too bad. My average latency is still 4ms or less, so it's not a concern at this point.

It's not that I'm trying to get more performance necessarily. It's more that once I added the 3rd NIC I didn't see it being used much, and saw no performance benefits. I didn't know if that was caused by a config issue or something else. I simply would have expected to see all 3 NICs with similar traffic patterns, and I'm not.

I'm wondering if it has something to do with 3 NICs across only 2 physical switches, but I have no idea.