Tyler Modell
New Member

Virtual Connect + iSCSI + VMware... best practices?

I have been searching for information regarding iSCSI and Virtual Connect in a VMware virtualized environment, but I have not found much that specifically pertains to what I'm looking to do. Does anyone have any best practices, thoughts, or theories for using iSCSI in a VMware + Virtual Connect environment?

Traditionally, I would just run a DL380 with a multi-port NIC in a PCI slot and create a vSwitch with one LOM NIC and one expansion NIC for the iSCSI initiator. I would then connect those pNICs to a Cisco 3750G stack (cross-stack EtherChannel) and run a teaming policy of route based on IP hash. That gives me maximum throughput and redundancy, but I've been scratching my head on how to achieve the same thing with Virtual Connect. I know I can do it with a pair of 3120s or pass-throughs, but I'd like to avoid the complexity, hassle, cost, and cabling associated with each of those options if I could.
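For reference, the old DL380 setup looks roughly like the sketch below (purely illustrative: the vSwitch/vmnic names, VLAN, and channel-group number are placeholders, and I'm writing it in the newer esxcli syntax; on ESX 3.x/4.x the same settings are made in the VI/vSphere Client or with esxcfg-* commands):

# Build the iSCSI vSwitch from one LOM port and one expansion-card port
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1   # LOM port
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic4   # expansion NIC port

# Route based on IP hash -- only valid when the switch side has a matching static EtherChannel
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=iphash

# On the 3750G stack, the two member ports are bundled into a static channel, e.g.:
#   interface range GigabitEthernet1/0/10, GigabitEthernet2/0/10
#    channel-group 10 mode on
#    switchport access vlan 20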

It is my understanding that with Virtual Connect, if I set up shared uplink sets on two side-by-side modules going to the same core switches, one of the shared uplink sets will go into a blocking state to prevent loops. In that case, does the traffic that would have gone out of that path (i.e., LOM 2 on I/O module 2) simply not pass, or does it get routed across the internal 10GbE cross-connect and out of the shared uplink set on I/O module 1?

Also, as there is no downlink teaming available, is there a recommended VMware teaming policy that should be applied to vSwitches that use VC? I'm assuming the recommended method would be route based on originating virtual port ID?

Any thoughts, suggestions or insight would be greatly appreciated.

Thanks,

Tyler
4 Replies
HEM_2
Honored Contributor

Re: Virtual Connect + iSCSI + VMware... best practices?

Tyler:

I'm not aware of any docs that talk about iSCSI + VC + VMware. The only thing I would suggest is to have a dedicated VLAN for iSCSI traffic.
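For example, tagging the iSCSI port group onto its own VLAN is a one-liner (the port group name and VLAN ID are just examples, and this is the later esxcli syntax; older ESX does the same thing in the client):

esxcli network vswitch standard portgroup set --portgroup-name=iSCSI --vlan-id=20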

From above:
"In that case, does the traffic that would have been set to go out of that VC route (I.e. - LOM 2 on I/O Module 2) not pass traffic or would it then route traffic through the 10Gig-E internal cross connect and out of the shared uplink set on I/O module 1?"

You are talking about a single Shared Uplink Set with physical uplink ports on two or more VC modules. With a single Shared Uplink Set you end up with only one port or channel active and all others in standby. If your active port or channel is on Bay 1, then the NICs on Bay 2, for example, will use the internal 10Gb stacking link and go out the active port or channel on Bay 1.

You can implement two Shared Uplink Sets in what is called an Active/Active uplink configuration (see the Virtual Connect Ethernet Cookbook). Then the stacking links aren't used and each NIC uses an uplink port or channel on the module it directly connects to.
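Very roughly, in VCM CLI terms the Active/Active layout follows the pattern below. I'm writing this from memory of the cookbook examples, so treat the names, the port notation (enc0:<bay>:<port>), and the parameters as placeholders and verify against the cookbook for your module type and firmware:

# Shared Uplink Set A: uplink(s) only on the module in bay 1
add uplinkset SUS-A
add uplinkport enc0:1:1 uplinkset=SUS-A speed=auto
add network Prod-A uplinkset=SUS-A vlanid=10

# Shared Uplink Set B: uplink(s) only on the module in bay 2
add uplinkset SUS-B
add uplinkport enc0:2:1 uplinkset=SUS-B speed=auto
add network Prod-B uplinkset=SUS-B vlanid=10

# Each server profile then maps one NIC to Prod-A (bay 1) and one to Prod-B (bay 2),
# and the ESX host teams those two NICs.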

As far as teaming in ESX goes, the only policy that isn't supported is route based on IP hash, because it requires switch-side configuration (an EtherChannel) that you can't do on VC downlinks. The other teaming methods will work; mileage may vary depending on your environment's traffic patterns.
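So behind Virtual Connect you'd leave the vSwitch on the default originating-virtual-port-ID policy, or set it explicitly. Something like this, using the later esxcli syntax (the vSwitch name is an example; older ESX sets the same policy in the client):

# Route based on originating virtual port ID -- no switch-side channel needed, so it works on VC downlinks
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --load-balancing=portid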
Julian Stenning
Frequent Advisor

Re: Virtual Connect + iSCSI + VMware... best practices?

I have a setup that uses 2x VC 10GbE Ethernet modules in bays 1 & 2 for our 'public/front' blade interfaces (if you see what I mean), 2x VC in bays 5 & 6 for storage connectivity, and 2x VC in bays 7 & 8 for virtual machine guest traffic.

Each VC pairing has an active/standby setup for its uplinks.

Whilst our setup works (and, by the way, we use NFS because the iSCSI performance was awful), we have run into problems during VC firmware upgrades that need further investigation (storage disconnects during VC failover).

I'm also unclear where I should be doing the teaming. Should I team at the VC level and present a single NIC to the ESX server, or present two unteamed connections and team at the ESX level (e.g. 1x 10GbE from the VC in bay 5 to vmnic2 and 1x 10GbE from the VC in bay 6 to vmnic3, software-teamed for storage connectivity)?
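To put that second option in concrete terms, I mean something like the sketch below on the ESX side (the vmnic numbers, port group name, and addresses are just placeholders, and I'm using the later esxcli syntax rather than what we actually click through in the client):

# Storage vSwitch fed by one downlink from the bay 5 module and one from the bay 6 module
esxcli network vswitch standard add --vswitch-name=vSwitch2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic2   # VC bay 5
esxcli network vswitch standard uplink add --vswitch-name=vSwitch2 --uplink-name=vmnic3   # VC bay 6

# VMkernel port for NFS (or iSCSI) traffic
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch2 --portgroup-name=Storage
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Storage
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static

# Team at the ESX level: both uplinks active, originating-port-ID load balancing
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch2 --load-balancing=portid --active-uplinks=vmnic2,vmnic3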

I'm also keen to hear of others' experiences.

Regards,
Julian.
wacs
New Member

Re: Virtual Connect + iSCSI + VMware... best practices?

Hi,

Tyler, I saw your comment on the 'trends in infrastructure' thread. Kleinch mentioned a whitepaper on iSCSI, iSCSI boot, and BladeSystem; has that materialised?
Ugo Bellavance (ATQ)
Frequent Advisor

Re: Virtual Connect + iSCSI + VMware... best practices?

You may want to have a look at http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01386629/c01386629.pdf

and

http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01471917/c01471917.pdf

In our case, we made a bond (NIC team) out of the two LOMs using LACP and it works. On top of that, we created VLAN interfaces.
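For a concrete picture, that kind of bond-plus-VLAN-interface layout on a RHEL-style Linux host looks roughly like the files below (purely illustrative: device names, VLAN ID, and addresses are placeholders, and other operating systems do it differently):

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- LACP (802.3ad) bond over the two LOMs
DEVICE=bond0
BONDING_OPTS="mode=802.3ad miimon=100"
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0.20 -- VLAN 20 interface riding on the bond
DEVICE=bond0.20
VLAN=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.20.11
NETMASK=255.255.255.0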