BladeSystem - General

Douglas Spooner
Advisor

VC-Enet, Cisco 3750's and VMWare

Hi

I'm trying to work out the best way to provide maximum Ethernet bandwidth to our ESX 3.0.2 servers.

We had some consultants come in and set it up a while back. I'm now reviewing the config, as I've been doing some reading around on several blogs/forums, and I'm also going to be patching the rest of the empty VC Ethernet ports into our physical switches since we have increased the number of blades in the enclosure.

I'm still trying to get my head around all of this, so please forgive me if I ask any dumb questions or use terminology incorrectly.

2 Enclosures
4x 1/10Gb VC-Enet modules per enclosure
2x 4Gb VC-FC modules per enclosure

BL460c blades, 4 NICs, 1 QLogic card

4x 24-port Catalyst 3750s with stacking cables.

Each VC module is plugged into the same physical switch with an 802.1Q trunk.

So, for example, g1/0/16 - g1/0/24 covers Enc 1, VC Bay 1, ports 1-8:


! One of the uplink ports facing a VC module (one of eight per bay)
interface GigabitEthernet1/0/24
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 999
 switchport mode trunk
 spanning-tree portfast trunk

In Virtual Connect we have several networks defined (ALL_VLANS_1, 2, 3 and 4) with VLAN tunneling enabled.

Each network is then made up of all 8 ports from a bay, so ALL_VLANS_1 is ports 1-8 from Bay 1 (with the exception of port 8 on bays 1 & 6, which is reserved for the Shared Uplink Set).

Looking at the physical switch config, there are 4 EtherChannel groups (there will be 8 when I'm finished) set up using LACP and set to active.
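For reference, a minimal sketch of what I believe one of those groups looks like on the 3750 side (the port-channel number and member ports here are assumptions, not our exact config):

! Logical bundle that carries the trunk for one VC network
interface Port-channel1
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 999
 switchport mode trunk
!
! The physical members join the bundle via LACP in active mode
interface range GigabitEthernet1/0/17 - 24
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 999
 switchport mode trunk
 channel-group 1 mode active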

Currently each enclosure only has 8x 1Gb connections into the physical switches: 6 are used for the ALL_VLANS networks and 2 are used for the Shared Uplink Sets so we can use RDP to roll out the blades.


ESX 3.0.2 Network Config

1 vSwitch with 7 VLAN port groups, plus VMotion and the Service Console. So basically I select which network the guest will be in.
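For example, a port group per VLAN can be created from the service console like this (the port group name and VLAN ID are made up for illustration):

# Create a port group on the existing vSwitch and tag it with a VLAN ID
esxcfg-vswitch -A "VLAN110_Servers" vSwitch0
esxcfg-vswitch -v 110 -p "VLAN110_Servers" vSwitch0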

All 4 NICs will be active (currently 2 until I patch) and the load-balancing policy in the vSwitch is set to "Route based on the originating virtual port ID", which means a guest VM will never have access to more than 1Gb of bandwidth unless I add another virtual NIC.
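When I patch in the other two ports I expect to just link the extra NICs into the same vSwitch, something like this (the vmnic numbers and vSwitch name are assumptions):

# Add the remaining physical NICs as uplinks on the existing vSwitch
esxcfg-vswitch -L vmnic2 vSwitch0
esxcfg-vswitch -L vmnic3 vSwitch0
# List the vSwitch config to verify the uplinks were added
esxcfg-vswitch -l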


Originally I had planned to set up the EtherChannels with mode on and use "Route based on IP hash" on the ESX vSwitch, as the VMware & Cisco networking guide had mentioned this was the best configuration. But the info in this thread (http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1201089) seems to suggest this won't work: "This is impossible in VC because each blade NIC connects to a different VC module on the midplane."
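For clarity, the switch side of that abandoned plan would have been a static channel, roughly like this (a sketch only; as the quote above explains, the blade NICs terminate on different VC modules, so they could never land in one channel):

! Static EtherChannel ("mode on", no LACP) - what "Route based on IP hash" expects
interface range GigabitEthernet1/0/17 - 24
 switchport trunk encapsulation dot1q
 switchport mode trunk
 channel-group 2 mode on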

I have read several guides that talk about mixing two of these technologies, but never all three at once, i.e. VC, Cisco Catalyst switches & ESX.

So if I want to provide maximum bandwidth to my guests, inbound and outbound, what are my options?

Are the options limited because we are using VC Ethernet modules that plug into our existing switches?
AllenDerusha
Occasional Advisor

Re: VC-Enet, Cisco 3750's and VMWare

First thing: check out the "HP Virtual Connect Ethernet Networking Scenario Cookbook", which you can find here: http://h71028.www7.hp.com/enterprise/cache/316682-0-0-0-121.html

The trouble you are going to run into is that you cannot create a bonded network through VC straight to your Cisco stack, and you cannot bond NICs to VC across multiple VC devices.

When you say you have 4 NICs, I'm assuming you mean that you have the 2 onboard NICs and have separately installed a dual-port mezzanine card. Is that correct?
Douglas Spooner
Advisor

Re: VC-Enet, Cisco 3750's and VMWare

Hi Allen

Thank you for the reply.

After doing some further reading I came across the cookbook and saw the limitation you described about not being able to bond straight through.

Yes, I should have been clearer: the 4 NICs are the two onboard ones plus one dual-port mezzanine card.

From what I understand, it would be better to have the Cisco Catalyst 3020 switches rather than the 1/10Gb VC modules, as that allows different bonding and aggregation scenarios.