BladeSystem - General

Virtual Connect Configuration for ESX 3.5 U3

 
dchmax
Frequent Advisor

Virtual Connect Configuration for ESX 3.5 U3

Among the members of our team we have discussed how to correctly configure Virtual Connect to work with ESX. Here is some background on our setup. The plan is to use BL495c blades to run the ESX hosts, with a 2-port Ethernet mezzanine card in mezz slot 1 and a 2-port FC mezzanine card in mezz slot 2. The chassis has Virtual Connect Ethernet modules in interconnect bays 1-4 and Virtual Connect Fibre Channel modules in bays 5-8. These are not Flex-10 modules. Each blade will have a maximum of four Ethernet ports:

Teamed onboard Eth 1/2 (active/passive) - Console/VMotion
Teamed mezz Eth 1/2 (active/passive) - VM traffic

So here is the dilemma:

1) One option would be to use non-shared uplinks. VLANs would be sent down the trunks and passed straight through to the NIC each has been assigned to; the ESX virtual switches would handle the VLANs.

2) The second option would be to create one fat trunk to each VC Ethernet module, with no links dedicated to VM traffic but rather shared by all blades. When setting up the server profile, use the "multiple networks" selection, and from within multiple networks select which VLANs should be passed through.

Any thoughts on which configuration is the best/supported by HP? Thanks.

[Updated on 3/12/2009 6:05 PM]
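For reference, the ESX side of option 1 would look something like this from the service console (a sketch only; the vSwitch, port-group, and vmnic names and the VLAN IDs are placeholders, not our actual values):

    # vSwitch for VM traffic, uplinked to the two teamed mezz NICs
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -L vmnic3 vSwitch1
    # one port group per guest VLAN, tagged by the vSwitch (VST)
    esxcfg-vswitch -A "VM_VLAN100" vSwitch1
    esxcfg-vswitch -v 100 -p "VM_VLAN100" vSwitch1
    esxcfg-vswitch -A "VM_VLAN200" vSwitch1
    esxcfg-vswitch -v 200 -p "VM_VLAN200" vSwitch1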
2 REPLIES
Adrian Clint
Honored Contributor

Virtual Connect Configuration for ESX 3.5 U3

First, there is something I don't understand about the NICs and interconnects; I am assuming your statement about the interconnect numbers is wrong. The recommended layout is VC-Eth in interconnect bays 1, 2, 5 & 6 and VC-SAN in 3 & 4. That way, if you want to expand to six NICs, you just change the 2-port mezz card to a 4-port and add VC-Eth modules to bays 7 & 8 (our configuration).

What we would have done if we used only 4 NICs would be Public on NIC1 and NIC4 (teamed, with SmartLink enabled), Service Console on NIC2, and VMotion on NIC3. We then put VMotion and the Service Console through one vSwitch on ESX and set the preferred NIC for each as above, with a failover NIC to the other.

If you have two NICs performing the same function, e.g. Public, you really ought not to put them on one chip on a mezz card; if the chip or the card fails, you will lose all your public network access. So split them between a mezz NIC and an onboard NIC. I would leave NIC1 as Public, as that's the default PXE NIC.

On to your dilemma... It depends on what VLANs are on the guests. If you want multiple VLANs going to the guests, the guests can have separate VLANs, and you don't want to use tunneling, go with mapped VLANs and shared uplink sets. (We did.)

One other thing to note: you can't add multiple networks to a single NIC from different shared uplink sets; they must all be on the same SUS.
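A rough sketch of that Service Console / VMotion vSwitch from the ESX service console (vmnic numbers and the IP address are placeholders; the per-port-group preferred/standby NIC order itself is set in the VI Client under NIC Teaming, not from this CLI):

    # attach both uplinks to the console vSwitch
    esxcfg-vswitch -L vmnic1 vSwitch0    # onboard NIC, preferred for Service Console
    esxcfg-vswitch -L vmnic2 vSwitch0    # mezz NIC, preferred for VMotion
    # port group and VMkernel NIC for VMotion
    esxcfg-vswitch -A "VMotion" vSwitch0
    esxcfg-vmknic -a -i 192.168.10.11 -n 255.255.255.0 "VMotion"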
dchmax
Frequent Advisor

Virtual Connect Configuration for ESX 3.5 U3

Thanks for the reply. I wish I could tell you that I'm wrong about the config, but sadly I'm not. I lobbied for a long time to have the interconnects moved around and a pair of Fibre VC modules removed, but we are running VMS on Itanium and for some reason they feel the need to have four fibre ports on a blade. Crazy, pointless, and expensive. Anyway...

Getting past that, I've had to work around the not-so-flexible setup. It's nice to know someone else is using shared uplink sets and mapped VLANs. Most VC docs, from both HP and VMware, point towards having a dedicated trunk with no shared uplink sets, just passing the trunk directly to the ESX host. I have a feeling the "multiple networks" option was added in a recent firmware release and the docs have not been updated.

I'm still interested in hearing about other ESX blade setups.
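For anyone else trying this, the Virtual Connect side of a shared uplink set with mapped VLANs would look roughly like the following from the VC Manager CLI (a sketch only; the uplink set and network names, the enclosure/bay/port numbers, and the VLAN IDs are made up for illustration):

    # shared uplink set with one external trunk port
    add uplinkset ProdTrunk
    add uplinkport enc0:1:X1 uplinkset=ProdTrunk speed=auto
    # one VC network per VLAN carried on the trunk
    add network Prod_VLAN100 uplinkset=ProdTrunk vlanid=100
    add network Prod_VLAN200 uplinkset=ProdTrunk vlanid=200

With "multiple networks" selected on a server profile NIC, both networks can then be mapped onto the same port, as long as (per Adrian's note) they come from the same shared uplink set.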