BladeSystem - General
c7000, Flexfabric and VMware ESXi 4.1

 
SOLVED
Brian Proteau
Frequent Advisor

c7000, Flexfabric and VMware ESXi 4.1

We're looking at a transition for our ESX environment from DL series rack mounts to c7000 and BL620C G7 Servers. We'll be connecting to a Nexus and I am trying to get my head around Flexfabric interconnects.

(1) Am I right to assume I'll need 2 "HP Virtual Connect FlexFabric 10Gb/24-Port Modules" per c7000 chassis to eliminate that as a single point of failure?

(2) Is it true that I can connect (at most) 2 FlexFabric ports to each BL620c G7 server blade? In that case, I would choose one from each VC FlexFabric module.

(3) My plan is to carve those into 3 pairs of NICs (2-Management), (2-VMkernel), and (2-VM Networks), as well as (2-FCoE for FC storage), with each pair split across the FlexFabric modules. Does this make sense?

(4) I read something about VLAN limitations which I didn't fully understand. We currently have 40+ VLANs going across our VM network trunks. Will I be able to do that on the VM Networks as defined above?

(5) Lastly, ideally I'd like a 5th pair to carve out for Fault Tolerance, but that doesn't seem possible, so I suppose I'll just share the VMkernel pair between vMotion and Fault Tolerance. Just curious what others are doing.



5 REPLIES
Markus M.
Trusted Contributor
Solution

Re: c7000, Flexfabric and VMware ESXi 4.1

Hi Brian,

1) That's correct!

2) You can use all 4 embedded 10Gb ports (= 16 virtual NICs, since each physical port is carved into 4 FlexNICs)

3) It makes sense, but keep in mind that by default only one interconnect is active for a given network, and therefore only one host port carries its traffic

4) 40+ is no problem; I don't know the exact limit, but it's much higher

5) see 2) - no problem at all
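For reference, the carve-up described in question 3 might look like this from the ESX 4.1 service console. This is only a sketch: the vmnic-to-FlexNIC mapping shown (even-numbered vmnics on the bay 1 module, odd-numbered on bay 2) is an assumption, so verify your own mapping with `esxcfg-nics -l` before pairing.

```shell
# Sketch only -- vmnic numbering is an assumption; confirm the
# vmnic-to-FlexNIC mapping with "esxcfg-nics -l" first.

# vSwitch0 exists by default and carries the management network;
# just add the second uplink (other VC module) for redundancy.
esxcfg-vswitch -L vmnic1 vSwitch0

# VMkernel (vMotion) pair
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1   # bay 1 module
esxcfg-vswitch -L vmnic3 vSwitch1   # bay 2 module

# VM networks pair, with one port group per VLAN (40+ port groups is fine)
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
esxcfg-vswitch -A "VLAN110" vSwitch2
esxcfg-vswitch -p "VLAN110" -v 110 vSwitch2

# The FCoE pair appears to ESX as vmhba storage adapters, not vmnics,
# so it needs no vSwitch configuration.
```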

Kind regards
Markus
Brian Proteau
Frequent Advisor

Re: c7000, Flexfabric and VMware ESXi 4.1

Thank you for the reply. Still a bit confused on this one...

3) It makes sense, but keep in mind, only one Interconnect is active, and therefore also the host port

On the vSphere ESX host, the load balancing policy (for the VM traffic) can be configured to utilize both (or all) members of the vSwitch team. In my current environment (DL380 G7) I have the vSwitch teams spread across 2 different physical NICs, and both NICs are utilized for VM traffic.

Are you saying that wouldn't happen using a pair of HP Virtual Connect FlexFabric 10Gb/24-Port Modules?
JKytsi
Honored Contributor

Re: c7000, Flexfabric and VMware ESXi 4.1

3) You can. Just remember to configure your Virtual Connect shared uplink sets in an active/active configuration (see the FlexFabric Cookbook for details)
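The active/active pattern from the cookbook boils down to one shared uplink set per FlexFabric module, each VLAN defined once per uplink set, and the server profile mapping one FlexNIC of each pair to the "A" networks and the other to the "B" networks. A rough VC CLI sketch follows; the uplink port IDs and exact parameter syntax here are assumptions from memory, so check the VC CLI User Guide for your firmware before using them.

```shell
# Indicative only -- port IDs and parameter syntax are assumptions.

# One shared uplink set per FlexFabric module
add uplinkset SUS_A
add uplinkport enc0:1:X5 UplinkSet=SUS_A Speed=Auto   # bay 1 uplink
add uplinkset SUS_B
add uplinkport enc0:2:X5 UplinkSet=SUS_B Speed=Auto   # bay 2 uplink

# Define each VLAN once per uplink set
add network VLAN110_A UplinkSet=SUS_A VLanID=110
add network VLAN110_B UplinkSet=SUS_B VLanID=110

# In the server profile, map the bay 1 FlexNIC to the _A networks and
# the bay 2 FlexNIC to the _B networks; both vSwitch uplinks are then
# active, each through its own module.
```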
Remember to give Kudos to answers! (click the KUDOS star)

You can find me from Twitter @JKytsi
JKytsi
Honored Contributor

Re: c7000, Flexfabric and VMware ESXi 4.1

Brian ... you have been a member of ITRC since 2007 and haven't given a single point to the answers that try to help you :/
Brian Proteau
Frequent Advisor

Re: c7000, Flexfabric and VMware ESXi 4.1

Thanks for the replies. Sorry about the points. I'm in many forums and I always do; it's not quite as obvious in this one. I'll see if I can retroactively square that up.

Thanks again.