BladeSystem Virtual Connect

VC 1/10 and Flex Module Placement in c7000 Enclosure w/BL680c's

 
T. McQuillan
Occasional Visitor

VC 1/10 and Flex Module Placement in c7000 Enclosure w/BL680c's

Looking for a review of my Virtual Connect config incorporating Flex-10 on a fully populated c7000 with BL680c blades, for use with vSphere. There are a lot of gotchas with this config, and some of my interpretations of how VC and the 680s work together are the key areas I'd like reviewed.

Issue 1: Because the BL680c's onboard NICs are not Flex LOMs, and because I need 10 NICs for my vSphere vSwitches, I will need a 1/10Gb-F VC module in interconnect bays 1 and 2 and the Flex-10 VC modules in bays 5 and 6, to take advantage of mezzanine slot 2 for the Flex-10 NIC at x8 PCIe (compared to mezzanine slot 1 at x4 PCIe). That leaves bays 7 and 8 for my VC-FC modules.

Issue 2: Why 10 NICs? Redundancy and bandwidth:
- service_console (NICs 1 & 3, from the VC modules in bays 1 & 2), active/passive - vSwitch0
- vMotion (NICs 2 & 4, VMkernel ports, no VC connectivity), active/passive - vSwitch1
- presentation (NICs 5 & 8, from the VC Flex modules in bays 5 & 6), active/active - vSwitch2
- private (NICs 6 & 9, from the VC Flex modules in bays 5 & 6), active/active - vSwitch3
- backup-storage (NICs 7 & 10, from the VC Flex modules in bays 5 & 6), active/active - vSwitch4

Carving up the bandwidth is the main concern. I want to make sure I can use one 10Gb uplink to each VC Flex module and split that bandwidth between the presentation VLAN at 9Gb (active/active across the modules in bays 5 & 6 gives me 18Gb for presentation) and the private VLAN at 1Gb (active/active gives me 2Gb for private). I would then use a 1Gb uplink to each VC Flex module for the backup-storage VLAN (active/active gives me 2Gb for backup-storage).

I have put all of this into Visio and it appears to work on paper, but I am awaiting my Flex modules and Flex NICs to test it.
As you can see, my hang-ups come down to two perceptions: first, that I need the 1/10Gb-F VC modules so that a BL680c onboard NIC is the first NIC seen (NIC0) during the vSphere installation, to serve as the VMware service console NIC; and second, that I can't share the service_console NIC with VM traffic, and can't share vMotion with other traffic, because of the inability to put two VLANs down a single FlexNIC. So if anyone wants to comment, I would love to hear people's thoughts. Thanks!
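A quick way to sanity-check the carve-up described above is to tally the per-module FlexNIC allocations against the uplinks. This is a minimal sketch: the speeds are just the figures quoted in the post, Virtual Connect itself would enforce the per-FlexNIC limits, and the network names are the poster's own.

```python
# Per-module FlexNIC allocation (Gb) on each Flex-10 module in bays 5 and 6.
flexnic_speeds = {
    "presentation": 9,    # vSwitch2, shares the 10Gb uplink
    "private": 1,         # vSwitch3, shares the 10Gb uplink
    "backup-storage": 1,  # vSwitch4, fed by a separate 1Gb uplink
}

# presentation + private must fit on the single 10Gb uplink per module.
assert flexnic_speeds["presentation"] + flexnic_speeds["private"] <= 10

# Active/active across both modules doubles the aggregate per network.
aggregate = {net: speed * 2 for net, speed in flexnic_speeds.items()}
print(aggregate)  # {'presentation': 18, 'private': 2, 'backup-storage': 2}
```

The aggregate figures match the 18Gb / 2Gb / 2Gb numbers in the post, so the arithmetic at least holds on paper.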
6 REPLIES
chopper3
Frequent Advisor

VC 1/10 and Flex Module Placement in c7000 Enclosure w/BL680c's

I've been using Flex-10s for about nine months now, first with beta and now production G6 blades. I understand the need for five different VLANs, but not why you feel you need so many NICs. I'd be tempted to leave interconnect bays 1, 2, 7 & 8 empty, put the VC-FCs in 3 & 4 and the Flex-10s in 5 & 6, then simply run 10Gbps trunks into 5 & 6, let VC do the failover, present those trunks to ESX as a single NIC, and break out the various VLANs within ESX itself. It will moan because it won't see redundant console NICs, but just ignore that. If you start hitting bandwidth issues, simply run another pair of 10Gbps links and let VC do the EtherChanneling. Of course you're 'wasting' the LOMs (pity the 680 G6s aren't out yet!), but it will be much simpler to build and manage this way. That's my 2c anyway.
T. McQuillan
Occasional Visitor

VC 1/10 and Flex Module Placement in c7000 Enclosure w/BL680c's

One of the reasons for the number of NICs is that our network team is not keen on VLAN tags, or I should say on having to manage tags; of course that leaves it to me to handle, which is no issue when configuring it in the SUS (Shared Uplink Set) and then at the vSwitch in VMware. Secondly, VMware's recommendations limit my ability to share NICs between service console traffic and VM guest traffic. I am definitely looking for a way to avoid using the 680's onboard NICs. If I can get away with using just the VC Flex modules and the VC-FC modules, I save the configuration mess, the money for the 1/10Gb-F modules, and the network port and cabling costs and complexity for my network team.
bluehoops
Occasional Visitor

VC 1/10 and Flex Module Placement in c7000 Enclosure w/BL680c's

I am having a similar dilemma. I want to split traffic from my hypervisor for the service console, storage, and VM-generated traffic. Having 8 "virtual" NICs seems a waste, as they map to the same 10Gb interface. Why couldn't I use, say, a 1Gb eth0 for the service console, then assign the rest to eth1 and attach two VLANed networks to eth1, one for storage and one for domain traffic? I am new to this, so apologies if the answer is obvious :)
chopper3
Frequent Advisor

VC 1/10 and Flex Module Placement in c7000 Enclosure w/BL680c's

When your OS can manage all of the VLAN tagging you need (as VMware can), why wouldn't you just use the full 10Gbps as a single NIC and break the VLANs out in the OS? That way the COS only uses the bandwidth it needs (1Gbps is far more than the COS requires), and vMotion and the other networks can each use as much bandwidth as they can, when they need it.
Pascal
Occasional Advisor

VC 1/10 and Flex Module Placement in c7000 Enclosure w/BL680c's

You can split the traffic with the Flex-10. Just define a network, let's say "admin", and at the network level choose a maximum bandwidth of 1 Gb/s. Then create two other networks, for storage and for domain traffic, without any limit.

When you create your server profile, add one adapter with the admin network only, and it will present a 1 Gb/s network card to the server. Then add a second adapter with multiple networks (storage + domain), and the server will automatically see a 9 Gb/s adapter. If you prefer one physical adapter per network presented to the server, Flex-10 will just split the remaining bandwidth, so with three adapters (one for each network) you would see 1 + 4.5 + 4.5.

This is what we did for our VMware hosts (assuming you have two Flex-10 modules): two adapters (one per Flex-10, LOM:1-a and LOM:2-a) at 1 Gb/s supporting the admin VM network and the host console, and two adapters (one per Flex-10, LOM:1-b and LOM:2-b) at 9 Gb/s supporting all the other VM networks (storage, production, etc.), each network using as much bandwidth as it needs, when it needs it. The host will see four adapters, two for admin and two for the other networks, and will be able to load-balance or fail over on both pairs depending on your choices.

This is just an example; though English is not my native language, I hope you will understand it.
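The split-the-remaining-bandwidth behaviour described above can be sketched as a small calculation. This is illustrative only, under the assumption stated in the post: capped networks keep their fixed maximum, and the rest of the module's 10 Gb is divided evenly among the uncapped adapters; the function and network names are made up for the example.

```python
def flexnic_split(total_gb, capped, uncapped_count):
    """Model the described Flex-10 allocation: `capped` maps network name
    to its fixed max bandwidth (Gb); the remainder of `total_gb` is split
    evenly across `uncapped_count` uncapped adapters."""
    remaining = total_gb - sum(capped.values())
    share = remaining / uncapped_count
    uncapped = {f"uncapped-{i + 1}": share for i in range(uncapped_count)}
    return {**capped, **uncapped}

# One adapter capped at 1 Gb (admin), two uncapped (storage, domain):
print(flexnic_split(10, {"admin": 1}, 2))
# {'admin': 1, 'uncapped-1': 4.5, 'uncapped-2': 4.5}
```

This reproduces the 1 + 4.5 + 4.5 split mentioned above for three adapters on one 10 Gb module.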
T. McQuillan
Occasional Visitor

VC 1/10 and Flex Module Placement in c7000 Enclosure w/BL680c's

Thanks folks for your posts!