12-17-2012 01:36 PM
East/West vMotion network without stacking in Virtual Connect
Chris was looking for some info:
I am working on a design that will have a mixture of VMware ESXi hosts (one cluster of 16 blades) and physical Windows blades. The configuration has 2 c7000 enclosures with Virtual Connect FlexFabric modules in slots 1 and 2. Because of the requirement for both ESXi and Windows hosts in the enclosures, I am planning to use the managed (mapped VLANs) mode.
Should I stack the two enclosures or not stack them?
From my perspective, the benefit of stacking is that my vMotion and FT logging traffic can be configured to be entirely contained in the pair of stacked enclosures.
The benefit of not stacking is that I can execute completely independent firmware upgrades of each enclosure. I could use VMware vMotion to migrate all VMs from enclosure A to B, upgrade enclosure A, then move all VMs from B to A, upgrade enclosure B, and finally redistribute the VMs between the two enclosures. Done. No risk in case something goes wrong, and no dependency on NIC failover at the ESXi level to maintain connectivity during the firmware upgrade.
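The upgrade sequence above can be sketched as a toy simulation. The helper names and VM counts here are hypothetical, not a vCenter or Virtual Connect API; the point is only the ordering: an enclosure is flashed exclusively while it hosts zero VMs.

```python
# Toy model of the rolling firmware upgrade for two UNstacked enclosures.
# evacuate/upgrade_firmware/rebalance are illustrative placeholders.
enclosures = {"A": {"vms": 300, "firmware": "old"},
              "B": {"vms": 300, "firmware": "old"}}

def evacuate(src, dst):
    """vMotion every VM off src onto dst (spare DR capacity makes this possible)."""
    enclosures[dst]["vms"] += enclosures[src]["vms"]
    enclosures[src]["vms"] = 0

def upgrade_firmware(enc):
    """Flash one enclosure; safe only because it hosts no VMs at this point."""
    assert enclosures[enc]["vms"] == 0, "never upgrade a loaded enclosure"
    enclosures[enc]["firmware"] = "new"

def rebalance():
    """Spread the VMs back across both enclosures."""
    total = sum(e["vms"] for e in enclosures.values())
    for e in enclosures.values():
        e["vms"] = total // 2

evacuate("A", "B"); upgrade_firmware("A")   # step 1: drain A, flash A
evacuate("B", "A"); upgrade_firmware("B")   # step 2: drain B, flash B
rebalance()                                 # step 3: redistribute VMs
```

Because each enclosure is its own VC domain, a failure at any step leaves the other enclosure untouched and still running every VM.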
A few more considerations:
- I will have capacity to fit all VMs on half of the nodes temporarily, as there is extra capacity reserved for DR of VMs from another data center.
- I do not care about the physical Windows machines during the firmware upgrade.
- I do care about availability of all VMs running in the VMware farm. There will be 600+ of them, so it will be nearly impossible to negotiate an outage across all of them. And if the cluster, or a significant portion of it, went down unexpectedly, the disruption to the business would be deemed unacceptable. This risk must be avoided.
At the moment I am leaning towards not stacking, as I consider the firmware upgrade a huge and, from my perspective, unquantifiable risk: I do not control the process, and it is monolithic and non-transparent.
Your comments and advice will be much appreciated.
Lots of input:
We have been discussing the same thing for an Integrity Blade HA solution, and currently I am recommending that customers consider not stacking the enclosures that contain their failover destination environment, because they can then perform an independent firmware upgrade on one enclosure without affecting the other.
I would not stack either.
What would be the real benefit of stacking in your setup? I assume you are not just looking for cable reduction and VCM consolidation, right? If you stack, then you have created a dependency between both chassis:
1. VCM for the whole domain runs from one enclosure, so both chassis depend on it. That's not good in your situation.
2. Firmware upgrades are done on both chassis at once. This also poses a risk.
Keep them separate.
If you need to keep the vMotion and FT traffic within these TWO separate enclosures, you can define the vMotion/FT networks with uplinks cabled directly between the enclosures.
NOTE: This ONLY works between TWO enclosures.
NOTE: Define the networks with uplinks BEFORE making the physical connections or they will be treated as Stacking links.
NOTE: See first note.
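To make the ordering in the notes above concrete, the configuration might look roughly like this in the VCM CLI, run on each single-enclosure domain before any cables are connected. The network name and port IDs (vMotion-Net, enc0:1:X3, enc0:2:X3) are illustrative assumptions; check the exact syntax against the Virtual Connect CLI User Guide for your firmware version.

```
# Illustrative only -- define the network BEFORE cabling the ports,
# otherwise VC will treat the new links as stacking links.
add network vMotion-Net
add uplinkport enc0:1:X3 Network=vMotion-Net
add uplinkport enc0:2:X3 Network=vMotion-Net
# Repeat on the second domain, then cable module 1 to module 1 and
# module 2 to module 2 between the two enclosures.
```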
If you need vMotion/FT beyond a stacked pair, or beyond a VC-only network-linked enclosure pair, you need external switches. In the medium to long term, growth could be supported by a separate external vMotion/FT network, migrating the ESX cluster nodes as appropriate.
This type of connection is mentioned in one of the docs but I don't recall which.
There are 2 things to keep in mind though.
1) Whatever network(s) exist between these two domains cannot be routed outside the enclosures. This is because VC does not allow traffic to come in one set of uplinks and then go out another. And these "back to back" connections are considered standard uplinks.
2) It's not just between 2 enclosures but between 2 domains. So if you had 2 stacks of 2 enclosures each, those stacks can be linked in such a manner as to allow all 4 enclosures to access these "local" networks.
And from Thomas:
This might have what you are looking for:
Directly Connecting VC Domains
In a multi-enclosure domain configuration with properly installed stacking cables, each network defined in the domain is available to all server profiles in the domain without requiring any additional uplink ports. This configuration enables you to establish an open communication path between two or more enclosures.
See the Virtual Connect for c-Class BladeSystem Setup and Installation Guide:
A user can also directly connect the uplinks from two enclosures (different domains) so that servers in the two domains attached to the networks configured for those uplinks can communicate with one another. This configuration establishes a private communication path between the two enclosures. However, the communication path is public for all of those servers and applications associated with it. Traffic would not flow from an upstream switch over that direct connection.
The two enclosures can communicate with each other by a dedicated uplink port or a shared uplink port defined on each enclosure. These uplinks on the two enclosures can be “teamed” using LACP because both domains run LACP active. The link between the two enclosures cannot have any additional active links connected to other targets. Only networks defined for that link can be shared between the two enclosures.
If you want module-level fault tolerance, you would need to use the failover connection mode to prevent a situation where enclosure 'A' selected the module 1 connection as linked/active while enclosure 'B' selected module 2 as linked/active. You can avoid this black hole by specifying the module 1 connections as primary on both enclosures and the module 2 connections as secondary on both enclosures. However, this prevents creating LACP channels between the enclosures, though it still provides fault tolerance.
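The black-hole scenario above can be illustrated with a minimal model: the cross-connect cables run module 1 to module 1 and module 2 to module 2, so traffic flows only when both enclosures are active on the same module. This is a simulation of the documented behavior, not a VC API.

```python
# Toy model of the cross-enclosure failover black hole.
def path_up(active_module_a: int, active_module_b: int) -> bool:
    """Back-to-back links are module1<->module1 and module2<->module2,
    so the path is up only if both enclosures chose the same module."""
    return active_module_a == active_module_b

# Independent selection can black-hole: A on module 1, B on module 2.
assert not path_up(active_module_a=1, active_module_b=2)

# Failover mode with module 1 forced primary on BOTH enclosures works.
assert path_up(active_module_a=1, active_module_b=1)
```

Pinning the same primary on both sides trades away LACP aggregation across the link but guarantees the active ends always line up.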
Connecting an “external server” directly to Virtual Connect.
Only a single NIC from an external server would be supported per vNet defined. The vNet and external server would be isolated from any other external network, without requiring additional NICs, uplink ports, or vNets.