BladeSystem Virtual Connect

To stack or not to stack c7000 enclosures

 
chuckk281
Trusted Contributor

To stack or not to stack c7000 enclosures

Chris was looking for advice for a customer situation:

 

*********************

 

Hi,

 

I am working on a design that will have a mixture of VMware ESXi hosts (one cluster of 16 blades) and physical Windows blades.  The configuration has 2 c7000 enclosures with Virtual Connect FlexFabric modules in slots 1 and 2. Because of the requirement for both ESXi and Windows hosts in the enclosures, I am planning to use the managed (mapped VLANs) mode.

 

Should I stack the two enclosures or not stack them?

 

From my perspective, the benefit of stacking is that my vMotion and FT logging traffic can be configured to be entirely contained in the pair of stacked enclosures.

 

The benefit of not stacking is that I can execute completely independent firmware upgrades of each enclosure.  I could use VMware vMotion to migrate all VMs from enclosure A to B, upgrade enclosure A, move all VMs from B to A, upgrade enclosure B, and then redistribute the VMs between the two enclosures. Done. No risk if something goes wrong, and no dependency on NIC failover at the ESXi level to maintain connectivity during the firmware upgrade.
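The sequence above can be sketched as a small simulation. This is purely illustrative: the function and the enclosure names are placeholders, not real VMware or Virtual Connect APIs, and it assumes either enclosure can temporarily host every VM.

```python
# Minimal simulation of the rolling-upgrade sequence: drain A, upgrade A,
# drain B onto A, upgrade B, then redistribute. Names are hypothetical.

def rolling_upgrade(hosts):
    """hosts: dict mapping enclosure name -> list of VM names."""
    a, b = sorted(hosts)
    log = []

    # Step 1: vMotion everything off enclosure A, then upgrade it.
    hosts[b] = hosts[b] + hosts[a]
    hosts[a] = []
    log.append(f"upgrade {a}")

    # Step 2: vMotion everything onto the freshly upgraded A, upgrade B.
    hosts[a] = hosts[b]
    hosts[b] = []
    log.append(f"upgrade {b}")

    # Step 3: redistribute VMs across both enclosures.
    all_vms = hosts[a]
    half = len(all_vms) // 2
    hosts[a], hosts[b] = all_vms[:half], all_vms[half:]
    return log

hosts = {"encl-A": ["vm1", "vm2"], "encl-B": ["vm3", "vm4"]}
steps = rolling_upgrade(hosts)   # ["upgrade encl-A", "upgrade encl-B"]
```

At no point is a VM on an enclosure while its firmware is being upgraded, which is the whole argument for keeping the enclosures independent.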

 

A few more considerations:

        - I will have capacity to fit all VMs on half of the nodes temporarily, as there is extra capacity reserved for DR of VMs from another data center.

        - I do not care about the physical Windows machines during the firmware upgrade.

        - I do care about the availability of all VMs running in the VMware farm.  There will be 600+ of them, so it will be nearly impossible to negotiate an outage for all of them.  And if the cluster, or a significant portion of it, went down unexpectedly, the disruption to the business would be deemed unacceptable. This risk must be avoided.
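The "all VMs fit on half of the nodes" assumption is worth checking with real sizing numbers. A rough sanity check might look like this; the figures (16-blade cluster split as 8 per enclosure, 4 GB average VM RAM, 384 GB per blade, 10% HA reserve) are placeholders, not values from the thread.

```python
# Hypothetical capacity check: can one enclosure's worth of ESXi blades
# hold the whole VM population during the upgrade of the other enclosure?

def fits_in_one_enclosure(vm_ram_gb, blades_per_enclosure, ram_per_blade_gb,
                          ha_reserve_fraction=0.1):
    """Return True if total VM RAM fits on one enclosure's blades,
    keeping an HA reserve free. All inputs are illustrative."""
    usable = blades_per_enclosure * ram_per_blade_gb * (1 - ha_reserve_fraction)
    return sum(vm_ram_gb) <= usable

# Example: 600 VMs at 4 GB each vs. 8 blades with 384 GB each.
demand = [4] * 600                                  # 2400 GB total
print(fits_in_one_enclosure(demand, blades_per_enclosure=8,
                            ram_per_blade_gb=384))  # 2400 <= 2764.8 -> True
```

CPU, network, and datastore headroom would need the same treatment before committing to the evacuate-and-upgrade plan.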

 

At the moment, I am leaning towards not stacking, as I consider the firmware upgrade a huge risk - and from my perspective an unquantifiable one, because I do not control the process; it is monolithic and non-transparent.

 

Your comments and advice will be much appreciated.

 

****************************

 

Steve indicated his preference:

 

************************

 

We have been discussing the same thing for an Integrity Blade HA solution. Currently, I am recommending that customers consider not stacking the enclosures that contain their failover destination environment, because that allows an independent firmware upgrade of one enclosure without affecting the other while it is being performed.

 

********************************

 

Oliver also joined in:

 

******************

 

Hi Chris,

 

I would not stack either.

 

What would be the real benefit of stacking in your setup? I assume you are not just looking for cable reduction and VCM consolidation, right? If you stack, you create a dependency between the two chassis:

  1. VCM runs on one chassis but manages both, so the domain depends on that single chassis. That’s not good in your situation.
  2. Firmware upgrades are done on both chassis together. This also poses a risk.

 

Keep them separate.

 

**********************

 

What do you think? What have you implemented?

1 REPLY
chuckk281
Trusted Contributor

Re: To stack or not to stack c7000 enclosures

Greg also weighed in on the subject:

 

*********************

 

If you need to keep the vMotion and FT traffic within these TWO separate enclosures, you can define the vMotion/FT networks with direct uplinks between the enclosures.

 

Caveat notes

NOTE: This ONLY works between TWO enclosures. 

NOTE: Define the networks with uplinks BEFORE making the physical connections or they will be treated as Stacking links.

NOTE: See first note.
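The second note describes an ordering rule: a cross-enclosure cable is treated as a stacking link unless the port was already assigned to a defined network. A toy sketch of that classification (purely illustrative port names, not a Virtual Connect API):

```python
# Illustrative model of the ordering caveat: define networks with uplinks
# BEFORE cabling, or the new enclosure-to-enclosure link becomes a
# stacking link instead of a network uplink.

def classify_link(port, defined_uplink_ports):
    """Ports already assigned to a network act as network uplinks; any
    other enclosure-to-enclosure connection is treated as stacking."""
    return "network-uplink" if port in defined_uplink_ports else "stacking-link"

defined = {"enc1:bay1:X5", "enc2:bay1:X5"}      # networks defined first
print(classify_link("enc1:bay1:X5", defined))   # network-uplink
print(classify_link("enc1:bay2:X6", defined))   # stacking-link
```

In other words, the cabling step is not order-independent: plug first and you get a stacked domain you did not want.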

 

If you need vMotion/FT outside a stacked enclosure pair (or a pair linked only by VC networks), you need external switches.  For the medium to long term, growth could be supported by separate external vMotion/FT networks, migrating the ESX cluster nodes as appropriate.

 

****************************