NMcClintock
Occasional Contributor

Connecting two VC Stacks

We are currently redesigning our VC environment and are contemplating a dual-stack solution; however, we have a question about connecting the two stacks without stacking links.

This is what we have:

Encl 1,2,3,4 all fully populated in a single VC domain and all enclosures stacked via CX4 ports.

What we want (if possible):

Encl 1&3 Stacked via CX4 - Stack A
Encl 2&4 Stacked via CX4 - Stack B

We would like to configure X3, X4, X5 and X6 as network sets for heartbeat, migration, management etc. traffic between Stack A & Stack B using LACP, but without having both stacks joined into one (the network sets changing into stacking links). The question is...

To achieve this config, do we have to physically connect the VC modules to our Cisco Nexus (Stack A > Nexus > Stack B), or can we define networks on both stacks and then connect them directly (Stack A > Stack B)? I have read that as long as we define the networks before the physical connection between stacks is made, it should work fine.
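
Roughly what we have in mind, as a VCM CLI sketch (the syntax is from memory of the VC CLI user guide and the network/port names are just placeholders, so treat it as illustrative only; we would obviously verify it against the CLI guide for our firmware before running anything):

# On Stack A (Encl 1 & 3), define the cross-stack networks on the chosen uplinks first
add network XStack-Heartbeat
add uplinkport enc0:1:X3 Network=XStack-Heartbeat
add uplinkport enc0:2:X3 Network=XStack-Heartbeat
# ...repeat for the migration and management networks on X4-X6...
# Then do the same on Stack B (Encl 2 & 4) with matching networks, and only after that
# cable Stack A's X3-X6 to Stack B's X3-X6, so the ports come up as defined uplinks
# rather than being discovered as stacking links.

Whether VC still treats a VC-to-VC cable as a stacking link even with the networks defined first is exactly what we want to confirm.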

We realise we will have 2 VC domains, but that's better than a full blackout for FW updates. We are lucky enough to have 2 enclosures in our lab, but we can't start testing this for another 3 weeks, which sucks.

Any advice is welcome, as this is a first for me.

Talk about the deep end...... Week 4 of using Virtual Connect

Interconnects:
Bay 1 & 2: 1/10Gb Flex-10 Ethernet
Bay 3 & 4: 4Gb VC FC

 

 

P.S. this thread has been moved from HP BladeSystem to HP BladeSystem Virtual Connect - HP Forums moderator

7 REPLIES
Psychonaut
Respected Contributor

Re: Connecting two VC Stacks

Since the VCs are not switches, without stacking links I believe the answer is yes - you'll have to go up to the switch and back down again.

 

What sort of blackout are you expecting on the 4-wide configuration?  The VC firmware updates are staged (side to side); as long as you have redundancy at the server level for your connections, you can perform a VC update without having any server outage.  Before that you can do rolling updates on the servers themselves in outage windows.

 

It doesn't make much sense to me: you want to break them apart but keep the same communication between the four?  And now, instead of keeping that traffic local, you are going to go VC > Nexus > VC.  That adds complexity where you currently don't have it.

 

Is there a different way to look at this, maybe an updated patching policy?

NMcClintock
Occasional Contributor

Re: Connecting two VC Stacks

I am in total agreement with you - splitting a 4-enclosure stack into 2 x 2-enclosure stacks with the exact same functionality you started with does sound like a strange move, but it's the only option we can see to regain control of a messy environment.

 

We are a nationwide company that services 5000+ staff from these 4 enclosures, and the nature of our business means we have a maximum outage window of 4 hours on any given Sunday morning (really not ideal, but we roll with the punches).

 

What we currently have is an HP-designed, installed and configured "best practice" configuration... in 2009. Since then HP have moved the goal posts a long, long way from what we currently have (at least that's what HP NZ are telling us). The suggested configuration is now a maximum of three enclosures in any one stack.

 

As we are an HP Mission Critical customer, it's HP's job to do things like FW upgrades to these enclosures. However, all of this came about because the last time HP did these upgrades (VC FW 3.0 > 3.18, I think) they managed to drop all connections to our SAN switches and in turn made a complete mess of around 50% of the 1000+ VMs we have.

 

They also left us with a broken VCEM domain: we can't manage VC from VCEM anymore, but the VC modules still believe they are being managed through VCEM, yet they let you configure profiles, network sets etc. from the primary module (VCM). To add insult to injury, HP are unable to give us any definitive answers on how to rectify this particular issue other than breaking the current stacking links and starting over.

 

So... although we have Matrix 7.0 installed with all the included bells and whistles like IC, VCEM, ICSD etc., we can't use any of the provisioning services or automation it provides.

 

Anyway, most of that is off topic, sorry. As I am rather new to all of this, are there any config dumps I can post so you can get a real understanding of what our current mess looks like and maybe suggest a better approach to take?

 

Also, at the link below there is quite a good stacked vs non-stacked comparison chart that would lead most readers to believe that stacking is not actually the way to go now. Your thoughts would be interesting...

 

http://stretch-cloud.info/2012/03/vc-stacking-and-non-stacking-design-comparison-visualization-speaks-loud/

 

The awesome part about all of this is that once the decision is made on how it's all going to be configured, we have to do the whole thing over again in our secondary data centre... Oh, the joys.

 

I have also added a screenshot of our VC Domain Status. It looks bad, but everything works the way it's intended to, just without the ease of management. According to HP, none of the errors shown can be resolved or removed, so we just live with them.

Psychonaut
Respected Contributor

Re: Connecting two VC Stacks

I'll be honest, I don't envy the position you are in.  That looks to be a complete mess.  Of course you have to do what is best for you.  

 

How is HP telling you to remove the links and recreate the domain; do they have a plan for you?  Do they want you to delete the domain and start over?

 

I haven't seen any documentation that suggests stopping at a 3-wide domain (I asked HP about this last summer).

 

I'd call that chart misleading, to say the least.  Those are Prasenjit's opinions and interpretations.  I could debate several of those points, but I think what it boils down to is that he doesn't trust HP with that much control.  A case can be made either way on that; it depends on whether you took the red pill or the blue pill.

 

I did a 3.51 to 3.70 upgrade in a Matrix environment (granted, much smaller than yours) and didn't have a problem at all - I saw one dropped packet on a VM, I believe.  I mapped it out (only later did I find the document that explains how the firmware update works).  All the modules are updated first and then it moves on to the restarts: it does the backup VC module, then the active one.  Then it restarts all the Bay 1 interconnects and then the Bay 2 interconnects on the other three chassis, so both sides are never down at the same time; there is about 1-2 minutes between restarts.  Total time is 40-50 minutes on a 4-wide.  I did the 3.30 to 3.51 upgrade without issue, too.  For 3.18 to 3.30 I wasn't in production yet.  Personally I have had no problems with the process in multiple 4-wide domains.
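
If you end up watching one of these staged updates from the CLI, a couple of VCM show commands run between restarts will confirm each bay has come back before the next one goes down (a rough sketch - command names are from memory, so double-check them against the VC CLI guide for your release):

# overall domain status
show domain
# per-module status - wait for the restarted bay to come back up before the next restart
show interconnect
# uplink port link status on each module
show uplinkport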

 

If you do break them out, given the history/magnitude of your problem, I would think really hard about just having each chassis in its own domain.

 

 

Hongjun Ma
Trusted Contributor

Re: Connecting two VC Stacks

Personally I'm not a big fan of stacking. In most cases the gain is not worth the risk.

 

I feel what you are asking for is called "direct connect" VC domains:

 

http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c03478426/c03478426.pdf

Page 29 bottom.

 

It's a little tricky and I rarely see customers doing that. 

 

I feel a stand-alone c7000 is simple and straightforward to operate and maintain, and I usually prefer that model.

My VC blog: http://hongjunma.wordpress.com



NMcClintock
Occasional Contributor

Re: Connecting two VC Stacks

OK, so in light of everything above, the fact that we have just been donated an additional c7000 enclosure (this will make the de-stacking a hell of a lot easier), and the fact that we have VCEM licenses, I now have another question.

 

Is there any point at all in stacking enclosures if we have VCEM installed and working correctly? What are the limitations of a VCEM environment with 4 single-enclosure domains vs one with a single 4-enclosure-wide stacked domain? Remember that we have Nexus 5000 switches available (10Gb ports).

 

I can see the point of stacking without VCEM, but with VCEM you can move profiles etc. between VC domains, across domain groups and a lot more.

 

The only downside I can see is an additional hop to the Nexus, but with an aggregated 60Gb per enclosure to the Nexus switches, 1 extra hop is not going to be noticed.

 

Or am I just dreaming of a land where everything is made of cookies and I have blue fur instead of skin?

 

 

Hongjun Ma
Trusted Contributor

Re: Connecting two VC Stacks

If you stack 4 enclosures in one VC domain, then there is no benefit to using VCEM; you only add more complexity. Remember VCEM is another software component and is based on HP SIM, so its version is something like 7.1. You usually need to maintain a compatibility table between the VCM (3.x) and VCEM (6.x.x or 7.x.x) versions.

 

Also, VCEM only operates at the profile level, and you don't have visibility at the LAN/SAN level; you have to bring a domain into maintenance mode to make any vnet/fabric change. Some people may like this model while others may not.

 

My suggestion is that if you only have 4 enclosures, then using VCEM may bring unnecessary overhead.

My VC blog: http://hongjunma.wordpress.com



Psychonaut
Respected Contributor

Re: Connecting two VC Stacks

If you are going to use Matrix you need VCEM, right?  Plus, if you are using VCEM-defined MACs and WWNs, it would be a lot easier to stick with it.  I would stick with it for now and see whether you actually move a lot of profiles between chassis and are utilizing it.  If not, then remove it in phase two or three.

 

Personally I like stacking and have had good luck with it.  I think it works great for ESX clusters with a lot of internal traffic.

 

You have not had that luck and have to tear it apart anyway.  I took a look at the directly connected VC domains section and personally would steer away from it.  If you're going that route and have the network ports, then run it all up to those Nexus switches.