02-22-2012 12:09 PM - edited 02-22-2012 12:19 PM
Redundancy in multi-enclosure VC domain with Active/Active setup.
Can someone explain to me the appropriate way to configure a fully redundant Active/Active SUS in VC when more than one enclosure is involved?
Am I missing something?
Per the latest FlexFabric Cookbook, I'm able to set up redundant Active/Active SUS with multiple VLANs.
However, when I've tried to stretch this configuration to multiple enclosures, I end up with half of my uplinks not showing correct connections.
Configuration:
2 x C7000 chassis, each with 2 HP VC FlexFabric 10Gb/24-Port Modules (Bays #1 and #2).
2 x HP 5406zl switches, each with 2 10GbE x8 SFP+ ports.
16 BL620c G7 and BL460c G7 Blades.
Various other switches, including 2910al with 10GbE modules, etc.
FCoE is configured and seems to be working properly.
For Ethernet, I've set up two SUSes (VLAN-Trunk1, VLAN-Trunk2), each with similar networks associated with them.
For example:
VLAN-Trunk1: ClusterHeartBeat (VLAN 112)
VLAN-Trunk2: ClusterHeartBeat2 (VLAN 112)
Everything works fine with a single Enclosure:
The physical connections, ports X5 and X6 on each FlexFabric 10Gb/24 module are connected to two LACP trunked ports on each HP 5406zl switch.
i.e.
encls_0:Bay1:X5 <-> 5406zl:A1
encls_0:Bay1:X6 <-> 5406zl:A2
encls_1:Bay1:X5 <-> 5406zl:B1
encls_1:Bay1:X6 <-> 5406zl:B2
STP and IRF are set up on the zl switches, and they're linked via an ISC.
I can get this same configuration to work on the second chassis, as long as they're in different VC domains and the two chassis are not stacked.
However, since my use case is to create a stretch cluster (MSCS w/ Hyper-V), I added both chassis to the same VC domain and created a stacking link between the two enclosures.
I followed the VC Multi Enclosure Stacking Guide, and connected port X1 on each FlexFabric 10Gb/24 module in the first enclosure to the same ports in the second enclosure.
i.e.
encls_0:Bay1:X1 <-> encls_1:Bay1:X1
encls_0:Bay2:X1 <-> encls_1:Bay2:X1
Doing this causes the links (X5/X6) to go from Linked/Active & Linked/Active to Linked/Standby & Linked/Standby on one of the chassis. The stacking links are working fine.
But looking at the configuration, I no longer have true redundancy to each blade. Is it not possible to have this configuration?
BTW, I'm running VC firmware v3.51 on these modules.
What is the correct way to make sure that all blades have truly redundant Active/Active links to the network?
If you need me to diagram this setup for better clarity, please let me know.
Thanks,
AO
02-22-2012 02:26 PM
Re: Redundancy in multi-enclosure VC domain with Active/Active setup.
02-22-2012 03:08 PM
Re: Redundancy in multi-enclosure VC domain with Active/Active setup.
It would help if you could provide a screen capture of your SUS summary view, or run "show uplinkset" and "show uplinkset <SUS name>" in the CLI.
The key thing is that if you want all uplinks active, assuming you have 4 VC modules with an LACP uplink bundle from each module, then you need to define 4 SUSes covering these 4 bundles, one SUS per bundle. If you have a SUS covering uplinks from two different VC modules, some links will be put in standby by VC due to its loop-control mechanism.
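As a rough sketch of that layout in the VCM CLI (the port names follow the cabling in the question; the exact command parameters here are an assumption for your firmware version, so verify with "help add uplinkset" before using them):

# One SUS per VC module, covering only that module's LACP bundle to the 5406zl pair
add uplinkset VLAN-Trunk1
add uplinkport enc0:1:X5 Uplinkset=VLAN-Trunk1
add uplinkport enc0:1:X6 Uplinkset=VLAN-Trunk1
add uplinkset VLAN-Trunk2
add uplinkport enc0:2:X5 Uplinkset=VLAN-Trunk2
add uplinkport enc0:2:X6 Uplinkset=VLAN-Trunk2
(repeat with enc1:1 and enc1:2 for two more SUSes on the second enclosure)

Since each SUS then holds ports from a single module only, VC has no module-spanning loop to break, and all four bundles can stay Linked/Active.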
For internal cluster networks like heartbeat, if the traffic doesn't need to leave the domain, you can define a network without any uplink; every blade can use this internal VC network, and it can communicate across enclosures without leaving the VC uplinks.
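For instance, a heartbeat network that never leaves the domain could be created with no uplink set assigned at all (the name is illustrative and the syntax is an assumption; check "help add network" on your firmware):

add network ClusterHeartBeat-Internal
(no Uplinkset is associated, so the network stays internal to the VC domain)

Blades in both enclosures can then reach each other on this network over the X1 stacking links without consuming, or depending on, any external uplinks.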