Community Home > Networking > Switching and Routing > Comware Based > Need to setup redundant config with 2 separate dat...
03-10-2016 01:11 AM
Need to setup redundant config with 2 separate datacenters on same site
Hi guys,
I'm working on a setup for us: we have 2 server rooms, each with 2 ESX servers and StoreVirtual storage connected. iSCSI traffic runs on a separate network, not on the production network.
The customer wants a network with a redundant setup within each server room and across the 2 server rooms. At first I was thinking of one IRF setup spanning both server rooms, but the customer doesn't feel comfortable with 4 switches in 1 IRF stack. If possible he would like 4 separate switches combined to provide redundant layer 2 traffic. All L3 traffic is handled by a firewall cluster, so the core switches only perform L2 switching.
In that case I was thinking of a TRILL config with 4x 5900 switches. At the moment they have a core built from 4x 2900 series switches, and the access switches are 25x0 series. I'm still worried about the setup and about connecting the 25x0 access switches and all the other devices attached to the core switches. I want to keep it as simple as possible, but it must be as redundant as possible. They had a problem earlier with a ProVision stack (3800) after a firmware upgrade.
03-14-2016 07:28 PM
Re: Need to setup redundant config with 2 separate datacenters on same site
One IRF stack consisting of a pair of 5900s on each site seems reasonable.
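As a rough sketch, forming a two-member IRF from a pair of 5900s per site could look like the following (member numbers, priorities and the 10GbE port names are assumptions for your hardware; the second switch's ports become 2/0/x after renumbering and rebooting):

```
# On the switch that should become member 2 (the other keeps member ID 1):
<sw2> system-view
[sw2] irf member 1 renumber 2
[sw2] quit
<sw2> reboot

# After the reboot, bind physical ports to IRF ports on each switch:
[sw1] irf member 1 priority 32        # higher priority = preferred master
[sw1] irf-port 1/1
[sw1-irf-port1/1] port group interface Ten-GigabitEthernet 1/0/49
[sw1-irf-port1/1] quit
[sw1] save
[sw1] irf-port-configuration active

[sw2] irf-port 2/2
[sw2-irf-port2/2] port group interface Ten-GigabitEthernet 2/0/49
[sw2-irf-port2/2] quit
[sw2] save
[sw2] irf-port-configuration active
```

Once the IRF links are cabled and active, the pair manages as a single switch per site.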
I'm not perfectly clear on whether the 5900s are set to replace the 2900s..?
The cross-site connection between the two IRF stacks of 5900s should consist of at least one 10Gb connection per physical switch, with all links configured as a LACP trunk.
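For that cross-site trunk, a dynamic (LACP) bridge aggregation with one member port on each physical switch might look like this on one site's stack (aggregation number, interface names and the permit-all VLAN policy are assumptions; mirror it on the other site):

```
[core-a] interface Bridge-Aggregation 10
[core-a-Bridge-Aggregation10] link-aggregation mode dynamic
[core-a-Bridge-Aggregation10] quit
# One 10GbE member per IRF member, so the trunk survives a chassis failure
[core-a] interface Ten-GigabitEthernet 1/0/50
[core-a-Ten-GigabitEthernet1/0/50] port link-aggregation group 10
[core-a] interface Ten-GigabitEthernet 2/0/50
[core-a-Ten-GigabitEthernet2/0/50] port link-aggregation group 10
[core-a] interface Bridge-Aggregation 10
[core-a-Bridge-Aggregation10] port link-type trunk
[core-a-Bridge-Aggregation10] port trunk permit vlan all
```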
Access switches should be patched to both their local 5900 physical switches, with both links configured as a LACP trunk.
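Each access-switch uplink pair can be terminated the same way on the 5900 side: a small bridge aggregation with one member port on each IRF member (port and aggregation numbers assumed):

```
[core-a] interface Bridge-Aggregation 21
[core-a-Bridge-Aggregation21] link-aggregation mode dynamic
[core-a-Bridge-Aggregation21] port link-type trunk
[core-a-Bridge-Aggregation21] port trunk permit vlan all
[core-a-Bridge-Aggregation21] quit
# One uplink per IRF member, so the access switch survives a core member failure
[core-a] interface Ten-GigabitEthernet 1/0/1
[core-a-Ten-GigabitEthernet1/0/1] port link-aggregation group 21
[core-a] interface Ten-GigabitEthernet 2/0/1
[core-a-Ten-GigabitEthernet2/0/1] port link-aggregation group 21
```

The 25x0 access side needs a matching LACP trunk on its two uplink ports for the aggregation to come up.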
Critical Access switches (e.g. end-of-row/top-of-rack switches) could have additional uplinks from the cross-site core. I don't think the 5900s support Distributed Trunking like the 5400s do, so you would need to make sure spanning tree was set up properly.
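Where an access switch gets extra uplinks to the remote core, spanning tree is what breaks the resulting loop. On the Comware cores, enabling RSTP and pinning the root is short (the access switches must run a compatible spanning-tree mode):

```
[core-a] stp mode rstp
[core-a] stp global enable
# Make one core stack the preferred root bridge
[core-a] stp root primary
```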
Your failure scenarios are:
- single cross-site link --> mitigated by having multiple cross-site links
- all cross-site links --> mitigated by having multiple WAN/internet links, diversely presented to each datacentre
- single 5900 --> mitigated by having multiple 5900s in an IRF stack, with all connections to the stack (network and server) being diversely patched to both stack members
- one 5900 stack --> mitigated for critical Access switches by having additional cross-site links for Access to Remote Core
- one 2500 uplink --> mitigated by having multiple uplinks to all Access switches
- one 2500 switch --> mitigated for datacentre switches and critical devices by having them diversely patched to multiple 2530s.
- one datacentre --> mitigated by your DR & BC plans.
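One more IRF failure mode worth planning for is a split stack: both members alive but all IRF links down, leaving two active "masters". Comware's multi-active detection (MAD) handles this; BFD MAD uses a dedicated link between the members, while LACP MAD can ride on an aggregation to an MAD-capable neighbor. A BFD MAD sketch (the VLAN, port numbers and addresses are assumptions):

```
[core-a] vlan 999
[core-a-vlan999] quit
[core-a] interface Vlan-interface 999
[core-a-Vlan-interface999] mad bfd enable
[core-a-Vlan-interface999] mad ip address 192.168.99.1 24 member 1
[core-a-Vlan-interface999] mad ip address 192.168.99.2 24 member 2
[core-a-Vlan-interface999] quit
# A dedicated cable between the two members carries the MAD VLAN
[core-a] interface Ten-GigabitEthernet 1/0/48
[core-a-Ten-GigabitEthernet1/0/48] port access vlan 999
[core-a] interface Ten-GigabitEthernet 2/0/48
[core-a-Ten-GigabitEthernet2/0/48] port access vlan 999
```

On a split, MAD shuts down the ports of one member so only one stack half keeps forwarding.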