BladeSystem - General
Flex-10 Shared Uplink Sets with two enclosures and Nexus switches
04-26-2010 07:29 PM
I am working on a design for a customer that consists of two c7000 enclosures, two Flex-10 modules in each chassis, and two Nexus 5000 switches, using BL490c G6 blades. Each chassis will run a mix of vSphere and Windows Server 2008 R2 blades.
My question is about the best design for the Shared Uplink Sets (SUS). I want an active/active configuration, and the Nexus switches can form an LACP group across the two switches.
I will follow the April 2010 stacking guide for two enclosures: they will be linked and use one VC domain. I am also using mapped VLAN mode. All uplinks from a given interconnect module must be dedicated to either production data or test/dev.
Each blade server will have eight NICs, arranged as four redundant pairs of networks (for example, vMotion1 and vMotion2), with each member of a pair on a different SUS.
Uplinks are:
Production_Network
Chassis 1, Bay 1, Ports X3 and X4
Chassis 2, Bay 2, Ports X3 and X4
Test/Dev
Chassis 1, Bay 2, Ports X3 and X4
Chassis 2, Bay 1, Ports X3 and X4
To support an active/active configuration using LACP and the Nexus switches, what should the SUS groups look like?
Option 1:
SUS_Prod_1: Chassis 1, Bay 1, X3 & X4
SUS_Prod_2: Chassis 2, Bay 2, X3 & X4
SUS_Test_1: Chassis 1, Bay 2, X3 & X4
SUS_Test_2: Chassis 2, Bay 1, X3 & X4
Or Option 2:
SUS_Prod_1: Chassis 1, Bay 1, X3; Chassis 2, Bay 2, X3
SUS_Prod_2: Chassis 1, Bay 1, X4; Chassis 2, Bay 2, X4
SUS_Test_1: Chassis 1, Bay 2, X3; Chassis 2, Bay 1, X3
SUS_Test_2: Chassis 1, Bay 2, X4; Chassis 2, Bay 1, X4
I'm thinking a SUS should be composed only of uplinks from a single interconnect module, so Option 1. I'd then assign the four NICs on LOM1 to SUS_xxx_1 and the four NICs on LOM2 to SUS_xxx_2.
Thoughts?
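The key difference between the two options can be expressed as a small check: Option 1 keeps every uplink in a SUS on a single interconnect module (which is what allows VC to channel them with LACP), while Option 2 spreads each SUS across two modules. A minimal Python sketch of that check (the tuples are illustrative, not VC CLI syntax):

```python
# Each uplink is (chassis, bay, port); a SUS is a named list of uplinks.
option1 = {
    "SUS_Prod_1": [(1, 1, "X3"), (1, 1, "X4")],
    "SUS_Prod_2": [(2, 2, "X3"), (2, 2, "X4")],
    "SUS_Test_1": [(1, 2, "X3"), (1, 2, "X4")],
    "SUS_Test_2": [(2, 1, "X3"), (2, 1, "X4")],
}
option2 = {
    "SUS_Prod_1": [(1, 1, "X3"), (2, 2, "X3")],
    "SUS_Prod_2": [(1, 1, "X4"), (2, 2, "X4")],
    "SUS_Test_1": [(1, 2, "X3"), (2, 1, "X3")],
    "SUS_Test_2": [(1, 2, "X4"), (2, 1, "X4")],
}

def lacp_capable(sus_map):
    """VC can only LACP-channel uplinks that leave the same interconnect
    module, i.e. ports sharing one (chassis, bay) pair."""
    return {name: len({(c, b) for c, b, _ in ports}) == 1
            for name, ports in sus_map.items()}

print(lacp_capable(option1))  # every SUS True: each gets one 2x10Gb channel
print(lacp_capable(option2))  # every SUS False: its two ports cannot channel
```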
Solved! Go to Solution.
2 REPLIES
04-27-2010 04:28 AM
Solution
I would select Option 1, as it gives you more bandwidth in and out of each enclosure. In VC, uplinks from the same module can be channeled together with LACP, so each of your SUSes would have 20Gb of bandwidth instead of 10Gb. You can then connect one of those two uplinks to Nexus_1 and the other to Nexus_2 and use virtual Port Channeling (vPC).
Just note that, for example, traffic from the LOM1s on blades in Chassis 2 would traverse the 10Gb stacking link between enclosures to reach the active uplinks of SUS_Prod_1. You might consider doubling the stacking links between enclosures so you have 20Gb of stacking bandwidth instead of 10Gb: for example, two stacking links between Chassis 1, Bay 1 and Chassis 2, Bay 1 (a 20Gb channel), and two between Chassis 1, Bay 2 and Chassis 2, Bay 2.
As long as the servers have a fault-tolerant network setup, there is no single point of failure here.
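The no-single-point-of-failure claim can be sanity-checked the same way: with LOM1 NICs mapped to the _1 uplink sets and LOM2 NICs to the _2 sets, failing any one interconnect module still leaves every blade a path for both Production and Test/Dev. A rough sketch (the mapping below restates the Option 1 design; it is illustrative, not configuration syntax):

```python
# Which interconnect module (chassis, bay) carries each SUS's uplinks.
# LOM1 NICs use the _1 sets, LOM2 NICs use the _2 sets, on every blade.
sus_module = {
    "SUS_Prod_1": (1, 1), "SUS_Prod_2": (2, 2),
    "SUS_Test_1": (1, 2), "SUS_Test_2": (2, 1),
}

def survives(failed_module):
    """True if every network class keeps at least one SUS whose uplinks
    do not depend on the failed interconnect module."""
    for net in ("Prod", "Test"):
        alive = [s for s, m in sus_module.items()
                 if net in s and m != failed_module]
        if not alive:
            return False
    return True

modules = {(1, 1), (1, 2), (2, 1), (2, 2)}
print(all(survives(m) for m in modules))  # True: no module is a SPOF
```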
04-27-2010 04:48 PM
Re: Flex-10 Shared Uplink Sets with two enclosures and Nexus switches
Thanks! Yes, I was planning on dual stacking links between the enclosures.
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP