05-03-2012 02:19 PM
Virtual Connect (VC) Flex-10 configuration "best practice" request
Charlie was looking for some advice on a configuration:
*****************
I have a customer that’s beginning a proof-of-concept test prior to a large MS Exchange 2010 deployment. They’re using BL465c G7 servers in c7000 enclosures, attached via SAS mezzanine cards, through SAS switches (in the c7000) to MDS600s. They are creating three copies of the data, so they have replication traffic that they want to keep separate from their primary data traffic. Microsoft does not recommend NIC teaming on the NIC ports associated with replication, so we need to figure out a hardware-based failover for this.

The current thought is to connect an extra uplink per VC Flex-10 module and create a second Shared Uplink Set (SUS) on each VC module for replication traffic only. One of the NIC ports on the host would then connect through this SUS on the first VC module for the replication data. In the event that the uplink or the first VC module fails, the assumption is that the cross-connect between the VC modules will carry the traffic through to the second (redundant) VC Flex-10 and use its replication SUS to continue handling the replication data.

Anyone doing something like this, or have another recommendation? How are you handling the MS replication data failover without NIC teaming?
****************
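For reference, here is a rough sketch of what the extra replication Shared Uplink Set might look like in the Virtual Connect Manager CLI. Treat it as an illustration only: the uplink set, network, and profile names, the X3 uplink port, and the VLAN ID are all invented for the example, and the exact command parameters should be checked against the VC CLI user guide for the firmware release in use.

    add uplinkset Repl-SUS-Bay1
    add uplinkport enc0:1:X3 uplinkset=Repl-SUS-Bay1 speed=auto
    add network Repl-Net-A uplinkset=Repl-SUS-Bay1 vlanid=100
    add enet-connection ExchProfile1 network=Repl-Net-A

The first two commands create the dedicated uplink set on the module in bay 1 and give it its own physical uplink; the third defines the replication network carried by that uplink set; the last maps an un-teamed FlexNIC in the server profile to that network. The same steps would be repeated on the module in bay 2 with its own uplink, uplink set, and network.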
Input from Chris:
****************
The problem you are going to run into is that if the I/O module goes offline (either because it fails or because you are performing a firmware update on it), you will lose network redundancy anyway. Yes, Microsoft doesn’t recommend NIC teaming, but I have deployed many MS clusters with teamed NICs in the past. I wouldn’t recommend multi-homing a server, as that can have even more unpredictable results than NIC teaming.
This is straight from the Exchange 2010 Planning for High Availability and Site Resilience article in the TechNet Library (http://technet.microsoft.com/en-us/library/dd638104.aspx#NR):
“Additional Replication networks can be added, as needed. You can also prevent an individual network adapter from being a single point of failure by using network adapter teaming or similar technology. However, even when using teaming, this does not prevent the network itself from being a single point of failure.”
************
Other comments or suggestions?