07-16-2010 06:19 AM
Clustered Blades losing connectivity / Virtual Connect set-up / NIC teaming
Gary had a customer question when clustering servers and working with Virtual Connect:
************************************************************************************
I am not a VC expert and received the following summary from a customer.
The customer is losing blade connectivity after any network event, and it seems that their redundancy is not set up correctly.
How do we/they ensure that teaming and VC are configured properly?
One of the teamed NICs appears to be dropping its connection. It's always Team 1 NIC 2, which corresponds to VC Bay 2-a.
***************************************************************************
Brian was looking to help the situation:
***********************************************************************
Well, I would first unteam the NICs and give them both IPs, then verify that both NICs can in fact ping the default gateway by disabling each one in turn, pinging, and re-enabling. This will verify that the uplinks are indeed working and will support a teamed solution.
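The unteam-and-ping check described above might look something like this from an elevated Windows command prompt. This is only a sketch: the interface names ("Local Area Connection", "Local Area Connection 2") and the gateway address 10.1.1.1 are placeholders, not values from this thread.

```shell
:: Assumes Windows; interface names and gateway IP below are placeholders.
:: First break the team in NCU and assign each NIC its own static IP.

:: 1. Disable NIC 2 so only NIC 1 carries traffic, then test NIC 1's path.
netsh interface set interface "Local Area Connection 2" admin=disabled
ping -n 4 10.1.1.1

:: 2. Swap: re-enable NIC 2, disable NIC 1, and test NIC 2's path.
netsh interface set interface "Local Area Connection 2" admin=enabled
netsh interface set interface "Local Area Connection" admin=disabled
ping -n 4 10.1.1.1

:: 3. Re-enable NIC 1 before rebuilding the team in NCU.
netsh interface set interface "Local Area Connection" admin=enabled
```

If either ping fails while the other NIC is disabled, the problem is in that NIC's uplink path (here, VC Bay 2-a for the failing member), not in the teaming configuration itself.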
If both NICs work, then you may have success by disabling the heartbeat packets on the team, as I have seen some switches require specific configuration to support the heartbeat packets. When the heartbeat packets aren't working properly, the team will down a specific connection, because the primary NIC expects to see HB packets from the secondary NIC; if the switch isn't passing them, the team will disable that member. You can see heartbeats transmitted and received in the statistics on the team in NCU. By disabling the HB packets you will be relying solely on link state for failure detection, so you need to ensure you are also using SmartLink on the vNet. With Flex-10, that means you have to make sure the DCC firmware and drivers are at appropriate revisions.
I am assuming this is Windows. Can you verify the OS?
************************************************************************************************
Are you using clustering in your environment? Any issues with using Virtual Connect in your environment?