- blades, clustering and fault tolerance
10-05-2010 09:43 AM
I have a couple of BL460c blades which will be a two-node cluster. The blades are in the same enclosure. I wish to maintain fault tolerance on these blades. I am concerned about the network setup. Both Virtual Connect switches (modules 1 and 2) are plugged in to an equivalent trunk port. An Ethernet network has been defined called VLAN100. This uses a shared uplink set, which is an active/standby setup, between modules 1 and 2.
The blades have NIC1 on VLAN 100, with the private cluster network on NIC2. What happens if Virtual Connect module 1 fails? Would this mean the blades would lose VLAN 100 on NIC1 and therefore not be fault tolerant?
On non-cluster servers, I get around this by setting up adapter teaming, with both NICs pointing to VLAN 100. The reasoning being that if module 1 fails, adapter teaming will use NIC2 and module 2 to reach the same VLAN.
On a cluster, because one NIC is in use for the private network, I don't see how to maintain fault tolerance.
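To make the concern concrete, here is a minimal sketch (plain Python, not any HPE tool) of the two-NIC layout described above. The NIC-to-module mapping mirrors the post; all names are illustrative only:

```python
# Hypothetical model: which networks can a blade still reach after a
# Virtual Connect module failure, given one NIC per network?

# NIC name -> (interconnect module it maps to, network it carries)
nic_map = {
    "NIC1": ("module1", "VLAN100"),
    "NIC2": ("module2", "cluster_private"),
}

def reachable_networks(failed_module):
    """Return the set of networks still reachable if `failed_module` is down."""
    return {net for nic, (module, net) in nic_map.items() if module != failed_module}

print(reachable_networks("module1"))  # {'cluster_private'} -> VLAN 100 is lost
print(reachable_networks("module2"))  # {'VLAN100'}         -> heartbeat is lost
```

With only one NIC per network, losing either Virtual Connect module takes one of the two networks with it, which is exactly the gap the question raises.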
3 Replies
10-05-2010 09:58 AM
Solution
What type of enclosure?
What other interconnect modules do you have installed in the enclosure?
Normally you would require at least two extra NICs to get your cluster fault tolerance.
Team LOM1 and LOM2 and have them both using the main network (VLAN_100, I presume).
LOM1 maps to IC Bay 1 and LOM2 maps to IC Bay 2.
Add a dual-NIC mezzanine card in the appropriate slot on your blades. (Note: you will need additional Ethernet modules for these NICs to map to. They can't map to IC1 or IC2; the actual bays will depend on the enclosure type and mezzanine slot used.)
Since both of your cluster blades are in the same enclosure, set up a "heartbeat" VLAN but do not assign any uplink ports (note: do NOT specify a private network, as this would defeat the object). Team your two additional NIC ports and assign them to the heartbeat VLAN. They will talk to each other across the backplane.
Now you have both VLANs redundant.
HTH
Dave.
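A minimal sketch extending the same toy model to the four-NIC layout Dave describes. The bay numbers used for the mezzanine NICs are an assumption (as he notes, the real bays depend on the enclosure type and mezzanine slot), and the team and network names are illustrative only:

```python
# Hypothetical model: two teamed NICs per network, each team split across
# two different interconnect bays.

# team/network name -> list of (NIC, interconnect bay) members
teams = {
    "VLAN_100":  [("LOM1", "bay1"), ("LOM2", "bay2")],
    "Heartbeat": [("MezzNIC1", "bay3"), ("MezzNIC2", "bay4")],
}

def surviving_networks(failed_bay):
    """A teamed network survives if at least one member avoids the failed bay."""
    return {net for net, members in teams.items()
            if any(bay != failed_bay for _, bay in members)}

for bay in ("bay1", "bay2", "bay3", "bay4"):
    print(bay, "down ->", sorted(surviving_networks(bay)))
# Every single-bay failure still leaves both VLAN_100 and Heartbeat reachable.
```

Because each network now has team members behind two different bays, no single module failure removes either the main VLAN or the heartbeat, which is the redundancy the original poster was after.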
10-05-2010 05:05 PM
Re: blades, clustering and fault tolerance
If you don't have a standby NIC for Serviceguard, you can put a heartbeat on multiple subnets. As long as one gets through, the cluster is OK.
You could also cluster with nodes in different enclosures.
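A rough illustration of that "as long as one gets through" idea, as a standalone sketch rather than anything Serviceguard itself runs. The peer addresses are made up, and a Linux-style ping command is assumed:

```python
# Hypothetical check: the heartbeat is considered healthy if the peer node
# answers on at least one of the heartbeat subnets.
import subprocess

HEARTBEAT_PEERS = ["10.0.100.12", "192.168.50.12"]  # one peer address per heartbeat subnet

def path_alive(address, timeout_s=1):
    """Send a single ping; return True if the peer answered on that subnet."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), address],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

alive = [peer for peer in HEARTBEAT_PEERS if path_alive(peer)]
print("heartbeat OK" if alive else "heartbeat LOST", "- reachable via:", alive)
```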
10-12-2010 01:05 AM
Re: blades, clustering and fault tolerance
.