HP Blade cluster
HPE EVA Storage
08-09-2006 05:09 AM
Hi,
I have three blade enclosures with 10 BL25s (5 clusters) attached to an EVA6000.
My questions are:
Would it be good practice to install one node of each cluster in a separate enclosure?
Is there a document that describes the best practice for installing cluster nodes in separate enclosures?
What problems could arise from installing the cluster members in the same enclosure?
Thank you
rperez
3 REPLIES
08-09-2006 05:29 AM
Re: HP Blade cluster
> What problems could arise from installing the cluster members in the same enclosure?
Both members will be down when you have to swap the backplane. Or imagine you have to work on the cabinet's power cabling.
08-09-2006 10:23 AM
Re: HP Blade cluster
Uwe,
Is there a best practice document for Blade Servers?
W.S
rperez
08-09-2006 06:30 PM
Solution
William,
Common thinking says: share as little hardware and as few environmental conditions as possible between cluster nodes (avoid single points of failure), and create multiple paths for your cluster to reach the resources around it (redundancy). That way, one defective piece of shared hardware cannot bring down your cluster and its services.
I have no experience with blade hardware, but plenty with MC/ServiceGuard and HP 9000, so I would put the cluster nodes into different enclosures, on different mains, with redundant power supplies, redundant SAN switches, redundant LAN switches, etc.
Build the environment around your cluster so that no trivial part or condition can bring the whole cluster down. Give your cluster another option if something fails.
Think disaster, be a little paranoid, and keep it within budget (that's the hard part).
Success!
Klaas Eenkhoorn
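To make the anti-affinity idea above concrete, here is a minimal sketch in Python (all enclosure and node names are hypothetical, and this is not an HP tool) that spreads each two-node cluster across different enclosures and flags any cluster whose members would share one:

from itertools import cycle

# Hypothetical inventory matching the question: 3 enclosures, 5 two-node clusters.
enclosures = ["enc1", "enc2", "enc3"]
clusters = {f"cluster{i}": (f"node{i}a", f"node{i}b") for i in range(1, 6)}

# Anti-affinity placement: node A round-robins over the enclosures,
# node B goes to any enclosure other than node A's.
placement = {}
ring = cycle(enclosures)
for name, (node_a, node_b) in clusters.items():
    enc_a = next(ring)
    enc_b = next(e for e in enclosures if e != enc_a)
    placement[node_a] = enc_a
    placement[node_b] = enc_b

# Sanity check: an enclosure holding both nodes of a cluster is a single
# point of failure (a backplane swap or power work takes both nodes down).
for name, (node_a, node_b) in clusters.items():
    if placement[node_a] == placement[node_b]:
        print(f"SPOF: {name} has both nodes in {placement[node_a]}")
    else:
        print(f"{name}: {node_a} in {placement[node_a]}, {node_b} in {placement[node_b]}")

The same check applies whatever tooling you use: enumerate each cluster's members, look up their enclosure, and treat any shared enclosure as a single point of failure.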