11-19-2012 08:49 AM
Cluster Design advice
Hi all,
I'd like to get some opinions and advice: we are looking to redesign our storage and go active/active across both of our sites.
Here is the equipment we have available, split evenly across both sites:
x4 P4300 G1 2.4TB Raw
x12 P4300 G2 3.6TB Raw
x4 P4500G2 12TB Raw
These are all currently split into pairs, giving us a total of 10 clusters, 5 at each site. The nodes are configured with hardware RAID 5 and Network RAID 10.
I believe there is a 16-node cluster limit with these units, and I do not believe you can mix nodes with different storage sizes in one cluster (could someone confirm?).
So in my mind, what we can achieve is 3 clusters:
- Cluster 1 with the x4 P4300 G1
- Cluster 2 with the x12 P4300 G2
- Cluster 3 with the x4 P4500 G2
Thoughts?
Paul
11-19-2012 09:18 AM
Re: Cluster Design advice
You are correct about the cluster size limit.
As for combining nodes: you can technically add different node types to the same cluster, but it is highly inadvisable, since the usable space for each node will be limited to the capacity of the smallest node. Adding a 2.4TB node to a cluster with a 12TB node means each 12TB node instantly loses over 9TB of useful space! Also, the cluster will only run as fast as the slowest node, so adding NL-SAS or SATA to a 15k RPM cluster will slow the whole cluster down to the speed of those 7,200 RPM drives. The result is that adding mismatched nodes really makes everything worse, even if it technically can work.
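A rough sketch of the capacity math above, for anyone following along (the per-node capacities are taken from this thread; the calculation is just the "smallest node wins" rule described in the previous paragraph):

```python
def cluster_usable_raw(node_capacities_tb):
    """In a StoreVirtual cluster, each node can only contribute as much
    capacity as the smallest node in the cluster."""
    return min(node_capacities_tb) * len(node_capacities_tb)

# Matched cluster: four 12 TB P4500 G2 nodes
print(cluster_usable_raw([12, 12, 12, 12]))       # 48 TB raw

# Mismatched cluster: the same four nodes plus one 2.4 TB P4300 G1
print(cluster_usable_raw([12, 12, 12, 12, 2.4]))  # 12.0 TB raw
# Each 12 TB node now contributes only 2.4 TB, losing 9.6 TB apiece.
```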
If you want an active/active setup across two sites with everything active on both sites, you should set up three clusters with half of each cluster's nodes at each site. Each cluster should be made up of a single model type, so you have one cluster of P4300s, one of P4300 G2s, and one of P4500s.
If you don't need everything active across both sites, I would probably configure things slightly differently, but only if you could use remote snapshot replication instead of a true dual active/active design.
Side note: read up on the best-practice rules for active/active. Since the above is more of a 101-level question, the odds that you currently have dual active sites set up correctly are probably slim. The #1 question to answer for dual active/active is: where is your FOM? If it sits at one of your two active sites, ask yourself what happens when you lose that site... the answer is you lose quorum and everything stops! You had better have a third site if you need a seamless active/active setup.
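To make the quorum point concrete, here is a small illustrative sketch. The manager counts are hypothetical; the only rule modeled is that a management group keeps quorum while a strict majority of manager votes (including the FOM's) is still reachable:

```python
def has_quorum(surviving_votes, total_votes):
    """Quorum survives only while a strict majority of votes remains."""
    return surviving_votes > total_votes // 2

# Hypothetical group: 2 manager nodes per site + 1 FOM = 5 votes total.
TOTAL = 5

# FOM placed at site A; losing site A takes 2 managers + the FOM:
print(has_quorum(TOTAL - 3, TOTAL))  # False -> quorum lost, everything stops

# FOM placed at an independent third site; losing site A takes 2 managers:
print(has_quorum(TOTAL - 2, TOTAL))  # True -> site B keeps running
```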
11-20-2012 03:21 AM
Re: Cluster Design advice
16 nodes per cluster is the maximum supported cluster size; however, best practice is not to go above 10 nodes per cluster. The reason is that the more nodes in a cluster, the more data needs to be moved/synced between nodes, and in-cluster bandwidth use starts to interfere with data access (and fetching data blocks over TCP adds additional latency). So I would advise putting the P4300 G2s into 2 clusters.
Another reason (from my practical observations) not to grow clusters above 8-10 nodes is upgrades. Patch installation (if there is no possibility of disconnecting volumes) can take ages. For example, a patch requiring a node reboot takes about 8-15 minutes per node (~5 minutes for the system reboot plus volume resync). If your maintenance windows are short, it might be a problem to install all upgrades. For the same reason I would consider splitting all systems into 2 or 3 management groups, so you can control when each cluster gets upgraded.
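As a back-of-the-envelope check on the upgrade timing above (node counts and per-node minutes are the figures from this thread; the sketch simply assumes nodes are patched one at a time, so the times add up):

```python
def rolling_upgrade_minutes(node_count, minutes_per_node):
    """Nodes are patched sequentially (each must reboot and resync
    before the next starts), so total time grows linearly."""
    return node_count * minutes_per_node

# A 12-node cluster at 8-15 minutes per node:
print(rolling_upgrade_minutes(12, 8))   # 96 minutes, best case
print(rolling_upgrade_minutes(12, 15))  # 180 minutes, worst case
# A 4-hour (240-minute) window leaves little headroom in the worst case.
```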
And, as was already mentioned, don't forget to install the FOM at a 3rd site. It can be a simple microserver on a standard UPS, running separate FOMs for all your management groups. Bandwidth usage between the FOM and the nodes is minimal.
Gediminas
11-23-2012 01:42 AM
Re: Cluster Design advice
@Oikjn
All nodes have 15k SAS drives.
It seems we agree on the 3-cluster approach; I think ultimately this is what we will end up with.
FOM: currently we have none (on HP's advice), but we do have 2 separate sites. This is something we are going to need to reconsider, as active/active is where we wish to end up.
@Gediminas
We are planning to upgrade our current 1Gb link between the sites to a pair of 10Gb links, for the increased bandwidth and to remove our single point of failure. So I would hope this bandwidth is adequate for our storage traffic?
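One way to sanity-check that link upgrade: with Network RAID 10 split across sites, every write to a volume is also mirrored to a node at the other site, so peak write throughput should fit comfortably inside one link. The write rate and protocol-overhead factor below are hypothetical placeholders, not measurements from this thread:

```python
def link_utilisation(write_mb_per_s, link_gbit, overhead=1.2):
    """Fraction of one inter-site link consumed by mirror writes.
    'overhead' is an assumed factor for TCP/iSCSI protocol overhead."""
    link_mb_per_s = link_gbit * 1000 / 8  # Gbit/s -> MB/s (decimal units)
    return write_mb_per_s * overhead / link_mb_per_s

# Hypothetical 400 MB/s aggregate peak writes over a single 10Gb link:
print(f"{link_utilisation(400, 10):.0%}")  # 38%
```

Staying well under 100% on a single link also matters because the pair is there for redundancy: after a link failure, all traffic must fit on the surviving link.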
Yes, upgrades are something else I hadn't considered. We have a 4-hour maintenance window, and it seems like this could be a bit tight for completing any big upgrades, so I may take your management group design into consideration for this purpose.
Thank you both for your input; it has been very useful. We are at an early stage of this project, but your comments will definitely have an impact on the final design.