StoreVirtual Storage

Cluster Design advice

Occasional Contributor

Cluster Design advice

Hi all,


I was wanting to get some opinions and advice. We are looking to redesign our storage and go active/active across both our sites.


Here is the equipment that we have available, evenly split across both sites:


x4 P4300 G1 2.4TB Raw

x12 P4300 G2 3.6TB Raw

x4 P4500 G2 12TB Raw


These are all currently split into pairs, giving us a total of 10 clusters, 5 at each site. The nodes are configured as hardware RAID 5 and Network RAID 10.


I believe that there is a 16-node cluster limit with these units, and I do not believe you can mix nodes with different storage sizes in one cluster (could someone confirm?).


So in my mind what we can achieve is 3 clusters:

  1. Cluster 1 with the x4 P4300 G1
  2. Cluster 2 with the x12 P4300 G2
  3. Cluster 3 with the x4 P4500 G2







Honored Contributor

Re: Cluster Design advice

You are correct about the cluster size limit.


As for combining nodes: you can technically add different node types to the same cluster, but it is highly inadvisable. The usable space on each node is limited to the capacity of the smallest node, so adding a 2.4TB node to a cluster with 12TB nodes means each 12TB node instantly loses about 9.6TB of useful space! Also, the cluster will only run as FAST as the slowest node, so adding NL-SAS or SATA nodes to a 15K RPM cluster will slow the whole cluster down to the speed of those 7200 RPM drives. The result is that mixing mismatched nodes really makes everything worse, even if it technically can work.
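To make the capacity penalty concrete, here is a minimal sketch of the arithmetic, assuming (as described above) that each node's usable contribution is capped at the capacity of the smallest node in the cluster:

```python
# Illustration of the mixed-node capacity penalty described above.
# Assumption: each node contributes only as much usable space as the
# smallest node in the cluster.

def usable_cluster_capacity(node_capacities_tb):
    """Per-node contribution is capped at the smallest node's capacity."""
    smallest = min(node_capacities_tb)
    return smallest * len(node_capacities_tb)

# A 12TB node clustered with a 2.4TB node: each contributes only 2.4TB.
mixed = usable_cluster_capacity([12.0, 2.4])   # 4.8 TB usable
separate = 12.0 + 2.4                          # 14.4 TB if kept in separate clusters
print(f"mixed cluster: {mixed:.1f} TB, wasted: {separate - mixed:.1f} TB")
```

The same capping logic explains why a single-model cluster wastes nothing: every node already matches the smallest.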


If you want an active/active setup across two sites with EVERYTHING active on both sites, you should set up three clusters with half of each cluster's nodes at each site. Each cluster should be made up of a single model type, so you have one cluster of P4300s, one of P4300 G2s, and one of P4500s.


If you don't need ACTIVE for all clusters across all sites, I would probably configure things slightly differently, but that would only work if you could use remote snapshot replication instead of true dual active/active.


Side note: read up on the best-practice rules for active/active. Since the above question is more of a 101-level question, the odds that you have dual active sites set up correctly are probably slim. The #1 question to answer for dual active/active is: where is your FOM (Failover Manager)? If you have it at one of your two active sites, ask yourself what will happen when you lose that site... the answer is you lose quorum and everything stops! You had better have a third site if you need a seamless A/A setup.
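The quorum point above can be sketched with a simplified majority-vote model (an illustration only, not the actual FOM implementation; the manager counts are hypothetical):

```python
# Simplified quorum model: a management group keeps quorum only while a
# strict majority of its managers (node managers plus the FOM) is reachable.

def has_quorum(reachable_managers, total_managers):
    return reachable_managers > total_managers // 2

# Example: 2 node managers per site plus a FOM = 5 managers total.
total = 5

# FOM placed at site A; losing site A removes 2 node managers AND the FOM.
print(has_quorum(total - 3, total))  # quorum lost -> everything stops

# FOM at an independent third site; losing site A removes only 2 managers.
print(has_quorum(total - 2, total))  # quorum survives -> the other site keeps running
```

This is why the FOM belongs at a third site: it must survive the loss of either active site for the remaining site to keep a majority.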

Gediminas Vilutis
Frequent Advisor

Re: Cluster Design advice


16 nodes is the maximum supported cluster size; however, best practice is not to go above 10 nodes per cluster. The reason is that the more nodes in a cluster, the more data needs to be moved/synced between nodes, and in-cluster bandwidth use starts to interfere with data access (and fetching data blocks over TCP adds additional latency). So I would advise splitting the P4300 G2s into two clusters.



Another reason (from my practical observations) not to grow clusters above 8-10 nodes is upgrades. Patch installation (if there is no possibility to disconnect volumes) can take ages. E.g. a patch requiring a node reboot takes about 8-15 minutes per node (~5 minutes for the system reboot, plus volume resync). If your maintenance windows are short, it might be a problem to install all upgrades. For the same reason I would consider splitting all systems into 2 or 3 management groups - that gives you the ability to control when to upgrade each cluster.
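A quick back-of-the-envelope check of how those per-node times add up, assuming (as above) nodes are patched one at a time so volumes stay online, and using the worst-case 15 minutes per node:

```python
# Rough rolling-upgrade math for the 8-15 minutes-per-node figure above.
# Assumption: nodes are patched sequentially so volumes stay online.

def rolling_upgrade_minutes(nodes, per_node_minutes):
    return nodes * per_node_minutes

for nodes in (4, 8, 12):
    worst = rolling_upgrade_minutes(nodes, 15)
    print(f"{nodes:>2} nodes: up to {worst} min ({worst / 60:.1f} h)")
```

A 12-node cluster at 15 minutes per node already consumes 3 of a 4-hour window, which is why splitting into smaller clusters and separate management groups eases maintenance.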


And, as was already mentioned, don't forget about installing the FOM at a third site. It can be a simple microserver on a standard UPS, running separate FOMs for all your management groups. Bandwidth usage between the FOM and the nodes is minimal.



Occasional Contributor

Re: Cluster Design advice

Thank you both for your feedback. I currently have our reseller talking to HP as well, who interestingly have said that 32 nodes is the maximum per cluster? Not that it affects my situation.

@Oikjn
All nodes have 15K SAS drives.
It seems we agree on the 3-cluster approach; I think this is ultimately what we will end up with.
FOM: currently we have none (on HP's advice), but we do have 2 separate sites, so this is something we will need to consider again, as active/active is where we wish to end up.


We are planning to upgrade our current 1Gb link between the sites to a pair of 10Gb links, for the increased bandwidth and to remove our single point of failure. So I would hope this bandwidth is adequate for our storage traffic?
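One way to sanity-check the link is a rough ceiling on mirrored write traffic. This is a sketch under stated assumptions: with Network RAID 10 split across sites every write is mirrored to the remote site, and the ~75% efficiency factor is an assumed allowance for protocol overhead, not a measured figure:

```python
# Back-of-the-envelope ceiling on inter-site mirrored write traffic.
# Assumptions: every write crosses the link once (Network RAID 10 across
# sites), and ~75% of raw link bandwidth is usable after protocol overhead.

def link_write_ceiling_mb_s(link_gbit, efficiency=0.75):
    """Approximate usable write bandwidth in MB/s over the site link."""
    return link_gbit * 1000 / 8 * efficiency

print(f"1 Gb link:  ~{link_write_ceiling_mb_s(1):.0f} MB/s of mirrored writes")
print(f"10 Gb link: ~{link_write_ceiling_mb_s(10):.0f} MB/s of mirrored writes")
```

Whether the ceiling is adequate depends on the actual write workload; measuring peak write throughput on the existing clusters before the redesign would remove the guesswork.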

Yes, upgrades are something else I hadn't considered. We have a 4-hour maintenance window, and it seems like this could be a bit tight for completing big upgrades. I may take your management group design into consideration for this purpose.

Thank you both for your input, it has been very useful. We are at the early stage of this project, but your comments will definitely have an impact on the final design.