HPE StoreVirtual Storage / LeftHand

Nodes with different size drives

Edward Rohen
Occasional Visitor

Nodes with different size drives

I have a P4500 setup with 3 nodes with 300GB SAS drives and 2 nodes with 450GB SAS drives, so there is already stranded space on the 2 nodes with the 450GB drives. I just purchased two new nodes with 600GB SAS drives, and I am wondering how to best utilize this space without adding the two new nodes to the current cluster and stranding half of the space on the new nodes. Should I just set them up in their own cluster? When I first started using LeftHand, I started with the 3 nodes of 300GB drives, and I remember something about needing an odd number of nodes in the cluster. Is this still best practice?
3 REPLIES
Jitun
HPE Pro

Re: Nodes with different size drives

Even if you add the 2 new nodes to the same cluster, you would not be losing any space.
The overall capacity of the cluster will increase to the effective capacity of the total number of nodes.
I would still prefer that you create a new cluster and add the units to it.

There is no requirement to have an odd number of nodes in a cluster.
Generally the Starter Pack has 2 nodes in a cluster.
What you do need is an odd number of managers in the management group.

Within a management group, managers are storage systems that govern the activity of all of the storage systems in the group. All storage systems contain the management software, but you must designate which storage systems run that software by starting managers on them.
--------------------------------------------------------------
How to assign points? Click the KUDOS! star!
Jitun
HPE Pro

Re: Nodes with different size drives

Managers use a voting algorithm to coordinate storage system behavior. In this voting algorithm, a strict majority of managers (a quorum) must be running and communicating with each other in order for the SAN/iQ software to function. An odd number of managers is recommended to ensure that a majority is easily maintained. An even number of managers can get into a state where no majority exists: one-half of the managers do not agree with the other one-half. This state, known as a "split-brain," may cause the management group to become unavailable.
For optimal fault tolerance in a single-site configuration, you should have 3 or 5 managers in your management group to provide the best balance between fault tolerance and performance. The maximum supported number of managers is 5.
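The quorum rule described above can be sketched as a simple majority check. This is a generic illustration of the voting math, not actual SAN/iQ code:

```python
def has_quorum(running_managers: int, total_managers: int) -> bool:
    """A strict majority of managers must be running and communicating
    for the management group to remain available."""
    return running_managers > total_managers // 2

# With 3 managers, losing one still leaves a majority (2 of 3):
print(has_quorum(2, 3))  # True
# With 4 managers, a 2/2 split has no majority ("split-brain"):
print(has_quorum(2, 4))  # False
```

This is why 3 or 5 managers are recommended: any even count allows an exact half-and-half split where neither side holds a majority.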
--------------------------------------------------------------
How to assign points? Click the KUDOS! star!
teledata
Respected Contributor

Re: Nodes with different size drives

You can put the new nodes in their own cluster, but keep them in the same management group.

This makes it very convenient to move volumes between different clusters as requirements demand.

As far as usable capacity goes, there are a few things to keep in mind. If you use 2-way replication, there is some math to consider.

For the sake of keeping our numbers simple, let's assume you use Network RAID 10 (2-way replication) for all your volumes:

5 node cluster (3x 12x300 plus 2x 12x450, but really treated as 5x 12x300) = 6.69 TB
2 node cluster (2x 12x600) = 5.35 TB

Between the 2 clusters you would have 12.04TB of usable space (after right-sizing, hardware RAID, and network RAID overhead, of course).

If you added the two 600GB nodes to the existing cluster, you would have a total of 9.36 TB usable.

Now: if you broke ALL the different sized nodes into individual clusters (yet kept them in the same management group) you would net the most usable space:

3 node cluster (3x 12x300) = 4 TB
2 node cluster (2x 12x450) = 4 TB
2 node cluster (2x 12x600) = 5.35 TB
This configuration would net you 13.35TB between 3 clusters

I assumed 100% replicated volumes and used this calculator: http://www.tdonline.com/hp-lefthand/storage-calculator/
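The smallest-node rule behind those numbers can be sketched in Python. This is raw drive math only (decimal TB, no right-sizing or hardware RAID overhead), so it gives higher figures than the calculator linked above; the point is just to show why mixed-size clusters strand space:

```python
def raw_usable_tb(node_drive_sizes_gb, drives_per_node=12, replication=2):
    """Raw usable capacity of one cluster under N-way Network RAID.
    Every node is counted as if it had the smallest node's drive size,
    which is why mixing drive sizes in one cluster strands space."""
    effective_gb = min(node_drive_sizes_gb) * drives_per_node * len(node_drive_sizes_gb)
    return effective_gb / replication / 1000  # decimal TB

# Mixed cluster: the two 450GB nodes are counted as 300GB nodes.
print(raw_usable_tb([300, 300, 300, 450, 450]))  # 9.0
# Separate clusters per drive size waste nothing:
print(raw_usable_tb([450, 450]))                 # 5.4
```

With real overhead factored in, the calculator's 6.69 TB for the mixed 5-node cluster corresponds to the 9.0 TB raw figure here; the relative comparison between layouts comes out the same.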


http://www.tdonline.com