StoreVirtual Storage

Removing 2*LHN P4500 from 20 node cluster

Thomas Mill
Occasional Contributor


Hi folks


We have a 20-node LHN cluster spread across 2 datacentres: 10 in Location A and 10 in Location B.


We want to start reducing the size of the cluster, so are looking to take 2 out.


Aside from:


1) making sure they are not managers,

2) being aware of the restripe time,

3) and potentially increasing the management bandwidth...


Is there any more to it than merely selecting the *Edit Cluster* option, removing the 2 storage systems (1 from each site), and then waiting?



Honored Contributor

Re: Removing 2*LHN P4500 from 20 node cluster

It might sound "too easy", but that's all it takes.

Now, I've done that with a 6-node cluster, but never with something as large as what you are suggesting... you might want to spool up 20 VSAs as a test lab before making the change, to verify the process works on such a large group.

I kept our management group traffic low to avoid impact to our production servers, and it took about a day to do the restripe with no serious service impact.
Thomas Mill
Occasional Contributor

Re: Removing 2*LHN P4500 from 20 node cluster

Thanks chap


Hope you don't mind me asking one more thing regarding which nodes to select; if you're not sure, not to worry.


I was going to select, for instance, nodes 8A (Site A) and 8B (Site B), but if I'm not mistaken there is some dependency for the health of the entire array based on *partner* nodes... and I *think* partner nodes tend to be the ones that were added in sequence.


So, my *limited* understanding suggests that for the array to be healthy, nodes 7B, 8A, 8B, 9A are important in this scenario.


If I decide to remove 8A and 8B, and both of them fail for whatever reason during the restripe (or reboot at the same time), then would my entire array have an issue?


I'm wondering, for instance, if I should instead look to remove nodes 2A and 8B. They would have been added into the array at quite different times, so there's not likely to be any *dependency* between them.


Am I confusing myself?!?
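To picture the worry, here is a toy model. It assumes a hypothetical layout where each data block's Network RAID 10 mirror sits on the next node in the cluster's node order (nodes added alternately from each site); that is an illustration of the adjacency argument, not the actual SAN/iQ placement algorithm.

```python
# Hypothetical Network RAID 10 layout: block i's primary copy is on
# node i in the ring, its mirror on node i+1. An illustration only,
# not the real SAN/iQ placement algorithm.

# 20 nodes added alternately from each site: 1A, 1B, 2A, 2B, ..., 10A, 10B
order = [f"{i}{site}" for i in range(1, 11) for site in ("A", "B")]

def data_available(failed):
    """True unless some block has both of its copies on failed nodes."""
    n = len(order)
    return not any(order[i] in failed and order[(i + 1) % n] in failed
                   for i in range(n))

print(data_available({"8A", "8B"}))  # False: ring-adjacent partners share blocks
print(data_available({"2A", "8B"}))  # True: no block has both copies lost
```

Under that assumption, losing two adjacent partners loses data, while losing two unrelated nodes does not, which is exactly the distinction being asked about.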



Gediminas Vilutis
Frequent Advisor

Re: Removing 2*LHN P4500 from 20 node cluster

When you remove nodes from a cluster, the cluster does a full restripe, gradually pumping all the data off the removed nodes onto the remaining ones, but the data stays duplicated (assuming you use Network RAID 10). So you can afford to lose any one particular node (a remaining node or a removed node, it does not matter) at any time.


If you wonder what would happen if both removed nodes failed during the restripe, I would say that this risk already exists in your current situation: if any two adjacent nodes fail, you lose access to data. So why worry? :)
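A toy model makes that point concrete. It assumes a hypothetical layout where each block is mirrored onto the next node in the cluster's node order; this is an illustration of the argument, not the real SAN/iQ placement algorithm.

```python
# Hypothetical Network RAID 10 layout: each block's mirror is on the
# next node in the ring. Illustration only, not the real SAN/iQ layout.

def data_available(failed, order):
    n = len(order)
    return not any(order[i] in failed and order[(i + 1) % n] in failed
                   for i in range(n))

before = [f"{i}{site}" for i in range(1, 11) for site in ("A", "B")]
after = [node for node in before if node not in {"8A", "8B"}]  # post-restripe ring

# Any single node can fail at any time, before or after the removal:
print(all(data_available({node}, before) for node in before))  # True
print(all(data_available({node}, after) for node in after))    # True

# But two adjacent failures lose data in either layout; the restripe
# merely changes which nodes are adjacent (7B and 9A become partners):
print(data_available({"7B", "9A"}, after))  # False
```

In other words, removing nodes does not create a new failure mode: the two-adjacent-failures exposure exists before, during, and after the restripe.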


On the other hand, the load on the removed nodes is much lighter than on the remaining ones. When you remove nodes from the cluster, their iSCSI sessions get instantly transferred to some of the remaining nodes, so the removed nodes only transfer data off their disks: only reads, no writes. For the remaining nodes the restripe is very IO-costly; they do both reads and writes (the restripe reshuffles all data between nodes) plus the IO for the iSCSI initiators. So I would worry more about the health of the remaining nodes :)


BTW, if you want to make sure that your HW is in healthy state before restripe (i.e. no nearly failed disks), you can send management group support bundle to HP support and ask to check ADU reports for all systems for error statistics on all disks in cluster. Sometimes disks starts loging errors to ADU, but still rates are below some threshold for RAID controller to notice.