StoreVirtual Storage

Re: change site configuration - move LH nodes to different site

 
oikjn
Honored Contributor


If you first move the systems in RZ-cl02 to RZ without changing TC-cl02, and THEN repeat the same for TC-cl02 to TC, you should not experience a restripe.

 

Honestly, a restripe isn't the end of the world... it just delays the next step (which, given the length of this thread, would have already been completed). A restripe w/ all nodes in good health goes much quicker than a restripe to recover from a broken node.

 

Side note: it looks like the first site was set up nicely, with evens at one site and odds at the other... it would have been better if you had set up the 2nd cluster with that same pattern, because then you could switch between single-site and multi-site without a restripe.

 

If you are still uncomfortable, the manual really does do a pretty good job of going over how everything works. Read it, or search the CMC help window and read up there, and you should feel better.

pirx4711
Frequent Advisor


Thanks for your patience. I reorganized the sites and no restriping occurred. I'm extra cautious with this because we had a lot of trouble with our LH environment in the past 2 years, and not everything worked as it should, or as I was told it should.

 

Last time, a complete node failed (3-disk error) and we had to set up the node from scratch (done by HP support), but it was not added to the cluster correctly. So restriping began and took ~10 days. Then support noticed that there was still a ghost node (the old one) and removed it. After that, restriping started again for another 10 days. So we had a total of 20 days, and some volumes were in an unprotected state.

 

oikjn
Honored Contributor


Yikes. That sounds like a major headache... better than data loss, but still not comforting at all.

 

The good news w/ this situation is that if, for whatever reason, the cluster does decide to do a restripe, you will never lose your data redundancy, so you won't be left exposed the way you were during that botched node replacement.

 

As for the speed of a restripe, that's all case-specific: it depends on your load, the amount of data, the ongoing change rate, AND the bandwidth you allow the cluster to use for the restripe. But given that you are talking about 10 nodes, a restripe is likely to be non-trivial.
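To get a feel for why a restripe can run into days, a rough back-of-envelope helps: time is roughly data-to-move divided by the effective bandwidth the cluster is allowed to spend on it. This is a simplified model, not LeftHand's actual restripe algorithm; the function name, the efficiency factor, and the example numbers are all assumptions for illustration.

```python
# Back-of-envelope restripe duration estimate (illustrative only).
# The real restripe time also depends on cluster load and the ongoing
# change rate, which this simple model does not capture.

def estimate_restripe_days(data_to_move_tb, bandwidth_mb_s, efficiency=0.5):
    """Estimate restripe duration in days.

    data_to_move_tb -- terabytes of data the cluster must relocate
    bandwidth_mb_s  -- MB/s allotted to restripe traffic
    efficiency      -- fraction of that bandwidth effectively used
                       (0.5 is an assumed, pessimistic default)
    """
    data_mb = data_to_move_tb * 1024 * 1024          # TB -> MB
    seconds = data_mb / (bandwidth_mb_s * efficiency)
    return seconds / 86400                            # seconds -> days

# Hypothetical example: relocating 20 TB at a 50 MB/s throttle with
# 50% effective utilisation lands in the ~10-day range reported above.
print(round(estimate_restripe_days(20, 50), 1))  # -> 9.7
```

The point of the exercise: at realistic throttle settings, restripe time scales linearly with the data that has to move, which is why keeping the even/odd site pattern (and avoiding an unnecessary restripe) matters on a 10-node cluster.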