StoreVirtual Storage

Removal of Failed Node

JDohrmann
Occasional Advisor

Removal of Failed Node

We have a VSA node, without a valid support contract, that failed and cannot be brought back online.  How do we go about removing it from the CMC?  Whenever we try to modify it, an error is thrown stating "Could not find storage system with serial number: 00:0C:29:A7:D7:8E."

 

Thanks!

9 REPLIES
vlho
Advisor

Re: Removal of Failed Node

Hi,

 

you first have to migrate all volumes to RAID 0 level...

 

JDohrmann
Occasional Advisor

Re: Removal of Failed Node

Thank you for the response.  All the volumes are already at RAID level 0.

Paul Lazzari
Senior Member

Re: Removal of Failed Node

You can try to log in via the console; under Configuration Management you will find "Remove from management group". Maybe this could help.

oikjn
Honored Contributor

Re: Removal of Failed Node

you don't need to migrate to NR0!

 

I've done this MANY times...  just create a new VSA with the same MAC address, same name, and same IP.  Boot it up, and when it comes online, log into the CMC; it will try to contact the node and then give an error saying the node does not think it is in the management group.

 

From there, it will show as RIP:mac in the cluster, and you will see a node listed in the available nodes section.  You can then take that node (upgrade it first if needed) and join it to the management group.  From there you should right-click on the cluster and choose "exchange nodes", which gives you the option to swap the RIP node with the new node.  It will show both the RIP node and the new one until the rebuild is complete, at which point the RIP node will simply disappear.

 

I've done this MANY times.  It will work and is definitely the fastest way.

oikjn
Honored Contributor

Re: Removal of Failed Node

yikes...  I just read that all your volumes were NR0...  if that is the case, you are screwed. 

 

Unless the original VSA VHDs are still intact, in which case MAYBE you can buy support from HP and see if they can recover the original VHDs into a new node, but my guess is you can kiss that data goodbye.

 

Not to beat a man when he's down, but how could you possibly run a SAN with NR0 unless it was just a single-node cluster?  Even the CMC will shout at you every time you open it, saying it's a bad idea.

JDohrmann
Occasional Advisor

Re: Removal of Failed Node

Yeah, this node is on its own in the cluster.  We aren't trying to recover the data since it's only being used for backups.  I'm just trying to remove the failed node from the CMC.  Is there any way?

oikjn
Honored Contributor

Re: Removal of Failed Node

Do you have other clusters in the management group, and is that why you are trying to save it?

 

Past that, I don't know if the procedure I suggested will work with NR0, but I guess it wouldn't hurt to try.

JDohrmann
Occasional Advisor

Re: Removal of Failed Node

Yes, we have two clusters in that group.

oikjn
Honored Contributor

Re: Removal of Failed Node

I think as long as the management group believes that RIP node has data on it that is supposed to be of use, it is going to show the node.  I have never tried this on a cluster which has totally failed, so I don't know if it will work, but you could just delete all the LUNs on that cluster and then delete the cluster itself (assuming the CMC allows that); once that is done, you should be able to remove the missing node.  If not, have you tried the node re-creation I suggested above?  I know it works when you have good data you can migrate, and maybe it will work with bad data as well (I would love to find out).  The key, I think, is that you have to create the VSA with the conflicting MAC address in order to get the original to show as RIP_MAC instead of just missing_MAC.