StoreVirtual Storage

Modfather
New Member

Remove node from management group to create new one

Hello All

 

I have 2x P500 G2 running LeftHand 9.5. Initially they were configured as a Network RAID 10 cluster with a FOM. We recently experienced a failure of one node and had to ask HP to help us, as failover did not work.

 

We now have a single node in the cluster, configured with Network RAID 0 (SAN 01). The other node is sitting outside the cluster but within the management group (SAN 02).

 

I would like to remove SAN 02 and create a new management group with SAN 02 upgraded to LeftHand 11. I then aim to configure the storage differently and migrate the relevant data from the old cluster to the new one.

 

Once I have achieved this, I aim to decommission the old cluster/management group, add SAN 01 to the new one, upgrade it to LeftHand 11, reconfigure the LUNs to Network RAID 10, and deploy a FOM.

 

I have attached a screenshot of what the management group currently looks like.

 

When I try to remove SAN 02, I get an error advising that I have to remove the FOM first, as removing SAN 02 would reduce the number of nodes below two.

 

My main question is: will removing the FOM cause the active LUNs on SAN 01 to go offline?

 

Also, any comments on my battle plan for adding the SAN into the new cluster and converting the volumes to Network RAID 10? Will that cause any downtime while they reconfigure?

 

I'm quite new to LeftHand, so any input is appreciated!

 

Thanks!


oikjn
Honored Contributor

Re: Remove node from management group to create new one

What didn't work with the failover?

 

 

You really should get that node working again, re-join it to the cluster, and get NR10 back up before doing any upgrade.

 

If you turn off the manager on SAN02, you should be able to remove the FOM and SAN02 from the management group. However, if SAN02 is currently a voting manager, you will lose quorum when you remove the FOM, so the CMC won't let you do it, and I would advise against trying to remove anything anyway.
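
To put rough numbers on that (this is just my assumption of how your group is being counted, so double-check it in the CMC): if the group still counts SAN01, SAN02 and the FOM as managers, you have three votes, and quorum is a majority of the configured managers, i.e. two.

3 configured managers (SAN01 + SAN02 + FOM)   ->  quorum needed = 2
SAN02 manager not participating               ->  SAN01 + FOM = 2 running, quorum only just met
remove the FOM as well                        ->  SAN01 = 1 running, quorum lost, volumes go offline

That last line is the scenario where the active LUNs on SAN01 would go offline, which is exactly what you are asking about.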

 

The smartest thing to do would be to repair SAN02, get it back into the cluster, convert the LUNs back to NR10, and then do the upgrades.  My guess is that failover didn't work because your initiators aren't set up correctly and/or MPIO/DSM isn't configured correctly.
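
If you want a quick way to sanity-check the initiator side on each Windows host, these are the two things I would look at from an elevated command prompt. I am going from memory on the switches, so check them against the Microsoft docs and the HP DSM/MPIO guide for your OS version:

rem List the disks MPIO has claimed and the paths behind each one. Your
rem LeftHand LUNs should appear here with more than one path.
mpclaim -s -d

rem List the current iSCSI sessions. With MPIO and the HP DSM set up properly
rem you should see multiple sessions per target, not a single connection.
iscsicli SessionList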

Modfather
New Member

Re: Remove node from management group to create new one

We did a full power-down and only one node came back up. HP came to the conclusion that SAN 01 thought SAN 02 was authoritative even though it was dead, so it would not allow access to the data. They used the "HP only" low-level commands to bring the node online as the winner.

SAN 02 is currently removed from the cluster, so in theory it shouldn't be a voting member? Is there any way I can tell?

TBH the whole thing was not set up very well and is pretty much being used as drive storage for shared folders and Exchange databases. I want to move the now-repaired SAN 02 to a different v11 cluster so I can use it to P2V the servers (excluding Exchange) that are connected to the existing cluster. Start from scratch.
oikjn
Honored Contributor

Re: Remove node from management group to create new one

On the main management group summary page, what does it say for quorum? If it says "1", you are OK. If it says "2", that means you currently need the FOM and one manager on a node to be running, and if you try to remove the FOM you WILL lose access to your SAN again. If that is the case, you will have to get support to force the management group to forget that SAN02 is supposed to have a manager running and counted towards quorum.
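
If you would rather check it from the command line than the CMC, CLIQ should be able to show you the same thing. This is a rough sketch from memory (the IP and password are placeholders, and the exact parameter names are worth confirming against the CLIQ user guide for your 9.5 release):

rem Dump the management group configuration, including which systems are
rem running managers and what the current quorum requirement is.
cliq getGroupInfo login=<ip-of-a-node> userName=admin passWord=<password>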

There is little reason to "start from scratch". It's impossible to set up the clusters wrong (other than deciding to use NR0, which can be changed on each LUN very easily). The thing most people mess up is configuring each server's iSCSI connections correctly so that they actually use MPIO and the DSM (for Windows). That doesn't require you to start over with a new management group. The best thing is to read up on how to set up and configure LUNs, then just create new LUNs and migrate the old ones onto them on the same cluster.

My #1 priority, if I were you, would be to get SAN02 or its replacement running and back in the cluster IMMEDIATELY so you can get NR10 on your LUNs again ASAP. I would be having a heart attack if my critical LUNs were running on NR0 for any period of time.
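
Once SAN02 (or its replacement) is back in the cluster, changing the protection level is just a volume property and the restripe happens online, so it should not need downtime. Roughly this kind of CLIQ call is what I mean, though treat it as a sketch: the parameter names are from memory (replication=2 being 2-way, i.e. Network RAID 10, in the 9.x syntax), so confirm them in the CLIQ guide before running anything:

rem Check the current protection level of a volume.
cliq getVolumeInfo volumeName=<your-lun> login=<ip-of-a-node> userName=admin passWord=<password>

rem Raise it to 2-way replication (Network RAID 10). The restripe runs in the
rem background while the volume stays online.
cliq modifyVolume volumeName=<your-lun> replication=2 login=<ip-of-a-node> userName=admin passWord=<password>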