Chris Bishop
Occasional Advisor

How to add disks to VGs in a running cluster


Here is my scenario. I have a two node cluster using shared disks from a Hitachi disk frame. This frame is older and is starting to fail. My storage admin has given me replacement LUNs from a new disk frame in order to mirror the data over and then remove the old disks. Another option would be to pvmove the data to the new disks and then remove the old disks. Is there any way to do this in a running cluster without taking it down?

Either way, what are the steps to safely get this done?

Stephen Doud
Honored Contributor

Re: How to add disks to VGs in a running cluster

If the VG is activated in exclusive mode (check the 'Status' line in vgdisplay), then work this as a normal LVM admin task initially. If it is activated in shared mode, you have more steps to do than those below.

1. After you have added the new PVs to the VG, moved the data and removed the defective drives from the VG (vgreduce), create a map file of the VG using:
# vgexport -ps -m <vgname>.map /dev/<vgname>
2. Copy the map file to the other node.

Then on the other node...
3. Determine the VG's group minor number:
# ll /dev/<vgname>/group (look for 0xNN0000, where NN is the unique number to remember)
4. vgexport <vgname>
5. mkdir /dev/<vgname>
6. mknod /dev/<vgname>/group c 64 0xNN0000
7. vgimport -s -m <vgname>.map /dev/<vgname>

8. Test package failover.
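
Putting those steps together with a hypothetical VG named vg01 and minor number 0x010000 (substitute your own VG name, map file location and number), the sequence might look like:

On the node where the VG is already known:
# vgexport -ps -m /tmp/vg01.map /dev/vg01
# rcp /tmp/vg01.map node2:/tmp/vg01.map

On node2:
# ll /dev/vg01/group (note the existing 0xNN0000 minor number before exporting)
# vgexport vg01
# mkdir /dev/vg01
# mknod /dev/vg01/group c 64 0x010000
# vgimport -s -m /tmp/vg01.map /dev/vg01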
Chris Bishop
Occasional Advisor

Re: How to add disks to VGs in a running cluster

Thanks for the reply, Stephen. All of my Serviceguard VGs have the following status:

available, exclusive

Ideally, I would like to add a new disk to each VG with vgextend that matches exactly one of the old disks, pvmove the data to the new disk, then vgreduce the old disk from the VG. Will that work without taking down the cluster or the packages? I realize I will need to remap my VGs and recreate them on the other node afterwards.
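
In command form, that plan for each VG would be something like the following, with hypothetical device names (c5t0d0 is a new LUN, c4t0d0 a failing one):
# pvcreate /dev/rdsk/c5t0d0
# vgextend /dev/vg01 /dev/dsk/c5t0d0
# pvmove /dev/dsk/c4t0d0 /dev/dsk/c5t0d0
# vgreduce /dev/vg01 /dev/dsk/c4t0d0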
Viktor Balogh
Honored Contributor

Re: How to add disks to VGs in a running cluster

Hi Chris,

>Ideally, I would like to add a new disk to each VG with vgextend that matches exactly one of the old disks, pvmove the data to the new disk, then vgreduce the old disk from the VG.

I would rather build an extra mirror of the LV and, after the synchronisation, remove the old mirror half and reduce the VG by the defective/old LUNs. This way you would have a complete working copy of the FS the whole time. I do not even want to think of the possible data loss if pvmove were to fail for some reason. Please consider it.
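
A rough sketch of that approach (device and LV names are hypothetical; MirrorDisk/UX is required for LVM mirroring, and the new LUN must be at least as large as the old one):
# pvcreate /dev/rdsk/c5t0d0
# vgextend /dev/vg01 /dev/dsk/c5t0d0
# lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c5t0d0 (mirrors onto the new LUN and syncs)
# lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c4t0d0 (drops the old mirror half)
# vgreduce /dev/vg01 /dev/dsk/c4t0d0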
****
Unix operates with beer.
Eric SAUBIGNAC
Honored Contributor

Re: How to add disks to VGs in a running cluster

Bonjour Chris,

Whatever method you use, be aware of CLUSTER_LOCK_VG and CLUSTER_LOCK_PV or CLUSTER_LOCK_LUN. If you use one of these methods for split-brain arbitration, you will have to stop the whole cluster to move the lock to the new frame.

HTH

Eric
Stephen Doud
Honored Contributor

Re: How to add disks to VGs in a running cluster

I agree with Viktor - it is safer to LVM-mirror the new drive in, then unmirror the old drive and pvremove it. This will keep the data available to the running package, so you can keep the business-critical application online.

Eric has a good point as well. Use 'cmviewconf | grep lock' to see if your cluster uses a lock VG/PV and, if so, which it is. If one of the defective drives is the lock disk, you will have to update the cluster config file with a remaining PV and then eventually halt the cluster (unless this is A.11.19) to update the cluster binary with another drive in one of the cluster VGs.
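
For example (cluster name hypothetical):
# cmviewconf | grep -i lock
# cmgetconf -c clusterA /tmp/clusterA.ascii
Then check the FIRST_CLUSTER_LOCK_VG and FIRST_CLUSTER_LOCK_PV lines in /tmp/clusterA.ascii.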
Chris Bishop
Occasional Advisor

Re: How to add disks to VGs in a running cluster

I understand the mirroring technique is safer, but I have had problems in the past where the mirror to the new disk was added and the removal of the mirror from the old disk did not work as expected. I have attached the details of the work I want to get done. If someone could look this over and provide the alternative commands for mirroring instead of pvmoving, I will certainly consider going that route. I am trying to get this done over the weekend.

Another one of my problems is that the combination of the Max PV and Max PE settings on one of my large VGs will not allow me to add enough disks to mirror the LV. This is the LV that contains the cluster lock disk, so it must be mirrored. I will have to do some pvmoves on a smaller LV in this VG before I can add enough disks to mirror the larger LV.
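
(Those limits can be checked with vgdisplay; VG name hypothetical:)
# vgdisplay /dev/vg01 | egrep 'Max PV|Cur PV|Max PE per PV'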

As for the cluster lock disk, it is included in one of my VGs and I was not going to touch it. This disk would be part of the mirroring process in my attached file. The next scheduled downtime for this cluster is next weekend and I will reconfigure the cluster and change the cluster lock disk at that time.
Viktor Balogh
Honored Contributor

Re: How to add disks to VGs in a running cluster

>but the removal of the mirror from the old disk did not work as expected.

What went wrong with lvreduce? Take a look at the allocation policy, and check whether any PE is still in use after the lvreduce. If any PE is still allocated, the PV can't be removed from the VG.
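
For example, before the vgreduce you can verify that the old PV is empty (names hypothetical):
# lvdisplay -v /dev/vg01/lvol1 | grep c4t0d0 (should show no extents left on the old disk)
# pvdisplay /dev/dsk/c4t0d0 | grep 'Allocated PE' (should show 0)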
****
Unix operates with beer.
Chris Bishop
Occasional Advisor

Re: How to add disks to VGs in a running cluster

Attached is the same workplan using mirrors to walk over to the new disks instead of pvmoves.

What do you guys think?
Eric SAUBIGNAC
Honored Contributor
Solution

Re: How to add disks to VGs in a running cluster

Chris,

Some remarks before leaving the office:

- You have forgotten to vgreduce the old alternate path /dev/dsk/c10tXdY. You have to do that too.

- You don't want to reduce the mirror off of the cluster lock disk LV. Why not? It doesn't matter whether or not you have data on the disk. What matters is that the disk exists in the VG at cm[apply|check]conf time or when arbitration is needed. So you can safely reduce the mirror.

What could be a problem would be to vgreduce the pv_lock disk itself. But, again, that would matter ONLY if a split-brain condition arises (or at cmcheckconf/cmapplyconf time). If the cluster lock disk no longer exists, it has no impact on the running cluster. Moreover, you can start the cluster even if the lock disk is missing.

Don't forget to modify the cluster configuration with the new pv_lock definition on both nodes, then cmapplyconf, and I guess everything should go well.
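
For example (cluster and device names hypothetical):
# cmgetconf -c clusterA /tmp/clusterA.ascii
(edit FIRST_CLUSTER_LOCK_PV for each node to point at a surviving disk in the lock VG)
# cmcheckconf -C /tmp/clusterA.ascii
# cmapplyconf -C /tmp/clusterA.ascii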

Eric


Eric SAUBIGNAC
Honored Contributor

Re: How to add disks to VGs in a running cluster

Because my English is not so good, I must underline one point from my previous post:

"So you can safely reduce the mirror" --> "So you can SAFELY reduce the mirror in the LV."

Eric
Chris Bishop
Occasional Advisor

Re: How to add disks to VGs in a running cluster

Thanks to all who offered help on this issue. I was able to migrate off of the failing drives last night with no downtime using the mirror migration method. Due to the Max PV setting on some of my VGs, I was forced to do a few pvmoves, but they completed successfully as well.

A special thanks to Eric who convinced me to go ahead and remove the mirror from the cluster lock disk as well. Now I have zero data on suspect disk drives. This coming weekend I will reconfigure the cluster off of the old cluster lock disk and the move will be complete.

Viktor, I think the lvreduce issue I had before was my fault. I was trying to mirror an old PV to a new PV, then remove the mirror from the old PV instead of looking at it from an LV perspective and removing the mirror from all old disks at once. If you mirror five old volumes over to five new volumes, then only lvreduce the mirror from one of the old disks, you get weird results. This was my error as all of the lvreduces last night worked as expected.
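
In other words, the reduce should name every old PV in a single command (names hypothetical):
# lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c4t0d0 /dev/dsk/c4t1d0 /dev/dsk/c4t2d0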

Again, thanks to all for adding your expertise.