Operating System - HP-UX

Dee Jacobs
Advisor

Concerns concerning spare disk for a vg which is on the locklun disk

HP-UX 11i v3 (11.31)

Just one more question about spare disk on a vg.

I have a two-node cluster; therefore, I have an extra partition on vg01 for the lock LUN, which prevents quorum loss on node failure or LAN connectivity failure.

(We disabled automatic switchover on LAN issues, since our IT dept. occasionally causes short-term LAN problems that affect both nodes simultaneously.)

 

If I add a spare disk to this vg, how do I handle the layout of the spare disk?

Do I need to create the partition for the lock LUN on the spare disk?

I'm sure I would need to do that on the replacement disk for the failed disk.

 

Is there special handling to re-activate the lock LUN after the replacement disk is in place?


Re: Concerns concerning spare disk for a vg which is on the locklun disk

Dee,

 

I hesitated in responding to this, because I don't know the answer for sure... but here goes...

 

I'm assuming you are actually referring to a "cluster lock disk", rather than "cluster lock LUN", as "cluster lock LUNs" are not part of a volume group whereas "cluster lock disks" are.

 

The Serviceguard configuration for a cluster lock disk points at specific PVs, rather than just a volume group. So in the event of that disk failing, while the LVM sparing routines should ensure that the LVM data on the disk is moved to the spare, I don't think this would change the cluster lock disk configuration, nor would the cluster lock data actually be written to the spare disk. I would expect that after the sparing is done, you would still have to halt the cluster, change the cluster configuration to point at the new spare disk in the cluster lock volume group, and then run cmapplyconf to write the cluster lock data to the new disk.
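Roughly, I'd expect the reconfiguration to look something like this (cluster name and device paths are purely illustrative, not from your setup):

```shell
# Sketch only -- "myclust" and the lock device are illustrative.
# (I'd expect to halt the cluster first with cmhaltcl.)

# Dump the running cluster configuration to an ASCII file.
cmgetconf -c myclust /tmp/myclust.ascii

# Edit the file so FIRST_CLUSTER_LOCK_PV (in each node's stanza)
# points at the new lock device in the lock volume group.

# Validate the edited configuration, then apply it; cmapplyconf
# writes the cluster lock structures to the new disk.
cmcheckconf -C /tmp/myclust.ascii
cmapplyconf -C /tmp/myclust.ascii
```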


I am an HPE Employee
Dee Jacobs
Advisor

Re: Concerns concerning spare disk for a vg which is on the locklun disk

Thanks again, Duncan. There's sure a lot to learn on these disk configs.

 

It is a lock-LUN disk with three partitions, laid out like so:

EFI   100MB
HPUX  1MB    (this is for the lock)
HPUX  100%   (this is the PV for vg01)
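For anyone wanting to reproduce this layout, a partition description file in that format can be fed to idisk. This is a sketch only: the device path is an example, and idisk destroys any existing data on the disk.

```shell
# Write a three-partition description file for idisk.
cat > /tmp/partdesc <<EOF
3
EFI 100MB
HPUX 1MB
HPUX 100%
EOF

# Write the partition table (WARNING: destroys all data on the disk).
# Device path is an example only.
idisk -wf /tmp/partdesc /dev/rdisk/disk4
```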

 

_p3 is one of a mirrored pair. The second disk has no lock-lun partition.

I plan to add a spare on that vg.

So, if the one with the lock fails, the replacement disk should be partitioned that way, and I should run a proper cmapplyconf against that disk, since the one that gets spared off is just holding a PV for me. Then, after it's all back together, I can pvmove back to the original disk.

Do I have that right?

 

Re: Concerns concerning spare disk for a vg which is on the locklun disk

Dee,

 

So let me get this straight: you plan to have a VG, vg01, with 3 physical volumes (PVs) in it. Two of those PVs are whole disks (/dev/disk/diskX, /dev/disk/diskY) and one is a partition on a disk (/dev/disk/diskZ_p3). Do I have that correct? diskZ has 2 other partitions: a partition for a cluster lock LUN (diskZ_p2) and the EFI partition (diskZ_p1), which is required to hold the partition tables. Of the two "whole" disks, one is an LVM spare (i.e. added with "vgextend -z"). Logical volumes are mirrored between diskX and diskZ_p3.

 

Do I have that all correct?

 

Now, in the event of failure of diskZ, the spare disk (diskY) will pick up the mirror for diskZ_p3. But the LVM spare algorithms act only on LVM PVs; they don't even *know* about diskZ_p2 or the fact that it is used as a cluster lock LUN. Plus, as you seem to have indicated that the spare disk is un-partitioned, there's nowhere for the new lock LUN to go! You can't partition a disk and keep the existing data on it: idisk will blow away any existing data on a disk. So if diskZ failed and diskY was being used as an LVM spare, you couldn't then run idisk on diskY without losing all the data that had been mirrored to diskY.

 

So if you really want to go down this route, you should make sure diskY is partitioned the same as diskZ, and that diskY_p3 is used as the LVM spare. Then you have another disk with a small partition on it that you can use as your cluster lock LUN if diskZ fails. Watch out, though: if diskX, diskY, and diskZ are all the same size, don't allocate more PEs on diskX than can fit onto diskY_p3 or diskZ_p3 (maybe better just to partition diskX as well, to keep things consistent?).
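As a sketch of that setup, using the illustrative diskX/diskY/diskZ names from above (commands assume HP-UX 11i v3, and /tmp/partdesc is a hypothetical idisk description file matching diskZ's layout):

```shell
# Illustrative only. idisk destroys existing data, so partition
# diskY BEFORE it holds any LVM data.
idisk -wf /tmp/partdesc /dev/rdisk/diskY

# Create device special files for the new partitions.
insf -e

# Make the big third partition a PV and add it to vg01 as the spare.
pvcreate /dev/rdisk/diskY_p3
vgextend -z y vg01 /dev/disk/diskY_p3
```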

 

One thing I did have wrong - you can update the cluster lock configuration without halting the cluster.


I am an HPE Employee
Dee Jacobs
Advisor

Re: Concerns concerning spare disk for a vg which is on the locklun disk

Thank you, Duncan.

Your list of initial conditions is true.

I now understand about partitioning the spare.

Yes I have taken into account the number of PEs for the VG.

I have had a slight delay in working on this, and now there's a weather threat this week.

I believe that I need to partition the mirror disk as well as the spare.

I'll report on progress within a week.

Dee Jacobs
Advisor

Re: Concerns concerning spare disk for a vg which is on the locklun disk

I have succeeded in sparing, failing, and recovering the partitioned disk that holds the cluster lock partition, with a logical volume in _p3. It took me a while to figure out that when I put the replacement disk in, it needed to be partitioned.

 

There are two more small questions.

 

1. After successful replacement, I did the pvmove to restore the lv to the disk from the spare. The LVM manual says that when the pvmove completes, the spare disk returns to standby spare automatically. I found this not to be true: the spare disk remains an active spare, although all the PEs have been restored to the replacement disk. Is there something that needs to be done to return the spare to standby?
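For reference, I've been checking the spare's state like this (vg01 is my volume group; the spare's device path is from my setup, and exact output fields may vary by patch level):

```shell
# Verbose VG status; the spare PV's state (active/standby) appears
# in the per-PV section of the output.
vgdisplay -v vg01

# Per-PV detail for the spare itself.
pvdisplay /dev/disk/diskY_p3
```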

 

2. I see that the cluster lock is assigned to a specific disk within the VG. If this is the disk that fails, must I change the cluster configuration to indicate the remaining healthy disk? Is the cluster lock only queried on boot or will the cluster fail at the time of the disk failure?