Operating System - HP-UX

VG lock: vgreduce fails .. physical extents are still in use

 
SOLVED
RiclyLeRoy
Frequent Advisor

VG lock: vgreduce fails .. physical extents are still in use

Following your documentation, I am performing the steps to replace a disk in the cluster LOCK volume group, named vglock.

I added the new disk to vglock, but when I run vgreduce to remove the old disk I get this message, because its physical extents are still in use:

vgreduce: Physical volume "/dev/disk/diskxx" could not be removed since some of its
physical extents are still in use.

 

My doubts about completing the task:

1. How can I resolve the message shown above? pvmove seems high risk, because if the transfer goes wrong I would lose the data in vglock.

2. Does vglock have a special logical volume reserved for Serviceguard? I think it contains no data, because it is only needed to determine which nodes are the surviving ones in split-brain cases. Right?

3. I'd like to know whether vglock must be marked as a cluster volume group with "vgchange -c y". Before starting the disk replacement in vglock I have to activate vglock, so should I run "vgchange -c n vglock" + "vgchange -a e vglock"? (Because I cannot run "vgchange -a n"?)

4. When the disk replacement and the cluster configuration are completed, should I run "vgchange -c y vglock" + "vgchange -a n vglock"?

Sush_S
HPE Pro

Re: VG lock: vgreduce fails .. physical extents are still in use

Hi RiclyLeRoy,
Have you tried changing the lock information as shown in the video, followed by migrating the data onto the new disk, before proceeding with the vgreduce?

Thanks,
Sush_S


I am an HPE Employee
Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise


RiclyLeRoy
Frequent Advisor

Re: VG lock: vgreduce fails .. physical extents are still in use

What do you mean by "updating lock information"?

The video doesn't show how to update the lock information. I thought to start from node1 by adding the new disk (vgextend) to the lock VG and removing the old disk (vgreduce), and I got the message indicated in the thread subject.

The video says to correct the LVM configuration for the cluster lock volume group with the new disk if it has not already been done, then to run lvmadm -l to validate the LVM information.

In the example, '/dev/shared_vg' is shown as the lock volume group with 2 physical disks, /dev/disk/disk6 and /dev/disk/disk11, on different nodes, but it doesn't explain how to do it.

 

 

georgek_1
HPE Pro
Solution

Re: VG lock: vgreduce fails .. physical extents are still in use

Hello RiclyLeRoy

 

I added new disk to vglock but I when I apply vgreduce to remove old disk I get this message because PE are always in used:

vgreduce: Physical volume "/dev/disk/diskxx" could not be removed since some of its
physical extents are still in use.

This error means some of the extents of this disk are still in use.
You can check this by running # pvdisplay -v <faulty_vglock_disk> | more

You need to move the data to the new disk before running vgreduce on the faulty disk in vglock.
This can be done with pvmove, or with mirroring (MirrorDisk/UX, if you have it).

1. How can I resolve the message shown above? pvmove seems high risk, because if the transfer goes wrong I would lose the data in vglock.
Ans: Refer to the above; you may take a data backup before performing pvmove.
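As a sketch of that advice (the device names are placeholders, and the vgcfgbackup step is an extra precaution not mentioned above), the check-and-migrate sequence could look like:

```shell
# Placeholder device names: substitute your real old and new disks.
OLD=/dev/disk/disk_old
NEW=/dev/disk/disk_new

# Precaution: back up the LVM configuration of the lock VG first.
vgcfgbackup /dev/vglock

# Check which physical extents on the old disk are still allocated.
pvdisplay -v $OLD | more

# Migrate all allocated extents from the old disk to the new one.
pvmove $OLD $NEW

# The old disk now holds no extents, so vgreduce will succeed.
vgreduce /dev/vglock $OLD
```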

2. Does vglock have a special logical volume reserved for Serviceguard? I think it contains no data, because it is only needed to determine which nodes are the surviving ones in split-brain cases. Right?
Ans: There are no special lvols in the lock VG; it is like any other VG. The only difference is that the disk in the lock VG carries a special flag in its header marking it as a lock disk.
You may create lvols in it and use them to keep data.

3. I'd like to know whether vglock must be marked as a cluster volume group with "vgchange -c y".
Before starting the disk replacement in vglock I have to activate vglock, so should I run "vgchange -c n vglock" + "vgchange -a e vglock"? (Because I cannot run "vgchange -a n"?)

Ans: Yes. The first cluster lock volume group </dev/lock_VG> needs to be designated as a cluster-aware volume group.
You may run # vgchange -c n <vg_lock>, then # vgchange -a y <vg_lock>

4. When the disk replacement and the cluster configuration are completed, should I run "vgchange -c y vglock" + "vgchange -a n vglock"?
Ans: Once it is completed, you need to run # vgchange -a n vg_lock and then # vgchange -c y vg_lock
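Putting answers 3 and 4 together, the whole replacement sequence for the lock VG can be sketched as follows (disk names are placeholders; exclusive activation, `-a e`, is used in step 2, matching what worked later in this thread; run the middle steps on one node only):

```shell
# 1. Make the lock VG cluster-unaware so it can be activated locally.
vgchange -c n /dev/vglock

# 2. Activate it for maintenance (exclusive activation on one node).
vgchange -a e /dev/vglock

# 3. Replace the disk: add the new one, migrate extents, drop the old one.
vgextend /dev/vglock /dev/disk/disk_new
pvmove   /dev/disk/disk_old /dev/disk/disk_new
vgreduce /dev/vglock /dev/disk/disk_old

# 4. Deactivate the VG and mark it cluster-aware again.
vgchange -a n /dev/vglock
vgchange -c y /dev/vglock
```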

I am an HPE Employee.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
RiclyLeRoy
Frequent Advisor

Re: VG lock: vgreduce fails .. physical extents are still in use

@georgek_1 

About question 2:

I found out that no package is using the lock volume group, but there is a logical volume in it too. Is it mandatory to create a logical volume in the lock volume group even if nothing uses this lock VG to keep data?

 

About question 3:

I tried to activate the lock VG to add the new disk using these commands:

# vgchange -c n <vg_lock>    --> there is no problem. That's all right!

# vgchange -a y <vg_lock>   --> I got an error, while # vgchange -a e <vg_lock> worked. What happened? Can you explain, please?

I have no other questions. Thank you very much for your precious support.

 

georgek_1
HPE Pro

Re: VG lock: vgreduce fails .. physical extents are still in use

Hello RiclyLeRoy,

I found out that no package is using the lock volume group, but there is a logical volume in it too.
Is it mandatory to create a logical volume in the lock volume group even if nothing uses this lock VG to keep data?

Ans: There is no need to have any logical volume created in vglock; the VG itself (with a single disk) is enough for the lock VG.

 

I tried to activate the lock VG to add the new disk using these commands:
# vgchange -c n <vg_lock>    --> there is no problem. That's all right!
# vgchange -a y <vg_lock>   --> I got an error, while # vgchange -a e <vg_lock> worked. What happened? Can you explain, please?

Ans: It should work, as one should be able to make the VG cluster-unaware (-c n) and then activate it in normal mode.
What is the error you see while trying to activate the VG?
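For reference, these are the relevant vgchange activation modes on HP-UX; the last comment is only one possible explanation for the behavior described, not confirmed in this thread:

```shell
# vgchange activation modes (HP-UX LVM):
#   -a y   normal (read/write) activation
#   -a e   exclusive activation: only this node may activate the VG
#   -a n   deactivate
# If "-a y" fails but "-a e" works, a possible cause is that the VG was
# still treated as cluster-managed, for which exclusive activation is
# required; after "vgchange -c n" takes effect, "-a y" should normally work.
vgchange -a e /dev/vglock
```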

 

RiclyLeRoy
Frequent Advisor

Re: VG lock: vgreduce fails .. physical extents are still in use

@georgek_1 

I retried running these commands:

# vgchange -c n <vg_lock>
# vgchange -a y <vg_lock>

Now that's all right; perhaps I was making some mistake earlier. I migrated vg_lock from the old disk to the new one successfully.

I then got the following error during cmcheckconf -C <cluster configuration file>:

First cluster lock volume group /dev/vglock needs to be designated as a cluster aware volume group

I followed your article at https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c01881756 and uncommented #VOLUME_GROUP /dev/vg_lock even though no package-related lock VG is involved.

Last 2 questions, please:

  1. Can you confirm there is no problem in uncommenting #VOLUME_GROUP /dev/vg_lock in the cluster configuration file?
  2. Excuse me, but I didn't understand because it is not shown in the video: once the data is moved from the old disk to the new one, do I have to run # vgchange -a n vg_lock and # vgchange -c y vg_lock immediately after, before running the cmcheckconf and cmapplyconf commands?

However, the new lock VG is now active in the cluster, and I thank the HPE support people who helped me reach the goal.

georgek_1
HPE Pro

Re: VG lock: vgreduce fails .. physical extents are still in use

1. Can you confirm there is no problem in uncommenting #VOLUME_GROUP /dev/vg_lock in the cluster configuration file?
Ans: Yes, the lock VG has to be cluster-aware and should be mentioned in the cluster ASCII file (configuration file) as "VOLUME_GROUP /dev/<vg_name>".
2. Excuse me, but I didn't understand because it is not shown in the video: once the data is moved from the old disk to the new one, do I have to run # vgchange -a n vg_lock and # vgchange -c y vg_lock immediately after, before running the cmcheckconf and cmapplyconf commands?
Ans: There is no need to run vgchange -a n / -c y before running cmcheckconf/cmapplyconf if you have the VG name mentioned in the cluster configuration/ASCII file.
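The closing cluster-side steps can be sketched as follows; the file path /etc/cmcluster/cluster.ascii is a placeholder for your actual cluster configuration file:

```shell
# The lock VG must be listed in the cluster ASCII file, e.g.:
#   VOLUME_GROUP /dev/vglock

# Validate the edited cluster configuration.
cmcheckconf -C /etc/cmcluster/cluster.ascii

# Distribute and apply it to all cluster nodes.
cmapplyconf -C /etc/cmcluster/cluster.ascii
```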

 
