Operating System - HP-UX
VG lock: vgreduce fails .. physical extents are still in use

08-02-2023 09:12 PM
Following your documentation, I am replacing a disk in the cluster lock volume group, named vglock.
I added the new disk to vglock, but when I run vgreduce to remove the old disk I get this message because its physical extents are still in use:
vgreduce: Physical volume "/dev/disk/diskxx" could not be removed since some of its physical extents are still in use.
My questions to complete the task:
1. How can I resolve the message shown? pvmove seems high risk, because if the transfer goes wrong I would lose the data in vglock.
2. Does vglock have a special logical volume reserved for Serviceguard? I think it contains no data, because it is only needed to determine which nodes survive in split-brain cases. Right?
3. I'd like to know whether vglock must be marked as a cluster volume group with "vgchange -c y". Before starting the disk replacement in vglock, do I have to activate vglock by running "vgchange -c n vglock" + "vgchange -a e vglock"? (Because I cannot run "vgchange -a n"?)
4. When the disk replacement and cluster configuration are complete, do I have to run "vgchange -c y vglock" + "vgchange -a n vglock"?
08-04-2023 01:28 AM - edited 08-04-2023 02:03 AM
Re: VG lock: vgreduce fails .. physical extents are still in use
Hi RiclyLeRoy,
Have you tried updating the lock information as shown in the video, then migrating the data onto the new disk, before proceeding with the vg reduction?
Thanks,
Sush_S
I work at HPE
HPE Support Center offers support for your HPE services and products when and how you need it. Get started with HPE Support Center today.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

08-04-2023 01:59 AM
Re: VG lock: vgreduce fails .. physical extents are still in use
What do you mean by "updating lock information"?
The video doesn't show how to update the lock information. I planned to start from node1, adding the new disk (vgextend) to the lock vg and removing the old disk (vgreduce), and I got the message I indicated in the thread subject.
The video says to correct the LVM configuration for the cluster lock volume group with the new disk if this has not been done, then to run lvmadm -l to validate the LVM information.
In the example, '/dev/shared_vg' is shown as the lock volume group with two physical disks, /dev/disk/disk6 and /dev/disk/disk11, on different nodes, but it doesn't explain how to do it.
08-04-2023 03:12 AM
Solution

Hello RiclyLeRoy,
I added the new disk to vglock, but when I run vgreduce to remove the old disk I get this message because its physical extents are still in use:
vgreduce: Physical volume "/dev/disk/diskxx" could not be removed since some of its physical extents are still in use.
This error means some of the extents of this disk are still in use.
You may check this by running # pvdisplay -v <faulty_vglock_disk> | more
You need to move the data to the new disk before running vgreduce on the faulty disk in vglock.
This can be done with pvmove or with mirror-disk (if you have it).
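As a rough sketch of the check-then-move sequence described above (the device names are placeholders, not taken from this thread; verify against your own pvdisplay output before running anything):

```shell
# Sketch only: disk_old / disk_new are placeholder device names.
# 1. See which logical volumes still hold extents on the old disk.
pvdisplay -v /dev/disk/disk_old | more

# 2. Move all allocated extents from the old disk to the new one.
#    pvmove relocates extents within the vg; take a backup first, as noted above.
pvmove /dev/disk/disk_old /dev/disk/disk_new

# 3. Only once pvdisplay shows no extents in use, remove the old disk from the vg.
vgreduce vglock /dev/disk/disk_old
```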
1. How can I resolve the message shown? pvmove seems high risk, because if the transfer goes wrong I would lose the data in vglock.
Ans: Refer to the above; you may take a data backup before performing pvmove.
2. Does vglock have a special logical volume reserved for Serviceguard? I think it contains no data, because it is only needed to determine which nodes survive in split-brain cases. Right?
Ans: There are no special lvols in the lock vg; it is like any other vg. The only difference is that the disk in the lock vg carries a special flag in its header marking it as a lock disk.
You may still create lvols in it for keeping data.
3. I'd like to know whether vglock must be marked as a cluster volume group with "vgchange -c y". Before starting the disk replacement in vglock, do I have to activate vglock by running "vgchange -c n vglock" + "vgchange -a e vglock"? (Because I cannot run "vgchange -a n"?)
Ans: Yes. First, the cluster lock volume group </dev/lock_VG> needs to be designated as a cluster-aware volume group.
You may run # vgchange -c n <vg_lock> , # vgchange -a y <vg_lock>
4. When the disk replacement and cluster configuration are complete, do I have to run "vgchange -c y vglock" + "vgchange -a n vglock"?
Ans: Once it is completed, you need to run # vgchange -a n vg_lock and # vgchange -c y vg_lock
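Putting answers 3 and 4 together, the activation bracket around the disk replacement looks roughly like this (a sketch only; the vg name "vglock" is taken from this thread and must be adapted to your cluster):

```shell
# Sketch only: "vglock" is a placeholder vg name.
# Before the replacement: drop cluster-awareness and activate the vg.
vgchange -c n vglock      # mark the vg as not cluster-aware
vgchange -a y vglock      # activate it (or -a e for exclusive activation)

# ... vgextend / pvmove / vgreduce happen here ...

# After the replacement: deactivate and restore cluster-awareness.
vgchange -a n vglock      # deactivate the vg
vgchange -c y vglock      # mark it cluster-aware again
```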
08-06-2023 12:28 AM - last edited on 08-06-2023 11:43 PM by Sunitha_Mod
Re: VG lock: vgreduce fails .. physical extents are still in use
About question 2:
I found out that no package is using the lock volume group, but there is a logical volume in it anyway. Is it mandatory to create a logical volume in the lock volume group even if nothing uses this lock vg to keep data?
About question 3:
I tried to activate the lock vg to add the new disk using these commands:
# vgchange -c n <vg_lock> --> no problem, that worked.
# vgchange -a y <vg_lock> --> I got an error, while # vgchange -a e <vg_lock> worked. What happened? Can you explain, please?
I have no other questions; thank you very much for your precious support.
08-07-2023 12:31 PM
Re: VG lock: vgreduce fails .. physical extents are still in use
Hello RiclyLeRoy,
I found out that no package is using the lock volume group, but there is a logical volume in it anyway.
Is it mandatory to create a logical volume in the lock volume group even if nothing uses this lock vg to keep data?
Ans: There is no need to have any logical volume created in vglock; the vg itself (with a single disk) is enough for the lock vg.
I tried to activate the lock vg to add the new disk using these commands:
# vgchange -c n <vg_lock> --> no problem, that worked.
# vgchange -a y <vg_lock> --> I got an error, while # vgchange -a e <vg_lock> worked. What happened? Can you explain, please?
Ans: It should work, as one should be able to make the vg cluster-unaware (-c n) and then activate it in normal mode.
What error do you see while trying to activate the vg?
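A minimal diagnostic sketch for this situation (the vg path is a placeholder; this is a suggested way to capture the failure, not an official procedure):

```shell
# Sketch only: /dev/vglock is a placeholder vg path.
# Confirm the cluster flag is really cleared before normal activation;
# vgdisplay's "VG Status" line shows whether and how the vg is activated.
vgchange -c n /dev/vglock
vgdisplay /dev/vglock | grep "VG Status"

# Retry normal activation and keep the exact message for diagnosis.
vgchange -a y /dev/vglock
```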
08-09-2023 11:23 PM - last edited on 08-10-2023 02:49 AM by Sunitha_Mod
Re: VG lock: vgreduce fails .. physical extents are still in use
I retried running these commands:
# vgchange -c n <vg_lock>
# vgchange -a y <vg_lock>
Now everything is all right; perhaps I was making some mistake before. I migrated vg_lock from the old disk to the new one successfully.
I had the following error during cmcheckconf -C <cluster configuration file>:
First cluster lock volume group /dev/vglock needs to be designated as a cluster aware volume group
I followed your article at https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c01881756 and uncommented #VOLUME_GROUP dev/vg_lock even though no package-related lock VG is involved.
Two last questions, please:
- Can you confirm there is no problem in uncommenting #VOLUME_GROUP dev/vg_lock in the cluster configuration file?
- Excuse me, but I didn't understand because it's not shown in the video: once the data is moved from the old disk to the new one, do I have to run # vgchange -a n vg_lock and # vgchange -c y vg_lock immediately after, before running the cmcheckconf and cmapplyconf commands?
However, the new lock VG is now active in the cluster, and I thank HPE support for helping me reach the goal.
08-10-2023 11:41 AM
Re: VG lock: vgreduce fails .. physical extents are still in use
1. Can you confirm there is no problem in uncommenting #VOLUME_GROUP dev/vg_lock in the cluster configuration file?
Ans: Yes, the lock vg has to be cluster-aware and should be mentioned in the cluster ASCII file (configuration file) as "VOLUME_GROUP /dev/<vg_name>".
2. Excuse me, but I didn't understand because it's not shown in the video: once the data is moved from the old disk to the new one, do I have to run # vgchange -a n vg_lock and # vgchange -c y vg_lock immediately after, before running the cmcheckconf and cmapplyconf commands?
Ans: There is no need to run vgchange -a n / -c y before running cmcheckconf/cmapplyconf if you have the vg name mentioned in the cluster configuration/ASCII file.
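The final step described above can be sketched as follows (the ASCII file path is a common default but still an assumption here, as is the vg name):

```shell
# Sketch only: file path and vg name are placeholder assumptions.
# 1. In the cluster ASCII file, make sure the lock vg line is uncommented:
#      VOLUME_GROUP /dev/vglock
# 2. Validate the configuration, then apply it to the cluster.
cmcheckconf -C /etc/cmcluster/cluster.ascii
cmapplyconf -C /etc/cmcluster/cluster.ascii
```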