<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: VG lock: vgreduce fails .. physical extents are still in use in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7194001#M948554</link>
    <description>&lt;P style="margin: 0;"&gt;1. Can you confirm me there is no problem to uncomment #VOLUME_GROUP dev/vg_lock in cluster configuration file ?&lt;BR /&gt;Ans : Yes, the lock vg has to be cluster aware and should be mentioned in cluster ascii file (configuration file) as "VOLUME_GROUP /dev/&amp;lt;vg_name&amp;gt; &amp;nbsp; .&lt;BR /&gt;2. Excuse me but I didn't undesrtand because it's not shown in video:&amp;nbsp;&lt;BR /&gt;when data moved from old disk to new one, immediately after I have to run # vgchange -a n vg_lock &amp;amp; # vgchange -c y vg_lock, before to run cmcheckconf and cmapplyconf commands ?&lt;BR /&gt;Ans : There is no need to run vgchange -a n/ -c y before running cmcheckconf/applyconf if you have the vgname mentioned in cluster configuration/ascii file .&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;I work for HPE/ I am an HPE Employee (HPE Community)&lt;/P&gt;</description>
    <pubDate>Thu, 10 Aug 2023 18:41:56 GMT</pubDate>
    <dc:creator>georgek_1</dc:creator>
    <dc:date>2023-08-10T18:41:56Z</dc:date>
    <item>
      <title>VG lock: vgreduce fails .. physical extents are still in use</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193428#M948540</link>
      <description>&lt;P&gt;According to your documentation I'm following the steps to replace a disk in the cluster LOCK volume group named &lt;STRONG&gt;vglock.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;I added the new disk to vglock, but when I run vgreduce to remove the old disk I get this message because some physical extents are still in use:&lt;/P&gt;&lt;PRE&gt;vgreduce: Physical volume "/dev/disk/diskxx" could not be removed since some of its
physical extents are still in use.&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My doubts to complete the task:&lt;/P&gt;&lt;P&gt;1. How can I resolve the message shown ? PVMOVE is high risk because if the transfer went bad I'd lose my data in vglock.&lt;/P&gt;&lt;P&gt;2.&amp;nbsp;Does vglock have a special logical volume reserved for Serviceguard ? I think it contains no data because it's only needed to decide which nodes survive in split-brain cases. Right ?&lt;/P&gt;&lt;P&gt;3. I'd like to know if vglock &lt;STRONG&gt;must be marked as a cluster volume group&lt;/STRONG&gt; with "vgchange -c y". Before starting the disk replacement in vglock, I have to activate vglock, so do I have to run "vgchange -c n vglock"&amp;nbsp;+ "vgchange -a e vglock" ? (because I cannot run "vgchange -a n" ?)&lt;/P&gt;&lt;P&gt;4. When the disk replacement and cluster configuration are completed, do I have to run&amp;nbsp;"vgchange -c y vglock" + "vgchange -a n vglock" ?&lt;/P&gt;</description>
      <pubDate>Thu, 03 Aug 2023 04:12:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193428#M948540</guid>
      <dc:creator>RiclyLeRoy</dc:creator>
      <dc:date>2023-08-03T04:12:52Z</dc:date>
    </item>
    <item>
      <title>Re: VG lock: vgreduce fails .. physical extents are still in use</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193551#M948545</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;SPAN&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;SPAN class=""&gt;&lt;A href="https://community.hpe.com/t5/user/viewprofilepage/user-id/1148678" target="_self"&gt;&lt;SPAN class=""&gt;RiclyLeRoy&lt;/SPAN&gt;&lt;/A&gt;&lt;SPAN&gt;&amp;nbsp;,&lt;BR /&gt;Have you tried changing lock information as shown in the video, followed by data migration onto the new disk, before proceeding with vg reduction?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;A href="https://community.hpe.com/t5/user/ViewProfilePage/user-id/2055988" target="_blank" rel="noopener"&gt;&lt;SPAN class=""&gt;Sush_S&lt;/SPAN&gt;&lt;/A&gt;&lt;BR /&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 04 Aug 2023 09:03:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193551#M948545</guid>
      <dc:creator>Sush_S</dc:creator>
      <dc:date>2023-08-04T09:03:21Z</dc:date>
    </item>
    <item>
      <title>Re: VG lock: vgreduce fails .. physical extents are still in use</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193558#M948546</link>
      <description>&lt;P&gt;What do you mean by "&lt;SPAN class=""&gt;&lt;SPAN&gt;updating lock information" &lt;/SPAN&gt;&lt;/SPAN&gt;?&lt;/P&gt;&lt;P&gt;The video doesn't show how to update the lock information. I thought to start from node1, adding the new disk (vgextend) to the lock vg and removing the old disk (vgreduce), and I got the message which I indicated in the thread subject.&lt;/P&gt;&lt;P&gt;The video says &lt;EM&gt;to correct the LVM&amp;nbsp;configuration for the &lt;FONT color="#0000FF"&gt;cluster lock volume group with the new disk&lt;/FONT&gt; if it is not done&lt;/EM&gt;, then says to type&lt;EM&gt; lvmadm -l&lt;/EM&gt; to validate the LVM information.&lt;/P&gt;&lt;P&gt;In the example, '/dev/shared_vg' is shown as the lock volume group with 2 physical disks /dev/disk/disk6 and /dev/disk/disk11 on different nodes, but it doesn't explain how to do it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 04 Aug 2023 08:59:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193558#M948546</guid>
      <dc:creator>RiclyLeRoy</dc:creator>
      <dc:date>2023-08-04T08:59:09Z</dc:date>
    </item>
    <item>
      <title>Re: VG lock: vgreduce fails .. physical extents are still in use</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193564#M948547</link>
      <description>&lt;P style="margin: 0;"&gt;Hello RiclyLeRoy&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;I added new disk to vglock but I when I apply vgreduce to remove old disk I get this message because PE are always in used:&lt;/P&gt;
&lt;P style="margin: 0;"&gt;vgreduce: Physical volume "/dev/disk/diskxx" could not be removed since some of its&lt;BR /&gt;physical extents are still in use.&lt;/P&gt;
&lt;P style="margin: 0;"&gt;This error means some of the extends of this disk are still in use .&lt;BR /&gt;&lt;STRONG&gt;You may check the same by running # pvdisplay -v &amp;lt;faulty_vglock_disk&amp;gt; | more&amp;nbsp;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&lt;STRONG&gt;You need to move the data to new disk before running vgreduce the faulty disk from vglock .&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;It can be completed with pvmove or using mirror-disk (if you have it) .&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;1. How can I solve it about message shown ? PVMOVE is high risk because if transfer went bad I I'd loose my data into vglock.&lt;BR /&gt;&lt;STRONG&gt;Ans : refer the above , you may take a data backup before performing pvmove .&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;2. vglock has special logical volume reserved to Service Guard ? I think It contains no data because It's only necessary to understand what nodes are as main ones in split brain cases. Right ?&lt;BR /&gt;&lt;STRONG&gt;Ans :There no special lvols in lock vg , as it is like any other vg . The only difference is that the disk in lock vg has a special flag in it's header to mark it as a lock disk .&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;You may use it for using lvols for keeping data .&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;3. I'd like to know if vglock must be marked as cluster volume group with "vgchange -c y".&amp;nbsp;&lt;BR /&gt;Before to start disk replacement into vglock, I have to activate vglock so I have to run "vgchange -c n vglock, " + "vgchange -a e vglock" ? (because I cannot run "vgchange -a n" ?)&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&lt;STRONG&gt;Ans : Yes , First cluster lock volume group &amp;lt;/dev/lock_VG&amp;gt; needs to be designated as a cluster aware volume group .&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;may may run # vgchange -c n &amp;lt;vg_lock&amp;gt; , # vgchange -a y &amp;lt;vg_lock&amp;gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;4. When disk replacement task and cluster configuration is completed, I have to run "vgchange -c y vglock" + "vgchange -a n vglock" ?&lt;BR /&gt;&lt;STRONG&gt;Ans :Once it is completed, you need to run # vgchange -a n vg_lock &amp;amp; # vgchange -c y vg_lock&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&lt;STRONG&gt;I work for HPE/ I am an HPE Employee (HPE Community)&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 04 Aug 2023 10:12:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193564#M948547</guid>
      <dc:creator>georgek_1</dc:creator>
      <dc:date>2023-08-04T10:12:00Z</dc:date>
    </item>
    <item>
      <title>Re: VG lock: vgreduce fails .. physical extents are still in use</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193614#M948549</link>
      <description>&lt;P&gt;&lt;EM&gt;&lt;a href="https://community.hpe.com/t5/user/viewprofilepage/user-id/1096538"&gt;@georgek_1&lt;/a&gt;&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;about question 2&lt;/EM&gt;:&lt;/P&gt;
&lt;P&gt;I found out there is no package using the &lt;U&gt;lock&lt;/U&gt; volume group, but there is a logical volume too.&lt;FONT color="#FF0000"&gt;&lt;U&gt; Is it mandatory to create a logical volume to get the lock volume group&lt;/U&gt;&amp;nbsp;even if nothing uses this lock vg to keep data&lt;/FONT&gt; ?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;about question 3&lt;/EM&gt;:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I tried to activate the lock vg to add the new disk using these commands:&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;# &lt;/STRONG&gt;&lt;STRONG&gt;vgchange -c n &amp;lt;vg_lock&amp;gt; &amp;nbsp; &amp;nbsp;&lt;/STRONG&gt;--&amp;gt; there is no problem. That's all right!&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;# vgchange -a y &amp;lt;vg_lock&amp;gt; &amp;nbsp; --&amp;gt; &lt;FONT color="#FF0000"&gt;I got an error, while # vgchange -a e &amp;lt;vg_lock&amp;gt; worked. What happened ? Can you explain, please?&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;I have no other questions. Thank you very much for your valuable support.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 07 Aug 2023 06:43:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193614#M948549</guid>
      <dc:creator>RiclyLeRoy</dc:creator>
      <dc:date>2023-08-07T06:43:53Z</dc:date>
    </item>
    <item>
      <title>Re: VG lock: vgreduce fails .. physical extents are still in use</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193723#M948550</link>
      <description>&lt;P style="margin: 0;"&gt;Hello RiclyLeRoy,&lt;/P&gt;
&lt;P style="margin: 0;"&gt;I found out there is no package is using lock volume group but thers is a logical volume too.&amp;nbsp;&lt;BR /&gt;Is't mandatory to create a logical volume to get the lock volume group even if none uses this lock vg to keep data ?&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&lt;STRONG&gt;Ans : No need to have any logical volume created in vglock , vg itself (with a single disk) is enough for lockvg .&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;I tried to activate lock vg to add new disk running using these commands:&lt;BR /&gt;# vgchange -c n &amp;lt;vg_lock&amp;gt; &amp;nbsp; &amp;nbsp;--&amp;gt; there is no problem. That's all right!&lt;BR /&gt;# vgchange -a y &amp;lt;vg_lock&amp;gt; &amp;nbsp; --&amp;gt; I got error, while It worked # vgchange -a e &amp;lt;vg_lock&amp;gt;, what happened ? Can you explain me please?&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&lt;STRONG&gt;Ans : It should work as one should be able to make the vg cluster un-aware (-c n) and activate it in normal mode .&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;Whats the error seeing while try activating the vg ?&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;I work for HPE/ I am an HPE Employee (HPE Community)&lt;/P&gt;</description>
      <pubDate>Mon, 07 Aug 2023 19:31:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193723#M948550</guid>
      <dc:creator>georgek_1</dc:creator>
      <dc:date>2023-08-07T19:31:51Z</dc:date>
    </item>
    <item>
      <title>Re: VG lock: vgreduce fails .. physical extents are still in use</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193926#M948552</link>
      <description>&lt;P&gt;&lt;a href="https://community.hpe.com/t5/user/viewprofilepage/user-id/1096538"&gt;@georgek_1&lt;/a&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I retried running these commands:&lt;/P&gt;
&lt;P&gt;# vgchange -c n &amp;lt;vg_lock&amp;gt;&lt;BR /&gt;# vgchange -a y &amp;lt;vg_lock&amp;gt;&lt;/P&gt;
&lt;P&gt;&lt;FONT color="#FF0000"&gt;Now thats' all right&lt;/FONT&gt;, perhaps I'm making some mistakes; I migrated vg_lock from old disk to new one successfully.&lt;/P&gt;
&lt;P&gt;I had the following error during cmcheckconf -C &amp;lt;cluster configuration file&amp;gt; :&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;SPAN class=""&gt;First cluster lock volume group /dev/vglock needs to be designated as a cluster aware volume group&lt;/SPAN&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class=""&gt;I follow your article at &lt;A href="https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c01881756" target="_blank" rel="noopener"&gt;https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c01881756&lt;/A&gt; and I &lt;SPAN&gt;removed comment to &lt;FONT color="#FF0000"&gt;#VOLUME_GROUP dev/vg_lock&lt;/FONT&gt; even if there is no package-related lock VG involved.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class=""&gt;&lt;SPAN&gt;Last 2 questions please:&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;OL&gt;
&lt;LI&gt;&lt;SPAN class=""&gt;&lt;SPAN&gt;Can you confirm me there is no problem to uncomment&amp;nbsp;&lt;FONT color="#FF0000"&gt;#VOLUME_GROUP dev/vg_lock&lt;/FONT&gt; in cluster configuration file ?&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;LI&gt;&lt;SPAN class=""&gt;&lt;SPAN&gt;Excuse me but I didn't undesrtand because it's not shown in video: when data moved from old disk to new one, immediately after I have to run &lt;EM&gt;# vgchange -a n vg_lock &amp;amp; # vgchange -c y vg_lock, &lt;/EM&gt;&lt;STRONG&gt;before to run cmcheckconf and cmapplyconf commands ?&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;
&lt;/OL&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN class=""&gt;&lt;SPAN&gt;&lt;U&gt;However now new lock VG is active in cluster&lt;/U&gt; and I thank you HPE support who helped me to reach the goal&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;
</description>
      <pubDate>Thu, 10 Aug 2023 09:49:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7193926#M948552</guid>
      <dc:creator>RiclyLeRoy</dc:creator>
      <dc:date>2023-08-10T09:49:19Z</dc:date>
    </item>
    <item>
      <title>Re: VG lock: vgreduce fails .. physical extents are still in use</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7194001#M948554</link>
      <description>&lt;P style="margin: 0;"&gt;1. Can you confirm me there is no problem to uncomment #VOLUME_GROUP dev/vg_lock in cluster configuration file ?&lt;BR /&gt;Ans : Yes, the lock vg has to be cluster aware and should be mentioned in cluster ascii file (configuration file) as "VOLUME_GROUP /dev/&amp;lt;vg_name&amp;gt; &amp;nbsp; .&lt;BR /&gt;2. Excuse me but I didn't undesrtand because it's not shown in video:&amp;nbsp;&lt;BR /&gt;when data moved from old disk to new one, immediately after I have to run # vgchange -a n vg_lock &amp;amp; # vgchange -c y vg_lock, before to run cmcheckconf and cmapplyconf commands ?&lt;BR /&gt;Ans : There is no need to run vgchange -a n/ -c y before running cmcheckconf/applyconf if you have the vgname mentioned in cluster configuration/ascii file .&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;I work for HPE/ I am an HPE Employee (HPE Community)&lt;/P&gt;</description>
      <pubDate>Thu, 10 Aug 2023 18:41:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vg-lock-vgreduce-fails-physical-extents-are-still-in-use/m-p/7194001#M948554</guid>
      <dc:creator>georgek_1</dc:creator>
      <dc:date>2023-08-10T18:41:56Z</dc:date>
    </item>
  </channel>
</rss>

