<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733364#M386643</link>
    <description>Look at this:&lt;BR /&gt;&lt;BR /&gt;--- Physical volumes ---&lt;BR /&gt;PV Name /dev/dsk/c145t0d1&lt;BR /&gt;PV Name /dev/dsk/c143t0d1 Alternate Link&lt;BR /&gt;...&lt;BR /&gt;&lt;BR /&gt;PV Name /dev/dsk/c145t0d2&lt;BR /&gt;PV Name /dev/dsk/c143t0d2 Alternate Link&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;So you have 2 physical volumes with alternate paths (c145t0d1=c143t0d1).&lt;BR /&gt;&lt;BR /&gt;The LVOL is already on these 2 PVs.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Now you need 2 more PVs to mirror the existing 2 PVs.</description>
    <pubDate>Tue, 04 Jan 2011 09:31:31 GMT</pubDate>
    <dc:creator>Torsten.</dc:creator>
    <dc:date>2011-01-04T09:31:31Z</dc:date>
    <item>
      <title>Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733358#M386637</link>
      <description>&lt;BR /&gt;Hello Master,&lt;BR /&gt;when I add a mirror I get the error message below: "Failure possibly caused by PVG-Strict or Distributed allocation policies".&lt;BR /&gt;How do I deal with it? I need to create a distributed LV. Thanks in advance!&lt;BR /&gt;# lvcreate -n lv01 -D y -s g /dev/vg16&lt;BR /&gt;Logical volume "/dev/vg16/lv01" has been successfully created with&lt;BR /&gt;character device "/dev/vg16/rlv01".&lt;BR /&gt;Volume Group configuration for /dev/vg16 has been saved in /etc/lvmconf/vg16.conf&lt;BR /&gt;&lt;BR /&gt;# lvextend -l 25 /dev/vg16/lv01 /dev/dsk/c145t0d1 /dev/dsk/c145t0d2&lt;BR /&gt;Logical volume "/dev/vg16/lv01" has been successfully extended.&lt;BR /&gt;Volume Group configuration for /dev/vg16 has been saved in /etc/lvmconf/vg16.conf&lt;BR /&gt;You have mail in /var/mail/root&lt;BR /&gt;&lt;BR /&gt;# lvextend -m 1 /dev/vg16/lv01 /dev/dsk/c143t0d1 /dev/dsk/c143t0d2&lt;BR /&gt;&lt;BR /&gt;Device file path "/dev/dsk/c143t0d1" is an alternate path&lt;BR /&gt;to the Physical Volume. Using Primary Link "/dev/dsk/c145t0d1".&lt;BR /&gt;Device file path "/dev/dsk/c143t0d2" is an alternate path&lt;BR /&gt;to the Physical Volume. 
Using Primary Link "/dev/dsk/c145t0d2".&lt;BR /&gt;lvextend: Not enough free physical extents available.&lt;BR /&gt;Logical volume "/dev/vg16/lv01" could not be extended.&lt;BR /&gt;Failure possibly caused by PVG-Strict or Distributed allocation policies.&lt;BR /&gt;&lt;BR /&gt;# vgdisplay -v /dev/vg16 |more&lt;BR /&gt;&lt;BR /&gt;--- Volume groups ---&lt;BR /&gt;VG Name                     /dev/vg16&lt;BR /&gt;VG Write Access             read/write     &lt;BR /&gt;VG Status                   available                 &lt;BR /&gt;Max LV                      255    &lt;BR /&gt;Cur LV                      1      &lt;BR /&gt;Open LV                     1      &lt;BR /&gt;Max PV                      16     &lt;BR /&gt;Cur PV                      2      &lt;BR /&gt;Act PV                      2      &lt;BR /&gt;Max PE per PV               2559         &lt;BR /&gt;VGDA                        4   &lt;BR /&gt;PE Size (Mbytes)            4               &lt;BR /&gt;Total PE                    3070    &lt;BR /&gt;Alloc PE                    25      &lt;BR /&gt;Free PE                     3045    &lt;BR /&gt;Total PVG                   2        &lt;BR /&gt;Total Spare PVs             0              &lt;BR /&gt;Total Spare PVs in use      0                     &lt;BR /&gt;&lt;BR /&gt;   --- Logical volumes ---&lt;BR /&gt;   LV Name                     /dev/vg16/lv01&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            100             &lt;BR /&gt;   Current LE                  25        &lt;BR /&gt;   Allocated PE                25          &lt;BR /&gt;   Used PV                     2       &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volumes ---&lt;BR /&gt;   PV Name                     /dev/dsk/c145t0d1&lt;BR /&gt;   PV Name                     /dev/dsk/c143t0d1        Alternate Link&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    2559    &lt;BR /&gt; 
  Free PE                     2546    &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;   Proactive Polling           On               &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c145t0d2&lt;BR /&gt;   PV Name                     /dev/dsk/c143t0d2        Alternate Link&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    511     &lt;BR /&gt;   Free PE                     499     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;   Proactive Polling           On               &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volume groups ---&lt;BR /&gt;   PVG Name                    PVG1                       &lt;BR /&gt;   PV Name                     /dev/dsk/c145t0d1          &lt;BR /&gt;   PV Name                     /dev/dsk/c145t0d2          &lt;BR /&gt;&lt;BR /&gt;   PVG Name                    PVG2                       &lt;BR /&gt;   PV Name                     /dev/dsk/c143t0d1          &lt;BR /&gt;   PV Name                     /dev/dsk/c143t0d2    &lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 04 Jan 2011 08:25:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733358#M386637</guid>
      <dc:creator>study unix</dc:creator>
      <dc:date>2011-01-04T08:25:43Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733359#M386638</link>
      <description>Can anyone give some support?</description>
      <pubDate>Tue, 04 Jan 2011 08:53:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733359#M386638</guid>
      <dc:creator>study unix</dc:creator>
      <dc:date>2011-01-04T08:53:03Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733360#M386639</link>
      <description>The important message is&lt;BR /&gt;&lt;BR /&gt;lvextend: Not enough free physical extents available.</description>
      <pubDate>Tue, 04 Jan 2011 09:00:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733360#M386639</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-01-04T09:00:27Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733361#M386640</link>
      <description>First of all, I think you do not have enough free space on the mirror disks for the second lvextend command.&lt;BR /&gt;&lt;BR /&gt;Also, the two commands do not match each other in size; you used&lt;BR /&gt;# lvextend -l 25&lt;BR /&gt;# lvextend -m 1&lt;BR /&gt;&lt;BR /&gt;Do you have a special purpose in mind?</description>
      <pubDate>Tue, 04 Jan 2011 09:02:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733361#M386640</guid>
      <dc:creator>Hakki Aydin Ucar</dc:creator>
      <dc:date>2011-01-04T09:02:44Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733362#M386641</link>
      <description>Please post the first ~15 lines of&lt;BR /&gt;&lt;BR /&gt;# lvdisplay -v /dev/vg16/lv01&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;and you will see that you created the LVOL on both PVs and are now trying to mirror it to the *same* PVs!</description>
      <pubDate>Tue, 04 Jan 2011 09:04:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733362#M386641</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-01-04T09:04:24Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733363#M386642</link>
      <description>&lt;BR /&gt;Hello Torsten,&lt;BR /&gt;Output is below.&lt;BR /&gt;I will mirror c145t0d1,c145t0d2 to c143t0d1,c143t0d2, but c145t0d1 is not the same disk as c143t0d2, so why does this error happen, and how can I create a distributed LV?&lt;BR /&gt;# lvdisplay -v /dev/vg16/lv01 |more&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vg16/lv01&lt;BR /&gt;VG Name                     /dev/vg16&lt;BR /&gt;LV Permission               read/write   &lt;BR /&gt;LV Status                   available/syncd           &lt;BR /&gt;Mirror copies               0            &lt;BR /&gt;Consistency Recovery        MWC                 &lt;BR /&gt;Schedule                    parallel     &lt;BR /&gt;LV Size (Mbytes)            100             &lt;BR /&gt;Current LE                  25        &lt;BR /&gt;Allocated PE                25          &lt;BR /&gt;Stripes                     0       &lt;BR /&gt;Stripe Size (Kbytes)        0                   &lt;BR /&gt;Bad block                   on           &lt;BR /&gt;Allocation                  PVG-strict/distributed&lt;BR /&gt;IO Timeout (Seconds)        default             &lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV  &lt;BR /&gt;   /dev/dsk/c145t0d1  13        13        &lt;BR /&gt;   /dev/dsk/c145t0d2  12        12        &lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;   LE    PV1                PE1   Status 1 &lt;BR /&gt;   00000 /dev/dsk/c145t0d1  00013 current  &lt;BR /&gt;   00001 /dev/dsk/c145t0d2  00012 current  &lt;BR /&gt;   00002 /dev/dsk/c145t0d1  00014 current  &lt;BR /&gt;   00003 /dev/dsk/c145t0d2  00013 current  &lt;BR /&gt;   00004 /dev/dsk/c145t0d1  00015 current  &lt;BR /&gt;   00005 /dev/dsk/c145t0d2  00014 current  &lt;BR /&gt;   00006 /dev/dsk/c145t0d1  00016 current  &lt;BR /&gt;   00007 /dev/dsk/c145t0d2  00015 current  &lt;BR /&gt;   00008 /dev/dsk/c145t0d1  00017 current  
&lt;BR /&gt;   00009 /dev/dsk/c145t0d2  00016 current  &lt;BR /&gt;   00010 /dev/dsk/c145t0d1  00018 current  &lt;BR /&gt;   00011 /dev/dsk/c145t0d2  00017 current  &lt;BR /&gt;   00012 /dev/dsk/c145t0d1  00019 current  &lt;BR /&gt;   00013 /dev/dsk/c145t0d2  00018 current  &lt;BR /&gt;   00014 /dev/dsk/c145t0d1  00020 current  &lt;BR /&gt;   00015 /dev/dsk/c145t0d2  00019 current  &lt;BR /&gt;   00016 /dev/dsk/c145t0d1  00021 current  &lt;BR /&gt;   00017 /dev/dsk/c145t0d2  00020 current  &lt;BR /&gt;   00018 /dev/dsk/c145t0d1  00022 current  &lt;BR /&gt;   00019 /dev/dsk/c145t0d2  00021 current  &lt;BR /&gt;   00020 /dev/dsk/c145t0d1  00023 current  &lt;BR /&gt;   00021 /dev/dsk/c145t0d2  00022 current  &lt;BR /&gt;   00022 /dev/dsk/c145t0d1  00024 current  &lt;BR /&gt;   00023 /dev/dsk/c145t0d2  00023 current</description>
      <pubDate>Tue, 04 Jan 2011 09:28:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733363#M386642</guid>
      <dc:creator>study unix</dc:creator>
      <dc:date>2011-01-04T09:28:32Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733364#M386643</link>
      <description>Look at this:&lt;BR /&gt;&lt;BR /&gt;--- Physical volumes ---&lt;BR /&gt;PV Name /dev/dsk/c145t0d1&lt;BR /&gt;PV Name /dev/dsk/c143t0d1 Alternate Link&lt;BR /&gt;...&lt;BR /&gt;&lt;BR /&gt;PV Name /dev/dsk/c145t0d2&lt;BR /&gt;PV Name /dev/dsk/c143t0d2 Alternate Link&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;So you have 2 physical volumes with alternate paths (c145t0d1=c143t0d1).&lt;BR /&gt;&lt;BR /&gt;The LVOL is already on these 2 PVs.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Now you need 2 more PVs to mirror the existing 2 PVs.</description>
      <pubDate>Tue, 04 Jan 2011 09:31:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733364#M386643</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-01-04T09:31:31Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733365#M386644</link>
      <description>BTW, this is obviously an array (what model?). It is most likely already RAID (1 or 5 or whatever), so distribution or mirroring on the same array doesn't make sense at all.</description>
      <pubDate>Tue, 04 Jan 2011 09:39:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733365#M386644</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-01-04T09:39:37Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733366#M386645</link>
      <description>I fully agree with Torsten, you are looking at the same disk.  Don't confuse multiple paths to the same disk.&lt;BR /&gt;&lt;BR /&gt;However, in spite of the considerable respect I have for Torsten, I must disagree with his view on mirroring.&lt;BR /&gt;Mirroring is usually a management decision.  The data may be best served by mirroring.  I do agree that the array itself becomes a single point of failure, but most disk issues are failures of the disk or of the power or network pieces to the array.&lt;BR /&gt;The one thing I notice when folks mirror is that they do not watch where they are mirroring to!  I have seen folks take a disk, which is really just a logical disk, and mirror it to another disk that is actually a logical disk sitting on the exact same physical disk, going down the same controller.&lt;BR /&gt;So...if you are going to mirror - know your array.  First check that the disk needs to be mirrored.  If you have RAID 0+1 (a two-way mirror), then you don't need to mirror the disk.  It was done at the array level.&lt;BR /&gt;But if you do need to mirror the disk - then pick your mirror disk properly!&lt;BR /&gt;&lt;BR /&gt;Just my 2 cents,&lt;BR /&gt;Rita</description>
      <pubDate>Tue, 04 Jan 2011 15:27:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733366#M386645</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2011-01-04T15:27:20Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733367#M386646</link>
      <description>&amp;gt;&amp;gt; So...if you are going to mirror - know your array.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I fully agree.&lt;BR /&gt;That's why I asked for the array model...&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 04 Jan 2011 20:23:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733367#M386646</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-01-04T20:23:09Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733368#M386647</link>
      <description>and that's why you sir have the lettuce headband with a star!!&lt;BR /&gt;&lt;BR /&gt;:)</description>
      <pubDate>Tue, 04 Jan 2011 20:32:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733368#M386647</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2011-01-04T20:32:55Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733369#M386648</link>
      <description>;-)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Just to add - I know most of the HP arrays (beginning with AUTORAID, VA, EVA, MSA..., P2000, XP48 to 24k, up to 3PAR and P9500 in the future); IMHO for (almost) all of them it makes no sense to mirror a LUN from the OS onto the same array...</description>
      <pubDate>Tue, 04 Jan 2011 20:55:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733369#M386648</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-01-04T20:55:22Z</dc:date>
    </item>
    <item>
      <title>Re: Urgent: Failure possibly caused by PVG-Strict or Distributed allocation policies when I add mirror</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733370#M386649</link>
      <description>Hi All,&lt;BR /&gt;the disk array is an HSV210, and I will try to solve it. Thanks!&lt;BR /&gt;# ioscan -fnC disk&lt;BR /&gt;Class     I  H/W Path        Driver   S/W State   H/W Type     Description&lt;BR /&gt;===========================================================================&lt;BR /&gt;disk      0  0/0/2/0.0.0.0   sdisk    CLAIMED     DEVICE       TEAC    DV-28E-N&lt;BR /&gt;                            /dev/dsk/c0t0d0   /dev/rdsk/c0t0d0&lt;BR /&gt;disk      1  0/1/1/1.2.0     sdisk    CLAIMED     DEVICE       COMPAQ  BD07289BB8&lt;BR /&gt;                            /dev/dsk/c3t2d0   /dev/rdsk/c3t2d0&lt;BR /&gt;disk    116  0/4/1/0.4.12.0.0.0.1      sdisk    CLAIMED     DEVICE       HP      HSV210</description>
      <pubDate>Wed, 05 Jan 2011 01:26:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/urgent-failure-possibly-caused-by-pvg-strict-or-distributed/m-p/4733370#M386649</guid>
      <dc:creator>study unix</dc:creator>
      <dc:date>2011-01-05T01:26:59Z</dc:date>
    </item>
  </channel>
</rss>

