<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: LVM Mirroring issues when using PV Groups in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174042#M458592</link>
    <description>Hi Again,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;&lt;BR /&gt;The concern is: if we lose an HBA or an entire disk shelf, how will this impact the mirror copies?&lt;BR /&gt;&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;It will not have an impact unless the other controller also fails, because the lvmpvg file is configured that way.&lt;BR /&gt;Assume the C5 controller fails: the C4 enclosure/disks still hold one complete copy of the data. The only caveat is that lvdisplay will still show the same layout.</description>
    <pubDate>Fri, 08 May 2009 12:45:22 GMT</pubDate>
    <dc:creator>Ganesan R</dc:creator>
    <dc:date>2009-05-08T12:45:22Z</dc:date>
    <item>
      <title>LVM Mirroring issues when using PV Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174037#M458587</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;We have a two-node Serviceguard cluster running HP-UX 11.23, with two DS2405 disk shelves of 13 disks each.&lt;BR /&gt;&lt;BR /&gt;The 26 disks on the two shelves are used in one volume group as two PVGs of 13 disks each, and these PVGs are configured as mirror sets.&lt;BR /&gt;&lt;BR /&gt;This volume group is a Serviceguard package volume group.&lt;BR /&gt;&lt;BR /&gt;The problem: the mirroring was fine on one machine. Since this is a Serviceguard cluster, I took a vgexport from node1, imported it on node2, and updated the /etc/lvmpvg file on node2.&lt;BR /&gt;&lt;BR /&gt;When I moved the package and volume group from node 1 to node 2, we found that the mirror sets contained disks from both PV groups.&lt;BR /&gt;&lt;BR /&gt;For example, the ideal condition:&lt;BR /&gt;disks 1, 2 and 3 in PVG 1 / mirror set 1&lt;BR /&gt;disks 4, 5 &amp;amp; 6 in PVG 2 / mirror set 2&lt;BR /&gt;After the package is moved to the alternate node, I have:&lt;BR /&gt;disks 1, 2 and 5 in mirror set 1, and&lt;BR /&gt;disks 3, 4 &amp;amp; 6 in mirror set 2.&lt;BR /&gt;&lt;BR /&gt;Not sure what is going on.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance for all your help.&lt;BR /&gt;Regards&lt;BR /&gt;Kaushik&lt;BR /&gt;&lt;BR /&gt;========================&lt;BR /&gt;lvdisplay -v /dev/vgwork/lvol_work | more&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vgwork/lvol_work&lt;BR /&gt;VG Name                     /dev/vgwork&lt;BR /&gt;LV Permission               read/write&lt;BR /&gt;LV Status                   available/syncd&lt;BR /&gt;Mirror copies               1&lt;BR /&gt;Consistency Recovery        MWC&lt;BR /&gt;Schedule                    parallel&lt;BR /&gt;LV Size (Mbytes)            1818624&lt;BR /&gt;Current LE                  56832&lt;BR /&gt;Allocated PE                113664&lt;BR /&gt;Stripes                     0&lt;BR /&gt;Stripe Size (Kbytes)        0&lt;BR /&gt;Bad block   
                on&lt;BR /&gt;Allocation                  PVG-strict/distributed&lt;BR /&gt;IO Timeout (Seconds)        default&lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name                 LE on PV  PE on PV&lt;BR /&gt;   /dev/dsk/c4t2d0         4372      4372&lt;BR /&gt;   /dev/dsk/c4t3d0         4372      4372&lt;BR /&gt;   /dev/dsk/c4t4d0         4372      4372&lt;BR /&gt;   /dev/dsk/c4t5d0         4372      4372&lt;BR /&gt;   /dev/dsk/c4t6d0         4372      4372&lt;BR /&gt;   /dev/dsk/c4t7d0         4372      4372&lt;BR /&gt;   /dev/dsk/c4t8d0         4372      4372&lt;BR /&gt;   /dev/dsk/c4t9d0         4372      4372&lt;BR /&gt;   /dev/dsk/c4t10d0        4372      4372&lt;BR /&gt;   /dev/dsk/c4t11d0        4371      4371&lt;BR /&gt;   /dev/dsk/c4t12d0        4371      4371&lt;BR /&gt;   /dev/dsk/c4t13d0        4371      4371&lt;BR /&gt;   /dev/dsk/c4t14d0        4371      4371&lt;BR /&gt;   /dev/dsk/c5t2d0         4372      4372&lt;BR /&gt;   /dev/dsk/c5t3d0         4372      4372&lt;BR /&gt;   /dev/dsk/c5t4d0         4372      4372&lt;BR /&gt;   /dev/dsk/c5t5d0         4372      4372&lt;BR /&gt;   /dev/dsk/c5t6d0         4372      4372&lt;BR /&gt;   /dev/dsk/c5t7d0         4372      4372&lt;BR /&gt;   /dev/dsk/c5t8d0         4372      4372&lt;BR /&gt;   /dev/dsk/c5t9d0         4372      4372&lt;BR /&gt;   /dev/dsk/c5t10d0        4372      4372&lt;BR /&gt;   /dev/dsk/c5t11d0        4371      4371&lt;BR /&gt;   /dev/dsk/c5t12d0        4371      4371&lt;BR /&gt;   /dev/dsk/c5t13d0        4371      4371&lt;BR /&gt;   /dev/dsk/c5t14d0        4371      4371&lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;   LE    PV1                     PE1   Status 1 PV2                     PE2   Status 2&lt;BR /&gt;   00000 /dev/dsk/c4t2d0         00000 current  /dev/dsk/c5t2d0         00000 current&lt;BR /&gt;   00001 /dev/dsk/c4t3d0         00000 current  /dev/dsk/c5t3d0         00000 current&lt;BR /&gt;   00002 
/dev/dsk/c4t4d0         00000 current  /dev/dsk/c5t4d0         00000 current&lt;BR /&gt;   00003 /dev/dsk/c4t5d0         00000 current  /dev/dsk/c5t5d0         00000 current&lt;BR /&gt;   00004 /dev/dsk/c4t6d0         00000 current  /dev/dsk/c5t6d0         00000 current&lt;BR /&gt;   00005 /dev/dsk/c4t7d0         00000 current  /dev/dsk/c5t7d0         00000 current&lt;BR /&gt;   00006 /dev/dsk/c4t8d0         00000 current  /dev/dsk/c5t8d0         00000 current&lt;BR /&gt;   00007 /dev/dsk/c4t9d0         00000 current  /dev/dsk/c5t9d0         00000 current&lt;BR /&gt;   00008 /dev/dsk/c4t10d0        00000 current  /dev/dsk/c5t10d0        00000 current&lt;BR /&gt;   00009 /dev/dsk/c5t11d0        00000 current  /dev/dsk/c4t11d0        00000 current&lt;BR /&gt;   00010 /dev/dsk/c5t12d0        00000 current  /dev/dsk/c4t12d0        00000 current&lt;BR /&gt;   00011 /dev/dsk/c5t13d0        00000 current  /dev/dsk/c4t13d0        00000 current&lt;BR /&gt;   00012 /dev/dsk/c5t14d0        00000 current  /dev/dsk/c4t14d0        00000 current&lt;BR /&gt;   00013 /dev/dsk/c4t2d0         00001 current  /dev/dsk/c5t2d0         00001 current&lt;BR /&gt;   00014 /dev/dsk/c4t3d0         00001 current  /dev/dsk/c5t3d0         00001 current&lt;BR /&gt;   00015 /dev/dsk/c4t4d0         00001 current  /dev/dsk/c5t4d0         00001 current&lt;BR /&gt;   00016 /dev/dsk/c4t5d0         00001 current  /dev/dsk/c5t5d0         00001 current&lt;BR /&gt;=======================================&lt;BR /&gt;&lt;BR /&gt;vgdisplay -v vgwork&lt;BR /&gt;--- Volume groups ---&lt;BR /&gt;VG Name                     /dev/vgwork&lt;BR /&gt;VG Write Access             read/write&lt;BR /&gt;VG Status                   available, exclusive&lt;BR /&gt;Max LV                      255&lt;BR /&gt;Cur LV                      1&lt;BR /&gt;Open LV                     1&lt;BR /&gt;Max PV                      64&lt;BR /&gt;Cur PV                      26&lt;BR /&gt;Act PV                      26&lt;BR /&gt;Max PE per PV  
             35003&lt;BR /&gt;VGDA                        52&lt;BR /&gt;PE Size (Mbytes)            32&lt;BR /&gt;Total PE                    113724&lt;BR /&gt;Alloc PE                    113664&lt;BR /&gt;Free PE                     60&lt;BR /&gt;Total PVG                   2&lt;BR /&gt;Total Spare PVs             0&lt;BR /&gt;Total Spare PVs in use      0&lt;BR /&gt;&lt;BR /&gt;   --- Logical volumes ---&lt;BR /&gt;   LV Name                     /dev/vgwork/lvol_work&lt;BR /&gt;   LV Status                   available/syncd&lt;BR /&gt;   LV Size (Mbytes)            1818624&lt;BR /&gt;   Current LE                  56832&lt;BR /&gt;   Allocated PE                113664&lt;BR /&gt;   Used PV                     26&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volumes ---&lt;BR /&gt;   PV Name                     /dev/dsk/c4t2d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t3d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t4d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t5d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR 
/&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t6d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t7d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t8d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t9d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t10d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t11d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     3&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t12d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   
Free PE                     3&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t13d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     3&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t14d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     3&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t2d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t3d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t4d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t5d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     
/dev/dsk/c5t6d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t7d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t8d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t9d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t10d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t11d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     3&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t12d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     3&lt;BR /&gt;   
Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t13d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     3&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t14d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     3&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volume groups ---&lt;BR /&gt;   PVG Name                    PVG2&lt;BR /&gt;   PV Name                     /dev/dsk/c4t2d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t3d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t4d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t5d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t6d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t7d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t8d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t9d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t10d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t11d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t12d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t13d0&lt;BR /&gt;   PV Name                     /dev/dsk/c4t14d0&lt;BR /&gt;&lt;BR /&gt;   PVG Name                    PVG3&lt;BR /&gt;   PV Name                     /dev/dsk/c5t2d0&lt;BR /&gt;   PV Name                     /dev/dsk/c5t3d0&lt;BR /&gt;   PV Name                     /dev/dsk/c5t4d0&lt;BR /&gt;   PV Name                     /dev/dsk/c5t5d0&lt;BR /&gt;   PV Name                     /dev/dsk/c5t6d0&lt;BR /&gt;   PV Name                     /dev/dsk/c5t7d0&lt;BR /&gt;   PV Name             
        /dev/dsk/c5t8d0&lt;BR /&gt;   PV Name                     /dev/dsk/c5t9d0&lt;BR /&gt;   PV Name                     /dev/dsk/c5t10d0&lt;BR /&gt;   PV Name                     /dev/dsk/c5t11d0&lt;BR /&gt;   PV Name                     /dev/dsk/c5t12d0&lt;BR /&gt;   PV Name                     /dev/dsk/c5t13d0&lt;BR /&gt;   PV Name                     /dev/dsk/c5t14d0&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 08 May 2009 08:37:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174037#M458587</guid>
      <dc:creator>kaushikbr</dc:creator>
      <dc:date>2009-05-08T08:37:50Z</dc:date>
    </item>
    <item>
      <title>Re: LVM Mirroring issues when using PV Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174038#M458588</link>
      <description>What were the exact vgexport/vgimport commands?&lt;BR /&gt;&lt;BR /&gt;However, I don't think this is a problem, as long as each disk has a mirror in a different chassis.</description>
      <pubDate>Fri, 08 May 2009 12:17:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174038#M458588</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2009-05-08T12:17:37Z</dc:date>
    </item>
    <item>
      <title>Re: LVM Mirroring issues when using PV Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174039#M458589</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;Thanks for your reply.&lt;BR /&gt;Commands used:&lt;BR /&gt;&lt;BR /&gt;vgexport -v -p -s -m /var/tmp/vgwork.map vgwork&lt;BR /&gt;&lt;BR /&gt;Before importing, the usual:&lt;BR /&gt;mkdir /dev/vgwork&lt;BR /&gt;mknod &lt;BR /&gt;and then&lt;BR /&gt;vgimport -v -s -m /var/tmp/vgwork.map vgwork&lt;BR /&gt;&lt;BR /&gt;The concern is: if we lose an HBA or an entire disk shelf, how will this impact the mirror copies?&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Kaushik</description>
      <pubDate>Fri, 08 May 2009 12:33:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174039#M458589</guid>
      <dc:creator>kaushikbr</dc:creator>
      <dc:date>2009-05-08T12:33:29Z</dc:date>
    </item>
    <item>
      <title>Re: LVM Mirroring issues when using PV Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174040#M458590</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;That is strange; it is supposed to show exactly the same as on the first node.&lt;BR /&gt;&lt;BR /&gt;As Torsten said, there is no problem as long as each disk has a mirror in a different chassis.&lt;BR /&gt;&lt;BR /&gt;If you still wish to see PE1 from one enclosure and PE2 from the other, you can try this:&lt;BR /&gt;&lt;BR /&gt;Reduce the mirror by explicitly specifying the C5 chassis disks, so that the LV has only PE1 from the C4 controller. After that you can extend the LV again; PE2 will then be distributed on C5.</description>
      <pubDate>Fri, 08 May 2009 12:39:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174040#M458590</guid>
      <dc:creator>Ganesan R</dc:creator>
      <dc:date>2009-05-08T12:39:46Z</dc:date>
    </item>
    <item>
      <title>Re: LVM Mirroring issues when using PV Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174041#M458591</link>
      <description>IMHO it will just continue: even though you lose mirror copies, you always keep one working disk of each pair.&lt;BR /&gt;&lt;BR /&gt;Imagine this:&lt;BR /&gt;&lt;BR /&gt;original ==&amp;gt; mirror&lt;BR /&gt;&lt;BR /&gt;failed ==&amp;gt; ok&lt;BR /&gt;ok ==&amp;gt; failed&lt;BR /&gt;ok ==&amp;gt; failed&lt;BR /&gt;...&lt;BR /&gt;&lt;BR /&gt;Your data is OK, because one of each mirrored "pair" is OK.</description>
      <pubDate>Fri, 08 May 2009 12:41:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174041#M458591</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2009-05-08T12:41:39Z</dc:date>
    </item>
    <item>
      <title>Re: LVM Mirroring issues when using PV Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174042#M458592</link>
      <description>Hi Again,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;&lt;BR /&gt;The concern is: if we lose an HBA or an entire disk shelf, how will this impact the mirror copies?&lt;BR /&gt;&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;It will not have an impact unless the other controller also fails, because the lvmpvg file is configured that way.&lt;BR /&gt;Assume the C5 controller fails: the C4 enclosure/disks still hold one complete copy of the data. The only caveat is that lvdisplay will still show the same layout.</description>
      <pubDate>Fri, 08 May 2009 12:45:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174042#M458592</guid>
      <dc:creator>Ganesan R</dc:creator>
      <dc:date>2009-05-08T12:45:22Z</dc:date>
    </item>
    <item>
      <title>Re: LVM Mirroring issues when using PV Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174043#M458593</link>
      <description>Hi Ganesan&lt;BR /&gt;&lt;BR /&gt;I tried doing that. I reduced the LV to the disks on one shelf only, did a pvmove to move the contents from the odd disks to the right ones, and mirrored it back again. Everything was OK. Since this is a Serviceguard cluster, I exported and imported the VG configuration on the alternate node, modified the /etc/lvmpvg file, and started the package on the alternate node. As soon as I failed the package over to the alternate node, everything went back to square one. Moved the package back to the primary node; no luck.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Kaushik</description>
      <pubDate>Fri, 08 May 2009 12:51:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174043#M458593</guid>
      <dc:creator>kaushikbr</dc:creator>
      <dc:date>2009-05-08T12:51:40Z</dc:date>
    </item>
    <item>
      <title>Re: LVM Mirroring issues when using PV Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174044#M458594</link>
      <description>Hi kaushikbr,&lt;BR /&gt;&lt;BR /&gt;Some time back I saw this kind of scenario, and as per HP it is not an issue at all. You may see this type of output if the LV was created once and extended later on.&lt;BR /&gt;&lt;BR /&gt;This is the exact HP explanation of this issue:&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&lt;BR /&gt;This is not a problem.  Mirroring doesn't care which disk is the primary or secondary&lt;BR /&gt;disk, or how it is listed by lvdisplay -v.  Strict allocation will not allow&lt;BR /&gt;extents to mirror to the same disk or to disks in the same PVG (physical volume&lt;BR /&gt;group).&lt;BR /&gt;&lt;BR /&gt;There are several ways to fix this:&lt;BR /&gt;&lt;BR /&gt;1.) The easiest and least impacting method:&lt;BR /&gt;&lt;BR /&gt;   a.) lvreduce -m 0 /dev/vgXX/lvolX /dev/dsk/cxtxdx (it doesn't matter&lt;BR /&gt;which disk, but be sure one of them is listed)&lt;BR /&gt;&lt;BR /&gt;   b.) lvextend -m 1 /dev/vgXX/lvolX /dev/dsk/cxtxdx (the disk that was&lt;BR /&gt;reduced)&lt;BR /&gt;&lt;BR /&gt;2.) This method would require file systems to be unmounted:&lt;BR /&gt;&lt;BR /&gt;   a.) vgchange -a n /dev/vgXX&lt;BR /&gt;&lt;BR /&gt;   b.) vgchange -a y /dev/vgXX&lt;BR /&gt;&lt;BR /&gt;3.) Reboot the system.&lt;BR /&gt;&lt;BR /&gt;NOTE: It does not matter which disk is first, because as soon as the system&lt;BR /&gt;is rebooted or the VG is deactivated and then reactivated, the PV numbers are used&lt;BR /&gt;to determine which disk will be listed first.&lt;BR /&gt;&lt;BR /&gt;See doc ULVMKBQA00000381 regarding pvnums.&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&amp;gt;&lt;BR /&gt;&lt;BR /&gt;Hope this clarifies your concern. If you have access to that document, you can read it.</description>
      <pubDate>Fri, 08 May 2009 13:14:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174044#M458594</guid>
      <dc:creator>Ganesan R</dc:creator>
      <dc:date>2009-05-08T13:14:01Z</dc:date>
    </item>
    <item>
      <title>Re: LVM Mirroring issues when using PV Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174045#M458595</link>
      <description>Hi Ganesan&lt;BR /&gt;&lt;BR /&gt;That is a good document.&lt;BR /&gt;&lt;BR /&gt;We learn something new every day.&lt;BR /&gt;&lt;BR /&gt;Thank you all for the valuable comments.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Kaushik</description>
      <pubDate>Fri, 08 May 2009 13:23:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-mirroring-issues-when-using-pv-groups/m-p/5174045#M458595</guid>
      <dc:creator>kaushikbr</dc:creator>
      <dc:date>2009-05-08T13:23:54Z</dc:date>
    </item>
  </channel>
</rss>

