<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: system hang when more than half vg00's disk are offline ? in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499066#M555986</link>
    <description>Let's take a closer look. Please post (with the cables connected):&lt;BR /&gt;&lt;BR /&gt;# lvlnboot -v&lt;BR /&gt;&lt;BR /&gt;# vgdisplay -v&lt;BR /&gt;&lt;BR /&gt;# strings /etc/lvmtab&lt;BR /&gt;&lt;BR /&gt;# ioscan -fn</description>
    <pubDate>Thu, 17 Sep 2009 08:16:12 GMT</pubDate>
    <dc:creator>Torsten.</dc:creator>
    <dc:date>2009-09-17T08:16:12Z</dc:date>
    <item>
      <title>system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499057#M555977</link>
      <description>My vg00 consists of 5 PVs: two are 35G local disks, and the other three are disks from the SAN. The mirror copies of the logical volumes on the two local disks reside on the three SAN disks. Quorum for vg00 has been turned off via 'vgchange -a y -q n /dev/vg00', and the mirror status of all volumes is syncd. But when I pull out the fibre cable and the 3 SAN disks go offline, the system hangs (for example, the 'ls' command never returns).&lt;BR /&gt;&lt;BR /&gt;I found this comment in the HP-UX System Administrator's Guide: Logical Volume Management:&lt;BR /&gt;   For the volume group to remain fully operational, at least half the disks must remain present and available.&lt;BR /&gt;&lt;BR /&gt;My questions are:&lt;BR /&gt;  1. Is a system hang normal when more than half the disks are absent? I expected a gentler way to warn me about the lost quorum, such as sending a mail to root ...&lt;BR /&gt;  2. Can this behavior be turned off?&lt;BR /&gt;</description>
      <pubDate>Thu, 17 Sep 2009 03:15:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499057#M555977</guid>
      <dc:creator>vaughan_1</dc:creator>
      <dc:date>2009-09-17T03:15:04Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499058#M555978</link>
      <description>Check whether all LVOLs are really mirrored. Sometimes people assume an unmirrored swap is good enough ... which is wrong.&lt;BR /&gt;&lt;BR /&gt;# lvdisplay -v /dev/vg00/lvol...&lt;BR /&gt;&lt;BR /&gt;and take a look at the mirror copies count.</description>
      <pubDate>Thu, 17 Sep 2009 05:19:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499058#M555978</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2009-09-17T05:19:52Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499059#M555979</link>
      <description>All lvols of vg00 are mirrored, including lvol2 (swap).&lt;BR /&gt;Each lvol of vg00 has one mirror copy on the SAN disks.&lt;BR /&gt;There are 2 local disks and 3 SAN disks.&lt;BR /&gt;When I pulled out both local disks (the primary copies of the lvols), the system switched to the mirror copies on the SAN disks and responded normally within &amp;lt;60s.&lt;BR /&gt;But in contrast, when I pull out the fibre cable (all 3 SAN disks offline), the system hangs forever ... unless I plug the fibre back in.</description>
      <pubDate>Thu, 17 Sep 2009 05:43:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499059#M555979</guid>
      <dc:creator>vaughan_1</dc:creator>
      <dc:date>2009-09-17T05:43:54Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499060#M555980</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;There are two types of quorum:&lt;BR /&gt;&lt;BR /&gt; Activation quorum&lt;BR /&gt; Running quorum&lt;BR /&gt;&lt;BR /&gt;a) Activation quorum applies when the VG is activated, and requires that at least 50% of the disks that were in the VG at the end of its last activation are present.&lt;BR /&gt;&lt;BR /&gt;The important part is that it is based on how many PVs were left in the VG at the end of the activation. If you have 5 PVs in the VG, and one of them fails, then the requirement to reactivate is three of the remaining PVs.&lt;BR /&gt;&lt;BR /&gt;Activation quorum can be overridden from the command line (the "-q n" flag for vgchange(1M)).&lt;BR /&gt;&lt;BR /&gt;b) Running quorum defines what happens when a PV fails in the activated VG, and requires that 50% or more of the PVs in the VG remain available at all times.&lt;BR /&gt;&lt;BR /&gt;You cannot override running quorum.&lt;BR /&gt;&lt;BR /&gt;The "-q n" option to vgchange(1M) only applies to activation quorum.&lt;BR /&gt;&lt;BR /&gt;You cannot drop to less than 50% of the active PVs in a VG in one step - that is a running quorum failure and cannot be overridden! In your case, losing the 3 SAN disks at once leaves only 2 of the 5 PVs (under 50%), so running quorum is lost.&lt;BR /&gt;&lt;BR /&gt;Make sure the VG never loses more than 50% of its PVs in a single failure.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;&lt;BR /&gt;VK2COT</description>
      <pubDate>Thu, 17 Sep 2009 05:47:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499060#M555980</guid>
      <dc:creator>VK2COT</dc:creator>
      <dc:date>2009-09-17T05:47:11Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499061#M555981</link>
      <description>The system should continue to run.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I assume you have multiple paths to the SAN disks. Are all of them configured? If there are multiple paths, did you pull all the cables? Are there other VGs on SAN only? Maybe they are hanging the system.&lt;BR /&gt;&lt;BR /&gt;Please post some more configuration details (ioscan -fn, strings /etc/lvmtab, vgdisplay -v).</description>
      <pubDate>Thu, 17 Sep 2009 06:03:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499061#M555981</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2009-09-17T06:03:16Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499062#M555982</link>
      <description>I understand. I should do my best to avoid such a terrible single point of failure then...&lt;BR /&gt;Thank you guys, especially VK2COT for the awesome explanation of quorum.</description>
      <pubDate>Thu, 17 Sep 2009 06:12:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499062#M555982</guid>
      <dc:creator>vaughan_1</dc:creator>
      <dc:date>2009-09-17T06:12:39Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499063#M555983</link>
      <description>I have never heard of this "running quorum". In a typical configuration, if you have 2 mirrored disks and lose one of them, you have exactly 50%, but not *more* than 50%, hence a quorum problem! Same as the 2 disks vs. 3 disks situation.&lt;BR /&gt;&lt;BR /&gt;Will a system hang if you lose 1 out of 2 disks? No, it should continue to run!</description>
      <pubDate>Thu, 17 Sep 2009 06:22:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499063#M555983</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2009-09-17T06:22:04Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499064#M555984</link>
      <description>to Torsten, &lt;BR /&gt;  Yes! The system is running, but abnormally. The symptom is that any command like 'ls' never exits, and an ssh login hangs after I enter the password. Although it returns to normal once the fibre is plugged in again, that abnormal state makes the system totally unavailable to me.&lt;BR /&gt;  My HBA is an A6795A, a single-port FC card, with only those 3 disks in the SAN. No multipath software is installed.</description>
      <pubDate>Thu, 17 Sep 2009 06:29:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499064#M555984</guid>
      <dc:creator>vaughan_1</dc:creator>
      <dc:date>2009-09-17T06:29:59Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499065#M555985</link>
      <description>to Torsten,&lt;BR /&gt;  I lost all 3 SAN disks while the 2 local disks were still connected to the system. Is this the same as the 2 disks vs. 3 disks situation you mentioned?&lt;BR /&gt;  But like you, I am still puzzled by the unbelievable state the system got into, which is far more than a warning to the administrator.</description>
      <pubDate>Thu, 17 Sep 2009 06:39:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499065#M555985</guid>
      <dc:creator>vaughan_1</dc:creator>
      <dc:date>2009-09-17T06:39:22Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499066#M555986</link>
      <description>Let's take a closer look. Please post (with the cables connected):&lt;BR /&gt;&lt;BR /&gt;# lvlnboot -v&lt;BR /&gt;&lt;BR /&gt;# vgdisplay -v&lt;BR /&gt;&lt;BR /&gt;# strings /etc/lvmtab&lt;BR /&gt;&lt;BR /&gt;# ioscan -fn</description>
      <pubDate>Thu, 17 Sep 2009 08:16:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499066#M555986</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2009-09-17T08:16:12Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499067#M555987</link>
      <description>The environment with the 3 SAN disks is not at my company any more. Here I created a simple one with one local disk and 2 SAN disks. It behaves the same when the cable is pulled out (I just did it now).&lt;BR /&gt;&lt;BR /&gt;bash-3.2# ls&lt;BR /&gt;.ICEauthority  2              dev            inq.hpux64     stand&lt;BR /&gt;.TTauthority   SD_CDROM       etc            lib            t.sh&lt;BR /&gt;.Xauthority    a              feedback.tar   loop.pl        tmp&lt;BR /&gt;.bash_history  banner         getinfo        lost+found     tmp_mnt&lt;BR /&gt;.dt            bin            getinfo.txt    mapfile        usr&lt;BR /&gt;.dtprofile     cma_dump.log   head           mnt            var&lt;BR /&gt;.gpmhp-hpa500  collect        home           net            vaughan.bak&lt;BR /&gt;.profile       collect.tar    infile         opt            vgwrite1.map&lt;BR /&gt;.sh_history    core           info1.txt      rhead          vgwrite1.out&lt;BR /&gt;.sw            cp.txt         inq.hpux1100   sbin&lt;BR /&gt;bash-3.2# ls&lt;BR /&gt;asdfadsf&lt;BR /&gt;sdfadsfdsaf&lt;BR /&gt;&lt;BR /&gt;(the system hung after I pulled the cable out)&lt;BR /&gt;&lt;BR /&gt;bash-3.2# asdfadsf&lt;BR /&gt;bash: asdfadsf: command not found&lt;BR /&gt;bash-3.2# sdfadsfdsaf&lt;BR /&gt;bash: sdfadsfdsaf: command not found&lt;BR /&gt;bash-3.2# sdf&lt;BR /&gt;&lt;BR /&gt;(the system returned to normal after I plugged the cable back in)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;bash-3.2# lvlnboot -v&lt;BR /&gt;Boot Definitions for Volume Group /dev/vg00:&lt;BR /&gt;Physical Volumes belonging in Root Volume Group:&lt;BR /&gt; /dev/dsk/c3t15d0 (0/0/2/1.15.0) -- Boot Disk&lt;BR /&gt; /dev/dsk/c40t0d0 (0/2/0/0.1.12.255.0.0.0) -- Boot Disk&lt;BR /&gt; /dev/dsk/c40t0d1 (0/2/0/0.1.12.255.0.0.1)&lt;BR /&gt;Boot: lvol1 on:  /dev/dsk/c3t15d0&lt;BR /&gt;   /dev/dsk/c40t0d0&lt;BR /&gt;Root: lvol3 on:  /dev/dsk/c3t15d0&lt;BR /&gt;   /dev/dsk/c40t0d0&lt;BR /&gt;Swap: lvol2 on:  /dev/dsk/c3t15d0&lt;BR /&gt;   
/dev/dsk/c40t0d0&lt;BR /&gt;Dump: lvol2 on:  /dev/dsk/c3t15d0, 0&lt;BR /&gt;&lt;BR /&gt;--- Volume groups ---&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;VG Write Access             read/write&lt;BR /&gt;VG Status                   available&lt;BR /&gt;Max LV                      255&lt;BR /&gt;Cur LV                      8&lt;BR /&gt;Open LV                     10&lt;BR /&gt;Max PV                      16&lt;BR /&gt;Cur PV                      3&lt;BR /&gt;Act PV                      3&lt;BR /&gt;Max PE per PV               4384&lt;BR /&gt;VGDA                        6&lt;BR /&gt;PE Size (Mbytes)            8&lt;BR /&gt;Total PE                    9876&lt;BR /&gt;Alloc PE                    4164&lt;BR /&gt;Free PE                     5712&lt;BR /&gt;Total PVG                   0&lt;BR /&gt;Total Spare PVs             0&lt;BR /&gt;Total Spare PVs in use      0&lt;BR /&gt;&lt;BR /&gt;   --- Logical volumes ---&lt;BR /&gt;   LV Name                     /dev/vg00/lvol1&lt;BR /&gt;   LV Status                   available/syncd&lt;BR /&gt;   LV Size (Mbytes)            112&lt;BR /&gt;   Current LE                  14&lt;BR /&gt;   Allocated PE                28&lt;BR /&gt;   Used PV                     2&lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/lvol2&lt;BR /&gt;   LV Status                   available/syncd&lt;BR /&gt;   LV Size (Mbytes)            256&lt;BR /&gt;   Current LE                  32&lt;BR /&gt;   Allocated PE                64&lt;BR /&gt;   Used PV                     2&lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/lvol3&lt;BR /&gt;   LV Status                   available/syncd&lt;BR /&gt;   LV Size (Mbytes)            144&lt;BR /&gt;   Current LE                  18&lt;BR /&gt;   Allocated PE                36&lt;BR /&gt;&lt;BR /&gt;   LV Name   /dev/vg00/lvol4&lt;BR /&gt;   LV Status   available/syncd&lt;BR /&gt;   LV Size (Mbytes)  2048&lt;BR /&gt;   Current LE   256&lt;BR /&gt;   Allocated PE   
512&lt;BR /&gt;   Used PV   2&lt;BR /&gt;&lt;BR /&gt;   LV Name   /dev/vg00/lvol5&lt;BR /&gt;   LV Status   available/syncd&lt;BR /&gt;   LV Size (Mbytes)  24&lt;BR /&gt;   Current LE   3&lt;BR /&gt;   Allocated PE   6&lt;BR /&gt;   Used PV   2&lt;BR /&gt;&lt;BR /&gt;   LV Name   /dev/vg00/lvol6&lt;BR /&gt;   LV Status   available/syncd&lt;BR /&gt;   LV Size (Mbytes)  11000&lt;BR /&gt;   Current LE   1375&lt;BR /&gt;   Allocated PE   2750&lt;BR /&gt;   Used PV   2&lt;BR /&gt;&lt;BR /&gt;   LV Name   /dev/vg00/lvol9dup&lt;BR /&gt;   LV Status   available/syncd&lt;BR /&gt;   LV Size (Mbytes)  2048&lt;BR /&gt;   Current LE   256&lt;BR /&gt;   Allocated PE   512&lt;BR /&gt;   Used PV   2&lt;BR /&gt;&lt;BR /&gt;   LV Name   /dev/vg00/lvol8&lt;BR /&gt;   LV Status   available/syncd&lt;BR /&gt;   LV Size (Mbytes)  1024&lt;BR /&gt;   Current LE   128&lt;BR /&gt;   Allocated PE   256&lt;BR /&gt;   Used PV   2&lt;BR /&gt;&lt;BR /&gt;   LV Name   /dev/vg00/lvol7new&lt;BR /&gt;   LV Status   available/syncd&lt;BR /&gt;   LV Size (Mbytes)  2048&lt;BR /&gt;   Current LE   256&lt;BR /&gt;   Allocated PE   512&lt;BR /&gt;   Used PV                     2&lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/lvol9&lt;BR /&gt;   LV Status                   available/syncd&lt;BR /&gt;   LV Size (Mbytes)            2048&lt;BR /&gt;   Current LE                  256&lt;BR /&gt;   Allocated PE                512&lt;BR /&gt;   Used PV                     2&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volumes ---&lt;BR /&gt;   PV Name                     /dev/dsk/c3t15d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    4374&lt;BR /&gt;   Free PE                     2292&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c40t0d0&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    
3583&lt;BR /&gt;   Free PE                     2876&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c40t0d1&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    1919&lt;BR /&gt;   Free PE                     544&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;bash-3.2# strings /etc/lvmtab&lt;BR /&gt;/dev/vg00&lt;BR /&gt;/dev/dsk/c3t15d0&lt;BR /&gt;/dev/dsk/c40t0d0&lt;BR /&gt;/dev/dsk/c40t0d1&lt;BR /&gt;&lt;BR /&gt;bash-3.2# ioscan -fn&lt;BR /&gt;class       I  H/W Path      Driver    S/W State   H/W Type     Description&lt;BR /&gt;============================================================================&lt;BR /&gt;root        0                root      CLAIMED     BUS_NEXUS&lt;BR /&gt;ioa         0  0             sba       CLAIMED     BUS_NEXUS    System Bus Adapter (582)&lt;BR /&gt;ba          0  0/0           lba       CLAIMED     BUS_NEXUS    Local PCI Bus Adapter (782)&lt;BR /&gt;lan         0  0/0/0/0       btlan     CLAIMED     INTERFACE    HP PCI 10/100Base-TX Core&lt;BR /&gt;                            /dev/diag/lan0  /dev/ether0     /dev/lan0&lt;BR /&gt;ext_bus     0  0/0/1/0       c720      CLAIMED     INTERFACE    SCSI C896 Ultra Wide LVD&lt;BR /&gt;target      0  0/0/1/0.7     tgt       CLAIMED     DEVICE&lt;BR /&gt;ctl         0  0/0/1/0.7.0   sctl      CLAIMED     DEVICE       Initiator&lt;BR /&gt;                            /dev/rscsi/c0t7d0&lt;BR /&gt;ext_bus     1  0/0/1/1       c720      CLAIMED     INTERFACE    SCSI C896 Ultra Wide Single-Ended&lt;BR /&gt;target      1  0/0/1/1.7     tgt       CLAIMED     DEVICE&lt;BR /&gt;ctl         1  0/0/1/1.7.0   sctl      CLAIMED     DEVICE       Initiator&lt;BR /&gt;                            /dev/rscsi/c1t7d0&lt;BR /&gt;ext_bus     2  0/0/2/0       c720      CLAIMED     INTERFACE    SCSI C87x 
Fast Wide Single-Ended&lt;BR /&gt;target      2  0/0/2/0.7     tgt       CLAIMED     DEVICE&lt;BR /&gt;ctl         2  0/0/2/0.7.0   sctl      CLAIMED     DEVICE       Initiator&lt;BR /&gt;                            /dev/rscsi/c2t7d0&lt;BR /&gt;ext_bus     3  0/0/2/1       c720      CLAIMED     INTERFACE    SCSI C87x Ultra Wide Single-Ended&lt;BR /&gt;target      3  0/0/2/1.7     tgt       CLAIMED     DEVICE&lt;BR /&gt;ctl         3  0/0/2/1.7.0   sctl      CLAIMED     DEVICE       Initiator&lt;BR /&gt;                            /dev/rscsi/c3t7d0&lt;BR /&gt;target      4  0/0/2/1.15    tgt       CLAIMED     DEVICE&lt;BR /&gt;disk        1  0/0/2/1.15.0  sdisk     CLAIMED     DEVICE       SEAGATE ST336706LC&lt;BR /&gt;                            /dev/dsk/c3t15d0   /dev/rdsk/c3t15d0&lt;BR /&gt;tty         0  0/0/4/0       asio0     CLAIMED     INTERFACE    PCI Serial (103c1048)&lt;BR /&gt;                            /dev/GSPdiag1   /dev/mux0       /dev/tty0p1&lt;BR /&gt;                            /dev/diag/mux0  /dev/tty0p0     /dev/tty0p2&lt;BR /&gt;tty         1  0/0/5/0       asio0     CLAIMED     INTERFACE    PCI Serial (103c1048)&lt;BR /&gt;                            /dev/GSPdiag2   /dev/mux1&lt;BR /&gt;                            /dev/diag/mux1  /dev/tty1p1&lt;BR /&gt;ba          1  0/2           lba       CLAIMED     BUS_NEXUS    Local PCI Bus Adapter (782)&lt;BR /&gt;fc          0  0/2/0/0       td        CLAIMED     INTERFACE    HP Tachyon XL2 Fibre Channel Mass Storage Adapter&lt;BR /&gt;                            /dev/td0&lt;BR /&gt;fcp         0  0/2/0/0.1     fcp       CLAIMED     INTERFACE    FCP Domain&lt;BR /&gt;ext_bus    40  0/2/0/0.1.12.255.0      fcpdev    CLAIMED     INTERFACE    FCP Device Interface&lt;BR /&gt;target      5  0/2/0/0.1.12.255.0.0    tgt       CLAIMED     DEVICE&lt;BR /&gt;disk       59  0/2/0/0.1.12.255.0.0.0  sdisk     CLAIMED     DEVICE       ODYSYS  UWS_DISK&lt;BR /&gt;                            /dev/dsk/c40t0d0   
/dev/rdsk/c40t0d0&lt;BR /&gt;disk       60  0/2/0/0.1.12.255.0.0.1  sdisk     CLAIMED     DEVICE       ODYSYS  UWS_DISK&lt;BR /&gt;                            /dev/dsk/c40t0d1   /dev/rdsk/c40t0d1&lt;BR /&gt;ba          2  0/4           lba       CLAIMED     BUS_NEXUS    Local PCI Bus Adapter (782)&lt;BR /&gt;ba          3  0/6           lba       CLAIMED     BUS_NEXUS    Local PCI Bus Adapter (782)&lt;BR /&gt;memory      0  8             memory    CLAIMED     MEMORY       Memory&lt;BR /&gt;processor   0  160           processor CLAIMED     PROCESSOR    Processor&lt;BR /&gt;iscsi       0  255/0         iscsi     CLAIMED     VIRTBUS      iSCSI Virtual Node&lt;BR /&gt;&lt;BR /&gt;bash-3.2# bdf&lt;BR /&gt;Filesystem          kbytes    used   avail %used Mounted on&lt;BR /&gt;/dev/vg00/lvol3     147456   78729   64464   55% /&lt;BR /&gt;/dev/vg00/lvol1     111637   34689   65784   35% /stand&lt;BR /&gt;/dev/vg00/lvol8    1048576  920831  120115   88% /var&lt;BR /&gt;/dev/vg00/lvol7new 2097152 1632643  435537   79% /usr&lt;BR /&gt;/dev/vg00/lvol4    2097152  205733 1773802   10% /tmp&lt;BR /&gt;/dev/vg00/lvol6    11264000 8861454 2333328   79% /opt&lt;BR /&gt;/dev/vg00/lvol5      24576    9110   14561   38% /home&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 17 Sep 2009 09:01:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499067#M555987</guid>
      <dc:creator>vaughan_1</dc:creator>
      <dc:date>2009-09-17T09:01:12Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499068#M555988</link>
      <description>The same number of used PEs on the local disks and on the SAN.&lt;BR /&gt;&lt;BR /&gt;BTW, what kind of array is it?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;To check whether all mirrors are really on both local and remote disks, consider running&lt;BR /&gt;&lt;BR /&gt;# lvdisplay -v /dev/vg00/lvol1|head -n 20 &lt;BR /&gt;&lt;BR /&gt;for all lvols.</description>
      <pubDate>Thu, 17 Sep 2009 09:49:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499068#M555988</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2009-09-17T09:49:21Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499069#M555989</link>
      <description>Yes, the same number of used PEs; the primary data copy resides on /dev/dsk/c3t15d0 (local) and one mirror copy on /dev/dsk/c40t0d0 (SAN disk 1) and /dev/dsk/c40t0d1 (SAN disk 2) (not striped).&lt;BR /&gt;The SAN disks are provided by an FC target server developed by my company.&lt;BR /&gt;&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vg00/lvol1&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;LV Permission               read/write   &lt;BR /&gt;LV Status                   available/syncd           &lt;BR /&gt;Mirror copies               1            &lt;BR /&gt;Consistency Recovery        MWC                 &lt;BR /&gt;Schedule                    parallel     &lt;BR /&gt;LV Size (Mbytes)            112             &lt;BR /&gt;Current LE                  14        &lt;BR /&gt;Allocated PE                28          &lt;BR /&gt;Stripes                     0       &lt;BR /&gt;Stripe Size (Kbytes)        0                   &lt;BR /&gt;Bad block                   off          &lt;BR /&gt;Allocation                  strict/contiguous         &lt;BR /&gt;IO Timeout (Seconds)        default             &lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV  &lt;BR /&gt;   /dev/dsk/c3t15d0   14        14        &lt;BR /&gt;   /dev/dsk/c40t0d0   14        14        &lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vg00/lvol2&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;LV Permission               read/write   &lt;BR /&gt;LV Status                   available/syncd           &lt;BR /&gt;Mirror copies               1            &lt;BR /&gt;Consistency Recovery        NONE                &lt;BR /&gt;Schedule                    parallel     &lt;BR /&gt;LV Size (Mbytes)            256             &lt;BR /&gt;Current LE                  32        &lt;BR /&gt;Allocated PE    
            64          &lt;BR /&gt;Stripes                     0       &lt;BR /&gt;Stripe Size (Kbytes)        0                   &lt;BR /&gt;Bad block                   off          &lt;BR /&gt;Allocation                  strict/contiguous         &lt;BR /&gt;IO Timeout (Seconds)        default             &lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV  &lt;BR /&gt;   /dev/dsk/c3t15d0   32        32        &lt;BR /&gt;   /dev/dsk/c40t0d0   32        32        &lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vg00/lvol3&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;LV Permission               read/write   &lt;BR /&gt;LV Status                   available/syncd           &lt;BR /&gt;Mirror copies               1            &lt;BR /&gt;Consistency Recovery        MWC                 &lt;BR /&gt;Schedule                    parallel     &lt;BR /&gt;LV Size (Mbytes)            144             &lt;BR /&gt;Current LE                  18        &lt;BR /&gt;Allocated PE                36          &lt;BR /&gt;Stripes                     0       &lt;BR /&gt;Stripe Size (Kbytes)        0                   &lt;BR /&gt;Bad block                   off          &lt;BR /&gt;Allocation                  strict/contiguous         &lt;BR /&gt;IO Timeout (Seconds)        default             &lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV  &lt;BR /&gt;   /dev/dsk/c3t15d0   18        18        &lt;BR /&gt;   /dev/dsk/c40t0d0   18        18        &lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vg00/lvol4&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;LV Permission               read/write   &lt;BR /&gt;LV Status                   available/syncd           &lt;BR 
/&gt;Mirror copies               1            &lt;BR /&gt;Consistency Recovery        MWC                 &lt;BR /&gt;Schedule                    parallel     &lt;BR /&gt;LV Size (Mbytes)            2048            &lt;BR /&gt;Current LE                  256       &lt;BR /&gt;Allocated PE                512         &lt;BR /&gt;Stripes                     0       &lt;BR /&gt;Stripe Size (Kbytes)        0                   &lt;BR /&gt;Bad block                   on           &lt;BR /&gt;Allocation                  strict                    &lt;BR /&gt;IO Timeout (Seconds)        default             &lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV  &lt;BR /&gt;   /dev/dsk/c3t15d0   256       256       &lt;BR /&gt;   /dev/dsk/c40t0d0   256       256       &lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vg00/lvol5&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;LV Permission               read/write   &lt;BR /&gt;LV Status                   available/syncd           &lt;BR /&gt;Mirror copies               1            &lt;BR /&gt;Consistency Recovery        MWC                 &lt;BR /&gt;Schedule                    parallel     &lt;BR /&gt;LV Size (Mbytes)            24              &lt;BR /&gt;Current LE                  3         &lt;BR /&gt;Allocated PE                6           &lt;BR /&gt;Stripes                     0       &lt;BR /&gt;Stripe Size (Kbytes)        0                   &lt;BR /&gt;Bad block                   on           &lt;BR /&gt;Allocation                  strict                    &lt;BR /&gt;IO Timeout (Seconds)        default             &lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV  &lt;BR /&gt;   /dev/dsk/c3t15d0   3         3         &lt;BR /&gt;   /dev/dsk/c40t0d0   3         3         &lt;BR /&gt;&lt;BR 
/&gt;   --- Logical extents ---&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vg00/lvol6&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;LV Permission               read/write   &lt;BR /&gt;LV Status                   available/syncd           &lt;BR /&gt;Mirror copies               1            &lt;BR /&gt;Consistency Recovery        MWC                 &lt;BR /&gt;Schedule                    parallel     &lt;BR /&gt;LV Size (Mbytes)            11000           &lt;BR /&gt;Current LE                  1375      &lt;BR /&gt;Allocated PE                2750        &lt;BR /&gt;Stripes                     0       &lt;BR /&gt;Stripe Size (Kbytes)        0                   &lt;BR /&gt;Bad block                   on           &lt;BR /&gt;Allocation                  strict                    &lt;BR /&gt;IO Timeout (Seconds)        default             &lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV  &lt;BR /&gt;   /dev/dsk/c3t15d0   1375      1375      &lt;BR /&gt;   /dev/dsk/c40t0d1   1375      1375      &lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vg00/lvol7new&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;LV Permission               read/write   &lt;BR /&gt;LV Status                   available/syncd           &lt;BR /&gt;Mirror copies               1            &lt;BR /&gt;Consistency Recovery        MWC                 &lt;BR /&gt;Schedule                    parallel     &lt;BR /&gt;LV Size (Mbytes)            2048            &lt;BR /&gt;Current LE                  256       &lt;BR /&gt;Allocated PE                512         &lt;BR /&gt;Stripes                     0       &lt;BR /&gt;Stripe Size (Kbytes)        0                   &lt;BR /&gt;Bad block                   on           &lt;BR /&gt;Allocation                  strict/contiguous         &lt;BR 
/&gt;IO Timeout (Seconds)        default             &lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV  &lt;BR /&gt;   /dev/dsk/c3t15d0   256       256       &lt;BR /&gt;   /dev/dsk/c40t0d0   256       256       &lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vg00/lvol8&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;LV Permission               read/write   &lt;BR /&gt;LV Status                   available/syncd           &lt;BR /&gt;Mirror copies               1            &lt;BR /&gt;Consistency Recovery        MWC                 &lt;BR /&gt;Schedule                    parallel     &lt;BR /&gt;LV Size (Mbytes)            1024            &lt;BR /&gt;Current LE                  128       &lt;BR /&gt;Allocated PE                256         &lt;BR /&gt;Stripes                     0       &lt;BR /&gt;Stripe Size (Kbytes)        0                   &lt;BR /&gt;Bad block                   on           &lt;BR /&gt;Allocation                  strict                    &lt;BR /&gt;IO Timeout (Seconds)        default             &lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV  &lt;BR /&gt;   /dev/dsk/c3t15d0   128       128       &lt;BR /&gt;   /dev/dsk/c40t0d0   128       128       &lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vg00/lvol9&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;LV Permission               read/write   &lt;BR /&gt;LV Status                   available/syncd           &lt;BR /&gt;Mirror copies               1            &lt;BR /&gt;Consistency Recovery        MWC                 &lt;BR /&gt;Schedule                    parallel     &lt;BR /&gt;LV Size (Mbytes)            2048            &lt;BR /&gt;Current LE                  256 
      &lt;BR /&gt;Allocated PE                512         &lt;BR /&gt;Stripes                     0       &lt;BR /&gt;Stripe Size (Kbytes)        0                   &lt;BR /&gt;Bad block                   on           &lt;BR /&gt;Allocation                  strict/contiguous         &lt;BR /&gt;IO Timeout (Seconds)        default             &lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV  &lt;BR /&gt;   /dev/dsk/c3t15d0   256       256       &lt;BR /&gt;   /dev/dsk/c40t0d0   256       256       &lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vg00/lvol9dup&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;LV Permission               read/write   &lt;BR /&gt;LV Status                   available/syncd           &lt;BR /&gt;Mirror copies               1            &lt;BR /&gt;Consistency Recovery        MWC                 &lt;BR /&gt;Schedule                    parallel     &lt;BR /&gt;LV Size (Mbytes)            2048            &lt;BR /&gt;Current LE                  256       &lt;BR /&gt;Allocated PE                512         &lt;BR /&gt;Stripes                     0       &lt;BR /&gt;Stripe Size (Kbytes)        0                   &lt;BR /&gt;Bad block                   on           &lt;BR /&gt;Allocation                  strict/contiguous         &lt;BR /&gt;IO Timeout (Seconds)        default             &lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV  &lt;BR /&gt;   /dev/dsk/c3t15d0   256       256       &lt;BR /&gt;   /dev/dsk/c40t0d0   256       256       &lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;&lt;BR /&gt;bash-3.2# ls -l /dev/vg00/* |grep '^b'&lt;BR /&gt;brw-r-----   1 root       root        64 0x000001 Sep 10 21:48 /dev/vg00/lvol1&lt;BR /&gt;brw-rw-rw-   1 root       root        64 0x000002 Jun 11 19:45 
/dev/vg00/lvol2&lt;BR /&gt;brw-r-----   1 root       root        64 0x000003 Jun 11 15:49 /dev/vg00/lvol3&lt;BR /&gt;brw-r-----   1 root       root        64 0x000004 Jun 11 15:49 /dev/vg00/lvol4&lt;BR /&gt;brw-r-----   1 root       root        64 0x000005 Jun 11 15:49 /dev/vg00/lvol5&lt;BR /&gt;brw-r-----   1 root       root        64 0x000006 Jun 11 15:49 /dev/vg00/lvol6&lt;BR /&gt;brw-r-----   1 root       sys         64 0x000009 Aug 21 17:23 /dev/vg00/lvol7new&lt;BR /&gt;brw-r-----   1 root       root        64 0x000008 Jun 11 15:49 /dev/vg00/lvol8&lt;BR /&gt;brw-r-----   1 root       root        64 0x000009 Aug  5 18:40 /dev/vg00/lvol9&lt;BR /&gt;brw-rw-rw-   1 root       sys         64 0x000009 Sep  1 14:58 /dev/vg00/lvol9dup&lt;BR /&gt;</description>
      <pubDate>Fri, 18 Sep 2009 01:39:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499069#M555989</guid>
      <dc:creator>vaughan_1</dc:creator>
      <dc:date>2009-09-18T01:39:27Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499070#M555990</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;May I offer some clarification on LVM quorum, as it seems to puzzle some.&lt;BR /&gt;&lt;BR /&gt;The VG and its LVs should remain available and responsive even when fewer than 50% of the disks in the VG are present.&lt;BR /&gt;&lt;BR /&gt;The restriction is that if fewer than 50% of the disks in the volume group respond to the quorum check, no LVM configuration change can proceed!&lt;BR /&gt;&lt;BR /&gt;So, your ls(1) command should not be hung. You have some other issue, and we need to verify what it is.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;&lt;BR /&gt;VK2COT&lt;BR /&gt;</description>
      <pubDate>Fri, 18 Sep 2009 04:54:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499070#M555990</guid>
      <dc:creator>VK2COT</dc:creator>
      <dc:date>2009-09-18T04:54:13Z</dc:date>
    </item>
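To sanity-check VK2COT's point on a running system, the commands already used earlier in this thread can be combined into a quick status pass. A minimal sketch, assuming the vg00 and lvol names posted above (HP-UX 11i syntax, not runnable elsewhere):

```shell
# Activate vg00 without enforcing quorum (the same override used earlier in this thread)
vgchange -a y -q n /dev/vg00

# List each PV and whether LVM currently sees it as available or unavailable
vgdisplay -v /dev/vg00 | grep -i -e "PV Name" -e "PV Status"

# Check one mirrored LV for stale (out-of-sync) extents
lvdisplay -v /dev/vg00/lvol3 | grep -i -c stale
```

With fewer than half of the PVs reachable, reads and writes to the surviving mirror copies should still complete; only metadata changes (lvextend, vgreduce, and so on) are expected to be refused.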
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499071#M555991</link>
      <description>I'm with you, VK2COT.&lt;BR /&gt;&lt;BR /&gt;The configuration looks good to me so far.&lt;BR /&gt;&lt;BR /&gt;Some minor changes I would make: set Consistency Recovery to MWC on all LVs and always disable bad block relocation. But IMHO this is not related to the problem. Next I would check for LVM patches. Can you post a&lt;BR /&gt;&lt;BR /&gt;# swlist&lt;BR /&gt;&lt;BR /&gt;BTW, did you ever try to boot from this SAN device (without the local disks)?</description>
      <pubDate>Fri, 18 Sep 2009 06:00:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499071#M555991</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2009-09-18T06:00:15Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499072#M555992</link>
      <description>BTW, did you ever try to boot from this SAN device (without the local disks)?&lt;BR /&gt;Yes, many times, all OK.&lt;BR /&gt;&lt;BR /&gt;I have now switched my SAN disks to disks from a Mylex FC array, and the mirrors are resyncing. I will check whether the system hang happens again this time.&lt;BR /&gt;&lt;BR /&gt;Just now I added the two SAN disks to vg00 without mirroring onto them, and then pulled the cable out. The system behaved normally, with only a warning that quorum was lost. So I suppose it is something related to mirroring, not to quorum, that causes the system hang.&lt;BR /&gt;&lt;BR /&gt;bash-3.2# swlist &lt;BR /&gt;# Initializing...&lt;BR /&gt;# Contacting target "hpa500"...&lt;BR /&gt;#&lt;BR /&gt;# Target:  hpa500:/&lt;BR /&gt;#&lt;BR /&gt;&lt;BR /&gt;#&lt;BR /&gt;# Bundle(s):&lt;BR /&gt;#&lt;BR /&gt;&lt;BR /&gt;  100BaseT-01   B.11.11.01     HP-PB 100BaseT;Supptd HW=A3495A;SW=J2759BA &lt;BR /&gt;  ATM-00   K.11.11        PCI ATM;Supptd HW=A5483A/A5513A/A5515A/J3557A;SW=J3572AA/J3572BA &lt;BR /&gt;  ATM-01   K.11.11        HSC ATM;Supptd HW=J2468A/J2469A/J2499A/J3420B/J3573A;SW=J2806CA &lt;BR /&gt;  B5725AA   B.3.0.502      HP-UX Installation Utilities (Ignite-UX) &lt;BR /&gt;  B9788AA   1.3.1.01.00release Java 2 SDK 1.3 for HP-UX (700/800), PA1.1 + PA2.0 Add On &lt;BR /&gt;  BUNDLE   B.2009.08.31   Patch Bundle   &lt;BR /&gt;  CDE-English   B.11.11        English CDE Environment &lt;BR /&gt;  FDDI-00   B.11.11.01     PCI FDDI;Supptd HW=A3739A/A3739B;SW=J3626AA &lt;BR /&gt;  FibrChanl-00   B.11.11.17     PCI/HSC FibreChannel;Supptd HW=A6684A,A6685A,A5158A,A6795A &lt;BR /&gt;  GOLDAPPS11i   B.11.11.0406.5 Gold Applications Patches for HP-UX 11i v1, June 2004 &lt;BR /&gt;  GOLDBASE11i   B.11.11.0406.5 Gold Base Patches for HP-UX 11i v1, June 2004 &lt;BR /&gt;  GigEther-00   B.11.11.14     PCI/HSC GigEther;Supptd HW=A4926A/A4929A/A4924A/A4925A;SW=J1642AA &lt;BR /&gt;  HPUX11i-OE-MC   B.11.11        HP-UX Mission Critical Operating Environment Component &lt;BR /&gt;  
HPUXBase64   B.11.11        HP-UX 64-bit Base OS &lt;BR /&gt;  HPUXBaseAux   B.11.11        HP-UX Base OS Auxiliary &lt;BR /&gt;  HyprFabrc-00   B.11.11.00     PCI/HSC HyperFabric; Supptd HW=A6092A/A4921A/A4920A/A4919A;SW=B6257AA &lt;BR /&gt;  Ignite-UX-10-20  B.3.0.502      HP-UX Installation Utilities for Installing 10.20 Systems &lt;BR /&gt;  Ignite-UX-11-00  B.3.0.502      HP-UX Installation Utilities for Installing 11.00 Systems &lt;BR /&gt;  Ignite-UX-11-11  B.3.0.502      HP-UX Installation Utilities for Installing 11.11 Systems &lt;BR /&gt;  J4258BA   B.04.11        Netscape Directory Server v4 for HP-UX &lt;BR /&gt;  J4274AA   B.01.02.06     HP WebQoS Peak Packaged Edition &lt;BR /&gt;  OnlineDiag   B.11.11.00.04  HPUX 11.11 Support Tools Bundle &lt;BR /&gt;  RAID-00   B.11.11.00     PCI RAID; Supptd HW=A5856A &lt;BR /&gt;  SLP    B.11.11.1.0.1.2 Service Location Protocol components &lt;BR /&gt;  TermIO-00   B.11.11.01     PCI MUX; Supptd HW=J3592A/J35923A; SW=J3596A &lt;BR /&gt;  iSCSI-00   B.11.11.03e    HP-UX iSCSI Software Initiator &lt;BR /&gt;  perl    D.5.8.8.D      Perl Programming Language &lt;BR /&gt;#&lt;BR /&gt;# Product(s) not contained in a Bundle:&lt;BR /&gt;#&lt;BR /&gt;&lt;BR /&gt;  LPFC    B.04.21.05     Light Pulse Adapter Driver &lt;BR /&gt;  PHCO_31314   1.0            cumulative SAM patch &lt;BR /&gt;  PHCO_32181   1.0            ugm cumulative patch &lt;BR /&gt;  PHCO_33215   1.0            libpam_unix cumulative patch &lt;BR /&gt;  PHCO_33288   1.0            Device IDs, mount(1M) cumulative patch &lt;BR /&gt;  PHCO_33533   1.0            libc cumulative patch &lt;BR /&gt;  PHKL_24554   1.0            vPar enablement patch &lt;BR /&gt;  PHKL_30398   1.0            KI FSS ID and KI_rfscall &lt;BR /&gt;  PHKL_32002   1.0            physio thread performance degradation &lt;BR /&gt;  PHKL_32005   1.0            thread suspend, DaS, panic, physio &lt;BR /&gt;  PHKL_32668   1.0            vPar enablement,DLKM load panic &lt;BR /&gt;  
PHKL_33258   1.0            VxFS cumulative patch ;ml_flag race &lt;BR /&gt;  PHKL_33270   1.0            Cumulative VM patch &lt;BR /&gt;  PHKL_33363   1.0            vPars panic;Syscall cumulative;FSS;msem_lock &lt;BR /&gt;  PHNE_32477   1.0            ONC/NFS General Release/Performance Patch &lt;BR /&gt;  PHSS_30726   1.0            rp24xx 43.50 PDC Firmware Patch &lt;BR /&gt;  PHSS_30966   1.0            ld(1) and linker tools cumulative patch &lt;BR /&gt;  bash    3.2            bash           &lt;BR /&gt;  gettext   0.17           gettext        &lt;BR /&gt;  libiconv   1.12           libiconv       &lt;BR /&gt;  make    3.81           make           &lt;BR /&gt;  openssl   0.9.8k         openssl        &lt;BR /&gt;  termcap   1.3.1          termcap        &lt;BR /&gt;  wget    1.11.4         wget           &lt;BR /&gt;</description>
      <pubDate>Fri, 18 Sep 2009 06:36:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499072#M555992</guid>
      <dc:creator>vaughan_1</dc:creator>
      <dc:date>2009-09-18T06:36:04Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499073#M555993</link>
      <description>to Torsten,&lt;BR /&gt;Setting the Consistency Recovery of lvol2, which is used as primary swap, to NONE is recommended by the "HP-UX System Administrator's Guide: Logical Volume Management", and that setting only affects the boot process. So I don't think it matters here.&lt;BR /&gt;Could you tell me how to disable bad block relocation, please? I have forgotten how...</description>
      <pubDate>Fri, 18 Sep 2009 06:46:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499073#M555993</guid>
      <dc:creator>vaughan_1</dc:creator>
      <dc:date>2009-09-18T06:46:43Z</dc:date>
    </item>
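To answer the bad-block question above: on HP-UX, relocation is a per-LV policy changed with lvchange. A sketch, assuming the lvol names from this thread (the option letters are from memory of 11i and worth checking against the lvchange(1M) man page):

```shell
# Disable bad block relocation for one logical volume
lvchange -r n /dev/vg00/lvol3

# Verify the change: the "Bad block" field of lvdisplay should no longer read "on"
lvdisplay /dev/vg00/lvol3 | grep -i "Bad block"
```

For the boot, swap and dump volumes, the guide additionally expects contiguous allocation, which the strict/contiguous entries in the lvdisplay output posted earlier already show.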
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499074#M555994</link>
      <description>Test done; the same system hang happened.&lt;BR /&gt;Setup: one local disk and two SAN disks from the FC array through an FC switch. All lvols in vg00 are mirrored, with the primary copy on the local disk and the mirror copy on the SAN disks. Consistency Recovery is MWC on all LVs except lvol2 (NONE). Bad block relocation is turned off everywhere.&lt;BR /&gt;The system hung as usual after the array's cable was pulled from the FC switch, and responded normally after the cable was plugged back in.&lt;BR /&gt;&lt;BR /&gt;Some of the dmesg output is below:&lt;BR /&gt;Sep 18 17:03:41 hpa500 vmunix: DIAGNOSTIC SYSTEM WARNING:&lt;BR /&gt;Sep 18 17:03:41 hpa500 vmunix:    The diagnostic logging facility is no longer receiving excessive&lt;BR /&gt;Sep 18 17:03:41 hpa500 vmunix:    errors from the I/O subsystem.  82 I/O error entries were lost.&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix: LVM: VG 64 0x000000: Lost quorum.&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix: This may block configuration changes and I/Os. In order to reestablish quorum at least 1 of the following PVs (represented by current link) must become available:&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix: &amp;lt;31 0x280100&amp;gt; &amp;lt;31 0x280200&amp;gt; &lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix: LVM: VG 64 0x000000: PVLink 31 0x280100 Failed! The PV is not accessible.&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix: LVM: VG 64 0x000000: PVLink 31 0x280200 Failed! The PV is not accessible.&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix: &lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix: SCSI: Read error -- dev: b 31 0x280100, errno: 126, resid: 1024,&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix:  blkno: 8, sectno: 16, offset: 8192, bcount: 1024.&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix: DIAGNOSTIC SYSTEM WARNING:&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix:    The diagnostic logging facility has started receiving excessive&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix:    errors from the I/O subsystem.  
I/O error entries will be lost&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix:    until the cause of the excessive I/O logging is corrected.&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix:    If the diaglogd daemon is not active, use the Daemon Startup command&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix:    in stm to start it.&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix:    If the diaglogd daemon is active, use the logtool utility in stm&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix:    to determine which I/O subsystem is logging excessive errors.&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix: LVM: VG 64 0x000000: Reestablished quorum.&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix: LVM: VG 64 0x000000: PVLink 31 0x280100 Recovered.&lt;BR /&gt;Sep 18 17:08:48 hpa500 vmunix: LVM: VG 64 0x000000: PVLink 31 0x280200 Recovered.&lt;BR /&gt;&lt;BR /&gt;-----------&lt;BR /&gt;Is there any special patch needed on my system?</description>
      <pubDate>Fri, 18 Sep 2009 07:46:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499074#M555994</guid>
      <dc:creator>vaughan_1</dc:creator>
      <dc:date>2009-09-18T07:46:23Z</dc:date>
    </item>
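For a pull test like the one above, it may help to separate the LVM state from the diagnostic-log noise once the system is responsive again. A hedged sketch, reusing device names from this thread:

```shell
# Show only the LVM messages, without the diagnostic-subsystem warnings
dmesg | grep -i lvm

# After the cable is back, count stale extents per LV to watch the resync
for lv in /dev/vg00/lvol*; do
    echo "$lv stale extents: $(lvdisplay -v $lv | grep -i -c stale)"
done
```

If the counts drop to zero on their own, the mirrors resynced automatically after quorum was reestablished, matching the Recovered messages in the dmesg extract above.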
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499075#M555995</link>
      <description>Is it the diagnostic logging facility that causes the hang?</description>
      <pubDate>Fri, 18 Sep 2009 07:50:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499075#M555995</guid>
      <dc:creator>vaughan_1</dc:creator>
      <dc:date>2009-09-18T07:50:04Z</dc:date>
    </item>
    <item>
      <title>Re: system hang when more than half vg00's disk are offline ?</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499076#M555996</link>
      <description>The last Gold patch bundles are from June 2004, but there is an individual bundle (BUNDLE B.2009.08.31 Patch Bundle) - not sure what it is.&lt;BR /&gt;&lt;BR /&gt;I would install the latest patch bundles and Online Diagnostics now.</description>
      <pubDate>Fri, 18 Sep 2009 08:27:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/system-hang-when-more-than-half-vg00-s-disk-are-offline/m-p/4499076#M555996</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2009-09-18T08:27:40Z</dc:date>
    </item>
  </channel>
</rss>