<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: LVM vg[] POWERFAILED in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995548#M125301</link>
    <description>Hi!&lt;BR /&gt;&lt;BR /&gt;Also try the following to see if you get any errors:&lt;BR /&gt;&lt;BR /&gt;dd if=/dev/rdsk/cXtYdZ of=/dev/null bs=1024&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dario</description>
    <pubDate>Thu, 12 Jun 2003 11:52:46 GMT</pubDate>
    <dc:creator>Dario_1</dc:creator>
    <dc:date>2003-06-12T11:52:46Z</dc:date>
    <item>
      <title>LVM vg[] POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995541#M125294</link>
      <description>I see this message in dmesg; does anyone know what has happened?&lt;BR /&gt;&lt;BR /&gt;LVM: vg[5]: pvnum=0 (dev_t=0x1f041100) is POWERFAILED&lt;BR /&gt;LVM: Recovered Path (device 0x1f041100) to PV 0 in VG 5.&lt;BR /&gt;LVM: Restored PV 0 to VG 5.&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Jun 2003 08:46:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995541#M125294</guid>
      <dc:creator>j773303</dc:creator>
      <dc:date>2003-06-12T08:46:56Z</dc:date>
    </item>
    <item>
      <title>Re: LVM vg[] POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995542#M125295</link>
      <description>Could be a poor connection to one of your disks.&lt;BR /&gt;&lt;BR /&gt;Most likely hardware-related, anyway.&lt;BR /&gt;&lt;BR /&gt;Rgds Jarle</description>
      <pubDate>Thu, 12 Jun 2003 08:52:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995542#M125295</guid>
      <dc:creator>Jarle Bjorgeengen</dc:creator>
      <dc:date>2003-06-12T08:52:30Z</dc:date>
    </item>
    <item>
      <title>Re: LVM vg[] POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995543#M125296</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;Basically, LVM thought one of your disks/LUNs was no longer there; then it came back and LVM recovered the old status.&lt;BR /&gt;&lt;BR /&gt;If this happens frequently, check the connections to the disks / disk array. If you are using EMC or the like, you may need to change the timeout value for the LUNs with:&lt;BR /&gt;pvchange -t 180 &lt;LUN_DEVICE&gt;&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Bernhard</description>
      <pubDate>Thu, 12 Jun 2003 08:53:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995543#M125296</guid>
      <dc:creator>Bernhard Mueller</dc:creator>
      <dc:date>2003-06-12T08:53:11Z</dc:date>
    </item>
    <item>
      <title>Re: LVM vg[] POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995544#M125297</link>
      <description>Hi,&lt;BR /&gt;this means that the first disk in your fifth volume group (vg 5) had a malfunction, but it was momentary.&lt;BR /&gt;&lt;BR /&gt;To find out which volume group is the fifth: do a "strings /etc/lvmtab | grep vg"; the 5th in the list is yours.&lt;BR /&gt;&lt;BR /&gt;Also have a look at your "dmesg" output.&lt;BR /&gt;&lt;BR /&gt;   HTH,&lt;BR /&gt;    Massimo&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Jun 2003 08:53:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995544#M125297</guid>
      <dc:creator>Massimo Bianchi</dc:creator>
      <dc:date>2003-06-12T08:53:20Z</dc:date>
    </item>
    <item>
      <title>Re: LVM vg[] POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995545#M125298</link>
      <description>You have a possible pending disk failure. If it is part of a Storage Array (XP, EMC) call the vendor.&lt;BR /&gt;&lt;BR /&gt;If it is part of a SAN, investigate the status of the connection.&lt;BR /&gt;&lt;BR /&gt;Below are some useful forum entries for troubleshooting problems like this.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x80827d4cf554d611abdb0090277a778c,00.html" target="_blank"&gt;http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x80827d4cf554d611abdb0090277a778c,00.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0xc291e7613948d5118fef0090279cd0f9,00.html" target="_blank"&gt;http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0xc291e7613948d5118fef0090279cd0f9,00.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x667e7cffe265d711abdc0090277a778c,00.html" target="_blank"&gt;http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x667e7cffe265d711abdc0090277a778c,00.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Share and Enjoy! Ian</description>
      <pubDate>Thu, 12 Jun 2003 08:55:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995545#M125298</guid>
      <dc:creator>Ian Dennison_1</dc:creator>
      <dc:date>2003-06-12T08:55:28Z</dc:date>
    </item>
    <item>
      <title>Re: LVM vg[] POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995546#M125299</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;It means that the system temporarily lost sight of the disks. If these messages are in dmesg they may not necessarily be recent. It is also worth checking syslog, or EMS if it is running.&lt;BR /&gt;&lt;BR /&gt;If the disks are connected via fibre channel, it is worth checking your patch state.&lt;BR /&gt;&lt;BR /&gt;It is also worth increasing the timeout parameter for the physical volume. The default is 30s (I think), but it is OK to raise it to 180 and see if the problem goes away, e.g.:&lt;BR /&gt;&lt;BR /&gt;pvchange -t 180 /dev/dsk/c4t1d1&lt;BR /&gt;&lt;BR /&gt;If the messages persist, you may have a problem with the disk.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Keely</description>
      <pubDate>Thu, 12 Jun 2003 08:57:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995546#M125299</guid>
      <dc:creator>Keely Jackson</dc:creator>
      <dc:date>2003-06-12T08:57:52Z</dc:date>
    </item>
    <item>
      <title>Re: LVM vg[] POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995547#M125300</link>
      <description>I was advised by one of the senior hardware engineers recently that you should not go straight to a PV timeout of 180 or higher. LVM does default to 30, so you want to bump up the timeout gradually. Try 60-90 first. You want to go with the smallest increment that resolves these messages (assuming they are not associated with an actual HW failure).&lt;BR /&gt;&lt;BR /&gt;The logic behind using the smallest increment is that the timeout affects the reporting of the disks. If there is a problem, you want the quickest response back.&lt;BR /&gt;&lt;BR /&gt;In addition, you want to check for the latest SCSI &amp;amp; LVM patches, for instance SCSI patch PHKL_18543 for 11.00 (and its dependencies).&lt;BR /&gt;&lt;BR /&gt;The problem disk should be c4t1d1:&lt;BR /&gt;# ll /dev/* |grep 041100</description>
      <pubDate>Thu, 12 Jun 2003 10:26:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995547#M125300</guid>
      <dc:creator>Cheryl Griffin</dc:creator>
      <dc:date>2003-06-12T10:26:39Z</dc:date>
    </item>
    <item>
      <title>Re: LVM vg[] POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995548#M125301</link>
      <description>Hi!&lt;BR /&gt;&lt;BR /&gt;Also try the following to see if you get any errors:&lt;BR /&gt;&lt;BR /&gt;dd if=/dev/rdsk/cXtYdZ of=/dev/null bs=1024&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dario</description>
      <pubDate>Thu, 12 Jun 2003 11:52:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995548#M125301</guid>
      <dc:creator>Dario_1</dc:creator>
      <dc:date>2003-06-12T11:52:46Z</dc:date>
    </item>
    <item>
      <title>Re: LVM vg[] POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995549#M125302</link>
      <description>Hi!&lt;BR /&gt;&lt;BR /&gt;Also try the following to see if you get any errors:&lt;BR /&gt;&lt;BR /&gt;dd if=/dev/rdsk/cXtYdZ of=/dev/null bs=1024&lt;BR /&gt;&lt;BR /&gt;where cXtYdZ should be c4t1d1&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dario</description>
      <pubDate>Thu, 12 Jun 2003 11:53:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995549#M125302</guid>
      <dc:creator>Dario_1</dc:creator>
      <dc:date>2003-06-12T11:53:56Z</dc:date>
    </item>
    <item>
      <title>Re: LVM vg[] POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995550#M125303</link>
      <description>I have an open hardware case on this problem right now.&lt;BR /&gt;&lt;BR /&gt;dev_t=0x1f041100 is c4t1d1.&lt;BR /&gt;&lt;BR /&gt;vg[5] means it is the volume group whose&lt;BR /&gt;/dev/vg???/group file has a node id of 5.&lt;BR /&gt;&lt;BR /&gt;If this is hooked to an EMC array, they (EMC) will probably tell you to increase the timeout to 180, i.e.:&lt;BR /&gt;pvchange -t 180 /dev/dsk/c4t1d1&lt;BR /&gt;What is the timeout value set to now? Run:&lt;BR /&gt;pvdisplay -v /dev/dsk/c4t1d1 | more&lt;BR /&gt;&lt;BR /&gt;If you run "stm" to look at the error logs, you may be surprised to find this error is occurring more often than you think.&lt;BR /&gt;&lt;BR /&gt;Also look at /var/adm/syslog/syslog.log; in there you can see the time of the error.&lt;BR /&gt;&lt;BR /&gt;Is this happening at the same time every day? Perhaps during a huge backup?&lt;BR /&gt;&lt;BR /&gt;I can't tell you what's causing it. I wish I knew. That's why I have a call in to HP on it (and EMC).&lt;BR /&gt;&lt;BR /&gt;Steve&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Jun 2003 16:59:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995550#M125303</guid>
      <dc:creator>Steve Post</dc:creator>
      <dc:date>2003-06-12T16:59:55Z</dc:date>
    </item>
    <item>
      <title>Re: LVM vg[] POWERFAILED</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995551#M125304</link>
      <description>One more big thing: you should know that PHKL_18&lt;WHATEVER&gt; is called "the patch from hell" and the "line-in-the-sand" patch.&lt;BR /&gt;&lt;BR /&gt;You can't uninstall it.&lt;BR /&gt;You can't reinstall it.&lt;BR /&gt;If you have the patch on your system, leave it alone.&lt;BR /&gt;Search for the patch in the forums and you'll see what I mean.&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Jun 2003 17:06:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-vg-powerfailed/m-p/2995551#M125304</guid>
      <dc:creator>Steve Post</dc:creator>
      <dc:date>2003-06-12T17:06:57Z</dc:date>
    </item>
  </channel>
</rss>