<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: LVM: PV 0 has been returned to vg[4] in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434545#M656205</link>
    <description>This is generally indicative of a disk that has had a temporary failure or loss of power.  If you look above it in syslog, you should see an entry something like "PV0 has powerfailed . . ."  When the disk failure (temporary loss of SCSI connectivity) happened, syslogd recognized it and reported that the physical volume was no longer available in the VG.  The message you posted indicates the return of the physical volume to availability.</description>
    <pubDate>Thu, 03 Aug 2000 17:48:42 GMT</pubDate>
    <dc:creator>Alan Riggs</dc:creator>
    <dc:date>2000-08-03T17:48:42Z</dc:date>
    <item>
      <title>LVM: PV 0 has been returned to vg[4]</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434543#M656203</link>
      <description>What is the meaning of this message:&lt;BR /&gt;&lt;BR /&gt;   LVM: PV 0 has been returned to vg[4]&lt;BR /&gt;</description>
      <pubDate>Thu, 03 Aug 2000 15:44:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434543#M656203</guid>
      <dc:creator>John Lombardi</dc:creator>
      <dc:date>2000-08-03T15:44:29Z</dc:date>
    </item>
    <item>
      <title>Re: LVM: PV 0 has been returned to vg[4]</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434544#M656204</link>
      <description>This message may appear shortly after the POWERFAILED message.  Perhaps electric power was interrupted for a short period (or another short-lived fault occurred).  If the disk (physical volume, PV) becomes functional again in time, it comes back to volume group 4 (vg[4]).</description>
      <pubDate>Thu, 03 Aug 2000 17:47:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434544#M656204</guid>
      <dc:creator>Thomas Schler_1</dc:creator>
      <dc:date>2000-08-03T17:47:13Z</dc:date>
    </item>
    <item>
      <title>Re: LVM: PV 0 has been returned to vg[4]</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434545#M656205</link>
      <description>This is generally indicative of a disk that has had a temporary failure or loss of power.  If you look above it in syslog, you should see an entry something like "PV0 has powerfailed . . ."  When the disk failure (temporary loss of SCSI connectivity) happened, syslogd recognized it and reported that the physical volume was no longer available in the VG.  The message you posted indicates the return of the physical volume to availability.</description>
      <pubDate>Thu, 03 Aug 2000 17:48:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434545#M656205</guid>
      <dc:creator>Alan Riggs</dc:creator>
      <dc:date>2000-08-03T17:48:42Z</dc:date>
    </item>
    <item>
      <title>Re: LVM: PV 0 has been returned to vg[4]</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434546#M656206</link>
      <description>Actually, vg[4] references the fourth volume group configured on the system (the order recorded in /etc/lvmtab), not an /etc/fstab entry.  LVM has determined that the physical volume is now available and is redefining the device file as the primary device file for the LUN.  If you have a disk array with dual controllers and have LVM configured for alternate links, you may want to run the vgdisplay -v command on that volume group and check whether the primary and alternate paths are correct. &lt;BR /&gt;&lt;BR /&gt;If they are not, you can switch the two on the fly with vgreduce and vgextend.  Of course, you need to have confidence that your alternate links are working before doing this.&lt;BR /&gt;&lt;BR /&gt;Tony</description>
      <pubDate>Thu, 03 Aug 2000 17:52:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434546#M656206</guid>
      <dc:creator>Anthony deRito</dc:creator>
      <dc:date>2000-08-03T17:52:51Z</dc:date>
    </item>
    <item>
      <title>Re: LVM: PV 0 has been returned to vg[4]</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434547#M656207</link>
      <description>Hi,&lt;BR /&gt;Please find below the HP document which explains your problem.&lt;BR /&gt;&lt;BR /&gt;vg[#]: pvnum=# (dev_t=##) is powerfailed; connection timed out DocId: KBRC00000668   Updated: 2/29/00 6:53:39 AM &lt;BR /&gt;&lt;BR /&gt;PROBLEM&lt;BR /&gt;Errors in /var/adm/syslog/syslog.log or on console: &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;vg[#]: pvnum=# (dev_t=##) is powerfailed&lt;BR /&gt;recv: Connection timed out&lt;BR /&gt;RESOLUTION&lt;BR /&gt;First, determine the physical volume in question: &lt;BR /&gt;&lt;BR /&gt;The disk device can be determined by using the dev_t value. &lt;BR /&gt;&lt;BR /&gt;For example: &lt;BR /&gt;&lt;BR /&gt;dev_t value of 0x1c045000 is associated with c4t5d0 &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;            04 = instance number&lt;BR /&gt;             5 = SCSI address number&lt;BR /&gt;             0 = LUN number&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The errors could be the result of one or more of the following: &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If the error is accompanied by a message about pv[#] returned to vg[#], then the error can usually be attributed to a low timeout value on the disk driver. By default, this timeout is 30 seconds. &lt;BR /&gt;&lt;BR /&gt;Increase the timeout up to the maximum of 180 seconds: &lt;BR /&gt;pvchange -t 180 /dev/dsk/disk_device &lt;BR /&gt;&lt;BR /&gt;Increasing the timeout will not affect I/O performance on the disk. &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Make sure that the latest SCSI/LVM patch (and its dependencies) is installed. For s800 10.20, this patch is: &lt;BR /&gt;PHKL_16751 :LVM:JFS:PCI:SCSI:SIG_IGN:SIGCLD:LITS: &lt;BR /&gt;&lt;BR /&gt;As with all patches, please use the Patch Database at &lt;A href="http://itrc.hp.com" target="_blank"&gt;http://itrc.hp.com&lt;/A&gt; to determine the latest version. &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Check for an I/O bottleneck on the disk. 
&lt;BR /&gt;sar -d &lt;BR /&gt;&lt;BR /&gt;A high amount of traffic on a disk can cause severe performance problems and can cause requests to time out. &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If the error is NOT accompanied by a message about pv[#] returned to vg[#], then the error can usually be attributed to a hardware problem on the disk. DO NOT install patches on the system until the hardware has been diagnosed. &lt;BR /&gt;&lt;BR /&gt;Change the timeout value on the disk and watch for further errors. Contact HP Hardware Support immediately if the errors persist. &lt;BR /&gt;&lt;BR /&gt;If the powerfail messages are accompanied by lbolt errors, &lt;BR /&gt;for example: &lt;BR /&gt;&lt;BR /&gt;SCSI: Request Timeout -- lbolt: #######, dev: ###### &lt;BR /&gt;&lt;BR /&gt;check the SCSI controller connections/terminators. Make sure all connections are tight. If the errors persist, contact HP Hardware Support immediately. &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Check for an I/O bottleneck on the disk. &lt;BR /&gt;sar -d &lt;BR /&gt;&lt;BR /&gt;A high amount of traffic on a disk can cause severe performance problems and can mimic a hardware issue. &lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Aug 2000 04:44:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434547#M656207</guid>
      <dc:creator>Ramesh Donti</dc:creator>
      <dc:date>2000-08-04T04:44:04Z</dc:date>
    </item>
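The dev_t decoding described in the KB excerpt above can be sketched as a short script. This is an illustrative reconstruction only, not HP-UX source: the field positions are inferred from the single 0x1c045000 = c4t5d0 example quoted in the post (instance, SCSI target, and LUN packed into the minor number), and arithmetic is used in place of bit masks.

```python
# Decode an HP-UX sdisk dev_t into a cXtYdZ device name, following the
# layout described in the KB excerpt (sketch; field positions inferred
# from the quoted 0x1c045000 = c4t5d0 example).

def decode_dev_t(dev_t):
    minor = dev_t % 0x1000000            # low 24 bits form the minor number
    instance = minor // 0x10000          # "04" = controller instance number
    target = (minor // 0x1000) % 0x10    # "5"  = SCSI address (target)
    lun = (minor // 0x100) % 0x10        # "0"  = LUN number
    return f"c{instance}t{target}d{lun}"

print(decode_dev_t(0x1C045000))  # the KB example: c4t5d0
```

With this layout, the dev_t from a syslog powerfail line maps straight to the /dev/dsk entry to pass to pvchange or vgdisplay.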
    <item>
      <title>Re: LVM: PV 0 has been returned to vg[4]</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434548#M656208</link>
      <description>Thomas,&lt;BR /&gt;A power fail message generated by LVM means that LVM was not able to successfully complete a command within a defined period of time. This does not necessarily mean that the disk has a defect. &lt;BR /&gt;Please keep in mind that the system does not monitor the power lines to the disks!&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Aug 2000 06:51:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434548#M656208</guid>
      <dc:creator>Patrick Wessel</dc:creator>
      <dc:date>2000-08-04T06:51:09Z</dc:date>
    </item>
    <item>
      <title>Re: LVM: PV 0 has been returned to vg[4]</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434549#M656209</link>
      <description>&lt;BR /&gt;The normal cause of this message is a disk spinning down and then up again, which is the first sign that it is on its way out, i.e. it is going to die sometime in the short- to medium-term future. You're within your rights to have HP replace it, so as a preventive measure log a hardware call and get it replaced.&lt;BR /&gt;&lt;BR /&gt;If you need more info, run xstm and go to TOOLS -&amp;gt; UTILITY -&amp;gt; RUN, select LOGTOOL, then go to the raw log view and look for any recent errors. There should be some entries for the disk in question which you can then attempt to analyze, or better still, send to the HP ITRC and have them analyze it.&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Aug 2000 08:30:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434549#M656209</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2000-08-04T08:30:19Z</dc:date>
    </item>
    <item>
      <title>Re: LVM: PV 0 has been returned to vg[4]</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434550#M656210</link>
      <description>The message is generated by the Logical Volume Manager (LVM) after a disk powerfails, then recovers, but a following I/O to the disk returns another error from the driver.  The following is the sequence of events:&lt;BR /&gt;1.  I/O was sent to the disk.&lt;BR /&gt;2.  The disk didn't respond in a reasonable amount of time.&lt;BR /&gt;3.  LVM considered the disk missing due to powerfail, and mirroring allowed the system to continue running without hanging (extents marked stale, etc.).&lt;BR /&gt;4.  LVM polled the disk, and one of the polling I/Os returned an error rather than either success (disk online) or powerfail (no response), generating the message you saw.&lt;BR /&gt;5.  LVM continued to poll the disk, hoping the next poll would be more successful.  It was.&lt;BR /&gt;6.  The disk returned to the volume group, and mirrors were resynced, etc.  There is probably a message in syslog showing the disk timing out and returning.  This could have been caused by an actual power or connection loss, or by over-working the disk or bus so that the drive couldn't respond in time.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Aug 2000 08:46:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434550#M656210</guid>
      <dc:creator>CHRIS_ANORUO</dc:creator>
      <dc:date>2000-08-04T08:46:24Z</dc:date>
    </item>
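The six-step sequence in the post above amounts to a small state machine. The sketch below is a hypothetical model of it, not LVM internals: the two states, the event names, and the exact message formats are assumptions based on the syslog lines quoted in this thread.

```python
# Hypothetical model of the PV availability behavior described above:
# an I/O timeout marks the PV powerfailed; LVM then keeps polling until
# an I/O succeeds, at which point the PV is returned to the volume group.

def replay_pv_events(events, vg=4, pv=0):
    """Replay I/O events and return the syslog-style lines this model emits."""
    state = "available"
    log = []
    for ev in events:
        if state == "available" and ev == "timeout":
            state = "powerfailed"  # step 3: disk considered missing
            log.append(f"vg[{vg}]: pvnum={pv} is powerfailed")
        elif state == "powerfailed" and ev == "poll_ok":
            state = "available"    # step 6: disk rejoins, mirrors resync
            log.append(f"LVM: PV {pv} has been returned to vg[{vg}]")
        # steps 4-5: a failed poll while powerfailed just triggers re-polling
    return log
```

Replaying ["timeout", "poll_error", "poll_ok"] yields the powerfail line followed by the "returned to vg" line, matching the pairing of messages the posters describe seeing in syslog.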
    <item>
      <title>Re: LVM: PV 0 has been returned to vg[4]</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434551#M656211</link>
      <description>We have been experiencing similar problems recently during periods of very heavy I/O with Fibre Channel FC1010D disks connected to two K580s in an MC/Serviceguard environment.&lt;BR /&gt;&lt;BR /&gt;The problem was eventually traced to the Fibre Channel controller card in the K580 that wasn't experiencing the problems!&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Aug 2000 09:05:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/lvm-pv-0-has-been-returned-to-vg-4/m-p/2434551#M656211</guid>
      <dc:creator>John Palmer</dc:creator>
      <dc:date>2000-08-04T09:05:05Z</dc:date>
    </item>
  </channel>
</rss>

