<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: stale issue in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789391#M616152</link>
    <description>How do we recognize this situation?&lt;BR /&gt;===================================&lt;BR /&gt;&lt;BR /&gt;One way to identify the situation is to attempt to read the LV using dd as&lt;BR /&gt;in the following test:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;     T1) # dd if=/dev/vg00/lvtest of=/dev/null bs=256k &amp;amp;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If this command returns an I/O error, then LVM cannot provide a valid copy&lt;BR /&gt;of the data, which must then be restored from backup, if necessary (of&lt;BR /&gt;course, we would not need to restore anything if the LV is brand new or&lt;BR /&gt;used for swap).&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;How can we avoid the situation?&lt;BR /&gt;==============================&lt;BR /&gt;&lt;BR /&gt;Creating a new LV and modifying procedure A so that extension to the&lt;BR /&gt;desired size is the *last* step will circumvent the issue:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;     B1) create the LV of size 0 with no disk,&lt;BR /&gt;     B2) extend the LV by the minimum size onto disk1,&lt;BR /&gt;     B3) set up the mirror by extension to the second disk, disk2, and then&lt;BR /&gt;     B4) extend it to the desired size.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;During the extension to the desired size, LVM again creates each PE with&lt;BR /&gt;the flag "current" just as before; however, now this is done on *both*&lt;BR /&gt;disk1 and disk2. Note that procedure B avoids the time-consuming step A3 to&lt;BR /&gt;copy data from disk1 to disk2. If another mirror were added, LVM could read&lt;BR /&gt;a logical extent even in the case that one of the physical extents returned&lt;BR /&gt;a read error.</description>
    <pubDate>Wed, 17 May 2006 01:10:24 GMT</pubDate>
    <dc:creator>Mridul Shrivastava</dc:creator>
    <dc:date>2006-05-17T01:10:24Z</dc:date>
    <item>
      <title>stale issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789390#M616151</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;  My system's vg01 has 5 disks mirrored with another 5 disks. Today I found that 3 disks are stale: c6t0d0, c6t2d0, and c6t4d0, according to lvdisplay and vgdisplay.&lt;BR /&gt;  I have tried "ioscan", "diskinfo", and&lt;BR /&gt;"dd if=xx of=xx count=10000", but everything is ok.&lt;BR /&gt;  I have no idea; is this a disk issue? Thanks.</description>
      <pubDate>Wed, 17 May 2006 01:04:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789390#M616151</guid>
      <dc:creator>emily_3</dc:creator>
      <dc:date>2006-05-17T01:04:57Z</dc:date>
    </item>
    <item>
      <title>Re: stale issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789391#M616152</link>
      <description>How do we recognize this situation?&lt;BR /&gt;===================================&lt;BR /&gt;&lt;BR /&gt;One way to identify the situation is to attempt to read the LV using dd as&lt;BR /&gt;in the following test:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;     T1) # dd if=/dev/vg00/lvtest of=/dev/null bs=256k &amp;amp;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If this command returns an I/O error, then LVM cannot provide a valid copy&lt;BR /&gt;of the data, which must then be restored from backup, if necessary (of&lt;BR /&gt;course, we would not need to restore anything if the LV is brand new or&lt;BR /&gt;used for swap).&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;How can we avoid the situation?&lt;BR /&gt;==============================&lt;BR /&gt;&lt;BR /&gt;Creating a new LV and modifying procedure A so that extension to the&lt;BR /&gt;desired size is the *last* step will circumvent the issue:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;     B1) create the LV of size 0 with no disk,&lt;BR /&gt;     B2) extend the LV by the minimum size onto disk1,&lt;BR /&gt;     B3) set up the mirror by extension to the second disk, disk2, and then&lt;BR /&gt;     B4) extend it to the desired size.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;During the extension to the desired size, LVM again creates each PE with&lt;BR /&gt;the flag "current" just as before; however, now this is done on *both*&lt;BR /&gt;disk1 and disk2. Note that procedure B avoids the time-consuming step A3 to&lt;BR /&gt;copy data from disk1 to disk2. If another mirror were added, LVM could read&lt;BR /&gt;a logical extent even in the case that one of the physical extents returned&lt;BR /&gt;a read error.</description>
      <pubDate>Wed, 17 May 2006 01:10:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789391#M616152</guid>
      <dc:creator>Mridul Shrivastava</dc:creator>
      <dc:date>2006-05-17T01:10:24Z</dc:date>
    </item>
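The T1 read test above boils down to "read every block and check dd's exit status". A minimal, portable sketch of that pattern as a shell function (the function name check_lv and the scratch file /tmp/lvtest.img are illustrative stand-ins for the real LV device /dev/vg00/lvtest, so the pattern can be tried outside HP-UX):

```shell
# check_lv PATH: emulate the thread's T1 test -- read the target
# end-to-end with dd and report whether every block was readable.
check_lv() {
    if dd if="$1" of=/dev/null bs=256k 2>/dev/null; then
        echo "current"      # no I/O error: every block was served
    else
        echo "stale"        # I/O error: data must come from backup
    fi
}

# Demo against a scratch file standing in for /dev/vg00/lvtest.
dd if=/dev/zero of=/tmp/lvtest.img bs=256k count=4 2>/dev/null
check_lv /tmp/lvtest.img        # prints "current"
check_lv /no/such/device        # prints "stale"
rm -f /tmp/lvtest.img
```

The original post additionally runs dd in the background; to check the exit status in that case you would wait for the job first.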
    <item>
      <title>Re: stale issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789392#M616153</link>
      <description>Just try to re-sync the VG and check the status again:&lt;BR /&gt;&lt;BR /&gt;# vgsync /dev/vg01&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Saravanan</description>
      <pubDate>Wed, 17 May 2006 01:11:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789392#M616153</guid>
      <dc:creator>m saravanan</dc:creator>
      <dc:date>2006-05-17T01:11:07Z</dc:date>
    </item>
    <item>
      <title>Re: stale issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789393#M616154</link>
      <description>We want&lt;BR /&gt;to set up a new logical volume (LV) that is mirrored, i.e. each logical&lt;BR /&gt;extent (LE) is mapped to two physical extents (PEs) on different disks. It&lt;BR /&gt;is common practice to follow these steps:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;     A1) create the LV on a single disk, disk1,&lt;BR /&gt;     A2) extend it to the desired size, and then&lt;BR /&gt;     A3) set up the mirror by extension to the second disk, disk2.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Note that in step A2, by design LVM assigns the "current" flag to each new&lt;BR /&gt;(and thus still empty) PE (on disk1 only).&lt;BR /&gt;&lt;BR /&gt;In step A3 LVM copies each PE to the mirror. Depending on the LV's size,&lt;BR /&gt;this takes some time to complete. If the copy succeeds, the mirror PE also&lt;BR /&gt;gets the flag "current". If the copy fails, e.g. due to a read error on&lt;BR /&gt;disk1, the flag remains "stale".&lt;BR /&gt;&lt;BR /&gt;Usually after procedure A, all PEs are marked "current". A read error on&lt;BR /&gt;disk1, however, is confusingly displayed as a stale extent on disk2. If&lt;BR /&gt;disk2 is replaced, the same read error will most likely be encountered&lt;BR /&gt;on disk1, and the extents will still be displayed as stale on disk2.&lt;BR /&gt;&lt;BR /&gt;How do we recognize this situation?&lt;BR /&gt;===================================&lt;BR /&gt;&lt;BR /&gt;One way to identify the situation is to attempt to read the LV using dd as&lt;BR /&gt;in the following test:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;     T1) # dd if=/dev/vg00/lvtest of=/dev/null bs=256k &amp;amp;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If this command returns an I/O error, then LVM cannot provide a valid copy&lt;BR /&gt;of the data, which must then be restored from backup, if necessary (of&lt;BR /&gt;course, we would not need to restore anything if the LV is brand new or&lt;BR /&gt;used for swap).&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;How can we avoid the situation?&lt;BR /&gt;==============================&lt;BR /&gt;&lt;BR /&gt;Creating a new LV and modifying procedure A so that extension to the&lt;BR /&gt;desired size is the *last* step will circumvent the issue:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;     B1) create the LV of size 0 with no disk,&lt;BR /&gt;     B2) extend the LV by the minimum size onto disk1,&lt;BR /&gt;     B3) set up the mirror by extension to the second disk, disk2, and then&lt;BR /&gt;     B4) extend it to the desired size.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;During the extension to the desired size, LVM again creates each PE with&lt;BR /&gt;the flag "current" just as before; however, now this is done on *both*&lt;BR /&gt;disk1 and disk2. Note that procedure B avoids the time-consuming step A3 to&lt;BR /&gt;copy data from disk1 to disk2. If another mirror were added, LVM could read&lt;BR /&gt;a logical extent even in the case that one of the physical extents returned&lt;BR /&gt;a read error.</description>
      <pubDate>Wed, 17 May 2006 01:11:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789393#M616154</guid>
      <dc:creator>Mridul Shrivastava</dc:creator>
      <dc:date>2006-05-17T01:11:10Z</dc:date>
    </item>
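Procedure B above maps onto HP-UX LVM commands roughly as follows. This is a sketch only, not runnable outside HP-UX with real disks: the VG name vg01, the LV name lvnew, the disk paths, and the sizes are illustrative, not taken from the thread:

```shell
# B1: create the LV with size 0 and no extents on any disk
lvcreate -n lvnew vg01
# B2: extend it by the minimum size (here one 4 MB extent) onto disk1
lvextend -L 4 /dev/vg01/lvnew /dev/dsk/c5t0d0
# B3: set up the mirror by adding one copy on disk2
lvextend -m 1 /dev/vg01/lvnew /dev/dsk/c6t0d0
# B4: only now grow to the desired size; new PEs start "current" on both disks
lvextend -L 2048 /dev/vg01/lvnew
```

Because B4 happens after the mirror exists, no bulk copy from disk1 to disk2 is needed, which is the time saving the post describes.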
    <item>
      <title>Re: stale issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789394#M616155</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;1. I have tried vgsync and lvsync; both return I/O errors.&lt;BR /&gt;&lt;BR /&gt;2. I will create a new LV according to the above procedure. One question: based on my server's disk state, I should use one good "current" disk as disk1 and a "stale" disk as disk2, correct?</description>
      <pubDate>Wed, 17 May 2006 01:26:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789394#M616155</guid>
      <dc:creator>emily_3</dc:creator>
      <dc:date>2006-05-17T01:26:00Z</dc:date>
    </item>
    <item>
      <title>Re: stale issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789395#M616156</link>
      <description>I created a new LV on a good disk first, but couldn't lvextend to the stale disk anymore.&lt;BR /&gt;&lt;BR /&gt;# lvextend -m 1 /dev/vg01/test /dev/dsk/c6t0d0&lt;BR /&gt;&lt;BR /&gt;lvextend: Warning: couldn't query physical volume "/dev/dsk/c6t0d0":&lt;BR /&gt;The specified path does not correspond to physical volume attached to&lt;BR /&gt;this volume group&lt;BR /&gt;&lt;BR /&gt;But I did "dd if=/dev/dsk/c6t0d0 of=/dev/null count=10000", and everything was ok.&lt;BR /&gt;</description>
      <pubDate>Wed, 17 May 2006 01:51:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789395#M616156</guid>
      <dc:creator>emily_3</dc:creator>
      <dc:date>2006-05-17T01:51:08Z</dc:date>
    </item>
    <item>
      <title>Re: stale issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789396#M616157</link>
      <description>I suspect some corruption of the LVM headers. Deactivate the VG using the vgchange -a n vg01 command, then restore the VG configuration:&lt;BR /&gt;&lt;BR /&gt;vgcfgrestore -n vg01 /dev/rdsk/c6t0d0&lt;BR /&gt;&lt;BR /&gt;Execute this for all three stale disks (note that vgcfgrestore takes the raw device file), then activate the VG using the vgchange -a y vg01 command.&lt;BR /&gt;I hope it works for you this time...&lt;BR /&gt;&lt;BR /&gt;Cheers</description>
      <pubDate>Wed, 17 May 2006 02:09:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789396#M616157</guid>
      <dc:creator>Mridul Shrivastava</dc:creator>
      <dc:date>2006-05-17T02:09:34Z</dc:date>
    </item>
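End to end, the recovery suggested above might look like the following. This is a sketch, not runnable outside HP-UX: it assumes the LVM configuration backups under /etc/lvmconf are intact, and it uses the three stale disks named earlier in the thread:

```shell
vgchange -a n vg01                      # deactivate the volume group
for d in c6t0d0 c6t2d0 c6t4d0; do
    vgcfgrestore -n vg01 /dev/rdsk/$d   # restore LVM metadata to each disk
done
vgchange -a y vg01                      # reactivate; this can take a while
vgsync /dev/vg01                        # resynchronize any stale extents
```

The long activation time reported in the next post is consistent with the resynchronization that follows reactivation of a mirrored VG.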
    <item>
      <title>Re: stale issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789397#M616158</link>
      <description>Thanks for your suggestions. I have restored the VG configuration files, but it seems to take a very long time when activating vg01; it is still processing. Is this normal?</description>
      <pubDate>Wed, 17 May 2006 03:47:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789397#M616158</guid>
      <dc:creator>emily_3</dc:creator>
      <dc:date>2006-05-17T03:47:45Z</dc:date>
    </item>
    <item>
      <title>Re: stale issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789398#M616159</link>
      <description>Hello, &lt;BR /&gt;&lt;BR /&gt;  The issue has been solved by restoring the configuration files for the 3 hard disks. Thanks for the help...</description>
      <pubDate>Wed, 17 May 2006 20:25:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/stale-issue/m-p/3789398#M616159</guid>
      <dc:creator>emily_3</dc:creator>
      <dc:date>2006-05-17T20:25:12Z</dc:date>
    </item>
  </channel>
</rss>

