<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: pvmove issue in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154210#M50318</link>
    <description>You can extend a filesystem online on RHEL if ext2online is present, but shrinking is not an operation we normally do in practice.</description>
    <pubDate>Tue, 03 Feb 2009 02:19:22 GMT</pubDate>
    <dc:creator>skt_skt</dc:creator>
    <dc:date>2009-02-03T02:19:22Z</dc:date>
    <item>
      <title>pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154207#M50315</link>
      <description>Let me briefly summarize:&lt;BR /&gt;&lt;BR /&gt;LVM2, kernel 2.6.9, RH AS 4.5&lt;BR /&gt;&lt;BR /&gt;I noticed a series of bad disks in my LVM VG. Since the VG was full, I requested new disks. Once they were presented, I intended to pvmove onto the unallocated PEs on the new disks.&lt;BR /&gt;&lt;BR /&gt;Stupidly, I allocated the new space to the LV. Now I can't pvmove, since all the extents are used.&lt;BR /&gt;&lt;BR /&gt;1) I'm using LVM2 and there is no online LV-reducing command, so I have to use resize2fs to eventually free the new disk's extents, so I can pvmove the data off the old bad disks.&lt;BR /&gt;&lt;BR /&gt;A) Is resize2fs safe?&lt;BR /&gt;&lt;BR /&gt;B) Unmounting, which resize2fs requires, will disrupt production and cause delays; I'd rather avoid this option.&lt;BR /&gt;&lt;BR /&gt;2) I guess I could wait until the morning, request new storage, and do it right.&lt;BR /&gt;&lt;BR /&gt;Any options? Am I missing something?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 02 Feb 2009 23:17:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154207#M50315</guid>
      <dc:creator>P_F</dc:creator>
      <dc:date>2009-02-02T23:17:08Z</dc:date>
    </item>
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154208#M50316</link>
      <description>Here you have the procedure to reduce the file system:&lt;BR /&gt;&lt;A href="http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1233622132384+28353475&amp;amp;threadId=1160556" target="_blank"&gt;http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1233622132384+28353475&amp;amp;threadId=1160556&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;1) I'm using LVM2 and there is no online LV reducing command&lt;BR /&gt;&lt;BR /&gt;You have to unmount the file system to reduce its size.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;A) Is resize2fs safe?&lt;BR /&gt;&lt;BR /&gt;If you don't make any mistakes. Anyway, I would always have a good backup.&lt;BR /&gt;&lt;BR /&gt;B) unmounting - which is required with resize2fs - will disrupt production and cause delays, I'd rather avoid this option.&lt;BR /&gt;&lt;BR /&gt;Then there is no other option; maybe copy/rsync?&lt;BR /&gt;&lt;BR /&gt;2) I guess I could wait til the morning, and request new storage and do it right.&lt;BR /&gt;&lt;BR /&gt;Good news ;)</description>
      <pubDate>Tue, 03 Feb 2009 00:51:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154208#M50316</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2009-02-03T00:51:57Z</dc:date>
    </item>
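The offline shrink procedure Ivan refers to can be sketched as below. The device, mount point, and target size are placeholders, and `DRYRUN=echo` makes the sketch print the commands rather than run them; the essential point is the ordering — check the filesystem, shrink the filesystem, and only then shrink the LV, never the reverse:

```shell
# Offline ext2/ext3 shrink sketch. All names and sizes are placeholders.
# DRYRUN=echo prints the commands instead of running them; clear it to execute.
DRYRUN=echo
LV=/dev/vg00/lvdata
MNT=/mnt/data

$DRYRUN umount "$MNT"             # shrinking requires the fs to be unmounted
$DRYRUN e2fsck -f "$LV"           # resize2fs refuses an unchecked filesystem
$DRYRUN resize2fs "$LV" 100G      # shrink the FILESYSTEM first...
$DRYRUN lvreduce -L 100G "$LV"    # ...then the LV, never the other way round
$DRYRUN mount "$LV" "$MNT"
```

Reducing the LV below the filesystem's size destroys data, which is why the filesystem must always be shrunk first and the LV second.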
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154209#M50317</link>
      <description>Have you already extended the _filesystem_ to use the new disks?&lt;BR /&gt;&lt;BR /&gt;If you've used lvextend but not yet extended the filesystem (using resize2fs/ext2online/whatever), just lvreduce back to the original value *but not any smaller than that*. &lt;BR /&gt;&lt;BR /&gt;Use "tune2fs -l" to verify the current size of the filesystem if you're unsure: block count * block size = filesystem size in bytes. Then see if it matches what lvdisplay says about the LV size.&lt;BR /&gt;&lt;BR /&gt;Remember:&lt;BR /&gt;filesystem size &amp;gt; LV size: filesystem is corrupted, some data may be lost&lt;BR /&gt;filesystem size &amp;lt; LV size: either a resize operation is half-done, or someone has made a mistake&lt;BR /&gt;filesystem size = LV size: OK.&lt;BR /&gt;&lt;BR /&gt;If you've already extended the filesystem, the first priority is to take an extra backup to make sure you don't lose data, whatever happens.&lt;BR /&gt;&lt;BR /&gt;Shrinking the filesystem should be safe when the filesystem is unmounted and error-free. However, shrinking operations are much rarer than extensions, so you should be prepared.&lt;BR /&gt;&lt;BR /&gt;In addition, you're working with known bad disks. That is definitely a risk.&lt;BR /&gt;&lt;BR /&gt;I would take a fresh backup ASAP and "do it right" in the morning. &lt;BR /&gt;&lt;BR /&gt;If the disk fails during the night, I'd try to salvage the very latest data if reasonable, then nuke &amp;amp; recreate the LV and the filesystem using good disks only, then restore. Of course, my situation may be different from yours.&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Tue, 03 Feb 2009 01:07:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154209#M50317</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2009-02-03T01:07:15Z</dc:date>
    </item>
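Matti's size check can be sketched as follows. The real `tune2fs` and `lvdisplay` invocations are shown as comments (the device name is a placeholder); the arithmetic uses hypothetical values purely to illustrate the comparison:

```shell
# Verify filesystem size against LV size, as described above.
# Real commands (device name is a placeholder):
#   tune2fs -l /dev/vg00/lvdata       # look for "Block count" and "Block size"
#   lvdisplay --units b /dev/vg00/lvdata
# Hypothetical values to illustrate the arithmetic:
block_count=26214400
block_size=4096
fs_bytes=$((block_count * block_size))
echo "filesystem size: $fs_bytes bytes"   # 107374182400 bytes = exactly 100 GiB
# fs_bytes greater than LV size: filesystem corrupt, data may be lost
# fs_bytes less than LV size:    a resize is half-done (or a mistake was made)
# fs_bytes equal to LV size:     OK
```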
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154210#M50318</link>
      <description>You can extend a filesystem online on RHEL if ext2online is present, but shrinking is not an operation we normally do in practice.</description>
      <pubDate>Tue, 03 Feb 2009 02:19:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154210#M50318</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2009-02-03T02:19:22Z</dc:date>
    </item>
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154211#M50319</link>
      <description>Thanks for the responses.&lt;BR /&gt;&lt;BR /&gt;I have a follow-up question, more on LVM.&lt;BR /&gt;&lt;BR /&gt;1)&lt;BR /&gt;If an LVM disk develops bad sectors, does LVM identify those sectors and attempt not to write to them, or even avoid the disk itself?&lt;BR /&gt;&lt;BR /&gt;2)&lt;BR /&gt;Suppose I complete the pvmove, and then extend the new data into the existing VG. Suppose I can't vgreduce the old bad disk to remove it from the VG. Could that data migrate back onto the old drive? Or could new data be put on the bad drive?</description>
      <pubDate>Tue, 03 Feb 2009 05:23:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154211#M50319</guid>
      <dc:creator>P_F</dc:creator>
      <dc:date>2009-02-03T05:23:51Z</dc:date>
    </item>
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154212#M50320</link>
      <description>&amp;gt; Have you already extended the _filesystem_ to use the new disks?&lt;BR /&gt;&lt;BR /&gt;Yes. Fortunately, the SAN guys have presented new disks and I am pvmove'ing the data that is readable from the bad disks to the new disks.&lt;BR /&gt;&lt;BR /&gt;Of course, I won't be able to vgreduce those LUNs. So I'm wondering what will happen to the partially corrupt LUNs: will new data somehow be written to them again?</description>
      <pubDate>Tue, 03 Feb 2009 05:31:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154212#M50320</guid>
      <dc:creator>P_F</dc:creator>
      <dc:date>2009-02-03T05:31:58Z</dc:date>
    </item>
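The evacuate-and-remove sequence under discussion can be sketched as below (the VG and device names are placeholders, and `DRYRUN=echo` keeps the sketch harmless). `vgreduce` only succeeds once the PV holds no allocated extents, which is why the pvmove must finish first; until then the bad disk remains a valid allocation target:

```shell
# Evacuate a failing PV and drop it from the VG. Names are placeholders;
# DRYRUN=echo prints the commands instead of running them.
DRYRUN=echo
VG=vg00
BAD=/dev/sdk
NEW=/dev/sdx

$DRYRUN pvmove "$BAD" "$NEW"     # copy allocated extents off the bad disk
$DRYRUN vgreduce "$VG" "$BAD"    # succeeds only when the PV has no extents left
$DRYRUN pvremove "$BAD"          # wipe the LVM label so it cannot be reused
```

This also answers the "will new data land on the bad drive again?" worry: until the vgreduce succeeds, the allocator can still place new extents there. Marking the PV non-allocatable in the meantime, e.g. with `pvchange -x n /dev/sdk`, is one way to forbid that.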
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154213#M50321</link>
      <description>If the disk has read/write errors, Linux tends to remount the file system read-only. LVM itself does not fix this, but an fsck can help if the error is correctable. Before running fsck, make sure you have a backup.</description>
      <pubDate>Tue, 03 Feb 2009 11:23:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154213#M50321</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2009-02-03T11:23:31Z</dc:date>
    </item>
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154214#M50322</link>
      <description>&amp;gt; if the disk has RD/WR error LINUX has a tendency to put the file system READ only. LVM itself does not fix it.&lt;BR /&gt;&lt;BR /&gt;The type of error we are seeing on the disk is:&lt;BR /&gt;&lt;BR /&gt;Kernel: SCSI error : &amp;lt;0 0 4 10&amp;gt; return code = 0x20000&lt;BR /&gt;&lt;BR /&gt;Kernel: end request: I/O error dev sdk , sector 432582893&lt;BR /&gt;&lt;BR /&gt;It would seem that the Linux filesystem would mark those bad sectors off limits?</description>
      <pubDate>Tue, 03 Feb 2009 12:31:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154214#M50322</guid>
      <dc:creator>P_F</dc:creator>
      <dc:date>2009-02-03T12:31:31Z</dc:date>
    </item>
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154215#M50323</link>
      <description>Did you confirm that the reported error is coming from the suspected failing disk?</description>
      <pubDate>Tue, 03 Feb 2009 15:14:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154215#M50323</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2009-02-03T15:14:19Z</dc:date>
    </item>
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154216#M50324</link>
      <description>&amp;gt; did u confirm if the reported error is coming from the suspected failing disk?&lt;BR /&gt;&lt;BR /&gt;I used&lt;BR /&gt;# dd if=/dev/sdk of=/dev/null bs=1024k&lt;BR /&gt;&lt;BR /&gt;to look for I/O errors and, yes, I found some. Additionally, I can't run fsck on the disks since I can't unmount them now.</description>
      <pubDate>Tue, 03 Feb 2009 15:29:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154216#M50324</guid>
      <dc:creator>P_F</dc:creator>
      <dc:date>2009-02-03T15:29:16Z</dc:date>
    </item>
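The read test P_F ran can be wrapped so the exit status is checked explicitly. The sketch below exercises the pattern on a scratch file so it is safe to run as-is; in real use you would substitute the suspect device (here, /dev/sdk). `badblocks -sv` on the device is an alternative read-only scanner that also reports which blocks failed:

```shell
# Read-error check, demonstrated on a scratch file so the sketch is harmless.
# Substitute the suspect device (e.g. dev=/dev/sdk) in real use.
dev=$(mktemp)
dd if=/dev/zero of="$dev" bs=1024k count=4 2>/dev/null

# Sequentially read the whole device; a nonzero exit status means dd hit
# at least one I/O error.
dd if="$dev" of=/dev/null bs=1024k 2>/dev/null
rc=$?
if [ "$rc" -eq 0 ]; then
    echo "no read errors reported"
else
    echo "read errors detected (dd exit status $rc)"
fi
rm -f "$dev"
```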
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154217#M50325</link>
      <description>For those following this thread...&lt;BR /&gt;&lt;BR /&gt;The corruption got so bad that I couldn't extend onto the new disks with the old disks' data pvmoved onto them, because of general filesystem corruption... this was after the 5th disk had been moved.&lt;BR /&gt;&lt;BR /&gt;Anyway, that forced me to reboot, and I did so with a -F; after a very lengthy fsck it cleared and the system is up.&lt;BR /&gt;&lt;BR /&gt;So the fsck ought to have fixed a lot of the mess.&lt;BR /&gt;&lt;BR /&gt;Thanks for the info to all who responded.&lt;BR /&gt;</description>
      <pubDate>Tue, 03 Feb 2009 20:06:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154217#M50325</guid>
      <dc:creator>P_F</dc:creator>
      <dc:date>2009-02-03T20:06:55Z</dc:date>
    </item>
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154218#M50326</link>
      <description>The general information was useful.</description>
      <pubDate>Tue, 03 Feb 2009 20:08:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154218#M50326</guid>
      <dc:creator>P_F</dc:creator>
      <dc:date>2009-02-03T20:08:20Z</dc:date>
    </item>
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154219#M50327</link>
      <description>What is that -F you are referring to?</description>
      <pubDate>Wed, 04 Feb 2009 00:33:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154219#M50327</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2009-02-04T00:33:11Z</dc:date>
    </item>
    <item>
      <title>Re: pvmove issue</title>
      <link>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154220#M50328</link>
      <description>&amp;gt; What is the -F ?&lt;BR /&gt;&lt;BR /&gt;# shutdown -rF now</description>
      <pubDate>Wed, 04 Feb 2009 01:01:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/pvmove-issue/m-p/5154220#M50328</guid>
      <dc:creator>P_F</dc:creator>
      <dc:date>2009-02-04T01:01:52Z</dc:date>
    </item>
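For readers wondering what the -F did: on sysvinit-based systems such as RHEL 4, `shutdown -F` arranges a forced fsck at the next boot via the /forcefsck flag file, which rc.sysinit checks during startup. A rough equivalent, sketched with `DRYRUN=echo` so it is safe to run:

```shell
# Rough equivalent of "shutdown -rF now" on a sysvinit system (RHEL 4 era).
# DRYRUN=echo prints the commands instead of running them.
DRYRUN=echo
$DRYRUN touch /forcefsck    # rc.sysinit checks this flag file at boot time
$DRYRUN shutdown -r now     # reboot; every filesystem gets a full fsck
```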
  </channel>
</rss>

