<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: lvm migrations in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/lvm-migrations/m-p/6185389#M54402</link>
    <description>&lt;P&gt;So the device /dev/sdau was showing as a dead path in powermt display. I took the device offline using echo and then deleted it using the same echo command you provided, and the server panicked and rebooted. I also looked at the online storage configuration guide, and the same steps were listed there, so I am not sure why it panicked for a device that had been offlined and removed. Any ideas?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Simon&lt;/P&gt;</description>
    <pubDate>Wed, 28 Aug 2013 01:53:26 GMT</pubDate>
    <dc:creator>Simon_G</dc:creator>
    <dc:date>2013-08-28T01:53:26Z</dc:date>
    <item>
      <title>lvm migrations</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvm-migrations/m-p/6160443#M54390</link>
      <description>&lt;P&gt;I was doing LVM migrations (pvmove) on RAC clusters that have some shared storage. After the migrations were done, everything was fine (ASM disks). The problem is that the original LVM disks, mounted only on each individual node, were zoned to all nodes in the cluster, so LVM can see all the VGs, even the ones from the other servers. Here is what I did on each node:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1. pvmove of the VGs mounted on that node.&lt;/P&gt;&lt;P&gt;2. vgreduce and pvreduce on the same node only.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;With the old storage removed, I was hoping to run powermt check and remove the dead paths, but I am not able to. I cannot remove, from one node, the dead paths to the old storage of a VG that was mounted on another node. In other words, the paths from a node to the old storage that was mounted on that node itself were removed perfectly; it is the paths to the other nodes' old storage that are the problem. Is there a way to fix this dead-path issue without a reboot? These are Red Hat 5.9 boxes.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Simon&lt;/P&gt;</description>
      <pubDate>Tue, 06 Aug 2013 13:32:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvm-migrations/m-p/6160443#M54390</guid>
      <dc:creator>Simon_G</dc:creator>
      <dc:date>2013-08-06T13:32:02Z</dc:date>
    </item>
    <item>
      <title>Re: lvm migrations</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvm-migrations/m-p/6162169#M54391</link>
      <description>&lt;P&gt;You should be able to remove any old /dev/sd* paths by using the /sys filesystem.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For example, to tell the system that /dev/sdX is gone for good and should be forgotten, you would do:&lt;/P&gt;&lt;PRE&gt;echo 1 &amp;gt; /sys/block/sdX/device/delete&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It's been a long time since I had PowerPath on any Linux system, but after removing the dead /dev/sd* paths, I would try "powermt check" again if necessary.&lt;/P&gt;</description>
      <pubDate>Wed, 07 Aug 2013 14:40:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvm-migrations/m-p/6162169#M54391</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2013-08-07T14:40:17Z</dc:date>
    </item>
    <item>
      <title>Re: lvm migrations</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvm-migrations/m-p/6185389#M54402</link>
      <description>&lt;P&gt;So the device /dev/sdau was showing as a dead path in powermt display. I took the device offline using echo and then deleted it using the same echo command you provided, and the server panicked and rebooted. I also looked at the online storage configuration guide, and the same steps were listed there, so I am not sure why it panicked for a device that had been offlined and removed. Any ideas?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Simon&lt;/P&gt;</description>
      <pubDate>Wed, 28 Aug 2013 01:53:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvm-migrations/m-p/6185389#M54402</guid>
      <dc:creator>Simon_G</dc:creator>
      <dc:date>2013-08-28T01:53:26Z</dc:date>
    </item>
    <item>
      <title>Re: lvm migrations</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvm-migrations/m-p/6187537#M54403</link>
      <description>&lt;P&gt;Hmm... did the panic message get recorded in the system log? If it did, it might include a kernel stack trace that could be useful in understanding the problem.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What's your PowerPath version, anyway? If it is not the latest one, you might want to check the Release Notes of versions newer than the one you are currently using, and see if any relevant-looking bugs have been fixed.&lt;/P&gt;</description>
      <pubDate>Thu, 29 Aug 2013 10:44:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvm-migrations/m-p/6187537#M54403</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2013-08-29T10:44:35Z</dc:date>
    </item>
    <item>
      <title>Re: lvm migrations</title>
      <link>https://community.hpe.com/t5/operating-system-linux/lvm-migrations/m-p/6199131#M54413</link>
      <description>&lt;P&gt;It is 5.6, which is recent. We tried on 2 hosts with the same result, so we have decided to reboot to clear the dead paths. It is really strange.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Simon&lt;/P&gt;</description>
      <pubDate>Tue, 10 Sep 2013 16:38:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/lvm-migrations/m-p/6199131#M54413</guid>
      <dc:creator>Simon_G</dc:creator>
      <dc:date>2013-09-10T16:38:21Z</dc:date>
    </item>
  </channel>
</rss>

