<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Removal of Failed Node in StoreVirtual Storage</title>
    <link>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784680#M9796</link>
    <description>&lt;P&gt;Thank you for the response.&amp;nbsp; All the volumes are already at RAID level 0.&lt;/P&gt;</description>
    <pubDate>Mon, 14 Sep 2015 20:40:41 GMT</pubDate>
    <dc:creator>JDohrmann</dc:creator>
    <dc:date>2015-09-14T20:40:41Z</dc:date>
    <item>
      <title>Removal of Failed Node</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784604#M9794</link>
      <description>&lt;P&gt;We have a VSA node, without a valid support contract, that failed and cannot be brought back online.&amp;nbsp; How do we go about removing it from the CMC?&amp;nbsp; Whenever we try to modify it, an error is thrown stating "Could not find storage system with serial number: 00:0C:29:A7:D7:8E."&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;</description>
      <pubDate>Mon, 14 Sep 2015 16:23:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784604#M9794</guid>
      <dc:creator>JDohrmann</dc:creator>
      <dc:date>2015-09-14T16:23:11Z</dc:date>
    </item>
    <item>
      <title>Re: Removal of Failed Node</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784636#M9795</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;You first have to migrate all volumes to RAID 0 level...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 14 Sep 2015 17:14:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784636#M9795</guid>
      <dc:creator>vlho</dc:creator>
      <dc:date>2015-09-14T17:14:57Z</dc:date>
    </item>
    <item>
      <title>Re: Removal of Failed Node</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784680#M9796</link>
      <description>&lt;P&gt;Thank you for the response.&amp;nbsp; All the volumes are already at RAID level 0.&lt;/P&gt;</description>
      <pubDate>Mon, 14 Sep 2015 20:40:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784680#M9796</guid>
      <dc:creator>JDohrmann</dc:creator>
      <dc:date>2015-09-14T20:40:41Z</dc:date>
    </item>
    <item>
      <title>Re: Removal of Failed Node</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784785#M9801</link>
      <description>&lt;P&gt;You can try to log in via the console, and under Configuration Management you will find "Remove from management group". Maybe this could help.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Sep 2015 08:27:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784785#M9801</guid>
      <dc:creator>Paul Lazzari</dc:creator>
      <dc:date>2015-09-15T08:27:58Z</dc:date>
    </item>
    <item>
      <title>Re: Removal of Failed Node</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784903#M9804</link>
      <description>&lt;P&gt;You don't need to migrate to NR0!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've done this MANY times...&amp;nbsp; Just create a new VSA with the same MAC address, same name, and same IP.&amp;nbsp; Boot it up, and when it comes online, log into the CMC; it will try to log into the node and then give an error saying the node does not think it is in the management group.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;From there, it will show as RIP:mac in the cluster, and you will see a node listed in the available nodes section.&amp;nbsp; You can then take that node (upgrade it first if needed) and join it to the management group. From there, right-click on the cluster and choose "exchange nodes", and you will have the option to swap the RIP node with the new node.&amp;nbsp; It will show both the RIP node and the new one until the rebuild is complete, at which point the RIP node will simply disappear.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I've done this MANY times.&amp;nbsp; It will work and is definitely the fastest way.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Sep 2015 12:24:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784903#M9804</guid>
      <dc:creator>oikjn</dc:creator>
      <dc:date>2015-09-15T12:24:23Z</dc:date>
    </item>
    <item>
      <title>Re: Removal of Failed Node</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784905#M9805</link>
      <description>&lt;P&gt;Yikes...&amp;nbsp; I just read that all your volumes were NR0...&amp;nbsp; If that is the case, you are screwed.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Unless the original VSA VHDs are still intact, then MAYBE you can try to buy support from HP and see if they can recover the original VHDs into a new node,&amp;nbsp;but my guess is I would kiss that data goodbye.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Not to beat a man when he's down, but how could you possibly run a SAN with NR0 unless it was just a single-node cluster?&amp;nbsp; Even the CMC will shout at you every time you open it, saying it's a bad idea.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Sep 2015 12:27:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784905#M9805</guid>
      <dc:creator>oikjn</dc:creator>
      <dc:date>2015-09-15T12:27:25Z</dc:date>
    </item>
    <item>
      <title>Re: Removal of Failed Node</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784927#M9808</link>
      <description>&lt;P&gt;Yeah, this node is on its own in the cluster.&amp;nbsp; We aren't trying to recover the data, since it's only being used for backups.&amp;nbsp; I'm just trying to remove the failed node from the CMC.&amp;nbsp; Is there any way?&lt;/P&gt;</description>
      <pubDate>Tue, 15 Sep 2015 13:12:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784927#M9808</guid>
      <dc:creator>JDohrmann</dc:creator>
      <dc:date>2015-09-15T13:12:46Z</dc:date>
    </item>
    <item>
      <title>Re: Removal of Failed Node</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784943#M9809</link>
      <description>&lt;P&gt;Do you have other clusters in the management group, which is why you are trying to save it?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Past that, I don't know if the procedure I suggested will work with NR0, but I guess it wouldn't hurt to try.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Sep 2015 13:55:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784943#M9809</guid>
      <dc:creator>oikjn</dc:creator>
      <dc:date>2015-09-15T13:55:07Z</dc:date>
    </item>
    <item>
      <title>Re: Removal of Failed Node</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784976#M9811</link>
      <description>&lt;P&gt;Yes, we have two clusters in that group.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Sep 2015 16:04:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6784976#M9811</guid>
      <dc:creator>JDohrmann</dc:creator>
      <dc:date>2015-09-15T16:04:10Z</dc:date>
    </item>
    <item>
      <title>Re: Removal of Failed Node</title>
      <link>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6785687#M9817</link>
      <description>&lt;P&gt;I think as long as the management group thinks that RIP node has data on it that is supposed to be of use, it is going to show the node.&amp;nbsp; I have never tried this on a cluster which has totally failed, so I don't know if it will work, but you have to delete all the LUNs on that cluster and then delete the cluster (assuming the CMC allows that)... once that is done, you should be able to remove the missing node.&amp;nbsp; If not, have you tried doing the node re-creation I suggested above?&amp;nbsp; I know that works when you have good data you can migrate, and maybe it will work with bad data as well (I would love to find out.)&amp;nbsp; The key is, I think, that you have to create the conflicting-MAC-address VSA in order to get the original to say RIP_MAC instead of just missing_MAC.&lt;/P&gt;</description>
      <pubDate>Thu, 17 Sep 2015 14:05:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/storevirtual-storage/removal-of-failed-node/m-p/6785687#M9817</guid>
      <dc:creator>oikjn</dc:creator>
      <dc:date>2015-09-17T14:05:00Z</dc:date>
    </item>
  </channel>
</rss>

