<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Parity Initialization Status in ProLiant Servers (ML,DL,SL)</title>
    <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/parity-initialization-status/m-p/5377175#M149526</link>
    <description>&lt;P&gt;We didn't delete the LUN. When the server came back up, we had the option to re-enable the disabled logical drive to recover some data, which forced the array to initialize; it has been stuck there since.&lt;/P&gt;</description>
    <pubDate>Mon, 31 Oct 2011 12:52:06 GMT</pubDate>
    <dc:creator>AssureWeb_MW</dc:creator>
    <dc:date>2011-10-31T12:52:06Z</dc:date>
    <item>
      <title>Parity Initialization Status</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/parity-initialization-status/m-p/5376705#M149523</link>
      <description>&lt;P&gt;We have a Smart Array 5i in the embedded slot of a DL380 G2, connected to a disk subsystem (old, I know), configured as a 450 GB RAID 5 array. The server went down over the weekend for some unknown reason, and the controller disabled the logical drive due to errors. We don't have much hope for the data, but we started an initialization on the logical drive, since the Array Configuration Utility can still see the drives, in the hope that we may be able to recover something. The problem is that it has now been running for about 72 hours. Does anyone know how long this could potentially take?&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Mon, 31 Oct 2011 08:50:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/parity-initialization-status/m-p/5376705#M149523</guid>
      <dc:creator>AssureWeb_MW</dc:creator>
      <dc:date>2011-10-31T08:50:11Z</dc:date>
    </item>
    <item>
      <title>Re: Parity Initialization Status</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/parity-initialization-status/m-p/5377127#M149524</link>
      <description>&lt;P&gt;The 5i is an embedded SCSI controller. The largest SCSI drive I recall HP delivering was 300 GB. I doubt the Smart Array card firmware knows how to handle the extra blocks.&lt;/P&gt;</description>
      <pubDate>Mon, 31 Oct 2011 12:21:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/parity-initialization-status/m-p/5377127#M149524</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2011-10-31T12:21:40Z</dc:date>
    </item>
    <item>
      <title>Re: Parity Initialization Status</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/parity-initialization-status/m-p/5377131#M149525</link>
      <description>&lt;P&gt;Never mind, you said a 450 GB RAID 5 LUN. Not enough coffee.&lt;/P&gt;&lt;P&gt;Are you doing an initialization again? Did you delete the LUN? If you deleted the LUN, the LUN device ID changed, and the host will see it as a different volume.&lt;/P&gt;</description>
      <pubDate>Mon, 31 Oct 2011 12:26:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/parity-initialization-status/m-p/5377131#M149525</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2011-10-31T12:26:21Z</dc:date>
    </item>
    <item>
      <title>Re: Parity Initialization Status</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/parity-initialization-status/m-p/5377175#M149526</link>
      <description>&lt;P&gt;We didn't delete the LUN. When the server came back up, we had the option to re-enable the disabled logical drive to recover some data, which forced the array to initialize; it has been stuck there since.&lt;/P&gt;</description>
      <pubDate>Mon, 31 Oct 2011 12:52:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/parity-initialization-status/m-p/5377175#M149526</guid>
      <dc:creator>AssureWeb_MW</dc:creator>
      <dc:date>2011-10-31T12:52:06Z</dc:date>
    </item>
    <item>
      <title>Re: Parity Initialization Status</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/parity-initialization-status/m-p/5383347#M149527</link>
      <description>&lt;P&gt;It sounds like the RAID set has hit an unrecoverable read error (URE) on one of the members. This will cause the initialization to stop. You could pull an ADU report and check the Hard Read Errors (since reset) for each member. But in the end, I think you will have to restore your backup data, delete the volume, replace a few drives, and recreate everything.&lt;/P&gt;</description>
      <pubDate>Mon, 07 Nov 2011 12:29:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/parity-initialization-status/m-p/5383347#M149527</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2011-11-07T12:29:20Z</dc:date>
    </item>
  </channel>
</rss>

