<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: EVA 3000 Disk Drives failed state in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133217#M42446</link>
    <description>Hi,&lt;BR /&gt;did you also try the "good disks" in the "wrong bays" and vice versa, please?&lt;BR /&gt;</description>
    <pubDate>Thu, 02 Oct 2008 06:55:56 GMT</pubDate>
    <dc:creator>IBaltay</dc:creator>
    <dc:date>2008-10-02T06:55:56Z</dc:date>
    <item>
      <title>EVA 3000 Disk Drives failed state</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133216#M42445</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;We are setting up a new EVA 3000 in a test environment with only one drive enclosure and 8 disk drives of different capacities. The problem is that only 3 of the disks are recognized by Command View, with their LEDs green. The other 5 have red LEDs, and the Command View Operational State shows them as failed or not mated. We are unable to update their firmware, as Code Load is not available.&lt;BR /&gt;&lt;BR /&gt;Updating one controller to VCS 3.110 did not help, so we returned to VCS 3.028.&lt;BR /&gt;&lt;BR /&gt;Specs:&lt;BR /&gt;EVA 3000, 2 controllers HSV100 VCS 3.028&lt;BR /&gt;Command View EVA v6.0 Build 193&lt;BR /&gt;&lt;BR /&gt;Operational drives: 72GB 15K, firmware HP00 for one drive and HP02 for 2 drives.&lt;BR /&gt;&lt;BR /&gt;Failed drives are all firmware HP03: 2x 72GB 15K, 3x 146GB 10K.&lt;BR /&gt;&lt;BR /&gt;Moving the disk drives from one bay to another gives the same results.&lt;BR /&gt;&lt;BR /&gt;Does anyone have any suggestions or recommendations?&lt;BR /&gt;&lt;BR /&gt;Thank you&lt;BR /&gt;&lt;BR /&gt;Nick</description>
      <pubDate>Thu, 02 Oct 2008 06:18:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133216#M42445</guid>
      <dc:creator>Nicolas Saade2</dc:creator>
      <dc:date>2008-10-02T06:18:55Z</dc:date>
    </item>
    <item>
      <title>Re: EVA 3000 Disk Drives failed state</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133217#M42446</link>
      <description>Hi,&lt;BR /&gt;did you also try the "good disks" in the "wrong bays" and vice versa, please?&lt;BR /&gt;</description>
      <pubDate>Thu, 02 Oct 2008 06:55:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133217#M42446</guid>
      <dc:creator>IBaltay</dc:creator>
      <dc:date>2008-10-02T06:55:56Z</dc:date>
    </item>
    <item>
      <title>Re: EVA 3000 Disk Drives failed state</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133218#M42447</link>
      <description>Yes, the "good disks" were placed in the "wrong bays" and vice versa.</description>
      <pubDate>Thu, 02 Oct 2008 07:00:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133218#M42447</guid>
      <dc:creator>Nicolas Saade2</dc:creator>
      <dc:date>2008-10-02T07:00:52Z</dc:date>
    </item>
    <item>
      <title>Re: EVA 3000 Disk Drives failed state</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133219#M42448</link>
      <description>Hi Nick,&lt;BR /&gt;&lt;BR /&gt;Where did this "new" EVA3000 come from?&lt;BR /&gt;&lt;BR /&gt;I'll guess that the disks are actually bad.&lt;BR /&gt;&lt;BR /&gt;Even if the firmware on the drives is too new, you should just get a warning to that effect in Command View.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;&lt;BR /&gt;Rob</description>
      <pubDate>Thu, 02 Oct 2008 07:01:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133219#M42448</guid>
      <dc:creator>Rob Leadbeater</dc:creator>
      <dc:date>2008-10-02T07:01:38Z</dc:date>
    </item>
    <item>
      <title>Re: EVA 3000 Disk Drives failed state</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133220#M42449</link>
      <description>All drives should power up to show their green LEDs - firmware is unlikely to be an issue.&lt;BR /&gt;&lt;BR /&gt;Since you appear to be new to the EVA, but are working with a not-new setup (the EVA 3000 has not been sold as new for a while), I would highly recommend:&lt;BR /&gt;&lt;BR /&gt;- getting at least 8 drives of the same capacity (better space usage for the reserved protection algorithm)&lt;BR /&gt;&lt;BR /&gt;- setting "double" protection (reserves space so the EVA can self-heal in case drives fail)&lt;BR /&gt;&lt;BR /&gt;- once you get those working drives in, upgrading to the latest controller and drive firmware on all controllers&lt;BR /&gt;&lt;BR /&gt;- running a program that generates lots of I/O to give the drives you have a workout&lt;BR /&gt;&lt;BR /&gt;- never using RAID 0 (any drive failure will cause data loss)</description>
      <pubDate>Thu, 02 Oct 2008 11:53:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133220#M42449</guid>
      <dc:creator>McCready</dc:creator>
      <dc:date>2008-10-02T11:53:53Z</dc:date>
    </item>
    <item>
      <title>Re: EVA 3000 Disk Drives failed state</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133221#M42450</link>
      <description>If this EVA is "new" and currently not used for data, you may consider uninitializing it.&lt;BR /&gt;&lt;BR /&gt;BTW, when you compare firmware versions, you must also compare the drive models.&lt;BR /&gt;&lt;BR /&gt;Are all models the same?&lt;BR /&gt;&lt;BR /&gt;Different models typically have different firmware.</description>
      <pubDate>Thu, 02 Oct 2008 12:44:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133221#M42450</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2008-10-02T12:44:26Z</dc:date>
    </item>
    <item>
      <title>Re: EVA 3000 Disk Drives failed state</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133222#M42451</link>
      <description>Thank you all. It has not been resolved yet, but we are waiting for disk replacements.</description>
      <pubDate>Tue, 14 Oct 2008 10:30:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133222#M42451</guid>
      <dc:creator>Nicolas Saade2</dc:creator>
      <dc:date>2008-10-14T10:30:17Z</dc:date>
    </item>
    <item>
      <title>Re: EVA 3000 Disk Drives failed state</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133223#M42452</link>
      <description>Just today I hit the same issue as was discussed in this forum, and I resolved it successfully.&lt;BR /&gt;All drives powered up. I used 8 drives of the same capacity with "double" protection, which reserves space so the EVA can self-heal in case drives fail. I then upgraded to the latest controller and drive firmware on all controllers.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;DRJJ</description>
      <pubDate>Tue, 01 Sep 2009 07:44:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva-3000-disk-drives-failed-state/m-p/5133223#M42452</guid>
      <dc:creator>DR M.JAVED K JADOON(PHD</dc:creator>
      <dc:date>2009-09-01T07:44:22Z</dc:date>
    </item>
  </channel>
</rss>