<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: EVA5000 available space? in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/eva5000-available-space/m-p/4552588#M33712</link>
    <description>I also used the sssu tool to view the state of the FATA disk group.&lt;BR /&gt;Here is the result:&lt;BR /&gt;&lt;BR /&gt;\Disk Groups\FATA information:&lt;BR /&gt;object&lt;BR /&gt;  objectid .............................: 01010710B4080560CC29010000B0000000005E00&lt;BR /&gt;  objectname ...........................: \Disk Groups\FATA&lt;BR /&gt;  objecttype ...........................: diskgroupfolder&lt;BR /&gt;  objectwwn ............................:&lt;BR /&gt;  objecthexuid .........................: 6005-08b4-0001-29cc-0000-b000-005e-0000&lt;BR /&gt;  diskgroupname ........................: FATA&lt;BR /&gt;  uid ..................................: 257.7.16.1610942644.76236.45056.6160384&lt;BR /&gt;  objectparentuid ......................: 1542.6.6.101058054.101058054.101058054.101058054&lt;BR /&gt;  objectparenthexuid ...................: 0606-0606-0606-0606-0606-0606-0606-0606&lt;BR /&gt;  objectparentid .......................: 0606060606060606060606060606060606060606&lt;BR /&gt;  comments .............................:&lt;BR /&gt;  totaldisks ...........................: 118&lt;BR /&gt;  levelingstate ........................: inactive&lt;BR /&gt;  levelingprogress .....................: 86&lt;BR /&gt;  rssdiskstate .........................: mirrored&lt;BR /&gt;  srclevelactual .......................: vraid1&lt;BR /&gt;  diskdrivetype ........................: nearonline&lt;BR /&gt;  requestedsparepolicy .................: double&lt;BR /&gt;  currentsparepolicy ...................: double&lt;BR /&gt;  totalstoragespace ....................: 55871717376&lt;BR /&gt;  totalstoragespacegb ..................: 26641.71&lt;BR /&gt;  usedstoragespace .....................: 37580242944&lt;BR /&gt;  usedstoragespacegb ...................: 17919.66&lt;BR /&gt;  occupancyalarmlevel ..................: 95&lt;BR /&gt;  operationalstate .....................: good&lt;BR /&gt;  operationalstatedetail ...............: initialized_ok&lt;BR /&gt;  vraid0storagespace ...................: 1995374592&lt;BR /&gt;  vraid0storagespacegb .................: 951.47&lt;BR /&gt;  vraid1storagespace ...................: 1995374592&lt;BR /&gt;  vraid1storagespacegb .................: 951.47&lt;BR /&gt;  vraid5storagespace ...................: 1995374592&lt;BR /&gt;  vraid5storagespacegb .................: 951.47&lt;BR /&gt;&lt;BR /&gt;What worries me is this:&lt;BR /&gt;levelingprogress .....................: 86&lt;BR /&gt;&lt;BR /&gt;It has been constant for at least the last week.&lt;BR /&gt;Is there any way to restart the leveling?&lt;BR /&gt;&lt;BR /&gt;By the way, Command View shows the following:&lt;BR /&gt;Operational state: Good&lt;BR /&gt;Leveling state: Inactive&lt;BR /&gt;Leveling progress: n/a&lt;BR /&gt;RSS Disk state: Mirrored&lt;BR /&gt;</description>
    <pubDate>Mon, 21 Dec 2009 09:22:19 GMT</pubDate>
    <dc:creator>ViS_2</dc:creator>
    <dc:date>2009-12-21T09:22:19Z</dc:date>
    <item>
      <title>EVA5000 available space?</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva5000-available-space/m-p/4552586#M33710</link>
      <description>Hi!&lt;BR /&gt;I have an EVA5000 array.&lt;BR /&gt;HSV110 3.028&lt;BR /&gt;It contains 118 FATA disks and 99 FC disks.&lt;BR /&gt;Each disk type is placed in its own disk group (2 DGs).&lt;BR /&gt;&lt;BR /&gt;The FATA disk group now shows:&lt;BR /&gt;Capacity Total: 26641.71 GB&lt;BR /&gt;Occupancy Total: 17919.65 GB&lt;BR /&gt;Alarm level: 95%&lt;BR /&gt;&lt;BR /&gt;Disk failure protection&lt;BR /&gt;Requested level: Double&lt;BR /&gt;Actual level: Double&lt;BR /&gt;&lt;BR /&gt;Available:&lt;BR /&gt;Vraid0 951.46 GB&lt;BR /&gt;Vraid1 951.46 GB&lt;BR /&gt;Vraid5 951.46 GB&lt;BR /&gt;&lt;BR /&gt;So the EVA has nearly 9 TB of free disk space.&lt;BR /&gt;Why is the available space for every Vraid level only 951.46 GB?</description>
      <pubDate>Fri, 18 Dec 2009 15:15:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva5000-available-space/m-p/4552586#M33710</guid>
      <dc:creator>ViS_2</dc:creator>
      <dc:date>2009-12-18T15:15:31Z</dc:date>
    </item>
    <item>
      <title>Re: EVA5000 available space?</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva5000-available-space/m-p/4552587#M33711</link>
      <description>It looks like you have 118 x 250 GB disks.&lt;BR /&gt;&lt;BR /&gt;Disk failure protection = double reserves the space of 4 disks.&lt;BR /&gt;&lt;BR /&gt;If the occupancy is correct, you should have 8722 GB of raw free space, and you should see 2047 GB available for all RAID levels.&lt;BR /&gt;&lt;BR /&gt;This sometimes happens after adding or removing disks: the free space is not calculated correctly. It is usually solved by performing a controller resync (which must be done by an HP technician).&lt;BR /&gt;&lt;BR /&gt;By the way, 3.028 has been out of support for years; please update to 3.110 and Command View 9.1.&lt;BR /&gt;</description>
      <pubDate>Fri, 18 Dec 2009 20:01:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva5000-available-space/m-p/4552587#M33711</guid>
      <dc:creator>Víctor Cespón</dc:creator>
      <dc:date>2009-12-18T20:01:49Z</dc:date>
    </item>
    <item>
      <title>Re: EVA5000 available space?</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva5000-available-space/m-p/4552588#M33712</link>
      <description>I also used the sssu tool to view the state of the FATA disk group.&lt;BR /&gt;Here is the result:&lt;BR /&gt;&lt;BR /&gt;\Disk Groups\FATA information:&lt;BR /&gt;object&lt;BR /&gt;  objectid .............................: 01010710B4080560CC29010000B0000000005E00&lt;BR /&gt;  objectname ...........................: \Disk Groups\FATA&lt;BR /&gt;  objecttype ...........................: diskgroupfolder&lt;BR /&gt;  objectwwn ............................:&lt;BR /&gt;  objecthexuid .........................: 6005-08b4-0001-29cc-0000-b000-005e-0000&lt;BR /&gt;  diskgroupname ........................: FATA&lt;BR /&gt;  uid ..................................: 257.7.16.1610942644.76236.45056.6160384&lt;BR /&gt;  objectparentuid ......................: 1542.6.6.101058054.101058054.101058054.101058054&lt;BR /&gt;  objectparenthexuid ...................: 0606-0606-0606-0606-0606-0606-0606-0606&lt;BR /&gt;  objectparentid .......................: 0606060606060606060606060606060606060606&lt;BR /&gt;  comments .............................:&lt;BR /&gt;  totaldisks ...........................: 118&lt;BR /&gt;  levelingstate ........................: inactive&lt;BR /&gt;  levelingprogress .....................: 86&lt;BR /&gt;  rssdiskstate .........................: mirrored&lt;BR /&gt;  srclevelactual .......................: vraid1&lt;BR /&gt;  diskdrivetype ........................: nearonline&lt;BR /&gt;  requestedsparepolicy .................: double&lt;BR /&gt;  currentsparepolicy ...................: double&lt;BR /&gt;  totalstoragespace ....................: 55871717376&lt;BR /&gt;  totalstoragespacegb ..................: 26641.71&lt;BR /&gt;  usedstoragespace .....................: 37580242944&lt;BR /&gt;  usedstoragespacegb ...................: 17919.66&lt;BR /&gt;  occupancyalarmlevel ..................: 95&lt;BR /&gt;  operationalstate .....................: good&lt;BR /&gt;  operationalstatedetail ...............: initialized_ok&lt;BR /&gt;  vraid0storagespace ...................: 1995374592&lt;BR /&gt;  vraid0storagespacegb .................: 951.47&lt;BR /&gt;  vraid1storagespace ...................: 1995374592&lt;BR /&gt;  vraid1storagespacegb .................: 951.47&lt;BR /&gt;  vraid5storagespace ...................: 1995374592&lt;BR /&gt;  vraid5storagespacegb .................: 951.47&lt;BR /&gt;&lt;BR /&gt;What worries me is this:&lt;BR /&gt;levelingprogress .....................: 86&lt;BR /&gt;&lt;BR /&gt;It has been constant for at least the last week.&lt;BR /&gt;Is there any way to restart the leveling?&lt;BR /&gt;&lt;BR /&gt;By the way, Command View shows the following:&lt;BR /&gt;Operational state: Good&lt;BR /&gt;Leveling state: Inactive&lt;BR /&gt;Leveling progress: n/a&lt;BR /&gt;RSS Disk state: Mirrored&lt;BR /&gt;</description>
      <pubDate>Mon, 21 Dec 2009 09:22:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva5000-available-space/m-p/4552588#M33712</guid>
      <dc:creator>ViS_2</dc:creator>
      <dc:date>2009-12-21T09:22:19Z</dc:date>
    </item>
    <item>
      <title>Re: EVA5000 available space?</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva5000-available-space/m-p/4552589#M33713</link>
      <description>More inconsistencies: it says the leveling is at 86% but inactive...&lt;BR /&gt;If it has been stuck there for a week, a resync would be a good idea to make the controllers sort out the situation.&lt;BR /&gt;Open a call with HP to arrange this.</description>
      <pubDate>Mon, 21 Dec 2009 10:08:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva5000-available-space/m-p/4552589#M33713</guid>
      <dc:creator>Víctor Cespón</dc:creator>
      <dc:date>2009-12-21T10:08:17Z</dc:date>
    </item>
    <item>
      <title>Re: EVA5000 available space?</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva5000-available-space/m-p/4552590#M33714</link>
      <description>You could try rebooting the controllers one at a time from CV, this often clears up these issues on the old firmware.</description>
      <pubDate>Tue, 22 Dec 2009 16:12:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva5000-available-space/m-p/4552590#M33714</guid>
      <dc:creator>Greybeard</dc:creator>
      <dc:date>2009-12-22T16:12:13Z</dc:date>
    </item>
  </channel>
</rss>