<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: EVA Releveling in HPE EVA Storage</title>
    <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611252#M14155</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;"The Best Practice" says that you should wait for the reconstruction to complete, not leveling.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h200005.www2.hp.com/bc/docs/support/SupportManual/lpg29448/lpg29448.pdf" target="_blank"&gt;http://h200005.www2.hp.com/bc/docs/support/SupportManual/lpg29448/lpg29448.pdf&lt;/A&gt;, page 8&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Mario.</description>
    <pubDate>Thu, 25 Aug 2005 07:21:31 GMT</pubDate>
    <dc:creator>Mario_66</dc:creator>
    <dc:date>2005-08-25T07:21:31Z</dc:date>
    <item>
      <title>EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611244#M14147</link>
      <description>When a disk fails, the EVA relevels and distributes the data across the remaining disks.  In a frame with 168 disks in one group, configured for maximum performance and with 4 disks for double sparing (protection level), the rebuild is taking multiple days.  The EVAs are running at 95%+ occupancy.  The latest Best Practices document states that only 5 GB of free space is required for maintenance.  Free space has nothing to do with the rebuild process, other than there being less to rebuild.  Host I/Os accessing the EVA during the rebuild impact the process.  Is there anything other than limiting host I/Os that will speed up this process?  I cannot find any documentation on how the rebuild process works.  I am concerned that when an EVA is at 98% and is rebuilding, another failure will corrupt data.</description>
      <pubDate>Wed, 24 Aug 2005 20:07:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611244#M14147</guid>
      <dc:creator>Bill Mace</dc:creator>
      <dc:date>2005-08-24T20:07:09Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611245#M14148</link>
      <description>A disk failure is followed by two tasks:&lt;BR /&gt;&lt;BR /&gt;- the recovery&lt;BR /&gt;it does the minimum work needed to restore redundancy&lt;BR /&gt;&lt;BR /&gt;- the leveling&lt;BR /&gt;it makes sure that user data is distributed equally over the disks&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If another disk fails during recovery, whether data loss results depends on the 'location' of that disk (same as in a traditional RAID system). The EVA divides its disk groups into separate failure domains called RSSs (Redundant Storage Sets). It can tolerate the loss of multiple disks as long as each failed disk belongs to a different RSS.</description>
      <pubDate>Thu, 25 Aug 2005 01:44:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611245#M14148</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-08-25T01:44:52Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611246#M14149</link>
      <description>Uwe,&lt;BR /&gt;      I totally agree, but the EVA is at risk during the reconstruction of the RSSs.  If the RSS reconstruction is complete, then the EVA can handle another failure and restart a new releveling process.  I understand that it reconstructs the RSS, then relevels Vraid5, then relevels Vraid1.  The only critical time is during the reconstruction of the RSSs.  Outside of suspending I/Os (not possible in production), is there any other way to shorten the releveling time while still using a 168-disk group?  The EVAs are all running at 98%+ occupancy.&lt;BR /&gt;     Thanks, Bill&lt;BR /&gt;</description>
      <pubDate>Thu, 25 Aug 2005 03:44:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611246#M14149</guid>
      <dc:creator>Bill Mace</dc:creator>
      <dc:date>2005-08-25T03:44:59Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611247#M14150</link>
      <description>Bill,&lt;BR /&gt;&lt;BR /&gt;this is no different from any other storage array that is not running some kind of ADG/RAID-6/RAID-5DP or whatever the vendor calls its implementation of 'double protection'. If too many disks fail, the data is gone :-(&lt;BR /&gt;&lt;BR /&gt;I am not aware of any way to 'tune' the leveling process: make it faster or slower, suspend it...</description>
      <pubDate>Thu, 25 Aug 2005 04:38:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611247#M14150</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-08-25T04:38:48Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611248#M14151</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;can you clarify your first statement: the rebuild takes multiple days?&lt;BR /&gt;&lt;BR /&gt;What do you mean by that? Reconstruction, or migration with or without leveling?&lt;BR /&gt;&lt;BR /&gt;Can you point me to the best practices document that you mentioned here?&lt;BR /&gt;&lt;BR /&gt;If I understand you correctly, the most problematic situation would be (Vraid0 is problematic by design ;o):&lt;BR /&gt;&lt;BR /&gt;Vraid5, one failed disk, and during the reconstruction phase another disk in the same RSS fails. But that does not necessarily mean that you lose your data. It depends on the failure type.&lt;BR /&gt;&lt;BR /&gt;I would say that this is not an EVA-specific issue. It is a limitation of RAID5 by design, if we are talking about RAID5 by definition.&lt;BR /&gt;&lt;BR /&gt;BTW, I would not say that more free space will not speed up the rebuild process.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;M.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 25 Aug 2005 04:59:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611248#M14151</guid>
      <dc:creator>Mario_66</dc:creator>
      <dc:date>2005-08-25T04:59:51Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611249#M14152</link>
      <description>I am having the same problem... The question here is not one of redundancy, but of the time it takes to relevel an entire disk group.&lt;BR /&gt;I have pinged HP on this and was told there is no way to "prioritize" the releveling process. What makes it worse is that the more host I/O there is, the longer leveling takes....</description>
      <pubDate>Thu, 25 Aug 2005 05:17:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611249#M14152</guid>
      <dc:creator>David Ell</dc:creator>
      <dc:date>2005-08-25T05:17:56Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611250#M14153</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;that's true, but what is the problem if the leveling runs longer than expected? &lt;BR /&gt;&lt;BR /&gt;I am not aware of any impact on data availability or data integrity. Leveling is an optimization process, and most storage arrays have some form of it.&lt;BR /&gt;&lt;BR /&gt;Maybe I missed something?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;M.</description>
      <pubDate>Thu, 25 Aug 2005 06:33:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611250#M14153</guid>
      <dc:creator>Mario_66</dc:creator>
      <dc:date>2005-08-25T06:33:09Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611251#M14154</link>
      <description>It is the management overhead... The best practices document states that you should wait for leveling to complete before replacing a drive. In addition, if you are experiencing drive failures often, you would never "catch up".</description>
      <pubDate>Thu, 25 Aug 2005 07:05:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611251#M14154</guid>
      <dc:creator>David Ell</dc:creator>
      <dc:date>2005-08-25T07:05:29Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611252#M14155</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;"The Best Practice" says that you should wait for the reconstruction to complete, not leveling.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h200005.www2.hp.com/bc/docs/support/SupportManual/lpg29448/lpg29448.pdf" target="_blank"&gt;http://h200005.www2.hp.com/bc/docs/support/SupportManual/lpg29448/lpg29448.pdf&lt;/A&gt;, page 8&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Mario.</description>
      <pubDate>Thu, 25 Aug 2005 07:21:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611252#M14155</guid>
      <dc:creator>Mario_66</dc:creator>
      <dc:date>2005-08-25T07:21:31Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611253#M14156</link>
      <description>Our Gold TAM recommends waiting for leveling to complete. This is interesting, though.</description>
      <pubDate>Thu, 25 Aug 2005 07:23:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611253#M14156</guid>
      <dc:creator>David Ell</dc:creator>
      <dc:date>2005-08-25T07:23:59Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611254#M14157</link>
      <description>I agree that you should wait for the "reconstruction" to complete, not leveling. &lt;BR /&gt;&lt;BR /&gt;Also, the disk state should be ungrouped, not migrating.&lt;BR /&gt;&lt;BR /&gt;But what I want to know is where in the best practices you see that only 5 GB is needed for maintenance. Our recommendation was to leave 10% of total space unused.&lt;BR /&gt;&lt;BR /&gt;Can you post the link?&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;</description>
      <pubDate>Thu, 25 Aug 2005 08:19:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611254#M14157</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2005-08-25T08:19:42Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611255#M14158</link>
      <description>Ignore my request; it is in the document above.&lt;BR /&gt;&lt;BR /&gt;Thank you!</description>
      <pubDate>Thu, 25 Aug 2005 08:29:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611255#M14158</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2005-08-25T08:29:41Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611256#M14159</link>
      <description>Ivan, the best practices document link is posted above by Mario.  The reference I made to 5 GB is on page 18.&lt;BR /&gt;&lt;BR /&gt;Mario, the EVA is not like traditional RAID 5.  The sparing level (0, 1, 2) determines the number of drives to use for releveling.&lt;BR /&gt;More free space means less data to move, so releveling is faster.  But at 98%+ utilization it takes a long time (multiple days).  As David pointed out, sometimes a failure occurs during this process; that drive then has to be migrated out, recovery of the RSS happens again, and then the rebuild process starts.&lt;BR /&gt;Host I/Os must continue, and this is what additionally slows the rebuild.&lt;BR /&gt;&lt;BR /&gt;The data is in jeopardy during the recovery of the RSS, and no one seems to know how long this takes.&lt;BR /&gt;&lt;BR /&gt;Running at 99%+ occupancy, I have seen the sparing level drop from Double requested to Single available.  Going one step further, I have seen Double requested and NONE available.  I believe that this is an error and attribute it to VCS 3.014.  We have arrays running 3.014, 3.020 and 3.025.&lt;BR /&gt;With None (sparing level) available, I feel that this is a time when another failure will lead to data corruption, and I cannot get a definite answer on how long the recovery phase takes.  &lt;BR /&gt;&lt;BR /&gt;David, as you pointed out, I have had failures during the releveling and it has started all over; so far no data lost.&lt;BR /&gt;&lt;BR /&gt;Bill&lt;BR /&gt;</description>
      <pubDate>Thu, 25 Aug 2005 08:42:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611256#M14159</guid>
      <dc:creator>Bill Mace_1</dc:creator>
      <dc:date>2005-08-25T08:42:51Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611257#M14160</link>
      <description>Ivan,&lt;BR /&gt;&lt;BR /&gt;Consider the new 300 GB drives coming out.  On an EVA with 168 drives, 10% means that 5 TB raw is kept as free space.  HP stated in the best practices that only 5 GB is required for maintenance.  That tells me that I could run at 99.99 percent occupancy as long as I have 5 GB available for VCS code loads.&lt;BR /&gt;&lt;BR /&gt;Bill</description>
      <pubDate>Thu, 25 Aug 2005 08:48:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611257#M14160</guid>
      <dc:creator>Bill Mace_1</dc:creator>
      <dc:date>2005-08-25T08:48:31Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611258#M14161</link>
      <description>Thanks, Bill. We had 1 TB of free space to follow the HP recommendations.&lt;BR /&gt;&lt;BR /&gt;I will raise the occupancy level to 95%. I will keep enough free space to ensure leveling and data reconstruction, as free space is used as sparing space.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 25 Aug 2005 08:56:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611258#M14161</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2005-08-25T08:56:37Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611259#M14162</link>
      <description>&amp;gt; The sparing level (0, 1, 2) determines the number of drives to use for releveling.&lt;BR /&gt;&lt;BR /&gt;I've never heard that claim before, and I've been working with the EVA for 3.5+ years...&lt;BR /&gt;&lt;BR /&gt;It sounds like you're talking about the 'protection space', which is set aside in place of any dedicated spare disks. It has nothing to do with leveling.&lt;BR /&gt;&lt;BR /&gt;0 = no reservation&lt;BR /&gt;1 = 2x size of largest disk drive in group&lt;BR /&gt;2 = 4x size of largest disk drive in group&lt;BR /&gt;&lt;BR /&gt;The 'leveling' will always go over all disk drives in the group.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; The data is in jeopardy during the recovery of the RSS and&lt;BR /&gt;&amp;gt; no one seems to know how long this takes.&lt;BR /&gt;&lt;BR /&gt;Of course not. It depends on the size of the RSS (number of disk drives), the speed of the disks, the amount of data to recover, the VRAID level of the data, and concurrency with host I/O.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Running at 99%+ occupancy, I have seen the sparing level&lt;BR /&gt;&amp;gt; drop from Double requested to Single available.&lt;BR /&gt;&lt;BR /&gt;After a disk failure, I'd say. Add back a disk, do not create additional Vdisks, and it should go back to Double.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Going one step further I have seen Double requested&lt;BR /&gt;&amp;gt; and NONE available.&lt;BR /&gt;&lt;BR /&gt;Lost two unmarried disks?&lt;BR /&gt;&lt;BR /&gt;&amp;gt; With None (sparing level) available I feel that this is&lt;BR /&gt;&amp;gt; a time when another failure will lead to data corruption&lt;BR /&gt;&lt;BR /&gt;ANY disk failure will trigger a recovery attempt. Whether it succeeds depends on whether there is enough free space. It can come from the protection space OR the free space for Vdisks; preference is given to the latter.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Consider the new 300 GB drives coming out.&lt;BR /&gt;&lt;BR /&gt;I've already installed an EVA with those disks, so they are there.&lt;BR /&gt;&lt;BR /&gt;-----&lt;BR /&gt;&lt;BR /&gt;Bill, you post from two different accounts:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/publicProfile.do?userId=CA1315413&amp;amp;forumId=1" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/publicProfile.do?userId=CA1315413&amp;amp;forumId=1&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/publicProfile.do?userId=CA1317758&amp;amp;forumId=1" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/publicProfile.do?userId=CA1317758&amp;amp;forumId=1&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;-----&lt;BR /&gt;&lt;BR /&gt;Ivan,&lt;BR /&gt;&amp;gt; I will raise the occupancy level to 95%.&lt;BR /&gt;&amp;gt; I will keep enough free space to ensure leveling and&lt;BR /&gt;&amp;gt; data reconstruction, as free space is used as sparing space.&lt;BR /&gt;&lt;BR /&gt;Define a protection level &amp;gt; 0 and you have set aside space for reconstruction - that's its purpose. The occupancy level is just a warning high-water mark.</description>
      <pubDate>Thu, 25 Aug 2005 09:32:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611259#M14162</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-08-25T09:32:02Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611260#M14163</link>
      <description>My protection level is double, but leaving "extra" free space: if you lose a disk, the free space will be used first to reconstruct the data. If not enough free space is available, then the "spare space" (configured via the protection level) will be used.</description>
      <pubDate>Thu, 25 Aug 2005 09:44:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611260#M14163</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2005-08-25T09:44:05Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611261#M14164</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;Uwe, you should be faster next time. I had almost finished my reply and now it is worthless :)).&lt;BR /&gt;&lt;BR /&gt;BTW, I think that in the case that the EVA does not have enough unallocated space, it will temporarily use space dedicated for protection as space for the leveling process. I am not sure whether that would be visible as a decreased protection level. I have never tried it.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;M.</description>
      <pubDate>Thu, 25 Aug 2005 09:58:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611261#M14164</guid>
      <dc:creator>Mario_66</dc:creator>
      <dc:date>2005-08-25T09:58:29Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611262#M14165</link>
      <description>I apologize for being such an underperformer ;-)</description>
      <pubDate>Thu, 25 Aug 2005 10:24:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611262#M14165</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-08-25T10:24:09Z</dc:date>
    </item>
    <item>
      <title>Re: EVA Releveling</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611263#M14166</link>
      <description>Uwe,&lt;BR /&gt;&lt;BR /&gt;"Disk failure protection" is the correct term; I referred to it as sparing level.&lt;BR /&gt;&lt;BR /&gt;The number of drives is 168 in the default disk group, and they are 146 GB on most arrays.  The 99% occupancy is true for about 85% of the EVAs currently installed.&lt;BR /&gt;&lt;BR /&gt;I have two user IDs because when I tried to respond this morning it would not let me in and said I was not registered.  I did that stupid thing of logging in automatically on my home machine, and they could not find me; hence another ID to access this forum in the hope that I could get an answer to my posts.&lt;BR /&gt;&lt;BR /&gt;You are also correct that the disk failure protection level determines the amount of disk space set aside: 2 x the largest drive size x the protection level selected.&lt;BR /&gt;&lt;BR /&gt;I still haven't found out how much time elapses from the start of reconstruction/recovery of the RSS to the time it starts the releveling.  That is the time I am concerned with, as that is when I believe the EVA is susceptible to corruption.  I do not know whether it is 20 microseconds or 20 minutes.  Another factor I know affects the releveling is the size of the vdisks.  Smaller vdisks level quicker, but I did not want to go there.  The disk group is also Vraid5 only, so it does not have to do leveling of Vraid1.&lt;BR /&gt;&lt;BR /&gt;Now, to make it interesting, say that over a period of 2 years the group has had multiple disk failures.  Say 20 disks have been replaced (please don't quote the 3% industry standard failure rate); then it is very possible that the RSSs have been compromised.  Is that not correct?  &lt;BR /&gt;&lt;BR /&gt;Is the RSS disk state (None, Parity, Mirrored) an indication of the health of the RSSs?&lt;BR /&gt;&lt;BR /&gt;Bill</description>
      <pubDate>Thu, 25 Aug 2005 12:03:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/eva-releveling/m-p/3611263#M14166</guid>
      <dc:creator>Bill Mace_1</dc:creator>
      <dc:date>2005-08-25T12:03:05Z</dc:date>
    </item>
  </channel>
</rss>

