<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic snapclone or mirrorclone in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292260#M29014</link>
    <description>We have expanded our EVA8000 into a new cabinet, so it is very busy in terms of I/O, not to mention that a number of vdisks are being CA'd. The new expansion cabinet has its own disk group, so we need to move a few vdisks across to that new disk group to free up space in our existing disk group.&lt;BR /&gt;&lt;BR /&gt;Our options are snapclone or mirrorclone, and I would like to know which one is less likely to affect performance. Only one vdisk will be done at a time, outside of normal hours. Some are up to (and over) 2GB in size, but none are CA'd.&lt;BR /&gt;&lt;BR /&gt;For the snapclone method we would mark the vdisk as write-through, shut down the server, create a snapclone to a container in the new group, unpresent the original vdisk, and then present the snapclone to the server before restarting it. We can then leave the snapclone to complete overnight or over the weekend.&lt;BR /&gt;&lt;BR /&gt;With a mirrorclone we have to create a mirrorclone in the new disk group, wait for the mirrorclone to complete, shut down the server, detach &amp;amp; fracture the mirrorclone, unpresent the original vdisk, and present the mirrorclone before starting the server back up.&lt;BR /&gt;&lt;BR /&gt;Is there a preferred method? Is either option better in terms of performance or time to complete?&lt;BR /&gt;</description>
    <pubDate>Wed, 22 Oct 2008 14:38:11 GMT</pubDate>
    <dc:creator>ben horan</dc:creator>
    <dc:date>2008-10-22T14:38:11Z</dc:date>
    <item>
      <title>snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292260#M29014</link>
      <description>We have expanded our EVA8000 into a new cabinet, so it is very busy in terms of I/O, not to mention that a number of vdisks are being CA'd. The new expansion cabinet has its own disk group, so we need to move a few vdisks across to that new disk group to free up space in our existing disk group.&lt;BR /&gt;&lt;BR /&gt;Our options are snapclone or mirrorclone, and I would like to know which one is less likely to affect performance. Only one vdisk will be done at a time, outside of normal hours. Some are up to (and over) 2GB in size, but none are CA'd.&lt;BR /&gt;&lt;BR /&gt;For the snapclone method we would mark the vdisk as write-through, shut down the server, create a snapclone to a container in the new group, unpresent the original vdisk, and then present the snapclone to the server before restarting it. We can then leave the snapclone to complete overnight or over the weekend.&lt;BR /&gt;&lt;BR /&gt;With a mirrorclone we have to create a mirrorclone in the new disk group, wait for the mirrorclone to complete, shut down the server, detach &amp;amp; fracture the mirrorclone, unpresent the original vdisk, and present the mirrorclone before starting the server back up.&lt;BR /&gt;&lt;BR /&gt;Is there a preferred method? Is either option better in terms of performance or time to complete?&lt;BR /&gt;</description>
      <pubDate>Wed, 22 Oct 2008 14:38:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292260#M29014</guid>
      <dc:creator>ben horan</dc:creator>
      <dc:date>2008-10-22T14:38:11Z</dc:date>
    </item>
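The two procedures described in the opening post can be written as a plain checklist (no EVA API is assumed here; step wording is paraphrased from the post). Laid out this way, both methods involve a similar number of steps inside the outage window; the real difference is that the snapclone's bulk copy runs in the background after the outage, while the mirrorclone must finish copying before the outage starts:

```python
# Checklist form of the two migration procedures from the post.
# "down" marks steps performed while the server is shut down.
SNAPCLONE = [
    ("set vdisk cache to write-through",                 "up"),
    ("shut down server",                                 "down"),
    ("create snapclone to container in new disk group",  "down"),
    ("unpresent original vdisk",                         "down"),
    ("present snapclone to server",                      "down"),
    ("restart server; copy completes in background",     "up"),
]

MIRRORCLONE = [
    ("create mirrorclone in new disk group",             "up"),
    ("wait for mirrorclone copy to complete",            "up"),
    ("shut down server",                                 "down"),
    ("detach and fracture mirrorclone",                  "down"),
    ("unpresent original vdisk",                         "down"),
    ("present mirrorclone; restart server",              "down"),
]

def outage_steps(plan):
    """Steps that must happen inside the maintenance window."""
    return [step for step, state in plan if state == "down"]
```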
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292261#M29015</link>
      <description>Hi,&lt;BR /&gt;Maybe the least performance-demanding method is simply ungrouping the HDDs from the original DG and grouping them into the new DG.</description>
      <pubDate>Wed, 22 Oct 2008 14:45:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292261#M29015</guid>
      <dc:creator>IBaltay</dc:creator>
      <dc:date>2008-10-22T14:45:54Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292262#M29016</link>
      <description>The above suggestion is valid according to the EVA performance best practices if both DGs are built from identical HDDs. So if there is a possibility of not touching the vdisks and only resizing the DGs, it can be done without snapclones/mirrorclones.</description>
      <pubDate>Wed, 22 Oct 2008 14:55:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292262#M29016</guid>
      <dc:creator>IBaltay</dc:creator>
      <dc:date>2008-10-22T14:55:29Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292263#M29017</link>
      <description>If the above cannot be used, then only the mirrorclone is usable, because it is a continuous local copy...</description>
      <pubDate>Wed, 22 Oct 2008 14:58:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292263#M29017</guid>
      <dc:creator>IBaltay</dc:creator>
      <dc:date>2008-10-22T14:58:38Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292264#M29018</link>
      <description>Because our existing DG has over 70 HDDs and the new EVA8000 expansion cabinet has over 40 HDDs, we made a decision not to group these 40 disks into the existing DG, as the levelling operation would be very large and encroach into the working day. Instead we are going to move individual vdisks into the new disk group to create space in the existing disk group.</description>
      <pubDate>Wed, 22 Oct 2008 15:13:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292264#M29018</guid>
      <dc:creator>ben horan</dc:creator>
      <dc:date>2008-10-22T15:13:18Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292265#M29019</link>
      <description>I am sending you the EVA config best practices, because&lt;BR /&gt;a) the more spindles, the more performance in the DG&lt;BR /&gt;b) 70 is not enough to be afraid of any perf degradation yet...&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71028.www7.hp.com/ERC/downloads/4AA0-2787ENW.pdf?jumpid=reg_R1002_USEN" target="_blank"&gt;http://h71028.www7.hp.com/ERC/downloads/4AA0-2787ENW.pdf?jumpid=reg_R1002_USEN&lt;/A&gt;</description>
      <pubDate>Wed, 22 Oct 2008 15:18:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292265#M29019</guid>
      <dc:creator>IBaltay</dc:creator>
      <dc:date>2008-10-22T15:18:10Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292266#M29020</link>
      <description>"the levelling operation would be very large and encroach into the working day."&lt;BR /&gt;&lt;BR /&gt;I would not worry about a leveling process decreasing the performance of the disk group so much that it affects your users in a bad way.&lt;BR /&gt;&lt;BR /&gt;Generally speaking, leveling is a background process on the controllers that people ("users") don't even notice.&lt;BR /&gt;&lt;BR /&gt;Have you had a bad experience with a leveling process before?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Steven</description>
      <pubDate>Wed, 22 Oct 2008 15:35:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292266#M29020</guid>
      <dc:creator>Steven Clementi</dc:creator>
      <dc:date>2008-10-22T15:35:46Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292267#M29021</link>
      <description>Our EVA is fully populated with 300GB FC disks: one DG of 95 disks and one DG of 73 disks. We have only 1TB free in each DG, so they are very full. The new expansion cab has 40 disks. We feel that if we add the disks to the existing DGs, levelling will take a very long time, and even though it is a "background task" there may be performance degradation.</description>
      <pubDate>Wed, 22 Oct 2008 15:44:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292267#M29021</guid>
      <dc:creator>ben horan</dc:creator>
      <dc:date>2008-10-22T15:44:05Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292268#M29022</link>
      <description>But the EVA configuration best practices do not confirm your concern, and you may get performance degradation within the new DG (only 40 disks) in comparison with the first (95) or second (73) existing DG.</description>
      <pubDate>Wed, 22 Oct 2008 15:52:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292268#M29022</guid>
      <dc:creator>IBaltay</dc:creator>
      <dc:date>2008-10-22T15:52:02Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292269#M29023</link>
      <description>Ben,&lt;BR /&gt;&lt;BR /&gt;The EVA is designed to work best with larger disk groups that are evenly divisible by 8. So, by breaking up your 168 disks (21 RSSs of 8 drives each) into 2 disk groups of 73 and 95, you are actually forcing your EVA to run slower. The disk group with 73 will create 8 RSSs of 8 drives and 1 RSS of 9 drives. The disk group with 95 will create 11 RSSs of 8 drives and 1 RSS of 7 drives. These partial RSSs of 7 and 9 drives will cause a performance issue.&lt;BR /&gt;&lt;BR /&gt;The best thing for you to do is:&lt;BR /&gt;&lt;BR /&gt;1. Add 33 of your new drives to the disk group with 95 disks in it.&lt;BR /&gt;2. Remove one drive from your disk group with 73 in it and add it, along with the remaining 7 new disk drives, to the disk group that now has 128 disks in it.&lt;BR /&gt;3. Use the snapclone functionality to migrate vdisks to the larger disk group.&lt;BR /&gt;4. Remove 8 drives at a time from the smaller disk group and add them into the larger disk group in groups of 8 until you have only one disk group of 208 disk drives.&lt;BR /&gt;&lt;BR /&gt;Once this is done, you are spreading your I/O over 208 spindles instead of only 73, 95 or 40 disk drives. More spindles means more I/O throughput.&lt;BR /&gt;&lt;BR /&gt;Phil</description>
      <pubDate>Wed, 22 Oct 2008 20:03:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292269#M29023</guid>
      <dc:creator>Phillip Thayer</dc:creator>
      <dc:date>2008-10-22T20:03:21Z</dc:date>
    </item>
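The RSS arithmetic quoted in the previous post can be sketched as a small model. This is a simplification (actual EVA firmware balances RSS membership dynamically, typically keeping sets between 6 and 11 members), but it reproduces the figures given for the 73-, 95-, and 168-disk groups:

```python
def rss_sizes(n_disks, target=8, min_rss=6):
    """Simplified model of how a disk group splits into Redundant
    Storage Sets (RSSs) of ideally `target` drives. Assumption: a
    remainder of at least `min_rss` drives forms its own partial RSS;
    a smaller remainder folds into the last full set."""
    full, rem = divmod(n_disks, target)
    if rem == 0:
        return [target] * full
    if rem >= min_rss:
        # remainder is large enough to stand as its own (partial) RSS
        return [target] * full + [rem]
    # small remainder merges into the final set, making it oversized
    return [target] * (full - 1) + [target + rem]
```

For example, 73 disks yield eight RSSs of 8 plus one of 9, and 95 disks yield eleven RSSs of 8 plus one of 7, matching the post; a group size divisible by 8 (such as 168 or 208) avoids partial sets entirely.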
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292270#M29024</link>
      <description>You guys are missing the fact that Ben's main concern is that when grouping the new disks, his users might notice some performance degradation while the DG levels.&lt;BR /&gt;&lt;BR /&gt;Personally, I don't think the users will see any difference in performance. Sure, there will be some difference... but I think it is negligible compared to the performance cut they might notice after you move virtual disks to a DG with 40 disks vs. 73 or 95 disks.&lt;BR /&gt;&lt;BR /&gt;Even then, they would have to be a high-performance user with a pretty high I/O profile on the EVA to notice any really big change.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The short story is, I think you would be fine grouping the 40 disks in with one of your other groups, or both (splitting them to give both groups space, and yes, trying to keep with the multiple-of-8 best practice). Even if you started 6pm Friday and the leveling was well along its way by Monday morning, I think the price you'd pay is less than if you had a 3rd DG.&lt;BR /&gt;&lt;BR /&gt;Facts...&lt;BR /&gt;&lt;BR /&gt;Protection level is PER disk group. You will lose additional raw space by creating a 3rd DG.&lt;BR /&gt;&lt;BR /&gt;"partial RSS's consisting of 7 and 9 drives will cause a performance issue." - I have never actually seen any performance issues when there is an RSS of less/more than 8. It simply means that redundancy is not optimal in 1 set of disks.&lt;BR /&gt;&lt;BR /&gt;"Best thing for you to do is:" - might not be to combine all of your disks into 1 disk group. Lots of reasoning goes into the decision to have a single or multiple disk groups... and we do not know how your EVA came to have 2 disk groups.&lt;BR /&gt;&lt;BR /&gt;Different applications require different I/O performance, internal politics sometimes plays a role in the decision to have one or multiple DGs, etc.&lt;BR /&gt;&lt;BR /&gt;It is very easy to state the best practice(s).&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Steven</description>
      <pubDate>Thu, 23 Oct 2008 00:31:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292270#M29024</guid>
      <dc:creator>Steven Clementi</dc:creator>
      <dc:date>2008-10-23T00:31:29Z</dc:date>
    </item>
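Steven's point that protection level is reserved per disk group can be put in rough numbers. Assuming the usual EVA rule of thumb that each protection level reserves about two disks' worth of capacity in every disk group (a simplification: the actual reservation is based on the largest disks in the group and is distributed across members, not held on dedicated spares):

```python
def reserved_sparing_gb(disk_gb, protection_level, n_groups):
    """Rough model of raw capacity reserved for sparing across an array.
    Assumption: each disk group reserves 2 disks' worth of capacity per
    protection level (0 = none, 1 = single, 2 = double)."""
    return 2 * protection_level * disk_gb * n_groups

# With 300GB disks at single protection, going from two disk groups
# to three reserves roughly two more disks' worth of raw space.
extra = reserved_sparing_gb(300, 1, 3) - reserved_sparing_gb(300, 1, 2)
```

Under these assumptions a third disk group of 300GB disks at single protection costs about 600GB of additional raw capacity, which is the trade-off behind keeping the number of disk groups small.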
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292271#M29025</link>
      <description>Thanks guys - especially Steven. After your comments I think we will go for integrating the expansion cabinet's 40 disks into the existing disk groups. We will also aim to make the disk numbers in the DGs divisible by 8. We need to stay with the 2 DGs, as one holds a very sensitive email application and management likes it to be in its own DG.&lt;BR /&gt;&lt;BR /&gt;Is there any best practice as to how many disks we can add at one time? i.e. is 32 in one go okay?</description>
      <pubDate>Thu, 23 Oct 2008 07:50:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292271#M29025</guid>
      <dc:creator>ben horan</dc:creator>
      <dc:date>2008-10-23T07:50:47Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292272#M29026</link>
      <description>&amp;gt; Is there any best practice as to how many disks we can add at one time? i.e. is 32 in one go okay?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;HDD installation&lt;BR /&gt;&lt;BR /&gt;HP recommends installing (not grouping) a maximum of 4 HDDs at one time. The procedure is the following:&lt;BR /&gt;1. Insert no more than 4 physical disks.&lt;BR /&gt;2. Wait until the activity indicator on each inserted drive becomes solid green and remains solid for 10 seconds.&lt;BR /&gt;3. Proceed with the next 4 disks, repeating until all 32 are installed.&lt;BR /&gt;&lt;BR /&gt;HDD grouping into the DGs&lt;BR /&gt;The best approach is to check the original RSS layout. If you have any RSS with 6 members, I would first saturate those hungry ones and then add the disks in groups of 8 to keep a full vertical layout, which should be fairly easy with 18 enclosures.&lt;BR /&gt;</description>
      <pubDate>Thu, 23 Oct 2008 09:27:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292272#M29026</guid>
      <dc:creator>IBaltay</dc:creator>
      <dc:date>2008-10-23T09:27:19Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292273#M29027</link>
      <description>&amp;gt; HP recommends installing (not grouping) a maximum of 4 HDDs at one time.&lt;BR /&gt;&lt;BR /&gt;I strongly recommend installing one disk drive at a time and waiting until it has been properly recognized in Command View EVA. Yes, this takes a lot of time, but I still see EVAs with duplicate disk drive names caused by CV-EVA.</description>
      <pubDate>Thu, 23 Oct 2008 09:32:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292273#M29027</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2008-10-23T09:32:50Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292274#M29028</link>
      <description>We will add disks to the expansion cabinet one at a time to be safe.&lt;BR /&gt;&lt;BR /&gt;We do actually have 2 disks with the same name in another disk group. Is this okay? Is there any fix required?</description>
      <pubDate>Thu, 23 Oct 2008 12:26:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292274#M29028</guid>
      <dc:creator>ben horan</dc:creator>
      <dc:date>2008-10-23T12:26:29Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292275#M29029</link>
      <description>Each disk has its own unique WWN, so the logical name should not be a problem.</description>
      <pubDate>Thu, 23 Oct 2008 13:03:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292275#M29029</guid>
      <dc:creator>IBaltay</dc:creator>
      <dc:date>2008-10-23T13:03:18Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292276#M29030</link>
      <description>It is a problem, because one of the disks with a duplicate name is not completely processed by CV-EVA. For example:&lt;BR /&gt;- it is not included in the disk counts&lt;BR /&gt;- last time I was searching for one, its icon did not appear in the disk group hierarchy&lt;BR /&gt;&lt;BR /&gt;I find it very confusing if an incorrect number of disk drives is shown ;-)</description>
      <pubDate>Thu, 23 Oct 2008 13:13:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292276#M29030</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2008-10-23T13:13:25Z</dc:date>
    </item>
    <item>
      <title>Re: snapclone or mirrorclone</title>
      <link>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292277#M29031</link>
      <description>&amp;gt; Is there any fix required?&lt;BR /&gt;&lt;BR /&gt;Sorry, Ben, missed the question.&lt;BR /&gt;In that case I simply give the visible disk a different name within CV-EVA. After a refresh / new discovery, the view should be correct.</description>
      <pubDate>Thu, 23 Oct 2008 13:17:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/snapclone-or-mirrorclone/m-p/4292277#M29031</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2008-10-23T13:17:10Z</dc:date>
    </item>
  </channel>
</rss>

