<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Simultaneously SnapClone &amp; Perf. Issue in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/simultaneously-snapclone-amp-perf-issue/m-p/4583772#M34223</link>
    <description>Hi Juan,&lt;BR /&gt;&lt;BR /&gt;Personally, I don't think how the EVA receives the instruction (SSSU, RSM, etc.) is significant. The commands work, but they cause a performance issue, which suggests a heavily loaded array. They might want to pick a better time of day for the snapclones, if possible. It sounds like the copy operations are pushing the array to its limits, so I would recommend running EVAperf and going from there.&lt;BR /&gt;</description>
    <pubDate>Wed, 17 Feb 2010 20:36:30 GMT</pubDate>
    <dc:creator>DogBytes</dc:creator>
    <dc:date>2010-02-17T20:36:30Z</dc:date>
    <item>
      <title>Simultaneously SnapClone &amp; Perf. Issue</title>
      <link>https://community.hpe.com/t5/disk-enclosures/simultaneously-snapclone-amp-perf-issue/m-p/4583771#M34222</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I need to clarify a question about SnapClone creation and scheduling for my customer. Last week, during a SnapClone creation driven by SSSU scripting, they experienced a loss of performance and, in some cases, disk access issues reported in the log files of HP-UX and Wintel systems.&lt;BR /&gt;&lt;BR /&gt;The scenario:&lt;BR /&gt;&lt;BR /&gt;Two EVA 8000 arrays with VCS 6220 &amp;amp; CVE 9.1, replicated against each other in a CA configuration with two DWDM (2 GB) links between the sites.&lt;BR /&gt;Each array has two disk groups, one with FC disks and the other with FATA disks (all with the latest FW bundle).&lt;BR /&gt;No hardware issues occurred during the operation.&lt;BR /&gt;No CA issues were registered during the operation.&lt;BR /&gt;Unfortunately, no performance data was collected.&lt;BR /&gt;&lt;BR /&gt;The customer's procedure:&lt;BR /&gt;&lt;BR /&gt;Using a very basic and simple SSSU script (copied and pasted into a DOS window on the SMA), they launched 5 SnapClone creations at the following times, as reported in the EVA's log:&lt;BR /&gt;&lt;BR /&gt;00:23:29 First clone inside the FATA group (origin &amp;amp; destination); size in blocks: 1048576000; redundancy type: Vraid5&lt;BR /&gt;00:26:00 Second clone inside the FATA group (origin &amp;amp; destination); size in blocks: 1048576000; redundancy type: Vraid5&lt;BR /&gt;00:32:15 Third clone inside the FATA group (origin &amp;amp; destination); size in blocks: 1048576000; redundancy type: Vraid5&lt;BR /&gt;00:39:48 Fourth clone inside the FATA group (origin &amp;amp; destination); size in blocks: 1048576000; redundancy type: Vraid5&lt;BR /&gt;00:43:00 Fifth clone inside the FATA group (origin &amp;amp; destination); size in blocks: 1048576000; redundancy type: Vraid5&lt;BR /&gt;&lt;BR /&gt;All of them were managed by the same controller (as reported by the EVA Navigator tool).&lt;BR /&gt;After that, the performance &amp;amp; disk access issues began on various systems, even though their data was in the FC disk group. The incidents stopped when the SnapClone operations finished (around 04:30).&lt;BR /&gt;&lt;BR /&gt;The customer acknowledges that the procedure was not adequate, but wants HP to answer some questions:&lt;BR /&gt;&lt;BR /&gt;Could this procedure be the cause of the issues, and if so, is there a related public advisory?&lt;BR /&gt;&lt;BR /&gt;What are the recommended maximum number and size of concurrent SnapClones in an environment like this (FATA disk group)? The EVA StorageWorks Replication Solutions Manager administrator guide only says:&lt;BR /&gt;&lt;BR /&gt;"Snapclone best practices:&lt;BR /&gt;Minimize the number of concurrent snapclone operations (use fewer virtual disks). Organize clone&lt;BR /&gt;operations into consistency groups of virtual disks, and then create snapclones of the consistency&lt;BR /&gt;groups sequentially"&lt;BR /&gt;&lt;BR /&gt;Is copy-pasting into a DOS window a supported procedure in this case?&lt;BR /&gt;&lt;BR /&gt;Does HP have SSSU script templates to use instead of a rudimentary copy-paste of basic creation and presentation commands?&lt;BR /&gt;&lt;BR /&gt;Is the use of RSM highly recommended in these cases?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Juan Antonio Mas</description>
      <pubDate>Sun, 14 Feb 2010 19:51:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/simultaneously-snapclone-amp-perf-issue/m-p/4583771#M34222</guid>
      <dc:creator>Juan Antonio Mas Galian</dc:creator>
      <dc:date>2010-02-14T19:51:01Z</dc:date>
    </item>
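    <!--
      The poster asks whether HP has SSSU script templates to use instead of pasting commands into a DOS
      window. A minimal sketch of what a sequential snapclone script run against the SMA might look like:
      the manager address, credentials, system name, vdisk paths and clone names are placeholders, and the
      exact ADD COPY parameters (for example DISK_GROUP, REDUNDANCY and WAIT_FOR_COMPLETION) should be
      verified against the SSSU reference for the installed VCS / Command View version. It only illustrates
      creating the clones one after another instead of all at once; it is not a validated HP template.

      SELECT MANAGER sma-hostname USERNAME=admin PASSWORD=secret
      SELECT SYSTEM "EVA8000_SiteA"
      ADD COPY clone_vdisk01 VDISK="\Virtual Disks\vdisk01" DISK_GROUP="\Disk Groups\FATA Group" REDUNDANCY=VRAID5 WAIT_FOR_COMPLETION
      ADD COPY clone_vdisk02 VDISK="\Virtual Disks\vdisk02" DISK_GROUP="\Disk Groups\FATA Group" REDUNDANCY=VRAID5 WAIT_FOR_COMPLETION
      EXIT
    -->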
    <item>
      <title>Re: Simultaneously SnapClone &amp; Perf. Issue</title>
      <link>https://community.hpe.com/t5/disk-enclosures/simultaneously-snapclone-amp-perf-issue/m-p/4583772#M34223</link>
      <description>Hi Juan,&lt;BR /&gt;&lt;BR /&gt;Personally, I don't think how the EVA receives the instruction (SSSU, RSM, etc.) is significant. The commands work, but they cause a performance issue, which suggests a heavily loaded array. They might want to pick a better time of day for the snapclones, if possible. It sounds like the copy operations are pushing the array to its limits, so I would recommend running EVAperf and going from there.&lt;BR /&gt;</description>
      <pubDate>Wed, 17 Feb 2010 20:36:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/simultaneously-snapclone-amp-perf-issue/m-p/4583772#M34223</guid>
      <dc:creator>DogBytes</dc:creator>
      <dc:date>2010-02-17T20:36:30Z</dc:date>
    </item>
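    <!--
      The reply above recommends running EVAperf before drawing conclusions. A hedged example of
      collecting virtual disk statistics with the EVAPerf command line on the management server; the
      subcommand and flags shown (vd, -cont, -dur, -csv) and the interval, duration and output file are
      assumptions based on typical EVAPerf usage and should be checked against the EVAPerf documentation
      for the installed Command View version.

      evaperf vd -cont 60 -dur 14400 -csv > vdisk_stats.csv
    -->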
    <item>
      <title>Re: Simultaneously SnapClone &amp; Perf. Issue</title>
      <link>https://community.hpe.com/t5/disk-enclosures/simultaneously-snapclone-amp-perf-issue/m-p/4583773#M34224</link>
      <description>Without performance data, it is hard to say where the problem might be, but I would recommend using mirrorclones instead. You will still take an I/O hit when you first set up the mirrorclones, but no additional I/O has to take place when you fracture them so they can be presented to the desired hosts; resynchronizing is also fast once you are done.&lt;BR /&gt;&lt;BR /&gt;I would do this in an SSSU script, and consider using the multimirror or multisnap command if the disks are all part of a single application.</description>
      <pubDate>Tue, 23 Feb 2010 16:15:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/simultaneously-snapclone-amp-perf-issue/m-p/4583773#M34224</guid>
      <dc:creator>McCready</dc:creator>
      <dc:date>2010-02-23T16:15:36Z</dc:date>
    </item>
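    <!--
      A rough SSSU sketch of the mirrorclone approach described above: create the mirrorclone once and pay
      the initial copy cost, then fracture it when a point-in-time copy is needed, present it to a host,
      and resynchronize later once the copy is no longer needed. All names are placeholders, the steps
      would be run at different times rather than back to back, and the exact ADD MIRRORCLONE, SET
      MIRRORCLONE and ADD LUN syntax should be confirmed against the SSSU reference for the installed
      version; this is an illustration of the idea, not HP's documented procedure.

      SELECT MANAGER sma-hostname USERNAME=admin PASSWORD=secret
      SELECT SYSTEM "EVA8000_SiteA"
      ADD MIRRORCLONE mclone_vdisk01 VDISK="\Virtual Disks\vdisk01" DISK_GROUP="\Disk Groups\FATA Group"
      SET MIRRORCLONE "\Virtual Disks\mclone_vdisk01" FRACTURE
      ADD LUN 10 VDISK="\Virtual Disks\mclone_vdisk01" HOST="\Hosts\backup_host"
      SET MIRRORCLONE "\Virtual Disks\mclone_vdisk01" RESYNC
      EXIT
    -->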
  </channel>
</rss>

