
Simultaneously SnapClone & Perf. Issue

Juan Antonio Mas Galian
Occasional Visitor

Simultaneously SnapClone & Perf. Issue

Hi

I need to clarify a question about SnapClone creation and scheduling for my customer. Last week, during a SnapClone creation run with SSSU scripting, they experienced a loss of performance and, in some cases, disk access issues reported in the log files of HP-UX and Wintel systems.

The scenario

Two EVA 8000s with VCS 6220 and CVE 9.1, replicated against each other in a CA configuration over two DWDM (2 Gb) links between sites.
Each array has two disk groups, one of FC disks and the other of FATA disks (all with the latest FW bundle).
No hardware issues occurred during the operation.
No CA issues were registered during the operation.
Unfortunately, there is no performance data collected.

The customer's procedure

They launched five SnapClone creations from a very basic and simple SSSU script (copy-pasted into a DOS window on the SMA), at the following times as reported in the EVAs' log:

00:23:29 First Clone inside FATA Group ( Origin & Destination) Size in blocks: 1048576000.; Redundancy type: Vraid5
00:26:00 Second Clone inside FATA Group ( Origin & Destination) Size in blocks: 1048576000.; Redundancy type: Vraid5
00:32:15 Third Clone inside FATA Group ( Origin & Destination) Size in blocks: 1048576000.; Redundancy type: Vraid5
00:39:48 Fourth Clone inside FATA Group ( Origin & Destination) Size in blocks: 1048576000.; Redundancy type: Vraid5
00:43:00 Fifth Clone inside FATA Group ( Origin & Destination) Size in blocks: 1048576000.; Redundancy type: Vraid5

All of them were managed by the same controller (as reported by the EVA Navigator tool).
After that, the performance and disk access issues began on various systems, even though their data was in the FC disk group. When the SnapClone operations finished (around 04:30), the incidents stopped.
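For reference, the pasted commands were presumably of roughly the following shape. This is only a reconstruction, not the customer's actual script; the vdisk names, disk-group path, and the exact ADD COPY spelling are assumptions to be checked against the SSSU reference for this CVE version:

```shell
#!/bin/sh
# Reconstruction (assumed SSSU syntax) of the five back-to-back snapclone
# creations. Each ADD COPY returns as soon as the clone starts, so pasting
# lines like these launches all five copy streams concurrently.
{
  printf '%s\n' 'SELECT SYSTEM "EVA8000_SiteA"'   # system name is illustrative
  for n in 1 2 3 4 5; do
    printf '%s\n' "ADD COPY clone_vol$n VDISK=\"\\Virtual Disks\\vol$n\" DISK_GROUP=\"\\Disk Groups\\FATA\" REDUNDANCY=VRAID5"
  done
} > snapclones_concurrent.sssu
```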


The client acknowledges that the procedure was not adequate, but wants HP to answer some questions:


Could this procedure be the cause of the issues, and if so, is there any public advisory related to it?

What is the recommended maximum number and size of concurrent SnapClones in an environment like this (FATA disk group)? The EVA StorageWorks Replication Solutions Manager Administrator Guide only says:

"Snapclone best practices:
Minimize the number of concurrent snapclone operations (use fewer virtual disks). Organize clone
operations into consistency groups of virtual disks, and then create snapclones of the consistency
groups sequentially"
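Read against this scenario, the guideline points at serializing the clones rather than launching all five together. A minimal sketch follows, assuming SSSU supports a blocking WAIT_FOR_COMPLETION switch on ADD COPY (verify this, the names, and the paths against the SSSU reference before use):

```shell
#!/bin/sh
# Sketch: one snapclone at a time. With a blocking option such as
# WAIT_FOR_COMPLETION (assumed; check your SSSU version), each ADD COPY
# only returns when the clone is done, so only a single copy stream loads
# the FATA group at any moment instead of five.
{
  printf '%s\n' 'SELECT SYSTEM "EVA8000_SiteA"'   # system name is illustrative
  for n in 1 2 3 4 5; do
    printf '%s\n' "ADD COPY clone_vol$n VDISK=\"\\Virtual Disks\\vol$n\" DISK_GROUP=\"\\Disk Groups\\FATA\" REDUNDANCY=VRAID5 WAIT_FOR_COMPLETION"
  done
} > snapclones_sequential.sssu
```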

Is copy-pasting into a DOS window a supported procedure in this case?

Does HP have SSSU script templates to use instead of a rudimentary copy-paste of basic creation and presentation commands?

Is the use of RSM highly recommended in cases like this?

Regards

Juan Antonio Mas
2 REPLIES
DogBytes
Valued Contributor

Re: Simultaneously SnapClone & Perf. Issue

Hi Juan,

Personally, I don't think how the EVA gets the instruction (SSSU, RSM, etc.) is significant. The commands are working but causing a performance issue, which suggests a heavily loaded array. They might want to select a better time of day for the snapclones, if possible. It sounds like the copy operations are just pushing the array's limits, so I would recommend running EVAperf and going from there.
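For the EVAperf part, a baseline collection around the snapclone window might look like this sketch; the subcommand and flags shown (vd, -cont, -dur, -csv) are assumptions from memory, so check them against the EVAperf documentation on the SMA first:

```shell
#!/bin/sh
# Sketch: build a small EVAperf collection script for the snapclone window.
# The subcommand and flags are assumptions -- verify on your SMA before use.
cat > collect_evaperf.sh <<'EOF'
#!/bin/sh
# vd = virtual disk counters; -cont = sample interval (s); -dur = duration (s)
evaperf vd -cont 30 -dur 14400 -csv > evaperf_vd.csv
EOF
chmod +x collect_evaperf.sh
```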
McCready
Valued Contributor

Re: Simultaneously SnapClone & Perf. Issue

Without performance data, it is hard to say where the problem might be, but I would recommend using a mirrorclone instead. You still take an I/O hit when you initially set up the mirrorclones, but no additional I/O has to take place when you fracture them so they can be presented to the desired hosts; resyncing is fast too once you are done.

I would do this in an SSSU script, and consider the use of the multimirror or multisnap command if the disks are all part of a single application.
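A rough sketch of that mirrorclone cycle in SSSU might look like the below. The ADD MIRRORCLONE / FRACTURE / RESTART verbs, the container requirement, and all names are assumptions from memory; verify every one of them against the SSSU reference before trying this:

```shell
#!/bin/sh
# Sketch (assumed SSSU verbs and names) of a mirrorclone cycle: create once,
# fracture before each presentation, resync afterwards. Only the initial
# ADD MIRRORCLONE pays the full copy cost.
{
  printf '%s\n' 'SELECT SYSTEM "EVA8000_SiteA"'
  # one-time setup: the mirrorclone is created into a pre-built container
  printf '%s\n' 'ADD MIRRORCLONE mclone_vol1 VDISK="\Virtual Disks\vol1" CONTAINER="\Virtual Disks\vol1_container"'
  # per-backup cycle: split, present to the backup host, later resynchronize
  printf '%s\n' 'FRACTURE MIRRORCLONE "\Virtual Disks\mclone_vol1"'
  printf '%s\n' 'ADD LUN 10 VDISK="\Virtual Disks\mclone_vol1" HOST="\Hosts\backup_host"'
  printf '%s\n' 'RESTART MIRRORCLONE "\Virtual Disks\mclone_vol1"'
} > mirrorclone_cycle.sssu
```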
Check out evamgt.wetpaint.com and the evamgt Google group.