
snapclone performance

 
Edgar_8
Regular Advisor

snapclone performance

We have 2 EVA 5000s (PRD > SEC) currently in production, with CA snapclone configured between them. Recently
we have been experiencing performance issues during the scheduled snapclone runs. The SEC EVA has
FATA drives to which the snapclone is occurring. Typical symptoms are, for example, that a VDISK
on the PRD EVA presented to a host becomes invisible from the host side, or that I/O to the
presented VDISKs is severely delayed.

Has anyone experienced such behaviour and what have your findings been? Alternatively does
anyone have recommendations regarding how we could troubleshoot this issue?

Thanks in advance!
12 REPLIES
Uwe Zessin
Honored Contributor

Re: snapclone performance

The problem is with the space allocation of the clone when you are dealing with large virtual disks, but I haven't seen it myself.

I think the workarounds are:
- put the virtual disks temporarily into writethrough mode
- temporarily suspend replication
.
Edgar_8
Regular Advisor

Re: snapclone performance

Hi Uwe,

Please could you elaborate.

Thanks in advance!
Peter Mattei
Honored Contributor

Re: snapclone performance

Edgar

What Uwe is talking about is the fact that the EVA cache can get flooded when you create a clone. To avoid this, you convert the primary VDISK to writethrough before creating the clone.
You can do that manually.

With VCS 3.028 you can control this from within CA. See the release note where it says:

Avoiding slow creation of multiple related snapshots
To minimize impact of snapshot and snapclone performance on system performance, transition cache on source virtual disks to write-through mode using Command View EVA before starting the snapshot or snapclone. The impact of creating one snapshot may be tolerable and may not require this action.
However, creation of multiple snapshots of the same virtual disk requires this action.

Find the document here:
http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&docIndexId=3124&locale=en_US&prodTypeId=12169&prodSeriesId=321347

Cheers
Peter
I love storage
Alzhy
Honored Contributor

Re: snapclone performance

Why don't you just drop CA/SnapClone and use Veritas Volume Manager with the FlashSnap option to address both your EVA redundancy and your backup issues?


We've had a number of problems with EVA 5000s in the past, as well as with the BusinessCopy software. For small to mid-sized implementations it should work. But for I/O-intensive and large storage sets (i.e. large databases), the EVA solution just doesn't fit our definition of "Enterprise" suitability.

Get the Hitachi based XP series instead.
Hakuna Matata.
Edgar_8
Regular Advisor

Re: snapclone performance

Hi,

Peter/Uwe, just to share some additional info: our environment's clones have been automated via RSM to run every day at
specific times, and the clone script includes a cleanup of aged clones.
How do we determine/physically prove that the EVA cache is in fact being flooded during a clone? And
since the clone is automated, can the VDISK write cache be switched automatically within the RSM script?
Can you briefly explain the concepts of "write-back" vs. "write-through" cache?

Nelson, is there a version of Veritas Volume Manager for RHEL? Due to the hefty pricing of XP technology, the XP
arrays aren't an option.

Thanks in advance!
Uwe Zessin
Honored Contributor

Re: snapclone performance

About the 'cache flood' - sorry, I don't know.


The latest VCS, CV-EVA and (I think) RSM code allows the user to switch between write-back and write-through, but it is not in SSSU (v4.0.18 - there should be an update by the end of September; I don't know if it will get that feature, but I can find out if you need it).

"write-back" - data from the host is kept in the controller's write cache memory and the host is notified that the I/O has completed although the data has not been written to disk, yet. The controller decides on its own when it writes the data to the disks and can optimize I/Os that way. The data is not in danger, because a copy is kept in the write cache of the second controller and both memories are protected by batteries.

"write-through" - data from the host is immediately written to the disk. The host is notified _after_ the data is on the disk.
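For intuition, here is a rough host-side analogy (an assumption on my part - it exercises the OS page cache, not the EVA controller cache, but the acknowledgment behaviour is the same idea):

```shell
# Write-back-like: a plain dd returns as soon as the data is in the OS page
# cache (memory), before it is on disk.
dd if=/dev/zero of=/tmp/wb.img bs=1M count=64 2>/dev/null

# Write-through-like: conv=fsync makes dd wait until the data has actually
# been written to disk before returning. On real disks this normally takes
# noticeably longer.
dd if=/dev/zero of=/tmp/wt.img bs=1M count=64 conv=fsync 2>/dev/null
```

Timing the two commands on a machine with a spinning disk makes the difference obvious.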
.
Edgar_8
Regular Advisor

Re: snapclone performance

Hi Uwe,

If we change the PRD VDISKs' write cache to "write-through", do you know if there would be any
performance hit on normal I/O operations and during a snapclone job? And is it more optimal/recommended
to keep VDISKs in "write-through" mode?

Thanks in advance!
Uwe Zessin
Honored Contributor

Re: snapclone performance

Hello Edgar,

Sure there will be a performance hit!
And performing a snapclone will be a performance hit, too, because the EVA has more work to do.


But the "write-through" cache setting is only supposed to be temporary.

- put the virtual disk into write-through mode
- launch the snapshot / snapclone
- put the virtual disk back into write-back mode
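In an RSM clone script, that sequence might be sketched like this (the exact CLI syntax and the vdisk name are my assumptions - check the RSM CLI guide for your version; the script only prints each command as a dry run):

```shell
# Dry-run sketch of the three steps above; drop the run() wrapper to execute.
run() { printf '%s\n' "$*"; }

VDISK='\Virtual Disks\prd_vdisk01'   # hypothetical vdisk name

run set vdisk "$VDISK" cache_mode=write_through   # 1. write-through mode
# 2. launch the snapshot / snapclone here (your existing RSM clone step)
run set vdisk "$VDISK" cache_mode=write_back      # 3. back to write-back
```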
.
Alzhy
Honored Contributor

Re: snapclone performance

Edgar, yes, there are versions for RHEL and most enterprise Linux implementations.

With VxVM mirroring and FlashSnap technology you'll be able to mirror and snapshot across arrays regardless of vendor (no vendor lock-in). Sure, it is a host-based solution, as others would contend, but with today's fast Fibre Channel and CPUs the impact on the servers is really negligible. With FlashSnap (which synchronizes only changed blocks), resyncs between your production storage (which can remain on EVA) and your snapshot/clone (which can be on the same EVA, a different EVA, or an array from a different vendor) will only deal with changed blocks!

I have not looked in a while at what changes BC implements, but in the past it could only do snapshots/clones on the same EVA. You need Continuous Access to replicate to another EVA, and the failover can sometimes be tricky. With VxVM/FlashSnap you can mirror (and take snapshots) across 2 EVAs, so when one EVA goes down you remain up on the other EVA. Not that EVAs are known to fail often - but they can, as we've seen in the past.
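A rough sketch of that FlashSnap cycle (the disk group and volume names are made up, and the vxsnap syntax should be verified against your VxVM docs; the script only prints the commands as a dry run):

```shell
# Dry-run sketch of a VxVM FlashSnap cycle; drop the run() wrapper to execute.
run() { printf '%s\n' "$*"; }

run vxsnap -g datadg prepare datavol                                # enable changed-block tracking (DCO)
run vxsnap -g datadg make source=datavol/newvol=snapvol/nmirror=1   # break-off snapshot volume
run vxsnap -g datadg refresh snapvol source=datavol                 # later resync: changed blocks only
```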

Hakuna Matata.
Peter Mattei
Honored Contributor

Re: snapclone performance

Yes, there is a performance impact on the VDISK you are going to clone if you set writethrough.
But when you create a snapclone without doing that, the write cache of all VDISKs has to be flushed to disk before the clone can be taken. If you create clones at a busy time, the controller has quite something to do!
So by setting writethrough selectively you minimize the impact on all the other VDISKs.

Cheers
Peter
I love storage
Edgar_8
Regular Advisor

Re: snapclone performance

Hi Peter,

Please could you provide some feedback on the following:

1. How do we determine that the EVA cache is being flooded or maxed out?
2. Is it possible to switch cache modes from within an RSM clone script?

Thanks in advance!
Peter Mattei
Honored Contributor

Re: snapclone performance

1. Good question! I cannot tell right away.

2. Yes. Have a look at the RSM CLI Guide
http://h20000.www2.hp.com/bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&locale=en_US&docIndexId=179911&taskId=101&prodTypeId=12169&prodSeriesId=471572

Here you will find the command description on page 34:

SET VDISK
Synopsis
set vd[isk]
    [cache_m[ode]|cm={write_t[hrough]|wt|write_b[ack]|wb}]
    [remove_p[resentation]|rp=]
    [refresh]
    [{add_p[resentation]|ap={host name}} | [lun=]]
    [[inst[ant_restore]|instrest|irestore=]]
Description: Use the SET VDISK command to modify virtual disk properties.
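For example, using the abbreviations from the synopsis, the two cache switches in a clone script could look like this (the vdisk name is invented, and the script only echoes the commands as a dry run - verify the exact syntax against the guide):

```shell
# Dry run: print the commands instead of executing them. cm=wt / cm=wb are
# the abbreviations documented in the SET VDISK synopsis above.
run() { printf '%s\n' "$*"; }
run set vdisk '\Virtual Disks\prd_vdisk01' cm=wt   # write-through before the clone
run set vdisk '\Virtual Disks\prd_vdisk01' cm=wb   # write-back afterwards
```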

Cheers
Peter
I love storage