
Optimal snapshot size

SOLVED
Brandon Poyner
Occasional Advisor

Optimal snapshot size

We use the VxFS snapshot feature during our nightly Data Protector backup. Last night the snapshot file system ran out of space, causing the snapshot to become disabled. How would I go about determining either the optimal snapshot size or the true contents of the snapshot (those files that differ from the source)? Thanks.

msgcnt 1 vxfs: mesg 028: vx_snap_alloc - /dev/eva2/dnx_snap snapshot file system out of space
msgcnt 2 vxfs: mesg 032: vx_disable - /dev/eva2/dnx_snap snapshot file system disabled
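For reference, a minimal sketch of how such a snapshot is recreated with a larger buffer after it has been disabled. The filesystem and mount-point names below are assumptions based on the device in the error messages; `snapof` and `snapsize` are the HP-UX `mount -F vxfs` options, with `snapsize` given in 512-byte sectors:

```shell
# Hypothetical names derived from the post; adjust to your volumes.
FS=/dnx                      # original filesystem mount point (assumed)
SNAPDEV=/dev/eva2/dnx_snap   # snapshot device (from the error messages)
SNAPMNT=/snap/dnx            # snapshot mount point (assumed)

# snapsize is in 512-byte sectors; size the buffer at ~25% of a 100 GB source.
SRC_KB=$((100 * 1024 * 1024))      # source size in KB (100 GB, assumed)
SNAP_SECTORS=$((SRC_KB * 2 / 4))   # KB -> sectors (*2), then take 25% (/4)
echo "$SNAP_SECTORS"

# Recreate the disabled snapshot with the larger buffer (run as root):
#   umount $SNAPMNT
#   mount -F vxfs -o snapof=$FS,snapsize=$SNAP_SECTORS $SNAPDEV $SNAPMNT
```

The mount itself is left commented out since the device and mount-point names are placeholders.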
3 REPLIES
Alzhy
Honored Contributor

Re: Optimal snapshot size

Realistically, you should prepare for the eventuality that your snapshot may become as large as your source filesystem. To mitigate the risk during backups, make sure that no significant filesystem activity (changes) occurs during your snapshot "window". If changes do occur, or have the potential to, then prepare your snapshots so that they can grow as large as the original...
Hakuna Matata.
Ivan Ferreira
Honored Contributor

Re: Optimal snapshot size

If you do incremental backups, your snapshot should be a little larger than the amount of data the incremental backs up. If you don't do incremental backups, you can run test incremental backups to gather statistics on the amount of modified data.
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
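The estimate above can be sketched as simple arithmetic. The figures here are placeholders, not measurements from the poster's system: the last incremental is assumed to have backed up about 2 GB, padded with headroom for changes the incremental didn't capture:

```shell
# Rough snapshot sizing from incremental-backup statistics (assumed numbers).
INCR_KB=2097152        # changed data in last incremental: ~2 GB (assumed)
HEADROOM_PCT=50        # padding percentage (assumed; tune to your change rate)

# Snapshot buffer = incremental size plus headroom, in KB.
SNAP_KB=$((INCR_KB + INCR_KB * HEADROOM_PCT / 100))
echo "$SNAP_KB"
```

Plug in the changed-data figure your own incrementals report and adjust the headroom to taste.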
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: Optimal snapshot size

The tools supplied directly from Veritas have the ability to actually examine the snapshot buffer usage; the tools supplied in the OEM'ed version of VxFS do not. The size of the snapshot buffer depends upon two factors: 1) the activity of the original filesystem during the snapshot and 2) the duration of the snapshot. In principle, you could do a block-by-block read of each file in the two filesystems and gather your data, but that seems impractical.

In practice, I have never needed a snapshot buffer larger than about 25% of the original, but my backups are finished in about 4 hours and 15% is more typical. Note that only the first update/write of an original block need be written to the snapshot buffer, so the maximum buffer size would be 100% of the original.

One other "gotcha" that is far from obvious: the logical device that houses the snapshot buffer should be mirrored or otherwise highly available, because if the snapshot buffer becomes unavailable, the original filesystem will hang.



If it ain't broke, I can fix that.