StoreVirtual Storage

SAN Replication sizes

 
SOLVED
RJ Gray
Occasional Contributor

I have been asked to submit this to confirm or deny my conclusions.
Any comments would be appreciated.

I have simulated the read/write process on a drive in a controlled environment.
The test was designed to see whether reads and writes create a larger snapshot than the actual file size left on the disk.

Process

I created two snapshots: one to clear any previous snapshot sizes and one to zero the snapshot size.
I then copied a 42 MB folder from the C: drive to a test drive and removed it again.
This process ran for around 10 minutes to simulate a day's read/write traffic.
The process was then stopped and a snapshot taken.

This was repeated for both VSS and manual snapshots.
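For reference, a minimal sketch of the copy/delete churn loop described above (the paths, folder names and use of shutil are assumptions; the 42 MB payload and the 10-minute duration are from the test, and the snapshots themselves are still taken on the SAN or through VSS, not by this script):

# Rough sketch of the copy/delete churn loop used in the test.
# SOURCE and TARGET are hypothetical; adjust to the real test drive.
import shutil
import time
from pathlib import Path

SOURCE = Path(r"C:\TestData42MB")   # hypothetical 42 MB source folder
TARGET = Path(r"E:\ChurnTest")      # hypothetical folder on the test drive
DURATION_SECONDS = 10 * 60          # ~10 minutes of simulated daily traffic

def churn() -> int:
    """Copy the source folder to the test drive and delete it, repeatedly.

    Returns the number of copy/delete cycles completed, so the amount of
    data written at the block level can be estimated afterwards.
    """
    cycles = 0
    deadline = time.monotonic() + DURATION_SECONDS
    while time.monotonic() < deadline:
        shutil.copytree(SOURCE, TARGET)   # write ~42 MB of file data
        shutil.rmtree(TARGET)             # remove it again; the blocks stay dirty
        cycles += 1
    return cycles

if __name__ == "__main__":
    n = churn()
    print(f"Completed {n} copy/delete cycles (~{n * 42} MB written)")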

Results

After running the simulation, the server reported the used disk space as 91.3 MB, even though no files were left in the directory.
This was the same for both simulations.

The final snapshot for the VSS run was 91% of 3 GB, which would convert to around 1.36 GB of transferred data (we have two LeftHand units with two-way replication, and one further unit in a remote office for DR).

The final snapshot for the manual run was 86% of 3.5 GB, which would convert to around 1.50 GB of transferred data.

The timings of the two tests were not exact enough for a direct like-for-like comparison (the test could be repeated); however, the underlying principle is that the number of files and the difference in file sizes between the two snapshots have no relation to the block-level data change made at the disk level.
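To illustrate that principle, here is a toy model (not the LeftHand/VSS accounting, which is internal to the SAN; the block size and write pattern are made up): the snapshot delta tracks blocks touched between snapshots, so deleting the files afterwards does not shrink it.

# Toy model: snapshot growth follows changed blocks, not the files left on disk.
BLOCK_SIZE = 64 * 1024              # assumed block size for this illustration
changed_blocks = set()              # blocks dirtied since the last snapshot

def write(offset, length):
    """Record which blocks a write touches; this is what grows the snapshot."""
    first = offset // BLOCK_SIZE
    last = (offset + length - 1) // BLOCK_SIZE
    changed_blocks.update(range(first, last + 1))

def delete_file(offset, length):
    """Deleting a file only updates filesystem metadata; the data blocks it
    occupied still count as changed, so the snapshot does not shrink."""
    write(offset, 4096)             # the metadata update is itself a block write

FILE_SIZE = 42 * 1024 * 1024
for i in range(10):
    start = i * FILE_SIZE           # assume the filesystem allocates fresh blocks each copy
    write(start, FILE_SIZE)
    delete_file(start, FILE_SIZE)

delta_mb = len(changed_blocks) * BLOCK_SIZE / (1024 * 1024)
print(f"Files left on disk: 0 MB; blocks changed since last snapshot: {delta_mb:.0f} MB")

In this model ten copy/delete cycles leave nothing on disk yet mark roughly 420 MB of blocks as changed, which is the same effect seen in the test results above.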

Conclusion

As the SAN replicates at block level, we have two choices:

1) Accept that this is how the SAN works and put sufficient resources in place to accommodate it.

2) Change the method used for replicating data from block level to file level.
Sheldon Smith
HPE Pro
Solution

Re: SAN Replication sizes

That's the way EVA snapshots work.

At the time of the snapshot T0 ("Tee-zero"), the internal pointers of "SNAPSHOT OF..." point at the same physical segments (psegs) as the parent vdisk. From that time on (T0+n), when an OS wants to write to a pseg with a pointer count greater than 1, the EVA will find a free pseg in the disk group, allocate that to the snapshot, duplicate the T0 pseg, then finally complete the write.

After that write, the vdisk and the snapshot have two different psegs for that relative location. Each pseg has a pointer count of 1, so each is now handled simply as a disk location to read/write.

The pointers for that relative location (pseg) of the vdisk and snapshot will NEVER go back together. Given enough time and enough volume-wide volatility, the snapshot will grow to the size of (but no larger than) the parent vdisk.
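A minimal copy-on-write sketch of that pseg behaviour (the Pseg/Volume names, segment sizes and reference counting are illustrative assumptions, not the array's internals; the real array allocates the new pseg to the snapshot and keeps the original for the vdisk, while this toy splits in the other direction, which gives the same space accounting):

# Toy copy-on-write model of shared psegs and snapshot growth.
class Pseg:
    """A physical segment with a count of how many volumes point at it."""
    def __init__(self, data):
        self.data = data
        self.refcount = 1

class Volume:
    def __init__(self, psegs):
        self.psegs = list(psegs)

    def snapshot(self):
        # At T0 the snapshot simply re-points at the parent's psegs.
        for p in self.psegs:
            p.refcount += 1
        return Volume(self.psegs)

    def write(self, index, data):
        # Copy-on-write: a pseg shared with a snapshot is split before
        # the write completes; the two locations never merge again.
        p = self.psegs[index]
        if p.refcount > 1:
            p.refcount -= 1          # the snapshot keeps the T0 copy
            p = Pseg(p.data)         # fresh pseg for the writer
            self.psegs[index] = p
        p.data = data

def snapshot_unique_space(parent, snap):
    """Space the snapshot holds on its own: psegs the parent no longer shares."""
    shared = {id(p) for p in parent.psegs}
    return sum(len(p.data) for p in snap.psegs if id(p) not in shared)

# Overwrite every segment once: the snapshot grows to the size of the
# parent vdisk, and no larger.
parent = Volume([Pseg(b"\x00" * 1024) for _ in range(4)])
snap = parent.snapshot()
for i in range(4):
    parent.write(i, b"\xff" * 1024)
print(snapshot_unique_space(parent, snap))   # 4096 bytes == parent size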

Note: While I am an HPE employee, all of my comments (whether noted or not) are my own and are not any official representation of the company.
