- SAN Replication sizes
11-05-2009 08:44 AM
I have been asked to submit this to confirm or deny my conclusions, so any comments would be appreciated.
I have simulated the read/write process on a drive in a controlled environment.
The test was designed to see whether reads and writes create a snapshot larger than the actual file data left on the disk.
Process
I created two snapshots: one to clear any previous snapshot data and one to zero the snapshot size.
I copied a 42 MB folder from the C drive to a test drive and then removed it.
This process ran for around 10 minutes to simulate a day's read/write traffic.
The process was then stopped and a snapshot taken.
This was repeated for both VSS and manual snapshots.
Results
After running the simulation, according to the server the disk's used space was 91.3 MB, even though no files were left in the directory.
The used space was the same for both simulations.
The final snapshot for VSS was 91% of 3 GB, which converts to around 1.36 GB of transferred data (we have two LeftHand units with 2-way replication, plus one further unit in a remote office for DR).
The final snapshot for the manual run was 86% of 3.5 GB, which converts to around 1.50 GB of transferred data.
The timings of the two tests were not exact enough for a direct like-for-like comparison (the test could be repeated); however, the underlying principle is that the number of files, and the difference in file sizes between the two snapshots, bears no relation to the block-level data change made at the disk level.
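A quick sanity check of the arithmetic above. This assumes the "transferred data" figures divide the raw changed-snapshot size across the 2-way replication; that reading is my own inference, not stated explicitly in the post:

```python
# Raw snapshot change, then halved across the assumed 2-way replication.
vss_raw = 0.91 * 3.0       # 91% of a 3 GB snapshot -> 2.73 GB changed
manual_raw = 0.86 * 3.5    # 86% of a 3.5 GB snapshot -> 3.01 GB changed

vss_transferred = vss_raw / 2       # ~1.37 GB, close to the ~1.36 GB quoted
manual_transferred = manual_raw / 2 # ~1.51 GB, close to the ~1.50 GB quoted
print(vss_transferred, manual_transferred)
```

Both results land within a few hundredths of a gigabyte of the figures quoted above, so the numbers are at least internally consistent under that assumption.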
Conclusion
As the SAN works at block level, we have two choices:
1) Accept that this is how the SAN works and put sufficient resources in place to accommodate it.
2) Change the method used for replicating data from block level to file level.
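To illustrate the point behind the conclusion, here is a minimal sketch of why repeatedly copying and deleting a folder still accumulates changed blocks at the SAN level. The block size and fresh-allocation behaviour are assumptions for illustration, not the SAN's real internals:

```python
BLOCK_SIZE = 4096  # bytes; illustrative only, the SAN's actual page size differs

def changed_bytes(copies, file_bytes):
    """Bytes marked changed at block level after `copies` copy/delete cycles.

    Assumes the filesystem allocates fresh blocks each time (worst case),
    so blocks from deleted files still count as changed in the next snapshot.
    """
    dirty = set()
    next_free = 0
    blocks = -(-file_bytes // BLOCK_SIZE)  # ceiling division
    for _ in range(copies):
        dirty.update(range(next_free, next_free + blocks))
        # the delete frees the blocks at the file level only; at the block
        # level they stay marked as changed
        next_free += blocks
    return len(dirty) * BLOCK_SIZE

# Ten copy/delete cycles of a 42 MB folder: ~420 MB of changed blocks,
# even though 0 MB of files remain on the volume.
print(changed_bytes(10, 42 * 1024 * 1024) / 2**20)  # -> 420.0
```

The file-level view at the end (an empty directory) says nothing about the block-level delta, which is what the snapshot and replication actually carry.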
1 REPLY
11-06-2009 08:29 AM
Solution
That's the way EVA snapshots work.
At the time of the snapshot, T0 ("tee-zero"), the internal pointers of "SNAPSHOT OF..." point at the same physical segments (psegs) as the parent vdisk. From that time on (T0+n), when an OS wants to write to a pseg with a pointer count greater than 1, the EVA will find a free pseg in the disk group, allocate it to the snapshot, duplicate the T0 pseg, then finally complete the write.
From that time on (T0+n), the vdisk and snapshot have two different psegs for that relative location. Each pseg has a pointer count of 1, so each is now handled simply as a disk location to read/write.
The pointers for that relative location (pseg) of the vdisk and snapshot will NEVER go back together. Given enough time and enough volume-wide volatility, the snapshot will grow to the size of (but no larger than) the parent vdisk.
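The pointer mechanics described above can be sketched roughly as follows. This is a toy Python model: the class and method names are invented for illustration, and real EVA pseg handling is far more involved:

```python
class CopyOnWriteArray:
    """Toy model of the pseg / pointer-count behaviour described above."""

    def __init__(self, n_psegs):
        self.psegs = [b""] * n_psegs      # physical segment contents
        self.refcount = [0] * n_psegs     # how many volumes point at each pseg
        self.free = list(range(n_psegs))  # free pseg pool in the "disk group"

    def create_vdisk(self, n):
        table = [self.free.pop() for _ in range(n)]
        for p in table:
            self.refcount[p] = 1
        return table

    def snapshot(self, vdisk):
        # T0: the snapshot shares every pseg with its parent; nothing is copied.
        for p in vdisk:
            self.refcount[p] += 1
        return list(vdisk)

    def write(self, vdisk, snap, idx, data):
        p = vdisk[idx]
        if self.refcount[p] > 1:
            # Shared pseg: allocate a free pseg to the snapshot, duplicate the
            # T0 contents into it, then complete the write. From now on this
            # location has two psegs, each with a pointer count of 1, and the
            # pointers never go back together.
            newp = self.free.pop()
            self.psegs[newp] = self.psegs[p]
            self.refcount[newp] = 1
            self.refcount[p] -= 1
            snap[idx] = newp
        self.psegs[p] = data  # plain write once the pseg is exclusively owned

arr = CopyOnWriteArray(16)
vd = arr.create_vdisk(4)
sn = arr.snapshot(vd)
for i in range(4):
    arr.write(vd, sn, i, b"new")
# The snapshot has now diverged at every location: it has grown to the size
# of the parent vdisk and can grow no further.
print(sum(a != b for a, b in zip(vd, sn)))  # -> 4
```

Once every pseg of the parent has been rewritten at least once, further writes are plain writes, which is why the snapshot grows to, but never beyond, the parent vdisk's size.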
Note: While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company