01-22-2018 06:43 AM
large backup of 500.000.000 files on 3PAR
We have a customer with a very large file-sharing environment on 3PAR. The data set will grow to 500.000.000 files that need a full backup once a week. The backup already takes 40 hours, running on an otherwise unused machine during the weekend, and the duration grows every week. Backup is to tape. I/O to the 3PAR is the bottleneck: the file structure has to be read and walked completely, so the number of I/Os far exceeds the disks' I/O capacity and creates high latency and I/O queuing. At the moment NetBackup is used.
We are looking for a quicker way to back up this environment.
- Maybe we could virtualize the environment and take a VM-level backup, but then we probably lose the ability to do single-file restores.
- We believe we need a solution that reads at the physical layer and moves that raw data to the backup target.
Has anyone else run into this problem, where large quantities of objects in large file systems need backup and restore? Does anyone have an idea for a solution to tackle this? (NetApp has a solution for this, but we are staying with HPE 3PAR.)
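A quick back-of-envelope calculation illustrates why the tree walk alone can saturate the disks. The per-file I/O count and spindle rating below are assumptions for illustration, not measurements from this system:

```python
# Rough estimate: metadata I/O load of a 40-hour full backup of 500M files.
TOTAL_FILES = 500_000_000
WINDOW_HOURS = 40

files_per_second = TOTAL_FILES / (WINDOW_HOURS * 3600)

# Assume ~3 metadata I/Os per file (directory entry, inode/stat, open) --
# an illustrative figure, not measured on this array.
IOPS_PER_FILE = 3
required_iops = files_per_second * IOPS_PER_FILE

# Assume a 10k rpm spindle delivers roughly 175 random IOPS.
SPINDLE_IOPS = 175
spindles_needed = required_iops / SPINDLE_IOPS

print(f"{files_per_second:.0f} files/s -> {required_iops:.0f} metadata IOPS")
print(f"~{spindles_needed:.0f} spindles consumed by the metadata walk alone")
```

Roughly 3,500 files per second sustained for 40 hours, i.e. on the order of 10,000 random IOPS just for metadata, before any file data is read. That is why a block-level approach that skips the tree walk entirely is attractive here.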
01-22-2018 10:23 AM - edited 01-22-2018 10:23 AM
Re: large backup of 500 million files on 3PAR
Have you looked at RMC to do a block backup of the file system?
It may not have the file-level granularity you want, though.
02-23-2018 04:20 PM
Re: large backup of 500 million files on 3PAR
Hi
I manage the product management team for RMC.
RMC is free with 3PAR if you have a multi-site license (or any of the old 3PAR recovery manager licenses), so it's certainly worth trying to see if it meets your needs.
The main issue with the file system you describe will be the tree walk, i.e. the backup application has to enumerate that massive number of files by reading the directory structure and working out what has changed. That will only get worse as the number of files and the depth of the directories increase.
RMC avoids that entirely by operating at the storage layer. With a file system like the one you describe, RMC reads only the blocks that have changed, using the block map in the 3PAR controller, moves them to StoreOnce, and generates a synthetic full of your file system (stored deduplicated and compressed).
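The changed-block approach can be sketched in a few lines. This is an illustrative toy using a dict of block-id to data, not RMC's actual implementation:

```python
# Toy sketch of a changed-block backup with a synthetic full.
# Assumed model: a volume is a dict {block_id: data}; the array's block
# map tells us which block IDs changed since the last backup.

def take_incremental(volume, changed_ids):
    """Read only the blocks the block map flags as changed --
    no file-system tree walk is involved."""
    return {bid: volume[bid] for bid in changed_ids}

def synthesize_full(previous_full, incremental):
    """Overlay the changed blocks on the last full to produce a new
    full backup without re-reading unchanged data."""
    merged = dict(previous_full)
    merged.update(incremental)
    return merged

# Usage: week-1 full; by week 2 only block 1 changed on the source volume.
week1 = {0: "aaa", 1: "bbb", 2: "ccc"}
source = {0: "aaa", 1: "BBB", 2: "ccc"}
inc = take_incremental(source, {1})   # reads a single block
week2 = synthesize_full(week1, inc)
print(week2)  # {0: 'aaa', 1: 'BBB', 2: 'ccc'}
```

The key point is that the cost of each weekly "full" is proportional to the changed data, not to the 500 million files.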
That synthetic full is still in the original file-system format, so if, say, it is a Windows share, RMC allows that backup to be virtualized as a LUN and presented back to a host (e.g. a Windows server or VM), where it appears as a drive.
You can then pull any file, directory, or group of them with drag and drop. Only the data you select is rehydrated, decompressed, and moved back to the 3PAR.
Of course, RMC also allows you to perform a restore to the original volume, a snap of the volume or a new volume.
For file-system backups, as opposed to application-integrated backups, RMC's catalog has no knowledge of the files contained in each backup. So the 'catch' is that to recover a file, you need to know where to look for it and which backup to mount.
With RMC for applications (e.g. VMware or Oracle), we get the details of the VMs in the VMDKs from vCenter, or database details from Oracle.
If we can help, let us know :)
Neil Fleming
Team Manager WW Product Management StoreOnce and RMC.