StoreOnce Backup Storage

large backup of 500.000.000 files on 3PAR

vergoote_1
Occasional Contributor


We have a customer with a very large file-sharing environment on 3PAR. The data set will grow to 500 million files that need a full backup once a week. The backup already takes 40 hours, running on an otherwise unused machine during the weekend, and the duration grows every week. Backup is to tape. I/O to the 3PAR is the bottleneck: the file structure has to be read and traversed completely, so the number of I/Os far exceeds the disks' I/O capacity and creates high latency and I/O queuing. At the moment NetBackup is used.

We are looking for a quicker way to back up this environment.

  •  Maybe it is possible to virtualize the environment and take a VM-level backup, but then we probably lose the ability to do single-file restores.
    • We believe we need a solution that reads at the physical layer and moves that raw data to the backup target.

Has anyone else met this problem, where large quantities of objects in large file systems need backup and restore? Does someone have an idea for a solution to tackle this? (NetApp has a solution for this, but we are staying with HPE 3PAR.)

senior systems architect
2 REPLIES
Dennis Handly
Acclaimed Contributor

Re: large backup of 500 million files on 3PAR

Have you looked at RMC to do a block backup of the file system?

It probably doesn't have the granularity you want?

Re: large backup of 500 million files on 3PAR

Hi

I manage the product management team for RMC.

RMC is free with 3PAR if you have a multi-site license (or any of the old 3PAR recovery manager licenses), so it's certainly worth trying to see if it meets your needs.

The main issue with the file system you describe will be the tree walk, i.e. the backup application has to enumerate that massive number of files by reading the directory structure and working out what has changed. That only gets worse as the number of files and the depth of the directories increase.
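To make the tree-walk cost concrete, here is a minimal sketch (not any vendor's code) of the recursive directory enumeration every file-level backup has to perform. Each entry costs at least one metadata read, so the walk alone scales linearly with the file count, independent of how much data actually changed:

```python
import os

def walk_count(path):
    """Recursively walk a directory tree, counting files and
    subdirectories. Every entry forces at least one metadata
    read, so walking 500 million files means hundreds of
    millions of small random I/Os before a single byte of
    file data is copied to the backup target."""
    files = dirs = 0
    with os.scandir(path) as it:
        for entry in it:
            if entry.is_dir(follow_symlinks=False):
                dirs += 1
                f, d = walk_count(entry.path)
                files += f
                dirs += d
            else:
                files += 1
    return files, dirs
```

This is why adding backup servers or tape drives does not help much here: the bottleneck is the metadata traversal on the array, not the streaming bandwidth.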

RMC avoids that entirely by operating at the storage layer. With a file system like the one you describe, RMC can read the changed-block map in the 3PAR controller, read only the blocks that have changed, move them to StoreOnce, and generate a synthetic full of your file system (stored deduplicated and compressed).
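The changed-block approach can be illustrated with a toy sketch (this is an assumption-laden simplification, not RMC's actual implementation): the array's block map tells the backup which blocks are dirty, so only those are read and then merged over the previous full image to form a new synthetic full:

```python
def incremental_backup(volume, changed_blocks):
    """Read only the blocks the array's change map flags as
    dirty; unchanged blocks are never touched."""
    return {i: volume[i] for i in changed_blocks}

def synthetic_full(previous_full, increment):
    """Merge the changed blocks over the prior full image to
    produce a complete point-in-time copy, with no tree walk
    and no re-read of unchanged data."""
    merged = dict(previous_full)
    merged.update(increment)
    return merged
```

The key property is that the work per backup is proportional to the changed data, not to the total number of files in the file system.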

That synthetic full is still in the original file system format. So if, say, it is a Windows share, RMC allows that backup to be virtualized as a LUN and presented back to a host (e.g. a Windows server or VM), where it appears as a drive.

You can then pull any file, directory, or group of them with drag and drop. Only the data you select is rehydrated, decompressed, and moved back to the 3PAR.

Of course, RMC also allows you to perform a restore to the original volume, a snap of the volume or a new volume.

For file system backups, as opposed to application-integrated backups, RMC's catalog has no knowledge of the files located in each backup. So the 'catch' is that to recover a file, you need to know where to look for it and which backup to mount.

With RMC for applications (e.g. VMware or Oracle) we get the detail of the VMs in the VMDK from vCenter or database details from Oracle.

If we can help, let us know :)

Neil Fleming

Team Manager WW Product Management StoreOnce and RMC.