
fbackup for redhat 7.1

nlfarr
Occasional Visitor

fbackup for redhat 7.1

I am backing up about 20 GB of data to a DLT drive, but tar isn't being very useful as it has a 2 GB maximum. I have seen fbackup mentioned elsewhere in the forums as the tool to use for large backups, but it's not part of my Red Hat 7.1 install and I can't seem to locate the package online.
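A first check worth doing (a minimal sketch, assuming a stock Linux shell) is to see which tar is actually on the PATH, since the 2 GB ceiling is a property of older, pre-large-file tar builds rather than of tar in general:

```shell
# Show which tar binary is installed and its version banner;
# GNU tar identifies itself as "tar (GNU tar) x.y" on the first line.
v=$(tar --version | head -n 1)
echo "$v"
```

If the banner does not mention GNU tar, the installed binary is likely the one imposing the limit.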
5 REPLIES
Gratien D'haese_1
Occasional Advisor

Re: fbackup for redhat 7.1

fbackup would be a great contribution to the Open Source Community!! Please, please HP.
Michael Tully
Honored Contributor

Re: fbackup for redhat 7.1

'fbackup' is a proprietary backup product created by HP. I would say that there is little chance of it being ported to Linux, unless HP produces its own Linux version.

You might have to compile your own 'GNU' tar. You can get the source from here. 'GNU' tar caters for files over 2 GB.

http://www.gnu.org/software/tar
Anyone for a Mutiny?
Chris Vail
Honored Contributor

Re: fbackup for redhat 7.1

You can get GNU tar from the wonderful people at www.gnu.org. It does not have the 2 gigabyte limit. You'll probably need their C compiler and some of their libraries to compile it, but it's worth the effort. HP-UX users can get it prebuilt from software.hp.com, and GNU tar is available for almost any OS.
nlfarr
Occasional Visitor

Re: fbackup for redhat 7.1

Ok, I got GNU tar and installed it, but it still errored out when the tar archive reached 2 GB. Does the problem have something to do with the filesystem?
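One way to separate a tar limitation from a filesystem limitation (a sketch, with an arbitrary test filename) is to create a sparse file just past the 2 GiB boundary. If this succeeds, the filesystem can hold large files and the problem is in the tar binary or how it was built; if dd itself fails with "File too large", the filesystem or kernel is the constraint:

```shell
# Write a single byte at offset 2^31 - 1, producing a sparse file of
# size 2 GiB that uses almost no actual disk space.
dd if=/dev/zero of=lfs-test bs=1 count=1 seek=2147483647 2>/dev/null
size=$(stat -c %s lfs-test)
echo "$size"    # 2147483648 if the filesystem accepted the large file
rm -f lfs-test
```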
Chris Vail
Honored Contributor

Re: fbackup for redhat 7.1

It could be that your Linux system is set up for 32-bit file operations, which would be a big problem. You generally need 64-bit file support (either a 64-bit O/S, or large-file support built into the kernel, C library, and filesystem) to deal effectively with large files and filesystems.
The reason for this is simple: the largest number you can represent with a signed 32-bit integer is 2147483647 bytes, or 2 gigabytes minus one byte. I'm not familiar enough with Linux to advise you on how to deal with such large files there. In HP-UX and Solaris, the filesystem is mounted in /etc/fstab with the largefiles option. Here's an example:
/dev/vgORA02c/lvold020c /d020c/oradata vxfs rw,suid,largefiles,delaylog,datainlog 0 2

This mounts it with a vxfs filesystem, read/write, with the suid bit set, the largefiles (>2GB) option set, and logging enabled.
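The arithmetic behind that 2 GB ceiling can be checked in any shell with 64-bit arithmetic:

```shell
# Largest value representable in a signed 32-bit integer: 2^31 - 1
echo $(( (1 << 31) - 1 ))   # 2147483647, i.e. 2 GiB minus one byte
```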
Saturday night I moved more than 700GB from one set of disks to another using gnu tar. I know it works.

BTW: assign points: it's a good way to say "thank you"


Chris