Operating System - HP-UX

Trying to understand slow frecover

 
SOLVED
Randy Hagedorn
Regular Advisor

Trying to understand slow frecover

Hi,

We are using a DLT7000 on an N4000 to perform our nightly production backup of about 75GB of data, consisting of a mix of very large and very small files. I need to frecover a large (7GB) Oracle export file onto our test system so that we can import several tables onto test.

I started an frecover session to retrieve the export file, but after two days it is still running and has not recovered the export file.

Here is what the config file looked like when the fbackup was run:

blocksperrecord 128
records 32
checkpointfreq 64
readerprocesses 6
maxretries 30
retrylimit 5000000
maxvoluses 100
filesperfsm 200

Can anyone see what may be causing such a slow frecover? The DLT7000 on the test system is on a dedicated SCSI card on a K460.

If this length of time is what I can expect when recovering a file, then in an emergency I think I'd be in big trouble, spending multiple days recovering files.

Thanks,
Randy
1 REPLY
James R. Ferguson
Acclaimed Contributor
Solution

Re: Trying to understand slow frecover

Hi Randy:

First, 'fbackup' is designed to work while files are in use. 'fbackup' attempts to ensure that a good copy of a file is placed on tape by comparing the timestamp of the file at the end of the copy to the timestamp seen at the beginning of the transfer. If they do not match, 'fbackup' marks the file as "bad" and retries the copy. The retry count ('maxretries') defaults to five (5) and can be defined in the 'config' file associated with 'fbackup'. Thus, if you have files that are changing as you attempt to back them up, you will spend considerable time retrying their copies, whether or not a copy is ultimately successful.
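To make the retry rule concrete, here is an illustrative re-creation of it in shell (this is an assumption-laden sketch, not fbackup's actual code; the file paths and the GNU/BSD `stat` calls are my own choices):

```shell
#!/bin/sh
# Sketch of fbackup's "compare timestamps before and after the copy" rule.
maxretries=5
src=/tmp/demo_export.dmp
echo "dummy export data" > "$src"

copy_if_stable() {
  # mtime before the copy (GNU stat, with a BSD stat fallback)
  before=$(stat -c %Y "$src" 2>/dev/null || stat -f %m "$src")
  cp "$src" /tmp/demo_copy.dmp
  # mtime after the copy; a mismatch means the file changed mid-copy
  after=$(stat -c %Y "$src" 2>/dev/null || stat -f %m "$src")
  [ "$before" = "$after" ]
}

tries=0
until copy_if_stable; do
  tries=$((tries + 1))
  if [ "$tries" -ge "$maxretries" ]; then
    echo "file marked bad after $maxretries retries"
    break
  fi
done
echo "retries used: $tries"
```

On a quiet file the copy succeeds on the first pass; on a file being rewritten continuously (a live Oracle export, say), the loop burns through all five retries before giving up, which is where the lost time goes.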

Second, be sure to specify your own configuration file for 'fbackup'. The default values that you otherwise obtain are outdated and will generally give very poor performance during backup and recovery. Build a configuration file that looks like this:

blocksperrecord 4096
records 64
checkpointfreq 4096
readerprocesses 6
maxretries 5
retrylimit 5000000
maxvoluses 200
filesperfsm 2000

The manpage for 'fbackup(1M)' documents the default settings, which are what you will get in the *absence* of an explicitly defined set.

These parameters are recorded onto the actual backup tape and are thus used for a 'frecover' session too.
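Putting that together, a session might look like the sketch below. The device file '/dev/rmt/0m' and the include paths are examples of my own, not values from your system, so substitute your actual tape device and file locations:

```shell
#!/bin/sh
# Write the tuned fbackup configuration file.
cat > /tmp/fbackup.conf <<'EOF'
blocksperrecord 4096
records 64
checkpointfreq 4096
readerprocesses 6
maxretries 5
retrylimit 5000000
maxvoluses 200
filesperfsm 2000
EOF

# Only attempt the tape operations if the drive actually exists here.
if [ -c /dev/rmt/0m ]; then
  # Backup with the tuned config; -I writes an index file for later reference.
  fbackup -f /dev/rmt/0m -i /u01 -c /tmp/fbackup.conf -I /tmp/fbackup.index
  # Selective restore of just the export file (-x extracts the named path).
  frecover -f /dev/rmt/0m -x -i /u01/exports/full.dmp -v
fi
```

Because the parameters travel on the tape itself, the frecover of your existing backup is stuck with the old values; the new config only pays off on backups taken after the change.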

Checkpoint records allow the salvage of a backup when a bad tape spot is detected, since the records contain information about the file being backed up. The 'filesperfsm' parameter controls the frequency with which Fast Search Marks (FSMs) are written. Both checkpoint and FSM records affect performance: FSMs take a tape drive out of streaming mode, thereby adding to backup time. Conversely, however, FSMs improve the time it takes to recover a file from tape.

In general, if your backup consists of a high proportion of small files, increase the value for 'filesperfsm'. If your backup consists of a high proportion of large files, then decrease the 'filesperfsm' value.

Regards!

...JRF...