Operating System - HP-UX
Trying to understand slow frecover
05-16-2007 01:28 AM
Hi,
We are using a DLT7000 on an N4000 to perform our nightly production backup of about 75 GB of data, consisting of a mix of very large and very small files. I need to frecover a large (7 GB) Oracle export file onto our test system so that we can import several tables into test.
I started an frecover session to retrieve the export file, but after two days it is still running and has not recovered the file.
Here is what the config file looked like when the fbackup was run.
blocksperrecord 128
records 32
checkpointfreq 64
readerprocesses 6
maxretries 30
retrylimit 5000000
maxvoluses 100
filesperfsm 200
Can anyone see what may be causing such slow frecovery? The DLT7000 on the test system is on a dedicated SCSI card, running on a K460.
If this is the recovery time I can expect, then in an emergency I think I'd be in big trouble, spending multiple days recovering files.
Thanks,
Randy
Solved! Go to Solution.
1 REPLY
05-16-2007 01:50 AM
Solution
Hi Randy:
First, 'fbackup' is designed to work while files are in use. 'fbackup' attempts to ensure that a good copy of a file is placed on tape by comparing the timestamp of the file at the end of the copy to the timestamp seen at the beginning of the transfer. If these do not match, 'fbackup' marks the file as "bad" and retries the copy. The retry count ('maxretries') defaults to five (5) and can be defined in the 'config' file associated with 'fbackup'. Thus, if files are changing as you attempt to back them up, you will spend considerable time retrying their copies, whether or not those copies are ultimately successful.
Second, be sure to specify your own configuration file for 'fbackup'. The default values that you will otherwise obtain are out-dated and will generally give very poor performance during backup and recovery. Build a configuration file that looks like:
blocksperrecord 4096
records 64
checkpointfreq 4096
readerprocesses 6
maxretries 5
retrylimit 5000000
maxvoluses 200
filesperfsm 2000
The manpage for 'fbackup(1M)' documents the default settings, which are what you get in the *absence* of an explicitly defined set.
These parameters are recorded onto the actual backup tape and are thus used for a 'frecover' session too.
Checkpoint records allow the salvage of a backup when a bad tape spot is detected, since the records contain information about the file being backed up. The 'filesperfsm' parameter controls the frequency with which Fast Search Marks (FSMs) are written. Both checkpoint and FSM records affect performance. FSMs take the tape drive out of streaming mode, thereby adding to backup time; conversely, FSMs improve the time it takes to recover a file from tape.
In general, if your backup consists of a high proportion of small files, increase the value for 'filesperfsm'. If your backup consists of a high proportion of large files, then decrease the 'filesperfsm' value.
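As a minimal sketch, the tuned configuration above could be written to a file and handed to both tools with `-c` — note that the config path, tape device, directory, and export-file names below are assumptions for illustration, not values from this thread:

```shell
# Write the tuned fbackup/frecover configuration from the reply above.
# The path /tmp/fbackup.conf is an arbitrary example location.
CONF=/tmp/fbackup.conf

cat > "$CONF" <<'EOF'
blocksperrecord 4096
records 64
checkpointfreq 4096
readerprocesses 6
maxretries 5
retrylimit 5000000
maxvoluses 200
filesperfsm 2000
EOF

# Hypothetical invocations (device and paths are examples only):
#   fbackup  -c /tmp/fbackup.conf -f /dev/rmt/0m -i /oracle \
#            -I /var/adm/fbackup.index
#   frecover -c /tmp/fbackup.conf -f /dev/rmt/0m -x \
#            -i /oracle/export/full_exp.dmp

# Sanity check: the file should contain the eight parameter lines.
grep -c '^[a-z]' "$CONF"   # → 8
```

Because the parameters are recorded on the tape at backup time, the config matters most when you run `fbackup`; specifying the same file at `frecover` time keeps the two sessions consistent.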
Regards!
...JRF...