Operating System - OpenVMS

Backup Best Practices

SOLVED
Maddog1
Advisor

Backup Best Practices

I had an issue in a previous post about restore times being slower on our DR server than on our Live server due to the tape drive connection. With help from that thread and some testing, I found that specifying a large block size decreased the restore time enough to be tolerable.

 

Going forward, I am going to use that block size when backing up to an LTO4 cartridge drive via SCSI on our Live server, to cut the time needed in the event of a restore onto our DR server.

 

Cartridges are loaded and initialized as follows:

 

INIT/MEDIA_FORMAT=COMPACTION 'tapedrive' 'label'

MOUNT/MEDIA_FORMAT=COMPACTION/FOREIGN 'tapedrive' 'label'

 

The backup commands I intend to use in future are as follows:

 

BACKUP/LOG/IGNORE=(INTERLOCK,LABEL)/CRC/GROUP_SIZE=0/BLOCK_SIZE=32256/NOASSIST 'drive':[dir] 'tapedrive''saveset'.BCK

and

BACKUP/LOG/IMAGE/IGNORE=(INTERLOCK,LABEL)/CRC/GROUP_SIZE=0/BLOCK_SIZE=32256 $1$DGA13: 'tapedrive''saveset'.BCK
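Put together in a command procedure, the whole sequence would look something like this (the device name and label here are placeholders, not our real ones):

$ tapedrive = "MKA500:"   ! placeholder tape device
$ label = "DRBK01"        ! placeholder volume label
$ INIT/MEDIA_FORMAT=COMPACTION 'tapedrive' 'label'
$ MOUNT/MEDIA_FORMAT=COMPACTION/FOREIGN 'tapedrive' 'label'
$ BACKUP/LOG/IGNORE=(INTERLOCK,LABEL)/CRC/GROUP_SIZE=0/BLOCK_SIZE=32256/NOASSIST -
      DKA100:[dir...] 'tapedrive'DAILY.BCK
$ DISMOUNT 'tapedrive'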

 

Apart from /IGNORE=INTERLOCK, which I know from other posts is not ideal, I have never needed to specify a group size or block size before, as time has not been an issue for the backups.

 

Can anyone see any problem with the above commands?

Is the integrity of the data any lower when it is backed up with a larger block size?

 

 

8 REPLIES
GuentherF
Trusted Contributor
Solution

Re: Backup Best Practices

GROUP=0 is appropriate for today's tape drive technology. Tape media failures today tend to be more catastrophic than in the past, when small spots on the media could become unreadable; now it is more all-or-nothing.

 

Always keep /CRC. It protects the save and restore operations against data corruption anywhere between the media in the drive and host memory. If speed really is the goal, you may want to time one run with /CRC (the default) and one with /NOCRC, especially with fast I/O paths and tape drives and slower CPUs.

 

Using the largest block size of 65024 gives you slightly faster performance. There are just too many factors to even start guessing, so you may have to do a timed run to see the difference for your environment.
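A crude way to get those timings (sketch only; device and directory names are made up) is just to bracket each run with SHOW TIME and compare the elapsed wall-clock times:

$ SHOW TIME
$ BACKUP/CRC/GROUP_SIZE=0/BLOCK_SIZE=65024 DKA100:[dir...] MKA500:TEST1.BCK
$ SHOW TIME
$ BACKUP/NOCRC/GROUP_SIZE=0/BLOCK_SIZE=65024 DKA100:[dir...] MKA500:TEST2.BCK
$ SHOW TIME

Run both against the same data to the same drive so the comparison is fair.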

 

/Guenther

abrsvc
Respected Contributor

Re: Backup Best Practices

In my operation, we back up the OpenVMS systems to a separate "backup" server, so it is essentially a disk-to-disk backup.  Then we save those to tape, and the overall time to tape becomes a non-issue.  Using this process, we also save two days' worth of backups per tape per day, which gives us two copies of the backups on different tapes.

 

Dan

Maddog1
Advisor

Re: Backup Best Practices

Thanks. I just didn't know whether specifying a large block size decreases the integrity of the backup compared with the default.

 

When I was testing, I didn't specify any block size on the restore command, as I presumed it used the block size recorded in the saveset.

Would adding a block size to the restore command decrease the restore time even if the savesets on the cartridge were written with the default 8192-byte block size?

Duncan Morris
Honored Contributor

Re: Backup Best Practices

Block size is not used in restore operations. Tapes are read block by block, so the block size was determined when the tape was written.

 

BACKUP

  /BLOCK_SIZE

        /BLOCK_SIZE=n

     Specifies the output block size, in bytes, for data records in
     BACKUP save sets and in disk-to-disk copies. You can specify a
     block size between 2048 and 65,535 bytes. BACKUP may adjust this
     value according to the constraints of the BACKUP format.

     The default block size for magnetic tapes is 8192 bytes. The
     default for disks is 32,256 bytes.

     For a disk-to-disk operation, the block size specifies the
     internal size of the copy buffers. The default block size in
     this case is 33,040 bytes.
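So on the restore side you simply omit the qualifier; for example (device and directory names invented):

$ MOUNT/FOREIGN MKA500:
$ BACKUP/LOG MKA500:SAVESET.BCK DKA100:[restoredir...]

BACKUP picks up the block size from the tape itself.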

Maddog1
Advisor

Re: Backup Best Practices

Thanks. I will implement the above backup commands for the directory and image backups.
Bob Blunt
Respected Contributor

Re: Backup Best Practices

Maddog1, don't forget to add

 

/CACHE=TAPE_DATA on the MOUNT command for read or write.
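In other words, using your symbols, something along these lines:

$ MOUNT/MEDIA_FORMAT=COMPACTION/FOREIGN/CACHE=TAPE_DATA 'tapedrive' 'label'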

 

The ultimate goal for any modern tape drive is to keep its "pipe" full so that it never has to start and stop more than necessary.  Virtually all cartridge tape drives today are a form of "streamer" drive that works best when you have just the right amount of data ready for it to write at all times.  This is part of the reason that the blocking factor, compression and use of the available cache can be crucial.  Cartridge drives hide their workings from you, but you can hear when the drive is winding and rewinding as it repositions to keep the data flowing.  You won't be able to keep it streaming ALL the time, but you can make it better with the right combination of account setup, initialize and mount flags, and backup qualifiers.

 

While I do miss the ancient days of TA79s, TU81+s and other reel-to-reel drives, with their open-faced, painfully clear view of what they were doing with the media, I have to admit that I never really enjoyed carrying reels of tape with my arms stuck through the hubs, Michelin Man style...  And the added capacity you get with modern tape drives is cool (as long as they work correctly).

 

bob

John Gillings
Honored Contributor

Re: Backup Best Practices

Best practice? Lose the tapes. They were once a necessary evil, but in today's environment IMHO, they're too expensive, too slow, too fiddly (trying to tune for performance) and have far too many potential failure modes. Think seriously about why you're using tapes. Are they really necessary? Are you using them because "that's what we've always done"?

 

Look seriously at ways of backing up your data onto disk images using removable drives, maybe using shadowing as the basic mechanism. Have a few complete sets of media that you rotate through.

 

If you design this correctly you can eliminate the backup window, since the backup happens continuously. Look for a quiet point in processing at which you can quiesce applications and swap over shadow members. You eliminate the need for a "restore" beyond just plugging in the backup drives, and you always have spare drives available in case of failure. Given the low cost of disk storage and the comparatively high cost of tapes, this may even be a cheaper solution.

A crucible of informative mistakes
The_Doc_Man
Advisor

Re: Backup Best Practices

Your question regarding best practices is harder to answer than you might first think, because best practices for me won't be best practices for you (different business risk regarding data loss, among other things).  If your site is subject to regulatory issues (as mine is), you find that you cannot allow data NOT to be backed up.

 

The decision to do backups at all is a business-risk question, and for any organization with really valuable data it is hard as hell to justify doing no backups.  It is almost a given that you will run a backup.  So the next question is "how far back must you go?", and that is another business-risk decision.  How much does it cost you to go back x number of days?  When you factor in the cost of keeping those media for x+1 days (minimum), there comes a breakpoint where you say "we can afford to go back up to x days before it is cheaper to just type it in again" (or do without it).

 

The problem with eliminating tapes (as another responder noted) is that if you WERE using tapes, you now have to replace them with something else.  We have tried many other solutions, for example a WORM-drive CD jukebox (third-party vendor), but we were not satisfied with the reliability vs. cost vs. ease of handling.  Having dismountable/removable volumes also raises the question of why you have such things and aren't using them for live data.  Sounds crazy, right?  I can't tell you how often in the real world you run into pointy-haired managers a la "Dilbert" who can understand backup to tape but can't justify backup to dismountable disk.

 

There is a trade-off in how many volumes of backup media you keep so that you can do things like go back in time to pick up files lost several days ago whose loss wasn't discovered until today.  I wish that never happened, but sadly it really does.  I've had to go back a couple of weeks in the worst-case scenarios.  Yes, I have clumsy users.  But isn't that expected?

 

For us, the decision is resolved by staggering our backup pattern using a tape-based storage system.

 

1.  We back up incrementals daily

2.  We do a full backup weekly

3.  Once per month, we set aside a weekly full backup to become the longer-term monthly backup.

4.  At the end of a year, the monthly tape goes back into rotation.

5.  Once per month, we evaluate what is actually on the tape (i.e. verify that the data can be identified).

6.  Once per month we recover files at random to verify that they are indeed recoverable.  Using the full backup as a source, we can recover a file not changed recently and compare the backup copy to the still-resident copy.  Yes, it is an issue in sampling theory as to how many files to check.
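As a concrete sketch of step 6 (the file, directory and device names here are invented for illustration):

$ BACKUP MKA500:FULL.BCK/SELECT=[PAYROLL]STABLE.DAT DKA200:[SCRATCH]
$ DIFFERENCES DKA200:[SCRATCH]STABLE.DAT DKA100:[PAYROLL]STABLE.DAT

If DIFFERENCES reports no differences, that file at least is demonstrably recoverable.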

 

As to using big block sizes, I concur with the comments that streaming media work best when you can build big blocks, not to mention multi-buffer counts as high as you have memory to support.  The thing that eats your lunch speed-wise is tape rocking: the streaming medium runs out of data, so the drive has to stop the tape, rewind it, wait for another buffer, start the tape looking for the end of the previously written block, and switch from read to write mode.  That eats your tapes, your time, and your data reliability badly.

 

 

Security+ Certified; HP OpenVMS CSA (v8)