Operating System - OpenVMS
Peter Cuozzo
New Member

Disk to disk to tape

I have data disks DKB100, DKB200, DKB300, and DKB400. What I would like to do is copy the full contents of each disk to another data disk, DKB500. I do not want to create any new versions on DKB500; I want to always overwrite existing files if changed, or skip them if unchanged. Then I would back up my system disk DKB0 and DKB500 to tape.

For copying, is this the best syntax? Or is there a better way to accomplish this?
$ copy dkb100:[000000...]*.*;* dkb500:[dkb100...] /repl
$ copy dkb200:[000000...]*.*;* dkb500:[dkb200...] /repl
$ copy dkb300:[000000...]*.*;* dkb500:[dkb300...] /repl
$ copy dkb400:[000000...]*.*;* dkb500:[dkb400...] /repl
Hein van den Heuvel
Honored Contributor

Re: Disk to disk to tape

I would consider using the lddriver (http://www.digiater.nl/) to create and use disk-in-file containers as an alternative to the per-disk directory trees.

This will still give you the advantage of a direct-access online backup (vs. a backup saveset), and yet give you just one big file per disk to back up, making the tape backup easier.

You might want to delete and resize the LD devices before each refresh copy, and you might want to use BACKUP rather than COPY to do the copying. Just try both ways and compare!
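
A minimal sketch of the LD approach (container size, names, and placement are placeholders, not tested here):

$ ld create dkb500:[000000]dkb100_copy.dsk /size=8000000 ! container file on DKB500
$ ld connect dkb500:[000000]dkb100_copy.dsk lda1:
$ init lda1: /index=begin data100 ! pack files from low LBN up
$ mount/foreign lda1:
$ backup/image/noinit/ignore=interlock dkb100: lda1: ! direct-access copy of the whole disk
$ dismount lda1:
$ mount lda1: data100 ! files now directly accessible again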

Hth,
Hein.
Hein van den Heuvel
Honored Contributor

Re: Disk to disk to tape

Jur, in a situation like the one above, specifically if the LD device was initialized with INIT/INDEX=BEGIN, the disk would effectively be nicely filled from low LBN to high.

How tricky would it be to 'truncate' the container file to be just big enough to hold only up to the last reserved cluster from its bitmap.sys?

Also, at first I thought that since this application is probably creating read-only output, it would be a great, relatively simple, candidate for compression.
But it is not really read-only, as INDEXF.SYS, BITMAP.SYS and the directories are updated while the target is created. And the result would be accessed randomly, so compression would require challenging LBN-to-byte-offset mapping.

Rambling... Sorry...

Hein.
Art Wiens
Respected Contributor
Solution

Re: Disk to disk to tape

For starters, you need to use BACKUP instead of COPY ... COPY will not traverse the directories the way you want.

But I think your replacement criteria cannot be met. Have a look at HELP BACKUP and read the descriptions for /NEW_VERSION, /OVERLAY and /REPLACE.

I assume the duration of backing up all 5 disks to tape is too much for your environment and you're staging to disk first?

Depending on what else you might need to do with the files once they're on dkb500 ... if nothing, then I would suggest backing up your disks to dkb500 as image savesets, e.g.:

$ backup /image dkb100: dkb500:[some_dir]dkb100.bck /save_set

Files can always be extracted from the savesets if required, and listings of savesets can be produced. Backing them up to tape might be a bit faster too: as they are single files, BACKUP doesn't have to work as hard building lists of files to gather up. One bonus is you have the entire disk online to restore quickly if one of your real disks goes south.
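
For instance (a hedged sketch with the same assumed names; the directory and file being extracted are made up):

$ backup/list dkb500:[some_dir]dkb100.bck/save_set ! list the saveset contents
$ backup dkb500:[some_dir]dkb100.bck/save_set/select=[users.smith]report.txt dkb100:[users.smith] ! extract one file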

What problem are you really trying to solve?

Art
Jur van der Burg
Respected Contributor

Re: Disk to disk to tape

>How tricky would it be to 'truncate' the
>container file to be just big enough to hold
>only up to the last reserved cluster from its
>bitmap.sys?

Piece of cake. Find out where the last block of data is (dump/head for the last copied file), dismount the ld device, disconnect it, set file/attr=ebk=xxx, set file/truncate, ld connect, mount, ana/disk/repair.

Easy to try with LD disks :-)

Jur.
Peter Cuozzo
New Member

Re: Disk to disk to tape

What I am trying to accomplish is to:

Overnight processing window is shrinking
Have an easier recovery mechanism
Have an alternate backup for tape failure

I was also thinking that, in case of tape failure, I could zip the savesets, copy them to my Windows server, and back them up to tape there (for off-site storage).
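
(A hedged aside on the zip step: BACKUP savesets only stay restorable if their VMS record attributes survive the round trip. With Info-ZIP on VMS, that means the "-V" option, e.g.:

$ zip "-V" dkb500:[backup]dkb100.zip dkb500:[backup]dkb100.bck ! "-V" preserves VMS file attributes

Otherwise a saveset copied back from Windows may be unusable until its attributes are reset.)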

So I think I will pursue the backup/image method suggested by Art.

I am not technical enough for the lddriver method.

This is my first time using this forum, and I think the responses have been tremendous. Thanks to all.

By the way, how do you find new messages to respond to?
Art Wiens
Respected Contributor

Re: Disk to disk to tape

You can say "thanks" by assigning points to individual responses to your request ... scale of 0 -> 10 depending on how helpful you thought the response/suggestion was.

New items and responses to previous questions "bubble up" to the top of the list in the main forum category, i.e. OpenVMS in this case.

Cheers,
Art
Steven Schweda
Honored Contributor

Re: Disk to disk to tape

> I am not technical enough for the lddriver
> method.

LD is not that technical, and it could be the best solution. I'd suggest doing a little reading about it (and playing with it).
Dean McGorrill
Valued Contributor

Re: Disk to disk to tape

Hi There,
Art's right on, backup is the way to go. Our old ops group used to do something similar to what you want to do, and used backup into savesets as I remember. Good luck! Dean
Peter Cuozzo
New Member

Re: Disk to disk to tape

One more question. After the initial backup I will have on DKB500:[backup]
dkb0.bck;1
dkb100.bck;1
dkb200.bck;1
dkb300.bck;1
dkb400.bck;1

The next day when this runs again I do not want another version of the same .bck file. Which is the correct qualifier to prevent that: /replace or /overlay?
Steven Schweda
Honored Contributor

Re: Disk to disk to tape

> [...] /replace or /overlay?

Neither? They're for restore operations (as it says in the HELP).

You'll need to delete the old save-set file. You can do it before or after you make the new one, depending on how much disk space you have, and on how much you value the data.
Andy Bustamante
Honored Contributor

Re: Disk to disk to tape


You can also use either:

$ set file/version_limit=n dkb500:[backup]*.bck
which limits the number of versions kept of each existing backup save set, or:

$ set directory/version_limit=n dkb500:[backup]
which limits the number of versions of any new file created in the target directory.

Another option, of course, is to have your batch job/command file test for the save-set file and delete it just prior to the backup if there's one there, as sketched below.
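
A minimal sketch of that test-and-delete (the save-set name is assumed from later in the thread):

$ if f$search("dkb500:[backup]dkb100.bck") .nes. "" then -
delete dkb500:[backup]dkb100.bck;*
$ backup/image dkb100: dkb500:[backup]dkb100.bck/save_set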

Andy
If you don't have time to do it right, when will you have time to do it over? Reach me at first_name + "." + last_name at sysmanager net
Peter Cuozzo
New Member

Re: Disk to disk to tape

I think I will use the PURGE command after each disk backup, to be sure I have a complete new version before I delete the old one, and to be sure I do not run out of disk space on the target drive.

$ backup dkb0: /ignore=interlock/image/verify/record dkb500:[backup]dkb0.bck /save_set
$ purge dkb500:[backup]
$ backup dkb100: /ignore=interlock/image/verify/record dkb500:[backup]dkb100.bck /save_set
$ purge dkb500:[backup]
...
Bill Hall
Honored Contributor

Re: Disk to disk to tape

Peter,

I would think you are guaranteed verification errors using /IGNORE=INTERLOCK and /VERIFY together on a disk mounted write-shared. I'd suggest that in this case you can save a lot of time and shorten your backup window by not using /VERIFY.

What do you do now if you get a verification error? I'd guess the same thing you do when you get an open-file warning: ignore it :-).

Bill
Bill Hall
Dean McGorrill
Valued Contributor

Re: Disk to disk to tape

Peter,
Bill's right, you might wind up with a pile of verification errors. But then again, we don't know how busy your files are. You might want to look at the status after each operation (the $STATUS symbol) - see the sketch below.
Good luck - Dean
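
A sketch of that status check (names assumed from earlier in the thread):

$ backup dkb100: /ignore=interlock/image dkb500:[backup]dkb100.bck/save_set
$ sts = $status ! capture right away, before another command replaces it
$ if .not. sts then write sys$output "DKB100 backup ended with status ''sts'"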
Jon Pinkley
Honored Contributor

Re: Disk to disk to tape

Peter,

Some random comments:

Do you know the reason backups are taking a long time? Is it due to reading the data from the source disks or the writing to tape? If it is because you are reading many small files, then disk to saveset on disk may not speed things up substantially. However, writing to a saveset will in general be faster than file to file, especially for many small files.

It appears that the output disk is on the same SCSI controller as the input disks, which can cause some additional contention for the same resource. Depending on what the DKB devices are, that may not be an issue. Hopefully your tape drive has its own dedicated SCSI controller.

I haven't seen anyone mention incremental backups yet. With VMS backup you have the ability to do a backup/image/record which makes a complete backup of the device, and tags each file with a backup timestamp.

Once that is done, you can do a

$ backup device:[000000...]*.*;*/modified/since=backup/fast output.bck/save_set ! differential backup

to create a "differential" backup; all files modified since the image backup. To save space, but increase the work necessary to recreate a volume from the backups, you can use

$ backup device:[000000...]*.*;*/modified/since=backup/record/fast output.bck/save_set ! incremental backup

This will tag each file backed up with a new backup timestamp, so the next incremental backup will only copy files that have been modified after the previous incremental backup. I prefer to use /record only with the full image backup.

To address your points:

----------------------------------------
Overnight processing window is shrinking
Have an easier recovery mechanism
Have an alternate backup for tape failure
----------------------------------------

The following I have not tried, but I think it should work, based on what I have read about the shadowing product. We are using controller-based mirroring and snapshots at this time, so I have no recent experience with HBVS (not since the WBM optimizations were added).

To really cut your backup window, and to make it possible to have a meaningful verify pass, you need some way to get point-in-time snapshots.

One possibility is host based volume shadowing (HBVS, extra license required), with write bitmaps to track what has changed while the snapshots are not members of the shadowset. This would require using LDDRIVER to partition your DKB500 drive. Because DKB500 could easily become a bottleneck, you will probably not want to have more than one LD device as a shadowset member (SSM) at any point in time (too much contention for the underlying device). However, if only a small percentage of the disk space is modified during a single day, you could bring a single LD device back into the shadowset using the write bitmap (WBM), and only the data that had changed would need to be copied to resynchronize the LD to the DK device (the granularity of the bitmap is around 128 blocks; it may be 127, someone who knows will chime in).

Once synchronized, you would reinit the write bitmap and split off the LD member, which would then be a point-in-time snapshot of the DK device, and VMS will remember which chunks of the DK disk get modified while the LD member is absent from the shadowset, making the next resynchronization much quicker than a complete copy.

If there is a period of time that has little write activity, you could potentially have multiple LD devices shadowed with the corresponding DK devices. Note well: if you do this, and you have write activity to multiple DK devices, the write performance is going to be much slower. The potential advantage is that you could quiesce the system once, and dismount the LD SSMs within a short period of time. This would be the "down time" from the users' perspective.

Summary of this scenario:

DKB500:[000DSK]D100.DSK ! LD container file for mirroring DKB100: This will be LDA100:
DKB500:[000DSK]D200.DSK ! LD container file for mirroring DKB200: This will be LDA200:
DKB500:[000DSK]D300.DSK ! LD container file for mirroring DKB300: This will be LDA300:
DKB500:[000DSK]D400.DSK ! LD container file for mirroring DKB400: This will be LDA400:

DSA100: (DKB100) ! normally only a single member.
DSA200: (DKB200) ! normally only a single member.
DSA300: (DKB300) ! normally only a single member.
DSA400: (DKB400) ! normally only a single member.

Initial copy (one at a time, to reduce contention on DKB500). This will take much longer than the later minicopies with the WBM. A DCL sketch follows the four steps below.

Add LDA100 to the DSA100: mirror and wait for normalization. When complete, create a WBM for DSA100 and remove LDA100:
Add LDA200 to the DSA200: mirror and wait for normalization. When complete, create a WBM for DSA200 and remove LDA200:
Add LDA300 to the DSA300: mirror and wait for normalization. When complete, create a WBM for DSA300 and remove LDA300:
Add LDA400 to the DSA400: mirror and wait for normalization. When complete, create a WBM for DSA400 and remove LDA400:
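
In DCL, that first pass for one device might look like this (a sketch; device names and the volume label are assumed, and an HBVS license plus minicopy support, OpenVMS V7.3 or later, are required):

$ mount/system dsa100: /shadow=(dkb100:) data100 ! one-member shadow set
$ mount/system dsa100: /shadow=(lda100:) data100 ! add the LD member; full copy starts
$ ! ...wait for the copy to complete (SHOW DEVICE DSA100:)...
$ dismount/policy=minicopy lda100: ! split it off; a write bitmap now tracks changes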

Before backup, but during period of low activity on system:

Add LDA members back into shadowsets using the WBM so only modified segments are copied. If multiple LD devices will be members of shadowsets at the same time, wait until previous members have completed copy before adding the next. This is to reduce contention on the common DKB500: which hosts the LD devices.

When all members are normal, initialize WBMs.

Quiesce the system, remove the LD snapshots (specifying the WBM).

Mount LD devices readonly.

Continue processing. Backups can be done at any time, as long as they are complete before the next cycle. Note that verify can be done, as the LD devices are not changing.
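
Rendered as DCL, one refresh-and-snapshot cycle for a single device might look like this (a hedged sketch with the same assumed names; mounting the split-off member standalone may additionally need /OVERRIDE=SHADOW_MEMBERSHIP, so check the HBVS manual first):

$ mount/system dsa100: /shadow=(lda100:) /policy=minicopy data100 ! copy only changed blocks
$ ! ...wait until the member is consistent (SHOW DEVICE DSA100:)...
$ dismount/policy=minicopy lda100: ! split off the refreshed snapshot
$ mount/system/nowrite lda100: data100 ! snapshot mounted readonly for users and backup
$ backup/image lda100: tape:dkb100.bck/save_set ! tape: is a placeholder for your drive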

The LDA device will not be returned to the shadowsets until just before the next snapshot is taken. We are not using shadowing for redundancy; we are using it for point-in-time copies.

Note that with this scenario, you will have an "online" read-only copy of the data that is on the most recent backup: the read-only LD devices. It is very easy for users to restore their own files (file protections are just like the real copy's). And you have one day's worth of "tape protection", i.e. if you come in to find there was a tape parity error, or the tape filled, etc., you can just start the backup over again. The data is still the same.

I will now let the experts that actually use Host Based Volume Shadowing chime in.

Oh, you will need to be running a recent version of VMS for this to work with WBM optimizations.

RE: By the way, how do you find new messages to respond to?

The notes with the latest activity float to the top of the list.

Have fun, and welcome to the forum.

Jon
it depends
Hein van den Heuvel
Honored Contributor

Re: Disk to disk to tape

Peter,

Maybe you want to look at my earlier LD suggestion a little more attentively.

It is a very practical solution, providing immediate online access to the backup, and it can be set up to use just the space needed and nothing (much) more.

Jur,
I only tested with OpenVMS 7.2 and DFU 2.7. For those versions, neither ANAL/DISK/REPAIR nor DFU can fix the inconsistency of the device size not matching the BITMAP.SYS size.
But no worry, the data is perfectly accessible!

A test script is posted below.
Full test log attached, where you can see, for example, how the fragmentation is cleaned up and how, by default, INDEXF.SYS sits in the middle of the disk.

Highlighted log/commands:

$ ld create sys$login:lda9.disk /size=10000
$ ld connec sys$login:lda9.disk lda9:
:
$ init lda9: /ind=beg lda9
$ moun lda9: lda9 / foreign
%MOUNT-I-MOUNTED, LDA9 mounted
$ backup/noinit/image lda4: lda9: ! <-----
$ moun lda9: lda9 / nocache
$ create lda9:[000000]marker.tmp
this is the end
$ pipe dump/head/block=count=0 lda9:[000000]marker.tmp | search sys$pipe lbn
Count: 1 LBN: 646
$ dism lda9:
$ ld disconnect lda9:
$ set file/attr=ebk=700 lda9.disk ! round up a little
$ set file/trun lda9.disk ! Don't need the rest.
$ ld connect sys$login:lda9.disk lda9:
$ mount lda9: lda9
%MOUNT-W-INCONSIZE, inconsistent number of blocks reported, some data may not be accessible
%MOUNT-I-MOUNTED, LDA9 mounted
$ type lda9:[000000]marker.tmp
this is the end

Regards,
Hein.

$create ld_test.com
$!set proc/priv=cmkrnl
$! @SYS$STARTUP:LD$STARTUP
$ ld create sys$login:lda4.disk /size=10000
$ ld create sys$login:lda9.disk /size=10000
$ ld connec sys$login:lda4.disk lda4:
$ ld connec sys$login:lda9.disk lda9:
$ init lda4: lda4
$ moun lda4: lda4
$ cre lda4:[000000]A.TMP
$ cre lda4:[000000]B.TMP
$ create bad.com
$DECK/DOLLARS
$set noveri
$i = 100
$loop:
$appen sys$login:login.com lda4:[000000]a.tmp
$appen sys$login:login.com lda4:[000000]b.tmp
$if i.eq.(10*(i/10)) then copy sys$login:login.com lda4:[000000]x.tmp
$i = i - 1
$if i .gt. 0 then goto loop
$set veri
$exit
$EOD
$ @bad
$ pipe dump/head/block=count=0 lda4:[000000]a.tmp | search/win=0/stat sys$pipe lbn
$ pipe dump/head/block=count=0 lda4:[000000]b.tmp | search/win=0/stat sys$pipe lbn
$ pipe dump/head/block=count=0 lda4:[000000]x.tmp | search sys$pipe lbn
$ pipe dump/head/block=count=0 lda4:[000000]x.tmp.1 | search sys$pipe lbn
$ write sys$output "indexf"
$ pipe dump/head/block=count=0 lda4:[000000]indexf.sys | search sys$pipe lbn
$ dele lda4:[000000]a.tmp.
$ dism lda4
$ moun lda4 lda4/nocach
$ pipe dump/block=(start=2,count=2) lda4:[000000]BITMAP.SYS | sear sys$pipe " 000","Dump of"
$ dfu report lda4:
$ init lda9: /ind=beg lda9
$ moun lda9: lda9 / foreign
$ backup/noinit/image lda4: lda9:
$ dism lda9:
$ moun lda9: lda9 / nocache
$ dfu report lda9
$ pipe dump/head/block=count=0 lda9:[000000]b.tmp | search sys$pipe lbn
$ pipe dump/head/block=count=0 lda9:[000000]x.tmp | search sys$pipe lbn
$ pipe dump/head/block=count=0 lda9:[000000]x.tmp.1 | search sys$pipe lbn
$ write sys$output "indexf"
$ pipe dump/head/block=count=0 lda9:[000000]indexf.sys | search sys$pipe lbn
$ pipe dump/block=(start=2,count=2) lda9:[000000]BITMAP.SYS | sear sys$pipe " 000","Dump of"
$ create lda9:[000000]marker.tmp
this is the end
$ pipe dump/head/block=count=0 lda9:[000000]marker.tmp | search sys$pipe lbn
$! Count: 1 LBN: 646
$ dism lda9:
$ ld disconnect lda9:
$ set file/attr=ebk=700 lda9.disk
$ set file/trun lda9.disk
$ ld connect sys$login:lda9.disk lda9:
$ mount lda9: lda9
$ type lda9:[000000]marker.tmp
$ mcr dfu verify lda9:/rebuild/fix
$ anal/disk/repair lda9:


@ld_test


Hoff
Honored Contributor

Re: Disk to disk to tape

These might not be the answers you want... Others here have covered some good and solid techniques specific to your question. I'm going to back up (pun intended) a couple of steps, and look at the wider environment.

Even without knowledge of your configuration, I can suggest a hardware upgrade: system or storage, or both. If you want to go faster, find the limiting factor(s) and get rid of them -- speed them up, split them up, replace them, or find a new solution.

If you're willing to throw a little hardware and HBVS at the problem, you can drop your primary BACKUP window to near zero, for instance.

I wouldn't bother creating a BACKUP with /IGNORE=INTERLOCK if any of the critical files are the ones that are open. Particularly if the files are open for write, or should there be databases here. (Databases have specific sequences that are required, and usually have database-specific tools to ensure a consistent and recoverable data archive is created.)

There are some small things you can do to speed your immediate processing, though far faster hardware configurations are likely available for comparatively little money on the used-equipment market -- I'm assuming moderate-vintage Alpha gear here, but this could quite easily be a VAX with a couple of SCSI-1 buses. (Alpha prices are in free-fall. And Integrity servers are pretty speedy, and are less expensive than the Alpha prices we all remember.)

An end-to-end evaluation is likely in order, as there are some exposures here, and there are (probably) some hardware upgrades in order.

Do also consider not performing the backup of the system disk as part of regular processing. Do back it up once (or twice) after a change to its contents (ECO, upgrade, etc), and keep the few volatile files off-disk and/or archived separately. Backing up piles of static files consumes a backup window, and those static files are easy to recover from tape or even a home-grown CD or DVD.

Stephen Hoffman
HoffmanLabs LLC