
Compaction

 
iman_1
Advisor

Compaction

Hi,

I have a query about the compaction used when backing up to tape. The backups are done on DLT3 tapes, whose stated capacity is 10 GB in normal mode and 20 GB with compaction enabled.

If I want to back up some data, say 20 GB, to the tape, I do the following:

init /media=compaction mkb0: sysbck
mount /media=compaction /noassist mkb0:

Now my questions are:

1) First, am I right in thinking this would enable data compaction/compression, allowing a 20 GB backup on the tape?

2) If I init the tape with /media=comp but then do not mount it with /media=comp, is it still possible to achieve data compaction on the tape?

rgds,
Jan van den Ende
Honored Contributor

Re: Compaction

Iman,

1) The COMPACTED capacity is only a rough estimate. Some data lends itself very well to compaction, and then a much higher value is achievable; some data is less fit for compression, and then even 2x compression is far from possible.
But for the general picture it usually is pretty good.

If the data you need to back up is one file (or a few similar files), then you will need to experiment.
If it is a collection of all kinds of different files (an entire disk with user data, something like that), then it is fair to expect between 19 and 21 GB.

2) When mounting an already INITted tape, the /MEDIA qualifier of the MOUNT command is irrelevant: the structure already on the tape will be used. (It _IS_ relevant when mounting a NOT yet INITted tape!!)
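
A minimal sketch of that in practice (same device and label as in your own example; /media abbreviates the full qualifier name /MEDIA_FORMAT):

init /media=compaction mkb0: sysbck
mount /noassist mkb0: sysbck   ! no /media qualifier; the compaction setting recorded at INIT time is used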

hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
John Abbott_2
Esteemed Contributor

Re: Compaction

Hi,

We use a mixture of VMS BACKUP and DB utility backups. We are in the habit of putting the switch on the mount as well as the init, although I believe the mount/media=comp is not really necessary. My only concern is that backup/init would, I assume, lose compaction; sorry, I can't test that for you.
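
If you do use backup/init, one thing you might try (untested here, as noted; the input disk name is illustrative) is putting the qualifier on the BACKUP command itself, which also accepts it:

backup /image /initialize /media_format=compaction dka0: mkb0:daily.bck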

Don't forget that the 20 GB is an estimate, based on how much compression the drive can achieve on your data.

Regards
John.
Don't do what Donny Dont does
Karl Rohwedder
Honored Contributor

Re: Compaction

I would also use the /MEDIA=COMP qualifier on the BACKUP command itself, since some tape devices allow different settings per save set.
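
For example, when writing several save sets to the same tape (hypothetical directory and save-set names; /MEDIA_FORMAT is the full spelling of the qualifier):

backup /media_format=compaction [user1...]*.*;* mkb0:user1.bck
backup /media_format=compaction [user2...]*.*;* mkb0:user2.bck   ! repeated for each save set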

regards Kalle
Jiri_5
Frequent Advisor

Re: Compaction

We use /med=com in the init, mount and backup commands (why not?). We have sometimes had problems when we didn't use /med=com on the mount and backup, even though we had initted with /med=com.
Sheldon Smith
HPE Pro

Re: Compaction

As a side note, as Jan briefly mentioned, some files do not compress well. For a file to compress, it has to have regularly repeating segments. To get an idea, try packing one in an archive with Zip or RAR or something of that nature. Text files compress well; PostScript files compress extremely well (down to 10% of the original size or smaller). Files that have already been compressed (Zip, RAR, JPEG, MPEG, to name a few), and simply those with no repeating data (encrypted files, for example), will exhibit little or no compression (99% of the original size or larger).
So the compression performed by the tape drive depends on the MIX of files you are backing up.
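
A quick way to run that test on the VMS side, assuming the Info-ZIP utilities are installed and defined as foreign commands (file name illustrative):

zip test.zip bigfile.dat
unzip "-v" test.zip   ! the verbose listing shows each file's compression ratio; the quotes keep the option lowercase under DCL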

Note: While I am an HPE Employee, all of my comments (whether noted or not), are my own and are not any official representation of the company


Heuser-Hofmann
Frequent Advisor

Re: Compaction

I'm using a program from a guy working for Quantum that activates compression by default for a DLT drive. It is written in C. Drop me a line if you're interested.

regards
Eberhard Heuser
Wim Van den Wyngaert
Honored Contributor

Re: Compaction

1) Try it. It could as well be 10 GB as 30 GB. Try writing the save set twice; that may work, and would prove that you have high compression (or not).

2) HELP INIT /MEDIA says the setting "applies to the entire cartridge". HELP BACKUP /MEDIA says that, if supported by the media (which isn't the case here), the compression is per save set.

Also remember that BACKUP has /GROUP=10 as its default, so 10% overhead is added for redundancy groups. Use /GROUP=0 to allow more data on the tape.
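
For example (a sketch; /group abbreviates /GROUP_SIZE, and note that 0 also gives up the XOR redundancy blocks BACKUP would otherwise use for error recovery):

backup /image /group_size=0 dka0: mkb0:sysbck.bck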

Wim
Valentin Likoum
Frequent Advisor

Re: Compaction

Just to let you know: we had shoe-shining with compaction enabled on our Tandberg DLT8000 drive, and no problem without compaction.
The obvious culprits (the backup account and the system PQL* limits) were found innocent by the collective mind of COV, so the reason remained unclear. In the end we chose a long-lived drive over compaction. So listen to your drive.
Wim Van den Wyngaert
Honored Contributor

Re: Compaction

Some small remarks.

1) If you back up mainly compressed files (e.g. zip), the drive's compression may expand the files instead of compressing them.

2) I wonder if a drive can stay in streaming mode more easily when no compression is used, and thus run faster.

3) I wonder if doing a DIRECTORY of the files to be backed up before the backup starts would improve performance, because the file headers would probably be cached by it.

Wim
Guenther Froehlin
Valued Contributor

Re: Compaction

I haven't checked this for a while, but in the past you did have to specify /MEDIA=COMPACTION in each and every command you used that had this qualifier. Otherwise you could end up with no compaction.

About doing a DIRECTORY/SIZE before a backup operation: true, at one point BACKUP reads in the file header through an XQP call, and this goes through the ACP header cache. But in most backup cases this cache is too small to hold all the headers for the duration of the backup. See the SYSGEN parameter ACP_HDRCACHE (1 unit = 1 file header block). And the ACP_HDRCACHE is typically shared between all systemwide mounted volumes.
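
To check the current value interactively:

mcr sysgen
SYSGEN> show acp_hdrcache
SYSGEN> exit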

LTO tape drives (especially LTO-3) can adjust their speed to avoid a stop-and-go situation, thus improving backup speed in the (typical) case where the disk input cannot keep up with the tape speed.
iman_1
Advisor

Re: Compaction

Dear All,

Thanks for all your info.

I have been able to compact data which had occupied 4 tapes onto 2 tapes. Among other things, the data included some very big .RDA files, which are the database files for Rdb.

I have another question on the same subject.

I had in fact used the following to compact the data:

init /media=compaction mkb0: sysbck
mount /media=compaction /noassist mkb0:

and thereafter used a normal image backup with the /verify option (I did not use any compaction qualifier on the backup command); see the sketch below. Now, suppose there is a file at the end of the first of the two tapes that is too big to fit on that tape, so it is written across both tapes. But as you all know, the /verify option writes the first tape first and then compares the data with the disk. How does OpenVMS maintain the integrity of such big files, which are spread across two different volumes? Could there be any problems while restoring the data from the tapes?
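
For reference, the backup step itself was roughly the following (the input disk name here is illustrative):

backup /image /verify dka0: mkb0:sysbck.bck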

Cheers,
Iman
John Gillings
Honored Contributor

Re: Compaction

Iman,

As others have stated, depending on your data, your 10GB tape will hold 10GB of uncompacted data, and a variable volume of "compacted" data. For pathological data, it may be as low as 5GB or as high as 100GB.

An issue I've seen recently is customers complaining that writing tapes with BACKUP/ENCRYPT (a new feature) takes more tape than expected. That's because /ENCRYPT produces incompressible data. If you want to encrypt and compress data, compress it FIRST using your favourite compression utility, then encrypt it (side benefit: compression also increases the security of the encryption).
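
A sketch of that order of operations (assuming the Info-ZIP utility mentioned earlier in the thread and a VMS version whose BACKUP has /ENCRYPT; file names are illustrative):

zip archive.zip data.rda   ! compress first, while the data still has redundancy
backup /encrypt archive.zip mkb0:archive.bck   ! then encrypt; BACKUP asks for a key value if none is supplied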

> But as you all know the /verify option
> writes the first tape first then compares the
> data with disk. How does OpenVMS
> maintain the integrity of such big files
> which are spread across two different
> volumes? Could there be any problems
> while restoring the data from the tapes?

Are you asking about modifications to the large file after the backup started but before it completed? OpenVMS should give some kind of ACCONFLICT message if this occurs, either to BACKUP when it attempts to open the file, or to the application that tries to open the file for modification while BACKUP is copying it. Check your backup logs to make sure there are no errors or warnings, and don't trust the backup copy of any file that has a warning issued against it.
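
A quick way to scan a log for those (plain DCL SEARCH; the log file name is illustrative):

search backup.log "-W-", "-E-", "-F-"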

If the file is not modified, you can trust BACKUP to reconstruct it correctly on restore.
A crucible of informative mistakes