
Unknown tape blocksize

I'm having trouble restoring an Ultrium tape to disk; the tape should contain an Oracle export file.
Is there a way to determine the block size with which the tape was created?
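
One common way to probe this (a sketch; /dev/rmt/0m is just an example device file, and it assumes the drive is in variable-block mode) is to read a single record with a buffer larger than any block size the writer could plausibly have used, then check how many bytes came back:

# With the tape at the beginning, read exactly one record into a scratch file.
# bs only needs to be larger than the biggest block size the tape could have been written with.
dd if=/dev/rmt/0m of=/tmp/firstblock bs=256k count=1

# On a variable-block device the scratch file's size equals the tape's record size.
ls -l /tmp/firstblock

The rewinding device rewinds on close, so repeated probes start from the beginning of the tape.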
Massimo Bianchi
Honored Contributor

Re: Unknown tape blocksize

Hi,
block size should not be a real issue, because you can read from the tape and pad the data as you want with dd, or simply send the data to imp.


The real problem is: what format is the tape?

dd
cpio
pax
fbackup
omniback
oracle export

?
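
One way to narrow that down (a sketch; the device name is an example) is to capture the first record and let file(1) take a guess at it:

dd if=/dev/rmt/0m of=/tmp/firstblock bs=256k count=1
file /tmp/firstblock              # reports "tar archive", "cpio archive", plain "data", ...
strings /tmp/firstblock | head    # an exp dump typically begins with a readable EXPORT header

file(1) recognises tar and cpio directly; fbackup and OmniBack volumes and Oracle export dumps will usually just show up as "data", which is where the strings check helps.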

Massimo

Re: Unknown tape blocksize

It is an Oracle export, but passing the data straight into imp is not possible (we only have an LTO drive on another machine).

Here's a snippet from the Oracle export log:

Note: RECORDLENGTH=65536 truncated to 65535

I've tried both block sizes with dd, but it's only restoring 4 or 5 MB a minute, when it should be 500 or more.
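
One thing that can help separate the tape side from the disk side (a sketch, reusing the device name from this thread) is to time a read of a fixed amount of data straight to /dev/null:

# Read 1000 x 64k (about 64 MB) from the tape and discard it; timex prints the elapsed time.
timex dd if=/dev/rmt/10m of=/dev/null bs=64k count=1000

If that is also crawling, the bottleneck is on the tape/SCSI side rather than in the filesystem being written to.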
Glen Trevino
Advisor

Re: Unknown tape blocksize

What utility/program did you use to write the data to the tape? fbackup, tar, cpio, etc.? The command that was used to make the tape might help point us in the right direction, especially if it was cpio...


Glen
Massimo Bianchi
Honored Contributor

Re: Unknown tape blocksize

Did you try:

On the server with the LTO drive (I'll call the device file 0m as an example):

dd if=/dev/rmt/0m bs=256k of=/big_dir/oracle.dmp

Massimo

Re: Unknown tape blocksize

I've issued the following command:
dd if=/dev/rmt/10m of=/big_dir/oracle.dmp bs=64k

After about 5 minutes it had restored roughly 5 MB, which is far too slow.
Massimo Bianchi
Honored Contributor

Re: Unknown tape blocksize

Hi,
if that works, check the speed. You can go up with the bs parameter.

I checked RECORDLENGTH on MetaLink, and it is only used as a buffer; it should not matter.

Use a larger bs value.

Massimo



Glen Trevino
Advisor

Re: Unknown tape blocksize

This is what Seagate recommends for writing to their LTO Viper drives with Unix commands that use density and blocking factors:

For commands that use density and tape size settings, the tape density is 124,000 bpi and the tape length is 1800 feet. For commands which use a blocking factor, we suggest a factor of 128.

Try the dd with bs=128k... although that doesn't mean it was actually written that way...
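
For what it's worth, a blocking factor is traditionally counted in 512-byte units, so a factor of 128 would mean 128 x 512 = 64 KB records rather than 128 KB. A hedged sketch of how that factor is used (paths are placeholders):

# Blocking factor 128 = 128 x 512-byte blocks = 64 KB records on tape.
tar cvbf 128 /dev/rmt/0m /big_dir/oracle.dmp     # write with factor 128
tar xvbf 128 /dev/rmt/0m                         # read back with the same factor

With dd, the matching read would be bs=64k - though, as noted, none of this tells us how the tape was actually written.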

Glen

Re: Unknown tape blocksize

I've tried your suggestions, but it doesn't get any faster. Is there a way to determine the block size used on the tape?
Glen Trevino
Advisor

Re: Unknown tape blocksize

Another thing to try is the Oracle block size: 4K or 8K? Or maybe the filesystem block size: 512 bytes? I don't think there is a way to tell what block size was used on a tape (besides trial and error, which is what we're doing...).
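
If it does come down to trial and error, it can at least be scripted (a sketch; the device name and the list of candidate sizes are guesses). On a variable-block tape device a read with a buffer smaller than the record typically fails, while one at or above the record size succeeds, so a loop narrows it down quickly:

for bs in 512 8k 64k 128k 256k
do
    echo "trying bs=$bs"
    # The rewinding device rewinds on close, so each pass starts at the beginning of the tape.
    dd if=/dev/rmt/10m of=/dev/null bs=$bs count=10
done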

Glen
Massimo Bianchi
Honored Contributor

Re: Unknown tape blocksize

Hi,
I think we are not getting up to speed because we cannot keep the drive streaming...

Can you try the following:


dd if=/dev/rmt/10m bs=65535k of=/big_dir/oracle.dmp


That uses a block of about 64 MB, which should keep the drive streaming.

BTW, I do not know how to find out the original block size. Couldn't you ask the person who did the export?

Massimo
Stuart Whitby
Trusted Contributor

Re: Unknown tape blocksize

My main question here would be about the data path. You may be maxing out something in between the two points - though I'd be surprised at 4-5 MB/min.

Is the drive directly connected to the system? If so, using what kind of SCSI card (specifically, I'm looking to find out the max transfer speed). Any other devices on the same bus? Transferring from tape to disk on the same bus still needs to go via the kernel, so you're effectively halving your max throughput. What other activity is going on on this machine - is the backplane getting saturated to such an extent that data cannot be transferred any faster?
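
On HP-UX that layout can be checked from the command line (a sketch; standard ioscan classes):

ioscan -fnC tape       # tape drives: hardware path, driver, and device files
ioscan -fnC ext_bus    # SCSI/FC interfaces, to see which bus the drive sits on
ioscan -fnC disk       # disks, to see whether they share a bus with the drive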

Tape blocksize itself should not be an issue here. The fact that it's reading without error shows that getting data off each block is working fine. If you're looking at changing this, then do it at backup time. Smaller blocksizes for large files will create a small performance hit when writing all the tape marks. Large blocksizes for small files will also create a performance hit, since you're writing a whole lot of blank space before the end of the block. At recover time, the blocksize is what's on the tape - you're not going to change the way it's written at that point.

I'd also suggest getting a scrap tape and sending a 10MB file to it using "time tar cvf..." and recovering using "time tar tvf..." to check the write vs read time (using tvf will still read the entire contents). If there's a big discrepancy here, contact the drive vendor. Use 0m in both cases since you'll get the drive doing the same operations (starting from the beginning, writing/reading, then rewinding).
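
Spelled out, that test might look like this (a sketch; the ~10 MB test file path is a placeholder):

# Write a test file to the scratch tape, then read it back, timing both passes.
time tar cvf /dev/rmt/0m /tmp/tape_test_10mb
time tar tvf /dev/rmt/0m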

HTH,

Stuart.
A sysadmin should never cross his fingers in the hope commands will work. Makes for a lot of mistakes while typing.

Re: Unknown tape blocksize

Hi,

The machine we're restoring on is used to do Ignite backups during the weekend. We've also tried another tape to see whether the drive and the path to it are OK.
With a tape created with bs=8k we get speeds of up to 600 MB/min.

I've sent an email to find out with which command (and block size, of course) the tape was created. I think it's best to wait for that answer.

Re: Unknown tape blocksize

Hi,

Got a reply from the supplier of the tape; it seems they created it with a block size of 64k.

After trying the restore again, it turned out the tape itself was bad. I got a new tape from the supplier, and the restore went fine.

Thanks for your help!