Operating System - HP-UX
Clint Gibler
Frequent Advisor

Move database over slow connection

I have a database on an old server located in South America, and my only connection to it is a slow satellite link. I need to move the database files to my servers here (without running a database export; I want to move the actual files), so I need to figure out the most efficient way to move these files, which are spread across 4 separate volume groups on one machine. I know I can tar or zip each directory and ftp it, but I was wondering if that is the best approach, or if I can somehow move them all at once.
6 REPLIES 6
Hein van den Heuvel
Honored Contributor
Solution

Re: Move database over slow connection


If network transfer speed is an issue, then I'd strongly reconsider using (compressed/zipped) export files, not the raw (compressed) DB files.

Why copy all that index data around?
Just recreate when and where you need.

Using exports you can also cut your data into pieces and overlap the work:
the empty DB could be building while the first export file is transferring. On arrival, start loading it while the next export chunk transfers; when a chunk is loaded, create the indexes for the tables in that chunk. Doing so you can have 3+ things nicely going in parallel all the time:
transfer:N, uncompress:N-1, load:N-2, index:N-3, statistics:N-4
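The overlap idea can be sketched as a small shell loop (file names and the 3-chunk count are made up for the demo; the background gzip stands in for whichever stage you want off the critical path, and a plain cp simulates the transfer):

```shell
#!/bin/sh
# Compress chunk N in the background while the previous compressed
# chunk is "transferred" (simulated here with cp).
set -e
cd /tmp && rm -f exp_chunk.* sent_chunk.*
for n in 1 2 3; do echo "rows for chunk $n" > exp_chunk.$n; done

prev=""
for n in 1 2 3; do
  gzip -f exp_chunk.$n &                        # compress:N in background
  pid=$!
  if [ -n "$prev" ]; then
    cp exp_chunk.$prev.gz sent_chunk.$prev.gz   # transfer:N-1 meanwhile
  fi
  wait $pid
  prev=$n
done
cp exp_chunk.$prev.gz sent_chunk.$prev.gz       # transfer the last chunk
```

In the real pipeline the cp would be an ftp put and further stages (load, index, statistics) would run on the receiving machine the same way.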

If you do decide to transfer the raw, binary DB files, then
- recreate everything that can cleanly be recreated: REDO, UNDO, TEMP, to ensure maximum compression. You may also choose to create undersized REDO, UNDO, TEMP just for the transfer, then on arrival recreate them all full-sized.

- compress all files before sending over the link
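One way to compress without first writing a second full-size copy to disk is to compress while archiving (directory and file names below are examples; on HP-UX you might use compress(1) where this sketch uses gzip):

```shell
#!/bin/sh
# Archive and compress in one pass, so no intermediate full-size
# tar file is ever written to disk.
set -e
mkdir -p /tmp/demo_vg
echo "datafile contents" > /tmp/demo_vg/users01.dbf   # stand-in datafile
tar -cf - -C /tmp demo_vg | gzip -9 > /tmp/demo_vg.tar.gz
```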

Many or few volume groups would not seem to matter much if the network is known to be the bottleneck.

Hth,
Hein.





A. Clay Stephenson
Acclaimed Contributor

Re: Move database over slow connection

Well, you didn't bother to mention the data volumes but given my experience with satellite links, your fastest method might well be FedEx and tapes.

If you are going to transmit the files then I would suggest:
1) tar or fbackup to a file
2) compress that file
3) split the compressed file into chunks
4) cksum those chunks
5) ftp each chunk and cksum the received file; if the cksums don't match, resend
6) cat the chunks back together into a compressed tar or fbackup image
7) uncompress the image
8) untar or frecover the files
The reason I suggest splitting into chunks is that there is much less re-transmission should a failure occur.
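A round-trip of those steps can be sketched on a scratch file (the dd output stands in for the real tar/fbackup image; gzip is used here, compress(1) on HP-UX; all names are examples):

```shell
#!/bin/sh
# Round-trip of the split/checksum/reassemble steps in /tmp.
set -e
cd /tmp
dd if=/dev/urandom of=image.dat bs=1024 count=256 2>/dev/null  # stand-in image
gzip -c image.dat > image.gz              # 2) compress
split -b 65536 image.gz chunk.            # 3) split into 64 KB chunks
cksum chunk.* > chunks.cksum              # 4) record a checksum per chunk
# 5) ftp each chunk.* file; on the far side run cksum again and resend
#    any chunk whose sum differs from chunks.cksum
cat chunk.* > rebuilt.gz                  # 6) reassemble (glob order = split order)
gzip -dc rebuilt.gz > rebuilt.dat         # 7) uncompress
cmp image.dat rebuilt.dat                 # 8) verify (real case: untar/frecover)
```

split names its output chunk.aa, chunk.ab, ..., so the shell glob in step 6 reassembles them in the correct order.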

FTP has the lowest overhead so it should be the transmission mechanism. Search the forums for one of my ftpget.pl or ftpput.pl scripts to make the scripting easy (they will also automatically retransmit if an error occurs). The compression will help to offset the small pipe that you are using. The compression and split will also serve to encrypt the data although I assume your satellite link is secure.

I still think FedEx overnight may well be your fastest method.
If it ain't broke, I can fix that.
Hein van den Heuvel
Honored Contributor

Re: Move database over slow connection

"Never underestimate the bandwidth of a station wagon full of quarter-inch tapes hurtling down the highway" ((Andrew Tannenbaum?)

Or, as the modern translation, a box full of 250 GB (IDE/USB/SCSI) hard drives, or a nicely filled storage array, sent over by plane.

http://en.wikipedia.org/wiki/Talk:Sneakernet

http://www.everything2.com/index.pl?node_id=507783
Clint Gibler
Frequent Advisor

Re: Move database over slow connection

Physical means really aren't an option considering the remote location; it would take at least a week just for the tape to arrive, which means the system would be down the whole time. I guess we will just compress the files and send them over FTP over the weekend.
Thanks.
Yogeeraj_1
Honored Contributor

Re: Move database over slow connection

Hi,

Export the database, compress, then FTP.

This would be wiser in your case.

If you copy the datafiles, the problem is that you would also be sending along unused space, indexes, and other objects that you can recreate once you do an import...


good luck!

kind regards
yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)
Volker Borowski
Honored Contributor

Re: Move database over slow connection

Hi,

here is Option three between backup and export:

If your DB has a clean tablespace split between indexes and tables, transfer only SYSTEM and the tablespaces that contain tables, plus UNDO/rollback if not too big (otherwise, reconfigure it to "small" before the transfer and enlarge it afterwards). Drop all but 2 online redo log groups.

Do a clean shutdown+startup !

Do a structure-only export (ROWS=NO, all others YES) of the database. This should be a very small dump and be quite fast.
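As a sketch, the structure-only export can be driven from a parameter file (file names are examples; FULL, ROWS, FILE and LOG are standard exp parameters):

```
# struct.par -- structure only, no row data
FULL=Y
ROWS=N
FILE=struct.dmp
LOG=struct.log
```

Run it with something like `exp system/<password> PARFILE=struct.par`.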

Clean shutdown.

Transfer Controlfiles, SYSTEM, Online Redo, UNDO and DATA to identical positions.

Startup nomount

Alter database datafile '...' offline;

For all files of index tablespaces.

alter database open

drop index tablespaces

create new index tablespaces

Recreate additional online log groups as needed.

Reconfigure TEMP and UNDO (needed for index creation)

import structure-export (This should create the indexes)

Check !

I'd recommend to test this very carefully.
Checks to ensure that everything went well:

select count(*) from dba_objects;
select count(*) from dba_objects where status = 'INVALID';
select count(*) from dba_views;
select count(*) from dba_tables;
select count(*) from dba_indexes;
select count(*) from dba_data_files;
...

I think you need to measure a bit, to see what you gain in transfer speed against the index rebuild and import time.
Most likely the export option will be most efficient.

This option has the advantage that you can even take the copy online beforehand (... begin backup ...) and then recover the transferred files to bring them current. That only makes sense if the amount of archived logs is not too big, and it will be a bit more tricky.

Option four would be to send an online backup via FedEx and, while it is on the way, transfer the redo logs in their entirety as they become available. Restore and recover all the way to current, as if you were running a manual standby DB. Do a couple of log switches at the source DB and shut it down.
Recover this last bit and "open resetlogs" the target.

Good luck
Volker