Move database over slow connection
03-24-2006 04:56 AM
03-24-2006 05:18 AM
If the network transfer speed is an issue, then I'd strongly reconsider using (compressed/zipped) export files rather than the raw (compressed) DB files.
Why copy all that index data around?
Just recreate it when and where you need it.
Using exports you can also cut your data into pieces and overlap the work:
the empty DB could be building while the first export file is transferring. On arrival, start loading it while the next export chunk transfers; when a chunk is loaded, create the indexes for the tables in that chunk. Doing so you can have 3+ things nicely going in parallel all the time:
transfer:N, uncompress:N-1, load:N-2, index:N-3, statistics:N-4
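A minimal sketch of that overlap on the receiving side, in shell: the chunk file names, chunk count and imp credentials are assumptions for illustration only, and a real script should check that a chunk has finished transferring (e.g. via a marker file) rather than just testing that the file exists.

# Receiving side: load chunk N while chunk N+1 is still coming over the wire.
NCHUNKS=8                                # assumed number of export pieces
n=1
while [ $n -le $NCHUNKS ]
do
    # crude arrival check - in practice wait for a ".done" marker instead
    while [ ! -f expchunk_$n.dmp.Z ]
    do
        sleep 60
    done
    uncompress expchunk_$n.dmp.Z         # expchunk_N.dmp.Z -> expchunk_N.dmp
    # load this chunk; index creation for it can run in another session
    imp system/manager file=expchunk_$n.dmp full=y ignore=y log=imp_$n.log
    n=`expr $n + 1`
done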
If you do decide to transfer the raw, binary DB files, then
- recreate everything that can cleanly be recreated: REDO, UNDO, TEMP, to ensure maximum compression. You may also choose to create undersized REDO, UNDO and TEMP just for the transfer, then recreate them at full size on arrival.
- compress all files before sending them over the link.
Many or few volume groups would not seem to matter much if the network is known to be the bottleneck.
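The compression step itself can be as plain as the loop below, run after a clean shutdown; the datafile directory is a placeholder.

# Compress every datafile before it goes over the thin link; uncompress on arrival.
cd /u01/oradata/PROD        # hypothetical datafile location
for f in *.dbf *.ctl
do
    compress -v "$f"        # produces $f.Z
done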
Hth,
Hein.
03-24-2006 05:26 AM
Re: Move database over slow connection
If you are going to transmit the files then I would suggest:
1) tar or fbackup to a file
2) compress that file
3) split the compressed file into chunks
4) cksum those chunks
5) ftp each chunk and cksum the received file; if the cksums don't match, resend
6) cat the chunks back together into a compressed tar or fbackup image
7) uncompress the image
8) untar or frecover the files
The reason I suggest that you split into chunks is that there is much less re-transmission should a failure occur.
FTP has the lowest overhead, so it should be the transmission mechanism. Search the forums for one of my ftpget.pl or ftpput.pl scripts to make the scripting easy (they will also automatically retransmit if an error occurs). The compression will help to offset the small pipe that you are using, and compressing and splitting the data also makes it a little harder to read in transit, although I assume your satellite link is secure.
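A sending-side sketch of those steps, with made-up paths, host name and credentials, and plain command-line ftp instead of the ftpput.pl script mentioned above:

# 1-4) archive, compress, split into 512 MB chunks, checksum each chunk
cd /backup
tar cf oradb.tar /u01/oradata/PROD       # or: fbackup -f oradb.fb -i /u01/oradata/PROD
compress oradb.tar                       # -> oradb.tar.Z
split -b 512m oradb.tar.Z chunk_         # -> chunk_aa, chunk_ab, ...
cksum chunk_* > chunks.cksum             # compare against this list on the far side

# 5) push each chunk plus the checksum list; resend any chunk whose cksum differs
for f in chunk_* chunks.cksum
do
    ftp -n remotehost <<EOF
user oracle secret
binary
put $f
bye
EOF
done

# 6-8) on the receiving side: cat chunk_* > oradb.tar.Z, verify the cksums,
#      uncompress, then tar xf (or frecover) the image.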
I still think FedEx overnight may well be your fastest method.
03-24-2006 05:39 AM
Re: Move database over slow connection
Or, as the modern translation of that might be: a box full of 250GB (IDE/USB/SCSI) hard drives, or a nicely filled storage array, sent over by plane.
http://en.wikipedia.org/wiki/Talk:Sneakernet
http://www.everything2.com/index.pl?node_id=507783
03-24-2006 06:20 AM
Re: Move database over slow connection
Thanks.
03-24-2006 07:17 PM
Re: Move database over slow connection
Export the database, compress, then FTP;
this would be wiser in your case.
If you copy the datafiles, the problem is that you would also be sending along unused space, indexes and other objects that you can recreate once you do an import...
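One common way of doing exactly that on the source is to export through a named pipe straight into compress, so an uncompressed dump never has to fit on disk; the credentials and file names below are illustrative only.

# Export through a named pipe so the dump is compressed on the fly.
mknod /tmp/exp_pipe p
compress < /tmp/exp_pipe > /backup/full_export.dmp.Z &
exp system/manager full=y file=/tmp/exp_pipe log=full_export.log
wait
rm /tmp/exp_pipe
# FTP /backup/full_export.dmp.Z across, then import the same way in reverse:
#   mknod /tmp/imp_pipe p
#   uncompress < full_export.dmp.Z > /tmp/imp_pipe &
#   imp system/manager full=y file=/tmp/imp_pipe log=full_import.log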
good luck!
kind regards
yogeeraj
03-25-2006 12:47 AM
Re: Move database over slow connection
Here is option three, between backup and export:
If your DB has a clean tablespace split between indexes and tables, transfer only SYSTEM and the tablespaces that contain tables, plus UNDO/Rollback if it is not too big (otherwise, reconfigure it to "small" before the transfer and enlarge it afterwards). Drop all but 2 online redo log groups.
Do a clean shutdown + startup!
Do a structure export (ROWS=N, all others yes) of the database. This should be a very small dump and be quite fast.
Clean shutdown.
Transfer the controlfiles, SYSTEM, online redo, UNDO and data files to identical positions on the target.
Startup mount.
For all files of the index tablespaces:
alter database datafile '...' offline; (offline drop if the DB is in NOARCHIVELOG mode)
alter database open;
Drop the index tablespaces.
Create new index tablespaces.
Recreate additional online redo log groups as needed.
Reconfigure TEMP and UNDO (needed for index creation).
Import the structure export (this should create the indexes).
Check!
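As a rough sketch of the offline/open/drop/rebuild part on the target (tablespace and datafile names are placeholders, OFFLINE DROP assumes NOARCHIVELOG mode, and "/ as sysdba" assumes 9i-style authentication; rehearse on a throwaway copy first):

sqlplus "/ as sysdba" <<EOF
STARTUP MOUNT
-- take every datafile of the index tablespaces offline (paths are examples)
ALTER DATABASE DATAFILE '/u02/oradata/PROD/indx01.dbf' OFFLINE DROP;
ALTER DATABASE DATAFILE '/u02/oradata/PROD/indx02.dbf' OFFLINE DROP;
ALTER DATABASE OPEN;
-- throw away the old index tablespace and recreate it empty
DROP TABLESPACE indx INCLUDING CONTENTS;
CREATE TABLESPACE indx DATAFILE '/u02/oradata/PROD/indx01.dbf' SIZE 2000M;
EXIT
EOF

# let the structure export recreate the index definitions
imp system/manager file=structure.dmp full=y ignore=y log=imp_structure.log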
I'd recommend testing this very carefully.
Checks to ensure that everything went well:
select count(*) from dba_objects;
select count(*) from dba_objects where status = 'INVALID';
select count(*) from dba_views;
select count(*) from dba_tables;
select count(*) from dba_indexes;
select count(*) from dba_data_files;
...
I think you need to measure a bit, to see what you gain in transfer time against the time needed to rebuild indexes and run the import.
Most likely the export option will be most efficient.
This option has the advantage that you can even do it online beforehand (... begin backup ...) and then recover the transferred files to bring them current. This only makes sense if the amount of archive logs is not too big, and it will be a bit more tricky.
Option four would be to send an online backup via FedEx and, while that is on the way, transfer the redo logs as they become available. Restore and recover all the way to current, as if you were running a manual shadow DB. Do a couple of log switches at the source DB and shut it down.
Recover this last bit and "open resetlogs" the target.
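In sqlplus on the target, that final catch-up might look roughly like this (connected as SYSDBA, with the transferred archive logs already copied into place; strictly a sketch to rehearse, not a recipe):

-- mount the restored backup and roll it forward with the shipped logs
STARTUP MOUNT
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL
-- supply each transferred log when prompted; type CANCEL after the last one
ALTER DATABASE OPEN RESETLOGS;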
Good luck
Volker