Community Home > Servers and Operating Systems > Operating Systems > Operating System - HP-UX > data migration
05-17-2005 05:36 PM
data migration
We need to move an Oracle 8i DB (approx. 1.2 TB) from one location to another. Does anyone have advice on the approach that would be most viable performance- and data-integrity-wise? We thought of the following options:
1 ftp the data (approx. 567 datafiles at 2 GB each)
2 backup & restore via backup software (NetBackup)
3 export & import
Now, our limitation with options 1 and 2 is network bandwidth: we don't have a Gigabit Ethernet link between the locations, only 100 Mbit. With option 3 the dump file will be massive; we are uncertain what its final size would be and whether we have enough storage to cater for it. In addition, the export/import timeframe is unknown.
Anyone have any bright advice?
Thanks in advance!
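Before choosing, it may help to estimate the raw transfer time for option 1 over the 100 Mbit link. A back-of-envelope sketch (the ~70% effective-throughput figure is an assumption, not a measurement):

```shell
# 1.2 TB over a 100 Mbit/s link: best case at 100% utilisation,
# and a more realistic figure at an assumed ~70% effective throughput.
ideal=$(awk 'BEGIN { printf "%.0f", 1.2e12*8/100e6/3600 }')
real=$(awk 'BEGIN { printf "%.0f", 1.2e12*8/(100e6*0.7)/3600 }')
echo "best case ~${ideal}h, realistic ~${real}h"
```

So even in the best case the ftp option ties up the link for more than a day, which is why the dump-size question for option 3 matters so much.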
05-17-2005 05:58 PM
Re: data migration
ftp might be slower.
Use rsync. I have used it to compare two servers over a 100 Mbit link, with remarkable results (I have never tried copying with it).
rsync, available at http://hpux.connect.org.uk/hppd/hpux/Networking/Admin/rsync-2.6.4/, is worth considering as an option.
Regards,
Naveej
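A sketch of what the rsync approach suggested above might look like for the datafiles. Host and path names are hypothetical, and the command is only printed here, not executed, since it needs a reachable remote host (and a cleanly shut-down database):

```shell
# Hypothetical names -- substitute your own.
SRC=/u02/oradata/PROD/
DEST=newhost:/u02/oradata/PROD/
# -a preserves ownership/permissions/times, -z compresses in transit
# (worthwhile on a 100 Mbit link), --partial lets an interrupted 2 GB
# datafile transfer resume instead of restarting from zero.
CMD="rsync -az --partial $SRC $DEST"
echo "$CMD"
```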
05-17-2005 07:57 PM
Re: data migration
Option 2 seems to be the preferred one among the three.
Because:
Option 1, using ftp, will be slow since your link is only 100 Mbit. But it depends on how much downtime you can afford.
Option 3 is safe if you have pre-created the database and all the tables. Then, instead of one big export, you can run exports for a few tables at a time, or by individual schemas, import the data with INDEXES=N, and rebuild the indexes afterwards. That way you can schedule exports and imports simultaneously, with more than one exp and imp running.
I agree with you that the time taken will be longer, but it is not that bad when you schedule them simultaneously.
Option 2 will be best: take a full cold backup onto tape, then restore from the tape to the new server into the same location and start up the database. But this will depend on your backup tool and the tape read/write capability.
We use the Data Protector backup tool with high-speed LTO tapes, and 15 GB of data is written to tape in 8 minutes.
IA
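A quick check of the tape numbers quoted above (15 GB written in 8 minutes), scaled to the 1.2 TB database in question:

```shell
# 15 GB in 8 minutes -> GB/hr, then hours to write 1.2 TB (1200 GB) at that rate.
gb_per_hr=$(awk 'BEGIN { printf "%d", 15/8*60 }')
hours_for_db=$(awk 'BEGIN { printf "%.1f", 1200/(15/8*60) }')
echo "${gb_per_hr} GB/hr, ~${hours_for_db} h for 1.2 TB"
```

That puts a single tape pass well under the network-transfer estimates, before counting the restore pass and tape shipping time.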
05-17-2005 08:52 PM
Re: data migration
I think you should try option 3, because a full export will reduce the size of the file you need to transfer from one location to the other: it exports only data, not the free extents in the tablespaces...
If you go for this method, you should use GRANTS=y FULL=y ROWS=y DIRECT=y COMPRESS=y...
Best Regards,
Eric
05-17-2005 09:49 PM
Re: data migration
The full export of my 20 GB database has just completed and gave a 3 GB dump file, something like a 15% size reduction! So, in your case, you may get something like a 200 GB dump file...
Eric
05-17-2005 09:50 PM
Re: data migration
Correction: "something like 15% of the database datafiles' size".
05-17-2005 10:01 PM
Re: data migration
Regarding the volume of data: you could take a cold backup of your database and restore it on the target machine. Obviously, that means you have a backup device attached at both sites.
For example, we have an N-Class with LTO2 drives and XP128 storage; database backup runs at ~200 GB/hr onto one single drive.
What kind of backup device do you use? What is the backend storage in each location?
Regards
Jean-Luc
05-17-2005 10:02 PM
Re: data migration
I'm currently testing the export to see how long it takes.
Thanks in advance!
05-17-2005 10:46 PM
Re: data migration
I would use the export/import option. It will cost you more time, but you can reduce fragmentation in the database.
Unlike Eric, I say use COMPRESS=N: with COMPRESS=Y, the export utility checks the high-water mark of the table being exported and makes that the size of the initial extent. If you later update records in that extent, they won't fit and your tablespace will get enormously fragmented.
A nice trick for the export is to use the named-pipe method. That way, the dump file is placed on the other node directly.
Cheerio,
Renarios
05-17-2005 10:53 PM
Re: data migration
You are confusing tablespace extents with pct_free for each table...
COMPRESS=y will import ALL EXTENTS into ONE EXTENT! This eliminates the tablespace fragmentation!!
05-17-2005 10:55 PM
Re: data migration
I'm using the pipe with compress=y (due to a disk space limitation), writing to disk. How would I export via a pipe to a remote node? Command-line syntax, please.
Thanks in advance!
05-17-2005 11:03 PM
Re: data migration
Your best bet is either a backup and restore via tape, or a backup to disk, then ship the disk.
live free or die
harry d brown jr
05-18-2005 01:02 AM
Re: data migration
Export/Import is really your tool of choice.
Just export the real data: no temp, no undo, no indexes (build those upon arrival).
There is potential to overlap: use the FILESIZE option in export, so it creates manageable chunks. Compress those as they arrive through a pipe over the network, and store them compressed there.
There is also potential to re-organize (page size, tablespace layout, ...).
Syntax? RTFM!
http://download-west.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90192/ch01.htm#1005715
Just to be sure: the COMPRESS option does nothing to the export files themselves; it does not compress them. It only influences the new tables. RTFM.
Hope this helps,
Hein.
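Splitting the export with FILESIZE, as suggested above, might look like this. Credentials and file names are hypothetical, and the command is only printed here rather than executed, since it needs a running database:

```shell
# Hypothetical credentials and dump-file names; FILESIZE caps each chunk
# at 2 GB so exp rolls over to the next FILE entry as each one fills.
EXP_CMD="exp system/manager FULL=y ROWS=y INDEXES=n FILESIZE=2GB FILE=(exp01.dmp,exp02.dmp,exp03.dmp)"
echo "$EXP_CMD"
```

Fixed-size chunks are what make the overlap possible: each finished chunk can be compressed and pushed over the link while the next is still being written.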
05-18-2005 01:08 AM
Re: data migration
Option 2 would be best: do an offline backup. Once it is restored, shut down the current database, ftp the archive logs across, and have the DBA roll them forward onto the restored db.
Rgds...Geoff
05-18-2005 01:38 AM
Re: data migration
Compressing will not compress your datafile; it puts all extents into one initial extent.
For Eric:
You are right in a way, but if you compress all extents into one extent, your new database will eventually get fragmented.
If you export non-compressed, all extents will be placed contiguously into the tablespace, so in the end your fragmentation will be less.
For the export over the network, this is the syntax (host name, target path, and credentials here are placeholders):
mknod /tmp/output p
chmod 777 /tmp/output
nohup dd if=/tmp/output bs=64k | remsh remotehost "dd of=/dump/expdat.dmp bs=64k" &
exp system/manager file=/tmp/output ...
05-18-2005 02:00 AM
Re: data migration
To use lots of advanced options like COMPRESS, FILESIZE, etc... it is easier to do it with Oracle Enterprise Manager.
Renarios,
"You are right in a way, but if you compress all extents into one extent, eventually, your new database will get fragmented.
If you export non-compressed, all fragments will be placed contiguously into the tablespace, so at the end, your fragmentation will be less."
If you have 100 tables in the TOOLS tablespace with an average of 10 extents each, you get 1000 extents, right? So if you export with COMPRESS option you get, after importing, 100 extents (100 tables with 1 extent each)! 100 extents is less fragmentation than 1000, right!?
Best Regards,
Eric Antunes
05-18-2005 02:17 AM
Re: data migration
What about exporting, but writing to an NFS filesystem mounted from the remote server?
Your export should last 60-80 hours; I would guess your dmp file to be at most 300 GB...
Can you use snapshots?
Just thoughts
All the best
Victor
05-18-2005 02:40 AM
Re: data migration
What you say is very true, but all extents are placed contiguously in the tablespace, so in fact there is no fragmentation (I must say: in an empty tablespace).
Edgar,
To speed up the export, use file=/tmp/output DIRECT=y RECORDLENGTH=65535 BUFFER=104857600
Cheerio,
Renarios
05-18-2005 03:44 AM
Re: data migration
You already have enough options, but if you have some spare space to copy the data to an alternate location: stop your database, copy all the datafiles to a new location, compress them with gzip, and then ftp them. While the compression and ftp are running, you can start the database at the current location again, if your requirements allow it. This is simple if you are using something like Business Copy volumes (along with XP arrays).
Export/import at this volume will be a tedious and lengthy process.
HTH,
Devender
05-18-2005 04:36 PM
Re: data migration
Thanks for all the advice, it's been most appreciated! Renarios et al., we completed the export in compressed mode last night around 10 pm. It took approx. 10 hrs in total, compressing 1.2 TB down to roughly a 40 GB dump. Our next challenge will be to import the data! Any bright ideas on that approach would be welcomed.
Thanks in advance!
05-18-2005 05:58 PM
Re: data migration
Import with INDEXES=N and rebuild the indexes later on; that way the import will be faster.
If you exported schemas or tables in groups, you can schedule more than one imp job simultaneously.
IA
05-18-2005 06:17 PM
Re: data migration
Submitted too quickly; there's more.
To speed up import you can use the BUFFER parameter and COMMIT. But exercise caution when rerunning an import if you used the COMMIT parameter previously and do not have a uniqueness constraint on the table (i.e. beware of duplicate rows).
Before running your import of the data rows, consider running an import with INDEXFILE to create a file of index-creation statements. This will not include table constraints, so you should also specify CONSTRAINTS=y to get the table constraints written to this file.
Then run your main data import with INDEXES=N to import the data. Once this is done, you can run the index-creation statements from your file (edited as necessary). Separating the two operations may well speed up the overall process, since the indexes do not have to be maintained as data is inserted.
Remember the rule of thumb: import takes a minimum of 2 to 2.5 times the export time.
Attached is a document from Metalink which describes some basic changes to speed up the import process.
IA
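The two-pass import described above might look like this. Credentials and file names are hypothetical, and the commands are only printed here rather than executed, since they need a database:

```shell
# Pass 1: with INDEXFILE, imp writes index-creation DDL to a script
# and loads no rows; CONSTRAINTS=y includes the table constraints.
PASS1="imp system/manager FILE=expdat.dmp FULL=y INDEXFILE=create_indexes.sql CONSTRAINTS=y"
# Pass 2: load the rows themselves, skipping index maintenance.
PASS2="imp system/manager FILE=expdat.dmp FULL=y INDEXES=n COMMIT=y BUFFER=10485760"
echo "$PASS1"
echo "$PASS2"
```

After pass 2, the edited create_indexes.sql would be run to build the indexes in one sweep.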
05-18-2005 06:19 PM
Re: data migration
05-18-2005 06:36 PM
Re: data migration
Thanks for the advice! FYI, we exported the data as individual schema dumps so as to run multiple imports in parallel. And yes, we're also aware that the import should take approx. 2-3 times longer.
Thanks in advance!
05-25-2005 05:45 AM
Re: data migration
Move this big boy to the new location and restore.
Wish we had one. We can use one right now.
no points please