
data migration

 
Edgar_8
Regular Advisor

data migration

Hi,

We need to move an Oracle 8i DB (approx. 1.2 TB) from one location to another. Does anyone have advice on which approach to the data move would be the most viable, performance- and data-integrity-wise? We thought of the following options:
1. ftp the datafiles (approx. 567 datafiles @ 2GB each)
2. backup & restore via backup software (NetBackup)
3. export & import

Now, our limitation with options 1 & 2 is network bandwidth: we don't have a Gigabit Ethernet link between the locations, only 100 Mbit. With option 3 the dump file will be massive, we are uncertain what its final size would be and whether we have enough storage to cater for it, and the export/import timeframe is also unknown.
Anyone have any bright advice?

Thanks in advance!
Naveej.K.A
Honored Contributor

Re: data migration

Hi,

ftp might be slower.

Use rsync. I have used it to compare two servers over a 100 Mbit link and it produced remarkable results (I have never tried copying with it).

rsync available at http://hpux.connect.org.uk/hppd/hpux/Networking/Admin/rsync-2.6.4/ is worth considering as an option.
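For illustration, a minimal copy over such a link might look like this (the hostname and paths are placeholders; -z compresses in transit, which helps on a slow link, and the database would need to be shut down so the datafiles stay consistent while they copy):

rsync -avz /u01/oradata/ remotehost:/u01/oradata/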

Regards,
Naveej
practice makes a man perfect!!!
Indira Aramandla
Honored Contributor

Re: data migration

Hi Edgar,

Option 2 seems to be the preferred one of the three.

Because:
Option 1, using ftp, will be slow as your link is only 100 Mbit. But it depends on how much downtime you can afford.

Option 3 is safe if you have pre-created the database and all the tables; then, instead of one big export, you can run exports for a few tables at a time, or by individual schemas, import the data with INDEXES=N, and then rebuild the indexes.

This way you can schedule more than one exp and imp to run simultaneously (a sketch follows below).

I agree with you that the time taken will be longer, but not that bad when you schedule them simultaneously.
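For one schema, such an exp/imp pair might look like this (the schema names, file names, and system/manager credentials are placeholders):

exp system/manager OWNER=schema1 FILE=schema1.dmp LOG=schema1_exp.log
imp system/manager FROMUSER=schema1 TOUSER=schema1 FILE=schema1.dmp INDEXES=N LOG=schema1_imp.log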

Option 2 will be best, as you take a full cold backup onto tape and then restore from the tape to the new server into the same location and start up the database. But this will depend on your backup tool and the tape read/write capability.

We use the Data Protector backup tool with high-speed LTO tapes, and 15 GB of data is written to tape in 8 minutes.

IA
Never give up, Keep Trying
Eric Antunes
Honored Contributor

Re: data migration

Hi Edgar,

I think you should try option 3, because a full export will reduce the size of the file you need to transfer from one location to the other: it exports only data, not the free extents in tablespaces...

If you go for this method, you should use GRANTS=y FULL=Y ROWS=y DIRECT=y COMPRESS=y...
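A sketch of the corresponding command line (the dump and log file names, and the system/manager credentials, are placeholders):

exp system/manager FULL=Y ROWS=Y GRANTS=Y DIRECT=Y COMPRESS=Y FILE=full.dmp LOG=full_exp.log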

Best Regards,

Eric
Each and every day is a good day to learn.
Eric Antunes
Honored Contributor

Re: data migration

Hi again,

The full export of my 20 GB database has just completed and gave a 3 GB dump file, something like 15% size reduction! So, in your case, you may get something like a 200 GB dump file...

Eric
Each and every day is a good day to learn.
Eric Antunes
Honored Contributor

Re: data migration

Where I wrote "something like 15% size reduction", I wanted to say:

"something like 15% of the database datafiles' size"
Each and every day is a good day to learn.
Jean-Luc Oudart
Honored Contributor

Re: data migration

Hi Edgar

Given the volume of data, I would take a cold backup of your database and restore it to the target machine. Obviously, this means you need a backup device attached at both sites.

For example, we have an N-Class with LTO2 drives and XP128 back-end storage; database backup runs at ~200 GB/hr onto a single drive.

What kind of backup device do you use ? What is the backend storage in each location ?

Regards
Jean-Luc
fiat lux
Edgar_8
Regular Advisor

Re: data migration

Hi Eric,

I'm currently testing the export to see how long it takes.

Thanks in advance!
renarios
Trusted Contributor

Re: data migration

Hi Edgar,

I would use the export/import option. It will cost you more time, but you can reduce fragmentation in the database.
Contrary to what Eric says, use COMPRESS=N: if you use COMPRESS=Y, the export utility checks the high-water mark of each table being exported and makes that the size of the initial extent. If you later update records in that extent, they won't fit and your tablespace will get fragmented enormously.
A nice trick when creating an export is the named-pipe method. That way, the dump file is written directly on the other node.

Cheerio,

Renarios

Nothing is more successful than failure
Eric Antunes
Honored Contributor

Re: data migration

renarios,

You are confusing tablespace extents with pct_free for each table...

COMPRESS=y will import ALL EXTENTS into ONE EXTENT! This eliminates the tablespace fragmentation!!
Each and every day is a good day to learn.
Edgar_8
Regular Advisor

Re: data migration

Hi Renarios,

I'm using the pipe & COMPRESS=y (due to a disk space limitation) to disk. How would I export via a pipe to a remote node? Command-line syntax, please.

Thanks in advance!
harry d brown jr
Honored Contributor

Re: data migration

It would take you over 50 hours to transmit that much data over a 100 Mbit pipe, and more likely over 80 hours.
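For the arithmetic: 1.2 TB is roughly 9.6 x 10^12 bits, and a 100 Mbit/s link moves at most 10^8 bits/s, so the theoretical floor is about 96,000 seconds (~27 hours); at a more realistic 30-50% sustained throughput you land in the 50-90 hour range.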

Your best bet is either a backup and restore via tape, or a copy to disk so you can ship the disks.

Live free or die
harry d brown jr
Live Free or Die
Hein van den Heuvel
Honored Contributor

Re: data migration

Edgar,

Export/Import is really your tool of choice.
Just export the real data. No temp, no Undo, no Indexes (build those upon arrival).

Potential to overlap: use the FILESIZE option in export so it creates manageable chunks, compress those as they arrive, send them through a pipe over the network, and store them compressed on the far side (see the sketch below).

Potential to re-organize (Pagesize, tablespace layout,...)
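A rough sketch of the overlapped transfer (the host, directory, and file names are illustrative):

exp system/manager FULL=Y ROWS=Y INDEXES=N FILE=(exp01.dmp,exp02.dmp,exp03.dmp) FILESIZE=2GB LOG=full_exp.log
# as each chunk completes, ship it compressed:
cat exp01.dmp | remsh remotehost "gzip -c > /stage/exp01.dmp.gz"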

Syntax? RTFM!
http://download-west.oracle.com/docs/cd/A91202_01/901_doc/server.901/a90192/ch01.htm#1005715

Just to be sure... the COMPRESS option does nothing to the export files themselves; it does not compress those. It only influences the new tables. rtfm.

Hope this helps,
Hein.
Geoff Wild
Honored Contributor

Re: data migration

Well, best would be to have a tool like SRDF on EMC frames... I'm assuming you don't, else you wouldn't be posting here.

Option 2 would be best: do an offline backup. Then, once it is restored, shut down the current database, ftp the archive logs across, and have the DBA roll them onto the restored DB.
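On the target, the roll-forward might look something like this (a sketch only, assuming the ftp'd archive logs have been placed in the target's archive destination):

svrmgrl
SVRMGR> connect internal
SVRMGR> startup mount
SVRMGR> recover database using backup controlfile until cancel
(apply the logs as prompted, type CANCEL when done, then:)
SVRMGR> alter database open resetlogs;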

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
renarios
Trusted Contributor

Re: data migration

Hi Eric,

Compressing will not compress your dump file, but puts all extents into one initial extent.

For Eric:
You are right in a way, but if you compress all extents into one extent, eventually your new database will get fragmented.
If you export non-compressed, all extents will be placed contiguously into the tablespace, so in the end your fragmentation will be less.

For the export over the network, this is the syntax (remotehost stands for your target node):
mknod /tmp/output p
chmod 777 /tmp/output
nohup dd if=/tmp/output | remsh remotehost dd of=/path/to/export/dir/export.dmp &
exp ... file=/tmp/output

Nothing is more successful than failure
Eric Antunes
Honored Contributor

Re: data migration

Edgar,

To use lots of advanced options like COMPRESS, FILESIZE, etc., it is easier to do it with Oracle Enterprise Manager.

Renarios,

"You are right in a way, but if you compress all extents into one extent, eventually, your new database will get fragmented.
If you export non-compressed, all fragments will be placed contiguously into the tablespace, so at the end, your fragmentation will be less."

If you have 100 tables in the TOOLS tablespace with an average of 10 extents each, you get 1000 extents, right? So if you export with COMPRESS option you get, after importing, 100 extents (100 tables with 1 extent each)! 100 extents is less fragmentation than 1000, right!?

Best Regards,

Eric Antunes
Each and every day is a good day to learn.
Victor BERRIDGE
Honored Contributor

Re: data migration

Hi,
What about exporting but writing to an NFS filesystem mounted from the remote server?
Your export should last 60-80 hours; I would guess your dump file to be at most 300 GB...
Can you use snapshots?
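A sketch of the NFS idea, with illustrative names (the remote server would need to export a staging filesystem):

mount -F nfs remotehost:/stage /mnt/stage
exp system/manager FULL=Y FILE=/mnt/stage/full.dmp LOG=/mnt/stage/full_exp.log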


Just thoughts

All the best
Victor
renarios
Trusted Contributor

Re: data migration

Eric,

What you say is very true, but all extents are placed contiguously in the tablespace, so in fact there is no fragmentation (I must say: in an empty tablespace).

Edgar,

To speed up the export, use file=/tmp/output DIRECT=y RECORDLENGTH=65535 BUFFER=104857600
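One caveat: in Oracle's exp utility the BUFFER parameter applies only to conventional-path exports, so with DIRECT=Y it is ignored and RECORDLENGTH is what controls the buffer size.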

Cheerio,

Renarios
Nothing is more successful than failure
Devender Khatana
Honored Contributor

Re: data migration

Hi,

You already have enough options, but still: if you have some spare space, stop your database, copy all the datafiles to an alternate location, and compress them with gzip before the ftp. While the compression and ftp are going on, you can start the database again at the current location if your requirements allow it. This is simple to do if you are using something like Business Copy volumes (along with XP arrays).
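A rough sketch of that flow (the paths are illustrative):

# on the source, after stopping the database and copying the files:
cd /copy
gzip *.dbf
# ftp the .dbf.gz files across in binary mode, then on the target:
gunzip /newloc/*.dbf.gz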

Export/import at this volume will be a tedious and lengthy process.

HTH,
Devender
Impossible itself mentions "I m possible"
Edgar_8
Regular Advisor

Re: data migration

Hi All,

Thanks for all the advice, it's been most appreciated! Renarios et al, we completed the export in compressed mode last night about 10pm; it took approx. 10 hrs in total, compressing 1.2 TB to a roughly 40 GB dump. Our next challenge will be to import the data! Any bright ideas on that approach would be welcomed.

Thanks in advance!
Indira Aramandla
Honored Contributor

Re: data migration

Hi Edgar,

Import with INDEXES=N and rebuild the indexes later on; that way the import will be faster.

If you exported schemas or tables in groups, then you can schedule more than one imp job to run simultaneously.


IA

Never give up, Keep Trying
Indira Aramandla
Honored Contributor

Re: data migration

Hi Edgar,

Submitted too quickly; there's more.

To speed up the import you can use the BUFFER parameter and COMMIT=Y. Exercise caution, though, when rerunning an import if you used the COMMIT parameter previously and do not have a uniqueness constraint on the table (i.e. beware of duplicate rows).

Before running your import of the data rows, you may consider running an import with INDEXFILE to create a file of index creation statements. This will not include table constraints, so you should also specify CONSTRAINTS=Y to get the table constraints written to this file.

Then run your main data import with INDEXES=N to import the data. Once this is done, you can run the index creation statements in your file (edited as necessary). Separating the two operations may well speed up the overall process, as the indexes do not have to be maintained while data is inserted.
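Put together, the sequence might look like this (the file names and system/manager credentials are placeholders):

imp system/manager FULL=Y INDEXFILE=create_indexes.sql CONSTRAINTS=Y FILE=full.dmp
imp system/manager FULL=Y INDEXES=N BUFFER=10485760 COMMIT=Y FILE=full.dmp LOG=full_imp.log
(then edit create_indexes.sql as needed and run it from SQL*Plus)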

REMEMBER THE RULE OF THUMB: import takes a minimum of around 2 to 2.5 times the export time.

Attached is a document from Metalink which describes some basic changes to speed up the import process.

IA
Never give up, Keep Trying
Indira Aramandla
Honored Contributor

Re: data migration

Forgot attachment
Never give up, Keep Trying
Edgar_8
Regular Advisor

Re: data migration

Hi Indira,

Thanks for the advice! FYI, we exported the data as individual schema dumps so as to run multiple imports in parallel. And yeah, we're also aware that the import should take approx. 2-3 times longer.

Thanks in advance!
Gerald Thai
Occasional Advisor

Re: data migration

Considering the size (1.2 TB), your best, quickest, and cheapest option is to rent a portable disk farm. They can customize the OS to match your production machine. Attach this big boy to your back end and take a NetBackup backup.
Move this big boy to the new location and restore.

Wish we had one. We could use one right now.

no points please