
data migration

 
Edgar_8
Regular Advisor

data migration

Hi,

We need to move an Oracle 8i database (approx. 1.2 TB) from one location to another. Does anyone have advice on which approach to the data move would be the most viable in terms of performance and data integrity? We have considered the following options:
1 ftp the datafiles (approx. 567 datafiles @ 2 GB each)
2 backup & restore via backup software (NetBackup)
3 export & import

Our limitation with options 1 & 2 is network bandwidth: we don't have a Gigabit Ethernet link between the locations, only 100 Mb. With option 3 the dump file will be massive, we are uncertain what its final size would be and whether we have enough storage to cater for it, and the export/import timeframe is also unknown.
Anyone have any bright advice?

Thanks in advance!
26 REPLIES
Naveej.K.A
Honored Contributor

Re: data migration

Hi,

ftp over a 100 Mb link is likely to be slow.

Use rsync instead. I have used it to compare two servers over a 100 Mb link and it produced remarkable results (I have never tried copying with it).

rsync, available at http://hpux.connect.org.uk/hppd/hpux/Networking/Admin/rsync-2.6.4/, is worth considering as an option.
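A minimal sketch of a cold datafile copy with rsync (the paths and hostname here are only placeholders, and the database must be shut down cleanly before the datafiles are copied):

# archive mode, verbose, compressed over the wire
rsync -avz /u01/oradata/PROD/ targethost:/u01/oradata/PROD/

rsync can also resume an interrupted transfer, which is handy on a slow link.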

Regards,
Naveej
practice makes a man perfect!!!
Indira Aramandla
Honored Contributor

Re: data migration

Hi Edgar,

Option 2 seems to be the preferred one of the three.

Because:
Option 1, using ftp, will be slow as your link is only 100 Mb. It also depends on how much downtime you can afford.

Option 3 is safe if you have pre-created the database and all the tables. Then, instead of one big export, you can run exports for a few tables at a time or by individual schema, import the data with INDEXES=N, and rebuild the indexes afterwards (see the sketch below).

This way you can schedule exports and imports simultaneously for more than one exp and imp.

I agree with you that the time taken will be longer, but it is not that bad when you schedule them simultaneously.
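A rough sketch of the per-schema approach (the schema name, password placeholder and file names below are illustrative only):

# export one schema on the source server, using the direct path
exp system/<password> owner=SCOTT direct=y file=scott.dmp log=scott_exp.log

# import it on the target with index creation deferred
imp system/<password> fromuser=SCOTT touser=SCOTT indexes=n file=scott.dmp log=scott_imp.log

# extract the index DDL from the dump so the indexes can be rebuilt afterwards
imp system/<password> fromuser=SCOTT touser=SCOTT rows=n indexfile=scott_indexes.sql file=scott.dmp

Several such exp/imp pairs can be scheduled in parallel, one per schema.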

Option 2 will be best: you take a full cold backup onto tape and then restore from the tape onto the new server into the same locations and start up the database. But this will depend on your backup tool and the tape read/write capability.

We use the Data Protector backup tool with high-speed LTO tapes, and 15 GB of data is written to tape in about 8 minutes.

IA
Never give up, Keep Trying
Eric Antunes
Honored Contributor

Re: data migration

Hi Edgar,

I think you should try option 3 because a full export will reduce the size of the file you need to transfer from one location to the other, since it exports only the data, not the free extents in the tablespaces...

If you go for this method, you should use GRANTS=y FULL=y ROWS=y DIRECT=y COMPRESS=y...
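For instance, something along these lines (the file names are just placeholders):

exp system/<password> full=y grants=y rows=y direct=y compress=y file=full_db.dmp log=full_db_exp.log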

Best Regards,

Eric
Each and every day is a good day to learn.
Eric Antunes
Honored Contributor

Re: data migration

Hi again,

The full export of my 20 GB database has just completed and gave a 3 GB dump file, something like 15% size reduction! So, in your case, you may get something like 200 GB of dump file...

Eric
Each and every day is a good day to learn.
Eric Antunes
Honored Contributor

Re: data migration

Where I wrote "something like 15% size reduction", I wanted to say:

"something like 15% the database datafiles size"
Each and every day is a good day to learn.
Jean-Luc Oudart
Honored Contributor

Re: data migration

Hi Edgar

Regarding the volume of data, the best approach would be to take a cold backup of your database and restore it on the target machine. Obviously, this means you need a backup device attached at both sites.

For example, we have N-Class servers with LTO2 drives and XP128 storage, and a database backup runs at roughly 200 GB/hr onto a single drive.

What kind of backup device do you use? What is the backend storage in each location?

Regards
Jean-Luc
fiat lux
Edgar_8
Regular Advisor

Re: data migration

Hi Eric,

I'm currently testing the export to see how long it takes.

Thanks in advance!
renarios
Trusted Contributor

Re: data migration

Hi Edgar,

I would use the export/import option. It will cost you more time, but you can reduce fragmentation in the database.
Unlike Eric, I would use COMPRESS=N, because with COMPRESS=Y the export utility checks the high-water mark of the table being exported and makes that the size of the initial extent. If you later update records in that extent, they won't fit and your tablespace will get fragmented enormously.
A nice trick for creating the export is the named pipe method; that way the dump file is placed on the other node directly (see the sketch below).
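A minimal sketch of the named pipe method, assuming remsh access from the source to the target host (hostname, paths and password are only placeholders):

# create a named pipe on the source server
mknod /tmp/exp_pipe p

# in the background, stream everything written to the pipe straight to the target host
cat /tmp/exp_pipe | remsh targethost "cat > /dumps/full_db.dmp" &

# run the export against the pipe; the dump file lands on the other node as it is written
exp system/<password> full=y file=/tmp/exp_pipe log=full_exp.log

You can also put a compress or gzip in front of the remsh to squeeze more out of the 100 Mb link.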

Cheerio,

Renarios

Nothing is more successful than failure
Eric Antunes
Honored Contributor

Re: data migration

renarios,

You are confusing tablespace extents with pct_free for each table...

COMPRESS=y will import ALL EXTENTS into ONE EXTENT! This eliminates the tablespace fragmentation!!
Each and every day is a good day to learn.