
MAD_2
Super Advisor

Issue with doing an Oracle database export bigger than 2 GB

 
Contrary to popular belief, Unix is user friendly. It's just very particular about who it makes friends with
9 REPLIES
Pete Randall
Outstanding Contributor
Solution

Re: Issue with doing an Oracle database export bigger than 2 GB

I think you'll find that enabling large files will be the answer, so the sooner you can do that, the better off you'll be.

Pete
Jeff Schussele
Honored Contributor

Re: Issue with doing an Oracle database export bigger than 2 GB

Hi Adam,

Pete's got it.
That's the classic symptom of not having largefiles enabled on the FS. The file grows until it hits the limit and then the process errors out - every time.
You're going nowhere with this until you enable largefiles on that FS.
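For reference, a minimal sketch of checking and enabling largefiles with fsadm (the mount point and device names are illustrative; with OnlineJFS this can often be done on a mounted VxFS, while HFS needs the FS unmounted first):

fsadm -F vxfs /u01                            # reports largefiles or nolargefiles
fsadm -F vxfs -o largefiles /u01              # enable on a mounted VxFS filesystem

# HFS variant: unmount first and use the raw device
umount /u01
fsadm -F hfs -o largefiles /dev/vg01/rlvol1
mount /u01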

Rgds,
Jeff
PERSEVERANCE -- Remember, whatever does not kill you only makes you stronger!
A. Clay Stephenson
Acclaimed Contributor

Re: Issue with doing an Oracle database export bigger than 2 GB

While enabling largefiles on the filesystem is probably the answer, you may have to use another approach if the application itself is the problem; I have actually seen this in some ports of the Oracle import/export code.

The workaround is to use a named pipe.

mknod /tmp/mypipe p

You then start a dd if=/tmp/mypipe of=myfile first. This process will block waiting on input. The export then writes to /tmp/mypipe, and you overcome the 2GB limit, which does not apply to a pipe.
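Put together, a minimal sketch of the whole sequence (the account and file names are illustrative; note that dd's output file itself still needs a largefiles filesystem, or a tape device, if it grows past 2 GB):

mknod /tmp/mypipe p                        # create the FIFO
dd if=/tmp/mypipe of=myfile bs=1024k &     # reader blocks until the export writes
exp scott/tiger full=y file=/tmp/mypipe    # export feeds the pipe
wait                                       # let dd drain the last of the pipe
rm /tmp/mypipe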
If it ain't broke, I can fix that.
Steven E. Protter
Exalted Contributor

Re: Issue with doing an Oracle database export bigger than 2 GB

If largefiles is enabled on the target filesystem, there is another possible problem.

There is a bug related to 64-bit Oracle if the database or its data was migrated from a 32-bit system.

You need to read through this post and follow my mantra.

http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x0060ef70e827d711abdc0090277a778c,00.html

Make sure the Java version you have is 1.2 or 1.3; 1.4 is not certified with Oracle.
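A quick way to see what you have installed (paths are illustrative; HP-UX Java usually lives under /opt/java*):

/opt/java1.3/bin/java -version
swlist -l product | grep -i -e java -e jdk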

You MUST have all Java patches and Oracle patches installed. Here is the current Oracle patch list for 11.11.

If there is a newer patch, use that. Build the patches into a single depot from ITRC, then do one install, as sketched after the list below.

PHSS_22898 (there is a newer patch)
PHKL_25506
PHNE_26388
PHSS_26560
PHCO_25452
PHCO_24402
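A sketch of that single-install approach (depot paths are illustrative; ITRC patches unpack with sh):

sh PHSS_22898                    # unpacks to PHSS_22898.text and PHSS_22898.depot
swcopy -s /tmp/PHSS_22898.depot \* @ /var/spool/patch_depot
# ...repeat the unpack and swcopy for each patch in the list, then:
swinstall -x autoreboot=true -x patch_match_target=true -s /var/spool/patch_depot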

P
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
BLADE_1
Frequent Advisor

Re: Issue with doing an Oracle database export bigger than 2 GB

hi,

You can direct the export file to tape; just specify the tape device path in file=xxx.

Another way is to specify filesize in the export command. This way you can split the export dump into multiple subfiles and keep each file below 2GB.
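For instance (the tape device and dump file names are illustrative):

exp scott/tiger full=y file=/dev/rmt/0m
exp scott/tiger full=y filesize=1024M file=\(exp_a.dmp,exp_b.dmp,exp_c.dmp\)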

Or you can also create a pipe and compress at the same time:

$ mknod exppipe p
$ exp file=exppipe ... > exp.log 2>&1 &   ( run it in the background )
$ sleep 300
$ compress < exppipe > expdat.Z

But if you can enable largefiles, nothing beats it.


Hope this answers your question.
fortune favours the brave
Yogeeraj_1
Honored Contributor

Re: Issue with doing an Oracle database export bigger than 2 GB

hi,

you can also try this to prevent the Oracle dump files from exceeding 2GB; you end up with multiple files which can easily be extracted.

Use export to a PIPE and have compress and split read the pipe. The result is a couple of 500MB compressed files that constitute the export. At 500MB, any utility can deal with these files and they can be moved around more easily.

Here is the csh script I use to show you how it is done. It does a full export and then tests the integrity of the export by doing a full import with show=y. That gives a file with all of my source code and DDL to boot.

#!/bin/csh -vx

setenv UID /
setenv FN exp.`date +%j_%Y`.dmp
setenv PIPE /tmp/exp_tmp_ora8i.dmp

setenv MAXSIZE 500m
setenv EXPORT_WHAT "full=y COMPRESS=n"

echo $FN

cd /nfs/atc-netapp1/expbkup_ora8i
ls -l

rm expbkup.log export.test exp.*.dmp* $PIPE
mknod $PIPE p

date > expbkup.log
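# gzip reads the pipe and split chops the compressed stream into $MAXSIZE pieces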
( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &
#split -b $MAXSIZE $PIPE $FN. &

exp userid=$UID buffer=20000000 file=$PIPE $EXPORT_WHAT >>& expbkup.log
date >> expbkup.log


date > export.test
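# reassemble the pieces in order and gunzip them back into the pipe for imp to read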
cat `echo $FN.* | sort` | gunzip > $PIPE &
#cat `echo $FN.* | sort` > $PIPE &
imp userid=sys/o8isgr8 file=$PIPE show=y full=y >>& export.test
date >> export.test

tail expbkup.log
tail export.test

ls -l
rm -f $PIPE

------------ eof -------------------------

Another solution if you are running 8i would be:

E.g.
$ORACLE_HOME/bin/exp $ACC_PASS filesize=1024M file=\($DMP_PATH1/yddbexp"$dt"FULLa.dmp, $DMP_PATH1/yddbexp"$dt"FULLb.dmp, $DMP_PATH1/yddbexp"$dt"FULLc.dmp, $DMP_PATH1/yddbexp"$dt"FULLd.dmp, $DMP_PATH1/yddbexp"$dt"FULLe.dmp\) buffer=409600 log=$LOG_PATH/yddbexp"$dt"FULL.log full=Y grants=Y rows=Y compress=N direct=n



Hope this helps too!

Best Regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)
Rory R Hammond
Trusted Contributor

Re: Issue with doing an Oracle database export bigger than 2 GB


mknod /dev/oracle_pipe p

If the pipe exists, the following one-liner works (the | simply starts both processes together; gzip actually reads from the pipe via the redirect):

exp scott/tiger buffer=1000000 file=/dev/oracle_pipe full=y grants=y log=export.log | gzip < /dev/oracle_pipe > exp.dmp.gz

If you don't have largefiles enabled, this assumes that the gzipped result will be less than 2 GB.

To import it back later:

gunzip < exp.dmp.gz > /dev/oracle_pipe | imp scott/tiger full=y ignore=y commit=y buffer=1000000 file=/dev/oracle_pipe



There are 100 ways to do things and 97 of them are right
MAD_2
Super Advisor

Re: Issue with doing an Oracle database export bigger than 2 GB

Thanks everyone for your answers. As you have noticed, I have not assigned points to each of your answers; that is because I am still evaluating a couple of them to see if they would work as alternative methods for resolving this type of problem. The solution was in fact to enable largefiles. The reason it took so long is that the problem was on a production system: enabling largefiles meant unmounting/remounting the device, which was holding some essential processes, and that could not be done at a "convenient" time.

Prior to the unmount/remount, we had to find a different solution because that export was a hot item and we were a few hours behind. What we ended up doing was breaking the export into 1GB files through the export facility's FILESIZE parameter; here is an example as it would be used in a script:

==========================================

TSTAMP=`date +%Y%m%d`
DFILE=/filesystemwith/ORexp$TSTAMP.dmp
LFILE=/filesystemwith/ORexp$TSTAMP.log
LOGIN=login/passwd

exp $LOGIN file=$DFILE FILESIZE=1G COMPRESS=Y FULL=Y LOG=$LFILE

==========================================
Contrary to popular belief, Unix is user friendly. It's just very particular about who it makes friends with
Yogeeraj_1
Honored Contributor

Re: Issue with doing an Oracle database export bigger than 2 GB

hi,

A few comments:

1. "compress=y"
You are using "compress=y" on the export - which is deadly!! You should use compresss=N so that exp does not add up all of the extents in the table and puts them as the initial.

NB. compress in exp means "should I compress all of your currently allocated extents into a single intial extent".

2. "FILE="
Please specify the file names like in my example above, otherwise you may run into errors!



Hope this helps!
Best Regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)