Issue with doing an Oracle database export bigger than 2 GB
01-28-2003 10:03 AM
01-28-2003 10:07 AM
01-28-2003 10:26 AM
Re: Issue with doing an Oracle database export bigger than 2 GB
Pete's got it.
That's the classic symptom of not having largefiles enabled on the FS. The file grows until it hits the limit and the process then errors out, every time.
You're going nowhere with this until you enable largefiles on that FS.
Rgds,
Jeff
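For reference, checking and enabling largefiles on HP-UX is quick with fsadm, assuming a VxFS filesystem with OnlineJFS installed (the mount point /u01 below is only an example):
# report the filesystem's largefiles status ("largefiles" or "nolargefiles")
fsadm -F vxfs /u01
# enable largefiles on the mounted filesystem
fsadm -F vxfs -o largefiles /u01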
01-28-2003 10:55 AM
Re: Issue with doing an Oracle database export bigger than 2 GB
The workaround is to use a named pipe:
mknod /tmp/mypipe p
You then run dd if=/tmp/mypipe of=myfile in the background first. This process will block waiting on input. The export then writes to file=/tmp/mypipe, and you overcome the 2 GB limit because it does not apply to a pipe.
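Put together, a minimal sketch of the sequence (names and paths here are only examples; compressing through the pipe also keeps the on-disk file smaller):
# create the pipe and start a reader in the background first
mknod /tmp/mypipe p
compress < /tmp/mypipe > /dumps/expdat.dmp.Z &
# the export then writes into the pipe
exp system/manager full=y file=/tmp/mypipe log=exp.log
rm /tmp/mypipe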
01-28-2003 11:27 AM
Re: Issue with doing an Oracle database export bigger than 2 GB
There is a bug related to 64-bit Oracle if the database or its data was migrated from a 32-bit system.
You need to read through this post and follow my mantra.
http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x0060ef70e827d711abdc0090277a778c,00.html
Make sure the Java version you have is 1.2 or 1.3; 1.4 is not certified with Oracle.
You MUST have all Java patches and Oracle patches installed. Here is the current Oracle patch list for 11.11.
If there is a newer patch, use that. Build the patches into a single depot from ITRC, then do one install.
PHSS_22898 (a newer patch is available)
PHKL_25506
PHNE_26388
PHSS_26560
PHCO_25452
PHCO_24402
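The depot-then-one-install step typically looks like this; the depot path and patch file names below are hypothetical:
# merge each downloaded patch into one combined depot
swcopy -s /tmp/PHSS_22898.depot \* @ /tmp/patch_depot
# then install everything that matches the target in a single pass
swinstall -s /tmp/patch_depot -x autoreboot=true -x patch_match_target=true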
P
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
01-28-2003 08:39 PM
Re: Issue with doing an Oracle database export bigger than 2 GB
You can direct the export file to tape; just give the tape device path in file=xxx.
Another way is to specify filesize in the export command. This way you can split the export dump into multiple subfiles and keep each filesize below 2 GB.
Or you can also create a pipe and compress at the same time:
$ mknod exppipe p
$ exp file=exppipe ... > exp.log 2>&1 &    ( run it in the background )
$ compress < exppipe > expdat.Z
But if you can enable largefiles, nothing like it.
Hope this answers your question.
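For the tape option, the file parameter simply points at the tape device; a hypothetical HP-UX example:
$ exp system/manager full=y file=/dev/rmt/0m log=exp_tape.log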
01-28-2003 09:19 PM
Re: Issue with doing an Oracle database export bigger than 2 GB
You can also try this to prevent the Oracle dump files from exceeding 2 GB; you end up with multiple files which can easily be extracted.
Use export to a pipe and have compress and split read the pipe. The result is a couple of 500 MB compressed files that constitute the export. At 500 MB, any utility can deal with these files, and they can be moved around more easily.
Here is the csh script I use to show you how it is done. It does a full export and then tests the integrity of the export by doing a full import with show=y. That also gives a file with all of my source code and DDL to boot.
#!/bin/csh -vx
# UID "/" means an OS-authenticated login
setenv UID /
setenv FN exp.`date +%j_%Y`.dmp
setenv PIPE /tmp/exp_tmp_ora8i.dmp
setenv MAXSIZE 500m
setenv EXPORT_WHAT "full=y COMPRESS=n"
echo $FN
cd /nfs/atc-netapp1/expbkup_ora8i
ls -l
# clean up leftovers from the previous run
rm expbkup.log export.test exp.*.dmp* $PIPE
mknod $PIPE p
date > expbkup.log
# readers first: gzip whatever comes through the pipe and split it into 500 MB chunks
( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &
#split -b $MAXSIZE $PIPE $FN. &
# the export writes into the pipe
exp userid=$UID buffer=20000000 file=$PIPE $EXPORT_WHAT >>& expbkup.log
date >> expbkup.log
# integrity test: reassemble the chunks, gunzip back into the pipe, imp show=y reads it
date > export.test
cat `echo $FN.* | sort` | gunzip > $PIPE &
#cat `echo $FN.* | sort` > $PIPE &
imp userid=sys/o8isgr8 file=$PIPE show=y full=y >>& export.test
date >> export.test
tail expbkup.log
tail export.test
ls -l
rm -f $PIPE
------------ eof -------------------------
Another solution, if you are running 8i, would be the filesize parameter, e.g.:
$ORACLE_HOME/bin/exp $ACC_PASS filesize=1024M file=\($DMP_PATH1/yddbexp"$dt"FULLa.dmp, $DMP_PATH1/yddbexp"$dt"FULLb.dmp, $DMP_PATH1/yddbexp"$dt"FULLc.dmp, $DMP_PATH1/yddbexp"$dt"FULLd.dmp, $DMP_PATH1/yddbexp"$dt"FULLe.dmp\) buffer=409600 log=$LOG_PATH/yddbexp"$dt"FULL.log full=Y grants=Y rows=Y compress=N direct=n
Hope this helps too!
Best Regards
Yogeeraj
01-29-2003 10:06 AM
Re: Issue with doing an Oracle database export bigger than 2 GB
mknod /dev/oracle_pipe p
Once the pipe exists, the following works (start gzip reading from the pipe first, then run the export):
gzip < /dev/oracle_pipe > exp.dmp.gz &
exp scott/tiger buffer=1000000 file=/dev/oracle_pipe full=y grants=y log=export.log
If you don't have largefiles enabled, this assumes that the gzipped result will be less than 2 GB.
To import it back:
gunzip < exp.dmp.gz > /dev/oracle_pipe &
imp scott/tiger full=y ignore=y commit=y buffer=1000000 file=/dev/oracle_pipe
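If even the gzipped dump might pass 2 GB without largefiles, the same pipe can feed split instead; chunk size and names below are only examples:
gzip < /dev/oracle_pipe | split -b 1024m - exp.dmp.gz. &
exp scott/tiger buffer=1000000 file=/dev/oracle_pipe full=y grants=y log=export.log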
02-03-2003 04:29 AM
Re: Issue with doing an Oracle database export bigger than 2 GB
Prior to doing the unmounting/remounting, we had to find a different solution: that export was a hot item and we were a few hours behind. What we ended up doing was breaking the export into 1 GB files through the export facility. Here is an example as it would be used in a script:
==========================================
TSTAMP=`date +%Y%m%d`
DFILE=/filesystemwith
LFILE=/filesystemwith
LOGIN=login/passwd
exp $LOGIN file=$DFILE FILESIZE=1G COMPRESS=Y FULL=Y LOG=$LFILE
==========================================
02-03-2003 04:50 AM
Re: Issue with doing an Oracle database export bigger than 2 GB
A few comments:
1. "compress=y"
You are using "compress=y" on the export, which is deadly!! You should use compress=N so that exp does not add up all of the extents in the table and put the total in as the initial extent.
NB: compress in exp means "should I compress all of your currently allocated extents into a single initial extent".
2. "FILE="
Please specify the file names as in my example above; otherwise you may run into errors!
Hope this helps!
Best Regards
Yogeeraj