
file too large

HP-UX User_1
Frequent Advisor

file too large

I am trying to export a file and I am getting this error:

/export/CONVERSION/son_db.exp # more cost_00107.unl
cost_00107.unl: Value too large to be stored in data type

What could be the cause?

Shannon Petry
Honored Contributor

Re: file too large

Without knowing the details of what you're doing, I can't help much.

It appears from the filename that you are exporting a CATIA file. Is this correct?

If so, what application is producing the file cost_00107.unl? That is not a CATIA file type, so is this your own code?

So if you explain which applications you are trying to use, more people will be able to help.

Microsoft. When do you want a virus today?
Ian Lochray
Respected Contributor

Re: file too large

Oracle reports several errors in which utilities give this message when the file size exceeds 2 GB. How big is your export file?
HP-UX User_1
Frequent Advisor

Re: file too large

Ian, we are having trouble exporting the file because it is larger than 2 GB.
Ian Lochray
Respected Contributor

Re: file too large

Oracle provides the following guidelines for getting around this problem. I tried them when I encountered the same problem, and they do work.
From: 06-Dec-00 04:56
Subject: Export terminated after 2GB

RDBMS Version: 8.0.5
Operating System and Version: Solaris 2.6
Error Number (if applicable):
Product (i.e. SQL*Loader, Import, etc.):
Product Version:

Export terminated after 2GB

We have Oracle 8.0.5 on Solaris 2.6.
I run an export command, but it terminates when the export file reaches 2 GB, which is the file-size restriction.

Can anyone tell me how to export and import the database?


From: Oracle, Jaikishan Tada 06-Dec-00 08:45
Subject: Re : Export terminated after 2GB


This note provides different options for exporting a file greater than 2 GB:

QREF: Export/Import/SQL*Load large files in Unix - Quick Reference
- Jaikishan
Oracle Support Services

From: Melissa Haller 06-Dec-00 20:16
Subject: Re : Export terminated after 2GB

Workaround #1:
Investigate whether there is a way to split up the export at the schema level.
Perhaps you can export the schema with the highest number of objects in a
separate export in order to fit under the 2GB limit. Also, investigate whether
certain large tables can be exported separately.

Workaround #2:


Use the UNIX pipe and split commands:

Export command:

echo|exp file=>(split -b 1024m - expdmp-) userid=scott/tiger tables=X

Note: you can add any other "exp" parameters. This works only in ksh and
has been tested on Sun Solaris 5.5.1.

Import command:

echo|imp file=<(cat expdmp-*) userid=scott/tiger tables=X
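Before trusting the split pipeline with a real dump, the reassembly step can be sanity-checked with a throwaway file (all filenames here are illustrative, not from the note above):

```shell
# Sanity check of the split/reassemble round trip on a dummy file.
dd if=/dev/zero of=sample.dat bs=1024 count=100 2>/dev/null
split -b 40k sample.dat chunk-     # -> chunk-aa, chunk-ab, chunk-ac
cat chunk-* > rebuilt.dat          # the shell expands chunk-* in sorted order
cmp -s sample.dat rebuilt.dat && echo "round trip OK"
```

The glob expansion is what makes `cat expdmp-*` safe above: split's default suffixes (`aa`, `ab`, ...) sort in the order the pieces were written.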

Splitting and compressing at the same time:

Export command:

echo|exp file=>(compress|split -b 1024m - expdmp-) userid=scott/tiger tables=X

Import command:

echo|imp file=<(cat expdmp-*|zcat) userid=scott/tiger tables=X
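The compressed variant round-trips the same way. This sketch uses gzip in place of compress/zcat, since gzip is more widely installed; the filenames are illustrative:

```shell
# Compress, split, reassemble, decompress, and verify on a dummy file.
dd if=/dev/urandom of=src.dat bs=1024 count=100 2>/dev/null
gzip -c src.dat | split -b 40k - expdmp-
cat expdmp-* | gzip -dc > out.dat
cmp -s src.dat out.dat && echo "compressed round trip OK"
```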

Workaround #3:
This is almost the same as above, but in a three-step implementation using an
explicit UNIX pipe without the split command, relying only on compress:

Export command:

1) Make the pipe
mknod /tmp/exp_pipe p

2) Compress in the background
compress < /tmp/exp_pipe > export.dmp.Z &
(or, to skip compression: cat /tmp/exp_pipe > export.dmp &)

3) Export to the pipe
exp file=/tmp/exp_pipe userid=scott/tiger tables=X

Import command:

1) Make the pipe
mknod /tmp/imp_pipe p

2) Uncompress in the background
uncompress < export.dmp.Z > /tmp/imp_pipe &
(or, if the export was not compressed: cat export.dmp > /tmp/imp_pipe &)

3) Import through the pipe
imp file=/tmp/imp_pipe userid=scott/tiger tables=X
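The named-pipe mechanics in workaround #3 can be tried end to end with a tiny stand-in for exp. mkfifo is the portable spelling of `mknod ... p`; all names here are illustrative:

```shell
# A background compressor drains the FIFO while a writer (playing the
# role of exp) writes into it -- the same shape as workaround #3.
rm -f /tmp/demo_pipe
mkfifo /tmp/demo_pipe                         # equivalent to: mknod /tmp/demo_pipe p
gzip -c < /tmp/demo_pipe > demo.dmp.gz &      # compressor in the background
printf 'pretend this is exp output\n' > /tmp/demo_pipe   # writer stands in for exp
wait                                          # let gzip flush demo.dmp.gz and exit
gzip -dc demo.dmp.gz                          # recover the original stream
rm -f /tmp/demo_pipe
```

Note that both ends must open the FIFO: the background reader blocks until the writer opens it, which is exactly why exp can be started last in the three-step recipe.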
Ian Lochray
Respected Contributor

Re: file too large

BTW, I used workaround three.
Brian Crabtree
Honored Contributor

Re: file too large

What version of Oracle are you using, and what OS are you using? HP-UX 10.20 allowed for larger files to be created with a patch and filesystem setting, and HP-UX 11.0 allows for large files to be created with just the filesystem being changed. You might want to verify that you can create a large file on the disk, and see if that resolves the problem first.

(You should still learn the options in the previous message, though; they are worthwhile when you have very large exports and not enough disk space for the dump.)
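One quick way to verify that you can create a large file on the disk, as suggested above, is a sparse-file probe. The path is illustrative, and the file consumes almost no real disk space:

```shell
# Try to create a file just past the 2 GB mark. On a filesystem mounted
# without largefiles, the dd fails with "File too large".
dd if=/dev/zero of=/tmp/bigprobe bs=1 count=1 seek=2147483648 2>/dev/null \
  && echo "large files supported here" \
  || echo "hit the 2 GB limit"
rm -f /tmp/bigprobe
```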

Honored Contributor

Re: file too large

Hi Michelle,

Maybe you are hitting the 2 GB limit of the OS!

I did something a few years ago with my Oracle 7 database exports. Maybe you can find some hints in this.

============================================================
The 2 GB limit applies if you let export write directly to the file. I export
to a PIPE and have compress and split read the pipe. The result is a couple of
500 MB compressed files that constitute the export. At 500 MB, any utility can
deal with these files, and I can move them around more easily.

Here is the CSH script I use to show how it is done. It does a full export
and then tests the integrity of the export by doing a full import with show=y.
That gives me a file with all of my source code and DDL, to boot.

#!/bin/csh -vx

setenv UID /
setenv FN exp.`date +%j_%Y`.dmp
setenv PIPE /tmp/exp_tmp_ora8i.dmp

setenv MAXSIZE 500m
setenv EXPORT_WHAT "full=y COMPRESS=n"

echo $FN

cd /nfs/atc-netapp1/expbkup_ora8i
ls -l

# clean up the previous run and recreate the named pipe
rm -f expbkup.log export.test exp.*.dmp* $PIPE
mknod $PIPE p

date > expbkup.log
# gzip compresses what exp writes to the pipe; split cuts it into 500 MB pieces
( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &
#split -b $MAXSIZE $PIPE $FN. &

exp userid=$UID buffer=20000000 file=$PIPE $EXPORT_WHAT >>& expbkup.log
date >> expbkup.log

# integrity check: reassemble the pieces, gunzip into the pipe, imp show=y reads it
date > export.test
cat `echo $FN.* | sort` | gunzip > $PIPE &
#cat `echo $FN.* | sort` > $PIPE &
imp userid=sys/o8isgr8 file=$PIPE show=y full=y >>& export.test
date >> export.test

tail expbkup.log
tail export.test

ls -l
rm -f $PIPE

------------ eof -------------------------

Otherwise, if you are running an Oracle 8i database, there is already a built-in solution: the FILESIZE parameter makes exp roll over to the next dump file once the current one reaches the given size. Try:

export dt=`date +%Y-%m-%d`

$ORACLE_HOME/bin/exp $ACC_PASS filesize=1024M \
  file=\($DMP_PATH1/cmtdbexp"$dt"FULLa.dmp, $DMP_PATH1/cmtdbexp"$dt"FULLb.dmp, \
  $DMP_PATH1/cmtdbexp"$dt"FULLc.dmp, $DMP_PATH1/cmtdbexp"$dt"FULLd.dmp, \
  $DMP_PATH1/cmtdbexp"$dt"FULLe.dmp\) \
  buffer=409600 log=$LOG_PATH/cmtdbexp"$dt"FULL.log \
  full=Y grants=Y rows=Y compress=N direct=n
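As a purely hypothetical convenience (not part of the original post), the five-name file=(...) list can be generated in the shell rather than typed out; the path /dumps is illustrative:

```shell
# Hypothetical helper: build the comma-separated list for exp's file=(...)
# argument. Paths and names are illustrative.
dt=$(date +%Y-%m-%d)
files=""
for part in a b c d e; do
    files="${files:+$files,}/dumps/cmtdbexp${dt}FULL${part}.dmp"
done
echo "$files"
```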
Hope this helps.

Best regards

No person was ever honoured for what he received. Honour has been the reward for what he gave. (Calvin Coolidge)
Steven E. Protter
Exalted Contributor

Re: file too large

I've seen this; it's the OS filesystem.

For 10.20 and 11.00 you need the largefiles option in /etc/fstab, and you will need to re-create the filesystems.

For 11.11 you'll need to recreate, but need not modify /etc/fstab.

For Ignite transfers, remember that files larger than 2 GB don't get built into make_tape_recovery or Ignite golden images. You need to handle those transfers separately.

Don't forget to specify largefiles in the Ignite profile for the target systems.

Steven E Protter
Owner of ISN Corporation
Judy Traynor
Valued Contributor

Re: file too large

I am late on this one, but the previous answer is correct. If that does not work, post again: the filesystem must have the largefiles option.

You may also want to check your swap space and MAXDSIZ.
Sail With the Wind
Tom Maloy
Respected Contributor

Re: file too large

You should not need to recreate the file system. Set the largefiles option with fsadm:

/usr/sbin/fsadm -F vxfs -o largefiles /dev/vgXX/rlvolname

using appropriate values for vgXX and rlvolname.

And add the largefiles option in fstab.
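For reference, an /etc/fstab entry with the option set might look like this; the device, mount point, and companion options are illustrative:

```
/dev/vg01/lvol3  /export  vxfs  delaylog,largefiles  0  2
```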

Carpe diem!