12-06-2002 07:02 AM
file too large
I get the error "cost_00107.unl: Value too large to be stored in data type". What could be the cause?
12-06-2002 07:16 AM
Re: file too large
It appears from the filename that you are exporting a CATIA file. Is this correct?
If so, what app is making the error file cost_00107.unl? That is not a CATIA file type. So is this your own code?
If you explain the apps you are trying to use, then I think more people will be able to help.
Regards,
Shannon
12-06-2002 07:21 AM
Re: file too large
12-06-2002 07:45 AM
Re: file too large
12-06-2002 08:00 AM
Re: file too large
Oracle provides the following guidelines for getting around this problem. I tried them when I encountered the same problem, and they do work.
Displayed below are the messages of the selected thread.
Thread Status: Closed
From: ravigj@cdac.ernet.in 06-Dec-00 04:56
Subject: Export terminated after 2GB
RDBMS Version: 8.0.5
Operating System and Version: Solaris 2.6
Error Number (if applicable):
Product (i.e. SQL*Loader, Import, etc.):
Product Version:
Export terminated after 2GB
Hi,
We have Oracle 8.0.5 on Solaris 2.6.
I run an export command, but it terminates when the export file reaches 2GB, which is the file size restriction.
Can anyone tell me how to export and import the database?
Ravi
From: Oracle, Jaikishan Tada 06-Dec-00 08:45
Subject: Re : Export terminated after 2GB
Hi,
This note provides different options for exporting files greater than 2GB:
QREF: Export/Import/SQL*Load large files in Unix - Quick Reference
- Jaikishan
Oracle Support Services
From: Melissa Haller 06-Dec-00 20:16
Subject: Re : Export terminated after 2GB
Workaround #1:
--------------
Investigate to see if there is a way to split up the export at the schema level.
Perhaps you can export the schema with the highest number of objects in a
separate export in order to fit under the 2GB limit. Also, investigate whether
certain large tables can be exported separately.
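To illustrate, a minimal sketch of the schema-level split (the schema names and credentials here are hypothetical, not from the thread):

exp userid=system/manager owner=BIGAPP file=bigapp.dmp log=bigapp.log
exp userid=system/manager owner=\(HR,FINANCE\) file=rest.dmp log=rest.log

The largest schema goes into its own dump so that each file stays under the 2GB limit.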
Workaround #2:
--------------
!!! IMPORTANT: THESE EXAMPLES ONLY WORK IN KORN SHELL (KSH) !!!
Use the UNIX pipe and split commands:
Export command:
echo|exp file=>(split -b 1024m - expdmp-) userid=scott/tiger tables=X
Note: You can add any other "exp" parameters. This works only in ksh and
has been tested on Sun Solaris 5.5.1.
Import command:
echo|imp file=<(cat expdmp-*) userid=scott/tiger tables=X
Splitting and compressing at the same time:
Export command:
echo|exp file=>(compress|split -b 1024m - expdmp-) userid=scott/tiger tables=X
Import command:
echo|imp file=<(cat expdmp-*|zcat) userid=scott/tiger tables=X
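(For context: >(command) and <(command) are ksh process substitutions. The shell replaces each one with a /dev/fd path wired to the command, so exp and imp just see an ordinary filename. As a quick illustration of what exp actually receives, on systems where this is supported:

echo >(cat)

prints something like /dev/fd/4.)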
Workaround #3:
--------------
This is almost the same as above, but as a three-step implementation using
explicit UNIX named pipes without the split command, relying only on compress:
Export command:
1) Make the pipe
mknod /tmp/exp_pipe p
2) Compress in background
compress < /tmp/exp_pipe > export.dmp.Z &
-or-
cat /tmp/exp_pipe | compress > output.Z &
-or-
cat /tmp/exp_pipe > output.file &
3) Export to the pipe
exp file=/tmp/exp_pipe userid=scott/tiger tables=X
Import command:
1) Make the pipe
mknod /tmp/imp_pipe p
2) uncompress in background
uncompress < export.dmp.Z > /tmp/imp_pipe &
-or-
cat output_file > /tmp/imp_pipe &
3) Import through the pipe
imp file=/tmp/imp_pipe userid=scott/tiger tables=X
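Put together, the compress path of workaround #3 runs end to end like this (a sketch using the same scott/tiger and table X as the examples above):

mknod /tmp/exp_pipe p
compress < /tmp/exp_pipe > export.dmp.Z &
exp file=/tmp/exp_pipe userid=scott/tiger tables=X
wait        # let the background compress drain the pipe and close the archive
rm -f /tmp/exp_pipe        # named pipes persist until removed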
12-06-2002 08:01 AM
Re: file too large
BTW, I used workaround three.
12-06-2002 02:50 PM
Re: file too large
(Although you should learn the options in the previous message; they are worthwhile when you have very large exports and not enough disk space for the dump.)
Brian
12-06-2002 10:26 PM
Re: file too large
Maybe you are facing the 2 gig limit on the OS!
I did something similar years ago with my Oracle 7 database exports. Maybe you can find some hints in it.
============================================================
The 2 gig limit will apply IF you let export write directly to the file. I use
export to a PIPE and have compress and split read the pipe. The result is a
couple of 500 meg compressed files that constitute the export. At 500 meg, any
utility can deal with these files and I can move them around more easily.
Here is the csh script I use to show you how it is done. It does a full export
and then tests the integrity of the export by doing a full import with show=y.
That gives me a file with all of my source code and DDL to boot.
#!/bin/csh -vx
# UID "/" means an OS-authenticated login
setenv UID /
# dump basename stamped with day-of-year and year, e.g. exp.340_2002.dmp
setenv FN exp.`date +%j_%Y`.dmp
# named pipe that exp writes into and gzip reads from
setenv PIPE /tmp/exp_tmp_ora8i.dmp
# maximum size of each split piece
setenv MAXSIZE 500m
setenv EXPORT_WHAT "full=y COMPRESS=n"
echo $FN
cd /nfs/atc-netapp1/expbkup_ora8i
ls -l
# clear out the previous run's artifacts
rm expbkup.log export.test exp.*.dmp* $PIPE
mknod $PIPE p
date > expbkup.log
# reader: compress the pipe and split the stream into 500m chunks
( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &
#split -b $MAXSIZE $PIPE $FN. &
# writer: export into the pipe
exp userid=$UID buffer=20000000 file=$PIPE $EXPORT_WHAT >>& expbkup.log
date >> expbkup.log
# integrity test: reassemble the pieces through gunzip and run imp show=y
date > export.test
cat `echo $FN.* | sort` | gunzip > $PIPE &
#cat `echo $FN.* | sort` > $PIPE &
imp userid=sys/o8isgr8 file=$PIPE show=y full=y >>& export.test
date >> export.test
tail expbkup.log
tail export.test
ls -l
rm -f $PIPE
------------ eof -------------------------
============================================================
Otherwise, if you are running an Oracle 8i database, there is already a built-in solution: the filesize parameter lets exp spread the dump across multiple files. Try:
============================================================
#!/bin/sh
LOG_PATH=/BACKUP/export/logfiles
ORACLE_HOME=/d01/app/oracle/product/8.1.7
DMP_PATH1=/backup1/export
#DMP_PATH2=/backup2/export
ACC_PASS=system/manager
export dt=`date +%Y-%m%d`
# filesize=1024M makes exp roll over to the next file in the list
# whenever the current dump file reaches 1GB
$ORACLE_HOME/bin/exp $ACC_PASS filesize=1024M \
  file=\($DMP_PATH1/cmtdbexp"$dt"FULLa.dmp,$DMP_PATH1/cmtdbexp"$dt"FULLb.dmp,$DMP_PATH1/cmtdbexp"$dt"FULLc.dmp,$DMP_PATH1/cmtdbexp"$dt"FULLd.dmp,$DMP_PATH1/cmtdbexp"$dt"FULLe.dmp\) \
  buffer=409600 log=$LOG_PATH/cmtdbexp"$dt"FULL.log \
  full=Y grants=Y rows=Y compress=N direct=n
#YD-12/01/1999
============================================================
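For completeness, imp accepts the same multi-file list and filesize parameter, so the corresponding import would look roughly like this (a sketch with shortened filenames; substitute the real dump names produced by the script above):

imp system/manager filesize=1024M file=\(a.dmp,b.dmp,c.dmp\) full=Y log=impFULL.log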
Hope this helps
Best Regards
Yogeeraj
12-10-2002 11:05 AM
Re: file too large
For 10.20 and 11.00 you need the largefiles option in /etc/fstab, and you will need to re-create the filesystems.
For 11.11 you'll also need to recreate the filesystems, but you need not modify /etc/fstab.
For Ignite transfers, remember that files larger than 2GB don't get built into make_tape_recovery or Ignite golden images. You need to handle those files separately.
Don't forget to specify largefiles in the Ignite profile for the target systems.
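As a hedged illustration (the device and mount point here are hypothetical), on VxFS you can check a filesystem and recreate it with large-file support like this:

fsadm -F vxfs /data        # reports largefiles or nolargefiles
newfs -F vxfs -o largefiles /dev/vg01/rlvol4
/dev/vg01/lvol4 /data vxfs delaylog,largefiles 0 2        # /etc/fstab entry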
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com