
Data export problem

 
radi_1
Frequent Advisor

Data export problem

Hi
I have a customer who has an HP server running HP-UX 10.20. The file system type is HFS and the application he is running is Oracle 7.1.3.
He cannot export his data to tape. The export procedure aborts when the export file reaches a size of approximately 2.15 GB, with an error message saying it cannot write to the export file,
and the file system assigned for the export is 3.7 GB. I have checked some HP-UX documentation
and understood that HP-UX 10.20 and above can
permit file sizes of more than 2 GB. So what seems to be the problem?
Regards
never take simple matters for granted
KapilRaj
Honored Contributor

Re: Data export problem

tar has a limitation of 2 GB. Is that the issue? NFS also has a limitation of 2 GB.

kaps
Nothing is impossible
Alexander M. Ermes
Honored Contributor

Re: Data export problem

Hi there.
Why not export directly to tape?
Give it the device file as target and it should work.

exp full=y file=/dev/rmt/0m parfile=export.parfile

imp full=y file=/dev/rmt/0m parfile=import.parfile
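
If you do not already have an export.parfile, it is just ordinary exp parameters, one per line. A hypothetical minimal example for a full export (adjust to your own needs):

--------------------------
buffer=1000000
full=y
grants=y
indexes=y
rows=y
constraints=y
--------------------------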

If you have the Oracle documentation for 7.1.3, look at the ORACLE7 Server for Unix / Administrator's Reference Guide, page 3-somewhere (afaik).
Rgds
Alexander M. Ermes
.. and all these memories are going to vanish like tears in the rain! final words from Rutger Hauer in "Blade Runner"
T G Manikandan
Honored Contributor

Re: Data export problem

If you are exporting to tape using the Oracle exp utility, then you should do the export through pipes.

I have attached a document which shows how to perform the same.


If you are exporting to a file system then you must check whether your file system is largefiles enabled.

On 10.20 with large files enabled the maximum file size can be 128GB.

If your file system is not largefiles enabled, you can convert it with

/usr/sbin/fsadm -F hfs -o largefiles /dev/vg02/lvol1
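
Note that, as far as I remember, an HFS file system has to be unmounted before fsadm can convert it, so the whole sequence would be roughly like this (assuming the export file system is mounted as /export on /dev/vg02/lvol1 - adjust to your layout):

umount /export
/usr/sbin/fsadm -F hfs -o largefiles /dev/vg02/lvol1
mount /export
/usr/sbin/fsadm -F hfs /dev/vg02/lvol1     # should now report largefiles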

Thanks
T G Manikandan
Honored Contributor

Re: Data export problem

I am not sure whether the old Oracle exp utility supports exports of more than 2GB.

So use UNIX pipes to export the data to a file that is greater than 2GB;
check the attachment in my previous posting. In outline, the approach looks like the sketch below.
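
A minimal sketch (the attached document may differ in detail; pipe name and dump path are only examples): create a named pipe, compress from it in the background, and point exp at the pipe:

mknod /tmp/exp_pipe p
compress < /tmp/exp_pipe > /export/full.dmp.Z &
exp system/xxxx full=y file=/tmp/exp_pipe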


radi_1
Frequent Advisor

Re: Data export problem

To Kapil,
The problem occurs before we reach the stage of actually backing up to tape. The export file, once created in full, is compressed before it is sent to tape using tar.
To Alex,
It has been suggested to us to export directly to tape, but I have to compress the big dump file first, so can the command that you sent be altered to take compression into account? What worries me is that at a certain stage the export file can no longer be written to, as the error message says.
Regards
never take simple matters for granted
F. X. de Montgolfier
Valued Contributor

Re: Data export problem

Hi,

Oracle 7 specific documentation for HP-UX:
http://docs.oracle.com/database_mp_7.html

If you created the filesystem without specifying the -o largefiles option, your filesystem does not support files larger than 2 GB.

You can check this in /etc/fstab, where you will see largefiles if large file support is enabled.
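
For example, an entry with large file support enabled would look something like this (device and mount point here are only placeholders):

/dev/vg02/lvol1 /export hfs defaults,largefiles 0 2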

In the previous doc, Oracle says that fsadm will let you convert a filesystem to large file support, but it is documented as a command for HP-UX 11.0, so you should check the fsadm man page.

If you have large file support, however, I am not aware of any issue with exp, but I am using Oracle 8i...

Hope this helps,

FiX
Ian Lochray
Respected Contributor
Solution

Re: Data export problem

The Oracle exp utility itself has a 2Gb limitation prior to 8.1.3. Here is an Oracle Metalink article that explains the limitation and the workarounds.



Doc ID: Note:1057099.6
Subject: Unable to export when export file grows larger than 2GB
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 19-AUG-1998
Last Revision Date: 28-JAN-2002


Problem Description:
====================
You are attempting to perform a large export. When the export file grows beyond 2GB, the export fails with the following errors reported in the export log file:

EXP-00015: error on row of table , column , datatype
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully

Examine the file size of the export dump file. It should be approximately 2147M or 2.1G. This is because prior to 8.1.3 there is no large file support for the Oracle Import, Export, or SQL*Loader utilities.

Search Words:
=============
2G EXPORT EXP IMPORT IMP GIGABYTES

Solution Description:
=====================
This is a restriction of the Oracle utilities as of the time this article was published. There is some confusion over the >2GB patch released by Oracle which allows datafiles to be >2GB. That patch and file size only apply to the RDBMS itself, not its utilities. However, some workarounds are available.

Solution Explanation:
=====================
The Oracle export dump files are still restricted to less than 2GB, as specified in the product documentation. The same holds true for import files and SQL*Loader data files. Here are some workarounds for exporting data that results in dump files of a size >2GB:

Workaround #1:
--------------
Investigate whether there is a way to split up the export at the schema level. Perhaps you can export the schema with the highest number of objects in a separate export in order to fit under the 2GB limit. Also, investigate whether certain large tables can be exported separately.

Workaround #2:
--------------
!!! IMPORTANT: THESE EXAMPLES ONLY WORK IN KORN SHELL (KSH) !!!

Use the UNIX pipe and split commands.

Export command:
echo|exp file=>(split -b 1024m - expdmp-) userid=scott/tiger tables=X

Note: You can add any "exp" parameters. This works only in ksh and has been tested on Sun Solaris 5.5.1.

Import command:
echo|imp file=<(cat expdmp-*) userid=scott/tiger tables=X

Splitting and compressing at the same time:

Export command:
echo|exp file=>(compress|split -b 1024m - expdmp-) userid=scott/tiger tables=X

Import command:
echo|imp file=<(cat expdmp-*|zcat) userid=scott/tiger tables=X

Workaround #3:
--------------
This is almost the same as above, but in a three-step implementation using explicit UNIX pipes without the split command, relying only on compress.

Export command:
1) Make the pipe:
   mknod /tmp/exp_pipe p
2) Compress in the background:
   compress < /tmp/exp_pipe > export.dmp.Z &
   -or-
   cat /tmp/exp_pipe | compress > output.Z &
   -or-
   cat /tmp/exp_pipe > output.file &
3) Export to the pipe:
   exp file=/tmp/exp_pipe userid=scott/tiger tables=X

Import command:
1) Make the pipe:
   mknod /tmp/imp_pipe p
2) Uncompress in the background:
   uncompress < export.dmp.Z > /tmp/imp_pipe &
   -or-
   cat output_file > /tmp/imp_pipe &
3) Import through the pipe:
   imp file=/tmp/imp_pipe userid=scott/tiger tables=X

radi_1
Frequent Advisor

Re: Data export problem

To T.G.,
Exporting to tape is done by tar within a batch file, after compressing the export file created in the file system. Anyway, can you tell me how to check whether the file system is largefiles enabled?
never take simple matters for granted
T G Manikandan
Honored Contributor

Re: Data export problem

Check for largefiles using:

#fsadm -F hfs /dev/vg00/lvol7
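
If I remember right, it simply reports whether the flag is set, something like:

# fsadm -F hfs /dev/vg00/lvol7
largefiles

(or nolargefiles if the file system was created without it)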


If you are using tar to write it to tape, there is a limitation there as well.

tar and cpio have 2GB limitations.
Get GNU tar from the HP-UX porting centre:

http://hpux.connect.org.uk/hppd/hpux/Gnu/tar-1.13.25/

The GNU version of tar supports archiving files greater than 2GB.
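
Once installed (it usually ends up under /usr/local/bin - check your depot), writing the compressed dump to tape would look roughly like this, with /dev/rmt/0m as the tape device and /export/full.dmp.Z standing in for your compressed dump (both only examples):

/usr/local/bin/tar -cvf /dev/rmt/0m /export/full.dmp.Z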

Thanks
Carlos Fernandez Riera
Honored Contributor

Re: Data export problem

I think you have not yet pinned down where the issue is: is it the FS, is it the exp utility, maybe an unused flag, or a misconfiguration of ulimit?

1- Check if your FS accepts largefiles:
a- fsadm -F hfs /dev/vgxx/lvxx
b- try to write a large file: dd if=/dev/rdsk/c0t0d0 of=/yourfs/4gbfile bs=1024k count=4096

2- Check the exp utility itself: exp system full=y file=/dev/null volsize=0. Also exp system full=y file=/dev/rmt/0m volsize=0

3- Check the ulimit. See man sh-posix.
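
For example, from a POSIX shell:

ulimit -f              # current file size limit, in 512-byte blocks (or "unlimited")
ulimit -f unlimited    # lift the limit for this shell, if the system allows it

If ulimit -f reports a value around 4194304 blocks (2GB), that alone would explain the write failure.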


unsupported
Yogeeraj_1
Honored Contributor

Re: Data export problem

hi,

Below a quote from my notes:

Normally, you would export to a device that does not support seeking, such as a tape (not recommended, really slow), or to a pipe.

Why not use compression? It will cut down the size considerably.

I myself use both compression AND split to make my export into many manageably sized files (500 MB is my chosen size). You could just use split and not compress if you want.

Basically, you would create a pipe in the OS via:

$ mknod somefilename p

and then export to that pipe. You would set up another process in the background that 'eats' the contents of this pipe and puts it somewhere. I use split; you could use 'cat' to just put it into another file (if cat supports files > 2 GB -- that's the problem here: most utilities do not, you need a special file I/O API for 2 GB file support).

Here is a script you can use as a template. Yes, it uses compression, but you can take that out. It is here to show you one method of doing this.

------------------------------
#!/bin/csh -vx

setenv UID /
setenv FN exp.`date +%j_%Y`.dmp
setenv PIPE /tmp/exp_tmp_ora8i.dmp

setenv MAXSIZE 500m
setenv EXPORT_WHAT "full=y COMPRESS=n"

echo $FN

cd /nfs/atc-netapp1/expbkup_ora8i
ls -l

rm expbkup.log export.test exp.*.dmp* $PIPE
mknod $PIPE p

date > expbkup.log
( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &

# uncomment this to just SPLIT the file, not compress and split
#split -b $MAXSIZE $PIPE $FN. &

exp userid=$UID buffer=20000000 file=$PIPE $EXPORT_WHAT >>& expbkup.log
date >> expbkup.log


date > export.test
cat `echo $FN.* | sort` | gunzip > $PIPE &

# uncomment this to just SPLIT the file, not compress and split
#cat `echo $FN.* | sort` > $PIPE &

imp userid=sys/o8isgr8 file=$PIPE show=y full=y >>& export.test
date >> export.test

tail expbkup.log
tail export.test

ls -l
rm -f $PIPE
--------------------------------------------------

This also always does an 'integrity' check of the export right after it is done, using an import with show=y, which also shows how to use these split files with import.
--------------------------------------------------

!!!
cat `echo $FN.* | sort` | gunzip > $PIPE &


sorts the filenames and sends them to cat, which pipes them to gunzip in the right order.


Hope this helps!

Regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave. (Calvin Coolidge)
Carlos Fernandez Riera
Honored Contributor

Re: Data export problem

BTW, you don't need to compress the file before sending it to tape; tape drives compress, by default, all data written to them. Let the drive do the work.
unsupported
Christian Gebhardt
Honored Contributor

Re: Data export problem

Hi
we use the following scenario:

# mknod exp_pipe.dmp p
# nohup compress < exp_pipe.dmp > databasedumpfile.dmp.Z &
# nohup $ORACLE_HOME/bin/exp user/password file=exp_pipe.dmp otherexpparameters &
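
The compressed dump can then go to tape the way you already do; roughly like this (tape device is only an example):

# wait
# rm exp_pipe.dmp
# tar cvf /dev/rmt/0m databasedumpfile.dmp.Z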


Chris
radi_1
Frequent Advisor

Re: Data export problem

To Ian Lochray,
Could you please send me step-by-step commands for workarounds 2 & 3 from your reply above? Supposing the size of the tables I want to export is 2.5GB and I want to create split dump files of 1GB each with a name of (exp.dmp), what would the names of the eventual split files be? Actually, I do not really understand the syntax of the exp command and what you mean by (userid=scott/tiger tables=X).
Regards
never take simple matters for granted
Ian Lochray
Respected Contributor

Re: Data export problem

Oracle provided three workarounds in their note. The first one is to do multiple exports, each containing a subset of the schema's objects. I have never tried this option as it does not seem very sensible - you would need to create a list of all the database objects and list them on the exp command line. I always use method three - compressing the export via a pipe, as described by others in their earlier replies.
Alexander M. Ermes
Honored Contributor

Re: Data export problem

Hi there.
What about doing a first export with no data ?

exp system/xxxx file=struc_exp.dmp parfile=exp_no_rows.parfile
--------------------------
sample file :

buffer=1000000
full=yes
compress=y
grants=y
indexes=y
rows=n
constraints=y
---------------------------------
then export user by user

exp system/xxxx file=struc_exp.dmp owner=scott parfile=xyz.parfile


or export table by table

exp system/xxxx file=struc_exp.dmp parfile=exp_tables.parfile
--------------------------
sample tables.parfile

buffer=1000000
compress=y
grants=y
indexes=y
rows=y
constraints=y
tables=(a1,a2,a3)
--------------------------------

that way you can keep your export files smaller than 2 GB.
But the tape export should be able to handle more than 2 GB.
Rgds
Alexander M. Ermes
.. and all these memories are going to vanish like tears in the rain! final words from Rutger Hauer in "Blade Runner"
radi_1
Frequent Advisor

Re: Data export problem

Hi
Thank you all. Ian's workaround #3 worked perfectly. Points will be assigned accordingly.
Regards.
never take simple matters for granted