
export command

allan_48
Occasional Contributor

export command

Hi, I made an export of my database; I want to export my whole db. While exporting the database, the export process suddenly stopped with the error message below.
"EXP-00015: error on row 16234 of table BILL_IMAGES, column BI_IMAGE, datatype 24
EXP-00222:
System error message 27
EXP-00002: error in writing to export file
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully"
Since this error indicates a problem with the table BILL_IMAGES, how can I export my database excluding that table (or tables)?


thanks
regards
10 REPLIES
harry d brown jr
Honored Contributor

Re: export command



Did you run out of space in your filesystem during the export??

live free or die
harry d brown jr
Live Free or Die
RAC_1
Honored Contributor

Re: export command

HP-UX error 27 is EFBIG, "File too large".

I think the export is growing past the 2GB mark and your FS does not support files that large. Check it:

mount -p | grep "/FS"
Does it support largefiles?

If not, you will have to enable it and then try again.
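On HP-UX with OnlineJFS, fsadm can also report the setting and, for VxFS, turn it on in place; the mount point /u01 below is illustrative, substitute your own:

```shell
# Report largefiles status of a mounted VxFS filesystem:
fsadm -F vxfs /u01

# Enable largefiles in place if it is currently off:
fsadm -F vxfs -o largefiles /u01
```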
There is no substitute to HARDWORK
allan_48
Occasional Contributor

Re: export command

By the way, my Oracle instance resides on HP-UX 10.20. The filesystem I'm using for the export dump is NFS. When I created the filesystem I specified largefiles; the command I used is below:
"newfs -F vxfs -o largefiles /dev/lan01/rora7"

thanks
TwoProc
Honored Contributor

Re: export command

You've got a write error on the file. Nothing is wrong with table BILL_IMAGES (at least at this point).
You should probably export your database per schema (you can do several at once), and then do a second, no-rows export of the whole database.
Then, on import, bring in the whole database no-rows first, then one schema at a time. This goes a long way toward controlling the size of each file, and it lets you import multiple schemas at a time after the initial no-rows import. You'll gain speed and reduce your import time.
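A sketch of that two-pass layout using exp/imp; the credentials and schema names here are placeholders, not from the original post:

```shell
# Pass 1: structure of the whole database, no data (rows=n keeps it small)
exp system/manager full=y rows=n file=full_norows.dmp log=full_norows.log

# Pass 2: data, one schema (or a few) per dump file
exp system/manager owner=BILLING  file=billing.dmp  log=billing.log
exp system/manager owner=ACCOUNTS file=accounts.dmp log=accounts.log

# Import: structure first, then each schema's rows (ignore=y skips the
# "object already exists" errors left over from the no-rows pass)
imp system/manager full=y file=full_norows.dmp log=imp_norows.log
imp system/manager fromuser=BILLING touser=BILLING file=billing.dmp ignore=y
```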

Re: NFS: is it possible that even though the file system can handle large files, the NFS mount cannot? Also, I've seen many errors using NFS for large file transfers. It will usually be just small timeouts, but get enough of them and the whole process fails to complete with a write error. My basic advice is to avoid NFS: export to local disk, then copy the file over to the other machine. If the other machine is the one with the disk space, then install an Oracle client there (on the NFS host machine) and run the export from that machine, letting SQL*Net handle the traffic for you instead of NFS.
We are the people our parents warned us about --Jimmy Buffett
Victor Fridyev
Honored Contributor

Re: export command

Hi,

Which NFS version do you run, 2 or 3?
AFAIK version 2 is the default on HP-UX 10.20, and that version does not support the largefiles option.

HTH
Entities are not to be multiplied beyond necessity - RTFM
A. Clay Stephenson
Acclaimed Contributor

Re: export command

Even though you are writing to a filesystem that supports largefiles, that does not mean the application can handle large files, and some versions of exp definitely have this problem. You can work around it with a named pipe:

mkfifo -m 660 /tmp/mypipe

Now start a process attempting to read from that pipe:

cat < /tmp/mypipe > /xxx/yyy/zzz &
or
dd if=/tmp/mypipe of=/xxx/yyy/zzz &

Now start your exp but output to /tmp/mypipe.

Because exp is writing to a pipe rather than a regular file, it doesn't hit the same limitation, even though the output ultimately ends up in a file.

When you are finished, rm /tmp/mypipe.
If it ain't broke, I can fix that.
Bill Hassell
Honored Contributor

Re: export command

System error message is probably errno 27:

from /usr/include/sys/errno.h:
#define EFBIG 27 /* File too large */

from man 2 errno:
[EFBIG] File too large. The size of a file exceeded the maximum file size (for the file system) or ULIMIT was exceeded (see ulimit(2)), or a bad semaphore number in a semop() call (see semop(2)).

In other words, the file is too big for the destination file system. Your export will have to be directed to a local disk if it exceeds 2GB, since there is no large-file support over NFS in 10.20.
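To see errno 27 in action without filling a disk, here is a small demo (my own illustration, not from the thread): shrink the per-process file-size limit in a subshell and write past it.

```shell
#!/bin/sh
# Demonstrate errno 27 (EFBIG). The path /tmp/efbig_demo is illustrative.
rm -f /tmp/efbig_demo
(
  trap '' XFSZ    # ignore SIGXFSZ so the write fails with EFBIG
                  # ("File too large") instead of killing dd outright
  ulimit -f 1     # cap files written by this subshell at one block
  dd if=/dev/zero of=/tmp/efbig_demo bs=1k count=4
) || echo "dd stopped early: file too large, just like the export"
```

dd's write is cut short at the limit and reports a "File too large" error, which is exactly the condition exp surfaces as "System error message 27".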


Bill Hassell, sysadmin
A. Clay Stephenson
Acclaimed Contributor

Re: export command

Oops, I'm an idiot: I didn't notice the 10.20 NFS aspect of this problem, and that is definitely a killer. Since you mentioned Oracle 7.x in a related post, though, I suspect you will still need the named pipe workaround even on a local filesystem.
If it ain't broke, I can fix that.
allan_48
Occasional Contributor

Re: export command

The exported filesystem is on HP-UX 11i. I've tried to mount it on HP-UX 10.20 and got the same error. Is the solution the same (named pipe)?


thanks
Yogeeraj_1
Honored Contributor

Re: export command

hi,

Why not use compression? It will cut down the size considerably.

I prefer to use both compression AND split, so that the export ends up as many manageable-sized files (500MB is my chosen size). You could use split alone, without compression, if you want.

Basically, you create a pipe in the OS and then export to that pipe.

You also set up another process in the background that 'eats' the contents of this pipe and puts it on the NFS mount point. You can use either split or cat to write it into another file.

If you need a demo script, do let us know.
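Here is a minimal sketch of that pipe + gzip + split pipeline; the pipe name, directories, and chunk size are illustrative, and a stream of numbers stands in for the real exp writer:

```shell
#!/bin/sh
# Export through a pipe, compressing and splitting on the fly.
set -e
PIPE=/tmp/exp_demo_pipe
CHUNKS=/tmp/exp_demo_chunks        # in real use: your NFS mount point
rm -rf "$PIPE" "$CHUNKS"
mkdir -p "$CHUNKS"
mkfifo "$PIPE"

# Background reader: compress the pipe and cut it into 1 MB chunks
# (use something like -b 500m for a real dump):
gzip -c < "$PIPE" | split -b 1m - "$CHUNKS/expdat.dmp.gz." &

# Writer: in real use this is the export itself, e.g.
#   exp system/<password> full=y file="$PIPE" log=exp_full.log
# Here a stream of numbers stands in for the dump:
seq 1 200000 > "$PIPE"

wait                               # let gzip | split drain the pipe
rm -f "$PIPE"

# Later, reassemble the chunks back into one dump file:
cat "$CHUNKS"/expdat.dmp.gz.* | gunzip > /tmp/exp_demo_restored.dmp
```

The chunks sort lexically (.aa, .ab, ...), so a plain `cat` of the glob reassembles them in the right order before gunzip.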

regards
yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)