Data export problem (Operating System - HP-UX)
12-12-2002 12:24 AM
I have a customer who has an HP server running HP-UX 10.20. The file system type is HFS and the application he is running is Oracle 7.1.3.
He cannot export his data to tape. The export procedure aborts when the export file reaches a size of approximately 2.15 GB, with an error message that it cannot write to the export file, while the file system assigned for exporting is 3.7 GB. I have checked some HP-UX documentation and understood that HP-UX 10.20 and above can permit file sizes of more than 2 GB. So what seems to be the problem?
Regards
12-12-2002 12:34 AM
Re: Data export problem
kaps
12-12-2002 12:43 AM
Re: Data export problem
Why not export directly to tape? Give it the device file as the target and it should work.
exp full=y file=/dev/rmt/0m parfile=export.parfile
imp full=y file=/dev/rmt/0m parfile=import.parfile
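To verify what landed on the tape without actually importing anything, imp's show option can be used (a small sketch; show=y only lists the dump contents):
imp full=y file=/dev/rmt/0m show=y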
If you have the Oracle documentation for 7.1.3, look at the ORACLE7 Server for UNIX / Administrator's Reference Guide, page 3-somewhere (afaik).
Rgds
Alexander M. Ermes
12-12-2002 12:52 AM
Re: Data export problem
I have attached a document which shows how to do this.
If you are exporting to a file system, you must check whether the file system is largefiles-enabled.
On 10.20 with largefiles enabled, the maximum file size can be 128GB.
If your file system is not largefiles-enabled, you can run:
/usr/sbin/fsadm -F hfs -o largefiles /dev/vg02/lvol1
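To check the current state first (a sketch; run without -o, fsadm reports whether the filesystem is largefiles or nolargefiles):
/usr/sbin/fsadm -F hfs /dev/vg02/lvol1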
Thanks
12-12-2002 12:54 AM
Re: Data export problem
So use UNIX pipes to export the data to a file larger than 2GB; check the previous posting's attachment. The basic idea is sketched below.
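(A minimal sketch of the pipe approach, assuming compress is used and /tmp/exp_pipe as the pipe name; the userid is a placeholder:)
mknod /tmp/exp_pipe p
compress < /tmp/exp_pipe > /yourfs/export.dmp.Z &
exp userid=system/manager full=y file=/tmp/exp_pipe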
12-12-2002 01:11 AM
Re: Data export problem
The problem occurs before reaching the stage of actually backing up to tape. The export file, if created fully, will be compressed before it is sent to tape using tar.
To Alex,
It has been suggested to us to export directly to tape, but I have to compress the big dump file first, so can the command that you sent be altered to take compression into account? What worries me is that at a certain stage the export file cannot be written to, as the error message says.
Regards
12-12-2002 01:13 AM
Re: Data export problem
Oracle 7 specific documentation for HP-UX:
http://docs.oracle.com/database_mp_7.html
If you created the filesystem without specifying the -o largefiles option, your filesystem does not support files larger than 2 GB.
You can check this in /etc/fstab, where you will see largefiles if large-file support is enabled.
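An entry with the flag set would look something like this (the volume and mount point are hypothetical):
/dev/vg02/lvol1 /oradump hfs largefiles 0 2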
In the previous doc, Oracle says that fsadm will let you convert a filesystem to large-file support, but it is documented as a command for HP-UX 11.0, so you should check the fsadm man page.
If you have large-file support, however, I am not aware of an issue with exp, but I am using Oracle 8i...
Hope this helps,
FiX
12-12-2002 01:22 AM
Re: Data export problem (Solution)
Doc ID: Note:1057099.6
Subject: Unable to export when export file grows larger than 2GB
Type: PROBLEM
Status: PUBLISHED
Content Type: TEXT/PLAIN
Creation Date: 19-AUG-1998
Last Revision Date: 28-JAN-2002
Problem Description:
====================
You are attempting to perform a large export. When the export file grows beyond 2GB, the export fails with the following errors reported in the export log file:
EXP-00015: error on row of table, column, datatype
EXP-00002: error in writing to export file
EXP-00000: Export terminated unsuccessfully
Examine the file size of the export dump file. It should be approximately 2147483647 bytes, or 2.1GB. This is because prior to 8.1.3 there is no large file support for the Oracle Import, Export, or SQL*Loader utilities.

Search Words:
=============
2G EXPORT EXP IMPORT IMP GIGABYTES

Solution Description:
=====================
This is a restriction of the Oracle utilities as of the time this article was published. There is some confusion over the >2GB patch released by Oracle, which allows datafiles to be >2GB. This patch and file size apply only to the RDBMS itself, not its utilities. However, some workarounds are available.

Solution Explanation:
=====================
The Oracle export dump files are still restricted to less than 2GB, as specified in the product documentation. The same holds true for import files and SQL*Loader data files. Here are some workarounds for exporting data that results in dump files of a size >2GB:

Workaround #1:
--------------
Investigate whether there is a way to split up the export at the schema level. Perhaps you can export the schema with the highest number of objects in a separate export in order to fit under the 2GB limit. Also, investigate whether certain large tables can be exported separately.

Workaround #2:
--------------
!!! IMPORTANT: THESE EXAMPLES ONLY WORK IN KORN SHELL (KSH) !!!
Use the UNIX pipe and split commands.
Export command:
echo|exp file=>(split -b 1024m - expdmp-) userid=scott/tiger tables=X
Note: You can add any "exp" parameters. This works only in ksh and has been tested on Sun Solaris 5.5.1.
Import command:
echo|imp file=<(cat expdmp-*) userid=scott/tiger tables=X
Splitting and compressing at the same time:
Export command:
echo|exp file=>(compress|split -b 1024m - expdmp-) userid=scott/tiger tables=X
Import command:
echo|imp file=<(cat expdmp-*|zcat) userid=scott/tiger tables=X

Workaround #3:
--------------
This is almost the same as above, but in a three-step implementation using explicit UNIX pipes without the split command, relying only on compress.
Export command:
1) Make the pipe:
mknod /tmp/exp_pipe p
2) Compress in the background:
compress < /tmp/exp_pipe > export.dmp.Z &
-or- cat /tmp/exp_pipe | compress > output.Z &
-or- cat /tmp/exp_pipe > output.file &
3) Export to the pipe:
exp file=/tmp/exp_pipe userid=scott/tiger tables=X
Import command:
1) Make the pipe:
mknod /tmp/imp_pipe p
2) Uncompress in the background:
uncompress < export.dmp.Z > /tmp/imp_pipe &
-or- cat output_file > /tmp/imp_pipe &
3) Import through the pipe:
imp file=/tmp/imp_pipe userid=scott/tiger tables=X
--------------------------------------------------------------------------------
Copyright (c) 1995,2000 Oracle Corporation. All Rights Reserved. Legal Notices and Terms of Use.
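As a concrete illustration of workaround #2 for this case -- a 2.5 GB export split into 1 GB pieces (a sketch, assuming ksh; scott/tiger is a placeholder userid):
echo|exp file=>(split -b 1024m - exp.dmp.) userid=scott/tiger full=y
split appends its own suffixes, so the pieces come out as exp.dmp.aa, exp.dmp.ab, exp.dmp.ac, and the import side reads them back with file=<(cat exp.dmp.*).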
12-12-2002 01:23 AM
Re: Data export problem
Exporting to tape is done by tar within a batch file, after compressing the exported file created in the file system. Anyway, can you tell me how to check whether the file system is largefiles-enabled?
12-12-2002 01:29 AM
Re: Data export problem
# fsadm -F hfs /dev/vg00/lvol7
If you are using tar to write it to a tape, then there is a limitation there too: tar and cpio have 2GB limits.
Get GNU tar from the HP porting centre:
http://hpux.connect.org.uk/hppd/hpux/Gnu/tar-1.13.25/
The GNU version of tar supports archiving files greater than 2GB.
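Once installed it is used like the standard tar (a sketch; the ported GNU tar typically lands under /usr/local/bin, and /dev/rmt/0m is assumed for the drive):
/usr/local/bin/tar -cvf /dev/rmt/0m export.dmp.Z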
Thanks
12-12-2002 01:41 AM
Re: Data export problem
1- Check if your FS accepts largefiles:
a- fsadm -F hfs /dev/vgxx/lvxx
b- try to write a large file: dd if=/dev/rdsk/c0t0d0 of=/yourfs/4gbfile bs=1024k count=4096
2- Try exporting to a target with no file-size limit: exp system full=y file=/dev/null volsize=0, or exp system full=y file=/dev/rmt/0m volsize=0
3- Check the ulimit (see the example below). See man sh-posix.
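For the ulimit check (a sketch; in the POSIX shell, ulimit -f reports the maximum file size in 512-byte blocks):
$ ulimit -f
unlimited
A finite value such as 4194303 (roughly 2 GB) would mean the shell itself caps the size of the export file.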
12-12-2002 01:56 AM
Re: Data export problem
Below is a quote from my notes:
Normally, you would export to a device that does not support seeking, such as a tape (not recommended; really slow), or to a pipe.
Why not use compression? It will considerably cut down on the size.
I myself use both compression AND split to turn my export into many files of manageable size (500MB is my chosen size). You could just use split and not compress if you want.
Basically, you would create a pipe in the OS via:
$ mknod somefilename p
and then export to that pipe. You would set up another process in the background that 'eats' the contents of this pipe and puts it somewhere. I use split; you could use 'cat' to just put it into another file (if cat supports files >2GB -- that's the problem here: most utilities do not, unless they use a special file I/O API for 2GB file support).
Here is a script you can use as a template. Yes, it uses compression, but you can take that out. It's here to show you one method of doing this.
------------------------------
#!/bin/csh -vx
setenv UID /
setenv FN exp.`date +%j_%Y`.dmp
setenv PIPE /tmp/exp_tmp_ora8i.dmp
setenv MAXSIZE 500m
setenv EXPORT_WHAT "full=y COMPRESS=n"
echo $FN
cd /nfs/atc-netapp1/expbkup_ora8i
ls -l
rm expbkup.log export.test exp.*.dmp* $PIPE
mknod $PIPE p
date > expbkup.log
( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &
# uncomment this to just SPLIT the file, not compress and split
#split -b $MAXSIZE $PIPE $FN. &
exp userid=$UID buffer=20000000 file=$PIPE $EXPORT_WHAT >>& expbkup.log
date >> expbkup.log
date > export.test
cat `echo $FN.* | sort` | gunzip > $PIPE &
# uncomment this to just SPLIT the file, not compress and split
#cat `echo $FN.* | sort` > $PIPE &
imp userid=sys/o8isgr8 file=$PIPE show=y full=y >>& export.test
date >> export.test
tail expbkup.log
tail export.test
ls -l
rm -f $PIPE
--------------------------------------------------
This also always does an 'integrity' check of the export right after it is done, using an import with show=y; that also shows how to use these split files with import.
--------------------------------------------------
!!!
cat `echo $FN.* | sort` | gunzip > $PIPE &
sorts the filenames and sends them to cat, which feeds them to gunzip in the right order.
Hope this helps!
Regards
Yogeeraj
12-12-2002 01:59 AM
Re: Data export problem
12-12-2002 02:05 AM
Re: Data export problem
We use the following scenario:
# mknod exp_pipe.dmp p
# nohup compress < exp_pipe.dmp > databasedumpfile.dmp.Z &
# nohup $ORACLE_HOME/bin/exp user/password file=exp_pipe.dmp otherexpparameters &
Chris
12-12-2002 04:46 AM
Re: Data export problem
Please could you send me step-by-step commands for workarounds 2 & 3 that you sent in your reply above? Supposing the size of the tables I want to export is 2.5GB, and I want to create split dump files of 1GB with the name exp.dmp, what would be the names of the eventual split files? Actually, I do not really understand the syntax of the exp command, or what you mean by (userid=scott/tiger tables=X).
Regards
12-12-2002 05:42 AM
Re: Data export problem
12-12-2002 05:50 AM
Re: Data export problem
What about doing a first export with no data?
exp system/xxxx file=struc_exp.dmp parfile=exp_no_rows.parfile
--------------------------
sample exp_no_rows.parfile:
buffer=1000000
full=yes
compress=y
grants=y
indexes=y
rows=n
constraints=y
---------------------------------
then export user by user:
exp system/xxxx file=scott_exp.dmp owner=scott parfile=xyz.parfile
(exp's parameter for a user-level export is owner=)
or export table by table:
exp system/xxxx file=tables_exp.dmp parfile=exp_tables.parfile
--------------------------
sample exp_tables.parfile (note: exp does not accept full=yes together with tables=):
buffer=1000000
compress=y
grants=y
indexes=y
rows=y
constraints=y
tables=(a1,a2,a3)
--------------------------------
That way you can keep your export files smaller than 2 GB. But the tape export should be able to handle more than 2 GB.
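On the restore side the same split applies (a sketch, assuming the file names above; fromuser/touser are standard imp parameters):
imp system/xxxx file=struc_exp.dmp full=y
imp system/xxxx file=tables_exp.dmp fromuser=scott touser=scott tables=(a1,a2,a3)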
Rgds
Alexander M. Ermes
12-14-2002 11:19 PM
Re: Data export problem
Thank you all. Ian's workaround #3 worked perfectly. Points will be assigned accordingly.
Regards.