<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Data export problem in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862984#M96677</link>
    <description>BTW, you don't need to compress the file to send it to tape; tape drives compress, by default, all data written to them. Let the drive do the work.</description>
    <pubDate>Thu, 12 Dec 2002 09:59:17 GMT</pubDate>
    <dc:creator>Carlos Fernandez Riera</dc:creator>
    <dc:date>2002-12-12T09:59:17Z</dc:date>
    <item>
      <title>Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862972#M96665</link>
      <description>Hi,&lt;BR /&gt;I have a customer with an HP server running HP-UX 10.20. The file system type is HFS and the application he is running is Oracle 7.1.3.&lt;BR /&gt;He cannot export his data to tape. The export procedure aborts when the export file reaches a size of approximately 2.15 GB, with an error message that it cannot write to the export file,&lt;BR /&gt;and the file system assigned for exporting is 3.7 GB. I have checked some HP-UX documentation&lt;BR /&gt;and understood that HP-UX 10.20 and above&lt;BR /&gt;permit file sizes of more than 2 GB. So what seems to be the problem?&lt;BR /&gt;Regards&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Dec 2002 08:24:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862972#M96665</guid>
      <dc:creator>radi_1</dc:creator>
      <dc:date>2002-12-12T08:24:29Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862973#M96666</link>
      <description>tar has a limitation of 2 GB. Is that the issue? NFS also has a limitation of 2 GB.&lt;BR /&gt;&lt;BR /&gt;kaps</description>
      <pubDate>Thu, 12 Dec 2002 08:34:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862973#M96666</guid>
      <dc:creator>KapilRaj</dc:creator>
      <dc:date>2002-12-12T08:34:47Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862974#M96667</link>
      <description>Hi there.&lt;BR /&gt;Why not export directly to tape?&lt;BR /&gt;Give it the device file as target and it should work.&lt;BR /&gt;&lt;BR /&gt;exp full=y file=/dev/rmt/0m parfile=export.parfile&lt;BR /&gt;&lt;BR /&gt;imp full=y file=/dev/rmt/0m parfile=import.parfile&lt;BR /&gt;&lt;BR /&gt;If you have the Oracle documentation for 7.1.3, look at the ORACLE7 Server for Unix / Administrator's Reference Guide, page 3-something (afaik).&lt;BR /&gt;Rgds&lt;BR /&gt;Alexander M. Ermes</description>
      <pubDate>Thu, 12 Dec 2002 08:43:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862974#M96667</guid>
      <dc:creator>Alexander M. Ermes</dc:creator>
      <dc:date>2002-12-12T08:43:52Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862975#M96668</link>
      <description>If you are exporting to tape using the Oracle exp utility, then you should export through pipes.&lt;BR /&gt;&lt;BR /&gt;I have attached a document which shows how to perform this.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If you are exporting to a file system, then you must check whether your file system is largefiles enabled.&lt;BR /&gt;&lt;BR /&gt;On 10.20 with largefiles enabled the maximum file size can be 128 GB.&lt;BR /&gt;&lt;BR /&gt;If your file system is not largefiles enabled, then you can do&lt;BR /&gt;&lt;BR /&gt;/usr/sbin/fsadm -F hfs -o largefiles /dev/vg02/lvol1&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Thu, 12 Dec 2002 08:52:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862975#M96668</guid>
      <dc:creator>T G Manikandan</dc:creator>
      <dc:date>2002-12-12T08:52:00Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862976#M96669</link>
      <description>I am not sure whether the old Oracle exp utility supports exports of more than 2 GB.&lt;BR /&gt;&lt;BR /&gt;So use Unix pipes to export the data to a file which is greater than 2 GB.&lt;BR /&gt;Check the attachment in my previous posting.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Dec 2002 08:54:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862976#M96669</guid>
      <dc:creator>T G Manikandan</dc:creator>
      <dc:date>2002-12-12T08:54:47Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862977#M96670</link>
      <description>To Kapil,&lt;BR /&gt;The problem occurs before reaching the stage of&lt;BR /&gt;actually backing up to tape. The export file, if created fully, will be compressed before it is sent to tape using tar.&lt;BR /&gt;To Alex,&lt;BR /&gt;It has been suggested to us to export directly to tape, but I have to compress the big dump file first, so can the command that you sent be altered to take compression into account? But what worries me is that at a certain stage the export file cannot be written to, as the error message says.&lt;BR /&gt;Regards</description>
      <pubDate>Thu, 12 Dec 2002 09:11:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862977#M96670</guid>
      <dc:creator>radi_1</dc:creator>
      <dc:date>2002-12-12T09:11:11Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862978#M96671</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Oracle 7 specific documentation for HP-UX: &lt;BR /&gt;&lt;A href="http://docs.oracle.com/database_mp_7.html" target="_blank"&gt;http://docs.oracle.com/database_mp_7.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;If you created the filesystem without specifying the -o largefiles option, your filesystem does not support files larger than 2 GB.&lt;BR /&gt;&lt;BR /&gt;You can check this in /etc/fstab, where you will see largefiles if large file support is enabled.&lt;BR /&gt;&lt;BR /&gt;In the previous doc, Oracle says that fsadm will enable you to convert a filesystem to large file support, but it is specified as a command for HP-UX 11.0, so you should check the fsadm man page.&lt;BR /&gt;&lt;BR /&gt;If you have large file support, however, I am not aware of an issue with exp, but I am using Oracle 8i...&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;&lt;BR /&gt;FiX</description>
      <pubDate>Thu, 12 Dec 2002 09:13:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862978#M96671</guid>
      <dc:creator>F. X. de Montgolfier</dc:creator>
      <dc:date>2002-12-12T09:13:05Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862979#M96672</link>
      <description>The Oracle exp utility itself has a 2 GB limitation prior to 8.1.3. Here is an Oracle Metalink article that explains the limitation and the workarounds.&lt;BR /&gt;&lt;BR /&gt;Doc ID: Note:1057099.6&lt;BR /&gt;Subject: Unable to export when export file grows larger than 2GB&lt;BR /&gt;Type: PROBLEM&lt;BR /&gt;Status: PUBLISHED&lt;BR /&gt;Content Type: TEXT/PLAIN&lt;BR /&gt;Creation Date: 19-AUG-1998&lt;BR /&gt;Last Revision Date: 28-JAN-2002&lt;BR /&gt;&lt;BR /&gt;Problem Description:&lt;BR /&gt;====================&lt;BR /&gt;You are attempting to perform a large export. When the export file grows beyond 2GB, the export fails with the following errors reported in the export log file:&lt;BR /&gt;&lt;BR /&gt;EXP-00015: error on row of table, column, datatype&lt;BR /&gt;EXP-00002: error in writing to export file&lt;BR /&gt;EXP-00000: Export terminated unsuccessfully&lt;BR /&gt;&lt;BR /&gt;Examine the file size of the export dump file. It should be approximately 2147M or 2.1G. This is because prior to 8.1.3 there is no large file support for the Oracle Import, Export, or SQL*Loader utilities.&lt;BR /&gt;&lt;BR /&gt;Search Words:&lt;BR /&gt;=============&lt;BR /&gt;2G EXPORT EXP IMPORT IMP GIGABYTES&lt;BR /&gt;&lt;BR /&gt;Solution Description:&lt;BR /&gt;=====================&lt;BR /&gt;This is a restriction of the Oracle utilities as of the time this article was published. There is some confusion over the &amp;gt;2GB patch released by Oracle, which allows datafiles to be &amp;gt;2GB. This patch and file size only apply to the RDBMS itself, not its utilities. However, some workarounds are available.&lt;BR /&gt;&lt;BR /&gt;Solution Explanation:&lt;BR /&gt;=====================&lt;BR /&gt;The Oracle export dump files are still restricted to less than 2GB as specified in the product documentation. The same holds true for import files and SQL*Loader data files.&lt;BR /&gt;Here are some workarounds for exporting data that results in dump files of a size &amp;gt;2GB:&lt;BR /&gt;&lt;BR /&gt;Workaround #1:&lt;BR /&gt;--------------&lt;BR /&gt;Investigate to see if there is a way to split up the export at the schema level. Perhaps you can export the schema with the highest number of objects in a separate export in order to fit under the 2GB limit. Also, investigate whether certain large tables can be exported separately.&lt;BR /&gt;&lt;BR /&gt;Workaround #2:&lt;BR /&gt;--------------&lt;BR /&gt;!!! IMPORTANT: THESE EXAMPLES ONLY WORK IN KORN SHELL (KSH) !!!&lt;BR /&gt;Use the UNIX pipe and split commands:&lt;BR /&gt;&lt;BR /&gt;Export command:&lt;BR /&gt;echo|exp file=&amp;gt;(split -b 1024m - expdmp-) userid=scott/tiger tables=X&lt;BR /&gt;&lt;BR /&gt;Note: You can use any "exp" parameters. This works only in ksh and has been tested on Sun Solaris 5.5.1.&lt;BR /&gt;&lt;BR /&gt;Import command:&lt;BR /&gt;echo|imp file=&amp;lt;(cat expdmp-*) userid=scott/tiger tables=X&lt;BR /&gt;&lt;BR /&gt;Splitting and compressing at the same time:&lt;BR /&gt;&lt;BR /&gt;Export command:&lt;BR /&gt;echo|exp file=&amp;gt;(compress|split -b 1024m - expdmp-) userid=scott/tiger tables=X&lt;BR /&gt;&lt;BR /&gt;Import command:&lt;BR /&gt;echo|imp file=&amp;lt;(cat expdmp-*|zcat) userid=scott/tiger tables=X&lt;BR /&gt;&lt;BR /&gt;Workaround #3:&lt;BR /&gt;--------------&lt;BR /&gt;This is almost the same as above, but in a three-step implementation using explicit UNIX pipes without the split command, relying only on compress:&lt;BR /&gt;&lt;BR /&gt;Export command:&lt;BR /&gt;&lt;BR /&gt;1) Make the pipe&lt;BR /&gt;mknod /tmp/exp_pipe p&lt;BR /&gt;&lt;BR /&gt;2) Compress in background&lt;BR /&gt;compress &amp;lt; /tmp/exp_pipe &amp;gt; export.dmp.Z &amp;amp;&lt;BR /&gt;-or-&lt;BR /&gt;cat /tmp/exp_pipe | compress &amp;gt; output.Z &amp;amp;&lt;BR /&gt;-or-&lt;BR /&gt;cat /tmp/exp_pipe &amp;gt; output.file &amp;amp;&lt;BR /&gt;&lt;BR /&gt;3) Export to the pipe&lt;BR /&gt;exp file=/tmp/exp_pipe userid=scott/tiger tables=X&lt;BR /&gt;&lt;BR /&gt;Import command:&lt;BR /&gt;&lt;BR /&gt;1) Make the pipe&lt;BR /&gt;mknod /tmp/imp_pipe p&lt;BR /&gt;&lt;BR /&gt;2) Uncompress in background&lt;BR /&gt;uncompress &amp;lt; export.dmp.Z &amp;gt; /tmp/imp_pipe &amp;amp;&lt;BR /&gt;-or-&lt;BR /&gt;cat output_file &amp;gt; /tmp/imp_pipe &amp;amp;&lt;BR /&gt;&lt;BR /&gt;3) Import through the pipe&lt;BR /&gt;imp file=/tmp/imp_pipe userid=scott/tiger tables=X&lt;BR /&gt;&lt;BR /&gt;Copyright (c) 1995,2000 Oracle Corporation. All Rights Reserved. Legal Notices and Terms of Use.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Dec 2002 09:22:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862979#M96672</guid>
      <dc:creator>Ian Lochray</dc:creator>
      <dc:date>2002-12-12T09:22:46Z</dc:date>
    </item>
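The pipe-based Workaround #3 above can be sketched end to end. This is a minimal, portable demonstration of the plumbing, not the exact HP-UX session: `mkfifo` stands in for `mknod ... p`, `gzip` for HP-UX `compress`, and a `dd` of dummy bytes for the `exp` writer (all three substitutions are mine).

```shell
# Workaround #3 in miniature: the writer streams into a named pipe and a
# background compressor drains it, so no flat dump file ever hits the 2 GB cap.
# Assumptions: gzip stands in for compress, dd stands in for exp
# (on the real box step 3 would be: exp file=/tmp/exp_pipe userid=... full=y).
PIPE=/tmp/exp_pipe.$$
mkfifo "$PIPE"                                 # HP-UX equivalent: mknod /tmp/exp_pipe p
gzip -c "$PIPE" > /tmp/export.dmp.gz &         # compressor drains the pipe in background
dd if=/dev/zero of="$PIPE" bs=1024 count=64 2>/dev/null   # stand-in for exp writing its dump
wait                                           # let the compressor finish
gzip -t /tmp/export.dmp.gz && echo "pipe export OK"
rm -f "$PIPE"
```

Because the pipe never stores data on disk, the only file that grows is the compressed output, which is what keeps the export under the filesystem's per-file limit.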
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862980#M96673</link>
      <description>To T.G.,&lt;BR /&gt;Exporting to tape is done by tar within a batch&lt;BR /&gt;file, after compressing the exported file created in the file system. Anyway, can you tell me how to check whether the filesystem is largefiles enabled?</description>
      <pubDate>Thu, 12 Dec 2002 09:23:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862980#M96673</guid>
      <dc:creator>radi_1</dc:creator>
      <dc:date>2002-12-12T09:23:59Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862981#M96674</link>
      <description>Check for largefiles using&lt;BR /&gt;&lt;BR /&gt;#fsadm -F hfs /dev/vg00/lvol7&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If you are using tar to write it to tape, then there is a limitation there as well.&lt;BR /&gt;&lt;BR /&gt;tar and cpio have 2GB limitations.&lt;BR /&gt;Get GNU tar from the HP porting centre:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://hpux.connect.org.uk/hppd/hpux/Gnu/tar-1.13.25/" target="_blank"&gt;http://hpux.connect.org.uk/hppd/hpux/Gnu/tar-1.13.25/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;The GNU version of tar supports archiving files greater than 2GB.&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Thu, 12 Dec 2002 09:29:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862981#M96674</guid>
      <dc:creator>T G Manikandan</dc:creator>
      <dc:date>2002-12-12T09:29:58Z</dc:date>
    </item>
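The GNU tar route above is just the usual create/list cycle with a different binary. A tiny sketch, using trivial sizes and made-up file names; the HP-UX point is only that GNU tar (often installed as `gtar`) accepts members past 2 GB where the stock tar stops.

```shell
# Archive a (stand-in) dump piece and list the archive back to verify it.
# /tmp/piece.dmp and /tmp/exp.tar are illustrative names, not from the thread.
echo "dump piece" > /tmp/piece.dmp
tar -cf /tmp/exp.tar -C /tmp piece.dmp   # with GNU tar, piece.dmp could exceed 2 GB
tar -tf /tmp/exp.tar                     # lists the member: piece.dmp
rm -f /tmp/piece.dmp
```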
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862982#M96675</link>
      <description>I think you have not yet pinned down where the issue is: is it the FS, is it the exp utility, maybe an unused flag, or a misconfiguration of ulimit?&lt;BR /&gt;&lt;BR /&gt;1- Check if your FS accepts largefiles:&lt;BR /&gt;   a- fsadm -F hfs /dev/vgxx/lvxx&lt;BR /&gt;   b- try to write a large file:  dd if=/dev/rdsk/c0t0d0 of=/yourfs/4gbfile bs=1024k count=4096&lt;BR /&gt;&lt;BR /&gt;2- Test exp itself: exp system full=y file=/dev/null volsize=0. Also exp system full=y file=/dev/rmt/0m volsize=0&lt;BR /&gt;&lt;BR /&gt;3- Check the ulimit. See man sh-posix.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Dec 2002 09:41:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862982#M96675</guid>
      <dc:creator>Carlos Fernandez Riera</dc:creator>
      <dc:date>2002-12-12T09:41:09Z</dc:date>
    </item>
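Carlos's isolation steps can be sketched with a scaled-down, runnable version: confirm the shell's file-size ulimit, then prove the filesystem actually accepts a file of a known size. The 1 MB size and the /tmp/sizetest name are my stand-ins; on the real 10.20 box you would use bs=1024k count=4096 to push past the 2 GB mark.

```shell
# Step 3 first: what file size does the shell allow? Ideally "unlimited".
ulimit -f
# Step 1b scaled down: write a file of exact known size and measure it.
# A short count here would mean the ulimit or the filesystem cut the write off.
dd if=/dev/zero of=/tmp/sizetest bs=1024 count=1024 2>/dev/null
wc -c /tmp/sizetest          # 1024 blocks of 1024 bytes = 1048576 bytes
```

If the full-size dd on the real system stops near 2.1 GB with the filesystem far from full, the limit is the filesystem (or ulimit), not Oracle; if dd succeeds, the exp utility itself is the bottleneck.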
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862983#M96676</link>
      <description>hi,&lt;BR /&gt;&lt;BR /&gt;Below is a quote from my notes:&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;Normally, you would export to a device that does not support seeking, such as a tape (not recommended, really slow) or a pipe.&lt;BR /&gt;&lt;BR /&gt;Why not use compression? It will considerably cut down on the size.&lt;BR /&gt;&lt;BR /&gt;I myself use both compression AND split to turn my export into many manageably sized files (500 MB is my chosen size). You could just use split and not compress if you want.&lt;BR /&gt;&lt;BR /&gt;Basically, you would create a pipe in the OS via:&lt;BR /&gt;&lt;BR /&gt;$ mknod somefilename p&lt;BR /&gt;&lt;BR /&gt;and then export to that pipe. You would set up another process in the background that 'eats' the contents of this pipe and puts it somewhere. I use split; you could use 'cat' to just put it into another file (if cat supports files &amp;gt;2 gig -- that's the problem here, most utilities do not; you need to use a special file IO API for 2 gig file support).&lt;BR /&gt;&lt;BR /&gt;Here is a script you can use as a template. Yes, it uses compression, but you can take that out. It's here to show you one method of doing this.&lt;BR /&gt;&lt;BR /&gt;------------------------------&lt;BR /&gt;#!/bin/csh -vx&lt;BR /&gt;&lt;BR /&gt;setenv UID /&lt;BR /&gt;setenv FN exp.`date +%j_%Y`.dmp&lt;BR /&gt;setenv PIPE /tmp/exp_tmp_ora8i.dmp&lt;BR /&gt;&lt;BR /&gt;setenv MAXSIZE 500m&lt;BR /&gt;setenv EXPORT_WHAT "full=y COMPRESS=n"&lt;BR /&gt;&lt;BR /&gt;echo $FN&lt;BR /&gt;&lt;BR /&gt;cd /nfs/atc-netapp1/expbkup_ora8i&lt;BR /&gt;ls -l&lt;BR /&gt;&lt;BR /&gt;rm expbkup.log export.test exp.*.dmp* $PIPE&lt;BR /&gt;mknod $PIPE p&lt;BR /&gt;&lt;BR /&gt;date &amp;gt; expbkup.log&lt;BR /&gt;( gzip &amp;lt; $PIPE ) | split -b $MAXSIZE - $FN. &amp;amp;&lt;BR /&gt;&lt;BR /&gt;# uncomment this to just SPLIT the file, not compress and split&lt;BR /&gt;#split -b $MAXSIZE $PIPE $FN. &amp;amp;&lt;BR /&gt;&lt;BR /&gt;exp userid=$UID buffer=20000000 file=$PIPE $EXPORT_WHAT &amp;gt;&amp;gt;&amp;amp; expbkup.log&lt;BR /&gt;date &amp;gt;&amp;gt; expbkup.log&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;date &amp;gt; export.test&lt;BR /&gt;cat `echo $FN.* | sort` | gunzip &amp;gt; $PIPE &amp;amp;&lt;BR /&gt;&lt;BR /&gt;# uncomment this to just SPLIT the file, not compress and split&lt;BR /&gt;#cat `echo $FN.* | sort` &amp;gt; $PIPE &amp;amp;&lt;BR /&gt;&lt;BR /&gt;imp userid=sys/o8isgr8 file=$PIPE show=y full=y &amp;gt;&amp;gt;&amp;amp; export.test&lt;BR /&gt;date &amp;gt;&amp;gt; export.test&lt;BR /&gt;&lt;BR /&gt;tail expbkup.log&lt;BR /&gt;tail export.test&lt;BR /&gt;&lt;BR /&gt;ls -l&lt;BR /&gt;rm -f $PIPE&lt;BR /&gt;--------------------------------------------------&lt;BR /&gt;&lt;BR /&gt;This also always does an 'integrity' check of the export right after it is done, with an import show=y; that shows how to use these split files with import.&lt;BR /&gt;--------------------------------------------------&lt;BR /&gt;&lt;BR /&gt;!!!&lt;BR /&gt;cat `echo $FN.* | sort` | gunzip &amp;gt; $PIPE &amp;amp;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;sorts the filenames, sends them to cat, which gives them to gunzip in the right order.&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps!&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Yogeeraj</description>
      <pubDate>Thu, 12 Dec 2002 09:56:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862983#M96676</guid>
      <dc:creator>Yogeeraj_1</dc:creator>
      <dc:date>2002-12-12T09:56:46Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862984#M96677</link>
      <description>BTW, you don't need to compress the file to send it to tape; tape drives compress, by default, all data written to them. Let the drive do the work.</description>
      <pubDate>Thu, 12 Dec 2002 09:59:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862984#M96677</guid>
      <dc:creator>Carlos Fernandez Riera</dc:creator>
      <dc:date>2002-12-12T09:59:17Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862985#M96678</link>
      <description>Hi,&lt;BR /&gt;we use the following scenario:&lt;BR /&gt;&lt;BR /&gt;# mknod exp_pipe.dmp p&lt;BR /&gt;# nohup compress &amp;lt; exp_pipe.dmp &amp;gt; databasedumpfile.dmp.Z &amp;amp;&lt;BR /&gt;# nohup $ORACLE_HOME/bin/exp user/password file=exp_pipe.dmp otherexpparameters &amp;amp;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Chris&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Dec 2002 10:05:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862985#M96678</guid>
      <dc:creator>Christian Gebhardt</dc:creator>
      <dc:date>2002-12-12T10:05:25Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862986#M96679</link>
      <description>To Ian Lochray,&lt;BR /&gt;could you please send me step-by-step commands for workarounds 2 &amp;amp; 3 from your reply above? Supposing the size of the tables I want to export is 2.5 GB and I want to create split dump files of 1 GB with the name exp.dmp, what would the names of the eventual split files be? Actually, I do not really understand the syntax of the exp command and what you mean by (userid=scott/tiger tables=X).&lt;BR /&gt;Regards</description>
      <pubDate>Thu, 12 Dec 2002 12:46:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862986#M96679</guid>
      <dc:creator>radi_1</dc:creator>
      <dc:date>2002-12-12T12:46:03Z</dc:date>
    </item>
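The split/reassemble mechanics behind Workaround #2, including the file names radi asks about, can be shown with a small dummy dump. The sizes and the /tmp paths are mine; the suffixes (expdmp-aa, expdmp-ab, ...) are what split itself generates from the given prefix. On the real system the input would come from exp via a pipe rather than from an existing file.

```shell
# Create a 2 MB stand-in for the export dump, chop it into 1 MB pieces,
# then reassemble the pieces in sorted order and prove nothing was lost.
dd if=/dev/zero of=/tmp/dump.src bs=1024 count=2048 2>/dev/null
split -b 1m /tmp/dump.src /tmp/expdmp-         # yields /tmp/expdmp-aa and /tmp/expdmp-ab
cat $(ls /tmp/expdmp-* | sort) > /tmp/dump.rebuilt   # sorted order matters for import
cmp -s /tmp/dump.src /tmp/dump.rebuilt && echo "reassembly OK"
rm -f /tmp/expdmp-*
```

With a 2.5 GB export and 1 GB pieces you would get three files (exp.dmp.aa, exp.dmp.ab, exp.dmp.ac for a prefix of exp.dmp.), and `cat` in sorted order feeds them back to imp exactly as in the Metalink note.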
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862987#M96680</link>
      <description>Oracle provided three workarounds in their note.  The first one is to do multiple exports each containing subsets of the schema's objects.  I have never tried this option as it does not seem very sensible - you would need to create a list of all the database objects and list them on the exp command.  I always use method three - to compress the export via a pipe as described by others in their earlier replies.</description>
      <pubDate>Thu, 12 Dec 2002 13:42:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862987#M96680</guid>
      <dc:creator>Ian Lochray</dc:creator>
      <dc:date>2002-12-12T13:42:09Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862988#M96681</link>
      <description>Hi there.&lt;BR /&gt;What about doing a first export with no data?&lt;BR /&gt;&lt;BR /&gt;exp system/xxxx file=struc_exp.dmp parfile=exp_no_rows.parfile&lt;BR /&gt;--------------------------&lt;BR /&gt;sample file:&lt;BR /&gt;&lt;BR /&gt;buffer=1000000&lt;BR /&gt;full=yes&lt;BR /&gt;compress=y&lt;BR /&gt;grants=y&lt;BR /&gt;indexes=y&lt;BR /&gt;rows=n&lt;BR /&gt;constraints=y&lt;BR /&gt;---------------------------------&lt;BR /&gt;then export user by user&lt;BR /&gt;&lt;BR /&gt;exp system/xxxx file=struc_exp.dmp user=scott&lt;BR /&gt;parfile=xyz.parfile&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;or export table by table&lt;BR /&gt;&lt;BR /&gt;exp system/xxxx file=struc_exp.dmp parfile=exp_tables.parfile&lt;BR /&gt;--------------------------&lt;BR /&gt;sample tables.parfile&lt;BR /&gt;&lt;BR /&gt;buffer=1000000&lt;BR /&gt;full=yes&lt;BR /&gt;compress=y&lt;BR /&gt;grants=y&lt;BR /&gt;indexes=y&lt;BR /&gt;rows=y&lt;BR /&gt;constraints=y&lt;BR /&gt;tables=(a1,a2,a3)&lt;BR /&gt;--------------------------------&lt;BR /&gt;&lt;BR /&gt;That way you can keep your export files smaller than 2 GB.&lt;BR /&gt;But the tape export should be able to handle more than 2 GB.&lt;BR /&gt;Rgds&lt;BR /&gt;Alexander M. Ermes</description>
      <pubDate>Thu, 12 Dec 2002 13:50:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862988#M96681</guid>
      <dc:creator>Alexander M. Ermes</dc:creator>
      <dc:date>2002-12-12T13:50:24Z</dc:date>
    </item>
    <item>
      <title>Re: Data export problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862989#M96682</link>
      <description>Hi,&lt;BR /&gt;Thank you all. Ian's workaround #3 worked perfectly. Points will be assigned accordingly.&lt;BR /&gt;Regards.</description>
      <pubDate>Sun, 15 Dec 2002 07:19:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/data-export-problem/m-p/2862989#M96682</guid>
      <dc:creator>radi_1</dc:creator>
      <dc:date>2002-12-15T07:19:31Z</dc:date>
    </item>
  </channel>
</rss>

