<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: nfs &amp; big files in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175225#M161900</link>
    <description>It's on disk:&lt;BR /&gt;a Linux NFS file system (Debian 3.0)&lt;BR /&gt;exported to my HP-UX 10.x and 11.00 machines.&lt;BR /&gt;&lt;BR /&gt;But I didn't understand: with your solution, do I have to apply it on the NFS file system, or in my fbackup command?</description>
    <pubDate>Tue, 27 Jan 2004 10:22:43 GMT</pubDate>
    <dc:creator>Tenon_1</dc:creator>
    <dc:date>2004-01-27T10:22:43Z</dc:date>
    <item>
      <title>nfs &amp; big files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175222#M161897</link>
      <description>Hi,&lt;BR /&gt;I have a little problem with my backup.&lt;BR /&gt;&lt;BR /&gt;I would like to back up more than 2 GB with&lt;BR /&gt;fbackup over NFS, but it seems to stop at 2 GB:&lt;BR /&gt;# fbackup -f "file on nfs" -g graph&lt;BR /&gt;So, how can I split my backup, or compress it, or ...?&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Tue, 27 Jan 2004 10:12:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175222#M161897</guid>
      <dc:creator>Tenon_1</dc:creator>
      <dc:date>2004-01-27T10:12:03Z</dc:date>
    </item>
    <item>
      <title>Re: nfs &amp; big files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175223#M161898</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Do you back up to tape or to disk? For files &amp;gt; 2 GB you need to set the largefiles option on the file system (fsadm -F vxfs -o largefiles /mountpoint).&lt;BR /&gt;&lt;BR /&gt;Gideon</description>
      <pubDate>Tue, 27 Jan 2004 10:15:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175223#M161898</guid>
      <dc:creator>G. Vrijhoeven</dc:creator>
      <dc:date>2004-01-27T10:15:33Z</dc:date>
    </item>
    <item>
      <title>Re: nfs &amp; big files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175224#M161899</link>
      <description>fbackup will back up files bigger than 2 GB.&lt;BR /&gt;&lt;BR /&gt;The problem is that you are copying the file temporarily to a file system that doesn't have largefiles enabled.&lt;BR /&gt;&lt;BR /&gt;fsadm can upgrade a file system to largefiles on the fly.&lt;BR /&gt;&lt;BR /&gt;newfs -F vxfs -o largefiles will do it if the file system is unmounted and you don't mind losing the data on it.&lt;BR /&gt;&lt;BR /&gt;For 11.00 you'll need to add the largefiles option to the mount entries in /etc/fstab.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Tue, 27 Jan 2004 10:21:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175224#M161899</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2004-01-27T10:21:41Z</dc:date>
    </item>
    <item>
      <title>Re: nfs &amp; big files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175225#M161900</link>
      <description>It's on disk:&lt;BR /&gt;a Linux NFS file system (Debian 3.0)&lt;BR /&gt;exported to my HP-UX 10.x and 11.00 machines.&lt;BR /&gt;&lt;BR /&gt;But I didn't understand: with your solution, do I have to apply it on the NFS file system, or in my fbackup command?</description>
      <pubDate>Tue, 27 Jan 2004 10:22:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175225#M161900</guid>
      <dc:creator>Tenon_1</dc:creator>
      <dc:date>2004-01-27T10:22:43Z</dc:date>
    </item>
    <item>
      <title>Re: nfs &amp; big files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175226#M161901</link>
      <description>Tenon,&lt;BR /&gt;&lt;BR /&gt;Can you give us the error?&lt;BR /&gt;If you use DDS (DDS-1), it can only hold 2 GB of data.&lt;BR /&gt;&lt;BR /&gt;Gideon</description>
      <pubDate>Tue, 27 Jan 2004 10:25:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175226#M161901</guid>
      <dc:creator>G. Vrijhoeven</dc:creator>
      <dc:date>2004-01-27T10:25:57Z</dc:date>
    </item>
    <item>
      <title>Re: nfs &amp; big files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175227#M161902</link>
      <description>OK,&lt;BR /&gt;my problem was on my NFS server;&lt;BR /&gt;I had to change the NFS version on Linux.&lt;BR /&gt;&lt;BR /&gt;But is it really impossible to split the backup files during my backup?</description>
      <pubDate>Tue, 27 Jan 2004 10:37:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175227#M161902</guid>
      <dc:creator>Tenon_1</dc:creator>
      <dc:date>2004-01-27T10:37:05Z</dc:date>
    </item>
    <item>
      <title>Re: nfs &amp; big files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175228#M161903</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I was about to tell you: NFS version 2 can't handle large files (&amp;gt; 2 GB). NFS version 3 can.&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Sundar</description>
      <pubDate>Wed, 28 Jan 2004 17:29:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175228#M161903</guid>
      <dc:creator>Sundar_7</dc:creator>
      <dc:date>2004-01-28T17:29:41Z</dc:date>
    </item>
    <item>
      <title>Re: nfs &amp; big files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175229#M161904</link>
      <description>What you could do is run fbackup to stdout (-f -) and then pipe the result to a command that splits it into chunks of, for instance, 2 GB.&lt;BR /&gt;&lt;BR /&gt;For instance:&lt;BR /&gt;fbackup &amp;lt;FBACKUP-OPTIONS&amp;gt; -f - | split -b 2048m - bck-$(date +%Y%m%d)&lt;BR /&gt;&lt;BR /&gt;Restore:&lt;BR /&gt;cat bck-&amp;lt;DATE&amp;gt;* | frestore -f - &amp;lt;FRESTORE-OPTIONS&amp;gt;</description>
      <pubDate>Thu, 29 Jan 2004 03:53:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175229#M161904</guid>
      <dc:creator>Elmar P. Kolkman</dc:creator>
      <dc:date>2004-01-29T03:53:51Z</dc:date>
    </item>
    <item>
      <title>Re: nfs &amp; big files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175230#M161905</link>
      <description>NFS version 3 should support large files.</description>
      <pubDate>Thu, 29 Jan 2004 04:04:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175230#M161905</guid>
      <dc:creator>T G Manikandan</dc:creator>
      <dc:date>2004-01-29T04:04:11Z</dc:date>
    </item>
    <item>
      <title>Re: nfs &amp; big files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175231#M161906</link>
      <description>Also make sure that you re-export the file system over NFS after largefiles has been enabled on it.</description>
      <pubDate>Thu, 29 Jan 2004 04:10:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-amp-big-files/m-p/3175231#M161906</guid>
      <dc:creator>T G Manikandan</dc:creator>
      <dc:date>2004-01-29T04:10:11Z</dc:date>
    </item>
  </channel>
</rss>