<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Solution for large files? in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681493#M1290</link>
    <description>Hello,&lt;BR /&gt;  Being a Linux newbie, here is my issue...&lt;BR /&gt;&lt;BR /&gt;We need to write larger files for databases.  Currently we are limited to a 4GB file and must hit the 70GB range.  We are using Red Hat 7.1 (the HP version).  I have done some reading about ext3 but can't seem to find a solid source on how to install it into the kernel.  Is ext3 an option?  If it is, how do we go about adding it?  Any suggestions/options would be much appreciated :)&lt;BR /&gt;&lt;BR /&gt;Thanks in advance!!&lt;BR /&gt;&lt;BR /&gt;Desperately seeking larger files</description>
    <pubDate>Tue, 12 Mar 2002 18:54:16 GMT</pubDate>
    <dc:creator>James Stenglein</dc:creator>
    <dc:date>2002-03-12T18:54:16Z</dc:date>
    <item>
      <title>Solution for large files?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681493#M1290</link>
      <description>Hello,&lt;BR /&gt;  Being a Linux newbie, here is my issue...&lt;BR /&gt;&lt;BR /&gt;We need to write larger files for databases.  Currently we are limited to a 4GB file and must hit the 70GB range.  We are using Red Hat 7.1 (the HP version).  I have done some reading about ext3 but can't seem to find a solid source on how to install it into the kernel.  Is ext3 an option?  If it is, how do we go about adding it?  Any suggestions/options would be much appreciated :)&lt;BR /&gt;&lt;BR /&gt;Thanks in advance!!&lt;BR /&gt;&lt;BR /&gt;Desperately seeking larger files</description>
      <pubDate>Tue, 12 Mar 2002 18:54:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681493#M1290</guid>
      <dc:creator>James Stenglein</dc:creator>
      <dc:date>2002-03-12T18:54:16Z</dc:date>
    </item>
    <item>
      <title>Re: Solution for large files?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681494#M1291</link>
      <description>If you are using 7.1, you should theoretically be able to get files up to 2TB.&lt;BR /&gt;&lt;BR /&gt;If you have a 4-gig file size cap, then I'd suggest looking at your database server to see whether IT has a 4-gig limitation.&lt;BR /&gt;&lt;BR /&gt;*does a quick test on the RH7.1 box beside him*&lt;BR /&gt;&lt;BR /&gt;dd if=/dev/zero of=/tmp/Junk count=4097 bs=`echo "1024 * 1024" | bc`&lt;BR /&gt;&lt;BR /&gt;...&lt;BR /&gt;&lt;BR /&gt;ls -al /tmp/Junk&lt;BR /&gt;ls: Junk: Value too large for defined data type&lt;BR /&gt;&lt;BR /&gt;Err, oops... let's try something else:&lt;BR /&gt;&lt;BR /&gt;wc -c /tmp/Junk&lt;BR /&gt;&lt;BR /&gt;4296075872 Junk&lt;BR /&gt;&lt;BR /&gt;Given that 4GB is 4294967296 bytes, a successful test.  RH 7.1 doesn't have an issue (at the file system and kernel level) with 4GB files.&lt;BR /&gt;&lt;BR /&gt;As for converting an ext2 filesystem to ext3, there's another thread here on how to do that.  I've never done it myself, however.</description>
      <pubDate>Tue, 12 Mar 2002 22:02:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681494#M1291</guid>
      <dc:creator>Stuart Browne</dc:creator>
      <dc:date>2002-03-12T22:02:07Z</dc:date>
    </item>
    <item>
      <title>Re: Solution for large files?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681495#M1292</link>
      <description>I'm not exactly sure what the defaults are on HP's RH7.1, but I would take a look at /etc/security/limits.conf.  This file allows you to set a number of limits on things like memory, concurrent processes, and file sizes.  It's basically meant to keep a single user from saturating a system.  The file is read by a PAM module, specifically pam_limits.so.  On my system the docs are located in /usr/share/doc/pam-0.??/txts/README.pam_limits.&lt;BR /&gt;The maximum file size depends on your file system (reiserfs, ext2, ext3, etc.), your kernel version (2.2 vs. 2.4), and your architecture (32-bit vs. 64-bit).  I really doubt you're going over the allowable limit if you're using a 2.4 kernel and an ext? fs.&lt;BR /&gt;&lt;BR /&gt;I hope this helps.  Good luck.&lt;BR /&gt;</description>
      <pubDate>Wed, 13 Mar 2002 01:47:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681495#M1292</guid>
      <dc:creator>Christopher C. Weis</dc:creator>
      <dc:date>2002-03-13T01:47:33Z</dc:date>
    </item>
    <item>
      <title>Re: Solution for large files?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681496#M1293</link>
      <description>Check out a few of these links and see if they can help you out.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.zip.com.au/~akpm/linux/ext3/ext3-usage.html" target="_blank"&gt;http://www.zip.com.au/~akpm/linux/ext3/ext3-usage.html&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://www.zip.com.au/~akpm/linux/ext3/" target="_blank"&gt;http://www.zip.com.au/~akpm/linux/ext3/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://people.spoiled.org/jha/ext3-faq.html" target="_blank"&gt;http://people.spoiled.org/jha/ext3-faq.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 13 Mar 2002 15:32:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681496#M1293</guid>
      <dc:creator>D. Jackson_1</dc:creator>
      <dc:date>2002-03-13T15:32:39Z</dc:date>
    </item>
    <item>
      <title>Re: Solution for large files?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681497#M1294</link>
      <description>First off...thanks for your prompt solutions.  :)&lt;BR /&gt;&lt;BR /&gt;Stuart....&lt;BR /&gt;  Using your dd if=/dev/zero of=/tmp/Junk count=4097 bs=`echo "1024 * 1024" | bc` command I was able to pass the 4294967296 mark successfully (over 5GB) with both the root and oracle users.  Is there a limitation on a data type, or maybe on the NFS transfer?  We are basically using this RH box as an Oracle dump with massive .dmp files and cannot create (with the oracle:dba user) anything over that golden 4GB mark.  If it helps...we are mounting this server to the DB server and moving the dumps on.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Christopher....&lt;BR /&gt;  I checked limits.conf and there is nothing set in it to specify a limit on file size.  I am using a 2.4.2-2 kernel and an ext2 fs.  As seen above, you were both correct that it is not the FS itself.&lt;BR /&gt;&lt;BR /&gt;Any other suggestions or ideas?&lt;BR /&gt;&lt;BR /&gt;Again...thanks for your help :)</description>
      <pubDate>Wed, 13 Mar 2002 15:33:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681497#M1294</guid>
      <dc:creator>James Stenglein</dc:creator>
      <dc:date>2002-03-13T15:33:42Z</dc:date>
    </item>
    <item>
      <title>Re: Solution for large files?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681498#M1295</link>
      <description>D. Jackson,&lt;BR /&gt;&lt;BR /&gt;These are perfect for what I was looking for.  Fortunately, the other two experts have already steered my issue in another direction.&lt;BR /&gt;&lt;BR /&gt;Thanks for the sites  :)</description>
      <pubDate>Wed, 13 Mar 2002 15:41:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681498#M1295</guid>
      <dc:creator>James Stenglein</dc:creator>
      <dc:date>2002-03-13T15:41:28Z</dc:date>
    </item>
    <item>
      <title>Re: Solution for large files?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681499#M1296</link>
      <description>What OS is the Oracle server running on?&lt;BR /&gt;&lt;BR /&gt;The RH box seems to be used only for data storage via that NFS mount.&lt;BR /&gt;&lt;BR /&gt;If it is in the same area of kernel revisions of Linux (i.e. &amp;gt;2.2.16), then OK.&lt;BR /&gt;&lt;BR /&gt;I still think it could be the database server.  I know Oracle can handle large raw disk systems, but does it possibly have a limit on the size of a dump "file"?  What version of Oracle? 8? 9?</description>
      <pubDate>Wed, 13 Mar 2002 22:03:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681499#M1296</guid>
      <dc:creator>Stuart Browne</dc:creator>
      <dc:date>2002-03-13T22:03:02Z</dc:date>
    </item>
    <item>
      <title>Re: Solution for large files?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681500#M1297</link>
      <description>We are running Oracle 8 on HP-UX servers NFS-mounted to our "new" RH servers in an effort to find more space.  I can say that we will not be able to use RH as a production server anytime in the near future unless we can get this large-file issue ironed out :)  As far as file transfers go, FTP works for the large files and that is about it.  As usual, thanks for taking the time to help.  Any suggestions would be a great help.  :)</description>
      <pubDate>Thu, 14 Mar 2002 14:10:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681500#M1297</guid>
      <dc:creator>James Stenglein</dc:creator>
      <dc:date>2002-03-14T14:10:18Z</dc:date>
    </item>
    <item>
      <title>Re: Solution for large files?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681501#M1298</link>
      <description>OK, I'm going to have to ask people with HP-UX experience here (as I don't have an HP-UX machine handy with anything resembling 'ample' disk space available) whether the different OS versions have file-size limitations.&lt;BR /&gt;&lt;BR /&gt;Regardless of the NFS server's abilities, if the HP-UX box can't handle files larger than 4GB, then you are buggered.&lt;BR /&gt;&lt;BR /&gt;Did you find out whether the Oracle version you are using can handle large files?&lt;BR /&gt;&lt;BR /&gt;Unfortunately, I don't use either Oracle or HP-UX on a regular basis, so I do not know these answers off the top of my head.</description>
      <pubDate>Fri, 15 Mar 2002 00:02:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/solution-for-large-files/m-p/2681501#M1298</guid>
      <dc:creator>Stuart Browne</dc:creator>
      <dc:date>2002-03-15T00:02:49Z</dc:date>
    </item>
  </channel>
</rss>

