Operating System - Linux

Solution for large files?

 
James Stenglein
Occasional Advisor

Solution for large files?

Hello,
Being a newbie at Linux, here is my issue...

We need to write larger files for our databases. Currently we are limited to a 4 GB file and must reach the 70 GB range. We are using Red Hat 7.1 (the HP version). I have done some reading about ext3 but can't seem to find a solid source on how to build it into the kernel. Is ext3 an option? If so, how do we go about adding it? Any suggestions/options would be much appreciated :)

Thanks in advance!!

Desperately seeking larger files
8 REPLIES
Stuart Browne
Honored Contributor

Re: Solution for large files?

If you are using 7.1, you should theoretically be able to get files up to 2TB.

If you have a 4gig file size cap, then I'd suggest looking at your database server, and see whether IT has a 4gig limitation.

*does a quick test on the RH7.1 box beside him*

dd if=/dev/zero of=/tmp/Junk count=4097 bs=`echo "1024 * 1024" | bc`

...

ls -al /tmp/Junk
ls: Junk: Value too large for defined data type

Err, oops.. let's try something else..

wc -c /tmp/Junk

4296075872 Junk

Given that 4 GB is 4294967296 bytes, that's a successful test. RH 7.1 doesn't have an issue (at the filesystem and kernel level) with files larger than 4 GB.
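If you want to probe the limit without actually writing 4 GB of zeros, a sparse file works too. This is just a sketch (the path /tmp/bigtest is illustrative): a single byte written at the 4 GiB offset gives a file one byte past the boundary while using almost no disk space.

```shell
# Write one byte at offset 4 GiB, producing a sparse file of 4294967297 bytes
dd if=/dev/zero of=/tmp/bigtest bs=1 count=1 seek=4294967296 2>/dev/null

# If the filesystem and kernel accept >4 GiB files, this reports 4294967297
wc -c < /tmp/bigtest

rm -f /tmp/bigtest
```

If the `dd` fails with "File too large", the limit is in the filesystem or kernel; if it succeeds here but your application still fails, look at the application or transport instead.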

As for converting an ext2 filesystem to ext3, there's another thread here on how to do that. I've never done it myself however.
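For what it's worth, the usual ext2-to-ext3 conversion is just adding a journal with `tune2fs -j` and then changing the fstab entry from ext2 to ext3; you don't reformat. Here is a hedged sketch that exercises the same commands safely on a throwaway filesystem image rather than a real device (the image path is illustrative; on a real system you would point `tune2fs -j` at the unmounted device, e.g. /dev/sdXN):

```shell
# Build a small throwaway ext2 filesystem in a regular file
dd if=/dev/zero of=/tmp/ext2.img bs=1024 count=8192 2>/dev/null
mke2fs -F -q /tmp/ext2.img

# Add a journal: this is the ext2 -> ext3 conversion step
tune2fs -j /tmp/ext2.img

# Confirm the has_journal feature is now set
tune2fs -l /tmp/ext2.img | grep has_journal

rm -f /tmp/ext2.img
```

On a live system the remaining step is editing /etc/fstab to mount the filesystem as ext3, assuming the running kernel has ext3 support compiled in or as a module.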
One long-haired git at your service...

Re: Solution for large files?

I'm not exactly sure what the defaults are on HP's RH7.1, but I would take a look at /etc/security/limits.conf This file allows you to set a bunch of limits on things like memory, concurrent processes, and file sizes. It's basically meant to keep a single user from saturating a system. The file is accessed by a pam module, specifically, pam_limits.so. On my system the docs are located in /usr/share/doc/pam-0.??/txts/README.pam_limits.
The maximum file size depends on your filesystem (reiserfs, ext2, ext3, etc.), your kernel version (2.2 vs. 2.4), and your architecture (32-bit vs. 64-bit). I really doubt you're going over the allowable limit if you're using a 2.4 kernel and an ext? fs.
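A quick way to see whether a pam_limits-style cap is in effect is to ask the shell itself; the limits.conf line below is illustrative only (the value for a 4 GiB cap assumes `fsize` is measured in KiB, which is how pam_limits documents it):

```shell
# Show the per-process file-size limit in effect for this shell.
# Most shells report it in 512-byte blocks; "unlimited" means no cap is set.
ulimit -f

# A limits.conf entry that WOULD impose a 4 GiB cap looks like (do not add this!):
#   oracle  hard  fsize  4194304
```

If `ulimit -f` already says "unlimited" for the affected user, the problem is elsewhere.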

I hope this helps. Good luck.
D. Jackson_1
Honored Contributor

Re: Solution for large files?

James Stenglein
Occasional Advisor

Re: Solution for large files?

First off...thanks for your prompt solutions. :)

Stuart....
Using your dd if=/dev/zero of=/tmp/Junk count=4097 bs=`echo "1024 * 1024" | bc` command I was able to pass the 4294967296 mark successfully (over 5 GB) with both the root and oracle users. Is there a limitation on a data type, or maybe on the NFS transfer? We are basically using this RH box as an Oracle dump target for massive .dmp files and cannot create (as the oracle:dba user) anything over that golden 4 GB mark. If it helps, we are mounting this server to the DB server and moving the dumps across.
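One assumption worth checking here (not confirmed in the thread): the NFS protocol version on that mount. NFSv2 uses 32-bit file offsets and cannot address files past roughly the 2 GiB mark; NFSv3 is needed for large files over NFS. The fstab fragment below is purely illustrative (server name, export, and mount point are placeholders):

```shell
# Hypothetical /etc/fstab entry forcing NFSv3 on the dump mount:
#   dbserver:/export/dumps  /mnt/dumps  nfs  vers=3,rsize=32768,wsize=32768,hard  0  0
#
# To see what protocol version the current mounts negotiated:
#   nfsstat -m
```

If the mount negotiated v2, remounting with vers=3 (assuming both ends support it) may be the whole fix.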


Christopher....
I checked limits.conf and there is nothing set in it specifying a limit on file size. I am using a 2.4.2-2 kernel and an ext2 fs. As seen above, you were both correct that it is not the FS itself.

Any other suggestions or ideas?

Again...thanks for your help :)
James Stenglein
Occasional Advisor

Re: Solution for large files?

D. Jackson,

These are perfect for what I was looking for. Fortunately, the other two experts have already steered my issue in another direction.

Thanks for the sites :)
Stuart Browne
Honored Contributor

Re: Solution for large files?

What OS is the Oracle server running on?

The RH box seems to be used only for data storage using that NFS mount.

If it is in the same range of Linux kernel revisions (i.e. >2.2.16), then that should be OK.

I still think it could be the database server. I know Oracle can handle large raw disk systems, but does it possibly have a limit as to the size of a dump "file" ? What version of Oracle? 8? 9?
One long-haired git at your service...
James Stenglein
Occasional Advisor

Re: Solution for large files?

We are running Oracle 8 on HP-UX servers, NFS-mounted to our "new" RH servers in an effort to find more space. I can say that we will not be able to use RH as a production server anytime in the near future unless we can get this large-file issue ironed out :) As for file transfers, FTP works for the large files and that is about it. As usual, thanks for taking the time to help. Any suggestions would be a great help. :)
Stuart Browne
Honored Contributor

Re: Solution for large files?

Ok, I'm going to have to ask people with HP-UX experience here (as I don't have an HP-UX machine handy with anything resembling 'ample' disk space available) whether the different OS versions have file-size limitations.

Regardless of the NFS server's abilities, if the HP-UX box can't handle files larger than 4 GB, then you are buggered.
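One HP-UX-specific thing worth checking: HP-UX filesystems have an explicit "largefiles" option, and without it files are capped well below what you need. The commands below are a sketch for the HP-UX side only (device and mount point are placeholders; check the HP-UX man pages before running anything):

```shell
# HP-UX sketch -- these are HP-UX commands, not Linux, and the paths are illustrative:
#
# Enable large files on an existing VxFS filesystem:
#   fsadm -F vxfs -o largefiles /dev/vg00/rlvol5
#
# Mount it with large-file support:
#   mount -F vxfs -o largefiles /dev/vg00/lvol5 /u01
```

If the HP-UX filesystems holding or reading the dumps were created without largefiles, that would explain hitting a hard cap even though the Linux side tested clean.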

Did you find out whether the Oracle version you are using can handle large files?

Unfortunately, I don't use either Oracle or HP-UX on a regular basis, so I do not know these answers off the top of my head.
One long-haired git at your service...