vpons
Frequent Advisor

file too large

I have an oracle.dmp.gz file of about 3.3 GB. When I try to uncompress it, it aborts with an error "File too large". The file system has 20 GB of free space. I moved the file to Windows XP and uncompressed it there, but now when I try to FTP it back over to the Unix box, I get the same error. I have to import data using this .dmp file. Can someone suggest what the best approach is, and why I keep getting the "file too large" error?
7 REPLIES
James R. Ferguson
Acclaimed Contributor

Re: file too large

Hi:

You need to enable 'largefiles' for the filesystem into which you want to put it.

# mkfs -F vxfs -m /dev/vgNN/lvolX

...will show the current state --- largefiles or nolargefiles.

To enable 'largefiles' if necessary:

# fsadm -F vxfs -o largefiles /dev/vgNN/rlvolX

...note the use of the raw device.

OR:

# fsadm -F vxfs -o largefiles /mountpoint
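To just report the current setting on a mounted filesystem, fsadm can also be run with no '-o' argument (a quick check, assuming VxFS; it prints largefiles or nolargefiles):

# fsadm -F vxfs /mountpoint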

Regards!

...JRF...
Ivan Krastev
Honored Contributor

Re: file too large

Check for the largefiles option and enable it if necessary:

# fsadm -F vxfs -o largefiles mount_point


regards,
ivan
Heiner E. Lennackers
Respected Contributor

Re: file too large

It would help to post the platform and operating system version. Without this we can only guess.

In HP-UX 11.11 many programs are still PA-RISC 1.1 (32-bit) binaries and cannot handle files greater than 2 GB. gzip and ftpd are among these.

You can try to unzip your compressed file as a stream:

cat oracle.dmp.gz | gzip -dc > oracle.dmp

Read as a stream, it does not matter how big the files are.
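Since the end goal is an import, you could also avoid creating the huge .dmp file at all by decompressing into a named pipe and pointing the import at the pipe. A sketch only -- the pipe path, credentials and imp parameters below are placeholders for your environment, and it assumes Oracle's imp utility runs on the same host:

# mknod /tmp/imp_pipe p
# gzip -dc oracle.dmp.gz > /tmp/imp_pipe &
# imp system/manager file=/tmp/imp_pipe full=y

Remove the pipe with rm once the import finishes.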

HeL
if this makes any sense to you, you have a BIG problem
vpons
Frequent Advisor

Re: file too large

It is HP-UX 10.20.

Please let me know if this is the action to take:
1. Unmount the file system (I do not need the data, so I will delete it all).
2. Create a file system for large files: newfs -F hfs -o largefiles /dev/vg01/rdbexports
3. Mount the file system.
4. Make changes in /etc/fstab -- what should I add here? Would an entry like the one below be right?
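Something like this, with my volume and mount point names filled in (just my guess at the fields -- device, mount point, type, options, backup frequency, fsck pass):

/dev/vg01/dbexports /dbexports hfs largefiles 0 2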


Thanks,


Ivan Krastev
Honored Contributor

Re: file too large

You can use fsadm without destroying data:

/usr/sbin/fsadm -F hfs -o largefiles /dev/vgXX/rlvolXXX
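Run with no -o option it just reports the current largefiles flag (assuming 10.20's fsadm_hfs; see fsadm_hfs(1M) on whether the filesystem must be unmounted first):

/usr/sbin/fsadm -F hfs /dev/vgXX/rlvolXXX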


See the whitepaper about largefiles - http://docs.hp.com/en/940/lgfiles4.pdf

regards,
ivan
vpons
Frequent Advisor

Re: file too large

Resolved.
Steven Schweda
Honored Contributor

Re: file too large

> In HP-UX 11.11 many programs are still
> PA-RISC 1.1 (32-bit) binaries and cannot
> handle files greater than 2 GB. gzip and
> ftpd are among these.

Programs don't need 64-bit pointers to deal with large files. They do need to be built with large-file support enabled. (They do need to use 64-bit integers to hold file sizes and offsets, but "32-bit" programs can normally be written to do that.)
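On systems that picked up the Large File Summit interfaces (HP-UX 11.x among them, as far as I know -- check your compiler docs), the usual recipe is to build with a 64-bit off_t, e.g.:

$ cc -D_FILE_OFFSET_BITS=64 -o myprog myprog.c

myprog.c is just a placeholder; the flag makes open(), lseek() and friends use 64-bit offsets in an otherwise 32-bit program.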