Operating System - HP-UX

Fritz MAYER
Advisor

Problem with large files over 4 GB (HP-UX 11.0)

It's not possible for me to create a file larger than 4 GB on a VxFS file system. The 100 GB file system (102400 MB) was created with

# lvcreate -L 102400 /dev/vg_test
# newfs -F vxfs -o largefiles /dev/vg_test/rlvol1

The file system is mounted with the option "largefiles" (in /etc/fstab). Then I tried to create a 10 GB test file with the following command:

# dd if=/dev/vg00/rlvol6 of=testfile bs=16384k count=640

The command terminates after writing 256 records, and the file size is 4294967296 bytes (= 4 GB). Trying to append even a single byte to the file results in the error message "Value too large to be stored in data type".

Reading the document /usr/share/doc/lg_files.txt and the white paper "Supported File and File System Sizes for JFS and HFS" didn't help.

The last installed General Release Patch Bundle is from June 2001.

Thanks in advance for any advice!

Fritz Mayer
7 REPLIES
Frank Slootweg
Honored Contributor

Re: Problem with large files over 4 GB (HP-UX 11.0)

The section "5.2 Commands Supporting Large Files" does not list dd(1), i.e. dd(1) does not support large files.

Why dd(1) fails at 4GB instead of the 'expected' 2GB is another question, but non-supported is just that.

However I think that in your actual application you are not using dd(1) (and not reading from a raw logical volume), so it is probably better to describe the real problem/question.

On the other hand, cat(1) *does* support large files, so you might want to write the 4 GB file and then use cat(1) to (try to) append (">>") to it.

If you need further help, then please post a full (failing) example, including all commands/options/parameters, all output, bdf(1M) and ll(1) output, etc.
A. Clay Stephenson
Acclaimed Contributor

Re: Problem with large files over 4 GB (HP-UX 11.0)

Hi Fritz:

Methinks the problem is not with the file system itself but rather with the 'dd' command. Try using cat or cp, or better still (if you want really reliable results) a small C program.
If it ain't broke, I can fix that.
Mark van Hassel
Respected Contributor

Re: Problem with large files over 4 GB (HP-UX 11.0)

From the "HP-UX 11 large files white paper":

>
4.1 Backup utilities that support large files

The following backup utilities will back up large files:

1. dd

2. fbackup/frecover

Both of these commands require no user intervention to back up large
files. If a backup contains large files and an attempt is made to
restore the files on a filesystem that does not support large files,
the large files will be skipped. fbackup and frecover will use a new
format for this release so a backup tape created on HP-UX 10.20 or
later can not be restored on a release of HP-UX prior to 10.20.

4.2 Backup utilities that do not support large files

4.2.1 tar, cpio, pax, ftio

>

So backup is supported, but what about file creation?

The surest sign that life exists elsewhere in the universe is that none of it has tried to contact us
Frank Slootweg
Honored Contributor

Re: Problem with large files over 4 GB (HP-UX 11.0)

OOPS!

You are right, Mark! I did not consider dd(1) to be a 'backup command'. To be 'safe', I had searched for "^dd", i.e. "dd" at the beginning of a line, because that format was used for the other commands, but the backup commands used a different format.

Anyway, as I wrote, we should probably concentrate on the real problem and probably should not be reading from a *device* file.

Trying to 'make up' for my error :-) : It would be nice to know which exact command/circumstances gave the "Value too large to be stored in data type" error. That error probably comes from libc.1 (which dd(1) uses ("chatr /usr/bin/dd")) and is errno EOVERFLOW. errno(2) does not list EOVERFLOW, so we need to know the command/system_call to say anything sensible about it.

Frank "Hmmm! Egg! Nice!" :-) Slootweg
James R. Ferguson
Acclaimed Contributor

Re: Problem with large files over 4 GB (HP-UX 11.0)

Hi Fritz:

A simple test can be done with 'prealloc'.

# prealloc mybigfile 2200000000

(i.e. greater than 2^31 = 2147483648)

This will return with an error quite quickly if you don't have enough space or the filesystem doesn't support 'largefiles'.

Regards!

...JRF...
Fritz MAYER
Advisor

Re: Problem with large files over 4 GB (HP-UX 11.0)

Thank you for all your advice.

Apparently it's a matter of which command I am using. Creating the large file with "dd" or "cat" results in the error message mentioned above. I hadn't considered that "dd" can be used to back up large files but not to create them; it seems rather odd!? Why the "cat" command fails is still unknown.

Here are some commands for demonstrating the effects:

# fsadm /test
fsadm: /etc/default/fs is used for determining the file system type
nomultifsets
largefiles

# ll bigfile
-rw-r--r-- 1 root sys 4294967296 Nov 13 12:12 bigfile

# cat >> bigfile
bigfile: Value too large to be stored in data type

# echo "x" >> bigfile
bigfile: Value too large to be stored in data type

# prealloc bigfile2 10737418240
# ll bigfile2
-rw-r--r-- 1 root sys 10737418240 Nov 13 14:33 bigfile2

# dd if=bigfile2 of=bigfile3 bs=16384k
# ll bigfile3
-rw-r--r-- 1 root sys 10737418240 Nov 13 15:01 bigfile3


As we can see, the "prealloc" command can create a large file (10 GB in this example), while some others can't. I tried the same "prealloc" command on a file system not mounted with the largefiles option and immediately got the error message "prealloc: File too large". "dd" can indeed back up large files but not create them (it still seems odd ;-). So I've learnt that the largefiles option works, but only with some commands.

Thanks once more, and regards,
Fritz
Fritz MAYER
Advisor

Re: Problem with large files over 4 GB (HP-UX 11.0)

Today it is my painful duty to publish the truth about the solution to my large-file problem. I found out why the command

# dd if=/dev/vg00/rlvol6 of=testfile bs=16384k count=640

terminated after writing 4 GB.

The logical volume /dev/vg00/rlvol6 from which dd read its input has a size of only 4 GB!!!! dd simply stopped at the end of its input, not because of any file size limit.

:-)))
:-)))

I know, it sounds really stupid, but sometimes you (I) overlook the most obvious cause of a problem and lose faith in Unix.

Nevertheless, thanks for the advice!

FWM