Operating System - HP-UX

Limit file size in UNIX. How to solve it?

 
SOLVED
Peston Foong
Advisor

Limit file size in UNIX. How to solve it?

Hi, all.
All input is appreciated.

In UNIX there is a limit on the size of a file. That is to say, the maximum size of a file can only be the same as your memory: if you have 2 GB of memory, your dump file will not grow past 2 GB; it stops at that 2 GB limit.
I know this because my company's daily dump file is exactly the same size every day. How come? The number of daily transactions varies, so the dump file size should vary too.

How do I solve this problem? Is there a patch that will solve it?

Our daily dump file serves as the company's daily backup, and it needs to be more than 2 GB.

Regards,
Peston Foong.
Stefan Farrelly
Honored Contributor

Re: Limit file size in UNIX. How to solve it?


Your question is a bit confusing.

1. The maximum size of a file in UNIX used to be 2 GB. Not any more; you can now have files of hundreds of gigabytes. You first need to enable the largefiles option on your filesystems (fsadm -o largefiles; see the man page for fsadm).

2. Correct: if a program in memory crash-dumps, you will only get a dump file the size of the program in memory, which, if you have say 2 GB of RAM, will be a maximum of 2 GB (but likely much smaller). But that applies to programs aborting or crashing, and I doubt you are doing that daily! I guess you are using a dump command for backups? That will keep hitting the max-file-size limit until you modify your filesystems to increase it -- see 1. above.
Im from Palmerston North, New Zealand, but somehow ended up in London...
Peston Foong
Advisor

Re: Limit file size in UNIX. How to solve it?

Hi, Stefan Farrelly.

Let me give some details about my system. My company uses the BaaN IV application for daily transaction input, Informix as the database, and 64-bit HP-UX 11 as our platform OS.

We have scheduled the daily dump through the BaaN scheduler. How do I use fsadm -o largefiles, given that the dump is scheduled from BaaN?

Please help. Thank you so much.


Regards,
Peston.
Stefan Farrelly
Honored Contributor

Re: Limit file size in UNIX. How to solve it?


To set largefiles on HP-UX, do:

/usr/sbin/fsadm -F vxfs -o largefiles

eg. fsadm -F vxfs -o largefiles /var

Do this for each mountpoint you want largefiles enabled on.
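Before changing anything, it can help to see what a filesystem is currently set to. A sketch, assuming the standard HP-UX fsadm behaviour of reporting the setting when no -o option is given (the /baandump mountpoint is taken from later in this thread):

```shell
# Report the current large-files setting of a VxFS filesystem:
# prints "largefiles" or "nolargefiles"
/usr/sbin/fsadm -F vxfs /baandump

# Enable large files on that filesystem
/usr/sbin/fsadm -F vxfs -o largefiles /baandump

# Confirm the change took effect
/usr/sbin/fsadm -F vxfs /baandump
```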
Im from Palmerston North, New Zealand, but somehow ended up in London...
James R. Ferguson
Acclaimed Contributor

Re: Limit file size in UNIX. How to solve it?

Hi Peston:

Be aware, when you enable largefile support, that the older UNIX utilities like 'tar', cpio' and 'pax' do *not* support handling files greater than 2GB. 'fbackup' and 'frecover' are standard HP-UX utilities that *do* support largefiles, however.
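Since tar, cpio and pax stop at 2 GB, a largefile-capable backup of the dump area could be sketched with fbackup/frecover instead (the tape device and path here are assumptions for illustration, not from the thread):

```shell
# Back up /baandump to tape; fbackup handles files larger than 2 GB
/usr/sbin/fbackup -f /dev/rmt/0m -i /baandump -v

# Restore the same tree later with frecover
/usr/sbin/frecover -f /dev/rmt/0m -i /baandump -x -v
```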

A worthwhile paper to read is "HP-UX Large Files White Paper Version 1.4 (HP-UX 10.x, HP-UX 11.0)" found in PDF format here:

http://docs.hp.com/hpux/os/11.0/index.html

Regards!

...JRF...
Peston Foong
Advisor

Re: Limit file size in UNIX. How to solve it?

Hi,

Do I have to bring the system into single-user mode to run this command:

/usr/sbin/fsadm -F vxfs -o largefiles

eg. fsadm -F vxfs -o largefiles /baandump

This is the bdf report:-
$ bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 86016 26448 55922 32% /
/dev/vg00/lvol1 67733 33146 27813 54% /stand
/dev/vg00/lvol8 614400 466604 138635 77% /var
/dev/vg00/lvol7 675840 386332 271445 59% /usr
/dev/vg00/lvol4 135168 32373 96413 25% /tmp
/dev/vg00/lvol6 770048 580416 177821 77% /opt
/dev/vg00/lvol5 258048 95946 152036 39% /home
/dev/vg00/crash 512000 16820 464238 3% /debug_crash
/dev/vg01/lvol1 520192 376445 134775 74% /informix
/dev/vg07/lvol1 3145728 1531639 1516689 50% /baan4
/dev/vg07/lvol2 4194304 932586 3057917 23% /baandump

I would do this on the /baandump filesystem only. Is that okay for the rest?

Thank you so much.

Regards
Peston.

Michael Tully
Honored Contributor

Re: Limit file size in UNIX. How to solve it?

Hi Peston,

Only implement the largefiles option on filesystems greater than 2 GB. The other, system-related filesystems do not need it, as they are under 2 GB.

As James has rightly pointed out, the white paper he linked to gives all the information you would need.

Regards
Michael
Anyone for a Mutiny ?
Peston Foong
Advisor

Re: Limit file size in UNIX. How to solve it?

Hi, All.

Do I have to bring the system into single-user mode to run this command:

eg. /usr/sbin/fsadm -F vxfs -o largefiles /baandump

Or can this HP-UX command be run online, without going into single-user mode?

If I issue this command on /baandump only, will file sizes under /baandump be unlimited while the rest, such as /var, /tmp, /baan4, /informix and so on, remain unchanged?

Please help.

Regards,
Peston.

Shannon Petry
Honored Contributor
Solution

Re: Limit file size in UNIX. How to solve it?

Well, since you don't seem to want to read the white paper that everyone keeps referring to, here it is.

To use fsadm, you DO NOT have to be in single-user mode, but you MUST unmount the filesystem!

Also, you must run fsadm on the RAW device, not on the mount point.

eg. (let's say /ban_dump is a logical volume on vg01 called bandump1):

umount /ban_dump
fsadm -o largefiles /dev/vg01/rbandump1
fsadm /dev/vg01/rbandump1

The last command should echo back:

largefiles

Then mount it back up.

I would not change any filesystem that (1) cannot hold 2 GB+ files, or (2) should never need to support 2 GB+ files. E.g. /var is 600 MB; you would be wasting your time unmounting it and enabling large-file support there.

NOTE: When creating a logical volume in HP-UX 11.x, you should see an option under the VxFS options to enable large-file support there.
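For a brand-new filesystem, large-file support can also be enabled at creation time rather than converted afterwards. A sketch using newfs (the volume and mountpoint names are assumptions for illustration):

```shell
# Create a new VxFS filesystem with largefiles enabled from the start
/usr/sbin/newfs -F vxfs -o largefiles /dev/vg01/rbandump1

# Then mount it as usual
mount /dev/vg01/bandump1 /baandump
```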

Regards,
Shannon
Microsoft. When do you want a virus today?