Databases
Hpux error 32 - filesys overflow

N.D
Occasional Advisor

Hpux error 32 - filesys overflow

There was an error on our Unix machine, 'HP-UX error 32', causing one of our databases to fall over. Are there any kernel parameters we can amend to prevent this from happening again?
6 REPLIES
John Palmer
Honored Contributor

Re: Hpux error 32 - filesys overflow

Hi,

Possibly, but more information is required.

From /usr/include/sys/errno.h, error 32 simply means 'Broken Pipe'.

What sort of database?
Are there any other meaningful error messages?
Was anything unusual happening?
Can you reproduce the problem?

Regards,
John
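As a quick way to confirm John's errno.h lookup, the same mapping can be checked from Python. On Linux errno 32 is likewise EPIPE; this sketch assumes the numbering matches the HP-UX errno.h quoted above:

```python
import errno
import os

# errno 32 maps to EPIPE ("Broken pipe") on common Unix systems.
print(errno.EPIPE)               # 32
print(os.strerror(errno.EPIPE))  # Broken pipe
```

Note that this is the generic errno meaning; as the thread goes on to show, the actual kernel message was "file table overflow", which is a different condition.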
Armin Feller
Honored Contributor

Re: Hpux error 32 - filesys overflow

Hi,

If you are speaking about a "filesystem overflow", then you can tune up the kernel parameter NFILE. You should not tune this parameter directly, though; a better way is to increase MAXUSERS.

Regards,
Armin
N.D
Occasional Advisor

Re: Hpux error 32 - filesys overflow

Sorry, the error actually said 'file table overflow'. The Unix box is running 11.0 and the DB was Oracle 8.1.6.
N.D
Occasional Advisor

Re: Hpux error 32 - filesys overflow

Other error messages say: oratable cannot read file.
John Palmer
Honored Contributor

Re: Hpux error 32 - filesys overflow

File table overflow is caused when you have too many open files system-wide.

You need to increase the kernel parameter 'nfile' and reboot.

You can check the status of the file table with 'sar -v 1'. The value in the column labelled file-sz is reported as used/nfile, i.e. the current number of entries over the configured table size.

Regards,
John
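The failure mode John describes can be illustrated at the process level. This is only a sketch: HP-UX's nfile governs a system-wide table, whereas the Python snippet below lowers the per-process descriptor limit (via `resource`) to reproduce the same "too many open files" symptom quickly:

```python
import errno
import os
import resource

# Lower the per-process descriptor limit so the table fills quickly.
# (nfile on HP-UX is the system-wide analogue of this limit.)
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))

opened = []
caught = None
try:
    while True:
        opened.append(open(os.devnull))
except OSError as exc:
    caught = exc.errno  # EMFILE: "Too many open files"
finally:
    for f in opened:
        f.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))

print(errno.errorcode[caught])  # EMFILE
```

When the system-wide table fills instead, processes see ENFILE rather than EMFILE, which is why raising nfile (and rebooting) is the fix here rather than any per-process limit.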
Jean-Louis Phelix
Honored Contributor

Re: Hpux error 32 - filesys overflow

Hi,

In general it's recommended not to modify kernel parameters directly, but rather to use indirect modifiers like MAXUSERS. In this case, however, NFILE is a very cheap parameter which can safely be raised to a very large value (> 50000) without any impact on your system, whereas the indirect modifiers would also increase NINODE, for example (an inode CACHE, not a table size; not used by VxFS; very expensive in size and time; and notoriously always set too high). I mention NINODE because 'sar -v' will probably tell you that it's almost full, although that is normal since it's a cache. The only parameter that you could relate to NFILE is NFLOCKS, which 'should' be around 10% of NFILE.

Regards.
It works for me (© Bill McNAMARA ...)
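Jean-Louis's rule of thumb can be sketched as a tiny helper. Both the function name and the 10% ratio are taken from his post, not from official HP-UX sizing guidance, so treat this as an assumption:

```python
def suggested_nflocks(nfile: int) -> int:
    """Rule of thumb from the thread: NFLOCKS 'should' be ~10% of NFILE."""
    return nfile // 10

# With the "> 50000" NFILE value suggested above:
print(suggested_nflocks(50000))  # 5000
```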