Operating System - HP-UX

Unable to open buffer file

 
David Child_1
Honored Contributor


One of the DBAs is getting the following error message when running an 'isql' script:

"Unable to open buffer file"

The problem is random and occurs during different operations in the script. It only shows up every once in a while, which is making it more difficult to track down.

My current thoughts are the following kernel parameters: nbuf and/or bufpages

On this particular server these are set as follows: nbuf=389930 and bufpages=491520

I want to move off these fixed settings and use dynamic buffer cache, but I need to justify the change to management.

Any assistance would be greatly appreciated.

p.s. This is a K460 running 11.0
4 REPLIES
A. Clay Stephenson
Acclaimed Contributor

Re: Unable to open buffer file

Hi David,

I am more inclined to believe that you are running out of system-wide file descriptors.
Run 'sar -v 2 2' and look for overflows.

Normally, for pure database servers (I think isql is Informix), dynamic buffer cache is
not optimal. Hard-setting it via bufpages
leads to much less contention. In any event, a problem with buffers might lead to performance problems, but not to an inability to open a file.

My 2 cents, Clay
If it ain't broke, I can fix that.
Thierry Poels_1
Honored Contributor

Re: Unable to open buffer file

Hi,
this seems to happen if you run out of space on /tmp or /var/tmp.
Another possibility is that an old buffer file was left behind by another user, which the current user cannot overwrite.
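A quick way to check the first possibility is a sketch like the one below. On HP-UX the usual command is 'bdf /tmp'; 'df -k' is shown here for portability, and the 90% threshold is just an example value, not anything from the original post:

```shell
#!/bin/sh
# Warn when a temp filesystem is nearly full.
# On HP-UX you would typically run `bdf /tmp`; `df -k` is used here
# for portability. The 90% threshold is an arbitrary example.
for d in /tmp /var/tmp; do
    # Field 5 of the data line is the "Use%" column; strip the % sign.
    pct=`df -k "$d" | awk 'NR==2 { sub(/%/, "", $5); print $5 }'`
    pct=${pct:-0}
    if [ "$pct" -ge 90 ]; then
        echo "WARNING: $d is ${pct}% full"
    fi
done
```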

good luck,
Thierry.
All unix flavours are exactly the same . . . . . . . . . . for end users anyway.
neylan tokerler
Occasional Advisor

Re: Unable to open buffer file

hi,
1) check permissions on /tmp
2) look for DB-related files in /tmp; if the application session is not closed properly, you may get this error. Try removing the files related to the DB under /tmp.
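As a sketch of step 2, something like the following lists candidates before anything is removed. The '*buf*' name pattern and the one-day age cutoff are assumptions, not details from the post; verify the real file names before deleting anything:

```shell
#!/bin/sh
# List (do not delete yet) candidate leftover DB work files in a temp dir.
# The "*buf*" pattern and one-day age cutoff are assumptions; adjust them
# to match the actual buffer file names your database creates.
DIR=${1:-/tmp}
find "$DIR" -type f -name '*buf*' -mtime +1 2>/dev/null
```

Once the listed files are confirmed to be stale, the same find with `-exec rm {} \;` would remove them.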
terryshawaii.rr.com
New Member

Re: Unable to open buffer file

I've run into this error before when running a batch job, whether in a Unix shell or a DEC or Microsoft batch job. If it's a Unix cron job, I find it occurs more often.

The cause is simple contention over system resources between the database server software and the system calls being made via the shell or batch job. The errors will seem random, but they occur most often within a looping construct that makes heavy use of aggregate functions (like COUNT or AVG). You can configure your database to allow a larger buffer, but this does not always solve the problem; you can also increase the size of the partition the buffer uses, but that does not always solve it either.

I find that what works every time with these small random errors is time. Test for this error return in the shell or batch job, then pause the job for a second or two. This gives the operating system and database software time to catch up on housekeeping. After pausing, run the select statement again. Set a limit on how many times you retry; I try three times and have not had a failure yet. I also write each retry to an error log and check it from time to time to ensure I'm not running out of resources.
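A minimal Bourne shell sketch of that retry-with-pause pattern is below. The command being retried, the two-second pause, the three-attempt limit, and the log path are all placeholders to be adapted:

```shell
#!/bin/sh
# Retry a flaky job up to MAX_TRIES times, pausing between attempts so
# the OS and database software can catch up on housekeeping.
MAX_TRIES=3                          # retry limit (example value)
LOG=${LOG:-/tmp/retry_errors.log}    # placeholder error log path

run_with_retry() {
    tries=0
    while [ "$tries" -lt "$MAX_TRIES" ]; do
        if "$@"; then
            return 0                 # the job succeeded
        fi
        tries=`expr $tries + 1`
        echo "`date`: attempt $tries of '$*' failed" >> "$LOG"
        sleep 2                      # give the system a moment to recover
    done
    return 1                         # gave up after MAX_TRIES attempts
}

# Example invocation (placeholder command, not from the original post):
# run_with_retry isql -U dbuser -i report.sql
```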

The only other option is to move the shell scripts or batch jobs into compiled code. Depending on the system, C, C++, and Ada all have published extensions/libraries for Sybase and Oracle, and probably Microsoft as well. Using a compiled program allows the complete testing and flushing of buffer space that you can't accomplish within a shell.

I know this post is old, but I thought I'd reply, since I recall when I first encountered this error in 1997. It drove me crazy, and no one at Sybase tech support could help. I first encountered it in an HP-UX environment.