Memory error

SOLVED
Go to solution
Jorge Cocomess
Super Advisor

Memory error

Hello Everyone -

Has anyone seen this error (below) before?

The specs on the Alpha server are as follows:
Alpha 4100 w/2.0GB of RAM - OpenVMS 7.3-2

Error: Not enough memory. Attempting to allocate
64000 entries of 2 bytes for 'c042bf'


Thanks,
J
12 REPLIES
Jim_McKinney
Honored Contributor

Re: Memory error

What context? Is "c042bf" a PID? Where do you see this message? On the console? In a batch log? During an interactive session?

Anyway, sounds like a working set is too small.
Jan van den Ende
Honored Contributor

Re: Memory error

Jorge,


Error: Not enough memory. Attempting to allocate
64000 entries of 2 bytes for 'c042bf'


This is NOT a VMS error.
All VMS messages have the format:
%xxx-y-zzz, text

where xxx = facility (LIB, SYS, SOR, or many others),
y = severity: S, I, W, E, or F, for Success (usually suppressed messages!), Informational, Warning, Error, or Fatal,
zzz = short message mnemonic; in this case it would probably have been INSVIRMEM,
text = description; in this case a likely text would have been "insufficient virtual memory".
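
For comparison, a native report of this condition (assuming the allocation failed inside the run-time library, e.g. LIB$GET_VM) would look something like:

%LIB-F-INSVIRMEM, insufficient virtual memory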

I suspect this message is from a package (a Unix port maybe ?) that captures several errors and then tries to give a "more descriptive error message" - only, the standard VMS messages are MUCH more informational.

Can you tell us what software gave this message?

Proost.

Have one on me.

jep
Don't rust yours pelled jacker to fine doll missed aches.
Jorge Cocomess
Super Advisor

Re: Memory error

I saw the error in the batch log. If the working set is too small, how do you go about checking and increasing it?

Thanks for your help.

Jim_McKinney
Honored Contributor
Solution

Re: Memory error

> how do you go about checking it and increase it?


In the simple case...

$ set default sys$system:
$ run authorize
UAF> show USERNAME
UAF> mod USERNAME/wsex=nnnnn
UAF> exit

where USERNAME is the owner of that batch log file and nnnnn is some number greater than the previous value of WSEXTENT (in pages). 64000*2 bytes is only 250 pages, so that's just the last thing that occurred in the process prior to failure. If you don't have the source to determine the needs, then just guess. On an Alpha server with memory such as yours, 32000 pages is certainly not out of line (64K is unlikely to be a problem either).
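
Before modifying anything, you can check the current limits of a process with the standard F$GETJPI lexical function (values are reported in pagelets; run this from the account that owns the batch job):

$ write sys$output f$getjpi("", "WSDEFAULT")
$ write sys$output f$getjpi("", "WSQUOTA")
$ write sys$output f$getjpi("", "WSEXTENT")

Note that a UAF change only affects processes created after the change, so the batch job must be resubmitted to pick up the new value.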
Arch_Muthiah
Honored Contributor

Re: Memory error

Jorge,

I have no information on what kind of application you are running, or on the other loads on the system, but I would like to suggest considering a couple more SYSGEN parameters for this kind of error.

Usually when a process starts executing an image, the process works within WSDEFAULT.

When that process's page fault rate is higher than PFRATH, the working set grows toward WSQUOTA in increments of WSINC,
i.e. WSDEFAULT - WSINC - WSQUOTA.

If the process's page fault rate stays high and the free list size exceeds BORROWLIM, the process continues to receive pagelets in WSINC increments until the free list becomes inadequate or WSEXTENT is reached,
i.e. WSDEFAULT - WSINC - WSQUOTA - WSINC - WSEXTENT.

So you should look at least at the WSDEFAULT, WSQUOTA, and WSEXTENT values, and once you have confirmed this is the reason, you can follow Jim's method to modify the parameter values.
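
The SYSGEN parameters mentioned above can be inspected (read-only) like this:

$ run sys$system:sysgen
SYSGEN> show pfrath
SYSGEN> show borrowlim
SYSGEN> show wsinc
SYSGEN> exit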

Regards
Archie
Wim Van den Wyngaert
Honored Contributor

Re: Memory error

That's not correct.

A process starts with wsdefault (doing dcl).
When an image is started, you get wsquota.
When you page fault (a lot), the ws can grow to wsext if memory allows it.

So, if the working set is too small, you need to increase wsquota (and may be wsextent).

But memory is normally allocated from your pagefilequota ...

Wim
Wim Van den Wyngaert
Honored Contributor

Re: Memory error

Correction. Line 2.

You get the right to use wsquota. You just have to use the memory and you get it. In contrast to wsextent, for which you have to work (earn it).

Wim
Hein van den Heuvel
Honored Contributor

Re: Memory error

Jorge, like Jan writes, this is not a VMS message but a layered-product report, which may point back to a configuration problem in that product, or be an interpretation of an underlying VMS-reported error (during a malloc/getvm call).
It could be something like Oracle, where a shared pool can be configured that can run out even though 'the system' has more memory and the process has the right to use it.
Please dig deeper for details around the message.

What product/application?
Did it ever work?
What changed?
Is there an application error log, perhaps?

Jim, Archunan, I respectfully disagree with any suggestion that this might have anything to do with working sets. Working sets are strictly a performance-influencing setting. They have no functional effect. No program (other than VMS SORT and RDB :-) ever fails due to working set settings; they just run slower than needed.

As Wim indicates, it is more likely to be PAGFILQUO, but it could be lack of pagefile itself or a virtual memory limit. The process could be out of its 1GB base memory. Or it could have poorly allocated (fragmented) that base memory. It could be out of stack, or it could be a program-imposed restriction, in which case no VMS setting can fix it and no one here can help until we know the program.
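
To check the pagefile side, the standard commands are SHOW MEMORY and the F$GETJPI items PGFLQUOTA and PAGFILCNT (values in pagelets):

$ show memory/files/full
$ write sys$output f$getjpi("", "PGFLQUOTA")   ! total pagefile quota
$ write sys$output f$getjpi("", "PAGFILCNT")   ! pagefile quota still unused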

Good luck,
Hein.
Wim Van den Wyngaert
Honored Contributor

Re: Memory error

Hein,

Almost correct. TCPTRACE (tested in VMS 6.2) locks pages in the working set. If wsquota is too low it fails. Try with a wsquota of 512. It will fail.

Wim
Jim_McKinney
Honored Contributor

Re: Memory error

In retrospect, I concur with Hein that pgflquo is the more likely culprit here.
Jorge Cocomess
Super Advisor

Re: Memory error

Hi,

I stopped and restarted the process. So far, I am not seeing any errors. This is an in-house accounting application.

Thanks everyone. You're the best!!
Jan van den Ende
Honored Contributor

Re: Memory error

Jorge,



I saw the error in the batch log.


and


I stopped and restarted the process. So far, I am not seeing any errors.


Am I correct then that this is a continuously-running batch job?

If so, there are just two possibilities:
1), Your job happened to have to deal with a chunk of processing that was unusually big (for whatever reason). That means it is likely to happen again whenever a similar load is to be dealt with.
Raising PGFLQUO would greatly reduce the risk of this happening again.
2), there is some slow-creeping memory leak: allocated memory is not (completely) freed after use, and upon repeat of the functionality a fresh chunk is allocated.
Then there really are only 2 solutions: the fundamentally correct one, locating the incorrect deallocation and repairing it, or the much simpler one: restart regularly, at a frequency high enough that the accumulated wasted memory never hits the limit.
Especially for batch processes this is rather simple to automate, e.g. on a daily or weekly basis.
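
Both options take only a few DCL lines (USERNAME, the quota value, and the file specification are placeholders to adapt):

$ ! option 1: raise the pagefile quota for the batch user (in pagelets)
$ run sys$system:authorize
UAF> modify USERNAME/pgflquota=200000
UAF> exit

$ ! option 2: have the job resubmit itself at the end of its command procedure
$ submit/after="tomorrow+02:00" disk:[dir]myjob.com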

hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.