
srinivas_42
Occasional Advisor

Memory allocation

All,
Sys config: 4 Processor / 8GB memory / 12GB swap / HPUX 11.11 / Oracle 9.2.0.6/ 2 instances

I keep getting a "not enough memory" error once in a while. I noticed that the number of database processes at that time reaches 300 or more.

Questions:

1. For every database connection, does HP-UX create one Unix process? Is that right?
2. How much memory is allocated for each process/connection? Is this memory taken outside of whatever is configured for database usage (the SGA)?
3. If (2) above is true, how do we limit it? Otherwise every system would run out of memory with just a few connections.

Thanks,

Srini
12 REPLIES
Hein van den Heuvel
Honored Contributor

Re: Memory allocation

1. For every database connection, does HP-UX create one Unix process? Is that right?

YES. Using MTS (Multi-Threaded Server, i.e. shared servers) you can pool connections, but this is generally not advised.

2. How much memory is allocated for each process/connection? Is this memory taken outside of whatever is configured for database usage (the SGA)?

5 - 50 MB of private memory, in addition to mapping the shared SGA. Some of this cannot be controlled; for example, you will appreciate that each process has its own stack. Some of it (private work areas) can be controlled directly, and some (e.g. I/O buffers for table scans) indirectly.

>> 3. If above (2) is true, how do we limit this.

Oracle used to have only a few per-process knobs, specifically sort_area_size and sort_area_retained_size, for which you can set defaults in init.ora.

With 9.2 you should probably focus on setting "pga_aggregate_target" to control the per-instance non-SGA memory.
Just google for pga_aggregate_target and you'll get a ton of usage hints.
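In case it helps, a minimal init.ora sketch of those knobs (the values are illustrative assumptions, not recommendations; size them for your own workload):

```
# init.ora sketch -- illustrative values only
workarea_size_policy = auto      # let Oracle size sort/hash work areas itself
pga_aggregate_target = 400M      # target for total non-SGA (PGA) memory, per instance
```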

Cheers,
Hein.
srinivas_42
Occasional Advisor

Re: Memory allocation

Hi

Thanks for your reply. To make sure I understand correctly: are you saying that each database connection process will take 5-50 MB of memory/swap space? If so, a 9 GB memory system cannot have more than about 300 database connections.

In my understanding, pga_aggregate_target covers the non-SGA memory per session, but that is very small, maybe 1 MB per session, right?
Our PGA aggregate target is configured at 50 MB, and the SGA at 500 MB. I am not sure where the rest of the memory is going and why each Oracle process occupies 50 MB.
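As a back-of-envelope check of that "about 300 connections" ceiling, here is a sketch with illustrative assumptions (500 MB SGA, roughly 700 MB for the OS and other processes, and 25 MB of private memory per shadow process — none of these figures come from your system):

```shell
# Rough capacity estimate -- all numbers are illustrative assumptions
ram_mb=8192        # physical RAM
sga_mb=500         # SGA as configured
os_mb=700          # assumed OS + non-Oracle overhead
per_conn_mb=25     # assumed private memory per shadow process
echo $(( (ram_mb - sga_mb - os_mb) / per_conn_mb ))   # prints 279
```

With a per-process footprint anywhere in the quoted 5-50 MB range, the answer swings from roughly 140 to 1400 connections, which is why the private footprint matters so much.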
Kent Ostby
Honored Contributor

Re: Memory allocation

Srini --

Also, make sure you have enough swap space.

The error ENOMEM can indicate memory or swap issues.

HP recommends that you have 2x RAM worth of DEVICE swap. So if you are running 8 GB of memory, you need to bump that swap up by another 4 GB of device swap.
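The arithmetic behind that recommendation, as a quick sketch using the numbers from this thread's config:

```shell
# Device-swap shortfall under the 2 x RAM rule of thumb
ram_gb=8           # installed RAM
swap_gb=12         # device swap currently configured
echo $(( 2 * ram_gb - swap_gb ))   # GB of device swap still to add; prints 4
```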
"Well, actually, she is a rocket scientist" -- Steve Martin in "Roxanne"
Ninad_1
Honored Contributor

Re: Memory allocation

Srini,

Yes, you are right in your understanding of the memory used by Oracle processes.
I can't comment on the pga_aggregate ...

Why is each Oracle connection using so much memory? Well, each connection will definitely be performing some task, so it will require its own data, stack and text areas, apart from the shared memory it uses from the SGA. To see more details, do the following -
ps -eaf | grep yourOra_SID | grep 'LOCAL'
Note pid of any one connection process.
start glance
Press s and enter the pid
Press M [ Capital M ] to see the memory regions used by that process - to understand why the connection is using that much memory.

>> Where has all the memory gone?
do a
UNIX95= ps -e -o "pid,user,sz,args" | sort -nr -k 3 | more

to see the top memory-using processes; then you will know who is using the most memory.
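To turn that listing into a per-instance total, you could sum the sz column, which is in pages (4 KB per page is assumed here; check your system's page size). The sample ps output below is made up purely for illustration:

```shell
# Sum the private memory of one instance's shadow processes from ps output.
# Sample data is fabricated; sz is in pages, 4 KB/page assumed.
cat > /tmp/ps.out <<'EOF'
 1234 oracle  12800 oracleSID1 (LOCAL=NO)
 1235 oracle   9600 oracleSID1 (LOCAL=NO)
 1236 root      512 /usr/sbin/syslogd
EOF
awk '/oracleSID1/ { pages += $3 } END { printf "%.1f MB\n", pages * 4 / 1024 }' /tmp/ps.out
# prints: 87.5 MB
```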

Hope this clarifies,
Ninad
Bill Hassell
Honored Contributor

Re: Memory allocation

Most of the time, "not enough memory" does not mean that there is no more RAM; it usually means that a kernel limit has been reached. Since HP-UX is a virtual memory system, RAM is logically extended onto disk: idle or less important programs are deactivated and paged out to the swap area. So the first test is:

swapinfo -tam

If the swap areas are actually occupied and pushing the limits (80-90% used), then indeed you are running out of memory, and the first fix is to add more swap. However, swapping causes a 50:1 to 200:1 slowdown in performance, and that affects everyone. Make sure that the paging rate is low by running vmstat several times while the system is really busy. If the po (page out) column is more than 10-20, then your system is badly undersized and needs a lot more RAM, perhaps 16 GB.
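As a sketch of automating that vmstat check: on HP-UX 11.11 the po rate is the 9th column of the data lines (verify against the header on your box). The sample capture below is fabricated for illustration:

```shell
# Flag vmstat samples whose page-out (po) rate exceeds 20.
# Sample data is made up; field 9 = po on HP-UX 11.11 vmstat.
cat > /tmp/vmstat.out <<'EOF'
 2 0 0 512000  8000  5  2 12 35 40 0 60 400 900 300 30 10 60
 1 0 0 510000 12000  3  1  4  2  5 0 10 350 800 250 20  8 72
EOF
awk '$9 > 20 { print "high page-out rate: " $9 }' /tmp/vmstat.out
# prints: high page-out rate: 35
```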

Now, if there are no significant page-outs, then ulimit, maxdsiz and maxdsiz_64 may be limiting your programs. But you need to identify which program (or programs) is having the problem. If a program dumps core, use the file command (i.e., file core) to see the program name. Then determine whether this program needs permission to use more RAM.


Bill Hassell, sysadmin
srinivas_42
Occasional Advisor

Re: Memory allocation

Thanks, all, for the replies. Everybody seems to agree that a 9 GB RAM system can only support 200 - 300 users. I still doubt that, though, because we hear about systems of that size supporting thousands of users. I feel there is some kernel parameter or database parameter to tune which decides how much each process/connection is going to get. I agree that it depends on what the process is doing. But Oracle should be performing most of its work in its own database space (shared pool etc.); the additional process created for each database connection should only handle a little work. I am not sure whether posting this question in the database subgroup also reaches the sysadmins out there in the forums; maybe they can shed some light on it too.
Bill Hassell
Honored Contributor

Re: Memory allocation

The tasks being performed are much more important in defining the performance of the database, middleware and user response time. An application that spends most of its time looking up simple records in response to user commands might easily handle thousands of users. But an application that performs complex searches and reports, often in automated batches, might only handle 10 users maximum.

So there really is no rule at all regarding RAM versus users -- that is very much dependent on the skill of the DBAs, the efficiency of the most complex SQL tasks, and proper indexing. The only rule is to use as much RAM as you can without significant page-outs. RAM is expensive, so utilizing as much of it as possible is a good thing. If the workload increases so that page-outs become significant, trimming the SGA to a smaller size may help a bit, but eventually performance will have to be maintained some other way as the user count grows. There is no simple rule for figuring this out.


Bill Hassell, sysadmin
Ariel Cary
Frequent Advisor

Re: Memory allocation

Hi,

I have also seen this high memory consumption behavior when running Oracle9iR2 on HP-UX IA64 (I am assuming you're using Itanium), compared to other Unix platforms, where Oracle uses only a marginal percentage of what it uses on IA64.
The previous replies about increasing swap space are correct on this platform: if you don't have enough swap, a process will fail to be created even if there is enough free RAM. But that will not lower the memory used by the Oracle shadow processes.
There is an Oracle init parameter, CURSOR_SPACE_FOR_TIME, that affects the memory used by each Oracle shadow process.
* The Oracle initialization parameter CURSOR_SPACE_FOR_TIME is changed from the default value FALSE to TRUE.
(please see the complete documentation at http://download-east.oracle.com/docs/html/A97350_05/toc.htm)
It is supposed to be set to TRUE. I don't have an HP-UX box with 9iR2 installed; I checked on a Linux x86 10.2.0.2 install and there it is set to FALSE.

Also:
* Oracle9i release 2 (9.2.0.1.0) changes the default setting for virtual memory data pages from D (4 KB) to L (1 GB) on HP-UX.
However, if applications are constrained in memory and tend to run a very large number of processes, then this drastic page size increase may lead processes to indicate large memory allocations, followed by an "out of memory" error. In those cases, you must lower the page size to a value between the "D" (default) size of 4 KB and the "L" (largest) size of 1 GB.

In the document pointed to above, they show a way to decrease this virtual memory footprint. I haven't tried it, so I'd suggest you try it first in your test environment.

Hope this helps.

-Ariel
Leon Allen
Regular Advisor

Re: Memory allocation

Please DO check that your dbc_min_pct and dbc_max_pct kernel parameters are set low (e.g. 2 and 7, or 5 and 10).

(Use sam to check these)

THEN tune your SGA.
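To see why this matters on an 8 GB box, a quick sketch of the buffer-cache ceiling these parameters imply (dbc_max_pct=50 was a common old default; both values here are illustrative):

```shell
# Dynamic buffer cache ceiling in MB for a given dbc_max_pct
ram_mb=8192
echo "dbc_max_pct=50 -> $(( ram_mb * 50 / 100 )) MB"   # prints 4096 MB
echo "dbc_max_pct=10 -> $(( ram_mb * 10 / 100 )) MB"   # prints 819 MB
```

Left at a high default, the file-system buffer cache can quietly claim memory that you meant for the SGA and the shadow processes.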

Time's fun when your having flys (ancient frog saying)