Operating System - HP-UX

Re: Oracle SGA with multiple shared mem segments

 
Tim Nelson
Honored Contributor

Oracle SGA with multiple shared mem segments

rx7620
32GB RAM
HPUX 11.23
6/09 patch bundle

shmmax=28gb

The shared memory segments for the SGA are being split; typically we see a single segment in ipcs for Oracle.

e.g.
sga=20gb
ipcs returns two 9gb segments and one 2gb segment

If I set sga=10gb,
ipcs shows two 4gb segments and one 2gb segment



Any ideas?
Tim Nelson
Honored Contributor

Re: Oracle SGA with multiple shared mem segments

Oracle 10.2.0.3
Tim Nelson
Honored Contributor

Re: Oracle SGA with multiple shared mem segments

I believe I found what I needed.
Metalink ID 759565.1

and a good discussion here.

http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1353911

My rx7620 is a NUMA system and there is nothing wrong with the shared mem allocations.

(unless someone has more to discuss ;)
Michael Steele_2
Honored Contributor
Solution

Re: Oracle SGA with multiple shared mem segments

Hi

a) did you turn off ccNUMA?
b) How did you turn ccNUMA off?
c) Are you using Npars? Vpars? Dynamic Vpars? Other?
d) How are your db's and app's spread out?
Tim Nelson
Honored Contributor

Re: Oracle SGA with multiple shared mem segments

a) did not turn off NUMA
b) n/a
c) just a single nPar
d) Only on DB

Re: Oracle SGA with multiple shared mem segments

Tim,

I doubt your SGA shared memory sizes are related to NUMA - more likely they just come down to your kernel parms related to shared memory. If you have Metalink access, have a read of article 15566.1, which explains how Oracle allocates shared memory segments.
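
For what it's worth, the general pattern a note like that describes is roughly the one below. This is only a minimal sketch, not Oracle's actual code; grab_sga and the halving fallback are invented for illustration, and it assumes a 64-bit compile (+DD64).

#include <stdio.h>
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Sketch: try to get "total" bytes of shared memory in one segment;
 * if shmget() refuses (e.g. the request exceeds shmmax), fall back to
 * several smaller segments. Oracle's real logic also weighs shmseg,
 * SGA granule size, etc. -- this only shows the shape of the fallback. */
static int grab_sga(size_t total)
{
    size_t piece = total;   /* first attempt: one big segment */
    size_t got = 0;

    while (got < total) {
        size_t want = (total - got < piece) ? total - got : piece;
        int id = shmget(IPC_PRIVATE, want, IPC_CREAT | 0600);

        if (id == -1) {
            if (piece <= (size_t)64 * 1024 * 1024)
                return -1;          /* give up below 64MB pieces */
            piece /= 2;             /* retry with smaller segments */
            continue;
        }
        printf("got segment %d: %lu bytes\n", id, (unsigned long)want);
        got += want;
        /* a real server would shmat() and keep the segment; this test
         * just marks it for removal so it does not linger in ipcs     */
        shmctl(id, IPC_RMID, NULL);
    }
    return 0;
}

int main(void)
{
    /* ask for a 20GB "SGA" */
    return grab_sga((size_t)20 * 1024 * 1024 * 1024) ? EXIT_FAILURE : EXIT_SUCCESS;
}

Note that in Tim's case shmmax (28GB) is already larger than the 20GB SGA, so shmmax alone would not force the split he is seeing.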

HTH

Duncan

I am an HPE Employee
Tim Nelson
Honored Contributor

Re: Oracle SGA with multiple shared mem segments

I did check out the Metalink article and it does look like Oracle sees NUMA.

This and a ccNUMA video I watched do suggest disabling it if we are in a flat config (1 nPar, no vPars, ILM), but they also advise caution on disabling. There is also mention of increased benefits with 11iv3, but we cannot go there yet.


# kctune shmmax
Tunable  Value        Expression   Changes
shmmax   28000000000  28000000000  Immed


# ipcs -ma|grep oracle
m 5734405 0x00000000 --rw-rw---- oracle dba oracle dba 52 9665806336 8418 8876 9:21:44 9:21:44 11:15:36
m 262152 0x00000000 --rw-rw---- oracle dba oracle dba 52 9680453632 8418 8876 9:21:44 9:21:44 11:15:36
m 229385 0x00000000 --rw-rw---- oracle dba oracle dba 52 2166444032 8418 8876 9:21:44 9:21:44 11:15:36
m 229386 0xd9edaca0 --rw-rw---- oracle dba oracle dba 52 45056 8418 8876 9:21:44 9:21:44 11:15:36
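
(For reference, the three big segments sum to 9,665,806,336 + 9,680,453,632 + 2,166,444,032 = 21,512,704,000 bytes, i.e. just over 20GB, so the whole SGA is accounted for across them.)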

# mpsched -s
System Configuration
=====================

Locality Domain Count: 2
Processor Count : 8

Domain   Processors
------   ----------
0        1 2 3 4
1        0 5 6 7

Don Morris_1
Honored Contributor

Re: Oracle SGA with multiple shared mem segments

Might as well check the memory topology too, to see what Oracle sees.

Compile the attached with either +DD64 or +DD32 -D_PSTAT64, as you see fit, and run it with no arguments.
Tim Nelson
Honored Contributor

Re: Oracle SGA with multiple shared mem segments

Looks to be all interleaved (ILV), as expected.

--- System wide locality info: ---
index  ldom  physid  type  total  free   used
0      0     0       CLM   0      0      0
1      1     1       CLM   0      0      0
2      -1    -1      ILV   31G    7496M  24G
                           -----  -----  -----
                           31G    7496M  24G


I guess the question that still remains is...

Is Oracle seeing the system as NUMA a bad thing, even though all the memory is interleaved?

Don Morris_1
Honored Contributor

Re: Oracle SGA with multiple shared mem segments

From a raw latency point of view, it shouldn't really matter -- Oracle can bind processes/threads to processors and craft "local" segments -- but the segment cost would be the same either way.

It could hurt you a little in that a single larger SGA has a better chance to form large pages and hence large translations -- but in your segment sizes, it wouldn't be too bad [the two 9Gb if really 9Gb and not "about" 9Gb would be 2 4Gb pages and 1 1Gb page at best, the 2Gb would be 2 1Gb pages] so I doubt you're getting a real increase in TLB miss rates either.

More likely is that a non-NUMA scheduling policy would be better -- there's really not much reason for any bindings, and there may well be scheduling hotspots you could avoid.
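
Don's page arithmetic can be reproduced with a quick greedy decomposition. A sketch only: the page-size list below is an assumed subset of the sizes Itanium HP-UX can use, and what the kernel actually forms depends on alignment and free memory.

#include <stdio.h>

/* Greedily cover a segment with the largest translations first.
 * Page sizes listed are an assumed subset (4GB down to 1MB); real
 * behaviour depends on the kernel, alignment and available memory. */
int main(void)
{
    static const unsigned long long page[] = {
        4ULL << 30, 1ULL << 30, 256ULL << 20, 64ULL << 20,
        16ULL << 20, 4ULL << 20, 1ULL << 20
    };
    static const unsigned long long seg[] = { 9ULL << 30, 2ULL << 30 };
    int s, p;

    for (s = 0; s < 2; s++) {
        unsigned long long left = seg[s];
        printf("%lluGB segment:", seg[s] >> 30);
        for (p = 0; p < 7 && left > 0; p++) {
            unsigned long long n = left / page[p];
            if (n > 0) {
                printf("  %llu x %lluMB", n, page[p] >> 20);
                left -= n * page[p];
            }
        }
        printf("\n");
    }
    /* prints: 9GB segment:  2 x 4096MB  1 x 1024MB
               2GB segment:  2 x 1024MB             -- matching Don's figures */
    return 0;
}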