
High IPC/Sem Wait %

 
Tim D Fulford
Honored Contributor

High IPC/Sem Wait %

Hi

I have a system with processes showing high IPC & Sem Wait % (PROC_IPC_WAIT_PCT / PROC_SEM_WAIT_PCT in MeasureWare/glance). My first reaction was to check semaphores, and indeed I think semmnu & semume are too low (30 & 10).

o Does anyone have anything else I should check?
o Any suggested values for the two above? (I have seen suggestions ranging from 120 & 50 up to 4096 & 128!)
o Any other suggestions?
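
For reference, this is roughly how I have been checking the current settings and usage, assuming an 11.x box where kmtune and ipcs are available:

kmtune | grep -i sem     # current values of the sem* tunables
ipcs -sa                 # active semaphore sets, owners and permissions
ipcs -sb                 # shows nsems per set, to compare against semmns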

Cheers

Tim
-
7 REPLIES
Rita C Workman
Honored Contributor

Re: High IPC/Sem Wait %

Well, your values are set at the defaults.
Instead of just saying what I would suggest, you need to decide what is best for your shop.
May I suggest checking all your semm* parms and going over what your init.ora sets for processes.
You can check out these couple of threads, which should help you make sure all your semaphore parms are set properly and, of course, that you have the memory to run at those settings:
Doc on all parms:
http://docs.hp.com//hpux/onlinedocs/os/KCparams.OverviewAll.html
Thread on calculating semaphores:
http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x10ca6af52b04d5118fef0090279cd0f9,00.html

Hope this helps,
Rit
harry d brown jr
Honored Contributor

Re: High IPC/Sem Wait %


Without knowing how your processes work, it's a little difficult to calculate the proper semaphore requirements. Here is an example from one of my systems:

sema 1
semaem 16384
semmap 5200
semmni 2600
semmns 5200
semmsl 2048
semmnu 1300
semume 64
semvmx 32767
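
As a rough check of how close you are to those limits, something along these lines should total up current consumption (assuming the usual ipcs layout, where semaphore lines start with "s" and nsems is the last field):

ipcs -sb | awk '$1 == "s" { sets++; sems += $NF }
    END { print sets " semaphore sets (vs semmni), " sems " semaphores (vs semmns)" }'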

live free or die
harry
Live Free or Die
fg_1
Trusted Contributor
Solution

Re: High IPC/Sem Wait %

Tim

Here is a kernel example from one of my systems running 11.00/64bit on an L3000. This server runs our ob2 application only.

Tunable parameters

STRMSGSZ 65535
bufpages 0
create_fastlinks 1
dbc_max_pct 15
dbc_min_pct 15
max_thread_proc 256
maxfiles 2048
maxfiles_lim 2048
maxswapchunks 4096
maxuprc ((NPROC*9)/10)
maxusers 250
maxvgs 80
msgmap (MSGTQL+2)
msgmax 32768
msgmnb 65535
msgmni (NPROC)
msgseg (MSGTQL*4)
msgssz 128
msgtql (NPROC*10)
ncallout 2064
nfile (15*NPROC+2048)
nflocks (NPROC)
ninode (8*NPROC+2048)
nkthread 2048
nproc ((MAXUSERS*3)+64)
nstrpty 60
nstrtel (MAXUSERS)
nswapdev 25
semmni (NPROC*5)
semmns (SEMMNI*2)
semmnu (NPROC-4)
semume 64
semvmx 32768
shmmax 0X40000000
shmmni 512
shmseg 32
timeslice 1
unlockable_mem (MAXUSERS*10)

Just a guide, not a rule of thumb. The link that Rita put out there is excellent for explaining each parameter in detail and how they relate to each other.
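
Just to make the relative sizing concrete, with maxusers at 250 those formulas work out roughly as follows (my own arithmetic, assuming integer evaluation):

nproc   = (250*3)+64  = 814
semmni  = 814*5       = 4070
semmns  = 4070*2      = 8140
semmnu  = 814-4       = 810
maxuprc = (814*9)/10  = 732
nfile   = 15*814+2048 = 14258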

A. Clay Stephenson
Acclaimed Contributor

Re: High IPC/Sem Wait %

Hi:

I just noticed that Frank has obviously used one of the stupid tuned parameter sets for databases. While most of those are at least OK, the timeslice value of 1 rather than 10 is almost certainly a performance killer. Set it to 10 and leave it there.
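
For anyone keeping score: if I read the docs right, timeslice is counted in 10 ms clock ticks, so

timeslice 10  ->  100 ms quantum, at most ~10 forced switches per second per CPU
timeslice  1  ->   10 ms quantum, up to ~100 forced switches per second per CPU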
If it ain't broke, I can fix that.
fg_1
Trusted Contributor

Re: High IPC/Sem Wait %

Tim

I have been smitten, but Ann is absolutely correct in her assessment.

Thank you, Ann, for the correction; these were systems that I inherited not too long ago.
Tim D Fulford
Honored Contributor

Re: High IPC/Sem Wait %

Many thanks for the above.
Rita - I actually use Informix, but point taken, & thanks for the kernel param link.
Harry - Thanks; these values are (going by Rita's material) too high, but a nice starting point.
Frank - I'm not going to use all your values, but this is great as it puts in some equations & relative values.
Ann - Thank you. I read Doug Gruman's Performance Tuning cookbook & he reckons it is a bad idea to fiddle with the timeslice too.

Just out of interest
nproc 2048
semmns 4096
semmni (SEMMNS) [==> 4096]
semmap (SEMMNI+2) [==> 4098]
semmnu (NPROC-4) [==> 2044]
semume 128
semaem 32768
semvmx (SEMAEM) [==> 32768]

Cheers

Tim
-
Dragan Krnic
Frequent Advisor

Re: High IPC/Sem Wait %

An interesting development. I stumbled upon the same contradiction: HP suggests, through its template tuned system settings, that a much smaller timeslice is good for heavily loaded servers.

It even made some sense to me. Being able to schedule jobs 100 times a second instead of only 10, as these values seem to suggest, should allow finer, more equitable CPU sharing, unless of course the scheduling itself involves high overhead, which didn't seem likely, because in 1/100th of a second a good processor can execute 4-5 million instructions. Does a significant part of that go into the very operation of scheduling?
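
Putting rough numbers on my own question (the per-switch cost here is purely a guess for illustration, not a measured figure):

instructions available per 10 ms slice   ~ 4,000,000 - 5,000,000
guessed direct cost of one reschedule    ~ 10,000 instructions
direct overhead per slice                ~ 10,000 / 4,500,000, i.e. about 0.2%

So the straight arithmetic alone doesn't obviously explain a performance killer, which is exactly why I'm asking.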

I hope someone from HP comes forward and clears up the mess. Where I implemented it, nobody seemed to notice any appreciable difference. And then Ann comes along and absolutely deprecates the practice. What's the clue, Ann?
Why is scheduling 100 times a second instead of 10 a performance killer?