
Oracle and memory

I have an rp8400 with 40 GB of RAM, but the DBA is only using a quarter of the memory.

Q. Doesn't the philosophy "the more memory for Oracle, the happier it will be" still apply?

The DBA tells me that more memory for Oracle just means more memory for Oracle to manage, so it does not necessarily mean better performance. Does everyone agree with this analysis?
22 REPLIES
RAC_1
Honored Contributor

Re: Oracle and memory

There are limits to what makes a good setting, but in most cases, the more memory Oracle has, the better it performs.

What are your settings for the Oracle SGA and for shared memory (shmmax)?

What about the other kernel tunables?
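For reference, a minimal way to see what the instance is actually configured with (a sketch; run it as SYSDBA in SQL*Plus - on the HP-UX side, kmtune -q shmmax, or kctune shmmax on newer releases, shows the kernel limit):

-- Totals for the running instance:
SHOW SGA
-- Break it down by parameter (values are in bytes):
SELECT name, value
FROM   v$parameter
WHERE  name IN ('shared_pool_size', 'db_cache_size', 'db_block_buffers',
                'large_pool_size', 'java_pool_size', 'log_buffer');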
There is no substitute to HARDWORK
Steven E. Protter
Exalted Contributor
Solution

Re: Oracle and memory

A. Yes

Are you running 32-bit or 64-bit Oracle?

In general, without any known limits, Oracle will perform better with more memory.

If it's a small database and performance has already maxed out, you gain nothing by adding memory.

However, the situation in that last paragraph has never happened to anyone I know.

Assuming there are no long-term plans to increase your machine's workload, increasing certain parts of the Oracle SGA might still provide some performance benefit.

To analyze os performance:

http://www.hpux.ws/system.perf.sh

HP-UX only.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Eric Antunes
Honored Contributor

Re: Oracle and memory

Hi Stafford,

I fully agree with your DBA.

See Metalink Note 1012046.1 to calculate the shared_pool_size requirements.
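As a quick sanity check to go with that note, v$sgastat shows how much of the current shared pool is actually sitting free (a rough sketch; a consistently large free figure suggests the pool is oversized rather than undersized):

SELECT pool, name, ROUND(bytes/1024/1024) AS mb
FROM   v$sgastat
WHERE  pool = 'shared pool'
AND    name = 'free memory';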

Best Regards,

Eric Antunes
Each and every day is a good day to learn.
Jean-Luc Oudart
Honored Contributor

Re: Oracle and memory

Hi

From Oracle9i onward you have advisory utilities, and you should use them to tune your memory usage on the server.
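For example, with db_cache_advice set to ON, a query along these lines (a sketch against the 9i v$db_cache_advice view) shows the estimated physical reads at each candidate buffer cache size:

SELECT size_for_estimate AS cache_mb,
       estd_physical_read_factor,
       estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
AND    advice_status = 'ON'
ORDER  BY size_for_estimate;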

see attachment

Regards
Jean-Luc
fiat lux
Eric Antunes
Honored Contributor

Re: Oracle and memory

Hi again,

See Metalink Note 62143.1 (Understanding and Tuning the Shared Pool), which has the reference to Note 1012046.1...

Eric
Each and every day is a good day to learn.
Patti Johnson
Respected Contributor

Re: Oracle and memory

Depending on the size of the database and the number of users being supported, 10 GB may be enough memory for Oracle. If there are no performance issues and the user community is happy, then there is no reason to increase Oracle's memory usage.
There can be performance problems if the DBA just throws memory at a problem. For example, making the shared pool very large instead of pinning packages and using bind variables will not help performance (see the sketch below). Lots of memory for the buffer cache is no substitute for a well-tuned application that accesses few blocks to complete a query.
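For illustration, pinning a hot package so it is never aged out of the shared pool looks roughly like this (a sketch: DBMS_SHARED_POOL is created by $ORACLE_HOME/rdbms/admin/dbmspool.sql, and the package name below is just a placeholder):

-- Pin a heavily used package (placeholder name):
EXEC DBMS_SHARED_POOL.KEEP('APPS.SOME_HOT_PACKAGE')
-- Verify what is currently pinned:
SELECT owner, name, type
FROM   v$db_object_cache
WHERE  kept = 'YES';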

In general Oracle does like a lot of memory, but if the database is getting by with 10 GB, then it may already be performing well.

As a DBA I do like it when the Unix admin "wants" the database to have lots of memory :)

Patti

Re: Oracle and memory

Thanks for the information; it has been most helpful. The database is ~900 GB on Oracle 9i.

Re: Oracle and memory

Oops, 64-bit.
TwoProc
Honored Contributor

Re: Oracle and memory

10 GB is a pretty good size, unless you're still seeing very high I/O. In that case, EVEN if you've got a 97% hit ratio, increasing the size of db_block_buffers *could* greatly increase throughput. However, it may not. Increasing the size of the SGA is not NEARLY as effective as tuning the code that is doing lots of reads to and from disk. But if that has already been done to a reasonable level, then increasing the buffer cache could deliver great benefits - but then again, maybe not. You'd have to get some benchmarks, make the change, and see.
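For what it's worth, the classic way to compute that hit ratio yourself (a sketch against v$sysstat; the counters are cumulative since instance startup, so take deltas over a busy interval for a meaningful number):

SELECT 1 - phy.value / (cur.value + con.value) AS buffer_cache_hit_ratio
FROM   v$sysstat phy, v$sysstat cur, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    cur.name = 'db block gets'
AND    con.name = 'consistent gets';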

What people don't generally grasp is that when your buffer cache hit ratio is 97%, the remaining three percent of reads represents just about 70% - 90% of your disk I/O (excluding redo logs and archive logs). So, theoretically, a 1% increase in the hit ratio, from 97% to 98%, which looks small, represents as much as a 1/3 reduction in total I/O (again excluding redo and archive logs)!
From my experience this is largely true and accurate.
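To make the arithmetic concrete with an assumed round number: on 1,000,000 logical reads, a 97% hit ratio sends 30,000 reads to disk, while 98% sends only 20,000. That "one percent" improvement just eliminated a third of the physical reads.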

The problem is that the increase in SGA that got you from, say, 96% to 97% may cost you as little as 1 GB or so, while the jump from 97% to 98% may cost you 6 GB more, and the jump to 99% may cost another 15 GB!

In other words, it follows the law of diminishing returns. At the point where you've bought another 1% reduction in I/O, you've also taken on brand-new CPU load from maintaining, let's say, 6 GB more of 8 KB blocks - that's housekeeping and maintenance for 786,432 more blocks! Well, if you weren't CPU bound (per process and in total) before, then you may still come out much better off. Glance, or even better, PerfView, will tell you this, along with your DBA's help using statspack. The next question is whether the next increment of 1% or so, which would require even more blocks to achieve, still moves you forward.

Someone mentioned the shared pool. This is where the executing code lives, and it is vitally important that the cache hit ratio for this area remains high too, so make sure that gets checked. Keep in mind that a 1% increase here is MUCH easier to achieve, in terms of committed memory, than in db_buffer_cache. So check this and resolve it first - a rough query is sketched below.
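A quick way to check the library cache portion of the shared pool (a sketch; a ratio much below ~0.99, or a high reload count, means parsed code is being aged out and re-loaded):

SELECT SUM(pins - reloads) / SUM(pins) AS library_cache_hit_ratio
FROM   v$librarycache;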

Overall, I agree with the others who say more memory is better. In general it is, but it is best to know what you're doing and why. Then measure the system to make sure the intended effects are positive, because you could be trading your I/O problems for CPU ones.
We are the people our parents warned us about --Jimmy Buffett