Operating System - HP-UX

Re: dbc_max_pct considerations

 
Ed Loehr
Advisor

dbc_max_pct considerations

Scenario: an ia64 rx4640 running B.11.23 with 16 GB of RAM and 10-20 high-volume PostgreSQL database clusters, with total database volume far in excess of 16 GB.

Question: how best to set dbc_max_pct to maximize use of the 16 GB of RAM for data caching while giving priority to application memory demand?

The conventional wisdom in this forum seems to be to set the buffer cache to around 300 MB to avoid double buffering with the DB cache. However, the needs of the 10-20 clusters will be dynamic enough that it would be a major headache to manually balance the allocation of RAM for DB cache buffers across them. I'd much rather let the OS do it.

If DB processes and other admin processes can demand and receive RAM allocations at the expense of the OS cache, then I'd think the optimal setting would be a dbc_min_pct equivalent to 300 MB or so and dbc_max_pct = 99% (i.e., let the OS use as much RAM as is available for I/O caching, but give it all up if the apps call for it).
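(For reference, dbc_min_pct and dbc_max_pct take integer percentages of physical memory rather than byte counts, so the 300 MB floor has to be expressed as a percentage. A back-of-the-envelope sketch, using the 16 GB figure from the scenario above:

```python
# Convert a byte-size buffer cache floor into the percentage units that
# dbc_min_pct/dbc_max_pct actually take. The 16 GB figure comes from the
# scenario above; the tunables accept whole percentages of physical RAM.
import math

ram_mb = 16 * 1024          # 16 GB of physical RAM, in MB
floor_mb = 300              # desired minimum buffer cache, in MB

dbc_min_pct = math.ceil(floor_mb / ram_mb * 100)   # round up to a whole percent
print(dbc_min_pct)          # -> 2 (2% of 16 GB is about 328 MB)
```

So the proposal above amounts to dbc_min_pct = 2 and dbc_max_pct = 99 on this box.)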

Thoughts, suggestions, experiences?
11 REPLIES
Jeff Schussele
Honored Contributor
Solution

Re: dbc_max_pct considerations

Hi,

Absolutely do *not* do that. vhand & the buffer cache do not play well together. The buffer cache will take it all, & vhand will fight to get it back for the processes. Your system will spend far too much time mediating that battle. In fact, I remember reading that the buffer cache is far more aggressive at recapturing memory from processes than processes are when contending among themselves. This seems like a classic situation where you need to either: 1) install more memory, or 2) let the DBs cache themselves.
In my experience, cache access seems to slow down above 600-800 MB, & a lot of that is the kernel having to manipulate the cache table & the lengthening of lookups as the table grows.
In fact, the HP-UX 11i Tuning and Performance book reads on page 286:
/Quote
On typical large memory systems, the larger the size of physical memory, the larger the buffer cache that is desired. Buffer caches more than 500 MB in size may cause performance degradation, however, due to VAS allocation and overall management of the large number of buffers.
/EndQuote
To test that, all you need to do is start with a "normal" bufcache size & note the % cache hits, then keep increasing the bufcache size. At some point, no matter *how* big you make it, the cache hits will not increase any further, & beyond that point you're essentially wasting memory that the apps could use.
Another thing you might want to consider is a fixed cache size. But either way, you should have the DBs cache themselves for optimum performance.
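The plateau test above can be sketched as a loop over (bufcache size, hit rate) measurements that stops when a size increase stops buying hits. The numbers below are made up purely for illustration; on a real box the hit rates would come from measuring read-cache hits (e.g., with sar -b) under a representative workload at each size:

```python
# Illustrative sketch of the "grow the cache until hits plateau" test.
# The (size_mb, hit_pct) pairs are invented sample measurements; collect
# real read-cache hit rates at each bufcache size on an actual system.
measurements = [
    (200, 72.0),
    (400, 85.5),
    (600, 91.0),
    (800, 91.7),    # gain under 1 point: hits have plateaued
    (1000, 91.9),
]

PLATEAU_GAIN = 1.0  # stop when an increase buys < 1 percentage point of hits

def knee(samples, min_gain=PLATEAU_GAIN):
    """Return the cache size beyond which hit-rate gains fall below min_gain."""
    for (sz_prev, hit_prev), (_sz, hit) in zip(samples, samples[1:]):
        if hit - hit_prev < min_gain:
            return sz_prev
    return samples[-1][0]

print(knee(measurements))   # -> 600: memory beyond this is wasted on the cache
```

With these sample numbers, growing the cache past 600 MB buys less than a point of hit rate, so the extra RAM would be better left to the apps.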

My 2 cents,
Jeff
PERSEVERANCE -- Remember, whatever does not kill you only makes you stronger!
Ed Loehr
Advisor

Re: dbc_max_pct considerations


Thanks, Jeff. That's not the answer I'd hoped for, since it puts a significantly larger DBA burden on us than we've dealt with on our Linux boxes, where the OS will apparently use the vast majority of available RAM for disk caching but readily give it up to applications as needed. That lets us defer caching to the OS rather than constantly monitoring and adjusting DB cache sizes for 10-20 database clusters, which is no small task.

Kent Ostby
Honored Contributor

Re: dbc_max_pct considerations

I agree with Jeff's general suggestion (don't do this).

The other consideration is simply that with that much buffer cache, by the time vhand walks through it to find what should be swapped out, it will be time to start vhand again, and you essentially never get around to swapping things out.

I would still plan to set up as close to 2 x RAM's worth of device swap as your available disk space will allow.
"Well, actually, she is a rocket scientist" -- Steve Martin in "Roxanne"
Ed Loehr
Advisor

Re: dbc_max_pct considerations

Kent: 32GB of swap is a lot. Can you help me understand the tradeoff? Why is 2x better than 1x or 1.5x or 4x?
Ed Loehr
Advisor

Re: dbc_max_pct considerations


Just to make sure I'm not missing something big...

Suppose I have 16 GB of physical RAM, my apps and OS overhead are using 1 GB, and the OS has a fixed 500 MB buffer cache. If I then read 10 GB of disk data (a big table scan), only 500 MB of that will be cached, even though we have 14.5 GB of free RAM not being used for any other purpose? Am I missing something?
Jeff_Traigle
Honored Contributor

Re: dbc_max_pct considerations

I guess I'm not really understanding what the problem is. My experience is with the Oracle RDBMS, but I imagine PostgreSQL is pretty similar in its use of its own cache buffers.

The entire point of the low data buffer cache setting rule of thumb is to avoid double-buffering, allowing the database to do its own data caching without the OS majorly sucking up RAM buffering the same stuff unnecessarily. In your last post, you seem unclear about how this works...

If your OS data buffer cache is set to 500 MB and the database reads in 10 GB, the OS will only cache 500 MB of it, but the database itself is going to cache whatever its SGA (or whatever the PostgreSQL equivalent term may be) is configured to allow. You'll see it at the OS level as user memory usage (as opposed to system or data buffer usage). You won't end up with 14.5 GB of memory unutilized.

In short... let the database do its thing and keep the OS out of its way as much as possible. :)
--
Jeff Traigle
Ed Loehr
Advisor

Re: dbc_max_pct considerations


Once again, my concern is the administrative headache of keeping the memory allocations among 10-20 clusters appropriately balanced. The memory needs of our clusters change over time, not always with our foreknowledge. That means there may be need for reallocations of RAM for DB caching, and for 10-20 DB clusters, that means 10-20 configuration file settings to coordinate. One OS setting is far easier to manage than 10-20 cluster configuration settings. Furthermore, how shall I optimally divide the RAM among these 10-20 clusters? That's a very hard question for a dynamic set of databases, but one that has been worked on for decades in service of OS-level caching. OS-level caching is preferable precisely so one doesn't have to mediate caching among multiple applications. Can't say it any clearer than that.
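To put rough numbers on that balancing chore: splitting a fixed RAM budget across N clusters is easy arithmetic, but every rebalance turns into N config edits. A hypothetical sketch (PostgreSQL's per-cluster cache knob is shared_buffers in each cluster's postgresql.conf; the 12 GB budget and the weights below are invented for illustration):

```python
# Hypothetical illustration of the per-cluster balancing act described
# above. Each cluster's cache is set via shared_buffers in its own
# postgresql.conf; the budget and relative weights here are made up.
budget_mb = 12 * 1024        # RAM earmarked for DB caching, say 12 GB
weights = {                  # relative demand per cluster (hypothetical)
    "cluster01": 4,
    "cluster02": 2,
    "cluster03": 1,
    "cluster04": 1,
}

total = sum(weights.values())
for name, w in sorted(weights.items()):
    mb = budget_mb * w // total
    # each printed line corresponds to one postgresql.conf to edit
    print(f"{name}: shared_buffers = {mb}MB")
```

Every time demand shifts, the weights change, and all N files need re-editing (and the clusters restarting for shared_buffers changes to take effect), which is exactly the coordination cost at issue; an OS-level cache would absorb those shifts automatically.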

As to my "14.5 GB of free mem" scenario, no confusion here. You changed my premise by adding DB caching, which allows you to escape the core question: Is HP-UX B.11.23, like Linux, able to utilize free memory for I/O buffering when such memory was not explicitly allocated via dbc_min/max_pct for that purpose? I guess the answer is no.

Regardless, I hear your view: my choices are to get more memory or to manually manage memory allocations across 10-20 DB configurations. Both of those options look pretty wasteful and inefficient. :(
Ed Loehr
Advisor

Re: dbc_max_pct considerations


JT: It also occurs to me there may be confusion over terminology: a PostgreSQL "cluster" is a single master DB server process, its subservient processes, and the database(s) served up by those server processes. Not sure if "cluster" has the same meaning in Oracle, but if I only had one PostgreSQL cluster, then yes, life would be simple here: just set the cluster config and go. But there are a lot of obvious reasons why one might want multiple independent clusters on a single box (separate customers, independent upgrades, easy load migration, etc.).
A. Clay Stephenson
Acclaimed Contributor

Re: dbc_max_pct considerations

Databases in excess of 16 GB are tiny by today's standards. I think you are going to find that the gains for buffer cache beyond about 2 GB are very small. My experience has been with Oracle, but I find the "double-buffering" in HP-UX 11.11 and up to be a non-issue. Oracle almost always performs better now with fully cooked I/O and fairly generous buffer caches. Beyond a certain buffer cache size, the search times within the cache can approach or exceed the time required to get the data from the disk, especially if it's a disk array equipped with lots of cache itself. In that case, you find that you are triply buffered. In any event, as an absolute maximum, I would set dbc_max_pct to 20%, and that is very generous. Of course, the only true test is to measure.
If it ain't broke, I can fix that.