dbc_max_pct considerations
08-18-2004 07:10 AM
Question: How best to set dbc_max_pct to maximize use of the 16 GB of RAM for data caching while giving priority to application memory demand?
The conventional wisdom in this forum seems to be to set the buffer cache to around 300 MB to avoid double-buffering with the DB cache. However, the needs of the 10-20 clusters will be dynamic enough that manually balancing the allocation of RAM for DB cache buffers across them would be a major headache. I'd much rather let the OS do it.
If DB processes and other admin processes can demand and receive RAM allocations at the expense of the OS cache, then I'd think the optimal setting would be a dbc_min_pct equivalent to roughly 300 MB and dbc_max_pct = 99% (i.e., let the OS use as much RAM as is available for I/O caching, but give it all up if the apps call for it).
Thoughts, suggestions, experiences?
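For what it's worth, since dbc_min_pct/dbc_max_pct are expressed as percentages of physical RAM, the arithmetic for the sizes above works out like this (a quick sketch in Python, purely for illustration):

```python
def dbc_pct_for_target(ram_gb: float, target_mb: float) -> float:
    """Percent of physical RAM that yields a target buffer-cache size."""
    return 100.0 * target_mb / (ram_gb * 1024)

# A ~300 MB floor on a 16 GB box:
print(round(dbc_pct_for_target(16, 300), 2))  # 1.83 -> a dbc_min_pct of about 2
```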
Solved!
08-18-2004 07:31 AM
Absolutely do *not* do that. vhand and the buffer cache do not play well together. The buffer cache will take it all, and vhand will fight to get it back for the processes. Your system will spend far too much time mediating that battle. In fact, I remember reading that the buffer cache is far more aggressive at recapturing memory than processes competing among themselves are. This seems like a classic situation where you need to either: 1) install more memory, or 2) let the DBs cache themselves.
In my experience, cache access seems to slow down above 600-800 MB, and a lot of that is the kernel having to manipulate the cache table and the lengthening of lookups as the table grows.
In fact, the HP-UX 11i Tuning and Performance book reads, on page 286:
"On typical large memory systems, the larger the size of physical memory, the larger the buffer cache that is desired. Buffer caches more than 500 MB in size may cause performance degradation, however, due to VAS allocation and overall management of the large number of buffers."
To test that, all you need to do is start with a "normal" bufcache size and note the % cache hits, then keep increasing the bufcache size. At some point, no matter *how* big you make it, the cache hits will not increase any further, and past that point you're essentially wasting memory that the apps could use.
Another thing you also might want to consider is using a fixed cache size. But either way, you should have the DBs cache themselves for optimum performance.
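A tiny sketch of that test procedure, with made-up hit-rate numbers (on a real system you'd read the hit percentages from something like sar -b at each bufcache size):

```python
# Hypothetical measurements: (buffer cache size in MB, % read-cache hits)
samples = [(200, 82.0), (400, 90.5), (600, 94.0), (800, 94.3), (1000, 94.4)]

def knee(samples, min_gain=1.0):
    """Smallest cache size after which the hit rate improves by < min_gain points."""
    for (size, hits), (_, next_hits) in zip(samples, samples[1:]):
        if next_hits - hits < min_gain:
            return size
    return samples[-1][0]

print(knee(samples))  # 600 -> growing the cache past ~600 MB buys almost nothing
```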
My 2 cents,
Jeff
08-18-2004 08:28 AM
Re: dbc_max_pct considerations
Thanks, Jeff. That's not the answer I'd hoped for, since it puts a significantly larger DBA burden on us than we've dealt with on our Linux boxes, where the OS will apparently use the vast majority of available RAM for disk caching but readily give it up to applications as needed. That lets us defer caching to the OS rather than constantly monitoring and adjusting DB cache sizes for 10-20 database clusters, which is no small task.
08-18-2004 08:41 AM
Re: dbc_max_pct considerations
The other consideration is simply that if you have that much buffer cache, by the time vhand scans through it to find what should be swapped out, it will be time to start vhand again, and you essentially never get around to swapping anything out.
I would still plan to set up as close to 2 x RAM's worth of device swap as your available disk space will allow.
08-18-2004 08:56 AM
Re: dbc_max_pct considerations
08-18-2004 09:13 AM
Re: dbc_max_pct considerations
Just to make sure I'm not missing something big...
Suppose I have 16 GB of physical RAM, my apps and OS overhead are using 1 GB, and the OS has a fixed 500 MB buffer cache. If I then read 10 GB of disk data (a big table scan), only 500 MB of that will be cached, even though there is 14.5 GB of free RAM not being used for any other purpose? Am I missing something?
08-18-2004 09:49 AM
Re: dbc_max_pct considerations
The entire point of the small-buffer-cache rule of thumb is to avoid double-buffering: let the database do its own data caching without the OS eating up RAM buffering the same data unnecessarily. In your last post, you seem unclear about how this works...
If your OS data buffer is set to 500 MB and the database reads in 10 GB, the OS will only cache 500 MB of it, but the database itself is going to cache whatever its SGA (or whatever the PostgreSQL equivalent term may be) is configured to allow. You'll see it at the OS level as user memory usage (as opposed to system or data buffer usage). You won't end up with 14.5 GB of memory unutilized.
In short... let the database do its thing and keep the OS out of its way as much as possible. :)
Jeff Traigle
08-18-2004 11:34 AM
Re: dbc_max_pct considerations
Once again, my concern is the administrative headache of keeping the memory allocations among 10-20 clusters appropriately balanced. The memory needs of our clusters change over time, not always with our foreknowledge, so RAM for DB caching may need to be reallocated. For 10-20 DB clusters, that means 10-20 configuration file settings to coordinate. One OS setting is far easier to manage than 10-20 cluster configuration settings. Furthermore, how shall I optimally divide the RAM among these 10-20 clusters? That is a very hard question for a dynamic set of databases, but one that has been worked on for decades in service of OS-level caching. OS-level caching is preferable precisely so one doesn't have to mediate caching among multiple applications. Can't say it any clearer than that.
As to my "14.5 GB of free mem" scenario, no confusion here. You changed my premise by adding DB caching, which allows you to escape the core question: Is HP-UX B.11.23, like Linux, able to utilize free memory for I/O buffering when such memory was not explicitly allocated via dbc_min/max_pct for that purpose? I guess the answer is no.
Regardless, I hear your view: my choices are to get more memory or to manually manage memory allocations across 10-20 DB configurations. Both of those options look pretty wasteful and inefficient. :(
08-18-2004 11:53 AM
Re: dbc_max_pct considerations
JT: It also occurs to me there may be confusion over terminology: a PostgreSQL "cluster" is a single master DB server process, its subordinate processes, and the database(s) served up by those processes. I'm not sure whether "cluster" has the same meaning in Oracle, but if I had only one PostgreSQL cluster, then yes, life would be simple here: just set the cluster config and go. But there are a lot of obvious reasons why one might want multiple independent clusters on a single box (separate customers, independent upgrades, easy load migration, etc.).
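For the record, the kind of per-cluster split I'm trying to avoid maintaining by hand looks something like this (an even split is a naive, hypothetical policy; the reserve size and cluster counts are illustrative):

```python
def shared_buffers_per_cluster(ram_gb: float, clusters: int,
                               reserve_gb: float = 2.0) -> int:
    """Evenly split the RAM left after an OS/overhead reserve across N clusters.

    Returns the per-cluster cache budget in MB.
    """
    usable_mb = (ram_gb - reserve_gb) * 1024
    return int(usable_mb // clusters)

# 16 GB box, 2 GB reserved for the OS, 10 clusters:
print(shared_buffers_per_cluster(16, 10))  # 1433 MB per cluster's cache
```

And every time a cluster is added, removed, or changes workload, all of those settings have to be revisited.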
08-18-2004 03:52 PM
Re: dbc_max_pct considerations
I find that the gains for buffer cache beyond about 2 GB are very small. My experience has been with Oracle, but I find the "double-buffering" in HP-UX 11.11 and up to be a non-issue. Oracle almost always performs better now with fully cooked I/O and fairly generous buffer caches. Beyond a certain buffer cache size, the search times within the cache can approach or exceed the time required to get the data from disk, especially if it's a disk array equipped with lots of cache itself. In that case, you find that you are triply buffered. In any event, as an absolute maximum, I would set dbc_max_pct to 20%, and that is very generous. Of course, the only true test is to measure.