
Memory Question

 
SOLVED
Terry Gibbar
Advisor

Memory Question

Would someone please explain to me how the physical memory on a server is allocated? This server has 2.5 GB of memory, but only 82.1 MB shows as Free Mem. Why would 1.18 GB show up as Buffer Cache? Should the buffer cache be counted as free memory? You probably already knew it, but the numbers listed below are from glance.

Total VM : 749.9mb Sys Mem : 299.1mb User Mem: 972.1mb Phys Mem: 2.50gb
Active VM: 294.9mb Buf Cache: 1.18gb Free Mem: 82.1mb

Inside Pitch, Gone.
4 REPLIES
Andy Monks
Honored Contributor
Solution

Re: Memory Question

The buffer cache is used for caching file data. Its size range is controlled by four kernel parameters (nbuf, bufpages, dbc_max_pct, dbc_min_pct).

nbuf/bufpages are the old way to set and control the buffer cache. Normally, though, people use the dbc pair instead; these set the minimum and maximum percentage of physical memory the cache may occupy. The default dbc_max_pct is 50, which is normally quite high, although depending on your application it may be fine. For example, Oracle uses its own equivalent of the buffer cache (the SGA), so allocating that memory to the system buffer cache is pointless. Normally, on large-memory systems, a buffer cache of 10-20% is more than enough.
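A quick back-of-the-envelope sketch of what those percentages mean in bytes. The numbers below assume the original poster's 2.5 GB box and the stock HP-UX defaults (dbc_min_pct=5, dbc_max_pct=50); note that the observed 1.18 GB Buf Cache figure sits just under the 50% ceiling this computes, which is what you'd expect from an unchanged default.

```shell
# Hypothetical sizing check: dbc_min_pct/dbc_max_pct bound the dynamic
# buffer cache as a percentage of physical memory.
phys_mb=2560          # 2.5 GB of physical memory, in MB
dbc_min_pct=5         # stock HP-UX default minimum
dbc_max_pct=50        # stock HP-UX default maximum
min_mb=$(( phys_mb * dbc_min_pct / 100 ))
max_mb=$(( phys_mb * dbc_max_pct / 100 ))
echo "buffer cache floats between ${min_mb}MB and ${max_mb}MB"
```

On this box that prints a range of 128MB to 1280MB, so a cache of 1.18 GB is entirely consistent with the default settings.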

The system will reduce the size of the buffer cache if under memory pressure. Now as you've got 70+MB free, it's obviously not having problems with memory.

Hope that helps.
Tim Malnati
Honored Contributor

Re: Memory Question

Andy has given you some good info, but here are some additional thoughts.

How the buffer cache is used has a very significant impact on the machine. If the data is used primarily for read operations, the cache has a very beneficial effect: retrieval for the application takes drastically less time than going to disk. As Andy indicated, memory pressure will cause the cache to shrink by dumping data, making those pages available for system or user allocation. This is quick, and for the most part the cache can be considered an extension of your free memory (with the exception of the minimum percentage).

The same is not true if you do a large amount of write activity on your system. In this situation, changed data (pages with the dirty bit set) has to be written back to disk before that memory can become available for other needs. The syncer scans the buffer cache every 30 seconds and writes dirty data out to disk as a normal housekeeping activity. But the syncer runs at a real-time priority, so if there is a large amount of data to flush, user processes will wait. On a single-processor system, this will make the machine feel like it has just burped. Believe me when I say the potential is there to bring the machine to its knees if the disk bottleneck gets out of hand. Watching the sync process in glance will give you a good feel for what is going on.

The fact that you show 82MB free suggests that either your system has not filled the buffer cache yet or your kernel is set up with far less than the default 50% dbc_max_pct. HP does not recommend a buffer cache much above 200MB, but my experience has shown that on a system with limited write activity you can build a real race horse with larger sizes. Having plenty of room in the cache for all the reference tables of your RDBMS can drastically reduce disk activity. In these days of large-scale disk storage units (like XP256s and EMCs) that share disk resources, limiting disk resource needs has a more profound effect on the entire floor. In the situation I just described, most of the production day's write activity is actual transactions as they happen.

If anyone is watching this thread from the lab, I would suggest that being able to change dbc_min_pct and dbc_max_pct dynamically is a logical next step. Having the ability to reduce the buffer size when you know there will be a lot of write activity would be really beneficial. A good example is a data warehouse during monthly loads.
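For reference, the dbc pair can be inspected and staged from the command line with kmtune on HP-UX 11.x. This is only a sketch; on the releases discussed here these tunables are static, so a change set this way still requires a kernel rebuild and reboot to take effect.

```shell
# Query the current buffer cache bounds (HP-UX 11.x).
kmtune -q dbc_min_pct
kmtune -q dbc_max_pct

# Stage a smaller maximum (e.g. ~15% of RAM) in the kernel
# configuration; a kernel rebuild and reboot are still needed.
kmtune -s dbc_max_pct=15
```

Until the tunables become dynamic, scheduling the change around a maintenance window is the only option.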
Dave Wherry
Esteemed Contributor

Re: Memory Question

I'll add a little more to the information Andy and Tim already posted. Specifically Andy's comments on Oracle and buffer cache.
He is absolutely correct that Oracle buffers in the SGA and therefore the system buffer cache is not needed. I should say may not be needed, because I got burned on this.

I have a V2500 with 8GB of memory. The default dbc_max_pct was never changed; it was still at 50%, meaning I had 4GB allocated to buffer cache. When I found this I decreased it to 15% and took a terrible performance hit, along the lines of what Tim was talking about.

The problem was how my JFS file systems were mounted. The key is to use the mincache=direct option when mounting them. This bypasses the buffer cache and writes directly to the disks; the writes had already been buffered in the SGA, so there was no need to buffer them again. I did not have enough buffer cache, and vhand, or the syncer, was killing me just like Tim said.
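For anyone setting this up, the mount option goes in /etc/fstab for the VxFS (JFS) file systems that hold the database files. The device and mount point names below are made up for illustration; convosync=direct is commonly paired with mincache=direct so that synchronous writes bypass the cache as well.

```shell
# /etc/fstab entry (hypothetical volume and mount point):
# device                    mount point  type  options
/dev/vg01/lvol1  /oradata  vxfs  delaylog,mincache=direct,convosync=direct  0 2
```

With the SGA doing the caching, this keeps the same data from sitting in memory twice.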

Leave nbuf and bufpages set to 0 and the system will dynamically allocate buffer cache within the dbc_max and dbc_min limits. The moral is to also look beyond the kernel parameters.

Since you already use Glance, type t to go to the system tables screen and f to go to the second page. There you will see how much memory is allocated for the buffer cache, what is currently used, and the high-water mark. If your Used matches your Buffer Cache Max, check those file system mounts and try to get away from the double buffering. There is a very good article on this in the May/June Interex magazine.
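Outside of Glance, a quick way to judge whether the cache is earning its keep is the buffer-activity report from sar. This is just a sketch of the invocation; interpret the ratios against your own workload.

```shell
# Buffer cache activity: 5 samples at 5-second intervals.
# %rcache and %wcache are the read and write cache hit ratios;
# a high %rcache with Used near Buffer Cache Max suggests the
# cache is doing useful work, while a low %wcache under heavy
# write load points at the flushing problem Tim described.
sar -b 5 5
```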

Tim, good call on making the dbc parameters dynamic. During the day I have 500,000 reads to every write. At night, with batch jobs and uploads my writes are greater than the reads. I can't wait for a truly dynamically tunable kernel.

Terry Gibbar
Advisor

Re: Memory Question

ALL, EXCELLENT Information!!!!!!!!

Thanks a bunch,
Terry
Inside Pitch, Gone.