HPE 9000 and HPE e3000 Servers

performance questions

 
SOLVED
Vball23
Advisor

performance questions

I am moving my 2-node rp8400 cluster of Oracle databases to a 7-node (partitioned) Itanium cluster. I am setting up the Itanium partitions now and researching performance tuning. So far, I have come up with a few questions.

A performance cookbook I found, written by a few HP engineers, covers these points:

It asks whether access on the filesystem is primarily random or sequential, and says this about using the buffer cache:

When access is primarily random, any read-ahead I/O performed by the buffer cache routines is "wasted": logical read requests will invoke routines that will look through buffer cache and not get hits. Then, performance degradation results because a physical read to disk will be performed for nearly every logical read request. When mincache=direct is used, it causes the routines to bypass buffer cache: I/O goes directly from disk to the process's own buffer space, eliminating the middle steps of searching the buffer cache and moving data from the disk to the buffer cache, and from there into the process memory. If mincache=direct is used when read patterns are very sequential, you will get hammered in the performance arena, because very sequential reading will take big advantage of read ahead in the buffer cache, making logical I/O wait less often for physical reads. You want much more logical than physical reading for performance (when patterns are sequential).
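For reference, the mincache=direct option the cookbook describes is set per filesystem at mount time on VxFS. A minimal sketch follows; the device path and mount point are placeholders, not taken from the original post:

```shell
# Hypothetical VxFS mount with direct I/O options, bypassing the HP-UX
# buffer cache for this filesystem (device and mount point are examples).
mount -F vxfs -o delaylog,mincache=direct,convosync=direct \
      /dev/vg01/lv_oradata /u01/oradata

# Equivalent /etc/fstab entry:
# /dev/vg01/lv_oradata /u01/oradata vxfs delaylog,mincache=direct,convosync=direct 0 2
```

Remounting without these options (and enlarging the buffer cache) restores fully buffered I/O, which is the trade-off the cookbook's exception case describes.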

Here is an example of the exception to the rule: We have seen special cases such as a large, 32-bit Oracle application in which the amount of shared memory limited the size of the SGA, thus limiting the amount of memory allocated to the buffer pool space; and (more important) Oracle was found to be reading sequentially 68 percent of the time! When the mincache=direct option was removed (and the buffer cache enlarged), the number of physical I/Os was greatly reduced, which increased performance substantially. Remember, this was a specific, unique, pathological case; often experimentation and/or research is required to know if your system/application will behave this way.

From your experience, would these systems benefit more from using the buffer cache, or from disabling it?

Also, each partition will have 16 GB of RAM and is currently configured like this:

# swapinfo -tam
             Mb     Mb     Mb   PCT  START/     Mb
TYPE      AVAIL   USED   FREE  USED   LIMIT RESERVE PRI NAME
dev        4096      0   4096    0%       0       -   1 /dev/vg00/lvol2
reserve       -    435   -435
memory    16367   2106  14261   13%
total     20463   2541  17922   12%       -       0   -

There is enough disk space to increase swap to whatever. The cookbook mentions this:

swapmem_on

This trick to enable pseudo swap is used to increase the amount of reservable virtual memory. It's only useful when you can't configure as much swap as you need. For example, say you have more physical memory installed than you have disk available to use as swap: in this case if pseudo swap is not turned on, you'll never be able to use all the memory you have installed. The problem is, managing pseudo swap takes up some memory itself, and can slow performance! We recommend you set this to 0 unless you have a boatload of memory and not enough disk available for allocating to swap.
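For what it's worth, swapmem_on is a kernel tunable, so checking or changing it is done with the kernel tuning tools. A hypothetical session (kmtune on HP-UX 11.11; kctune on 11.23 and later):

```shell
# Query and set the swapmem_on tunable (values shown are illustrative).
kmtune -q swapmem_on        # query current value on 11.11
kmtune -s swapmem_on=0      # set it; rebuild the kernel and reboot to apply

kctune swapmem_on           # query on 11.23 and later
kctune swapmem_on=0         # set on 11.23 and later
```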

So, is this telling me that I should increase the disk swap to 32 GB and disable swapmem_on? Or should I increase disk swap and leave swapmem_on=1?




4 REPLIES
DCE
Honored Contributor

Re: performance questions

I would leave pseudo swap on. Pseudo swap allows you to reserve swap space without it actually being there. With 16 GB of memory you should not ever have to actually use swap. I would set up between 8 and 12 GB of physical swap. Keep an eye on the swap space with swapinfo. If you are actually using all of the 8-12 GB of swap, you can easily extend it. But if you are actually using that much swap, as opposed to just reserving it, your system performance is really going to suffer.
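Extending device swap later, as suggested above, is straightforward with LVM. A sketch, with a hypothetical volume group, size, and LV name:

```shell
# Add a secondary device swap area if the primary fills up
# (volume group, size, and LV name are examples, not from this thread).
lvcreate -L 4096 -n lvswap2 /dev/vg00   # carve out a 4 GB logical volume
swapon /dev/vg00/lvswap2                # enable it as device swap
swapinfo -tam                           # confirm the new totals
```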
rick jones
Honored Contributor

Re: performance questions

I'm a "networking" rather than a DB guy, but from what I've read and heard from the DB guys with whom I have lunch, it seems that if you have 64-bit Oracle you'd do just as well with a larger Oracle SGA (I think that is the term) as with the filesystem buffer cache. Presumably, Oracle will know what it needs to keep and what it needs to cast out better than the filesystem does.
there is no rest for the wicked yet the virtuous have no pillows
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: performance questions

Up until 11.11, I was able to notice (and measure) better Oracle performance with the mincache=direct,convosync=direct mount options, but at 11.11 and beyond the numbers reversed and fully cooked I/O gives the better performance. I think you will find that buffer caches up to about 1200-1600 MB will do well, although 800 MB is still generous. I actually prefer a fixed buffer cache (by setting bufpages to a non-zero value) to a dynamic buffer cache. You will also find that large SGAs (up to several GB) perform well.

You should really think of pseudo swap as nothing more than kernel bookkeeping. Up to 75% of the box's physical memory can be counted as swap space (although it's never used for that purpose). As long as you have 0.25 x RAM configured as swap space, your machine can utilize all of its memory with swapmem_on=1. Most of my boxes with large amounts of memory are configured this way, and they never swap. That's why I bought all that memory in the first place. The old rules about 2-3x RAM (4-6x if mirrored) as swap space are literally decades old.
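That sizing rule can be checked with a little arithmetic; the 16 GB figure matches the partitions in the question, and the 75% pseudo-swap cap is from the reply above:

```shell
# With swapmem_on=1, up to 75% of physical memory is counted as pseudo
# swap, so 0.25 x RAM of device swap makes the full RAM reservable.
ram_mb=16384                        # 16 GB partition
pseudo_mb=$((ram_mb * 75 / 100))    # 12288 MB counted as pseudo swap
dev_mb=$((ram_mb / 4))              # 4096 MB of device swap
total_mb=$((pseudo_mb + dev_mb))    # reservable total
echo "$total_mb MB reservable"      # matches the 16384 MB of RAM
```

So a 4 GB device swap area, as in the swapinfo output above, already lets a 16 GB partition reserve against all of its memory.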

If it ain't broke, I can fix that.
Vball23
Advisor

Re: performance questions

Thx for the info. It was extremely helpful, especially the info from A. Clay.

Thanks again.