Optimizing shared memory settings for multiple Sybase instances

Frequent Advisor

Optimizing shared memory settings for multiple Sybase instances


I have an L3000 server running HP-UX 11.11 with 4 GB of memory and multiple Sybase instances.

So, what is the best value of shmmax for better performance? Currently shmmax is set to 3 GB; this is the output of ipcs -mob:

m 0 root 348
m 1 root 61760
m 2 root 8192
m 3 root 1048576
m 3076 sybase 874766336
m 5 sybase 252899328
m 518 sybase 520687616
m 519 sybase 75747328
m 2056 sybase 65536
m 2057 aiiadm 725752
m 1546 aiiadm 2936
m 11 aiiadm 1142429
m 524 aiiadm 104190
m 24589 root 1052672
m 14 www 184324
m 1039 oaccengr 626400
m 16 oaccengr 13803061
m 529 oaccengr 101944
m 530 oaclocgr 851400
m 19 oaclocgr 34309813
m 532 oaclocgr 194008
m 1045 sybase 65540
m 23 sybase 65540

The sum of all segments is about 1.7 GB, so I think I could increase the segment sizes for Sybase until I reach about 3 GB.

What do you think?
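For what it's worth, the 1.7 GB figure can be checked mechanically. The sketch below sums the byte sizes in the fourth column of ipcs -mob output; the listing from the post stands in for the live output so the snippet is self-contained:

```shell
# Sum the segment-size (4th) column of `ipcs -mob` output.
# On the live box you would pipe real output instead:
#   ipcs -mob | awk '$1 == "m" { total += $4 } END { ... }'
ipcs_sample='m 0 root 348
m 1 root 61760
m 2 root 8192
m 3 root 1048576
m 3076 sybase 874766336
m 5 sybase 252899328
m 518 sybase 520687616
m 519 sybase 75747328
m 2056 sybase 65536
m 2057 aiiadm 725752
m 1546 aiiadm 2936
m 11 aiiadm 1142429
m 524 aiiadm 104190
m 24589 root 1052672
m 14 www 184324
m 1039 oaccengr 626400
m 16 oaccengr 13803061
m 529 oaccengr 101944
m 530 oaclocgr 851400
m 19 oaclocgr 34309813
m 532 oaclocgr 194008
m 1045 sybase 65540
m 23 sybase 65540'
total_gb=$(echo "$ipcs_sample" |
  awk '$1 == "m" { total += $4 } END { printf "%.2f", total / (1024 * 1024 * 1024) }')
echo "total = $total_gb GB"   # prints: total = 1.66 GB
```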

James Murtagh
Honored Contributor

Re: Optimizing shared memory settings for multiple Sybase instances

Hi Claudio,

Changing shmmax will bring no performance benefits that I am aware of; it's simply an address space limit. Your largest single shared memory segment is just under 900 MB, so make sure to keep shmmax above that. However, I think shmmax is dynamic on 11i, so you won't have to reboot if you change it.
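For reference, on 11.11 the kernel tunables can be queried and changed with kmtune. This is only a sketch; the exact flags (in particular whether -u can apply the change to the running kernel for a given tunable) should be checked against the kmtune(1M) man page on your system:

```shell
# Query the current and planned values of shmmax (a sketch; verify flags
# on your own system).
kmtune -q shmmax

# Stage a new value, here 0xC0000000 (3 GB).  On 11i, dynamic tunables
# can be applied to the running kernel with -u; static ones still require
# a kernel rebuild and reboot.
kmtune -u -s shmmax=0xC0000000
```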
A. Clay Stephenson
Acclaimed Contributor

Re: Optimizing shared memory settings for multiple Sybase instances

The setting of shmmax has little to do with performance. It does not limit the global shared memory space but rather the size of any one segment. Of course, if shmmax were too small then you would limit performance by not allowing any one segment to be large enough to properly cache data, for example. The main purpose of shmmax is to keep rogue processes from grabbing all the resources of the machine. You can set shmmax to a large value; the ACTUAL shared memory usage is controlled by the Sybase (or Oracle, Informix, ...) configuration files (up to the limits imposed by shmmax).

You really have to search for a balance of resources. For example, you could make your Sybase shared memory usage so large that processes are starved for memory and the machine has no option but to start swapping --- and that has a much worse effect upon the system than almost anything else.

As you tune, check the output of Glance. You can also run vmstat and look at the po (page-out) values. If that value rises above very, very small numbers then you have gone too far. You always want to leave some headroom so that swapping is avoided.
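A small sketch of watching that column: this locates "po" in the vmstat header by name rather than by position, so it is not tied to one column layout. The sample output below is made up to stand in for a live `vmstat 5 5` run:

```shell
# Pull the po (page-out) column out of vmstat output by header name.
# On the live box:  vmstat 5 5 | awk '...'
# The sample below is a hypothetical stand-in for live output.
vmstat_sample='procs     memory            page              faults       cpu
 r b w   avm   free  re at pi po fr de sr  in  sy  cs us sy id
 2 0 0 123456  8901   5  2  0  0  0  0  0 220 450 120 10  5 85'
po_line=$(echo "$vmstat_sample" | awk '
  !col { for (i = 1; i <= NF; i++) if ($i == "po") { col = i; next } }
  col  { print "po =", $col }')
echo "$po_line"   # prints: po = 0
```

Sustained non-zero values here mean the machine has started paging out and the shared memory areas have been pushed too large.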

You might also find that reducing the buffer cache frees up more memory for database shared memory, which is where databases really like to do their caching.

By the way, 4GB is really considered a rather small amount of memory these days for database servers.

If it ain't broke, I can fix that.
Mark Greene_1
Honored Contributor

Re: Optimizing shared memory settings for multiple Sybase instances

You ought to be able to get documentation, either from Sybase or from the vendor who sold you the application, that specifies recommended kernel parameter settings for your Sybase products and versions and the version of HP-UX you are running.

the future will be a lot like now, only later
Bill Hassell
Honored Contributor

Re: Optimizing shared memory settings for multiple Sybase instances

Sybase, like Oracle, can benefit by having a larger shared memory area for each instance. That is NOT automatic though--the DBA must define the size and features that will be placed in the shared memory area.

HOWEVER, if Sybase is running 32bit code, you will have a constant battle trying to avoid ENOMEM (errno 12) errors, because there is only one memory map for shared memory and it contains not only shared memory segments but also shared libraries, memory-mapped files and a number of other items. Thus, simply stopping and then restarting an instance of Sybase may fail because the map has become fragmented and no single remaining space is large enough.

ipcs is VERY misleading. Adding up all the segments does not tell you whether a given segment size will fit. There may be 900 megs available but the largest contiguous chunk is only 100 megs. This is all due to the severe limitations of 32bit programs trying to do really big things.

Fortunately, there is a workaround in HP-UX 11.xx: memory windows. Add all of the memory windows patches, then assign each instance of Sybase (and related middleware that also accesses shared memory) to a separate window. These windows are private maps where shared libraries, memory-mapped files, etc are not found so the instance has unlimited access to the entire map.
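A sketch of that setup follows. The tunable, file format and commands are from the memory windows facility, but the window IDs, service names and server path below are made up for illustration; check the setmemwindow(1M) and getmemwindow(1M) man pages and the memory windows white paper before relying on exact flags:

```shell
# 1. Enable memory windows in the kernel (value is the number of
#    windows besides the global one; rebuild/reboot as required).
kmtune -s max_mem_window=4

# 2. Map names to window IDs in /etc/services.window (hypothetical
#    entries):
#        sybase1   20
#        sybase2   21

# 3. Start each Sybase instance inside its own window so its shared
#    memory lives in a private map:
WinId=$(getmemwindow sybase1)
setmemwindow -i $WinId /path/to/startserver -f RUN_SYB1
```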

And as mentioned, 4Gb is a bit small, especially when you use memory windows. You could allow each instance to use 900 megs for shared memory but then paging will start and performance will be pretty bad. Increase RAM to at least 6Gb, 8Gb would be best.

As far as increasing the shared memory area beyond 900 megs per instance goes, the Sybase apps will have to be relinked with -Wl,-N so that the quadrant 1 and quadrant 2 areas can be combined and Sybase can allocate up to 1750 megs of shared memory. Have your DBA read the mem_mgt and proc_mgt white papers in /usr/share/doc (11.0 only).

All this complexity is due to 32bit apps. If Sybase has a 64bit version (should by now, it's been about 6 years since 64bit HP-UX was released) all of these limits go away.

Bill Hassell, sysadmin