
Problem with fbackup

 
Richard Davies_6
Occasional Contributor

Problem with fbackup

I have a D330 running HP-UX 10.20 that has behaved well for the last 3 years, with backups running every Mon-Fri with no real problems, until tonight. Now, whenever fbackup runs, it fails with the following message:

fbackup(1009): cannot get the specified shared memory segment.

Nothing has changed on the machine for at least the last year. Any idea what the problem is, and more importantly, how do I solve it and get fbackup working again?
5 REPLIES
A. Clay Stephenson
Acclaimed Contributor

Re: Problem with fbackup

Fbackup needs shared memory segments for its reader processes. I suspect that shared memory has become so fragmented in the 32-bit world that there is no single contiguous chunk big enough to satisfy the request. You can use ipcs to examine shared memory (man ipcs for details), but it may be simpler to reboot the box. "Leftover" shared memory segments (as well as temp files) are a very common artifact of kill -9's.
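As a quick sketch (the exact column headings vary a bit between HP-UX releases), this lists the shared memory segments with their sizes, owners, and attach counts so you can spot large leftovers:

# list shared memory segments with all details (size, owner, attach count)
ipcs -ma

A large segment left behind by a dead fbackup run is a candidate for cleanup or, failing that, a reboot.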
If it ain't broke, I can fix that.
James R. Ferguson
Acclaimed Contributor

Re: Problem with fbackup

Hi Richard:

One possibility is that someone has killed 'fbackup' processes at an earlier time. This will leave large amounts of shared memory tied up. A reboot would release these segments.

Another possibility is that you have set too high a value for 'records' and/or 'readerprocesses' in the 'fbackup' config file (if you are using one); see the example config below.
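For reference, a minimal fbackup config file (the one you point to with -c) looks something like the following; the numbers here are only illustrative, not recommendations:

blocksperrecord 256
records 32
checkpointfreq 1024
readerprocesses 6
maxretries 5
retrylimit 5000000
maxvoluses 100
filesperfsm 2000

Larger values of records and readerprocesses increase the shared memory fbackup asks for, so trimming them can get you back under the limit.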

Regards!

...JRF...
Michael Tully
Honored Contributor

Re: Problem with fbackup

If you have an unstable application that has memory leaks (we have), a regular reboot will release any of these dead shared memory segments. I'm not saying this is the solution to everything (far from it), but a maintenance reboot once every three months can do more good than harm.
Anyone for a Mutiny ?
Steven E. Protter
Exalted Contributor

Re: Problem with fbackup

You can make some kernel changes and increase shmmax, the maximum size of a shared memory segment, along with the related resources. For example:

msgmap (MSGTQL+2)
msgmax 32768
msgmnb 65535
msgmni (NPROC)
msgssz 128
semmsl_override 2048
semume 64
semvmx 32768
sendfile_max 0
shmem 1
shmmax 0X40000000
shmmni 512
shmseg 32

Tuning the kernel might be somewhere to go with this; a sketch of how to check and change these parameters follows.
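As a rough sketch of how to inspect these on 10.20 (assuming kmtune(1M) is present on your system; otherwise SAM or sysdef will show you the same information):

# query individual tunables
kmtune -q shmmax
kmtune -q shmseg
# or dump all kernel tunables
sysdef

To change them, edit the values via SAM (Kernel Configuration -> Configurable Parameters) or in /stand/system, rebuild the kernel with mk_kernel, and reboot.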

ipcs will show the shared memory segments, semaphores, and message queues.

ipcrm will let you remove specific segments.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Bill Hassell
Honored Contributor

Re: Problem with fbackup

When fbackup first starts, it allocates a shared memory area to hold all the names and paths of the files that will be written to tape. That's why there is a pause (seconds to minutes) before the tape starts moving. For really large filesystems (in terms of the quantity of files, not data size), a very large shared memory segment may be needed. And as mentioned, killing fbackup with -9 will leave that area reserved until the next reboot. Thus, the next fbackup can't run because there isn't enough memory available for a new shared memory area (and this is why you never use kill -9 as a first resort).

The good news is you can usually return the segment to the pool with ipcrm. Start with:

ipcs -bmop

Look for NATTCH = 0, meaning that no process is currently attached. You do have to be careful about using ipcrm, since it will remove a given segment regardless of whether a process is using it, and this can make some applications very unhappy.
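A rough sketch of the cleanup; the segment ID 12345 below is made up for illustration, so substitute whatever ID ipcs reports for the orphaned segment:

# find the large segment whose NATTCH column shows 0 and note its ID
ipcs -bmop
# remove that segment by its shared memory ID
ipcrm -m 12345

After that, fbackup should be able to allocate its shared memory area again without a reboot.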


Bill Hassell, sysadmin