Carol Clark
Advisor

fbackup

Can anyone help with the following errors when using fbackup?

fbackup(1004): session begins on Wed Jun 28 23:50:55 2006
fbackup(1517): /net not backed up - 'n' option (NFS) not specified
fbackup(3203): volume 1 has been used 30 time(s)
fbackup(3024): writing volume 1 to the output file /dev/rmt/0m
fbackup(9999): Not enough space
fbackup(1105): WARNING: could not open directory /stand
fbackup(9999): Not enough space
fbackup(1105): WARNING: could not open directory /stand/build
fbackup(3005): WARNING: file number 917369 was NOT backed up
fbackup(3005): WARNING: file number 917371 was NOT backed up
fbackup(9999): Not enough space
fbackup(1105): WARNING: could not open directory /stand/build/mod_wk.d
fbackup(9999): Not enough space
fbackup(1105): WARNING: could not open directory /stand/build/mod_wk.d/krm
fbackup(3005): WARNING: file number 917381 was NOT backed up
fbackup(9999): Not enough space
fbackup(3005): WARNING: file number 917382 was NOT backed up
fbackup(1105): WARNING: could not open directory /stand/dlkm
fbackup(9999): Not enough space
fbackup(1105): WARNING: could not open directory /stand/dlkm.vmunix.prev
fbackup(9999): Not enough space
fbackup(1105): WARNING: could not open directory /stand/dlkm.vmunix.prev/mod.d

The problem seems to be just with /stand, and it appears to complete the backup (although I am going to check that).

Has anyone got any advice?
Jaime Bolanos Rojas.
Honored Contributor

Re: fbackup

Carol, I've never seen this error before.
I would think that the /stand directory was too big and you ran out of space for the backup.

For testing purposes you can do an fbackup just of the /stand directory, to see if there is a problem with it.
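A /stand-only test run might look like the sketch below. The device name and index path are assumptions taken from elsewhere in this thread; check fbackup(1M) on your system before running it.

```shell
# Sketch: back up only /stand to the same tape device used by the
# nightly job, writing a file index so the contents can be checked.
# /dev/rmt/0m and /tmp/stand.index are assumed names, not from the
# original script.
/usr/sbin/fbackup -f /dev/rmt/0m -i /stand -I /tmp/stand.index -v
```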

Regards,

jaime.
Work hard when the need comes out.
OldSchool
Honored Contributor

Re: fbackup

Also, could you post the command-line (or script) that you're running when you get this?
Carol Clark
Advisor

Re: fbackup

/stand by itself backs up fine
Mridul Shrivastava
Honored Contributor

Re: fbackup

If /stand by itself is successful, it means the size of the backup is more than the device on which you are taking the backup.

Are you appending the backup to the tape? It may be filling up the whole tape and giving the error.
Time has a wonderful way of weeding out the trivial
Carol Clark
Advisor

Re: fbackup

The backup does seem to be backing up - we are currently verifying that. The backup script has been running successfully for quite some time.

The command we are using is

/usr/sbin/fbackup -f $BACKUPDEVICE -c $CONFIG $BACKUPDIR -I $INDEXLOG

A. Clay Stephenson
Acclaimed Contributor

Re: fbackup

Issue a "bdf" command and post it. I suspect that /var is very nearly full.
If it ain't broke, I can fix that.
Carol Clark
Advisor

Re: fbackup

bdf of /var

/dev/vg00/lvol8 1175552 897232 278320 76% /var
A. Clay Stephenson
Acclaimed Contributor

Re: fbackup

I should have been more specific. Simply do a "bdf" and post that. While I suspect /var, if you have other filesystems mounted below /var then one of them could be filling up -- particularly /var/adm. I also assume that you were running this backup as root.
If it ain't broke, I can fix that.
Carol Clark
Advisor

Re: fbackup

There are no other file systems mounted under /var and yes we are running it as root
Carol Clark
Advisor

Re: fbackup

Here is a full bdf

$ bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 262144 155288 106528 59% /
/dev/vg00/lvol1 251696 40784 185736 18% /stand
/dev/vg00/lvol8 1175552 897232 278320 76% /var
/dev/vg00/lvol7 1572864 1360752 210616 87% /usr
/dev/vg00/lvol6 65536 17552 47784 27% /tmp
/dev/vg03/progtemp 2097152 930887 1093465 46% /progtemp
/dev/vg00/lvol5 1048576 984592 63544 94% /opt
/dev/vg03/yrend_app
4194304 1520810 2506523 38% /opt/coins9/yrend
/dev/vg03/yrend_data
18874368 14733888 4108136 78% /opt/coins9/yrend/data
/dev/vg02/opt_coins9_test
4194304 1699391 2340134 42% /opt/coins9/test
/dev/vg02/opt_coins9_test_data
14680064 11672928 2983712 80% /opt/coins9/test/data
/dev/vg03/opt_coins9_oatest
4194304 2306944 1770162 57% /opt/coins9/oatest
/dev/vg03/opt_coins9_oatest_data
16777216 15053620 1696732 90% /opt/coins9/oatest/data
/dev/vg02/opt_coins9_live
4194304 2557113 1546032 62% /opt/coins9/live
/dev/vg02/opt_coins9_live_var_spool
2097152 1207715 835437 59% /opt/coins9/live/var/spool
/dev/vg01/opt_coins9_live_aiarchive
2097152 297851 1686899 15% /opt/coins9/live/aiarchive
/dev/vg03/learn_app
4194304 1554863 2474494 39% /opt/coins9/learn
/dev/vg02/opt_coins9_jdconcp
2097152 1230073 812943 60% /opt/coins9/jdconcp
/dev/vg02/opt_coins9_jdconcp_data
4194304 2641673 1455595 64% /opt/coins9/jdconcp/data
/dev/vg02/opt_coins9_jdcon
2097152 1069115 963813 53% /opt/coins9/jdcon
/dev/vg02/opt_coins9_jdcon_data
4194304 2636324 1460610 64% /opt/coins9/jdcon/data
/dev/vg02/opt_coins9_cplive
4194304 1469988 2555258 37% /opt/coins9/cplive
/dev/vg01/opt_coins9_coinsadmin
2097152 1360380 690791 66% /opt/coins9/coinsadmin
/dev/vg03/opt_coins9_cim
37748736 20706160 16909496 55% /opt/coins9/cim
/dev/vg00/lvol4 2097152 1001928 1088336 48% /home
/dev/vg00/lvol9 3145728 2335610 759511 75% /data_protector
/dev/vg02/opt_coins9_live_data
22544384 17665888 4840448 78% /opt/coins9/live/data
/dev/vg00/opt_coins9_live_data_ai
2097152 5383 1963024 0% /opt/coins9/live/data/ai
/dev/vg02/opt_coins9_cplive_data
22544384 13226648 9245008 59% /opt/coins9/cplive/data
/dev/vg02/restore 22544384 15865552 6522456 71% /opt/coins9/restore
/dev/vg01/opt_coins9_coinsadmin10
4194304 2449246 1636139 60% /opt/coins9/coinsadmin10
/dev/vg03/learn_data
22544384 16726944 5771992 74% /opt/coins9/learn/data
/dev/vg02/opt_coins9_fop
262144 13934 232761 6% /opt/coins9/fop
A. Clay Stephenson
Acclaimed Contributor

Re: fbackup

After looking at all of this (and I am making the assumption that this failure occurs after this fbackup has been running for some time before the failure occurs), I think the "Not enough space" messages have to do with shared memory or memory allocation. Your maxdsiz may need to be increased, shmmax may be too small, you may need more swap space, or (more likely) there are shared memory segments that are no longer attached to any processes. Done any kill -9's lately? Do an ipcs -ma and look for any shared memory id's that have nattch = 0 (it MAY be safe to remove these using ipcrm).
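A quick way to spot those orphaned segments is a small awk filter over the `ipcs -ma` output. This is a sketch: the NATTCH column position (field 9) is an assumption based on a typical HP-UX `ipcs -ma` layout, so verify it against the header line on your own system first.

```shell
# Sketch: filter `ipcs -ma` output for shared memory segments ("m" rows)
# whose NATTCH count is 0, i.e. segments no process is attached to.
# Field positions are assumptions; check your ipcs -ma header first.
find_orphans() {
  awk '$1 == "m" && $9 == 0 { print "orphaned shmid:", $2 }'
}

# Usage on a live system (then review each ID before any ipcrm -m):
# ipcs -ma | find_orphans
```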

If the kernel tunables and swap space are reasonable then I would reboot (if practical) and try your fbackup again.
If it ain't broke, I can fix that.
Bill Hassell
Honored Contributor

Re: fbackup

I agree with Clay's assessment. fbackup uses shared memory for two purposes: to keep an index of all the files being backed up, and for buffers used by all the reader processes (up to 6) that help keep fbackup busy. Since the file number is 917382, it looks like close to a million files may be in your backup scope, and I would expect fbackup to need several hundred megs of shared memory. Look at shmmax and bump it up to 900 megs for good measure to allow fbackup to use the memory it needs. fbackup doesn't use a lot of local memory so maxdsiz is probably OK, but just in case, make sure maxdsiz is larger than 100 megs.
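On HP-UX 11.x, checking and raising those tunables might look like the config fragment below. This is a sketch: verify the kmtune flags against kmtune(1M) on your release, and note that static tunables need a kernel rebuild (mk_kernel) and a reboot to take effect; on 11i v2 and later, kctune can change some tunables dynamically.

```shell
# Query the current values (sketch; flags assumed from kmtune(1M))
kmtune -q shmmax
kmtune -q maxdsiz

# Stage new values: 900 MB for shmmax (900 * 1024 * 1024),
# 128 MB for maxdsiz. Static tunables require a kernel rebuild
# and reboot before the new values apply.
kmtune -s shmmax=943718400
kmtune -s maxdsiz=134217728
```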


Bill Hassell, sysadmin
Carol Clark
Advisor

Re: fbackup

Thanks for all your help - we are going to schedule a reboot over the weekend and then take it from there