Operating System - HP-UX

Re: bdf: could not find the mount point

 
Morten Kristiansen
Frequent Advisor

bdf: could not find the mount point

Hi,

We have a server running HP-UX 11.11 that filled / to 100%, caused by a growing /etc/rc.log that a spooler was logging to heavily. I copied the rc.log to another volume group and ran "cat /dev/null > /etc/rc.log". That emptied the rc.log, but it is still logging more.
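For future reference, du is a quick way to hunt down what is filling a filesystem. This is a sketch using a temp directory as a stand-in; on the real system you would run something like "du -kx /" (-x keeps du on one filesystem, so NFS and other mounts are skipped):

```shell
# Demo: find the largest directories under a tree with du.
# /tmp/dudemo is a stand-in for the real filesystem root.
mkdir -p /tmp/dudemo/logs
dd if=/dev/zero of=/tmp/dudemo/logs/rc.log bs=1024 count=200 2>/dev/null
du -k /tmp/dudemo | sort -rn | head -2   # biggest directories first
```

Sorting the du output numerically in reverse puts the heaviest directories at the top, which usually points straight at the runaway log.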

The problem, both before I emptied the rc.log and after, is that I get no output from the bdf command. Here are some examples from the bdf command and /etc/fstab:

ourserver:/etc#cat /etc/fstab
/dev/vg00/lvol3 / vxfs delaylog 0 1
/dev/vg00/lvol1 /stand hfs defaults 0 1
/dev/vg00/lvol9 /tmp vxfs delaylog 0 2
/dev/vg00/lvol5 /home vxfs delaylog 0 2
/dev/vg00/lvol6 /opt vxfs delaylog 0 2
/dev/vg00/lvol7 /usr vxfs delaylog 0 2
/dev/vg00/lvol8 /var vxfs delaylog 0 2
/dev/vg00/ld /ld vxfs rw,suid,nolargefiles,delaylog,datainlog 0 2
/dev/san/lvol1 /d1 vxfs delaylog,nodatainlog,largefiles,rw,suid 0 2
/dev/san/lvol2 /d2 vxfs rw,suid,largefiles,delaylog,datainlog 0 2
/dev/san/lvol3 /d3 vxfs delaylog,nodatainlog,largefiles,rw,suid 0 2
/dev/san/lvol4 /d4 vxfs rw,suid,largefiles,delaylog,datainlog 0 2
/dev/san/lvol5 /db1 vxfs rw,suid,largefiles,delaylog,datainlog 0 2
/dev/san/lvol6 /db2 vxfs rw,suid,largefiles,delaylog,datainlog 0 2
/dev/backup/lvbackup /backup vxfs delaylog,nodatainlog,largefiles,rw,suid 0 2
anotherserver:/d2/Amex /d2/Amex nfs rw,soft 0 0
anotherserver:/d1/home/mydata/.data /d1/home/mydata/.data nfs rw,soft 0 0
#anotherserver:/d2/Current/prod /d2/Current/prod nfs rw,soft 0 0
anotherserver.oslo.fast.no:/d2/Current/prod /d2/Current/prod nfs rw,suid 0 0
ourserver:/etc#bdf /tmp
bdf: could not find the mount point for /tmp
ourserver:/etc#bdf /home
bdf: could not find the mount point for /home
ourserver:/etc#bdf /opt
bdf: could not find the mount point for /opt
ourserver:/etc#bdf /db2
bdf: could not find the mount point for /db2
ourserver:/etc#bdf /d2/Amex
bdf: could not find the mount point for /d2/Amex
ourserver:/etc#uname -a
HP-UX ourserver B.11.11 U 9000/800 864980691 unlimited-user license


Can anybody tell me how to solve this and what is causing the problem? I've searched Google for solutions, but I can't find anything similar.
7 REPLIES
A. Clay Stephenson
Acclaimed Contributor

Re: bdf: could not find the mount point

It's possible (in fact, quite probable) that /etc/mnttab is corrupt as a result of the filesystem filling up. What is the output of "mount", and what does "cat /etc/mnttab" look like?


The runaway rc.log is often caused by extra (bogus) files in /etc/rc.config.d. Make sure there are no extraneous files in this directory. For example, even xxx.save or similar files are forbidden because the entire directory is sourced by the rc commands. Tailing the logfile may help to identify the problem.
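A quick way to spot the most common leftovers is to grep the directory listing for backup-style suffixes. A sketch using a temp directory as a stand-in for /etc/rc.config.d (the .save/.old/.bak pattern is just an example; any extra file in that directory gets sourced):

```shell
# Demo: flag files in an rc.config.d-style directory that look like
# editor or backup leftovers (a temp dir stands in for /etc/rc.config.d).
mkdir -p /tmp/rcdemo
touch /tmp/rcdemo/lp /tmp/rcdemo/pd /tmp/rcdemo/lp.save /tmp/rcdemo/netconf.old
ls /tmp/rcdemo | grep -E '\.(save|old|bak)$'
```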

Once you find the reason for the runaway log, you will probably have to reboot to clear everything up.
If it ain't broke, I can fix that.
Morten Kristiansen
Frequent Advisor

Re: bdf: could not find the mount point

You're quite right:

ourserver:/#cat /etc/mnttab
ourserver:/#ls -l /etc/mnttab
-rw-r--r-- 1 root root 0 Apr 28 19:52 /etc/mnttab

it's empty, and here is the output from the "mount" command:

ourserver:/etc#mount
mount: file system table may be corrupt
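For scripting a check like this, "[ -s file ]" is true only when the file exists and is non-empty, so a truncated mnttab is easy to detect. A sketch with a stand-in path:

```shell
# "[ -s file ]" tests that the file exists and has size > 0,
# so this flags a truncated /etc/mnttab in a script.
MNTTAB=/tmp/demo_mnttab     # stand-in path for the demo
: > "$MNTTAB"               # create it empty
[ -s "$MNTTAB" ] || echo "$MNTTAB is empty or missing"
```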


The problem in rc.log is caused by the spooler; here is a dump from the rc.log. I couldn't find anything about this problem either:

May 2 23:17:17:165 spooler: RREPLY: status=OK(0) <-10.0.0.2/48001 h=38 d=0
May 2 23:17:17:165 spooler: head mtype=200 status=0 seq=16
May 2 23:17:17:166 spooler: SREQUEST: hubpost ->10.0.0.2/48001
May 2 23:17:17:166 spooler: head mtype=100 cmd=hubpost seq=17 ts=1178140637 frm=10.0.0.1/65328
May 2 23:17:17:166 spooler: head tout=10 addr=
May 2 23:17:17:166 spooler: data nimid=OF76979930-84871 nimts=1178140632 source=10.0.0.1
May 2 23:17:17:166 spooler: data md5sum=HEX(16):7b646c30e546dba9ace14a642591f84d robot=ourserver
May 2 23:17:17:166 spooler: data domain=NSA pri=5 subject=alarm prid=cdm supp_key=disk//d2 udata=PDS(88)
May 2 23:17:17:170 spooler: RREPLY: status=OK(0) <-10.0.0.2/48001 h=38 d=0
May 2 23:17:17:170 spooler: head mtype=200 status=0 seq=17
May 2 23:17:17:170 spooler: SREQUEST: hubpost ->10.0.0.2/48001
May 2 23:17:17:170 spooler: head mtype=100 cmd=hubpost seq=18 ts=1178140637 frm=10.0.0.1/65328
May 2 23:17:17:170 spooler: head tout=10 addr=
May 2 23:17:17:170 spooler: data nimid=OF76979930-84872 nimts=1178140632 source=10.0.0.1
May 2 23:17:17:170 spooler: data md5sum=HEX(16):08007a0ba2c15971d14fae741fff9362 robot=ourserver
May 2 23:17:17:170 spooler: data domain=NSA pri=5 subject=alarm prid=cdm supp_key=disk//d1 udata=PDS(88)
May 2 23:17:17:175 spooler: RREPLY: status=OK(0) <-10.0.0.2/48001 h=38 d=0
May 2 23:17:17:175 spooler: head mtype=200 status=0 seq=18
May 2 23:17:17:176 spooler: SREQUEST: hubpost ->10.0.0.2/48001
May 2 23:17:17:176 spooler: head mtype=100 cmd=hubpost seq=19 ts=1178140637 frm=10.0.0.1/65328
May 2 23:17:17:176 spooler: head tout=10 addr=
May 2 23:17:17:176 spooler: data nimid=OF76979930-84873 nimts=1178140632 source=10.0.0.1
May 2 23:17:17:176 spooler: data md5sum=HEX(16):ae250f0a8f4838d151d505fca97d93ff robot=ourserver
May 2 23:17:17:176 spooler: data domain=NSA pri=5 subject=alarm prid=cdm supp_key=disk//backup
May 2 23:17:17:176 spooler: data udata=PDS(92)
May 2 23:17:17:179 spooler: RREPLY: status=OK(0) <-10.0.0.2/48001 h=38 d=0
May 2 23:17:17:179 spooler: head mtype=200 status=0 seq=19
May 2 23:17:17:180 spooler: SREQUEST: hubpost ->10.0.0.2/48001
May 2 23:17:17:180 spooler: head mtype=100 cmd=hubpost seq=20 ts=1178140637 frm=10.0.0.1/65328
May 2 23:17:17:180 spooler: head tout=10 addr=
May 2 23:17:17:180 spooler: data nimid=OF76979930-84874 nimts=1178140632 source=10.0.0.1
May 2 23:17:17:180 spooler: data md5sum=HEX(16):ce455722fd69bd1006bd8be80f50982f robot=ourserver
May 2 23:17:17:180 spooler: data domain=NSA pri=4 subject=alarm prid=cdm supp_key=disk//d1/home/data/.data
May 2 23:17:17:180 spooler: data udata=PDS(114)
May 2 23:17:17:184 spooler: RREPLY: status=OK(0) <-10.0.0.2/48001 h=38 d=0
May 2 23:17:17:184 spooler: head mtype=200 status=0 seq=20
May 2 23:17:17:184 spooler: SREQUEST: hubpost ->10.0.0.2/48001
May 2 23:17:17:184 spooler: head mtype=100 cmd=hubpost seq=21 ts=1178140637 frm=10.0.0.1/65328
May 2 23:17:17:184 spooler: head tout=10 addr=
May 2 23:17:17:184 spooler: data nimid=OF76979930-84875 nimts=1178140632 source=10.0.0.1
May 2 23:17:17:184 spooler: data md5sum=HEX(16):ce95d987d7cbd64c4c0d591c118f7294 robot=ourserver
May 2 23:17:17:184 spooler: data domain=NSA pri=4 subject=alarm prid=cdm supp_key=disk//d2/Amex
May 2 23:17:17:184 spooler: data udata=PDS(99)
May 2 23:17:17:189 spooler: RREPLY: status=OK(0) <-10.0.0.2/48001 h=38 d=0
May 2 23:17:17:189 spooler: head mtype=200 status=0 seq=21
May 2 23:17:17:189 spooler: FlushMessages - out-queue is empty - done
May 2 23:17:17:189 spooler: FlushMessages - 22 messages sent to 10.0.0.2:48001
May 2 23:17:17:191 spooler: SREQUEST: _close ->10.0.0.2/48001
May 2 23:17:17:192 spooler: head mtype=100 cmd=_close seq=22 ts=1178140637 frm=10.0.0.1/65328
May 2 23:17:17:192 spooler: head tout=10 addr=
May 2 23:17:22:190 spooler: FlushMessages - no data in the spooler queues
May 2 23:17:27:190 spooler: FlushMessages - no data in the spooler queues
May 2 23:17:32:190 spooler: FlushMessages - no data in the spooler queues
May 2 23:17:37:190 spooler: FlushMessages - no data in the spooler queues
May 2 23:17:42:190 spooler: FlushMessages - no data in the spooler queues
May 2 23:17:47:190 spooler: FlushMessages - no data in the spooler queues
May 2 23:17:52:190 spooler: FlushMessages - no data in the spooler queues
May 2 23:17:57:190 spooler: FlushMessages - no data in the spooler queues
May 2 23:18:02:190 spooler: FlushMessages - no data in the spooler queues
A. Clay Stephenson
Acclaimed Contributor

Re: bdf: could not find the mount point

My best guess is that this is the HPPDS spooler, which is not often used. I would edit /etc/rc.config.d/pd, set PD_CLIENT=0, and make sure that PD_SPOOLERS="" and PD_OTHER_CLIENTS="" as well. This will prevent the distributed printer system from starting and should allow you to clean up the mess.
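A quick grep shows at a glance whether those variables are already set that way. A sketch using a temp file as a stand-in for /etc/rc.config.d/pd:

```shell
# Demo: inspect the PD_* variables in an rc.config.d-style file
# (a temp file stands in for /etc/rc.config.d/pd).
cat > /tmp/pd_demo <<'EOF'
PD_CLIENT=0
PD_SPOOLERS=""
PD_OTHER_CLIENTS=""
EOF
grep -E '^PD_(CLIENT|SPOOLERS|OTHER_CLIENTS)=' /tmp/pd_demo
```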

For good measure, you might also edit the "lp" file in the same directory and set LP=0 to prevent lpsched from starting as well.
If it ain't broke, I can fix that.
Morten Kristiansen
Frequent Advisor

Re: bdf: could not find the mount point

All the settings you suggested in the pd file were already as you described, but the LP attribute in the lp file was set to 1. I guess it doesn't make any sense having LP=1 and nothing set in the pd file? Am I right?

Is there any chance to restore /etc/mnttab without a reboot?

mk
A. Clay Stephenson
Acclaimed Contributor

Re: bdf: could not find the mount point

No, the two spooler subsystems are really independent. I would go ahead and set LP=0, BUT make sure that there are no "extra" files in the /etc/rc.config.d directory, because the last LP=X (or whatever variable) wins when the files are sourced. I would strongly suggest a reboot because /etc/mnttab is probably just the tip of the iceberg. You really don't know how many critical processes are missing or in a less-than-sane state.
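The "last assignment wins" behavior is easy to demonstrate: when every file in a directory is sourced in turn, a stray backup copy sourced later silently overrides the real config. A sketch with a temp directory standing in for /etc/rc.config.d:

```shell
# Demo: why stray files in /etc/rc.config.d matter. The rc scripts
# source every file in the directory, so the last assignment wins.
mkdir -p /tmp/rcsrc
echo 'LP=0' > /tmp/rcsrc/lp
echo 'LP=1' > /tmp/rcsrc/lp.save    # a forgotten backup copy
for f in /tmp/rcsrc/*; do . "$f"; done
echo "effective LP=$LP"             # prints "effective LP=1"
```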
If it ain't broke, I can fix that.
boomer_2
Super Advisor

Re: bdf: could not find the mount point

Hi Morten,
try "mv /etc/mnttab /etc/mnttab.old" and then run the
mount command... you should probably be able to restore /etc/mnttab...
Morten Kristiansen
Frequent Advisor

Re: bdf: could not find the mount point

I've tried running "mount -a" with no success, so I guess A. Clay is right: I have to do a reboot. I'm just a little bit scared that some other files are also damaged, causing the reboot to fail. But I guess we have to try and see what happens.