<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>NFS mounting problem in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-mounting-problem/m-p/2420039#M766423</link>
    <description>NFS mounting problem in Operating System - HP-UX: a ServiceGuard NFS package startup/halt log is enclosed in the first post.</description>
    <pubDate>Mon, 20 Mar 2000 08:29:23 GMT</pubDate>
    <dc:creator>Shiv Kumar_2</dc:creator>
    <dc:date>2000-03-20T08:29:23Z</dc:date>
    <item>
      <title>NFS mounting problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-mounting-problem/m-p/2420039#M766423</link>
      <description>Hi,&lt;BR /&gt;I am enclosing the log file for the NFS mounts on a particular machine. When I run nfsstat, it says that the NFS server is already active, and it also shows that all the daemons (rpc.mountd, rpc.statd, biod) are running. Where exactly could the problem be? Thanks a lot in advance.&lt;BR /&gt;&lt;BR /&gt;ShivKumar&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;########### Node "sdf2ora2": Starting package at Fri Mar 17 14:12:14 GMT 2000 &lt;BR /&gt;###########&lt;BR /&gt;Mar 17 14:12:14 - "sdf2ora2": Activating volume group /dev/vg_nfs with &lt;BR /&gt;exclusive option.&lt;BR /&gt;Activated volume group in Exclusive Mode.&lt;BR /&gt;Volume group "/dev/vg_nfs" has been successfully changed.&lt;BR /&gt;Mar 17 14:12:19 - Node "sdf2ora2": Checking filesystems:&lt;BR /&gt;   /dev/vg_nfs/lv_nfs&lt;BR /&gt;file system is clean - log replay is not required&lt;BR /&gt;Mar 17 14:12:23 - Node "sdf2ora2": Mounting /dev/vg_nfs/lv_nfs at /export&lt;BR /&gt;Mar 17 14:12:25 - Node "sdf2ora2": Starting nfs service nfs.monitor using&lt;BR /&gt;   "/etc/cmcluster/nfs/nfs.mon"&lt;BR /&gt;Mar 17 14:12:25 - Node "sdf2ora2": Exporting filesystem on  -o &lt;BR /&gt;root=sdf2ora1:sdf2ora2:sdf2ora1-f:sdf2ora2-f /export&lt;BR /&gt;Mar 17 14:12:26 - Node "sdf2ora2": Adding IP address 10.224.73.141 to subnet &lt;BR /&gt;10.224.73.128&lt;BR /&gt;Mar 17 14:12:26 - Node "sdf2ora2": Adding IP address 10.224.64.14 to subnet &lt;BR /&gt;10.224.64.0&lt;BR /&gt;message 86762_agbinet_4001 queued (to unixp)&lt;BR /&gt;killing biod&lt;BR /&gt;killing automount&lt;BR /&gt;    starting NFS CLIENT networking&lt;BR /&gt;&lt;BR /&gt;    starting up the portmapper&lt;BR /&gt;        portmap already started, using pid: 739&lt;BR /&gt;    starting up the BIO daemons&lt;BR /&gt;        /usr/sbin/biod 4&lt;BR /&gt;exportfs error: nothing to export.&lt;BR /&gt;    Reading in /etc/exports&lt;BR /&gt;    starting up the Status Monitor daemon&lt;BR /&gt;        rpc.statd already started, using pid: 765&lt;BR /&gt;    starting up the Lock Manager daemon&lt;BR /&gt;        rpc.lockd already started, using pid: 771&lt;BR /&gt;    starting up the Automount daemon&lt;BR /&gt;        /usr/sbin/automount -f /etc/auto_master&lt;BR /&gt;    mounting remote NFS file systems ...&lt;BR /&gt;&lt;BR /&gt;        ########### Node "sdf2ora2": Halting package at Sun Mar 19 01:14:29 GMT &lt;BR /&gt;2000 ###########&lt;BR /&gt;message 88085_agbinet_4001 queued (to unixp)&lt;BR /&gt;killing biod&lt;BR /&gt;killing automount&lt;BR /&gt;Mar 19 01:14:33 - Node "sdf2ora2": Remove IP address 10.224.73.141 from subnet &lt;BR /&gt;10.224.73.128&lt;BR /&gt;Mar 19 01:14:33 - Node "sdf2ora2": Remove IP address 10.224.64.14 from subnet &lt;BR /&gt;10.224.64.0&lt;BR /&gt;exportfs error: options ignored for unexport.&lt;BR /&gt;        ERROR:  Function un_export_fs&lt;BR /&gt;        ERROR:  Failed to unexport  -o &lt;BR /&gt;root=sdf2ora1:sdf2ora2:sdf2ora1-f:sdf2ora2-f /export&lt;BR /&gt;Mar 19 01:14:34 - Node "sdf2ora2": Halting NFS service nfs.monitor&lt;BR /&gt;killing rpc.lockd pid = 771&lt;BR /&gt;killing rpc.statd pid = 765&lt;BR /&gt;Mar 19 01:14:35 - Node "sdf2ora2": Restarting rpc.statd&lt;BR /&gt;Mar 19 01:14:35 - Node "sdf2ora2": Restarting rpc.lockd&lt;BR /&gt;Mar 19 01:14:35 - Node "sdf2ora2": Unmounting filesystem on /dev/vg_nfs/lv_nfs&lt;BR /&gt;        WARNING:   Running fuser to remove anyone using the file system &lt;BR /&gt;directly.&lt;BR /&gt;/dev/vg_nfs/lv_nfs:    25752o(root)&lt;BR /&gt;&lt;BR /&gt;Mar 19 01:14:37 - Node "sdf2ora2": Deactivating volume group /dev/vg_nfs&lt;BR /&gt;Deactivated volume group in Exclusive Mode.&lt;BR /&gt;Volume group "/dev/vg_nfs" has been successfully changed.</description>
      <pubDate>Mon, 20 Mar 2000 08:29:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-mounting-problem/m-p/2420039#M766423</guid>
      <dc:creator>Shiv Kumar_2</dc:creator>
      <dc:date>2000-03-20T08:29:23Z</dc:date>
    </item>
    <item>
      <title>Re: NFS mounting problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nfs-mounting-problem/m-p/2420040#M766424</link>
      <description>Hi, I am not sure what your problem is, but if you run&lt;BR /&gt;"cmviewcl -v" and check that your highly available&lt;BR /&gt;package is up and running, then you can NFS-mount&lt;BR /&gt;the filesystems from your other hosts by using:&lt;BR /&gt;mount nfs:/&amp;lt;filesystem&amp;gt; &amp;lt;local mountpoint&amp;gt;, provided&lt;BR /&gt;"nfs" is your package name. You can use the package floating IP address &lt;BR /&gt;"10.224.73.141" instead of the package name, and the filesystem(s) you are &lt;BR /&gt;exporting should be in the XFS[@] array inside your control script, i.e. &lt;BR /&gt;/etc/cmcluster/nfs/nfs.cntl if you followed the default naming convention.&lt;BR /&gt;&lt;BR /&gt;If your problem is that the package doesn't start,&lt;BR /&gt;check that the processes being monitored by &lt;BR /&gt;"/etc/cmcluster/nfs/nfs.mon" started before restarting&lt;BR /&gt;the package. Just in case, make sure you're starting&lt;BR /&gt;the MC/ServiceGuard packages from run level 3.</description>
      <pubDate>Mon, 20 Mar 2000 12:12:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nfs-mounting-problem/m-p/2420040#M766424</guid>
      <dc:creator>Fernando Santana</dc:creator>
      <dc:date>2000-03-20T12:12:03Z</dc:date>
    </item>
  </channel>
</rss>

