<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: SG(SLES) vs NFS(soft and/or hard mounts) in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439775#M82152</link>
    <description>here is the log in full detail....&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;####### Node "lhsap10": Halting package at Wed May 20 21:07:01 EDT 2009 #######&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": *** Begin: Executing script [/opt/cmcluster/PP7/PP7.cntl] args [stop PP7]&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /etc/cmcluster.conf - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": *** Begin: Executing script [/opt/cmcluster/conf/PP7/sapwas.sh] args [spawn]&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /etc/cmcluster.conf - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (sapwas_main): Entering SGeSAP stop runtime steps ...&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (sapwas_main): A.02.00.00&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /opt/cmcluster/conf/PP7/sap.config - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /opt/cmcluster/sap/sap.functions - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /opt/cmcluster/conf/sap.functions - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /opt/cmcluster/sap/SID/customer.functions - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Check if files to run source command on are identical&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Files to checksum are [ /opt/cmcluster/conf/PP7/sap.config]&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Checksums are identical&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Check if files to run source command on are identical&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Files to checksum are [ /opt/cmcluster/sap/sap.functions /opt/cmcluster/conf/sap.functions]&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Checksums are identical&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": *** Begin: Executing script 
[/opt/cmcluster/conf/PP7/sapwas.sh] MODE [stop]&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (initialize): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_version): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_version): SGeSAP: A.02.00.00 Sg: A.11.18.00 Linux: 2.6.16.60-0.33-smp&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_perl): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_parameters): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper 10.1.1.230): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_parameters): Package will handle SAP J2EE database service&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper 131.195.119.230): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_parameters): Package will handle SAP J2EE system central services&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_access /usr/sap/PP7/SYS/exe/ctrun): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_access): WARNING: NFS-Server not specified for /usr/sap/PP7/SYS/exe/ctrun. Skipping step.&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_access /usr/sap/trans/bin): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_access): WARNING: NFS-Server not specified for /usr/sap/trans/bin. 
Skipping step.&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (app_handler stop): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_own_app): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_own_app): Starting to check lhsap10-be for instance JC90 ...&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (is_node_alive lhsap10-be): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (watchdog): Watchdog timer initiated for (PID: 29126 Timeout: 260 secs)&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (watchdog): Watchdog process itself: WDPID=[29127]&lt;BR /&gt;May 20 21:07:23 - Node "lhsap10": (login_check lhsap10-be): TRACE POINT&lt;BR /&gt;May 20 21:07:25 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of root to App-Server host lhsap10-be working&lt;BR /&gt;May 20 21:07:26 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of pp7adm to App-Server host lhsap10-be working&lt;BR /&gt;May 20 21:07:26 - Node "lhsap10": (check_own_app): LINUX App-Server host lhsap10-be for instance JC90 responding&lt;BR /&gt;May 20 21:07:26 - Node "lhsap10": (check_own_app): Starting to check lhsap20-be for instance J93 ...&lt;BR /&gt;May 20 21:07:27 - Node "lhsap10": (is_node_alive lhsap20-be): TRACE POINT&lt;BR /&gt;May 20 21:07:27 - Node "lhsap10": (watchdog): Watchdog timer initiated for (PID: 29350 Timeout: 260 secs)&lt;BR /&gt;May 20 21:07:27 - Node "lhsap10": (watchdog): Watchdog process itself: WDPID=[29353]&lt;BR /&gt;May 20 21:07:29 - Node "lhsap10": (login_check lhsap20-be): TRACE POINT&lt;BR /&gt;May 20 21:07:30 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of root to App-Server host lhsap20-be working&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of pp7adm to App-Server host lhsap20-be working&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (check_own_app): LINUX App-Server host lhsap20-be for instance J93 responding&lt;BR /&gt;May 
20 21:07:31 - Node "lhsap10": (stop_own_app 2): TRACE POINT&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (treatment_test): (3 /\ 2)&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (stop_own_app): Instance JC90 on host lhsap10-be not running - skipping step&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (treatment_test): (1 /\ 2)&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (stop_own_app): Instance J93 on host lhsap20-be is configured to be excluded - skipping step&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (ci_remove_shmem normal SCS 80): TRACE POINT&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (clean_ipc SCS 80 pp7adm): TRACE POINT&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (clean_ipc): WARNING: shmem has processes attached&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (app_remove_shmem): TRACE POINT&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (check_own_app): TRACE POINT&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (check_own_app): Starting to check lhsap10-be for instance JC90 ...&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (is_node_alive lhsap10-be): TRACE POINT&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (watchdog): Watchdog timer initiated for (PID: 29589 Timeout: 260 secs)&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (watchdog): Watchdog process itself: WDPID=[29591]&lt;BR /&gt;May 20 21:07:35 - Node "lhsap10": (login_check lhsap10-be): TRACE POINT&lt;BR /&gt;May 20 21:07:37 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of root to App-Server host lhsap10-be working&lt;BR /&gt;May 20 21:07:38 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of pp7adm to App-Server host lhsap10-be working&lt;BR /&gt;May 20 21:07:38 - Node "lhsap10": (check_own_app): LINUX App-Server host lhsap10-be for instance JC90 responding&lt;BR /&gt;May 20 21:07:38 - Node "lhsap10": (check_own_app): Starting to check lhsap20-be for instance J93 ...&lt;BR /&gt;May 20 21:07:38 - Node "lhsap10": (is_node_alive lhsap20-be): TRACE POINT&lt;BR 
/&gt;May 20 21:07:38 - Node "lhsap10": (watchdog): Watchdog timer initiated for (PID: 29841 Timeout: 260 secs)&lt;BR /&gt;May 20 21:07:38 - Node "lhsap10": (watchdog): Watchdog process itself: WDPID=[29844]&lt;BR /&gt;May 20 21:07:40 - Node "lhsap10": (login_check lhsap20-be): TRACE POINT&lt;BR /&gt;May 20 21:07:42 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of root to App-Server host lhsap20-be working&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of pp7adm to App-Server host lhsap20-be working&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (check_own_app): LINUX App-Server host lhsap20-be for instance J93 responding&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_own_app 2): TRACE POINT&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (treatment_test): (3 /\ 2)&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_own_app): Instance JC90 on host lhsap10-be not running - skipping step&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (treatment_test): (1 /\ 2)&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_own_app): Instance J93 on host lhsap20-be is configured to be excluded - skipping step&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_saposcol_app): TRACE POINT&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_saposcol_app): Configured to be skipped&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_addons_prejci): TRACE POINT&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_cs SCS): TRACE POINT&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_cs): Halt Java System Central Services Instance ...&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_direct SCS 80 131.195.119.230): TRACE POINT&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_direct): Direct shutdown attempt on local host...&lt;BR /&gt;May 20 21:07:45 - Node "lhsap10": (stop_direct): Instance on lsapepoci stopped&lt;BR /&gt;May 20 21:07:45 - Node "lhsap10": (stop_direct): Waiting for cleanup of resources.....&lt;BR 
/&gt;May 20 21:07:45 - Node "lhsap10": (stop_direct): Waiting for cleanup of resources with ps -ef|grep SCS80_lsapepoci|grep -v sapstartsrv&lt;BR /&gt;May 20 21:07:45 - Node "lhsap10": (watchdog): Watchdog timer initiated for (PID: 31215 Timeout: 260 secs)&lt;BR /&gt;May 20 21:07:45 - Node "lhsap10": (watchdog): Watchdog process itself: WDPID=[31217]&lt;BR /&gt;May 20 21:07:51 - Node "lhsap10": (stop_sapstartsrv SCS 80 131.195.119.230 LINUX pp7adm): TRACE POINT&lt;BR /&gt;May 20 21:07:51 - Node "lhsap10": (is_ip_local 131.195.119.230): TRACE POINT&lt;BR /&gt;May 20 21:07:51 - Node "lhsap10": (is_ip_local): 131.195.119.230 considered to be local&lt;BR /&gt;May 20 21:07:51 - Node "lhsap10": (stop_sapstartsrv): Instance Service shutdown attempt on local host...&lt;BR /&gt;May 20 21:07:53 - Node "lhsap10": (stop_sapstartsrv): There was no local instance service running for SCS80&lt;BR /&gt;May 20 21:07:53 - Node "lhsap10": (crit_test_app lsapepoci 80 pp7adm SCS 1): TRACE POINT&lt;BR /&gt;May 20 21:07:53 - Node "lhsap10": (crit_test_app): Trying to connect enqueue service of instance SCS80 ...&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (crit_test_app): No connection to instance SCS80: rc=8&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (crit_test_app): Instance SCS80 not responding&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (stop_addons_postjci): TRACE POINT&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (stop_addons_predb): TRACE POINT&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (stop_ORACLE_jdb): TRACE POINT&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (ora_setenv): TRACE POINT&lt;BR /&gt;May 20 21:08:06 - Node "lhsap10": (ora_setenv): ORASID=orapp7 ORACLE_SID=PP7 ORACLE_HOME=/oracle/PP7/102_64 SAPDATA_HOME=/oracle/PP7&lt;BR /&gt;May 20 21:08:06 - Node "lhsap10": (stop_ORACLE_jdb): Halting J2EE database ...&lt;BR /&gt;May 20 21:08:21 - Node "lhsap10": (stop_ORACLE_jdb): J2EE Database stopped successfully&lt;BR /&gt;May 20 21:08:21 - Node "lhsap10": 
(ora_stop_listener): TRACE POINT&lt;BR /&gt;May 20 21:08:21 - Node "lhsap10": (ora_stop_listener): Stopping ORACLE listener LIST_PP7&lt;BR /&gt;May 20 21:08:30 - Node "lhsap10": (ora_stop_listener): The command completed successfully&lt;BR /&gt;May 20 21:08:30 - Node "lhsap10": (ora_wait): TRACE POINT&lt;BR /&gt;May 20 21:08:30 - Node "lhsap10": (ora_wait): Wait for Oracle shadow process cleanup (Timeout: 260 secs)&lt;BR /&gt;May 20 21:08:30 - Node "lhsap10": (ora_wait):  3792 ?        00:00:00 oracle&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": (stop_addons_postdb): TRACE POINT&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": (stop_saposcol): TRACE POINT&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": *** Done: Executing script [/opt/cmcluster/conf/PP7/sapwas.sh] MODE [stop]&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": (sapwas_main): Leaving SGeSAP stop runtime steps&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": *** Done: Executing script [/opt/cmcluster/conf/PP7/sapwas.sh] args [spawn]&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Remove IP address 10.1.1.230 from subnet 10.1.1.0&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Remove IP address 131.195.119.230 from subnet 131.195.119.0&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Stoping rmtab synchronization process&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unexporting filesystem on lsapepodb:/export/sapmnt/PP7&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unexporting filesystem on lhsap10-be:/export/sapmnt/PP7&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unexporting filesystem on lhsap20-be:/export/sapmnt/PP7&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /usr/sap/PP7/SCS80&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/mirrlogB&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/mirrlogA&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/origlogB&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting 
filesystem on /oracle/PP7/origlogA&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/oraarch&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapreorg&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdatatemp&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata4&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata3&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata2&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata1&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7&lt;BR /&gt;May 20 21:08:32 - Node "lhsap10": Unmounting filesystem on /export/sapmnt/PP7&lt;BR /&gt;WARNING: Running fuser to remove anyone using the file system directly.&lt;BR /&gt;Cannot stat file /proc/4989/fd/100: Permission denied&lt;BR /&gt;Cannot stat file /proc/4990/fd/102: Permission denied&lt;BR /&gt;Cannot stat file /proc/4991/fd/102: Permission denied&lt;BR /&gt;umount: /export/sapmnt/PP7: device is busy&lt;BR /&gt;umount: /export/sapmnt/PP7: device is busy&lt;BR /&gt;ERROR: Function umount_fs; Failed to unmount /dev/vgPP7FIXE/lvsapmnt&lt;BR /&gt;May 20 21:08:32 - Node "lhsap10": Deactivating volume group vgPP7FIXE&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is 
busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;          Can't deactivate volume group "vgPP7FIXE" with 1 open logical volume(s)&lt;BR /&gt;ERROR: Function deactivate_volume_group; Failed to deactivate vgPP7FIXE&lt;BR /&gt;Attempting to deltag to vg vgPP7FIXE...&lt;BR /&gt;deltag was successful on vg vgPP7FIXE.&lt;BR /&gt;May 20 21:09:02 - Node "lhsap10": Deactivating volume group vgPP7db01&lt;BR /&gt;Attempting to deltag to vg vgPP7db01...&lt;BR /&gt;deltag was successful on vg vgPP7db01.&lt;BR /&gt;###### Node "lhsap10": Package halted with ERROR at Wed May 20 21:09:03 EDT 2009 ######&lt;BR /&gt;</description>
    <pubDate>Tue, 16 Jun 2009 12:19:27 GMT</pubDate>
    <dc:creator>George Barbitsas</dc:creator>
    <dc:date>2009-06-16T12:19:27Z</dc:date>
    <item>
      <title>SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439769#M82146</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;We have a cluster made up of 2 x86 servers with 2 packages configured.  Furthermore, the way the servers were configured is NFS server/client on both nodes....no autofs used.&lt;BR /&gt;&lt;BR /&gt;Usually both packages are running on their preferred node, but nonetheless there is an application (not an SG package) on the other node that uses one of the shared filesystems (NFS).&lt;BR /&gt;&lt;BR /&gt;My issue arises when I issue a cmhaltpkg on the package that shares an NFS filesystem with that other application.  Here is an extract of my log.&lt;BR /&gt;&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Stoping rmtab synchronization process&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unexporting filesystem on lsapepodb:/export/sapmnt/PP7&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unexporting filesystem on lhsap10-be:/export/sapmnt/PP7&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unexporting filesystem on lhsap20-be:/export/sapmnt/PP7&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /usr/sap/PP7/SCS80&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/mirrlogB&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/mirrlogA&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/origlogB&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/origlogA&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/oraarch&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapreorg&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdatatemp&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata4&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata3&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata2&lt;BR /&gt;May 
20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata1&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7&lt;BR /&gt;May 20 21:08:32 - Node "lhsap10": Unmounting filesystem on /export/sapmnt/PP7&lt;BR /&gt;WARNING: Running fuser to remove anyone using the file system directly.&lt;BR /&gt;Cannot stat file /proc/4989/fd/100: Permission denied&lt;BR /&gt;Cannot stat file /proc/4990/fd/102: Permission denied&lt;BR /&gt;Cannot stat file /proc/4991/fd/102: Permission denied&lt;BR /&gt;umount: /export/sapmnt/PP7: device is busy&lt;BR /&gt;umount: /export/sapmnt/PP7: device is busy&lt;BR /&gt;ERROR: Function umount_fs; Failed to unmount /dev/vgPP7FIXE/lvsapmnt&lt;BR /&gt;May 20 21:08:32 - Node "lhsap10": Deactivating volume group vgPP7FIXE&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try 
deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I have to log in to the other node and kill stuff by hand (oracle...sap...java) in order to free up the lock on the filesystem.  I have tried soft mounts AND hard mounts with no success.&lt;BR /&gt;&lt;BR /&gt;I would be grateful for any insight.&lt;BR /&gt;</description>
      <pubDate>Mon, 15 Jun 2009 14:38:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439769#M82146</guid>
      <dc:creator>George Barbitsas</dc:creator>
      <dc:date>2009-06-15T14:38:20Z</dc:date>
    </item>
    <item>
      <title>Re: SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439770#M82147</link>
      <description>I would like to fix the typo I made in the TITLE&lt;BR /&gt;&lt;BR /&gt;SG(SLES) vs NFS(soft or hard mounts)</description>
      <pubDate>Mon, 15 Jun 2009 14:42:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439770#M82147</guid>
      <dc:creator>George Barbitsas</dc:creator>
      <dc:date>2009-06-15T14:42:26Z</dc:date>
    </item>
    <item>
      <title>Re: SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439771#M82148</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;I think one of the nodes needs to be rebooted.&lt;BR /&gt;&lt;BR /&gt;This issue is not related to the nfs mount options.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 15 Jun 2009 14:52:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439771#M82148</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-06-15T14:52:56Z</dc:date>
    </item>
    <item>
      <title>Re: SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439772#M82149</link>
      <description>The servers were rebooted many times, especially when the umount didn't go through.  After killing the programs by hand I rebooted the boxes and the same thing happened.</description>
      <pubDate>Mon, 15 Jun 2009 14:55:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439772#M82149</guid>
      <dc:creator>George Barbitsas</dc:creator>
      <dc:date>2009-06-15T14:55:28Z</dc:date>
    </item>
    <item>
      <title>Re: SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439773#M82150</link>
      <description>This is an architectural problem and has nothing to do with mount options. Never mount/umount NFS shares in the package start/stop script! You are not able to kill processes with pending I/O!&lt;BR /&gt;Get HA-NFS (or place the NFS share on a third highly available NFS server like a Netapp filer cluster) and mount statically or via autofs! Don't use overlapping mount points e.g. mount /usr/sap from SAN disks and /usr/sap/trans from NFS.&lt;BR /&gt;First make sure the NFS share is back (HA-NFS will do this for you) and then kill/stop leftover processes, it doesn't work the other way around...&lt;BR /&gt;&lt;BR /&gt;My 2 cents,&lt;BR /&gt;Armin&lt;BR /&gt;</description>
      <pubDate>Tue, 16 Jun 2009 07:21:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439773#M82150</guid>
      <dc:creator>Armin Kunaschik</dc:creator>
      <dc:date>2009-06-16T07:21:04Z</dc:date>
    </item>
    <item>
      <title>Re: SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439774#M82151</link>
      <description>HA-NFS is installed and configured....I'll have another look at the configuration....but if we look at the log I posted, we clearly see that the processes are not being killed and the umount never takes place.</description>
      <pubDate>Tue, 16 Jun 2009 11:40:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439774#M82151</guid>
      <dc:creator>George Barbitsas</dc:creator>
      <dc:date>2009-06-16T11:40:31Z</dc:date>
    </item>
    <item>
      <title>Re: SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439775#M82152</link>
      <description>here is the log in full detail....&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;####### Node "lhsap10": Halting package at Wed May 20 21:07:01 EDT 2009 #######&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": *** Begin: Executing script [/opt/cmcluster/PP7/PP7.cntl] args [stop PP7]&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /etc/cmcluster.conf - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": *** Begin: Executing script [/opt/cmcluster/conf/PP7/sapwas.sh] args [spawn]&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /etc/cmcluster.conf - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (sapwas_main): Entering SGeSAP stop runtime steps ...&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (sapwas_main): A.02.00.00&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /opt/cmcluster/conf/PP7/sap.config - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /opt/cmcluster/sap/sap.functions - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /opt/cmcluster/conf/sap.functions - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (get_source): Found /opt/cmcluster/sap/SID/customer.functions - Source it&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Check if files to run source command on are identical&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Files to checksum are [ /opt/cmcluster/conf/PP7/sap.config]&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Checksums are identical&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Check if files to run source command on are identical&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Files to checksum are [ /opt/cmcluster/sap/sap.functions /opt/cmcluster/conf/sap.functions]&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (checksum_files): Checksums are identical&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": *** Begin: Executing script 
[/opt/cmcluster/conf/PP7/sapwas.sh] MODE [stop]&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (initialize): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_version): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_version): SGeSAP: A.02.00.00 Sg: A.11.18.00 Linux: 2.6.16.60-0.33-smp&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_perl): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_parameters): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper 10.1.1.230): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_parameters): Package will handle SAP J2EE database service&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (ip_mapper 131.195.119.230): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_parameters): Package will handle SAP J2EE system central services&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_access /usr/sap/PP7/SYS/exe/ctrun): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_access): WARNING: NFS-Server not specified for /usr/sap/PP7/SYS/exe/ctrun. Skipping step.&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_access /usr/sap/trans/bin): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_access): WARNING: NFS-Server not specified for /usr/sap/trans/bin. 
Skipping step.&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (app_handler stop): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_own_app): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (check_own_app): Starting to check lhsap10-be for instance JC90 ...&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (is_node_alive lhsap10-be): TRACE POINT&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (watchdog): Watchdog timer initiated for (PID: 29126 Timeout: 260 secs)&lt;BR /&gt;May 20 21:07:21 - Node "lhsap10": (watchdog): Watchdog process itself: WDPID=[29127]&lt;BR /&gt;May 20 21:07:23 - Node "lhsap10": (login_check lhsap10-be): TRACE POINT&lt;BR /&gt;May 20 21:07:25 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of root to App-Server host lhsap10-be working&lt;BR /&gt;May 20 21:07:26 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of pp7adm to App-Server host lhsap10-be working&lt;BR /&gt;May 20 21:07:26 - Node "lhsap10": (check_own_app): LINUX App-Server host lhsap10-be for instance JC90 responding&lt;BR /&gt;May 20 21:07:26 - Node "lhsap10": (check_own_app): Starting to check lhsap20-be for instance J93 ...&lt;BR /&gt;May 20 21:07:27 - Node "lhsap10": (is_node_alive lhsap20-be): TRACE POINT&lt;BR /&gt;May 20 21:07:27 - Node "lhsap10": (watchdog): Watchdog timer initiated for (PID: 29350 Timeout: 260 secs)&lt;BR /&gt;May 20 21:07:27 - Node "lhsap10": (watchdog): Watchdog process itself: WDPID=[29353]&lt;BR /&gt;May 20 21:07:29 - Node "lhsap10": (login_check lhsap20-be): TRACE POINT&lt;BR /&gt;May 20 21:07:30 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of root to App-Server host lhsap20-be working&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of pp7adm to App-Server host lhsap20-be working&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (check_own_app): LINUX App-Server host lhsap20-be for instance J93 responding&lt;BR /&gt;May 
20 21:07:31 - Node "lhsap10": (stop_own_app 2): TRACE POINT&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (treatment_test): (3 /\ 2)&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (stop_own_app): Instance JC90 on host lhsap10-be not running - skipping step&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (treatment_test): (1 /\ 2)&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (stop_own_app): Instance J93 on host lhsap20-be is configured to be excluded - skipping step&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (ci_remove_shmem normal SCS 80): TRACE POINT&lt;BR /&gt;May 20 21:07:31 - Node "lhsap10": (clean_ipc SCS 80 pp7adm): TRACE POINT&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (clean_ipc): WARNING: shmem has processes attached&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (app_remove_shmem): TRACE POINT&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (check_own_app): TRACE POINT&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (check_own_app): Starting to check lhsap10-be for instance JC90 ...&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (is_node_alive lhsap10-be): TRACE POINT&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (watchdog): Watchdog timer initiated for (PID: 29589 Timeout: 260 secs)&lt;BR /&gt;May 20 21:07:32 - Node "lhsap10": (watchdog): Watchdog process itself: WDPID=[29591]&lt;BR /&gt;May 20 21:07:35 - Node "lhsap10": (login_check lhsap10-be): TRACE POINT&lt;BR /&gt;May 20 21:07:37 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of root to App-Server host lhsap10-be working&lt;BR /&gt;May 20 21:07:38 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of pp7adm to App-Server host lhsap10-be working&lt;BR /&gt;May 20 21:07:38 - Node "lhsap10": (check_own_app): LINUX App-Server host lhsap10-be for instance JC90 responding&lt;BR /&gt;May 20 21:07:38 - Node "lhsap10": (check_own_app): Starting to check lhsap20-be for instance J93 ...&lt;BR /&gt;May 20 21:07:38 - Node "lhsap10": (is_node_alive lhsap20-be): TRACE POINT&lt;BR 
/&gt;May 20 21:07:38 - Node "lhsap10": (watchdog): Watchdog timer initiated for (PID: 29841 Timeout: 260 secs)&lt;BR /&gt;May 20 21:07:38 - Node "lhsap10": (watchdog): Watchdog process itself: WDPID=[29844]&lt;BR /&gt;May 20 21:07:40 - Node "lhsap10": (login_check lhsap20-be): TRACE POINT&lt;BR /&gt;May 20 21:07:42 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of root to App-Server host lhsap20-be working&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (login_check): ssh -p 224 -o ConnectTimeout=30 access of pp7adm to App-Server host lhsap20-be working&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (check_own_app): LINUX App-Server host lhsap20-be for instance J93 responding&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_own_app 2): TRACE POINT&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (treatment_test): (3 /\ 2)&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_own_app): Instance JC90 on host lhsap10-be not running - skipping step&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (treatment_test): (1 /\ 2)&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_own_app): Instance J93 on host lhsap20-be is configured to be excluded - skipping step&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_saposcol_app): TRACE POINT&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_saposcol_app): Configured to be skipped&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_addons_prejci): TRACE POINT&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_cs SCS): TRACE POINT&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_cs): Halt Java System Central Services Instance ...&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_direct SCS 80 131.195.119.230): TRACE POINT&lt;BR /&gt;May 20 21:07:43 - Node "lhsap10": (stop_direct): Direct shutdown attempt on local host...&lt;BR /&gt;May 20 21:07:45 - Node "lhsap10": (stop_direct): Instance on lsapepoci stopped&lt;BR /&gt;May 20 21:07:45 - Node "lhsap10": (stop_direct): Waiting for cleanup of resources.....&lt;BR 
/&gt;May 20 21:07:45 - Node "lhsap10": (stop_direct): Waiting for cleanup of resources with ps -ef|grep SCS80_lsapepoci|grep -v sapstartsrv&lt;BR /&gt;May 20 21:07:45 - Node "lhsap10": (watchdog): Watchdog timer initiated for (PID: 31215 Timeout: 260 secs)&lt;BR /&gt;May 20 21:07:45 - Node "lhsap10": (watchdog): Watchdog process itself: WDPID=[31217]&lt;BR /&gt;May 20 21:07:51 - Node "lhsap10": (stop_sapstartsrv SCS 80 131.195.119.230 LINUX pp7adm): TRACE POINT&lt;BR /&gt;May 20 21:07:51 - Node "lhsap10": (is_ip_local 131.195.119.230): TRACE POINT&lt;BR /&gt;May 20 21:07:51 - Node "lhsap10": (is_ip_local): 131.195.119.230 considered to be local&lt;BR /&gt;May 20 21:07:51 - Node "lhsap10": (stop_sapstartsrv): Instance Service shutdown attempt on local host...&lt;BR /&gt;May 20 21:07:53 - Node "lhsap10": (stop_sapstartsrv): There was no local instance service running for SCS80&lt;BR /&gt;May 20 21:07:53 - Node "lhsap10": (crit_test_app lsapepoci 80 pp7adm SCS 1): TRACE POINT&lt;BR /&gt;May 20 21:07:53 - Node "lhsap10": (crit_test_app): Trying to connect enqueue service of instance SCS80 ...&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (crit_test_app): No connection to instance SCS80: rc=8&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (crit_test_app): Instance SCS80 not responding&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (stop_addons_postjci): TRACE POINT&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (stop_addons_predb): TRACE POINT&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (stop_ORACLE_jdb): TRACE POINT&lt;BR /&gt;May 20 21:08:04 - Node "lhsap10": (ora_setenv): TRACE POINT&lt;BR /&gt;May 20 21:08:06 - Node "lhsap10": (ora_setenv): ORASID=orapp7 ORACLE_SID=PP7 ORACLE_HOME=/oracle/PP7/102_64 SAPDATA_HOME=/oracle/PP7&lt;BR /&gt;May 20 21:08:06 - Node "lhsap10": (stop_ORACLE_jdb): Halting J2EE database ...&lt;BR /&gt;May 20 21:08:21 - Node "lhsap10": (stop_ORACLE_jdb): J2EE Database stopped successfully&lt;BR /&gt;May 20 21:08:21 - Node "lhsap10": 
(ora_stop_listener): TRACE POINT&lt;BR /&gt;May 20 21:08:21 - Node "lhsap10": (ora_stop_listener): Stopping ORACLE listener LIST_PP7&lt;BR /&gt;May 20 21:08:30 - Node "lhsap10": (ora_stop_listener): The command completed successfully&lt;BR /&gt;May 20 21:08:30 - Node "lhsap10": (ora_wait): TRACE POINT&lt;BR /&gt;May 20 21:08:30 - Node "lhsap10": (ora_wait): Wait for Oracle shadow process cleanup (Timeout: 260 secs)&lt;BR /&gt;May 20 21:08:30 - Node "lhsap10": (ora_wait):  3792 ?        00:00:00 oracle&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": (stop_addons_postdb): TRACE POINT&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": (stop_saposcol): TRACE POINT&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": *** Done: Executing script [/opt/cmcluster/conf/PP7/sapwas.sh] MODE [stop]&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": (sapwas_main): Leaving SGeSAP stop runtime steps&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": *** Done: Executing script [/opt/cmcluster/conf/PP7/sapwas.sh] args [spawn]&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Remove IP address 10.1.1.230 from subnet 10.1.1.0&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Remove IP address 131.195.119.230 from subnet 131.195.119.0&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Stoping rmtab synchronization process&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unexporting filesystem on lsapepodb:/export/sapmnt/PP7&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unexporting filesystem on lhsap10-be:/export/sapmnt/PP7&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unexporting filesystem on lhsap20-be:/export/sapmnt/PP7&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /usr/sap/PP7/SCS80&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/mirrlogB&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/mirrlogA&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/origlogB&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting 
filesystem on /oracle/PP7/origlogA&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/oraarch&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapreorg&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdatatemp&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata4&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata3&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata2&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7/sapdata1&lt;BR /&gt;May 20 21:08:31 - Node "lhsap10": Unmounting filesystem on /oracle/PP7&lt;BR /&gt;May 20 21:08:32 - Node "lhsap10": Unmounting filesystem on /export/sapmnt/PP7&lt;BR /&gt;WARNING: Running fuser to remove anyone using the file system directly.&lt;BR /&gt;Cannot stat file /proc/4989/fd/100: Permission denied&lt;BR /&gt;Cannot stat file /proc/4990/fd/102: Permission denied&lt;BR /&gt;Cannot stat file /proc/4991/fd/102: Permission denied&lt;BR /&gt;umount: /export/sapmnt/PP7: device is busy&lt;BR /&gt;umount: /export/sapmnt/PP7: device is busy&lt;BR /&gt;ERROR: Function umount_fs; Failed to unmount /dev/vgPP7FIXE/lvsapmnt&lt;BR /&gt;May 20 21:08:32 - Node "lhsap10": Deactivating volume group vgPP7FIXE&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is 
busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;VG vgPP7FIXE is busy, will try deactivation...&lt;BR /&gt;          Can't deactivate volume group "vgPP7FIXE" with 1 open logical volume(s)&lt;BR /&gt;ERROR: Function deactivate_volume_group; Failed to deactivate vgPP7FIXE&lt;BR /&gt;Attempting to deltag to vg vgPP7FIXE...&lt;BR /&gt;deltag was successful on vg vgPP7FIXE.&lt;BR /&gt;May 20 21:09:02 - Node "lhsap10": Deactivating volume group vgPP7db01&lt;BR /&gt;Attempting to deltag to vg vgPP7db01...&lt;BR /&gt;deltag was successful on vg vgPP7db01.&lt;BR /&gt;###### Node "lhsap10": Package halted with ERROR at Wed May 20 21:09:03 EDT 2009 ######&lt;BR /&gt;</description>
      <pubDate>Tue, 16 Jun 2009 12:19:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439775#M82152</guid>
      <dc:creator>George Barbitsas</dc:creator>
      <dc:date>2009-06-16T12:19:27Z</dc:date>
    </item>
    <item>
      <title>Re: SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439776#M82153</link>
      <description>Hi &lt;BR /&gt;&lt;BR /&gt;I think your un_export_fs function works fine, since it does not complain about unexporting the file system.&lt;BR /&gt;Can you still see &lt;BR /&gt;/export/sapmnt/PP7 mounted on your client after the package halt script passes the unexport step?&lt;BR /&gt;&lt;BR /&gt;Check if your NFS and LVM patches are up to date.&lt;BR /&gt;Do you kill processes only from your client side?&lt;BR /&gt;&lt;BR /&gt;Regs</description>
      <pubDate>Wed, 17 Jun 2009 06:02:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439776#M82153</guid>
      <dc:creator>wci</dc:creator>
      <dc:date>2009-06-17T06:02:33Z</dc:date>
    </item>
    <item>
      <title>Re: SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439777#M82154</link>
      <description>Yes, when the application is not up on the other node and I issue a cmhaltpkg, everything usually goes well.&lt;BR /&gt;&lt;BR /&gt;When the application is up on the other node, the filesystem can't be unmounted because of pending I/O... not sure, but this seems to be an architectural problem, like one poster suggested...&lt;BR /&gt;&lt;BR /&gt;Anyone else have any ideas?</description>
      <pubDate>Wed, 17 Jun 2009 11:01:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439777#M82154</guid>
      <dc:creator>George Barbitsas</dc:creator>
      <dc:date>2009-06-17T11:01:28Z</dc:date>
    </item>
    <item>
      <title>Re: SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439778#M82155</link>
      <description>Is this, by chance, a 2-node SAP cluster with the central instance on one node and a dialogue instance (of the same SID) on the other?&lt;BR /&gt;&lt;BR /&gt;Armin&lt;BR /&gt;</description>
      <pubDate>Wed, 17 Jun 2009 13:45:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439778#M82155</guid>
      <dc:creator>Armin Kunaschik</dc:creator>
      <dc:date>2009-06-17T13:45:09Z</dc:date>
    </item>
    <item>
      <title>Re: SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439779#M82156</link>
      <description>It is a 2-node cluster with 2 packages on one node, and some SAP components (unclustered) on the other node, but these components NFS-mount the filesystem that is used by one of the two packages....&lt;BR /&gt;&lt;BR /&gt;When I issue a cmhaltpkg on that particular package, the exported mountpoint cannot be unmounted because of pending I/O</description>
      <pubDate>Wed, 17 Jun 2009 13:55:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439779#M82156</guid>
      <dc:creator>George Barbitsas</dc:creator>
      <dc:date>2009-06-17T13:55:55Z</dc:date>
    </item>
    <item>
      <title>Re: SG(SLES) vs NFS(soft and/or hard mounts)</title>
      <link>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439780#M82157</link>
      <description>As I said before, this is an architecture problem!&lt;BR /&gt;Place the other application into a SG package, regardless of whether it's switchable or not. In a working cluster, any application should be able to fail over to other nodes. The benefit of this is that you don't need to fail back the production application to get the non-cluster-aware application back to work.&lt;BR /&gt;If you did that, it's easy to insert a cmhaltpkg &amp;lt;dependent application&amp;gt; into the stop command section of the prod package.&lt;BR /&gt;On the start side you should create a dependency so that the &amp;lt;dependent package&amp;gt; will not start until the prod package is up. A bit more scripting/configuration is involved if the production crashes and fails over to the other node. In this case you need to bring up (in this order) HA-NFS, stop the &amp;lt;dependent application&amp;gt;. With 11.18, you can create pre-scripts and run the necessary actions.&lt;BR /&gt;&lt;BR /&gt;And the last thing: SGeSAP is not a big help with this setup. If you're able to script the SAP/application startup, you don't need SGeSAP. SGeSAP is bloated and slow and too expensive... but this is only a personal opinion.&lt;BR /&gt;&lt;BR /&gt;My 2 cents,&lt;BR /&gt;Armin&lt;BR /&gt;</description>
      <pubDate>Thu, 18 Jun 2009 08:17:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/sg-sles-vs-nfs-soft-and-or-hard-mounts/m-p/4439780#M82157</guid>
      <dc:creator>Armin Kunaschik</dc:creator>
      <dc:date>2009-06-18T08:17:11Z</dc:date>
    </item>
  </channel>
</rss>

