<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: package requires cleanup on node in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/package-requires-cleanup-on-node/m-p/5229734#M669563</link>
    <description>The case opened with HP suggests this was introduced in Serviceguard 11.18; cmviewcl -f line on Serviceguard 11.17 does not include last_halt_failed in its output.&lt;BR /&gt;&lt;BR /&gt;Regardless of the version, the following script automatically performs the required cleanup before a cmhaltnode command is executed.&lt;BR /&gt;&lt;BR /&gt;/usr/sbin/cmviewcl -v -f line | grep "last_halt_failed=yes" | \&lt;BR /&gt;while read -r L_FAILED&lt;BR /&gt;do&lt;BR /&gt;    L_TEMP=${L_FAILED#package:}&lt;BR /&gt;    L_FAILED_PKG=${L_TEMP%%\|*}&lt;BR /&gt;    L_TEMP=${L_FAILED#*node:}&lt;BR /&gt;    L_FAILED_NODE=${L_TEMP%%\|*}&lt;BR /&gt;    L_TEMP=$(/usr/sbin/cmviewcl -v -f line -p "$L_FAILED_PKG" | \&lt;BR /&gt;            /usr/bin/grep "node:$L_FAILED_NODE\|available")&lt;BR /&gt;    L_AVAIL_PKG=${L_TEMP##*available=}&lt;BR /&gt;    if [[ $L_AVAIL_PKG = "no" ]]&lt;BR /&gt;    then&lt;BR /&gt;        echo "Enabling package $L_FAILED_PKG on node $L_FAILED_NODE"&lt;BR /&gt;        /usr/sbin/cmmodpkg -e -n "$L_FAILED_NODE" "$L_FAILED_PKG"&lt;BR /&gt;    fi&lt;BR /&gt;done</description>
    <pubDate>Thu, 18 Mar 2010 13:55:48 GMT</pubDate>
    <dc:creator>Ken Englander</dc:creator>
    <dc:date>2010-03-18T13:55:48Z</dc:date>
    <item>
      <title>package requires cleanup on node</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/package-requires-cleanup-on-node/m-p/5229730#M669559</link>
      <description>SG v11.19&lt;BR /&gt;We have a script that is intended to halt all packages on a specific node and then run cmhaltnode.  Depending on the circumstances, when the cmhaltnode command is run I sometimes get error messages similar to the following.&lt;BR /&gt;&lt;BR /&gt;Unable to halt the cluster: package crs_uxprf_clus requires cleanup on node hpux6.&lt;BR /&gt;Ensure that all package components are halted and run&lt;BR /&gt;  cmmodpkg -e -n hpux6 crs_uxprf_clus&lt;BR /&gt;to allow the node to halt.&lt;BR /&gt;Unable to halt the cluster: package cerner_hpux5 requires cleanup on node hpux6.&lt;BR /&gt;Ensure that all package components are halted and run&lt;BR /&gt;  cmmodpkg -e -n hpux6 cerner_hpux5&lt;BR /&gt;to allow the node to halt.&lt;BR /&gt;&lt;BR /&gt;I cannot work out why SG selects certain packages for enabling switching; I cannot figure out the pattern behind its logic.&lt;BR /&gt;&lt;BR /&gt;I'm attaching a file showing the cluster status (cmviewcl -v) after the cmhaltnode command was issued.  It also contains output from the cmviewcl -v -f line command.&lt;BR /&gt;&lt;BR /&gt;Any insights would be appreciated!&lt;BR /&gt;&lt;BR /&gt;Thanks!</description>
      <pubDate>Thu, 11 Mar 2010 22:44:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/package-requires-cleanup-on-node/m-p/5229730#M669559</guid>
      <dc:creator>Ken Englander</dc:creator>
      <dc:date>2010-03-11T22:44:11Z</dc:date>
    </item>
    <item>
      <title>Re: package requires cleanup on node</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/package-requires-cleanup-on-node/m-p/5229731#M669560</link>
      <description>Ken, Serviceguard requires cleanup of a package whose halt script has failed.  Do the package logs show this happened in the course of attempting to halt these packages through the automation script activity?&lt;BR /&gt;&lt;BR /&gt;Since cerner_hpux5 is dependent on crs_uxprf_clus, does your automation script halt the cerner_hpux5 package before attempting to halt the crs_uxprf_clus package?</description>
      <pubDate>Fri, 12 Mar 2010 13:18:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/package-requires-cleanup-on-node/m-p/5229731#M669560</guid>
      <dc:creator>Stephen Doud</dc:creator>
      <dc:date>2010-03-12T13:18:03Z</dc:date>
    </item>
    <item>
      <title>Re: package requires cleanup on node</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/package-requires-cleanup-on-node/m-p/5229732#M669561</link>
      <description>Hi Stephen,&lt;BR /&gt;&lt;BR /&gt;That appears to be exactly right; at least, that is what is unique in the output from cmviewcl -v -f line.  I cannot independently verify what happened, as this occurred at some point during multiple tests we were running on this cluster.&lt;BR /&gt;&lt;BR /&gt;Can you clarify one thing, please?  Is this new with SG 11.18 or 11.19?  I seem to recall reading that last_halt_failed was a new option, and I do not remember ever seeing this error prior to working with 11.19 (we skipped over 11.18).&lt;BR /&gt;&lt;BR /&gt;Thanks!&lt;BR /&gt;&lt;BR /&gt;Ken</description>
      <pubDate>Fri, 12 Mar 2010 22:27:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/package-requires-cleanup-on-node/m-p/5229732#M669561</guid>
      <dc:creator>Ken Englander</dc:creator>
      <dc:date>2010-03-12T22:27:45Z</dc:date>
    </item>
    <item>
      <title>Re: package requires cleanup on node</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/package-requires-cleanup-on-node/m-p/5229733#M669562</link>
      <description>I also find last_halt_ identifiers in A.11.18 'cmviewcl -v -f line' output, so I believe the framework was put in place in A.11.18.  I believe the 'requires cleanup' functionality was also in place in A.11.18, because it is used when reporting problems with predecessor and successor packages in SADTA clusters.&lt;BR /&gt;&lt;BR /&gt;I see that you have a case open on this, so I recommend pursuing this question through the support center.</description>
      <pubDate>Wed, 17 Mar 2010 14:48:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/package-requires-cleanup-on-node/m-p/5229733#M669562</guid>
      <dc:creator>Stephen Doud</dc:creator>
      <dc:date>2010-03-17T14:48:35Z</dc:date>
    </item>
    <item>
      <title>Re: package requires cleanup on node</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/package-requires-cleanup-on-node/m-p/5229734#M669563</link>
      <description>The case opened with HP suggests this was introduced in Serviceguard 11.18; cmviewcl -f line on Serviceguard 11.17 does not include last_halt_failed in its output.&lt;BR /&gt;&lt;BR /&gt;Regardless of the version, the following script automatically performs the required cleanup before a cmhaltnode command is executed.&lt;BR /&gt;&lt;BR /&gt;/usr/sbin/cmviewcl -v -f line | grep "last_halt_failed=yes" | \&lt;BR /&gt;while read -r L_FAILED&lt;BR /&gt;do&lt;BR /&gt;    L_TEMP=${L_FAILED#package:}&lt;BR /&gt;    L_FAILED_PKG=${L_TEMP%%\|*}&lt;BR /&gt;    L_TEMP=${L_FAILED#*node:}&lt;BR /&gt;    L_FAILED_NODE=${L_TEMP%%\|*}&lt;BR /&gt;    L_TEMP=$(/usr/sbin/cmviewcl -v -f line -p "$L_FAILED_PKG" | \&lt;BR /&gt;            /usr/bin/grep "node:$L_FAILED_NODE\|available")&lt;BR /&gt;    L_AVAIL_PKG=${L_TEMP##*available=}&lt;BR /&gt;    if [[ $L_AVAIL_PKG = "no" ]]&lt;BR /&gt;    then&lt;BR /&gt;        echo "Enabling package $L_FAILED_PKG on node $L_FAILED_NODE"&lt;BR /&gt;        /usr/sbin/cmmodpkg -e -n "$L_FAILED_NODE" "$L_FAILED_PKG"&lt;BR /&gt;    fi&lt;BR /&gt;done</description>
      <pubDate>Thu, 18 Mar 2010 13:55:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/package-requires-cleanup-on-node/m-p/5229734#M669563</guid>
      <dc:creator>Ken Englander</dc:creator>
      <dc:date>2010-03-18T13:55:48Z</dc:date>
    </item>
  </channel>
</rss>