<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: $REPLY/URGENT messaging in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/reply-urgent-messaging/m-p/5753951#M28392</link>
    <description>&lt;P&gt;I don't think there's a supported way to clear pending OPCOM messages.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Presumably the noise has died down by now, but if you have a repeat, I'd guess you can clear the buffer by stopping and restarting OPCOM. The process should be killable with STOP/ID. You can then restart it by executing SYS$STARTUP:VMS$CONFIG-050_OPCOM from SYSTEM.&lt;/P&gt;</description>
    <pubDate>Mon, 06 Aug 2012 21:26:19 GMT</pubDate>
    <dc:creator>John Gillings</dc:creator>
    <dc:date>2012-08-06T21:26:19Z</dc:date>
    <item>
      <title>$REPLY/URGENT messaging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/reply-urgent-messaging/m-p/5753639#M28387</link>
      <description>&lt;P&gt;Over this past weekend we had an issue with a DCL procedure which ran amok. Within the procedure is this line: &amp;nbsp;$repl/bell/urgent/user "blah blah blah".&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Due to an unforeseen issue this repl message was broadcast 60,000+ times!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The message is coming from the sys$batch queue of a 4-node OpenVMS cluster. Two nodes share one system disk and the other two nodes each have their own system disk.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The repl messaging finished on the broadcasting nodes quite some time ago, but the messaging continues on the remaining two nodes of the cluster, which is causing the user community some consternation/confusion.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A temporary workaround was to add $set broadcast=nourgent to sylogin.com.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What I would like to do is find the buffer (network?) where the remaining messages are coming from. Is this possible and, if so, where would I look?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Mon, 06 Aug 2012 14:19:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/reply-urgent-messaging/m-p/5753639#M28387</guid>
      <dc:creator>Ranger1</dc:creator>
      <dc:date>2012-08-06T14:19:31Z</dc:date>
    </item>
    <item>
      <title>Re: $REPLY/URGENT messaging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/reply-urgent-messaging/m-p/5753951#M28392</link>
      <description>&lt;P&gt;I don't think there's a supported way to clear pending OPCOM messages.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Presumably the noise has died down by now, but if you have a repeat, I'd guess you can clear the buffer by stopping and restarting OPCOM. The process should be killable with STOP/ID. You can then restart it by executing SYS$STARTUP:VMS$CONFIG-050_OPCOM from SYSTEM.&lt;/P&gt;</description>
      <pubDate>Mon, 06 Aug 2012 21:26:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/reply-urgent-messaging/m-p/5753951#M28392</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2012-08-06T21:26:19Z</dc:date>
    </item>
    <item>
      <title>Re: $REPLY/URGENT messaging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/reply-urgent-messaging/m-p/5755115#M28394</link>
      <description>&lt;P&gt;Thanks John.&amp;nbsp; The messages have cleared, so no need to kill OPCOM.&amp;nbsp; (Duh.&amp;nbsp; Why didn't I think of that?!)&lt;/P&gt;&lt;P&gt;What is odd, though, is that none of the messaging is going to operator.log. I would have thought any $repl/to= would go there, but these 60,000 spurious messages are not.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regardless, the messages have finally given up the ghost.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Aug 2012 14:00:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/reply-urgent-messaging/m-p/5755115#M28394</guid>
      <dc:creator>Ranger1</dc:creator>
      <dc:date>2012-08-07T14:00:19Z</dc:date>
    </item>
    <item>
      <title>Re: $REPLY/URGENT messaging</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/reply-urgent-messaging/m-p/5756213#M28396</link>
      <description>&lt;P&gt;The REPLY command issues a broadcast using the $BRKTHRU system service. OPCOM is NOT involved in this operation.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A cluster-wide broadcast is handled by the CLUSTER_SERVER process on the other nodes in the cluster.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So stopping OPCOM would not have achieved anything.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Volker.&lt;/P&gt;</description>
      <pubDate>Wed, 08 Aug 2012 07:25:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/reply-urgent-messaging/m-p/5756213#M28396</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2012-08-08T07:25:25Z</dc:date>
    </item>
  </channel>
</rss>

