<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Strange things in DECNET+ in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686296#M73108</link>
    <description>Why no crash?&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; halted CPU 0&lt;BR /&gt;&amp;gt;&amp;gt; halt code = 2&lt;BR /&gt;&amp;gt;&amp;gt; kernel stack not valid halt&lt;BR /&gt;&amp;gt;&amp;gt; PC = ffffffff801551a4&lt;BR /&gt;&lt;BR /&gt;Probably because the designers decided that after a kernel stack corruption it was too dangerous to perform even a dump. It could corrupt the disk if the dump code or parameters got weird.&lt;BR /&gt;&lt;BR /&gt;Edwin</description>
    <pubDate>Thu, 08 Dec 2005 05:01:24 GMT</pubDate>
    <dc:creator>Edwin Gersbach_2</dc:creator>
    <dc:date>2005-12-08T05:01:24Z</dc:date>
    <item>
      <title>Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686287#M73099</link>
      <description>This morning I was caught by alarms leading to a node that was saturated at the DECnet level. VMS 7.3, patched until about 25-aug-2003. DECnet 7.3 ECO 3, dated 28-oct-2002.&lt;BR /&gt;&lt;BR /&gt;Further info in the attachment.&lt;BR /&gt;&lt;BR /&gt;Does anyone have any idea what happened and how to investigate after the reboot?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 07 Dec 2005 02:30:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686287#M73099</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-12-07T02:30:19Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686288#M73100</link>
      <description>Hi Wim&lt;BR /&gt;&lt;BR /&gt;As you can see, maximum transport connections is defined as 500, so the connection limit has been reached.&lt;BR /&gt;&lt;BR /&gt;You can change this with sys$startup:net$configure or by editing the file SYS$SPECIFIC:[SYSMGR]NET$NSP_TRANSPORT_STARTUP.NCL;&lt;BR /&gt;&lt;BR /&gt;As a guideline:&lt;BR /&gt;&lt;BR /&gt;select 1000 transport connections with a maximum window of 20 and maximum receive buffers of 20000.&lt;BR /&gt;&lt;BR /&gt;Be aware that maximum window has an upper limit of 65535 (not absolutely sure; it may be less).&lt;BR /&gt;&lt;BR /&gt;Hope that helps&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Heinz</description>
      <pubDate>Wed, 07 Dec 2005 02:51:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686288#M73100</guid>
      <dc:creator>Heinz W Genhart</dc:creator>
      <dc:date>2005-12-07T02:51:07Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686289#M73101</link>
      <description>H.,&lt;BR /&gt;&lt;BR /&gt;Forgot to mention that this is an AS1000 and that 500 is very high for this node.&lt;BR /&gt;&lt;BR /&gt;Normally about 50 connections are open.&lt;BR /&gt;&lt;BR /&gt;But I killed (almost) every process using connections and still all 500 were in use. One process I killed freed about 40 connections, but they were taken again within 1 minute.&lt;BR /&gt;So I guess there was some kind of attack over DECnet.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 07 Dec 2005 03:22:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686289#M73101</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-12-07T03:22:25Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686290#M73102</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;I would recommend NOT killing processes, but first displaying the list of active connections (and their originating and receiving processes) to a file.&lt;BR /&gt;&lt;BR /&gt;Deleting processes is like washing down a crime scene: it destroys evidence of what is happening (or has happened).&lt;BR /&gt;&lt;BR /&gt;The syntax for DECnet+ escapes me at the moment, but the Phase IV (NCP) syntax would be SHOW KNOWN LINKS.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Wed, 07 Dec 2005 07:37:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686290#M73102</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2005-12-07T07:37:13Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686291#M73103</link>
      <description>Bob,&lt;BR /&gt;&lt;BR /&gt;I tried that, but the programs (NCL and NET$MGMT) both hung. So I tried to kill the processes one by one until I killed the one that held the connections.&lt;BR /&gt;&lt;BR /&gt;In the meantime I found out that many other nodes logged "reject received" in the operator log file.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 07 Dec 2005 08:19:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686291#M73103</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-12-07T08:19:23Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686292#M73104</link>
      <description>Sometimes getting the machine operational again is given higher priority than finding out what caused the problem.&lt;BR /&gt;&lt;BR /&gt;Is there anything in the operator log or security audit about failed connections or attempts?&lt;BR /&gt;&lt;BR /&gt;Do you have any monitoring that might show what time the additional connections started?&lt;BR /&gt;&lt;BR /&gt;Were any processes in a resource wait state? (DECnet ones primarily)</description>
      <pubDate>Wed, 07 Dec 2005 17:54:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686292#M73104</guid>
      <dc:creator>Peter Zeiszler</dc:creator>
      <dc:date>2005-12-07T17:54:58Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686293#M73105</link>
      <description>Just a hint as to the crash reason:&lt;BR /&gt;&lt;BR /&gt;---------------&lt;BR /&gt;I found the TNS1 process still active and tried to kill it. That restarted the&lt;BR /&gt;system.&lt;BR /&gt;MXM01/MGRWVW&amp;gt;stop/id=000000A0&lt;BR /&gt;---------------&lt;BR /&gt;&lt;BR /&gt;But:&lt;BR /&gt;&lt;BR /&gt;000000A0 TCPIP$INETACP   HIB     10      691   0 00:00:13.57       217    144&lt;BR /&gt;0000012E AUDIT_CLIENT    LEF      6     1469   0 00:00:06.41       323    176&lt;BR /&gt;0001B62F TCPIP$TNS1      HIB      6      120   0 00:00:00.27       532     32&lt;BR /&gt;&lt;BR /&gt;So you killed INETACP, which I guess is hooked rather deeply into the kernel.&lt;BR /&gt;&lt;BR /&gt;Edwin</description>
      <pubDate>Thu, 08 Dec 2005 02:06:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686293#M73105</guid>
      <dc:creator>Edwin Gersbach_2</dc:creator>
      <dc:date>2005-12-08T02:06:26Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686294#M73106</link>
      <description>Peter,&lt;BR /&gt;&lt;BR /&gt;The monitoring was stuck itself.&lt;BR /&gt;&lt;BR /&gt;The process in RWxxx was a TPU session.&lt;BR /&gt;&lt;BR /&gt;No audit alarm.&lt;BR /&gt;&lt;BR /&gt;Nothing special in accounting.&lt;BR /&gt;&lt;BR /&gt;No log files with other error messages (on client or server).&lt;BR /&gt;&lt;BR /&gt;Because almost all DECnet-using processes were killed and still 500 connections were in use, I think it must be a DECnet bug. All nodes connected via DECnet had still delivered messages to the node (they were accepted and found back afterwards) but also received reject messages.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 08 Dec 2005 02:10:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686294#M73106</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-12-08T02:10:15Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686295#M73107</link>
      <description>Edwin,&lt;BR /&gt;&lt;BR /&gt;Very good. I made that mistake. But even that should not halt the system. Why was there no crash?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 08 Dec 2005 02:12:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686295#M73107</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-12-08T02:12:39Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686296#M73108</link>
      <description>Why no crash?&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; halted CPU 0&lt;BR /&gt;&amp;gt;&amp;gt; halt code = 2&lt;BR /&gt;&amp;gt;&amp;gt; kernel stack not valid halt&lt;BR /&gt;&amp;gt;&amp;gt; PC = ffffffff801551a4&lt;BR /&gt;&lt;BR /&gt;Probably because the designers decided that after a kernel stack corruption it was too dangerous to perform even a dump. It could corrupt the disk if the dump code or parameters got weird.&lt;BR /&gt;&lt;BR /&gt;Edwin</description>
      <pubDate>Thu, 08 Dec 2005 05:01:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686296#M73108</guid>
      <dc:creator>Edwin Gersbach_2</dc:creator>
      <dc:date>2005-12-08T05:01:24Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686297#M73109</link>
      <description>Edwin,&lt;BR /&gt;&amp;gt;&amp;gt; halted CPU 0&lt;BR /&gt;&amp;gt;&amp;gt; halt code = 2&lt;BR /&gt;&amp;gt;&amp;gt; kernel stack not valid halt&lt;BR /&gt;&amp;gt;&amp;gt; PC = ffffffff801551a4&lt;BR /&gt;&amp;gt;probably because the designers decided that after a kernel stack&lt;BR /&gt;&amp;gt;corruption it was too dangerous to perform even a dump. It could&lt;BR /&gt;&amp;gt;corrupt the disk if the dump code or parameters got weird.&lt;BR /&gt;&lt;BR /&gt;Not so: if your console is set up correctly, i.e. AUTO_ACTION is set to RESTART, then VMS will restart for the explicit purpose of taking an appropriate bugcheck. In this case it would have been KRNLSTAKNV.&lt;BR /&gt;There are of course situations where even this restricted restart is not possible, and others where the bugcheck code cannot write the dumpfile, but the ability to preserve the evidence after a pathological halt has always been present in VMS.&lt;BR /&gt;&lt;BR /&gt;JT:</description>
      <pubDate>Thu, 08 Dec 2005 05:54:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686297#M73109</guid>
      <dc:creator>John Travell</dc:creator>
      <dc:date>2005-12-08T05:54:20Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686298#M73110</link>
      <description>AUTO_ACTION is set to BOOT here.&lt;BR /&gt;&lt;BR /&gt;Has anyone had bad experiences with AUTO_ACTION=RESTART? E.g. the automatic reboot failing?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 08 Dec 2005 06:17:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686298#M73110</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-12-08T06:17:20Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686299#M73111</link>
      <description>We have not had a problem with AUTO_ACTION=RESTART. If there are any issues with having this set to RESTART, I would like to know also.&lt;BR /&gt;&lt;BR /&gt;We had to set all of ours to RESTART after an incident where a memory issue kept the system from creating a crash dump.</description>
      <pubDate>Thu, 08 Dec 2005 10:56:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686299#M73111</guid>
      <dc:creator>Peter Zeiszler</dc:creator>
      <dc:date>2005-12-08T10:56:20Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686300#M73112</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;I have no clue as to what happened, but&lt;BR /&gt;&amp;gt;&amp;gt; kernel stack not valid halt&lt;BR /&gt;should DEFINITELY be a reason to write a dump!&lt;BR /&gt;During the Crash Dump Analysis course prior to the last Bootcamp, one whole chapter was dedicated to just that kind of dump.&lt;BR /&gt;But you DO need the dumpfile...  :-(&lt;BR /&gt;&lt;BR /&gt;So basically you now have two problems:&lt;BR /&gt;a- What happened to DECnet?&lt;BR /&gt;b- WHY is there no dumpfile?&lt;BR /&gt;&lt;BR /&gt;Some help I am, eh?&lt;BR /&gt;Sorry.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Thu, 08 Dec 2005 13:42:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686300#M73112</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-12-08T13:42:21Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686301#M73113</link>
      <description>Jan,&lt;BR /&gt;&lt;BR /&gt;I was surprised to discover that no dump was written. I don't quite understand why in this case you have to specify RESTART to get the dump. The logic???&lt;BR /&gt;&lt;BR /&gt;In the meantime, I discovered that the problem began 6-dec at 0:05. I guess a collision between several DECnet things happening at the same time (T2T, NCL).&lt;BR /&gt;&lt;BR /&gt;In any case, I will classify this problem as a "very rare bug" and hope I never see it again. But if I do see it, I will crash the system myself.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 08 Dec 2005 14:43:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686301#M73113</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-12-08T14:43:15Z</dc:date>
    </item>
    <item>
      <title>Re: Strange things in DECNET+</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686302#M73114</link>
      <description>All, the point is that a KRNLSTAKNV is a pathological halt. When one of these has occurred, the CPU is in CONSOLE code, not VMS code.&lt;BR /&gt;Since the bugcheck code is part of VMS, it will not run UNLESS the CONSOLE takes action to restart VMS for the purpose of taking a restart bugcheck.&lt;BR /&gt;&lt;BR /&gt;AUTO_ACTION is NOT just for BOOT; it comes into play EVERY time an uncontrolled entry to console code occurs. Just about the ONLY thing that constitutes a controlled entry is the end of a shutdown (or a bugcheck), where VMS (and probably Unix) tells the console to expect a halt, and perhaps what other action to take as well (think power off, reboot).&lt;BR /&gt;&lt;BR /&gt;Power on, kernel stack not valid, double error, halt instruction: all are considered uncontrolled console entries and cause the console to do whatever AUTO_ACTION dictates.&lt;BR /&gt;&lt;BR /&gt;I DID once have a case where RESTART caused a problem, back in the days when V6.1 was current. A problem caused corruption of the system page table, which led to code winding down the stack. The KRNLSTAKNV triggered an AUTO_ACTION restart, the attempt to restart fell over the corrupted SPT, which led to another KRNLSTAKNV restart, which led to...&lt;BR /&gt;This problem was a bit of a bitch; it took us three weeks to fix it.&lt;BR /&gt;&lt;BR /&gt;The issues around RESTART are mainly related to whether you want a cluster node to rejoin immediately or not. There are some sites where a failed machine is left failed until the next 'reboot opportunity', whenever that may be. Turning off RESTART causes loss of the dump if a pathological halt occurs. For such a situation, a better solution may be to always stop at SYSBOOT and wait for a continue command.&lt;BR /&gt;&lt;BR /&gt;JT:</description>
      <pubDate>Thu, 08 Dec 2005 15:53:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/strange-things-in-decnet/m-p/3686302#M73114</guid>
      <dc:creator>John Travell</dc:creator>
      <dc:date>2005-12-08T15:53:32Z</dc:date>
    </item>
  </channel>
</rss>