<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic NISC_MAX_PKTSZ in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210326#M62265</link>
    <description>I have a cluster that is doing its cluster communications via FDDI. Therefore we limit the size of NISC_MAX_PKTSZ to 4468.&lt;BR /&gt;AUTOGEN calculated a value of 8192.&lt;BR /&gt;Does anyone have any idea why AUTOGEN does that?</description>
    <pubDate>Fri, 05 Mar 2004 04:36:43 GMT</pubDate>
    <dc:creator>Wim Van den Wyngaert</dc:creator>
    <dc:date>2004-03-05T04:36:43Z</dc:date>
    <item>
      <title>NISC_MAX_PKTSZ</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210326#M62265</link>
      <description>I have a cluster that is doing its cluster communications via FDDI. Therefore we limit the size of NISC_MAX_PKTSZ to 4468.&lt;BR /&gt;AUTOGEN calculated a value of 8192.&lt;BR /&gt;Does anyone have any idea why AUTOGEN does that?</description>
      <pubDate>Fri, 05 Mar 2004 04:36:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210326#M62265</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-03-05T04:36:43Z</dc:date>
    </item>
    <item>
      <title>Re: NISC_MAX_PKTSZ</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210327#M62266</link>
      <description>This parameter is the maximum size, and PEDRIVER will automatically adjust the size actually used.&lt;BR /&gt;From SYSGEN HELP: On Alpha, to optimize performance, the default value is the largest packet size currently supported by OpenVMS.&lt;BR /&gt; &lt;BR /&gt;My advice is to use the default value and let PEDRIVER handle the different sizes for the different adapters and LANs.</description>
      <pubDate>Fri, 05 Mar 2004 06:51:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210327#M62266</guid>
      <dc:creator>Åge Rønning</dc:creator>
      <dc:date>2004-03-05T06:51:29Z</dc:date>
    </item>
    <item>
      <title>Re: NISC_MAX_PKTSZ</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210328#M62267</link>
      <description>I've got a cluster with Ethernet and FDDI. NISC_MAX_PKTSZ is the max that can be used, but PEDRIVER works out the max size for each link. AUTOGEN needs to pick a size at least as big as the max possible.&lt;BR /&gt;</description>
      <pubDate>Fri, 05 Mar 2004 07:30:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210328#M62267</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2004-03-05T07:30:24Z</dc:date>
    </item>
    <item>
      <title>Re: NISC_MAX_PKTSZ</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210329#M62268</link>
      <description>it does that in all versions after 7.3 (or so)&lt;BR /&gt;if you want an explanation, look in the 7.3 release notes (see &lt;A href="http://h71000.www7.hp.com/DOC/73final/6637/6637pro_007.html" target="_blank"&gt;http://h71000.www7.hp.com/DOC/73final/6637/6637pro_007.html&lt;/A&gt; )&lt;BR /&gt;&lt;BR /&gt;there are also some caveats when you have mixed versions and satellites booting over fddi...&lt;BR /&gt;&lt;BR /&gt;-anders</description>
      <pubDate>Thu, 11 Mar 2004 10:01:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210329#M62268</guid>
      <dc:creator>Anders Sundqvist</dc:creator>
      <dc:date>2004-03-11T10:01:12Z</dc:date>
    </item>
    <item>
      <title>Re: NISC_MAX_PKTSZ</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210330#M62269</link>
      <description>Hello Wim,&lt;BR /&gt;&lt;BR /&gt;maybe things have changed since, but when we introduced FDDI (AXP VMS 6.2(-?)), the party line was that to efficiently use FDDI you had to SPECIFICALLY specify the famous 4468.&lt;BR /&gt;Explanation:&lt;BR /&gt;the initial handshake trial is at NISCS_MAX_PKTSZ. If that fails, the next try is at 1498 (IMMSMW, at least something like that), being the 10 Mb Ethernet "large" packet size (and if that doesn't shake, go progressively lower...).&lt;BR /&gt;Now, FDDI supports up to 4468.&lt;BR /&gt;So, try 8192, no shake, &amp;amp; land at LPSIZE (~1500) :=&amp;gt; two-thirds of your packet is unused :=&amp;gt; you need 3 times as many packets.&lt;BR /&gt;&lt;BR /&gt;I never heard or read that it changed for any 7.x, but... ;-)&lt;BR /&gt;&lt;BR /&gt;I don't suppose AUTOGEN checks whether you use FDDI...&lt;BR /&gt;&lt;BR /&gt;fwiw,&lt;BR /&gt;&lt;BR /&gt;Jan&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Mar 2004 10:19:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210330#M62269</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-03-11T10:19:42Z</dc:date>
    </item>
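Jan's packet-count arithmetic above can be sketched in a few lines. This is only an illustration, not OpenVMS code: the 8192, 4468, and 1498 figures come from the thread, and the single-step fallback models the pre-7.3 behaviour Jan recalls (at 7.3+, PEDRIVER probes the real path maximum instead).

```python
import math

def packets_needed(payload_bytes, pkt_size):
    """Number of packets required to carry a payload at a given packet size."""
    return math.ceil(payload_bytes / pkt_size)

# Jan's scenario: the handshake at NISCS_MAX_PKTSZ = 8192 fails on FDDI
# (which tops out around 4468), so the link falls back to the ~1500-byte
# Ethernet "large" packet size.
fddi_max = 4468
ethernet_fallback = 1498

# A 4468-byte block fits in 1 packet at the FDDI size, but takes 3 packets
# at the fallback size, i.e. roughly 3x the packet count Jan describes.
print(packets_needed(4468, fddi_max))           # 1
print(packets_needed(4468, ethernet_fallback))  # 3
```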
    <item>
      <title>Re: NISC_MAX_PKTSZ</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210331#M62270</link>
      <description>Looks like it is to buffer Gigabit Ethernet and ATM traffic (I didn't find it on the first try, because the name is slightly wrong - it is NISC*S*_MAX_PKTSZ).&lt;BR /&gt;&lt;BR /&gt;There is a table in that paragraph:&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/DOC/73final/6637/6637pro_007.html" target="_blank"&gt;http://h71000.www7.hp.com/DOC/73final/6637/6637pro_007.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;5.16.3 NISCS_MAX_PKTSZ System Parameter Definition Corrected&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Mar 2004 12:39:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210331#M62270</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-03-11T12:39:45Z</dc:date>
    </item>
    <item>
      <title>Re: NISC_MAX_PKTSZ</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210332#M62271</link>
      <description>ATM is supported as a cluster interconnect, and ATM presently supports packet payload sizes up to 9180 bytes, which is the maximum value for NISCS_MAX_PKTSZ under SYSGEN.&lt;BR /&gt;&lt;BR /&gt;Actual maximum packet sizes supported by various Gigabit Ethernet hardware products vary from 2K to 16K bytes -- there is no single standard. (For example, details for various Cisco Catalyst models may be found at &lt;A href="http://www.cisco.com/warp/public/473/148.html" target="_blank"&gt;http://www.cisco.com/warp/public/473/148.html&lt;/A&gt;.) And experts feel the CRC protection provided becomes insufficient in theory at about 10K bytes in packet size, so something on the order of 9K bytes is likely to become most common in practice, I'd guess. &lt;BR /&gt;&lt;BR /&gt;At 7.3 and above, PEDRIVER probes for the actual maximum packet size which gets through a given path at a given point in time, so the only "hurt" involved in specifying a larger-than-necessary value for NISCS_MAX_PKTSZ is that there will be a bit of wasted memory, as all packet buffers for PEDRIVER will be allocated at the larger size.&lt;BR /&gt;&lt;BR /&gt;I've submitted a Problem Tracking Report to get the SYSGEN help and the documentation updated and corrected.</description>
      <pubDate>Fri, 19 Mar 2004 12:43:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/nisc-max-pktsz/m-p/3210332#M62271</guid>
      <dc:creator>Keith Parris</dc:creator>
      <dc:date>2004-03-19T12:43:48Z</dc:date>
    </item>
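Keith's trade-off (at 7.3+, an oversized NISCS_MAX_PKTSZ only costs buffer memory, since PEDRIVER probes the real path maximum) can be quantified with a rough sketch. The buffer count of 100 below is a hypothetical figure for illustration, not a documented PEDRIVER default.

```python
def wasted_buffer_bytes(num_buffers, configured_pktsz, needed_pktsz):
    """Extra memory consumed when every PEDRIVER packet buffer is
    allocated at a larger-than-necessary NISCS_MAX_PKTSZ."""
    return num_buffers * (configured_pktsz - needed_pktsz)

# Hypothetical: 100 buffers, 8192 configured vs. 4468 actually needed on FDDI.
print(wasted_buffer_bytes(100, 8192, 4468))  # 372400 bytes of extra buffer space
```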
  </channel>
</rss>

