<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: %SYSTEM-W-DATAOVERUN, data overrun in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771689#M76096</link>
    <description>Lisa,&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;14:13:11.71 Receive QIO issued&lt;BR /&gt;14:13:11.71 Receive AST delivered 20 bytes&lt;BR /&gt;---&amp;gt; DAT msg 4104 - 6A 42 47 44 63 4F 4F 37 52 43 38 31 01 D4 B1 03 10 04 06 08&lt;BR /&gt;---&amp;gt; CRC msg 2 - 73AA&lt;BR /&gt;DAP status code of 50C8 generated&lt;BR /&gt;&amp;lt;--- STS msg 4 - 50 C8 00 09&lt;BR /&gt;&amp;lt;--- CRC msg 2 - 278F&lt;BR /&gt;14:13:11.71 XMT QIO complete, 6 bytes&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;This seems to be a different kind of error!&lt;BR /&gt;&lt;BR /&gt;The DAP status message returned is:&lt;BR /&gt;&lt;BR /&gt;0x50C8 = MAC: 5 MIC: 310 (in octal notation)&lt;BR /&gt;&lt;BR /&gt;MAC code 5 indicates: FILE_XFER - Error encountered while file was open&lt;BR /&gt;&lt;BR /&gt;MIC code 310 (octal) seems to indicate: CRC error&lt;BR /&gt;&lt;BR /&gt;If you look at the exchange of DAP messages in your trace, this would make sense: the DAP status message is returned immediately after receiving the CRC message. So something got corrupted in transit over the network. RMS/FAL use end-to-end CRC checks for additional data protection.&lt;BR /&gt;&lt;BR /&gt;You're using DECnet-over-IP (indicated by the IP$... remote node name string at the beginning of NET$SERVER.LOG).&lt;BR /&gt;&lt;BR /&gt;What error did you get on the node running the BACKUP command?&lt;BR /&gt;&lt;BR /&gt;You should be able to test the reliability of the network connection between the two nodes using&lt;BR /&gt;&lt;BR /&gt;NCL&amp;gt; LOOP loopback applic name domain:10.100.50.18, length 4096, count 1000&lt;BR /&gt;&lt;BR /&gt;You can also add ,FORMAT xx to specify the hex bit pattern to be used inside the looped messages. If any data corruption occurs, there will be an error message; there will be no message if the loopback test succeeds.&lt;BR /&gt;&lt;BR /&gt;If this really is a true CRC error, the corruption can occur anywhere: on the sending node, in the network, or on the receiving node.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
    <pubDate>Thu, 04 May 2006 01:34:44 GMT</pubDate>
    <dc:creator>Volker Halle</dc:creator>
    <dc:date>2006-05-04T01:34:44Z</dc:date>
    <item>
      <title>%SYSTEM-W-DATAOVERUN, data overrun</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771685#M76092</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;We are trying to copy data from an Alpha running OpenVMS 7.3-1 to an Itanium running OpenVMS V8.2-1. Because we need an entire directory structure copied, not just data within one directory, we are using the BACKUP command. We are attempting to do the backup across DECnet and we are receiving an error. The save set on the destination node gets created and starts to allocate thousands of blocks. The process is then interrupted with a data overrun. Here are the messages we receive.&lt;BR /&gt;&lt;BR /&gt;NXXXXA&amp;gt;@BCK2NXXB.COM&lt;BR /&gt;$ set ver&lt;BR /&gt;$ show time&lt;BR /&gt; 13-APR-2006 09:45:47&lt;BR /&gt;$ backup/list=dra5:[000000]bck2nxxb.lst -&lt;BR /&gt;       dra5:[PMDF...] -&lt;BR /&gt;       nxxxxb"bkbkbk xxxxxxxx"::$1$DKC2:[000000]nXXa.bck/sav&lt;BR /&gt;%BACKUP-F-WRITEERR, error writing NXXXXB"bkbkbk password"::$1$DKC2:[000000]NXXA.BCK;1&lt;BR /&gt;-RMS-F-SYS, QIO system service request failed&lt;BR /&gt;-SYSTEM-F-LINKABORT, network partner aborted logical link&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;NXXXXB FAL Log =================================================&lt;BR /&gt;&lt;BR /&gt;NXXXXB&amp;gt;type sys$manager:NET$SERVER.LOG&lt;BR /&gt;$ Set NoOn&lt;BR /&gt;$ VERIFY = F$VERIFY(F$TRNLNM("SYLOGIN_VERIFY"))&lt;BR /&gt;&lt;BR /&gt;       --------------------------------------------------------&lt;BR /&gt;&lt;BR /&gt;       Connect request received at 13-APR-2006 09:46:12.88&lt;BR /&gt;           from remote process IP$10.100.50.18::"0=BKBKBK"&lt;BR /&gt;           for object "SYS$COMMON:[SYSEXE]FAL.EXE"&lt;BR /&gt;&lt;BR /&gt;       --------------------------------------------------------&lt;BR /&gt;&lt;BR /&gt;%SYSTEM-W-DATAOVERUN, data overrun&lt;BR /&gt;            job terminated at 13-APR-2006 09:52:00.97&lt;BR /&gt;&lt;BR /&gt; Accounting information:&lt;BR /&gt; Buffered I/O count:              55932      Peak working set size:       5760&lt;BR /&gt; Direct I/O count:                 5000      Peak virtual size:         177808&lt;BR /&gt; Page faults:                      1220      Mounted volumes:                0&lt;BR /&gt; Charged CPU time:        0 00:00:02.44      Elapsed time:       0 00:11:06.35&lt;BR /&gt;&lt;BR /&gt;We are copying mailboxes and emails, so we will have to shut down email while we are moving the users' data. We realize we could do a backup on the same node where the data sits, then copy the saveset over to the new node and unpack it there, but we are trying to save time by doing the backup and copy in one step if possible. Any suggestions would be appreciated.&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Lisa Collins&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 13 Apr 2006 13:33:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771685#M76092</guid>
      <dc:creator>Lisa Collins</dc:creator>
      <dc:date>2006-04-13T13:33:41Z</dc:date>
    </item>
    <item>
      <title>Re: %SYSTEM-W-DATAOVERUN, data overrun</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771686#M76093</link>
      <description>Have you tried:&lt;BR /&gt;&lt;BR /&gt;SET RMS_DEFAULT /NETWORK_BLOCK_COUNT = bigger_number&lt;BR /&gt;&lt;BR /&gt;and/or&lt;BR /&gt;&lt;BR /&gt;BACKUP /BLOCK_SIZE = smaller_number&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Any suggestions would be appreciated.&lt;BR /&gt;&lt;BR /&gt;Always a risk before you see the suggestions.</description>
      <pubDate>Thu, 13 Apr 2006 16:39:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771686#M76093</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2006-04-13T16:39:23Z</dc:date>
    </item>
    <item>
      <title>Re: %SYSTEM-W-DATAOVERUN, data overrun</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771687#M76094</link>
      <description>Lisa,&lt;BR /&gt;&lt;BR /&gt;for further analysis, consider a DEF/SYS FAL$LOG FF on the remote node and repeat your BACKUP operation. You'll find a full FAL DAP trace in the remote NET$SERVER.LOG and should be able to find out exactly which operation fails with DATAOVERUN.&lt;BR /&gt;&lt;BR /&gt;Note that there was a patch for a similar-sounding problem in VMS732_RMS-V0200. Check the value of the NETWORK BLOCK COUNT on both systems with SHOW RMS.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 14 Apr 2006 01:24:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771687#M76094</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-04-14T01:24:07Z</dc:date>
    </item>
    <item>
      <title>Re: %SYSTEM-W-DATAOVERUN, data overrun</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771688#M76095</link>
      <description>Volker, &lt;BR /&gt;&lt;BR /&gt;Sorry it has taken so long to get back to your reply. I did what you said and here is the latter part of my net$server.log&lt;BR /&gt;&lt;BR /&gt;14:13:11.70 Receive  QIO issued&lt;BR /&gt;14:13:11.70 Receive  AST delivered  4106 bytes&lt;BR /&gt;---&amp;gt; DAT msg  4104 - 39 6C 4F 4B 63 42 7A 75 74 70 75 35 01 D4 A1 03 10 04 06 08&lt;BR /&gt;---&amp;gt; CRC msg     2 - 6356&lt;BR /&gt;14:13:11.70 Receive  QIO issued&lt;BR /&gt;14:13:11.70 Receive  AST delivered  4106 bytes&lt;BR /&gt;---&amp;gt; DAT msg  4104 - 66 45 79 4C 41 2F 37 56 4F 50 5A 5A 01 D4 A9 03 10 04 06 08&lt;BR /&gt;---&amp;gt; CRC msg     2 - C4D3&lt;BR /&gt;14:13:11.71 Receive  QIO issued&lt;BR /&gt;14:13:11.71 Receive  AST delivered    20 bytes&lt;BR /&gt;---&amp;gt; DAT msg  4104 - 6A 42 47 44 63 4F 4F 37 52 43 38 31 01 D4 B1 03 10 04 06 08&lt;BR /&gt;---&amp;gt; CRC msg     2 - 73AA&lt;BR /&gt;DAP status code of 50C8 generated&lt;BR /&gt;&amp;lt;--- STS msg     4 - 50 C8 00 09&lt;BR /&gt;&amp;lt;--- CRC msg     2 - 278F&lt;BR /&gt;14:13:11.71 XMT QIO complete,      6 bytes&lt;BR /&gt;&lt;BR /&gt;Logical link was terminated on    3-MAY-2006 14:13:11.71&lt;BR /&gt;Mailbox message type 0035 received&lt;BR /&gt;&lt;BR /&gt;Total connect time for logical link was    0 00:01:48.32&lt;BR /&gt;Total CPU time used for connection was     0 00:00:01.97&lt;BR /&gt;&lt;BR /&gt;File Access Statistics for RECV-Side XMIT-Side Composite&lt;BR /&gt;-------------------------- --------- --------- ---------&lt;BR /&gt;# DAP Message QIO Calls        15004         4     15008&lt;BR /&gt;# DAP Messages Exchanged       16671         6     16677&lt;BR /&gt;# User Records/Blocks          16665         0     16665&lt;BR /&gt;# Bytes of User Data        61435904         0  61435904&lt;BR /&gt;# Bytes in DAP Layer        61590203        88  61590291&lt;BR /&gt;User Data Throughput (bps)         0         0         0&lt;BR /&gt;DAP Layer Throughput (bps)         0         0 
        0&lt;BR /&gt;Average Record/Block Size          0         0         0&lt;BR /&gt;% User Data in DAP Layer        0.0%      0.0%      0.0%&lt;BR /&gt;-------------------------- --------- --------- ---------&lt;BR /&gt;&lt;BR /&gt;Negotiated DAP buffer size = 4156 bytes&lt;BR /&gt;Buffered I/O count during connection = 31235&lt;BR /&gt;Direct I/O count during connection   = 9636&lt;BR /&gt;Peak working set size for process = 5648 pages&lt;BR /&gt;&lt;BR /&gt;Successful Start Transaction Branch = 0&lt;BR /&gt;Start Transaction Branch loops      = 0&lt;BR /&gt;&lt;BR /&gt;Total RECV_WAIT = 969 and XMIT_WAIT not kept&lt;BR /&gt;Total READ_WAIT not kept and WRIT_WAIT = 1067&lt;BR /&gt;Defered AST PUT's = 230, Lost AST logging messages = 1&lt;BR /&gt;COUNTER1 = 0 and COUNTER2 = 0&lt;BR /&gt;COUNTER3 = 0 and COUNTER4 = 0&lt;BR /&gt;&lt;BR /&gt;FAL terminated execution on       3-MAY-2006 14:13:11.72&lt;BR /&gt;========================================================&lt;BR /&gt;&lt;BR /&gt;A Show RMS on the originating server shows:&lt;BR /&gt;&lt;BR /&gt;$ sh rms&lt;BR /&gt;          MULTI-  |                MULTIBUFFER COUNTS               | NETWORK&lt;BR /&gt;          BLOCK   | Indexed  Relative            Sequential         |  BLOCK&lt;BR /&gt;          COUNT   |                     Disk   Magtape  Unit Record |  COUNT&lt;BR /&gt;Process     0     |    0         0        0       0         0       |    0&lt;BR /&gt;System     32     |    0         0        0       0         0       |    8&lt;BR /&gt;&lt;BR /&gt;On the remote node it shows the same&lt;BR /&gt;&lt;BR /&gt;          MULTI-  |                MULTIBUFFER COUNTS               | NETWORK&lt;BR /&gt;          BLOCK   | Indexed  Relative            Sequential         |  BLOCK&lt;BR /&gt;          COUNT   |                     Disk   Magtape  Unit Record |  COUNT&lt;BR /&gt;Process     0     |    0         0        0       0         0       |    0&lt;BR /&gt;System     32     |    0         0        0      
 0         0       |    8&lt;BR /&gt;&lt;BR /&gt;Thank you, Lisa</description>
      <pubDate>Wed, 03 May 2006 13:18:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771688#M76095</guid>
      <dc:creator>Lisa Collins</dc:creator>
      <dc:date>2006-05-03T13:18:09Z</dc:date>
    </item>
    <item>
      <title>Re: %SYSTEM-W-DATAOVERUN, data overrun</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771689#M76096</link>
      <description>Lisa,&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;14:13:11.71 Receive QIO issued&lt;BR /&gt;14:13:11.71 Receive AST delivered 20 bytes&lt;BR /&gt;---&amp;gt; DAT msg 4104 - 6A 42 47 44 63 4F 4F 37 52 43 38 31 01 D4 B1 03 10 04 06 08&lt;BR /&gt;---&amp;gt; CRC msg 2 - 73AA&lt;BR /&gt;DAP status code of 50C8 generated&lt;BR /&gt;&amp;lt;--- STS msg 4 - 50 C8 00 09&lt;BR /&gt;&amp;lt;--- CRC msg 2 - 278F&lt;BR /&gt;14:13:11.71 XMT QIO complete, 6 bytes&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;This seems to be a different kind of error!&lt;BR /&gt;&lt;BR /&gt;The DAP status message returned is:&lt;BR /&gt;&lt;BR /&gt;0x50C8 = MAC: 5 MIC: 310 (in octal notation)&lt;BR /&gt;&lt;BR /&gt;MAC code 5 indicates: FILE_XFER - Error encountered while file was open&lt;BR /&gt;&lt;BR /&gt;MIC code 310 (octal) seems to indicate: CRC error&lt;BR /&gt;&lt;BR /&gt;If you look at the exchange of DAP messages in your trace, this would make sense: the DAP status message is returned immediately after receiving the CRC message. So something got corrupted in transit over the network. RMS/FAL use end-to-end CRC checks for additional data protection.&lt;BR /&gt;&lt;BR /&gt;You're using DECnet-over-IP (indicated by the IP$... remote node name string at the beginning of NET$SERVER.LOG).&lt;BR /&gt;&lt;BR /&gt;What error did you get on the node running the BACKUP command?&lt;BR /&gt;&lt;BR /&gt;You should be able to test the reliability of the network connection between the two nodes using&lt;BR /&gt;&lt;BR /&gt;NCL&amp;gt; LOOP loopback applic name domain:10.100.50.18, length 4096, count 1000&lt;BR /&gt;&lt;BR /&gt;You can also add ,FORMAT xx to specify the hex bit pattern to be used inside the looped messages. If any data corruption occurs, there will be an error message; there will be no message if the loopback test succeeds.&lt;BR /&gt;&lt;BR /&gt;If this really is a true CRC error, the corruption can occur anywhere: on the sending node, in the network, or on the receiving node.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 04 May 2006 01:34:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771689#M76096</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-05-04T01:34:44Z</dc:date>
    </item>
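    <!-- The MAC/MIC split of the DAP status word described in the post above can be reproduced with a small sketch (Python is used here purely for illustration; the high-4-bit MACCODE / low-12-bit MICCODE layout follows the DAP specification, and MICCODE values are conventionally quoted in octal):

```python
# Decode a 16-bit DAP STS (status) code into its MACCODE and MICCODE
# fields: the high 4 bits give the error class, the low 12 bits the
# specific error. MICCODE values are conventionally quoted in octal.

def decode_dap_status(status):
    maccode = status // 0x1000   # high 4 bits (error class)
    miccode = status % 0x1000    # low 12 bits (specific error)
    return maccode, oct(miccode)

mac, mic = decode_dap_status(0x50C8)
print(mac, mic)   # 5 0o310 -- MAC 5 (FILE_XFER), MIC 310 octal (CRC error)
```

    This confirms the decoding quoted in the thread: 0x50C8 splits into MAC code 5 and MIC code 0xC8 = 200 decimal = 310 octal. -->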
    <item>
      <title>Re: %SYSTEM-W-DATAOVERUN, data overrun</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771690#M76097</link>
      <description>Lisa,&lt;BR /&gt;&lt;BR /&gt;just for reference, here is a pointer to the DAP protocol specification:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://ftp.digital.com/pub/DEC/DECnet/PhaseIV/dap_v5_6_0.txt" target="_blank"&gt;http://ftp.digital.com/pub/DEC/DECnet/PhaseIV/dap_v5_6_0.txt&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;This is the protocol used between RMS and FAL for remote file access operations in DECnet.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 04 May 2006 01:43:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/system-w-dataoverun-data-overrun/m-p/3771690#M76097</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-05-04T01:43:55Z</dc:date>
    </item>
  </channel>
</rss>

