<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic socket peek buffer sizes in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738044#M60435</link>
    <description>When I issue a socket peek read (TCPIP$C_MSG_PEEK) via the QIOW interface, I notice that QIOW always returns with a maximum byte count of 1024. Is this adjustable?  &lt;BR /&gt;&lt;BR /&gt;TIA.&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Thu, 13 Jan 2011 17:13:15 GMT</pubDate>
    <dc:creator>Elli M Barasch</dc:creator>
    <dc:date>2011-01-13T17:13:15Z</dc:date>
    <item>
      <title>socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738044#M60435</link>
      <description>When I issue a socket peek read (TCPIP$C_MSG_PEEK) via the QIOW interface, I notice that QIOW always returns with a maximum byte count of 1024. Is this adjustable?  &lt;BR /&gt;&lt;BR /&gt;TIA.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 13 Jan 2011 17:13:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738044#M60435</guid>
      <dc:creator>Elli M Barasch</dc:creator>
      <dc:date>2011-01-13T17:13:15Z</dc:date>
    </item>
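    <!-- Editor's note: the question above uses TCPIP$C_MSG_PEEK through the OpenVMS $QIO interface. For readers more familiar with BSD sockets, here is a minimal portable sketch of the same peek semantics; the demo_peek function and the socketpair transport are illustrative assumptions, not the poster's code. -->

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>

/* Peek at queued data without consuming it, then read it for real.
 * Returns 1 when both calls see the same 5 bytes. */
int demo_peek(void)
{
    int sv[2];
    char peeked[16], readout[16];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return 0;
    (void)send(sv[0], "hello", 5, 0);

    ssize_t p = recv(sv[1], peeked, sizeof peeked, MSG_PEEK); /* data stays queued */
    ssize_t r = recv(sv[1], readout, sizeof readout, 0);      /* data consumed now */

    close(sv[0]);
    close(sv[1]);
    return p == 5 && r == 5 && memcmp(peeked, readout, 5) == 0;
}
```

    <!-- A peek, like an ordinary read, returns whatever is currently queued, up to the buffer size; it does not wait for the full requested count. -->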
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738045#M60436</link>
      <description>Can we see a code fragment so we know a bit more about the environment (TCP/UDP) and how the read is done?</description>
      <pubDate>Thu, 13 Jan 2011 17:16:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738045#M60436</guid>
      <dc:creator>Richard Whalen</dc:creator>
      <dc:date>2011-01-13T17:16:46Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738046#M60437</link>
      <description>      int peek_count = MIN(howmuch,50000);&lt;BR /&gt;&lt;BR /&gt;#if 0&lt;BR /&gt;      printf("peeking %d bytes | ",peek_count);&lt;BR /&gt;      status = sys$qiow(0,conn_channel,IO$_READVBLK,&amp;amp;iosb,0,0,cursor, peek_count,0,TCPIP$C_MSG_PEEK,0,0);&lt;BR /&gt;      EXIT_IF_BAD(status,iosb);&lt;BR /&gt;      if (iosb.iosb$w_bcnt == 0)&lt;BR /&gt;        break;&lt;BR /&gt;      int count = iosb.iosb$w_bcnt;&lt;BR /&gt;      printf("peeked %d and now reading\n",iosb.iosb$w_bcnt);&lt;BR /&gt;      status = sys$qiow(0,conn_channel, IO$_READVBLK, &amp;amp;iosb,0,0, cursor,count,0, 0,0,0 );&lt;BR /&gt;      EXIT_IF_BAD(status,iosb);&lt;BR /&gt;      if (count != iosb.iosb$w_bcnt)&lt;BR /&gt;         printf("count mismatch, expd/got = %d/%d\n",count,iosb.iosb$w_bcnt);&lt;BR /&gt;#else&lt;BR /&gt;      printf("reading %d |",peek_count);&lt;BR /&gt;      status = sys$qiow(0,conn_channel, IO$_READVBLK, &amp;amp;iosb,0,0, cursor,peek_count,0, 0,0,0 );&lt;BR /&gt;      EXIT_IF_BAD(status,iosb);&lt;BR /&gt;      printf("got %d\n",iosb.iosb$w_bcnt);&lt;BR /&gt;#endif</description>
      <pubDate>Wed, 23 Feb 2011 17:32:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738046#M60437</guid>
      <dc:creator>Elli M Barasch</dc:creator>
      <dc:date>2011-02-23T17:32:59Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738047#M60438</link>
      <description>How much is 'howmuch'?&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Wed, 23 Feb 2011 20:24:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738047#M60438</guid>
      <dc:creator>GuentherF</dc:creator>
      <dc:date>2011-02-23T20:24:42Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738048#M60439</link>
      <description>Elli,&lt;BR /&gt;&lt;BR /&gt;  It may help to see how you created the socket. Please include the values of any constants. If you can work out the device name, the output of SHOW DEVICE/FULL BGnnnn might also be interesting.</description>
      <pubDate>Wed, 23 Feb 2011 21:04:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738048#M60439</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2011-02-23T21:04:47Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738049#M60440</link>
      <description>howmuch varies from 64000 down to 1. The code never issues a read QIO of more than 50000 bytes (as per the MIN statement).  howmuch is decremented by the iosb.iosb$w_bcnt on each iteration of the loop in which this code fragment executes. &lt;BR /&gt;&lt;BR /&gt;The loop terminates when a certain string is found in the stream, or when howmuch = 0.&lt;BR /&gt;&lt;BR /&gt;The code was written conditionally in two ways... One with a peek, one w/o a peek.  The results are identical: the QIO completion size always maxes out at 1024 bytes.&lt;BR /&gt;&lt;BR /&gt;Additionally, there is an 'interactive' part of this program that issues a "command" to the partner, prompting the partner to send a 5 byte response.  Even though I issue a 64 byte read, I always get a 5 byte I/O completion.  How does this work?  Is there an inter-byte timeout that causes the driver to complete the I/O? IOW, why doesn't my 64 byte read block? &lt;BR /&gt;&lt;BR /&gt;If I am correct in this "inter-byte timeout" assumption, is it possible that my network partner is pausing after it transmits every 1024 bytes?  The delay might be long enough for the driver to then complete the I/O and force me to issue additional reads.   I'd like to be able to increase this "inter-byte timeout" so that I can reduce the number of QIOs required.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 23 Feb 2011 21:06:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738049#M60440</guid>
      <dc:creator>Elli M Barasch</dc:creator>
      <dc:date>2011-02-23T21:06:09Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738050#M60441</link>
      <description>Kind of textbook socket set-up here...&lt;BR /&gt;&lt;BR /&gt; conn_sockchar.prot = TCPIP$C_TCP;&lt;BR /&gt; conn_sockchar.type = TCPIP$C_STREAM;&lt;BR /&gt; conn_sockchar.af   = TCPIP$C_AF_INET;&lt;BR /&gt;&lt;BR /&gt; struct timeval tmo={15,0};  // 2 second timeout on any I/O&lt;BR /&gt;&lt;BR /&gt; tmo_itemlist.length = sizeof(tmo);&lt;BR /&gt; tmo_itemlist.type = TCPIP$C_RCVTIMEO;&lt;BR /&gt; tmo_itemlist.address = &amp;amp;tmo;&lt;BR /&gt;&lt;BR /&gt; sockopt_itemlist.length = sizeof(tmo_itemlist);&lt;BR /&gt; sockopt_itemlist.type = TCPIP$C_SOCKOPT;&lt;BR /&gt; sockopt_itemlist.address = &amp;amp;tmo_itemlist;&lt;BR /&gt;&lt;BR /&gt; // whom are we connecting to?&lt;BR /&gt; CLEAR(serv_addr);&lt;BR /&gt; serv_addr.sin_family = TCPIP$C_AF_INET;&lt;BR /&gt; serv_addr.sin_port = htons(atoi(argv[2]));          // second arg is port number &lt;BR /&gt;&lt;BR /&gt; // set up socket type&lt;BR /&gt;  status = sys$qiow(0, conn_channel,  IO$_SETMODE,  &amp;amp;iosb,  0,   0, &amp;amp;conn_sockchar,   0, 0, 0,  0,  0 );&lt;BR /&gt;  EXIT_IF_BAD(status,iosb);&lt;BR /&gt;&lt;BR /&gt;  // set up socket timeout&lt;BR /&gt;  status = sys$qiow(0, conn_channel,  IO$_SETMODE,  &amp;amp;iosb,  0,   0, 0, 0, 0, 0, &amp;amp;sockopt_itemlist,  0 );&lt;BR /&gt;  EXIT_IF_BAD(status,iosb);&lt;BR /&gt;&lt;BR /&gt;  // connect to remote port&lt;BR /&gt;  status = sys$qiow(0,  conn_channel,  IO$_ACCESS,  &amp;amp;iosb, 0, 0,  0,  0, &amp;amp;serv_itemlist, 0,  0,  0  );&lt;BR /&gt;  EXIT_IF_BAD(status,iosb);&lt;BR /&gt;&lt;BR /&gt;  // Send the prompt  - a CR&lt;BR /&gt;  status = sys$qiow(0,conn_channel, IO$_WRITEVBLK,  &amp;amp;iosb, 0, 0, "\r", 1, 0, 0, 0, 0);&lt;BR /&gt;  EXIT_IF_BAD(status,iosb);&lt;BR /&gt;</description>
      <pubDate>Wed, 23 Feb 2011 22:11:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738050#M60441</guid>
      <dc:creator>Elli M Barasch</dc:creator>
      <dc:date>2011-02-23T22:11:22Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738051#M60442</link>
      <description>The timeout comment in the code above is wrong (the timeval is 15 seconds, not 2).  In any case, no matter what the time value is, the behavior is the same.</description>
      <pubDate>Wed, 23 Feb 2011 22:13:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738051#M60442</guid>
      <dc:creator>Elli M Barasch</dc:creator>
      <dc:date>2011-02-23T22:13:02Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738052#M60443</link>
      <description>"Even though I issue a 64 byte read, I always get a 5 byte I/O completion"&lt;BR /&gt;&lt;BR /&gt;You actually specified that you have a read buffer of 64 bytes. If there are only 5 bytes in the buffer then that is what you get.&lt;BR /&gt;&lt;BR /&gt;Typically with TCP/IP - a STREAM protocol - you have to read in a loop and extract your data. There is no guarantee on the amount of data you may get with a read.&lt;BR /&gt;&lt;BR /&gt;To change the default internal read buffer size do an IO$_SETMODE with TCPIP$C_RCVBUF.&lt;BR /&gt;&lt;BR /&gt;But no matter how large you make all the buffers (program's private or system buffer) the data stream typically is "hacked" into smaller pieces.&lt;BR /&gt;&lt;BR /&gt;Btw. the timeout only works if NO data has been received so far. Once a byte or more is in the system buffer the QIO call returns when either there is no more data in the system buffer or your program buffer has been filled.&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Wed, 23 Feb 2011 23:49:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738052#M60443</guid>
      <dc:creator>GuentherF</dc:creator>
      <dc:date>2011-02-23T23:49:24Z</dc:date>
    </item>
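    <!-- Editor's note: a sketch of the read-in-a-loop pattern Guenther describes, in portable BSD-socket C. read_exact and demo_read_exact are hypothetical helpers, not TCP/IP Services APIs: since each stream read may return fewer bytes than requested, the caller loops until the count has accumulated. -->

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>

/* Read exactly 'want' bytes from a stream socket, looping over
 * short reads.  Returns 0 on success, -1 on EOF or error. */
static int read_exact(int fd, char *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = recv(fd, buf + got, want - got, 0);
        if (n <= 0)            /* peer closed or error */
            return -1;
        got += (size_t)n;
    }
    return 0;
}

/* Demo: the sender hands over one message in two separate sends;
 * the receiving loop still assembles the full 10 bytes. */
int demo_read_exact(void)
{
    int sv[2];
    char buf[10];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return 0;
    (void)send(sv[0], "hello ", 6, 0);
    (void)send(sv[0], "vms!", 4, 0);

    int ok = read_exact(sv[1], buf, sizeof buf) == 0 &&
             memcmp(buf, "hello vms!", 10) == 0;
    close(sv[0]);
    close(sv[1]);
    return ok;
}
```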
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738053#M60444</link>
      <description>Guenther, I don't think you understood my question.&lt;BR /&gt;&lt;BR /&gt;Issuing a QIO on a stream isn't the same as issuing a QIO on a disk device.  There's no way to know if there's more in the stream coming, so the driver has to make a decision when to complete the I/O.  &lt;BR /&gt;&lt;BR /&gt;The streams I am reading are well over 20,000 bytes long.  If I issue a 50,000 byte QIOW read, I'd expect to block until all 50,000 bytes arrive.  This is not the case.  Similarly, the 64 byte read terminates after only 5 bytes have arrived.  The driver decides to complete the I/O well before the requested number of bytes arrive.  So what criteria does the driver use to complete the I/O?   The driver must use some sort of "no-activity" timer, or perhaps it uses the size of a single transport that was received off the wire.  If there is a timer, I am asking if there is any way to tune it.&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Elli</description>
      <pubDate>Thu, 24 Feb 2011 13:55:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738053#M60444</guid>
      <dc:creator>Elli M Barasch</dc:creator>
      <dc:date>2011-02-24T13:55:53Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738054#M60445</link>
      <description>&amp;gt;The streams I am reading are well over 20,000 bytes long. If I issue a 50,000 byte QIOW read, I'd expect to block until all 50,000 bytes arrive. This is not the case. Similarly, the 64 byte read terminates after only 5 bytes have arrived. The driver decides to complete the I/O well before the requested number of bytes arrive. So what criteria does the driver use to complete the I/O? The driver must use some sort of "no-activity" timer, or perhaps it uses the size of a single transport that was received off the wire. If there is a timer, I am asking if there is any way to tune it.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Ah, this old chestnut.&lt;BR /&gt;&lt;BR /&gt;As you quite correctly state, TCP provides a stream of data.  &lt;BR /&gt;&lt;BR /&gt;And you're presuming to treat that stream as a record device.&lt;BR /&gt;&lt;BR /&gt;It's not, unfortunately.&lt;BR /&gt;&lt;BR /&gt;UDP does give you something closer to the record-oriented model.   (Depending on exactly what is going on, UDP multicasts or even raw Ethernet datagrams can be a pretty handy solution to some classes of application problems, but I digress.)&lt;BR /&gt;&lt;BR /&gt;TCP is free to give you 50,000 separate I/Os of 1 byte each or the full 50,000 bytes in one shot.  Or 49,999 hits of 1 byte each, and then 50,000 bytes containing the last byte and most of the next transfer.  Or any combination between that.  &lt;BR /&gt;&lt;BR /&gt;With TCP-based application communications, you get to do the segmentation and window processing in your code.  Attempts to use timers to segment TCP traffic into records will tend to combine with the intervening IP routers and switches and other application traffic to conspire to find edge cases in socket code, too.&lt;BR /&gt;&lt;BR /&gt;I'd suggest looking for middleware.  Socket-level programming is something like programming assembler.  
It's possible, functional, feasible and such, but it's usually easier to punt on that and to use available networking libraries and available middleware packages.  Rolling your own is something that involves, well, dealing with TCP streams and buffers and such.  And you probably have application and customer code to write, rather than all of the glue code involved with socket programming.&lt;BR /&gt;&lt;BR /&gt;If you can't migrate to a middleware interface into the IP network (and there are certainly reasons why VMS application programmers might not find that feasible) then you'll have to deal with the sliding window yourself.  It's common to see a 2x buffer (or more) for the reads, and to aim the I/Os at the next available byte in the buffer.  Basically assembling the incoming data into the record structures.&lt;BR /&gt;&lt;BR /&gt;Double-buffering a TCP stream gets a little more ugly (and can involve rather more buffer copies than might be pleasant to perform on a busy server), as you don't really know how much you're going to get in response to each read.</description>
      <pubDate>Thu, 24 Feb 2011 15:03:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738054#M60445</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2011-02-24T15:03:16Z</dc:date>
    </item>
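    <!-- Editor's note: one way to do the segmentation Hoff describes is to accumulate reads into a buffer and scan for a terminator, which mirrors the "loop terminates when a certain string is found in the stream" logic from the original program. Everything below (read_until_marker, the socketpair transport, the END marker) is a hypothetical portable sketch, and it assumes the stream carries no NUL bytes since it scans with strstr. -->

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <string.h>
#include <unistd.h>

/* Accumulate stream data into 'buf' until 'marker' appears or the
 * buffer fills.  Returns the number of bytes buffered, or -1. */
static ssize_t read_until_marker(int fd, char *buf, size_t cap,
                                 const char *marker)
{
    size_t got = 0;
    while (got + 1 < cap) {
        ssize_t n = recv(fd, buf + got, cap - got - 1, 0);
        if (n <= 0)
            return -1;
        got += (size_t)n;
        buf[got] = '\0';
        if (strstr(buf, marker) != NULL)   /* terminator seen */
            return (ssize_t)got;
    }
    return -1;                             /* buffer exhausted */
}

int demo_read_until(void)
{
    int sv[2];
    char buf[64];

    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return 0;
    /* the "record" arrives split across two sends, as TCP is free to do */
    (void)send(sv[0], "partial dat", 11, 0);
    (void)send(sv[0], "a END\r\n", 7, 0);

    ssize_t n = read_until_marker(sv[1], buf, sizeof buf, "END\r\n");
    close(sv[0]);
    close(sv[1]);
    return n == 18;
}
```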
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738055#M60446</link>
      <description>I get it, then.  I'm seeing the actual transport as it arrives off the wire.  My network partner must be segmenting into 1024 byte frames -- and in the case of the 'short' packet - 5 bytes. So I get what I get.&lt;BR /&gt;&lt;BR /&gt;The driver does not wait; it just delivers as soon as something shows up, unless I provide the IO$M_LOCKBUF qualifier, in which case the I/O won't complete until I receive all the bytes I ask for or my partner closes the connection.  Unfortunately, that's not an option for me.&lt;BR /&gt;&lt;BR /&gt;Thanks for the help.  I just wanted an explanation.  My code works fine; I just thought I could tweak things a bit.</description>
      <pubDate>Thu, 24 Feb 2011 15:12:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738055#M60446</guid>
      <dc:creator>Elli M Barasch</dc:creator>
      <dc:date>2011-02-24T15:12:46Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738056#M60447</link>
      <description>With TCP/IP you typically include your own record header which indicates what is a record in your case. In the simplest form you just send a length value for the number of bytes to follow. Mostly you end up with a more sophisticated wrapper around your data to correctly identify a record and resynch to the true start of the next record.&lt;BR /&gt;&lt;BR /&gt;Good ole DECnet made life much easier! A true record oriented protocol.&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Thu, 24 Feb 2011 16:12:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738056#M60447</guid>
      <dc:creator>GuentherF</dc:creator>
      <dc:date>2011-02-24T16:12:51Z</dc:date>
    </item>
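    <!-- Editor's note: the record header Guenther suggests, in its simplest length-prefix form. This is a hypothetical portable sketch, not the thread's code; it picks a fixed big-endian wire order via htonl/ntohl and reuses a short-read loop so the 4-byte header and the payload each arrive whole. -->

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>      /* htonl/ntohl: fixed wire byte order */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

/* helper: loop until exactly 'want' bytes arrive (streams may split) */
static int read_exact(int fd, void *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = recv(fd, (char *)buf + got, want - got, 0);
        if (n <= 0)
            return -1;
        got += (size_t)n;
    }
    return 0;
}

/* Send one record as: 4-byte big-endian length, then the payload. */
static int send_record(int fd, const char *data, uint32_t len)
{
    uint32_t wire = htonl(len);
    if (send(fd, &wire, 4, 0) != 4)
        return -1;
    return send(fd, data, len, 0) == (ssize_t)len ? 0 : -1;
}

/* Receive one record; returns the payload length, or -1. */
static ssize_t recv_record(int fd, char *buf, size_t cap)
{
    uint32_t wire;
    if (read_exact(fd, &wire, 4) != 0)
        return -1;
    uint32_t len = ntohl(wire);
    if (len > cap || read_exact(fd, buf, len) != 0)
        return -1;
    return (ssize_t)len;
}

int demo_records(void)
{
    int sv[2];
    char buf[32];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return 0;
    send_record(sv[0], "first", 5);
    send_record(sv[0], "second!", 7);
    /* both records come back intact despite the byte-stream transport */
    int ok = recv_record(sv[1], buf, sizeof buf) == 5 &&
             recv_record(sv[1], buf, sizeof buf) == 7 &&
             memcmp(buf, "second!", 7) == 0;
    close(sv[0]);
    close(sv[1]);
    return ok;
}
```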
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738057#M60448</link>
      <description>Oh, forgot to mention. The amount of data you receive with one read has absolutely no relation to the size of the data sent in one call. So the 1024 bytes come from the fact that the BG driver's system buffer is 1024 bytes by default. If a nice burst of data is coming in, that is what you get.&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Thu, 24 Feb 2011 16:15:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738057#M60448</guid>
      <dc:creator>GuentherF</dc:creator>
      <dc:date>2011-02-24T16:15:38Z</dc:date>
    </item>
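    <!-- Editor's note: on the VMS side the answer in this thread is IO$_SETMODE with TCPIP$C_RCVBUF; the BSD-socket equivalent is setsockopt with SO_RCVBUF, sketched below as a portable illustration. The size the kernel actually grants is system-dependent (Linux, for instance, doubles the request for bookkeeping), so the demo only checks that it received at least what it asked for. -->

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

/* Ask for a larger receive buffer and report what was granted.
 * Returns the granted size, or -1 on error. */
int set_rcvbuf(int fd, int bytes)
{
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof bytes) != 0)
        return -1;
    int granted = 0;
    socklen_t len = sizeof granted;
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len) != 0)
        return -1;
    return granted;
}

int demo_rcvbuf(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;
    int granted = set_rcvbuf(fd, 65536);
    close(fd);
    return granted >= 65536;
}
```

    <!-- Note that, as the posts above explain, a bigger receive buffer raises the ceiling on how much one read can return, but it cannot force the sender's 1024-byte bursts to coalesce. -->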
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738058#M60449</link>
      <description>Guenther, Um. See my original question...  How do I tweak that?  :^)&lt;BR /&gt;</description>
      <pubDate>Thu, 24 Feb 2011 16:22:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738058#M60449</guid>
      <dc:creator>Elli M Barasch</dc:creator>
      <dc:date>2011-02-24T16:22:42Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738059#M60450</link>
      <description>As I mentioned before:&lt;BR /&gt;&lt;BR /&gt;To change the default internal read buffer size do an IO$_SETMODE with TCPIP$C_RCVBUF.&lt;BR /&gt;&lt;BR /&gt;I haven't played much with that but I doubt you'll see a significant performance gain with larger RCVBUF values.&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Thu, 24 Feb 2011 18:12:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738059#M60450</guid>
      <dc:creator>GuentherF</dc:creator>
      <dc:date>2011-02-24T18:12:56Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738060#M60451</link>
      <description>Sorry for the misunderstanding.</description>
      <pubDate>Thu, 24 Feb 2011 20:00:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738060#M60451</guid>
      <dc:creator>Elli M Barasch</dc:creator>
      <dc:date>2011-02-24T20:00:16Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738061#M60452</link>
      <description>Guenther, turns out you're 100% correct.  I raised the value to 4096 and saw no change in behavior.  But when I lowered the value to 512 I saw the I/Os complete at 512 bytes each.  This leads me to conclude that my partner is sending 1024 byte packets, and that's the best I'm going to do.&lt;BR /&gt;&lt;BR /&gt;Thanks again.</description>
      <pubDate>Thu, 24 Feb 2011 20:42:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738061#M60452</guid>
      <dc:creator>Elli M Barasch</dc:creator>
      <dc:date>2011-02-24T20:42:29Z</dc:date>
    </item>
    <item>
      <title>Re: socket peek buffer sizes</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738062#M60453</link>
      <description>"Guenther, turns out you're 100%"&lt;BR /&gt;&lt;BR /&gt;I never am. ;-)&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Thu, 24 Feb 2011 23:02:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/socket-peek-buffer-sizes/m-p/4738062#M60453</guid>
      <dc:creator>GuentherF</dc:creator>
      <dc:date>2011-02-24T23:02:50Z</dc:date>
    </item>
  </channel>
</rss>

