<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: I/O on VMS in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309300#M2785</link>
    <description>Willem wrote...&lt;BR /&gt;&amp;gt; IIRC, record IO (using RMS) has a limit of 32K bytes per IO. &lt;BR /&gt;&amp;gt; You may be able to do MUCH more if you bypass RMS - but I wouldn't recommend that.&lt;BR /&gt;&lt;BR /&gt;This is not exactly right. &lt;BR /&gt;The operative word being 'record IO'.&lt;BR /&gt;The maximum RECORD size is close to 32K, but the actual IO size can be up to 127 blocks (of 512 bytes) for sequential files and 63 blocks for indexed and relative files.&lt;BR /&gt;&lt;BR /&gt;RMS is also willing and able to do unbuffered IO through the SYS$READ and SYS$WRITE calls, with a maximum of 127 blocks (16 bits unsigned) for the 'normal' RAB. On Alpha, by using a RAB64 you can specify a 32-bit size for a buffer of up to 2**31-1 bytes. This may be limited by the targeted device.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/731FINAL/4523/4523pro_032.html#read_service_routine" target="_blank"&gt;http://h71000.www7.hp.com/doc/731FINAL/4523/4523pro_032.html#read_service_routine&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Wim recommends against bypassing RMS, but that really depends on the application. For 'records' in a file, RMS can help a lot (with buffering, sharing, transparent handling of records crossing buffer boundaries, read-ahead, file extents for writes, and so on).&lt;BR /&gt;&lt;BR /&gt;Even for blocks in a file you may want to use RMS record mode (with UDF records, to get read-ahead / write-behind). &lt;BR /&gt;But for minimal CPU usage, you may want to go down to block IO through SYS$READ / SYS$WRITE, and there are several good reasons to use the VMS native IO function: SYS$QIO(W).&lt;BR /&gt;&lt;BR /&gt;If you need further help, then please describe your application/intended use in more detail.&lt;BR /&gt;&lt;BR /&gt;Also... if you think about going low level, then be sure to check out the IO Reference manual: &lt;A href="http://h71000.www7.hp.com/doc/732FINAL/aa-pv6sf-tk/aa-pv6sf-tk.HTMl" target="_blank"&gt;http://h71000.www7.hp.com/doc/732FINAL/aa-pv6sf-tk/aa-pv6sf-tk.HTMl&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;&lt;BR /&gt;Hein.</description>
    <pubDate>Sat, 19 Jun 2004 17:00:40 GMT</pubDate>
    <dc:creator>Hein van den Heuvel</dc:creator>
    <dc:date>2004-06-19T17:00:40Z</dc:date>
    <item>
      <title>I/O on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309296#M2781</link>
      <description>Hello, does anyone know what is the largest single I/O VMS can do?</description>
      <pubDate>Fri, 18 Jun 2004 10:21:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309296#M2781</guid>
      <dc:creator>Gary_167</dc:creator>
      <dc:date>2004-06-18T10:21:04Z</dc:date>
    </item>
    <item>
      <title>Re: I/O on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309297#M2782</link>
      <description>IIRC, record IO (using RMS) has a limit of 32K bytes per IO. You may be able to do MUCH more if you bypass RMS - but I wouldn't recommend that.&lt;BR /&gt;&lt;BR /&gt;Willem</description>
      <pubDate>Fri, 18 Jun 2004 10:27:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309297#M2782</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2004-06-18T10:27:57Z</dc:date>
    </item>
    <item>
      <title>Re: I/O on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309298#M2783</link>
      <description>The 'HP OpenVMS I/O User's Reference Manual' at &lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/732FINAL/aa-pv6sf-tk/aa-pv6sf-tk.PDF" target="_blank"&gt;http://h71000.www7.hp.com/doc/732FINAL/aa-pv6sf-tk/aa-pv6sf-tk.PDF&lt;/A&gt; says, in reference to Logical I/Os:&lt;BR /&gt;&lt;BR /&gt;"Non-DSA disk devices can read or write up to 65,535 bytes in a single request. DSA devices connected to an HSC50 can transfer up to 4 billion bytes in a single request. In all cases, the maximum size of the transfer is limited by the number of pages that can be faulted into the process' working set, and then locked into physical memory."&lt;BR /&gt;&lt;BR /&gt;(DSA devices would be those on MSCP-speaking controllers.)</description>
      <pubDate>Fri, 18 Jun 2004 12:46:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309298#M2783</guid>
      <dc:creator>Keith Parris</dc:creator>
      <dc:date>2004-06-18T12:46:47Z</dc:date>
    </item>
    <item>
      <title>Re: I/O on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309299#M2784</link>
      <description>Talking about impressive numbers....&lt;BR /&gt;&lt;BR /&gt;Let's start a competition: who can DEMONSTRATE (verifiably) the largest transfer?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&amp;lt;8-])&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Fri, 18 Jun 2004 12:51:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309299#M2784</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-06-18T12:51:34Z</dc:date>
    </item>
    <item>
      <title>Re: I/O on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309300#M2785</link>
      <description>Willem wrote...&lt;BR /&gt;&amp;gt; IIRC, record IO (using RMS) has a limit of 32K bytes per IO. &lt;BR /&gt;&amp;gt; You may be able to do MUCH more if you bypass RMS - but I wouldn't recommend that.&lt;BR /&gt;&lt;BR /&gt;This is not exactly right. &lt;BR /&gt;The operative word being 'record IO'.&lt;BR /&gt;The maximum RECORD size is close to 32K, but the actual IO size can be up to 127 blocks (of 512 bytes) for sequential files and 63 blocks for indexed and relative files.&lt;BR /&gt;&lt;BR /&gt;RMS is also willing and able to do unbuffered IO through the SYS$READ and SYS$WRITE calls, with a maximum of 127 blocks (16 bits unsigned) for the 'normal' RAB. On Alpha, by using a RAB64 you can specify a 32-bit size for a buffer of up to 2**31-1 bytes. This may be limited by the targeted device.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/731FINAL/4523/4523pro_032.html#read_service_routine" target="_blank"&gt;http://h71000.www7.hp.com/doc/731FINAL/4523/4523pro_032.html#read_service_routine&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Wim recommends against bypassing RMS, but that really depends on the application. For 'records' in a file, RMS can help a lot (with buffering, sharing, transparent handling of records crossing buffer boundaries, read-ahead, file extents for writes, and so on).&lt;BR /&gt;&lt;BR /&gt;Even for blocks in a file you may want to use RMS record mode (with UDF records, to get read-ahead / write-behind). &lt;BR /&gt;But for minimal CPU usage, you may want to go down to block IO through SYS$READ / SYS$WRITE, and there are several good reasons to use the VMS native IO function: SYS$QIO(W).&lt;BR /&gt;&lt;BR /&gt;If you need further help, then please describe your application/intended use in more detail.&lt;BR /&gt;&lt;BR /&gt;Also... if you think about going low level, then be sure to check out the IO Reference manual: &lt;A href="http://h71000.www7.hp.com/doc/732FINAL/aa-pv6sf-tk/aa-pv6sf-tk.HTMl" target="_blank"&gt;http://h71000.www7.hp.com/doc/732FINAL/aa-pv6sf-tk/aa-pv6sf-tk.HTMl&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;&lt;BR /&gt;Hein.</description>
      <pubDate>Sat, 19 Jun 2004 17:00:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309300#M2785</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2004-06-19T17:00:40Z</dc:date>
    </item>
    <item>
      <title>Re: I/O on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309301#M2786</link>
      <description>Hein,&lt;BR /&gt;&lt;BR /&gt;My mistake, you're right on record size vs. IO size - they don't have to be the same.&lt;BR /&gt;On bypassing RMS: yes, it depends. If your application doesn't have to interact with 'native' (RMS-using) applications, I think it's OK to bypass RMS. But otherwise? What about locking, journalling...&lt;BR /&gt;&lt;BR /&gt;(I'm not that familiar with the low-level issues of RMS - thanks, Hein, for the link)</description>
      <pubDate>Sun, 20 Jun 2004 14:57:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309301#M2786</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2004-06-20T14:57:36Z</dc:date>
    </item>
    <item>
      <title>Re: I/O on VMS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309302#M2787</link>
      <description>In a discussion on comp.os.vms recently (entitled "Split I/Os to contiguous file???"), a couple of people pointed out that the UCB field UCB$L_MAXBCNT indicates the maximum size of transfer (in units of bytes) that the device will support. Rob Brooks gave this example there for an HSG80-attached disk:&lt;BR /&gt;&lt;BR /&gt;"$ ANAL/SYS&lt;BR /&gt;SDA&amp;gt; read iodef&lt;BR /&gt;SDA&amp;gt; SHOW DEVICE $1$DGA12&lt;BR /&gt;[stuff appears on screen]&lt;BR /&gt;SDA&amp;gt; EXAM UCB+UCB$L_MAXBCNT&lt;BR /&gt;UCB+00190:  00000000.00020000   "........"&lt;BR /&gt;SDA&amp;gt; EVAL 20000 / ^D512 ! bytes per block&lt;BR /&gt;Hex = 00000000.00000100   Decimal = 256"&lt;BR /&gt;&lt;BR /&gt;I took a brief look at the listings for DKDRIVER and DUDRIVER for 7.3-2.&lt;BR /&gt;&lt;BR /&gt;DUDRIVER seems to have had an upper limit of 2^24 bytes introduced at 6.2 to work around a problem related to path switching. And for a disk served by the VMS MSCP Server, the maximum seems to be 127 blocks.&lt;BR /&gt;&lt;BR /&gt;The default for DKDRIVER seems to be 127 blocks, but it can do as high as 256 blocks if the port hardware can support it.&lt;BR /&gt;</description>
      <pubDate>Mon, 21 Jun 2004 12:19:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/i-o-on-vms/m-p/3309302#M2787</guid>
      <dc:creator>Keith Parris</dc:creator>
      <dc:date>2004-06-21T12:19:27Z</dc:date>
    </item>
  </channel>
</rss>

