<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: direct io size in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134710#M61452</link>
    <description>Hi Amit! Thanks.&lt;BR /&gt;1. The test copy for reading was from disk to NL:&lt;BR /&gt;2. Would you explain the mechanism for setting the direct I/O size? Where might the Symmetrix caching come into play for it?&lt;BR /&gt;</description>
    <pubDate>Sun, 14 Dec 2003 00:34:26 GMT</pubDate>
    <dc:creator>eran_6</dc:creator>
    <dc:date>2003-12-14T00:34:26Z</dc:date>
    <item>
      <title>direct io size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134705#M61447</link>
      <description>We have two EMC boxes: Symmetrix &amp;amp; DMX. During a performance test on two identically defined disks in the two different boxes, DECPS returns different d_io_size values for reading/writing the same files:&lt;BR /&gt;for the Symmetrix - 128 pages, and for the DMX - 64 pages.&lt;BR /&gt;OVMS 7.3-1. XFC is present, but the files are set "no_cache".&lt;BR /&gt;The test was done with a simple COPY command from the same process.&lt;BR /&gt;&lt;BR /&gt;What may be the reason for this difference?</description>
      <pubDate>Wed, 03 Dec 2003 08:04:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134705#M61447</guid>
      <dc:creator>eran_6</dc:creator>
      <dc:date>2003-12-03T08:04:09Z</dc:date>
    </item>
    <item>
      <title>Re: direct io size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134706#M61448</link>
      <description>Hello visa,&lt;BR /&gt;when you write 128 pages or 64 pages, do you mean blocks?&lt;BR /&gt;Do you see allocated blocks or used blocks?&lt;BR /&gt;If you see allocated blocks, there may be a different cluster size between the two disks.&lt;BR /&gt;Type SHO DEV &lt;DEV&gt;/FUL and look at the details of the disk. Also use DIR/FULL to see allocated and used blocks.&lt;BR /&gt; &lt;BR /&gt;H.T.H.&lt;BR /&gt;Antoniov&lt;BR /&gt;</description>
      <pubDate>Wed, 03 Dec 2003 11:03:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134706#M61448</guid>
      <dc:creator>Antoniov.</dc:creator>
      <dc:date>2003-12-03T11:03:02Z</dc:date>
    </item>
    <item>
      <title>Re: direct io size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134707#M61449</link>
      <description>1. Pages in DECPS are the same as blocks.&lt;BR /&gt;&lt;BR /&gt;2. As written, the disk definitions are the same,&lt;BR /&gt;   so the cluster size is 362 for both.&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Thu, 04 Dec 2003 03:16:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134707#M61449</guid>
      <dc:creator>eran_6</dc:creator>
      <dc:date>2003-12-04T03:16:57Z</dc:date>
    </item>
    <item>
      <title>Re: direct io size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134708#M61450</link>
      <description>Just a few hints:&lt;BR /&gt;* The boxes will no doubt have their own controllers - which can differ in how they handle the chunks of data.&lt;BR /&gt;* Each box will have its own cache - out of VMS's control. "no_cache" is strictly VMS-bound, so it would not influence the EMC boxes. Handling of these caches is the boxes' concern. Since the boxes are different, there might be a difference in handling.&lt;BR /&gt;* If the path to one box passes through a different type of hardware (FC switches) than the other, this may cause a difference in page size, simply because of the ability of that different hardware to handle the data stream.&lt;BR /&gt;* If the hardware along the way is the same type but still distinct, it could (my guess) be that configuration is part of the problem. Even if the path to each box runs in parallel through the same switches, there could be a difference in configuration causing this difference.&lt;BR /&gt;</description>
      <pubDate>Thu, 04 Dec 2003 04:38:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134708#M61450</guid>
      <dc:creator>Willem Grooters</dc:creator>
      <dc:date>2003-12-04T04:38:17Z</dc:date>
    </item>
    <item>
      <title>Re: direct io size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134709#M61451</link>
      <description>You may want to check which disk is the source of the copy operation.&lt;BR /&gt;If you are copying sym ---&amp;gt; sym, that might&lt;BR /&gt;explain your results, because the data may still&lt;BR /&gt;be in cache.&lt;BR /&gt;</description>
      <pubDate>Fri, 12 Dec 2003 09:16:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134709#M61451</guid>
      <dc:creator>Amit Levin</dc:creator>
      <dc:date>2003-12-12T09:16:13Z</dc:date>
    </item>
    <item>
      <title>Re: direct io size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134710#M61452</link>
      <description>Hi Amit! Thanks.&lt;BR /&gt;1. The test copy for reading was from disk to NL:&lt;BR /&gt;2. Would you explain the mechanism for setting the direct I/O size? Where might the Symmetrix caching come into play for it?&lt;BR /&gt;</description>
      <pubDate>Sun, 14 Dec 2003 00:34:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/direct-io-size/m-p/3134710#M61452</guid>
      <dc:creator>eran_6</dc:creator>
      <dc:date>2003-12-14T00:34:26Z</dc:date>
    </item>
  </channel>
</rss>