<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Disk IO retry - OpenVMS 7.3-2 in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996542#M83951</link>
    <description>If you run BACKUP it can throw a large number of I/Os at the disk, and seeing a delay of 0.6 seconds is not unusual at all. So it depends on the quotas that you gave the backup process. I've seen people give it a DIOLM of 4096; guess what happens to the response time of one I/O if you queue that many I/Os to the disk.&lt;BR /&gt;&lt;BR /&gt;Jur.</description>
    <pubDate>Wed, 30 May 2007 03:07:12 GMT</pubDate>
    <dc:creator>Jur van der Burg</dc:creator>
    <dc:date>2007-05-30T03:07:12Z</dc:date>
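Jur's point above - that a DIOLM of 4096 lets a process queue enough I/Os to wreck the response time of any single I/O - is simple queueing arithmetic. A minimal sketch of that arithmetic (the service rate of 1000 I/Os per second is an assumed figure for illustration, not a number from the thread):

```python
def queue_wait_seconds(queue_depth: int, service_rate_per_sec: float) -> float:
    """Time the last queued I/O waits if the device completes requests
    one at a time at a fixed service rate (simple FIFO model)."""
    return queue_depth / service_rate_per_sec

# With DIOLM allowing 4096 outstanding I/Os against a device that
# completes 1000 I/Os per second, the last request waits ~4 seconds:
print(queue_wait_seconds(4096, 1000.0))  # → 4.096
```

Under this model even a modest queue dwarfs the 0.6-second delays mentioned above, which is Jur's point about generous quotas.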
    <item>
      <title>Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996520#M83929</link>
      <description>We have a four-node ES45 cluster: a shared system disk and nine other shared disks. We do not shadow any disks. &lt;BR /&gt;We have an EMC storage array that does RAID for us and presents OpenVMS with 10 disks.&lt;BR /&gt;We are running OpenVMS 7.3-2 with Update V8. Yes, I know we are a little behind with the updates. We access the disks via all four nodes in the cluster, with 2 HBA cards per server. Both cards are single port and have fibre cables connected to them. We do not MSCP serve disks between nodes.&lt;BR /&gt;A few weeks ago someone did a reconfig of some type on the EMC storage. As a result, IO to the disks on all 4 nodes was stalled for 4.9 seconds. Then things resumed.&lt;BR /&gt;The EMC support team claim that no other connected systems were affected. These other systems being Windows and Solaris. They also claim that the stall in IO would have been only approx 1 second.&lt;BR /&gt;My points are the following:&lt;BR /&gt;- The other systems might not measure anything more than a second of stall as an outage.&lt;BR /&gt;&lt;BR /&gt;- If the outage was only approx 1 second, then IO would have stalled for only 1 second and not 4.9.&lt;BR /&gt;Would this be the case?&lt;BR /&gt;Could a 1 second stall in IO cause VMS to stall IO for approx 4.9 seconds?&lt;BR /&gt;I checked the operator logs and other logs. No multipath switching took place during the IO stall.&lt;BR /&gt;&lt;BR /&gt;We consider a .5 second outage as application unavailable. Our cluster is about as realtime as you can get: cluster RECNXINTERVAL set as low as 4 seconds, with associated params.&lt;BR /&gt;&lt;BR /&gt;Comments?</description>
      <pubDate>Wed, 09 May 2007 03:53:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996520#M83929</guid>
      <dc:creator>Kevin Raven (UK)</dc:creator>
      <dc:date>2007-05-09T03:53:22Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996521#M83930</link>
      <description>Kevin,&lt;BR /&gt;&lt;BR /&gt;could an IO error have triggered mount verification on the disks?&lt;BR /&gt;&lt;BR /&gt;Mount verifications might not be logged to OPCOM - see the MVSUPMSG_INTVL and MVSUPMSG_NUM sysgen parameters.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Wed, 09 May 2007 05:14:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996521#M83930</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2007-05-09T05:14:59Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996522#M83931</link>
      <description>Hi Kevin&lt;BR /&gt;&lt;BR /&gt;My response to this situation...&lt;BR /&gt;The word glib comes to mind.&lt;BR /&gt;&lt;BR /&gt;ANY IO delay is unacceptable!&lt;BR /&gt;&lt;BR /&gt;I would tell the people managing the EMC box to fix it.&lt;BR /&gt;(I.e. if the EMC box was working before, then it can work again.)&lt;BR /&gt;&lt;BR /&gt;So what did they change?&lt;BR /&gt;Have these delays started on ALL systems since the EMC revision?&lt;BR /&gt;Does the change to the EMC imply revising the FABRIC configuration?&lt;BR /&gt;&lt;BR /&gt;You claim there's no path switching going on, which could account for a delay, as there's obviously a problem with the new configuration.&lt;BR /&gt;&lt;BR /&gt;To confirm this, do a:&lt;BR /&gt;$ SHOW DEVICE &lt;ALL_EMC_DISKS&gt; /FULL&lt;BR /&gt;&lt;BR /&gt;and see whether all the "operations completed" counts are where you expect them to be.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Steven</description>
      <pubDate>Wed, 09 May 2007 05:16:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996522#M83931</guid>
      <dc:creator>Steve-Thompson</dc:creator>
      <dc:date>2007-05-09T05:16:18Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996523#M83932</link>
      <description>The storage guys will never do a config change during production time again, so we will not see any further 1 second or 4.9 second IO delays. I just wanted to get to the bottom of how a 1 second delay in IO on the EMC storage (if that was indeed the case!) can translate into a 4.9 second stall on the VMS servers.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 May 2007 05:27:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996523#M83932</guid>
      <dc:creator>Kevin Raven (UK)</dc:creator>
      <dc:date>2007-05-09T05:27:09Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996524#M83933</link>
      <description>"Kevin,&lt;BR /&gt;&lt;BR /&gt;could an IO error have triggered mount-verification on the disks ?&lt;BR /&gt;&lt;BR /&gt;Mount-verifications might not be logged to OPCOM, see the MVSUPMSG_INTVL and MVSUPMSG_NUM sysgen parameters.&lt;BR /&gt;&lt;BR /&gt;Volker."&lt;BR /&gt;&lt;BR /&gt;$ mc sysgen show MVSUPMSG&lt;BR /&gt;Parameter Name           Current    Default     Min.      Max.     Unit  Dynamic&lt;BR /&gt;--------------           -------    -------    -------   -------   ----  -------&lt;BR /&gt;MVSUPMSG_INTVL               3600       3600         0         -1 Seconds    D&lt;BR /&gt;MVSUPMSG_NUM                    5          5         0         -1 Pure-numbe D&lt;BR /&gt;$</description>
      <pubDate>Wed, 09 May 2007 05:47:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996524#M83933</guid>
      <dc:creator>Kevin Raven (UK)</dc:creator>
      <dc:date>2007-05-09T05:47:57Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996525#M83934</link>
      <description>Could the error have led to a queue-full status being reported back to VMS for that storage controller port, with VMS then backing off sending I/O for a while?</description>
      <pubDate>Wed, 09 May 2007 05:59:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996525#M83934</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2007-05-09T05:59:47Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996526#M83935</link>
      <description>Read this too: &lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1066685" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1066685&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Maybe the minimum recovery time is about 5 seconds?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 09 May 2007 09:13:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996526#M83935</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2007-05-09T09:13:11Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996527#M83936</link>
      <description>I also have EMC storage on Alpha, running Update V7 and Fibre-SCSI V9. We have a dedicated DMX3000 attached to 19 Alphas with either 2 HBAs or 4. &lt;BR /&gt;&lt;BR /&gt;While deploying additional storage (HDS) we had a cluster hang. Any attempt to do I/O on EMC would "hang" that server. &lt;BR /&gt;&lt;BR /&gt;No mount verification ever came back from the frame. So apparently, not all the communication you would expect to see is available on the EMC paths. &lt;BR /&gt;&lt;BR /&gt;We backed out the HDS changes, crashed/rebooted, and all the I/O was restored.</description>
      <pubDate>Wed, 09 May 2007 09:19:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996527#M83936</guid>
      <dc:creator>James Cristofero</dc:creator>
      <dc:date>2007-05-09T09:19:08Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996528#M83937</link>
      <description>The big question is whether the EMC controllers returned an error or not. If they just stalled for one second then there's nothing in VMS that would stall the request for more than that time. If the controller returned an error then mount verification would have kicked in, the messages for which may have been suppressed. Now mount verification will stall all I/Os and issue a packack to DKdriver every second until it gets a response or the mount verification timeout expires (3600 seconds by default). The packack issues a SCSI TEST UNIT READY command, so if that command was delayed by the controller it may explain the delay. From a VMS perspective, multipath may add some additional seconds as it participates in the error recovery.&lt;BR /&gt;&lt;BR /&gt;Bottom line is that I think the controller returned an error, and that recovery from such a serious event may take a couple of seconds.&lt;BR /&gt;&lt;BR /&gt;Jur.&lt;BR /&gt;</description>
      <pubDate>Wed, 09 May 2007 14:06:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996528#M83937</guid>
      <dc:creator>Jur van der Burg</dc:creator>
      <dc:date>2007-05-09T14:06:33Z</dc:date>
    </item>
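The mount-verification behaviour Jur describes - hold all I/O, probe the device once per second with a packack (SCSI TEST UNIT READY), give up after the mount verification timeout - is, schematically, a bounded polling loop. A hedged sketch of that loop (the function and its parameters are illustrative stand-ins, not the DKdriver implementation; 3600 s mirrors the default timeout mentioned above):

```python
import time

def mount_verify(unit_ready, interval=1.0, timeout=3600.0,
                 clock=time.monotonic, sleep=time.sleep):
    """Schematic of the loop: all I/O to the volume is held while a
    'test unit ready' style probe is issued once per interval, until
    the device answers or the timeout expires."""
    deadline = clock() + timeout
    while deadline - clock() > 0:
        if unit_ready():      # stand-in for the SCSI TEST UNIT READY packack
            return True       # device answered: mount verification completes
        sleep(interval)
    return False              # timeout: volume is invalidated, I/O fails

# Simulated device that answers on the third probe:
probes = iter([False, False, True])
print(mount_verify(lambda: next(probes), interval=0.0, timeout=5.0))  # → True
```

The shape of this loop is why a controller that delays its answer to the probe stretches the stall well past the original fault, which fits Kevin's 1-second-becomes-4.9-seconds observation.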
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996529#M83938</link>
      <description>"From a VMS perspective it could be that multipath may add some additional seconds as it participates in the error recovery."&lt;BR /&gt;&lt;BR /&gt;--&lt;BR /&gt;&lt;BR /&gt;Multipath can add some additional time, but since multipath does its work in the context of mount verification, you'd expect to see the OPCOM messages. However, if mount verification message suppression is enabled (as it is by default), then it's difficult to figure out what's going on.&lt;BR /&gt;&lt;BR /&gt;Attempting to troubleshoot this after the fact is nearly impossible. A tool to use *while this problem is happening* would be the DKLOG SDA extension, which will log all the SCSI commands and the SCSI statuses coming back from the controller.&lt;BR /&gt;&lt;BR /&gt;-- Rob</description>
      <pubDate>Wed, 09 May 2007 15:02:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996529#M83938</guid>
      <dc:creator>Robert Brooks_1</dc:creator>
      <dc:date>2007-05-09T15:02:47Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996530#M83939</link>
      <description>Thanks for the input so far. I will be trying to emulate the IO hang on our development cluster Friday. I will let you know how it goes or what I find out.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;Kevin</description>
      <pubDate>Thu, 10 May 2007 09:46:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996530#M83939</guid>
      <dc:creator>Kevin Raven (UK)</dc:creator>
      <dc:date>2007-05-10T09:46:26Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996531#M83940</link>
      <description>Kevin,&lt;BR /&gt;&lt;BR /&gt;when trying to reproduce the IO hang, consider starting some of the OpenVMS 'built-in' SDA extensions to capture more detailed data.&lt;BR /&gt;&lt;BR /&gt;You can get some help and examples of using them at:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://eisner.encompasserve.org/~halle/" target="_blank"&gt;http://eisner.encompasserve.org/~halle/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;The following extensions may be useful:&lt;BR /&gt;&lt;BR /&gt;$ ANAL/SYS&lt;BR /&gt;SDA&amp;gt; DKLOG&lt;BR /&gt;SDA&amp;gt; IO&lt;BR /&gt;SDA&amp;gt; FC&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 11 May 2007 01:49:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996531#M83940</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2007-05-11T01:49:07Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996532#M83941</link>
      <description>Concerning IO blockage :&lt;BR /&gt;&lt;BR /&gt;I did a test on a 4100 with HSZ70 running 7.3 and found that&lt;BR /&gt; &lt;BR /&gt;1) splitting a shadow set froze IO for 0.2 seconds&lt;BR /&gt;2) reforming a shadow set froze IO for 2 seconds&lt;BR /&gt;3) upon shadow copy completion (with bitmaps, didn't check it in the test without them), IO's were blocked for 0.3 sec, 0.6 sec and 0.51 sec (3 times a lock was taken ?)&lt;BR /&gt;&lt;BR /&gt;With/without bitmap had no big influence. During the shadow copy some IO's took 0.07 sec instead of 0.01.&lt;BR /&gt;&lt;BR /&gt;Fwiw&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 14 May 2007 05:00:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996532#M83941</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2007-05-14T05:00:42Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996533#M83942</link>
      <description>Same test on interbuilding cluster of GS160 with dual HSG80 in each building but only with bitmap.&lt;BR /&gt;&lt;BR /&gt;1) dism 2.5 sec&lt;BR /&gt;2) mount 1.9 sec&lt;BR /&gt;3) on completion copy 8.9 + 0.6 + 1.9 sec&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 14 May 2007 06:35:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996533#M83942</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2007-05-14T06:35:48Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996534#M83943</link>
      <description>Did a test on Friday with a test/development EMC storage array. A config was written: some disks that were not live were changed on the EMC storage and the config written out.&lt;BR /&gt;&lt;BR /&gt;A looping DCL script wrote time stamps to a flat file. The time stamps were written at a rate of 55 per 1/100th of a second, or 55*100 per second.&lt;BR /&gt;&lt;BR /&gt;During the EMC config several IO stalls took place, ranging from .03 seconds to a massive 1.8 seconds.&lt;BR /&gt;&lt;BR /&gt;Now I need to run further tests.&lt;BR /&gt;&lt;BR /&gt;This was the first pass.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Kevin&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 14 May 2007 06:37:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996534#M83943</guid>
      <dc:creator>Kevin Raven (UK)</dc:creator>
      <dc:date>2007-05-14T06:37:25Z</dc:date>
    </item>
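Kevin's measurement technique - a tight loop writing timestamps, with stalls showing up as gaps between consecutive entries - can be post-processed mechanically. A minimal sketch of that gap detection (the threshold and the sample data are illustrative; the actual test used a DCL script, and the .5-second threshold echoes the outage definition from the opening post):

```python
def find_stalls(timestamps, threshold=0.5):
    """Return (start, gap) pairs where consecutive timestamps (in
    seconds) are further apart than `threshold`, i.e. where the
    timestamp writer was stalled waiting on I/O."""
    stalls = []
    for prev, cur in zip(timestamps, timestamps[1:]):
        gap = cur - prev
        if gap > threshold:
            stalls.append((prev, round(gap, 3)))
    return stalls

# Timestamps written ~every 0.01 s, with one 1.8 s stall injected:
ts = [0.00, 0.01, 0.02, 1.82, 1.83]
print(find_stalls(ts))  # → [(0.02, 1.8)]
```

The same scan works on the flat file from the DCL loop once the timestamps are parsed into seconds.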
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996535#M83944</link>
      <description>&lt;BR /&gt;&amp;gt; During the EMC config several IO stalls took place that ranged from .03 seconds to a massive 1.8 seconds.&lt;BR /&gt;&lt;BR /&gt;I'll bet you are adding storage and pushing out zoning changes (RSCNs are the gotchas).&lt;BR /&gt;&lt;BR /&gt;You will want to avoid storage changes, and particularly zoning changes, during normal working hours.&lt;BR /&gt;&lt;BR /&gt;In a previous job I heard about how they used to merrily make storage and zoning changes during working hours. Guess what? Real-time instrument acquisitions don't like long pauses - do they?&lt;BR /&gt;&lt;BR /&gt;So, painfully, all that work was moved to off hours (2 a.m. on weekends).&lt;BR /&gt;&lt;BR /&gt;Welcome to the real world, Neo...&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 14 May 2007 12:48:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996535#M83944</guid>
      <dc:creator>Rob Young_4</dc:creator>
      <dc:date>2007-05-14T12:48:53Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996536#M83945</link>
      <description>&lt;BR /&gt;Kevin,&lt;BR /&gt;&lt;BR /&gt;Another thought ... I realize you probably aren't zoning on this EMC config. But what may be happening is that when a new hyper/meta is created and presented, it may require the Symm to momentarily place a global lock on the cache (or a section of cache) to set aside cache lines for the newly created hyper/meta. With multiple gigabytes of cache it may take a while to take the lock out, do the work and release it (&amp;gt;1 second being a "while").&lt;BR /&gt;&lt;BR /&gt;I have a lot of "may" above - I don't know. The problem is that there are a good many unknowns (to me) about how the Symm cache works, and I have been digging for a long time, so it might just be a closely held piece of engineering knowledge (or I haven't stumbled upon the right person - yet).&lt;BR /&gt;&lt;BR /&gt;You're going to have to open a call with EMC support and describe your problem; perhaps they can shed some light.&lt;BR /&gt;&lt;BR /&gt;Commenting on the hang you are experiencing... it really isn't that great, but in a real-time data acquisition scenario it could well be unacceptable. My comment about moving storage and zoning changes to off hours was based on my personal history with EMC Symms.&lt;BR /&gt;&lt;BR /&gt;Rob</description>
      <pubDate>Mon, 14 May 2007 19:34:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996536#M83945</guid>
      <dc:creator>Rob Young_4</dc:creator>
      <dc:date>2007-05-14T19:34:31Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996537#M83946</link>
      <description>Typo in my last post :&lt;BR /&gt;3) on completion copy 8.9 + 0.6 + 1.9 sec&lt;BR /&gt;must be &lt;BR /&gt;3) on completion copy 0.9 + 0.6 + 1.9 sec&lt;BR /&gt;</description>
      <pubDate>Tue, 15 May 2007 01:34:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996537#M83946</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2007-05-15T01:34:11Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996538#M83947</link>
      <description>The config changes were as below ... extract from an e-mail from our EMC chaps:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;However, the change we did make was to the SCSI3 bit on 3 devices. The array would still go through the same process to prepare and commit the change, therefore forcing an IML of the directors. It is at this point that we are seeing a delay. It's normal for the array to behave in this fashion, and at this stage it looks like any config change we make is going to affect your servers...</description>
      <pubDate>Tue, 15 May 2007 04:31:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996538#M83947</guid>
      <dc:creator>Kevin Raven (UK)</dc:creator>
      <dc:date>2007-05-15T04:31:35Z</dc:date>
    </item>
    <item>
      <title>Re: Disk IO retry - OpenVMS 7.3-2</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996539#M83948</link>
      <description>&lt;BR /&gt;&amp;gt; SCSI3 bit set, IML the directors&lt;BR /&gt;&lt;BR /&gt;Well, you can close the loop on this one. Curiously, why would a few devices out of dozens not have that bit set?&lt;BR /&gt;&lt;BR /&gt;When they went to change control and got approval for doing this work, did they inform change control that the directors on the Symm would be rebooting? etc.</description>
      <pubDate>Tue, 15 May 2007 09:48:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-io-retry-openvms-7-3-2/m-p/3996539#M83948</guid>
      <dc:creator>Rob Young_4</dc:creator>
      <dc:date>2007-05-15T09:48:35Z</dc:date>
    </item>
  </channel>
</rss>

