<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Disk in OFFLINE STATUS in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212301#M97096</link>
    <description>&lt;BR /&gt;Closing the thread and assigning points!&lt;BR /&gt;&lt;BR /&gt;Thanks again to all!&lt;BR /&gt;</description>
    <pubDate>Wed, 02 Dec 2009 15:30:39 GMT</pubDate>
    <dc:creator>smsc_1</dc:creator>
    <dc:date>2009-12-02T15:30:39Z</dc:date>
    <item>
      <title>Disk in OFFLINE STATUS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212294#M97089</link>
      <description>Hello all,&lt;BR /&gt;I tried to install a new hard disk. The controller used doesn't matter, because I have the same issue with different controllers (such as an EVA or an old HSZ80), so I think the issue is in OpenVMS 8.3-1H1.&lt;BR /&gt;&lt;BR /&gt;The problem is that when I physically remove the disk and use&lt;BR /&gt;show device d&lt;BR /&gt;I always see the device in OFFLINE status.&lt;BR /&gt;I also tried the following command under MC SYSMAN:&lt;BR /&gt;IO AUTO&lt;BR /&gt;&lt;BR /&gt;But the device was still there.&lt;BR /&gt;&lt;BR /&gt;How can I remove the OFFLINE device without rebooting the system?&lt;BR /&gt;&lt;BR /&gt;Thanks!&lt;BR /&gt;</description>
      <pubDate>Wed, 02 Dec 2009 14:32:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212294#M97089</guid>
      <dc:creator>smsc_1</dc:creator>
      <dc:date>2009-12-02T14:32:28Z</dc:date>
    </item>
    <item>
      <title>Re: Disk in OFFLINE STATUS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212295#M97090</link>
      <description>smsc,&lt;BR /&gt;&lt;BR /&gt;If I understand your posting correctly, then the answer is: the device cannot be removed from the IO database. It will continue to be there.&lt;BR /&gt;&lt;BR /&gt;IO AUTO adds devices; it does not subtract them.&lt;BR /&gt;&lt;BR /&gt;Is there a particular problem with the device showing in the SHOW DEVICE display, albeit with a status of OFFLINE?&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Wed, 02 Dec 2009 14:38:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212295#M97090</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2009-12-02T14:38:40Z</dc:date>
    </item>
    <item>
      <title>Re: Disk in OFFLINE STATUS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212296#M97091</link>
      <description>If removing the disk device unit is a critical requirement (and you can't just ignore this and leave the disk unit marked offline), you must reboot the system or, if the disk is cluster-served, the whole cluster.&lt;BR /&gt;&lt;BR /&gt;Disk devices and most other non-cloned devices have always required a reboot to clear, and still do.&lt;BR /&gt;&lt;BR /&gt;There is no mechanism to unwind any I/O activity and unload most devices, nor most device drivers.&lt;BR /&gt;&lt;BR /&gt;Cloned devices do tend to have support for this, but disks aren't cloned devices.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 02 Dec 2009 14:41:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212296#M97091</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-12-02T14:41:45Z</dc:date>
    </item>
    <item>
      <title>Re: Disk in OFFLINE STATUS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212297#M97092</link>
      <description>smsc,&lt;BR /&gt;&lt;BR /&gt;As Bob says, this is a limitation of the VMS disk IO system. It bugs me too, as I end up having many "zombie" $1$dga disks once they have been presented and added to the IO database. For cloned devices that are not MSCP served, for example LD devices created from other disks, this isn't a limitation; they can be deleted without any visible trace.&lt;BR /&gt;&lt;BR /&gt;Rob Brooks or Jur van der Burg can probably give a better explanation as to the reasons for this design "restriction".&lt;BR /&gt;&lt;BR /&gt;Jon</description>
      <pubDate>Wed, 02 Dec 2009 14:48:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212297#M97092</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2009-12-02T14:48:10Z</dc:date>
    </item>
    <item>
      <title>Re: Disk in OFFLINE STATUS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212298#M97093</link>
      <description>&amp;gt;Rob Brooks or Jur van der Burg can probably give a better explanation as to the reasons for this design "restriction".&lt;BR /&gt;&lt;BR /&gt;There's no mechanism for "device garbage collection."&lt;BR /&gt;&lt;BR /&gt;To implement "device garbage collection" here, HP OpenVMS Engineering would need to ensure the device is inactive and that all I/O requests (IRPs) are returned from the XQP or ACP or wherever the driver has stored them, that new I/Os are blocked and not in flight, that the device is not going to toss an interrupt at the host (or that any arriving interrupts are sent to the blackhole or to the new-device path), and that the various I/O data structures (UCB, VCB, potentially the DDB) and higher-level data structures (volume locks, MSCP connections, channels) are all correctly detected, unwound, and deleted.  The drivers and the I/O subsystem do not currently provide a mechanism for ensuring that for disks.&lt;BR /&gt;&lt;BR /&gt;Feasible?  Likely yes.  Cloned devices do have most of this capability now, but then they're also usually not cluster-served devices, and their drivers tend to have explicit support for unwinding the activity.  Justifying the implementation and testing effort for garbage collection for MSCP and disks and for TMSCP and tapes is going to be a project in itself, regardless.&lt;BR /&gt;</description>
      <pubDate>Wed, 02 Dec 2009 15:13:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212298#M97093</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-12-02T15:13:46Z</dc:date>
    </item>
    <item>
      <title>Re: Disk in OFFLINE STATUS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212299#M97094</link>
      <description>One of the reasons this can't be done without a major overhaul of VMS is, for example, MSCP serving. If a device is served and were then removed, it would be left behind on other nodes. Of course one could design a method to handle that, but given the current state of VMS, I think it's unlikely to happen.&lt;BR /&gt;&lt;BR /&gt;LD does not have this issue because it's a cloned device, and it does NOT allow MSCP serving (on purpose!).&lt;BR /&gt;&lt;BR /&gt;Jur.&lt;BR /&gt;</description>
      <pubDate>Wed, 02 Dec 2009 15:23:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212299#M97094</guid>
      <dc:creator>Jur van der Burg</dc:creator>
      <dc:date>2009-12-02T15:23:06Z</dc:date>
    </item>
    <item>
      <title>Re: Disk in OFFLINE STATUS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212300#M97095</link>
      <description>&lt;BR /&gt;Thanks for the clarification (@ all).&lt;BR /&gt;So, the final solution will be to reboot the machine. And for sure there's no particular problem with the device showing OFFLINE status; it's just a cosmetic thing!&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 02 Dec 2009 15:30:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212300#M97095</guid>
      <dc:creator>smsc_1</dc:creator>
      <dc:date>2009-12-02T15:30:02Z</dc:date>
    </item>
    <item>
      <title>Re: Disk in OFFLINE STATUS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212301#M97096</link>
      <description>&lt;BR /&gt;Closing the thread and assigning points!&lt;BR /&gt;&lt;BR /&gt;Thanks again to all!&lt;BR /&gt;</description>
      <pubDate>Wed, 02 Dec 2009 15:30:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212301#M97096</guid>
      <dc:creator>smsc_1</dc:creator>
      <dc:date>2009-12-02T15:30:39Z</dc:date>
    </item>
    <item>
      <title>Re: Disk in OFFLINE STATUS</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212302#M97097</link>
      <description>Oh, yeah, forgot one: you would also need to deal with MSCP and TMSCP server versions running on other boxes in the cluster that don't have the "quiesce and unwind" support.</description>
      <pubDate>Wed, 02 Dec 2009 16:53:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/disk-in-offline-status/m-p/5212302#M97097</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-12-02T16:53:56Z</dc:date>
    </item>
  </channel>
</rss>