<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: problem mounting disk on cluster in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/problem-mounting-disk-on-cluster/m-p/4932163#M72721</link>
    <description>Oops -&lt;BR /&gt;&lt;BR /&gt;I should have read more carefully before typing ahead.&lt;BR /&gt;Ignore my post; re-read Uwe's.&lt;BR /&gt;Sorry.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
    <pubDate>Fri, 07 Oct 2005 12:43:24 GMT</pubDate>
    <dc:creator>Jan van den Ende</dc:creator>
    <dc:date>2005-10-07T12:43:24Z</dc:date>
    <item>
      <title>problem mounting disk on cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/problem-mounting-disk-on-cluster/m-p/4932160#M72718</link>
      <description>Hi all,&lt;BR /&gt;This is the setup:&lt;BR /&gt;Server - (contains disks used by all the nodes)&lt;BR /&gt;Node1 - (contains disks used by all nodes)&lt;BR /&gt;Node2 - Satellite&lt;BR /&gt;Node3 - Satellite&lt;BR /&gt;&lt;BR /&gt;Node1 had only been turned on today; all the other nodes were already on. Normally all the disks on Node1 are mounted as a volume set, referenced by the Disk2 logical name. This is how everyone accesses data on the disks on Node1.&lt;BR /&gt;&lt;BR /&gt;However, when I booted up Node1 just now, it showed Disk2 on that node. But when I tried to access Disk2 from the other nodes: "there is no disk2".&lt;BR /&gt;&lt;BR /&gt;I did SH DEV, and all the nodes (except Node1) showed&lt;BR /&gt;&lt;BR /&gt;Node1$DKB0    MntVerifyTimeOut&lt;BR /&gt;Node1$DKB100  MntVerifyTimeOut&lt;BR /&gt;Node2$DKB200  MntVerifyTimeOut&lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;&lt;BR /&gt;The Disk2 logical points to Node1$DKB100:&lt;BR /&gt;&lt;BR /&gt;What should be my set of steps to make it available as Disk2 right now? I do not wish to make a permanent change, so please let me know how I can undo the changes as well.&lt;BR /&gt;&lt;BR /&gt;Note: Node1, which contains Disk2, is working fine and I can access all the data from there.&lt;BR /&gt;Thanks in advance&lt;BR /&gt;Nipun</description>
      <pubDate>Fri, 07 Oct 2005 09:36:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/problem-mounting-disk-on-cluster/m-p/4932160#M72718</guid>
      <dc:creator>nipun_2</dc:creator>
      <dc:date>2005-10-07T09:36:42Z</dc:date>
    </item>
    <item>
      <title>Re: problem mounting disk on cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/problem-mounting-disk-on-cluster/m-p/4932161#M72719</link>
      <description>So, NODE1 was shut down, but the disks were not dismounted from all the other nodes, and the system was down for longer than the system parameter MVTIMEOUT.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;You can try to dismount the disks on the other nodes:&lt;BR /&gt;$ DISMOUNT /ABORT NODE1$DKB0:&lt;BR /&gt;&lt;BR /&gt;and then manually remount, unless there are still some files open. Sometimes it's possible to get rid of the mount with:&lt;BR /&gt;$ DISMOUNT /ABORT /OVERRIDE=CHECKS NODE1$DKB0:&lt;BR /&gt;&lt;BR /&gt;but there are situations where you really have to reboot.</description>
      <pubDate>Fri, 07 Oct 2005 09:51:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/problem-mounting-disk-on-cluster/m-p/4932161#M72719</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-10-07T09:51:14Z</dc:date>
    </item>
    <item>
      <title>Re: problem mounting disk on cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/problem-mounting-disk-on-cluster/m-p/4932162#M72720</link>
      <description>Nipun,&lt;BR /&gt;&lt;BR /&gt;set the SYSGEN parameter MSCP_LOAD to 1 (one),&lt;BR /&gt;and the SYSGEN parameter MSCP_SERVE_ALL to 5.&lt;BR /&gt;&lt;BR /&gt;You will need to reboot Node1.&lt;BR /&gt;&lt;BR /&gt;Alternatively, you can assign a non-zero ALLOCLASS to each cluster node.&lt;BR /&gt;That will, however, require ALL nodes to reboot.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Fri, 07 Oct 2005 09:51:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/problem-mounting-disk-on-cluster/m-p/4932162#M72720</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-10-07T09:51:51Z</dc:date>
    </item>
    <item>
      <title>Re: problem mounting disk on cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/problem-mounting-disk-on-cluster/m-p/4932163#M72721</link>
      <description>Oops -&lt;BR /&gt;&lt;BR /&gt;I should have read more carefully before typing ahead.&lt;BR /&gt;Ignore my post; re-read Uwe's.&lt;BR /&gt;Sorry.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Fri, 07 Oct 2005 12:43:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/problem-mounting-disk-on-cluster/m-p/4932163#M72721</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-10-07T12:43:24Z</dc:date>
    </item>
    <item>
      <title>Re: problem mounting disk on cluster</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/problem-mounting-disk-on-cluster/m-p/4932164#M72722</link>
      <description>As always...thanks a lot Uwe</description>
      <pubDate>Mon, 21 Nov 2005 09:23:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/problem-mounting-disk-on-cluster/m-p/4932164#M72722</guid>
      <dc:creator>nipun_2</dc:creator>
      <dc:date>2005-11-21T09:23:49Z</dc:date>
    </item>
  </channel>
</rss>