<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Mounting of HBVS disks in sylogicals.com fails on a node. in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076688#M87310</link>
    <description>Hoff,&lt;BR /&gt;&lt;BR /&gt;Thanks for your speedy reply (always appreciated!)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If I understand you correctly, then this is a multiple member shadowset. Node1 has dka300, Node 2 has dkc300.&lt;BR /&gt;&lt;BR /&gt;Would running an io autoconfigure help?&lt;BR /&gt;&lt;BR /&gt;There is already a delay in the routine so it waits for the main server to be up before continuing so I will put it in there.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Mark</description>
    <pubDate>Sun, 28 Oct 2007 20:20:13 GMT</pubDate>
    <dc:creator>MarkOfAus</dc:creator>
    <dc:date>2007-10-28T20:20:13Z</dc:date>
    <item>
      <title>Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076686#M87308</link>
      <description>When a node was shut down, and subsequently rebooted, it failed in sylogicals.com to mount a shadowed disk set. &lt;BR /&gt;The disk set is mounted on the other machine as DSA3, consisting of the internal disk DKA300.&lt;BR /&gt;It fails with this message "%MOUNT-F-NOSUCHDEV".&lt;BR /&gt;&lt;BR /&gt;This wouldn't be such a big issue if not for the fact that it contains the SYSUAF, RIGHTSLIST, LICENSE etc.&lt;BR /&gt;&lt;BR /&gt;The command in sylogicals:&lt;BR /&gt;mount/system dsa3:/shad=($4$dkc300) /noassist data3&lt;BR /&gt;&lt;BR /&gt;Any assistance would be greatly appreciated.&lt;BR /&gt;&lt;BR /&gt;Also, how do you stop the cluster messages about a node shutting down appearing on other nodes (is it central?)?&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;Mark &lt;BR /&gt;</description>
      <pubDate>Sun, 28 Oct 2007 19:51:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076686#M87308</guid>
      <dc:creator>MarkOfAus</dc:creator>
      <dc:date>2007-10-28T19:51:08Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076687#M87309</link>
      <description>The message was probably correct when it was issued.  The device was not known.&lt;BR /&gt;&lt;BR /&gt;Stick a time delay in front of the mount or use a retry loop with a delay in the processing; your bootstrap probably got to the MOUNT faster than the device configure process detected the particular device.&lt;BR /&gt;&lt;BR /&gt;What I usually have is an f$getdvi("whatsit","EXISTS") lexical combined in a loop with a WAIT command, and an IF counter .le. limit THEN GOTO label and related counter processing to avoid an infinite loop.  &lt;BR /&gt;&lt;BR /&gt;This logic is then usually wrapped into a subroutine, and the code mounting the volume calls the subroutine for each of the volumes.&lt;BR /&gt;&lt;BR /&gt;I'd probably scrounge up another member for that shadowset, too.  A single-volume shadowset does certainly have some uses, but the configurations here are somewhat specialized.  The biggest real benefit of RAID-1 HBVS comes only from having multiple spindles...&lt;BR /&gt;</description>
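      <!--
        A minimal DCL sketch of the wait-and-retry approach described above. The
        device name and MOUNT command follow the thread; the retry limit and the
        wait interval are illustrative assumptions. In practice this would be
        wrapped into a subroutine and called once per volume, as the post suggests.

        $! Wait for the shadow member to be configured before mounting (sketch).
        $ device = "$4$DKC300:"
        $ count = 0
        $WAIT_LOOP:
        $ IF F$GETDVI(device,"EXISTS") THEN GOTO DEVICE_READY
        $ count = count + 1
        $ IF count .GT. 12 THEN GOTO GIVE_UP     ! bound the retries, no infinite loop
        $ WAIT 00:00:05
        $ GOTO WAIT_LOOP
        $DEVICE_READY:
        $ MOUNT/SYSTEM DSA3: /SHADOW=($4$DKC300:) /NOASSIST DATA3
        $ EXIT
        $GIVE_UP:
        $ WRITE SYS$OUTPUT "Device ''device' never appeared; skipping mount"
        $ EXIT
      -->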
      <pubDate>Sun, 28 Oct 2007 20:10:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076687#M87309</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-10-28T20:10:32Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076688#M87310</link>
      <description>Hoff,&lt;BR /&gt;&lt;BR /&gt;Thanks for your speedy reply (always appreciated!)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If I understand you correctly, then this is a multiple member shadowset. Node1 has dka300, Node 2 has dkc300.&lt;BR /&gt;&lt;BR /&gt;Would running an io autoconfigure help?&lt;BR /&gt;&lt;BR /&gt;There is already a delay in the routine so it waits for the main server to be up before continuing so I will put it in there.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Mark</description>
      <pubDate>Sun, 28 Oct 2007 20:20:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076688#M87310</guid>
      <dc:creator>MarkOfAus</dc:creator>
      <dc:date>2007-10-28T20:20:13Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076689#M87311</link>
      <description>[[[If I understand you correctly, then this is a multiple member shadowset. Node1 has dka300, Node 2 has dkc300.]]]&lt;BR /&gt;&lt;BR /&gt;That this is a multi-member shadowset wasn't obvious to me from what was posted -- on re-reading it, I can infer what was intended.  (I don't like inferring these sorts of things, though.  Tends to get me in (more) trouble.  But I digress.)&lt;BR /&gt;&lt;BR /&gt;Regardless, if this is a multi-member shadowset, I'd specify both devices on the shadowset virtual unit (VU) mount command.  But that's me.  Something like this:&lt;BR /&gt;&lt;BR /&gt;mount/system -&lt;BR /&gt;dsa3:/shad=($4$dkc300:,$whatever$dka300) -&lt;BR /&gt;/noassist data3&lt;BR /&gt;&lt;BR /&gt;I'd probably also look to string together the SCSI buses, assuming the (OpenVMS Alpha?) hosts, versions, and SCSI controllers permit it.  And to enable port allocation classes.&lt;BR /&gt;&lt;BR /&gt;[[[Would running an io autoconfigure help?]]]&lt;BR /&gt;&lt;BR /&gt;With the timing of the discovery of the device?  Probably not.  It's already running.  Well, explicitly running it might well perturb and/or delay things such that the devices are discovered and configured.  But so would a wait-loop.&lt;BR /&gt;&lt;BR /&gt;And as a side-note, do take a look at the SYS$EXAMPLES:MSCPMOUNT.COM example command procedure; that sort of processing can be useful in configurations that have nodes and served disks coming and going.  (I don't like tossing MOUNT /CLUSTER around, due to bad experiences with same over the years.  I tend to prefer issuing a MOUNT /SYSTEM on each node.)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sun, 28 Oct 2007 21:36:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076689#M87311</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-10-28T21:36:14Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076690#M87312</link>
      <description>Mark,&lt;BR /&gt;&lt;BR /&gt;  Because of all the ways disks can be connected to an OpenVMS system, you can't necessarily just mount a disk.&lt;BR /&gt;&lt;BR /&gt;  Instead of spreading the code to mount a volume across many places, I prefer to move all my MOUNT commands into a module which can be called when necessary. Abstract the idea of a "disk" into a logical entity and hide the detail. So, your SYLOGICALS might do something like:&lt;BR /&gt;&lt;BR /&gt;$ @SYS$STARTUP:GET_DISK CLUSTER_DATA&lt;BR /&gt;$ IF .NOT. $STATUS&lt;BR /&gt;$ THEN&lt;BR /&gt;$ !  handle error&lt;BR /&gt;$ ENDIF&lt;BR /&gt;&lt;BR /&gt;  When GET_DISK has returned successfully you know you can access the storage area via its logical name.&lt;BR /&gt;&lt;BR /&gt;  Let GET_DISK know the details of where CLUSTER_DATA is stored and how it's mounted.&lt;BR /&gt;&lt;BR /&gt;Use F$GETDVI item "EXISTS" to see if the physical disks exist yet, with a time delay and retry if they're not visible. Then use F$GETDVI "MNT" to check if you need to mount it. Finally you can mount the disk.&lt;BR /&gt;&lt;BR /&gt;Using this type of mechanism you can make it very easy to move logical entities around, and change details like physical disk, shadowed or non-shadowed, how many members, and if they're required to be mounted. In a split site, you can also implement blanket rules for mounting 3, 2 or 1 member shadow sets via user-defined SYSGEN parameters. My recommendation for mounting shadow sets is to wait for all members to be present and use /POLICY=REQUIRE_MEMBERS. This reduces the chances of mounting shadow sets backwards.&lt;BR /&gt;&lt;BR /&gt;Regarding the cluster messages, are you talking about OPCOM or connection manager messages? Maybe post a sample, and explain how and/or where you want the message to be written.</description>
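      <!--
        A rough DCL sketch of the GET_DISK.COM idea outlined above, assuming a
        single entity (CLUSTER_DATA on the thread's DSA3/$4$DKC300). The entity
        table, retry limit, exit statuses, and the final logical name definition
        are illustrative assumptions, not taken from the post. Callers then only
        check $STATUS and use the CLUSTER_DATA logical, as the post describes.

        $! GET_DISK.COM (sketch): P1 is a logical entity name, e.g. CLUSTER_DATA.
        $ entity = F$EDIT(P1,"UPCASE")
        $ IF entity .NES. "CLUSTER_DATA" THEN EXIT %X2C   ! SS$_ABORT, unknown entity
        $ device = "$4$DKC300:"
        $ vu     = "DSA3:"
        $ label  = "DATA3"
        $ retry = 0
        $CHECK_EXISTS:
        $ IF F$GETDVI(device,"EXISTS") THEN GOTO CHECK_MOUNTED
        $ retry = retry + 1
        $ IF retry .GT. 12 THEN EXIT %X2C    ! give up rather than loop forever
        $ WAIT 00:00:05
        $ GOTO CHECK_EXISTS
        $CHECK_MOUNTED:
        $ IF .NOT. F$GETDVI(vu,"EXISTS") THEN GOTO DO_MOUNT
        $ IF F$GETDVI(vu,"MNT") THEN GOTO DEFINE_LOGICAL   ! already mounted here
        $DO_MOUNT:
        $ MOUNT/SYSTEM 'vu' /SHADOW=('device') /NOASSIST 'label'
        $ mount_status = $STATUS
        $ IF .NOT. mount_status THEN EXIT 'mount_status'
        $DEFINE_LOGICAL:
        $ DEFINE/SYSTEM/EXECUTIVE_MODE 'entity' 'vu'
        $ EXIT 1
      -->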
      <pubDate>Sun, 28 Oct 2007 21:38:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076690#M87312</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2007-10-28T21:38:51Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076691#M87313</link>
      <description>Hoff,&lt;BR /&gt;&lt;BR /&gt;I apologise for making you infer; I was tardy in not fully explaining the situation.&lt;BR /&gt;&lt;BR /&gt;"Regardless, if this is a multi-member shadowset, I'd specify both devices on the shadowset virtual unit (VU) mount command. But that's me. Something like this:&lt;BR /&gt;&lt;BR /&gt;mount/system -&lt;BR /&gt;dsa3:/shad=($4$dkc300:,$whatever$dka300) -&lt;BR /&gt;/noassist data3&lt;BR /&gt;"&lt;BR /&gt;&lt;BR /&gt;Why?&lt;BR /&gt;&lt;BR /&gt;I have a common routine, see attached. As per your previous reply, I added a routine to check if the device exists, see the WAIT_FOR_DEVICE "subroutine". The key part applies to EMU2, i.e. if node.eqs."EMU2"...&lt;BR /&gt;&lt;BR /&gt;Emu2 owns the disk $4$dkc300, emu1 owns the disk $3$dka300. Together they happily form dsa3: (oh the irony!)&lt;BR /&gt;&lt;BR /&gt;This is what happened in the startup.log after the changes were made:&lt;BR /&gt;&lt;BR /&gt;-BEGIN LOG---------------------------------&lt;BR /&gt;%STDRV-I-STARTUP, OpenVMS startup begun at 29-OCT-2007 13:07:19.30&lt;BR /&gt;SYLOGICALS.COM&amp;gt; Begin&lt;BR /&gt;MOUNT_COMMON.COM&amp;gt; Begin&lt;BR /&gt;node=EMU1&lt;BR /&gt;MOUNT_COMMON&amp;gt; Device exists, ready to mount (dkc300)&lt;BR /&gt;%MOUNT-F-NOSUCHDEV, no such device available&lt;BR /&gt;MOUNT_COMMON.COM&amp;gt; End&lt;BR /&gt;-END LOG---------------------------------&lt;BR /&gt;&lt;BR /&gt;Then I halted the console, and tried again, and this is the output from the successful startup:&lt;BR /&gt;&lt;BR /&gt;-BEGIN LOG---------------------------------&lt;BR /&gt;%STDRV-I-STARTUP, OpenVMS startup begun at 29-OCT-2007 13:21:08.33&lt;BR /&gt;SYLOGICALS.COM&amp;gt; Begin&lt;BR /&gt;MOUNT_COMMON.COM&amp;gt; Begin&lt;BR /&gt;node=EMU1&lt;BR /&gt;MOUNT_COMMON&amp;gt; Device exists, ready to mount (dkc300)&lt;BR /&gt;%MOUNT-I-MOUNTED, DATA3 mounted on _DSA3:&lt;BR /&gt;%MOUNT-I-SHDWMEMCOPY, _$4$DKC300: (EMU2) added to the shadow set with a copy operation&lt;BR /&gt;%MOUNT-I-ISAMBR, _$3$DKA300: (EMU1) is a member of the shadow set&lt;BR /&gt;MOUNT_COMMON.COM&amp;gt; End&lt;BR /&gt;-END LOG---------------------------------&lt;BR /&gt;&lt;BR /&gt;Is it not curious that it failed the first time but succeeded the second time without any modification to the routine?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Mark</description>
      <pubDate>Sun, 28 Oct 2007 22:57:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076691#M87313</guid>
      <dc:creator>MarkOfAus</dc:creator>
      <dc:date>2007-10-28T22:57:49Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076692#M87314</link>
      <description>John,&lt;BR /&gt;&lt;BR /&gt;I have two major routines for disk mounting. One is the one attached in the previous reply to Hoff. The other routine is called by systartup_vms.com to mount the data disks. This works ok (so far...)&lt;BR /&gt;&lt;BR /&gt;The routine under discussion here has the sole purpose, in this circumstance, of mounting the shadowed disk(s) which contain the sysuaf, rightslists, license, proxy et al. The cluster is running, the other node is running and is the master for the dsa3 shadow set.&lt;BR /&gt;&lt;BR /&gt;"Because of all the ways disks can be connected to an OpenVMS system, you can't necessarily just mount a disk.&lt;BR /&gt;"&lt;BR /&gt;&lt;BR /&gt;I tried to do this with the routine, and as Hoff also suggested, I took a look at mscp_mount and the concepts I used in my own command file. So I am trying to get to your suggested mode of operation, but I seem to have some form of timing issue.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;"Use F$GETDVI item "EXISTS" to see if the physical disks exist yet, with a time delay and retry if they're not visible. Then use F$GETDVI "MNT" to check if you need to mount it. Finally you can mount the disk."&lt;BR /&gt;&lt;BR /&gt;I would then be interested in your view of the routine I wrote. Are you saying that I should also check to see if logical device DSA3 is mounted? That I can do. I have perhaps wrongly assumed that if the primary server is up (in normal day-to-day operation), DSA3 is already active &amp;amp; mounted.&lt;BR /&gt;&lt;BR /&gt;As an aside, how can I prevent dsa3: from going into mount verification if the system shuts down - increase the timeout? Can I test for this in f$getdvi?&lt;BR /&gt;&lt;BR /&gt;"My recommendation for mounting shadow sets is to wait for all members to be present and use /POLICY=REQUIRE_MEMBERS. This reduces the chances of mounting shadow sets backwards."&lt;BR /&gt;&lt;BR /&gt;Oh, I would love to do this, but operational circumstances prevent this. Therefore, I have tried to ensure that the primary node "Emu1" is up and only via the use of userd1 parameters will "Emu2" (the secondary node) come up by itself.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;"Regarding the cluster messages, are you talking about OPCOM or connection manager messages? Maybe post a sample, and explain how and/or where you want the message to be written."&lt;BR /&gt;&lt;BR /&gt;Sure can post it:&lt;BR /&gt;&lt;BR /&gt;------------------------------------------&lt;BR /&gt;SHUTDOWN message on EMU1 from user MARK at _EMU2$OPA0:   08:59:00&lt;BR /&gt;EMU2 will shut down in 0 minutes; back up shortly via automatic reboot.  Please&lt;BR /&gt;log off node EMU2.&lt;BR /&gt;Standalone&lt;BR /&gt;------------------------------------------&lt;BR /&gt;&lt;BR /&gt;This confuses the users on EMU2, who start logging out (well at least they are well trained to follow operator messages :-) )&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Mark&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sun, 28 Oct 2007 23:18:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076692#M87314</guid>
      <dc:creator>MarkOfAus</dc:creator>
      <dc:date>2007-10-28T23:18:20Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076693#M87315</link>
      <description>John,&lt;BR /&gt;&lt;BR /&gt;Oops, I should have written:&lt;BR /&gt;&lt;BR /&gt;This confuses the users on EMU1, who start logging out (well at least they are well trained to follow operator messages :-) )&lt;BR /&gt;&lt;BR /&gt;I wrote EMU2 instead of EMU1.&lt;BR /&gt;The message appears on EMU1 users' terminals, and they don't know to check the specific node name, so they start logging out (and complaining).&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Mark</description>
      <pubDate>Sun, 28 Oct 2007 23:24:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076693#M87315</guid>
      <dc:creator>MarkOfAus</dc:creator>
      <dc:date>2007-10-28T23:24:37Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076694#M87316</link>
      <description>Mark,&lt;BR /&gt;&lt;BR /&gt;regarding the suppression of the shutdown messages on other cluster members, do you use the logical name SHUTDOWN$INFORM_NODES ?&lt;BR /&gt;&lt;BR /&gt;HTH,&lt;BR /&gt;&lt;BR /&gt;Bart Zorn&lt;BR /&gt;</description>
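      <!--
        A minimal sketch of the suggestion above, assuming SHUTDOWN$INFORM_NODES
        behaves as documented for SYS$SYSTEM:SHUTDOWN.COM (when the logical is
        undefined, every cluster member is notified; when defined, only the
        listed nodes are). Node names follow the thread; verify the behaviour
        against your OpenVMS version before relying on it.

        $! In SYLOGICALS.COM (or similar) on EMU2: restrict shutdown broadcasts
        $! from EMU2 to EMU2 itself, so EMU1 users are not told to log off.
        $ DEFINE/SYSTEM/EXECUTIVE_MODE SHUTDOWN$INFORM_NODES EMU2
      -->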
      <pubDate>Mon, 29 Oct 2007 03:50:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076694#M87316</guid>
      <dc:creator>Bart Zorn_1</dc:creator>
      <dc:date>2007-10-29T03:50:26Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076695#M87317</link>
      <description>Ok, so EMU1 is the primary and EMU2 is the secondary.  I'd (still) mount the disks as previously stated, specifying all nodes and the full path.  And I'd use the wait loop as previously specified.  (I tend to combine the whole MOUNT sequence into the subroutine; the test for existence and the wait, a test for having been mounted and the MOUNT, etc.)   And I'd look to configure shared SCSI buses (assuming the two systems are co-located within the range of appropriate SCSI cables), as this substantially improves uptime and reduces network load.&lt;BR /&gt;&lt;BR /&gt;As for the disaster-level processing and the usual sorts of situations, I'd simply look to avoid starting the applications on the secondaries, or (better) to code the applications to use locks or such at startup to manage the election of a primary.  Or (best) to code the environment to use all of the available cluster member nodes in parallel.  I've found that manual switch-over processes tend to fail during disasters; best to have these set up as automatic as is reasonably feasible.  Humans can tend to be the error trigger, particularly for seldom-used sequences.&lt;BR /&gt;&lt;BR /&gt;If you are using humans as key components in the fail-over, you'll want to test the fail-over sequencing periodically.&lt;BR /&gt;&lt;BR /&gt;If you'd like to chat on this topic using larger text windows, feel free to contact me off-line.  Then one of us can publish up a summary for folks here, or similar such.&lt;BR /&gt;&lt;BR /&gt;Stephen Hoffman&lt;BR /&gt;HoffmanLabs LLC&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 12:07:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076695#M87317</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-10-29T12:07:14Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076696#M87318</link>
      <description>Mark,&lt;BR /&gt;&lt;BR /&gt;If each of the shadow members has only a single system with a direct connection, i.e. if DKA300 is directly attached only to nodeA and DKC300 is directly attached only to nodeB, and you can't share the SCSI bus between the systems, you may be interested in trying to avoid a full copy when the member is reintroduced when a system boots.&lt;BR /&gt;&lt;BR /&gt;If you are running non-VAX VMS 7.3+, you should be able to take advantage of write bitmaps to minimize the time it takes to return a member to steady state.&lt;BR /&gt;&lt;BR /&gt;If a member's only path is via the system that is being shut down, that system can request that the member be dismounted by the other system (using sysman).  The command to dismount is &lt;BR /&gt;&lt;BR /&gt;$ dismount &lt;MEMBER&gt; /policy=minicopy&lt;BR /&gt;&lt;BR /&gt;The bitmap is created on the node that does the dismount, therefore the dismount must be done on a node that will remain up during the reboot.&lt;BR /&gt;&lt;BR /&gt;I haven't used the method John Gillings recommended, [/policy=require_members], but I just tried it and it works.&lt;BR /&gt;&lt;BR /&gt;So in syshutdwn&lt;BR /&gt;&lt;BR /&gt;$! with LOG_IO priv&lt;BR /&gt;$! if member only accessible via this node&lt;BR /&gt;$! request other node to dismount member&lt;BR /&gt;$! with use of sysmanini this can be done with single dcl command line.&lt;BR /&gt;&lt;BR /&gt;Contents of exe_other_node.sysmanini&lt;BR /&gt;set environment /node=&lt;OTHER_NODE&gt;&lt;BR /&gt;set profile/priv=log_io&lt;BR /&gt;&lt;BR /&gt;$ define/user sysmanini exe_other_node.sysmanini&lt;BR /&gt;$ mcr sysman do dismount/policy=minicopy &lt;MEMBER&gt;&lt;BR /&gt;&lt;BR /&gt;When disks are mounted, you do not have to specify /policy=minicopy unless you want the mount to fail if the member can't be mounted without a full copy.  If a minicopy bitmap exists, it will be used.  You can specify /policy=require_members, although this is most important on the initial mount of the virtual unit, to ensure that the most recent member is used as the master.&lt;BR /&gt;&lt;BR /&gt;I've attached an example showing commands and their effect on bitmaps and remounting of a member (that was static during the time it was dismounted).&lt;BR /&gt;&lt;BR /&gt;Good Luck,&lt;BR /&gt;&lt;BR /&gt;Jon</description>
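      <!--
        Pulling the fragments above together, a rough sketch of the SYSHUTDWN.COM
        piece as it might look on EMU2, asking EMU1 to dismount EMU2's locally
        attached member with a minicopy bitmap before EMU2 leaves the cluster.
        Node and device names follow the thread; the file name, its location, and
        the SHDW_MEMBER check are assumptions.

        $! SYSHUTDWN.COM fragment (sketch). Requires LOG_IO on the executing
        $! account; EMU1 must still have DSA3: mounted when this runs.
        $ member = "$4$DKC300:"
        $ IF .NOT. F$GETDVI(member,"SHDW_MEMBER") THEN GOTO SKIP_DISMOUNT
        $!
        $! SYSMAN initialisation file directing the command at EMU1.
        $ OPEN/WRITE ini SYS$MANAGER:EXE_OTHER_NODE.SYSMANINI
        $ WRITE ini "SET ENVIRONMENT/NODE=EMU1"
        $ WRITE ini "SET PROFILE/PRIVILEGES=LOG_IO"
        $ CLOSE ini
        $!
        $ DEFINE/USER_MODE SYSMANINI SYS$MANAGER:EXE_OTHER_NODE.SYSMANINI
        $ MCR SYSMAN DO DISMOUNT/POLICY=MINICOPY 'member'
        $SKIP_DISMOUNT:
        $ EXIT
      -->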
      <pubDate>Mon, 29 Oct 2007 14:55:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076696#M87318</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-10-29T14:55:00Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076697#M87319</link>
      <description>Here's the attachment I left off.&lt;BR /&gt;&lt;BR /&gt;Jon</description>
      <pubDate>Mon, 29 Oct 2007 14:56:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076697#M87319</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-10-29T14:56:55Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076698#M87320</link>
      <description>Bart,&lt;BR /&gt;&lt;BR /&gt;"do you use the logical name SHUTDOWN$INFORM_NODES"&lt;BR /&gt;&lt;BR /&gt;No, but I must say that was the first thing I checked for.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Mark.</description>
      <pubDate>Mon, 29 Oct 2007 15:21:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076698#M87320</guid>
      <dc:creator>MarkOfAus</dc:creator>
      <dc:date>2007-10-29T15:21:27Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076699#M87321</link>
      <description>Hoff,&lt;BR /&gt;&lt;BR /&gt;" Ok, so EMU1 is the primary and EMU2 is the secondary. I'd (still) mount the disks as previously stated, specifying all nodes and the full path. And I'd use the wait loop as previously specified. (I tend to combine the whole MOUNT sequence into the subroutine; the test for existence and the wait, a test for having been mounted and the MOUNT, etc.) "&lt;BR /&gt;&lt;BR /&gt;Ok, I will rationalise the approach; point taken.&lt;BR /&gt;&lt;BR /&gt;"And I'd look to configure shared SCSI buses (assuming the two systems are co-located within the range of appropriate SCSI cables), as this substantially improves uptime and reduces network load."&lt;BR /&gt;&lt;BR /&gt;The systems are geographically separated, and have their own closed fibre connections.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;"As for the disaster-level processing and the usual sorts of situations, I'd simply look to avoid starting the applications on the secondaries, or (better) to code the "&lt;BR /&gt;&lt;BR /&gt;No problem there as the licensing we have precludes running the application on both servers. So the secondary server is really just idling behind the scenes as a real-time backup, receiving data.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;"...Or (best) to code the environment to use all of the available cluster member nodes in parallel. I've found that manual switch-over processes tend to fail during disasters; best to have these set up as automatic as is reasonably feasible. Humans can tend to be the error trigger, particularly for seldom-used sequences."&lt;BR /&gt;&lt;BR /&gt;I guess I don't have an option, given the constraints, so a manual switch-over is the only way I can go, for now.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Mark.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 15:33:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076699#M87321</guid>
      <dc:creator>MarkOfAus</dc:creator>
      <dc:date>2007-10-29T15:33:11Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076700#M87322</link>
      <description>Jon,&lt;BR /&gt;&lt;BR /&gt;"If each of the shadow members has only a single system with a direct connection, i.e. if DKA300 is directly attached only to nodeA and DKC300 is directly attached only to nodeB, and you can't share the SCSI bus between the systems, you may be interested in trying to avoid a full copy when the member is reintroduced when a system boots."&lt;BR /&gt;&lt;BR /&gt;You are correct. Each system has its own disks, no shared storage (VMS 7.3-2).&lt;BR /&gt;&lt;BR /&gt;You are also astute. I had looked at minicopy, for future usage, because the disk at issue today is only a 36GB disk, so when it comes back into the shadow set the full copy is fairly quick. When the 300G disks are added in the next few weeks, that "fairly quick" copy will be a "bloody long one".&lt;BR /&gt;&lt;BR /&gt;"So in syshutdwn&lt;BR /&gt;&lt;BR /&gt;$! with LOG_IO priv&lt;BR /&gt;$! if member only accessible via this node&lt;BR /&gt;$! request other node to dismount member&lt;BR /&gt;$! with use of sysmanini this can be done with single dcl command line.&lt;BR /&gt;&lt;BR /&gt;Contents of exe_other_node.sysmanini&lt;BR /&gt;set environment /node=&lt;OTHER_NODE&gt;&lt;BR /&gt;set profile/priv=log_io&lt;BR /&gt;&lt;BR /&gt;$ define/user sysmanini exe_other_node.sysmanini&lt;BR /&gt;$ mcr sysman do dismount/policy=minicopy &lt;MEMBER&gt;&lt;BR /&gt;"&lt;BR /&gt;&lt;BR /&gt;Brilliant! Thank you Jon, I will do this.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Oct 2007 15:58:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076700#M87322</guid>
      <dc:creator>MarkOfAus</dc:creator>
      <dc:date>2007-10-29T15:58:35Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076701#M87323</link>
      <description>Bart,&lt;BR /&gt;&lt;BR /&gt;"regarding the suppression of the shutdown messages on other cluster members, do you use the logical name SHUTDOWN$INFORM_NODES ?"&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I think you may be onto something, though. I was under the impression that if it was blank it notifies none. Perhaps I should revise that assumption to "if it is blank, it will notify all nodes."?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Mark.</description>
      <pubDate>Mon, 29 Oct 2007 16:00:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076701#M87323</guid>
      <dc:creator>MarkOfAus</dc:creator>
      <dc:date>2007-10-29T16:00:46Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076702#M87324</link>
      <description>If anyone is aware of a better way to handle the "member with a single connection" case than using sysman, I would be interested.&lt;BR /&gt;&lt;BR /&gt;I have read the "HP Volume Shadowing for OpenVMS Alpha 7.3-2" manual, and it is silent on the subject as far as I know.  The main focus of minicopy is for backups.  However, from experience I can say that a master minicopy bitmap on a system with only an MSCP served connection is sufficient to avoid a full copy, and that the bitmap survives on the node it is created on, across the removal and reintroduction of the other node with the direct connection.&lt;BR /&gt;&lt;BR /&gt;A nice "enhancement" to dismount/policy=minicopy would be the ability to specify a node on which the dismount should be initiated, and therefore where the master bitmap should be created.&lt;BR /&gt;&lt;BR /&gt;For example:&lt;BR /&gt;&lt;BR /&gt;$ dismount/policy=minicopy=node:omega $4$DKC300: ! not implemented !!!&lt;BR /&gt;would tell omega to dismount the member and create the master minicopy bitmap.  Perhaps there could be a list of nodes specified, in which case the first node in the list that was currently a cluster member would master the bitmap.  The check for log_io privilege would be on the requesting node, so this assumes the security domain is the cluster, i.e. homogeneous privileges on all nodes of the cluster (shared SYSUAF).&lt;BR /&gt;&lt;BR /&gt;Also, if anyone knows of any problems with my suggestion, I would like to hear about them, as I have never seen this recommended or documented.&lt;BR /&gt;&lt;BR /&gt;Jon&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Oct 2007 13:38:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076702#M87324</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-10-30T13:38:20Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076703#M87325</link>
      <description>Jon,&lt;BR /&gt;&lt;BR /&gt;"I have read the "HP Volume Shadowing for OpenVMS Alpha 7.3-2" manual, and it is silent on the subject as far as I know. The main focus of minicopy is for backups. However, from experience I can say that a master minicopy bitmap on a system with only an MSCP served connection is sufficient to avoid a full copy, and that the bitmap survives on the node it is created on, across the removal and reintroduction of the other node with the direct connection.&lt;BR /&gt;"&lt;BR /&gt;&lt;BR /&gt;The manual is helpful, but as you suggest, it is often one-tracked in its explanations. No alternative scenarios are given, which is a pity because, to me, examples mean much more than paragraph after paragraph of explanatory notes. Often the manuals assume a level of OpenVMS knowledge by the reader that is not there.&lt;BR /&gt;&lt;BR /&gt;"A nice "enhancement" to dismount/policy=minicopy would be the ability to specify a node on which the dismount should be initiated, and therefore where the master bitmap should be created.&lt;BR /&gt;"&lt;BR /&gt;&lt;BR /&gt;This is a brilliant idea, and I can't understand why it isn't available, BUT the LOG_IO privilege issue seems to be a sticking point and is probably why using sysman is the only way to do it.&lt;BR /&gt;&lt;BR /&gt;I am going to use your suggestion today, first manually then in a command file at shutdown.</description>
      <pubDate>Tue, 30 Oct 2007 15:31:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076703#M87325</guid>
      <dc:creator>MarkOfAus</dc:creator>
      <dc:date>2007-10-30T15:31:35Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076704#M87326</link>
      <description>I think we had a similar discussion about 6 months ago. Using SMISERVER to perform the dismount on another node is an option I hadn't considered. That would certainly allow you to create the master write bitmap where it belongs.&lt;BR /&gt;&lt;BR /&gt; I'm still inclined to handle the dismount/mount processes manually though. Mounting and dismounting locally attached shadowset members can be a dangerous business, and I'd argue that you have less control if you automate the process. I tend to just write the mount/dismount scripts and then execute them when and where I choose.</description>
      <pubDate>Tue, 30 Oct 2007 23:29:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076704#M87326</guid>
      <dc:creator>Martin Hughes</dc:creator>
      <dc:date>2007-10-30T23:29:30Z</dc:date>
    </item>
    <item>
      <title>Re: Mounting of HBVS disks in sylogicals.com fails on a node.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076705#M87327</link>
      <description>I believe the thread Martin is referring to is this one:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1118643" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1118643&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;After thinking a bit more about this, there should really be an option to "do the right thing" when dismounting a virtual unit at shutdown.  I.e. if the virtual unit has at least one member that has a direct path to another cluster member, but there are some members of the virtual unit that are directly attached only to the system being shut down, then the dismount should first initiate a dismount of the members that have no direct paths to other cluster members, and this dismount should create a minicopy bitmap on a cluster member that currently has the virtual unit mounted, and has a direct connection to one of the other members.&lt;BR /&gt;&lt;BR /&gt;The purpose of doing this is to avoid full copies when the system being shut down reboots.  Also, by dismounting the member, the remaining cluster nodes won't have to time out the connection to the (MSCP served) member that stops responding when the MSCP serving node shuts down.  With HBMM, multiple cluster nodes can have master copies; with minicopy, this doesn't seem to be possible, as the master copy is created on the node that creates the bitmap with the dismount or mount command.&lt;BR /&gt;&lt;BR /&gt;Since this discussion is not related to "failure to mount disks", perhaps we should start a new topic discussing the use of minicopy during shutdown.&lt;BR /&gt;&lt;BR /&gt;Jon&lt;BR /&gt;</description>
      <pubDate>Wed, 31 Oct 2007 00:33:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/mounting-of-hbvs-disks-in-sylogicals-com-fails-on-a-node/m-p/5076705#M87327</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-10-31T00:33:01Z</dc:date>
    </item>
  </channel>
</rss>

