<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Multiple SYSUAF in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827572#M77791</link>
    <description>Yes, it can be as simple as pointing the system logical SYSUAF to different devices for each instance.</description>
    <pubDate>Thu, 20 Jul 2006 13:54:09 GMT</pubDate>
    <dc:creator>John Yu_1</dc:creator>
    <dc:date>2006-07-20T13:54:09Z</dc:date>
    <item>
      <title>Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827571#M77790</link>
      <description>Does anyone know if it is possible to have two separate SYSUAFs on a VMS cluster?  I am merging two clusters into one cluster, and I would like to keep the SYSUAFs separate. VMS version 7.3-2.</description>
      <pubDate>Thu, 20 Jul 2006 13:42:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827571#M77790</guid>
      <dc:creator>LM_2</dc:creator>
      <dc:date>2006-07-20T13:42:12Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827572#M77791</link>
      <description>Yes, it can be as simple as pointing the system logical SYSUAF to different devices for each instance.</description>
      <pubDate>Thu, 20 Jul 2006 13:54:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827572#M77791</guid>
      <dc:creator>John Yu_1</dc:creator>
      <dc:date>2006-07-20T13:54:09Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827573#M77792</link>
      <description>Sure. It is not a technical problem. Even if you have one common disk, you can give each node its own SYSUAF.&lt;BR /&gt;&lt;BR /&gt;Whether you will be happy maintaining it is another question. You can easily merge two SYSUAF files, provided that UICs and/or usernames are unique.</description>
      <pubDate>Thu, 20 Jul 2006 13:55:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827573#M77792</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2006-07-20T13:55:48Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827574#M77793</link>
      <description>LM,&lt;BR /&gt;&lt;BR /&gt;_IF_ that is what you want, it is fully supported.&lt;BR /&gt;&lt;BR /&gt;And in migration projects, it is often used.&lt;BR /&gt;&lt;BR /&gt;_BUT_ be aware of the extra complexity involved.&lt;BR /&gt;Especially if you have UICs that in one SYSUAF belong to a different account than that same UIC value in the other UAF, be aware that ownerships and access rights belonging to USER1 from SYSUAF1 simply BELONG to USER2 from SYSUAF2.&lt;BR /&gt;Under the hood, the UIC is the _ONLY_ value that DEFINES those IDs!!&lt;BR /&gt;&lt;BR /&gt;But if you are aware of that, and at your site that does not pose a problem, you are OK to go, and it is fully supported.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Thu, 20 Jul 2006 13:56:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827574#M77793</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2006-07-20T13:56:48Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827575#M77794</link>
      <description>I know about the SYSUAF system logical - but how would it know which user uses which SYSUAF? This is confusing to type out - but how would I assign one SYSUAF to a certain group of users and have the other group use the second one... or would it look at both SYSUAFs when someone logs in?&lt;BR /&gt;</description>
      <pubDate>Thu, 20 Jul 2006 14:02:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827575#M77794</guid>
      <dc:creator>LM_2</dc:creator>
      <dc:date>2006-07-20T14:02:11Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827576#M77795</link>
      <description>The operating system uses a single SYSUAF for authorization. You cannot have multiple active SYSUAF files within a single OpenVMS instance. If the user is not present in the active SYSUAF, (s)he cannot log in on that node.</description>
      <pubDate>Thu, 20 Jul 2006 14:08:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827576#M77795</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2006-07-20T14:08:48Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827577#M77796</link>
      <description>NO can do.&lt;BR /&gt;&lt;BR /&gt;WHY do you want this? Maybe there are better ways to achieve your goals.&lt;BR /&gt;&lt;BR /&gt;Are you concerned about username or UIC duplicates? You just HAVE to deal with those before joining, no escape.&lt;BR /&gt;&lt;BR /&gt;For reporting purposes? I think your best bet is to construct something recognizable for each 'group'.&lt;BR /&gt;- It could be part of the account name,&lt;BR /&gt;- the logical names for the default device&lt;BR /&gt;- a set of UIC groups, or application private data in the USERDATA portion of the SYSUAF record.&lt;BR /&gt;&lt;BR /&gt;Later on you can report based on that.&lt;BR /&gt;&lt;BR /&gt;hth,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 20 Jul 2006 14:11:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827577#M77796</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2006-07-20T14:11:40Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827578#M77797</link>
      <description>So pretty much what I need to do if I wanted this to happen is have each node use a separate SYSUAF file, and then have one set of users log into the first node and the other set of users log into the second node.  Not sure this would be a good scenario.  Something I will need to think about.</description>
      <pubDate>Thu, 20 Jul 2006 14:13:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827578#M77797</guid>
      <dc:creator>LM_2</dc:creator>
      <dc:date>2006-07-20T14:13:32Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827579#M77798</link>
      <description>Logins will use whichever SYSUAF is pointed to by the SYSUAF system logical on the node the user is logged in on.&lt;BR /&gt;&lt;BR /&gt;So if you have two groups of users defined in two different SYSUAFs that will be logging in on overlapping nodes I'm afraid there's no easy solution.  (It's possible you could program a solution using LOGINOUT &lt;BR /&gt;callouts but this would be very convoluted).&lt;BR /&gt;&lt;BR /&gt;If you can restrict access to each group of users to non-overlapping nodes then you can use two SYSUAFs.  You can manage both SYSUAFs from any node by defining SYSUAF as a process logical.</description>
      <pubDate>Thu, 20 Jul 2006 14:17:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827579#M77798</guid>
      <dc:creator>Jess Goodman</dc:creator>
      <dc:date>2006-07-20T14:17:00Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827580#M77799</link>
      <description>To clarify what I am doing: currently we have two separate divisions who are running on two different clusters.  One division is moving over to my cluster, so we have two separate SYSUAF files.  We have not looked into it yet, but I am sure we will have duplicate UICs/usernames - I have over 700 VMS accounts and they have over 300.  So my goal was to see if we could keep our same SYSUAFs - I didn't think it could be done, but it was kind of wishful thinking.</description>
      <pubDate>Thu, 20 Jul 2006 14:17:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827580#M77799</guid>
      <dc:creator>LM_2</dc:creator>
      <dc:date>2006-07-20T14:17:33Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827581#M77800</link>
      <description>I agree that this separation is not a good one - why merge the cluster if users are restricted to one node?&lt;BR /&gt;&lt;BR /&gt;By the way: this not only includes interactive logins - it also limits the use for batch jobs, network tasks and (I think) print jobs.</description>
      <pubDate>Thu, 20 Jul 2006 14:19:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827581#M77800</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2006-07-20T14:19:56Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827582#M77801</link>
      <description>LM,&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;but how would it know which user uses what sysuaf&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;In a cluster of more than one node (let's forget for the moment that it _IS_ also valid to have single-node clusters), any user process is always active on _ONE_ node.&lt;BR /&gt;Your different nodes _CAN_ have different SYSUAFs by using different definitions of the SYSUAF logical, OR, by using the default, the SYSUAFs in SYS$SYSTEM on the different system disks, OR, by locating (some of) the SYSUAFs in SYS$SPECIFIC:[SYSEXE] instead of SYS$COMMON:[SYSEXE].&lt;BR /&gt;Whichever mechanism you use (if you really want to make it confusing, you can even mix them!), each process derives its SYSUAF info from the currently governing SYSUAF _ON THAT NODE_.&lt;BR /&gt;So, which SYSUAF is used is determined by the node that the user logs in to.&lt;BR /&gt;&lt;BR /&gt;hth&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe&lt;BR /&gt;</description>
      <pubDate>Thu, 20 Jul 2006 14:22:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827582#M77801</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2006-07-20T14:22:55Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827583#M77802</link>
      <description>Just try your merge with CONVERT /EXCEPTION.&lt;BR /&gt;See how much trouble there is.&lt;BR /&gt;&lt;BR /&gt;Extract clean records (one pass on each original cluster's file):&lt;BR /&gt;$ convert/share sysuaf.dat sysuaf_node_1.dat&lt;BR /&gt;$ convert/share sysuaf.dat sysuaf_node_2.dat&lt;BR /&gt;Then merge, capturing the conflicts:&lt;BR /&gt;$ convert/stat/merge/excep=sysuaf_problems.dat sysuaf_node_1.dat sysuaf_node_2.dat&lt;BR /&gt;Make the smaller node node_1.&lt;BR /&gt;&lt;BR /&gt;Similar for RIGHTSLIST.&lt;BR /&gt;&lt;BR /&gt;Now at least you'll know the scope of the problem.&lt;BR /&gt;&lt;BR /&gt;fwiw,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 20 Jul 2006 14:28:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827583#M77802</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2006-07-20T14:28:52Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827584#M77803</link>
      <description>LM,&lt;BR /&gt;&lt;BR /&gt;I see your point now.&lt;BR /&gt;&lt;BR /&gt;What can be done (and I HAVE done this more than once) is to start out the way you plan.&lt;BR /&gt;&lt;BR /&gt;But after that, as soon as you can reasonably manage, you disentangle the conflicts.&lt;BR /&gt;&lt;BR /&gt;Start by merging the non-conflicting accounts (the easiest, and hopefully the largest, part).&lt;BR /&gt;&lt;BR /&gt;Then change UICs (and the corresponding ownerships) one at a time.&lt;BR /&gt;Copy each changed account at that time from the "smaller" SYSUAF to the "bigger" one.&lt;BR /&gt;&lt;BR /&gt;When you are done, have all systems point to your now integrated SYSUAF.&lt;BR /&gt;&lt;BR /&gt;--THIS is the time that you need to prepare the helpdesk for, and warn the users, because the "small" SYSUAF passwords will no longer be valid.&lt;BR /&gt;(Unless you want to do some trickery on the records of the SYSUAF-to-be-deserted, and selectively merge them into the surviving one. That can also be done, but requires some delicate programming.)&lt;BR /&gt;&lt;BR /&gt;hth&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Thu, 20 Jul 2006 14:38:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827584#M77803</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2006-07-20T14:38:13Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827585#M77804</link>
      <description>If you only need two instances of SYSUAF:&lt;BR /&gt;&lt;BR /&gt;For nodes on each of the original clusters,&lt;BR /&gt;"SYSUAF" = "CLUSTER1_NODES:SYSUAF.DAT"&lt;BR /&gt;or&lt;BR /&gt;"SYSUAF" = "CLUSTER2_NODES:SYSUAF.DAT"&lt;BR /&gt;&lt;BR /&gt;Whichever SYSUAF they get would depend on the node they log into.</description>
      <pubDate>Thu, 20 Jul 2006 14:44:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827585#M77804</guid>
      <dc:creator>John Yu_1</dc:creator>
      <dc:date>2006-07-20T14:44:35Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827586#M77805</link>
      <description>When you are ready, merging UAFs has been described here before, and it's in the SYSMAN utility manual.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 20 Jul 2006 16:06:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827586#M77805</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2006-07-20T16:06:41Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827587#M77806</link>
      <description>LM,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;I know about the sysuaf system logical - &amp;gt;but how would it know which user uses what &amp;gt;sysuaf - this is confusing on how to type &amp;gt;it out - but how would I assign the one &amp;gt;sysuaf to a certain group of user's and &amp;gt;have the other group use the second &amp;gt;one...or would it look at both sysuafs when &amp;gt;someone logs in......???&lt;BR /&gt;&lt;BR /&gt;This is from our cluster manual...&lt;BR /&gt;&lt;BR /&gt;1) Keep your two sets of system common files on two different non-system disks, dka700 &amp;amp; dka800.&lt;BR /&gt;&lt;BR /&gt;That is, one set of SYSUAF.DAT, NETPROXY.DAT, RIGHTSLIST.DAT and VMSMAIL_PROFILE.DATA on each.&lt;BR /&gt;&lt;BR /&gt;2) Modify SYS$COMMON:[SYSMGR]SYLOGICALS.COM on each system disk and define the logical names that point to the two sets of system common files, which are on dka700 and dka800.&lt;BR /&gt;ex: on one system disk&lt;BR /&gt;$ DEFINE/SYSTEM/EXEC SYSUAF -&lt;BR /&gt;dka700:[VMS$COMMON.SYSEXE]SYSUAF.DAT&lt;BR /&gt;and on the other&lt;BR /&gt;$ DEFINE/SYSTEM/EXEC SYSUAF -&lt;BR /&gt;dka800:[VMS$COMMON.SYSEXE]SYSUAF.DAT&lt;BR /&gt;&lt;BR /&gt;3) Make sure the disks are mounted correctly with each reboot: copy the SYS$EXAMPLES:CLU_MOUNT_DISK.COM file to [VMS$COMMON.SYSMGR], then edit SYLOGICALS.COM and include commands to mount, with the appropriate volume label, the disk containing the shared files.&lt;BR /&gt;Example: If the disk is $1$DJA16, include the following command:&lt;BR /&gt;$ @SYS$SYSDEVICE:[VMS$COMMON.SYSMGR]CLU_MOUNT_DISK.COM $1$DJA16: volume-label&lt;BR /&gt;&lt;BR /&gt;Disks holding common files must be mounted early in the system startup procedure (SYLOGICALS.COM) and must be mounted with each OpenVMS Cluster reboot.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Archunan&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 20 Jul 2006 16:55:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827587#M77806</guid>
      <dc:creator>Arch_Muthiah</dc:creator>
      <dc:date>2006-07-20T16:55:42Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827588#M77807</link>
      <description>LM,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;if it is possible to have two &lt;BR /&gt;&amp;gt;seperate sysuaf's on a VMS cluster?&lt;BR /&gt;&lt;BR /&gt;  You really don't want to do this. The same username in two different UAFs will have different passwords. Users will have to know which node they're logging into to know which password to use. &lt;BR /&gt;&lt;BR /&gt;  If the UICs and identifiers are not in synch, it's total CHAOS. Even if they are in synch, you'll be guaranteed that things like new mail counts will always be wrong, no matter which node you login to.&lt;BR /&gt;&lt;BR /&gt;  OpenVMS clusters are designed to share a common security domain, which includes the UAF, RIGHTSLIST, password history, mail profile, audit settings, queue manager, license data base and more (see SYLOGICALS.TEMPLATE for a list of files). Although it's (too!) easy to build a cluster where these files are NOT shared, all manner of strange and unsafe behaviours occur.&lt;BR /&gt;&lt;BR /&gt;  If you're going to build a cluster, it's far far better to put in the effort to configure it correctly up front, rather than try to puzzle out what's wrong later. Read the cluster configuration guide and follow Hein's advice to merge your existing UAFs and RIGHTSLISTs.</description>
      <pubDate>Thu, 20 Jul 2006 20:19:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827588#M77807</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2006-07-20T20:19:35Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827589#M77808</link>
      <description>LM,&lt;BR /&gt;&lt;BR /&gt;I agree with John Gillings; this is not a particularly safe situation.&lt;BR /&gt;&lt;BR /&gt;It is not only a matter of SYSUAF, it is also a matter of RIGHTSLIST and a variety of other things.&lt;BR /&gt;&lt;BR /&gt;Not to write a long-winded post, but there are several milestones that must be checked off:&lt;BR /&gt;&lt;BR /&gt;- you must ensure that the UIC ranges are disjoint&lt;BR /&gt;- you must ensure that the identifiers are disjoint&lt;BR /&gt;- if anybody needs a change of UIC or identifier, this should be done BEFORE attempting to merge the clusters&lt;BR /&gt;- you should create a rights identifier for each of the two divisions, and grant it to each user, as appropriate (systems staff will probably need BOTH identifiers)&lt;BR /&gt;- modify SYLOGIN.COM to execute a file which will set up the base environment for each company AND determine whether they can log into a particular node (some of this could be done by LOGINOUT callouts).&lt;BR /&gt;&lt;BR /&gt;If I thought about it for a bit, there are probably other issues that need to be taken care of. I have done this type of merge in the past for clients, and it can be complex. In the end, the result is generally worth the effort.&lt;BR /&gt;&lt;BR /&gt;I have presented some issues relating to this in "Inheritance Based Environments for OpenVMS Systems and OpenVMS Clusters" (published in the OpenVMS Technical Journal, Volume 3; PDF reprints are available at &lt;A href="http://www.rlgsc.com/publications/vmstechjournal/inheritance.html" target="_blank"&gt;http://www.rlgsc.com/publications/vmstechjournal/inheritance.html&lt;/A&gt;), and in "OpenVMS User Environments", a seminar about the resulting multi-organization environment at HP World 2004 (a summary presentation from this session is at &lt;A href="http://www.rlgsc.com/hpworld/2004/N227.html" target="_blank"&gt;http://www.rlgsc.com/hpworld/2004/N227.html&lt;/A&gt;).&lt;BR /&gt;&lt;BR /&gt;I hope that the above is helpful.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Fri, 21 Jul 2006 11:27:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827589#M77808</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2006-07-21T11:27:20Z</dc:date>
    </item>
    <item>
      <title>Re: Multiple SYSUAF</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827590#M77809</link>
      <description>I have a two-node cluster that shares a SYSUAF file.  The SYSUAF file is stored on a shadowed disk, $1$DGA100:, located on an MSA array that is connected to both nodes.&lt;BR /&gt;&lt;BR /&gt;I created a directory on $1$DGA100: called CLUSTER_SHARE, i.e. $1$DGA100:[CLUSTER_SHARE].  I mount $1$DGA100: in SYLOGICALS.COM and define a logical cglhd$share that points to $1$DGA100:[CLUSTER_SHARE].&lt;BR /&gt;&lt;BR /&gt;Here's my sylogicals.com:&lt;BR /&gt;&lt;BR /&gt;$ mount/cluster $1$dga100: hdgldat1 hdgldat1&lt;BR /&gt;$ define/system/executive cglhd$share $1$dga100:[cluster_share]&lt;BR /&gt;$ @cglhd$share:cglhd_sylogicals.com&lt;BR /&gt;&lt;BR /&gt;Looking closely at sylogicals.com, you'll notice that the last thing it executes is another command procedure: cglhd_sylogicals.com.  This command procedure redefines all the system logicals to point to the new location of the system files.&lt;BR /&gt;&lt;BR /&gt;Here's cglhd_sylogicals.com:&lt;BR /&gt;&lt;BR /&gt;$ DEFINE/SYSTEM/EXECUTIVE SYSUAF            cglhd$share:SYSUAF.DAT&lt;BR /&gt;$ DEFINE/SYSTEM/EXECUTIVE SYSUAFALT         cglhd$share:SYSUAFALT.DAT&lt;BR /&gt;$ DEFINE/SYSTEM/EXECUTIVE SYSALF            cglhd$share:SYSALF.DAT&lt;BR /&gt;$ DEFINE/SYSTEM/EXECUTIVE RIGHTSLIST        cglhd$share:RIGHTSLIST.DAT&lt;BR /&gt;$ DEFINE/SYSTEM/EXECUTIVE NETPROXY          cglhd$share:NETPROXY.DAT&lt;BR /&gt;$ DEFINE/SYSTEM/EXECUTIVE NET$PROXY         cglhd$share:NET$PROXY.DAT&lt;BR /&gt;$ DEFINE/SYSTEM/EXECUTIVE NETOBJECT         cglhd$share:NETOBJECT.DAT&lt;BR /&gt;$ DEFINE/SYSTEM/EXECUTIVE VMS$OBJECTS       cglhd$share:VMS$OBJECTS.DAT&lt;BR /&gt;$ DEFINE/SYSTEM/EXECUTIVE VMS$AUDIT_SERVER  cglhd$share:VMS$AUDIT_SERVER.DAT&lt;BR /&gt;$ DEFINE/SYSTEM/EXECUTIVE PASSWORD_HISTORY  cglhd$share:VMS$PASSWORD_HISTORY.DATA&lt;BR /&gt;$ DEFINE/SYSTEM/EXECUTIVE VMS$PASSWORD_DICTIONARY cglhd$share:VMS$PASSWORD_DICTIONARY.DATA</description>
      <pubDate>Fri, 21 Jul 2006 12:35:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/multiple-sysuaf/m-p/3827590#M77809</guid>
      <dc:creator>Jim Lahman_1</dc:creator>
      <dc:date>2006-07-21T12:35:49Z</dc:date>
    </item>
  </channel>
</rss>