Operating System - OpenVMS

Allow user to access cluster

 
SOLVED
owilliams
Frequent Advisor

Allow user to access cluster

I hope I explain this correctly. I need a user I create to be able to access all 3 nodes in a cluster, but I can create the user on only one node. How can I create an account for the user on the other 2 nodes? New to VMS; any help is appreciated!
21 REPLIES
Hein van den Heuvel
Honored Contributor

Re: Allow user to access cluster

You may be ready already.

Once the username is created in the cluster-common SYSUAF.DAT, it can be used on all nodes.

>> I can create the user on only one node.

What brings you to this conclusion? What is the (non-)problem that you see? Be detailed; include EXACT (cut & paste) commands and error messages if needed.

Did you try accessing the other nodes?
What mechanism? Telnet? 'SET HOST'? Application controlled (eg Oracle, ftp,...)
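One quick sanity check on each node (assuming the usual setup, where a system logical name points at the cluster-common files) is:

$ SHOW LOGICAL SYSUAF
$ SHOW LOGICAL RIGHTSLIST

If the logical is undefined on some nodes, or points to different files on different nodes, that is the first place to look.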

hth,
Hein.

Robert Gezelter
Honored Contributor

Re: Allow user to access cluster

If your cluster is using common account files, the default would be for the account to be available on all members of the cluster.

Is there some reason why you believe that the user cannot use other members of the cluster?

- Bob Gezelter, http://www.rlgsc.com
owilliams
Frequent Advisor

Re: Allow user to access cluster

I tried accessing by Telnet and Set host. I get a user authorization error.
Robert Gezelter
Honored Contributor

Re: Allow user to access cluster

More information is needed, but if you can login using one of the nodes and not the others, it is possible that one or more of the nodes is not using the common account file.

There are also other possibilities. More information about your system configuration is needed to be definitive.

- Are you using any special authentication mechanisms?
- Where are the UAF files for each member of the cluster located?

There are also a variety of accidental mis-configurations that could be the cause of the problem. Have any changes been made to the cluster configuration recently?

If you can give us the information, we can attempt to troubleshoot this problem in this forum. If it is more complex, or it can not be addressed in the forum, a consultant with system management experience could sort out the problem [Disclosure: Our firm does provide such services, as do several other active members of the community].

- Bob Gezelter, http://www.rlgsc.com
Jon Pinkley
Honored Contributor

Re: Allow user to access cluster

Do you have a common system disk for the cluster? If you do not, then you will need to do something special to make sure all members are accessing a common set of security files (at least SYSUAF and RIGHTSLIST)

If you don't know the answer to that question, please do the following:



Please provide output from:

$ mcr sysman set environment/cluter
SYSMAN> do show logical sys$sysdevice/full
SYSMAN> do show logical sys$common/ful
SYSMAN> do show logical sys$specific/ful
SYSMAN> do show logical sysuaf/full
SYSMAN> do show logical rightslist/full
SYSMAN> do directory/file sysuaf
SYSMAN> do directory/file rightslist

As the others have said, the symptoms are consistent with non-shared authorization/rightslist files.

These normally will be in the SYS$COMMON:[SYSEXE] directory. In almost all cases, you want these to be using the same files from every cluster node, because from a security standpoint, the cluster is "the system".

The common files don't have to be in sys$common:, but they should be using the same files on each node in the cluster.

The output of the last two directory commands will include a file ID; if these are not the same on all nodes, then you are not using a common set. Even if they are the same, they could be on different devices, hence the other show logical commands.

Probably best if you cut and paste the output to a notepad text file and attach, as the output will be easier to read in a fixed width font.

Jon
it depends
Hoff
Honored Contributor
Solution

Re: Allow user to access cluster

It appears likely that there is either something else going on here with the network, or with the particular local system configuration, or (fairly common, in my experience) that the particular cluster is mis-configured.

On OpenVMS V7.2 and later, look at the contents of the SYLOGICALS.TEMPLATE file for the list of files that should be shared in a cluster, or that -- at a minimum -- must be coordinated. This file is the template for the SYLOGICALS.COM procedure during OpenVMS installations, and it is a standard text file.

It has been quite common to miss one or more of these (shared) files over the years, which is what prompted the creation of the (shared) file list in the SYLOGICALS.TEMPLATE file.
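For illustration only -- the device and directory names here are invented examples, not your configuration -- the corresponding SYLOGICALS.COM entries for the shared files typically look something like:

$ DEFINE/SYSTEM/EXEC CLUS$EXE DISK$COMMON:[CLUSTER_COMMON.SYSEXE]
$ DEFINE/SYSTEM/EXEC SYSUAF CLUS$EXE:SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST CLUS$EXE:RIGHTSLIST.DAT
$ DEFINE/SYSTEM/EXEC NETPROXY CLUS$EXE:NETPROXY.DAT

The same pattern applies to the rest of the shared-file list in SYLOGICALS.TEMPLATE, with every node pointing at the same physical files.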

Once these files are configured correctly and any duplicates resolved, then the creation of a username on one node can and will apply (by default) to all cluster members. Transparently. Further, the same security profiles, queues and other such characteristics of a cluster can and do apply to all nodes.

Resolving duplicates is somewhat tedious, unfortunately. There are descriptions of the basic sequence in the appendix of the Cluster Systems Manual. I tend to use listings and a manual ("manual" as in "by-hand") process to MERGE and to flag duplicate UICs, identifiers, and usernames. With a little preliminary work on the text files acquired from commands such as AUTHORIZE (UAF) LIST, you can aim MERGE at the various listings from AUTHORIZE and such, and have it flag any duplicates that require resolution.

Stephen Hoffman
HoffmanLabs
Jon Pinkley
Honored Contributor

Re: Allow user to access cluster

Oops, I had a typo. That should be:

$ mcr sysman set environment/cluster

not:

$ mcr sysman set environment/cluter

Jon
it depends
Jim Lahman_1
Advisor

Re: Allow user to access cluster

Do you have a shareable system disk or does each node have its own system disk? That is, does each node boot off of its disk or boot from a shareable disk located on a storage array such as an msa1000?

If each node has its own system disk, then you need to set up a shareable directory, accessible to all nodes, that contains the cluster-wide system files.
Cheers!
Jon Pinkley
Honored Contributor

Re: Allow user to access cluster

When clusters first appeared in V4.0, merging SYSUAF files was more common.

While manually merging is certainly one option, I would recommend making copies of the files and using convert/merge/exception=x.x as a first pass.
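As a sketch only (the file names here are placeholders), after making safety copies:

$ CONVERT/MERGE/EXCEPTIONS_FILE=UAF_DUPES.EXC COPY_OF_NODE_B_SYSUAF.DAT MASTER_SYSUAF.DAT

CONVERT/MERGE loads the input records into the existing output file; any records it cannot insert (duplicate keys, i.e. duplicate usernames) land in the exceptions file for you to resolve by hand.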

In any event, as Robert Gezelter and Hoff have stated, if you have uncoordinated security files, cleaning up probably isn't something that will be resolved on this forum, as it can be complex. For example, there may be multiple identifiers associated with the same UIC value. If you are new to VMS, you will probably require some assistance.

If this is a "new" problem, then perhaps it will be easy to fix. The longer things have been out of sync, the more divergent they become. If no new accounts have been added or modified, then only things like last login time, etc. will be different.

Did you inherit the cluster? Is the previous owner still available to ask questions?

Disclaimer: I don't do consulting, so you will need to find someone else, if you so decide. There are several people on the forums that do provide these services for a fee. Alternatively, you can read the documentation Hoff suggested, and use Google to search for things like merge sysuaf in Google groups, and fix the problem yourself. You will definitely learn more by doing it yourself, but there is a much higher degree of risk, and having someone that has done this before is probably better if this is a production system. Doing it wrong could leave your system insecure.


it depends
comarow
Trusted Contributor

Re: Allow user to access cluster

If you have a cluster-common UAF (the standard), you do it once.

If not, you create the account on every node.
Doesn't get simpler than that.
Hoff
Honored Contributor

Re: Allow user to access cluster

>>>
If not, you create the account on every node.
<<<

If multiple SYSUAF files are present, it is wise to maintain the binary values of the identifiers, the binary values of the UICs and username strings in synchronization, lest the queue manager or system security manifest any of various ill-desired behaviors.

It's not difficult to do, but it's more work than most folks want. You end up with DCL procedures or other tools to keep the files in synchronization.

If there are access restrictions required, it's easier to use an identifier to allow or deny access at login time; a classic SYLOGIN access check.
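A much-simplified sketch of such a check in SYS$MANAGER:SYLOGIN.COM (the node and username are invented, and a real version would test a rights identifier rather than a username, as described above):

$ node = F$GETSYI("NODENAME")
$ user = F$EDIT(F$GETJPI("","USERNAME"),"TRIM")
$ IF node .EQS. "NODEB" .AND. user .EQS. "SOMEUSER" THEN LOGOUT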

One of the very few cases I've encountered where parallel UAFs can be particularly useful is a cluster with wildly different nodes (either in terms of system model, system performance and related configuration, or in terms of system tuning) requiring substantially different SYSUAF quotas.
owilliams
Frequent Advisor

Re: Allow user to access cluster

These are the results of the suggested commands:
$ mcr sysman set environment/cluster

%SYSMAN-I-ENV, current command environment:

Clusterwide on local cluster

Username CONRES will be used on nonlocal nodes



SYSMAN> do show logical sys$sysdevice/full

%SYSMAN-I-OUTPUT, command execution on node NRCAVB

"SYS$SYSDEVICE" [exec] = "$3$DKB100:" [concealed,terminal] (LNM$SYSTEM_TABLE)

%SYSMAN-I-OUTPUT, command execution on node AGS80A

"SYS$SYSDEVICE" [exec] = "$1$DGA886:" [concealed,terminal] (LNM$SYSTEM_TABLE)

%SYSMAN-I-OUTPUT, command execution on node NRCAVA

"SYS$SYSDEVICE" [exec] = "$1$DGA238:" [concealed,terminal] (LNM$SYSTEM_TABLE)

SYSMAN> do show logical sys$common/ful

%SYSMAN-I-OUTPUT, command execution on node NRCAVB

"SYS$COMMON" [exec] = "$3$DKB100:[SYS0.SYSCOMMON.]" [concealed,terminal] (LNM$SYSTEM_TABLE)

%SYSMAN-I-OUTPUT, command execution on node AGS80A

"SYS$COMMON" [exec] = "$1$DGA886:[SYS2.SYSCOMMON.]" [concealed,terminal] (LNM$SYSTEM_TABLE)

%SYSMAN-I-OUTPUT, command execution on node NRCAVA

"SYS$COMMON" [exec] = "$1$DGA238:[SYS0.SYSCOMMON.]" [concealed,terminal] (LNM$SYSTEM_TABLE)

SYSMAN> do show logical sys$specific/full

%SYSMAN-I-OUTPUT, command execution on node NRCAVB

"SYS$SPECIFIC" [exec] = "$3$DKB100:[SYS0.]" [concealed,terminal] (LNM$SYSTEM_TABLE)

%SYSMAN-I-OUTPUT, command execution on node AGS80A

"SYS$SPECIFIC" [exec] = "$1$DGA886:[SYS2.]" [concealed,terminal] (LNM$SYSTEM_TABLE)

%SYSMAN-I-OUTPUT, command execution on node NRCAVA

"SYS$SPECIFIC" [exec] = "$1$DGA238:[SYS0.]" [concealed,terminal] (LNM$SYSTEM_TABLE)

SYSMAN> do show logical sysuaf/full

%SYSMAN-I-OUTPUT, command execution on node NRCAVB

%SHOW-S-NOTRAN, no translation for logical name SYSUAF

%SYSMAN-I-OUTPUT, command execution on node AGS80A

%SHOW-S-NOTRAN, no translation for logical name SYSUAF

%SYSMAN-I-OUTPUT, command execution on node NRCAVA

"SYSUAF" [exec] = "CLUS$EXE:SYSUAF.DAT" (LNM$SYSTEM_TABLE)

SYSMAN> do show logical rightslist/full

%SYSMAN-I-OUTPUT, command execution on node NRCAVB

%SHOW-S-NOTRAN, no translation for logical name RIGHTSLIST

%SYSMAN-I-OUTPUT, command execution on node AGS80A

%SHOW-S-NOTRAN, no translation for logical name RIGHTSLIST

%SYSMAN-I-OUTPUT, command execution on node NRCAVA

"RIGHTSLIST" [exec] = "CLUS$EXE:RIGHTSLIST.DAT" (LNM$SYSTEM_TABLE)

SYSMAN> do directory/file sysuaf

%SYSMAN-I-OUTPUT, command execution on node NRCAVB

Directory CLUSTERDISK:[CONRES]

SYSUAF.LIS;3 (11645,47195,0)

SYSUAF.LIS;2 (10417,25,0)

SYSUAF.LIS;1 (8504,71,0)

Total of 3 files.

%SYSMAN-I-OUTPUT, command execution on node AGS80A

Directory CLUSTERDISK:[CONRES]

SYSUAF.LIS;3 (11645,47195,0)

SYSUAF.LIS;2 (10417,25,0)

SYSUAF.LIS;1 (8504,71,0)

Total of 3 files.

%SYSMAN-I-OUTPUT, command execution on node NRCAVA

Directory $1$DGA253:[CLUSTER_COMMON.SYSEXE]

SYSUAF.DAT;8 (48497,26,0)

SYSUAF.DAT;7 (3001,94,0)

Total of 2 files.

SYSMAN> do directory/file rightslist

%SYSMAN-I-OUTPUT, command execution on node NRCAVB

%DIRECT-W-NOFILES, no files found

%SYSMAN-I-OUTPUT, command execution on node AGS80A

%DIRECT-W-NOFILES, no files found

%SYSMAN-I-OUTPUT, command execution on node NRCAVA

Directory $1$DGA253:[CLUSTER_COMMON.SYSEXE]

RIGHTSLIST.DAT;8 (3060,145,0)

Total of 1 file.

SYSMAN>

EdgarZamora_1
Respected Contributor

Re: Allow user to access cluster

New to VMS and managing a multiple system disk cluster... I don't envy you.

I would like you to clarify what you mean by your statement "I can create the user on only one node." Do you mean that you're managing that cluster but you can only login to one node? Or are you getting an error when you try to create the account from another node? More details would be helpful.

With that said, your cluster has multiple system disks, and it looks like it has multiple SYSUAF databases too. For a user to be able to login on each node, the account will have to be in all of those SYSUAF databases. Obviously, the simplest solution is for you to login to each node and create the user account on each. If you are only able to login to one node, you will have to ask the system managers (or a privileged user) on the other nodes to create the accounts on their nodes. If that's not the case, and it's only you... if you have privileges and CAN MOUNT (on the node you're logged into) the volume(s) that contain the SYSUAF of the other nodes (most likely their SYS$SYSDEVICE:), then you can define SYSUAF to point to that disk/directory and then invoke AUTHORIZE. This is dangerous stuff; I don't recommend that someone new to VMS do this. If this is your only option, come back here and we can explain further and give you details.

If you are the system manager for this cluster, you yourself should be able to login to all the nodes of your cluster. Otherwise managing your cluster will be a pain, especially for a beginner.
owilliams
Frequent Advisor

Re: Allow user to access cluster

Unfortunately, I am the System Manager for the Cluster.
I get an error when I try to create a user on one of the other nodes. I have all the privs I need, and I CAN MOUNT if I have to. However, being a novice, it would be nice to have specific instructions, and, if possible, to figure out how the old system manager did his job. In the brief conversation I had with him before he left, he mentioned the cluster is poorly configured. I guess I get to find out just how bad it is.
EdgarZamora_1
Respected Contributor

Re: Allow user to access cluster

What is the error message? By any chance, is it this error that you get?

SOURCE> mc authorize
%UAF-E-NAOFIL, unable to open system authorization file (SYSUAF.DAT)
-RMS-E-FNF, file not found
Do you want to create a new file?

If it is, do this before you run authorize:

$ SET DEFAULT SYS$SYSTEM
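The reason this works: when the SYSUAF logical name is not defined, AUTHORIZE looks for SYSUAF.DAT in your current default directory, so running it from elsewhere produces the "file not found" prompt above. A longer-term alternative (the file location shown is the common default, not necessarily yours) is to define the logical so AUTHORIZE and LOGINOUT agree on one file:

$ DEFINE/SYSTEM/EXEC SYSUAF SYS$SYSTEM:SYSUAF.DAT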

Hoff
Honored Contributor

Re: Allow user to access cluster

I'd start with the MERGE as described in the back of the cluster manual, and bring the SYSUAF, RIGHTSLIST and NET*PROXY files -- and the other shared files listed in SYLOGICALS.TEMPLATE -- into synchronization.

Admittedly, this is the deep end of the pool, but you're going to encounter more mistakes and more problems and more effort if you don't implement this merge.

This case is undoubtedly one of the reasons why the cluster was reportedly mis-configured.

In the interim, SET HOST to each node, SET DEFAULT to SYS$SYSTEM, RUN AUTHORIZE, and ADD the username. That'll be somewhat screwed up and it'll potentially increase the exposure to problems (slightly), but it'll get you out of your current immediate predicament. You will have to come back and fix these usernames.
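Spelled out, that interim sequence looks roughly like this (the username, UIC, and qualifiers are invented examples; match the UIC and quotas to the entry on the node where the account already works):

$ SET HOST NRCAVB
$ SET DEFAULT SYS$SYSTEM
$ RUN AUTHORIZE
UAF> ADD NEWUSER/UIC=[200,201]/PASSWORD=CHANGEME/DEVICE=SYS$SYSDEVICE:/DIRECTORY=[NEWUSER]
UAF> EXIT

Repeat on each node that has its own SYSUAF.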

The System Manager's Manual (both volumes) and the Cluster manuals (both volumes) are the starting manuals for learning about this whole area. And make no mistake, you're in rather deep water right now -- this is a moderately-sized and moderately-complex cluster with what is almost certainly a sub-optimal or invalid shared-file configuration, and with Fibre Channel SAN, and multiple system disks tossed in for extra measure.

Draining these swamps usually takes an initial order-of-magnitude investigation, then usually a couple of days of concerted research, conflict resolution, and related work -- the exact effort depends greatly on the particular configuration, how far gone it is, and how dependent critical applications are on the current configuration. And on the access and ability to drop the nodes in the configuration for service and ECOs and a testing reboot or two.

Stephen Hoffman
HoffmanLabs
owilliams
Frequent Advisor

Re: Allow user to access cluster

Thanks Everyone, you have been a great help. I can create the user now. I will have more questions later on how to fix the configuration. Hopefully I can get some reading in 1st.
Jon Pinkley
Honored Contributor

Re: Allow user to access cluster

Thanks for the output, that really does clear up a lot of questions.

It appears that only the NRCAVA node has the logical names that would normally be defined in a cluster, i.e. SYSUAF and RIGHTSLIST.

You have three distinct system disks,

$3$DKB100 on NRCAVB (this is almost certainly a locally attached SCSI drive).

$1$DGA886: on AGS80A (this is SAN attached)

$1$DGA238: on NRCAVA (also SAN attached).

Ok, we need a bit more info for further analysis:

To see what SYSUAF and RIGHTSLIST are being used by LOGINOUT on each node, do the following: (needed because the logical names SYSUAF and RIGHTSLIST are not defined on two of the three nodes)

$ mcr sysman set environment/cluster
SYSMAN> do directory/file/nohead/notrail 'f$parse("SYSUAF","SYS$SYSTEM:SYSUAF.DAT",,,"NO_CONCEAL")
SYSMAN> do directory/file/nohead/notrail 'f$parse("RIGHTSLIST","SYS$SYSTEM:RIGHTSLIST.DAT",,,"NO_CONCEAL")

Let's look at the physical system devices too.

SYSMAN> do show device/full sys$sysdevice:

And check to see if the devices are visible clusterwide, and what system roots currently exist on each system disk. Each node should see the identical results for the following:

SYSMAN> do directory/file/date=(cre)/wid=(file:38:size:7)/size=all $3$DKB100:[000000]SYS*.DIR
SYSMAN> do directory/file/date=(cre)/wid=(file:38:size:7)/size=all $1$DGA886:[000000]SYS*.DIR
SYSMAN> do directory/file/date=(cre)/wid=(file:38:size:7)/size=all $1$DGA238:[000000]SYS*.DIR

Is there one node that is working as you expect? My guess would be the NRCAVA node, just due to the values of the logical names SYSUAF and RIGHTSLIST.

Do you know of a reason that separate system devices are set up? Valid reasons would be nodes with different architectures (AXP, I64), or the ability to do rolling upgrades (upgrade one node while the "cluster" remains available). In the past there were also performance advantages to having multiple system disks; that is less true with EVA controllers, as the virtual devices are already spread over multiple spindles.

It is possible that some of the system devices were originally "the same", i.e. have a common ancestor, perhaps split for an upgrade.

Are all the nodes running the same architecture (i.e. VAX,AXP,I64), same version of VMS?

SYSMAN> do show system/noprocess

If multiple system roots exist on the $1$DGA238 device (NRCAVA's system device), then perhaps the intention was to have everything boot from the same system disk. Was the SAN a relatively new acquisition? Do you know what type of controller(s) the $1$DGA devices are connected to? i.e. EVA5000, EVA6000, MSA, HSG80?

If you are going to create the new username in multiple SYSUAF files, please use the same UIC as exists in the already-created entry. Security is based on UIC.
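For example (the username and UIC here are invented), first note the UIC on the node where the account already exists:

UAF> SHOW NEWUSER

then use that exact value when adding the account on the other nodes:

UAF> ADD NEWUSER/UIC=[200,201]

(plus the other qualifiers). If the UICs differ between nodes, file ownership and access checks will not behave consistently across the cluster.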

Also can you provide output from:

$ set term/dev=la120 ! make it printable (this is a hardcopy terminal type)
$ show cluster
$ set term/inq ! reset your terminal so things like delete work as expected

Also, talk to whoever holds the purse strings, and ask to go to the VMS Bootcamp. It will be well worth your time. Prepare to be overwhelmed, but you will definitely gain a lot of knowledge and be able to meet with the experts face to face.

Good luck,

Jon
it depends
Jon Pinkley
Honored Contributor

Re: Allow user to access cluster

Sorry for the formatting of my previous note; I edited in Word and pasted into the Reply box (since I have been burned several times when the reply gets lost). The curly single and double quote characters Word uses don't get translated by the forum software, and get displayed incorrectly.

So where you see the weird characters, they were originally either single or double quotes.

The SYSMAN commands came out ok, since I pasted them into Word from my KEA! (terminal emulator) session.

Have fun,

Jon
it depends
Jon Pinkley
Honored Contributor

Re: Allow user to access cluster

Here's something else to try, just to see if the number of UAF records is close in each SYSUAF file. This won't expose usernames; it will just make it easy to see the scope of the problem. If two of the nodes have only a few records and the third has many more, that is an indication that you probably want to merge the smaller into the larger. If they all have about the same, then there may be some set of command procedures that runs on a scheduled basis and copies the "master" to the other nodes.

$ mcr sysman set environment/cluster
SYSMAN> do search /noout/stat 'f$parse("SYS$SPECIFIC:[000000]SYSMGR.DIR",,,,"NO_CONCEAL") ""

Everything on SYSMAN> should be on a single line. You should be able to cut/paste ok.

If you are running multiple SYSUAF files to allow for differing process quotas, consider using the PQL_M* SYSGEN parameters to set different minimum values on the different nodes, as an alternative to having totally distinct copies of the SYSUAF files. This doesn't give you the level of granularity that distinct SYSUAF files do, but it makes the management aspects much easier.
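For example (the parameter values here are invented), you would add per-node lines like these to each node's MODPARAMS.DAT and then run AUTOGEN:

MIN_PQL_MWSEXTENT = 65536
MIN_PQL_MBYTLM = 100000

$ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS

AUTOGEN treats the MIN_ prefix as a floor, so each node gets at least that minimum process quota without needing its own SYSUAF; reboot afterwards if the changed parameters require it.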

But that's being a bit premature; we (you) need to know the reasons why there are separate system disks in the first place.

Jon
it depends
Jon Pinkley
Honored Contributor

Re: Allow user to access cluster

Sorry, that should be

SYSMAN> do search /noout/stat 'f$parse("SYSUAF","SYS$SYSTEM:SYSUAF.DAT",,,"NO_CONCEAL") ""

The number of records in the SYS$SPECIFIC:[SYSMGR] directory isn't very useful. I need to pay more attention to what I am cutting...

Jon
it depends