Vin,
My previous comment was based on $enq, not $enqw, which is what your calls show.
$enq puts the request in the queue and doesn't wait, whereas $enqw puts the request in the queue and then waits for the lock to be granted.
Those two will have vastly different timings, and the process calling $enqw has little control over how long it will have to wait, unless it is converting to NL, or it specifically tells the lock manager not to wait when the request can't be granted immediately via the LCK$M_NOQUEUE flag; in that case, however, the request is not entered in the queue, so the lock will never be granted. There are some undocumented parameters and flags that can let a $enq request "cut in line" ahead of other queued requests, but the best those can do is move the request to the head of the queue; they don't force other holders to drop incompatible locks they hold, so they can't guarantee that processes holding incompatible locks will release them. The examples you provided don't use any of those features, so that can't explain the differences you reported.
Even if a process holding an incompatible lock has a blocking AST specified, there is no guarantee that it will get scheduled in a timely manner so it can convert its lock to a compatible mode, especially if it is running at a low priority on a busy system. This is classic priority inversion: a process executing at priority 0 can be holding a lock needed by a priority 16 process, and it can block the high-priority process. If medium-priority processes are starving the priority 0 process of CPU, then even if the priority 0 process has a blocking AST and is willing to release the lock, it can't release the lock if it never gets scheduled. The PIXSCAN mechanism will eventually grant some CPU to the starved process, but that can take a long time (tens of seconds).
What is the purpose of the locks? Are they to coordinate access to shared memory, as a signaling mechanism for another process, or some other purpose?
What else is using the resource names in the same resource domain? What lock modes (PW,PR,EX, etc.) are being used by the other processes that are using the same resources?
Since your second $enqw is specifying the LCK$M_SYNCSTS flag, are you checking the return status for SS$_SYNCH vs. SS$_NORMAL? When LCK$M_SYNCSTS is specified and SS$_NORMAL is returned, the lock request could not be granted immediately, and your process was forced to wait. A conversion to NL should always return SS$_SYNCH, but if there is a currently granted lock that is incompatible with PW, then the process will have to wait until whatever is holding that lock converts it to a compatible mode or issues a $DEQ, and the process requesting the lock has no control over the other holders. That is one place where an exec acmode lock has an advantage over a user mode lock, as only processes executing in exec or kernel mode can request exec acmode locks.
Is your use of exec acmode locks attempting to synchronize with RMS? Or is the reason for using exec acmode locks so they will survive image rundown?
Several comments about the calls listed above.
1. When a parent lock is specified, the acmode is ignored, and the acmode of the parent lock is used.
2. When the LCK$M_CONVERT flag is specified, the resnam is ignored, as are LCK$M_SYSTEM and rsdm_id, since these can be determined from the lockid, which must be present in the lksb when LCK$M_CONVERT is specified.
It doesn't hurt to specify them, other than possibly causing someone reading the code to make false assumptions about where the info is coming from.
However, since these are all ignored for conversions, I would expect the times to enqueue the lock request to be nearly the same if they were using the same resource. The setting of a local event flag is fast. So that leaves either the resource being mastered on a different node, or contention for the resource name (or the delay associated with another process releasing a blocking lock). If you have a standalone box to test on and you still see a difference, then it is most likely due to the waiting time, not the queuing time.
Note that the acmode is part of the resource "identification". So even within the same resource domain, there can be multiple resources with the same resnam.
The resource is uniquely identified by the following combination: resnam, UIC group (resource domain), access mode, address of parent RSB.
So if the resources being used by the EXEC mode routines are actually EXEC mode resources, then user mode code will not be able to take new locks on them. The point being that user mode resources may see more contention, and give you less control over who can lock them, than exec mode resources.
Also, as Hoff noted, you should probably be using EFN$C_ENF instead of event flag 0. Although it is unlikely to make a noticeable difference in time, it ensures you are not causing unintended side effects; for example, some other part of your program may be using event flag 0 as well.
Also, you should be checking both the status value returned by SYS$ENQW and the status in the lock status block. Even for $ENQW these two can carry different statuses. Perhaps you are checking these; we can't tell.
To find out where the time is being spent, follow Ian's advice and look at the SDA LCK extension, specifically the trace facility. This will give you high-resolution timestamps of when a conversion was requested and when it was granted. Be aware that the trace facility will affect performance, and that it can generate a lot of info that you will then have to sift through. Also, the
SDA> lck show trace
command displays the most recent entries first, which may not be what you would expect.
Jon
it depends