<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: ANA/SYS show lock/sum in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518688#M68057</link>
    <description>"before releasing it". Maybe the tree simply moves because node A wants a lock and B releases it. Then a "sole interest" remaster is done.&lt;BR /&gt;&lt;BR /&gt;Will do more monitoring ...&lt;BR /&gt;&lt;BR /&gt;Wim</description>
    <pubDate>Mon, 25 Apr 2005 10:43:11 GMT</pubDate>
    <dc:creator>Wim Van den Wyngaert</dc:creator>
    <dc:date>2005-04-25T10:43:11Z</dc:date>
    <item>
      <title>ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518674#M68043</link>
      <description>SHOW LOCK/SUM shows "no quota for operation" and "proposed new manager declined" among the remastering counters.&lt;BR /&gt;On my cluster they have substantial values. What do they indicate?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 06 Apr 2005 01:58:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518674#M68043</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-06T01:58:06Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518675#M68044</link>
      <description>Wim,&lt;BR /&gt;  could you post more detail? Perhaps a cut and paste of the actual output?</description>
      <pubDate>Wed, 06 Apr 2005 16:06:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518675#M68044</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2005-04-06T16:06:08Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518676#M68045</link>
      <description>Of course, John. Here is the complete output.&lt;BR /&gt;It's on a GS160 with two CPUs.&lt;BR /&gt;&lt;BR /&gt;SDA&amp;gt; show lock/sum&lt;BR /&gt;&lt;BR /&gt;Lock Manager Summary Information:&lt;BR /&gt;---------------------------------&lt;BR /&gt;Lock Manager Flags:&lt;BR /&gt;    Mode (LCKMGR_MODE) is automatic&lt;BR /&gt;    Dedicated Lock Manager is disabled&lt;BR /&gt;&lt;BR /&gt;Lock Manager Poolzone:&lt;BR /&gt;    Poolzone Region Address:    FFFFFFFF.82121D80&lt;BR /&gt;    Packet Size:                         00000100   (256.)&lt;BR /&gt;    Number of Pages:                     0000031A   (794.)&lt;BR /&gt;    Maximum Number of Pages:             00030C18   (199704.)&lt;BR /&gt;    Free Page Count:                     00000E2D   (3629.)&lt;BR /&gt;    Hits:                                1F179CF1   (521641201.)&lt;BR /&gt;    Misses:                              00000A8F   (2703.)&lt;BR /&gt;    Poolzone Expansions:                 00000D91   (3473.)&lt;BR /&gt;    Allocation Failures:                 00000000   (0.)&lt;BR /&gt;    Allocation not from 1st Page:        00000000   (0.)&lt;BR /&gt;    Empty Pages:                         00000000   (0.)&lt;BR /&gt;&lt;BR /&gt;Lock Manager per-CPU Performance Counters:&lt;BR /&gt;------------------------------------------&lt;BR /&gt;Counters  \  CPU Id                  0          1           Total&lt;BR /&gt;-------------------------  ----------------------    ------------&lt;BR /&gt;LCKRQ Cache                          0          0               0&lt;BR /&gt;LKB delete pending Cache             0          0               0&lt;BR /&gt;RSB delete pending Cache             0          0               0&lt;BR /&gt;LKB Cache                           88        133             221&lt;BR /&gt;RSB Cache                          168         31             199&lt;BR /&gt;LKB Allocations (cache)     1915358602 1882315377      3797673979&lt;BR /&gt;RSB Allocations (cache)     1913942197 1879929018      3793871215&lt;BR /&gt;New Lock Requests (local)   1395072696 1851452214      3246524910&lt;BR /&gt;New Lock Requests (in)       347138952     177934       347316886&lt;BR /&gt;New Lock Requests (out)      204537089     353679       204890768&lt;BR /&gt;Conversion Requests (loc)   1409289697 1074326271      2483615968&lt;BR /&gt;Conversion Requests (in)     171158606     120790       171279396&lt;BR /&gt;Conversion Requests (out)    122445710   49424582       171870292&lt;BR /&gt;Dequeue Requests (local)    1487526856 1751015839      3238542695&lt;BR /&gt;Dequeue Requests (in)        335314266     120338       335434604&lt;BR /&gt;Dequeue Requests (out)        64356609  144203032       208559641&lt;BR /&gt;$ENQ Requests that Wait       99824189   31134302       130958491&lt;BR /&gt;&lt;BR /&gt;Lock Manager per-CPU Performance Counters:&lt;BR /&gt;------------------------------------------&lt;BR /&gt;$ENQ Requests not Queued      42218212   14093770        56311982&lt;BR /&gt;Blocking ASTs (local)         19126467    4552927        23679394&lt;BR /&gt;Blocking ASTs (in)            42419234      28399        42447633&lt;BR /&gt;Blocking ASTs (out)           12284461   11014835        23299296&lt;BR /&gt;Directory Functions (in)     565661475     290018       565951493&lt;BR /&gt;Directory Functions (out)    904697639  339876935      1244574574&lt;BR /&gt;&lt;BR /&gt;Lock Manager Performance Counters:&lt;BR /&gt;----------------------------------&lt;BR /&gt;Deadlock Counters:&lt;BR /&gt;Deadlock Searches                                271&lt;BR /&gt;Deadlock Found                                     0&lt;BR /&gt;Deadlock Messages (in)                            34&lt;BR /&gt;Deadlock Messages (out)                           14&lt;BR /&gt;&lt;BR /&gt;Lock Remaster Counters:&lt;BR /&gt;Tree moved to this node                      1807521&lt;BR /&gt;Tree moved to another node                   1953666&lt;BR /&gt;Tree moved due to higher Activity            1953666&lt;BR /&gt;Tree moved due to higher LOCKDIRWT                 0&lt;BR /&gt;Tree moved due to Single Node Locks          2191087&lt;BR /&gt;No Quota for Operation                        342123&lt;BR /&gt;Proposed New Manager Declined                  56504&lt;BR /&gt;Operations completed                         3704671&lt;BR /&gt;Remaster Messages Sent                       9452665&lt;BR /&gt;Remaster Messages Received                   9185923&lt;BR /&gt;Remaster Rebuild Messages Sent                185382&lt;BR /&gt;Remaster Rebuild Messages Received            381606&lt;BR /&gt;&lt;BR /&gt;Lock Manager Performance Counters:&lt;BR /&gt;----------------------------------&lt;BR /&gt;&lt;BR /&gt;2-Phase Commit Counters:&lt;BR /&gt;Requests Sent                                 362830&lt;BR /&gt;Requests Received                            1070093&lt;BR /&gt;Ready Messages Sent                          1062465&lt;BR /&gt;Ready Messages Received                       357783&lt;BR /&gt;ACK Messages Sent                             357783&lt;BR /&gt;ACK Messages Received                        1062465&lt;BR /&gt;Cancel Messages Sent                               0&lt;BR /&gt;Cancel Messages Received                           0&lt;BR /&gt;SDA&amp;gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 07 Apr 2005 01:10:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518676#M68045</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-07T01:10:02Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518677#M68046</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;PMS$GL_RM_QUOTA_WAIT indicates the number of times that no remastering quota was available on the local node (CLUB$L_RM_QUOTA in the CLUB). The default quota is 5. This seems to limit the number of remastering operations concurrently in progress on the local node.&lt;BR /&gt;&lt;BR /&gt;PMS$GL_RM_REQ_NAK counts the number of times the remote node (the proposed new master) declined to accept a remastering request for a resource tree (e.g. resource not found, shutdown in progress).&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Sun, 10 Apr 2005 10:42:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518677#M68046</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-04-10T10:42:30Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518678#M68047</link>
      <description>That seems like an awful lot of lock remasters. Let's focus on that first and then see if there are still enough 'no quota' and 'declined' messages left over to worry about. Is this lock remastering rate as you expect / by design, or is it 'just happening'? Over how much time were those stats gathered? Is this an application balanced over a cluster? Any way to skew certain file / DB accesses to a certain member? It must be a somewhat recent VMS version to run on the GS160, but maybe a later one improved the lock remastering algorithm some? Is it time to play with LOCKDIRWT?&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Sun, 10 Apr 2005 14:24:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518678#M68047</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2005-04-10T14:24:06Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518679#M68048</link>
      <description>The node was up for about 200 days when this data was captured.&lt;BR /&gt;I tried to find out which tree was moving (with the script I posted, even without the wait), but the only one I found was rightslist.dat. All the other remasterings are not visible. That's why I hope HP will create a show remastering/int= command.&lt;BR /&gt;&lt;BR /&gt;I think the remastering must have something to do with DSM/MUMPS, which is running in cluster mode. Another cluster with only Sybase servers has far fewer remasterings.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 11 Apr 2005 01:05:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518679#M68048</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-11T01:05:48Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518680#M68049</link>
      <description>&lt;BR /&gt;&amp;gt;&amp;gt; I tried to find out which tree was moving (with the script I posted, even without the wait) but the only one I found was rightslist.dat.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Ah, I find that very interesting and can't help wondering how many resources were wasted with that. I call it wasted because this file will be 99.99% read-only for most customers. (Of course the system Wim was focusing on would be the exception to the rule :-). I wonder whether the (relatively) new concurrent-read lock usage by RMS when using global buffers would influence this.&lt;BR /&gt;Is this system using global buffers for rightslist? I find that advisable for many sites irrespective of the CR locking.&lt;BR /&gt;&lt;BR /&gt;Or how about... (am I really saying this?...) creating a per-node copy of rightslist and promising to put a fresh one in place (convert/share) after any/every update?&lt;BR /&gt;Yeah I know... ugly, but let's just say we'd do this for educational / analysis purposes and not for sustained usage in production?&lt;BR /&gt;&lt;BR /&gt;Greetings,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Mon, 11 Apr 2005 06:59:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518680#M68049</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2005-04-11T06:59:53Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518681#M68050</link>
      <description>Hein,&lt;BR /&gt;&lt;BR /&gt;I don't think the rightslist is the bottleneck. It moves at most every 8 seconds (on average not even every minute), while I'm seeing the remaster counters increase by 50 per minute. I also see lots of packets being exchanged (about 300 per minute), while the rightslist lock tree size should be rather small.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 11 Apr 2005 07:14:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518681#M68050</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-11T07:14:49Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518682#M68051</link>
      <description>What is the value of the SYSGEN parameter PE1?&lt;BR /&gt;&lt;BR /&gt;We have set it to -1 to turn off lock remastering with DSM. This is a dynamic parameter.&lt;BR /&gt;&lt;BR /&gt;You may want to run the routine LKMSTA in your DSM environment and look at lockman writes. This shows contention between nodes in the cluster trying to write to the same DSM data block. Generally, when a routine is updating data in DSM, it locks that node. So if this happens often across the nodes in the cluster, you may see the locks being remastered back and forth.&lt;BR /&gt;&lt;BR /&gt;You can either mount that DSM volume local to one node and use DDP to update that volume from the other nodes, or rewrite the application to reduce the contention.</description>
      <pubDate>Mon, 25 Apr 2005 01:48:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518682#M68051</guid>
      <dc:creator>Cass Witkowski</dc:creator>
      <dc:date>2005-04-25T01:48:34Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518683#M68052</link>
      <description>Cass,&lt;BR /&gt;&lt;BR /&gt;I monitored during a busy interval of 45 minutes. The monitor interval in DSM was 60 seconds.&lt;BR /&gt;&lt;BR /&gt;The average lockman writes was 0.74, with a maximum of 2.2.&lt;BR /&gt;&lt;BR /&gt;The total DEQ as well as ENQ rate was about 27 each. The maximum was about 270 each!&lt;BR /&gt;&lt;BR /&gt;About 2000 tree moves were reported (exactly 0.74 per second, just like lockman writes), which required about 10000 packets to be exchanged (so 5 packets per move).&lt;BR /&gt;&lt;BR /&gt;Conclusion: each lockman write results in a tree move???&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 25 Apr 2005 09:13:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518683#M68052</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-25T09:13:06Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518684#M68053</link>
      <description>Lockman Writes&lt;BR /&gt;&lt;BR /&gt;The number of times that the Write Demon, in response to contention for a block held at exclusive write access and subsequently modified in cache, needed to physically write a block to disk before releasing access to it.&lt;BR /&gt;&lt;BR /&gt;Could it be that DSM uses the lock tree move mechanism for doing this (i.e. each lockman write requires a lock tree move)?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 25 Apr 2005 09:28:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518684#M68053</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-25T09:28:54Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518685#M68054</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;No, I don't think that the application is aware of lock remastering; it's happening 'behind the scenes'.&lt;BR /&gt;&lt;BR /&gt;But each lockman action may just produce enough lock activity in that resource tree to cause it to be moved?!&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Mon, 25 Apr 2005 10:32:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518685#M68054</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-04-25T10:32:02Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518686#M68055</link>
      <description>Volker,&lt;BR /&gt;&lt;BR /&gt;Could be, but it would be a miracle if the activity were on another node each time.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 25 Apr 2005 10:34:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518686#M68055</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-25T10:34:14Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518687#M68056</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;I don't know the DSM internals, but wouldn't the description given by Cass lead to exactly this kind of behaviour?&lt;BR /&gt;&lt;BR /&gt;If one node has the data block cached, the other node has to do something (with locking) to cause it to be written to disk, so that it can then read it itself. Then this node would have the block in cache, and the first node would have to do something to obtain the data block...&lt;BR /&gt;&lt;BR /&gt;Something similar can happen with RMS global buffers if you are writing a lot to the same file from multiple nodes.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Mon, 25 Apr 2005 10:42:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518687#M68056</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-04-25T10:42:42Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518688#M68057</link>
      <description>"before releasing it". Maybe the tree simply moves because node A wants a lock and B releases it. Then a "sole interest" remaster is done.&lt;BR /&gt;&lt;BR /&gt;Will do more monitoring ...&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 25 Apr 2005 10:43:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518688#M68057</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-25T10:43:11Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518689#M68058</link>
      <description>DSM does not know about lock remastering. It does use OpenVMS locking for its locking.&lt;BR /&gt;&lt;BR /&gt;I would look at either setting the SYSGEN parameter PE1 to -1 to turn off lock remastering, or locally mounting that DSM volume set.&lt;BR /&gt;&lt;BR /&gt;You can use the ANASYS routines in DSM to show the cache contention.</description>
      <pubDate>Mon, 25 Apr 2005 11:59:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518689#M68058</guid>
      <dc:creator>Cass Witkowski</dc:creator>
      <dc:date>2005-04-25T11:59:38Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518690#M68059</link>
      <description>Cass,&lt;BR /&gt;&lt;BR /&gt;Thanks for putting me on the right track.&lt;BR /&gt;The lock isn't visible via my scripts because the lock tree only exists for a fraction of a second.&lt;BR /&gt;&lt;BR /&gt;But if I set PE1, it could harm other stuff on the cluster, so I will accept the tree move in case of contention. It is 0.74 on average, but this average is caused by peaks.&lt;BR /&gt;Local mounting is not an option because of high availability.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 26 Apr 2005 01:10:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518690#M68059</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-26T01:10:34Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518691#M68060</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;We have PE1 set to -1 at over 100 sites. It is also a dynamic parameter, so you can change it and change it back.&lt;BR /&gt;&lt;BR /&gt;Cass</description>
      <pubDate>Tue, 26 Apr 2005 11:51:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518691#M68060</guid>
      <dc:creator>Cass Witkowski</dc:creator>
      <dc:date>2005-04-26T11:51:12Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518692#M68061</link>
      <description>Cass,&lt;BR /&gt;&lt;BR /&gt;There could be big lock trees that NEED to move because the activity is on the other node. If the lock tree is not moved, a lot of overhead could be created. I have seen an average ENQ rate of 8000. If the tree isn't moved, this could lead to a serious slowdown.&lt;BR /&gt;&lt;BR /&gt;I just need more commands to analyze what is happening. Right now a log of every remaster would be great.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 26 Apr 2005 14:16:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518692#M68061</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-04-26T14:16:39Z</dc:date>
    </item>
    <item>
      <title>Re: ANA/SYS show lock/sum</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518693#M68062</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;I have seen an average enq rate of 8000.&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;One of our applications has a number of application manager functions that run for 10 - 25 minutes, during which the ENQ/DEQ rate is between 100K - 250K.&lt;BR /&gt;&lt;BR /&gt;Its database is a collection of multi-keyed (4-11 keys) RMS files (totalling ca. 3 GB).&lt;BR /&gt;Still, we have no complaints about performance from other users, not even users of that same app.&lt;BR /&gt;&lt;BR /&gt;fwiw,&lt;BR /&gt;&lt;BR /&gt;Cheers.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 27 Apr 2005 04:40:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/ana-sys-show-lock-sum/m-p/3518693#M68062</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2005-04-27T04:40:48Z</dc:date>
    </item>
  </channel>
</rss>

