<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: PE1 values and identifying which lock tree remasters. in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888583#M80071</link>
    <description>Thomas,&lt;BR /&gt;  When (if?) you can get your systems to V8.2, please make sure you look at the new SYSGEN parameter LOCKRMWT. It can be used for much finer control of lock remastering between nodes than was available with PE1. &lt;BR /&gt;&lt;BR /&gt;From the New Features manual:&lt;BR /&gt;&lt;BR /&gt;Lock Re-mastering Improvements &lt;BR /&gt;* Provides more control over lock re-master decision making with the new LOCKRMWT system parameter &lt;BR /&gt;* Remote activity thresholds necessary to move a tree are now computed based on local activity rates &lt;BR /&gt;* Provides greater control of application performance within an OpenVMS cluster &lt;BR /&gt;* Reduces the possibility of lock trees thrashing between nodes in an OpenVMS Cluster</description>
    <pubDate>Wed, 15 Nov 2006 16:09:57 GMT</pubDate>
    <dc:creator>John Gillings</dc:creator>
    <dc:date>2006-11-15T16:09:57Z</dc:date>
    <item>
      <title>PE1 values and identifying which lock tree remasters.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888575#M80063</link>
      <description>We have a lock tree of about 1.3 million locks. In our four node cluster this tree can sometimes move around in rapid succession from one node to another. In the worst case we incur RDB stalls. We are certain which application is causing the problem. &lt;BR /&gt;We have pegged PE1 to about 1,000,000 locks, but then remote locking overhead causes other problems. We are starting to incur CPU saturation on the master node and the cluster begins to degrade. &lt;BR /&gt;&lt;BR /&gt;We are directing the application people to correct their code. The problem is a combination of RDB global buffers and an application which keeps widening its search criteria when records are not found. Looking for a few records among millions is not a good combination when RDB Global Buffering is enabled.&lt;BR /&gt;&lt;BR /&gt;The question is: how can one use SDA to precisely identify the lock tree which moves around and ultimately link this to a database? &lt;BR /&gt;&lt;BR /&gt;VMS 7.3-2 and RDB V7.1-441, four node DT cluster.</description>
      <pubDate>Sun, 29 Oct 2006 17:46:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888575#M80063</guid>
      <dc:creator>Thomas Ritter</dc:creator>
      <dc:date>2006-10-29T17:46:31Z</dc:date>
    </item>
    <item>
      <title>Re: PE1 values and identifying which lock tree remasters.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888576#M80064</link>
      <description>Thomas,&lt;BR /&gt;&lt;BR /&gt;a few years ago we had the same problem, even worse: closing the database crashed the system...&lt;BR /&gt;What we did was use a very large buffer size and a large page size. This drastically reduced the number of locks in the tree.&lt;BR /&gt;&lt;BR /&gt;JF</description>
      <pubDate>Mon, 30 Oct 2006 02:09:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888576#M80064</guid>
      <dc:creator>Jean-François Piéronne</dc:creator>
      <dc:date>2006-10-30T02:09:58Z</dc:date>
    </item>
    <item>
      <title>Re: PE1 values and identifying which lock tree remasters.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888577#M80065</link>
      <description>I forgot,&lt;BR /&gt;&lt;BR /&gt;we also opened the database on only one node and started to use row cache. The other nodes of the cluster do remote access.&lt;BR /&gt;&lt;BR /&gt;JF</description>
      <pubDate>Mon, 30 Oct 2006 02:12:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888577#M80065</guid>
      <dc:creator>Jean-François Piéronne</dc:creator>
      <dc:date>2006-10-30T02:12:27Z</dc:date>
    </item>
    <item>
      <title>Re: PE1 values and identifying which lock tree remasters.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888578#M80066</link>
      <description>Thomas,&lt;BR /&gt;&lt;BR /&gt;the LCK$SDA extension allows you to directly view the most active resource trees, including the name of the node which is currently mastering the tree and the resource name information:&lt;BR /&gt;&lt;BR /&gt;SDA&amp;gt; LCK SHOW ACTIVE&lt;BR /&gt;&lt;BR /&gt;If you repeat this command from time to time while the resource tree is moving around, you should be able to see the (master) node name changing.&lt;BR /&gt;&lt;BR /&gt;On V7.3-2 there should also be SYS$EXAMPLES:RDB$SDA.C and .COM - these allow you to build an RDB$SDA extension, which can display active DBs:&lt;BR /&gt;&lt;BR /&gt;SDA&amp;gt; RDB SHOW ACTIVE_DB&lt;BR /&gt;&lt;BR /&gt;If you really want to find out when and how often these lock trees get remastered, you might be able to obtain this information using CNX tracing:&lt;BR /&gt;&lt;BR /&gt;SDA&amp;gt; CNX LOAD&lt;BR /&gt;SDA&amp;gt; CNX START TRACE/FUNCTION=REMASTER&lt;BR /&gt;...&lt;BR /&gt;SDA&amp;gt; CNX STOP TRACE&lt;BR /&gt;SDA&amp;gt; CNX SHO TRACE&lt;BR /&gt;SDA&amp;gt; CNX UNLOAD&lt;BR /&gt;&lt;BR /&gt;You might need to experiment with different /FUNC and /FAC parameters to obtain the desired information. The resource/lock names will not be shown directly, but will be in the trace buffers.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Mon, 30 Oct 2006 02:17:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888578#M80066</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-10-30T02:17:17Z</dc:date>
    </item>
    <item>
      <title>Re: PE1 values and identifying which lock tree remasters.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888579#M80067</link>
      <description>Jean, I like your idea of row caching and remote access. Do you have any numbers describing the task? This is the first time I have heard that row caching with remote access might be doable. Row caching has been ignored as a solution because cluster-wide access is required.</description>
      <pubDate>Mon, 30 Oct 2006 08:03:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888579#M80067</guid>
      <dc:creator>Thomas Ritter</dc:creator>
      <dc:date>2006-10-30T08:03:02Z</dc:date>
    </item>
    <item>
      <title>Re: PE1 values and identifying which lock tree remasters.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888580#M80068</link>
      <description>Thomas,&lt;BR /&gt;&lt;BR /&gt;I have played with SDA and CNX tracing a little bit and here is how you can actually trace resource trees moving (tested on V8.2):&lt;BR /&gt;&lt;BR /&gt;$ ANAL/SYS&lt;BR /&gt;SDA&amp;gt; CNX LOAD&lt;BR /&gt;SDA&amp;gt; CNX START TRACE/FAC=LCK/FUNC=(RM_Req,RM_Complete)&lt;BR /&gt;...&lt;BR /&gt;SDA&amp;gt; CNX STOP TRACE&lt;BR /&gt;SDA&amp;gt; CNX SHOW TRACE/FULL&lt;BR /&gt;&lt;BR /&gt;When done, use SDA&amp;gt; CNX UNLOAD to unload the trace code.&lt;BR /&gt;&lt;BR /&gt;A node giving up a resource tree will send (Tx) a RM_Req message to the remote node. Once the rebuild has happened, the remote node will reply (Rx) with a RM_Complete message.&lt;BR /&gt;&lt;BR /&gt;SDA&amp;gt; CNX SHOW TRACE/FULL will print the root resource name.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Mon, 30 Oct 2006 08:30:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888580#M80068</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-10-30T08:30:38Z</dc:date>
    </item>
    <item>
      <title>Re: PE1 values and identifying which lock tree remasters.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888581#M80069</link>
      <description>Thomas,&lt;BR /&gt;&lt;BR /&gt;You may find some articles about row cache at:&lt;BR /&gt;&lt;A href="http://www.oracle.com/technology/products/rdb/index.html" target="_blank"&gt;http://www.oracle.com/technology/products/rdb/index.html&lt;/A&gt;&lt;BR /&gt;or from MetaLink.&lt;BR /&gt;&lt;BR /&gt;The main drawback is that the database can be open on only one node. This is why the other nodes use remote access (DECnet or TCP/IP).&lt;BR /&gt;&lt;BR /&gt;But access to Rdb data through row cache can be much, much faster than through the standard cache; you can expect a 90% cut in lock activity if you can cache the most active data this way.&lt;BR /&gt;&lt;BR /&gt;We have some programs which are 3 times faster when their data areas are cached in row cache instead of in global buffers.&lt;BR /&gt;You will probably have to do some experiments &lt;BR /&gt;and carefully check your RMU statistics and access profiles (no exclusive transactions or sequential access, for example).&lt;BR /&gt;&lt;BR /&gt;JF</description>
      <pubDate>Mon, 30 Oct 2006 09:28:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888581#M80069</guid>
      <dc:creator>Jean-François Piéronne</dc:creator>
      <dc:date>2006-10-30T09:28:24Z</dc:date>
    </item>
    <item>
      <title>Re: PE1 values and identifying which lock tree remasters.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888582#M80070</link>
      <description>This is a bit of a side note, but are you using a dedicated lock management CPU?&lt;BR /&gt;In situations such as yours it can often help.&lt;BR /&gt;&lt;BR /&gt;If you are getting a lot of lock tree remastering, especially with RDB, that is exactly the reason PE1 was written. Generally, you need to specifically decide where you want the mastering to take place, and set your LOCKDIRWT accordingly. You seem to know the application.&lt;BR /&gt;&lt;BR /&gt;You can't have that many huge databases.&lt;BR /&gt;&lt;BR /&gt;There's no substitute for application design.&lt;BR /&gt;&lt;BR /&gt;Incidentally, also a tangent: regular file maintenance will dramatically improve locking behavior.&lt;BR /&gt;</description>
      <pubDate>Tue, 31 Oct 2006 06:52:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888582#M80070</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2006-10-31T06:52:08Z</dc:date>
    </item>
    <item>
      <title>Re: PE1 values and identifying which lock tree remasters.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888583#M80071</link>
      <description>Thomas,&lt;BR /&gt;  When (if?) you can get your systems to V8.2, please make sure you look at the new SYSGEN parameter LOCKRMWT. It can be used for much finer control of lock remastering between nodes than was available with PE1. &lt;BR /&gt;&lt;BR /&gt;From the New Features manual:&lt;BR /&gt;&lt;BR /&gt;Lock Re-mastering Improvements &lt;BR /&gt;* Provides more control over lock re-master decision making with the new LOCKRMWT system parameter &lt;BR /&gt;* Remote activity thresholds necessary to move a tree are now computed based on local activity rates &lt;BR /&gt;* Provides greater control of application performance within an OpenVMS cluster &lt;BR /&gt;* Reduces the possibility of lock trees thrashing between nodes in an OpenVMS Cluster</description>
      <pubDate>Wed, 15 Nov 2006 16:09:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888583#M80071</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2006-11-15T16:09:57Z</dc:date>
    </item>
    <item>
      <title>Re: PE1 values and identifying which lock tree remasters.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888584#M80072</link>
      <description>John,&lt;BR /&gt;&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;When (if?) you can get your systems to V8.2, please make sure you look at the new SYSGEN parameter LOCKRMWT.&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;&lt;BR /&gt;Slight correction: this parameter is only available with OpenVMS V8.3&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 16 Nov 2006 01:49:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/pe1-values-and-identifying-which-lock-tree-remasters/m-p/3888584#M80072</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-11-16T01:49:40Z</dc:date>
    </item>
  </channel>
</rss>

