<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: RWCLU in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511034#M67881</link>
    <description>&amp;gt;Still not clear after a lot of reading.&lt;BR /&gt;&amp;gt;&lt;BR /&gt;&amp;gt; 1) Why do stations receive management of&lt;BR /&gt;&amp;gt; a resource while lockdirwt is 0 ?&lt;BR /&gt;&lt;BR /&gt;This may happen if they are the only node in the cluster with interest in a particular resource tree at a given point in time. As soon as there is a node with non-zero LOCKDIRWT which begins sharing the tree, VMS will tend to remaster the tree to the node with the higher value of LOCKDIRWT (unless it is artificially prevented from doing so).&lt;BR /&gt;&lt;BR /&gt;&amp;gt; 2) Is there no way to see what resource&lt;BR /&gt;&amp;gt; is exactly remastered instead of&lt;BR /&gt;&amp;gt; statistics ? I'm afraid it is something&lt;BR /&gt;&amp;gt; ordinary such as the sysuaf.&lt;BR /&gt;&lt;BR /&gt;SDA can show lock mastership for a tree, as others pointed out. An easy way is the LOCK_ACTV*.COM tool from the [KP_LOCKTOOLS] directory of the V6 Freeware. It shows, in descending order by activity level, all active resource trees, indicating the present master node with an asterisk.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; 3) How can I see how much bandwidth is&lt;BR /&gt;&amp;gt; eaten by the remastering ? As I&lt;BR /&gt;&amp;gt; understand it, the packets exchanged can&lt;BR /&gt;&amp;gt; be very big and I can only see the number&lt;BR /&gt;&amp;gt; of packets (=messages).&lt;BR /&gt;&lt;BR /&gt;I don't know of a good way at present. You can use SHOW CLUSTER/CONTINUOUS to get counts of ALL block data transfers, which could at least give you an upper bound value.&lt;BR /&gt;&lt;BR /&gt;And then you could temporarily disable remastering with PE1=(a very large value, or -1) and compare the rates.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; 3) How many remasterings per minute is&lt;BR /&gt;&amp;gt; normal ? 
I see up to 30 remasterings per&lt;BR /&gt;&amp;gt; minute on my GS160 (running Sybase and&lt;BR /&gt;&amp;gt; DSM, DSM in cluster config).&lt;BR /&gt;&lt;BR /&gt;If it's the same set of trees being remastered all the time, that sounds excessive, like it is thrashing between nodes. I've addressed the causes and workarounds for lock mastership thrashing in some user-group presentations, such as&lt;BR /&gt;&lt;A href="http://www.geocities.com/keithparris/decus_presentations/s2001dfw_lock_manager.ppt" target="_blank"&gt;http://www.geocities.com/keithparris/decus_presentations/s2001dfw_lock_manager.ppt&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Your options to avoid thrashing at this point are basically:&lt;BR /&gt;o  Unbalanced node power rather than a set of equal-powered nodes&lt;BR /&gt;o  Unequal workloads (bias the load distribution to put more load on one machine than the others)&lt;BR /&gt;o  Unequal values of LOCKDIRWT&lt;BR /&gt;o  Non-zero values of PE1 (and since PE1 is dynamic, you could use different values at different times, perhaps allowing remastering for short times periodically to avoid trees getting stranded on sub-optimal nodes)&lt;BR /&gt;o  Raise the value in the VMS data cell LCK$GL_SYS_THRSH to require a higher delta in activity between nodes before a tree will be remastered&lt;BR /&gt;&lt;BR /&gt;&amp;gt; 4) You can limit the maximum size of a&lt;BR /&gt;&amp;gt; resource tree being moved by setting&lt;BR /&gt;&amp;gt; PE1 = n&lt;BR /&gt;&amp;gt; PE1=-1 will disable remastering.&lt;BR /&gt;&lt;BR /&gt;Correct. Note that PE1=-1 disables all remastering, and also disables keeping of the statistics that SDA&amp;gt; LCK SHOW ACTIVE and my LOCK_ACTV* tools look at.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; On 2) : HP implemented lots of show&lt;BR /&gt;&amp;gt; commands (LCK...) but not the most&lt;BR /&gt;&amp;gt; important one : monitoring of remastering&lt;BR /&gt;&amp;gt; in detail.&lt;BR /&gt;&lt;BR /&gt;MONITOR RLOCK is also available. 
That gives general statistics on remastering in a bit more readable format than SDA&amp;gt; SHOW RESOURCE/SUMMARY does.&lt;BR /&gt;&lt;BR /&gt;To get detail on which trees are moving, you might have to do something like process the output from a tool like LOCK_ACTV*, looking for changes in lock mastership for specific trees.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; On 3) : HP shows the number of messages.&lt;BR /&gt;&amp;gt; What importance does it have if the size&lt;BR /&gt;&amp;gt; is between 0 and 64 K ? Shouldn't the&lt;BR /&gt;&amp;gt; number of MB be shown ?&lt;BR /&gt;&lt;BR /&gt;This is an artifact of history. Since lock remastering used to only use sequenced messages, that is what was counted and reported. Now block data transfer counts would be more interesting. As I noted above, you may be able to get some idea of the magnitude using SHOW CLUSTER/CONTINUOUS to get SCS-level block data transfer statistics.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; On 4) : this could be suicide. I should&lt;BR /&gt;&amp;gt; have a statistic of all resources with&lt;BR /&gt;&amp;gt; their size and number of remasterings&lt;BR /&gt;&amp;gt; before I could set this PE1. Better would&lt;BR /&gt;&amp;gt; have been a set file command to modify 1&lt;BR /&gt;&amp;gt; resource setting.&lt;BR /&gt;&lt;BR /&gt;Since PE1 is dynamic, it's fairly easy to play with it and observe the behavior.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; 5) Machines are getting quicker all the&lt;BR /&gt;&amp;gt; time. Why are the parameters for&lt;BR /&gt;&amp;gt; remastering hardcoded ? This is very &lt;BR /&gt;&amp;gt; un-VMS.&lt;BR /&gt;&lt;BR /&gt;I agree. I have a problem report in the system on the issue of LCK$GL_ACT_THRSH being hard-coded at 80 (per 8-second remastering scan interval, or 10 lock operations per second) as the threshold of difference in locking activity between nodes which will trigger remastering of a tree. I'd rather see this be a percentage. 
Some folks use a program to modify this cell to a higher value.</description>
    <pubDate>Mon, 11 Apr 2005 10:36:33 GMT</pubDate>
    <dc:creator>Keith Parris</dc:creator>
    <dc:date>2005-04-11T10:36:33Z</dc:date>
    <item>
      <title>RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511012#M67859</link>
      <description>Yesterday we had a development cluster (with 6 satellite stations without a local system disk in it) that had processes in RWCLU. All login attempts went into RWCLU. All active processes too (not sure 'all' is the right word). I still had an open session and could do a SHOW SYSTEM and a SEARCH. The SEARCH worked but eventually blocked too.&lt;BR /&gt;&lt;BR /&gt;This took over 15 minutes before I restarted the cluster (I didn't take a system dump, yes I was stupid).&lt;BR /&gt;&lt;BR /&gt;The restart was done for 1 server node but it got a fatal bugcheck in the lock manager.&lt;BR /&gt;Then I restarted both nodes. The stations didn't reboot.&lt;BR /&gt;&lt;BR /&gt;I checked the operator log file and found network interruptions, but not between the 2 server nodes.&lt;BR /&gt;&lt;BR /&gt;I know RWCLU is normally an indication of lock remastering. Performance Advisor didn't find any anomalies.&lt;BR /&gt;&lt;BR /&gt;Does anyone have an idea of what happened ?&lt;BR /&gt;&lt;BR /&gt;VMS 7.3 not fully patched on AlphaServer 4100.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 24 Mar 2005 02:34:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511012#M67859</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-24T02:34:25Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511013#M67860</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;RWCLU is also used during cluster state transitions.&lt;BR /&gt;&lt;BR /&gt;What do you mean by 'network interruptions' ? Connection lost ?&lt;BR /&gt;&lt;BR /&gt;Would you want to post the CLUE file from the lock manager crash ?&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 24 Mar 2005 02:47:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511013#M67860</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-24T02:47:20Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511014#M67861</link>
      <description>Volker,&lt;BR /&gt;&lt;BR /&gt;The operator log file only mentioned "node station lost connection to station or server".&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 24 Mar 2005 02:58:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511014#M67861</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-24T02:58:48Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511015#M67862</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;so you had network problems...&lt;BR /&gt;&lt;BR /&gt;The following OPCOM message indicates that NODEA did NOT receive a cluster hello (multicast) message from NODEB for about 8-10 seconds: &lt;BR /&gt;&lt;BR /&gt;Node NODEA (csid 00010074) lost connection to node NODEB&lt;BR /&gt;&lt;BR /&gt;Each node in the cluster sends a multicast message to all other cluster members (using an MC address based on the cluster group number) every 3 seconds.&lt;BR /&gt;&lt;BR /&gt;If the problem is intermittent, once the next hello message is received from that node, the following OPCOM message is shown:&lt;BR /&gt;&lt;BR /&gt;Node NODEA (csid 00010074) re-established connection to node NODEB&lt;BR /&gt;&lt;BR /&gt;If no hello message is received from NODEB for more than RECNXINTERVAL seconds, the following message is shown:&lt;BR /&gt;&lt;BR /&gt;Node NODEA (csid 00010074) timed-out lost connection to node NODEB&lt;BR /&gt;&lt;BR /&gt;and NODEB is removed from the cluster by NODEA.&lt;BR /&gt;&lt;BR /&gt;While in this state, any process trying to communicate with the lock manager on a remote node may be put in RWCLU.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 24 Mar 2005 03:26:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511015#M67862</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-24T03:26:35Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511016#M67863</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;the LOCKMGRERR crash of node SALPV2 indicates that a bad/corrupted LOCK message has been received from node ALM12. That node should have crashed as well (with LOCKMGRERR) - as indicated by R0=0000223C   %SYSTEM-F-NODELEAVE.&lt;BR /&gt;&lt;BR /&gt;This type of crash can happen when messages are being corrupted (either in the sending or the receiving node, OR on the 'wire', i.e. the LAN).&lt;BR /&gt;&lt;BR /&gt;Together with your network problems (connection lost), this is another indication of a possible network problem.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 24 Mar 2005 03:40:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511016#M67863</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-24T03:40:53Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511017#M67864</link>
      <description>Volker,&lt;BR /&gt;&lt;BR /&gt;You're on the right track. ALM12 did reboot. But it is the only station that rebooted.&lt;BR /&gt;&lt;BR /&gt;At 15:57:30 there were PEA device errors on SALPV1 (the server). In the operator log there were "lost connection" messages.&lt;BR /&gt;At 15:59:30 my monitoring found processes in RWCLU (a batch job and an RSH to start with).&lt;BR /&gt;At 16:00:30 DSM gave circuit timeout messages.&lt;BR /&gt;At 16:05 I saw the problem.&lt;BR /&gt;At 16:12:30 I rebooted the server.&lt;BR /&gt;&lt;BR /&gt;To survive network outages, RECNXINTERVAL is set to 900 seconds, so 15 minutes. I guess that I rebooted the system just too fast. The problem would have solved itself a little bit later.&lt;BR /&gt;&lt;BR /&gt;What I don't understand :&lt;BR /&gt;1) SEARCH also locks local files. Why did it work up to a certain point ? Cache ?&lt;BR /&gt;2) Why do I never get this with network outages during the night ? There are batch jobs all the time ! Outages too short ?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 24 Mar 2005 03:58:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511017#M67864</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-24T03:58:09Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511018#M67865</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;I'm assuming that you'll get the RWCLU state, if the lock request somehow involves an operation with a remote node, to which the local node has lost connection.&lt;BR /&gt;&lt;BR /&gt;This would highly depend on which resources your processes touch at that time.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 24 Mar 2005 04:34:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511018#M67865</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-24T04:34:54Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511019#M67866</link>
      <description>Only the 2 cluster servers have a non-zero LOCKDIRWT. And the network between them was not interrupted. So, why did I get RWCLU on login ? A lock remastering taking almost 15 minutes ?</description>
      <pubDate>Thu, 24 Mar 2005 08:30:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511019#M67866</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-24T08:30:02Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511020#M67867</link>
      <description>If only two nodes have nonzero LOCKDIRWT, then they will be the lock directory nodes. Other nodes talk to them to find out who is the master node for a particular resource. Perhaps, due to your network problems, the lock directory lookup operation was what the processes in RWCLU were waiting for.</description>
      <pubDate>Thu, 24 Mar 2005 11:59:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511020#M67867</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-03-24T11:59:26Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511021#M67868</link>
      <description>Today the same thing happened.&lt;BR /&gt;But I didn't reboot and it freed itself without intervention.&lt;BR /&gt;A number of cluster stations were powered off.&lt;BR /&gt;The cluster was locked for 15 minutes, as expected.</description>
      <pubDate>Thu, 24 Mar 2005 13:28:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511021#M67868</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-24T13:28:05Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511022#M67869</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;yes, if you power off a satellite without proper shutdown, connections to this node will be lost and it will take RECNXINTERVAL seconds until the node is timed out and removed from the cluster.&lt;BR /&gt;&lt;BR /&gt;Once the node is removed from the cluster, everything (e.g. processes put in RWCLU) will continue.&lt;BR /&gt;&lt;BR /&gt;It's still not 100% clear to me, which kind of LOCK/RESOURCE operation in this scenario causes a process to be put in RWCLU.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 24 Mar 2005 13:44:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511022#M67869</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-24T13:44:27Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511023#M67870</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;one way of getting into the RWCLU state (RSN$_CLUSTRAN) is when a cluster state transition has been started and locks on the local node are being stalled (LCK$GL_STALLREQS .ne. 0). But a cluster state transition is not started until the connection to the remote node times out (which is not yet true in your case).&lt;BR /&gt;&lt;BR /&gt;If the connection is lost (but not yet timed out) and a process is involved in a lock operation which needs to send a lock request to the remote node (to which the connection has been lost), the process is put in RWSCS.&lt;BR /&gt;&lt;BR /&gt;So the only remaining scenario I can think of (for entering RWCLU) is that a remastering operation may happen (between your 2 servers) which also involves a lock on the remote node.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 25 Mar 2005 04:48:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511023#M67870</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-25T04:48:58Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511024#M67871</link>
      <description>LOCKDIRWT is 0 on all stations, 1 on the servers.&lt;BR /&gt;&lt;BR /&gt;I would expect that lock mastership stayed on the 2 servers, but now I notice that ANA/SYS SHOW LOCK/SUMM shows moves of lock mastership TO the stations, and this very frequently (about one move every minute).&lt;BR /&gt;&lt;BR /&gt;???&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 29 Mar 2005 04:42:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511024#M67871</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-29T04:42:20Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511025#M67872</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;MONI RLOCK (or SDA&amp;gt; SHOW LOCK/SUMM) will list the 3 reasons for outbound lock tree movements:&lt;BR /&gt;&lt;BR /&gt;Tree moved due to higher Activity&lt;BR /&gt;Tree moved due to higher LOCKDIRWT&lt;BR /&gt;Tree moved due to Single Node Locks&lt;BR /&gt;&lt;BR /&gt;What does it say on your servers and satellites ?&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 29 Mar 2005 04:50:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511025#M67872</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-29T04:50:31Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511026#M67873</link>
      <description>Servers : 70% due to higher activity, 0% due to higher lockdirwt, 30% due to single node locks.&lt;BR /&gt;&lt;BR /&gt;Stations : 30% due to higher activity, 3% due to higher lockdirwt, 40% due to single node locks. On another station I found 70% due to lockdirwt.&lt;BR /&gt;&lt;BR /&gt;Wim&lt;BR /&gt;</description>
      <pubDate>Tue, 29 Mar 2005 04:55:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511026#M67873</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-29T04:55:40Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511027#M67874</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;lock remastering involves individual RESOURCE TREES (i.e. a root resource and its sub-resources) - depending on lock activity in this tree on the different nodes in a cluster.&lt;BR /&gt;&lt;BR /&gt;Depending on your version of OpenVMS Alpha, there will be a SYS$SHARE:LCK$SDA SDA extension with various interesting commands (SDA&amp;gt; LCK provides some basic help).&lt;BR /&gt;&lt;BR /&gt;SDA&amp;gt; LCK SHOW ACTIVE will show resource trees with lock activity&lt;BR /&gt;&lt;BR /&gt;SDA&amp;gt; LCK STAT/NOALL/TREE/TOPTREE=n will display the n most active lock trees&lt;BR /&gt;&lt;BR /&gt;Now that you see the amount of lock remastering in your cluster, one of the possible scenarios for RWCLU (remastering a tree which includes a lock on a machine that has lost its connection to the cluster) becomes more plausible.&lt;BR /&gt;&lt;BR /&gt;If you really want to find out the reason for a process going into RWCLU, force a crash with a process in that state and then we'll find out...&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 29 Mar 2005 06:11:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511027#M67874</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-29T06:11:34Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511028#M67875</link>
      <description>Still not clear after a lot of reading.&lt;BR /&gt;&lt;BR /&gt;1) Why do stations receive management of a resource while lockdirwt is 0 ?&lt;BR /&gt;&lt;BR /&gt;2) Is there no way to see what resource is exactly remastered instead of statistics ? I'm afraid it is something ordinary such as the sysuaf.&lt;BR /&gt;&lt;BR /&gt;3) How can I see how much bandwidth is eaten by the remastering ? As I understand it, the packets exchanged can be very big and I can only see the number of packets (=messages).&lt;BR /&gt;&lt;BR /&gt;3) How many remasterings per minute is normal ? I see up to 30 remasterings per minute on my GS160 (running Sybase and DSM, DSM in cluster config).&lt;BR /&gt;&lt;BR /&gt;And Volker, I cannot ask for a crash. I have to wait ...&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 29 Mar 2005 09:57:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511028#M67875</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-29T09:57:54Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511029#M67876</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;1) Sole interest - if locks on this resource tree only exist on this station.&lt;BR /&gt;&lt;BR /&gt;2) Doesn't SDA&amp;gt; LCK SHOW ACTIVE at least show the most active resource trees, to give you an idea about the resources involved ?&lt;BR /&gt;&lt;BR /&gt;3) Lock remastering is a trade-off between on-going remote lock messages versus moving a resource tree once to the most active node and thereby trying to increase local locking.&lt;BR /&gt;Heavy lock remastering will increase Interrupt Stack Time.&lt;BR /&gt;&lt;BR /&gt;4) You can limit the maximum size of a resource tree being moved by setting PE1 = n.&lt;BR /&gt;PE1=-1 will disable remastering.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 29 Mar 2005 12:56:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511029#M67876</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-29T12:56:12Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511030#M67877</link>
      <description>Volker,&lt;BR /&gt;&lt;BR /&gt;I don't agree.&lt;BR /&gt;&lt;BR /&gt;On 1) : I found 1200 trees moved to a station in 6 days. How can a lock move a tree TO the station if it is the only one locking the resource ?&lt;BR /&gt;&lt;BR /&gt;On 2) : HP implemented lots of show commands (LCK...) but not the most important one : monitoring of remastering in detail.&lt;BR /&gt;SDA&amp;gt;show remastering&lt;BR /&gt;08:01:01.12 resource xxx moved from node aaa (bbb requests in 8sec) to node ccc (ddd requests in 8sec). Moved yyy K in zzz sec.&lt;BR /&gt;&lt;BR /&gt;On 3) : HP shows the number of messages. What importance does it have if the size is between 0 and 64 K ? Shouldn't the number of MB be shown ?&lt;BR /&gt;&lt;BR /&gt;On 4) : this could be suicide. I should have a statistic of all resources with their size and number of remasterings before I could set this PE1. Better would have been a SET FILE command to modify 1 resource setting.&lt;BR /&gt;&lt;BR /&gt;5) Machines are getting quicker all the time. Why are the parameters for remastering hardcoded ? This is very un-VMS.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 30 Mar 2005 01:21:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511030#M67877</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-30T01:21:14Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511031#M67878</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;re: 1) if a resource tree is first used on multiple nodes and then only generates activity on one station, it should move there, shouldn't it ?&lt;BR /&gt;&lt;BR /&gt;re: others - I'll try to ask these questions during the OpenVMS Bootcamp in June.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 01 Apr 2005 09:10:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511031#M67878</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-04-01T09:10:58Z</dc:date>
    </item>
  </channel>
</rss>

