<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: RWCLU in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511016#M67863</link>
    <description>Wim,&lt;BR /&gt;&lt;BR /&gt;the LOCKMGRERR crash of node SALPV2 indicates that a bad/corrupted LOCK message was received from node ALM12. That node should have crashed as well (with LOCKMGRERR), as indicated by R0=0000223C   %SYSTEM-F-NODELEAVE.&lt;BR /&gt;&lt;BR /&gt;This type of crash can happen when messages are corrupted (either on the sending or the receiving node, or on the 'wire', i.e. the LAN).&lt;BR /&gt;&lt;BR /&gt;Together with your network problems (connection lost), this is another indication of a possible network problem.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
    <pubDate>Thu, 24 Mar 2005 03:40:53 GMT</pubDate>
    <dc:creator>Volker Halle</dc:creator>
    <dc:date>2005-03-24T03:40:53Z</dc:date>
    <item>
      <title>RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511012#M67859</link>
      <description>Yesterday we had a development cluster (with 6 diskless satellite stations in it) that had processes in RWCLU. All login attempts went into RWCLU, and so did all active processes (not sure about 'all'). I still had an open session and could do a SHOW SYSTEM and a SEARCH. The SEARCH worked but eventually blocked too.&lt;BR /&gt;&lt;BR /&gt;This went on for over 15 minutes before I restarted the cluster (I didn't take a system dump - yes, that was stupid).&lt;BR /&gt;&lt;BR /&gt;I restarted one server node first, but it got a fatal bugcheck in the lock manager.&lt;BR /&gt;Then I restarted both nodes. The stations didn't reboot.&lt;BR /&gt;&lt;BR /&gt;I checked the operator log file and found network interruptions, but not between the 2 server nodes.&lt;BR /&gt;&lt;BR /&gt;I know RWCLU is normally an indication of lock remastering. Performance Advisor didn't find any anomalies.&lt;BR /&gt;&lt;BR /&gt;Does anyone have an idea of what happened?&lt;BR /&gt;&lt;BR /&gt;VMS 7.3, not fully patched, on an AlphaServer 4100.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 24 Mar 2005 02:34:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511012#M67859</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-24T02:34:25Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511013#M67860</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;RWCLU is also used during cluster state transitions.&lt;BR /&gt;&lt;BR /&gt;What do you mean by 'network interruptions'? Connection lost?&lt;BR /&gt;&lt;BR /&gt;Could you post the CLUE file from the lock manager crash?&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 24 Mar 2005 02:47:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511013#M67860</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-24T02:47:20Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511014#M67861</link>
      <description>Volker,&lt;BR /&gt;&lt;BR /&gt;The operator log file only mentioned "node station lost connection to station or server".&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 24 Mar 2005 02:58:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511014#M67861</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-24T02:58:48Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511015#M67862</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;so you had network problems...&lt;BR /&gt;&lt;BR /&gt;The following OPCOM message indicates that NODEA has NOT received a cluster hello (multicast) message from NODEB for about 8-10 seconds:&lt;BR /&gt;&lt;BR /&gt;Node NODEA (csid 00010074) lost connection to node NODEB&lt;BR /&gt;&lt;BR /&gt;Each node in the cluster sends a multicast hello message to all other cluster members (using an MC address based on the cluster group number) every 3 seconds.&lt;BR /&gt;&lt;BR /&gt;If the problem is intermittent, then once the next hello message is received from that node, the following OPCOM message is shown:&lt;BR /&gt;&lt;BR /&gt;Node NODEA (csid 00010074) re-established connection to node NODEB&lt;BR /&gt;&lt;BR /&gt;If no hello message is received from NODEB for more than RECNXINTERVAL seconds, the following message is shown:&lt;BR /&gt;&lt;BR /&gt;Node NODEA (csid 00010074) timed-out lost connection to node NODEB&lt;BR /&gt;&lt;BR /&gt;and NODEB is removed from the cluster by NODEA.&lt;BR /&gt;&lt;BR /&gt;While in this state, any process trying to communicate with the lock manager on a remote node may be put in RWCLU.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 24 Mar 2005 03:26:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511015#M67862</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-24T03:26:35Z</dc:date>
    </item>
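    <!-- Editor's note: a minimal sketch (assumed Python; not part of the original thread) of the connection state machine Volker describes above: a peer is reported "lost" after roughly 8 to 10 seconds without a hello, "re-established" if a hello arrives before RECNXINTERVAL expires, and "timed out" (removed from the cluster) otherwise.

```python
# Hedged sketch of the cluster "hello" timeout logic described in the
# post above. Each node multicasts a hello every 3 seconds; a peer that
# misses hellos is first reported "lost" and, after RECNXINTERVAL
# seconds without any hello, "timed out" and removed from the cluster.
# The thresholds here are illustrative, not the exact SCS timer values.

HELLO_INTERVAL = 3    # seconds between hello multicasts
LOST_THRESHOLD = 9    # ~8 to 10 s without a hello: 'lost connection'
RECNXINTERVAL = 900   # site-specific; Wim's cluster uses 900 s

def connection_state(seconds_since_last_hello, recnx=RECNXINTERVAL):
    """Classify a peer by how long ago its last hello arrived."""
    if seconds_since_last_hello < LOST_THRESHOLD:
        return "OPEN"        # hellos arriving normally
    if seconds_since_last_hello < recnx:
        return "LOST"        # OPCOM: 'lost connection to node ...'
    return "TIMED-OUT"       # node is removed from the cluster

# A hello arriving while a peer is in LOST yields the
# 're-established connection' OPCOM message.
```

    The 900-second RECNXINTERVAL matches the value Wim reports later in the thread. -->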
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511016#M67863</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;the LOCKMGRERR crash of node SALPV2 indicates that a bad/corrupted LOCK message has been received from node ALM12. That node should have crashed as well (with LOCKMGRERR) - as indicated by R0=0000223C   %SYSTEM-F-NODELEAVE.&lt;BR /&gt;&lt;BR /&gt;This type of crash can happen, when messages are being corrupted (either in the sending or the receiving node OR on the 'wire' i.e. LAN).&lt;BR /&gt;&lt;BR /&gt;Together with your network problems (connection lost), this is another indication of a possible network problem.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 24 Mar 2005 03:40:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511016#M67863</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-24T03:40:53Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511017#M67864</link>
      <description>Volker,&lt;BR /&gt;&lt;BR /&gt;You're spot on. ALM12 did reboot, but it is the only station that rebooted.&lt;BR /&gt;&lt;BR /&gt;At 15:57:30 there were PEA device errors on SALPV1 (the server). In the operator log there were "lost connection" messages.&lt;BR /&gt;At 15:59:30 my monitoring found processes in RWCLU (a batch job and an RSH to start with).&lt;BR /&gt;At 16:00:30 DSM gave circuit timeout messages.&lt;BR /&gt;At 16:05 I saw the problem.&lt;BR /&gt;At 16:12:30 I rebooted the server.&lt;BR /&gt;&lt;BR /&gt;To survive network outages, RECNXINTERVAL is set to 900 seconds, i.e. 15 minutes. I guess I rebooted the system just too fast; the problem would have resolved itself a little later.&lt;BR /&gt;&lt;BR /&gt;What I don't understand:&lt;BR /&gt;1) SEARCH also locks local files. Why did it work up to a certain point? Cache?&lt;BR /&gt;2) Why do I never get this with the network outages during the night? There are batch jobs running all the time! Are those outages too short?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 24 Mar 2005 03:58:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511017#M67864</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-24T03:58:09Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511018#M67865</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;I'm assuming that you'll get the RWCLU state if the lock request somehow involves an operation with a remote node to which the local node has lost its connection.&lt;BR /&gt;&lt;BR /&gt;This depends heavily on which resources your processes touch at that time.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 24 Mar 2005 04:34:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511018#M67865</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-24T04:34:54Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511019#M67866</link>
      <description>Only the 2 cluster servers have a non-zero LOCKDIRWT, and the network between them was not interrupted. So why did I get RWCLU on login? A lock remastering taking almost 15 minutes?</description>
      <pubDate>Thu, 24 Mar 2005 08:30:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511019#M67866</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-24T08:30:02Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511020#M67867</link>
      <description>If only two nodes have a non-zero LOCKDIRWT, then they will be the lock directory nodes. Other nodes talk to them to find out which node is the master for a particular resource. Perhaps, due to your network problems, the lock directory lookup operation was what the processes in RWCLU were waiting for.</description>
      <pubDate>Thu, 24 Mar 2005 11:59:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511020#M67867</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-03-24T11:59:26Z</dc:date>
    </item>
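    <!-- Editor's note: an illustrative sketch (assumed Python; not part of the original thread, and not the actual OpenVMS algorithm or data structures) of the LOCKDIRWT-weighted directory lookup Ian describes above: nodes get slots in a directory vector in proportion to their LOCKDIRWT, and a hash of the resource name selects a slot, so nodes with LOCKDIRWT = 0 never serve as directory nodes.

```python
# Hypothetical model of LOCKDIRWT-weighted lock directory lookup.
# Node names are taken from the thread; the hashing scheme (CRC32)
# is an assumption for illustration only.
import zlib

def build_directory_vector(lockdirwt):
    """lockdirwt: {node_name: weight}; returns the slot vector."""
    vector = []
    for node, weight in sorted(lockdirwt.items()):
        vector.extend([node] * weight)   # weight 0 means no slots
    return vector

def directory_node(resource_name, vector):
    """Pick the directory node for a resource by hashing its name."""
    h = zlib.crc32(resource_name.encode())
    return vector[h % len(vector)]

# Two servers with LOCKDIRWT = 1, one satellite with LOCKDIRWT = 0:
weights = {"SALPV1": 1, "SALPV2": 1, "ALM12": 0}
vec = build_directory_vector(weights)
# ALM12 gets no directory slots, so all lookups go to the servers.
assert "ALM12" not in vec
```
 -->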
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511021#M67868</link>
      <description>Today the same thing happened.&lt;BR /&gt;But I didn't reboot, and it freed itself without intervention.&lt;BR /&gt;A number of cluster stations were powered off.&lt;BR /&gt;The cluster was locked for 15 minutes, as expected.</description>
      <pubDate>Thu, 24 Mar 2005 13:28:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511021#M67868</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-24T13:28:05Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511022#M67869</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;yes, if you power off a satellite without a proper shutdown, connections to this node will be lost and it will take RECNXINTERVAL seconds until the node is timed out and removed from the cluster.&lt;BR /&gt;&lt;BR /&gt;Once the node is removed from the cluster, everything (e.g. processes put in RWCLU) will continue.&lt;BR /&gt;&lt;BR /&gt;It's still not 100% clear to me which kind of LOCK/RESOURCE operation in this scenario causes a process to be put in RWCLU.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Thu, 24 Mar 2005 13:44:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511022#M67869</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-24T13:44:27Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511023#M67870</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;one way of getting into the RWCLU state (RSN$_CLUSTRAN) is when a cluster state transition has been started and locks on the local node are being stalled (LCK$GL_STALLREQS .ne. 0). But a cluster state transition is not started until the connection to the remote node times out (which is not yet the case in your scenario).&lt;BR /&gt;&lt;BR /&gt;If the connection is lost (but not yet timed out) and a process is involved in a lock operation which needs to send a lock request to the remote node (to which the connection has been lost), the process is put in RWSCS.&lt;BR /&gt;&lt;BR /&gt;So the only remaining scenario I can think of (for entering RWCLU) is that a remastering operation may happen (between your 2 servers) which also involves a lock on the remote node.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 25 Mar 2005 04:48:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511023#M67870</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-25T04:48:58Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511024#M67871</link>
      <description>LOCKDIRWT is 0 on all stations and 1 on the servers.&lt;BR /&gt;&lt;BR /&gt;I would have expected lock mastership to stay on the 2 servers, but now I notice that ANALYZE/SYSTEM SHOW LOCK/SUMMARY reports moves of lock trees TO the stations, and very frequently (about one move every minute).&lt;BR /&gt;&lt;BR /&gt;???&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 29 Mar 2005 04:42:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511024#M67871</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-29T04:42:20Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511025#M67872</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;MONI RLOCK (or SDA&amp;gt; SHOW LOCK/SUMM) will list the 3 reasons for outbound lock tree movements:&lt;BR /&gt;&lt;BR /&gt;Tree moved due to higher Activity&lt;BR /&gt;Tree moved due to higher LOCKDIRWT&lt;BR /&gt;Tree moved due to Single Node Locks&lt;BR /&gt;&lt;BR /&gt;What does it say on your servers and satellites?&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 29 Mar 2005 04:50:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511025#M67872</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-29T04:50:31Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511026#M67873</link>
      <description>Servers: 70% due to higher activity, 0% due to higher LOCKDIRWT, 30% due to single node locks.&lt;BR /&gt;&lt;BR /&gt;Stations: 30% due to higher activity, 3% due to higher LOCKDIRWT, 40% due to single node locks. On another station I found 70% due to LOCKDIRWT.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 29 Mar 2005 04:55:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511026#M67873</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-29T04:55:40Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511027#M67874</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;lock remastering involves individual RESOURCE TREES (i.e. a root resource and its sub-resources), depending on lock activity in the tree on the different nodes of the cluster.&lt;BR /&gt;&lt;BR /&gt;Depending on your version of OpenVMS Alpha, there will be a SYS$SHARE:LCK$SDA SDA extension with various interesting commands (SDA&amp;gt; LCK provides some basic help).&lt;BR /&gt;&lt;BR /&gt;SDA&amp;gt; LCK SHOW ACTIVE will show resource trees with lock activity&lt;BR /&gt;&lt;BR /&gt;SDA&amp;gt; LCK STAT/NOALL/TREE/TOPTREE=n will display the n most active lock trees&lt;BR /&gt;&lt;BR /&gt;Now that you see the amount of lock remastering in your cluster, one of the possible scenarios for RWCLU (remastering a tree which includes a lock on a machine that has lost its connection to the cluster) becomes more plausible.&lt;BR /&gt;&lt;BR /&gt;If you really want to find out the reason for a process going into RWCLU, force a crash with a process in that state and then we'll find out...&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 29 Mar 2005 06:11:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511027#M67874</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-29T06:11:34Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511028#M67875</link>
      <description>Still not clear after a lot of reading.&lt;BR /&gt;&lt;BR /&gt;1) Why do stations receive mastership of a resource while their LOCKDIRWT is 0?&lt;BR /&gt;&lt;BR /&gt;2) Is there no way to see which resource exactly is remastered, instead of statistics? I'm afraid it is something ordinary such as the SYSUAF.&lt;BR /&gt;&lt;BR /&gt;3) How can I see how much bandwidth is consumed by the remastering? As I understand it, the packets exchanged can be very big, and I can only see the number of packets (= messages).&lt;BR /&gt;&lt;BR /&gt;4) How many remasterings per minute is normal? I see up to 30 remasterings per minute on my GS160 (running Sybase and DSM, DSM in cluster config).&lt;BR /&gt;&lt;BR /&gt;And Volker, I cannot ask for a crash. I have to wait...&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 29 Mar 2005 09:57:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511028#M67875</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-29T09:57:54Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511029#M67876</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;1) Sole interest: if locks on this resource tree exist only on that station, the tree moves there.&lt;BR /&gt;&lt;BR /&gt;2) Doesn't SDA&amp;gt; LCK SHOW ACTIVE at least show the most active resource trees, to give you an idea of the resources involved?&lt;BR /&gt;&lt;BR /&gt;3) Lock remastering is a trade-off between on-going remote lock messages versus moving a resource tree once to the most active node, thereby trying to increase local locking.&lt;BR /&gt;Heavy lock remastering will increase Interrupt Stack Time.&lt;BR /&gt;&lt;BR /&gt;4) You can limit the maximum size of a resource tree being moved by setting PE1 = n.&lt;BR /&gt;PE1 = -1 will disable remastering.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 29 Mar 2005 12:56:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511029#M67876</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-03-29T12:56:12Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511030#M67877</link>
      <description>Volker,&lt;BR /&gt;&lt;BR /&gt;I don't agree.&lt;BR /&gt;&lt;BR /&gt;On 1): I found 1200 trees moved to a station in 6 days. How can a lock move a tree TO the station if that station is the only one locking the resource?&lt;BR /&gt;&lt;BR /&gt;On 2): HP implemented lots of show commands (LCK...) but not the most important one: detailed monitoring of remastering, e.g.&lt;BR /&gt;SDA&amp;gt;show remastering&lt;BR /&gt;08:01:01.12 resource xxx moved from node aaa (bbb requests in 8 sec) to node ccc (ddd requests in 8 sec). Moved yyy K in zzz sec.&lt;BR /&gt;&lt;BR /&gt;On 3): HP shows the number of messages. What use is that if the size can be anywhere between 0 and 64 K? Shouldn't the number of MB be shown instead?&lt;BR /&gt;&lt;BR /&gt;On 4): this could be suicide. I would need statistics on all resources, with their sizes and numbers of remasterings, before I could set PE1. A SET FILE style command to modify the setting of a single resource would have been better.&lt;BR /&gt;&lt;BR /&gt;5) Machines are getting quicker all the time. Why are the parameters for remastering hardcoded? This is very un-VMS.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 30 Mar 2005 01:21:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511030#M67877</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-03-30T01:21:14Z</dc:date>
    </item>
    <item>
      <title>Re: RWCLU</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511031#M67878</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;re: 1) if a resource tree is first used on multiple nodes and then only generates activity on one station, it should move there, shouldn't it?&lt;BR /&gt;&lt;BR /&gt;re: the others - I'll try to ask these questions during the OpenVMS Bootcamp in June.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Fri, 01 Apr 2005 09:10:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/rwclu/m-p/3511031#M67878</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2005-04-01T09:10:58Z</dc:date>
    </item>
  </channel>
</rss>

