<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Setting TQELM limits in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432734#M4291</link>
    <description>It is also funny how VMS reacts.&lt;BR /&gt;&lt;BR /&gt;1) when a memory request exceeds the page file quota: an error is returned to the program.&lt;BR /&gt;2) when a $SETIMR request exceeds TQELM: the process hangs in MUTEX.&lt;BR /&gt;&lt;BR /&gt;I'm not a system programmer, but shouldn't hang/error be a parameter of the call?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
    <pubDate>Tue, 30 Nov 2004 07:17:18 GMT</pubDate>
    <dc:creator>Wim Van den Wyngaert</dc:creator>
    <dc:date>2004-11-30T07:17:18Z</dc:date>
    <item>
      <title>Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432730#M4287</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;A number of application-specific processes were seen to be in the MUTEX state. Via SDA, the resource wait state was shown to be WTTQE, so the TQELM limit was increased for the processes' owner.&lt;BR /&gt;&lt;BR /&gt;Ask the Wizard states:&lt;BR /&gt;&lt;BR /&gt;"&lt;BR /&gt;TQELM is the Timer Queue Entry Limit.&lt;BR /&gt;&lt;BR /&gt;The TQELM is the maximum number of timed events you may have outstanding. Most processes don't require many, so quotas as low as 10 can be reasonable. (Third-party software can require far more of these, of course.)&lt;BR /&gt;&lt;BR /&gt;The most important thing about any quota is to make sure a process has sufficient to perform its job. As long as it doesn't actually attempt to exceed the limit, there is no problem. Attempting to exceed TQELM or other similar quotas will put the process into a resource wait state; when the next scheduled TQE expires, the request will be granted, and the process will continue. So this and other similar quota-induced stall conditions are self-healing, unless the process has timer response constraints.&lt;BR /&gt;"&lt;BR /&gt;&lt;BR /&gt;As this issue has recurred, could someone please advise:&lt;BR /&gt;&lt;BR /&gt;- What exactly are Timer Queue Entries?&lt;BR /&gt;&lt;BR /&gt;- Is there a way to calculate the maximum number of TQEs a process may need? i.e. is it code specific?&lt;BR /&gt;&lt;BR /&gt;- Is there an adverse effect on system performance from increasing this value?&lt;BR /&gt;&lt;BR /&gt;Thanks in advance&lt;BR /&gt;R</description>
      <pubDate>Tue, 30 Nov 2004 03:55:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432730#M4287</guid>
      <dc:creator>Richard Leighton</dc:creator>
      <dc:date>2004-11-30T03:55:14Z</dc:date>
    </item>
    <item>
      <title>Re: Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432731#M4288</link>
      <description>Richard,&lt;BR /&gt;&lt;BR /&gt;TQEs are data structures maintained by VMS in a doubly linked list, describing time-dependent requests, ordered by the expiration time of the request.&lt;BR /&gt;Processes can use the system services $SCHDWK and $SETIMR to request time-dependent services, and can use $CANWAK and $CANTIM to cancel those requests.&lt;BR /&gt;&lt;BR /&gt;The actual number of TQEs a process needs is indeed very dependent on the image that runs in that process. How many is hard to say.&lt;BR /&gt;&lt;BR /&gt;Increasing the value of TQELM for a certain process may affect non-paged pool, since TQEs are allocated from NPP. However, a TQE isn't that big, and if your Alpha has enough memory, I don't think that increasing TQELM (to a reasonable value) will cause a performance issue.&lt;BR /&gt;&lt;BR /&gt;Greetz,&lt;BR /&gt;&lt;BR /&gt;Kris&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Nov 2004 05:08:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432731#M4288</guid>
      <dc:creator>Kris Clippeleyr</dc:creator>
      <dc:date>2004-11-30T05:08:10Z</dc:date>
    </item>
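Kris's description of TQEs, an expiration-ordered queue of pending timer requests bounded by a job quota, can be sketched in miniature. The following Python is purely illustrative (not VMS internals); the class, method, and error names are invented for the example:

```python
# Illustrative sketch only (not VMS internals): a timer queue kept in
# expiration order with a TQELM-style cap on outstanding entries.
import bisect

class TimerQueue:
    def __init__(self, quota):
        self.quota = quota      # cap on outstanding entries, like TQELM
        self.entries = []       # (expiration_time, request_id), kept sorted

    def set_timer(self, expires, request_id):
        """Queue a timed request; refuse once the quota is exhausted."""
        if len(self.entries) >= self.quota:
            raise RuntimeError("EXQUOTA: timer entry quota exceeded")
        bisect.insort(self.entries, (expires, request_id))

    def cancel(self, request_id):
        """Drop pending entries for a request, returning quota to the job."""
        self.entries = [e for e in self.entries if e[1] != request_id]

    def expire(self, now):
        """Deliver every entry whose expiration time has passed."""
        fired = []
        while self.entries and self.entries[0][0] <= now:
            fired.append(self.entries.pop(0)[1])
        return fired

q = TimerQueue(quota=3)
q.set_timer(10, "wakeup-a")
q.set_timer(5, "io-timeout")
q.set_timer(20, "wakeup-b")
print(q.expire(now=10))     # earliest expirations are delivered first
```

The cancel path mirrors $CANTIM/$CANWAK: releasing an entry returns quota to the job, which is why expired or cancelled timers can be reused.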
    <item>
      <title>Re: Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432732#M4289</link>
      <description>A TQE is only 64 bytes, so allowing a process to have lots of them is not really a problem. The limit is there to prevent one process from using up all the non-paged pool (perhaps because of a bug).&lt;BR /&gt;&lt;BR /&gt;Sometimes the TQE limit is related to other quotas, but this is very application specific. Just pick a number big enough for the application to work.</description>
      <pubDate>Tue, 30 Nov 2004 05:59:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432732#M4289</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2004-11-30T05:59:08Z</dc:date>
    </item>
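Taking Ian's 64-byte figure at face value, the worst-case pool cost of a given TQELM is easy to bound. A back-of-the-envelope check (the 64-byte size comes from the post above and may differ by OpenVMS version):

```python
# Worst-case non-paged pool consumed if a process actually queued TQEs
# up to its limit, using the 64-byte TQE size quoted above (which may
# vary by OpenVMS version).
TQE_BYTES = 64

def worst_case_pool(tqelm):
    """Bytes of non-paged pool tied up when the quota is fully used."""
    return tqelm * TQE_BYTES

for limit in (10, 100, 10_000):
    kb = worst_case_pool(limit) / 1024
    print(f"TQELM={limit:>6}: {worst_case_pool(limit):>7} bytes ({kb:.1f} KB)")
```

Even a TQELM of 10,000 ties up well under 1 MB of pool when fully used, which is why a generous but unused limit is harmless.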
    <item>
      <title>Re: Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432733#M4290</link>
      <description>Thanks Kris and Ian for your help.</description>
      <pubDate>Tue, 30 Nov 2004 06:28:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432733#M4290</guid>
      <dc:creator>Richard Leighton</dc:creator>
      <dc:date>2004-11-30T06:28:58Z</dc:date>
    </item>
    <item>
      <title>Re: Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432734#M4291</link>
      <description>It is also funny how VMS reacts.&lt;BR /&gt;&lt;BR /&gt;1) when a memory request exceeds the page file quota: an error is returned to the program.&lt;BR /&gt;2) when a $SETIMR request exceeds TQELM: the process hangs in MUTEX.&lt;BR /&gt;&lt;BR /&gt;I'm not a system programmer, but shouldn't hang/error be a parameter of the call?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 30 Nov 2004 07:17:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432734#M4291</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-11-30T07:17:18Z</dc:date>
    </item>
    <item>
      <title>Re: Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432735#M4292</link>
      <description>You can choose to disable resource wait mode, so that you get an error instead, by using the $SETRWM system service. See&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/732FINAL/4527/4527pro_007.html#index_x_1019" target="_blank"&gt;http://h71000.www7.hp.com/doc/732FINAL/4527/4527pro_007.html#index_x_1019&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;You can also disable it when creating a detached process, e.g. RUN/NORESOURCE_WAIT&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Nov 2004 07:36:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432735#M4292</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2004-11-30T07:36:03Z</dc:date>
    </item>
    <item>
      <title>Re: Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432736#M4293</link>
      <description>Ian,&lt;BR /&gt;&lt;BR /&gt;Thanks for the clarification.&lt;BR /&gt;But it's still not clear:&lt;BR /&gt;&lt;BR /&gt;1) if I specify /NORESOURCE_WAIT, will the process never wait, whatever the program specifies?&lt;BR /&gt;&lt;BR /&gt;2) if the default is /RESOURCE_WAIT, why is the behaviour for certain resources such as memory still "give an error" instead of waiting until memory is available?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 30 Nov 2004 08:07:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432736#M4293</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-11-30T08:07:13Z</dc:date>
    </item>
    <item>
      <title>Re: Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432737#M4294</link>
      <description>Resource wait mode only affects the behavior of system services for certain resources:&lt;BR /&gt;- System dynamic memory&lt;BR /&gt;- UNIBUS adapter map registers&lt;BR /&gt;- Direct I/O limit (DIOLM) quota&lt;BR /&gt;- Buffered I/O limit (BIOLM) quota&lt;BR /&gt;- Buffered I/O byte count limit (BYTLM) quota&lt;BR /&gt;&lt;BR /&gt;The system services return SS$_EXQUOTA instead of waiting if resource wait mode is disabled.</description>
      <pubDate>Tue, 30 Nov 2004 08:40:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432737#M4294</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2004-11-30T08:40:45Z</dc:date>
    </item>
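Ian's distinction, block versus SS$_EXQUOTA, amounts to a wait flag on quota-charged allocations, which is essentially what Wim asked for earlier in the thread. A small Python simulation of the two behaviours (purely illustrative; the class name, flag, and error string are invented, and real VMS enforces this in the executive, not in application code):

```python
# Simulation (not VMS code) of the two behaviours described above:
# with "resource wait" enabled the requester blocks until quota is
# returned; with it disabled the call fails at once, as SS$_EXQUOTA does.
import threading

class QuotaPool:
    def __init__(self, quota):
        self.quota = quota
        self.used = 0
        self.cond = threading.Condition()

    def acquire(self, n, resource_wait=True):
        with self.cond:
            if not resource_wait and self.used + n > self.quota:
                raise RuntimeError("EXQUOTA")   # error returned to caller
            while self.used + n > self.quota:   # resource wait: block here
                self.cond.wait()
            self.used += n

    def release(self, n):
        with self.cond:
            self.used -= n
            self.cond.notify_all()              # wake any blocked requesters

pool = QuotaPool(quota=2)
pool.acquire(2)
try:
    pool.acquire(1, resource_wait=False)  # quota exhausted -> immediate error
except RuntimeError as e:
    print("no-wait mode:", e)
# In wait mode the same request would simply stall until release() ran
# in another process of the job - the MUTEX/RWAST-style hang Wim saw.
```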
    <item>
      <title>Re: Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432738#M4295</link>
      <description>Richard,&lt;BR /&gt;&lt;BR /&gt;Regarding Wim's query:&lt;BR /&gt;&lt;BR /&gt;"1) when asking memory exceeding the pagefilequota : error returned to program&lt;BR /&gt;2) when asking SETIMR exceeding tqelm : hang and mutex."&lt;BR /&gt;&lt;BR /&gt;  There is a simple reason for this. PGFLQUOTA is a PROCESS quota. Therefore, if an allocation attempt fails, the same allocation will continue to fail forever.&lt;BR /&gt;&lt;BR /&gt;  TQELM and BYTLM are JOB quotas, pooled between all processes in the same job tree. So, if an allocation fails now, it may succeed later, if another process in the job tree returns some quota. (That VMS goes MUTEX even if there is only one process in the job tree is questionable, but that's just how it works! At least the behaviour is consistent.)&lt;BR /&gt;&lt;BR /&gt;&amp;gt;What exactly are Timer Queue Entries?&lt;BR /&gt;&lt;BR /&gt;  You can request that the operating system notify you at a specific time in the future, or after some time interval. The notification may be setting an event flag, issuing a wakeup, or firing an AST. The Timer Queue is the list of future notifications to be made. A JOB TREE has a limit on how many notifications can be pending at any given time.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;Is there a way to calculate the maximum number of TQEs a process may need? i.e. is it code specific?&lt;BR /&gt;&lt;BR /&gt;  Yes, it's code specific. You need to know how many future notifications are likely to be needed, and how long they will remain on the queue. Typical uses for timer entries are I/O timeouts and periodic events. I/O timeouts tend to be short lived; periodic events can be longer term. As timers expire, their entries can be reused.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;Is there an adverse effect on system performance by increasing this value?&lt;BR /&gt;&lt;BR /&gt;  On pre-V7.3 systems, TQEs were kept in a doubly linked list, ordered in temporal sequence. If the queue got very long, adding an entry at the correct place could take a long time (a sequential scan is required, at elevated IPL and holding the SCHED spinlock - not good!). A process with a high TQELM which added large numbers of entries could therefore adversely impact the whole system. A high TQELM which is not used does not affect the system.&lt;BR /&gt;&lt;BR /&gt;  As of V7.3, the TQE structure has changed, so insertions are far more efficient, even when there are large numbers of entries.&lt;BR /&gt;&lt;BR /&gt;  However, if you find you're exceeding TQELM even with high values, it's probably worth investigating exactly what the application is doing. Why is it issuing so many timers? Perhaps a time interval is coded incorrectly (too long)? Are long-term timers being requested and not cancelled when they should be?&lt;BR /&gt;</description>
      <pubDate>Wed, 01 Dec 2004 06:29:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432738#M4295</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2004-12-01T06:29:32Z</dc:date>
    </item>
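John's pre-V7.3 point, that each insertion into a long temporally ordered list means a sequential scan from the head, is easy to see in a toy model. The heap below is only a stand-in for "a more efficient structure"; no claim is made that V7.3 actually uses a binary heap:

```python
# Toy model of the pre-V7.3 cost: inserting into a temporally ordered
# list means scanning from the head to find the slot, so cost grows
# with queue length. A binary heap (a stand-in for "a more efficient
# structure", not a claim about the actual V7.3 TQE layout) bounds
# each insert by roughly log2(n) comparisons instead.
import heapq
import random

def linear_insert(queue, t):
    """Insert t keeping temporal order; return entries scanned (the cost)."""
    i = 0
    while i < len(queue) and queue[i] <= t:
        i += 1
    queue.insert(i, t)
    return i

random.seed(1)
queue, scanned = [], 0
N = 2000
for _ in range(N):
    scanned += linear_insert(queue, random.random())
print(f"average entries scanned per insert: {scanned / N:.0f}")

# Same entries via a heap: each push is O(log n), and popping still
# yields them in expiration order.
heap = []
for t in queue:
    heapq.heappush(heap, t)
assert [heapq.heappop(heap) for _ in range(N)] == queue
```

With 2,000 random entries the list scans hundreds of entries per insert on average, while the heap does a handful of comparisons, which is the gap John describes, made worse on VMS by the scan running at elevated IPL under the SCHED spinlock.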
    <item>
      <title>Re: Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432739#M4296</link>
      <description>Oh, and to Ian's suggestion...&lt;BR /&gt;&lt;BR /&gt;  Please DO NOT disable resource wait mode unless you have control over ALL the system service calls your code will ever make. The behaviour of any RTL calls, and even of some system services, is unpredictable with resource wait mode disabled.&lt;BR /&gt;&lt;BR /&gt;  The purpose of $SETRWM is inner-mode and real-time programming; it is NOT a magic wand for fixing application resource problems. See the System Services Reference Manual BEFORE attempting to use this service.&lt;BR /&gt;&lt;BR /&gt;  If you're not coding in MACRO-32 at IPL above 2, then you should NOT be calling $SETRWM!</description>
      <pubDate>Wed, 01 Dec 2004 06:32:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432739#M4296</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2004-12-01T06:32:14Z</dc:date>
    </item>
    <item>
      <title>Re: Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432740#M4297</link>
      <description>John,&lt;BR /&gt;&lt;BR /&gt;With multi-threaded programs it is possible that memory becomes available again.&lt;BR /&gt;&lt;BR /&gt;The RW states are the main reason why we have to reboot nodes. It's a pain ...&lt;BR /&gt;&lt;BR /&gt;Also: be careful when changing quotas. Some products may refuse to start when the quotas are not in balance (e.g. DSM).&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 01 Dec 2004 06:50:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432740#M4297</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-12-01T06:50:28Z</dc:date>
    </item>
    <item>
      <title>Re: Setting TQELM limits</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432741#M4298</link>
      <description>I agree with John about SYS$SETRWM - use it in your own application only when you know what the effects are.&lt;BR /&gt;I mentioned it just to point out that it is possible to change the VMS behaviour for resource waits, not to recommend it.&lt;BR /&gt;&lt;BR /&gt;Back to the original question - I would tend to increase TQELM and then watch the process with SHOW_QUOTA.COM/AMDS/Availability Manager to see what the behaviour is. The program may have a bug, or it may use a lot of timers in its normal operation.</description>
      <pubDate>Wed, 01 Dec 2004 07:58:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/setting-tqelm-limits/m-p/3432741#M4298</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2004-12-01T07:58:11Z</dc:date>
    </item>
  </channel>
</rss>

