<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: OpenVMS SORT stats in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/openvms-sort-statis/m-p/5047875#M83773</link>
    <description>David,&lt;BR /&gt;&lt;BR /&gt;I agree with Dean, and will further note that the number of passes increased dramatically. I too would wonder about the process quotas, and a variety of other factors, including the quota settings on the batch queues.&lt;BR /&gt;&lt;BR /&gt;I would also like to understand what other load was on the system at the time of the sort, and whether this behavior is reproducible on an otherwise idle machine.&lt;BR /&gt;&lt;BR /&gt;I have investigated several SORT anomalies for clients over the years, and changes in quotas, settings, and external workload have caused performance issues.&lt;BR /&gt;&lt;BR /&gt;I hope that the above is helpful.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
    <pubDate>Fri, 18 May 2007 12:34:08 GMT</pubDate>
    <dc:creator>Robert Gezelter</dc:creator>
    <dc:date>2007-05-18T12:34:08Z</dc:date>
    <item>
      <title>OpenVMS SORT stats</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-sort-statis/m-p/5047873#M83771</link>
      <description>Environment:&lt;BR /&gt;OpenVMS 7.3-2, Alpha (3-CPU ES40)&lt;BR /&gt;&lt;BR /&gt;What are some things which could impact "Sort tree size"?  Though our change logs shed no light, a regular sort job has changed its pattern, taking much longer than it used to.  Two sample stats are below. &lt;BR /&gt;&lt;BR /&gt;Any feedback is greatly appreciated!  Thanks, -David&lt;BR /&gt;&lt;BR /&gt;(Good sort from last month)&lt;BR /&gt;$ SORT/KEY=(POSITION:38,SIZE:32) /STAT -&lt;BR /&gt;ra_data1:TR_REV_401B1.0703 -&lt;BR /&gt;ra_data1:TR_REV_401B1.0703&lt;BR /&gt; &lt;BR /&gt;                  OpenVMS Sort/Merge Statistics&lt;BR /&gt; &lt;BR /&gt;Records read:    21161897          Input record length:      622&lt;BR /&gt;Records sorted:  21161897          Internal length:          622&lt;BR /&gt;Records output:  21161897          Output record length:     622&lt;BR /&gt;Working set:      8388608          Sort tree size:        446622&lt;BR /&gt;Virtual memory:    552864          Number of initial runs:    25&lt;BR /&gt;Direct I/O:        904702          Maximum merge order:       20&lt;BR /&gt; &lt;BR /&gt;Buffered I/O:         139          Number of merge passes:     2&lt;BR /&gt;Page faults:        35113          Work file alloc:     30873474&lt;BR /&gt;Elapsed time: 00:27:50.52          Elapsed CPU:      00:10:32.74  &lt;BR /&gt;&lt;BR /&gt;(Bad sort from this month)&lt;BR /&gt;$ SORT/KEY=(POSITION:38,SIZE:32) /STAT -&lt;BR /&gt;ra_data1:TR_REV_401W1.0704 -&lt;BR /&gt;ra_data1:TR_REV_401W1.0704&lt;BR /&gt; &lt;BR /&gt;                  OpenVMS Sort/Merge Statistics&lt;BR /&gt; &lt;BR /&gt;Records read:     2277280          Input record length:      622&lt;BR /&gt;Records sorted:   2277280          Internal length:          622&lt;BR /&gt;Records output:   2277280          Output record length:     622&lt;BR /&gt;Working set:      8388608          Sort tree size:             2&lt;BR /&gt;Virtual memory:     77104          Number of initial runs:569235&lt;BR /&gt;Direct I/O:       3168711          Maximum merge order:       20&lt;BR /&gt; &lt;BR /&gt;Buffered I/O:       23675          Number of merge passes: 29960&lt;BR /&gt;Page faults:         5270          Work file alloc:      3026946&lt;BR /&gt;Elapsed time: 12:32:07.18          Elapsed CPU:      09:59:26.48   &lt;BR /&gt;</description>
      <pubDate>Fri, 18 May 2007 11:24:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-sort-statis/m-p/5047873#M83771</guid>
      <dc:creator>David Moczygemba</dc:creator>
      <dc:date>2007-05-18T11:24:38Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS SORT stats</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-sort-statis/m-p/5047874#M83772</link>
      <description>Direct I/O is up, buffered I/O is up, and virtual memory is down. I'd probably start by checking whether the process quotas have been changed. The CPU time is the real wonder: from 10+ minutes to almost 10 hours! Also take a close look at the files to see if they are normal compared to what you usually process.</description>
      <pubDate>Fri, 18 May 2007 11:54:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-sort-statis/m-p/5047874#M83772</guid>
      <dc:creator>Dean McGorrill</dc:creator>
      <dc:date>2007-05-18T11:54:51Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS SORT stats</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-sort-statis/m-p/5047875#M83773</link>
      <description>David,&lt;BR /&gt;&lt;BR /&gt;I agree with Dean, and will further note that the number of passes increased dramatically. I too would wonder about the process quotas, and a variety of other factors, including the quota settings on the batch queues.&lt;BR /&gt;&lt;BR /&gt;I would also like to understand what other load was on the system at the time of the sort, and whether this behavior is reproducible on an otherwise idle machine.&lt;BR /&gt;&lt;BR /&gt;I have investigated several SORT anomalies for clients over the years, and changes in quotas, settings, and external workload have caused performance issues.&lt;BR /&gt;&lt;BR /&gt;I hope that the above is helpful.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Fri, 18 May 2007 12:34:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-sort-statis/m-p/5047875#M83773</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2007-05-18T12:34:08Z</dc:date>
    </item>
    <item>
      <title>Re: OpenVMS SORT stats</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/openvms-sort-statis/m-p/5047876#M83774</link>
      <description>As best I can tell, the queue must have had some working set settings which were removed in the last few days.  Though I wasted a lot of today trying higher and higher working set values, the key was lower.  Specifically: a working set of 777680 seemed relatively optimal for the 12GB file we needed to run.  It sorted in 15 minutes once that change was in place.  I also had some minor challenges with precedence between UAF settings, queue settings, and PQL_M* settings, but all is working now.  Thanks so much!  -David &lt;BR /&gt;&lt;BR /&gt;$ SET WORK/LIMIT=777680/QUOTA=777680/EXTENT=777680&lt;BR /&gt;$!starting sort on key1..from nr, to nr, rcd date &amp;amp; connect time....&lt;BR /&gt;$ SORT/KEY=(POSITION:38,SIZE:32) /STAT -&lt;BR /&gt;ra_data1:TR_REV_401B1.0704 -&lt;BR /&gt;ra_data1:TR_REV_401B1.0704&lt;BR /&gt; &lt;BR /&gt;                  OpenVMS Sort/Merge Statistics&lt;BR /&gt; &lt;BR /&gt;Records read:    20017984          Input record length:      622&lt;BR /&gt;Records sorted:  20017984          Internal length:          622&lt;BR /&gt;Records output:  20017984          Output record length:     622&lt;BR /&gt;Working set:       777680          Sort tree size:        472358&lt;BR /&gt;Virtual memory:    584640          Number of initial runs:    22&lt;BR /&gt;Direct I/O:        819499          Maximum merge order:       20&lt;BR /&gt;Buffered I/O:         119          Number of merge passes:     2&lt;BR /&gt;Page faults:        36600          Work file alloc:     26821314&lt;BR /&gt;Elapsed time: 00:15:22.86          Elapsed CPU:      00:09:46.95   &lt;BR /&gt;</description>
      <pubDate>Fri, 18 May 2007 15:55:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/openvms-sort-statis/m-p/5047876#M83774</guid>
      <dc:creator>David Moczygemba</dc:creator>
      <dc:date>2007-05-18T15:55:14Z</dc:date>
    </item>
  </channel>
</rss>