<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem. in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123708#M87969</link>
    <description>Thanks for all your inputs, Hein and Guenther.&lt;BR /&gt;&lt;BR /&gt;As mentioned in my first post, "$abs lookup/brief" for 3.5 years of archive files would take about ½ a year to list them all.&lt;BR /&gt;&lt;BR /&gt;But following Guenther's hint to use abs$system:ABS$DUMP_AOE did almost the same job, and took only a couple of days.&lt;BR /&gt;&lt;BR /&gt;/Per</description>
    <pubDate>Wed, 16 Jan 2008 12:26:32 GMT</pubDate>
    <dc:creator>Per Hvid at Mach</dc:creator>
    <dc:date>2008-01-16T12:26:32Z</dc:date>
    <item>
      <title>ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123697#M87958</link>
      <description>We don't use ABS for future archiving.&lt;BR /&gt;&lt;BR /&gt;But I have 3.5 years of ABS archive data in 44 files, 234023938 blocks.&lt;BR /&gt;&lt;BR /&gt;I want to list all the old archived files.&lt;BR /&gt;&lt;BR /&gt;So for January 2006 I did:&lt;BR /&gt;$ abs lookup DISK$ARC_0106:[*...]*.*;* /catalog=MACH_ARCHIVE_2006 /BRIEF&lt;BR /&gt;3865985 objects(s) found in 1 catalog(s).&lt;BR /&gt;Elapsed time:       5 12:29:32.87&lt;BR /&gt;0106.BRIEF;1           639628 &lt;BR /&gt;&lt;BR /&gt;So it will take about ½ a year to list all the data!&lt;BR /&gt;&lt;BR /&gt;Does anyone have any hints for extracting the data faster?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Per Hvid  phv@mach.com</description>
      <pubDate>Thu, 03 Jan 2008 11:00:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123697#M87958</guid>
      <dc:creator>Per Hvid at Mach</dc:creator>
      <dc:date>2008-01-03T11:00:02Z</dc:date>
    </item>
    <item>
      <title>Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123698#M87959</link>
      <description>I seem to recall ABS just uses RMS indexed files for its datastore, and as best I recall they are not tuned too cleverly. Catalog files which are no longer actively updated should probably be converted once and for all.&lt;BR /&gt;That will fix internal and external fragmentation.&lt;BR /&gt;&lt;BR /&gt;Why don't you post a reply with a sample ANALYZE/RMS/FDL/OUT as a text attachment, and/or output from my tunecheck tool?&lt;BR /&gt;&lt;BR /&gt;We can possibly give you a generic, tuned FDL to convert with. You might tweak the allocations a little, based on individual archives.&lt;BR /&gt;&lt;BR /&gt;Optimal tuning will be a little more work.&lt;BR /&gt;A day of (for-fee) consulting should do it.&lt;BR /&gt;&lt;BR /&gt;Specifically, I think you need to know whether the typically selected records are stored more or less in primary key order (even if selected through an alternate key), in which case one should lean towards a larger primary data area bucket size. If random access is expected, then a smaller, but not too small, RMS bucket size is needed.&lt;BR /&gt;&lt;BR /&gt;You may also want to try a SET RMS/IND/BUF=50 before making a listing.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps some,&lt;BR /&gt;&lt;BR /&gt;Hein van den Heuvel ( at gmail dot com )&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 03 Jan 2008 12:33:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123698#M87959</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-01-03T12:33:33Z</dc:date>
    </item>
    <item>
      <title>Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123699#M87960</link>
      <description>ABS V3.2? Huh...long time ago. If I remember right, the lookup does a partial key search (fast) using the string up to the first "*" wildcard indicator. From that point on the search is sequential (slow).&lt;BR /&gt;&lt;BR /&gt;If you specify more of the filespec you are looking for, like its top directory, the search most likely goes a lot faster.&lt;BR /&gt;&lt;BR /&gt;Btw, how big are the ABS$CATALOG:MACH_ARCHIVE_2006.* files?&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Mon, 07 Jan 2008 17:34:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123699#M87960</guid>
      <dc:creator>Guenther Froehlin</dc:creator>
      <dc:date>2008-01-07T17:34:24Z</dc:date>
    </item>
    <item>
      <title>Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123700#M87961</link>
      <description>Thanks for your input, Guenther.&lt;BR /&gt;Indexed file alternate key lookup, huh?&lt;BR /&gt;&lt;BR /&gt;Did ABS V3.2 do anything in the area of explicit RMS indexed file buffer count selection, or did it just accept the default? If it accepted the default, then I would certainly retry with SET RMS/IND/BUF=50.&lt;BR /&gt;&lt;BR /&gt;Guenther, what are the odds that a 'next' record by alternate key is also the next record by primary key? High correlation or none?&lt;BR /&gt;&lt;BR /&gt;If there is no correlation, then effectively every name lookup will trigger a data bucket read, possibly with bucket-split re-vector IO. If so, the 5 hours is perfectly reasonable:&lt;BR /&gt;&lt;BR /&gt;5 12:29:32.87 ==&amp;gt; 19713 seconds.&lt;BR /&gt;3865985/(5*3600+28*60+33) ==&amp;gt; 196 IO/sec.&lt;BR /&gt;&lt;BR /&gt;Per, &lt;BR /&gt;&lt;BR /&gt;What were the IO count and CPU time reported?&lt;BR /&gt;&lt;BR /&gt;Have those archives ever been converted / re-organized? You may just want to try a convert with a large data bucket size (32 - 63) to clean up any bucket splits and to increase caching effectiveness IF there is an alternate-to-primary sequence correlation.&lt;BR /&gt;If there is no correlation, then a large bucket will just be a waste of bandwidth.&lt;BR /&gt;&lt;BR /&gt;Can you post an ANAL/RMS/FDL output in an attachment? Or if you have a recent version of my rms_tune_check, with the -a output, could you post that?&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein van den Heuvel&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;</description>
      <pubDate>Mon, 07 Jan 2008 18:23:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123700#M87961</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-01-07T18:23:11Z</dc:date>
    </item>
    <item>
      <title>Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123701#M87962</link>
      <description>I want to list all the backup files!&lt;BR /&gt;&lt;BR /&gt;Regarding the question of file size:&lt;BR /&gt;dir mach_arch*&lt;BR /&gt;Directory ABS$ROOT:[CATALOG]&lt;BR /&gt;MACH_ARCHIVE_2004_MCA090_VAOEI.DAT;1  1680384&lt;BR /&gt;MACH_ARCHIVE_2004_MCA091_VAOEI.DAT;1  1606656&lt;BR /&gt;MACH_ARCHIVE_2004_MCA094_VAOEI.DAT;1  1675008&lt;BR /&gt;MACH_ARCHIVE_2004_MCA100_VAOEI.DAT;1  1749504&lt;BR /&gt;MACH_ARCHIVE_2004_MCA103_VAOEI.DAT;1  1633280&lt;BR /&gt;MACH_ARCHIVE_2004_MCA110_VAOEI.DAT;1   824576&lt;BR /&gt;MACH_ARCHIVE_2004_MCA113_VAOEI.DAT;1   648960&lt;BR /&gt;MACH_ARCHIVE_2004_MCA114_VAOEI.DAT;1  1608192&lt;BR /&gt;MACH_ARCHIVE_2004_MCA118_VAOEI.DAT;1  1672960&lt;BR /&gt;MACH_ARCHIVE_2004_MCA123_VAOEI.DAT;1  1448448&lt;BR /&gt;MACH_ARCHIVE_2004_MCA128_VAOEI.DAT;1  1245952&lt;BR /&gt;MACH_ARCHIVE_2004_MCA132_VAOEI.DAT;1  1740288&lt;BR /&gt;MACH_ARCHIVE_2004_MCA136_VAOEI.DAT;1  1734656&lt;BR /&gt;MACH_ARCHIVE_2004_VAOE.DAT;1         72350720&lt;BR /&gt;MACH_ARCHIVE_2004_VTLE.DAT;1              768&lt;BR /&gt;MACH_ARCHIVE_2005_1_1.COM;1                 2&lt;BR /&gt;MACH_ARCHIVE_2005_MCA139_VAOEI.DAT;1  1805568&lt;BR /&gt;MACH_ARCHIVE_2005_MCA143_VAOEI.DAT;1  1559552&lt;BR /&gt;MACH_ARCHIVE_2005_MCA148_VAOEI.DAT;1  1683968&lt;BR /&gt;MACH_ARCHIVE_2005_MCA149_VAOEI.DAT;1  1681920&lt;BR /&gt;MACH_ARCHIVE_2005_MCA152_VAOEI.DAT;1     4352&lt;BR /&gt;MACH_ARCHIVE_2005_MCA156_VAOEI.DAT;1  1635072&lt;BR /&gt;MACH_ARCHIVE_2005_MCA160_VAOEI.DAT;1  1299712&lt;BR /&gt;MACH_ARCHIVE_2005_MCA164_VAOEI.DAT;1  1570304&lt;BR /&gt;MACH_ARCHIVE_2005_MCA168_VAOEI.DAT;1  1145600&lt;BR /&gt;MACH_ARCHIVE_2005_MCA169_VAOEI.DAT;1  1425152&lt;BR /&gt;MACH_ARCHIVE_2005_MCA173_VAOEI.DAT;1  1034496&lt;BR /&gt;MACH_ARCHIVE_2005_MCA175_VAOEI.DAT;1   964352&lt;BR /&gt;MACH_ARCHIVE_2005_MCA187_VAOEI.DAT;1  1700352&lt;BR /&gt;MACH_ARCHIVE_2005_VAOE.DAT;1         67031808&lt;BR /&gt;MACH_ARCHIVE_2005_VTLE.DAT;1              512&lt;BR /&gt;MACH_ARCHIVE_2006_MCA176_VAOEI.DAT;1  
1063168&lt;BR /&gt;MACH_ARCHIVE_2006_MCA178_VAOEI.DAT;1   777984&lt;BR /&gt;MACH_ARCHIVE_2006_MCA181_VAOEI.DAT;1   795904&lt;BR /&gt;MACH_ARCHIVE_2006_MCA183_VAOEI.DAT;1   795904&lt;BR /&gt;MACH_ARCHIVE_2006_MCA185_VAOEI.DAT;1   888320&lt;BR /&gt;MACH_ARCHIVE_2006_MCA195_VAOEI.DAT;1  2470400&lt;BR /&gt;MACH_ARCHIVE_2006_MCA199_VAOEI.DAT;1   842752&lt;BR /&gt;MACH_ARCHIVE_2006_MCA205_VAOEI.DAT;1  2595840&lt;BR /&gt;MACH_ARCHIVE_2006_VAOE.DAT;1         40110080&lt;BR /&gt;MACH_ARCHIVE_2006_VTLE.DAT;1              512&lt;BR /&gt;MACH_ARCHIVE_2007_MCA203_VAOEI.DAT;1  1521920&lt;BR /&gt;MACH_ARCHIVE_2007_VAOE.DAT;1          5997568&lt;BR /&gt;MACH_ARCHIVE_2007_VTLE.DAT;1              512&lt;BR /&gt;Total of 44 files, 234023938 blocks.&lt;BR /&gt;&lt;BR /&gt;/Per</description>
      <pubDate>Mon, 07 Jan 2008 18:40:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123701#M87962</guid>
      <dc:creator>Per Hvid at Mach</dc:creator>
      <dc:date>2008-01-07T18:40:52Z</dc:date>
    </item>
    <item>
      <title>Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123702#M87963</link>
      <description>ABS does not specify any buffer count in its RABs. The filename is a primary key in the *VAOE.DAT files.&lt;BR /&gt;&lt;BR /&gt;So there are 4 and not 40 files to scan.&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Mon, 07 Jan 2008 20:34:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123702#M87963</guid>
      <dc:creator>Guenther Froehlin</dc:creator>
      <dc:date>2008-01-07T20:34:53Z</dc:date>
    </item>
    <item>
      <title>Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123703#M87964</link>
      <description>Per emailed me some more numbers.&lt;BR /&gt;&lt;BR /&gt;They show an astounding 72 Direct IOs per record reported, but also 24 Buffered IOs per record?!&lt;BR /&gt;&lt;BR /&gt;I am at a loss to explain any but a few buffered IOs. Mailbox involved? File opens? Log file extending on every record?&lt;BR /&gt;&lt;BR /&gt;Guenther, are 'child' files being opened and closed around a lookup?&lt;BR /&gt;&lt;BR /&gt;That would make for almost 10 direct IOs per record (prologue, area,&lt;BR /&gt;root, 2 more index levels, data) for the child file, instead of the 1 - 2 normally seen for an indexed file lookup with a good bit of the index in cache.&lt;BR /&gt;&lt;BR /&gt;Maybe the master file, if I may call it that (MACH_ARCHIVE_2006_VAOE.DAT?), generates the bulk of the IOs?&lt;BR /&gt;&lt;BR /&gt;Maybe check with $MONITOR FCP, looking for&lt;BR /&gt; "File Open Rate" while running.&lt;BR /&gt;&lt;BR /&gt;Maybe check with Volker's SDA$PROCIO where the direct IOs go. And/or SET FILE/STAT on driver and child file, then analyze with my RMS_STATS, or MONI RMS, or ANALYZE/SYS... SHOW PROC /RMS=FSB.&lt;BR /&gt;&lt;BR /&gt;Guenther, Per just wants a list of all files.&lt;BR /&gt;Is there not a more expedient way to do this? Could a simple tool report directly on the MACH_ARCHIVE_2006_MCA176_VAOEI.DAT file?&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Tue, 08 Jan 2008 13:29:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123703#M87964</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-01-08T13:29:32Z</dc:date>
    </item>
    <item>
      <title>Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123704#M87965</link>
      <description>Hein, you all guessed right. Which is kind of bad news. The VAOE files contain the filename of each file saved and a unique binary ID (2nd key). This ID key is used to index into the VAOEI files, which contain a record for each time a file was saved on a certain tape volume (see the volume ID in the filename) and a unique key pointing into the VTLE file. The VTLE file contains a record for each save operation (save set name, volume label) with a unique ID per record.&lt;BR /&gt;&lt;BR /&gt;So, the lookup scans through the VAOE file and for each entry it tries to find a matching VAOEI record. This is where the buffered I/Os come from (open/close to avoid running out of FILLM/CHANNELCNT). That was the unfinished work in ABS V3.*. Just recently this has been improved in the latest ABS version to do this a bit smarter/faster.&lt;BR /&gt;&lt;BR /&gt;And so, using a program/DCL procedure which reads sequentially through a VAOE file, printing the filename (the first 255 bytes), would go a lot faster. But if anything else is needed (which save set and which tape a file is in), this has to come from the VAOEI and/or VTLE files.&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Tue, 08 Jan 2008 16:37:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123704#M87965</guid>
      <dc:creator>Guenther Froehlin</dc:creator>
      <dc:date>2008-01-08T16:37:13Z</dc:date>
    </item>
    <item>
      <title>Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123705#M87966</link>
      <description>Forgot to mention. There was a small utility/diagnostic program shipped with ABS which just lists the records in one of these files: SYS$SYSTEM:ABS$DUMP*.EXE. I don't remember if that was shipped with ABS V3.2, though.&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Tue, 08 Jan 2008 16:44:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123705#M87966</guid>
      <dc:creator>Guenther Froehlin</dc:creator>
      <dc:date>2008-01-08T16:44:08Z</dc:date>
    </item>
    <item>
      <title>Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123706#M87967</link>
      <description>Well, I talked to Guenther some, and the implementation is 'sub-optimal'.&lt;BR /&gt;Basically, for every master record, each slave file is opened, a lookup is done, and the file is closed.&lt;BR /&gt;That explains the high Buffered IO count as well as the high Direct IO count.&lt;BR /&gt;&lt;BR /&gt;During development an attempt was made to keep the files open, but for some sites that caused the process to run out of channels, and this was unfortunately (imho) dropped. With the benefit of hindsight, and knowing how much quota folks grab these days anyway, I would have preferred to just keep the files open and outright fail for the odd few sites (forcing a higher quota) rather than punish every user all the time. Oh well, too late now.&lt;BR /&gt;Considering the access pattern, even a modified LIFO file cache would have helped some, as opposed to FIFO, which would just chase its own tail.&lt;BR /&gt;&lt;BR /&gt;Considering those files are opened and closed all the time, the RMS local cache does nothing but eat memory and locks.&lt;BR /&gt;&lt;BR /&gt;One ATTEMPT I would make is to use RMS global buffers to keep the index buckets in memory. The attached macro source can do just that. Run it in a little command file, for example:&lt;BR /&gt;&lt;BR /&gt;$set rms/ind/buf=2&lt;BR /&gt;$run global_buffers&lt;BR /&gt;MACH_ARCHIVE_2006_MCA176_VAOEI.DAT;1 1063168&lt;BR /&gt;MACH_ARCHIVE_2006_MCA178_VAOEI.DAT;1 777984&lt;BR /&gt;MACH_ARCHIVE_2006_MCA181_VAOEI.DAT;1 795904&lt;BR /&gt;MACH_ARCHIVE_2006_MCA183_VAOEI.DAT;1 795904&lt;BR /&gt;MACH_ARCHIVE_2006_MCA185_VAOEI.DAT;1 888320&lt;BR /&gt;MACH_ARCHIVE_2006_MCA195_VAOEI.DAT;1 2470400&lt;BR /&gt;MACH_ARCHIVE_2006_MCA199_VAOEI.DAT;1 842752&lt;BR /&gt;MACH_ARCHIVE_2006_MCA205_VAOEI.DAT;1 2595840&lt;BR /&gt;MACH_ARCHIVE_2006_VAOE.DAT;1 40110080&lt;BR /&gt;MACH_ARCHIVE_2006_VTLE.DAT;1 512&lt;BR /&gt;$exit&lt;BR /&gt;&lt;BR /&gt;I only write 'attempt' because the XFC is likely to be caching everything already, avoiding the actual physical IOs.&lt;BR /&gt;The global buffers should avoid most direct IO (although RMS will still read the prologue and area descriptor into a local buffer). It may or may not help.&lt;BR /&gt;&lt;BR /&gt;Finally, it is worth your while to convert (stable) ABS AOEI repositories. Due to bucket splits you can expect to save some 30% of the space, and you can likely drop an index level (or 2 in extreme cases). After what I learned today, I would convert with the index (no compression) in Area 0 and a larger index bucket size: for example 16 for files under 2M blocks, and 24 or 32 for files over that 1GB mark. The data buckets can go in Area 1 with a smaller bucket size (8, or even 4).&lt;BR /&gt;&lt;BR /&gt;Good luck!&lt;BR /&gt;&lt;BR /&gt;Hein van den Heuvel (at gmail dot com)&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;</description>
      <pubDate>Tue, 08 Jan 2008 19:30:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123706#M87967</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-01-08T19:30:56Z</dc:date>
    </item>
    <item>
      <title>Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123707#M87968</link>
      <description>[sorry for the re-reply, folks.]&lt;BR /&gt;&lt;BR /&gt;Oh, in case it is not obvious, that script should be run just before a listing is requested. It will just open the files and hibernate. Kill it (STOP/ID, DEL/ENTR, ^Y, whatever) when done.&lt;BR /&gt;&lt;BR /&gt;Also, with the global buffers in place I would run with SET RMS/IND/BUF=4 or so.&lt;BR /&gt;&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Tue, 08 Jan 2008 20:06:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123707#M87968</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-01-08T20:06:00Z</dc:date>
    </item>
    <item>
      <title>Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123708#M87969</link>
      <description>Thanks for all your inputs, Hein and Guenther.&lt;BR /&gt;&lt;BR /&gt;As mentioned in my first post, "$abs lookup/brief" for 3.5 years of archive files would take about ½ a year to list them all.&lt;BR /&gt;&lt;BR /&gt;But following Guenther's hint to use abs$system:ABS$DUMP_AOE did almost the same job, and took only a couple of days.&lt;BR /&gt;&lt;BR /&gt;/Per</description>
      <pubDate>Wed, 16 Jan 2008 12:26:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/absv3-2-273-on-openvms-v7-2-2-performance-problem/m-p/4123708#M87969</guid>
      <dc:creator>Per Hvid at Mach</dc:creator>
      <dc:date>2008-01-16T12:26:32Z</dc:date>
    </item>
  </channel>
</rss>

