<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: ZIP performance in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197212#M26752</link>
    <description>I didn't expect any difference up front without the /MOVE, but "hoping" for some relief with the reverse sorted list ... none so far but it's early in the game.  Yes all the files are in one directory.&lt;BR /&gt;&lt;BR /&gt;The next attempt will be to split the files into 10,000 file chunk subdirectories as suggested by Hein.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Art</description>
    <pubDate>Thu, 03 Sep 2009 18:48:10 GMT</pubDate>
    <dc:creator>Art Wiens</dc:creator>
    <dc:date>2009-09-03T18:48:10Z</dc:date>
    <item>
      <title>ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197194#M26734</link>
      <description>We have an "out of control"/badly thought out process which has been producing thousands (many 10's of thousands!) of invoice files (relatively small files 10 - 20 blocks on a disk with a large cluster size) for quite some time.  I have been using ZIP_CLI (Zip 2.32) with a /BATCH=listoffiles.txt method to clean them up.  It takes a phenomenal amount of time and resources (Alpha 800 5/500 512MB VMS v7.2-2).  For example this was for ~80,000 files:&lt;BR /&gt;&lt;BR /&gt;  Accounting information:&lt;BR /&gt;  Buffered I/O count:            1315155      Peak working set size:      82640&lt;BR /&gt;  Direct I/O count:             49565437      Peak virtual size:         247648&lt;BR /&gt;  Page faults:                      6110      Mounted volumes:                0&lt;BR /&gt;  Charged CPU time:        0 11:34:52.42      Elapsed time:       1 05:31:52.67&lt;BR /&gt;&lt;BR /&gt;The actual command used was:&lt;BR /&gt;&lt;BR /&gt;$ zip_cli 2007_1.zip /batch=2007_TOZIP.LIS /move/keep/vms/nofull_path&lt;BR /&gt;&lt;BR /&gt;Is there any performance advantage to using wildcards for the input, with /SINCE and /BEFORE to select the files, instead of a /BATCH list file?  Or is there any way in general to do this more efficiently / expeditiously?&lt;BR /&gt;&lt;BR /&gt;Steven, I hope I have provided enough details to help your psychic powers along. ;-)&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Art</description>
      <pubDate>Thu, 03 Sep 2009 12:36:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197194#M26734</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2009-09-03T12:36:00Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197195#M26735</link>
      <description>Art,&lt;BR /&gt;&lt;BR /&gt;I presume that the intent is to ZIP and remove the files in one operation. &lt;BR /&gt;&lt;BR /&gt;Are you sure that the problem is ZIP and not the reorganization of the directories as the files are removed? In a similar vein, have you tried different orderings of the files in the "listoffiles"? (Hint: inverse ordering, per the discussions about delete optimization.)&lt;BR /&gt;&lt;BR /&gt;What is the speed of the ZIP without the /MOVE?&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 03 Sep 2009 12:59:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197195#M26735</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2009-09-03T12:59:54Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197196#M26736</link>
      <description>Of the 29 hours involved in this example, I'd say ~26 hours were related to ZIP and about 3 hours to removing them.  No, I haven't tried anything other than a natural list provided by DIR/COL=1 and removing the HEADER and TRAILER info.  In the past I have used a list with DIR/NOHEAD/NOTRAIL ... doesn't seem to make any difference, i.e. whether I'm in that directory or not.&lt;BR /&gt;&lt;BR /&gt;The ZIP process seems to take a lot of time up front; about midway through the ordeal it actually creates a temporary ZIP file Zxxxxxxx and starts adding files to it.&lt;BR /&gt;&lt;BR /&gt;Art</description>
      <pubDate>Thu, 03 Sep 2009 13:08:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197196#M26736</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2009-09-03T13:08:52Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197197#M26737</link>
      <description>How would I produce an inverse order list of files?</description>
      <pubDate>Thu, 03 Sep 2009 13:15:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197197#M26737</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2009-09-03T13:15:35Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197198#M26738</link>
      <description>Art,&lt;BR /&gt;&lt;BR /&gt;I was referring to the order of processing files, not which directory you (or the ZIP file) are in.&lt;BR /&gt;&lt;BR /&gt;My thinking was that the problem may not be ZIP, and may be related to the traversals and re-structuring of the directory during the deletes. &lt;BR /&gt;&lt;BR /&gt;For control purposes (although I admittedly am reluctant to suggest it), it would be useful to know the performance of a simple wildcard COPY *.* NL: and a DELETE *.*. If these take similar times, the problem is in the size of the directory, not ZIP.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Thu, 03 Sep 2009 13:17:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197198#M26738</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2009-09-03T13:17:21Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197199#M26739</link>
      <description>Art,&lt;BR /&gt;&lt;BR /&gt;To produce an inverse-ordered list of files (within a directory), produce the list using DIRECTORY/BRIEF/NOHEAD/NOTRAIL and use SORT to sort in DESCENDING order (HELP SORT).&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Thu, 03 Sep 2009 13:19:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197199#M26739</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2009-09-03T13:19:08Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197200#M26740</link>
      <description>"use SORT to sort in DESCENDING order "&lt;BR /&gt;&lt;BR /&gt;Duh!  Sorry, I'm still worn out after 29 hours of ZIP'ing ;-)&lt;BR /&gt;&lt;BR /&gt;I have more to do, I'll give that a whirl.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Art</description>
      <pubDate>Thu, 03 Sep 2009 13:27:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197200#M26740</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2009-09-03T13:27:43Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197201#M26741</link>
      <description>&amp;gt; [...] ZIP_CLI (Zip 2.32) [...]&lt;BR /&gt;&lt;BR /&gt;So someone really does use that CLI stuff.&lt;BR /&gt;Normally, I'd suggest trying the current&lt;BR /&gt;released version, 3.0, but I doubt that it&lt;BR /&gt;would help here.  (Might be fun, though.)&lt;BR /&gt;&lt;BR /&gt;&amp;gt; [...] /nofull_path&lt;BR /&gt;&lt;BR /&gt;Are all the files in one directory?  With&lt;BR /&gt;/move telling it to delete the files, if&lt;BR /&gt;they're all in one place, then the delete&lt;BR /&gt;operation itself might be very slow (and out&lt;BR /&gt;of my hands).  A test without /move might be&lt;BR /&gt;informative.  I'd need to think/look, but if&lt;BR /&gt;it deletes the files in the same order as it&lt;BR /&gt;adds them to the archive, then a&lt;BR /&gt;reverse-sorted list might help (less&lt;BR /&gt;reshuffling).&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Is there any performance advantage instead&lt;BR /&gt;&amp;gt; of using a /BATCH list file to using&lt;BR /&gt;&amp;gt; wildcards for the input and using /SINCE&lt;BR /&gt;&amp;gt; and /BEFORE to select the input?&lt;BR /&gt;&lt;BR /&gt;Knowing nothing, I wouldn't expect it to&lt;BR /&gt;matter.  (Only one way to find out...)&lt;BR /&gt;Using the list does give you control over&lt;BR /&gt;the order, so if that matters, then the list&lt;BR /&gt;might be better.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; [...] enough details [...]&lt;BR /&gt;&lt;BR /&gt;It's a good start, but without some&lt;BR /&gt;profiling, it's hard to guess where it's&lt;BR /&gt;spending its time.&lt;BR /&gt;&lt;BR /&gt;Adding /COMPRESSION = STORE ("-0") would&lt;BR /&gt;eliminate any CPU time spent doing&lt;BR /&gt;compression, but I doubt that that's the big&lt;BR /&gt;problem.&lt;BR /&gt;&lt;BR /&gt;Ah.  I'm submitting too slowly.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; The ZIP process seems to take a lot of&lt;BR /&gt;&amp;gt; time up front, about midway through the&lt;BR /&gt;&amp;gt; ordeal it actually creates a temporary ZIP&lt;BR /&gt;&amp;gt; file Zxxxxxxx and starts adding files to&lt;BR /&gt;&amp;gt; it.&lt;BR /&gt;&lt;BR /&gt;It does do some research on the files to be&lt;BR /&gt;archived before it starts to work.  I thought&lt;BR /&gt;that it was mostly checking for existence,&lt;BR /&gt;but there might be more to it.</description>
      <pubDate>Thu, 03 Sep 2009 13:29:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197201#M26741</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2009-09-03T13:29:32Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197202#M26742</link>
      <description>Steven,&lt;BR /&gt;&lt;BR /&gt;The processing of a directory holding 80,000 files implies a directory well in excess of the XQP caches.&lt;BR /&gt;&lt;BR /&gt;I did not ask Art, but another thing that comes to mind is ensuring that the XFC is enabled and sized appropriately to the task at hand.&lt;BR /&gt;&lt;BR /&gt;Just the directory searches on a directory of that size would likely be expensive.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Thu, 03 Sep 2009 13:56:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197202#M26742</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2009-09-03T13:56:34Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197203#M26743</link>
      <description>$ show mem/cache/full&lt;BR /&gt;              System Memory Resources on  3-SEP-2009 10:57:55.77&lt;BR /&gt;&lt;BR /&gt;Virtual I/O Cache&lt;BR /&gt;    Total Size (Kbytes)           3200    Read IO Count             1003983095&lt;BR /&gt;    Free Kbytes                      0    Read Hit Count             176445148&lt;BR /&gt;    Kbytes in Use                 3200    Read Hit Rate                    17%&lt;BR /&gt;    Write IO Bypassing Cache  23261386    Write IO Count              38318456&lt;BR /&gt;    Files Retained                  99    Read IO Bypassing Cache    275267547&lt;BR /&gt;&lt;BR /&gt;$ show sys/noproc&lt;BR /&gt;OpenVMS V7.2-2  on node xxxxxx   3-SEP-2009 10:58:17.64  Uptime  85 21:26:23&lt;BR /&gt;&lt;BR /&gt;As I said, there's only 512MB memory in this box.&lt;BR /&gt;&lt;BR /&gt;Art</description>
      <pubDate>Thu, 03 Sep 2009 13:59:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197203#M26743</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2009-09-03T13:59:47Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197204#M26744</link>
      <description>&amp;gt; [...] VMS v7.2-2 [...]&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Just the directory searches on a directory&lt;BR /&gt;&amp;gt; of that size would likely be expensive.&lt;BR /&gt;&lt;BR /&gt;The FAQ suggests that V7.2 is new enough to&lt;BR /&gt;get the boost to the directory cache stuff,&lt;BR /&gt;so a VMS upgrade might not help, either.&lt;BR /&gt;&lt;BR /&gt;A quick look at the code suggests that it's&lt;BR /&gt;doing (at least) a $PARSE on every name.&lt;BR /&gt;&lt;BR /&gt;Zip 3.0 may not relieve the misery, but it&lt;BR /&gt;does add a new message or two, like "Scanning&lt;BR /&gt;files ...", so that it doesn't look quite so&lt;BR /&gt;dead while it's scouting around.</description>
      <pubDate>Thu, 03 Sep 2009 14:14:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197204#M26744</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2009-09-03T14:14:44Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197205#M26745</link>
      <description>You're serious?  You're not punking us?  &lt;BR /&gt;&lt;BR /&gt;We're _really_ going to have a discussion about system performance and application tuning from this application on this AlphaServer 800 box with all of a half-gig of memory and slow SCSI disks?   Really? &lt;BR /&gt;&lt;BR /&gt;There are vendors offering a gigabyte of memory (4 DIMMs) for this box for US$120 or so.  You've got eight slots in this box.  So going from 512 to 2 GB is US$240 or so, and quite possibly less.  (I did only a very quick look for prices.)&lt;BR /&gt;&lt;BR /&gt;Faster disks?  I've been scrounging disks compatible with this vintage gear for small outlays.  Sticking a decent RAID controller and a shelf of disks into the box will get you better I/O speed. &lt;BR /&gt;&lt;BR /&gt;Or punt this box, and replace it with a faster Alpha or (likely cheaper) an Integrity.  I've seen some sweet AlphaServer DS10 boxes and even some AlphaServer ES45 boxes for less than US$1000, and Integrity boxes are cheap, too.  (Integrity has cheaper OpenVMS licenses.)  Or switch from Alpha hardware to one of the available Alpha emulations.&lt;BR /&gt;&lt;BR /&gt;Between the application design and the hardware and the OpenVMS configuration, tuning this box is likely wasting more money than upgrading or (better) replacing the box; even if you get double speed, you're still looking at 12 hours!   And if the hardware upgrade or the replacement or migration to Integrity or a switch to emulation is deemed unacceptable or too costly (starting at US$250 or so, and possibly less), then your management has made your decision for you.  Let it run for a day or three.</description>
      <pubDate>Thu, 03 Sep 2009 14:19:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197205#M26745</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-09-03T14:19:50Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197206#M26746</link>
      <description>I'm not asking for a tuning exercise on this vastly under-powered box, just if there's a "better" way to use ZIP than I am.  Believe me, I feel the pain of how "expensive" lookups are from having a bazillion files in a directory.  Trying to sort the initial mess into year subdirectories has been enormous.&lt;BR /&gt; &lt;BR /&gt;"Or punt this box, and replace it with a faster Alpha"&lt;BR /&gt;&lt;BR /&gt;This process is underway, albeit at glacial speeds, to ES47's w/8GB on VMS 8.3.  The application support folks are the resource bottleneck.  I've had the new clusters built for about two years now!&lt;BR /&gt;&lt;BR /&gt;And FYI, the disk is HSG80 based, not local SCSI, so that's a bit "faster".&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Art</description>
      <pubDate>Thu, 03 Sep 2009 14:29:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197206#M26746</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2009-09-03T14:29:41Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197207#M26747</link>
      <description>Hello Art,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; Of the 29 hours involved in this example,  I'd say ~26 hours were related to ZIP and about 3 hours to removing them. No, I haven't tried anything other than a &lt;BR /&gt;&lt;BR /&gt;Judging by the high buffered I/O, I suspect that it is really the directory slowing down ZIP.&lt;BR /&gt;&lt;BR /&gt;If you can RENAME 10,000-file chunks away from this directory, notably in reverse order, and then zip those up, you should see much better performance.&lt;BR /&gt;&lt;BR /&gt;You have to rename in reverse order to make the rename not take too long.&lt;BR /&gt;&lt;BR /&gt;Hein</description>
      <pubDate>Thu, 03 Sep 2009 14:44:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197207#M26747</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2009-09-03T14:44:47Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197208#M26748</link>
      <description>A good RAID controller (which can be had for around US$50) can do very well against an HSG80 box; I'd here guess you'd see somewhere between 7 MBps and maybe as much as 25 or 30 MBps, depending on the speeds of the disks behind the HSG80.   Various SCSI Ultra320 RAID controllers claim 320 MBps; you'll see a chunk of that, but not all of it.&lt;BR /&gt;&lt;BR /&gt;And unless you can really offload this box of most everything on it, then XFC is going to be running somewhere between shut off and very cache-constrained with what it can gather of the 0.5 GB of memory.  And I'm not sure there's anything that's going to help with the comparatively slow I/O path in V7.2-2 and particularly with gazillions of small files.&lt;BR /&gt;&lt;BR /&gt;I don't think you're going to make this box all that much faster through anything you can do in (only) software.  Spread the I/O load across spindles, etc.  The usual. &lt;BR /&gt;</description>
      <pubDate>Thu, 03 Sep 2009 15:02:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197208#M26748</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-09-03T15:02:06Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197209#M26749</link>
      <description>&amp;gt; You have to rename in reverse order to make the rename not take too long.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Would you not then pay nearly the same price for insertion into the new directory that you save during the removal from the old directory?</description>
      <pubDate>Thu, 03 Sep 2009 18:05:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197209#M26749</guid>
      <dc:creator>Jim_McKinney</dc:creator>
      <dc:date>2009-09-03T18:05:11Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197210#M26750</link>
      <description>Just fired off another one with a reverse sorted list (59,889 files) and no /MOVE switch ... it appears to be behaving similarly so far.  MONI MODE shows ~60% Kernel Mode, 10% Interrupt State and 30% Idle Time (it's a batch job running at prio 3; gotta leave some cycles for users ;-).  The job only has the list file open so far (and the command proc and batch log file).&lt;BR /&gt;&lt;BR /&gt;I added a second com file to delete the files at the end.&lt;BR /&gt;&lt;BR /&gt;Talk to you tomorrow ;-)&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Art</description>
      <pubDate>Thu, 03 Sep 2009 18:12:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197210#M26750</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2009-09-03T18:12:09Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197211#M26751</link>
      <description>&amp;gt; [...] it appears to be behaving similarly&lt;BR /&gt;&amp;gt; so far.&lt;BR /&gt;&lt;BR /&gt;Yeah, well, all the "/MOVE" file deletion is&lt;BR /&gt;done at the end of the job, after the archive&lt;BR /&gt;creation is known to be successful, and I&lt;BR /&gt;wouldn't expect any non-delete activity to be&lt;BR /&gt;affected by the order.  So I'm not amazed.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; The job only has the list file open so far&lt;BR /&gt;&amp;gt; [...]&lt;BR /&gt;&lt;BR /&gt;It's "Scanning files ...".  Zip 3.0 says so:&lt;BR /&gt;&lt;BR /&gt;ALP $ zip3l fred.zip dka0:[utility.source.zip.zip31*...]*.*&lt;BR /&gt;Scanning files ................ ......&lt;BR /&gt;  adding: [.utility.source.zip.zip31a.aosvs] (stored 0%)&lt;BR /&gt;[...]&lt;BR /&gt;&lt;BR /&gt;I'd bet that the disk is busy, even with no&lt;BR /&gt;open files.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Are all the files in one directory?&lt;BR /&gt;&lt;BR /&gt;Did we ever get an answer to that?</description>
      <pubDate>Thu, 03 Sep 2009 18:43:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197211#M26751</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2009-09-03T18:43:41Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197212#M26752</link>
      <description>I didn't expect any difference up front without the /MOVE, but "hoping" for some relief with the reverse sorted list ... none so far but it's early in the game.  Yes all the files are in one directory.&lt;BR /&gt;&lt;BR /&gt;The next attempt will be to split the files into 10,000 file chunk subdirectories as suggested by Hein.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Art</description>
      <pubDate>Thu, 03 Sep 2009 18:48:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197212#M26752</guid>
      <dc:creator>Art Wiens</dc:creator>
      <dc:date>2009-09-03T18:48:10Z</dc:date>
    </item>
    <item>
      <title>Re: ZIP performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197213#M26753</link>
      <description>With the files involved stored out on a (fossil) HSG80, find a (less-fossil) AlphaServer box with some spare cycles, MOUNT the disk, and run your zip from there.  It'll still hammer on the disk, but...</description>
      <pubDate>Thu, 03 Sep 2009 19:01:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/zip-performance/m-p/5197213#M26753</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-09-03T19:01:56Z</dc:date>
    </item>
  </channel>
</rss>

