<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Compression of very big files in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809506#M9916</link>
    <description>Steven,&lt;BR /&gt;&lt;BR /&gt;Standard link.&lt;BR /&gt;&lt;BR /&gt;6 seconds on a total of 9min18.&lt;BR /&gt;40 seconds on 14min40.&lt;BR /&gt;&lt;BR /&gt;I did a pre-zip to fill the cache with (part of) the file.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
    <pubDate>Mon, 03 Jul 2006 08:50:10 GMT</pubDate>
    <dc:creator>Wim Van den Wyngaert</dc:creator>
    <dc:date>2006-07-03T08:50:10Z</dc:date>
    <item>
      <title>Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809481#M9891</link>
      <description>We currently keep Sybase database dumps on disk in zip archives. The zip archive is about 20% of the size of the dumps.&lt;BR /&gt;&lt;BR /&gt;The zip however consumes lots of CPU (even with /level=1, about 60 sec/250 MB).&lt;BR /&gt;&lt;BR /&gt;Does anyone have a solution to compress with (a lot) less CPU consumption?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 21 Jun 2006 04:20:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809481#M9891</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-06-21T04:20:29Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809482#M9892</link>
      <description>Which version of zip?</description>
      <pubDate>Wed, 21 Jun 2006 04:22:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809482#M9892</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2006-06-21T04:22:23Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809483#M9893</link>
      <description>2.1</description>
      <pubDate>Wed, 21 Jun 2006 04:23:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809483#M9893</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-06-21T04:23:01Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809484#M9894</link>
      <description>ZIP V2.31 is current.&lt;BR /&gt;&lt;BR /&gt;We ZIP RDB backup files and I found that BZIP2 compresses better and uses fewer resources (no hard data available at the moment).&lt;BR /&gt;&lt;BR /&gt;regards Kalle</description>
      <pubDate>Wed, 21 Jun 2006 04:25:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809484#M9894</guid>
      <dc:creator>Karl Rohwedder</dc:creator>
      <dc:date>2006-06-21T04:25:00Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809485#M9895</link>
      <description>Correction 2.3.</description>
      <pubDate>Wed, 21 Jun 2006 04:28:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809485#M9895</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-06-21T04:28:53Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809486#M9896</link>
      <description>There is a beta version of a later Zip out there somewhere, or bzip2 is on the Freeware:&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/freeware/freeware70/000tools/alpha_images/bzip2.exe" target="_blank"&gt;http://h71000.www7.hp.com/freeware/freeware70/000tools/alpha_images/bzip2.exe&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 21 Jun 2006 04:37:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809486#M9896</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2006-06-21T04:37:31Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809487#M9897</link>
      <description>Note that BZIP2 on the Freeware is V1.0.1, whereas the version I use is 1.0.3a.&lt;BR /&gt;On the site &lt;A href="http://antinode.org/dec/sw/bzip2.html" target="_blank"&gt;http://antinode.org/dec/sw/bzip2.html&lt;/A&gt;&lt;BR /&gt;there is already a version 1.0.3b, but I haven't used that one.&lt;BR /&gt;&lt;BR /&gt;regards Kalle</description>
      <pubDate>Wed, 21 Jun 2006 04:45:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809487#M9897</guid>
      <dc:creator>Karl Rohwedder</dc:creator>
      <dc:date>2006-06-21T04:45:30Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809488#M9898</link>
      <description>Just tried 1.0.2 on a variable-record dump of Sybase:&lt;BR /&gt;&lt;BR /&gt;rms-f-irc illegal record encountered; vbn or record number = !ul</description>
      <pubDate>Wed, 21 Jun 2006 04:48:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809488#M9898</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-06-21T04:48:56Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809489#M9899</link>
      <description>If I change the file to fix,512 it works (bug or feature?). rfm=stm caused "record too large for user's buffer".&lt;BR /&gt;&lt;BR /&gt;My reference file of 320 MB is compressed in 518 CPU secs, while it takes 81 secs with zip/level=1 (178 without /level, which equals =5).&lt;BR /&gt;&lt;BR /&gt;So, not what I was hoping for. Or is there a problem?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 21 Jun 2006 05:44:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809489#M9899</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-06-21T05:44:51Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809490#M9900</link>
      <description>Just gave it a try (using a small RDB backup file). BZIP2 uses considerably more CPU but less I/O, and produces a far better result in terms of file size (see attached text file).&lt;BR /&gt;I used the 1.0.3b version.&lt;BR /&gt;&lt;BR /&gt;regards Kalle</description>
      <pubDate>Wed, 21 Jun 2006 05:47:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809490#M9900</guid>
      <dc:creator>Karl Rohwedder</dc:creator>
      <dc:date>2006-06-21T05:47:07Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809491#M9901</link>
      <description>I can confirm that the compression result is best with gzip (almost 10% better than zip/lev=5; maybe level 9 could beat gzip).&lt;BR /&gt;&lt;BR /&gt;But CPU is the problem. So it's a no-go.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 21 Jun 2006 05:49:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809491#M9901</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-06-21T05:49:23Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809492#M9902</link>
      <description>BTW: the CPU-friendly /level=1 results in a 20-25% bigger file than gzip. But since the result is still about 1/5th of the original file, it doesn't matter much.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 21 Jun 2006 06:00:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809492#M9902</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-06-21T06:00:25Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809493#M9903</link>
      <description>Stupid but useful idea:&lt;BR /&gt;Change the files to fix,512; export the directory via NFS, mount it on a Linux machine and run BZIP2 there. BZIP2 does not use much I/O, and you spend the Linux machine's CPU time, not the VMS machine's.</description>
      <pubDate>Wed, 21 Jun 2006 07:15:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809493#M9903</guid>
      <dc:creator>Vladimir Fabecic</dc:creator>
      <dc:date>2006-06-21T07:15:48Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809494#M9904</link>
      <description>What's "very big"?  250MB?  (Be careful not&lt;BR /&gt;to confuse this with "large", which tends to&lt;BR /&gt;imply "bigger than 2GB".)  (("That's not a&lt;BR /&gt;knife [...]"))&lt;BR /&gt;&lt;BR /&gt;Zip 2.31 has some VMS-specific I/O&lt;BR /&gt;improvements, but I would not expect it to&lt;BR /&gt;differ much in CPU time from Zip 2.3.  It&lt;BR /&gt;might save some _real_ time, however.  For a&lt;BR /&gt;variety of reasons, I would not use anything&lt;BR /&gt;older than Zip 2.31 (or UnZip 5.52).&lt;BR /&gt;&lt;BR /&gt;I didn't do anything to the bzip2 code to&lt;BR /&gt;accommodate any non-UNIX-like file/record&lt;BR /&gt;formats, so, as it says on the Web page:&lt;BR /&gt;--------&lt;BR /&gt;BZIP2 is a UNIX-oriented utility, and as&lt;BR /&gt;such, it has little hope of dealing well&lt;BR /&gt;with RMS files whose record format is&lt;BR /&gt;anything other than Stream_LF.&lt;BR /&gt;&lt;BR /&gt;For a more versatile compressor-archiver&lt;BR /&gt;with greater RMS capability, see the&lt;BR /&gt;Info-ZIP Home Page. &lt;BR /&gt;--------&lt;BR /&gt;&lt;BR /&gt;I suppose that it should say "or fixed-512",&lt;BR /&gt;too, but the program is not expecting to&lt;BR /&gt;deal with RMS records of any type, and I have&lt;BR /&gt;no plans to change this.  (Jump right in, if&lt;BR /&gt;you wish.)  The release notes describe the&lt;BR /&gt;difference between my bzip2 1.0.3a and&lt;BR /&gt;1.0.3b, and it's I/O-related, not&lt;BR /&gt;CPU-related.&lt;BR /&gt;&lt;BR /&gt;When Zip 3.0 arrives (hold your breath), it&lt;BR /&gt;is expected to offer bzip2 compression&lt;BR /&gt;(optional, instead of the default "deflate"&lt;BR /&gt;method) in a Zip archive, but this is not in&lt;BR /&gt;the latest beta kit (3.0e).  (It'll be using&lt;BR /&gt;an external bzip2 object library, so you'll&lt;BR /&gt;need something like my bzip2 kit to enable&lt;BR /&gt;the feature.
Similar for UnZip 6.0, of&lt;BR /&gt;course.)&lt;BR /&gt;&lt;BR /&gt;I haven't ever tried it, but it should be&lt;BR /&gt;possible to build [Un]Zip with some fancy C&lt;BR /&gt;compiler options, like /ARCHITECTURE and&lt;BR /&gt;/OPTIMIZE=TUNE, which might help the&lt;BR /&gt;CPU-bound parts.  In Zip 2.x, you'd probably&lt;BR /&gt;need to edit the builder to do this.  (In Zip&lt;BR /&gt;3.x, it can be done from the command line.)&lt;BR /&gt;Test results from some adventurous user would&lt;BR /&gt;be received with interest.</description>
      <pubDate>Wed, 21 Jun 2006 10:48:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809494#M9904</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2006-06-21T10:48:27Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809495#M9905</link>
      <description>It would be nice if zip had a "very light" mode.&lt;BR /&gt;&lt;BR /&gt;The file I mentioned is just a sample. But we don't go over 2 GB (we stay just under it).&lt;BR /&gt;&lt;BR /&gt;Not going to upgrade to win a few %. And it needs to be V6.2 compatible.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 21 Jun 2006 10:52:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809495#M9905</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-06-21T10:52:27Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809496#M9906</link>
      <description>&amp;gt; It would be nice if zip had a "very light"&lt;BR /&gt;&amp;gt; mode.&lt;BR /&gt;&lt;BR /&gt;You mean less compression than "-1"?  There&lt;BR /&gt;_is_ "-0", but that (no compression) may be&lt;BR /&gt;less than you'd like.  I don't recall a lot&lt;BR /&gt;of demand for this, but I could ask around.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Not going to upgrade to win a few %.&lt;BR /&gt;&lt;BR /&gt;The I/O improvements since 2.3/5.51 are&lt;BR /&gt;pretty big, in my opinion.  I can't say how&lt;BR /&gt;much they would help you.  (Personally, I&lt;BR /&gt;like the more non-VMS-compatible "-V"&lt;BR /&gt;archives, and command-line case preservation&lt;BR /&gt;(non-VAX), too.)&lt;BR /&gt;&lt;BR /&gt;&amp;gt; And need to be 6.2 compatible.&lt;BR /&gt;&lt;BR /&gt;Was that VAX or Alpha?&lt;BR /&gt;&lt;BR /&gt;We're (I'm) still testing as far back as VMS&lt;BR /&gt;V5.4 (VAX).  I have V6.2 (VAX) on a system&lt;BR /&gt;disk here, but I can't remember if I've tried&lt;BR /&gt;it lately.  (Someone else may test it on&lt;BR /&gt;something even older.)  Be sure to complain&lt;BR /&gt;if you have any problems.</description>
      <pubDate>Wed, 21 Jun 2006 11:09:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809496#M9906</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2006-06-21T11:09:06Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809497#M9907</link>
      <description>Wim, dare I ask why CPU consumption is a problem? We perform lots of zipping using gzip, but those activities are scheduled outside of business hours. Sure, CPU is high, but so what? We also use RAM disk techniques which see the CPUs running at a sustained 100% for hours.&lt;BR /&gt;&lt;BR /&gt;Years ago I worked at a site which had a fully integrated chargeback system tied into all processing. Users were billed based on CPU, I/O and other items. System utilization cost users real money and really encouraged good IT.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 21 Jun 2006 19:52:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809497#M9907</guid>
      <dc:creator>Thomas Ritter</dc:creator>
      <dc:date>2006-06-21T19:52:36Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809498#M9908</link>
      <description>&amp;gt;&amp;gt; If I change the file to fix,512 it works (bug or feature ?). rfm=stm caused record too large for users' buffer. &lt;BR /&gt;&lt;BR /&gt;Hi Wim,&lt;BR /&gt;&lt;BR /&gt;I think we have been here before, but I'll repeat it nonetheless in defense of the xyz-ZIPs and/or any other non-Sybase tool which may croak on those files.&lt;BR /&gt;&lt;BR /&gt;A file with attributes rfm=stm is expected to have CR/LF as record terminators, and leading binary zeroes in records may be silently ignored. Hardly a 'flexible' format for supposedly binary files.&lt;BR /&gt;&lt;BR /&gt;If a file does not have those attributes, yet is labelled as such, then applications can and will fall over. Rightly so!&lt;BR /&gt;&lt;BR /&gt;Labelling the file RFM=FIX, MRS=512 is likely to be much more appropriate and 'benign' for most applications.&lt;BR /&gt;&lt;BR /&gt;IMHO those binary files should really be labelled RFM=UDF, but unfortunately that upsets some standard tools.&lt;BR /&gt;&lt;BR /&gt;Kind regards,&lt;BR /&gt;&lt;BR /&gt;fwiw,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/731FINAL/4523/4523pro_007.html#rms_record_format_field" target="_blank"&gt;http://h71000.www7.hp.com/doc/731FINAL/4523/4523pro_007.html#rms_record_format_field&lt;/A&gt;&lt;BR /&gt;"FAB$C_STM&lt;BR /&gt;Indicates stream record format. Records are delimited by FF, VT, LF, or CR LF, and all leading zeros are ignored. This format applies to sequential files only and cannot be used with the block spanning option."</description>
      <pubDate>Wed, 21 Jun 2006 22:13:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809498#M9908</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2006-06-21T22:13:08Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809499#M9909</link>
      <description>Thomas,&lt;BR /&gt;&lt;BR /&gt;CPU is a problem because we have a lot to compress and have no real non-business hours. We have one continent live all the time while the other continents are doing dumps, compression, etc.&lt;BR /&gt;To be more precise: CPU is not the problem, wall time is.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Thu, 22 Jun 2006 01:02:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809499#M9909</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-06-22T01:02:18Z</dc:date>
    </item>
    <item>
      <title>Re: Compression of very big files</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809500#M9910</link>
      <description>&amp;gt; [...] cpu is not the problem, wall time is.&lt;BR /&gt;&lt;BR /&gt;If so, I'd definitely look at Zip 2.31, as&lt;BR /&gt;its I/O improvements may actually help.&lt;BR /&gt;&lt;BR /&gt;With some SET RMS_DEFAULT action, you can&lt;BR /&gt;help Zip 2.3, but you need 2.31 to get the&lt;BR /&gt;SQO bit set to make highwater marking less&lt;BR /&gt;painful, and to avoid _copying_ the temporary&lt;BR /&gt;output file, if your (output) archive is on&lt;BR /&gt;a different disk from your current default&lt;BR /&gt;device+directory.  (An explicit "-b" option&lt;BR /&gt;can work around that one in 2.3.)&lt;BR /&gt;&lt;BR /&gt;Have I mentioned that I think that Zip 2.31&lt;BR /&gt;is generally better than 2.3?</description>
      <pubDate>Thu, 22 Jun 2006 01:26:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/compression-of-very-big-files/m-p/3809500#M9910</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2006-06-22T01:26:28Z</dc:date>
    </item>
  </channel>
</rss>

