<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Any compression tool available which could use multiple CPU in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432498#M205950</link>
    <description>I have one; I wrote it myself and posted it to the original thread "favourite sysadmin scripts you always keep around" in August 2002, together with a companion decompression program.&lt;BR /&gt;It is fixed at 4 threads, but that seems to be the optimum - even on my 12-CPU rp8400, where it consumes over 1000% (yes, one thousand) CPU in top.&lt;BR /&gt;It uses the zlib library, so you need that installed.  It runs at between 2 and 3 times the speed of compress.&lt;BR /&gt;The decompression is single-stream, but that has been shown to be quickest.&lt;BR /&gt;The other thing is that it is also 32-bit (i.e. a 2GB limit where you do not redirect stdout), but as has been pointed out, if you use it to read a pipe and merely append or redirect stdout using |, &amp;gt; or &amp;gt;&amp;gt;, the 2GB limit does not apply.&lt;BR /&gt;Alternatively, you can modify the fopen() call in the code to be fopen64() on the output file.  The worst thing about it is that it isn't compatible with compress/gzip or bzip2.&lt;BR /&gt;&lt;BR /&gt;Here is the link:&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/parseCurl.do?CURL=%2Fcm%2FQuestionAnswer%2F1%2C%2C0x026250011d20d6118ff40090279cd0f9%2C00.html&amp;amp;admit=716493758+1101807043917+28353475" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/parseCurl.do?CURL=%2Fcm%2FQuestionAnswer%2F1%2C%2C0x026250011d20d6118ff40090279cd0f9%2C00.html&amp;amp;admit=716493758+1101807043917+28353475&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Make sure you test it though; it isn't commercial and comes with no warranty(!)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Tue, 30 Nov 2004 04:46:13 GMT</pubDate>
    <dc:creator>Steve Lewis</dc:creator>
    <dc:date>2004-11-30T04:46:13Z</dc:date>
    <item>
      <title>Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432491#M205943</link>
      <description>I have a requirement to transfer a 35GB data file every day, and I am thinking of compressing it so that I can save transmission time. Currently the time saved in transmission is nullified by the time it takes to do the compression. I have multi-CPU servers available and am thinking of using threading to cut the compression time.&lt;BR /&gt;&lt;BR /&gt;I found pbzip2 (parallel bzip2), which has multiple-CPU support, but it has a 2GB file limit.&lt;BR /&gt;&lt;BR /&gt;Does anyone have any recommendations?&lt;BR /&gt;&lt;BR /&gt;Thanks in advance&lt;BR /&gt;Madhu</description>
      <pubDate>Mon, 29 Nov 2004 17:39:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432491#M205943</guid>
      <dc:creator>Madhu Kangara</dc:creator>
      <dc:date>2004-11-29T17:39:53Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432492#M205944</link>
      <description>I believe bzip2 wouldn't be a good choice anyway: it compresses really well, but it is a little bit slow.&lt;BR /&gt;Maybe you should use compress. It compresses less, but faster.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Fred&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Nov 2004 18:54:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432492#M205944</guid>
      <dc:creator>Fred Ruffet</dc:creator>
      <dc:date>2004-11-29T18:54:04Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432493#M205945</link>
      <description>None of these tools multi-threads or multi-processes, for a very good reason.&lt;BR /&gt;&lt;BR /&gt;You can design your script to run two zip jobs at the same time:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;while true&lt;BR /&gt;do&lt;BR /&gt;control=$(ps -ef | grep '[z]ipsomething' | wc -l)&lt;BR /&gt;&lt;BR /&gt;if [ "$control" -ge 2 ]&lt;BR /&gt;then&lt;BR /&gt;   sleep 30&lt;BR /&gt;else&lt;BR /&gt;  zipsomething &amp;amp;&lt;BR /&gt;  zipsomething &amp;amp;&lt;BR /&gt;fi&lt;BR /&gt;done&lt;BR /&gt;&lt;BR /&gt;(The bracket in the grep pattern keeps grep from counting itself.) That will at least ensure that two zip processes are running. You'll have to be careful building the zipsomething command line so the same file is not zipped by two processes.&lt;BR /&gt;&lt;BR /&gt;As for the tool, gzip is good and can go up to 8GB with patching.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 29 Nov 2004 19:02:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432493#M205945</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2004-11-29T19:02:11Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432494#M205946</link>
      <description>I liked the way pbzip2 worked across multiple CPUs, and I could specify how many CPUs to use.&lt;BR /&gt;Here is the link &lt;A href="http://compression.ca/pbzip2/" target="_blank"&gt;http://compression.ca/pbzip2/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;But as I said earlier, it has a 2GB file size limit, whereas bzip2 does not have that limit in the version I use.&lt;BR /&gt;&lt;BR /&gt;In my case I have a single 35GB file.</description>
      <pubDate>Mon, 29 Nov 2004 19:13:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432494#M205946</guid>
      <dc:creator>Madhu Kangara</dc:creator>
      <dc:date>2004-11-29T19:13:31Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432495#M205947</link>
      <description>SEP,&lt;BR /&gt;&lt;BR /&gt;You're right for multiple files, but here there is only one file. So what is really needed is multi-threading, and that does not exist, as far as I know.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Fred&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Nov 2004 19:15:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432495#M205947</guid>
      <dc:creator>Fred Ruffet</dc:creator>
      <dc:date>2004-11-29T19:15:42Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432496#M205948</link>
      <description>Here is an unfinished thought (because it is time to go to sleep!)....&lt;BR /&gt;&lt;BR /&gt;'split' the large single file into fifo pipes.&lt;BR /&gt;Launch a compress for each pipe.&lt;BR /&gt;Start transfers as the compress jobs finish.&lt;BR /&gt;Uncompress, appending to a single file on the other side.&lt;BR /&gt;The uncompress, much like the transfer, would be single-stream, but that generally takes less time than compress.&lt;BR /&gt;&lt;BR /&gt;Here is a perl script that was supposed to split in parallel:&lt;BR /&gt;&lt;BR /&gt;$file = shift @ARGV or die "Please provide file to split and # chunks";&lt;BR /&gt;$chunks = shift @ARGV;&lt;BR /&gt;$chunks = 4 unless $chunks;&lt;BR /&gt;$chunks = 26 if $chunks &amp;gt; 26;&lt;BR /&gt;$total = -s $file;&lt;BR /&gt;die "puny file" unless ($total &amp;gt; 10000000);&lt;BR /&gt;$name = "xxx_";&lt;BR /&gt;$chunk = int( $total / $chunks);&lt;BR /&gt;$i = 0;&lt;BR /&gt;while ($i &amp;lt; $chunks) {&lt;BR /&gt;  $command = sprintf( "mknod %sa%c p", $name, ord("a") + $i++ );&lt;BR /&gt;  printf "-- $command\n";&lt;BR /&gt;  system ($command);&lt;BR /&gt;  }&lt;BR /&gt;$command = "split -b $chunk $file $name";&lt;BR /&gt;$i = 0;&lt;BR /&gt;while ($i &amp;lt;= $chunks) {&lt;BR /&gt;        print "-- $command\n";&lt;BR /&gt;        exec ($command) unless fork();&lt;BR /&gt;        $letter = ord("a") + $i++;&lt;BR /&gt;        $command = sprintf( "cat %sa%c | gzip &amp;gt; %sa%c.gz", $name, $letter, $name, $letter  );&lt;BR /&gt;        }&lt;BR /&gt;$pid = 1;&lt;BR /&gt;$pid = wait() while ($pid &amp;gt; 0);&lt;BR /&gt;&lt;BR /&gt;First problem was that gzip does not eat from fifo's... but cats do!&lt;BR /&gt;Biggest problem is that only one zip is going at a time, because split is of course only writing one pipe at a time, waiting for the result to be picked up.&lt;BR /&gt;One silly fix for that is to split into real intermediate files and zip those. Yuck.&lt;BR /&gt;I think the better solution would be for the perl script to fork multiple reader streams which each seek to their own start point and then read (binmode) and feed data into their own gzips.&lt;BR /&gt;&lt;BR /&gt;On the other side, I think I'd go for a single unzip to combine the files. I don't think it will work to output into a single file from multiple streams after individual seeks. Then again, I suppose that would work, notably when starting block-aligned.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Nov 2004 01:22:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432496#M205948</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2004-11-30T01:22:24Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432497#M205949</link>
      <description>How can compression programs suffer from file size limits when used in pipes?&lt;BR /&gt;&lt;BR /&gt;If you compress a single file, that limit is a burden, and for ages gzip had that problem. It was easy to overcome using more recent versions from GNU.&lt;BR /&gt;&lt;BR /&gt;bzip2, just like gzip, has compression-rate command-line parameters (-1 .. -9) that influence the CPU usage, but they do not control the number of CPUs involved, so your question is a very good one.&lt;BR /&gt;&lt;BR /&gt;The option of having a script take care of running two (or more) compressions at the same time is good, but why would pbzip2 not work on unlimited file sizes when in streaming mode?&lt;BR /&gt;&lt;BR /&gt;# pbzip2 -options &amp;lt; very_very_large_file &amp;gt; compressed_file&lt;BR /&gt;&lt;BR /&gt;and why use a compressed file anyway?&lt;BR /&gt;&lt;BR /&gt;# pbzip2 -options &lt;FILE&gt;&lt;/FILE&gt;&lt;BR /&gt;using dd as a buffer&lt;BR /&gt;&lt;BR /&gt;Another option would be to compile pbzip2 from source yourself, removing the file limit:&lt;BR /&gt;&lt;A href="http://compression.ca/pbzip2/" target="_blank"&gt;http://compression.ca/pbzip2/&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://compression.ca/pbzip2/pbzip2-0.8.tar.gz" target="_blank"&gt;http://compression.ca/pbzip2/pbzip2-0.8.tar.gz&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Enjoy, Have FUN! H.Merijn</description>
      <pubDate>Tue, 30 Nov 2004 02:04:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432497#M205949</guid>
      <dc:creator>H.Merijn Brand (procura</dc:creator>
      <dc:date>2004-11-30T02:04:23Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432498#M205950</link>
      <description>I have one; I wrote it myself and posted it to the original thread "favourite sysadmin scripts you always keep around" in August 2002, together with a companion decompression program.&lt;BR /&gt;It is fixed at 4 threads, but that seems to be the optimum - even on my 12-CPU rp8400, where it consumes over 1000% (yes, one thousand) CPU in top.&lt;BR /&gt;It uses the zlib library, so you need that installed.  It runs at between 2 and 3 times the speed of compress.&lt;BR /&gt;The decompression is single-stream, but that has been shown to be quickest.&lt;BR /&gt;The other thing is that it is also 32-bit (i.e. a 2GB limit where you do not redirect stdout), but as has been pointed out, if you use it to read a pipe and merely append or redirect stdout using |, &amp;gt; or &amp;gt;&amp;gt;, the 2GB limit does not apply.&lt;BR /&gt;Alternatively, you can modify the fopen() call in the code to be fopen64() on the output file.  The worst thing about it is that it isn't compatible with compress/gzip or bzip2.&lt;BR /&gt;&lt;BR /&gt;Here is the link:&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/parseCurl.do?CURL=%2Fcm%2FQuestionAnswer%2F1%2C%2C0x026250011d20d6118ff40090279cd0f9%2C00.html&amp;amp;admit=716493758+1101807043917+28353475" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/parseCurl.do?CURL=%2Fcm%2FQuestionAnswer%2F1%2C%2C0x026250011d20d6118ff40090279cd0f9%2C00.html&amp;amp;admit=716493758+1101807043917+28353475&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Make sure you test it though; it isn't commercial and comes with no warranty(!)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Nov 2004 04:46:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432498#M205950</guid>
      <dc:creator>Steve Lewis</dc:creator>
      <dc:date>2004-11-30T04:46:13Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432499#M205951</link>
      <description>In the meantime I received a patch for pbzip2 from its developer to fix the 2GB file size limit. I will test that and update the status.&lt;BR /&gt;&lt;BR /&gt;I liked some of the comments posted here and will give points to those.&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Nov 2004 12:38:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432499#M205951</guid>
      <dc:creator>Madhu Kangara</dc:creator>
      <dc:date>2004-11-30T12:38:30Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432500#M205952</link>
      <description>I agree with you, Fred. I merely gave a multi-tasking methodology. None of these tools multi-threads, and that's for a very good reason.&lt;BR /&gt;&lt;BR /&gt;Reliability is more important than speed.&lt;BR /&gt;&lt;BR /&gt;Nice hat, Fred.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Tue, 30 Nov 2004 12:47:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432500#M205952</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2004-11-30T12:47:52Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432501#M205953</link>
      <description>As replied earlier... you may want to check out solutions that keep all the bits in the air.&lt;BR /&gt;&lt;BR /&gt;Anyway, over lunch I poked some more at a perl script to split a file and compress the parts, and it now works fine. (after I moved the open + seek from the parent to the children).&lt;BR /&gt;&lt;BR /&gt;Here is a sample session for a 4GB file on an ia64 hp server rx7620 (8p)&lt;BR /&gt; &lt;BR /&gt;# time perl split.pl xx.dat 6&lt;BR /&gt;6 x 699072512 byte chunks. 10667 x 65536 byte blocks. 4194305024 bytes&lt;BR /&gt;real     1:35.60, user        0.64, sys        29.37&lt;BR /&gt;# That's with over 75% cpu busy and gives:&lt;BR /&gt;# ls -l xx*&lt;BR /&gt;        4194305024 Nov 29 21:05 xx.dat&lt;BR /&gt;        163707548 Nov 30 10:41 xx.dat_1.gz&lt;BR /&gt;        167387395 Nov 30 10:41 xx.dat_2.gz&lt;BR /&gt;        163581093 Nov 30 10:41 xx.dat_3.gz&lt;BR /&gt;        162035968 Nov 30 10:41 xx.dat_4.gz&lt;BR /&gt;        159506304 Nov 30 10:41 xx.dat_5.gz&lt;BR /&gt;        159981309 Nov 30 10:41 xx.dat_6.gz&lt;BR /&gt;#Put them back together with:&lt;BR /&gt;for i in xx.dat*gz&lt;BR /&gt;do&lt;BR /&gt;gunzip -c $i &amp;gt;&amp;gt; xx&lt;BR /&gt;done&lt;BR /&gt;real     1:07.8, user       50.0, sys        16.3&lt;BR /&gt;# ls -l xx&lt;BR /&gt;        4194305024 Nov 30 10:47 xx&lt;BR /&gt;# doublecheck&lt;BR /&gt;# time diff xx xx.dat&lt;BR /&gt;real     2:13.2, user     1:32.8, sys        24.4&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The script:&lt;BR /&gt;&lt;BR /&gt;$|=1;&lt;BR /&gt;$file = shift @ARGV or die "Please provide file to split and # chunks";&lt;BR /&gt;open (FILE, "&amp;lt;$file") or die "Error opening $file";&lt;BR /&gt;close (FILE);&lt;BR /&gt;$chunks = shift @ARGV;&lt;BR /&gt;$chunks = 4 unless $chunks;&lt;BR /&gt;$chunks = 26 if $chunks &amp;gt; 26;&lt;BR /&gt;$total = -s $file;&lt;BR /&gt;die "puny file" unless ($total &amp;gt; 10000000);&lt;BR /&gt;# make last chunk the smallest&lt;BR /&gt;$block = 
64*1024;&lt;BR /&gt;$blocks =  1 + int( $total / ($chunks * $block));&lt;BR /&gt;$chunk = $blocks * $block;&lt;BR /&gt;print "$chunks x $chunk byte chunks. $blocks x $block byte blocks. $total bytes\n";&lt;BR /&gt;$i = 0;&lt;BR /&gt;while ($i &amp;lt; $chunks) {&lt;BR /&gt;  if ($pid=fork()) {&lt;BR /&gt;    $i++;&lt;BR /&gt;    } else {&lt;BR /&gt;    open (FILE, "&amp;lt;$file") or die "Error opening $file in child $i";&lt;BR /&gt;    binmode (FILE);&lt;BR /&gt;    $pos = sysseek (FILE, $chunk * $i++, 0);&lt;BR /&gt;    $name = "${file}_${i}.gz";&lt;BR /&gt;    open (ZIP, "| gzip &amp;gt; $name") or die "-- zip error child $i file $name";&lt;BR /&gt;    while ($blocks-- &amp;amp;&amp;amp; $block) {&lt;BR /&gt;      $block = sysread(FILE, $buffer, $block);&lt;BR /&gt;      syswrite (ZIP, $buffer) if ($block);&lt;BR /&gt;      }&lt;BR /&gt;    exit 0;&lt;BR /&gt;    }&lt;BR /&gt;  }&lt;BR /&gt;$pid = wait() while ($pid &amp;gt; 0);&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Enjoy!&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Nov 2004 14:07:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432501#M205953</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2004-11-30T14:07:11Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432502#M205954</link>
      <description>Can you put a Gig-E circuit between the two servers? If so, then the transfer of 35GB would take about 5 minutes.&lt;BR /&gt;&lt;BR /&gt;live free or die&lt;BR /&gt;harry d brown jr</description>
      <pubDate>Tue, 30 Nov 2004 14:25:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432502#M205954</guid>
      <dc:creator>harry d brown jr</dc:creator>
      <dc:date>2004-11-30T14:25:32Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432503#M205955</link>
      <description>I love Hein's (2nd) perl script.  Provided that the sysseek call really does go directly to the right point in the file without a sequential search, you should get good performance and a high level of confidence out of it.&lt;BR /&gt;My program is truly multi-threaded and does it all in one pass through the file, but you won't have the confidence that the simplicity of Hein's perl solution gives you.  You would also have to edit some of my C code to get the correct level of compression and fopen64().&lt;BR /&gt;For a 35GB file, you need something that makes only one pass through the source file, or several small scans of its parts.&lt;BR /&gt;Hein, I take my hat off to you; that little script is probably what I was looking for 2 years ago.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Nov 2004 16:21:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432503#M205955</guid>
      <dc:creator>Steve Lewis</dc:creator>
      <dc:date>2004-11-30T16:21:34Z</dc:date>
    </item>
    <item>
      <title>Re: Any compression tool available which could use multiple CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432504#M205956</link>
      <description>&amp;gt; Provided that the sysseek call really does go direct to the right point in the file without a sequential search&lt;BR /&gt;&lt;BR /&gt;It does. Each child starts and stops at pretty much the same time, with the actual data contents determining the CPU time needed.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; you should get good performance and a high level of confidence out of it.&lt;BR /&gt;&lt;BR /&gt;With a reasonable IO system I believe it gives a near inverse-linear improvement in elapsed time for the number of chunks selected, up to the number of available CPUs. For final performance tweaks you might want to toss an mpsched at the zip commands and force one per CPU.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; perl solution gives you. You would also have to edit some of my C code to get&lt;BR /&gt;&lt;BR /&gt;I find that it actually looks more like a C program than a perl script :^)&lt;BR /&gt;&lt;BR /&gt;&amp;gt; I take my hat off to you, that little script is probably what I was looking for 2 years ago. &lt;BR /&gt;&lt;BR /&gt;And a pretty wizard's hat at that. Thanks! :-)&lt;BR /&gt;&lt;BR /&gt;Obviously the script is still pretty rough: only initial error handling, remnants of a shady past (that '26' was for the split hack in the first attempt), and so on, but it should be a fine starting point for someone's specialized solution (different output selection, different zip params, automatically determining the (free) CPU count, ...).&lt;BR /&gt;&lt;BR /&gt;By making the last chunk the smallest I could keep the loop control simple: just read the selected number of blocks, or until you could read no more (the last chunk).&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Nov 2004 19:02:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/any-compression-tool-available-which-could-use-multiple-cpu/m-p/3432504#M205956</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2004-11-30T19:02:58Z</dc:date>
    </item>
  </channel>
</rss>

