mangor
Occasional Advisor

gzip

Hello All,
I'm using gzip to compress an Oracle export on the fly. It's going through the pipe file at a rate of 8k. I have HP-UX 11.0 64-bit. Is there any way to increase the rate?

mknod pipe_file_name p

gzip < pipe_file_name > export_name.dmp.gz &
exp / file=pipe_file_name full=y ...


Steven E. Protter
Exalted Contributor

Re: gzip

Do the process in multiple steps.

Create a tar file of all files, then gzip that.

You'll probably improve the throughput.

Make sure the disk isn't nearly full; that can cause huge I/O performance hits.
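Steven's multi-step approach might look like this (a hedged sketch; the file names are examples, and a printf stands in for the real exp output so the steps can be tried anywhere):

```shell
# Two-step variant: write the export to disk first, then tar and gzip it.
# In real use the first step would be: exp / file=export.dmp full=y
printf 'stand-in for exp output\n' > export.dmp
tar -cf export.tar export.dmp    # bundle the dump file(s)
gzip -f export.tar               # compress, producing export.tar.gz
```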

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Mark Greene_1
Honored Contributor

Re: gzip

In addition to Steven's comments, if you can tar to a file system on one disk and gzip the output to a file system on a separate disk, then depending on the SCSI bus's maximum sustained throughput you may see a big performance increase. If the max throughput can't be sustained, you'll probably not notice a difference. Obviously you'll want to be working with the two least busy disks on the system.

mark
the future will be a lot like now, only later
H.Merijn Brand (procura
Honored Contributor

Re: gzip

You can also experiment with the rate of compression you need.

gzip -9 compresses very well, but it's dead slow. If you need that much compression, you're probably better off with bzip2, which compresses much better but is even slower at compressing (though faster at decompressing).

gzip -1 barely compresses, but is very fast.

gzip -6 is the default.

So if gzip -3 compresses well enough for you, it might also raise the throughput enough to be your compromise.
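One way to find that compromise is to compress the same sample at several levels and compare output sizes (a sketch; the generated sample.dat stands in for real export data):

```shell
# Compress identical input at several levels; higher levels should yield
# smaller (or equal) output at the cost of CPU time.
yes "oracle export test data" | head -c 200000 > sample.dat
for level in 1 3 5 9; do
  gzip -c -"$level" sample.dat > "sample.$level.gz"
done
ls -l sample.*.gz    # compare the sizes
```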

Enjoy, Have FUN! H.Merijn
H.Merijn Brand (procura
Honored Contributor

Re: gzip

One more (small) thing. Not even a shot in the dark.

It might help to compile the most recent version of gzip from source with +O4, so it runs even faster.

I don't know which gzip port you have, but in some situations binaries compiled with GNU gcc, or compiled in portable (pa-risc-1.1) mode, can run up to 40% slower than binaries compiled with HP ANSI C for the native architecture.

# file `which gzip`

will show you which architecture gzip was built for.

a5:/tmp 102 > ll /stand/vmunix
564 -rwxr-xr-x 1 root sys 15405408 Jun 14 18:58 /stand/vmunix
a5:/tmp 103 > timex gzip -1 < /stand/vmunix >/dev/null

real 1.84
user 1.80
sys 0.03

a5:/tmp 104 > timex gzip -3 < /stand/vmunix > /dev/null

real 2.48
user 2.17
sys 0.03

a5:/tmp 105 > timex gzip -5 < /stand/vmunix > /dev/null

real 3.06
user 2.76
sys 0.03

a5:/tmp 106 > timex gzip -9 < /stand/vmunix > /dev/null

real 27.97
user 27.05
sys 0.04

a5:/tmp 107 >

a5:/tmp 109 > gzip --version
gzip 1.3.5
(2002-09-30)
Copyright 2002 Free Software Foundation
Copyright 1992-1993 Jean-loup Gailly
This program comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of this program
under the terms of the GNU General Public License.
For more information about these matters, see the file named COPYING.
Compilation options:
DIRENT UTIME STDC_HEADERS HAVE_UNISTD_H HAVE_MEMORY_H HAVE_STRING_H HAVE_LSTAT
Written by Jean-loup Gailly.
a5:/tmp 110 > file `which gzip`
/pro/bin/gzip: PA-RISC2.0 shared executable dynamically linked
a5:/tmp 111 >

===

a5:/tmp 111 > /usr/contrib/bin/gzip --version
gzip 1.2.4 (18 Aug 93)
Compilation options:
DIRENT UTIME STDC_HEADERS HAVE_UNISTD_H
a5:/tmp 112 > file /usr/contrib/bin/gzip
/usr/contrib/bin/gzip: s800 shared executable
a5:/tmp 113 >

a5:/tmp 113 > timex /usr/contrib/bin/gzip -9 < /stand/vmunix > /dev/null

real 21.89
user 21.49
sys 0.03

a5:/tmp 114 > timex /usr/contrib/bin/gzip -5 < /stand/vmunix > /dev/null

real 3.03
user 2.89
sys 0.03

a5:/tmp 115 > timex /usr/contrib/bin/gzip -3 < /stand/vmunix > /dev/null

real 2.35
user 2.33
sys 0.02

a5:/tmp 116 > timex /usr/contrib/bin/gzip -1 < /stand/vmunix > /dev/null

real 2.06
user 2.03
sys 0.02

a5:/tmp 117 >

Enjoy, Have FUN! H.Merijn
mangor
Occasional Advisor

Re: gzip

Thanks everyone, but that doesn't answer my question. I'm not looking for a faster way to take a backup. I'm looking for a way to increase the size of the pipe file. We already take a full system backup in a short time period. I want to perform an export and compress it on the fly. Is there any way to increase the pipe file size?
Sridhar Bhaskarla
Honored Contributor

Re: gzip

Hi,

Unless you write your own C program to increase the buffer size used to read/write the pipe, I don't think you can do it.

However with the hint from Procura, but at the price of compression, you can significantly reduce the time. For ex., try the following.

mkfifo pipe
gzip -1 < pipe > your_file.gz&
timex dd if=/stand/vmunix of=pipe bs=1024k

See how long it took. Try with various compression options like

gzip -3 < pipe > your_file.gz&
timex dd if=/stand/vmunix of=pipe bs=1024k

Observe size of your_file.gz.

Remember that the higher the compression level, the slower the operation: 1 is the fastest (minimum compression) and 9 the slowest (maximum compression). 6 is the default, I believe.

-Sri
You may be disappointed if you fail, but you are doomed if you don't try
Hein van den Heuvel
Honored Contributor

Re: gzip

Mangor,

That ought to work, and it ought to work fast.
The compression CPU time is normally the bottleneck. You may want to specify the Oracle version and CPU count/speed/architecture for better help.
What is a 'rate of 8k'? KB/sec? Yikes.
What is the rate for exporting to /dev/null?
What is the CPU usage going to /dev/null (expressed in single-CPU terms, as export is a single-streamed program)?
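Those baseline questions can be approximated with dd standing in for exp (a hedged sketch; the block size and count are examples, and dd's own transfer report serves as the rate measurement — HP-UX dd prints record counts rather than a rate):

```shell
# Compare pipe throughput with and without gzip on the reading end.
rm -f bench_pipe
mkfifo bench_pipe
cat bench_pipe > /dev/null &                    # reader 1: no compression
dd if=/dev/zero of=bench_pipe bs=64k count=100  # dd reports what it moved
wait
gzip -1 < bench_pipe > bench.gz &               # reader 2: fastest gzip level
dd if=/dev/zero of=bench_pipe bs=64k count=100
wait
```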

Have you checked general export advice like:
http://www.orafaq.com/faqiexp.htm#SPEED
(BTW, COMPRESS=y in the par file does not compress the output; it only affects future extent allocation :-).

I have seen some suggestions to sleep a second or two before starting export to make sure that the reading starts before the writing.

There are also suggestions to avoid NFS output in the process.

Personally I use mpsched to explicitly schedule exp and gzip on separate CPUs.
Even if it doesn't help (I think it does), it gives you good visibility into the bottleneck... if any.
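On HP-UX that scheduling might look roughly like this (a sketch only; the mpsched processor numbers, pipe name, and exp parameters are examples, not a tested recipe):

```shell
# Pin the compressor and the exporter to different processors (HP-UX mpsched).
mkfifo exp_pipe
mpsched -c 1 gzip -3 < exp_pipe > export.dmp.gz &
mpsched -c 0 exp / file=exp_pipe full=y
```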

If the gzip CPU is at 100%, then that's all you are going to get, no? (Of course top will tell you that either way, bound or not.)

The hints for faster zips, or for zip options, should be followed up.
Personally I use bzip2 (with -3, if I remember correctly).

There is no way that exporting to a disk first is going to help, au contraire!
If it does help, just use that as an analysis tool to isolate what is broken... because it shouldn't help.

Finally, try and try again.
It is well worth a few minutes of evaluating CPU time, elapsed time, and compression efficiency if you are about to dive into multiple hours of exporting and compressing.

Good luck,
Hein.
Mark Greene_1
Honored Contributor

Re: gzip

"I want to perform an export and compress it on the fly. Is there any way to increase the pipe file size?"

I still think you are chasing the wrong end of this, and that your bottleneck is pulling the data out of Oracle. Have you run iostat on the disks where the Oracle volumes are stored during the time period in which your backup runs? 8k/sec seems way too slow for this to be a memory or kernel threshold.

mark
the future will be a lot like now, only later
Alzhy
Honored Contributor

Re: gzip

If your piped gzip output rate from your Oracle export job is 8k a second, then you're probably getting anywhere from 80K to 100K a second out of Oracle. Possibilities:

1. Oracle is slow feeding your pipe/gzip'd process
2. The system is very I/O and CPU bound.

What is the model and memory/disk config of this machine?
Hakuna Matata.