Fastest copy over net using pipes.


I need to make an Oracle export from Tru64 to HP-UX, so I am looking for the fastest method to copy the data over the network using pipes.
- ssh with HPN and no encryption. Perhaps compressed.
- netcat ?
- rsh (slow)

Any benchmarks available?
Any other options?
Hein van den Heuvel
Honored Contributor

Re: Fastest copy over net using pipes.

Hello Juan,

Welcome back to the Forum. Too bad the Tru64 solution did not go forward, huh?

If you already know your solution, then go ahead and do it and don't ask us! But maybe it is better to step back to the real problem, getting an Oracle database transferred, and then consider all options.

Using pipes may well be the best solution.
It is kinda nice to avoid a bunch of temporary files and keep all the bits up in the air. The only thing is... all the bits are up in the air! No pickup if anything gets dropped.

Here is what I would consider.
Make a script to run the Oracle export into local pipes (mkfifo / mknod -p) and feed that into bzip2 or zip into a file.
Use FILESIZE=10GB, or use 5GB or 50GB or whatever matches about 1/10 - 1/20 of your database DATA size, as you will surely use INDEXES=N.
Have a second script (or a fork) pick up those chunks of the export as they finish and transfer them with FTP or NFS. Catch them on the other side.
After a few (2 or 3) files are on the other side, you may want to start the imports from a pipe fed by uncompressing the chunks.

If space is limited, you can delete the chunks on the sender side once received and/or delete the chunks on the receiver once they are imported.

As chunks are imported you can already start building indexes for the completed tables.
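The chunked pipe-to-compressed-files flow above can be sketched locally like this, with dd standing in for the Oracle exp process (all paths and the tiny 1 KB chunk size are placeholders; a real run would point exp at the pipe with FILE= and use FILESIZE/INDEXES=N as described):

```shell
#!/bin/sh
# Sketch of the chunked flow: export -> pipe -> compress -> chunks.
# dd fakes the exp process; a real run would use exp ... FILE=<pipe>.
set -e
dir=$(mktemp -d)
mkfifo "$dir/exp.pipe"

# Consumer: compress the pipe's stream and cut it into fixed-size
# chunks that a second script can ship (FTP/NFS) as each completes.
gzip -c < "$dir/exp.pipe" | split -b 1024 - "$dir/chunk." &

# Producer: the export writes into the pipe; fake 10 KB of data.
dd if=/dev/zero bs=1024 count=10 > "$dir/exp.pipe" 2>/dev/null
wait

# Receiver side: reassemble and uncompress chunks into an import pipe.
cat "$dir"/chunk.* | gunzip -c | wc -c
```

Because the compressor reads the pipe while the export writes it, no uncompressed dump ever lands on disk, and the chunk files give you restart points that a pure end-to-end pipe lacks.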

Hope this helps enough! (If not, contact me directly and for mere money I'll be happy to provide full details and scripts! :-)


Hein van den Heuvel
HvdH Performance Consulting.
Volker Borowski
Honored Contributor

Re: Fastest copy over net using pipes.


Did you try SQL*Net?

exp .... -> to pipe

imp user@targetdb .... -> from pipe

A proper tnsnames.ora entry for targetdb is required.
Watch out for NLS parameters and environment, and try a dummy table first to check proper transfer of special characters like umlauts or graphics.
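The exp-to-pipe, imp-from-pipe pattern can be seen in miniature below: writer and reader share a named pipe, so no dump file ever lands on disk. In the real setup the reader would be `imp user@targetdb FILE=<pipe>` (targetdb being a hypothetical tnsnames.ora alias) and the writer would be exp; cat and echo stand in for them here:

```shell
#!/bin/sh
# Minimal demo of two processes sharing a named pipe, as exp and imp
# would: the reader drains the pipe while the writer fills it.
set -e
d=$(mktemp -d)
mkfifo "$d/xfer.pipe"

cat "$d/xfer.pipe" > "$d/received" &   # stands in for: imp user@targetdb FILE=<pipe>
echo "row data" > "$d/xfer.pipe"       # stands in for: exp ... FILE=<pipe>
wait
```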



Re: Fastest copy over net using pipes.

Thanks for your answers, but they don't solve my problem.
I need to make an export of 9i on Tru64 and a simultaneous import into 10g on HP-UX.
We tried something like this:

exp parfile=exp.par tables=RE.$tb file=/temp/$tb.pipe & rsh srv2 'cat > /temp2/$tb.pipe' < /temp/$tb.pipe

But it seems that copying over the net using rsh is slow.
I need another, faster method to move data from the pipe on server1 to the pipe on server2.
Hein van den Heuvel
Honored Contributor

Re: Fastest copy over net using pipes.

Juan, did you make sure to test the raw network speed with several tools? The cat from the pipe _might_ just transmit a short record at a time instead of blocked-up data. You probably want delayed ACKs disabled (on Tru64: sysconfig -r inet tcpnodelack=1).

And uh... you do have a decent, direct, high-speed connection right?

Just transfer a large (> 1GB) file with various tools outside the import/export context.

Also try the pipe construct by cat-ting a file into it.
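A local dry run of that advice might look like the sketch below: push a sizeable file through the same pipe construct and verify the full byte count arrives (1 MB here; sizes and the idea of timing each tool against the same file are illustrative, and over the net the sender end would be rsh/ssh/ftp instead of cat):

```shell
#!/bin/sh
# Cat a known-size file through a fifo and confirm the receiver
# sees every byte; timing the same transfer per tool gives the
# raw-speed comparison suggested above.
set -e
d=$(mktemp -d)
dd if=/dev/zero of="$d/big" bs=1024 count=1024 2>/dev/null  # 1 MB test file
mkfifo "$d/t.pipe"
wc -c < "$d/t.pipe" > "$d/count" &   # receiver end of the pipe
cat "$d/big" > "$d/t.pipe"           # sender end
wait
cat "$d/count"
```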

Good luck,

Bill Hassell
Honored Contributor

Re: Fastest copy over net using pipes.

You should start by determining just how fast your network is. If it is a WAN, it won't matter what you use; it will be extremely slow. You can upgrade your connection to T3 to get about 2-4 Mbytes/sec (T1 or DSL = 80 Kbytes/sec). If this is internal, 10Base gets you about 800 Kbytes/sec, 100Base about 8 Mbytes/sec, and 1000Base about 80 Mbytes/sec. Those are approximate figures with an efficient protocol such as ftp. Pipes are a *LOT* less efficient due to the larger number of ACKs used.

So the biggest bang is to compress everything (this all assumes that you are not sending a few megabytes but several gigabytes). Exports tend to be highly compressible, and you want to avoid sending unnecessary data. The total time to complete will be network-limited until you use a 1000Base link, where the CPU cost of compress/decompress becomes more significant.
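A quick local check of the compression argument, using a deliberately repetitive stream as a stand-in for an export dump (the sample data and 1 MB size are made up; on a slow link, the compressed byte count is what the network actually carries):

```shell
#!/bin/sh
# Compare raw vs gzip'd size of redundant data; export dumps are
# similarly repetitive, so the ratio is often dramatic.
set -e
d=$(mktemp -d)
# Highly repetitive sample data standing in for an export dump.
yes "SCOTT,EMP,SALES,1000" | head -c 1048576 > "$d/dump"
gzip -c "$d/dump" > "$d/dump.gz"
ls -l "$d/dump" "$d/dump.gz"   # the .gz file should be far smaller
```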

Bill Hassell, sysadmin
Volker Borowski
Honored Contributor

Re: Fastest copy over net using pipes.


You could try it like this:

Shell one : Start export to pipe

exp parfile=exp.par tables=RE.$tb file=/temp/$tb.pipe &

Shell two : read from pipe, compress, rsh, (on target:uncompress+feed to import pipe)

dd if=/temp/$tb.pipe | compress | rsh srv2 'uncompress| dd of=/temp2/$tb.pipe'

Shell three (On srv2)
imp .... from pipe

To optimise throughput, you can play around with the blocksize parameters in dd. You could try "ibs=" / "obs=" to set the blocksize to e.g. 8192 bytes when reading from disk and 2048 or 4096 when writing to compress (or gzip). Use the reverse parameters on the receiving side.

exp: dd if=$tb.pipe ibs=8192 obs=2048
imp: dd if=$tb.pipe ibs=2048 obs=8192

I think "dd" is a better choice, because you have better control of the blocking elements.
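A quick local check that the dd re-blocking above is lossless (sizes are the example values from the dd lines; /dev/zero stands in for the export stream):

```shell
#!/bin/sh
# Read 8 KB blocks, emit 2 KB blocks, and confirm the stream's
# total byte count is unchanged by the re-blocking.
set -e
d=$(mktemp -d)
dd if=/dev/zero of="$d/in" bs=8192 count=16 2>/dev/null   # 128 KB test file
dd if="$d/in" ibs=8192 obs=2048 2>/dev/null | wc -c
```

Only the block boundaries change, not the data, which is why dd can be tuned freely at either end of the compress/rsh pipeline.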

Maybe gzip gives you better compression than compress, but it is more costly in terms of CPU resources.