Transfer very large files across servers.
12-12-2005 02:23 AM
Most probably I will be using the Unix "rcp" command to accomplish this.
I need some hints to speed up this transfer:
1. Do I need to tune any HP-UX kernel parameters to make it faster, e.g. by reading bigger chunks of blocks from the file systems?
2. If I compress the files at host "A", copy them to host "B", and then uncompress them, will this speed up the transfer?
3. Can I transfer the files in parallel?
Thanks.
Gulam
12-12-2005 02:37 AM
Solution
1. I don't think kernel tuning will help. It's a network bandwidth issue.
2. Yes, compression does speed up the transfer itself, but it requires disk space to compress, and the time saved on the wire can be exceeded by the time it takes to compress. Still, the file transfer itself will be faster. gzip had the best compression last I checked.
3. Yes, but it will probably take just as long. Network bandwidth is the issue.
If the two machines are on the same network, see that they both have gigabit networking and a good gigabit switch between them.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
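A minimal sketch of the compress-copy-uncompress sequence described above; the host name hostB, the /data path, and the remsh step are all hypothetical placeholders for your own environment:

```shell
# hostB and /data/bigfile are hypothetical placeholders.
# Compress locally, copy the smaller file, then uncompress remotely.
gzip -9 /data/bigfile                          # writes /data/bigfile.gz, removes the original
rcp /data/bigfile.gz hostB:/data/bigfile.gz    # copy the compressed file
remsh hostB 'gunzip /data/bigfile.gz'          # restore /data/bigfile on hostB
```

Note that this needs free disk space for the .gz file on the source and for the uncompressed file on the target, which is the trade-off mentioned above.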
12-12-2005 02:49 AM
Re: Transfer very large files across servers.
Jeff Traigle
12-12-2005 02:58 AM
Re: Transfer very large files across servers.
1. There is nothing to tune the kernel for.
2. Use scp and you can compress the data on the fly; the data can be compressed on the wire.
3. Transferring files in parallel may cause problems by overloading the receive buffers on the target machine.
If these are not new data files, but merely updates to existing files, consider using rsync. It will merge the changed and new blocks into the destination file, using checksum comparisons to identify changed blocks. It usually runs over ssh, and the compression level can be tuned. You may want to use it even if the files are new.
12-12-2005 07:18 AM
Re: Transfer very large files across servers.
I am completely biased because I've worked with NFS for years, but you could use NFS to do this.
I've spent the past few years working on NFS performance tuning issues and I've been able to get NFS/TCP to run at wire speeds for both reading and writing using Gigabit interfaces. In fact, during a recent engagement I was able to get many, many GigE interfaces running at wire speed with NFS/TCP traffic.
Just a suggestion, since NFS ships with all of our systems and you can use standard "cp" commands to move files around. NFS certainly supports parallel transfers via multiple cp commands.
Regards,
Dave
I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]

12-12-2005 02:16 PM
Re: Transfer very large files across servers.
Bill Hassell, sysadmin
12-12-2005 02:34 PM
Re: Transfer very large files across servers.
1) Kernel tuning won't help in this case, since networking speed depends on external factors like bandwidth, traffic, etc.
2) You can use gzip or compress to reduce the file size. You will then have a smaller file with the same content, and it will transfer in less time than the uncompressed one.
3) Yes, you can transfer files in parallel.
Try to use scp or sftp, since they will provide better performance than rcp.
-Arun
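Two hedged ways to get compression on the wire without writing a temporary compressed copy on either host (hostB, the user name, and the paths are hypothetical):

```shell
# scp's -C flag compresses in transit; no .gz file is ever written.
scp -C /data/bigfile user@hostB:/data/bigfile

# The same idea with an explicit gzip pipe over ssh: compress into the
# pipe locally, decompress out of the pipe remotely.
gzip -c /data/bigfile | ssh hostB 'gunzip -c > /data/bigfile'
```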
12-12-2005 03:18 PM
Re: Transfer very large files across servers.
I would go for FTP too.
Don't forget to set the appropriate mode for the transfer: binary or ASCII.
You can of course do it in parallel.
If your server has multiple network interfaces, you can also benefit from that.
Hope this helps too!
Kind regards,
yogeeraj
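A scripted ftp transfer in binary mode might look like this; the host name, credentials, and file path are all placeholders:

```shell
# hostB, ftpuser, ftppass, and /data/bigfile are hypothetical.
# "binary" disables ASCII newline translation, which would corrupt
# non-text files; -n suppresses auto-login so we can script "user".
ftp -n hostB <<'EOF'
user ftpuser ftppass
binary
put /data/bigfile
bye
EOF
```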
12-12-2005 10:05 PM
Re: Transfer very large files across servers.
What is the type of connectivity between those two servers: is it LAN or WAN? In both cases I will recommend ftp, because it has less overhead. You can compress the files using the compress command, not gzip, and then ftp them; sftp is also preferable. If I'm not wrong, gzip won't support files above 2 TB. I am not sure whether any patch is available for that.
If it's a regular activity for you and you can invest some money, go for data compression boxes between the machines, which will save you the headache of compressing and decompressing at the Unix level.
Regards,
Sunil
12-13-2005 04:47 AM
Re: Transfer very large files across servers.
*) How fast (bandwidth) is the network between the two servers?
*) How "far apart" (ping times) are the two servers?
*) What CPUs do you have in the two servers?
*) How much memory is there on the target server?
*) How fast are the disc(s) on the source and the target systems?
*) Are the different files going to the same or different filesystems/discs?
12-13-2005 04:56 AM
Re: Transfer very large files across servers.
Here's what I do:
created /app/admin/drp
created a script to update files on the DR server called update-binaries.sh
created file to store what to update called distfile
/app/admin/drp # cat update-binaries.sh
#! /bin/sh
# Keep the DRP copy of the vgpadm up-to-date.
# Currently the files are in:
#
# /home/vgpadm
#
# See the rdist(1M) distfile for a list of exclusions.
DRPDIR=/app/admin/drp
DRPHOST=svr032
mount | grep /home > /dev/null 2>&1
if [ $? -eq 0 ]
then
( su - vgpadm -c "rdist -f $DRPDIR/distfile vgpadm"; ) 2>&1 |\
tee $DRPDIR/drp.log 2>&1 |\
mailx -s "VGPADM DRP rdist output" gwild@mydomain.com
fi
/app/admin/drp # cat distfile
VGPDR = ( svr032 )
#
# File systems to be copied over to the DR host.
VGPADM = /home/vgpadm
vgpadm: ( ${VGPADM} ) -> ( ${VGPDR} )
install -R -w ;
except ${VGPADM}/logfiles;
Add to cron:
# Copy vgpadm across to the DR site.
05 01 * * * /app/admin/drp/update-binaries.sh
Rgds...Geoff