Operating System - HP-UX

SOLVED
THEUNISSEN, J
Advisor

How to migrate data (400 GB) to NetApp as quick as possible

I want to migrate data from a direct-attached disk (StorageTek RAID system) on an HP-UX 11.00 system to a NetApp filer. We have mounted the NetApp filer via NFS.
I planned to use the command cp -rp *, but (I think because of the NFS mount) this takes quite a while with 400 GB of data. The 400 GB is constantly changing (not static) data, and I need an exact copy of it on the new NetApp filer, so the system cannot be used during the copy. Our customer wants the downtime to be as short as possible, but my tests showed a downtime of approx. 24 hours, and that is too long. Is there a faster method than the cp command? Ftp is not really an option, because I have a lot of subdirectories as well. Who can help?
16 REPLIES
Michael Schulte zur Sur
Honored Contributor

Re: How to migrate data (400 GB) to NetApp as quick as possible

Hi,

I am not familiar with the NetApp filer, but is it possible to use rcp? It is a lot faster than cp.
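For example (just a sketch - "netapp1" and the paths are placeholders, and this assumes the destination accepts rcp with the usual .rhosts trust set up; a plain NFS filer may not, but an intermediate server would):

rcp -rp /source_fs netapp1:/vol/vol1/data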

greetings

Michael
U.SivaKumar_2
Honored Contributor

Re: How to migrate data (400 GB) to NetApp as quick as possible

Hi,

I would suggest NFS over TCP instead of NFS over UDP, as it gives better performance since it supports larger NFS transfer buffers.

Since TCP is used, you also get higher reliability and error recovery for this huge amount of data.
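For example (a sketch - the filer name and export path are placeholders, and vers=3/proto=tcp assume your 11.00 NFS patch level supports NFSv3 over TCP):

umount /netapp_fs
mount -F nfs -o proto=tcp,vers=3,rsize=32768,wsize=32768 filer:/vol/vol1 /netapp_fs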

regards,

U.SivaKumar.

Innovations are made when conventions are broken
Michael Steele_2
Honored Contributor
Solution

Re: How to migrate data (400 GB) to NetApp as quick as possible

You can really tune things up in NFS by:

a) adding more memory, or reducing process load to free memory

b) increasing the number of 'nfsd' daemons:

NUM_NFSD=16 (* /etc/rc.config.d/nfsconf *)

c) eliminating symbolic links

d) tuning the cache: the NFS client will cache, so maybe increase or decrease 'dbc_max_pct' and 'dbc_min_pct'. The issue here is writing to disk, which usually happens when the buffer fills, so sometimes bigger is not better. File sizes are also a factor. The rule of thumb is that buffer cache should be 25% of memory. See the sketch after this list.
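For example, b) and d) come down to something like this (just a sketch - the values are illustrative, and on 11.00 the dbc_* tunables normally need a kernel rebuild and a reboot to change):

# b) more NFS daemons: edit /etc/rc.config.d/nfsconf, then restart NFS
#    (NUM_NFSIOD in the same file sets the client-side biod count)
NUM_NFSD=16
/sbin/init.d/nfs.server stop
/sbin/init.d/nfs.server start

# d) check the current buffer cache limits before changing anything
kmtune -q dbc_max_pct
kmtune -q dbc_min_pct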

See the attachment about NFS performance, or the thread link below, which refers to the same.

http://forums1.itrc.hp.com/service/forums/parseCurl.do?CURL=%2Fcm%2FQuestionAnswer%2F1%2C%2C0x1c97e8644a22d411ade70090279cd0f9%2C00.html&admit=716493758+1068383566904+28353475
Support Fatherhood - Stop Family Law
Michael Steele_2
Honored Contributor

Re: How to migrate data (400 GB) to NetApp as quick as possible

Oh yeah, NFS version 3 is better than version 2, especially with large files over 2 GB.
Support Fatherhood - Stop Family Law
Steven E. Protter
Exalted Contributor

Re: How to migrate data (400 GB) to NetApp as quick as possible

I've seen rcp just plain fail to come out at the other end with the same number of bytes on these large copies. Probably symbolic links caused the problem.

I've had more success with mv.

The bottom line is that the best you can do is tune NFS and make sure the networking is running at full speed: lanadmin -x 0 (zero is the LAN interface number from lanscan).

If those LAN connections are not at the speed and duplex you expect, the transfer time can be cut dramatically by forcing 100 Base-T Full Duplex (lanadmin -X) and by making sure NIC cards with connections at 100 BT and slower are on switch ports that are not set to autonegotiate.
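For example (a sketch - "0" stands for whatever PPA lanscan reports for your card, and the -X option names vary by driver, so check lanadmin(1M)):

lanscan              # list interfaces and their PPA numbers
lanadmin -x 0        # show current speed/duplex of PPA 0
lanadmin -X 100FD 0  # force 100 Mbit Full Duplex on PPA 0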

This is going to take some time, and it's probably time to plan for downtime.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Bruno Ganino
Honored Contributor

Re: How to migrate data (400 GB) to NetApp as quick as possible

Have you considered the "cpio" command?
See the man page for options...
HTH
Bruno
Torino (Turin) +2H
THEUNISSEN, J
Advisor

Re: How to migrate data (400 GB) to NetApp as quick as possible

I have tried several methods now:

ftp: can't be used, because I have several subdirectories as well, so scripts would have to be written

cp: cp -rp * will work

fbackup/frecover

tar cf / tar xf

cpio -o/-i (turned out to be the slowest method)

I looked at the time command and found out that the speed is not really good (5 MB/s), and that is the speed I reached on a 1 Gb backbone (??). So I have now asked the customer to check this network first. I did an ftp at another site (also using the A4929A 1000Base-T card) and there the speed was 20 MB/s. Quite a difference. So I have to wait for this. But the supplier of this NetApp filer system also said you need a separate 1 Gb network to assure the performance. I am also wondering if I should use jumbo frames now?
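A quick way to measure the raw NFS write speed, independent of file sizes and counts (a sketch - the mount point and sizes are only examples):

time dd if=/dev/zero of=/netapp_fs/ddtest bs=1024k count=1024  # writes 1 GB
# 1024 MB divided by the elapsed time gives the effective MB/s;
# remove /netapp_fs/ddtest afterwards.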
Stefan Farrelly
Honored Contributor

Re: How to migrate data (400 GB) to NetApp as quick as possible

The performance difference between using UDP or TCP for NFS is minute. See this document from NetApp on NFS performance:

http://www.netapp.com/tech_library/3146.html

NetApp now supports direct-attach fibre to an HP-UX server, so for your migration I suggest you ignore NFS and direct fibre-connect your NetApp to your HP, then copy the data, then disconnect and remount the NetApp on your HP using NFS.
Im from Palmerston North, New Zealand, but somehow ended up in London...
Stefan Farrelly
Honored Contributor

Re: How to migrate data (400 GB) to NetApp as quick as possible

If you can't direct-attach, then yes, ensure your gigabit LAN connection to the NetApp is NOT on the normal LAN - use a private LAN from the HP to the NetApp for the copy; for performance reasons you should always run a private LAN from the NetApp to the HP.
Also ensure both ends are running AUTO (HP gigabit cards must be set to AUTO, as must the switch ports they're connected to - they will then negotiate up to 1000FD).

See the document in my last reply; using dd you can get 70 MB/s over a gigabit LAN connection to your NetApp filer - we do on ours.
Im from Palmerston North, New Zealand, but somehow ended up in London...
Eugene Fleischmann
New Member

Re: How to migrate data (400 GB) to NetApp as quick as possible

For filesystem based copies, I find this works best:

cd $srcdir
find . -xdev -print | cpio -pxdum $destdir

If you can directly attach the disks, then you can use a host-level mirroring approach through LVM (disk1..disk4 and old_disk1..old_disk4 stand for the /dev/dsk device files of the new and old disks). Depending upon how much room is available in each vg on the HP server, either:

1) vgextend vg1 disk1
2) pvmove old_disk1 disk1
3) vgreduce vg1 old_disk1

or:

1) vgextend vg1 disk1 disk2 disk3 disk4
2) vgdisplay -v vg1 | awk '/LV Name/{print $3}' | while read LV
   do lvextend -m 1 $LV disk1 disk2 disk3 disk4
   done
   (vgdisplay prints the full /dev/vg1/lvolN path, so $LV can be used directly)
3) vgdisplay -v vg1 | awk '/LV Name/{print $3}' | while read LV
   do lvreduce -m 0 $LV old_disk1 old_disk2 old_disk3 old_disk4
   done
   (the mirror copies are dropped from the old disks, not the new ones)
4) vgreduce vg1 old_disk1 old_disk2 old_disk3 old_disk4
Alzhy
Honored Contributor

Re: How to migrate data (400 GB) to NetApp as quick as possible

Option 1: CPIO

cd /source_fs
find ./ -depth -print|cpio -pdvmu /netapp_fs


Option 2: vxdump/vxrestore

vxdump 0f - /source_fs | (cd /netapp_fs; vxrestore rf -)

Hakuna Matata.
doug mielke
Respected Contributor

Re: How to migrate data (400 GB) to NetApp as quick as possible

I'm faced with this task often, and there is no great answer.

While some copy methods are faster than others, going through a single host (between the StorageTek and the NetApp) is going to be time consuming.

If you have room, how about a StorageTek-to-StorageTek copy, using dd for instance? See the sketch below.
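For example (a sketch only - the device files are hypothetical, the target LUN must be at least as large as the source, and the filesystems must be unmounted or quiesced first):

dd if=/dev/rdsk/c4t0d0 of=/dev/rdsk/c5t0d0 bs=1024k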

Then you can bring the system/database up and do the NFS copy at your leisure.

Ralph Haefner
Frequent Advisor

Re: How to migrate data (400 GB) to NetApp as quick as possible

Along the lines of the post above . . .

We used the snapshot feature of our StorageTek D280 to do backups. We'd make a snapshot, then copy it to tape while the underlying filesystem stayed mounted and the app kept running.

You could do the same: a very short outage to make the snapshot, then copy that to your NetApp.

The advantage is you don't need enough free disk space for a full copy of the original disks, and making the snapshot is much faster than doing a full copy. The downside is that, I believe, StorageTek makes you pay for the snapshot software, so you might not have it.
Michael Schulte zur Sur
Honored Contributor

Re: How to migrate data (400 GB) to NetApp as quick as possible

Hi,

is your problem solved? If so, could you please award some points to those who helped you?

greetings,

Michael
THEUNISSEN, J
Advisor

Re: How to migrate data (400 GB) to NetApp as quick as possible

The problem was not really solved by all your answers, but this is the real IT world. So I asked the customer to stop the application on Friday and hoped I would be able to finish the migration by Monday. I started at 6 AM on Friday and was finished at about 3 AM on Saturday - almost 24 hours. I copied the 400 GB using cp -rp * for each folder I have on the StorageTek RAID (9176). So in future I hope there will be a better way, but this was the only way right now. The performance (speed) was best when the files were relatively big and very poor with small files. We had both, so it took a lot of time anyway. Thanks for all your responses.
Hoefnix
Honored Contributor

Re: How to migrate data (400 GB) to NetApp as quick as possible

Hi,

Have a look at this thread; it also covers some NFS/NetApp performance issues - maybe your solution is in this one.
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=400028

Regards,
Peter Geluk