Operating System - HP-UX

Filesystem migration recommendations...

 
Ian Kinthar
Occasional Contributor

Filesystem migration recommendations...

Very soon I am going to be migrating a LOT of data from a bunch of unprotected storage to a new EMC Clariion. In the past I would have used 'dd' to perform this copy on a per filesystem basis. Unfortunately, I haven't done this kind of data migration in a couple of years and suffice to say I'm a bit rusty.

I believe that 'dd if=/dev/vgold/rold_fs of=/dev/vgnew/rnew_fs bs=1024k' is the way I would have done this in the past. Obviously I will need to create the new filesystems of the same size or slightly larger than the original filesystems. I'm pretty sure that's about all there is to the process excluding changing the mount points. My questions are...

1. I can't remember if this can or should be done with the filesystems in question unmounted?
2. Is there a better way (more efficient) to go about this process?
3. Any special considerations to bear in mind for things like Oracle table space?

Thanks in advance for the help. I know this is basic stuff, but it really has been a couple of years since I've needed to do this.
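A sketch of the per-filesystem copy described above (the VG, LV, and mount point names are placeholders, not from any real system):

```shell
# Unmount the source so nothing changes during the copy
umount /old_fs
# Block-for-block copy of the raw LV; the target LV must be
# at least as large as the source
dd if=/dev/vgold/rold_fs of=/dev/vgnew/rnew_fs bs=1024k
# Mount the copy at the original mount point
mount /dev/vgnew/new_fs /old_fs
```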
7 REPLIES
Patrick Wallek
Honored Contributor

Re: Filesystem migration recommendations...

If you have MirrorDisk installed and create your LUNs on the Clariion to be the same size as your current PVs, then there is the possibility that you could vgextend your VGs, mirror to the new disks, then vgreduce the old disks. This could all be done on-line, though you might see a minimal impact on response time.
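A sketch of that sequence for one volume group (device and VG names are placeholders; MirrorDisk/UX must be installed, and the lvextend/lvreduce pair is repeated per LV):

```shell
# Present the new Clariion LUN to the volume group
pvcreate /dev/rdsk/c10t0d0
vgextend /dev/vgdata /dev/dsk/c10t0d0
# Mirror an LV onto the new PV -- done online
lvextend -m 1 /dev/vgdata/lvol1 /dev/dsk/c10t0d0
# Once the mirror is in sync, drop the copy on the old PV...
lvreduce -m 0 /dev/vgdata/lvol1 /dev/dsk/c4t0d0
# ...and remove the old disk from the VG
vgreduce /dev/vgdata /dev/dsk/c4t0d0
```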

As far as your questions go:

1) Having the filesystems unmounted would guarantee that nothing is written to them while you are doing the dd.
2) A larger block size would help it go faster.
3) Make absolutely sure Oracle is down. You don't want changes to the data files while you are running a dd.

Ian Kinthar
Occasional Contributor

Re: Filesystem migration recommendations...

I really hadn't considered that possibility... One concern I have with that is providing a solid backout. My plan was to simply mount up the original filesystems on /o/old_fs for a couple of weeks in the event that there is some kind of problem with the copy or the new storage. Doing the mirror doesn't really leave us with a backout once we lvreduce out the mirror.

The other problem is that I don't think we have Mirror-UX licensed on this particular server.
Geoff Wild
Honored Contributor

Re: Filesystem migration recommendations...

If you have room in your VG and if the extent size will accommodate your new LUNs (best to make them the same size as your current storage), then you can vgextend and pvmove to the new storage.
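That approach, with placeholder device names, would be roughly:

```shell
pvcreate /dev/rdsk/c10t0d0
vgextend /dev/vgdata /dev/dsk/c10t0d0
# Move all extents off the old PV onto the new one, online
pvmove /dev/dsk/c4t0d0 /dev/dsk/c10t0d0
# Old disk is now empty and can leave the VG
vgreduce /dev/vgdata /dev/dsk/c4t0d0
```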

Otherwise, I found vxdump/vxrestore the best way to migrate between frames:

vxdump -0 -f - -s 1000000 -b 16 /usr/sap/tmp | (cd /zmnt/usr/sap/tmp; vxrestore rf -) &


where zmnt is the new LUNs... then when complete, umount everything and mount the new lvols to the original dirs...
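Putting the whole cycle together for one filesystem (LV names and mount points are illustrative):

```shell
# New LV is mounted under a staging tree for the restore
mount /dev/vgnew/lvsap /zmnt/usr/sap/tmp
# Dump the old filesystem over a pipe into the new one
vxdump -0 -f - -s 1000000 -b 16 /usr/sap/tmp | \
    (cd /zmnt/usr/sap/tmp; vxrestore rf -)
# Swap the mounts once the copy completes
umount /zmnt/usr/sap/tmp
umount /usr/sap/tmp
mount /dev/vgnew/lvsap /usr/sap/tmp
```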

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Ian Kinthar
Occasional Contributor

Re: Filesystem migration recommendations...

Can the vxdump/vxrestore be performed with the filesystems unmounted? My concern here is access to the data during the copy and things becoming out of sync.

Thanks.
Peter Nikitka
Honored Contributor

Re: Filesystem migration recommendations...

Hi,

the filesystem to be read by vxdump should be unmounted (use the device name); vxrestore needs a directory, i.e. a mounted filesystem.

mfG Peter
The Universe is a pretty big place, it's bigger than anything anyone has ever dreamed of before. So if it's just us, seems like an awful waste of space, right? Jodie Foster in "Contact"
A. Clay Stephenson
Acclaimed Contributor

Re: Filesystem migration recommendations...

Yes, vxdump can be done with the filesystem unmounted; in fact, that is how you should do it. However, no method is going to be faster than dd and dd will also handle any raw data areas (such as those used by databases).

Moreover, you can "tune" your optimum bs= value and the number of simultaneous dd's beforehand by doing some dd's while the system is hot and recording your transfer metrics. (Obviously, you won't actually use any data transferred "hot", but the transfer rates will still be valid.) Generally, as long as you get bs above about 256KiB the throughput is going to change very little, and your initial 1024k will serve nicely.
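A quick way to probe block sizes (this version reads /dev/zero so it runs harmlessly anywhere; on the real system you would read from the source raw LV and note the rates dd reports):

```shell
# Time a short transfer at each candidate block size
for bs in 256k 1024k; do
    t0=$(date +%s)
    dd if=/dev/zero of=/dev/null bs=$bs count=100 2>/dev/null
    t1=$(date +%s)
    echo "bs=$bs took $((t1 - t0))s"
done
```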

If you also plan to expand the size of any filesystems as you move to the new array, then simply make the destination LVOLs as large as you want them beforehand, do the dd transfer, and then execute fsadm -F vxfs -b or extendfs -F vxfs after the transfer to "grow" the filesystem to fit its new LVOL size.
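For example, growing a copied filesystem into a larger destination LVOL (names and sizes here are illustrative, not from the thread):

```shell
# Destination LV created larger than the source, e.g. 4096 MB
lvcreate -L 4096 -n lvnew /dev/vgnew
# Block copy from the old raw LV into the new one
dd if=/dev/vgold/rlvold of=/dev/vgnew/rlvnew bs=1024k
mount /dev/vgnew/lvnew /new_fs
# Online resize up to the full LV size (size given in 1 KB sectors)
fsadm -F vxfs -b 4194304 /new_fs
```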

The only real danger of the dd approach is that it is easy to swap the of= and if= parameters, and if you do, you have just destroyed your data. Of course, the same applies to virtually all UNIX commands; simply pay attention to what you are doing or, better still, write a script and carefully examine it before launching it -- just as you would with any UNIX task.
If it ain't broke, I can fix that.
Doug O'Leary
Honored Contributor

Re: Filesystem migration recommendations...

Hey;

I've actually done this so many times at my current client that I can do it in my sleep - after a couple of those weekends, I probably have done it in my sleep!

You have a couple of choices. The one thing to remember is that this isn't rocket science. You're simply moving bits from one disk to another. If you're going to be doing this on multiple systems, you want something that can be repeated which means automation.

As others have mentioned, mirroring the data would be the method with the least impact. There shouldn't be much concern regarding the validity of the copy, as the MirrorDisk/UX software will maintain that for you. The constraint, though, is that the new LUNs must be near the size of the old ones - something that's sometimes difficult to arrange.

You can also generate new lvs, then restore from tape. That'd be a good way to verify your backups as well.

The process I used was generated because we were moving to larger luns and we were going to SRDF the data between datacenters.

I generated a table of the logical volumes with the format:

new_lv old_lv Size LEs PEs FS Owner/MP

The FS column is a flag for whether the LV holds a filesystem or a raw volume. If it's 1, the last column specifies a mount point; if it's 0, the last column specifies an owner.

I use an inline script to generate that table, but that should be simple enough.

Once I have that, generate the new lvs (another script) and mount them, as appropriate, under /mnt/${mp} such that /oracle's new filesystem will be /mnt/oracle.

Then, you can run the attached script. It'll examine the lvs file and use find/cpio combinations for the filesystems and dd for the raw volumes.

You'll probably want to edit the script/tables for your own needs. For instance, when I first designed this thing, I thought I'd be using the LEs/PEs when building the new LVs - turned out not to be the case, so I never used those columns.
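Since the script itself isn't reproduced here, a minimal sketch of its dispatch logic, assuming the table format above (this version only prints the commands it would run; the sample table lines are made up):

```shell
# Read the lv table and emit one copy command per line:
# find/cpio for filesystems (FS=1), dd for raw volumes (FS=0)
migrate_cmds() {
    while read new_lv old_lv size les pes fs owner_mp; do
        case "$fs" in
        1) echo "cd $owner_mp && find . -xdev | cpio -pdumv /mnt$owner_mp" ;;
        0) echo "dd if=$old_lv of=$new_lv bs=1024k" ;;
        esac
    done
}

printf '%s\n' \
  '/dev/vgnew/lvora  /dev/vgold/lvora  4096 1024 1024 1 /oracle' \
  '/dev/vgnew/rraw1  /dev/vgold/rraw1  2048  512  512 0 oracle' |
  migrate_cmds
```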

That should give you some potential ideas and a start along the path if you're scripting find/cpio/dd combinations...

Doug

------
Senior UNIX Admin
O'Leary Computers Inc
linkedin: http://www.linkedin.com/dkoleary
Resume: http://www.olearycomputers.com/resume.html