Operating System - HP-UX

Data migration from Internal to External Raid Disk Subsystem

 
Wee Choon Toh
New Member

Data migration from Internal to External Raid Disk Subsystem

Hi, we are planning to migrate our HP N4000 internal disks (about 100 GB across multiple disks) to an IBM ESS, and then remove the internal disks.

I have managed to piece together a set of migration steps and hope that you could help verify them.

Assuming c0t5d0 is my internal disk and c1t3d0 is my external disk:
1) pvcreate /dev/rdsk/c1t3d0
2) lvdisplay -v /dev/vg00/lvol* # check corresponding lvols on c0t5d0
3) vgextend /dev/vg00 /dev/dsk/c1t3d0
4) lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/lvol1
5) lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/lvol2
6) lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/lvol3
7) lvreduce -m 0 /dev/vg00/lvol1
8) lvreduce -m 0 /dev/vg00/lvol2
9) lvreduce -m 0 /dev/vg00/lvol3
10) vgreduce /dev/vg00 /dev/dsk/c0t5d0
11) vgdisplay -v # Verify vg00 only contains external disk c1t3d0.

Thanks and regards !
JACQUET
Frequent Advisor

Re: Data migration from Internal to External Raid Disk Subsystem

Hi,

Just a point to underline: when you reduce an LV, you have to specify which disk you are reducing it from, otherwise you'll be in trouble.
Here is the way:
1) pvcreate /dev/rdsk/c1t3d0
2) lvdisplay -v /dev/vg00/lvol* # check corresponding lvols on c0t5d0
3) vgextend /dev/vg00 /dev/dsk/c1t3d0
4) lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c1t3d0
5) lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c1t3d0
6) lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c1t3d0
7) lvreduce -m 0 /dev/vg00/lvol1 /dev/dsk/c0t5d0
8) lvreduce -m 0 /dev/vg00/lvol2 /dev/dsk/c0t5d0
9) lvreduce -m 0 /dev/vg00/lvol3 /dev/dsk/c0t5d0
10) vgreduce /dev/vg00 /dev/dsk/c0t5d0
11) vgdisplay -v # Verify vg00 only contains external disk c1t3d0

Here it is...

PJA.
PJA
David Navarro
Respected Contributor

Re: Data migration from Internal to External Raid Disk Subsystem

Hi, I think you wrote vg00 just as an example, and this VG is not really vg00. If it is, then you need to make a recovery tape.
If your VG is something other than vg00, then you have a few small errors in the syntax.

4) lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c1t3d0
5) lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c1t3d0
6) lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c1t3d0
7) lvreduce -m 0 /dev/vg00/lvol1 /dev/dsk/c0t5d0
8) lvreduce -m 0 /dev/vg00/lvol2 /dev/dsk/c0t5d0
9) lvreduce -m 0 /dev/vg00/lvol3 /dev/dsk/c0t5d0
10) vgreduce /dev/vg00 /dev/dsk/c0t5d0

These are the lines with changes; note that I add the device file both when creating the mirror and when reducing it.
And very important: back up everything before!!!
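A rough sketch of what "back up everything" could look like before touching the VG (the /tmp file names are just examples):

vgcfgbackup vg00                        # save the LVM configuration to /etc/lvmconf/vg00.conf
vgdisplay -v vg00 > /tmp/vg00.layout    # keep a record of the current LV/PV layout
lvlnboot -v > /tmp/vg00.bootinfo        # record boot, root and swap definitions (root VG only)
make_recovery -A                        # Ignite-UX recovery tape of the whole root VG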
JACQUET
Frequent Advisor

Re: Data migration from Internal to External Raid Disk Subsystem

Oops, just another point:
If this example really concerns vg00, where the HP-UX system lives, then what I told you before is not valid; it applies only to data disks that do not contain the OS. Otherwise you have to mirror the system disk and then de-mirror it:

pvcreate -B /dev/rdsk/c1t3d0
mkboot -l /dev/rdsk/c1t3d0
mkboot -a "hpux(;0) /stand/vmunix" /dev/rdsk/c1t3d0
vgextend /dev/vg00 /dev/dsk/c1t3d0
# lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c1t3d0
# lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c1t3d0
# lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c1t3d0
...

# lvlnboot -b /dev/vg00/lvol1
# lvlnboot -r /dev/vg00/lvol3
# lvlnboot -s /dev/vg00/lvol2

# lvlnboot -v
(to verify...)

And then reduce your LVs and your VG as I described in my first reply.
Then you have to change the primary boot path (from the boot console / ISL, or with setboot) so it points to your new disk.
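Roughly, from a running system that could look like this (the hardware path shown is only an example; use the one ioscan reports for your disk):

ioscan -fnC disk              # find the hardware path of c1t3d0
setboot -p 0/0/1/0.3.0        # make it the primary boot path
setboot                       # display primary/alternate paths to verify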

Another piece of advice: if you're planning to migrate your OS to another disk, keep a valid make_recovery backup!!!

PJA
PJA

Re: Data migration from Internal to External Raid Disk Subsystem

Hi,
I think you must distinguish between the root disk and other disks.
If you are talking about the root disk, you must do the following:
1. pvcreate -B /dev/rdsk/c1t3d0
2. mkboot /dev/rdsk/c1t3d0
3. mkboot -a "hpux (;0)/stand/vmunix" /dev/rdsk/c1t3d0
4. Add the physical volume to the volume group
vgextend /dev/vg00 /dev/dsk/c1t3d0
5. Mirroring the Logical Volumes
lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c1t3d0
lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c1t3d0
lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c1t3d0
lvlnboot -v /dev/vg00 (verify the boot, root and swap definitions)
6. Removing the mirror copies from the old disk
lvreduce -m 0 /dev/vg00/lvol1 /dev/dsk/c0t5d0
lvreduce -m 0 /dev/vg00/lvol2 /dev/dsk/c0t5d0
lvreduce -m 0 /dev/vg00/lvol3 /dev/dsk/c0t5d0
Check the old disk to see whether any LVs still have extents on it:
pvdisplay -v /dev/dsk/c0t5d0
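If pvdisplay shows no extents left on the old disk, it can then be removed from the volume group (the same command the other replies use):

vgreduce /dev/vg00 /dev/dsk/c0t5d0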

You can use this procedure, without the mkboot steps, for any other volume group.
Regards,
Mouamed



the world of unix is beautifull
Marcin Wicinski
Trusted Contributor

Re: Data migration from Internal to External Raid Disk Subsystem

Hi,

One more thing: if you have the online diagnostics installed, you should supplement the LIF area after the mirror is complete:

- compare LIF areas on the disks with

lifls /dev/rdsk/....

- supplement the LIF area with

mkboot -b /usr/sbin/diag/lif/updatediaglif2 -p -p... /dev/rdsk/....
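A concrete form of the above (the -p entries are the usual ISL/HPUX/AUTO LIF files to preserve; disk names are only examples):

lifls -l /dev/rdsk/c0t5d0      # original boot disk
lifls -l /dev/rdsk/c1t3d0      # new mirror; entries such as ISL, HPUX and AUTO should match
mkboot -b /usr/sbin/diag/lif/updatediaglif2 -p ISL -p HPUX -p AUTO /dev/rdsk/c1t3d0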


later,
Marcin Wicinski
Thierry Poels_1
Honored Contributor

Re: Data migration from Internal to External Raid Disk Subsystem

Hey,
how about using pvmove instead of mirroring and unmirroring??
It will do the same job, with less typing work ;-)
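For a non-boot volume group that could look roughly like this (a sketch; vgXX and the disk names are examples, and pvmove keeps the data available while it runs):

pvcreate /dev/rdsk/c1t3d0
vgextend /dev/vgXX /dev/dsk/c1t3d0
pvmove /dev/dsk/c0t5d0 /dev/dsk/c1t3d0     # move all extents off the internal disk
vgreduce /dev/vgXX /dev/dsk/c0t5d0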
good luck,
Thierry.
All unix flavours are exactly the same . . . . . . . . . . for end users anyway.
Wee Choon Toh
New Member

Re: Data migration from Internal to External Raid Disk Subsystem

Thanks for everyone's help. The other question is whether the same set of procedures can be used for both SCSI and FC adapter (e.g. A5158A) connections to an external RAID storage system?
Thanks and regards !
Sridhar Bhaskarla
Honored Contributor

Re: Data migration from Internal to External Raid Disk Subsystem

vg00 is your root volume group. Prepare a make_recovery tape and then ignite the external disk from that tape. Otherwise, the major steps are:

PRE. Create a make_recovery tape

1. Make c1t3d0 bootable

#DSK=/dev/dsk/c1t3d0
#RDSK=/dev/rdsk/c1t3d0
#pvcreate -B $RDSK
#mkboot -l $RDSK
#mkboot -a "hpux -lq(;0)/stand/vmunix" $RDSK
#cd /usr/sbin/diag/lif
#mkboot -vb updatediaglif2 -p ISL -p HPUX -p AUTO $DSK

2. Extend the logical volumes

/* For all the logical volumes in vg00, lvol1 (/stand) being the first, do: */

#lvextend -m 1 $LV $DSK
#lvlnboot -b /dev/vg00/lvol1 (lvol for /stand)
#lvlnboot -s /dev/vg00/lvol2 (swap)
#lvlnboot -r /dev/vg00/lvol3 (root)
#lvlnboot -R

/* Once this is done, make sure that all the disks appear as boot disks with the corresponding logical volumes */
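The "for all the logical volumes" part above could be scripted roughly like this (a sketch that assumes the default lvol1..lvol8 names and MirrorDisk/UX installed):

for LV in /dev/vg00/lvol*
do
    lvextend -m 1 $LV $DSK
done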

3. Boot from the other disk

#setboot -p (hardware path of $DSK)
#reboot

/* This should boot from your $DSK */
/* Now reduce the logical volumes from the original boot disk */

4. Remove the original disk

/* Check LIFs and AUTO string */
#lifls /dev/dsk/c1t3d0
#lifcp /dev/dsk/c1t3d0:AUTO -

/* for all the logical volumes $LV in ROOT disk */

#lvreduce -m 0 $LV /dev/dsk/c0t5d0
#vgreduce vg00 /dev/dsk/c0t5d0

The same procedure can be followed for the non-boot disks, except for the steps that make the disk bootable and booting from the other disk.

Now about your other question - YES, this can be used even on FC if you have a recent PDC (40.25 onwards). Otherwise it won't allow you to boot from Fibre Channel.

-Sri

You may be disappointed if you fail, but you are doomed if you don't try
linuxfan
Honored Contributor

Re: Data migration from Internal to External Raid Disk Subsystem

Hi Wee,

You said you are migrating the data from the N4000's internal disks (about 100 GB); as far as I know, N-class servers only have two internal disks and the maximum internal disk capacity is 72 GB.
You can look at
http://www.hp.com/products1/unixservers/midrange/nclass/specifications/index.html
which confirms my point.

To upgrade your PDC firmware to 40.25 you can install the patch PHSS_21769, but the latest firmware for the N4000 is 41.02 (patch PHSS_22657).
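If it helps, these firmware patches install like any other SD-UX patch; a rough sketch (the depot path is just an example, and the firmware is applied during the reboot):

swinstall -x autoreboot=true -s /tmp/PHSS_22657.depot PHSS_22657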

Also, you haven't indicated whether you only have /dev/vg00 or whether you have other VGs on the internal disks as well.

In any case, if you don't want to go through a whole lot of steps, the easiest would be to create a make_recovery tape, boot off the tape, and recover your OS to the new disks connected to your FC (A5158A) card (you will have to upgrade your firmware before you can do this). This also lets you boot back to your original disks if something should go wrong.

To create the make_recovery tape you could use something like
make_tape_recovery -x inc_entire=vg00 -I -v -a /dev/rmt/0mn
-HTH
Ramesh
They think they know but don't. At least I know I don't know - Socrates
Wee Choon Toh
New Member

Re: Data migration from Internal to External Raid Disk Subsystem

Sorry for not making the environment clearer.. 8(
I'm trying to gather the various methods of migrating the data from internal disks to an external RAID disk subsystem (FC-based connection).

The volume groups may be vg00 (the root volume group) and other volume groups vgXX.

From the responses, I understand the need to differentiate the root volume group vg00 from the other volume groups. I'm aware that the make_recovery tape is the preferred process, but I need to minimize downtime, so I have to opt for the mirroring method.
(1) Is there anything else to take note of if the mirroring method is used for the other volume groups?
(2) Can raw devices (Oracle) use the same mirroring method?
Thanks and regards !
Sridhar Bhaskarla
Honored Contributor

Re: Data migration from Internal to External Raid Disk Subsystem

Wee

1) Nothing to worry about. If a logical volume is already mirrored across the two internal disks, then instead of the -m 1 option you will specify -m 2, and later selectively remove the two mirror copies from the two internal disks:

lvreduce -m 1 /dev/vg00/lvol1 $DISK0
lvreduce -m 0 /dev/vg00/lvol1 $DISK1

This will leave the logical volume entirely on the new disk on the external disk array. You can mirror it to another disk on the external array later.
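As a sketch, the corresponding extend step (done before the two lvreduce lines above, with $NEWDISK being the array LUN) would be:

lvextend -m 2 /dev/vg00/lvol1 $NEWDISK    # add a third copy, on the external array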

2) This is a good question. You can do mirroring even for the RAW logical volumes, although I haven't done it myself before.

The same command lvextend will work.

lvextend -m 1 /dev/vgxx/raw_lvol $NEWDISK

Otherwise, during the downtime, you can create a new volume group on the external disks and do a dd from the old raw logical volumes to the new raw logical volumes.

dd if=/dev/vgxx/raw_lvol1 of=/dev/vgnewxx/raw_lvol1 bs=1024k

As mentioned earlier, for vg00 you need to follow the sequence of creating the boot disk, extending the mirrors, booting from the new disk, and then reducing the mirrors.

For a vgXX which is not bootable, you just need to extend the mirrors and then reduce them later.

-Sri
You may be disappointed if you fail, but you are doomed if you don't try
linuxfan
Honored Contributor

Re: Data migration from Internal to External Raid Disk Subsystem

Hi Wee,

The basic reason for this forum is to get suggestions and ideas and then, based on your circumstances, make an intelligent decision; it doesn't matter which method or approach you follow to solve your problem, what matters is that the problem gets solved.
As far as your questions are concerned,
Make sure the new disks are of similar or smaller size than your existing disks (e.g. if you have 2x36 GB, the new disks should be 36 GB or less). The reason is that you cannot use more physical extents on a disk than the VG's "Max PE per PV" setting allows; technically you can still add a larger disk, but you won't be utilizing its full capacity.
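A quick way to check this before adding the disk (the disk name is an example):

vgdisplay vg00 | egrep "Max PE per PV|PE Size"   # limits fixed when the VG was created
diskinfo /dev/rdsk/c1t3d0                        # size in KB; divide by the PE size to get the extents needed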
-HTH
Ramesh
They think they know but don't. At least I know I don't know - Socrates
Deshpande Prashant
Honored Contributor

Re: Data migration from Internal to External Raid Disk Subsystem

Hi Ramesh
I thought that with make_recovery tapes you can change the file system sizes as well as use higher capacity disks (e.g. go from existing 9 GB disks to larger 36 GB disks).

Prashant Deshpande.
Take it as it comes.
linuxfan
Honored Contributor

Re: Data migration from Internal to External Raid Disk Subsystem

Hi,

Yes, you can use make_recovery to migrate your OS to a bigger disk, but Wee was planning to use the mirroring option.

-regards
Ramesh
They think they know but don't. At least I know I don't know - Socrates