Data Migration (new disk paths)
03-24-2003 09:21 AM
Is it as simple as doing a vgexport and then, after the migration, doing a vgimport with the mapfile and the new device paths?
Any thoughts?
As always, thanks for any direction on this.
03-24-2003 09:26 AM
Solution
1. Make sure you get a good full backup.
2. vgexport your VGs with the -p (preview) option, e.g. vgexport -p -s -m mapfile.map vgblah
3. Put the mapfiles in a safe place, on the network or on tape.
4. Swing your cables, etc.
5. mkdir /dev/vgblah
6. mknod /dev/vgblah/group c 64 0x010000 (the 0xNN0000 minor number must be unique)
7. vgimport -s -m mapfile.map vgblah
Enjoy!
Regards,
RZ
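A minimal end-to-end sketch of those steps for a single volume group (vgdata, the 0x020000 minor number and the copy destination are invented; substitute your own names and an unused minor number):

# on the old paths: preview-export to capture the mapfile without touching the VG
vgexport -p -s -m /tmp/vgdata.map vgdata
cp /tmp/vgdata.map /net/backuphost/maps/      # keep a copy off the box

# once the filesystems are unmounted, deactivate and export for real
vgchange -a n vgdata
vgexport vgdata

# after the cables are swung and ioscan sees the new paths
mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x020000         # minor number must be unique among VGs
vgimport -s -m /tmp/vgdata.map vgdata
vgchange -a y vgdata
mount -a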
03-24-2003 09:35 AM
Re: Data Migration (new disk paths)
There are multiple ways of doing it. I would do the following.
1. Generate the map files with various options
vgexport -p -v -s -m /tmp/vgxx.s.map vgxx
vgexport -p -v -m /tmp/vgxx.map -f /tmp/vgxx.disks vgxx
ll /dev/*/group > /tmp/vgxx.groups
Copy all of the above to another server.
2. Unmount all the filesystems and deactivate the VGs. You will need to shut down the applications/databases that are accessing the filesystems. Use fuser to find any processes that still have them open.
#umount /dev/vgxx/lvolx
(repeat it for all the lvols of VGs on EMC)
#vgchange -a n vgxx
#vgexport vgxx
Let them do the copying of data and complete your hardware maintenance. Once it is done, do
#ioscan -f
#ioscan -fnC disk > /tmp/disks.out
Have a quick look at this file and make sure you see the disks on all the paths.
3. Import VGs.
#mkdir /dev/vgxx
#mknod /dev/vgxx/group c 64 0x0?0000
(? should be unique. Or you can get it from the previous saved file 'grep vgxx /tmp/vgxx.groups')
#vgimport -v -s -m /tmp/vgxx.s.map vgxx
#vgchange -a y vgxx
#mount -a
(Repeat the above for all the VGs.)
(However, my personal choice is to use the -f option with vgimport, as it restores the disks to the lvmtab in the same order they were in before. If you customized the PV order, you will lose it if you use the "-s" option. One caveat with -f is that you must edit the disks in vgxx.disks to reflect the new ones; your job will be more difficult if EMC changed the LUN numbers on the new array.)
-Sri
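A sketch of the -f variant described in the parenthetical above (vgxx is a placeholder; the device paths and minor number are invented and must be matched against what ioscan reports on the new array):

# capture the mapfile plus the current PV order
vgexport -p -v -m /tmp/vgxx.map -f /tmp/vgxx.disks vgxx

# after the swing: edit vgxx.disks so every old path is replaced by the
# corresponding new one, e.g. /dev/dsk/c4t0d1 -> /dev/dsk/c10t0d1
vi /tmp/vgxx.disks

# import using the edited disk list so the PV order is preserved
mkdir /dev/vgxx
mknod /dev/vgxx/group c 64 0x030000           # or reuse the minor number saved in vgxx.groups
vgimport -v -m /tmp/vgxx.map -f /tmp/vgxx.disks vgxx
vgchange -a y vgxx
mount -a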
03-24-2003 09:39 AM
Re: Data Migration (new disk paths)
Yes, 'vgexport' and 'vgimport' are what you seek. The 'mapfile' defines the association of logical volume device file minor numbers to logical volume names (e.g. 1=lvol1, 2=mystuff, etc.). In the absence of a mapfile, default names are applied (lvol1, lvol2, etc.).
If you use the '-s' option during 'vgexport', the volume group ID (VGID) is added to the mapfile you create. This allows discovery of the physical volumes during the 'vgimport' process without requiring the specification of the pv-paths. This is quite convenient for the operation you will be doing.
In lieu of using the '-s' option, you can collect the pv-paths associated with your "old" volume group with the '-f outfile' argument to 'vgexport'. This file can be edited before use as the '-f infile' of a 'vgimport', assuming you know the pv-paths for the "new" server's volume group.
An advantage to using the '-f' option during the 'vgimport' process is that you can *order* your pv-paths to distribute alternate links between controllers in one operation instead of following the 'vgimport' process with 'vgreduce' and 'vgextend' commands to balance alternate links after the fact.
See the appropriate man pages for the aforementioned commands for more information.
Regards!
...JRF...
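To make that last point concrete, rebalancing links after an '-s' style import is a vgreduce/vgextend pair per path (the device files below are invented examples):

# see which path each PV came in on after vgimport -s
vgdisplay -v vgxx | grep "PV Name"

# if a primary link landed on the wrong controller, drop that path and
# add it back so it becomes the alternate link instead
vgreduce vgxx /dev/dsk/c4t0d1
vgextend vgxx /dev/dsk/c4t0d1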
03-24-2003 09:52 AM
Re: Data Migration (new disk paths)
03-24-2003 09:53 AM
Re: Data Migration (new disk paths)
Well, I prefer to handle migrating data from the old EMC to the new EMC using the host-based approach.
Doing it yourself (for me, anyway) means you can be certain the data gets migrated, and migrated right. EMC is very good at migrating, but going from old arrays to new arrays you often find you want to change things, like creating a new VG with a larger PE size so you don't run out of LUNs before you max out the new array (hit this on an old EMC I inherited here, argh). So may I suggest doing the migration yourself. It is more work, but well worth it in the long run.
Take a look at this thread; JRF references an earlier post about migrating. I found it worked extremely well and gave me the ability to get right back onto the old VGs in the event the migration didn't go well (of course everything went like silk...). Love that fbackup/frecover!!
http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0xb6e7e822e739d711abdc0090277a778c,00.html
Regards,
Rita
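For anyone unfamiliar with the safety net Rita mentions, a bare-bones fbackup/frecover pair looks roughly like this (the tape device and include paths are invented; see the man pages for graph files and the other options):

# full backup of the filesystems being migrated, writing an index file
fbackup -f /dev/rmt/0m -i /oradata -i /appdata -I /tmp/premigration.index -v

# worst case: restore everything from that tape
frecover -f /dev/rmt/0m -r -v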
03-24-2003 10:05 AM
Re: Data Migration (new disk paths)
But, guys, how do you migrate data without using the file system or application database structure?
Jim, since you're noting vgexport and vgimport I'll assume the file system structure and not raw disks as used in Informix, for example. But even with a tape restore there's got to be a file system structure of some kind. So how is EMC going to bypass this? The usual way is to add in the fc adapters from the new EMC array, create new vgs and then a temporary file system mount point for each original file system. Then to copy over the data between original and temporary file systems.
Once copied over, the new VG and logical volumes can be re-mounted onto the original mount points.
In short, all you're doing is re-directing the linking of the mount points from the original file system to newly created logical volumes and then updating /etc/fstab.
Once done, the JBOD and old vgs are vgexport / vgremove'd.
It's important, even critical, to use new VGs in this procedure since the sizes of your disks are going to change, probably getting bigger. Consequently, your entire old LVM structure is 90% obsolete. By retaining the old LVM structure using the vgexport -s option you're copying over an LVM header mapped to the old disk sizes.
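A sketch of that host-based copy for one filesystem (the VG, LV and mount-point names, sizes and the minor number are all invented; adjust to your environment):

# prepare the new array's disk and a new volume group
pvcreate /dev/rdsk/c10t0d0
mkdir /dev/vgnew
mknod /dev/vgnew/group c 64 0x040000          # unique minor number
vgcreate /dev/vgnew /dev/dsk/c10t0d0

# create an LV and filesystem sized for the data being moved
lvcreate -L 4096 -n lvora /dev/vgnew          # size in MB
newfs -F vxfs /dev/vgnew/rlvora

# mount it on a temporary mount point and copy the data across
mkdir /oradata.new
mount /dev/vgnew/lvora /oradata.new
cd /oradata && find . -depth -print | cpio -pdm /oradata.new

# re-point the mount: unmount both, mount the new LV on the old path,
# and update /etc/fstab to reference /dev/vgnew/lvora
umount /oradata.new
umount /oradata
mount /dev/vgnew/lvora /oradata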
03-24-2003 10:18 AM
Re: Data Migration (new disk paths)
But you're exactly right. This is the right time to make any adjustments to my VGs and LVs, and I need to keep that in mind. Because I'm going SCSI to Fiber I have the luxury of being connected to both Symms at the same time rather than cutting over one channel at a time.
Frederick... the disk sizes are different but the hypervolumes presented to me will be the same. I assumed EMC was just going to do a bit by bit copy to the new volumes.
I'm also going to have to re-write my TimeFinder BCV scripts. This project isn't looking like so much fun anymore.
You both raise good questions though.
Let me tell you, this forum is great. I'm the sole Unix Admin with 7 systems and little experience. This forum has always been there when I need an answer or direction.
03-24-2003 10:31 AM
Re: Data Migration (new disk paths)
It is one's choice. I have done both host-based migrations and array-based migrations.
For me it looks like it will be much more productive and easier if you do it through an array-based migration. Your production may slow down a bit during the synchronization process.
But if you attempt a host-based migration, there are a few risks involved. You will need to take care of removing the alternate device files from the VGs, drop the corresponding cable to the HBA and connect the new one (or change it on the switch/director), add the new disks into the volume groups, extend the mirrors, break the mirrors and take out the disks on the other path, connect the new cable, get the disks, populate the lvmtab and then extend the alternate links. Anything can go wrong anywhere if you are unlucky.
On the other hand, EMC/XP hardware sync/split technology is proven and carries minimal risk. You always have the old disks as a good backout.
-Sri
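For reference, the mirror-and-split sequence Sri outlines looks like this for one VG, assuming MirrorDisk/UX is installed (device paths are invented; repeat the lvextend/lvreduce pair for every LV in the group):

# add the new array's disk to the existing VG
pvcreate /dev/rdsk/c10t0d0
vgextend vgxx /dev/dsk/c10t0d0

# mirror each logical volume onto the new disk, then wait for the sync
lvextend -m 1 /dev/vgxx/lvol1 /dev/dsk/c10t0d0

# once synced, drop the mirror copy that lives on the old disk
lvreduce -m 0 /dev/vgxx/lvol1 /dev/dsk/c4t0d0

# when no LV uses the old disk any more, pull it out of the VG
vgreduce vgxx /dev/dsk/c4t0d0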
03-24-2003 01:30 PM
Re: Data Migration (new disk paths)
We went from an FC60 (fibre) to a VA7400 (fibre). The main reason for the migration was to increase the disk space. I basically did a dd of all the logical volumes on the FC60 storage system to the VA7400. NOTE: NOT the disks, so I had to create volume groups and LVs accordingly. The actual dd process was fine; the database came up and no data was lost. However, we later hit a problem that we (well, I) did not foresee. I forgot that there is a maximum number of logical volumes per volume group (255). We also could have saved ourselves a LOT of hassle if we had put some forethought into the original VG creation process (some years back, when space was not an issue!), e.g. we had a "vgdatab" with 12 disks of 18GB each and used the LVM defaults.
If we had
o used a max PE per PV of 2x or 4x the original disk size
o used a max PV per VG of 2x the original design
o used LVM distributed stripes, so that even though we used HW mirroring/protection we could still use LVM mirroring to migrate/move data
o lastly, realised/remembered that there is a limit of 255 LVs per VG, so if you create (on average) 2GB LVs for your database you will only be able to use about 500GB per VG before running out of LV descriptors.
Alas, we did not, so we had to take the 6-hour outage for the dd (ouch). AND now we can only address 500GB of our 800GB volume group (double ouch!).
Good luck with your project. If the above helps prevent any "oopses", great.
Tim
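To make that hindsight concrete, a more generously sized VG creation and the raw dd copy look roughly like this (names, sizes and limits are invented examples; pick values that fit your array and database):

# create the new VG with headroom: bigger extents, more PVs, more PEs per PV
pvcreate /dev/rdsk/c10t0d0
mkdir /dev/vgdatab2
mknod /dev/vgdatab2/group c 64 0x050000
vgcreate -s 16 -p 32 -e 8192 -l 255 /dev/vgdatab2 /dev/dsk/c10t0d0

# copy one LV raw-to-raw; the target LV must already exist and be at
# least as large as the source
dd if=/dev/vgdatab/rlvol1 of=/dev/vgdatab2/rlvol1 bs=1024k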