Disk group migration (Operating System - HP-UX)
06-12-2010 02:05 AM
I'm dealing with HP-UX 11i v3 and an EVA8100 with a temporary Business Copy license. The EVA presents several LUNs to HP-UX from one disk group (let's call it DG_One). These LUNs are all in a single volume group.
I added some enclosures with disks and created another disk group (let's call it DG_Two). So the question is: how do I move the LUNs from DG_One to DG_Two with minimum downtime?
I have an approximate plan:
1) shut down the applications that are using the storage
2) unmount all filesystems on these LUNs
2.1) maybe shut down the system?
3) make snapclones of all LUNs and wait until replication finishes
4) unpresent all the old LUNs
5) present the new ones with the old LUN IDs
5.1) boot the system
I'm not sure it's a good plan, and I'd be happy if anyone criticized it. I've also read this thread: http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1276277&admit=109447626+1252752400823+28353475 about the problems of changing LUNs online; any comments on it would be much appreciated as well.
Philipp.
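A minimal sketch of the host-side quiesce in steps 1 and 2, assuming a volume group named /dev/vg_one and illustrative mount points (all names here are hypothetical):

```shell
# Stop the applications first, then unmount every filesystem in the VG
# (mount points below are examples only)
umount /oradata
umount /oralogs

# Deactivate the volume group so LVM holds no open references to the LUNs
vgchange -a n /dev/vg_one
```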
Solved! Go to Solution.
06-12-2010 03:00 AM
Solution
Refer to the following links for similar discussions:
http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1274278296691+28353475&threadId=1321868
http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1272100878798+28353475&threadId=1354054
http://forums11.itrc.hp.com/service/forums/questionanswer.do?admit=109447626+1273478940508+28353475&threadId=696657
Hope this helps.
Regards,
Murali
06-12-2010 04:48 AM
Re: Disk group migration
As I can see from these threads, the least complicated method is just what I wrote, but it means downtime at least until all the LUNs finish the snapclone process. I also forgot about the LUN WWNs, so my plan should look like this:
1) shut down the applications that are using the storage
2) unmount all filesystems on these LUNs
3) unpresent all the old LUNs
4) make snapclones of all LUNs, changing the default WWNs to those of the source LUNs, and wait until replication finishes
5) present the new ones with the old LUN IDs
As for step 2.1, it looks like it's not mandatory in this case.
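On the EVA side, the snapclone in step 4 can be scripted with SSSU (the Storage System Scripting Utility for Command View EVA). The sketch below is purely illustrative: the manager/system/vdisk names are hypothetical, and option spelling can differ between Command View versions, so check the SSSU reference for your release:

```shell
# SSSU sketch: create a snapclone of a source vdisk into DG_Two
# (all names and options here are assumptions; verify against your
#  Command View EVA / SSSU documentation before use)
sssu "SELECT MANAGER cv-server USERNAME=admin PASSWORD=secret" \
     "SELECT SYSTEM EVA8100" \
     "ADD COPY lun1_clone VDISK=\"\Virtual Disks\lun1\" DISK_GROUP=\"\Disk Groups\DG_Two\""
```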
06-12-2010 05:11 AM
Re: Disk group migration
>> As i can see from this threads, the less complicated method is just what i
>> wrote
Yes, your set of steps does look optimized.
Let's see what other forum members have to say about this.
>> And i forgot about LUNs WWN
Yes, right. The unique WWN for every target has to be maintained.
When you do eventually try out your plan, let us know how that goes.
Good luck.
Regards,
Murali
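On HP-UX 11i v3 you can inspect the WWID a presented LUN reports, which helps confirm that a clone really carries the source LUN's identifier. A small sketch (the device file name is an example):

```shell
# Show the WWID attribute of a given disk LUN
# (disk10 is an illustrative device; pick yours from ioscan output)
scsimgr get_attr -D /dev/rdisk/disk10 -a wwid
```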
06-12-2010 06:04 AM
Re: Disk group migration
=====
#dmesg
.....
class : lunpath, instance 47
lun path (class = lunpath, instance = 47) belonging to LUN (default minor = 0x10) has gone offline. The lunpath hwpath is 1/0/6/1/0.0x50001fe15012c3ea.0x4006000000000000
class : lunpath, instance 38
lun path (class = lunpath, instance = 38) belonging to LUN (default minor = 0x10) has gone offline. The lunpath hwpath is 1/0/6/1/0.0x50001fe15012c3e8.0x4006000000000000
class : lunpath, instance 77
lun path (class = lunpath, instance = 77) belonging to LUN (default minor = 0x10) has gone offline. The lunpath hwpath is 1/0/14/1/0.0x50001fe15012c3e9.0x4006000000000000
class : lunpath, instance 83
lun path (class = lunpath, instance = 83) belonging to LUN (default minor = 0x10) has gone offline. The lunpath hwpath is 1/0/14/1/0.0x50001fe15012c3eb.0x4006000000000000
LVM: VG 64 0x010000: PVLink 3 0x000010 Failed! The PV is not accessible.
=====
Now Oracle has found its files, and it looks like my plan worked fine. Unfortunately, with downtime...
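For reference, after re-presenting the clones the usual sequence to pick up the LUNs and bring the volume group back would look roughly like this (VG name and mount point are illustrative):

```shell
# Rescan for the re-presented LUNs and create any missing device files
ioscan -fnNC disk
insf -eC disk

# Reactivate the volume group and remount the filesystems
vgchange -a y /dev/vg_one
mount /oradata
```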
06-13-2010 02:03 AM
Re: Disk group migration
> LVM: VG 64 0x010000: PVLink 3 0x000010 Failed! The PV is not accessible.
Your procedure is a shortcut of the procedure I would use; i.e., I would try to avoid the above message.
>1) shutdown applications that are using the storage
>2) unmount all filesystems on this LUNs
I.e., after unmounting the filesystems, do a vgexport preview to generate the mapfile, followed by a vgexport of the volume group that contains the OLD LUNs.
> 3) unpresent all old LUNs
> 4) make a snapclones of all LUNs with changing default WWname with the source LUN ones, wait until it ends replication
> 5) present new ones with old LUN ids
Then vgimport the volume group with the "new" "old LUN IDs".
NOTE: I would probably throw in a vgchgid before the vgimport, for safety.
NOTE2: I'm not too sure that after "your" step 5, dmesg wouldn't ask you to execute a scsimgr replace_wwid to get access to the "new old LUN ID" LUN.
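The export/import variant described above can be sketched as follows; the VG name, minor number, and device files are illustrative, so adapt them to your configuration:

```shell
# Preview the export and write a mapfile recording the LV names
vgexport -p -v -m /tmp/vg_one.map /dev/vg_one
# Then actually export the VG before unpresenting the old LUNs
vgexport -v /dev/vg_one

# ... EVA side: snapclone, unpresent the old LUNs, present the clones ...

# Optionally stamp a new VGID on the clones before importing, for safety
vgchgid /dev/rdisk/disk10 /dev/rdisk/disk11

# Recreate the VG node and import from the mapfile
mkdir /dev/vg_one
mknod /dev/vg_one/group c 64 0x010000
vgimport -v -m /tmp/vg_one.map /dev/vg_one /dev/disk/disk10 /dev/disk/disk11
vgchange -a y /dev/vg_one
```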
> Now oracle took his files and looks like
> my plan worked fine. Unfortunately with
> downtime...
That's why you have host (volume manager) mirroring, which allows for online "migration".
Greetz,
Chris
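The online migration Chris alludes to is the classic LVM mirror-and-split, which needs MirrorDisk/UX. A rough sketch, with hypothetical device names and a single logical volume (repeat the lvextend/lvreduce pair for each LV in the VG):

```shell
# Add the new DG_Two LUN to the existing volume group
pvcreate /dev/rdisk/disk20
vgextend /dev/vg_one /dev/disk/disk20

# Mirror the logical volume onto the new PV (requires MirrorDisk/UX);
# the data stays online while the mirror syncs
lvextend -m 1 /dev/vg_one/lvol1 /dev/disk/disk20

# Once the resync completes, drop the copy on the old LUN and remove it
lvreduce -m 0 /dev/vg_one/lvol1 /dev/disk/disk10
vgreduce /dev/vg_one /dev/disk/disk10
```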
06-13-2010 02:26 AM
Re: Disk group migration
Chris, would it have been possible to use mirroring in my situation without MirrorDisk/UX installed?
Philipp.
06-13-2010 01:45 PM
Re: Disk group migration
> Chris, would it have been possible to use
> mirroring in my situation without MirrorDisk/UX installed?
No, it wasn't.
But MirrorDisk/UX costs virtually nothing and is also included in most OEs except the base OE, so the cost shouldn't be prohibitive for any corporation that can afford an HP-UX/Oracle environment, imo. ;)
Greetz,
Chris
06-13-2010 09:41 PM
Re: Disk group migration
Chris, you live in a better world than I do :)
Philipp.