08-08-2005 11:40 PM
Breaking a two node cluster and mounting the storage box on resp. Node
I would like to know the steps to break a two-node cluster whose HP arrays are mirrored through LVM mirroring. I want to disable the cluster, break the mirror, and mount the volume groups on both nodes. I would then carry out an upgrade on the primary side while users can still access the DB on the failover side in query mode. After the upgrade I would like to add the failover node back into the cluster and resync the failover array from the primary array.
Any input on the above case is most welcome.
Thanks in advance.
We are not using any tools such as snapshots or BCVs. The storage is raw devices.
08-09-2005 12:13 AM
Re: Breaking a two node cluster and mounting the storage box on resp. Node
Steps to break the cluster:
1. First, halt the package and the cluster.
# cmhaltpkg <pkg_name>
# cmhaltcl
2. Delete the package and cluster configuration by running the command below on either node.
# cmdeleteconf -c cluster_name -p Pkg_name
3. Export the VG on the node where it is not to be retained.
# vgexport <vg_name>
4. Split the mirrored LVs.
# lvsplit -A n -s <suffix> /dev/<vg_name>/<lv_name>
Now you can delete the split LVs and reduce the VG as needed; a rough consolidated sketch of these commands follows below.
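As an illustration only, using hypothetical names (package pkg1, cluster prodcl, VG /dev/vg01, mirrored LV lvol1, disk c1t4d0) that you must replace with your own:
# On either node, with the application down:
cmhaltpkg pkg1
cmhaltcl
cmdeleteconf -c prodcl -p pkg1
# If a VG is still marked cluster-aware afterwards, see the -c option in vgchange(1M).
# On the node that is giving up the VG:
vgexport /dev/vg01
# On the node that keeps the VG active:
lvsplit -s bkup /dev/vg01/lvol1       # split the mirror; creates /dev/vg01/lvol1bkup
lvremove /dev/vg01/lvol1bkup          # drop the split copy if it is no longer needed
vgreduce /dev/vg01 /dev/dsk/c1t4d0    # then reduce the VG by the freed disk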
Hope this helps.
Regards,
Rajesh
08-09-2005 12:31 AM
Re: Breaking a two node cluster and mounting the storage box on resp. Node
Are you talking about upgrading Serviceguard, the database, or something else?
08-09-2005 12:36 AM
Re: Breaking a two node cluster and mounting the storage box on resp. Node
You don't need to, or even want to, delete the package or cluster. An overview of the steps that you're looking for (a command-level sketch follows the list):
1. Halt package
2. Reactivate volume group(s)
3. lvsplit LVs. Ensure they're on a distinct subset of the VG's PVs. In other words, the split LVs must be the only things on one set of disks and the original LVs must be the only thing on another set of disks.
4. Deactivate the VG
5. vgchgid the PVs associated w/split LVs
6. Reactivate the VG and clean up the now missing LVs and PVs. vgreduce -f and vgscan are two good options to investigate.
7. Mount everything where needed & perform your upgrade. Import the split VG using a different name and group file minor number.
8. Once done and everything's working to your satisfaction, unmount everything on the primary node and deactivate the VGs.
9. Restart the package.
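A sketch of steps 1 through 6, with hypothetical names (package pkg1, VG /dev/vg01, mirrored LV lvol1, split copies landing on disks c1t4d0 through c1t6d0); verify each command against its man page and your own layout before using any of it:
cmhaltpkg pkg1                                   # 1. halt the package
vgchange -a y /dev/vg01                          # 2. reactivate the VG outside the package (use -a e if it is cluster-aware and the cluster is up)
lvsplit -s bkup /dev/vg01/lvol1                  # 3. split each mirrored LV (creates lvol1bkup)
vgchange -a n /dev/vg01                          # 4. deactivate the VG
vgchgid /dev/dsk/c1t4d0 /dev/dsk/c1t5d0 /dev/dsk/c1t6d0   # 5. new VGID on the split PVs (check vgchgid(1M) for the expected device file form)
vgchange -a y /dev/vg01                          # 6. reactivate (add -q n if the quorum check fails); expect missing-PV warnings
lvremove -f /dev/vg01/lvol1bkup                  #    drop the now-missing split LV entries
vgreduce -f /dev/vg01                            #    and the missing PVs (see vgreduce(1M)/vgscan(1M))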
At some time later, when you're convinced that the upgrade was successful, deactivate the split mirrors on the adoptive node and add the PVs to the vg on the primary node. Remirror everything. Don't forget to export/import VGs to the adoptive node.
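For that later remirroring, a minimal sketch under the same hypothetical names; pvcreate -f destroys the split copies, so only do this once you no longer need the failback:
pvcreate -f /dev/rdsk/c1t4d0                     # wipe the old LVM header on a returning disk
vgextend /dev/vg01 /dev/dsk/c1t4d0               # add it back into the primary VG
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c1t4d0    # re-add a mirror copy of each LV onto the returned disks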
If things should get borked beyond all recall, your failback position is the split mirrors. Deactivate them on the adoptive node, reactivate them on the primary node. This will entail changing the VG name which is done on import and the LV names which can be done when the VG is imported but deactivated.
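And a sketch of that failback, again entirely with hypothetical names (assuming the split VG was imported on the adoptive node as /dev/vgsplit; the map file path, the new VG name /dev/vg01new, and the minor number are all examples):
# On the adoptive node:
vgchange -a n /dev/vgsplit
vgexport -m /tmp/vgsplit.map /dev/vgsplit        # the map file carries the LV names; copy it to the primary node
# On the primary node, under a new VG name and a free minor number:
mkdir /dev/vg01new
mknod /dev/vg01new/group c 64 0x020000
vgimport -m /tmp/vgsplit.map /dev/vg01new /dev/dsk/c1t4d0 /dev/dsk/c1t5d0 /dev/dsk/c1t6d0
vgchange -a y /dev/vg01new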
That should cover the steps that you need. Ensure you know, down to the command line level, how to do each step before starting. You want to do all your planning up front. Come the day you pull the trigger, you don't want to be doing any thinking - just follow the checklist.
HTH;
Doug O'Leary
------
Senior UNIX Admin
O'Leary Computers Inc
linkedin: http://www.linkedin.com/dkoleary
Resume: http://www.olearycomputers.com/resume.html
08-09-2005 03:14 PM
Re: Breaking a two node cluster and mounting the storage box on resp. Node
Some clarifications, please. There are two EVAs, each connected to one of the nodes.
Can't I use lvreduce and then vgreduce to simply remove the EVA connected to the secondary node from the primary node's VGs?
Then export all the VGs from the secondary node and create new VGs by importing the disks on its local EVA?
Will this work?
And once the application on the primary server is upgraded, deactivate the secondary node's VGs and copy the primary node's VG configuration to the secondary node?
And then sync the primary copy to the secondary EVA.
Is this okay? Am I missing something?
I have never used vgchgid before.
Thanks
08-10-2005 01:21 AM
Re: Breaking a two node cluster and mounting the storage box on resp. Node
I don't know what an EVA is. From the context, it sounds like it's a disk array. Assuming that's the case, it should be irrelevant.
Some background may make things clearer. When you pvcreate a disk, the system, in effect, repartitions the disk and puts an LVM header on it. When you add a PV to a VG, the volume group ID (VGID) is added to that LVM header.
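As a tiny illustration of that layering, with hypothetical device and VG names:
pvcreate -f /dev/rdsk/c2t0d0                 # writes the LVM header onto the disk
mkdir /dev/vgdemo
mknod /dev/vgdemo/group c 64 0x030000        # group file: major 64, unused minor number
vgcreate /dev/vgdemo /dev/dsk/c2t0d0         # stamps the new VGID into that header
pvdisplay /dev/dsk/c2t0d0 | grep "VG Name"   # the PV now reports which VG it belongs to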
There are a couple of issues with simply reducing the LVs and trying to bring up the reduced LVs on the secondary system.
1. The lvreduce command removes the LV information from the system tables. I'm not 100% sure it also removes it from the LVM metadata on the disk; however, I believe it does. Therefore, you wouldn't be able to reduce the LV, move the disk on which the reduced copy lived to a different system, and hope to access it, because the LV no longer exists in the on-disk LVM metadata.
2. Even if that's not the case, the LVM header is still going to contain the VGID of a volume group that's already known on the adoptive node. When you try to import the disk, LVM is going to freak because you're trying to import an already-imported VG.
3. Simply reducing the LVs doesn't ensure that they end up on discrete disks. Remember that a VG is created at the physical volume level. A disk can belong to one and only one volume group. When you split these disks out, you need to make sure that all the split LVs are on one set of disks and all the other LVs are on a different set of disks; otherwise you will corrupt the volume group information and you'll be restoring from tape.
vgchgid changes the VGID on a set of disks. Let's say you have 6 disks in the VG. When you split the LVs, you ensure the primary LVs are on disks 1-3 and the splits are on 4-6. Using pseudo c/t device names, for instance:
/dev/dsk/c1t1d0
/dev/dsk/c1t2d0
/dev/dsk/c1t3d0
/dev/dsk/c1t4d0
/dev/dsk/c1t5d0
/dev/dsk/c1t6d0
When you start, the VGID is the same on all PVs. You split everything out, deactivate the VG, then run vgchgid on the last three disks:
vgchgid /dev/dsk/c1t4d0 /dev/dsk/c1t5d0 /dev/dsk/c1t6d0
That effectively makes the last three disks part of a new, different VG, which can then be imported on the adoptive node. When you try to reactivate the original VG, it's going to complain about missing disks and LVs. That's where vgreduce and vgscan come in.
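To pick up those last three disks on the adoptive node, a sketch with hypothetical names (VG /dev/vgsplit, minor number 0x010000; choose a name and minor number that are free on that node):
mkdir /dev/vgsplit
mknod /dev/vgsplit/group c 64 0x010000      # LVM group file: major 64, unused minor number
vgimport -v /dev/vgsplit /dev/dsk/c1t4d0 /dev/dsk/c1t5d0 /dev/dsk/c1t6d0
vgchange -a y /dev/vgsplit                  # add -q n if the PV quorum check fails
vgcfgbackup /dev/vgsplit
# This VG's metadata still lists the primary-side LVs and PVs it can no longer see;
# clean those up the same way (remove the stale LV entries, then vgreduce -f).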
Hopefully, that clears things up for you.
Doug
------
Senior UNIX Admin
O'Leary Computers Inc
linkedin: http://www.linkedin.com/dkoleary
Resume: http://www.olearycomputers.com/resume.html