Operating System - HP-UX
Changing VGs from a cluster
09-05-2005 10:08 AM
Hi folks,
We have been managing several HP-UX boxes for quite a long time, but we are not experts on MC/ServiceGuard, so forgive this somewhat basic question.
The scenario: two VGs, one package, and a migration to a different storage array.
How things are now: the package is using two VGs: one "in" the cluster and the other "outside" (the new disks from the new storage). The other configured VG is no longer in use (its data was already migrated) and belongs to the old storage.
Yes, the cluster is inconsistent right now.
What should be done:
- Add the new disks (a brand-new VG) to the cluster.
- Migrate the data on the old disks to new disks (another new VG) and add that VG to the cluster.
- Discard the older VGs.
To put it simply: how can we add a VG to the cluster, and how can we withdraw a VG from the cluster (this implies data migration)? Of course the package must be stopped, but what should I take into account when planning this change? And what if one of the old VGs holds the cluster lock disk?
Thanks in advance.
Filipe.
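As context for the question, the general Serviceguard workflow for adding or removing a VG from a running cluster's configuration can be sketched as below. This is a hedged sketch, not the poster's environment: the cluster name, file path, and VG name are placeholders, and exact options vary by Serviceguard version.

```shell
# Dump the current cluster configuration to an editable ASCII file
cmgetconf -c clustername /etc/cmcluster/cluster.ascii
# Edit the file: add or remove VOLUME_GROUP entries (and lock-disk
# parameters if the lock VG is changing)
vi /etc/cmcluster/cluster.ascii
# Validate, then apply; packages using the affected VGs must be halted
cmcheckconf -C /etc/cmcluster/cluster.ascii
cmapplyconf -C /etc/cmcluster/cluster.ascii
# Mark a newly added VG as cluster-aware
vgchange -c y /dev/vg_NEW
```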
Solved! Go to Solution.
3 REPLIES
09-05-2005 10:25 AM
Solution
Hi,
Here is a thread that covers recreating a VG with a different configuration:
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=766000
And here is one with the steps for changing the cluster lock:
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=260487
The version of Serviceguard discussed there may be different.
HTH,
Devender
Impossible itself mentions "I m possible"
09-06-2005 12:10 AM
Re: Changing VGs from a cluster
First, create the new VG and its logical volumes, then vgimport it onto the second Serviceguard node.
# pvcreate -f /dev/rdsk/c-t-d- (repeat for each disk)
Perform the next two commands on both nodes:
# mkdir /dev/vg_NEW
# mknod /dev/vg_NEW/group c 64 0xNN0000 (where NN = a minor number unique on both nodes)
On the first node only:
# vgcreate /dev/vg_NEW /dev/dsk/c-t-d-
# lvcreate -L <size in MB> /dev/vg_NEW
(repeat as necessary)
# vgexport -pvs -m /etc/lvmconf/vg_NEW.map /dev/vg_NEW
# rcp /etc/lvmconf/vg_NEW.map OTHERNODE:/etc/lvmconf/
On the other node:
# vgimport -vs -m /etc/lvmconf/vg_NEW.map /dev/vg_NEW (this populates /etc/lvmtab on the second node)
On the first node:
Edit the cluster configuration ASCII file:
- update FIRST_CLUSTER_LOCK_VG to vg_NEW
- in each node section, update FIRST_CLUSTER_LOCK_PV with one of the disks in the new VG
- add a VOLUME_GROUP vg_NEW entry
- remove the old VG's VOLUME_GROUP entry
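After those edits, the relevant fragment of the cluster ASCII file might look roughly like this. The cluster name, node names, and disk path are hypothetical placeholders; a real file contains many more parameters.

```
CLUSTER_NAME            my_cluster
FIRST_CLUSTER_LOCK_VG   /dev/vg_NEW

NODE_NAME               node1
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c-t-d-

NODE_NAME               node2
  FIRST_CLUSTER_LOCK_PV /dev/dsk/c-t-d-

VOLUME_GROUP            /dev/vg_NEW
```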
Mount the new VG's lvols
Halt the package that depends on the OLD VG
Activate the old VG: vgchange -a e /dev/vg_OLD
Mount the related file systems.
Migrate the data from the old VG to the new VG's logical volumes (cpio works well for this).
Edit the package control script, updating these references:
VG[N]
LV[N].....
Copy the package control script to the other node
Leave vg_NEW activated
cmapplyconf -f -C <cluster ASCII file> (this will update the cluster lock VG, mark vg_NEW as a clustered VG, and de-cluster the old VG)
Deactivate vg_NEW.
Start the package - if it fails to start, see the package control log file for the cause.
09-08-2005 02:26 AM
Re: Changing VGs from a cluster
Thanks folks, especially Stephen.