02-25-2015 05:21 AM
Service Guard new VGs on new storage
Hi there ITRCers!
The scenario: a Serviceguard cluster with two packages, and we need to migrate their data to new storage!
The plan?
1) Present LUNs to both nodes
2) ioscan -fnC disk on both nodes and insf -e
3) pvcreate -f /dev/rdsk/cXtXdX (do NOT run pvcreate on the alternate links)
Run the next two commands on both nodes:
# mkdir /dev/vg_NEW
# mknod /dev/vg_NEW/group c 64 0xNN0000 (where NN = a minor number unique across all VGs)
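To make sure the NN you pick really is unique, you can list the minor numbers already in use on each node first; a quick sketch, assuming the standard LVM device layout:

```shell
# List every VG group file with its minor number on this node.
# The second hex pair (e.g. 0x010000 -> NN=01) is already taken.
ls -l /dev/*/group
# Run on BOTH nodes and pick an NN that appears on neither.
```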
On the first node only:
# vgcreate /dev/vg_NEW /dev/dsk/cXtXdX (block device path; add the other PVs too)
# lvcreate -L <size_MB> /dev/vg_NEW
(repeat as necessary)
# vgexport -pvs -m /etc/lvmconf/vg_NEW.map /dev/vg_NEW
# rcp /etc/lvmconf/vg_NEW.map OTHERNODE:/etc/lvmconf/
On the other node:
# vgimport -vs -m /etc/lvmconf/vg_NEW.map /dev/vg_NEW (this populates /etc/lvmtab on the 2nd node)
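As a sanity check after the import (my own habit, not part of the original plan), you can briefly activate the VG on the second node and deactivate it again before it is made cluster-aware:

```shell
# Briefly activate the imported VG to verify the import worked.
vgchange -a y vg_NEW
vgdisplay -v vg_NEW   # confirm the PVs and lvols are all visible
vgchange -a n vg_NEW  # deactivate again before cluster configuration
```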
On the first node:
Edit the cluster configuration ASCII file:
- update the FIRST_CLUSTER_LOCK_VG with vg_NEW
Right now I have
FIRST_CLUSTER_LOCK_VG /dev/vgdp
What VG should I put in there?
In the ASCII file, in each node section, update FIRST_CLUSTER_LOCK_PV with one of the disks in the new VG
Right now I have
FIRST_CLUSTER_LOCK_PV /dev/dsk/c11t0d2
What should I put in there?
Add VOLUME_GROUP vg_NEW
Remove the old VG VOLUME_GROUP reference
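After editing the cluster ASCII file, the usual Serviceguard sequence is to verify the configuration and then apply/distribute it; a sketch, assuming the ASCII file lives at /etc/cmcluster/cluster.ascii (adjust the path to yours):

```shell
# Verify the edited cluster configuration before applying it.
cmcheckconf -v -C /etc/cmcluster/cluster.ascii
# Apply and distribute the binary configuration to all nodes.
cmapplyconf -v -C /etc/cmcluster/cluster.ascii
```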
Mount the new VG/lvols
Halt the package that relies on the OLD VG
<-- what is the right way to halt the package?
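For reference, Serviceguard's standard command for this is cmhaltpkg; a sketch, with pkg_OLD as a placeholder for your package name:

```shell
# Halt the package cleanly: Serviceguard runs the package halt script,
# which unmounts its file systems and deactivates its VGs.
cmhaltpkg pkg_OLD
cmviewcl -v   # confirm the package now shows as halted
```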
Activate the old VG in exclusive mode: vgchange -a e vg_OLD (note: vgchange -c y only marks a VG cluster-aware; it does not activate it)
Mount the related file systems.
Migrate the data from the old VG to the new VG logical volumes. (Use cpio or cp?)
Edit the package control script, updating these references:
VG[N]
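In a legacy package control script those references look roughly like the fragment below (a config sketch; array indices, lvol names and mount points are placeholders, so match them to yours):

```shell
# Package control script excerpt (legacy .cntl style):
# point the VG/LV/FS arrays at the new volume group.
VG[0]="vg_NEW"
LV[0]="/dev/vg_NEW/lvol1"
FS[0]="/your_mount_point"
FS_MOUNT_OPT[0]="-o rw"
FS_TYPE[0]="vxfs"
```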
Lastly: our nodes are currently attached to the CURRENT storage via direct attach, and the CURRENT storage is ALSO on our SAN. If we connect our nodes to the SAN and zone accordingly, will the EXISTING storage device files change or remain the same?
THANKS!!!!