Operating System - Linux
05-11-2010 08:02 PM
disk path change
Hi,
I have a two-node Linux cluster running on MC/ServiceGuard, using an LVM volume group. What are the proper steps to move the cluster data to a different set of disks (the device paths will change)? Do I still need to vgexport the volume group to a map file? Should I reapply the cluster configuration?
abc
3 REPLIES
05-11-2010 11:23 PM
Re: disk path change
Linux LVM is different from HP-UX LVM: in Linux, map files don't exist and "vgexport" means a very different thing.
You can move the data in four steps (a command sketch follows below):
1.) add the new disks as new PVs to the VG (first pvcreate, then vgextend)
2.) use pvmove to move the data to the new disks
3.) once the old disks are free, use vgreduce to remove them from the VG
4.) on the inactive node, run "vgscan -vv" to make it learn the new VG configuration and to verify that it can see the new disks.
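For example, assuming the VG is named vg01, the old disks are /dev/sda and /dev/sdb, and the new ones are /dev/sdc and /dev/sdd (all placeholder names, adjust to your system):

pvcreate /dev/sdc /dev/sdd        # 1. initialize the new disks as PVs
vgextend vg01 /dev/sdc /dev/sdd   #    ...and add them to the VG
pvmove /dev/sda                   # 2. migrate all extents off each old disk
pvmove /dev/sdb                   #    (works while the VG is active)
vgreduce vg01 /dev/sda /dev/sdb   # 3. drop the now-empty old disks from the VG
vgscan -vv                        # 4. on the inactive node: rescan and verify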
If your package configuration includes direct references to the old disks (e.g. disk availability monitoring), you must change these to point to the new disks instead. But if your package configuration only references the disks using the VG name, there is no reason to modify the package configuration at all.
This is because Serviceguard for Linux does not have a kernel-based "cluster mode" for VGs like HP-UX Serviceguard has.
NOTE:
Linux LVM tools *do* include a "vgchange -c y" function, but it does *not* work with ServiceGuard.
It is for RedHat Cluster only: if you switch a VG to cluster mode and don't have RedHat cluster daemons running, you'll just make the VG inaccessible. Undoing this mistake non-destructively requires a little trick that is documented in the RedHat Knowledge Base:
http://kbase.redhat.com/faq/docs/DOC-3619
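If you ever need to check or undo it, the approach looks roughly like this sketch (vg01 is a placeholder; the locking_type override is the commonly cited recovery, but verify against the KB document above before relying on it):

vgs -o vg_name,vg_attr vg01       # a "c" in the attributes means the clustered flag is set
vgchange -c n --config 'global {locking_type = 0}' vg01   # clear the flag without cluster locking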
(Guess why I know this... :)
MK
05-12-2010 10:57 PM
Re: disk path change
Another way (a command sketch follows below):
1. add the new disks
2. create new VGs and filesystems
3. mount the new filesystems on new mount points
4. shut down the package and mount the old filesystems on a temporary mount point, so that the data won't get changed
5. copy the data from old to new with dd or cpio
6. unmount all filesystems and put the new entries in the package control file
7. start the package.
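For instance, steps 4-6 might look like this (package, VG, and mount-point names are placeholders for your own):

cmhaltpkg pkg1                            # stop the package
mount -o ro /dev/vg01/lvol1 /mnt/old      # old FS read-only, so data can't change
mount /dev/vg02/lvol1 /mnt/new            # new FS
cd /mnt/old && find . -depth -print | cpio -pdm /mnt/new   # file-level copy keeping ownership/permissions
umount /mnt/old /mnt/new
# edit the package control file: point the VG and FS entries at vg02 and the new mounts
cmrunpkg pkg1                             # start the package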
Good luck
Prasanth
05-24-2010 03:56 AM
Re: disk path change
Hi,
The migration method will be EMC SAN Copy, so it is basically a block-level copy of each LUN from the old storage to the new storage. Once the SAN Copy completes, the target LUN will be an exact image of the source.
The problem is that cluster.conf points to /dev/sdb1 for the lock LUN, and the lock LUN will also be migrated with SAN Copy. I would like to know how to update the config file after we complete the migration.
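My guess at the flow, based on the Serviceguard for Linux manuals, is something like the sketch below (the cluster name and device path are placeholders, and I am assuming CLUSTER_LOCK_LUN is the right parameter in our version); can someone confirm?

cmhaltcl -f                           # halt the whole cluster during the change window
cmgetconf -c mycluster cluster.ascii  # dump the current cluster configuration
# edit cluster.ascii: under each NODE_NAME, change
#   CLUSTER_LOCK_LUN /dev/sdX1        # the new path of the copied lock LUN
cmcheckconf -C cluster.ascii          # validate the edited file
cmapplyconf -C cluster.ascii          # apply the new configuration
cmruncl                               # restart the cluster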
abc