03-27-2010 10:39 PM
adding additional FC paths to existing configured SAN disks
Hi guys
I need to add additional FC paths to already-configured LVM SAN disks on RHEL 5.3.
I have 5 servers that will be shut down and have their SAN allocation re-presented with dual paths rather than a single path: the existing hardware address will remain, but there will then be an additional one.
I also have a disk that will be re-presented with completely new paths.
I have been trying to find the right process to get this done, but all the information seems to contradict itself as to the correct process to use.
My thought is that on the five servers re-presented with the same path plus one more, I can just add the additional path into LVM as normal.
On the server which will have completely new paths, will a vgexport and vgimport work?
Any advice really appreciated and rewarded.
Regards,
Andrew
1 REPLY
03-28-2010 02:17 AM
Re: adding additional FC paths to existing configured SAN disks
The LVMs of Linux and HP-UX are similar in usage, but *not* identical.
(Note: there are two versions of Linux LVM. The old version is now called LVM1: it was used with 2.4.* kernel series. The 2.6.* kernels all use LVM2, and that's the version I'm writing about.)
One of the important differences is that unlike HP-UX LVM, Linux LVM has *no* built-in facility for dealing with multiple paths to a PV. In Linux, multipathing is handled with a separate, optional "layer" between the LVM and the disk devices.
RedHat recommends using "dm-multipath", available in the RHEL 4.x and RHEL 5.x distributions. It presents an additional /dev/mapper/* device for each multipathed disk. The name of the device can be /dev/mapper/mpathN (the RedHat default), the LUN's WWID, or a custom alias defined in /etc/multipath.conf.
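To illustrate the naming options, a minimal /etc/multipath.conf fragment might look like the following. The WWID and the alias "oradata1" are made-up examples for illustration, not values from this thread:

defaults {
    # use friendly mpathN names instead of raw WWIDs
    user_friendly_names yes
}

multipaths {
    multipath {
        # hypothetical WWID; take the real one from "multipath -ll" output
        wwid  360000000000000000e00000000010001
        # this LUN will then appear as /dev/mapper/oradata1
        alias oradata1
    }
}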
Here's the manual for dm-multipath:
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/DM_Multipath/index.html
If you prefer to use something else, like HP SecurePath or EMC PowerPath, find out how it presents multipathed devices and what LVM configuration changes may be needed to use it.
Also check that the features needed by your multipath solution are enabled in your SAN storage system. For dm-multipath, the important ones are known as "SPC-2 support" and "unique WWIDs for each LUN".
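One quick way to check the unique-WWID part from the Linux side is to query each path and compare the results (RHEL 5 era scsi_id syntax; sdc and sdd are hypothetical path devices):

# each path to the same LUN should print the same identifier
/sbin/scsi_id -g -u -s /block/sdc
/sbin/scsi_id -g -u -s /block/sdd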
If the Linux LVM can detect multiple paths to a given PV, it by default assumes all paths are equivalent and just picks the first one it saw; but you can optionally set up a preferred_names rule in /etc/lvm/lvm.conf that makes LVM prefer certain devices over others. You can use this to make LVM use the multipathed devices presented by your multipath solution.
The default /etc/lvm/lvm.conf already has a preferred_names line that's suitable for use with dm-multipath, but it's commented out.
Another important difference between HP-UX and Linux is that Linux LVM *does not* store the PV paths *anywhere* in a persistent fashion while the VG is deactivated (there is no /etc/lvmtab). This enables automatic reconfiguration when storage hardware connections are changed.
When a system is started up (and whenever you run a vgscan), Linux LVM will look for LVM PV headers on all disks it is allowed to access... and when it sees a PV that belongs to a VG, it learns the name, LV configuration and VGID of that VG from the PV header. Once all accessible disks have been scanned, the LVM subsystem knows the names and configurations of all accessible VGs. If all the PVs containing a given LV are available, then that LV can be activated.
(Yes, I meant LV: Linux LVM actually handles the activation/deactivation on a per-LV basis. "vgchange -a y" is actually little more than a wrapper that runs "lvchange -a y" for all LVs of the requested VG.)
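For example, with placeholder names vg00 and lvol1 (not from this thread), the scan and the two activation commands look like this:

# rescan all accessible disks for PV headers and rebuild the list of VGs
vgscan
# activate every LV in vg00...
vgchange -a y vg00
# ...which, for a VG with a single LV, amounts to the same as:
lvchange -a y vg00/lvol1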
In practical terms, here is what you should do before your FC path change (a combined command sketch follows the three steps):
1.) Install and prepare your multipath solution. If you choose dm-multipath, the quick overview of the procedure is:
- install the dm-multipath RPM
- edit /etc/multipath.conf to enable it, by removing the "blacklist all devices" configuration block at the beginning of the file
- "chkconfig multipathd on" to make sure multipathd will be enabled at boot
2.) Edit the preferred_names line in /etc/lvm/lvm.conf to make sure LVM will use your multipath devices. If you use dm-multipath, that means commenting out the default
preferred_names = [ ]
and uncommenting the commented-out version just below it:
# preferred_names = [ "^/dev/mpath/", "^/dev/mapper/mpath", "^/dev/[hs]d" ]
3.) Use the mkinitrd command to re-create your initrd, so that the lvm.conf change is included in your initrd.
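Put together, a command-level sketch of steps 1-3 on RHEL 5 might look like this (package and initrd names are the usual RHEL 5 ones; verify them on your systems):

# step 1: install and enable dm-multipath
yum install device-mapper-multipath
# edit /etc/multipath.conf: remove the initial "blacklist everything" block
vi /etc/multipath.conf
chkconfig multipathd on
service multipathd start

# step 2: adjust the preferred_names lines as described above
vi /etc/lvm/lvm.conf

# step 3: rebuild the initrd of the running kernel so it includes the new lvm.conf
mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)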
Now you should be ready, but reboot your servers once before the FC change to make sure there are no mistakes. The system should work as before.
Then shut down the servers as you planned, and make the FC change. As you restart the servers, they should automatically detect the multiple paths to each SAN disk, and LVM should automatically use the multipathed devices presented by the multipath solution. Yes: if you did the preparations correctly, you should have to do *nothing special* at this point.
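To confirm the result after the final reboot, a reasonable check is:

# each SAN LUN should show one map with two active paths
multipath -ll
# the PVs should now be the /dev/mapper/mpath* devices, not /dev/sd*
pvs -o pv_name,vg_name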
MK