07-05-2022 11:04 AM - last edited on 07-13-2022 05:00 AM by support_s
Hello, team.
I have a question that I couldn't find an answer to in the documentation: what effect does the automatic activation of logical volumes that are detected while the server scans its disks at OS startup have on a cluster node?
From the RHEL documentation - Controlling autoactivation of logical volumes:
Autoactivation of a logical volume refers to the event-based automatic activation of a logical volume during system startup. As devices become available on the system (device online events), systemd/udev runs the lvm2-pvscan service for each device. This service runs the pvscan --cache -aay device command, which reads the named device. If the device belongs to a volume group, the pvscan command will check if all of the physical volumes for that volume group are present on the system. If so, the command will activate logical volumes in that volume group.
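On a running node this per-device mechanism can be seen directly; a small illustration (the 253:15 and 253:17 device numbers are taken from the boot log quoted below):
systemctl list-units 'lvm2-pvscan@*'       # one instance per detected PV, e.g. lvm2-pvscan@253:15.service
systemctl cat lvm2-pvscan@253:15.service   # should show the pvscan --cache ... activation command described above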
In the "Managing HPE Serviceguard for Linux" described " Enabling Volume Group Activation Protection", which has no effect on the mechanism described above - when the server starts, all available logical volumes are activated (part of messages - local VG, all other VGs - shared
Jun 9 16:46:53 nodename systemd: Started LVM2 PV scan on device 253:17.
Jun 9 16:46:53 nodename systemd: Started LVM2 PV scan on device 253:15.
Jun 9 16:46:53 nodename lvm: 1 logical volume(s) in volume group "vg_db_dmzdb_data18" now active
Jun 9 16:46:53 nodename lvm: 10 logical volume(s) in volume group "rhel" now active
Jun 9 16:46:53 nodename lvm: 1 logical volume(s) in volume group "vg_db_dmzdb_data17" now active
Jun 9 16:46:53 nodename lvm: 1 logical volume(s) in volume group "vg_db_dmzdb_data14" now active
Jun 9 16:46:54 nodename lvm: 1 logical volume(s) in volume group "vg_db_dmzdb_data10" now active
Jun 9 16:46:54 nodename lvm: 1 logical volume(s) in volume group "vg_db_dmzdb_data09" now active
Jun 9 16:46:54 nodename lvm: 1 logical volume(s) in volume group "vg_db_dmzdb_data20" now active
Jun 9 16:46:54 nodename lvm: 1 logical volume(s) in volume group "vg_db_dmzdb_data06" now active
Jun 9 16:46:54 nodename lvm: 9 logical volume(s) in volume group "vg_db_dmzdb_data01" now active
Jun 9 16:46:54 nodename lvm: 1 logical volume(s) in volume group "vg_db_dmzdb_data03" now active
Jun 9 16:46:54 nodename lvm: 1 logical volume(s) in volume group "vg_db_dmzdb_data13" now active
Jun 9 16:46:54 nodename systemd: Started Activation of LVM2 logical volumes.
Jun 9 16:46:54 nodename systemd: Found device /dev/mapper/rhel-home.
Jun 9 16:46:54 nodename systemd: Found device /dev/mapper/rhel-var.
Jun 9 16:46:54 nodename systemd: Found device /dev/mapper/rhel-var_log.
Jun 9 16:46:54 nodename systemd: Found device /dev/mapper/rhel-var_log_audit.
Jun 9 16:46:54 nodename systemd: Found device /dev/mapper/rhel-var_log_rootsh.
Jun 9 16:46:54 nodename systemd: Found device /dev/mapper/rhel-var_tmp.
Jun 9 16:46:54 nodename systemd: Found device /dev/mapper/rhel-app_log.
Jun 9 16:46:54 nodename systemd: Found device /dev/mapper/rhel-u01.
Jun 9 16:46:54 nodename systemd: Reached target Local Encrypted Volumes.
Jun 9 16:46:54 nodename systemd: Starting Activation of LVM2 logical volumes...
Do I need to additionally modify lvm parameters on cluster nodes to avoid automatic activation of logical volumes?
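For context, this is how the activation-related settings currently defined on a node can be dumped (a sketch; no output means a setting is not present in lvm.conf and the built-in default applies, and on older lvm2 builds "lvm dumpconfig" is the equivalent of lvmconfig):
lvmconfig activation/volume_list activation/auto_activation_volume_list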
07-05-2022 12:05 PM
Query: sgxl node - systemd: Starting Activation of LVM2 logical volumes
System recommended content:
1. Managing HPE Serviceguard for Linux A.12.80.00 | Creating the Logical Volume Infrastructure
2. Red Hat Enterprise Linux 7 - systemd lvm2-activation.service Failed to Start Up
Please click on the "Thumbs Up/Kudo" icon to give a "Kudo".
Thank you for being a valued HPE community member.
07-05-2022 01:01 PM
Re: Query: sgxl node - systemd: Starting Activation of LVM2 logical volumes
Hi,
Option one - disable the lvm2 metadata daemon
RHEL docs - LVM is configured to make use of the daemon when the global/use_lvmetad variable is set to 1 in the lvm.conf configuration file. This is the default value.
Our setting in lvm.conf:
global/use_lvmetad = 0
global/locking_type = 1
systemctl status lvm2-lvmetad.service
● lvm2-lvmetad.service
Loaded: masked (/dev/null; bad)
Active: inactive (dead)
Option two - check that locking_dir in /etc/lvm/lvm.conf is set correctly:
locking_dir = "/run/lock/lvm"
It is very likely that the basic settings on the cluster node comply with the recommendations. What else can I check?
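For reference, here is how the effective values and the state of the lvmetad units can be confirmed (a sketch using standard lvm2/systemd commands):
lvmconfig global/use_lvmetad global/locking_type                # effective values as lvm sees them
systemctl is-enabled lvm2-lvmetad.service lvm2-lvmetad.socket   # should report masked/disabled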
07-05-2022 01:40 PM
Re: Query: sgxl node - systemd: Starting Activation of LVM2 logical volumes
I have a thought:
We use the activation/volume_list parameter to control logical volume activation, but we also need to control automatic VG activation.
An HP-UX cluster has the AUTO_VG_ACTIVATE parameter in /etc/lvmrc for this purpose.
On Linux we can use a similar parameter: activation/auto_activation_volume_list.
Setting auto_activation_volume_list to an empty list disables autoactivation entirely. Setting auto_activation_volume_list to specific logical volumes and volume groups limits autoactivation to those logical volumes.
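For example, a minimal sketch of the relevant block in /etc/lvm/lvm.conf (the VG name "rhel" is the local root VG from the boot log above; adjust it to your nodes):
activation {
    # auto_activation_volume_list = []            # disable autoactivation completely
    auto_activation_volume_list = [ "rhel" ]      # autoactivate only the local VG; cluster VGs stay inactive at boot
}
If the root VG is activated from the initramfs, the copy of lvm.conf embedded there may also need to be refreshed (for example with dracut -f) before the setting takes effect at early boot - I would treat that as an assumption to verify against the RHEL documentation.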
What do you think about this?
07-07-2022 09:53 PM
Solution
In the lvm.conf file you can specify which VGs should be activated at boot time.
Exclude cluster volume groups from boot-time activation. When Serviceguard activates a volume group it assigns tags to it; if a cluster volume group is activated without those tags, there is a chance that data might be corrupted.
Disable lvmetad, as mentioned in the Serviceguard manual.
See: 12.2. Controlling logical volume activation
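As a quick check after applying this, the following standard lvm2 commands can be used (a sketch; only the local "rhel" VG from the logs above should show active LVs after a reboot):
vgs -o vg_name,vg_tags             # shows which VGs carry the tags Serviceguard assigns on activation
lvs -o vg_name,lv_name,lv_active   # cluster LVs should not be listed as active until the package activates them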
I work for HPE.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
