Operating System - HP-UX

LVM Warning Status IDs reaching max values

 
D Block 2
Respected Contributor

LVM Warning Status IDs reaching max values

The # dmesg output I received talks about rolling the IDs back. Is there a simple procedure to do what it suggests?

# dmesg


LVM: Warning: VG 64 0x010000: The Configuration and Status IDs are reaching maximum values. To guarantee consistency and data integrity, the IDs should be rolled back. Please make all the PVs in the VG available then deactivate and activate the VG.
Note: Future activations may be aborted if the IDs are not rolled back.
1/0/1/1/0/1/0.3 tgt
1/0/1/1/0/1/0.3.0 stape
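
For reference, what the message asks for amounts to making every PV path reachable again and then bouncing the VG. A minimal sketch with a placeholder VG name (/dev/vgXX, since the affected VG is only identified further down the thread), assuming its filesystems are unmounted first:

# vgchange -a n /dev/vgXX
# vgchange -a y /dev/vgXX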
Golf is a Good Walk Spoiled, Mark Twain.
Devender Khatana
Honored Contributor

Re: LVM Warning Status IDs reaching max values

Hi,

Your VG with this minor number has some problem. Could it be some missing disks? Could you attach some information related to this VG?

#strings /etc/lvmtab
#vgdisplay -v /dev/vgXX

Confirm the VG name by listing the group device files and finding the one with the matching minor number.

#ll /dev/*/group
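
The "64 0x010000" in the warning is the major/minor number of a VG's group file, so the affected VG is whichever entry shows that minor number, e.g. (illustrative line, names will vary):

crw-r--r-- 1 root sys 64 0x010000 ... /dev/vgXX/group <-- matches "VG 64 0x010000"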

Were there some recent changes?

Regards,
Devender
Impossible itself mentions "I m possible"
D Block 2
Respected Contributor

Re: LVM Warning Status IDs reaching max values

Here is the rest of the dmesg output... there is a Fabric switch being replaced, hence some errors, and the HBA to that Fabric has state: AWAITING LINK UP.

The system seems fine.


LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004bdce000), from raw device 0x1f062600 (with priority: 0, and current flags: 0x0) to raw device 0x1f042600 (with priority: 1, and current flags: 0x0).
LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004bdd2000), from raw device 0x1f062700 (with priority: 0, and current flags: 0x0) to raw device 0x1f042700 (with priority: 1, and current flags: 0x0).
LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004bde4000), from raw device 0x1f063000 (with priority: 0, and current flags: 0x0) to raw device 0x1f043000 (with priority: 1, and current flags: 0x0).
LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004bde8000), from raw device 0x1f063100 (with priority: 0, and current flags: 0x0) to raw device 0x1f043100 (with priority: 1, and current flags: 0x0).
LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004bdee000), from raw device 0x1f063200 (with priority: 0, and current flags: 0x0) to raw device 0x1f043200 (with priority: 1, and current flags: 0x0).
LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004be02000), from raw device 0x1f063300 (with priority: 0, and current flags: 0x0) to raw device 0x1f043300 (with priority: 1, and current flags: 0x0).
LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004be0a000), from raw device 0x1f063400 (with priority: 0, and current flags: 0x0) to raw device 0x1f043400 (with priority: 1, and current flags: 0x0).
LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004be10000), from raw device 0x1f063500 (with priority: 0, and current flags: 0x0) to raw device 0x1f043500 (with priority: 1, and current flags: 0x0).
LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004be14000), from raw device 0x1f063600 (with priority: 0, and current flags: 0x0) to raw device 0x1f043600 (with priority: 1, and current flags: 0x0).
LVM: Performed a switch for Lun ID = 0 (pv = 0x000000004be1a000), from raw device 0x1f063700 (with priority: 0, and current flags: 0x0) to raw device 0x1f043700 (with priority: 1, and current flags: 0x0).
1/0/6/1/0: Fibre Channel Driver received Link Dead Notification.

1/0/6/1/0: Fibre Channel Driver received Link Dead Notification.

LVM: VG 64 0x010000: PVLink 31 0x062600 Failed! The PV is still accessible.
LVM: VG 64 0x010000: PVLink 31 0x062700 Failed! The PV is still accessible.
LVM: VG 64 0x010000: PVLink 31 0x063000 Failed! The PV is still accessible.
LVM: VG 64 0x010000: PVLink 31 0x063100 Failed! The PV is still accessible.
LVM: VG 64 0x010000: PVLink 31 0x063200 Failed! The PV is still accessible.
LVM: VG 64 0x010000: PVLink 31 0x063300 Failed! The PV is still accessible.
LVM: VG 64 0x010000: PVLink 31 0x063400 Failed! The PV is still accessible.
LVM: VG 64 0x010000: PVLink 31 0x063500 Failed! The PV is still accessible.
LVM: VG 64 0x010000: PVLink 31 0x063600 Failed! The PV is still accessible.
LVM: VG 64 0x010000: PVLink 31 0x063700 Failed! The PV is still accessible.
ncsebus root;



here's the strings output of the lvmtab file (note the weird-looking entry under vg00):
/dev/vg00
9wB2 <-- WHAT IS THIS THING ?
/dev/dsk/c0t6d0
/dev/dsk/c3t6d0
/dev/nbudb
/dev/dsk/c4t2d6
/dev/dsk/c4t2d7
/dev/dsk/c4t3d0
/dev/dsk/c4t3d1
/dev/dsk/c4t3d2
/dev/dsk/c4t3d3
/dev/dsk/c4t3d4
/dev/dsk/c4t3d5
/dev/dsk/c4t3d6
/dev/dsk/c4t3d7
/dev/dsk/c6t2d6
/dev/dsk/c6t2d7
/dev/dsk/c6t3d0
/dev/dsk/c6t3d1
/dev/dsk/c6t3d2
/dev/dsk/c6t3d3
/dev/dsk/c6t3d4
/dev/dsk/c6t3d5
/dev/dsk/c6t3d6
/dev/dsk/c6t3d7
ncsebus root;


here's the vgdisplay -v vg00
--- Volume groups ---
VG Name /dev/vg00
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 22
Open LV 22
Max PV 16
Cur PV 2
Act PV 2
Max PE per PV 4384
VGDA 4
PE Size (Mbytes) 16
Total PE 8748
Alloc PE 7058
Free PE 1690
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0

the final VG:

VG Name /dev/nbudb
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 2
Open LV 1
Max PV 30
Cur PV 20
Act PV 10
Max PE per PV 1302
VGDA 20
PE Size (Mbytes) 32
Total PE 13020
Alloc PE 12500
Free PE 520
Total PVG 0
Total Spare PVs 0

the group special files:
root; ls -l vg00/group
crw-r----- 1 root sys 64 0x000000 Mar 11 2005 vg00/group
root; ls -l nbudb/group
crw-r--r-- 1 root sys 64 0x010000 Apr 6 10:33 nbudb/group
ncsebus root;

Golf is a Good Walk Spoiled, Mark Twain.
Devender Khatana
Honored Contributor
Solution

Re: LVM Warning Status IDs reaching max values

Hi,

The reason for the message is that one of the two paths for the 10 disks in /dev/nbudb is not yet up. Since there are two paths to each PV, each is still accessible through the other path (the other switch). The link has not been restored because your HBA to the new Fabric switch has not established a connection.

There should not be any functional problem from this, but once deactivated, the VG might not activate again without the -q option. This is because more than half of the physical volumes must be present to activate a VG normally (quorum). If the link through the new Fabric switch cannot be restored after all attempts and you need to deactivate and reactivate the VG, manually activate it using

#vgchange -a y -q n /dev/nbudb
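
For context, a minimal sketch of the full quorum-less reactivation (unmount the VG's filesystems before deactivating):

# vgchange -a n /dev/nbudb
# vgchange -a y -q n /dev/nbudb

Once all paths are restored, a normal deactivate/activate cycle with every PV available is what the original warning asks for in order to roll the IDs back.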

Try to fix the problem of the HBA link state and everything else should be fine.

HTH,
Devender
Impossible itself mentions "I m possible"
D Block 2
Respected Contributor

Re: LVM Warning Status IDs reaching max values

Devender,

see below, my new error (and I do NOT think it is really a new problem), but the ERROR below says the kernel believes it has 30 disks while the lvmtab has 20.

Will this clear up during a reboot?

The link for the 2nd HBA came up after fixing some network cabling problems going to the new Switch.

The alternate paths were established fine: the stale paths were vgreduced first, then the new paths vgextended. BUT...

While finishing the last extend and trying to save the update to the /etc/lvmconf directory, I got the ERROR below. The same error is reported when running a vgcfgbackup command.

ERROR:# vgcfgbackup nbudb
vgcfgbackup: /etc/lvmtab is out of date with the running kernel:Kernel indicates 30 disks for "/dev/nbudb"; /etc/lvmtab has 20 disks.
Cannot proceed with backup.
ncsebus:/tmp/sanswitch:

I have some details of the OLD vgdisplay compared to the NEW vgdisplay
-----------------------------------------
This is the OLD VG showing the alternate paths that were removed, just to show what the VG looked like before the switch upgrade.

old vgdisplay -v


VG Name /dev/nbudb
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 2
Open LV 1
Max PV 30
Cur PV 20
Act PV 10
Max PE per PV 1302
VGDA 20
PE Size (Mbytes) 32
Total PE 13020
Alloc PE 12500
Free PE 520
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0

--- Logical volumes ---
LV Name /dev/nbudb/nbudblv
LV Status available/syncd
LV Size (Mbytes) 400000
Current LE 12500
Allocated PE 12500
Used PV 10


--- Physical volumes ---
PV Name /dev/dsk/c6t2d6
PV Name /dev/dsk/c4t2d6 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c6t2d7
PV Name /dev/dsk/c4t2d7 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c6t3d0
PV Name /dev/dsk/c4t3d0 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c6t3d1
PV Name /dev/dsk/c4t3d1 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c6t3d2
PV Name /dev/dsk/c4t3d2 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c6t3d3
PV Name /dev/dsk/c4t3d3 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c6t3d4
PV Name /dev/dsk/c4t3d4 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c6t3d5
PV Name /dev/dsk/c4t3d5 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c6t3d6
PV Name /dev/dsk/c4t3d6 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c6t3d7
PV Name /dev/dsk/c4t3d7 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On


#

# After the new switch upgrade, running ioscan -fnC disk and then insf -e, the new c8 paths showed up. So I vgreduced the dead c6 links/paths, then vgextended using the new c8 paths; the per-disk sequence is sketched below.
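
A sketch of that per-disk sequence, using the device names from this thread (one disk shown; it was repeated for all ten):

# ioscan -fnC disk (discover the new paths through the replaced switch)
# insf -e (create device files for them)
# vgreduce nbudb /dev/dsk/c6t2d6 (drop the dead c6 link)
# vgextend nbudb /dev/dsk/c8t2d6 (add the matching new c8 link)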

vgdisplay -v nbudb

VG Name /dev/nbudb
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 2
Open LV 1
Max PV 30
Cur PV 20
Act PV 10
Max PE per PV 1302
VGDA 20
PE Size (Mbytes) 32
Total PE 13020
Alloc PE 12500
Free PE 520
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0

--- Logical volumes ---
LV Name /dev/nbudb/nbudblv
LV Status available/syncd
LV Size (Mbytes) 400000
Current LE 12500
Allocated PE 12500
Used PV 10


--- Physical volumes ---
PV Name /dev/dsk/c4t2d6
PV Name /dev/dsk/c8t2d6 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c4t2d7
PV Name /dev/dsk/c8t2d7 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c4t3d0
PV Name /dev/dsk/c8t3d0 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c4t3d1
PV Name /dev/dsk/c8t3d1 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c4t3d2
PV Name /dev/dsk/c8t3d2 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c4t3d3
PV Name /dev/dsk/c8t3d3 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c4t3d4
PV Name /dev/dsk/c8t3d4 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c4t3d5
PV Name /dev/dsk/c8t3d5 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c4t3d6
PV Name /dev/dsk/c8t3d6 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On

PV Name /dev/dsk/c4t3d7
PV Name /dev/dsk/c8t3d7 Alternate Link
PV Status available
Total PE 1302
Free PE 52
Autoswitch On


ncsebus root; strings /etc/lvmtab
/dev/vg00
9wB2 <---- WHAT ??
/dev/dsk/c0t6d0
/dev/dsk/c3t6d0
/dev/nbudb
/dev/dsk/c4t2d6
/dev/dsk/c4t2d7
/dev/dsk/c4t3d0
/dev/dsk/c4t3d1
/dev/dsk/c4t3d2
/dev/dsk/c4t3d3
/dev/dsk/c4t3d4
/dev/dsk/c4t3d5
/dev/dsk/c4t3d6
/dev/dsk/c4t3d7
/dev/dsk/c8t2d6
/dev/dsk/c8t2d7
/dev/dsk/c8t3d0
/dev/dsk/c8t3d1
/dev/dsk/c8t3d2
/dev/dsk/c8t3d3
/dev/dsk/c8t3d4
/dev/dsk/c8t3d5
/dev/dsk/c8t3d6
/dev/dsk/c8t3d7
ncsebus root;


# vgcfgbackup nbudb
vgcfgbackup: /etc/lvmtab is out of date with the running kernel:Kernel indicates 30 disks for "/dev/nbudb"; /etc/lvmtab has 20 disks.
Cannot proceed with backup.
ncsebus:/tmp/sanswitch:
#


ncsebus root;

Golf is a Good Walk Spoiled, Mark Twain.
Devender Khatana
Honored Contributor

Re: LVM Warning Status IDs reaching max values

Hi Tom,

The reason for this is the alternate paths being removed and rediscovered; it happens in some cases. You will have to run the following to remove all the missing PVs (which here are actually just a few old PV links):

#vgreduce -f /dev/nbudb
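
Afterwards the configuration backup should go through again; a quick check, using the VG name from this thread:

# vgcfgbackup /dev/nbudb
# vgdisplay -v /dev/nbudb

The kernel's disk count and /etc/lvmtab should then agree, so the 30-vs-20 mismatch message disappears.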

HTH,
Devender
Impossible itself mentions "I m possible"
D Block 2
Respected Contributor

Re: LVM Warning Status IDs reaching max values

here's a bdf:

# bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 524288 183744 337904 35% /
/dev/vg00/lvol1 505392 63640 391208 14% /stand
/dev/vg00/lvol11 4620288 184616 4403648 4% /var
/dev/vg00/lvol18 2048000 16523 1904634 1% /var/tmp
/dev/vg00/lvol17 524288 1229 490375 0% /var/opt/sanmgr
/dev/vg00/lvol16 1540096 121398 1330033 8% /var/opt/perf/datafiles
/dev/vg00/lvol15 1032192 1357 966415 0% /var/opt/ids
/dev/vg00/lvol12 2048000 37611 1884786 2% /var/adm
/dev/vg00/lvol14 2048000 400096 1544942 21% /var/adm/sw
/dev/vg00/lvol13 4096000 2104 3838035 0% /var/adm/crash
/dev/vg00/lvol9 4620288 1114112 3478824 24% /usr
/dev/vg00/lvol10 8192000 5302221 2709256 66% /usr/openv
/dev/vg00/vault 9224192 3187606 5849522 35% /usr/openv/netbackup/vault
/dev/vg00/logs 5120000 462146 4512374 9% /usr/openv/netbackup/logs
/dev/nbudb/nbudblv 409600000 275907640 132685992 68% /usr/openv/netbackup/db
/dev/vg00/lvol8 2048000 459744 1576048 23% /tmp
/dev/vg00/lvol5 4620288 1339632 3255056 29% /opt
/dev/vg00/lvol7 524288 1229 490375 0% /opt/sanmgr
/dev/vg00/maestro 5632000 1586757 3792475 29% /opt/maestro
/dev/vg00/lvol6 524288 1229 490375 0% /opt/ids
/dev/vg00/aptare 4096000 274691 3582532 7% /opt/aptare
/dev/vg00/lvol4 2048000 234016 1800200 12% /home
ncsebus:/tmp/sanswitch:
#
Golf is a Good Walk Spoiled, Mark Twain.
Pete Randall
Outstanding Contributor

Re: LVM Warning Status IDs reaching max values

Tom,

In the strings output you're seeing:

ncsebus root; strings /etc/lvmtab
/dev/vg00
9wB2 <---- WHAT ??
/dev/dsk/c0t6d0

the 9wB2 is just a run of bytes in the binary lvmtab file that strings happened to interpret as readable text. This is quite common and can safely be ignored.
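
Since /etc/lvmtab is binary, strings simply prints every run of four or more printable bytes it finds between the real entries. To look at the raw bytes yourself, a plain character dump works, e.g.:

# od -c /etc/lvmtab | more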


Pete
D Block 2
Respected Contributor

Re: LVM Warning Status IDs reaching max values

Pete, thanks for catching that and explaining the details.
Golf is a Good Walk Spoiled, Mark Twain.
D Block 2
Respected Contributor

Re: LVM Warning Status IDs reaching max values

I think at this moment I'm a happy camper! But I am very concerned about running the:

vgreduce -f

Can someone caution me on the side effects here?
Golf is a Good Walk Spoiled, Mark Twain.