Operating System - HP-UX

chindi
Respected Contributor

Strange issue :: "bdf ." is hanging for a host

Hi ,

 

We have a cluster node where "bdf ." suddenly hangs, although plain "bdf" runs fine.

In the bdf output we can see that the "/data" mount point is mounted, but the PV status is unavailable.

 

Has anyone observed this kind of issue?

We have logged a call with HP.

 

We suspect someone accidentally deleted the LUN from storage without carrying out the necessary checks on the OS side.

So how do we clean up this mess now?

 

We don't need the data in that VG.

Can we create a new LUN and run

vgcfgrestore -R -n /dev/vg01 /dev/rdisk/diskxx

which forces a restore of the LVM configuration data while the volume group is still active?
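For reference, a minimal sketch of that sequence. vg01 and diskxx are the placeholder names from this thread, and it assumes a valid configuration backup still exists under /etc/lvmconf; check first before forcing anything:

```shell
# Sketch only: vg01 / diskxx are placeholders, and this assumes
# /etc/lvmconf/vg01.conf still holds a usable configuration backup.

# Inspect what the backup file contains before touching anything
vgcfgrestore -l -n /dev/vg01

# Restore the LVM headers onto the replacement LUN; -R forces the
# restore even though the volume group is still active
vgcfgrestore -R -n /dev/vg01 /dev/rdisk/diskxx
```

In a Serviceguard cluster the package control script normally activates the VG, so activation (vgchange) would usually happen via the package rather than by hand.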

 

In cmviewcl we can see the package is up and running.

"fuser -kuc /data" also hangs.

3 REPLIES
sapoguheman
Frequent Advisor

Re: Strange issue :: "bdf ." is hanging for a host

Please ask the storage team to check for storage issues first and make sure your PVs are back online.

Once your PVs are back, do a clean restart of the package.
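A clean package restart with Serviceguard might look like the sketch below; pkgname and node1 are placeholders for your actual package and node names:

```shell
# Sketch only: pkgname and node1 are placeholders for your cluster.

cmviewcl -v -p pkgname      # confirm the package's current state
cmhaltpkg pkgname           # halt the package cleanly
cmrunpkg -n node1 pkgname   # start it again once the PVs are visible
cmmodpkg -e pkgname         # re-enable package switching afterwards
```

cmrunpkg disables switching as a side effect, which is why the cmmodpkg -e at the end is usually needed.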

 

 

Note: No need to make any LVM changes.

 

 

I faced the same issue, where some disks were removed by mistake by the storage admin.

Later they restored those PVs, and I didn't make any LVM changes; only a clean restart was required.

 

 

 

 

 

chindi
Respected Contributor

Re: Strange issue :: "bdf ." is hanging for a host

Hi ,

 

There is no way they can restore it.

Is there any other solution for this?

 

Can we take an alternate disk, import the current VG ID onto it, and somehow activate that VG?

We do not need its data.

sapoguheman
Frequent Advisor

Re: Strange issue :: "bdf ." is hanging for a host

I believe you have not lost all the disks in the VG. If you have lost them all, the only option is to remove the VG and create a new one, which is safe and easy. The only extra step in that case is a vgexport <vgname> on both the primary and secondary nodes.

 

 

If some disks in the VG are still alive and you don't want the data on the disks that were removed, you can try the following to get rid of the old disks from the cluster.

 

Forcefully remove the lost disks from the VG on the primary node: try vgreduce first, and if that doesn't work, use pvchange -a n <lost-diskname>. This removes the entries for the old disks from lvmtab/lvmtab_p. Take a backup of lvmtab before you proceed.
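The step above might look like this sketch; vg01 and disk10 are placeholders for the real VG and the lost PV:

```shell
# Sketch only: vg01 and disk10 are placeholders.

cp /etc/lvmtab /etc/lvmtab.bak        # back up lvmtab before anything else

vgreduce /dev/vg01 /dev/disk/disk10   # try a normal reduce first

# If the disk is gone and vgreduce refuses, detach the missing PV:
pvchange -a n /dev/disk/disk10

# vgreduce also has a -f option to drop missing PVs forcefully:
vgreduce -f /dev/vg01
```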

 

Create a map file of the VG with vgexport on the primary node, where the VG is active.

 

Copy the map file from the primary to the secondary node.

Do a vgimport on the secondary node using that map file. (Follow the vgimport/vgexport procedure depending on the LVM version you use; in the latest version we don't have to create the VG group file, etc.)
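The map-file exchange above can be sketched as follows; vg01 and node2 are placeholders, and the exact flags depend on your LVM version:

```shell
# Sketch only: vg01 and node2 are placeholders.
# On the primary node, where the VG is active:

# -p previews (no actual export), -s records the VG ID, -m writes the map
vgexport -p -s -m /tmp/vg01.map /dev/vg01

# Copy the map file to the adoptive node
rcp /tmp/vg01.map node2:/tmp/vg01.map

# On node2: remove the stale definition, then re-import from the map
vgexport /dev/vg01
vgimport -s -m /tmp/vg01.map /dev/vg01
```

On legacy LVM you would also need to mknod the group file under /dev/vg01 before the vgimport; as noted above, recent versions handle this for you.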

 

After doing this you will have forcefully removed the missing disks from the VG and the cluster.

 

Since the disks were shared across the cluster, ask the storage team to remove the LUN mapping for both the primary and secondary node; you will need to share the LUN ID of the disks you removed.

 

 

Later you can add new disks to the VG again if you have any requirement.