07-03-2015 02:27 AM - edited 07-03-2015 02:47 AM
Strange issue :: "bdf ." is hanging for a host
Hi,
We have a cluster node where "bdf ." suddenly hangs, while a plain "bdf" runs fine.
In the bdf output we found that the "/data" mountpoint is mounted, but the PV status is unavailable.
Has anyone observed this kind of issue? We have logged a call with HP.
We suspect someone accidentally deleted the LUN from storage without carrying out the necessary checks from the OS side.
So how do we clear this mess now? We do not want the data in that VG.
Can we create a new LUN and run
vgcfgrestore -R -n /dev/vg01 /dev/rdisk/diskxx
which forcibly restores the LVM configuration data while the volume group is still active?
In cmviewcl we can see the package is up and running.
fuser -kuc /data is also hanging.
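A rough sketch of the restore the poster is proposing, assuming HP-UX 11i v3 agile device names; "diskxx" is the poster's own placeholder for the replacement device and is kept as such:

```shell
# Sketch only -- diskxx is a placeholder for the new LUN's device name.
# 1. Verify the replacement LUN is visible to the OS.
ioscan -fnNC disk

# 2. Restore the saved LVM configuration onto the new disk.
#    -R forces the restore even while vg01 is still active; use with care.
vgcfgrestore -R -n /dev/vg01 /dev/rdisk/diskxx

# 3. Re-activate the VG so LVM re-attaches the recovered PV path.
vgchange -a y /dev/vg01
```

Whether this is safe while the package is running is exactly the question under discussion, so it should be treated as a sketch, not a recommendation.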
07-03-2015 03:20 AM
Re: Strange issue :: "bdf ." is hanging for a host
Please ask the storage team to check for storage issues first and confirm that your PVs are online.
Once your PVs are back, do a clean restart of the package.
Note: no LVM changes should be needed.
I faced the same issue when some disks were removed by mistake by a storage admin.
They later restored those PVs, and I did not make any LVM changes; only a clean restart was required.
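A minimal sketch of the clean package restart described above, assuming a Serviceguard package named pkg_data running on node node1 (both names are hypothetical):

```shell
# Sketch only -- pkg_data and node1 are hypothetical names.
# 1. Check the current state of the package and its resources.
cmviewcl -v -p pkg_data

# 2. Halt the package cleanly (unmounts its filesystems and deactivates the VG).
cmhaltpkg pkg_data

# 3. Once the storage team confirms the PVs are back online, start it again.
cmrunpkg -n node1 pkg_data
```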
07-03-2015 03:25 AM
Re: Strange issue :: "bdf ." is hanging for a host
Hi,
There is no way they can restore it.
Is there any other solution for this?
Can we take an alternate disk, write the current VG ID onto it, and somehow activate that VG?
We do not want its data.
07-08-2015 02:54 AM - edited 07-08-2015 03:02 AM
Re: Strange issue :: "bdf ." is hanging for a host
I believe you have not lost all the disks in the VG. If you have lost them all, the only option is to remove the VG and create a new one, which is safe and easy; in that case, all you have to do is vgexport <vgname> on both the primary and the secondary node.
If some disks in the VG are still alive and you do not want the data on the disks that were removed, you can try the following to get rid of the old disks from the cluster:
1. Forcefully remove the lost disks from the VG on the primary node. Try vgreduce first; if that does not work, use pvchange -a n <lost-diskname>. This removes the entries for the old disks from lvmtab/lvmtab_p. Take a backup of lvmtab before you proceed.
2. Create a map file of the VG with vgexport on the primary node, where the VG is active.
3. Copy the map file from the primary to the secondary node.
4. Run vgimport on the secondary node using that map file. (Follow the vgimport/vgexport procedure depending on the LVM version you use; with the latest version you do not have to create the VG group file, etc.)
After doing this, you have forcefully removed the missing disks from the VG and the cluster.
Since the disks were shared across the cluster, ask the storage team to remove the LUN mapping for both the primary and the secondary node; you will need to give them the LUN ID of the disk you removed.
Later you can add new disks to the VG again if you have any requirement.
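The steps above can be sketched as follows, assuming the VG is /dev/vg01 and the secondary node is node2 (both hypothetical names):

```shell
# Sketch only -- vg01 and node2 are hypothetical names.
# --- On the primary node ---
# 1. Back up lvmtab, then drop the missing PVs from the VG.
cp /etc/lvmtab /etc/lvmtab.bak
vgreduce -f /dev/vg01                       # -f removes PVs that are missing

# 2. Write out a map file for the VG.
#    -p previews without removing the VG; -s records the VGID in the map file.
vgexport -p -s -m /tmp/vg01.map /dev/vg01
rcp /tmp/vg01.map node2:/tmp/vg01.map

# --- On the secondary node ---
# 3. Drop the stale VG definition, then re-import from the map file.
#    (On recent LVM versions the group file is created automatically.)
vgexport /dev/vg01
vgimport -s -m /tmp/vg01.map /dev/vg01
```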