Operating System - HP-UX
05-18-2011 11:31 PM
"vxdisk list" shows status of 'online dgdisabled shared'
Hello All,
I have a four-node Oracle 11.2.0.2 RAC setup running on CFS.
As part of a test, I had to disable physical access to a majority of the voting disks on the CFS master node. This causes CSS to mark those voting disks as stale: the DB instance drops, the VIP migrates to another node, and then CRS stops.
Steps followed:
node2# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 9519cbe2b59c4f02bf3cb37f2ee74e33 (/cfs/vote/vote) []
2. ONLINE 3008de2a84e14f9dbfbf1c0581a5537f (/cfs/vote1/vote1) []
3. ONLINE a1b0bb574c464f45bfd515588cedfe36 (/cfs/vote2/vote2) []
Located 3 voting disk(s).
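(For context on why losing two of these disks takes the node down: CSS needs a strict majority of voting disks online. A minimal sketch of the arithmetic, assuming the usual floor(n/2)+1 quorum rule:)

```shell
# CSS quorum rule (sketch): a node needs a strict majority of the
# configured voting disks, i.e. floor(n/2) + 1 of them online.
# With n=3 as above, disabling 2 disks drops the node below quorum.
n=3
quorum=$(( n / 2 + 1 ))
echo "need $quorum of $n voting disks online"
```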
node2# vxprint -g vote
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg vote vote - - - - - -
dm disk52 eva80000_3 - 3108736 - - - -
v vol1 fsgen ENABLED 2097152 - ACTIVE - -
pl vol1-01 vol1 ENABLED 2097152 - ACTIVE - -
sd disk52-01 vol1-01 ENABLED 2097152 0 - - -
node2# vxprint -g vote1
TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
dg vote1 vote1 - - - - - -
dm disk53 eva80000_4 - 4157312 - - - -
v vol1 fsgen ENABLED 3145728 - ACTIVE - -
pl vol1-01 vol1 ENABLED 3145728 - ACTIVE - -
sd disk53-01 vol1-01 ENABLED 3145728 0 - - -
Thu May 19 10:28:20 IST 2011
node2# scsimgr -f disable -D /dev/rdisk/disk52
scsimgr: LUN /dev/rdisk/disk52 disabled successfully
node2# scsimgr -f disable -D /dev/rdisk/disk53
scsimgr: LUN /dev/rdisk/disk53 disabled successfully
Up to this point everything worked as expected.
I then re-enabled the disks.
node2# scsimgr -f enable -D /dev/rdisk/disk53
scsimgr: LUN /dev/rdisk/disk53 enabled successfully
node2# scsimgr -f enable -D /dev/rdisk/disk52
scsimgr: LUN /dev/rdisk/disk52 enabled successfully
node2# ioscan -P health /dev/rdisk/disk52
Class I H/W Path health
===============================
disk 52 64000/0xfa00/0xe online
node2# ioscan -P health /dev/rdisk/disk53
Class I H/W Path health
===============================
disk 53 64000/0xfa00/0xf online
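(For anyone repeating this test: the disable/enable cycle above can be scripted. A minimal sketch; the device paths are the ones from this cluster, and the RUN hook is my own addition so the loop can be dry-run on a box without the HP-UX scsimgr tool:)

```shell
# cycle_luns: disable then re-enable each LUN device passed as an argument.
# RUN is a hook for dry runs (e.g. RUN=echo); it defaults to scsimgr,
# the HP-UX LUN management command used in the transcript above.
RUN=${RUN:-scsimgr}

cycle_luns() {
    for d in "$@"; do
        $RUN -f disable -D "$d" || return 1
    done
    # ... run the voting-disk failure test here ...
    for d in "$@"; do
        $RUN -f enable -D "$d" || return 1
    done
}

# Example (dry run): RUN=echo cycle_luns /dev/rdisk/disk52 /dev/rdisk/disk53
```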
However, the disk groups used for vote and vote1 are still disabled, so the node cannot rejoin the cluster; crsctl start cluster on that node fails.
node2# vxdg list
NAME STATE ID
vote1 disabled,shared 1303277519.24.node1
data enabled,shared,cds 1303224972.16.node1
ocr enabled,shared,cds 1303281697.32.node1
ocr1 enabled,shared,cds 1303282187.36.node1
ocr2 enabled,shared,cds 1303283865.48.node1
orabin enabled,shared,cds 1303226312.20.node1
redo enabled,shared,cds 1303282639.40.node1
vote disabled,shared 1303278357.28.node1
vote2 enabled,shared,cds 1303283394.44.node1
node2# vxdisk list
DEVICE TYPE DISK GROUP STATUS
disk_0s2 auto:LVM - - LVM
disk_1 auto:LVM - - LVM
eva80000_0 auto:cdsdisk disk9 data online shared
eva80000_1 auto:cdsdisk disk50 orabin online shared
eva80000_2 auto:cdsdisk disk51 redo online shared
eva80000_3 auto:cdsdisk disk52 vote online dgdisabled shared
eva80000_4 auto:cdsdisk disk53 vote1 online dgdisabled shared
eva80000_5 auto:cdsdisk disk54 vote2 online shared
eva80000_6 auto:cdsdisk disk48 ocr1 online shared
eva80000_7 auto:cdsdisk disk49 ocr2 online shared
eva80000_8 auto:cdsdisk disk55 ocr online shared
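(To spot which DGs came back disabled without scanning the whole listing, the vxdisk list output can be filtered on the status field. A small sketch over a sample of the output above; on the live system you would pipe `vxdisk list` itself into the awk stage:)

```shell
# Filter "vxdisk list"-style output for disks whose disk group is
# disabled: column 4 is the GROUP column, and "dgdisabled" appears
# in the STATUS field of affected rows.
vxdisk_list_sample='eva80000_3 auto:cdsdisk disk52 vote online dgdisabled shared
eva80000_4 auto:cdsdisk disk53 vote1 online dgdisabled shared
eva80000_5 auto:cdsdisk disk54 vote2 online shared'

# Live form would be: vxdisk list | awk '/dgdisabled/ { print $4 }' | sort -u
echo "$vxdisk_list_sample" | awk '/dgdisabled/ { print $4 }' | sort -u
# prints: vote and vote1, one per line
```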
I found a tech article on the Symantec site that describes a similar issue:
http://www.symantec.com/business/support/index?page=content&id=TECH7407
But in my case the DGs are in shared mode, and when I try to deport I get this:
node2# vxdg deport vote1
VxVM vxdg ERROR V-5-1-584 Disk group vote1: Some volumes in the disk group are in use
Also, I can't unmount the file system to deport the DG, as that would stop the whole cluster.
Had the Oracle version been 11.2.0.1, CRS would have rebooted the node and the problem would have been solved; the 11.2.0.2 version, however, does not reboot the node.
I don't want to reboot the machine manually, so I have really reached a stalemate here.
Help provided here would be greatly appreciated.
Regards,
Himanshu
1 REPLY
05-21-2011 08:00 PM
Re: "vxdisk list" shows status of 'online dgdisabled shared'
NO RESPONSES!