01-08-2009 11:17 AM
Physically disconnecting cluster lock disk from 2 node RAC cluster ?
For some reason, I can't run any cm commands (see below).
However, I have migrated the data from LUNs on the old array to the new array. Since I can't run cmapplyconf, I was unable to move the cluster lock disk to the new array, so (it appears) cmclconfig still references the OLD ARRAY's lock disk.
The old array is going to be physically disconnected from the servers in a couple of days, and the cluster lock disk will be lost with it.
Does physically disconnecting the lock disk from a running cluster cause any issues with the cluster?
Can a 2-node cluster operate without a lock disk?
I have engaged HP to look into the issue.
#cmviewcl
cmviewcl : Cannot view the cluster configuration.
Either this node is not configured in a cluster, user doesn't have
access to view the cluster configuration, or there is some obstacle
to viewing the configuration. Check the syslog file for more information.
For a list of possible causes, see the Serviceguard manual for cmviewcl.
#cmquerycl
cmquerycl : Unable to find any configuration information
01-08-2009 11:36 AM
Re: Physically disconnecting cluster lock disk from 2 node RAC cluster ?
The impact of not having a lock PV in a 2-node cluster is felt when one node fails or when you want to start the cluster: the entire cluster becomes inconsistent, because the split-brain syndrome cannot be handled.
No one can say what will happen on any reformation of the cluster; at restart, neither node will be able to find the lock PV, so the cluster will never form after an abrupt node failure or a cluster restart.
However, the cluster in its present state can keep running, provided there is no node failure and you do not stop and restart the cluster.
You could try posting /etc/cmcluster/cmclconfig.ascii, or whatever your cluster config ASCII file is, and give the output of
#cmcheckconf -v -C /etc/cmcluster/cmclconfig.ascii
(run before cmapplyconf).
For the lock VG, please paste vgdisplay -v.
Also say which new lock PV you tried, and give vgdisplay -v for the VG that this new lock PV belongs to.
It would also be better if you can give the results of the following commands on both nodes:
#ioscan -fnC disk
#vgdisplay -v
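Collected in one place, the diagnostics requested above look roughly like this. This is a checklist rather than a runnable script — the cm* commands need a live Serviceguard cluster, and the config path is the default one assumed in this thread:

```
# Diagnostics to collect on both nodes (HP-UX Serviceguard)
cmcheckconf -v -C /etc/cmcluster/cmclconfig.ascii   # validate config before any cmapplyconf
cat /etc/cmcluster/cmclconfig.ascii                 # current lock VG/PV entries
vgdisplay -v                                        # all VGs, including the lock VG
ioscan -fnC disk                                    # disks the kernel can actually see
```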
regards
Sujit
01-08-2009 11:40 AM
Re: Physically disconnecting cluster lock disk from 2 node RAC cluster ?
A correction — that sentence should read:
"However, the cluster in its present state can keep running if there is no node failure and you do not stop and restart the cluster."
That is, an abrupt failure of either node, or a restart of the cluster, can render your running cluster completely inconsistent.
Please give the command outputs requested in the earlier post.
Regards
Sujit
01-08-2009 11:50 AM
Re: Physically disconnecting cluster lock disk from 2 node RAC cluster ?
http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1302786
Regards
Sujit
01-09-2009 07:18 AM
Re: Physically disconnecting cluster lock disk from 2 node RAC cluster ?
So what happened after that — did you come out of that situation?
Was it by adding a new disk to the cluster VG, editing the ASCII file to make that disk the new lock PV and removing the old entry, running cmcheckconf to make sure the new cluster lock PV is identified correctly, then shutting down the packages and applications, halting the cluster, applying the cluster config ASCII file with cmapplyconf, and restarting the cluster?
Did you face any other issues with that?
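The steps above, as a command sketch. These are HP-UX Serviceguard commands that only make sense on a live cluster, and the VG, device, and package names below are placeholders from this thread — adapt them to your configuration:

```
# Sketch of the lock-disk replacement steps described above
vgextend /dev/vglock /dev/dsk/cXtYdZ                # 1. add the new-array disk to the lock VG
vi /etc/cmcluster/cmclconfig.ascii                  # 2. point the cluster lock PV entry at the
                                                    #    new disk, remove the old entry
cmcheckconf -v -C /etc/cmcluster/cmclconfig.ascii   # 3. verify the new lock PV is identified
cmhaltpkg <package>                                 # 4. halt packages and applications
cmhaltcl -v                                         # 5. halt the cluster
cmapplyconf -v -C /etc/cmcluster/cmclconfig.ascii   # 6. apply the new cluster config
cmruncl -v                                          # 7. restart the cluster
```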
Regards
Sujit
01-12-2009 07:00 AM
Re: Physically disconnecting cluster lock disk from 2 node RAC cluster ?
Thanks for responding; I have been out for some time.
There was a problem in inetd.conf, so I could not run cmviewcl. I have fixed that, and cmviewcl works now.
The other major problem is what I get when I run cmquerycl: it tries to query disks that don't exist in lvmtab and thus fails. I can't run cmcheckconf or cmapplyconf either, even with the cluster down.
(c9 was on the old array.) As you can see from lvmtab, only the c6 and c10 disks are in use.
I'm not sure why it has trouble querying c6t0d1, and c6t0d3 does not even exist (see ioscan output below).
(cmquerycl output from inbphes5)
# cmquerycl -C /tmp/escript.prod.ascii -n inbphes5 -n inbphes6
Warning: Unable to determine local domain name for inbphes5
Timed out trying to query the following disk(s) on node inbphes5
/dev/root
/dev/dsk/c0t0d0
/dev/dsk/c6t0d1
/dev/dsk/c6t0d3
/dev/dsk/c9t0d0
/dev/dsk/c9t0d1
Timed out trying to query the following disk(s) on node inbphes5
/dev/dsk/c7t0d0
/dev/dsk/c7t0d1
/dev/dsk/c7t0d3
/dev/dsk/c7t0d4
/dev/dsk/c6t0d0
/dev/dsk/c2t1d0
inbphes5:/root ==> strings /etc/lvmtab
/dev/vg00
/dev/dsk/c2t1d0
/dev/dsk/c3t0d0
/dev/ops
/dev/dsk/c6t0d2
/dev/dsk/c10t0d2
/dev/apps
/dev/dsk/c10t0d0
/dev/dsk/c10t0d1
/dev/dsk/c6t0d0
/dev/dsk/c6t0d1
inbphes5:/root ==> cmviewcl
CLUSTER STATUS
escript_prod up
NODE STATUS STATE
inbphes5 up running
PACKAGE STATUS STATE AUTO_RUN NODE
inbphes5sg up running disabled inbphes5
NODE STATUS STATE
inbphes6 up running
PACKAGE STATUS STATE AUTO_RUN NODE
inbphes6sg up running disabled inbphes6
inbphes5:/root ==> cmgetconf -c escript_prod /tmp/ecluster.txt
Timed out trying to query the following disk(s) on node inbphes5
/dev/dsk/c2t1d0
/dev/dsk/c3t0d0
/dev/dsk/c6t0d2
/dev/dsk/c10t0d2
/dev/dsk/c10t0d0
/dev/dsk/c10t0d1
Timed out trying to query the following disk(s) on node inbphes6
/dev/dsk/c2t1d0
/dev/dsk/c3t0d0
/dev/dsk/c12t0d0
/dev/dsk/c13t0d0
/dev/dsk/c12t0d1
/dev/dsk/c13t0d1
inbphes5:/root ==> ioscan -fnkC disk|grep c6
/dev/dsk/c6t0d0 /dev/rdsk/c6t0d0
/dev/dsk/c6t0d1 /dev/rdsk/c6t0d1
/dev/dsk/c6t0d2 /dev/rdsk/c6t0d2
/dev/dsk/c6t1d0 /dev/rdsk/c6t1d0
/dev/dsk/c6t2d0 /dev/rdsk/c6t2d0
/dev/dsk/c6t3d0 /dev/rdsk/c6t3d0
/dev/dsk/c6t4d0 /dev/rdsk/c6t4d0
/dev/dsk/c6t5d0 /dev/rdsk/c6t5d0
/dev/dsk/c6t6d0 /dev/rdsk/c6t6d0
/dev/dsk/c6t7d0 /dev/rdsk/c6t7d0
/dev/dsk/c6t8d0 /dev/rdsk/c6t8d0
/dev/dsk/c6t9d0 /dev/rdsk/c6t9d0
/dev/dsk/c6t10d0 /dev/rdsk/c6t10d0
/dev/dsk/c6t11d0 /dev/rdsk/c6t11d0
/dev/dsk/c6t12d0 /dev/rdsk/c6t12d0
/dev/dsk/c6t13d0 /dev/rdsk/c6t13d0
/dev/dsk/c6t14d0 /dev/rdsk/c6t14d0
/dev/dsk/c6t15d0 /dev/rdsk/c6t15d0
inbphes5:/root ==>
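The timeouts above suggest cmquerycl and cmgetconf are probing device paths that no longer exist. One quick way to spot such stale paths is to compare what lvmtab records against what ioscan reports. A minimal POSIX sh sketch, using sample device lists modeled on this thread (on a real node you would feed it `strings /etc/lvmtab` and the ioscan device column instead):

```shell
#!/bin/sh
# Find device paths recorded in lvmtab that the kernel no longer sees,
# i.e. stale paths that can make cmquerycl/cmgetconf time out.

# Devices recorded in lvmtab (sample; c9 left with the old array)
lvmtab_devs="/dev/dsk/c6t0d0
/dev/dsk/c6t0d1
/dev/dsk/c6t0d2
/dev/dsk/c9t0d0"

# Devices actually present according to ioscan (sample)
ioscan_devs="/dev/dsk/c6t0d0
/dev/dsk/c6t0d1
/dev/dsk/c6t0d2"

# Any device in lvmtab but absent from ioscan is stale
stale=$(printf '%s\n' "$lvmtab_devs" | while read -r dev; do
  printf '%s\n' "$ioscan_devs" | grep -Fqx "$dev" || printf '%s\n' "$dev"
done)
echo "stale paths: $stale"
```

With the sample data this flags only the old-array path /dev/dsk/c9t0d0, matching what the outputs above imply.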
01-17-2009 01:36 AM
Re: Physically disconnecting cluster lock disk from 2 node RAC cluster ?
Sorry for the delayed response...
What I would like to know is: what method did you use to copy the data from the old array to the new array?
Do the c6 disks belong to the old array?
Can you please give bdf and ioscan -fnC disk for both nodes, and vgdisplay -v?
Did you remove the older paths from the VG?
And did you then do a vgexport and vgimport, so as to update the VG configuration that might have changed owing to the addition of disks from the new array and the removal of the old array's disks and paths?
Can you also post the full output of the cmquerycl command as you ran it?
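The vgexport/vgimport refresh mentioned above, as a sketch. These are HP-UX LVM commands; the VG name (/dev/ops, from this thread), the node name, and the group-file minor number are placeholders, and the VG must be inactive on the importing node:

```
# Sketch: refresh a shared VG's configuration on the second node after paths changed
# On node 1: write a map file preserving the VG name, and copy it over
vgexport -p -s -m /tmp/ops.map /dev/ops
rcp /tmp/ops.map inbphes6:/tmp/ops.map
# On node 2: remove the stale VG entry, recreate the group file, re-import
vgexport /dev/ops
mkdir /dev/ops
mknod /dev/ops/group c 64 0xNN0000    # 0xNN: a minor number unique on this node
vgimport -s -m /tmp/ops.map /dev/ops  # scans current disks for the VG's PVs
```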
Regards
Sujit