System Administration

Physically disconnecting cluster lock disk from a 2-node RAC cluster?

Sammy_2
Super Advisor

Physically disconnecting cluster lock disk from a 2-node RAC cluster?

I have a 2-node HP Serviceguard cluster with eRAC, running HP-UX 11.11 on both nodes.

For some reason, I can't run any cm* commands (see below).
However, I have migrated the data from LUNs on the old array to a new array. Since I can't run cmapplyconf, I was unable to repoint the cluster lock disk to the new array, so it appears cmclconfig still references the OLD array's lock disk.

The old array is going to be physically disconnected from the servers in a couple of days, and the cluster lock disk will be lost with it.

Does physically disconnecting the lock disk from a running cluster cause any issues with the cluster?
Can a 2-node cluster operate without a lock disk?

I have engaged HP to look into the issue.
#cmviewcl
cmviewcl : Cannot view the cluster configuration.
Either this node is not configured in a cluster, user doesn't have
access to view the cluster configuration, or there is some obstacle
to viewing the configuration. Check the syslog file for more information.
For a list of possible causes, see the Serviceguard manual for cmviewcl.

#cmquerycl
cmquerycl : Unable to find any configuration information
good judgement comes from experience and experience comes from bad judgement.
6 REPLIES
sujit kumar singh
Honored Contributor

Re: Physically disconnecting cluster lock disk from a 2-node RAC cluster?

hi


The impact of not having a lock PV in a 2-node cluster shows up when one node fails or when you try to start the cluster: the entire cluster becomes inconsistent, and the classic split-brain scenario cannot be arbitrated.

Nobody can say what will happen on any reformation of the cluster, because at that point neither node will be able to find the lock PV; after an abrupt node failure, or if the cluster restarts, the cluster will never form.

However, the cluster can keep running in its present state as long as there is no node failure and you do not stop and restart the cluster.

You can try posting /etc/cmcluster/cmclconfig.ascii, or whatever your cluster configuration ASCII file is called.

Also give the output of this command before running cmapplyconf:

#cmcheckconf -v -C /etc/cmcluster/cmclconfig.ascii

(substituting the path of your cluster configuration ASCII file).

For the lock VG, please paste the output of vgdisplay -v.

Also tell us which new lock PV you tried, and post vgdisplay -v for the VG that the new lock PV belongs to.

It would also help if you can give the results of the following commands on both nodes:
#ioscan -fnCdisk
#vgdisplay -v





regards
Sujit
sujit kumar singh
Honored Contributor

Re: Physically disconnecting cluster lock disk from a 2-node RAC cluster?

A correction to a line in my earlier post: the cluster in its present state can keep running only if there is NO node failure and you do not stop and restart the cluster.

That is, any abrupt failure of either node, or a restart of the cluster, can leave your running cluster in a catastrophic, totally inconsistent state.


Please give the command outputs requested in the earlier post.


Regards
Sujit

sujit kumar singh
Honored Contributor

Re: Physically disconnecting cluster lock disk from a 2-node RAC cluster?

A more detailed description of the cluster lock PV and cluster lock VG can be found here:

http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1302786

Regards

Sujit
sujit kumar singh
Honored Contributor

Re: Physically disconnecting cluster lock disk from a 2-node RAC cluster?

Hi

So what happened after that? Did you get out of that situation?


Was it by adding a new disk to the cluster VG, editing the ASCII file to make that disk the new lock PV and removing the old entry, running cmcheckconf to make sure the new cluster lock PV is identified correctly, then shutting down the packages and applications, halting the cluster, applying the cluster configuration ASCII file with cmapplyconf, and restarting the cluster?
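For reference, that sequence typically looks something like the sketch below. This is only an outline: the VG name (/dev/vglock), disk device (c10t0d3), package name, and file path are placeholders, not values taken from this thread — substitute your own.

```shell
# Sketch of replacing a cluster lock disk (placeholder names throughout).

# 1. Add the new disk to the lock VG on the node that owns it
pvcreate /dev/rdsk/c10t0d3
vgextend /dev/vglock /dev/dsk/c10t0d3

# 2. Edit the cluster ASCII file so FIRST_CLUSTER_LOCK_PV
#    (and SECOND_CLUSTER_LOCK_PV, if used) point at the new disk
vi /etc/cmcluster/cmclconfig.ascii

# 3. Verify the new lock PV is recognized before applying anything
cmcheckconf -v -C /etc/cmcluster/cmclconfig.ascii

# 4. Halt packages and the cluster, apply the config, restart
cmhaltpkg <package_name>
cmhaltcl -f
cmapplyconf -v -C /etc/cmcluster/cmclconfig.ascii
cmruncl
```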


Did you face any other issues with that?


Regards
Sujit
Sammy_2
Super Advisor

Re: Physically disconnecting cluster lock disk from a 2-node RAC cluster?

Sujit,
Thanks for responding; I have been out for some time.
There was a problem in inetd.conf, so I could not run cmviewcl. I fixed that, and now cmviewcl works.
The other major problem is what I get when I run cmquerycl: it tries to query disks that don't exist in lvmtab and therefore fails. I can't run cmcheckconf or cmapplyconf either with the cluster down.
(c9 was on the old array.) As you can see from lvmtab, I only have the c6 and c10 disks in use.
Not sure why it has trouble querying c6t0d1. And c6t0d3 does not even exist (see the ioscan output below).




(cmquerycl output from inbphes5)

# cmquerycl -C /tmp/escript.prod.ascii -n inbphes5 -n inbphes6

Warning: Unable to determine local domain name for inbphes5


Timed out trying to query the following disk(s) on node inbphes5
/dev/root
/dev/dsk/c0t0d0
/dev/dsk/c6t0d1
/dev/dsk/c6t0d3
/dev/dsk/c9t0d0
/dev/dsk/c9t0d1

Timed out trying to query the following disk(s) on node inbphes5
/dev/dsk/c7t0d0
/dev/dsk/c7t0d1
/dev/dsk/c7t0d3
/dev/dsk/c7t0d4
/dev/dsk/c6t0d0
/dev/dsk/c2t1d0
inbphes5:/root ==> strings /etc/lvmtab
/dev/vg00
/dev/dsk/c2t1d0
/dev/dsk/c3t0d0
/dev/ops
/dev/dsk/c6t0d2
/dev/dsk/c10t0d2
/dev/apps
/dev/dsk/c10t0d0
/dev/dsk/c10t0d1
/dev/dsk/c6t0d0
/dev/dsk/c6t0d1
inbphes5:/root ==> cmviewcl

CLUSTER STATUS
escript_prod up

NODE STATUS STATE
inbphes5 up running

PACKAGE STATUS STATE AUTO_RUN NODE
inbphes5sg up running disabled inbphes5

NODE STATUS STATE
inbphes6 up running

PACKAGE STATUS STATE AUTO_RUN NODE
inbphes6sg up running disabled inbphes6
inbphes5:/root ==> cmgetconf -c escript_prod /tmp/ecluster.txt

Timed out trying to query the following disk(s) on node inbphes5
/dev/dsk/c2t1d0
/dev/dsk/c3t0d0
/dev/dsk/c6t0d2
/dev/dsk/c10t0d2
/dev/dsk/c10t0d0
/dev/dsk/c10t0d1

Timed out trying to query the following disk(s) on node inbphes6
/dev/dsk/c2t1d0
/dev/dsk/c3t0d0
/dev/dsk/c12t0d0
/dev/dsk/c13t0d0
/dev/dsk/c12t0d1
/dev/dsk/c13t0d1



inbphes5:/root ==> ioscan -fnkC disk|grep c6
/dev/dsk/c6t0d0 /dev/rdsk/c6t0d0
/dev/dsk/c6t0d1 /dev/rdsk/c6t0d1
/dev/dsk/c6t0d2 /dev/rdsk/c6t0d2
/dev/dsk/c6t1d0 /dev/rdsk/c6t1d0
/dev/dsk/c6t2d0 /dev/rdsk/c6t2d0
/dev/dsk/c6t3d0 /dev/rdsk/c6t3d0
/dev/dsk/c6t4d0 /dev/rdsk/c6t4d0
/dev/dsk/c6t5d0 /dev/rdsk/c6t5d0
/dev/dsk/c6t6d0 /dev/rdsk/c6t6d0
/dev/dsk/c6t7d0 /dev/rdsk/c6t7d0
/dev/dsk/c6t8d0 /dev/rdsk/c6t8d0
/dev/dsk/c6t9d0 /dev/rdsk/c6t9d0
/dev/dsk/c6t10d0 /dev/rdsk/c6t10d0
/dev/dsk/c6t11d0 /dev/rdsk/c6t11d0
/dev/dsk/c6t12d0 /dev/rdsk/c6t12d0
/dev/dsk/c6t13d0 /dev/rdsk/c6t13d0
/dev/dsk/c6t14d0 /dev/rdsk/c6t14d0
/dev/dsk/c6t15d0 /dev/rdsk/c6t15d0
inbphes5:/root ==>
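One way to narrow down stale entries like these is to compare the device files recorded in lvmtab against what ioscan actually reports, rather than eyeballing the two listings. A rough sketch using the same commands shown above (the temp-file paths are arbitrary):

```shell
# Pull /dev/dsk paths out of the binary lvmtab, list the devices ioscan
# actually sees, and print entries present in lvmtab but absent from
# ioscan -- those are candidates for stale/removed disks.
strings /etc/lvmtab | grep '^/dev/dsk' | sort -u > /tmp/lvmtab.disks
ioscan -fnkC disk | grep -o '/dev/dsk/[^ ]*' | sort -u > /tmp/ioscan.disks
comm -23 /tmp/lvmtab.disks /tmp/ioscan.disks
```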
sujit kumar singh
Honored Contributor

Re: Physically disconnecting cluster lock disk from a 2-node RAC cluster?

Hi Sammy,



Sorry for the delayed response.


What I would like to know is: what method did you use to copy the data from the old array to the new array?

Do the c6 disks belong to the old array?
Can you please give bdf and ioscan -fnCdisk output for both nodes, and vgdisplay -v?

Did you remove the older paths from the VG?

And did you then do a vgexport and vgimport again, so as to update the VG configuration that may have changed owing to adding disks from the new array and removing the disks and paths of the old array?
Can you also give the complete output of the cmquerycl command as you run it?
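For reference, the vgexport/vgimport refresh mentioned above usually looks something like this sketch. The VG name (/dev/ops is borrowed from the lvmtab output earlier in the thread), map-file path, and group-file minor number are placeholders — adjust them to your environment, and make sure the minor number is unique on each node.

```shell
# On the node that owns the VG: preview the export and write a map file
# (-s records the VGID so vgimport can match disks by ID on the other node)
vgexport -p -v -s -m /tmp/ops.map /dev/ops

# Copy /tmp/ops.map to the other node, then on that node:
vgexport /dev/ops                                   # drop the stale definition
mkdir /dev/ops
mknod /dev/ops/group c 64 0x010000                  # minor number: placeholder
vgimport -v -s -m /tmp/ops.map /dev/ops
```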



Regards
Sujit