Operating System - HP-UX
08-07-2007 03:32 AM
Single disk lock issue
I attempted to create a 2-node cluster on HP.
Here is the procedure that I followed:
1) Created 4 volumes and mapped them to the cluster host group, which included both nodes
2) Both nodes see all these volumes
3) On node 1, performed the following steps for each volume:
Assuming cxtxdx is the primary and cytydy is the secondary path
# pvcreate -f /dev/rdsk/cxtxdx
# pvcreate -f /dev/rdsk/cytydy
# mkdir /dev/vg##
# mknod /dev/vg##/group c 64 0x##0000
# vgcreate /dev/vg## /dev/dsk/cxtxdx /dev/dsk/cytydy
# lvcreate -L 1024 /dev/vg##
# vgchange -a n /dev/vg##
# vgexport -p -s -m /tmp/vg##.map /dev/vg##
Then transferred this vg##.map file to node 2's /tmp directory. Repeat for all volume groups.
4) On node 2, performed the following steps:
# mkdir /dev/vg##
# mknod /dev/vg##/group c 64 0x##0000
# ioscan -f
# insf -e
# vgimport -s -m /tmp/vg##.map /dev/vg##
# vgchange -a n /dev/vg##
Repeat for all volume groups.
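For example, the sequence for vg01 on node 1 would look like the sketch below. The device paths are taken from the node 1 lvmtab listing further down; the 0x010000 minor number and the rcp transfer are assumptions for illustration.

```shell
# Sketch for vg01 only; c34t0d0/c32t0d0 match the node 1 lvmtab below,
# the 0x010000 minor number is an assumed convention (unique per VG).
pvcreate -f /dev/rdsk/c34t0d0          # primary path
pvcreate -f /dev/rdsk/c32t0d0          # alternate path to the same LUN
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
vgcreate /dev/vg01 /dev/dsk/c34t0d0 /dev/dsk/c32t0d0
lvcreate -L 1024 /dev/vg01             # 1024 MB logical volume
vgchange -a n /dev/vg01                # deactivate before export
vgexport -p -s -m /tmp/vg01.map /dev/vg01
rcp /tmp/vg01.map node2:/tmp/vg01.map  # assumed transfer method
```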
lvmtab on node 1:
# strings /etc/lvmtab
/dev/vg00
/dev/dsk/c2t1d0
/dev/vg01
/dev/dsk/c34t0d0
/dev/dsk/c32t0d0
/dev/vg02
/dev/dsk/c32t0d1
/dev/dsk/c34t0d1
/dev/vg03
/dev/dsk/c34t0d2
/dev/dsk/c32t0d2
/dev/vg04
/dev/dsk/c32t0d3
/dev/dsk/c34t0d3
lvmtab on node 2:
# strings /etc/lvmtab
/dev/vg00
/dev/dsk/c2t1d0
/dev/vg01
/dev/dsk/c97t0d0
/dev/dsk/c95t0d0
/dev/vg02
/dev/dsk/c97t0d1
/dev/dsk/c95t0d1
/dev/vg03
/dev/dsk/c97t0d2
/dev/dsk/c95t0d2
/dev/vg04
/dev/dsk/c97t0d3
/dev/dsk/c95t0d3
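One way to cross-check that a proposed lock disk really belongs to the volume group it is paired with is to list the PVs of that VG on each node (a diagnostic sketch; vg01 is the first cluster lock VG here):

```shell
# On each node: list every PV path that vg01 actually contains.
# The lock PV offered for that node must appear in this output.
vgdisplay -v /dev/vg01 | grep "PV Name"

# /etc/lvmtab is a binary file; strings shows the VG-to-PV mapping.
strings /etc/lvmtab
```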
6) Then I used SMH to create cluster:
a) added both nodes to cluster
b) defined parameters, network, lock, and volume groups
c) I selected single disk lock and selected all volume groups in "cluster aware"
d) clicked ok
e) clicked "check configuration"
The operation log returned the following message: First cluster lock physical volume "/dev/dsk/c95t0d0" on node "powerhp" does not belong to first cluster lock volume group "/dev/vg01". Specify a physical volume that belongs to first volume group. (see attachment for more details)
So, it is the cluster clock setting that caused this problem. However, I am not able to change the lock device path, since there is only one path in the drop-down list.
Does anyone have any suggestions on what I might need to do to fix the problem?
3 REPLIES
08-07-2007 04:45 AM
Re: Single disk lock issue
Shalom,
I don't think you have finished the configuration, that's all.
cmquerycl -n nodename1 -n nodename2
Follow with cmcheckconf and cmapplyconf.
You can set the lock disk in the configuration file that cmquerycl generates, before running cmcheckconf.
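A sketch of that flow (the file path and node names are assumptions; the FIRST_CLUSTER_LOCK_* parameters appear in the template that cmquerycl generates):

```shell
# Generate a cluster configuration template covering both nodes.
cmquerycl -v -C /etc/cmcluster/cluster.ascii -n node1 -n node2

# Edit /etc/cmcluster/cluster.ascii so the lock disk entries agree.
# Illustrative values only -- the lock PV is set per node, and each
# node's path must belong to the same lock VG:
#   FIRST_CLUSTER_LOCK_VG  /dev/vg01
#   NODE_NAME node1
#     FIRST_CLUSTER_LOCK_PV /dev/dsk/c34t0d0
#   NODE_NAME node2
#     FIRST_CLUSTER_LOCK_PV /dev/dsk/c97t0d0

# Validate, then distribute the binary configuration to all nodes.
cmcheckconf -v -C /etc/cmcluster/cluster.ascii
cmapplyconf -v -C /etc/cmcluster/cluster.ascii
```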
There are a number of errors in your post that make it difficult for me to understand the process.
For example: "I used SMH" (do you mean SAM?), and "clock" instead of "lock".
But I plowed through and tried to help anyway.
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
08-07-2007 09:11 AM
Re: Single disk lock issue
Steven,
Sorry for the confusion.
SMH stands for System Management Homepage. Basically, it's the same as Serviceguard Manager. HP SMH integrates Web-based applications that provide a graphical user interface (GUI) for HP system administration tasks. You can also check it out here: http://docs.hp.com/en/B3935-90108/ch01s02.html#babhjjhd
"clock" was my typo. I meant lock.
I also noticed the text in steps 3 and 4 got messed up. Here are the corrected ones:
3) On node 1, performed the following steps for each volume:
Assuming cxtxdx is the primary and cytydy is the secondary path
# pvcreate -f /dev/rdsk/cxtxdx
# pvcreate -f /dev/rdsk/cytydy
# mkdir /dev/vg##
# mknod /dev/vg##/group c 64 0x##0000
# vgcreate /dev/vg## /dev/dsk/cxtxdx /dev/dsk/cytydy
# lvcreate -L 1024 /dev/vg##
# vgchange -a n /dev/vg##
# vgexport -p -s -m /tmp/vg##.map /dev/vg##
Then transferred this vg##.map file to node 2's /tmp directory. Repeat for all volume groups.
4) On node 2, performed the following steps:
# mkdir /dev/vg##
# mknod /dev/vg##/group c 64 0x##0000
# ioscan -f
# insf -e
# vgimport -s -m /tmp/vg##.map /dev/vg##
# vgchange -a n /dev/vg##
Repeat for all volume groups.
By the way, what do these commands do, and at what point should I execute them?
cmquerycl -n nodename1 nodename2
cmcheckconf
cmapplyconf
08-07-2007 09:27 AM
Re: Single disk lock issue
hattori
cmquerycl - queries the nodes and gathers information about LVM and network interfaces into a configuration template.
cmcheckconf - checks the configuration of your Serviceguard cluster and reports any errors.
cmapplyconf - applies the configuration to the cluster.
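In practice they run in that order; a sketch of the typical sequence (file path and node names are assumptions, and cmruncl/cmviewcl are standard Serviceguard commands shown as an assumed follow-up to start and verify the cluster):

```shell
cmquerycl -v -C /etc/cmcluster/cluster.ascii -n node1 -n node2  # generate template
# ... edit cluster.ascii (lock disk, heartbeat networks, etc.) ...
cmcheckconf -v -C /etc/cmcluster/cluster.ascii                  # validate the edits
cmapplyconf -v -C /etc/cmcluster/cluster.ascii                  # distribute binary config
cmruncl -v                                                      # start the cluster
cmviewcl -v                                                     # confirm node and cluster status
```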