07-11-2002 06:22 AM
Cluster Volume Groups
I'm planning to configure a two-node cluster. The external disks are on a VA7400 connected to two servers over Fibre Channel via a Brocade switch.
I've created a few LUNs in order to create VGs and LVs.
Now my problem is: when I run ioscan I see these LUN disks, but with different device names on each server.
For example, on server1 c15t0d0 has the same information as c14t0d0 has on server2.
Server1 does not have a disk named c14t0d0.
When I create a VG on one server and vgexport the map file to the other server, how will the cluster understand this?
I hope I'm being clear.
07-11-2002 06:28 AM
Re: Cluster Volume Groups
Hopefully you have STM installed.
Try the script below. It will show the hardware path, dsk device file and serial number for each disk, which should eliminate the non-unique naming confusion...
#!/bin/sh
#
#set -x
PATH=/usr/bin:/bin:/usr/sbin:/sbin
OUTFILE=/var/conf/stminfo.disk

if [ -f ${OUTFILE} ]
then
    echo "NOTE: overwriting existing file ${OUTFILE}"
else
    echo "Creating ${OUTFILE}"
fi

# Drive cstm non-interactively via a here-document; the commands
# collect disk info and save the information log to ${OUTFILE}.
cstm >/dev/null 2>&1 <<!
scl type disk
info
wait
infolog
saveas
${OUTFILE}
done
quit
ok
!

echo "Disk HW-Path, device files and Serial No."
echo "--------------------------------------------------------"
grep -e ^Hardware -e ^Serial ${OUTFILE} | awk '{print $3}' | while read VALUE ; do
    if echo $VALUE | grep -q '[0-9]/' ; then
        # A hardware path: remember it and look up its device file
        HWPATH=$VALUE
        DEVICE=`ioscan -fnH${VALUE} | sed -n 's+.*\(/dev/dsk/[^ ]*\).*+\1+p'`
    else
        # A serial number: print the triple collected so far
        printf "%20s %15s %15s\n" $HWPATH $DEVICE $VALUE
    fi
done
echo "NOTE: STM disk information has been saved to ${OUTFILE}."
echo "--------------------------------------------------------"
HTH,
RGDS, Holger
07-11-2002 06:29 AM
Re: Cluster Volume Groups
It is better if both nodes see the same disk at the same SCSI address rather than at different addresses; it makes life that much easier at a later date. However, you can still configure the system with the disks seen at different addresses. To do this, do a vgexport to a map file on the configuration node, then edit this map file and change the corresponding entries:
if c14t0d1 on the configuration node is c15t1d1 on the other node, edit the map file and change c14t0d1 to c15t1d1, and so on. Then on the other node import the VG using this modified map file, activate the VG, and try to see if you can mount the LVs from this VG.
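The per-device rewrite described above can be done mechanically with sed rather than by hand; a minimal sketch, where the device names and the one-name-per-line input format are illustrative rather than taken from a real vgexport map file:

```shell
# Map each conf-node device name to the importing node's name.
# One explicit substitution per device, since both the controller
# and target numbers may differ between the nodes.
echo "c14t0d1" | sed -e 's/c14t0d1/c15t1d1/' -e 's/c14t0d2/c15t1d2/'
```

Listing each pair explicitly avoids accidentally rewriting a device name that happens to share a controller number but maps differently.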
Hope this helps.
Regds
07-11-2002 06:32 AM
Re: Cluster Volume Groups
This is just the unique way each node knows how to get to the device; the c14/c15 number is just the instance each node created when it ran ioscan, found the connection, and then ran insf to create the device files.
If you already know the correct device files, then simply do the vgimport, adding the device files as known on the system you are importing them to at the end of the command line.
You could also look at using the -s option to vgexport/vgimport, but beware that this CAN cause issues when used on large arrays.
07-11-2002 06:38 AM
Re: Cluster Volume Groups
I tried vgexport and then vgimport with the -s option. I could mount and umount the file system without any problem, but when I tried adding these volume groups to the cluster, SAM did not see any of the volume groups.
07-11-2002 06:40 AM
Re: Cluster Volume Groups
The instance numbers assigned to the controllers differ between the two servers, hence you get c15 on one server and c14 on the other.
What you have to do after creating the VG and lvols on the first server is:
On the second server:
# mkdir /dev/vg02
# mknod /dev/vg02/group c 64 0x020000
On the first server:
# vgexport -p -s -m /tmp/vg02.map /dev/vg02
(With the -p option this creates the map file without actually exporting the VG.)
rcp the map file to the second server.
On the second server:
Edit the map file, change the c15 entries to c14, and then:
# vgimport -s -m /tmp/vg02.map /dev/vg02
Piyush
07-11-2002 06:59 AM
Re: Cluster Volume Groups
Try the link below for the steps you can take to match the device instances on the two systems:
http://support2.itrc.hp.com/service/cki/docDisplay.do?docLocale=en_US&docId=200000061924970
Hope this helps.
regds
07-11-2002 10:54 AM
Re: Cluster Volume Groups
We've just recently brought online Hitachi 9200 arrays with Brocade switches.
I've seen the same "problem" you have, with different device file names for the same LUNs on different systems.
I'm not sure how the VA7400s work, but at some point you specify, or the array assigns, a target ID for the LUN. When you first expose the LUN to your HBAs, run ioscan -fnCdisk and you will see new hardware paths but no device files.
HP sees the LUN target IDs and maps them into the dotted pathname in octal: if the TID is 0, you'll see something like 14/8.8.0.124.0.0.0 as the pathname (and, after an insf -e, a device file name like /dev/rdsk/c16t0d0). If the target ID is 8, you would see 14/8.8.0.124.0.1.0, device file c16t1d0; target ID 9 gives 14/8.8.0.124.0.1.1, device file c16t1d1. The path representation is dotted octal, and the same representation maps into the t#d# positions of the device file name. After exposing a LUN you can look for the new device on your HP 9000s, and given the octal representation of the target ID you'll know which hardware path to expect; you probably don't want to pvcreate -f a LUN that is already used in a volume group!
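Assuming the grouping described above (eight d-values per t-value, matching the octal last digits of the path), the t#d# part of the device file name can be derived from the array's target ID with simple arithmetic. A sketch of that inference, not an official mapping rule:

```shell
# Derive the t#d# suffix from a target ID, eight d-values per t-value,
# matching the examples above: 0 -> t0d0, 8 -> t1d0, 9 -> t1d1
for tid in 0 8 9; do
    echo "tid $tid -> t$((tid / 8))d$((tid % 8))"
done
```

This is just the base-8 split of the target ID, which is why the dotted-octal hardware path and the t#d# device name always agree.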
Prior comments are correct that the device names do not have to be the same across hosts, and using vgexport -p -v -s -m mapfile_name vg_name will create a map file for sharing volume groups. (Note: the -s option does not prevent an actual export; rather, it specifies that the map file will be for shared volume use. The -p option (preview) creates the map file without doing an actual export.) After copying over the map file and making the directory and group file on the second host, vgimport -v -s -m mapfile_name vg_name will import the volumes using the LV names you already established on the first host. You do not need to edit the map file and change the device names at all; vgimport -s scans the disk data structures for you and determines which devices should be assigned to your new volume group.
One important factor for using the volumes under ServiceGuard is that the minor device numbers of your group files must be the same on each host (i.e. your mknod group c 64 0xNN0000 must use the same minor number on each host). If you already have a lot of volume groups defined on one host, you may have to start your volume group numbering higher on all hosts using the volumes. Another important factor: the default HP-UX configuration allows a maximum of 10 volume groups. When you try to import a volume group numbered greater than 9, it won't be seen. You have to build a new kernel with an increased maximum-volume-groups parameter (maxvgs), and this requires a reboot, even on 11.x.
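Since the 0xNN0000 minor number carries the volume group number in its top byte, the group-file minor for a given VG number can be generated rather than typed by hand. A minimal sketch assuming that encoding:

```shell
# Build the group-file minor number for a VG number (top byte = VG number),
# e.g. VG 2 -> 0x020000, as in "mknod /dev/vg02/group c 64 0x020000"
vgnum=2
minor=`printf '0x%02x0000' "$vgnum"`
echo "$minor"
```

Generating the value this way also makes it obvious that VG 10 becomes 0x0a0000, not 0x100000, which is an easy mistake to make when writing the hex by hand.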
I must say that while I'm not happy with the relative inflexibility of the configuration software on our Hitachis (I think the VA7400s are better in this respect), the performance just blows me away. I'm copying 30 GB database volumes in 10-20 minutes, switching access to a different host and using that host for backup, getting the production server completely out of the picture... Very fast and flexible.
Good luck with your implementation.
Greg Martin.