<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Cluster Volume Groups in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761949#M710441</link>
    <description>and the H/W paths are the same on both nodes for these disks, except for the instance number.</description>
    <pubDate>Thu, 11 Jul 2002 13:41:02 GMT</pubDate>
    <dc:creator>Rushank</dc:creator>
    <dc:date>2002-07-11T13:41:02Z</dc:date>
    <item>
      <title>Cluster Volume Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761943#M710435</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;I'm planning to configure a two-node cluster. The external disks are on a VA7400 connected to two servers over Fibre Channel via a Brocade switch.&lt;BR /&gt;&lt;BR /&gt;I've created a few LUNs in order to create VGs and LVs.&lt;BR /&gt;Now my problem is: when I run ioscan I see these LUN disks, but with different disk names.&lt;BR /&gt;For example, on server1 c15t0d0 has the same information as c14t0d0 has on server2.&lt;BR /&gt;Server1 does not have a disk named c14t0d0.&lt;BR /&gt;When I create the VG on one server and vgexport the map file to the other server, how will the cluster understand this?&lt;BR /&gt;I hope I'm clear.&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Jul 2002 13:22:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761943#M710435</guid>
      <dc:creator>Rushank</dc:creator>
      <dc:date>2002-07-11T13:22:07Z</dc:date>
    </item>
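The vgexport/vgimport flow that the replies below converge on can be sketched as a dry run (commands are echoed rather than executed, since they require HP-UX with LVM; the VG name vg01, the map file path, and the minor number are hypothetical):

```shell
# Dry-run helper: print each command instead of executing it,
# since vgexport/vgimport only exist on HP-UX.
run() { echo "WOULD RUN: $*"; }

# On server1, where the VG was created:
run vgexport -p -v -s -m /tmp/vg01.map /dev/vg01  # -p previews; the VG stays intact
run rcp /tmp/vg01.map server2:/tmp/vg01.map

# On server2:
run mkdir /dev/vg01
run mknod /dev/vg01/group c 64 0x010000           # minor number must match server1
run vgimport -v -s -m /tmp/vg01.map /dev/vg01     # -s scans the disks by VGID, so the
                                                  # c14-vs-c15 difference does not matter
run vgchange -a y /dev/vg01
```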
    <item>
      <title>Re: Cluster Volume Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761944#M710436</link>
      <description>Hi!&lt;BR /&gt;&lt;BR /&gt;Hopefully you have installed STM.&lt;BR /&gt;Try using this script. It will show the H/W path, dsk device file and serial number for each disk. This should eliminate the non-unique stuff ...&lt;BR /&gt;&lt;BR /&gt;#!/bin/sh&lt;BR /&gt;#&lt;BR /&gt;#set -x&lt;BR /&gt;&lt;BR /&gt;PATH=/usr/bin:/bin:/usr/sbin:/sbin&lt;BR /&gt;OUTFILE=/var/conf/stminfo.disk&lt;BR /&gt;&lt;BR /&gt;if [ -f ${OUTFILE} ]&lt;BR /&gt;then&lt;BR /&gt;        echo "NOTE: overwriting existing file ${OUTFILE}"&lt;BR /&gt;else&lt;BR /&gt;        echo "Creating ${OUTFILE}"&lt;BR /&gt;fi&lt;BR /&gt;&lt;BR /&gt;# Feed cstm its commands via a here-document, discarding the chatter:&lt;BR /&gt;cstm &amp;gt;/dev/null 2&amp;gt;&amp;amp;1 &amp;lt;&amp;lt;!&lt;BR /&gt;scl type disk&lt;BR /&gt;info&lt;BR /&gt;wait&lt;BR /&gt;infolog&lt;BR /&gt;saveas&lt;BR /&gt;${OUTFILE}&lt;BR /&gt;done&lt;BR /&gt;quit&lt;BR /&gt;ok&lt;BR /&gt;!&lt;BR /&gt;echo "Disk HW-Path, device files and Serial No."&lt;BR /&gt;echo "--------------------------------------------------------"&lt;BR /&gt;grep -e ^Hardware -e ^Serial ${OUTFILE} | awk '{print $3}' | while read VALUE ; do&lt;BR /&gt;echo $VALUE | grep -q '[0-9]/'&lt;BR /&gt;if [ $? -eq 0 ] ; then&lt;BR /&gt;HWPATH=$VALUE&lt;BR /&gt;ioscan -fnH${VALUE} | sed -n 's+.*\(/dev/dsk/[^ ]*\).*+\1+p' | read DEVICE&lt;BR /&gt;else&lt;BR /&gt;printf "%20s %15s %15s\n" $HWPATH $DEVICE $VALUE&lt;BR /&gt;fi&lt;BR /&gt;done&lt;BR /&gt;echo "NOTE: STM disk information has been saved to ${OUTFILE}."&lt;BR /&gt;echo "--------------------------------------------------------"&lt;BR /&gt;&lt;BR /&gt;HTH,&lt;BR /&gt;&lt;BR /&gt;RGDS, Holger</description>
      <pubDate>Thu, 11 Jul 2002 13:28:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761944#M710436</guid>
      <dc:creator>Holger Knoppik</dc:creator>
      <dc:date>2002-07-11T13:28:21Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Volume Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761945#M710437</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;It is better if both nodes see the same disk at the same SCSI address rather than at different addresses; it makes life that much easier at a later date. However, you can still configure the system with the disks seen at different addresses. To do this, do a vgexport to a map file on the conf node, then edit this map file and change the corresponding entries:&lt;BR /&gt;&lt;BR /&gt;if c14t0d1 on the conf node is c15t1d1 on the other node, edit the map file and change c14t0d1 to c15t1d1, and so on. Then on the other node import the vg using this modified map file, activate the vg, and see if you can mount the lv's from this vg.&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;&lt;BR /&gt;Regds&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Jul 2002 13:29:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761945#M710437</guid>
      <dc:creator>Sanjay_6</dc:creator>
      <dc:date>2002-07-11T13:29:14Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Volume Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761946#M710438</link>
      <description>There is no problem whatsoever having the discs/luns identified differently on each node, as you are seeing.&lt;BR /&gt;This is just how each node uniquely knows to get to the device, and the c14/c15 number is just the instance that was created on each node when it ioscan'ed, found the connection, and then insf'ed the device files.&lt;BR /&gt;&lt;BR /&gt;If you already know the correct device files, then simply do the vgimport, adding the device files as known on the system you are importing them to at the end of the command line.&lt;BR /&gt;You could also look at using the -s option to vgexport/vgimport, but beware that this CAN cause issues when used on large arrays.&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Jul 2002 13:32:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761946#M710438</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2002-07-11T13:32:52Z</dc:date>
    </item>
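melvyn's approach of naming the device files explicitly on the vgimport command line can be sketched the same dry-run way (the VG name, map file path, and c14 device files are hypothetical; the command is echoed, not executed, since vgimport requires HP-UX):

```shell
# Dry-run helper: HP-UX commands are echoed, not executed.
run() { echo "WOULD RUN: $*"; }

# On the importing node, list the PVs under the names *that* node uses,
# at the end of the command line:
run vgimport -v -m /tmp/vg01.map /dev/vg01 /dev/dsk/c14t0d0 /dev/dsk/c14t0d1
```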
    <item>
      <title>Re: Cluster Volume Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761947#M710439</link>
      <description>Well,&lt;BR /&gt;&lt;BR /&gt;I tried vgexport and then vgimport with the -s option. I could mount and umount the file systems without any problem, but when I tried adding these volume groups to the cluster, SAM did not see any of the volume groups.&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Jul 2002 13:38:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761947#M710439</guid>
      <dc:creator>Rushank</dc:creator>
      <dc:date>2002-07-11T13:38:00Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Volume Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761948#M710440</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;The instance numbers assigned to the controllers are different, hence you get c15 on one server and c14 on the other.&lt;BR /&gt;&lt;BR /&gt;What you have to do after creating the vg and lvols on the first server is:&lt;BR /&gt;&lt;BR /&gt;On the second server:&lt;BR /&gt;&lt;BR /&gt;# mkdir /dev/vg02&lt;BR /&gt;# mknod /dev/vg02/group c 64 0x020000&lt;BR /&gt;&lt;BR /&gt;On the first server:&lt;BR /&gt;# vgexport -p -s -m /tmp/vg02.map /dev/vg02&lt;BR /&gt;(creates the map file; with the -p option the vg is not actually exported)&lt;BR /&gt;&lt;BR /&gt;rcp the map file to the second server.&lt;BR /&gt;&lt;BR /&gt;On the second server:&lt;BR /&gt;Edit the map file and change the c15 to c14, and then:&lt;BR /&gt;&lt;BR /&gt;# vgimport -s -m /tmp/vg02.map /dev/vg02&lt;BR /&gt;&lt;BR /&gt;Piyush&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Jul 2002 13:40:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761948#M710440</guid>
      <dc:creator>PIYUSH D. PATEL</dc:creator>
      <dc:date>2002-07-11T13:40:04Z</dc:date>
    </item>
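Piyush's mknod minor number 0x020000 carries the volume-group number (02) in its top byte. A small portable helper shows the encoding (a sketch; the "c 64 0xNN0000" layout is the conventional HP-UX LVM group-file numbering, and the helper name is made up for illustration):

```shell
# Compute the group-file minor number for a given VG number:
# the VG number occupies the high byte, the rest is zero.
vg_minor() { printf '0x%02x0000' "$1"; }

vg_minor 2    # -> 0x020000  (Piyush's vg02)
vg_minor 10   # -> 0x0a0000
```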
    <item>
      <title>Re: Cluster Volume Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761949#M710441</link>
      <description>and the H/W paths are the same on both nodes for these disks, except for the instance number.</description>
      <pubDate>Thu, 11 Jul 2002 13:41:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761949#M710441</guid>
      <dc:creator>Rushank</dc:creator>
      <dc:date>2002-07-11T13:41:02Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Volume Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761950#M710442</link>
      <description>Hi Rushank,&lt;BR /&gt;&lt;BR /&gt;Try this link below on the steps you can try to match the instances on the two systems,&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://support2.itrc.hp.com/service/cki/docDisplay.do?docLocale=en_US&amp;amp;docId=200000061924970" target="_blank"&gt;http://support2.itrc.hp.com/service/cki/docDisplay.do?docLocale=en_US&amp;amp;docId=200000061924970&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Hope this helps.&lt;BR /&gt;&lt;BR /&gt;regds&lt;BR /&gt;</description>
      <pubDate>Thu, 11 Jul 2002 13:59:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761950#M710442</guid>
      <dc:creator>Sanjay_6</dc:creator>
      <dc:date>2002-07-11T13:59:44Z</dc:date>
    </item>
    <item>
      <title>Re: Cluster Volume Groups</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761951#M710443</link>
      <description>Sounds like you're having great fun!&lt;BR /&gt;&lt;BR /&gt;We've just recently brought online Hitachi 9200 arrays with Brocade switches.&lt;BR /&gt;&lt;BR /&gt;I've seen the same "problem" you have, with different device file names for the same luns on different systems.&lt;BR /&gt;&lt;BR /&gt;I'm not sure how the VA7400's work, but at some point either you specify a target id for the lun or the array assigns one.  When you initially expose the lun to your HBA's, do an ioscan -fnCdisk and you will see the new hardware paths but no device files.&lt;BR /&gt;&lt;BR /&gt;HP sees the lun target id's and maps them into the dotted pathname in octal, so if the TID is 0, you'll see something like 14/8.8.0.124.0.0.0 as the pathname (and after an insf -e, a device file name like /dev/rdsk/c16t0d0).  If the target id is 8, you would see 14/8.8.0.124.0.1.0, device file c16t1d0.  Target id 9, 14/8.8.0.124.0.1.1, device file c16t1d1.  The path representation is apparently dotted octal, and the same representation maps into the t#d# positions of the device file name.  You can look for visibility of a new device on your HP 9000s after exposing the lun, and given the octal representation of the target id, you'll know what hardware path to expect;  you probably don't want to pvcreate -f a lun already used in a volume group!&lt;BR /&gt;&lt;BR /&gt;Prior comments are correct about the device names not having to be the same across hosts, and using vgexport -p -v -s -m mapfile_name vg_name will create a mapfile for sharing volume groups.  (Note - the -s option does not prevent an actual export; rather, it specifies that the mapfile will be for shared volume use.  The -p option (preview) creates the mapfile without doing an actual export.)  After copying over the mapfile and making the directory and group file on the second host, a vgimport -v -s -m mapfile_name vg_name will import the volumes using the lv names you already established on the first host.  You do not need to edit the mapfile and change the device names at all; vgimport -s will scan the disk data structures for you and determine which devices should be assigned to your new volume group.&lt;BR /&gt;&lt;BR /&gt;One important factor for using the volumes under ServiceGuard is that the minor device numbers of your group files must be the same on each host (i.e. your mknod group c 64 0xNN0000 must be the same on each host). If you already have a lot of volumes defined on one host, you may have to start your volume-group numbering higher for all hosts using the volumes.   Another important factor is that the default HP-UX configuration allows a maximum of 10 volume groups.  When you try to import a volume to a vg number greater than 9, it won't be seen.  You have to build a new kernel with an increased maxvgs (maximum volume groups) parameter, and this requires a reboot, even on 11.x.&lt;BR /&gt;&lt;BR /&gt;I must say that while I'm not happy with the relative inflexibility of the configuration software on our Hitachis - I think the VA7400's are better this way - the performance just blows me away.  I'm doing copies of 30 GB database volumes in 10-20 minutes, switching access to a different host and using that for backup, getting the production server completely out of the picture....   Very fast and flexible.&lt;BR /&gt;&lt;BR /&gt;Good luck with your implementation.&lt;BR /&gt;&lt;BR /&gt;Greg Martin.</description>
      <pubDate>Thu, 11 Jul 2002 17:54:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/cluster-volume-groups/m-p/2761951#M710443</guid>
      <dc:creator>Greg Martin</dc:creator>
      <dc:date>2002-07-11T17:54:35Z</dc:date>
    </item>
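Greg's octal mapping can be checked with a little arithmetic: the decimal target/lun id, written as two octal digits, supplies the t and d fields of the device file name. A simplified sketch of that arithmetic (the helper name is made up; real HP-UX device names also carry the controller instance in the c# field):

```shell
# Map a decimal target id to the t#d# part of an HP-UX device name:
# high octal digit -> t, low octal digit -> d.
tid_to_dev() { printf 't%dd%d' $(( $1 / 8 )) $(( $1 % 8 )); }

tid_to_dev 0   # -> t0d0
tid_to_dev 8   # -> t1d0
tid_to_dev 9   # -> t1d1
```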
  </channel>
</rss>

