<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: SG shared array device files in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754995#M643302</link>
    <description>Mark&lt;BR /&gt;&lt;BR /&gt;Funnily enough, Patrick's understanding is correct - I had a similar issue last week on a test cluster we have here. I created a volume group on node1 using a disk that belonged to node2. The thing was, when I attempted to view LVM information on node1 for that disk, it told me that the disk didn't belong to a volume group. I had to force the pvcreate. Then, when performing actions on node2, I found that the disk had 'somehow' (whoops) lost all its LVM configuration. This required a vgcfgrestore and a data restore to get everything back to its original state. A little embarrassing, I can tell you.&lt;BR /&gt;&lt;BR /&gt;Be careful&lt;BR /&gt;&lt;BR /&gt;Steve</description>
    <pubDate>Sat, 29 Jun 2002 23:01:46 GMT</pubDate>
    <dc:creator>steven Burgess_2</dc:creator>
    <dc:date>2002-06-29T23:01:46Z</dc:date>
    <item>
      <title>SG shared array device files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754991#M643298</link>
      <description>Hi All,&lt;BR /&gt;&lt;BR /&gt;I have an array shared between six ServiceGuard machines, so all of the machines see any new LUNs created on the RAID array.&lt;BR /&gt;&lt;BR /&gt;Is there any information stored in the device file that will uniquely identify a LUN across all machines? The device file on one machine might differ from the one on another, depending on the instance number of the FC card it's attached to. And with redundant FC cards, it's even more confusing...&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Mark</description>
      <pubDate>Sat, 29 Jun 2002 21:00:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754991#M643298</guid>
      <dc:creator>Mark Henry_1</dc:creator>
      <dc:date>2002-06-29T21:00:49Z</dc:date>
    </item>
    <item>
      <title>Re: SG shared array device files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754992#M643299</link>
      <description>Hi Mark:&lt;BR /&gt;&lt;BR /&gt;There is no magic bullet here. You have probably noted that your SCSI ID (the 't' part) and the LUN number (the 'd' part) remain constant, but the controller instance (the 'c' part) is a crap shoot. You can use 'vgexport -s' to add the VGID to the mapfile; a 'vgimport -s' will then scan the attached disks looking for the matching VGID. That's actually a pretty good way to do this, but almost certainly not all of the primary/alternate path choices will be optimal.&lt;BR /&gt;&lt;BR /&gt;Another (but tedious) method is to add the LUNs one at a time and then do an 'ioscan -C disk -fn' on all your nodes. It takes a while, but then you know for sure.&lt;BR /&gt;</description>
      <pubDate>Sat, 29 Jun 2002 22:19:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754992#M643299</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2002-06-29T22:19:42Z</dc:date>
    </item>
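Clay's 'vgexport -s' / 'vgimport -s' method can be sketched as follows. The HP-UX commands appear only as comments, since they run nowhere else; the runnable part below merely simulates the underlying idea - matching disks by a stored VGID rather than by their cXtYdZ names - using ordinary files in place of disks. The VG name, map path, minor number, and sample VGID are all made up for illustration.

```shell
# On the node that owns the VG (HP-UX, context only):
#   vgexport -p -s -m /tmp/vgshare.map vgshare   # preview; writes the VGID into the mapfile
#   rcp /tmp/vgshare.map node2:/tmp/
# On every other node (HP-UX, context only):
#   mkdir /dev/vgshare && mknod /dev/vgshare/group c 64 0x010000
#   vgimport -s -m /tmp/vgshare.map vgshare      # scans all disks for the matching VGID

# Simulation of the VGID scan on ordinary files:
workdir=$(mktemp -d)
# Three fake "disks": two carry the wanted VGID, one is spare.
printf 'VGID=1234-5678' > "$workdir/c0t1d0"
printf 'VGID=1234-5678' > "$workdir/c4t1d0"   # same LUN, different controller number
printf 'VGID=0000-0000' > "$workdir/c0t2d0"

# The 'vgimport -s' analogue: find every device carrying the wanted VGID,
# regardless of what the device file happens to be called on this node.
wanted='1234-5678'
matches=$(grep -l "VGID=$wanted" "$workdir"/* | sort)
echo "$matches"
```

The point of the simulation is that the scan keys on the VGID, so the two nodes can disagree about the 'c' part of the name and the import still finds the right disks.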
    <item>
      <title>Re: SG shared array device files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754993#M643300</link>
      <description>Clay,&lt;BR /&gt;&lt;BR /&gt;Thanks.&lt;BR /&gt;&lt;BR /&gt;My biggest worry is that I might later try to add a PV to another VG and trash some data - will LVM allow you to pvcreate an existing PV, and if so, will that damage anything? Similarly, will LVM allow you to add a PV to a VG when it already belongs to another VG and the machine you're attempting it from does not have that VG currently active?&lt;BR /&gt;&lt;BR /&gt;-Mark</description>
      <pubDate>Sat, 29 Jun 2002 22:37:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754993#M643300</guid>
      <dc:creator>Mark Henry_1</dc:creator>
      <dc:date>2002-06-29T22:37:32Z</dc:date>
    </item>
    <item>
      <title>Re: SG shared array device files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754994#M643301</link>
      <description>If you have a VG defined on one machine and at some point attempt to do a pvcreate on a LUN from another machine, then you can very easily shoot yourself in the foot.&lt;BR /&gt;&lt;BR /&gt;A basic 'pvcreate /dev/dsk/c?t?d?' will generate an error if the disk already has VG information on it. However, if you do a 'pvcreate -f /dev/dsk/c?t?d?', then the pvcreate will be done on that LUN and your pager will probably start going crazy very shortly afterwards.&lt;BR /&gt;&lt;BR /&gt;That's the way I understand it to work, anyway.</description>
      <pubDate>Sat, 29 Jun 2002 22:44:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754994#M643301</guid>
      <dc:creator>Patrick Wallek</dc:creator>
      <dc:date>2002-06-29T22:44:04Z</dc:date>
    </item>
    <item>
      <title>Re: SG shared array device files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754995#M643302</link>
      <description>Mark&lt;BR /&gt;&lt;BR /&gt;Funnily enough, Patrick's understanding is correct - I had a similar issue last week on a test cluster we have here. I created a volume group on node1 using a disk that belonged to node2. The thing was, when I attempted to view LVM information on node1 for that disk, it told me that the disk didn't belong to a volume group. I had to force the pvcreate. Then, when performing actions on node2, I found that the disk had 'somehow' (whoops) lost all its LVM configuration. This required a vgcfgrestore and a data restore to get everything back to its original state. A little embarrassing, I can tell you.&lt;BR /&gt;&lt;BR /&gt;Be careful&lt;BR /&gt;&lt;BR /&gt;Steve</description>
      <pubDate>Sat, 29 Jun 2002 23:01:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754995#M643302</guid>
      <dc:creator>steven Burgess_2</dc:creator>
      <dc:date>2002-06-29T23:01:46Z</dc:date>
    </item>
    <item>
      <title>Re: SG shared array device files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754996#M643303</link>
      <description>Hi Mark:&lt;BR /&gt;&lt;BR /&gt;While, again, there is no magic bullet, the best advice I can offer is to carefully document your cluster. I keep a rather elaborate Visio diagram of all the cluster disks and network connections. Woe be unto the admin that uses a LUN without consulting the diagram, and really big woe unto the admin that creates a new LUN without updating the diagram.&lt;BR /&gt;&lt;BR /&gt;One other problem that you may not have had to deal with yet is the use of raw disks (or LUNs) for databases. While it is generally better to use LVOLs for this, some diehards prefer the raw disk devices themselves. In that case, you don't have the freedom of allowing the 'c' part to change. The solution is to use symbolic links. Oracle might be looking for /oradata/file01.dbf. You then enter a symbolic link&lt;BR /&gt;ln -s /dev/rdsk/c2t3d5 /oradata/file01.dbf&lt;BR /&gt;on each node and you are all set.&lt;BR /&gt;</description>
      <pubDate>Sun, 30 Jun 2002 18:16:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754996#M643303</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2002-06-30T18:16:00Z</dc:date>
    </item>
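Clay's symbolic-link trick for raw devices can be demonstrated with ordinary files. The paths below are stand-ins for illustration only; on a real node the link target would be the node-specific raw device file (e.g. /dev/rdsk/c2t3d5), while the application always opens the stable name.

```shell
# Stand-ins for a raw device whose controller number differs per node:
mkdir -p /tmp/linkdemo/dev/rdsk /tmp/linkdemo/oradata
touch /tmp/linkdemo/dev/rdsk/c2t3d5

# Each node gets its own link; only the target differs from node to node.
ln -sf /tmp/linkdemo/dev/rdsk/c2t3d5 /tmp/linkdemo/oradata/file01.dbf

# The stable name now resolves to this node's device:
readlink /tmp/linkdemo/oradata/file01.dbf
```

The design point is that the unstable element (the 'c' part) is confined to one link per node, so the database configuration never has to change.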
    <item>
      <title>Re: SG shared array device files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754997#M643304</link>
      <description>Hi Mark:&lt;BR /&gt;&lt;BR /&gt;In addition to the advice already given, I'll offer another "rule" that helps with LVM.&lt;BR /&gt;&lt;BR /&gt;Whenever you destroy a volume group and do not intend to import it elsewhere, *explicitly*, *right-then-and-there* do a 'pvcreate -f' on the physical devices that comprised the volume group.  This will set the VGID to zero.  A subsequent, simple 'pvcreate' (without the 'f'orce option) will proceed without complaint.&lt;BR /&gt;&lt;BR /&gt;This is most convenient when you use 'vgexport' as a rapid destruction method for removing a volume group, its logical volumes, cleaning up /etc/lvmtab, and /dev/vg*/group and other device files.&lt;BR /&gt;&lt;BR /&gt;Thus, when you later decide to 'pvcreate' you will not (nor should you) use the 'f'orce option.  Then, having *not* forced the pvcreate, any warning of present volume group information can be regarded as cause for more thorough analysis --- probably because you have *really* chosen the wrong device!&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
      <pubDate>Sun, 30 Jun 2002 18:30:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754997#M643304</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2002-06-30T18:30:06Z</dc:date>
    </item>
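JRF's rule amounts to "zero the VGID immediately, so that an unforced pvcreate stays safe later". The real commands are HP-UX only, so the runnable sketch below simulates the refuse-unless-forced behaviour on an ordinary file standing in for a disk; the function name and VGID format are invented for this illustration.

```shell
# Fake "disk": a file whose contents hold the VGID.
disk=$(mktemp)
printf 'VGID=1234-5678' > "$disk"

# pvcreate-like behaviour: refuse if a non-zero VGID is present, unless forced.
fake_pvcreate() {  # usage: fake_pvcreate [-f] <disk>
    force=no
    [ "$1" = "-f" ] && { force=yes; shift; }
    if ! grep -q 'VGID=0000-0000' "$1" && [ "$force" = no ]; then
        echo "fake_pvcreate: $1 belongs to a volume group; use -f to override"
        return 1
    fi
    printf 'VGID=0000-0000' > "$1"   # zero the VGID, as 'pvcreate -f' would
    echo "fake_pvcreate: initialized $1"
}

fake_pvcreate "$disk"      # refused: a VGID is still present
fake_pvcreate -f "$disk"   # forced: VGID zeroed, right-then-and-there
fake_pvcreate "$disk"      # a plain pvcreate now succeeds without force
```

Once the wipe is done at destruction time, any later complaint from an unforced pvcreate really does mean you picked the wrong device.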
    <item>
      <title>Re: SG shared array device files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754998#M643305</link>
      <description>You didn't say what kind of array you have, but many arrays now have a "LUN Security" feature that makes it so that each host sees only its LUNs and not those from other machines.  The disk array itself does this, so it's not part of HP-UX.&lt;BR /&gt;&lt;BR /&gt;You can even group LUNs and servers so that you can manage multiple ServiceGuard clusters easily.  If what you're saying is that you have three 2-node clusters, you can make it so that any cluster sees only its own LUNs, which should help at least a little.&lt;BR /&gt;&lt;BR /&gt;On HP Fibre Channel disk arrays, it's called "LUN Security" and is configured via "Secure Manager".  EMC calls this "Volume Logix", I think.  Other brands may call it something different.&lt;BR /&gt;&lt;BR /&gt;Good luck!&lt;BR /&gt;</description>
      <pubDate>Sun, 30 Jun 2002 20:23:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754998#M643305</guid>
      <dc:creator>Vincent Fleming</dc:creator>
      <dc:date>2002-06-30T20:23:44Z</dc:date>
    </item>
    <item>
      <title>Re: SG shared array device files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754999#M643306</link>
      <description>Mark&lt;BR /&gt;&lt;BR /&gt;The above advice ..10pts.. BUT if you've got a disk &amp;amp; you want to know where it lives, or whether it is truly spare, you will need to look at the VGID of the disk. The thread below shows you how to do this.&lt;BR /&gt;&lt;A href="http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x65d7d08cc06fd511abcd0090277a778c,00.html" target="_blank"&gt;http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x65d7d08cc06fd511abcd0090277a778c,00.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;You could also use 'vgscan -pav' to see if any disks are orphaned/spare.&lt;BR /&gt;&lt;BR /&gt;Personally I do (well, try to do) the following:&lt;BR /&gt;&lt;BR /&gt;1 - Create the same VG name &amp;amp; minor number on ALL the computers in the cluster.&lt;BR /&gt;2 - Maintain a VG called vgspare which contains my 100%-sure orphaned/spare disks.&lt;BR /&gt;3 - Try to get the controller numbers the same (remember that FC cards also appear as LAN cards, so you will need to set the instance numbers of both; see man ioinit).&lt;BR /&gt;4 - Create LARGE VGs so it is unlikely that new disks will need to be added.&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Sun, 30 Jun 2002 20:34:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2754999#M643306</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2002-06-30T20:34:21Z</dc:date>
    </item>
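Tim's point 1 (same VG name and minor number on every node) comes down to creating the group device file identically everywhere. The HP-UX commands are shown as comments for context; the runnable part below only computes the conventional 0xNN0000 minor-number string for a given VG number, and the VG name is made up for illustration.

```shell
# On each node in the cluster (HP-UX, context only):
#   mkdir /dev/vgshare
#   mknod /dev/vgshare/group c 64 0x020000   # major 64, minor 0xNN0000
#   vgimport -s -m /tmp/vgshare.map vgshare

# Conventional minor-number string for VG number N (two hex digits, then 0000):
vgnum=2
minor=$(printf '0x%02x0000' "$vgnum")
echo "$minor"
```

Keeping NN identical across nodes is what lets the same package configuration activate the VG on whichever node it fails over to.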
    <item>
      <title>Re: SG shared array device files</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2755000#M643307</link>
      <description>You could look at the cmpdisks script I supplied in:&lt;BR /&gt;&lt;A href="http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0xbe20eea29889d611abdb0090277a778c,00.html" target="_blank"&gt;http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0xbe20eea29889d611abdb0090277a778c,00.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 01 Jul 2002 06:40:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sg-shared-array-device-files/m-p/2755000#M643307</guid>
      <dc:creator>melvyn burnard</dc:creator>
      <dc:date>2002-07-01T06:40:47Z</dc:date>
    </item>
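The cmpdisks script itself is not reproduced in the thread. Under the assumption that it pairs each node's device files by a key that is stable across nodes, a minimal sketch with fabricated ioscan-style data might look like this; the hardware paths, device names, and file locations are all invented, and on real nodes the key would come from 'ioscan -fnC disk' output.

```shell
# Fabricated per-node extracts: "stable-key  device-file", one line per LUN,
# already sorted on the key as join(1) requires.
cat > /tmp/node1.disks <<'EOF'
0/4/0/0.8.0.255.0.1.0 /dev/dsk/c4t1d0
0/4/0/0.8.0.255.0.2.0 /dev/dsk/c4t2d0
EOF
cat > /tmp/node2.disks <<'EOF'
0/4/0/0.8.0.255.0.1.0 /dev/dsk/c6t1d0
0/4/0/0.8.0.255.0.2.0 /dev/dsk/c6t2d0
EOF

# Join on the key: each output line shows one LUN with its per-node
# device file, making mismatched controller numbers obvious at a glance.
join /tmp/node1.disks /tmp/node2.disks
```

Each output line reads "key node1-device node2-device", so a LUN that is c4t1d0 on one node and c6t1d0 on the other is immediately visible as the same disk.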
  </channel>
</rss>

