
SG shared array device files

 
Mark Henry_1
Frequent Advisor

SG shared array device files

Hi All,

I have an array shared between six ServiceGuard machines, so all of the machines see any new LUNs created on the RAID array.

Is there any information stored in the device file that will uniquely identify a LUN across all machines? The device file on one machine may differ from the one on another, depending on the instance number of the FC card the LUN is reached through. And with redundant FC cards it's even more confusing...

Thanks,

Mark
A. Clay Stephenson
Acclaimed Contributor

Re: SG shared array device files

Hi Mark:

There is no magic bullet here. You have probably noted that the SCSI ID (the 't' part) and the LUN number (the 'd' part) remain constant, but the controller instance (the 'c' part) is a crap shoot. You can use 'vgexport -s' to add the VGID to the map file, and a 'vgimport -s' will then scan the attached disks looking for the matching VGID. That's actually a pretty good way to do this, but the primary/alternate path choices will almost certainly not all be optimum.
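A minimal sketch of that sequence (vg01, the map-file name, and the minor number are examples, not anything from your setup):

On the node where the VG is already configured:
vgexport -p -v -s -m /tmp/vg01.map vg01
(-p previews without removing the VG; -s records the VGID in the map file)
rcp /tmp/vg01.map node2:/tmp/vg01.map

On each of the other nodes:
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000
(the minor number must be unused on that node, and ideally identical cluster-wide)
vgimport -v -s -m /tmp/vg01.map vg01
(-s scans the attached disks for the matching VGID)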

Another method (tedious, but certain) is to add the LUNs one at a time and do an 'ioscan -C disk -fn' on all your nodes after each one. It takes a while, but then you know for sure.
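For what it's worth, the per-LUN round on each node would look something like:

ioscan -fnC disk
insf -C disk -e
(ioscan finds the new LUN; insf builds the device files if they don't exist yet -- the -e option reinstalls for the whole class, so it is harmless if they already do)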
If it ain't broke, I can fix that.
Mark Henry_1
Frequent Advisor

Re: SG shared array device files

Clay,

Thx.

My biggest worry is that I might later try to add a PV to another VG and trash some data. Will LVM allow you to pvcreate an existing PV, and if so, will that damage anything? Similarly, will LVM let you add a PV to a VG when it already belongs to another VG and the machine you're attempting this from does not have that VG active?

-Mark
Patrick Wallek
Honored Contributor

Re: SG shared array device files

If you have a VG defined on one machine and at some point attempt a pvcreate on one of its LUNs from another machine, then you can very easily shoot yourself in the foot.

A basic 'pvcreate /dev/rdsk/c?t?d?' (note that pvcreate wants the raw device) will generate an error if the disk already has VG information on it. However, if you do a 'pvcreate -f /dev/rdsk/c?t?d?', the pvcreate will be done on that LUN and your pager will probably start going crazy very shortly.

That's the way I understand it to work anyway.
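One non-destructive check before any pvcreate is to look for the device in every node's /etc/lvmtab. Since the 't' and 'd' parts stay constant while the 'c' part varies, grep for the tail of the name (node names and the device are made-up examples):

for node in node1 node2 node3
do
remsh $node "strings /etc/lvmtab" | grep 't0d1$'
done

(/etc/lvmtab is a binary file; strings pulls the readable VG names and device paths out of it)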
steven Burgess_2
Honored Contributor

Re: SG shared array device files

Mark

Funnily enough, Patrick's understanding is correct. I had a similar issue last week on a test cluster we have here. I created a volume group on node1 using a disk that belonged to node2. The thing was, when I attempted to view the LVM information for that disk on node1, it told me the disk didn't belong to a volume group, so I had to force the pvcreate. Then, when performing actions on node2, I found that the disk had 'somehow' (whoops) lost all its LVM configuration. It took a vgcfgrestore and a data restore to get everything back to its original state. A little embarrassing, I can tell you.

Be careful

Steve
take your time and think things through
A. Clay Stephenson
Acclaimed Contributor

Re: SG shared array device files

Hi Mark:

While again there is no magic bullet, the best advice I can offer is to carefully document your cluster. I keep a rather elaborate Visio diagram of all the cluster disks and network connections. Woe be unto the admin that uses a LUN without consulting the diagram and really big woe unto the admin that creates a new LUN without updating the diagram.

One other problem that you may not have had to deal with yet is the use of raw disks (or LUNs) for databases. While it is generally better to use LVOLs for this, some diehards prefer the raw disk devices themselves. In that case, you don't have the freedom of letting the 'c' part change. The solution is symbolic links. Oracle might be looking for /oradata/file01.dbf. You then create a symbolic link
ln -s /dev/rdsk/c2t3d5 /oradata/file01.dbf
on each node and you are all set.
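The link name stays constant cluster-wide while the target can differ per node, so the 'c' part no longer matters. For instance (device paths invented for illustration):

node1# ln -s /dev/rdsk/c2t3d5 /oradata/file01.dbf
node2# ln -s /dev/rdsk/c4t3d5 /oradata/file01.dbf

Same LUN, same link name, different controller instance on each node.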
If it ain't broke, I can fix that.
James R. Ferguson
Acclaimed Contributor

Re: SG shared array device files

Hi Mark:

In addition to the advice already given, I'll offer another "rule" that helps with LVM.

Whenever you destroy a volume group and do not intend to import it elsewhere, *explicitly*, *right-then-and-there* do a 'pvcreate -f' on the physical devices that comprised the volume group. This will set the VGID to zero. A subsequent, simple 'pvcreate' (without the 'f'orce option) will proceed without complaint.

This is most convenient when you use 'vgexport' as a rapid way to destroy a volume group: it removes the volume group and its logical volumes from the system, cleaning up /etc/lvmtab, /dev/vg*/group, and the other device files.

Thus, when you later decide to 'pvcreate', you will not (and should not) use the 'f'orce option. Then, having *not* forced the pvcreate, any warning about present volume group information can be regarded as cause for more thorough analysis --- probably because you have *really* chosen the wrong device!
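A sketch of that destroy-then-zero sequence (vg01 and the disk path are examples):

vgchange -a n vg01
vgexport vg01
(removes the /etc/lvmtab entry and the /dev/vg01 device files)
pvcreate -f /dev/rdsk/c4t0d1
(repeat for each disk that belonged to the volume group, right then and there)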

Regards!

...JRF...
Vincent Fleming
Honored Contributor

Re: SG shared array device files

You didn't say what kind of array you have, but many arrays now have a "LUN Security" feature that makes each host see only its own LUNs and not those of the other machines. The disk array itself does this, so it's not part of HP-UX.

You can even group LUNs and servers so that you can manage multiple ServiceGuard clusters easily. If what you're saying is that you have three 2-node clusters, you can make it so that each cluster sees only its own LUNs, which should help at least a little.

On HP Fibre Channel disk arrays, it's called "LUN Security" and is configured via "Secure Manager". EMC calls this "Volume Logix", I think. Other brands may call it something different.

Good luck!
No matter where you go, there you are.
Tim D Fulford
Honored Contributor

Re: SG shared array device files

Mark

The above advice ..10pts.. BUT if you've got a disk and you want to know where it lives, or whether it is truly spare, you will need to look at the VGID on the disk. The thread below shows you how to do this.
http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0x65d7d08cc06fd511abcd0090277a778c,00.html
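If that link ever goes stale: the usual trick (a sketch, assuming the xd on your release takes the od-style -j skip and -N count options) is to dump the LVM PV header, which starts 8 KB into the disk:

xd -j8192 -N32 /dev/rdsk/c4t0d1

An all-zero dump means no LVM metadata on the disk; otherwise the VGID words in this region can be compared across nodes to identify the same disk.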

You could also use 'vgscan -pav' to see if any disks are orphaned/spare.

Personally I do (well, try to do) the following

1 - Create the same VG name and minor number on ALL the computers in the cluster.
2 - Maintain a VG called vgspare which holds the disks I am 100% sure are orphaned/spare.
3 - Try to get the controller numbers the same (remember that FC cards are also LAN cards, so you will need to set the instance numbers of both; see man ioinit and the sketch after this list).
4 - Create LARGE VGs so it is unlikely that new disks will need to be added.
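For item 3, the reassignment would look roughly like this (hardware path and instance number invented; check man ioinit for the exact infile format on your release):

Put a line of the form 'hw_path class instance' into /tmp/infile, e.g.
0/4/0/0 fc 0
then apply it with
ioinit -f /tmp/infile -r
(-r reboots the system, which is required for the new instance numbers to take effect)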

Tim
-
melvyn burnard
Honored Contributor

Re: SG shared array device files

You could look at the cmpdisks script I supplied in:
http://forums.itrc.hp.com/cm/QuestionAnswer/1,,0xbe20eea29889d611abdb0090277a778c,00.html

My house is the bank's, my money the wife's, But my opinions belong to me, not HP!