khilari
Regular Advisor

scsi id problem

Hi guys, I have a problem with my JBOD. I was configuring MC/ServiceGuard: on server A I configured the SCSI path so the disks show up on controller c1, and on server B they show up on c3. Both servers see the same 6 disks in the JBOD. The vgimport and vgexport run fine, but when I run cmquerycl to put the cluster together I can't do it. It gives errors such as it can't read the disk on SCSI controller c1. Do you think I should put the same SCSI controllers on both servers, i.e. c1 on server A and c1 on server B too? It wasn't a problem before... it didn't matter which SCSI controllers the disks were on, as long as both servers could see the disks in the JBOD.
3 REPLIES
Bharat Katkar
Honored Contributor

Re: scsi id problem

Hi,
The problem is that the same disk shows up under two different names, one on server A and another on server B, because the two servers are using controllers with different hardware paths.
The disk device files on server A will be c1txdx whereas on server B they will be c3txdx, and this is your problem.
You can create symbolic links on one of the servers so that the disk device names match.

For example, on server B:

telnet serverb
# ln -s /dev/dsk/c3txdy /dev/dsk/c1txdy
# ln -s /dev/rdsk/c3txdy /dev/rdsk/c1txdy

Here x and y take a unique value for each of the 6 disks, and you have to repeat the above 2 commands for every disk.

This will make the disk devices on both the servers identical.

Make sure the c1txdy device files are not already present on the server and do not represent some other devices.
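The two link commands can be scripted for all six disks. A minimal sketch, assuming the disks appear as c3t0d0 through c3t5d0 (those target numbers are placeholders; adjust them to your hardware). It builds the links under a scratch directory first so the idea can be tried safely; on the real server, DEVROOT would be /dev:

```shell
# Sketch: create matching c1-named links for six c3 disks.
# DEVROOT is a scratch area here; on the real server it would be /dev.
DEVROOT=$(mktemp -d)
mkdir -p "$DEVROOT/dsk" "$DEVROOT/rdsk"

t=0
while [ "$t" -le 5 ]; do
    # Stand-ins for the real block and raw device files.
    touch "$DEVROOT/dsk/c3t${t}d0" "$DEVROOT/rdsk/c3t${t}d0"
    # Block device link, then raw device link, so names match both servers.
    ln -s "$DEVROOT/dsk/c3t${t}d0"  "$DEVROOT/dsk/c1t${t}d0"
    ln -s "$DEVROOT/rdsk/c3t${t}d0" "$DEVROOT/rdsk/c1t${t}d0"
    t=$((t + 1))
done

ls -l "$DEVROOT/dsk"
```

Once you are happy with the result, the same loop pointed at /dev creates the actual links - after checking, as noted above, that no real c1t*d0 files are already in use there.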

Hope that helps.
Regards,
You need to know a lot to actually know how little you know
Kent Ostby
Honored Contributor

Re: scsi id problem

Mujtaba --

I think that making the device names match, as suggested above, will be in your best interest overall as a long-term supportable configuration.

Best regards,

Kent M. Ostby
"Well, actually, she is a rocket scientist" -- Steve Martin in "Roxanne"
Stephen Doud
Honored Contributor

Re: scsi id problem

The cXtYdZ names do not have to match up between servers. As a matter of fact, Serviceguard doesn't care what the disk special files are named. It's LVM that cares - and only to ensure that the correct disks are addressed.
Explanation:
When a volume group is created, /etc/lvmtab will include the /dev/dsk paths for each disk that is a member of the VG.
In a Serviceguard environment, the other nodes' /etc/lvmtab must be loaded with the volume group name and disk special files; vgimport is used for this. If vgimport references a map file that contains the VGID on the top line, it will scan each disk for a VGDA and load the /dev/dsk path of any disk containing the same VGID. Since vgchange only looks at the local /etc/lvmtab, the file need not match the one on a different node in the cluster.
Example:
nodeA (where the VG already exists):
# vgexport -pvs -m map.vgNAME /dev/vgNAME
# rcp map.vgNAME nodeB:/etc/lvmconf

nodeB:
# mkdir /dev/vgNAME
# mknod /dev/vgNAME/group c 64 0xNN0000 (where NN = a unique minor number)
# vgimport -vs -m /etc/lvmconf/map.vgNAME /dev/vgNAME
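After the vgimport, you can confirm that the VG's disk paths actually landed in /etc/lvmtab. On HP-UX the quick check is `strings /etc/lvmtab` (the file is binary). In the sketch below a heredoc stands in for that output so the check can be tried anywhere; vgNAME and the disk names are placeholders:

```shell
# Stand-in for the output of:  strings /etc/lvmtab
# (on the real node: lvmtab_text=$(strings /etc/lvmtab))
lvmtab_text=$(cat <<'EOF'
/dev/vgNAME
/dev/dsk/c1t0d0
/dev/dsk/c1t1d0
EOF
)

# Count the disk special files recorded for the imported VG.
disk_count=$(printf '%s\n' "$lvmtab_text" | grep -c '^/dev/dsk/')
echo "disks recorded: $disk_count"
```

For the fuller picture, `vgdisplay -v /dev/vgNAME` on the importing node lists each physical volume the VG now knows about.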


To promote high availability of the data, use RAID protection or mirroring, and redundant paths to the JBOD.