System Administration

hp storageworks p2000 with qlogic fiber channel + debian

 
New Member

hp storageworks p2000 with qlogic fiber channel + debian

hi!

Please help me configure my HP StorageWorks P2000 on my Debian Lenny system.

I've already loaded the QLogic Fibre Channel module:

lsmod |grep qla2
qla2xxx 202821 0
scsi_transport_fc 35259 1 qla2xxx

Also lspci shows:

03:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)

I don't know what to do next in order for me to mount the shared disk.

I would appreciate your immediate assistance.

Oliver
4 REPLIES
Honored Contributor

Re: hp storageworks p2000 with qlogic fiber channel + debian

FibreChannel storage systems must often be configured to present a specific disk/LUN to specific FibreChannel HBA(s), using the WWN(s) of the HBA(s). If this is not done, the storage might not allow any connections at all.

A WWN is like the MAC address of a network card, only a bit longer. It is usually printed on the HBA card, and it is also exposed by the driver: see /sys/class/fc_host/host?/port_name for the port WWN (which is usually what the storage's host-mapping expects) and node_name for the node WWN.

(Replace "host?" with the directory that corresponds to your HBA. If you have only one FC HBA, there should be only one subdirectory in /sys/class/fc_host/.)

Usually, the storage system also needs to know the "host type", or some attributes that determine how the disk is presented to the server. Different OSs and multipath solutions have different requirements. See the documentation of your storage for details.

Once the storage side has been configured, your system might immediately see a new /dev/sd* device. Install the "lsscsi" package from the Debian package repository, then use the "lsscsi" command to identify your disk devices.
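For example (each lsscsi output line shows the SCSI address as host:channel:target:lun, the vendor/model, and the /dev/sd* node):

aptitude install lsscsi
lsscsi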

Sometimes the device might not appear automatically, and manual probing is required. (With modern Linux kernels, this might indicate that the host type or other storage-side configuration is not 100% correctly set for Linux, or that you have an old HBA firmware version.) In these cases, you can either reboot the server or trigger a re-scan manually.

This command will cause the HBA to reset its connection to the storage. It can be used to make the HBA aware of newly presented disks. (Warning: this causes an interruption to FC connectivity. Unless you have multipathing configured, don't use it if you have FC disks mounted!)

echo "1" > /sys/class/fc_host/host?/issue_lip

After this, you should wait a few seconds to let the LIP complete (you'll see the FC link going down and then coming back up in the dmesg output). Then issue the next command, which makes the kernel check for new devices on this HBA:

echo "- - -" > /sys/class/scsi_host/host?/scan

Note: this command uses /sys/class/scsi_host, not /sys/class/fc_host, but the "host?" part is named exactly the same in both directory trees.
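If you have more than one FC HBA, a small loop saves some typing (just a sketch using the same sysfs paths as above; note that /sys/class/scsi_host also contains non-FC controllers, but re-scanning those is harmless):

for h in /sys/class/fc_host/host*; do
    echo "1" > "$h/issue_lip"
done
sleep 10
for h in /sys/class/scsi_host/host*; do
    echo "- - -" > "$h/scan"
done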

MK
New Member

Re: hp storageworks p2000 with qlogic fiber channel + debian

hi mk,

Thank you for your reply.

I've successfully mounted the disk on my Debian hosts and configured multipathing.

Server A:

multipath -ll
nuxeo (3600c0ff00010c8c2feacf84c01000000) dm-0 HP ,P2000 G3 FC
[size=5.5T][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=10][enabled]
 \_ 7:0:0:0 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=50][active]
 \_ 0:0:0:0 sda 8:0 [active][ready]

Server B:

multipath -ll
nuxeo (3600c0ff00010c8c2feacf84c01000000) dm-3 HP ,P2000 G3 FC
[size=5.5T][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=50][active]
 \_ 7:0:0:0 sdb 8:16 [active][ready]
\_ round-robin 0 [prio=10][enabled]
 \_ 0:0:0:0 sda 8:0 [active][ready]


=====multipath.conf=======

defaults {
        udev_dir                /dev
        polling_interval        10
        selector                "round-robin 0"
        path_grouping_policy    failover
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        prio_callout            "/bin/true"
        path_checker            tur
        rr_min_io               100
        rr_weight               uniform
        failback                immediate
        no_path_retry           12
        user_friendly_names     yes
}

multipaths {
        multipath {
                wwid                    3600c0ff00010c8c2feacf84c01000000
                alias                   nuxeo
                path_grouping_policy    group_by_prio
                path_selector           "round-robin 0"
                failback                immediate
                rr_weight               uniform
                no_path_retry           10
                rr_min_io               100
        }
}

devices {
        device {
                vendor                  "HP"
                product                 "P2000 G3 FC|P2000G3 FC/iSCSI"
                path_grouping_policy    group_by_prio
                getuid_callout          "/lib/udev/scsi_id -g -u -s /dev/%n"
                path_checker            tur
                path_selector           "round-robin 0"
                prio_callout            "/sbin/mpath_prio_alua /dev/%n"
                rr_weight               uniform
                failback                immediate
                hardware_handler        "0"
                no_path_retry           18
                rr_min_io               100
        }
}


So I mounted /dev/mapper/nuxeo on both servers. My problem now is: data I write on Server 1 does not appear on Server 2, and vice versa.

Please help.

Oliver
Honored Contributor

Re: hp storageworks p2000 with qlogic fiber channel + debian

If you need storage that is readable and writable on more than one server at the same time, you will need a clustered storage solution.

The free ones are:

OCFS2 - from Oracle - http://oss.oracle.com/projects/ocfs2/
GFS2 - from Red Hat (fully supported if you have a subscription; otherwise you'll need to go with CentOS) - http://www.linuxdynasty.org/howto-setup-gfs2-with-clustering.html

Both are easy to implement.
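To give a feel for OCFS2, a minimal two-node setup looks roughly like this (hostnames, IP addresses and the mount point are placeholders; indent the attribute lines with a tab, and check the ocfs2-tools documentation for your release):

/etc/ocfs2/cluster.conf (identical on both nodes):

cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.10
        number = 0
        name = server-a
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.1.11
        number = 1
        name = server-b
        cluster = ocfs2

Then format the multipath device once, bring the cluster stack online on both nodes, and mount it on both:

mkfs.ocfs2 -N 2 -L nuxeo /dev/mapper/nuxeo     # run on one node only
/etc/init.d/o2cb online ocfs2                  # run on both nodes
mount -t ocfs2 /dev/mapper/nuxeo /srv/nuxeo    # run on both nodes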


The non-free ones include Symantec Veritas Cluster Server.


Also, with either of the above, please make sure you have the correct/recommended HP multipath settings in the "devices" section of your /etc/multipath.conf:


device {
        vendor                  "HP"
        product                 "P2000 G3 FC|P2000G3 FC/iSCSI"
        path_grouping_policy    group_by_prio
        getuid_callout          "/sbin/scsi_id -g -u -s /block/%n"
        path_checker            tur
        path_selector           "round-robin 0"
        prio_callout            "/sbin/mpath_prio_alua /dev/%n"
        rr_weight               uniform
        failback                immediate
        hardware_handler        "0"
        no_path_retry           18
        rr_min_io               100
}

Cheers!
Hakuna Matata.
Honored Contributor

Re: hp storageworks p2000 with qlogic fiber channel + debian

Which filesystem type are you using on your shared disk?

If you want to access the shared disk from multiple hosts at the same time, you need a filesystem that is designed to be used like that: a cluster filesystem, like GFS, GFS2 or OCFS. These will require some other infrastructure for cluster-wide lock management and error handling, like DLM and fencing. A cluster filesystem usually won't perform quite as well as a non-cluster filesystem, because the cluster-wide coordination will cause some overhead.

Ordinary (non-cluster) filesystems like ext2/ext3/ext4, ReiserFS and XFS are all designed with the assumption that the filesystem will be mounted by only one host at a time. This allows the filesystems to use intensive caching to improve performance.

But when a non-cluster filesystem is mounted on two or more hosts simultaneously, the caching will actually be very harmful. When a filesystem is mounted, the host will read some filesystem metadata and cache it. Subsequent read operations will cause more data to be cached. When it's time to actually use that data (e.g. for finding a place on the disk for a new file), the filesystem will assume the cached metadata is still valid... even though the other host may have changed it in the meantime.

Even if you mount a non-cluster filesystem as writable on one node only and read-only on all others, the caching will still cause problems. It *might* work if you disable *all* host-side caching for that filesystem and the underlying disk device(s)... but that would certainly ruin your disk performance.

If you need to access the same filesystem from multiple hosts at the same time, it's often easier to mount it locally to one host only and use NFS to share it to the other hosts. (If NFS is not an acceptable solution, Debian also has OpenAFS which can be used the same way.)
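A minimal NFS sketch (the exported path, network and hostname are placeholders): on the host that mounts /dev/mapper/nuxeo locally, add a line like this to /etc/exports and run "exportfs -ra":

/srv/nuxeo    192.168.1.0/24(rw,sync,no_subtree_check)

On the other hosts:

mount -t nfs server-a:/srv/nuxeo /srv/nuxeo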

But this makes the one host that accesses the filesystem locally a critical resource: if it gets overloaded or fails, no other host can access that filesystem either. This is a SPoF, or Single Point of Failure.

If you need to minimize the SPoFs, but don't need to access the filesystem from multiple hosts at the same time, you might consider a failover cluster (active-passive cluster). It is usually less demanding than a cluster filesystem set-up. In this case, the cluster infrastructure will handle the mounting of the shared disk instead of the regular /etc/fstab. The cluster infrastructure should disallow mounting the disk unless it is known for certain that the disk is not currently mounted by some other host. This allows you to use a regular non-cluster filesystem on a shared disk.
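As an illustration only (assuming a Heartbeat/Pacemaker stack; the resource name, mount point and filesystem type are placeholders, and this is not a complete cluster configuration), the shared mount becomes a cluster resource instead of an /etc/fstab entry:

primitive fs_nuxeo ocf:heartbeat:Filesystem \
        params device="/dev/mapper/nuxeo" directory="/srv/nuxeo" fstype="ext3" \
        op monitor interval="20s"

The cluster software then makes sure this resource is active on at most one node at a time.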

If your requirement is to both minimize SPoFs *and* access the filesystem from multiple nodes simultaneously, then a cluster filesystem is what you need.

If you absolutely need to eliminate SPoFs, you will have to replicate the storage too. For that, you might need a "distributed block device" like DRBD.
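For reference, a DRBD resource definition looks roughly like this (device names, hostnames and addresses are placeholders; see the DRBD User's Guide for the details that matter, such as the replication protocol):

resource r0 {
        protocol C;
        on server-a {
                device    /dev/drbd0;
                disk      /dev/sdc1;
                address   192.168.1.10:7788;
                meta-disk internal;
        }
        on server-b {
                device    /dev/drbd0;
                disk      /dev/sdc1;
                address   192.168.1.11:7788;
                meta-disk internal;
        }
}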

Debian's own documentation does not seem to say much about setting up a cluster. These links I googled might be of some value to you:

http://realtechtalk.com/Configuring_OCFS2_Clustered_File_System_on_Debian_Based_Linux_including_Ubuntu_and_Kubuntu-109-articles

http://www.howtoforge.com/high-availability-storage-with-glusterfs-on-debian-lenny-automatic-file-replication-across-two-storage-servers

http://gcharriere.com/blog/?tag=gfs

MK