Operating System - Linux

Re: Redhat Linux multipathing question

 
Dineshkumar Surpur
Frequent Advisor

Redhat Linux multipathing question

Hi,

We are running RedHat Enterprise Linux 3.0 Update 4 connected to SAN storage, using a QLogic Fibre Channel HBA on the host server. We are evaluating multipath solutions and need active/active multipathing. Veritas VxVM is expensive and has been ruled out. I am looking at using LVM or EVMS, which I believe use the MP driver. Isn't the MP driver active/passive? Can anybody recommend what I need to do to get active/active multipathing using LVM, EVMS, or any other solution on Linux? Any documents or links would be very helpful.

Thanks.

4 REPLIES
Huc_1
Honored Contributor

Re: Redhat Linux multipathing question

I guess a good starting point for EVMS would be
http://evms.sourceforge.net/

Jean-Pierre
Smile I will feel the difference
Serviceguard for Linux
Honored Contributor

Re: Redhat Linux multipathing question

LVM does not have multipathing built in. LVM2 uses DM (device mapper), which does have MP capability. Either way, I'm 99% sure that DM does not have active/active capability.

What storage are you using? Not all storage provides a performance benefit with active/active connections.
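For what it's worth, the 2.6 device-mapper multipath target does accept a table that puts both paths in a single priority group with round-robin across them, which behaves as active/active. A minimal sketch of such a table follows; the major:minor numbers (8:16 and 8:32 for /dev/sdb and /dev/sdc) and the LUN size are hypothetical, so check your own devices before loading anything:

```shell
# Sketch of a device-mapper multipath table with both paths in ONE
# priority group, round-robin across them (i.e. active/active).
# Table format: start length multipath <#features> <#handlers> \
#   <#pathgroups> <initial group> <selector> <#selector args> \
#   <#paths> <#path args> <major:minor> <ios> ...
SECTORS=4194304   # hypothetical: a 2 GB LUN, in 512-byte sectors
TABLE="0 $SECTORS multipath 0 0 1 1 round-robin 0 2 1 8:16 1000 8:32 1000"
echo "$TABLE"
# Loading it requires root and real devices, so it is commented out here:
# echo "$TABLE" | dmsetup create mpath0
```

This is only available where the kernel ships the dm-multipath target (2.6-era kernels), which matches the point below that the stock RedHat 2.4 kernel does not offer it.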
Dineshkumar Surpur
Frequent Advisor

Re: Redhat Linux multipathing question

We are using 3PAR storage, which provides performance benefits with active/active multipathing. SuSE provides Device Mapper in its 2.6.10 kernel, which is active/active. Is there anything like that on the RedHat 2.4 kernel to achieve active/active MP?
Dineshkumar Surpur
Frequent Advisor

Re: Redhat Linux multipathing question

I have tried LVM present on RedHat Enterprise Edition 3 Update 4.

# rpm -qa lvm
lvm-1.0.8-9

Once a path goes faulty it never returns to active or spare by itself; one has to remove the device and add it back (recreating the array) for the path to be used again.

Ex:

# mdadm -C /dev/md0 --level=multipath --raid-devices=2 /dev/sdb1 /dev/sdc1

where /dev/sdb1 and /dev/sdc1 are paths to the same storage volume.

Say sdb1 becomes faulty (e.g. a Fibre cable pull test). Once the path and the device have come back, mdadm does not detect that the path is active again, and one has to remove and re-add the device (sdb1 in the following example) to change its state from faulty:

# mdadm /dev/md0 -r /dev/sdb1
# mdadm /dev/md0 -a /dev/sdb1


Does anybody know of a way for mdadm to auto-detect that the path has come back and change its mode from faulty to active or spare?
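Lacking built-in auto-recovery, one workaround is a small watchdog that polls the array and re-adds any faulty path. The sketch below is hypothetical (the array name /dev/md0 and device names follow the example above); it parses the per-device state lines that `mdadm --detail` prints:

```shell
#!/bin/sh
# Hypothetical watchdog sketch: re-add faulty paths to an md multipath array.
# Assumes /dev/md0 and member names as in the example above.

# Print the device path of every member line marked "faulty"
# in `mdadm --detail` output read from stdin.
faulty_paths() {
    awk '/faulty/ { print $NF }'
}

# Main loop -- commented out here because it needs a real /dev/md0 and root:
# while true; do
#     for dev in $(mdadm --detail /dev/md0 | faulty_paths); do
#         # remove and re-add, same as the manual recovery steps above
#         mdadm /dev/md0 -r "$dev" && mdadm /dev/md0 -a "$dev"
#     done
#     sleep 30
# done
```

The loop simply automates the manual `-r`/`-a` sequence; it cannot tell whether the underlying path is genuinely healthy again, so mdadm's own resync after `-a` is what validates the re-added device.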

BTW, LVM is active/passive since it depends on md multipath, which is active/passive; LVM2 on 2.6 kernels will use device mapper, which is active/active.