
mdadm Multipath raid configuration

 
Nabil_11
Frequent Advisor

mdadm Multipath raid configuration

Hi All,
I have two HP DL380 G4 servers,
each with two HW RAID controllers plugged in,
connected to MSA500 G2 storage.
One RAID controller drives the local hard disks as HW RAID 0+1 for the OS (Red Hat AS 3.0 update 3);
the same controller is also connected to a HW RAID 0+1 array on the MSA500 storage,
and the other RAID controller is connected to the storage as well.
So: one RAID controller presents
/dev/cciss/c0d0 (the local OS RAID)
and /dev/cciss/c0d1 (the RAID on the storage),
and the other controller
presents /dev/cciss/c1d0.
Inside Linux, /dev/cciss/c0d1 and /dev/cciss/c1d0 are the same drive seen over two paths (multipath).
I configured /etc/mdadm.conf
DEVICE /dev/cciss/c0d1 /dev/cciss/c1d0
ARRAY /dev/md0 devices=/dev/cciss/c0d1,/dev/cciss/c1d0
then I execute
# mdadm -C /dev/md0 --level=multipath --raid-devices=2 /dev/cciss/c0d1 /dev/cciss/c1d0
and it is created successfully as /dev/md0,
but I lose it after every reboot,
so I have to execute
# mdadm -A /dev/md0
after each reboot.
The other important thing is that I can't access
any partition defined inside this md0:
when I execute fdisk /dev/md0
it displays /dev/md0p1 p2 p3 p4 p5 p6,
but I can't access these partitions
because there are no special device files for
them; there is no /dev/md0p1 and so on.

Please Advice

Kind Regards
Ivan Ferreira
Honored Contributor

Re: mdadm Multipath raid configuration

From the Red Hat guide, it seems that multipath devices must be created on a per-partition basis.

See this example:

# mdadm -C /dev/md0 --level=multipath --raid-devices=4 /dev/sda1 /dev/sdb1
/dev/sdc1 /dev/sdd1
Continue creating array? yes
mdadm: array /dev/md0 started.

# mdadm --detail /dev/md0

So, I think you need to partition the /dev/cciss/c1d0 device, and then create the multipath device on the partitions.
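A rough sketch of what I mean (untested; device and partition names taken from your post):

# fdisk /dev/cciss/c1d0
(create the partitions, e.g. c1d0p1; the other path may need a partition table re-read or a reboot before it shows them as c0d1p1)
# mdadm -C /dev/md0 --level=multipath --raid-devices=2 /dev/cciss/c0d1p1 /dev/cciss/c1d0p1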

Let me know if this works.

Regards.
Por que hacerlo dificil si es posible hacerlo facil? - Why do it the hard way, when you can do it the easy way?
Nabil_11
Frequent Advisor

Re: mdadm Multipath raid configuration

Thanks for your reply,

I already did that: I ran fdisk on /dev/cciss/c1d0
and then created /dev/md0.
It works fine, and # mdadm --detail /dev/md0
gives excellent output.
BUT
1. When I reboot my machine I have to execute
# mdadm -A /dev/md0
I read on one site that I have to create scripts to start and shut down md0
and link them to the appropriate run levels.
That's OK, I can do it (see the sketch below).

2. The main question: when I execute fdisk /dev/md0 it displays six partitions inside md0:
/dev/md0p1
/dev/md0p2
.
.
/dev/md0p6
But I can't access any of them because there are no
special device files for these partitions.
I worked around it:
ls -l /dev/cciss/c1d0
gives major/minor 106 0,
so I executed
# mknod /dev/md0p6 b 106 6
and mounted this file system; it works fine,
but only through the second path.
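For point 1, something like this minimal SysV init script is what I have in mind (untested sketch; the script name and run levels are my own assumptions):

#!/bin/sh
# /etc/init.d/md-multipath -- assemble/stop the multipath md device
# chkconfig: 345 25 75
case "$1" in
  start)
    # assemble /dev/md0 from the paths listed in /etc/mdadm.conf
    mdadm -A /dev/md0
    ;;
  stop)
    # stop the array cleanly at shutdown
    mdadm --stop /dev/md0
    ;;
esac

and then register it with # chkconfig --add md-multipath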


Regards

Nabil
Matti_Kurkela
Honored Contributor

Re: mdadm Multipath raid configuration

Your multipath setup looks OK to me.

Remember that the native Linux multipath is active/passive: it uses only one path at a time. If the active path fails, it switches over to another path. So, it does not give extra bandwidth, just fault tolerance.

For an active/active multipath configuration (load balancing between paths, for example) you need additional software that's compatible with your storage. You probably have to pay for that (I don't know of any free software like that).

You can try disconnecting the active path: by default, it will take 10 seconds or so to confirm the active path is gone, then it automatically retries the active disk operations on the other path. You also get some messages in syslog (and probably on the console, too) about a lost path.

To see the state of the paths at any time, check the file /proc/mdstat.
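On a healthy two-path setup it should look something like this (illustrative output; device names and block count assumed):

Personalities : [multipath]
md0 : active multipath cciss/c0d1p1[0] cciss/c1d0p1[1]
      71687296 blocks [2/2] [UU]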
MK
Nabil_11
Frequent Advisor

Re: mdadm Multipath raid configuration

Thanks,

I know that my configuration is correct, BUT
for my cluster suite I think the correct way is to use
/dev/md0p1 as the primary quorum
/dev/md0p2 as the shadow quorum
/dev/md0p5 as /u01
/dev/md0p6 as /u02

I can't access these devices,
so I'm still using /dev/cciss/c1d0.

any help,

Regards

Nabil
Matti_Kurkela
Honored Contributor

Re: mdadm Multipath raid configuration

Oops, I didn't notice this at first...

The /dev/md0 devices cannot be split into partitions like normal disks. Fdisk might show the partitions, but the kernel can't access them like that.

Normal disks like /dev/cciss/c0d* have gaps in their device numbers for partitions: /dev/cciss/c0d0 is major 104, minor 0 and /dev/cciss/c0d1 is major 104, minor 16. This leaves minor numbers 1-15 for partitions /dev/cciss/c0d0p*.

However, /dev/md0 is major 9 minor 0 and
/dev/md1 is major 9 minor 1. There is no gap in the minor numbers for /dev/md0p*-style partitions. The major and minor numbers are hardcoded in the kernel, so you can't change them easily.
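You can see this in the device nodes (illustrative listing, timestamps omitted):

# ls -l /dev/md0 /dev/md1 /dev/cciss/c0d0 /dev/cciss/c0d1
brw-rw----  1 root disk   9,  0  /dev/md0
brw-rw----  1 root disk   9,  1  /dev/md1
brw-rw----  1 root disk 104,  0  /dev/cciss/c0d0
brw-rw----  1 root disk 104, 16  /dev/cciss/c0d1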

You are supposed to make /dev/cciss/c0d1p1 and /dev/cciss/c1d0p1 into /dev/md0, /dev/cciss/c0d1p2 and /dev/cciss/c1d0p2 into /dev/md1, and so forth. It does not make much sense for a multipath configuration, but that's how it is.
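In your case that would be something like this (partition numbers taken from your layout; untested):

# mdadm -C /dev/md0 --level=multipath --raid-devices=2 /dev/cciss/c0d1p1 /dev/cciss/c1d0p1
# mdadm -C /dev/md1 --level=multipath --raid-devices=2 /dev/cciss/c0d1p2 /dev/cciss/c1d0p2
# mdadm -C /dev/md2 --level=multipath --raid-devices=2 /dev/cciss/c0d1p5 /dev/cciss/c1d0p5
# mdadm -C /dev/md3 --level=multipath --raid-devices=2 /dev/cciss/c0d1p6 /dev/cciss/c1d0p6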

There is a way around this, though: you can use LVM.
Make your current multipathed /dev/md0 into an LVM physical volume (pvcreate /dev/md0), then create a volume group (vgcreate vgsomething /dev/md0) and create logical volumes (lvcreate -L somesize vgsomething) instead of partitions. The logical volumes are then accessible as /dev/vgsomething/lvol* instead of /dev/md0p*.
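For example (the volume group name and the sizes are just placeholders):

# pvcreate /dev/md0
# vgcreate vgsomething /dev/md0
# lvcreate -L 100M vgsomething   (becomes /dev/vgsomething/lvol1, e.g. primary quorum)
# lvcreate -L 100M vgsomething   (becomes /dev/vgsomething/lvol2, e.g. shadow quorum)
# lvcreate -L 20G vgsomething    (becomes /dev/vgsomething/lvol3, e.g. /u01)
# lvcreate -L 20G vgsomething    (becomes /dev/vgsomething/lvol4, e.g. /u02)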

To start up this construct, you need to script the following things to happen at boot-up (unless RedHat AS already does them):

mdadm --assemble --scan
(this autodetects the multipath components and assembles them into /dev/md* device or devices)

vgscan
(this detects the LVM volume groups on all disks, including /dev/md0. It's like a fsck for the LVM configuration.)

vgchange -a y vgsomething
(this activates your volume group: necessary before mounting the logical volumes)

After this, you can mount the logical volumes /dev/vgsomething/lvol* as normal disks.
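For instance (assuming ext3):

# mke2fs -j /dev/vgsomething/lvol3    (only once, to create the file system)
# mount /dev/vgsomething/lvol3 /u01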

To shut down cleanly, after umounting all the logical volumes you should do:
vgchange -a n vgsomething
mdadm --stop /dev/md0

It seems to me that Linux LVM does not get overly upset if you forget to do that, though...

(You can name the volume group as you wish: my use of "vgsomething" as an example is a sign I've been spending a lot of time with HP-UX LVM...)
MK