<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: mdadm Multipath raid configuration in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588689#M18862</link>
    <description>Oops, I didn't notice this at first...&lt;BR /&gt;&lt;BR /&gt;The /dev/md* devices cannot be split into partitions like normal disks. Fdisk might show the partitions, but the kernel can't access them that way.&lt;BR /&gt;&lt;BR /&gt;Normal disks like /dev/cciss/c0d* have gaps in their device numbers for partitions: /dev/cciss/c0d0 is major 104, minor 0 and /dev/cciss/c0d1 is major 104, minor 16. This leaves minor numbers 1-15 for the partitions /dev/cciss/c0d0p*.&lt;BR /&gt;&lt;BR /&gt;However, /dev/md0 is major 9, minor 0 and /dev/md1 is major 9, minor 1. There is no gap in the minor numbers for /dev/md0p*-style partitions. The major and minor numbers are hardcoded in the kernel, so you can't change them easily.&lt;BR /&gt;&lt;BR /&gt;You are supposed to make /dev/cciss/c0d1p1 and /dev/cciss/c1d0p1 into /dev/md0, /dev/cciss/c0d1p2 and /dev/cciss/c1d0p2 into /dev/md1, and so forth. This does not make much sense for a multipath configuration, but that's how it is.&lt;BR /&gt;&lt;BR /&gt;There is a way around it, though: you can use LVM.&lt;BR /&gt;Make your current multipathed /dev/md0 into an LVM physical volume (pvcreate /dev/md0), then create a volume group (vgcreate vgsomething /dev/md0) and create logical volumes (lvcreate -L somesize vgsomething) instead of partitions. The logical volumes are then accessible as /dev/vgsomething/lvol* instead of /dev/md0p*.&lt;BR /&gt;&lt;BR /&gt;To start up this construct, you need to script the following things to happen at boot-up (unless Red Hat AS already does them):&lt;BR /&gt;&lt;BR /&gt;mdadm --assemble --scan&lt;BR /&gt;(this autodetects the multipath components and assembles them into /dev/md* devices)&lt;BR /&gt;&lt;BR /&gt;vgscan&lt;BR /&gt;(this detects the LVM volume groups on all disks, including /dev/md0. It's like an fsck for the LVM configuration.)&lt;BR /&gt;&lt;BR /&gt;vgchange -a y vgsomething&lt;BR /&gt;(this activates your volume group: necessary before mounting the logical volumes)&lt;BR /&gt;&lt;BR /&gt;After this, you can mount the logical volumes /dev/vgsomething/lvol* like normal disks.&lt;BR /&gt;&lt;BR /&gt;To shut down cleanly, after unmounting all the logical volumes you should do:&lt;BR /&gt;  vgchange -a n vgsomething&lt;BR /&gt;  mdadm --stop /dev/md0&lt;BR /&gt;&lt;BR /&gt;It seems to me that Linux LVM does not get overly upset if you forget to do that, though...&lt;BR /&gt;&lt;BR /&gt;(You can name the volume group as you wish: my use of "vgsomething" as an example is a sign I've been spending a lot of time with HP-UX LVM...)</description>
    <pubDate>Tue, 26 Jul 2005 07:58:23 GMT</pubDate>
    <dc:creator>Matti_Kurkela</dc:creator>
    <dc:date>2005-07-26T07:58:23Z</dc:date>
    <item>
      <title>mdadm Multipath raid configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588684#M18857</link>
      <description>Hi All,&lt;BR /&gt;I have two HP DL380 G4 servers&lt;BR /&gt;with two HW RAID controllers plugged in&lt;BR /&gt;and an MSA500 G2 storage unit.&lt;BR /&gt;One RAID controller is set up as HW RAID 0+1 for the OS (Red Hat AS 3.0 update 3)&lt;BR /&gt;on the local hard drives; this RAID controller is also connected to a HW RAID 0+1 on the MSA500 storage,&lt;BR /&gt;and another RAID controller is connected to the storage as well.&lt;BR /&gt;So: one RAID controller is connected to&lt;BR /&gt;/dev/cciss/c0d0 (OS local drive RAID)&lt;BR /&gt;and /dev/cciss/c0d1 (RAID on storage),&lt;BR /&gt;and the other controller&lt;BR /&gt;is connected to /dev/cciss/c1d0.&lt;BR /&gt;Inside Linux, /dev/cciss/c0d1 and /dev/cciss/c1d0 are the same drive via multipath.&lt;BR /&gt;I configured /etc/mdadm.conf:&lt;BR /&gt;DEVICE /dev/cciss/c0d1 /dev/cciss/c1d0&lt;BR /&gt;ARRAY  /dev/md0 devices=/dev/cciss/c0d1, /dev/cciss/c1d0&lt;BR /&gt;then I executed&lt;BR /&gt;# mdadm -C /dev/md0 --level=multipath --raid-devices=2 /dev/cciss/c0d1 /dev/cciss/c1d0&lt;BR /&gt;It was created successfully as /dev/md0,&lt;BR /&gt;but I lose it after every reboot,&lt;BR /&gt;so I have to execute&lt;BR /&gt;# mdadm -A /dev/md0&lt;BR /&gt;after every reboot.&lt;BR /&gt;The other important thing is that I can't access&lt;BR /&gt;any partition defined inside this md0.&lt;BR /&gt;When I execute fdisk /dev/md0,&lt;BR /&gt;it displays /dev/md0p1 p2 p3 p4 p5 p6,&lt;BR /&gt;but I can't access these partitions&lt;BR /&gt;because there are no special device files for them:&lt;BR /&gt;there is no /dev/md0p1 and so on.&lt;BR /&gt;&lt;BR /&gt;Please advise.&lt;BR /&gt;&lt;BR /&gt;Kind Regards</description>
      <pubDate>Sat, 23 Jul 2005 05:39:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588684#M18857</guid>
      <dc:creator>Nabil_11</dc:creator>
      <dc:date>2005-07-23T05:39:27Z</dc:date>
    </item>
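One thing worth checking in the /etc/mdadm.conf quoted above: mdadm splits configuration lines on whitespace, so the space after the comma in "devices=/dev/cciss/c0d1, /dev/cciss/c1d0" leaves the second path outside the devices= word, which may be why the array is not assembled automatically at boot. A hedged sketch of the fragment without the embedded space (device names are taken from the post; whether Red Hat AS 3.0 actually runs "mdadm -A -s" at boot still needs to be verified separately):

```
DEVICE /dev/cciss/c0d1 /dev/cciss/c1d0
ARRAY /dev/md0 devices=/dev/cciss/c0d1,/dev/cciss/c1d0
```

With this in place, "mdadm --assemble --scan" should find and assemble /dev/md0 without listing the component devices on the command line.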
    <item>
      <title>Re: mdadm Multipath raid configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588685#M18858</link>
      <description>From the Red Hat guide, it seems that multipath devices must be created on a per-partition basis.&lt;BR /&gt;&lt;BR /&gt;See this example:&lt;BR /&gt;&lt;BR /&gt;# mdadm -C /dev/md0 --level=multipath --raid-devices=4 /dev/sda1 /dev/sdb1&lt;BR /&gt; /dev/sdc1 /dev/sdd1&lt;BR /&gt;Continue creating array? yes&lt;BR /&gt;mdadm: array /dev/md0 started.&lt;BR /&gt;&lt;BR /&gt;# mdadm --detail /dev/md0&lt;BR /&gt;&lt;BR /&gt;So, I think you need to partition the /dev/cciss/c1d0 device first, and then create the multipath device.&lt;BR /&gt;&lt;BR /&gt;Let me know if this works.&lt;BR /&gt;&lt;BR /&gt;Regards.</description>
      <pubDate>Sat, 23 Jul 2005 15:31:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588685#M18858</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2005-07-23T15:31:24Z</dc:date>
    </item>
    <item>
      <title>Re: mdadm Multipath raid configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588686#M18859</link>
      <description>Thanks for your reply,&lt;BR /&gt;&lt;BR /&gt;I already did that: I ran fdisk on /dev/cciss/c1d0,&lt;BR /&gt;then created /dev/md0.&lt;BR /&gt;It works fine, and # mdadm --detail /dev/md0&lt;BR /&gt;gives excellent output.&lt;BR /&gt;BUT:&lt;BR /&gt;1. When I reboot my machine I have to execute&lt;BR /&gt;     # mdadm -A /dev/md0&lt;BR /&gt;   I read on one site that I have to create start and shutdown scripts for md0&lt;BR /&gt;and link them to the appropriate run level.&lt;BR /&gt;That's OK, I can do it.&lt;BR /&gt;&lt;BR /&gt;2. The main question: when I execute fdisk /dev/md0 it displays six partitions inside md0:&lt;BR /&gt;/dev/md0p1&lt;BR /&gt;/dev/md0p2&lt;BR /&gt;.&lt;BR /&gt;.&lt;BR /&gt;/dev/md0p6&lt;BR /&gt;But I can't access any of them because there are no&lt;BR /&gt;special device files for these partitions.&lt;BR /&gt;I tried a trick:&lt;BR /&gt;ls -l /dev/cciss/c1d0&lt;BR /&gt;shows major/minor 106, 0,&lt;BR /&gt;so I executed&lt;BR /&gt;# mknod /dev/md0p6 b 106 6&lt;BR /&gt;and I can mount this file system; it works fine,&lt;BR /&gt;but only through the second path.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Nabil</description>
      <pubDate>Sun, 24 Jul 2005 00:38:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588686#M18859</guid>
      <dc:creator>Nabil_11</dc:creator>
      <dc:date>2005-07-24T00:38:21Z</dc:date>
    </item>
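The mknod trick above can be understood from the device-number arithmetic: the cciss driver reserves one major number per controller and 16 minor numbers per disk (partition 0 is the whole disk), so a node made with the major/minor of /dev/cciss/c1d0 points at the raw single path, not at the md multipath device. A small sketch of that arithmetic (the helper function name is mine, not a real tool):

```shell
#!/bin/sh
# cciss minor-number layout: 16 minors per disk, partition 0 = whole disk.
# cciss_minor <disk_index> <partition> -> minor number (hypothetical helper)
cciss_minor() {
    echo $(( $1 * 16 + $2 ))
}

cciss_minor 0 0   # /dev/cciss/cXd0   -> 0
cciss_minor 0 6   # /dev/cciss/cXd0p6 -> 6  (what "mknod ... b 106 6" hits)
cciss_minor 1 0   # /dev/cciss/cXd1   -> 16
```

This is why the hand-made node mounts fine but goes through only one path: the minor belongs to the cciss disk, and the md driver (major 9) never sees the I/O.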
    <item>
      <title>Re: mdadm Multipath raid configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588687#M18860</link>
      <description>Your multipath setup looks OK to me.&lt;BR /&gt;&lt;BR /&gt;Remember that the native Linux multipath is active/passive: it uses only one path at a time. If the active path fails, it switches over to another path. So it does not give extra bandwidth, just fault tolerance.&lt;BR /&gt;&lt;BR /&gt;For an active/active multipath configuration (load balancing between paths, for example) you need additional software that's compatible with your storage. You probably have to pay for that (I don't know of any free software like that).&lt;BR /&gt;&lt;BR /&gt;You can try disconnecting the active path: by default, it will take about 10 seconds to confirm the active path is gone, then it automatically retries the active disk operations on the other path. You also get some messages in syslog (and probably on the console, too) about a lost path.&lt;BR /&gt;&lt;BR /&gt;To see the state of the paths at any time, check the file /proc/mdstat.&lt;BR /&gt;</description>
      <pubDate>Mon, 25 Jul 2005 07:20:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588687#M18860</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2005-07-25T07:20:41Z</dc:date>
    </item>
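To make "check /proc/mdstat" concrete, here is a sketch that looks for the path-status markers in a captured mdstat snippet; the sample text below is invented for illustration (a 2-path multipath on md0), and on a live system you would read /proc/mdstat directly:

```shell
#!/bin/sh
# On a real system: mdstat=$(cat /proc/mdstat)
# The sample content below is invented for illustration only.
mdstat='Personalities : [multipath]
md0 : active multipath cciss/c0d1[0] cciss/c1d0[1]
      35561280 blocks [2/2] [UU]'

# [UU] = both paths up; a failed path shows as "_", e.g. [2/1] [U_].
if echo "$mdstat" | grep -q '\[UU\]'; then
    echo "all paths up"
else
    echo "path failure detected"
fi
```

Running this against the sample prints "all paths up"; after pulling a cable you would expect the `[U_]` form and the failure branch instead.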
    <item>
      <title>Re: mdadm Multipath raid configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588688#M18861</link>
      <description>Thanks,&lt;BR /&gt;&lt;BR /&gt;I know that my configuration is correct, BUT&lt;BR /&gt;in my cluster suite I think the correct way is to use&lt;BR /&gt;/dev/md0p1 as the primary quorum,&lt;BR /&gt;/dev/md0p2 as the shadow quorum,&lt;BR /&gt;/dev/md0p5 as /u01,&lt;BR /&gt;/dev/md0p6 as /u02.&lt;BR /&gt;&lt;BR /&gt;I can't access these devices;&lt;BR /&gt;I'm still using /dev/cciss/c1d0.&lt;BR /&gt;&lt;BR /&gt;Any help?&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Nabil&lt;BR /&gt;</description>
      <pubDate>Tue, 26 Jul 2005 03:17:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588688#M18861</guid>
      <dc:creator>Nabil_11</dc:creator>
      <dc:date>2005-07-26T03:17:41Z</dc:date>
    </item>
    <item>
      <title>Re: mdadm Multipath raid configuration</title>
      <link>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588689#M18862</link>
      <description>Oops, I didn't notice this at first...&lt;BR /&gt;&lt;BR /&gt;The /dev/md* devices cannot be split into partitions like normal disks. Fdisk might show the partitions, but the kernel can't access them that way.&lt;BR /&gt;&lt;BR /&gt;Normal disks like /dev/cciss/c0d* have gaps in their device numbers for partitions: /dev/cciss/c0d0 is major 104, minor 0 and /dev/cciss/c0d1 is major 104, minor 16. This leaves minor numbers 1-15 for the partitions /dev/cciss/c0d0p*.&lt;BR /&gt;&lt;BR /&gt;However, /dev/md0 is major 9, minor 0 and /dev/md1 is major 9, minor 1. There is no gap in the minor numbers for /dev/md0p*-style partitions. The major and minor numbers are hardcoded in the kernel, so you can't change them easily.&lt;BR /&gt;&lt;BR /&gt;You are supposed to make /dev/cciss/c0d1p1 and /dev/cciss/c1d0p1 into /dev/md0, /dev/cciss/c0d1p2 and /dev/cciss/c1d0p2 into /dev/md1, and so forth. This does not make much sense for a multipath configuration, but that's how it is.&lt;BR /&gt;&lt;BR /&gt;There is a way around it, though: you can use LVM.&lt;BR /&gt;Make your current multipathed /dev/md0 into an LVM physical volume (pvcreate /dev/md0), then create a volume group (vgcreate vgsomething /dev/md0) and create logical volumes (lvcreate -L somesize vgsomething) instead of partitions. The logical volumes are then accessible as /dev/vgsomething/lvol* instead of /dev/md0p*.&lt;BR /&gt;&lt;BR /&gt;To start up this construct, you need to script the following things to happen at boot-up (unless Red Hat AS already does them):&lt;BR /&gt;&lt;BR /&gt;mdadm --assemble --scan&lt;BR /&gt;(this autodetects the multipath components and assembles them into /dev/md* devices)&lt;BR /&gt;&lt;BR /&gt;vgscan&lt;BR /&gt;(this detects the LVM volume groups on all disks, including /dev/md0. It's like an fsck for the LVM configuration.)&lt;BR /&gt;&lt;BR /&gt;vgchange -a y vgsomething&lt;BR /&gt;(this activates your volume group: necessary before mounting the logical volumes)&lt;BR /&gt;&lt;BR /&gt;After this, you can mount the logical volumes /dev/vgsomething/lvol* like normal disks.&lt;BR /&gt;&lt;BR /&gt;To shut down cleanly, after unmounting all the logical volumes you should do:&lt;BR /&gt;  vgchange -a n vgsomething&lt;BR /&gt;  mdadm --stop /dev/md0&lt;BR /&gt;&lt;BR /&gt;It seems to me that Linux LVM does not get overly upset if you forget to do that, though...&lt;BR /&gt;&lt;BR /&gt;(You can name the volume group as you wish: my use of "vgsomething" as an example is a sign I've been spending a lot of time with HP-UX LVM...)</description>
      <pubDate>Tue, 26 Jul 2005 07:58:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/mdadm-multipath-raid-configuration/m-p/3588689#M18862</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2005-07-26T07:58:23Z</dc:date>
    </item>
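The boot-up and shutdown steps in the last reply can be collected into a sketch of an rc-style script. This version is a dry run that only echoes each command (mdadm and LVM need root and real devices); the volume-group name "vgsomething", the logical volume lvol1, and the mount point /u01 are placeholders from the thread, not verified names:

```shell
#!/bin/sh
# Dry-run sketch of the md0 + LVM start/stop sequence described above.
# run() echoes instead of executing; drop the echo to use it for real.
run() { echo "$@"; }

start_md0_lvm() {
    run mdadm --assemble --scan       # assemble the multipath /dev/md* devices
    run vgscan                        # detect LVM volume groups, incl. on /dev/md0
    run vgchange -a y vgsomething     # activate the volume group
    run mount /dev/vgsomething/lvol1 /u01
}

stop_md0_lvm() {
    run umount /u01
    run vgchange -a n vgsomething     # deactivate before stopping md0
    run mdadm --stop /dev/md0
}

start_md0_lvm
```

Linked into the appropriate run levels (as Nabil mentioned earlier), the start half runs at boot and the stop half at shutdown, in that order relative to the mounts.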
  </channel>
</rss>

