Software RAID with mdadm and physical device ...
09-27-2011 03:23 AM
Hi All,
I am not so familiar with mdadm and would like to know how to rename devices within mdadm.
Here are some details:
# cat /etc/*release
SUSE Linux Enterprise Server 10 (x86_64)
VERSION = 10
PATCHLEVEL = 1
# rpm -qa |grep mdadm
mdadm-2.6-0.11
At the moment we are using EMC PowerPath to see the LUNs on the SAN. I have to migrate to Linux dm-multipath. So far no big issue, but I am worried about how mdadm will behave when the change is performed.
I have this MD array with 2 disks in RAID 1:
# mdadm --detail /dev/md10|grep active
0 120 64 0 active sync /dev/emcpowere
1 120 176 1 active sync /dev/emcpowerl
When moving to dm-multipath, I thought of bringing down the MD array, uninstalling PowerPath, installing dm-multipath, and then activating the array once again with the following:
mdadm -A -R -s --config=</path/to/my/mdadm.conf>
But I am not sure if mdadm will scan all the disks when re-assembling it or if it will try to find the former devices in some kind of cache or other config files...
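In other words, the rough sequence I have in mind would be something like this (just my plan so far, not tested yet):
# mdadm --stop /dev/md10
... uninstall PowerPath, install and configure dm-multipath ...
# mdadm -A -R -s --config=</path/to/my/mdadm.conf>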
Here is the config within mdadm.conf (I will have to change the DEVICE line beforehand, but that's no big deal; see my sketch after the config):
# cat mdadm.conf
DEVICE /dev/emcpower?
ARRAY /dev/md10 level=raid1 num-devices=2 UUID=1bf9fb4f:9b7d318d:b60f830f:17e03f10
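To be clear about the DEVICE change: after the switch I would expect the file to look roughly like this (the /dev/mapper pattern is only my guess, it depends on how the multipath devices end up being named):
DEVICE /dev/mapper/*
ARRAY /dev/md10 level=raid1 num-devices=2 UUID=1bf9fb4f:9b7d318d:b60f830f:17e03f10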
Does anyone know? Or does anyone know of a website with some explanation? I have not been able to find what I need so far...
Thanks in advance
Regards,
Thierry
09-27-2011 06:01 AM
Solution
Like LVM, mdadm uses UUIDs to identify array components.
When told to start a specific array, it will find the array UUID in the configuration file, then scan all the disks allowed by the DEVICE setting for software RAID superblocks with that UUID. Once found, it reads all the other information from the superblock(s), then figures out if the array can be started immediately or if some recovery actions are required first.
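If you want to verify before assembling, you can read the superblocks directly with --examine; for example (the /dev/mapper names below are only placeholders for whatever dm-multipath ends up calling your LUNs):
# mdadm --examine /dev/mapper/mpath0 | grep UUID
# mdadm --examine /dev/mapper/mpath1 | grep UUID
The UUID reported should match the one in your ARRAY line.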
You can even run "mdadm --assemble --scan" to make mdadm scan all the allowed devices for RAID superblocks and start all available valid software RAID arrays. Since the superblock contains information on which /dev/md* device to use for each array, this should start all the arrays using their usual /dev/md* devices even if you have completely lost your configuration file.
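So in your case, once dm-multipath is set up and the DEVICE line allows the new device names, something along these lines should do it (a sketch only):
# mdadm --examine --scan
(this prints ARRAY lines built from the superblocks, which you can compare against your mdadm.conf)
# mdadm --assemble --scan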