HPE Nimble Storage > Array Setup and Networking
RHEL6 Multipath.conf Configuration
05-09-2014 08:41 AM
The multipath for our Nimble unit has 4 paths, all of which are active and have the same priority:
mpathd (25ae755c39f9f44946c9ce900ddc4aa62) dm-9 Nimble,Server
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 17:0:0:0 sds 65:32 active ready running
|- 18:0:0:0 sdt 65:48 active ready running
|- 19:0:0:0 sdu 65:64 active ready running
`- 20:0:0:0 sdv 65:80 active ready running
mpathc (2455e55d26ab2bd436c9ce900ddc4aa62) dm-8 Nimble,Server
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 5:0:0:0 sdg 8:96 active ready running
|- 8:0:0:0 sdj 8:144 active ready running
|- 7:0:0:0 sdi 8:128 active ready running
`- 6:0:0:0 sdh 8:112 active ready running
Below is the multipath.conf file:
defaults {
user_friendly_names yes
find_multipaths yes
path_checker directio
polling_interval 5
no_path_retry fail
fast_io_fail_tmo 5
path_grouping_policy multibus
}
blacklist {
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z][0-9]*"
device {
vendor "*"
product "*"
}
}
blacklist_exceptions {
device {
vendor "Nimble"
product "Server"
}
device {
vendor "NETAPP"
product "LUN"
}
}
devices {
device {
vendor "Nimble"
product "Server"
path_grouping_policy group_by_serial
path_selector "round-robin 0"
features "1 queue_if_no_path"
no_path_retry 20
path_checker tur
rr_min_io 20
failback immediate
rr_weight priorities
}
device {
vendor "NETAPP"
product "LUN"
path_grouping_policy group_by_serial
path_selector "round-robin 0"
features "1 queue_if_no_path"
no_path_retry 20
path_checker tur
rr_min_io 20
failback immediate
rr_weight priorities
}
}
If I wanted to change it so that there were 2 priorities instead, how would I do that? I need it to look something like:
size=40G features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=4 status=active
| |- 1:0:0:0 sda 8:0 active ready running
| `- 2:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
|- 1:0:1:0 sdb 8:16 active ready running
`- 2:0:1:0 sdd 8:48 active ready running
- Tags:
- multipath.conf
05-13-2014 06:36 AM
Hi AJ,
Interesting question. I've done a little research, and it looks like the solution would have been to use prio_callout. I say "would have been" because it has been removed in RHEL6. Looking at your request, I think you want to avoid using the inter-switch link (ISL), is that correct? If so, Nimble added a feature in the 2.x release where you can tell the array about your configuration and have it avoid the ISL. This is done with either a bisect (low vs. high) or an odd/even IP distribution. The idea is that you configure your environment with all the odd IPs on one switch and all the even IPs on the other; that way the array knows to send traffic for odd addresses through the correct switch, ensuring no ISL is used. Bisect works the same way, except the low IPs are on one switch and the high IPs on the other.
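As a purely hypothetical illustration of the odd/even scheme (all addresses invented for the example, not taken from your environment):

# Odd data IPs cabled to switch A, even data IPs to switch B
Switch A: host iSCSI port 10.10.1.1, array data ports 10.10.1.3 and 10.10.1.5
Switch B: host iSCSI port 10.10.1.2, array data ports 10.10.1.4 and 10.10.1.6

With that layout, traffic between two odd (or two even) addresses never has to cross the ISL between the switches.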
Back to RHEL multipathd: I found a great doc here that covers all the options and mentions that prio_callout is deprecated: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/pdf/DM_Multipath/Red_Hat_Enterprise_Linux-…
One other thing to mention: in RHEL 6 they changed "rr_min_io" to "rr_min_io_rq". See the doc above for more details.
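As a sketch only (not tested against Nimble firmware, and assuming the array reports ALUA port states so that the RHEL6 "prio" keyword can rank the paths), a device section that would split the paths into two priority groups like the output you pasted might look like:

device {
vendor "Nimble"
product "Server"
# group paths by reported priority instead of by serial, so that
# optimized paths form the active group and the rest sit in an
# enabled (standby) group
path_grouping_policy group_by_prio
prio alua
path_selector "round-robin 0"
path_checker tur
failback immediate
no_path_retry 20
rr_min_io_rq 20
}

After editing /etc/multipath.conf, reloading the maps with "multipath -r" and checking "multipath -ll" should show two groups with different prio values. If the array does not report ALUA states, all paths will land in a single group again, which is why the odd/even or bisect feature on the Nimble side may be the more reliable route for avoiding the ISL.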
Let us know if this helps!
Cheers,
Bill