MSA Storage > MSA 2040: Discovery shows ports that are not mapped
05-05-2022 08:41 AM - last edited on 05-06-2022 02:33 AM by support_s
Hi,
I'm a total newbie to SAN, but have managed to get it working so far. However, some things don't look right to me.
I have attached two hosts via DAC (two cables each): vm03 to controller ports A1 and B1, vm04 to controller ports A3 and B3. In the management utility, I have mapped vm03 to port 1 and vm04 to port 3.
For completeness, in case I add other hosts later, I have also configured IP addresses for the other ports on the MSA 2040. I have set up open-iscsi and multipath on vm03 (a Proxmox host); vm04 will follow later.
Now, when I run "iscsiadm -m discovery -t sendtargets -p 10.0.1.100 -I iscsi_enp2s0f0", I would expect to only get the IPs for A1 and B1, but get this:
10.0.1.100:3260,1 iqn.1986-03.com.hp:storage.msa2040.15472745cf
10.0.1.110:3260,2 iqn.1986-03.com.hp:storage.msa2040.15472745cf
10.0.2.120:3260,3 iqn.1986-03.com.hp:storage.msa2040.15472745cf
10.0.2.130:3260,4 iqn.1986-03.com.hp:storage.msa2040.15472745cf
10.0.3.140:3260,5 iqn.1986-03.com.hp:storage.msa2040.15472745cf
10.0.3.150:3260,6 iqn.1986-03.com.hp:storage.msa2040.15472745cf
10.0.4.160:3260,7 iqn.1986-03.com.hp:storage.msa2040.15472745cf
10.0.4.170:3260,8 iqn.1986-03.com.hp:storage.msa2040.15472745cf
Of course, there is no way to connect to the other 6 addresses...
Is this expected behaviour? Might this be the result of some kind of caching?
(I had configured a mapping to all four ports before, but have started over again and deleted everything related I could find)
05-05-2022 09:42 AM
Query: MSA 2040: Discovery shows ports that are not mapped
System recommended content:
1. HPE MSA 1040, MSA 2040, and MSA 2042 Storage GL225R003 Firmware Release Notes
05-05-2022 12:26 PM
Solution
@Larsen0815
Yes, this is completely expected behavior. During discovery, one of the SCSI inquiries retrieves ALL of the host port IP addresses from the MSA. These are reported to the host whether or not they are reachable over the host's network, and whether or not a volume is presented from those ports. It does not cause any problems, but you do want to validate that multipath shows both an optimized and an unoptimized path.
05-06-2022 02:33 AM
Re: MSA 2040: Discovery shows ports that are not mapped
Ok, good to know.
Multipath is configured and working:
atl-vm03:~# multipath -ll
3600c0ff000277b3d819b3c6201000000 dm-16 HP,MSA 2040 SAN
size=5.2T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=30 status=active
|- 7:0:0:0 sdb 8:16 active ready running
`- 8:0:0:0 sdc 8:32 active ready running
Still wondering though, when I restart open-iscsi, I get these warnings:
May 06 11:23:21 atl-vm03 iscsid[717396]: connect to 10.0.2.120:3260 failed (No route to host)
May 06 11:23:21 atl-vm03 iscsid[717396]: connect to 10.0.2.120:3260 failed (No route to host)
May 06 11:23:21 atl-vm03 iscsid[717396]: connect to 10.0.3.150:3260 failed (No route to host)
May 06 11:23:21 atl-vm03 iscsid[717396]: connect to 10.0.3.150:3260 failed (No route to host)
...
Why is it trying to connect to the other ports?
It also tries to connect to the ports I do use (10.0.1.100 and 10.0.1.110) and fails for 10.0.1.110, probably because it is not using the right interface: I can ping both .100 and .110, but have to use -I for the second one:
atl-vm03:~# ping 10.0.1.100
PING 10.0.1.100 (10.0.1.100) 56(84) bytes of data.
64 bytes from 10.0.1.100: icmp_seq=1 ttl=64 time=0.229 ms
atl-vm03:~# ping 10.0.1.110 -I enp2s0f1
PING 10.0.1.110 (10.0.1.110) from 10.0.1.210 enp2s0f1: 56(84) bytes of data.
64 bytes from 10.0.1.110: icmp_seq=1 ttl=64 time=0.284 ms
Maybe you have an idea, even though open-iscsi is not an HP product. Otherwise, I will of course consult the open-iscsi mailing list.
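One way to approach the interface question is to bind a second open-iscsi iface to enp2s0f1, the way the existing iscsi_enp2s0f0 iface is bound. This is a sketch, not HPE guidance: the interface name, IQN, and portal are taken from the thread, and the commands are only printed for review rather than executed.

```shell
# Sketch (assumption, not vendor guidance): bind a dedicated open-iscsi iface
# to enp2s0f1 so the session to 10.0.1.110 leaves through the correct NIC.
# Interface, IQN, and portal names are taken from the thread; adjust to your host.
CMDS="iscsiadm -m iface -I iscsi_enp2s0f1 --op new
iscsiadm -m iface -I iscsi_enp2s0f1 --op update -n iface.net_ifacename -v enp2s0f1
iscsiadm -m node -T iqn.1986-03.com.hp:storage.msa2040.15472745cf -p 10.0.1.110:3260 -I iscsi_enp2s0f1 --login"
# Dry run: print the commands for review instead of executing them.
printf '%s\n' "$CMDS"
```

After the iface is bound, discovery and login through it should create the session on the intended NIC rather than following the default route.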
05-06-2022 02:35 AM - edited 05-06-2022 03:54 AM
Re: MSA 2040: Discovery shows ports that are not mapped
Never mind. Just minutes after my previous reply, it occurred to me that this was caused by the automatic startup of all those targets. I set startup to manual for the unused ports, which solved that problem.
Just to verify: The ports are also reported to the host even when they are not mapped in the management utility?
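The "startup to manual" fix above can be sketched with iscsiadm's node mode. The IQN is from the thread's discovery output; the portal list is an assumption based on it, and the commands are echoed as a dry run so they can be checked before applying.

```shell
# Sketch: set node.startup to manual for discovered portals that are not
# cabled to this host, so iscsid stops trying to log in to them at startup.
# The IQN is from the thread; the portal list is an assumption. Commands are
# echoed as a dry run; drop 'echo' to actually apply them.
TARGET="iqn.1986-03.com.hp:storage.msa2040.15472745cf"
UNUSED="10.0.2.120 10.0.2.130 10.0.3.140 10.0.3.150 10.0.4.160 10.0.4.170"
COUNT=0
for ip in $UNUSED; do
  echo iscsiadm -m node -T "$TARGET" -p "$ip:3260" --op update -n node.startup -v manual
  COUNT=$((COUNT + 1))
done
```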
05-06-2022 08:00 AM
Re: MSA 2040: Discovery shows ports that are not mapped
@Larsen0815
Yes, host ports that have an IP address other than 0.0.0.0 will show up in the SCSI VPD pages queried during discovery, and will then appear in open-iscsi.
Looking at your multipath output, I'm wondering if the multiple paths are being discovered correctly. It appears that you have two paths, sdb and sdc, but both have a prio of 30, which would indicate they are in the same ALUA state. Since they come from independent controllers (7 and 8), it looks like you have two HBAs connecting to the same MSA host port. I think the network issue causing the ping failure may be related.
05-09-2022 04:48 AM
Re: MSA 2040: Discovery shows ports that are not mapped
Ok, thanks.
Multipath: The issue with open-iscsi arose because I had set all discovered targets to automatic startup. Therefore it was trying to connect to .110 over both interfaces. I disabled all of them but two (.100 on interface 0, and .110 on interface 1) and restarting open-iscsi doesn't show any problems anymore.
Not sure if that's what you meant regarding the ping issue, though.
05-09-2022 10:16 AM
Re: MSA 2040: Discovery shows ports that are not mapped
@Larsen0815
When I look at your multipath -ll output, I see two equal paths:
|- 7:0:0:0 sdb 8:16 active ready running
`- 8:0:0:0 sdc 8:32 active ready running
Both fall under the same 'prio' value (prio=30). This would indicate to me that you are ONLY connected to one controller.
Here is my multipath output:
mpathe (3600c0ff0001be33453df786201000000) dm-13 HP,MSA 2040 SAN
size=93G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 5:0:0:1 sdk 8:160 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
`- 6:0:0:1 sdl 8:176 active ready running
There are two different 'prio' groups, which indicates I have a connection to both the optimized path and the unoptimized path.
One other way to look at the paths is to run the CLI commands and inspect the XML output:
CLI> set cli-parameters api pager off
CLI> show initiators
Look at the output for the host port bits:
<PROPERTY name="host-port-bits-a" type="uint32" size="8" draw="true" sort="integer" display-name="Host Port Bits Ctlr A">8</PROPERTY>
<PROPERTY name="host-port-bits-b" type="uint32" size="8" draw="true" sort="integer" display-name="Host Port Bits Ctlr B">8</PROPERTY>
The number is a bitmap of the ports: port 1 == 1, port 2 == 2, ports 1 and 2 == 3, and so on; in my case, port 4 == 8. You can see that my host port bits for the initiator show a login to controller A port 4 and controller B port 4.
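The bitmap rule above can be sketched as a small shell function (an illustration only; the function name is my own, and it assumes a 4-port MSA controller, where bit N-1 set means the initiator is logged in on port N):

```shell
# Decode an MSA host-port-bits value into the port numbers it represents.
# Assumes a 4-port controller; bit N-1 set means a login on port N.
decode_port_bits() {
  bits=$1
  ports=""
  for p in 1 2 3 4; do
    if [ $(( bits & (1 << (p - 1)) )) -ne 0 ]; then
      ports="$ports $p"
    fi
  done
  echo "ports:$ports"
}
decode_port_bits 8   # bitmap 8 -> port 4, as in the example above
decode_port_bits 3   # bitmap 3 -> ports 1 and 2
```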
05-10-2022 05:46 AM
Re: MSA 2040: Discovery shows ports that are not mapped
This is the redacted output of "show initiators":
<COMP G="0" P="1"/>
<OBJECT basetype="initiator" name="initiator" oid="1" format="rows">
<PROPERTY name="nickname" type="string" size="255" draw="true" sort="string" display-name="Nickname">atl-vm03</PROPERTY>
<PROPERTY name="host-port-bits-a" type="uint32" size="8" draw="true" sort="integer" display-name="Host Port Bits Ctlr A">1</PROPERTY>
<PROPERTY name="host-port-bits-b" type="uint32" size="8" draw="true" sort="integer" display-name="Host Port Bits Ctlr B">1</PROPERTY>
</OBJECT>
<COMP G="0" P="2"/>
<OBJECT basetype="initiator" name="initiator" oid="2" format="rows">
<PROPERTY name="nickname" type="string" size="255" draw="true" sort="string" display-name="Nickname">atl-vm04</PROPERTY>
<PROPERTY name="host-port-bits-a" type="uint32" size="8" draw="true" sort="integer" display-name="Host Port Bits Ctlr A">4</PROPERTY>
<PROPERTY name="host-port-bits-b" type="uint32" size="8" draw="true" sort="integer" display-name="Host Port Bits Ctlr B">4</PROPERTY>
</OBJECT>
The earlier multipath output comes from this configuration. If I comment out the following in "/etc/multipath.conf"...
devices {
device {
vendor "HP"
product "MSA 2040 SAN"
path_selector "round-robin 0"
path_grouping_policy multibus
failback immediate
no_path_retry 18
}
}
...and restart multipathd, I get this:
3600c0ff000277b3d819b3c6201000000 dm-16 HP,MSA 2040 SAN
size=5.2T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 8:0:0:1 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 7:0:0:1 sdb 8:16 active ready running
The above configuration is based on what I have seen on different websites to utilize both controllers.
Should I stick to the default configuration?
05-10-2022 07:41 AM
Re: MSA 2040: Discovery shows ports that are not mapped
@Larsen0815
Likely a mixture of both; there is one default which is possibly not correct.
**failback** - 'immediate' should be used; I believe 'manual' is the default (to see the defaults, run multipath -t). 'immediate' tells the system to fail back to the highest-priority group as soon as it becomes available, i.e. to return to the optimized paths.
path_selector - round-robin or service-time (the default) are both ok.
path_grouping_policy - this should be group_by_prio. Setting it to multibus makes all paths equal; combined with round-robin, this will likely lead to some performance degradation, as the system will use the non-optimized paths as heavily as the optimized ones.
FYI - the host-port-bits value of '4' indicates that your host has logged in to port 3 on both controllers. This shows that the physical connections are correct.
An optimized path is one from the controller which 'owns' the volume, pool, and disk group. On an unoptimized path, the I/O request has to be transferred inside the MSA from one controller to the other to be serviced, and then back to the original controller to respond to the host. This handoff inside the controllers can incur a small to medium performance penalty.
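Putting the advice in this thread together, a device stanza along these lines might result (a sketch only, not a tested vendor configuration: the prio line is my assumption, since ALUA priorities must be reported for group_by_prio to separate the path groups; verify your distribution's built-in defaults with multipath -t before overriding them):

```
devices {
    device {
        vendor "HP"
        product "MSA 2040 SAN"
        path_grouping_policy group_by_prio
        prio alua
        failback immediate
        no_path_retry 18
    }
}
```

With group_by_prio, multipath -ll should show two groups as in the prio=50/prio=10 output above, sending I/O to the optimized paths first and failing back to them immediately after a controller recovers.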