
HPE MSA 2042

 
SOLVED
Rob9
Occasional Contributor

HPE MSA 2042

Can anyone advise? 

When configuring the host ports, does the system auto-detect a preferred port based on speed, latency, etc.? I cannot see an option to set priority manually.

I plan to configure host port 1 on controllers A and B with a 10Gb link, but thought about adding a backup 1Gb link to host port 2 on each controller. My concern is that host port 2 will still be used when the 10Gb link is available.

 

 

 

HPSDMike
HPE Pro

Re: HPE MSA 2042

When you map a volume you can specify which ports that volume is available on. If you select nothing, it will be available on all four ports of each controller (assuming there is connectivity to those ports). You can also specify which ports to map to (typically 1,2; 3,4; or 1,2,3,4).

What you are proposing is not ideal if you use the recommended multipathing policies for the MSA (ALUA/RR). The MSA and hosts, when configured per best practices, would see all paths to the LUN (in your case 2 x 1Gb and 2 x 10Gb), and the usual configuration is round robin. That means one packet would go down the 10Gb link and potentially the next down the 1Gb link; again, less than ideal.

Also, the MSA will try to prevent you from going down a non-optimized path. In this case, the non-optimized paths would be anything on the non-owning controller of the vdisk (or pool) backing the volume. So, essentially, under normal operations the host and MSA will negotiate to send all packets through all available ports on the owning controller (a max of 4 at a time on the MSA 2042). If the owning controller goes down, the MSA will alert the host to use the paths on the other controller.
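To make the round-robin point concrete, here is a minimal sketch (plain Python, nothing MSA- or ESXi-specific; the port speeds and layouts are illustrative assumptions, not output from an actual array): with an even split of I/O across active paths, the slowest path saturates first and effectively caps the aggregate.

```python
# Illustrative sketch only: round robin hands I/O to the active/optimized paths
# in turn, so each path sees roughly the same load and the slowest one saturates
# first, capping the aggregate throughput for the volume.

def rr_aggregate_gbps(path_speeds_gbps: list[float]) -> float:
    """Approximate sustained aggregate throughput with an even round-robin split."""
    return len(path_speeds_gbps) * min(path_speeds_gbps)

# Hypothetical active paths on the owning controller (speeds in Gbit/s).
print(rr_aggregate_gbps([10.0, 1.0]))    # mixed 10Gb + 1Gb  -> ~2 Gb/s
print(rr_aggregate_gbps([10.0]))         # single 10Gb       -> 10 Gb/s
print(rr_aggregate_gbps([10.0, 10.0]))   # two 10Gb          -> 20 Gb/s
```

In other words, under that assumption the 1Gb backup link would tend to pull the volume's effective bandwidth down rather than add headroom.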

You'll get this question, and more, answered in the following "best practice" document (see pages 26-28 and 44):

https://www.hpe.com/h20195/v2/GetDocument.aspx?docname=4AA4-6892ENW

 



I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.


Rob9
Occasional Contributor

Re: HPE MSA 2042

This is a great answer, thanks for your reply and the BP guide. 

So, to ensure I get max bandwidth it might be wise to leave the 1Gb ports alone, have one 10Gb link on A and one 10Gb link on B, then (eventually) add more 10Gb links when I have the infrastructure to support them.

My iSCSI infrastructure involves a two-switch stack... The 10Gb links will be connected to different switches, so if I lose a switch the other channel will remain online.

HPSDMike
HPE Pro

Re: HPE MSA 2042

Yes, you can do what you are suggesting and add more 10Gb later. However, the ideal minimum setup is as shown in the BP guide:

  • Server A, iSCSI port 1 to switch 1 (think A-side fabric)
  • Server A, iSCSI port 2 to switch 2 (think B-side fabric)
  • MSA Port A1 to switch 1 (think A-side fabric)
  • MSA Port B1 to switch 1 (think A-side fabric)
  • MSA Port A2 to switch 2 (think B-side fabric)
  • MSA port B2 to switch 2 (think B-side fabric)

This gives each server two discrete paths to reach both MSA controllers. In this scenario it's actually better if the two switches don't know anything about each other and aren't stacked. However, if your two switches are part of a stack then that is OK. You can consider virtually breaking out the "A-side" and "B-side" by putting the associated ports in their own VLANs. Finally, if you have to put all ports into the same VLAN on the same switch stack then that is fine too. The issue you'll run into there is that you will more quickly exceed the "maximum iSCSI paths to a LUN" limit of your operating system. I know ESXi only supports 8 paths to a LUN, and 4 MSA host ports x 2 server ports will reach it.
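The path-count arithmetic is easy to sanity-check with a few lines (again just illustrative Python; the 8-path figure is the ESXi limit mentioned above, so verify it against your own ESXi release):

```python
# Rough path-count check (illustrative, not an official sizing tool): each LUN
# sees (server iSCSI ports) x (mapped MSA host ports) paths, and the post above
# cites ESXi's limit of 8 paths per LUN.

ESXI_MAX_PATHS_PER_LUN = 8  # verify against your ESXi version's documented maximums

def paths_per_lun(server_iscsi_ports: int, mapped_msa_ports: int) -> int:
    return server_iscsi_ports * mapped_msa_ports

for server_ports, msa_ports in [(2, 2), (2, 4), (4, 4)]:
    n = paths_per_lun(server_ports, msa_ports)
    status = "within limit" if n <= ESXI_MAX_PATHS_PER_LUN else "exceeds limit"
    print(f"{server_ports} server ports x {msa_ports} mapped MSA ports = {n} paths ({status})")
```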

 



I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.


Rob9
Occasional Contributor

Re: HPE MSA 2042

Yeah, I can see how that would provide full resiliency and better flow control. 

I actually have 4 ESXi hosts; each host has two dedicated 1Gb links in the iSCSI stack, with one port connected to switch 1 and the other to switch 2 already.

Eventually I plan to upgrade the NICs on each host to 10Gb, but I need better switches that allow for more than two 10Gb connections (per switch) ;)

This is also the reason I'm left with no room for the SAN-to-switch links. There are no spare 10Gb ports, but given the options I have, it's probably better to go with

- MSA Port A1 = 10Gb

- MSA Port B1 = 10Gb

than

- MSA Port A1 = 1Gb

- MSA Port B1 = 1Gb

- MSA Port A2 = 1Gb

- MSA Port B2 = 1Gb

or what we mentioned previously, with 10Gb and 1Gb combined.

HPSDMike
HPE Pro
Solution

Re: HPE MSA 2042

- MSA Port A1 = 10Gb to SW1

- MSA Port B1 = 10Gb to SW2

will work fine so long as the switches participate in the same stack and the ports are in the same VLAN. If your switches are isolated then this would not be a good option, because only one server port would have access to each controller. So, with properly functioning ALUA/RR, you'll only have a max of 1Gb to a volume per server, because the other link is essentially a standby link.

As far as I can tell, all your options should work to some extent or another. It really depends on your workload and IO patterns. I know lots of customers who run 2 x 1Gb ports from the servers and 4 x 1Gb ports on the MSA and it works fine for them. You could elect to do this all at 1Gb, with nice and clean redundant paths everywhere, until such time as you can make the complete jump to 10Gb. But again, it all depends on your workload. If you've got a single shelf of spinning disks, it's likely you'll never drive enough I/O to really exercise the 10Gb anyway.
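As a rough sketch of the bandwidth point above (illustrative assumptions only: each path runs no faster than its slowest end, and an even round-robin split means the slowest usable path saturates first):

```python
# Illustrative only: per-volume throughput is bounded by the paths a server
# actually has to the owning controller; each path runs no faster than its
# slowest end (server NIC vs MSA port), and round robin splits I/O evenly.

def path_gbps(server_nic_gbps: float, msa_port_gbps: float) -> float:
    return min(server_nic_gbps, msa_port_gbps)

def per_volume_gbps(paths_to_owning_controller: list[float]) -> float:
    if not paths_to_owning_controller:
        return 0.0
    return len(paths_to_owning_controller) * min(paths_to_owning_controller)

# Hosts with 2 x 1Gb iSCSI NICs; MSA A1 = 10Gb on SW1, B1 = 10Gb on SW2
# (volume owned by controller A in this example).
stacked_same_vlan = [path_gbps(1.0, 10.0), path_gbps(1.0, 10.0)]  # both NICs reach A1
isolated_switches = [path_gbps(1.0, 10.0)]                        # only one NIC reaches A1

print(per_volume_gbps(stacked_same_vlan))   # ~2 Gb/s per volume
print(per_volume_gbps(isolated_switches))   # ~1 Gb/s; the other NIC is effectively standby
```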

 



I work for HPE. The comments in this post are my own and do not represent an official reply from the company. No warranty or guarantees of any kind are expressed in my reply.


Rob9
Occasional Contributor

Re: HPE MSA 2042

Thanks for your time, Mike. I agree; I was even thinking that instead of two 1Gb links from each host I could increase to four.

Using Veeam and having multiple SQL servers that are quite disk-heavy, our IO spikes at certain points of the day; I hope 10Gb will be more than adequate.