Aruba 3810M ISCSI Configuration

 
Afternoon All,

I'm currently setting up a pair of stacked Aruba 3810M switches to handle our iSCSI traffic from an MSA 2050 to our VMware servers. Should I enable spanning tree or not? Any other advice would also be appreciated.

1 REPLY

Re: Aruba 3810M ISCSI Configuration

From what you wrote, it seems you're going to use the (backplane-stacked) Aruba 3810M switches to serve both the frontend (Servers <--> Clients) and the backend (Servers <--> iSCSI SAN) layers...but I suggest you not do so.

Generally, (backplane-stacked) Aruba 3810M switches serving the frontend (Servers <--> Clients) can be a good solution, but avoid mixing that side of the fence with the iSCSI zone (even if you plan to use non-routed VLANs to segregate the iSCSI traffic).
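If you do end up carrying iSCSI on a shared switch anyway, a minimal ArubaOS-Switch sketch of a segregated, non-routed iSCSI VLAN could look like the following (the VLAN ID 100, port range 1/1-1/8, and the jumbo setting are assumptions for illustration, not values from this thread):

```
! Assumed example: dedicated layer-2 VLAN for iSCSI traffic.
! No "ip address" is configured on the VLAN, so it stays unrouted.
vlan 100
   name "iSCSI"
   untagged 1/1-1/8
   jumbo
   exit
```

Jumbo frames are commonly recommended for iSCSI, but only if the storage ports, the server NICs, and every switch port in the path are all configured for them end to end.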

The backend (Servers <--> iSCSI SAN) layer should instead be kept isolated from the frontend and should be served by two switches that are not interconnected (neither backplane stacked nor frontplane stacked). The reason is well explained here (it's related to correctly planning multipathing), and it can also be seen in the HPE MSA 1050/2050/2052 Best Practices technical white paper (page 42) and in the HPE "Moving from NAS to HPE 3PAR StoreServ storage with iSCSI VVols without sacrificing simplicity" implementation guide (page 11).

It doesn't matter that the NAS/SAN is not the same model, or that the switches aren't exactly Aruba 3810M...try to understand the design concepts exposed and why resiliency kicks in: the backend layer (Servers <--> iSCSI SAN) should be served by two switches that are not interconnected, exactly as happens in an FC SAN, where one (or more) FC switch(es) forms the SAN Fabric "A" while another FC switch forms the totally separate SAN "B"...and the hosts (servers) are concurrently connected to both via their FC adapters in a resilient way (cross-connecting and using different controllers/ports to achieve maximum resiliency).

Supposing the server(s) are properly configured with regard to iSCSI/VLANs/ports, and adopting the correct resiliency level on every device (iSCSI ports, iSCSI controllers, switches), you can obtain what the first link's thread summarizes with the statement: "This configuration will handle any single pNIC failure, any single Switch failure, any single iSCSI Controller failure or any single Link failure without interruption."

With regard to (R)STP, it shouldn't be an issue: set the involved Ethernet ports with admin-edge-port and bpdu-protection (spanning-tree <port-id> admin-edge-port and spanning-tree <port-id> bpdu-protection).
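As a sketch on ArubaOS-Switch, with the port range 1/1-1/8 assumed for the server/storage-facing ports (adjust to your cabling):

```
! Assumed example: mark host-facing ports as edge ports so they
! transition to forwarding immediately, and drop the port if a
! BPDU is ever received on it (no switch should sit behind it).
spanning-tree 1/1-1/8 admin-edge-port
spanning-tree 1/1-1/8 bpdu-protection
```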

If the Aruba 3810M switches on the backend are kept separate, you can enable spanning tree on both with priority 0 (each one will be the root of its own spanning-tree topology) and force the operation mode to RSTP (instead of the default MSTP).
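On ArubaOS-Switch that could look something like the following, applied independently on each backend switch (a sketch assuming defaults elsewhere):

```
! Assumed example: each non-interconnected backend switch is the
! root of its own topology, running RSTP instead of the default MSTP.
spanning-tree
spanning-tree priority 0
spanning-tree force-version rstp-operation
```

Since the two backend switches have no link between them, each spanning-tree domain ends at its own switch; the protocol is there mainly as a safety net against accidental loops on the host-facing ports.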