HPE SimpliVity

Do we need further configuration if we change the connection method of storage and federation?

 
SOLVED
TerryTian
Occasional Contributor

Do we need further configuration if we change the connection method of storage and federation?

Hi All,

In the current environment there are only two nodes. The storage and federation ports are directly connected between the nodes, and the management ports are connected to a switch.

If we change the storage and federation connections from direct connection to a switch, is that feasible?

If yes, what configuration needs to be changed?
If not, is there any document we can share with the customer?

If we change back (from switch to direct connection), will the original environment work normally?

Any advice is appreciated.

Thanks.

2 REPLIES
JohnHHaines
HPE Pro

Re: Do we need further configuration if we change the connection method of storage and federation?

Absolutely.

You should not need to change any configuration.

However, it is possible you configured the ports without VLANs when deploying in a direct-connect configuration. In that case, you will need to open a call with HPE Support to change the network configuration and add the VLANs (make sure they are also configured on the switch).
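
If you want a quick read-only check beforehand, the VLAN ID assigned to each port group is visible from an ESXi SSH session. A minimal sketch (the port group names in your output depend on your deployment):

  # List every standard port group with its vSwitch and VLAN ID
  esxcli network vswitch standard portgroup list

If the storage and federation port groups show VLAN ID 0, they were deployed untagged.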

The default teaming policy for the two 10 Gb ports in Direct Connect is Active/Passive. Once they are switch connected, you can change that.
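
As a rough sketch of what that looks like from the ESXi command line (vSwitch1 and the vmnic names are assumptions based on a typical deployment, so confirm yours first):

  # Show the current active/standby uplink policy for vSwitch1
  esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1

  # Example: make both 10 Gb uplinks active once they are switch connected
  esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --active-uplinks=vmnic2,vmnic3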

You can do this live: remove one cable from one node and plug it into the switch. Using a new cable, connect the other node to the switch. You should now be able to move the other two ports onto the switch, live.


I work for HPE
TerryTian
Occasional Contributor
Solution

Re: Do we need further configuration if we change the connection method of storage and federation?

Hi All,

I just found the solution information for this case.

For your reference:

How to Convert Direct Connected 2-Node Datacenter to Use 10 Gb Switch
How To#: 20151
Article Information
Properties
Server Platform: All server platforms
OmniStack Software Version: All software versions
Information Level: 1
Date Created: Wednesday, April 5, 2017
Date Revised: Wednesday, April 5, 2017
Overview
This article explains how to reconfigure the vSwitches and physical cabling between two directly connected SimpliVity nodes to convert them to a 10 Gb switched infrastructure. Because the OmniCubes are directly connected in this scenario, there is no direct impact to performing this procedure during production, provided the ping confirmations are completed and come back successful (a sample check is sketched at the end of this section).
This may be desired for various reasons, including:
1. Growing the datacenter beyond an initial two nodes
2. Providing access to the storage networks for resilient connections from compute nodes.
This procedure requires the following:
1. Administrative access to the vCenter server
2. Administrative access to the ESXi instances of the nodes
3. Physical access to the network infrastructure (ports on SimpliVity and the switch)
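
The ping confirmations mentioned above are the safety net for this procedure. A minimal sketch of the check, assuming vmk1 is the storage vmkernel port and 192.168.20.2 is the partner node's storage IP (both are placeholders for your environment):

  # Ping the partner's storage vmkernel port through a specific interface, using
  # an 8972-byte payload with don't-fragment set to prove the full 9000 MTU path
  vmkping -I vmk1 -d -s 8972 192.168.20.2

If the jumbo-sized ping fails while a default-sized one succeeds, the MTU is mismatched somewhere along the path.
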
Procedure
The following procedure provides a permanent conversion to a switched configuration.
1. Log in to the vCenter server, and locate the network configuration section for each SimpliVity node.
2. Verify that there are two physical NICs being used for the storage and federation networks (normally vSwitch1).
3. Disable vSphere HA.
4. Log in to both ESXi servers and verify that the NFS Storage kernel port can be pinged with the vmkping command.
5. In vCenter, take the second physical NIC off vSwitch1.
6. Verify ping functionality again.
7. On the switch, configure the 10 Gb ports to support the new connections. Make sure to use sufficient MTU
(storage and federation networks should be using 9000).
8. Swing the physical network cables over from direct node-to-node to node-to-switch port.
9. In vCenter, build a test vSwitch on each of the two nodes. Create a test vmkernel port on each. Associate the
physical NICs with this test vSwitch (a command-line sketch follows this list).
10. In the ESXi SSH sessions, verify that each node can ping the test vmkernel port on the other system. This is to
test the network path across the switch ports.
11. Once verified, in vCenter move the physical NICs from the test vSwitch to the original vSwitch (vSwitch1). Make
these NICs the primary connections for the vSwitch. Disable the first physical NIC on vSwitch1.
12. Verify that vmkpings work as in step 4.
13. Once the switched connection is verified, from vCenter, move the first physical NIC from vSwitch1 to the test
vSwitch.
14. Swing the physical network cables over from direct node-to-node to node-to-switch port.
15. Verify traffic across the switch by using vmkping to the partner test vmkernel port as in step 10.
16. After verified, use vCenter to logically move the first physical NIC from the test vSwitch to the federation/storage
vSwitch (vSwitch1).
17. Test traffic across the newly moved first NIC by going into vCenter and moving the second NIC to the unused state.
18. Verify that vmkpings work as in step 4.
19. After verification, return to vCenter, and put the NICs back into active/standby status as desired.
20. The test vmkernel port and the test vSwitch may be cleaned up as desired.
21. Enable vSphere HA.
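
For steps 9 and 10, the following is a sketch of building the test vSwitch and vmkernel port from an ESXi SSH session rather than vCenter. The names, the vmnic number, and the 192.168.99.x test addresses are all assumptions; run the same commands on the second node with .2 as the address:

  # Step 9: create a test vSwitch with jumbo frames and a test vmkernel port
  esxcli network vswitch standard add --vswitch-name=vSwitchTest
  esxcli network vswitch standard set --vswitch-name=vSwitchTest --mtu=9000
  esxcli network vswitch standard portgroup add --portgroup-name=TestPG --vswitch-name=vSwitchTest
  esxcli network ip interface add --interface-name=vmk9 --portgroup-name=TestPG --mtu=9000
  esxcli network ip interface ipv4 set --interface-name=vmk9 --ipv4=192.168.99.1 --netmask=255.255.255.0 --type=static

  # Attach the physical NIC that was moved to the switch
  esxcli network vswitch standard uplink add --uplink-name=vmnic3 --vswitch-name=vSwitchTest

  # Step 10: verify the path across the switch with a jumbo, don't-fragment ping
  vmkping -I vmk9 -d -s 8972 192.168.99.2

  # Step 20: clean up the test objects when finished
  esxcli network ip interface remove --interface-name=vmk9
  esxcli network vswitch standard remove --vswitch-name=vSwitchTest

The same uplink add/remove commands cover the NIC moves in steps 11, 13, and 16 if you prefer the CLI over vCenter.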