
Move LeftHand cluster to new switches

SOLVED
kghammond
Frequent Advisor

We have two 2-node LeftHand clusters. We recently bought some newer bigger backplane switches. We need to move our LeftHand clusters to the new switches.

Ideally we want to create zero downtime for the clusters.

Currently all nodes have two bonded NICs, using eight ports in total on the current switch. We have the VLANs trunked to the new switching infrastructure.

I can see two possible solutions:

1) We quickly unplug the cables and plug them into the new switch, wait for ARP timeouts, and re-sync if necessary. In this scenario, would it be better to move the VIP owner first or second? Will our VMware LUNs lose connectivity at all during the VIP failover? Is there a manual way to move the VIP?

2) We take each node offline via the CMC (shutdown), plug in the new switch ports, bring the node back online, and re-sync via the CMC. Again, should the VIP move first or second with this solution?

Do you have a preference between 1 and 2, and are there any other solutions that might be cleaner still?
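Whichever option is chosen, it may help to measure how long the VIP is actually unreachable during the cutover. Below is a minimal monitoring sketch in Python, assuming the cluster VIP answers TCP on the standard iSCSI port (3260); the VIP address in the example comment is a placeholder, not from this thread:

```python
import socket
import time

def probe(host, port=3260, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds (iSCSI is 3260)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watch(host, port=3260, probes=30, interval=1.0):
    """Probe host:port repeatedly; return a list of (timestamp, up) samples.

    Run this against the cluster VIP while swapping cables to see
    exactly how long the VIP was down.
    """
    samples = []
    for _ in range(probes):
        samples.append((time.strftime("%H:%M:%S"), probe(host, port)))
        time.sleep(interval)
    return samples

# Example (VIP address is a placeholder -- substitute your own):
#   for ts, up in watch("10.0.0.50"):
#       print(ts, "up" if up else "DOWN")
```

Running this from a host on the iSCSI VLAN during the move gives a per-second record of when the VIP dropped and came back.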

Thank You,
Kevin
1 REPLY
Bryan McMullan
Trusted Contributor
Solution

Re: Move LeftHand cluster to new switches

Sounds a bit dangerous. I'd highly recommend waiting for a maintenance window, but I'm sure you've already thought of that. So, to answer your questions:

1) I believe the timeout for the VIP is about 15 seconds; that window is supposed to guarantee the VIP is down before the cluster chooses another node. If you only have VMware connected to it, you may be okay with a quick pull and plug. When we upgraded the software on our Catalyst 3750 stack, the connection from our ESX cluster to the SAN was dropped. We left a couple of test machines running to see what would happen. Surprisingly, they simply paused until connectivity was re-established; no harm, no foul. You'll most likely see errors on ESX, but I don't think it will drop the LUN (ours didn't, and it was down for longer than a minute). As for manually moving the VIP, unfortunately there is no method available. I've begged and pleaded with LHN for this capability; perhaps it will come in the future.

2) This seems much more reasonable. What is the connection between the new hardware and the old? If it's fully routed and at least 1 Gb or higher, this is the direction I'd personally go. You'd want to move the VIP second in this solution. Be aware that you may (and likely will) still see a blip while the VIP times out and moves, but I think this is the safer method.

I can't really think of any other options. If you're using adaptive load balancing on the dual NICs in each unit, you could move one NIC from each unit to the new hardware (with the new ports in a shutdown state), then down the old ports and bring the new ones up. That would be quicker than the pull and plug, and each unit can function on a single NIC. You can then move the remaining NICs over at your leisure.
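The staggered NIC move above can be sanity-checked before the window. A small sketch (node and NIC names are made up for illustration): given a planned per-step map of which NICs stay active, verify that no step leaves a node with zero active NICs:

```python
def every_node_stays_up(plan):
    """plan: list of steps, each a dict mapping node -> set of active NICs.

    Returns True only if every node has at least one active NIC
    at every step of the planned migration.
    """
    return all(bool(nics) for step in plan for nics in step.values())

# Hypothetical 2-node plan for the staggered move described above:
plan = [
    # step 1: both NICs of each node still on the old switch
    {"node1": {"nic1", "nic2"}, "node2": {"nic1", "nic2"}},
    # step 2: nic2 recabled to the new switch (its port still shut down)
    {"node1": {"nic1"}, "node2": {"nic1"}},
    # step 3: new ports brought up, old nic1 ports downed and moved
    {"node1": {"nic2"}, "node2": {"nic2"}},
    # step 4: all NICs live on the new switch
    {"node1": {"nic1", "nic2"}, "node2": {"nic1", "nic2"}},
]
print(every_node_stays_up(plan))
```

Trivial as it is, writing the sequence down this way makes it obvious if any step accidentally downs both NICs on a node at once.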

Good luck!