StoreVirtual Storage

Existing Cluster - 10Gb upgrade

Occasional Contributor

Has anyone gone through the process of upgrading an existing cluster to 10Gb? We have installed the 10Gb card and RAM, but the process of changing the IP addresses is described only vaguely in the 10Gb upgrade documentation I've been able to locate.

Our existing environment consists of 2 P4500 G2 nodes running LHOS 12.5, and a FOM running 12.6. The nodes have their two onboard NICs bonded as bond0. All communication occurs on the 192.168.160.x subnet.

What I'd like to do is bond the two new 10Gb NICs together (bond1) and either use that bond for all communication on the 192.168.160.x subnet, or use bond1 for LHOS and iSCSI and bond0 for management on the 192.168.170.x subnet.
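For what it's worth, either addressing option can be sanity-checked with a few lines of Python before touching the nodes. A sketch only - it assumes /24 masks, and the host addresses (.21 on each subnet) are placeholders, not your real ones:

```python
import ipaddress

# Assumed /24 masks and placeholder host addresses.
san_net = ipaddress.ip_network("192.168.160.0/24")
mgmt_net = ipaddress.ip_network("192.168.170.0/24")

bond1_ip = ipaddress.ip_address("192.168.160.21")  # proposed 10Gb bond IP
bond0_ip = ipaddress.ip_address("192.168.170.21")  # proposed management IP

# Initiators keep reaching their targets only if bond1 stays on the SAN subnet.
assert bond1_ip in san_net

# The two bonds must not share a subnet, or the CMC complains about it.
assert not san_net.overlaps(mgmt_net)
assert bond0_ip in mgmt_net

print("addressing plan is consistent")
```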

My first attempt was to assign a new 192.168.160.x IP to bond1 with no gateway specified. When I did so, I found that I couldn't ping or communicate with that IP. Moving the LHOS and management to bond1 didn't reestablish connectivity, so I stopped at that point because I was afraid that I wouldn't be able to manage the node via the CMC.

For my second attempt, I went through the same process, but I assigned a different subnet that the CMC could still communicate with. This prevented the CMC warning about the bonds being on the same subnet and allowed me to manage and ping the new IP. However, as I began moving services to bond1, I received a warning that all of my volumes would become inaccessible until the node became discoverable again.

At this point, I'm thinking that my safest option is to add a third node to the cluster, then remove one node at a time from the cluster and management group, work through the IP changes, and add the node back. This seems pretty safe, as long as we're OK with the restriping that would take place.

Is there another approach that someone could recommend? I would be willing to take a maintenance window and shut down the whole management group, but in the lab I never found a way to put the cluster in some type of maintenance mode and still be allowed to make IP changes.

Any comments would be welcomed.

Mark E Walters
Occasional Visitor

Re: Existing Cluster - 10Gb upgrade


Not sure if you have resolved this yet - I have just (30 minutes ago) completed an upgrade of our StoreVirtual 4530 cluster from 1Gb bonded interfaces to 10Gb bonded interfaces.

AFAIK you cannot have SAN traffic on the 10Gb interfaces and management on the 1Gb interfaces - so don't even try. 10Gb/s should be plenty for both tasks.

The process was not difficult.

I first shut down all servers that had access to the SAN volumes - this was mainly because we were also going to upgrade the firmware on the switches and add two more switches to the stack. The upgrade of the SAN to 10Gb can be done without shutting down the servers, provided you have Network RAID 10 on the volumes (i.e. you can take one disk node offline without the volumes going offline).

First, make sure you have iLO access to the disk nodes - you will need it to assign the IP address.

1) Open the CMC and authenticate to the management group.

2) On disk node 1, right-click the 1Gb bond and select Delete. It gives you a warning; read it and continue. This will disable the 1Gb NICs, so you can no longer manage that disk node through the CMC - don't panic! Continue with the next step.

3) Log in to the iLO on disk node 1 and log on to SAN/iQ. Go to the NICs and select one of the 10Gb NICs.

4) Assign the IP address, subnet mask, and gateway. I used the same IP address that was on the 1Gb bond, so I didn't have to make any changes to target IP addresses etc.

5) Go back to the CMC and log on to the node again - wait for it to log on completely, and wait for all volumes to re-sync.

6) Go to networking for disk node 1 and form a bond with the two 10Gb NICs selected - make sure your switch ports are configured properly before you do this.

7) Wait for the bond to form, then log in to the node again in the CMC.

8) Ping the IP address on the 10Gb interface on disk node 1 to make sure all comms are OK.

9) Breathe a sigh of relief and take a break for a bit, just to let the stress levels go down ;->

Repeat steps 2 through 9 for each disk node you have.

At the end of it, you will have all disk nodes running at 10Gb/s
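Once the last node is done, one quick way to confirm every node is reachable on its new address is to probe the iSCSI port (TCP 3260) from a host on the SAN subnet. A sketch only - the node addresses below are placeholders, and this checks TCP reachability, not SAN/iQ health:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder 10Gb addresses of the upgraded disk nodes - use your own.
nodes = ["192.168.160.11", "192.168.160.12"]

for ip in nodes:
    state = "OK" if port_open(ip, 3260) else "UNREACHABLE"  # 3260 = iSCSI
    print(f"{ip} iSCSI {state}")
```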


After doing this, my management group is running on a trial license, but I suspect that is because the license is based on the MAC address, and we have just changed to the 10Gb NICs, which have different MAC addresses. You just need to go back to the HP licensing portal and re-license with the new MAC address (or feature key).