StoreVirtual Storage

Adding Nodes and Update FW Questions/Recommendations

 
Ralf Gerresheim
Frequent Advisor

Hi,

 

I want to upgrade an existing P4000 multi-SAN environment (SAN/iQ v9.5). It consists of one management group with:

a) a cluster with 4 nodes, running LUNs with Network RAID 10

b) a cluster with 2 nodes, running LUNs with Network RAID 5 (these should be Network RAID 10 after the upgrade)

c) a FOM

All nodes have 10 GbE upgrade kits installed.

 

Cluster a) will get two more nodes

Cluster b) will get five(!) more nodes

 

The storage of cluster a) is presented to VMware servers; the storage of cluster b) to Windows servers.

 

My questions:

1. Can I run all upgrade tasks through the CMC, and without downtime?

 

2. Can I update the FOM through the CMC? Do the servers lose their connection to the data while the FOM reboots?

 

3. For the second cluster, what is the best way to add the 5 nodes?

    - create a new cluster with the five new nodes, create LUNs with Network RAID 10, and copy the data from the old cluster to the new cluster

    - then delete the old cluster and move the old nodes to the new cluster

or: can I instead add the new nodes to the old cluster and change the Network RAID level from 5 to 10?

 

4. A general question: how does the software handle an odd number of nodes? Is that possible at all, or is the only option to create a cluster with 4 nodes and have one node remaining?

 

5. Are there any special steps to integrate the 10 GbE nodes? As I remember, in the past the nodes first had to be connected at 1 GbE and receive the 10 GbE patches before they were able to run at 10 GbE.

 

Thanks in advance

1 REPLY
oikjn
Honored Contributor

Re: Adding Nodes and Update FW Questions/Recommendations

You can add all the nodes into the management group for both clusters at the same time; this does not cause any downtime. You can then use the Edit Cluster option in the CMC to add the nodes you want to each cluster, again all at the same time. This can be done live and does not cause any downtime, but it will trigger a cluster restripe, which takes performance away from production. After cluster b) has restriped, you can convert the LUNs from Network RAID 5 (NR5) to Network RAID 10 (NR10) by editing each LUN. This can also be done live and does not interrupt LUN access, but it requires another restripe, which again costs performance on the live system.
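Purely as an illustration of the ordering, here is a minimal Python sketch of that sequence, serializing the restripes so production only takes one performance hit at a time. The helper functions and the node/LUN names are hypothetical stand-ins, not real SAN/iQ or CLIQ calls; in practice you do each step in the CMC and watch the restripe status there:

import time

# Hypothetical stand-ins -- these steps are really done in the CMC GUI
# (or the CLIQ command line); they are stubbed here only to show ordering.
def add_nodes_to_cluster(cluster, nodes):
    print(f"adding {nodes} to {cluster} (starts a restripe)")

def set_lun_network_raid(lun, level):
    print(f"converting {lun} to Network RAID {level} (starts a restripe)")

def restripe_in_progress(cluster):
    return False  # stub: in reality, check the cluster status in the CMC

def wait_for_restripe(cluster):
    # Let one restripe finish before starting the next, so production
    # performance only degrades once at a time.
    while restripe_in_progress(cluster):
        time.sleep(60)

# Cluster a): just grow the cluster.
add_nodes_to_cluster("cluster-a", ["a5", "a6"])
wait_for_restripe("cluster-a")

# Cluster b): grow the cluster first, then convert each LUN NR5 -> NR10.
add_nodes_to_cluster("cluster-b", ["b3", "b4", "b5", "b6", "b7"])
wait_for_restripe("cluster-b")
for lun in ["lun-b1", "lun-b2"]:  # hypothetical LUN names
    set_lun_network_raid(lun, 10)
    wait_for_restripe("cluster-b")

The point of serializing is simply that each restripe steals I/O from production, so starting the NR5-to-NR10 conversion while the node-add restripe is still running doubles the hit.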

 

It is also possible to run an odd number of nodes with NR10; the manual describes the stripe/mirror pattern if you are interested.
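To see why an odd node count is fine: with Network RAID 10 each block is written twice, and the mirror copy goes to the adjacent node in the cluster's node order, so the pattern simply wraps around the ring whether the node count is even or odd. A small sketch of that placement rule (simplified; the manual has the exact pattern):

# Simplified Network RAID 10 placement: the primary copy of each block
# lands on one node and its mirror on the next node around the ring.
# The pattern wraps for any cluster size, so 5 nodes works like 4 or 6.
def nr10_placement(num_blocks, num_nodes):
    for block in range(num_blocks):
        primary = block % num_nodes
        mirror = (primary + 1) % num_nodes  # adjacent node holds the mirror
        yield block, primary, mirror

for block, primary, mirror in nr10_placement(10, 5):
    print(f"block {block}: primary on node {primary}, mirror on node {mirror}")

Losing any single node still leaves a copy of every block on its neighbours; what matters for staying online is manager quorum, which is what the FOM provides.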