BladeSystem - General

2 x C7000 Stacked enclosures with 2 x StoreVirtual 4630

 
chuckk281
Trusted Contributor


An iSCSI question from Muhammed:

 

**********

 

 

I have a scenario in which 2 x C7000 enclosures equipped with Flex-10/10D VC modules are stacked and configured as a single VC domain. Enclosure #1 has a StoreVirtual 4630 (4-node) iSCSI storage system, and enclosure #2 will be installed with a StoreVirtual 4630 (2-node) iSCSI storage system.

Is there any limitation on having more than one StoreVirtual 4630 in a single VC domain?

 

This question arose when I read the “HP Virtual Connect with iSCSI Cookbook” (December 2013), which mentions a limitation on attaching multiple StoreVirtual 4000 storage nodes to the same VC domain (page 28).

 

***********

 

Reply from Dave:

The limitation on the P4000 refers to the rack-mount version, as those nodes need an external switch to connect all nodes together for communication.

The 4630 is a bladed node with SAS storage. The bladed nodes are internal to the enclosure and handle node communication inside the enclosure, like our VDI solutions.

 

You may want to define four iSCSI networks, two for each enclosure, e.g. enc0_iSCSI_Bay1, enc0_iSCSI_Bay2, enc1_iSCSI_Bay1 and enc1_iSCSI_Bay2.

This keeps each 4630 within its own enclosure, which helps with bandwidth.

Or you could use just iSCSI_Bay1 and iSCSI_Bay2 to share the networks between enclosures, assuming enough stacking links are connected to provide adequate bandwidth.
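As a rough sketch of the two options (Python, purely illustrative; the enclosure and bay numbers are just the ones from this example):

# Sketch only: the two VC iSCSI network layouts discussed above.
# Network names follow the naming suggested in the reply.

# Option 1: four per-enclosure networks, keeping each 4630's iSCSI traffic
# inside its own enclosure (no dependence on stacking-link bandwidth).
per_enclosure_networks = [
    f"enc{enc}_iSCSI_Bay{bay}" for enc in (0, 1) for bay in (1, 2)
]
# -> ['enc0_iSCSI_Bay1', 'enc0_iSCSI_Bay2', 'enc1_iSCSI_Bay1', 'enc1_iSCSI_Bay2']

# Option 2: two shared networks spanning both enclosures; inter-enclosure
# iSCSI traffic then rides the VC stacking links, so enough stacking
# bandwidth must be provisioned.
shared_networks = ["iSCSI_Bay1", "iSCSI_Bay2"]

print(per_enclosure_networks)
print(shared_networks)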

 

Keep in mind that I have not done this myself; our VDI solutions allow for MES, but they only use one set of 4630 nodes.

I do not see why it would not work, though.

 

And from Chris:

 

One thing to also keep in mind is the NIC bonding method used by the LeftHand family. ALB is the preferred choice, and when it is used the NICs need to be able to see each other over the network.

Two iSCSI networks would require each network to have an uplink port connected to a switch that links the two networks together. This would be preferred if the iSCSI storage needs to be connected to devices external to the blade enclosure, and an external CMC could then be used.

 

If the storage is going to be used internally to the blade enclosure, and the 4630s are all going to be in the same management group in one or more clusters, an internal network would be best. This eliminates the need for a connection to a switch. Another thing to keep in mind is that LeftHand will only do iSCSI over one subnet; however, management NICs can be created to run the CMC on a different subnet / external network. Replication can only be done over the LeftHand network.

 

Here is a scenario that creates a single network between the stacked enclosures. Create a single internal network called iSCSI. In the profile for each 4630, add LOM 1a and 2a to iSCSI, and add LOM 1b and 2b to a routable network (one tied to uplink sets) with the bandwidth set to 100 or 200 Mb (why waste bandwidth when it is not needed?); this network can be used for management, mail and monitoring. Bond LOM 1a and 2a together using a non-routable subnet. Bond LOM 1b and 2b using a routable subnet; this allows the CMC to be run external to the enclosure.
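As a sketch, the intended per-4630 profile mapping could be summarized like this (Python, illustrative only; the "Mgmt" network name and the exact speed cap are assumptions to adapt):

# Sketch of the per-4630 VC profile described above. "iSCSI" and "Mgmt" are
# placeholder network names; the 200 Mb cap is the example figure from the
# scenario, not a requirement.
profile_connections = {
    # bond0: iSCSI / inter-node traffic on a non-routable internal network
    "LOM1a": {"network": "iSCSI", "bond": "bond0", "routable": False},
    "LOM2a": {"network": "iSCSI", "bond": "bond0", "routable": False},
    # bond1: CMC / management / mail / monitoring on a routable network,
    # capped low because it does not need much bandwidth
    "LOM1b": {"network": "Mgmt", "bond": "bond1", "routable": True, "max_speed_mbps": 200},
    "LOM2b": {"network": "Mgmt", "bond": "bond1", "routable": True, "max_speed_mbps": 200},
}

for lom, cfg in profile_connections.items():
    print(lom, cfg)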

 

Do the normal setup (network bonding, patching and licensing) before creating the management group. Once the cluster is created, verify the node order. For the best high availability, the order should alternate, for example: one node from enc0, then the next node from enc1, then back to enc0, and so on. This configuration is recommended since it is possible to lose nonconsecutive nodes and keep Network RAID 10 storage available, provided quorum is not lost, meaning you could lose one enclosure and still have storage up on the other. A FOM (Failover Manager) could help with this.
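To make the node-order point concrete, here is a toy model (Python) of the nonconsecutive-node rule and the quorum math. Node names are hypothetical, and this only illustrates the adjacency/majority logic described above, not actual StoreVirtual behaviour:

def interleave(enc0_nodes, enc1_nodes):
    """Alternate nodes from the two enclosures: enc0, enc1, enc0, enc1, ..."""
    order = []
    for a, b in zip(enc0_nodes, enc1_nodes):
        order += [a, b]
    return order

def netraid10_available(order, down):
    """Network RAID 10 mirrors each block on two adjacent nodes in the cluster
    node order (modelled here as a ring); data stays online as long as no
    adjacent pair of nodes is down at the same time."""
    n = len(order)
    return all(not (order[i] in down and order[(i + 1) % n] in down)
               for i in range(n))

def has_quorum(managers, down):
    """Quorum requires a strict majority of managers still running."""
    alive = [m for m in managers if m not in down]
    return len(alive) > len(managers) / 2

enc0 = ["enc0-node1", "enc0-node2"]      # hypothetical node names
enc1 = ["enc1-node1", "enc1-node2"]
good_order = interleave(enc0, enc1)      # enc0, enc1, enc0, enc1
bad_order = enc0 + enc1                  # both enc0 nodes adjacent in the ring

down = set(enc0)                         # simulate losing enclosure 0 entirely

print(netraid10_available(good_order, down))   # True  - no mirror pair fully lost
print(netraid10_available(bad_order, down))    # False - an adjacent mirror pair is gone
print(has_quorum(good_order, down))            # False - only 2 of 4 managers left
print(has_quorum(good_order + ["FOM"], down))  # True  - the FOM breaks the tie (3 of 5)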

 

Please reply back if you have questions. You may be familiar with most of this already.

 

Response from Muhammed with inline reply from Chris:

Thanks Chris,

 

Yes, I have a few questions here; I would appreciate your feedback.

 

1) The customer wants the 4-node 4630 in enclosure 0 as a separate cluster and the 2-node 4630 in enclosure 1 as a separate storage entity. If we make any storage cluster accessible to any blade server, what should the VC network configuration be?

 

These could both be part of the same management group but different clusters, or they could be separate management groups and clusters. Putting them in a single management group would give you the ability to use Peer Motion to move volumes between the clusters on the fly.

If the enclosures are stacked, an internal network can be created; it would be available on all C7000 enclosures in the stack. I usually call it iSCSI.

Stacking the enclosures is possible if all VC modules in each C7000 are identical (Flex-10, FlexFabric or Flex-10/10D), up to 4 enclosures.

If the VC modules are not identical, two enclosures can still be linked without a switch by creating an Active/Standby network in place of the stacking links.

 

2) As mentioned in your reply, the LOM 1a and 2a bond will be used for iSCSI traffic, and LOM 1b and 2b will be used for management (CMC) purposes. Does that mean the 4630 supports out-of-band management?

 

All LeftHand devices, provided they have enough NICs, can do management from another subnet. LeftHand will only pass iSCSI and inter-node communication on one subnet; however, management traffic can be on a different subnet, and only one subnet can have a gateway. Be cautious when using a different management subnet: it can hide network issues on the iSCSI network. You may be able to see all the nodes, but if there is a network issue on the LeftHand NICs, the nodes will not be able to see each other.
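As a simple spot check for that situation, something like the following (Python, with placeholder iSCSI addresses) can be run from a host on the iSCSI subnet, or from each node in turn, to confirm the LeftHand interfaces really are reachable rather than trusting the management-subnet view:

# Spot-check reachability of the nodes' iSCSI interfaces. The addresses are
# placeholders; run this from a host (or node) that sits on the iSCSI subnet.
import subprocess

ISCSI_NODE_IPS = ["10.10.10.11", "10.10.10.12", "10.10.10.13", "10.10.10.14"]  # hypothetical

def reachable(ip, timeout_s=2):
    """Return True if a single ICMP echo to `ip` succeeds (Linux ping syntax)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

for ip in ISCSI_NODE_IPS:
    print(ip, "reachable" if reachable(ip) else "NOT reachable on the iSCSI subnet")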

 

3) What if we add another C7000 enclosure and stack it with the existing 2 x C7000 stacked enclosures? This third enclosure runs a P4800 G2 (2-node cluster), and the requirement is the same, i.e. any server can access any storage. What should the network configuration be for both the iSCSI network and the management network (CMC)?

 

The VC modules must be identical to stack; the P4800s mainly used Flex-10. The LOM layout can be the same for the 4630s and P4800s: LOM 1a & 2a for iSCSI, and LOM 1b & 2b for management set to 100 or 200 Mb. Create bond0 from LOM 1a & 2a and bond1 from LOM 1b & 2b. By default you should see 4 NICs on each node; the second ones may have to be added and configured in the blade's VC profile.

Check the Communication tab and make sure the LeftHand interface is set to the correct bond.
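A small sanity check along those lines (Python sketch; the expected map is simply taken from the layout above, it is not read from VC or the CMC):

# Each 4630 / P4800 node should end up presenting four NICs:
# bond0 = LOM 1a + 2a on the iSCSI network, bond1 = LOM 1b + 2b on management.
EXPECTED_BONDS = {
    "bond0": {"members": {"LOM1a", "LOM2a"}, "network": "iSCSI"},
    "bond1": {"members": {"LOM1b", "LOM2b"}, "network": "Mgmt"},
}

def check_node(nics):
    """`nics` maps NIC name -> assigned network, as seen on one storage node."""
    problems = []
    if len(nics) != 4:
        problems.append(f"expected 4 NICs, found {len(nics)}")
    for bond, spec in EXPECTED_BONDS.items():
        for nic in spec["members"]:
            if nics.get(nic) != spec["network"]:
                problems.append(f"{bond}: {nic} should be on {spec['network']}, "
                                f"got {nics.get(nic, 'missing')}")
    return problems

# Example: a node where LOM2b was never added in the blade's VC profile.
print(check_node({"LOM1a": "iSCSI", "LOM2a": "iSCSI", "LOM1b": "Mgmt"}))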

 

And input from Dan:

A pair of HPN 5900-48XG switches goes for between 25 and 30K USD and would make an excellent iSCSI network backbone.

 

Compared to how much your customer is spending on storage, this would be very inexpensive and would allow them to keep each C7000 as an independent VC domain. There are no stacking headaches, and you can expand well past 4 enclosures over time. You can add some DL servers if need be (a DL580 Gen8?) as well.

 

And you can still have separate VLANs for Mgmt and iSCSI.

Have the Mgmt VLAN leave the VC modules via normal uplinks.

Have the iSCSI VLAN leave the VC modules via dedicated uplinks to the 5900 switches (in place of the stacking cables).
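Summarizing that split as a sketch (Python; the VLAN IDs are hypothetical):

# Sketch of the VLAN split with the 5900 pair carrying iSCSI instead of
# enclosure stacking links. VLAN IDs are placeholders.
uplink_plan = {
    "Mgmt":  {"vlan_id": 100,  # hypothetical
              "leaves_vc_via": "normal uplinks to the LAN core"},
    "iSCSI": {"vlan_id": 200,  # hypothetical
              "leaves_vc_via": "dedicated uplinks to the HPN 5900-48XG pair "
                               "(used instead of enclosure stacking cables)"},
}

for net, cfg in uplink_plan.items():
    print(f"{net}: VLAN {cfg['vlan_id']} -> {cfg['leaves_vc_via']}")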

 

Just something to consider…

 

*************

 

Other comments?