BladeSystem Virtual Connect

How to best stack these 2 enclosures.

chuckk281
Trusted Contributor

How to best stack these 2 enclosures.

Leo had a customer VC stacking issue:

 

************

 

My customer requests to create a 2-enclosure domain. Each enclosure is populated with interconnect modules as below:

 

Enclosure 01

 

4 X 1/10Gb-F Virtual Connect Enet Module (447047-B21) inserted in the enclosure interconnect bays 1, 2, 3 & 4.

 

Uplink: XFP 10Gb ports used (2 ports)

 

Enclosure 02

 

4 X 1/10Gb-F Virtual Connect Enet Module (447047-B21) inserted in the enclosure interconnect bays 1, 2, 3 & 4.

 

Uplink: XFP 10Gb ports used (2 ports)

 

 

Note: I am aware these modules are no longer supported by HP.

                       

 

Could someone advise on the best stacking configuration to form this 2-enclosure domain?

 

Referring to some links, I can see that stacking can be enabled on the CX4 port and on RJ45 ports 1-4. To enable stacking, can we mix the CX4 port and ports 1-4 on an interconnect module?

 

*************

 

Info from Vincent:

 

***************

 

You can use any ports you want for stacking. You shouldn't mix speeds in the connections between 2 modules: e.g., to connect bay 1 to bay 3 in Enclosure 01, you should not use both the CX4 port and an RJ45 port. But you can use CX4 between bay 1 and bay 3, and one or several RJ45 ports between bay 3 of Enclosure 01 and bay 1 of Enclosure 02.

The best way to do it depends on other factors, in particular the network configuration you want to build: which networks go on which uplinks, and which server profiles have connections to which networks. From that you can infer which stacking links would carry the most traffic, and that's where you'd want to put your 10Gb links.

 

****************
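Vincent's two rules above (don't mix link speeds between the same pair of modules; put the 10Gb links where the traffic will be) can be turned into a quick sanity check on a planned cabling layout. The sketch below is purely conceptual and not an HP tool; the bay names, port types, and speeds are invented examples.

```python
# Conceptual sketch: validate a proposed set of VC stacking links against
# the rule "don't mix speeds between the same pair of modules".
# All names and values here are illustrative assumptions, not real tooling.

from collections import defaultdict

# Each stacking link: (module_a, module_b, port_type, speed_gb)
links = [
    ("enc1-bay1", "enc1-bay3", "CX4", 10),   # 10Gb CX4 within Enclosure 01
    ("enc1-bay3", "enc2-bay1", "RJ45", 1),   # 1Gb copper between enclosures
    ("enc1-bay3", "enc2-bay1", "RJ45", 1),   # second 1Gb link, same pair: fine
]

def mixed_speed_pairs(links):
    """Return module pairs connected by links of more than one speed."""
    speeds = defaultdict(set)
    for a, b, _ptype, speed in links:
        speeds[frozenset((a, b))].add(speed)
    return [sorted(pair) for pair, s in speeds.items() if len(s) > 1]

print(mixed_speed_pairs(links))  # [] -- no pair mixes speeds

# Adding a CX4 link alongside the RJ45s between the SAME two modules
# violates the rule and is flagged:
links.append(("enc1-bay3", "enc2-bay1", "CX4", 10))
print(mixed_speed_pairs(links))  # [['enc1-bay3', 'enc2-bay1']]
```

The same pair-grouping structure could be extended with per-network traffic estimates to decide which module pair deserves the 10Gb CX4 links.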

 

Other comments or input?

 

 

3 REPLIES
marcelkoedijk
Frequent Advisor

Re: How to best stack these 2 enclosures.

This guide should help you :)

http://h20565.www2.hp.com/portal/site/hpsc/template.BINARYPORTLET/public/kb/docDisplay/resource.process/?spf_p.tpst=kbDocDisplay_ws_BI&spf_p.rid_kbDocDisplay=docDisplayResURL&javax.portlet.begCacheTok=com.vignette.cachetoken&spf_p.rst_kbDocDisplay=wsrp-resourceState%3DdocId%253Demr_na-c02102153-4%257CdocLocale%253D&javax.portlet.endCacheTok=com.vignette.cachetoken

 

Beware of multi-enclosure stacking; it isn't always the best solution.

 

- Yes, you are able to move blades between enclosure bay slots

- Yes, you need fewer Ethernet uplinks

 

But... be aware of the risks:

 

- The Virtual Connect domain CAN fail, and then you lose 32 production servers

- A Virtual Connect update will update the whole domain as a single update, with all servers affected

- Be sure that the cluster farm is split over at least 2 Virtual Connect Domains.

 

A multi-enclosure configuration is effectively one domain spanning up to 32 blades; that domain becomes the single point of failure if something goes wrong with VC.

 

See also page 3 in the document :)
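The advice above to split a cluster farm over at least 2 Virtual Connect domains can be checked mechanically against an inventory. A minimal sketch under that assumption, not an HP tool; the host, cluster, and domain names are invented for illustration.

```python
# Conceptual sketch: flag hypervisor clusters whose members all sit in a
# single VC domain, so one domain failure (or firmware update) could take
# the whole cluster down. Names below are made-up examples.

hosts = {
    "esx01": {"cluster": "prod", "vc_domain": "domainA"},
    "esx02": {"cluster": "prod", "vc_domain": "domainB"},
    "esx03": {"cluster": "test", "vc_domain": "domainA"},
    "esx04": {"cluster": "test", "vc_domain": "domainA"},  # risky placement
}

def single_domain_clusters(hosts):
    """Return clusters whose members all live in one VC domain."""
    domains = {}
    for info in hosts.values():
        domains.setdefault(info["cluster"], set()).add(info["vc_domain"])
    return sorted(c for c, d in domains.items() if len(d) < 2)

print(single_domain_clusters(hosts))  # ['test'] -- 'prod' is safely split
```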

Psychonaut
Respected Contributor

Re: How to best stack these 2 enclosures.

To put it mildly, I'd say that is a bit of a negative review. The odds of losing your domain are pretty low.

 

It's all in what your end goal is. I've had multiple 4-wide domains and have had no issues with them. You do have to be aware of a couple of things, like making sure your blade firmware is at the same or similar levels for all servers.

 

If you have a lot of server-to-server traffic, it is a good way to have that go from chassis to chassis without ever hitting the actual switch. It's also easier to manage multiple chassis at the same time and to add new connections or VLANs.

 

Just a couple of counter points. 

marcelkoedijk
Frequent Advisor

Re: How to best stack these 2 enclosures.

You're right that server-to-server traffic can stay within the domain and doesn't need to load the core switch. That can be a big advantage for an environment.

 

I have seen more than once that VC domains go completely down, even on the latest SPP firmware. That's why I'm careful.

A pause flood that fills up the VC domain memory and crashes it was one of the things possible in iSCSI environments below firmware 4.xx. That's now fixed, but there are more scenarios.

 

To the point: I don't want to cause panic over this, and there is more than one good reason to stack enclosures. We are also successfully running a 4-enclosure multi-enclosure configuration. But keep in mind what the differences are and what level of availability is needed.

 

Just splitting up your hypervisor farm between 2 different VC domains (single- or multi-enclosure domains) is good advice.