HPE Synergy

ksumbly
Occasional Advisor

Synergy Non Blocking Architecture

We often come across the claim that our HPE Synergy has a 1:1 non-blocking architecture. Can anyone help elaborate on what this means, and highlight how it is unique and a key differentiator in competitive scenarios?
2 REPLIES
AkilDevaraj
HPE Pro

Re: Synergy Non Blocking Architecture

Hello Ksumbly,

First, let's look at the meaning of non-blocking architecture:
In the context of a switch, it is the ability to handle independent packets simultaneously because the switch has sufficient internal resources to handle the maximum transfer rate of every port. In a blocking architecture, the switch itself can become a bottleneck. In the same way, the SAS modules and storage options in HPE Synergy can be configured with redundancy on the existing frame, so that performance and productivity stay up when they are set up redundantly.
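As a minimal sketch of that definition (the port counts and fabric capacities below are purely illustrative, not Synergy specifications), a switch is non-blocking when its internal fabric can carry every port at full line rate at the same time:

def is_non_blocking(port_speeds_gb, fabric_capacity_gb):
    # True if the fabric can absorb all ports transmitting at maximum rate simultaneously
    return sum(port_speeds_gb) <= fabric_capacity_gb

# Hypothetical 8-port 10Gb switch
ports = [10] * 8
print(is_non_blocking(ports, fabric_capacity_gb=80))  # True  -> non-blocking
print(is_non_blocking(ports, fabric_capacity_gb=40))  # False -> blocking; the switch becomes the bottleneck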

With regard to the key difference, it is about the availability of resources as and when required, even when one module or piece of hardware is down. For HPE Synergy features and performance, please see the document below:

https://h20195.www2.hpe.com/V2/getpdf.aspx/4AA6-3257ENW.pdf?

 


I work for HPE
DanRobinson
HPE Pro

Re: Synergy Non Blocking Architecture

It mainly comes up when talking about/against UCS.

On the Synergy VC modules (and the various switch modules too for that matter) there is no oversubscription inside the Enclosure / Logical Enclosure.

So let's do a quick breakdown of the Virtual Connect 40Gb module.
It has a 32 port 40Gb ASIC under the hood.

12 of those ports are used for the L1-L4 Link ports to the ILM (Satellite Modules).
Those 12 map to 4 physical ports that each take 3 ASIC ports, giving 120Gb per physical port, which is then carved into 12 x 10Gb lanes.
Thus every server in a Satellite-connected Enclosure has its own dedicated 10Gb lane back to the ASIC.
When you use the 20Gb Satellite module, you use 2 of the ILM ports, so each server gets a dedicated 20Gb (per side).
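Putting those numbers together, here is the satellite-link arithmetic as a quick sketch, using only the figures above:

# Each physical ILM link port bundles 3 ASIC ports of 40Gb each
asic_port_speed_gb = 40
asic_ports_per_ilm_link = 3
ilm_link_gb = asic_ports_per_ilm_link * asic_port_speed_gb       # 120 Gb per physical ILM port

# That 120Gb is carved into 12 x 10Gb lanes, one per server in the satellite frame
servers_per_satellite_frame = 12
lane_per_server_gb = ilm_link_gb / servers_per_satellite_frame   # 10.0 Gb dedicated per server

# With the 20Gb Satellite module a frame consumes 2 ILM link ports,
# so each server's dedicated lane doubles to 20Gb (per side)
print(ilm_link_gb, lane_per_server_gb)  # 120 10.0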

Now we have 12 internal ports as well for the local servers.
For historical reasons (requirements that changed before launch), these are all 40Gb each.
So each server in the same frame as the VC 40 has a dedicated 40Gb port, even though the most it will use is 20Gb with the 3820c Mezz card.

Then lastly we have 8 QSFP+ ports for uplinks.
These are 1 40Gb ASIC port each.

Thus 12+12+8 = 32
And we have dedicated bandwidth between every single port on the VC module.

The VC100 is a 32 x 100Gb ASIC and uses almost the same design.
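A quick tally of the VC 40Gb module's ASIC port budget, using only the counts listed above (treat it as an illustration of the breakdown, not a spec sheet):

asic_port_speed_gb = 40

ilm_ports      = 12   # 4 physical satellite link ports x 3 ASIC ports each
downlink_ports = 12   # one dedicated 40Gb port per server in the local frame
uplink_ports   = 8    # QSFP+ uplinks, 1 ASIC port each

assert ilm_ports + downlink_ports + uplink_ports == 32   # the whole ASIC, nothing shared

# Every downlink, uplink and satellite lane owns its own ASIC port, so no two
# servers contend for a path inside the module: the 1:1 non-blocking claim.
print("total port bandwidth:", 32 * asic_port_speed_gb, "Gb")  # 1280 Gb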

 

Now the total Downlink (Blade) bandwidth can exceed that of the Uplink (Q1-6) bandwidth...
But this would be no different than a 48 port 10Gb ToR Switch which may only have 6 x 40Gb uplinks.
Synergy VC modules effectively replace the ToR in a standard Rackmount design.
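For comparison, here is the uplink oversubscription ratio worked out for both cases (counting only the local 40Gb downlinks on the VC module, so this is a simplified illustration rather than a sizing rule):

def oversubscription(downlink_gb, uplink_gb):
    # Ratio of total downlink bandwidth to total uplink bandwidth
    return downlink_gb / uplink_gb

# VC 40Gb module: 12 x 40Gb local downlinks vs 8 x 40Gb QSFP+ uplinks
print(oversubscription(12 * 40, 8 * 40))   # 1.5 -> 1.5:1 toward the uplinks

# The 48-port 10Gb ToR with 6 x 40Gb uplinks mentioned above
print(oversubscription(48 * 10, 6 * 40))   # 2.0 -> 2:1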

 

With UCS Blades, each Enclosure gets 2 FEX modules which can be oversubscribed before they hit the Fabric Interconnect.
So you could have 8 blades with 20Gb NICs that all share only 4 x 10Gb connections back to the FI which is where their real switching happens.
Blade 1 and 2 in the same UCS chassis can't talk to each other without going up to the FI and then back down again.
So there you could be oversubscribed at the chassis FEX and then oversubscribed again at the FI uplinks to the rest of the network.
UCS has always had that trade-off between FEX oversubscription and the number of Enclosures per FI pair.
The more bandwidth you want each Enclosure to have, the fewer enclosures you can have downstream from the FI, the more you pay in port licensing costs on the FIs, and the sooner you will end up needing another FI pair.
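Using the example numbers above, the FEX-level oversubscription works out like this (a sketch of the arithmetic only):

blades = 8
nic_gb_per_blade = 20
fex_uplinks = 4
fex_uplink_gb = 10

demand_gb = blades * nic_gb_per_blade     # 160 Gb of potential blade traffic
supply_gb = fex_uplinks * fex_uplink_gb   # 40 Gb back to the Fabric Interconnect

print(f"{demand_gb / supply_gb:.0f}:1 oversubscribed at the FEX")  # 4:1
# And blade-to-blade traffic in the same chassis crosses that link twice (up to the FI and back down)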

 

The other interpretation of Non Blocking could refer to the midplane.
Because the Synergy midplane is a passive midplane, how much bandwidth Blade 1 is using has no impact on what the other blades can do.
Every blade has dedicated Copper Traces between it and the Interconnect slots.
There are also pre-plumbed pass-throughs in the midplane for future Direct Optical / Silicon Photonics type (non-copper) connections.


I work for HPE
