
Re: 10500 < 11900 < 12500?

 
internalnews
Frequent Advisor

10500 < 11900 < 12500?

So, are we getting a new switch positioned between the 10500 and the 12500?

 

http://www.hpapjtechnicalsymposium.com/Docs/TRACK/HPN.pdf

 

Why do we need another high-end modular switch? First we got the 12500, then the 10500 (which isn't that different), and now we have an 11900? What's the purpose of all of these switch series? I don't see any real difference between them...

6 REPLIES
manuel.bitzi
Trusted Contributor

Re: 10500 < 11900 < 12500?

Hi

 

As far as I know, the 11900 is similar to the 10500 but designed for the data center, with equivalent data center features. The 10500 is designed for the campus LAN and the 12500 for the data center core.

 

But I agree with you. I don't see the sense in it.

 

br

Manuel

H3CSE, MASE Network Infrastructure [2011], Switzerland
internalnews
Frequent Advisor

Re: 10500 < 11900 < 12500?

"make the switch family suited to fulfill the requirements and needs for enterprise campus networks, metropolitan-area networks (MANs) and data center networks."

So we have a kind of "hybrid" switch that could be used "everywhere": core + data center + edge?

 

Interesting that Comware 7 already supports OpenFlow 1.3 from the start.

What's next?
  1. Yet another modular switch, the S7600-X? http://www.h3c.com.cn/Products___Technology/Products/Switches/Data_Center_Switch/S7600-X/S7600-X/
  2. An A5500-HI edition with 24x SFP fiber ports and PoE+? http://www.h3c.com.cn/Products___Technology/Products/Switches/Catalog/S5500/S5500-HI/
  3. An A5120-HI? http://www.h3c.com.cn/Products___Technology/Products/Switches/Catalog/S5120/S5120-HI/
NetworkUser1
New Member

Re: 10500 < 11900 < 12500?

Some explanation here:

- 12500: high-performance DC core switch using a CLOS/VoQ-based architecture. Deep buffers (256 MB per 10G port). Highly scalable, with up to 1M FIB entries (LEF).

- 11900: high-performance DC aggregation/core switch using a CLOS-based architecture. Not everybody requires 1M FIB entries, so the overall CAPEX is lower than the 12900's.

- 10500: primarily focused on the campus.
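The quoted 256 MB per 10G port implies a large burst-absorption window. A quick back-of-the-envelope sketch (figures taken from the post above; real drain behaviour depends on the actual ASIC and queueing configuration):

```python
# How long can a 256 MB buffer absorb a full-rate 10 Gb/s burst?
# Simplified model: ingress at line rate, zero egress drain.

buffer_bytes = 256 * 1024 * 1024      # 256 MB of packet buffer per 10G port
line_rate_bps = 10 * 10**9            # 10 Gb/s line rate

buffer_time_s = (buffer_bytes * 8) / line_rate_bps
print(f"{buffer_time_s * 1000:.0f} ms of line-rate burst absorption")  # ~215 ms
```

Roughly 200 ms of buffering per port, which is why this class of switch is positioned for bursty DC core traffic.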

NetworkUser1
New Member

Re: 10500 < 11900 < 12500?

Some clarification:

- 12500: high-performance DC core. CLOS/VoQ, deeply buffered (256 MB per 10G port). Highly scalable, with up to 1M FIB entries.

- 11900: CLOS architecture, but since not everybody requires 1M FIB entries, it offers a different cost structure.

- 10500: campus-focused solution. It initially shipped with Comware v5, which does not provide the DC features now supported by Comware v7 (DCB, FCoE, TRILL, PBB/SPB, ...).

Keep in mind as well that software depends on hardware (the ASICs), so even with Comware v7 some features may not be available on every single type of module (ASIC dependencies).

NetworkUser1
New Member

Re: 10500 < 11900 < 12500?

It would be interesting to understand the positioning by looking at Cisco's lineup today:

- Newly introduced Nexus 9000

- Recently introduced Nexus 7700

- Previous generation of Nexus 7000

- Nexus 6000

- and of course the Catalyst 6500 (and now 6800), which is still present in many data centers.

 

That's 6 platforms, not to mention the different module types for the Nexus 7700/7000 (M1, M2, F1, F2 and now F3 ...), the fact that the Nexus 7700 is a complete forklift upgrade of the Nexus 7000, and that the Nexus 9000 has absolutely nothing in common with either the 7700 or the 7000...

Peter_Debruyne
Honored Contributor

Re: 10500 < 11900 < 12500?

Hi,

 

The 10500/11900 use a port-based CLOS, which means that a packet from card 1 port 1 to card 2 port 1 takes one path over fabric module x and will always use that same fabric module (unless a failure happens, of course).

 

So there is a port-to-fabric mapping. Assume (just as an example) that the first 4 ports of each card are mapped to fabric module 1: if you only connected servers/switches to those 4 ports, traffic would only pass through fabric module 1 and the other fabric modules would not be used (a bit like a crossbar, but with 4 of them connecting to a single line card).

 

This automatically means that some fabric modules will carry more or less load than others.
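That static pinning can be sketched as a toy model (the 4-ports-per-fabric grouping and the `fabric_for_port` mapping are illustrative assumptions, not the real ASIC layout):

```python
# Toy model of a port-based CLOS: each ingress port is statically pinned
# to one fabric module, so the fabric a packet crosses depends only on
# where it enters the switch.
from collections import Counter

NUM_FABRICS = 4
PORTS_PER_FABRIC = 4

def fabric_for_port(port: int) -> int:
    """Hypothetical static port-to-fabric pinning."""
    return (port // PORTS_PER_FABRIC) % NUM_FABRICS

# Servers connected only to the first four ports of a card:
active_ports = [0, 1, 2, 3]
load = Counter(fabric_for_port(p) for p in active_ports)
print(dict(load))   # every flow lands on the same fabric module
```

With this cabling, one fabric module carries all the traffic while the others sit idle, which is exactly the imbalance described above.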

 

The 12500/12900 use virtual output queues, which means the line card slices a frame into multiple cells, and these cells are distributed over all fabric modules. So all fabric modules are always used for inter-module packet delivery. This gives very predictable switching latency, since the load is always shared across all units.
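By contrast, the cell-based behaviour can be sketched as slicing each frame into fixed-size cells and spraying successive cells round-robin over all fabric modules (the 64-byte cell size and round-robin policy are assumptions for illustration):

```python
# Sketch of cell-based fabric spraying: a frame is sliced into fixed-size
# cells, and successive cells go round-robin over all fabric modules, so
# every fabric carries a share of every inter-module frame.
from collections import Counter

NUM_FABRICS = 4
CELL_SIZE = 64                      # bytes per cell (illustrative value)

def spray(frame_len: int) -> Counter:
    """Count how many cells of one frame each fabric module carries."""
    n_cells = -(-frame_len // CELL_SIZE)          # ceiling division
    return Counter(cell % NUM_FABRICS for cell in range(n_cells))

print(dict(spray(1500)))   # a 1500-byte frame: 24 cells, evenly spread
```

Unlike the port-pinned model, the load here is balanced across all fabric modules regardless of which ports are cabled, which is why latency stays predictable.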

 

Most people will never see this difference in real flows, but it is one of the reasons the 12xxx platform is preferred for the core.

The 10500/11900 CLOS is "easier" to develop, and that is also reflected in the pricing compared to the 12xxx components.

 

The 11900 is essentially a 10500 (IMO) which can only be equipped with the right DC modules (buffered line cards and a Comware 7 management module), while the 10500 can also take more basic line cards, as well as Comware 5 or Comware 7 management modules.

 

The H3C S7600-X mentioned above is unknown to me, but the specs look very similar to the 10500, so I guess (based on the 'carrier class' title) that it is more SP-oriented (so it may never come to us, but that is just a guess).

 

Best regards,
Peter