
StlCardsFan04
Occasional Collector

Adding 10GbE (J8436A) to ProCurve 3500yl-48g (J8693A) with CX4 yl Module (J8694A)

I need some help with using 10GbE fiber connections with two 3500yl-48G-POE+ (J8693A) switches.

Each switch has the optional 10-GbE X2 + 2p CX4 Module (J8694A) installed. According to the documentation, the 10 Gigabit-X2-SC SR Optic Transceiver (J8436A) is what I need. 

 

I'd like to install 4 of the HP X131 10G X2 SC SR transceivers (J8436A) in the spare ports on the back of the J8694A module in each switch, to connect the dual-port 10GbE (Intel X520) adapters (fitted with Intel 1G/10G dual-rate SFP+ transceivers, FTLX8571D3BCV-IT) in my PowerEdge R820 servers.

I haven't worked with fiber before and don't want to mess up. Can I just use an OM3 LC/SC fiber patch cable, or will I need to use something different? 
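
Once the optics are in, my plan is to sanity-check that each switch actually sees them before I worry about the cabling. These are just my guesses at the right ProVision CLI commands from the manuals, so please correct me if they're off:

    show interfaces brief       (the module ports should come up with a 10GbE port type)
    show tech transceivers      (should list each installed J8436A with its part number and serial)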

Thank you very much!

5 REPLIES
Vince-Whirlwind
Honored Contributor

Re: Adding 10GbE (J8436A) to ProCurve 3500yl-48g (J8693A) with CX4 yl Module (J8694A)

Seeing as the 3500s don't stack, if you intend to use link aggregation for active/active you will probably have to patch both of a server's connections to the back of the same 3500 (see the sketch below).
So it's not the best way to get redundancy.
Those ports were intended as uplink ports, not host ports.
You also have to ask yourself: you're giving the server 10Gb into the switch, but where does it go from there? Everything else on that switch is only 1Gb.

But otherwise, yes, you need an SC connector for the HP end, and OM3 is correct for the patch lead.
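
If you do land both of a server's 10G links on the same 3500, the switch side is only a couple of lines on the ProVision CLI. A minimal sketch, assuming the J8694A's X2 ports show up as A1 and A2 on your unit (check "show interfaces brief" first) and the servers sit in VLAN 10:

    trunk A1-A2 trk1 lacp       (bundle the two X2 ports into LACP trunk Trk1)
    vlan 10 tagged trk1         (carry the server VLAN on the trunk)
    show lacp                   (both links should come up as active LACP members)

The X520 side needs a matching LACP (802.3ad) team for the pair to run active/active.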

Richard Litchfield
Respected Contributor

Re: Adding 10GbE (J8436A) to ProCurve 3500yl-48g (J8693A) with CX4 yl Module (J8694A)

You will probably also need a cable with LC-SC connectors (LC at the server's SFP+ end, SC at the switch's X2 end), or an adapter pigtail if you only have an SC-SC cable.

You could do Distributed Trunking to set up a LAG across the two 3500 switches if you need to do that.
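
For reference, Distributed Trunking on the ProVision software looks roughly like the lines below, applied on each 3500. Treat this purely as a sketch - the keywords and port names are from memory and vary by software release, and DT also needs a peer-keepalive path that I've left out - so check the Distributed Trunking chapter of the Advanced Traffic Management Guide for your release. It assumes A1 is the dedicated InterSwitch-Connect between the two 3500s and A2 is the server-facing port:

    switch-interconnect A1      (dedicated ISC link between the two DT peer switches)
    trunk A2 trk1 dt-lacp       (server-facing port joins the distributed LACP trunk)
    vlan 10 tagged A1,trk1      (carry the server VLAN on both the ISC and the DT trunk)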

StlCardsFan04
Occasional Collector

Re: Adding 10GbE (J8436A) to ProCurve 3500yl-48g (J8693A) with CX4 yl Module (J8694A)

Thank you very much for the replies and suggestions. I already knew this going in and was hoping to gather some metrics to back me up, but this is not a workable solution for me. What I really need is 2 dedicated 10G switches to increase throughput in my 3-node Windows failover cluster. I use 3 identical Dell PE R820 servers to host the Microsoft Hyper-V VMs. Each server (Node-1, Node-2, Node-3) has an Intel X520 DP 10Gb DA/SFP+ + I350 DP 1Gb Ethernet NDC and one Intel I350 QP 1Gb NIC.

In order for failover to work properly, I need 2 switches to support 10G connections from each of the R820's. If I'm going to have 2 10G switches, I might as well put 10G cards in my other servers to take advantage of the network throughput. 

I need two 12- or 24-port 10G switches to make this work, and I'd like to use SFP+ fiber. Can anyone offer suggestions for refurbished/used 10G switches that would be a good fit for me?

Thanks again!

parnassus
Honored Contributor
Solution

Re: Adding 10GbE (J8436A) to ProCurve 3500yl-48g (J8693A) with CX4 yl Module (J8694A)

It probably wouldn't be a bad idea to look at something like the HPE FlexFabric 5820X 24XG SFP+ Switch (HPE SKU: JC102B): it offers 24 SFP+ 10GbE ports plus 4 auto-negotiating 10/100/1000 RJ-45 ports, so at least in terms of the number and type of interfaces you're after, it should fit the bill. The latest software Release Notes are here.

You should probably be able to find the previous revision (the JC102A) refurbished at relatively "low" prices, since the JC102B is still quite expensive new.

 

 


StlCardsFan04
Occasional Collector

Re: Adding 10GbE (J8436A) to ProCurve 3500yl-48g (J8693A) with CX4 yl Module (J8694A)

I agree with you. It's time to bite the bullet and upgrade to something more useful. It will definitely be worth it. I greatly appreciate the advice. Thanks everyone!