Switches, Hubs, and Modems

4104GL fault-tolerant network

SOLVED
mark johnson_15
Occasional Advisor

4104GL fault-tolerant network

Hi, I was wondering if someone could give me some pointers on my network (pros/cons).

See the attached .jpg for what I'm trying to explain.

I have 1x 4104GL with 2x 100/1000 modules (J4863A) installed (also with redundant power).
The modules are trunked:
module 1 type J4863A
module 2 type J4863A
trunk A1,B1 Trk1 Trunk
trunk A2,B2 Trk2 Trunk
trunk A3,B3 Trk3 Trunk
trunk A4,B4 Trk4 Trunk
trunk A5,B5 Trk5 Trunk

Each trunk then connects to ports 49 and 50 of a 2650, which are trunked with the same settings.
RSTP is enabled on all switches.

Each server has dual NICs configured with Switch Fault Tolerance (SFT), which connect to any 2 of the 2650's ports.

Is there anything I've missed?
Would you do this differently?

thanks for your help
--mark

------------
running-config of the 4104
ticker-0# sh run

Running configuration:

; J4887A Configuration Editor; Created on release #G.07.26

hostname "ticker-0"
snmp-server contact "Mark Johnson"
snmp-server location "Tenfore Ticker"
time daylight-time-rule Western-Europe
cdp run
module 1 type J4863A
module 2 type J4863A
interface A1
speed-duplex auto-1000
no lacp
exit
interface A2
speed-duplex auto-1000
no lacp
exit
interface A3
speed-duplex auto-1000
no lacp
exit
interface A4
speed-duplex auto-1000
no lacp
exit
interface A5
speed-duplex auto-1000
no lacp
exit
interface B1
speed-duplex auto-1000
no lacp
exit
interface B2
speed-duplex auto-1000
no lacp
exit
interface B3
speed-duplex auto-1000
no lacp
exit
interface B4
speed-duplex auto-1000
no lacp
exit
interface B5
speed-duplex auto-1000
no lacp
exit
trunk A1,B1 Trk1 Trunk
trunk A2,B2 Trk2 Trunk
trunk A3,B3 Trk3 Trunk
trunk A4,B4 Trk4 Trunk
trunk A5,B5 Trk5 Trunk
snmp-server community "public" Unrestricted
snmp-server host 10.10.10.20 "public" Not-INFO
snmp-server host 10.10.10.120 "public"
vlan 1
name "DEFAULT_VLAN"
untagged A6,B6,Trk1-Trk5
ip address 10.10.10.1 255.255.255.0
exit
fault-finder bad-driver sensitivity high
fault-finder bad-transceiver sensitivity high
fault-finder bad-cable sensitivity high
fault-finder too-long-cable sensitivity high
fault-finder over-bandwidth sensitivity high
fault-finder broadcast-storm sensitivity high
fault-finder loss-of-link sensitivity high
stack commander "tenfore+ticker"
stack auto-grab
stack member 1 mac-address 00306edfa580
stack member 2 mac-address 00306ee00600
stack member 3 mac-address 00306ee09380
stack member 4 mac-address 00306ee15b80
stack member 5 mac-address 00306eaef0c0
spanning-tree
spanning-tree Trk1-Trk5 priority 4

-----------------------
running-config of a 2650
HP ProCurve Switch 2650b# sh run

Running configuration:

; J4899A Configuration Editor; Created on release #H.07.32

hostname "HP ProCurve Switch 2650b"
snmp-server contact ""
snmp-server location "Ticker"
time daylight-time-rule Western-Europe
cdp run
interface 49
speed-duplex auto-1000
no lacp
exit
interface 50
speed-duplex auto-1000
no lacp
exit
trunk 49-50 Trk1 Trunk
snmp-server community "public" Unrestricted
snmp-server host 10.10.10.20 "public" Not-INFO
vlan 1
name "DEFAULT_VLAN"
untagged 1-48,Trk1
ip address 10.10.10.3 255.255.255.0
exit
stack join 000a57cd9300
spanning-tree
spanning-tree Trk1 priority 4

13 REPLIES
Stuart Teo
Trusted Contributor

Re: 4104GL fault-tolerant network

Question: I don't quite understand your running-config. Could you shed some light as to why you are using stacking?
If a problem can be fixed, there's nothing to worry. If a problem can't be fixed, worrying ain't gonna help. Bottom line: don't worry.
mark johnson_15
Occasional Advisor

Re: 4104GL fault-tolerant network

Thanks for your reply.

Stacking was on by default, and so far it hasn't caused me any problems.
What's the reason for your question? Have you found problems with stacking? Does it hinder this config?

Do I need spanning-tree enabled when using trunking?
Stuart Teo
Trusted Contributor
Solution

Re: 4104GL fault-tolerant network

First, I have to say that we have very few 4100s on our site; we use mostly 4000s and 5300s. I'm also no expert on stacking, since we don't use it here.

The reason I'm asking is that the configuration above seems to suggest the 4100 has been set up as the stack COMMANDER, and the 2650s seem to be participating in the stack. Did you do that deliberately?
mark johnson_15
Occasional Advisor

Re: 4104GL fault-tolerant network

Hi,
Yep, the 4100 is the stack commander and the 4 2650s are slaves; at the time it seemed the logical set-up to use (see the diagram in the 1st post).
Also, each switch has its own IP address, which, looking at it now, sort of defeats the object of stacking.

Do you think I should do it another way?

Out of interest, how do you monitor/control your switches?
A straight telnet program, or Toptools?

Thanks for your reply, by the way.
Stuart Teo
Trusted Contributor

Re: 4104GL fault-tolerant network

I don't know; I've never used stacking. On the older 4000Ms the stacking module was too expensive to make any sense.

I prefer to be able to reach each and every switch individually. It just doesn't make sense to me to be dependent on a single commander after spending all the $$ stacking it.

I run an SNMP trap receiver to receive traps from the switches. Received traps get written into a database, which we can then query for tell-tale signs of failure. In my situation, we also have a script that emails me a daily summary.
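The trap-receiver-plus-database setup Stuart describes can be sketched in a few lines. This is purely an illustrative minimal version, not his actual script: it listens on the standard trap port (UDP/162), stores each datagram raw in SQLite, and does not decode the SNMP ASN.1 payload. All names here (`open_db`, `store_trap`, `listen`) are made up for the example.

```python
import socket
import sqlite3

def open_db(path):
    """Open the trap database, creating the table on first use."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS traps ("
               "received TEXT DEFAULT CURRENT_TIMESTAMP, "
               "source   TEXT, "
               "payload  BLOB)")
    return db

def store_trap(db, source, payload):
    """Record one raw trap datagram with its sender address."""
    db.execute("INSERT INTO traps (source, payload) VALUES (?, ?)",
               (source, payload))
    db.commit()

def listen(db, host="0.0.0.0", port=162):
    """Receive SNMP trap datagrams and log them undecoded."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))  # UDP/162 usually needs root privileges
    while True:
        data, (addr, _) = sock.recvfrom(4096)
        store_trap(db, addr, data)
```

A daily summary like Stuart's could then be a simple `SELECT source, COUNT(*) FROM traps GROUP BY source` piped into an email.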

Regarding your diagram, I see nothing to disagree with. If you're big on redundancy then HP's mesh technology might be something to consider. The big gotcha there is that each mesh domain can only have up to 12 switches, so there's a scalability question. Of course, you can "bridge" 2 mesh domains with 4 Gbps trunks. It's really up to you.
Stuart Teo
Trusted Contributor

Re: 4104GL fault-tolerant network

Regarding Toptools, HP will be releasing ProCurve Manager, which replaces Toptools.

I personally like Toptools, but it does use up a server, and it's not suitable if you manage a few hundred or a few thousand switches.
mark johnson_15
Occasional Advisor

Re: 4104GL fault-tolerant network

Thanks for the input, I really do appreciate it.
I've got the switches and have configured them, but they're not in a production environment as of yet, and I wanted to get other people's professional opinion of my setup. So thank you again.

To my knowledge the 4100 series doesn't support HP meshing, which maybe I should have looked into a bit further before buying it; the 5300 series does.

I've got Toptools installed at the moment and it seems like a nice bit of software, plus the size of my network is very small (as you can see from the diagram). Do you know when HP are going to release ProCurve Manager?

cheers
--mark
Stuart Teo
Trusted Contributor

Re: 4104GL fault-tolerant network

You're most welcome!

According to the information posted at http://www.hp.com/rnd/ ProCurve Manager is due to be downloadable on 11/10. I'm anxious to download it myself.

My apologies on the 4100: it does not support mesh, and I forgot all about that! Still, RSTP with trunks is a great way to provide high availability.
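For anyone reading along: the configs in this thread only show the bare `spanning-tree` command, and the default protocol on this software family is 802.1D STP. Switching to RSTP is, as far as I recall, a one-line change; a sketch only, assuming the command syntax of the G.07.x-era releases (check the `spanning-tree` help on your own release, and note some releases require a reboot for the protocol-version change to take effect):

```
ticker-0(config)# spanning-tree
ticker-0(config)# spanning-tree protocol-version rstp
```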

One thing about the switch log: it holds 1000 entries, and it loses all entries when the switch loses power. That's why we have a trap receiver set up to receive those logs. Hope this info helps!
Ardon
Trusted Contributor

Re: 4104GL fault-tolerant network

Why not use the Syslog Server functionality instead of the SNMP Traps?

Thanks, Ardon
ProCurve Networking Engineer
Stuart Teo
Trusted Contributor

Re: 4104GL fault-tolerant network

Hi,

I've tested ProCurve Manager and it's not too bad. I'm not too sure about this, so don't quote me: I think the 4100GL series is the only one with syslog facilities; all other models use SNMP traps.

ProCurve Manager acts as a trap receiver as well, so it will receive logs from the switches (among the many things it does).
mark johnson_15
Occasional Advisor

Re: 4104GL fault-tolerant network

Thanks chaps, as always it's most appreciated!
Yeah, the 4100 does have syslog server functionality, but my 2650s don't, and I can't see the advantage of using syslog over SNMP.

I've tried ProCurve Manager (thanks for the link, by the way) and to be honest I don't like it as much as Toptools; I'm not sure why.

cheers
--Mark
Ardon
Trusted Contributor

Re: 4104GL fault-tolerant network

Hi,

The 2650 running the latest code DOES support Syslog.

Thanks, Ardon
ProCurve Networking Engineer
mark johnson_15
Occasional Advisor

Re: 4104GL fault-tolerant network

Yep, you're right! Thanks!

Before, I was just looking at the documentation (which didn't mention syslog), but I've just checked on the switches and enabled syslogging.
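For reference, pointing a ProCurve switch at a syslog server is a single config command. A sketch only, assuming the trap receiver host at 10.10.10.20 from earlier in the thread doubles as the syslog host (substitute your own server's address):

```
HP ProCurve Switch 2650b(config)# logging 10.10.10.20
```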

Does syslog give more detail than SNMP?
I ask because when I enabled syslog on the switches I started getting error messages from 1 port, where the SNMP trap was giving me nothing. Strange?

thanks for the help!
cheers
--mark