Network Simulator

Re: Simulation MST on AOS-CX

 
SOLVED
moelharrak
Occasional Advisor

Simulation MST on AOS-CX

Hi,

I'm trying to configure MST on GNS3 using the AOS-CX software (see the attached topology). Everything seems to be working fine: Core-1 is the root bridge for VLANs 1-100 and Core-2 is the root for VLANs 101-200. However, when I try to ping between switches, the ping fails most of the time. I don't know whether it is a configuration/design issue or a software issue in GNS3.

Any Help?

Selection_494.png

Core-1 Configuration

!
!
vlan 1-200
spanning-tree
spanning-tree priority 1
spanning-tree config-name TEST
spanning-tree config-revision 1
spanning-tree instance 1 vlan 1-100
spanning-tree instance 1 priority 1
spanning-tree instance 2 vlan 101-200
spanning-tree instance 2 priority 2

interface 1/1/1-1/1/3
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 1,100,200

interface vlan1
ip address 192.168.1.1/24
no shutdown
exit

Core-2 Configuration

!
!
vlan 1-200
spanning-tree
spanning-tree priority 2
spanning-tree config-name TEST
spanning-tree config-revision 1
spanning-tree instance 1 vlan 1-100
spanning-tree instance 1 priority 2
spanning-tree instance 2 vlan 101-200
spanning-tree instance 2 priority 1

interface 1/1/1-1/1/3
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 1,100,200

interface vlan1
ip address 192.168.1.2/24
no shutdown
exit

AS-1 Configuration

!
vlan 1,100,200
spanning-tree
spanning-tree config-name TEST
spanning-tree config-revision 1
spanning-tree instance 1 vlan 1-100
spanning-tree instance 2 vlan 101-200

interface 1/1/1-1/1/2
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 1,100,200

interface vlan1
ip address 192.168.1.101/24

AS-2 Configuration

!
vlan 1,100,200
spanning-tree
spanning-tree config-name TEST
spanning-tree config-revision 1
spanning-tree instance 1 vlan 1-100
spanning-tree instance 2 vlan 101-200

interface 1/1/1-1/1/2
no shutdown
no routing
vlan trunk native 1
vlan trunk allowed 1,100,200

interface vlan1
ip address 192.168.1.102/24

6 REPLIES
Ivan_B
HPE Pro

Re: Simulation MST on AOS-CX

Hi @moelharrak !

Your configuration looks perfectly valid: all switches should be in the same region, and Core-1 should be the root bridge for instance 1, where VLAN 1 is located. From the instance 1 perspective, ports 1/1/1 - 1/1/3 on Core-1 must be Designated; on Core-2, port 1/1/1 should be Root and ports 1/1/2 - 1/1/3 Designated; on AS-1, 1/1/1 should be Root and 1/1/2 Alternate (Blocked); on AS-2, 1/1/2 should be Root and 1/1/1 Alternate (Blocked). If that is the actual state, then I'd blame high CPU utilization on the host PC. To verify that, you can shut down all redundant links and try to run a couple of pings again; if CPU utilization is the root cause, the pings will drop in this case too. Another possible issue caused by CPU utilization: if STP BPDUs are dropped, you get constant recalculations of the STP tree, which only makes the situation worse, and of course you will see ping drops.
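To check the region parameters and per-instance port roles, you can use the MST show commands (the command names below are from AOS-CX 10.x; verify them with `show spanning-tree ?` on your image version):

```
! Verify region parameters: config name, revision, VLAN-to-instance mapping
show spanning-tree mst-config

! Per-instance view: root bridge, port roles (Root/Designated/Alternate) and states
show spanning-tree mst 1
show spanning-tree mst 2

! Detailed per-port MST information
show spanning-tree mst detail
```

All four switches must report identical region parameters (name TEST, revision 1, same VLAN mapping), otherwise they will not form a single MST region.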

Another take: memory utilization. If the cumulative memory consumed by the 4 QEMU instances is more than the RAM dedicated to the GNS3 VM (in case you use it), or more than the physical RAM available in your host PC (if you run the QEMU instances natively, on bare metal, without the GNS3 VM), then you may get very intensive swapping, which contributes to the overall slowness of the system and therefore to ping drops.

 

I am an HPE employee

Accept or Kudo

moelharrak
Occasional Advisor

Re: Simulation MST on AOS-CX

Thank you for your answer. Yes, all ports are exactly as you described, which means MST is working correctly. I did disable the redundant links, but the issue is still the same.

I'm not using the GNS3 VM, and I don't think it's a memory issue: I'm running GNS3 on a server with 32 GB of RAM and an Intel Xeon E5-2630 v2 @ 2.60 GHz x12 processor.

The QEMU instances are configured with 4 GB of RAM and 2 CPUs each.

Output of the top command:

Selection_495.png

 

Ivan_B
HPE Pro

Re: Simulation MST on AOS-CX

I agree - now that I know what system those images are running on, we can exclude memory depletion as well as high CPU utilization.

Let me ask you about the issue with pings - what does the packet loss look like? If you run a continuous ping, do you see 1-2 lost packets here and there, or does the ping work fine for a certain amount of time and then all packets are lost for a period of 5-10 seconds? If you send 100 echo requests from AS-1 to Core-1, how many of them are lost?
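For example, you can send a fixed number of echo requests from the AOS-CX CLI (the `repetitions` parameter is from AOS-CX 10.x; check `ping ?` on your image if the syntax differs):

```
AS-1# ping 192.168.1.1 repetitions 100
```

The summary line at the end will show the exact loss percentage and the min/avg/max round-trip times.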

BTW, I'm not sure which Aruba image version you use, but you can try 10.04.3000 or 10.05.0020, available here: https://afp.arubanetworks.com/afp/index.php/AOS-CX_OVA (link for partners only), and check if it makes any difference.

 


moelharrak
Occasional Advisor

Re: Simulation MST on AOS-CX

I'm using version Virtual.10.04.1000.

Ping From Core-1 to Core-2

--- 192.168.1.2 ping statistics ---
100 packets transmitted, 60 received, 40% packet loss, time 104917ms
rtt min/avg/max/mdev = 3.363/2071.910/4380.645/1398.695 ms
Core-1#

Ping From Core-2 to Core-1

--- 192.168.1.1 ping statistics ---
100 packets transmitted, 59 received, 41% packet loss, time 104973ms
rtt min/avg/max/mdev = 2.557/2291.489/3702.095/1254.625 ms

Pings from AS-1 and AS-2 to Core-1 and Core-2 all fail now.

 

Ivan_B
HPE Pro
Solution

Re: Simulation MST on AOS-CX

It seems that GNS3 has issues with looped Layer 2 topologies. As soon as you create one, traffic starts to loop. I was able to reproduce it both with the Aruba CX 10.04.3000 image and with IOSv L2 (Cisco IOS Software, vios_l2 Software (vios_l2-ADVENTERPRISEK9-M), Version 15.2). For the illustration I will use the IOSv L2 images, just to prove the issue is not ArubaOS-CX specific.

As soon as you create a looped topology, even a classic triangle:

 
gns3-topo.png

 

Despite the fact that STP blocked the appropriate port:

SW1#sh spann vl 1

VLAN0001
  Spanning tree enabled protocol ieee
  Root ID    Priority    32769
             Address     0c28.9168.5a00
             This bridge is the root
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     0c28.9168.5a00
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300 sec

Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- --------------------------------
Gi0/0               Desg FWD 4         128.1    P2p
Gi0/1               Desg FWD 4         128.2    P2p
SW2#sh spanning-tree vl 1

VLAN0001
  Spanning tree enabled protocol ieee
  Root ID    Priority    32769
             Address     0c28.9168.5a00
             Cost        4
             Port        1 (GigabitEthernet0/0)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     0c28.91db.f400
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300 sec

Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- --------------------------------
Gi0/0               Root FWD 4         128.1    P2p
Gi0/1               Altn BLK 4         128.2    P2p
SW3#show spanning-tree vl 1

VLAN0001
  Spanning tree enabled protocol ieee
  Root ID    Priority    32769
             Address     0c28.9168.5a00
             Cost        4
             Port        1 (GigabitEthernet0/0)
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec

  Bridge ID  Priority    32769  (priority 32768 sys-id-ext 1)
             Address     0c28.917e.d000
             Hello Time   2 sec  Max Age 20 sec  Forward Delay 15 sec
             Aging Time  300 sec

Interface           Role Sts Cost      Prio.Nbr Type
------------------- ---- --- --------- -------- --------------------------------
Gi0/0               Root FWD 4         128.1    P2p
Gi0/1               Desg FWD 4         128.2    P2p

we still see very rapidly increasing interface counters (the time interval between the two "show" commands for each interface was 5 seconds):

SW1#show interfaces gig0/0 | i packets
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 54000 bits/sec, 97 packets/sec
     276 packets input, 26083 bytes, 0 no buffer
     70034 packets output, 4767033 bytes, 0 underruns
SW1#show interfaces gig0/0 | i packets
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 55000 bits/sec, 98 packets/sec
     276 packets input, 26083 bytes, 0 no buffer
     70577 packets output, 4803933 bytes, 0 underruns
SW1#
SW1#
SW1#show interfaces gig0/1 | i packets
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 53000 bits/sec, 100 packets/sec
     10588 packets input, 1191462 bytes, 0 no buffer
     84395 packets output, 6208173 bytes, 0 underruns
SW1#show interfaces gig0/1 | i packets
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 54000 bits/sec, 101 packets/sec
     10589 packets input, 1191510 bytes, 0 no buffer
     84888 packets output, 6241673 bytes, 0 underruns
SW1#

SW2#show interfaces gig0/0 | i packets
  5 minute input rate 57000 bits/sec, 105 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     79741 packets input, 5422453 bytes, 0 no buffer
     428 packets output, 36636 bytes, 0 underruns
SW2#show interfaces gig0/0 | i packets
  5 minute input rate 56000 bits/sec, 104 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     80224 packets input, 5455257 bytes, 0 no buffer
     428 packets output, 36636 bytes, 0 underruns
SW2#
SW2#
SW2#show interfaces gig0/1 | i packets
  5 minute input rate 55000 bits/sec, 103 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     82656 packets input, 5620170 bytes, 0 no buffer
     283 packets output, 29618 bytes, 0 underruns
SW2#show interfaces gig0/1 | i packets
  5 minute input rate 56000 bits/sec, 104 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     83111 packets input, 5651070 bytes, 0 no buffer
     283 packets output, 29618 bytes, 0 underruns
SW2#
SW3#show interfaces gig0/0 | i packets
  5 minute input rate 55000 bits/sec, 102 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     95900 packets input, 6985621 bytes, 0 no buffer
     10594 packets output, 1193295 bytes, 0 underruns

SW3#show interfaces gig0/0 | i packets
  5 minute input rate 54000 bits/sec, 101 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     96515 packets input, 7027381 bytes, 0 no buffer
     10595 packets output, 1193355 bytes, 0 underruns
SW3#
SW3#
SW3#show interfaces gig0/1 | i packets
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 54000 bits/sec, 101 packets/sec
     223 packets input, 25271 bytes, 0 no buffer
     87665 packets output, 5966042 bytes, 0 underruns
SW3#show interfaces gig0/1 | i packets
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 55000 bits/sec, 102 packets/sec
     224 packets input, 25319 bytes, 0 no buffer
     88167 packets output, 6000154 bytes, 0 underruns
SW3#

It's obvious that such a large number of packets in an idle network with 3 hosts and 3 IP addresses can't be explained by any cause other than a loop.
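The counter deltas make this easy to quantify. A minimal Python sketch of the arithmetic, using the counter values copied from the SW1 gig0/0 output above:

```python
def pps(first_count, second_count, interval_s):
    """Packets per second between two cumulative interface counter readings."""
    return (second_count - first_count) / interval_s

# SW1 gig0/0 "packets output" counter, sampled 5 seconds apart:
rate = pps(70034, 70577, 5)
print(round(rate, 1))  # ~108.6 packets/sec on a link that should be idle
```

Roughly 109 packets per second flooding out of an interface that carries no user traffic is the signature of frames circulating in a loop.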

And while the IOSv L2 image doesn't have issues with pings despite the loop, the Aruba CX image definitely starts to drop the excessive traffic at the virtual CPU level, and therefore we see huge latency and drops. BTW, to "resolve" the issue you need to delete the redundant links between the GNS3 nodes, not just "shutdown" them on the virtual devices. In my tests with Aruba, only deleting the redundant links helped stabilize the situation.


moelharrak
Occasional Advisor

Re: Simulation MST on AOS-CX

You are right. I built a lab without redundant links and it seems to work fine.

It's sad that it doesn't work: soon I'm going to install a big network with full redundancy (VRRP configured on the core switches for the VLANs, redundant ISP links, ...), and I was trying to build that lab virtually first to test every possible issue.

I hope GNS3 will fix this issue in the near future.