
High CPU load on modules

 
waketech
New Member


Hey there,

 

Has anybody experienced high CPU load on 24p modules in a 5400?  The specific modules we're noticing it on are the J8706A and J8702A.

 

A bit more info:

 

We have a 5412 as our core router.  It's populated with two 24p SFP modules and five 24p Gig-T modules.  If we run 'sh cpu', everything looks fine:

 

3 percent busy, from 300 sec ago
1 sec ave: 1 percent busy
5 sec ave: 1 percent busy
1 min ave: 1 percent busy


Task usage for last 2 sec
 % CPU | Description
-------+--------------------------
  98.0 | Idle
   0.7 | Sessions & I/O
   0.7 | SNMP
   0.7 | IP Host/Routing

 

However, if we run 'sh cpu slot a', it shows pretty high CPU load:

 

slot a:
-------
67 percent busy, from 300 sec ago
1 sec ave: 90 percent busy
5 sec ave: 88 percent busy
1 min ave: 89 percent busy
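
In case it helps anyone reproduce this, here's a minimal sketch of how the two figures could be pulled over SSH and compared (assuming netmiko's hp_procurve driver works against this ProVision firmware; the hostname and credentials are placeholders):

# Minimal sketch: pull chassis and per-slot CPU over SSH and parse out the
# "percent busy" figure. Host, credentials and the slot letter are
# placeholders; assumes netmiko's hp_procurve driver talks to this firmware.
import re
from netmiko import ConnectHandler

DEVICE = {
    "device_type": "hp_procurve",      # ProVision/ProCurve CLI driver
    "host": "core-5412.example.net",   # placeholder
    "username": "manager",             # placeholder
    "password": "changeme",            # placeholder
}

def percent_busy(output):
    """Return the first 'NN percent busy' value found in 'show cpu' output."""
    match = re.search(r"(\d+)\s+percent busy", output)
    return int(match.group(1)) if match else -1

with ConnectHandler(**DEVICE) as conn:
    chassis = percent_busy(conn.send_command("show cpu"))
    slot_a = percent_busy(conn.send_command("show cpu slot a"))

print(f"management CPU: {chassis}%   slot A CPU: {slot_a}%")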

 

That was run at 6:30 AM, when we have very little traffic on the network. Here are the port utilizations:

 

Status and Counters - Port Utilization

                                 Rx                           Tx           
 Port      Mode     | --------------------------- | ---------------------------
                    | Kbits/sec   Pkts/sec  Util  | Kbits/sec  Pkts/sec   Util
 -------- --------- + ---------- ---------- ----- + ---------- ---------- -----
 A1       .         | 0          0          0     | 0          0          0    
 A2       1000FDx   | 5664       148        00.56 | 5616       226        00.56
 A3       1000FDx   | 5104       41         00.51 | 5176       84         00.51
 A4       .         | 0          0          0     | 0          0          0    
 A5       1000FDx   | 26872      3524       02.68 | 32640      3410       03.26
 A6       .         | 0          0          0     | 0          0          0    
 A7       1000FDx   | 0          0          0     | 0          0          0    
 A8       1000FDx   | 0          0          0     | 0          0          0    
 A9       1000FDx   | 5000       33         00.50 | 5120       77         00.51
 A10      1000FDx   | 5232       51         00.52 | 5264       136        00.52
 A11      1000FDx   | 4968       0          00.49 | 5048       47         00.50
 A12      1000FDx   | 5192       44         00.51 | 5216       134        00.52
 A13      1000FDx   | 5304       73         00.53 | 5264       159        00.52
 A14      1000FDx   | 5160       33         00.51 | 5240       120        00.52
 A15      1000FDx   | 5152       24         00.51 | 5192       152        00.51
 A16      1000FDx   | 5520       181        00.55 | 6512       294        00.65
 A17      1000FDx   | 5328       154        00.53 | 5224       134        00.52
 A18      1000FDx   | 5184       43         00.51 | 5176       130        00.51
 A19      1000FDx   | 5168       57         00.51 | 5264       139        00.52
 A20      1000FDx   | 5160       52         00.51 | 5584       155        00.55
 A21      1000FDx   | 5256       95         00.52 | 5312       176        00.53
 A22      1000FDx   | 5256       64         00.52 | 5792       196        00.57
 A23      1000FDx   | 5424       97         00.54 | 5208       181        00.52
 A24      .         | 0          0          0     | 0          0          0    

 

We see similar numbers throughout the day.  The other modules aren't as high (56, 51, 37, 10, 36 percent at the 5-minute average), but they do climb a bit as the day goes on.  We usually see 3 or 4 modules consistently over 70% (5-minute average), with the 2 SFP modules above 90%.
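
To keep an eye on this over the day, a simple poller along these lines would log each slot and flag the hot ones (the slot list, threshold and interval are my assumptions; it reuses DEVICE and percent_busy() from the sketch above):

# Sketch of a 5-minute poller that flags slots running hot. Reuses DEVICE
# and percent_busy() from the previous sketch; the slot list, threshold
# and interval are assumptions.
import time
from netmiko import ConnectHandler

SLOTS = ["a", "b", "c", "d", "e", "f", "g"]   # assumed populated slots
THRESHOLD = 70                                # flag anything at or over 70% busy
INTERVAL = 300                                # seconds between polls

while True:
    with ConnectHandler(**DEVICE) as conn:
        for slot in SLOTS:
            busy = percent_busy(conn.send_command(f"show cpu slot {slot}"))
            marker = "  <-- hot" if busy >= THRESHOLD else ""
            print(f"{time.strftime('%H:%M:%S')} slot {slot.upper()}: {busy}%{marker}")
    time.sleep(INTERVAL)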

 

I've tried running 'showtaskusage' from inside testmode, but it didn't show anything useful to me.  Here's a snippet from one slot:

 

 

slot e:
-------
                |  Recent |  %  |  Total  | Time Since |   Times  |   Max
 Task Name      |   Time  | CPU |  Time   |  Last Ran  |    Ran   |  Time
----------------+---------+-----+---------+------------|----------|-------
     mIpAdMUpCt |   53 ms |   2 |   114 ms|      11 ms |       44 |  12 ms
       eDevIdle |  460 ms |  25 |   901 ms|       2 ms |     4843 | 795 us
       eDrvPoll |   79 ms |   4 |   465 ms|      44 ms |       50 |  50 ms
     tDevPollRx |  875 ms |  47 |     2 s |       1 ms |     2460 |  10 ms
     tDevPollTx |  185 ms |  10 |   360 ms|       3 ms |     2417 | 800 us
     mPmSlvCtrl |   53 ms |   2 |   113 ms|     320 ms |        9 |  15 ms
     mPoeMgrCtl |   65 ms |   3 |   132 ms|     740 ms |      379 |   1 ms

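If it's useful, a quick parser for that table makes it easier to sort the busiest tasks per slot and compare between polls (the column layout is taken from the output above; slot_e_output is just a placeholder for the pasted text):

# Sketch of a parser for the testmode 'showtaskusage' table: split the
# pipe-separated rows and sort tasks by the %CPU column.
def parse_task_usage(raw):
    """Return (task_name, cpu_percent) pairs, busiest first."""
    tasks = []
    for line in raw.splitlines():
        cols = [c.strip() for c in line.split("|")]
        # Data rows have 7 columns and a numeric value in the %CPU column.
        if len(cols) == 7 and cols[2].isdigit():
            tasks.append((cols[0], int(cols[2])))
    return sorted(tasks, key=lambda t: t[1], reverse=True)

# With the slot E output above stored in slot_e_output (placeholder):
# parse_task_usage(slot_e_output)[:3]
# -> [('tDevPollRx', 47), ('eDevIdle', 25), ('tDevPollTx', 10)]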
 

Coming back to the original question, has anybody else noticed excessively high CPU usage on individual modules?