Operating System - HP-UX

Monitoring logfiles for errors hourly

 
SOLVED
Gary Friou
New Member

Monitoring logfiles for errors hourly

Is there a way to monitor a log for errors, but only for the previous hour? I am running a simple grep statement to search the log for "fatal", but I want it to look at only the previous hour's worth of data. So if I were to run the command at 4:00, I would want it to search for errors created only from 3:00 to 3:59. Thanks in advance, Gary
7 REPLIES 7
Patrick Wallek
Honored Contributor

Re: Monitoring logfiles for errors hourly

You'd have to build some "smarts" into your script so that it also checks the time and date in the log file.
Dennis Handly
Acclaimed Contributor

Re: Monitoring logfiles for errors hourly

You could use grep -n and then delete all lines that were there in the previous output, based on the line number.
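Dennis's line-number idea might be sketched roughly like this (the paths and the demo log contents are invented for illustration): remember how long the log was at the last run, then keep only matches whose grep -n line number is past that point.

```shell
#!/bin/sh
# Sketch only: /tmp paths and the sample lines are made up for the demo.
LOG=/tmp/sample.log
STATE=/tmp/fatal_check.state

# --- demo setup: a log with an already-seen fatal line and a new one ---
printf '16:01:01|ok\n16:05:00|Process fatal error\n' > "$LOG"
echo 2 > "$STATE"                          # pretend the last run saw 2 lines
printf '17:03:02|Process fatal error\n' >> "$LOG"

LAST=$(cat "$STATE" 2>/dev/null || echo 0)

# grep -n prefixes each match with its line number; awk keeps only
# matches on lines after $LAST, i.e. new since the previous run.
grep -n -i fatal "$LOG" | awk -F: -v last="$LAST" '$1 > last'

# Record the current log length for the next run.
wc -l < "$LOG" | tr -d ' ' > "$STATE"
```

Run from cron every hour, only the matches added since the last run are printed.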
Andrew Young_2
Honored Contributor

Re: Monitoring logfiles for errors hourly

Hi.

Why not make a copy of the log file every hour, then diff the current log against the copy and search the diff output? It uses extra space but solves the time problem.
Example:
diff sample.log sample1.log | grep '^<' | cut -c 3- > search.log
cp -p sample.log sample1.log

The data will be in search.log

Another solution would be to run x=`wc -l < sample.log` on the log file every hour and, when you scan the log, skip the first x lines. That's a little more work, though, and if the log file is reset or rotated you could end up with a problem.
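A minimal sketch of this wc -l variant (file names are invented for the demo); it also starts over when the log shrinks, which covers the reset/rotation problem mentioned above:

```shell
#!/bin/sh
# Sketch only: /tmp paths and demo contents are made up.
LOG=/tmp/andrew_demo.log
COUNT=/tmp/andrew_demo.count

# --- demo setup: one already-seen line, one new fatal line ---
printf 'old line\n' > "$LOG"
echo 1 > "$COUNT"
printf 'new Process fatal error\n' >> "$LOG"

SEEN=$(cat "$COUNT" 2>/dev/null || echo 0)
NOW=$(wc -l < "$LOG" | tr -d ' ')

# If the file shrank, it was rotated or reset: rescan from the top.
[ "$NOW" -lt "$SEEN" ] && SEEN=0

# tail -n +K starts printing at line K, so skip the first $SEEN lines.
tail -n +"$((SEEN + 1))" "$LOG" | grep -i fatal

echo "$NOW" > "$COUNT"
```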

Regards

Andrew Y
Si hoc legere scis, nimis eruditionis habes
Arturo Galbiati
Esteemed Contributor

Re: Monitoring logfiles for errors hourly

Hi Gary,
please post an example of your log file so I'll be able to help you write a script based on the line format you have.
Rgds,
Art
Gary Friou
New Member

Re: Monitoring logfiles for errors hourly

I kind of like Andrew's suggestion above, but maybe with a little bit different theory.

Here's a line of the log I am looking for errors in:

17:03:02|SELA=00:03:00|SUBM=dms@dbmsn01|SNOD=NDM.SPARE|CCOD=16|RECI=PERR|RECC=CAPR|LNOD=P|PNOD=dbmsn01|MSST=Process fatal error

So to go along with Andrew's suggestion, is there a way to grep for every line whose timestamp falls within the previous 60 minutes, redirect that to a temporary log, and then grep that log for the 'fatal' error condition?

Thanks again all for your assistance...
Patrick Wallek
Honored Contributor
Solution

Re: Monitoring logfiles for errors hourly

To figure the hour you would have to do something like:

HOUR=$(date +%H)
HOUR=${HOUR#0}    # strip a leading zero so 08/09 aren't treated as bad octal
if (( ${HOUR:-0} == 0 )) ; then
PREVHOUR=23
else
let PREVHOUR=${HOUR}-1
fi
PREVHOUR=$(printf '%02d' ${PREVHOUR})    # zero-pad so it matches 08:, 09:, etc.

Then you can try doing your grep. You should be able to combine it into something like:

grep "^${PREVHOUR}:" filename | grep -i fatal

The ^${PREVHOUR}: pattern looks for the previous hour's value at the beginning of the line, followed by a colon, like 17:.
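Assembled into one runnable sketch (the log name and demo contents below are made up; in the sketch the hour can be passed as $1 for testing, otherwise it comes from date +%H):

```shell
#!/bin/sh
# Sketch of the previous-hour grep; /tmp path and sample lines are invented.
LOG=/tmp/ndm_demo.log
printf '16:59:59|MSST=Process fatal error\n17:03:02|MSST=all clear\n' > "$LOG"

HOUR=${1:-$(date +%H)}
HOUR=${HOUR#0}                        # avoid octal trouble with 08/09
if [ "${HOUR:-0}" -eq 0 ]; then
    PREVHOUR=23
else
    PREVHOUR=$(( HOUR - 1 ))
fi
PREVHOUR=$(printf '%02d' "$PREVHOUR")  # zero-pad to match the timestamps

# No match outside the demo hour is fine, hence the || true.
grep "^${PREVHOUR}:" "$LOG" | grep -i fatal || true
```

Dropped into an hourly cron job, this prints only the fatal lines stamped in the hour that just ended.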

Gary Friou
New Member

Re: Monitoring logfiles for errors hourly

Thanks very much Patrick...

I used the PREVHOUR statement you shared and that solved my issue.

Thanks again to all who responded...