
Find error

 
ust3
Regular Advisor

Find error

I have a directory into which our applications continually generate new files. Some of these files contain the word "error", and I would like to find out which files contain it and be notified by mail. Currently I have a simple script that greps for the word, scheduled by cron to run every 2 hours. It works fine, but I worry that if the system is shut down when the cron job is due to run, the error check for those two hours would be missed, and that would cause a serious problem. Furthermore, I am not sure the check is reliable while a file is still being written. Can you advise what the best method is? Is there any way to check the directory instantly? If possible, can you provide a script?

Thanks in advance.
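
For reference, a minimal sketch of the kind of two-hourly grep-and-mail cron job described here; the directory, the search word, and the mail address are placeholders, not the poster's actual values:

#!/usr/bin/sh
# Sketch of a periodic check: mail a list of files containing "error".
# LOGDIR and RECIPIENT are placeholders for this sketch.
LOGDIR=/app/logs
RECIPIENT=admin@example.com

matches=`grep -l "error" $LOGDIR/* 2>/dev/null`
if [ -n "$matches" ]; then
    printf "Files containing 'error':\n%s\n" "$matches" |
        mailx -s "error found in $LOGDIR" "$RECIPIENT"
fi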
3 REPLIES
ust3
Regular Advisor

Re: Find error

Thanks.

I know the command tail -f can display lines as a file is updated; could I use a similar function to check these files? Thanks.
Dennis Handly
Acclaimed Contributor

Re: Find error

>I know the command tail -f could display the line while the file is updated

Yes, you could use tail -f on every file. And if you are creating new files, you would have to find the new files and then put each tail -f in the background.
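
A rough sketch of that approach, covering the files already present when the watcher starts; the directory and mail address are placeholders:

#!/usr/bin/sh
# Watch each existing file with its own backgrounded tail -f and
# mail any line containing "error". LOGDIR and RECIPIENT are
# placeholders for this sketch.
LOGDIR=/app/logs
RECIPIENT=admin@example.com

for f in $LOGDIR/*
do
    ( tail -f "$f" | while read line
      do
          case "$line" in
          *error*) echo "$f: $line" |
                       mailx -s "error in $f" "$RECIPIENT" ;;
          esac
      done ) &
done
wait

Files created after startup are not covered; as noted above, you would still need a periodic find to spot new files and start a tail -f for each.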

>scheduled to run in every 2 hours

You could have the script run continuously and just add a sleep at the end so you can do the checking more often. See my reply in the following thread with "while true; do":
http://forums.itrc.hp.com/service/forums/questionanswer.do?threadId=1187460
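
A sketch of a continuously running checker in that "while true; do" spirit; the directory, interval, mail address, and stamp-file paths are placeholders. Using a stamp file with find -newer keeps each pass cheap, and taking the new stamp before scanning means a file may occasionally be reported twice but is never missed:

#!/usr/bin/sh
# Continuous checker: scan only files modified since the last pass.
# LOGDIR, RECIPIENT, the sleep interval, and the stamp/hits paths
# are placeholders for this sketch.
LOGDIR=/app/logs
RECIPIENT=admin@example.com
STAMP=/var/tmp/errcheck.stamp
HITS=/var/tmp/errcheck.hits

while true
do
    touch "$STAMP.new"        # mark the start of this pass
    if [ -f "$STAMP" ]; then
        files=`find $LOGDIR -type f -newer "$STAMP"`
    else
        files=`find $LOGDIR -type f`
    fi
    for f in $files
    do
        grep -l "error" "$f" 2>/dev/null
    done > "$HITS"
    mv "$STAMP.new" "$STAMP"  # next pass compares against this time
    if [ -s "$HITS" ]; then
        mailx -s "error found under $LOGDIR" "$RECIPIENT" < "$HITS"
    fi
    sleep 60                  # check once a minute, not every 2 hours
done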
A. Clay Stephenson
Acclaimed Contributor

Re: Find error

You are probably thinking about the problem all wrong. tail -f can be used to examine individual files, but your problem is that there are multiple files.

You could grep more often than every two hours (every minute?), but that could be a resource-expensive operation. You could use find to exclude all files which have not been modified since the last scan, which would reduce the workload.

Perhaps a better answer, though, is to have all your applications log errors to a common facility such as syslog. After all, the applications know that an error occurred, so it should be simple to add a call to syslog so that errors are sent to syslog in addition to being written to your existing logfiles. You then have the simple task of searching syslog, and now your tail -f idea works very well because it is looking at one file.
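
As an illustration of that single-file idea: assuming the applications also emit errors to syslog (e.g. with logger -p user.err "message" or the syslog(3) API), the watcher reduces to one tail -f. The log path below is the HP-UX default and the mail address is a placeholder:

#!/usr/bin/sh
# Watch the single common syslog file for "error" lines.
# The syslog path assumes HP-UX defaults; RECIPIENT is a placeholder.
LOG=/var/adm/syslog/syslog.log
RECIPIENT=admin@example.com

tail -f "$LOG" | while read line
do
    case "$line" in
    *error*) echo "$line" | mailx -s "error in syslog" "$RECIPIENT" ;;
    esac
done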
If it ain't broke, I can fix that.