Operating System - HP-UX

SOLVED
Srikanth Arunachalam
Trusted Contributor

Performance improvement on perl script

Hi,

I have a perl script that reads an Oracle loader log file and extracts records from a feedfile to generate an output file. The script is attached. As long as the feedfile and logfile are of limited size, it works fine. However, when the feedfile is very large, around 25 GB or 30 GB, it fails with an out-of-memory error.

The details are:
ops_generate_exception.sh takes the logfile as a parameter.
Within the shell script, the perl script process_exception.pl is called with the formatted logfile and feedfile as parameters to generate an output file called OR-C-DR02-RB002-01-8-HST-EST-20101107-01.

Possibly the perl script reads the entire feedfile, which is why it gives an out-of-memory error in the production environment. I need help fine-tuning the perl script, please.

Thanks,
Srikanth A
12 REPLIES
James R. Ferguson
Acclaimed Contributor
Solution

Re: Performance improvement on perl script

Hi Srikanth:

> when the feedfile is very large, around 25 GB or 30 GB, it fails with an out-of-memory error.

So it sounds like you are slurping (reading) the whole file into memory.

Please provide your attachment as a simple text one, not as a "*.rar" archive, which is non-standard.

Regards!

...JRF...
VK2COT
Honored Contributor

Re: Performance improvement on perl script

Hello,

For very large files, I would try the following options (if you cannot rewrite it in C):

a) Use the Tie::File module. The file is not loaded into memory, so it should work even for "gigantic" files. See the sketch after this list.

b) Split your input file into smaller files and process them individually. The only downside is that you need more temporary disk space.

There are other possibilities too, but I would start with the above ones.
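
For illustration, a minimal Tie::File sketch; the filename big.log and the ORA- filter are placeholders, not taken from the attached script:

-----
#!/usr/bin/perl
# Access a huge file line-by-line through a tied array; Tie::File
# keeps only a small cache of lines in memory, not the whole file.
use strict;
use warnings;
use Tie::File;

tie my @lines, 'Tie::File', 'big.log'
    or die "Cannot tie big.log: $!\n";

for my $line (@lines) {    # each element is fetched from disk on demand
    print "$line\n" if $line =~ /ORA-/;
}

untie @lines;              # release the file when done
-----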

Cheers,

VK2COT

VK2COT - Dusan Baljevic
Hein van den Heuvel
Honored Contributor

Re: Performance improvement on perl script

>> Please provide your attachment as a simple text one, not as a "*.rar" archive, which is non-standard.

Yeah, use ZIP if you must, but plain TEXT is preferred by most, I guess.
RAR is an alternative to ZIP.

I could see/extract the contents on Windows using the free StuffIt Expander:
http://www.stuffit.com/win-expander-download.html

JRF> So it sounds like you are slurping (reading) the whole file into memory.

Good guess... Here is the core:
-----
{
    local $/ = undef;    # slurp: the whole input is read into memory at once
    @pieces = split( /(?=Record\s+\d+:)/, <> );
}
for $line (@pieces) {
    if ( $line =~ m{(Record\s+(\d+)).+?(ORA.+?)\s}s ) {
        print $1, "|", $3, "|", $lookup[ $2 - 1 ], "|", $up_amt[ $2 - 1 ], "\n";
    }
}
-----

From the perl documentation, on $/ ($RS = $INPUT_RECORD_SEPARATOR in English):

"You may set it to a multi-character string to match a multi-character terminator, or to undef to read through the end of file."

So just change that to a single loop, processing one record at a time.
A little more programming, infinitely better scaling.
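
A hedged sketch of that change, reusing the regex from the core above; @lookup and @up_amt are assumed to be populated from the feedfile elsewhere, exactly as in the original script, and the blank-line record separator is an assumption (confirmed below):

-----
#!/usr/bin/perl
use strict;
use warnings;

our ( @lookup, @up_amt );    # filled from the feedfile, as in the original

{
    local $/ = "";    # paragraph mode: one blank-line-separated record per read
    while ( my $rec = <> ) {
        if ( $rec =~ m{(Record\s+(\d+)).+?(ORA.+?)\s}s ) {
            print $1, "|", $3, "|", $lookup[ $2 - 1 ], "|", $up_amt[ $2 - 1 ], "\n";
        }
    }
}
-----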

If you want to tease readers here into helping with that, you may want to provide a TEXT :-) attachment with a handful of lines from such a log file, containing an example of a record being searched for, and a few lines before and after for good measure.

Btw... why process the data 3 times over?
1) strip blank lines
2) look for the special record
3) perl

Perl can do all of that in one sweep.

Finally... a pet peeve of mine:

feedfile=`cat ${formatted_logfile}|grep "Data File"|awk -F':' '{print $2}'`

Yuck! It will be a moot point once you teach the perl script to do it all, but as a general concept, why involve cat and grep when awk can do it all?
Carpenters... Learn to use your tools!

feedfile=$(awk -F':' '/Data File/{print $2}' ${formatted_logfile})

Good luck!
Hein
Srikanth Arunachalam
Trusted Contributor

Re: Performance improvement on perl script

Hi All,

I am attaching each file instead of the RAR archive. If I am not able to send them all in one message, I will do it in multiple messages. Please help me in this regard.

Thanks,
Srikanth
Srikanth Arunachalam
Trusted Contributor

Re: Performance improvement on perl script

Hi All,

In my earlier post, I sent the perl script. I will now attach the logfile (passed as the first parameter to the script).

Thanks,
Srikanth A
Srikanth Arunachalam
Trusted Contributor

Re: Performance improvement on perl script

Hi All,

This is the last attachment. Herewith I am attaching the feedfile (I have taken only 4 records of the entire feedfile to avoid oversizing). This is passed as the second parameter to the perl script.

Thanks,
Srikanth
James R. Ferguson
Acclaimed Contributor

Re: Performance improvement on perl script

Hi Srikanth:

Well it looks like it was *me* who wrote the original version of the script that doesn't scale :-(

http://forums.itrc.hp.com/service/forums/questionanswer.do?threadId=1435953

Regards!

...JRF...
Srikanth Arunachalam
Trusted Contributor

Re: Performance improvement on perl script

Hi James,

I do understand that it was your bit of code. Since I started using it, I have become simply addicted to it, and now I am trying to get out of the lion's den. Please show your lion-hearted skills and save me from the trap.

Thanks,
Srikanth
James R. Ferguson
Acclaimed Contributor

Re: Performance improvement on perl script

Hi:

A quick look back at your original thread indicated that the logfile could be "chunked" into paragraphs divided by blank lines. Given this, the original script should be able to be rewritten to:

# cat ./myfilter
#!/usr/bin/perl
use strict;
use warnings;
my @lookup;
{
    # Build the lookup table from the feedfile, one line at a time.
    my $feedfile = 'myfeed';
    open( my $fh, '<', $feedfile ) or die "Can't open '$feedfile': $!\n";
    while (<$fh>) {
        chomp;
        push( @lookup, substr( $_, 0, 14 ) );
    }
}
{
    # Paragraph mode: each blank-line-delimited chunk of the logfile
    # arrives as one record, so the logfile is never slurped whole.
    local $/ = "";
    while (<>) {
        if ( m{(Record\s+(\d+)).+?(ORA.+?)\sSQL}s ) {
            print $1, "|", $3, "|", $lookup[ $2 - 1 ], "\n";
        }
    }
}
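
For reference, a hypothetical invocation (the feedfile name 'myfeed' is hardcoded above; the logfile name here is a placeholder):

# ./myfilter mylog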

Regards!

...JRF...
Hein van den Heuvel
Honored Contributor

Re: Performance improvement on perl script

JRF>> Well it looks like it was *me* who wrote the original version of the script that doesn't scale :-(

A good deed never goes unpunished :-)

The original post was actually clearer than this one!

I now realize that the program also reads the whole Oracle feed file. This is probably where it blows up. Please verify by adding a simple 'print' statement after the first loop.

Here is a bit of a perl one-liner to do the lookup job. You could just merge it into the perl script.

$ perl -ne '$r = $1 if /(^Record\s+\d+:)/; if (/^ORA-/) { print qq($r|$_|) if $r; $r=undef}' tmp.txt


But I suspect that it blows up on reading the feed file: the number of errors is more or less under control, but the number of data lines is not.

So reverse the roles...

Read the LOG file first, establishing a hash of error line numbers + ORA text.

Next open the feed file and scan.
When you read a line for which there is an entry in the error hash, cull the details and print:

while (<$fh>) {
    next unless $error_line{$.};
    print ...;
}
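
Putting that together, a rough sketch of the reversed approach; the file arguments, the ORA pattern, and the output layout (the 14-character key field is borrowed from the script above) are assumptions:

-----
#!/usr/bin/perl
use strict;
use warnings;

my ( $logfile, $feedfile ) = @ARGV;
my %error_line;    # record number => ORA error text

# Pass 1: the (small) log file; remember which records failed.
open( my $log, '<', $logfile ) or die "Can't open '$logfile': $!\n";
my $rec;
while (<$log>) {
    $rec = $1 if /^Record\s+(\d+):/;
    if ( defined $rec and /^(ORA-\S+)/ ) {
        $error_line{$rec} = $1;
        undef $rec;
    }
}
close $log;

# Pass 2: stream the (huge) feed file one line at a time; inside this
# loop $. is the feed file's current line number, which matches the
# loader's record number.
open( my $feed, '<', $feedfile ) or die "Can't open '$feedfile': $!\n";
while (<$feed>) {
    next unless $error_line{$.};
    chomp;
    print "Record $.|$error_line{$.}|", substr( $_, 0, 14 ), "\n";
}
close $feed;
-----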

g2g,

Hein
Hein van den Heuvel
Honored Contributor

Re: Performance improvement on perl script

No takers?
No surprise, as it was convoluted enough.

For yucks, I rewrote the whole mess as a linear operation, in a single simple perl script.
No helper files, no helper tools, no arrays.
I'm sure you'll need to tweak it some, but give it a try.

Script attached.

Outline...

# Get some variables set up
# Open all files and deal with initial records.
# Find name of data file in log file.
# Find log records followed by ORA errors.
# For each of those, read the data file until we find the signalled record.
# Print the desired data from the data record in the desired format.
# Loop for the next error.
# Print the final count.
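
Since the attachment itself is not reproduced here, a rough, untested sketch along the lines of that outline; the 'Data File' pattern and the output layout are guesses:

-----
#!/usr/bin/perl
use strict;
use warnings;

# Get some variables set up.
my $logfile = shift or die "Usage: $0 logfile\n";
my ( $count, $feedline, $data, $wanted ) = ( 0, 0, '', undef );

# Open all files and deal with initial records.
open( my $log, '<', $logfile ) or die "Can't open '$logfile': $!\n";

# Find name of data file in log file.
my $feedfile;
while (<$log>) {
    if (/Data File\s*:\s*(\S+)/) { $feedfile = $1; last }
}
die "No 'Data File' line found in $logfile\n" unless defined $feedfile;
open( my $feed, '<', $feedfile ) or die "Can't open '$feedfile': $!\n";

# Find log records followed by ORA errors.
while (<$log>) {
    $wanted = $1 if /^Record\s+(\d+):/;
    next unless defined $wanted and /^(ORA-\S+)/;
    my $ora = $1;

    # Read the data file forward until we find the signalled record;
    # both files move strictly front to back, so no arrays are needed.
    while ( $feedline < $wanted ) {
        defined( $data = <$feed> ) or die "Feed ended before record $wanted\n";
        $feedline++;
    }

    # Print the desired data from the data record in the desired format.
    chomp( my $line = $data );
    print "Record $wanted|$ora|", substr( $line, 0, 14 ), "\n";
    $count++;
    undef $wanted;    # loop for next error
}

# Print final count.
print "$count exception record(s) found\n";
-----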

Hein
Srikanth Arunachalam
Trusted Contributor

Re: Performance improvement on perl script

Hi Hein and James,

Thanks for your valuable gems of perl expertise in helping me resolve this issue. I need to tweak it a bit. I will complete the requirement and get back to you on this.

Thanks,
Srikanth