Operating System - HP-UX

How to trim/remove a file which is always open for writing

 
Rui Vilao
Regular Advisor

How to trim/remove a file which is always open for writing

Greetings!

I have an application running on HP-UX which is started in background and I
redirect its stdout and stderr to a file.

Its behaviour can be simulated by this script:

#!/usr/bin/ksh
yes | tr "y\012" "00" > big_big_file

big_big_file is always growing.

When I try on the Korn Shell:

> big_big_file

The file is only apparently truncated to length zero, because after the next write it
regains its previous size. If I try to remove it, the space available on
the filesystem keeps shrinking. It is only released when I kill the process.

How can I handle this situation?

Note: I have this problem on HP-UX, but I believe it's the same on any Unix...

Thanks in advance for your help,

Kind Regards,

Rui.
"We should never stop learning"_________ rui.vilao@rocketmail.com
12 REPLIES
John Palmer
Honored Contributor

Re: How to trim/remove a file which is always open for writing

Without getting your application to close the file, I'm not sure that you can. Are you not able to stop/restart this process on a regular basis?

If it's some sort of log file and you are not interested in the contents, can you use /dev/null instead?
Andy Monks
Honored Contributor

Re: How to trim/remove a file which is always open for writing

Rui,

The problem is that the process with the file open has a file pointer that points a long way into the file. Thus when you truncate the file, although its length is now 0, the original process still has its pointer somewhere else. When it next writes to the file, that's where it is going to write. Unless the process does an lseek() to the end of the file (which would now be at byte 0), it is always going to write to where it thinks the end is.

There is one piece of good news, however. Because you've truncated the file, it will probably now be a sparse file.

So, if you do 'll' and 'du -k' on the file, you'll see that the sizes are completely different (btw, du -k reports in 1K blocks). When the process does its write at the old end of the file, since the intervening part of the file doesn't exist, the OS won't actually fill in all the nulls; the inode just keeps track that they should logically be there.

However, if you backup/restore or copy the file, the nulls will be filled in (unless you use frecover with the -s option).
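Andy's sparse-file point can be seen directly. Below is a minimal sketch (my simulation, assuming a filesystem that supports sparse files; the writer one-liner follows the question's script, with the presumably intended \012 newline escape restored):

```shell
# Simulate the always-writing process from the question.
yes | tr "y\012" "00" > big_big_file &
WRITER=$!
sleep 2                     # let the file grow
: > big_big_file            # truncate it while it is still open for writing
sleep 2                     # the writer resumes at its old (large) offset
ls -l big_big_file          # apparent size: large again
du -k big_big_file          # allocated blocks: far fewer -- the hole costs nothing
kill $WRITER
```

The gap between `ls -l` and `du -k` is the hole left behind by the truncation.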
CHRIS_ANORUO
Honored Contributor

Re: How to trim/remove a file which is always open for writing

Use cat /dev/null > big_big_file
When We Seek To Discover The Best In Others, We Somehow Bring Out The Best In Ourselves.
Rui Vilao
Regular Advisor

Re: How to trim/remove a file which is always open for writing

Thanks John and Andy.

You are both right. It looks like I don't have many
choices...

I cannot stop my application on a regular basis... It
runs 24 hours a day (it runs on an MC/SG cluster!).
The application writes trace info to stdout and stderr...
It should not... but it does!
Redirecting it to /dev/null is not the best solution either,
because then I lose my trace info...

Andy, you are right: when I run

> big_big_file

the space on the filesystem is released. But maybe it
is not good to have such a weird situation on an
HA system...

Chris,
Your suggestion is the same as > big_big_file
Thanks anyway...

More hints?
"We should never stop learning"_________ rui.vilao@rocketmail.com
Andy Monks
Honored Contributor

Re: How to trim/remove a file which is always open for writing

Rui,

It's possible (but not a good idea) for a program run as root to reset the file pointer of another process (slightly friendlier than adb!). However, since it would have to work off the pid, if the process died and was restarted with another pid, and something else then took the original pid, you could be in trouble.

Let me see how easy (or not) this is to write. Are we just talking about stdin/stdout/stderr?
Tim Malnati
Honored Contributor

Re: How to trim/remove a file which is always open for writing

The only solution I can think of to handle this would be an interprocess pipe file, i.e. a named pipe (prwxrwxrwx). In this case your current process writes to the pipe file and another process actually does the logging, trimming, etc. This special file type can have one end open while the other end is closed. The process receiving and processing the data needs to be solid code, and I strongly recommend some form of automatic periodic monitoring of the 'catcher' process as well.
Andy Monks
Honored Contributor

Re: How to trim/remove a file which is always open for writing

Hi Rui,

Ok, I can write a small program to reset the file pointers of another process (assuming you just want stdin/stdout/stderr done).

So, what I need from you is the release of HP-UX you're running and, if it's HP-UX 11, whether it's 32- or 64-bit.
Rui Vilao
Regular Advisor

Re: How to trim/remove a file which is always open for writing

Again, thanks a lot for your contribution!

Andy, I could indeed try your suggestion of having a process which resets the file
pointers of another process. It is probably not the best/cleanest solution...
I have to tell you that I am not starting only one process like this:
our application consists of more or less 10 processes...
I hope it will not be too much work for you...
And since our application has a Solaris port, the problem would still remain unsolved
on that platform.
Do you think it is a good solution to use "> big_big_file"? At least it will
prevent the filesystem from filling up. Do you see any other problem besides ls
returning the wrong size?

Btw, our current OS is 10.20, but we are migrating to 11.00 (64-bit).

Tim, could you detail a bit your solution?

Thanks,

Rui
"We should never stop learning"_________ rui.vilao@rocketmail.com
Andy Monks
Honored Contributor

Re: How to trim/remove a file which is always open for writing

Rui,

I was looking at Tim's suggestion: assuming you can redirect the output, you could create a couple of named pipes (two for each process) and then have something else reading from them.

e.g.

mknod /dev/pipe1_out p
mknod /dev/pipe1_err p

Then do :-

yourapp > /dev/pipe1_out 2>/dev/pipe1_err

Then just have two processes copying them to files :-

cat < /dev/pipe1_out > /tmp/file1.out
cat < /dev/pipe1_err > /tmp/file1.err

That way, you can stop the cat processes at any time, delete the files and restart them.

Btw, you MUST keep the cat processes running, as otherwise the pipe will fill up and the application writing to it might get upset.
Tim Malnati
Honored Contributor

Re: How to trim/remove a file which is always open for writing

Basically Andy has it right. cat -u might be better, since no buffering will take place. You can also run the data into a single named pipe using 2>&1, if the application or script at the other end can handle the mixture.
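A minimal sketch of that single-pipe variant (paths are illustrative, and the application is simulated here by a couple of echo commands, since the thread does not name a real binary):

```shell
PIPE=/tmp/pipe1
LOG=/tmp/file1.log
[ -p "$PIPE" ] || mknod "$PIPE" p

# Both output streams of the "application" go into one named pipe...
( echo "to stdout"; echo "to stderr" >&2 ) > "$PIPE" 2>&1 &

# ...and an unbuffered reader drains it into the log file.
cat -u < "$PIPE" > "$LOG"
```

When the writer closes its end, the cat sees EOF and exits, so a real deployment would restart it, as discussed below.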

Named pipes are supported by a lot more than just shell scripts and C routines. A variety of database engines also support them allowing you to channel the data directly into a database. With some cute manipulation you can even setup a service port and receive and redirect data from another machine into a daemon process without using socket code.

Andy's caution is on the money. A named pipe accumulates data just like any other file if the process on the other end is not extracting it. Depending on your process, this could be quite a bit very quickly. Usually I have a cron job set up to verify that the catcher process is running and restart it if it's not (with some additional flags to handle shutdown periods).
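That cron check might look something like this (a hypothetical sketch; the pidfile, pipe and log paths are illustrative, not from the thread):

```shell
#!/usr/bin/ksh
# Hypothetical watchdog for the pipe 'catcher' process; run it from cron.
PIPE=/tmp/pipe1_out
LOG=/tmp/file1.out
PIDFILE=/tmp/catcher.pid

[ -p "$PIPE" ] || mknod "$PIPE" p

# Restart the catcher if the recorded pid is missing or no longer alive.
if [ ! -f "$PIDFILE" ] || ! kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    nohup cat < "$PIPE" >> "$LOG" 2>/dev/null &
    echo $! > "$PIDFILE"
fi
```

The kill -0 probe only tests for process existence; a production version would also want the "shutdown period" flags Tim mentions.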
Felix J. Liu
Advisor

Re: How to trim/remove a file which is always open for writing

Just a thought:

Try a symbolic link, i.e.:

touch big_big_file_1 big_big_file_2
ln -s big_big_file_1 big_big_file

Later, you do:
rm big_big_file; ln -s big_big_file_2 big_big_file

and examine big_big_file_1, then make its size zero for the next use, or make a new file for the next switch.

You can even automate it, if you have enough disk space or delete old logs in time.

Hope this will work (I am not so sure about it). I tried it with a simple endless-loop script doing echo "line" >> tst.log, and it seemed to work there.
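Felix's switch can be scripted as below. One caveat worth adding (my note, not from the thread): this only helps when the writer reopens the path for each write, as an `echo >> tst.log` loop does; a process that holds the file open keeps writing to the old inode regardless of where the symlink points.

```shell
# Felix's two-file rotation, as a sketch.
touch big_big_file_1 big_big_file_2
ln -s big_big_file_1 big_big_file       # writers that reopen the path use _1

# ... later, switch the link and reclaim the first file:
rm big_big_file
ln -s big_big_file_2 big_big_file       # new writes (via reopen) go to _2
: > big_big_file_1                      # empty the old file for the next round
```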