
about fork() in linux

SOLVED
Carlo Montanari
Advisor

I use several cross-platform products and technologies (Java, OpenView, etc.).
I've noticed a difference in behaviour between Linux (RH, but I don't think the distribution matters) and all the other Unix dialects: many components that run as a single process elsewhere run on Linux as a parent, a child, and many grandchildren.
An example of this is the opcctla process of the OpenView Operations agent: it is 1 process on Unix, 8 processes on Linux.
I have 2 questions about this:
- why does Linux fork so much where the others don't?
- if I want to compute the memory use of the product, should I *SUM UP* the usage of the individual processes, or do they somehow share the same memory pages?

Any help appreciated

Carlo
Jerome Henry
Honored Contributor
Solution

Re: about fork() in linux

Hi Carlo,

There are several answers to your question. The first thing to understand is that Linux was first built for the x86 architecture, which, at least in the beginning, was not designed with multithreading in mind. The consequence is that a thread could exist in several instances, but had to be controlled by a scheduler, and the CPU's own support was far from enough for this.
You know the classical thread story:

    i = 0;
    i += 1;
    return i;

At the first return, i = 1. But if 2 identical threads run at the same time, their steps can interleave:

    thread A: i = 0;
    thread A: i += 1;
    thread B: i = 0;
    thread A: return i;    (i is 0 again)
    thread B: i += 1;
    thread B: return i;

In that case i = 0 at the first return, which can cause a lot of problems.
That's one reason why (OK, kernel hackers, I know I'm simplifying the problem, but the idea is to explain things, not to write a book :]] ) what you called grandchildren are set up by a child, which is their parent. This child also has the duty to reap the grandchildren when they become zombies or have finished their work, such as liberating their memory on this rather weakly built architecture.

Then, a process needs broad privileges at startup, but not for long. That's why the first process to start often runs with uid 0 rights, launches what is required, then creates a child with fewer privileges (just enough to make things run smoothly) and stops.

This is the family story. Now, why don't other systems do things that way? I assume mainly for system-architecture reasons... and maybe also historical ones... HP-UX forumers could certainly answer that point.

When you look at the processes, you'll often see the first process dead or asleep, and the same for many grandchildren. The total memory used is still the sum of them all (I mean, of the active ones). Check, and you'll see that it is not much compared to the number of 'ready to run' grandchildren... The system is very efficient from that point of view.

Hope this is useful...

J
You can lean only on what resists you...