Community Home > Servers and Operating Systems > Operating Systems > Operating System - Linux > about fork() in linux
11-17-2003 08:28 PM
I use several cross-platform products and technologies (Java, OpenView, etc.).
I've noticed a difference in behaviour between Linux (Red Hat, though I don't think that matters) and all the Unix dialects: many components that elsewhere consist of a single process consist, on Linux, of a parent, a child, and many grandchildren.
An example is the opcctla process of the OpenView Operations agent: it is one process on Unix but eight processes on Linux.
I have two questions about this:
- why does Linux fork so much where the others don't?
- if I want to compute the product's memory use, should I *SUM UP* the usage of the individual processes, or do they somehow share the same memory pages?
Any help appreciated
Carlo
11-17-2003 11:34 PM
Solution
Hi Carlo,
There are several answers to your question. The first thing to understand is that Linux was originally built for the x86 architecture, which, at least in the beginning, was not designed with multithreading in mind. The consequence is that a thread could exist in several instances, but had to be controlled by a scheduler, and the CPU's own support was nowhere near sufficient for this.
You know the classic thread story:

    i = 0;
    i += 1;
    return i;

At the first return, i = 1. But if two identical threads run at the same time, their steps can interleave:

    Thread A        Thread B
    i = 0;
    i += 1;
                    i = 0;
    return i;       i += 1;
                    return i;

In that case thread B resets i to 0 after thread A has incremented it, so i = 0 at the first return, which can cause a lot of problems.
That's one reason why (OK, kernel hackers, I know I'm simplifying; the idea is to explain things, not to write a book :]] ) what you call grandchildren are set up by a child process that acts as their parent. That child also has the duty of reaping the grandchildren when they become zombies or have finished their work, thereby freeing memory on this rather weakly built architecture.
Also, a process needs broad privileges when starting, but not for long. That's why the first process to start often runs with uid 0 rights, launches what is required, then forks a child with fewer privileges (just enough to keep things running smoothly) and exits.
This is the family story. Now, why don't other systems do things this way? I assume mainly for system-architecture reasons, and maybe historical ones too... HP-UX forum members could certainly answer that point.
When you look at the processes, you'll often see the first process dead or asleep, and the same for many grandchildren. The total memory used is still the sum of the active ones. Check, and you'll see that this is not much compared to the number of 'ready to run' grandchildren... The system is quite efficient from that point of view.
Hope this is useful...
J
You can lean only on what resists you...