about fork() in linux
11-17-2003 08:28 PM
I use some cross-platform products or technologies (java, openview, etc).
I've noticed a different behaviour between Linux (RH, but I think that's not important) and all the unix dialects: many things that elsewhere consist of a single process consist, on Linux, of a parent, a child, and many grandchildren.
An example of this is the opcctla process of the OpenView Operations agent: it is 1 process on unix, 8 processes on Linux.
I have 2 questions about this:
- why does Linux fork so much, where the others don't?
- if I want to compute the memory use of the product, should I *SUM UP* the usage of the individual processes, or do they somehow share the same memory pages?
Any help appreciated
Carlo
11-17-2003 11:34 PM
Solution
Hi Carlo,
There are several answers to your question. The first thing to understand is that Linux was first built for the x86 architecture, which, at least in the beginning, was not designed for multithreading. The consequence is that a thread could exist in several instances, but it had to be controlled by a scheduler, and the one offered by the CPU was far from enough for this.
You know the classical thread story:
i = 0;
i += 1;
return i;
At the first return, i = 1. But if two identical threads run at the same time on the same i:
Thread A           Thread B
i = 0;
i += 1;            i = 0;
return i;          i += 1;
                   return i;
In that case i = 0 at the first return, which can cause a lot of problems.
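For the curious, here is a minimal, compilable sketch of that race using POSIX threads. It is my own illustration, nothing to do with the OpenView code itself; build it with gcc -pthread and run it a few times:

/*
 * Minimal sketch of the race described above, using POSIX threads.
 * Build: gcc -pthread race.c -o race
 */
#include <pthread.h>
#include <stdio.h>

/* volatile only keeps the compiler from hiding the shared variable;
 * it does NOT make this code thread-safe */
static volatile int i;

static void *worker(void *arg)
{
    (void)arg;
    i = 0;                          /* each thread resets the counter... */
    i += 1;                         /* ...increments it... */
    return (void *)(long)i;         /* ...and returns the value it sees */
}

int main(void)
{
    pthread_t a, b;
    void *ra, *rb;

    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, &ra);
    pthread_join(b, &rb);

    /* With an unlucky interleaving one thread returns 0 instead of 1,
     * because the other thread's "i = 0" lands between its increment
     * and its return. */
    printf("A returned %ld, B returned %ld\n", (long)ra, (long)rb);
    return 0;
}

Most runs will print 1 and 1; the point is only that nothing guarantees it.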
That's one reason why (OK, kernel hackers, I know I'm simplifying the problem, but the idea is to explain things, not to write a book :]] ) what you call grandchildren are set up by a child that acts as their parent. That child also has the duty to reap the grandchildren when they become zombies or have finished their work, freeing their memory on this rather weakly built architecture.
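As an illustration only (the structure and names are mine, not the agent's real code), this is roughly what that intermediate child does: fork the workers, then wait() for each one so that no terminated grandchild is left sitting in the process table as a zombie.

/*
 * Sketch of the "child reaps the grandchildren" duty described above.
 */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4

int main(void)
{
    int n;

    for (n = 0; n < NWORKERS; n++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* grandchild: do its piece of work, then exit */
            printf("worker %d (pid %d) done\n", n, (int)getpid());
            _exit(0);
        }
    }

    /* the "parent" of the grandchildren collects each one as it finishes,
     * freeing its process-table entry and its remaining resources */
    int status;
    pid_t done;
    while ((done = wait(&status)) > 0)
        printf("reaped pid %d\n", (int)done);

    return 0;
}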
Also, a process needs a lot of power when it starts, but not for long. That's why the first process to start often has uid 0 rights, launches what is required, then forks a child with less power (just enough to keep things running smoothly) and stops.
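A sketch of that start-up pattern, with placeholder details (the uid/gid value and the "setup" step are assumptions of mine, not what opcctla really does): start as uid 0, do the privileged setup, fork a less privileged child, then stop.

#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    /* ...privileged setup happens here, while we are still uid 0... */

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid > 0)
        return 0;                   /* the powerful first process stops */

    /* child: give up root before doing the long-running work
     * (65534 is a common uid/gid for "nobody", but that is an assumption) */
    if (setgid(65534) != 0 || setuid(65534) != 0) {
        perror("drop privileges");
        return 1;
    }

    /* ...the real, unprivileged work would run here... */
    pause();
    return 0;
}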
This is the family story. Now, why don't other systems do things that way? I assume mainly for system architecture reasons... and maybe also historical ones... HP-UX forumers could certainly answer that point.
When you look at the processes, you'll often see the first process dead or asleep, and the same for many grandchildren. The total memory used is still the sum of them all (I mean, of the active ones). Check, and you'll see that this is not much compared to the number of 'ready to live' grandchildren... The system is very efficient from that point of view.
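One thing worth keeping in mind when you do that sum: processes created with fork() share their pages copy-on-write, so adding up what ps reports per process can count the same physical pages several times. A small Linux illustration (my own, not OpenView code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define BIG (16 << 20)              /* 16 MB of anonymous memory */

int main(void)
{
    char *buf = malloc(BIG);
    if (!buf)
        return 1;
    memset(buf, 'x', BIG);          /* touch it so it is really resident */

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* As long as the child only reads, these 16 MB of physical pages
         * are the very same ones the parent owns; writing to buf would
         * trigger copy-on-write and only then allocate new pages. */
        printf("child sees buf[0] = %c\n", buf[0]);
        sleep(60);                  /* time to compare parent and child in ps/top */
        _exit(0);
    }
    sleep(60);                      /* keep the parent around too */
    waitpid(pid, NULL, 0);
    free(buf);
    return 0;
}

While both are asleep, ps shows roughly the same resident size for parent and child, yet the machine has only paid for those pages once.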
Hope this is useful...
J
You can lean only on what resists you...