- Re: Memory Usage by process - Always the BIG ?
Operating System - HP-UX
05-06-2010 08:40 AM
Re: Memory Usage by process - Always the BIG ?
Hi pulse,
How would the % be calculated, since a process will occupy physical, virtual, and swap memory?
Now while calculating the %, should I do
(occupied physical + occupied virtual + occupied swap) / (total physical + total virtual + total swap) * 100
I don't think these calculations are as simple as you indicate.
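The combined ratio proposed above is at least mechanical to compute once the figures are known. Here is a minimal Python sketch; every number in it is a made-up placeholder for illustration, not taken from a real HP-UX system:

```python
# Hypothetical per-process and system-wide figures, in MB.
# None of these numbers come from a real system.
occupied = {"physical": 100.0, "virtual": 436.0, "swap": 79.0}
total = {"physical": 11591.2, "virtual": 712.6, "swap": 11024.8}

# The combined percentage proposed above:
# (sum of occupied) / (sum of totals) * 100
combined_pct = 100.0 * sum(occupied.values()) / sum(total.values())
print(f"{combined_pct:.2f}%")
```

Whether that single blended figure means anything is a separate question, which the reply below takes up.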
05-06-2010 09:20 AM
Re: Memory Usage by process - Always the BIG ?
Well, your calculation would be less than simple -- but that's also because it is less than obvious that it would have any real meaning. A process consuming a lot of virtual memory may not be consuming any swap or any physical memory at all (mmap() a very large file without touching the mapping and you get just this). Or it could be consuming a lot of virtual+swap but no physical (large swap-backed object) or it could be consuming a lot of physical and virtual memory with no swap (touching that mapping mentioned before), etc.
So I don't get at all what you're asking. There's no real concept of a process using a percentage of "total system resources"; each resource type has some relation to the others, but a tenuous one in most cases, and each should be analyzed separately.
Here's something which might get you going. If you really, really want to add up the totals for some reason, the framework is there: add a field for total_global or something, have each proc_global += (proc_phys + proc_virt + proc_swap + proc_mlock), keep a top_global -- and then you can do the comparison to (sbuf.physical_memory + dbuf.psd_vm + vbuf.psv_swapspc_max + vbuf.psv_swapmem_max + vbuf.psv_swapmem_max) [fudging a little in just calling the Memory Swap maximum the maximum for Lockable memory -- it is actually a little lower, depending on OS version and tunables, etc.]. I don't see the point -- but there you go.
Sample output (and I don't write UIs, so yes, this is probably ugly):
Memory Stat        total     used    avail  %used
physical         11591.2   2502.7   9088.6    22%
active virtual     712.6    527.4    185.2    74%
active real        385.9    274.1    111.8    71%
memory swap      11024.8   1528.6   9496.2    14%
device swap       8192.0    405.1   7786.9     5%
Activations: 0 total, 0 rate. Deactivations: 0 total, 0 rate.
Reclaims from Swap: 0 total (Up 0), 0 rate.
Top Physical PID: [cimprovagt] 1534 Phys: 100 Mb (0.8687%) of RAM.
Top Virtual PID: [cimprovagt] 1534 Virt: 436 Mb (61.2214%) of Total Virt.
Top Swap PID: [cimprovagt] 1534 Swap: 79 Mb (0.4159%) of Total Swap.
Top Mlock PID: [midaemon] 1913 Mlock: 32 Mb (0.2967%) of Memory Swap.
The numbers don't line up 100% with kmeminfo -- but pstat tries to account for shared object references more than kmeminfo does, so that isn't surprising.
Also, the Total Virtual resource is really unknown. Since the virtual space reported includes files, we'd have to know the maximum space of any and all filesystems ever to be attached, etc. The kernel doesn't bother predicting that and as such, doesn't report it (we'd need max file space + max swap + memory swap to equal the absolute total Virtual space). So that "Total Virt" is really the current total virtual load of User space.
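For reference, the %used column in the sample output above is just used/total per resource, computed independently for each row. A short Python sketch of that per-resource computation, reusing the figures shown there (the helper name and layout are mine, not the original pstat-based program's):

```python
# Per-resource (total, used) pairs in MB, copied from the sample output.
resources = {
    "physical":       (11591.2, 2502.7),
    "active virtual": (712.6,   527.4),
    "active real":    (385.9,   274.1),
    "memory swap":    (11024.8, 1528.6),
    "device swap":    (8192.0,  405.1),
}

def used_pct(total, used):
    """Percentage of a single resource in use, as in the %used column."""
    return round(100.0 * used / total)

for name, (total, used) in resources.items():
    avail = total - used
    print(f"{name:15s} {total:9.1f} {used:8.1f} {avail:9.1f} {used_pct(total, used):3d}%")
```

Note that each row stands on its own; nothing here tries to blend the rows into one number, which is the point of the paragraphs above.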
- Tags:
- pstat
05-06-2010 09:30 AM
Re: Memory Usage by process - Always the BIG ?
Hi Don,
I think I am getting a great sign of hope now.
Even I think I am confused, but I will explain:
A simple summarization of the % of the HP-UX memory being consumed, and the process occupying the highest % of memory.
Need a perl or shell script :(
05-06-2010 10:45 AM
Re: Memory Usage by process - Always the BIG ?
> looking for some script which may really provide a consolidate memory usage status and process taking max memory...
> Expressing process using highest memory in terms of percent.
This really won't provide any useful information. A memory leak is nothing but a programming artifact...it may be intentional (the program needs more memory) or unintentional and therefore a leak. But knowing what percentage of RAM a single process is using is like trying to fix a filesystem by looking for big files. All the big files (or processes) are running as designed, and the real problem is with dozens of smaller files (or processes) that are not supposed to be growing.
So let's assume that the program using the largest amount of local or heap memory is Java and it has a resident set size of 2000MB. Do you know how to rewrite the Java code to not use so much memory? If not, then all you can do is apply the latest patches and buy more RAM if necessary.
Or perhaps it is shared memory (which is not part of local memory for any process), but Oracle requested 5000MB. So it is using a lot of memory, but you must talk to the DBA about this usage. You may be informed that reducing it to 1000MB means that all Oracle transactions will take 500% longer to complete.
The amount of memory used by each process is a function of how it was written. While you can impose a memory limit for each process, the majority of programs will simply crash with an errno 12 message (not enough core). So you will have solved the problem for programs that take more than the allowed space but no one gets any work done.
Bill Hassell, sysadmin
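The "errno 12" mentioned above is ENOMEM ("not enough core" in the classic wording). A small Python sketch of that failure mode; the toy allocator and its limit argument are purely illustrative stand-ins for a real per-process limit (HP-UX enforces the real thing through tunables such as maxdsiz):

```python
import errno
import os

# errno 12 is ENOMEM on common Unix systems, including HP-UX.
print(errno.ENOMEM, "=", os.strerror(errno.ENOMEM))

def allocate(nbytes, limit):
    """Toy allocator with a hard cap, standing in for a kernel limit."""
    if nbytes > limit:
        raise OSError(errno.ENOMEM, os.strerror(errno.ENOMEM))
    return bytearray(nbytes)

# A program that never checks its allocations dies here, exactly as
# described: ask for 2 GB against a 1 MB cap.
try:
    allocate(2 * 1024**3, limit=1024**2)
except OSError as e:
    print("allocation failed with errno", e.errno)
```

This is why simply imposing a lower limit tends to trade "too much memory used" for "programs crash and no work gets done".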
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP