SMP Load Balancing
Operating System - OpenVMS
10-04-2006 09:39 AM
GS1280, 15 CPU, 64 GB
900 processes (VMS + application + Oracle 9i)
Attached is an ECP report showing CPU usage per CPU, averaged over a 3-hour period (8-11 am).
Question: can someone explain why there is a 35% spread between the lowest CPU utilization and the highest? We are having response-time problems during spikes in application usage. I'm guessing that if the CPU usage were more uniform, the spikes would be handled better (though I may be wrong about this).
We would rather not buy more (very expensive) CPU modules if there is some way to tune the system. TIA
2 REPLIES
10-04-2006 10:31 AM
Solution
Simple: OpenVMS schedules a runnable user task with no established affinity on the first idle CPU, starting from the high numbers.
The thought/plan is that the lower CPUs are (semi-)reserved for interrupt handling (Istk) and the lock manager, as needed.
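The scheduling choice described above can be sketched as follows. This is an illustrative model only, not actual VMS scheduler code: a runnable process with no affinity goes to the first idle CPU found when scanning from the highest CPU number downward, which leaves the low-numbered CPUs free for interrupt and lock-manager work.

```python
def pick_cpu(idle_cpus, num_cpus):
    """Return the highest-numbered idle CPU, or None if all are busy."""
    for cpu in range(num_cpus - 1, -1, -1):  # scan from the top down
        if cpu in idle_cpus:
            return cpu
    return None

# On a 15-CPU box with CPUs 0, 1, 2, 7 and 14 idle, an unpinned
# process lands on CPU 14, not CPU 0.
idle = {0, 1, 2, 7, 14}
print(pick_cpu(idle, 15))  # -> 14
```

This is why user time showing up on the low CPUs is a sign of a genuinely busy system: the scheduler only reaches them once the high CPUs are occupied.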
Looks like a nicely balanced, busy system to me. With this as the average, it does not surprise me to hear that some response-time problems happen during peaks, but I suspect that during those peaks all CPUs were gainfully employed; otherwise you'd never see user time on the low CPUs, and you do.
A more fine-grained picture (time-wise) might help here. T4.
The bulk of the CPU time is spent in user mode, so any tuning would have to happen there to have an effect. Even if you could magically tune all kernel mode away, you'd still have made only a minor impact. Still, what is believed to be responsible for the kernel time? QIO, scheduler, locks, logical names?
How is the gut feel on the Oracle tuning? How much (percentage) of the time goes there?
Have folks been looking at statspack, high-get queries and such? Excessive spinning? That could cause an I/O or lock bottleneck to look like a CPU shortage.
You might want to set affinity for the heavier-hitting Oracle processes (LGWR, DBWR, MON, ...) to the lower CPUs, to keep those processes a little out of the scheduling picture.
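Extending the same illustrative model, affinity narrows the set of CPUs a process may be placed on. Pinning a heavy hitter to the low CPUs keeps it off the high-numbered CPUs that the no-affinity scan reaches first; the model and numbers below are hypothetical, not VMS internals.

```python
def pick_cpu(idle_cpus, num_cpus, affinity=None):
    """Highest-numbered idle CPU the process may use, or None.

    affinity=None means no affinity: any CPU is a candidate.
    """
    candidates = affinity if affinity is not None else range(num_cpus)
    for cpu in sorted(candidates, reverse=True):
        if cpu in idle_cpus:
            return cpu
    return None

idle = {1, 2, 13, 14}
# An unpinned process lands on a high CPU...
print(pick_cpu(idle, 15))                      # -> 14
# ...while a process pinned to CPUs 0-2 (say, a log writer) stays low.
print(pick_cpu(idle, 15, affinity={0, 1, 2}))  # -> 2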
Interesting!
Hope this helps a little,
Hein van den Heuvel
HvdH Performance Consulting.
10-04-2006 07:11 PM
Re: SMP Load Balancing
Jack,
> I'm guessing that if the CPU usage were more uniform, the spikes would be handled better
Unlikely. CPU isn't really "load balanced" in the same way as (say) network traffic. If a process is computable and there's a CPU available, it will execute. In VERY broad terms, the more CPUs you have, the more likely one will be available when a process becomes computable.
If we assume that all compute processing is independent, you can scale linearly with CPUs. However, that's a bad assumption. Most processing activity will involve access to shared resources, which requires interlocking, and therefore synchronization between CPUs. This will reduce performance scaling.
The other effect has to do with the distribution of demand for CPU. If a CPU is idle but there are no computable processes, it will remain idle. Conversely, if you have numerous processes all waiting for the same event, they may all become computable at the same time; if there are more processes than CPUs, some will have to wait. So even if the total demand for CPU over some period is less than the total available compute resource, spikes in demand can still mean less than perfect response.
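A toy illustration of that last point, with invented numbers: on a hypothetical 15-CPU box sampled over ten intervals, one spike of 40 simultaneously computable processes leaves 25 of them waiting, even though average utilization over the whole window stays modest.

```python
NUM_CPUS = 15

def waiting(computable_per_tick):
    """Processes left waiting each tick when demand exceeds the CPU count."""
    return [max(0, n - NUM_CPUS) for n in computable_per_tick]

# Ten samples: mostly quiet, one spike of 40 computable processes.
demand = [5, 6, 4, 40, 5, 6, 5, 4, 6, 5]
avg_util = sum(min(n, NUM_CPUS) for n in demand) / (len(demand) * NUM_CPUS)
print(waiting(demand))    # only the spike tick queues anyone
print(f"{avg_util:.0%}")  # about 41% average utilization
```

Averaged reports like the 3-hour ECP numbers hide exactly this: the queueing happens entirely inside the spike, which is why finer-grained sampling is the next diagnostic step.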
As Hein says, you can force some processes to use specific CPUs using affinity, but really all that can do is improve performance for the specific process by (possibly) guaranteeing access to a CPU when required. The effect on overall system performance is only likely to be negative, because it reduces the system's choices. There may be special cases where giving a "key" process preferred access to a CPU can help smooth out CPU demand, but there aren't any generic methods for identifying such a circumstance.
Learn your workload. Decrease the granularity of your samples to see if you can identify patterns.
A crucible of informative mistakes