Searching for Processor "best practices"

SOLVED
Rob Hussey
Advisor

Searching for Processor "best practices"

We are running an rx7640 on HP-UX 11.23 (upgrading to 11.31 shortly) with Oracle 11gR1 (upgrading to 11gR2 shortly) and a large database.

I have heard that most places like to run at a 30% to 45% processor utilization rate, as measured by running top and subtracting the idle value from 100.

For the past 12 months, as well as through the last peak volume, we have been averaging 21% utilization across our processors. With this being said, it is my belief that we actually have more processors than is cost-effective.
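For reference, the figure described above (100 minus top's idle percentage) can be sketched in a few lines; the idle samples below are made-up numbers for illustration, not measurements from this system:

```python
# Minimal sketch: derive CPU utilization as (100 - %idle) from idle-time
# samples, the same figure top reports. The sample values are hypothetical.
idle_samples = [79.0, 81.5, 76.2, 80.3]  # illustrative %idle readings

utilization = [100.0 - idle for idle in idle_samples]
average_util = sum(utilization) / len(utilization)

print(f"per-sample utilization: {utilization}")
print(f"average utilization: {average_util:.1f}%")
```

Averaging the per-sample figures this way is what produces a number like the 21% quoted above; as later replies note, the average alone can hide the peaks that actually matter.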

I am unable to locate any standards, white papers, "best practices", or the like to support my belief.

TIA,
- Rob
- Rob Hussey
14 REPLIES
Pete Randall
Outstanding Contributor

Re: Searching for Processor "best practices"

This is definitely an "it depends" question.

Do you have any peak periods that require more processing power? Month-end processing? Billing cycles? If not, I would tend to agree that you have more than enough, and I surely wouldn't worry about it.

It's also possible that your upcoming upgrades may require more resources, so keep that in mind.


Pete
Rob Hussey
Advisor

Re: Searching for Processor "best practices"

We do have 3 weeks of the year when we run a lot of code that I believe is extremely poorly written, so my position is: fix the code before we buy yet more hardware. I feel that a system should be sized for the 90% of uptime, instead of being sized for the other 10%. Do you know of any documentation that may support my case?

I have upgraded our rx6600 to HP-UX 11.31 and Oracle 11gR2, which showed an increase of roughly 4% in processor utilization.
- Rob Hussey
Rita C Workman
Honored Contributor

Re: Searching for Processor "best practices"

If you're saying that you are only using 21% of your CPU, then I'd say you have plenty of resources left to use.

Not everything requires a written document; best practice is sometimes what's best for your shop. It looks to me like your place has plenty of resources. You're running the box, so just monitor your systems and trust yourself.

Kindest regards,
Rita
Rita C Workman
Honored Contributor
Solution

Re: Searching for Processor "best practices"

On your last comment about sizing for 10% of the time:

Well, I guess that one comes down to which is cheaper: spending time for some programmers who didn't write it right the first time to find their garbage and fix it, or buying some hardware just to address that 10%-of-the-time peak load.

I call that a management decision.

Rgrds,
Rita
Duncan Edmonstone
Honored Contributor

Re: Searching for Processor "best practices"

If you are moving to 11.31 and 11gR2 on your rx7640, and the nPar it is in is made up of more than one cell board, you should read this white paper; enabling NUMA support could gain you a fair bit of performance:

http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA2-4194ENW.pdf

HTH

Duncan
Duncan Edmonstone
Honored Contributor

Re: Searching for Processor "best practices"

Oh, and just to answer your original question about nominal levels of CPU utilization: the answer is of course "it depends". Generally, as CPU utilization gets above around 50%, response times start to increase, although the effect may be insignificant until you approach somewhere around 80% (generally, the more CPUs you have, the higher the utilization figure before latency really starts to bite).

The math behind this is quite complex (for a simpleton like me, at least), but if it interests you (it doesn't me, to be honest), a Google search on:

cpu utilization queueing theory

will turn up some interesting material to peruse.
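As a rough illustration of the effect described above, here is the textbook single-server (M/M/1) response-time relationship, R = S / (1 - rho), where S is the service time and rho the utilization; the 10 ms service time below is an assumed number, not anything measured on this system:

```python
# Sketch of why latency climbs with utilization: in an M/M/1 queue, mean
# response time R = S / (1 - rho). S (service time) is assumed here.
service_time = 10.0  # ms per request (illustrative)

for rho in (0.30, 0.50, 0.80, 0.90):
    response = service_time / (1.0 - rho)
    print(f"utilization {rho:.0%}: mean response time {response:.1f} ms")
```

The knee of this curve is why response time is tolerable at 50% but degrades sharply past 80-90%; multi-server (M/M/c) queues push the knee higher, which matches the point that more CPUs tolerate higher utilization.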

HTH

Duncan
Jose Mosquera
Honored Contributor

Re: Searching for Processor "best practices"

Kenan Erdey
Honored Contributor

Re: Searching for Processor "best practices"

Hi,

Just to add something:

The average is not really important. You should look at what you need during production periods. For example, if you run a Java program two or three times a month, and the CPU load gets high and the process does not complete in the period you expect, then you need more CPU resources. Or if you run a service that serves customers between 08:00 and 18:00, you should average over those hours.

The CPU load metric should also be taken into account; if it's below or around 1 per CPU, it's OK. There is a correlation between load and utilization, but as mentioned it's mathematical and I couldn't fully understand it either. I had started a thread about this:

http://h30499.www3.hp.com/t5/General/cpu-load-and-utilization-correlation/m-p/4695729#M146668


Generally there is a belief that above 60% utilization the CPU load starts to climb, etc., but it varies.

After all, if your spare resources are still high and you want to do something, you can think about partitioning technologies.
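The per-CPU load rule of thumb mentioned above can be checked in a few lines; this sketch assumes a Unix-like system where os.getloadavg() is available:

```python
# Sketch of the per-CPU load rule of thumb: a 1-minute load average at or
# below the CPU count (~1 per CPU) generally means no run-queue backlog.
import os

def load_per_cpu():
    one_min, five_min, fifteen_min = os.getloadavg()
    ncpu = os.cpu_count() or 1
    return one_min / ncpu

ratio = load_per_cpu()
print(f"1-minute load per CPU: {ratio:.2f} "
      f"({'OK' if ratio <= 1.0 else 'saturated'})")
```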

Computers have lots of memory but no imagination
Rob Hussey
Advisor

Re: Searching for Processor "best practices"

I now have a better feeling and some documentation to help management pursue resolving the root cause, rather than buying processors to overpower the problem by throwing money at it.

Duncan - Some of the documents provided by your suggestion will be good to present to management.

Kenan - Our typical monthly utilization ranges from 10% to 48%, excluding the month that includes a week of heavy load.

Rita - Your last thought is where I have been going.

Jose - Thank you for the direction.

Thank you all for your help!!!

Rob
- Rob Hussey
Steven Schweda
Honored Contributor

Re: Searching for Processor "best practices"

> average is not really important. [...]

I'm with him. What, exactly, does the average tell you? What, exactly, matters?

If all your work gets done soon enough, then why worry about adding hardware? If some of your work does not get done soon enough, then you have a problem to solve. (Possible solutions include faster hardware or faster software, but also things like more efficient job scheduling, or a revised definition of "soon enough".)

If you plan to increase the workload, and you're currently consuming close to 100% of some (any) resource (at some time), then you can reasonably anticipate some trouble.

> This is definitely a "it depends" question.

I'm with him, too.


> [...] we actually have more than the cost effective need of processors.

What, exactly, does that mean, in English? You have more processors than you think you need?
Rob Hussey
Advisor

Re: Searching for Processor "best practices"

My thought is that if we are not using 80% of our available resources 90% of the time, then buying more processors only means even more money spent on hardware we are not using.

My thought is to size the system for the 85% to 90% of typical usage, and not size it for the 10% to 15% peak.
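This percentile-based sizing idea can be illustrated with a small sketch; the hourly utilization samples below are hypothetical, and the nearest-rank percentile helper is just one common convention:

```python
# Sketch: size to the ~90th percentile of utilization rather than the
# absolute peak. The hourly samples are made-up numbers for illustration.
def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

hourly_util = [12, 15, 18, 21, 19, 25, 30, 22, 17, 48, 95, 20]  # % per hour

print(f"peak: {max(hourly_util)}%")                        # peak-sizing target
print(f"90th percentile: {percentile(hourly_util, 90)}%")  # percentile target
```

With these sample numbers, sizing to the 90th percentile targets 48% rather than the 95% outlier hour, which is the gap between the two sizing philosophies being debated here.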
- Rob Hussey
Zinky
Honored Contributor

Re: Searching for Processor "best practices"

My thought on the matter, having been in this business for many years and having led/herded/guided bean counters, is as follows:

IF the 5 to 15% of the time that the system hits peak load is important for the business/client, then size FAT. Buy/size your system with enough resources to meet that 10 to 15 percent peak requirement. Management and its bean counters should be smart enough to tell whether, say, billing and MASS jobs (the most typical enterprise peak periods) merit the splurge on IT systems.

IF that 5 to 15% is not really that important and your SLA is flexible, then size LEAN.

So you see, it all depends.


Hakuna Matata

Favourite Toy:
AMD Athlon II X6 1090T 6-core, 16GB RAM, 12TB ZFS RAIDZ-2 Storage. Linux Centos 5.6 running KVM Hypervisor. Virtual Machines: Ubuntu, Mint, Solaris 10, Windows 7 Professional, Windows XP Pro, Windows Server 2008R2, DOS 6.22, OpenFiler
Zinky
Honored Contributor

Re: Searching for Processor "best practices"

I forgot to add and share something very important to this thread, which should really have been titled "System Utilisation Best Practices".

We have been in the same predicament for over 7 years now. Our systems vary widely, with online workloads generally light but unpredictable, and batch and month-end runs generally heavy. The dilemma back then was whether to buy small or mid-sized systems, or go with the big Superdomes and implement a scheme where resources (mostly CPU) can be allocated on demand and on the fly, a.k.a. vPars.

It was the BEST solution. We have had agile systems and I think saved a TON of money going this route, with systems readily reconfigurable (on the fly, or with a short downtime) to address varying workloads. So no wastage of CPU resources. Looking back at my system historicals, we've been able to utilize up to 80% of CPU resources on average.

In your case you can likely do the same. HP's partitioning continuum is the best out there for UNIX systems. You can do vPars, IVMs, PSets, WLM groups, etc.

Now that we are almost through with our UNIX-away project and on Linux, the choices for higher CPU utilisation while keeping an agile system are endless. There's HA/virtualization using KVM or vSphere, so we now truly have higher agility and efficiency in using system resources while lowering costs.
Rob Hussey
Advisor

Re: Searching for Processor "best practices"

Thanks Rita, Alzhy and everyone else!!!!


Well, I guess that one comes down to which is cheaper: spending time for some programmers who didn't write it right the first time to find their garbage and fix it, or buying some hardware just to address that 10%-of-the-time peak load.


IF the 5 to 15% of the time that the system hits peak load is important for the business/client, then size FAT. Buy/size your system with enough resources to meet that 10 to 15 percent peak requirement. Management and its bean counters should be smart enough to tell whether, say, billing and MASS jobs (the most typical enterprise peak periods) merit the splurge on IT systems.

- Rob Hussey