Operating System - HP-UX

Re: performance impact measurement

 
Shivkumar
Super Advisor

performance impact measurement

Dear Sirs;

I want to measure the performance of a particular software before and after the installation.

I don't have very sophisticated third party tools and want to use tools available on hpux 11/11i.

Appreciate your suggestion.
Shiv
Mel Burslan
Honored Contributor
Solution

Re: performance impact measurement

Before you install the product, at a point where the system is running stably, run the command:

sar 5 10 > /tmp/preload

Load your software and start running it. Once the system has stabilized after the install, i.e. new user profiles have been created and the initial tests are complete (which usually takes a few days in my experience), run the command:

sar 5 10 > /tmp/postload

Then compare the last lines of these two files, the ones starting with the word "Average".

If your application is well behaved, the load it introduces to the system will show up under the %usr column, though there may be a slight increase in the %sys column as well.

You can baseline the performance impact on these numbers if you are not using any other external performance monitors.
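A minimal sketch of that before/after comparison, pulling the "Average" summary line out of each sar capture and printing them side by side. The file contents below are made-up placeholder numbers, not real sar output:

```shell
# fabricate two tiny stand-ins for the real sar captures
cat > /tmp/preload <<'EOF'
          %usr %sys %wio %idle
Average     12    5    3    80
EOF
cat > /tmp/postload <<'EOF'
          %usr %sys %wio %idle
Average     25    7    3    65
EOF

# print each file's summary line, tagged with its filename
for f in /tmp/preload /tmp/postload; do
    awk -v file="$f" '/^Average/ { print file ":", $0 }' "$f"
done
```

With real captures you would simply skip the heredocs and point the loop at /tmp/preload and /tmp/postload as produced by sar.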

hope this helps
________________________________
UNIX because I majored in cryptology...
DCE
Honored Contributor

Re: performance impact measurement

If you have the Foundation OS, you are limited to:
top
sar
vmstat
swapinfo
bdf

If you have the Enterprise edition of the OS, then
glance
is also available.
Mel Burslan
Honored Contributor

Re: performance impact measurement

Also, see SEP's free performance monitoring script here:

http://www.hpux.ws/system.perf.sh

It collects a lot more information than a simple sar run.

Run it before and after the install, then compare the outputs to gauge the impact.
________________________________
UNIX because I majored in cryptology...
Hakan Aribas
Valued Contributor

Re: performance impact measurement

---
top
---

top displays the top processes on the system and periodically updates the information. Raw CPU percentage is used to rank the processes. top can be executed with or without command-line options.

Three general classes of information are displayed by top:

1. System Data: The first few lines at the top of the display show general information about the state of the system

2. Memory Data: Includes virtual and real memory in use (with the amount of memory considered "active" in parentheses) and the amount of free memory

3. Process Data: Information about individual processes on the system. When the process data cannot fit on a single screen, top divides it into two or more screens. To view multiple-screen data, use the j, k, and t commands described in top(1). Note that the system- and memory-data displays are present in each screen of multiple-screen process data.

Arunvijai_4
Honored Contributor

Re: performance impact measurement

Hi Shiv,

Usually, we use "top" and "sar" for performance measurement as part of P&R testing. Watch "top" for some time before and after installing your app. It should help.

-Arun
"A ship in the harbor is safe, but that is not what ships are built for"
Joseph Loo
Honored Contributor

Re: performance impact measurement

hi shiv,

these are the commands I use:

# sar -d
for disk

# sar -u
or
# top
for CPU

# vmstat
e.g. vmstat 3 10 --> 10 samples at 3-second intervals
or the UNIX95 ps options, for memory.

regards.
what you do not see does not mean you should not believe
Yogeeraj_1
Honored Contributor

Re: performance impact measurement

hi shiv,

also note that since performance on any system is not evenly distributed over time, you may have to take several snapshots during the day prior to the installation (or even over several days, depending on your environment)

these baselines will help in determining performance degradation (if any)


You can also base yourself on what Oracle recommends, i.e. three types of tests:
1. Minimal testing
2. Functional testing
3. Integration testing

hope this helps too!

kind regards
yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave. (Calvin Coolidge)
Trond Haugen
Honored Contributor

Re: performance impact measurement

Take a look at the sa1 manpage.
In my view it is useful to capture all the data for, say, a whole day, and keep it. You never know which sar data you may want to look at later, and you will have ALL the data for each timestamp. (Sure, you can run sar with all options, but reading that output is not easy.)
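The sa1 approach can be driven from cron; a sketch of the classic crontab entries for user adm (the /usr/lbin/sa paths follow HP-UX convention, but verify them against the sa1(1M) and sa2(1M) manpages on your box). sa1 appends binary records to the daily file under /var/adm/sa, and sa2 writes the human-readable daily report:

```shell
# crontab for user adm (sketch; confirm paths with sa1(1M)/sa2(1M))
# take a system activity snapshot every 20 minutes, around the clock
0,20,40 * * * * /usr/lbin/sa/sa1
# write the daily report covering business hours, Mon-Fri
5 18 * * 1-5 /usr/lbin/sa/sa2 -s 8:00 -e 18:01 -i 1200 -A
```

Any day's binary file can then be replayed later with `sar -f /var/adm/sa/saDD`, choosing whatever options you need at reading time.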

Regards,
Trond Haugen
LinkedIn
Raj D.
Honored Contributor

Re: performance impact measurement

Hi Shiv ,

Here are the tools to concentrate on:

# sar -u -M 5                     [ CPU utilisation, all processors ]
# iostat 5 5                      [ disk I/O utilisation ]
# vmstat 5 5                      [ swap/paging utilisation ]
# netstat                         [ network utilisation and details ]
# top                             [ top processes and other details ]
# glance                          [ memory, CPU, disk, processes, etc. ]
# glance -t                       [ checking the system process table ]
# sar -v 5 5                      [ system process table utilisation ]
# lsof -i -U                      [ UNIX and Internet open ports ]
# ps -el | sort -r -k10 | more    [ biggest processes consuming memory ]

* lsof can be downloaded from
http://hpux.cs.utah.edu

Also you can run a script to gather data over a period of time.
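A sketch of such a collection script (the script name, paths, and the snapshot helper are my own stand-ins, not from the thread): each call appends a timestamped run of a command to a per-tool log under $OUTDIR, so the pre-install and post-install periods can be compared later.

```shell
#!/bin/sh
# perfsnap.sh (hypothetical) -- append timestamped tool output to
# per-tool logs, e.g. from cron every 10 minutes.
OUTDIR=${OUTDIR:-/var/tmp/perfdata}
mkdir -p "$OUTDIR"

snapshot() {
    NAME=`echo "$1" | awk '{print $1}'`      # first word names the log
    {
        echo "==== `date '+%Y-%m-%d %H:%M:%S'` ===="
        eval "$1"                            # run the full command line
    } >> "$OUTDIR/$NAME.log" 2>&1
}

snapshot "date"                  # harmless demo entry
# On HP-UX you would collect the usual suspects instead:
# snapshot "vmstat 1 2"
# snapshot "sar -u 1 1"
# snapshot "netstat -in"
```

Diffing or graphing /var/tmp/perfdata/*.log from before and after the install then gives a rough picture of the change.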

Cheers,
Raj.
" If u think u can , If u think u cannot , - You are always Right . "
Hein van den Heuvel
Honored Contributor

Re: performance impact measurement


Of course I agree with most prior suggestions. You know... measure memory usage with and without the new software active. Measure average CPU usage before and after the new software. Measure the capacity usage before and after.

But maybe, just maybe, you should also look at it from a different angle. Look at it from the angle you in fact imply by the title: What is the IMPACT of the new software?

One assumes the title means 'impact on prior system performance'. For that, I believe you need to come up with a benchmark transaction in your existing software. Maybe the time it takes to run a particular report? The time to do a backup? The average response time of your OLTP transactions?
It is possible that the new software hogs a resource (network? a specific disk?) which causes the existing applications to slow down and perhaps appear to use less CPU (per elapsed time interval).
Maybe the new software touches files on a drive with lots of free space but near maximum I/O capacity, and its much more random I/O pattern pushes that drive over the edge (the knee in its performance curve).
Maybe it is a filesystem allocation interlock that suddenly starts to hurt.

I am just suggesting that you not only look at raw numbers like CPU%, MEM%, and IOBUSY%, but also try to put those in the context of 'work units done' and response time measurements.
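The benchmark-transaction idea can be sketched as a small timing wrapper (script, job, and log path are my own stand-ins, not from the thread): run the same representative job a few times, keep the numbers, then rerun the script after the install and compare. Note that `date +%s` is a GNU/newer-shell feature; on older HP-UX you would use timex(1) instead.

```shell
#!/bin/sh
# bench.sh (hypothetical) -- time a representative job RUNS times
JOB="sleep 1"            # stand-in for your real report/backup/query
RUNS=3
LOG=/tmp/bench.log
: > "$LOG"               # truncate the log
i=1
while [ $i -le $RUNS ]; do
    START=`date +%s`
    eval "$JOB"
    END=`date +%s`
    echo "run $i: `expr $END - $START`s" >> "$LOG"
    i=`expr $i + 1`
done
cat "$LOG"
```

Run it before the install, keep the log, run it again afterwards: a jump in elapsed time for the same job is exactly the kind of impact the raw CPU%/MEM% columns can hide.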

fwiw,
Hein.