SarCheck(TM): Automated Analysis of HP-UX sar and ps data

(English text version 4.02)


This is an analysis of the data contained in the file /tmp/rpt. The data was collected on 05/31/2001, from 08:00:00 to 16:40:03, from the HP9000/800/V2600 system 'vsuncom'. There were 26 data records used to produce this analysis. The operating system used to produce the sar report was HP-UX Release B.11.00. 32 processors are present. 32 gigabytes of memory are present.

Data collected by the ps -elf command on 05/31/2001 from 08:00:00 to 16:40:03, and stored in the file /usr/local/ps/20010531, will also be analyzed.

SUMMARY

When the data was collected, no CPU bottleneck could be detected. At least one disk drive was busy enough to suggest an intermittent or impending performance bottleneck. A change to at least one tunable parameter has been recommended. Limits to future growth have been noted in the Capacity Planning section.

At least one possible memory leak has been detected. At least one possible runaway process has been detected. A suspiciously large process has been detected. See the Resource Analysis section for details.

RECOMMENDATIONS SECTION

All recommendations contained in this report are based solely on the conditions which were present when the performance data was collected. It is possible that conditions which were not present at that time may cause some of these recommendations to result in worse performance. To minimize this risk, analyze data from several different days, implement only regularly occurring recommendations, and implement them one at a time.

A CPU upgrade is not recommended because the CPU had significant unused capacity during the monitoring period.

Change the value of 'nfile' from 50010 to 57562. The parameter 'nfile' sets the size of the file descriptor table, which determines the total number of files which can be simultaneously open on the system. This change will use roughly 0.29 additional megabytes of memory. This approximation does not take into account the memory impact of changes to any other parameters whose values are dependent on this value. The accuracy of this approximation is also limited by the fact that the actual size of the kernel changes in 4 KB increments.
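
The arithmetic behind this estimate can be reproduced directly. A minimal sketch in Python, assuming roughly 40 bytes per file table entry (a figure inferred from the numbers above, not taken from HP-UX source):

    # Rough estimate of the kernel memory added by enlarging the file table.
    # BYTES_PER_ENTRY is an assumption (inferred from the 0.29 MB figure);
    # the true size of a file table entry depends on the HP-UX release.
    BYTES_PER_ENTRY = 40

    old_nfile, new_nfile = 50010, 57562
    extra_mb = (new_nfile - old_nfile) * BYTES_PER_ENTRY / 1048576.0
    print("roughly %.2f additional megabytes" % extra_mb)
    # -> roughly 0.29 additional megabytes, before rounding to 4 KB kernel pages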

No disk recommendations have been made because only a slight or intermittent bottleneck was seen.

Use the System Administration Manager (SAM) to change the values of tunable parameters. More information on the SAM utility and relinking the kernel is available in the System Administration Tasks manual.

RESOURCE ANALYSIS SECTION

Average CPU utilization was only 39.4 percent. This indicates that spare CPU capacity exists. If any performance problems were seen during the entire monitoring period, they were not caused by a lack of CPU power. CPU utilization peaked at 80 percent from 14:40:01 to 15:00:01. Peak resource utilization statistics can be used to help understand performance problems. If performance was worst during the period of peak CPU utilization, then a performance bottleneck may be the CPU.
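
These averages and peaks can be reproduced from the raw sar output. The sketch below is a rough illustration, not part of SarCheck; it assumes the common HP-UX sar -u layout of time, %usr, %sys, %wio, and %idle columns, and the capture file name sar_u.txt is hypothetical.

    # Minimal sketch: compute average and peak CPU busy (%usr + %sys)
    # from captured `sar -u` output. Assumes the usual
    # "HH:MM:SS %usr %sys %wio %idle" layout; adjust indexes if needed.
    import re

    busy_by_time = {}
    with open("sar_u.txt") as f:          # captured sar -u output (hypothetical file)
        for line in f:
            fields = line.split()
            if len(fields) == 5 and re.match(r"\d\d:\d\d:\d\d$", fields[0]):
                try:
                    usr, sys_ = float(fields[1]), float(fields[2])
                except ValueError:
                    continue              # skip the column-header line
                busy_by_time[fields[0]] = usr + sys_

    if busy_by_time:
        avg = sum(busy_by_time.values()) / len(busy_by_time)
        peak_time = max(busy_by_time, key=busy_by_time.get)
        print("average busy %.1f%%, peak %.0f%% ending at %s"
              % (avg, busy_by_time[peak_time], peak_time))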

The CPU was idle (neither busy nor waiting for I/O) and had nothing to do an average of 44.5 percent of the time. If overall performance was good, this means that on average, the CPU was lightly loaded. If performance was generally unacceptable, the bottleneck may have been caused by remote file I/O which cannot be directly measured with sar and therefore cannot be considered by SarCheck.

The CPU was waiting for I/O an average of 16.1 percent of the time. This suggests that the system may have been somewhat I/O bound. The time that the system spent waiting for I/O peaked at 32 percent during multiple time intervals. Disk statistics indicate that some intermittent bottlenecks may have been present.

The syncer daemon used 2.5 percent of the CPU from 08:00:00 to 16:40:03. The syncer is responsible for writing data from the buffer cache to disk. Its level of activity is not high enough to cause a problem.

This system's buffer cache is dynamic, meaning that its size is determined by the amount of free memory on the system. The average cache hit ratio was 99.5 percent for logical reads and 99.7 percent for logical writes. These ratios are high enough to indicate that filesystem buffer sizes do not need to be increased. Based on the current values of dbc_min_pct and dbc_max_pct, the buffer cache can range in size from 655.3 to 982.9 megabytes of memory.
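
The size range follows directly from physical memory and the two tunables. A worked sketch, assuming dbc_min_pct is 2 and dbc_max_pct is 3 (values inferred from the figures above, not read from this system):

    # Dynamic buffer cache bounds: a percentage of physical memory.
    # The dbc_min_pct/dbc_max_pct values below are inferred assumptions.
    mem_mb = 32 * 1024          # 32 GB of physical memory
    dbc_min_pct = 2
    dbc_max_pct = 3

    print("cache range: %.1f to %.1f MB"
          % (mem_mb * dbc_min_pct / 100.0, mem_mb * dbc_max_pct / 100.0))
    # -> cache range: 655.4 to 983.0 MB (the report shows 655.3/982.9 after truncation)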

No evidence of an overall memory shortage was seen in the following statistics: The swap queue was occupied an average of 0 percent of the time. The average swap out rate was 0.00 per second.

The fs_async flag is not set. This may result in reduced disk performance, but keeps filesystem data structures consistent in the event of a system crash. This option is currently in the state recommended for production systems.

No unusual configurable parameter values were seen in those parameters which relate to the process accounting system. The current values of acctsuspend and acctresume are unlikely to have an impact on system performance.

The inode cache did not overflow, but was completely full in 3.8 percent of the samples collected during the monitoring period. With UNIX operating systems such as HP-UX which use the inode table as a cache, this indicates that the inode cache may actually be somewhat larger than necessary. Since this system did not seem to have a memory bottleneck, the extra memory used by this possibly oversized inode cache should not be a problem.

During part of the monitoring period, the file table was nearly full. Specific recommendations for increasing the size of this table have been made in the recommendations section. Peak table usage statistics (max used/table size) as reported by sar: Process table: 1679/8020. Open file table: 46050/50010.

The process table, controlled by the nproc parameter, was grossly oversized. There is nothing to gain by reducing the size of this table, so no change to the parameter 'nproc' is recommended.

The average rate of System V semaphore calls was 157.4 per second. System V semaphore activity peaked at a rate of 422.36 per second from 16:00:02 to 16:20:00. Peak resource utilization statistics can be used to help understand performance problems. If performance was worst during the period of peak semaphore activity, then that activity may be a performance bottleneck and application or database activity related to semaphore usage should be examined more closely. No problems have been seen, and no changes have been recommended for System V semaphore parameters. Note that SarCheck only checks these parameters' relationships to each other, since semaphore usage data is not available. The algorithms used by SarCheck to check these relationships are available in the help text of SAM.

The average rate of System V message calls was 98.8 per second. System V message activity peaked at a rate of 103.53 per second from 13:20:00 to 13:40:01. Peak resource utilization statistics can be used to help understand performance problems. If performance was worst during the period of peak message activity, then that activity may be a performance bottleneck and application or database activity related to message usage should be examined more closely. No problems have been seen, and no changes have been recommended for System V message parameters. Note that SarCheck only checks these parameters' relationships to each other, since message usage data is not available. The algorithms used by SarCheck to check these relationships are available in the help text of SAM, and in the file /usr/include/sys/msg.h.

The value of (msgssz * msgseg) is not less than 128 KB. Nothing in the HP-UX documentation mentions this limit, but a number of older performance books indicate that exceeding it can be a problem.

The ratio of exec to fork system calls was 0.96. This indicates that PATH variables are efficient: a ratio close to 1.0 means that most new processes needed only a single exec call, without repeated attempts to locate executables along the PATH.

Note: 165 disks were present. By default, the presence of more than 12 disks causes SarCheck to report on only the busiest disks. This is meant to control the verbosity of this report. To include all disks in the report, use the -d option.

The -dtoo switch has been used to format disk statistics into the following table.

Disk Device Statistics

Disk Device   Avg Pct Busy   Peak Pct Busy   Queue Depth When Occupied   Avg Service Time (ms)
c2t6d0        38.2           57.0            1.6                         24.4
c0t6d0        46.0           65.2            2.0                         27.1
c19t0d4       17.3           83.3            3.8                          7.0
c22t1d2       16.8           80.9            3.6                          6.7
c13t1d3       16.6           82.9            4.0                          6.8
c23t0d3       15.8           77.8            3.2                          6.1
c19t0d7       21.2           37.4            0.5                          3.7
c20t0d6       22.2           39.7            0.5                          4.1
c13t1d5       22.8           39.3            0.5                          4.3
c23t0d5       22.6           38.5            0.5                          4.2
c16t3d7       13.7           62.4            0.6                         16.2
c11t3d5       11.8           52.6            1.6                         21.3
c17t2d7       10.9           55.4            1.5                         20.7
c24t0d3        9.1           55.7            1.3                         16.5
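
How a summary like this can be derived from raw sar -d data is sketched below. The 50 percent busy threshold mirrors the wording used in the per-disk paragraphs that follow, but the column layout assumed here (time, device, %busy, avque, r+w/s, blks/s, avwait, avserv) and the capture file name sar_d.txt are assumptions, not SarCheck internals.

    # Sketch: summarize captured `sar -d` samples per device and flag
    # disks that exceed 50% busy in any interval.
    from collections import defaultdict

    samples = defaultdict(list)               # device -> [(busy, avque, avserv)]
    with open("sar_d.txt") as f:              # captured sar -d output (hypothetical file)
        for line in f:
            fields = line.split()
            # keep only timestamped data rows for cXtYdZ devices
            if len(fields) == 8 and fields[1].startswith("c"):
                busy, avserv = float(fields[2]), float(fields[7])
                samples[fields[1]].append((busy, float(fields[3]), avserv))

    for dev, rows in sorted(samples.items()):
        avg_busy = sum(r[0] for r in rows) / len(rows)
        peak_busy = max(r[0] for r in rows)
        avg_serv = sum(r[2] for r in rows) / len(rows)
        flag = "  <- possible intermittent bottleneck" if peak_busy > 50.0 else ""
        print("%-10s avg %5.1f%%  peak %5.1f%%  avserv %5.1f ms%s"
              % (dev, avg_busy, peak_busy, avg_serv, flag))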

The disk device c2t6d0 was busy an average of 38.2 percent of the time and had an average queue depth of 1.6 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 16:20:00 to 16:40:03, the disk was 57.0 percent busy. Peak disk busy statistics can be used to help understand performance problems. If performance was worst when the disk was busiest, then a performance bottleneck may be that disk. The average service time reported for this device and its accompanying disk subsystem was 24.4 milliseconds. This is somewhat slow for a modern disk drive, and the disappointing performance may be due to the disk or its controller. Service time is the delay between the time a request was sent to a device and the time that the device signaled completion of the request.

The disk device c0t6d0 was busy an average of 46.0 percent of the time and had an average queue depth of 2.0 (when occupied). This disk device was occasionally more than 50.0 percent busy, which indicates the possibility of an intermittent disk I/O bottleneck that may cause periods of performance degradation. During the peak interval from 16:20:00 to 16:40:03, the disk was 65.2 percent busy. The average service time reported for this device and its accompanying disk subsystem was 27.1 milliseconds. This is very slow by modern standards, and the poor performance may be due to either the disk or its controller.

The disk device c19t0d4 was busy an average of 17.3 percent of the time and had an average queue depth of 3.8 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 13:20:00 to 13:40:01, the disk was 83.3 percent busy. The average service time reported for this device and its accompanying disk subsystem was 7.0 milliseconds. This is relatively fast.

The disk device c22t1d2 was busy an average of 16.8 percent of the time and had an average queue depth of 3.6 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 13:20:00 to 13:40:01, the disk was 80.9 percent busy. The average service time reported for this device and its accompanying disk subsystem was 6.7 milliseconds. This is relatively fast.

The disk device c13t1d3 was busy an average of 16.6 percent of the time and had an average queue depth of 4.0 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 13:20:00 to 13:40:01, the disk was 82.9 percent busy. The average service time reported for this device and its accompanying disk subsystem was 6.8 milliseconds. This is relatively fast.

The disk device c23t0d3 was busy an average of 15.8 percent of the time and had an average queue depth of 3.2 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 13:40:01 to 14:00:00, the disk was 77.8 percent busy. The average service time reported for this device and its accompanying disk subsystem was 6.1 milliseconds. This is relatively fast.

The disk device c19t0d7 was busy an average of 21.2 percent of the time and had an average queue depth of 0.5 (when occupied). This indicates that the device is not a performance bottleneck. The average service time reported for this device and its accompanying disk subsystem was 3.7 milliseconds. This is indicative of a very fast disk or a disk controller with cache.

The disk device c20t0d6 was busy an average of 22.2 percent of the time and had an average queue depth of 0.5 (when occupied). This indicates that the device is not a performance bottleneck. The average service time reported for this device and its accompanying disk subsystem was 4.1 milliseconds. This is indicative of a very fast disk or a disk controller with cache.

The disk device c13t1d5 was busy an average of 22.8 percent of the time and had an average queue depth of 0.5 (when occupied). This indicates that the device is not a performance bottleneck. The average service time reported for this device and its accompanying disk subsystem was 4.3 milliseconds. This is indicative of a very fast disk or a disk controller with cache.

The disk device c23t0d5 was busy an average of 22.6 percent of the time and had an average queue depth of 0.5 (when occupied). This indicates that the device is not a performance bottleneck. The average service time reported for this device and its accompanying disk subsystem was 4.2 milliseconds. This is indicative of a very fast disk or a disk controller with cache.

The disk device c16t3d7 was busy an average of 13.7 percent of the time and had an average queue depth of 0.6 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 15:20:01 to 15:40:01, the disk was 62.4 percent busy. The average service time reported for this device and its accompanying disk subsystem was 16.2 milliseconds. This is somewhat slow for a modern disk drive, and the disappointing performance may be due to the disk or its controller.

The disk device c11t3d5 was busy an average of 11.8 percent of the time and had an average queue depth of 1.6 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 09:20:01 to 09:40:01, the disk was 52.6 percent busy. The average service time reported for this device and its accompanying disk subsystem was 21.3 milliseconds. This is somewhat slow for a modern disk drive, and the disappointing performance may be due to the disk or its controller.

The disk device c17t2d7 was busy an average of 10.9 percent of the time and had an average queue depth of 1.5 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 09:40:01 to 10:00:00, the disk was 55.4 percent busy. The average service time reported for this device and its accompanying disk subsystem was 20.7 milliseconds. This is somewhat slow for a modern disk drive, and the disappointing performance may be due to the disk or its controller.

The disk device c24t0d3 was busy an average of 9.1 percent of the time and had an average queue depth of 1.3 (when occupied). This indicates that the device is not a performance bottleneck. During the peak interval from 09:40:01 to 10:00:00, the disk was 55.7 percent busy. The average service time reported for this device and its accompanying disk subsystem was 16.5 milliseconds. This is somewhat slow for a modern disk drive, and the disappointing performance may be due to the disk or its controller.

Unusually large process size seen in /opt/perf/bin/midaemon, owned by root, pid 1213. The size of this process was 16743 pages, or 65.402 megabytes of memory.

CPU usage seen in libio_optical, owned by amass, pid 1688. Between 15:20:01 and 16:20:00, 1541 seconds of CPU time were used. CPU utilization by this process averaged 42.82 percent of a single processor during that interval.
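
Each per-process utilization figure is simply the ratio of CPU seconds consumed to wall-clock seconds in the interval, as this worked sketch for pid 1688 shows:

    # Worked example for pid 1688: CPU seconds consumed between two
    # ps -elf snapshots, divided by the wall-clock interval length.
    cpu_seconds = 1541                  # growth in cumulative CPU time
    interval = (16 * 3600 + 20 * 60) - (15 * 3600 + 20 * 60 + 1)   # 15:20:01 -> 16:20:00

    print("%.2f%% of one processor" % (100.0 * cpu_seconds / interval))
    # -> 42.82% of one processor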

CPU usage seen in libio_optical, owned by amass, pid 1687. Between 09:40:01 and 10:20:01, 699 seconds of CPU time were used. CPU utilization by this process averaged 29.12 percent of a single processor during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 21656. Between 08:00:00 and 16:40:03, this process grew from 0 to 5397 blocks. Memory usage grew at an average rate of 622.7 blocks/hr during that interval.
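
Each growth rate reported here is the change in process size divided by the elapsed hours, as in this worked sketch for pid 21656; SarCheck's actual leak heuristic is more involved and is not published.

    # Worked example for pid 21656: average growth rate of the process
    # size ("blocks" in this report) between ps -elf snapshots.
    start_blocks, end_blocks = 0, 5397
    elapsed_s = (16 * 3600 + 40 * 60 + 3) - (8 * 3600)   # 08:00:00 -> 16:40:03

    rate = (end_blocks - start_blocks) / (elapsed_s / 3600.0)
    print("%.1f blocks/hr" % rate)      # -> 622.7 blocks/hr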

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 20236. Between 08:00:00 and 10:20:01, this process grew from 0 to 6641 blocks. Memory usage grew at an average rate of 2845.8 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 1803. Between 08:00:00 and 12:20:01, this process grew from 0 to 6641 blocks. Memory usage grew at an average rate of 1532.4 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 3571. Between 08:00:00 and 11:40:01, this process grew from 0 to 6693 blocks. Memory usage grew at an average rate of 1825.2 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 469. Between 08:00:00 and 10:20:01, this process grew from 0 to 6693 blocks. Memory usage grew at an average rate of 2868.1 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 4064. Between 08:00:00 and 10:20:01, this process grew from 0 to 6613 blocks. Memory usage grew at an average rate of 2833.8 blocks/hr during that interval.

CPU usage seen in oraclePBSCS, owned by oracle, pid 15478. Between 08:00:00 and 08:40:00, 2278 seconds of CPU time were used. CPU utilization by this process averaged 94.92 percent of a single processor during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 18626. Between 08:00:00 and 10:00:00, this process grew from 0 to 6605 blocks. Memory usage grew at an average rate of 3302.5 blocks/hr during that interval.

A possible memory leak was seen in ora_pmon_PRTX, owned by oracle, pid 10451. Between 08:00:00 and 15:40:01, this process grew from 0 to 5593 blocks. Memory usage grew at an average rate of 729.5 blocks/hr during that interval.

Unusually large process size seen in /opt/bscs/batch/bin/rlh, owned by bscs, pid 29980. The size of this process was 49406 pages, or 192.992 megabytes of memory.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 7995. Between 08:00:00 and 14:40:01, this process grew from 0 to 5341 blocks. Memory usage grew at an average rate of 801.1 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 27546. Between 08:00:00 and 12:20:01, this process grew from 0 to 6693 blocks. Memory usage grew at an average rate of 1544.4 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 25348. Between 08:20:00 and 10:20:01, this process grew from 0 to 5333 blocks. Memory usage grew at an average rate of 2666.1 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 12050. Between 08:20:00 and 12:40:00, this process grew from 0 to 6693 blocks. Memory usage grew at an average rate of 1544.5 blocks/hr during that interval.

A possible memory leak was seen in ag_req, owned by bscs, pid 7420. Between 08:20:00 and 16:40:03, this process grew from 112 to 4117 blocks. Memory usage grew at an average rate of 480.6 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 9305. Between 08:20:00 and 14:40:01, this process grew from 5305 to 6585 blocks. Memory usage grew at an average rate of 202.1 blocks/hr during that interval.

CPU usage seen in oraclePRTX, owned by oracle, pid 28429. Between 08:40:00 and 10:00:00, 1638 seconds of CPU time were used. CPU utilization by this process averaged 34.12 percent of a single processor during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 26840. Between 09:00:00 and 13:20:00, this process grew from 5505 to 6726 blocks. Memory usage grew at an average rate of 281.8 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 27542. Between 09:00:00 and 10:20:01, this process grew from 5305 to 6613 blocks. Memory usage grew at an average rate of 980.8 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 260. Between 09:40:01 and 10:20:01, this process grew from 5301 to 5445 blocks. Memory usage grew at an average rate of 216.0 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 18175. Between 09:20:01 and 10:40:01, this process grew from 0 to 5321 blocks. Memory usage grew at an average rate of 3990.8 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 12825. Between 09:20:01 and 10:00:00, this process grew from 0 to 6681 blocks. Memory usage grew at an average rate of 10025.7 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 6494. Between 09:20:01 and 13:20:00, this process grew from 0 to 6697 blocks. Memory usage grew at an average rate of 1674.4 blocks/hr during that interval.

CPU usage seen in oraclePRTX, owned by oracle, pid 27848. Between 09:20:01 and 10:40:01, 1598 seconds of CPU time were used. CPU utilization by this process averaged 33.29 percent of a single processor during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 14904. Between 09:20:01 and 10:20:01, this process grew from 0 to 6689 blocks. Memory usage grew at an average rate of 6689.0 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 5543. Between 09:20:01 and 10:00:00, this process grew from 0 to 6605 blocks. Memory usage grew at an average rate of 9911.6 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 1741. Between 09:20:01 and 10:00:00, this process grew from 5301 to 5437 blocks. Memory usage grew at an average rate of 204.1 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 1519. Between 09:20:01 and 12:00:00, this process grew from 0 to 6697 blocks. Memory usage grew at an average rate of 2511.6 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 4879. Between 09:40:01 and 11:40:01, this process grew from 0 to 6689 blocks. Memory usage grew at an average rate of 3344.5 blocks/hr during that interval.

CPU usage seen in ora_p007_PRTX, owned by oracle, pid 24913. Between 09:40:01 and 10:40:01, 744 seconds of CPU time were used. CPU utilization by this process averaged 20.67 percent of a single processor during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 14046. Between 09:40:01 and 10:20:01, this process grew from 0 to 6689 blocks. Memory usage grew at an average rate of 10033.5 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 11636. Between 09:40:01 and 10:20:01, this process grew from 0 to 6609 blocks. Memory usage grew at an average rate of 9913.5 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 16630. Between 09:40:01 and 10:20:01, this process grew from 5305 to 6129 blocks. Memory usage grew at an average rate of 1236.0 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 21215. Between 09:40:01 and 12:40:00, this process grew from 5397 to 6693 blocks. Memory usage grew at an average rate of 432.0 blocks/hr during that interval.

A possible memory leak was seen in pbfe, owned by lockbox, pid 16102. Between 10:00:00 and 11:20:01, this process grew from 649 to 7209 blocks. Memory usage grew at an average rate of 4919.0 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 1833. Between 10:00:00 and 14:20:00, this process grew from 0 to 6693 blocks. Memory usage grew at an average rate of 1544.5 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 29655. Between 10:00:00 and 14:20:00, this process grew from 5317 to 7005 blocks. Memory usage grew at an average rate of 389.5 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 11381. Between 10:00:00 and 12:40:00, this process grew from 0 to 6613 blocks. Memory usage grew at an average rate of 2479.9 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 12096. Between 11:20:01 and 16:40:03, this process grew from 0 to 5385 blocks. Memory usage grew at an average rate of 1009.6 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 12048. Between 13:00:00 and 14:20:00, this process grew from 0 to 6641 blocks. Memory usage grew at an average rate of 4980.8 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 8974. Between 10:00:00 and 11:40:01, this process grew from 0 to 6689 blocks. Memory usage grew at an average rate of 4012.7 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 1232. Between 10:00:00 and 13:00:00, this process grew from 0 to 6613 blocks. Memory usage grew at an average rate of 2204.3 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 3486. Between 10:20:01 and 11:40:01, this process grew from 0 to 6693 blocks. Memory usage grew at an average rate of 5019.8 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 21646. Between 10:20:01 and 13:20:00, this process grew from 0 to 6641 blocks. Memory usage grew at an average rate of 2213.9 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 27217. Between 10:20:01 and 13:40:01, this process grew from 0 to 6613 blocks. Memory usage grew at an average rate of 1983.9 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 18409. Between 10:20:01 and 11:40:01, this process grew from 0 to 5333 blocks. Memory usage grew at an average rate of 3999.8 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 6049. Between 10:20:01 and 11:00:00, this process grew from 0 to 6693 blocks. Memory usage grew at an average rate of 10043.7 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 4444. Between 10:20:01 and 11:00:00, this process grew from 0 to 6693 blocks. Memory usage grew at an average rate of 10043.7 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 21843. Between 10:20:01 and 11:40:01, this process grew from 0 to 5337 blocks. Memory usage grew at an average rate of 4002.8 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 27892. Between 10:20:01 and 16:40:03, this process grew from 0 to 5333 blocks. Memory usage grew at an average rate of 842.0 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 27604. Between 10:40:01 and 12:00:00, this process grew from 5325 to 7213 blocks. Memory usage grew at an average rate of 1416.3 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 21629. Between 10:40:01 and 12:00:00, this process grew from 0 to 6621 blocks. Memory usage grew at an average rate of 4966.8 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 15076. Between 10:40:01 and 11:40:01, this process grew from 5317 to 6613 blocks. Memory usage grew at an average rate of 1296.0 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 14583. Between 11:00:00 and 12:20:01, this process grew from 0 to 5337 blocks. Memory usage grew at an average rate of 4001.9 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 19630. Between 11:20:01 and 13:20:00, this process grew from 0 to 6609 blocks. Memory usage grew at an average rate of 3305.0 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 26466. Between 11:00:00 and 15:20:01, this process grew from 0 to 6641 blocks. Memory usage grew at an average rate of 1532.4 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 22604. Between 11:00:00 and 12:40:00, this process grew from 0 to 5437 blocks. Memory usage grew at an average rate of 3262.2 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 24617. Between 12:00:00 and 15:40:01, this process grew from 5325 to 6525 blocks. Memory usage grew at an average rate of 327.2 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 23385. Between 11:20:01 and 12:00:00, this process grew from 0 to 6641 blocks. Memory usage grew at an average rate of 9965.7 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 20152. Between 11:20:01 and 12:40:00, this process grew from 0 to 5433 blocks. Memory usage grew at an average rate of 4075.6 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 22711. Between 11:20:01 and 13:40:01, this process grew from 0 to 6641 blocks. Memory usage grew at an average rate of 2846.1 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 955. Between 13:20:00 and 14:40:01, this process grew from 0 to 6693 blocks. Memory usage grew at an average rate of 5018.7 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 14057. Between 11:20:01 and 12:20:01, this process grew from 0 to 6613 blocks. Memory usage grew at an average rate of 6613.0 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 13657. Between 11:20:01 and 12:20:01, this process grew from 0 to 5433 blocks. Memory usage grew at an average rate of 5433.0 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 21437. Between 11:20:01 and 15:00:01, this process grew from 0 to 6693 blocks. Memory usage grew at an average rate of 1825.4 blocks/hr during that interval.

Unusually large process size seen in bch, owned by bscs, pid 25380. The size of this process was 49502 pages, or 193.367 megabytes of memory.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 16349. Between 11:20:01 and 13:00:00, this process grew from 0 to 6641 blocks. Memory usage grew at an average rate of 3985.3 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 6285. Between 11:20:01 and 13:00:00, this process grew from 0 to 6609 blocks. Memory usage grew at an average rate of 3966.1 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 14350. Between 11:20:01 and 12:20:01, this process grew from 0 to 6613 blocks. Memory usage grew at an average rate of 6613.0 blocks/hr during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 8928. Between 11:20:01 and 12:40:00, this process grew from 0 to 6613 blocks. Memory usage grew at an average rate of 4960.8 blocks/hr during that interval.

CPU usage seen in ora_p001_PBSCS, owned by oracle, pid 1072. Between 11:40:01 and 13:00:00, 2497 seconds of CPU time were used. CPU utilization by this process averaged 52.03 percent of a single processor during that interval.

A possible memory leak was seen in oraclePBSCS, owned by oracle, pid 9843. Between 11:40:01 and 15:20:01, this process grew from 0 to 6693 blocks. Memory usage grew at an average rate of 1825.4 blocks/hr during that interval.

CPU usage seen in ora_p000_PBSCS, owned by oracle, pid 1040. Between 11:40:01 and 13:00:00, 2500 seconds of CPU time were used. CPU utilization by this process averaged 52.09 percent of a single processor during that interval.

CPU usage seen in oraclePBSCS, owned by oracle, pid 1158. Between 11:40:01 and 12:20:01, 664 seconds of CPU time were used. CPU utilization by this process averaged 27.67 percent of a single processor during that interval.

CPU usage seen in teh, owned by bscs, pid 1155. Between 11:40:01 and 12:20:01, 1435 seconds of CPU time were used. CPU utilization by this process averaged 59.79 percent of a single processor during that interval.

CAPACITY PLANNING SECTION

This section is designed to provide the user with a rudimentary linear capacity planning model and should be used for rough approximations only. These estimates assume that an increase in workload will affect the usage of all resources equally. These estimates should be used on days when the load is heaviest to determine approximately how much spare capacity remains at peak times.

Based on the limited data available in this single sar report, the system cannot support an increase in workload at peak times without some level of performance degradation. Since multiple resources appeared to be unable to support an additional workload, see the following paragraphs for additional information on the capacity of each resource. Implementation of some of the suggestions in the Recommendations section may help to increase the system's capacity.

The CPU can support an increase in workload of approximately 12 percent at peak times. The busiest disk can support a workload increase of approximately 0 percent at peak times. For more information on peak CPU and disk utilization, refer to the Resource Analysis section of this report.
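
These headroom figures follow from the linear model: how far a peak can grow before it reaches a saturation threshold. The sketch below reproduces them under assumed thresholds of 90 percent for the CPU (which yields the 12.5 percent figure in the summary table at the end of this report) and 80 percent for a disk; the thresholds SarCheck actually uses are not documented here.

    # Linear headroom: percent growth available before a peak utilization
    # reaches an assumed saturation threshold (clamped at zero).
    def headroom(peak_pct, threshold_pct):
        return max(0.0, 100.0 * (threshold_pct - peak_pct) / peak_pct)

    print("CPU:  %.1f%%" % headroom(80.0, 90.0))   # -> 12.5% (matches the summary table)
    print("Disk: %.1f%%" % headroom(83.3, 80.0))   # -> 0.0%; peak already beyond the assumed threshold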

The process table, controlled by the parameter 'nproc', can support at least a 100 percent increase in the number of entries. The file table, controlled by the parameter 'nfile', can support approximately a 0 percent increase in the number of entries.

Please note: In no event can Aurora Software Inc. be held responsible for any damages, including incidental or consequential damages, in connection with or arising out of the use or inability to use this software. All trademarks belong to their respective owners. Evaluation copy for: Telecorp PCS Inc. This software expires on 06/29/2001 (mm/dd/yyyy). SC9000 Code version: 4.02. Serial number: 00054022.

Thank you for trying this evaluation copy of SarCheck. To order a licensed version of this software, just type 'analyze9000 -o' at the prompt to produce the order form, and follow the instructions.

(c) copyright 1995-2001 by Aurora Software Inc., Plaistow NH, USA, All Rights Reserved. http://www.sarcheck.com

Statistics for system vsuncom

Statistic                         Value             Peak start      Peak end        Peak date
System model number               9000/800/V2600
Statistics collected on           05/31/2001
Average CPU utilization           39.4%
Peak CPU utilization              80%               14:40:01        15:00:01        05/31/2001
Average user CPU utilization      26.4%
Average sys CPU utilization       13.0%
Average waiting for I/O           16.1%
Average run queue depth           1.3
Peak run queue depth              1.4               Multiple peaks  Multiple peaks
Average swap queue occupancy      0.0%
Average swap out rate             0.00/sec
Average cache read hit ratio      99.5%
Average cache write hit ratio     99.7%
Disk device w/highest peak        c19t0d4
Avg pct busy for that disk        17.3%
Peak pct busy for that disk       83.3%             13:20:00        13:40:01        05/31/2001
Percent of process tbl used       20.9%
Process table overflows           No
Percent of file table used        92.1%
File table overflows              No
Inode cache pct of time full      3.8%
Inode cache overflows             No
Approx CPU capacity remaining     12.5%
Approx I/O bandwidth remaining    0.0%
Remaining process tbl capacity    100%+
Remaining file table capacity     0.0%
Can memory support add'l load     Yes