Operating System - HP-UX

Load Generator / Benchmark for HP-UX

 
Steve Bonds
Trusted Contributor

Load Generator / Benchmark for HP-UX

Forum-dwellers:

I need to find something to measure disk I/O performance on an HP-UX system. I have the systems fully instrumented with Measureware so collecting the stats isn't a problem, but I'd like to ensure that what I use for a benchmark is consistent.

So far I have not seen much available for HP-UX. It looks like my options are:

1) Bonnie (http://www.textuality.com/bonnie/)
2) Code something up myself

I'm not looking for anything too fancy-- though the ability to open and write to a raw logical volume would be helpful for insulating myself from the large buffer cache present on these hosts. This will be used for comparison between two systems and doesn't need to reflect any "real world" workload.
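For example, even something as simple as dd against a raw lvol character device would probably do. This is just a sketch -- vg01/lvol_scratch is a made-up scratch lvol whose contents I don't mind destroying:

 # write 1 GB to a scratch raw lvol (destructive -- this overwrites the lvol!)
 dd if=/dev/zero of=/dev/vg01/rlvol_scratch bs=1024k count=1024

 # read it back through the character device, bypassing the host buffer cache
 dd if=/dev/vg01/rlvol_scratch of=/dev/null bs=1024k count=1024

Wrapping those in timex and dividing the MB moved by the elapsed seconds would be plenty for a two-system comparison.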

What do you all use for disk I/O benchmarking?

-- Steve

 

 

P.S. This thread has been moved from General to HP-UX > sysadmin. - HP Forum Moderator

1 REPLY
pooderbill
Valued Contributor

Re: Load Generator / Benchmark for HP-UX

There's really only one repeatable metric for plain old disks -- dd, which serially reads the selected disk without the overhead of a filesystem directory and files. I have attached diskperf, which just encapsulates the dd command and reports the result as a one-liner. It won't write anything, since writing can get complicated with large data tests.

 

   Usage: diskperf [-l log] [-a] [-r <KB>] [-v] <MB> <DSF/lvol> [<DSF/lvol>... ]
   where: -l <log> = log file for the results
          -a = always append to the log, otherwise a new log is started
          -r <KB> = record size in KB, where 1 KB is a 1024-byte block
             (min=1 KB, default=1024 KB, max=16384 KB)
          -v = verbose messages

   and:   MB  = megabytes to read (1 MB = 1024*1024 bytes) or "all"
                The keyword "all" reads the entire disk/LUN
          DSF = full path to LUN device file, mountpoint, or lvol


   Will run a dd read test on the raw device or lvol and report on
   performance.  Optionally write the results to a logfile.

 

So this tool can read a raw device file or a raw lvol and report the results.
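If you'd rather see the guts than run the attachment, they boil down to a few lines of shell. This is only a rough sketch -- it assumes the POSIX shell's SECONDS counter, so it has whole-second resolution and integer math, which the attached script handles more carefully:

 #!/usr/bin/sh
 # mini_diskperf: time a serial dd read and report MB/sec (sketch, not the attachment)
 # usage: mini_diskperf <MB> <raw DSF or raw lvol>
 MB=$1
 DEV=$2
 START=$SECONDS                          # shell's running seconds counter
 dd if=$DEV of=/dev/null bs=1024k count=$MB 2>/dev/null
 ELAPSED=$(( SECONDS - START ))
 [ $ELAPSED -eq 0 ] && ELAPSED=1         # avoid divide-by-zero on very fast reads
 echo "$DEV: read $MB MB in $ELAPSED secs = $(( MB / ELAPSED )) MB/sec"

The attachment adds the record-size, logging, and "all" handling on top of that.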

In its simplest form:

 

# diskperf 500 /dev/disk/disk1
/dev/rdisk/disk1: read 500 MB in 3.3 secs (500 recs @ 1024 KB) = 151.5 MB/sec

 

Use a very small (10 KB) record size:

 

 # diskperf -r 10 500 /dev/disk/disk1
/dev/rdisk/disk1: read 500 MB in 23.9 secs (51200 recs @ 10 KB) = 20.9 MB/sec

 

Use a medium size (100 KB) record size:

 

 # diskperf -r 100 500 /dev/disk/disk1
/dev/rdisk/disk1: read 500 MB in 4.9 secs (5120 recs @ 100 KB) = 102.0 MB/sec

 

Use a very large (4 MB) record size:

 

 # diskperf -r 4096 500 /dev/disk/disk1
/dev/rdisk/disk1: read 500 MB in 3.3 secs (125 recs @ 4096 KB) = 151.5 MB/sec

 

As you can see, with this specific JBOD disk, the read rate tops out around 150 MB/sec with a 1 MB record size. As expected, short and very short records will slow the throughput due to rotational delays.
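A quick way to see that curve on your own spindles is to sweep the record size in a loop (disk1 below is just an example device file):

 # sweep record sizes from 10 KB up to 4 MB against the same 500 MB read
 for KB in 10 100 1024 4096
 do
     diskperf -r $KB 500 /dev/disk/disk1
 done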

 

For a disk array, the array's read cache will virtualize the disks behind the controller, so the read needs to be long enough to fill the cache. For that, you'll need to read dozens of GBytes. Use the "all" keyword rather than a MB number to read the whole LUN or lvol. You can easily experiment with load balancing on alternate paths (where available).
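For example, to cover a whole LUN down two paths and keep the numbers in one log (the c4/c6 legacy DSFs below are hypothetical alternate paths to the same LUN -- substitute your own):

 # read the entire LUN via the first path, starting a fresh log
 diskperf -l /tmp/diskperf.log all /dev/dsk/c4t0d1

 # read it again via the alternate path, appending to the same log
 diskperf -l /tmp/diskperf.log -a all /dev/dsk/c6t0d1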