Operating System - HP-UX

Disk throughput testing tool/utility

 
SOLVED
kw_1
Contributor

Disk throughput testing tool/utility

We're looking for a tool or utility that will generate a high volume of disk I/O so that we can compare the throughput we get on different disk arrays. Does anyone know of any freeware etc. that we could use for this purpose?
In particular we're looking to compare an EMC array to a Compaq array.

6 REPLIES
Steven E. Protter
Exalted Contributor

Re: Disk throughput testing tool/utility

Poor man's route:
dd command

find /fsname -exec grep -l 'where are you' {} \;

Run multiple instances of both against filesystems and disks.
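A minimal sketch of that approach (device and file names are placeholders; point them at the arrays you want to compare):

# sequential read straight from a raw device, bypassing the filesystem
timex dd if=/dev/rdsk/c1t2d0 of=/dev/null bs=256k count=4096
# sequential write of a large file to the filesystem under test
timex dd if=/dev/zero of=/fsname/ddtest.out bs=256k count=4096

timex reports the elapsed time, so throughput is just bytes moved divided by elapsed seconds.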

We can't afford a real utility, so that's how I do it.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
G. Vrijhoeven
Honored Contributor

Re: Disk throughput testing tool/utility

Hi,

We copy a big file around, like the kernel ( /stand/vmunix ) or multiple kernels appended together, to generate some load. You can use glance to watch the I/Os (free for 60 days). I know it is not the same as data access from a database, but it will give you some idea.
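A rough sketch of that method (the mount points are placeholders for the two arrays being compared):

# build a large test file from several copies of the kernel
cat /stand/vmunix /stand/vmunix /stand/vmunix /stand/vmunix > /emc_fs/bigfile
# time the copy onto the other array
timex cp /emc_fs/bigfile /compaq_fs/bigfile

Run glance in another session while the copy runs to watch the per-disk I/O rates.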

HTH,

Gideon
doug mielke
Respected Contributor

Re: Disk throughput testing tool/utility

If you choose to do this yourself, beware of issuing I/Os against the same data repeatedly, or you'll only be testing the cache response of the array, not its physical response.


SEP's find approach avoids that pitfall for reads, since it touches many different files.
You may also want to reduce the size of the Unix buffer cache, to remove that latency (or advantage) from the picture.

We've used a script full of dd commands to do this to and from various filesystems.
We then run the script multiple times simultaneously to keep the cache changing.
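A rough sketch of such a script (paths and sizes are placeholders; make the total working set larger than your buffer cache):

#!/usr/bin/sh
# concurrent reads and writes against different files so cached blocks keep getting evicted
dd if=/fs1/big1 of=/dev/null bs=256k &
dd if=/fs2/big2 of=/dev/null bs=256k &
dd if=/dev/zero of=/fs1/scratch1 bs=256k count=4096 &
dd if=/dev/zero of=/fs2/scratch2 bs=256k count=4096 &
wait

Starting several copies of this, each against its own set of files, keeps the cache turning over.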

Henry Quek
Advisor

Re: Disk throughput testing tool/utility

Hi,

You might want to try a free SCSI Command Utility (SCU) available at http://www.bit-net.com/~rmiller/scu.html.

I've not really tried it yet but came upon it recently when I was looking for a tool like this.

Cheers,
Henry
T G Manikandan
Honored Contributor
Solution

Re: Disk throughput testing tool/utility

I have come across a utility called "stkio" from StorageTek.

You can check the StorageTek site for the tool; you can do an I/O stress test with it.

Also check out Expect:

http://expect.nist.gov

Re: Disk throughput testing tool/utility

Hi!

I suggest PostMark v1.5 (http://www.netapp.com/tech_library/3022.html):
PostMark was designed to create a large pool of continually changing files and to measure the transaction rates for a workload approximating a large Internet electronic mail server.
PostMark generates an initial pool of random text files ranging in size from a configurable low bound to a configurable high bound. This file pool is of configurable size and can be located on any accessible file system.
Once the pool has been created (also producing statistics on continuous small file creation performance), a specified number of transactions occurs. Each transaction consists of a pair of smaller transactions:
· Create file or Delete file
· Read file or Append file
The incidence of each transaction type and its affected files are chosen randomly to minimize the influence of file system caching, file read-ahead, and disk-level caching and track buffering. This incidence can be tuned by setting either the read or create bias parameters to produce the desired results.
When a file is created, a random initial length is selected, and text from a random pool is appended up to the chosen length. File deletion selects a random file from the list of active files and deletes it.
When a file is to be read, a randomly selected file is opened, and the entire file is read (using a configured block size) into memory. Either buffered or raw library routines may be used, allowing existing software to be approximated if desired.

Appending data to a file opens a random file, seeks to its current end, and writes a random amount of data. This value is chosen to be less than the configured file size high bound. If the file is already at the maximum size, no further data will be appended. When all of the transactions have completed, the remaining active files are all deleted (also producing statistics on continuous file deletion).
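PostMark takes its commands from standard input or from a file passed as an argument (e.g. postmark pm.cfg). A hedged example configuration exercising the parameters described above (all values are illustrative):

set location /fsname/pmtest
set number 10000
set size 500 100000
set transactions 20000
set bias read 5
set bias create 5
run
quit

The bias values weight reads against appends and creates against deletes; the paper linked above documents the full command set.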
On completion of each run, a report is generated showing:
· Elapsed time
· Elapsed time spent performing transactions and average transaction rate (files/second)
· Total number of files created and average creation rate (files/second)
· Number of files created initially and average creation rate (files/second)
· Number of files created during sequence of transactions and average creation rate (files/second)
· Total number of files read and average rate (files/second)
· Total number of files appended and average rate (files/second)
· Total number of files deleted and average deletion rate (files/second)
· Number of files deleted after transactions were complete and average deletion rate (files/second)
· Number of files deleted during sequence of transactions and average deletion rate (files/second)
· Total size of data read and average input rate (bytes/second)
· Total size of data written and average output rate (bytes/second)

A portable random number generator (derived from the Unix reference implementation) is included in the PostMark distribution, ensuring identical initial conditions across different platforms.
Tell me what you need, and I'll tell you how to get along without it!