Chris Heerschap_2
New Member

Raw disk write performance?

We're phasing out an HDS 9960 and I'm wiping the disks before we get rid of it.

I've created a file the same size as the LUNs (13.5 GB each) and I'm using this command to write to the disk:

dd if=bigfile of=/dev/rdsk/c4t0d0 bs=4096k

I've experimented with block sizes from 1024k to 20480k; 4096k seems to be the fastest. It takes about 10-11 minutes.
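
For reference, here's roughly the loop I've been using to compare block sizes (illustrative -- same file and target device as above):

for bs in 1024k 2048k 4096k 8192k 20480k    # sample of the sizes I tried
do
    echo "bs=$bs"
    timex dd if=bigfile of=/dev/rdsk/c4t0d0 bs=$bs    # timex reports real/user/sys time
done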

What's odd is that I can run a cp on the filesystem and create another copy of the file in just over 5 minutes. That doesn't seem right -- adding LVM and filesystem overhead should slow things down, not make them run 2x faster.

The system is an L2000 with two 1G HBAs, one connected to the old HDS and one connected to our new SAN. The big file is on a LUN from the new SAN.

Yes, using "/dev/zero" as an input file runs faster, but that still doesn't explain the discrepancy between a file copy and the dump to a raw device.
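
For comparison, the /dev/zero version is just:

dd if=/dev/zero of=/dev/rdsk/c4t0d0 bs=4096k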

I'm sure I'm missing something here. It might not be a critical thing (since I could just use /dev/zero), but I sure am curious, because I think it might be an opportunity to learn something useful about disk access.
Bill Hassell
Honored Contributor

Re: Raw disk write performance?

> What is odd that I can run a cp on the filesystem and create another copy of the file in just over 5 minutes.

Apples to apples? Are you copying a 13 GB file? Is the source file sparse? A sparse source file will appear to copy extremely fast because the missing blocks are never read from disk -- they're generated on the fly, much like reading /dev/zero. Check the source file with du and ll, as in:

du -k bigfile
ll bigfile

(du reports in Kbytes.) If the size ll shows is significantly larger than what du reports (du counts only occupied blocks), then bigfile is sparse.
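
For example, a quick way to see the difference is to deliberately make a sparse file on a filesystem that supports holes (file name is just an example):

dd if=/dev/zero of=sparse.test bs=1k count=1 seek=1048575
du -k sparse.test     # only a few Kbytes actually allocated
ll sparse.test        # but the reported size is a full 1 GB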


Bill Hassell, sysadmin
Chris Heerschap_2
New Member

Re: Raw disk write performance?

The file isn't sparse:

4-miata:/test-svc> du -k bigfile
14226488 bigfile

It's 13G of:

"All work and no play makes Homer something something..."

When I do the filesystem copy, I can't use "cp" since it doesn't like the big file and errors out. I use cat:

cat bigfile > bigfile2

While it's running, I can run an "ls -l" and see the file size growing at about 50 MB/sec.
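
The watch loop is nothing fancy, just something like:

while :                # loop until interrupted
do
    ll bigfile2        # ll is the HP-UX shorthand for ls -l
    sleep 5
done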
Calandrello
Trusted Contributor

Re: Raw disk write performance?

Friend, raw gives a better write rate because it works in character mode, but you always have to analyze the application that will be doing the writes.
Bill Hassell
Honored Contributor

Re: Raw disk write performance?

The dd command bypasses the buffer cache since you are using the character device file. However, the cp command goes through the buffer cache, and if the cache is fairly large (more than 1 GB), the data is likely still in memory from when the file was created. Try a cp of another large file. The file is also highly repetitive, so there might be some optimization going on (not likely, but useful to test). You might want to use /dev/urandom to erase the HDS drives as a more effective method. NOTE: be sure you have the KRNG (optimized random number generator) product installed:

http://h20293.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=KRNG11I
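
On the buffer cache point: you can check how large the dynamic buffer cache is allowed to grow (assuming an 11.x system with the dynamic buffer cache; kmtune syntax may vary by release):

kmtune -q dbc_min_pct     # minimum percent of RAM reserved for the buffer cache
kmtune -q dbc_max_pct     # maximum percent of RAM it may grow to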


Bill Hassell, sysadmin
Chris Heerschap_2
New Member

Re: Raw disk write performance?

Interesting... running a "dd" off of /dev/urandom makes files way smaller than what I would expect.

dd if=/dev/urandom of=test bs=4096k count=1

Creates a 256-byte file. Using the numbers to create an OPEN-E-sized file (14226480 Kbytes) created a file 1/16th the proper size. Something to do with draining the entropy pool, or is /dev/urandom the more pseudo-random one?

I ran out of time today to test (and time) piping /dev/urandom directly to the raw disk device.

I'll get some new performance numbers on monday. Thanks for the input!
A. Clay Stephenson
Acclaimed Contributor

Re: Raw disk write performance?

The /dev/random and /dev/urandom read chunks are limited by the RNG_READMAX define, which is set to 256 bytes. When you specify count=1, that means read until bs is satisfied or until a single block of input data has been read, whichever comes first. If you are going to use /dev/urandom then you need to do something like this:

dd if=/dev/urandom ibs=256 obs=256k of=/dev/rdsk/c1t5d0
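
That way dd reads the input in 256-byte chunks (matching RNG_READMAX) but re-blocks them into 256k writes to the raw device, so every output block carries full random data instead of a 256-byte short read.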
If it ain't broke, I can fix that.