dd for disk performance, buffer cache, and hardware-level cache
12-17-2002 08:23 AM
I've got some easy points for anyone wishing to point out the virtues and/or shortcomings of dd and system cache at several levels.
Background:
HP-UX 11.0, V-Class box, various disk architectures.
Does dd use the buffer cache or any other file-system cache at the OS level? I'm trying to run some disk-level performance testing - something like:
$time dd if=some_known_big_file of=/dev/null bs=10k
Is this a real measure of disk performance? Does this test cache more than disk performance? Will this time vary if I repeat the test?
I believe that, even if the OS is not caching, the machine and/or the disk units probably ARE caching. I know that EMC Symmetrix boxes have quite a bit of cache, and I believe that even individual disk units have some cache. How about HP Galaxy disk? Jamaica?
I note that:
$time dd if=some_known_big_file of=/dev/null ibs=10k obs=10k
is quite a bit slower.
Is there a better test than dd?
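For reference, a minimal way to expose the caching effect is simply to repeat the same read a few times and compare the elapsed times (the file name here is just a placeholder):
for i in 1 2 3
do
    time dd if=/some_known_big_file of=/dev/null bs=10k
done
If the second and third passes come back much faster than the first, some layer of cache (buffer cache, array cache, or on-disk cache) is serving the data rather than the spindles.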
Thanks.
12-17-2002 08:32 AM
Re: dd for disk performance, buffer cache, and hardware-level cache
To gauge performance I'd use glance, with multiple "dd"'s running, until I saturated the IO.
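A rough sketch of that approach, with placeholder device paths (reads only, so nothing gets written):
dd if=/dev/rdsk/c1t2d0 of=/dev/null bs=256k &
dd if=/dev/rdsk/c2t2d0 of=/dev/null bs=256k &
dd if=/dev/rdsk/c3t2d0 of=/dev/null bs=256k &
glance
Keep adding readers against different spindles/channels until the glance disk report stops climbing; that plateau is a fair estimate of the aggregate read throughput.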
live free or die
harry
12-17-2002 08:39 AM
Solution
If you use dd to a rlvol then no, it does NOT use the Unix buffer cache. If you dd an lvol or a filename then yes, it does use the Unix buffer cache.
Yes, dd will be quicker on a rerun, especially if you're using, say, an EMC or XP or any sort of disk array with cache on it: even if you dd a raw lvol, the data will still get cached on the disk array, so subsequent runs will be faster. Thus, always use the first run as the benchmark, since nothing is in cache yet.
Even on an individual disk with a raw lvol, subsequent runs will be slightly quicker, as most disks have a little cache on them - so either take the first run or average over a few runs.
The answer is yes, dd is the best tool for measuring disk performance at a low level - unless you purchase a high-level tool to do it.
EMCs come with tons of cache, so subsequent runs are always far faster. Jamaicas are dumb disks, but even individual disks have a little cache, so subsequent runs will be slightly quicker.
I don't know about HP Galaxy disks - what on earth are they? If they're new, they almost certainly have lots of cache.
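To make that concrete, a hedged example with made-up volume names (count=4096 at 256 KB keeps each run to about 1 GB):
time dd if=/dev/vg01/rlvol1 of=/dev/null bs=256k count=4096   # raw lvol: bypasses the HP-UX buffer cache
time dd if=/dev/vg01/lvol1 of=/dev/null bs=256k count=4096    # block lvol: goes through the buffer cache, so reruns can look much faster
Either way, take the first pass as the benchmark; later passes can still be helped by cache in the array or on the drives themselves.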
12-17-2002 08:47 AM
Re: dd for disk performance, buffer cache, and hardware-level cache
One comment. If you want to begin to measure disk metrics, use the raw device (/dev/rdsk/cXtYdZ) and make sure the input and output block sizes are the same. You can use the 'count' option to constrain the transfer to a given amount (size) of your choice. See the 'dd' manpages for more information.
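For example, something along these lines reads a fixed 1 GB from the raw disk (substitute a real device file):
time dd if=/dev/rdsk/cXtYdZ of=/dev/null bs=256k count=4096
bs= sets both the input and output block size to 256 KB, and count=4096 limits the transfer to 4096 x 256 KB = 1 GB, so repeated runs compare like with like.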
Regards!
...JRF...
12-17-2002 10:35 AM
Re: dd for disk performance, buffer cache, and hardware-level cache
"dd if=some_known_big_file of=/dev/null ibs=10k obs=10k".
That is very different from this seemingly equivalent command: "dd if=some_known_big_file of=/dev/null bs=10k".
In the former, there is an extra step that dd itself handles within its own process space (regardless of any additional UNIX buffer cache that might be involved); it is the copy from the input buffer to the output buffer. By specifying only 'bs=10k' both input and output use a common buffer and the process's copy operation is avoided.
You should also test writing, not just reading, for example:
dd if=/dev/zero of=/dev/rdsk/cXtYdZ bs=10k; /dev/zero supplies an endless stream of ASCII NULs. (Be aware that this overwrites whatever is on that disk, so point it only at a scratch device and use count= to bound the transfer.)
As has been stated, dd is at best only a fair tool for measuring disk performance but the least you can do is get rid of the overhead of the extra copy operations to provide a better measurement of what you are actually trying to gauge.
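A quick way to see the cost of that extra in-process copy is to time the two forms back to back against the same input (file name is a placeholder, and remember the first run may be penalized by a cold cache):
time dd if=/some_known_big_file of=/dev/null bs=10k
time dd if=/some_known_big_file of=/dev/null ibs=10k obs=10k
The data moved is identical; any difference in the user/sys figures is dd's own buffer-to-buffer copying.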
12-17-2002 12:38 PM
Re: dd for disk performance, buffer cache, and hardware-level cache
12-17-2002 12:41 PM
Re: dd for disk performance, buffer cache, and hardware-level cache
It's been roughly three times slower.
Could a CPU bottleneck be skewing the results?
12-17-2002 12:56 PM
Re: dd for disk performance, buffer cache, and hardware-level cache
You might have a CPU or memory bottleneck but I rather doubt it.
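One rough way to check (the lvol path is made up, and sar availability depends on what is installed): time a run and compare the user+sys figures with the elapsed time, and watch CPU from another session while it runs:
time dd if=/dev/vg01/rlvol1 of=/dev/null bs=256k
sar -u 5 12
If user+sys is tiny compared with real, and sar shows high %wio with low %usr/%sys, dd is waiting on the disks, not on the CPU.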
12-17-2002 07:42 PM
Re: dd for disk performance, buffer cache, and hardware-level cache
As a disk metric, dd is pretty meaningless. Not even a backup behaves like dd (unless you are backing up raw disks). There are reams of articles on how to characterize disk performance, but most of it boils down to the same thing: large sequential transfers on separate channels are the fastest way to move data. Since the real world seldom behaves this way, modern disk arrays incorporate massively large data caches.
By the way Nike and Jamaica are VERY old technology. Today, you would look at HP's VA-series of disk arrays using fibre interfaces for maximum performance.
Bill Hassell, sysadmin
12-28-2002 12:14 PM
Re: dd for disk performance, buffer cache, and hardware-level cache
I'll tell you what I did... We needed to move 220GB of data from FC60 to VA7400 (bigger disks, more cache), but did not have a VA7400 to play with for the tests to evaluate the migration time...
We did a variety of tests, one of which was
dd if=/dev/vgXX/rlvolY bs=4m of=/dev/null
This took about 1 minute per GB. We chose 4 MB as the block size because that is the extent size of the physical volumes.
dd if=/dev/vgXX/rlvolY bs=4m of=/dev/vg00/rtest
This took longer: about 2 minutes per GB.
We tried multiple dd's, but this only slowed things down (we used a SAME layout on the FC60, so the dd's interfered with each other).
So we ended up doing single dd's in serial (script), and the whole thing took about 1 min 25 sec per GB, so the writing phase added about 50%.
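A minimal sketch of that kind of serial script, with invented volume group and lvol names (the real list would come from vgdisplay -v):
#!/usr/bin/sh
# copy each raw lvol one at a time so the dd's do not compete on the SAME layout
for LV in lvol1 lvol2 lvol3
do
    echo "copying $LV ..."
    time dd if=/dev/vgsrc/r$LV of=/dev/vgdst/r$LV bs=4m
done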
From the above I would conclude
o dd was a fairly good way of moving data 'round
o If you use SAME in the volume groups DO NOT use parallel dd's
o Writing to /dev/null overestimates what even a good disk subsystem can do.
Regards
Tim
12-28-2002 01:28 PM
Re: dd for disk performance, buffer cache, and hardware-level cache
Jamaica: this is a JBOD with individual disks, and each disk can have its internal cache turned on or off; this may affect the disk's performance.
Galaxy: both controllers have cache, but for some reason (manually, on a failure, etc.) the cache may be disabled. Again, that will affect LUN performance.
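For the individual (Jamaica-style) disks, one way to inspect those per-disk settings from HP-UX is scsictl; treat this as a hedged example, since the exact parameters vary by drive and OS release:
/usr/sbin/scsictl -a /dev/rdsk/cXtYdZ
/usr/sbin/scsictl -m ir=1 /dev/rdsk/cXtYdZ
The first displays the disk's current mode parameters; the second enables immediate reporting (write-cache-like behaviour) on drives that support it.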
Eugeny
12-28-2002 01:30 PM
Re: dd for disk performance, buffer cache, and hardware-level cache
Glad things went well.
Yeah, dd is not too bad. Although its performance will not be much better than normal LVM mirroring (at least in my experience), it does remove a lot of the volume-management overhead.
Normally on more advanced arrays such as XPs or Symmetrix you would use BCVs to do this... copying a 4 GB hyper internally took ~40 seconds when I used to do it.
Also, /dev/null is a pseudo-driver: the driver simply drops the data as it receives it, so no I/O is performed. Compared with writing to a physical disk, the elapsed time should therefore be roughly half for the /dev/null "writes", which I believe is what you saw.
Just one point though... if you are copying filesystem data and the filesystem on a particular disk is not very full (unusual, I agree), then a simple Unix copy may even be quicker! With dd you are copying the whole disk, no matter whether the blocks contain data or not.
Regards,
James.
01-01-2003 03:54 AM
Re: dd for disk performance, buffer cache, and hardware-level cache
Yep, I agree with what you say. On one point, though: I could have used tar/cpio/cp -r for that and saved some time, but since I had already put together a script to do the dd on the raw LVs, I felt the time saved would be minimal (minutes, maybe), as this was only 1% of the data.
We could not use LVM mirroring (I wanted to...) for 3 reasons
o Kilobyte striping: LVM mirroring is not allowed on striped lvols, so we mirror on the FC60
o MAX PE/PV was too small
o MAX PV was set to 16 & we would need 24+
LVM mirroring would have been great as we could have done it on-line!!!
Regards
Tim
01-02-2003 05:46 PM
Re: dd for disk performance, buffer cache, and hardware-level cache
Bill Hassell is right: take a higher block size for better performance.
The block size for the best performance is 256 KB; this is the maximum size of a physical write call in Unix. If you make the block size larger than 256 KB, you get more than one physical write per transfer.
But the block size only has an influence on dd to raw devices, not on buffered block devices.
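To check that on a given setup, a rough comparison on a raw device (placeholder path; the count values keep each run at roughly 1 GB):
time dd if=/dev/rdsk/cXtYdZ of=/dev/null bs=64k count=16384
time dd if=/dev/rdsk/cXtYdZ of=/dev/null bs=256k count=4096
time dd if=/dev/rdsk/cXtYdZ of=/dev/null bs=1024k count=1024
If the 256 KB limit holds, the first step up should gain a lot and the last step very little.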
Stop the rain
Claus