- Re: about sustained throughput
12-29-2004 05:04 AM
Hi there,
1. What is a disk's sustained throughput?
2. What is the relationship between it and the disk's peak throughput?
3. Suppose the disk's sustained throughput is 40 MB/s and the average request size is 8 KB. Can we assume the disk is able to handle 40 MB / 8 KB = 5000 requests per second? According to some documents, a disk can usually handle only around 40 requests per second at an 8 KB average request size.
4. How can we calculate the disk's throughput from the workload's average request size? For example, what is the disk's maximum throughput for a workload with a 16 KB average request size?
Thanks!
Dong
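A quick back-of-the-envelope sketch of question 3. The per-request service-time figures below (seek time, rotational latency) are typical-order assumptions for illustration, not values from any particular disk's datasheet:

```python
# Why 40 MB/s sustained throughput does not imply 5000 random 8 KB
# requests per second. Seek/rotation figures are assumed, not measured.

sustained_mb_s = 40
request_kb = 8

# Naive (sequential, zero-overhead) upper bound:
naive_iops = sustained_mb_s * 1024 / request_kb
print(naive_iops)  # 5120.0 -- only approachable for pure sequential streams

# Random I/O: each request pays seek + rotational latency + transfer.
seek_ms = 8.0                                              # average seek (assumed)
rotation_ms = 3.0                                          # ~half a rotation at 10k rpm (assumed)
transfer_ms = request_kb / (sustained_mb_s * 1024) * 1000  # ~0.2 ms
service_ms = seek_ms + rotation_ms + transfer_ms
random_iops = 1000 / service_ms
print(round(random_iops))  # 89 -- in the 100-150 neighborhood, nowhere near 5000
```

The point: for random workloads the mechanical overhead per request dominates, so the achievable request rate is set by seek and rotation time, not by the sustained transfer rate.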
12-29-2004 07:45 AM
Re: about sustained throughput
Hi,
1. Continuous throughput: large sequential reads/writes.
2. I don't believe "peak throughput" is very relevant in the real world. It seems to relate mostly to the bus speed, or perhaps to cached reads/writes.
3. Of course not. With small random reads/writes, the I/O rate (IOPS) is more interesting. 40 IOPS is perhaps not representative of the fastest disks; maybe 100-150, but NOT 5000.
4. I don't believe such calculations are very reliable; a number of factors can have an impact. A better idea is to use a disk performance benchmark, for example IOmeter or Postmark.
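To illustrate why the answer to question 4 comes with a caveat, here is a first-order model: throughput = request size / (per-request overhead + transfer time). The overhead and media-rate numbers are assumptions for the sake of the example; an actual benchmark run remains the better answer:

```python
# Rough throughput-vs-request-size model for random I/O.
# overhead_ms (seek + rotation) and media_rate_mb_s are assumed figures.

def throughput_mb_s(request_kb, overhead_ms=11.0, media_rate_mb_s=40.0):
    transfer_ms = request_kb / (media_rate_mb_s * 1024) * 1000
    iops = 1000 / (overhead_ms + transfer_ms)
    return iops * request_kb / 1024

print(round(throughput_mb_s(8), 2))   # 0.7  -- random 8 KB requests
print(round(throughput_mb_s(16), 2))  # 1.37 -- random 16 KB requests
```

Note that doubling the request size nearly doubles random throughput, because the fixed per-request overhead dwarfs the transfer time at these sizes; the model ignores caching, queueing, and request re-ordering, which is why measured numbers will differ.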
12-30-2004 12:12 AM
Solution
One may request 8 KB, but what the operating system actually requests may be quite different, in both size and number of requests. Disks typically do somewhere around 150 to 250 ops/sec. If the ops require seeks, this slows things down; if they can be satisfied from the track cache, this speeds things up. The disk's track cache may also be re-ordering requests and issuing read-aheads.
If you are interested in I/O subsystem performance under small transfers from random locations, you may wish to measure with Postmark.
If you wish to measure performance under Windows while issuing huge quantities of async I/O requests, you may wish to try IOmeter.
If you are interested in I/O subsystem performance under sequential workloads, you may wish to try IOzone or Bonnie.
The tricky part is understanding that the disk cannot be removed from the system and measured in isolation. Disk performance is only interesting when the disk is in the system, tested as the application will see it, under real-world conditions. The operating system and the I/O subsystem interact with each other. The throughput of the disk is not as important to the application as the throughput of the system with this disk in it.
The throughput of the system is also not constant, or even linear. For most systems it is a fairly complex polynomial with quite a few variables: file size, transfer size, amount of RAM, CPU L1 and L2 cache sizes, memory speed, number of disk controllers, number of disks, number of disks in a RAID set, RAID level, operating-system read-ahead algorithm, the VM subsystem's page-replacement algorithm, file system type, location of the filesystem journal, number of CPUs, number of processes/threads doing I/O, type of I/O (async, mmap, read, write, mixed read/write), and a few dozen more.
My suggestion: http://www.iozone.org
Run this:
iozone -Raz -g ##m -b out.wks
Where ## is the size of RAM in the system, in megabytes (be sure to follow the value with the "m").
- -R turns on Excel output.
- -a turns on auto mode.
- -z turns on more coverage.
- -g ##m sets the maximum file size.
- -b out.wks names the Excel spreadsheet that receives the results.
Now plot the results.... You can then see the throughput (single stream in this case) for the I/O subsystem, over a range of file sizes and transfer sizes.
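For convenience, the suggested invocation can be generated programmatically. This is a hypothetical helper (the flags come straight from the command above; 512 MB is only an example RAM size):

```python
# Hypothetical helper that fills in the ##m placeholder of the iozone
# command suggested above. 512 is an example RAM size in megabytes;
# substitute your system's actual RAM.

def iozone_command(ram_mb):
    # -R Excel output, -a auto mode, -z more coverage,
    # -g caps the maximum file size at RAM size (note the trailing "m"),
    # -b writes the spreadsheet of results.
    return "iozone -Raz -g %dm -b out.wks" % ram_mb

print(iozone_command(512))  # iozone -Raz -g 512m -b out.wks
```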
Enjoy,
Don