Re: IOPS Tests
09-13-2023 09:05 AM - last edited on 09-14-2023 08:53 PM by support_s
Good day
Hope this is not a silly question, but I want to find out if anyone has a known-good IOmeter setup for measuring IOPS on an MSA 2060.
2 x 1.9 TB SSD SAS disks for cache
42 x 12 TB 7K SAS MDL disks
If possible, I want to mimic the MSA GUI IOPS graph below in IOmeter.
Any advice would be much appreciated.
Thanks
Tags: msa
09-14-2023 05:43 PM
Solution
@CRD
I don't know of any way to get a graph out of IOmeter. The graph you are showing spans several days, with one averaged data point every 15 minutes. IOmeter, by contrast, runs a workload as fast as possible for the time you give it and reports a single average over that run.
To get multiple data points like the graph shows, you would need to create that many access specifications, or run that many variable tests from the 'Test Setup' tab. You would then pull the data into your favorite spreadsheet app and build the graph from it.
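If it helps, here is a minimal sketch (my own, not an IOmeter feature) of that last step. It assumes you have already copied each run's average IOPS out of IOmeter's results into a simple two-column CSV named run_results.csv with "timestamp" and "iops" headers (the file name and column names are made up for this example), and it recreates an MSA-GUI-style line graph with matplotlib:

```python
# Minimal sketch: plot per-run IOmeter averages as one graph.
# Assumes run_results.csv was hand-built from IOmeter's results,
# with ISO timestamps and the per-run average IOPS, e.g.:
#   timestamp,iops
#   2023-09-13T09:00:00,512
#   2023-09-13T09:15:00,540
import csv
from datetime import datetime

import matplotlib.pyplot as plt

timestamps, iops = [], []
with open("run_results.csv", newline="") as f:
    for row in csv.DictReader(f):
        timestamps.append(datetime.fromisoformat(row["timestamp"]))
        iops.append(float(row["iops"]))

# One point per completed 15-minute run, like the MSA GUI graph.
plt.plot(timestamps, iops, marker="o")
plt.xlabel("Time")
plt.ylabel("IOPS (per-run average)")
plt.title("IOmeter runs over time")
plt.tight_layout()
plt.savefig("iops_over_time.png")
```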
Some fun knobs to turn in IOMeter:
- 'Disk Targets' -> # of Outstanding I/Os: this is the queue depth you are pushing. If you are running a random workload, increase it to improve performance (see the sketch after this list for how queue depth bounds IOPS).
- 'Disk Targets' -> Maximum Disk Size: set this small enough to fit in cache and watch the dial go really fast. Keep it inside your SSD capacity after writing data to the Pool and you should see performance steadily increase as the tiering engine moves that 'hot' data into the SSD tier.
- 'Test Setup' -> Run Time: how long the test runs. Queue up several copies of the same access specification at 15 minutes each and you should get a steady stream of roughly the same performance output (once tiering/caching has stabilized).
- 'Test Setup' -> Cycling Options -> Cycle # Outstanding I/Os: steadily increase the queue depth and find the saturation point for your workload.
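On the queue-depth knob: the reason raising # of Outstanding I/Os helps is Little's Law, which says sustained IOPS cannot exceed queue depth divided by average per-I/O latency. A rough back-of-the-envelope helper (my own illustration, not anything IOmeter or the MSA provides):

```python
# Little's Law: throughput ~= concurrency / latency, so the
# "# of Outstanding I/Os" setting caps the IOPS you can observe.
def max_iops(outstanding_ios: int, avg_latency_ms: float) -> float:
    """Upper bound on IOPS for a given queue depth and per-I/O latency."""
    return outstanding_ios / (avg_latency_ms / 1000.0)

# Example: 7K MDL drives at roughly 10 ms per random I/O (assumed figure).
# QD=1 -> ~100 IOPS; QD=32 -> ~3200 IOPS, if the array can actually
# sustain that concurrency before latency climbs.
for qd in (1, 8, 32):
    print(f"QD={qd:>2}: ~{max_iops(qd, 10.0):.0f} IOPS at 10 ms latency")
```

This is also why the saturation sweep from 'Cycling Options' is useful: past some queue depth, latency rises faster than concurrency, and the measured IOPS flatten out.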
All of that can produce some fun numbers, but in the end IOmeter and the array may still report different figures. The array reports what it sees, which is the result of caching and coalescing between IOmeter and the array, so don't expect IOmeter to show 550 read IOPS and the array to report exactly 550 read IOPS.
Hope this helps.
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]