- about performance control
01-28-2005 11:38 AM
Hi all,
I am currently working on a storage system performance control project. We know that average response time and throughput are two important metrics.
My question is: if we can bound the increase in average response time, can we say we have also controlled the degradation of system throughput? And how about vice versa? Could you please give me some examples?
Thanks,
Dong
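For context, the two metrics are coupled by Little's Law: X = N / R, i.e., throughput equals the number of outstanding requests divided by average response time. A minimal numeric sketch, assuming a closed system with a fixed number of outstanding I/Os (all figures are invented for illustration):

```python
# Little's Law for a closed system: throughput X = N / R,
# where N is the number of outstanding requests (concurrency)
# and R is the average response time.

def throughput(concurrency: int, avg_response_time_s: float) -> float:
    """I/O operations per second implied by Little's Law."""
    return concurrency / avg_response_time_s

# Baseline: 32 outstanding I/Os at 4 ms average response time.
baseline = throughput(32, 0.004)          # 8000 IOPS

# If response time is guaranteed to degrade by at most 2x (to 8 ms)
# and concurrency stays fixed, throughput is bounded too:
degraded = throughput(32, 0.008)          # 4000 IOPS, at worst half

# But the converse fails: throughput can be held steady while
# response time doubles, simply by keeping more requests queued.
same_throughput = throughput(64, 0.008)   # still 8000 IOPS, twice the latency

print(baseline, degraded, same_throughput)
```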
3 REPLIES
01-28-2005 04:01 PM
Re: about performance control
Hi, Dong,
This is a broad topic. Are there particular technologies or situations that interest you? For example: NAS, NetApp, NFS, Brocade SAN, EMC DMX, a local hard drive...? From my perspective, what "system throughput" means changes with the architecture. With an appliance or a complex external array, you have relatively little control over what's going on inside.
It might be helpful if you mentioned where you're going with it -- are you engineering new hardware, new software, trying to solve a problem, writing a paper... :-)
Regards,
Mic
What kind of a name is 'Wolverine'?
01-29-2005 03:18 AM
Re: about performance control
Thanks, Mic.
I am working on a RAID system with new software. Some functions of this new software can degrade system performance.
We want to dynamically restrict the execution time of these functions in order to keep the performance degradation at an acceptable level. Do we need to guarantee average response time, throughput, or both?
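One plausible shape for this kind of dynamic restriction is a feedback throttle: measure recent foreground I/O latency and shrink the time slice granted to the background function whenever latency exceeds a target. A minimal sketch, not any particular product's mechanism; the function names, the 5 ms target, and the slice bounds are all hypothetical:

```python
import time

# Hypothetical feedback throttle: the background function runs in
# bounded slices, and the slice shrinks when foreground latency degrades.

TARGET_LATENCY_S = 0.005   # assumed foreground latency target (5 ms)
MIN_SLICE_S = 0.001        # never starve the background work entirely
MAX_SLICE_S = 0.050

def throttled_background_loop(background_step, measure_fg_latency):
    """Run background_step() in slices sized by foreground latency.

    background_step: performs one small unit of background work.
    measure_fg_latency: returns recent average foreground latency (s).
    Both are placeholders to be supplied by the surrounding system.
    """
    slice_s = MAX_SLICE_S
    while True:
        # Do background work only until the current slice expires.
        deadline = time.monotonic() + slice_s
        while time.monotonic() < deadline:
            background_step()
        # Feedback: halve the slice if foreground latency is over
        # target, grow it gently (10%) when there is headroom.
        if measure_fg_latency() > TARGET_LATENCY_S:
            slice_s = max(MIN_SLICE_S, slice_s / 2)
        else:
            slice_s = min(MAX_SLICE_S, slice_s * 1.1)
        # Yield the rest of the period to foreground I/O.
        time.sleep(MAX_SLICE_S - slice_s + 0.001)
```

Whether such a controller should key off response time, throughput, or both is exactly the question above; feeding it a throughput counter instead of (or alongside) the latency probe changes only the feedback test.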
01-30-2005 01:54 PM
Solution
Sorry for the delay. I wasn't able to get back to this stuff much this week.
From my perspective, which is admittedly that of a practitioner rather than a storage expert, it really depends on what kind of RAID. If you're using JBOD with MirrorDisk as your RAID control, you may have more insight and control. If, on the other hand, you have an EMC Symmetrix-class array, you have relatively little control (unless something has changed in the last year).
I don't see how you can have any real control over either throughput or average response time if you're using very high-end cached storage like EMC (or Hitachi, or even an old HP AutoRAID). If that's the case, my answer would be to use the (supposedly vendor-neutral) APIs EMC developed for ControlCenter (I'm not real clear on this, sorry) or to beg the vendor for more control. With that class of storage, the best you can do is measure the two metrics mentioned above and throttle application activity so they don't degrade. But that doesn't quite make sense here... it sounds like optimization is actually the goal...
So I'm guessing you have control over the RAID functionality -- some kind of relatively low-tech stuff.
After a little thought, I'd say yes: if you guarantee average response time, you can guarantee throughput to a degree -- but remember that you're only guaranteeing an *average* response time, so there will be spikes. There's also system overhead related to disk I/O. And there can be overhead unrelated to disk response or throughput that slows throughput down because the application can't get around to issuing its I/O requests. For example, system load suddenly jumps: the main application (the one that's supposed to get the good disk performance) is now competing for CPU with everything else, so it simply doesn't issue I/O requests as quickly as it did when it wasn't waiting for CPU.
I'm hoping some others will jump in with different perspectives. I really don't feel like an expert -- I just have a few thoughts on it. There are a lot of "what ifs" still unspecified in this example, and those can make a big difference.
HTH,
Mic
What kind of a name is 'Wolverine'?
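Mic's CPU-contention point can be made concrete with the same closed-system identity sketched earlier. If contention cuts how many I/Os the application keeps in flight, throughput falls in proportion even though per-I/O response time never leaves its target; the numbers below are invented:

```python
# Toy numbers illustrating the CPU-contention point, using the
# closed-system identity X = N / R (Little's Law).

avg_response_s = 0.004   # per-I/O response time, within target throughout

# Normally the application keeps 32 I/Os outstanding.
print(32 / avg_response_s)   # 8000 IOPS

# System load jumps; the app, starved of CPU, only manages to keep
# 8 I/Os in flight. The disks answer just as fast as before, but:
print(8 / avg_response_s)    # 2000 IOPS -- throughput down 4x,
                             # with no response-time violation at all.
```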