
The cost of downtime: How DevOps can save you dollars

By: Joe Panettieri 


How's this for ironic: each time your CIO unlocks a new revenue opportunity for your business, the cost of downtime grows. A decade ago, the cost of downtime often involved delayed emails or busy signals in your customer support center. Some dollars were certainly lost. But most businesses survived the occasional IT darkness.


Fast forward to the present. The mobile, cloud, social, and big data waves have created a perfect storm of opportunities and challenges. Customer-facing applications on the web—and mobile apps on smartphones and tablets—have forced corporate IT to transform from a cost center into a profit-pursuing center. The potential upside seems limitless. But the cost of downtime has also grown exponentially.


Consider these painful stats, according to IDC:

  • The average cost of an infrastructure failure is $100,000 per hour.
  • The average cost of a critical application failure per hour is $500,000 to $1 million.
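To see how quickly those averages compound, here's a back-of-the-envelope sketch in Python. The hourly rates are the IDC averages quoted above; the four-hour outage duration is a purely illustrative assumption:

```python
# Back-of-the-envelope downtime cost estimate.
# Hourly rates come from the IDC averages quoted above;
# the outage duration is an illustrative assumption.

def downtime_cost(outage_hours, hourly_cost):
    """Total cost of an outage at a flat hourly rate."""
    return outage_hours * hourly_cost

INFRA_HOURLY = 100_000       # infrastructure failure, per hour
APP_HOURLY_LOW = 500_000     # critical application failure, low end
APP_HOURLY_HIGH = 1_000_000  # critical application failure, high end

# A single four-hour critical-application outage:
low = downtime_cost(4, APP_HOURLY_LOW)
high = downtime_cost(4, APP_HOURLY_HIGH)
print(f"4-hour critical app outage: ${low:,} to ${high:,}")
# prints "4-hour critical app outage: $2,000,000 to $4,000,000"
```

Even a single afternoon of darkness lands in seven figures at the high end, which is why the business-continuity investments described below pay for themselves quickly.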


Of course, the vast majority of businesses are not Fortune 1000 organizations, but it's a safe bet that the cost of downtime is rising for all businesses. Remember: mobile, cloud, social, and big data have democratized IT. As midmarket and regional businesses benefit from each wave's upside, those customer-facing applications and machine learning systems can sink a business amid an outage.


Pushing beyond backup and recovery

Amid the rising cost of downtime, it's only natural that businesses are striving to increase their uptime. The world has shifted from a "backup and recovery" mindset to a "business continuity" mindset.


Thanks to hybrid infrastructure, businesses can recreate their on-premise networks and applications in the cloud. When on-premise systems fail, a near-instant shift to cloud-based services can "keep the lights on" for employees and customers until corporate IT troubleshoots and fixes the primary network or application outage.
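The routing logic behind that "keep the lights on" shift can be sketched in a few lines. The backend names and the health-check input below are hypothetical; production deployments typically do this with DNS failover or a load balancer rather than application-level code:

```python
# Hypothetical sketch of on-premise-to-cloud failover routing.
# Backend names are illustrative; real setups usually rely on
# DNS failover or a load balancer rather than app-level checks.

PRIMARY = "on-premise-datacenter"   # hypothetical primary site
SECONDARY = "cloud-replica"         # hypothetical warm cloud copy

def pick_backend(primary_healthy: bool) -> str:
    """Route traffic to the cloud replica whenever the primary is down."""
    return PRIMARY if primary_healthy else SECONDARY

print(pick_backend(True))   # prints "on-premise-datacenter"
print(pick_backend(False))  # prints "cloud-replica"
```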


Still, those efforts sometimes don't go far enough. Some companies can't afford to have a warm site or hot site "ready to go" on a moment's notice, especially as enterprises mix and match more and more on-premise applications with mobile and cloud offerings. As a first step, businesses can apply business continuity strategies to their most mission-critical applications. But applying that strategy to absolutely every application can drain time and money from IT's core mission: continuous innovation.


Fortunately, there's another way to vastly improve system and application reliability.


Cost of downtime: DevOps to the rescue?

That's where DevOps enters the picture. As I mentioned in an earlier blog, DevOps now involves five key team roles and responsibilities. Take a closer look at each of the five roles, and you'll see their potential influence over system and application reliability:

  1. VP of Getting Things Done: Usually the VP of operations or VP of DevOps, or perhaps a VP of product or a VP of engineering. Whatever the title, the VP typically is obsessed with system and application reliability—and the overall customer experience (CX).
  2. Trusted lieutenants: Here, you find automation experts, code release experts, and architects. Automate the right way, and continuous delivery will reach end users in seamless ways. Automate the wrong way, and your continuous delivery pipeline can look more like a broken conveyor belt, torturing customers with unpredictable upgrade arrivals.
  3. Quality assurance experts: Debug everything the right way before it reaches automated, continuous delivery, and chances are your system reliability will rise.
  4. Security and compliance leaders: Build for compliance mandates (Sarbanes-Oxley, HIPAA, PCI DSS, etc.) from the start, and you'll mitigate the chances of a hack or a government-ordered shutdown affecting your system.
  5. Overall operations experts: This is where a centralized dashboard for application and performance monitoring enters the picture. Get it right and those dashboards can help you see and correct declining performance before a full-blown outage arises.
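The dashboard idea in role 5 boils down to alerting on degradation before it becomes an outage. A minimal sketch, with purely illustrative latency thresholds:

```python
# Minimal sketch of role 5's monitoring idea: classify a window of
# latency samples so degradation is flagged before a full outage.
# The 200 ms / 500 ms thresholds are illustrative assumptions.

def check_latency(samples_ms, warn_ms=200, crit_ms=500):
    """Return 'ok', 'warning', or 'critical' for a window of samples."""
    avg = sum(samples_ms) / len(samples_ms)
    if avg >= crit_ms:
        return "critical"
    if avg >= warn_ms:
        return "warning"
    return "ok"

print(check_latency([120, 140, 110]))   # prints "ok"
print(check_latency([250, 300, 280]))   # prints "warning"
```

The "warning" band is the whole point: it gives operations a chance to correct declining performance while the system is still up.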

Five reasons businesses embrace DevOps

I'm not the only person who sees a connection between DevOps and reduced downtime (or, as I prefer to put it, improved uptime). Roughly 43 percent of enterprises now leverage DevOps best practices, according to IDC. Editor in Chief Alan Shimel took a closer look at IDC's survey results and found these five business drivers for DevOps adoption:

  1. Automation, 60 percent
  2. Continuous delivery, 50 percent
  3. Continuous integration, 43.3 percent
  4. Automated testing, 43.3 percent
  5. Application monitoring and management, 43.3 percent


Certainly, system and application reliability aligns with each of those priorities.

  • Automate the right way and the overall network becomes more and more robust. Automate the wrong way and you could suffer a death of a thousand cuts, torturing your own network with an endless stream of bad updates.
  • Similarly, continuous delivery and continuous integration mean you're building a stronger infrastructure foundation and a stronger application layer for your customers.
  • Automated testing, done the right way, means you can spot potential issues far more rapidly, killing off the bugs before they ever reach your production environments.
  • And finally, application monitoring and management ensures you continuously optimize your production system, mitigating small risks and issues before they propagate out of control and potentially take down your systems.
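The "automated testing" priority above can be sketched as a simple release gate: every registered check must pass before a build reaches production. The check names here are hypothetical:

```python
# Hypothetical sketch of an automated-testing release gate: run every
# registered check and block the release if any of them fails.

def release_gate(checks):
    """checks: list of (name, callable) pairs; callables return True on pass."""
    failures = [name for name, fn in checks if not fn()]
    return ("blocked", failures) if failures else ("released", [])

# Illustrative checks (the lambdas stand in for real test suites):
checks = [
    ("unit_tests", lambda: True),
    ("integration_tests", lambda: True),
    ("smoke_test", lambda: False),  # simulated failure
]
status, failed = release_gate(checks)
print(status, failed)  # prints "blocked ['smoke_test']"
```

Killing the bug at the gate costs minutes; letting it reach production costs the hourly figures quoted earlier.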


The bottom line? Long live DevOps—and the applications its practitioners continuously deliver to your employees and customers.

To gain better insight into DevOps (how it began and why you should leverage it), read Getting grounded with Dev/Ops.



About the author

Joe Panettieri


Joe Panettieri is co-founder and Content Czar for ChannelE2E, which tracks IT service providers from Entrepreneur to Exit (E2E). Panettieri has more than 20 years of experience as a media entrepreneur covering enterprise, midmarket, and small business IT issues.


Connect with Joe:

 Follow me on Twitter @JoePanettieri


I agree with much of what you're saying. I can't say that it's bull**bleep**, but it's mostly interpreted wrongly by many people (or by me). DevOps is first DEVELOPER (hence the Dev prefix) and second Operations. It's easier to train a developer in sane operating procedures than to teach a sysadmin the developer mindset (testing, revision control, breaking work down into logical units, and the like). DevOps is making the old SysOps positions deprecated. SysOps requires manpower or attention to do anything, at least in general terms, while in DevOps we strive for a fully automated system, from the application level down to bare metal, which is out of scope for any normal system administrator. This will become more evident in the years to come as microservices gain traction. With microservices we get a more volatile, highly dynamic environment where manual labor is not an option. Further evidence that the traditional system administrator is being deprecated is containers (Docker and similar). Containers put the entire OS in the hands of the developers; all that's required from system administration is a kernel that supports containers and perhaps storage solutions. A small team of 5 to 10 people can easily manage 10,000 servers in such a setup (not at the container level, but at the container-host level).

Those who believe DevOps is just a developer plus a sysop do not see the entire picture, sadly.
