Shifting to Software-Defined

Workload Repatriation and the Regional Service Provider Opportunity



By guest blogger Luis Miguel Hernanz, Global Chief Technologist for Service Providers, HPE

Public cloud adoption has grown markedly over the years, but the movement of workloads to the cloud is not an all-or-nothing exercise. The most common approach for the enterprise is to embrace a hybrid IT model, sourcing each application from the most appropriate delivery platform (private cloud, managed cloud, public cloud, etc.) based on its unique requirements (e.g. availability, performance, cost, security, compliance and location) in order to drive the right business outcome. Each company executes the evolution from a traditional IT environment to a hybrid IT environment differently, as each has its own goals for the journey and starts from a different level of maturity in hybrid IT management capabilities.

By abstracting away what individual companies are doing, you can see some interesting, and sometimes counterintuitive, trends. In this article, I’ll discuss one of them: workload repatriation, the phenomenon in which a workload that was initially developed and deployed in a public cloud is brought back to a private or managed cloud. There are many reasons for this transition; below I have compiled the most common ones I have seen in the market:

- End-to-end control of the user experience: many companies believe that end-to-end control over their core applications can give them an edge over the competition, and in some cases the only way to get it is to move those applications back to a local data center. Dropbox is a good example [1]: its decision to move its core application from AWS to its own data centers boosted its reputation with enterprise IT groups, its main target customer. Apple realized it could not get the performance it wanted for some of its services [2]. Facebook found it difficult to integrate Instagram with Facebook’s core applications while Instagram remained on AWS [3].

- Regulatory compliance [4]: some industries are heavily regulated. Even though public cloud providers are catching up in this area by supporting the most common certifications and regulations, they only operate data centers in a relatively small number of countries, and data locality is a key requirement of some regulations. They will therefore focus mostly on the regulations and certifications relevant to the countries they cover, a limitation some customers may find unsuitable for their needs.

- Total cost of ownership: it is straightforward to know the cost of a single VM in the public cloud. But when a complex application is data- and I/O-intensive and spans several cloud regions, the total cost of ownership is much harder to understand, and custom configurations or service levels add further costs. Some companies have learned this the hard way by checking their monthly bill after the application was deployed. There have been high-profile examples such as Snap, where 80% of its losses were attributable to its use of Google Cloud [5]; there are strong arguments [6] that it would have been better off repatriating its applications to its own data center.

- Changing financial priorities: when a start-up is born, CAPEX is normally precious and scarce, so an OPEX-based IT model is a good fit. But once the company grows and revenue is flowing at a fast pace, it can make sense to use some of that money to build its own infrastructure and reduce operational costs over the long term (a well-managed private cloud normally has a lower total cost of ownership in the long run [7][8]).

- Performance and SLAs [9] (for traditional workloads): a cloud-native application is designed to run on top of unreliable infrastructure whose performance profile can change at any time. That is not the case for more traditional workloads, which still depend on stable, predictable performance from the underlying infrastructure and therefore run better in the local data center.

- Lifecycle management segmentation: some companies host the different life stages of a workload in different environments. The most common case is developing some applications in the public cloud while keeping the production environments in the company’s own data center, for example because the company feels ready to host non-critical (i.e. dev and test) environments in the public cloud but still lacks the tools and/or processes to do the same for production. Conversely, other companies develop most of their applications in-house but deploy some layers (normally the ones that require the most scale) in the public cloud to handle irregular load spikes such as Black Friday. Another interesting case is creating an application in the public cloud first to validate the market, avoiding CAPEX on something that might not succeed, and then moving it back to the company’s own data center once it has proven successful.

- Improvements in local data center technology [10]: thanks to new technologies like composable infrastructure, containers and hyperconverged solutions, it is possible to get much of the same flexibility and time to market that a public cloud provider offers. Moreover, the advanced financial models some infrastructure providers now offer make it possible to consume infrastructure in your local data center under an OPEX model.
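The two cost arguments above (hidden bill components beyond per-VM pricing, and the long-run CAPEX/OPEX trade-off) can be made concrete with a bit of arithmetic. The sketch below uses purely illustrative numbers, not any provider’s actual rates:

```python
# Illustrative arithmetic for the cost-related repatriation arguments.
# Every price and quantity here is an assumption for the sketch only.

HOURS_PER_MONTH = 730  # average hours in a month


def monthly_cloud_bill(vm_count, vm_hourly, egress_gb, egress_per_gb):
    """Per-VM pricing understates the bill for data-intensive apps:
    compute is easy to price, but data egress charges can dominate.
    Returns (compute-only estimate, full estimate)."""
    compute = vm_count * vm_hourly * HOURS_PER_MONTH
    egress = egress_gb * egress_per_gb  # cross-region / internet transfer
    return compute, compute + egress


def break_even_month(capex, own_opex_monthly, rent_opex_monthly):
    """First month at which owning infrastructure (CAPEX up front plus a
    lower monthly OPEX) stops costing more than renting (pure OPEX).
    Returns None if owning never pays off."""
    if rent_opex_monthly <= own_opex_monthly:
        return None
    month, own, rent = 0, float(capex), 0.0
    while own > rent:
        month += 1
        own += own_opex_monthly
        rent += rent_opex_monthly
    return month


# 20 VMs at $0.10/h looks cheap until 50 TB/month of egress is added.
compute, total = monthly_cloud_bill(20, 0.10, 50_000, 0.09)
print(f"compute-only: ${compute:,.0f}, with egress: ${total:,.0f}")

# $240k of CAPEX, $8k/month to run it yourself vs. $18k/month rented.
print(f"break-even month: {break_even_month(240_000, 8_000, 18_000)}")
```

In this example the egress charge alone is roughly three times the compute charge, and the owned infrastructure pays for itself in two years — exactly the kind of result that only becomes visible when the full bill, not the per-VM price, is modeled.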

Typically, most of the reasons above are interrelated and show up at the same time. The short story is that if an application is critical to your core business and is becoming too difficult or expensive to manage in the public cloud, it is probably a good time to think about bringing it back in-house. Note that this can happen even while you are moving other workloads to the public cloud to explore additional business opportunities, a great illustration that the future of IT will be hybrid [11] and that every company will need to find its Right Mix of the IT options available on the market.


So what is the role of non-hyperscale service providers in this workload repatriation phenomenon? Even when customers suffer from some of the public cloud shortcomings above, many organizations, large and small, still lack the data center footprint, the CAPEX, the people and skills, or even the desire to migrate, host and manage their applications themselves. This is where service providers can add value. Their opportunity is to address the limitations of public cloud providers and meet the unique requirements and complexity of enterprise workloads. Service providers are optimally positioned to address these business needs:

  • They have data centers in the same countries as the customers to provide local support for security, data residency and regulatory compliance requirements.
  • They can create innovative offerings that can better address tighter security and performance requirements.
  • They can create differentiated pricing models that are more predictable and convenient for the enterprise customer.
  • They can provide the transparency and control enterprises need to manage their business.
  • They have the specialized expertise, people and skillset to augment enterprises’ IT.

How can service providers capitalize on the opportunities of workload repatriation and hybrid IT deployment? The most important thing is to stay close to enterprise customers, so as to understand their challenges and highlight the benefits of the service provider’s offerings. To expand their market reach, it is also critical for service providers to partner with companies that specialize in the enterprise space. Such partners contribute additional coverage of the enterprise market, a deep understanding of customer pain points, and the best way to position a service provider solution. Moreover, they are often involved in the early stages of the sales cycle, when there is an opportunity to shape the customer’s point of view about the right way to solve a specific problem.

These collaborations can be tricky, however, if the right partnership model is not established first. Hewlett Packard Enterprise, as part of its new partner-first strategy, has created a dedicated program for partnering with service providers that includes a non-compete commitment. We will likely see other companies follow this lead in the future.

“Workload repatriation is a real opportunity for Regional Service Providers, but they need to be close to the Enterprise Customers to benefit from it.”



External resources:
