
Legacy IT meets automation and orchestration: Part I

Common wisdom holds that as legacy IT becomes more virtualized, it also becomes easier to manage. But does it really?

While it's true that virtual resources can be provisioned, configured, deployed, and then decommissioned at a rapid pace, these tasks appear easy only because software now handles the work of completing them. The simple fact is that, because of virtual sprawl, managing dynamic virtual infrastructure is much more difficult than running yesterday's hardware-defined stack.

That's why advanced automation and orchestration needs to be treated as a core component of legacy IT—whether it resides in the data center, a colocation facility, or the cloud. The overall environment must become more responsive to the rapid pace of digital business. With automation and orchestration, line-of-business managers can simply define what they need and have the software pull it together from various local and remote locations, then apply the appropriate security, governance, and related policies. Meanwhile, IT can get back to doing what it does best: analyzing and improving applications to deliver a better user experience.
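The self-service flow described above can be sketched in a few lines; the resource catalog, policy names, and `provision` helper here are hypothetical illustrations, not an HPE or vendor API:

```python
# Minimal sketch of a self-service provisioning request (hypothetical API).
# A line-of-business manager declares *what* is needed; the automation
# layer decides *where* it runs and which governance policies apply.

REQUEST = {
    "name": "quarterly-reporting",
    "cpu_cores": 8,
    "locations": ["on-prem", "cloud"],   # candidate placement targets
    "policies": ["encrypt-at-rest", "access-audit"],
}

def provision(request, free_cores_by_location):
    """Pick the first location with enough capacity, then attach policies."""
    for loc in request["locations"]:
        if free_cores_by_location.get(loc, 0) >= request["cpu_cores"]:
            return {
                "workload": request["name"],
                "placed_on": loc,
                "policies": request["policies"],
            }
    raise RuntimeError("no location can satisfy the request")

placement = provision(REQUEST, {"on-prem": 4, "cloud": 16})
print(placement["placed_on"])  # cloud: on-prem lacks enough free cores
```

The point of the sketch is the division of labor: the request names only the outcome, while placement and policy attachment happen inside the automation layer.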

5 key layers that need attention

There are challenges here as well. Without adequate strictures on access, data sharing, and other functions, even a seemingly well-orchestrated operation can come crashing down as resources and applications start to vie for common assets. And if proper lifecycle policies aren't put in place, the end comes even sooner as orphaned VMs and containers are left humming away without anyone realizing that they're tying up resources and costing money. This is why enterprise executives need to ask the hard questions before introducing new processes into their legacy environments. How will current applications and VMs be affected? What are the workload implications?
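A lifecycle policy of the kind described reduces to a periodic sweep for orphaned resources. The inventory records and 30-day TTL below are illustrative assumptions, not a real inventory API:

```python
from datetime import datetime, timedelta

# Lifecycle policy sketch (illustrative data model): a VM is considered
# orphaned when it has no registered owner and has been idle past the TTL.
IDLE_TTL = timedelta(days=30)

def find_orphans(vms, now):
    """Return names of VMs that the lifecycle policy flags for decommissioning."""
    return [
        vm["name"]
        for vm in vms
        if vm["owner"] is None and now - vm["last_active"] > IDLE_TTL
    ]

now = datetime(2016, 6, 1)
vms = [
    {"name": "vm-web-01", "owner": "ecommerce", "last_active": now},
    {"name": "vm-test-07", "owner": None, "last_active": now - timedelta(days=90)},
    {"name": "vm-new-02", "owner": None, "last_active": now - timedelta(days=2)},
]
print(find_orphans(vms, now))  # ['vm-test-07']
```

Running a sweep like this on a schedule is what keeps forgotten VMs and containers from quietly tying up resources and budget.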

A fully functional automation and orchestration environment should target five key layers in the data/infrastructure stack, says Data Center Knowledge's Bill Kleyman. On the hardware layer, simple one-to-one application-to-server mapping is no longer good enough. A virtual environment requires dynamic load balancing and automated provisioning so workloads can move easily to wherever they're needed and be supported at minimal cost. Additionally, the software/application layer needs greater control to drive more efficient use of resources and to let applications migrate more easily across multiple hosts.
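Load-aware placement, the core of the dynamic balancing Kleyman describes, can be sketched as follows; the host model is a hypothetical simplification:

```python
# Sketch of load-aware placement (hypothetical host model): instead of a
# fixed one-to-one application-to-server mapping, each new workload lands
# on whichever host currently has the most free capacity.

def place(workload_cores, hosts):
    """Assign the workload to the host with the most free cores."""
    host = max(hosts, key=lambda h: h["total"] - h["used"])
    if host["total"] - host["used"] < workload_cores:
        raise RuntimeError("no host has enough headroom")
    host["used"] += workload_cores
    return host["name"]

hosts = [
    {"name": "host-a", "total": 16, "used": 12},
    {"name": "host-b", "total": 16, "used": 4},
]
print(place(4, hosts))  # host-b: 12 free cores vs. 4 on host-a
```

Real orchestrators weigh memory, affinity rules, and cost alongside CPU, but the principle is the same: placement decisions follow current load, not a static mapping.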


On the virtual layer, automation and orchestration benefits service delivery and hypervisor management, ensuring broad interoperability and coordinated security, governance, and compliance policies. In the cloud, automation and orchestration provides the necessary interoperability across public, private, and hybrid infrastructure so the enterprise can view and control the various services as a single, integrated data ecosystem. Finally, the data center layer balances resources, energy consumption, and other elements for optimal performance.

Automation and orchestration in action

Some of the world's leading enterprises are already putting this kind of functionality into action. Sprint is using automation and orchestration to foster a DevOps style of management across cloud and software-defined architectures. As Chris Saunderson, lead architect for data center automation at Sprint, explains, the idea is to create a "data center of things" that enables not just a more efficient and effective data environment, but one that can be rapidly altered to meet emerging market challenges.

But technology is easy to change; it's people who are resistant, says Saunderson: "People are very good about doing technology change, but unwiring people's brains is a problem, and you have to acknowledge that up-front. You're going to have a significant amount of resistance from people to change the way that they're used to doing things."

With a DevOps management model, capabilities such as resource configuration and orchestration are built into the application itself, allowing users and managers to define the parameters of its operation and letting the application handle the messy business of building its ideal environment. In this way, business processes can be built around the needs of users rather than the constraints of static infrastructure.

In the very near future, automation and orchestration will not be just another luxury for the well-heeled enterprise—it will be a critical component of the distributed data environment.

In part II of this series, we'll explore how enterprises can expand IT automation and orchestration to power innovation. To learn more about boosting legacy IT, read "Traditional meets innovation: Unleash your IT potential."


About the Author


I have worked at HPE for 6 years and now try to help customers identify solutions to their most frequent data center problems. In a past life, I tried teaching high school science for a dozen years, but finally decided change is good and moved over to HPE. I live in Albuquerque, New Mexico with my family, a cat, and a dog.


Virtualizing legacy app servers is especially challenging when the original administrator of the legacy system has long since retired and didn't document anything.

The most challenging part is getting the software talking and working as efficiently as it did before.
