Patrick_Lownds

Workload Rebalancing

Introduction

The public cloud has been a game-changer in enterprise IT, enabling companies to rapidly scale up their infrastructure without investing in expensive on-premises hardware. However, as companies have become more reliant on the public cloud, they have also started to realise some of its drawbacks. One of the biggest concerns for many companies is the lack of control over their data when it's stored in the public cloud. This has led some companies to consider rebalancing workloads back to their own data centres and hybrid cloud platforms.

Rebalancing workloads from the public cloud means moving data and applications previously hosted with a public cloud provider back to a company’s own data centre and/or a hybrid cloud platform. There are several reasons why a company might choose to do this.

One common reason is to regain control over sensitive data. When data is stored in the public cloud, it's often subject to the cloud provider's security policies and procedures. While cloud providers have robust security measures in place, some companies feel more comfortable managing their own security controls.

Another reason why a company might rebalance workloads is to save costs. While the public cloud can be cost-effective in the short term, as workloads scale up, costs can quickly spiral out of control. This is particularly true for companies that have unpredictable workloads. With a hybrid cloud or an on-premises infrastructure, companies can better predict their costs and avoid unexpected bills.

A third reason why a company might rebalance workloads is to improve performance. While the public cloud offers scalability, it's not always the fastest option. This is particularly true for applications that require low latency, such as those used in financial services or gaming. By moving workloads closer to the end user, companies can improve performance and reduce latency.

When it comes to workload rebalancing, a failed cloud migration can create additional challenges. The data and applications may have been modified or changed during the migration, making it more difficult to move workloads back to the original infrastructure.

Fig1 - Workload Rebalancing

Despite the challenges, many companies have successfully rebalanced workloads from the public cloud, or from one public cloud provider to another. Companies looking to rebalance workloads between public cloud providers share many of the motivations of those moving workloads on-premises, but there are other reasons as well.

One common reason in this scenario is to avoid vendor lock-in. By diversifying their cloud provider portfolio, companies can reduce the dependency on a single vendor and gain flexibility in terms of pricing, service offerings, and contractual terms. This enables these companies to negotiate better deals, switch providers when necessary, or take advantage of unique features offered by different cloud providers.

Changes in the corporate landscape, such as mergers, acquisitions, or divestitures, can lead to the need for workload rebalancing. Companies may consolidate their IT infrastructure and migrate workloads to a common cloud platform following a merger or acquisition. Conversely, in a divestiture, separate entities may require rebalancing workloads to different cloud providers based on their individual strategies and requirements.

Fig2 - Case Study AWS to Azure

To successfully rebalance workloads, companies need to take a strategic approach. Start by assessing the current infrastructure and identifying which workloads are best suited for rebalancing. Next, create a detailed migration plan that includes timelines, budgets, and resource requirements. It's also important to ensure that the new infrastructure is properly configured and that data is migrated securely.

So how do you start that journey?

Discovery

Discovery is the process of identifying and understanding the workloads and applications running within an IT infrastructure. It involves gathering information about the applications, their dependencies, and their interactions with other components of the infrastructure. The goal of discovery is to create a comprehensive inventory of applications, understand their characteristics, and assess their interdependencies.

The amount of time and effort required to carry out a successful discovery will vary depending on where you are migrating workloads to and from. When moving from a public cloud provider to on-premises or between public cloud providers, you may find that it takes less time than first migrating workloads to the public cloud from on-premises.

Discovery is crucial for effective IT management, especially when planning rebalancing, consolidation, or simply optimising your environment. By gaining a clear understanding of the workloads and applications and their relationships, companies can make informed decisions and minimise the risks associated with changes to the IT environment.

There are various methods and tools available for carrying out that discovery, each with its own strengths and limitations. Let's explore some common approaches to application discovery:

  1. Manual Inventory - This method involves manually documenting the applications in the infrastructure. IT personnel collect information by interacting with application owners, reviewing documentation, and examining system configurations. While this method can provide a high level of accuracy, it can be time-consuming, prone to human error, and may not capture all the details required to ensure success.
  2. Network Scanning - Network scanning tools automatically scan the network to identify active devices and collect information about running applications. Network scanning can provide a broad view of the applications in the environment but may miss applications that are not network-visible.
  3. Agent-Based Discovery - Agent-based discovery involves installing lightweight software agents on individual systems to collect data about applications. These agents can gather detailed information about the workloads and applications, their configurations, and resource utilisation. Agent-based discovery is more accurate and can provide real-time information, but it requires the deployment and management of agents across the infrastructure.
  4. Configuration Management Databases (CMDB) - CMDBs are central repositories that store information about IT assets, including applications. They consolidate data from various sources, such as network scans, agent-based discovery, and manual inputs. CMDBs provide a holistic view of applications and their relationships with other infrastructure components. They can also integrate with other IT management processes, such as change management and incident management.
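As an illustration of the network-scanning approach (point 2 above), the sketch below probes hosts for a handful of well-known TCP ports. The port-to-service map and any hosts you pass in are illustrative; a real discovery tool would cover far more ports and correlate the results with a CMDB.

```python
# Minimal network-scanning discovery sketch: try a TCP connection to a
# few well-known ports and record which services answer. Ports listed
# here are illustrative, not a complete discovery profile.
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3306: "mysql"}

def scan_host(host: str, timeout: float = 0.5) -> dict:
    """Return {port: service} for ports on `host` that accept a connection."""
    found = {}
    for port, service in COMMON_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found[port] = service
        except OSError:
            pass  # port closed, filtered, or host unreachable
    return found

# Example: inventory = {h: scan_host(h) for h in ["10.0.0.10", "10.0.0.11"]}
```

As the article notes, this only sees network-visible applications; anything listening on non-standard ports, or not listening at all, needs agent-based or manual discovery to surface.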

Once the discovery process is complete, companies can use the gathered information to support various rebalancing initiatives. It also enables effective capacity planning, helps identify workload and application dependencies for consolidation projects, facilitates risk assessment, and aids in troubleshooting and incident management.

Fig3 - Modernise your workloads with HPE

Assessment

An assessment is a process of evaluating and analysing the characteristics, performance, and resource requirements of workloads and applications within an IT infrastructure. It involves gathering data, measuring metrics, and conducting analysis to gain insights into the behaviour, efficiency, and suitability of workloads and applications for specific infrastructure environments or initiatives. The goal of the assessment is to make informed decisions about workload architecture, management, optimisation, capacity planning, and resource allocation.

The assessment typically involves the following key aspects:

  1. Performance Evaluation - includes measuring and analysing performance metrics of the workload or application. This involves monitoring aspects such as response times, throughput, latency, and resource utilisation. Performance evaluation helps identify any bottlenecks or inefficiencies within workloads and applications, enabling companies to optimise performance and ensure optimal resource allocation.
  2. Resource Requirements - Understanding the resource requirements of workloads is crucial for capacity planning and resource allocation. Workload assessment involves analysing factors such as CPU, memory, storage, and network requirements of each workload or application. By assessing resource needs, companies can make decisions about the appropriate infrastructure capacity and provisioning for optimal performance.
  3. Dependency Analysis - Workloads and applications often have dependencies on other components or services within the IT environment, such as databases, APIs, or external systems. Assessing workload dependencies helps in identifying critical dependencies and potential risks. It ensures that all necessary components are properly configured and available to support the workload or application effectively.
  4. Scalability and Elasticity - Assessing the scalability and elasticity of the workload and application is important to ensure they can handle increasing demands and adapt to changing conditions. This involves evaluating the ability of workloads to scale up or down based on workload fluctuations and resource requirements. Understanding workload scalability enables companies to plan for future growth and optimise infrastructure resources accordingly.
  5. Cost Analysis - Assessing the cost implications of rebalancing during the assessment is essential for effective resource allocation and budget management. The assessment includes evaluating the financial impact of running and maintaining specific workloads and applications, considering factors such as licensing costs, infrastructure expenses, and operational costs. Cost analysis helps companies make informed decisions about workload and application optimisation and cost-effective resource allocation.
  6. Security and Compliance - Assessing the security and compliance aspects of workloads and applications ensures that they meet company security policies and regulatory requirements. This involves analysing configurations, access controls, encryption, and data protection measures. By assessing security and compliance, companies can identify and address potential vulnerabilities or non-compliance issues.

Migration

Migration refers to the process of moving workloads and applications, along with their associated services, from one IT environment to another. It involves transferring the entire system, including its data, configurations, dependencies, and associated components, from the source environment (such as a public cloud provider or on-premises infrastructure in need of modernisation) to the target environment (such as an on-premises hybrid cloud platform and/or a different public cloud provider).

Migration can be driven by various factors, including the need for scalability, cost optimisation, improved performance, better security, or organisational changes such as mergers or acquisitions. Migration typically follows a structured approach to ensure a smooth transition and minimise disruption to business operations. Here are the key steps involved in migration:

  1. Planning - The planning phase involves defining the migration goals, identifying the workloads to be migrated, and determining the target environment. It includes assessing the dependencies, resource requirements, and potential challenges of migration. A migration strategy and timeline are developed, and stakeholders are identified.
  2. Pre-migration Preparation - In this phase, the necessary preparations are made to ensure a successful migration. This includes validating and optimising the source workload environment, ensuring compatibility between the source and target environments, and establishing data backup and recovery mechanisms. Additionally, any necessary network connectivity, security configurations, and compliance requirements are addressed.
  3. Data and Application Migration - This phase involves transferring the data, applications, and associated components from the source environment to the target environment. The migration method can vary depending on factors such as workload complexity, data volume, and downtime tolerance. It may involve using tools or services provided by the target environment or trusted third-party products. Migrating the data is one of the most challenging phases during migration, mainly because of the logistics of moving large quantities of data.
  4. Testing and Validation - After the migration, thorough testing and validation are essential to ensure that the migrated workloads function as expected in the new environment. This includes verifying application functionality, testing performance and scalability, validating data integrity, and conducting any necessary user acceptance testing. The results of these tests determine whether the migration was successful or if any adjustments or remediation are required.
  5. Post-migration Optimisation - Once the workload or application is successfully migrated, the focus shifts to optimising the performance and resource utilisation in the target environment. This may involve making adjustments to configurations, fine-tuning resource allocation, optimising networking settings, and implementing workload or application-specific optimisations. Continuous monitoring and performance analysis help identify areas for improvement and ensure optimal operation in the new environment.
  6. Decommissioning the Source Environment - Once the migration is validated and the workload or application is running smoothly in the target environment, the source environment can be decommissioned. This step involves proper data disposal or archival, updating both documentation and CMDB, plus communicating the completion of the migration to relevant stakeholders.
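The data-integrity validation in step 4 can be as simple as comparing checksums on both sides of the transfer. A minimal sketch (directory paths and the choice of SHA-256 are assumptions; migration tooling usually does this for you, but a spot check is cheap insurance):

```python
# Sketch of a post-migration data-integrity check: hash every file under
# the source tree and compare against the same relative path on the target.
import hashlib
from pathlib import Path

def checksum(path: Path, algo: str = "sha256") -> str:
    """Stream a file through a hash in 1 MiB chunks."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_migration(source_dir: Path, target_dir: Path) -> list[str]:
    """Return relative paths that are missing or differ on the target side."""
    mismatches = []
    for src in source_dir.rglob("*"):
        if src.is_file():
            rel = src.relative_to(source_dir)
            dst = target_dir / rel
            if not dst.is_file() or checksum(src) != checksum(dst):
                mismatches.append(str(rel))
    return mismatches
```

An empty result from `verify_migration` is the kind of evidence you want in hand before step 6, decommissioning the source environment.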

It is worth noting that public cloud providers charge for the egress of data, and the specific policies vary between providers. Amazon Web Services charges for data egress in most cases, as does Google Cloud Platform. Azure has its own pricing model, and charges may apply for data transfer out of its network, so ensure you factor this aspect into your overall project costs.
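Egress pricing is typically tiered by monthly volume, so a quick estimate is worth running before you commit to a migration window. A sketch with entirely made-up tier boundaries and per-GB rates (check your provider's current price list for real figures):

```python
# Illustrative tiered egress cost estimator. Each tier is
# (cumulative GB upper bound, price per GB); the boundaries and
# rates below are assumptions for the sketch, not real prices.
TIERS = [(10_240, 0.09), (40_960, 0.085), (float("inf"), 0.07)]

def egress_cost(gb: float) -> float:
    """Total the charge for `gb` of egress across the pricing tiers."""
    total, remaining, lower = 0.0, gb, 0.0
    for upper, rate in TIERS:
        band = min(remaining, upper - lower)  # volume billed at this tier
        total += band * rate
        remaining -= band
        lower = upper
        if remaining <= 0:
            break
    return total
```

For a one-off bulk move, compare the estimate against the providers' offline transfer services, which can be cheaper than network egress at large volumes.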

It's important to review the specific pricing details, documentation, and terms of service of each cloud provider to understand their egress charging policies accurately. Cloud providers regularly update their pricing structures, so it's advisable to refer to the official documentation or contact the provider directly for the most up-to-date information regarding egress charges.

In conclusion, the rebalancing of workloads is a complex process that requires careful planning and execution. However, for companies that are concerned about data security, cost, or performance, it can be a viable option. By taking a strategic approach and leveraging modern infrastructure technologies, companies can successfully rebalance workloads back to their own data centres and/or to hybrid cloud platforms.

For more information on the many ways we can help you, visit https://www.hpe.com/uk/en/services/pointnext.html.

Patrick Lownds
Hewlett Packard Enterprise

twitter.com/HPE_TechSvcs 

linkedin.com/showcase/hpe-technology-services/ 

hpe.com/pointnext  
