
Windows 2003 Upgrade: From assets to potential liabilities


By Ken O'Hagan

 

Following on from my last blog about the end of life of Windows 2003, let’s now look at the challenge of simply understanding the true scale and complexity of our Windows 2003 assets, which are soon to become potential liabilities.

 

If we look across our IT estate, we see a picture of technical diversity. How many times has your organisation sought to “standardise” on a single monitoring solution, or tried to reduce the myriad of server configurations to a small, medium or large option? Then think back to the entropy and config drift that have crept in thanks to the white knights and heroes in the trenches who “just get it back online” on a daily basis. A quick fix here, a config tweak there, and the next thing you know your “standardisation” has become specific to one particular project.

 

Fast forward a few years and we are not quite sure what we have out there or how it will react to an upgrade of its operating system. What’s more fear-inducing to the business owner is that we also have to consider what these systems depend on, in order to understand the full impact.

 

Simply put, you need to get the best understanding of your environment that you possibly can, so that you avoid contributing to the analyst view that 60% of outages are caused by poorly managed change. One of the more significant changes in the Windows line-up is that, as of Windows 2012, there is no 32-bit option any more. So any applications that are not fully 64-bit compatible need to be replaced, upgraded or rewritten.
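
As a first pass at that 64-bit question, here is a minimal sketch of how you might flag 32-bit executables for review. It simply reads the COFF machine field from each PE header; the C:\Apps path and the idea of scanning *.exe files are illustrative assumptions, not a prescription:

```python
import struct
from pathlib import Path

# IMAGE_FILE_MACHINE values from the PE/COFF specification
MACHINE_I386 = 0x014C   # 32-bit x86
MACHINE_AMD64 = 0x8664  # x86-64

def pe_machine(path: Path):
    """Return the COFF machine field of a PE file, or None if it is not a PE."""
    with path.open("rb") as f:
        dos = f.read(64)
        if len(dos) < 64 or dos[:2] != b"MZ":
            return None
        # e_lfanew (offset of the PE signature) lives at offset 0x3C
        pe_offset = struct.unpack_from("<I", dos, 0x3C)[0]
        f.seek(pe_offset)
        if f.read(4) != b"PE\x00\x00":
            return None
        # The 2-byte machine field immediately follows the signature
        return struct.unpack("<H", f.read(2))[0]

def flag_32bit_only(root: str):
    """Walk a directory tree and list executables built for 32-bit x86."""
    return [p for p in Path(root).rglob("*.exe") if pe_machine(p) == MACHINE_I386]

if __name__ == "__main__":
    # C:\Apps is a placeholder; point this at your own application directories
    for exe in flag_32bit_only(r"C:\Apps"):
        print(f"32-bit binary, review for 64-bit compatibility: {exe}")
```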

 

Don’t be tricked into thinking this is a Windows-only issue. It’s not. Here’s what you need to think about:

 

1. How many servers have you got and what are they?

This means accurately comparing what your asset lists tell you against what is actually physically out there. It can be done as an old-school pen and paper exercise, but the last thing we want is to have someone manually recording server details and assets using their eyes and spreadsheets; that approach is fraught with scope for error. Inventory tooling that can accurately and, more importantly, repeatably discover the server assets and their component parts is more effective. But it’s a catch-22. In order to discover down to the chipset level, you need to be able to get deep into the guts of the machine. This is not something that can be enabled on a machine remotely, so we would need an agent-based solution. If we need that, we need to know the server is there. If we know the server is there, why do we need to discover it? Fun and games. Well, not really, because even though we know about a server, we still need to understand what hardware it has and all manner of other things about it to help decide whether it is a candidate to take the latest operating system or not.
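
A simple first pass at the reconciliation is a set comparison between the asset register and a discovery export. The sketch below assumes two hypothetical CSV files, asset_register.csv and discovery_export.csv, each with a hostname column; your own CMDB and discovery tooling will obviously differ:

```python
import csv

def load_hostnames(path: str, column: str) -> set:
    """Load one column of a CSV export into a normalised set of hostnames."""
    with open(path, newline="") as f:
        return {row[column].strip().lower()
                for row in csv.DictReader(f) if row.get(column)}

# Hypothetical exports: the asset register versus what discovery actually found
asset_register = load_hostnames("asset_register.csv", "hostname")
discovered = load_hostnames("discovery_export.csv", "hostname")

print("In the register but never discovered (possibly retired or unreachable):")
for host in sorted(asset_register - discovered):
    print("  ", host)

print("Discovered but missing from the register (unrecorded servers):")
for host in sorted(discovered - asset_register):
    print("  ", host)
```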

 

2. What system software are you running?

Will it still work? Is there a newer version we must go to, and what does that mean? Within your environment, what system tools do you require to be compliant? Do you use specific backup and recovery systems, for example? How will they handle a machine backed up under Windows 2003 tonight and Windows 2012 tomorrow? Is there a version of the software for Windows 2012, and is it backwards compatible with older archived backup media? These are the questions we have to ask for each component of system software, including monitoring agents: can the agents be upgraded in isolation, or will this also drive a need to upgrade the operations management console? Will the JVMs need to be upgraded?
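
One way to make these checks repeatable is to hold a small compatibility matrix and test each server’s inventory against it. The product names and minimum versions below are illustrative placeholders, not vendor data:

```python
# Hypothetical matrix: minimum agent version certified on Windows 2012.
MIN_SUPPORTED = {
    "backup-agent":     (7, 5),
    "monitoring-agent": (11, 0),
    "av-agent":         (12, 1),
}

def parse_version(text: str) -> tuple:
    """Turn '7.50.012' into a comparable tuple of integers."""
    return tuple(int(part) for part in text.split(".") if part.isdigit())

def check_server(installed: dict) -> list:
    """Return the list of agents that must be upgraded before (or with) the OS."""
    issues = []
    for product, version in installed.items():
        minimum = MIN_SUPPORTED.get(product)
        if minimum is None:
            issues.append(f"{product}: no Windows 2012 support data, investigate")
        elif parse_version(version)[: len(minimum)] < minimum:
            issues.append(f"{product} {version}: below minimum "
                          f"{'.'.join(map(str, minimum))}")
    return issues

# Example: a per-server inventory as discovery might report it
print(check_server({"backup-agent": "7.2.1",
                    "monitoring-agent": "11.3",
                    "custom-agent": "1.0"}))
```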

 

3. What commercial off-the-shelf software are you running and is it compatible?

In addition to system software, consider the main functional software: databases, application servers and native server components. Is there a 64-bit version available, and will it continue to operate correctly? Are there any hardware components it relies upon that may no longer work? Does it require any exotic peripherals that may not be supported going forward?
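
Before you can put those questions to the vendors, you need a complete list of what is actually installed on each box. As a rough, Windows-only sketch that would run locally (or be pushed out by your agent of choice), Python’s standard-library winreg module can walk both uninstall views of the registry:

```python
import winreg

UNINSTALL_KEYS = [
    # 64-bit installs (and all installs on a 32-bit OS such as Windows 2003 x86)
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall",
    # 32-bit installs on a 64-bit OS appear under the Wow6432Node view
    r"SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall",
]

def installed_software():
    """Return (display name, version) pairs from the local uninstall registry keys."""
    results = []
    for key_path in UNINSTALL_KEYS:
        try:
            key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path)
        except OSError:
            continue  # the Wow6432Node view does not exist on a 32-bit OS
        with key:
            for i in range(winreg.QueryInfoKey(key)[0]):
                with winreg.OpenKey(key, winreg.EnumKey(key, i)) as sub:
                    try:
                        name = winreg.QueryValueEx(sub, "DisplayName")[0]
                        version = winreg.QueryValueEx(sub, "DisplayVersion")[0]
                        results.append((name, version))
                    except OSError:
                        pass  # not every uninstall entry carries both values
    return results

if __name__ == "__main__":
    for name, version in sorted(installed_software()):
        print(f"{name}\t{version}")
```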

 

4. What in-house developed applications are you running, and is the code going to work in the new operating environment?

This is where it gets more involved. Don’t get me wrong: all of the changes so far need to be tested to make sure everything still works. However, this is where further time and effort on our part is needed. We need to carry out code reviews to ensure that moving to new versions of the JVMs, to new chipsets and to a 64-bit environment will not break anything.
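
Code reviews are human work, but a crude automated sweep can help prioritise where the reviewers spend their time. The patterns below are heuristics invented purely for this sketch rather than a definitive list; substitute whatever your own teams know to be risky:

```python
import re
from pathlib import Path

# Illustrative heuristics only: patterns that often deserve a closer look when
# moving code to a 64-bit OS or a newer JVM. Tune these for your own codebase.
SUSPECT_PATTERNS = {
    r"Program Files \(x86\)": "hard-coded 32-bit install path",
    r"Wow6432Node":           "hard-coded 32-bit registry view",
    r"\(int\)\s*\w*ptr":      "possible pointer truncated to a 32-bit int",
    r"Thread\.stop\(":        "JVM API deprecated/removed in newer Java versions",
}

def scan_source(root: str, extensions=(".c", ".cpp", ".cs", ".java", ".vb")):
    """Print file, line number and reason for every suspect match under root."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in extensions:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern, reason in SUSPECT_PATTERNS.items():
                if re.search(pattern, line):
                    print(f"{path}:{lineno}: {reason}: {line.strip()}")

if __name__ == "__main__":
    scan_source("./src")   # hypothetical source tree
```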

 

5. The ‘what else’ question

If we have to upgrade applications or code on the servers, does that affect any other servers? Is there a knock-on effect, and does that drive a dependent upgrade? By looking at the discovered environment, we can use dependency mapping technologies like Universal Discovery at HP to work out which servers in the environment depend on which others to function. We can then apply these steps to each of them in turn and feed the results into migration planning and a prioritised list of servers. We can work in logical groupings that suit the environment, such as location by location, or groupings based on business service or departmental ownership. The latter tends to be the most effective, though it does make the logistics more entertaining.
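
Once the dependencies are known, ordering the migration becomes a graph problem. Here is a minimal sketch using a made-up dependency map in place of whatever your discovery tooling actually reports; servers only appear in the order once everything they depend on has already been placed:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical dependency map: each server lists the servers it depends on,
# so those dependencies need to be migrated (and tested) first.
depends_on = {
    "web-01":  {"app-01"},
    "app-01":  {"db-01", "auth-01"},
    "db-01":   set(),
    "auth-01": {"db-01"},
}

# static_order() yields a node only after all of its dependencies,
# giving a candidate migration sequence that can be cut into waves.
order = list(TopologicalSorter(depends_on).static_order())
print("Suggested migration order:", order)
```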

 

I hope this has sparked a thought process for your own environment. In my next blog I will discuss testing and deployment. As always, I am interested in your comments.
