HPE Business Insights

To virtualise or not — that is the (rhetorical) question

on 01-03-2014 03:00 AM (last edited 01-22-2014 11:01 PM) by Alec_Wagner

Today we are facing more and more complexity in our IT landscapes. We are forever bolting on newer, faster, cheaper solutions to deliver business services. And, as if that weren't enough of a recipe for night terrors, we continue to layer on top of existing systems (be they back-end transactional systems, data repositories or utility services) so that we can reuse as much as possible.

 

The challenge is that when the next major project comes along, we face a dilemma: how do we test the new solution in preproduction? What's worse, we don't always have enough control over those dependent systems to test against them. More worrisome still, we don't want to be firing test transactions at live systems, for fear of losing data integrity, or of hearing the inevitable "There is no one left who knows how it works, don't touch it!"

 

We must take a simple, scientific approach: isolate the variable under test and hold everything else constant. Easier said than done, right? Further, we need to repeat tests, perform them earlier in the cycle, and clean out environments to guarantee a known starting point and prevent data corruption. All of this adds cost and makes maintaining a true replica environment less feasible.
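
To make that concrete, here is a minimal sketch (using pytest, with a hypothetical file-based data store standing in for a real back end) of a test that resets its environment to a known baseline before every run:

    import pytest

    @pytest.fixture(autouse=True)
    def clean_environment(tmp_path):
        # Arrange: every test starts from the same known, empty baseline.
        data_store = tmp_path / "orders.db"
        data_store.write_text("")
        yield data_store
        # Teardown: remove the state so nothing leaks into the next test.
        data_store.unlink(missing_ok=True)

    def test_order_is_recorded(clean_environment):
        clean_environment.write_text("order-123\n")
        assert "order-123" in clean_environment.read_text()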

 

The thing is, we need to test. It is not optional: just look at the recent news and count the number of outages that made headlines and might have been mitigated by more thorough testing. We need to be able to develop and test end-to-end, early and often. The earlier we test, the sooner we identify defects, and the cheaper those defects are to fix. The benefits are clear: some studies quote a 1:10:100 multiplier for the cost of fixing the same defect in design, in coding and in production (http://www.riceconsulting.com/public_pdf/STBC-WM.pdf).

 

Going back to the system under test, we have options for how we handle dependent systems. We can stub out external integrations so that they always return a predicted response, but this doesn't really test the integration code. And, let's face it, we have all had to stand in front of the boss and explain how a stubbed routine made it into production, right? No? Oh, just me then. I'll just get my coat on the way out!
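
For illustration, a hypothetical hard-coded stub shows the problem: it always returns the same canned answer, so the real integration code (authentication, serialisation, error handling) is never exercised, and it must never be allowed to leak into production:

    # A stub that always returns a predicted response. Nothing here touches
    # the real credit service, so the integration path goes untested.
    def get_credit_score(customer_id):
        return {"customer_id": customer_id, "score": 750, "status": "OK"}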

 

So if we need to test, but don’t want to risk stubbing something and looking daft, what can we do? We virtualise.

Service virtualisation is a software solution: a virtual service created to emulate a system we interface with, without our having to maintain that system or clean it out after each test run. This removes the cost of additional (sometimes archaic, always expensive) hardware, the risks of stubbing, and the delays of cleaning and preparation before reuse.
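
As a rough sketch of the idea (standard-library Python only, with a hypothetical credit-score endpoint; real service-virtualisation tools record and replay traffic rather than being hand-coded like this), a virtual service is simply a lightweight process that honours the real system's contract:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class VirtualCreditService(BaseHTTPRequestHandler):
        def do_GET(self):
            # Emulate the real service's contract: path, status and payload.
            if self.path.startswith("/credit-score/"):
                customer_id = self.path.rsplit("/", 1)[-1]
                body = json.dumps({"customer_id": customer_id,
                                   "score": 750}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Point the system under test at this host/port instead of the live
        # back end; there is nothing to clean up between test runs.
        HTTPServer(("localhost", 8080), VirtualCreditService).serve_forever()

The system under test needs no code changes beyond configuration: swap the real dependency's endpoint for the virtual one and run the same tests.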

 

Increasing the frequency of testing and doing it earlier has been shown to reduce the number of major bugs significantly. One HP customer reports a 30 percent decrease in issues relating to code quality. More widely, studies suggest that virtualisation can deliver benefits in these areas too:

  • elimination of a performance test delay due to missing or unstable components of the application
  • reduction in hours spent coding, configuring and maintaining custom stubs or homegrown virtualisation
  • reduction in hours of overtime or after-hours work for late-night performance testing

Service virtualisation is a great benefit, but it comes with some overhead. Test scripts driving the front end need to be synchronised with the test responses served at the back end. Orchestrating this can get complex (a sketch of one approach follows below), though no more complex than maintaining a mirror environment. My questions amongst all of this are simple ones: do you virtualise or not? And can you use virtualisation in other areas? One pondering moment had me wondering whether we could use it as a honeypot in security testing: give an "eIntruder" or "eBurglar" the impression that they have breached our network when really we are merely containing them in a secure area of the network, letting them exhaust themselves trying to breach something that is pretending to be something it is not.
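
Coming back to that synchronisation overhead for a moment: one way to keep front-end scripts and virtual back-end responses in step is to drive both from the same test data. A sketch of what that orchestration might look like, with hypothetical seed_response and UI-driver helpers:

    # The same expected record configures the virtual back end and asserts
    # on the front end, so the two cannot drift apart.
    EXPECTED_ORDER = {"order_id": "A-42", "total": "99.95"}

    def test_order_lookup(virtual_service, ui):
        # 1. Seed the virtual service with what the back end should "know"...
        virtual_service.seed_response("/orders/A-42", EXPECTED_ORDER)
        # 2. ...then drive the front end and check it shows the same data.
        ui.open("/orders/A-42")
        assert ui.text("total") == EXPECTED_ORDER["total"]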

 

To learn more, download the free HP/Forrester whitepaper "Service Virtualization And Testing (SVT): How Application Development, Testing, And Delivery Leaders Can Speed Up Delivery And Improve Quality Of Applications" [reg. req'd.].

 

What are your feelings toward service virtualisation? Is it something you use or see a need for? Or not? Are there other uses for it that we are not tapping into right now? I would be interested in your viewpoints, so please share them in the comments below.

 

Ken O'Hagan is director of software presales for UK&I at Hewlett-Packard. Before coming to HP, Ken amassed close to 10 years of technical experience, working for companies such as Perot Systems and the Bank of Ireland. During his time at the latter, he was responsible for architecture definition and validation, hardware specification, technical design, and implementation, and was a key part of the team that successfully implemented the five largest programmes ever delivered for Bank of Ireland.
