Cloud Source

The Future of IT. 6 Technologies to watch.

on ‎07-24-2013 04:58 AM

In my previous blog entry, I talked about how IT may transform our environment and highlighted a couple of future evolutions, the CeNSEr and the Actor. Which information technology evolutions will help us achieve these? I will not talk about sensors here: although they are becoming pervasive, there are many different types with more to come, and discussing them would lead us too far afield. Instead, I want to highlight what needs to change from a server, storage, networking and software perspective to allow the waves described previously to happen. I believe 6 technology evolutions will play a key role, so let me describe them to you.


1. Software defined everything

Over the last couple of years we have heard a lot about software defined networks (SDN) and, more recently, the software defined data center (SDDC). There are fundamentally two ways to implement a cloud. You can take the approach of the major public cloud providers: combine low-cost skinless servers with commodity storage, linked through cheap networking, and deploy racks and racks of them. It is probably the cheapest solution, but you have to implement all the management and optimization yourself. Sure, you can use software tools to do so, but you will have to develop the policies, the workflows and the automation.


Alternatively, you can use what is becoming known as “converged infrastructure”, a term originally coined by HP but now used by all our competitors. Servers, storage and networking are integrated in a single rack, or a series of interconnected ones, and the management and orchestration software included in the offering provides optimal use of the environment. You get increased flexibility and are able to respond faster to requests and opportunities.


We all know that different workloads require different characteristics. Infrastructures are typically implemented using general-purpose configurations that have been optimized to address a very large variety of workloads, so they do an average job for each. What if we could change the configuration automatically whenever the workload changes, to ensure optimal use of the infrastructure for each workload? This is precisely the concept of software defined environments. Configurations are no longer fixed in the hardware, but adapted as and when required. Obviously, this requires more advanced software that is capable of reconfiguring the resources.


A software defined data center is a data center where all infrastructure is virtualized and delivered as a service. Control of the data center is automated by software, meaning hardware configuration is maintained through intelligent software systems. Three core components make up the SDDC: server virtualization, network virtualization and storage virtualization. It should be added that some workloads still require physical systems (often referred to as bare metal), hence the importance of projects such as OpenStack's Ironic, which could be described as a hypervisor for physical environments.
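The core idea can be sketched in a few lines of code. This is a minimal, hypothetical illustration of a software-defined policy: instead of one fixed hardware configuration, a controller selects a resource profile per workload. The profile names and numbers are invented for the example, not drawn from any real product.

```python
# Hypothetical sketch: a software-defined controller picks a resource
# profile per workload instead of relying on one fixed hardware config.

PROFILES = {
    "analytics": {"vcpus": 16, "ram_gb": 128, "storage": "ssd"},
    "web":       {"vcpus": 4,  "ram_gb": 8,   "storage": "hdd"},
    "batch":     {"vcpus": 32, "ram_gb": 64,  "storage": "hdd"},
}

def configure(workload_type: str) -> dict:
    """Return the resource profile to apply for a given workload."""
    # Unknown workloads fall back to a general-purpose profile,
    # which is exactly the "average job" a fixed infrastructure does.
    return PROFILES.get(workload_type, {"vcpus": 8, "ram_gb": 16, "storage": "hdd"})

print(configure("analytics"))
```

The point is not the code itself but where the configuration lives: in software, where it can be changed on demand, rather than baked into the hardware.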


2. Specialized servers

I just mentioned that all workloads are not equal, yet they run on the same general-purpose servers (typically x86). What if we created servers that are optimized for specific workloads? In particular, when developing cloud environments that deliver multi-tenant SaaS services, one could well envisage servers specialized for a specific task, for example video manipulation or dynamic web service management. Developing efficient, low-energy, specialized servers that can be configured through software is what HP's project Moonshot is all about. Today it is still in its infancy, but there is much more to come. Think about 45 server/storage cartridges linked through three fabrics (for networking, storage and high-speed cartridge-to-cartridge interconnections), sharing common elements such as network controllers, management functions and power management. If you then build the cartridges using low-energy servers, you reduce energy consumption by nearly 90%. And if you build SaaS-type environments using multi-tenant application modules, do you still need virtualization? This simplifies the environment, reduces the cost of running it and optimizes the use of server technology for every workload.
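To give a feel for what a "nearly 90%" saving means, here is a back-of-the-envelope check. The wattage figures are illustrative assumptions, not HP specifications for Moonshot hardware.

```python
# Back-of-the-envelope check of a ~90% energy saving claim.
# Both wattages below are assumed, illustrative figures.
traditional_server_w = 250   # assumed draw of a general-purpose 1U server
low_energy_cartridge_w = 25  # assumed draw of a low-energy server cartridge

saving = 1 - low_energy_cartridge_w / traditional_server_w
print(f"Energy saving: {saving:.0%}")  # → Energy saving: 90%
```

Multiplied across racks of 45-cartridge chassis running around the clock, a saving of that order changes the economics of the data center, not just its power bill.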


Particularly for environments that constantly run certain types of workloads, such as analyzing social or sensor data, specialized servers can make the difference. So, this is definitely an evolution to watch.


3. Photonics

Let’s now complement those specialized servers with photonics-based connections, enabling flat, hyper-efficient networks that boost bandwidth. We then have an environment optimized to deliver the complex tasks of analyzing, and acting upon, signals provided by the environment in its largest sense.


But technology is going even further. I talked about the three fabrics; over time, why not use photonics to improve the speed of the fabrics themselves, increasing overall compute speed? We are not there yet, but early experiments with photonic backplanes for blade systems have shown overall compute speed increased by up to a factor of 7. That should be step 2.


Step 3 takes things one step further. The specialized servers I talked about are typically system-on-a-chip (SoC) servers, in other words complete computers on a single chip. Why not use photonics to link those chips with the outside world?

On-chip lasers have been developed in prototypes, so we are not that far out. We could take things a step further still and use photonics within the chip itself, but that remains more distant. I can’t tell you the increase in compute power such evolutions will provide, but I would expect it to be huge.


4. Storage

Storage is at a crossroads. On the one hand, hard disk drives (HDD) have improved drastically over the last 20 years, both in read speed and in density. I still remember the 20 MB hard disk drive of the early 80s, weighing 125 kg. When I compare that with the 3 TB drive I bought a couple of months ago for my home PC, I can easily picture this evolution. But then the SSD (solid state disk) appeared. Where an HDD read takes about 4 ms, an SSD read is down to 0.05 ms.
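Those two latency figures are worth putting side by side; the arithmetic below simply uses the numbers from the text.

```python
# Latency figures from the text: ~4 ms per HDD read, ~0.05 ms per SSD read.
hdd_read_ms = 4.0
ssd_read_ms = 0.05

speedup = hdd_read_ms / ssd_read_ms
print(f"An SSD read is ~{speedup:.0f}x faster than an HDD read")  # → ~80x
```

An 80-fold difference in access latency is the kind of gap that reshapes how applications and databases are designed, not just how fast they run.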


Using nanotechnologies, HP Labs has developed prototypes of the memristor, a new approach to data storage that is faster than flash memory and consumes far less energy. Such a device could store up to 1 petabit of information per square centimeter and could replace both memory and storage, speeding up access to data and allowing an order-of-magnitude increase in the amount of data stored. Since then, HP has been busy preparing production of these devices. The first production units should be available towards the end of 2013 or early in 2014. It will transform our storage approaches completely.


5. Automation

Increasingly, automation is transforming the way we manage our IT environments, from usage through lifecycle management. Automating routine tasks, the reaction to external events, and the testing and deployment of applications and services all lead to faster delivery of new functionality and a quicker reaction to issues in the system.


What is really fascinating is the move to continuous integration: continuously developing and testing the code to provide daily incremental versions of the software. One good example is the OpenStack continuous integration project led by Monty Taylor. Rather than having regular upgrade cycles with their associated risks, the software is enhanced and improved on a daily basis. Automation is becoming critical to manage complex environments delivering services, information and action.
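The principle behind continuous integration can be sketched very simply: every proposed change must pass the same automated gate before it is merged. The step names and checks below are invented for illustration; they are not the actual OpenStack pipeline.

```python
# Minimal sketch of a continuous-integration gate (illustrative only):
# every change runs the same checks, and any failure blocks the merge.

def run_pipeline(change: dict, steps) -> str:
    """Apply each check to the change; the first failure blocks it."""
    for name, check in steps:
        if not check(change):
            return f"blocked by {name}"
    return "merged"

# Hypothetical checks, standing in for a real test suite and linter.
steps = [
    ("unit tests",  lambda c: c.get("tests_pass", False)),
    ("style check", lambda c: c.get("lint_clean", False)),
]

print(run_pipeline({"tests_pass": True, "lint_clean": True}, steps))
```

Because the gate runs on every change, the risk normally concentrated in a big upgrade is spread across many small, individually verified increments.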


6. Networking

As we move to the CeNSEr wave, one area still needs major improvement, and that is the network. Access to and use of networks, whether wireless or wireline, is too difficult and costly to allow the massive amounts of data to be moved to the cloud, analyzed and reacted upon. Yes, 4G is being rolled out on a global basis, but ubiquitous access at a reasonable price is still not available today. The network is quickly becoming the bottleneck and needs major improvements. While we start rolling out 4G, we already hear talk of 5G and even 6G. Will these be the solution? Who knows, but there is one other aspect that needs to be addressed: the economic side of things. What is the cost of transferring 1 petabyte of information?
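To make that last question concrete, consider just the time dimension. The link speed below is an assumed, illustrative figure for a sustained 4G-class connection, not a measured one.

```python
# Rough arithmetic: how long does moving 1 PB take over one link?
# The 100 Mbit/s sustained rate is an illustrative assumption.
petabyte_bits = 1e15 * 8      # 1 PB expressed in bits
link_bps = 100e6              # assumed sustained 100 Mbit/s link

seconds = petabyte_bits / link_bps
days = seconds / 86400
print(f"~{days:.0f} days to move 1 PB at 100 Mbit/s")  # → ~926 days
```

Well over two years for a single petabyte on one such link: whatever the per-gigabyte price turns out to be, both the time and the cost make clear why the network, not compute or storage, risks becoming the bottleneck.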



We have a really interesting and bright future ahead of us, and we will see many new services appear. We also have plenty of opportunities for innovation. As digital technology creeps into everything we do, our lives will have to adapt. For example, one of my friends asked me an intriguing question: by 2050, will we still have driver's licenses? Indeed, if the driverless car is the future, why should we? Now, are you ready to be driven by your car? But this is just what we can imagine now; I'm sure there will be plenty of new things we cannot even think about today. It's up to all of us to invent them. So, what are you waiting for?

on ‎07-29-2013 09:07 PM

A very interesting article! Thanks for sharing!

on ‎12-09-2013 10:22 PM

Very good explanation of the upcoming trends in IT technologies.
