Cloud Source

The Machine, a view of the future of computing

on 06-16-2014 10:27 PM


The second week of June is traditionally the week of HP Discover, which brings me back to hot and sunny Las Vegas. This year was no exception. What made this HP Discover particularly interesting is that HP is celebrating its 75th anniversary. Yes, it was 75 years ago that Bill Hewlett and Dave Packard tossed a coin and named the company Hewlett-Packard. What would have happened if the coin had landed the other way and we had become PH?


These have been 75 years of innovation, and although over the last decade analysts seemed to feel HP had stopped innovating, things are definitely back on track. HP is back. I believe that’s the key phrase for this HP Discover. So let me give you a feel for what I’m talking about. We introduced many products during the week, and I will come back to some of them, but there is one thing in particular I’d like to tell you about. It’s called “The Machine.” It’s not a product (yet); it’s an effort. And in the usual HP approach, we call upon our partners to work with us on that effort. But what effort am I talking about?


Re-inventing the computing industry

Since 1945, computers have followed a clearly defined architecture, the “von Neumann” architecture. It consists of a control unit, an arithmetic logic unit, memory and input/output devices. Some special-purpose processors have been built with other architectures, but all our general-purpose computers have followed this one. Yes, there have been major improvements; one of the key ones was the Reduced Instruction Set Computer (RISC) designs introduced in the 1980s. (By the way, HP played a major role in that.)


“The Machine” is changing the paradigm. Up till now, all computers have worked with electrons. Sure, you’ll tell me there are experiments with quantum computers and others, but frankly those are still in their infancy. Electrons have the nasty property that at any given moment you never know exactly where one is. So you need a group of them to ensure, statistically, that a gate has been opened or closed. Over the years we have made silicon traces smaller and smaller, working with fewer and fewer electrons. We are therefore bound to reach a limit: the moment we can no longer be sure a gate is open or closed, because we don’t know whether an electron has actually hit it.


Quantum computers try to solve that by using photons rather than electrons.


The Machine takes a completely different approach. Realizing that most computers spend up to 80 percent of their time on tasks that manage the environment rather than perform the job at hand, The Machine simply gets rid of those tasks, making computers far more efficient. How do we do that?


Well, in current environments, two kinds of tasks take up the majority of the effort. On one hand, instructions and data keep being shuffled between persistent storage, memory and cache. Things move up and down through these layers all the time, and in the process they cross multiple communication buses, each managed by its own software stack. What if we got rid of them? On the other hand, there is all the machinery we have developed to use the capacity of our current CPUs efficiently: virtualization layers, multi-tasking and so on. Again, what if we could get rid of those?


Cache, memory and storage, make it one

If we had a single technology with the speed of cache and the persistence of storage, we could combine storage, memory and cache into a single device that keeps all information and instructions. There would no longer be any need to boot up computers, shut them down or hibernate them. All instructions and data would simply be there when we need them.
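Nothing like that single-tier memory exists commercially yet, but we can illustrate the idea with today’s tools: memory-mapping a file gives plain load/store access to data that survives a restart. The sketch below (ordinary Python; the file path is invented for the example) is only an analogy for what a memristor fabric would make native, with no file system or paging underneath.

```python
# Analogy only: approximate "memory that persists" by memory-mapping a file.
# Stores into the mapped region are ordinary memory writes, yet the data
# survives a process restart -- roughly the programming model a universal
# memristor store would offer natively.
import mmap
import os

PATH = "/tmp/persistent_region.bin"  # hypothetical backing file for the demo
SIZE = 4096

# Create the backing file once, sized to the region we want.
if not os.path.exists(PATH) or os.path.getsize(PATH) < SIZE:
    with open(PATH, "wb") as f:
        f.write(b"\x00" * SIZE)

with open(PATH, "r+b") as f:
    region = mmap.mmap(f.fileno(), SIZE)
    region[0:5] = b"hello"       # a plain store into "memory"
    region.flush()               # state persists across restarts
    print(bytes(region[0:5]))    # prints b'hello'
```

Run the script twice: on the second run the bytes are already there before the store, which is the “no boot-up, everything is just there” property the paragraph describes.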


Is this a dream? Actually, no. HP has developed a technology, called the memristor, that allows us, at affordable prices, to combine cache, memory and storage functionality in a single device for the environments we will need in the near future, including storage of the “big data” discussed in previous blog entries.


Memristance is a property of an electronic component. If charge flows in one direction through it, its resistance increases; if charge flows in the opposite direction, its resistance decreases. If the flow of charge is stopped by turning off the applied voltage, the component “remembers” the last resistance it had, and when the flow of charge starts again, the resistance is what it was when last active. More information on HP’s memristor activities can be found on Wikipedia.
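To make that behavior concrete, here is a toy numerical model. This is my own illustration, not HP’s device physics: resistance drifts one way or the other with the direction of current, stays within physical limits, and holds its value when the current stops, which is exactly the non-volatile “memory” part.

```python
# Toy memristance model (illustrative only, not a real device equation):
# resistance rises when charge flows one way, falls the other way, and is
# held unchanged when no current flows.
def step(resistance, current, dt=1.0, k=10.0, r_min=100.0, r_max=1000.0):
    """Advance the device by one time step.
    current > 0 raises resistance, current < 0 lowers it,
    current == 0 leaves it unchanged (the non-volatile 'memory')."""
    r = resistance + k * current * dt
    return max(r_min, min(r_max, r))  # clamp to the device's limits

r = 500.0
for i in [+1.0] * 20:        # drive charge in one direction
    r = step(r, i)
print(r)                     # 700.0 -- resistance went up
for i in [0.0] * 5:          # "power off": no current at all
    r = step(r, i)
print(r)                     # 700.0 -- state retained
for i in [-1.0] * 30:        # reverse the flow
    r = step(r, i)
print(r)                     # 400.0 -- resistance came back down
```

Reading the stored value is then just measuring the resistance, without disturbing it.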


Let me point out one thing to help you grasp the change. Using memristor technology, HP prototyped a crossbar-latch memory that fits 100 gigabits in a square centimeter, and proposed a scalable 3D design (up to 1,000 layers, or 1 petabit per cm³). In 2012, the device achieved a read time of 90 nanoseconds (if not faster), approximately one hundred times faster than contemporaneous flash memory, while using one percent as much energy.
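A quick back-of-the-envelope check of what those read-time and energy figures imply (illustrative arithmetic only, using the numbers quoted above):

```python
# What the quoted figures imply about contemporaneous flash (illustrative).
memristor_read_ns = 90            # read time quoted in the text
speedup_vs_flash = 100            # "approximately one-hundred times faster"
flash_read_ns = memristor_read_ns * speedup_vs_flash
print(flash_read_ns)              # 9000 -- implies flash reads of ~9 microseconds

energy_fraction = 0.01            # "one percent as much energy"
print(round(1 / energy_fraction)) # 100 -- a ~100x energy advantage per read
```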


Connect the memory and the processor

It’s great to have huge memory/storage space. But how do we get fast access to all that memory when we need it to do the job? We could use copper, but then we would need enormous cables and consume a great deal of energy. So why not go for fiber optics? Let’s attach the memristor store directly to the central processing unit. Not only can you transfer information at up to 6 TB per second, you do it with very low power consumption. You win on two fronts: speed and energy. Lower consumption also means lower heat dissipation, which in turn reduces the need for cooling and allows denser environments, so you use less space.
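To put 6 TB per second in perspective, a little arithmetic (illustrative only, taking the bandwidth figure from the text at face value):

```python
# What 6 TB/s of photonic bandwidth means in practice (illustrative).
bandwidth_bps = 6e12                            # 6 terabytes per second
petabyte = 1e15                                 # bytes
gigabyte = 1e9                                  # bytes

print(round(petabyte / bandwidth_bps, 1))       # 166.7 -- seconds to stream 1 PB
print(round(gigabyte / bandwidth_bps * 1e6, 1)) # 166.7 -- microseconds per GB
```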


And what about the CPU?

Well, because you no longer need the compute power for the 80 percent of work that has disappeared, you don’t need as much CPU power either. So you can easily work with low-energy chips, such as the ones used in cell phones. We already did that with Moonshot. Just moving to such an architecture allowed us to shrink the IT environment powering our own website from 25 racks to 3. With that, we handle 300M hits per day, and power consumption is down to 720 watts! Taking this one step further, HP Labs is looking at servers the size of a credit card. And, as with Moonshot, the designs also use specialized servers for specific tasks. In an Internet of Things world, this makes a lot of sense. Because the servers are so small and consume so little power, there is no longer any need to virtualize the environment: you can dedicate one server to one task, particularly if you tune the server to the task. Dropping virtualization reduces overhead and leaves more CPU power for the real job at hand.
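Those website numbers are worth a quick sanity check. Again, this is just illustrative arithmetic on the figures quoted above:

```python
# Rough averages from the website example (illustrative arithmetic only).
hits_per_day = 300e6              # 300M hits per day, from the text
watts = 720.0                     # total power draw, from the text
seconds_per_day = 24 * 3600

hits_per_second = hits_per_day / seconds_per_day
joules_per_hit = watts / hits_per_second      # watts = joules per second

print(round(hits_per_second))     # 3472 -- average hits per second
print(round(joules_per_hit, 3))   # 0.207 -- joules of energy per hit served
```

About a fifth of a joule per request is the kind of figure that makes credit-card-sized, task-tuned servers plausible.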


Pulling it all together

Using ions to store, photons to communicate and electrons to compute: that’s the vision of The Machine. Simplicity is its motto. We have the opportunity to build an environment that is very different from our current state. Obviously, current operating systems are built to take care of all the overhead described above, and it would be very difficult to strip them of those functions, so it makes sense to create a new one. That’s what we want to do. But we do not want to do it alone, so we are creating an open-source development project. Listen to how Martin Fink described “The Machine” at HP Discover in Las Vegas.



HP is back

I started this blog entry by pointing out that “HP is back.” I’d like to add that innovation is back at HP. I hope you agree with me that this project is really exciting. But that’s not all: we believe this new technology will allow us to create mesh clouds. You are probably asking yourself what that means; I’ll come back to it in my next blog entry. But be aware, this is not the only news from HP Discover.

There’s plenty more.




on 07-15-2014 02:07 AM

Great stuff!

on 07-20-2014 09:03 AM



Where do we find the open source project?

on 02-14-2015 07:54 PM
Any plans on making a memristor-based SSD available? It would be much quicker to bring to market than a full new architecture, and it would allow HP to work on perfecting the manufacturing process whilst generating income.
on 06-21-2015 04:50 PM

Why Linux as the OS? Why not use a microkernel-based system? An OS with self-healing properties that does not crash? Like Minix (from which Linux is derived).


Take a look at this:


Use a fail-safe system which is self-healing and does not crash! Operating-system research has indicated that microkernels have definite advantages over monolithic kernels (like Linux). If you want "The Machine" to be truly advanced and not obsolete at birth, you have to take a hint from the past 30 years of OS research. In addition, Minix comes with the BSD license (which has advantages for a company, as Apple will tell you). If you do not like Minix, look at L4 as a microkernel (L4 Pistachio also has the BSD license). Or look at HelenOS, also a microkernel OS under the BSD license.

on 01-05-2016 11:19 AM

Hi,

What happened with "The Machine" initiative? Is it still a viable option?


