Behind the scenes at Labs

Mission to Mars at New Scientist Live

By Ian Brooks, European Head of Innovation, Hewlett Packard Labs

On September 28th, I had the privilege of presenting at the New Scientist Live conference in London, surrounded by such luminaries as the British astronaut Tim Peake, who served aboard the International Space Station (ISS), the anthropologist and broadcaster Professor Alice Roberts, and the renowned chef Heston Blumenthal.

The 30,000 attendees were a broad church, from young adults to retirees, all with a commercial or academic interest in science, so it wasn’t a purely technology-focused session. My talk combined the requirements of a Mission to Mars with the journey to Memory-Driven Computing, and ended with the drive to build America’s first exascale computer: a single machine approaching the combined performance of the current Top 500 supercomputers in the world.

A Mission to Mars

For the past 30 years, computers have gradually entered almost every facet of our working and social lives: laptops, tablets and smartphones are everywhere, and smart cars, factories, medicine and even cities underpin many of our future initiatives. But computers suitable for a Mission to Mars will face a very different set of requirements. Computers on earth lead a fairly privileged life. They have their own private rooms with air conditioning and filtered, reliable power; they are constantly connected to others over the internet; and they have teams of specialists preening them to get their best performance. Compare this predictable lifestyle with that of a Mission to Mars.

The computers on a Mars mission will have to be far more powerful and autonomous, because the round-trip signal time to Mars varies from roughly five minutes to 50 minutes depending on the planetary alignment. They will also have to be much more robust: they must survive the stresses of launch, cope with variable power, and withstand near-constant bombardment by high-energy particles (including those from coronal mass ejections and alpha particles) and radiation levels around 250 times higher than on earth.
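
For a rough sense of where that delay comes from, here is a back-of-the-envelope calculation. It is a sketch only: the distances are approximate closest and farthest Earth–Mars separations, not mission-planning values.

```python
# Rough arithmetic behind the communication delay quoted above.
SPEED_OF_LIGHT_KM_S = 299_792
CLOSEST_KM, FARTHEST_KM = 54.6e6, 401e6   # approximate Earth-Mars distances

for label, distance_km in (("closest approach", CLOSEST_KM),
                           ("farthest separation", FARTHEST_KM)):
    round_trip_min = 2 * distance_km / SPEED_OF_LIGHT_KM_S / 60
    print(f"{label}: ~{round_trip_min:.0f} minutes round trip")
```

The result, roughly six to 45 minutes, lands in the same ballpark as the range quoted above.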

Finally, they will have to be self-reliant, as the astronauts will have far more pressing things to concern themselves with than preening the computers.

To overcome these issues, NASA typically takes a computer and “hardens” the hardware over several years, but this adds time and weight to the end solution. So, could we take a commercially available computer and add the resilience through software instead? To test this, HPE has been collaborating with NASA and, on August 14th, 2017, launched the world’s first teraflop supercomputer to operate in space, sending it to the ISS aboard a SpaceX launcher.

During this one-year experiment (roughly the time taken for a journey to Mars), HPE and NASA are testing the system software that monitors the health and performance of the machine and modifies the state of the central processing unit (CPU) and memory as circumstances change, thus hardening the system through software. This has the added advantage that, unlike hardware, the software modules can be updated whilst in space.
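
To make the idea concrete, here is a minimal sketch of what such a software-hardening loop could look like. The sensor reading, threshold, and power-state functions are hypothetical placeholders invented for this illustration, not the actual interfaces used on the spaceborne system.

```python
# Illustrative sketch only: a software-hardening loop in the spirit of the
# experiment described above. All names, thresholds, and intervals are
# assumptions made for the example.
import time

ERROR_THRESHOLD = 5      # tolerated correctable-error count per interval (assumed)
CHECK_INTERVAL_S = 10    # polling period in seconds (assumed)

def read_correctable_memory_errors() -> int:
    """Placeholder for querying the ECC/error counters exposed by the OS."""
    return 0  # stub value for the sketch

def set_cpu_power_state(state: str) -> None:
    """Placeholder for lowering CPU frequency or idling cores via the OS."""
    print(f"CPU power state -> {state}")

def harden_loop() -> None:
    while True:
        errors = read_correctable_memory_errors()
        if errors > ERROR_THRESHOLD:
            # Back off under suspected radiation-induced upsets, then
            # restore normal operation once conditions improve.
            set_cpu_power_state("throttled")
        else:
            set_cpu_power_state("nominal")
        time.sleep(CHECK_INTERVAL_S)

if __name__ == "__main__":
    harden_loop()
```

Because the logic lives in software, thresholds and recovery strategies like these can be revised and uploaded mid-mission.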

Back on earth, we have come to expect computers to become ever more capable as well as smaller, cheaper and faster, thanks to the miniaturization of components. But this trend is coming to an end, and it is ending just as digital data is growing at an exponential rate. In fact, more digital data was created in the last two years than in the entire prior history of mankind!

So what could we do differently?

The majority of current computer systems are based on a von Neumann architecture, named after the Hungarian-American scientist John von Neumann. They have a small amount of “volatile” memory (its contents are lost if the power is removed), general-purpose CPUs, and system interconnects made from copper wires or copper tracks within the circuit boards.

But if we consider our vast requirement for high-speed, low-power computation, we need to start again.

The first improvement is to move towards non-volatile memory pools. These could be petabytes in scale, would allow us to load vastly more complex problems and data sets, and wouldn’t need constant power to every cell just to maintain their contents.

Secondly, if we are to connect petabyte-scale memory pools to our compute engines, then we need to adopt a high-bandwidth, low-power, low-latency connectivity fabric such as photonics (sending minute pulses of light along transparent waveguides).

Thirdly, although CPUs are good at many things, our future workloads will demand far greater efficiency and vastly improved time to results, which new compute engines such as graphics processing units (GPUs), application-specific integrated circuits (ASICs), neuromorphic computers and optical computers can provide. These are the constituent parts of an architecture we call Memory-Driven Computing, one that places the data at the core and allows an optimal balance of compute engines to access the same data sets.
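
A toy sketch of the central idea, several compute engines working on one shared pool of data in place rather than each holding its own copy, might look like the following. Here ordinary worker processes stand in for heterogeneous compute engines, and the pool name and sizes are assumptions made up for the example, not The Machine’s actual programming model.

```python
# Illustrative sketch: two "compute engines" operate in place on one shared
# memory pool instead of exchanging copies of the data.
from multiprocessing import Process, shared_memory

POOL_NAME = "mdc_demo_pool"    # hypothetical pool identifier
N_VALUES = 1_000_000           # 8-byte floats held in the shared pool

def worker(start: int, stop: int) -> None:
    # Attach to the existing pool by name; no data is copied into this process.
    pool = shared_memory.SharedMemory(name=POOL_NAME)
    values = memoryview(pool.buf).cast("d")
    for i in range(start, stop):
        values[i] *= 2.0       # operate on the shared data in place
    values.release()
    pool.close()

if __name__ == "__main__":
    pool = shared_memory.SharedMemory(create=True, name=POOL_NAME,
                                      size=N_VALUES * 8)
    values = memoryview(pool.buf).cast("d")
    for i in range(N_VALUES):
        values[i] = 1.0

    # Two "engines" work on different halves of the same pool concurrently.
    half = N_VALUES // 2
    engines = [Process(target=worker, args=(0, half)),
               Process(target=worker, args=(half, N_VALUES))]
    for p in engines:
        p.start()
    for p in engines:
        p.join()

    print(list(values[:5]))    # both halves now hold 2.0 in the shared pool
    values.release()
    pool.close()
    pool.unlink()
```

The point of the sketch is the shape of the architecture: the data stays put in one large pool, and whichever engine is best suited to the task is pointed at it.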

When will this happen?

HPE booted the world’s first Memory-Driven Computer with 160 terabytes of memory in May 2017 and the results from this research prototype are now informing our approach to the next generation of computers.

So what comes after The Machine?

The next step in the journey is to take the intellectual property from our acquisition of SGI, combine this with the capabilities of The Machine, and use these insights to develop the first exascale computer.

An exascale computer is a single machine with a computational power of one quintillion (10^18) operations per second, roughly equivalent to the combined compute power of today’s Top 500 supercomputers. Building it is a massive undertaking in its own right, but those current supercomputers together would also require around 660 megawatts of power, roughly the output of a large nuclear power plant. Our intention is to deliver exascale performance within a power envelope of around 30 megawatts. That will be great for reducing energy bills, but the real advantage is to democratize access to supercomputing power and allow these systems to be placed where they are completely infeasible today, helping to run smart cities, smart factories, artificial intelligence-assisted healthcare solutions, or vast scientific experiments.
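
As a quick sanity check of those power figures (the exaflop and megawatt numbers are the ones quoted above; the efficiency values are simply derived from them):

```python
# Back-of-the-envelope efficiency comparison using the figures quoted above.
ops_per_second = 1e18        # one quintillion operations per second (exascale)
power_today_w = 660e6        # ~660 MW for today's combined Top 500-scale power
power_target_w = 30e6        # ~30 MW intended exascale power envelope

print(f"Today:  {ops_per_second / power_today_w:.1e} operations per joule")
print(f"Target: {ops_per_second / power_target_w:.1e} operations per joule")
print(f"Roughly {power_today_w / power_target_w:.0f}x less power for the same work")
```

That works out to roughly a 22-fold improvement in energy efficiency for the same computational output.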

How would life be different?

With wide-ranging access to exascale performance, we could start to deliver personalized medicine and astonishingly accurate simulations of the real world, leading to improved materials for spacecraft and aircraft, or new propulsion systems. We could deliver solutions in areas with less developed power grids, or provide predictive analytics that help move a company’s focus from hindsight to foresight.

Giant leap

Looking at the ever-faster rate at which data is being generated, the collapsing time to decision, and the approaching limits of current computing technology, we need a completely new approach. The Spaceborne Computer, Memory-Driven Computing, and the drive towards exascale computing will underpin the next wave of insights.

When these elements come together, we’ll see that Memory-Driven Computing truly represents, to quote Neil Armstrong, another “giant leap for mankind.”

About the Author

Curt_Hopkins

Managing Editor, Hewlett Packard Labs