Behind the scenes at Labs

Star Trek, Labs, and the Desire to Know

By Kirk Bresniker, Hewlett Packard Labs Chief Architect and HPE Fellow

“All men by nature desire to know” – Aristotle, Metaphysics

As first imagined by Gene Roddenberry during the turbulent mid-1960s, and now re-imagined during our own often disconcerting times, Star Trek evokes an image of a future 200 years from now in which mankind has finally found a balance between technology and humanity.

Freed from want and fear, and driven by the curiosity that is the true hallmark of mankind, humanity takes its place among the stars and founds the Federation of Planets. At the forefront of this expansion of humankind across the galaxy are Star Fleet and Star Fleet Academy, located in San Francisco. It was prescient to imagine today's nexus of technology becoming the Academy of tomorrow.

Today

Every two years we are creating as much new information as all mankind has ever created. Where will it go? How will we gain insights from it? Just when we need our information technology to get faster, cheaper, and more efficient than ever before, it’s becoming clear that the technologies and behaviors that got us this far are running out of room.

Moore’s Law is winding down. But it’s not just semiconductor scaling: relational database technology, general-purpose operating systems, electronic information storage and communications are all running out of improvement headroom at roughly the same time. How do we think we’ll get from today, and the tantalizing opportunities and equally daunting challenges we face, to Star Trek’s world of justice and opportunity?

It starts with an analysis of future trends and the development of breakthrough technologies. That research feeds advanced development, which allows innovations to be engineered and delivered at global scale to tackle real problems.

At Hewlett Packard Enterprise, Hewlett Packard Labs has been partnering with the engineering, supply chain, support, and services teams from our Global Business Units on a radically new kind of computer we call “The Machine.” This broad and ambitious project links together fundamental research, advanced development, and engineering to tackle the challenge of 21st century information technology: How are we going to gain benefit from the data explosion that comes from the always-connected world of intelligent things?

While much of our research on The Machine project is longer-term and we’re working towards a major technical breakthrough, not specific product roadmaps, we believe The Machine will significantly influence technological development in the near term.

Non-volatile Memory Scaling

The constant doubling of data we are experiencing is called exponential growth. If you have a demand that is growing exponentially, you'd better have a supply that can keep up. Since the 1970s we have had Moore's Law, the observation by Intel co-founder Gordon Moore that, thanks to miniaturization, the number of transistors you can put on a chip doubles about every two to three years.
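
To make the arithmetic of that doubling concrete, here is a small illustrative calculation. The two-year doubling cadence comes from the observations above; the starting point and time horizons are arbitrary assumptions chosen only to show the compounding.

```python
# Illustrative only: what a fixed doubling period implies over time.
# The roughly two-year doubling cadence is from the article; the
# 1-unit starting volume and the horizons below are arbitrary.

DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float, doubling_period: float = DOUBLING_PERIOD_YEARS) -> float:
    """Return how many times larger a quantity becomes after `years`."""
    return 2 ** (years / doubling_period)

for years in (2, 10, 20):
    print(f"After {years:2d} years: {growth_factor(years):,.0f}x as much data")

# After  2 years: 2x as much data
# After 10 years: 32x as much data
# After 20 years: 1,024x as much data
```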

For the first 30 years there was a triple word score: smaller transistors were cheaper, faster, and used less power. The last two benefits are due to another 1970s observation, Dennard Scaling, named after IBM Fellow Robert Dennard. This meant that as an industry we were economically incented to keep hardware designs similar and just keep shrinking. A side effect of generations of similar hardware is that we keep the software basically the same for years, which is also the root of so many of today's cyber security problems; for instance, code written when computers could only be accessed by dedicated terminals is now exposed to the global hacking community.

It was great while it lasted.

Dennard scaling ended in 2005, a victim of the physics of the dwindling number of atoms available to construct these tiny devices. We're now within a couple of years of Moore's Law ending as well. There may be only one or two improvements left, each increasingly difficult to deliver and each producing diminishing returns.

What’s next?

Look at a city like San Francisco. When you run out of real estate, there’s only one way to go, and that’s up. But you don’t just start randomly stacking different kinds of structures on top of each other. You build skyscrapers for office space and apartments.

In the same way, the regular arrays of memory devices can be stacked layer upon layer, continuing to grow in capacity while reducing power and cost in a way that the jumbled logic of computation devices will find very difficult to match. If you've ever seen a photomicrograph of a computer chip, parts of it look very regular (that's the memory) and parts look very jumbled (that's the compute).

There are several new types of memory devices, each based on physical properties of matter different from those exploited by the technologies we use today. Phase change, spin torque, and the Memristor resistive memory discovered at Hewlett Packard Labs are all in active research and development. They all share the property of non-volatility, which means they retain their information even when power is removed. This is crucial to scaling memory energy costs. When you're creating as much information as we are, it has to be retained without spending energy just to maintain it. But if we're to get value from it, it also needs to be accessible at incredibly high speeds, or we'll never have the time to read it again.

Our recent announcement of Persistent Memory for HPE ProLiant Servers is leading the industry to adapt operating systems and applications to this emerging world of abundant, non-volatile memory.
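
One common way applications are adapted to byte-addressable persistent memory is to map a file on a persistence-aware filesystem directly into the process address space and update it in place. The sketch below is a minimal illustration of that idea; the mount point and file layout are hypothetical, and this is not HPE's implementation.

```python
# Minimal sketch: treating persistent memory as ordinary memory-mapped bytes.
# Assumes a file on a persistence-aware (DAX-style) filesystem; the path
# below is hypothetical and the layout (a single 64-bit counter) is made up.
import mmap
import os
import struct

PMEM_PATH = "/mnt/pmem0/counter.bin"   # hypothetical persistent-memory mount
SIZE = 8                               # one 64-bit counter

# Create the backing file once, sized to hold our data structure.
fd = os.open(PMEM_PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)

with mmap.mmap(fd, SIZE) as pmem:
    # Read the current value, update it in place, and flush so the
    # new value survives power loss.
    (count,) = struct.unpack_from("<Q", pmem, 0)
    struct.pack_into("<Q", pmem, 0, count + 1)
    pmem.flush()                       # write the update back to the media

os.close(fd)
print("restart count:", count + 1)
```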

[Image: The author's bookshelf]

Nano-scale Silicon Photonics

When you transmit information, there’s a key ratio that you want to understand: How much information (bits) versus the time (seconds), energy (joules), distance (meters), and money (dollars) you have to spend.

That’s b/(s·J·m·$) for the engineering team to optimize.

Think of trying to talk in a noisy bar. You can talk slower (s), shout louder (J), get closer (m), or go to a quieter night spot ($$$$). The two basic particles we harness for point-to-point communications are electrons moving in a conductor and photons moving in a medium. We’ve had fiber-optic communications for decades, but they were optimized for lots of data moving at high speed over long distances at very high cost. For shorter runs we use electrons in copper wires and cables. They’re cheap, but they’re actually pretty inefficient: launch electrons down a network cable and only a couple of percent of the energy you spend makes it to the other end. The rest is lost as heat and radio interference.
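
As a toy illustration of how that b/(s·J·m·$) ratio gets used, the sketch below scores two made-up links. Every number is an arbitrary assumption chosen only to show the comparison, not a measurement of any real product.

```python
# Toy illustration of the b/(s·J·m·$) figure of merit described above.
# Every number below is an arbitrary assumption, not a measurement.

def figure_of_merit(bits: float, seconds: float, joules: float,
                    meters: float, dollars: float) -> float:
    """Bits delivered per (second * joule * meter * dollar) spent."""
    return bits / (seconds * joules * meters * dollars)

# Hypothetical copper link: cheap, but most of the energy is lost as heat.
copper = figure_of_merit(bits=1e9, seconds=1.0, joules=5.0, meters=3.0, dollars=2.0)

# Hypothetical photonic link: more bits for far less energy over the same run.
photonic = figure_of_merit(bits=1e11, seconds=1.0, joules=0.5, meters=3.0, dollars=10.0)

print(f"copper   : {copper:,.0f}")
print(f"photonic : {photonic:,.0f}")
print(f"photonic advantage: {photonic / copper:,.0f}x")
```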

Photons are great because they can travel very far without loss.

Right now in a data center, if you're sending data more than 10 meters at top-end network speeds, you're using photons; electrons lose the b/(s·J·m·$) trade-off. What we're working on now is miniaturizing the photonic transmitters and receivers so they can be incorporated directly into memory and computational devices. The distance at which it becomes better to use photons to transmit information will shrink from tens of meters to tens of centimeters. We'll transmit vastly more information at much faster rates and consume much less power.

But it’s not just about pulling out individual copper wires and dropping in glass fibers or silicon waveguides. Silicon photonics will change the way we organize the basic building blocks of compute, memory, and I/O devices.

Because photons can go either centimeters or hundreds of meters, the only difference being light's travel time over that distance (about five nanoseconds per meter in glass fiber), we can create devices from hand-held to data-center scale in ways we simply could not accomplish with electronic transmission.
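
The five-nanoseconds-per-meter figure makes the latency arithmetic easy to do yourself. Here is a small sketch; the example distances are illustrative.

```python
# Propagation delay in glass fiber, using the ~5 ns per meter figure
# quoted above (light travels at roughly two-thirds of c in glass).
NS_PER_METER = 5.0

def fiber_delay_ns(meters: float) -> float:
    """One-way light travel time, in nanoseconds, over a glass fiber run."""
    return meters * NS_PER_METER

# Illustrative distances: inside an enclosure, across a rack, across a hall.
for meters in (0.1, 2.0, 40.0):
    print(f"{meters:6.1f} m  ->  {fiber_delay_ns(meters):7.1f} ns one way")

#    0.1 m  ->      0.5 ns one way
#    2.0 m  ->     10.0 ns one way
#   40.0 m  ->    200.0 ns one way
```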

At HPE Discover last June in Las Vegas, we showed a prototype 100 Gb/s photonic link using technologies developed jointly by Hewlett Packard Labs and HPE Enterprise Group engineering teams. This transformative technology is already in Advanced Development and will influence the next-generation designs of HPE Servers, Storage, and Networking.

Graph Analytics

Whether it’s a social network, the systems in a data center, a complex supply chain, or an ecological habitat, countless social, economic, business, and biological systems can be represented as a graph: an interconnection of objects with relationships between them.

In real-world graphs, the interconnections (edges) between the objects (vertices) are often random. What makes the Six Degrees of Kevin Bacon game work is how dense and complex human relationships are, allowing us to cut through a huge population in a tiny number of steps. Unfortunately, finding those steps is a problem that is easy to describe but fiendishly hard to compute.
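
The textbook algorithm for finding those steps, breadth-first search, is itself simple; what strains conventional machines is chasing its random, unpredictable memory references across a huge graph. Here is a minimal in-memory sketch with a tiny made-up cast of vertices.

```python
# Breadth-first search: the classic way to find the fewest "degrees of
# separation" between two vertices. The graph below is a tiny made-up
# example; real-world graphs have billions of randomly connected edges,
# and it is those unpredictable memory accesses that strain today's
# cache-and-disk memory hierarchies.
from collections import deque

graph = {
    "Alice": ["Bob", "Carol"],
    "Bob":   ["Alice", "Dave"],
    "Carol": ["Alice", "Dave", "Erin"],
    "Dave":  ["Bob", "Carol", "Frank"],
    "Erin":  ["Carol"],
    "Frank": ["Dave"],
}

def shortest_path(graph, start, goal):
    """Return the shortest chain of vertices from start to goal, or None."""
    visited = {start}
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        vertex = path[-1]
        if vertex == goal:
            return path
        for neighbor in graph.get(vertex, []):
            if neighbor not in visited:      # each hop is a random lookup
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

path = shortest_path(graph, "Alice", "Frank")
print(" -> ".join(path), f"({len(path) - 1} degrees of separation)")
# Alice -> Bob -> Dave -> Frank (3 degrees of separation)
```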

What makes it even harder is that it is exactly the kind of problem our current computers are not designed to handle. Because traditional compute has been abundant and memory has been relatively scarce, we’ve come up with generations of tricks that work great for certain kinds of problems, like the orderly rows and columns of a relational database running a business application.

Those regular, predictable access patterns let us hide memory scarcity behind caches and disk drives, because in most cases we can predict what's going to happen next and swap in the required information just in time. The random nature of graphs breaks all that, so we're forced to take a simple graph problem and turn it into a complex, bloated database problem.

With the abundance of memory from Non-volatile Memory Scaling and the design freedom that Nano-scale Silicon Photonics provides, we can envision new classes of systems that tackle graph problems natively by keeping even the largest graphs in memory. With fast, photonically connected memory, the penalty (or latency) for random access, even across the largest graph, will be removed.

[Image: photomicrograph] The waiting is the hardest part (so, let’s not)

But this is all new computer science, since the algorithms and applications we have today assume scarce memory. How do we develop the new algorithms before the new systems arrive? Do we have to wait?

The answer is no.

We can start right now, because we already have platforms like HPE's Superdome X. While it was created to target the high-performance database market, it is a fast memory-fabric system you can use today.

The Superdome X fabric puts 24 TB of high-performance memory and 16 sockets of top-end Intel Xeon microprocessors within a couple hundred nanoseconds of each other. These platforms have allowed Labs researchers and developers to create the new computer science that powers Memory-Driven Computing.

This is not just theoretical work. The Labs team has adapted one of the most popular data management frameworks in use today, Apache Spark, to the Superdome X, and is already seeing 10X improvements on graph applications without having to change any application code. Imagine what we could do in the brave new world of memory abundance.
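
To give a sense of what such an application looks like, here is a short Spark sketch using the GraphFrames add-on package, which is one common way to run graph queries on Spark. It is illustrative only: the graph is made up, and this is not the Labs team's benchmark code or workload.

```python
# Illustrative Spark graph query using the GraphFrames add-on package
# (assumes graphframes is installed). Not the Labs team's actual workload.
from pyspark.sql import SparkSession
from graphframes import GraphFrame

spark = SparkSession.builder.appName("degrees-of-separation").getOrCreate()

# A tiny made-up graph; in practice the vertex and edge tables would be
# loaded from storage and could occupy terabytes of memory.
vertices = spark.createDataFrame(
    [("a", "Alice"), ("b", "Bob"), ("c", "Carol"), ("d", "Dave")],
    ["id", "name"],
)
edges = spark.createDataFrame(
    [("a", "b"), ("b", "d"), ("a", "c"), ("c", "d")],
    ["src", "dst"],
)

g = GraphFrame(vertices, edges)

# Fewest hops from every vertex to the chosen "landmark" vertex.
g.shortestPaths(landmarks=["d"]).select("id", "distances").show()

spark.stop()
```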

Star Trek imagines a world freed from want and fear, focused on knowledge and capitalizing on diversity. We have the opportunity within this decade to provide our political, economic, and cultural innovators with unprecedented capabilities to react to global challenges with real-time insight from information technology.

***

Read more about Labs and Star Trek on Behind the Scenes and News Channel Asia. Watch the HPE Star Trek commercial on YouTube and check out the movie’s trailers on IMDB.

***

Star Trek images courtesy of Paramount Pictures; ST manual photo by the author; photomicrograph by Pauli Rautakorpi via Flickr

About the Author

Kirk Bresniker is Chief Architect of Hewlett Packard Labs and a Hewlett Packard Enterprise Fellow. Prior to joining Labs, Kirk was Vice President and Chief Technologist in the HP Servers Global Business Unit, representing 25 years of innovation leadership.