Behind the scenes at Labs

Stan Williams talks the end of Moore’s Law at the Centre for Quantum Technologies


By Curt Hopkins, Managing Editor, Hewlett Packard Labs

Labs’ Stan Williams, HPE Senior Fellow and member of the Foundational Technologies lab, was a featured speaker at the Centre for Quantum Technologies’ eighth birthday party in December.

Held at the National University of Singapore, this half-celebration, half-symposium featured Williams’ talk, “Computing Beyond the Age of Moore’s Law.”

As the CQT itself put it, “(Williams) looked to the future of computing, pointing out that Moore's Law, the projection that the number of transistors on a chip doubles every eighteen months, is coming to an end. He presented a vision for computers inspired by the operation of the brain, and for machines having specialised components for different kinds of calculation. He highlighted the recently launched US Nanotechnology-Inspired Grand Challenge for Future Computing.”
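The eighteen-month doubling that Moore’s Law projects can be made concrete with a quick back-of-the-envelope calculation. This is an illustrative sketch only; the starting transistor count and time horizon below are hypothetical, not figures from the talk:

```python
def transistors(start_count, years, doubling_months=18):
    """Project a transistor count forward, assuming it doubles
    every `doubling_months` months (Moore's Law's classic cadence)."""
    doublings = years * 12 / doubling_months
    return start_count * 2 ** doublings

# Starting from a hypothetical chip with 1 billion transistors,
# a decade of Moore's Law yields roughly a 100x increase.
after_decade = transistors(1e9, 10)
```

The exponential in that one line is the whole story: when the doubling stops, so does the free, predictable growth that the industry has planned around for five decades.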

“’And then there’s quantum, of course.’  The monk sighed.”

National University of Singapore is the fifth-ranked engineering school in the world, after Tsinghua, MIT, Berkeley, and Zhejiang University, according to US News and World Report, and possesses one of the world’s strongest quantum institutes, headed by Artur Ekert, “one of the absolute pioneers in the area of quantum information,” as Williams put it. But Williams characterized his own talk as very much about opportunities for computing outside of quantum technology.

“Quantum computing is often very highly touted,” he told Behind the Scenes. “But in my own view and the view of many people that I respect, what people call ‘quantum computing’, the use of entanglement to perform massively parallel computation, is very, very far away, and even if and when it becomes available there is very little you can do with it that makes it economically worthwhile.”

Among the things you can do are factoring large numbers (useful for breaking the RSA encryption scheme) and calculating the properties of another quantum mechanical system, which could allow you to design better drugs or create better materials. These are important applications, said Williams, but they address niche problems. You would be very unlikely to use quantum computing for an application in a data center or in your own home.
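The RSA connection can be seen in a toy example. This is a sketch with deliberately tiny, insecure numbers chosen for illustration; real RSA keys use integers far too large to factor with classical machines, which is exactly the gap a quantum computer running Shor’s algorithm would close:

```python
# Toy RSA with textbook-small primes. The security of the scheme rests
# entirely on the difficulty of factoring n back into p and q.
p, q = 61, 53               # secret primes (tiny, purely illustrative)
n = p * q                   # public modulus: 3233
e = 17                      # public exponent
phi = (p - 1) * (q - 1)     # Euler's totient; computable only if you can factor n
d = pow(e, -1, phi)         # private exponent (modular inverse; Python 3.8+)

msg = 42
cipher = pow(msg, e, n)     # encrypt with the public key (e, n)
plain = pow(cipher, d, n)   # decrypt with the private key (d, n)
assert plain == msg

# An attacker who can factor n = 3233 into 61 * 53 can recompute phi and d,
# and read every message. For 2048-bit moduli that factoring step is
# infeasible classically, which is why RSA breaking is the canonical
# quantum-computing application.
```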

“We need to find a new technology platform that will allow us to scale our computing tech exponentially by performing more computations per unit energy instead of putting more transistors per unit area,” said Williams. “Can we figure out the algorithm, the circuitry, which would help us perform these tasks more efficiently?”

So, given how unlikely quantum entanglement is to spawn a new, broadly useful type of computing, and given how close we are coming to the end of Moore’s Law – Williams posited we are likely to see it lurch to a stop by 2021 – where do we go from here?

As Williams says in his talk, we should look to the brain, to what is sometimes called neuromorphic computing.

Brains! Brains!

“If we can adapt an understanding of how the brain works to our computing systems we can make improvements in the kinds of computers you and I care about,” Williams explained. “For example, search is a really huge deal. A lot of time, with current computing, when you use search, you don’t get what you asked for and you get tons of stuff you don’t care about. But as a human being, you can meet somebody and although you may forget their name, you still recognize their face, even if that person has changed significantly. The human mind does search and recognition and remembrance much better than our current technology does. If we can bring brain-like computing capabilities into our computing space, we can power intuitive search.”

If you go back to the very beginnings of computing with Alan Turing in the 1930s, that was his goal: to approximate the actions of the brain. According to Williams, Turing had in mind a system that would function more like a brain than like today’s computers.

 “But he got caught up in World War II,” he said. “He had to break the Enigma code, so he didn’t have the time to think more about how you build a brain-like machine, he had to build a codebreaking machine.”

Colossus, one of the codebreaking machines whose creation followed from Turing’s work at Bletchley Park, is the direct ancestor of the computers we have today. They use the von Neumann architecture, not the more brain-like alternative they might have used had circumstances allowed Turing to take a different route.

Policy and prohibition

Williams and his team’s efforts toward brain-like computing have never been trapped in an ivory tower. In fact, he has had extensive experience in the policy arena.

A white paper Williams sent to the White House helped to convince the Office of Science and Technology Policy to issue the Grand Challenge for Future Computing.  That paper outlined the potential of brain-like computing.

“I tried to write it up in a way that would resonate with policy makers,” he said. “It was part of an effort to urge the government to unite efforts around the goal of defying the end of Moore’s Law.”

One of the fears that springs from this new technology – the creation of computing more closely analogous to the brain – is artificial intelligence: specifically, what AI could do if it became hyper-competent and its goals diverged from ours.

High-profile critiques include those of SpaceX’s Elon Musk and Dr. Stephen Hawking, who fear we may create AI and AI might then end us.

“I personally don’t want to build a sentient machine,” he said, “but I do want to build computers that are far more efficient, which will adopt aspects of what brains do. There’s a difference between sentience and intuitive computing. As we issue this challenge for future computing it’s reasonable to keep safety checks on what we do.”

“The fear being injected into this topic is too excessive,” said Williams. His solution is to be part of the policy discussions. As an example, he points to nanotechnology, which he took part in developing and to whose policy limitations he contributed.

“I was strongly involved in the U.S. National Nanotechnology Initiative, around the turn of the century,” he said. “At that time Prince Charles and others were talking about gray goo,” a sort of world-devouring nanotechnological cancer, “while others talked about how we’d create infinite wealth. Those involved in the science knew neither scenario was remotely true. Now we’re in the middle of it and the hype is almost completely gone but the science has gone into products – ceramics, electrical systems, medical delivery – but most of this is completely invisible to people.”

What the scientists like Williams were concerned about was nanotech pollution, about waste products leaching into the environment.

“Part of our efforts were to close health effect loops,” he said, “to make sure we didn’t build, for example, large manufacturing plants that created a lot of nanotech sludge that got into the environment.”

This kind of marriage of scientific knowledge and policy engagement will be necessary for the creation of neuromorphic computing, according to Williams.

“It’s not going to be either as bad or as good as people anticipate.”

You can watch the talk that Williams gave at the Centre for Quantum Technologies’ eighth birthday party in December.
