Advancing Life & Work

Labs Chief Architect Kirk Bresniker on Memory-Driven Computing for Large-Scale Genomics

Kirk Bresniker, Hewlett Packard Labs Chief Architect and HPE Fellow, and his colleagues Sharad Singhal and Hartmut Schultze, along with partners from the German Center for Neurodegenerative Diseases (DZNE), recently published the paper “A Novel Computational Architecture for Large-Scale Genomics.” It introduces Memory-Driven Computing (MDC), a novel architecture that overcomes many of the limitations of current approaches to large-scale genomics.



According to the paper, “we are expanding on our initial experiments to understand the performance benefits achievable when complete genomics workflows can directly use big data within a unified large memory pool across the pipeline, as opposed to having each tool read and write data back to disk. Collectively, we believe MDC will be a key tool in realizing integrated analysis of truly large -omics and other big datasets in the life sciences and clinical research and medicine.”

We talked with Bresniker to get more insight into the paper and the potential applications of its findings.


You and your team recently published the paper “A Novel Computational Architecture for Large-Scale Genomics.” How did it come about?
It's a collaborative work that we did with the German Center for Neurodegenerative Diseases (DZNE). They formed this group to try and crack Alzheimer's and what’s happening to our brains, and they realized very quickly that it's a big data problem. How can we tell, across genetics, behaviors, environmental factors, and all the other things that accumulate in an individual's life and in his or her brain, that an individual will contract the disease, knowing that when the disease starts, it might be decades until it manifests and it's too late? And so, they're studying thirty thousand individuals for the next thirty years, and they're ten years into this already.

We're talking about generating petabytes or even exabytes of data. We have this big data problem. The conventional computers we have are good. They're helping us out, but we're not going to crack the nut. We're not going to be able to get the kind of breakthrough we want because we won't be able to look at all the information. So, how can we actually have an architecture that makes everything work and lets every single one of those bytes be available to the researcher — and in a time that matters?


What were some of the results you achieved through your research?
With Memory-Driven Computing as a novel computational architecture, our machines are running a hundred times faster and using 60% less energy. And it changed the way the researchers behaved, because instead of something taking a couple of minutes, it was taking 13 seconds. Imagine if you're a geneticist and you have a hunch and you say, “Oh, well, I wonder what if,” and you hit the button to do the analysis. Instead of taking ten minutes, it takes ten seconds!


Wow! 100 times faster and 60% less energy. How was the team at Labs able to do that?
It was really three things. One, it was coming up with the ideas. Two, it was recognizing the opportunity and figuring out where there was a real problem. And three, the last piece was investment. Is there someone you can talk to and convince that the reward outweighs the risk? And that's really what came together for us with this program.


Where do you see the biggest opportunities for the application of the findings in the paper?
For us, it was a model for a system where you have massive amounts of information. In our case, it was all that medical information. It was all those MRIs and genomic scans, and you're also facing the challenges of privacy. So how can we operate on all that data? How can we afford every individual access to the right kind of information without having to centralize it and without losing control? We maximize security, but we still admit all that data.

Other questions to consider include: How many insights are you missing? How many cures to Alzheimer's are out there, with all that data hiding in plain sight in individual silos? Can we afford that view over all of the data in your enterprise, and the ability to make a decision in a time that matters?


Any opportunities in other industries for this?
So, the ones that are really fascinating to me are transportation, energy, and communications. How do we get the right vehicle out to where it needs to be? How do we get the energy out to where it needs to be in a sustainable fashion? And then, how do we harness all the data at once? How do we actually span all those things and make use of that data and admit as much as possible into beneficial economics for society?


What does the future of Memory-Driven Computing look like?
Now, the hard part is it's one thing to have our experimental apparatus in the Labs and it’s another thing to drive this ubiquitously. It has to be bigger than us or it has to be something that we're hosting as a service. In the intervening years, how do you now take this proof point that was one point on the graph, and how do you have that broader conversation? How do you make it accessible? How do you get university students interested in that kind of approach? And that's been part of the ongoing communication.

Curt Hopkins
Hewlett Packard Enterprise

About Kirk Bresniker
Kirk Bresniker is Chief Architect, Hewlett Packard Labs and a Hewlett Packard Enterprise Fellow and Vice President. He joined Labs in 2014 to drive The Machine Research and Advanced Development Program, leading teams across Labs and across HPE business units with the goal of demonstrating and evangelizing the benefits of Memory-Driven Computing. He holds 28 U.S. and 10 foreign patents in areas of modular platforms and blade systems, integrated circuits, and power and environmental control.

About the Author
Managing Editor, Hewlett Packard Labs