Advancing Life & Work
HPE-Editor

HPE Tech Talk Podcast: Supercomputing is not just for elites, Episode 10

With the explosion of data, businesses of all sizes are required to handle massive and complex data workloads. How? Supercomputers. Helping to make high performance computing accessible to the masses is our guest, Pete Ungaro, HPE's General Manager of High Performance Computing (HPC) and Mission Critical Solutions (MCS).

 

Also available on: Spotify / Apple Podcasts / Other podcast apps

 

Transcript

Robert Christiansen:

Hi, this is Robert Christiansen with the Office of the CTO. Thank you for joining us for HPE Tech Talk this week…

High performance computing is not just becoming massively powerful, it's becoming massively necessary for businesses of all sizes. On this episode, HPE's Pete Ungaro—General Manager of High-Performance Computing (HPC) and Mission Critical Solutions (MCS)—joins us to talk about how HPC is being utilized as a service. Pete, welcome onboard here. Thanks for joining us.

Pete Ungaro:

[0:57] Hey, thanks, Robert, happy to be here.

Robert:

[1:00] Before we get going here, would you just tell us a little bit about your background? I find it extremely interesting—your career path—but more importantly, how you landed here at HPE.

Pete:

[1:11] Yeah, of course. It's been a fun ride, for sure. I grew up in IBM. There, I managed a lot of the work in high performance computing as well as data warehousing and business intelligence. Then I moved over to Cray, the supercomputing company, where I was CEO for just over 15 years. Then, a little over a year ago, we were acquired by HPE, and it's been an amazing ride coming full circle back into a diversified company like HPE, but one that really cares about high performance computing and is really driving the leading edge of it. So it's been fun.

Robert:

[2:13] … Most people don't get exposed to HPC or what high performance computing really is. It's often put into the supercomputer realm, with liquid-cooled silicon and all sorts of different stuff like that. But I think it's really broadened in its definition and its abilities. Can you just give us a definition of what HPC, or high performance computing, means, and what it means today to everybody?

Pete:

[2:34] Yeah, I think when we all think about high performance computing or supercomputing, we think of these massive, massive systems, some of the largest and fastest computers on the planet. And those are fun. We love doing them and we do a bunch of them. But what's really happening within the high performance computing industry, with the massive growth of data we've had, is that people are trying to figure out: how do I deal with all of this data? My models are getting bigger. My data sets are getting bigger. I'm finding new ways to analyze those data sets with things like AI, machine learning, deep learning, and big data analytics.

[3:21] And what we're finding is that HPC is the core infrastructure that people are using to do all of that. So it's gone from just doing modeling and simulation in science and engineering to expanding into AI and big data analytics, and really being used by companies of all sizes and shapes. So it's been a really interesting part of the market and an interesting place to be.

Robert:

[4:03] … Could you just elaborate a bit and go deeper about what are some of the more, I wouldn't say common, but we're seeing more of AI applications showing up in these spaces for HPC?

Pete:

[4:16] Yeah. One of the big things around AI is just how do we model data and how do we use data? So, it's a lot about moving data around the system as fast as we can, computing on it, and generating information and analysis from that data. And so HPC is a natural fit for that, right? Because we use very high performance networks, or interconnects, in the machine to move data on and off very fast storage devices. So, lots of people have talked about AI supercomputers basically being the way to do AI in the future. And whether that's doing standard machine learning applications, deep learning training models, or even inference models going forward, we're seeing GPU computing, very fast interconnects, and high performance storage all being part of what AI needs, and what we're seeing in all companies going forward.

Robert:

[9:41] Pete, I look at part of the responsibilities that you have here at the company, and one of those product lines I think is really interesting: the Edgeline products. How do you take HPC capacity or capabilities, make them super dense and small, and put them into durable devices that go into high-hazard areas and such? We've got some new stuff coming that puts GPUs into these very small form factors, which I think is really interesting. How do you see that edge, bringing what I call a fatter edge compute node with those capabilities, what does that look like for us in the future?

Pete:

[10:26] Yeah, I think, as you think about all the data that's being created, and how we go and compute on it, the easiest way to do that is to compute where the data comes from. And that's at the edge of our networks, right? And so, taking the same capability that we have in high performance computing systems, like GPUs and faster processors, and packaging it in a ruggedized, small form factor environment optimized for size, weight, and power, that's really what Edgeline is all about.

[11:06] We have an interesting customer, Zenseact, who used to be Zenuity, and they're doing autonomous driving. And I think this is a great example of needing to do some computing close by, in the car, and then some back up in the cloud. And so, it gives you a perspective that you can put Edgeline systems, Edge servers, out in the vehicle itself, and then be able to do processing all the way back in the data center or in the cloud for the bigger models you're going to run. So, it's a great example, I think, of being able to use both pieces of computing, a more traditional high performance computer as well as an Edge device.

Robert:

[5:52] … So how are we, HPE, bringing this as a service out to just make it a little bit more available on a subscription basis or as a service?

Pete:

[6:02] Yeah. I think that's a really important point because a lot of people just don't know where to get started.

Robert:

Right.

Pete:

[6:10] They don't have the skills or the infrastructure to get going. And so, one of the ways that we're approaching it at HPE is that we start from our leadership position in the market. We're the market leaders in HPC; we have over 37% share of the market. So we have very, very strong market leadership. And we take that leadership position and then use as-a-service, and use GreenLake, to bring that capability to a cloud infrastructure, rather than starting with a cloud infrastructure and trying to apply that to HPC.

[6:46] And so, we were able, through GreenLake cloud services, to really just make it very simple and very fast to deploy and use, and to take all the guesswork and all the difficulties, or intricacies, I guess I would say, of managing one of these HPC clusters out of the picture, and just bring it to customers as a managed service and let them scale as they need with new capability, whether that be new servers, new storage, or even applications.

Robert:

[13:51] That's fantastic. I'm pretty excited about the HPE vision, the bringing together of the HPC or high performance computing as a service with the GreenLake services, as well as all of that, the new research and technologies that we've got coming up here. On an economics point of view, how do you break down as a service thinking for somebody? So what would be some of the things that they would start asking themselves? They say, "Well, if I want to consume HPC as a service, how do I think about that consumption of it?" What are the parts that they're going to be measuring potentially, what are the parts they're going to be paying for over time?

Pete:

[14:35] Yeah. I think it's going to be different for different workloads and different customers, of course, but a lot of it is scaling up the computing needs and scaling up the data storage needs, right? And what we're finding is there's a set of customers that have very bursty requirements, so sometimes they need a minimal amount of capacity and other times they need a lot. Or, more likely, and what we're seeing more and more of, is just people growing over time. And so starting small, with a small machine learning or deep learning model, for instance, and then that continues to grow and grow over time.

[15:29] And so, “as a service” is just a natural play for those kinds of customers, where they have a lot more flexibility: they only pay for what they need and what they use, and they have instantaneous capacity available to them. I think GreenLake is really about bringing the cloud to our customers and letting them use it in the way they want to interact with it, whether it's in their own data centers, in colo facilities, or up in the public cloud. And so, it just gives a lot more flexibility for someone that isn't traditionally using HPC for their applications 24/7, all day long, kind of thing.

Robert:

That's a really interesting distinction that you made there. All those ephemeral things that you were talking about, burstability, lack of visibility into growth, all of those are classically public cloud value propositions, right? And so, I think about HPE solving this problem, and I'm glad you wrapped up on it, because where the data is being generated is not in the public cloud, right? It's out in the fields, it's out in the data centers, and you have to have something there to take action. I think that's a very important point.

Pete:

[16:53] Yeah. And I think, especially when you think about the kinds of applications that are used in the HPC space, in the AI space, they're very data intensive. And so, you have a lot of data, so it's not so easy or low cost to move that data up into the cloud and back, and such… And the latency starts to really hurt you. And that's another reason why we talked about Edge Computing, right, is to lower that latency of initially computing on that data. And so, these are things that just give customers a stronger infrastructure, a stronger architecture on which to compute, because they don't have to just compute on a general-purpose architecture. They can compute on one that's purpose built for HPC and AI. And GreenLake allows them to have that.

Robert:

[18:34] Pete, thank you so much for joining us on Tech Talk today.

Pete:

[18:37] Yeah. Appreciate being here. It's super fun. Thanks.

Robert:

[20:05] And to our listeners, thank you for joining us this week. Stay tuned for upcoming episodes where we will be discussing hot topics and the news of the day with the leading experts from HPE. Goodbye and take care.

 


HPE Editor
Hewlett Packard Enterprise

twitter.com/hpe
linkedin.com/company/hewlett-packard-enterprise
hpe.com
