HPE Ezmeral: Uncut

How HPE Ezmeral is helping organizations conquer today’s data challenges

A behind-the-scenes interview with HPE expert Anil Gadre about how HPE Ezmeral’s groundbreaking solutions are helping organizations worldwide improve business outcomes.

[Image: HPE Ezmeral solves data challenges]

Organizations are struggling with data-intensive applications, data storage environments, and AI/ML workflows. These issues are strategic to enterprises because they are the key use cases for improving and advancing business processes and operations. In this interview with Anil Gadre, leader of the HPE Ezmeral Go-To-Market team, I ask him about the challenges data science and IT organizations are facing, why it’s important to implement the right solution, and how doing so will improve outcomes for the business.

Lauren: What are some challenges you are seeing with data-intensive workloads?

Anil: Let’s start with who is struggling and why.  Two groups are dealing with some really difficult problems:

  • Group 1: Data scientists, AI/ML Ops teams, and line-of-business users
  • Group 2: The infrastructure operators that the line of business relies on to make all of this work

These two groups are often at odds with each other because Group 1 needs to move quickly: they want access to resources immediately, and they want the freedom to try new tools and methods. Group 1 often views Group 2 as a constraint. The infrastructure people in Group 2, on the other hand, see the people in Group 1 as asking for unrealistic things in an unrealistic timeframe. The challenge is to find a solution that solves both of their problems simultaneously.

Lauren: What changes are you seeing in this market?

Anil: Three discontinuities or paradigm shifts are happening all at once:

  • Battle #1: AI discontinuity 

This takes place at the application tier and is all about leveraging AI and ML. Every company, whether they know it or not, needs to leverage AI and ML to some degree. Their competitors sure are! But it’s a very difficult thing to do.

  • Battle #2: Containerization discontinuity

The IT people are dealing with the reality that the entire underpinning of how they built data centers is changing under their feet. And with this containerization wave, Kubernetes is coming. All these new technologies (whether it's microservices, Kubernetes, etc.) are here, and the whole landscape is changing.    

  • Battle #3: Data related discontinuity

It’s well understood that data is exploding. A data fabric is essential so companies can actually get, keep, govern, and use that data in the service of the business units that need all the AI/ML to happen. One of the problems we talk about is that organizations have a lot of data, but that’s the easy part. The hard part is the question: how are you going to solve the data logistics challenges from edge to cloud? That means, how are you going to get the data securely from here to there and give it to the right people who need to do something with it, all while ensuring they get it at the right time?

Lauren: Can you explain how HPE has built a solution to address all three of these issues?

Anil: HPE can address these three different paradigm shifts, or waves of change, that are going on all at once. We provide a platform, called HPE Ezmeral, with which organizations can conquer those challenges. HPE Ezmeral is the best runtime platform for your data-centric workloads. It is uniquely architected for data-intensive, stateful workloads that require enterprise-grade reliability, speed, and scale. Lastly, HPE Ezmeral is agnostic: it runs in any cloud, on any hardware, and is 100% open-source Kubernetes.

HPE Ezmeral includes several capabilities:

  • Data Fabric: Enables one unified view of all your data, whether it’s in the cloud, in your data center, or at the edge. It is software that provides a unified data platform and file system to ingest, store, manage, process, apply, and analyze all data types from any data source at scale from a variety of different ingestion mechanisms.
  • Container Platform: Offers you a mechanism to orchestrate and operate a highly agile containerized world. It provides application developers, data scientists, and IT operations teams a complete management and orchestration solution for developing, modernizing, securing, and operationalizing applications in containers and Kubernetes clusters. All of this is accomplished at scale with software and tools to manage the entire software lifecycle, automating developer and IT Ops processes.
  • ML Ops: Brings DevOps-like agility to the entire machine learning lifecycle (a minimal sketch of that lifecycle follows below). It enables the operationalization of the end-to-end pipeline that supports the continuous delivery and continuous integration of models in a production environment. The platform includes software and tools for data science teams to build, train, deploy, and monitor machine learning solutions, as well as lifecycle management of the overall analytics environment and data repositories.
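
To make that lifecycle concrete, here is a minimal sketch of the build-train-validate-deploy loop an ML Ops platform automates. It uses scikit-learn for brevity; the model choice, the model-v1.pkl artifact name, and the 0.9 quality gate are illustrative assumptions, not HPE Ezmeral APIs.

```python
# Generic sketch of the build -> train -> validate -> deploy -> monitor loop
# that an ML Ops platform automates. All names here are illustrative.
import pickle
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Build & train: fit a candidate model on a training split.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Validate: gate promotion to production on a quality threshold.
accuracy = accuracy_score(y_test, model.predict(X_test))
assert accuracy > 0.9, "model failed validation; do not promote"

# Deploy: persist the model artifact where a serving tier can load it.
with open("model-v1.pkl", "wb") as f:
    pickle.dump(model, f)

# Monitor: in production, the same metric would be tracked on live traffic
# so a degraded model can be rolled back or retrained.
print(f"promoted model-v1 with holdout accuracy {accuracy:.2f}")
```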

To summarize: With the data fabric, you can control your data. With the container platform, you can control the vast array of microservices and apps in containers. And with ML Ops, you can control the ML lifecycle and have greater collaboration. With these components, you can now deliver the agility, the openness, and the freedom that the line of business is looking for.

Lauren: Could you dive a bit deeper into what a data fabric is?

Anil: HPE builds a data platform that lets enterprises create a data fabric to handle the data logistics challenges your data science teams struggle with. A data fabric interconnects multiple points of data so you can get a global, unified view of your data (see the sketch after this list). A data fabric addresses the following challenges:

  • Data diversity: Manage many types of different data, whether it's images, files, objects, pieces of text, photos, video, etc.
  • Degree of scale: Easily manage the growth in volume, variety, and velocity of data that is outpacing the capabilities of traditional architectures. The largest Oracle databases and the largest financial trading databases are tiny compared to what the planet creates every day on TikTok and Instagram.
  • Extreme reliability at scale: Architected for resiliency at exabyte scale, with distributed and replicated metadata and robust, efficient global mirroring.
  • Globally distributed data: Historically, organizations parked their data inside a data center, and if you wanted to move it, it was a heroic act of rocket science. These organizations need to be able to stitch together data sitting anywhere in the world. It could be on an oil well, in a CT scanner, or inside a car, which is a real case from one of our customers. You can now see a global view of all of your data, which you were never able to do before.
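
As a concrete illustration of that global view, here is a minimal sketch assuming the fabric is exposed as a POSIX-style mount; the /mapr mount point and the cluster and directory names below are hypothetical.

```python
# Minimal sketch: because a data fabric presents one logical namespace,
# data at the edge, in the data center, and in the cloud can be addressed
# with ordinary paths. Mount point and cluster names are hypothetical.
from pathlib import Path

FABRIC_MOUNT = Path("/mapr")  # assumed POSIX-style mount point for the fabric

locations = {
    "edge":       FABRIC_MOUNT / "edge-cluster" / "sensors",
    "datacenter": FABRIC_MOUNT / "dc-cluster" / "warehouse",
    "cloud":      FABRIC_MOUNT / "cloud-cluster" / "lake",
}

# One loop, one API: the application never cares where the bytes live.
for site, path in locations.items():
    files = list(path.glob("*.parquet")) if path.exists() else []
    print(f"{site}: {len(files)} datasets visible under {path}")
```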

Lauren: Speaking of use cases, can you highlight some?

Anil: Absolutely. Let’s take a look at four different ones.

  • Global automaker: Autonomous vehicles

Imagine being able to let go of the steering wheel while driving and still be able to resume control of it—or to totally disengage from the driving process. A global automaker is moving closer to making this a reality as it works to develop highly automated and fully autonomous vehicles.

The company believes that by developing such vehicles, it can provide accident-free driving and enhance vehicle and road safety. But achieving this requires rigorous testing and the ability to collect massive amounts of data from cameras and sensors attached to a car so it can perceive its surroundings and detect issues. The automaker must then make this data available to developers and data scientists, who train algorithms known as deep neural networks. Through these algorithms, a car can learn to make smart decisions in real time using sensor data, enabling it to drive safely.

Data scientists and developers also use this data to identify anomalies, helping them refine automated and autonomous driving systems. The challenge is how to manage and analyze the enormous amounts of data generated all over the world. To accelerate the development of autonomous driving functions, this automaker needed a solution that would allow it to access and share data globally with high performance on a massive scale. The company also wanted to minimize replication and avoid data duplication to optimize hardware resources.

The automaker needed a data platform to collect and manage massive amounts of data from test vehicles and make it available to developers across the world. To accomplish this enormous undertaking, they deployed HPE Ezmeral Data Fabric, accelerating their development of autonomous driving functions with ready access to global data. You can read the full case study here.

  • Large oil pipeline company: Analytics at the edge

Another example of successful analytics in action comes from a large oil pipeline company with technology running at its wellhead pumping stations. Their pipeline isn’t just a big piece of pipe; there’s a lot of electronics and intelligence all along the way. Every one of these pumping stations is quite smart.

The HPE Ezmeral Data Fabric enables this pipeline company to collect data and engage in analytics right at each station. Additionally, they can see a global view of the data without a human having to manually transfer files or run a script. All of this is seamless, so the technician doesn’t have to be involved in the process. Instead, the data is continuously flowing into the data fabric, providing instant access to real-time global data (a minimal sketch of such a flow follows below).
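
As a rough sketch of what such a continuous flow can look like, the snippet below streams a station reading to a Kafka-compatible endpoint using the generic kafka-python client; the broker address, topic name, and reading fields are all illustrative assumptions, not this customer’s actual setup.

```python
# Sketch of a pumping-station agent streaming readings into a data fabric
# through a Kafka-compatible endpoint. All names here are hypothetical.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="fabric-gateway:9092",  # assumed Kafka-compatible endpoint
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def read_sensors() -> dict:
    """Stand-in for the station's real instrumentation."""
    return {"station": "pump-017", "pressure_psi": 412.5, "ts": time.time()}

# The reading flows into the fabric continuously; no technician copies files.
producer.send("pipeline.telemetry", value=read_sensors())
producer.flush()
```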

  • Major retailer: Instant insight with a global view

Today, a retailer’s local stores collect a lot of data, but the store manager can’t see any of the analytics in real time. All the collected data is typically shipped to the corporate headquarters, where someone runs analytics. Next, the results can be sent to the local store – probably the next day or even later. Hence, the store manager can see what happened yesterday, the week before, or even the month prior, but they are looking at old data instead of what is happening right now.

Every major retailer is trying to automate this process in order to get nearly instantaneous analytics in the local store. And HPE helps some of the largest retailers in the world create these data fabrics that are tying together all this distributed data -- allowing them to have a global view of this data in real time and giving them the competitive advantage they need.

  • Medical supply companies: Results faster, security maintained

My last example is from the medical device industry. Many large medical supply companies are using data fabric to modernize the entire data flow from devices within the health care system.

For example, why does it take 15 hours to get a result from a CT scan or an MRI? Can these companies cut that time down to 15 minutes? The only way to do that is to embed a lot of analytics inside the CT scanner. But if they wanted to globally learn from those images in addition to doing local analytics, they would need to send the data somewhere else – somewhere they could run machine learning on images coming from thousands of CT scanners all over the world. That way they could improve the algorithms for more automated spotting of anomalies.

You may be thinking this type of data sharing raises all kinds of interesting security problems. For example, certain countries’ laws say you can’t remove data from that country. So how do you set up controls to enforce such a policy? HPE Ezmeral Data Fabric can easily accomplish that with a policy stating that data on these machines cannot leave this rack, this building, or whatever location is determined (a conceptual sketch follows below). This means you can have data sovereignty by policy.
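
Here is a toy sketch of the placement-by-policy idea, purely conceptual and not Ezmeral’s implementation: each dataset carries location constraints, and replicas are only ever placed on nodes that satisfy them.

```python
# Toy illustration of "data sovereignty by policy": a dataset's policy lists
# location constraints, and the placement step only considers nodes that
# satisfy every constraint. Conceptual sketch, not Ezmeral's code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    name: str
    country: str
    rack: str

POLICY = {"ct-scans-de": {"country": "DE"}}  # German scan data stays in Germany

def allowed(dataset: str, node: Node) -> bool:
    """Return True only if the node satisfies every constraint in the policy."""
    constraints = POLICY.get(dataset, {})
    return all(getattr(node, key) == value for key, value in constraints.items())

nodes = [Node("n1", "DE", "rack-a"), Node("n2", "US", "rack-b")]
placements = [n.name for n in nodes if allowed("ct-scans-de", n)]
print(placements)  # ['n1'] -- the US node is never even a candidate
```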

Lauren: What is the ultimate goal for customers with the use of HPE Ezmeral?

Anil: The enterprises interested in this solution are those who are entering what I call the second decade of data. The last 10 years have been a lot of experimentation and a lot of trials. But every company now is getting to the realization that analyzing data actually works, and they need to put it into production to have a competitive advantage.

The largest financial, retail, healthcare, and manufacturing companies are already using this technology. A bell curve shows us the middle majority are still trying to figure this out. They need solutions that just work without the need for a massive amount of highly specialized people, and they're going to need a partner who can help them. They need someone who can solve for the three discontinuities I mentioned at the start of the interview: AI, containerization, and the data explosion. And they need someone who can do it all at once. HPE does that, offering HPE Ezmeral as a Service through HPE GreenLake. The enterprise can then focus on their business, and HPE will take care of the tech.

To learn more about HPE Ezmeral, visit HPE.com/Ezmeral. To take a few free courses, visit the HPE Ezmeral On-Demand Learning site.

Lauren Engebretson

About the Author


Lauren Engebretson is the product marketing manager for HPE OneView and the HPE Composable Ecosystem Partner Program at Hewlett Packard Enterprise (HPE). Lauren has extensive experience in product management and product marketing across HPE hardware, software, and Infrastructure as a Service solutions. She spearheaded the launch of HPE Composable Infrastructure and HPE Synergy, driving increased awareness, education, and engagement through her technical knowledge and customer-first focus.