HPE Ezmeral: Uncut

Where is MapR today?

Hear from people who worked for MapR Technologies and, since MapR’s acquisition by HPE, are now key contributors to the HPE Data Fabric (formerly the MapR Data Platform).


In a rapidly changing world, it’s important and useful to keep track of trusted friends, even when they change addresses.

If you work with large-scale data in almost any industry, you probably know of the unique data platform engineered over the last decade by MapR Technologies. So where is MapR now? The answer: MapR is alive and very well in its new home at Hewlett Packard Enterprise (HPE).

In this blog, I take a look at life today for the engineering, professional services, support, documentation, and other technical teams that were MapR, as well as for the technology itself.

In August 2019, MapR Technologies was acquired by HPE, and its employees became HPE employees. Since the acquisition, MapR has been integrated into the broader HPE enterprise software business, with continuity of the MapR product roadmap, new work by the engineering team that built the MapR Data Platform, and ongoing support and professional services for customer success.

The technology formerly known as the MapR Data Platform is now called the HPE Data Fabric, and it continues to be used by customers around the world for data storage and data orchestration, including interaction with containerized applications. HPE Data Fabric also serves as the pre-integrated persistent storage layer of the new HPE Container Platform, which HPE recently announced as generally available.

Capabilities bridge different landscapes

I caught up (virtually speaking) with Fabian Wilckens, director of enterprise software sales for DACH & CERTA (i.e. Central Europe) at HPE and previously with MapR as regional VP for Central Europe and Benelux, and asked him how he sees this technology being used to advantage.

Fabian says, “MapR has always been a visionary in the areas of large-scale data management and was very much leading the market in terms of innovation around multi-cluster-management, cloud, and containerization with a focus on the enterprise. With the growing maturity of organizations in adopting these trends, our customers need a solid foundation to manage, orchestrate, and secure their data. The HPE Data Fabric allows our customers to bridge worlds, connect systems, and scale their AI/ML initiatives by providing a global data fabric that spans across all their infrastructure for all data types and data sizes.”

Similarly, companies are looking for better ways to move data across the organization, even across geographically distant locations, for analytics and data acquisition. Jimmy Bates, director of solution architecture for enterprise software at HPE, worked with MapR customers in North America for years and continues to help enterprises leverage the HPE Data Fabric to fluidly handle challenges of data mobility and application mobility across their entire business landscape, from edge to core to cloud.

Ongoing customer success

As important as the technology itself is, access to a strong support team and to professional services is key to ongoing success. In its new home at HPE, MapR’s topflight support team continues to provide the technical help customers need to adopt this innovative technology and get the most out of it. I contacted Narsi Subramanian, senior director of customer success and support at HPE and previously senior vice president of customer success and support at MapR, to find out how his team handles support for the HPE Data Fabric.

Narsi explained, “As we have become part of the HPE family, customer success and support are now strengthened with the adoption of the customer first and customer last vision from HPE. This approach is reflected in the continuous support (24 x 7) for the HPE Data Fabric in industries as diverse as automotive, retail, healthcare, banking, oil and gas, and telecommunications, with a team of talented and dedicated engineers around the globe.”

Customers are using HPE Data Fabric somewhere in the world every hour of the day. If the technology never “sleeps”, the support team must always be there for the customer. 

Customers often want to extend their own systems by engaging professional services to help design and implement their initial use cases, and to expand to new use cases as they come to appreciate the unique capabilities this technology offers. Wayne Cappas was one of the first people I met at MapR back in 2012, and he is now VP of professional services for enterprise software solutions at HPE. Wayne builds on deep product knowledge from years managing MapR solution engineering teams to help his services teams at HPE provide customers around the globe with the expertise they need to implement a wide variety of applications based on HPE Data Fabric, HPE Container Platform, and HPE ML Ops.

Providing context for innovative technology

The strength of new technology can lie in its innovative nature, offering new ways to do familiar things and to do things you’ve never been able to do before. But that innovation also offers challenges: How do you step outside old habits to take full advantage of new approaches?

One thing that can help is providing context as people look at adopting a new approach. No one knows that better than Catherine Lyman, director of the technical documentation team for HPE Data Fabric. She also headed up technical documentation at MapR. “With documentation,” says Catherine, “our team strives to provide context as well as a detailed how-to guide for using the technology. Context is particularly important because MapR, now HPE Data Fabric, provides such a fundamental and broadly applicable technology that we need to help users see the full range of possibilities.”

Having a wide range of experience with different customer challenges and solutions as well as deep knowledge about the HPE Data Fabric also helps to guide people to the most valuable uses of this technology. One person on the HPE Data Fabric team has come full circle with this experience. Andy Lerner, now a solution architect at HPE, was not only at MapR for seven years, but he also had a long career before that at Hewlett Packard. I asked Andy about examples of customers who use HPE Data Fabric software for capabilities they may not initially have known to look for. 

“We have a customer who wants to lower their data storage costs by archiving older data,” he told me. “Although we initially discussed HPE Data Fabric’s ability to tier data to a cloud or on-premises object store (even an object store on inexpensive hardware), we found that the capability for selective erasure coding and placement of data on a subset of cluster nodes could provide cost savings through less expensive servers and networking for archived data, with no additional system management and with faster access to archived data.”
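As a rough illustration of why erasure coding can lower archive costs, the arithmetic below compares the raw capacity needed for the same logical data under 3x replication and an example 4+2 erasure-coding scheme. The 4+2 scheme and the 100 TB figure are assumptions chosen for illustration, not HPE Data Fabric defaults:

```python
# Illustrative sketch: raw storage needed for an archive under
# 3x replication vs. an assumed 4+2 erasure-coding scheme.

def raw_capacity_tb(logical_tb, data_units, parity_units):
    """Raw capacity required when each stripe stores `data_units`
    data fragments plus `parity_units` parity fragments."""
    return logical_tb * (data_units + parity_units) / data_units

logical_tb = 100  # assumed logical size of the archived data

# 3x replication is equivalent to 1 data unit + 2 full copies.
replicated = raw_capacity_tb(logical_tb, 1, 2)
# 4 data fragments + 2 parity fragments per stripe.
erasure_coded = raw_capacity_tb(logical_tb, 4, 2)

print(f"3x replication:   {replicated:.0f} TB raw")
print(f"4+2 erasure code: {erasure_coded:.0f} TB raw")
```

Under these assumptions, 100 TB of archived data needs 300 TB raw with full replication but only 150 TB raw with 4+2 erasure coding, which is the kind of savings Andy describes, before even factoring in cheaper servers and networking for the archive nodes.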

People may choose HPE Data Fabric for a specific purpose. Examples include: storing large-scale data that can be accessed directly by a variety of AI or machine learning tools, doing edge computing on IoT sensor data prior to sending results to core data centers, or meeting the persistent storage requirements of containerized stateful applications with Kubernetes. But as they get used to the ease of data orchestration across their organization, they begin to expand their use cases for HPE Data Fabric. Helping customers see context for this innovative technology is a key way for them to enjoy the full benefits it offers.

Expanding capabilities of HPE Data Fabric

HPE is strongly invested in the continued development of the technology acquired from MapR, as well as in the teams that build and support it. This opens some exciting new possibilities. Ted Dunning, currently CTO for Data Fabric at HPE and previously CTO at MapR, is excited about this new experience with one of the legendary companies of Silicon Valley and how it expands the scope of opportunities for this technology, especially in providing state-of-the-art edge computing and even better security.

"Becoming part of HPE has allowed us to address customer needs on a whole new level,” Ted states. “This is happening partly because HPE has a strong reputation for reliability and consistent delivery, but also because we can work directly with the hardware designers so that MapR's technology—now HPE Data Fabric—will still run everywhere but will run even better on HPE.” 

There is also a need to address the rapidly growing use of containers and Kubernetes, providing greater agility in application deployment and enabling application modernization. Containerization is not just a matter of single containers running isolated processes. Increasingly, people need to be able to efficiently scale workflows up or down, with multiple containers running parallel processes. That requires excellent orchestration, with open source frameworks such as Kubernetes and related tools in the area of machine learning such as Kubeflow. 

Skyler Thomas, previously with the MapR engineering team and now a distinguished technologist in the HPE enterprise software team for Kubernetes-based AI and machine learning, points out, “To do this well, you need a complementary data layer to serve data and models to the compute workflows. This data layer needs to be able to save state from containerized applications for a variety of workloads. That’s where HPE Data Fabric comes in, providing orchestration for persistent data storage and data logistics—whether on-premises, in the public cloud, or at the edge—complementing Kubernetes orchestration for containerized computational processes. That’s why HPE Data Fabric serves as the data layer for the new HPE Container Platform.”

It’s going to be exciting to see what this new phase in the life of MapR holds in store.

Additional resources

To find out more about HPE Data Fabric software (formerly MapR Data Platform) and the ways in which it can be used:

Ellen Friedman
Hewlett Packard Enterprise


About the Author


Ellen Friedman is a principal technologist at HPE focused on large-scale data analytics and machine learning. Ellen worked at MapR Technologies for seven years prior to her current role at HPE and is a committer for the Apache Drill and Apache Mahout open source projects. She is a co-author of multiple books published by O’Reilly Media, including AI & Analytics in Production, Machine Learning Logistics, and the Practical Machine Learning series.