HPE Ezmeral: Uncut

Innovation can pay off in a big way, but can you afford it?

To decide if you should try an innovative approach, you naturally weigh the potentially large benefits against the known risks – the disruption of existing processes and the added cost of resources needed to carry out the new project. A common response to these risks is to avoid uncertainty as much as possible and only try “safe” projects using familiar approaches. But innovative projects are speculative – they may fail. And in business, a “no failure” policy is also a “no innovation” policy. That means you could miss out on the big wins that innovation can deliver.

You can improve your tolerance for a project that may fail if you find ways to lower entry costs while protecting existing SLAs. But the way your data infrastructure and processes work may have unintentionally baked in higher costs and risks than necessary. Let me tell you a story that shows how getting these right can free you to take on speculative projects that may ultimately lead to high rewards.

Real-world example: Taking a risk can be worth it

About eight years ago, a large, well-established financial company stepped outside its comfort zone and tried an innovative project built on an AI-based system. Once both the merchant and the customer opted in, the new application would provide targeted discounts and other types of upselling offers.

From the very beginning, there was high uncertainty. How long would the application take to develop? And after all the development work, would the application actually drive net new revenues? The financial company took the risk, and the entire system was developed, tested, and put into production in a matter of months. 

The new application quickly paid off with significant new revenue streams. This innovation was a big win – yet, with all the uncertainties, how did this financial company possibly afford to do the experiment?

The answer lies in the way they could minimize cost and risk, thanks to the flexibility and efficiency of their data infrastructure. They kept costs down in part because their data infrastructure did not require them to build a new cluster; it could easily support the additional project. In addition, they used customer transaction data that was already being collected for their mainstream business processes, so no extra collection effort or resources were needed. Lastly, they did all this in a way that didn't interfere with the primary purpose of the cluster or encroach on critical business goals. In short, they were able to afford to experiment because their approach bounded the risk and cost of experimentation.

Let’s look more concretely at strategies for system design and data infrastructure that make it easier for you to lower entry costs and bound risk when you try new approaches.

Strategies to make innovation affordable

To afford innovation, you must also be able to afford to fail. Consider these four strategies to make it reasonable to try innovative approaches:

1. Take advantage of sunk costs 

Many of these experimental projects require very large datasets. If your data infrastructure and system design force you to collect or copy new data, or to build a new cluster from scratch for each new project in order to protect existing SLAs, the entry costs will be too high. You would have to guarantee a successful outcome and could not tolerate failure. That, in turn, means you can’t afford to try something that is actually innovative. If, instead, new projects can run against data and clusters you have already paid for, the marginal cost of experimentation drops dramatically.

Another way to take advantage of sunk costs is to use a secondary cluster normally intended for disaster recovery as a sandbox for experimental projects. You have a comprehensive copy of data standing ready in case of catastrophic loss at the primary data center. Why not put it to work in the meantime as a resource for new project development?
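To make that idea concrete, here is a minimal sketch – not taken from either customer example – of how an analyst might prototype against a read-only DR mirror. It assumes the mirrored volume is visible through a POSIX mount (HPE Ezmeral Data Fabric exposes clusters under /mapr/<cluster-name>); the file path and column name are hypothetical.

# Prototype against a read-only DR mirror instead of standing up a new cluster.
# The mount point, file path, and column name below are hypothetical.
import pandas as pd

MIRROR_PATH = "/mapr/dr-cluster/mirrors/transactions/2020-03.csv"

# Read-only exploration: the mirror itself stays untouched, so the DR
# guarantee (a consistent copy ready for recovery) is never put at risk.
df = pd.read_csv(MIRROR_PATH)

# A quick feasibility check for an experiment: is there enough opted-in
# transaction volume to justify building a targeted-offer model?
opted_in = df[df["customer_opt_in"]]
print(f"{len(opted_in)} opted-in transactions out of {len(df)} total")

Because the sandbox work only reads the mirror, the standby copy remains intact for its primary job: disaster recovery.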

2. Make use of robust multi-tenancy

To take advantage of sunk costs and reduce the risk of disruption from new projects, your data platform needs to support real multi-tenancy, making it easy and safe for multiple applications and users to share the same data. For this to work, you need easily managed, consistent, fine-grained access control. You also need efficient, automated resource allocation and the ability to run containerized applications. If you can do this, you can attempt a new project with an uncertain outcome but a potentially high return – without excessive entry costs or the danger of interfering with critical SLAs.
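To make the resource-allocation piece concrete, here is a minimal sketch – again, not taken from the customer example – that uses the Kubernetes Python client to cap an experimental tenant's footprint with a ResourceQuota. The namespace name and limits are hypothetical placeholders.

# Bound an experimental tenant with a Kubernetes ResourceQuota so a runaway
# experiment cannot encroach on the resources that back production SLAs.
# The namespace name and limits are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="experiment-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.cpu": "8",        # the experiment can never claim
            "requests.memory": "32Gi",  # more than this slice of the cluster
            "limits.cpu": "16",
            "limits.memory": "64Gi",
        }
    ),
)

core.create_namespaced_resource_quota(namespace="sandbox-team-a", body=quota)

With a hard cap like this in place, a failed or runaway experiment costs only its bounded slice of resources – the tenants running production workloads never notice.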

3. Handle data logistics at the platform level, not the application level

You can also reduce entry costs for new projects if you handle data logistics at the platform level, not the application level. Not only is this approach generally more efficient and less error-prone, it also means your developers don’t have to re-implement data movement each time they build a new project. In other words, doing logistics at the platform level is another way to truly take advantage of sunk costs.

Another real-world example illustrates this strategy. A retail company with highly distributed online services needed to move telemetry data from many edge data sources to their core data center for service-quality analysis and billing. This data movement had previously been done at the application level, but that was cumbersome and imposed a heavy burden on developers.

By switching to the event message stream capabilities built into the platform, data acquisition and transfer to the core were implemented easily and reliably. This change to platform-based logistics freed up so much developer time for other projects that the customer claimed a net result of “a year of developer time in the bank.” Details of this interesting use case are found in the article “Using Data Fabric and Kubernetes in Edge Computing.”
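As an illustration of what the application side looks like once logistics move to the platform, here is a minimal sketch of an edge publisher written against a Kafka-compatible API (HPE Ezmeral Data Fabric event streams support the Kafka API). The broker address, topic name, and payload fields are all hypothetical.

# An edge service publishes telemetry to a platform-managed event stream;
# replication from edge to core is then the platform's job, not the app's.
# The broker address, topic name, and payload fields are hypothetical.
import json
from kafka import KafkaProducer  # any Kafka-compatible client works here

producer = KafkaProducer(
    bootstrap_servers=["edge-gateway:9092"],  # hypothetical endpoint
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# With data-fabric event streams, the topic name would carry the stream
# path, e.g. "/apps/telemetry/edge-stream:service-metrics" (hypothetical).
TOPIC = "service-metrics"

producer.send(TOPIC, {"site": "edge-042", "latency_ms": 18, "ok": True})
producer.flush()

The application code ends at publish; mirroring the stream from edge clusters to the core data center is configured once at the platform level rather than re-implemented in every service.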

4. Expand your idea of what constitutes a deliverable

A final strategy is to re-think what constitutes a “deliverable.” If a project must deliver a successful production system, the risk of failure may preclude trying anything new. What if you also count experience as a deliverable? Of course, you still must impose specific limitations on resources and time allotted to experimentation. Additionally, you will need a well-defined target that aligns with business goals. But the value of experience – if acknowledged and communicated across your organization – can improve the outcomes for other new projects.

This last idea underscores the importance of making learning a key part of your project. Innovation relies on an organization-wide build-up of tribal knowledge and experience that eventually reaches critical mass and creates a solid competitive edge.

Key enablers

Several key enablers will help you put these strategies into play. The companies in both real-world examples described here were MapR customers (MapR Technologies was acquired by HPE in 2019) and used HPE Ezmeral Data Fabric (formerly known as the MapR Data Platform). HPE Ezmeral Data Fabric lets you handle data logistics efficiently at the platform level, with conventional access to a highly scalable file system. It also supports bi-directional replication of built-in tables and event streams, incremental mirroring across data centers from edge to core to cloud, and true multi-tenancy.

Another enabler that supports multi-tenancy and makes innovation affordable is a container platform, with the convenience, improved resource utilization, and better performance it brings. HPE recently announced general availability of the HPE Ezmeral Container Platform, which uses HPE Ezmeral Data Fabric as its data layer. In a March 2020 CIO.com article, Robert Christiansen describes the benefits IT teams enjoy by “collapsing the stack” and improving the containerization experience through use of the HPE Ezmeral Container Platform.

Next steps

A good starting point is to look at your system design: Consider structural changes to methods and data infrastructure that could make your system better able to support multi-tenancy and avoid unnecessary costs and risks for trying new projects. Then explore your data and your team’s ideas to identify new projects – even speculative ones – with high-potential rewards in line with your business goals.

For more information about HPE Ezmeral Data Fabric, see the links below.


Ellen Friedman

Hewlett Packard Enterprise

www.hpe.com/containerplatform

www.hpe.com/mlops

www.hpe.com/datafabric

 

About the Author


Ellen Friedman is a principal technologist at HPE focused on large-scale data analytics and machine learning. Ellen worked at MapR Technologies for seven years prior to her current role at HPE, and she is a committer on the Apache Drill and Apache Mahout open source projects. She is also a co-author of multiple books published by O’Reilly Media, including AI & Analytics in Production, Machine Learning Logistics, and the Practical Machine Learning series.