Building and evolving a data platform for advanced analytics in healthcare
In today’s business landscape, companies are overwhelmed with data. New data sources and types of analytics are constantly arriving, meaning the work of creating a data platform that allows data to be ingested, transformed, modeled, and delivered in a purpose-built form never ends.
A challenge many organizations face is how to make better use of their data, whether to save money, bolster ROI, improve patient care, or spark new innovations. Businesses must be able to exploit their data to improve how they operate and thrive in the contemporary data environment. Fortunately, if companies follow the right approach, they can achieve these demanding goals.
The beginning of this process to better harness data rests on understanding what data a company has, where it lives, and how to get at it. This data foundation becomes even more powerful when amplified by cloud practices and principles for transformation and management. Then, by adding a well-crafted layer of metadata, the data starts flowing to the needed AI/ML, analytics, and other data consumers to create new dimensions of business value.
This article outlines this high-level process and provides a look at how it comes to life in healthcare.
Building the data foundation
The most essential first step in establishing a data platform for advanced analytics is to build a data foundation. During this first phase of overhauling a company’s approach to data, businesses should create a map of all their data so they can answer two essential questions:
- What data do we have?
- Where does that data reside?
Without being able to answer these two questions, companies are living in a state of data chaos where any attempt to leverage data is guesswork. They often have data everywhere, with no organizational structure or unified approach, meaning the data is likely to be low quality, messy, and expensive to manage.
Building a data foundation overcomes this chaos by generating an inventory of what data exists, as well as its location. Once this map or glossary is in place, organizations can begin the work of improving the quality of the data and cataloging it in a vital metadata layer. Generating this metadata layer will allow companies to see how users find data and understand what tools can assist them in how they use it. Metadata provides a full description of data, making it transparent and accessible to users.
The crucial role of metadata
Metadata is vital, not just for the data foundation, but for a company’s entire data strategy. Metadata enriches the underlying data so organizations can know what they have and where it’s located, what value the data has, as well as who is using it. Effective use of metadata starts by creating a simple catalog or glossary of available data and then allowing the description and usage metrics to be opportunistically enriched.
Once a business establishes this backbone of metadata, it can apply metrics to its data to enable broader governance programs. Organizations can determine what is good or bad quality data and what data is sensitive (and therefore needs greater security policies). Metadata enables an organization to create these enterprise-wide policies that strengthen and unify how data will be consumed and analyzed and ensures only the right people have access to specific data. Accelerated by metadata, eventually organizations can create data products that vastly enhance their ability to fully put their data to use.
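To make this concrete, here is a minimal sketch of what a catalog entry with quality and sensitivity metadata might look like, along with a simple access policy built on top of it. The field names, sensitivity labels, and role mapping are illustrative assumptions, not any specific HPE product API:

```python
from dataclasses import dataclass

# Hypothetical sketch of a metadata catalog entry: every field here
# (quality_score, sensitivity, usage_count) is illustrative, showing how
# metadata can answer "what do we have, where is it, who may use it?"
@dataclass
class CatalogEntry:
    name: str
    location: str                # where the data physically resides
    description: str = ""
    quality_score: float = 0.0   # 0.0 (unknown/poor) to 1.0 (validated)
    sensitivity: str = "public"  # e.g. "public", "internal", "phi"
    usage_count: int = 0         # opportunistically enriched over time

def can_access(entry: CatalogEntry, user_roles: set[str]) -> bool:
    """Enforce a simple enterprise-wide policy: sensitive data
    (e.g. protected health information) requires a matching role."""
    required = {"phi": "clinician", "internal": "employee"}
    role = required.get(entry.sensitivity)
    return role is None or role in user_roles
```

With entries like this in place, governance questions ("is this dataset high quality?", "who may read it?") become lookups against the metadata layer rather than guesswork.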
Cloud principles
Once businesses build their data foundation, they can then begin to operationalize their approach to delivering cloud-native services. This is when they can truly start to exploit their data for better organizational outcomes. With so much data available to organizations, there is no single way to conduct modeling and semantics, and no one "correct" data infrastructure and pipeline. But the right strategy for using data in cloud-native services for advanced analytics is predicated on ensuring users have as many self-service options as possible at every layer, both for internal users and for external/third-party users.
Using cloud principles allows organizations to work with ephemeral instances that come to life and disappear as needed, as well as auto-scaling workloads to achieve desired performance. Operations also benefit from adopting cloud principles because they provide flexibility and adaptability. The organization's infrastructure grows and shrinks with workload demand, so there is no need to invest in oversized infrastructure just to support peak loads.
These cloud principles should be based on open-source technology and containerization concepts common in the public cloud that will make a data strategy operate smoothly. These include separating compute from storage, deploying workloads in microservices, allowing self-service for users, and scaling data instances up or down without having to involve IT. With the cloud-native data services as the second stage and a data foundation as the first, organizations can start to extract value from the data they have.
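The auto-scaling principle described above can be sketched in a few lines: compute grows and shrinks with demand instead of being sized for the peak. The function name, thresholds, and the use of queue depth as the demand signal are all assumptions for illustration, not a specific product's scaling algorithm:

```python
# Illustrative auto-scaling sketch: size compute to current demand,
# bounded by a floor and ceiling, rather than provisioning for peak load.
def desired_replicas(current: int, queue_depth: int,
                     target_per_replica: int = 100,
                     min_replicas: int = 1, max_replicas: int = 20) -> int:
    """Scale so each replica handles roughly `target_per_replica` queued items."""
    needed = -(-queue_depth // target_per_replica)  # ceiling division
    return max(min_replicas, min(max_replicas, needed))
```

In a real deployment this decision would typically be delegated to the platform (for example, a container orchestrator's autoscaler) rather than hand-rolled, but the logic is the same: demand in, replica count out.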
Completing the productization of the data
The third phase of utilizing a robust data platform to exploit value from data is to productize it. This stage connects data producers with data consumers. Now that organizations know what data they have and where it is located, they can provide those connections on a product-by-product basis. Treating these individual outcomes as data products ensures the organization delivers the most value in every instance.
As organizations create data products, it is important they keep in mind the end user and the context in which that user will apply and work with the data. A data product can be as simple as a data dashboard and as complicated as a full-throttle, AI-driven machine learning model offering detailed recommendations to clinicians in patient settings. Examples of other types of data products include applications running at the edge that make decisions, or an API someone has subscribed to. It is up to the organization to define what the product is and have the tools and platforms in place to make the most of it, so the people or systems who need it have access to it.
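One way to think about a data product is as a contract between producer and consumer: a named, owned dataset with a schema that producers must honor and consumers can rely on. The class and field names below are a hypothetical sketch of that idea, not an API from any particular platform:

```python
# Hedged sketch: a "data product" as a contract between data producers
# and data consumers. The schema is the contract; publishing validates it.
class DataProduct:
    def __init__(self, name: str, owner: str, schema: dict[str, type]):
        self.name = name
        self.owner = owner          # accountable team, e.g. "research"
        self.schema = schema        # the contract consumers rely on
        self._rows: list[dict] = []

    def publish(self, row: dict) -> None:
        """Producers add records; rows violating the contract are rejected."""
        for col, typ in self.schema.items():
            if not isinstance(row.get(col), typ):
                raise ValueError(f"{col!r} must be {typ.__name__}")
        self._rows.append(row)

    def consume(self) -> list[dict]:
        """Consumers read a stable, validated view of the data."""
        return list(self._rows)
```

Whether the product is ultimately served as a dashboard, a model, or a subscribed API, the same elements recur: a clear owner, a contract, and a well-defined way for consumers to get at the data.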
A modern data healthcare platform
It's now worth considering a real-life example of how building the right data platform can improve the way an organization pursues and excels in its mission. In the healthcare space, being able to analyze one's data can lead to improved patient outcomes or new approaches to research.
For instance, having an orderly data platform, instead of data chaos, means a patient’s medical information (regardless of where the healthcare occurred in the past) is in a single place and is easily accessible for physicians. Healthcare workers can then have the full context when giving care recommendations. Additionally, unified data platforms allow for improved medical research as in the case of drug trials; side effects can more quickly be identified and effectiveness determined for entire subsets of patients.
Building a data platform is not a one-time exercise but a process that never ends. To make evolution speedy and affordable, the principles and practices we have set forth must be built in from day one. An agile data platform starts with the right architecture. To learn more about how to make this a reality, watch my keynote speech on The Future of Advanced Analytics: Looking at a Multi-Phased Approach for True Data Exploitation at the AI Summit New York, delivered jointly with Robert Christiansen, VP, Office of the CTO at HPE.
Matt
Hewlett Packard Enterprise
twitter.com/HPE_Ezmeral
linkedin.com/showcase/hpe-ezmeral
hpe.com/software