Around the Storage Block
katedavis

Experience the data pipeline from edge to core to cloud


Many businesses today are experiencing an influx of data from the edge – generated by devices, sensors and machines. Some of this data is expected and planned for; other times, it is more than they can handle.

Dealing with data is a constant challenge, and in this new world there are two primary data challenges customers face today as they work to rationalize and optimize their time to value:

  1. Hybrid IT – how core infrastructure and core business applications will function in a hybrid IT world, where they are driven by demands for faster deployment and greater agility in spinning up new applications to respond to the velocity and variety of data types emerging from the intelligent edge
  2. Intelligent Edge – how to deal with a wealth of devices and the data they generate and send, including the sheer volume of data and the different data types

When, where and how we analyze data is also changing. Analytics is a key area of focus for businesses: being able to analyze this new type of data, at the new speed at which it is generated at the intelligent edge, and to do so in a hybrid world. It starts at the intelligent edge with data being generated from a multitude of sensors and devices. Sometimes that data is collected and analyzed at the device level itself; sometimes there's an aggregation point, and that aggregation point might be a car, a remote site (such as a hospital with patient medical sensors), or simply the PC of a data scientist.
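To make that aggregation step concrete, here is a minimal sketch of the idea in Python. Everything in it (the sensor values, the summary fields, the hospital-gateway framing) is a hypothetical illustration rather than HPE software:

```python
from statistics import mean

# Hypothetical sketch of an edge aggregation point (e.g., a gateway in a
# hospital ward) that collapses raw patient-sensor readings into compact
# summaries before forwarding them to the core. Names, fields and sample
# values are illustrative only, not an HPE API.

def summarize(readings: list[float]) -> dict:
    """Reduce a batch of raw readings to a small summary record."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": round(mean(readings), 2),
    }

# Raw heart-rate samples collected at the device level
raw_samples = [72.0, 74.5, 71.8, 90.2, 73.1]

# Only the summary travels upstream, cutting bandwidth to the core
print(summarize(raw_samples))
# {'count': 5, 'min': 71.8, 'max': 90.2, 'mean': 76.32}
```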

Some form of that data is often sent back to the core. If it's coming from multiple streams and multiple devices, it needs to be analyzed in real time. This type of analytics in the core can also happen in the cloud for customers looking for special-purpose functionality, such as spinning up GPU-focused test beds for short-term machine learning projects, as well as longer-term storage and a tiered storage environment.

All of this creates the data pipeline.

The data flow from edge to core to cloud needs routing – through a data pipeline – which provides an infrastructure that allows data not just to flow bi-directionally but also supports the implementation of analytic processes in real time, near-real time, and at rest, as well as AI modeling.
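As a rough illustration of that routing idea, the sketch below tags each incoming record for real-time, near-real-time, or at-rest processing based on a simple latency budget. The tier names and thresholds are assumptions made for the example, not part of any HPE product:

```python
from dataclasses import dataclass

# Hypothetical pipeline-routing sketch: classify incoming records into
# real-time, near-real-time, or at-rest tiers. Tier names and latency
# thresholds are illustrative assumptions, not an HPE product API.

@dataclass
class Record:
    source: str             # e.g., "sensor-42"
    payload: bytes
    latency_budget_ms: int  # how quickly this record must be acted on

def route(record: Record) -> str:
    """Pick a processing tier from the record's latency budget."""
    if record.latency_budget_ms <= 100:
        return "real-time"       # stream analytics in the core
    if record.latency_budget_ms <= 60_000:
        return "near-real-time"  # micro-batch analytics
    return "at-rest"             # tiered storage, later AI modeling

print(route(Record("sensor-42", b"...", 50)))          # real-time
print(route(Record("camera-7", b"...", 5_000)))        # near-real-time
print(route(Record("archive-1", b"...", 86_400_000)))  # at-rest
```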

HPE is uniquely positioned to help customers address their challenges of building a data pipeline from edge to core to cloud.

HPE provides tested solution architectures that are purpose-built or workload-optimized, from the intelligent edge to the core, along with a combination of services, both support and professional advisory. It's the products as well as the expertise that let us help customers build these richer data pipelines and accelerate their outcomes based on the next generation of analytics.

At the heart of the analytics infrastructure is the HPE Elastic Platform for Big Data Analytics (or EPA for short), built primarily on the Apollo platform family, which takes our innovation around workload-optimized nodes and disaggregates storage and compute within the cluster.

For more info, here's @patrick_osborne with an overview of the Apollo family and EPA architecture:

 

For more info on the HPE Elastic Platform for Big Data Analytics, visit hpe.com/info/bigdata-ra

About the Author

katedavis

I have been working in the tech industry for over 15 years, marketing hot topics including storage, software-defined, big data, hybrid cloud and as-a-service.