HPE Webinars - 2018

Webinar Series - Storage Solutions for Big Data (4 Sessions)


HPE (Storage) Solutions for Big Data  (Internal and Channel Partner use only)

Dear All,

Please join us for a 4-session series covering topics in the Big Data arena! The objective of this Solution Month is to give sales and presales the information they need to enter customer discussions and to uncover opportunities in the Big Data space. The sessions highlight various aspects of Big Data solutions and include use cases and real-world examples. The content is pan-HPE in nature, with the last two sessions focused more on the storage aspects within the overall solution architectures. The session content is delivered by speakers from a variety of organizations.

Session 1:  Data Analytics and Intelligence - From Edge to Core and Cloud

Data is exploding everywhere. Our customers need to manage it, whether it lives inside or outside the datacenter and whether it is structured or unstructured. Different siloed systems allow the customer to capture, manage, prepare and process the data. Join the call to understand the pan-HPE point of view on an end-to-end data pipeline and how IoT, AI and Big Data play together. Understand how to connect the data pipelines to get value out of massive data.

The Pointnext CoE “AI Data and Emerging Technologies” will present a real-world use case on how they optimized Seagate's manufacturing process. This session lays the foundation for the next Big Data Solution Month session.

  • Date:  December 12, 2018
  • Time:  09:30 am PST / 10:30 am MT
  • Duration: 45 minutes
  • Target Audience: All
  • Content: 
    - Edge/IoT Concepts
    - Fast Data
    - Big Data
    - AI & Deep Learning
    - Use-Cases and real-world examples
    - HPE Services, Solutions and Products
  • Registration/Recording 


Session 2:  The Data Lake 3.0 – HPE offerings around the latest Hadoop release

Businesses in every industry are using Hadoop to manage data growth, connect different data sources, and extract conclusions that give them a competitive advantage. In this session we will provide a quick recap of the key principles of Hadoop and what our customers are doing with the Big Data framework.

Catch up on the latest developments and features and learn how the Hadoop distributions differ. We will discuss how erasure coding and containerization will disrupt current deployments and create new opportunities. See how well the HPE Elastic Platform for Analytics (EPA), with Apollo and BlueData, fits the latest release of Hadoop. Learn how HPE and our partners can provide a robust foundation for Hadoop 3.0 and an enterprise-grade Data Lake.
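To illustrate why erasure coding matters for sizing conversations, here is a minimal sketch of the raw-capacity math. The 3x replication default and the RS(6,3) policy are real HDFS numbers from Hadoop 3.x; the function names themselves are illustrative, not part of any HPE or Hadoop tool.

```python
# Sketch: raw disk capacity needed per logical TB of user data
# under HDFS 3x replication vs. a Reed-Solomon RS(6,3) erasure-coding
# policy (6 data blocks + 3 parity blocks, as shipped in Hadoop 3.x).

def raw_tb_replicated(logical_tb, replicas=3):
    """Raw disk needed when every block is stored `replicas` times."""
    return logical_tb * replicas

def raw_tb_erasure_coded(logical_tb, data_blocks=6, parity_blocks=3):
    """Raw disk needed under an RS(data, parity) erasure-coding policy."""
    return logical_tb * (data_blocks + parity_blocks) / data_blocks

logical = 100  # TB of user data
print(raw_tb_replicated(logical))     # 300 TB raw (200% overhead)
print(raw_tb_erasure_coded(logical))  # 150.0 TB raw (50% overhead)
```

For the same fault tolerance class, RS(6,3) halves the raw footprint versus 3x replication, which is the source of the "disruption" for current deployment sizing.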

  • Date:  January 15, 2019
  • Time:  08:00 am PST / 9:00 am MT 
  • Duration: 60 minutes
  • Target Audience: All 
  • Content: 
    - Hadoop Principles and Use-Cases
    - Key Changes in Hadoop 3.0
    - Hadoop Distributions
    - HPE Elastic Platform for Analytics and BlueData
    - HPE GreenLake offering
    - Solution Architect: How to qualify a Hadoop deal + Customer Q&A
  • Registration/Recording   


Session 3:  Storage Special – Scale-out Storage in the Big Data Age

Gartner predicts that next year more than 50% of the storage capacity installed in enterprise data centers will be deployed as software-defined storage or hyperconverged systems. Are you ready for next year?

In this session we will cover the HPE scale-out storage offerings and how the solutions differ. Understand the different products and software components, from object storage, like Scality RING or Ceph, to scale-out file storage with the Qumulo File Fabric. Get an overview of the HPE portfolio and understand when to position which storage product and what the competition has to offer. We will also spotlight the new Apollo 4200 Gen10 servers, designed for scale-out storage and Big Data workloads.

  • Date:  January 23, 2019
  • Time:  08:00 am PST / 09:00 am MT 
  • Duration: 60 minutes
  • Target Audience: All
  • Content:
    - Scale-out market and drivers
    - Concepts and techniques
    - HPE and Partner Portfolio – when to position what
    - Spotlight Apollo 4000 series
    - HPE GreenLake offering
  • Registration/Recording


Session 4:  Storage Special – Big Data and Storage Tiering

There is one constant in Big Data projects: the amount of data will grow. Requirements range from analyzing hot data to keeping information immutable and persistent at petabyte scale. Different platforms and options are available; the most popular is Hadoop. However, the Hadoop HDFS storage layer is not a very efficient data store, due to its default three-way replication. Why not use storage tiering to move cold data to a more “cloud-like” data store?

Join the session and learn about the options for connecting Hadoop with object storage, and how this changes with the new Hadoop 3.0 release and its implementation of erasure coding. We will discuss the different storage tiering options, as well as the HPE object storage offerings, like Scality or Ceph. Learn when it makes sense to build an additional storage tier to enrich Big Data environments.
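The tiering argument above can be sketched with some simple footprint arithmetic. This is an illustrative model, not an HPE tool: the 30-day hot/cold cutoff, the data set sizes, and the function names are all assumptions; the replication and RS(6,3) factors are standard HDFS numbers.

```python
# Sketch: split data sets into a hot tier (3x-replicated HDFS) and a
# cold tier (erasure-coded or object storage) by age, then compare the
# total raw footprint against keeping everything replicated 3x.

REPL_FACTOR = 3.0   # HDFS default replication
EC_FACTOR = 1.5     # RS(6,3): (6 + 3) / 6 raw TB per logical TB

def tiered_raw_tb(datasets, hot_age_days=30):
    """datasets: list of (age_days, logical_tb) pairs.
    Returns (hot_tier_raw_tb, cold_tier_raw_tb)."""
    hot = sum(tb for age, tb in datasets if age < hot_age_days)
    cold = sum(tb for age, tb in datasets if age >= hot_age_days)
    return hot * REPL_FACTOR, cold * EC_FACTOR

# Hypothetical inventory: (age in days, logical size in TB)
data = [(5, 20), (90, 200), (400, 300)]
hot_raw, cold_raw = tiered_raw_tb(data)
all_hot_raw = sum(tb for _, tb in data) * REPL_FACTOR
print(hot_raw + cold_raw, "vs", all_hot_raw)  # 810.0 vs 1560.0
```

In this (made-up) inventory, moving the cold 500 TB to an erasure-coded tier cuts the raw footprint roughly in half, which is the economic case for adding a storage tier next to HDFS.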

  • Date:  February 13, 2019
  • Time:  08:00 am PST / 09:00 am MT 
  • Duration: 45 minutes
  • Target Audience: All 
  • Content: 
    - Big Data / Hadoop Customer Scenarios
    - Hadoop File System (HDFS)
    - Data tiering concepts
    - Erasure Coding vs. Cloud storage
    - HPE Portfolio and Offering
  • Registration/Recording