EzmeralExperts

HPE, Intel, and Splunk have done it again!

Last year, HPE, Intel, and Splunk showcased turbocharged infrastructure and operations for Splunk applications on HPE ProLiant Gen10 servers powered by 2nd Generation Intel Xeon Scalable processors. By combining the power of processing, storage, and containerization with the power of partnerships, we achieved a 17x increase in ingest rate per server, scaling from 500 GBytes per day to 8.7 TBytes per day.

Yet we knew more performance was available with technology advancements from HPE and Intel. So we set out to see how far we could push Splunk’s ingest performance while testing new components, including tri-mode RAID controllers, encryption at rest, Intel 3rd Generation Xeon Scalable Processors, and Intel E810 network adapters. The result? We ramped up throughput even higher by utilizing the latest HPE ProLiant Gen10 Plus servers powered by Intel 3rd Generation Xeon Scalable Processors (code-named Ice Lake)!

The setup

The architecture has several distinct layers, each providing a specific function.

HPE Ezmeral Runtime forms the heart of the orchestration layer and provides the control plane, Kubernetes cluster, Istio ingress, and Istio load balancing. It also coordinates with the Splunk Operator for Kubernetes.

Running containerized Splunk requires use of Splunk SmartStore, and HPE Scalable Object Storage with Scality RING provides the SmartStore target. Scality RING is certified for use with SmartStore, having passed all tests for single-site and multi-site storage.
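A SmartStore remote volume is defined in Splunk's indexes.conf and pointed at an S3-compatible endpoint such as Scality RING. The sketch below is a minimal, hypothetical example; the endpoint URL and bucket name are placeholders, not values from the tested configuration:

```python
import configparser

# Minimal SmartStore indexes.conf sketch. The volume points at an
# S3-compatible target (here, a placeholder Scality RING endpoint);
# the [main] index stores its warm buckets on that remote volume.
indexes_conf = """
[volume:smartstore]
storageType = remote
path = s3://splunk-smartstore
remote.s3.endpoint = https://ring.example.internal

[main]
remotePath = volume:smartstore/$_index_name
"""

cfg = configparser.ConfigParser()
cfg.read_string(indexes_conf)
print(cfg["volume:smartstore"]["storageType"])  # remote
```

With `storageType = remote`, indexers keep only hot buckets and a local cache; warm data lives on the RING, which is what makes the indexer pods effectively stateless and safe to scale up and down.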

Splunk Forwarders and Gogen instances run together on load generators separate from the Splunk Indexer and Search containerized clusters. The forwarders are themselves containerized and use a containerized deployment server to apply a consistent configuration across all forwarders. Each forwarder monitors webserver logs generated by two dedicated Gogen instances. With 8 forwarders configured per load generator, 16 Gogen instances run concurrently.

With this design, at least 64 Splunk Forwarders feed the indexers in the Splunk cluster. The design can scale to hundreds of forwarders per indexer in a larger production rollout.
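The load-generation topology above works out as follows (the load-generator count of eight is an assumption implied by the 64-forwarder total at 8 forwarders each):

```python
# Forwarder/Gogen topology from the text. load_generators = 8 is an
# assumption inferred from "at least 64 forwarders" at 8 per generator.
forwarders_per_generator = 8
gogen_per_forwarder = 2   # each forwarder monitors logs from two Gogen instances
load_generators = 8

gogen_per_generator = forwarders_per_generator * gogen_per_forwarder
total_forwarders = load_generators * forwarders_per_generator

print(gogen_per_generator)  # 16 concurrent Gogen instances per load generator
print(total_forwarders)     # 64 forwarders feeding the indexer cluster
```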

Ten logical CPUs and 128 GiB of RAM are reserved for each Splunk Indexer pod. We then run Gogen against this setup, rolling buckets out to the Scality RING via SmartStore. We start with one container per physical ProLiant DL380 Kubernetes worker and scale up the number of containers to maximize per-host throughput.
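The Splunk Operator for Kubernetes manages indexers through an IndexerCluster custom resource. The sketch below shows roughly what such a resource looks like with the per-pod reservations described above; the API version and field layout follow operator conventions but should be treated as illustrative, not as the exact schema used in the test:

```python
import json

# Illustrative IndexerCluster custom resource for the Splunk Operator for
# Kubernetes. Names, namespace, and replica count are hypothetical; the
# resource reservations match the per-pod figures in the text.
indexer_cluster = {
    "apiVersion": "enterprise.splunk.com/v1",
    "kind": "IndexerCluster",
    "metadata": {"name": "splunk-idx", "namespace": "splunk"},
    "spec": {
        "replicas": 4,  # scaled up one pod at a time during the test
        "resources": {
            "requests": {"cpu": "10", "memory": "128Gi"},  # per-pod reservation
            "limits": {"cpu": "10", "memory": "128Gi"},
        },
    },
}

print(json.dumps(indexer_cluster, indent=2))
```

Scaling per-host throughput then amounts to increasing `replicas` and letting the scheduler pack additional pods onto each DL380 worker until CPU headroom is consumed.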

Significantly increased throughput and performance

In our previous test, running one indexer per host gave about 500 GBytes per day of Splunk ingest per host. With updated compute, storage, and networking, we are now able to drive 3 TBytes per day of ingest per host. But at one pod per host, CPU utilization is only 12%. To better utilize the performance capability of the servers, we ramp up one pod at a time and reach 10.4 TBytes per server per day of ingest into the system while running at 61% CPU utilization.

[Chart: per-server Splunk ingest throughput scaling as indexer pods are added]

At 10.4 TBytes per day of ingest per server, the system now delivers 20.8 times the ingest throughput of our original result of 500 GBytes per day per server.
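The 20.8x figure follows directly from the numbers above, using decimal units (1 TByte = 1000 GBytes):

```python
# Throughput arithmetic from the results above.
baseline_gb_per_day = 500      # original test: one indexer per host, GBytes/day
single_pod_tb_per_day = 3.0    # Gen10 Plus, one pod per host (12% CPU)
scaled_tb_per_day = 10.4       # Gen10 Plus, scaled-out pods per host (61% CPU)

speedup = scaled_tb_per_day * 1000 / baseline_gb_per_day
print(round(speedup, 1))  # 20.8x over the original 500 GBytes/day result
```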

Delivered as a service

To bring this solution to life, HPE leverages HPE GreenLake and delivers this Splunk-optimized solution as a fully managed and supported platform as a service (PaaS). HPE manages everything up through the container and storage layers for you: no patching, performance tuning, or maintenance required. And you don’t need hard-to-find Kubernetes skills; HPE takes care of it.

Learn more at Splunk .conf21 Virtual

To learn more, visit the HPE virtual booth at Splunk .conf21 virtual, October 19-21. Check out our demos and don’t miss these two educational theater sessions:

  • Bringing Dark Data to Light with Modern Splunk Deployments, by Matt Hausmann, Group Manager - HPE Ezmeral GTM, HPE
  • Are you SmartStore Ready? By John Elliott, Sr. Technical Marketing Engineer, HPE and Maziar Tamadon, Director, Product & Solution Marketing, Scality

Notes on Test Config: Test by HPE as of August 2021. 10-node HPE ProLiant DL380 Gen10 Plus, 2x Intel Xeon Gold 6354 18 cores @ 3.0GHz, HT On, Turbo ON, 1024 GB RAM (32 x 32GB 2Rx4 PC4-3200 DIMMs), 16 x 7.68TB self-encrypting NVMe SSD, 1 x MR416i-p tri-mode RAID controller, 2 x Intel E810 10/25GbE 2 port SFP28 NIC, CentOS 7.9, Splunk 8.2.0, Splunk Operator for Kubernetes 1.0.1, HPE Ezmeral Runtime 5.3

About the authors:

Elias Alagna is a Chief Technologist in the Hewlett Packard Enterprise North America Hybrid IT office of the CTO. His technical leadership includes working with commercial, government, and education entities across a broad set of products and services. Areas of expertise include ERP system architecture, HPC, business continuity and disaster recovery, OLTP and BIDW database solution architecture, and storage hardware and software solutions. Elias Alagna – Elias.S.Alagna@hpe.com

 

Rajesh Vijayarajan is a Distinguished Technologist with the Global Sales Engineering (GSE) team at Hewlett Packard Enterprise, based out of Dallas, TX. He began his career at HPE 17 years ago in R&D. His passion for serving customers drove him to transition to a technical role in Global Sales 11 years ago. Today, he takes the lead on all emerging technologies for the group, including Data Analytics, ML & AI, IIoT, and Edge-to-Cloud architectures. He is a trusted advisor to numerous top accounts, including Walmart, PepsiCo, Toyota, Experian, FedEx, Motorola Solutions, and Mastercard. Rajesh Vijayarajan - rajeshvj@hpe.com

Hewlett Packard Enterprise

HPE Ezmeral on LinkedIn | @HPE_Ezmeral on Twitter

@HPE_DevCom on Twitter 

 
