Servers & Systems: The Right Compute

Enable rapid time-to-insight with Intelligent Video Analytics

Edge computing technologies are moving deep learning and other enterprise capabilities out of the data center and deriving timely insights from rich content like video right at the edge of the network.

The widespread adoption of artificial intelligence (AI) has dramatically changed how businesses in every industry analyze, visualize, and convert data into insights. Advancements in edge computing are moving AI-enabling technologies like deep learning out of the data center or cloud, improving the ability to derive timely insights from rich content like video, in real time and right at the edge of the network.

Here at HPE, we’re working with NVIDIA to provide solutions and expertise to help businesses deploy edge-to-core compute, storage, networking, data acquisition, control and management architectures that support AI capabilities, enable rapid time-to-insight, and drive better, faster decision-making. 

Tens of billions of sensors are being deployed worldwide to make every street, highway, park, airport, parking lot, and building more efficient. Arguably the richest sensor to help optimize the efficiency of public assets and maximize public safety is the video sensor.

Currently, most of this video data is never analyzed, and only a small portion is monitored by humans. Think security personnel in an operations room staring at banks of screens, trying to spot dangerous situations. But even the best-trained human operators lack the attention span and the ability to extract meaningful insight from video. Intelligent Video Analytics (IVA) powered by deep learning is the key to boosting time-to-insight from these video sensors, turning frames of video into valuable data with superhuman accuracy and at scale. This approach allows both computers and humans to be used where they are of the utmost value, with automated analytics providing rich, contextual, and prioritized information that aids humans in subjective decision-making.
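As a loose illustration of that division of labor, here is a minimal Python sketch of the "analytics prioritize, humans decide" pattern. The per-frame detector output format and the severity weights are hypothetical, not any specific HPE or NVIDIA API:

```python
# Minimal sketch: filter low-confidence detections, then rank the rest so
# operators see the most urgent events first. Event format and severity
# weights are illustrative assumptions.

SEVERITY = {"fight": 3, "overcrowding": 2, "loitering": 1}

def prioritize_alerts(frame_events, min_confidence=0.6):
    """Keep confident detections and rank them by severity, then confidence."""
    confident = [e for e in frame_events if e["confidence"] >= min_confidence]
    return sorted(
        confident,
        key=lambda e: (SEVERITY.get(e["label"], 0), e["confidence"]),
        reverse=True,
    )

events = [
    {"label": "loitering", "confidence": 0.9},
    {"label": "fight", "confidence": 0.7},
    {"label": "overcrowding", "confidence": 0.4},  # dropped: below threshold
]
alerts = prioritize_alerts(events)
print([a["label"] for a in alerts])  # → ['fight', 'loitering']
```

A real deployment would feed this kind of logic from a DNN-based detector; the point here is only the triage step that puts humans where judgment is needed.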

Until now, the preferred way to process any “heavy” unstructured data has been to bring it back to the data center or cloud, simply because those were the only places with the infrastructure needed to analyze it. But with the volume of video data generated every day, it has become very difficult to transfer all the raw data from the edge to the core cost-effectively, securely, and without unacceptable latency. Analyzing video right at the edge, while still using data center-class infrastructure, alleviates these concerns in a manner that can be quickly operationalized without having to retrain IT staff.
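A rough back-of-envelope calculation makes the bandwidth argument concrete. The camera count and bitrates below are illustrative assumptions, not measurements from any particular deployment:

```python
# Illustrative numbers only: 1,000 cameras streaming 1080p H.264 at ~4 Mbit/s
# each, versus sending compact event metadata (~16 kbit/s per camera) after
# the video has been analyzed at the edge.

cameras = 1000
raw_mbps_per_camera = 4.0          # typical 1080p H.264 stream
metadata_kbps_per_camera = 16.0    # ~2 KB/s of event records

raw_total_gbps = cameras * raw_mbps_per_camera / 1000
meta_total_mbps = cameras * metadata_kbps_per_camera / 1000

print(f"raw backhaul:     {raw_total_gbps:.1f} Gbit/s")   # → 4.0 Gbit/s
print(f"events-only:      {meta_total_mbps:.1f} Mbit/s")  # → 16.0 Mbit/s
print(f"reduction factor: {raw_total_gbps * 1000 / meta_total_mbps:.0f}x")
```

Even with generous assumptions, backhauling raw video is orders of magnitude more expensive than shipping only the insights extracted at the edge.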

Turbocharging video analysis applications with deep learning

The basic concepts of deep learning have been around for decades, and we’ve continually honed the capabilities of neural networks to ever more closely mimic human-like observation and decision-making. But only in the last few years have advances in high performance computing (HPC) accelerators provided the massive levels of compute needed to bring deep learning into the real world in a cost-effective and commercial-ready way. Powerful graphics processing units (GPUs) from NVIDIA deliver unprecedented levels of accuracy and analysis, and best of all do so in a compact and efficient manner suitable for deployment right at the edge, close to the source of the video data.

Traditionally, computer vision algorithms have been hand-coded using feature-based approaches, and accuracy depended heavily on the quality of that code and how often it was refreshed to incorporate new information. But the emergence of easy-to-use deep learning frameworks such as Caffe and TensorFlow, accelerating compilers and software development kits (SDKs) such as the DeepStream SDK from NVIDIA, and a wealth of high-quality labelled data (often freely available) is allowing developers to train deep neural networks (DNNs) far more easily, tuned to their specific use case and in less time than hand-coding the same capabilities. These improvements have finally made it economical and simple enough for organizations to adopt video analytics and deep learning techniques on a widespread basis.
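The shift from hand-coded rules to learned models is easiest to see in miniature. The toy below fits a single logistic unit from labelled examples using plain Python; real IVA models are deep CNNs trained in frameworks like TensorFlow or Caffe, but the principle of learning the rule from data instead of writing it by hand is the same. The "motion energy" feature and labels are invented for illustration:

```python
# Toy, framework-free illustration: instead of hand-coding a threshold,
# learn one from labelled examples via gradient descent on logistic loss.
import math

# Labelled examples: (normalized "motion energy", incident? 1/0) -- invented data.
data = [(x / 10.0, 1 if x >= 6 else 0) for x in range(11)]

w, b, lr = 0.0, 0.0, 1.0
for _ in range(2000):                # stochastic gradient descent
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # sigmoid prediction
        w -= lr * (p - y) * x                      # gradient of logistic loss
        b -= lr * (p - y)

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5

print(predict(0.2), predict(0.8))  # low vs high motion: expect False, True
```

A framework like TensorFlow automates exactly this loop, at the scale of millions of parameters and with GPU acceleration, which is what makes training detectors for a specific site or use case practical.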

IVA powered by deep learning is helping organizations realize faster time-to-insight and improve reaction times in several areas:

  • Public safety—By automatically analyzing camera footage in public areas, law enforcement officials can quickly be alerted to incidents that merit human attention, like overcrowding, fighting, or abnormal activity for a particular person or vehicle.
  • Emergency response—Vehicle accidents detected in real-time can immediately trigger alerts to emergency responders, reroute traffic, and automatically archive video footage for police evidence, all within moments of the incident.
  • Traffic monitoring—Road cameras can continuously collect real-time traffic data, helping city planners gain a full understanding of how and what kind of traffic is flowing at specific intersections, highways, bridges and tunnels, and make proactive changes to improve the commuter experience.
  • Retail analytics—Retailers can use in-store video feeds for more than just theft protection; for example, leveraging advanced analytics to determine shopper behavior, optimize product placement, determine pricing, improve recommendations, and increase personalization of offers.
  • Industrial vision—Manufacturers can use automated video analysis to detect complex product quality issues early, so they can take corrective action without impacting production or having to discard a large volume of defective products.

Moving AI capabilities out to the edge

Improving speed of reaction is critical for video footage, especially in safety, security, or incident prevention scenarios. This analysis has traditionally required sending every bit of video data back to the data center or cloud for processing, but that process often takes far too long, is cost-prohibitive due to network capacity limitations, and frequently exposes data to the security risks of a public network. What’s more, concerns over data sovereignty place a lot of constraints on where such information can be sent, especially when it’s of a personally identifiable nature. New edge computing innovations are now allowing this data to be processed at the edge of the network—right where it’s being generated—by computers that watch, listen, and understand what’s happening on screen at phenomenal speeds.

HPE Edgeline Converged Edge Systems are a line of compact and ruggedized systems that support modern HPC technologies to deliver accelerated video analytics at the network edge. A portfolio of products expressly designed to exist in the harsh environment of the intelligent edge, HPE Edgeline systems transplant enterprise capabilities that were once only possible in the data center out to the edge. These capabilities include industry standard x86 compute, accelerators such as GPUs or FPGAs, storage, networking, and remote systems management. They converge these information technology (IT) capabilities with operations technology (OT) capabilities such as data acquisition and control in the same system, so the entire chain from observation to analysis to action can be executed in the tightest possible loop.

NVIDIA Tesla GPU accelerators are the platform of choice for deep learning, and through support for NVIDIA GPU accelerators in HPE Edgeline systems, HPE is enabling customers to run deep learning inference right at the edge. HPE system architects worked closely with NVIDIA to qualify the NVIDIA Tesla P4 inference accelerator in the HPE Edgeline EL1000 and EL4000 Converged Edge Systems, with a special focus on ensuring they can operate in the harsh environment of the edge. With HPE Edgeline systems providing GPU-powered deep learning, it’s like a piece of your data center is fully functional at the edge, running the same analytics engine (not a “lite” version!) with real-time data.
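Conceptually, the edge inference pattern boils down to a loop like the sketch below: analyze every frame locally, and send only actionable events upstream. The `run_inference` function here is a stub standing in for a real GPU-accelerated detector (which in practice would be deployed through something like the DeepStream SDK); only the control flow is the point:

```python
# Conceptual edge loop: raw pixels stay on-site, only events leave.
# run_inference() is a stand-in for a real DNN forward pass on the GPU;
# the simulated stream flags an event once every 100 frames.

def run_inference(frame):
    # Stub detector: pretend an incident appears every 100th frame.
    if frame % 100 == 0:
        return [{"label": "vehicle_accident", "confidence": 0.92}]
    return []

def process_stream(frames, send_upstream):
    for frame in frames:
        detections = run_inference(frame)
        # Bandwidth-heavy video never leaves the edge; compact events do.
        for event in detections:
            if event["confidence"] > 0.8:
                send_upstream({"frame": frame, **event})

sent = []
process_stream(range(300), sent.append)
print(len(sent), "events forwarded from 300 frames")
```

The tight loop from observation to analysis to action happens entirely at the edge; the core only sees the distilled results, which is what keeps latency, cost, and exposure low.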

Regardless of application, HPE and NVIDIA have the perfect portfolio of products and solutions to deliver an entire edge-to-core architecture—from systems at the edge doing inference, to systems at the core doing training and non-urgent inference—with the storage, networking and management capabilities to tie it all together. These computing platforms are proving that AI and deep learning are indeed feasible at the edge, and that these innovative techniques can be used to extract timely insight from video footage more quickly, accurately, and efficiently.

To learn more about how IVA can help you achieve rapid time-to-insight, and for deeper information on HPE’s HPC solutions and NVIDIA GPU accelerators, check out @HPE_HPC or @NVIDIADC.

About the Author


As VP & GM for HPC, I lead worldwide business execution and commercial HPC focus for one of the fastest growing market segments in Hewlett Packard Enterprise’s Hybrid IT (HIT) group that includes the recent Cray acquisition and the HPE Apollo portfolio.