Servers & Systems: The Right Compute
BillMannel

Why Field Programmable Gate Arrays (FPGAs) are the versatile accelerator

Learn how HPE and Intel deliver Field Programmable Gate Array (FPGA) solutions with the performance, adaptability, and power efficiency to accelerate business-critical workloads.

The invention and development of Central Processing Units (CPUs) have certainly played pivotal roles in the trajectory of human history. It is fair to say that Intel's development of the CPU has led to the democratization of computing and enabled countless innovations, large and small.

As with all things, further specialization is possible. Acceleration of certain workloads may be achieved through continued specialization of processing units. Graphics Processing Units (GPUs), for example, were originally created to accelerate graphics workloads. GPUs are now being used for other highly parallel tasks, such as bitcoin mining.

For clarity, let's compare CPUs and GPUs: A CPU is a general-purpose processor designed to run the broad range of operations an entire system needs, such as I/O and virtual memory management. A GPU is designed more narrowly, for highly repetitive tasks that can be heavily parallelized. With that distinction in mind, let's turn to Field Programmable Gate Arrays.
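To make the distinction concrete, here is a minimal sketch in C of the kind of work that suits an accelerator: a SAXPY loop in which every iteration performs the same small operation independently. The loop is purely illustrative and is not tied to any particular HPE or Intel product.

```c
#include <stdio.h>

#define N 8

/* SAXPY (y = a*x + y): the same small operation repeated over every element,
 * with no dependence between iterations -- exactly the kind of work a GPU
 * (or an FPGA pipeline) finishes far faster than a general-purpose CPU. */
int main(void) {
    float a = 2.0f;
    float x[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    float y[N] = {8, 7, 6, 5, 4, 3, 2, 1};

    for (int i = 0; i < N; i++)   /* each iteration is independent */
        y[i] = a * x[i] + y[i];

    for (int i = 0; i < N; i++)
        printf("%.1f ", y[i]);
    printf("\n");
    return 0;
}
```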

Focusing on Field Programmable Gate Arrays (FPGAs)

While GPUs are good at what they do, their strengths are biased toward very particular types of processing. Another, more versatile type of accelerator, the Field Programmable Gate Array (FPGA), has seen continued development by Intel and offers customizable, gate-array-based acceleration for many different functions. In fact, an FPGA is designed to be configured by a customer or designer after manufacturing; hence it is "field-programmable." An FPGA offers high I/O bandwidth plus fine-grained, flexible, custom parallelism, allowing it to be programmed for many different types of workloads, including Big Data analytics, financial services, and deep learning. If a GPU is something like a hammer, an FPGA is like Doctor Who's sonic screwdriver: an adaptable tool that can be used to solve many different types of problems.

HPE has teamed up with Intel to offer FPGA solutions based on HPE ProLiant DL Gen10 servers, including the HPE ProLiant DL360 and DL380 server platforms with Intel® Arria® 10 GX FPGAs. The HPE ProLiant DL360 offers a 1U dual-processor dense compute server with exceptional flexibility and expandability, while the HPE ProLiant DL380 provides a 2U dual-processor server with world-class performance and versatility for multiple workloads. HPE servers also offer a unique Silicon Root of Trust to protect against firmware-based cybersecurity threats. The combination of HPE servers with Intel FPGAs provides flexible, industrial-strength compute solutions that can be tuned for specific workloads.

One of the traditional difficulties with FPGAs has been the specialized nature of the programming required. In many cases, this has put FPGA technology out of reach for data scientists and application developers. Intel has developed the Acceleration Stack for Intel Xeon CPU with FPGAs to provide a common developer interface for both application and accelerator function developers; it includes drivers, Application Programming Interfaces (APIs), and an FPGA Interface Manager. Together with acceleration libraries and development tools, Intel's Acceleration Stack lets developers focus on the unique value-add of their solutions.

Intel has also open-sourced the Open Programmable Acceleration Engine (OPAE) technology, a software programming layer that provides a consistent API across Intel FPGA platforms. It is designed for minimal software overhead and latency, while providing an abstraction for hardware-specific FPGA resource details. OPAE is the default software stack for the Intel® Xeon® processor with both integrated and discrete FPGA devices.
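To give a feel for what that abstraction looks like, here is a minimal sketch in C using the open-source OPAE C API to find and open an accelerator function. It follows the enumerate/open pattern from the OPAE documentation; exact names and behavior can vary between OPAE releases, so treat it as illustrative rather than production code.

```c
#include <stdio.h>
#include <opae/fpga.h>

int main(void) {
    fpga_properties filter = NULL;
    fpga_token token;
    fpga_handle handle;
    uint32_t num_matches = 0;

    /* Build a filter that matches any accelerator (AFU) on the system. */
    if (fpgaGetProperties(NULL, &filter) != FPGA_OK)
        return 1;
    fpgaPropertiesSetObjectType(filter, FPGA_ACCELERATOR);

    /* Enumerate matching devices; keep at most one token. */
    fpgaEnumerate(&filter, 1, &token, 1, &num_matches);
    if (num_matches == 0) {
        fprintf(stderr, "no FPGA accelerator found\n");
        fpgaDestroyProperties(&filter);
        return 1;
    }

    /* Open the accelerator; real code would now map MMIO registers
     * and share buffers with the accelerator function. */
    if (fpgaOpen(token, &handle, 0) == FPGA_OK) {
        printf("accelerator opened\n");
        fpgaClose(handle);
    }

    fpgaDestroyToken(&token);
    fpgaDestroyProperties(&filter);
    return 0;
}
```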

How simplifying the programming for FPGAs plays directly to their strengths

An FPGA can be reprogrammed and updated with new algorithms for different workloads. This flexibility allows a single FPGA to accelerate many different workloads efficiently and to support future applications without a change to the hardware. For instance, an FPGA could handle one workload during the morning shift and a different workload during the evening shift. Programmability also allows FPGAs to keep pace with evolving standards, such as networking protocols, and enables updates to maintain compliance when a standard is finalized, again without having to respin the hardware.
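As a sketch of what that morning/evening swap might look like in software, the snippet below loads a partial-reconfiguration image (a "green bitstream", .gbs file) into an accelerator slot through OPAE's management interface. The file names morning.gbs and evening.gbs are hypothetical, the slot number and error handling are simplified, and the use of fpgaReconfigureSlot assumes the management API is available in your OPAE build; check the headers for your release.

```c
#include <stdio.h>
#include <stdlib.h>
#include <opae/fpga.h>   /* assumes the management API (fpgaReconfigureSlot) is pulled in here */

/* Load a green bitstream (AFU image) into partial-reconfiguration slot 0.
 * 'device' must be an open handle to the FPGA_DEVICE object, not the accelerator. */
static int load_afu(fpga_handle device, const char *gbs_path) {
    FILE *f = fopen(gbs_path, "rb");
    if (!f) return -1;

    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);

    uint8_t *buf = malloc((size_t)len);
    if (!buf || fread(buf, 1, (size_t)len, f) != (size_t)len) {
        free(buf);
        fclose(f);
        return -1;
    }
    fclose(f);

    /* Swap the accelerator logic in place: no reboot, no hardware change. */
    fpga_result res = fpgaReconfigureSlot(device, 0, buf, (size_t)len, 0);
    free(buf);
    return (res == FPGA_OK) ? 0 : -1;
}

/* Usage idea: load_afu(device, "morning.gbs") at shift start,
 * load_afu(device, "evening.gbs") when the workload changes. */
```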

An FPGA can also switch between multiple programs in real time to adapt to changing workloads. One example is the Bigstream Acceleration solution, which accelerates Spark performance by pairing its software with an Intel FPGA. Bigstream reconfigures the FPGA to best fit the dataflow being processed, resulting in up to 8x performance acceleration for end-to-end applications, with the potential for higher acceleration in future releases.* This adaptability effectively makes FPGAs largely future-proof, while also enhancing the ROI of the servers that use them by extending their lifecycle.

How performance gains enabled by FPGAs increase productivity and boost ROI

Data demands on IT are continually increasing, and relational databases such as Microsoft SQL Server continue to be the backbone of enterprise-class data analytics. Swarm64 offers an innovative add-on to PostgreSQL, the S64 Data Accelerator for PostgreSQL (S64DA), which delivers up to 4x data warehouse acceleration with no changes to the BI application. The S64DA solution is designed to significantly increase data processing and analytics performance for demanding workloads, using Intel FPGAs to overcome the latency and bandwidth limitations of storage accessed over a network, whether locally or from the cloud. Intel FPGAs can connect directly to networks, removing the need for data to pass through the processors and reducing overall system latency. Leveraging the highly parallel nature of FPGAs with optimized, workload-specific programming provides productivity gains for high-value workloads.
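The "no changes to the BI application" point is worth illustrating: because S64DA sits underneath PostgreSQL, client code keeps issuing ordinary SQL. The sketch below is a plain libpq client in C; the connection string, table, and query are made-up placeholders, and nothing in the code is specific to S64DA, which is exactly the point.

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void) {
    /* Placeholder connection string for an illustrative data warehouse. */
    PGconn *conn = PQconnectdb("dbname=warehouse");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* The analytic SQL is unchanged; any acceleration happens below PostgreSQL. */
    PGresult *res = PQexec(conn,
        "SELECT region, sum(revenue) FROM sales GROUP BY region");
    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        for (int i = 0; i < PQntuples(res); i++)
            printf("%s: %s\n", PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```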

How partners and solutions are leveraging FPGAs

Financial industry

Another example of how Intel FPGAs increase productivity comes from Levyx and its Financial Risk Analytics Acceleration solution. By optimizing the performance of the underlying storage, Levyx offloads compute-intensive functions directly onto FPGAs, speeding up large-scale operations that were previously time- and resource-intensive, such as backtesting of stock and options trading algorithms at financial institutions. Backtesting is a highly parallel, data- and compute-intensive simulation workload that runs over multi-terabyte datasets; it evaluates thousands of trading models to find those that have been historically profitable and to determine the trading practices most likely to maximize current and future profitability.

To stay ahead of the competition, the models must continually evolve and be rapidly evaluated for algorithmic trading success. The efficacy of these models can have a significant impact on trading revenues at capital markets firms, including money-center banks, large hedge funds and trading exchanges. Levyx effectively allows critical backtesting functions to be performed 851% faster than competing solutions.** With these low-latency, compute-intensive workloads and massive data sets, the performance, flexibility and programmability of Intel FPGAs have a direct impact on the productivity and revenue of Levyx customers.
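To see why backtesting maps so well onto parallel hardware, consider the toy sketch below: each trading model is scored independently against the same price history, so the sweep across models parallelizes with no coordination between iterations. The model logic and data are placeholders (a real backtest walks terabytes of tick data), and this is generic OpenMP C, not the Levyx implementation.

```c
#include <stdio.h>

#define NUM_MODELS 10000
#define NUM_TICKS  1000000

/* Placeholder: score one trading model against a historical price series. */
static double backtest_model(int model_id, const double *prices, int n) {
    double pnl = 0.0;
    for (int t = 1; t < n; t++)
        pnl += (model_id % 2 ? 1.0 : -1.0) * (prices[t] - prices[t - 1]);
    return pnl;
}

int main(void) {
    static double prices[NUM_TICKS];
    for (int t = 0; t < NUM_TICKS; t++)
        prices[t] = 100.0 + (t % 97) * 0.01;   /* synthetic price history */

    double best = -1e300;
    int best_model = -1;

    /* Each model is independent, so the sweep parallelizes cleanly -- the same
     * property that lets an FPGA pipeline it in hardware.
     * Compile with -fopenmp; without it the pragma is ignored and the loop runs serially. */
    #pragma omp parallel for
    for (int m = 0; m < NUM_MODELS; m++) {
        double pnl = backtest_model(m, prices, NUM_TICKS);
        #pragma omp critical
        if (pnl > best) { best = pnl; best_model = m; }
    }

    printf("best model %d, pnl %.2f\n", best_model, best);
    return 0;
}
```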

Power savings

Because FPGAs can be optimized for specific workloads, the resulting efficiency leads to lower power consumption. This allows FPGAs to be added to existing infrastructure to increase performance while minimizing the extra space and power required. Because lower power consumption also means less heat in the data center, additional savings come from the reduced cooling needed for a given performance level. When these savings are multiplied across an entire data center, with attendant reductions in power and cooling costs, FPGAs clearly help to minimize TCO by reducing OPEX.

AI and deep learning

In the rapidly developing field of AI and deep learning, FPGAs are being recognized as a solution for inferencing, which is essentially the application of deep learning training. In the training cycle, a neural network model is "taught" how to recognize a pattern, such as a cat. Inferencing occurs when the trained network is shown an image and signals whether or not the image is a cat. In other words, training develops the model, while inferencing is the runtime application of the model.

Inferencing requires low-latency performance, efficiency, and flexibility. FPGAs offer a highly parallel architecture coupled with high-bandwidth memory to provide the low-latency performance required for real-time inferencing. FPGAs effectively implement software algorithms in hardware for optimized performance, while also providing the energy efficiency to minimize deployment power requirements. In general, inferencing applies a trained model to a specific task, such as facial recognition or language translation, which maps well to the strengths of FPGAs.
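As a concrete (if tiny) illustration of what inferencing boils down to, the sketch below applies a trained fully connected layer to one input: a regular grid of multiply-accumulate operations followed by a simple activation. The weights and sizes are made up; the point is that this fixed, repetitive structure is what an FPGA can lay out as a dedicated hardware pipeline.

```c
#include <stdio.h>

#define IN  4
#define OUT 2

/* Toy fully connected layer: y = max(0, W*x + b).
 * The nested multiply-accumulate loop is the repetitive structure that
 * an FPGA can implement as a deep, fixed-function pipeline. */
static void dense(const float W[OUT][IN], const float b[OUT],
                  const float x[IN], float y[OUT]) {
    for (int o = 0; o < OUT; o++) {
        float acc = b[o];
        for (int i = 0; i < IN; i++)
            acc += W[o][i] * x[i];          /* multiply-accumulate */
        y[o] = acc > 0.0f ? acc : 0.0f;     /* ReLU activation */
    }
}

int main(void) {
    /* Made-up "trained" weights and one input sample. */
    const float W[OUT][IN] = {{ 0.5f, -0.2f, 0.1f,  0.3f},
                              {-0.4f,  0.6f, 0.2f, -0.1f}};
    const float b[OUT] = {0.1f, -0.2f};
    const float x[IN]  = {1.0f, 0.5f, -1.0f, 2.0f};
    float y[OUT];

    dense(W, b, x, y);   /* inference: apply the trained weights to new data */
    printf("scores: %.3f %.3f\n", y[0], y[1]);
    return 0;
}
```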

Accelerating business-critical workloads with FPGA solutions

The collaboration between HPE and Intel provides industrial-strength FPGA solutions that accelerate business-critical workloads. The supporting software ecosystem is developing rapidly enough to keep adding value for customers across an ever-expanding range of use cases. The performance, adaptability, and power efficiency of FPGAs serve to increase productivity and drive innovation, with rapid ROI and minimized TCO.

Learn more about FPGA solutions

For further information, please visit the Intel FPGA Acceleration Hub.

See HPE FPGA solutions at HPE-Cast Japan, the HPE HPC and AI Forum held on September 7. (Note: The HPE-Cast web page is in Japanese.)

* TPC-DS benchmark per Spark/SQL Business Intelligence Benchmarks. Compared to open-source Apache Spark running on an Intel® Xeon® CPU E5-2650 v3 @ 2.30GHz.

**  https://newsroom.intel.com/editorials/intel-fpgas-accelerating-future/


Bill Mannel
VP & GM - HPC & AI Segment Solutions
Hewlett Packard Enterprise

Twitter: @Bill_Mannel
LinkedIn: Bill-Mannel

 

 

About the Author

BillMannel

As VP & GM for HPC, I lead worldwide business execution and the commercial HPC focus for one of the fastest-growing market segments in Hewlett Packard Enterprise's Hybrid IT (HIT) group, which includes the recent Cray acquisition and the HPE Apollo portfolio.