Servers: The Right Compute
Overcoming the Barriers to Supercomputing

Bill Mannel

Supercomputing enables data analysis that would be too time-consuming and too costly on today's standard computers. Find out how HPE Gen10 servers make it possible to discover insights into otherwise impenetrable information.

In 1965, IT industry legend Gordon Moore stated, “The complexity for minimum component costs has increased at a rate of roughly a factor of two per year.” This observation, later dubbed “Moore’s Law,” became synonymous with the exponential growth in computer processing speed and efficiency over time. Moore’s Law has since brought the computing industry to a place where transistors in silicon are now essentially free.

Over the last 20+ years, the number of transistors per processor has increased by a factor of 2,300x, but getting data in and out of processors is another story. Over the same time frame, the number of pins per package has increased by only a factor of 7.4x.

Although signaling rates have also grown by about 40x, input/output capability is struggling to keep pace with the growth in computational capability. Server manufacturers can pack far more compute power into a square centimeter of silicon, but applications won't see a comparable performance improvement without new approaches for moving greater amounts of data in and out of the processor package.
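To put those growth factors side by side, here is a quick back-of-the-envelope calculation. It is only a sketch built from the ratios quoted above; the assumption that aggregate off-package bandwidth scales with pins times per-pin signaling rate is a simplification, not measured vendor data.

# Back-of-the-envelope comparison of the growth factors quoted above.
transistor_growth = 2300   # transistors per processor, last 20+ years
pin_growth = 7.4           # pins per package over the same period
signaling_growth = 40      # per-pin signaling rate over the same period

# Assume aggregate off-package bandwidth scales with pins x signaling rate.
io_growth = pin_growth * signaling_growth       # ~296x

# How much faster compute has grown than the I/O feeding it.
imbalance = transistor_growth / io_growth       # ~7.8x

print(f"Off-package I/O growth: ~{io_growth:.0f}x")
print(f"Compute-to-I/O gap:     ~{imbalance:.1f}x")

Under these assumptions, compute capability has outrun the I/O available to feed it by nearly an order of magnitude, which is the gap the rest of this post is about.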

Leading the quest for high performance computing

This presents a big problem if you’re tasked with solving large scientific, engineering, or data analysis problems. You need to move data quickly—and that requires more powerful, more efficient machines backed by a scalable infrastructure.

Leading high performance computing (HPC) technologies can help you overcome this barrier to supercomputing. Such solutions empower innovation at any scale and give you the power to differentiate your business, drive research, and find answers in real time.

HPE, with its Gen10 solutions, is one of the key players at the forefront of next-generation IT innovation. HPE received a U.S. Department of Energy award to develop an exascale prototype design, and our expertise in HPC is overcoming significant constraints in systems architecture, component technology, energy efficiency, size, and cost. We’re helping the industry achieve the future state of supercomputing!

The importance of supercomputing

Why is supercomputing so important? It enables problem-solving and data analysis that would otherwise be simply impossible, too time-consuming, or too costly with standard computers. A supercomputer can discover insights into vast amounts of otherwise impenetrable information.

As part of the HPE HPC program, we are building more powerful supercomputers than any other company in the world. One of our primary goals is to help the U.S. maintain its position as the world leader in supercomputing. With this goal in mind, we are working to deliver exponentially higher performance and efficiency than today's supercomputers offer.

The ideal solution for high performance computing

HPE makes it possible to focus compute resources on data analytics problems without the cost of a full-scale supercomputer. Leading the way among our HPC solutions are the HPE Gen10 servers, which are ideal for IT teams that need to digitally transform their businesses by delivering greater levels of agility, security, and economic control:

  • More IT Agility: IT can throttle CPUs up and down quickly, according to each application's need (see the sketch after this list). Servers can also scale quickly with persistent memory that now expands to terabyte capacity, while automating and simplifying application deployments.
  • More IT Security: Gen10 servers feature a silicon fingerprint that prevents the server from booting unless the firmware matches that fingerprint. The servers also verify firmware security every time they boot, and if any tampering has occurred, the firmware rolls back to its original state.
  • More Economic Control: Flexible capacity and consumption-based IT allow businesses to pay only for the compute resources they consume, while buffer resources make it easy to scale those resources up and down. IT can also leverage capacity management and utilization tracking tools to prevent the over-provisioning of compute resources.
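As a concrete illustration of the per-application CPU throttling mentioned in the first bullet, the sketch below uses the generic Linux cpufreq sysfs interface rather than any HPE-specific tooling. It assumes a Linux host with cpufreq enabled and root privileges; the governors available vary by kernel and platform.

"""Sketch: per-CPU frequency throttling via Linux's cpufreq sysfs interface.
Assumes a Linux host with cpufreq enabled and root privileges. Illustrative
only, not an HPE tool."""
from pathlib import Path

CPUFREQ = "/sys/devices/system/cpu/cpu{n}/cpufreq/{knob}"

def set_governor(cpu: int, governor: str) -> None:
    """Switch one CPU's scaling governor, e.g. 'performance' or 'powersave'."""
    Path(CPUFREQ.format(n=cpu, knob="scaling_governor")).write_text(governor)

def current_freq_khz(cpu: int) -> int:
    """Read one CPU's current clock frequency in kHz."""
    return int(Path(CPUFREQ.format(n=cpu, knob="scaling_cur_freq")).read_text())

if __name__ == "__main__":
    set_governor(0, "performance")   # throttle cpu0 up for a demanding job
    print(f"cpu0 running at {current_freq_khz(0)} kHz")

In practice, a job scheduler or a server's management controller would drive these knobs per application rather than a hand-run script, but the underlying mechanism is the same.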

Hybrid IT environments built on HPE Gen10 technologies deliver these benefits and enable businesses to accelerate their transformation to an HPC infrastructure. HPE streamlines this process by providing the right mix of IT environments to create a superior supercomputing experience.

To find out more about the unique capabilities that HPE provides to drive innovation into the future, check out this white paper: Exascale: A Race to the Future of HPC.


Bill Mannel
VP & GM - HPC & AI Segment Solutions

Twitter: @Bill_Mannel
LinkedIn: Bill-Mannel


About the Author

Bill Mannel

As the Vice President and General Manager of HPC and AI Segment Solutions in the Data Center Infrastructure Group, I lead worldwide business and portfolio strategy and execution for the fastest-growing market segments in the group, which include the recent SGI acquisition and the HPE Apollo portfolio.
