3 HPC cluster myths you should stop believing today
Toss out what you thought you knew about HPC clusters — HPE changes the story with new offerings and services
We are living in the Age of Insight — a new era defined by insights and discoveries in science, engineering, and business that benefit all and elevate the well-being of every human on the planet.
Data is at the core of this new era. And there's no end in sight. Continuous explosive data growth is forcing us to build larger and larger models, both math models for simulation and data models for analytics. Combine unstoppable data growth with larger models and it gives rise to another need — more powerful computational systems.
High-performance computing (HPC) systems are the backbone of modeling and simulation, high-end big data analytics, and AI workloads. They're the tool that can convert complex data into digital models that help us ultimately understand the world around us.
HPC systems come in two flavors:
- Custom-engineered supercomputers like the HPE Cray EX supercomputer
Engineered for ultimate performance, these supercomputers require a specific combination of liquid-cooled, high-density compute blades, operating system, high-speed network, and high-performance storage. They are mainly used by government-funded organizations around the world that tackle the world's largest computational challenges with extreme model and data sizes.
- HPC clusters that are built on rack servers like HPE Apollo systems
While they don't reach the extreme levels of performance density and scalability of supercomputers, HPC clusters were instrumental to the first wave of HPC democratization in the 1990s. HPC clusters gave organizations that could either not afford a supercomputer or could not fit the physical dimensions of a supercomputer into their data center access to the tools needed to drive innovation and insights through advanced computation.
As the demands on them have increased, HPC clusters have evolved with them. In this blog, we'll look at three ideas about this important technology that are no longer true. Here are the three HPC "myths":
Myth 1: In order to get "best-of-breed" technology on every layer of the stack, you need to build multi-vendor HPC clusters.
In the past, HPC users who wanted "best-of-breed" technology on every layer of the technology stack did need to build their clusters with multiple vendors: vendor A for cluster management software, vendor B for the compute nodes, and vendor C for the parallel HPC storage.
That's no longer true. HPE has assembled "best-of-breed" technology on every layer of the cluster technology stack. This short video and this infographic illustrate the customer problems we solve by providing "one hand to shake from procurement to support" for the whole HPC cluster technology stack and the "best-of-breed" products that underpin it.
Myth 2: In order to start your migration to the next-gen HPC cluster to accelerate innovation, you need to wait until you have your full budget assigned.
Again, we understand why people still believe this one. Organizations used to have to wait on the refresh of their “innovation systems” until they had secured the internal approval to commit the full funding.
Now, HPE Financial Services has made the HPE Accelerated Migration program available for HPC clusters. With this program, you can fast-track your HPC cluster refresh with minimal disruption to your current environment. Unlock the hidden value in your existing HPC compute and HPC storage assets as you transition to your next-generation HPC clusters. Shift existing, owned IT assets to a flexible usage payment model during the transition and free up cash for new HPC cluster investment. In the last four years, HPE Financial Services has infused $1.3 billion back into customer budgets, creating incremental investment capacity. HPE Accelerated Migration helps you access the value in your legacy equipment and move forward sooner.
Myth 3: In order to get a true self-service cloud experience for your engineers and scientists, you need to go to the public cloud.
For many HPC environments, the classic HPC user experience is the right model. But for organizations that want or need to provide a true cloud experience (see the definition from NIST*) to their researchers and scientists, the public cloud used to be the only option.
With the recent announcement of HPE GreenLake cloud services for HPC any organization can consume advanced computing workloads with fully managed, pre-bundled HPC cloud services in their own data center or — if the organization has an “asset-light” strategy — in a colocation environment. It’s the HPC cloud that comes to you (or near you). And if you desire a hybrid or multi-cloud environment, “your” managed private HPC cloud can be tiered into the public cloud provider of your choice with HPE Advisory and Professional Services.
By offering HPC-as-a-Service, we are enabling the second wave of democratization of HPC by removing barriers to access such as system complexity, unique facilities to house them, system costs, operating costs for power and cooling, and any need for highly skilled and knowledgeable HPC technical staff.
Any organization can now have on-demand access to state-of-the-art methods of computational innovation and insight enabling them to make strides in scientific research and engineering, achieve bold new discoveries, create smarter products and better customer experiences, and thrive in a digital-first world.
At HPE, we are proud to provide the expertise, tools, and services that help the world's brightest minds make that vision a reality.
Hewlett Packard Enterprise
Uli leads the product marketing function for high performance computing (HPC) storage. He joined HPE in January 2020 as part of the Cray acquisition. Prior to Cray, Uli held leadership roles in marketing, sales enablement, and sales at Seagate, Brocade Communications, and IBM.