Can LS-DYNA scale any higher?
Cray, Livermore Software Technology Corporation, the National Center for Supercomputing Applications, and Rolls-Royce are exploring the future of implicit finite element analyses of large-scale models using LS-DYNA and Cray supercomputing technology.
Processing and memory bottlenecks can run, but they can't hide. Not indefinitely, at least. And especially not when four technology leaders combine efforts against them.
Cray, Livermore Software Technology Corporation (LSTC), the National Center for Supercomputing Applications (NCSA), and Rolls-Royce are partnering on an ongoing project to explore the future of implicit finite element analyses of large-scale models using LS-DYNA, a multiphysics simulation software package, and Cray supercomputing technology. As the scale of finite element models, and of the systems they run on, increases, so do scaling issues and the time it takes to run a model.
Understanding that, ultimately, only time and resource constraints limit the size and complexity of implicit analyses (and, consequently, the insights they offer), Cray, LSTC, NCSA, and Rolls-Royce are focusing on identifying what constrains these models as they scale and applying their learnings to enhancing LS-DYNA.
They're making some surprising discoveries
For the project, Rolls-Royce created a family of dummy engine models using solid elements and as many as 200 million degrees of freedom. NCSA then ran the models with specialized LS-DYNA variants on "Blue Waters," its Cray supercomputer system. The biggest challenge uncovered so far is the need to reorder extremely large sparse matrices to reduce factorization storage and operations. This discovery has led to broader changes in LS-DYNA, as well as the exposure and analysis of some unexpected bugs in the software.
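To see why reordering matters, here is a minimal SciPy sketch, not LS-DYNA's actual solver (which uses LS-GPart for its nested dissection ordering): it factors a small 2-D Laplacian, a stand-in for a finite element stiffness matrix, with and without a fill-reducing ordering and compares the number of nonzeros in the factors.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Build a 2-D Laplacian on an n-by-n grid, a stand-in for a
# (vastly smaller) finite element stiffness matrix.
n = 40
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()  # n*n unknowns

# Factor once with no reordering, once with a fill-reducing
# column ordering (COLAMD here; LS-DYNA uses nested dissection).
lu_nat = splu(A, permc_spec="NATURAL")
lu_amd = splu(A, permc_spec="COLAMD")

fill_nat = lu_nat.L.nnz + lu_nat.U.nnz
fill_amd = lu_amd.L.nnz + lu_amd.U.nnz
print(fill_nat, fill_amd)  # the reordered factorization is much sparser
```

Even on this toy problem the reordered factors carry far fewer nonzeros; at hundreds of millions of degrees of freedom, the ordering decides whether the factorization fits in memory at all.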
After some initial adjustments to LS-DYNA, including the integration of LS-GPart (an alternative nested dissection strategy based on half-level sets), the group ran a small dummy engine model with 105 million degrees of freedom to begin analyzing the software's scaling. Access to Blue Waters' thousands of cores was a rare opportunity both to observe performance at this scale and to optimize for it. The researchers did both, uncovering three interesting scaling bottlenecks: input processing by MPI rank 0, symbolic factorization, and constraint processing.
The group has also run one large engine model simulation. Using a Cray system, this time with newer nodes containing 192 GB of memory each, the large model ran in 12 hours on 64 nodes.
See how the collaborators reached their conclusions. Read the full white paper, "Increasing the Scale of LS-DYNA Implicit Analysis."
This blog was originally published on cray.com and has been updated and republished here on HPE's Advantage EX blog.
Ting Ting Zhu
Hewlett Packard Enterprise
twitter.com/hpe_hpc
linkedin.com/showcase/hpe-ai/
hpe.com/info/hpc
Ting-Ting is a performance engineer who has worked in the HPC field for almost 30 years. She joined HPE in January 2020 as part of the Cray acquisition. Her primary focus is performance tuning, optimization, and benchmarking of structural mechanics and dynamics codes. Ting-Ting holds a PhD in mechanical engineering from the University of Texas at Austin.