
Can LS-DYNA scale any higher?

Cray, Livermore Software Technology Corporation, the National Center for Supercomputing Applications, and Rolls-Royce are exploring the future of implicit finite element analyses of large-scale models using LS-DYNA and Cray supercomputing technology.


Processing and memory bottlenecks can run, but they can't hide. Not indefinitely, at least. And especially not when four technology leaders combine efforts against them.

Cray, Livermore Software Technology Corporation (LSTC), the National Center for Supercomputing Applications (NCSA), and Rolls-Royce are partnering on an ongoing project to explore the future of implicit finite element analyses of large-scale models using LS-DYNA, a multiphysics simulation software package, and Cray supercomputing technology. As the scale of finite element models, and of the systems they run on, increases, so do scaling issues and the time it takes to run a model.

Understanding that, ultimately, only time and resource constraints limit the size and complexity of implicit analyses (and, with them, the insights those analyses can offer), Cray, LSTC, NCSA, and Rolls-Royce are focused on identifying what constrains these models as they scale and applying those findings to enhance LS-DYNA.

They're making some surprising discoveries

For the project, Rolls-Royce created a family of dummy engine models using solid elements and as many as 200 million degrees of freedom. NCSA then ran the models with specialized LS-DYNA variants on "Blue Waters," its Cray supercomputer system. The biggest challenge uncovered so far is the need to reorder extremely large sparse matrices to reduce factorization storage and operations. This discovery has led to broader changes in LS-DYNA as well as the exposure and analysis of some unexpected bugs in the software.
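To get an intuition for why reordering matters, here is a small, generic Python sketch. It is not LS-DYNA code, and SciPy's reverse Cuthill-McKee ordering is only a convenient stand-in for the fill-reducing reorderings discussed in the white paper. The point it illustrates: factorizing a sparse, stiffness-like matrix in an arbitrary node numbering produces far more fill-in, and therefore more factorization storage and floating-point work, than factorizing the same matrix after a suitable permutation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import cholesky
from scipy.sparse.csgraph import reverse_cuthill_mckee

# 2-D grid Laplacian as a tiny stand-in for a stiffness matrix
# (the real engine models have up to ~200 million degrees of freedom).
n = 30
lap_1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(lap_1d, lap_1d).tocsr()   # 900 x 900, symmetric positive definite

def factor_nonzeros(matrix):
    """Nonzeros in the (dense) Cholesky factor -- a proxy for factorization storage."""
    L = cholesky(matrix.toarray(), lower=True)
    return int(np.count_nonzero(np.abs(L) > 1e-12))

# Arbitrary node numbering, as a mesh generator might produce.
rng = np.random.default_rng(0)
shuffle = rng.permutation(A.shape[0])
A_arbitrary = (A[shuffle][:, shuffle]).tocsr()

# Fill-reducing (bandwidth-reducing) permutation.
perm = reverse_cuthill_mckee(A_arbitrary, symmetric_mode=True)
A_reordered = A_arbitrary[perm][:, perm]

print("factor nonzeros, arbitrary ordering:", factor_nonzeros(A_arbitrary))
print("factor nonzeros, RCM ordering      :", factor_nonzeros(A_reordered))
```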

After some initial adjustments to LS-DYNA, including integrating LS-GPart (an alternative nested dissection strategy based on half-level sets), the group ran a small dummy engine model with 105 million degrees of freedom to begin analyzing the software's scaling. Access to Blue Waters' thousands of cores represented a rare opportunity to observe performance at this scale, and to optimize for it. The researchers did both, uncovering three notable scaling bottlenecks: input processing by MPI rank 0, symbolic factorization, and constraint processing.
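For readers unfamiliar with nested dissection, the toy Python sketch below shows the general idea on a small structured grid: recursively split the unknowns with a separator and number the separator last, so the two halves can be factorized independently and fill-in stays confined to each half plus the separator. This is only an illustrative sketch; LS-GPart builds its separators differently (from half-level sets) and operates on general distributed graphs.

```python
def nested_dissection(rows, cols):
    """Return grid points (row, col) in a nested-dissection elimination order."""
    if len(rows) <= 2 or len(cols) <= 2:
        # Small block: eliminate everything directly.
        return [(r, c) for r in rows for c in cols]
    if len(rows) >= len(cols):
        mid = len(rows) // 2
        sep = [(rows[mid], c) for c in cols]              # horizontal separator
        left = nested_dissection(rows[:mid], cols)
        right = nested_dissection(rows[mid + 1:], cols)
    else:
        mid = len(cols) // 2
        sep = [(r, cols[mid]) for r in rows]              # vertical separator
        left = nested_dissection(rows, cols[:mid])
        right = nested_dissection(rows, cols[mid + 1:])
    # Eliminate the two independent halves first, the separator last.
    return left + right + sep

order = nested_dissection(list(range(8)), list(range(8)))
print(order[:8], "...", order[-8:])   # the top-level separator appears at the end
```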

The group has also run one large engine model simulation. On a Cray system with newer nodes containing 192 GB of memory each, the large model ran in 12 hours on 64 nodes.
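As a rough point of reference, simple arithmetic (not a figure from the white paper) puts the aggregate memory of that configuration at about 12 TB:

```python
# Back-of-the-envelope aggregate memory for the run described above:
# 64 nodes x 192 GB per node.
nodes, gb_per_node = 64, 192
total_gb = nodes * gb_per_node
print(f"aggregate memory: {total_gb} GB (~{total_gb / 1024:.0f} TB)")
```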

See how the collaborators reached their conclusions. Read the full white paper, "Increasing the Scale of LS-DYNA Implicit Analysis."


This blog was originally published on cray.com and has been updated and republished here on HPE's Advantage EX blog.



Ting Ting Zhu
Hewlett Packard Enterprise

twitter.com/hpe_hpc
linkedin.com/showcase/hpe-ai/
hpe.com/info/hpc

About the Author

TingTingZhu

Ting-Ting is a performance engineer who has worked in the HPC field for almost 30 years. She joined HPE in January 2020 as part of the Cray acquisition. Her primary focus is performance tuning, optimization, and benchmarking of structural mechanics and dynamics codes. Ting-Ting holds a PhD in mechanical engineering from the University of Texas at Austin.