Transitioning From SMP To MPP-DYNA3D For The Simulation Of Large Thermal-structural Implicit Problems
The LS-DYNA family of codes has been used at AWE for many years. For a long time the codes were run on our shared memory platforms (SMPs) to carry out, amongst others, implicit structural and coupled thermal-structural analyses. Over time processor speeds have continually increased and larger memory has become available at reduced cost. This has led to an increase in model size as meshes have been refined to give better definition and realism of the problems under investigation.

However, the simulation of the long-term response of engineering structures poses special difficulties when large models encompassing non-linear behaviour need to be analysed. These non-linearities can arise through sliding interfaces and, more commonly, through the complex constitutive responses of non-traditional fabrication materials, such as foams and explosives, that act as structural, load-bearing components. Unlike explicit analysis, in implicit problems the equations cannot be decoupled from each other, so implicit simulations immediately make large demands on the memory required to solve the problem in-core, and these requirements increase rapidly as the model is refined. For the most complex analyses the turnaround times can grow from weeks to potentially months as model size increases. This problem is being addressed by re-writing implicit solvers to run in parallel on distributed memory platforms (DMPs). Although these developments have helped to reduce turnaround times, further work is required to enhance the scalability of the codes.

Shared memory and MPI versions of LS-DYNA have been used at AWE to investigate the transition from SMPs to DMPs for the solution of large, contact-dominated thermal-structural implicit problems. The hybrid version of these codes was also used in some simulations, although this is early work at AWE. This paper reports our findings and examines the influence of code characteristics on computing platform requirements. The significant reduction in turnaround time realised by using the MPI version instead of the SMP version for a major test problem is presented, and the scaling characteristics of the MPI and hybrid versions of LS-DYNA for this problem are shown.
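As a brief illustration of why the implicit equations cannot be decoupled (this sketch is not part of the original abstract): each implicit step requires the solution of a coupled linear system over all unknowns, whereas explicit integration advances each degree of freedom independently when a lumped mass matrix is used.

$$
\underbrace{\mathbf{K}_T\,\Delta\mathbf{u} = \mathbf{r}}_{\text{implicit: coupled global solve}}
\qquad\qquad
\underbrace{\mathbf{a} = \mathbf{M}^{-1}\left(\mathbf{f}_{\mathrm{ext}} - \mathbf{f}_{\mathrm{int}}\right)}_{\text{explicit: decoupled for lumped } \mathbf{M}}
$$

Here $\mathbf{K}_T$ is the assembled tangent stiffness matrix, $\Delta\mathbf{u}$ the increment in the nodal unknowns and $\mathbf{r}$ the residual. Storing and factorising $\mathbf{K}_T$ in-core is what drives the memory demand; for a sparse direct solver on a three-dimensional mesh with $N$ unknowns the factor storage typically grows faster than linearly in $N$ (of order $N^{4/3}$ with a nested-dissection ordering), which is consistent with the rapid growth in requirements as the model is refined.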
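The remark on scalability can be read through Amdahl's law (again an illustrative note, not taken from the paper): if a fraction $s$ of the implicit solution, for example the serial parts of the factorisation or of the contact searching, does not parallelise, the achievable speed-up on $p$ processors is bounded by

$$
S(p) = \frac{1}{s + \dfrac{1-s}{p}} \;\le\; \frac{1}{s}.
$$

Even a small serial fraction therefore caps the benefit of adding processors, which is why further work on the parallel implicit solvers themselves, rather than simply larger DMPs, is needed to improve turnaround times.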
https://www.dynalook.com/conferences/9th-european-ls-dyna-conference/transitioning-from-smp-to-mpp-dyna3d-for-the-simulation-of-large-thermal-structural-implicit-problems/view