Alya Multiphysics Simulations on Intel's Xeon
Phi Accelerators
Mariano Vazquez 1,2, Guillaume Houzeaux 1, Felix Rubio 1, and Christian Simarro 1
1 Barcelona Supercomputing Center, Spain
2 IIIA-CSIC, Spain
Abstract. In this paper we describe the porting of Alya, our HPC-
based multiphysics simulation code to Intel's Xeon Phi, assessing code
performance. This is a continuation of a short white paper where the
solid mechanics module was tested. Here, we add two more tests and
assess the code in a much wider context. From the physical point of view,
we solve a complex multiphysics problem (combustion in a kiln furnace)
and a single-physics problem with an explicit scheme (compressible flow
around a wing). From the architecture point of view, we perform new
tests using multiple accelerators on different hosts.
1 Introduction
Alya (see for instance [3,4,10,2,5,9]) is a multiphysics simulation code developed
at the Barcelona Supercomputing Center. Thanks to HPC-based programming
techniques, it is able to simulate multiphysics problems with high parallel
efficiency on supercomputers, and it has already been tested on up to one
hundred thousand cores on the Blue Waters supercomputer [10]. Alya simulates
multiphysics problems such as fluid mechanics (compressible and incompressible),
nonlinear solid mechanics, combustion and chemical reactions, electromagnetism,
etc. Multiphysics couplings include contact problems with deforming solids,
fluid-structure interaction, and fluid-solid thermal coupling. Its parallel
architecture is based on automatic mesh partitioning (using Metis [1]) and MPI
tasks. Additionally, it has an inner parallelization layer based on OpenMP
threads, which, combined with MPI tasks, results in a hybrid parallelization
scheme. In this paper we will focus on the pure MPI case.
For some years now, heterogeneous systems with accelerators have been a very
appealing alternative to more traditional homogeneous systems. Accelerators
are hardware specifically designed to perform a certain kind of operation,
typical of number-crunching workloads, very efficiently. In recent years,
GPGPUs have emerged as the de facto main alternative. NVIDIA, the largest
manufacturer of GPGPUs, has made a huge effort to put all the computational
power of its accelerators in the hands of scientists. It developed
a powerful programming model, CUDA, to help programmers adapt their
codes to them. However, NVIDIA's GPGPUs' architecture is not well-suited for