3.3 Parallelization levels

In QUANTUM ESPRESSO several MPI parallelization levels are implemented, in which both calculations and data structures are distributed across processors. Processors are organized in a hierarchy of groups, identified by different MPI communicators. The hierarchy of groups is as follows:

- world: the full set of processors used by the run;
- images: processors can be divided into "images", each taking care of a different, loosely coupled, point in configuration space (e.g. an image of a NEB path);
- pools: each image can be subpartitioned into "pools", each taking care of a group of k-points;
- band groups: each pool can be subpartitioned into "band groups", each taking care of a group of Kohn-Sham bands;
- task groups: processors can be further divided into "task groups", sharing the work on the 3D FFTs (the real-space grid is distributed across them, see the example below);
- linear-algebra group: a square grid of processors taking care of the diagonalization of the subspace Hamiltonian.

Note however that not all parallelization levels are implemented in all codes.

When a communicator is split, the MPI process IDs in each sub-communicator remain ordered. For instance, with two images and 2n MPI processes, image 0 contains IDs 0, 1, ..., n-1 and image 1 contains IDs n, n+1, ..., 2n-1.
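As an illustration of how such a split keeps the IDs in each sub-communicator contiguous and ordered, here is a minimal generic MPI sketch in C (this is not QUANTUM ESPRESSO source code; n_images is an arbitrary assumed value playing the role of -ni):

/* Generic MPI sketch, not QUANTUM ESPRESSO source code: split
   MPI_COMM_WORLD into n_images sub-communicators ("images") so
   that each one contains a contiguous, ordered block of IDs.   */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank, world_size;
    const int n_images = 2;       /* assumed value (plays the role of -ni) */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* color selects the image; using world_rank as the key preserves the
       original ordering, so image 0 gets IDs 0..n-1, image 1 gets n..2n-1 */
    int n = world_size / n_images;
    int color = world_rank / n;

    MPI_Comm image_comm;
    MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &image_comm);

    int image_rank;
    MPI_Comm_rank(image_comm, &image_rank);
    printf("world ID %d -> image %d, local ID %d\n",
           world_rank, color, image_rank);

    MPI_Comm_free(&image_comm);
    MPI_Finalize();
    return 0;
}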

3.3.0.1 About communications

Images and pools are loosely coupled: inter-processor communication across different images and across different pools is modest. Processors within a pool are instead tightly coupled, and communication among them is significant. This means that fast communication hardware is needed if a pool extends over more than a few processors on different nodes.

3.3.0.2 Choosing parameters

The number of processors in each group is controlled by the command-line switches -nimage, -npools, -nband, -ntg, -ndiag or -northo (shorthands, respectively: -ni, -nk, -nb, -nt, -nd). As an example, consider the following command line:
mpirun -np 4096 ./neb.x -ni 8 -nk 2 -nt 4 -nd 144 -i my.input
This runs a NEB calculation on 4096 processors, computing 8 images (points in configuration space, in this case) at the same time, each of which is distributed across 512 processors. k-points are distributed across 2 pools of 256 processors each, the 3D FFT is performed using 4 task groups (64 processors each, so the 3D real-space grid is cut into 64 slices), and the diagonalization of the subspace Hamiltonian is distributed over a square grid of 144 processors (12x12).
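The numbers above follow from nested integer divisions. The short stand-alone C sketch below (a hypothetical helper, not part of the QUANTUM ESPRESSO distribution) reproduces the arithmetic for this command line:

#include <stdio.h>

int main(void)
{
    /* values taken from the example command line above */
    int np = 4096, ni = 8, nk = 2, nt = 4, nd = 144;

    int per_image      = np / ni;          /* 512 processors per image        */
    int per_pool       = per_image / nk;   /* 256 processors per k-point pool */
    int per_task_group = per_pool / nt;    /*  64 processors per task group   */

    /* the 144 linear-algebra processors form a 12 x 12 square grid */
    int side = 0;
    while ((side + 1) * (side + 1) <= nd) side++;

    printf("per image: %d, per pool: %d, per task group: %d, diag grid: %dx%d\n",
           per_image, per_pool, per_task_group, side, side);
    return 0;
}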

Default values are -ni 1, -nk 1, -nt 1; -nd is set to 1 if ScaLAPACK is not compiled, otherwise to the largest square integer smaller than or equal to the number of processors in each pool.

3.3.0.3 Massively parallel calculations

For very large jobs (i.e. O(1000) atoms or more) or for very long jobs, to be run on massively parallel machines (e.g. IBM BlueGene), it is crucial to use all available parallelization levels effectively: linear algebra (requires compilation with ELPA and/or ScaLAPACK), "task groups" (requires the run-time option "-nt N"), and mixed MPI-OpenMP (requires OpenMP compilation: ./configure --enable-openmp). Without a judicious choice of parameters, large jobs will find a stumbling block in either memory or CPU requirements. Note that I/O may also become a limiting factor.
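As a purely illustrative example (the executable name, processor counts and switch values below are arbitrary assumptions, not recommendations), a hybrid MPI-OpenMP run with 4 OpenMP threads per MPI process could be launched as:

export OMP_NUM_THREADS=4
mpirun -np 1024 ./pw.x -nk 4 -nt 8 -nd 144 -i my.input

Each of the 1024 MPI processes then spawns 4 OpenMP threads, for 4096 threads in total, with MPI spread over the pool, task-group and linear-algebra levels and OpenMP handling the remaining intra-node parallelism.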

