Data structures are distributed across processors. Processors are organized in a hierarchy of groups, which are identified by different MPI communicators, one per level of the hierarchy. The group hierarchy is as follows:
world - images - pools - task groups - ortho groups
world: is the group of all processors (MPI_COMM_WORLD).
images: Processors can then be divided into different "images", each corresponding to a point in configuration space (i.e. to a different set of atomic positions). Such partitioning is used when performing Nudged Elastic Band (NEB), meta-dynamics and Laio-Parrinello simulations.
pools: When k-point sampling is used, each image group can be subpartitioned into "pools", and k-points can be distributed across pools. Within each pool, the reciprocal-space basis set (plane waves) and the real-space grids are distributed across processors. This is usually referred to as "plane-wave parallelization". All linear-algebra operations on arrays of plane waves / real-space grids are automatically and effectively parallelized. The 3D FFT is used to transform electronic wave functions from reciprocal to real space and vice versa. It is parallelized by distributing planes of the 3D grid in real space to processors (in reciprocal space, it is columns of G-vectors that are distributed to processors).
task groups: In order to allow good parallelization of the 3D FFT when the number of processors exceeds the number of FFT planes, data can be redistributed to "task groups" so that each group can process several wavefunctions at the same time.
ortho group: A further level of parallelization, independent of plane-wave (pool) parallelization, is the parallelization of subspace diagonalization (pw.x) or iterative orthonormalization (cp.x). Both operations require the diagonalization of arrays whose dimension is the number of Kohn-Sham states (or a small multiple of it). All such arrays are distributed block-like across the "ortho group", a subgroup of the pool of processors, organized in a square 2D grid. The diagonalization is then performed in parallel using standard linear-algebra operations. (This diagonalization is used by, but should not be confused with, the iterative Davidson algorithm).
Communications: Images and pools are loosely coupled: processors communicate between different images and pools only once in a while, whereas processors within each pool are tightly coupled and communications are significant. This means that Gigabit ethernet (typical for cheap PC clusters) is adequate up to 4-8 processors per pool, but fast communication hardware (e.g. Myrinet or comparable) is absolutely needed beyond 8 processors per pool.
Choosing parameters: To control the number of images, pools and task groups, the command-line switches -nimage, -npool and -ntg can be used. The dimension of the ortho group is set by default to the largest value compatible with the number of processors and with the number of electronic states. The user can choose a smaller value using the command-line switch -ndiag (pw.x) or -northo (cp.x). As an example, consider the following command line:
mpirun -np 4096 ./pw.x -nimage 8 -npool 2 -ntg 8 -ndiag 144 -input my.input
This executes the PWscf code on 4096 processors, to simulate a system with 8 images, each of which is distributed across 512 processors. K-points are distributed across 2 pools of 256 processors each, the 3D FFT is performed using 8 task groups (32 processors each, so the 3D real-space grid is cut into 32 slices), and the diagonalization of the subspace Hamiltonian is distributed to a square grid of 144 processors (12x12).
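For a more typical cluster run (the numbers and file name below are purely illustrative), a calculation with 10 k-points on 64 processors could be run as
mpirun -np 64 ./pw.x -npool 4 -input my.input
so that k-points are distributed across 4 pools of 16 processors each (2 or 3 k-points per pool), while plane waves, FFT grids and linear algebra are parallelized over the 16 processors of each pool.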
Default values are: -nimage 1 -npool 1 -ntg 1 ; ndiag is chosen by the code as the largest square number (n^2) of processors compatible with the number of processors and with the number of electronic states.
Since v.4.1, ScaLAPACK can be used to diagonalize block-distributed matrices, yielding better speed-up than the default algorithms for large (> 1000x1000) matrices, when using a large number of processors.
A further possibility to expand scalability, especially on machines like IBM BlueGene, is to use mixed MPI-OpenMP parallelization. The idea is to have one (or more) MPI process(es) per multicore node, with OpenMP parallelization inside each node. This option is activated by the preprocessing flag -D__OPENMP and by the following compiler options:
ifort: -openmp
xlf: -qsmp=omp
PGI: -mp
ftn: -mp=nonuma
It is implemented and tested for the following combinations of FFTs and libraries:
internal FFTW copy: -D__FFTW
ESSL: -D__ESSL or -D__LINUX_ESSL, link with -lesslsmp
ACML: -D__ACML, link with -lacml_mp.
Currently, ESSL (when available) is faster than the internal FFTW, which in turn is faster than ACML.
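As a sketch of how such a mixed MPI-OpenMP run might be launched (the process-placement option shown here is launcher-specific and the numbers are purely illustrative), one could start one MPI process per 4-core node and let OpenMP use the cores within each node:
export OMP_NUM_THREADS=4
mpirun -np 256 --npernode 1 ./pw.x -npool 4 -input my.input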
3.3.0.1 Massively parallel calculations
For very large jobs (i.e. O(1000) atoms or so) or for very long jobs
to be run on massively parallel machines (e.g. IBM BlueGene) it is
crucial to use in an effective way both the "task group" and the
"ortho group" parallelization. Without a judicious choice of
parameters, large jobs will find a stumbling block in either memory or
CPU requirements. In particular, the "ortho group" parallelization is
used in the diagonalization of matrices in the subspace of Kohn-Sham
states (whose dimension is, at a strict minimum, equal to the number of occupied states). These are stored as block-distributed matrices (distributed across processors) and diagonalized using custom-tailored algorithms that work on block-distributed matrices.
To link ScaLAPACK (see above), set LAPACK_LIBS as follows:
LAPACK_LIBS = -lscalapack -lblacs -lblacsF77init -lblacs -llapack
The repeated -lblacs is not an error, it is needed: ScaLAPACK first, then BLACS, the BLACS F77 initialization library, BLACS again (with their paths if needed), and finally LAPACK or a suitable replacement.
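As a rough, hypothetical illustration of ortho-group sizing (the numbers are not taken from the distribution), a job with about 1500 Kohn-Sham states might use
mpirun -np 1024 ./pw.x -ndiag 256 -input my.input
so that the ~1500x1500 subspace matrices are distributed over a 16x16 grid of 256 processors, each holding blocks of roughly 94x94 elements.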
3.3.1 Understanding parallel I/O
In parallel execution, each processor has its own slice of the wavefunctions, which is written to temporary files during the calculation. The way the final wavefunctions are written by pw.x is governed by the variable wf_collect, in namelist control. If wf_collect=.true., the final wavefunctions are collected into a single directory, written by a single processor, in a format that is independent of the number of processors. If wf_collect=.false. (the default) each processor writes its own slice of the final wavefunctions to disk in the internal format used by PWscf.
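For instance, to obtain portable, processor-independent data files one would set wf_collect in the control namelist of the input file (prefix and outdir below are placeholder values):
&control
   calculation = 'scf'
   prefix = 'mysys'
   outdir = './tmp/'
   wf_collect = .true.
/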
The former case (wf_collect=.true.) requires more disk I/O and disk space, but produces portable data files; the latter requires less I/O and disk space, but the data so produced can be read only by a job running on the same number of processors and pools, and only if all files are on a file system that is visible to all processors (i.e., you cannot use local scratch directories: there is presently no way to ensure that the distribution of processes across processors will follow the same pattern for different jobs).
cp.x instead always collects the final wavefunctions into a single directory. Files written by pw.x can be read by cp.x only if wf_collect=.true. (and if produced for the k=0 case).
With the new file format (v.3.1 and later) all data (except wavefunctions in pw.x if wf_collect=.false.) is written to and read from a single directory outdir/prefix.save. A copy of the pseudopotential files is also written there. If some processor cannot access outdir/prefix.save, it reads the pseudopotential files from the pseudopotential directory specified in the input data. Unpredictable results may follow if those files are not the same as those in the data directory!
Avoid I/O to network-mounted disks (via NFS) as much as you can! Ideally the scratch directory (ESPRESSO_TMPDIR) should be a modern parallel file system. If you do not have one, you can use local scratch disks (i.e. each node is physically connected to a disk and writes to it), but you may run into trouble anyway if you need to access files that are scattered in an unpredictable way across disks residing on different nodes.
You can use the option disk_io='minimal', or even 'none', if you run into trouble (or into angry system managers) because of excessive I/O from pw.x. The code will then keep wavefunctions in RAM during the calculation. Note however that this will increase memory usage and may limit or prevent restarting from interrupted runs.
3.3.1.1 Cray XT3
On the Cray XT3 there is a special hack to keep files in memory instead of writing them, without any changes to the code. You have to do
module load iobuf
before compiling, and then add -liobuf at link time.
When you run a job, set the environment variable IOBUF_PARAMS to appropriate values and you can gain a lot.
Here is one example:
env IOBUF_PARAMS='*.wfc*:noflush:count=1:size=15M:verbose,\
*.dat:count=2:size=50M:lazyflush:lazyclose:verbose,\
*.UPF*.xml:count=8:size=8M:verbose' pbsyod \
~/pwscf/pwscfcvs/bin/pw.x -npool 4 -in si64pw2x2x2.inp >& \
si64pw2x2x232moreiobuf.out &
This will ignore all flushes on the *.wfc* scratch files, using a single I/O buffer large enough to contain the whole file (about 12 MB in this case), so that these files are kept in memory rather than written to disk.