Both AMD and Intel CPUs, 32-bit and 64-bit, are supported and work, either in 32-bit emulation or in 64-bit mode. 64-bit executables can address a much larger memory space than 32-bit executables, but there is no gain in speed. Beware: the default integer type for 64-bit machines is typically 32 bits long. You should be able to use 64-bit integers as well, but this is not guaranteed to work and would not give any advantage anyway.
Currently, configure supports the gfortran, Intel (ifort), and NVidia (nvfortran, formerly PGI pgf90) compilers. The ARM (armflang) and NAG (nagfor) compilers are supported but little tested and may or may not work. Pathscale, Sun Studio, and AMD Open64 are no longer supported after v.6.2; g95, since v.6.1.
Intel MKL mathematical libraries are supported. ACML support is obsolete.
It is usually convenient to create semi-statically linked executables (with only libc, libm, libpthread dynamically linked). If you want to produce a binary that runs on different machines, compile it on the oldest machine you have (i.e. the one with the oldest version of the operating system).
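As a check, ldd (standard on Linux) lists which libraries the final executable actually links dynamically; the path to the executable is illustrative:

  $ ldd bin/pw.x     # ideally only libc, libm, libpthread and the dynamic loader appear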
Gfortran v.4.8.5, still found on CentOS machines, no longer compiles QUANTUM ESPRESSO v.6.6 or later, due to a gfortran bug. You need at least gfortran v.4.9.X.
"There is a known incompatibility problem between different calling convention for Fortran functions that return complex values [...] If your code crashes during a call to zdotc, recompile QUANTUM ESPRESSO using the internal BLAS and LAPACK routines (...) to see if the problem disappears; or, add the -ff2c flag" (info by Giovanni Pizzi, Jan. 2013). You may also consider replacing the offending calls to zdotc with fortran intrinsic dot_product.
If you want to use MKL libraries together with gfortran, link -lmkl_gf_lp64, not -lmkl_intel_lp64.
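A typical make.inc link line for gfortran with the sequential MKL might look like the following (an illustrative sketch; verify against the MKL Link Line Advisor, linked below, for your exact version and directory layout):

  BLAS_LIBS = -L${MKLROOT}/lib/intel64 -lmkl_gf_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -ldl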
The Intel compiler ifort http://software.intel.com/ produces fast executables, at least on Intel CPUs, but not all versions work as expected. In particular, ifort versions earlier than v.15 miscompile the new XML code in QE v.6.4 and later and should not be used any longer. In case of trouble, update your version with the most recent patches. Since each major release of ifort differs a lot from the previous one, compiled objects from different releases may be incompatible and should not be mixed.
If configure doesn't find the compiler, or if you get ``Error loading shared libraries'' at run time, you may have forgotten to execute the script that sets up the correct PATH and library path. Unless your system manager has done this for you, you should source the appropriate script, located in the directory containing the compiler executable, from your initialization files. Consult the documentation provided by Intel.
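For example (script names and install prefixes vary with the Intel release; these are illustrative):

  $ source /opt/intel/oneapi/setvars.sh             # oneAPI releases
  $ source /opt/intel/bin/compilervars.sh intel64   # older Parallel Studio releases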
The warning ``feupdateenv is not implemented and will always fail'' can be safely ignored. Complaints about ``recommended formats'' may also be ignored.
configure properly detects only recent (v.12 or later) MKL libraries, as long as the $MKLROOT environment variable is set in the current shell. Normally this variable is set by sourcing the Intel MKL or Intel Parallel Studio environment script. By default the non-threaded version of MKL is linked, unless the option configure --with-openmp is specified. In case of trouble, refer to the following web page to find the correct way to link MKL:
http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor/.
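A quick sanity check before running configure (illustrative):

  $ echo $MKLROOT              # must print the MKL installation path; if empty, source the Intel environment script first
  $ ./configure --with-openmp  # optional: link the threaded MKL instead of the default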
For parallel (MPI) execution on multiprocessor (SMP) machines, set the environment variable OMP_NUM_THREADS to 1 unless you know how to run MPI+OpenMP. See Sec.3 for more info on this and on the difference between MPI and OpenMP parallelization.
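A minimal example of a plain MPI run with OpenMP threading disabled (the -inp option makes pw.x read input from a file rather than from standard input):

  $ export OMP_NUM_THREADS=1
  $ mpirun -np 8 pw.x -inp pw.in > pw.out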
If you get a mysterious ``too many communicators'' error and a subsequent crash, it is due to a bug in Intel MPI and MKL 2016 update 3. See this thread and the links quoted therein:
http://www.mail-archive.com/pw_forum@pwscf.org/msg29684.html.
On AMD CPUs: ``export MKL_DEBUG_CPU_TYPE=5 gives an additional 10-20% speedup with MKL 2020, while for earlier versions the speedup is greater than 200%. [...] Another note: there seem to be problems using the FFTW interface of MKL with AMD CPUs. To get around this problem, one has to additionally set export MKL_CBWR=AUTO'' (info by Tobias Klöffel, Feb. 2020).
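Putting the two settings together (their effect depends on the MKL version, as the quote above explains):

  $ export MKL_DEBUG_CPU_TYPE=5   # faster code path on AMD CPUs (MKL 2020 and earlier)
  $ export MKL_CBWR=AUTO          # works around problems with MKL's FFTW interface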
Apart from such problems, QUANTUM ESPRESSO compiles and works on all non-buggy, properly configured hardware and software combinations. In some cases you may have to recompile MPI libraries: not all MPI installations contain support for the Fortran compiler of your choice (or for any Fortran compiler at all).
If QUANTUM ESPRESSO does not work for some reason on a PC cluster, first check whether it works in serial execution. A frequent problem with parallel execution is that QUANTUM ESPRESSO does not read from standard input, due to the configuration of the MPI libraries: see Sec.3.5. If you are dissatisfied with the performance in parallel execution, see Sec.3 and in particular Sec.3.5.
Another option: use MinGW/MSYS. Download the installer from https://osdn.net/projects/mingw/, then install MinGW, MSYS, gcc, and gfortran. Start a shell window; run ./configure; edit make.inc, uncommenting the second definition of TOPDIR (the first one introduces a final "/" that Windows doesn't like); run make. Note that on some Windows versions the code fails when checking that tmp_dir is writable, for unclear reasons.
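The sequence, spelled out (only the TOPDIR detail is MinGW-specific; the rest is the standard build procedure):

  $ ./configure
  (edit make.inc: uncomment the second definition of TOPDIR)
  $ make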
Another option is Cygwin, a UNIX environment which runs under Windows: see
http://www.cygwin.com/.
Windows 10 users may also enable the Windows Subsystem for Linux (see https://docs.microsoft.com/en-us/windows/wsl/install-win10), install a Linux distribution, and compile QUANTUM ESPRESSO as on Linux. It works very well.
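Inside the Linux distribution, the procedure is the standard Linux one; for example, on Ubuntu (package names are illustrative of a typical setup):

  $ sudo apt install build-essential gfortran openmpi-bin libopenmpi-dev
  $ ./configure --enable-parallel
  $ make pw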
As a final option, one can use Quantum Mobile:
https://www.materialscloud.org/work/quantum-mobile.
"I have had some success compiling pw.x on the newish apple hardware. Running run-tests-pw-parallel resulted in all but 3 tests passed (3 unknown). QE6.7 works out of the box:
./configure FC=mpif90 CC=mpicc CPP=cpp-11 BLAS_LIBS="-L/opt/homebrew/lib -lveclibfort" LIBDIRS=/opt/homebrew/libCurrent develop branch needed two changes:
Mac OS-X machines with gfortran or with the Intel compiler ifort and MKL libraries should work, but "your mileage may vary", depending upon the specific software stack you are using. Parallel compilation with OpenMPI should also work.
If you get an error like
  clang: error: no input files
  make[1]: *** [laxlib.fh] Error 1
  make: *** [libla] Error 1
redefine CPP as CPP=gcc -E in make.inc.
Gfortran information and binaries for Mac OS-X are available at http://hpc.sourceforge.net/ and https://wiki.helsinki.fi/display/HUGG/GNU+compiler+install+on+Mac+OS+X.
Mysterious crashes in zdotc are due to a known incompatibility of complex functions with some optimized BLAS. See the "Linux PCs with gfortran" paragraph.
``... despite what people can imagine, every CRAY machine deployed can have different environment. For example on the machine I usually use for tests [...] I do have to unload some modules to make QE running properly. On another CRAY [...] there is also Intel compiler as option and the system is slightly different compared to the other.'' (info by Filippo Spiga)
For recent Cray machines, use ./configure ARCH=craype. This selects the ftn compiler, which typically invokes the crayftn compiler but may also invoke other ones, depending upon the site and personal environment. ftn v.15.0.1 compiles QE properly. With the PrgEnv-cray module v.6.0.10 and ftn v.14.0.3, you run into problems compiling esm_stres_mod.f90; compile esm_stres_mod.f90, Modules/qexsd*.f90, and PW/src/pw_restart_new.f90 with reduced optimization, using -O0 or -O1 instead of the default -O3,fp3 optimization.
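One way to apply the workaround by hand is to recompile the offending file at lower optimization and then let make finish the build (a sketch: run the compile step from the directory containing the file, adding the -I include flags that make.inc uses for that directory):

  $ rm -f esm_stres_mod.o
  $ ftn -O1 -c esm_stres_mod.f90   # instead of the default -O3,fp3
  $ make pw                        # from the top-level directory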
If you want to use the Intel compiler instead, try something like:
$ module swap PrgEnv-cray PrgEnv-intel
$ ./configure ARCH=craype [--enable-openmp --enable-parallel --with-scalapack]
Old Cray machines: T3D, T3E, X1, etc, are no longer supported.