Calculation Engines included in NanoLabo Tool

Windows version

The executable files for Quantum ESPRESSO (qe), LAMMPS, and MPI are stored in exec.WIN in the NanoLabo Tool installation location.

You may want to add the paths of the executable files to the Path environment variable for convenience. To add them temporarily, execute the following.

Example when installed in the default location
set Path=C:\Program Files\AdvanceSoft\NanoLabo\exec.WIN\qe;C:\Program Files\AdvanceSoft\NanoLabo\exec.WIN\lammps;C:\Program Files\AdvanceSoft\NanoLabo\exec.WIN\mpi;%Path%

You can launch a command prompt with Path already set by executing NanoLabo.bat, located in bin in the NanoLabo Tool installation location.
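
Example when installed in the default location (the exact path is assumed from the default installation location shown above)
"C:\Program Files\AdvanceSoft\NanoLabo\bin\NanoLabo.bat"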

Alternatively, to add them permanently, edit the Path environment variable from the System Properties dialog.

To perform a DFT (SCF) calculation using Quantum ESPRESSO, execute pw.exe, specifying the input file.

Example of executing the input file PW.inp
pw.exe -in PW.inp 1> PW.out 2> PW.err
Example of parallel execution
mpiexec.exe -n 4 pw.exe -in PW.inp 1> PW.out 2> PW.err

To perform a molecular dynamics calculation using LAMMPS, execute lammps.exe, supplying the input file on standard input.

Example of executing the input file lammps.in
lammps.exe < lammps.in 1> lammps.out 2> lammps.err
Example of parallel execution
mpiexec.exe -n 4 lammps.exe < lammps.in 1> lammps.out 2> lammps.err

Linux version

The executable files for Quantum ESPRESSO (qe), LAMMPS, and MPI are stored in exec.LINUX in the NanoLabo Tool installation location.

As the dynamic libraries in mpi/lib are required for execution, execute the following to add them to the LD_LIBRARY_PATH environment variable.

Example when installed in the default location
export LD_LIBRARY_PATH=/opt/AdvanceSoft/NanoLabo/exec.LINUX/mpi/lib:$LD_LIBRARY_PATH

Also execute the following, as the PATH environment variable and Open MPI's OPAL_PREFIX environment variable must be set.

Example when installed in the default location
export PATH=/opt/AdvanceSoft/NanoLabo/exec.LINUX/mpi/bin:$PATH
export OPAL_PREFIX=/opt/AdvanceSoft/NanoLabo/exec.LINUX/mpi

Note

The MPI executable files and libraries are included in both the main NeuralMD installer and the NanoLabo Tool installer.

The former stores them in mpi in the installation destination; the latter stores them in exec.LINUX/mpi.

Set just one of them in the environment variables, as their contents are the same.

In addition, you may want to add the paths of the executable files to the PATH environment variable for convenience.

Example when installed in the default location
export PATH=/opt/AdvanceSoft/NanoLabo/exec.LINUX/qe_parallel:/opt/AdvanceSoft/NanoLabo/exec.LINUX/lammps_parallel:$PATH

To perform a DFT (SCF) calculation using Quantum ESPRESSO, execute pw.x, specifying the input file.

Example of executing the input file PW.inp
pw.x -in PW.inp 1> PW.out 2> PW.err
Example of parallel execution
mpirun -n 4 pw.x -in PW.inp 1> PW.out 2> PW.err

To perform a molecular dynamics calculation using LAMMPS, execute lammps, supplying the input file on standard input.

Example of executing the input file lammps.in
lammps < lammps.in 1> lammps.out 2> lammps.err
Example of parallel execution
mpirun -n 4 lammps < lammps.in 1> lammps.out 2> lammps.err

GPU version of LAMMPS

To perform neural network potential calculations on a GPU, use lammps_gpu instead of lammps as the LAMMPS executable.
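
Example of parallel execution with the GPU version (same command-line pattern as the non-GPU examples above)
mpirun -n 4 lammps_gpu < lammps.in 1> lammps.out 2> lammps.err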

The same input files as for the non-GPU version can be used; GPU-specific settings can be configured by creating the configuration file gpu.conf.

Note

  • A GPU driver must be installed beforehand. Since CUDA 11.4.4 is used, driver version 470.82.01 or later is required.

  • If the system contains 5 or more elements, weighted symmetry functions must be used when generating the force field.

Format of gpu.conf

The keywords threads, atomBlock, and mpi2Device each indicate the beginning of a section; the following lines are the contents of that section.

Each section can be omitted; if omitted, its default value is used.

You can insert empty lines or comment lines (beginning with ! or #) before and/or after each section.

threads
default: 256

Number of threads per block in CUDA. The upper limit is 1024 (a CUDA specification). A multiple of 32 (the warp size) is appropriate.

atomBlock
default: 4096

Number of atoms processed at a time when calculating the symmetry functions on the GPU.

mpi2Device
default: 0 for all

When using a machine with multiple GPUs, specify the GPU(s) to be used by device ID, one line per MPI process (as in the example below).

You can check the device ID assigned to each GPU by executing nvidia-smi -L.
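
The output takes roughly the following form (the model names and UUIDs here are illustrative placeholders; actual values depend on your hardware)
GPU 0: NVIDIA A100 (UUID: GPU-...)
GPU 1: NVIDIA A100 (UUID: GPU-...)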

gpu.conf example
threads
512
atomBlock
1024
#On a machine with 2 graphics cards, to execute with 8 MPI parallel processes,
#allocating 4 processes to the GPU with device ID 0 and 4 to the GPU with device ID 1
mpi2Device
0
0
0
0
1
1
1
1

Hint

The calculation is executed efficiently if you assign 2-4 MPI processes per GPU device.
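
For instance, with the two-GPU gpu.conf example above, the following command assigns 4 processes to each device:
mpirun -n 8 lammps_gpu < lammps.in 1> lammps.out 2> lammps.err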