# Format of sannp.prop

Executing `sannp --temp` outputs a template of `sannp.prop`.

A line whose first character is `#` or `!` is treated as a comment.
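For illustration, a short `sannp.prop` fragment might look like this (the keywords and values are those documented below; the exact layout is an assumption based on the comment rule above):

```
# sannp.prop — settings for training
! start from scratch with the HDNNP method and Chebyshev descriptors
restart    0
withHDNNP  1
symmFunc   chebyshev
```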

- restart
- default: 0

In case of 0, start learning from the beginning. In case of 1, read the neural network information from `sannp.data`, `sannp.data_e`, or `sannp.data_q` and resume learning.

- insituTest
- default: 0

In case of 1, perform in-situ tests that calculate and output the RMSE against the test data at each epoch during learning. In case of 0, in-situ tests are not performed.

- withCharge
- default: 0

Calculate charges (1) or not (0).

- withHDNNP
- default: 1

Use the HDNNP method (1), which takes the total energy of the system as training data, or the SANNP method (0), which takes the energy divided among the individual atoms as training data.

- withLJlike
- default: 0

Use Δ-NNP, a combination of an LJ-like force field and a neural network force field (1), or not (0). Cannot be used at the same time as Δ-NNP with ReaxFF. `withClassical 1` is equivalent to `withLJlike 1`.

- withReaxFF
- default: 0

Use Δ-NNP, a combination of ReaxFF and a neural network force field (1), or not (0). ReaxFF requires the parameter definition file `ffield.reax`. Cannot be used at the same time as Δ-NNP with an LJ-like force field. `withClassical 2` is equivalent to `withReaxFF 1`.

- rcutReaxFF
- default: 5.0

For Δ-NNP using ReaxFF, specify the cutoff radius (Å) of ReaxFF.

- rateReaxFF
- default: 0.5

For Δ-NNP using ReaxFF, specify the contribution (mixing rate) of ReaxFF when calculating energy and force.
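As an illustration, a Δ-NNP setup based on ReaxFF could combine the keywords above as follows (the values shown are the documented defaults; `ffield.reax` must be present):

```
! Δ-NNP: mix ReaxFF (requires ffield.reax) with the neural network
withReaxFF  1
rcutReaxFF  5.0
rateReaxFF  0.5
```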

- directSF
- default: -1

Normalize the symmetry functions per mini batch (1) or over the whole sample (0). In case of a negative value, normalize over the whole sample for Behler symmetry functions and per mini batch for Many-Body symmetry functions.

- regularElem
- default: 1

When characters follow element names in the training data (e.g., Fe1, Fe2), specify whether to treat them as the same element (1) or as different elements (0) during training.

- maxForce
- default: 10.0

Specify the threshold (eV/Å) for eliminating outliers with excessively large forces from the training data. In case of 0 or lower, no outliers are eliminated.

- minEDev
- default: 0.5

Specify the lower limit (eV) of the variance used for normalizing atomic energies.

- maxEDev
- default: 10

Specify the upper limit (eV) of the variance used for normalizing atomic energies.

- minQDev
- default: 0.1

Specify the lower limit (e) of the variance used for normalizing atomic charges.

- maxQDev
- default: 10

Specify the upper limit (e) of the variance used for normalizing atomic charges.

- symmFunc
- default: chebyshev

Specify the type of symmetry function. behler, chebyshev, and many-body are available.

- elemWeight
- default: 1

Use weighted symmetry functions (1) or not (0). Available only when Behler or Chebyshev symmetry functions are used.

- tanhCutoff
- default: 1

Use a function composed of tanh (1) or cos (0) as the cutoff function.

- m2
- default: 100

Specify the parameter *M*_{2} of the Many-Body symmetry functions.

- m3
- default: 10

Specify the parameter *M*_{3} of the Many-Body symmetry functions.

- rinner
- default: 0.0

Specify the parameter *R*_{inner} (Å) of the Many-Body symmetry functions.

- router
- default: 6.0

Specify the parameter *R*_{outer} (Å) of the Many-Body symmetry functions.

- numRadius
- default: 50

Specify the number of radial components of the Chebyshev symmetry functions.

- numAngle
- default: 30

Specify the number of angular components of the Chebyshev symmetry functions.

- rcutRadius
- default: 6.0

Specify the cutoff radius *R*_{c} (Å) of the radial components of the Chebyshev symmetry functions.

- rcutAngle
- default: 6.0

Specify the cutoff radius *R*_{c} (Å) of the angular components of the Chebyshev symmetry functions.
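For example, a descriptor block using Chebyshev symmetry functions with the documented defaults might read:

```
! Chebyshev symmetry functions with element weighting
symmFunc    chebyshev
elemWeight  1
numRadius   50
numAngle    30
rcutRadius  6.0
rcutAngle   6.0
```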

- models
- default: 16

Create the specified number of neural network models and train them in parallel. When the result is used as a force field, the average of the energies/forces output by the individual networks becomes the output of the force field. If 1 is specified, this reduces to the ordinary method in which the force field is defined by a single neural network model.

- layers
- default: 2

Specify the number of hidden layers of a neural network.

- nodes
- default: 40

Specify the number of nodes per hidden layer of a neural network.

- activ
- default: twtanh

Specify the activation function of a neural network. asis (no activation), sigmoid, tanh, twtanh (twisted tanh), eLU, and GELU are available.

- lbfgs
- default: 32

Specify the optimization algorithm used for learning. In case of 0, the Adam method is used. In case of 1 or larger, the L-BFGS method is used, with the specified value as the number of histories.

- lineSearch
- default: more-thuente

Specify the line search algorithm used in the L-BFGS method. more-thuente, armijo, wolfe, and strong-wolfe are available.

- lineSteps
- default: 32

Specify the maximum number of line search trials in the L-BFGS method.
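Putting the optimizer keywords together, an L-BFGS configuration with the documented defaults might look like:

```
! L-BFGS with 32 histories and Moré–Thuente line search
lbfgs       32
lineSearch  more-thuente
lineSteps   32
```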

- batchs
- default: 0

Specify the mini batch size for learning. In case of 0 or lower, mini batches are not used; the full batch (whole sample) is used for learning.

- epochs
- default: 500

Specify the upper limit of the number of repeats (epochs) during learning.

- epochsStore
- default: 1000

Specify the interval (in epochs) at which neural network data is saved to a file during learning. In case of 0 or lower, data is stored only at the end.

- epochsOnlyE
- default: 250

Use only energy for learning until the number of repeats (epochs) exceeds this value. After that, use both energy and force for learning.

- epochsApproxF
- default: 500

Until the number of repeats (epochs) exceeds this value, use approximate differentiation with the Double Backward method when computing the error of the force acting on an atom from the energy. After that, the derivative is calculated strictly.
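The epoch-related keywords above can be combined into a training schedule; for example, using the documented defaults:

```
! up to 500 epochs: energy only for the first 250,
! approximate force derivatives until epoch 500
epochs        500
epochsOnlyE   250
epochsApproxF 500
```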

- superEpochs
- default: 0

Setting for the Super Epoch method, in which the training data is split into subsets and training is performed serially on each subset. Specify the number of subsets (= number of Super Epochs). In case of 0 or lower, the value is set automatically based on geomsEpoch.

- geomsEpoch
- default: 500 (2500 for the GPU version `sannp_gpu`)

Specify the number of training data per Super Epoch. In case of 0 or lower, or if larger than the number of training data, it is automatically set to the number of training data.
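A Super Epoch configuration might, for example, fix the subset size and let the number of subsets be derived automatically:

```
! derive the number of Super Epochs from the subset size
superEpochs 0
geomsEpoch  2500
```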

- sqrtLoss
- default: 0

Specify whether the loss function uses absolute values (1) or squared values (0).

- renormLoss
- default: 0

Specify whether the loss function is normalized (1) or not (0).

- approxForce
- default: 0

When computing the error of the force acting on an atom from the energy, select whether to switch between approximate and strict differentiation according to epochsApproxF (0) or to always use the approximate one (1).

- rmseEnergy
- default: 0.01

Specify the threshold (eV/atom) on the RMS residual of energy used to judge whether learning has converged.

- rmseForce
- default: 0.10

Specify the threshold (eV/Å) on the RMS residual of force used to judge whether learning has converged.

- rmseCharge
- default: 0.01

Specify the threshold (e) on the RMS residual of charge used to judge whether learning has converged.

- coefEnergy
- default: 1.00

Specify the scale factor (1/eV or 1/eV^{2}) of the loss function of energy.

- coefForce
- default: 1.00

Specify the scale factor (Å/eV or (Å/eV)^{2}) of the loss function of force.

- coefCharge
- default: 1.00

Specify the scale factor (1/e or 1/e^{2}) of the loss function of charge.

- learnRate
- default: 1.0e-4

Specify the initial value of the learning rate.

- learnRateFinal
- default: 1.0e-4

Specify the lower limit of the learning rate.

- learnRateDecay
- default: 0.9999

Specify the decay factor of the learning rate.

- adamBeta1
- default: 0.9

Specify the hyperparameter *β*_{1} of the Adam method used at learning.

- adamBeta2
- default: 0.999

Specify the hyperparameter *β*_{2} of the Adam method used at learning.
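For example, to train with Adam instead of L-BFGS, the learning-rate and Adam keywords above might be combined as follows (values are the documented defaults):

```
! Adam optimizer (lbfgs 0) with a slowly decaying learning rate
lbfgs          0
learnRate      1.0e-4
learnRateDecay 0.9999
adamBeta1      0.9
adamBeta2      0.999
```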

- classicalTry
- default: 64

Specify the upper limit of the number of repeats when optimizing the classical force field used in Δ-NNP.

- classicalLower
- default: -50.0

When optimizing the classical force field used in Δ-NNP, penalty functions are applied so that energies lower than this value (eV) seldom appear.

- gpuThreads
- default: 256

(GPU version) Number of threads per block in CUDA. The upper limit is 1024 (a CUDA specification); a multiple of 32 (the warp size) is appropriate.

This setting concerns the threads on the GPU and is unrelated to thread parallelism (OpenMP) on the CPU.

- gpuAtomBlock
- default: 512

(GPU version) Number of atoms processed at a time when calculating the symmetry functions on the GPU.

- endProperty

All file contents after this keyword are treated as comments.
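For example, free-form notes can be kept at the end of the file after `endProperty`:

```
epochs 500
endProperty
Anything written below the endProperty keyword is ignored,
so this area can hold notes about the training run.
```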