FromScratch
Section: Execution
Type: logical
Default: false
When this variable is set to true, Octopus will perform a
calculation from the beginning, without looking for restart
information.
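For instance, to ignore any existing restart data and recompute everything from the beginning, add to the input file:

FromScratch = yes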
DebugLevel
Section: Execution::Debug
Type: integer
Default: 0
This variable decides whether to enter debug mode.
If it is greater than 0, increasing amounts of additional information
are written to standard output and additional assertion checks are performed.
This variable cannot use a dataset prefix.
Options:
ExperimentalFeatures
Section: Execution::Debug
Type: logical
Default: no
If true, allows the use of certain parts of the code that are
still under development and are not suitable for production
runs. This should not be used unless you know what you are doing.
See details on the wiki page.
This variable cannot use a dataset prefix.
ForceComplex
Section: Execution::Debug
Type: logical
Default: no
Normally, Octopus automatically determines the type necessary
for the wavefunctions. When this variable is set to yes, the use
of complex wavefunctions is forced.
Warning: this variable is designed for testing and
benchmarking; normal users need not use it.
MPIDebugHook
Section: Execution::Debug
Type: logical
Default: no
When debugging the code in parallel it is usually difficult to find the origin
of race conditions that appear in MPI communications. This variable introduces
a facility to control separate MPI processes. If set to yes, all nodes will
start up but will be trapped in an endless loop. In every cycle of the loop,
each node sleeps for one second and then checks whether a file named
node_hook.xxx (where xxx denotes the node number) exists. A given node is
released from the loop only once the corresponding file has been created. This
makes it possible to run the processes selectively, e.g., a compute node first,
followed by the master node, or, by reversing the order in which the hook files
are created, the master node first, followed by a compute node.
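Assuming a two-process run and three-digit, zero-padded rank numbers in the hook-file names (the padding is an assumption of this sketch, not taken from the manual), releasing the compute node before the master could look like this from a second terminal in the working directory:

```shell
# Release rank 1 (a compute node) first, then rank 0 (the master).
# File names follow the node_hook.xxx pattern described above.
touch node_hook.001   # rank 1 leaves its wait loop
sleep 2               # give rank 1 time to pass the region under study
touch node_hook.000   # rank 0 leaves its wait loop
```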
ReportMemory
Section: Execution::Debug
Type: logical
Default: no
If true, after each SCF iteration Octopus will print
information about the memory the code is using. The quantity
reported is an approximation to the size of the heap and
generally a lower bound on the actual memory Octopus is
using.
FlushMessages
Section: Execution::IO
Type: logical
Default: no
In addition to writing to stdout and stderr, the code messages may also be
flushed to messages.stdout and messages.stderr, if this variable is
set to yes.
RestartOptions
Section: Execution::IO
Type: block
Octopus usually stores binary information, such as the wavefunctions, to be used
in subsequent calculations. The most common example is the ground-state wavefunctions
that are used to start a time-dependent calculation. This variable allows one to control
where this information is written to or read from. The format of this block is the following:
for each line, the first column indicates the type of data, the second column indicates
the path to the directory that should be used to read and write that restart information, and the
third column, which is optional, allows one to set some flags that modify the way the data
is read or written. For example, if you are running a time-dependent calculation, you can
indicate where Octopus can find the ground-state information in the following way:
%RestartOptions
restart_gs | "gs_restart"
restart_td | "td_restart"
%
The second line of the above example also tells Octopus that the time-dependent restart data
should be read from and written to the "td_restart" directory.
In case you want to change the path of all the restart directories, you can use the restart_all option.
When using the restart_all option, it is still possible to have a different restart directory for specific
data types. For example, when including the following block in your input file:
%RestartOptions
restart_all | "my_restart"
restart_td | "td_restart"
%
the time-dependent restart information will be stored in the "td_restart" directory, while all the remaining
restart information will be stored in the "my_restart" directory.
By default, the name of the "restart_all" directory is set to "restart".
Some CalculationModes also take into account specific flags set in the third column of the RestartOptions
block. These are used to determine if some specific part of the restart data is to be taken into account
or not when reading the restart information. For example, when restarting a ground-state calculation, one can
set the restart_rho flags, so that the density used is not built from the saved wavefunctions, but is
instead read from the restart directory. In this case, the block should look like this:
%RestartOptions
restart_gs | "restart" | restart_rho
%
A list of available flags is given below. Note that the code might ignore some of them
if they are not available for that particular calculation, or assume some of them
are always present if they are mandatory.
Finally, note that all the restart information of a given data type is always stored in a subdirectory of the
specified path. The name of this subdirectory is fixed and cannot be changed. For example, ground-state information
will always be stored in a subdirectory named "gs". This makes it safe in most situations to use the same path for
all the data types. The name of these subdirectories is indicated in the description of the data types below.
Currently, the available restart data types and flags are the following:
Options:
RestartWrite
Section: Execution::IO
Type: logical
Default: true
If this variable is set to no, restart information is not
written. Note that some run modes will ignore this
option and write some restart information anyway.
RestartWriteInterval
Section: Execution::IO
Type: integer
Default: 50
Restart data is written when the iteration number is a multiple
of the RestartWriteInterval variable. For
time-dependent runs this includes the update of the output
controlled by the TDOutput variable. (Other output is
controlled by OutputInterval.)
WorkDir
Section: Execution::IO
Type: string
Default: "."
By default, all files are written and read from the working directory,
i.e. the directory from which the executable was launched. This behavior can
be changed by setting this variable: if you give it a name (other than ".")
the files are written and read in that directory.
stderr
Section: Execution::IO
Type: string
Default: "-"
The standard error by default goes to, well, to standard error. This can
be changed by setting this variable: if you give it a name (other than "-")
the output stream is printed in that file instead.
stdout
Section: Execution::IO
Type: string
Default: "-"
The standard output by default goes to, well, to standard output. This can
be changed by setting this variable: if you give it a name (other than "-")
the output stream is printed in that file instead.
DisableOpenCL
Section: Execution::OpenCL
Type: logical
Default: yes
If Octopus was compiled with OpenCL support, it will try to
initialize and use an OpenCL device. By setting this variable
to yes you tell Octopus not to use OpenCL.
OpenCLBenchmark
Section: Execution::OpenCL
Type: logical
Default: no
If this variable is set to yes, Octopus will run some
routines to benchmark the performance of the OpenCL device.
OpenCLDevice
Section: Execution::OpenCL
Type: integer
Default: gpu
This variable selects the OpenCL device that Octopus will
use. You can specify one of the options below or a numerical
id to select a specific device.
Options:
OpenCLPlatform
Section: Execution::OpenCL
Type: integer
Default: 0
This variable selects the OpenCL platform that Octopus will
use. You can give an explicit platform number or use one of
the options that select a particular vendor
implementation. Platform 0 is used by default.
Options:
MemoryLimit
Section: Execution::Optimization
Type: integer
Default: -1
If positive, Octopus will stop if more memory than MemoryLimit
is requested (in kilobytes). Note that this variable only works when
ProfilingMode = prof_memory(_full).
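As an illustration (the exact limit is arbitrary), a memory-profiled run that aborts once roughly 2 GB have been requested could be set up as:

ProfilingMode = prof_memory
MemoryLimit = 2000000

Since the limit is given in kilobytes, 2000000 corresponds to about 2 GB.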
MeshBlockSize
Section: Execution::Optimization
Type: block
To improve memory-access locality when calculating derivatives,
Octopus arranges mesh points in blocks. This variable
controls the size of these blocks in the different
directions. The default is selected according to the value of
the StatesBlockSize variable. (This variable only affects the
performance of Octopus, not the results.)
MeshOrder
Section: Execution::Optimization
Type: integer
Default: blocks
This variable controls how the grid points are mapped to a
linear array. This influences the performance of the code.
Options:
NLOperatorCompactBoundaries
Section: Execution::Optimization
Type: logical
Default: no
(Experimental) When set to yes, for finite systems Octopus will
map boundary points for finite-differences operators to a few
memory locations. This increases performance; however, it is
experimental and has not been thoroughly tested.
OperateComplex
Section: Execution::Optimization
Type: integer
This variable selects the subroutine used to apply non-local
operators over the grid for complex functions.
By default, the optimized version is used (except in single-precision builds).
Options:
OperateDouble
Section: Execution::Optimization
Type: integer
This variable selects the subroutine used to apply non-local
operators over the grid for real functions.
By default, the optimized version is used (except in single-precision builds).
Options:
OperateOpenCL
Section: Execution::Optimization
Type: integer
Default: map
This variable selects the subroutine used to apply non-local
operators over the grid when OpenCL is used.
Options:
ProfilingAllNodes
Section: Execution::Optimization
Type: logical
Default: no
This variable controls whether all nodes print the time
profiling output. If set to no, the default, only the root node
will write the profile. If set to yes, all nodes will print it.
ProfilingMode
Section: Execution::Optimization
Type: integer
Default: no
Use this variable to run Octopus in profiling mode. In this mode,
Octopus records the time spent in certain areas of the code and
the number of times that code is executed. These numbers
are written to ./profiling.NNN/profiling.nnn, with nnn being the
node number (000 in serial) and NNN the number of processors.
This is mainly for development purposes. Note, however, that
Octopus should be compiled with --disable-debug for proper
profiling. Warning: you may encounter strange results with OpenMP.
Options:
StatesBlockSize
Section: Execution::Optimization
Type: integer
Some routines work over blocks of eigenfunctions, which
generally improves performance at the expense of increased
memory consumption. This variable selects the size of the
blocks to be used. If OpenCL is enabled, the default is 32;
otherwise it is max(4, 2*nthreads).
StatesCLDeviceMemory
Section: Execution::Optimization
Type: float
Default: -512
This variable selects the amount of OpenCL device memory that
will be used by Octopus to store the states.
A positive number smaller than 1 indicates a fraction of the total
device memory. A number larger than 1 indicates an absolute
amount of memory in megabytes. A negative number indicates an
amount of memory in megabytes to be subtracted from
the total device memory.
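For instance (the values are purely illustrative), each of the following lines requests one of the three interpretations described above:

StatesCLDeviceMemory = 0.5      # use half of the total device memory
StatesCLDeviceMemory = 1024     # use 1024 megabytes of device memory
StatesCLDeviceMemory = -512     # use all device memory except 512 megabytes (the default)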
StatesPack
Section: Execution::Optimization
Type: logical
Default: yes
If set to yes (the default), Octopus will 'pack' the
wave-functions when operating with them. This involves some
additional copying but makes operations more efficient.
MeshPartition
Section: Execution::Parallelization
Type: integer
When using METIS to perform the mesh partitioning, this variable decides
which algorithm is used. By default, graph partitioning
is used for 8 or more partitions, and rcb for fewer.
Options:
MeshPartitionDir
Section: Execution::Parallelization
Type: string
Default: "restart/partition"
Directory where Octopus can read or write the mesh partition.
MeshPartitionPackage
Section: Execution::Parallelization
Type: integer
This variable decides which library to use to perform the mesh partition.
By default, ParMETIS is used when available; otherwise METIS is used.
Options:
MeshPartitionRead
Section: Execution::Parallelization
Type: logical
Default: true
If set to yes (the default), Octopus will try to use the mesh
partition from a previous run, if available, in directory MeshPartitionDir.
MeshPartitionStencil
Section: Execution::Parallelization
Type: integer
Default: stencil_star
To partition the mesh, it is necessary to calculate the connection
graph connecting the points. This variable selects which stencil
is used to do this.
Options:
MeshPartitionVirtualSize
Section: Execution::Parallelization
Type: integer
Default: mesh mpi_grp size
Gives the possibility to change the number of partition nodes.
Note that the code will stop afterward.
MeshPartitionWrite
Section: Execution::Parallelization
Type: logical
Default: no
(Experimental) If set to yes, Octopus will write the mesh
partition of the current run to directory MeshPartitionDir.
MeshUseTopology
Section: Execution::Parallelization
Type: logical
Default: false
(experimental) If enabled, Octopus will use an MPI virtual
topology to map the processors. This can improve performance
for certain interconnection systems.
ParallelizationGroupRanks
Section: Execution::Parallelization
Type: block
Specifies the size of the groups used for the
parallelization, as one number each for domains, states, k-points, and other.
For example (n_d, n_s, n_k, n_o) means we have
n_d*n_s*n_k*n_o processors and that electron-hole pairs (only for CalculationMode = casida)
will be divided into n_o groups, the k-points should be
divided into n_k groups, the states into n_s groups, and the grid
points into n_d domains. You can pass the value fill to one
field: it will be replaced by the value required to complete
the number of processors in the run. Any value for the column corresponding to
a parallelization strategy unavailable for the current CalculationMode will be ignored.
If this option is not set, the groups will be set automatically, choosing divisors of the number
of available processors, and using the largest numbers for the groups in this order:
other, k-points, states, domains (i.e. from right to left of how they are laid out in this block).
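To make the layout concrete, a purely illustrative block for a 16-processor run might read:

%ParallelizationGroupRanks
 4 | fill | 2 | 1
%

Here the grid points are split into 4 domains, the k-points into 2 groups, and the "other" parallelization is not used; fill expands to 2 so that 4*2*2*1 matches the 16 processors.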
Options:
ParallelizationNumberSlaves
Section: Execution::Parallelization
Type: integer
Default: 0
Slaves are nodes used for task parallelization. The number of
such nodes is given by this variable multiplied by the number
of domains used in domain parallelization.
ParallelizationOfDerivatives
Section: Execution::Parallelization
Type: integer
Default: non_blocking
This option selects how the communication of mesh boundaries is performed.
Options:
ParallelizationPoissonAllNodes
Section: Execution::Parallelization
Type: logical
Default: true
When running in parallel, this variable selects whether the
Poisson solver should divide the work among all nodes or only
among the parallelization-in-domains groups.
ParallelizationStrategy
Section: Execution::Parallelization
Type: flag
Specifies what kind of parallelization strategy Octopus should use.
The values can be combined: for example, par_domains + par_states
means a combined parallelization in domains and states.
Default: par_domains + par_states for CalculationMode = td,
otherwise par_domains.
Options:
PartitionPrint
Section: Execution::Parallelization
Type: logical
Default: true
(experimental) If disabled, Octopus will neither compute
nor print the partition information, such as local points,
number of neighbours, ghost points, and boundary points.
ScaLAPACKCompatible
Section: Execution::Parallelization
Type: logical
Whether to use a layout for states parallelization which is compatible with ScaLAPACK.
The default is yes for CalculationMode = gs, unocc, go without k-point parallelization,
and no otherwise. (Setting to other than default is experimental.)
The value must be yes if any ScaLAPACK routines are called in the course of the run;
it must be set by hand for td with TDDynamics = bo.
This variable has no effect unless you are using states parallelization and have linked ScaLAPACK.
Note: currently, use of ScaLAPACK is not compatible with task parallelization (i.e. slaves).
SymmetriesCompute
Section: Execution::Symmetries
Type: logical
Default: (natoms < 100) ? true : false
If disabled, Octopus will neither compute
nor print the symmetries.
Units
Section: Execution::Units
Type: integer
Default: atomic
This variable selects the units that Octopus uses for
input and output.
Atomic units seem to be the preferred system in the atomic and
molecular physics community. Internally, the code works in
atomic units. However, for input or output, some people like
to use a system based on electron-Volts (eV) for energies
and Angstroms (Å) for length.
Normally, time units are derived from the energy and length units,
so time is measured in \(\hbar\)/Hartree or
\(\hbar\)/eV. Alternatively, you can tell
Octopus to use femtoseconds as the time unit by adding the
value femtoseconds (note that no other unit will be
based on femtoseconds). So, for example, you can use:
Units = femtoseconds
or
Units = ev_angstrom + femtoseconds
You can use different unit systems for input and output by
setting the UnitsInput and UnitsOutput variables.
Warning 1: All files read on input will also be treated using
these units, including XYZ geometry files.
Warning 2: Some values are treated in their most common units,
for example atomic masses (a.m.u.), electron effective masses
(electron mass), vibrational frequencies
(cm-1) or temperatures (Kelvin). The unit of charge is always
the electronic charge e.
Options:
UnitsInput
Section: Execution::Units
Type: integer
Default: atomic
Same as Units, but only refers to input values.
UnitsOutput
Section: Execution::Units
Type: integer
Default: atomic
Same as Units, but only refers to output values.
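For instance, to keep atomic units for the input file while printing results in eV and Ångström (an illustrative combination of the two variables above):

UnitsInput = atomic
UnitsOutput = ev_angstrom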