Module dlkoopman.config
Configuration options
Classes
class Config (precision='float', use_cuda=True, torch_compile_backend='aot_eager', normalize_Xdata=True, use_exact_eigenvectors=True, sigma_threshold=1e-25)
Configuration options.

Parameters

- `precision` (*str, optional*) - Numerical precision of tensors. Must be one of `"half"` / `"float"` / `"double"`.
    - Note that setting `precision = "double"` may make predictions slightly more accurate, however, may also lead to inefficient GPU runtimes.
- `use_cuda` (*bool, optional*) - If `True`, tensor computations will take place on CUDA GPUs if available.
- `torch_compile_backend` (*str / None, optional*) - The backend to use for `torch.compile()`, which is a feature added in torch major version 2 to potentially speed up computation.
    - If you are using torch 1.x, or you set `torch_compile_backend = None`, `torch.compile()` will not be invoked on the DLKoopman neural nets.
    - If you are using torch 2.x, full lists of possible backends can be obtained by running `torch._dynamo.list_backends()` and `torch._dynamo.list_backends(None)`. See the `torch.compile()` documentation for more details.
- `normalize_Xdata` (*bool, optional*) - If `True`, all input states (training, validation, test) are divided by the maximum absolute value in the training data.
    - Note that normalizing data is a generally good technique for deep learning, and is normally done for each feature $f$ in the input data as $X_f = \frac{X_f-\text{offset}_f}{\text{scale}_f}$ (where offset and scale are mean and standard deviation for Gaussian normalization, or minimum value and range for min-max normalization). However, this messes up spectral techniques such as singular value decomposition and eigenvalue decomposition, which are required in Koopman theory. Hence, setting `normalize_Xdata=True` uses a single scale value for normalizing the whole data to get $X = \frac{X}{\text{scale}}$, which leaves the singular vectors and eigenvectors unchanged.
    - Caution: Setting `normalize_Xdata=False` may terminate the run if the imaginary parts of tensors reach values at which the loss function becomes dependent on the phase (in `torch >= 1.11`, this leads to `RuntimeError: linalg_eig_backward: The eigenvectors in the complex case are specified up to multiplication by e^{i phi}. The specified loss function depends on this quantity, so it is ill-defined`). The only benefit of setting `normalize_Xdata=False` is that if the run completes successfully, the final error metrics such as ANAE are slightly more accurate, since they are reported on the un-normalized data values.
- `use_exact_eigenvectors` (*bool, optional*) - If `True`, the exact eigenvectors of the Koopman matrix are used in `StatePred`; if `False`, the projected eigenvectors are used.
    - For a discussion of exact and projected eigenvectors, see Tu et al. or Chapter 1 of Kutz et al. The basic idea is that exact eigenvectors are more accurate, but their computation may be less numerically stable than that of projected eigenvectors.
- `sigma_threshold` (*float, optional*) - When computing the SVD in `StatePred`, singular values lower than this threshold will be reported, since they can be a possible cause of unstable gradients.
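As an illustration of the point above about `normalize_Xdata`, here is a small sketch (numpy is used purely for illustration; dlkoopman itself operates on torch tensors) showing that dividing the whole data matrix by a single positive scalar leaves the singular vectors unchanged, whereas standard per-feature normalization generally does not:

```python
import numpy as np

# Illustrative sketch: single-scale normalization, as done when
# normalize_Xdata=True, rescales the singular values uniformly but
# leaves the singular vectors unchanged.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))

scale = np.abs(X).max()  # single scale value, as in X = X/scale
U, S, Vt = np.linalg.svd(X, full_matrices=False)
Un, Sn, Vtn = np.linalg.svd(X / scale, full_matrices=False)

# Same singular vectors (compared via absolute values to ignore sign choices)
print(np.allclose(np.abs(U), np.abs(Un)))    # True
print(np.allclose(np.abs(Vt), np.abs(Vtn)))  # True
print(np.allclose(S / scale, Sn))            # True: values rescale uniformly

# Standard per-feature normalization changes the singular vectors
X_pf = (X - X.mean(axis=0)) / X.std(axis=0)
Up, _, _ = np.linalg.svd(X_pf, full_matrices=False)
print(np.allclose(np.abs(U), np.abs(Up)))    # False
```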
Attributes

- `RTYPE` (*torch.dtype*) - Data type of real tensors. Is automatically set to `torch.<precision>` (e.g. `torch.float` if `precision="float"`).
- `CTYPE` (*torch.dtype*) - Data type of complex tensors. Is automatically set to `torch.c<precision>` (e.g. `torch.cfloat` if `precision="float"`).
- `DEVICE` (*torch.device*) - Device where tensors reside. Is automatically set to `"cpu"` if `use_cuda=False` or CUDA is not available, otherwise `"cuda"`.
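The mapping from constructor arguments to derived attributes can be sketched as follows. This is a hypothetical illustration, not dlkoopman's actual source; dtype and device names are returned as strings so the example stands alone without importing torch:

```python
# Hypothetical sketch of how RTYPE, CTYPE, and DEVICE follow from the
# constructor arguments. The real Config stores torch.dtype / torch.device
# objects; strings are used here to keep the example self-contained.

def derive_attributes(precision="float", use_cuda=True, cuda_available=False):
    if precision not in ("half", "float", "double"):
        raise ValueError("precision must be 'half', 'float', or 'double'")
    rtype = f"torch.{precision}"    # real dtype, e.g. torch.float
    ctype = f"torch.c{precision}"   # complex dtype, e.g. torch.cfloat
    device = "cuda" if (use_cuda and cuda_available) else "cpu"
    return rtype, ctype, device

print(derive_attributes("float", use_cuda=True, cuda_available=False))
# ('torch.float', 'torch.cfloat', 'cpu')
```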