Preprints, Working Papers

Leveraging Neural Koopman Operators to Learn Continuous Representations of Dynamical Systems from Scarce Data


Abstract

Over the last few years, several works have proposed deep learning architectures to learn dynamical systems from observation data with no or little knowledge of the underlying physics. One line of work, based on Koopman operator theory, relies on learning representations in which the dynamics of the underlying phenomenon can be described by a linear operator. However, despite providing reliable long-term predictions for some dynamical systems in ideal settings, the methods proposed so far have limitations, such as requiring intrinsically continuous dynamical systems to be discretized, which leads to data loss, especially when handling incomplete or sparsely sampled data. Here, we propose a new deep Koopman framework that represents dynamics in an intrinsically continuous way, leading to better performance on limited training data, as exemplified on several datasets arising from dynamical systems.
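The core idea summarized in the abstract, advancing a latent state with a linear operator, and doing so in continuous time via a matrix exponential, can be illustrated on a toy system. This is only a minimal sketch of Koopman-style continuous-time prediction, not the paper's architecture: the encoder is taken as the identity and the generator matrix `L` is assumed known (a harmonic oscillator) rather than learned from data.

```python
import numpy as np

def expm(A):
    """Matrix exponential via eigendecomposition (assumes A is diagonalizable)."""
    w, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

# Continuous-time linear latent dynamics dz/dt = L z.
# Here L generates a harmonic oscillator; in a deep Koopman model,
# L (or a discrete-time counterpart K) would be learned jointly with an encoder.
L = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
z0 = np.array([1.0, 0.0])  # initial latent state

def predict(t, z0=z0, L=L):
    """Predict the latent state at an arbitrary (possibly irregular) time t."""
    return expm(t * L) @ z0

# Because time enters only through exp(t * L), predictions at irregularly
# spaced timestamps require no fixed discretization step:
for t in (0.1, np.pi / 2, 2.7):
    print(t, predict(t))
```

In this formulation the model can be matched directly against sparsely or irregularly sampled observations, which is the data regime the abstract targets; a discrete-time operator, by contrast, only produces states at integer multiples of its step size.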
Main file
Leveraging_Neural_Koopman_Operators_to_Learn_Continuous_Representations_of_Dynamical_Systems_from_Scarce_Data_HAL.pdf (517.51 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03895087 , version 1 (12-12-2022)

Licence

Attribution - CC BY 4.0

Identifiers

  • HAL Id : hal-03895087 , version 1

Cite

Anthony Frion, Lucas Drumetz, Mauro Dalla Mura, Guillaume Tochon, Abdeldjalil Aissa El Bey. Leveraging Neural Koopman Operators to Learn Continuous Representations of Dynamical Systems from Scarce Data. 2022. ⟨hal-03895087⟩