Journal Club: SINDy-Autoencoder

Charles Guan
Nov 2, 2020

A brief description of Champion, Lusch, Kutz, and Brunton (PNAS 2019).

Overview

In “Data-driven discovery of coordinates and governing equations,” Champion, Lusch, Kutz, and Brunton develop a method to discover low-dimensional dynamics in high-dimensional systems. Their work is motivated by the recognition that discovering governing equations for a dynamical system first requires an appropriate coordinate system. The authors build on SINDy (sparse identification of nonlinear dynamics) by adding an autoencoder to SINDy’s sparse regression framework. The autoencoder serves as a flexible model for finding a coordinate transform in which the underlying dynamics are simple, so the new method simultaneously discovers both the coordinate system and the governing equations.

Machine Learning Methods

The proposed method, SINDy-Autoencoder, combines SINDy’s sparse regression framework with an autoencoder neural network. I will touch on each component separately below.

SINDy (sparse identification of nonlinear dynamics) solves a linear inverse problem: it represents dx/dt as a sparse linear combination of a library of candidate (nonlinear) functions of x. The library can include polynomial, trigonometric, or other terms.
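As a concrete sketch of that sparse-regression step, here is a minimal sequentially thresholded least-squares loop on a toy one-dimensional system (the names and thresholds are illustrative, not the authors’ code):

```python
import numpy as np

def sindy_stlsq(X, dXdt, library_fns, threshold=0.1, n_iter=10):
    """Fit dX/dt ≈ Theta(X) @ Xi, then repeatedly zero out small
    coefficients and refit (sequentially thresholded least squares)."""
    Theta = np.column_stack([f(X) for f in library_fns])  # candidate library
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]      # initial dense fit
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold                    # prune weak terms
        Xi[small] = 0.0
        big = ~small
        if big.any():                                     # refit surviving terms
            Xi[big] = np.linalg.lstsq(Theta[:, big], dXdt, rcond=None)[0]
    return Xi

# Toy system with known dynamics dx/dt = -2x, sampled on a grid
x = np.linspace(-2, 2, 200)
dxdt = -2.0 * x
library = [lambda x: np.ones_like(x),  # constant term
           lambda x: x,
           lambda x: x**2,
           lambda x: x**3]
Xi = sindy_stlsq(x, dxdt, library)
# Xi ≈ [0, -2, 0, 0]: only the single active library term survives
```

The sparsity-promoting refit is what lets SINDy return an interpretable equation rather than a dense fit over every library term.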

Sparse identification of nonlinear dynamics (SINDY) in the Lorenz system. Source: Brunton, Proctor, and Kutz (2016). Discovering governing equations from data by sparse identification of nonlinear dynamical systems. PNAS. https://doi.org/10.1073/pnas.1517384113

Autoencoders use neural networks to reduce the dimensionality of the input while retaining the ability to reconstruct it. The compressed representation is called the “code,” and the compressing network (“encoder”) and decompressing network (“decoder”) can each have multiple layers, as shown in the diagram below. Autoencoders provide a data-driven approach to nonlinear dimensionality reduction.
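A minimal sketch of the encoder/decoder structure, using arbitrary layer sizes and untrained random weights (illustrative only; in practice the weights are trained to minimize the reconstruction error):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(a):
    return np.maximum(a, 0.0)

# Hypothetical sizes: a 128-dim input squeezed through a 3-dim code
d_in, d_hidden, d_code = 128, 32, 3

# Encoder and decoder weights (random here; normally trained by
# minimizing ||x - decode(encode(x))||^2 over the data)
W_enc1 = rng.normal(size=(d_in, d_hidden))
W_enc2 = rng.normal(size=(d_hidden, d_code))
W_dec1 = rng.normal(size=(d_code, d_hidden))
W_dec2 = rng.normal(size=(d_hidden, d_in))

def encode(x):
    return relu(x @ W_enc1) @ W_enc2   # x -> low-dimensional code z

def decode(z):
    return relu(z @ W_dec1) @ W_dec2   # z -> reconstruction x_hat

x = rng.normal(size=(16, d_in))        # a batch of 16 inputs
z = encode(x)                          # shape (16, 3): the bottleneck
x_hat = decode(z)                      # shape (16, 128): same size as input
```

The bottleneck dimension is the key design choice: in SINDy-Autoencoder, the code z is the low-dimensional coordinate system in which the dynamics are fit.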

Source: https://towardsdatascience.com/applied-deep-learning-part-3-autoencoders-1c083af4d798

SINDy-Autoencoder combines three terms in its training loss: the autoencoder reconstruction loss, the SINDy dynamics loss (enforced in both the latent and original coordinates), and an L1 sparsity penalty on the SINDy coefficients.
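A sketch of how those terms might be combined, assuming the SINDy predictions for the derivatives have already been computed; the function signature and weights are hypothetical, not the authors’ implementation:

```python
import numpy as np

def sindy_ae_loss(x, x_hat, zdot, zdot_sindy, xdot, xdot_sindy, Xi,
                  lam1=1e-3, lam2=1e-3, lam3=1e-2):
    """Combined SINDy-Autoencoder objective (illustrative weights):
    reconstruction + dynamics consistency in x and z + L1 sparsity."""
    recon = np.mean((x - x_hat) ** 2)          # autoencoder reconstruction
    dyn_x = np.mean((xdot - xdot_sindy) ** 2)  # SINDy prediction mapped back to x
    dyn_z = np.mean((zdot - zdot_sindy) ** 2)  # SINDy prediction in latent z
    sparsity = np.sum(np.abs(Xi))              # L1 penalty on SINDy coefficients
    return recon + lam1 * dyn_x + lam2 * dyn_z + lam3 * sparsity

# Toy call with zero arrays: only the sparsity term contributes
x = np.zeros((10, 4))
z = np.zeros((10, 2))
Xi_demo = np.array([[1.0], [-2.0], [0.0]])
loss = sindy_ae_loss(x, x, z, z, x, x, Xi_demo, lam3=1e-2)
# loss == 1e-2 * (|1| + |-2|) == 0.03
```

Minimizing the dynamics terms forces the learned code to evolve according to the sparse model, while the L1 term keeps that model parsimonious.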

SINDy-Autoencoder schematic. Source: Champion, Lusch, Kutz, and Brunton (PNAS 2019) / (CC-BY 4.0)

Applicability and Limitations

The authors apply SINDy-Autoencoder to three examples: the Lorenz system, a lambda–omega reaction–diffusion system, and a nonlinear pendulum. They provide example code at https://github.com/kpchamp/SindyAutoencoders

One possible limitation is that deep learning methods, including autoencoders, tend to require a large amount of data to prevent overfitting. These demonstrations, although impressive, were all simulations where the ground-truth equations were known and training data was effectively unlimited.

Related work

Aside from building on standard SINDy, SINDy-Autoencoder shares some features with another recent method, LFADS (Latent Factor Analysis via Dynamical Systems). Both methods use autoencoders to discover the low-dimensional dynamics of an observed system. While SINDy-Autoencoder feeds in explicit derivatives as training data, LFADS uses recurrent neural networks in its autoencoder to learn the dynamics from the states alone, so in LFADS the model discovery and coordinate transform are done “all at once.”

Another distinction is that LFADS uses variational autoencoders instead of SINDy-Autoencoder’s standard autoencoders, although this difference may mostly be due to noise robustness. Variational autoencoders can be thought of as a regularized version of autoencoders, which is helpful with “noisy” data like neural recordings.
