Generative time series models using Neural ODE in Variational Autoencoders. (arXiv:2201.04630v1 [cs.LG])

In this paper, we implement Neural Ordinary Differential Equations in a
Variational Autoencoder setting for generative time series modeling. The code
takes an object-oriented approach to ease further development and research, and
all code used in the paper can be found here:
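As a rough sketch of the idea, and not the paper's actual implementation, a Neural ODE decoder generates a time series by integrating learned latent dynamics forward from an initial latent state and projecting each latent state to an observation. All dimensions, weights, and function names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not taken from the paper)
latent_dim, hidden_dim, obs_dim = 4, 16, 1

# Randomly initialised weights standing in for a trained model
W1 = rng.normal(0.0, 0.1, (hidden_dim, latent_dim))
W2 = rng.normal(0.0, 0.1, (latent_dim, hidden_dim))
W_out = rng.normal(0.0, 0.1, (obs_dim, latent_dim))

def f(z):
    """Learned latent dynamics dz/dt = f(z), here a small MLP."""
    return W2 @ np.tanh(W1 @ z)

def decode(z0, t_grid):
    """Integrate the latent ODE with fixed-step RK4 and map each
    latent state to an observation at the requested time points."""
    z, xs = z0, []
    for i in range(len(t_grid)):
        xs.append(W_out @ z)
        if i + 1 < len(t_grid):
            h = t_grid[i + 1] - t_grid[i]
            k1 = f(z)
            k2 = f(z + 0.5 * h * k1)
            k3 = f(z + 0.5 * h * k2)
            k4 = f(z + h * k3)
            z = z + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return np.stack(xs)

z0 = rng.normal(size=latent_dim)          # would come from the encoder
traj = decode(z0, np.linspace(0.0, 1.0, 50))
print(traj.shape)  # (50, 1): 50 time steps, 1 observed dimension
```

In the VAE setting, `z0` would be sampled from the encoder's approximate posterior rather than drawn at random, and the integrator would typically be an adaptive ODE solver.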

The results were first reproduced and the reconstructions compared to a
baseline Long Short-Term Memory (LSTM) autoencoder. The model was then extended
with an LSTM encoder and challenged with more complex data consisting of time
series of spring oscillations. The model showed promise and was able to
reconstruct true trajectories for all complexities of data with a smaller RMSE
than the baseline model. However, while it captured the dynamic behavior of the
time series for known data in the decoder, it was not able to extrapolate along
the true trajectory well for any of the complexities of spring data. In a final
experiment, the model was also presented with 68 days of solar power production
data, and was able to reconstruct just as well as the baseline, even when very
little data is available.

Finally, the model's training time was compared to the baseline's. For small
amounts of data the NODE method trained significantly more slowly than the
baseline, while for larger amounts of data it trained as fast as or faster than
the baseline.

The paper closes with a future work section describing the many natural
extensions of the work presented here, such as further investigating the
importance of the input data, adding extrapolation to the baseline model, or
testing more specific model setups.
