Neural Ordinary Differential Equations

TLDR

The paper introduces a new family of deep neural network models in which a neural network parameterizes the derivative of the hidden state and a black-box ODE solver computes the output. This yields continuous-depth residual networks, continuous-time latent variable models, and continuous normalizing flows, all trainable end-to-end through a scalable backpropagation scheme that works with any solver. The resulting models use constant memory, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed.
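
To make the continuous-depth idea concrete, here is a minimal sketch, assuming PyTorch; the names ODEFunc, ODEBlock, and n_steps are illustrative, not the paper's code. A small network parameterizes dh/dt = f(h, t; theta), and a fixed-step Runge-Kutta loop stands in for the black-box solver.

```python
import torch
import torch.nn as nn


class ODEFunc(nn.Module):
    """Parameterizes the hidden-state derivative f(h, t; theta)."""

    def __init__(self, dim, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, width), nn.Tanh(), nn.Linear(width, dim))

    def forward(self, t, h):
        # Append t to the state so the dynamics can depend on depth/time.
        t_col = t.expand(h.shape[0], 1)
        return self.net(torch.cat([h, t_col], dim=1))


class ODEBlock(nn.Module):
    """Replaces a discrete stack of residual layers with one solve on [0, 1]."""

    def __init__(self, func, n_steps=20):
        super().__init__()
        self.func, self.n_steps = func, n_steps

    def forward(self, h):
        t, dt = torch.zeros(1), 1.0 / self.n_steps
        for _ in range(self.n_steps):  # classic fixed-step RK4
            k1 = self.func(t, h)
            k2 = self.func(t + dt / 2, h + dt * k1 / 2)
            k3 = self.func(t + dt / 2, h + dt * k2 / 2)
            k4 = self.func(t + dt, h + dt * k3)
            h = h + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
            t = t + dt
        return h


block = ODEBlock(ODEFunc(dim=4))
h0 = torch.randn(8, 4)   # batch of initial hidden states h(0)
h1 = block(h0)           # h(1), the block's output
```

Raising n_steps (or, in a real adaptive solver, tightening the error tolerance) buys precision at the cost of more function evaluations, which is the precision-for-speed trade-off described above. Note that backpropagating through this loop stores every intermediate state, so it does not yet have constant memory cost; that comes from the adjoint method sketched after the abstract.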

Abstract

We introduce a new family of deep neural network models. Instead of specifying a discrete sequence of hidden layers, we parameterize the derivative of the hidden state using a neural network. The output of the network is computed using a black-box differential equation solver. These continuous-depth models have constant memory cost, adapt their evaluation strategy to each input, and can explicitly trade numerical precision for speed. We demonstrate these properties in continuous-depth residual networks and continuous-time latent variable models. We also construct continuous normalizing flows, a generative model that can train by maximum likelihood, without partitioning or ordering the data dimensions. For training, we show how to scalably backpropagate through any ODE solver, without access to its internal operations. This allows end-to-end training of ODEs within larger models.
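
The "scalable backpropagation through any ODE solver" referred to here is the adjoint sensitivity method: gradients are obtained by solving a second, augmented ODE backwards in time, so nothing from the forward solve needs to be stored. Below is a hedged sketch under the same PyTorch assumptions as above, using fixed Euler steps for brevity; adjoint_gradients is an illustrative name, and a real implementation would reuse an adaptive solver here too.

```python
import torch


def adjoint_gradients(func, h1, dLdh1, params, t0=0.0, t1=1.0, n_steps=20):
    """Recover dL/dtheta from h(t1) and dL/dh(t1) without stored activations.

    Integrates the state h, the adjoint a = dL/dh, and the parameter
    gradient accumulators backwards from t1 to t0 with Euler steps.
    """
    params = list(params)
    h, a = h1.detach(), dLdh1.detach()
    grads = [torch.zeros_like(p) for p in params]
    dt = (t1 - t0) / n_steps
    t = torch.tensor(t1)
    for _ in range(n_steps):
        with torch.enable_grad():
            h_req = h.detach().requires_grad_(True)
            f = func(t, h_req)
            # One vector-Jacobian product gives a^T df/dh and a^T df/dtheta.
            vjps = torch.autograd.grad(
                f, [h_req] + params, grad_outputs=a, allow_unused=True)
        h = h - dt * f.detach()   # state runs backwards: dh/dt = f
        a = a + dt * vjps[0]      # adjoint dynamics: da/dt = -a^T df/dh
        grads = [g if v is None else g + dt * v
                 for g, v in zip(grads, vjps[1:])]
        t = t - dt
    return grads


# Usage with ODEBlock from the sketch above, for the loss L = h1.sum():
with torch.no_grad():              # forward solve keeps no autograd history
    h1 = block(h0)
dLdh1 = torch.ones_like(h1)        # dL/dh(1) for L = h1.sum()
param_grads = adjoint_gradients(block.func, h1, dLdh1,
                                block.func.parameters())
```

Only the current state, adjoint, and gradient accumulators are kept at any step, so memory cost is constant in the number of solver steps, matching the property the abstract highlights.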
