
March 21, 2019

Implicit generation and generalization methods for energy-based models


We’ve made progress towards stable and scalable training of energy-based models (EBMs), resulting in better sample quality and generalization ability than existing models. Generation in EBMs spends more compute to continually refine its answers; doing so can produce samples competitive with GANs at low temperatures, while retaining the mode-coverage guarantees of likelihood-based models. We hope these findings stimulate further research into this promising class of models.

Generative modeling is the task of observing data, such as images or text, and learning to model the underlying data distribution. Accomplishing this task leads models to understand high-level features in data and synthesize examples that look like real data. Generative models have many applications in natural language, robotics, and computer vision.

Energy-based models represent probability distributions over data by assigning an unnormalized probability scalar (or “energy”) to each input data point. This provides useful modeling flexibility: any model that outputs a real number given an input can be used as an energy model. The difficulty, however, lies in sampling from these models.
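To make this concrete, here is a minimal PyTorch sketch of an energy function. The architecture and dimensions are placeholders rather than the networks used in the paper; the point is only that any scalar-output network defines an unnormalized density p(x) ∝ exp(−E(x)).

```python
import torch
import torch.nn as nn

# Hypothetical energy function: any network mapping an input to a single
# real-valued scalar can serve as an EBM, with p(x) proportional to exp(-E(x)).
class EnergyNet(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # single scalar energy E(x)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # shape: (batch,)
```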

To generate samples from EBMs, we use an iterative refinement process based on Langevin dynamics. Informally, this involves performing noisy gradient descent on the energy function to arrive at low-energy configurations (see the paper for more details). Unlike GANs, VAEs, and flow-based models, this approach does not require an explicit neural network to generate samples: samples are generated implicitly. The combination of EBMs and iterative refinement has the following benefits (a sketch of the sampling loop follows the list):

  • Adaptive computation time. We can run sequential refinement for a long time to generate sharp, diverse samples, or for a short time to generate coarse, less diverse samples. In the limit of infinite time, this procedure is known to generate true samples from the energy model.

  • Not restricted by a generator network. In both VAEs and flow-based models, the generator must learn a map from a continuous space to a possibly disconnected space containing different data modes, which requires large capacity and may not even be possible to learn. EBMs, by contrast, can easily learn to assign low energies to disjoint regions.

  • Built-in compositionality. Since each model represents an unnormalized probability distribution, models can be naturally combined through product of experts or other hierarchical models.
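The refinement loop itself is short. Below is a minimal sketch of Langevin-dynamics sampling, assuming an energy network like the one sketched above; the step size, noise scale, and number of steps are illustrative placeholders, not the settings used in the paper.

```python
import torch

def langevin_sample(energy_fn, x_init, n_steps=60, step_size=10.0, noise_scale=0.005):
    """Noisy gradient descent on the energy E(x) (Langevin dynamics).

    Hyperparameter values are placeholders, not the paper's settings.
    """
    x = x_init.clone().detach()
    for _ in range(n_steps):
        x.requires_grad_(True)
        energy = energy_fn(x).sum()
        grad = torch.autograd.grad(energy, x)[0]
        # Step toward lower energy, plus Gaussian noise to explore the distribution.
        x = x.detach() - step_size * grad + noise_scale * torch.randn_like(x)
        x = x.clamp(0.0, 1.0)  # keep samples in a valid pixel range
    return x.detach()
```

Starting from noise and running more steps trades extra compute for sharper samples, which is the adaptive-computation-time property described in the list above.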

Generation

We found that energy-based models are able to generate qualitatively and quantitatively high-quality images, especially when running the refinement process for longer at test time. By running iterative optimization on individual images, we can auto-complete images and morph images from one class (such as truck) to another (such as frog).


In addition to generating images, we found that energy-based models are able to generate stable robot dynamics trajectories across a large number of timesteps. EBMs can generate a diverse set of possible futures, while feedforward models collapse to a mean prediction.


Generalization

We tested energy-based models on classifying several different out-of-distribution datasets and found that they outperform other likelihood models such as flow-based and autoregressive models. We also tested classification using conditional energy-based models, and found that the resulting classification exhibited good generalization to adversarial perturbations. Despite never being trained for classification, our model performed classification better than models explicitly trained against adversarial perturbations.
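One way to read classification with a conditional EBM is as an argmin over class-conditional energies. The sketch below assumes a hypothetical energy_fn(x, y) that returns one energy per example for a batch of inputs and a candidate label; it is an illustration of the idea rather than the paper's exact procedure.

```python
import torch

def classify(energy_fn, x, num_classes=10):
    """Score every candidate label and predict the one with the lowest energy."""
    batch = x.shape[0]
    energies = torch.stack(
        [energy_fn(x, torch.full((batch,), y, dtype=torch.long)) for y in range(num_classes)],
        dim=1,                       # shape: (batch, num_classes)
    )
    return energies.argmin(dim=1)    # predicted class = lowest conditional energy
```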

Lessons learned

We found evidence suggesting the following observations, though we are by no means certain that they are correct:

  • We found it difficult to apply vanilla HMC to EBM training, as the optimal step sizes and numbers of leapfrog steps vary greatly over the course of training; applying adaptive HMC would be an interesting extension.

  • We found that training ensembles of energy functions (sampling from and evaluating on the ensemble) helped a bit, but the gain was not worth the added complexity.

  • We didn’t find much success adding a gradient penalty term, as it seemed to hurt model capacity and sampling.

More tips, observations, and failures from this research can be found in Section A.8 of the paper.

Next steps

We found preliminary indications that we can compose multiple energy-based models via a product of experts model. We trained one model on shapes of different sizes at a fixed position, and another model on shapes of a fixed size at different positions. By combining the resulting energy-based models, we were able to generate shapes of different sizes at different positions, despite never seeing examples in which both varied.
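Mechanically, the combination is simple: multiplying the unnormalized densities of a product of experts corresponds to summing the energies, so the combined model can be sampled with the same Langevin refinement loop as a single EBM. The function names below are hypothetical, and the snippet reuses the langevin_sample sketch from earlier.

```python
# Product of experts: summing energies multiplies the unnormalized densities.
def combined_energy(x, energy_size, energy_position):
    return energy_size(x) + energy_position(x)

# e.g. samples = langevin_sample(lambda x: combined_energy(x, e_size, e_pos), x_init)
```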


Compositionality is one of the unsolved challenges facing AI systems today, and we are excited about what energy-based models can do here. If you are excited to work on energy-based models, please consider applying to OpenAI!

Authors

Yilun Du, Igor Mordatch

Acknowledgments

Thanks to Ilya Sutskever, Greg Brockman, Bob McGrew, Johannes Otterbach, Jacob Steinhardt, Harri Edwards, Yura Burda, Jack Clark and Ashley Pilipiszyn for feedback on this blog post and manuscript.