Simplifying, stabilizing, and scaling continuous-time consistency models
Continuous-time consistency models with sample quality comparable to leading diffusion models in just two sampling steps.
Diffusion models have revolutionized generative AI, enabling remarkable advances in generating realistic images, 3D models, audio, and video. However, despite their impressive results, these models are slow at sampling.
We are sharing a new approach, called sCM, which simplifies the theoretical formulation of continuous-time consistency models, allowing us to stabilize and scale their training on large-scale datasets. This approach achieves sample quality comparable to leading diffusion models while using only two sampling steps. We are also sharing our research paper to support further progress in this field.
Introduction
Current sampling methods for diffusion models often require dozens to hundreds of sequential steps to generate a single sample, which limits their efficiency and scalability for real-time applications. Various distillation techniques have been developed to accelerate sampling, but they often come with limitations such as high computational cost, complex training procedures, and reduced sample quality.
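To make that cost concrete, here is a minimal sketch (not our production code) of a many-step diffusion sampler, written as an Euler discretization of the probability-flow ODE. The `denoiser` stand-in, the noise schedule, and the step count are illustrative assumptions; the point is that every step costs one forward pass of a large network.

```python
# Illustrative sketch only: why many-step diffusion sampling is slow.
import math
import torch

def denoiser(x, sigma):
    # Placeholder for a trained diffusion model that predicts the clean sample;
    # a real model would be a large U-Net or transformer.
    return x / (1.0 + sigma ** 2) ** 0.5

def diffusion_sample(shape, num_steps=100, sigma_max=80.0, sigma_min=0.002):
    # Geometric noise schedule from sigma_max down to sigma_min (assumed values).
    sigmas = torch.exp(torch.linspace(math.log(sigma_max), math.log(sigma_min), num_steps))
    x = torch.randn(shape) * sigma_max
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoiser(x, sigma)) / sigma   # ODE direction at the current noise level
        x = x + (sigma_next - sigma) * d       # one Euler step = one network forward pass
    return x

sample = diffusion_sample((1, 3, 64, 64))      # ~100 network evaluations per sample
```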
Extending our previous research on consistency models [1, 2], we have simplified the formulation and further stabilized the training process of continuous-time consistency models. Our new approach, called sCM, has enabled us to scale the training of continuous-time consistency models to an unprecedented 1.5 billion parameters on ImageNet at 512×512 resolution. sCMs can generate samples with quality comparable to diffusion models using only two sampling steps, resulting in a ~50x wall-clock speedup. For example, our largest model, with 1.5 billion parameters, generates a single sample in just 0.11 seconds on a single A100 GPU without any inference optimization. Additional acceleration is easily achievable through customized system optimization, opening up possibilities for real-time generation in various domains such as image, audio, and video.
For rigorous evaluation, we benchmarked sCM against other state-of-the-art generative models on two axes: sample quality, measured by the standard Fréchet Inception Distance (FID) score (lower is better), and effective sampling compute, an estimate of the total compute cost of generating each sample. As shown below, our 2-step sCM produces samples with quality comparable to the best previous methods while using less than 10% of the effective sampling compute, significantly accelerating the sampling process.
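As an illustrative back-of-the-envelope calculation (the step counts below are assumptions, not measured figures), effective sampling compute can be approximated as the number of model evaluations per sample times the cost of one forward pass:

```python
# Illustrative only: assumed step counts, not measured benchmark data.
forward_cost = 1.0        # normalized cost of one forward pass of the network
diffusion_steps = 100     # hypothetical many-step diffusion sampler
scm_steps = 2             # two-step sCM sampler

ratio = (scm_steps * forward_cost) / (diffusion_steps * forward_cost)
print(f"sCM effective compute: {ratio:.0%} of the diffusion sampler")  # 2% here
```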
How it works
Consistency models offer a faster alternative to traditional diffusion models for generating high-quality samples. Unlike diffusion models, which generate samples gradually through a large number of denoising steps, consistency models aim to convert noise directly into noise-free samples in a single step. This difference is visualized by paths in the diagram: the blue line represents the gradual sampling process of a diffusion model, while the red curve illustrates the more direct, accelerated sampling of a consistency model. Using techniques like consistency training or consistency distillation [1, 2], consistency models can be trained to generate high-quality samples with significantly fewer steps, making them appealing for practical applications that require fast generation.
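Here is a minimal sketch of this sampling procedure, assuming a hypothetical trained consistency model `consistency_fn(x, sigma)` that maps a noisy input directly to a clean-sample estimate; this is not the released sCM code, only the shape of one- and two-step sampling.

```python
# Illustrative sketch of one- and two-step consistency sampling.
import torch

def consistency_fn(x, sigma):
    # Placeholder for a trained consistency model f(x, sigma) -> clean sample estimate.
    return x / (1.0 + sigma ** 2) ** 0.5

def two_step_sample(shape, sigma_max=80.0, sigma_mid=0.8):
    # Step 1: map pure noise directly to an estimate of a clean sample.
    x = torch.randn(shape) * sigma_max
    x0 = consistency_fn(x, sigma_max)
    # Step 2: re-noise to an intermediate level and map to clean again,
    # refining details at the cost of one extra forward pass.
    x_mid = x0 + sigma_mid * torch.randn_like(x0)
    return consistency_fn(x_mid, sigma_mid)

sample = two_step_sample((1, 3, 64, 64))  # 2 network evaluations in total
```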
We've trained a continuous-time consistency model with 1.5B parameters on ImageNet at 512×512 resolution, and provided two-step samples from this model to demonstrate its capabilities.
Our sCM distills knowledge from a pre-trained diffusion model. A key finding is that sCMs improve proportionally with the teacher diffusion model as both scale up. Specifically, the relative difference in sample quality, measured by the ratio of FID scores, remains consistent across several orders of magnitude in model sizes, causing the absolute difference in sample quality to diminish at scale. Additionally, increasing the sampling steps for sCMs further reduces the quality gap. Notably, two-step samples from sCMs are already comparable (with less than a 10% relative difference in FID scores) to samples from the teacher diffusion model, which requires hundreds of steps to generate.
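For intuition, below is a simplified, hypothetical sketch of a single consistency-distillation update in discrete time. The actual sCM objective is continuous-time and uses the simplifications and stabilizations described in the paper; the toy `teacher_denoiser` and `ConsistencyStudent` stand in for a pre-trained diffusion model and a large consistency network.

```python
# Simplified, illustrative consistency-distillation step (not the sCM objective).
import torch

def teacher_denoiser(x, sigma):
    # Placeholder for the pre-trained teacher diffusion model.
    return x / (1.0 + sigma ** 2) ** 0.5

class ConsistencyStudent(torch.nn.Module):
    # Toy stand-in for the student consistency model f_theta(x, sigma).
    def __init__(self):
        super().__init__()
        self.scale = torch.nn.Parameter(torch.ones(1))

    def forward(self, x, sigma):
        return self.scale * x / (1.0 + sigma ** 2) ** 0.5

student = ConsistencyStudent()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def distillation_step(x0, sigma=1.0, sigma_next=0.9):
    # Corrupt a clean sample to noise level sigma.
    x_sigma = x0 + sigma * torch.randn_like(x0)
    with torch.no_grad():
        # One Euler step of the teacher's probability-flow ODE, sigma -> sigma_next.
        d = (x_sigma - teacher_denoiser(x_sigma, sigma)) / sigma
        x_next = x_sigma + (sigma_next - sigma) * d
        # Target: the student's output at the lower noise level (stop-gradient;
        # in practice an EMA copy of the student is often used here).
        target = student(x_next, sigma_next)
    # Consistency loss: the student should agree with itself along the teacher's ODE.
    loss = torch.mean((student(x_sigma, sigma) - target) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

loss = distillation_step(torch.randn(4, 3, 32, 32))
```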
Limitations
The best sCMs still rely on pre-trained diffusion models for initialization and distillation, resulting in a small but consistent gap in sample quality compared to the teacher diffusion model. Additionally, FID has its own limitations as a measure of sample quality: closeness in FID scores does not always reflect closeness in actual sample quality, and vice versa. Therefore, the quality of sCMs may need to be assessed differently depending on the requirements of specific applications.
What's next
We will continue to work toward developing better generative models with both improved inference speed and sample quality. We believe these advancements will unlock new possibilities for real-time, high-quality generative AI across a wide range of domains.