Diffusion Model Compression for Image-to-Image Translation
Abstract
As recent advances in large-scale Text-to-Image (T2I) diffusion models have enabled the generation of remarkably high-quality images, diverse downstream Image-to-Image (I2I) applications have emerged. Despite the impressive results achieved by these I2I models, their practical utility is hampered by their large model size and the computational burden of the iterative denoising process. In this paper, we propose a novel compression method tailored for diffusion-based I2I models. Based on the observations that the image conditions of I2I models already provide rich information on image structures, and that the time steps with a larger impact on the output tend to be biased toward a particular range, we develop surprisingly simple yet effective approaches for reducing the model size and latency. We validate the effectiveness of our method on three representative I2I tasks: InstructPix2Pix for image editing, StableSR for image restoration, and ControlNet for image-conditional image generation.
Method Overview
Our compression method consists of two components: depth-skip compression, which effectively reduces model size, and time-step optimization, which accelerates the diffusion sampling process.
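To make the depth-skip idea concrete, below is a minimal PyTorch sketch on a toy U-Net. The `TinyUNet` class and its `keep_depth` knob are hypothetical illustrations, not the paper's implementation: truncating the network at a shallower depth drops the deepest down/up blocks and the mid block, while the remaining skip connections keep input and output shapes intact, so the pruned model stays a drop-in replacement.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy U-Net illustrating depth-skip compression.

    Hypothetical stand-in for a diffusion U-Net; `keep_depth` is an
    illustrative knob, not the paper's actual implementation.
    """

    def __init__(self, channels=(32, 64, 128), keep_depth=None):
        super().__init__()
        # Descend only `keep_depth` levels; None keeps the full depth.
        self.depth = len(channels) - 1 if keep_depth is None else keep_depth
        self.downs = nn.ModuleList()
        self.ups = nn.ModuleList()
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            self.downs.append(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1))
            self.ups.insert(0, nn.ConvTranspose2d(c_out, c_in, 4, stride=2, padding=1))
        self.mid = nn.Conv2d(channels[-1], channels[-1], 3, padding=1)

    def forward(self, x):
        skips = []
        for down in self.downs[:self.depth]:   # pruned model stops descending early
            skips.append(x)
            x = down(x)
        if self.depth == len(self.downs):      # mid block runs only at full depth
            x = self.mid(x)
        for up in self.ups[len(self.ups) - self.depth:]:
            x = up(x) + skips.pop()            # U-Net skip connection
        return x

x = torch.randn(1, 32, 64, 64)
full = TinyUNet()                  # full-depth baseline
pruned = TinyUNet(keep_depth=1)    # depth-skip: drop the deepest level + mid block
assert full(x).shape == pruned(x).shape == x.shape
```

Because the output shape is unchanged, such a pruned network can be swapped in without touching the surrounding sampling loop; the intuition from the abstract is that the I2I image condition already supplies the structural information the removed deep blocks would otherwise compute.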
Depth-skip Compression Results
Our depth-skip pruning outperforms previous state-of-the-art methods by a significant margin, even without fine-tuning.
Time-step Optimization Results
Our time-step optimization consistently outperforms the original uniform time-step scheduling.
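To illustrate what a departure from uniform scheduling looks like, here is a small NumPy sketch contrasting a uniform (DDIM-style) step selection with a hypothetical power-law-warped schedule. The warp and its `gamma` knob are assumptions for demonstration only, not the paper's optimized schedule.

```python
import numpy as np

def uniform_schedule(num_steps, T=1000):
    # Baseline: evenly spaced time steps over [0, T-1], as in standard
    # uniform sampling schedules.
    return np.linspace(T - 1, 0, num_steps).round().astype(int)

def biased_schedule(num_steps, T=1000, gamma=2.0):
    # Hypothetical non-uniform schedule: a power-law warp (`gamma` is an
    # assumption for illustration, not the paper's optimized schedule).
    # For gamma > 1 the steps cluster near t = T-1, i.e. the high-noise
    # phase receives a finer step allocation.
    u = np.linspace(0.0, 1.0, num_steps)
    return ((1.0 - u ** gamma) * (T - 1)).round().astype(int)

print(uniform_schedule(5))  # [999 749 500 250   0]
print(biased_schedule(5))   # [999 937 749 437   0] -- denser near t = 999
```

Setting `gamma` above or below 1 shifts the step density toward either end of the trajectory; the paper's time-step optimization determines the allocation directly, whereas this warp only illustrates how a biased schedule differs from uniform spacing.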
Citation
Acknowledgements
The website template was borrowed from Michaël Gharbi and ReconFusion.