We propose a novel Latent Diffusion Transformer, named Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model the video distribution in the latent space. To handle the substantial number of tokens extracted from videos, we introduce four efficient variants that decompose the spatial and temporal dimensions of the input. To improve the quality of generated videos, we determine best practices for Latte through rigorous experimental analysis, covering video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation shows that Latte achieves state-of-the-art performance on four standard video generation datasets: FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. We further extend Latte to the text-to-video (T2V) generation task, where it achieves results comparable to recent T2V models. We believe Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.
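To make the spatio-temporal decomposition concrete, the sketch below is a minimal PyTorch illustration of one such variant: it alternates a Transformer block over the patches within each frame (spatial) with a block over the frames at each patch location (temporal). All module names, shapes, and hyperparameters here are our own assumptions for illustration, not the released Latte implementation.

```python
import torch
import torch.nn as nn

class AlternatingSpaceTimeBlocks(nn.Module):
    """Sketch of interleaved spatial/temporal Transformer blocks over
    latent video tokens of shape (B, T, S, D): T frames, S spatial
    patches per frame, D token dimension. Hypothetical module names."""

    def __init__(self, dim=512, heads=8, depth=2):
        super().__init__()
        self.spatial = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                       batch_first=True, norm_first=True)
            for _ in range(depth))
        self.temporal = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                       batch_first=True, norm_first=True)
            for _ in range(depth))

    def forward(self, x):  # x: (B, T, S, D)
        B, T, S, D = x.shape
        for s_blk, t_blk in zip(self.spatial, self.temporal):
            # Spatial block: attend across the S patches of each frame.
            x = s_blk(x.reshape(B * T, S, D)).reshape(B, T, S, D)
            # Temporal block: attend across the T frames at each patch.
            x = x.permute(0, 2, 1, 3).reshape(B * S, T, D)
            x = t_blk(x).reshape(B, S, T, D).permute(0, 2, 1, 3)
        return x

tokens = torch.randn(2, 16, 64, 512)   # 16 frames, 8 x 8 latent patches
out = AlternatingSpaceTimeBlocks()(tokens)
print(out.shape)  # torch.Size([2, 16, 64, 512])
```

Decomposing attention this way keeps each attention call over S or T tokens rather than S*T, which is what makes the large token count from videos tractable.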
Unconditional video generation on the Taichi-HD, FaceForensics, and SkyTimelapse datasets (each at 256 x 256 resolution). Sample panels, left to right: Taichi-HD, FaceForensics, SkyTimelapse.
Class-conditional video generation: given a class label, Latte generates the corresponding videos. Results are shown on the UCF101 (256 x 256) dataset.
Text-to-video generation: Latte generates the desired videos from the text prompts below.
Yellow and black tropical fish dart through the sea.
An epic tornado attacking above a glowing city at night.
Slow pan upward of blazing oak fire in an indoor fireplace.
A cat wearing sunglasses and working as a lifeguard at a pool.
Sunset over the sea.
A dog in an astronaut suit and sunglasses floating in space.
Visual comparison with other state-of-the-art methods on the UCF101, Taichi-HD, FaceForensics, and SkyTimelapse datasets, respectively.
UCF101: PVDM · Ours
Taichi-HD: DIGAN · LVDM · Ours
FaceForensics: StyleGAN-V · PVDM · Ours
SkyTimelapse: StyleGAN-V · PVDM · Ours
If you find this work useful for your research, please consider citing:
@article{ma2024latte,
  title   = {Latte: Latent Diffusion Transformer for Video Generation},
  author  = {Ma, Xin and Wang, Yaohui and Jia, Gengyun and Chen, Xinyuan and Liu, Ziwei and Li, Yuan-Fang and Chen, Cunjian and Qiao, Yu},
  journal = {arXiv preprint arXiv:2401.03048},
  year    = {2024}
}