Cinemo: Consistent and Controllable Image Animation with Motion Diffusion Models
Click to play results from Cinemo!
Methodology
Image animation aims to generate dynamic visual content from a static input image. Diffusion models, with their powerful generative capabilities, have become mainstream in image animation research and achieved remarkable success. However, preserving the fine details of the input image over time (such as its style, background, and objects) and ensuring smooth motion in videos guided by textual prompts remain considerable challenges. In this paper, we propose Cinemo, a novel method for motion-controllable image animation with strong consistency. Rather than directly generating subsequent frames, our framework learns the distribution of motion residuals. In addition, we propose an effective strategy based on the structural similarity index (SSIM) to control motion intensity, and a noise refinement technique based on the discrete cosine transform (DCT) to ensure layout consistency. Together, these three strategies enable Cinemo to produce highly consistent, motion-controllable animation results. Compared with previous methods, Cinemo offers simpler, more precise user control and better generative performance. Extensive experiments against several baselines, including both commercial tools and research approaches, across multiple metrics underscore the effectiveness and superiority of our approach.
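To make the motion-residual idea concrete, the following minimal sketch contrasts the training target with raw-frame prediction. It is written in pixel space for simplicity, and all names are illustrative assumptions rather than Cinemo's actual implementation; a real implementation would typically operate on the diffusion model's latents rather than pixels.

```python
import torch

# Minimal sketch of the motion-residual training target: the model learns
# the per-frame difference from the static input image instead of the raw
# frames themselves. Tensor names are illustrative assumptions.
def motion_residual_target(frames: torch.Tensor) -> torch.Tensor:
    """frames: [B, T, C, H, W] video clip whose first frame is the input image."""
    first_frame = frames[:, :1]        # [B, 1, C, H, W], the static input
    return frames - first_frame        # residuals encode only the motion

def frames_from_residuals(first_frame: torch.Tensor,
                          residuals: torch.Tensor) -> torch.Tensor:
    """Reassemble frames at sampling time: first_frame is [B, C, H, W]."""
    return first_frame.unsqueeze(1) + residuals
```

Because the residual of the first frame is zero by construction, the input image's details (style, background, objects) are preserved exactly, and the network only has to model what changes.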
Comparisons
We show the animated results generated by different methods using the prompt "girl smiling".
We qualitatively compare our method with both commercial tools and research approaches,
including Pika Labs, Genmo, ConsistI2V, DynamiCrafter, I2VGen-XL, SEINE, PIA, and SVD.
Click to play the following animations!
Analysis
The ablation studies and potential applications are presented here.
Motion intensity controllability
We demonstrate that our method can finely control the motion intensity of animated videos. The prompt is "shark swimming".
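As a rough sketch of how a structural-similarity-based intensity signal can be computed, the snippet below scores a clip as one minus the mean SSIM between consecutive frames, so that larger values mean larger motion. The grayscale input and the exact normalization are assumptions for illustration, not necessarily the statistic Cinemo conditions on.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

# Hedged sketch: derive a scalar motion-intensity label from a clip by
# measuring how dissimilar consecutive frames are. `clip` is assumed to be
# a [T, H, W] grayscale array with values in [0, 1].
def motion_intensity(clip: np.ndarray) -> float:
    scores = [
        ssim(clip[t], clip[t + 1], data_range=1.0)
        for t in range(len(clip) - 1)
    ]
    # Low inter-frame similarity -> large motion; rescale to [0, 1].
    return 1.0 - float(np.mean(scores))
```

At training time each clip would be labeled with such a scalar, and at inference the user supplies the desired value to steer how strongly the animation moves.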
Click to play the following animations!
Effectiveness of DCTInit
We demonstrate that the proposed DCTInit stabilizes the video generation process and effectively mitigates sudden motion changes. Moreover, the DCT-based frequency-domain decomposition avoids the color inconsistency issues caused by FFT-based frequency-domain decomposition. The first and second rows use the prompts "woman smiling" and "robot dancing", respectively.
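The sketch below illustrates the general mechanism of a DCT-based noise initialization: keep the low-frequency DCT band of the input-image latent (which carries the layout) and take the high-frequency band from fresh Gaussian noise. The cutoff value, array shapes, and how the image latent is prepared are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hedged sketch of DCT-based noise refinement: mix the low-frequency DCT
# band of the input-image latent with the high-frequency band of Gaussian
# noise. The 0.25 cutoff is an illustrative assumption.
def dct_init(image_latent: np.ndarray, noise: np.ndarray,
             cutoff: float = 0.25) -> np.ndarray:
    h, w = image_latent.shape[-2:]
    img_freq = dctn(image_latent, axes=(-2, -1), norm="ortho")
    noise_freq = dctn(noise, axes=(-2, -1), norm="ortho")

    # Low frequencies live in the top-left corner of the DCT spectrum.
    mask = np.zeros((h, w))
    mask[: int(h * cutoff), : int(w * cutoff)] = 1.0

    mixed = img_freq * mask + noise_freq * (1.0 - mask)
    return idctn(mixed, axes=(-2, -1), norm="ortho")
```

One plausible reason this sidesteps the color shifts seen with FFT-based decomposition is that the DCT is real-valued, so no complex-spectrum mixing is involved.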
Motion control by prompt
We demonstrate that our method does not rely on complex guiding instructions, and that even simple textual prompts can yield satisfactory visual effects.
Motion transfer/Video editing
We demonstrate that our proposed method can also be applied to motion transfer and video editing. We use an off-the-shelf image editing method to edit the first frame of the input video.
Gallery
More animation results generated by our method are shown here.
Click to play results from Cinemo!
The project page template is borrowed from DreamBooth.