⬆️ One paper, MiraMo, exploring image animation with linear attention, has been released on arXiv.

We have released MiraMo on arXiv. It is among the first papers to explore linear attention for video generation.
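
To give a concrete picture of what linear attention means here, below is a minimal PyTorch sketch of the generic technique: a positive kernel feature map replaces the softmax, so attention over N tokens costs O(N) rather than O(N^2). This is only an illustration of the general idea under our own naming, not MiraMo's exact formulation.

import torch
import torch.nn.functional as F

def linear_attention(q, k, v, eps=1e-6):
    # q, k, v: (batch, heads, tokens, dim); for video, tokens = frames * spatial patches.
    # ELU + 1 is a common positive feature map used in linear attention.
    q = F.elu(q) + 1.0
    k = F.elu(k) + 1.0
    # Aggregate keys and values once instead of forming an N x N attention matrix.
    kv = torch.einsum("bhnd,bhne->bhde", k, v)       # (batch, heads, dim, dim)
    k_sum = k.sum(dim=2)                             # (batch, heads, dim)
    num = torch.einsum("bhnd,bhde->bhne", q, kv)     # (batch, heads, tokens, dim)
    den = torch.einsum("bhnd,bhd->bhn", q, k_sum)    # (batch, heads, tokens)
    return num / (den.unsqueeze(-1) + eps)

q = k = v = torch.randn(1, 8, 1024, 64)  # 1024 video tokens, 8 heads, 64-dim
print(linear_attention(q, k, v).shape)   # torch.Size([1, 8, 1024, 64])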

Setup

Download and set up the repo:


git clone https://github.com/maxin-cn/MiraMo
cd MiraMo
conda env create -f environment.yml
conda activate miramo
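
As an optional sanity check, you can confirm that PyTorch and CUDA are visible from inside the activated environment (this assumes environment.yml installs PyTorch, which is typical for video-generation repos but not verified here), e.g. from a Python shell:

import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())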

Animation

You can sample from our pre-trained MiraMo model; the weights can be found here. The sampling script exposes several arguments for adjusting the number of sampling steps, changing the classifier-free guidance scale, and so on:


bash pipelines/animation.sh

Related model weights will be downloaded automatically, and the resulting animations can be found here.
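
For reference, the classifier-free guidance scale mentioned above is typically applied like this inside a diffusion sampling loop; the function and tensor names below are hypothetical, for illustration only, and do not reflect MiraMo's actual code.

import torch

def apply_cfg(noise_uncond, noise_cond, guidance_scale):
    # Standard classifier-free guidance: push the prediction toward the
    # conditional branch; a scale of 1.0 disables guidance.
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)

# Dummy model outputs for one latent video: (batch, channels, frames, height, width).
uncond = torch.randn(1, 4, 16, 32, 32)
cond = torch.randn(1, 4, 16, 32, 32)
guided = apply_cfg(uncond, cond, guidance_scale=7.5)
print(guided.shape)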

Xin Ma
PhD Student

I’m a Ph.D. candidate at Monash University. My research interests include video and image generation, multimodal models, low-level vision, and face recognition, among others.
