KineTy: Kinetic Typography Diffusion Model

Gwangju Institute of Science and Technology

Examples of our KineTy dataset

Abstract

This paper introduces a method for realistic kinetic typography that generates user-preferred animatable "text content". We draw on recent advances in guided video diffusion models to achieve visually pleasing text appearances. To do this, we first construct a kinetic typography dataset comprising about 600K videos. Our dataset is built from varied combinations of 584 templates designed by professional motion graphics designers, and involves changing each letter's position, glyph, and size (e.g., flying, glitch, chromatic aberration, and reflection effects). Next, we propose a video diffusion model for kinetic typography, which must satisfy three requirements: aesthetic appearance, motion effects, and readable letters. To meet them, we present static and dynamic captions used as spatial and temporal guidance of the video diffusion model, respectively. The static caption describes the overall appearance of the video, such as colors, textures, and glyphs, which represent the shape of each letter. The dynamic caption accounts for the movements of letters and backgrounds. As an additional guidance, we apply a zero convolution to the text content to determine which text should be visible in the video, and impose it on the diffusion model. Lastly, we propose a glyph loss, which minimizes only the difference between the predicted word and its ground truth, to make the predicted letters readable. Experiments show that our model generates kinetic typography videos with legible and artistic letter motions based on text prompts.
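The glyph loss described above can be sketched as a per-character cross-entropy between the characters read off the generated frames and the ground-truth word. The snippet below is a minimal NumPy sketch under that assumption; the character vocabulary and the recognizer logits are hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
import numpy as np

VOCAB = "abcdefghijklmnopqrstuvwxyz"  # hypothetical character vocabulary


def glyph_loss(char_logits: np.ndarray, target_word: str) -> float:
    """Cross-entropy between recognized-character logits and the target word.

    char_logits: array of shape (len(target_word), len(VOCAB)) with scores
    from a (hypothetical) frozen text recognizer run on the generated frames.
    """
    # Softmax over the vocabulary axis, stabilized by subtracting the row max.
    z = char_logits - char_logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    targets = [VOCAB.index(c) for c in target_word]
    # Average negative log-likelihood of the ground-truth characters only.
    return float(-np.log(probs[np.arange(len(targets)), targets]).mean())
```

A loss near zero means the recognizer reads the intended word off the generated frames; during training such a term would be added to the usual diffusion objective, penalizing only letter readability rather than pixel-level appearance.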

BibTeX

@inproceedings{park2024kinety,
    title={Kinetic Typography Diffusion Model},
    author={Park, Seonmi and Bae, Inhwan and Shin, Seunghyun and Jeon, Hae-Gon},
    booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
    year={2024}
}