Character Controllers Using Motion VAEs

ACM Transactions on Graphics (SIGGRAPH 2020)

HUNG YU LING, University of British Columbia
FABIO ZINNO, Electronic Arts Vancouver
GEORGE CHENG, Electronic Arts Vancouver
MICHIEL VAN DE PANNE, University of British Columbia

Paper: PDF (11MB) / Code: GitHub / Demo: GitHub

A fundamental problem in computer animation is that of realizing purposeful and realistic human movement given a sufficiently rich set of motion capture clips. We learn data-driven generative models of human movement using autoregressive conditional variational autoencoders, or Motion VAEs. The latent variables of the learned autoencoder define the action space for the movement and thereby govern its evolution over time. Planning or control algorithms can then use this action space to generate desired motions. In particular, we use deep reinforcement learning to learn controllers that achieve goal-directed movements. We demonstrate the effectiveness of the approach on multiple tasks. We further evaluate system-design choices and describe the current limitations of Motion VAEs.
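To make the core idea concrete, here is a minimal PyTorch sketch of an autoregressive conditional VAE in the spirit of a Motion VAE. This is not the paper's exact architecture (the full model uses a mixture-of-experts decoder, and the pose and latent dimensions below are placeholders): the encoder maps a (current pose, next pose) pair to a latent distribution, and the decoder predicts the next pose from the current pose and a latent sample `z`, so a controller can treat `z` as its action.

```python
import torch
import torch.nn as nn

class MotionVAE(nn.Module):
    """Sketch of an autoregressive conditional VAE over poses.
    pose_dim and latent_dim are illustrative placeholders."""

    def __init__(self, pose_dim=256, latent_dim=32, hidden=256):
        super().__init__()
        # Encoder: (current pose, next pose) -> latent distribution.
        self.encoder = nn.Sequential(
            nn.Linear(2 * pose_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
        )
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        # Decoder: (current pose, z) -> next pose. The paper uses a
        # mixture-of-experts decoder; a plain MLP is used here for brevity.
        self.decoder = nn.Sequential(
            nn.Linear(pose_dim + latent_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, pose_t, pose_next):
        h = self.encoder(torch.cat([pose_t, pose_next], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.decoder(torch.cat([pose_t, z], dim=-1)), mu, logvar

    @torch.no_grad()
    def step(self, pose_t, z):
        """At control time, z is the action: each latent sample yields a
        plausible next pose, and rollouts proceed autoregressively."""
        return self.decoder(torch.cat([pose_t, z], dim=-1))
```

In this view, a reinforcement-learning policy outputs `z` at each time step and the frozen decoder turns it into motion, e.g. `next_pose = model.step(pose, policy(pose, goal))`; training the VAE itself combines a pose reconstruction loss with the usual KL regularizer on `(mu, logvar)`.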

Demo

The Motion VAE demo is now hosted on Hugging Face. It runs in the browser (requires WebGL) using ONNX.js and three.js. Check out the demo and the code on Hugging Face 🤗.

Video