Variational pose prediction with dynamic sample selection from sparse tracking signals

Texas A&M University

Computer Graphics Forum (Eurographics 2023)

Abstract

We propose a learning-based approach for full-body pose reconstruction from extremely sparse upper-body tracking data obtained from a virtual reality (VR) device. We leverage a conditional variational autoencoder with gated recurrent units to synthesize plausible and temporally coherent motions from 4-point tracking (head, hands, and waist positions and orientations). To avoid synthesizing implausible poses, we propose a novel sample selection and interpolation strategy along with an anomaly detection algorithm. Specifically, we monitor the quality of our generated poses using the anomaly detection algorithm and smoothly transition to better samples when the quality falls below a statistically defined threshold. Moreover, we demonstrate that our sample selection and interpolation method can be used for other applications, such as target hitting and collision avoidance, where the generated motions should adhere to the constraints of the virtual environment. Our system is lightweight, operates in real time, and produces temporally coherent and realistic motions.
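To make the sample selection and interpolation strategy concrete, the sketch below shows what one frame of such a loop could look like. It is a minimal illustration, not the authors' implementation: the `decoder` (a CVAE decoder with GRU state), the `anomaly_score` function, and all signatures and parameter values here are assumptions, and the paper's actual threshold computation and blending scheme may differ.

```python
import torch

def step(decoder, anomaly_score, tracking, z, hidden,
         threshold, n_candidates=8, blend=0.1):
    """One frame of pose generation with dynamic sample selection (sketch).

    Assumed interfaces (hypothetical, for illustration only):
      decoder(z, tracking, hidden) -> (pose, hidden)  # CVAE decoder w/ GRU state
      anomaly_score(pose) -> scalar tensor; higher means less plausible.
      threshold: statistically defined cutoff on the anomaly score.
    """
    pose, new_hidden = decoder(z, tracking, hidden)
    if anomaly_score(pose) <= threshold:
        # Current latent sample still yields a plausible pose; keep it.
        return pose, z, new_hidden

    # Quality fell below the threshold: draw fresh latent samples and
    # keep the candidate whose decoded pose scores best (lowest anomaly).
    candidates = torch.randn(n_candidates, z.shape[-1])
    scores = torch.stack([anomaly_score(decoder(c, tracking, hidden)[0])
                          for c in candidates])
    z_best = candidates[scores.argmin()]

    # Transition smoothly toward the better sample by interpolating in
    # latent space (a simple linear blend here, one small step per frame,
    # so the output motion stays temporally coherent).
    z_next = (1.0 - blend) * z + blend * z_best
    pose, new_hidden = decoder(z_next, tracking, hidden)
    return pose, z_next, new_hidden
```

Blending in latent space rather than in pose space is one plausible reading of the "smooth transition" described above: nearby latent codes tend to decode to nearby poses, so small per-frame steps toward the better sample avoid visible pops in the output motion.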

Supplementary Video

Acknowledgements

This work was funded in part by the National Science Foundation (CAREER-1846368) and a generous gift from Adobe. The data used in this project was obtained from mocap.cs.cmu.edu, HDM05, and Edinburgh University. The CMU database was created with funding from NSF EIA-0196217.