In Proceedings of the 31st ACM Symposium on User Interface Software and Technology (UIST 2018). Project page: http://mosculp.csail.mit.edu/
We present a system that allows users to visualize complex human motion via 3D motion sculptures—a representation that conveys the 3D structure swept by a human body as it moves through space. Given an input video, our system computes a motion sculpture and provides the user with an interface for rendering motion sculptures in different styles, including the options to insert the sculpture back into the source video or render it in a synthetic scene.
To provide this end-to-end workflow, we introduce an algorithm that estimates the human’s 3D geometry over time and develop a 3D-aware image-based rendering approach to preserve the depth ordering between the sculpture and the human as observed in the video. By automating the process, our system takes motion sculpture creation out of the realm of professional artists, and makes it accessible to novice users and applicable to a wide range of existing video material.
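The depth-ordering step described above can be illustrated with a minimal sketch. The idea is that a sculpture pixel should appear in the composite only where the sculpture is closer to the camera than the person at that pixel; otherwise the original video pixel shows through. This is not the paper's implementation — the function and parameter names are ours, and we assume per-pixel depth maps for both the rendered sculpture and the estimated person geometry are available:

```python
import numpy as np

def composite_with_depth(frame, sculpture_rgb, sculpture_depth,
                         person_depth, sculpture_mask):
    """Composite a rendered motion sculpture into a video frame,
    hiding sculpture pixels that lie behind the person.

    frame:           (H, W, 3) source video frame
    sculpture_rgb:   (H, W, 3) rendered sculpture colors
    sculpture_depth: (H, W) per-pixel depth of the sculpture
    person_depth:    (H, W) estimated depth of the person
                     (np.inf where no person is present)
    sculpture_mask:  (H, W) bool, True where the sculpture was rendered
    """
    # A sculpture pixel is visible only where it was rendered AND is
    # closer to the camera than the person at that pixel.
    visible = sculpture_mask & (sculpture_depth < person_depth)
    out = frame.copy()
    out[visible] = sculpture_rgb[visible]
    return out
```

This per-pixel depth test is what lets the sculpture weave in front of and behind the moving body, rather than being naively pasted over the frame.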
By providing viewers with 3D information, motion sculptures reveal space-time motion information that is difficult to perceive with the naked eye, and allow viewers to interpret how different parts of the body interact over time. We validate the effectiveness of this approach with user studies, finding that our motion sculpture visualizations are significantly more informative about motion than existing stroboscopic and space-time visualization methods.