Automated Methods for Data-Driven Synthesis of Realistic and Controllable Human Motion
Abstract
Human motion is difficult to animate convincingly:
not only is the motion itself intrinsically complicated, but human
observers have a lifetime of familiarity with human movement,
which makes it easy to detect even minor flaws in animated motion.
To create high-fidelity animations of humans, there has been
growing interest in motion capture, a technology that obtains
strikingly realistic 3D recordings of the movement of a live
performer. However, by itself motion capture offers little control
to an animator, as it only allows one to play back what has been
recorded. This dissertation shows how to use motion capture data
to build generative models that can synthesize new, realistic
motion while providing animators with high-level control over the
properties of this motion. In contrast to previous work, this
dissertation focuses on automated algorithms that make it
feasible to work with the large data sets that are needed to
construct expressive motion models. Two models in particular are
considered. The first is the motion graph, which allows
one to rearrange and seamlessly attach short motion segments into
longer streams of movement. The second is motion blending,
which creates motions "in between" a set of examples and can be
used to create continuous and intuitively parameterized spaces of
related actions. Automated methods are presented for building
these models and for using them to generate realistic motion that
satisfies high-level requirements.
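The two models above can be sketched in code. The following is a minimal illustration, not the dissertation's actual algorithms: the segment names, transition table, and joint-angle poses are invented. A motion graph is represented as a directed graph whose nodes are short captured segments and whose edges mark seamless concatenations; a walk through the graph yields a longer motion stream. Blending is shown as a weighted average of example poses.

```python
import random

# Hypothetical motion graph: nodes are short captured segments, directed
# edges connect segments that can be seamlessly concatenated.
motion_graph = {
    "walk_a": ["walk_b", "turn_left"],
    "walk_b": ["walk_a", "stop"],
    "turn_left": ["walk_a"],
    "stop": [],
}

def graph_walk(graph, start, steps, rng):
    """Synthesize a longer motion stream by walking the graph."""
    stream = [start]
    node = start
    for _ in range(steps):
        successors = graph[node]
        if not successors:  # dead end: no seamless continuation exists
            break
        node = rng.choice(successors)
        stream.append(node)
    return stream

# Motion blending: a new pose "in between" the examples is a weighted
# average of example poses (each pose here is a list of joint angles).
def blend(poses, weights):
    assert abs(sum(weights) - 1.0) < 1e-9
    return [sum(w * p[j] for w, p in zip(weights, poses))
            for j in range(len(poses[0]))]

print(graph_walk(motion_graph, "walk_a", 5, random.Random(0)))
print(blend([[0.0, 10.0], [20.0, 30.0]], [0.25, 0.75]))  # -> [15.0, 25.0]
```

Varying the blend weights continuously is what produces the intuitively parameterized spaces of related actions mentioned above, while the graph walk realizes the rearrangement of captured segments into new streams.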
Downloads
My dissertation may be downloaded either in its entirety or as individual chapters.