This page contains brief descriptions of some of the projects I've worked on. Clicking on the title of a project will take you to a more in-depth page with links to the paper and to videos showing results.
(Lucas Kovar and Michael Gleicher; SIGGRAPH '04)
Large motion data sets often contain many variants of the same kind of motion, but without appropriate tools it is difficult to fully exploit this fact. This paper provides automated methods for identifying logically similar motions in a data set and using them to build a continuous and intuitively parameterized space of motions. To find logically similar motions that are numerically dissimilar, our search method employs a novel distance metric to find "close" motions and then uses them as intermediaries to find more distant motions. Search queries are answered at interactive speeds through a precomputation that compactly represents all possibly similar motion segments. Once a set of related motions has been extracted, we automatically register them and apply blending techniques to create a continuous space of motions. Given a function that defines relevant motion parameters, we present a method for extracting motions from this space that accurately possess new parameters requested by the user. Our algorithm extends previous work by explicitly constraining blend weights to reasonable values and having a run-time cost that is nearly independent of the number of example motions. We present experimental results on a test data set of 37,000 frames, or about ten minutes of motion sampled at 60 Hz.
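To illustrate the idea of constrained blend weights, here is a minimal sketch (not the paper's actual algorithm) of one common scheme: inverse-distance weighting over the k nearest example motions in parameter space, with weights kept nonnegative and normalized to sum to one so the blend stays near the examples.

```python
import numpy as np

def blend_weights(query, example_params, k=4, eps=1e-9):
    """Illustrative k-nearest-neighbor inverse-distance weighting.

    query          -- desired parameter vector (e.g., a reach target)
    example_params -- (n, d) array, one parameter vector per example motion
    Returns a length-n weight vector: nonnegative, summing to 1, and
    nonzero only on the k nearest examples, so blends stay "reasonable".
    """
    d = np.linalg.norm(example_params - query, axis=1)
    idx = np.argsort(d)[:k]          # k nearest examples
    w = 1.0 / (d[idx] + eps)         # closer examples get larger weights
    w /= w.sum()                     # normalize so weights sum to 1
    weights = np.zeros(len(example_params))
    weights[idx] = w
    return weights
```

Because only the k nearest examples receive nonzero weight, the cost of forming a blend is essentially independent of the total number of examples once the neighbors are found.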
(Hyun Joon Shin, Lucas Kovar, and Michael Gleicher; Pacific Graphics '03)
(Lucas Kovar and Michael Gleicher; SCA '03)
(Michael Gleicher, Hyun Joon Shin, Lucas Kovar, and Andrew Jepsen; I3D '03)
Many virtual environments and games must be populated with synthetic characters to create the desired experience. These characters must move with sufficient realism, so as not to destroy the visual quality of the experience, yet be responsive, controllable, and efficient to simulate. In this paper we present an approach to character motion called Snap-Together Motion that addresses the unique demands of virtual environments. Snap-Together Motion preprocesses a corpus of motion capture examples into a set of short clips that can be concatenated to make continuous streams of motion. The result is a simple graph structure that facilitates efficient planning of character motions. A user-guided process selects "common" character poses and the system automatically synthesizes multi-way transitions that connect through these poses. In this manner well-connected graphs can be constructed to suit a particular application, allowing for practical interactive control without the effort of manually specifying all transitions.
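The graph structure described above can be sketched in a few lines. This is a toy example with made-up pose and clip names, not the system's actual data: nodes are the "common" poses, edges are short clips, and any walk through the graph yields a continuous motion stream because consecutive clips meet at a shared pose.

```python
import random

# Hypothetical minimal motion graph: keys are common poses, and each edge
# is (clip_name, pose_the_clip_ends_at). Clip names are invented here.
graph = {
    "stand": [("walk_start", "step")],
    "step":  [("walk_cycle", "step"), ("walk_stop", "stand")],
}

def random_walk(start_pose, n_clips, rng=random.Random(0)):
    """Concatenate clips by following graph edges. Every path is a valid
    continuous motion because each clip begins at the pose where the
    previous clip ended."""
    pose, stream = start_pose, []
    for _ in range(n_clips):
        clip, pose = rng.choice(graph[pose])
        stream.append(clip)
    return stream
```

A planner would replace the random choice with a search for a path that reaches a goal pose or position, which is exactly what the graph structure makes efficient.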
(Lucas Kovar, Michael Gleicher, and Fred Pighin; SIGGRAPH '02)
(Lucas Kovar, John Schreiner, and Michael Gleicher; SCA '02)
(Lucas Kovar and Michael Gleicher; UIST '02)
Following are some smaller projects I worked on as part of my graduate coursework.
Download the movie (~11MB)
This was produced as part of a graduate animation class. I worked with John Schreiner. This predates, and motivated, my work on motion editing and synthesis.
I worked on this with Andrew Gardner. This project was part of a graduate computer vision course here at UW.
A priori knowledge about objects in a scene can greatly simplify tracking. One particularly easy case is sphere tracking: assuming a scaled orthographic projection, the sphere always projects to a circle, and depth information can be recovered from the radius. We designed and implemented a system to track the 3D position of a ball in real time using a single camera. The main issues we addressed were 1) robust color identification, to pick the ball out of a cluttered environment, and 2) robust circle fitting to candidate edge points, given both partial occlusions and arbitrary ball motions.
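As a sketch of the circle-fitting step (a standard algebraic least-squares fit, not necessarily the exact method our system used): writing the circle as x² + y² + Dx + Ey + F = 0 makes the problem linear in (D, E, F), and the fit works even on a partial arc, which matters when the ball is partially occluded.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit (often called the Kasa fit).

    Solves x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) by linear least
    squares, then recovers center (cx, cy) and radius r. Handles partial
    arcs; a robust wrapper (e.g., RANSAC over edge points) would be
    needed to reject outliers in a cluttered scene.
    """
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs**2 + ys**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r
```

With the radius in hand, depth follows from the known physical ball radius and the camera's scale factor.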