Physical Touch-up for Human Motion
The goal of this project is to improve the physical feasibility of edited motions. Building on the ideas of hierarchical displacement mapping and approximation, we developed an algorithm that modifies an edited motion so that it obeys physical laws. We incorporate the concept of the ZMP (zero moment point) together with the conservation of linear and angular momentum. Our approach is more efficient than previous work and also gives easy control for modeling human behaviors.
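The central test can be illustrated with the standard point-mass ZMP formula. The sketch below is a minimal illustration, not the solver from the paper: it assumes a precomputed center-of-mass trajectory and a flat ground plane at height zero, and computes where the ZMP falls so that feasibility can be judged against the support polygon.

    # Minimal sketch (not the paper's algorithm): point-mass ZMP from a
    # center-of-mass trajectory, assuming the ground plane is at z = 0.
    import numpy as np

    G = 9.81  # gravitational acceleration (m/s^2)

    def zmp_trajectory(com, dt):
        """com: (n, 3) center-of-mass positions sampled every dt seconds.
        Returns (n, 2) ground-plane ZMP positions; the motion is feasible
        only while these stay inside the support polygon."""
        vel = np.gradient(com, dt, axis=0)
        acc = np.gradient(vel, dt, axis=0)
        denom = acc[:, 2] + G                    # vertical acceleration + gravity
        zmp_x = com[:, 0] - com[:, 2] * acc[:, 0] / denom
        zmp_y = com[:, 1] - com[:, 2] * acc[:, 1] / denom
        return np.stack([zmp_x, zmp_y], axis=1)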
The figure on the right shows a scene from physical touch-up. In the figure, the character has just landed after a jump. The original motion was generated by stitching a jumping motion to a walking motion. Our touch-up introduces a forward lean, which makes the motion look more realistic.
Here you can download our video files, encoded with MS MPEG4 V2.
One big file (12M)
Pieces: 1st (4.2M)   2nd (4.0M)   3rd (3.9M)   4th (5.8M)   5th (3.5M)
The paper about this project has also been published:
Hyun Joon Shin, Lucas Kovar, and Michael Gleicher. Physical Touch-up of Human Motions. In Proceedings of Pacific Graphics 2003 (to appear).



Snap Together Motion
This project addresses the unique demands of virtual environments: sufficient realism together with the efficiency required for simulation. Snap-Together Motion (STM) preprocesses a corpus of motion capture examples into a set of short clips that can be concatenated to make continuous streams of motion. The result is a simple graph structure that facilitates efficient planning of character motions. A user-guided process selects "common" character poses, and the system automatically synthesizes multi-way transitions that connect through these poses. In this manner, well-connected graphs can be constructed to suit a particular application, allowing for practical interactive control without the effort of manually specifying all transitions.
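As a rough illustration of the resulting structure, the sketch below builds the kind of clip graph STM produces; the class and field names are my own assumptions, not the system's actual data structures.

    # Illustrative clip graph: nodes are short clips; an edge means the
    # clips meet at a common pose and may be concatenated at run time.
    from dataclasses import dataclass, field

    @dataclass
    class Clip:
        name: str
        frames: int
        out_edges: list = field(default_factory=list)  # clips playable next

    def connect(a, b):
        a.out_edges.append(b)  # a ends at the common pose where b begins

    # A well-connected graph with multi-way transitions through an idle pose:
    idle, walk, run = Clip("idle", 30), Clip("walk", 45), Clip("run", 40)
    for c in (walk, run):
        connect(idle, c)
        connect(c, idle)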
The image on the right is a screenshot of the system. The paper (PDF) about this project was published as:
Michael Gleicher, Hyun Joon Shin, Lucas Kovar, and Andrew Jepsen. Snap Together Motion: Assembling Run-Time Animation. 2003 Symposium on Interactive 3D Graphics. April 2003.
This paper was also selected for presentation at SIGGRAPH 2003. A more detailed description of this project can be found on the Snap Together Motion web page.
Coworkers: Michael Gleicher, Lucas Kovar, and Andrew Jepsen.



On-line Locomotion Generation
The goal of this project is to synthesize locomotion of a human-like character under on-line control. The basic idea is to blend a set of example motions according to the user's control. To produce correct motion, we first analyze the example motions to extract precise parameters including speed, style, and gyration. Given the control parameters, we find the weights of the example motions with the scattered data interpolation technique introduced by Sloan et al., as sketched below. The example motions are time-warped in an incremental manner and sampled for blending. We use the quaternion blending scheme shown below to blend the sampled postures. Finally, possible foot-skate is cleaned up by the importance-based approach introduced below.
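The weighting step can be sketched in simplified form as a normalized Gaussian kernel over the control-parameter space, in the spirit of the scattered data interpolation of Sloan et al.; the actual technique also uses linear bases and cardinal basis functions, which are omitted here.

    # Simplified blend-weight computation over the control parameters
    # (speed, style, gyration, ...); a sketch, not the system's solver.
    import numpy as np

    def blend_weights(examples, query, sigma=1.0):
        """examples: (m, d) control parameters of the m example motions;
        query: (d,) the user's current control parameters.
        Returns m nonnegative weights that sum to one."""
        d2 = np.sum((np.asarray(examples) - np.asarray(query)) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel
        return w / w.sum()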
The paper about this project was published:
Sang Il Park, Hyun Joon Shin, and Sung Yong Shin. On-line Locomotion Generation Based on Motion Blending. ACM Symposium on Computer Animation, July 2002.
A video clip is also available here:
Presentation Video (12M)
Coworker: Sang Il Park



Multiway Quaternion Interpolation
This project is about finding an efficient scheme for multiway interpolation of quaternion samples. Based on the same idea as the quaternion averaging below, we introduce a couple of schemes to interpolate or extrapolate quaternion data while taking care of the antipodal equivalence property of quaternion space. The first scheme finds a minimum-squares estimator that minimizes the weighted sum of sine distances to all samples. The second scheme interpolates the rotation-vector analogues of the samples about a center defined as their average, as sketched below.
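The second scheme can be sketched as follows: express each sample as a rotation vector relative to the center, blend those vectors linearly, and map the result back onto the quaternion sphere. This is a minimal illustration with helper names of my own; among other simplifications it omits the explicit antipodal flipping of samples toward the center.

    # Sketch of the second scheme: blend rotation vectors taken about
    # the center, then map back to the quaternion sphere.
    import numpy as np

    def q_mul(a, b):
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def q_log(q):                 # unit quaternion -> rotation vector
        s = np.linalg.norm(q[1:])
        return np.zeros(3) if s < 1e-12 else q[1:] / s * 2.0 * np.arctan2(s, q[0])

    def q_exp(v):                 # rotation vector -> unit quaternion
        t = np.linalg.norm(v)
        if t < 1e-12:
            return np.array([1.0, 0.0, 0.0, 0.0])
        return np.concatenate([[np.cos(t / 2)], v / t * np.sin(t / 2)])

    def blend(samples, weights, center):
        """Blend unit quaternions about 'center' (e.g. their average)."""
        conj = center * np.array([1.0, -1.0, -1.0, -1.0])
        v = sum(w * q_log(q_mul(conj, q)) for w, q in zip(weights, samples))
        return q_mul(center, q_exp(v))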
The figure on the right shows how the interpolation scheme works. The red, green, and blue "K" letters are the given samples, and the gray letters are the results of the interpolation.
The results of this project are partially used in on-line locomotion generation and Snap Together Motion. A detailed description of one of the final results appears in the linked paper (On-line Locomotion Generation).



Quaternion Averaging
In this project, we develop a method to average given quaternion samples. To motivate a novel method, we examined the existing methods and found some interesting properties of the averaging schemes. Based on the definition of the average as a minimum-squared-error estimator, we find an optimal estimator that minimizes the sum of squared distances from the samples. To eliminate possible problems due to the antipodal equivalence property of quaternion space, we use the sine function as the distance metric. We also showed that this estimator can be computed very efficiently with a Lagrange multiplier.
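Concretely, minimizing the weighted sum of squared sine distances is the same as maximizing the weighted sum of squared dot products (q . q_i)^2, which treats q_i and -q_i identically; the Lagrange multiplier condition then makes the optimum the dominant eigenvector of the 4x4 matrix sum_i w_i q_i q_i^T. The sketch below assumes this well-known eigenvector formulation is the one intended; it is my reading, not the project's code.

    # Sketch of the estimator under the eigenvector reading above.
    import numpy as np

    def quaternion_average(samples, weights=None):
        """samples: (n, 4) unit quaternions; returns the unit quaternion
        minimizing the weighted sum of squared sine distances."""
        q = np.asarray(samples, dtype=float)
        w = np.ones(len(q)) if weights is None else np.asarray(weights, float)
        m = (w[:, None, None] * q[:, :, None] * q[:, None, :]).sum(axis=0)
        _, vecs = np.linalg.eigh(m)   # eigenvalues in ascending order
        return vecs[:, -1]            # dominant eigenvector; q and -q tie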
The red dot in the image on the right is the average chosen by our scheme for the given samples (yellow dots). The samples follow a Gaussian distribution centered at the blue dot.
The results of this project are partially used in on-line locomotion generation and facial expression capture. A detailed description of one of the final results appears in the linked paper (On-line Locomotion Generation).



Real-time Virtual Character Animation System
In this project, we designed and implemented a real-time virtual character animation system for a broadcasting company (Korea Broadcasting System). Building on techniques including computer puppetry, real-time facial expression capture, and human body deformation, we developed a full-featured system. The basic features are real-time facial expression and motion capture, real-time expression and motion mapping, and character deformation. In addition, the system includes real-time superimposition, camera tracking, and script-based animation.
This system was used for a children's TV show (TV Kindergarten: One, Two, Three) and for the 2000 Korea Assembly Election broadcast.
Video of the virtual character "PangPang", which appeared in TV Kindergarten.
Video of the virtual character "Aliang", which appeared in the election broadcast.
Coworkers: Taehoon Kim and Hyewon Pyun.

Computer Puppetry
In this project, we provide a comprehensive solution to the problem of transferring the observations of a live performer to an animated character in real time. Our goal is to map as many of the important aspects of the motion to the target character as possible, while meeting the on-line, real-time demands of computer puppetry. We adopt a Kalman filter scheme that addresses motion capture noise issues in this setting. We provide the notion of dynamic importance of an end-effector, which allows us to determine what aspects of the performance must be kept in the resulting motion, and we introduce a novel inverse kinematics solver that realizes these important aspects within tight real-time constraints.
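The sketch below illustrates the importance idea in my own terms (an illustration, not the paper's solver): an end-effector near the environment should preserve its position, while a free-swinging one should preserve the copied joint angles, and the IK goal blends between the two accordingly.

    # Illustration of dynamic importance; names and the linear falloff
    # are my own assumptions.
    def importance(dist_to_env, falloff=0.3):
        """Map an end-effector's distance to the environment (meters)
        to an importance value in [0, 1]."""
        return max(0.0, 1.0 - dist_to_env / falloff)

    def ik_goal(p_from_angles, p_performer, w):
        """Blend the position implied by copied joint angles with the
        performer's end-effector position by importance w."""
        return [(1.0 - w) * a + w * b
                for a, b in zip(p_from_angles, p_performer)]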
The paper about this work is available:
Hyun Joon Shin, Jehee Lee, Michael Gleicher, and Sung Yong Shin. Computer Puppetry: An Importance-Based Approach. ACM Transactions on Graphics, April 2001.
Several video clips used for the talks are:
Walking motion
        Original walking motion,
        Preserving the joint angles,
        Preventing penetrations,
        Preserving the end-effector positions,
        Our result, and
        Mapping to another character.
Jumping motion
        Original jumping motion,
        Mapping to Blubby, and
        Mapping to Shorty.
Hit motion
        Original hit motion,
        Mapping to Blubby, and
        Mapping to Shorty.
Dancing motion
        Mapping to Blubby.
Coworkers: Jehee Lee and Michael Gleicher.

Real-time Expression Capture
This project is about capturing expression parameters from a single video stream of a human face, without any markers, and mapping those parameters onto a virtual face model to deform it. The capture process consists of transforming the color space to highlight the features, adopting modified "snakes" to find the feature curves, solving for the head motion analytically, and employing a Kalman filter to compensate for the head motion and recover the 3D feature curves. For expression mapping, we use a linear system, trained over a set of samples, to convert the captured feature movements into motions of the control curves on the virtual face, as sketched below. The final animation is obtained by applying the "wire" deformation.
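The expression-mapping step can be sketched as a plain least-squares problem; this assumes the simplest linear formulation and uses invented names, and the report's actual training procedure may differ.

    # Simplest-case sketch of the linear expression mapping.
    import numpy as np

    def train_mapping(features, controls):
        """features: (n, f) captured feature-curve displacements for n
        training samples; controls: (n, c) matching control-curve
        displacements posed on the virtual face.
        Returns the (f, c) linear mapping matrix."""
        m, *_ = np.linalg.lstsq(features, controls, rcond=None)
        return m

    def map_expression(m, feature_frame):
        return np.asarray(feature_frame) @ m   # drives the "wire" curves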
My role in this project was designing the whole system, introducing the basic ideas, and integration. My coworkers solved the detailed technical problems and implemented every component of the system.
We have a technical report about this project:
Hyewon Pyun, Hyun Joon Shin, Tae Hoon Kim, and Sung Yong Shin. Real-time Facial Expression Capture for Performance-driven Animation. CS-TR-2001-167, Korea Advanced Institute of Science and Technology.

Available video clips:
        Color space transformation,
        2D feature curve extraction,
        Head motion tracking,
        3D feature curves, and
        Animation
Note that this technical report covers only the capture process of the project.
Coworkers: Taehoon Kim and Hyewon Pyun.

Impulse-based Rigid Body Simulation

Etc.
Human Body Deformation:
I worked partly on real-time skinning of human-like characters with Seung-Hyup Shin. My contribution was organizing the project and introducing the essential idea. The result of the project was published:
S.H. Shin and S.Y. Shin. Real-time Human Body Deformation Based on Rotation Angle Interpolation. Proc. 27th KISS Annual Conference, Daegu, Republic of Korea, 2000.
The figure on the right is an example of the deformation. For video clips, click here.
On Extracting Wire Curves from Facial Expression Models
This project extracts "wire" deformation parameters from a given set of geometry samples. In this project, I worked on solving some of the mathematical problems. The articles about this project are:

Hyewon Pyun, Hyun Joon Shin, and Sung Yong Shin. On Extracting the Wire Curves from Multiple Face Models for Facial Animation. Computers & Graphics, 2003 (accepted).

Hyewon Pyun, Hyun Joon Shin, and Sung Yong Shin. On Extracting Wire Curves from Facial Expression Models. CS-TR-2001-168, Korea Advanced Institute of Science and Technology.
Crowd Simulation
This project simulates the behavior of a large number of individuals. In this project, I worked on motion synthesis. The article about this project is:
Ho Kyung Kim, Jun Kyu Oh, Min Gyu Choi, Hyun Joon Shin, Hyung Woo Kang, and Sung Yong Shin. An Event-driven Approach for Crowd Simulation with Example Motions. Technical Report CS-TR-2001-170, Computer Science Department, KAIST.
For video clips, click here.


  joony@cs.wisc.edu