Motion Segmentation

Welcome to Motion Segmentation, a Computational Photography project page.


Overview

As the name implies, motion segmentation divides a video into different regions based on detected motion. That is, neighborhoods of similar motion can be grouped together into a single layer, allowing a dynamic scene to be broken into components of individual motion. The utility of this operation relies on the fact that objects in the real world tend to move smoothly as coherent wholes, so detecting these regions of similar motion lets us isolate the moving objects in a scene.
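
The pipeline this describes can be sketched in a few lines. The example below is a minimal illustration in Python with OpenCV and NumPy (the project itself is implemented in MATLAB); the frame file names, the magnitude threshold, and the Farneback parameters are assumptions chosen for illustration. It computes dense flow between two frames, keeps pixels with significant motion, and labels each connected region as a separate motion layer.

```python
import cv2
import numpy as np

def segment_motion(prev_gray, next_gray, mag_thresh=1.0):
    """Label connected regions of significant motion between two grayscale frames."""
    # Dense optical flow (Farneback): one (dx, dy) vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # Keep pixels whose motion exceeds the threshold, then clean up speckle noise.
    moving = (mag > mag_thresh).astype(np.uint8)
    moving = cv2.morphologyEx(moving, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # Each connected component of moving pixels becomes one motion layer.
    num_labels, labels = cv2.connectedComponents(moving)
    return labels, num_labels - 1  # label 0 is the static background

# Example usage with two consecutive frames (file names are placeholders).
prev_gray = cv2.cvtColor(cv2.imread("frame_000.png"), cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(cv2.imread("frame_001.png"), cv2.COLOR_BGR2GRAY)
labels, n_regions = segment_motion(prev_gray, next_gray)
print(f"Found {n_regions} moving region(s)")
```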

Optical Flow

Optical flow algorithms perceive any change in color or intensity as motion, which introduces several potential sources of error. To reduce these errors we adopt an assumption called the optical flow constraint: a point's brightness is assumed to stay constant from frame to frame, so any change in intensity at a pixel is attributed to motion.
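
Linearizing that brightness-constancy assumption gives the standard form of the constraint, where I_x, I_y, and I_t are the spatial and temporal image derivatives and (u, v) is the flow vector at a pixel:

```latex
% Brightness constancy: I(x + u, y + v, t + 1) = I(x, y, t).
% A first-order Taylor expansion gives the optical flow constraint equation:
\[
  I_x\,u + I_y\,v + I_t = 0
\]
```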

Rigid Motion

Our implementation best detects rigid motion. A rigid motion preserves the size and shape of an object and includes rotations and translations. As the dice demonstrate, rigid motion produces uniformly colored flow vectors.
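
The uniformly colored vectors come from the standard way dense flow is visualized: each pixel's flow vector is mapped to a color, with hue encoding direction and brightness encoding speed, so a rigid translation appears as a single flat color. A minimal sketch of that visualization (Python/OpenCV again; the flow field is assumed to come from a routine like the one above):

```python
import cv2
import numpy as np

def flow_to_color(flow):
    """Map a dense flow field to an image: hue = direction, brightness = speed."""
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((*flow.shape[:2], 3), dtype=np.uint8)
    hsv[..., 0] = (ang * 180 / np.pi / 2).astype(np.uint8)  # OpenCV hue range is 0-179
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```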

Non-Rigid Motion

Subjects that do not undergo rigid motion are not accurately characterized by a single region of motion, so they are often segmented into multiple layers. Fire, for example, displays fantastical colors because of the inconsistent flow vectors it generates.

Internal

For internal compositing, all manipulations occur inside MATLAB scripts. We use a layer mask to composite a moving foreground object onto another video or a static image.
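
A minimal sketch of this step, again in Python/OpenCV rather than the project's MATLAB scripts: the motion mask selects foreground pixels from the source frame and everything else is taken from the new background. The variable names are illustrative, and `mask` is assumed to be the binary motion layer from the segmentation stage.

```python
import cv2
import numpy as np

def composite_internal(frame, mask, background):
    """Paste the masked moving object from `frame` onto `background`."""
    # Resize the background to match the frame, then blend per pixel via the mask.
    background = cv2.resize(background, (frame.shape[1], frame.shape[0]))
    mask3 = (mask > 0)[..., None]  # boolean mask broadcast over the color channels
    return np.where(mask3, frame, background)
```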

External

For external compositing we write a green background behind the isolated motion footage to act as a procedurally generated green screen. This footage can then be used with the chroma-keying functionality of a dedicated video editor.
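
A sketch of that idea under the same assumptions (Python/OpenCV, not the actual MATLAB implementation): every pixel outside the motion mask is overwritten with a flat green, and the frames are then written out for a chroma-key pass in an external editor. The green value and output file name are placeholders.

```python
import cv2
import numpy as np

def composite_external(frame, mask, green=(0, 255, 0)):
    """Replace everything outside the motion mask with a flat green-screen color."""
    out = frame.copy()
    out[mask == 0] = green  # BGR green background for chroma keying
    return out

# Example: write one green-screened frame for a dedicated video editor.
# cv2.imwrite("greenscreen_000.png", composite_external(frame, mask))
```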

Project Credits

Andrew Chase, Bryce Sprecher, and James Hoffmire

Paper