
Calibration From Scene Motion

If a camera views a single object in motion, with no fixed background to serve as a reference, then it is impossible to determine from the images alone whether the camera or the object was moving. However, when two or more objects are moving in different directions (or equivalently, when a "background object" is evident), the relative motion of the objects provides extra information about the camera and the scene.

I have researched the problem of using relative object motions to self-calibrate a camera. In particular, given two views (possibly from different cameras) showing two or more objects that undergo translational motion between the views, I have created an algorithm for finding the relative calibration (also called the affine calibration or the camera-to-camera transformation) between the two views. Importantly, my algorithm is linear and works directly from the fundamental matrices induced by the object motions. This research represents some of the earliest work on this topic.
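To illustrate the kind of linear solve involved, here is a sketch in Python/NumPy. It assumes the standard model in which each translational motion i induces a fundamental matrix F_i proportional to [e_i]x H, where H is the infinite (affine) homography shared by all the motions between the two views; since [e_i]x is skew-symmetric, H^T F_i + F_i^T H = 0, which is linear in the entries of H. The function names and the synthetic check are my own illustration, not the implementation used in the actual experiments.

```python
import numpy as np

def skew(v):
    # Cross-product matrix [v]_x, so that skew(v) @ w == np.cross(v, w).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def relative_calibration(Fs):
    """Recover the infinite homography H (up to scale) from two or more
    fundamental matrices induced by translational object motions.

    Each F_i ~ [e_i]_x H with [e_i]_x skew-symmetric, so the symmetric
    part of H^T F_i vanishes: six linear equations per F in the nine
    entries of H."""
    rows = []
    for F in Fs:
        for a in range(3):
            for b in range(a, 3):
                # Entry (a, b) of H^T F + F^T H, as a linear form in the
                # row-major flattening of H: H[k, a] picks up F[k, b] and
                # H[k, b] picks up F[k, a].
                row = np.zeros(9)
                for k in range(3):
                    row[3 * k + a] += F[k, b]
                    row[3 * k + b] += F[k, a]
                rows.append(row)
    A = np.array(rows)
    # The null vector of the stacked system is H (up to scale); take the
    # right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```

A single fundamental matrix leaves the solution underdetermined; two motions in different directions (distinct epipoles) generically pin down H up to an overall scale, which matches the requirement in the text that the objects move in different directions.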

Example: Below are images from an experiment in which two cameras viewed a box that was translated in two different directions. The box was covered with a dot pattern for careful point tracking. For each motion, a fundamental matrix was determined from the tracked points, and from the two fundamental matrices the relative calibration between the views was found using my linear algorithm. Once relative calibration had been determined, affine scene reconstruction became possible; sample views of the reconstruction are also shown below.
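Each fundamental matrix in this pipeline is estimated from the tracked dot correspondences. A standard way to do this (not necessarily the exact method used in the experiment) is the normalized eight-point algorithm; the sketch below is a minimal NumPy version, with function names of my own choosing.

```python
import numpy as np

def fundamental_from_points(x1, x2):
    """Estimate F from matched points via the normalized eight-point
    algorithm.  x1, x2: (N, 2) arrays of corresponding points, N >= 8.
    Returns F (up to scale) with x2h^T F x1h ~= 0 for each match."""
    def normalize(x):
        # Translate to the centroid and scale so the mean distance
        # from the origin is sqrt(2) (Hartley normalization).
        c = x.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(x - c, axis=1))
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        xh = np.hstack([x, np.ones((len(x), 1))]) @ T.T
        return xh, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each match gives one equation p2^T F p1 = 0, linear in F's entries.
    A = np.einsum('ni,nj->nij', p2, p1).reshape(len(x1), 9)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 by zeroing the smallest singular value.
    U, S, Vt2 = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt2
    return T2.T @ F @ T1  # undo the normalization
```

With exact correspondences this recovers F up to scale; with noisy tracks one would typically wrap it in a robust estimator such as RANSAC.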



[Images: First Motion; Second Motion; View from First Camera (at time 0); View from Second Camera (at time 1)]

[Images: Affine Reconstructions]

Russell Manning / rmanning@cs.wisc.edu / last modified 02/01/01