Report of Project 3

Name: Lijie Heng
Date: Oct 23rd, 2007

Goal

This project implements a system that constructs a height field from a series of images of a diffuse object under different point light sources. The steps are: calibrating the lighting directions, finding the best-fit normal and albedo at each pixel, and then reconstructing the 3D model from the depth at each pixel.


Results

The result maps for the six sample datasets are as follows:
Owl
Buddha
Horse
Rock
Cat
Gray

Implementation of Each Step

Calibration

To determine the directions of the point light sources, a shiny chrome sphere placed in the same location as all the other objects is used. Since we know the shape of this object, we can determine the normal at any given point on its surface, and therefore we can also compute the reflection direction for the brightest spot on the surface.

To determine the direction vector L:

Centroid of the sphere
Using chrome.mask, find the average location (xc, yc) of the non-zero pixels as the center of the ball.
Radius of the sphere
Find the maximal distance from the center (xc, yc) to the boundary pixels as the radius r.
Centroid of the highlight pixels
Find the average of the non-zero pixels in each of the 12 images as the highlight point (xh, yh).
Light direction
For each of the 12 images, compute the normal n at the highlight point:
n(x) = (xh-xc)/r;
n(y) = (yh-yc)/r;
n(z) = sqrt(1-n(x)^2-n(y)^2);

After we get the normal n at the highlight point of each image, we use L = 2(N·R)N - R with R = (0, 0, 1) (the viewing direction) to get the light direction L. The light directions for the 12 images in each sample dataset are here.
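A minimal MATLAB sketch of this calibration step is given below. The variable names chromeMask and highlightImg are assumptions for illustration (the chrome image is assumed to be already thresholded so that only the highlight is non-zero); this is not the actual calibration.m.

% Illustrative sketch of the calibration step (chromeMask and highlightImg
% are assumed names, not the actual variables in calibration.m).
[my, mx] = find(chromeMask > 0);               % pixels inside the sphere mask
xc = mean(mx);  yc = mean(my);                 % centroid = sphere center
r  = max(sqrt((mx - xc).^2 + (my - yc).^2));   % radius = farthest mask pixel

[hy, hx] = find(highlightImg > 0);             % thresholded highlight pixels
xh = mean(hx);  yh = mean(hy);                 % centroid of the highlight

nx = (xh - xc) / r;                            % unit normal at the highlight
ny = (yh - yc) / r;
nz = sqrt(1 - nx^2 - ny^2);
n  = [nx; ny; nz];

R = [0; 0; 1];                                 % viewing direction
L = 2 * (n' * R) * n - R;                      % mirror reflection of R about n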

Solve for Normals

By minimizing the following weighted linear least squares problem, we get the surface normal at each pixel:
Q = sum{ (I(i)^2 - I(i)*L(i,:)*g)^2 }
g = kd*n;
Each residual is weighted by the intensity I(i), so dark (shadowed or noisy) measurements contribute less. To minimize Q, we set the derivative of Q with respect to g to zero. Thus, for each pixel, we get three equations for g:
% I is the 12x1 vector of intensities at this pixel, L is the 12x3 light matrix
A = [sum(I.^2.*L(:,1).^2),     sum(I.^2.*L(:,2).*L(:,1)), sum(I.^2.*L(:,3).*L(:,1));
     sum(I.^2.*L(:,1).*L(:,2)), sum(I.^2.*L(:,2).^2),     sum(I.^2.*L(:,3).*L(:,2));
     sum(I.^2.*L(:,1).*L(:,3)), sum(I.^2.*L(:,2).*L(:,3)), sum(I.^2.*L(:,3).^2)];
b = [sum(I.^3.*L(:,1)); sum(I.^3.*L(:,2)); sum(I.^3.*L(:,3))];
g = A\b;

After we get g, the normal is n = g/|g| (since g = kd*n with |n| = 1), and the albedo for each color channel is kd = (I·J)/(J·J), where J(i) = L(i,:)·n is the predicted intensity under unit albedo.
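As a rough per-pixel sketch of this last step, assuming I is the 12x1 intensity vector at one pixel for a single color channel and L is the 12x3 matrix of light directions (illustrative only, not the actual stereo.m):

n  = g / norm(g);          % unit surface normal, since g = kd*n
J  = L * n;                % predicted intensity under unit albedo
kd = (I' * J) / (J' * J);  % least-squares albedo for this channel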

Surface Fitting

For each pixel, we compute the depth z from the following two equations:
n(z)*z(i,j) - n(z)*z(i+1,j) = n(x);
n(z)*z(i,j) - n(z)*z(i,j+1) = n(y);
These come from requiring the normal to be perpendicular to the tangent vectors between neighboring pixels. Stacked over all pixels, they can be written as the matrix equation Mz = v. Using a sparse matrix in MATLAB to solve it, we get the depth z at each pixel.
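Below is a sketch of how such a sparse system could be assembled and solved. The H-by-W normal maps nx, ny, nz and the straightforward loop are assumptions for illustration; object masking and boundary handling are ignored, and the loop is unoptimized.

% Illustrative sketch: build the sparse system M z = v from the two
% constraints per pixel (not the actual fitting.m).
[H, W] = size(nz);
npix = H * W;
idx  = @(i, j) (j - 1) * H + i;          % column-major pixel index

rows = []; cols = []; vals = []; v = [];
eq = 0;
for j = 1:W-1
    for i = 1:H-1
        % n(z)*z(i,j) - n(z)*z(i+1,j) = n(x)
        eq = eq + 1;
        rows = [rows; eq; eq];
        cols = [cols; idx(i, j); idx(i+1, j)];
        vals = [vals; nz(i, j); -nz(i, j)];
        v    = [v; nx(i, j)];
        % n(z)*z(i,j) - n(z)*z(i,j+1) = n(y)
        eq = eq + 1;
        rows = [rows; eq; eq];
        cols = [cols; idx(i, j); idx(i, j+1)];
        vals = [vals; nz(i, j); -nz(i, j)];
        v    = [v; ny(i, j)];
    end
end
M = sparse(rows, cols, vals, eq, npix);
z = M \ v;                               % sparse least squares; z is only
z = reshape(z, H, W);                    % determined up to an additive constant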


To Run

The program is written in MATLAB. To run this program:
1. Run calibration.m to compute the light directions L
2. Run stereo.m to calculate the albedos and normals and plot the RGB normal map and albedo map
3. Run fitting.m to find the depth z and plot the 3D reconstruction
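For example, from the MATLAB prompt (assuming the scripts are on the path and the dataset name is set inside each script):

>> calibration   % computes the light directions L for the 12 images
>> stereo        % computes and plots the normal map and albedo map
>> fitting       % solves for z and plots the 3D reconstruction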

Summary

This project is easier than the first two projects. In general, the methods all work and the results look good. One thing we could improve is to use the intensity as a weight when computing the highlight point for each image, instead of just using the unweighted average.