CS766 Computer Vision, Fall 2007
Project 2: Panoramic Mosaic Stitching
Assigned: Sept 25, 2007
Due: Oct 10, 2007, at noon
The instructor is grateful to Prof. Steve Seitz for allowing us to use this project, which was developed for his Computer Vision class.
Project Description
Image stitching is a technique to combine a set of images into a larger image
by registering, warping, resampling and blending them together. A popular
application of image stitching is to create panoramas. Generally speaking,
there are two classes of methods for image stitching: direct methods and
feature-based methods. Szeliski and Shum's SIGGRAPH 1997 paper is an example of a direct method;
Brown and Lowe's ICCV 2003 paper, Recognising Panoramas, is a well-known example of a feature-based method.
In this project, you will implement a feature-based method to generate panoramic
images. You are expected to complete the following tasks.
1. Taking images. We prefer that you check out the equipment (camera, tripod, and Kaidan head) in groups of two. Each group can check out the equipment multiple times, but should not hold it for more than half a day per checkout. Please contact the TA for scheduling. Everyone is responsible for writing their own code. Using the Kaidan head helps place the camera's optical center on the rotation axis. Here is a page describing how to use the setup. If you are using your own camera, you will have to estimate its focal length (Brett Allen describes one creative way to measure a rough focal length using just a book and a box; alternatively, use a camera calibration toolkit to get a precise focal length and radial distortion coefficients). The parameters for the class cameras are given below. The focal lengths below are valid only when the camera is zoomed all the way out (wide angle).
Camera | Resolution | Focal length (pixels) | k1 | k2 |
Canon A640, tag 4726208744 | 480x640 | 664.2867 | -0.2001 | 0.25608 |
Canon A640, tag 4726208885 | 480x640 | 663.3665 | -0.19447 | 0.23382 |
Canon A640, tag 47208879 | 480x640 | 660.8799 | -0.18533 | 0.21517 |
Test images | 384x512 | 595 | -0.15 | 0.0 |
The test images are provided for you to debug your code. If you want to calibrate a camera yourself using the toolkit, you will need to know a little Matlab. If you don't, here is a jump start.
2. Warping each image into cylindrical coordinates. Compute the inverse map that warps each image into cylindrical coordinates, as discussed in class. Use the focal length f estimated for the 480x640-resolution images given above (you can either take pictures and save them at that small size, or save them at a larger size and downsample them afterwards). If you use a different image size, remember to scale f according to the image size. A sketch of this inverse warp is given after this step.
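The following is a minimal sketch of the inverse cylindrical warp in Python with NumPy. The language, the function name, and the use of OpenCV's remap for resampling are our own illustrative choices, not requirements of the assignment. For each output (cylindrical) pixel it computes the corresponding location in the original planar image, applying the radial distortion model with k1 and k2 in the same pass; f must be the focal length in pixels for the actual image size you are warping.

    import numpy as np
    import cv2  # used here only for bilinear resampling; any resampler works

    def cylindrical_warp(img, f, k1=0.0, k2=0.0):
        # Inverse-map a planar image into cylindrical coordinates.
        # f is the focal length in pixels for THIS image size; k1, k2 are the
        # radial distortion coefficients from the table above.
        h, w = img.shape[:2]
        yc, xc = np.indices((h, w), dtype=np.float64)  # output (cylindrical) pixel grid
        theta = (xc - 0.5 * w) / f                     # angle around the cylinder
        hcyl = (yc - 0.5 * h) / f                      # height on the cylinder

        # Unproject from the cylinder surface back to normalized planar coordinates.
        xn = np.tan(theta)
        yn = hcyl / np.cos(theta)

        # Apply the radial distortion model so we sample the original (distorted) image.
        r2 = xn * xn + yn * yn
        d = 1.0 + k1 * r2 + k2 * r2 * r2
        map_x = (f * xn * d + 0.5 * w).astype(np.float32)
        map_y = (f * yn * d + 0.5 * h).astype(np.float32)

        # Sample the input image at the computed locations (bilinear interpolation).
        return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

For example, with the test images (384x512, f = 595 pixels, k1 = -0.15, k2 = 0) you would call cylindrical_warp(img, 595, -0.15, 0.0); if you shot at twice the resolution of a calibrated setting, you would double the tabulated f before warping.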
3. Computing the alignment of the images in pairs. Implement feature-based translational motion estimation using the RANSAC method discussed in class. We recommend using SIFT features; you can download the SIFT program for this. The program detects features and saves the feature locations and descriptors to a file, and the README included in the download explains how to use it. Given these features, your program should estimate a translational motion between each pair of adjacent images. A sketch of the RANSAC estimation is given below.
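Below is a minimal sketch of RANSAC for a pure translation, in the same Python/NumPy style as above. It assumes you have already matched SIFT descriptors between the two images (e.g., by nearest-neighbor matching with a ratio test) and have the matched locations in two arrays; the function name, iteration count, and inlier threshold are illustrative, not prescribed.

    import numpy as np

    def ransac_translation(pts1, pts2, n_iters=500, inlier_thresh=2.0):
        # pts1, pts2: (N, 2) arrays of matched feature locations, pts1 in image 1
        # and pts2 in image 2.  Estimates t such that pts1 + t ~= pts2.
        rng = np.random.default_rng()
        best_inliers = np.zeros(len(pts1), dtype=bool)
        for _ in range(n_iters):
            i = rng.integers(len(pts1))      # a translation needs only one match
            t = pts2[i] - pts1[i]            # hypothesis from that single match
            err = np.linalg.norm(pts1 + t - pts2, axis=1)
            inliers = err < inlier_thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # Refine the winning hypothesis by averaging over all of its inliers.
        t = (pts2[best_inliers] - pts1[best_inliers]).mean(axis=0)
        return t, best_inliers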
4. Stitching and cropping the resulting aligned images. Given the warped images and their relative displacements, figure out how large the final stitched image will be and each image's absolute displacement in the panorama. Then resample each image to its final location and blend it with its neighbors. Try a simple feathering function as your weighting function (see the mosaics lecture slide on "blending"); this is a simple 1-D version of the distance map described in [Szeliski & Shum]. For extra credit, you can try other blending functions or figure out some way to compensate for exposure differences.
Crop the resulting image so that the left and right edges seam together perfectly. The horizontal extent can be computed because the first image occurs at both the left and right ends of the stitched sequence (draw the "cut" line halfway through this image). Apply a linear warp to the mosaic to remove any vertical "drift" between the first and last image. This warp, of the form y' = y + ax, should transform the y coordinates of the mosaic so that the first image has the same y coordinate at both the left and right ends. Calculate the value of a needed to perform this transformation. A sketch of the feathering weights and of this drift correction is given below.
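Below is a minimal sketch of the 1-D feathering weights and of accumulating one warped image into the panorama, plus the drift computation, in the same Python/NumPy style as above. The function names, the accumulator layout, and the assumption of color images placed at non-negative integer offsets are ours, not part of the assignment.

    import numpy as np

    def feather_weight(width):
        # 1-D feathering: weight 1 at the horizontal center of the image,
        # falling linearly to ~0 at the left and right edges.
        x = np.arange(width, dtype=np.float64)
        return 1.0 - np.abs(x - 0.5 * (width - 1)) / (0.5 * (width - 1))

    def blend_into(acc, acc_w, img, dx, dy):
        # Accumulate one warped H x W x 3 image into the panorama at integer
        # offset (dx, dy).  acc is the running weighted-color sum and acc_w the
        # running weight sum; the final blend is acc / np.maximum(acc_w, 1e-8).
        h, w = img.shape[:2]
        wgt = feather_weight(w)[None, :, None]   # broadcast over rows and channels
        acc[dy:dy + h, dx:dx + w] += img.astype(np.float64) * wgt
        acc_w[dy:dy + h, dx:dx + w] += wgt

    # Drift correction: if the second copy of the first image ends up `drift`
    # pixels lower than the first copy over a horizontal extent of `extent`
    # pixels, the shear y' = y + a*x with a = -drift / extent brings both
    # copies to the same y coordinate.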
5. Creating the final result. Convert your resulting image to a JPEG and put it on a Web page along with the code to run the interactive viewer. Click here for instructions on how to do this.
Bonus Points
Here is a list of suggestions for extending the program for extra credit. You are encouraged to come up with your own extensions. We're always interested in seeing new, unanticipated approaches and results!
You are welcome to do any other extensions or develop algorithms related to image mosaics. The amount of extra credit depends on how useful and difficult these extensions are.
Submission