Computer Sciences Dept.

CS 638-1: Computational Photography

Spring 2009


HW #5 Projects



  • Michael Beaverson and Ernest Lee
    Automated Generation of High Dynamic Range Images
    High dynamic range (HDR) imaging has become popular in the digital photography world. We will write our own HDR image creation program that works with sets of images taken at different exposure times. Our program will implement the method presented in Paul E. Debevec and Jitendra Malik, "Recovering High Dynamic Range Radiance Maps from Photographs," Proc. SIGGRAPH 97, 1997. The purpose of this project is to demonstrate the use of HDR images and how they are changing the perspective of digital photographers. The program will construct radiance maps, generate the HDR image, and tonemap it for display on non-HDR viewers. We seek to go beyond Debevec's design by creating a fully automated implementation that requires no user input other than the source images and exposure times.
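
    A minimal sketch of the merging step, assuming a linear camera response (the full Debevec-Malik method additionally recovers the response curve); the function names are illustrative, not the project's actual code:

      import numpy as np

      def merge_hdr(images, exposure_times):
          """images: list of HxWx3 float arrays in [0, 1]; exposure_times: seconds."""
          num = np.zeros_like(images[0])
          den = np.zeros_like(images[0])
          for img, t in zip(images, exposure_times):
              w = 1.0 - np.abs(2.0 * img - 1.0)          # hat weight: trust mid-tones
              num += w * np.log(np.clip(img, 1e-4, 1.0) / t)
              den += w
          return np.exp(num / np.maximum(den, 1e-6))     # relative radiance map

      def tonemap(radiance):
          # Simple global operator so the result can be shown on a non-HDR display.
          return radiance / (1.0 + radiance)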


  • A.J. Bureta
    Manipulation at its Finest
    If a picture is worth a thousand words, how much is one worth when it can be manipulated to say exactly what the artist intends? In this paper, I present a series of artistic filters that transform a picture in a variety of ways. All of the filters described in this paper fall into three main categories: grayscale, color, and pixel position. The filters act in various ways, and some require multiple source images or different parameter settings, but all abstract the source image at least a little, and even more so when combined with other filters. Some of these filters are similar to those found in typical photo-editing software; others are not. My goal was to build an automated interface to these filters that makes it much easier to reach the desired effect than performing the required procedures one after another until the finished product is reached.

    Website: http://homepages.cae.wisc.edu/~bureta/Index.html
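
    A rough sketch of the filter-composition idea, with hypothetical filter names (one per category) and a simple pipeline function; this is an illustration, not the paper's actual interface:

      import numpy as np

      def grayscale(img):                         # a grayscale filter
          g = img @ np.array([0.299, 0.587, 0.114])
          return np.repeat(g[..., None], 3, axis=2)

      def invert(img):                            # a color filter
          return 1.0 - img

      def flip_horizontal(img):                   # a pixel-position filter
          return img[:, ::-1, :]

      def apply_pipeline(img, filters):
          """img: HxWx3 float array in [0, 1]; filters: list of functions."""
          for f in filters:
              img = f(img)
          return img

      # e.g. apply_pipeline(photo, [grayscale, invert, flip_horizontal])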


  • Lorenzo De Carli
    Rendering Images using Objects as Primitives
    Image quilting [1] is a well-known technique by Efros and Freeman for generating textures from tiles of a source image. An interesting application of this technique is texture transfer, in which tiles are used as primitives to render a target image. The result is a suggestive effect in which the target image appears to be "emerging" from the generated texture; example results can be found on the website of one of the authors [2]. A similar technique can be used to generate photomosaics [3], which recreate a target photo from a large number of miniatures of other photos.

    A limitation of these techniques is that their "primitives" - i.e., the elements used as rendering units - must belong to very restricted classes of images (textures in the former case, fixed-size photos in the latter). This project consists of developing a similar technique capable of using objects of variable size and shape as primitives (where an "object" is an image representing a well-defined physical entity). In other words, the aim is to create an automated tool that synthesizes images of an object (e.g., a face) by drawing together a large number of pictures of other objects (e.g., fruit).

    The idea was inspired by the work of the sixteenth-century Italian painter Giuseppe Arcimboldo. Obtaining results comparable to his paintings [4] would require knowledge of the structure and nature of the objects in both the source and the target pictures. Instead, the project will make use of simple image-based techniques. While this approach is not expected to generate high-quality results, it will provide a starting point for a more advanced implementation.

    [1] A. Efros and W. Freeman, "Image Quilting for Texture Synthesis and Transfer," Proc. SIGGRAPH, 2001.
    [2] Image quilting: transfer results, http://graphics.cs.cmu.edu/people/efros/research/quilting/results3.html
    [3] Wikipedia page on photomosaics, http://en.wikipedia.org/wiki/Photomosaic
    [4] Wikipedia page on Giuseppe Arcimboldo, http://en.wikipedia.org/wiki/Arcimboldo

    Website: http://pages.cs.wisc.edu/~lorenzo/obr/
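
    A toy, fixed-grid version of the matching idea (the project itself targets primitives of variable size and shape; names here are hypothetical): each block of the target is replaced by the candidate object whose mean color is closest.

      import numpy as np

      def render_with_primitives(target, primitives, block=32):
          """target: HxWx3 float array; primitives: list of block x block x 3 arrays."""
          out = target.copy()
          means = np.array([p.mean(axis=(0, 1)) for p in primitives])   # K x 3
          for y in range(0, target.shape[0] - block + 1, block):
              for x in range(0, target.shape[1] - block + 1, block):
                  cell_mean = target[y:y + block, x:x + block].mean(axis=(0, 1))
                  d = ((means - cell_mean) ** 2).sum(axis=1)
                  out[y:y + block, x:x + block] = primitives[int(d.argmin())]
          return out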


  • Dale Emmons
    Augmented Reality Using Square Marker Patterns
    Augmented reality (AR) is a field of research that deals with combining real-world and computer-generated data, blending computer graphics objects into real footage in real time. The applications of such methods are many, from computer games to computer-assisted navigation. Currently, most AR research uses printed markers of some sort to ease the object detection step. In this paper I present an AR implementation that uses square black markers containing patterns of white squares. After detecting the orientation of a marker and determining its unique pattern, I overlay it with an image that has been appropriately deformed to fit the marker in space.
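
    A hedged sketch of the final overlay step only (marker detection is not shown): estimate a homography from the overlay image's corners to the four detected marker corners with a direct linear transform, then warp by inverse mapping. Function names are illustrative.

      import numpy as np

      def homography(src_pts, dst_pts):
          """src_pts, dst_pts: 4x2 arrays of corresponding corners, same order."""
          rows = []
          for (x, y), (u, v) in zip(src_pts, dst_pts):
              rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
              rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
          _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
          return vt[-1].reshape(3, 3)

      def overlay(frame, art, corners):
          """Warp 'art' onto the quad 'corners' (4x2, frame coordinates, TL-TR-BR-BL)."""
          h, w = art.shape[:2]
          src = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], float)
          Hinv = np.linalg.inv(homography(src, corners))
          ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
          p = np.stack([xs, ys, np.ones_like(xs)], axis=-1) @ Hinv.T
          ax = (p[..., 0] / p[..., 2]).round().astype(int)
          ay = (p[..., 1] / p[..., 2]).round().astype(int)
          inside = (ax >= 0) & (ax < w) & (ay >= 0) & (ay < h)
          frame = frame.copy()
          frame[inside] = art[ay[inside], ax[inside]]
          return frame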


  • Brett Epps
    The World in Dalivision: Crowdsourced Photographic Mosaics
    "The World in Dalivision," or simply, "Dalivision," is an implementation of an "infinite" photographic mosaic inspired by a Salvador Dalí painting along with other pointillist work like that of Chuck Close. Using concepts derived from art such as luminance matching, Dalivision takes a source photo and converts it into a tiled mosaic, pulling pictures from an online photo-sharing website to produce a unique output image. This article describes a rudimentary mosaic-making method and then delves into the details of this specific implementation and the intriguing engineering problems that arise with this type of project. It concludes with a discussion of potential improvements to the program.

    Website: http://eppsilon.com/projects/mosaics/
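
    One plausible way to fill the mosaic (not necessarily Dalivision's own method; names are hypothetical) is to index the downloaded tiles by average luminance and, for each cell of the source photo, pick randomly among the closest matches so that similar cells do not all receive the same picture:

      import numpy as np

      def build_mosaic(source_gray, tiles, cell=24, k=5):
          """source_gray: HxW float array; tiles: list of cell x cell float arrays."""
          tile_lum = np.array([t.mean() for t in tiles])
          out = np.zeros_like(source_gray)
          rng = np.random.default_rng()
          for y in range(0, source_gray.shape[0] - cell + 1, cell):
              for x in range(0, source_gray.shape[1] - cell + 1, cell):
                  target = source_gray[y:y + cell, x:x + cell].mean()
                  nearest = np.abs(tile_lum - target).argsort()[:k]
                  out[y:y + cell, x:x + cell] = tiles[rng.choice(nearest)]
          return out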


  • Patrick Flynn
    Single-Image Vignetting Correction Using the Radial Gradient
    This project is based on the paper "Single-Image Vignetting Correction Using Radial Gradient Symmetry" by Y. Zheng et al., Proc. Computer Vision and Pattern Recognition Conference, 2008, which describes a method to identify and correct images that suffer from vignetting. Vignetting is an effect in which image intensity falls off away from the center of the image, especially toward the corners. Because this falloff is largely radial, the authors use a so-called radial gradient to match an image to a vignetting model. Using the symmetry of the distribution of this radial gradient, they can determine whether or not the image suffers from vignetting, and they can correct the vignetting by minimizing the asymmetry of this distribution. The algorithm requires no training sets or user interaction. In this project I implemented the vignetting detection step and attempted to implement one of the paper's correction methods, which fits a vignetting model and removes the estimated vignetting from the original image.

    Website: http://pages.cs.wisc.edu/~flynn/cs638_project.htm
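
    A minimal sketch of the radial gradient used for detection (the gradient of log-intensity projected onto the direction away from the image center); a strongly negative-skewed distribution of these values suggests vignetting, which the paper quantifies with an asymmetry measure. The function name is illustrative.

      import numpy as np

      def radial_gradient(gray):
          """gray: 2-D float array with values in (0, 1]."""
          logI = np.log(np.clip(gray, 1e-4, 1.0))
          gy, gx = np.gradient(logI)
          h, w = gray.shape
          ys, xs = np.mgrid[0:h, 0:w]
          dx, dy = xs - w / 2.0, ys - h / 2.0
          r = np.sqrt(dx ** 2 + dy ** 2) + 1e-6
          return (gx * dx + gy * dy) / r        # per-pixel radial gradient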


  • David He
    Image-based Texture Tile Synthesis
    Texture tiles are ubiquitous in computer graphics as the primary method for decorating surfaces. Currently, creating texture tiles requires human intervention, which is time consuming and expensive. We present an adaptation of Efros and Freeman's image quilting algorithm that automatically synthesizes texture tiles from a sample texture image. Results show that this algorithm performs very well on stochastic textures but poorly on structured textures.
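
    A sketch of quilting's minimum-error boundary cut, the step that hides seams between neighboring tiles: given the per-pixel squared difference between two overlapping patch strips, dynamic programming finds the vertical seam of least total error. The function name is ours, not the project's code.

      import numpy as np

      def min_error_seam(err):
          """err: HxW array of squared differences in the overlap; returns one cut column per row."""
          cost = err.astype(float)
          for i in range(1, cost.shape[0]):
              left = np.r_[np.inf, cost[i - 1, :-1]]
              right = np.r_[cost[i - 1, 1:], np.inf]
              cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
          seam = [int(cost[-1].argmin())]
          for i in range(cost.shape[0] - 2, -1, -1):
              j = seam[-1]
              lo, hi = max(j - 1, 0), min(j + 2, cost.shape[1])
              seam.append(lo + int(cost[i, lo:hi].argmin()))
          return np.array(seam[::-1])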


  • Alex Leffelman
    Progressively Trained Semi-Automatic Photo Tagging
    In this paper, a combination of existing methods is proposed for a consumer application whose goal is semi-automatic photo tagging of a collection of digital photographs. The two-step process consisting of 1) face detection and 2) face authentication is implemented using "Face Detection in Color Images" by Hsu et al. and "Face Authentication using the Trace Transform" by Srisuk et al. The reasons for choosing these specific methods, a comparison of alternative methods, and the proposed consumer application are detailed in this paper.
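
    A rough sketch of the first stage of color-based face detection: flagging skin-tone pixels in YCbCr space. The box thresholds below are illustrative only; Hsu et al. actually use an elliptical skin model with lighting compensation.

      import numpy as np

      def skin_mask(rgb):
          """rgb: HxWx3 float array in [0, 1]; returns a boolean skin-tone mask."""
          r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
          cb = -0.169 * r - 0.331 * g + 0.500 * b + 0.5
          cr = 0.500 * r - 0.419 * g - 0.081 * b + 0.5
          return (cb > 0.30) & (cb < 0.50) & (cr > 0.52) & (cr < 0.68)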


  • Jason Malinowski
    Stitching Stereo Panoramas
    An approach to automatically stitching "anaglyph" stereo panoramas is presented. Given two sets of images of the same subject taken from different perspectives, the system automatically blends the images into a unified panorama with both image sets in register. The panorama is then rendered as a red-blue stereo "anaglyph" image viewable on any computer monitor.

    Website: http://cs.wisc.edu/malinows/stereostitch
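
    A minimal sketch of the final anaglyph step, assuming the two panoramas are already stitched and registered: the left view drives the red channel and the right view drives the blue channel. The function name is illustrative.

      import numpy as np

      def make_anaglyph(left_rgb, right_rgb):
          """left_rgb, right_rgb: registered HxWx3 float arrays of equal size."""
          weights = np.array([0.299, 0.587, 0.114])   # luminance weights
          out = np.zeros_like(left_rgb)
          out[..., 0] = left_rgb @ weights            # red channel from the left eye
          out[..., 2] = right_rgb @ weights           # blue channel from the right eye
          return out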


  • Matthew Mueller
    Piximilar: Image by Color
    Piximilar: Image by Color is a project based on the work of Idée Labs, in which you search for images by color rather than by text. Piximilar offers a unique experience that allows web designers to define the color scheme on the drawing board and write the cascading style sheet (CSS) before picking out the images. Piximilar leverages Flickr's vast free photo archive, picking out only the images that Flickr deems "interesting." It analyzes these pictures using an in-house algorithm that maps each pixel to a color palette entry, allowing pictures to be retrieved based on their color content. The processing is abstracted away from the user and placed behind a user interface that is friendly, scalable, and fast.
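
    A hedged sketch of the color-signature idea (the actual algorithm is in-house and not published): map each pixel to its nearest palette entry, keep a normalized histogram per photo, and rank photos by how much of the queried color they contain. Names are hypothetical.

      import numpy as np

      def palette_histogram(rgb, palette):
          """rgb: HxWx3 float array; palette: Kx3 array of reference colors."""
          pixels = rgb.reshape(-1, 3)
          d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
          nearest = d.argmin(axis=1)                    # palette index per pixel
          hist = np.bincount(nearest, minlength=len(palette))
          return hist / hist.sum()

      # Retrieval: score each photo by the histogram mass at the queried palette
      # indices and return the highest-scoring photos.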


  • Abe Rubenstein
    Analogous Colorization of Grayscale Images
    The colorization of grayscale images is a valuable tool for many applications, such as colorizing black-and-white films or restoring old photographs. Most colorization techniques require manual designation of the locations to be colored and of the colors themselves. This manual work can be greatly reduced by using a color image analogous to the grayscale image to be colored and transferring its color scheme to the grayscale image based on luminance. Using the research of T. Welsh et al. and others, I implemented an algorithm for this analogous colorization process and tested it on a variety of source-target image pairs. The algorithm's best results are achieved with image pairs whose corresponding features have similar luminance values, but it remains useful even when this is not the case.

    Website: http://pages.cs.wisc.edu/~abraham/cs638-hw5.html
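
    A much-simplified stand-in for the transfer step (Welsh et al. work in the lαβ color space and match on neighborhood statistics; this sketch uses YCbCr and matches on pixel luminance alone, with illustrative names):

      import numpy as np

      def colorize(gray, src_rgb, bins=256):
          """gray: HxW float array in [0, 1]; src_rgb: analogous color image, HxWx3."""
          src_y = src_rgb @ np.array([0.299, 0.587, 0.114])
          src_cb = -0.169 * src_rgb[..., 0] - 0.331 * src_rgb[..., 1] + 0.5 * src_rgb[..., 2]
          src_cr = 0.5 * src_rgb[..., 0] - 0.419 * src_rgb[..., 1] - 0.081 * src_rgb[..., 2]
          # Lookup table: average source chroma for each luminance bin.
          idx = np.clip((src_y * (bins - 1)).astype(int), 0, bins - 1).ravel()
          counts = np.bincount(idx, minlength=bins) + 1e-6
          cb_lut = np.bincount(idx, weights=src_cb.ravel(), minlength=bins) / counts
          cr_lut = np.bincount(idx, weights=src_cr.ravel(), minlength=bins) / counts
          t = np.clip((gray * (bins - 1)).astype(int), 0, bins - 1)
          y, cb, cr = gray, cb_lut[t], cr_lut[t]
          rgb = np.stack([y + 1.402 * cr,                      # YCbCr -> RGB
                          y - 0.344 * cb - 0.714 * cr,
                          y + 1.772 * cb], axis=-1)
          return np.clip(rgb, 0.0, 1.0)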


  • Chris Waclawik
    Creating Better Thumbnails
    When a user wants to find a particular image in a set, they will often scan a table of thumbnails instead of flipping through the full-sized images. The smaller the thumbnails, the more images can be displayed on one screen, and the less time it (theoretically) takes to find a particular image. Once a thumbnail is small enough, however, the loss of detail can make it difficult to recognize the original image, lessening the thumbnail's effectiveness. A smarter approach is to first select the most recognizable, or salient, part of the image and then shrink it. This project implements a thumbnail creator that builds a saliency map for a given image and crops/scales it down to a specified size.

    Website: http://pages.cs.wisc.edu/~waclawik/cs638/
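
    A hedged sketch of saliency-guided cropping (the project's actual saliency map may be computed differently): local gradient magnitude serves as a crude saliency measure, and an integral image scores every square window so the most salient one can be kept and then scaled down. Names are illustrative.

      import numpy as np

      def salient_crop(gray, crop_size):
          """gray: HxW float array; returns the crop_size x crop_size window with most saliency."""
          gy, gx = np.gradient(gray)
          sal = np.hypot(gx, gy)                       # crude saliency: edge energy
          ii = np.pad(sal, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
          c = crop_size
          scores = ii[c:, c:] - ii[:-c, c:] - ii[c:, :-c] + ii[:-c, :-c]
          y, x = np.unravel_index(scores.argmax(), scores.shape)
          return gray[y:y + c, x:x + c]                # scale this down afterwards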


  • Chunsong Wang
    Interactive Shape-Simplifying Image Abstraction
    In this project an approach to automatic image abstraction is presented, and an interactive tool is designed to refine the final result. Following previous papers that use mean curvature flow (MCF), the basic idea is to iteratively smooth local image features. Since MCF alone does not work very well (the result is much too blurry), other filters are applied to the image as well: a shock filter (to enhance edges), a tangent vector field (to preserve local direction), min/max filters (dilation/erosion), and Gaussian blur (to smooth the combination). Finally, the user draws masks to preserve chosen parts of the abstracted images, and the result is a combination of them.

    Website: http://pages.cs.wisc.edu/~chunsong/cs638/
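
    A minimal sketch of one mean curvature flow iteration on a grayscale image (the project combines many such iterations with the shock, tangent-direction, min/max, and Gaussian filters described above); the function name is illustrative.

      import numpy as np

      def mcf_step(img, dt=0.1, eps=1e-6):
          """img: HxW float array; returns the image after one MCF step."""
          iy, ix = np.gradient(img)
          iyy, iyx = np.gradient(iy)
          ixy, ixx = np.gradient(ix)
          curv = ixx * iy ** 2 - 2.0 * ix * iy * ixy + iyy * ix ** 2
          curv /= ix ** 2 + iy ** 2 + eps
          return img + dt * curv

      # Repeating mcf_step a few dozen times smooths small-scale detail while
      # roughly preserving strong contours, which is why the extra filters are
      # needed to keep edges sharp.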


  • Judith Warzel and Chris Wilson
    Making Faces: Creating Novel Faces by Blending
    Given multiple input images containing faces, we create a random matching of the facial features within those images to build a new face. Masks are used over the general feature areas (eyes, nose, and mouth) to assist the blending algorithm. Our algorithm constructs the Laplacian pyramid for the two target images and blends them with the Gaussian pyramid of the mask, providing a "smooth" blend over many scales of the output image and resulting in a nicer finished product.

    Website: http://pages.cs.wisc.edu/~cwilson/638proj.html
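
    A simplified sketch of the pyramid blend for a single pair of grayscale images (a small cross-shaped filter and nearest-neighbour resampling stand in for the usual Gaussian kernel; names are ours):

      import numpy as np

      def blur(img):
          p = np.pad(img, 1, mode='edge')
          return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0

      def down(img):
          return blur(img)[::2, ::2]

      def up(img, shape):
          big = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
          return blur(big)[:shape[0], :shape[1]]

      def blend(a, b, mask, levels=4):
          """a, b: grayscale float images of equal size; mask: weights in [0, 1] for a."""
          bands = []
          for _ in range(levels):
              a2, b2, m2 = down(a), down(b), down(mask)
              la = a - up(a2, a.shape)                 # Laplacian band of a
              lb = b - up(b2, b.shape)                 # Laplacian band of b
              bands.append(mask * la + (1.0 - mask) * lb)
              a, b, mask = a2, b2, m2
          out = mask * a + (1.0 - mask) * b            # blend the coarsest residual
          for band in reversed(bands):
              out = up(out, band.shape) + band         # collapse the pyramid
          return out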


 
CS 638-1 | Department of Computer Sciences | University of Wisconsin - Madison