CS 638-1: Computational Photography
Spring 2010
HW #5 Projects
- Tyler Ambroziak and Ryan Fox
Virtual Barber
What would you look like without a beard? Or with a different style of beard? Think of the beard as a layer on top of the face rather than part of the face itself. Using techniques developed by Nguyen, Lalonde, Efros, and De la Torre, the beard layer can be removed automatically by estimating a beardless face from a database of beardless faces. To refine the synthesis, the differences between the original and synthesized images can then be used to define a beard layer mask. For less drastic measures, we extended the paper's techniques with "style filters," which leave part of the beard intact by masking only parts of the beard layer. Applications of this technique include matching pictures of wanted persons who may have changed their appearance and helping men choose a facial hair style that fits their personality.
For more information, see the
webpage.
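The beard-layer step above, defining a mask from the difference between the original and the synthesized beardless face, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the grayscale inputs, the 0.15 threshold, and the simple compositing are assumed placeholders.

```python
import numpy as np

def beard_layer_mask(original, synthesized, threshold=0.15):
    """Estimate a beard-layer mask from the per-pixel difference between
    the original face and a synthesized beardless face.

    Both inputs are float grayscale images in [0, 1] of the same shape.
    Pixels that differ strongly are assumed to belong to the beard layer.
    """
    return np.abs(original - synthesized) > threshold

def remove_beard(original, synthesized, mask):
    """Composite: keep the original outside the mask and use the
    synthesized beardless estimate inside it."""
    return np.where(mask, synthesized, original)
```

A "style filter" would then simply zero out parts of this mask before compositing, leaving the corresponding beard regions intact.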
- Kelsey Bloomquist
Joiners: From Man to Machine
Incorporating multiple viewpoints into the same picture has the potential to create more informative representations than a single-viewpoint photograph. This multi-viewpoint form of visual expression is known as the joiner photograph: pictures layered onto a 2D canvas, some obstructing the view of others, with individual boundaries left somewhat visible for artistic effect. The idea originates with David Hockney, whose work featured such photomontages between 1970 and 1986 [1]. These compositions are usually assembled by hand, which is time-consuming; our goal is therefore to automate their creation on the computer. The purpose of this experiment is to create joiner photographs through a completely automated process, generating representations that are visually appealing and easily readable.
For more information, see the
webpage.
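The final compositing step of a joiner is simple once layout is known; the hard, automated part of the project is choosing the positions. A minimal sketch, assuming positions are given rather than computed, pastes bordered grayscale photos onto a canvas in order so later ones partially obstruct earlier ones:

```python
import numpy as np

def compose_joiner(photos, positions, canvas_shape, border=2):
    """Paste grayscale photos (floats in [0, 1]) onto a 2D canvas in
    order, later photos partially obstructing earlier ones, with a
    white border around each so individual boundaries stay visible.
    Positions are (row, col) offsets of each framed photo's corner."""
    canvas = np.zeros(canvas_shape)
    for img, (r, c) in zip(photos, positions):
        h, w = img.shape
        framed = np.ones((h + 2 * border, w + 2 * border))  # white frame
        framed[border:border + h, border:border + w] = img
        fh, fw = framed.shape
        canvas[r:r + fh, c:c + fw] = framed
    return canvas
```

An automated system would feed this with positions derived from, for example, feature-based alignment of overlapping views.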
- Jason Brant-Horton
TrekIt!
The TrekIt! project provides a unique and intuitive alternative
to the traditional image search paradigm. Existing image search
methods rely heavily on keyword queries to specify and narrow
the scope of the images returned to the user. Targeted searches
work well when the user knows exactly what to ask for, but that
requirement is also their weakness: in some situations a user
may want to explore a broad range of subjects without being
restricted to specific parameters. This is where TrekIt! is
designed to operate best.
Rather than narrowing the scope of a search before the user has
a chance to explore, TrekIt! acts as an entry point into a cloud
of related images. A typical use case is as follows: a user is
interested in a general subject such as 'Japan' and wants to
explore images related to the keyword. A traditional search on
'Japan' may be too general, returning a large, unorganized set
of images to sift through, so the user enters the keyword into
TrekIt! instead. TrekIt! initially returns an abstracted graph
view with the specified keyword at the center of focus;
branching off the central node are related keywords such as
Sakura Blossom, Kyoto, Tokyo, Mt. Fuji, Samurai, etc. The user
may then 'zoom in' on the focused keyword, at which point
TrekIt! displays a subset cloud of related images, or may
instead click any of the associated keywords, bringing that
keyword into focus and displaying its own associated keywords in
turn. By visually displaying associated nodes as the user crawls
along the graph, TrekIt! presents opportunities to explore areas
the user may not have had the foresight to search for, exposing
previously unrealized images. Once the user zooms in to the
thumbnail level, clicking an image of interest overlays the
original image on the graph, where it can be viewed or
dismissed.
In conclusion, TrekIt! does not completely redefine the way a
user searches for images; it aggregates images at a higher level
of abstraction and displays the content in an organized manner.
Using TrekIt!, a user can explore related content without the
hassle of researching externally for related ideas. Like any
tool, it suits particular circumstances, serving where other
tools would fall short.
For more information, see the
webpage (works with Chrome browser).
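The keyword-graph interaction described above might be modeled with a data structure along these lines. This is a hypothetical sketch inferred from the description, not TrekIt!'s actual code:

```python
from collections import defaultdict

class TrekGraph:
    """Keyword graph: each node is a keyword with associated images;
    undirected edges link related keywords."""

    def __init__(self):
        self.related = defaultdict(set)   # keyword -> related keywords
        self.images = defaultdict(list)   # keyword -> image identifiers

    def add_relation(self, a, b):
        self.related[a].add(b)
        self.related[b].add(a)

    def focus(self, keyword):
        """What the abstracted graph view shows: the focused node and
        its off-shooting related keywords."""
        return keyword, sorted(self.related[keyword])

    def zoom(self, keyword):
        """Zoom in: the subset cloud of images for the focused keyword."""
        return self.images[keyword]
```

Crawling the graph is then just repeated calls to `focus` as the user clicks from keyword to keyword.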
- Aaron Brown
Super-Resolution with Epitomes
Techniques exist for aligning and stitching photos of a scene and for interpolating image data to generate higher-resolution images. We show how to use image epitomes to increase the resolution of images with the aid of high-resolution sample data. Applications include generating stills from movie clips and capturing photos with the low-quality image sensors ubiquitous in mobile devices.
For more information, see the
webpage.
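As a rough illustration of super-resolution from high-resolution sample data, the sketch below matches each low-resolution patch against low/high exemplar pairs and pastes the paired high-resolution patch. A true epitome condenses the exemplars into a compact generative model; this simplified stand-in keeps the raw pairs, and the patch size and factor are arbitrary choices.

```python
import numpy as np

def example_based_sr(low, pairs, patch=4, factor=2):
    """Simplified example-based super-resolution.

    low    -- low-resolution grayscale image
    pairs  -- list of (low_patch, high_patch) exemplars, where each
              low_patch is patch x patch and each high_patch is
              (patch * factor) x (patch * factor)

    For each non-overlapping low-res patch, find the closest low-res
    exemplar by SSD and paste its paired high-res patch into the output.
    """
    H, W = low.shape
    out = np.zeros((H * factor, W * factor))
    lo_ex = np.stack([p[0].ravel() for p in pairs])
    for r in range(0, H - patch + 1, patch):
        for c in range(0, W - patch + 1, patch):
            q = low[r:r + patch, c:c + patch].ravel()
            best = np.argmin(((lo_ex - q) ** 2).sum(axis=1))
            hi = pairs[best][1]
            out[r * factor:(r + patch) * factor,
                c * factor:(c + patch) * factor] = hi
    return out
```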
- Danielle Corona
Finding Books Via User Drawn Cover Art
This paper explains the process of creating a computer program that lets users search for books by their cover art. The user inputs a drawing of what they remember of the cover, and the program outputs the most similar cover art in its database. The main algorithm was originally presented in "Fast Multiresolution Image Querying" by Jacobs et al. The user's query image and all of the database images are assigned "signatures" based on an equation from that paper. The signature of the query image is then compared to the signatures of the database images using an "image querying metric": each database image is assigned a weight when compared to the query image, and the image with the lowest weight is presented to the user, in the hope that it is the cover the user was attempting to recall. The paper also presents ideas for a potential web application built around this program.
For more information, see the
webpage.
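The signature and metric from Jacobs et al. can be sketched as below: a standard 2D Haar decomposition, truncation to the m largest-magnitude coefficients quantized to their signs, and a score that rewards matching coefficients. The published metric weights coefficients by frequency band and handles color channels; this grayscale sketch uses a single weight and assumes square, power-of-two images.

```python
import numpy as np

def haar2d(img):
    """Standard 2D Haar wavelet transform (averaging/differencing),
    for square images with power-of-two side length."""
    a = img.astype(float).copy()
    n = a.shape[0]
    while n > 1:
        half = n // 2
        avg = (a[:n, 0:n:2] + a[:n, 1:n:2]) / 2   # rows
        det = (a[:n, 0:n:2] - a[:n, 1:n:2]) / 2
        a[:n, :half], a[:n, half:n] = avg, det
        avg = (a[0:n:2, :n] + a[1:n:2, :n]) / 2   # columns
        det = (a[0:n:2, :n] - a[1:n:2, :n]) / 2
        a[:half, :n], a[half:n, :n] = avg, det
        n = half
    return a

def signature(img, m=40):
    """Signature: overall average plus the m largest-magnitude wavelet
    coefficients, truncated to their signs (a dict index -> +/-1)."""
    w = haar2d(img)
    avg = w[0, 0]
    w[0, 0] = 0
    flat = np.abs(w).ravel()
    idx = np.argsort(flat)[-m:]
    signs = {int(i): int(np.sign(w.ravel()[i])) for i in idx if flat[i] > 0}
    return avg, signs

def score(query_sig, target_sig, w_avg=1.0, w_coef=1.0):
    """Lower is better: difference of averages, minus a bonus for every
    truncated coefficient matching in position and sign."""
    qa, qs = query_sig
    ta, ts = target_sig
    s = w_avg * abs(qa - ta)
    for i, sign in qs.items():
        if ts.get(i) == sign:
            s -= w_coef
    return s
```

Querying the database is then a linear scan for the minimum score (the paper accelerates this with inverted arrays over coefficient positions).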
- Amanda Fahrenbach and Rachel Wroblewski
Glassify!
Glassify is an automated stained glass creator. Normally a piece of stained glass art can take hours to weeks to finish, and creating glass patterns takes patience and a creative eye. With our software, you can create an image similar to an artistic piece of stained glass. This can serve as a piece of art in itself, or as a way to generate a pattern and preview what a completed piece of similar stained glass might look like. The results work relatively well as long as the image is not too finely detailed.
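The description above does not specify Glassify's algorithm; one plausible sketch clusters pixels on position and intensity with a toy k-means and flat-fills each cluster, mimicking panes of glass (lead lines between panes are omitted):

```python
import numpy as np

def glassify(img, k=6, iters=10, seed=0, spatial_weight=1.0):
    """Toy 'stained glass' effect on a grayscale image in [0, 1]:
    k-means over (row, col, intensity) features, then each cluster is
    flat-filled with its mean intensity."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([ys.ravel() * spatial_weight,
                      xs.ravel() * spatial_weight,
                      img.ravel() * max(h, w)], axis=1).astype(float)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = feats[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    out = np.zeros(h * w)
    for j in range(k):
        sel = labels == j
        if sel.any():
            out[sel] = img.ravel()[sel].mean()
    return out.reshape(h, w)
```

The `spatial_weight` knob trades off pane compactness against color fidelity; darkening cluster boundaries would add the lead lines.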
- Shih-Hsuan Hsu
Four Methods of Estimating Camera Movement in a Motion-Blurred, Night View Photo
Modern cities show off their fascinating appearance through their lights at night, drawing many people to photograph the dazzling city lights. However, the relatively low-light conditions mean these photos often suffer from camera shake during the long exposure. In this project, four approaches are tested for estimating the movement of the camera given a blurred image and a reference image. The first method searches for "clue points" and then determines a path from them using a sliding-window algorithm. The second method uses clue points as well, but applies a local thresholding procedure followed by morphological operations. The third and fourth methods both solve a quadratic optimization problem; they differ in that the third solves it with constraints, while the fourth solves it without constraints and then projects the solution onto the third method's constraint set.
For more information, see the
webpage.
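The solve-then-project idea of the fourth method can be illustrated on a generic quadratic problem. The abstract does not say what the true constraint set is; a simple box is assumed here purely for illustration.

```python
import numpy as np

def solve_unconstrained(A, b):
    """Solve the quadratic problem min ||Ax - b||^2 with no
    constraints (ordinary least squares)."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

def project_to_box(x, lo, hi):
    """Project the unconstrained solution onto a box constraint set
    [lo, hi]^n (a stand-in for the third method's constraint set)."""
    return np.clip(x, lo, hi)
```

The third method would instead solve the constrained problem directly, e.g. with a quadratic-programming solver; the fourth trades some accuracy for the cheaper solve-then-project route.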
- Kenneth Jones
Portraits using Texture Transfer
Texture transfer using a homogeneous texture source image (e.g., white rice) can produce interesting results, particularly when applied to portraits. Skin and hair elements in portraits have distinct qualities, thus facial rendering might be more visually appealing if heterogeneous texture sources (e.g., partially mixed brown and white rice) were used to better represent such contrasting features. This paper proposes extensions to the basic texture transfer algorithm to better address this aspect. The goal is to synthesize more recognizable photo-realistic facial rendering using enhanced texture transfer techniques.
For more information, see the
webpage.
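A stripped-down version of the underlying texture transfer step chooses, for each target region, the source patch that best matches the target's local intensities (the "correspondence map"). Overlap seams, iteration, and the heterogeneous-source extension proposed above are omitted from this sketch:

```python
import numpy as np

def texture_transfer(target, source, patch=4):
    """Tile the target image with non-overlapping source patches,
    each chosen to minimize SSD against the corresponding target
    region (grayscale, float images)."""
    h, w = target.shape
    sh, sw = source.shape
    cands = []
    for r in range(0, sh - patch + 1, patch):
        for c in range(0, sw - patch + 1, patch):
            cands.append(source[r:r + patch, c:c + patch])
    cands = np.stack(cands)
    out = np.zeros_like(target, dtype=float)
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            t = target[r:r + patch, c:c + patch]
            err = ((cands - t) ** 2).sum(axis=(1, 2))
            out[r:r + patch, c:c + patch] = cands[err.argmin()]
    return out
```

The heterogeneous-source extension would amount to restricting the candidate set per region (e.g., brown-rice patches for hair, white-rice patches for skin).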
- Olivier Lebon and Ketan Surender
3D Views from an Ordered Image Sequence
Multi-view stereo from calibrated images and camera calibration estimation are two highly studied areas. By combining aspects of both, it is possible to build an end-to-end system that takes a set of input images of an object and outputs a 3D representation of it. We describe such a system for ordered image sequences, similar to those generated by a video. Feature matching and sparse bundle adjustment provide camera pose estimation, while a normalized cross-correlation approach is used to generate depth maps. These depth maps are projected into 3D space to generate a 3D view of the scene. Outputs of this system on the Middlebury TempleRing data set are shown.
For more information, see the
webpage.
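The normalized cross-correlation matching behind the depth maps can be illustrated with a toy rectified-stereo version. The actual system works from estimated camera poses rather than horizontal disparities; this sketch is the same scoring function in the simplest geometry.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def disparity_map(left, right, patch=3, max_disp=4):
    """For each interior pixel of a rectified pair, pick the horizontal
    disparity whose right-image patch maximizes NCC with the left-image
    patch (disparity is a proxy for inverse depth)."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=int)
    for r in range(half, h - half):
        for c in range(half, w - half):
            ref = left[r - half:r + half + 1, c - half:c + half + 1]
            best, best_d = -2.0, 0
            for d in range(0, min(max_disp, c - half) + 1):
                cand = right[r - half:r + half + 1,
                             c - d - half:c - d + half + 1]
                s = ncc(ref, cand)
                if s > best:
                    best, best_d = s, d
            disp[r, c] = best_d
    return disp
```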
- Mark Lenz
Social Tourism using Photos
Mobile phones are frequently used to determine one's location and
navigation directions, and they can also provide a platform for connecting
tourists with each other. This paper proposes a system that uses a set
of geotagged photos to automatically compute the geographic location of
another photo and then augment that photo with text and images that
enable collaboration and enhance navigation. The system breaks the
world into smaller regions and then computes a geographic location using
one method and a complete camera pose using a more complex one; both
methods rely on robust local feature matching. Tourists can then attach
text and images to objects in a photo, and these annotations are
augmented onto other photos viewing the same objects, allowing tourists
to share information linked to specific objects in their environment.
Navigation information in the form of arrows is projected onto photos,
pointing tourists in the direction of their selected destination. This
system is particularly applicable to
places such as malls, museums, and theme parks where photos can provide
more information than GPS, but it can be applied anywhere.
For more information, see the
webpage.
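The robust local feature matching both methods rely on is typically a nearest-neighbor search with a ratio test. A toy version that geotags a query photo by the database photo with the most matches might look like the sketch below; the descriptors and the two-stage region scheme are simplified away.

```python
import numpy as np

def match_features(desc_q, desc_db, ratio=0.75):
    """Ratio-test matching: a query descriptor matches its nearest
    database descriptor only if that neighbor is clearly closer than
    the second-nearest one. Returns (query_idx, db_idx) pairs."""
    matches = []
    for i, d in enumerate(desc_q):
        dists = np.linalg.norm(desc_db - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches

def locate(desc_q, db):
    """Assign the query photo the geotag of the database photo with the
    most surviving matches (db maps tag -> descriptor array with at
    least two rows)."""
    best_tag, best_n = None, -1
    for tag, desc_db in db.items():
        n = len(match_features(desc_q, desc_db))
        if n > best_n:
            best_tag, best_n = tag, n
    return best_tag
```

A full camera pose would additionally fit a geometric model (e.g., via RANSAC) to the surviving matches rather than just counting them.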
- George Wanant
Fragment-based Image Completion
Image completion is the process of synthesizing missing or removed parts of an image from its remaining parts or from other, similar sources. It is often used to remove unwanted objects from a scene: since the actual area behind a removed object cannot be known, image completion generates a reasonable-looking replacement. This project implements a fragment-based image completion algorithm that searches for target and source fragments and then blends them together until a final image is constructed.
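A greedy sketch of the fragment search-and-fill loop is below. The published fragment-based algorithm works coarse-to-fine and blends fragments smoothly; this illustration simply pastes the best SSD match directly, and assumes the mask leaves at least one fully known patch to serve as a source.

```python
import numpy as np

def complete_region(img, mask, patch=4):
    """Fill masked (missing) pixels of a grayscale image.

    For each missing patch (target fragment), search the fully known
    patches of the image (source fragments) for the one whose visible
    pixels agree best with the target's known pixels (SSD), and copy
    its values into the missing pixels.
    """
    out = img.astype(float).copy()
    h, w = img.shape
    sources = []                       # fully known candidate fragments
    for r in range(h - patch + 1):
        for c in range(w - patch + 1):
            if not mask[r:r + patch, c:c + patch].any():
                sources.append(out[r:r + patch, c:c + patch].copy())
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            m = mask[r:r + patch, c:c + patch]
            if not m.any():
                continue
            t = out[r:r + patch, c:c + patch]
            known = ~m
            best, best_err = None, np.inf
            for s in sources:
                err = ((s[known] - t[known]) ** 2).sum()
                if err < best_err:
                    best, best_err = s, err
            out[r:r + patch, c:c + patch] = np.where(m, best, t)
    return out
```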