CS 534: Computational Photography
Fall 2016
HW #5 Projects
- James Babb
Deshredder: Reconstructing Shredded Documents through Computational Photography Techniques
Shredding is a popular way to destroy important documents. A shredded document is very hard to reconstruct by hand because of the vast amount of time required to figure out which pieces go together (think of a jigsaw puzzle with no distinct shapes), and mixing the strips of many shredded documents in one bag further enhances security. In this project, however, I show that document shredding is in fact not a secure way of destroying documents, and demonstrate how a simple program can be written to at least partially reconstruct individual documents.
Project web page.
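The matching idea behind such a reconstruction can be sketched in a few lines: score how well two strips fit side by side by comparing their adjacent edge columns, then chain strips greedily. This is an illustrative toy, not the project's actual algorithm; `edge_cost` and `greedy_order` are made-up names, and the greedy chain assumes the first strip is the true leftmost one (a real solver must not).

```python
import numpy as np

def edge_cost(left, right):
    """Sum of squared differences between the rightmost pixel column of
    `left` and the leftmost column of `right` -- low cost = likely neighbors."""
    return float(np.sum((left[:, -1] - right[:, 0]) ** 2))

def greedy_order(strips):
    """Chain strips left to right, always appending the unused strip whose
    left edge best matches the current right edge."""
    remaining = list(range(1, len(strips)))
    order = [0]
    while remaining:
        best = min(remaining, key=lambda j: edge_cost(strips[order[-1]], strips[j]))
        order.append(best)
        remaining.remove(best)
    return order
```

On a smooth test image cut into vertical strips and shuffled, this recovers the original left-to-right order; real scanned strips need more robust costs and global (not greedy) assembly.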
- Sean Bauer and Abishek Muralimohan
Capturing Depth in Photos: Applications of the Z-Channel
In this project, we present a system for obtaining depth data (a Z-channel) along with the conventional RGB channels while capturing a photo or video. The system is built on the Microsoft Kinect, which calculates a coarse depth map of a scene using an infrared laser grid projected onto it. Real-time depth information, along with an RGB video stream, is made available to the user through the Kinect SDK (beta). We then augment existing image processing algorithms with depth information; our primary examples are seam carving and defocusing. This allows dynamic resizing of images without losing important (depth-based) objects. Similarly, we can automatically defocus or create lens blur for the background, foreground, or specific objects based on depth.
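One way depth can augment seam carving is to boost the energy map wherever the Z-channel says a pixel is near the camera, so seams route around close (presumably important) objects. The sketch below is an assumed formulation, not the project's code; `alpha` is a made-up weighting parameter.

```python
import numpy as np

def depth_weighted_energy(gray, depth, alpha=5.0):
    """Gradient-magnitude seam energy, boosted where the Z-channel reports
    'near'. Seams (which follow low energy) then avoid close objects."""
    gy, gx = np.gradient(gray.astype(float))
    grad = np.abs(gx) + np.abs(gy)
    nearness = 1.0 - depth / depth.max()   # 1.0 = closest to the camera
    return grad + alpha * nearness
```

Any standard seam-carving solver can consume this energy map unchanged; only the energy definition is depth-aware.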
- Jacob Berger
Sky Replacement
Most previous methods for replacing parts of images have
used large databases of similar images, and most segmentation algorithms have
relied on matting techniques and other user input. This project instead aimed
to replace the sky portion of an input image's background with portions
of a small group of images, while maintaining a semantically valid result.
To achieve this goal, the algorithm uses an automatic
segmentation method based on energy functions. The scene-matching
step selects candidates by similar location, low energy, and similar color
structure. The combined algorithm requires minimal user input and only a
small selection of similar images.
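A crude intuition for where the sky mask comes from: sky pixels tend to be blue-dominant. The energy-function segmentation described above is far more robust; this numpy one-liner is only a hypothetical baseline, and the threshold of 20 is invented.

```python
import numpy as np

def naive_sky_mask(rgb):
    """Toy stand-in for energy-based sky segmentation: flag pixels whose
    blue channel clearly dominates red and green. Illustrative only."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    return (b > r + 20) & (b > g + 20)
```

A real pipeline would refine such a seed mask with the energy functions and enforce that the mask is connected to the top of the frame.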
- Jacob Dejno and Allen Sogis-Hernandez
Video/Image Colorization and Recoloring Using User Input Optimization
Our project implements the
recoloring and colorization technique from the paper "Colorization Using Optimization" by Levin,
Lischinski, and Weiss. The paper presents a fast, accurate algorithm for coloring photos
and videos from a very small amount of user input in the form of color scribbles within certain
regions. The key assumption is that neighboring pixels with similar
intensities should have similar colors. The authors encode this in a cost
function over the YUV color space whose optimization can be solved efficiently using standard techniques,
so color propagates from the few marked scribbles to the neighboring pixels of the picture or video.
Implementing this method, we found that the
algorithm presented in this paper has the potential to transform the recoloring and
restoration industries while requiring very little
user input.
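The propagation idea can be shown in one dimension: unscribbled pixels take a weighted average of their neighbors' chroma, with weights from intensity affinity, and scribbled pixels stay fixed. The paper solves this as one sparse linear system; the Jacobi-style iteration below is a simplified sketch, and the affinity scale `0.1` is an assumed constant.

```python
import numpy as np

def propagate_color_1d(intensity, scribble_u, n_iters=500):
    """1D sketch of the Levin et al. idea: iterate u(p) <- weighted mean of
    neighbors' u, weights exp(-(I(p)-I(q))^2 / 0.1); scribbles are fixed.
    NaN in scribble_u means 'no scribble here'."""
    u = scribble_u.astype(float).copy()
    fixed = ~np.isnan(scribble_u)
    u[~fixed] = 0.0
    n = len(intensity)
    for _ in range(n_iters):
        new_u = u.copy()
        for p in range(n):
            if fixed[p]:
                continue
            qs = [q for q in (p - 1, p + 1) if 0 <= q < n]
            w = np.array([np.exp(-(intensity[p] - intensity[q]) ** 2 / 0.1)
                          for q in qs])
            new_u[p] = np.dot(w, u[qs]) / w.sum()
        u = new_u
    return u
```

With two intensity regions and one scribble in each, the chroma stays nearly constant within each region and switches sharply at the intensity edge, which is exactly the behavior the cost function rewards.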
- Nick Dimick and Lorenzo Zemella
Multi-Focus Image Fusion
Many types of cameras have a very limited depth of focus, so images taken with
them can only have objects within a single plane in focus. This makes it
difficult to keep several objects on different planes in focus at once. Specialized cameras with
confocal lenses may be used, but their performance is usually slow or otherwise unsuitable. This project looks
at an alternative to special equipment. Using a single camera to collect a group of images
with different planes of focus, it is possible to develop an algorithm that combines
all of the images into a single image with an extended depth of focus. The result is an image
containing all objects of interest in focus, as if it had been taken using a very large depth of
focus. Advantages of this approach include requiring no special equipment and retaining the full capabilities
of a "standard" camera: the full camera resolution can be used, as well as
faster shutter speeds in dark scenes that would otherwise not be possible. Since computational power
is cheap, this is a viable and efficient alternative. The idea for this project was inspired
by the LYTRO camera, and it explores work completed by the Center for Bio-Image
Informatics, University of California, and possibly others.
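A minimal fusion rule for such a focus stack: for each pixel, copy the value from whichever source image has the largest local contrast (absolute Laplacian) there, since in-focus regions have the strongest local detail. This numpy sketch is an assumed baseline, not the project's algorithm; real fusion methods also smooth the selection map to avoid seams.

```python
import numpy as np

def fuse_by_sharpness(imgs):
    """Per-pixel focus-stack fusion: pick, at each pixel, the image whose
    absolute Laplacian (via np.gradient twice) is largest there."""
    sharp = []
    for img in imgs:
        gy, gx = np.gradient(img.astype(float))
        gyy, _ = np.gradient(gy)
        _, gxx = np.gradient(gx)
        sharp.append(np.abs(gxx + gyy))
    choice = np.argmax(np.stack(sharp), axis=0)       # winning image index
    stack = np.stack([img.astype(float) for img in imgs])
    fused = np.take_along_axis(stack, choice[None], axis=0)[0]
    return fused, choice
```

With two synthetic shots, each sharp in a different spot, the fused image keeps both sharp features.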
- Tanner Engbretson, Zhiwei Ye and Muhammad Yusoff
Beyond Human Visionary: Exploring Image Change Detection Techniques
Image change detection is an emerging technique in the field of computer vision, developed to complement visual tasks that go beyond human capability. It is a set of methods that fully exploits a computer's ability to detect differences among multiple images taken at different times. It is applicable in various fields such as remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing (Radke et al., 2005). Information about change is critical in all of these disciplines, and it supports future management decisions. Furthermore, video change detection (Gargi et al., 2000) has been developed using similar techniques. Our project focuses on land-change detection from satellite images (e.g., deforestation and urban growth), and we also try to extend our algorithm to detect changes in ordinary images. Our project is based on several papers, including "Automatic Analysis of the Difference Image for Unsupervised Change Detection" by Bruzzone and Prieto and "Image Change Detection Algorithms: A Systematic Survey" by Radke, Andra, Al-Kofahi, and Roysam.
Project web page.
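The simplest member of this family of methods is thresholding the difference image: flag a pixel as changed when its absolute difference between the two dates is an outlier. The statistical threshold below (mean plus `k` standard deviations) is a common baseline, not the EM-based method of Bruzzone and Prieto.

```python
import numpy as np

def change_mask(img_t0, img_t1, k=2.0):
    """Difference-image change detection: a pixel is 'changed' when its
    absolute difference exceeds mean + k*std of the difference image."""
    d = np.abs(img_t1.astype(float) - img_t0.astype(float))
    return d > d.mean() + k * d.std()
```

Unsupervised methods like the one cited above estimate the changed/unchanged statistics instead of fixing `k` by hand.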
- Nathan Figueroa
Reconstructing Shredded Documents
Despite advances in computational forensics, the task of reassembling a shredded document into its original form remains a difficult and time consuming process. The most successful attempts at computationally solving this problem rely heavily on human interaction. This paper examines fully automated techniques for the extraction of real-world shredded documents and for the reassembly of synthetically shredded ones.
- Matt Flesch and Erin Rasmussen
UW-Madison Hall View: An Extension and Integration with Google Maps Street View
Google Maps Street View provides users with the ability to view 360-degree panoramas of many streets; users can navigate down a street and view a panorama as if they were standing in the middle of the road. We use the Google Maps JavaScript API to create our own panoramas and integrate them into the existing Google panoramas. We take sets of images from multiple locations inside the Computer Science building and build a number of panorama tiles using Photoshop's Photomerge tool. We then link these custom tiles together so that users can virtually navigate the halls of the Computer Science building. We have also added a dropdown menu from which the user can select a destination room; the user receives text directions to that room, and the panorama spins and automatically advances in the direction they should go.
Project web page.
- Aaron Fruth
iSpy: Automatic Reconstruction of Typed Input
With the rise of smartphones, many have looked for ways to gather information from these devices. Obtaining such information has traditionally been difficult, because it involved hacking the target's phone, collaborating with service providers, or filming the target's phone with expensive cameras. Now, user input can be recovered using a low-resolution camera, even when the phone is viewed in a mirror or reflection. This method raises privacy and security concerns for those being targeted unfairly or illegally, which is why taking measures against this form of data extraction is important.
To find which letter is pressed, we feed a video frame or photograph to the iSpy.m script. The script passes the image to the findPhone function, which uses a SIFT feature detector to locate the phone in the image. The detected feature points are compared to a reference screenshot, the scale and rotation errors are estimated from the matches, and the image is corrected accordingly. With these errors removed, the phone region is cropped out and given to the compareInput function. This function computes an image gradient of the input, then compares its energy function with the energy functions of reference images of letters and numbers. The reference with the lowest error is taken as the match; that letter is returned and printed on the screen.
Project web page.
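The compareInput step can be sketched as: compute a gradient-energy image of the observed key patch, compare it against each reference glyph's energy, and return the lowest-error match. This is an illustrative reduction of the description above; `best_letter` is a made-up name and the SIFT rectification step is assumed to have already happened.

```python
import numpy as np

def best_letter(key_img, refs):
    """Pick the reference glyph whose gradient-energy image is closest
    (in summed squared error) to that of the observed key patch."""
    def energy(img):
        gy, gx = np.gradient(img.astype(float))
        return np.abs(gx) + np.abs(gy)
    e = energy(key_img)
    errors = {ch: float(np.sum((energy(ref) - e) ** 2))
              for ch, ref in refs.items()}
    return min(errors, key=errors.get)
```

In practice the references would be rendered at the same scale as the rectified phone crop, which is exactly what the SIFT-based correction provides.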
- Chris Hall and Lee Yerkes
Videos to Interesting Stills
Videos provide a series of related images that can be manipulated to
synthesize new interesting images which contain more information than
any single frame in the video. Our work explores ways that common
videos can be used to synthesize new still images such as panoramas,
activity synopsis syntheses (also sometimes called chronophotography),
stroboscopy, small occlusion removal, and "peeking" images.
Project web page.
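One of the listed syntheses, stroboscopy, has a compact formulation: estimate a background (e.g. the per-pixel median over frames), then keep, at each pixel, the frame value that deviates most from that background, so a moving subject appears at several positions in one still. A hedged numpy sketch (`strobe_composite` is a made-up name, not the project's code):

```python
import numpy as np

def strobe_composite(frames, background):
    """Stroboscopic still: per pixel, take the frame value that deviates
    most from the background, compositing the subject's positions."""
    stack = np.stack([f.astype(float) for f in frames])
    dev = np.abs(stack - background)
    idx = np.argmax(dev, axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]
```

A robust background for a static camera is `np.median(np.stack(frames), axis=0)`; panoramas and occlusion removal need alignment first.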
- Caitlin Kirihara and Tessa Verbruggen
Constructing Lichtenstein-esque Images
Roy Lichtenstein was a pop artist mainly active in the 1960s. His images
are based on the style of advertisements and comic strips: they have large smooth areas, some
with a dot-matrix effect, and bold black edges.
For this assignment we attempted to reproduce this effect automatically with Matlab. The program
takes an input photograph and outputs a new image altered to resemble the style of Roy
Lichtenstein. To achieve this, the program applies three different effects. First, a bilateral
filter smooths regions of similar color into a flatter, more comic-like surface. Next, a dot-matrix
effect is applied to the regions of the image detected as skin. Lastly, a combination of
region and edge detection is used to find and superimpose the strong black edges and accented areas
that create the bold black lines Lichtenstein used in his paintings.
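The dot-matrix pass can be approximated by halftoning: for each small block, draw a dark dot whose size tracks the block's mean darkness. The sketch below uses square "dots" to stay numpy-only; it is a toy, not the project's Matlab code, and `cell=4` is an arbitrary block size.

```python
import numpy as np

def dot_matrix(gray, cell=4):
    """Toy Ben-Day-dot halftone on a [0,1] grayscale image: per cell x cell
    block, paint a dark square whose side tracks the block's darkness."""
    h, w = gray.shape
    out = np.ones_like(gray, dtype=float)
    for y in range(0, h - h % cell, cell):
        for x in range(0, w - w % cell, cell):
            darkness = 1.0 - gray[y:y + cell, x:x + cell].mean()
            r = int(round(darkness * cell))
            out[y:y + r, x:x + r] = 0.0
    return out
```

Restricting this pass to the skin mask, as described above, and compositing it with the bilateral-filtered base and the black edges gives the Lichtenstein look.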
- Cara Lauritzen and Rose O'Donnell
Which is Better? An Analysis of Photo Quality Assessment
This project was inspired by Luo, Wang, and Tang's "Content-Based Photo Quality Assessment." The
main objective of the project is to implement the techniques described in various
papers on computer-based evaluation of photo quality. Our work calculates quality using
six non-reference methods of assessment: hue count, spatial distribution of edges, colorfulness,
rule of thirds, luminance, and blurriness.
We also collect human quality assessments to gauge the effectiveness of our methods. We use ten
pairs of images; each pair contains images of the same content with slight variations in color, subject
placement, lighting, etc. Results from the computer-based
quality assessment are compared with the human-based results.
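Of the six measures, colorfulness has a particularly compact formulation, based on Hasler and Süsstrunk's metric over the opponent channels rg = R − G and yb = (R + G)/2 − B. The sketch below is one common form of that metric, not necessarily the exact variant this project implemented.

```python
import numpy as np

def colorfulness(rgb):
    """Hasler & Susstrunk-style colorfulness: combine the spread and mean
    magnitude of the opponent channels rg = R-G and yb = (R+G)/2 - B."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    rg = r - g
    yb = 0.5 * (r + g) - b
    return float(np.hypot(rg.std(), yb.std())
                 + 0.3 * np.hypot(rg.mean(), yb.mean()))
```

A pure gray image scores zero; saturated, varied colors score high, which matches the intuition the quality papers exploit.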
- Jared Lutteke and Sam Wirch
Social GeoTagging: Utilizing Information from GeoTagged Photos
The last few years have seen a growing abundance of publicly available
photos with location information. Applications that let a user explore locations
through related photographs have been released; one example is Panoramio,
which uses related photos to let a user explore various locations. This article
describes various methods of extending the extraction of information from geotagged photos. A
simple program then uses this information to identify local landmarks: it lets a user
see clusters of photos produced by various algorithms and then find additional landmarks
based on those clusters. Beyond the implementation, certain application improvements
are also described and explored.
Project web page.
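The clustering step can be illustrated with a tiny k-means over (lat, lon) pairs: dense clusters of geotagged photos hint at landmarks. This is one plausible way to produce such clusters, not the article's exact method; the deterministic seeding from evenly spaced input points is an assumption made for reproducibility.

```python
import numpy as np

def cluster_photos(points, k=2, iters=10):
    """Tiny k-means over photo coordinates. Centers are seeded from evenly
    spaced input points so runs are deterministic (illustrative only)."""
    points = np.asarray(points, dtype=float)
    centers = points[np.linspace(0, len(points) - 1, k).astype(int)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers
```

Real geotag clustering usually prefers density-based methods (no fixed k), since the number of landmarks is unknown in advance.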
- Phil Morley
Haze Removal and Depth Modeling
This project investigates haze removal
as described in the paper "Single Image Haze Removal Using Dark
Channel Prior" by K. He, J. Sun, and X. Tang. We implement their method from
scratch and experiment with its parameters to maximize haze removal. The method also produces a
depth map; using this depth map, a unique way of modeling the
recovered scene can be created.
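The prior at the heart of the method is the dark channel: in haze-free outdoor images, almost every local patch has some pixel with a very low value in at least one color channel, while haze lifts this minimum. The sketch below computes the dark channel itself; the subsequent transmission and recovery steps of the paper are omitted.

```python
import numpy as np

def dark_channel(rgb, patch=3):
    """Dark channel of He, Sun & Tang: per pixel, the minimum over the
    three color channels and over a patch x patch neighborhood."""
    mins = rgb.min(axis=2).astype(float)
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out
```

In the paper, the estimated transmission is roughly `1 - omega * dark_channel(I / A)` for atmospheric light `A`, and that transmission doubles as the depth cue mentioned above.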
- Ryan Penning
Multi-Frame Super-Resolution utilizing Techniques from Compressed Sensing
Compressed sensing has taken the signal processing world by storm over the last few years. Although most applications have focused on sensing in the frequency domain, a few researchers have looked at extending the method to measurements made in the spatial domain, such as traditional images. I propose a method to generate higher-resolution images from multiple low-resolution images using methods from compressed sensing. After combining the source images and liberally discarding mismatched pixels, my method utilizes a Total Variation minimization solver to recover the missing information. The solver works well with random noise, providing a reasonable recovery of the image even with 70% of the pixels missing. Unfortunately, initial results using this algorithm to produce super-resolved images do not appear promising. The issues most likely arise from the organized structure of the missing pixels in the high-resolution image.
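The objective the solver minimizes is worth stating concretely. Anisotropic total variation is the sum of absolute horizontal and vertical pixel differences; the recovery step seeks the image with the smallest TV that still agrees with the known pixels. This function only evaluates the objective, not the solver itself.

```python
import numpy as np

def total_variation(img):
    """Anisotropic TV: sum of absolute vertical and horizontal
    neighbor differences. Lower TV = smoother, more 'natural' image."""
    img = img.astype(float)
    return float(np.abs(np.diff(img, axis=0)).sum()
                 + np.abs(np.diff(img, axis=1)).sum())
```

Random missing pixels leave TV minimization well posed, while the regular grid of missing pixels in super-resolution creates many equally low-TV completions, which is consistent with the disappointing results noted above.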
- Leigh-Ann Seifert and Brian Ziebart
Storyboard-It
Inspired by previous students' work "Sketchify," we not only turn videos into sketches but also turn them into storyboards or comics. We take "Sketchify" one step further by improving its filters and targeting video rather than still pictures. By applying bilateral filtering and edge detection we achieve the look of a colored sketch, similar to storyboards. Then, using the difference in entropy values between frames, we choose a threshold that identifies our key frames. The key frames are then assembled in order into one image. The end result looks like a storyboard or comic strip, giving a single-picture representation of a story or video.
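The key-frame step can be sketched directly: compute the Shannon entropy of each frame's gray-level histogram and mark a frame as a key frame whenever the entropy jumps past a threshold relative to the previous frame. The 8-bin histogram and `thresh=0.5` bits below are illustrative choices, not the project's tuned values.

```python
import numpy as np

def entropy(gray, bins=8):
    """Shannon entropy (bits) of the gray-level histogram of a [0,1] image."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def key_frames(frames, thresh=0.5):
    """Frame 0 plus every frame whose entropy jumps by > thresh bits
    relative to its predecessor."""
    ents = [entropy(f) for f in frames]
    return [0] + [i for i in range(1, len(frames))
                  if abs(ents[i] - ents[i - 1]) > thresh]
```

The selected frames would then get the bilateral-filter-plus-edges sketch treatment and be laid out as panels.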
- Kong Yang
Defocus Magnification
Professional photographers control the depth of field of their camera in order to control the focused region of a scene. The vast majority of new phones now come standard with a built-in camera. These cameras usually support only a limited range of depth of field, since most have a small aperture. A smaller aperture yields a greater depth of field, in which most of the image is in focus. Defocus magnification is a technique that estimates the blur map of an image and magnifies the defocused regions to simulate a shallower depth of field. This paper discusses and evaluates the defocus magnification algorithm described by Bae and Durand.
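Given a blur map, the magnification step amounts to re-blurring only the pixels already marked as defocused, leaving sharp pixels untouched. The toy below uses a boolean blur map and a separable 3-tap box blur; Bae and Durand's method estimates a continuous blur map and applies spatially varying lens blur, so treat this purely as a sketch of the idea.

```python
import numpy as np

def box_blur(img):
    """Separable 3-tap box blur with edge clamping (numpy-only)."""
    p = np.pad(img, 1, mode='edge')
    horiz = (p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:]) / 3.0
    p2 = np.pad(horiz, 1, mode='edge')
    return (p2[:-2, 1:-1] + p2[1:-1, 1:-1] + p2[2:, 1:-1]) / 3.0

def magnify_defocus(img, defocused, passes=2):
    """Toy defocus magnification: repeatedly blur, but only where the blur
    map says a pixel is already out of focus; sharp pixels are preserved."""
    out = img.astype(float).copy()
    for _ in range(passes):
        out = np.where(defocused, box_blur(out), out)
    return out
```

The perceptual result is the same in spirit: already-blurry backgrounds get blurrier, so the in-focus subject appears to pop as with a larger aperture.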