> CS 534
> HW #5 Projects
- Alex Kocher, Blake Nigh and Konnor Beaulier
Adherent Raindrop Detection and Removal in Videos
With the emergence of wearable technologies such as GoPro cameras, adverse
weather directly affects the quality of the resulting video. In the case
of rain, raindrops can adhere to the camera lens and cause blurring and further
distortion in the output picture or video. We treat these raindrops as
removable through various techniques, thus removing the blur and distortion.
- Andrew Chase, Bryce Sprecher and James Hoffmire
Motion Detection and Segmentation using Optical Flow
This paper discusses the detection, analysis and segmentation of motion in video using
optical flow algorithms. Optical flow detects motion within each neighborhood of pixels by
registering changes in the color and intensity of pixels from frame to frame. Vectors
indicating the direction and magnitude of detected motion are created, and groups of
similar vectors are then segmented from each other. This segmentation makes it possible
to isolate individual moving objects within a video. This paper first
introduces the fundamentals of optical flow and offers an overview of existing literature
related to general motion detection. We next discuss the theory behind optical flow,
followed by a description of our method and implementation in MATLAB. Finally, we share
our results along with our takeaways about optical flow's applications and limitations.
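The per-neighborhood motion estimate the abstract describes can be sketched with a basic Lucas-Kanade least-squares fit. This is one standard optical flow formulation, shown here in Python/NumPy rather than the authors' MATLAB; the window size and synthetic test pair are invented for illustration.

```python
import numpy as np

def lucas_kanade_at(frame1, frame2, y, x, win=7):
    """Estimate the flow vector (u, v) at pixel (y, x) by a least-squares
    fit of Ix*u + Iy*v = -It over a small window (basic Lucas-Kanade)."""
    Iy, Ix = np.gradient(frame1.astype(float))      # spatial gradients
    It = frame2.astype(float) - frame1.astype(float)  # temporal gradient
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# synthetic pair: a smooth blob translated 1 pixel to the right
yy, xx = np.mgrid[0:64, 0:64]
blob = lambda cx: np.exp(-((xx - cx) ** 2 + (yy - 32) ** 2) / 50.0)
f1, f2 = blob(30), blob(31)
u, v = lucas_kanade_at(f1, f2, 32, 30)   # expect u near 1, v near 0
```

Grouping the resulting (u, v) vectors by similarity would then give the segmentation step the paper discusses.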
- Meiirbek Ashirgaziyev, Aoyu Fan and Alexandra Grupe
Texture Transfer: A Matlab Implementation
In this paper, we present an image-based Matlab implementation of texture transfer. This implementation does not use image quilting for texture synthesis, of which texture transfer is an extension. Instead, it imposes specific constraints on the input images concerning their dimensions and relative sizes. If these constraints are met, this implementation of texture transfer produces high-quality output images.
- Amr Hassaballah, Jimmy Yuan and Austin Schaumberg
This paper discusses the approach to and implementation of a new video effect coined
the "Materialized Tracer" effect. Within this paper we describe our method in detail,
including pseudocode and some of the general parameters used to produce a given tracer's
frequency and density. Broadly speaking, the technique is a fusion of foreground
segmentation and a clever approach to image overlaying. We also provide an analysis
of our algorithm's efficiency, along with a variety of our project's initial results.
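The abstract leaves the method details to the paper body. Purely as an illustration of fusing foreground segmentation with image overlaying, the sketch below segments the foreground by background differencing and overlays earlier frames with decaying opacity; the threshold and decay factor are invented for the example and are not the paper's parameters.

```python
import numpy as np

def add_tracer(background, frames, thresh=30, decay=0.5):
    """Overlay the moving foreground of each earlier frame onto the last
    frame, with older frames fading out -- a rough tracer-style effect."""
    out = frames[-1].astype(float)
    alpha = decay
    for frame in reversed(frames[:-1]):
        # foreground = pixels that differ strongly from the static background
        mask = np.abs(frame.astype(float) - background) > thresh
        out[mask] = alpha * frame[mask] + (1 - alpha) * out[mask]
        alpha *= decay                      # older frames fade more
    return out.astype(np.uint8)

bg = np.zeros((20, 20), dtype=np.uint8)
fr1, fr2 = bg.copy(), bg.copy()
fr1[5, 5] = 200    # moving bright spot at two positions
fr2[5, 10] = 200
result = add_tracer(bg, [fr1, fr2])   # spot at (5,10) solid, (5,5) faded
```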
- Brett Geschke, John Carmichael and Steven Radeztsky
Our project will be an Android application that allows any user to
submit images with the goal of creating a comic from them. For the
sake of simplicity, the user will submit 2-4 images. Those pictures will then
be cartoonified using bilateral filtering and edge-detection techniques learned in
lecture. Our application will then display those images in a layout similar to
a comic book. The user will have the ability to write their own captions for
each picture. To make it more of an interactive experience, the user can
click on each image to view the image and its caption. This effect will
create an interactive comic book experience so that people of all ages and
artistic abilities can use our application.
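As a small illustration of the edge-detection half of the cartoonifying step (the bilateral smoothing is omitted, and the real app would use Android imaging APIs), a Sobel edge-magnitude pass might look like this in Python/NumPy:

```python
import numpy as np

def sobel_edges(gray):
    """Sobel edge magnitude: convolve with horizontal and vertical
    derivative kernels and combine the two responses."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    out = np.zeros((h, w))
    g = gray.astype(float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = g[y - 1:y + 2, x - 1:x + 2]
            gx, gy = np.sum(kx * patch), np.sum(ky * patch)
            out[y, x] = np.hypot(gx, gy)    # gradient magnitude
    return out

img = np.zeros((10, 10))
img[:, 5:] = 100          # vertical step edge
edges = sobel_edges(img)  # strong response along the step, zero elsewhere
```

Thresholding this magnitude and darkening those pixels on the smoothed image gives the familiar cartoon outline.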
- Chao Li, Bryan Suzan and Alayna Truttmann
Photo Booth Utilizing Non-Photorealistic Rendering
This project aims to filter input images with various effects by means of color quantization,
bilateral filtering, tone mapping, color adjustments, and more. Specifically, the filters we
implemented were sketch, X-ray, sepia, black and white, cartoon, and pop art. Our program
allows users to quickly and easily select images and customize filter effects on them through our
user-friendly GUI. Users can also compare different effects through a photo booth style setup.
Users can then share these images if they choose through social media.
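Of the filters listed, sepia is the simplest to sketch: it is a 3x3 channel-mixing matrix applied to every RGB pixel. The coefficients below are the commonly quoted sepia values, not necessarily the ones this project used.

```python
import numpy as np

def sepia(img):
    """Sepia tone: mix the R, G, B channels of each pixel through a fixed
    3x3 matrix, then clip back to the 0-255 range."""
    m = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]])
    out = img.astype(float) @ m.T
    return np.clip(out, 0, 255).astype(np.uint8)

gray_pixel = np.full((1, 1, 3), 100, dtype=np.uint8)
toned = sepia(gray_pixel)   # neutral gray becomes a warm brown
```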
- Evan Hernandez and David Liang
Artistic style transfer is the process of merging the style of one
image with the content of another. This is a useful technique
because it saves animators drawing time, particularly when
a large sequence of stylized images is necessary. Furthermore, the
effects are visually appealing, so the technique can be applied simply
to produce beautiful images for any purpose. At the same time,
artistic style transfer is difficult because the notions of style and
content are abstract and difficult to separate. After all, content
and style aren't quantitative by any means, so creating a digital
representation of these two characteristics seems nearly impossible.
However, Gatys et al. have developed a style transfer algorithm
that uses a deep convolutional neural network (CNN) to divorce
image style and content, and Ruder et al. propose an extension
of this technique to video. In this paper, we will describe these
algorithms, our implementation, and our results.
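In the Gatys et al. formulation, "style" is represented by Gram matrices of CNN feature maps, i.e. the correlations between channels. The feature extraction itself requires a pretrained network and is omitted here; this sketch only shows the Gram computation on a toy feature map.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map: the style
    representation used by Gatys et al. Entry (i, j) is the correlation
    between channels i and j, normalized by the spatial size."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

feat = np.zeros((2, 4, 4))
feat[0] = 1.0            # channel 0 always active, channel 1 silent
g = gram_matrix(feat)    # only the (0, 0) entry is nonzero
```

The style loss then compares these Gram matrices between the style image and the generated image, while the content loss compares raw feature maps.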
- Garrett Andrews, Matthew Muccianti, Elliott Janssen Saldivar and Aabhas Singh
3D Mesh Reconstruction from a Structure-from-Motion Point Cloud
Simultaneous Localization And Mapping (SLAM) is commonly used to represent and
estimate spatial uncertainty for autonomous algorithms such as collision avoidance systems
and scene reconstruction for robotics. We utilize drone video to create a 3D model of an object
or scene post-production in MATLAB. To represent the model, we attempt to use a MATLAB
implementation of structure-from-motion as well as Microsoft Photosynth to create a point
cloud of the scene. Three algorithms (Marching Cubes by MeshLab, Poisson Reconstruction by
MeshLab, and our custom algorithm in MATLAB) were tested against this point cloud to
determine which algorithm can best recreate the 3D representation of the structure. MeshLab
also offers an implementation of Delaunay triangulation; although it is mostly used for 2D
applications, we compare its results with those of the other three algorithms.
- Joseph Hushek, Haohua Wang and He Wang
Inspired by face stickers made popular by Snapchat, we decided to create a program to
add stickers to fingers. In the project, we detect a hand within an image using a range of skin colors,
select the hand, use contour features to identify the fingertips, then add a face sticker on each of
the fingertips. The result is an image of a funny-looking hand.
- Jacob Draeger, Alex Herreid and Raghav Bhagwat
For our project, we wanted to create a face-swapping program using MATLAB and computational photography skills. We used MATLAB's built-in implementation of the Viola-Jones algorithm, a function of MATLAB's Computer Vision Toolbox, for face detection and feature point recognition. We created our own face mask of 13 specific points and used the polygon function of the face detection method to draw our mask around the area of the face. After identifying the faces, we crop them out, swap them, and use a blending algorithm to give a more uniform result. We designed our custom face mask to give us more accurate results and implemented multiple Snapchat-style masks.
- Jing Qian, Xiaofei Liu and Ruihao Zhu
Shinkai Makoto's Imagination: An Attempt to Mimic the Japanese Anime Style
The project is motivated by the artwork of the Japanese anime director Shinkai Makoto. This report demonstrates the process of converting a real-life landscape image into Shinkai Makoto's typical hand-drawn style. The process is divided into three steps: detecting the sky region of an input image, "animefying" the input image, and replacing the original sky region with the selected sky source image. While we successfully transform some input images to the anime style, the program also has some limitations: the sky detection algorithm is not robust for all kinds of input images, and the color adjustment works better for input images with higher brightness and saturation.
- Christina Stiff, Nicholas Smith and Joanne Lee
Automatic Background Decolorizer
This project outlines a method to take a color image as input, distinguish between the
foreground and background regions using the k-means++ clustering algorithm, and decolor the
background region to emphasize the in-color foreground subject. In our implementation, we
use color as the feature vector and assume the cluster with the most assigned pixels to be the
background region that is decolored. The pixels assigned to the other clusters remain in color.
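The pipeline above (cluster colors, call the biggest cluster background, decolor it) can be sketched in a few lines. This version uses a deterministic farthest-point seeding as a stand-in for the k-means++ initialization the project uses, so the toy example is reproducible.

```python
import numpy as np

def kmeans(pixels, k=2, iters=10):
    """Plain k-means on color vectors, with deterministic farthest-point
    seeding standing in for k-means++ initialization."""
    centers = [pixels[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

def decolor_background(img, k=2):
    """Cluster pixels by color, take the largest cluster as background,
    and replace its color with its grayscale intensity."""
    h, w, _ = img.shape
    flat = img.reshape(-1, 3).astype(float)
    labels = kmeans(flat, k)
    bg = np.bincount(labels).argmax()       # biggest cluster = background
    gray = flat.mean(axis=1, keepdims=True) # per-pixel intensity
    out = flat.copy()
    out[labels == bg] = gray[labels == bg]  # decolor background pixels only
    return out.reshape(h, w, 3).astype(np.uint8)

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 2] = 200            # mostly blue "background"
img[0, 0] = (255, 0, 0)      # one red "foreground" pixel
result = decolor_background(img)   # blue pixels go gray, red stays red
```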
- John Louk, Brian Nelson and Riley Morrison
Stellar Photo Matching
Capturing the beauty and wonder of the night sky has been the goal of many
photographers. However, with the rise of cell-phone cameras and light pollution, taking photos of
stars has become quite challenging. Phone cameras do not have the same capabilities as dedicated
cameras and cannot capture the same level of detail. The goal of our Stellar Photo Matching program
is to improve these poor-quality star images and make them appear natural. This is done by matching
star patterns in the image to patterns in a star database using the Murtagh method. Then our
method will project new stars from the database onto the original image, aligning them based on a
homography. The output image will show the stars that would have appeared had the photo been
taken under better conditions. Our program, tested on matching the Orion constellation to its
surrounding stars, has been successful and shows that star matching can effectively improve the
quality of star photos.
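Once the star patterns are matched (the Murtagh matching step itself is not shown), projecting database stars onto the photo through the estimated homography is a standard homogeneous-coordinates operation. The translation-only homography below is a toy example, not estimated from real star data.

```python
import numpy as np

def project(H, points):
    """Map 2-D points through a 3x3 homography: lift to homogeneous
    coordinates, multiply, then divide by the last coordinate."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# toy homography: pure translation by (5, -3)
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
stars = np.array([[10.0, 10.0], [0.0, 0.0]])
mapped = project(H, stars)   # each star shifted by (5, -3)
```

In the real pipeline, H would be estimated from matched star pairs (e.g. by DLT with RANSAC) rather than written by hand.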
- Jake Becco, Justin Aniban and Joseph Hoffmann
Fingerprint recognition has been researched for a long period of time and it has
shown fantastic results in the real world. However, due to the near infinite
possibilities of fingerprint variations, fingerprint recognition can be a challenging
problem (Raja et al., 2009). When comparing two fingerprints, many
things can go wrong, and the most important component is the feature matching
algorithm. With the advancement of technology, fingerprint scanners are becoming
more widely used. However, these scanners are becoming smaller, such as on
phones, and sense only a limited portion of the fingerprint. Therefore, with
these small fingerprint regions, only a small amount of overlap can occur, causing
difficulties in determining matches (Figure 1). Because of this, advanced matching
algorithms must be used to determine matches with a respectable success rate. The
most successful algorithm is based on minutiae points.
- Josh Chaimson, Bryce Greiber and Sambhav Jain
Face replacement has gained a lot of popularity through many new applications -- one of them being the face swap feature on Snapchat. Our algorithm will take in input images, allow the user to select an image that needs face replacement, and use landmark features to help choose which photo is best to take parts of the face from. When choosing the image, users will also be able to select which particular part of the photo they want to replace by creating a mask. The last step is to blend the replacement face into the new head.
- Kailee Tapia
Adding Text to Images
The TextToImage program allows users to import an image and text. The user
then determines how they want the text aligned both vertically (top-aligned,
bottom-aligned, or centered) and horizontally (left-aligned, right-aligned, or centered) on the
image, as well as how large they would like the text, what font they would like used,
and whether or not they want it bold, italic, or plain. The program then uses the height
and width of both the image and the text to determine the correct alignment based on
the user's input. The program also allows the user to input what they would like the
new image saved as.
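The alignment arithmetic the abstract describes — using the image and text dimensions to pick a position — amounts to a small lookup. The helper name and its interface below are hypothetical, invented for illustration.

```python
def text_origin(img_w, img_h, text_w, text_h, h_align="center", v_align="center"):
    """Compute the top-left corner for a text box of size (text_w, text_h)
    on an (img_w, img_h) image, given the user's alignment choices."""
    x = {"left": 0,
         "center": (img_w - text_w) // 2,
         "right": img_w - text_w}[h_align]
    y = {"top": 0,
         "center": (img_h - text_h) // 2,
         "bottom": img_h - text_h}[v_align]
    return x, y

# a 100x40 text box, bottom-right, on a 640x480 image
pos = text_origin(640, 480, 100, 40, "right", "bottom")
```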
- Lingfeng Huang and Fang Wang
Discovering Panoramas in Web Videos
Panoramas have been widely used in many multimedia applications, but the main
constraint is that they must be taken by people who are physically present at the
place. In this project, we implement our version of Discovering Panoramas in Web Videos by
Liu et al. Our goal is to solve two problems: First, the program should be able to select
optimal segments within a given web video to synthesize a panorama, using the visual quality
measurement introduced in Liu et al. Second, the program should be able to generate a
set of panoramas if multiple panoramas can be synthesized from a given video. The whole
procedure is an optimization problem in which we optimize three criteria: wide
field of view, mosaicability, and high image quality.
- Arjun Gurumurthy, Ashok Marannan and Manoj Nagarajan
Machine Learning based Efficient Image Colorization
Image colorization is a process that adds color to grayscale images. Manual image colorization
is tedious and prone to human error. Existing approaches to automatic image colorization
either use pixel-based scanning (expensive), are scribble-based (manual), assume a strong
correlation with the target grayscale image, or require a large number of training examples. In this project, we
aim to look at approaches that perform automatic image colorization with a small number of
similar reference images using Machine Learning. Initially, (1) we plan to design features which
would capture different properties of a grayscale image for training ML models. Then, (2)
we group pixels into superpixels to capture local information with reduced
complexity. Given a corpus of training images, (3) one or more Machine Learning
algorithms, such as support vector regression or k-means, could be used to train a model to
colorize parts of images with reasonable accuracy. Based on the output of the model, (4) we
perform some post-processing (like smoothing) to re-assign colors of segments predicted with
low confidence. Via these four steps, we color images with reasonable accuracy using far
fewer training images than other approaches require.
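The core transfer idea — predict a pixel's color from a reference image with similar content — can be caricatured with a single feature (intensity) and nearest-neighbor matching. This toy stand-in replaces the project's learned features, superpixels, and regression models, and is only meant to show the direction of the mapping.

```python
import numpy as np

def colorize_by_reference(gray, ref_gray, ref_color):
    """Give each grayscale pixel the color of the reference pixel whose
    intensity is closest -- a 1-feature, nearest-neighbor caricature of
    the reference-based colorization pipeline."""
    flat_ref = ref_gray.ravel().astype(int)
    flat_col = ref_color.reshape(-1, 3)
    # distance from every target intensity to every reference intensity
    idx = np.abs(gray.ravel()[:, None] - flat_ref[None, :]).argmin(axis=1)
    return flat_col[idx].reshape(*gray.shape, 3)

# reference: dark pixels are blue, bright pixels are red
ref_gray = np.array([[0, 255]], dtype=np.uint8)
ref_color = np.array([[[0, 0, 255], [255, 0, 0]]], dtype=np.uint8)
gray = np.array([[10, 240]], dtype=np.uint8)
colored = colorize_by_reference(gray, ref_gray, ref_color)
```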
- Mary Feng and Sabrina Yu
Image Analogies for Artistic Filters
This project focuses on implementing image analogies for artistic filters as outlined in the paper
of the same name (Hertzmann et al., 2001). Given a pair of training images A (an unfiltered
image) and A' (some "filtered" version of image A), the same "filter" is applied to a new
target image B to produce B', thus completing the image analogy. While image analogies can be used for a wide variety of
effects depending on the training pair provided, we chose to focus on artistic filters. We found
that some test images and filters produced better results than others. The main challenge
was time: synthesizing B' takes many hours even for small images, which left the output
images noisier and less pleasant than desired. Possible reasons for this are discussed.
- Matthew Nicol, Brad Miller and James Merrill
Partial Image Placement Software (PIPS)
A fluid method for placing and blending an object into a picture is something many people could use in their daily lives. Given how expensive Photoshop and other alternatives are, and how difficult they are to use, a program that can do everything from cutting an object out of one image to placing it in a second image and blending it is worth having.
- Micaela Connors, Angus Kinsey and Jason Chen
The goal of this project is to highlight differences between two contrasting
images taken of the same scene at different points in history or at different
times of day by combining them to create a visual timeline output image. This
will be accomplished by compositing two input images into one blended image
where each image takes up about half of the space in the final processed image.
To combine them, we will use the Laplacian Pyramid Blending
method to merge the images seamlessly, creating a composite output image
with the two inputs blended in the middle and each original left unchanged
farther from the center.
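Laplacian pyramid blending combines the two images' detail layers under a soft mask, level by level. The sketch below uses a same-resolution Laplacian stack with a box blur instead of a true downsampled Gaussian pyramid, which keeps the code short while showing the same idea; the level count and blur are simplifications.

```python
import numpy as np

def blur(img):
    """3x3 box blur with edge padding (a Gaussian kernel is typical; a box
    filter keeps this sketch short)."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def laplacian_blend(a, b, mask, levels=3):
    """Blend a and b under a soft mask by combining their Laplacian
    (detail) layers per level, then adding the blended coarse layer."""
    a, b, mask = a.astype(float), b.astype(float), mask.astype(float)
    out = np.zeros_like(a)
    for _ in range(levels):
        la, lb = a - blur(a), b - blur(b)     # detail at this level
        out += mask * la + (1 - mask) * lb
        a, b, mask = blur(a), blur(b), blur(mask)  # go one level coarser
    out += mask * a + (1 - mask) * b          # blended coarsest level
    return out

left = np.zeros((8, 8))
right = np.full((8, 8), 100.0)
mask = np.zeros((8, 8))
mask[:, :4] = 1.0                 # left half comes from `left`
blended = laplacian_blend(left, right, mask)  # smooth seam in the middle
```

Because the mask is blurred at every level, the seam widens at coarse scales, which is what hides the join between the two photographs.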
- Jackson Milkey, Michael Salmon and Andrew Zeitlow
Cartoonization of Images
Based on the paper "Real-Time Video Abstraction," our project aims to implement a form of
non-photorealistic rendering. A variety of methods are used to abstract an image: detailed
regions are exaggerated while visually uninteresting areas are artificially reduced. With an
abstracted image, simple color quantization is employed to achieve a smooth, cartoonlike effect.
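The "simple color quantization" step can be illustrated by snapping each channel to a few evenly spaced values, which produces the flat color regions of a cartoon; the number of levels below is an arbitrary choice for the example.

```python
import numpy as np

def quantize(img, levels=4):
    """Uniform color quantization: snap each channel to `levels` evenly
    spaced bins and recolor with each bin's midpoint."""
    step = 256 // levels
    q = img.astype(int) // step * step + step // 2
    return q.clip(0, 255).astype(np.uint8)

img = np.array([[[10, 70, 200]]], dtype=np.uint8)
q = quantize(img)   # each channel snaps to its bin midpoint
```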
- Neha Godwal, Sidharth Mudgal and Yipeng Zhang
Converting 2D to 3D
Due to advancements in visuals and the emerging virtual reality market, users have a strong interest in creating 3D content. Today, 3D movies can be created using stereo cameras, which are generally not affordable for most people, whereas 2D video cameras are easily accessible to everyone. In this project, our aim is to achieve automatic conversion of 2D videos to 3D using deep neural networks. Our approach is to train a deep neural network model on stereo pairs extracted from 3D movies and videos. Compared to other algorithms for 2D-to-3D conversion, we expect the deep-learning model to perform better in quantitative and human-subject evaluations.
- Wangtao Lian, Rulan Zheng and Ruiqi Yin
Generating Artistic Drawing Styles
Phones and cameras have become widely accessible in the modern era; hence, taking a photo has
become commonplace. Some artists have used these snapshots as inspiration to create
different art forms; however, not everyone who wants to convert images into aesthetic outcomes has
the necessary skills. In this project, we have created a program that resolves this issue by asking the
user to choose the artistic styles they want. Our project aims to apply four different styles to
an input image: the output image can be a cartoonified picture, a contour
drawing, or a pencil drawing in black and white or in color.
- Jacob Holiday and Sahil Verma
Applications that add filters/effects to one's face have become extremely
popular as of late. Mobile phone users have developed a new demand for
applications which can take their selfies to the next level by adding fun
effects. The goal of this project was to learn more about the technologies that
go into face detection, feature recognition, and image warping in order to
help others gain a better understanding of the core concepts behind hugely
popular applications like Snapchat. Previous projects demonstrate this same
goal, although with less accuracy in filter placement and more pixelation and
noise around the edges of filters. To enhance filter placement, we utilize
OpenCV and DLib for accurate face detection and detailed facial feature
point recognition. In addition, we explore a simple method by which to
inform the program how a filter should fit onto a face. We also implement
a method of alpha blending for clean compositing of transparent filters over
images. We hope this project gives others the confidence to explore this
quickly emerging area of computational photography and further improve
on our work.
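The alpha blending step the abstract mentions — compositing a transparent filter over a photo — is the standard "over" operation on an RGBA foreground. The sketch below assumes straight (non-premultiplied) alpha; OpenCV itself would load the images, but the compositing is plain array math.

```python
import numpy as np

def alpha_composite(fg_rgba, bg_rgb):
    """Composite an RGBA filter image over an RGB photo using the
    straight-alpha 'over' operation: out = a*fg + (1-a)*bg."""
    alpha = fg_rgba[..., 3:4].astype(float) / 255.0
    fg = fg_rgba[..., :3].astype(float)
    out = alpha * fg + (1 - alpha) * bg_rgb.astype(float)
    return out.astype(np.uint8)

bg = np.full((1, 1, 3), 100, dtype=np.uint8)
fg = np.zeros((1, 1, 4), dtype=np.uint8)
fg[0, 0] = (200, 200, 200, 128)   # half-transparent gray sticker
out = alpha_composite(fg, bg)     # result lands between the two colors
```

In the full pipeline, the filter would first be warped to the facial landmark positions before this compositing step.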
- Eric Arndt, Justin Xayarinh and Shuruthy Yogarajah
Technology has advanced to the point where a digital artist can create a photorealistic image,
which is any image that appears to be real, but is not. Even though this is the case, for a
number of years computational photographers have been creating various ways to make digital
photos look hand drawn, as if cameras have never existed. This is called non-photorealistic
rendering. Our algorithm is based on the work of Lu et al. and other related non-photorealistic
rendering algorithms. It begins with any photo and produces an output that mimics a
pencil-drawn sketch. This pencil-drawn sketch is created by combining a line drawing image
with a tonal texture image.
- Shu Chen, You Wu and Sijia Zhang
Turning Scene Photos into Anime Art
After seeing this year's phenomenal film "Your Name" by Shinkai Makoto, we felt
strongly motivated to virtually bring this kind of pure sky back to real life. Shinkai
Makoto's films are known for images so beautifully designed that every frame could
be used as wallpaper. We dedicated this project to transforming real-life scenes into
gorgeous pieces of art.
Turning scene photos into anime art involves multiple processes. In general, the
project includes a segmentation algorithm and two filter algorithms. While pursuing
this goal, we encountered a problem: since real-life clouds do not look like the
marshmallow clouds in anime, applying only an oil-paint filter does not achieve the
effect we are looking for. As a result, we decided to take out the whole sky area and
replace it with a pre-set anime sky full of prominent clouds. The overall process of
the project can be divided into six parts. First, we read the
input image and adjust its size to a pre-set value. Second, we detect the sky area of the
image and crop it out using a segmentation algorithm. Third, we apply a customized
filter as well as a bilateral filter to the rest of the image. Fourth, we apply an
edge-sharpening technique to enhance the image's overall quality. Fifth, we adjust other
properties of the image such as brightness, exposure, and saturation. Last, we place the
previously designed anime sky into the image to form a complete final output image. This
document provides an analytical elucidation of the design, the process, the inspiration,
the implementation, the methodology and the final result. The references we consulted
are listed at the end and mentioned throughout the report.
- Cheng Xiang, Siyu Chen and Yizhe Qu
Face Detection and Adding Stickers
We decided to work on a Matlab program that achieves face detection and adds modifications to the detected faces. Face detection is a commonly used technology these days; this image-processing technique is used to identify human faces in digital images, and many popular companies, such as Apple and Microsoft, have deployed the feature. In addition to face detection, we add modifications such as placing a pair of glasses over all the faces detected in a photo.