Alex Kocher, Blake Nigh and Konnor Beaulier Shape-Time Accumulating Stop-Motion
We are basing our project on the paper "Shape-Time Photography" by William T. Freeman and Hao Zhang (2003).
We want to achieve results similar to shape-time photography (per Freeman & Zhang) and add an animated ".gif"-style presentation that lets the viewer watch the shape-time image being built up. Unlike a normal ".gif", in which each frame is independent of the last, we will keep certain artifacts from previous frames and layer them. For each image added to the shape-time composite, we will show the accumulated result, illustrating the progression of our algorithm layering on successive shape-time artifacts.
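As a rough illustration of the accumulation idea (not the authors' actual shape-time algorithm, which uses depth ordering), here is a minimal sketch that layers each frame's foreground onto a running composite, assuming grayscale frames and a known static background; the function name and threshold are ours:

```python
import numpy as np

def accumulate_shape_time(background, frames, thresh=30):
    """Layer each frame's foreground (pixels that differ from the static
    background) onto a running composite, yielding one composite per frame."""
    composite = background.astype(np.float64).copy()
    snapshots = []
    for frame in frames:
        frame = frame.astype(np.float64)
        # Per-pixel foreground mask: large deviation from the background.
        diff = np.abs(frame - background.astype(np.float64))
        mask = diff > thresh
        composite[mask] = frame[mask]          # keep the new artifact on top
        snapshots.append(composite.copy())     # one ".gif" frame per addition
    return snapshots
```

Playing the returned snapshots in order gives exactly the accumulating animation described above.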
Andrew Chase, Bryce Sprecher and James Hoffmire High-Speed Motion Magnification
Our objective is to replicate the results of the 2005 paper "Motion Magnification" by Liu et al. The process involves detecting subtle visual motion within an input video and exaggerating the magnitude of the motion vectors to make it more apparent and interesting. We plan to combine their technique with high-speed video to make apparent motion that is both too subtle and too brief to observe otherwise. We would also like to explore other means of accentuating this magnification, with color shifting and image filtering as potential options.
Meiirbek Ashirgaziyev and Aoyu Fan and Alexandra Grupe Texture Transfer and its Artistic Application
In this project, we are going to apply texture transfer techniques to create artistic visuals.
First, we will implement the texture transfer method, and then use it to produce artistic effects.
We will reference the paper "Image Quilting for Texture Synthesis and Transfer" by Alexei A. Efros and William T. Freeman.
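The correspondence term of Efros-Freeman texture transfer can be sketched as follows: for each block of the target image, paste the texture block whose luminance matches best. This is a simplification that omits the overlap constraint and the minimum-error seam cut of the full quilting algorithm; the function name and block size are ours:

```python
import numpy as np

def transfer_texture(texture, target, block=8):
    """For every block of the (grayscale) target, paste the texture block
    with the lowest SSD to it. Correspondence term only; no seam cutting."""
    h = (target.shape[0] // block) * block
    w = (target.shape[1] // block) * block
    th, tw = texture.shape
    # Collect all candidate blocks from the texture sample.
    cands = [texture[i:i + block, j:j + block]
             for i in range(0, th - block + 1, block)
             for j in range(0, tw - block + 1, block)]
    cands = np.stack(cands).astype(np.float64)
    out = np.zeros((h, w))
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = target[i:i + block, j:j + block].astype(np.float64)
            ssd = ((cands - patch) ** 2).sum(axis=(1, 2))
            out[i:i + block, j:j + block] = cands[np.argmin(ssd)]
    return out
```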
Amr Hassaballah, Jimmy Yuan and Austin Schaumberg Tracer Materialization
In this project, we will present a MATLAB application that takes as input a short video of a subject performing some kind of motion or activity against a static background. The goal is to output a video in which the subject's motion gives the illusion of a materialized tracer effect surrounding the subject. This will be accomplished by taking the initial input and adjusting a set of parameters that determine the density and frequency of the tracer as the subject's motion occurs, as well as the speed of the materialized tracer effect.
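A minimal sketch of the fading-tracer idea, assuming grayscale frames and a known static background; the decay and threshold parameters stand in for the density/frequency/speed controls described above and the names are ours:

```python
import numpy as np

def tracer(frames, background, decay=0.8, thresh=25):
    """Composite each frame over an exponentially decaying trail of the
    subject's past positions. Zero-valued subject pixels are treated as
    background in this simplified sketch."""
    trail = np.zeros_like(background, dtype=np.float64)
    out = []
    for frame in frames:
        frame = frame.astype(np.float64)
        fg = np.abs(frame - background) > thresh   # where the subject is now
        trail *= decay                             # old tracer fades each frame
        trail[fg] = frame[fg]                      # current subject, full strength
        # Show the trail where it exists, the static background elsewhere.
        out.append(np.where(trail > 0, trail, background.astype(np.float64)))
    return out
```

The `decay` factor controls how long the tracer lingers; values near 1 give long ghostly trails, small values a tight afterimage.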
Brett Geschke, John Carmichael and Steven Radeztsky Trending Places
Want to travel to a new location somewhere in the world but don't know where to go? With Trending Places, you are just a few taps away from having the ability to see multiple pictures of a city that interests you. By using Twitter and Instagram APIs, the user will be able to search any city in the world and our app will display top trending images from that particular location.
Chao Li, Bryan Suzan and Alayna Truttmann Photo Booth Utilizing Non-Photorealistic Rendering
This project aims to filter input images with various cartoon-like effects via anisotropic diffusion, color quantization, bilateral filtering, and other auxiliary methods. It allows users to select desired filtering effects and input images through a user-friendly GUI, and the program quickly transforms the input image with the intended effects. It also allows the user to compare different effects through a photo booth. The core of this program is to create cartoon-like effects as vivid as real paintings, which users can use as Facebook or Twitter selfies. We plan to create image filters with effects like sepia, pop art, X-ray, and sketch. The filters are implemented with an image-abstraction framework that abstracts imagery by modifying the contrast of visually significant features such as luminance and color intensity. We reduce the contrast in low-contrast regions and amplify it in higher-contrast regions using difference-of-Gaussian edges, then apply color quantization to create consistent cartoon-like effects. We also plan to create an interactive GUI that lets users directly apply color filters. If time allows, we will add the program to our website so that users can also filter images on the web.
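The difference-of-Gaussian edge step combined with color quantization can be sketched roughly as follows (grayscale only, hand-rolled separable Gaussian; the thresholds and level count are our choices):

```python
import numpy as np

def blur(img, sigma):
    """Separable Gaussian blur built from two 1-D convolutions."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    pad = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, 'valid'), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, 'valid'), 0, tmp)

def cartoonize(gray, levels=4, sigma=1.0, k=1.6, edge_thresh=5.0):
    """Flat color bands plus black difference-of-Gaussian edges."""
    g = gray.astype(np.float64)
    # Quantize luminance into a few flat bands for the cartoon look.
    step = 256.0 / levels
    quant = np.floor(g / step) * step + step / 2
    # DoG: difference of two Gaussian blurs is large near strong edges.
    dog = blur(g, sigma) - blur(g, k * sigma)
    quant[np.abs(dog) > edge_thresh] = 0   # draw edges in black
    return quant
```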
Evan Hernandez and David Liang Style Transfer Using CNNs
Sometimes it is desirable to transfer the artistic style of a palette image onto some target image
while preserving the salient structures of the latter. One way to mimic style transfer is to apply
texture synthesis via image quilting with the added constraint that chosen texture blocks must
have similar intensity to the corresponding block of the target image. Another way to mimic style
transfer is to use a convolutional neural network (CNN) to learn the style of an image and then
reconstruct the target image with these styled features. Gatys et al. present such a method.
Style transfer need not be limited to still images. It is also desirable to extend the process to
video; that is, to a sequence of temporally related images. The naive approach for video style
transfer is applying an isolated style transfer algorithm to each individual frame. Unfortunately,
such an approach may result in a choppy and unrealistic video, as it does not consider the
temporal structure of the images. Ruder et al. propose a novel algorithm for video style transfer
that combines the CNN feature-detecting methods used for still images with an added cost
function that enforces temporal structure.
Our project will implement style transfer for video using the algorithm described by Ruder et al.
The program will take as input a target video file and a palette image and will output a video with
the same length and frame-by-frame/temporal structure as the target, but with a style that
matches the palette. We will begin by implementing single-image style transfer with the
style-learning CNN described in Gatys et al. We will then implement the algorithm proposed by
Ruder et al. and provide a small UI for recording video with a webcam (or just uploading video).
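The style representation at the heart of Gatys et al. is the Gram matrix of CNN feature maps; a minimal, framework-agnostic sketch of the per-layer style loss on a (channels, height, width) feature array (numpy only, normalization convention ours):

```python
import numpy as np

def gram(features):
    """Gram matrix of a (channels, height, width) feature map: the channel
    co-occurrence statistics used to represent style."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(feat_generated, feat_style):
    """Squared Frobenius distance between Gram matrices (one layer)."""
    g1, g2 = gram(feat_generated), gram(feat_style)
    return float(((g1 - g2) ** 2).sum())
```

In the full method this loss is summed over several layers and minimized with respect to the generated image's pixels; Ruder et al. add a temporal-consistency term on top.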
Anticipated References
Gatys, L.A., Ecker, A.S., Bethge, M.: A neural algorithm of artistic style. CoRR abs/1508.06576 (2015)
Ruder, M., Dosovitskiy, A., Brox, T.: Artistic style transfer for videos. CoRR abs/1604.08610 (2016)
Garrett Andrews, Matthew Muccianti , Elliott Janssen Saldivar and Aabhas Singh
Simultaneous Localization And Mapping (SLAM) is commonly used to represent and estimate spatial uncertainty for autonomous algorithms such as collision-avoidance systems and scene reconstruction in robotics. However, not all camera implementations allow for simultaneous mapping. This project uses a drone's video frames and SIFT descriptors to create a 3D model in post-production in MATLAB, recreating the mapping of the SLAM algorithm. To represent the model, a structure-from-motion (SfM) algorithm will create a point cloud of the structure from the video frames. A constant circular orbit and an arbitrary fly-over will be compared experimentally to determine which of the two mapping styles yields a denser point cloud. From each method's point cloud, a mesh will be created using Poisson surface reconstruction and Marching Cubes to determine the optimal mapping; the mesh will let the user view the object from arbitrary angles after capture. The research will yield six 3D models of the same object, two of which will serve as controls built with Microsoft Photosynth, and all of which will be analyzed for how accurately they represent the real object.
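Before SfM can triangulate points, SIFT descriptors from neighboring frames must be matched; a common approach (our assumption for this sketch, not necessarily the team's exact pipeline) is Lowe's ratio test:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Lowe's ratio test on two arrays of SIFT-style descriptors (one row
    per descriptor). A match is kept only when the nearest neighbor is
    clearly better than the second nearest. Returns index pairs (i, j)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches
```

The surviving pairs feed the fundamental-matrix estimation and triangulation steps that build the point cloud.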
Joseph Hushek, Haohua Wang and He Wang Finger Face
Hand gesture recognition has been a popular research field in computer vision. Inspired by Snapchat, we decided to add "filters" to hands and fingers. In this project, to recognize a hand gesture, we detect the hand region by skin color, extract features, and use a neural network with one hidden layer to classify the gesture. We will then identify each part of the hand and place a little face on it based on what is recognized.
Jacob Draeger, Alex Herreid and Raghav Bhagwat Hide and Seek
The project we chose implements an advanced steganography algorithm to hide text, images, and sound within an image. So far we have only found algorithms that use the least significant bits to hide the encoded content. We would like to find a more advanced algorithm that uses image-computation techniques rather than bit manipulation. We have found a few research papers with general descriptions, but we need more details. Our goal is to develop an encoder and decoder that alter an image to hide information with the fewest possible artifacts.
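The least-significant-bit baseline mentioned above can be sketched in a few lines (text-only payload, NUL-terminated; helper names are ours):

```python
import numpy as np

def hide_text(image, text):
    """Baseline LSB encoder: message bits (plus a NUL terminator) replace
    the least significant bit of successive pixel values."""
    bits = []
    for ch in text.encode() + b'\x00':
        bits.extend((ch >> b) & 1 for b in range(7, -1, -1))
    bits = np.array(bits, dtype=image.dtype)
    flat = image.flatten()
    assert bits.size <= flat.size, "image too small for message"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def reveal_text(image):
    """Read LSBs back until the NUL terminator."""
    bits = image.flatten() & 1
    out = bytearray()
    for i in range(0, len(bits) - 7, 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | int(b)
        if byte == 0:
            break
        out.append(byte)
    return out.decode()
```

Each pixel changes by at most 1 gray level, which is why LSB hiding is invisible to the eye yet trivially detectable statistically, hence the team's interest in stronger methods.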
Jing Qian, Xiaofei Liu and Ruihao Zhu
We intend to create a Japanese-anime effect for scenes from daily life by applying filters to the foreground buildings and using texture synthesis to replace the background. Inspired by the referenced paper, we first need to change the values of the foreground (target) image, including its intensity, RGB, and HSV values; we can accomplish this with style transfer. Then we need to segment the background from the foreground and replace the background of the target image with user-specified regions of the source image. Users can use any kind of artistic work as the background image.
Some challenges we are facing:
Finding an appropriate filter to apply to the foreground so that it matches the style of the anime background.
Finding a way to seamlessly apply the background texture.
We can either manually specify the edge between foreground and background, or find it automatically using segmentation.
We will use the patch-based texture synthesis algorithms mentioned in the paper to apply the textures specified by users.
Christina Stiff and Nicholas Smith and Joanne Lee Automatic Background Decolorizer
There are manual methods to preserve color in foreground content while decoloring the background, for example, using Photoshop to manually separate foreground and background regions. This project will take a color image as input and automatically remove the color from all background regions in order to accentuate the foreground subjects. Background regions are defined as those regions of a picture that fall below a set focus threshold according to a defocus map. The project will draw on papers detailing the estimation of defocus blur, which can be used to detect the background regions to be decolorized. The background will then be decolorized using a method of color removal that preserves saliency in the image. Once the background has been decolored, the grayscale background and color foreground will be enhanced using the Lab color space for increased vividness.
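Given a defocus map, the decolorization step itself reduces to a masked blend of the color image with its luminance; a minimal sketch (threshold and names ours; the saliency-preserving decolorization and Lab enhancement are omitted):

```python
import numpy as np

def decolorize_background(rgb, defocus, thresh=0.5):
    """Replace color with luminance wherever the defocus map marks the
    pixel as blurrier than `thresh`; the in-focus foreground keeps color."""
    gray = rgb @ np.array([0.299, 0.587, 0.114])   # Rec. 601 luminance
    out = rgb.astype(np.float64).copy()
    bg = defocus > thresh                          # background = defocused
    out[bg] = gray[bg, None]                       # broadcast gray into RGB
    return out
```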
John Louk, Brian Nelson and Riley Morrison
This algorithm would take a "bad" star photo (such as one taken by a cell phone, whose small sensor makes night imaging very difficult) and improve it by matching the star pattern in the image to the corresponding area of a star chart or coordinate map, then adding the missing stars to create a new image showing the stars that would actually have appeared had the picture been taken with a better camera. The matching technique is point pattern matching, which uses the relationships between uniform points to match a "constellation" of points to the corresponding constellation in a reference image, even though all feature points look identical.
A couple of difficulties might arise if we tackle this project. Most current software solutions are applied to telescope images containing hundreds of stars, rather than the ten or so bright stars you might see in a cell phone picture. Otherwise, this does seem to be a novel area for a computational photography final project.
Other applications that could arise from this project include a real-time constellation identifier with overlay, location finding (a computerized sextant), or an aberrant-star identifier (an extra "star" might actually be a meteor, comet, plane, alien spaceship, etc.).
Jake Becco, Justin Aniban and Joseph Hoffmann Fingerprint Matching
The advent of solid-state fingerprint sensors presents a fresh challenge to traditional fingerprint matching algorithms. These sensors provide a small contact area (approximately 0.6" × 0.6") for the fingertip and therefore sense only a limited portion of the fingerprint, so multiple impressions of the same fingerprint may have only a small region of overlap. Minutiae-based matching algorithms, which consider ridge activity only in the vicinity of minutiae points, are not likely to perform well on these images due to the insufficient number of corresponding points in the input and template images. We will implement a hybrid matching algorithm that uses both minutiae (point) information and texture (region) information to match fingerprints. Results reported on the MSU-VERIDICOM database show that combining texture-based and minutiae-based matching scores leads to a substantial improvement in overall matching performance.
Josh Chaimson, Bryce Greiber and Sambhav Jain
Face replacement has gained a lot of popularity through many new applications, most notably Snapchat's face swap. Our algorithm will take two input images and use landmark features to scale the replacement face onto the target image. We will find the landmark feature points using Xiong and De la Torre's Supervised Descent Method; these points help warp the image and scale it to fit the new target face. The last step is to blend the replacement face onto the target head.
Kailee Tapia Adding Text to Images
For my project, I plan on developing a program that will allow users to import an image and text. The user will then determine how they want the text aligned both vertically (top-aligned, bottom-aligned, or centered) and horizontally (left-aligned, right-aligned, or centered) on the image, as well as how large they would like the text. The program will then determine the height and width of the image and the text, based on the size the user defined. It will then place the text on the image as specified by the user, and open and save the final result.
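The alignment arithmetic described above can be sketched as a small helper that maps the user's choices to a top-left drawing coordinate (function and parameter names are ours):

```python
def text_origin(img_w, img_h, text_w, text_h, h_align="center", v_align="center"):
    """Top-left (x, y) at which to draw a text box of size (text_w, text_h)
    inside an img_w x img_h image, given the user's alignment choices."""
    x = {"left": 0,
         "center": (img_w - text_w) // 2,
         "right": img_w - text_w}[h_align]
    y = {"top": 0,
         "center": (img_h - text_h) // 2,
         "bottom": img_h - text_h}[v_align]
    return x, y
```

A drawing library (e.g. PIL's `ImageDraw`) would then render the text at the returned coordinate.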
Lingfeng Huang and Fang Wang Discovering Panoramas in Web Videos
Panoramas are widely used in many multimedia applications, but their main constraint is that they must be taken by someone physically present at the location. In this project, we will implement our own version of discovering panoramas in web videos to address this problem: we select optimal segments within a given video and then stitch them into a panorama. The whole procedure is essentially an optimization problem in which we optimize three criteria: wide field of view, mosaicability, and high image quality.
Arjun Gurumurthy, Ashok Marannan and Manoj Nagarajan Machine Learning based Efficient Image Colorization
Image colorization is the process of adding color to grayscale images. Manual colorization is tedious and prone to human error. Existing automatic approaches either use expensive pixel-based scanning, rely on manual scribbles, assume a strong correlation with the target grayscale image, or require a large number of training examples. In this project, we aim to perform automatic image colorization from a small number of similar reference images using machine learning. Initially, (1) we plan to design features that capture different properties of a grayscale image for training ML models. Then, (2) grouping pixels into superpixels lets us capture local information with reduced complexity. Given a corpus of training images, (3) one or more machine learning algorithms, such as support vector regression or k-means, can be trained to colorize parts of images with reasonable accuracy. Based on the model's output, (4) we plan to perform post-processing (such as smoothing) to reassign the colors of segments predicted with low confidence. Via these four steps, we hope to color images with reasonable accuracy using far fewer training images than other approaches require.
Mary Feng and Qing (Sabrina) Yu Image Analogies
This project focuses on implementing image analogies as described in the paper of the same name (Hertzmann et al., 2001). There are three input images: A (some image), A' (a "filtered" version of A), and B (an image to which we would like to apply the same filter that maps A to A'). The images A and A' act as training data so that the filter relating them can be learned and applied to B. The resulting image B' is computed such that B' relates to B in the same way A' relates to A, completing the image analogy. Image analogies can produce a variety of effects depending on the input images provided. Originally, the goal was to use image analogies for both artistic filtering and colorization, but since there are only two people in this group, we will focus on one of those topics (artistic filters or colorization) to simplify the project.
Matthew Nicol, Brad Miller and James Merrill Partial Image Placement Software (P.I.P.S)
The goal of our project is to be able to select part of an image and blend it into a second image. We will consider a few different approaches to select the area that will be moved. The first approach we will consider is lazy snapping. This technique involves selecting areas that you want moved and also areas that you do not want moved. A second approach we will consider is paint selection. In this technique the user selects the area by drawing over it. A final technique we will consider is drawing loops around the desired area. To blend the selected part into a new image we will use Poisson blending.
Micaela Connors, Angus Kinsey and Jason Chen Visual Timeline
The goal of this project is to highlight differences between two contrasting images taken of the same scene at different points in history or at different times of day by combining them to create a visual timeline output image. This will be accomplished by compositing two input images into one blended image where each image takes up about half of the space in the final processed image. In order to combine the images, we will use the Laplacian Pyramid Blending method to seamlessly combine the images, creating a composite output image of the two input images in the middle and leaving the originals unchanged further from the center.
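A minimal sketch of the blending idea, using a non-decimated (no downsampling) variant of the Laplacian pyramid for simplicity; the level count and the tiny blur kernel are our choices, not necessarily the team's:

```python
import numpy as np

def _blur(img):
    """Small separable (1, 2, 1)/4 Gaussian blur with edge padding."""
    k = np.array([0.25, 0.5, 0.25])
    p = np.pad(img, 1, mode='edge')
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, 'valid'), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, k, 'valid'), 0, tmp)

def blend(a, b, mask, levels=3):
    """Laplacian-pyramid blend: combine band-pass levels of a and b under
    a progressively smoothed mask, then sum the bands back up."""
    a, b, m = a.astype(np.float64), b.astype(np.float64), mask.astype(np.float64)
    out = np.zeros_like(a)
    for _ in range(levels):
        la, lb = a - _blur(a), b - _blur(b)      # Laplacian band at this level
        out += m * la + (1 - m) * lb
        a, b, m = _blur(a), _blur(b), _blur(m)   # next (coarser) level
    out += m * a + (1 - m) * b                   # blended low-pass residual
    return out
```

Because fine bands are mixed with a sharp mask and coarse bands with a smooth one, the seam between the two halves of the timeline image disappears.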
Jackson Milkey, Michael Salmon and Andrew Zeitlow Automatic Cartoonization
Using bilateral filtering, we will attempt to make a photo, and possibly live video, automatically appear as if it were a cartoon.
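A brute-force bilateral filter, the edge-preserving smoother behind most cartoonization pipelines, can be sketched as follows (grayscale only; the parameter values are our illustrative defaults):

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Brute-force bilateral filter: each output pixel is a weighted
    average of its neighbors, down-weighted by both spatial distance and
    intensity difference, so flat regions smooth while edges survive."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    pad = np.pad(img, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rangew = np.exp(-(win - img[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rangew
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out
```

Repeating the filter a few times and then quantizing colors gives the flat, posterized cartoon look.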
Neha Godwal, Sidharth Mudgal and Yipeng Zhang Converting 2D videos to 3D
Due to advancements in visual technology and the emerging virtual reality market, users have a vested interest in creating 3D content. Today, 3D movies can be made with stereo cameras, which are generally not affordable for most people, whereas 2D video cameras are accessible to everyone. In this project, our aim is to achieve automatic conversion of 2D videos to 3D using deep neural networks. Our approach is to train a deep neural network on stereo pairs extracted from 3D movies and videos. Compared to other 2D-to-3D conversion algorithms, we expect the deep learning model to perform better in both quantitative and human-subject evaluations.
Wangtao Lian, Rulan Zheng and Ruiqi Yin Emotion Mask
Our group's idea is to apply an emotion mode to a person's neutral face. The input image is always a neutral face; the user can then choose a mode that changes the input into a different facial expression such as happy, sad, or surprised. According to a paper we found, we can first detect the face and mark feature points, then apply formulas that change the ratios between those points.
Jacob Holiday, Sahil Verma Detailed Facial Recognition and Photo Effects for Portraits (Selfies)
In today's world, where we take selfies as often as we blink, many applications focus on taking selfies to the next level by enhancing them in some way or adding different effects. Smartphone users have created a constant demand for new filters. For our project, we will develop our own creative filters and work on a way to realistically apply them to faces. First, we will explore methods for detailed face detection: not only recognizing faces, but also the specific scale, location, and orientation of facial features. Next, we will use this in-depth recognition to apply realistic effects to a face, such as skin smoothing or fun masks. This project employs various algorithms for feature detection, feathering, and face morphing.
Eric Arndt, Justin Xayarinh and Shuruthy Yogarajah Image to Sketch
Our project will take an image and turn it into a sketched version of itself, so that it looks like a drawing of the original. We first plan to convert an image into a 'pencil' sketch, then see whether we can further develop the project with additional features.
Some additional feature ideas we were thinking of are:
(1) chalkboard and (2) animation of the drawing being done.
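One common way to get the basic pencil effect (an assumption on our part, not necessarily the method the team will choose) is the 'color dodge' trick: divide the image by a blurred, inverted copy of itself, so flat regions go white while edges stay dark:

```python
import numpy as np

def pencil_sketch(gray, sigma=3.0):
    """Classic 'color dodge' sketch: gray * 255 / (255 - blur(255 - gray))."""
    g = gray.astype(np.float64)
    inv = 255.0 - g
    # Separable Gaussian blur of the inverted image.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2)); k /= k.sum()
    p = np.pad(inv, r, mode='edge')
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, 'valid'), 1, p)
    blurred = np.apply_along_axis(lambda col: np.convolve(col, k, 'valid'), 0, tmp)
    # Dodge: small epsilon guards against division by zero.
    return np.clip(g * 255.0 / (255.0 - blurred + 1e-6), 0, 255)
```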
Shu Chen, You Wu and Sijia Zhang Anime a Scene Photo
We present a method to transform a scenery photograph into an anime-style film clip using a customized image filter and an image segmentation algorithm. First we apply edge sharpening and an oil-paint filter to make the image conform to the general anime style. Then we adjust the photo's size, brightness, and saturation to achieve the brighter, more vivid color contrast typical of anime. Finally, we identify and separate the sky region of the photo using a segmentation algorithm and replace it with an anime sky of the user's choice. The biggest challenge we face is using image segmentation to identify the different parts of the image and replace the targeted part with our input image.
Cheng Xiang, Siyu Chen and Yizhe Qu Face Detection and Modification
After some research and discussion, we decided to build a MATLAB program that performs face detection and modification.
Face detection is a commonly used technology these days; it is an image-processing technique for identifying human faces in digital images, and many major developers, such as Apple and Microsoft, have deployed the feature. In addition to basic face detection, if we have enough time we may augment it with modifications such as swapping the faces of different people within the same image. We will likely base our work on "Robust Real-Time Face Detection" (Viola and Jones, 2004).
We will work on two implementations: first, face detection, for which we will use ideas and code from the references; second, modification of the detected faces, for which we will develop interesting effects.
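The Viola-Jones detector cited above rests on the integral image, which turns any rectangular (Haar-like) feature sum into four table lookups; a minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended, as used by
    Viola-Jones to evaluate box features in constant time."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] recovered from the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

A Haar feature is then just a difference of two or three such box sums, cheap enough to evaluate thousands of times per window.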