Effective Replays and Summarization of Virtual Experiences

Abstract: Direct replays of a user's experience in a virtual environment are difficult for others to watch due to unnatural camera motions. We present methods for replaying and summarizing these egocentric experiences that effectively communicate the user's observations while reducing unwanted camera movements. Our approach summarizes the viewpoint path as a concise sequence of viewpoints that cover the same parts of the scene. The core of our approach is a novel content-dependent metric that identifies similarities between viewpoints. This enables viewpoints to be grouped by similar contextual view information and provides a means to generate novel viewpoints that encapsulate a series of views. These encapsulated viewpoints are used to synthesize new camera paths that convey the content of the original viewer's experience. Projecting the user's initial movement back onto the scene conveys the details of their observations, and the extracted viewpoints can serve as bookmarks for control or analysis. Finally, we present performance analysis along with two forms of validation: one testing whether the extracted viewpoints are representative of the viewer's original observations, and one testing the overall effectiveness of the presented replay methods.
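The idea of grouping viewpoints by shared visible content can be sketched as follows. This is a hypothetical illustration, not the paper's actual metric: it assumes each viewpoint is summarized by the set of scene elements it sees, compares views by Jaccard overlap, and greedily groups consecutive similar views. All names and the threshold are illustrative assumptions.

```python
def view_similarity(visible_a, visible_b):
    """Jaccard overlap of the sets of scene elements visible from two views.

    An assumed stand-in for the paper's content-dependent metric.
    """
    a, b = set(visible_a), set(visible_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def group_viewpoints(path_visibility, threshold=0.6):
    """Greedily group consecutive viewpoints whose visible content overlaps.

    Each group could then be encapsulated by a single representative view.
    """
    groups = []
    current = [0]
    for i in range(1, len(path_visibility)):
        if view_similarity(path_visibility[current[0]],
                           path_visibility[i]) >= threshold:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    return groups

# Example: a user dwells on one part of the scene, then moves to another.
visibility = [
    {"statue", "wall"}, {"statue", "wall", "door"}, {"statue", "wall"},
    {"window", "table"}, {"window", "table", "chair"},
]
print(group_viewpoints(visibility))  # → [[0, 1, 2], [3, 4]]
```

A real system would derive the visible sets from rendering or ray casting, and would synthesize a smooth camera path through one representative viewpoint per group.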

Link: http://pages.cs.wisc.edu/~kponto/lel-pubs/ieee-vr-2012.html

Virtual Exertions: a user interface combining visual information, kinesthetics and biofeedback for virtual object manipulation
Abstract: Virtual reality environments can present users with rich visual representations of simulated environments. However, the means to interact with these illusions are generally unnatural, in the sense that they do not match the methods humans use to grasp and move objects in the physical world. We demonstrate a system that enables users to interact with virtual objects through natural body movements by combining visual information, kinesthetics, and biofeedback from electromyograms (EMG). Our method allows virtual objects to be grasped, moved, and dropped through muscle exertion classification calibrated to physical-world masses. We show that users can consistently reproduce these calibrated exertions, allowing them to interface with objects in a novel way.
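One way such exertion classification might work is sketched below: a rectified-mean envelope summarizes a window of raw EMG samples, and a per-user calibration maps known object masses to minimum exertion levels. The envelope computation, threshold scheme, and all values are assumptions for illustration, not the published calibration procedure.

```python
def emg_envelope(samples):
    """Mean rectified amplitude over a window of raw EMG samples."""
    return sum(abs(s) for s in samples) / len(samples)

def classify_exertion(envelope, calibration):
    """Return the heaviest calibrated mass whose exertion level is met.

    `calibration` maps object mass (kg) -> minimum envelope recorded
    during a per-user calibration phase with known physical masses.
    """
    held = None
    for mass, level in sorted(calibration.items()):
        if envelope >= level:
            held = mass
    return held  # None means the grasp is released (object dropped)

# Per-user calibration: envelope levels measured while lifting known masses.
calibration = {1.0: 0.2, 5.0: 0.5, 10.0: 0.8}

window = [0.6, -0.5, 0.7, -0.55]          # raw EMG window
e = emg_envelope(window)                  # 0.5875
print(classify_exertion(e, calibration))  # → 5.0 (holding the 5 kg object)
```

In a live system the window would slide over the EMG stream, and hysteresis around each threshold would keep a grasped object from flickering between states.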

Link: http://pages.cs.wisc.edu/~kponto/lel-pubs/ieee-3dui-2012.html

Leonardo da Vinci's Lost Mural: The Battle of Anghiari
Abstract: "The Battle of Anghiari" disappeared nearly 500 years ago when the Hall of the 500 in the Palazzo Vecchio was remodeled by Giorgio Vasari, starting in 1563. But was "The Battle of Anghiari" destroyed? Did Vasari protect it behind his own new mural? And if the da Vinci masterpiece remained in place, did it crumble, or has it survived to this day? Our group is working on advanced imaging and visualization techniques to answer these questions.

Link: http://cisa3.calit2.net/research/anghiari.php

The Valley of the Khans Project
Abstract: The objective of this study is to perform a non-destructive archaeological search for the tomb of Genghis Khan utilizing modern digital tools from a variety of disciplines, including digital imagery, computer vision, non-destructive surveying, and on-site digital archaeology. The goal of the search is to identify the location of the tomb without disturbing it, thus maintaining respect and reverence for local customs while enabling protective measures through organizations such as UNESCO. With the growing trend of rogue mining and looting of antiquities in this region, such protective measures may ensure the preservation of this iconic symbol of world cultural heritage.

Link: http://valleyofthekhans.org/

HIPerSpace

Abstract: The Highly Interactive Parallelized Display Space project (HIPerSpace) is brought to you by the creators of HIPerWall. HIPerSpace is the next-generation concept for ultra-high-resolution distributed display systems that can scale into the billions of pixels, providing unprecedented high-capacity visualization capabilities to experimental and theoretical researchers. HIPerSpace has held the distinction of "World's Highest Resolution Display" since it was first introduced in 2006, taking the top spot previously held by HIPerWall, which had held it since 2005. HIPerSpace has served as the baseline system for nearly all OptIPortals deployed since the end of 2006; it is the godfather of most of the high-resolution multi-tile walls that have emerged recently, many of which are nearly identical copies. HIPerSpace is powered by our cluster graphics library and cluster management framework, CGLX.

Link: http://vis.ucsd.edu/mediawiki/index.php/Research_Projects:_HIPerSpace

CGLX

Abstract: CGLX (Cross-Platform Cluster Graphics Library) is a flexible, transparent OpenGL-based graphics framework for distributed high-performance visualization systems in a master-slave setup. The framework was developed to enable OpenGL programs to be executed on visualization clusters, such as high-resolution tiled display systems, and to maximize the achievable performance and resolution for OpenGL-based applications on such systems. To overcome performance- and configuration-related challenges in networked display environments, CGLX launches and manages instances of an application on all rendering nodes through a lightweight, thread-based network communication layer. A GLUT-like (OpenGL Utility Toolkit) interface is presented to the user, which allows this framework to intercept and interpret OpenGL calls and to provide a distributed, large-scale OpenGL context on a tiled display. CGLX provides distributed, parallelized rendering of OpenGL applications with all OpenGL extensions supported by the graphics hardware.
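One idea underlying distributed OpenGL contexts like the one described above can be sketched numerically: each render node draws the same scene through a sub-frustum covering only its tile of the global display. The tile layout and frustum-slicing math below are generic assumptions for illustration, not CGLX's actual API.

```python
def tile_frustum(left, right, bottom, top, cols, rows, col, row):
    """Slice a global projection frustum into the sub-frustum for one tile.

    (left, right, bottom, top) describe the global near-plane extents;
    (cols, rows) is the tile grid; (col, row) selects this node's tile.
    """
    w = (right - left) / cols
    h = (top - bottom) / rows
    return (left + col * w,          # tile left
            left + (col + 1) * w,    # tile right
            bottom + row * h,        # tile bottom
            bottom + (row + 1) * h)  # tile top

# A 2x2 tiled wall sharing one global frustum spanning [-1, 1] x [-1, 1]:
for row in range(2):
    for col in range(2):
        print((col, row), tile_frustum(-1, 1, -1, 1, 2, 2, col, row))
# Each node would pass its slice to glFrustum() and render the full scene,
# so the wall as a whole shows one seamless, very-high-resolution image.
```

Because every node renders the complete scene through its own asymmetric frustum, the application code can stay unmodified; the framework only has to substitute the projection per node and keep input events and buffer swaps synchronized.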

Link: http://vis.ucsd.edu/mediawiki/index.php/Research_Projects:_HIPerSpace


©2011-2012 University of Wisconsin–Madison. Last Updated 2012-01-20.