I conduct my research with my advisors, Dr. Mike Gleicher of the Graphics Group and Dr. Bilge Mutlu of the Human-Computer Interaction Lab. I have also completed two internships at Microsoft Research.


Authoring Directed Gaze for Full-Body Motion Capture

Directed gaze is an important component of believable character animation: it situates characters in the scene, signals the focus of their attention, and conveys their personality and intent. Directed gaze is composed of coordinated movements of the eyes, head, and (sometimes) the upper body toward targets in the scene. In animation practice, gaze is typically hand-authored, which requires considerable expertise and effort due to the intricate pose and timing relationships among eye, head, and torso movements. In this project, we introduce an approach for automatically adding editable gaze to a captured body motion. First, we analyze the body motion and scene layout to automatically infer when and where the character is looking. The output of this step is an abstract representation of the character's directed gaze behavior. From this representation, we can automatically synthesize eye animation that matches the scene. The representation is also conveniently editable: animators can use the provided editing tool to correct errors in the original performance, adapt gaze direction to changes in scene layout, and alter the character's communicative intent and personality.
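To make the idea of an editable gaze representation concrete, here is a minimal Python sketch of what such a timeline of gaze shifts might look like. The GazeShift fields, target names, and the retarget helper are hypothetical illustrations of the concept, not the actual data structures or algorithm from the paper.

```python
from dataclasses import dataclass

@dataclass
class GazeShift:
    """One entry in an editable gaze timeline (hypothetical schema)."""
    start_frame: int        # frame at which the gaze shift begins
    end_frame: int          # frame at which the new target is fixated
    target: str             # name of the scene object being looked at
    head_alignment: float   # 0 = eyes only, 1 = full head reorientation

# A toy timeline inferred from a captured clip; an editing tool would let
# an animator retarget, retime, or delete these entries before synthesis.
timeline = [
    GazeShift(start_frame=12, end_frame=30, target="cup", head_alignment=0.8),
    GazeShift(start_frame=95, end_frame=110, target="partner", head_alignment=1.0),
]

def retarget(timeline, old_target, new_target):
    """Adapt the behavior to a changed scene layout by renaming a target."""
    for shift in timeline:
        if shift.target == old_target:
            shift.target = new_target

retarget(timeline, "cup", "bottle")
print(timeline)
```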

Publications:

Pejsa, T., Rakita, D., Mutlu, B., and Gleicher, M. (2016). Authoring directed gaze for full-body motion capture. In Proceedings of the 9th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia (SIGGRAPH Asia 2016). To appear.

Rakita, D., Pejsa, T., Mutlu, B., and Gleicher, M. (2015). Inferring gaze shifts from captured body motion. In ACM SIGGRAPH 2015 Posters (SIGGRAPH '15), Los Angeles, CA.


Room2Room: Enabling Life-size Telepresence in a Projected Augmented Reality Environment

Room2Room is a telepresence system that leverages projected augmented reality to enable life-size, room-scale interaction between two remote participants. The system performs 3D capture of the local user with color+depth cameras and projects their life-size virtual copy into the remote room, achieving an illusion of the remote person's physical presence in the local space. The participants are able to communicate naturally using a range of nonverbal cues, such as gaze, gestures, and posture. The system facilitates collaboration by providing a large workspace at the scale of the entire room, which the participants can view and seamlessly interact with. We also contribute strategies for projecting virtual copies onto physically plausible locations in the remote space, such that they form a natural and consistent conversational formation with their partners.
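As a rough illustration of the placement idea, the Python sketch below scores candidate floor positions for the projected virtual copy, preferring spots that lie at a comfortable conversational distance from the local partner and coincide with physically plausible locations. The scoring terms, the preferred distance, and the seat-spot inputs are hypothetical simplifications, not the system's actual placement strategy.

```python
import math

def formation_score(candidate, partner, plausible_spots, preferred_distance=1.2):
    """
    Score a candidate floor position (x, z) for projecting the remote
    person's virtual copy. Heuristic: stay near a plausible seat or open
    floor area and at a comfortable distance from the local partner.
    """
    dist = math.hypot(candidate[0] - partner[0], candidate[1] - partner[1])
    distance_term = -abs(dist - preferred_distance)
    # Reward candidates close to a physically plausible spot (e.g., an
    # empty chair or clear floor area detected by the depth cameras).
    nearest_spot = min(math.hypot(candidate[0] - s[0], candidate[1] - s[1])
                       for s in plausible_spots)
    return distance_term - nearest_spot

partner = (0.0, 0.0)
spots = [(1.0, 0.5), (2.5, 2.0)]
candidates = [(1.1, 0.6), (2.4, 1.9), (0.3, 0.2)]
best = max(candidates, key=lambda c: formation_score(c, partner, spots))
print("Best projection spot:", best)
```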

This research was carried out during my Summer 2014 internship at Microsoft Research, in collaboration with Hrvoje Benko, Julian Kantor, Eyal Ofek, and Andrew Wilson.

Publications:

Pejsa, T., Kantor, J., Benko, H., Ofek, E., and Wilson, A.D. (2016). Room2Room: Enabling Life-size Telepresence in a Projected Augmented Reality Environment. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2016). Best Paper Award.


Natural Communication about Uncertainties in Situated Interaction

Physically situated, multimodal interactive systems must make probabilistic inferences about properties of the world, users, and their intentions and actions. These inferences are drawn from noisy sensor data and are therefore subject to uncertainty; failure to account for these uncertainties may cause the interaction to break down. In this work, we have developed methods for estimating and communicating uncertainties that arise during situated, multiparty interaction. We propose a representation that captures both the magnitude and the underlying causes of uncertainty. We also introduce policies and behaviors that leverage the verbal and nonverbal affordances of an embodied conversational agent to communicate naturally about uncertainties with conversational participants, enlist their help in resolving them, and prevent further uncertainties from arising by informing participants about the system's limitations.
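The sketch below illustrates, in simplified Python, the kind of representation this suggests: an uncertainty record that carries both a magnitude and a cause, consumed by a toy behavior policy. The enum values, fields, thresholds, and responses are hypothetical examples, not the system's actual representation or policies.

```python
from dataclasses import dataclass
from enum import Enum

class UncertaintySource(Enum):
    """Hypothetical causes the system might distinguish."""
    SPEECH_RECOGNITION = "speech_recognition"
    OUT_OF_VIEW = "out_of_view"
    ADDRESSEE_AMBIGUOUS = "addressee_ambiguous"

@dataclass
class Uncertainty:
    source: UncertaintySource   # underlying cause of the uncertainty
    magnitude: float            # 0 (certain) .. 1 (completely uncertain)

def choose_behavior(u: Uncertainty) -> str:
    """Toy policy: map an uncertainty to a verbal/nonverbal response."""
    if u.magnitude < 0.2:
        return "proceed"                          # confident enough to act
    if u.source is UncertaintySource.OUT_OF_VIEW:
        return "ask the participant to step into view"
    if u.source is UncertaintySource.SPEECH_RECOGNITION:
        return "lean in and ask for repetition"
    return "ask a clarifying question"

print(choose_behavior(Uncertainty(UncertaintySource.SPEECH_RECOGNITION, 0.6)))
```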

This research was carried out during my Spring 2014 internship at Microsoft Research, in collaboration with Michael Cohen, Dan Bohus, Nick Saw, Jim Mahoney, and Eric Horvitz.

Publications:

Pejsa, T., Bohus, D., Cohen, M.F., Saw, C., Mahoney, J.M., and Horvitz, E. (2014). Natural Communication about Uncertainties in Situated Interaction. In Proceedings of the 16th ACM International Conference on Multimodal Interaction (ICMI 2014), 283-290.


Designing Effective Gaze Mechanisms for Virtual Agents


Virtual agents hold great promise for human-computer interaction because they afford embodied interaction through nonverbal human communicative cues. Gaze cues in particular are integral to communication and to the management of attention in social interactions, and they can trigger important social and cognitive processes, such as the establishment of affiliation between people or the learning of new information. Our goal is to explore how agents might trigger such processes through changes in the properties of their gaze behavior, in particular the spatial and temporal coordination of eye, head, and upper-body movements during gaze shifts. We draw on research in human physiology to develop models of gaze behavior; implement these models on virtual agents using a character-creation pipeline consisting of DAZ Studio, Autodesk 3ds Max, and Unity; and run user studies to test how manipulating the parameters of our gaze models can lead to positive social and cognitive outcomes in human-agent interactions.
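As a simplified illustration of this kind of parameterization, the Python sketch below splits a gaze shift into eye and head contributions using a single head-alignment parameter and an oculomotor-range limit. The function, parameter names, and default values are illustrative assumptions rather than the published model.

```python
def plan_gaze_shift(target_angle_deg, head_alignment, omr_deg=45.0):
    """
    Split a gaze shift toward a target (an angular offset from the current
    facing direction) into eye and head contributions. head_alignment is a
    hypothetical parameter in [0, 1]: 0 keeps the head still (eyes do all
    the work, limited by the oculomotor range omr_deg), 1 turns the head
    fully so the eyes end up centered in their sockets.
    """
    head_rotation = head_alignment * target_angle_deg
    eye_rotation = target_angle_deg - head_rotation
    # The eyes cannot rotate past their oculomotor range; any remainder
    # must be taken up by additional head (or torso) movement.
    if abs(eye_rotation) > omr_deg:
        overflow = abs(eye_rotation) - omr_deg
        sign = 1 if eye_rotation > 0 else -1
        head_rotation += sign * overflow
        eye_rotation = sign * omr_deg
    return eye_rotation, head_rotation

print(plan_gaze_shift(60.0, head_alignment=0.3))  # mostly-eyes shift
print(plan_gaze_shift(60.0, head_alignment=1.0))  # fully head-aligned shift
```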

Publications:

Pejsa, T., Andrist, S., Gleicher, M., and Mutlu, B. (2014). Gaze and Attention Management for Embodied Conversational Agents. ACM Transactions on Interactive and Intelligent Systems, 5(1), Article 3, 34 pages.

Andrist, S., Pejsa, T., Mutlu, B., and Gleicher, M. (2012). Designing Effective Gaze Mechanisms for Virtual Agents. In Proceedings of the 30th ACM SIGCHI Conference on Human Factors in Computing Systems (CHI 2012), 705-714.

Andrist, S., Pejsa, T., Mutlu, B., and Gleicher, M. (2012). A Head-Eye Coordination Model for Animating Gaze Shifts of Virtual Characters. In Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human-Machine Interaction held at the International Conference on Multimodal Interfaces, 4:1-4:6.


Stylized and Performative Gaze for Character Animation


In this project, we have developed a parametric computational model for the synthesis of directed gaze shifts that can be applied to stylized and anthropomorphic characters and supports enhanced control over communicative content by incorporating viewer-oriented "staging effects." Our model incorporates techniques for automatic, online adaptation of gaze motion to varying geometric properties of the eyes and head, with the goal of achieving visually pleasing and natural gaze motion in cartoon-style and non-human characters while retaining the communicative and expressive properties of real human gaze. In a study with human participants, we explore how changes to the spatial and kinematic properties of gaze motion enacted by our model affect the communicative accuracy and perceived naturalness of the character's gaze cues.
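To give a flavor of such geometric adaptation, here is a small Python sketch of one plausible rule: scaling down the oculomotor range for characters with oversized eyes so that extreme eye poses remain readable and artifact-free. The function and the specific scaling rule are hypothetical illustrations, not the adaptation techniques described in the paper.

```python
def adapted_omr(base_omr_deg, eye_width_ratio):
    """
    Hypothetical adaptation rule: characters with larger-than-human eyes
    (eye_width_ratio > 1) get a reduced oculomotor range, so that extreme
    eye poses do not look cross-eyed or clip through the eyelids, while
    small-eyed characters keep the full human range.
    """
    return base_omr_deg / max(1.0, eye_width_ratio)

# A cartoon character whose eyes are 2.5x wider than a human's (relative
# to head size) would have its eye rotations clamped more aggressively.
for ratio in (1.0, 1.5, 2.5):
    print(ratio, "->", round(adapted_omr(45.0, ratio), 1), "degrees")
```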

Publications:

Pejsa, T., Mutlu, B., and Gleicher, M. (2013). Stylized and Performative Gaze for Character Animation. Computer Graphics Forum (Proceedings of EUROGRAPHICS 2013), 32(2), 143-152.