I work with Dr. Bilge Mutlu in the Human-Computer Interaction Lab and Dr. Michael Gleicher in the UW Graphics Group. A chronological list of my publications can be found on the publications page or on my Mendeley profile.

 

Modeling Gaze Mechanisms for Virtual Agents and Humanlike Robots

Embodied social agents hold great promise in application areas such as education, training, rehabilitation, and collaborative work because they afford embodied interaction through nonverbal human communicative cues. Gaze cues are particularly important for achieving social and communicative goals. In this research, I explore how agents, both virtual and physical, might achieve these goals through various gaze mechanisms. I am developing control models of gaze behavior that treat gaze as the output of a system driven by a number of multimodal inputs.

By giving embodied agents the ability to draw on the full communicative power of gaze cues, this work will lead to human-agent interactions that are more engaging and rewarding. The primary outcome of this research will be a set of gaze models that can be dynamically combined to achieve the full range of gaze functions for a wide array of embodied characters and interaction modalities. These models will range from low-level computational models to high-level qualitative models. The primary hypothesis is that gaze cues generated by these models, grounded in the literature on human gaze, will evoke positive social and cognitive responses, and that these effects will generalize across agent representations and task contexts.
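
As a rough illustration of the kind of control model described above, the sketch below maps a few hypothetical multimodal inputs (the partner's location, scene salience, and the agent's speech state) to a single gaze target. The class, function, thresholds, and rules are illustrative assumptions for exposition, not the models reported in the publications below.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GazeTarget:
    """A location in the scene the agent could look at (illustrative)."""
    label: str
    position: tuple       # (x, y, z) in the agent's reference frame
    salience: float = 0.0 # how strongly this target currently attracts attention

def select_gaze_target(partner: GazeTarget,
                       salient_objects: List[GazeTarget],
                       agent_is_speaking: bool,
                       formulating_utterance: bool) -> GazeTarget:
    """Toy gaze controller: combine multimodal inputs (speech state, scene
    salience, partner location) into one gaze target. The rules and the
    threshold below are hypothetical, not the published models."""
    # Averting gaze while planning an utterance can signal cognitive effort
    # and help regulate the conversational floor.
    if formulating_utterance:
        x, y, z = partner.position
        return GazeTarget("aversion", (x + 0.4, y + 0.2, z))
    # While speaking about the environment, look at the most salient object
    # to produce a referential (deictic) gaze cue.
    if agent_is_speaking and salient_objects:
        best = max(salient_objects, key=lambda t: t.salience)
        if best.salience > 0.5:  # arbitrary illustrative threshold
            return best
    # Otherwise, default to mutual gaze with the conversational partner.
    return partner
```

In the actual models, such decisions are grounded in the literature on human gaze and evaluated in user studies rather than hand-tuned rules like these.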

Media Coverage

New Scientist (UK), 2014: "The robot tricks to bridge the uncanny valley"

AAAS Science Update (US), 2014: "Robot gaze aversion"

Science Nation (US), 2012: "Robots that can teach humans"

Publications

Pejsa, T., Andrist, S., Mutlu, B., and Gleicher, M. (Under Review). Gaze and Attention Management for Embodied Conversational Agents. Submitted to ACM Transactions on Interactive and Intelligent Systems (TiiS).

Andrist, S., Tan, X. Z., Gleicher, M., and Mutlu, B. (2014). Conversational Gaze Aversion for Humanlike Robots. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI '14). ACM. New York, NY, USA. 25-32. (pdf) [Best Paper Award Nominee]

Ruhland, K., Andrist, S., Badler, J., Peters, C., Badler, N., Gleicher, M., Mutlu, B., and McDonnell, R. (2014). "Look Me in the Eyes": A Survey of Eye and Gaze Animation for Virtual Agents and Artificial Systems. In Eurographics State-of-the-Art Report (EG '14 STARs).

Andrist, S. (2013). Controllable Models of Gaze Behavior for Virtual Agents and Humanlike Robots. In Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13), Doctoral Consortium. ACM. New York, NY, USA. 333-336. (pdf)

Andrist, S., Mutlu, B., and Gleicher, M. (2013). Conversational Gaze Aversion for Virtual Agents. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Proceedings of the 13th International Conference on Intelligent Virtual Agents (IVA '13). Springer Berlin Heidelberg. 249-262. (pdf) [Highly Commended Paper Award]

Andrist, S., Pejsa, T., Mutlu, B., and Gleicher, M. (2012). Designing Effective Gaze Mechanisms for Virtual Agents. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). ACM. New York, NY, USA. 705-714. (pdf)

Andrist, S., Pejsa, T., Mutlu, B., and Gleicher, M. (2012). A Head-Eye Coordination Model for Animating Gaze Shifts of Virtual Characters. In Proceedings of the 14th International Conference on Multimodal Interaction (ICMI '12), 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction (Gaze-In '12). ACM. New York, NY, USA. (pdf)

 

Developing Engaging Behaviors for Virtual Characters Interacting with Groups of Children

In the fall of 2012, I was a lab associate intern at Disney Research Pittsburgh, where I conducted research on multiparty turn-taking with groups of children interacting with an embodied conversational agent. Using Unity and Maya, I first implemented a game in which an on-screen virtual agent (partially autonomous, partially wizard-controlled) played with groups of children. I then developed verbal and nonverbal behaviors that the agent could use to encourage better turn-taking and reduce overlapping speech among the children while keeping the game fun and spontaneous. Finally, I conducted a pilot study with children recruited from the local Pittsburgh area to test the effectiveness of these new character behaviors.
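
As a loose sketch of what this kind of behavior selection might look like, the toy policy below escalates from subtle nonverbal cues to explicit verbal interventions as overlapping speech persists. The behavior names, thresholds, and escalation logic are hypothetical and are not the policy used in this work.

```python
import random

# Hypothetical repertoires of turn-regulation behaviors; the names and the
# escalation thresholds below are illustrative, not those used in the study.
NONVERBAL_BEHAVIORS = ["gaze_at_next_speaker", "raise_hand", "lean_toward_speaker"]
VERBAL_BEHAVIORS = ["address_child_by_name", "ask_direct_question", "restate_game_rules"]

def choose_turn_taking_behavior(num_children_speaking: int,
                                overlap_duration_s: float,
                                next_speaker: str) -> dict:
    """Pick an intervention when overlapping speech is detected, escalating
    from subtle nonverbal cues to explicit verbal ones."""
    if num_children_speaking <= 1:
        return {"behavior": "none", "target": None}  # no overlap to manage
    if overlap_duration_s < 1.0:
        # Brief overlap: direct gaze toward the intended next speaker.
        return {"behavior": "gaze_at_next_speaker", "target": next_speaker}
    if overlap_duration_s < 3.0:
        # Longer overlap: a stronger nonverbal cue, varied for spontaneity.
        return {"behavior": random.choice(NONVERBAL_BEHAVIORS), "target": next_speaker}
    # Sustained overlap: an explicit verbal intervention that names one child.
    return {"behavior": random.choice(VERBAL_BEHAVIORS), "target": next_speaker}
```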

Publications

Andrist, S., Leite, I., and Lehman, J. (2013). Fun and Fair: Influencing Turn-taking in a Multi-party Game with a Virtual Agent. In Proceedings of the 12th International Conference on Interaction Design and Children (IDC '13). ACM. New York, NY, USA. 352-355. (pdf)

Leite, I., Hajishirzi, H., Andrist, S., and Lehman, J. (2013). Managing Chaos: Models of Turn-taking in Character-multichild Interactions. In Proceedings of the 15th International Conference on Multimodal Interaction (ICMI '13). ACM. New York, NY, USA. 43-50. (pdf)

Leite, I., Hajishirzi, H., Andrist, S., and Lehman, J. (2013). Take or Wait? Learning Turn-Taking from Multiparty Data. In AAAI Conference on Artificial Intelligence (Late-Breaking Developments). (pdf)

 

Robots as Experts: How Robots Might Persuade People Using Linguistic Cues of Expertise

Robots hold great promise as informational assistants such as museum guides, information booth attendants, concierges, and shopkeepers. In such roles, people will expect robots to be experts on their particular topic, and if an informational robot is not perceived to be an expert, people may not trust the information it gives them. To raise trust in and compliance with the information that robots provide, robots need to be able to communicate their expertise effectively. This research draws on literature in psychology and linguistics to examine cues in speech that not only convey information but also demonstrate the expertise of the speaker. We have assembled these cues into an overall model of expert speech and implemented it on several robot systems to examine its effectiveness in communication with human participants in different contexts. Our results have generally shown that participants are strongly influenced by a robot's use of expert speech cues. We are currently investigating how to adapt these linguistic cues to different languages and cultures.
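
To make the idea concrete, the toy snippet below shows one way an informational utterance could be wrapped in expertise cues before being spoken; the cue categories and template phrasings are hypothetical placeholders, not the cues from the published model.

```python
# Hypothetical templates for layering linguistic cues of expertise onto a plain
# informational utterance; the cue names and phrasings are illustrative only.
EXPERTISE_CUE_TEMPLATES = {
    "prior_experience": "I've helped many visitors with this before. {fact}",
    "organization":     "There are two things worth knowing here. First, {fact}",
    "plain":            "{fact}",
}

def add_expertise_cue(fact: str, cue: str = "prior_experience") -> str:
    """Wrap a plain fact in a (hypothetical) expert-speech template."""
    template = EXPERTISE_CUE_TEMPLATES.get(cue, "{fact}")
    return template.format(fact=fact)

if __name__ == "__main__":
    fact = "The Impressionist gallery is on the second floor."
    print(add_expertise_cue(fact, "prior_experience"))
    # I've helped many visitors with this before. The Impressionist gallery
    # is on the second floor.
```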

Publications

Mutlu, B., Andrist, S., and Sauppé, A. (In Press). Enabling Human-Robot Dialogue. In J. Markowitz (Ed.), Robots that Talk and Listen. De Gruyter.

Andrist, S., Spannan, E., and Mutlu, B. (2013). Rhetorical Robots: Making Robots More Effective Speakers Using Linguistic Cues of Expertise. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI '13). IEEE Press. Piscataway, NJ, USA. 341-348. (pdf)

 

Previous research experience includes assisting on a joint project between Dr. Victoria Interrante and researchers at Medtronic on interactive heart visualizations for use during heart surgery, as well as work on the RoboCup Rescue Agent Simulation competition with the MinERS group at the University of Minnesota.