Designing Socially Contingent Gaze Behaviors for Embodied Agents
In my dissertation research, I explore how embodied agents, both virtual agents and humanlike robots, can achieve positive social and communicative outcomes through the use of gaze mechanisms. To this end, I am developing computational control models of gaze behavior that are contingent on social variables such as the characteristics and behaviors of the human user and the goals of the interaction. My work explores: (1) how agents can produce gaze shifts that target specific high-level interaction outcomes, (2) how agents can effectively use gaze aversions in conversation, (3) how agents can adapt their gaze behaviors to the personality of their users in rehabilitation settings, and (4) how agents can coordinate their gaze with the user's gaze while collaborating on a physical task.
Popular Science (US), 2014: "Robots seem more thoughtful if they glance away while they talk"
New Scientist (UK), 2014: "The robot tricks to bridge the uncanny valley"
AAAS Science Update (US), 2014: "Robot gaze aversion"
Badger Herald (US), 2014: "UW student researches ways to make robots more human"
Science Nation (US), 2012: "Robots that can teach humans"
Andrist, S., Huang, C.-M., and Mutlu, B. (Under Review). Perceptual Common Ground in Communication with Embodied Agents. Submitted to Topics in Cognitive Science.
Andrist, S., Collier, W., Gleicher, M., Mutlu, B., and Shaffer, D. (2015). Look Together: Analyzing Gaze Coordination with Epistemic Network Analysis. Frontiers in Psychology. 6:1016. 1-15. (paper)
Huang, C.-M., Andrist, S., Sauppé, A., and Mutlu, B. (2015). Using Gaze Patterns to Predict Task Intent in Collaboration. Frontiers in Psychology. 6:1049. 1-12. (paper)
Pejsa, T., Andrist, S., Gleicher, M., and Mutlu, B. (2015). Gaze and Attention Management for Embodied Conversational Agents. ACM Transactions on Interactive and Intelligent Systems (TiiS). 5(1), Article 3. 34 pages. (pdf)
Ruhland, K., Peters, C. E., Andrist, S., Badler, J. B., Badler, N. I., Gleicher, M., Mutlu, B., and McDonnell, R. (2015). A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception. Computer Graphics Forum. (publication site)
Andrist, S., Mutlu, B., and Tapus, A. (2015). Look Like Me: Matching Robot Personality via Gaze to Increase Motivation. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM. New York, NY, USA. 3603-3612. (pdf) [Best of CHI Honorable Mention Award]
Andrist, S., Tan, X. Z., Gleicher, M., and Mutlu, B. (2014). Conversational Gaze Aversion for Humanlike Robots. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI '14). ACM. New York, NY, USA. 25-32. (pdf) [Best Paper Award Nominee]
Ruhland, K., Andrist, S., Badler, J. B., Peters, C. E., Badler, N. I., Gleicher, M., Mutlu, B., and McDonnell, R. (2014). "Look Me in the Eyes": A Survey of Eye and Gaze Animation for Virtual Agents and Artificial Systems. In Eurographics 2014 - State of the Art Reports (EG '14 STARs). (publication site)
Andrist, S. (2013). Controllable Models of Gaze Behavior for Virtual Agents and Humanlike Robots. In Proceedings of the 15th ACM International Conference on Multimodal Interaction (ICMI '13), Doctoral Consortium. ACM. New York, NY, USA. 333-336. (pdf)
Andrist, S., Mutlu, B., and Gleicher, M. (2013). Conversational Gaze Aversion for Virtual Agents. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Proceedings of the 13th International Conference on Intelligent Virtual Agents (IVA '13). Springer Berlin Heidelberg. 249-262. (pdf) (publication site) [Highly Commended Paper Award]
Andrist, S., Pejsa, T., Mutlu, B., and Gleicher, M. (2012). Designing Effective Gaze Mechanisms for Virtual Agents. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). ACM. New York, NY, USA. 705-714. (pdf)
Andrist, S., Pejsa, T., Mutlu, B., and Gleicher, M. (2012). A Head-Eye Coordination Model for Animating Gaze Shifts of Virtual Characters. In Proceedings of the 14th International Conference on Multimodal Interaction (ICMI '12), 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction (Gaze-In '12). ACM. New York, NY, USA. Article 4, 6 pages. (pdf)
Rhetorical Robots: Exploring Linguistic Cues of Expertise and Persuasiveness Across Cultures
Robots hold great promise as expert informational assistants. However, if an informational robot is not perceived to be an expert, people may not trust the information it provides. To increase trust in and compliance with the information that robots provide, robots need to communicate their expertise effectively. This research draws on literature in psychology and linguistics to examine cues in speech that not only convey information but also demonstrate the expertise of the speaker. I assembled these linguistic cues into a model of expert speech that enables robots to communicate more effectively with people in different contexts and across languages and cultures. Our studies revealed that users are strongly influenced by a robot's use of expert speech cues in both English and Arabic.
Andrist, S., Ziadee, M., Boukaram, H., Mutlu, B., and Sakr, M. (2015). Effects of Culture on the Credibility of Robot Speech: A Comparison between English and Arabic. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '15). ACM. New York, NY, USA. 157-164. (pdf)
Mutlu, B., Andrist, S., and Sauppé, A. (2014). Enabling Human-Robot Dialogue. In J. Markowitz (Ed.) Robots that Talk and Listen. De Gruyter.
Andrist, S., Spannan, E., and Mutlu, B. (2013). Rhetorical Robots: Making Robots More Effective Speakers Using Linguistic Cues of Expertise. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI '13). IEEE Press. Piscataway, NJ, USA. 341-348. (pdf) (publication site)
Developing Engaging Behaviors for Virtual Characters Interacting with Groups of Children
In the fall of 2012, I was a lab associate intern at Disney Research Pittsburgh, where I conducted research on multiparty turn-taking between an embodied conversational agent and groups of children. Using Unity and Maya, I first implemented a game in which an on-screen virtual agent (partially autonomous, partially wizard-controlled) played with groups of children. I then developed verbal and nonverbal behaviors the agent could use to encourage better turn-taking and reduce overlapping speech among the children, while keeping the game fun and spontaneous. Finally, I conducted a pilot study with children recruited from the Pittsburgh area to test the effectiveness of these new character behaviors.
Andrist, S., Leite, I., and Lehman, J. (2013). Fun and Fair: Influencing Turn-taking in a Multi-party Game with a Virtual Agent. In Proceedings of the 12th International Conference on Interaction Design and Children (IDC '13). ACM. New York, NY, USA. 352-355. (pdf)
Leite, I., Hajishirzi, H., Andrist, S., and Lehman, J. (2013). Managing Chaos: Models of Turn-taking in Character-multichild Interactions. In Proceedings of the 15th International Conference on Multimodal Interaction (ICMI '13). ACM, New York, NY, USA. 43-50. (pdf)
Leite, I., Hajishirzi, H., Andrist, S., and Lehman, J. (2013). Take or Wait? Learning Turn-Taking from Multiparty Data. In AAAI Conference on Artificial Intelligence (Late-Breaking Developments). (pdf)
My previous research experience includes assisting on a joint project between Dr. Victoria Interrante and researchers at Medtronic on interactive heart visualizations for use during heart surgery, as well as work on the RoboCup Rescue Agent Simulation competition with the MinERS group at the University of Minnesota.