My research generally involves designing, building, and evaluating socially interactive technologies. A chronological list of my publications can be found on the publications page or on my Mendeley profile.


Designing Socially Contingent Gaze Behaviors for Embodied Agents

In my dissertation research, I explored how embodied agents (both virtual agents and humanlike robots) can achieve positive social and communicative outcomes through the use of gaze mechanisms. To this end, I developed computational control models of gaze behavior that are contingent on a number of social variables, including the characteristics and behaviors of the human user and the goals of the interaction. My work explored: (1) how agents can produce gaze shifts that target specific high-level interaction outcomes, (2) how agents can effectively use gaze aversions in conversation, (3) how agents can adapt their gaze behaviors to the personality of their users in rehabilitation settings, and (4) how agents can coordinate their gaze with the user's gaze while collaborating on a physical task.
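As a purely illustrative sketch (assumed for this page, not drawn from the publications below), a socially contingent gaze controller of this kind might map social variables to gaze targets roughly as follows; all names and heuristics here are hypothetical:

```python
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionState:
    """Social variables a gaze controller might condition on (hypothetical)."""
    agent_is_speaking: bool     # speakers tend to avert gaze more than listeners
    user_extraversion: float    # 0.0 (introverted) .. 1.0 (extraverted)
    task_object: Optional[str]  # shared referent in a physical task, if any

def choose_gaze_target(state: InteractionState) -> str:
    """Pick a gaze target from the current social context.

    Rough heuristics only: attend to a shared task object when one is
    active, avert gaze probabilistically while speaking (more often for
    introverted users), and otherwise hold mutual gaze with the user.
    """
    if state.task_object is not None:
        return state.task_object                    # joint attention on the task
    aversion_prob = 0.4 if state.agent_is_speaking else 0.1
    aversion_prob *= 1.2 - state.user_extraversion  # personality matching
    if random.random() < aversion_prob:
        return "aversion"                           # brief look-away
    return "user"                                   # mutual gaze
```

The point of the sketch is only the structure: a single decision function that conditions on who is speaking, the user's traits, and the state of a shared task.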

Video Gallery

Selected Publications

Andrist, S., Collier, W., Gleicher, M., Mutlu, B., and Shaffer, D. (2015). Look Together: Analyzing Gaze Coordination with Epistemic Network Analysis. Frontiers in Psychology. 6:1016. 1-15. (paper)

Pejsa, T., Andrist, S., Gleicher, M., and Mutlu, B. (2015). Gaze and Attention Management for Embodied Conversational Agents. ACM Transactions on Interactive and Intelligent Systems (TiiS). 5(1), Article 3. 34 pages. (pdf)

Ruhland, K., Peters, C. E., Andrist, S., Badler, J. B., Badler, N. I., Gleicher, M., Mutlu, B., and McDonnell, R. (2015). A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception. Computer Graphics Forum. (publication site)

Andrist, S., Mutlu, B., and Tapus, A. (2015). Look Like Me: Matching Robot Personality via Gaze to Increase Motivation. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM. New York, NY, USA. 3603-3612. (pdf) [Best of CHI Honorable Mention Award]

Andrist, S., Tan, X. Z., Gleicher, M., and Mutlu, B. (2014). Conversational Gaze Aversion for Humanlike Robots. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction (HRI '14). ACM. New York, NY, USA. 25-32. (pdf) [Best Paper Award Nominee]

Andrist, S., Pejsa, T., Mutlu, B., and Gleicher, M. (2012). Designing Effective Gaze Mechanisms for Virtual Agents. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). ACM. New York, NY, USA. 705-714. (pdf)


Rhetorical Robots: Exploring Linguistic Cues of Expertise and Persuasiveness Across Cultures

Robots hold great promise as expert informational assistants. However, if an informational robot is not perceived to be an expert, people may not trust the information it provides. To raise trust in and compliance with that information, robots need to communicate their expertise effectively. This research draws on literature in psychology and linguistics to examine cues in speech that not only convey information, but also demonstrate the expertise of the speaker. I assembled these linguistic cues into a model of expert speech that enables robots to communicate more effectively with people in different contexts and across languages and cultures. Our studies revealed that users are strongly influenced by a robot's use of expert speech cues in both English and Arabic.
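To make the idea concrete, here is a minimal sketch of what "assembling linguistic cues into a model of expert speech" could look like in code. The cue categories and phrasings below are illustrative stand-ins, not the actual cue set from the published model:

```python
# Hypothetical cue inventory: categories and phrasings are invented
# examples of expertise-signaling prefaces, for illustration only.
EXPERTISE_CUES = {
    "prior_experience": "Having helped many people with this before, ",
    "organization": "First of all, ",
}

def add_expert_cues(fact: str, cues: list) -> str:
    """Prefix a plain informational utterance with expert speech cues."""
    prefix = "".join(EXPERTISE_CUES[c] for c in cues)
    if not prefix:
        return fact
    # Lowercase the original sentence opener so the result reads naturally.
    return prefix + fact[0].lower() + fact[1:]
```

For example, `add_expert_cues("The museum opens at nine.", ["organization"])` yields a sentence that carries the same information while also framing the speaker as organized and knowledgeable.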

Selected Publications

Andrist, S., Ziadee, M., Boukaram, H., Mutlu, B., and Sakr, M. (2015). Effects of Culture on the Credibility of Robot Speech: A Comparison between English and Arabic. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI '15). ACM. New York, NY, USA. 157-164. (pdf)

Andrist, S., Spannan, E., and Mutlu, B. (2013). Rhetorical Robots: Making Robots More Effective Speakers Using Linguistic Cues of Expertise. In Proceedings of the 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI '13). IEEE Press. Piscataway, NJ, USA. 341-348. (pdf) (publication site)


Developing Engaging Behaviors for Virtual Characters Interacting with Groups of Children

In the fall of 2012 I was a lab associate intern at Disney Research Pittsburgh, where I conducted research on multiparty turn-taking with groups of children interacting with an embodied conversational agent. Using Unity and Maya, I first implemented a game in which an on-screen virtual agent (partially autonomous, partially wizard-controlled) played with groups of children. I then developed verbal and nonverbal behaviors the agent could employ to encourage better turn-taking and reduce overlapping speech among the children, while keeping the game fun and spontaneous. Finally, I conducted a pilot study with children recruited from the Pittsburgh area to test the effectiveness of these new character behaviors.
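One simple way to picture the turn-taking problem is as a fairness-based allocator; this toy sketch is my own illustration, not the model from the papers below:

```python
class TurnManager:
    """Toy fairness-based turn allocator (illustrative only).

    One plausible policy a character could use to manage turns: invite
    the child who has held the floor the least so far, rather than
    whoever speaks loudest or first.
    """
    def __init__(self, children):
        self.talk_time = {name: 0.0 for name in children}

    def record_speech(self, name: str, seconds: float) -> None:
        """Accumulate how long each child has spoken."""
        self.talk_time[name] += seconds

    def next_speaker(self) -> str:
        """Invite the child with the least accumulated floor time."""
        return min(self.talk_time, key=self.talk_time.get)
```

In practice the agent's invitations were realized through verbal and nonverbal behaviors rather than an explicit scheduler, but the fairness intuition is the same.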

Selected Publications

Andrist, S., Leite, I., and Lehman, J. (2013). Fun and Fair: Influencing Turn-taking in a Multi-party Game with a Virtual Agent. In Proceedings of the 12th International Conference on Interaction Design and Children (IDC '13). ACM. New York, NY, USA. 352-355. (pdf)

Leite, I., Hajishirzi, H., Andrist, S., and Lehman, J. (2013). Managing Chaos: Models of Turn-taking in Character-multichild Interactions. In Proceedings of the 15th International Conference on Multimodal Interaction (ICMI '13). ACM. New York, NY, USA. 43-50. (pdf)


Previous research experience includes assisting on a joint project between Dr. Victoria Interrante and researchers at Medtronic on interactive heart visualizations for use during heart surgery, as well as work on the RoboCup Rescue Agent Simulation competition with the MinERS group at the University of Minnesota.