Lilia Moshkina-Martinson, Ph.D.

I completed the Ph.D. program in Computer Science at the Georgia Institute of Technology in December 2007, with an emphasis on Artificial Intelligence and Robotics. Since then, I have completed a National Academies Postdoctoral Fellowship at the US Naval Research Laboratory in Washington, DC. I specialize in Human-Robot Interaction and Social Robotics. In my spare time, I enjoy traveling, hiking, reading, and ice skating.



Research Projects and Videos

Large-Scale Observational Study on Social Engagement with a Story-Telling Robot

In this paper, we describe a large-scale (over 4,000 participants) observational field study at a public venue, designed to explore how social a robot needs to be for people to engage with it. The study examined a prediction of the Computers Are Social Actors (CASA) framework: the more consistently machines present human-like characteristics, the more likely they are to invoke a social response. Our humanoid robot's behavior varied in the number of social cues it displayed, from no active social cues, to increasing levels of social cues during story-telling, to human-like game-playing interaction. We found strong support for several aspects of CASA: a robot that provided even minimal social cues (speech) was more engaging than a robot that did nothing, and the more human-like the robot's story-telling behavior, the more social engagement we observed. Contrary to the prediction, however, the robot's game-playing did not elicit more engagement than its other, less social behaviors.

Video: Octavia the robot telling a joke with full-body motion


 
Associated Publication:
"Social Engagement in Public Places: A Tale of One Robot"
International Conference on Human-Robot Interaction, March 2014
Lilia Moshkina, Susan Trickett, J. Greg Trafton

 

Evaluation of the Effect of Robotic Expressions of Negative Mood and Fear on Request Compliance and Measures of Persuasiveness, Naturalness and Understandability

This paper describes the design and results of a human-robot interaction study aimed at determining the extent to which affective robotic behavior can influence participants' compliance with a humanoid robot's request in a mock-up search-and-rescue setting. The results argue for the inclusion of affect in robotic systems, showing that the robot's nonverbal expressions of negative mood (nervousness) and fear improved participants' compliance with its request to evacuate, causing them to respond earlier and faster.
 
 

Associated Publication:

Lilia Moshkina, "Improving Request Compliance through Robot Affect," Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI'12), Toronto, Canada, 2012.

 

Evaluation of the Effect of Robotic Expressions of Extraversion and Introversion on Task Appropriateness and Task Performance

This project investigates the effect of robotic expressions of Extraversion and Introversion on human subjects in two different types of tasks: a subject-matter presentation by a robotic museum guide, and a problem-solving task requiring concentration from participants, with the robot acting as a proctor. The trait of Extraversion was specified as part of the Trait Module within TAME, a comprehensive framework for time-varying affective robotic behavior whose name stands for personality Traits, Attitudes, Moods, and Emotions. Two variants along the Extraversion scale (low and high) were implemented on Nao (Aldebaran Robotics), a small bipedal humanoid robot. In a one-factor between-subjects HRI experiment with 30 participants, the Extraverted robot was judged more welcoming and fun in the museum-guide task, where an engaging, gregarious demeanor was expected, whereas the Introverted robot was rated more appropriate and unobtrusive for the less social problem-solving task, and participants also rated the problem itself as less demanding.
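As a purely illustrative aside (this is not the TAME implementation, and not the Nao API; all parameter names and ranges below are assumptions), the following Python sketch shows one way a Trait Module could map an Extraversion value in [0, 1] onto hypothetical expressive parameters such as gesture amplitude, speech rate, and speech volume:

    # Hypothetical sketch only: not the TAME Trait Module and not the Nao SDK.
    from dataclasses import dataclass

    @dataclass
    class ExpressiveParams:
        gesture_amplitude: float  # 0.0 (contained) to 1.0 (expansive)
        speech_rate_wpm: int      # words per minute
        speech_volume: float      # 0.0 (quiet) to 1.0 (loud)

    def extraversion_to_params(extraversion: float) -> ExpressiveParams:
        """Linearly map an Extraversion trait value in [0, 1] to expressive
        parameters; 0.0 approximates an Introverted variant, 1.0 an Extraverted one."""
        e = max(0.0, min(1.0, extraversion))  # clamp to the valid trait range
        return ExpressiveParams(
            gesture_amplitude=0.2 + 0.8 * e,    # broader gestures when more extraverted
            speech_rate_wpm=int(110 + 50 * e),  # faster speech when more extraverted
            speech_volume=0.4 + 0.5 * e,        # louder speech when more extraverted
        )

    print(extraversion_to_params(0.1))  # Introverted-style settings
    print(extraversion_to_params(0.9))  # Extraverted-style settings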

 
 
 
Experiment described in:

Moshkina, L. "An Integrative Framework of Time-Varying Affective Robotic Behavior." Ph.D Dissertation, College of Computing, Georgia Institute of Technology, 2011
 

Public Opinion Survey on the Use of Robots Capable of Lethal Force in Warfare

Public opinion regarding the acceptability and ethical implications of using potentially lethal robots in warfare was examined in an online survey of over 500 respondents.