
  Research Activity 6: Intentionality and Initiative

'...to perform their tasks safely and efficiently, robots must show the same degree of precision in their skills as humans do. ...'


Partners involved

Objectives

Results

Final review presentation

Videos

Deliverables

Related Publications


Partners involved

LAAS (RA leader), UH, UniKarl, IPA, EPFL

Objectives

A cognitive robot companion must be able to evolve and learn in an open environment and to interact with humans. This requires capabilities in terms of decision-making, attribution of intentionality and expression of robot intentions.

In RA6, a conceptual architecture has been studied that provides a framework integrating all these capabilities. This conceptual architecture has been partially implemented in the Key Experiments (RA7), where several decisional issues linked to human-robot interaction have been illustrated. The contribution of RA6 is the construction of several high-level decisional control components of the architecture. Studies on intentionality attribution have also been conducted within this RA and have served as inspiration for the design of the developed capabilities.

Results from Research Activity 6 have been integrated mainly in Key Experiment 2 ('The Curious Robot'), in close relation with the other project Research Activities.

Results

WP6.1 A generic architecture for a cognitive robot

The issue of architecture has been studied in RA6, but it is also related to the integration of the functions (RA7). The challenge of designing "the" generic architecture for a cognitive robot is daunting, and we do not claim to have provided a final answer. From these ambitious integrated demonstrators, and from our coordinated efforts to build them, we have nevertheless learned some fundamental concepts about how to design the architecture of a cognitive robot. This was achieved through a step-by-step procedure in which the capabilities required by the Cogniron Functions were combined with architecture concepts gained from earlier implementations and experiments. In the last period, the focus was on further refining the ideas and concepts elaborated in the previous periods, with an emphasis on making them concrete so that complete instances could effectively be run and used, and lessons drawn from them.

Two main architectural implementations have been achieved:

  • KE1 - Memory-focused Instantiation of the Cognitive Architecture
  • KE2 - A task-oriented architecture for an interactive robot

In addition, several software tools, in particular the Go environment, have been developed in the framework of KE3.

WP6.2 Decision-making for Interactive Human-Robot task achievement

Our efforts toward the development and integration of a scheme in which the robot reasons not only about its own capabilities in a given context but also about a model of the human have led to two main results:

  • One contribution is a task planner, called "Human-Aware Task Planning", that has been designed to produce and incrementally update plans for an assistive robot. This planner has been fully implemented and integrated in the KE2 setup.
  • Another focus is the detection and classification of human-robot interaction states, for which we investigate the use of POMDPs to deal with uncertainty in observation and in human-robot interaction.

Human-Aware Task Planning

The key features of the "Human-Aware Task Planner" (HATP) are:

  • the use of a temporal planning framework with the explicit management of two time-lines (one for the human and one for the robot),
  • a hierarchical task structure allowing incremental context-based refinement, fully compatible with the BDI approach adopted at the level of the robot supervisor,
  • plan elaboration and selection algorithms that search for plans with minimum cost that satisfy a set of so-called "social rules".

The social rules have been designed to allow a flexible specification of social conventions in a "declarative" way (e.g. undesirable states) or in a "procedural" way (e.g. undesirable sequences or overly intricate plans).
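To make the plan-selection idea concrete, the following minimal Python sketch shows how candidate plans over two time-lines can be scored by cost plus penalties for violated social rules, with the cheapest admissible plan selected. It is not HATP itself: the plan representation, rule names, costs and tasks are all invented for illustration.

from dataclasses import dataclass
from typing import Callable, List, Tuple

# A "plan" is reduced here to a flat list of (agent, action, duration) steps
# spread over two time-lines (human and robot) -- a toy stand-in for HATP's
# hierarchical plan structure. All names below are hypothetical.
Step = Tuple[str, str, float]          # (agent, action, duration in seconds)
Plan = List[Step]

@dataclass
class SocialRule:
    """A social rule either forbids a plan outright or adds a cost penalty."""
    name: str
    violated: Callable[[Plan], bool]
    penalty: float = float("inf")      # inf = hard constraint

def plan_cost(plan: Plan, rules: List[SocialRule]) -> float:
    """Base cost (total duration) plus penalties for violated social rules."""
    cost = sum(duration for _, _, duration in plan)
    for rule in rules:
        if rule.violated(plan):
            cost += rule.penalty
    return cost

def select_plan(candidates: List[Plan], rules: List[SocialRule]) -> Plan:
    """Return the candidate plan with minimum (penalised) cost."""
    return min(candidates, key=lambda plan: plan_cost(plan, rules))

# A "declarative" rule (an undesirable state) and a "procedural" rule
# (an undesirable sequence), both invented for the example.
rules = [
    SocialRule("human_does_everything",
               lambda p: all(agent == "human" for agent, _, _ in p)),
    SocialRule("robot_waits_too_long",
               lambda p: sum(d for a, act, d in p
                             if a == "robot" and act == "wait") > 30.0,
               penalty=50.0),
]

candidates = [
    [("robot", "fetch_cup", 20.0), ("robot", "hand_over", 10.0)],
    [("human", "fetch_cup", 15.0), ("human", "pour_drink", 10.0)],
]
print(select_plan(candidates, rules))   # -> the robot-centred plan

In HATP the plans are hierarchical and the two time-lines are managed explicitly by the temporal planning framework; the sketch only mirrors the cost-plus-social-rules selection principle.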

Detection, classification of Human-Robot interaction states

A new approach for the detection and classification of robot task states during interaction with humans has been developed in a joint LAAS-UniKarl effort. The approach uses a novel service-robot reasoning system which utilises partially observable Markov decision processes (POMDPs) to deal with uncertainty in observation and in human behaviour. A modular Bayesian forward filter detects possible task states probabilistically and can thus cope with sensor limitations and non-deterministic human behaviour. This filter, embedded into a hierarchical perceptive architecture, preserves information about sensory uncertainty while including model-based, predictive elements, and transforms the perceptions into more abstract task-state representations. Task states are represented symbolically as POMDP states, while the environment and human behaviour are represented by statistical (POMDP) models which the robot uses when making a decision.

The POMDP belief state is assembled from self-localisation, speech recognition and human activity recognition, each contributing information about measurement uncertainty. The complete system architecture has been demonstrated, together with experiments on human-robot interaction in realistic settings on a physical robot serving cups fully autonomously.
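The core of such a filter is the standard discrete POMDP belief update (a Bayes forward filter). The sketch below is not the LAAS-UniKarl system; the task states, transition model and observation model are invented placeholders, and only the generic prediction-correction arithmetic is shown.

import numpy as np

# Hypothetical task states; the real system fuses self-localisation, speech
# recognition and human activity recognition into the observations.
states = ["human_attentive", "human_distracted", "task_done"]
observations = ["gaze_on_robot", "gaze_away", "cup_taken"]

# T[a][s, s']: probability of moving from state s to s' under action a.
T = {"serve_cup": np.array([[0.7, 0.2, 0.1],
                            [0.3, 0.6, 0.1],
                            [0.0, 0.0, 1.0]])}

# O[a][s', o]: probability of observing o after action a lands in state s'.
O = {"serve_cup": np.array([[0.8, 0.1, 0.1],
                            [0.2, 0.7, 0.1],
                            [0.1, 0.1, 0.8]])}

def belief_update(belief, action, obs_index):
    """One Bayes-filter step: predict with the transition model,
    correct with the observation likelihood, then normalise."""
    predicted = T[action].T @ belief                  # prediction
    corrected = O[action][:, obs_index] * predicted   # correction
    return corrected / corrected.sum()

# Start uncertain between the two interaction states, then observe "gaze_away".
belief = np.array([0.5, 0.5, 0.0])
belief = belief_update(belief, "serve_cup", observations.index("gaze_away"))
print(dict(zip(states, belief.round(3))))

A policy computed over such POMDP models then maps the maintained belief to robot actions; the belief update above is the part that copes with sensor limitations and non-deterministic human behaviour.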

WP6.3 - Intentionality Attribution

Attribution of Intentionality in HRI Proxemics

In our previous work on attribution of intentionality using video methodologies, we had established that participants tended to rate robots with humanoid appearance as more humanlike, as well as more extraverted, agreeable and emotionally stable. Participants also tended to state a preference for interacting with robots of more humanoid appearance. Based on these results, as well as general research in the field of human proxemics (Burgoon & Walther, 1990; Gillespie & Leffler, 1983), we addressed the question of whether or not these attributions would affect the proxemic aspects of the interaction in a live trial.

These experiments were conducted jointly with the research activities in RA3. They consisted of the robot approaching the participants according to three different interaction types as well as from different directions. On the basis of the previous results mentioned above, and of human proxemics research, we hypothesised that if participants attributed more humanlike traits to a humanoid robot, this would also entail expectancies regarding robot behaviour, leading participants to expect more socially appropriate behaviour from the humanoid robot; in this particular experiment, this would manifest itself as the robot maintaining a greater social distance.

Our results from these studies (Koay et al., 2007; Syrdal, Koay et al., 2007) supported this hypothesis. An overall effect was found across the experimental conditions: participants preferred the humanoid robot to maintain a greater distance from them than the mechanoid robot.

These results provide experimental evidence of a direct link between the questionnaire-based attributions of personality found previously and behavioural preferences within a live trial: attributing humanlike traits translates into behavioural expectations, in terms of proxemics, similar to those we would have for other humans. This effect was linear and did not interact significantly with individual differences.

Attribution of Intentionality and Perception of Privacy

We also investigated, in an exploratory study, the role of intentionality with respect to how participants viewed a robot companion recording and storing information about its users. This investigation was based on issues raised in the EURON Roboethics Roadmap (Veruggio, 2006) as well as on more general discussions within the field of HCI (Mayer-Schoenberger, 1997). The main focus of this exploratory trial was the extent to which participants attributed agency to the robot with regard to divulging personal information to third parties. The trial exposed participants to an interaction with a PeopleBot robot and an experimenter, in which the robot divulged personal information about the experimenter during the course of a conversation between the experimenter and the participant.

The results from this exploratory trial suggested that participants found the issue of a personal robot storing and divulging personal information about its users problematic, but also saw the need for retaining such information. Participants tended to see this issue as best resolved by limiting the robot's agency to divulge the information: reducing robot autonomy and tying the use of such information directly to tasks explicitly requested by its users. While this particular aspect of intentionality attribution is still very much an open field, the results from this study (Syrdal, Walters et al., 2007) suggest that participants' attributions of robot intentionality have a clear impact, not only on the particulars of given interactions, but also on how participants perceive the impact of a robot companion on their wider everyday experience beyond these interactions.

Deliverables

2004 - D6.1.1 Specification of an architecture for a cognitive robot
2004 - D6.2.1 Report on Paradigms for Decisional Interaction
2004 - D6.3.1 Results from evaluation of user studies on intentionality and attribution
2005 - RA6 Joint Deliverable Intentionality and Initiative
2006 - RA6 Joint Deliverable Intentionality and Initiative
2007 - Joint RA6 deliverable: Updated deliverable on Intentionality and Initiative for a Cognitive Robot

Final review presentation

RA6 presentation (by Rachid Alami, LAAS)
associated video:
Albert.mov

Videos

HATP and SHARY execution of the "serveup" task

In this film, two levels of plan and robot action representations are shown. (1) On the bottom left, one can see a plan as produced by HATP: a hierarchical task structure with precedence links at each level and decomposition links from one level to the next lower level. The leaves correspond to elementary tasks (for HATP) that may be further refined by SHARY when executed. The currently executed task is shown in green. (2) The top of the figure shows the current state of execution maintained by SHARY. SHARY traverses and updates the plan tree, but also further refines the tasks depending on the actual context. Tasks produced by HATP are represented by diamonds, while tasks refined on-line are represented by ellipses. Several types of links are shown in different colors: grey arcs correspond to task decomposition, orange arcs to causal/precedence links. Finally, a color code illustrates the state of a task: green means "under execution", red means "impossible or stopped" and blue means "achieved".
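The plan tree described above can be pictured with a small data-structure sketch. The field and task names below are hypothetical; they only mirror the elements mentioned in the video description (decomposition links, precedence links, HATP-produced vs. on-line refined tasks, and a per-task execution state).

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class TaskState(Enum):
    UNDER_EXECUTION = "green"
    IMPOSSIBLE_OR_STOPPED = "red"
    ACHIEVED = "blue"

@dataclass
class TaskNode:
    name: str
    produced_by_hatp: bool                                       # diamond vs. ellipse
    state: Optional[TaskState] = None
    children: List["TaskNode"] = field(default_factory=list)     # decomposition links (grey arcs)
    successors: List["TaskNode"] = field(default_factory=list)   # precedence links (orange arcs)

# SHARY traverses such a tree, updating task states and refining leaves on-line.
serve = TaskNode("serveup", produced_by_hatp=True, state=TaskState.UNDER_EXECUTION)
fetch = TaskNode("fetch_cup", produced_by_hatp=True)
hand_over = TaskNode("hand_over", produced_by_hatp=False)        # refined on-line by SHARY
serve.children = [fetch, hand_over]
fetch.successors = [hand_over]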

SHARY execution of a "give object" task with a suspension

This film illustrates the internal data structures manipulated by SHARY when the robot hands an object to a person. We have chosen here the case where the person is disturbed by a phone call and consequently turns his head away from the robot. (The color code used in the previous video also applies here.)

Related publications

Only some of the RA-related publications are listed below; please see the Publications page for more.

  • A. Clodic, Rachid Alami, Vincent Montreuil, Shuyin Li, Britta Wrede, Agnes Swadzba, "A study of interaction between dialog and decision for human-robot collaborative task achievement", RO-MAN 2007, Jeju Island, Korea
  • A. Clodic, Maxime Ransan, Rachid Alami, Vincent Montreuil "A management of mutual belief for human-robot interaction", In proceedings of the 2007 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2007), October, 2007.
  • Jannik Fritsch and Sebastian Wrede. An Integration Framework for Developing Interactive Robots, volume 30 of Springer Tracts in Advanced Robotics. Springer, Berlin, 2007.
  • Marc Hanheide, Sebastian Wrede, Christian Lang, and Gerhard Sagerer. Who am i talking with? a face memory for social robots. In Proc. Int. Conf. on Robotics and Automation, Pasadena, CA, USA, 2008.
  • Koay, K. L.;Sisbot, E. A.;Syrdal, D. S.;Walters, M. L.;Dautenhahn, K.; & Alami, R. 2007. "Exploratory Study of a Robot Approaching a Person in the Context of Handing Over an Object". AAAI 2007 Spring Symposia - Technical Report SS-07-07, 26-28 March 2007: 18-24.
  • Koay, K. L.;Syrdal, D. S.;Walters, M. L.; & Dautenhahn, K. 2007. "Living with Robots: Investigating the Habituation Effect in Participants' Preferences during a Longitudinal Human-Robot Interaction Study". IEEE International Symposium on Robot and Human Interactive Communication (Ro-man 2007), Jeju Island, Korea: 564-569.
  • V. Montreuil, Aurélie Clodic, Rachid Alami, "Planning Human Centered Robot Activities", In proceedings of the 2007 IEEE International Conference on Systems, Man, and Cybernetics (SMC 2007), October, 2007.
  • Christopher Parlitz, Winfried Baum, Ulrich Reiser, Martin Hägele. Intuitive Human-Machine Interaction and the Implementation on a Household Robot Companion, 12th International Conference on Human-Computer Interaction (HCII 2007), Beijing, China, 2007
  • S. R. Schmidt-Rohr, S. Knoop, M. Lösch, and R. Dillmann. "Reasoning for a multi-modal service robot considering uncertainty in human-robot interaction", In Proceedings of the 3rd ACM/IEEE international conference on Human robot interaction, 2008.
  • S. R. Schmidt-Rohr, R. Jäkel, M. Lösch, and R. Dillmann. "Compiling POMDP models for a multimodal service robot from background knowledge". In EUROS Conference, 2008.
  • Frederic Siepmann. Refactoring der Systemarchitektur eines mobilen Roboters für die multi-modale Mensch-Roboter-Interaktion [Refactoring of the system architecture of a mobile robot for multi-modal human-robot interaction]. Diploma thesis (in German), Bielefeld University, 2008.
  • E. A. Sisbot, Aurélie Clodic, Rachid Alami, Maxime Ransan, "Supervision and Motion Planning for a Mobile Manipulator Interacting with Humans", In Proceedings of the 3rd ACM/IEEE international conference on Human robot interaction, 2008.
  • E. A. Sisbot, Luis F. Marin and Rachid Alami, "Spatial Reasoning for Human Robot Interaction", 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2007), San Diego, USA
  • Thorsten Spexard, Frederic Siepmann, and Gerhard Sagerer. Memory-based Software Integration for Development in Autonomous Robotics. Proceedings 10th International Conference on Intelligent Autonomous Systems, 2008.
  • T.P. Spexard, M. Hanheide, and G. Sagerer. Human-oriented interaction with an anthropomorphic robot. IEEE Transactions on Robotics, 23:852-862, 2007.
  • Syrdal, D. S.;Dautenhahn, K.;Woods, S.;Walters, M.; & Koay, K. L. 2007. "Looking Good? Appearance Preferences and Robot Personality Inferences at Zero Acquaintance". Multidisciplinary Collaboration for Socially Assistive Robotics: Papers from the AAAI Spring Symposium - Technical Report SS-07-07: 86-92.
  • Syrdal, D. S.;Koay, K.-L.;Walters, M. L.; & Dautenhahn, K. 2007. "A personalised robot companion? The role of individual differences on spatial preferences in HRI scenarios". IEEE International Symposium on Robot and Human Interactive Communication(Ro-man), Jeju Island, Korea.
  • Syrdal, D. S.;Walters, M. L.;Otero, N. R.;Koay, K. L.; & Dautenhahn, K. 2007. "He knows when you are sleeping - Privacy and the Personal Robot". Technical Report from the AAAI-07 Workshop W06 on Human Implications of Human-Robot Interaction.
  • Sven Wachsmuth, Sebastian Wrede, and Marc Hanheide. Coordinating interactive vision behaviors for cognitive assistance. Computer Vision and Image Understanding, 108(1-2):135-149, October 2007.
  • Walters, M.;Syrdal, D. S.;Dautenhahn, K.;Boekhorst, R. T.;Koay, K. L.; & Woods, S. 2008. "Avoiding the Uncanny Valley: Robot Appearance, Personality and Consistency of Behavior in an Attention-Seeking Home Scenario for a Robot Companion". Autonomous Robots 24(2): 159-178.
  • Walters, M. L.;Dautenhahn, K.;Boekhorst, R. t.;Koay, K. L.; & Woods, S. N. 2007. "Exploring the Design Space of Robot Appearance and Behavior in an Attention-Seeking 'Living Room' Scenario for a Robot Companion". IEEE Symposium on Artificial Life (Honolulu, Hawaii, USA, 2007): 341-347.
  • Sebastian Wrede, Marc Hanheide, Sven Wachsmuth, and Gerhard Sagerer. Integration and coordination in a cognitive vision system. In Proc. of International Conference on Computer Vision Systems, St. Johns University, Manhattan, New York City, USA, 2006. IEEE. IEEE ICVS'06 Best Paper Award.
  • Woods, S.;Dautenhahn, K.;Kaouri, C.;te Boekhorst, R.;Koay, K. L.; & Walters, M. 2007. "Are Robots Like People? - The Role of Subject and Robot Personality Traits in Robot Interaction Trials". Interaction Studies. (2): 281-305.

 

An Integrated Project funded by the European Commission's Sixth Framework Programme