What Will You Do Next? A Cognitive Model for Understanding Others’ Intentions Based on Shared Representations

Research output: Chapter

3 Citations (Scopus)

Abstract

Goal-directed action selection is the problem of choosing what to do next in order to progress towards goal achievement. This problem is computationally more complex in joint action settings, where two or more agents coordinate their actions in space and time to bring about a common goal: actions performed by one agent influence the action possibilities of the other agents and, ultimately, goal achievement. While humans engage in complex joint actions seemingly effortlessly, a number of questions remain to be solved before artificial agents can achieve similar performance: How do agents represent and understand actions performed by others? How does this understanding influence the choice of an agent’s own future actions? How is the interaction process biased by prior information about the task? What is the role of more abstract cues such as others’ beliefs or intentions?

In the last few years, researchers in computational neuroscience have begun investigating how control-theoretic models of individual motor control can be extended to explain various complex social phenomena, including action and intention understanding, imitation, and joint action. The two cornerstones of control-theoretic models of motor control are the goal-directed nature of action and a widespread use of internal modeling. Indeed, when the control-theoretic view is applied to the realm of social interactions, it is assumed that the inverse and forward internal models used in individual action planning and control are re-enacted in simulation in order to understand others’ actions and to infer their intentions. This motor simulation view of social cognition has been adopted to explain a number of advanced mindreading abilities such as action, intention, and belief recognition, often in contrast with more classical cognitive theories, derived from rationality principles and conceptual theories of others’ minds, that emphasize the dichotomy between action and perception.

Here we embrace the idea that implementing mindreading abilities is a necessary step towards a more natural collaboration between humans and robots in joint tasks. To collaborate efficiently, agents need to continuously estimate their teammates’ proximal goals and distal intentions in order to choose what to do next.

We present a probabilistic hierarchical architecture for joint action which takes inspiration from the idea of motor simulation above. The architecture models the causal relations between observables (e.g., observed movements) and their hidden causes (e.g., action goals, intentions, and beliefs) at two deeply intertwined levels: at the lowest level, the same circuitry used to execute my own actions is re-enacted in simulation to infer and predict the (proximal) actions performed by my interaction partner, while the highest level encodes more abstract task representations which govern each agent’s observable behavior. Here we assume that the decision of what to do next can be taken by knowing (1) what the current task is and (2) what my teammate is currently doing. While these could be inferred via a costly (and inaccurate) process of inverting the generative model above, given the observed data, we show how our organization facilitates such an inferential process by allowing agents to share a subset of hidden variables, alleviating the need for complex mechanisms such as explicit task allocation or sophisticated communication strategies.
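To make the two-level organization concrete, the following minimal Python sketch (assuming discrete variables and a task and action that stay fixed over a short observation window) contrasts the two inference routes mentioned above: inverting the full generative model to recover both the task and the partner's action, versus exploiting a shared task variable so that only the partner's proximal action must be inferred. All names and probability tables (TASKS, ACTIONS, p_action_given_task, p_obs_given_action) are hypothetical placeholders for illustration and are not taken from the chapter.

import numpy as np

# Hypothetical state spaces (illustrative only; not from the chapter).
TASKS = ["set_table", "clear_table"]              # abstract task level (shared)
ACTIONS = ["reach_plate", "reach_glass"]          # partner's proximal actions
OBS = ["move_left", "move_center", "move_right"]  # coarse movement features

# P(action | task): how each task biases the partner's choice of action.
p_action_given_task = np.array([
    [0.8, 0.2],   # task = set_table
    [0.3, 0.7],   # task = clear_table
])

# P(observation | action): likelihood of a movement feature under each
# action, as predicted by re-enacting one's own motor circuitry.
p_obs_given_action = np.array([
    [0.7, 0.2, 0.1],   # action = reach_plate
    [0.1, 0.3, 0.6],   # action = reach_glass
])

def infer_joint(obs_seq, task_prior):
    """Invert the full generative model: posterior P(task, action | obs)."""
    post = task_prior[:, None] * p_action_given_task      # P(task, action)
    for o in obs_seq:
        post = post * p_obs_given_action[None, :, o]      # times P(o | action)
        post /= post.sum()
    return post

def infer_with_shared_task(obs_seq, task):
    """Shared hidden variable: the task is common knowledge, so only the
    partner's proximal action needs to be inferred."""
    post = p_action_given_task[task].copy()               # P(action | task)
    for o in obs_seq:
        post = post * p_obs_given_action[:, o]
        post /= post.sum()
    return post

obs = [0, 0, 1]  # indices into OBS, observed over time
print(infer_joint(obs, np.array([0.5, 0.5])))   # full model inversion
print(infer_with_shared_task(obs, task=0))      # shared-task shortcut

Sharing the task variable collapses the hypothesis space from |tasks| x |actions| to |actions| alone, which is the computational sense in which shared representations can alleviate the need for explicit task allocation or communication.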
Original language: English
Title of host publication: Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments
Pages: 253-266
Number of pages: 14
Publication status: Published - 2013

Publication series

Name: Lecture Notes in Computer Science

Fingerprint

Cognitive Models
Motor Control
Robots
Simulation
Planning
Computational Neuroscience
Communication
Task Allocation
Imitation
Generative Models

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Chella, A., & Dindo, H. (2013). What Will You Do Next? A Cognitive Model for Understanding Others’ Intentions Based on Shared Representations. In Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments (pp. 253-266). (Lecture Notes in Computer Science).

What Will You Do Next? A Cognitive Model for Understanding Others’ Intentions Based on Shared Representations. / Chella, Antonio; Dindo, Haris.

Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments. 2013. pp. 253-266 (Lecture Notes in Computer Science).

Chella, A & Dindo, H 2013, What Will You Do Next? A Cognitive Model for Understanding Others’ Intentions Based on Shared Representations. in Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments. Lecture Notes in Computer Science, pp. 253-266.
Chella A, Dindo H. What Will You Do Next? A Cognitive Model for Understanding Others’ Intentions Based on Shared Representations. In Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments. 2013. pp. 253-266. (Lecture Notes in Computer Science).
Chella, Antonio ; Dindo, Haris. / What Will You Do Next? A Cognitive Model for Understanding Others’ Intentions Based on Shared Representations. Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments. 2013. pp. 253-266 (Lecture Notes in Computer Science).
@inbook{0a0f2b562fdb46ce8ad282cc4138b269,
title = "What Will You Do Next? A Cognitive Model for Understanding Others’ Intentions Based on Shared Representations",
author = "Antonio Chella and Haris Dindo",
year = "2013",
language = "English",
isbn = "978-3-642-39404-1",
series = "Lecture Notes in Computer Science",
pages = "253--266",
booktitle = "Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments",

}

TY - CHAP

T1 - What Will You Do Next? A Cognitive Model for Understanding Others’ Intentions Based on Shared Representations

AU - Chella, Antonio

AU - Dindo, Haris

PY - 2013

Y1 - 2013

UR - http://hdl.handle.net/10447/95437

UR - http://www.scopus.com/record/display.url?eid=2-s2.0-84884864441

M3 - Chapter

SN - 978-3-642-39404-1

T3 - Lecture Notes in Computer Science

SP - 253

EP - 266

BT - Virtual Augmented and Mixed Reality. Designing and Developing Augmented and Virtual Environments

ER -