Learning High-Level Tasks through Imitation

Antonio Chella, Haris Dindo, Ignazio Infantino

Research output: Contribution to conference › Other

4 Citations (Scopus)

Abstract

This paper presents the cognitive architecture Con-SCIS (conceptual space based cognitive imitation system), which tightly links low-level data processing with knowledge representation in the context of imitation learning. We use the word imitate to refer to the paradigm of program-level imitation: we are interested in the final effects of actions on objects, and not in the particular kinematic or dynamic properties of the motion. The same architecture is used both to analyze and represent the task to be imitated, and to perform the imitation by generalizing to novel and different circumstances. The implemented experimental scenario is a simplified two-dimensional world populated with various objects in which observation/imitation takes place. During the observation phase, the user shows her/his hand while performing arbitrary object-manipulation tasks in front of a single calibrated camera. The task is then segmented into meaningful units, and its properties (objects' color and shape, their absolute position and orientation, relations between objects) are represented in high-level symbolic terms. In the imitation phase, the symbolic information is employed to drive the robot's actions. To validate our approach, we report some results concerned with the problem of teaching a humanoid hand/arm robotic system tasks of assembling different workspace objects.
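The abstract describes segmenting an observed task into units and encoding each unit's object properties and spatial relations symbolically. As a minimal illustrative sketch (not the paper's actual representation; all class and field names here are hypothetical), one could model a segmented task unit like this:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ObjectDescription:
    """Hypothetical symbolic description of a workspace object:
    color, shape, absolute 2-D position, and orientation."""
    color: str
    shape: str
    position: Tuple[float, float]  # absolute position in the 2-D world
    orientation: float             # orientation in radians

@dataclass
class TaskUnit:
    """Hypothetical segmented unit of an observed task: an action on a
    target object, plus the spatial relation that holds afterwards."""
    action: str                    # e.g. "place"
    target: ObjectDescription
    relation: Tuple[str, int, int] # (predicate, object index, object index)

# Toy observation: place a red square on top of a blue circle.
objects = [
    ObjectDescription("blue", "circle", (0.20, 0.30), 0.0),
    ObjectDescription("red", "square", (0.20, 0.35), 0.0),
]
unit = TaskUnit(action="place", target=objects[1], relation=("on", 1, 0))
print(unit.relation[0])  # the symbolic spatial predicate: "on"
```

During imitation, such symbolic units could be replayed against a different initial scene, since they capture effects and relations rather than raw motion trajectories.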
Original language: English
Pages: 3648-3654
Number of pages: 7
Publication status: Published - 2006

All Science Journal Classification (ASJC) codes

  • Control and Systems Engineering
  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

Cite this

Chella, A., Dindo, H., & Infantino, I. (2006). Learning High-Level Tasks through Imitation. 3648-3654.