Visually-Grounded Language Model for Human-Robot Interaction

Antonio Chella, Haris Dindo, Daniele Zambuto

Research output: Article, peer-reviewed

Abstract

Visually grounded human-robot interaction is recognized to be an essential ingredient of socially intelligent robots, and the integration of vision and language increasingly attracts the attention of researchers in diverse fields. However, most systems lack the capability to adapt and expand themselves beyond a preprogrammed set of communicative behaviors. Their linguistic capabilities are still far from satisfactory, which makes them unsuitable for real-world applications. In this paper we present a system in which a robotic agent can learn a grounded language model by actively interacting with a human user. The model is grounded in the sense that the meaning of words is linked to the concrete sensorimotor experience of the agent, and linguistic rules are automatically extracted from the interaction data. The system has been tested on the NAO humanoid robot and has been used to understand and generate appropriate natural-language descriptions of real objects. The system is also capable of conducting a verbal interaction with a human partner in potentially ambiguous situations.
Original language: English
Pages (from-to): 105-115
Number of pages: 16
Journal: INTERNATIONAL JOURNAL OF COMPUTATIONAL LINGUISTICS RESEARCH
Volume: 1:3
Publication status: Published - 2010
