TY - JOUR
T1 - Visually-Grounded Language Model for Human-Robot Interaction
AU - Chella, Antonio
AU - Dindo, Haris
AU - Zambuto, Daniele
PY - 2010
Y1 - 2010
N2 - Visually grounded human-robot interaction is recognized to be an essential ingredient of socially intelligent robots, and the integration of vision and language increasingly attracts the attention of researchers in diverse fields. However, most systems lack the capability to adapt and expand themselves beyond a preprogrammed set of communicative behaviors. Their linguistic capabilities are still far from satisfactory, which makes them unsuitable for real-world applications. In this paper we present a system in which a robotic agent can learn a grounded language model by actively interacting with a human user. The model is grounded in the sense that the meaning of the words is linked to a concrete sensorimotor experience of the agent, and linguistic rules are automatically extracted from the interaction data. The system has been tested on the NAO humanoid robot and has been used to understand and generate appropriate natural-language descriptions of real objects. The system is also capable of conducting a verbal interaction with a human partner in potentially ambiguous situations.
KW - Human-Robot Interaction
KW - Language grounding
KW - Language learning
UR - http://hdl.handle.net/10447/61881
UR - http://www.dline.info/jcl/v1n3.php
M3 - Article
SN - 0976-416X
VL - 1
IS - 3
SP - 105
EP - 115
JO - INTERNATIONAL JOURNAL OF COMPUTATIONAL LINGUISTICS RESEARCH
JF - INTERNATIONAL JOURNAL OF COMPUTATIONAL LINGUISTICS RESEARCH
ER -