TY - CONF
T1 - Autonomous acquisition of natural language
AU - Chella, Antonio
AU - Dindo, Haris
AU - Ognibene, Dimitri
AU - Steunebrink, Bas R.
AU - Nivel, Eric
AU - Rodriguez, Manuel
AU - Helgason, Helgi P.
AU - Sanz, Ricardo
AU - Hernandez, Carlos
AU - Pezzulo, Giovanni
AU - Thórisson, Kristinn R.
AU - Schmidhuber, Jürgen
AU - Jonsson, Gudberg K.
PY - 2014
Y1 - 2014
N2 - An important part of human intelligence is the ability to use language. Humans learn how to use language in a society of language users, which is probably the most effective way to learn a language from the ground up. Principles that might allow artificial agents to learn language this way are not known at present. Here we present a framework which begins to address this challenge. Our auto-catalytic, endogenous, reflective architecture (AERA) supports the creation of agents that can learn natural language by observation. We present results from two experiments where our S1 agent learns human communication by observing two humans interacting in a real-time mock television interview, using gesture and situated language. Results show that S1 can learn complex multimodal language and multimodal communicative acts, using a vocabulary of 100 words with numerous sentence formats, by observing unscripted interaction between the humans, with no grammar provided to it a priori and only high-level information about the format of the human interaction, in the form of high-level goals of the interviewer and interviewee and a small ontology. The agent learns the pragmatics, semantics, and syntax of complex sentences spoken by the human subjects on the topic of recycling objects such as aluminum cans, glass bottles, plastic, and wood, as well as the use of manual deictic reference and anaphora.
KW - Autonomy
KW - Communication
KW - Computer Science (all)
KW - Knowledge acquisition
KW - Natural language
UR - http://hdl.handle.net/10447/216564
M3 - Other
SP - 58
EP - 66
ER -