When artificial agents interact and cooperate with other agents, either human or artificial, they need to recognize others' actions and infer their hidden intentions from the sole observation of their surface-level movements. Indeed, action and intention understanding in humans is believed to facilitate a number of social interactions and is supported by a complex neural substrate (i.e., the mirror neuron system). Implementation of such mechanisms in artificial agents would pave the way to the development of a vast range of advanced cognitive abilities, such as social interaction, adaptation, and learning by imitation, just to name a few.

We present a first step towards a fully-fledged intention recognition system by enabling an artificial agent to internally represent action patterns, and to subsequently use such representations to recognize - and possibly to predict and anticipate - behaviors performed by others. We investigate a biologically-inspired approach by adopting the formalism of Associative Self-Organizing Maps (A-SOMs), an extension of the well-known Self-Organizing Maps. The A-SOM learns to associate its activities with different inputs over time, where inputs are high-dimensional and noisy observations of others' actions. The A-SOM maps actions to sequences of activations in a dimensionally reduced topological space, where each centre of activation provides a prototypical and iconic representation of the action fragment. We present preliminary experiments on an action recognition task using a publicly available database of thirteen commonly encountered actions, with promising results.
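To make the mapping idea concrete, the following is a minimal sketch of the underlying Self-Organizing Map mechanism: noisy high-dimensional observations are mapped to best-matching units on a small 2-D grid, so an action unfolds as a sequence of grid cells. All sizes, parameters, and the toy "action prototypes" are illustrative assumptions, not the paper's actual configuration, and the associative layer that distinguishes the full A-SOM is omitted for brevity.

```python
import math
import random

random.seed(0)

GRID = 5  # 5x5 map; illustrative size, not taken from the paper
DIM = 8   # dimensionality of a noisy "pose" observation (assumed)

# Codebook vectors, initialized at random.
weights = [[[random.random() for _ in range(DIM)]
            for _ in range(GRID)] for _ in range(GRID)]

def bmu(x):
    """Best-matching unit: grid cell whose codebook vector is closest to x."""
    best, best_d = (0, 0), float("inf")
    for i in range(GRID):
        for j in range(GRID):
            d = sum((w - xi) ** 2 for w, xi in zip(weights[i][j], x))
            if d < best_d:
                best, best_d = (i, j), d
    return best

def train_step(x, lr=0.3, sigma=1.5):
    """Pull the winner and its grid neighbours toward the input."""
    bi, bj = bmu(x)
    for i in range(GRID):
        for j in range(GRID):
            g = math.exp(-((i - bi) ** 2 + (j - bj) ** 2) / (2 * sigma ** 2))
            for k in range(DIM):
                weights[i][j][k] += lr * g * (x[k] - weights[i][j][k])

# Two toy "action fragments": noisy samples around distinct prototype frames.
proto_a = [0.2] * DIM
proto_b = [0.8] * DIM

def noisy(p):
    return [v + random.gauss(0, 0.05) for v in p]

for _ in range(200):
    train_step(noisy(proto_a))
    train_step(noisy(proto_b))

# After training, each observed frame maps to a grid cell, so an action
# becomes a sequence of cells that a recognizer can compare against
# stored sequences.
print(bmu(noisy(proto_a)), bmu(noisy(proto_b)))
```

Recognition then reduces to comparing the emitted cell sequences against previously learned ones, which is the dimensionality-reduction benefit the abstract alludes to; the full A-SOM additionally learns associative weights so that one map's activity can be predicted from another's.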
|Number of pages||5|
|Publication status||Published - 2013|
All Science Journal Classification (ASJC) codes
- Computer Networks and Communications