An autonomous robot engaged in long and complex missions should be able to generate, update and process its own plans of action. From this perspective, it is not plausible that the meaning of the representations used by the robot is assigned from outside the system itself. Rather, the meaning of internal symbols must be firmly anchored to the world through the robot's perceptual abilities and overall activities. On these premises, this paper presents an approach to action representation based on a "conceptual" level of representation that acts as an intermediate level between symbols and sensor data. Symbolic representations are interpreted by mapping them onto the conceptual level via a mechanism based on artificial neural networks. Examples of the proposed framework are reported, based on experiments performed on an RWI-B12 autonomous robot. © 2001 Elsevier Science B.V.
| Number of pages | 13 |
| Journal | Robotics and Autonomous Systems |
| Publication status | Published - 2001 |
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- General Mathematics
- Computer Science Applications