We propose a framework for the representation of visual knowledge in a robotic agent, with special attention to the understanding of dynamic scenes. In our approach, understanding involves the generation of a high-level, declarative description of the perceived world. Developing such a description requires both bottom-up, data-driven processes that associate symbolic knowledge representation structures with the data coming out of a vision system, and top-down processes in which high-level, symbolic information is in turn employed to drive and further refine the interpretation of a scene. On the one hand, the computer vision community has approached this problem in terms of 2D/3D shape reconstruction and the estimation of motion parameters. On the other, the AI community has developed rich and expressive systems for the description of processes, events, actions and, in general, of dynamic situations. Nevertheless, these two approaches evolved separately and concentrated on different kinds of problems. We propose an architecture that integrates these two traditions in a principled way. Our assumption is that a link is missing between the two classes of representations mentioned above. To fill this gap, we adopt the notion of conceptual space (CS; Gaerdenfors 2000), a representation in which information is characterized in terms of a metric space. A CS acts as an intermediate representation between subconceptual (i.e., not yet conceptually categorized) information and symbolically organized knowledge. The concepts of process and action have immediate characterizations in terms of structures in the conceptual space. The architecture is illustrated with reference to an experimental setup based on a vision system operating in a scenario with moving and interacting people.
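As a minimal illustration of the intermediate role a conceptual space can play, the sketch below (not from the paper; all names, dimensions, and prototype values are hypothetical) treats a CS as a metric space in which concepts are regions around prototype points, and maps a subconceptual feature vector onto a symbolic label via nearest-prototype categorization, in the spirit of Gaerdenfors (2000):

```python
import math

def euclidean(p, q):
    """Distance in the conceptual (metric) space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def categorize(percept, prototypes):
    """Bottom-up step: assign a subconceptual feature vector to the
    symbolic label of the nearest concept prototype (a Voronoi-style
    partition of the space)."""
    return min(prototypes, key=lambda label: euclidean(percept, prototypes[label]))

# Hypothetical 2D space of motion features (e.g., radial speed, lateral speed)
# with two illustrative motion concepts for an observed person.
prototypes = {"approach": (1.0, 0.0), "retreat": (-1.0, 0.0)}
print(categorize((0.8, 0.2), prototypes))  # -> approach
```

The resulting symbolic label ("approach") is the kind of token that a high-level, declarative description of the scene could then reason over, while the underlying metric coordinates remain available for top-down refinement.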
Number of pages: 44
Publication status: Published - 2000
All Science Journal Classification (ASJC) codes
- Language and Linguistics
- Linguistics and Language
- Artificial Intelligence