We describe a generative Bayesian model of action understanding in which inverse-forward internal model pairs serve as hypotheses about plausible action goals, explored in parallel by an approximate inference mechanism based on sequential Monte Carlo methods. Re-enacting internal model pairs can be viewed as a form of motor simulation that supports both perceptual prediction and action understanding at the goal level; however, this procedure is generally considered computationally inefficient. We present a model that dynamically reallocates computational resources to the more accurate internal models, based on both the available prior information and the prediction error of the inverse-forward models, and thereby achieves successful action recognition. We report experimental results that test the robustness and efficiency of the model in real-world scenarios.
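The resource-reallocation scheme described above can be illustrated with a minimal sequential Monte Carlo sketch. This is not the authors' implementation: the goal set, the toy linear forward models, the noise level, and the resampling threshold are all assumptions chosen for illustration. Each particle carries one goal hypothesis; particles whose forward model yields low prediction error gain weight, and resampling concentrates the particle budget (i.e., computational resources) on the more accurate internal models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed candidate action goals (hypothetical; stand-ins for internal model pairs)
GOALS = np.array([0.0, 5.0, 10.0])

def forward_model(goal, state):
    # Toy forward model: the effector moves 10% of the way toward the goal per step
    return state + 0.1 * (goal - state)

def simulate_observation(true_goal, state, noise=0.1):
    # Observed action generated by the (hidden) true goal, plus sensory noise
    return forward_model(true_goal, state) + rng.normal(0.0, noise)

# Sequential Monte Carlo over goal hypotheses
N = 300
particles = rng.integers(0, len(GOALS), size=N)  # uniform prior over goals
weights = np.full(N, 1.0 / N)

state, obs, true_goal = 0.0, 0.0, 10.0
for _ in range(30):
    obs = simulate_observation(true_goal, obs)
    preds = forward_model(GOALS[particles], state)
    err = obs - preds
    # Weight update: Gaussian likelihood, so low prediction error -> high weight
    weights *= np.exp(-0.5 * (err / 0.1) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size drops: this step dynamically
    # reallocates particles (compute) toward the more accurate internal models
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)
    state = obs

counts = np.bincount(particles, minlength=len(GOALS))
recognized_goal = GOALS[np.argmax(counts)]
print(recognized_goal)
```

After a few observations, nearly all particles share the hypothesis matching the true goal, so the recognized goal is read off as the modal particle value; in the full model each hypothesis would instead be a learned inverse-forward model pair rather than a fixed linear predictor.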
|Number of pages||7|
|Publication status||Published - 2011|
All Science Journal Classification (ASJC) codes
- Artificial Intelligence