Learning probabilistic interaction models
- Creators
- BAYDOUN, MOHAMAD
Description
We live in a multi-modal world, so it comes as no surprise that the human brain is tailored for the integration of multi-sensory input. Inspired by the brain, Artificial Intelligence (AI) systems use multi-sensory data to teach different concepts to computers. Autonomous Agents (AAs) are AI systems that sense and act autonomously in complex dynamic environments. Such agents can build up Self-Awareness (SA) by describing their experiences through multi-sensorial information with appropriate models and by correlating them incrementally with the currently perceived situation, continuously expanding their knowledge.

This thesis proposes methods to learn such awareness models for AAs. These models include self-awareness and situational-awareness components, allowing an agent to perceive and understand both itself (self variables) and its surrounding environment (external variables) at the same time. An agent is considered self-aware when it can dynamically observe and understand itself and its surroundings through different proprioceptive and exteroceptive sensors, which facilitates learning and maintaining a contextual representation by processing the observed multi-sensorial data.

We propose a probabilistic framework for generative and descriptive dynamic models that leads to a computationally efficient SA system. In general, generative models facilitate the prediction of future states, while descriptive models enable the selection of the representation that best fits the current observation. The proposed framework employs Probabilistic Graphical Models (PGMs), such as Dynamic Bayesian Networks (DBNs), which represent a set of variables and their conditional dependencies. This probabilistic representation allows the agent to model interactions between itself, as observed through proprioceptive sensors, and the environment, as observed through exteroceptive sensors.

To develop an awareness system, an agent must not only recognize normal states and perform predictions accordingly, but also detect abnormal states with respect to its previously learned knowledge. There is therefore a need to measure anomalies or irregularities in an observed situation: the agent should be aware that an abnormality (i.e., a non-stationary condition never experienced before) is currently present. Because our representation models multi-sensorial data within a uniform interaction model, the proposed work not only improves predictions of future events but can also potentially support a transfer learning process in which information related to the learned model is moved to and interpreted by another body. A minimal code sketch of this predict-and-score idea follows the description below.
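The sketch below is illustrative only and not taken from the thesis: it assumes the simplest linear-Gaussian instance of a DBN (a Kalman filter) over a joint state that stacks self (proprioceptive) and environment (exteroceptive) variables, and it scores abnormality as the negative log-likelihood of the current observation under the model's prediction. All matrices, dimensions, and function names are placeholder assumptions.

```python
import numpy as np

# Hypothetical linear-Gaussian DBN over a joint (self, environment) state.
# Generative step: predict the next state. Descriptive/anomaly step: score
# how poorly the current multi-sensorial observation fits that prediction.

def predict(mu, P, A, Q):
    """Generative step: predict the next joint (self, environment) state."""
    mu_pred = A @ mu
    P_pred = A @ P @ A.T + Q
    return mu_pred, P_pred

def abnormality(z, mu_pred, P_pred, C, R):
    """Negative log-likelihood of observation z under the prediction."""
    S = C @ P_pred @ C.T + R            # innovation covariance
    innov = z - C @ mu_pred             # prediction error
    maha = innov @ np.linalg.solve(S, innov)
    logdet = np.linalg.slogdet(2 * np.pi * S)[1]
    return 0.5 * (maha + logdet)

def update(z, mu_pred, P_pred, C, R):
    """Correct the prediction with the observed sensor data."""
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    mu = mu_pred + K @ (z - C @ mu_pred)
    P = (np.eye(len(mu)) - K @ C) @ P_pred
    return mu, P

# Toy usage: 2 self variables + 2 environment variables, fully observed.
A = np.eye(4) + 0.1 * np.eye(4, k=1)    # assumed interaction dynamics
C = np.eye(4)                            # assumed observation model
Q, R = 0.01 * np.eye(4), 0.05 * np.eye(4)
mu, P = np.zeros(4), np.eye(4)
for z in 0.1 * np.random.randn(10, 4):   # simulated sensor readings
    mu_pred, P_pred = predict(mu, P, A, Q)
    score = abnormality(z, mu_pred, P_pred, C, R)  # spikes on novel situations
    mu, P = update(z, mu_pred, P_pred, C, R)
```

In this simplified reading, "normal" situations keep the abnormality score low because observations match the learned generative model, while a never-experienced (non-stationary) situation drives the score up, signalling that a new model or an adaptation step is needed.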
Additional details
- URL
- http://hdl.handle.net/11567/997450
- URN
- urn:oai:iris.unige.it:11567/997450
- Origin repository
- UNIGE