Published 2018 | Version v1
Publication

Learning Multi-Modal Self-Awareness Models for Autonomous Vehicles from Human Driving

Description

This paper presents a novel approach for learning self-awareness models for autonomous vehicles. The proposed technique is based on the availability of synchronized multi-sensor dynamic data related to different maneuvering tasks performed by a human operator. It is shown that different machine learning approaches can be used to first learn single-modality models using coupled Dynamic Bayesian Networks (DBNs); such models are then correlated at the event level to discover contextual multi-modal concepts. In the presented case, visual perception and localization are used as modalities. Cross-correlations among modalities over time are discovered from data and described as probabilistic links connecting shared and private multi-modal DBNs at the discrete (event) level. Results are presented from experiments performed with an autonomous vehicle, highlighting the potential of the proposed approach to enable anomaly detection and autonomous decision making based on the learned self-awareness models.
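To make the event-level idea concrete, the following is a minimal sketch, not the authors' implementation: each modality is quantized into discrete events, a Markov transition model is learned per modality (the discrete top level of a DBN), a cross-modal coupling matrix stands in for the probabilistic links between modalities, and low joint probability under the learned model flags anomalies. All function names, the two synthetic signals, and the thresholding rule are illustrative assumptions.

```python
import numpy as np

def quantize(signal, n_events):
    """Map a 1-D continuous signal to discrete event labels via quantile bins."""
    edges = np.quantile(signal, np.linspace(0, 1, n_events + 1)[1:-1])
    return np.digitize(signal, edges)  # labels in {0, ..., n_events - 1}

def transition_matrix(events, n_events, alpha=1.0):
    """Markov transition probabilities with Laplace smoothing (per-modality DBN level)."""
    counts = np.full((n_events, n_events), alpha)
    for s, t in zip(events[:-1], events[1:]):
        counts[s, t] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def coupling_matrix(events_a, events_b, n_a, n_b, alpha=1.0):
    """P(event_b | event_a): a stand-in for the cross-modal probabilistic link."""
    counts = np.full((n_a, n_b), alpha)
    for a, b in zip(events_a, events_b):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def anomaly_scores(events_a, events_b, trans_a, trans_b, link_ab):
    """Negative log-probability per time step; high values indicate anomalies."""
    scores = []
    for t in range(1, len(events_a)):
        p = (trans_a[events_a[t - 1], events_a[t]]
             * trans_b[events_b[t - 1], events_b[t]]
             * link_ab[events_a[t], events_b[t]])
        scores.append(-np.log(p))
    return np.array(scores)

# Train on synthetic "normal driving" data, then score the same run.
rng = np.random.default_rng(0)
speed = np.cumsum(rng.normal(0, 0.1, 2000))      # stand-in for localization
optical_flow = speed + rng.normal(0, 0.2, 2000)  # stand-in for visual perception

N = 8
ev_loc, ev_vis = quantize(speed, N), quantize(optical_flow, N)
T_loc = transition_matrix(ev_loc, N)
T_vis = transition_matrix(ev_vis, N)
L_ab = coupling_matrix(ev_loc, ev_vis, N, N)

scores = anomaly_scores(ev_loc, ev_vis, T_loc, T_vis, L_ab)
threshold = scores.mean() + 3 * scores.std()
print(f"flagged {np.sum(scores > threshold)} anomalous steps")
```

A step whose modalities disagree with the links learned from human driving (e.g., high optical flow while localization reports standstill) receives a low joint probability and a high score, which is the anomaly-detection behavior the abstract describes.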
