Published October 21, 2013 | Version v1
Conference paper

Combining Multiple Sensors for Event Recognition of Older People

Description

We herein present a hierarchical model-based framework for event recognition using multiple sensors. Event models combine a priori knowledge of the scene (3D geometric and semantic information, such as contextual zones and equipment) with moving objects (e.g., a Person) detected by a monitoring system. The event models follow a generic ontology based on natural language, which allows domain experts to adapt them easily. The framework's novelty lies in combining multiple sensors (heterogeneous and homogeneous) at the decision level, either explicitly or implicitly, by handling their conflicts using a probabilistic approach. Implicit conflict handling works by computing the event reliabilities for each sensor and then combining them using Dempster-Shafer Theory. The multi-sensor system is evaluated using multi-modal recordings of instrumental activities of daily living (e.g., watching TV, writing a check, preparing tea, organizing the weekly intake of prescribed medication) performed by participants of a clinical study on Alzheimer's disease. The evaluation presents the preliminary results of this approach in two cases: the combination of events from heterogeneous sensors (an RGB camera and a wearable inertial sensor), and the combination of conflicting events from video cameras with partially overlapping fields of view (an RGB and an RGB-D camera). The results show that the framework improves the event recognition rate in both cases.
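The Dempster-Shafer combination step mentioned above can be illustrated with a minimal sketch. The paper does not give implementation details, so the sensor names, event labels, and mass values below are purely illustrative assumptions; the code only shows Dempster's rule of combination applied to two reliability-weighted mass functions over a small frame of discernment.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Fuse two mass functions with Dempster's rule of combination.

    Each mass function is a dict mapping a frozenset of hypotheses
    to a belief mass; masses over each dict should sum to 1.
    """
    combined = {}
    conflict = 0.0  # K: total mass assigned to contradictory pairs
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    # Normalize by the non-conflicting mass (1 - K)
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Hypothetical example: a camera and an inertial sensor assign
# reliability-weighted masses to "watching TV" vs. "other activity".
TV, OTHER = frozenset({"tv"}), frozenset({"other"})
THETA = TV | OTHER  # the whole frame: mass left to ignorance

camera   = {TV: 0.7, OTHER: 0.1, THETA: 0.2}
inertial = {TV: 0.6, OTHER: 0.2, THETA: 0.2}

fused = dempster_combine(camera, inertial)
# Agreement between the sensors concentrates mass on TV (0.85),
# more than either sensor assigned on its own.
```

Because both sources favor the same hypothesis, the fused belief in it exceeds either individual mass, which is the mechanism by which decision-level fusion can raise the recognition rate.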

Abstract

MIRRH, held in conjunction with ACM MM 2013.


Additional details

Identifiers

URL
https://inria.hal.science/hal-00907033
URN
urn:oai:HAL:hal-00907033v1

Origin repository

UNICA