Published June 17, 2020
| Version v1
Conference paper
Joint Attention for Automated Video Editing
Contributors
Others:
- Biologically plausible Integrative mOdels of the Visual system: towards synergIstic Solutions for visually-Impaired people and artificial visiON (BIOVISION) ; Inria Sophia Antipolis - Méditerranée (CRISAM) ; Institut National de Recherche en Informatique et en Automatique (Inria)
- Université Côte d'Azur (UCA)
- Unity Technologies [San Francisco]
- University of California [Santa Cruz] (UC Santa Cruz) ; University of California (UC)
- Computer Science (North Carolina State University) ; North Carolina State University [Raleigh] (NC State) ; University of North Carolina System (UNC)
Description
Joint attention refers to the shared focal points of attention for occupants in a space. In this work, we introduce a computational definition of joint attention for the automated editing of meetings recorded in multi-camera environments from the AMI corpus. Using extracted head pose and individual headset amplitude as features, we developed three editing methods: (1) a naive audio-based method that selects the camera using only the headset input, (2) a rule-based method that selects cameras at a fixed pacing using pose data, and (3) an editing algorithm that uses joint attention learned with an LSTM (Long Short-Term Memory) network from both pose and audio data, trained on expert edits. The methods are evaluated qualitatively against the human edit, and quantitatively in a user study with 22 participants. Results indicate that LSTM-trained joint attention produces edits comparable to the expert edit, offering a wider range of camera views than the audio-based method while being more generalizable than the rule-based method.
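The description above outlines method (3) only at a high level. As a minimal illustrative sketch (not the authors' code), the snippet below shows one way an LSTM could map per-frame head-pose and headset-amplitude features to a camera choice; the number of participants, feature dimensions, and number of cameras are assumptions chosen for the example, not values from the paper.

```python
# Illustrative sketch of an LSTM-based camera selector.
# All sizes below are assumptions, not values from the paper.
import torch
import torch.nn as nn

NUM_PARTICIPANTS = 4                           # assumed meeting size (AMI-style)
POSE_DIM = 3                                   # assumed head pose: yaw, pitch, roll
FEAT_DIM = NUM_PARTICIPANTS * (POSE_DIM + 1)   # pose + headset amplitude per person
NUM_CAMERAS = 5                                # assumed number of selectable views

class CameraSelector(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(FEAT_DIM, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, NUM_CAMERAS)

    def forward(self, features):
        # features: (batch, time, FEAT_DIM)
        out, _ = self.lstm(features)
        return self.head(out)                  # per-frame camera logits

model = CameraSelector()
x = torch.randn(1, 100, FEAT_DIM)              # 100 frames of pose/audio features
logits = model(x)
camera_per_frame = logits.argmax(dim=-1)       # (1, 100) selected camera indices
```

In such a setup, the per-frame labels used for training would come from the expert edits, so the network learns which view a human editor would cut to given the current attention and audio cues.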
Additional details
Identifiers
- URL
- https://hal.inria.fr/hal-02960390
- URN
- urn:oai:HAL:hal-02960390v1
Origin repository
- UNICA