Cross-view action recognition refers to the task of recognizing actions observed from viewpoints that are unfamiliar to the system. To address the complexity of the problem, state-of-the-art methods often rely on large-scale datasets in which the variability of viewpoints is appropriately represented. However, this comes at a significant price,...
2022 (v1) Publication
2020 (v1) Publication
Viewpoint is an essential aspect of how an action is visually perceived, with the motion appearing substantially different for some viewpoint pairs. Data-driven action recognition algorithms compensate for this by including a variety of viewpoints in their training data, adding to the cost of data acquisition as well as training. We propose a...
2021 (v1) Publication
Apparent motion information of an action may vary dramatically from one view to another, making transfer of knowledge across views a core challenge of action recognition. Recent times have seen the use of large-scale datasets to compensate for this lack of generalization, and in fact most state-of-the-art methods today require large amounts of...
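The cross-view setting described in the entries above is usually made concrete with a leave-one-view-out protocol: the model is trained on recordings from some viewpoints and evaluated on a viewpoint it has never seen. The sketch below illustrates only this split; the sample container and the view labels (including "ego") are hypothetical and do not refer to any specific dataset release.

```python
# Minimal sketch of a leave-one-view-out split for cross-view action recognition:
# the held-out viewpoint never appears in the training set, so test performance
# measures generalization to an unfamiliar view. Sample structure is hypothetical.
from typing import Dict, List, Tuple

def leave_one_view_out(
    samples: List[Dict],   # each sample: {"view": str, "features": ..., "label": str}
    test_view: str,
) -> Tuple[List[Dict], List[Dict]]:
    """Split samples so that the test viewpoint is excluded from training."""
    train = [s for s in samples if s["view"] != test_view]
    test = [s for s in samples if s["view"] == test_view]
    return train, test

# Example: measure generalization to an ego-like viewpoint unseen during training.
# train_set, test_set = leave_one_view_out(all_samples, test_view="ego")
```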
2020 (v1) Publication
MoCA is a bi-modal dataset in which we collect Motion Capture data and video sequences acquired from multiple views, including an ego-like viewpoint, of upper body actions in a cooking scenario. It has been collected with the specific purpose of investigating view-invariant action properties in both biological and artificial systems. Besides...
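As a rough illustration of how a bi-modal, multi-view collection like the one described above can be organised, the sketch below pairs each Motion Capture recording with its synchronised videos from several viewpoints. The directory layout, file names, and view labels are assumptions made for the example and do not reflect the actual MoCA release format.

```python
# Sketch of indexing a bi-modal (MoCap + multi-view video) action dataset.
# Assumed (hypothetical) layout:
#   root/<action>/mocap.csv      motion-capture marker trajectories
#   root/<action>/<view>.mp4     one video per viewpoint
from dataclasses import dataclass
from pathlib import Path
from typing import Dict, List, Sequence

@dataclass
class ActionSample:
    action: str                   # e.g. "mixing", "cutting"
    mocap_file: Path              # motion-capture recording
    video_files: Dict[str, Path]  # synchronised videos keyed by view name

def index_dataset(root: Path, views: Sequence[str] = ("view0", "view1", "ego")) -> List[ActionSample]:
    """Pair each MoCap recording with the multi-view videos of the same action."""
    samples = []
    for action_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        mocap = action_dir / "mocap.csv"
        videos = {v: action_dir / f"{v}.mp4" for v in views if (action_dir / f"{v}.mp4").exists()}
        if mocap.exists() and videos:
            samples.append(ActionSample(action_dir.name, mocap, videos))
    return samples
```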
2019 (v1) Publication
In this work we discuss the action classification performance obtained with a baseline assessment of the MoCA dataset: a multimodal, synchronised dataset including Motion Capture data and multi-view video sequences of upper body actions in a cooking scenario. To this end, we set up a classification pipeline to manipulate the two data types...
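A minimal baseline in the spirit of the pipeline described above could extract simple hand-crafted features from each modality and feed them to a standard classifier. The feature choices and the linear SVM below are assumptions for illustration, not the configuration reported in the paper.

```python
# Illustrative two-modality baseline: summarise MoCap trajectories and video
# clips with simple motion statistics, then train a standard classifier on
# the pre-computed features. All design choices here are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def mocap_features(trajectory: np.ndarray) -> np.ndarray:
    """Summarise a (T, n_markers*3) marker trajectory with velocity statistics."""
    velocity = np.diff(trajectory, axis=0)
    return np.concatenate([velocity.mean(axis=0), velocity.std(axis=0)])

def video_features(frames: np.ndarray) -> np.ndarray:
    """Summarise a (T, H, W) grey-level clip with frame-difference statistics."""
    motion = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return np.array([motion.mean(), motion.std(), motion.max()])

def fit_baseline(X: np.ndarray, y: np.ndarray):
    """Train a scaled linear SVM on features from either modality."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    return clf.fit(X, y)
```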