Point-Based Neural Rendering with Per-View Optimization
- GRAPHics and DEsign with hEterogeneous COntent (GRAPHDECO), Inria Sophia Antipolis - Méditerranée (CRISAM), Institut National de Recherche en Informatique et en Automatique (Inria)
- Université Côte d'Azur (UCA)
- Adobe Research
- This research was funded by the ERC Advanced grant FUNGRAPH N° 788065 (http://fungraph.inria.fr). The authors are grateful to the OPAL infrastructure from Université Côte d'Azur for providing resources and support.
- European Project: 788065, H2020 ERC Pillar, FUNGRAPH (2018)
Description
There has recently been great interest in neural rendering methods. Some approaches use 3D geometry reconstructed with Multi-View Stereo (MVS) but cannot recover from the errors of this process, while others directly learn a volumetric neural representation but suffer from expensive training and inference. We introduce a general approach that is initialized with MVS but allows further optimization of scene properties in the space of input views, including depth and reprojected features, resulting in improved novel view synthesis. A key element of our approach is a differentiable point-based splatting pipeline, based on our bi-directional Elliptical Weighted Average solution. To further improve the quality and efficiency of our point-based method, we introduce a probabilistic depth test and efficient camera selection. We use these elements together in our neural renderer, allowing us to achieve a good compromise between quality and speed. Our pipeline can be applied to multi-view harmonization and stylization in addition to novel view synthesis.
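To give a sense of the splatting step the abstract refers to, the sketch below shows the core of a (non-differentiable, CPU) Elliptical Weighted Average point splat: each point's color is accumulated into nearby pixels with a 2D Gaussian weight defined by the point's projected elliptical footprint. This is an illustrative simplification, not the authors' implementation; the function name, the per-point covariance input, and the cutoff parameter are assumptions for the example.

```python
import numpy as np

def ewa_splat(points_xy, colors, covs, h, w, cutoff=3.0):
    """Minimal EWA splatting sketch (illustrative, not the paper's pipeline).

    points_xy: (N, 2) projected pixel positions
    colors:    (N, 3) per-point colors
    covs:      (N, 2, 2) per-point 2D footprint covariances
    """
    accum = np.zeros((h, w, 3))
    weight = np.zeros((h, w))
    for p, c, cov in zip(points_xy, colors, covs):
        inv = np.linalg.inv(cov)
        # Bounding box of the elliptical footprint, 'cutoff' std deviations wide
        r = cutoff * np.sqrt(np.diag(cov))
        x0, x1 = int(max(0, p[0] - r[0])), int(min(w, p[0] + r[0] + 1))
        y0, y1 = int(max(0, p[1] - r[1])), int(min(h, p[1] + r[1] + 1))
        for y in range(y0, y1):
            for x in range(x0, x1):
                d = np.array([x, y], dtype=float) - p
                wgt = np.exp(-0.5 * d @ inv @ d)  # Gaussian reconstruction kernel
                accum[y, x] += wgt * c
                weight[y, x] += wgt
    # Normalize by accumulated weight (EWA average)
    return accum / np.maximum(weight, 1e-8)[..., None]
```

In the paper's actual renderer this operation is differentiable and bi-directional, so gradients can flow back to per-view depths and features; the loop above only conveys the forward weighting scheme.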
Additional details
- URL
- https://hal.inria.fr/hal-03268140
- URN
- urn:oai:HAL:hal-03268140v3
- Origin repository
- UNICA