The primary goal of inverse rendering is to recover 3D information from a set of 2D observations, usually images or videos. Observing a 3D scene from different viewpoints provides rich information about the underlying geometry, materials, and physical properties of the objects. Access to this information allows many...
- November 24, 2023 (v1) Publication. Uploaded on: March 13, 2024
- September 27, 2023 (v1) Conference paper
Neural Radiance Fields, or NeRFs, have drastically improved novel view synthesis and 3D reconstruction for rendering. NeRFs achieve impressive results on object-centric reconstructions, but the quality of novel view synthesis with free-viewpoint navigation in complex environments (rooms, houses, etc.) is often problematic. While algorithmic...
Uploaded on: October 11, 2023
- June 2021 (v1) Journal article
There has recently been great interest in neural rendering methods. Some approaches use 3D geometry reconstructed with Multi-View Stereo (MVS) but cannot recover from the errors of this process, while others directly learn a volumetric neural representation, but suffer from expensive training and inference. We introduce a general approach that...
Uploaded on: December 4, 2022
- July 2023 (v1) Journal article
Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than...
Uploaded on: May 7, 2023
- July 26, 2023 (v1) Journal article
Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than...
Uploaded on: December 25, 2023
- March 18, 2024 (v1) Conference paper
Uploaded on: October 10, 2024
- December 2022 (v1) Journal article
View-dependent effects such as reflections pose a substantial challenge for image-based and neural rendering algorithms. Curved reflectors are particularly hard, as they lead to highly non-linear reflection flows as the camera moves. We introduce a new point-based representation to compute Neural Point Catacaustics, allowing...
Uploaded on: December 3, 2022
- July 28, 2024 (v1) Conference paper
In the wake of many new ML-inspired approaches for reconstructing and representing high-quality 3D content, recent hybrid and explicitly learned representations exhibit promising performance and quality characteristics. However, their scaling to higher dimensions is challenging, e.g. when accounting for dynamic content with respect to...
Uploaded on: October 8, 2024
- May 2023 (v1) Journal article
Neural Radiance Fields (NeRFs) have revolutionized novel view synthesis for captured scenes, with recent methods allowing interactive free-viewpoint navigation and fast training for scene reconstruction. However, the implicit representations used by these methods, often including neural networks and complex encodings, make them difficult to...
Uploaded on: March 25, 2023