NeRAF: 3D Scene Infused Neural Radiance and Acoustic Fields

Mines Paris, PSL Research University
Figure: NeRAF concept.

NeRAF synthesizes audio-visual data at novel sensor positions by learning radiance and acoustic fields from a collection of images and audio recordings. It enables audio auralization and spatialization, as well as improved image rendering, all of which are crucial for creating a realistic perception of space. NeRAF leverages cross-modal learning without the need for co-located audio and visual sensors for training. Our method allows for the independent rendering of each modality.

Abstract

Sound plays a major role in human perception. Along with vision, it provides essential information for understanding our surroundings. Despite advances in neural implicit representations, learning acoustics that align with visual scenes remains a challenge. We propose NeRAF, a method that jointly learns acoustic and radiance fields. NeRAF synthesizes both novel views and spatialized room impulse responses (RIR) at new positions by conditioning the acoustic field on 3D scene geometric and appearance priors from the radiance field. The generated RIR can be applied to auralize any audio signal. Each modality can be rendered independently and at spatially distinct positions, offering greater versatility. We demonstrate that NeRAF generates high-quality audio on the SoundSpaces and RAF datasets, achieving significant performance improvements over prior methods while being more data-efficient. Additionally, NeRAF enhances novel view synthesis of complex scenes trained with sparse data through cross-modal learning. NeRAF is designed as a Nerfstudio module, providing convenient access to realistic audio-visual generation.


Model Overview

NeRF maps 3D coordinates and viewing directions to density and color. The grid sampler fills a 3D grid representing the scene by querying the radiance field at voxel-center coordinates from multiple viewing directions. NAcF, the neural acoustic field, learns to map source and microphone poses and directions to the STFT of the room impulse response, conditioned on the extracted scene features. Predicted RIRs can be convolved with any audio signal to obtain auralized, spatialized audio that matches the scene.
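As a rough illustration of this pipeline, the sketch below shows how a scene-feature grid could be filled from a radiance field and handed to the acoustic field. All names, signatures, and the grid resolution are illustrative assumptions, not the released implementation.

```python
# Hypothetical sketch of the grid-sampling and conditioning step described above.
import torch

def build_scene_grid(radiance_field, resolution=32, num_dirs=6):
    # Voxel-center coordinates of a grid covering normalized scene bounds.
    axis = torch.linspace(-1.0, 1.0, resolution)
    centers = torch.stack(
        torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1
    ).reshape(-1, 3)
    # Query the radiance field at each voxel center from several viewing
    # directions and keep density and color as geometry/appearance features.
    feats = []
    for d in torch.randn(num_dirs, 3):
        d = d / d.norm()
        density, color = radiance_field(centers, d.expand_as(centers))  # assumed signature
        feats.append(torch.cat([density, color], dim=-1))
    # Average over viewing directions: one feature vector per voxel.
    return torch.stack(feats, dim=0).mean(dim=0)  # (resolution**3, feat_dim)

# The acoustic field then maps (microphone pose, source pose, direction) to an
# STFT, conditioned on the pooled scene features (names again assumed):
# stft = acoustic_field(mic_pose, src_pose, direction, scene_features)
# rir = inverse_stft(stft)  # the RIR can be convolved with any source audio
```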

Videos

We present examples of audio-visual generation and, in the last video, audio-only generation produced with NeRAF.
The audio is obtained by convolving the predicted RIRs with the source audio. For the best experience, please use headphones.
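For reference, this auralization-by-convolution step can be sketched as follows; the helper name and array shapes are assumptions rather than the project's actual code.

```python
# Minimal auralization sketch: convolve a predicted multi-channel RIR with a
# dry (anechoic) source signal to obtain spatialized audio.
import numpy as np
from scipy.signal import fftconvolve

def auralize(source: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """source: (num_samples,) mono signal; rir: (num_channels, rir_len) predicted RIR."""
    # Convolve the source with each RIR channel to spatialize it.
    channels = [fftconvolve(source, rir[c]) for c in range(rir.shape[0])]
    out = np.stack(channels, axis=0)
    # Normalize to avoid clipping when writing the result to a file.
    return out / (np.max(np.abs(out)) + 1e-8)
```

In practice, one would load a dry source signal, obtain the RIR predicted at the desired pose, and write the output of this function to a stereo (binaural) file.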

BibTeX

@article{NeRAF,
  title={NeRAF: 3D Scene Infused Neural Radiance and Acoustic Fields},
  author={Amandine Brunetto and Sascha Hornauer and Fabien Moutarde},
  year={2024},
  eprint={2405.18213},
  archivePrefix={arXiv},
}