Sound plays a major role in human perception.
Along with vision, it provides essential information for understanding our surroundings.
Despite advances in neural implicit representations, learning acoustics that align with visual scenes remains a challenge.
We propose NeRAF, a method that jointly learns acoustic and radiance fields.
NeRAF synthesizes both novel views and spatialized room impulse responses (RIR) at new positions by conditioning the acoustic field on geometric and appearance priors of the 3D scene provided by the radiance field.
The generated RIRs can be used to auralize any audio signal. Each modality can be rendered independently and at spatially distinct positions, offering greater versatility.
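Concretely, auralization amounts to convolving a dry (anechoic) recording with the predicted RIR for each output channel. The sketch below illustrates this with SciPy; the file names, sample rate, and two-channel RIR layout are assumptions for illustration, not NeRAF's actual interface.

```python
# Minimal sketch: auralize a dry signal by convolving it with a spatialized RIR.
# File names and the two-channel layout are hypothetical; this is not NeRAF's code.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, dry = wavfile.read("dry_speech.wav")    # mono source signal, shape (T,)
fs_rir, rir = wavfile.read("predicted_rir.wav")  # spatialized RIR, shape (T_rir, 2)
assert fs == fs_rir, "source and RIR must share the same sample rate"

dry = dry.astype(np.float32)
rir = rir.astype(np.float32)

# Convolve the mono signal with each RIR channel to obtain a spatialized render.
wet = np.stack(
    [fftconvolve(dry, rir[:, ch], mode="full") for ch in range(rir.shape[1])],
    axis=-1,
)
wet /= np.max(np.abs(wet)) + 1e-8           # normalize to avoid clipping
wavfile.write("auralized.wav", fs, wet.astype(np.float32))
```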
We demonstrate that NeRAF generates high-quality audio on the SoundSpaces and RAF datasets, achieving significant performance improvements over prior methods while being more data-efficient.
Additionally, through cross-modal learning, NeRAF improves novel view synthesis of complex scenes when trained with sparse data. NeRAF is designed as a Nerfstudio module, providing convenient access to realistic audio-visual generation.