Neural Directional Encoding for Efficient and
Accurate View-Dependent Appearance Modeling

Liwen Wu, Sai Bi, Zexiang Xu, Fujun Luan, Kai Zhang, Iliyan Georgiev, Kalyan Sunkavalli, Ravi Ramamoorthi
UC San Diego, Adobe Research

Abstract

Novel-view synthesis of specular objects like shiny metals or glossy paints remains a significant challenge. Not only the glossy appearance but also global illumination effects, including reflections of other objects in the environment, are critical for faithfully reproducing a scene. In this paper, we present Neural Directional Encoding (NDE), a view-dependent appearance encoding of neural radiance fields (NeRF) for rendering specular objects. NDE transfers the concept of feature-grid-based spatial encoding to the angular domain, significantly improving the ability to model high-frequency angular signals. In contrast to previous methods that use encoding functions with only angular input, we additionally cone-trace spatial features to obtain a spatially varying directional encoding, which addresses the challenging interreflection effects. Extensive experiments on both synthetic and real datasets show that a NeRF model with NDE (1) outperforms the state of the art on view synthesis of specular objects, and (2) works with small networks to allow fast (real-time) inference.
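To make the two ingredients above concrete, here is a minimal PyTorch sketch; it is not the authors' implementation. DirectionalFeatureGrid queries a trainable angular feature grid with the reflected direction (the far-field term), and cone_trace_features gathers spatial features along the reflected ray (the near-field interreflection term). The equirectangular parameterization, all grid resolutions, the feature sizes, and the uniform ray sampling are illustrative assumptions.

    # Minimal sketch of a feature-grid directional encoding (NOT the paper's code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DirectionalFeatureGrid(nn.Module):
        """Trainable 2D feature grid over directions, queried by bilinear lookup.

        Uses a simple equirectangular (theta, phi) parameterization; azimuth
        wrap-around is ignored here for brevity.
        """
        def __init__(self, feat_dim=8, height=64, width=128):
            super().__init__()
            # One trainable feature vector per angular cell.
            self.grid = nn.Parameter(0.01 * torch.randn(1, feat_dim, height, width))

        def forward(self, dirs):  # dirs: (N, 3) unit reflection directions
            theta = torch.acos(dirs[:, 2].clamp(-1.0, 1.0))  # polar angle, [0, pi]
            phi = torch.atan2(dirs[:, 1], dirs[:, 0])        # azimuth, [-pi, pi]
            u = phi / torch.pi                               # -> [-1, 1] (width axis)
            v = 2.0 * theta / torch.pi - 1.0                 # -> [-1, 1] (height axis)
            coords = torch.stack([u, v], dim=-1).view(1, -1, 1, 2)
            feats = F.grid_sample(self.grid, coords, align_corners=True)
            return feats.view(self.grid.shape[1], -1).t()    # (N, feat_dim)

    def cone_trace_features(volume, origins, dirs, n_samples=8, t_max=2.0):
        """Average features from a 3D grid along the reflected ray (near field).

        volume:  (1, C, D, H, W) trainable feature volume over [-1, 1]^3.
        origins: (N, 3) surface points; dirs: (N, 3) unit reflection directions.
        A full implementation would match the grid resolution to the cone
        footprint and weight samples by transmittance; a plain average along
        uniformly spaced samples keeps the idea clear.
        """
        ts = torch.linspace(0.0, t_max, n_samples, device=origins.device)
        pts = origins[:, None, :] + ts[None, :, None] * dirs[:, None, :]  # (N, S, 3)
        coords = pts.view(1, -1, 1, 1, 3)                                 # (x, y, z) order
        feats = F.grid_sample(volume, coords, align_corners=True)         # (1, C, N*S, 1, 1)
        feats = feats.view(volume.shape[1], origins.shape[0], n_samples)
        return feats.mean(dim=-1).t()                                     # (N, C)

    # Usage (shapes only):
    # dir_feat  = DirectionalFeatureGrid()(refl_dirs)              # (N, 8)
    # near_feat = cone_trace_features(vol, surf_pts, refl_dirs)    # (N, C)

In a full model, these two feature vectors would be concatenated with spatial appearance features and decoded by a small MLP into view-dependent color.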

Real-Time Web Rendering

NDE is efficient and supports real-time rendering in the browser. For all results in the video, we use an MLP width of 64.
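For a sense of the network scale involved, the sketch below builds a width-64 decoder head. Only the hidden width of 64 comes from the text above; the depth, input dimensionality, and activations are hypothetical.

    import torch.nn as nn

    # Hypothetical width-64 shading head: only the hidden width of 64 is from
    # the text; depth, input size, and activations are illustrative assumptions.
    def make_shading_mlp(in_dim=32, hidden=64, depth=2):
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
            d = hidden
        layers += [nn.Linear(d, 3), nn.Sigmoid()]  # RGB in [0, 1]
        return nn.Sequential(*layers)

A network this small is what makes per-pixel evaluation cheap enough for real-time inference.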

Comparison with Baseline Methods


[Video: comparison on fine angular details (ENVIDR, Ours, Ref-NeRF)]
[Video: comparison to NeRO on the real dataset (NeRO, Ours)]

Results on Real Scenes from the NeRO Dataset

Here we render the foreground objects without the reflections of the capturer. Note that the flickering on the cardboard is due to the rapidly changing shading caused by the moving capturer.

Applications on Editing

NDE supports removal of objects and their reflections from the scene.

BibTeX


@inproceedings{wu2024neural,
  author = {Liwen Wu and Sai Bi and Zexiang Xu and Fujun Luan and Kai Zhang and Iliyan Georgiev and Kalyan Sunkavalli and Ravi Ramamoorthi},
  title = {Neural Directional Encoding for Efficient and Accurate View-Dependent Appearance Modeling},
  booktitle = {CVPR},
  year = {2024}
}