ReShader: View-Dependent Highlights
for Single Image View-Synthesis

Avinash Paliwal¹, Brandon G. Nguyen¹, Andrii Tsarov², Nima Khademi Kalantari¹
¹Texas A&M University, ²Leia Inc.
SIGGRAPH Asia 2023 (TOG)
[Teaser comparison: 3D Moments vs. Ours. Zoom panels show the input (with annotation), the reshaded image, and our result.]

To properly handle view-dependent effects, we propose to break down the view synthesis process into the two tasks of pixel reshading and relocation. During reshading, we use a neural network to generate a new version of the input image (shown on the left) with the shading computed based on the novel view. As shown in the middle, our reshading network correctly leaves the diffuse areas intact (the dog's head) but moves the highlights on the specular areas (wooden floor). The relocation process takes this reshaded image and generates the novel view image. The red crosses mark the same location on the wooden floor to make it easier to observe the effect of reshading and relocation.
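For readers who prefer pseudocode, the following minimal PyTorch-style sketch shows how the two stages fit together at inference time. The class and function names (ReshadeNet, synthesize_novel_view, relocate_fn) and the way the novel camera motion is encoded are illustrative assumptions, not our released implementation.

```python
# Minimal sketch of the two-stage pipeline described above: reshading, then relocation.
# All names here are hypothetical placeholders, not the authors' actual code.
import torch
import torch.nn as nn

class ReshadeNet(nn.Module):
    """Toy stand-in for the reshading network: predicts a reshaded image from the
    input image, conditioned on the novel-camera motion."""
    def __init__(self):
        super().__init__()
        # 3 image channels + 3 channels encoding the camera translation,
        # broadcast over the image plane (an assumption made for illustration).
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, image, cam_translation):
        b, _, h, w = image.shape
        cond = cam_translation.view(b, 3, 1, 1).expand(b, 3, h, w)
        return self.net(torch.cat([image, cond], dim=1))

def synthesize_novel_view(image, cam_translation, reshader, relocate_fn):
    """Stage 1: reshade the pixels for the novel camera.
    Stage 2: hand the reshaded image to an existing single-image view synthesis
    method (relocate_fn, e.g. a 3D-Moments-style pipeline) to move the pixels."""
    reshaded = reshader(image, cam_translation)
    return relocate_fn(reshaded, cam_translation)
```

Because relocation is delegated to an off-the-shelf view synthesis method, the reshading stage can be swapped in front of different relocation backbones without retraining them.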


Abstract

In recent years, novel view synthesis from a single image has seen significant progress thanks to rapid advancements in 3D scene representation and image inpainting techniques. While current approaches are able to synthesize geometrically consistent novel views, they often do not handle view-dependent effects properly. Specifically, the highlights in their synthesized images usually appear to be glued to the surfaces, making the novel views unrealistic. To address this major problem, we make a key observation that synthesizing novel views requires changing the shading of the pixels based on the novel camera and moving them to appropriate locations. Therefore, we propose to split the view synthesis process into two independent tasks of pixel reshading and relocation. During the reshading process, we take the single image as the input and adjust its shading based on the novel camera. This reshaded image is then used as the input to an existing view synthesis method to relocate the pixels and produce the final novel view image. We propose to use a neural network to perform reshading and generate a large set of synthetic input-reshaded pairs to train our network. We demonstrate that our approach produces plausible novel view images with realistic moving highlights on a variety of real-world scenes.
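Since the reshading network is supervised with synthetic input-reshaded pairs, its training reduces to a standard image-to-image regression loop. The sketch below illustrates one such loop; the dataset interface, batch size, and the plain L1 reconstruction loss are assumptions made for illustration and may differ from the actual training setup.

```python
# Hedged sketch of training a reshading network on synthetic input-reshaded pairs.
# Dataset interface, hyperparameters, and loss are illustrative assumptions only.
import torch
from torch.utils.data import DataLoader

def train_reshader(reshader, pair_dataset, epochs=10, lr=1e-4, device="cuda"):
    """pair_dataset is assumed to yield (input_image, cam_translation, reshaded_gt)
    tuples rendered from synthetic scenes under the original and novel cameras."""
    reshader = reshader.to(device)
    opt = torch.optim.Adam(reshader.parameters(), lr=lr)
    loader = DataLoader(pair_dataset, batch_size=8, shuffle=True)
    for _ in range(epochs):
        for image, cam_t, target in loader:
            image, cam_t, target = image.to(device), cam_t.to(device), target.to(device)
            pred = reshader(image, cam_t)                     # predicted reshaded image
            loss = torch.nn.functional.l1_loss(pred, target)  # assumed reconstruction loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return reshader
```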

Video


Results

We compare against 3D Moments, a modular single-image view synthesis approach. 3D Moments warps the highlights along with the texture, whereas our method moves them according to the novel view.

[Nine side-by-side video comparisons: 3D Moments vs. Ours.]

BibTeX

@article{Paliwal2023reshader,
  author     = {Paliwal, Avinash and Nguyen, Brandon G. and Tsarov, Andrii and Kalantari, Nima Khademi},
  title      = {ReShader: View-Dependent Highlights for Single Image View-Synthesis},
  journal    = {ACM Trans. Graph.},
  publisher  = {Association for Computing Machinery},
  year       = {2023},
  issue_date = {December 2023},
  volume     = {42},
  number     = {6},
  articleno  = {216},
  numpages   = {9},
  month      = {dec},
  doi        = {10.1145/3618393},
}

Acknowledgements

We thank the SIGGRAPH Asia reviewers for their comments and suggestions. This work was funded by Leia Inc. (contract #415290). Nima Khademi Kalantari was in part supported by an NSF CAREER Award (#2238193). Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.