PhotoMat: A Material Generator Learned from Single Flash Photos
SIGGRAPH 2023
Xilong Zhou (Texas A&M University; Adobe Research)
Miloš Hašan (Adobe Research)
Valentin Deschaintre (Adobe Research)
Paul Guerrero (Adobe Research)
Yannick Hold-Geoffroy (Adobe Research)
Kalyan Sunkavalli (Adobe Research)
Nima Khademi Kalantari (Texas A&M University)
Abstract

Authoring high-quality digital materials is key to realism in 3D rendering. Previous material generators have been trained on synthetic data, which is limited in availability and exhibits a visual gap to real materials. We circumvent this limitation by proposing PhotoMat: the first material generator trained exclusively on real photos of material samples captured using a cell phone camera with flash. Supervision on individual material maps is not available in this setting. Instead, we train a generator for a neural material representation that is rendered with a learned relighting module to create arbitrarily lit RGB images; these are compared against real photos using a discriminator. We train PhotoMat on a new dataset of 12,000 material photos captured with handheld phone cameras under flash lighting. We demonstrate that our generated materials have better visual quality than previous material generators trained on synthetic data. Moreover, we can fit analytical material models to closely match these generated neural materials, thus allowing for further editing and use in 3D rendering.
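The abstract's supervision signal hinges on rendering generated material maps under flash lighting so a discriminator can compare them to real photos. The sketch below illustrates that rendering step with a simple analytical shading model (Lambertian diffuse plus a Blinn-Phong-style specular lobe, with inverse-square falloff) in place of the paper's *learned* relighting module; the function name, shading model, and all parameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def render_flash(albedo, normal, roughness, light_pos, plane_size=1.0):
    """Render a flat material sample lit by a point light near the camera,
    approximating a phone flash (view and light directions coincide).

    Illustrative stand-in for PhotoMat's learned relighting module:
    Lambertian diffuse + Blinn-Phong-style specular, inverse-square falloff.
    albedo, normal: (H, W, 3); roughness: (H, W); light_pos: (3,).
    """
    h, w, _ = albedo.shape
    # Pixel positions on a unit plane at z = 0.
    ys, xs = np.meshgrid(
        np.linspace(-plane_size / 2, plane_size / 2, h),
        np.linspace(-plane_size / 2, plane_size / 2, w),
        indexing="ij",
    )
    pos = np.stack([xs, ys, np.zeros_like(xs)], axis=-1)

    to_light = light_pos - pos
    dist2 = np.sum(to_light ** 2, axis=-1, keepdims=True)
    wi = to_light / np.sqrt(dist2)   # light direction per pixel
    half = wi                        # flash ~ co-located view, so h == wi == wo

    n = normal / np.linalg.norm(normal, axis=-1, keepdims=True)
    ndl = np.clip(np.sum(n * wi, axis=-1, keepdims=True), 0.0, 1.0)
    ndh = np.clip(np.sum(n * half, axis=-1, keepdims=True), 0.0, 1.0)

    # Map roughness to a Blinn-Phong exponent (ad-hoc choice for this sketch).
    shininess = 2.0 / np.maximum(roughness[..., None] ** 2, 1e-4)
    diffuse = albedo * ndl
    specular = ndh ** shininess * ndl
    # Inverse-square falloff of the flash.
    return (diffuse + specular) / dist2
```

In training, an image like this (produced by the learned relighting module rather than this analytical formula) would be fed to the discriminator alongside real flash photos; at fitting time, an analytical model of this kind is optimized to match the generated neural material.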
Supplementary Video
BibTeX
@inproceedings{zhou2023PhotoMat,
  title     = {PhotoMat: A Material Generator Learned from Single Flash Photos},
  author    = {Zhou, Xilong and Hašan, Miloš and Deschaintre, Valentin and Guerrero, Paul and Hold-Geoffroy, Yannick and Sunkavalli, Kalyan and Kalantari, Nima Khademi},
  booktitle = {SIGGRAPH 2023 Conference Papers},
  year      = {2023}
}
Acknowledgements
This project was funded in part by the NSF CAREER Award #2238193 and a generous gift from Adobe. The website template was borrowed from Michael Gharbi.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.