GeLaTO: Generative Latent Textured Objects

ECCV 2020 (Spotlight)

Google Research

GeLaTO Instance Interpolations

Accurate modeling of 3D objects exhibiting transparency, reflections and thin structures is an extremely challenging problem. Inspired by billboards and geometric proxies used in computer graphics, this paper proposes Generative Latent Textured Objects (GeLaTO), a compact representation that combines a set of coarse shape proxies defining low-frequency geometry with learned neural textures, to encode both medium- and fine-scale geometry as well as view-dependent appearance. To generate the proxies' textures, we learn a joint latent space allowing category-level appearance and geometry interpolation. The proxies are independently rasterized with their corresponding neural texture and composited using a U-Net, which generates a photorealistic output image including an alpha map. We demonstrate the effectiveness of our approach by reconstructing complex objects from a sparse set of views. We show results on a dataset of real images of eyeglasses frames, which are particularly challenging to reconstruct using classical methods. We also demonstrate that these coarse proxies can be handcrafted when the underlying object geometry is easy to model, such as eyeglasses, or generated using a neural network for more complex categories, such as cars.
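To make the pipeline in the abstract concrete, below is a minimal sketch (not the authors' code) of the described flow: a latent code is decoded into neural textures, each coarse proxy is rasterized into a feature map via a texture lookup (the rasterizer is stubbed out with precomputed UV maps), and a small U-Net-like network composites the per-proxy features into an RGB image plus an alpha map. All module names, channel sizes, and the `rasterize_proxy` stub are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a GeLaTO-style render path; shapes and layers are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextureGenerator(nn.Module):
    """Decodes an instance latent code into a neural texture for a proxy (assumed design)."""
    def __init__(self, latent_dim=256, tex_channels=8, tex_res=256):
        super().__init__()
        self.tex_channels, self.tex_res = tex_channels, tex_res
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, tex_channels * 32 * 32),
        )

    def forward(self, z):
        tex = self.decoder(z).view(-1, self.tex_channels, 32, 32)
        # Upsample the low-resolution decode to the working texture resolution.
        return F.interpolate(tex, size=(self.tex_res, self.tex_res),
                             mode="bilinear", align_corners=False)


def rasterize_proxy(neural_texture, uv_map):
    """Samples the neural texture at UV coordinates obtained by rasterizing the coarse
    proxy under the target camera (a stand-in for a real differentiable rasterizer)."""
    # uv_map: (B, H, W, 2) in [-1, 1]; grid_sample performs the texture lookup.
    return F.grid_sample(neural_texture, uv_map, align_corners=False)


class CompositingUNet(nn.Module):
    """Tiny encoder-decoder with a skip connection that fuses the rasterized proxy
    features into an RGB image and an alpha map."""
    def __init__(self, in_channels):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),
                                  nn.ReLU())
        self.out = nn.Conv2d(128, 4, 3, padding=1)  # 3 RGB channels + 1 alpha

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        d1 = self.dec1(e2)
        rgba = self.out(torch.cat([d1, e1], dim=1))  # skip connection from enc1
        return torch.sigmoid(rgba[:, :3]), torch.sigmoid(rgba[:, 3:])  # rgb, alpha


if __name__ == "__main__":
    B, n_proxies, H, W = 1, 3, 256, 256
    z = torch.randn(B, 256)                          # instance latent code
    tex_gen = TextureGenerator()
    unet = CompositingUNet(in_channels=n_proxies * tex_gen.tex_channels)
    # Random UV maps stand in for the per-proxy rasterization output.
    uvs = [torch.rand(B, H, W, 2) * 2 - 1 for _ in range(n_proxies)]
    feats = [rasterize_proxy(tex_gen(z), uv) for uv in uvs]
    rgb, alpha = unet(torch.cat(feats, dim=1))
    print(rgb.shape, alpha.shape)                    # (1, 3, 256, 256) (1, 1, 256, 256)
```

Because all proxy textures here decode from a single latent code, interpolating between two instances' codes would interpolate appearance and mid-scale geometry jointly, which is the behavior the instance interpolations above illustrate.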

Overview

GeLaTO View Interpolations

Citation

@inproceedings{martinbrualla2020gelato,
  author    = {Martin-Brualla, Ricardo and Pandey, Rohit and Bouaziz, Sofien and Brown, Matthew and Goldman, Dan B},
  title     = {{GeLaTO: Generative Latent Textured Objects}},
  booktitle = {European Conference on Computer Vision},
  year      = {2020}
}

Acknowledgements

We thank Thomas Hayes for his help capturing the dataset, Matthew Wilson for his help printing the mannequin head, Supasorn Suwajanakorn for his code to render ShapeNet, and Matthew Tancik, from whom we borrowed the website template.