Learning Physically Plausible Object Relighting for Open-World Image Generation
Abstract
Controllable object relighting has become a key capability for modern image synthesis systems, supporting applications that range from creative editing to simulation for downstream visual learning tasks. However, most existing approaches focus either on strictly constrained laboratory settings or on black-box generative models that ignore the underlying physics of illumination and reflectance. This work studies the problem of learning physically plausible object relighting for open-world image generation, where objects exhibit diverse geometry, complex materials, and highly varied environmental illumination. Concretely, a model receives a single image of an object under unknown lighting together with a new target lighting condition, and must generate a relit image that is both photorealistic and consistent with physical light transport. To address this problem, we introduce a hybrid formulation that couples an explicit, differentiable approximation of image formation with a high-capacity generative backbone. Our approach decomposes the prediction into geometry-aware shading, reflectance, and residual appearance components, and constrains them through energy-preserving and reciprocity-inspired training objectives. We further introduce statistical regularizers that bias the learned model toward physically reasonable behaviors while still permitting the deviations needed to account for missing information and real-world violations of ideal assumptions. Extensive experiments on synthetic and real imagery indicate that the framework supports open-world generalization, improves temporal and compositional consistency over purely data-driven baselines, and degrades gracefully when the target lighting falls far outside the training distribution. The analysis highlights where physically motivated constraints are most beneficial and where data-driven flexibility remains essential for realistic open-world relighting.
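The abstract does not state the exact formulation, but the decomposition it describes can be sketched as follows. The notation below (reflectance R, shading S, residual term Delta, estimated geometry G-hat, target illumination L_t) is an illustrative assumption, not the paper's own:

\hat{I}(\mathbf{x}) \;=\; R(\mathbf{x}) \odot S\big(\mathbf{x};\, \hat{G},\, L_t\big) \;+\; \Delta(\mathbf{x})

Under this reading, an energy-preserving objective would constrain the reflectance and shading so that the relit image does not reflect more light than it receives (e.g., bounding R(\mathbf{x}) to [0, 1] and penalizing large \Delta), while a reciprocity-inspired objective would encourage the learned reflectance to be approximately symmetric in incoming and outgoing directions, f_r(\omega_i, \omega_o) \approx f_r(\omega_o, \omega_i). These expressions are a plausible sketch consistent with the abstract, not the paper's actual losses.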
License
Copyright (c) 2025 authors

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.