WarpRF: Multi-View Consistency for Training-Free Uncertainty Quantification and Applications in Radiance Fields



¹ KUIS AI Center, Koç University
² University of Bologna

Given a radiance field-based surface reconstruction framework trained on an initial set of images -- e.g., a 3D Gaussian Splatting model 3DGS$_{t0}$ (left) trained on the blue viewpoints -- WarpRF estimates the next best view (orange in the figure) by quantifying the rendering uncertainty $\mathcal{U}_0$ associated with it through warping. This view is added to the training set and used to fit a more accurate 3DGS$_{t1}$ (center), from which we can identify a new next best view with maximum uncertainty $\mathcal{U}_1$, add it to the training set, and obtain 3DGS$_{t2}$ (right), and so on.
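The loop described in the caption is a greedy uncertainty-driven selection. As a minimal sketch (not the authors' released code), where `fit_fn` (e.g., a 3DGS trainer) and `uncertainty_fn` (WarpRF's per-view uncertainty $\mathcal{U}$) are hypothetical callables supplied by the user, it might look like:

```python
from typing import Any, Callable, List

def active_view_selection(
    train_views: List[Any],
    candidates: List[Any],
    fit_fn: Callable[[List[Any]], Any],           # trains a radiance field, e.g. 3DGS (assumed)
    uncertainty_fn: Callable[[Any, Any], float],  # uncertainty U of a candidate view (assumed)
    rounds: int = 3,
) -> Any:
    """Greedy loop from the figure: fit, score candidates by uncertainty,
    add the argmax view to the training set, and repeat."""
    model = fit_fn(train_views)                   # 3DGS_t0
    for _ in range(rounds):
        # Next best view = unseen viewpoint with maximum rendering uncertainty.
        best = max(candidates, key=lambda v: uncertainty_fn(model, v))
        candidates.remove(best)
        train_views.append(best)
        model = fit_fn(train_views)               # 3DGS_{t+1}
    return model
```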


Method

We introduce WarpRF, a training-free, general-purpose framework for quantifying the uncertainty of radiance fields. Built upon the assumption that photometric and geometric consistency should hold among images rendered by an accurate model, WarpRF quantifies the underlying model's uncertainty at an unseen viewpoint by leveraging backward warping across viewpoints, projecting reliable renderings to the unseen viewpoint and measuring their consistency with images rendered there. WarpRF is simple and inexpensive, does not require any training, and can be applied to any radiance field implementation for free. WarpRF excels at both uncertainty quantification and downstream tasks, e.g., active view selection and active mapping, outperforming existing methods tailored to specific frameworks.
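To make the backward-warping step concrete, here is a minimal self-contained sketch, not the paper's implementation: it assumes a shared pinhole intrinsic matrix `K`, uses nearest-neighbour sampling, and scores consistency with mean absolute photometric error, whereas the actual consistency measure in the paper may differ (e.g., it may also include a geometric term).

```python
import numpy as np

def backward_warp(src_img, tgt_depth, K, T_src_tgt):
    """Warp a source-view rendering into the target (unseen) view.

    src_img:   (H, W, 3) image rendered at a reliable (training) viewpoint
    tgt_depth: (H, W)    depth rendered at the unseen viewpoint
    K:         (3, 3)    shared pinhole intrinsics (assumption of this sketch)
    T_src_tgt: (4, 4)    rigid transform from target to source camera frame
    """
    H, W = tgt_depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, HW)

    # Back-project target pixels to 3D using the rendered depth.
    pts_tgt = (np.linalg.inv(K) @ pix) * tgt_depth.reshape(1, -1)      # (3, HW)
    pts_src = T_src_tgt[:3, :3] @ pts_tgt + T_src_tgt[:3, 3:4]

    # Project into the source view; sample with nearest neighbour for brevity.
    proj = K @ pts_src
    z = proj[2]
    valid = z > 1e-6
    pu = np.zeros_like(z, dtype=int)
    pv = np.zeros_like(z, dtype=int)
    pu[valid] = np.round(proj[0, valid] / z[valid]).astype(int)
    pv[valid] = np.round(proj[1, valid] / z[valid]).astype(int)
    valid &= (pu >= 0) & (pu < W) & (pv >= 0) & (pv < H)

    warped = np.zeros((H * W, 3))
    warped[valid] = src_img[pv[valid], pu[valid]]
    return warped.reshape(H, W, 3), valid.reshape(H, W)

def warp_uncertainty(tgt_img, warped, valid):
    """Photometric inconsistency between the unseen-view rendering and the
    warped training-view rendering, averaged over valid pixels."""
    err = np.abs(tgt_img - warped).mean(axis=-1)
    return err[valid].mean() if valid.any() else np.inf
```

In this sketch, high inconsistency between what the model renders at the unseen view and what its own reliable renderings predict via warping signals high uncertainty, which is exactly the quantity the selection loop above maximizes.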


BibTeX

@article{safadoust20WarpRF,
  title={WarpRF: Multi-View Consistency for Training-Free Uncertainty Quantification and Applications in Radiance Fields},
  author={Safadoust, Sadra and Tosi, Fabio and G{\"u}ney, Fatma and Poggi, Matteo},
  journal={arXiv preprint arXiv:2506.22433},
  year={2025}
}