Acquiring high-quality video in challenging conditions—such as low light, heat haze, or adverse weather—is difficult, often resulting in degraded footage that hinders both human and machine interpretation. PriorPool addresses this by leveraging priors from high-quality videos with similar content to guide restoration and enhancement. The project develops an unsupervised framework that tackles blind inverse problems, combining robust content representations, prior retrieval, and context-aware optimisation. By exploiting the knowledge embedded in high-quality videos, PriorPool aims to overcome information loss and the lack of ground truth, enabling more accurate and effective video restoration.
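The prior-retrieval step can be pictured as a nearest-neighbour search over content embeddings of a pool of high-quality videos. The sketch below is a minimal illustration under assumed conventions (cosine similarity over fixed-size feature vectors); the function names, embedding dimensionality, and similarity measure are illustrative assumptions, not PriorPool's actual implementation.

```python
import numpy as np

def retrieve_priors(query_feat, prior_feats, k=3):
    """Return indices of the k high-quality priors whose content
    embeddings are most similar to the degraded query, ranked by
    cosine similarity (hypothetical retrieval stage)."""
    q = query_feat / np.linalg.norm(query_feat)
    P = prior_feats / np.linalg.norm(prior_feats, axis=1, keepdims=True)
    sims = P @ q                      # cosine similarity to each prior
    return np.argsort(-sims)[:k]      # indices of the top-k priors

# toy pool: 5 priors with 4-D content embeddings
rng = np.random.default_rng(0)
priors = rng.normal(size=(5, 4))
query = priors[2] + 0.05 * rng.normal(size=4)  # query resembles prior 2
top = retrieve_priors(query, priors, k=2)
```

In a real system the embeddings would come from a learned content encoder and the retrieved priors would then guide the restoration objective.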
This work introduces Zero-TIG, a zero-shot learning approach for low-light video enhancement that combines Retinex theory with optical flow. The network comprises an enhancement module, which handles denoising, illumination estimation, and reflection removal, and a temporal feedback module that enforces consistency via histogram equalization, optical flow, and image warping. Together, these components enable temporally coherent and visually consistent enhancement without paired training data.
[Project page]
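The temporal feedback idea above can be sketched in a few lines: equalize the current frame, backward-warp the previous enhanced frame with a dense flow field, and blend the two. This is a simplified NumPy illustration, not Zero-TIG's network; the nearest-neighbour warp, the blend rule, and the parameter `alpha` are assumptions for the sketch.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit grayscale frame."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
    return cdf[img].astype(np.uint8)

def warp(prev, flow):
    """Backward-warp a (H, W) frame with a dense (H, W, 2) flow field
    using nearest-neighbour sampling (bilinear in practice)."""
    h, w = prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(xs - flow[..., 0], 0, w - 1).round().astype(int)
    src_y = np.clip(ys - flow[..., 1], 0, h - 1).round().astype(int)
    return prev[src_y, src_x]

def temporal_feedback(curr_enhanced, prev_enhanced, flow, alpha=0.8):
    """Blend the current enhancement with the flow-warped previous
    output to suppress frame-to-frame flicker."""
    return alpha * curr_enhanced + (1 - alpha) * warp(prev_enhanced, flow)
```

With zero flow and identical frames the feedback is the identity, which is the desired behaviour on static scenes.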
This work proposes an unpaired learning method for simultaneous colorization and denoising of ultra-high-resolution videos. To address memory constraints at large scales, we introduce a multiscale patch-based framework that captures both local and contextual features, complemented by an adaptive temporal smoothing technique to suppress flickering artifacts.
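Two of the ingredients above — tiling a frame into patches so it never has to fit in memory at once, and temporal smoothing that adapts to how much the scene changed — can be sketched as follows. This is a toy single-scale illustration, not the paper's multiscale framework; the patch size, the exponential weighting, and the parameter `beta` are assumptions.

```python
import numpy as np

def process_patches(frame, model, patch=64):
    """Apply `model` to non-overlapping patches of a large frame,
    bounding peak memory (the actual framework is multiscale and
    also feeds contextual features)."""
    h, w = frame.shape[:2]
    out = np.empty_like(frame, dtype=np.float32)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            out[y:y+patch, x:x+patch] = model(frame[y:y+patch, x:x+patch])
    return out

def adaptive_smooth(curr, prev_out, beta=10.0):
    """Adaptive temporal smoothing: blend with the previous output,
    trusting it less where pixels changed a lot, so flicker is
    suppressed without ghosting on motion."""
    diff = np.abs(curr - prev_out)
    w = np.exp(-beta * diff)          # high weight where frames agree
    return w * prev_out + (1 - w) * curr
```

On static regions the weight approaches 1 and the output stays locked to the previous frame; on fast motion it approaches 0 and the current frame passes through.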