Production Workflows for Low-light Environments

AI-based Image Processing and Computer Vision

The human visual system relies on opponent processes, present in both the retina and the visual cortex, that respond to differences in colour, luminance, and motion. Contrast, the difference in luminance and/or colour that makes objects distinguishable, therefore plays a central role in the subjective evaluation of image quality. Images and videos captured in low light often suffer from poor quality and visibility: shutter-angle constraints limit exposure time, high ISO settings introduce noise, and the captured spectrum is biased toward blue. Traditional enhancement techniques tend to wash out detail, flatten the image, and amplify this noise.
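
As a concrete illustration of that last point, the minimal NumPy sketch below (a synthetic example, not the project's method) simulates a noisy low-light frame and brightens it with a flat digital gain: the signal-to-noise ratio does not improve, and a subsequent gamma stretch makes shadow noise more visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a dim frame: true radiance in [0, 0.1] plus Gaussian sensor noise.
clean = rng.uniform(0.0, 0.1, size=(256, 256))
noise = rng.normal(0.0, 0.02, size=clean.shape)  # stands in for high-ISO noise
noisy = clean + noise

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

# Naive enhancement: a flat digital gain to reach a "normal" exposure.
gain = 8.0
brightened = np.clip(noisy * gain, 0.0, 1.0)

# The gain scales signal and noise identically, so SNR is unchanged,
# while the noise becomes far more visible at display brightness.
print(f"SNR before gain: {snr_db(clean, noise):.1f} dB")
print(f"SNR after gain:  {snr_db(gain * clean, gain * noise):.1f} dB")

# A gamma stretch (gamma < 1) lifts the shadows even further, so the
# noise amplitude in dark regions grows after encoding.
stretched = brightened ** 0.45
dark = clean < 0.02
print(f"Shadow noise std: {brightened[dark].std():.3f} -> {stretched[dark].std():.3f}")
```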

This project aims to develop and validate a perceptually inspired deep learning framework for the joint restoration of noisy, low-light content, targeting natural history filmmaking, while ensuring temporal consistency in colour, luminance, and motion.
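
One common way to encourage this kind of temporal consistency is a warping loss between consecutive enhanced frames. The PyTorch sketch below is a generic illustration of that idea, not the project's actual method; the optical flow and occlusion mask are assumed to come from an external estimator (e.g. RAFT).

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Backward-warp `frame` (B, C, H, W) by optical `flow` (B, 2, H, W)."""
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    # Displace the sampling grid by the flow, then normalise to [-1, 1].
    grid_x = (xs + flow[:, 0]) / (w - 1) * 2 - 1
    grid_y = (ys + flow[:, 1]) / (h - 1) * 2 - 1
    grid = torch.stack((grid_x, grid_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(frame, grid, align_corners=True)

def temporal_consistency_loss(enhanced_t, enhanced_prev, flow, mask):
    """L1 penalty between frame t and the flow-warped frame t-1.

    `mask` (B, 1, H, W) downweights occluded pixels where the warp is
    invalid, so the loss only acts where motion compensation holds.
    """
    warped_prev = warp(enhanced_prev, flow)
    return (mask * (enhanced_t - warped_prev).abs()).mean()

# Toy usage with random tensors standing in for network outputs.
b, c, h, w = 1, 3, 64, 64
e_t, e_prev = torch.rand(b, c, h, w), torch.rand(b, c, h, w)
flow = torch.zeros(b, 2, h, w)   # would come from a flow estimator
mask = torch.ones(b, 1, h, w)
print(temporal_consistency_loss(e_t, e_prev, flow, mask))
```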

Funder
UKRI MyWorld Strength in Places Programme (SIPF00006/1), BRISTOL+BATH CREATIVE R+D (AH/S002936/1).

Research team

Core
Undergrad/Postgrad projects
  • Anastasia Yi (2023), A comprehensive study of object tracking in low-light environments [Thesis]
  • Siyu Zhou (2023), Temporal consistency in low-light video enhancement [Thesis]
  • Felicia Dubicki-Piper (2023), VideoINR+: Video denoising with implicit neural representation [Code]

Downloads
  • Publications
  • Datasets

Related publications from VI-Lab
  • Denoising in different modalities