The human visual system operates through various opponent processes, present in both the retina and the visual cortex, which rely heavily on differences in colour, luminance, or motion to trigger salient responses. Contrast, the difference in luminance and/or colour that allows objects to be distinguished from one another, therefore plays a crucial role in the subjective evaluation of image quality. Images and videos captured in low-light conditions often exhibit poor quality and visibility, owing to restricted shutter angles, noise amplified by high ISO settings, and a spectral bias toward blue. Traditional enhancement techniques tend to wash out detail, flatten the appearance, and amplify noise.
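To make the notion of luminance contrast concrete, the following is a minimal sketch of two standard contrast measures, Michelson and RMS contrast, applied to synthetic frames; these metrics and the NumPy-based implementation are illustrative assumptions, not part of the project described here:

```python
import numpy as np

def michelson_contrast(luminance: np.ndarray) -> float:
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin)."""
    lmax, lmin = float(luminance.max()), float(luminance.min())
    total = lmax + lmin
    return (lmax - lmin) / total if total > 0 else 0.0

def rms_contrast(luminance: np.ndarray) -> float:
    """RMS contrast: standard deviation of luminance normalised to [0, 1]."""
    return float(np.std(luminance / 255.0))

# Synthetic example: a dim, low-contrast frame vs. a well-exposed one.
rng = np.random.default_rng(0)
dim = rng.uniform(10, 40, size=(64, 64))     # narrow, dark luminance range
bright = rng.uniform(0, 255, size=(64, 64))  # full luminance range

print(michelson_contrast(dim) < michelson_contrast(bright))  # True
print(rms_contrast(dim) < rms_contrast(bright))              # True
```

Both measures are lower for the dim frame, matching the perceptual observation that low-light content offers fewer luminance differences for the visual system's opponent processes to act on.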
This project aims to develop and validate a perceptually inspired deep learning framework for the joint restoration and enhancement of noisy, low-light content (targeting natural history filmmaking), ensuring temporal consistency in colour, luminance, and motion.