PhD Scholarship

PriorPool: Intelligent Video Restoration and Enhancement via a Large Prior Database

  • Visual Information Laboratory, School of Computer Science
  • Scholarship Details: £19,237 p.a. (2025/26 rate)
  • Duration: 3.5 years
  • Eligibility: Home (UK) and EU citizens who have confirmation of UK settlement or pre-settlement status under the EU Settlement Scheme. If you are an overseas student, please contact the lead supervisor about a potential fee waiver.
  • Start date: September 2025 or before

Supervisory team

Collaborators
BBC R&D, Lux Aeterna VFX, Esprit Film and Television, MyWorld

Project description

Acquiring high-quality footage in challenging environments, such as low light, heat haze, and adverse weather conditions, presents significant difficulties. These conditions not only produce visually unappealing videos but also hinder interpretation by both humans and machines. As a result, post-processing becomes essential. However, video restoration and enhancement remain challenging due to the inherent loss of information, compounded by the general absence of ground truth data.

This research project, PriorPool, seeks to address these challenges in an innovative way. We propose that prior information, extracted from high-quality videos sharing similar content with the distorted footage, can act as constraints during the learning process of restoration models. By leveraging the inherent characteristics and knowledge embedded in high-quality videos, this approach provides valuable guidance for the learning-based restoration and enhancement of distorted videos.

The PriorPool project aims to develop a comprehensive framework for video restoration and enhancement by tackling blind inverse problems using unsupervised learning. Working collaboratively as a team comprising a postdoctoral researcher and a PhD student, the specific objectives are as follows.

  1. To define and acquire a comprehensive database that includes priors relating to high-quality videos serving as references for enhancing distorted videos.
  2. To develop a new, robust high-level representation of video content. Distortions generally alter video characteristics, increasing the difference between the input videos and the corresponding high-quality videos in the database, even when they share similar content. The new representation will minimise this gap, maximising the accuracy of the retrieved priors.
  3. To develop a prior retrieval system, providing global, local, and context-based priors, along with statistically driven models that provide a reliable basis for the video restoration and enhancement process.
  4. To address blind inverse problems, where the degradation process during video acquisition is unknown. We will define a network that learns distortion functions from data and simultaneously informs the optimisation during learning.
  5. To develop and refine optimisation and learning strategies that are aware of the acquisition context and capable of learning without explicit ground truth information. The aim is to enhance video quality using unsupervised learning approaches.

The enhancement of distorted video is important in a number of fields, including cell microscopy, space imaging, industrial metrology, surveillance, robotics, and autonomous vehicles. While any solutions will have broad applicability, this work will initially target natural history filmmaking in challenging environments. The creative industries are a strength of the UK economy, and Bristol, known as the "Green Hollywood", leads the world in natural history content, responsible for well over 40% of the world's productions.

Entry requirements

Applicants must hold or achieve a minimum of a master’s degree (or international equivalent) in a relevant discipline. Applicants without a master’s qualification may be considered in exceptional cases, provided they hold a first-class undergraduate degree. Please note that acceptance will also depend on evidence of readiness to pursue a research degree. More details can be found here.

Contact details for further information:
Assoc. Prof. Pui Anantrasirichai