Remote sensing imaging modalities are the gold standard in the literature for various environmental applications, such as marine monitoring, glacier analysis, and vegetation classification. Optical imagery (passive), though it displays the Earth as the human eye sees it, is unfortunately at the mercy of clouds and the amount of sunlight. In contrast, synthetic aperture radar (SAR) is an active sensor that emits microwaves towards the Earth and receives the back-scattered signals from the surface. Since SAR uses longer wavelengths (1 cm to 1 m) than optical sensors (near-visible light, around 1 micron), it can image both day and night, and in almost all weather conditions, through clouds and storms, whereas optical sensors cannot. Despite these advantages over optical imagery, an important problem degrading statistical inference from SAR imagery is the presence of multiplicative speckle noise.
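As an illustrative aside, speckle in an L-look SAR intensity image is commonly modelled as multiplicative noise, I = X · N, where X is the underlying reflectivity and N follows a Gamma distribution with unit mean. A minimal simulation sketch in Python (the function name is our own, not part of the project):

```python
import numpy as np

def add_speckle(image, looks=4, seed=0):
    """Simulate L-look multiplicative speckle: I = X * N,
    with N ~ Gamma(shape=L, scale=1/L) so that E[N] = 1."""
    rng = np.random.default_rng(seed)
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=image.shape)
    return image * noise

# Example: speckle a constant-reflectivity patch.
clean = np.full((128, 128), 100.0)
speckled = add_speckle(clean, looks=4)
# The mean intensity is preserved, while the relative standard
# deviation is roughly 1/sqrt(looks), i.e. about 0.5 for 4 looks.
print(speckled.mean(), speckled.std() / speckled.mean())
```

This unit-mean construction is why multi-looking (averaging independent looks) reduces speckle without biasing the intensity.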
Each modality thus has advantages over the other, and major challenges remain to be solved, particularly for environmental applications. Recently, in remote sensing imaging research, it has become increasingly common to fuse optical and SAR data sets. In particular, the aim of fusing SAR and optical data is to leverage the synergistic properties of the two imaging systems for a given image processing application.
In this project, contrary to the general practice in existing environmental applications, which focus on a single imaging modality, we exploit the advantages of both optical and SAR imagery to super-resolve and fuse observed scenes along both the spatial and temporal axes. This will lead to novel, highly informative feature extraction and target models that account for the targets' spectral characteristics and interactions with the Earth's surface under the different conditions of each imaging modality. Because optical and SAR sensors observe from different look angles, advanced Bayesian image registration approaches will be developed prior to fusing the two modalities. This will be followed by the development of a novel Bayesian implementation of advanced data-driven techniques, with minimal supervision, for image-to-image translation in active-passive remote sensing fusion.
This project is for Self-Funded Students. To apply for the PhD Scholarship, follow the instructions here.
In the funding field of your application, indicate “I am applying for 2021 PhD Scholarship in Computer Science and Informatics”, and specify the project title and supervisors of this project in the text box provided.
A 2:1 Honours undergraduate degree or a master’s degree, in computing or a related subject. Applicants with appropriate professional experience are also considered. Degree-level mathematics (or equivalent) is required for research in some project areas.
Applicants for whom English is not their first language must demonstrate proficiency by obtaining an IELTS score of at least 6.5 overall, with a minimum of 6.0 in each skills component.
This project is accepting applications all year round; however, if you are interested in applying for the PhD Scholarship, please note that the deadline is 30 June 2021. For more information about the Scholarship application deadline, please visit our website. Apply online here.
For more information on the project, please contact [email protected]