TROPWET: towards a reproducible Earth Observation system for mapping and monitoring tropical wetlands.


Wetlands are vital for human survival and are among the most prolific providers of ecosystem services on Earth. Despite this importance, wetlands are being lost at an alarming rate, with an estimated 35% of wetland area lost since 1970. Moreover, remaining wetlands are experiencing increasing deterioration from unsustainable use and disrupted flow regimes, and are at very high risk of exposure to climatic extremes and the associated ecological and environmental disasters. To reverse these trends, continuous monitoring of the status of wetlands worldwide needs to improve from local to global scales, so that we can accurately assess wetland condition and ultimately support more effective management and policy decisions.

Monitoring the distribution, extent, status and changes of wetlands is, however, time-consuming, resource-hungry and expensive, making it unachievable for many government agencies and civil society organisations. For this reason, detailed knowledge of the distribution, extent and conservation status of wetlands is still lacking worldwide, especially for tropical wetlands. Current estimates of global wetland extent vary from 12 to 17 × 10^6 km² (an area similar in size to Russia), of which ca. 50% lies in tropical regions. These estimates are likely far from reality, as the complexity of wetlands makes satellite mapping a challenging task: wetlands are highly dynamic in space and time, respond rapidly to hydrological changes, and often occur at the limit of the spatial resolution of most orbital sensors. However, modern Artificial Intelligence (AI) approaches have significantly pushed the envelope of what is achievable with remote sensing. Deep learning approaches such as convolutional neural networks have been extensively applied to satellite images to improve the performance of remote sensing applications, often exploiting multi-sensor and multi-temporal information from diverse satellite platforms.

Still, regardless of these advances in remote sensing methods, a major bottleneck for developing effective space-enabled AI monitoring systems continues to be the poor availability of accurate ground observations with which to train these algorithms. Direct involvement and engagement of wetland end-users (local populations, landowners, governments) through citizen science platforms is a promising avenue for obtaining this essential ground information, while simultaneously promoting increased recognition and ownership of wetland environments by local stakeholders. Combining space-enabled observation systems with citizen science approaches may thus maximise the effectiveness and resulting benefits of monitoring solutions, providing a cost-effective approach to collecting accurate data on tropical wetland distribution.

In this project, you will learn and apply cutting-edge methods in AI-powered Earth Observation to develop a cloud-based, automated wetland mapping system, and will help develop and test a cloud-based citizen science application to collect data on wetland occurrence and conservation/degradation status. You will then combine these two sources of information to deliver the first ever satellite-based assessment of wetland distribution, extent and conservation status for the entire South American continent, as a proof of concept for a future global monitoring system. To this end, you will have the support of a network of collaborators in South America for a field validation/data collection mission at key wetland locations.


The mapping solution will be based on three main components:

1) an automated system to ingest and process freely available satellite imagery from existing cloud databases;

2) a state-of-the-art AI system to combine the ground data provided by the citizen science component (below) with ancillary data and imagery inputs to produce maps of wetland extent and quality in near real time; and

3) a cloud-based bespoke wetland monitoring app for mobile phones and tablets that will enable any citizen to document and locate wetlands.

The algorithm suite to ingest and process available satellite imagery will be based on the Application Programming Interfaces (APIs) offered by cloud-processing engines such as Google Earth Engine and Microsoft Planetary Computer, comprising modern data from the Sentinel-1 (synthetic aperture radar, SAR) and Sentinel-2 (optical imaging) platforms and both modern and historical records from the ALOS-PALSAR (SAR) and Landsat (optical) satellite series. You will then establish a method to combine these data into Analysis Ready Datacubes (ARDs), including radiometric correction and harmonisation, application of cloud and cloud-shadow masks, temporal aggregation of observations into synthetic cloud-free image composites, and automatic checking and correction of image alignment between sensors. You will also integrate ancillary datasets into this stack, such as topographic and hydrological data.
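As a minimal illustration of the temporal aggregation step described above (mask cloud-contaminated observations, then composite per pixel), here is a Python/numpy sketch on synthetic data. The function name and array layout are illustrative only, and not tied to the Google Earth Engine or Planetary Computer APIs:

```python
import numpy as np

def cloudfree_composite(stack, cloud_masks):
    """Per-pixel median composite from a time series of images.

    stack:       (time, height, width) array of reflectance values
    cloud_masks: (time, height, width) boolean array, True where cloudy
    Cloud-contaminated observations are excluded before taking the median.
    """
    masked = np.where(cloud_masks, np.nan, stack.astype(float))
    # nanmedian ignores the masked (cloudy) observations at each pixel
    return np.nanmedian(masked, axis=0)

# Tiny synthetic example: 3 dates, 2x2 pixels
stack = np.array([[[0.1, 0.2], [0.3, 0.4]],
                  [[0.9, 0.2], [0.3, 0.4]],   # 0.9 is a cloudy outlier
                  [[0.1, 0.2], [0.3, 0.4]]])
clouds = np.zeros_like(stack, dtype=bool)
clouds[1, 0, 0] = True                        # mask the outlier
composite = cloudfree_composite(stack, clouds)
print(composite[0, 0])                        # 0.1: cloudy value excluded
```

In the operational pipeline, equivalent logic would run server-side on the cloud platform (e.g. via image-collection reducers) rather than on local arrays.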

In parallel with implementing the data acquisition and preparation pipeline, you will also contribute to the development of a bespoke wetland monitoring app for mobile phones and tablets, which will enable any citizen to document and locate wetlands. The app will first be tested with selected participants from the scientific and stakeholder communities of key wetlands in South America, and then released to the general public. The app will collect photographic and video observations together with voice and written comments, all geotagged using mobile GPS capabilities. The final app design should encourage and facilitate the reporting of wetland location, current use and degradation status. These field observations can also be combined with existing local maps of wetland distribution to augment the ground samples database.
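To make the observation data model concrete, the sketch below shows one way a single geotagged citizen record could be structured before upload. Every field name here is a hypothetical assumption for illustration, not the actual app schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class WetlandObservation:
    """One geotagged citizen-science record (field names are illustrative)."""
    latitude: float
    longitude: float
    timestamp: str                 # ISO-8601 string from the device clock
    is_wetland: bool               # observer's judgement at this location
    condition: str                 # e.g. "pristine", "degraded", "converted"
    notes: Optional[str] = None    # free-text or transcribed voice comment
    photo_ids: list = field(default_factory=list)  # uploaded media references

obs = WetlandObservation(
    latitude=-19.0, longitude=-57.5,
    timestamp=datetime(2024, 6, 1, tzinfo=timezone.utc).isoformat(),
    is_wetland=True, condition="degraded",
    notes="Cattle grazing on the margins",
)
record = asdict(obs)   # plain dict, ready to serialise as JSON for upload
```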

You will then develop a method to integrate the ARDs with the gathered citizen science observations, and use them to train a predictive AI model based on convolutional neural networks (CNNs). Because the system is built on automated ingestion of satellite and ground data into cloud-enabled processing platforms, the model can be re-trained and re-tested on demand to improve output quality and deliver updated maps shortly after new observations become available. You will then conduct a field-based validation expedition, supported by key collaborators in strategic locations throughout South America, to provide independent validation of the accuracy of the resulting maps and of the effectiveness of the mobile app.
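As a toy illustration of the convolutional principle behind such models, the numpy sketch below applies a single hand-set 3×3 filter to a one-band synthetic "wetness" image and converts the responses to per-pixel scores. A real CNN would learn many stacked filters from the training data; the weights here are illustrative only:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D cross-correlation: one convolutional filter, no padding."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A trained network stacks many learned filters; here one hand-set 3x3
# filter turns a single-band "wetness index" into per-pixel wetland scores.
image = np.array([[0., 0., 0., 0., 0.],
                  [0., 1., 1., 1., 0.],
                  [0., 1., 1., 1., 0.],
                  [0., 0., 0., 0., 0.]])
kernel = np.full((3, 3), 1.0)          # simple neighbourhood-sum filter
bias = -4.0                            # threshold: more than 4 wet neighbours
probability = sigmoid(conv2d(image, kernel) + bias)
print(probability.shape)               # (2, 3): one score per output pixel
```

Semantic-segmentation CNNs used in remote sensing (e.g. fully convolutional architectures) generalise exactly this operation, stacking learned filters and non-linearities to output one class probability per pixel.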

Finally, once the first version of the map dataset is produced, you will use the data to analyse the current distribution, extent and status of wetland environments for all of South America at medium-high resolution, investigate their natural seasonality, and identify evidence of ongoing or future risk of degradation.
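One simple way the seasonality analysis could be summarised is as a per-pixel inundation frequency computed from a time series of monthly wetland masks; a minimal numpy sketch, with illustrative array shapes:

```python
import numpy as np

def inundation_frequency(monthly_masks):
    """Fraction of months in which each pixel is mapped as inundated.

    monthly_masks: (months, height, width) boolean array of wetland/water
    detections. Values near 1 suggest permanent wetland, intermediate
    values seasonal floodplain, and values near 0 rarely flooded land.
    """
    return monthly_masks.mean(axis=0)

# One year of masks over two pixels: the first is always wet (permanent),
# the second is wet for four months (seasonally flooded).
masks = np.zeros((12, 1, 2), dtype=bool)
masks[:, 0, 0] = True       # permanent wetland pixel
masks[:4, 0, 1] = True      # seasonally flooded pixel
freq = inundation_frequency(masks)
print(freq)                 # frequencies of 1.0 and ~0.33
```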

Project Timeline

Year 1

During year 1 you will receive extensive training in state-of-the-art Earth observation and machine learning theory and methods, including the development of strong programming and analytical skills, as well as training on the environmental and socioeconomic aspects of wetlands in South America. You will also start to familiarise yourself with the main sources of globally available satellite data and current cloud-processing platforms, and with modern image processing methods such as radiometric harmonisation and best available pixel (BAP) analysis.
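Best available pixel compositing ranks every observation of a pixel by a quality score (typically combining distance to cloud, proximity to a target date, and sensor preference) and keeps the highest-scoring observation. A minimal numpy sketch of the selection step, with the scoring itself assumed precomputed:

```python
import numpy as np

def bap_composite(stack, scores):
    """Best-available-pixel composite.

    stack:  (time, height, width) pixel values from repeated observations
    scores: (time, height, width) quality scores (higher = better)
    Returns the per-pixel value of the highest-scoring observation.
    """
    best = np.argmax(scores, axis=0)                        # winning date index
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Two observation dates, one row of two pixels: each pixel keeps the
# value from whichever date scored higher at that location.
stack = np.array([[[1.0, 2.0]],
                  [[3.0, 4.0]]])
scores = np.array([[[0.9, 0.1]],
                   [[0.2, 0.8]]])
composite = bap_composite(stack, scores)
print(composite)    # [[1. 4.]]
```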

During the second semester of year one, you will practice these skills by implementing and iterating over an analytical workflow targeting test sites with reduced spatial extent. These sites will comprise key wetland locations in South America that are well known and studied by the main supervisors. You will also be able to undertake additional training in academic skills as offered by the IAPETUS program and the supporting institutions.

By the end of year 1, you will have a first fully working prototype of the image analysis solution, ready for scaling up on cloud platforms, which will be documented as your first chapter/scientific article for submission. You will also provide scientific and technical input on the development and testing of the citizen science web application, and will have the opportunity to participate directly in its development and implementation if you so choose.

Year 2

During year 2 you will start implementing the analytical algorithms on available cloud-processing platforms and integrating them with the citizen data collection system. To this end, you will be able to undertake industry-recognised training and certification on the use of cloud-based technologies, provided by the technology vendors themselves (e.g. Microsoft Azure, Amazon AWS), as well as third-party industry certifications. You will again be able to undertake additional training in academic skills as offered by the IAPETUS program and the supporting institutions. During the second semester of year 2 you will finalise the implementation and testing of the system and generate the first version of the final mapping products for stakeholder analysis and validation, which will be documented as your second chapter/journal submission.

Year 3

At the start of year 3, you will conduct fieldwork at strategic locations in South America, supported by local collaborators of the main supervisors, to validate and improve the mapping and data collection solutions. You will then devote the rest of the year to analysing the validated datasets to produce a comprehensive assessment of the distribution, extent and status of South American wetlands, and to quantifying the risks of current and future degradation. You will report these results as your third chapter/wide-impact journal publication, produce easily accessible summaries for policy makers, and interact with university media teams on press releases of the results. You should also aim to attend and present at at least one field-specific conference (e.g. IGARSS, INTECOL Wetlands) and one broader international conference (e.g. Living Planet Symposium, EGU, AGU).

Year 3.5

During Year 3.5, you will finalise your dissertation and submit your final scientific manuscript for publication. You will also interact with key stakeholders in South America and elsewhere to disseminate your results in support of initiatives that increase the security and conservation of wetland environments.

Training & Skills

You will receive training and mentoring from all four supervisors on the theoretical background and state-of-the-art methods in Earth Observation and Machine Learning, through direct interaction as well as participation in postgraduate modules offered by the supervisors on these subjects. This will involve acquiring solid skills in programming and data analysis, and optionally in software development practices. You will also be able to undertake industry certification in cloud-based technologies, which is highly valued in the current job market. Through IAPETUS and the supporting universities, you will have ample opportunity to engage with training on research practices, writing and speaking skills, career development and specific technical topics. The skills you will develop should position you equally well for either an academic or an industry career.
References & further reading

Fluet-Chouinard, E., Lehner, B., Rebelo, L.-M., Papa, F., Hamilton, S.K., 2015. Development of a global inundation map at high spatial resolution from topographic downscaling of coarse-scale remote sensing data. Remote Sensing of Environment 158, 348–361.

Gumbricht, T., 2012. Mapping global tropical wetlands from earth observing satellite imagery (Working Paper 103). Center for International Forestry Research (CIFOR).

Janse, J.H., van Dam, A.A., Hes, E.M.A., de Klein, J.J.M., Finlayson, C.M., Janssen, A.B.G., van Wijk, D., Mooij, W.M., Verhoeven, J.T.A., 2019. Towards a global model for wetlands ecosystem services. Current Opinion in Environmental Sustainability, Environmental Change Assessment 36, 11–19.

Junk, W.J., An, S., Finlayson, C.M., Gopal, B., Květ, J., Mitchell, S.A., Mitsch, W.J., Robarts, R.D., 2013. Current state of knowledge regarding the world’s wetlands and their future under global climate change: a synthesis. Aquat Sci 75, 151–167.

Kandus, P., Minotti, P.G., Morandeira, N.S., Grimson, R., González Trilla, G., González, E.B., San Martín, L., Gayol, M.P., 2018. Remote sensing of wetlands in South America: status and challenges. International Journal of Remote Sensing 39, 993–1016.

Maggiori, E., Tarabalka, Y., Charpiat, G., Alliez, P., 2017. Convolutional Neural Networks for Large-Scale Remote-Sensing Image Classification. IEEE Transactions on Geoscience and Remote Sensing 55, 645–657.

Reichstein, M., Camps-Valls, G., Stevens, B., Jung, M., Denzler, J., Carvalhais, N., Prabhat, 2019. Deep learning and process understanding for data-driven Earth system science. Nature 566, 195–204.

Tuia, D., Marcos, D., Schindler, K., Le Saux, B., 2021a. Deep Learning-based Semantic Segmentation in Remote Sensing, in: Deep Learning for the Earth Sciences. John Wiley & Sons, Ltd, pp. 46–66.

Tuia, D., Roscher, R., Wegner, J.D., Jacobs, N., Zhu, X., Camps-Valls, G., 2021b. Toward a Collective Agenda on AI for Earth Science Data Analysis. IEEE Geoscience and Remote Sensing Magazine 9, 88–104.

Zhu, X.X., Tuia, D., Mou, L., Xia, G.-S., Zhang, L., Xu, F., Fraundorfer, F., 2017. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geoscience and Remote Sensing Magazine 5, 8–36.

Further Information

Dr. Thiago Silva, University of Stirling,
