Research visit: Akash Jaiswal – Machine learning to analyse acoustic bird monitoring data
This week we are welcoming Akash Jaiswal, a PhD student from Jawaharlal Nehru University (Delhi, India). Akash has obtained a Newton-Bhabha Fund PhD placement to spend 3 months in the UK working with us on using machine learning to analyse acoustic bird monitoring data.
The placement description is as follows:
Increasing urbanization worldwide is directly associated with the rapid transformation of landscapes, impacting ever more natural habitats. Biodiversity assessment is essential for improving the management and quality of these habitats, for greater biodiversity benefit and better provisioning of ecosystem services, and for understanding the ecological changes shaping urban animal communities. Birds are a representative taxon for biodiversity monitoring in terrestrial habitats and have been studied frequently in this context. Bird communities in habitats close to urban infrastructure tend to exhibit reduced species richness, with a few successful species becoming more dominant than in adjacent natural habitats. But the mechanisms creating such community-level changes are poorly understood.
My PhD project aims to understand such variation in bird communities across different habitats in a fast-changing urban landscape like Delhi, using birds’ singing activity as a proxy for community composition. Acoustic monitoring of vocalizing animal communities is less time-consuming and more resource-efficient than field surveys of biodiversity. In this context, the aim of this research is to assess the efficacy of eco-acoustic indices for measuring and characterizing avian biodiversity, and their application in accounting for community-level variation in vocalizing birds (the avian soundscape).
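To give a flavour of what an eco-acoustic index is, here is a minimal sketch of one widely used example, the Acoustic Complexity Index (ACI), which measures frame-to-frame variation in spectral intensity. This is an illustrative implementation, not the exact pipeline used in the project; the function name and parameters are my own.

```python
import numpy as np
from scipy.signal import spectrogram

def acoustic_complexity_index(audio, sample_rate, nperseg=512):
    """Compute a simple Acoustic Complexity Index (ACI) for a mono signal.

    ACI sums, per frequency bin, the absolute intensity change between
    adjacent time frames, normalised by the bin's total intensity.
    Modulated sounds such as birdsong tend to score differently from
    steady background noise, which is why the index is used as a
    biodiversity proxy.
    """
    # Magnitude spectrogram: rows = frequency bins, columns = time frames
    _, _, sxx = spectrogram(audio, fs=sample_rate, nperseg=nperseg)
    # Per-bin sum of absolute changes between adjacent frames,
    # normalised by the bin's total intensity (epsilon avoids divide-by-zero)
    diffs = np.abs(np.diff(sxx, axis=1)).sum(axis=1)
    totals = sxx.sum(axis=1) + 1e-12
    return float((diffs / totals).sum())

# Toy comparison: a frequency-modulated "song" vs. faint random noise
rng = np.random.default_rng(0)
sr = 22050
t = np.linspace(0, 1.0, sr, endpoint=False)
song = np.sin(2 * np.pi * (2000 + 1500 * np.sin(2 * np.pi * 8 * t)) * t)
noise = rng.standard_normal(sr) * 0.1
print(acoustic_complexity_index(song, sr))
print(acoustic_complexity_index(noise, sr))
```

In practice such indices are computed over many short recordings per site and compared against ground-truth species counts to test how well they track community composition.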
Although acoustic monitoring appears to be a promising solution for biodiversity assessment, analysing the recorded acoustic samples remains challenging: manually detecting and identifying individual species’ vocalizations in large volumes of field recordings is nearly impossible, and is also subject to observer bias and error. Automating this analysis with machine learning techniques can facilitate species identification and data analysis. Dr Dan Stowell at Queen Mary University of London has worked for years on machine learning techniques for studying sound signals, including birdsong, music and environmental sounds. He also works on automated processes for analysing large amounts of sound recordings – detecting bird sounds and their relations to each other.
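To illustrate the kind of automation involved, here is a deliberately crude detector sketch: it flags time frames whose energy in a songbird-typical frequency band rises well above the recording's background level. This is not Dr Stowell's method; real systems replace this simple threshold rule with a trained classifier over spectrogram features, but the input/output shape of the task is the same. The band limits and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def detect_events(audio, sample_rate, band=(2000, 8000),
                  threshold_db=10.0, nperseg=1024):
    """Return times (s) of frames whose energy in `band` exceeds the
    median band energy by `threshold_db` decibels.

    A stand-in for a learned detector: in practice the threshold rule
    is replaced by a classifier trained on labelled bird recordings.
    """
    freqs, times, sxx = spectrogram(audio, fs=sample_rate, nperseg=nperseg)
    # Restrict to the band where much songbird energy lies (assumption)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    band_energy = sxx[mask].sum(axis=0)
    level_db = 10 * np.log10(band_energy + 1e-12)
    active = level_db > np.median(level_db) + threshold_db
    return times[active]

# Toy example: 2 s of faint noise with a loud 4 kHz burst at 0.9–1.1 s
sr = 22050
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
audio = 0.01 * np.random.default_rng(1).standard_normal(t.size)
burst = (t > 0.9) & (t < 1.1)
audio[burst] += np.sin(2 * np.pi * 4000 * t[burst])
hits = detect_events(audio, sr)
print(hits)  # frame times clustered around the 0.9–1.1 s burst
```

Once candidate events are detected, a second stage typically classifies each event to species, and downstream analyses relate the detections to each other in time – the kind of large-scale processing that manual listening cannot match.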
I am visiting Dr Stowell’s lab to learn and apply, under his supervision, machine learning methods and automated processes that will facilitate the analysis of the large amount of audio data I am collecting during my field sampling. I also believe his supervision will improve the quality and impact of my research, and that applying these techniques will enhance both my current and future projects.