Call for PhD applications at the Machine Listening Lab – 2025 entry

The Machine Listening Lab at Queen Mary University of London is inviting applications for PhD study starting in Autumn 2025, across various funding schemes. Below are suggested PhD topics offered by lab academics; interested applicants can apply for a PhD under one of these topics, or can propose their own. In all cases, prospective applicants are strongly encouraged to contact lab academics to informally discuss prospective research topics.

Opportunities include internally and externally funded positions for PhD projects starting in Autumn 2025. It is also possible to apply as a self-funded student or with funding from another source. Applicants can apply for a 3-year PhD in Computer Science or Electronic Engineering, or for a 4-year PhD in AI and Music. Studentship opportunities include:


Audio-visual sensing for machine intelligence

Supervisor: Lin Wang

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

The project aims to develop novel audio-visual signal processing and machine learning algorithms that improve machine intelligence and autonomy in unknown environments, and to understand how humans behave when interacting with robots. The project will investigate the application of AI algorithms to audio-visual scene analysis in real-life environments. One example is to employ multimodal sensors, e.g. microphones and cameras, to analyse the various sources and events present in the acoustic environment. Tasks to be considered include audio-visual source separation, localisation/tracking, audio-visual event detection/recognition, and audio-visual scene understanding.
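As a rough illustration only (not a specification of the project), the sketch below shows one common baseline for audio-visual event recognition: a late-fusion classifier that encodes each modality separately and concatenates the embeddings. All architecture choices, dimensions and names are placeholder assumptions.

```python
# Minimal late-fusion sketch for audio-visual event recognition (PyTorch).
# Input shapes and layer sizes are illustrative placeholders.
import torch
import torch.nn as nn

class LateFusionAVClassifier(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        # Audio branch: tiny CNN over log-mel spectrograms
        # shaped (batch, 1, mel_bins, time_frames).
        self.audio_net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),                      # -> (batch, 16)
        )
        # Visual branch: MLP over precomputed per-clip video embeddings.
        self.video_net = nn.Sequential(nn.Linear(512, 16), nn.ReLU())
        # Fusion head: concatenate both modalities, then classify.
        self.head = nn.Linear(16 + 16, n_classes)

    def forward(self, audio, video):
        a = self.audio_net(audio)              # (batch, 16)
        v = self.video_net(video)              # (batch, 16)
        return self.head(torch.cat([a, v], dim=1))

model = LateFusionAVClassifier()
logits = model(torch.randn(4, 1, 64, 100), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 10])
```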


Automated machine learning for music understanding

Supervisor: Emmanouil Benetos

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

The field of music information retrieval (MIR) has been growing for more than 20 years, with recent advances in deep learning having revolutionised the way machines can make sense of music data. At the same time, research in the field is still constrained by laborious tasks involving data preparation, feature extraction, model selection, architecture optimisation, hyperparameter optimisation, and transfer learning, to name but a few. Some of the model and experimental design choices made by MIR researchers also reflect their own biases.

Inspired by recent developments in machine learning and automation, this PhD project will investigate and develop automated machine learning methods which can be applied at any stage of the MIR pipeline, so as to build music understanding models ready for deployment across a wide range of tasks. The project will also compare the automated decisions made at each step of the MIR pipeline with the manual design choices made by researchers. The successful candidate will investigate, propose and develop novel deep learning methods for automating music understanding, resulting in models that can accelerate MIR research and contribute to the democratisation of AI.
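For illustration, one small piece of such automation, hyperparameter and architecture search, can be sketched as a random search over a configuration space; the search space, trial budget and evaluation function below are placeholder assumptions, not part of the project.

```python
# Toy sketch of one AutoML step in an MIR pipeline: random search over
# feature, optimisation and architecture choices. Everything here is a
# placeholder; a real system would train and validate an actual model.
import random

search_space = {
    "n_mels": [64, 96, 128],              # feature-extraction choice
    "learning_rate": [1e-4, 3e-4, 1e-3],  # optimisation choice
    "n_layers": [2, 4, 6],                # architecture choice
}

def train_and_evaluate(config):
    # Placeholder: train a music-understanding model with `config`
    # and return its validation score. Simulated here.
    return random.random()

best_config, best_score = None, float("-inf")
for _ in range(20):                        # fixed trial budget
    config = {k: random.choice(v) for k, v in search_space.items()}
    score = train_and_evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, best_score)
```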


Interpretable AI for Sound Event Detection and Classification

Supervisors: Lin Wang and Emmanouil Benetos

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

Deep learning models have revolutionised the state of the art in environmental sound recognition, motivated by applications in healthcare, smart homes, and urban planning. However, most systems used in these applications are black boxes: they cannot be inspected, and the rationale behind their decisions is obscure. Despite recent advances, there is still a lack of research on interpretable machine learning in the audio domain. Applicants are invited to develop ideas to reduce this gap by proposing interpretable deep learning models for automatic sound event detection and classification in real-life environments.
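As a purely illustrative example, one widely used interpretability technique, the gradient-based saliency map, highlights which time-frequency bins of a spectrogram most influenced a classifier's decision. The classifier below is an untrained stand-in, and all shapes are assumptions.

```python
# Sketch of a gradient-based saliency map for a sound classifier (PyTorch).
# The model is an untrained placeholder; shapes are illustrative.
import torch
import torch.nn as nn

classifier = nn.Sequential(    # stand-in for a trained sound-event model
    nn.Flatten(),
    nn.Linear(64 * 100, 10),   # 64 mel bins x 100 frames -> 10 classes
)

spectrogram = torch.randn(1, 1, 64, 100, requires_grad=True)
logits = classifier(spectrogram)
predicted = logits.argmax(dim=1).item()

# Gradient of the predicted class score w.r.t. the input indicates
# which time-frequency bins most influenced the decision.
logits[0, predicted].backward()
saliency = spectrogram.grad.abs().squeeze()   # (64, 100) saliency map
print(saliency.shape)                         # torch.Size([64, 100])
```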