Welcoming new Lecturer: Bhusan Chettri

We’re pleased to welcome new Lecturer Bhusan Chettri to our group!

Bhusan started a full-time role this month as Lecturer in Data Analytics. He says:

“I completed my Ph.D. at QMUL in August 2020 under the supervision of Dr. Emmanouil Benetos and Dr. Bob Sturm (who is now at KTH, Sweden). During my Ph.D. studies, I also worked as a part-time Lecturer in Data Analytics at QMUL.

“My Ph.D. research focused on the design and analysis of secure voice biometrics. I look forward to continuing my work on fake audio/speech detection using generative models, representation learning, and adversarial attacks. I am also keen to explore interpretability for voice biometrics and anti-spoofing systems.”


Posted in Uncategorized

Honorary Lectureship for Helen Bear

We’re pleased to announce that Helen Bear (Yogi), who has been working with us since 2018, has been awarded a QMUL Honorary Lectureship for 3 years.

Yogi says:

“I’m delighted to continue being a part of the team in C4DM. Complementary to my work in industry, at QMUL I am excited to continue my work in applied AI for the visual and audio domains. Most recently I have been working on environmental sound scene analysis for multiple tasks, such as audio geotagging. Additionally, I have been creating partnerships across QMUL, including with clinicians at the St Barts NHS Trust, to use AI to support healthcare and patients.”

To learn more about Yogi and her work, you can read this recent interview in Wonk Magazine.

Posted in People

Welcoming new Lecturer: Huy Phan

We’re pleased to welcome new Lecturer Huy Phan to our group!

Huy is a Lecturer in AI, and joined C4DM in April this year. His interests are a great match for the Machine Listening Lab, and we look forward to working together (remotely and in person!). Huy says:

“I am a Lecturer in AI at C4DM. Before joining QMUL, I was a postdoctoral research assistant at the University of Oxford and a lecturer at the University of Kent. I received my PhD from the University of Lübeck, Germany. I am interested in applying machine learning to temporal signal analysis and processing (e.g. audio, EEG).

“At C4DM, I hope to join forces with colleagues and students to contribute to the multi-view, multi-task, privacy-preserving, and non-i.i.d. generalisation perspectives of machine learning algorithms. I will focus on applications such as audio event detection and localisation, audio scene classification, speech enhancement, and healthcare.”

Posted in People

New journal papers from the Machine Listening Lab

The Machine Listening Lab has recently had a number of journal papers accepted and published, reflecting the lab’s research priorities of developing new machine learning and signal processing methodologies for audio and time-series analysis. Our new work ranges from methods for speaker anti-spoofing, to visibility graphs for large-scale time series analysis, to new evaluation methodologies for music prediction and transcription.
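
As a flavour of one of these directions, here is a minimal sketch of the natural visibility graph construction (Lacasa et al., 2008), which maps a time series onto a network so it can be analysed with graph tools: two time points are linked whenever the straight line between their values passes above every sample in between. This is our own toy illustration, not code from the papers:

```python
import numpy as np

def natural_visibility_graph(series):
    """Build the natural visibility graph of a 1-D time series.

    Nodes are time indices; an edge (a, b) exists when the straight
    line from (a, y[a]) to (b, y[b]) passes above every intermediate
    sample (the criterion of Lacasa et al., 2008).
    """
    y = np.asarray(series, dtype=float)
    n = len(y)
    edges = []
    for a in range(n - 1):
        for b in range(a + 1, n):
            # Visibility criterion: every intermediate sample lies
            # strictly below the chord joining points a and b.
            if all(y[c] < y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                edges.append((a, b))
    return edges

# Example: edges of the visibility graph of a short random series.
rng = np.random.default_rng(0)
print(natural_visibility_graph(rng.normal(size=8)))
```

This brute-force version scales poorly with series length; it is meant only to convey the construction, not to handle large-scale data.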

The list of our recently accepted and published journal papers can be found below; many of them are freely available or have links to preprints so you can read them already:

Posted in Publications

MLLab papers at ICASSP 2020

On 4-8 May 2020, several MLLab researchers will participate virtually in the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2020). ICASSP is the flagship conference of the IEEE Signal Processing Society and the leading conference in the field of signal processing. This year, ICASSP is organised as a virtual conference with free registration for attendees.

As in previous years, the Machine Listening Lab will have a strong presence at the conference, both in terms of numbers and overall impact. The following papers authored/co-authored by MLLab members will be presented:

Posted in Events, Publications

MLLab at WASPAA, DCASE and SANE 2019

In late October, Machine Listening Lab researchers will participate in a series of back-to-back workshops in the United States focused on audio signal processing and computational scene analysis: the 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA 2019) in New Paltz, NY; the 8th Speech and Audio in the Northeast workshop (SANE 2019) in New York City; and the 4th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2019), also in New York City.

The following papers from MLLab members will be presented at WASPAA 2019:

The following papers will be presented at DCASE 2019:

Finally, the following posters will be presented at SANE 2019:

  • “An extensible cluster-graph taxonomy for open set sound scene analysis”, by Helen L. Bear and Emmanouil Benetos
  • “Adversarial Attacks in Audio Applications”, by Vinod Subramanian, Emmanouil Benetos, and Mark B. Sandler
  • “Neural Machine Transcription for Sound Events”, by Arjun Pankajakshan, Helen L. Bear, and Emmanouil Benetos

See you in New York state and city!

Posted in dcase, Publications

MLLab at Interspeech 2019

The Machine Listening Lab will be participating in this year’s INTERSPEECH conference, taking place on 15-19 September 2019 in Graz, Austria. The following papers will be presented by MLLab members:

Posted in Events, Publications

Research visit: Akash Jaiswal – Machine learning to analyse acoustic bird monitoring data

This week we are welcoming Akash Jaiswal, a PhD student from Jawaharlal Nehru University (Delhi, India). Akash has obtained a Newton-Bhabha Fund PhD placement to spend three months in the UK working with us, using machine learning to analyse acoustic bird monitoring data.

The placement description is as follows:

Increasing urbanization worldwide is directly associated with the rapid transformation of landscapes, impacting more and more natural habitats. Biodiversity assessment is essential to improve the management and quality of these habitats for greater biodiversity benefit and better provisioning of ecosystem services, and also to understand the ecological changes shaping urban animal communities. Birds are representative taxa for biodiversity monitoring in terrestrial habitats and have been studied frequently in this context. It has been observed that bird communities in habitats close to urban infrastructure exhibit reduced species richness, with a few successful species becoming more dominant compared to adjacent natural habitats. But the mechanisms creating such community-level changes are poorly understood.

My PhD project aims to understand such variations in bird communities across different habitats in a fast-changing urban landscape like Delhi, using birds’ singing activity as a proxy for community composition. Acoustic monitoring of vocalizing animal communities is less time-consuming and more resource-efficient than field surveys of biodiversity. In this context, the aim of this research is to assess the efficacy of using eco-acoustic indices to measure and characterize avian biodiversity, and their application in accounting for community-level variation in vocalizing birds (the avian soundscape).
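
As a concrete, hypothetical illustration of what such an eco-acoustic index can look like, the sketch below computes the Acoustic Complexity Index (ACI) of Pieretti et al. (2011) from a magnitude spectrogram; the project itself may use different indices and implementations:

```python
import numpy as np

def acoustic_complexity_index(spectrogram):
    """Acoustic Complexity Index (Pieretti et al., 2011).

    For each frequency bin, sum the absolute differences between
    successive magnitudes over time and normalise by the bin's total
    energy; the ACI is the sum of these ratios over all bins.
    `spectrogram` is a (frequency, time) array of magnitudes.
    """
    S = np.asarray(spectrogram, dtype=float)
    diffs = np.abs(np.diff(S, axis=1)).sum(axis=1)  # per-bin variability
    totals = S.sum(axis=1) + 1e-12                  # avoid division by zero
    return float((diffs / totals).sum())

# Example with a toy spectrogram: 4 frequency bins, 100 time frames.
rng = np.random.default_rng(1)
print(acoustic_complexity_index(rng.random((4, 100))))
```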

Although acoustic monitoring appears to be a promising solution for biodiversity assessment, the analysis of recorded acoustic samples remains challenging: manually detecting and identifying individual species’ vocalizations in large amounts of field recordings is nearly impossible, and is also subject to observer bias/error. Automating such analysis with machine learning can facilitate species identification and data analysis. Dr Dan Stowell at Queen Mary University of London has worked for years on machine learning techniques for studying sound signals, including birdsong, music and environmental sounds. He is also working on automated processes for analyzing large amounts of sound recordings – detecting bird sounds and their relations to each other.
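
As a toy illustration of the kind of first-pass automation this enables, the sketch below flags spectrogram frames with unusually high energy in a typical birdsong frequency band. The band limits, threshold, and function name are our own hypothetical choices; real systems use learned classifiers rather than a simple energy rule:

```python
import numpy as np
import librosa

def detect_active_frames(wav_path, band=(2000, 8000), threshold_db=15.0):
    """Flag spectrogram frames with high energy in a birdsong band.

    A crude first-pass detector: frames whose energy in `band` (Hz)
    exceeds the recording's median by `threshold_db` dB are marked
    active, to be passed on for closer (e.g. classifier-based) analysis.
    """
    audio, sr = librosa.load(wav_path, sr=None)
    S = np.abs(librosa.stft(audio, n_fft=1024, hop_length=512))
    freqs = librosa.fft_frequencies(sr=sr, n_fft=1024)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    energy_db = librosa.amplitude_to_db(S[in_band].sum(axis=0))
    return energy_db > np.median(energy_db) + threshold_db

# Example (hypothetical file): fraction of frames flagged as active.
# active = detect_active_frames("field_recording.wav")
# print(active.mean())
```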

I am visiting Dr Stowell’s lab and working under his supervision to learn and apply machine learning methods and automated processes for analysing the large amount of audio data I am collecting during my field sampling. I believe his supervision will improve the quality and impact of my research, and will help me apply these modern techniques in my current and future projects.

Posted in Bird, Lab Updates

PhD Studentships – AIM Centre for Doctoral Training

The Machine Listening Lab at Queen Mary University of London (QMUL) invites applications for PhD positions as part of the newly funded UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM). The AIM programme offers up to 12 fully-funded PhD studentships starting in September 2019, with more studentships in the coming years. Studentships cover fees and a stipend for 4 years.

The Machine Listening Lab is inviting applications for the AIM PhD programme for the following topics:

The deadline for applications is 15 April 2019. Detailed application guidelines can be found on the AIM website. For informal queries about the above topics, or other potential topics which could be supported by the Machine Listening Lab, please email Dan Stowell and Emmanouil Benetos. For informal queries regarding the application process, please email aim-enquiries@qmul.ac.uk.

Posted in Lab Updates, Opportunities

ICASSP 2019 research papers from the MLLab

The QMUL Machine Listening Lab (MLLab) has had great success with research papers accepted for ICASSP 2019!

ICASSP 2019 (the IEEE International Conference on Acoustics, Speech and Signal Processing) is one of the major conferences in the field, and this year will be held in Brighton, UK.

Here are the papers from our lab that will be presented at the event. Many of them have links to preprints so you can read them already:

In addition, Dan Stowell (with Naomi Harte and Theo Damoulas) will be chairing a Special Session on *Adaptive Signal Processing for Wildlife Bioacoustics*, for which six papers have been accepted for oral presentation.

See you in Brighton!

Posted in Events, Publications