Blog Archives

MLLab at WASPAA, DCASE and SANE 2019

In late October, Machine Listening Lab researchers will be participating in a series of back-to-back workshops in the United States focused on audio signal processing and computational scene analysis: the 2019 IEEE Workshop on Applications of Signal Processing to Audio

Posted in dcase, Publications

MLLab at Interspeech 2019

The Machine Listening Lab will be participating in this year’s INTERSPEECH conference, taking place on 15-19 September 2019 in Graz, Austria. The following papers will be presented by MLLab members: “Towards joint sound scene and polyphonic sound event recognition” by

Posted in Events, Publications

PhD Studentships – AIM Centre for Doctoral Training

The Machine Listening Lab at Queen Mary University of London (QMUL) invites applications for PhD positions as part of the newly funded UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM). The AIM programme offers

Posted in Lab Updates, Opportunities

MLLab participates in DCASE 2018 Workshop and Challenge

Machine Listening Lab researchers will be participating in the 2018 Workshop and Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE). The workshop, now in its third iteration, is taking place on 19-20 November 2018 in Surrey,

Posted in dcase, Events, Publications

PhD Studentships – Alan Turing Institute & QMUL

The Machine Listening Lab at Queen Mary University of London (QMUL) invites applications for PhD positions to be held jointly between the Alan Turing Institute and QMUL. Three PhD students across QMUL will be sponsored by this

Posted in Opportunities

MLLab contributions to new Springer book on Sound Scene Analysis

MLLab members contributed two chapters to an upcoming book published by Springer, “Computational Analysis of Sound Scenes and Events”. The book, which is edited by Tuomas Virtanen, Mark D. Plumbley and Dan Ellis, will be published on 20 October

Posted in Publications

Best paper award at 2017 AES Conference on Semantic Audio

At the 2017 AES Conference on Semantic Audio, the paper “Automatic transcription of a cappella recordings from multiple singers” by Rodrigo Schramm and Emmanouil Benetos received the conference’s Best Paper Award. A postprint of the paper can be

Posted in Publications

MLLab research in the IEEE/ACM TASLP special issue on Sound Scene and Event Analysis

Two papers authored by members of the Machine Listening Lab have been published in a special issue of the IEEE/ACM Transactions on Audio, Speech, and Language Processing on “Sound Scene and Event Analysis”: D. Stowell, E. Benetos, and L. F. Gill,

Posted in Publications

Singing transcription project started

A new collaborative project that will address the problem of automatic transcription of multiple singers has been launched by Queen Mary University of London and Federal University of Rio Grande do Sul (UFRGS – Brazil). The £24k project, entitled “Automatic

Posted in Uncategorized