MLLab at Interspeech 2019

The Machine Listening Lab will be participating in this year’s INTERSPEECH conference, taking place on 15-19 September 2019 in Graz, Austria. The following papers will be presented by MLLab members:

Posted in Events, Publications

Research visit: Akash Jaiswal – Machine learning to analyse acoustic bird monitoring data

This week we are welcoming Akash Jaiswal, a PhD student from Jawaharlal Nehru University (Delhi, India). Akash has obtained a Newton-Bhabha Fund PhD placement to spend 3 months in the UK working with us on using machine learning to analyse acoustic bird monitoring data.

The placement description is as follows:

Increasing urbanization worldwide is directly associated with the rapid transformation of landscapes, impacting ever more natural habitats. Biodiversity assessment is essential to improve the management and quality of these habitats for greater biodiversity benefit and better provisioning of ecosystem services, and also to understand the ecological changes shaping urban animal communities. Birds are a representative taxon for biodiversity monitoring in terrestrial habitats and have been studied frequently in this context. Bird communities in habitats close to urban infrastructure have been observed to exhibit reduced species richness, with a few successful species becoming more dominant than in adjacent natural habitats. But the mechanisms creating such community-level changes are poorly understood.

My PhD project aims to understand such variations in bird communities across different habitats in a fast-changing urban landscape like Delhi, using birds’ singing activity as a proxy for community composition. Acoustic monitoring of vocalizing animal communities is less time-consuming and more resource-efficient than field surveys of biodiversity. In this context, the aim of this research is to assess the efficacy of eco-acoustic indices for measuring and characterizing avian biodiversity, and their application to account for community-level variation in vocalizing birds (the avian soundscape).

Although acoustic monitoring appears to be a promising solution for biodiversity assessment, analysing the recorded acoustic samples remains challenging: manually detecting and identifying individual species’ vocalizations in large volumes of field recordings is nearly impossible, and is also subject to observer bias and error. Automating this analysis with machine learning techniques can facilitate species identification and data analysis. Dr Dan Stowell at Queen Mary University of London has worked for years on machine learning techniques for studying sound signals, including birdsong, music and environmental sounds. He also works on automated processes for analysing large amounts of sound recordings – detecting bird sounds and their relations to one another.

I am visiting Dr Stowell’s lab and working under his supervision to learn and apply machine learning methods and automated processes to the large amount of audio data I am collecting during my field sampling. I also believe that his supervision will improve the quality and impact of my research, and that experience with these modern techniques will enhance my current and future projects.
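As a concrete illustration of the eco-acoustic indices mentioned above, here is a minimal sketch (not code from Akash’s project) of one widely used index, the Acoustic Complexity Index of Pieretti et al. (2011), computed here over the whole clip rather than in the usual temporal sub-windows, for brevity. The filename is hypothetical; it assumes numpy and librosa are available.

```python
# Sketch: Acoustic Complexity Index (ACI) of a field recording.
# Simplified to a single window covering the whole clip.
import numpy as np
import librosa

def acoustic_complexity_index(y, n_fft=512, hop_length=256):
    """For each frequency bin, sum the absolute intensity differences
    between adjacent frames, normalise by the bin's total intensity,
    then sum over all bins."""
    S = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop_length))
    variation = np.abs(np.diff(S, axis=1)).sum(axis=1)  # per-bin temporal change
    total = S.sum(axis=1) + 1e-12                       # per-bin total intensity
    return float((variation / total).sum())

y, sr = librosa.load("field_recording.wav", sr=None, mono=True)  # hypothetical file
print("ACI:", acoustic_complexity_index(y))
```

Higher values indicate more rapid intensity fluctuation, which tends to correlate with vocal activity rather than steady noise such as wind or traffic.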

Posted in Bird, Lab Updates

PhD Studentships – AIM Centre for Doctoral Training

The Machine Listening Lab at Queen Mary University of London (QMUL) invites applications for PhD positions as part of the newly funded UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM). The AIM programme offers up to 12 fully-funded PhD studentships starting in September 2019, with more studentships in the coming years. Studentships cover fees and a stipend for 4 years.

The Machine Listening Lab is inviting applications for the AIM PhD programme for the following topics:

The deadline for applications is 15 April 2019. Detailed application guidelines can be found on the AIM website. For informal queries about the above topics, or other potential topics which could be supported by the Machine Listening Lab, please email Dan Stowell and Emmanouil Benetos. For informal queries regarding the application process, please email aim-enquiries@qmul.ac.uk.

Posted in Lab Updates, Opportunities

ICASSP 2019 research papers from the MLLab

The QMUL Machine Listening Lab (MLLab) has had great success with research papers accepted for ICASSP 2019!

ICASSP 2019 (the IEEE International Conference on Acoustics, Speech and Signal Processing) is one of the major conferences in the field, and this year it will be held in Brighton, UK.

Here are the papers from our lab that will be presented at the event. Many of them have links to preprints so you can read them already:

In addition, Dan Stowell will chair a Special Session on *Wildlife Bioacoustics and Adaptive Signal Processing*, together with Naomi Harte and Theo Damoulas. Six papers have been accepted for oral presentation in the session.

See you in Brighton!

Posted in Events, Publications

Suggested reading: getting going with deep learning

Based on a conversation we had in the Machine Listening Lab last week, here are some blogs and other things you can read when you’re – say – a new PhD student who wants to get started with applying/understanding deep learning. We can recommend plenty of textbooks too, but here it’s mainly blogs and other informal introductions. Our recommended reading:

Andrew Ng’s Coursera course on “Deep Learning”
– it’s not free to be a student on the course, BUT it is free to “audit” it, either by signing in or simply by watching the videos on YouTube

A brief overview of Deep Learning – a very good intro. (Also: DO read the comments. Some big names give their thoughts.)
http://yyue.blogspot.co.uk/2015/01/a-brief-overview-of-deep-learning.html

This overview Nature paper is good too:
http://www.nature.com/nature/journal/v521/n7553/full/nature14539.html

This blog post series covers the LINEAR ALGEBRA underlying deep learning and numerical optimisation
https://hadrienj.github.io/posts/Deep-Learning-Book-Series-Introduction/

Introductions that show all the different types of NN architectures:
https://towardsdatascience.com/the-mostly-complete-chart-of-neural-networks-explained-3fb6f2367464
https://blog.statsbot.co/neural-networks-for-beginners-d99f2235efca

Deep learning book (free online) by Ian Goodfellow, Yoshua Bengio and Aaron Courville
https://www.deeplearningbook.org/

Deep learning book (free online) by Michael Nielsen
http://neuralnetworksanddeeplearning.com/

PRACTICAL:

My Neural Network isn’t working! What should I do?
http://theorangeduck.com/page/neural-network-not-working
…you’ll need this!
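One standard first check recommended in debugging write-ups like the one above (this sketch is generic, not taken from that article): make sure your network can overfit a single small batch. If it can’t drive the loss close to zero, something in the data pipeline, loss or optimiser wiring is broken.

```python
# Sanity-check sketch (PyTorch): try to overfit one fixed random batch.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x = torch.randn(16, 20)             # one fixed batch of 16 examples
y = torch.randint(0, 2, (16,))      # arbitrary labels, just for the check
for step in range(500):
    opt.zero_grad()
    loss = nn.CrossEntropyLoss()(net(x), y)
    loss.backward()
    opt.step()
print(loss.item())  # should end up near 0 for a healthy setup
```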

ADVANCED:

A very readable tutorial on image generation using deep learning (specifically, GANs)
http://bamos.github.io/2016/08/09/deep-completion/

For Multi Task Learning:
http://ruder.io/multi-task/index.html

For LSTMs (a popular type of recurrent neural network):
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
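To make the LSTM reading concrete, here is a minimal, self-contained PyTorch sketch (toy random data, no real task) of the standard pattern: run a sequence through an LSTM and classify it from the final hidden state.

```python
# Toy LSTM classifier sketch: sequences of feature vectors in, class logits out.
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    def __init__(self, n_features=40, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):           # x: (batch, time, features)
        _, (h, _) = self.lstm(x)    # h: final hidden state, shape (1, batch, hidden)
        return self.head(h[-1])     # class logits per sequence

model = SeqClassifier()
x = torch.randn(8, 100, 40)         # 8 sequences, 100 time steps, 40 features each
print(model(x).shape)               # torch.Size([8, 2])
```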

Posted in Uncategorized

Machine Listening Lab 2018: The year in review

2018 has been a fascinating year for the Machine Listening Lab. Here are the headlines!

Grant success and projects

Events

  • MLLab Symposium 2018: featuring an invited industrial speaker (Katerina Kosta, Jukedeck), 3 internal speakers, and 11 lightning talks from students and postdocs. There were 42 attendees including MLLab members, interested people from around QMUL, and a collaborator visiting from Brazil (Rodrigo Schramm).
  • SoundCamp 2018: Animal Diplomacy Bureau, an art/design/public engagement commission by Kaylene Kau for Dan Stowell’s research project. Lots of members of the public, adults and children, engaged with bird sounds and birds’ lives in a park in South London. MLLab students helped make this a success: Will Wilkinson, Veronica Morfi, and Sophie McDonald.
  • QMUL Festival of Ideas: a group of MLLab academics and postdocs got together to play FolkRNN tunes from Bob Sturm’s algorithm.
  • The second Bird Audio Detection challenge took place in Summer 2018: a data challenge lead-organised by Dan Stowell. Thirteen teams from around the world took part, and the highest-scoring systems represented a dramatic improvement on the state of the art.
  • Emmanouil Benetos was Programme Co-Chair for the 19th International Society for Music Information Retrieval Conference (ISMIR 2018).
  • Machine Listening Lab researchers took part in the 2018 Workshop and Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE 2018) – presenting papers and also leading a task on Bird Audio Detection. (More info)
  • Dan Stowell announced a Special Session on “Wildlife Bioacoustics and Adaptive Signal Processing”, to happen at the ICASSP 2019 conference. He will chair it together with Naomi Harte and Theo Damoulas.

Awards

  • Dan Stowell and Emmanouil Benetos were awarded Turing Fellowships from The Alan Turing Institute (2018-2020).
  • Emmanouil Benetos received the “Outstanding Contribution to ISMIR Award” from the International Society for Music Information Retrieval (Oct 2018).
  • Emmanouil Benetos was promoted to Senior Lecturer (Oct 2018).
  • Emmanouil Benetos received a “Research Performance Award” from the Faculty of Science & Engineering of Queen Mary University of London (Jan 2018).

New people, visitors, and farewells

Farewells:

  • We said farewell to Bob Sturm, a founding member and co-leader of the MLLab, who has moved to Stockholm to take up a position as Associate Professor at KTH.
  • Congratulations to MLLab PhD student Maria Panteli, who successfully defended her PhD on Computational Analysis of World Music Corpora (April 2018).

New people:

Visiting researchers:

  • Pavel Linhart and Tereza Petruskova (Czech Republic)
  • Vincent Lostanlen (Cornell University & New York University, USA)
  • Mathieu Lagrange and Felix Gontier (Ecole Centrale de Nantes, France)

Editorial activities

Invited talks

  • Dan Stowell gave the invited opening lecture at the 2018 Intelligent Sensing Summer School (video online here), and an invited talk at Silwood Park (Imperial College London).
  • Emmanouil Benetos gave a keynote on “Automatic transcription of world music collections” at the 8th International Workshop on Folk Music Analysis, Thessaloniki, Greece, June 2018.
  • Emmanouil Benetos gave an invited talk on “Automatic Music Transcription: Representations and Categorical (mis)Conceptions” at the 5th International Conference on Analytical Approaches to World Music, Thessaloniki, Greece, June 2018.

Selected Publications

Book chapters:

Approaches to complex sound scene analysis
E Benetos, D Stowell, and M D Plumbley
Computational Analysis of Sound Scenes and Events, Springer, 2018

Computational bioacoustic scene analysis
D Stowell
Computational Analysis of Sound Scenes and Events, Springer, 2018

Journal articles:

Deep Learning for Audio Event Detection and Tagging on Low-Resource Datasets
V Morfi and D Stowell
Applied Sciences 8 (8), 1397

Automatic acoustic detection of birds through deep learning: the first Bird Audio Detection challenge
D Stowell, MD Wood, H Pamuła, Y Stylianou, and H Glotin
Methods in Ecology and Evolution

A supervised classification approach for note tracking in polyphonic piano transcription
JJ Valero-Mas, E Benetos, and JM Iñesta
Journal of New Music Research, 1-15

A review of manual and computational approaches for the study of world music corpora
M Panteli, E Benetos, and S Dixon
Journal of New Music Research 47 (2), 176-189

Speaker recognition with hybrid features from a deep belief network
H Ali, SN Tran, E Benetos, and ASA Garcez
Neural Computing and Applications 29 (6), 13-19

Detection and classification of acoustic scenes and events: Outcome of the DCASE 2016 challenge
A Mesaros, T Heittola, E Benetos, P Foster, M Lagrange, T Virtanen, and MD Plumbley
IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP) 26 (2), 379-393

Posted in Events, Lab Updates, Publications

MLLab participates in DCASE 2018 Workshop and Challenge

Machine Listening Lab researchers will be participating in the 2018 Workshop and Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE). The workshop, now in its third iteration, is taking place on 19-20 November 2018 in Surrey, UK, and aims to provide a venue for researchers working on computational sound scene analysis to present and discuss their work. The challenge, now in its fourth edition, continues to support the development of computational sound scene analysis methods by comparing different approaches on common, publicly available datasets.

The following papers by MLLab researchers will be presented at the DCASE 2018 Workshop:

In addition, Dan Stowell is the lead organiser of DCASE Challenge Task 3, which is all about bird audio detection. The challenge made use of data from collaborators all around the world, and 13 teams participated. Dan will be presenting the overall results of the bird audio detection task.

See you all in Surrey!

Posted in dcase, Events, Publications

PhD Studentships – Alan Turing Institute & QMUL

The Machine Listening Lab at Queen Mary University of London (QMUL) invites applications for PhD positions to be held jointly between the Alan Turing Institute and QMUL. Three PhD students across QMUL will be sponsored by this scheme, which provides 3.5-year studentships with an increased annual stipend of £20,500 plus a travel allowance and tuition fees. Doctoral students will split their time between QMUL and the Institute’s headquarters at the British Library in London. Students will be supervised by Turing Fellows from QMUL and will work in collaboration with research teams at the Turing Institute.

The Machine Listening Lab is inviting applications for Turing-QMUL PhD projects in the following research topics:

  • Privacy-preserving urban/domestic sound monitoring
  • Automatic recognition of acoustic environments in sound archives
  • High-resolution machine listening for wildlife sounds

Please note that the Turing-QMUL PhD funding spans many subject areas, and your application will be considered alongside applications from other disciplines. More information on the Turing-QMUL PhD positions is available at: https://www.applieddatascience.qmul.ac.uk/news/?eid=3623

Candidates who are interested in carrying out research hosted by the Machine Listening Lab should apply directly to the School of Electronic Engineering and Computer Science of Queen Mary University of London at: http://www.eecs.qmul.ac.uk/phd/how-to-apply/

The deadline for submission is midday on Monday 14 January 2019. In your application, and when choosing a supervisor, please make it clear that you would like to be considered for the Turing doctoral studentship. Queen Mary will complete an initial assessment of your application and will refer selected candidates to the Institute in early February 2019. If successful, you will then be invited to a further interview at The Alan Turing Institute.

For informal queries before application, or to discuss topics, please email Dan Stowell and Emmanouil Benetos.

Posted in Opportunities

Special Session: “Wildlife Bioacoustics and Adaptive Signal Processing”, IEEE ICASSP 2019 (Brighton, UK)


Organisers: Dan Stowell, Naomi Harte, Theo Damoulas


Summary:

Wildlife bioacoustics is witnessing a surge in both data volumes and computational methods. Monitoring projects worldwide collect many thousands of hours of audio each [1,2,3,4], and computational methods are now able to mine these datasets to detect, isolate and characterise recorded wildlife sounds at scale [2,5]. Such bioacoustic data is crucial for monitoring the rapid declines in many wildlife populations [6], as well as advancing the science of animal behaviour.

However, many open problems remain, including:

  • detecting/discriminating very brief low-SNR sounds;
  • estimating distance to animals from single-microphone recordings;
  • integrating evidence from multiple sensors, and from differing sensing sources (e.g. radar plus acoustics);
  • sampling bias, imbalance and concept drift in data sources;
  • high-resolution and perceptual measures of animal sound similarity;
  • sound source separation and localisation in complex wildlife sound scenes.

These problems must be solved in order to bring the data fully to bear on urgent global issues such as the loss of animal habitats.

New methods come from machine learning and advanced signal processing: matrix factorisations, deep learning, Gaussian processes, novel time-frequency transforms, and more. These new methods have been shown to be useful in various projects across many species – birds, bats, cetaceans and terrestrial mammals. However, much of this work happens in isolated projects, and there is a need to bring together lessons learned and to establish state-of-the-art methods and directions for the future of this field.
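As a small, hedged illustration of one of the method families just mentioned, here is a sketch applying non-negative matrix factorisation to a recording’s magnitude spectrogram (the filename is hypothetical; it assumes librosa and scikit-learn are available). Each learned component is a candidate recurring spectral shape, such as a call type.

```python
# Sketch: NMF decomposition of a wildlife recording's magnitude spectrogram.
import numpy as np
import librosa
from sklearn.decomposition import NMF

y, sr = librosa.load("wildlife_clip.wav", sr=None, mono=True)  # hypothetical file
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=512))        # non-negative matrix

nmf = NMF(n_components=8, init="nndsvd", max_iter=400)
W = nmf.fit_transform(S)   # (freq_bins, 8): spectral templates
H = nmf.components_        # (8, frames): activation of each template over time
# Columns of W capture recurring spectral shapes (candidate call types);
# rows of H show when each shape occurs, usable as a simple detection cue.
```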

The topic of computational wildlife bioacoustics is growing but lacks any dedicated conference or workshop. ICASSP is an ideal event through which to strengthen the application and the development of signal processing work in this increasingly important domain.


Submissions:

** Please note: submissions are not through the main ICASSP submission website! Use the link below. **

Full papers (4 pages, with an optional 5th page for references) should be submitted via the ICASSP special session submission system. The deadline is the same as for other ICASSP papers (29 October 2018). The review process and the quality threshold for acceptance will also be the same.

For template and formatting information, please see the paper kit.


Organiser biographies:

Dan Stowell

Dan Stowell is a Senior Researcher at Queen Mary University of London. He co-leads the Machine Listening Lab, based in the Centre for Digital Music, and is also a Turing Fellow at the Alan Turing Institute. Dan has worked on voice, music and environmental soundscapes, and is currently leading a five-year EPSRC fellowship project researching the automatic analysis of bird sounds.

Naomi Harte

Naomi Harte is Associate Professor of Digital Media Systems and a Fellow of Trinity College Dublin (TCD), Ireland. She is Co-PI of the ADAPT Research Centre and a PI in the Sigmedia Research Group. Naomi’s primary focus is human speech communication, including speech quality, audio-visual speech processing, speaker verification for biometrics, and emotion in speech. Since 2012, she has collaborated on birdsong analysis with the Department of Zoology at TCD.

Theo Damoulas

Theo Damoulas is an Associate Professor in Data Science with a joint appointment in Computer Science and Statistics at the University of Warwick. He is a Turing Fellow at the Alan Turing Institute and a visiting academic at NYU. His research interests are in machine learning and Bayesian statistics with a focus on spatio-temporal problems in urban science and computational sustainability.


References:

[1] Lostanlen et al. (2018), “BirdVox-full-night: a dataset and benchmark for avian flight call detection”, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, Canada, April 2018.

[2] Stowell et al. (2018), “Automatic acoustic detection of birds through deep learning: the first Bird Audio Detection challenge”, Methods in Ecology and Evolution, in press. https://arxiv.org/abs/1807.05812

[3] AmiBio project, http://www.amibio-project.eu/

[4] Sullivan et al. (2014), “The eBird enterprise: an integrated approach to development and application of citizen science”, Biological Conservation, vol. 169, pp. 31–40.

[5] Goëau et al. (2016), “LifeCLEF Bird Identification Task 2016: the arrival of deep learning”, Working Notes of CLEF 2016, pp. 440–449.

[6] Joppa (2017), “The case for technology investments in the environment”, Nature, vol. 552, pp. 325–328.

Posted in Events

Turing Fellowship success for the Machine Listening Lab

Two of the Machine Listening Lab’s lead academics, Dan Stowell and Emmanouil Benetos, have been awarded Turing Fellowships.

The Alan Turing Institute (ATI) is the UK’s national institute for data science and artificial intelligence, founded in 2015. Our university, QMUL, recently joined the ATI as a partner. The new QMUL Institute of Applied Data Sciences connects QMUL’s researchers working on data science and artificial intelligence, and also acts as a conduit to the ATI.

Through the Turing Fellowships, effective 1 October 2018, Stowell and Benetos plan to work with ATI partners to build on their research themes in advanced audio analysis – analysing urban, wildlife, domestic and musical sound recordings. They will work with academic, industry and government partners on the effective and ethical development of technology, including the development of privacy-preserving analysis methods.

Posted in Uncategorized