MLLab participates in DCASE 2018 Workshop and Challenge

Machine Listening Lab researchers will be participating in the 2018 Workshop and Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE). The workshop, now in its third edition, takes place on 19-20 November 2018 in Surrey, UK, and aims to provide a venue for researchers working on computational sound scene analysis to present and discuss their work. The challenge, now in its fourth edition, continues to support the development of computational sound scene analysis methods by comparing different approaches on common, publicly available datasets.

The following papers by MLLab researchers will be presented at the DCASE 2018 Workshop:

In addition, Dan Stowell is the lead organiser of DCASE Challenge Task 3, which is all about bird audio detection. The task made use of data from collaborators around the world, and 13 teams participated. Dan will be presenting the overall results of the bird audio detection task.

See you all in Surrey!

Posted in dcase, Events, Publications

PhD Studentships – Alan Turing Institute & QMUL

The Machine Listening Lab at Queen Mary University of London (QMUL) invites applications for PhD positions held jointly between the Alan Turing Institute and QMUL. Three PhD students across QMUL will be sponsored by this scheme, which provides 3.5-year studentships with an increased annual stipend of £20,500 plus a travel allowance and tuition fees. Doctoral students will split their time between QMUL and the Institute headquarters at the British Library in London. Students will be supervised by Turing Fellows from QMUL and will work in collaboration with research teams at the Turing Institute.

The Machine Listening Lab is inviting applications for Turing-QMUL PhD projects in the following research topics:

  • Privacy-preserving urban/domestic sound monitoring
  • Automatic recognition of acoustic environments in sound archives
  • High-resolution machine listening for wildlife sounds

Please note that the Turing-QMUL PhD funding spans many subject areas, so your application will be considered alongside applications from other disciplines. More information about the Turing-QMUL PhD positions is available at: https://www.applieddatascience.qmul.ac.uk/news/?eid=3623

Candidates who are interested in carrying out research hosted by the Machine Listening Lab should apply directly to the School of Electronic Engineering and Computer Science of Queen Mary University of London at: http://www.eecs.qmul.ac.uk/phd/how-to-apply/

The deadline for submission is midday on Monday 14 January 2019. In your application, and when choosing a supervisor, please make it clear that you would like to be considered for the Turing doctoral studentship. Queen Mary will complete an initial assessment of your application and will refer selected candidates to the Institute in early February 2019. If successful, you will then be invited to attend a further interview at The Alan Turing Institute.

For informal queries before application, or to discuss topics, please email Dan Stowell and Emmanouil Benetos.

Posted in Opportunities

Special Session: “Wildlife Bioacoustics and Adaptive Signal Processing”, IEEE ICASSP 2019 (Brighton, UK)

– Special session at IEEE ICASSP 2019

Organisers: Dan Stowell, Naomi Harte, Theo Damoulas

 

Summary:

Wildlife bioacoustics is witnessing a surge in both data volumes and computational methods. Monitoring projects worldwide each collect many thousands of hours of audio [1,2,3,4], and computational methods are now able to mine these datasets to detect, isolate and characterise recorded wildlife sounds at scale [2,5]. Such bioacoustic data is crucial for monitoring the rapid declines in many wildlife populations [6], as well as for advancing the science of animal behaviour.

However, many open problems remain, including:

  • detecting/discriminating very brief low-SNR sounds;
  • estimating distance to animals from single-microphone recordings;
  • integrating evidence from multiple sensors, and from differing sensing sources (e.g. radar plus acoustics);
  • sampling bias, imbalance and concept drift in data sources;
  • high-resolution and perceptual measures of animal sound similarity;
  • sound source separation and localisation in complex wildlife sound scenes.

These problems must be solved in order to bring the data fully to bear on urgent global issues such as the loss of animal habitats.

New methods come from machine learning and advanced signal processing: matrix factorisations, deep learning, Gaussian processes, novel time-frequency transforms, and more. These methods have proven useful in various projects and across many species: birds, bats, cetaceans and terrestrial mammals. However, much of this work happens in isolated projects, and there is a need to bring together lessons learned and to establish state-of-the-art methods and directions for the future of this field.

The topic of computational wildlife bioacoustics is growing but lacks any dedicated conference or workshop. ICASSP is an ideal event through which to strengthen the application and the development of signal processing work in this increasingly important domain.

 

Submissions:

** Please note: submissions are not through the main ICASSP submission website! Use the link below. **

Full papers (4 pages, with an optional 5th page of references) should be submitted via the ICASSP special session submission system. The deadline is the same as for other ICASSP papers (Oct 29th 2018). The review process and the quality threshold for acceptance will also be the same.

For template and formatting information, please see the paper kit.


Organiser biographies:

Dan Stowell

Dan Stowell is a Senior Researcher at Queen Mary University of London. He co-leads the Machine Listening Lab, based in the Centre for Digital Music, and is also a Turing Fellow at the Alan Turing Institute. Dan has worked on voice, music and environmental soundscapes, and is currently leading a five-year EPSRC fellowship project researching the automatic analysis of bird sounds.

Naomi Harte

Naomi Harte is Associate Professor of Digital Media Systems and a Fellow of Trinity College Dublin, Ireland. She is Co-PI of the ADAPT Research Centre, and a PI in the Sigmedia Research Group. Naomi’s primary focus is human speech communication, including speech quality, audio visual speech processing, speaker verification for biometrics and emotion in speech. Since 2012, she has collaborated on bird song analysis with the Dept. of Zoology in TCD.

Theo Damoulas

Theo Damoulas is an Associate Professor in Data Science with a joint appointment in Computer Science and Statistics at the University of Warwick. He is a Turing Fellow at the Alan Turing Institute and a visiting academic at NYU. His research interests are in machine learning and Bayesian statistics with a focus on spatio-temporal problems in urban science and computational sustainability.

 

References:

[1] Lostanlen et al. (2018), “BirdVox-full-night: A Dataset and Benchmark for Avian Flight Call Detection”, in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Calgary, Canada, Apr. 2018.

[2] Stowell et al (2018) “Automatic acoustic detection of birds through deep learning: the first Bird Audio Detection challenge.” Methods in Ecology and Evolution. In press. https://arxiv.org/abs/1807.05812

[3] AmiBio project, http://www.amibio-project.eu/

[4] Sullivan et al (2014), “The eBird enterprise: an integrated approach to development and application of citizen science.” Biological Conservation, vol. 169, pp. 31–40, 2014.

[5] Goeau et al (2016), “LifeCLEF Bird Identification Task 2016: The arrival of deep learning”, Working Notes of CLEF 2016, 440-449, 2016.

[6] Joppa (2017), “The case for technology investments in the environment.” Nature 552, 325-328, 2017.

Posted in Events

Turing Fellowship success for the Machine Listening Lab

Two of the Machine Listening Lab’s lead academics, Dan Stowell and Emmanouil Benetos, have been awarded Turing Fellowships.

The Alan Turing Institute (ATI) is the UK national institute for data science and artificial intelligence, founded in 2015. Our university, QMUL, recently joined the ATI as a partner. The new QMUL Institute of Applied Data Sciences connects QMUL’s researchers working on data science and artificial intelligence, and also acts as a conduit to the ATI.

Through the Turing Fellowships, effective 1st October 2018, Stowell and Benetos plan to work with ATI partners to build on their research themes in advanced audio analysis, analysing urban, wildlife, domestic and musical sound recordings. They will work with academic, industry and government partners on the effective and ethical development of technology, including the development of privacy-preserving analysis methods.

Posted in Uncategorized

Bird audio detection DCASE challenge – less than a week to go

The second Bird Audio Detection challenge, now Task 3 of the 2018 DCASE Challenge, has been running all month, and the leaderboard is hotting up!

screenshot: leaderboard scores

There’s less than a week to go.

Who will get the strongest results? How well will these leaderboard preview scores predict the final outcome? (Note: the “preview” scores come from only 1000 audio files, from only two of our three evaluation sets. Whose algorithm generalises well?)

Who is this “ML” consistently getting preview scores above 90%? Will this beat the system from “Arjun” which was the first to get past the 90% mark? Will someone get a final score above 90%? Above 95%?

These scores aren’t just for fun: they represent (indirectly) the amount of manual labour saved by an automatic detector. Roughly speaking, halving the gap between the AUC and the 100% mark halves the number of false-positive detections you have to tolerate from your system, which can save hundreds of person-hours of data mining, or enable automation that wasn’t possible before.
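
As a rough illustration of that rule of thumb (and emphatically not the challenge’s own evaluation code), here is a minimal sketch in Python assuming a simple Gaussian (“binormal”) model of detector scores; the function name and the chosen recall level are ours, purely for illustration:

# Illustrative sketch only: positive clips score ~ N(d, 1), negatives ~ N(0, 1).
import numpy as np
from scipy.stats import norm

def fpr_at_recall(auc, recall=0.9):
    """False-positive rate at the threshold that achieves `recall`,
    for a binormal score model calibrated to the requested AUC."""
    d = np.sqrt(2) * norm.ppf(auc)      # class separation giving this AUC
    threshold = d - norm.ppf(recall)    # threshold that detects `recall` of positives
    return 1 - norm.cdf(threshold)      # fraction of negatives wrongly flagged

for auc in (0.90, 0.95, 0.975):
    print(f"AUC {auc:.3f}: false-positive rate at 90% recall ~ {fpr_at_recall(auc):.3f}")
# Each step halves (1 - AUC), and the false-positive rate roughly halves too
# (about 0.30 -> 0.15 -> 0.07 in this toy model).

Under this toy model, each halving of the gap to 100% AUC roughly halves the fraction of negative clips a human would have to sift through by hand.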

Also importantly, what are the advances in the state of the art that are producing these high-quality results? In the first Bird Audio Detection challenge, which was only last year, the strongest system attained 88.7% AUC, and this time we made the task harder by expanding to more varied datasets and by using a slightly modified scoring measure. (Instead of the overall AUC, we use the harmonic mean of the AUCs obtained on each of the three evaluation datasets. This tends to yield lower scores, especially for systems which do well on some datasets and poorly on others.) Machine learning is moving fast, and it’s not always clear which new developments provide real benefits in practical applications. It’s clear that some innovations for bird audio detection have come along since last year.
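
For concreteness, here is a minimal sketch of that harmonic-mean scoring measure; the per-dataset AUC values below are made up purely for illustration and are not real challenge results:

# Hypothetical per-dataset AUCs, one per evaluation dataset (illustrative only).
from statistics import harmonic_mean

per_dataset_auc = [0.95, 0.92, 0.70]

arithmetic = sum(per_dataset_auc) / len(per_dataset_auc)
harmonic = harmonic_mean(per_dataset_auc)

print(f"arithmetic mean AUC: {arithmetic:.3f}")  # ~0.857
print(f"harmonic mean AUC:   {harmonic:.3f}")    # ~0.841: the weak third dataset drags it down

A system that does uniformly well across all three datasets loses much less to the harmonic mean than one with a weak spot, which is exactly the behaviour the modified measure is meant to reward.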

So, we wait with interest to see the technical reports that will be submitted to the DCASE workshop (deadline next week, 31st July!).

Posted in Bird, dcase

SoundCamp 2018 – Animal Diplomacy Bureau


Over the recent holiday weekend, in order to spread ideas about bird sounds and ecology, we made groups of kids and grown-ups run around in a field wearing bird heads and searching for food.


This was Animal Diplomacy Bureau, a game created by Kaylene Kau. Since my research fellowship is all about birds and sounds and how they interact, and the game is all about exploring those topics, I was really pleased to commission Kaylene to develop an expanded version of the game and show it as part of SoundCamp 2018.

It was a glorious sunny bank holiday weekend, and the game was really popular, with kids and grown-ups queueing up to play. They had to find food tokens in the park while either hunting or being hunted by other species, using the sound of bird calls as clues.


In the game, participants took on the role of a parakeet, a goldfinch or a sparrowhawk while searching for red discs representing berries. The food locations were indicated by recorded goldfinch calls heard nearby. The sparrowhawks had to wear small loudspeakers, which played back sparrowhawk alarm calls, giving the prey a chance to react. The prey players could resist the sparrowhawks by getting together and “mobbing” the bird, which is what many birds do in real life.

After the game, Kaylene hosted discussions where she talked about how the bird species they’d been playing interact in real life, and how living in the city affects them. She invited participants to discuss what cities would be like if they were designed for animals as well as for humans.

photo: Kaylene explaining to the kids

We had dozens of players, and also dozens of queries from people in the park curious to know what was going on with these bird people foraging around the place. Participants went away with an inside perspective on how it is to be a goldfinch, a sparrowhawk or a parakeet in London!

Thanks to my PhD students for helping Kaylene to run the event: Will Wilkinson, Veronica Morfi and Sophie McDonald.

(Financial support: EPSRC Fellowship EP/L020505/1 (Dan Stowell))


Posted in Bird, Events

Machine Listening Lab symposium 2018

On 29th March 2018 we held a Machine Listening Lab symposium to gather together people across the university who are doing work in machine listening.


photo: Katerina Kosta (Jukedeck)

The programme of talks included:

  • Emmanouil Benetos (QMUL):
    Machine listening for music and everyday sounds: the year ahead
  • Rob Lachlan (QMUL):
    Learning about bird song learning via inference from population-level variation
  • Katerina Kosta (invited guest speaker):
    Creating music at Jukedeck
  • Michael McLoughlin (QMUL):
    Sea to Farm: Bioacoustics in Animal Behaviour and Welfare

Plus 12 lightning talks from students and postdocs, and from Rodrigo Schramm, visiting from Brazil to update us on his work since spending time with us last year.

photo: Rodrigo Schramm (UFRGS)

Thank you to everyone who took part!

 


(Funded by Dan Stowell EPSRC Research Fellowship EP/L020505/1)

Posted in Uncategorized

Many C4DM papers accepted for ICASSP 2018

C4DM researchers have had a lot of success this year with papers accepted for ICASSP 2018, the IEEE International Conference on Acoustics, Speech and Signal Processing. Most of these papers are led by C4DM PhD students, on MIR and machine listening topics:

  • “A Deeper Look At Gaussian Mixture Model Based Anti-Spoofing Systems” by Bhusan Chettri and Bob L. Sturm
  • “Towards Complete Polyphonic Music Transcription: Integrating Multi-Pitch Detection and Rhythm Quantization” by Eita Nakamura, Emmanouil Benetos, Kazuyoshi Yoshii, and Simon Dixon
  • “Polyphonic Music Sequence Transduction With Meter-Constrained LSTM Networks” by Adrien Ycart and Emmanouil Benetos
  • “Feature Design Using Audio Decomposition for Intelligent Control of the Dynamic Range Compressor” by Di Sheng and György Fazekas
  • “Adversarial Semi-Supervised Audio Source Separation applied to Singing Voice Extraction” by Daniel Stoller, Sebastian Ewert, and Simon Dixon
  • “Similarity Measures For Vocal-Based Drum Sample Retrieval Using Deep Convolutional Auto-Encoders” by Adib Mehrabi, Keunwoo Choi, Simon Dixon, and Mark Sandler
  • “Shift-Invariant Kernel Additive Modelling for Audio Source Separation” by Delia Fano Yela, Sebastian Ewert, Ken O’Hanlon, and Mark B. Sandler
  • “Improved detection of semi-percussive onsets in audio using temporal reassignment” by Ken O’Hanlon and Mark B. Sandler

See you all in Calgary!

Posted in Publications

Machine Listening Lab 2017: The year in review

2017 has been a fascinating year for the Machine Listening Lab. Here are the headlines!

Grant success and projects:

  • Rob Lachlan, along with David Clayton and Dan Stowell, was awarded a BBSRC grant for a £659,000 project to study “Machine Learning for Bird Song Learning” (BB/R008736/1).
  • Emmanouil Benetos was awarded an EPSRC first grant for a £122,299 project to study “Integrating sound and context recognition for acoustic scene analysis” (EP/R01891X/1).
  • Bob Sturm (with co-investigator Oded Ben-Tal of Kingston University) was awarded AHRC follow-on funding (AH/R004706/1) for a £70,000 project titled, “Engaging three user communities with applications and outcomes of computational music creativity”.
  • Emmanouil Benetos is co-investigator (Co-I) for the European Training Network “New Frontiers in Music Information Processing” (MIP-Frontiers), with Simon Dixon as PI (and network coordinator) and Mark Sandler as Co-I. The budget is €819,863 for QMUL, €3,937,088 in total. Duration: April 2018 – March 2022.
  • The LiveQuest project began: a collaboration between QMUL and four other institutions in the UK and China to develop IoT sensing devices to aid with chicken welfare monitoring. The project is led by QMUL’s Yue Gao; on the machine listening side, Becky Stewart and Alan McElligott are co-investigators.
  • The Machine Listening Lab received an NVIDIA GPU grant for a Titan Xp GPU (RRP £1,149).

Events:

  • HORSE 2017, the second workshop on “Horses” in applied machine learning, was organised and led by Bob Sturm at QMUL: a one-day workshop with a range of international speakers in machine learning.
  • QMUL Festival of Ideas (29 June 2017): Dan Stowell gave a public talk on “Can we decode the Dawn Chorus?”, and the Machine Listening Lab gave a concert in three parts (Sturm’s folk-rnn, Ewert’s one-handed Gould, Stowell’s thrush nightingale transcription). It was attended by staff from many different departments around the college and was named as one of the highlights of the festival.
  • MLLab members led sessions at international research conferences:
    Bob Sturm co-organised the ML4Audio workshop @ NIPS 2017 (USA).
    Dan Stowell organised and chaired special sessions at EUSIPCO (Greece), IBAC (India), and chaired a session at DCASE (Germany).
    Emmanouil Benetos was Programme Co-chair (with Emmanuel Vincent) for the DCASE 2017 Workshop, and was also a Programme Committee member (i.e. meta-reviewer) for ISMIR 2017.
  • Emmanouil Benetos was an invited keynote speaker at the Digital Musicology Symposium, London, September 2017.
  • 30 teams from around the world took part in the Bird Audio Detection Challenge, led by Dan Stowell. Many of the best performing methods were presented at EUSIPCO 2017.
  • Bob Sturm organised generative music concerts featuring many algorithms and composers at venues around London: Partnerships in May, and Music in the Age of Artificial Creation in November.

Awards

New people, and farewells

This year we said farewell to Sebastian Ewert, a founding member and co-leader of the MLLab, who has moved on to Spotify as a Senior Research Scientist. We also said farewell to Alan McElligott, an affiliated academic of the MLLab, who has moved to Roehampton University as a Reader in Animal Behaviour.

Michael McLoughlin joined us as a postdoc on the LiveQuest farm chicken welfare technology project mentioned above. Welcome!

Other news from the MLLab:

  • MLLab members authored two chapters in a new Springer textbook on sound scene analysis: one written by Dan Stowell, and one lead-authored by Emmanouil Benetos (with Stowell and Plumbley).
  • Dan Stowell appeared live on the flagship morning shows of BBC Radio 4 and the BBC World Service (March 20th) talking about birdsong and machine listening.
  • Bob Sturm appeared on French national television (Canal+, Nov 18th) discussing whether artificial intelligence would take over from music artists.
  • Dan Stowell and Emmanouil Benetos were invited visitors to Beijing University of Posts and Telecommunications (BUPT) under its “International Academic Talents” programme.

Visiting researchers:
Rodrigo Schramm (UFRGS, Brazil, Aug 2016 – Aug 2017)
Mina Mounir (KU Leuven, Belgium, May 2017)
Hanna Pamula (AGH, Poland, June 2017 – August 2017)
Andrew McLeod (University of Edinburgh, Aug 2017)
Qing Zhou (Xi’an Jiaotong University, China, Oct 2017 – March 2018)

Journal articles:

D. Stowell, E. Benetos, and L. F. Gill, “On-bird Sound Recordings: Automatic Acoustic Recognition of Activities and Contexts”, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 6, pp. 1193-1206, Jun. 2017.

E. Benetos, G. Lafay, M. Lagrange and M. D. Plumbley, “Polyphonic Sound Event Tracking using Linear Dynamical Systems”, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 6, pp. 1266-1277, Jun. 2017.

S Wang, S Ewert, S Dixon, “Identifying Missing and Extra Notes in Piano Recordings Using Score-Informed Dictionary Learning”, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 6, pp. 1877-1889, Jun. 2017.

F Spielmann, A Helmlinger, J Simonnot, T Fillon, G Pellerin, BL Sturm, “Zoom arrière : L’ethnomusicologie à l’ère du Big Data”, Cahiers d’ethnomusicologie

BL Sturm, O Ben-Tal, “Bringing the models back to music practice: The evaluation of deep learning approaches to music transcription modelling and generation”, Journal of Creative Music Systems

S Abdallah, E Benetos, N Gold, S Hargreaves, T Weyde, D Wolff, “The digital music lab: A big data infrastructure for digital musicology”, Journal on Computing and Cultural Heritage (JOCCH), vol. 10, no. 1, article 2.

Posted in Uncategorized

MLLab contributions to new Springer book in Sound Scene Analysis

MLLab members contributed two chapters to an upcoming book published by Springer on “Computational Analysis of Sound Scenes and Events”. The book, which is edited by Tuomas Virtanen, Mark D. Plumbley and Dan Ellis, will be published on 20 October 2017. The two book chapters contributed by MLLab members are:

  • E. Benetos, D. Stowell, and M. D. Plumbley, “Approaches to complex sound scene analysis”, in Computational Analysis of Sound Scenes and Events, T. Virtanen, M. D. Plumbley, and D. P. W. Ellis (eds.), Springer, Oct. 2017.
  • D. Stowell, “Computational Bioacoustic Scene Analysis”, in Computational Analysis of Sound Scenes and Events, T. Virtanen, M. D. Plumbley, and D. P. W. Ellis (eds.), Springer, Oct. 2017.
Posted in Publications