Machine Listening Lab 2017: The year in review

2017 has been a fascinating year for the Machine Listening Lab. Here are the headlines!

Grant success and projects:

  • Rob Lachlan, along with David Clayton and Dan Stowell, was awarded a BBSRC grant for a £659,000 project to study “Machine Learning for Bird Song Learning” (BB/R008736/1).
  • Emmanouil Benetos was awarded an EPSRC first grant for a £122,299 project to study “Integrating sound and context recognition for acoustic scene analysis” (EP/R01891X/1).
  • Bob Sturm (with co-investigator Oded Ben-Tal of Kingston University) was awarded AHRC follow-on funding (AH/R004706/1) for a £70,000 project titled “Engaging three user communities with applications and outcomes of computational music creativity”.
  • Emmanouil Benetos is co-investigator (Co-I) for the European Training Network “New Frontiers in Music Information Processing” (MIP-Frontiers), with Simon Dixon as PI (and network coordinator) and Mark Sandler as Co-I. The budget is €819,863 for QMUL, out of €3,937,088 in total. Duration: April 2018 – March 2022.
  • The LiveQuest project began – a collaboration between QMUL and 4 other institutions in the UK and China, to develop IoT sensing devices to aid with chicken welfare monitoring. The project is led by QMUL’s Yue Gao; on the machine listening side, Becky Stewart and Alan McElligott are co-investigators.
  • The Machine Listening Lab received an NVIDIA GPU grant for a Titan Xp GPU (RRP £1,149).

Events:

  • HORSE 2017, the second workshop on “horses” in applied machine learning, was organised and led by Bob Sturm at QMUL; the one-day workshop featured a range of international speakers in machine learning.
  • QMUL Festival of Ideas (June 29, 2017) – Dan Stowell gave a public talk, “Can we decode the Dawn Chorus?”, and the Machine Listening Lab gave a three-part concert (Sturm’s folk-rnn, Ewert’s one-handed Gould, Stowell’s thrush nightingale transcription). It was attended by staff from many departments around the college and was named one of the highlights of the festival.
  • MLLab members led sessions at international research conferences:
    Bob Sturm co-organised the ML4Audio workshop @ NIPS 2017 (USA).
    Dan Stowell organised and chaired special sessions at EUSIPCO (Greece) and IBAC (India), and chaired a session at DCASE (Germany).
    Emmanouil Benetos was Programme Co-chair (with Emmanuel Vincent) for the DCASE 2017 Workshop, and a Programme Committee member (i.e. meta-reviewer) for ISMIR 2017.
  • Emmanouil Benetos was an invited keynote speaker at the Digital Musicology Symposium, London, September 2017.
  • 30 teams from around the world took part in the Bird Audio Detection Challenge, led by Dan Stowell. Many of the best-performing methods were presented at EUSIPCO 2017.
  • Bob Sturm organised generative music concerts featuring many algorithms and composers at venues around London: Partnerships in May, and Music in the Age of Artificial Creation in November.

New people and farewells:

This year we said farewell to Sebastian Ewert, a founding member and co-leader of the MLLab, who has moved on to Spotify as a Senior Research Scientist. We also said farewell to Alan McElligott, an affiliated academic of the MLLab, who has moved to Roehampton University as a Reader in Animal Behaviour.

Michael McLoughlin joined us as a postdoc on the LiveQuest farm chicken welfare technology project mentioned above. Welcome!

Other news from the MLLab:

  • MLLab members authored two chapters in a new Springer textbook on sound scene analysis: one chapter written by Dan Stowell, and one lead-authored by Emmanouil Benetos (with Stowell and Plumbley).
  • Dan Stowell appeared live on the flagship morning shows of BBC Radio 4 and the BBC World Service (March 20th), talking about birdsong and machine listening.
  • Bob Sturm appeared on French national television (Canal+, November 18th) discussing whether artificial intelligence would take over from music artists.
  • Dan Stowell and Emmanouil Benetos were invited visitors to Beijing University of Posts and Telecommunications (BUPT) under its “International Academic Talents” programme.

Visiting researchers:
  • Rodrigo Schramm (UFRGS, Brazil, Aug 2016 – Aug 2017)
  • Mina Mounir (KU Leuven, Belgium, May 2017)
  • Hanna Pamula (AGH, Poland, Jun 2017 – Aug 2017)
  • Andrew McLeod (University of Edinburgh, Aug 2017)
  • Qing Zhou (Xi’an Jiaotong University, China, Oct 2017 – March 2018)

Journal articles:

D. Stowell, E. Benetos, and L. F. Gill, “On-bird Sound Recordings: Automatic Acoustic Recognition of Activities and Contexts”, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 6, pp. 1193-1206, Jun. 2017.

E. Benetos, G. Lafay, M. Lagrange, and M. D. Plumbley, “Polyphonic Sound Event Tracking using Linear Dynamical Systems”, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 6, pp. 1266-1277, Jun. 2017.

S. Wang, S. Ewert, and S. Dixon, “Identifying Missing and Extra Notes in Piano Recordings Using Score-Informed Dictionary Learning”, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 6, pp. 1877-1889, Jun. 2017.

F. Spielmann, A. Helmlinger, J. Simonnot, T. Fillon, G. Pellerin, and B. L. Sturm, “Zoom arrière : L’ethnomusicologie à l’ère du Big Data”, Cahiers d’ethnomusicologie.

B. L. Sturm and O. Ben-Tal, “Bringing the models back to music practice: The evaluation of deep learning approaches to music transcription modelling and generation”, Journal of Creative Music Systems.

S. Abdallah, E. Benetos, N. Gold, S. Hargreaves, T. Weyde, and D. Wolff, “The digital music lab: A big data infrastructure for digital musicology”, Journal on Computing and Cultural Heritage (JOCCH), vol. 10, no. 1, article 2.