SoundCamp 2018 – Animal Diplomacy Bureau

Over the recent holiday weekend, in order to spread ideas about bird sounds and ecology, we made groups of kids and grown-ups run around in a field wearing bird heads and searching for food.

This was Animal Diplomacy Bureau, a game created by Kaylene Kau. Since my research fellowship is all about birds and sounds and how they interact, and the game is all about exploring those topics, I was really pleased to commission Kaylene to develop an expanded version of the game and show it as part of SoundCamp 2018.

It was a glorious sunny bank holiday weekend, and the game was really popular, with kids and grown-ups queueing up to play. Players had to find food tokens in the park while hunting or being hunted by other species, using the sound of bird calls as clues.

In the game, participants took on the role of a parakeet, a goldfinch or a sparrowhawk while searching for red discs representing berries. The food locations were indicated by recorded goldfinch calls playing nearby. The sparrowhawks wore small loudspeakers which played back sparrowhawk alarm calls, giving the prey a chance to react. Prey players could resist the sparrowhawks by getting together and “mobbing” the bird, as many birds do in real life.

After the game, Kaylene hosted discussions where she talked about how the bird species they’d been playing interact in real life, and how living in the city affects them. She invited participants to discuss what cities would be like if they were designed for animals as well as for humans.

We had dozens of players, and also dozens of queries from people in the park curious to know what was going on with these bird people foraging around the place. Participants went away with an inside perspective on what it is like to be a goldfinch, a sparrowhawk or a parakeet in London!

Thanks to my PhD students for helping Kaylene to run the event: Will Wilkinson, Veronica Morfi and Sophie McDonald.

(Financial support: EPSRC Fellowship EP/L020505/1 (Dan Stowell))

Posted in Bird, Events

Machine Listening Lab symposium 2018

On 29th March 2018 we held a Machine Listening Lab symposium to gather together people across the university who are doing work in machine listening.

Katerina Kosta (Jukedeck)

The programme of talks included:

  • Emmanouil Benetos (QMUL):
    Machine listening for music and everyday sounds: the year ahead
  • Rob Lachlan (QMUL):
    Learning about bird song learning via inference from population-level variation
  • Katerina Kosta (invited guest speaker):
    Creating music at Jukedeck
  • Michael McLoughlin (QMUL):
    Sea to Farm: Bioacoustics in Animal Behaviour and Welfare

Plus 12 lightning talks from students and postdocs, and a talk from Rodrigo Schramm, visiting from Brazil to update us on his work since spending time with us last year.

Rodrigo Schramm (UFRGS)

Thank you to everyone who took part!

 

(Funded by Dan Stowell EPSRC Research Fellowship EP/L020505/1)

Posted in Uncategorized

Many C4DM papers accepted for ICASSP 2018

C4DM researchers have had a lot of success this year in being accepted for ICASSP 2018, the IEEE’s International Conference on Acoustics, Speech and Signal Processing. Most of these papers are led by C4DM PhD students, on MIR and Machine Listening topics:

  • “A Deeper Look At Gaussian Mixture Model Based Anti-Spoofing Systems” by Bhusan Chettri and Bob L. Sturm
  • “Towards Complete Polyphonic Music Transcription: Integrating Multi-Pitch Detection and Rhythm Quantization” by Eita Nakamura, Emmanouil Benetos, Kazuyoshi Yoshii, and Simon Dixon
  • “Polyphonic Music Sequence Transduction With Meter-Constrained LSTM Networks” by Adrien Ycart and Emmanouil Benetos
  • “Feature Design Using Audio Decomposition for Intelligent Control of the Dynamic Range Compressor” by Di Sheng and György Fazekas
  • “Adversarial Semi-Supervised Audio Source Separation applied to Singing Voice Extraction” by Daniel Stoller, Sebastian Ewert, Simon Dixon
  • “Similarity Measures For Vocal-Based Drum Sample Retrieval Using Deep Convolutional Auto-Encoders” by Adib Mehrabi, Keunwoo Choi, Simon Dixon, Mark Sandler
  • “Shift-Invariant Kernel Additive Modelling for Audio Source Separation” by Delia Fano Yela, Sebastian Ewert, Ken O’Hanlon, Mark B. Sandler
  • “Improved detection of semi-percussive onsets in audio using temporal reassignment” by K. O’Hanlon and M.B. Sandler

See you all in Calgary!

Posted in Publications

Machine Listening Lab 2017: The year in review

2017 has been a fascinating year for the Machine Listening Lab. Here are the headlines!

Grant success and projects:

  • Rob Lachlan, along with David Clayton and Dan Stowell, was awarded a BBSRC grant for a £659,000 project to study “Machine Learning for Bird Song Learning” (BB/R008736/1).
  • Emmanouil Benetos was awarded an EPSRC First Grant for a £122,299 project to study “Integrating sound and context recognition for acoustic scene analysis” (EP/R01891X/1).
  • Bob Sturm (with co-investigator Oded Ben-Tal of Kingston University) was awarded AHRC follow-on funding (AH/R004706/1) for a £70,000 project titled “Engaging three user communities with applications and outcomes of computational music creativity”.
  • Emmanouil Benetos is co-investigator (Co-I) for the European Training Network “New Frontiers in Music Information Processing” (MIP-Frontiers), with Simon Dixon as PI (and network coordinator) and Mark Sandler as Co-I. The budget is €819,863 for QMUL, €3,937,088 total. Duration: April 2018 – March 2022.
  • The LiveQuest project began – a collaboration between QMUL and 4 other institutions in the UK and China, to develop IoT sensing devices to aid with chicken welfare monitoring. The project is led by QMUL’s Yue Gao; on the machine listening side, Becky Stewart and Alan McElligott are co-investigators.
  • The Machine Listening Lab received an NVIDIA GPU grant for a Titan Xp GPU (RRP £1,149).

Events:

  • HORSE 2017, the second workshop on “Horses” in applied machine learning, was organised and led by Bob Sturm at QMUL: a one-day workshop with a range of international speakers in machine learning.
  • QMUL Festival of Ideas (June 29 2017) – Dan Stowell gave a public talk on “Can we decode the Dawn Chorus”, and the Machine Listening Lab gave a concert in three parts (Sturm’s folk-rnn, Ewert’s one-handed Gould, Stowell’s thrush nightingale transcription). It was attended by staff from many departments around the college and was named as one of the highlights of the festival.
  • MLLab members led sessions at international research conferences:
    Bob Sturm co-organised the ML4Audio workshop @ NIPS 2017 (USA).
    Dan Stowell organised and chaired special sessions at EUSIPCO (Greece) and IBAC (India), and chaired a session at DCASE (Germany).
    Emmanouil Benetos was Programme Co-chair (with Emmanuel Vincent) for the DCASE 2017 Workshop, and was also a Programme Committee member (i.e. meta-reviewer) for ISMIR 2017.
  • Emmanouil Benetos was an invited keynote speaker at the Digital Musicology Symposium, London, September 2017.
  • 30 teams from around the world took part in the Bird Audio Detection Challenge, led by Dan Stowell. Many of the best performing methods were presented at EUSIPCO 2017.
  • Bob Sturm organised generative music concerts featuring many algorithms and composers at venues around London: Partnerships in May, and Music in the Age of Artificial Creation in November.

Awards

New people, and farewells

This year we said farewell to Sebastian Ewert, a founding member and co-leader of the MLLab, who has moved on to Spotify as a Senior Research Scientist. We also said farewell to Alan McElligott, an affiliated academic of the MLLab, who has moved to Roehampton University where he is a Reader in Animal Behaviour.

Michael McLoughlin joined us as a postdoc on the LiveQuest farm chicken welfare technology project mentioned above. Welcome!

Other news from the MLLab:

  • MLLab members authored two chapters in a new Springer textbook on Sound Scene Analysis: one written by Dan Stowell, and one lead-authored by Emmanouil Benetos (with Stowell and Plumbley).
  • Dan Stowell appeared live on the flagship morning shows of BBC Radio 4 and the BBC World Service (March 20th) talking about birdsong and machine listening.
  • Bob Sturm appeared on French national television (Canal+, Nov 18th) discussing whether artificial intelligence would take over from music artists.
  • Dan Stowell and Emmanouil Benetos were invited visitors to Beijing University of Posts and Telecommunications (BUPT) under its “International Academic Talents” programme.

Visiting researchers:
Rodrigo Schramm (UFRGS, Brazil, Aug 2016 – Aug 2017)
Mina Mounir (KU Leuven, Belgium, May 2017)
Hanna Pamula (AGH, Poland, June 2017 – August 2017)
Andrew McLeod (University of Edinburgh, Aug 2017)
Qing Zhou (Xi’an Jiaotong University, China, Oct 2017 – March 2018)

Journal articles:

D. Stowell, E. Benetos, and L. F. Gill, “On-bird Sound Recordings: Automatic Acoustic Recognition of Activities and Contexts“, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 6, pp. 1193-1206, Jun. 2017.
postprint

E. Benetos, G. Lafay, M. Lagrange and M. D. Plumbley, “Polyphonic Sound Event Tracking using Linear Dynamical Systems“, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 6, pp. 1266-1277, Jun. 2017.
postprint

S Wang, S Ewert, S Dixon, “Identifying Missing and Extra Notes in Piano Recordings Using Score-Informed Dictionary Learning“, IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 6, pp. 1877-1889, Jun. 2017.

F Spielmann, A Helmlinger, J Simonnot, T Fillon, G Pellerin, BL Sturm, “Zoom arrière : L’ethnomusicologie à l’ère du Big Data”, Cahiers d’ethnomusicologie

BL Sturm, O Ben-Tal, “Bringing the models back to music practice: The evaluation of deep learning approaches to music transcription modelling and generation”, Journal of Creative Music Systems

S Abdallah, E Benetos, N Gold, S Hargreaves, T Weyde, D Wolff, “The digital music lab: A big data infrastructure for digital musicology“, Journal on Computing and Cultural Heritage (JOCCH) 10 (1), 2

Posted in Uncategorized

MLLab contributions to new Springer book in Sound Scene Analysis

MLLab members contributed two chapters to an upcoming book published by Springer on “Computational Analysis of Sound Scenes and Events“. The book, which is edited by Tuomas Virtanen, Mark D. Plumbley and Dan Ellis, will be published on 20 October 2017. The two book chapters contributed by MLLab members are:

  • E. Benetos, D. Stowell, and M. D. Plumbley, “Approaches to complex sound scene analysis“, in Computational Analysis of Sound Scenes and Events, T. Virtanen, M. D. Plumbley, and D. P. W. Ellis (eds.), Springer, Oct. 2017.
  • D. Stowell, “Computational Bioacoustic Scene Analysis“, in Computational Analysis of Sound Scenes and Events, T. Virtanen, M. D. Plumbley, and D. P. W. Ellis (eds.), Springer, Oct. 2017.
Posted in Publications

Best paper award at 2017 AES Conference on Semantic Audio

As part of the 2017 AES Conference on Semantic Audio, the paper “Automatic transcription of a cappella recordings from multiple singers” by Rodrigo Schramm and Emmanouil Benetos received the conference’s Best Paper Award. A postprint of the paper can be found here.

Posted in Publications

MLLab research in the IEEE/ACM TASLP special issue on Sound Scene and Event Analysis

Two papers authored by members of the Machine Listening Lab have been published in a special issue of the IEEE/ACM Transactions on Audio, Speech, and Language Processing on “Sound Scene and Event Analysis”:

  • D. Stowell, E. Benetos, and L. F. Gill, “On-bird Sound Recordings: Automatic Acoustic Recognition of Activities and Contexts”, vol. 25, no. 6, pp. 1193-1206, Jun. 2017.
  • E. Benetos, G. Lafay, M. Lagrange and M. D. Plumbley, “Polyphonic Sound Event Tracking using Linear Dynamical Systems”, vol. 25, no. 6, pp. 1266-1277, Jun. 2017.

Posted in Publications

Seminar: Mauricio Álvarez, Sequential latent force models for segmenting motor primitives

As part of the C4DM seminar series, the Machine Listening Lab and the Centre for Intelligent Sensing jointly present Mauricio Álvarez giving a talk about Sequential latent force models for segmenting motor primitives.

  • Date and Time: Wednesday, 24th May 2017, at 4:00pm
  • Place: Room GC 2.22, Graduate Centre, Queen Mary University of London, Mile End Road, London E1 4NS. (Directions)

Abstract
Motor primitives are basic representations of human motion that, in a similar way to phonemes in a language, can be used to compose complex movements used for imitation learning in humanoid robotics. The first step when using motor primitives in imitation learning consists of defining a basic vocabulary of motor skills, according to a particular task that the humanoid robot is supposed to perform. Such vocabularies are usually learned from multivariate time course data. In this talk, I will describe two alternatives for segmentation of motor primitives from multivariate time course data that involve the use of latent force models. A latent force model encodes a dynamic motor primitive in the form of a kernel function that can be used as the covariance function of a Gaussian process. I will describe how latent force models can be used on their own, or in combination with hidden Markov models for segmenting motion templates.
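
To illustrate the idea in the abstract, that a latent force model supplies the covariance function of a Gaussian process over motion trajectories, here is a minimal Python sketch of Gaussian process regression on a toy one-dimensional motion trace. It is not the speaker's code: the squared-exponential kernel, the toy data and all parameter values are illustrative assumptions, standing in for the kernel that a latent force model would derive from the primitive's dynamics.

    import numpy as np

    def rbf_kernel(t1, t2, lengthscale=0.5, variance=1.0):
        # Squared-exponential kernel, used here as a stand-in: a latent force
        # model would instead derive the kernel from the ODE governing the
        # motor primitive's dynamics.
        d = t1[:, None] - t2[None, :]
        return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

    # Toy one-dimensional "motion trace" observed at a few time points
    rng = np.random.default_rng(0)
    t_obs = np.linspace(0.0, 2.0, 15)
    y_obs = np.sin(3.0 * t_obs) + 0.05 * rng.standard_normal(t_obs.size)

    # Standard GP regression: posterior mean of the trajectory at new times,
    # assuming Gaussian observation noise with variance sigma2
    sigma2 = 0.05 ** 2
    K = rbf_kernel(t_obs, t_obs) + sigma2 * np.eye(t_obs.size)
    t_new = np.linspace(0.0, 2.0, 100)
    K_s = rbf_kernel(t_new, t_obs)
    posterior_mean = K_s @ np.linalg.solve(K, y_obs)
    print(posterior_mean[:5])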

Bio
Dr. Álvarez received a degree in Electronics Engineering (B.Eng.) with Honours from Universidad Nacional de Colombia in 2004, a master's degree in Electrical Engineering (M.Eng.) from Universidad Tecnológica de Pereira, Colombia, in 2006, and a Ph.D. in Computer Science from the University of Manchester, UK, in 2011. After finishing his Ph.D., Dr. Álvarez joined the Department of Electrical Engineering at Universidad Tecnológica de Pereira, Colombia, where he was a faculty member until December 2016. In January 2017, Dr. Álvarez was appointed Lecturer in Machine Learning in the Department of Computer Science at the University of Sheffield, UK.

Dr. Álvarez is interested in machine learning in general, its interplay with mathematics and statistics, and its applications. In particular, his research interests include probabilistic models, kernel methods, and stochastic processes. He works on the development of new approaches and the application of Machine Learning in areas that include applied neuroscience, systems biology, and humanoid robotics.

Posted in Events

“Machine Learning Methods in Bioacoustics” – Call for Abstracts, IBAC 2017

DEADLINE EXTENDED: 30th May 2017

We are pleased to announce a symposium on “Machine Learning Methods in Bioacoustics”, to be held as part of the 2017 International Bioacoustics Congress (Haridwar, India, 8-13 October 2017).

To submit an abstract, see: http://www.ibac2017india.com/abstracts/ – Please ALSO send an e-mail with the title of your contribution to dan.stowell@qmul.ac.uk before 30th May.

Please forward this announcement to anyone who may be interested. We aim for a broad representation, across the diverse fields of practitioners who have an interest in using/developing machine learning methods for animal sounds.

  • Symposium chair: Dan Stowell
  • Deadline for abstracts: 30th May
Posted in Uncategorized

EUSIPCO Special Session on Bird Audio Signal Processing

Bird Audio Signal Processing

Special Session at

25th EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO) 2017

28 August – 2 September, 2017 – Kos Island, Greece

http://www.eusipco2017.org/

Theme and scope

This session will bring together research on an application domain of growing recent interest, and of high practical importance: signal processing and machine learning applied to the sounds of birds. Acoustic monitoring of species is an increasingly crucial tool in tracking population declines and migration movements affected by climate change. Detailed signal processing can also advance scientific understanding of the evolutionary mechanisms operating on bird acoustic communication. What is needed is a set of tools for scalable and fully-automatic detection and analysis across a wide variety of bird sounds.

Workshops such as Listening in the Wild 2013/2015, the BirdClef Challenge 2014/2015/2016 and a special session at InterSpeech 2016 demonstrate the growing and active community in the area. Our session builds on this momentum, providing a focused European session. One component of this special session will be the outcomes of the Bird Audio Detection Challenge <http://tinyurl.com/badchallenge>, which provided new datasets and saw bird detection algorithms developed by more than 20 teams from around the world. The session will also invite new research contributions in the broader emerging topic of bird audio signal processing.

For information about how to submit your paper please see the EUSIPCO website.

 

Important Dates

  • Paper submission: February 17, 2017
  • Decision notifications: May 25, 2017
  • Camera-ready papers: June 17, 2017

Organizers

  • Dr Dan Stowell, Queen Mary University of London, London, UK.
  • Prof. Hervé Glotin, Scaled Acoustic BioDiversity Dept, Univ. of Toulon & Inst. Univ. de France.
  • Prof. Yannis Stylianou, Computer Science Dept, University of Crete.
  • Dr Mike Wood, University of Salford, Greater Manchester, UK.
Posted in Bird, Events