Special Session: “Wildlife Bioacoustics and Adaptive Signal Processing”, IEEE ICASSP 2019 (Brighton, UK)
Organisers: Dan Stowell, Naomi Harte, Theo Damoulas
Summary:
Wildlife bioacoustics is witnessing a surge in both data volumes and computational methods. Monitoring projects worldwide each collect many thousands of hours of audio [1,2,3,4], and computational methods can now mine these datasets to detect, isolate and characterise recorded wildlife sounds at scale [2,5]. Such bioacoustic data is crucial for monitoring the rapid declines in many wildlife populations [6], as well as for advancing the science of animal behaviour.
However, many open problems remain, including:
- detecting/discriminating very brief low-SNR sounds;
- estimating distance to animals from single-microphone recordings;
- integrating evidence from multiple sensors and from different sensing modalities (e.g. radar plus acoustics);
- sampling bias, imbalance and concept drift in data sources;
- high-resolution and perceptual measures of animal sound similarity;
- sound source separation and localisation in complex wildlife sound scenes.
These problems must be solved in order to bring the data fully to bear on urgent global issues such as the loss of animal habitats.
New methods come from machine learning and advanced signal processing: matrix factorisations, deep learning, Gaussian processes, novel time-frequency transforms, and more. These methods have proven useful in a variety of projects and across many taxa: birds, bats, cetaceans and terrestrial mammals. However, much of this work happens in isolated projects, and there is a need to bring together lessons learned and to establish state-of-the-art methods and directions for the future of this field.
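As a small illustration of one of the methods named above, the sketch below applies non-negative matrix factorisation to a spectrogram-like matrix, separating it into spectral templates and their activations over time. This is a generic, hedged example on synthetic data, not the pipeline of any cited project; the shapes, component count and random data are all illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic magnitude "spectrogram": 64 frequency bins x 200 time frames,
# built from two spectral templates with sparse activations (stand-ins for
# two call types; real data would come from an STFT of a field recording).
rng = np.random.default_rng(0)
templates = np.abs(rng.normal(size=(64, 2)))
activations = rng.random((2, 200)) * (rng.random((2, 200)) > 0.9)
V = templates @ activations + 1e-3  # small floor keeps all entries positive

# Factorise V ~ W @ H: columns of W are learned spectral templates,
# rows of H are their activations over time.
model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)  # (64, 2) spectral basis
H = model.components_       # (2, 200) time activations

print(W.shape, H.shape)  # (64, 2) (2, 200)
```

In a bioacoustic setting, the rows of H can serve as detection functions for the corresponding call types, though real recordings typically need more components and post-processing.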
The topic of computational wildlife bioacoustics is growing but lacks any dedicated conference or workshop. ICASSP is an ideal event through which to strengthen the application and the development of signal processing work in this increasingly important domain.
Submissions:
** Please note: submissions are not through the main ICASSP submission website! Use the link below. **
Full papers (4 pages, with an optional 5th page of references) should be submitted via the ICASSP special session submission system. The deadline is the same as for other ICASSP papers (Oct 29th 2018). The review process and the quality threshold for acceptance will also be the same.
For template and formatting information, please see the paper kit.
Organiser biographies:
Dan Stowell
Dan Stowell is a Senior Researcher at Queen Mary University of London. He co-leads the Machine Listening Lab there, based in the Centre for Digital Music, and is also a Turing Fellow at the Alan Turing Institute. Dan has worked on voice, music and environmental soundscapes, and is currently leading a five-year EPSRC fellowship project researching the automatic analysis of bird sounds.
Naomi Harte
Naomi Harte is Associate Professor of Digital Media Systems and a Fellow of Trinity College Dublin, Ireland. She is Co-PI of the ADAPT Research Centre and a PI in the Sigmedia Research Group. Naomi's primary focus is human speech communication, including speech quality, audio-visual speech processing, speaker verification for biometrics and emotion in speech. Since 2012, she has collaborated on bird song analysis with the Dept. of Zoology in TCD.
Theo Damoulas
Theo Damoulas is an Associate Professor in Data Science with a joint appointment in Computer Science and Statistics at the University of Warwick. He is a Turing Fellow at the Alan Turing Institute and a visiting academic at NYU. His research interests are in machine learning and Bayesian statistics with a focus on spatio-temporal problems in urban science and computational sustainability.
References:
[1] Lostanlen et al. (2018), “BirdVox-full-night: a dataset and benchmark for avian flight call detection,” in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, Canada, Apr. 2018.
[2] Stowell et al. (2018), “Automatic acoustic detection of birds through deep learning: the first Bird Audio Detection challenge,” Methods in Ecology and Evolution, in press. https://arxiv.org/abs/1807.05812
[3] AmiBio project, http://www.amibio-project.eu/
[4] Sullivan et al. (2014), “The eBird enterprise: an integrated approach to development and application of citizen science,” Biological Conservation, vol. 169, pp. 31–40, 2014.
[5] Goeau et al. (2016), “LifeCLEF Bird Identification Task 2016: the arrival of deep learning,” Working Notes of CLEF 2016, pp. 440–449, 2016.
[6] Joppa (2017), “The case for technology investments in the environment,” Nature, vol. 552, pp. 325–328, 2017.