The QMUL Machine Listening Lab (MLLab) has had great success at ICASSP 2019, with several research papers accepted!
ICASSP 2019 (the IEEE International Conference on Acoustics, Speech and Signal Processing) is one of the major conferences in the field, and this year will be held in Brighton, UK.
Here are the papers from our lab that will be presented at the event. Many of them have links to preprints, so you can read them now:
- Pablo A. Alvarado, Mauricio A. Álvarez and Dan Stowell, Sparse Gaussian Process Audio Source Separation Using Spectrum Priors in the Time-Domain
- Inês Nolasco, Alessandro Terenzi, Stefania Cecchi, Simone Orcioni, Helen L. Bear and Emmanouil Benetos, Audio-based identification of beehive states
- Lin Wang and Daniel Roggen, Sound-based transportation mode recognition with smartphones
- William J. Wilkinson, Michael Riis Andersen, Joshua D. Reiss, Dan Stowell and Arno Solin, End-to-End Probabilistic Inference for Nonstationary Audio Analysis
- Sai Samarth R Phaye, Emmanouil Benetos and Ye Wang, SubSpectralNet – Using sub-spectrogram based convolutional neural networks for acoustic scene classification
- F. Lins, M. Johann, E. Benetos, and R. Schramm, Automatic transcription of diatonic harmonica recordings
- Ken O’Hanlon and Mark B. Sandler, Comparing CQT and reassignment based chroma features for template-based automatic chord recognition
In addition, Dan Stowell (with Naomi Harte and Theo Damoulas) will be chairing a Special Session on *Adaptive Signal Processing for Wildlife Bioacoustics*. Six papers have been accepted for oral presentation in the session.
See you in Brighton!