
Research Areas


Automatic Drum Transcription

Researchers within the Digital Media Technology centre have produced a state-of-the-art music transcription system, as evidenced in multiple papers and community-led evaluations.

Research aims

Automatic music transcription (AMT) is the process of annotating the pitch and duration information contained in acoustic signals, and is crucial to several fields of study (e.g., musicology, music information retrieval).
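
As an illustration of the task's output (our own minimal sketch in Python, not the format of any particular system), a transcription can be represented as a list of note events, each carrying an onset time, a duration and a pitch:

    # Minimal illustration (not any particular system's format): AMT turns
    # an audio signal into symbolic note events like these.
    from dataclasses import dataclass

    @dataclass
    class NoteEvent:
        onset: float     # seconds from the start of the recording
        duration: float  # seconds
        pitch: int       # MIDI note number (e.g., 60 = middle C)

    transcription = [
        NoteEvent(onset=0.00, duration=0.50, pitch=60),
        NoteEvent(onset=0.50, duration=0.25, pitch=64),
        NoteEvent(onset=0.75, duration=0.25, pitch=67),
    ]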

A key motivation in this research is to develop tools for extracting the rhythm embedded within musical audio signals. We set out to investigate whether we could improve the field through the incorporation of deep learning systems tailored specifically to the task, and we were the first, and most successful, group in this endeavour.
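
For context, a common deep learning drum transcription pipeline (a general sketch in Python, not the centre's exact system) has a network map spectrogram frames to one activation curve per drum class, which is then peak-picked into onset times:

    # General sketch of the common pipeline, not the centre's exact system.
    import numpy as np

    FPS = 100  # activation frames per second (an assumed frame rate)

    def peak_pick(activation, threshold=0.5, fps=FPS):
        """Return onset times where the activation is a local maximum
        above the threshold."""
        onsets = []
        for i in range(1, len(activation) - 1):
            a = activation[i]
            if a >= threshold and a > activation[i - 1] and a >= activation[i + 1]:
                onsets.append(i / fps)
        return onsets

    # Stand-in for the per-class activations a trained network would output.
    activations = {"kick": np.random.rand(1000),
                   "snare": np.random.rand(1000),
                   "hihat": np.random.rand(1000)}
    drum_onsets = {drum: peak_pick(act) for drum, act in activations.items()}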

How has the research been carried out?

Over four years, we developed systems for music transcription and improved upon these through iterative development of the algorithms, training procedures/optimisation criteria and datasets. The findings are catalogued in more than 10 papers published at high-impact conferences and in journals.

Research outcomes

  • Achieved state-of-the-art results in the Music Information Retrieval Evaluation eXchange (MIREX) community evaluation (evidenced by results on the MIREX webpage).
  • Achieved the best results in a community-led comparison of all current drum transcription models, published in an IEEE/ACM TASLP journal paper.
  • Development of evaluation strategies for music transcription systems that are now used by the community.
  • New datasets for the task, now used by the community.

Intelligent Music Production and Automated Music Transformations

Simplifying complex music production processes and making them more accessible, bridging the gap between musicians and technology.

Research aims

Intelligent music production and automated audio transformations aim to simplify complex music production processes, making them more accessible and thus bridging the gap between musicians and technology.

To do this, we consider interfaces of reduced complexity and semantic control of low-level music production parameters. We approach this problem using a variety of techniques, including machine learning, adaptive audio processing and natural language processing.
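
As a toy illustration of semantic control (our own sketch; the descriptor-to-parameter mappings below are invented for illustration, not learned from our data), a single high-level term can drive several low-level equaliser parameters at once:

    # Toy illustration of semantic control: one high-level descriptor drives
    # several low-level EQ parameters. The mappings are invented, not learned.
    SEMANTIC_EQ = {
        # descriptor: (low-shelf gain dB, mid-peak gain dB, high-shelf gain dB)
        "warm":   (+3.0, +1.0, -2.0),
        "bright": (-1.0,  0.0, +4.0),
        "muddy":  (+4.0, -2.0, -3.0),
    }

    def eq_settings(descriptor, amount=1.0):
        """Scale a descriptor's EQ curve by `amount` in [0, 1]."""
        low, mid, high = SEMANTIC_EQ[descriptor]
        return {"low_shelf_db": low * amount,
                "mid_peak_db": mid * amount,
                "high_shelf_db": high * amount}

    print(eq_settings("warm", amount=0.5))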

Example applications include content-aware audio effects, synthesiser and audio effect preset recommendation, virtual music assistants and automatic remixing.

How has the research been carried out?

Various projects have utilised different methods; however, for the most part there has been a marked increase in our reliance on cutting-edge deep learning methods.

We steer these towards two key areas: parameter estimation for pre-existing musical effects (e.g., work by Stasis et al.) and sound and rhythm generation (e.g., work by Tomczak et al. and Drysdale et al.). Please see the associated publications for further information.
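
As a rough sketch of the parameter-estimation idea (a generic formulation in PyTorch, not the method of any one of the papers above), a small network can be trained to regress an effect's parameter settings from a feature representation of the processed audio:

    # Generic sketch of effect-parameter estimation, not any one paper's
    # method: regress normalised effect parameters from audio features.
    import torch
    import torch.nn as nn

    class ParameterEstimator(nn.Module):
        def __init__(self, n_features=128, n_params=4):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 64),
                nn.ReLU(),
                nn.Linear(64, n_params),
                nn.Sigmoid(),  # effect parameters normalised to [0, 1]
            )

        def forward(self, features):
            return self.net(features)

    model = ParameterEstimator()
    features = torch.randn(8, 128)    # stand-in for a batch of audio features
    target_params = torch.rand(8, 4)  # stand-in for ground-truth settings
    loss = nn.functional.mse_loss(model(features), target_params)
    loss.backward()                   # gradients for one training step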

Research outcomes

  • More than 20 publications at high-impact conferences and in journals, with numerous citations across a variety of projects.
  • Semantic Labs spinout and development of Faders.io by Ryan Stables and others.
  • Best paper award (2nd place) at a premier signal processing conference for audio production software (evidenced on the DAFx 2015 website).
  • References in cutting-edge audio plugins such as flowEQ (evidenced at floweq.ml), and new products utilising intelligent music production tools (e.g., iZotope).

Computational Musicology

The analysis of music is an established field, but computational analysis in this area is relatively recent. Analysis of recordings and gestural data, coupled with an understanding of the underlying traditions and motivations of the performer, can provide researchers with alternative methods, leading to a deeper understanding of the music.

This research work is completed using a variety of computational analysis techniques, including machine learning, audio analysis and gestural analysis. These techniques are applied to classify musicians based on the nuances of individual playing style.
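
A minimal sketch of such a pipeline is shown below (the feature and classifier choices, and the file names, are illustrative assumptions rather than the published methods):

    # Minimal sketch of a playing-style classification pipeline; the
    # features, classifier and file names are illustrative assumptions.
    import librosa
    import numpy as np
    from sklearn.svm import SVC

    def summarise(path):
        """Summarise a recording as its mean MFCC vector."""
        y, sr = librosa.load(path, sr=22050)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
        return mfcc.mean(axis=1)

    # Hypothetical labelled recordings by different players.
    files = ["player_a_reel.wav", "player_b_reel.wav"]
    labels = ["player_a", "player_b"]

    X = np.stack([summarise(f) for f in files])
    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.predict(X[:1]))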

A number of projects have been carried out, including classification of traditional Irish flute playing styles, led by Islah Ali-MacLachlan, and analysis of fiddle bowing styles, led by Will Wilson.

Research outcomes

  • A number of publications in journals and conference proceedings.
  • Two datasets of traditional Irish solo flute playing with extensive metadata annotations.