Research project

Machine Learning-Driven Gestural Control for Expressive Sound Synthesis: Bridging Movement, Music, and Musculoskeletal Health Through Analogue Synthesisers

Project overview

This project explores the potential of recent machine learning techniques for expressive gestural control of sound synthesis, focusing on analogue synthesisers. Body movement is an essential aspect of musical performance: when we make music, we spontaneously use our bodies to match the sonic features we associate with musical expression. The project investigates how to enhance the experience of performing with analogue synthesisers by developing tools and strategies for mapping bodily gestures to sound synthesis parameters. The aim is to let performers control the sound of analogue synthesisers expressively with their bodies, making performance a rich embodied experience. We hope this research will expand the performance capabilities of analogue synthesisers and support rehabilitation and physical therapy through joyful music-making exercises.
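As an illustration of the kind of gesture-to-parameter mapping described above, the sketch below trains a small regression model on a handful of hypothetical gesture/parameter pairs and converts its output to MIDI control-change values, which a MIDI-to-CV interface could pass on to an analogue synthesiser. The sensor features, training data, CC numbers, and the choice of scikit-learn and mido are illustrative assumptions, not the project's actual toolchain.

```python
# Illustrative sketch only: a regression-based mapping from gesture features
# (here, 3-axis accelerometer readings) to analogue synthesiser parameters,
# expressed as MIDI CC values for a MIDI-to-CV interface.
# All training pairs, feature layouts and CC numbers below are hypothetical.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical calibration data: each row of X is one gesture feature vector,
# each row of y is two normalised synth parameters (e.g. cutoff, resonance)
# recorded by demonstration during a "play-along" calibration session.
X_train = np.array([
    [0.0, 0.0, 1.0],   # arm at rest
    [0.5, 0.1, 0.8],   # slow lift
    [0.9, 0.4, 0.2],   # fast sweep
    [0.2, 0.9, 0.3],   # lateral gesture
])
y_train = np.array([
    [0.1, 0.2],
    [0.4, 0.3],
    [0.9, 0.7],
    [0.5, 0.9],
])

# A small neural network learns the gesture-to-parameter mapping.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

def gesture_to_cc(features):
    """Map one gesture feature vector to 7-bit MIDI CC values."""
    params = np.clip(model.predict([features])[0], 0.0, 1.0)
    return [int(round(p * 127)) for p in params]

# Example: one new frame from the sensor stream.
print(gesture_to_cc([0.7, 0.3, 0.5]))   # e.g. [cutoff CC value, resonance CC value]

# In performance, each frame could be sent to the synth, e.g. with mido:
# import mido
# out = mido.open_output()              # port of the MIDI-to-CV interface
# cutoff, resonance = gesture_to_cc(frame)
# out.send(mido.Message('control_change', control=74, value=cutoff))
# out.send(mido.Message('control_change', control=71, value=resonance))
```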

Staff

Lead researchers

Dr Pablo Galaz

Lecturer in Composition and Analysis

Other researchers

Dr Richard Polfreman

Associate Professor
Research interests
  • New Interfaces for Musical Expression (NIME)
  • Music and Movement
  • User-Interface Design and HCI

Dr Martin Warner

Associate Professor

Dr Arturo Vazquez Galvez

Engineering Research Fellow

Collaborating research institutes, centres and groups

Research outputs