Project overview
This project explores the potential of recent implementations of machine learning algorithms for expressive gestural control of sound synthesis, specifically analogue synthesisers. Body movements are an essential aspect of musical performance: when we make music, we spontaneously use our bodies to match the sonic features we associate with musical expression. This project investigates how to enhance the experience of performing with analogue synthesisers by developing tools and strategies for mapping bodily gestures to sound synthesis parameters. The aim is to allow users to expressively control the sound of analogue synthesisers using their bodies, making performance a rich, embodied experience. We hope this research will contribute to expanding the performance capabilities of analogue synthesisers, as well as support rehabilitation and physical therapy through joyful music-making exercises.
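To make the idea of gesture-to-parameter mapping concrete, here is a minimal sketch in the style of interactive machine learning tools for musicians (such as Wekinator): a small regression model is trained on a few demonstrated gesture/sound pairs and then maps live sensor frames to synthesis parameters continuously. The feature names, parameter choices, and training values below are illustrative assumptions, not the project's actual implementation.

```python
# Minimal gesture-to-parameter mapping sketch (assumptions throughout:
# a 3-axis accelerometer as the gesture sensor, and filter cutoff plus
# resonance as the target synthesis parameters).
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training examples recorded in a "demonstration" phase:
# each row is a gesture feature vector [accel_x, accel_y, accel_z],
# paired with the synth parameters the performer wants at that pose
# (normalised to 0..1).
gesture_features = np.array([
    [0.0, 0.0, 1.0],   # arm at rest
    [0.9, 0.1, 0.2],   # arm raised
    [0.1, 0.8, 0.3],   # arm swept sideways
])
synth_params = np.array([
    [0.10, 0.20],      # dark, gentle sound
    [0.95, 0.40],      # bright, open filter
    [0.50, 0.85],      # mid cutoff, high resonance
])

# Fit a small neural network that maps gestures -> parameters.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(gesture_features, synth_params)

# At performance time, each incoming sensor frame is mapped continuously.
live_frame = np.array([[0.5, 0.4, 0.6]])
cutoff, resonance = model.predict(live_frame)[0]

# In a real setup these values would be scaled to MIDI CC (0..127) and
# sent to a MIDI-to-CV converter driving the analogue synthesiser.
print(f"cutoff CC: {int(np.clip(cutoff, 0, 1) * 127)}, "
      f"resonance CC: {int(np.clip(resonance, 0, 1) * 127)}")
```

The design choice here, learning the mapping from a handful of performer-supplied examples rather than hand-coding it, is what lets each user shape the instrument around their own movement vocabulary.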