| KEYNOTE ADDRESS |
Georg Boenn
Human Hearing teaches the Algorithm – Can we extract something meaningful from audio?
In recent decades, research into human hearing has yielded groundbreaking insights into the biomechanics and neural processing of the inner ear, the cochlea. Building on this new understanding of pitch and timbre perception, machine-learning models have become very precise at classifying melodies, separating sound sources, and identifying speech and musical timbres. My research focuses on the problem of beat and rhythmic pattern detection, with the aim of identifying and extracting tempo, meter, and musical rhythms from audio sources. I will demonstrate my solution and explore how these models can be combined to inform a modern understanding of music perception. For this purpose, I have teamed up with Dr. Andre Rupp from the University of Heidelberg, who heads the Section of Biomagnetism in the Department of Neurology and is also an expert in auditory image models. We are currently analysing a very interesting piece of electronic dance music by Aphex Twin to see how our models compare with real human brain activity measured by fMRI scans. Our aim is to gain new insights into music cognition and embodied responses to music.

A pitch-o-gram of the electronic music composition “Tha” from Selected Ambient Works 85–92 (1992) by Richard D. James, aka Aphex Twin (Boenn and Rupp, 2025)
| PARTICIPANTS |