CIMTR public lecture panel: Human interaction assessment and generative segmentation in health and music

  • Dates: 28 April 2025, 17:30 - 18:30
  • Cost: Free
  • Venue: Online
Register to attend

HIGH-M: Human Interaction assessment and Generative segmentation in Health and Music

Improvisation in music therapy has been shown to be an effective technique for engaging clients in emotionally rooted (inter)action to treat affective disorders such as major depression (Aalbers et al., 2017; Erkkilä et al., 2011). During improvisation, however, a variety of musical information is exchanged, resulting in a highly complex musical and interpersonal situation. While traditional models of music therapy analysis emphasise aural analysis and the assessment of single sessions (Bruscia, 1987), more recent and elaborate methods, such as microanalysis, focus on the detailed development of improvisation sessions (Wosch, 2021; Wosch & Erkkilä, 2016), which comes at the cost of a more time-consuming application process.

Digital processing, as in music information retrieval and machine learning, seems promising for accelerating the analysis process, but it requires considerable preliminary work in data preprocessing and in formalising the high-level concepts used in music therapy, in order to develop a suitable dataset for model training. Moreover, digital processing offers the additional benefit of a more detailed and precise analysis of musical data.
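To give a flavour of the preprocessing step, the minimal sketch below summarises a segment of improvised playing as a few scalar features. The note format, function name, and the features themselves are illustrative assumptions, not the actual output of the Music Therapy Toolbox.

```python
# Hypothetical sketch: deriving simple MIR-style features from a note list.
# Each note is (onset_seconds, midi_pitch, velocity).

def extract_features(notes, duration):
    """Summarise a segment of improvised playing with a few scalar features."""
    if not notes:
        return {"note_density": 0.0, "mean_pitch": 0.0, "mean_velocity": 0.0}
    pitches = [pitch for _, pitch, _ in notes]
    velocities = [vel for _, _, vel in notes]
    return {
        "note_density": len(notes) / duration,              # notes per second
        "mean_pitch": sum(pitches) / len(pitches),          # average MIDI pitch
        "mean_velocity": sum(velocities) / len(velocities), # playing intensity
    }

segment = [(0.0, 60, 80), (0.5, 64, 70), (1.0, 67, 90), (1.5, 72, 60)]
print(extract_features(segment, duration=2.0))
# {'note_density': 2.0, 'mean_pitch': 65.75, 'mean_velocity': 75.0}
```

In a real pipeline, features like these would be computed per time window for each player, so that the interaction between the two streams can be analysed.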

The HIGH-M project aims to combine the qualitative validity of microanalytic methods with the speed and precision of digital processing, and therefore operates in a highly interdisciplinary research environment. To this end, the Improvisation Assessment Profile - Autonomy Microanalysis (Wosch, 2007) has been formalised by mapping it onto both the Music Therapy Toolbox (Erkkilä & Wosch, 2018), a music information retrieval tool for music therapy improvisation, and the Social Systems Game Theory (Burns et al., 2018), a sociological model for analysing and describing human interaction. By applying this framework to the dataset provided by the University of Jyväskylä (Erkkilä et al., 2021), an enriched dataset was created that is feasible for the training of a supervised machine learning classification model to recognise different types of interaction in dyadic piano improvisations.
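The final step described above, training a supervised classifier on annotated segments, can be sketched as follows. The labels ("leading", "following") and the nearest-centroid approach are illustrative assumptions for this example; they are not the actual IAP-Autonomy categories or the model used in HIGH-M.

```python
# Hypothetical sketch of supervised classification on annotated segments:
# each training item is (feature_vector, interaction_label). A minimal
# nearest-centroid classifier stands in for a real ML model.
from collections import defaultdict
import math

def train_centroids(segments):
    """Average the feature vectors of all segments sharing a label."""
    sums, counts = {}, defaultdict(int)
    for features, label in segments:
        if label not in sums:
            sums[label] = [0.0] * len(features)
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {lbl: [s / counts[lbl] for s in vec] for lbl, vec in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], features))

# Toy training data: [note_density, mean_velocity] per annotated segment.
training = [
    ([4.0, 70.0], "leading"), ([3.5, 68.0], "leading"),
    ([1.0, 55.0], "following"), ([1.5, 58.0], "following"),
]
model = train_centroids(training)
print(classify(model, [3.8, 69.0]))  # leading
print(classify(model, [1.2, 56.0]))  # following
```

The design point is the same regardless of the model: once segments carry both formalised features and microanalytic labels, recognising interaction types becomes a standard supervised learning problem.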

The lecture will include a summary of the theoretical framework and show the process from manual segment annotation to the training of a machine learning algorithm, alongside a single improvisation example. Finally, possible conclusions regarding the specificity of (depressive) musical interaction on a statistical level will be discussed.

About the panel

Thomas Wosch

Thomas is Professor of Music Therapy at the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS) in Germany. He is head of the Master's programme Music Therapy for Empowerment and Participation and head of the Music Therapy Lab in the Institute for Applied Social Sciences, a research institute of THWS, and in this position he leads and contributes to several research projects, e.g. on music therapy in dementia care (HOMESIDE, MIDDEL, MUSIC MOVES, DigiMus) and on automated music therapy assessment (HIGH-M). He previously worked as a music therapist in adult in- and out-patient psychiatry. As a researcher he developed Microanalysis in Music Therapy and published the textbook on this approach. He is a co-founder of IMTAC, the International Music Therapy Assessment Consortium, and has published three books and 92 journal articles and book chapters on microanalysis, outcome research, technology and assessment, skill-sharing in music therapy, music therapy for mental disorders, and emotional processing in music therapy.

Olivier Lartillot

Olivier, a researcher at the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo, focuses on computational music analysis and artificial intelligence. As a postdoctoral researcher in a music therapy project led by Jaakko Erkkilä and Petri Toiviainen at the University of Jyväskylä in 2004-2005, he developed the Music Therapy Toolbox (MTTB). He then conceived, with Petri Toiviainen, the MIRtoolbox, a recognised tool for music feature extraction from audio. As part of a five-year Academy of Finland research fellowship, he developed the MiningSuite, an analytical framework that integrates audio and symbolic research for comprehensive music analysis. He obtained funding from the Research Council of Norway under the FRIPRO-IKTPLUSS programme for the MIRAGE project (2020-2023), which aims to enhance computers' ability to understand music and to develop technologies that facilitate music appreciation and engagement. He is a partner in the HIGH-M project.

Bastian Vobig

Bastian studied musicology at the Julius-Maximilians-Universität Würzburg (JMU), focusing on argumentation strategies in music analysis as well as on topics of systematic musicology (“authenticity”). From 2021 to 2023, he was a lecturer at the Institute for Music Research at JMU, where he taught courses in music analysis, computational musicology, and hip-hop studies. Since 2022, he has been a research associate in the HIGH-M (Human Interaction assessment and Generative segmentation in Health and Music) project at the Institute for Applied Social Sciences (IFAS) at the Technical University of Applied Sciences Würzburg-Schweinfurt (THWS), which aims to develop a machine learning-based analysis tool for clinical improvisation. In 2023, he also became a research associate at the Nuremberg University of Music, where he started his PhD project “Analyzing Autonomy in Music Therapy Improvisations” in 2024 as part of the HIGH-M project. His current research focuses on the interdisciplinary modelling of musical interaction and the integration of music therapy analysis methods into sociological and digital frameworks.

This event is part of the CIMTR Public Lecture Series 2024-25.
