Real-time Timbral Analysis for Musical and Visual Augmentation
Martin Daigle (McGill) & Pauline Patie (UdeM)
March 15th, 2023
ABSTRACT:
Our project explores the capabilities of cutting-edge machine learning (ML) techniques for real-time sound analysis in the context of a new composition. Led by Martin Daigle, Pauline Patie, and Emmanuel Lacopo, the project uses software such as Rodrigo Constanzo’s SP-Tools to provide real-time instrumental augmentation for percussion and guitar. The same analysis will also control real-time interactive visual projections created by Pauline.
Our main technical method for generating and performing this piece is SP-Tools, which builds on the FluCoMa externals to exploit large banks of sound samples, with a focus on drum augmentation. These tools, combined with drum triggers, microphones, direct input from the guitars, and a marimba, will be used to train the computer and generate a corpus of samples for each instrument, differentiated and organized by their descriptors (loudness, pitch, spectral shape, MFCCs, mel bands).
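To illustrate the idea of organizing a corpus by descriptors, here is a minimal sketch in Python/NumPy. It is not SP-Tools itself (which runs in Max via FluCoMa); it computes only two toy descriptors, loudness (RMS in dB) and spectral centroid, standing in for the richer set (pitch, MFCCs, mel bands) named above. All function and corpus names are hypothetical.

```python
import numpy as np

def descriptors(frame: np.ndarray, sr: int = 44100) -> np.ndarray:
    """Toy per-frame descriptor vector: [loudness (dB RMS), spectral centroid (Hz)].
    A stand-in for the fuller descriptor set used by SP-Tools/FluCoMa."""
    rms = np.sqrt(np.mean(frame ** 2))
    loudness_db = 20 * np.log10(rms + 1e-12)
    # Magnitude spectrum of a windowed frame, then the amplitude-weighted
    # mean frequency (spectral centroid) as a brightness measure.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / sr)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return np.array([loudness_db, centroid])

# A tiny two-entry "corpus": a loud low tone and a quiet bright tone.
sr, n = 44100, 2048
t = np.arange(n) / sr
corpus = {
    "low_tom": descriptors(0.8 * np.sin(2 * np.pi * 110 * t), sr),
    "hi_hat": descriptors(0.1 * np.sin(2 * np.pi * 6000 * t), sr),
}
```

In a real corpus, each sample's descriptor vector would be stored alongside the audio so that incoming sounds can be matched against it.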
Within a fraction of a millisecond, the corpus will be compared against the live sound input, the closest match determined, and that sample played immediately at the same velocity. One of the first milestones is to build a corpus of sounds from all instruments involved and to provide examples, in the form of a performance video, of all possible combinations (e.g. the drum kit playing the guitar corpus, the guitar playing the marimba corpus, etc.). Moreover, the corpus will be tested with a variety of musical genres, including metal, jazz, and hip-hop, to observe the range of changes to the descriptors.
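The closest-sound lookup described above amounts to a nearest-neighbour search in descriptor space. The following sketch (again hypothetical, not the SP-Tools implementation) shows the core step: normalise each descriptor dimension so that quantities on different scales, such as loudness in dB and spectral centroid in Hz, contribute comparably, then return the corpus entry at minimum Euclidean distance from the query.

```python
import numpy as np

def nearest(query: np.ndarray, corpus: dict) -> str:
    """Return the name of the corpus entry whose descriptor vector is
    closest to the query, after per-dimension standardisation."""
    names = list(corpus)
    mat = np.stack([corpus[n] for n in names])
    # Standardise each descriptor dimension (z-score) so no single
    # descriptor dominates the distance because of its units.
    mu, sigma = mat.mean(axis=0), mat.std(axis=0) + 1e-9
    dists = np.linalg.norm((mat - mu) / sigma - (query - mu) / sigma, axis=1)
    return names[int(np.argmin(dists))]

# Hypothetical descriptor vectors: [loudness dB, spectral centroid Hz].
corpus = {
    "kick":  np.array([-5.0, 120.0]),
    "snare": np.array([-8.0, 1800.0]),
    "ride":  np.array([-15.0, 6500.0]),
}
print(nearest(np.array([-6.0, 200.0]), corpus))  # → kick
```

A real-time version would run this lookup per detected onset and trigger playback of the matched sample at the input's velocity.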
The main goal of this project is to generate a new piece that features and promotes the use of ML technology, so that we can perform this repertoire and demonstrate the power of these tools online and in concert settings.