Summary
This group studies computational tools and AI for orchestration, drawing on musical research and composers' feedback to understand the current state of the models and creative tools used for orchestral composition.
Workgroup Leader
Philippe Esling
Contact: philippe.esling[at]ircam.fr
Overview
The focus of the working group will be to discuss the wealth of current research in artificial intelligence and machine learning, and how it can be efficiently leveraged in the context of musical orchestration. The major framework under discussion revolves around generative models: recent work at IRCAM has produced deep variational learning with multivariate, multimodal, and multi-scale approaches, bridging symbolic, signal, and perceptual information on creative processes into a joint information space. These models should be developed through the analysis of musical orchestration, which lies at the intersection of symbolic (score), signal (recording), and perceptual representations. Furthermore, as current research focuses on generating a single stream of content, studying the interactions between different generated elements, along with their different time scales, is the next paramount challenge for generative models. The working group will therefore be dedicated to opening up new paths and designing new tools for AI applied to orchestration, while clarifying how these can be instantiated and successfully implemented through pragmatic collaborative projects in the near future.
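At the heart of the deep variational learning mentioned above is the reparameterization trick, which makes sampling from the learned latent space differentiable with respect to the encoder outputs. A minimal sketch in NumPy, with purely illustrative values (this is not the IRCAM model, just the core sampling step):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    This reparameterization makes latent sampling differentiable with
    respect to the encoder outputs (mu, log_var), which is what allows
    variational autoencoders to be trained end to end.
    """
    sigma = np.exp(0.5 * log_var)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

# Toy "encoder" outputs for a 4-dimensional latent space.
mu = np.array([0.0, 1.0, -0.5, 2.0])
log_var = np.full(4, -2.0)  # small variance: samples stay near mu

z = reparameterize(mu, log_var, rng)
print(z.shape)  # (4,)
```

In a joint information space, separate encoders for score, signal, and perceptual features would each produce such a (mu, log_var) pair over a shared latent space.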
Subgroups
The different subgroups are:
Interaction and control issues
Co-creativity and emergence
Generative probabilistic models
Multi-instrumental generation
Idealized tools brainstorming
Research relationships inside ACTOR
Active or Envisioned Projects
Generative probabilistic models for orchestration
We aim to develop generative models able to perform various orchestration tasks at both the symbolic and the signal level, such as turning a piano score into an orchestral one based on inferred rules.
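To make the symbolic side of this task concrete, here is a deliberately naive rule-based sketch: the notes of a piano chord are distributed across string instruments according to their playable ranges. The ranges (MIDI note numbers) and the assignment rule are illustrative simplifications; the project aims to learn such mappings rather than hand-code them.

```python
# Approximate playable ranges in MIDI note numbers (illustrative).
INSTRUMENT_RANGES = {
    "contrabass": (28, 55),
    "cello": (36, 76),
    "viola": (48, 84),
    "violin": (55, 103),
}

def orchestrate_chord(pitches):
    """Assign each pitch (low to high) to the first still-free
    instrument whose range contains it."""
    assignment = {}
    free = dict(INSTRUMENT_RANGES)
    for pitch in sorted(pitches):
        for name, (low, high) in free.items():
            if low <= pitch <= high:
                assignment[name] = pitch
                del free[name]
                break
    return assignment

# A C major chord voiced across the string section.
chord = [36, 55, 64, 72]  # C2, G3, E4, C5
voicing = orchestrate_chord(chord)
print(voicing)
```

A learned model would replace the fixed ranges and greedy rule with statistics inferred from orchestral scores, but the input/output structure (piano pitches in, per-instrument assignments out) stays the same.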
Audio synthesis based on machine learning
High-quality audio synthesis of orchestral instruments aims to produce audio recordings of human-level performances directly from symbolic scores.
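The simplest possible stand-in for this score-to-signal rendering is to turn one symbolic event (pitch, duration) into a sine tone. This sketch only illustrates the input/output contract of the task; the neural synthesizer replaces the sine with a learned instrument model. Names and the sample rate are illustrative choices:

```python
import numpy as np

SAMPLE_RATE = 16000

def midi_to_hz(pitch):
    """Convert a MIDI pitch number to frequency in Hz (A4 = 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((pitch - 69) / 12.0)

def render_note(pitch, duration, amplitude=0.5):
    """Render one score event as a plain sine tone: the most naive
    stand-in for a learned instrument synthesizer."""
    t = np.arange(int(duration * SAMPLE_RATE)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * midi_to_hz(pitch) * t)

# Render half a second of A4 from a symbolic event.
audio = render_note(69, 0.5)
print(audio.shape)  # (8000,)
```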
FlowSynthesizer - Audio synthesizer control
We aim to simplify the control of audio synthesizers, which can open up whole new areas of creativity. This is especially important in the case of electroacoustic music.
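The core idea behind flow-based synthesizer control is an invertible map between a small, easy-to-explore latent space and the full set of synthesizer parameters: moving in the latent space yields a preset, and any preset can be mapped back to a latent point. Real normalizing flows stack many nonlinear invertible layers; the single affine layer below is a minimal sketch of the invertibility property only, with all dimensions and values illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

class AffineFlow:
    """One invertible affine layer: the smallest possible 'flow'."""

    def __init__(self, dim, rng):
        # Random matrix kept close to the identity so that it is
        # invertible and well conditioned.
        self.weight = np.eye(dim) + 0.1 * rng.standard_normal((dim, dim))
        self.bias = rng.standard_normal(dim)

    def forward(self, z):
        """Latent point -> synthesizer parameter vector."""
        return z @ self.weight + self.bias

    def inverse(self, params):
        """Synthesizer parameters -> latent point (exact inverse)."""
        return (params - self.bias) @ np.linalg.inv(self.weight)

flow = AffineFlow(dim=8, rng=rng)
z = rng.standard_normal(8)       # a point in the low-dimensional control space
params = flow.forward(z)         # the synth parameters it decodes to
z_back = flow.inverse(params)    # recover the latent point
print(np.allclose(z, z_back))    # True
```

The invertibility is what makes the control loop possible in both directions: explore the latent space to discover sounds, or project an existing preset into the latent space to find its neighbors.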
Multi-instrumental generation
Most generative models target a single instrument. We aim at the joint generation of multiple instruments, and at generating instruments conditioned on others.
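The conditioning structure can be illustrated with a toy rule in place of a learned model: a counter-voice is produced as a function of an existing melody, here by harmonizing at a fixed consonant interval. The rule itself is a hypothetical placeholder; a learned model would replace it while keeping the same "melody in, conditioned voice out" structure:

```python
# Consonant intervals in semitones: minor/major 3rd, 5th, minor/major 6th.
CONSONANT_INTERVALS = [3, 4, 7, 8, 9]

def generate_conditioned_voice(melody, interval=7):
    """Generate a second instrument line conditioned on the melody,
    using a fixed harmonization rule as a stand-in for a model."""
    if interval not in CONSONANT_INTERVALS:
        raise ValueError("interval must be consonant in this toy rule set")
    return [pitch - interval for pitch in melody]

melody = [60, 62, 64, 65, 67]  # C4 D4 E4 F4 G4 (MIDI numbers)
counter = generate_conditioned_voice(melody, interval=4)
print(counter)  # [56, 58, 60, 61, 63]
```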
Lightweight deep learning for embedded platforms
To enhance the usability of deep audio models, we aim to reduce their computational footprint so that they can be embedded in dedicated hardware, leading to innovative musical instruments.
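One standard way to shrink a trained model for embedded targets is post-training weight quantization: store each weight tensor as 8-bit integers plus a single floating-point scale, cutting storage by 4x relative to float32. A minimal symmetric-quantization sketch in NumPy (one technique among several the project could use, not a description of its actual pipeline):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization of a weight tensor to int8."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(2)
weights = rng.standard_normal((64, 64)).astype(np.float32)

q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)

# Storage drops 4x; reconstruction error is bounded by half a quantization step.
error = np.max(np.abs(weights - recovered))
print(q.dtype, bool(error < scale))  # int8 True
```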