Breakout Sessions #1

  • Taxonomies 1 (Stravinsky room)

  • Artificial intelligence (Studio 5)

Orchestration Analysis Taxonomies (Bouliane/McAdams) [Stravinsky room]
Orchestration can be studied or practiced from different perspectives. One of the central ideas of the ACTOR project is to develop coherent, organized perspectives of analysis and study that, taken together, can contribute to a better understanding of orchestration in general and of specific methods used and phenomena perceived in particular. We will present a common structure for annotations as a powerful method of investigation that fosters analysis, discussion, sharing, and dissemination of knowledge on orchestration. Several legs of analysis have been proposed, each of which can contribute to a better global understanding of orchestration and which could be developed more or less concurrently.

Two of these legs will be presented in detail. Orchestral effects related to auditory grouping form a taxonomy of perception: it classifies and demonstrates the psychoacoustic and acoustic consequences of orchestration based on concurrent, sequential, and segmental auditory grouping processes, that is, how we group acoustic information into "events", perceptually connect successive events into streams, textures, or strata, and hierarchically segment sequential structures into smaller- and larger-scale units. Orchestration techniques derived from compositional and pedagogical practice form a taxonomy of compositional actions: a comprehensive listing of the actual techniques an orchestrator might use or has used.
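
For readers who think in terms of data structures, the sketch below shows one minimal way such a shared annotation format could be encoded, linking the two taxonomy legs for a single annotated passage. The field names, enum values, and the example passage are illustrative assumptions, not the structure the presenters propose.

```python
# Illustrative sketch of a shared annotation record for orchestral effects.
# Field names, enum values, and the example passage are assumptions for
# demonstration only, not the ACTOR project's actual annotation format.
from dataclasses import dataclass, field
from enum import Enum


class GroupingProcess(Enum):
    """Auditory grouping processes underlying orchestral effects (taxonomy of perception)."""
    CONCURRENT = "concurrent"    # fusing simultaneous sounds into a single event
    SEQUENTIAL = "sequential"    # connecting successive events into streams, textures, or strata
    SEGMENTAL = "segmental"      # segmenting sequences into smaller- and larger-scale units


@dataclass
class OrchestrationAnnotation:
    """One annotated span of a score or recording."""
    work: str                      # work and movement being annotated
    measures: tuple[int, int]      # annotated span (start measure, end measure)
    grouping: GroupingProcess      # leg 1: taxonomy of perception
    technique: str                 # leg 2: taxonomy of compositional actions
    comment: str = ""              # free-text analytical remark
    tags: list[str] = field(default_factory=list)


# Hypothetical example annotation combining both taxonomy legs.
example = OrchestrationAnnotation(
    work="Ravel, Boléro",
    measures=(39, 56),
    grouping=GroupingProcess.CONCURRENT,
    technique="timbral blend via doubling at the unison and octave",
    tags=["blend", "doubling"],
)
print(example)
```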

Artificial intelligence and generative models (Esling) [Studio 5]
The working group will discuss the wealth of current research in artificial intelligence and machine learning, and how it can be leveraged effectively in the context of musical orchestration. The main framework under discussion revolves around generative models: recent work at IRCAM has developed deep variational learning with multivariate, multimodal, and multi-scale approaches to bridge symbolic, signal, and perceptual information about creative processes into a joint information space. These models should be developed through the analysis of musical orchestration, which lies at the intersection of symbolic (score), signal (recording), and perceptual representations. Furthermore, as current research focuses on generating a single type of content, studying the interactions between different generated elements and their different time scales represents the next major challenge for generative models. Hence, the working group will be dedicated to opening up new paths and designing new tools for AI applied to orchestration, while clarifying how these can be instantiated and implemented through pragmatic collaborative projects in the near future.
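
As a concrete point of reference for the discussion, the sketch below shows one minimal way a "joint information space" across symbolic and signal representations can be set up as a multimodal variational autoencoder. It assumes PyTorch; the module names, dimensions, and the simple averaging of the two posteriors are illustrative choices and do not reflect the actual IRCAM models mentioned above.

```python
# Minimal multimodal VAE sketch: two modality-specific encoders (symbolic and
# signal) project into a shared Gaussian latent space, and two decoders
# reconstruct each modality from that joint code. Dimensions and the
# posterior-averaging fusion are illustrative assumptions only.
import torch
import torch.nn as nn


class JointVAE(nn.Module):
    def __init__(self, symbolic_dim=128, signal_dim=513, latent_dim=16):
        super().__init__()
        # Modality-specific encoders mapping to latent mean and log-variance.
        self.enc_symbolic = nn.Sequential(nn.Linear(symbolic_dim, 64), nn.ReLU(),
                                          nn.Linear(64, 2 * latent_dim))
        self.enc_signal = nn.Sequential(nn.Linear(signal_dim, 64), nn.ReLU(),
                                        nn.Linear(64, 2 * latent_dim))
        # Modality-specific decoders from the shared latent code.
        self.dec_symbolic = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                          nn.Linear(64, symbolic_dim))
        self.dec_signal = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                        nn.Linear(64, signal_dim))

    def forward(self, score, audio):
        # Encode each modality, then average the two Gaussian posteriors
        # (a crude fusion; a product-of-experts is a common alternative).
        mu_s, logvar_s = self.enc_symbolic(score).chunk(2, dim=-1)
        mu_a, logvar_a = self.enc_signal(audio).chunk(2, dim=-1)
        mu, logvar = (mu_s + mu_a) / 2, (logvar_s + logvar_a) / 2
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec_symbolic(z), self.dec_signal(z), mu, logvar


def elbo_loss(score, audio, model):
    # Reconstruction terms for both modalities plus the KL regularizer.
    rec_score, rec_audio, mu, logvar = model(score, audio)
    rec = nn.functional.mse_loss(rec_score, score) + nn.functional.mse_loss(rec_audio, audio)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld


# Toy usage with random stand-ins for score features and spectrogram frames.
model = JointVAE()
score, audio = torch.randn(8, 128), torch.randn(8, 513)
print(elbo_loss(score, audio, model).item())
```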
