Y7 | Student Presentations

Congratulations to the student members selected to present at the plenary session (July 7) of the Y7 ACTOR Workshop:

Abstracts

Joshua Rosner

To Be or Not to Bop: How Phonetics and Timbre Shape Rhythm and Phrase in Scat Singing

Musicologists often draw attention to timbre’s essential role in jazz (e.g., Schuller, 1986), yet scholars tend to focus on instrumental timbre as a means of individual expression (Floyd, 1995) or on its ability to convey personhood (Lewis, 1995). Elaborating on conceptual approaches to African American music, Olly Wilson (1999, 160) draws attention to the heterogeneous sound ideal—a fundamental bias towards contrast of color, a desirable musical texture “that contains a combination of diverse timbres.” In jazz, scat singing—the vocal practice of improvising with non-lexical vocables or “nonsense” syllables—leverages phonetic variation to generate the heterogeneous sound ideal. In doing so, however, vocalists’ phonetic choices reveal how specific speech sounds (phones) and phonetic patterns convey phrase structure and accentuation. Extending Lawson (1968), who suggests that vowel quality and musical timbre are similar functions of common acoustic correlates, our project treats vowel, consonant, and syllable choice as timbral decisions. Building upon previous case studies (Bauer, 2007; Givan, 2004), we compile a corpus of musical and phonetic transcriptions of improvised scat solos by English-speaking vocalists (e.g., Ella Fitzgerald, Sarah Vaughan). Through corpus analysis, we demonstrate how timbre and phonetics communicate phrase structure (i.e., notions of beginning, middle, and ending), metrical accent structure, and melodic contour. By combining timbre and linguistic analysis techniques, we begin to uncover the phonotactics—the permissible combinations of speech sounds—of Anglophone scat singing. Furthermore, we demonstrate that timbre variation not only produces jazz’s ideal aesthetic result but also crucially conveys rhythmic information.

Amit Gur

Form and Material in Visual and Auditory Perception

The objective of my research was to develop conceptual and empirical tools for defining auditory equivalents of visually perceived form and material. To establish an empirical basis for my hypotheses, I anchored my research in the well-studied ‘similarity with enlargement’ model (Goldmeier, 1936)—a framework broadly recognized for its ability to empirically distinguish between visual form and material. I developed an auditory analogy to this model, tested it empirically, and found that the results parallel those observed in vision.
Investigating the auditory equivalents of visual form and material presents significant challenges due to differences between the modalities. For example, in vision, form and material are associated with structures in two- or three-dimensional space, while in audition, time and perceptual aspects of sound—such as pitch, loudness, and duration—are one-dimensional. For these and other reasons, a direct analogy between auditory and visual form and material is not feasible. To address this, I defined visual form and material in terms of perceptual features that can be explained independently of visual space. By examining how these features manifest in the auditory domain, I was able to define the auditory equivalents of visual form and material.
The research places particular emphasis on the notions of timbre and texture. Although these phenomena may seem disparate—timbre is associated with one unified sound, while texture can be associated with the collective sound of an ensemble—they share key perceptual features, making them central examples of auditory material.

Kelsey Lussier

Identifying Orchestrational Norms and Trends in Funk Grooves

What is it about funk music that makes us want to dance? Analytical and perceptual studies of this repertoire suggest that it’s due to the notion of groove—a repeated musical cycle that plays a significant role in structuring funk music. Most research on groove focuses almost exclusively on rhythm at the expense of other essential parameters. For instance, studies by Witek et al. (2014) attempt to predict perceived grooviness based on the amount of syncopation within a cycle without considering texture, timbral contrast, or register, which are all central to the experience and structure of groove. Studies that aim to resolve this issue tend to incorporate timbre, texture, and orchestration unsystematically (Danielsen, 2006) or based entirely on perceptual data without a strong analytical foundation (Sioros et al., 2023).
In an effort to both resolve these shortcomings and support continued perceptual research on groove, this project presents orchestrational profiles of funk grooves from 1960 to the present. It examines a corpus of funk grooves drawn from the Lucerne Groove Research Library and Danielsen (2006), analyzing and comparing each groove’s rhythmic content, textural profile(s), instrumentation, and orchestration. Each groove’s timbral content is analyzed through multidimensional scaling (MDS) of stem tracks in Sonic Visualiser, precisely describing each timbre’s relative similarity or difference. Each groove’s orchestration is analyzed by combining MDS with the TOGE methodology to break the groove down into its most likely component auditory streams. These profiles are presented as testable hypotheses that serve as the basis for perceptual studies of groove experience.

Benjamin Lavastre

Orchestration and timbre writing challenges in mixed music with digital musical instruments: Case study of Instrumental Interaction V for 3 Karlax and ensemble

A digital musical instrument (DMI) has no intrinsic sound identity. It comprises an interface equipped with sensors, a sound-generating system, and mapping strategies that associate sensor data with sound synthesis parameters [Miranda & Wanderley, 2006]. So what can DMIs bring to orchestration practices, and what role can they play in shaping the resulting timbre?
This presentation discusses the compositional strategies of Instrumental Interaction V (Lavastre, 2025) for 3 Karlax, 7 acoustic instruments, and electronics, to be performed on February 21, 2025. The Karlax is a clarinet-like DMI whose main sensors are continuous keys, velocity pistons, a rotary axis, and an inertial measurement unit. This self-analysis examines several excerpts from the piece and compares the desired effects with the audiovisual results. The piece was composed around pre-selected “gestures” (rebounds, infinite lines, circles, etc.) favoring musical interactions. For each excerpt, I detail the techniques for instrumentation, sound synthesis, mapping, spatialization, notation, and bodily expression. I then focus on the sonic relationships between the Karlax and the acoustic instrument parts, drawing on perceptual principles [Touizrar & McAdams, 2019]. I examine the Karlax’s roles in overall musical “gestures” in terms of orchestration, describing strategies such as the “extension” of the acoustic instrumental world, the real-time transformation of the ensemble’s timbre, and play on the ambiguity of the sound/gesture relationship. Finally, I discuss the tools and compositional framework a composer/orchestrator might adopt, and the benefits of integrating DMIs like the Karlax into their practice.

Jonas Régnier

Everyday Sounds as Emotional Catalysts: A Research-Creation Study in Contemporary Music

Everyday sounds are frequently used in electroacoustic music. These sounds—ranging from rain to frying food—are rich in acoustic and perceptual complexity, giving them distinctive timbral characteristics.
Whether everyday sounds have a significant emotional impact on listeners has yet to be examined. We aim to explore how composers can integrate everyday sounds into contemporary music to enhance emotional experiences.
A research-creation approach was employed to investigate these questions. In phase 1, semi-structured interviews were conducted in which participants were asked to recall sounds from throughout their lives associated with love, joy, anger, sadness, and fear. In phase 2, 20 sounds provided by the interviewed participants were integrated into a contemporary mixed composition for trumpet and live electronics (7.1 surround). This 11-minute piece explores various orchestration techniques to blend acoustic trumpet sounds with everyday sounds; the identified sounds were placed in specific sections and transitions to convey specific emotional meanings. In phase 3, listeners rated their felt emotional intensity while listening to the entire piece binaurally in a controlled booth environment. Listeners then rated the perceived love, joy, anger, sadness, and fear of selected excerpts and identified which sound sources conveyed these emotions.
Participants reported stronger emotional intensity when certain everyday sounds occurred (e.g., screaming, alarms). On average, as indicated in the post-experiment questionnaire, participants believed these sounds enhanced the piece’s emotional impact moderately to highly.
This study highlights the ability of everyday sounds to enhance listeners’ emotional engagement with music, and demonstrates a feedback relationship between compositional practice and listeners’ perception of the emotional impact of everyday sounds.
