The improvisational focus on prosodic gestures as primary musical material described in the previous section led me to intuitively adopt a general methodology of abstraction when working musically with speech. That approach is also related to the difficulty of listening to speech without focusing on the semantic content and the narrative implied by the words – the persons speaking, the setting, the time and place, and the story unfolding. In the early days of electroacoustic music, Pierre Schaeffer proposed a mode of “reduced listening” to focus listening on sounds as “objets sonores”, sound objects completely removed from any links to their cause or source (Schaeffer, 1966). While this was a productive way of listening to sounds from a new perspective, it proves a particularly hard exercise when listening to speech sounds, and as I was concerned with the more abstract gestural shapes of spoken utterances, I felt it was necessary to introduce some kind of filter or veil to make the words less intelligible and direct the focus towards the musical features.
This resulted in methods for abstracting and stylizing conventional musical parameters like rhythm, melody, harmony, etc., in addition to extracting gestural features like phrase durations, pauses, pulse, etc. The structural identity of the sound as utterance can still be quite strong, and it is striking how far speech can be abstracted this way and still be recognizable as intentional communicative gestures.
Sound example: abstracted speech gesture
The limit of recognition seems to be reached when approaching static time: the point at which gestures are no longer recognized as such and fade away perceptually into background static or ambient texture. However, I found that even with the gestural proportions intact, if the resulting music became too abstract for longer periods it would soon lose the perceived connection to speech altogether, and appear just like any other kind of abstract-sound music. The topic of speech needed to be present in the music, and I found that the most interesting things happened when I managed to strike a balance between recognition and abstraction, so that the focus of perception was right on the edge between the semantic and the aesthetic. I started developing additional methods for using unprocessed speech recordings in such ways that their original sources, contexts and semantic contents become so fragmented and relativized that the result is perceived as abstracted or poetic sound structures – sound objects in Schaeffer’s parlance. In that way, the formal aspects can be kept in focus while the topic of speech is never lost.
Sound example: collage
So, on the one hand, speech gestures can remain recognizable as communicative utterances even after radical transformation and abstraction; on the other hand, segments of perfectly intelligible speech can be organized in such a way that they are perceived as abstract sound collages while still being recognized as speech sounds.
A number of methods and techniques for abstraction have thus been developed and used in various ways in this project:
Filtering and smoothing of the changes and contours of frequencies, amplitudes, spectrum, spectral resolution, etc.
Stylization, extension and ornamentation of melodic phrases and gestures: arpeggio, overlaps, pointillistic clouds/swarms, voice shadowing, choral doubling, counter-voices, rhythmical diminutions and augmentations, etc.
Abstraction by spatial distribution into different layers: foreground, background, frequency register, instrumentation/orchestration, etc.
Fragmentation and repetition of segments organized as collages. Like in poetry, alternative arrangement of words dissolves the narrative and places emphasis on sound qualities and formal associations.
Juxtaposition of different conversations in the same speech genre shifts focus to the common features of the genre, rather than each individual story.
Juxtaposition of different languages: as above, but even more generalized.
Selection by function, using only specific parts of conversation (greeting, back-channeling, laughter etc.): focus on particular types of interaction.
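The first of these methods – smoothing the contours of frequencies and amplitudes – can be sketched as a simple moving-average filter applied to an estimated pitch contour. This is a minimal illustration under stated assumptions, not the project's actual implementation; the f0 values below are invented for the example:

```python
def smooth_contour(contour, window=5):
    """Smooth a pitch or amplitude contour with a simple moving average.

    A wider window removes more of the fast prosodic detail,
    leaving only the broad gestural shape of the utterance.
    """
    half = window // 2
    smoothed = []
    for i in range(len(contour)):
        lo = max(0, i - half)          # clamp the window at the edges
        hi = min(len(contour), i + half + 1)
        smoothed.append(sum(contour[lo:hi]) / (hi - lo))
    return smoothed

# A hypothetical f0 contour (Hz) of a short spoken phrase:
f0 = [180, 220, 210, 250, 240, 200, 190, 160, 150, 140]
print(smooth_contour(f0, window=3))
```

Varying the window size gives a continuum from the raw, jittery contour to an increasingly stylized melodic shape.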
Early in the project I tried to conceptualize the musical possibilities resulting from these methods of abstraction in terms of continua between opposites. For instance, between the fluid continuous quality of speech and the discrete and clearly defined pitches and attacks of the piano, or between acoustic sound and speaker-mediated sound, between vocal and instrumental etc. As a result, musical ideas often took shape of transformations envisioned in a multidimensional space between such opposites.
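Such transformations between opposites can be sketched as linear interpolation between two "poles" in a shared parameter space. The parameter names and values below are hypothetical, chosen only to illustrate the speech–piano continuum mentioned above:

```python
def interpolate(pole_a, pole_b, t):
    """Blend two parameter sets: t=0 gives pole_a, t=1 gives pole_b."""
    return {k: (1 - t) * pole_a[k] + t * pole_b[k] for k in pole_a}

# Hypothetical poles: fluid, continuous speech vs. discrete piano articulation.
speech_like = {"pitch_quantization": 0.0, "attack_sharpness": 0.1, "continuity": 1.0}
piano_like  = {"pitch_quantization": 1.0, "attack_sharpness": 0.9, "continuity": 0.2}

# Halfway along the continuum between the two poles:
print(interpolate(speech_like, piano_like, 0.5))
```

Sweeping `t` over time would then trace one trajectory through the multidimensional space of opposites described above.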
However, I found that not all ideas are best thought of in terms of such opposites, and also that this very formalistic approach had its limitations in incorporating the social, interactional dimensions of the speech material. The concept of speech genres, however, proved an interesting way of channeling the musical ideas generated through the methods of abstraction into an overall concept of speech genre portraits. The aim was not necessarily a faithful rendering of all the typical traits characteristic of a speech genre, but rather to develop ideas based on certain prominent features that could serve as starting points for exploring the musical implications of that genre’s characteristics.
How these methods were used in practice is elaborated further in the chapter about performance methods, but before that it is necessary to look at the particular instrument system developed for this purpose, to understand how these methods were implemented in the actual tools used for performance.
Schaeffer, P. (1966). Traité des objets musicaux. Paris: Le Seuil.