With a full MIDI-enabled pipe organ at my disposal, I wanted to hear how it would sound when controlled by my speech-to-music software. Since the organ, with all its manuals, pedals and registers, is like an orchestra in itself, this was an occasion to explore some polyphony with layers. As I had not really thought of controlling or orchestrating several different parts from the same MIDI source when programming, I had to run three instances of the same software in parallel to try this out (something I should perhaps implement within one device later).
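One way to fold the three parallel instances into one device would be to fan each incoming note event out to several MIDI channels, one per part. This is only a hypothetical sketch of that idea, not the actual software: the function names and the channel mapping are my assumptions, and the messages are built as raw MIDI channel-voice bytes.

```python
# Hypothetical sketch: duplicating one incoming note stream onto three
# MIDI channels (one per part), instead of running three program
# instances in parallel. Channel numbers and names are assumptions.

NOTE_ON = 0x90  # status nibble for a MIDI note-on message

def note_on(note, velocity, channel):
    """Return the raw 3-byte MIDI note-on message for one channel (0-15)."""
    return bytes([NOTE_ON | channel, note, velocity])

def fan_out(note, velocity, channels=(0, 1, 2)):
    """Duplicate a single note event onto several channels, one per part."""
    return [note_on(note, velocity, ch) for ch in channels]

# Middle C, duplicated to three parts/channels
msgs = fan_out(60, 100)
```

Each part could then apply its own rhythmic or registral treatment downstream before the events reach the organ.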
Organ registers were selected manually, and the parts were created from different segmentations of the same speech recording, expressing structures based on syllables, breath groups, stress and pitch accents, and even breath pauses – creating abrupt interjections between phrases. The organ’s ability to hold tones indefinitely also invites exploring dense textures of accumulated sustained pitches.
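A minimal sketch of how such a segmentation might drive a part, assuming boundary times (in seconds) have already been extracted from the recording – syllable onsets for one layer, breath-group onsets for another. Each segment becomes one sustained note held until the next boundary; the pitches and times below are placeholders, not the actual mapping.

```python
# Assumed sketch: turn segment boundaries from a speech recording into
# sustained note events for one layered part. Boundary times and pitches
# are illustrative placeholders.

def segments_to_notes(boundaries, pitches, channel):
    """Map consecutive boundary pairs to (time, on/off, pitch, channel) events."""
    events = []
    for (start, end), pitch in zip(zip(boundaries, boundaries[1:]), pitches):
        events.append((start, "note_on", pitch, channel))
        events.append((end, "note_off", pitch, channel))
    return sorted(events)  # time-ordered; ties resolve note_off before note_on

# A syllable-level part: short segments, one pitch per syllable
syllables = [0.0, 0.3, 0.55, 0.9, 1.4]
part = segments_to_notes(syllables, [60, 62, 64, 65], channel=0)
```

A breath-group segmentation of the same recording would simply use longer boundary lists on another channel, which is where the accumulated sustained textures come from.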
A quick test of the organ: