The Creation of “Dance at the End of Time”

“Dance at the End of Time” is an electro-acoustic dance piece using choral samples. Having written the darkly comic “The Garden of Earthly Delights” just before the Covid epidemic, I wanted to make a more positive, joyful piece. The CDP software was extended to create and transform rhythmic choral melodies so they could be modified in sung-text, melody, rhythm, tempo, and vocal articulation, and assembled into rhythmic chordal events.

Trevor Wishart

The motifs remember their own “word”, melody, key, tempo, rhythm, articulation, event count and beat count through a special naming scheme, and these motifs have a special protected status within the CDP environment, making them difficult to delete or wrongly rename. Instrumental and Percussion motifs don’t need such a large amount of information and are therefore handled differently.

The piece uses an imaginary language, so that the “meaning” of the text does not determine the syllable sequence used. The piece is constructed from consonant-vowel-pair syllables, as in a syllabary, and the vocal samples are all long events (c 10 seconds) consisting of some consonant followed by some extended vowel. Fifteen consonants and six vowels are used. Each syllable was recorded at 4 spaced pitches, and these recordings were pitch-shifted to complete the chromatic gamut. This required 360 samples to be recorded for each of the four vocal ranges. Staccato versions were also derived from these.

Rhythms are constructed from rhythmic cells displayed on an interface. Additional facilities allow the tweaking of level, timing and duration of samples in the created motif. These motifs exist as both a template mixfile (from which variants can be constructed) and a sound output. They remember their beat count and hence can be mixed in exact rhythmic sequence.

Later modifications to the software allow any desired tuning (modal, or non-tempered) to be specified and used.

Audio file

Dance at the End of Time by Trevor Wishart. This is an 8-channel composition mixed down to a stereo version. The 2-channel version cannot reproduce all the qualities of the multi-channel version.

Origins

Just before the Covid epidemic I completed “The Garden of Earthly Delights”. This is an hour-long, dystopian comic piece, using recorded actors and singers, to reinterpret Bosch’s vision for our contemporary world. The extended Covid epidemic, the resulting loss of life and the lockdown (which confined me to my own house for 2 years), followed by a subsequent year of undiagnosed health problems of my own, made me acutely aware of my own mortality, and I felt reluctant to end my career with such a “downbeat” offering. I therefore decided to make a “positive” piece, reusing some ideas from a work of the 1970s – the “Passion” dedicated to the schoolchildren of Soweto – which employed melodic/harmonic materials suggested by African choral music. A further influence was a gig in Rio de Janeiro, where I experienced funk carioca, a dance-inducing medium even for a 73-year-old. These ideas were finally assembled into an electro-acoustic dance piece using choral samples. The principal focus was the transformation of choral materials, so I developed an extension of the CDP software to deal explicitly with rhythmic choral materials.

Technical Goals

The imagined form of the piece was a set of rhythmic variations on a choral motif. The aim therefore was to construct vocal motifs from choral samples and, in particular, to be able to generate variants of the motifs.

All the things one would like to do would be easy for a vocal motif represented as notated music, with just the help of a pencil and an eraser …

  1. Change the Rhythm and/or Tempo of the motif.
  2. Change the Note Order (Reverse, Invert, Cycle events).
  3. Change the Key, or Harmonic Field, of the motif, and transpose it by semitones, within its Key, or within the Harmonic Field defined by the notes it uses.
  4. Change the sung Text of the motif.
  5. Change Articulation (phrasing, accentuation, staccato-legato choice) and add Ornaments to the motif.
  6. Change the Loudness envelope, or erase notes from the motif.
  7. Change the entire Melody Line to a different one, but in the same rhythm, enabling the creation of synchronised sung parts, and hence of sequences of sung chords.

However, for motifs as sound material, this raises difficult issues. In particular, to change some aspects of a motif without changing others, it’s necessary for the motif to know its own internal structure. Sound files which were “motifs” were therefore assigned a special status within the CDP, and assigned extended names, which encoded …

(a) The syllable-sequence (“word”) used.
(b) The note-sequence (melody) and its (closest) key.
(c) The rhythmic pattern (including subtler additions like “swing”).
(d) The tempo.
(e) The number of beats in the motif.
(f) Indicators that a particular pattern of staccato events, ornamentations, or melodic extensions is used.
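The idea of a file that “knows” its own musical properties through its name can be sketched as follows. This is an illustrative assumption, not the actual CDP code: the field names, their order, and the underscore separator are all hypothetical.

```python
# Hypothetical sketch of a self-describing motif name: the properties
# listed above are serialised into the filename and recovered by parsing it.
FIELDS = ["word", "melody", "key", "rhythm", "tempo", "beats"]

def encode_motif_name(props):
    """Build a motif name encoding its properties, in a fixed field order."""
    return "_".join(str(props[f]) for f in FIELDS)

def decode_motif_name(name):
    """Recover the property dict from an encoded motif name."""
    return dict(zip(FIELDS, name.split("_")))

props = {"word": "konya", "melody": "mel1", "key": "Cmaj",
         "rhythm": "r3-2sw", "tempo": 120, "beats": 8}
name = encode_motif_name(props)   # "konya_mel1_Cmaj_r3-2sw_120_8"
```

Because every transformation must rewrite this name consistently, the scheme explains both why the names grow very long and why the files need protection from careless renaming.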

The resulting naming and subdirectory structure is not perfect. For example, one can add an ornament to a note in a motif, but one cannot add a further ornament to an already ornamented motif. (Instead, a new ornamented version must be made from the original motif, re-applying the first ornament together with the additional ornaments on other notes.)

Furthermore, because motif processes (which apply only to “motifs”) carefully rename their outputs, and because of the importance of the information stored in these names, a much stricter regime is implemented for motif files. They cannot be manipulated within the broader context of the CDP without special “permission” being given, and they can only be deleted one at a time, after two confirmations that this is truly what one wants to do.

To avoid the very long names (of both motif directories and files), an “Alias” feature was implemented, allowing the files to be accessed using much shorter names (in the end, this option was not used).

Slightly different protocols apply to instrumental motifs (not used in the piece) as there are no possible changes of “syllable”, and to percussion motifs, where one can use sustained, damped-staccato, quiet, loud etc samples as alternative inputs, so the “articulation” of samples themselves is not so important.

Selecting and generating the source samples

Some initial decisions were made about the vocal samples to be used.

  1. The piece uses an imaginary language. This means that the “meaning” of the text (which would determine the syllable sequence to be used) does not enter into consideration when making rhythmic variants of a given line – syllables may be added, subtracted, repeated etc as required, to fit the rhythmic structure of the melodic variant. The way this is done, however, can be such that the text appears meaningful in some language the listener does not understand.
     
  2. Words in the language are constructed from consonant-vowel-pair syllables (as in a syllabary – like Japanese) and the vocal samples are all long events (c 10 seconds) consisting of some consonant followed by some vowel, e.g. “koooooooooo…”, “nyaaaaaaaaa…” (where “ny” is an internally morphing consonant) and so on. This means that shorter samples can be created simply by cutting the ends off the originals. 15 consonants and 6 vowels were chosen, giving 90 possibilities.

    However, the duration of consonants varies e.g. “k” and “d” are much shorter than  “sh” or “ny”. If the tempo is very fast, or staccato is specified, cutting the original sample to size may mean that the post-consonant vowel is lost. Hence specifically staccato versions of samples were made by a combination of cutting and, where necessary, time-shrinking them in the spectral domain.

    In addition, every sample is assigned an attack time (the position of the initial peak) and (for non-staccato samples) a pair of times bracketing the (middle) section of the sample that may be time-stretched if an event longer than the sample is required. Knowing the attack time allows precision alignment of events. This information is stored, along with other features, in a special directory of data associated with the motifs. (Neither of these options was used in the actual realisation of the piece).
     
  3. Samples were recorded (for each of the S, A, T, B voice parts) on just 4 pitches, spanning each range in major 3rds. Events at other pitches could then be generated by pitch-changing the originals by no more than 3 semitones, and this could be achieved simply by sample rate change (rather than going to the spectral domain and retaining the formants which, in my experience, can introduce unwanted artefacts). For some ranges, these pitch changes slightly shift the vowel, but not sufficiently to be significant in the context of the piece. 

    This meant that 360 samples needed to be recorded for each voice. I’m indebted to the York University choral group “The 24” and their director, Robert Hollingworth, for their dedication and patience. 
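The pitch-shifting arithmetic above can be sketched briefly. The ratio formula is standard (a semitone is a factor of 2^(1/12) in sample rate); the recorded-pitch positions below are an illustrative assumption based on the “major 3rds” spacing described, not the actual pitches recorded.

```python
def resample_ratio(semitones):
    """Playback-rate ratio that transposes a sample by the given number
    of semitones (positive = up) via simple sample-rate change."""
    return 2.0 ** (semitones / 12.0)

def nearest_recorded(target, recorded=(0, 4, 8, 12)):
    """Pick the recorded pitch (semitones above the lowest recording,
    spaced in major 3rds) closest to the target, and the shift needed.
    The positions here are assumed for illustration."""
    best = min(recorded, key=lambda p: abs(target - p))
    return best, target - best

# With recordings a major 3rd apart, every chromatic pitch lies within
# a couple of semitones of a recording, so the rate change (and the
# resulting formant/duration shift) stays small.
pitch, shift = nearest_recorded(6)   # -> (4, 2): shift the 4-semitone sample up 2
```

This is why only 4 recorded pitches per range suffice: the worst-case shift is small enough that plain resampling, which also shifts formants slightly, remains acceptable.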

Creating motifs and variants

Once the samples were available, construction of the choral motifs proceeded as follows …

  1. At the initial stage, a “Naïve” motif is made. A sequence of syllables and a voice (S, A, T, B) is specified. Then a MIDI melody, with the same number of events, is entered from a keyboard or typed as text into a file. It is then given a unique name and an output sound is created. Each naïve motif has its own directory (using the same naming conventions as the naïve motif itself). In this directory, the naïve motif exists both as a mixfile, where the full-length samples are used, and as a sound where, before mixing, those samples are cut to a fixed duration so that the initial melody-syllable sequence you hear has no overlaying of consecutive events. This allows the text-plus-melody content of the motif to be clearly heard. The name of this naïve motif contains (only) its melody name with its (closest) key, the “word” (syllables) it uses, and the number of events within it. The original (MIDI) duration of the (keyboard) events is stored elsewhere, should this prove useful later.
     
  2. From the naïve motif, one or more rhythmicised versions can be created. For this, an interface displays a large selection of short rhythmic patches, classified according to the number of events, the number of beats, and the type (e.g. Regular, Weighted, Skipping, Bartok-style etc) which can be chosen and placed in sequence to create a complete rhythmic pattern. The rhythmic pattern must have the same number of events as the number of syllables in the original naïve motif. Patches have stressed beats (and sometimes secondary stresses) and relative levels of stressed and unstressed beats can be specified. In addition, staccato events can be indicated here (in very fast tempi, if events are very short, staccato samples are automatically used), and certain other general features specified (swing, high rhythmic precision, legato etc – “legato” here implies motif events follow one another without intervening pauses).

    The tempo is also specified here (and, if required, both an initial and final tempo can be indicated, for accelerating or decelerating motifs).

    The software then first copies the naïve mixfile and adjusts the onset times of the samples in this new mixfile. It then (except where staccato events are used) cuts the samples to the required inter-onset duration, replacing the original full-length samples with these cut versions, before mixing the final output. Once generated, the new rhythmic motif can be played, and there are further options to slightly alter the onset times and the relative levels of events in order to make the rhythm more precise, plus the ability to shorten staccato events (useful in very fast tempi, where even pre-made staccato events may appear overly legato). Once the output is as required, it is saved to disk. The resulting rhythmicised motif is named using the extended naming convention described previously, and placed in the directory of the original naïve motif.
     
  3. Other, modified versions of the rhythmic motifs can then be generated. These include …
    (a) Swapping in a new melody over the same text, in a different voice. In this way, a choral sequence could be built from individual motivic lines.
    (b) Substituting different syllables for any of the original sets.
    (c) Changing the articulation of the motif e.g. changing the number or placement of staccato events, or adding ornamentation to (non-staccato) events.
    (d) Creating various types of melodic variation, changes of loudness contour and so on.

    The melodic variation of the motif involves choosing different samples for the mixfile, rather than transposing the existing samples. A temporary copy of the original naïve mix is generated, but with the new samples substituted, and this is used to generate the output motif.

    The ornamentation options apply various CDP processes (pitch-shift, distortion, reverb etc) to individual syllables in the motif. In practice, trills and vocal glides were the most useful and vocally realistic: gliding slightly upwards from a note, without reaching the next note, still produces the percept of a true vocal glide between the two notes.
     
  4. Finally the motifs can be joined in a rhythmic sequence. This depends on each motif knowing its own beat duration. The beat duration is the duration of the (known) number of beats at the (known) tempo and is often not the same as the actual duration of the sound e.g. the final event in the motif may be staccato – making the sound shorter than the beat duration – or it may be reverberated  – making the sound longer than the beat-duration.

In addition, there are various subtleties not mentioned here, such as rhythmic patches that begin with a (silent) offset, and so on.
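The two timing calculations that make this chaining work can be sketched as follows. This is a simplified assumption of the arithmetic, not the CDP implementation: onset times follow from the event positions in beats and the tempo, and a motif’s beat duration (beats divided by tempo) fixes where the next motif starts, regardless of how long the sound file itself lasts.

```python
def onset_times(beat_positions, tempo_bpm):
    """Convert event positions (in beats) to onset times in seconds."""
    beat_dur = 60.0 / tempo_bpm
    return [b * beat_dur for b in beat_positions]

def motif_beat_duration(beat_count, tempo_bpm):
    """Total rhythmic span of the motif in seconds, used when chaining
    motifs end-to-end: a 4-beat motif at 120 BPM always occupies 2
    seconds, even if a final staccato event makes the sound shorter
    or reverberation makes it longer."""
    return beat_count * (60.0 / tempo_bpm)

onsets = onset_times([0, 0.5, 1, 2, 2.75, 3], 120)   # a 6-event pattern over 4 beats
next_motif_start = motif_beat_duration(4, 120)
```

Storing the beat count in the motif’s name, rather than measuring the sound’s duration, is what keeps mixed sequences rhythmically exact.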

The Structure of the piece

The final piece is a set of rhythmic variations on a 6-element phrase. A counter-melody (reminiscent of the material in VOX 3) is first introduced in the 2nd paragraph of the piece. In addition, there are transition interludes between the paragraphs using a different melody.

The harmonic language of the central material is suggested by various types of African choral music, and is entirely diatonic, in a major key. However, the countermelody is harmonically at odds with this, in the manner of English choral music of the early Renaissance, where rising and falling 7ths may clash, e.g. in the music of William Byrd. So the counterpointing materials are in “Lydian Blue” mode (a Lydian scale with a flattened 7th). The intervening interludes are more chromatic, using a repeating refrain which is constantly reharmonised.
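For readers who want the scales spelled out: the following semitone patterns are standard spellings consistent with the description above (“Lydian Blue” is the author’s own term for the Lydian scale with a flattened 7th; the constant names are mine).

```python
# Semitone steps above the tonic for the modes mentioned in the text.
MAJOR       = [0, 2, 4, 5, 7, 9, 11]   # diatonic major (central material)
LYDIAN      = [0, 2, 4, 6, 7, 9, 11]   # major with raised 4th
LYDIAN_BLUE = [0, 2, 4, 6, 7, 9, 10]   # Lydian with flattened 7th (countermelody)
```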

Afterthoughts

After completing the piece I extended the software to allow the use of (any specified) non-tempered or modal tuning. Motifs can be created, transposed and otherwise manipulated within these Harmonic Fields. For non-tempered tunings, the offset of any pitch from the nearest MIDI (integer) pitch is stored separately. The non-tempered pitch is then replaced by its nearest-integer equivalent and the software proceeds as normal. The pitch offsets are then reaccessed at the appropriate stage in calculations, and the integer pitches are transposed appropriately.
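The offset scheme described above can be sketched like this. The function names are assumptions, and real transposition within a Harmonic Field would step through the field’s own pitches; the sketch shows only the split-and-recombine idea: separate the non-tempered pitch into its nearest integer MIDI pitch plus a fractional offset, transpose the integer part as normal, then re-apply the offset.

```python
def split_pitch(pitch):
    """Split a (possibly fractional) MIDI pitch into its nearest integer
    pitch and the remaining offset in semitones."""
    nearest = round(pitch)
    return nearest, pitch - nearest

def transpose_non_tempered(pitch, interval):
    """Transpose by an integer interval, preserving the stored offset.
    (Simplified: a real Harmonic Field transposition would move through
    the field's pitch set rather than by a raw semitone count.)"""
    nearest, offset = split_pitch(pitch)
    return (nearest + interval) + offset

new_pitch = transpose_non_tempered(60.25, 7)   # 60.25 up a fifth -> 67.25
```

Keeping the offsets out of the main calculation means all the existing integer-pitch machinery (naming, melody variants, key changes) works unchanged for non-tempered tunings.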

Trevor Wishart

Trevor Wishart (*1946) is a composer/performer from the North of England, specialising in sound metamorphosis and in constructing the software to make it possible (Sound Loom / CDP). He has lived and worked as a composer-in-residence in Australia, Canada, Germany, Holland, Sweden, Mexico and the USA. He creates music with his own voice, for professional groups, or in imaginary worlds conjured up in the studio, and his most recent work “The Garden of Earthly Delights” (2021) is a darkly comic take on the human situation using the voices of both actors and politicians. He is also the principal developer of music processing software for the Composer’s Desktop Project. His aesthetic and technical ideas are described in the books On Sonic Art, Audible Design and Sound Composition. In 2008 he was awarded the international Giga-Herz Grand Prize for his life’s work, and in 2018 the British Association of Songwriters, Composers and Authors (BASCA) Award for Innovation.

Original language: English
Article translations are machine translated and proofread.