Dugal McKinnon

Composer – Sound Artist – Researcher

Category: electronica

Let x = [Binaural version] (IEM#9)

I’ve just uploaded a binaural version of Let x = (on Soundcloud) for icosahedral loudspeaker (ICO) and 24-channel loudspeaker hemisphere, composed while I was 2014 composer-in-residence at IEM (Graz, Austria). The binaural version combines recordings of the ICO, made using a Schoeps KFM 6 mic, with mix-downs from the 24-channel ambisonic audio. The result isn’t the same as hearing the piece in situ – the verticality of the piece is lost and the degree of immersion is reduced – but it gives some sense of its spatiality and I hope also conveys the ICO’s spatialisation capabilities, which I described in an earlier post.
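For anyone curious about the mechanics, a fold-down of this kind boils down to convolving each loudspeaker feed with a left-ear and a right-ear impulse response (HRIR) and summing the results. A minimal numpy sketch, with my own naming; the HRIRs here are placeholders, not the responses actually used:

```python
import numpy as np

def binaural_downmix(speaker_feeds, hrirs_left, hrirs_right):
    """Fold N loudspeaker feeds down to two ear signals by convolving
    each feed with that speaker's left/right HRIR and summing.
    Assumes all feeds share one length, and all HRIRs another."""
    n = len(speaker_feeds[0]) + len(hrirs_left[0]) - 1
    left, right = np.zeros(n), np.zeros(n)
    for feed, hl, hr in zip(speaker_feeds, hrirs_left, hrirs_right):
        left += np.convolve(feed, hl)
        right += np.convolve(feed, hr)
    return left, right
```

In practice each of the 24 hemisphere feeds would get its own measured HRIR pair for its direction; the principle is unchanged.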

IEM Cube

Just so you know what you’re listening to, the piece is:

In 5 sections, which alternate and combine use of the ICO and hemisphere: 1. ICO > hemisphere; 2. ICO; 3. hemisphere; 4. ICO; 5. hemisphere, ICO/hemisphere > hemisphere. A wide range of tools was used in composing the work, but the most significant were certainly Matthias Kronlachner’s ambix and mcfx plug-in suites, which made the task of mixing and spatialising for both the ICO and the hemisphere wonderfully straightforward.
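To give a flavour of what the ambix plug-ins do under the hood, the basic ambisonic encoding step can be sketched at first order (ambix works at much higher orders, but the pattern is the same). This assumes the ACN channel order and SN3D normalisation that ambix uses; the function itself is my own sketch, not code from the suite:

```python
import numpy as np

def encode_foa(mono, azimuth, elevation):
    """Encode a mono signal at (azimuth, elevation), in radians, into
    first-order ambisonics: ACN channel order (W, Y, Z, X), SN3D."""
    w = mono * 1.0                                  # omnidirectional
    y = mono * np.sin(azimuth) * np.cos(elevation)  # left-right
    z = mono * np.sin(elevation)                    # up-down
    x = mono * np.cos(azimuth) * np.cos(elevation)  # front-back
    return np.stack([w, y, z, x])
```

A decoder then maps these four channels onto however many loudspeakers are actually present, which is why the same mix can serve both the hemisphere and a binaural render.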

The first section of a larger work-in-progress based on the transformation of speech into sounding objects with carnal, cultural and environmental resonances. The texts are metaphors, in multiple languages, coupling the human body and the natural environment, aiming to dissolve “the barrier between ‘over here’ and ‘over there,’… the illusory boundary between ‘inside and outside’” (Timothy Morton). There’s more to read about my compositional intentions and materials, and my creative process, in earlier posts.

Here’s the programme note (nice and concise):

Let x = (2014-)

Kaki Langit – Foot of the Sky

 “Flesh = Earth, Bone = Stone, Blood = Water, Eyes = Sun, Mind = Moon, Brain = Cloud, Head = Heaven, Breath = Wind” (Adams & Mallory, Encyclopedia of Indo-European Culture).

Another way of saying this kind of thing comes from Levi Bryant:

[E]cology must be rescued from green ecology, or that perspective that approaches it as a restricted domain of investigation, pertaining only to rain forests and coral reefs. Ecology is a name of being tout court. It signifies not nature, but relation. To think ecologically is to think beings in relation; regardless of whether that being be the puffer fish, economy, or a literary text. Everything is ecological. Above all, we must think culture and society as ecologies embedded in a broader ecology.


Let x = technology (IEM #6)

It has often struck me that while there is a plethora of information, particularly online, about how to use audio technology, it is very rare to see a composer talk about their use of technology towards creative outcomes. Why is this? A cynical interpretation is that there’s a kind of anxiety around technology-based creativity, driven by concern not to be regarded as an insufficiently technical composer, or perhaps by the fear that creativity involving technology is not “real” creativity (you feed something into the black box and it spits out the finished work). I completely reject the former, citing the work of composers such as John Cage and New Zealander Douglas Lilburn which, in very different ways, is not particularly sophisticated at a technical level but is quite singular in terms of its musical value. The latter is a (pseudo-)fallacy but does point to the reality that the work of electronic musicians, and particularly composers of electronic music, is very often determined – to a greater or lesser extent – by the engineers of the tools they are working with.

To go down the fully deterministic route, one could argue that you simply wouldn’t have electronic dance music – with rigidly metronomic rhythms – without early sequencers (including drum machines). Early technology didn’t do “humanise”, it did 16-step (sometimes more) sequencing, with each step having precisely the same duration. At a certain point in musical history this was abhorrent, and then all of a sudden it was a style, entirely accepted as authentic music making. Put post-humanly, this means the musical machine afforded new ways not just of making music but also of hearing and understanding it (as celebrated in Kodwo Eshun’s afrofuturism paean More Brilliant than the Sun). In popular electronic music, a terribly vague but useful categorisation, this seems not to be a problem at all and in fact is integral to some of the many fleeting microgenres that Adam Harper celebrates (see his Fader article on the voice in the digital landscape).
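The rigidity in question is easy to make concrete: an early-style step sequencer simply divides the bar into equal slices, and “humanise” is nothing more than a small random offset added to each onset. A sketch, with my own naming rather than any particular machine’s behaviour:

```python
import random

def step_onsets(bpm, steps=16, humanise_ms=0.0):
    """Onset times in seconds for one 4/4 bar of a step sequencer.
    With humanise_ms=0 every step has exactly the same duration:
    the rigid grid early sequencers and drum machines imposed."""
    step = (60.0 / bpm) * 4 / steps   # bar length divided evenly
    return [i * step + random.uniform(-humanise_ms, humanise_ms) / 1000.0
            for i in range(steps)]
```

With humanise_ms at zero you get the metronomic grid; raise it and the machine begins to “play” loosely, which is precisely the affordance the early hardware lacked.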

By contrast, technological determinism, let alone technological influence, doesn’t go down at all well in electronic art music (EAM, another vague but useful category). I think here of Denis Smalley’s (1997) exhortations against technological listening and the purported sterility of music which does not feature qualities that are perceived on the human level of gesture and utterance. In EAM, much of which is made in a context attached to the tail end of Romanticism/Modernism (the difference is not so great), man masters machine and does so alone. But this old-school humanistic/anthropocentric approach is blind to the degree to which the composer is bound to the machine and to the engineer of that machine (digital or analogue). Another way to say this is that EAM is a collaborative practice, even when there’s just one person in the room. And yes, of course there are many artists who are also architects of their own machines (Michael Norris and his Soundmagic Spectral VST plugins, for example), but the history of EAM is strewn with unacknowledged relationships between composers and the technicians/technologists who aided and abetted them (Sean Williams has researched the relationship between Karlheinz Stockhausen and the WDR studio technicians and technology, but I’ve yet to read his work). This does seem to be changing, and I’m looking forward to reading Williams’s chapter on King Tubby’s and Stockhausen’s use of analogue technology and the influence it had on their sound (the two are presumably considered independently, but what a fantastic pairing!). The notion that Stockhausen’s work has a sound is already an upsetter. Stockhausen made music, his music was sound, but did not have a sound (“the seductive, if abrasive, curves of [Studie II’s] mellow, noisy goodness”).
Yes, it does, just like Barry Truax’s PODX era granular work has a sound, and many works in the era of early FFT have a sound, and countless late musique concrète composers sound like GRM-Tools, and my work has a sound which sometimes sounds like FFT and sometimes like GRM-Tools etc etc.
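For anyone who hasn’t met it, the granular “sound” referred to here comes from a very simple operation: short windowed grains of a source scattered into an output buffer. A schematic numpy sketch only; Truax’s PODX system was real-time and far richer than this:

```python
import numpy as np

def grain_cloud(source, grain_len, n_grains, out_len, seed=0):
    """Scatter Hann-windowed grains taken from random positions in
    `source` into an output buffer: the elementary granular gesture.
    Overlapping grains simply sum, producing the familiar texture."""
    rng = np.random.default_rng(seed)
    window = np.hanning(grain_len)
    out = np.zeros(out_len)
    for _ in range(n_grains):
        src = rng.integers(0, len(source) - grain_len)
        dst = rng.integers(0, out_len - grain_len)
        out[dst:dst + grain_len] += source[src:src + grain_len] * window
    return out
```

Grain length, density and window shape are where the characteristic fingerprint, the “sound”, comes from.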

This has gone a little off-target, but it does support my initial point: composers of EAM don’t like to talk about how they do what they do. They’ll tell you what they used to do it, but not what they did with it. Similarly, technologically adept artists will explain the tools they’ve developed, but not how these tools have been creatively applied. In either case this is a shame, as it limits the pace and depth at which the practice can evolve. If artists explain how they do what they do, other artists can learn from them, and apply a particular set of technical routines in their own ways. I don’t buy the argument that this might lead to a kind of technology-based plagiarism. There’s already enough sonic and aesthetic homogeneity in EAM. Opening up its creative-technological processes would, I imagine, lead to greater technical refinement and a wider creative palette, and – heaven forbid – perhaps even some criticism of aesthetic homogeneity where it is found. More than this, acknowledgement on the part of composers that they are using technology that has been designed and implemented by another human being might actually lead to establishing a stronger feedback loop between engineer and end-user. This is one of the real beauties of using the Ardour and Reaper DAWs – their design teams implement user feedback in an iterative design process – resulting in DAWs that are much, much easier and friendlier to use than just about any other I can think of. It also strikes me that what I’m outlining is different to the kind of DIY/Open Source culture that makes contemporary software and electronic art cultures so strong. I’m not talking about how to make analogue and digital stuff, but rather how to make musical stuff with it (and if this requires that both the technology and its creative deployment be discussed, all the better).

It is of course a fair point that the artist might not want to spend their time explaining how they do what they do (there’s already too little time in which to do it), but I do think practitioners should open up their laptops and outline the ways in which they achieve certain creative outcomes. If this simply reveals that they ran a recording of dry kelp (being cracked and ripped) through GRM-Tools Shuffling and pushed the parameter faders around until they got a sound (a sound!) they liked, that would be a start. This is just what I did almost 20 years ago when I first started seriously trying to make EAM. What I still haven’t done is explain to myself, or anyone else, why this combination of sound, DSP and parameter settings produced a result that made me feel there was musical value in what was coming out of the loudspeakers. The initial act may have been relatively simple (setting aside the design of the still wonderful GRM-Tools), but the process and outcomes are not. Untangling and explaining this, or indeed any (interesting) creative-technological method, could be a valuable and useful thing to do. So, this is a task I’m setting myself: in a future entry on this blog, hopefully the next one, I’ll attempt to dissect a particular technical method used in the composition of Let x = and also try to explain why the outcome of the process found musical application (i.e. had musical value to me in the context of the work-in-progress).
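As it happens, the spirit of that shuffling experiment is easy to sketch: cut a sound into segments and play them back in random order. What follows is my own crude approximation, not GRM-Tools’ actual algorithm (which windows, overlaps and randomises far more subtly):

```python
import random

def shuffle_segments(samples, segment_len, seed=None):
    """Cut a sample sequence into fixed-length segments and return
    them flattened in random order: a crude reordering effect in
    the spirit of, but not a clone of, GRM-Tools Shuffling."""
    rng = random.Random(seed)
    segments = [samples[i:i + segment_len]
                for i in range(0, len(samples), segment_len)]
    rng.shuffle(segments)
    return [s for seg in segments for s in seg]
```

Run on a dry, crackly source this already yields the kind of reordered texture described above; why one parameter setting has musical value and another doesn’t is exactly the question that remains to be answered.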

Guided by voices (IEM #4)

After having written below about the need to introduce constraints into the creative process, and then having fully immersed myself in the composition of the piece, what has become clear is that while it is useful to introduce pre-compositional constraints – establishing a work-concept that informs the creative process – these constraints themselves are only a scaffold, eventually replaced by the more substantial constraints the work itself gradually establishes the further one moves inside this emergent territory. For example, in working with vocal material for the newly completed Let x = I had presumed that the heterogeneity of voice, the fact that it resonates at causal, semantic, and reduced levels (voice as voice, meaning and sound), could be accommodated in a single work. Certainly it can be, and there are plenty of examples of works in which all these levels (and more) are operating simultaneously (one of my favourites remains John Young’s Sju), but in Let x = the constraint that the work itself introduced was a product of exploring the spatial sound environments afforded by IEM’s Ikosaeder (20-channel loudspeaker) and ambisonic hemisphere (24 channels), and the combination of these. In investigating what can be done with these spatially, it quickly became clear that the work was veering away from voice as speech (semantic) and voice as voice (causal) (aside from in a delimited but structurally significant way, i.e. as a means to mark key structural moments in the piece), and that in fact it needed to, wanted to, was going to, do so. The voice as sound, transformed but still vocalesque (voice-like), afforded enough sonic ambiguity and abstraction for sound materials to be utilised spatially without the histrionics of voices that behave like winged creatures or the schizophrenic effects of invisible conclaves, juries and choruses.
The outcome is a work through which, at least in my reckoning (I don’t need to be reminded that “the author is dead”), I manage to achieve one of the initial aims, but which also guided itself in a direction I had not anticipated. In the vocally tinged spatial atmospheres, textures and trajectories of Let x = there is a commingling of voice and environment which I feel fulfils my stated aims of “the transformation of speech into objects resonating at embodied… and environmental levels” and the dissolution of “the barrier between ‘over here’ and ‘over there,’… the illusory boundary between ‘inside and outside’” (Timothy Morton). Yet at the same time, the aim of deploying speech “as an ‘interface language’ between the kinaesthetic (nonverbal expression, including music, utterance, gesture and space) and the cenaesthetic (complex cognitive structures, including poetic language and the semantics of music)” has not really begun to be explored, simply because there is so little direct speech in the piece, and when speech is heard it is in forms difficult to understand unless one is Indonesian (“Kaki langit,” the foot of the sky, is the phrase that opens the piece) or capable of deciphering six languages at once (the closing moment of the piece simultaneously introduces the same phrase in English, German, Italian, Farsi, Bengali and Indonesian). Recognising that the piece guided itself is an important thing for me, not only because it recognises the extent to which things themselves have their own propensities and powers to which we are always having to respond (Graham Harman’s Guerrilla Metaphysics is good on this topic), but also because it means one can feel good about letting one’s own work-concept fall away and trust that engagement with the complex thing-in-itself (shadowy though its being is) will produce an object which is cohesive in its own ways, irrespective of how closely these match with the hypothetical thing it was intended to become.
One of the other very satisfying aspects of this is that I can still attempt to compose the piece I thought I was composing, and by concentrating less on spatiality and more on semantics, perhaps a new work will emerge which deploys speech “as an ‘interface language’ between the kinaesthetic… and the cenaesthetic”. Therefore the work itself is not finished. Let x =.


Weirding the voice

A second show on the voice for Upbeat, this one looking at technological transformations of the voice in music that sits happily in the shade of popular music.

John Oswald (1988). “Pretender”. Plunderphonics [EP]. RPM manipulation facilitates gender-bending (as do reel-to-reel tape machines, samplers, etc.): “Over the course of this song Dolly Parton gets an aural sex change. Check out the last verse in which she gets to sing a duet with himself. Meanwhile, the arrangement goes from infinitely fast to infinitely slow” (John Oswald).

Goldfrapp (2000). “Deer Stop”. Felt Mountain [CD]. Alison Goldfrapp’s vocal transformed via Will Gregory’s electronics, rendering the whispery noir delivery all the more potent, as if Gregory’s production tools are microscopes for revealing the sonic qualia of emotion…

Burial (2007). “Archangel”. Untrue [CD]. Burial (William Bevan) samples Ray J’s song “One Wish” (2005) – which apparently charted here in NZ – and uses pitch-shifting and time-stretching to map the vocal to a new melody, a side effect of which is that the voice is androgynised. No more a song of boy meets/loves/loses girl; instead we hear a shape-shifting jilted lover, singing the universal song of being lost in and through love.

Mouse on Mars (2001). “Actionist Respoke”. Idiology [CD]. Voice becomes an electronic instrument, a bionic rhythm machine, thanks to the vocalist’s use of a sampler and turntable… This track works nicely in tandem with Kodwo Eshun’s book More Brilliant than the Sun: Adventures in Sonic Fiction (London: Quartet Books, 1998). Eshun’s afrofuturism might just admit two white guys from Germany (Kraftwerk helped Afrika Bambaataa on his way, so why not?)