Let x = technology (IEM #6)

It has often struck me that while there is a plethora of information, particularly online, about how to use audio technology, it is very rare to see a composer talk about their use of technology towards creative outcomes. Why is this? A cynical interpretation is that there's a kind of anxiety around technology-based creativity, driven by concern not to be regarded as an insufficiently technical composer, or perhaps by the fear that creativity involving technology is not "real" creativity (you feed something into the black box and it spits out the finished work). I completely reject the former, citing the work of composers such as John Cage and New Zealander Douglas Lilburn which, in very different ways, is not particularly sophisticated at a technical level but is quite singular in terms of its musical value. The latter is a (pseudo-)fallacy but does point to the reality that the work of electronic musicians, and particularly composers of electronic music, is very often determined – to a greater or lesser extent – by the engineers of the tools they are working with.

To go down the fully deterministic route, one could argue that you simply wouldn't have electronic dance music – with its rigidly metronomic rhythms – without early sequencers (including drum machines). Early technology didn't do "humanise"; it did 16-step (sometimes more) sequencing, with each step having precisely the same duration. At a certain point in musical history this was abhorrent, and then all of a sudden it was a style, entirely accepted as authentic music making. Put post-humanly, this means the musical machine afforded new ways not just of making music but also of hearing and understanding it (as celebrated in Kodwo Eshun's afrofuturist paean More Brilliant than the Sun). In popular electronic music, a terribly vague but useful categorisation, this seems not to be a problem at all and is in fact integral to some of the many fleeting microgenres that Adam Harper celebrates (see his Fader article on the voice in the digital landscape).
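To make that rigidity concrete: a step sequencer of this kind is almost trivially simple logic. The sketch below (in Python, with a tempo, pattern and step count that are purely illustrative rather than taken from any particular machine) shows why every hit lands on an identical grid: there is simply nowhere for "humanise" to happen.

```python
# A minimal sketch of a rigid 16-step sequencer: every step lasts exactly the
# same number of seconds, with no "humanise" jitter. Tempo and pattern are
# invented for illustration, not taken from any particular machine.

STEPS = 16
BPM = 120
STEP_DURATION = 60.0 / BPM / 4   # sixteenth-note steps at 120 BPM

# 1 = trigger a sound on this step, 0 = silence (a four-to-the-floor pattern)
pattern = [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]

def schedule(pattern, step_duration):
    """Return (onset time in seconds, step index) for every active step."""
    return [(i * step_duration, i) for i, hit in enumerate(pattern) if hit]

if __name__ == "__main__":
    for onset, step in schedule(pattern, STEP_DURATION):
        print(f"step {step:2d} fires at {onset:.3f} s")
```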

By contrast, technological determinism, let alone technological influence, doesn't go down at all well in electronic art music (EAM, another vague but useful category). I think here of Denis Smalley's (1997) exhortations against technological listening and the purported sterility of music which does not feature qualities that are perceived on the human level of gesture and utterance. In EAM, much of which is made in a context attached to the tail end of Romanticism/Modernism (the difference is not so great), man masters machine and does so alone. But this old-school humanistic/anthropocentric approach is blind to the degree to which the composer is bound to the machine and to the engineer of that machine (digital or analogue). Another way to say this is that EAM is a collaborative practice, even when there's just one person in the room. And yes, of course there are many artists who are also architects of their own machines (Michael Norris and his Soundmagic Spectral VST plugins, for example), yet the history of EAM is strewn with unacknowledged relationships between composers and the technicians/technologists who aided and abetted them (Sean Williams has researched the relationship between Karlheinz Stockhausen and the WDR studio technicians and technology, but I've yet to read his work). This does seem to be changing, and I'm looking forward to reading Williams's chapter on King Tubby's and Stockhausen's use of analogue technology and the influence it had on their sound (the two are presumably considered independently, but what a fantastic pairing!). The notion that Stockhausen's work has a sound is already an upsetter. Stockhausen made music; his music was sound, but it did not have a sound ("the seductive, if abrasive, curves of [Studie II's] mellow, noisy goodness"). Yes, it does, just like Barry Truax's PODX-era granular work has a sound, and many works from the era of early FFT have a sound, and countless late musique concrète composers sound like GRM-Tools, and my work has a sound which sometimes sounds like FFT and sometimes like GRM-Tools, etc. etc.

This has gone a little off-target, but it does support my initial point: composers of EAM don't like to talk about how they do what they do. They'll tell you what they used to do it, but not what they did with it. Similarly, technologically adept artists will explain the tools they've developed, but not how these tools have been creatively applied. In either case this is a shame, as it limits the pace and depth at which the practice can evolve. If artists explain how they do what they do, other artists can learn from them, and apply a particular set of technical routines in their own ways. I don't buy the argument that this might lead to a kind of technology-based plagiarism. There's already enough sonic and aesthetic homogeneity in EAM. Opening up its creative-technological processes would, I imagine, lead to greater technical refinement and a wider creative palette, and – heaven forbid – perhaps even some criticism of aesthetic homogeneity where it is found. More than this, acknowledgement on the part of composers that they are using technology that has been designed and implemented by another human being might actually lead to establishing a stronger feedback loop between engineer and end-user. This is one of the real beauties of using the Ardour and Reaper DAWs: their design teams implement user feedback in an iterative design process, resulting in DAWs that are much, much easier and friendlier to use than just about any other I can think of. It also strikes me that what I'm outlining is different to the kind of DIY/Open Source culture that makes contemporary software and electronic art cultures so strong. I'm not talking about how to make analogue and digital stuff, but rather how to make musical stuff with it (and if this requires that both the technology and its creative deployment be discussed, all the better).

It is of course a fair point that the artist might not want to spend their time explaining how they do what they do (there's already too little time in which to do it), but I do think practitioners should open up their laptops and outline the ways in which they achieve certain creative outcomes. If this simply reveals that they ran a recording of dry kelp (being cracked and ripped) through GRM-Tools Shuffling and pushed the parameter faders around until they got a sound (a sound!) they liked, that would be a start. This is just what I did almost 20 years ago when I first started seriously trying to make EAM. What I still haven't done is explain to myself, or anyone else, why this combination of sound, DSP and parameter settings produced a result that made me feel there was musical value in what was coming out of the loudspeakers. The initial act may have been relatively simple (setting aside the design of the still wonderful GRM-Tools), but the process and outcomes are not. Untangling and explaining this, or indeed any (interesting) creative-technological method, could be a valuable and useful thing to do. So, this is a task I'm setting myself: in a future entry on this blog, hopefully the next one, I'll attempt to dissect a particular technical method used in the composition of Let x = and also try to explain why the outcome of the process found musical application (i.e. had musical value to me in the context of the work-in-progress).
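In that spirit, here is a first small gesture at openness. I can't reproduce GRM's actual DSP, but the basic idea behind that kind of shuffling (short fragments of a source recording picked at random and layered back at random times) can be sketched in a few lines of Python. Everything below is invented for illustration: the sample rate, fragment length and density stand in for the faders you'd push, and noise stands in for the kelp recording.

```python
import numpy as np

# An illustrative sketch of a fragment-shuffling process in the spirit of
# GRM-Tools Shuffling (not its actual algorithm): random fragments of the
# source are layered back into the output at random positions.
# All parameters are invented for illustration.

SR = 44100                # sample rate (Hz)
FRAG_DUR = 0.120          # fragment length in seconds (a "fragment size" fader)
DENSITY = 20              # fragments per second of output (a "density" fader)
OUT_DUR = 5.0             # seconds of output to generate

rng = np.random.default_rng(0)

# Placeholder source: three seconds of noise standing in for a real recording.
source = rng.uniform(-1.0, 1.0, int(SR * 3.0)).astype(np.float32)

frag_len = int(SR * FRAG_DUR)
out = np.zeros(int(SR * OUT_DUR), dtype=np.float32)
window = np.hanning(frag_len).astype(np.float32)  # fade each fragment in and out to avoid clicks

for _ in range(int(OUT_DUR * DENSITY)):
    src_start = rng.integers(0, len(source) - frag_len)   # where to grab a fragment
    out_start = rng.integers(0, len(out) - frag_len)      # where to drop it in the output
    out[out_start:out_start + frag_len] += source[src_start:src_start + frag_len] * window

out /= max(1.0, float(np.abs(out).max()))  # normalise so the layered fragments don't clip
```

The interesting question, of course, is not this code but why one particular combination of source, fragment length and density sounded like music to me and another didn't, and that is exactly what I'm promising to dig into next time.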
