Let x = technology (IEM #6)

It has often struck me that while there is a plethora of information, particularly online, about how to use audio technology, it is very rare to see a composer talk about their use of technology towards creative outcomes. Why is this? A cynical interpretation is that there’s a kind of anxiety around technology-based creativity, driven by a concern not to be regarded as an insufficiently technical composer, or perhaps by the fear that creativity involving technology is not “real” creativity (you feed something into the black box and it spits out the finished work). I completely reject the former, citing the work of composers such as John Cage and New Zealander Douglas Lilburn, which, in very different ways, is not particularly sophisticated at a technical level but is quite singular in terms of its musical value. The latter is a (pseudo-)fallacy, but it does point to the reality that the work of electronic musicians, and particularly composers of electronic music, is very often determined – to greater or lesser extents – by the engineers of the tools they are working with.

To go down the fully deterministic route, one could argue that you simply wouldn’t have electronic dance music – with its rigidly metronomic rhythms – without early sequencers (including drum machines). Early technology didn’t do “humanise”; it did 16-step (sometimes more) sequencing, with each step having precisely the same duration. At a certain point in musical history this was abhorrent, and then all of a sudden it was a style, entirely accepted as authentic music making. Put post-humanly, this means the musical machine afforded new ways not just of making music but also of hearing and understanding it (as celebrated in Kodwo Eshun’s afrofuturist paean More Brilliant than the Sun). In popular electronic music, a terribly vague but useful categorisation, this seems not to be a problem at all and in fact is integral to some of the many fleeting microgenres that Adam Harper celebrates (see his Fader article on the voice in the digital landscape).
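
The metronomic grid I’m describing is easy to make concrete. Here is a minimal Python sketch (my own illustration, not any particular machine’s firmware; the function name and the `humanise` parameter are hypothetical):

```python
import random

def step_onsets(bpm=120.0, steps=16, humanise=0.0, seed=None):
    """Onset times (in seconds) for one bar of a step sequencer.

    With humanise=0.0 every step has precisely the same duration,
    as on early hardware sequencers; a non-zero value jitters each
    onset by up to +/- `humanise` seconds, as later "humanise"
    functions would.
    """
    rng = random.Random(seed)
    step_dur = 60.0 / bpm / 4.0  # 16th notes at the given tempo
    onsets = []
    for i in range(steps):
        t = i * step_dur
        if humanise > 0.0:
            t += rng.uniform(-humanise, humanise)
        onsets.append(max(0.0, t))  # clamp the first step to the bar start
    return onsets
```

The point of the sketch is the default case: with `humanise=0.0` every inter-onset interval is identical, which is exactly the rigidity that became a style.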

By contrast, technological determinism, let alone technological influence, doesn’t go down at all well in electronic art music (EAM, another vague but useful category). I think here of Denis Smalley’s (1997) exhortations against technological listening and the purported sterility of music which does not feature qualities perceived on the human level of gesture and utterance. In EAM, much of which is made in a context attached to the tail end of Romanticism/Modernism (the difference is not so great), man masters machine and does so alone. But this old-school humanistic/anthropocentric approach is blind to the degree to which the composer is bound to the machine and to the engineer of that machine (digital or analogue). Another way to say this is that EAM is a collaborative practice, even when there’s just one person in the room. And yes, of course there are many artists who are also architects of their own machines (Michael Norris and his Soundmagic Spectral VST plugins, for example), but the history of EAM is strewn with unacknowledged relationships between composers and the technicians/technologists who aided and abetted them (Sean Williams has researched the relationship between Karlheinz Stockhausen and the WDR studio technicians and technology, but I’ve yet to read his work). This does seem to be changing, and I’m looking forward to reading Williams’s chapter on King Tubby’s and Stockhausen’s use of analogue technology and the influence it had on their sound (the two are presumably considered independently, but what a fantastic pairing!). The notion that Stockhausen’s work has a sound is already an upsetter. Stockhausen made music; his music was sound, but it did not have a sound (“the seductive, if abrasive, curves of [Studie II’s] mellow, noisy goodness”).
Yes, it does, just like Barry Truax’s PODX era granular work has a sound, and many works in the era of early FFT have a sound, and countless late musique concrète composers sound like GRM-Tools, and my work has a sound which sometimes sounds like FFT and sometimes like GRM-Tools etc etc.

This has gone a little off-target, but it does support my initial point: composers of EAM don’t like to talk about how they do what they do. They’ll tell you what they used to do it, but not what they did with it. Similarly, technologically adept artists will explain the tools they’ve developed, but not how these tools have been creatively applied. In either case this is a shame, as it limits the pace and depth at which the practice can evolve. If artists explain how they do what they do, other artists can learn from them, and apply a particular set of technical routines in their own ways. I don’t buy the argument that this might lead to a kind of technology-based plagiarism. There’s already enough sonic and aesthetic homogeneity in EAM. Opening up its creative-technological processes would, I imagine, lead to greater technical refinement and a wider creative palette, and – heaven forbid – perhaps even some criticism of aesthetic homogeneity where it is found. More than this, acknowledgement on the part of composers that they are using technology that has been designed and implemented by another human being might actually lead to establishing a stronger feedback loop between engineer and end-user. This is one of the real beauties of using the Ardour and Reaper DAWs – their design teams implement user feedback in an iterative design process – resulting in DAWs that are much, much easier and friendlier to use than just about any other I can think of. It also strikes me that what I’m outlining is different to the kind of DIY/Open Source culture that makes contemporary software and electronic art cultures so strong. I’m not talking about how to make analogue and digital stuff, but rather how to make musical stuff with it (and if this requires that both the technology and its creative deployment be discussed, all the better).

It is of course a fair point that the artist might not want to spend their time explaining how they do what they do (there’s already too little time in which to do it), but I do think practitioners should open up their laptops and outline the ways in which they achieve certain creative outcomes. If this simply reveals that they ran a recording of dry kelp (being cracked and ripped) through GRM-Tools Shuffling and pushed the parameter faders around until they got a sound (a sound!) they liked, that would be a start. This is just what I did almost 20 years ago when I first started seriously trying to make EAM. What I still haven’t done is explain to myself, or anyone else, why this combination of sound, DSP and parameter settings produced a result that made me feel there was musical value in what was coming out of the loudspeakers. The initial act may have been relatively simple (setting aside the design of the still wonderful GRM-Tools), but the process and outcomes are not. Untangling and explaining this, or indeed any (interesting) creative-technological method, could be a valuable and useful thing to do. So, this is a task I’m setting myself: in a future entry on this blog, hopefully the next one, I’ll attempt to dissect a particular technical method used in the composition of Let x = and also try to explain why the outcome of the process found musical application (i.e. had musical value to me in the context of the work-in-progress).

Icosahedral Loudspeaker: the ICO as instrument for electronic chamber music (IEM #5)

IEM’s ICO is a 20-channel loudspeaker, developed by Franz Zotter at IEM. The ICO, and its smaller spherical cousin, was developed as part of Zotter’s PhD research into sound radiation synthesis as a tool for replicating the acoustic radiation of instruments and measuring the acoustic response of rooms: “This work demonstrates a comprehensive methodology for capture, analysis, manipulation, and reproduction of spatial sound-radiation. As the challenge herein, acoustic events need to be captured and reproduced not only in one but in a preferably complete multiplicity of directions” (Zotter, 2009). The ICO was developed primarily as a technical tool, but through collaborations between Zotter and composer / sound artist Gerriet Sharma it has found application as a creative tool, or indeed as an instrument. As Sharma and Zotter (2014) outline, “[The ICO] is capable of providing a correct and powerful simulation of musical instruments in their lower registers in all their 360◦ directional transmission range. The device is also suitable for the application of new room acoustic measurements in which controllable directivity is used to obtain a refined spatial characterization.” It is this “controlled directivity” that has primarily found artistic application. The “beamforming algorithm developed in [Zotter’s PhD research] allows strongly focused sound beams to be projected onto floors, ceilings, and walls… [This] allows to attenuate sounds [sic] from the ICO itself while sounds from acoustic reflections can be emphasized. Beams are not only freely adjustable in terms of their radiation angle, also different ones can be blended, or their beam width can be increased. A loose idea behind employing such sound beams in music is to orchestrate reflecting surfaces, maybe yielding useful effects in the perceived impression.”
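
Zotter’s actual beamforming is grounded in spherical-harmonic radiation synthesis and is considerably more sophisticated, but the basic idea of steering a beam by weighting the drivers can be sketched very simply. The following Python fragment is an illustrative cosine-power weighting of my own devising – not the IEM algorithm, and the function name is hypothetical:

```python
import numpy as np

def beam_gains(driver_dirs, target_dir, order=4):
    """Per-driver gains for a crude loudspeaker-array 'beam'.

    driver_dirs: (n, 3) array of unit vectors, one per driver.
    target_dir:  3-vector giving the desired beam direction.
    order:       higher values narrow the beam.

    Each driver is weighted by (cosine of its angle to the target)
    raised to `order`, with rear-facing drivers zeroed, and the
    gains normalised to unit total power.
    """
    d = np.asarray(driver_dirs, dtype=float)
    t = np.asarray(target_dir, dtype=float)
    t = t / np.linalg.norm(t)
    cos = d @ t                       # cosine of angle driver-to-target
    g = np.clip(cos, 0.0, None) ** order
    norm = np.sqrt(np.sum(g ** 2))
    return g / norm if norm > 0 else g
```

Raising `order` concentrates the energy on fewer drivers, which is the crude analogue of narrowing the beam width that Sharma and Zotter describe, while lowering it blends wider beams.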

ICO 20-channel loudspeaker (IEM)

My work with the ICO, in combination with the IEM 24-channel hemisphere array, certainly confirmed that beam-forming can find artistic application, and indeed that the phenomena described by Zotter and Sharma are actual (hearing is believing). My exploration of the ICO’s propensities as a compact loudspeaker array is confirmed by the small-sample listener-response research presented in their 2014 paper. Using spatial controller plug-ins (VST) developed by Matthias Kronlachner, it is mercifully trivial to shape and control acoustic beams in terms of perceived size and movement, and to use beam-forming to create reflections on surfaces within the performance space.

It is the latter – the creation of reflections within the performance space – that is perhaps the most surprising and lively aspect of the ICO (although there’s much to be said for the capabilities of the ICO as an acoustic surface; see below). To my ears this is because it requires one to engage fully with the rich interactions of source material, loudspeaker and room response in ways which one tends not to do when using the hemispherical array (or indeed any other multichannel approach). This is simply because in such arrays one is concerned with creating a virtual sound image/space, and concern for the room acoustic tends to be limited to minimising its impact on the qualities of audio reproduction or reinforcement (in the case of live electronic music). Using the ICO as an instrument to “orchestrate reflecting surfaces”, on the other hand, requires engagement with the acoustic properties of the performance space as an integral aspect of the creative process, and also an awareness of the specific capabilities of the ICO. The outcomes of this are interesting:

The (electroacoustic) work itself can no longer be considered independent of the space in which it is to be performed. In composing Let x = (2014-) for the ICO and hemisphere, for example, many of the compositional decisions made in those sections of the work for the ICO alone were based on achieving results that may not be achievable in other spaces. (I hope I get the chance to find out!) When combining the ICO with the hemisphere this was not the case, as the latter masks the acoustic response of the IEM Cube, making such sections or passages more readily transposable to other spaces. One of the really exciting things about working with the ICO is that you are required to tune in closely to the interaction of sound, space and movement, and you often encounter results that are quite unexpected, as the room response enacts spatial behaviours that could not be anticipated from the topology and morphology of the source material projected from the ICO. The room itself is revealed as a sonic object integral to the work as a whole, and this raises the very question of what it is to compose spatially.

When one uses the ICO to orchestrate space, or more accurately to orchestrate perceived relationships between sound and space (sound and space being interdependent), one is orchestrating for a specific space, creating a relatively high (but far from total) degree of site-specificity. In moving a work from one space to another, the piece then needs to be spatially recomposed in order to work in its new acoustic context. This is entirely possible, as Sharma wonderfully demonstrated in his Signale portrait concert (11 Nov 2014) in the György-Ligeti-Saal in MUMUTH, which featured works not originally composed for this space. The question arises, though: what kind of work, using the ICO and focused on sound-space orchestration, is more readily adapted for effective outcomes in different spaces, and indeed are all spaces suitable for the ICO and such application? (I hope to return to this topic in a later post, including the ways in which the ICO can be used in sound installation work, as it has been by Martin Rumori.)

The ICO, despite its unique affordances, is characterised by a number of limitations. That these are noticeable is due to recognition of what the ICO is as a musical object, rather than a response to the false expectation that the ICO is some kind of super-loudspeaker, possessed of sonic superpowers. In fact, such limitations should lead one to consider the ICO as an instrument which, as is the case in effective use of any instrument, needs to be used in ways that exploit its strengths and are not compromised by its shortcomings (don’t ask a trumpet to do the rapid passage work of a flute, for example). Due to the driver size (16.5cm), the frequency response of the ICO rolls off below around 150Hz, and the power of each individual speaker is also limited. These limitations can be overcome by coupling loudspeakers to create greater loudness and improve bass response (the typical solution is to feed low-passed signal below 150Hz to all 20 loudspeakers). However, these solutions have acoustic-spatial implications. Bass material, even in the low-mid range, is clearly non-directional when an omni source is created (as just described), which decouples this frequency range from directional beam-forming, producing occasionally quite unusual spatial effects (which Sharma has exploited). Similarly, beam-forming is compromised when loudspeakers are coupled to increase loudness, as the signal is spread over a larger area. Moreover, the icosahedron that houses the 20 loudspeakers is itself something of a resonating chamber which, although far from functioning like a resonator in a traditional instrument, has its own acoustic colour.
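
The typical solution just mentioned – an omni bass bus below the crossover, directional beams above it – can be sketched in a few lines. This is my own first-order illustration (a real bass-management chain would use proper crossover filters), with hypothetical function names:

```python
import numpy as np

def one_pole_lp(x, fc, fs):
    """A first-order low-pass: a crude stand-in for a real crossover."""
    a = 1.0 - np.exp(-2.0 * np.pi * fc / fs)
    y = np.zeros(len(x))
    acc = 0.0
    for n in range(len(x)):
        acc += a * (x[n] - acc)
        y[n] = acc
    return y

def bass_managed(feeds, fc=150.0, fs=48000):
    """feeds: (n_drivers, n_samples) beamformed driver signals.

    Content below fc is mono-summed and sent equally to all drivers
    (creating an omni bass source), while each beam keeps only its
    band above the crossover.
    """
    feeds = np.asarray(feeds, dtype=float)
    bass = one_pole_lp(feeds.mean(axis=0), fc, fs)  # shared omni bass
    out = np.empty_like(feeds)
    for i in range(feeds.shape[0]):
        # complementary split: high band = signal minus its low band
        out[i] = (feeds[i] - one_pole_lp(feeds[i], fc, fs)) + bass
    return out
```

The spatial side effect described above falls straight out of the arithmetic: below the crossover every driver carries the same signal, so that band carries no directional information at all.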

Beam-forming and sound-spatial orchestration are two of the strengths of the ICO, but I shouldn’t forget that the ICO packs a lot of loudspeakers into a relatively compact object. This comes right down to the possibility of addressing each loudspeaker individually, creating a very distinct point source. While I was surprised by what can be done in working with the ICO as a tool for creating acoustic reflections, I was equally pleased to hear it realise my ideas for enacting complex acoustic surfaces: pointillistic textures skittering across its surface, sweeps of material oscillating between indirect and direct loudspeakers, clearly stratified layers of sound, all emanating from a discrete spatial field. As I hope was evident in Let x =, the ICO offers an abundance of spatial-compositional possibilities, even before using it to stimulate and shape room response or conjoining it with a more traditional array such as the IEM hemisphere.
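
Those pointillistic textures are, at bottom, just a score of (time, driver) events. A hypothetical sketch of how such a texture might be generated (an illustration of the idea, not the patch I actually used):

```python
import random

def skitter_score(n_drivers=20, n_grains=40, span=4.0, seed=0):
    """Scatter short grains across individually addressed drivers.

    Returns a time-sorted list of (onset_seconds, driver_index,
    grain_duration) tuples; each grain sounds from exactly one
    driver, i.e. one distinct point source on the array's surface.
    """
    rng = random.Random(seed)
    score = []
    for _ in range(n_grains):
        onset = rng.uniform(0.0, span)
        driver = rng.randrange(n_drivers)  # one point source per grain
        dur = rng.uniform(0.01, 0.08)      # 10-80 ms grains
        score.append((onset, driver, dur))
    return sorted(score)
```

Rendering is then a matter of mixing each grain into its single channel of a 20-channel output buffer; the skittering effect comes entirely from the rapid succession of distinct point sources.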

Given the propensities and limitations just outlined, it’s my feeling that the ICO should not only be considered as an instrument, but, more than this, that it is an instrument suited to chamber music. After all, it is in itself a chamber, a space with a certain resonance which gives it the quality of enclosing sound as much as projecting it. In other words, it has its own sound, its own sonic grain, as any instrument does. Through its frequency response and loudness it is best suited to small and medium-sized rooms, such as those in which chamber music is best heard, not so much because it cannot project sufficiently to be heard in large spaces, but because it requires listener proximity, allowing perception of the detail it generates, both in itself and in the space it activates.

Guided by voices (IEM #4)

After having written below about the need to introduce constraints into the creative process, and then having fully immersed myself in the composition of the piece, what has become clear is that while it is useful to introduce pre-compositional constraints – establishing a work-concept that informs the creative process – these constraints are only a scaffold, eventually replaced by the more substantial constraints the work itself gradually establishes the further one moves inside this emergent territory. For example, in working with vocal material for the newly completed Let x = I had presumed that the heterogeneity of voice, the fact that it resonates at causal, semantic, and reduced levels (voice as voice, meaning and sound), could be accommodated in a single work. Certainly it can be, and there are plenty of examples of works in which all these levels (and more) operate simultaneously (one of my favourites remains John Young’s Sju), but in Let x = the constraint that the work itself introduced was a product of exploring the spatial sound environments afforded by IEM’s Ikosaeder (20-channel loudspeaker) and ambisonic hemisphere (24 channels), and the combination of these. In investigating what can be done with these spatially, it quickly became clear that the work was veering away from voice as speech (semantic) and voice as voice (causal) (aside from in a delimited but structurally significant way, i.e. as a means to mark key structural moments in the piece), and that in fact it needed to, wanted to, was going to, do so. The voice as sound, transformed but still vocalesque (voice-like), afforded enough sonic ambiguity and abstraction for sound materials to be utilised spatially without the histrionics of voices that behave like winged creatures or the schizophrenic effects of invisible conclaves, juries and choruses.
The outcome is a work in which, at least by my reckoning (I don’t need to be reminded that “the author is dead”), I manage to achieve one of the initial aims, but which also guided itself in a direction I had not anticipated. In the vocally tinged spatial atmospheres, textures and trajectories of Let x = there is a commingling of voice and environment which I feel fulfils my stated aims of “the transformation of speech into objects resonating at embodied… and environmental levels” and the dissolution of “the barrier between ‘over here’ and ‘over there,’… the illusory boundary between ‘inside and outside’” (Tim Morton). Yet at the same time, the aim of deploying speech “as an ‘interface language’ between the kinaesthetic (nonverbal expression, including music, utterance, gesture and space) and the cenaesthetic (complex cognitive structures, including poetic language and the semantics of music)” has not really begun to be explored, simply because there is so little direct speech in the piece, and when speech is heard it is in forms difficult to understand unless one is Indonesian (“Kaki langit,” the foot of the sky, is the phrase that opens the piece) or capable of deciphering six languages at once (the closing moment of the piece simultaneously introduces the same phrase in English, German, Italian, Farsi, Bengali and Indonesian). Recognising that the piece guided itself is an important thing for me, not only because it acknowledges the extent to which things themselves have their own propensities and powers to which we are always having to respond (Graham Harman’s Guerrilla Metaphysics is good on this topic), but also because it means one can feel good about letting one’s own work-concept fall away and trust that engagement with the complex thing-in-itself (shadowy though its being is) will produce an object which is cohesive in its own ways, irrespective of how closely these match the hypothetical thing it was intended to become.
One of the other very satisfying aspects of this is that I can still attempt to compose the piece I thought I was composing, and by concentrating less on spatiality and more on semantics, perhaps a new work will emerge which deploys speech “as an ‘interface language’ between the kinaesthetic… and the cenaesthetic”. Therefore the work itself is not finished. Let x =.


Creativity and constraints (IEM #3)

The fecundity and prodigiousness of sonic matter is something of a trope in (experimental) electronic music. This is implicit in the title of Paul Théberge’s book Any Sound You Can Imagine. If anything is possible, everything is the result. Everything, needless to say, is a lot. Put in more serious terms, this (over)abundance of materials – and of meanings and creative possibilities – was a core theme in my paper “The Acousmatic and the Language of the Technological Sublime” (presented at EMS 2007):

Faced with the sonic fecundity of technology, the acousmatic composer becomes a bricoleur, sorting through and trying to make sense of the mountain of sonic material produced by the very technology the composer claims mastery over.

Way back in 1951 Pierre Schaeffer was grappling with the problem of how to get from a surfeit of wayward concrete sounds to music, and although he was ultimately defeated by his predilection for the conventionally musical, his articulation of this challenge remains germane:

We want to create a work. How shall we go about it? First provide ourselves with material, then trust to instinct? And how shall we establish the score? How are we to imagine a priori the thousand unexpected transformations of concrete sound? How can we choose between hundreds of samples when no system of classification, and no notation, has yet been decided upon? (In Search of a Concrete Music, 78-79)

If anything is possible, if sounds are growing and mutating like audible viruses (another trope in electronic music, cf. Goodman’s “audio virology”), then where should I start? In instrumental music, the blank sheet of manuscript at least carries with it some minimal level of structure, because the smooth or open space of “pansonority” (Ivan Wyschnegradsky, 1928) has already been striated into pitch-space, and the reserve of instruments and their repertoire of sounds etc. already lies waiting. In comparison, the blank screen of a Max patch, for example, is much blanker. The magnitude of the tabula rasa is that much greater. This white-on-white picture I’m painting is over-simplified and ignores the fact that I am at present creating a piece based on a fairly specific set of materials (see the previous two entries), but nonetheless Schaeffer’s “how shall we start?” problem remains. The usual, and useful, response to “how shall we?” is to seek to fetter the sonic wilds. Or, as Jacques Attali has it, to discipline noise so that it behaves itself and is transformed into music. At a grand level this is Schaeffer’s project in the Traité des Objets Musicaux (realising the missing “system of classification”). At a lower, more individual level, this is the challenge of creating a “work-concept”, a box to work in that is neither too constricting nor too roomy.

In contemporary parlance this is the matter of creative constraints: the dimensions of imaginative space which each project requires so that a number of things can happen. Firstly, that creative agoraphobia doesn’t set in (constraint establishes boundaries). Secondly, that any action taken can be directed towards a goal (constraint creates pathways). Thirdly, that as ideas and materials accumulate there are reasons to keep some of these and discard others (constraint encourages economy). Fourthly, that as a piece begins to take shape / develop / disclose itself it is able to hold itself together and isn’t pulled apart by the different forces at work in its various elements (constraint promotes cohesion). Fifthly, that creative energy and focus increase within the reaction chamber of a project (constraint affords flow). Or, in Stravinsky’s words: “My freedom will be so much the greater and more meaningful the more narrowly I limit my field of action and the more I surround myself with obstacles” (Poetics of Music in the Form of Six Lessons).

All fine so far. But what is a constraint exactly? A limitation or restriction, as per my dictionary widget. This is partly what I mean, in that any particular creative project should (realistically can) only engage with some ideas, materials, forms and processes. That is, in theory everything is possible, but in this particular project only some of those things are useful. The negative senses of constraint, as can’ts and shoulds (the auxiliary verbs of convention), are not of interest here, except as something to attend to in the ongoing excavation of one’s creative psychology, including phenomena such as Bloom’s anxiety of influence or the social imperatives to produce work that demonstrates certain features as markers of membership in particular creative communities (complexity, technical virtuosity etc. being those that often apply in contemporary composition). Constraints in the positive sense engender a kind of negentropy, such that time and energy isn’t frittered on ideas and concerns that are peripheral to the project at hand. The question is, of course: what is central and what is peripheral? In the initial stages of a project, one simply doesn’t know for sure. So is there a tool for establishing at least a provisional certainty (is that an oxymoron)? I am very tempted to say that intuition is the best tool to rely on here: the feeling that something is right, even if – and this is very important – the rightness is accompanied by other intimations, such as the seemingly inordinate difficulties involved in seeing through the thing that seems right (recent-ish research suggests that hurdles are a very good stimulus for creativity).
Intuition, even when properly tempered by stubbornness and willingness to take risks, is often maligned in artistic work (particularly through the push to legitimate art-work as artistic research), but it is important because it involves recognising one’s own (possibly) unique position in relation to a set of materials and ideas which are in all likelihood not unique, but which one chooses as one’s own; in doing so a unique situation is set up, one which is not boundless but limited through this choosing. Within it, one can’t do just anything, but only those things that fit into this temporary situation. It’s intuition all the way down: the rightness of the materials and ideas, affording a sense of the rightness of their combination and articulation; the rightness of the situation these establish, which in turn affords and excludes further choices. Constraint as the feedback loop of the self, which in a deeper sense is the acceptance of one’s finitude. This doesn’t make creative work any easier, but it does make it more possible.

Reading for writing (IEM #2)

The reading for my IEM project with Nicholas Isherwood goes in (at least) two directions, the theoretical (the coupling of the kinaesthetic and cenaesthetic, as mentioned in the previous entry, and a topic for later entries) and the creative. I say creative because I’m not seeking out text with high literary value, but rather that which will facilitate creating a new work. The initial, and still core, idea is to use everyday expressions that link body and environment. Dead or frozen metaphors, as they’re sometimes called. Language as Ralph Waldo Emerson’s “fossil poetry”. Breath of wind. Mouth of the river. Stream of words. And similar expressions in languages other than English. Farsi, courtesy of Mo Zareei, presents some twists: Ghorreshe abr (the roar of the cloud), Gerye yeh abr (the tear of the cloud). Indonesian too, thanks to Yono Soekarno: Kaki langit (foot/leg sky – the horizon), Jari mentari (fingers sun – sun rays). Some phrases are common across many languages, and will find a place in the project (navel of the world, heart of stone). In working with such expressions, and their musical and sonic potential, I’m excited by the idea that the piece might follow a path back to the “brilliant picture” that Emerson’s fossil poetry is a remnant of.

The etymologist finds the deadest word to have been once a brilliant picture. Language is fossil poetry. As the limestone of the continent consists of infinite masses of the shells of animalcules, so language is made up of images, or tropes, which now, in their secondary use, have long ceased to remind us of their poetic origin. But the poet names the thing because he sees it, or comes one step nearer to it than any other. This expression or naming is not art, but a second nature, grown out of the first, as a leaf out of a tree.

My focus is not a literary one though, as I’m not seeking to revivify the beings of second nature through language. Rather, I’m hoping to draw attention to the connection between first and second nature, to afford the listener an experience of second nature growing “out of the first.” Sound, music and language growing in and out of each other.

To underpin these metaphors, I’m also seeking out foundational texts (creation myths of one kind or another) which speak directly of the meshing of body and environment, though most often in these the body (or some kind of divine being) magically always already exists and the environment emerges from it.

Puruṣa from the Rig Veda (India)
When they divided Puruṣa how many portions did they make?
What do they call his mouth, his arms? What do they call his thighs and feet?

The Moon was gendered from his mind, and from his eye the Sun had birth

Ovid’s Metamorphoses
Atlas, so huge, became
A mountain; beard and hair were changed to forests,
Shoulders were cliffs, hands ridges; where his head
Had lately been, the soaring summit rose

Pan Gu (Chinese)
P’an-Ku’s bones changed to rocks; his flesh to earth; his marrow, teeth and nails to metals; his hair to herbs and trees; his veins to rivers; his breath to wind; and his four limbs became pillars marking the four corners of the world

The one which most struck me, a creation myth appropriate to the contemporary world, a piece of speculative science, is Plato’s Timaeus.

God took such of the primary triangles as were straight and smooth, and were adapted by their perfection to produce fire and water, and air and earth – these, I say, he separated from their kinds, and mingling them in due proportions with one another, made the marrow out of them to be a universal seed of the whole race of mankind; and in this seed he then planted and enclosed the souls, and in the original distribution gave to the marrow as many and various forms as the different kinds of souls were hereafter to receive. That which, like a field, was to receive the divine seed, he made round every way, and called that portion of the marrow, brain, intending that, when an animal was perfected, the vessel containing this substance should be the head.

Reading this via Lakoff and Johnson’s work (and ignoring the triangles), there seem to be a number of metaphors at work here. The head is a vessel. The body is a substance (marrow, as the essential substance). But underlying these is the more fundamental image of mankind as both plant and earth – first nature (seed and field) – in and out of which grows second nature – humanity.

And finally, unexpectedly, I came across this from Pierre Schaeffer, one of musique concrète’s most significant inventors:

A shell against your ear will make your blood sing to the rhythm of the sea. This is because there are two universes, similar in every way, separated only by the surface of your skin.

Emerson rephrased? Merleau-Ponty seems a more likely influence. Nevertheless, this might just find a place in the piece.


IEM residency begins (IEM #1)

This begins the documentation of my composer residency at the Institut für Elektronische Musik und Akustik (IEM) in Graz, Austria. While here I’ll be creating a new piece in collaboration with renowned vocalist Nicholas Isherwood. This will be realised firstly as an acousmatic work for icosahedral loudspeaker and ambisonic audio (in the IEM CUBE), which will then be adapted as a live work for vocalist, live electronics and ambisonic audio.

The core of the project is to find ways to use speech as an ‘interface language’ between the kinaesthetic (nonverbal expression, including music, utterance, and gesture) and the cenaesthetic (complex cognitive structures, such as poetic language and the semantic dimension of music), bringing these two cognitive dimensions together through the “decoding” of speech into “lower” level sonic forms, which resonate at embodied, cultural and environmental levels. The text for the piece uses metaphors, in multiple languages, that couple bodily, social and environmental imagery, aiming to “[dissolve] the barrier between ‘over here’ and ‘over there,’ and more fundamentally, the illusory boundary between ‘inside and outside’” (Morton, 25). The voice is of course central to this. As Steven Connor puts it in Dumbstruck (3-4):

My voice comes from me first of all in a bodily sense. It is produced by means of my vocal apparatus… It is my voice I hear resonating in my head, amplified and modified by the bones of my skull, at the same time as I see and hear its effects upon the world… Giving voice is the process which simultaneously produces articulate sound, and produces myself, as a self-producing being… Listen, says a voice: some being is giving voice… [Voice] is me, it is my way of being me in my going out of myself… My voice is not something I merely have… Rather it is something I do

What this means in practice, I’m not yet sure, but I am hugely excited about the project(s), for many reasons, not least of all that it affords me a chance to return to the combination of music and language, via the voice, an area I’ve not worked in for quite a while now. At the same time there’s a lot of learning to be done in terms of multichannel audio and composing for higher order ambisonics (thus far I’ve only had the opportunity to explore first order ambisonics). Along the way I’ll be attempting to keep a close record of the project, primarily in terms of its aesthetic and creative aspects.



Is Music on the Right Track?

How would I know? But here are some crystal-ball thoughts for the future of music/technology, ahead of the “Is Music on the Right Track?” panel discussion (which I’m contributing to) hosted by IPENZ and the New Zealand Music Commission, and focussing on the “future challenges to music, for both industry and consumers, driven by technology”.

  1. The musician-engineer is becoming a standard presence in the musical landscape. This is the musician who designs and builds the tools needed to make the music they want to hear. A recent example: the development of Live, which reputedly started life as a prototype built in the widely used Max environment. The outcome? Loops for all. Ableton has transformed the landscape of electronica, but it still adheres to the musical norms that govern the development of most new musical tools. The multi-track tape paradigm for example, which is the basis for every DAW and is still present in Ableton, requires that we think in terms of channel strips and a metric timeline. Building accessible tools that don’t adhere to such norms – Quince is an example – would be an excellent first step towards renewing musical culture. Here’s where mainstream music-making can borrow from the aesthetic and technical riches amassed by experimental musicians over the last century. Couple this with the Web 2.0 culture of ideas/information sharing and we might just be reminded of what left-field truly is and leave behind prosumer technology in the process.
  2. Technology has outstripped us. We’ve made machines that are capable of doing more, or at the very least facilitating more, than most of us are capable of imagining. Now we have to catch up. Catching up requires either that we work towards abandoning staid and normative concepts of music, or that we accept our affective and aesthetic faculties are not up to the task of following our sound-making technologies into their future. Either we stay home and play until all the guitar strings are broken, or we accept that our music-making tools are on their own bandwagon and get on board.
  3. Music is an affective technology. In its experimental forms it has outstripped our abilities to listen. Following Adorno’s thesis that music predicts the shape of society to come, we can learn a great deal about the world by listening to the music that has gone off our cultural grid (formally defined by the presence of metric rhythm and equal-tempered pitch). If we could all listen to the outer reaches of music, then perhaps we would grow within ourselves the empathetic and intellectual tools required to grasp the realities of hyper-phenomena such as climate change, mega-cities, and a planet of slums. Time for another cognitive big bang.