This Storm Is Called Progress

This Storm Is Called Progress (2016), a dual-screen audio-visual installation created in collaboration with filmmaker Grayson Cooke, was recently shortlisted for the Waterhouse Natural Science Art Prize. The work was highly commended by the judging panel and will be exhibited at the South Australian Museum in Adelaide, 10 June – 31 July 2016.

The project articulates the temporal and spatial disjunctions that underpin the Anthropocene, through juxtaposition of the “deep time” of ancient geological formations (the Naracoorte Caves in South Australia) with the technologically translated time of the anthropogenic present (Landsat images of Antarctic ice shelves).

The role of sound and music in the installation is to affectively charge and temporally vectorise its non-human, non-sentient subjects. This renders the non-human in humanly accessible form, affording the installation’s audience immediate access to ecological phenomena – hyperobjects – that otherwise exceed and elude human ken. This is a vital social-ecological contribution that art can make in response to the multiple environmental crises that define our contemporary era.

Anthroposcenes (1, 2)

Anthroposcenes (1, 2), for voice and electronics, was commissioned by IEM and premiered by the wonderful counter-tenor Kai Wessel in IEM’s Signale series back in March. The concert featured works by Trevor Wishart and Karlheinz Essl – fabulous to be in such outstanding company! The piece itself is an ongoing exploration of the body-environment metaphors, in multiple languages, which underpinned Let x =, created while composer-in-residence at IEM in 2014-15.

Lost Oscillations

Lost Oscillations, a collaboration between myself, Jim Murphy, and Mo H. Zareei, is a sound installation that requires the human touch – literally – of its audience to reactivate and feel through the layered sonic archaeology of Christchurch: the city’s contemporary and historic soundscapes and its ever-shifting spatial character.

In Lost Oscillations the immediacy of listening and touch embeds the participant in a field of phantom sound that their touch draws forth from the city, sonically and emotionally colouring the cityscape surrounding the installation.

The installation was commissioned by the 2015 Audacious Festival of Sonic Art. Jim and I were interviewed about the project by Eva Radich on Radio NZ Concert’s Upbeat show. You can listen to the interview here.

Let x = [Binaural version] (IEM#9)

I’ve just uploaded a binaural version of Let x = (on Soundcloud) for icosahedral loudspeaker (ICO) and 24-channel loudspeaker hemisphere, composed while I was 2014 composer-in-residence at IEM (Graz, Austria). The binaural version combines recordings of the ICO, made using a Schoeps KFM 6 mic, with mix-downs from the 24-channel ambisonic audio. The result isn’t the same as hearing the piece in-situ – the verticality of the piece is lost and the degree of immersion is reduced – but it gives some sense of its spatiality and I hope also conveys the ICO’s spatialisation capabilities, which I described in an earlier post.
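For readers curious what the ambisonic-to-binaural step involves in principle, here is a minimal sketch of the general technique – not the production chain used for the piece, which relied on the ambix/mcfx suites and the Schoeps recordings. It decodes horizontal-only first-order B-format to a ring of virtual loudspeakers, then approximates each speaker’s contribution to the two ears with crude interaural level and time differences in place of measured HRTFs; every function name and parameter here is illustrative only.

```python
import numpy as np

def decode_foa_to_speakers(w, x, y, azimuths):
    """Basic horizontal-only first-order ambisonic decode: each virtual
    speaker receives W plus the velocity components projected onto its
    direction (azimuth in radians, 0 = front, positive = left)."""
    return [0.5 * (w + x * np.cos(az) + y * np.sin(az)) for az in azimuths]

def binauralise(speaker_signals, azimuths, sr=48000):
    """Crude binaural render: per-speaker interaural level and time
    differences stand in for measured HRTFs (a toy model only)."""
    n = len(speaker_signals[0])
    max_itd = int(0.0007 * sr)  # ~0.7 ms maximum interaural delay
    left = np.zeros(n + max_itd)
    right = np.zeros(n + max_itd)
    for sig, az in zip(speaker_signals, azimuths):
        s = np.sin(az)                       # > 0: speaker on the left
        l_gain, r_gain = 0.5 * (1 + s), 0.5 * (1 - s)
        itd = int(abs(s) * max_itd)          # delay applied to the far ear
        if s >= 0:
            left[:n] += l_gain * sig
            right[itd:itd + n] += r_gain * sig
        else:
            left[itd:itd + n] += l_gain * sig
            right[:n] += r_gain * sig
    return left, right
```

A real pipeline would convolve each virtual speaker with a measured HRTF pair, but even this toy model conveys the basic move: spatial scene in, two ear signals out.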

IEM Cube

Just so you know what you’re listening to, the piece is:

In 5 sections, which alternate and combine use of the ICO and hemisphere: 1. ICO > hemisphere; 2. ICO; 3. hemisphere; 4. ICO; 5. hemisphere, ICO/hemisphere > hemisphere. A wide range of tools was used in composing the work, but the most significant were certainly Matthias Kronlachner’s ambix and mcfx plug-in suites, which made the task of mixing and spatialising for both the ICO and the hemisphere wonderfully straightforward.

The first section of a larger work-in-progress based on the transformation of speech into sounding objects with carnal, cultural and environmental resonances. The texts are metaphors, in multiple languages, coupling the human body and the natural environment, aiming to dissolve “the barrier between ‘over here’ and ‘over there,’… the illusory boundary between ‘inside and outside’” (Timothy Morton). There’s more to read about my compositional intentions and materials, and my creative process, in earlier posts.

Here’s the programme note (nice and concise):

Let x = (2014-)

Kaki Langit – Foot of the Sky

 “Flesh = Earth, Bone = Stone, Blood = Water, Eyes = Sun, Mind = Moon, Brain = Cloud, Head = Heaven, Breath = Wind” (Adams & Mallory, Encyclopedia of Indo-European Culture).

Another way of saying this kind of thing comes from Levi Bryant:

[E]cology must be rescued from green ecology, or that perspective that approaches it as a restricted domain of investigation, pertaining only to rain forests and coral reefs. Ecology is a name of being tout court. It signifies not nature, but relation. To think ecologically is to think beings in relation; regardless of whether that being be the puffer fish, economy, or a literary text. Everything is ecological. Above all, we must think culture and society as ecologies embedded in a broader ecology.


Broken Magic: the Liveness of Loudspeakers (IEM#8)

Further to my experiences working with IEM’s hemisphere and IKO systems in 2014, here’s a draft chapter on the liveness of loudspeakers, and music written for loudspeakers, which may (fingers crossed) appear in print sometime in the coming year.

This reflection attends to the role of the loudspeaker in the creation and experience of liveness in electronic music in the context of immersive loudspeaker environments, such as the Hemisphere at the Institut für Elektronische Musik und Akustik (Graz, Austria), BEAST (Birmingham Electroacoustic Sound Theatre, UK) and 4DSound. These sound systems are allied through their potential to create vibrant aesthetic experiences in the absence of live performers and any significant visual element. Such acousmatic contexts, while not live in a conventional sense, use sonic immersion, dynamic spatial articulation of sound, and the experience of sound as invisible matter, as means to create a unique form of liveness. As a composer of fixed-media electronic music who works within the acousmatic domain, I find this enthralling and somatically powerful, yet highly fragile, as its auditory objects (both real and virtual) are contradicted by the visible physicality of the objects that give rise to them – loudspeakers.

To begin with, it is worth pointing out that loudspeaker listening is ubiquitous. More music is experienced via loudspeakers (including headphones) and acousmatically (without a visual element) than in an unmediated form. Even in live contexts, irrespective of whether music is realised physically-acoustically (traditional performance) or quasi-physically (by operator-musicians interfacing with electronics), most music is mediated by loudspeakers, from classical Indian music through to stadium rock. Even performances of acoustic music in the Western classical tradition are often mediated via loudspeakers, using sound reinforcement systems designed to compensate for acoustic deficiencies in the room or deployed simply to increase the acoustic presence and impact of the music.

Perhaps due to its ubiquity, this technologisation of music is rarely noted. In the case of the sound reinforcement of music in the Western classical tradition, this is unsurprising as such sound systems are designed to be visually and acoustically transparent. For other acoustic musics (jazz or singer-songwriter forms for example) transparency is also desired, but this often rubs against the requirement of making intimate low-amplitude music audible in a larger space. In such cases, as the music is usually reinforced using stereo PA systems with speakers either side of the stage, a spatial division between visual source (musicians, the actual source of sound) and sonic source (loudspeakers) is introduced. This phenomenon is a kind of schizophonia.[1] For an alert listener, this split will be noticeable, but equally the phenomenon of audiovisual magnetisation tends to result in the perceptual gravity of the visual source drawing in sound such that listeners hear it as emanating from the visual source.[2] This phenomenon reaches an extreme in high amplitude performances, such as in clubs or large stadiums, where loudness is often so extreme as to create a pervasive and directionless sonic field. Here the magnetisation effect still applies, often supported by large-scale video projections that create a virtual visual source which may well not be located near the actual visual source. Similarly, in cinemas equipped with surround sound systems, the screen is the experiential locus and any sound which strays from the front and centre will be either magnetised to the screen or, in the case of off-screen sound, explained by on-screen events (as is often encountered in genres involving intense action).

The naturalisation of schizophonic listening in mediated live performance can be contrasted with the aural environments created by composers of fixed-media electronic music using the kind of immersive loudspeaker systems mentioned earlier. In such systems, regardless of the technical approach taken (arrayed stereo, surround, ambisonic, wavefield synthesis) there is a general concern to create a cohesive and credible sonic field or image that is heard as not projecting from the real source of the sound – loudspeakers. Here the trompe l’oreille (the aural equivalent of the trompe l’oeil) is paramount, emerging not from a disavowal of loudspeakers (despite their obvious presence) but from the active effort of the composer-engineer to render them sonically transparent.[3] Such immersive sound systems, in which sound is encountered as a distributed and multiplicitous omnispherical field, are akin to real-world listening – as analysed by Don Ihde[4] – in which sound will be heard from many different spatial locations simultaneously. This affords immersive loudspeaker systems an environmental quality not encountered in live-mediated performance (in which spatial magnetisation is in effect). Sound is always immersive (as Stefan Helmreich has argued), to the extent that Tim Ingold describes sound – like light – as a medium in which we exist (we are “ensounded”).[5] Immersive systems intensify the ensoundment of the listener, deploying sonic space as a core parameter rather than regarding it as a secondary field emanating from the sonic source, much as the interior of a cinema is inevitably lit by reflected light from the screen. Affectively, this intensification can be unnerving and/or exhilarating, as listeners encounter sound as sound, in forms of ambiguous provenance, heard in the absence of visual cues.
This redoubling of sonic experience, noted by the blind and exploited by artists working in the acousmatic domain (notably Francisco Lopez, who blindfolds his listeners), is also a feature of environmental audition, in which the listener must actively engage in making sense of audial experience that is neither an a posteriori residue of visual phenomena nor determined by the conventions of musical cultures and systems.[6]

It is interesting to observe, then, that in mediated live performance audiovisual schizophonia is barely noticed, while in the presentation of fixed-media electronic music – even when this offers a highly naturalistic and unified sound-image – the non-liveness introduced by its non-visuality and the temporal splitting of sound and source draws attention. It appears that the fixed-media acousmatic object (for all its virtual realism), in contrast to the live performance event (no matter how simulated it may be), represents a mode of aesthetic production and reception which falls outside normative understandings of what music is.[7] Indeed, a recurrent theme in electronic music circles is how to overcome the experiential difficulties presented to audiences by loudspeakers, in which the provenance of sound is opaque even as its actual technological source is plainly visible.

That loudspeakers, and the media-specific music made for them, place many listeners in an interpretative quandary seems odd. After all, the loudspeaker long ago reshaped our expectation of how music should sound live, just as the recording has warped our expectations of live performance.[8] The sonic presence, grain, balance, and amplitude of music are now inseparable from its mediation via loudspeakers, such that listeners will very likely be disappointed by the sonic qualities of purely acoustic or insufficiently reinforced music. No one questions the magnification of sonic scale that takes place, for example, when an electric guitarist gently picks a string, exciting an acoustic response that is barely audible at 10 metres but unleashing an electroacoustic object of enormous intensity (think here of Pink Floyd’s David Gilmour, or the even more restrained Fennesz). Loudspeakers render massively exponential the relationship between input gesture and sonic output, despite the rockist efforts of many performers to take physical ownership of this input/output disparity.

A key observation to make here is that the performer-loudspeaker assemblage is an entirely naturalised one, remaining so even when the performer is not a physical agent but a virtual one, implied in the music and understood as present by the listener. The loudspeaker is granted liveness by the actual or virtual presence of the performer, a presence which also renders the loudspeaker invisible. The performer, on the other hand, is always perceived as live, even when heavily mediated, as is the case in much simulated music (music in which performance is fabricated). When no performer or performance can be readily heard in mediated music, there is no longer liveness. When the human body is not involved, or perceived as involved, in the active production or reception of mediated music, the presence of technology is foregrounded. Thus this assemblage is a tool that breaks when a central component – human agency – is not perceptibly present. Remove the performer and the assemblage is denaturalised, the music broken, the loudspeakers starkly apparent as inorganic technological things. Remove the loudspeakers and you are left with a rather less impressive performance which is nevertheless musical. This also means the assemblage is asymmetrical, formed of unequally weighted components, for on its own the loudspeaker has no liveness, despite its role in generating detailed and/or massive sonic presence, such as is exploited for the sheer somatic impact of Jamaican and club sound systems, or in the 4DSound system’s fourth (low frequency) dimension – underfloor subwoofers – which corporeally enliven the system’s audience.

For the listener, when the ontological shift from performative presence to absence is noticed, there is often an accompanying epistemological shift: music moves towards noise or soundscape (environmental sound) – non-music in any case. Put coarsely, this means that for many listeners there is no music when people are not clearly involved in its sounding form. This is an understandably anthropocentric view. After all, until very recently music has been an exclusively human art, involving technologies that are more like traditional tools, which do nothing without constant human input. Listeners versed in music that is rooted in instrumental or vocal performance (and their historically determined materials and organisational systems) tend to find technologically grounded music lacks “soul” or “spirit” or sounds “like it was made by a machine” (sometimes it is). This is the Memorex trope (“Is it live or is it Memorex?”, i.e. technology), given contemporary expression in the high value placed on detailed simulations of human performance (in sample library-based film soundtracks, for example), or in this statement from electroacoustic music theorist Denis Smalley: “music which does not take some account of the cultural imbedding of [performative] gesture will appear to most listeners a very cold, difficult, even sterile music.”[9]

This points to the ontological ambiguity of loudspeaker music (“What are you?” as Batman was asked in 1989), produced by a different form of schizophonia than that found in live-mediated music, where live visual source and audial sound source are spatially split but temporally (more or less) synchronous. Rather it is one in which there is sonic-spatial unity – the trompe l’oreille in which the listener is ensounded – with a temporal split between sound and source. This is unavoidable in any fixed-media music, for what is heard is not happening live but is a reproduction of earlier events or a fabricated event which may have no correlate in reality.[10] This temporal split introduces the space in which interpretation must occur, a necessity intensified by the fact that there’s nothing to see (except loudspeakers). Audiovisual experience involves not only the spatial magnetisation of sound to image, but also the warping of sound into visually determined forms (ontologically, epistemologically and affectively). By contrast, the monomodality of loudspeaker music, the hermeneutic gap it exists in, and the form’s intensification through immersive sound environments, create significant uncertainty for the listener, radically focusing them on sonic experience and its multivalence.

Multivalence is of course a feature of any (musical) experience. As Jean-Jacques Nattiez has it, “[What] horizons of experience might the musical work invoke? […] these horizons are immense, numerous and heterogeneous”. But these horizons, when not visually-anthropomorphically determined (as most music is), are magnified still further. Loudspeaker music shifts the centre of gravity away from the performer and towards the listener, reconstituting liveness as listener-determined. By way of example, consider the following: the ontological dimension of music, when detached from real or virtual human sources, becomes unsettled and labile, and can only be stabilised through listener interpretation. Alva Noë asks “What would disembodied music even be?”, concluding that there is no such thing as music without bodies to make it, even if we can’t see them. Loudspeaker music complicates this through its profusion of unknown and unfamiliar bodies. There is also the somatic or corporeal dimension of sound and its physical-affective stimulation of the listener’s body, often coupling sound with the resonant spaces of the body to produce affects which can be pleasurable (bass music), painful (sonic weapons), or stimulating (Maryanne Amacher’s works involving the physiological response of the ear itself).[11] Similarly, there are the empathetic-affective responses of listeners to sonic corporeality: the involuntary response at hearing the noises of another person’s body (as analysed by Stacey Sewell)[12], or the sounds of physical bodies and processes – physis – which embed and implicate the listener’s body in the mesh of an acoustic hyperobject that is no longer tied to the performing bodies of human beings.[13]

The liveness of loudspeaker music then, particularly in immersive sonic environments, emerges in the interaction of sound, space and the somatic, affective and interpretative activity of the listener. This can only happen in the absence of performer and performance, and in the presence of the loudspeaker. Such liveness is both singular and radical, particularly considered within a contemporary cultural context dominated by multimedia, whether spectacular or mundane. Yet the loudspeaker is always a broken tool, its visual-physical presence undermining the very audial-immaterial – but corporeal – experiences it creates, even as this deficient object propagates qualitative abundance, ontological ambiguity and somatic immediacy.

[1] Introduced as a pejorative term by R. Murray Schafer to describe the technological splitting of sound from source in recording. The term has since been used in a more positive sense by ethnomusicologist Stephen Feld.

[2] Chion, M. (1994). Audio-vision: Sound on screen. New York: Columbia University Press.

[3] Batchelor, P. (2007). Really Hearing the Thing: An Investigation of the Creative Possibilities of Trompe L’Oreille and the Fabrication of Aural Landscapes. Electroacoustic Music Studies conference proceedings, 2007.

[4] Ihde, D (2007). Listening and voice: A phenomenology of sound. Buffalo: SUNY Press.

[5] Helmreich, S (2007). An anthropologist underwater: Immersive soundscapes, submarine cyborgs, and transductive ethnography. American Ethnologist, Vol. 34, No. 4, pp. 621–641. Ingold, T (2007). Against Soundscape. In Autumn leaves: Sound and the environment in artistic practice, Carlyle, A. (ed). Paris, France: Association Double-Entendre in association with CRISAP.

[6] For a discussion of auditory perception in the blind, see Blesser, B., & Salter, L. (2007). Spaces speak, are you listening?: Experiencing aural architecture. Cambridge, Mass: MIT Press. Environmental audition is discussed in Fisher, J. (1998). What the Hills Are Alive with: In Defense of the Sounds of Nature. The Journal of Aesthetics and Art Criticism, Vol. 56, No. 2: 167-179.

[7] Object is used here to denote something concrete, fixed, in contrast to the use of event to denote something indeterminate, in flux. An object can of course be used as a blanket term for anything that is perceived, as in philosopher Graham Harman’s phenomenological use of the term.

[8] Katz, M. (2004). Capturing sound: How technology has changed music. Berkeley: University of California Press.

[9] Smalley, Denis (1997). “Spectromorphology: explaining sound-shapes”. Organised Sound 2(2): 107–26.

[10] See Dellaira, M. (1995). Some Recorded Thoughts on Recorded Objects. Perspectives of New Music, vol. 33, 1 & 2: 192-207.

[11] Goodman, S. (2010). Sonic warfare: Sound, affect, and the ecology of fear. Cambridge, Mass: MIT Press; Amacher, M. (1999). Sound characters (making the third ear). New York: Tzadik.

[12] Sewell, S. (2010). Listening Inside Out: Notes on an embodied analysis. Performance Research: A Journal of the Performing Arts, vol. 15, 3: 60-65.

[13] Morton, T. (2013). Hyperobjects: Philosophy and ecology after the end of the world. Minneapolis: University of Minnesota Press.

Situated and non-situated spatial composition (IEM #7)

What do I think I mean by space in spatial composition? In this post I want to outline a distinction between situated and non-situated spatial composition, at least as it applies to my use of the IEM icosahedral loudspeaker (ICO) and 24-channel hemisphere array. This distinction extends upon an earlier post arguing for the ICO as an instrument for electronic chamber music.

In multichannel composition, including ambisonic works (and the very few wavefield synthesis compositions that exist), space is generally understood as the acoustic space, as a virtual sound field (VSF) and its shaping over time, created within the listening space, whether this be via headphones or a loudspeaker array of some sort in a room. The VSF is a composed space, which may consist of multiple other kinds of spaces (outlined in considerable detail by Denis Smalley [PDF]), either real or virtual in provenance, but which ultimately are all virtual spaces, regardless of this provenance. A microphone recording of a real-world scene (a field recording) is a real space rendered virtual through the decontextualisation of recording and recontextualisation within the VSF of the musical work. In some cases the spatial resolution of the recording and its reproduction (recontextualisation) may be very high indeed, such that it might pass a blind listening test (an odd idea, but it has a history). In other instances the recording might be of a lower resolution, such that the listener can identify the scene captured in (not by) the recording but would acknowledge that a spatial transformation has been enacted (stereo recordings “flatten” real acoustic space even as they convey a strong sense of space). Where the VSF is created through entirely synthetic means – say, additive synthesis with multiple delay lines creating a sense of the sound coming into being within a space (implying reflections and reverb) – the VSF is not real but has qualities which are heard as spatial through reference to real-world acoustic spaces. Between the fully virtual space heard as having real-world (i.e. physical) qualities and the real-world space that is rendered virtual through recording, there are hybrid spaces created by digital audio processing, which may transform the spatiality of recordings or impart new spatial qualities upon any sonic materials (the VSF created by impulse response reverberation is paradigmatic here). Such digital events and interactions are of course virtual, even if they convey qualities that impart a sense of the real world. [Another way to say all this, but quickly: the spatial attributes of sound within the VSF are a product of reference to the physical-acoustic spatiality of the real world, regardless of the actual provenance of the VSF.]
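To make the impulse-response case concrete: convolving a dry signal with a room’s impulse response imprints that room’s spatial signature on the signal, producing exactly the kind of hybrid VSF described above. A minimal Python sketch, purely illustrative – a synthetic, exponentially decaying noise burst stands in for a measured impulse response:

```python
import numpy as np

sr = 48000
dry = np.zeros(sr)
dry[0] = 1.0  # dry source: a unit click (an impulse-like transient)

# Stand-in impulse response: exponentially decaying noise, a crude
# model of a room's reverberant tail (roughly -60 dB after 1 second).
rng = np.random.default_rng(0)
t = np.arange(sr) / sr
ir = rng.standard_normal(sr) * np.exp(-6.9 * t)

# FFT-based convolution: the dry sound now "occupies" the virtual room.
n = len(dry) + len(ir) - 1
wet = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(ir, n), n)
```

Swap the synthetic tail for an impulse response recorded in an actual room and the same three lines of convolution transplant any sound into that space – a real space rendered virtual, in the terms used above.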

Spatial composition is engaged with the VSF alone unless it engages directly with the sonic space that contains the VSF – the room – and with the interactions between the VSF and the space of its reproduction or audition (the real physical space in which sound is reproduced and heard) in a way that transforms the VSF itself. The room itself usually is not taken into consideration, or even considered, unless it in some way interferes with the VSF. (One could think of the room in a Heideggerian sense: it is a tool for presenting music and isn’t noticed until it itself speaks, which in this case means speaks out of turn, interferes.) Typically, such interference is something to be counteracted so as to ensure that the VSF is compromised to the least possible extent. A composer or sound artist working within this understanding is engaging in non-situated spatial composition. The musical work here is self-contained and can be transposed from one space to another without transformation. However, if the real space is brought into deliberate interaction with the VSF, or at an extreme is entirely integrated with it, then we are talking about situated composition. The situated composition, like site-specific art, cannot be moved from the site for which it has been realised without transformation.

There’s a great deal to be said about this topic, and many examples that could be discussed, but here I’m focusing on the way that the ICO requires an approach that is situated (unless one uses it simply as another – albeit prolific – loudspeaker, which is entirely to ignore its full potential, something I discussed in an earlier post). The ICO affords and encourages a situated approach because one of its most interesting uses is as an instrument to “orchestrate reflecting surfaces” (Sharma and Zotter [PDF]), and to do this of course requires that the VSF itself is brought into close relationship with the room acoustic. This means that the way in which one spatialises musical material with the ICO will have to be created anew, or at least adapted, if this material is shifted from one room to another, especially if the material itself has been created to allow space to be given emphasis in the composition (which seems to require a reduction in the complexity of the music itself). Furthermore, given that each room has its own acoustic qualities, not all materials will afford equal spatiality in all spaces (an upcoming post will explore this topic). At an extreme, this means that a situated work might truly be site-specific – to move the piece is to lose the piece (so to say). The IEM hemisphere, on the other hand, is simply yet another multichannel array. A carefully considered and constructed array, but more or less homogeneous with other arrays which have been designed to allow the realisation of non-situated VSFs (just like cinemas, which can be better or worse, but in the end are designed for cinematic experience, not the experience of a cinema).
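The “orchestration of reflecting surfaces” rests on beamforming: weighting the ICO’s drivers so that energy is steered at a chosen wall or corner rather than at the audience. IEM’s actual implementation uses higher-order spherical-harmonic beamforming; the following Python sketch is only a first-order-style toy, with random unit vectors standing in for the real face-normal layout, to illustrate the principle of direction-dependent driver gains.

```python
import numpy as np

def steer_gains(driver_dirs, target_dir, order=1):
    """Toy beam steering: each driver's gain is a power of the (shifted)
    cosine of the angle between its outward normal and the target
    direction - 1 toward the target, 0 directly away. Higher `order`
    narrows the beam."""
    target = target_dir / np.linalg.norm(target_dir)
    cosines = driver_dirs @ target            # driver_dirs are unit vectors
    gains = ((1.0 + cosines) / 2.0) ** order
    return gains / gains.sum()                # normalise overall level

# Hypothetical layout: 20 random unit vectors stand in for the
# outward normals of the ICO's 20 drivers.
rng = np.random.default_rng(1)
dirs = rng.standard_normal((20, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Steer a fairly narrow beam toward one direction (here, +x).
gains = steer_gains(dirs, np.array([1.0, 0.0, 0.0]), order=3)
```

Feeding each driver its gain-scaled copy of the source signal concentrates acoustic energy toward the target, so that the wall behind the target direction, rather than the loudspeaker itself, is heard as the source.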

Working with the ICO and the hemisphere together for Let x = required that I minimise the situated-composition potential of the ICO, constraining such usage to spatial effects (there must be a better word to use than this…) which either strongly contrasted with the VSF of the hemisphere, or which afforded smooth blending between the two. In the former case I tended to use the ICO solo, as a complex acoustic surface, with layered textural materials skittering and moving across its surface, emphasising direct sound (the half of the ICO oriented towards the audience) more than indirect. In the latter I tended to use the ICO to create reflections in the front-left corner of the room (directly behind the ICO), which allowed blended transitions between ICO “solos” and sections in which the hemisphere predominated (this worked because it was difficult to distinguish between reflected sound from the ICO and direct sound from the hemisphere in this part of the room). Such usage was not necessary and certainly there are many other possible approaches to combining the ICO and the hemisphere, but this approach was taken partly as a response to the site-specific limitations of using the ICO in IEM’s smallish (for a concert hall) Cube space. The size of this space means that the audience sits quite close to the ICO, reducing the possibilities for orchestrating reflections (for most of the audience there’s too little difference between direct and reflected sound for this to be compositionally useful, excepting the cases just described). In other words: in using the ICO I took an approach that was situated, perhaps even site-specific, but which didn’t fully explore the ICO’s potential as an instrument. So when Let x = is presented in another space it will require a fair amount of reworking (I’m ignoring the conundrum of what to do with music written for an instrument – the ICO – that only exists at IEM).
In using the hemisphere I took a standard compositional approach, the creation of a VSF which can be realised using any similar multichannel system, and this approach was only minimally adapted to afford a certain set of interactions with the ICO.

What would be interesting as a next step is to move this hybrid (non-)situated piece into a larger space, to hear the ICO in interaction with such a space and listen to the ways it demands that my materials and spatialisation be transformed in order to remain effective. If this isn’t possible, then I’ll know that Let x = is a fully situated work.