1 - Consideration of Space in the presentation of EDM

The fundamental research question underpinning this portfolio is: how can real-time spatialisation be used in Electronic Dance Music (EDM), specifically in the context of live performance? I believe the 3D presentation of sound can enhance the visceral qualities of music through a more immersive sound experience. Although we are used to cinematic surround (Dolby 5.1/7.1), surround sound has been little used in EDM. There is a wide array of techniques and tools available for spatial composition, from IRCAM’s SPAT (Spatialisateur), OctoGRIS and Ambisonics to the 4DSOUND system. I will evaluate these technologies and examine how such techniques are incorporated into my compositional methodology.

An historical overview of composers who have used spatial techniques, and of the often bespoke places or systems they have created work for, will allow me to situate my compositions within this artistic practice. The different implementations of spatialisation methods that have been used (as well as the principal locations dedicated to this form of activity) will be interrogated. In addition, I will focus on how spatial techniques are integral to my creative practice. I will demonstrate that, with better access to powerful yet simple and efficient technologies, spatialisation in EDM can enhance the listening experience.

1.1 - Why use large multichannel spatialisation techniques?

Ludger Brümmer writes that “Spatiality in music is more than a parameter for the realisation of aesthetic concepts. Spatiality aids in the presentation, the perception, and the comprehension of music” (Brümmer, 2016). Regarding the research he conducted at the Zentrum für Kunst und Medientechnologie (ZKM), Brümmer goes on to state that:

Human hearing is capable of simultaneously perceiving several independently moving objects or detecting groups of a large number of static sound sources and following changes within them. Spatial positioning is thus well-suited for compositional use (Brümmer, 2016, p. 2).

Throughout my research, the spatial location of sounds is not used to convey a meaningful structure. Despite Brümmer’s contention above, from my work in spatial sound I contend that any more than four layers of sonic movement cannot be accurately perceived and recalled. This means that the integration of space as a structural parameter needs to be carefully controlled. What interests me about combining multiple spatial trajectories is how they coalesce in a three-dimensional space to provide a sense of immersivity and viscerality.

The sense of immersivity through the use of spatialisation arises from “the fact that human hearing is capable of perceiving more information when it is distributed in space than when it is only slightly spatially dispersed” (Brümmer, 2016, p. 2). When sounds are not spatially dispersed, they are capable of concealing or masking one another (Bregman, 1990, p. 320).

1.2 - Why spatial music?

In my work, spatialisation is an important dimension of the music; it provides a dynamism and physicality that enhances the auditory experience. Regarding the attribution of value to space in my music, spatialisation is integrated with a developmental sense of how the track unfolds; it is a meaningful part of how I structure my pieces, and space is used as a structural element. Spatialisation prompts critical compositional decisions: do I keep one or more sonic elements moving in space or keep them static, should I reduce the immersive environment by thinning the texture down, and so on. It also allows me to build a piece towards a certain type of energy. One important role of spatialisation for me is in the articulation of form. A trajectory is not always of structural importance; what is significant is the amount of spatialisation, where the density of spatial movement creates a more or less immersive sense. Depending on the context, a reduction of spatialisation is necessary in a moment of repose or at the beginning of a build-up. Adding layers of movement, so that the spatialisation becomes more complex, enables me to articulate form through the emergence and disappearance of sonic elements.

The significance of spatialisation in my work varies; it is present all of the time, and always perceptible, but in varying degrees of importance. It is not an add-on effect, which is how Steve Lawler used it when I attended his performance with the Dolby Atmos system at Ministry of Sound in London (August 19th, 2019) (https://www.decodedmagazine.com/steve-lawler-atmos/). My spatialisation can highlight a specific sonic element, or it can emerge and disappear during an intense musical section because, at certain moments, rhythm is the most important musical parameter. In my performance practice, the spatial behaviours are pre-set (depending on its frequency content, each loop has a fixed role attributed in space), and I fade them in and out. During a performance, I do not improvise with spatial movement and I do not create holes within this spatialisation; I always try to create a holistic immersive environment.

In my experimentations with space, I relate to Mexican bandleader Juan García Esquivel and how he must have felt when he exploited the potential of the stereo image on his recording Exploring New Sounds in Hi-Fi/Stereo (RCA Victor, May 1959) in order to create new listening experiences. I have developed diverse types of spatialisation which I consider effective in my music, and I have developed, and continue to develop, a working method for spatialisation in EDM. It is dance music on its own terms, shaped by what I can do, and it establishes a new outlook for contemporary EDM producers. What I am doing has significance, interest and value in itself. Thus, I suggest that we are at the point of establishing a new era of spatialisation for EDM.

There is much valuable research about “spatialisation and meaning”, and how it creates a narrative of space. My research is akin to Ruth Dockwray and Allan Moore’s (2000) article about sonic placement in the “Sound-box”, in that I discuss the localisation and movement of sounds in my “Dome-box” (within the SPIRAL Studio).

From the outset of electronic music, the earliest practitioners investigated the spatial presentation of this new genre, particularly in a performance context. “Spatialisation has been an important element in classical electronic music, showing up in work by Karlheinz Stockhausen, Luciano Berio, Luigi Nono, John Chowning, and many others. A mainstream practice emerged in which spatialisation consisted of the simulation of static and/or moving sound sources” (Puckette, 2017, p. 130). Of the composers who have used spatialisation in commercial projects, Amon Tobin is one of the most prominent. His album Foley Room was performed at the GRM over the acousmonium in multichannel format. He has also scored for video games, and he commented on his surround mix for the soundtrack of Tom Clancy’s Splinter Cell: Chaos Theory (2005):

The reason it was easier than a stereo mix is because you have a lot more physical room to spread out all the different sounds and frequencies. So, the issue of sounds clashing and frequencies absorbing all the frequency range in the speakers is a lot smaller when you've got that much more room to play with (Tobin, 2005)[2].

Several commercial electronic artists have also considered space in their CD/DVD releases. In 2005, the Birmingham-based audio-visual collective Modulate produced a multichannel project which allowed 5.1 playback at home (Modulate 5.1 DVD)[3]. The electronic music duo Autechre, consisting of Rob Brown and Sean Booth, explained their use of space in the stereo field when working on the album Quaristice (2008):

If we’re using effects that are designed to generate reverbs or echoes the listener is going to perceive certain sized spaces, so you can sort of dynamically evolve these shapes and sounds to actually evoke internal spaces or scales of things’. […] You can play with it way beyond music and notes and scales. (Brown quoted in Ramsey, 2013, p.26)[4]

When composing music for a multichannel system, adding movement and localisation opens new possibilities for musical expression and listening experience. Such systems provide a multitude of listening positions, which offer new ways of listening to music. These modes of listening cohere with music theorist Ola Stockfelt’s invitation to develop and cultivate, in our modern life, a variety of modes of listening:

To listen adequately hence does not mean any particular, better, or “more musical”, “more intellectual”, or “culturally superior” way of listening. It means that one masters and develops the ability to listen for what is relevant to the genre in the music, for what is adequate to understanding according to the specific genre’s comprehensible context […] we must develop our competence reflexively to control the use of, and the shifts between, different modes of listening to different types of sounds events (Stockfelt, 1989, p. 91).

In his PhD thesis entitled The Composition and Performance of Spatial Music, Enda Bates observed that, “the study of the aesthetics of spatial music and the musical use of space as a musical parameter therefore appears to be a good way to indirectly approach electroacoustic music composition and the performance of electronic music in general” (Bates, 2009, p. 5). Furthermore, he adds “spatial music is in many respects a microcosm of electroacoustic music, which can refer to many of the different styles within this aesthetic but is not tied to any one in particular” (Bates, 2009, p. 5).

Concerning the validity of spatialisation, I have found analogous opinions to mine in Ben Ramsey’s research, where he stated: “this idea of using space as a compositional narrative is a complete departure from more commercial dance music composition practice, and very much enters the realm of acousmatic music, where space and spatialisation is often considered part of the musical discourse for a piece” (Ramsey, 2013, p. 17). This also relates to Denis Smalley’s approach to acousmatic composition and his concept of spatiomorphology:

And I invented the term ‘spatiomorphology’ to highlight, conceptually, the special concentration on spatial properties afforded by acousmatic music, stating that space, formed through spectromorphological activity, becomes a new type of source bonding (Smalley, 2007, p. 53).

Stefan Robbers of Eevo Lute Muzique has engineered a performative sound system called the Multi Angle Sound Engine (MASE). It offers the DJ/performer a multichannel diffusion system which allows them to play with space beyond the confines of a stereo system:

The MASE interface offers DJs or producers eight independent audio inputs and a library of sound movements. The user has ample options for assigning a trajectory to an incoming audio signal and to start, stop or localise this. Specially designed software allows users to programme and store their own motion trajectories. The system is space-independent, users can input the dimensions and shape of a room and the number of speakers which are to be controlled (Eevo Lute Muzique quoted in Ramsey, 2013, online)[5].

The ideas behind the MASE system, and the compositional territory it offers, open up a way to combine commercial dance music with “the more experimental and aurally challenging compositional structures and sound sources that are found in acousmatic music” (Ramsey, 2013).

Another artist who has embraced new technologies in order to compose music in 3D is Joel Zimmerman (aka Deadmau5). In 2017, he converted his production studio to be fully compatible and compliant with Dolby Atmos systems. “Deadmau5 even says that he’ll produce all his new songs first in Atmos to give them the most three-dimensional sound, and then “submix” down to stereo after for more common systems and listening” (Meadow, 2018).

1.3 - Temples for sound spatialisation

A variety of 3D sound projects are currently finding their way into the world of nightclubs and EDM culture more widely, demonstrating a steady and growing interest in spaces with sound spatialisation. These environments, both academic and commercial, provide a place to experiment with spatial audio and this could be seen as part of a broader shift towards more public venues and experimental performance spaces valuing immersive, 3D audio experiences.

1.3.1 - 4DSOUND System

Figure 4 - 4DSOUND system (Image by Georg Schroll via Compfight), 2015.


The software for the 4DSOUND system (see Figure 4 above) is coded in Max4Live (a joint project between Cycling74 (Max) and Ableton (Live)). Its founder, Paul Oomen, explains that “the hardware comprises an array of 57 omni-directional speakers – 16 pillars holding three each, as well as nine sub woofers beneath the floor. By carefully controlling the amount of each sound going to each speaker, it is possible to localise it, change its size, and move it in all directions” (Oomen, 2016).
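
The precise panning algorithm of the 4DSOUND engine is not published; the short Python sketch below simply illustrates the principle Oomen describes, weighting each speaker’s level by its proximity to a virtual source position. The pillar layout, speaker heights and rolloff value are hypothetical values chosen for the example.

```python
import numpy as np

def distance_based_gains(source_pos, speaker_positions, rolloff=1.0):
    """Weight each speaker's level by its proximity to a virtual source position.
    A generic distance-based panning sketch, not the 4DSOUND engine itself."""
    src = np.asarray(source_pos, dtype=float)
    spk = np.asarray(speaker_positions, dtype=float)
    distances = np.linalg.norm(spk - src, axis=1)
    gains = 1.0 / (1.0 + rolloff * distances)   # closer speakers receive more level
    return gains / np.linalg.norm(gains)        # normalise to keep overall power constant

# Hypothetical layout: 16 pillars on a 4 x 4 grid, three speakers per pillar (48 speakers)
pillars = [(x * 2.0, y * 2.0) for x in range(4) for y in range(4)]
speakers = [(x, y, z) for (x, y) in pillars for z in (0.5, 1.7, 2.9)]
print(distance_based_gains((1.5, 2.0, 1.2), speakers).round(3))
```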

In February 2016, I attended 4DSOUND’s second edition of the Spatial Sound Hacklab at ZKM in Karlsruhe, Germany, as part of ‘Performing Sound, Playing Technology’[6], a festival of contemporary musical instruments and interfaces. Over four intensive days, creators, coders and performers experimented with new performance tools, a variety of instrumental approaches and different conceptual frameworks in order to write sound in space. Oomen believes that “spatial awareness and how we understand space through sound plays an integral role in the development of our cognitive capacities” (Oomen, 2016). As a result, he considers that “there will be new ways to discover how we can express ourselves musically through space, and our understanding of the nature of space itself will evolve” (Oomen, 2016). He also writes that:

Spatiality of sound is among the finest and most subtle levels of information we are able to perceive. Both powerful and vulnerable, we can be completely immersed in it, it can evoke entire new worlds – if we are only able to listen. After eight years of developing the technology and exploring its expressive possibilities, it has become clear that the development of the listener itself, the evolution of our cognitive capacities, is an integral part of the technology.

Spatial sound is a medium that can open the gate to our consciousness, encouraging heightened awareness of environment, a deeper sense of the connection between mind and body, empathic sensitivity and more nuanced social interaction with those around us. It challenges us to listen to the world in a more engaging way, offering us a chance to become more sensitive human beings (Oomen, 2016)[7].

According to its founder, “central to 4DSOUND’s plan for the project is to establish a laboratory for artists, thinkers and scientists to explore ideas about space through sound, and create a platform for cross-fertilization of different fields of knowledge to further the development of the medium” (Oomen, 2016). The lab also allows for a new form of spatial listening, enabling the refinement of conscious listening practice, increased awareness of surroundings and the exploration of a deeper connection to the self and others. Paul Oomen (2016) hopes to “encourage a change in the quality of our everyday experience” through the 4DSOUND system. Oomen writes:

We are committed to engendering a new ecology of listening, improving the sound within and of our environments and expanding our ability to listen. I think this is a movement that will really begin to take shape over the coming years as we evaluate many aspects of modern life, of our shared environments, and our understanding of sound in influencing this (Oomen, 2016)[8].

Figure 5 - 4DSOUND’s second edition of ‘Spatial Sound Hacklab’ at ZKM in Germany, 2016.


My participation in the hacklab allowed me to experience the qualities of the 4DSOUND system and to perceive its potential for diffusing music in space (see Figure 5 above), and I was able to interview many of the participants. Ondřej Mikula (aka Aid Kid), a composer from the Czech Republic, stated that “these systems of spatial audio (4DSOUND and Ministry of Sound’s Dolby Atmos) are the future of electronic music.” Furthermore, he mentioned that:

When you have this much space, you can really achieve a ‘clean’ sound because of the mix. When the frequencies are crushing in your stereo mix, you can only put the sounds on the side [or the middle] or using the ‘side-chain’ effect [in order to create frequency space in your mix]. But when you have this full room [of space] you don’t have to worry [about clashing frequencies], you just put the sounds somewhere else. I have lots of [sound] layers in my music (the way I compose) and it handles all of them (numbers of layers) without cutting the frequencies. This is a big advantage for me [when I compose]. (Mikula, 2016)[9]

On site at the ZKM, I interviewed the French composer, Hervé Birolini, who commented that: 

Spatialisation [in my work] is fundamental. I conceive myself to be a stage director of space, in a close manner of the theatrical term. […] It appears to me more and more like something extremely natural [to include spatialisation when I compose]. It is so natural that for every situation that I have been proposed to participate [and to compose sound], like for stage music, a ‘classic’ electroacoustic music, a work for the radio or else… I will adapt the piece [of music] and its spatialisation to the space that I wish to create (Birolini, 2016)[10].

He added that his experience was unique and offered efficacious results when playing with space on the 4DSOUND system:

In order to create a sense of realism in music, I had the possibility of ‘exercising’ the elevation (sense of height). Although more complex to integrate in a work, it can enhance the surround environment in electroacoustic music. Thus, 4DSOUND is ‘Space’ in all of its dimensions; in front, behind, at the sides, above and below us. I can say that this system is unique, I’ve had the experience to experiment with several [diffusion] systems [around the world] and this one can’t be heard anywhere else. Furthermore, it [the 4DSOUND] operates ‘naturally’ and efficiently (Birolini, 2016)[11].

Some well-known commercial artists have been able to use and perform with the 4DSOUND system, including Max Cooper and Murcof. Max Cooper is a DJ and producer from London whose work sits at the intersection of dance floor experimentation, fine-art sound design, and an examination of the scientific world through visuals. Murcof is the performing and recording name of Mexican electronica artist Fernando Corona. Murcof’s work with the 4DSOUND system has allowed him to expand and develop his method for structuring narrative and composition with spatialisation: “The system really demands to be heard before writing down any ideas for it, and it also pushes you to change your approach to the whole composition process” (Murcof, 2014)[12]. Max Cooper has acknowledged that: “The 4DSOUND system, and a lot of the work I do with my music in terms of spatiality and trying to create immersive spaces and structures within them, has to do with psycho-acoustics and the power of sound to create our perception of the reality we’re in” (Cooper, 2017)[13].

This unique sound experience is technically fascinating and leaves us with a great spatial sound impression. One of the sonic characteristics that I found convincing with this system is the realistic and perceptible sense of height, providing a coherent 3D sound image and listening experience. However, the system requires two trucks to transport all of the equipment, which makes a 4DSOUND show expensive to stage, and each performance is a one-off experience. Therefore, although highly attractive as a format, it is not appropriate to my current research, which aims to utilise commercially available tools in a highly versatile yet portable system that can be performed with Live.

1.3.2 - Dolby Atmos

Figure 6 - London’s Ministry of Sound collaborates with Dolby Laboratories to bring Dolby Atmos sound technology to dance music. Photo credit: unnamed, Found at www.mondodr.com, 2016.


Another system for the spatial presentation of Electronic Dance Music is the newly installed Dolby Atmos technology at London’s Ministry of Sound club (see Figure 6 above). The partnership between Dolby and Ministry of Sound gave rise to an important innovation in the performance of EDM in clubs, allowing music to be spatialised on the vertical as well as on the horizontal level. Matthew Francey, managing editor for Ministry of Sound’s website, wrote: “For the listener, this means that sound can appear anywhere along the left-to-right and front-to-back axes, and also at different heights within the audio field” (Francey, 2016)[14]. What makes the Dolby Atmos system an interesting solution is that it does not require a specific number of speakers for the spatialisation to function. In a classic Dolby Digital 5.1 mix, sounds are assigned to a specific speaker, so if you want a sound to come from behind the listener on the right, you would pan it to the rear right channel. Mark Walton, music journalist, states that:

With Atmos, sounds are "object based", meaning that the sound is given a specific XYZ coordinate [like Ambisonics] within a 3D space, and the system figures out which speaker array to send the sounds through, no matter how many (up to 64) or few (as low as two) there are. Even when the sound is panned, you move it through each individual speaker in the path, creating an immersive experience (Walton, 2016)[15].
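
As an illustration of this object-based principle, the following Python sketch renders a single object’s (x, y, z) metadata onto whatever speaker layout is available. It is a deliberately simplified approximation, not Dolby’s proprietary renderer, and the layouts and weighting curve are assumptions for the example.

```python
import numpy as np

def render_object(position, layout):
    """Render one audio object's (x, y, z) metadata onto an arbitrary speaker layout
    by weighting the speakers nearest to it. Illustrative only: the actual Atmos
    renderer is proprietary and far more sophisticated."""
    pos = np.asarray(position, dtype=float)
    spk = np.asarray(layout, dtype=float)
    d = np.linalg.norm(spk - pos, axis=1)
    w = np.exp(-3.0 * d)                   # emphasise speakers closest to the object
    return w / np.sqrt(np.sum(w ** 2))     # constant-power normalisation

# The same object position renders over very different layouts:
stereo = [(-1.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
ring_24 = [(np.cos(a), np.sin(a), h) for h in (0.0, 0.6, 1.2)
           for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)]
print(render_object((0.3, 0.8, 0.5), stereo).round(3))
print(render_object((0.3, 0.8, 0.5), ring_24).round(3))
```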

Figure 7 - Prior to the Ministry Of Sound pilot, tracks were processed using the Dolby Atmos Panner plug-in, which was used to automate the three-dimensional panning of various musical elements (Robjohns, Sound On Sound, 2017)[16].


My experience of using the Dolby Atmos plugin tools (see Figure 7 above) at Dolby’s London studio in August 2017 showed that I could easily adapt the knowledge I had gained from the tools I was already using within Ableton. Dolby’s tools tap into a new market of potential users for sound spatialisation, and with such commercial tools becoming available, it is clear that spatialisation is a growing compositional element for dance music producers. From discussions I had while at the London studio, Dolby is investing in this technology because it envisages an impact on the world of nightclubbing and wants to develop this market in many of the big metropolitan cities around the world. The first club Atmos system is in London and the second is installed in Chicago (Sound-Bar), with plans for more (Halcyon in San Francisco, 2018). I was fortunate to try the Dolby Atmos Panner plug-in at the London studio, although for the purposes of this research I could not use this tool extensively, since it is still a private, pre-commercial tool in beta development.

Spatial thinking about sound plays an important role for Robert Henke (aka Monolake), and his performance ‘Monolake Live 2016’ is a vivid example.[17] It is presented as a multichannel surround sound experience, which Henke has been experimenting with for many years, and includes versions for wave field synthesis[18], Ambisonics and other state-of-the-art audio formats.[19] Richie Hawtin (aka Plastikman) has stated that “experimenting in technologies which also work within that field of surround sound, is not only inspiring and challenging but also a good brush up on skills you may need later in life” (Hawtin, 2005).[20] He has produced a DVD in 5.1 surround sound (DE9: Transitions, 2005[21]) for a home listening experience. Other music artists such as Björk (Vespertine, 2001), Beck (Sea Change, 2002) and Peter Gabriel (Up, 2002) have all experimented with surround sound releases, but spatial audio considerations have not become a key focus of their output. Beyond these artists, the use of space, in or out of the studio, has not been explored significantly by commercial producers. Whilst this could be due to the major record labels having little commercial imperative to promote surround audio formats (SACD, DVD-A, Blu-ray), more probable is the lack of a single common software/hardware format that allows producers to travel from one venue to the next and set up quickly and efficiently without the need for bespoke hardware.

1.3.3 - SARC - The Sonic Laboratory

Despite the relative lack of commercial interest in spatial music, space has been an important consideration in experimental sound since the late 1940s. Karlheinz Stockhausen was one of the early pioneers of electronic music to be interested in the spatial distribution of sound, in both his electronic and instrumental music. He was interested in space as a parameter of music that could be manipulated just like pitch and rhythm, writing that: “Pitch can become pulse […] take a sound and spin it, it becomes a pitch rather than its sound” (Stockhausen and Maconie, 1989, p. 93). This presents an extreme form of spatialisation, stemming from thinking about rhythm and how parameters can merge into one another. The idea informed Stockhausen works such as Gesang der Jünglinge (1956), Oktophonie (1991) and the Helikopter Quartet (1993). Because of his interest in space, and particularly his performances at the Osaka World Fair in 1970, Stockhausen was invited to open the Sonic Arts Research Centre (SARC) in Belfast on April 22nd 2004. SARC is a world-famous institute for sound spatialisation, centred on its Sonic Laboratory, which boasts a floating floor with rings of speakers both under and above the audience – an auditorium designed after Stockhausen’s ideas from Musik im Raum (1959).

Figure 8 - The Sonic Laboratory at Queen's University in Belfast, 2005.


Whilst the SARC Sonic Laboratory (see Fig. 8) and the 4DSOUND system both offer a unique spatial experience, I do not want to work with a bespoke system. What I aim for is a system in which I can utilise off-the-shelf software and play and, more importantly, perform: a flexible and practical tool to create spatial EDM in a variety of musical spaces.

1.3.4 - Sound Field Synthesis Methods

Figure 9 - The world's only transportable Wave Field Synthesis system, from ‘The Game Of Life’ (gameoflife.nl), was stationed in Amsterdam for the ‘Focused Sound in Sonic Space’ event in 2011.


Ambisonics and Wave Field Synthesis (see Fig. 9) are two ways of rendering 3D audio, both of which aim at physically reconstructing the soundfield. They derive from distinct theoretical models of how sound propagates through space and consequently treat sounds in different ways.

Wave Field Synthesis (WFS) allows the composer to create virtual acoustic environments. In accordance with the Huygens-Fresnel principle, it emulates natural wavefronts by assembling elementary waves synthesised by a very large number of individually driven loudspeakers, a technique developed by A.J. Berkhout in the Netherlands from 1988. In much the same way that complex sounds can be synthesised by additive synthesis using simple sine tones, in Wave Field Synthesis a complex wavefront can be constructed by the superposition of spherical waves. The advantage is a much-enlarged sweet spot. The concept behind WFS is the propagation of sound waves through space and the positioning of the listener within this environment. It offers precise sound localisation but requires a huge number of speakers to prevent spatial aliasing; it is therefore expensive and impractical for my own use.
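
A minimal sketch of this delay-and-attenuate principle is given below in Python, assuming a straight line of equally spaced loudspeakers and a virtual point source behind the array; real WFS driving functions involve additional filtering and windowing, and the array dimensions here are illustrative.

```python
import numpy as np

C = 343.0  # speed of sound in m/s

def wfs_driving(source_xy, speaker_xs, speaker_y=0.0):
    """Toy delay-and-attenuate driving values for a straight line of loudspeakers,
    approximating the wavefront of a virtual point source behind the array."""
    src = np.asarray(source_xy, dtype=float)
    xs = np.asarray(speaker_xs, dtype=float)
    spk = np.stack([xs, np.full(xs.shape, speaker_y)], axis=1)
    r = np.linalg.norm(spk - src, axis=1)   # distance from the virtual source to each speaker
    delays = r / C                          # seconds of delay per speaker
    gains = 1.0 / np.maximum(r, 1e-3)       # spherical spreading attenuation
    return delays, gains

# 64 speakers at 12.5 cm spacing; spatial aliasing sets in roughly above c / (2 * spacing)
spacing = 0.125
xs = np.arange(64) * spacing
delays, gains = wfs_driving(source_xy=(4.0, -2.0), speaker_xs=xs)
print("approximate aliasing frequency:", C / (2 * spacing), "Hz")
```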

Ambisonics was pioneered by Michael Gerzon. It gained advocates throughout the 1970s, such as David Malham at the University of York, but was never a commercial success. Only recently, since its adoption by Google and the games developer Codemasters, has it achieved significant attention. The resurgence of Virtual (Immersive) Reality has seen companies such as Facebook and YouTube adopt spatialisation (3D sound) in their applications in order to provide audio content with a binaural spatial experience.

Ambisonics is a type of 3D spatialisation system that, like WFS and Dolby Atmos, is not speaker dependent and creates virtual spaces within a speaker environment. A full-sphere, fifth-order Ambisonics soundfield requires a minimum of 36 channels to carry the spatial information, and as the number of speakers increases so does the detail of spatial perception. This spatial technique is achieved through manipulating the phase of sound sources rather than through amplitude changes alone, which can result in a blurring of transients – something that is not good for the type of music I make. The lack of transient clarity on kick drum and hi-hat samples caused me to search for an alternative solution despite the portability of the system.
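
The channel count follows directly from the Ambisonic order: a full-sphere signal set requires (order + 1)² channels, hence 36 at fifth order. The Python sketch below shows this relationship together with a minimal first-order encoding of a mono source; it is illustrative only and omits the decoding stage that maps these channels onto a specific speaker array.

```python
import numpy as np

def ambisonic_channels(order):
    """Channel count for a full-sphere (periphonic) Ambisonics signal set:
    (order + 1) squared, e.g. 4 channels at first order, 36 at fifth order."""
    return (order + 1) ** 2

def encode_first_order(signal, azimuth, elevation):
    """Minimal first-order encoding of a mono signal at a given direction
    (ACN channel order W, Y, Z, X with SN3D-style weights)."""
    w = signal * 1.0
    y = signal * np.cos(elevation) * np.sin(azimuth)
    z = signal * np.sin(elevation)
    x = signal * np.cos(elevation) * np.cos(azimuth)
    return np.stack([w, y, z, x])

print([ambisonic_channels(n) for n in range(1, 6)])   # [4, 9, 16, 25, 36]
sig = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
bformat = encode_first_order(sig, azimuth=np.pi / 4, elevation=0.0)
```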

When assessing different software and hardware systems for spatialisation, and tools for performance I had the following questions in mind:

-       Does the size of the speakers typically used in WFS arrays cause issues for bass resolution in EDM production?

-       Does the lack of height (elevated sounds) on the WFS system impact the immersivity that height speakers provide?

-       Is my live performance setup, consisting of the Push 2 and the Novation Launch Control XL controllers, compatible with WFS?

-       In Ambisonics, are the transients within the soundfield too blurred and imprecise for the low-frequency material found in a kick drum?

My solutions to these questions have shaped my research and technical setup.

1.3.5 - SPIRAL Studio

The studio I have concentrated my research activities in is the SPIRAL (Spatialisation and Interactive Research Laboratory) at the University of Huddersfield. (See Figure 10 below)

Figure 10 - SPIRAL Studio at the University of Huddersfield, 2015.


My research has facilitated new ways for me to compose music and has developed my approach to spatialising sounds in the SPIRAL Studio. In order to compose and perform live within an immersive listening environment, I explored many tools but settled on Ableton Live[22], in conjunction with the Push 2 and Novation’s Launch Control XL (see Figure 11 below), using Max4Live spatialisation objects. This integrated performance and compositional setup has enabled me to create EDM with a sense of sound envelopment using a system comprising 24 channels on three octophonic rings of speakers. From a compositional perspective, and drawing on Brümmer’s insights mentioned earlier, the SPIRAL allows me more perceptual freedom to add layers to my music than the clustered, traditional stereo approach to sound.

Figure 11 - Ableton Push 2 and Novation’s Launch Control XL, 2017.


Concerning the earnest attention given to space by acousmatic composers, I concur with Brümmer’s findings in his article New developments for spatial music in the context of the ZKM Klangdom: A review of technologies and recent productions, where he states that:

The musical potential of spatiality is only beginning to unfold. The ability to listen consciously to and make use of space will continue to be developed in the future, larger installations will become more flexible and more readily available overall, and the capabilities of the parameter space will be further explored through research and artistic practice. This will make it easier for composers and event organisers to stimulate and challenge the audience’s capacity for experience, as the introduction of the recently introduced object-based Dolby Atmos standard indicates. But composers will also find more refined techniques and aesthetics that will take advantage of the full power of spatial distribution. If this happens the audience will follow, looking for new excitements in the perception of sound and music (Brümmer, 2016, p. 18).

Ultimately, my aim in this research has been to bring together real-time spatialisation and live composition. Gerald Bennett (Department Head at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris from 1976-1981 and Director of the Institute for Computer Music and Sound Technology at the Hochschule Musik und Theater, Zurich from 2005-2007) noted that “finding a balance between spatialisation and the restriction of interpretation in performance is difficult” (Bennett, 1997, p. 2). I want, as much as possible, to create and plan my compositional spatial movement in the studio, but to be able to intervene in the spatial trajectories assigned to sounds during my performance. When working in the studio, I noticed that when I use more than two channels (stereo), the additional speakers allow my sounds to ‘breathe’ and facilitate a spatial counterpoint that generates musical relationships or dialogues between them. Sometimes, the clustering of sound materials grouped together within a stereo file can lead to sounds masking each other in ways that they would not when presented over large multi-channel systems. This is not merely a matter of mixing skill within the stereo field, but rather about the spreading or separation of frequency content within a 3D space to create a sense of immersion and viscerality that is not possible within the stereo field. Thus, dividing my sounds spectrally across the space – what I call ‘gravitational spatialisation’ – helps me to achieve the immersive quality that I want the audience to experience no matter where they are situated within the performance space.

1.4 - Composition and spatialisation tools

The SPIRAL Studio is a 25.4 channel studio. It comprises three octophonic circles of speakers that provide a height dimension, a central high speaker pointing straight down to the sweet spot, and four subwoofers. Within Ableton Live I experimented with several software tools that enable spatialisation.

The Spatial Immersion Research Group (GRIS) at the University of Montreal developed the OctoGRIS[23] and SpatGRIS, Audio Unit plugins for controlling sounds over arrays of speakers; the latter supports up to 128 outputs, includes a height dimension, and was released towards the end of this research project in March 2018. The OctoGRIS allows the user to control live spatialisation over a dome from within an audio sequencer. However, considering the number of spatial gestures that I wanted to apply to several audio loops, the plugin used too much CPU, and I therefore had to find an alternative, less CPU-intensive solution.

Another tool that I tested was MNTN (The Sound of the Mountain)[24]. This software allows the user to design immersive listening experiences through a flexible, lightweight and easy-to-use graphical user interface (GUI) for spatial sound design; with it, the user can play the space as an instrument. MNTN (see Figure 12 below) enables the user to perform spatial concerts with real 3D sound, with as many loudspeakers as desired. However, as I developed and built my performance setup within Ableton’s software, I selected a tool already integrated within Live’s plugins, since I was more interested in the aesthetic application of these tools than in their technical implementation.

Figure 12 - MNTN, the software was developed with the idea of enabling the production of immersive sound, 2017.


I have used the Dolby Atmos plugin in Dolby’s studio (with the Rendering and Mastering Unit – RMU), and it was easily adaptable to the software I am already using: Ableton Live. Dolby is developing a plugin to be used within popular DAWs so that users can integrate the spatial dimension into their work, but it is unfortunately not yet commercially available to dance music producers and the general public.

1.4.1 - How do software and hardware tools affect my workflow?

My goal is to bring aspects of my past acousmatic practice into dance music in order to create immersive and visceral sound environments that are still novel and relatively uncommon. I am not claiming that I am developing new sound synthesis, sound processing or sound spatialisation techniques; rather, it is about the integration and application of these ideas into my compositional practice. Even though there are many other spatialisation techniques that I could have used throughout my research, and I could have developed far more sophisticated directional or structural sound trajectories and gestures, the desire to work and mix live has shaped the tools I have chosen to work with. The concept of performing spatial music is fundamental for me. This extends far beyond sound diffusion – a practice common in acousmatic music – and more towards the concept of ‘real-time composition’.

I decided not to use a Max patch with SPAT to create sound trajectories, which, though perhaps more sophisticated and controllable in the studio, caused CPU issues when used in real time with multiple instantiations as plug-ins on separate tracks. I have aimed to use and adopt standardised tools in order to optimise their potential and create something that is highly flexible and easily transferable from one system to another without having to install additional software. Whilst I acknowledge that my technical setup has allowed me to achieve my research objectives, I also acknowledge that the system has its limits – ones that I would like to transcend as my practice continues to develop. I have used all of Live’s send features to explore its potential as a real-time spatialisation tool. In this respect, my research has been highly successful, as I have been able to create live sets of up to two hours of 24.4-channel EDM on my live-streaming YouTube channel.

1.5 - Spatialisation and EDM: My setup

When performing my work, I aim to create an immersive and visceral flow of sound that carries the listener like an ever-moving wave. Immersion in that musical flow is more important than the musical dynamic or the gestural dynamic, because it is more about feeling and absorbing the music than appreciating key changes or a specific sound. “Low-frequency beats can produce a sense of material presence and fullness, which can also serve to engender a sense of connection and cohesion” (Garcia, 2015). It is the effect, rather than the music itself, that becomes more important, and because there are not many contrasting musical sections, the listener can become immersed in it. Rupert Till, in his article Lost in Music: Pop Cults and New Religious Movements, writes:

In most traditional societies, Western European culture being a notable exception, musical activity is a social or group-based activity, and is associated with the achievement of altered states of consciousness. […] Music has the power to exert enormous influence on the human mind, especially when people are gathered in groups, and the euphoric power of group dynamics is brought into play (Till, 2010, p. 12).

In my work, I intend the audience to achieve such an altered state of consciousness, to offer them a musical journey into the Kantian sublime. I am also interested in the audience perceiving the movement of the sounds in space, rather than paying attention to the specific trajectories of sounds that create the space within which they are situated. The way I create a sense of immersion and viscerality in my music is not just about volume, rhythm and repetition, but about how I handle the musical material, through long emerging textures. These textures accumulate through a sense of musical flow rather than discrete blocks of sound.

What I intend with my spatialisation research is to provide an enhanced experience of EDM. In my opinion, spatialisation is not just structurally or compositionally significant; it enhances the nightclub experience by creating a sense of immersivity, with a distinct visceral quality arising from enveloping the club-goers with music in a particular space. It is this experiential sensation of immersivity that drives my spatial thinking rather than abstract concepts.

The method I have used to spatialise my sounds allows me to perform on a sound system with up to 24 speakers. Most sound systems I have come across have fewer speakers than this, so I can adapt my Ableton sessions to a multitude of speaker setups and presentation formats. The Max4Live plugins “Max Api Ctrl1LFO” and “Max Api SendsXnodes” provide spatialisation easily and intuitively (see Figure 13 below).

Figure 13 - Max4Live spatialisation tool for multichannel diffusion, 2017.


This pair of plugins allows the user to send audio to anywhere between 1 and 24 channels at a time. The number of speakers selected can vary and can be modified throughout the composition process. My technique was influenced by the positioning of the three rings of speakers at different heights in the SPIRAL Studio. In keeping with my concept of ‘gravitational spatialisation’, I decided to keep most of my loops containing ‘heavy’ low frequencies on the bottom circle of eight speakers while moving or positioning the mid and high frequencies on the two rings of speakers above. This creates a ‘gravitational spatialisation’ in which higher-pitched sounds are usually heard above the lower bass sounds or kick. Since the ears perceive and localise high-pitched sounds more easily (Lee, 2014), I tend to place these sounds on the higher rings of speakers. This finding is supported by Hyunkook Lee’s article Psychoacoustic Considerations in Surround Sound with Height, where he states that: “The addition of height channels in new reproduction formats such as Auro-3D, Dolby Atmos and 22.2, etc. enhances the perceived spatial impression in reproduction” (Lee, 2014, p. 1).
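
In my own setup this routing is handled by the Max4Live devices named above; the Python sketch below is only an illustration of the underlying ‘gravitational’ logic, assigning a loop to the lower, middle or upper ring according to a rough brightness measure. The spectral-centroid thresholds are hypothetical values chosen for the example.

```python
import numpy as np

RINGS = {"low": range(1, 9), "mid": range(9, 17), "high": range(17, 25)}  # SPIRAL rings

def spectral_centroid(samples, sr):
    """Rough brightness estimate of a loop, in Hz."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1.0 / sr)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

def assign_ring(samples, sr, low_cut=200.0, high_cut=2000.0):
    """'Gravitational' assignment sketch: bass-heavy loops go to the bottom ring,
    brighter material to the middle or upper rings (thresholds are illustrative)."""
    centroid = spectral_centroid(samples, sr)
    if centroid < low_cut:
        return RINGS["low"]
    return RINGS["mid"] if centroid < high_cut else RINGS["high"]

# e.g. a synthetic 60 Hz 'kick-like' tone lands on the lower ring of eight speakers:
sr = 44100
t = np.arange(sr) / sr
print(list(assign_ring(np.sin(2 * np.pi * 60 * t), sr)))  # [1, 2, ..., 8]
```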

When composing music with space as a musical parameter, there are spatial compositional techniques that we can take into consideration, as outlined by acousmatic composer Natasha Barrett:

Common compositional techniques encompass the following:

- Creating trajectories; these trajectories will introduce choreography of sounds into the piece, and this choreography needs to have a certain meaning.

- Using location as a serial parameter (e.g. Stockhausen); this will also introduce choreography of sounds.

- Diffusion, or (uniform) distribution of the sound energy; creating broad, or even enveloping, sound images.

- Simulation of acoustics; adding reverberation and echoes.

- Enhancing acoustics; tuned to the actual performance space, by exciting specific resonances of the space (e.g. Boulez in Répons).

- Alluding to spaces by using sounds that are reminiscent of specific spaces or environments (indoor/outdoor/small/large spaces) (Barrett, 2002, p. 314).

In all of my works, I follow some of Barrett’s considerations. I create trajectories for certain sounds (mainly mid and high-pitched material) in order to introduce a choreography of sounds, which provides a certain meaning to the piece. In addition, the use of localisation is found in every one of my works: I have observed that it is perceptually better for low-frequency sonic content to be fixed in the lower speakers, adhering to my concept of ‘gravitational spatialisation’. Diffusion, or (uniform) distribution of sound energy, is often implemented in my compositions in order to create an introduction to a piece; this technique helps me to immerse the audience in sound through the use of all the speakers surrounding the listeners. An example of diffusion can be heard in my piece Chilli & Lime (2017), where the main part of the piece is introduced by a guitar loop. In this piece there is also a simulation of acoustics in order to play with the sense of space: a delay effect was added to the guitar loop, which helps expand the perceived localisation, making the sound feel as though it comes from beyond the speakers. These are the considerations from Barrett’s list that I find most important when composing a spatial work; the other elements in her list pertain to other forms of art music rather than being directly applicable to EDM.

Depending on the sonic content of my work, my spatialisation techniques serve different functions when composing. The register of certain sounds can be more perceptible than that of others, and this influences my decisions about how the space will be used or manipulated. Marije Baalman discusses such ideas, writing that:

In a sense – within electroacoustic music – composers are interested in ‘abusing’ the technology; composers are not so much concerned with creating ‘realistic’ sound events, but rather interested in presenting the listener with spatial images which do not occur in nature (Baalman, 2010).

Figure 14 – Layout of speaker disposition in the SPIRAL Studio.


Figure 15 – Diagram of spatialisation for Not The Last One (2017).


My piece Not The Last One (2017) begins with a kick drum on all the speakers of the lower circle (speakers 1-8) (see Figure 15 above). At 0’41”, a second loop (a rhythmical white noise sweep) is placed on the middle ring (speakers 9-16) in order to start expanding the sonic space. Following this, at 1’05” a third loop (another white noise with a fast panning effect) is introduced, circling at moderate speed around all 24 speakers (from speaker 1 to speaker 24), rising clockwise from the bottom and repeating this pattern constantly throughout the piece; this produces a sense of slow movement. A fourth sound layer (a gritty “metallic” rhythmical loop) appears at 1’23”, circling rapidly counter-clockwise, starting on the higher speakers, descending, and rising again after the whole sequence of speakers (from speaker 24 to speaker 1). This allows the listeners’ attention to focus on both the global sound trajectories and the multiple sound sources around the performance hall. The fifth sonic layer, a rhythmical white noise beat introduced at 1’34”, is localised on the higher ring of speakers (speakers 17-24) in order to complete the filling of the whole space with sound. The first two minutes thus comprise not only an exposition of the sonic material to be used in the composition, but also a spatial exposition. Other loops added throughout the track are positioned according to a sonic equilibrium that helps create a sense of immersion and movement on the diffusion system, and are adjusted for specific musical passages that require spatial attention.
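
The circling movements described above can be thought of as a virtual position sweeping through the numbered speaker array while neighbouring speakers are cross-faded. The Python sketch below illustrates that idea in a generic form; it is not the Max4Live patch used in the piece, and the frame count, speed and cross-fade width are illustrative.

```python
import numpy as np

def circling_gains(n_frames, n_speakers=24, revolutions=1.0, width=1.5):
    """Per-frame gain matrix for a source that steps around a numbered speaker
    array (1..n_speakers), cross-fading between neighbouring speakers as the
    virtual position sweeps through the ring (sketch only)."""
    frames = np.arange(n_frames)
    position = (frames / n_frames) * revolutions * n_speakers   # fractional speaker index
    speaker_idx = np.arange(n_speakers)
    # circular distance between the moving position and each speaker
    d = np.abs((speaker_idx[None, :] - position[:, None] + n_speakers / 2) % n_speakers
               - n_speakers / 2)
    gains = np.clip(1.0 - d / width, 0.0, None)        # cross-fade over ~width speakers
    norm = np.linalg.norm(gains, axis=1, keepdims=True)
    return gains / np.maximum(norm, 1e-9)              # constant power per frame

g = circling_gains(n_frames=1000, revolutions=2.0)     # two full passes round the array
print(g.shape)                                         # (1000, 24)
```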

Figure 16 – Diagram of spatialisation for Cyborg Talk (2017).


In my work Cyborg Talk (2017), I start with a long synthesised line placed across the whole array of speakers (see Figure 16 above). Second, at 0’33” a pad sound is presented, moving slowly clockwise from the bottom towards the middle and higher rings. Third, at 0’49” the sound of a snare drum moves 90 degrees counter-clockwise at every bar (front, left side, back, right side), circling back and forth between the middle and higher rings. The next two loops, at 1’19” and 1’27”, are drum sounds spread across the whole array of 24 speakers. A long sixth synthesised sound at 2’21” sets the singular and intriguing mood of the piece, circling slowly clockwise from the bottom towards the middle and higher rings, going up and down. The main character of the piece, fully established in loop seven at 3’05”, is located over all of the speakers. When all of the sounds are playing, and some of them moving, a spatial counterpoint is created that allows the listener to perceive a multitude of sonic possibilities, no matter where they are situated inside the array of speakers.

When I compose spatial music, I take into consideration the sonic elements of a track and how these are best suited to spatialisation using my concept of gravitational space. In wanting to create an immersive space I am aware that many of the finer details in a mix may not be perceived by those listening. Enda Bates concurs with this approach in his PhD research:

As Denis Smalley states the piece as a whole must be looked at, because “the whole is the space or spaces of the piece”. The success of any work of spatial music can therefore only be considered in terms of the overall compositional strategy which describes the relationship between space and every other musical parameter (Bates, 2009, p. 207).

In my work, I have observed that each spatialisation tool is optimised for a certain way of working or performance situation; the most pertinent tool therefore depends on the particular spatial effect or mode of performance required. For instance, in So It Goes (2017), at 22’13” and 23’11” the audio content is largely focused in the low-frequency region on heavy drum and bass loops that provide an intense sonic immersion; the sounds are not moving much, but they present an oppressive wall of sound. The effect aims to bring the climactic moment to an extreme sensation of immersion achieved through surround sound, while a drop in musical intensity, brought about by a filtering effect, culminates in the re-introduction of the musical climax, creating an impactful and visceral sonic experience. Spatialisation was not a primary element of composition for that part of the piece; thus, when creating and developing my musical ideas, I offer what is most appropriate for the sonic journey, focusing on melody, harmony, timbre, rhythm or space as necessary.

With every musical project, I assign a specific movement or localisation to each of the sounds in the composition. Through experimentation, I respect the musical arrangement of the work in order to apply the appropriate spatialisation to each of the sounds, which then remains throughout the piece. For instance, in my work Rocket Verstappen (2017), after establishing the introduction with the first two drum loops on the whole array of speakers (see Figure 17 below), I add to the spatial dimension by introducing a third loop at 0’38” that circulates clockwise on the middle ring of eight speakers. The fourth layer, at 1’17”, is a low bass loop that consolidates the low-frequency content of the piece on the lower ring of speakers. The fifth loop, at 1’37”, extends the use of space even further by circulating counter-clockwise on the upper ring of eight speakers. At that point, the momentum of the piece is fully articulated; it then continues forwards with new loops at 2’04” and 2’17” that emerge and disappear, developing the structure of the piece while keeping their spatialisation throughout.

Figure 17 – Diagram of spatialisation for Rocket Verstappen (2017).


The dynamic spatialisation of sound can bring a new dimension to EDM. Producers will soon be able to write and perform spatial audio using open-source software that will advance electronic music as we know it. I am seeking to change the way we experience live sound. In my work, it is essential to feel sound filling the whole space. The aim of my spatialisation is to create the sensation that the music is part of the room itself.

1.6 - Studio - Binaural Mix

My composition Not The Last One was a first attempt at translating my works of spatial electronic dance music from 24 channels to a binaural (two-channel) audio version. This work is important since I wish to create a mobile listening format in which people can experience my compositions in their own private space, wherever they are, with a pair of headphones. Since only a limited number of people can come to Huddersfield for my concerts on a diffusion system with 24 speakers, this conversion became a key aspect of my PhD research: it could allow me to reach a larger audience and offer them the ability to carry this experience around the world.

The method I have set up preserves a sense of immersion and spatialisation in my music when going from 24 channels to 2 channels of audio. It was achieved by recording my live performance on my computer and then replaying the performance while recording it with a Neumann (binaural) dummy head. This recording device replicates the human interaction with our auditory reality by having microphones placed in the ears of the dummy head. The technique provides a sonic realism since it recreates the time difference of sounds reaching one ear before (or after) the other, which mirrors the way the human brain calculates the distance and position of sound sources. Of course, a limitation of this technique is that the pinnae of our ears differ from one person to another, but the generic dummy head can provide an approximation for most listeners. Thus, with this technique, I was able to capture the spatialisation implemented in my composition and keep the sense of horizontal and vertical envelopment.
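
The sketch below illustrates, in Python, the interaural time and level differences that the dummy head captures acoustically; it is a crude approximation for explanatory purposes only, since a real dummy-head recording also encodes the pinna filtering discussed above, and the head radius and level-difference values are generic assumptions.

```python
import numpy as np

SR = 44100
C = 343.0
HEAD_RADIUS = 0.0875  # metres, an average head

def simple_binaural(mono, azimuth_deg):
    """Crude interaural time/level difference rendering of a mono signal at a
    given azimuth (negative = left). Ignores pinna filtering entirely."""
    az = np.radians(azimuth_deg)
    itd = HEAD_RADIUS / C * (abs(az) + abs(np.sin(az)))   # Woodworth ITD approximation (s)
    shift = int(round(itd * SR))                          # samples of interaural delay
    ild = 10 ** (-6.0 * abs(np.sin(az)) / 20.0)           # up to ~6 dB level difference
    delayed = np.concatenate([np.zeros(shift), mono])[: len(mono)]
    near, far = mono, delayed * ild
    left, right = (near, far) if azimuth_deg < 0 else (far, near)
    return np.stack([left, right])

t = np.arange(SR) / SR
stereo = simple_binaural(np.sin(2 * np.pi * 440 * t), azimuth_deg=60)
```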

In this composition, I wanted to be able to perceive and locate where the sounds were moving in this virtual (binaural) space. The form of Not The Last One has three simple peaks of intensity, each introduced by increasing musical action and reaching a climax (at 2’15”, 5’06” and 8’57”); the middle section is followed by a falling section into a final denouement (starting at 4’13”). I have used sound materials that are mid-to-high pitched, since they are easier to locate in space than the low-frequency material, which I kept fixed mainly in the lower part of the mix.

I have used Hyunkook Lee’s multi-channel studio (Applied Psychoacoustics Lab, University of Huddersfield) in order to record my performance and to capture it with the Neumann Dummy Head. Similar to the SPIRAL Studio, Lee’s studio has surrounding (horizontal) speakers and also elevated (vertical) loudspeakers in order to create a sonic dome. Lee writes that: 

Height channel loudspeakers used in new 3D multichannel audio formats […] add the height dimension to the width and depth dimensions existing in the conventional surround formats. The added height channels are naturally expected to enhance perceived spatial impression (Lee, 2014, p. 1).

This study led me towards conceiving a stereo version of my 3D music. However, so far this technique is insufficient to provide a verbatim translation of the spatial sound experience into binaural form. Therefore, some EQing of the mid frequencies suggested by Lee (+6dB around 3000Hz) was beneficial in clarifying the spatialisation in the piece. Furthermore, in order to create this sense of spatial immersion, I ‘downmixed’ the performance (contained on 11 channels of audio) from the software recording into a stereo file. To achieve this version, I exaggerated the panning of the ‘moving’ elements relative to the binaural version, and the clarity of the software version enabled me to provide the punchy and dynamic strength that supports the Neumann mix. At the moment, unfortunately, not every compositional project has frequency content that is suitable for this type of recording technique using the Neumann recording head. However, this is the technique I have found to be the most efficient and satisfying during my research. The potential is there; it is perhaps not as great as expected, but it still produces, for me, a better final version than the simpler stereo version. It is difficult to create that sense of height within a stereo mix, and my solution is a hybrid formula of techniques to provide this binaural sense of immersion.
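
As an illustration of the mid-frequency boost mentioned above, the following Python sketch applies a standard peaking filter of +6 dB around 3 kHz to a stereo downmix; the filter shape and Q value are assumptions, since the exact EQ curve used in the studio is not documented here.

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(samples, sr, freq=3000.0, gain_db=6.0, q=1.0):
    """RBJ 'Audio EQ Cookbook' peaking filter: boosts a band around `freq`
    by `gain_db` dB. Illustrative sketch of a +6 dB boost around 3 kHz."""
    a = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * freq / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return lfilter(b / den[0], den / den[0], samples, axis=-1)

# e.g. applied to a stereo (2, N) downmix array sampled at 44.1 kHz:
stereo = np.random.randn(2, 44100) * 0.1
brightened = peaking_eq(stereo, sr=44100)
```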

To support this procedure, a fellow researcher on sound spatialisation at the University of Huddersfield, Oliver Larkin, has also worked with the recorded files to provide a clearer sense of spatialisation. He achieved this by filtering the low-frequency material out of the dummy head recording, summing that low-frequency content to mono, and then adding it back beneath the filtered Neumann dummy head mix to form his final version of this immersive audio mix. Larkin kept the mid and high-frequency material to provide the sense of spatialisation from the composition, supported by the bass content in mono. This is the technique he has found most effective so far in his research.
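
A simplified version of this low-frequency fold-down can be sketched as follows in Python, assuming a fourth-order crossover at a hypothetical 120 Hz; Larkin’s actual processing chain is not reproduced here.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lf_mono_fold(binaural, sr, crossover=120.0):
    """Sketch of the approach described above: high-pass the dummy-head (binaural)
    recording to keep the spatial mid/high content, sum the low band to mono, and
    add it back under both channels. The crossover frequency is an assumption."""
    sos_hp = butter(4, crossover, btype="highpass", fs=sr, output="sos")
    sos_lp = butter(4, crossover, btype="lowpass", fs=sr, output="sos")
    spatial = sosfilt(sos_hp, binaural, axis=-1)        # mids/highs retain the spatialisation
    low_mono = sosfilt(sos_lp, binaural.mean(axis=0))   # bass folded down to mono
    return spatial + low_mono[None, :]                  # mono bass under both channels

# e.g. binaural is a (2, N) array at 44.1 kHz:
binaural = np.random.randn(2, 44100) * 0.1
final_mix = lf_mono_fold(binaural, sr=44100)
```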

There is still much work to do in order to create an experience greater than the stereo version, but the quality of this immersive format will hopefully become more accessible and practical over time.