My Practice of Live Performance of Spatial Electronic Dance Music 

by

Sebastian DeWay

A thesis submitted to the University of Huddersfield in fulfilment of the requirements, for the degree of Doctor of Philosophy

Department of Music, Humanities and Media

University of Huddersfield

May 2019

 Contents

Abstract  
Copyright                                                                                                 
List of works submitted                                               
Introduction                                                                                             
1 - Consideration of Space in the presentation of EDM                          
            1.1 - Why use large multichannel spatialisation techniques?
            1.2 - Why spatial music?
            1.3 - Temples for sound spatialisation
                     1.3.1 - 4DSOUND System
                     1.3.2 - Dolby Atmos
                     1.3.3 - SARC - The Sonic Laboratory 
                     1.3.4 - Sound Field Synthesis Methods
                     1.3.5 - SPIRAL Studio
            1.4 - Composition and spatialisation tools
                     1.4.1 - How do software and hardware tools affect my workflow?
            1.5 - Spatialisation and EDM: My setup
            1.6 - Studio - Binaural Mix
2 - Composition As Performance  
            2.1 - Music Styles - Techno-House-Trance
            2.2 - Composition Overview
            2.3 - Compositional Flow
            2.4 - Structure: process and intuition
            2.5 - Improvisation
            2.6 - Conclusion: outlets and dissemination
3 - Performance As Composition                               
            3.1 - A Case Study: GusGus’ performance setup     
            3.2 - Rethinking Composition as Performance
            3.3 - Why 123-128bpm?
            3.4 - Physical response 
            3.5 - Live performance and Musical Flow
            3.6 - Methodology for emerging artists
            3.7 - Experimentation with Live Spatialisation 
Conclusion                                                                                            
Bibliography
Discography                                                                                         
Appendix I
Websites
                                                                                              

Abstract

In this commentary I will discuss the technical implementation of sound spatialisation in EDM (electronic dance music) performance practice and outline my compositional approaches involving these techniques. The use of space as a musical parameter in EDM is becoming more common as the accessibility of the technology increases. The technical means of performance and the sonic material combine to create a unique musical aesthetic and listening experience in EDM culture. An historical overview of compositions using spatial considerations as a main musical parameter will situate my work within this artistic practice. Different implementations and propositions of sound spatialisation, as well as the principal locations dedicated to this form of activity will be discussed to contextualise my work. 
A fundamental part of my research concerns the use of spatialisation tools and techniques to enhance EDM through an immersive sound experience. Concepts and notions of musical ‘flow’ and live improvisation have shaped this research and the compositional and performance aesthetics that have come to underpin my creative practice. Furthermore, the idea of immersivity and the sublime have informed my compositional thinking, and this will be assessed in relation to my objective to create an enhanced listening experience in my live performances. A discussion of the blurred roles of composer/producer/performer will demonstrate how I consider my live performance practice to redefine what a composer of EDM can be. Thus, I consider this research to propose a viable model for modern EDM composers.

Copyright Statement

The following notes on copyright and the ownership of intellectual property rights must be included as written below: 
i. The author of this thesis (including any appendices and/ or schedules to this thesis) owns any copyright in it (the “Copyright”) and s/he has given The University of Huddersfield the right to use such Copyright for any administrative, promotional, educational and/or teaching purposes. 
ii. Copies of this thesis, either in full or in extracts, may be made only in accordance with the regulations of the University Library. Details of these regulations may be obtained from the Librarian. This page must form part of any such copies made. 
iii. The ownership of any patents, designs, trademarks and any and all other intellectual property rights except for the Copyright (the “Intellectual Property Rights”) and any reproductions of copyright works, for example graphs and tables (“Reproductions”), which may be described in this thesis, may not be owned by the author and may be owned by third parties. Such Intellectual Property Rights and Reproductions cannot and must not be made available for use without permission of the owner(s) of the relevant Intellectual Property Rights and/or Reproductions.

List of works submitted

Rocket Verstappen (2016-2017 – Live Stereo & Binaural – 8 & 12 minutes)
So It Goes (2017 – Stereo & Binaural – 49 minutes)
Cyborg Talk (2017 – Stereo & Binaural – 14 minutes)
Not The Last One (2017 – Stereo & Binaural – 9 minutes)
Chilli & Lime (2017 – Stereo & Binaural – 17 minutes)
Stix (2017 – Stereo & Binaural – 13 minutes)
Groove Society (2019 – Live Stereo & Live Binaural – 25 minutes)

Introduction

When I compose music using spatialisation techniques, I am aiming to create a sense of immersion and movement. I am enthralled by the possibility of moving sound in space and I consider it an important feature of music. For me, it enhances the listening experience, and this is achieved through localization, diffusion, height and trajectories of sounds. The implementation of spatial counterpoint in my compositions utilises parallel and contrary spatial motion, and this use of spatial counterpoint inherently implies a set of compositional considerations for approaching a new work. My music differs from more typical EDM in that we can hear sound trajectories, changeable rates of speed in sonic movement, localization and a height dimension in the sound. I have arrived at a heuristic, common-sense set of rules for spatialisation: a system of spatial counterpoint for this practice that I have called ‘gravitational spatialisation’. Essentially, it comprises a spectral separation of the audio content, with sound positioned according to its frequency.
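The following sketch (in Python with NumPy) is a minimal illustration of this principle, assuming a mono loop held in memory as an array; the band edges, the logarithmic mapping and the 0.0–1.0 elevation scale are illustrative assumptions rather than the exact values used in my own system.

    import numpy as np

    def spectral_centroid(loop, sample_rate=44100):
        """Estimate the spectral centroid (in Hz) of a mono loop."""
        spectrum = np.abs(np.fft.rfft(loop))
        freqs = np.fft.rfftfreq(len(loop), d=1.0 / sample_rate)
        return float(np.sum(freqs * spectrum) / np.sum(spectrum))

    def gravitational_elevation(centroid_hz, low=80.0, high=8000.0):
        """Map frequency content to height: bass material stays near the floor,
        bright material rises. Returns 0.0 (floor) to 1.0 (overhead)."""
        # A log-frequency scale, so that each octave contributes equally.
        return float(np.clip(np.log2(centroid_hz / low) / np.log2(high / low), 0.0, 1.0))

    # Example: a 60 Hz bass loop sits on the floor; a noisy hi-hat burst is lifted overhead.
    sr = 44100
    t = np.arange(sr) / sr
    bass_loop = np.sin(2 * np.pi * 60 * t)
    hat_loop = np.random.randn(sr) * np.exp(-10 * t)

    for name, loop in [("bass", bass_loop), ("hats", hat_loop)]:
        print(name, round(gravitational_elevation(spectral_centroid(loop, sr)), 2))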

As a young composer, my first works were created with my own recordings of concrète materials, ‘objets sonores’ (Schaeffer, 1956, p. 62). These stereo audio files were assembled into organized sound compositions. The premiere of my first work at an acousmatic concert provided the initial impetus for my discovery of electronic music in space, as it was performed over a sound diffusion system which allowed me to spatialise my composition with an orchestra of loudspeakers. This newly acquired awareness expanded my musical horizons; I thought this dimension would be an important aspect of my future in music. Acousmatic music concerts offered me insightful lessons during my musical progression, and the art of diffusion became a prime focus throughout these formative years.

As my thinking and knowledge about working with sound in space developed, the next logical step was to compose works on a surround setup of speakers in order to create pieces containing recorded (fixed) spatialisation, ready for performance. Accessible and intuitive tools for implementing and recording spatialisation in my compositions, such as OctoGRIS, enabled me to create works for a surround array of eight speakers. This skill of writing sound in space became my preferred mode of expression and creation. I believed the experience of an acousmatic environment was an important and forward-thinking musical advancement for the contemporary artist.

I started to go to nightclubs at the age of fourteen and I grew up listening to Dance Music, which at the time (around 1988) coincided with the birth of House Music. Over the years, I have acquired knowledge of these music genres by regularly attending events of this kind and I have developed a strong inclination towards rhythmical music, especially Electronic Dance Music (EDM). Unfortunately, most of these events do not utilise the art of spatialisation in any significant way. This is where I thought I could reconcile my musical aspirations: taking my skills from acousmatic composition and implementing them in an EDM style. I started as an acousmatic composer and I became a composer/performer. I have developed a studio practice in which I want to integrate live performance and in-the-moment decision making at every point. My upbringing in nightclubs was dedicated to the physical aspect of life. I relate this to my ongoing need to experience music viscerally when I compose, and although I had a rigorous training in it, I do not find this sense of physicality in much of the acousmatic repertoire. Since I have always been drawn to EDM, I started to apply my acousmatic practice to these styles of music production.

In this project, concerning the spatialisation of EDM, the research questions I posited were:

  • What role can spatialisation play in EDM?

  • How can I bring acousmatic experience into EDM to enhance it as a performative and compositional genre?

  • How can creative work merge composition and improvisation?

  • How can live performance practice redefine what a composer of EDM is?

  • How do I achieve this through notions of ‘gravitational spatialisation’, immersion and the idea of the sublime?

These questions affect my work as a composer and my role as a performer, and they also influence my sense of flow. Furthermore, this interrogative process led me to research the ideas of immersivity and the sublime, which have informed my compositional thinking. Through my particular compositional practice, I mix Techno, House and Trance music. I also define how my interchangeable roles of composer/producer/performer work seamlessly together, as I consider this an interesting creative model. Through the evaluation of spatialisation tools and plugins, I have explored which of them are relevant to implement in EDM.

In any discussion of my practice as research, I consider it important to question what differentiates my music from a typical DJ set. Firstly, I regard my works as compositions. They do, however, contain long stretches of improvisation, as did my acousmatic works. In those earlier works, created during my undergraduate and Masters study, I would select materials that I thought could work together and improvise with them in the studio; I would record all of my improvisations and then make a piece with these elements. Essentially, I am now doing this in real time: I create material that forms the opening of my piece and, through controllers, the Ableton Push 2 and Novation Launch Control XL (see Figure 1 below), I improvise with these materials using filtering, reverberation and shuffling effects in order to create a new section of the piece in real time. To demonstrate this creative process: in my piece Stix (2017), at 3’40”, sonic transformations occur on the original materials with heavy filtering, which makes the music nearly disappear until all the loops return with a different rhythm, created by using the shuffling effect. That similarity of practice is important, though I do not do the same thing in every piece. I either have a planned structure for my piece or I develop my pieces as a logical stream of interlocking ideas that have a forward sense of flow to them. Compositional strategies I employ include applying an identical rhythmic pattern to all the loops, expanding the sonic space by adding reverb to all the sounds, or changing the speed of movement of these sounds. All of my pieces have a particular set of sonic materials that is unique to them. There is a great deal of selection of sonic materials and thinking about space behind my compositions. For instance, Not The Last One (2017) contains mainly sounds similar to pink noise, which provides a greater cohesion between all the sonic material, especially at 2’23”. My compositional approach is bound to the sense of flow: letting the materials’ potential steer the form and structure of my pieces.
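As a rough, hypothetical illustration of the kind of transformation described above (it does not reproduce the Ableton Live devices I actually use), the ‘shuffling’ effect can be thought of as cutting a loop into beat-length slices and re-ordering them; the slice grid, the synthetic kick pattern and the random permutation below are arbitrary examples.

    import numpy as np

    def shuffle_loop(loop, slices=8, seed=None):
        """Cut a mono loop into equal slices and play them back in a new order,
        producing a different rhythm from the same material."""
        rng = np.random.default_rng(seed)
        length = (len(loop) // slices) * slices      # trim to an even slice grid
        segments = loop[:length].reshape(slices, -1)
        order = rng.permutation(slices)
        return segments[order].reshape(-1), order

    # Example: one bar of a 120 BPM four-on-the-floor kick, re-ordered on an 8th-note grid.
    sr = 44100
    bar = np.zeros(sr * 2)                           # 2 seconds = 1 bar at 120 BPM
    for beat in range(4):
        start = beat * sr // 2
        t = np.arange(2000) / sr
        bar[start:start + 2000] += np.sin(2 * np.pi * 55 * t) * np.exp(-20 * t)

    shuffled, order = shuffle_loop(bar, slices=8, seed=1)
    print("slice order:", order)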


Figure 1 – Performance setup during Livestream events: laptop running Ableton Live in conjunction with the Push 2 and the Novation Launch Control XL, plus an iPad running the Launchpad app in order to operate transitions from one piece to another in Live, 2017.

There is a preparation, a selection, and a structure to each of my performances; I organize the set list and program for each specific event. The experimentation is done as the performance evolves, reading and sensing the components of that moment in order to provide the required musical offering:

When an EDM producer (usually also a DJ) in a recording studio selects and organizes sounds in determined ways, he is already acting in accordance with their virtual effects on a dance floor. Experimenting with sound combinations, he is also experimenting with his audience's movements, thus producing a kind of tool that comes from and arrives at his relations with the dance floor (Ferreira, 2008, p. 18).

Thus, there are similarities between the act of DJing and my live performance: both are organised in advance, but both also allow the freedom to improvise with the already composed musical material. What differs is my ability to decompose and recompose certain parts of the pieces as they unfold. In addition, I can apply specific effects to individual loops, which allows me to generate new musical ideas and directions to develop during the improvisational sections. Pedro Peixoto Ferreira, however, writes that:

This is not to say that there is no experimentation in EDM, only that it is not usually focused on the artistic creation of musical forms but rather on the technological modulation of the sound-movement relation. In other words, EDM is not a kind of creative message sent by a performer to his audience, but the sonorous dimension of a particular collective movement (Ferreira, 2008).

In my work I do consider my method of formal arrangement to be a form of experimentation into what a track or composition can be. I want the experience to be an immersive one that can be appreciated both by the body and the head.

A website, as the ultimate product of this research, is a suitable form in which to transmit and share this project. I have therefore built my own webpage to store and share information regarding the progression and development of my music. It is an apt platform to inform and teach people about music since it reaches our different senses: we can read about music in the form of text, we can see performances visually, and we can hear recordings of compositions. All of these digital formats (text, video, sound) are presented on my website with hyperlinks, to give further insight into the research. The website is a more accessible and involving resource for the propagation of music.

The first chapter of this research concerns the spatial presentation of EDM. I found that spatiality aids the perception and the comprehension of music, and I will explain why I use multichannel spatialisation techniques when I compose. I then contextualize the idea of writing sound in space referencing significant composers and also the theoretical advancements and the technologies related to it. To illustrate the importance and valuable consideration that academic and commercial environments have offered towards spatialisation, I have assessed some important locations and systems that focus on the art of spatial sound. I discuss how software and hardware tools affect my workflow and I will share the performance setup I have developed in order to unite spatialisation and EDM in my work. At the end of this chapter, I contribute some observations on how I create binaural mixes of my compositions for headphone listening.

In the second chapter, I discuss how I consider the act of composing as a performance. My pieces utilise two basic structural methodologies: working with a pre-conceived structure that leads onto a more improvisatory framework or starting with the improvisation itself and then slowly letting the pre-conceived structure emerge from this.

When I consider improvisation in my work, it is set within defined musical boundaries of time and rhythmical patterns. The application of improvisation in my work is limited in its scope, especially if I compare it to a musician such as Evan Parker with his electroacoustic ensemble, who exploits a larger spectrum of sonic and temporal possibilities when playing. He performs without any rules beyond the logic or inclination of his musical state, whereas what I play stays within the EDM genre.

My style of improvisation relates more toward the model of the Big Band era of the 1940s and early 1950s, for instance Benny Goodman or Glenn Miller, which relied more on arrangements that were written or learned by ear and memorized. Thus, my pieces are composed and structured but also include moments where I, as a soloist, improvise within these arrangements. As a Big Band would, I also interpret pieces in individual ways, never playing the same composition in the same manner twice. Depending on the mood, experience, and interaction with the audience, melodies, harmonies, and rhythmical patterns develop and change from one performance to the next.

The idea of rhythm and dancing has been important throughout my life and over several years I attended numerous nightclubs in order to hear DJs during the ‘golden age’ of House and Techno music. The instant somatic gratification from the bass frequencies was compelling. Yet, I was also drawn to the compositional and intellectual aspects of this music. I find a parallel in the writings of Arthur Bissell when he is describing the primary perceptions and expectations of an artist during a performance:

The pleasure … arises from the perception of the artist’s play with forms and conventions which are ingrained as habits of perception both in the artist and his audience. Without such habits … there would be no awareness whatever of the artist’s fulfillment of and subtle departures from established forms … But the pleasure which we derive from style is not an intellectual interest in detecting similarities and difference, but an immediate aesthetic delight in perception which results from the arousal and suspension or fulfillment of expectations which are the products of many previous encounters with works of art (Bissell, 1926, p. viii).

During the first year of this research, I produced some fixed media works that attempted to bridge these different styles of composition: the academic and the more commercial. For me, these pieces lack the sense of physical and intellectual fulfillment that I am looking for when I create work. Consequently, in the experimentations that followed, I concentrated my compositions on live performance, incorporating my concept of ‘gravitational spatialisation’: positioning sounds according to their frequency content. Furthermore, I realized that sitting down at a concert was not my preferred mode of listening; because I like to move, and I like to hear the sounds moving as well, this form of embodied listening propelled me to search for a new way to perform and experience my music. Ultimately, my objective is to have a self-awareness of what it is I am making musically and where it is drawing from, and to demonstrate that I am synthesizing those key characteristics into something that is compositionally my own.

One of the things that I have realised while undertaking this research is the idea of multiplicity in my work. My music cannot be described simply as a river of musical flow, nor as a massive ocean of music; ultimately it has characteristics of both. In French, there is a term for a watercourse that crosses the land and flows into the sea: a fleuve. A fleuve can be narrow enough to be considered a river, yet it has the grandeur to reach the ocean. The Amazon is one of the most important fleuves in the world; it occupies that multiple role of being small enough to be a river but large enough to reach the Atlantic Ocean. Similarly, my music can be pleasing to the ‘outside’ institutions (nightclubs) as well as to the ‘inside’ establishment of academic music (universities). It contains hedonistic and visceral sonic qualities, but it can also intellectually stimulate the educated musical ear. The same applies to my compositional methodology: is my work an improvisation or is it a structured piece? It is actually both; at different times it can change, evolve and progress without boundaries.

My work as a composer-producer is not bound to the studio, and the live performance is not a remix of my music. There is a triangulation (see Figure 2 below), a symbiotic relationship between the studio work, the live performance and the final product (piece), that is essential to my methodology. All of these feed into each other; there is a continuous loop between them, a flow of musical direction going back and forth among them. As such, none of my pieces exist in a final form; they essentially become the sounds of a tool kit. I do not see any distinction between the roles of composer/producer/performer. These are not relevant distinctions for me, as the relationship between them creates the ideal environment for me to make music. This inventive immersion is, for me as an artist, important to engender the creative flow, which has helped me to be prolific over the last two years of this research.


Figure 2 – Triangulation and flow: a symbiotic relationship between the studio work, the live performance and the final product (piece).

The way I compose and perform is through musical structure, process and intuition. The key compositional elements are the gradual accumulation and fragmentation of texturally and rhythmically driven loops. These elements help me to create a sense of musical flow through the ‘emergence’ and ‘disappearance’ of sonic content. The sense of flow is important for me when I compose, create or perform because it is organically evolving, transforming and changing what is happening musically. It allows me to have a vivid awareness, which enables me to react, respond and adapt to the music. It also allows me to reach an elevated state of consciousness in- and of-the-moment.

The concept of creative flow is considered in detail in Chapter Three. I will elaborate on psychologist Mihály Csíkszentmihályi’s theories of ‘flow’ (1990, 1997). In them, he describes ‘flow’ as being in the zone, “in a mental state of operation in which performing is an activity where we are fully immersed in a feeling of energized focus, fully involved, and enjoying the process of the activity” (Csíkszentmihályi, 1990). Thus, I connect his theories to what I am trying to do both musically and aesthetically; not just about the mechanics of it, but also how I want to involve the audience in it. Furthermore, as a composer, I want to integrate concepts of immersivity and viscerality in order to reach the sublime in music.

Among the characteristics that I consider important when composing is the immersive quality of the music. This immersion is related to the enclosed space, where an array of speakers is the vehicle that conveys the spatialisation and the musical gesture to ‘transport’ the audience during a performance. Ultimately, I want the audience to experience a sense of immersion within the concert space and, through points of articulation, to become aware of the musical structure. Viscerality is another concept included in my work. It is achieved through immersion, the use of low frequencies and ‘gravitational spatialisation’; it is a phenomenon that emerges from all the actions I take when I perform my music.

One of my musical aims is to use an immersive sonic environment to create a three-dimensional audible experience. New media artist and theorist Frances Dyson (2009) also investigated the significance and implications of immersion in her book Sounding New Media, writing that:

The experience of this immaterial, simulated “space” operates through “immersion” – a process or condition whereby the viewer becomes totally enveloped within and transformed by the “virtual environment.” Space acts as a pivotal element in this rhetorical architecture, since it provides a bridge between real and mythic spaces, such as the space of the screen, the space of the imagination, cosmic space, and literal, three-dimensional physical space. Space implies the possibility of immersion, habitation, and phenomenal plenitude (Dyson, 2009, p. 1)

Her notion of immersivity, however, is more related to the immaterial, intangible dimension of life, which is quite opposite to my desire to ‘touch’ viscerally (almost physically) the listener with my music.

When I perform live EDM, I want to achieve greater expression in my work rather than offering a passive acousmatic sound diffusion. Additionally, a way to accomplish the expressive sublime in my music is through the idea of immersion. The concept of immersion within my work is produced in the studio and becomes a reality during the performance, when an array of speakers surrounds the audience. Composer Simon Emmerson (2007) elaborates on ideas of new spaces and perspectives with regard to immersivity in live performance:

Two new kinds of listening spaces relying on technology for their sound systems emerged from the 1960s to the 1990s. Both are totally immersive, but one is large and public, the other small and private. First the spaces of leisure listening, increasingly with the participation of dance, group encouraging and inclusive; secondly the space of the ‘personal stereo’, individual and exclusive. […] The image is here totally immersive, designed to envelope, to create a total space into which intrusion of extraneous sound is impossible, not because it is excluded but because it is masked. The image is close, surrounding and omnidirectional, possessing a kind of amniotic reassurance (Emmerson, 2007, p. 103).

Immersivity has multiple theoretical aspects that are explored by Emmerson (2007), but for me it is really about being surrounded by sound. There is not one sonic point of origin; the listener/dancer is in the middle of it (the speaker setup), and the central position (sweet spot) is not important since people are moving and changing positions while they are dancing. For me, immersivity is where the body is engulfed in an overwhelming feeling of sonic presence. I concur with Emmerson’s idea of the performance space being the ultimate listening environment: “These spaces were always there […] but have now become more fully integrated into the ‘total’ experience” (Emmerson, 2007, p. 116).

I compose my music in a particular kind of way, where I use a certain type of reverb to give an artificial sense of distance. I work with the sense of length and scale, and I use multiple speakers to create that sense of immersion in that live space. It is not an imaginary landscape, but it is leading to the idea of the sublime in music, where I am imagining something that is physically imposing. When composing, I envision my music playing in a massive warehouse, crowded with people, with the sound coming from all around me. Part of the musical experience is when we let our ego depart from reality, not thinking about trying to imagine things but to be immersed in the physical sensation of the music and the sonic qualities ‘in-the-moment’ that are giving pleasure. In my music, I am trying to conjure something that is large scale and monumental, that is beyond human scope in order to create the idea of the musically sublime. I am doing this through playing with high volume, a wide frequency spectrum and a large number of loudspeakers in order to reach the intended sonic result.

One of the most important aspects of what I am doing is creating a sense of the sublime, both for me as a composer/performer and for the audience. The definition of the sublime that I am using finds its roots in classical antiquity, specifically in the influential first-century Hellenistic treatise Peri Hupsous, attributed to Dionysius Longinus (Gilman, 2009, p. 533). Longinus concerns himself with the emotive force of the sublime as a moral agent, as a persuasive power to move and better the mind.

In my compositional quest, there is always this consideration to create, for me and the audience, something that is almost awe-inspiring, as with the idea of the sublime. I am not aiming to make nice landscapes that are beautiful; I am endeavoring to create epic mountains, hence my orientation towards the concept of the sublime. This is the aesthetic intent of my music: to be massive, impactful, almost symphonic in scale. I want to take the audience on a journey, not a physical or mental one but an emotional, sublime experience.

The idea of the sublime in my music refers more to philosophical aesthetics than to solely musical terms. The notion of the sublime I refer to is Kantian in nature (2007, p. 51): like having vertigo, like teetering on the edge, where the ego disappears. It is the idea that people are involved in an experience in which they get carried along with the flow of the music that I am creating, to such an extent that their own ego, and their own perception of the music, become less important; they are immersed in this experience, where they become one as a whole collective.

In my music there are powerful, visceral, somatic (bodily) booming sounds, not simply pleasurable (beautiful) sonic content; this relates to the idea of the sublime in the philosophy of art. In aesthetic theory, “a more classical conception of beauty might claim that something is beautiful because it is a correct and coherent arrangement of parts into a whole” (Beauty, Stanford Encyclopedia of Philosophy[1]). A piece of music is beautiful because all of the elements within it fit together, creating a perceived sense of good balance and of continuation or flow, to form something pleasurable. Unlike the beautiful, the sublime is impressive and awe inspiring.

The sublime usually involves the impression of an object which could be fearful but does not inspire fear at that moment. Philosopher Immanuel Kant conceived of the sublime as “the power of reason over nature” (Kant, 2007, p. 51). His description of the dynamic sublime falls in line with Edmund Burke’s conception: “it is the ability of reason to overcome the feeling of fear that we get from seeing something which can be dangerous but poses no current danger” (Burke, 1958, p. 69). For example, Anton Hansch’s painting of a majestic mountain range (see Figure 3 below) is connected to the idea of the sublime: it inspires awe and even terror through its vastness and power, but because we are distant from that potential danger and in a safe place, we can appreciate its grandeur and the feeling of the sublime.


Figure 3 – Anton Hansch’s painting of the big three mountains of Eiger, Mönch, and Jungfrau, part of the Swiss Alps (1857).

Thus, the beautiful elicits positive responses or emotions to sounds, while the sublime manifests unease or distress, often followed by pleasure once we realise that the sounds pose no immediate danger. My composition So It Goes (2017) includes instances of sonic intensity (at 22’22” and 23’18”), where the musical climaxes can sometimes sound powerful and be overwhelmingly intense, but there is then a release of sonic activity and a return to calmer moments. This musical crescendo and decrescendo connects to the idea of the sublime: facing the exhilaration of reaching peaks without falling off the cliffs, or the fear of stumbling down deeper into the valleys.

My work plays with these ideas of sonic puissance but also, it offers peaceful and exquisite musical transitions in order to reach the sublime. This is echoed in Kiene Brillenburg Wurth’s thesis on The Musically Sublime (2002): 

It is to say that the particular structure of Kantian sublime experience parallels the structure of the process of sublimation in so far as a ‘negative’ feeling of frustration or terror (pain) is removed and transformed into a ‘positive’ feeling of delight or elevation (pleasure). What happens in the Kantian sublime, I will explain, is that an initial, apparently unacceptable awareness of self-limitation (manifested as frustration or terror) is resolved – removed and sublimated – into a delightful, psychologically more welcome, realization of one’s own supersensible power and limitlessness. In the end, the Kantian sublime experience is thus never truly disturbing but rather reassuring: any feeling of helplessness, frustration, or fear, any self-undermining sensation, in all its negativity, promises (if not already implies) a positive ‘result’ of self-affirmation and self-elevation exorcising that very frustration or fear (Brillenburg, 2002, p. xx).

Additionally, the sublime in my music is achieved through the ‘flow’ that extends the duration of my pieces, as with a stream of consciousness. Traditional musical syntax is almost overturned; the sense of musical timing is greatly extended; my tracks last between 15 and 20 minutes. My compositional process is not simply short piece after short piece; it has long, flowing, almost symphonic lines of sound, within which one can perhaps become lost musically and structurally. In this approach, we are carried along as listeners on the surface of the music, but the musical structure is practically impossible to perceive because of the length, because of the changes and transformations that occur, and because the musical phrases are continually developing or evolving. My music is not structured like a conventional EDM track, or a verse-and-chorus pop format in which the listener can more easily orient themselves. There are musical materials that do come back, but they do so in an organic manner. This organic quality lends itself to achieving the sublime, as there is a sense of order but it cannot simply be predicted by the listener.

My compositions relate to an idea developed by Adam Krims (2000) regarding a sublime musical aesthetic, where “the result is that no pitch combination may form conventionally representable relationships with the others; musical layers pile up, defying aural representability for musically socialized Western listeners” (Krims, 2000, p. 68). Krims describes this idea as the “hip-hop sublime”. Following Krims’s representation of this musical style, I suggest a “techno sublime”, one that connects with the reality of its environment: desolate, underground warehouses, which also suggest a figure for inner-city life marked by post-industrial urban devastation.

We are reminded of Edmund Burke’s formulation of the sublime. Burke goes on to describe the fear of being smashed by “unfigurable” power. [Techno sublime’s] representation situates the listener in the geographical and social location from which capital’s smashing power is most visible. In other words, [the Techno sublime] presents a view from [the destitute post-industrial rave parties] at the massive, unfigurable but menacing force of world capital (Krims, 2000, p. 72).

Conceptually, I relate my work to that of the musician Jon Hopkins. In his piece Collider (2013), he plays with the sense of timing and change: rather than focusing on simple harmonies that slowly evolve, the layers do not coincide, and the harmonic layers grate against each other to produce a sense of inner rhythmic instability. These musical elements create a certain musical (physical) unease or disturbance that relates to this idea of the sublime in music.

The feel-good quality of my music can be interpreted as superficial (pleasant to the ears), but there is an intellectual questioning regarding what comes next in order to provide this musical continuum. My objective when I compose/perform is to feed both into what sounds good to me and into different models of musical decision-making. The interrelationships between the fast-paced rhythmic loops of my pieces and the longer, reverberating sounds play with the concept of ebbing water: transitioning, smoothly or otherwise, between current and new sonic materials. This methodology of composing is reflected in my music and resonates with Jacques Attali’s book Noise:

[Music] is more than an object of study: it is a way of perceiving the world. A tool of understanding. […] Music, the organization of noise, is one such form. It reflects the manufacture of society; it constitutes the audible waveband of the vibrations and signs that make up society (Attali, 1976, p. 4).

In my research into the application of spatial technologies and techniques to EDM in order to create immersive and sublime experiences, this is one way in which I use music to understand my place as an original creative person within society. The music is more than an object of study and a technical implementation of compositional principles; it is a way for me to articulate how I express myself in the world.

1 - Consideration of Space in the presentation of EDM

The fundamental research question underpinning this portfolio is: how can real-time spatialisation be used in Electronic Dance Music (EDM), specifically in the context of live performance? I believe the 3D presentation of sound can enhance the visceral qualities of music through a more immersive sound experience. Although we are used to cinematic surround (Dolby 5.1/7.1), surround sound has been little used in EDM. There is a wide array of techniques and tools available for spatial composition, from IRCAM’s SPAT (Spatialisateur) and OctoGRIS to Ambisonics and the 4DSOUND system. I will evaluate these technologies and examine how such techniques are incorporated into my compositional methodology.

An historical overview of composers who have used spatial techniques, and of the often-bespoke places or systems they have created works for, will allow me to situate my compositions within this artistic practice. The different implementations of spatialisation methods that have been used (as well as the principal locations dedicated to this form of activity) will be interrogated. In addition, I will focus on how spatial techniques are integral to my creative practice. I will demonstrate that, with better access to powerful yet simple and efficient technologies, spatialisation in EDM can enhance the listening experience.

1.1 - Why use large multichannel spatialisation techniques?

Ludger Brümmer writes that “Spatiality in music is more than a parameter for the realisation of aesthetic concepts. Spatiality aids in the presentation, the perception, and the comprehension of music” (Brümmer, 2016). Regarding the research he conducted at the Zentrum für Kunst und Medientechnologie (ZKM), Brümmer goes on to state that:

Human hearing is capable of simultaneously perceiving several independently moving objects or detecting groups of a large number of static sound sources and following changes within them. Spatial positioning is thus well-suited for compositional use (Brümmer, 2016, p. 2).

Throughout my research, the spatial location of sounds is not used to convey a meaningful structure. Despite Brümmer’s contention above, from my work in spatial sound I contend that no more than four layers of sonic movement can be accurately perceived and recalled. This means that the integration of space as a structural parameter needs to be carefully controlled. What interests me about combining multiple spatial trajectories is how they coalesce in a three-dimensional space to provide a sense of immersivity and viscerality.

The sense of immersivity through the use of spatialisation arises from “the fact that human hearing is capable of perceiving more information when it is distributed in space than when it is only slightly spatially dispersed” (Brümmer, 2016, p. 2). It is a phenomenon where sounds are capable of concealing or masking each other (Bregman, 1990, p. 320).

1.2 - Why spatial music?

In my work, spatialisation is an important dimension of the music; it provides a dynamism and physicality that enhances the auditory experience. Regarding the attribution of value to space in my music, there is an integration of spatialisation with a developmental sense of how the track unfolds; it is a meaningful part of how I structure my pieces. Space is actually used as a structural element. Spatialisation prompts critical compositional decisions: do I keep one or more sonic elements moving in space, or should I keep them static? Should I reduce the immersive environment by thinning the texture down? It also allows me to build my piece towards a certain type of energy. One important element of spatialisation for me is the articulation of form. A trajectory is not always of structural importance; what is significant is the amount of spatialisation, where the density of spatial movement creates a more or less immersive sense. Depending on the context, a reduction of spatialisation is necessary in a moment of repose or at the beginning of a build-up. Adding layers of movement, with the spatialisation becoming more complex as a result, enables me to articulate form through the emergence and disappearance of sonic elements.

The significance of spatialisation in my work varies; it is present all of the time, but in varying degrees of importance, and it is always perceptible. It is not an add-on effect, which is how Steve Lawler used it when I attended his performance with the Dolby Atmos system at Ministry of Sound in London on August 19th, 2019 (https://www.decodedmagazine.com/steve-lawler-atmos/). My spatialisation can highlight a specific sonic element, or it can emerge and disappear during an intense musical section because, at certain moments, rhythm is the more important musical parameter. In my performance practice, the spatial behaviours are pre-set (depending on its frequency content, each loop has a fixed role attributed in space), and I fade them in and out. During a performance, I do not improvise with spatial movement and I do not create holes within this spatialisation; I always try to create a holistic immersive environment.
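A minimal sketch of this way of working follows, with illustrative band names, roles and fade steps rather than the actual settings of my setup: each loop is given a fixed spatial behaviour according to its frequency band at preparation time, and only its fade level is touched during performance.

    from dataclasses import dataclass, field

    # Illustrative roles only; the bands and behaviours in my own setup differ.
    SPATIAL_ROLES = {
        "bass":  {"elevation": 0.0, "movement": "static centre"},
        "mids":  {"elevation": 0.4, "movement": "slow circular orbit"},
        "highs": {"elevation": 0.9, "movement": "fast overhead trajectory"},
    }

    @dataclass
    class Loop:
        name: str
        band: str                     # "bass", "mids" or "highs"
        level: float = 0.0            # performance fader: 0.0 (out) to 1.0 (in)
        role: dict = field(init=False)

        def __post_init__(self):
            # The spatial behaviour is fixed at preparation time, not improvised.
            self.role = SPATIAL_ROLES[self.band]

        def fade(self, target, step=0.25):
            """Move the fader one step towards a target level (one control tick)."""
            if self.level < target:
                self.level = min(target, self.level + step)
            elif self.level > target:
                self.level = max(target, self.level - step)

    kick = Loop("kick", "bass")
    for _ in range(4):
        kick.fade(1.0)                # the kick emerges; its spatial role never changes
    print(kick.level, kick.role["movement"])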

In my experimentations with space, I relate to the Mexican bandleader Juan García Esquivel and how he must have felt when he used the potential of the stereo image on his recording Exploring New Sounds in Hi-Fi/Stereo (May 1959, RCA Victor) in order to create new listening experiences. I have developed diverse types of spatialisation which I consider effective in my music, and I continue to develop a working method for spatialisation in EDM. It is dance music on its own terms, defined by what I can do, and it establishes a new outlook for contemporary EDM producers. What I am doing has significance, interest and value in itself. Thus, I suggest that we are at the point of establishing a new era of spatialisation for EDM.

There is much valuable research about “spatialisation and meaning” and how it creates a narrative of space. My research is akin to Ruth Dockwray and Allan Moore’s (2000) article about sonic placement in the “Sound-box”; I discuss localisation and movement of sounds in my “Dome-box” (within the SPIRAL Studio).

From the outset of electronic music, the earliest practitioners investigated the spatial presentation of this new genre, particularly in a performance context. “Spatialisation has been an important element in classical electronic music, showing up in work by Karlheinz Stockhausen, Luciano Berio, Luigi Nono, John Chowning, and many others. A mainstream practice emerged in which spatialisation consisted of the simulation of static and/or moving sound sources” (Puckette, 2017, p. 130). Of the composers who have used spatialisation in commercial projects, Amon Tobin is one of the most prominent. His album Foley Room was performed at the GRM over the acousmonium in a multichannel format. He has also scored video games, and he commented on his surround mix for the soundtrack of Tom Clancy’s Splinter Cell: Chaos Theory (2005):

The reason it was easier than a stereo mix is because you have a lot more physical room to spread out all the different sounds and frequencies. So, the issue of sounds clashing and frequencies absorbing all the frequency range in the speakers is a lot smaller when you've got that much more room to play with (Tobin, 2005)[2].

Several commercial electronic artists have also considered space in their CD/DVD releases. In 2005, the Birmingham-based audio-visual collective Modulate produced a multichannel project which allowed 5.1 playback at home (Modulate 5.1 DVD)[3]. The electronic music duo Autechre, consisting of Rob Brown and Sean Booth, explained their use of space in the stereo field when working on the album Quaristice (2008):

If we’re using effects that are designed to generate reverbs or echoes the listener is going to perceive certain sized spaces, so you can sort of dynamically evolve these shapes and sounds to actually evoke internal spaces or scales of things’. […] You can play with it way beyond music and notes and scales. (Brown quoted in Ramsey, 2013, p.26)[4]

When composing music for a multichannel system, adding movement and localisation opens new possibilities for musical expression and listening experience. Such systems provide a multitude of listening positions, which offer new ways of listening to music. These modes of listening cohere with music theorist Ola Stockfelt’s invitation to develop and cultivate, in our modern life, a variety of modes of listening:

To listen adequately hence does not mean any particular, better, or “more musical”, “more intellectual”, or “culturally superior” way of listening. It means that one masters and develops the ability to listen for what is relevant to the genre in the music, for what is adequate to understanding according to the specific genre’s comprehensible context […] we must develop our competence reflexively to control the use of, and the shifts between, different modes of listening to different types of sounds events (Stockfelt, 1989, p. 91).

In his PhD thesis entitled The Composition and Performance of Spatial Music, Enda Bates observed that, “the study of the aesthetics of spatial music and the musical use of space as a musical parameter therefore appears to be a good way to indirectly approach electroacoustic music composition and the performance of electronic music in general” (Bates, 2009, p. 5). Furthermore, he adds “spatial music is in many respects a microcosm of electroacoustic music, which can refer to many of the different styles within this aesthetic but is not tied to any one in particular” (Bates, 2009, p. 5).

Concerning the validity of spatialisation, I have found analogous opinions to mine in Ben Ramsey’s research, where he stated: “this idea of using space as a compositional narrative is a complete departure from more commercial dance music composition practice, and very much enters the realm of acousmatic music, where space and spatialisation is often considered part of the musical discourse for a piece” (Ramsey, 2013, p. 17). This also relates to Denis Smalley’s approach to acousmatic composition and his concept of spatiomorphology:

And I invented the term ‘spatiomorphology’ to highlight, conceptually, the special concentration on spatial properties afforded by acousmatic music, stating that space, formed through spectromorphological activity, becomes a new type of source bonding (Smalley, 2007, p. 53).

Stefan Robbers from Eevo Lute Muzique has engineered a performative sound system called the Multi Angle Sound Engine (MASE). It offers the DJ/performer a multichannel diffusion system which allows them to play with space beyond a stereo system:

The MASE interface offers DJs or producers eight independent audio inputs and a library of sound movements. The user has ample options for assigning a trajectory to an incoming audio signal and to start, stop or localise this. Specially designed software allows users to programme and store their own motion trajectories. The system is space-independent, users can input the dimensions and shape of a room and the number of speakers which are to be controlled (Eevo Lute Muzique quoted in Ramsey, 2013, online)[5].

The ideas behind the MASE system, and the compositional territory it offers, open up a way to incorporate commercial dance music with “the more experimental and aurally challenging compositional structures and sound sources that are found in acousmatic music” (Ramsey, 2013).
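To make this kind of interface more concrete, the following is a hypothetical toy model rather than the MASE software itself: named motion trajectories are stored as functions of time, assigned to one of eight inputs, and scaled to user-supplied room dimensions.

    import math

    class TrajectoryEngine:
        """Toy model of a trajectory library; not the actual MASE implementation."""

        def __init__(self, room=(10.0, 10.0, 4.0), inputs=8):
            self.room = room                  # width, depth, height in metres
            self.inputs = inputs
            self.trajectories = {}            # name -> function of time (seconds)
            self.assignments = {}             # input channel -> trajectory name

        def store(self, name, fn):
            """Store a user-programmed trajectory (t -> normalised x, y, z)."""
            self.trajectories[name] = fn

        def assign(self, channel, name):
            if not 0 <= channel < self.inputs:
                raise ValueError("channel out of range")
            self.assignments[channel] = name

        def position(self, channel, t):
            """Position of an input at time t, scaled to the room dimensions."""
            x, y, z = self.trajectories[self.assignments[channel]](t)
            return tuple(c * d for c, d in zip((x, y, z), self.room))

    engine = TrajectoryEngine(room=(12.0, 8.0, 5.0))
    engine.store("circle", lambda t: (0.5 + 0.4 * math.cos(t), 0.5 + 0.4 * math.sin(t), 0.5))
    engine.assign(0, "circle")
    print(engine.position(0, t=1.5))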

Another artist who has embraced new technologies in order to compose music in 3D is Joel Zimmerman (aka Deadmau5). In 2017, he converted his production studio to be fully compatible and compliant with Dolby Atmos systems. “Deadmau5 even says that he’ll produce all his new songs first in Atmos to give them the most three-dimensional sound, and then “submix” down to stereo after for more common systems and listening” (Meadow, 2018).

1.3 - Temples for sound spatialisation

A variety of 3D sound projects are currently finding their way into the world of nightclubs and EDM culture more widely, demonstrating a steady and growing interest in spaces with sound spatialisation. These environments, both academic and commercial, provide a place to experiment with spatial audio and this could be seen as part of a broader shift towards more public venues and experimental performance spaces valuing immersive, 3D audio experiences.

1.3.1 - 4DSOUND System


Figure 4 - 4DSOUND system (Image by Georg Schroll via Compfight), 2015.

The software for the 4DSOUND system (see Figure 4 above) is coded in Max4Live, a joint project between Cycling ’74 (Max) and Ableton (Live). The founder Paul Oomen explains that “the hardware comprises an array of 57 omni-directional speakers – 16 pillars holding three each, as well as nine sub woofers beneath the floor. By carefully controlling the amount of each sound going to each speaker, it is possible to localise it, change its size, and move it in all directions” (Oomen, 2016).
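As a generic illustration of the principle Oomen describes (and not the 4DSOUND rendering algorithm), the sketch below derives per-speaker gains from the distance between a virtual source and each speaker position, with a 'size' parameter that spreads the energy more widely.

    import numpy as np

    def speaker_gains(source, speakers, size=1.0):
        """Distance-based amplitude panning: more level in speakers near the source.
        A larger `size` spreads energy more evenly, making the source sound 'bigger'.
        A generic illustration only, not the 4DSOUND rendering algorithm."""
        speakers = np.asarray(speakers, dtype=float)
        dist = np.linalg.norm(speakers - np.asarray(source, dtype=float), axis=1)
        gains = 1.0 / (dist + size) ** 2
        return gains / gains.sum()            # normalise so the gains sum to one

    # Four pillar-mounted speakers at different heights (coordinates in metres, illustrative).
    speakers = [(0, 0, 1), (4, 0, 1), (0, 4, 3), (4, 4, 3)]
    print(speaker_gains(source=(1.0, 1.0, 1.0), speakers=speakers, size=0.5))  # localised
    print(speaker_gains(source=(1.0, 1.0, 1.0), speakers=speakers, size=5.0))  # 'bigger' source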

In February 2016, I attended 4DSOUND’s second edition of the Spatial Sound Hacklab at ZKM in Karlsruhe, Germany, as part of ‘Performing Sound, Playing Technology’[6], a festival of contemporary musical instruments and interfaces. During four intensive days, creators, coders and performers experimented with new performance tools, a variety of instrumental approaches and different conceptual frameworks in order to write sound in space. The founder Paul Oomen believes that “spatial awareness and how we understand space through sound plays an integral role in the development of our cognitive capacities” (Oomen, 2016). As a result, he considers that “there will be new ways to discover how we can express ourselves musically through space, and our understanding of the nature of space itself will evolve” (Oomen, 2016). He also writes that:

Spatiality of sound is among the finest and most subtle levels of information we are able to perceive. Both powerful and vulnerable, we can be completely immersed in it, it can evoke entire new worlds – if we are only able to listen. After eight years of developing the technology and exploring its expressive possibilities, it has become clear that the development of the listener itself, the evolution of our cognitive capacities, is an integral part of the technology.

Spatial sound is a medium that can open the gate to our consciousness, encouraging heightened awareness of environment, a deeper sense of the connection between mind and body, empathic sensitivity and more nuanced social interaction with those around us. It challenges us to listen to the world in a more engaging way, offering us a chance to become more sensitive human beings (Oomen, 2016)[7].

According to its founder, “central to 4DSOUND’s plan for the project is to establish a laboratory for artists, thinkers and scientists to explore ideas about space through sound, and create a platform for cross-fertilization of different fields of knowledge to further the development of the medium” (Oomen, 2016). The lab also allows for a new form of spatial listening, enabling the refinement of conscious listening practice, increasing awareness of one’s surroundings and exploring a deeper connection to the self and others. Paul Oomen (2016) hopes to “encourage a change in the quality of our everyday experience” through the 4DSOUND system. Oomen writes:

We are committed to engendering a new ecology of listening, improving the sound within and of our environments and expanding our ability to listen. I think this is a movement that will really begin to take shape over the coming years as we evaluate many aspects of modern life, of our shared environments, and our understanding of sound in influencing this (Oomen, 2016)[8].


Figure 5 - 4DSOUND’s second edition of ‘Spatial Sound Hacklab’ at ZKM in Germany, 2016.

Through my presence at the hacklab, I was able to experience the qualities of the 4DSOUND system and perceive its potential for diffusing music in space (see Figure 5 above). I was also able to interview many of the participants. Ondřej Mikula (aka Aid Kid), a composer from the Czech Republic, stated that “these systems of spatial audio (4DSOUND and Ministry of Sound’s Dolby Atmos) are the future of electronic music.” Furthermore, he mentioned that:

When you have this much space, you can really achieve a ‘clean’ sound because of the mix. When the frequencies are crushing in your stereo mix, you can only put the sounds on the side [or the middle] or using the ‘side-chain’ effect [in order to create frequency space in your mix]. But when you have this full room [of space] you don’t have to worry [about clashing frequencies], you just put the sounds somewhere else. I have lots of [sound] layers in my music (the way I compose) and it handles all of them (numbers of layers) without cutting the frequencies. This is a big advantage for me [when I compose]. (Mikula, 2016)[9]

On site at the ZKM, I interviewed the French composer Hervé Birolini, who commented that:

Spatialisation [in my work] is fundamental. I conceive myself to be a stage director of space, in a close manner of the theatrical term. […] It appears to me more and more like something extremely natural [to include spatialisation when I compose]. It is so natural that for every situation that I have been proposed to participate [and to compose sound], like for stage music, a ‘classic’ electroacoustic music, a work for the radio or else… I will adapt the piece [of music] and its spatialisation to the space that I wish to create (Birolini, 2016)[10].

He added that his experience was unique and offered efficacious results when playing with space on the 4DSOUND system:

In order to create a sense of realism in music, I had the possibility of ‘exercising’ the elevation (sense of height). Although more complex to integrate in a work, it can enhance the surround environment in electroacoustic music. Thus, 4DSOUND is ‘Space’ in all of its dimensions; in front, behind, at the sides, above and below us. I can say that this system is unique, I’ve had the experience to experiment with several [diffusion] systems [around the world] and this one can’t be heard anywhere else. Furthermore, it [the 4DSOUND] operates ‘naturally’ and efficiently (Birolini, 2016)[11].

Some well-known commercial artists have been able to use and perform with the 4DSOUND system, including Max Cooper and Murcof. Max Cooper is a DJ and producer from London, whose work exists at the intersection of dance floor experimentation, fine-art sound design, and the examination of the scientific world through visuals. Murcof is the performing and recording name of Mexican electronica artist Fernando Corona. Murcof’s work with the 4DSOUND system has allowed him to expand and develop his method for structuring narrative and composition with spatialisation: “The system really demands to be heard before writing down any ideas for it, and it also pushes you to change your approach to the whole composition process” (Murcof, 2014)[12]. Max Cooper has acknowledged that: “The 4DSOUND system, and a lot of the work I do with my music in terms of spatiality and trying to create immersive spaces and structures within them, has to do with psycho-acoustics and the power of sound to create our perception of the reality we’re in” (Cooper, 2017)[13].

This unique sound experience is technically fascinating and leaves a great spatial sound impression. One of the sonic characteristics that I found convincing with this system is the realistic and perceptible sense of height, providing a coherent 3D sound image and listening experience. However, the system requires two trucks to transport all of the equipment, which makes staging a 4DSOUND show expensive, and each show is a one-off experience. Therefore, although highly attractive as a format, it is not appropriate to my current research, which aims to utilise commercially available tools in a highly versatile yet portable system that can be performed with Live.

1.3.2 - Dolby Atmos

Figure 6 - London’s Ministry of Sound collaborates with Dolby Laboratories to bring Dolby Atmos sound technology to dance music. Photo credit: unnamed, Found at www.mondodr.com, 2016.

Another system for the spatial presentation of Electronic Dance Music is the newly installed Dolby Atmos technology at London’s Ministry of Sound club (see Figure 6 above). The partnership between Dolby and Ministry of Sound gave rise to an important innovation in the performance of EDM in clubs, allowing music to be spatialised on the vertical as well as on the horizontal level. Matthew Francey, managing editor for Ministry of Sound’s website, wrote: “For the listener, this means that sound can appear anywhere along the left-to-right and front-to-back axes, and also at different heights within the audio field” (Francey, 2016)[14]. What makes the Dolby Atmos system an interesting solution is that it does not require a specific number of speakers for the spatialisation to function. In a classic Dolby Digital 5.1 mix, sounds are assigned to a specific speaker, so if you want a sound to come from behind the listener on the right, you would pan it to the rear right channel. Mark Walton, music journalist, states that:

With Atmos, sounds are "object based", meaning that the sound is given a specific XYZ coordinate [like Ambisonics] within a 3D space, and the system figures out which speaker array to send the sounds through, no matter how many (up to 64) or few (as low as two) there are. Even when the sound is panned, you move it through each individual speaker in the path, creating an immersive experience (Walton, 2016)[15].
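
To make the distinction from channel-based mixing concrete, the toy Python sketch below illustrates the object-based idea in its simplest form: an audio object carries an XYZ position, and a renderer derives one gain per speaker from whatever loudspeaker layout is present, however many or few speakers that is. This is only an illustration of the concept and bears no relation to Dolby’s actual rendering algorithm; the inverse-distance weighting is my own simplification.

import numpy as np

def render_object(object_xyz, speaker_xyz, power=2.0):
    """Toy object-based renderer: given an audio object's XYZ position and
    the XYZ positions of the available speakers, return one gain per speaker.
    Speakers close to the object receive most of the energy, and the gains
    are normalised to constant overall power, so the same object position
    works for 2 speakers or 64. Illustration only, not Dolby's renderer."""
    object_xyz = np.asarray(object_xyz, dtype=float)
    speaker_xyz = np.asarray(speaker_xyz, dtype=float)
    distances = np.linalg.norm(speaker_xyz - object_xyz, axis=1)
    weights = 1.0 / np.maximum(distances, 1e-3) ** power
    return weights / np.sqrt(np.sum(weights ** 2))  # constant-power normalisation

# Example: an object front-left and slightly elevated, rendered to a 4-speaker square
speakers = [(-1, 1, 0), (1, 1, 0), (-1, -1, 0), (1, -1, 0)]
gains = render_object((-0.5, 0.8, 0.3), speakers)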

Figure 7 - Prior to the Ministry Of Sound pilot, tracks were processed using the Dolby Atmos Panner plug-in, which was used to automate the three-dimensional panning of various musical elements (Robjohns, Sound On Sound, 2017)[16].

My experience of using the Dolby Atmos plugin tools (see Figure 7 above) at Dolby’s London studio in August 2017 was an easy adaptation of the knowledge I had gained from the tools I was already using within Ableton. Dolby’s tools tap into a new market of potential users for sound spatialisation. With such commercial tools becoming available, it is clear that spatialisation is a growing compositional element for dance music producers. From discussions I had while at the studio, I learned that Dolby is investing in this technology because they believe it will impact the world of nightclubbing, and they want to develop this market in many of the big metropolitan cities around the world. The first Atmos club system is in London and the second is installed in Chicago (Sound-Bar), with plans for more (Halcyon in San Francisco, 2018). Although I was fortunate to try the Dolby Atmos Panner plug-in, I could not use this tool extensively for the purpose of this research since it is still a private, pre-commercial tool in (beta) development.

Spatial thinking about sound plays an important role for Robert Henke (aka Monolake), and his performance ‘Monolake Live 2016’ is a vivid example.[17] It is presented as a multichannel surround sound experience, which Henke has been experimenting with for many years, and includes versions for wave field synthesis[18], ambisonics and other state-of-the-art audio formats.[19] Richie Hawtin (aka Plastikman) has stated that “experimenting in technologies which also work within that field of surround sound, is not only inspiring and challenging but also a good brush up on skills you may need later in life” (Hawtin, 2005).[20] He has produced a DVD in 5.1 surround sound (DE9: Transitions, 2005[21]) for a home listening experience. Other music artists like Björk (Vespertine, 2001), Beck (Sea Change, 2002) and Peter Gabriel (Up, 2002) have all experimented with surround sound releases, but spatial audio considerations have not become a key focus of their output. Beyond the artists mentioned above, the use of space, in or out of the studio, has not been explored significantly by commercial producers. Whilst this could be due to the major record labels having little commercial imperative to promote surround audio formats (SACD, DVD-A, Blu-Ray), more probable is the lack of a single common software/hardware format that allows producers to travel from one venue to the next and set up quickly and efficiently without bespoke hardware requirements.

1.3.3 - SARC - The Sonic Laboratory

Despite the relative lack of commercial interest in spatial music, in experimental sound, space has been an important consideration since the late 1940s. Karlheinz Stockhausen was one of the early pioneers of electronic music to be interested in the spatial distribution of sound in both his electronic and instrumental music. He was interested in space as a parameter in music that could be manipulated just like pitch and rhythm. Stockhausen wrote that: “Pitch can become pulse […] take a sound and spin it, it becomes a pitch rather than its sound” (Stockhausen and Maconie, 1989, p. 93). This presents an extreme form of spatialisation, stemming from a thinking about rhythm and how the parameters can merge into one another. This idea informed Stockhausen’s works such as Gesang der Jünglinge (1956), Oktophonie (1991) and the Helikopter Quartet (1993). Because of his interest in space, and particularly his performances at the Osaka World Fair in 1970, Stockhausen was invited to open the Sonic Arts Research Centre (SARC) in Belfast on April 22nd 2004. SARC is a world-famous institute for sound spatialisation centred on its spatial laboratory/auditorium, which boasts a floating floor with rings of speakers both under and above the audience – an auditorium designed after Stockhausen’s ideas from Musik im Raum (1959).

Figure 8 - The Sonic Laboratory at Queen's University in Belfast, 2005.

Whilst the SARC Laboratory (see Fig. 8) and the 4DSOUND system both offer a unique spatial experience, I do not want to work with a bespoke system. What I aim for is a system in which I can utilise off-the-shelf software to play and, more importantly, to perform: a flexible and practical tool for creating spatial EDM in a variety of musical spaces.

1.3.4 - Sound Field Synthesis Methods

Figure 9 - The world's only transportable Wave Field Synthesis system, from ‘The Game Of Life’ (gameoflife.nl), was stationed in Amsterdam for the ‘Focused Sound in Sonic Space’ event in 2011.

Ambisonics and Wave Field Synthesis (see Fig. 9) are two ways of rendering 3D audio that both aim at physically reconstructing the soundfield. They derive from distinct theoretical considerations of sound and how it propagates through space, and consequently they treat sounds in different ways.

Wave Field Synthesis (WFS) allows the composer to create virtual acoustic environments. It emulates natural wave fronts according to the Huygens-Fresnel principle (an approach developed by A.J. Berkhout in the Netherlands since 1988) by assembling elementary waves synthesised by a very large number of individually driven loudspeakers. In much the same way that complex sounds can be synthesised by additive synthesis using simple sine tones, so in Wave Field Synthesis a complex wavefront can be constructed by the superimposition of spherical waves. The advantage is a much-enlarged sweet spot. The concept behind WFS is the propagation of sound waves through space and the positioning of the listener within this environment. It offers precise sound localisation but requires a huge number of speakers to prevent spatial aliasing; it is therefore expensive and impractical for my own use.
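
As an illustration of this principle, the sketch below (a deliberately simplified Python example, not a full WFS implementation) derives a per-speaker delay and gain for a virtual point source: each loudspeaker is driven as an elementary wave whose timing and level follow its distance from the source, so that the superimposed waves approximate the source’s wavefront. The real 2.5D WFS driving function also includes a spectral pre-filter and an angle-dependent weighting, which are omitted here.

import numpy as np

C = 343.0  # speed of sound in m/s

def simple_wfs_delays_gains(speaker_positions, source_position):
    """Very simplified WFS-style driving values for a virtual point source.

    For each loudspeaker, the delay is the travel time from the virtual
    source to that speaker, and the gain falls off with distance, so the
    superimposed elementary waves approximate the source's wavefront.
    (Omitted: the pre-filter and angle weighting of real WFS.)"""
    speaker_positions = np.asarray(speaker_positions, dtype=float)
    source_position = np.asarray(source_position, dtype=float)
    distances = np.linalg.norm(speaker_positions - source_position, axis=1)
    delays = distances / C                    # seconds of delay per speaker
    gains = 1.0 / np.maximum(distances, 0.1)  # simple 1/r attenuation, clamped
    return delays, gains

# Example: a straight line of 16 speakers, 10 cm apart, virtual source 2 m behind them
speakers = [(0.1 * i, 0.0) for i in range(16)]
delays, gains = simple_wfs_delays_gains(speakers, (0.75, -2.0))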

Ambisonics was pioneered by Michael Gerzon. It quickly gained advocates throughout the 1970s, such as Dave Malham at the University of York, but was never a commercial success. Only recently, since its adoption by Google and the games developer Codemasters, has it achieved significant attention. The resurgence of Virtual (Immersive) Reality has seen companies such as Facebook and YouTube adopt spatialisation (3D sound) in their applications in order to provide audio content with a binaural spatial experience.

Ambisonics is a type of 3D spatialisation system that, like WFS and Dolby Atmos, is not speaker dependent, and creates virtual spaces within a speaker environment. A full-sphere, fifth-order Ambisonic soundfield requires a minimum of 36 channels to encode the spatial information, and increasing the number of speakers increases the detail of spatial perception. This spatial technique is achieved by manipulating the phase of sound sources rather than through amplitude changes, and it can result in a blurring of transients, which is not good for the type of music I make. The lack of transient clarity on kick drum and hi-hat samples has caused me to search for an alternative solution despite the portability of the system.
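
The 36-channel figure follows from the standard relationship between Ambisonic order and channel count: a full-sphere soundfield of order N requires (N + 1)² spherical-harmonic components. A minimal Python check of this relationship:

def ambisonic_channel_count(order: int) -> int:
    """Channels needed for a full-sphere (3D) Ambisonic soundfield of a given
    order: one per spherical-harmonic component, i.e. (order + 1) squared."""
    return (order + 1) ** 2

assert ambisonic_channel_count(1) == 4    # first order: W, X, Y, Z
assert ambisonic_channel_count(5) == 36   # the fifth-order figure cited above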

When assessing different software and hardware systems for spatialisation, and tools for performance I had the following questions in mind:

-       Does speaker size found in WFS bring issues for bass resolution for EDM production?

-       Does the lack of height (elevated sounds) on the WFS system impact the immersivity that height speakers provide?

-       Is my live performance setup, consisting of the Push 2 and the Novation Launchpad control XL instruments, compatible with WFS?

-       In Ambisonics are the transients within the soundfield too blurred and not precise enough for low frequency materials found in a kick drum?

My solutions to these questions have shaped my research and technical setup.

1.3.5 - SPIRAL Studio

The studio I have concentrated my research activities in is the SPIRAL (Spatialisation and Interactive Research Laboratory) at the University of Huddersfield. (See Figure 10 below)

Figure 10 - SPIRAL Studio at the University of Huddersfield, 2015.

My research has facilitated new ways for me to compose music and has developed my approach to spatialising sounds in the SPIRAL Studio. In order to compose and perform live within an immersive listening environment, I have explored many tools but settled on Ableton Live[22], in conjunction with the Push 2 and Novation’s Launchpad control XL (see Figure 11 below), using Max4Live spatialisation objects. This integrated performance and compositional setup has enabled me to create EDM with a sense of sound envelopment using a system comprising 24 channels arranged in three octophonic rings of speakers. From a compositional perspective, and drawing on Brümmer’s insights mentioned earlier, the SPIRAL allows me the perceptual freedom to add more layers to my music than the clustered, traditional stereo approach to sound.

Figure 11 - Ableton Push 2 and Novation’s Launchpad control XL, 2017.

Concerning the earnest attention given to space by acousmatic composers, I concur with Brümmer’s findings in his article New developments for spatial music in the context of the ZKM Klangdom: A review of technologies and recent productions, where he states that:

The musical potential of spatiality is only beginning to unfold. The ability to listen consciously to and make use of space will continue to be developed in the future, larger installations will become more flexible and more readily available overall, and the capabilities of the parameter space will be further explored through research and artistic practice. This will make it easier for composers and event organisers to stimulate and challenge the audience’s capacity for experience, as the introduction of the recently introduced object-based Dolby Atmos standard indicates. But composers will also find more refined techniques and aesthetics that will take advantage of the full power of spatial distribution. If this happens the audience will follow, looking for new excitements in the perception of sound and music (Brümmer, 2016, p. 18).

Ultimately, my aim in this research has been to bring together real-time spatialisation and live composition. Gerald Bennett (Department Head at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris from 1976-1981 and Director of the Institute for Computer Music and Sound Technology at the Hochschule Musik und Theater, Zurich from 2005-2007) noted that, “finding a balance between spatialisation and the restriction of interpretation in performance is difficult” (Bennett, 1997, p. 2). I want, as much as possible, to create and plan my compositional spatial movement in the studio but to be able to intervene in the spatial trajectories assigned to sounds during my performance. When working in the studio, I noticed that when I use more than two channels (stereo), the additional speakers allow my sounds to ‘breathe’ and facilitate a spatial counterpoint that generates musical relationships or dialogues between them. Sometimes, the clustering of sound materials grouped together within a stereo file can lead to sounds masking each other in ways that they would not when presented over large multi-channel systems. This is not merely a matter of mixing skill within the stereo field, but rather about the spreading or separation of frequency content within a 3D space to create a sense of immersion and viscerality that is not possible within the stereo field. Thus, having spectral divisions in my sounds, what I call ‘gravitational spatialisation’, helps me to achieve the immersive quality that I want the audience to experience no matter where they are situated within the performance space.

1.4. - Composition and spatialisation tools

The SPIRAL Studio is a 25.4 channel studio. It comprises three octophonic circles of speakers that provide a height dimension, a central high speaker pointing straight down to the sweet spot, and four subwoofers. Within Ableton Live I experimented with several software tools that enable spatialisation.

The Spatial Immersion Research Group (GRIS) at the University of Montreal developed OctoGRIS[23] and SpatGRIS, Audio Unit plugins for controlling the movement of sounds over loudspeaker arrays; the latter supports up to 128 outputs and includes a height dimension, but was released towards the end of this research project, in March 2018. OctoGRIS allows the user to control live spatialisation over a dome of speakers from within an audio sequencer. Considering the number of spatial gestures that I wanted to apply to several audio loops, however, the plugin used too much CPU. Therefore, I had to find an alternative, less CPU-intensive solution.

Another tool that I tested was MNTN (The Sound of the Mountain)[24]. This software allows the user to design immersive listening experiences with a flexible, lightweight and easy-to-use graphical user interface (GUI) for spatial sound design. With it, the user can play the space as an instrument. MNTN (see Figure 12 below) enables the user to perform spatial concerts with real 3D sound, with as many loudspeakers as desired. As I developed and built my performance setup around Ableton’s software, I selected a tool already integrated within Live’s plugins, since I was more interested in the aesthetic application of these tools than in their technical implementation.

Figure 12 - MNTN, the software was developed with the idea of enabling the production of immersive sound, 2017.

I have used the Dolby Atmos plugin in Dolby’s studio (with the Rendering Master Unit – RMU). It was easily adaptable to the software I already use: Ableton Live. Dolby is developing a plugin to be used within popular DAWs so that users can integrate the spatial dimension into their work, but it is unfortunately not yet commercially available to dance music producers and the general public.

1.4.1 - How do software and hardware tools affect my workflow

My goal is to bring aspects of my past acousmatic practice into dance music in order to create immersive and visceral sound environments that are still novel and relatively uncommon. I am not claiming that I am developing new sound synthesis, sound processing or sound spatialisation techniques; rather, it is about the integration and application of these ideas into my compositional practice. Even though there are many other spatialisation techniques that I could have used throughout my research, and I could have developed far more sophisticated directional or structural sound trajectories and gestures, the desire to work and mix live has shaped the tools I have chosen to work with. The concept of performing spatial music is fundamental for me. This extends far beyond sound diffusion – a practice common in acousmatic music – and more towards the concept of ‘real-time composition’.

I decided not to use a Max patch with SPAT to create sound trajectories, which, though perhaps more sophisticated and controllable in the studio, caused CPU issues when used in real time with multiple instantiations as plug-ins on separate tracks. I have aimed to use and adopt standardised tools in order to optimise their potential and create something that is highly flexible and easily transferable from one system to another without having to install additional software tools. Whilst I acknowledge that my technical setup has allowed me to achieve my research objectives, I also acknowledge that the system has its limits – ones that I would like to transcend as my practice continues to develop. I have used all of Live’s sends features to explore its potential as a real-time spatialisation tool. In this respect, my research has been highly successful, as I have been able to create live sets of 24.4-channel EDM of up to two hours on my live streaming YouTube channel.

1.5 - Spatialisation and EDM: My setup

When performing my work, I aim to create an immersive and visceral flow of sound that carries the listener like an ever-moving wave. Immersion in that musical flow is more important than the musical dynamic or listening to the gestural dynamic, because it is more about feeling and absorbing the music than appreciating key changes or a specific sound. “Low-frequency beats can produce a sense of material presence and fullness, which can also serve to engender a sense of connection and cohesion” (Garcia, 2015). It is the effect that becomes more important than the music itself, and because there are not many contrasting musical sections, the listener can become immersed in it. Rupert Till, in his article Lost in Music: Pop Cults and New Religious Movements, writes:

In most traditional societies, Western European culture being a notable exception, musical activity is a social or group-based activity, and is associated with the achievement of altered states of consciousness. […] Music has the power to exert enormous influence on the human mind, especially when people are gathered in groups, and the euphoric power of group dynamics is brought into play (Till, 2010, p. 12).

In my work, I intend the audience to achieve such an altered state of consciousness, offering them a musical journey into the Kantian sublime. I am also more interested in the audience perceiving the movement of the sounds in space than in them attending to the specific trajectories of individual sounds that create the space within which they are situated. The way I create a sense of immersion and viscerality in my music is not just about volume, rhythm and repetition, but about how I handle the musical material through long, emerging textures. These textures accumulate through the sense of musical flow rather than as discrete blocks of sound.

What I intend with my spatialisation research is to provide an enhanced experience of EDM. In my opinion, spatialisation is not just structurally or compositionally significant; it enhances the nightclub experience by creating a sense of immersivity with the distinct visceral quality that arises from enveloping club-goers with music in a particular space. It is this experiential sensation of immersivity that drives my spatial thinking rather than abstract concepts.

The method I have used to spatialise my sounds allows me to perform on a sound system with up to 24 speakers. Most sound systems I have come across have fewer speakers than this, so I can adapt my Ableton sessions to a multitude of speaker setups and presentation formats. The Max4Live plugins “Max Api Ctrl1LFO” and “Max Api SendsXnodes” provide spatialisation easily and intuitively (see Figure 13 below).

Figure 13 - Max4Live spatialisation tool for multichannel diffusion, 2017.

This pair of plugins allows the user to send audio to anywhere from 1 to 24 channels at a time. The number of speakers selected can vary and can be modified throughout the composition process. My technique was influenced by the position of the three rings of speakers at different heights in the SPIRAL Studio. In keeping with my concept of ‘gravitational spatialisation’, I decided to keep most of my loops containing ‘heavy’ low frequencies on the bottom circle of eight speakers while moving or positioning the mid and high frequencies on the two rings of speakers above. This creates a ‘gravitational spatialisation’ where higher-pitched sounds are usually heard above the lower bass sounds or kick. Since the ears perceive and localise high-pitched sounds more easily (Lee, 2014), I tend to place these sounds on the higher rings of speakers. This finding is supported by Hyunkook Lee’s article Psychoacoustic Considerations in Surround Sound with Height, where he states that: “The addition of height channels in new reproduction formats such as Auro-3D, Dolby Atmos and 22.2, etc. enhances the perceived spatial impression in reproduction” (Lee, 2014, p. 1).
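
A minimal sketch of this ‘gravitational’ routing logic is given below, assuming a Python analysis step that measures each loop’s spectral centroid and assigns it to one of the three rings (speakers 1-8, 9-16, 17-24). The thresholds and the centroid measure are illustrative assumptions of mine; in practice I make these assignments by ear within Ableton using the Max4Live sends.

import numpy as np

# Speaker numbering assumed in the text: 1-8 lower ring, 9-16 middle, 17-24 upper.
RINGS = {"low": range(1, 9), "mid": range(9, 17), "high": range(17, 25)}

def spectral_centroid(samples: np.ndarray, sample_rate: int) -> float:
    """Rough spectral centroid (Hz) of a mono loop, used here as a simple
    proxy for how 'heavy' or 'bright' the loop's frequency content is."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

def assign_ring(samples: np.ndarray, sample_rate: int) -> list:
    """Map a loop to a set of speaker numbers: bass-heavy loops stay on the
    lower ring, brighter material is lifted to the middle or upper ring.
    The 200 Hz / 2 kHz thresholds are illustrative, not the author's values."""
    centroid = spectral_centroid(samples, sample_rate)
    if centroid < 200.0:
        return list(RINGS["low"])
    return list(RINGS["mid"]) if centroid < 2000.0 else list(RINGS["high"])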

When composing music with space as a musical parameter, there are spatial compositional techniques that we can take into consideration, as outlined by acousmatic composer Natasha Barrett:

Common compositional techniques encompass the following:

-Creating trajectories; these trajectories will introduce choreography of sounds into the piece, and this choreography needs to have a certain meaning.

-Using location as a serial parameter (e.g. Stockhausen); this will also introduce choreography of sounds.

-Diffusion, or (uniform) distribution of the sound energy; creating broad, or even enveloping, sound images.

-Simulation of acoustics; adding reverberation and echoes.

-Enhancing acoustics; tuned to the actual performance space, by exciting specific resonances of the space (e.g. Boulez in Répons).

-Alluding to spaces by using sounds that are reminiscent of specific spaces or environments (indoor/outdoor/small/large spaces) (Barrett, 2002, p. 314).

In all of my works, I follow some of Barrett’s considerations. I create trajectories for certain sounds (mainly mid and high-pitched material) in order to introduce a choreography of sounds, which provides a certain meaning to the piece. In addition, the use of localisation is found in every one of my works. I observed that it is perceptually better for low-frequency sonic content to be fixed in the lower speakers, adhering to my concept of ‘gravitational spatialisation’. Diffusion, or (uniform) distribution of sound energy, is often implemented in my compositions in order to create an introduction to the piece; this technique helps me to immerse the audience in sound by using all of the speakers surrounding the listeners. We can hear an example of diffusion in my piece Chilli & Lime (2017), where the main part of the piece is introduced by a guitar loop. In this piece, there is also a simulation of acoustics in order to play with the sense of space: a delay effect was added to the guitar loop, which helps expand the perceived localisation, making the sound feel as though it comes from beyond the speakers. These are the main items from Barrett’s list that I consider important when composing a spatial work. Other elements in Barrett’s list pertain to other forms of art music rather than being directly applicable to EDM.

Depending on the sonic content of my work, my spatialisation techniques serve different functions when composing. The register of certain sounds can be more perceptible than others; this influences my decisions about how the space will be used or manipulated. Marije Baalman discusses such ideas, writing that:

In a sense – within electroacoustic music – composers are interested in ‘abusing’ the technology; composers are not so much concerned with creating ‘realistic’ sound events, but rather interested in presenting the listener with spatial images which do not occur in nature (Baalman, 2010).

Figure 14 – Layout of speaker disposition in the SPIRAL Studio.

Figure 15 – Diagram of spatialisation for Not The Last One (2017).

My piece Not The Last One (2017) begins with a kick drum on all the speakers of the lower circle (speakers 1-8) (see Figure 15 above). At 0’41”, a second loop (a rhythmical white noise sweep) is placed on the middle ring (speakers 9-16) in order to start expanding the sonic space. Following this, at 1’05” a third loop (another white noise with a fast panning effect) is introduced, circling at moderate speed around all 24 speakers (from speakers 1 to 24), going up clockwise from the bottom and repeating this pattern constantly throughout the piece. This produces a sense of slow movement. A fourth sound layer (a gritty “metallic” rhythmical loop) appears at 1’23”, circling rapidly counter-clockwise, starting on the higher speakers, going down and returning up after the whole sequence of speakers (from speakers 24 to 1). This allows the listeners’ attention to focus on both the global sound trajectories and the multiple sound sources around the performance hall. The fifth sonic layer (a rhythmical white noise beat), introduced at 1’34”, is localised on the higher ring of speakers (speakers 17-24) in order to complete the filling of the whole space with sound. The first two minutes thus comprise not only an exposition of sonic material to be used in the composition, but also a spatial exposition. Other loops added throughout the track are positioned according to a sonic equilibrium that helps create a sense of immersion and movement on the diffusion system, and are adjusted for specific musical passages that require spatial attention.
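
The circling motion described above (the third loop travelling clockwise through all 24 speakers) can be sketched as a control-rate gain pattern in which the virtual position advances through the numbered chain of speakers and is crossfaded between adjacent pairs with equal-power panning. The Python sketch below illustrates this kind of movement only; it is not the Max4Live device used in the piece, and the revolution rate and control rate are arbitrary.

import numpy as np

def circulation_gains(num_speakers: int, revolutions_per_second: float,
                      duration_s: float, control_rate: int = 100):
    """Gain envelopes for a loop circling through a numbered chain of speakers
    (1..num_speakers). At each control instant the source sits between two
    adjacent speakers and is crossfaded with equal-power panning; a negative
    rate reverses the direction (counter-clockwise). Control-rate output, not audio."""
    n_steps = int(duration_s * control_rate)
    t = np.arange(n_steps) / control_rate
    position = (revolutions_per_second * t * num_speakers) % num_speakers
    lower = np.floor(position).astype(int)
    frac = position - lower
    upper = (lower + 1) % num_speakers
    gains = np.zeros((n_steps, num_speakers))
    gains[np.arange(n_steps), lower] = np.cos(frac * np.pi / 2)
    gains[np.arange(n_steps), upper] = np.sin(frac * np.pi / 2)
    return gains  # shape: (time steps, speakers)

# Example: one slow clockwise revolution of all 24 speakers every 8 seconds
g = circulation_gains(num_speakers=24, revolutions_per_second=1 / 8, duration_s=16)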

Figure 16 – Diagram of spatialisation for Cyborg Talk (2017).

In my work Cyborg Talk (2017), I start with a long, synthesized line that is placed across the whole array of speakers (see Figure 16 above). Second, at 0’33” a pad sound is presented, moving slowly clockwise from the bottom towards the middle and higher rings. Third, at 0’49” the sound of a snare drum moves 90 degrees counter-clockwise at every bar (front, left side, back, right side), circling back and forth between the middle and higher rings. The next two loops, at 1’19” and 1’27”, are drum sounds that are spread across the whole array of 24 speakers. A long sixth synthesized sound at 2’21” sets the singular and intriguing mood of the piece, circling slowly clockwise from the bottom towards the middle and higher rings, going up and down. The main character of the piece, fully established in loop seven at 3’05”, is located over all of the speakers. When all of the sounds are playing and some of them are moving, a spatial counterpoint is created that allows the listener to perceive a multitude of sonic possibilities, no matter where they are situated inside the array of speakers.

When I compose spatial music, I take into consideration the sonic elements of a track and how these are best suited to spatialisation using my concept of gravitational space. In wanting to create an immersive space I am aware that many of the finer details in a mix may not be perceived by those listening. Enda Bates concurs with this approach in his PhD research:

As Denis Smalley states the piece as a whole must be looked at, because “the whole is the space or spaces of the piece”. The success of any work of spatial music can therefore only be considered in terms of the overall compositional strategy which describes the relationship between space and every other musical parameter (Bates, 2009, p. 207).

In my work, I have observed that each spatialisation tool is optimised for a certain way of working or performance situation. As such, the most pertinent tool will depend on the particular spatial effect or mode of performance required. For instance, in So It Goes (2017) at 22’13” and 23’11” the audio content is largely focused in the low-frequency region on heavy drum and bass loops that provide an intense sonic immersion; the sounds are not moving much, but they present an oppressive wall of sound. The effect aims to bring the climactic moment to an extreme sensation of immersion achieved through surround sound: a drop in musical intensity, brought about by a filtering effect, culminates in the re-introduction of the musical climax, creating an impactful and visceral sonic experience. The spatialisation was not a primary element of composition for that part of the piece; thus, when creating and developing my musical ideas, I offer what is most appropriate for the sonic journey, focusing on melody, harmony, timbre, rhythm or space as necessary.

With every musical project, I assign a specific movement or localisation to each of the sounds in the composition. Through experimentation, I respect the musical arrangement of the work in order to apply the appropriate spatialisation to each of the sounds, which then remains throughout the piece. For instance, in my work Rocket Verstappen (2017), after establishing the introduction with the first two drum loops on the whole array of speakers (see Figure 17 below), I then add to the spatial dimension by introducing a third loop at 0’38” that circulates clockwise on the middle ring of eight speakers. The fourth layer, at 1’17”, is a low bass loop that further solidifies the low-frequency content of the piece on the lower ring of speakers. The fifth loop, at 1’37”, enhances the use of space even further by circulating counter-clockwise in the upper ring of eight speakers. At that point, the momentum of the piece is fully articulated and then continues forward with new loops at 2’04” and 2’17” that emerge and disappear, developing the structure of the piece while keeping their spatialisation throughout.

Figure 17 – Diagram of spatialisation for Rocket Verstappen (2017).

The dynamic spatialisation of sound can bring a new dimension to EDM. Producers will soon be able to write and perform spatial audio using open-source software, advancing electronic music as we know it. I am seeking to change the way we experience live sound. In my work, it is essential to feel sound filling the whole space. The aim of my spatialisation is to create the sensation that the music is part of the room itself.

1.6 - Studio - Binaural Mix

My composition Not The Last One was a first attempt at translating my works of spatial electronic dance music from 24 channels to a binaural (2-channel) audio version. This work is important since I wish to create a mobile listening format in which people can experience my compositions in their own private space, wherever they are, with a pair of headphones. Since only a limited number of people can come to Huddersfield for my concerts on a diffusion system with 24 speakers, this conversion became a key aspect of my PhD research, allowing me to reach a larger audience and offering them the ability to carry this experience around the world.

The method I have set up provides a sense of immersion and spatialisation with my music when going from 24 channels to 2 channels of audio. It was achieved by recording my live performance on my computer and then replaying the performance while recording it with a Neumann (binaural) recording head. This recording device replicates human auditory perception by having microphones placed in the ears of a dummy head. The technique provides sonic realism since it captures the time difference of a sound reaching one ear before (or after) the other, which is one of the principal cues the human brain uses to calculate the distance and position of sound sources. Of course, one limitation of this recording technique lies in the pinnae of our ears, which differ from one person to another, but the generic dummy head can provide an approximation for most listeners. Thus, with this technique, I was able to capture the spatialisation implemented in my composition and keep the sense of horizontal and vertical envelopment.
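
The interaural time difference that the dummy head captures can be approximated analytically with Woodworth’s spherical-head model, a standard textbook formula rather than anything specific to the Neumann head. The short Python sketch below gives a sense of the order of magnitude of the cue involved:

import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, a typical average head radius

def interaural_time_difference(azimuth_deg: float) -> float:
    """Woodworth's spherical-head approximation of the interaural time
    difference (in seconds) for a far-field source at the given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source at 90 degrees yields roughly 0.66 ms of delay between the ears
print(f"{interaural_time_difference(90) * 1000:.2f} ms")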

In this composition, I wanted to be able to perceive and locate where the sounds were moving in this virtual (binaural) space. The form of Not The Last One has three simple peaks of intensity, each introduced by increasing musical action and reaching a climax (at 2’15”, 5’06” and 8’57”); the middle section is followed by a falling section that leads into a final denouement (starting at 4’13”). I have used sound materials that are mid-high pitched since they are easier to locate in space than the low-frequency material, which I kept fixed mainly in the lower part of the mix.

I have used Hyunkook Lee’s multi-channel studio (Applied Psychoacoustics Lab, University of Huddersfield) in order to record my performance and to capture it with the Neumann Dummy Head. Similar to the SPIRAL Studio, Lee’s studio has surrounding (horizontal) speakers and also elevated (vertical) loudspeakers in order to create a sonic dome. Lee writes that: 

Height channel loudspeakers used in new 3D multichannel audio formats […] add the height dimension to the width and depth dimensions existing in the conventional surround formats. The added height channels are naturally expected to enhance perceived spatial impression (Lee, 2014, p. 1).

This study led me towards conceiving a stereo version of my 3D music. However, so far the dummy head technique alone is insufficient to provide a faithful translation of the binaural sound experience. Therefore, some EQing of the mid frequencies suggested by Lee (+6dB around 3000Hz) was beneficial in clarifying the spatialisation in the piece. Furthermore, in order to create this sense of spatial immersion, I ‘downmixed’ the performance (contained on 11 channels of audio) from the software recording into a stereo file. To achieve this version, I exaggerated the panning of the ‘moving’ elements relative to the binaural version. The clarity of the software version also enables me to provide the punchy and dynamic strength that supports the Neumann mix. At the moment, unfortunately, not every compositional project has frequency content that is suitable for this type of recording technique using the Neumann recording head. However, this is the technique I have found to be the most efficient and satisfying during my research. The potential is there; it is perhaps not as great as expected, but it is still, for me, a better final version than the simpler stereo version. It is difficult to create that sense of height within a stereo mix. My solution is a hybrid formula of techniques to provide this binaural sense of immersion.
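
The mid-frequency boost mentioned above (+6 dB around 3000 Hz) corresponds to a standard peaking equaliser. The sketch below expresses it as an RBJ ‘Audio EQ Cookbook’ biquad in Python; the Q value is my own assumption, since the bandwidth of the boost is not specified in the text.

import numpy as np
from scipy.signal import lfilter

def peaking_eq_coefficients(fs, f0=3000.0, gain_db=6.0, q=1.0):
    """Biquad peaking-EQ coefficients (RBJ Audio EQ Cookbook) for the kind of
    mid-frequency boost described above (+6 dB around 3 kHz)."""
    a_lin = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

def boost_mids(signal, fs):
    """Apply the peaking boost to a mono signal or to each channel of a
    (samples, channels) array."""
    b, a = peaking_eq_coefficients(fs)
    return lfilter(b, a, signal, axis=0)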

To support this procedure, a fellow researcher on sound spatialisation at the University of Huddersfield, Oliver Larkin, has also worked with the recorded files to provide a clearer sense of spatialisation. He achieved this by filtering the low-frequency material out of the dummy head recording, creating a mono version of that low-frequency content, and finally adding it back to the filtered Neumann dummy head mix as his final version of the immersive audio mix. Larkin kept the mid- and high-frequency material to provide the sense of spatialisation from the composition, supported by the bass content in mono. This is the technique he has found to be the most efficient so far in his research.
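
Larkin’s approach can be summarised as a crossover: the binaural recording keeps its spatial cues above a crossover frequency, while everything below it is summed to mono and added back equally to both ears. The Python sketch below is a minimal reconstruction of that idea using scipy; the 120 Hz crossover point and filter order are my own assumptions, not his settings.

import numpy as np
from scipy.signal import butter, sosfiltfilt

def bass_managed_binaural(binaural: np.ndarray, sample_rate: int,
                          crossover_hz: float = 120.0) -> np.ndarray:
    """Recombine a binaural (2-channel) recording so that the spatial cues
    come from the mid/high band while the bass is summed to mono, in the
    spirit of the technique described above.

    binaural: array of shape (num_samples, 2). Returns an array of the same shape."""
    sos_hp = butter(4, crossover_hz, btype="highpass", fs=sample_rate, output="sos")
    sos_lp = butter(4, crossover_hz, btype="lowpass", fs=sample_rate, output="sos")

    highs = sosfiltfilt(sos_hp, binaural, axis=0)           # keep binaural spatial cues
    lows_mono = sosfiltfilt(sos_lp, binaural.mean(axis=1))  # mono low-frequency content
    return highs + lows_mono[:, np.newaxis]                 # add mono bass back to both ears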

There is still a lot of work to do in order to create a greater experience than the stereo version but eventually the quality of the immersive experience will hopefully become more accessible and practical.

2 - Composition As Performance

The contexts of making music in the SPIRAL Studio at the University of Huddersfield and in a nightclub are different, and this difference influences my new works when I return to composing. This process refines my music. It is a music that wants an audience, and I draw inspiration from the external factors present during a performance. I do not distinguish between my audiences (academic or non-academic), but I favour environments where I can play my music for people used to attending nightclubs since there is a sense of cultural and musical kinship. Regarding performability, there can be issues when I transfer my works for 24 channels to other system configurations (such as a nightclub environment). I was able to perform on three occasions on spatial systems that are different to the SPIRAL Studio, and I was pleased to hear positive comments and see the reaction of the audience after they listened to my music. After my performance at the Safari Lounge in Edinburgh (December 28th, 2018), a member of the audience shared with me his appreciation for this novel experience of hearing sounds moving in space. He said he was looking forward to hearing more of this kind of musical setup. Another such performance was at the Re.Sound event (on May 18th 2018) (https://youtu.be/AYRv8sBdaSY) in the Atrium of the Richard Steinitz Building, University of Huddersfield. A video captured the event and we can observe the reaction of the listeners: there are two instances, at 19:18 and at 19:40, where audience members point to the localisation of a sound source. I believe these experiences open the public’s mind to new ways of listening to music and I hope to propagate this sonic environment/experience to more people.

My setup at the University of Huddersfield comprises 24 channels (3 rings of 8 speakers) and, compared to most nightclubs, which usually play music in a mono format, I can provide an enhanced experience through localisation, diffusion, height and trajectories of sounds that mono cannot match. Of course, promoters may not understand why I need such a configuration, but if we look at venues such as the Ministry of Sound and the 4DSOUND system, we can see that the culture in which this music is performed is changing towards more spatial forms. Whilst the industry is moving in that direction, as yet there are not many places that have as sophisticated a setup as the SPIRAL Studio. Thus, I always have to make changes and adapt the idealised version that I create at the University for live performance. I also face other challenges when I do a livestream on YouTube or Facebook, where translating 24-channel audio to a binaural (stereo) format is not yet a simple or efficient process.

Prior to my arrival at Huddersfield, my proficiency in composing electronic dance music was rudimentary, so I needed to sharpen my skills in order to produce works that were on a par with my electroacoustic knowledge. My research aspiration was to apply methods of sound manipulation learned from electroacoustics to EDM content, but the early results were not satisfactory. These attempts drove me towards the investigation of new tools (first the Ableton Push, and eventually the addition of the Novation controller) and methods (using loops and restricting myself to Ableton’s native effects and plug-ins) for creating more suitable EDM.

Over the course of this research, I came across a multitude of research issues and discoveries. Rocket Verstappen (2016) was the first piece in which I was capable of creating a convincing “locked groove”, which is for me a powerful musical idea. The “locked groove” is described in the introduction of Mark Butler’s book as “the ultimate realization of the principle of looping – a manifestation of EDM’s essential structural unit in both physical and musical terms” (Butler, 2006, p. 90). On September 10th, 2016, I presented Rocket Verstappen (2016) during my performance at the Belgrave Music Hall in Leeds. This work created a real moment of intensity in my performance but, at only 8 minutes long, left me wanting to capitalise upon this momentum and intensity. EDM’s format of 7-8 minute tracks restricted and limited my musical flow, so I needed to find a way to continue and extend this musical energy. This is akin to what DJs do in a nightclub environment: continuously mixing tracks one after the other. I found that the bass should preferably come out of the lower speakers in a fixed location. As for the mid and high frequency content, depending on the number of audio loops present, it was effective to use diffusion, height and trajectories of sounds.

In the subsequent piece, So It Goes (2017), I wanted to explore what happens compositionally and formally when you move from 8 minutes to 50 minutes for a track. I was also curious about the idea of the progression of the form and the incremental development of sonic materials: how could I move from one locked groove to another, creating seamless transitions through the concept of emergence and disappearance, whilst maintaining a logical musical progression? The longer form also allowed me to use a variety of musical intensities within the 50 minutes, merging aspects of Techno, Trance and House and flowing between intense and calm musical passages. I aimed at developing a long-form piece that takes elements from each of these three genres and subsumes them into a musical flow with a coherent structure. This arrangement could be intuitively ordered or improvised, but nevertheless has a sense of ebb and flow. This is where I put into practice the dialectic between ease and discomfort, consonance and dissonance, expectation and release. These are the structural principles that guided this musical arc from beginning to end. Regarding the spatialisation, I had to limit the number of loops with spatial movement, otherwise the piece became disorienting. I discovered that a limit of three moving sounds is effective: it does not obscure the spatialisation and the individual loops can still be segregated.

My work relates to that of a DJ since I am also creating a set, and this set has a name, defined boundaries and a flow of musical ideas (emergence and disappearance). I want to explore the possibilities of all of the sonic content. This aligns with Butler’s (2006) idea of “project possibilities”, where I either grant musical resolutions to the audience or deny them. This has a psychological imperative: when people expect things, we can either give them what they expect, or not. This allows me to interact with the audience when the moment requires it. I am thinking like a DJ about the long form: taking the raw material, exploring it in the moment, and exploiting its potential. I choose sounds either because they fit the sonic content (conforming to a particular texture or rhythm, aligning with the musical style) or because I want to deny that expectation. For instance, So It Goes (2017) starts with a more Techno-driven intensity, then calms in its momentum in order to lead to a musical climax more characteristic of the Trance style.

The fact that So It Goes (2017) was excessively weighty (in terms of CPU usage) when using spatialisation resulted in me having to reduce and optimise my Ableton sessions in order to be able to improvise with the sonic content and to provide a spatial dimension to my works. The research imperatives drove me to keep the initial development of a musical idea (containing up to eight audio loops) and, once it is well established, to progress and complement it with an improvisation, using several effects (reverb, shuffler and filter) to manipulate the sounds. In assessing the specific loops and their frequency content, I carefully selected the spatialisation role for each of the individual loops. This new practice led me to a novel compositional method and to the elaboration of my concept of ‘gravitational spatialisation’.

Spatialisation is present in all of my works, but my concept of ‘gravitational spatialisation’ (having spectral divisions) was mainly established in my piece Not The Last One (2017). With this idea, I kept most low-frequency loops on the bottom speakers while moving or positioning the mid- and high-frequency loops on the speakers above. Additionally, in this work, musical loops are at times in parallel and/or contrary spatial motion in order to draw the listeners’ attention to both the multiple sound sources and their motions around the performance hall. This method helped me provide an immersive musical environment, enhancing the live setting of a concert experience.

In the piece Cyborg Talk (2017), a concept that was explored was that of sonic manipulation – a process in which I take one sound and gradually alter its characteristics so that it acts as a bridge from one musical context to another. In doing so, I am selecting pertinent musical features that will enable the processing of loops from one musical context to the next. One of the motivations for this work was to take a certain loop (the cyborg message) and to transform it so that it can blend with the underlying rhythmic pattern. This is imperative in my works: shaping sounds in order to make them fit a particular texture and transform them into another. Furthermore, Cyborg Talk (2017) investigates the idea of metrical localisation for certain loops, where I experimented with quantised musical space. The movement of the loops is synchronised with the tempo of the piece: on every beat, a certain loop changes localisation.

With my final two pieces, Chilli & Lime (2017) and Stix (2017), the emphasis was mainly on the integration and expansion of my live performance setup. They have helped me structure my working methodology for live sessions using the Ableton Push 2 and the Novation controller, while the previous works only used the Push 2. The aim with these two pieces was to add significant elements of improvisation into their structure and sonic development. Ultimately, these findings have impacted how I think about the notion of composition, and my role as a hybrid composer-performer started to take shape.

2.1 - Music Styles - Techno-House-Trance

Techno is predominantly built around the rhythm, specifically how the repetition and the rhythm give structure to the work. There is a sense in which the rhythm becomes more complex with the addition of new blocks of material, and then recedes to a simpler musical state to create a sense of expectancy and rebuilding. The looping layers in my music work to highlight this process. The rhythm is akin to that found in Techno (artists such as Joey Beltram and Maceo Plex) but is developed in a different way. Nevertheless, it is this methodology of movement from rhythmic simplicity to complexity that enables me to use Techno as a starting point. The slow transformation of my musical elements allows me to create that sense of euphoria that I want from my music. It is like being in front of a musical buffet and taking certain characteristics and elements from a genre and blending them in order to generate something new.

Although Techno is the rhythmic behavioural template in my work, I also draw influences from House music (artists such as Dennis Ferrer and Steve ‘Silk’ Hurley), while the textural elements come from Trance. My music merges or hybridises these distinct elements. The way my music diverges from those elements is in its continuity, its texture and its linear development. My sense of musical line is more connected with Trance’s long evolution of sounds (artists such as Tiësto and Paul Van Dyk): sweeping filters, sonic drones and continuous looping sounds. We can hear such a sweeping effect in my work So It Goes (2017), where this effect lasts over forty-five seconds (from 21’37”) before culminating in the climactic return of the beat at 22’22”. When I am using the Novation instrument, I am creating more textural lines of sound, extending the length of my short material through spectral and rhythmic means. The short looping material coming from House and Techno is progressively transformed into Trance-like material through performance with the Novation controller. In Trance music, the sounds often have a sense of gradual evolution; it is more organic, with a musical flow in which sounds spiral around. In Techno, we clearly hear the rhythmic template as the most important element of the minimal structure around which other elements are clustered.

Figure 18 – Electronic Dance Music genres; EDM - three letters… so many styles (Found at www.designinfographics.com, 2016). A documentary about the soundtrack of our dance music era: Can You Feel It - How Dance Music Conquered the World [25]

My music draws on characteristics and components from different EDM genres. In my compositions, I take the bass element from House and the rhythmic element from Techno, and through performance the music hybridises elements of Techno and Trance. This demonstrates the musical innovation that I bring through my research, as I am not just writing Techno music or House music. I had to find a way to present my work, which is based on these genres of EDM (see Figure 18 above) but not fully integrated within only one style.

Another style that relates to my work is Trance music, where “in the section before the breakdown, the lead motif is often introduced in a simplified form and the final climax is usually a culmination of the first part of the track mixed with the main melodic reprise” (Snoman, 2013, p. 268). As Trance music is more overtly melodic and harmonic than other electronic dance music such as House and Techno, it is important to avoid dissonance within a mix in order not to disrupt the flow of a track: “Trance in its early days became known for its long build-ups and explosive climaxes, as well as the hypnotizing or “trance” like feel that the music provokes in its listeners” (Spin Academy website, 2017).

I appreciate the euphoria that Trance can create, which I find missing in House or Techno music. It is a set of ingredients: a low, dark and groovy bass from House, a strong rhythm and sense of repetition from Minimal Techno, and the multiple layers of textural loops, built up through performance, found in Trance. I am most drawn towards the sustained textural looping elements from Trance and the really slow timbral manipulations from Minimal Techno.

In contrast, since my compositions are “characterized by a stripped-down aesthetic that exploits the use of repetition and understated development” (Wartovsky, 1997), I also associate myself with the genre of Minimal Techno “that focuses on rhythm and repetition instead of melody and linear progression” (Sherburne, 2006). The introduction of my piece Rocket Verstappen (2017) illustrates this repetitiveness: without much linear change, the sonic activity literally kicks in only at 2’04”. My music is also “typified by accelerating peaks and troughs throughout the duration of the track and are, in general, less obvious than in House” (Sfetcu, 2014). A demonstration of this can be heard in my work So It Goes (2017), where sonic climaxes are present at 22’22” and 23’17”, for example, while a relaxing ambient section can be heard at 37’29”. There are over 40 styles associated with House music, thus I am only referencing the ones closest to my aesthetic. “Layering different sounds on top of each other and slowly bringing them in and out of the mix is a key idea behind the progressive movement” (Sfetcu, 2014), which is associated with Progressive House music. For instance, these progressive characteristics can be heard at 7’00” of my piece Cyborg Talk (2017), where multiple sonic layers are intertwined, emerging and disappearing as the piece evolves. I find many correlations between Tech House music and my own, since “as a mixing style, Tech House often brings together deep or Minimal Techno music, the soulful and jazzy end of House, some Minimal Techno and Micro House and very often some dub elements” (Bogdanov, 2001). There is some overlap with Progressive House, which can contain deep, soulful, dub, and Techno elements which “often become deeper and sometimes more minimal. However, the typical Progressive House mix has more energy than Tech House, which tends to have a more ‘laid-back’ feel” (Bogdanov, 2001). Tech House music tends to “focus on subtlety, as well as the mid frequencies that add variety on the Techno beats and eschews the ‘banging’ of House music for intricate rhythms” (Bogdanov, 2001).

2.2 - Composition Overview

Much of my research has been undertaken in the SPIRAL Studio at the University of Huddersfield. I approach compositional projects with a specific methodology regarding both form and spatialisation. My work conflates notions of composition and improvisation. Coming from an acousmatic training in Montreal, I am used to a non-real-time studio mixing environment in which every sonic detail is meticulously hewn. In my PhD research, my compositions are better described as live electronic compositions, much as J.S. Bach would use a figured bass and extemporise a keyboard part over a pre-conceived harmonic structure. So, in my works I create a repository of sounds specific to a given composition. The form and processing of the materials is pre-conceived to an extent but is deliberately left open, so that the work can be re-configured anew in each performance.

My pieces are predominantly made up of pre-composed loops (coming from my own collection of sounds and from commercial sample libraries) with additional non-looping sonic material providing further layers of detail. I utilise two basic structural methodologies: 1) I work with a pre-conceived structure that presents an exposition of the chosen materials for a track and then leads on to a more improvisatory framework using the sonic elements from the exposition; or 2) I start with the improvisation itself and then slowly let the pre-conceived structure emerge from it. Additionally, I employ variants on these strategies, such as multiple iterations of the first method followed by a return to the composed material at the end to close the sonic structural loop.

Figure 19 – Livestream performance setup in the Spiral Studio: Laptop with Ableton software and 2 controllers (Push 2 and Launchpad Control XL), 2018.

The two main compositional methodologies I utilise are differentiated both in terms of the number of sonic layers I use and the technology I use to create them (see Figure 19 above). These models act as an abstract blueprint for all my work in this research project. Neither formal archetype indicates or prescribes what is going to happen at the moment-to-moment level; rather, they suggest or map an overall strategy for approaching my pieces. As such, they demonstrate a commonality of structure and compositional thinking between the pieces in the portfolio.

The most important concepts I use are the ‘emergence’ and ‘disappearance’ of sonic materials. These work on many levels within the compositional framework. At the uppermost musical level, I create layers of material that accumulate, often to the point of saturation at the climax of a piece. The sense of musical flow and overall structure I create is attained through the emergence and disappearance of sonic layers. Often, several mixes or versions are created in real-time in the studio. I am interested in capturing an intuitive sense of play with a seamless (de)construction of layers within the overall musical flow. A compositional sense of expectation is achieved through cutting out or adding sonic layers. Often, I maintain a sense of continuity or musical flow by using one or more layers, either rhythmic or more textural, as a skeletal frame while other layers emerge or disappear around them.
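This principle of emergence and disappearance can be sketched in a few lines of code. The following is a minimal, hypothetical illustration in Python/NumPy (the loop materials, tempo and fade lengths are invented for the example and are not taken from my Ableton sessions): each layer is a loop with its own gain envelope, and the mix is simply the sum of layers whose envelopes overlap, so that one layer can act as a skeletal frame while others emerge and disappear around it.

```python
# Hypothetical sketch: layers 'emerge' and 'disappear' via gain envelopes.
# All materials and fade lengths are illustrative, not from my actual sessions.
import numpy as np

SR = 44100                        # sample rate
BPM = 125                         # within my usual 123-128 bpm range
BAR = int(SR * 60 / BPM * 4)      # samples per 4/4 bar


def loop(freq, bars, duty=0.5):
    """A crude rhythmic loop: a pulsed sine tone, one pulse per beat."""
    beat = BAR // 4
    n = bars * BAR
    t = np.arange(n) / SR
    gate = ((np.arange(n) % beat) < beat * duty).astype(float)
    return np.sin(2 * np.pi * freq * t) * gate * 0.2


def envelope(n, fade_in_bars, fade_out_bars):
    """Linear emergence and disappearance over a given number of bars."""
    env = np.ones(n)
    fi, fo = fade_in_bars * BAR, fade_out_bars * BAR
    env[:fi] = np.linspace(0.0, 1.0, fi)
    env[-fo:] = np.linspace(1.0, 0.0, fo)
    return env


bars = 16
layers = [loop(55, bars), loop(220, bars, 0.25), loop(880, bars, 0.1)]
# Each layer fades in and out at different points: one acts as a skeletal
# frame while the others emerge and disappear around it.
mix = (layers[0] * envelope(bars * BAR, 1, 8)
       + layers[1] * envelope(bars * BAR, 4, 4)
       + layers[2] * envelope(bars * BAR, 8, 1))
```

In performance I shape these fades by hand on the controllers rather than programmatically, but the underlying logic of overlapping gain trajectories is the same.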

Like contemporary producers such as Richie Hawtin (see Figure 20 below), who uses Ableton as an on-the-fly compositional/improvisational software tool in live performance for the Plastikman show (Bougaïeff, 2013, p. 8), I often start with a set of pre-composed materials that I have ordered in an Ableton session. I gradually introduce new sonic materials that either complement or contrast with existing materials depending on the sense of flow I am seeking to achieve. The temporal structure of the work is open, dependent on how many repetitions of a given loop or textural combination of loops I desire. Listening to new loops in sync, whilst I am constructing the Ableton session for a track, enables me to evaluate the sonic potential of the multiple layers in different combinations.

Figure 20 - Point Blank Performance Masterclass with Richie Hawtin July 31, 2015 (Damian Albetto).

Although my work is predominantly based on loops, I still feel the necessity for my tracks to have a specific direction or musical purpose. By way of illustration, I draw on the similarity between the act of composing with repetitive loops and the act of performing massotherapy; both involve repeated patterns of activity that, over time, allow additional layers (of music or muscle) to be revealed.

This act of repetition permits a more micro-level examination of a musical (or bodily) structure. The repetitions release the tension. The act of repetition with gradual micro-changes is a fundamental element of the improvisational parts of my practice. Whereas in the pre-composed sections of my work the focus is on the combination of discrete layers, in the improvised part it is the sound manipulation of these individual layers that is both more important and far more radical. The layering of materials is often less dynamic; rather, there is more emphasis on the timbral, rhythmic, and spectral development of the materials. This different approach of thinking in ‘horizontal’ layers of sound or ‘vertical’ manipulation of sound correlates with the technology I use in each section. I am using two different pieces of technology: the Ableton Push 2 and the Novation Launchpad Control XL (see Figure 21 below). In the pre-composed elements, I use the Push 2 more frequently. In the improvisatory sections, I tend to sculpt the sonic material more with the Launchpad Control XL. The preference for one technology over the other in each of the sections is due to the ease of parameter manipulation enabled by the Novation’s faders and three rows of pots, as opposed to the Push’s design, which is focused around trigger pads. Each of these tools offers me, as a composer and performer, the potential to create something unique in the moment that can trigger further musical development and sonic transformation.

Figure 21 – My controllers - Push 2 (left) and Launchpad Control XL (right), 2018.
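As a rough illustration of how this division of labour could be expressed in software, the sketch below listens for incoming control-change messages from the Novation’s pots and routes them to the three families of processes I improvise with. It is a hedged, hypothetical example: the CC numbers, the port name and the parameter names are assumptions made for illustration, not my actual MIDI mapping, and the real processing happens inside Ableton rather than in Python.

```python
# Hedged sketch: routing CC messages from the three rows of pots to the three
# families of processes. CC numbers and the port name are assumed, not real.
import mido

CC_TO_PARAM = {
    13: "filter cutoff",    # one representative CC per row of pots (assumed)
    29: "reverb send",
    49: "shuffle amount",
}

with mido.open_input("Launchpad Control XL") as port:   # port name illustrative
    for msg in port:
        if msg.type == "control_change" and msg.control in CC_TO_PARAM:
            value = msg.value / 127.0                    # normalise to 0-1
            print(f"{CC_TO_PARAM[msg.control]} -> {value:.2f}")
```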

My performance practice with the Push has centred on slowly adjusting volume levels to facilitate the emergence and disappearance of layers within the musical texture and to create a sense of flow.

I have built up a performance practice on the Push and the Novation. These instruments, in conjunction with my laptop, allow me to generate a prolific creative environment during my compositional process. To elaborate on this topic, Paul Théberge discusses the idea that:

the assemblage is variable, and the same instrument can be used differently and take on different meanings depending on its place within a particular assemblage … I want to introduce the idea of musical instruments as a kind of “assemblage”, a concept that allows one to take instruments into account not only as they are defined by their technical characteristics but also as they are constituted in variable sets of musical practices, genres, institutional settings, social ideologies and discourses (Théberge, 2016, p. 65).

Since my second year of research, I have focused my attention solely on the live performance of my music, specifically examining the performance-as-composition paradigm. Therefore, I wanted to improve my stage presence and practice in order to deliver a strong musical performance. Essentially, I wanted to compose and create music in a live setting. My electroacoustic background has taught me about shaping and composing my sound material in the studio, and my study of Techno, House and Trance has helped me to create, transform and perform my music in a live setting.

My objective is to have a self-awareness of what it is I am making musically and where it is drawing from, and to demonstrate that I am synthesizing those key characteristics into something that is compositionally my own: a musical manifesto. A fundamental point is that I do not have to do the same thing in every piece, so I accept that some pieces are more influenced by Techno and House, while others are more House- and Trance-like. When I combine these musical ingredients, I can identify them. In the track Cyborg Talk, I wanted to concentrate on rhythm and space. The swirling sounds provide an immersive quality with regard to space (at 2’21”). The groove of the rhythm is transformed through the use of a shuffling effect (from 8’30” to 9’30”). As a result, my music sounds and is structured differently to commercial EDM because I am using different sonic ingredients in different combinations.

2.3 - Compositional Flow

A constant flow of emerging and disappearing musical layers allows me to create a sense of immersivity in my music. I want my music to fill a space with sounds that match and connect with the environment rather than work against it. Through that musical flow, I also want the audience to experience a continual environment of spatial sound, with the ebb and flow of sonic layers creating a dynamic movement in my music. My sense of musical flow enables the creation of a sonic environment that does not draw attention to itself. The audience experiences a sense of immersion within that space, and articulation points make them aware of the musical structure. This sense of viscerality and immersion aims to affect the listener physically through the use of low-frequency material and rhythmical elements (paying attention to the willingness of the audience to dance). If that sense of flow is missing, the audience becomes acutely aware of their body and what it should be doing in that space. If the music changes too much, with too many dynamic contrasts, we lose that sense of flow, because we are constantly brought back to the present moment. Thus, in my works I aim to create sonic environments that are formally articulated but do not contain a strong dynamic gestural profile. I am able to compose music using several layers and rhythms whose gestural content is not too distracting, keeping the listener flowing in an immersive experience. My musical style is like ebbing water: the water can get agitated, but it never drastically ruptures and is always smooth on the surface, with strong currents underneath driving it onwards.

Flow creates the vibe of my music. According to psychologist Mihály Csíkszentmihályi:

Purpose, resolution, and harmony unify life and give it meaning by transforming it into a seamless flow experience, concluding that those who make the most of the potential inherent in music […] have strategies for turning the experience into flow […] They plan carefully the selection to be played, and formulate specific goals for the session to come (Csíkszentmihályi, 1990, p. 6).

The way I treat my material has to do with the fact that I want to have a sense of flow from a compositional perspective. For instance, in Chilli & Lime, at the section beginning at 4’31”, I introduce the new loops at the moment that I judge appropriate to the continuity of the performance. Inspired by the thinking of Csíkszentmihályi, I build on the sense of flow through spatialisation:

The mystical heights of the Yu [Flow] are not attained by some superhuman quantum jump, but simply by the gradual focusing of attention on the opportunities for action in one’s environment, which results in a perfection of skills that with time becomes so thoroughly automatic as to seem spontaneous and otherworldly. The performance of a great violinist or a great mathematician seem equally uncanny, even though they can be explained by the incremental honing of challenges and skills (Csíkszentmihályi, 1990, p. 151).

This allows the audience to become the center of a larger musical environment. It is as if this sense of flow were an organic bubble constantly moving around the audience, with them in the middle of it. This idea resonates with Rupert Till’s concept of altered states of consciousness for the performer:

Playing music is itself a trance-like or transcendental experience, one where musicians often feel like they are somewhere else, have an altered state of consciousness or perceive an enhanced level of connection with their emotions, their fellow musicians and their audience in comparison to when not playing music (Till, 2010, p. 58).

One particular DJ whose performance is recognized as having a strong sense of flow and soulfulness is Derrick May. May is known for his strong melodic style and:

spiritual form of DJing. [He] really just loves to be surrounded by “creative people, entertainers, and even people involved in live stage events” basically anything that involves creating something and seeing it grow and flourish (Reiss, 2003).

On his CD compilation The Mayday Mix (1997)[26], Derrick May executes tight beatmatching which hovers between Techno and House music. Another mix compilation that creates a sense of immersion is Richie Hawtin's Mixmag Live Vol. 20 (1995)[27]. His minimal and funky aesthetic was a standard-bearer of its time and undoubtedly influenced an entire generation of DJs and producers. The way I relate to their practice is in the mixing and matching of several layers of sound. The difference is that I am using up to eight full rhythmical loops simultaneously. Because of my spatialisation this presents a greater challenge regarding saturation of the sonic frequency content, but when appropriately manipulated, it allows me to expand the musical potential of all the layers by improvising with them.

Just as a sculptor is constantly touching, moving and shaping their material, I transform and shape my sounds, and that is why I consider myself to be more in the flow when composing and performing in the moment than when reflecting more passively on my composition in the studio. My compositional process is similar to the figured bass structures found in Baroque music. Just as figured bass prescribes a fixed harmonic structure without determining the overall details, so I constantly add ornamentation to my musical structure. In my pieces, I use similar types of processes (filters, reverbs and shuffling effects) on the different layers. Through these processes, I can connect all the processed sound materials – just as Francis Dhomont does in Espace/Escape (1989), in which diverse sonic materials are unified through panning and delay techniques. Similarly, I apply sonic processing ‘ornamentation’ to different layers, thus linking these musical layers. With the more improvised elements of a piece, I apply a fixed set of processes to different layers; these aid the structure of the piece by giving a sense of continuity and of the musical layers colliding in the way in which they are treated. This processing creates musical coherence, moving away from a direct sense of repetition.

For me it is important that my work is continuously moving and evolving. I relate this musical objective to Richard Middleton who writes in his essay Over And Over:

Ceasing to repeat is to die: this is true for individual organisms, for genes and species, for cultures and languages. Yet repetition without renewal is also a kind of death — the royal road to extinction. Repetition, then, grounds us in more than one sense — and nowhere more than in music, the art of iteration, whose multiple periodicities choreograph our every level of replication (Middleton, 1996, p. 1).

Musical progression can be achieved through a change in the rhythm, the reverberation or the frequency filtering, by adding audio effects, or by changing a loop’s length. I generally do not appreciate having loops repeat constantly without any change; I always try to maintain an active musical flow. In most of my compositions, I create compositional situations where I have a multitude of musical decisions to assess as I am performing. Although I can prepare a piece so as to be less active in the performance, I usually prefer the freedom to interject into the sonic flow and intuitively shape the work into a seamlessly evolving experience.

The way I compose in real-time is more about the process of discovering how materials work together. My musical approach can be compared to Dubspeeka’s Primary K293 (2015)[28]. In Primary K293 there is a sense that structure is built up by introducing layers of materials. These layers are quite discrete, and they have two operational behavioural characteristics: when a high-pitched sound is introduced, it creates a musical rupture or change in the musical texture, and when there is a cut or ‘drop’ it is reintroduced, often with a slight change. In Primary K293, there is a good deal of syncopation (a closed hi-hat at 0’31” and a two-note synth line at 0’50”). There is also a temporal skewing to create rhythmic lines that, whilst attached to a certain loop structure, are not always in time with the bass (they work against the bass even though they are looping), so they create a sense of multiple time streams on each of those layers. This also creates a sense of space and of musical continuity that keeps the momentum going. The lines repeat but they create a sense of temporal shifting. Another characteristic of this track is that all of the materials are very short; they are readily identifiable, often less than five seconds long, and the changes within them are to do with discrete processing rather than gradual spectral or timbral evolution. This is quite different to my music, in which looped sound objects and spaces provide points of articulation. In my music, I use articulation in a different manner; there is little sense of musical dead space where nothing occurs. My music is loop-based and often texturally continuous. The materials loop in order to create longer lines that evolve spectrally. Mine is highly stratified music compared to Dubspeeka’s K293: his instrumental layers are discrete and pristine, whereas mine are loaded and often occupy most of the frequency spectrum, which makes the structure of the piece and the interaction of all of those layers immediately apparent. An example of this musical stratification can be found in my piece Not The Last One (2017), starting at 4’44”, where we can hear a simple beat getting richer and thicker as it progresses and evolves in time. Perhaps Dubspeeka’s approach is the ‘vertical’ placement of discrete sound objects within a loop, whereas mine is about the ‘horizontal’ layers, which emphasizes the sense of flow.

These characteristics of my music are related to Minimal Techno, as it focuses on small changes in the sound material. It also deals more with the texture of the music than with complex melodies. Sam Paganini’s Rave (2014)[29] and my piece Rocket Verstappen exemplify typical Techno features: a kick drum on every beat, a tempo between 118 and 135 beats per minute, and an instrumental rather than vocal character. Regarding the more physical, visceral dimension of my work, I relate this element specifically to Trance, since it creates climaxes and rises to play with the sense of tension and release (see Energy 52’s Café del Mar (1997)[30] and my piece Cyborg Talk). As is evident, I prefer taking different compositional elements from these genres to create music. From a musical point of view, this generates a strong sense of flow, an endless stream of continuously evolving sound in which my intuitive mental flow reprocesses, redevelops and reconfigures the musical interplay.

It is important to be able to play with that sense of expectation: knowing what will happen and what has happened. It is that kind of approach that allows me to structure my pieces, and it is why I have 8-minute and 12-minute versions of my music (Rocket Verstappen 2016 & 2017). I use these techniques of multiple layers to create textural musical masses, and the music is about the recombination of those layers. In recombining those layers, I am able to set up expectations in the listener: what will happen, where it might go, how I can tease the listener by taking them in another direction and finally fulfil their expectations by bringing in the bass or the kick. When the layers stop, how those materials combine and re-combine gives me an insight into how the music is progressing.

Repetition is key in my music, but at some point strongly profiled musical gestures are essential in order to steer the composition in another direction. I am careful not to over-saturate the listener with the same musical material. Thus, removing some elements and bringing them back in a transformed manner changes the dynamic profile of the progression of the piece.

2.4 - Structure: process and intuition

In my work, key formal elements are the gradual accumulation and fragmentation of texturally and rhythmically driven loops by means of layering. These ‘layers’ create a sense of musical flow through ‘emergence’ and ‘disappearance’. This process of handling materials functions on several levels. The simplest is the level of volume, with materials fading in and out. The loops also emerge and disappear temporally, becoming shorter or longer. Emergence and disappearance also work on a textural level, with the layers creating moments of repose or climax.

In my portfolio of submitted works, the first part of a track is often more composed and constructed; it is where I expose the main material and then build towards a first climax. There is often a general architectural framework I establish at the beginning of the piece, so by creating that structure first, I can then play with it in the improvisatory section of the work, de-composing and re-composing layers of material (see Figure 22 below).

Figure 22 – Mixing several loops simultaneously, making them emerge and disappear gradually with the Push 2, 2016.

This ‘emergence’ and ‘disappearance’ of material creates a sense of expectation for the listener and is achieved by techniques such as removing the rhythmic content (kick), filtering the frequency content, or playing a rising loop, amongst others. The same process can be seen at the climax of specific sections of a track; it is the rhythm that increases in complexity, sometimes supported by the amount of sonic layering I use, creating a culmination of sounds at the climactic moment. This increasing rhythmic and musical complexity is often presented with a rising filter sweep to enhance the re-introduction of the bass frequencies in the drop. After the drop, more layers and rhythm are introduced to create a sense of musical explosion or release. Sometimes, I deliberately play with the clichés of the genres in which I work (Trance, Techno or House music), such as removing the low-frequency content with the “Kill switch”, or I play with other musical parameters to create similar effects; such climactic moments are not always about rising frequencies, increasing layers, or filter sweep rises. Like a flowing river, climaxes can be dissolved or elided with the next section through careful crossfading of sonic layers.
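The build-up and drop mechanics described above can be caricatured as a pair of automation curves. The sketch below is purely illustrative (the bar counts, cutoff values and 125 bpm grid are assumptions, not data from my project files): the kick is muted during the build, a filter cutoff sweeps upward, and both are released together at the drop.

```python
# Hypothetical automation sketch of a build-up and drop over 16 bars:
# bars 0-7   groove (kick on, filter open),
# bars 8-15  build  (kick muted, filter cutoff sweeps upward),
# bar 16     drop   (kick and full bandwidth return at once).
import numpy as np

BARS = 17
kick_on = np.ones(BARS, dtype=bool)
kick_on[8:16] = False                               # remove the rhythmic anchor

cutoff_hz = np.full(BARS, 20000.0)                  # fully open by default
cutoff_hz[8:16] = np.geomspace(400.0, 16000.0, 8)   # rising filter sweep

for bar, (kick, fc) in enumerate(zip(kick_on, cutoff_hz)):
    marker = "DROP" if bar == 16 else ""
    print(f"bar {bar:2d}  kick={'on ' if kick else 'off'}  cutoff={fc:7.0f} Hz  {marker}")
```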

The sense of epic viscerality I seek in my works can also be achieved by gradually filling the full frequency range, to reach a point of saturation. Such music can be regarded as oppressive and physical; it is not just a gradual rise but a discreet filling-in of the spectral content. The multiple layers of materials create that sense of saturation. For instance, in my track So It Goes at 19’30”, a build-up section is progressively introduced until it reaches a musical climax at 20’30” that establishes a dense musical plateau upon which I add even more layers in order to create a sense of saturation. At 21’39”, the removal of all the layers except one, which is ‘washed out’ with reverb, creates the expectation of the return of all the musical layers at once; this impacts the listener viscerally as the re-introduction of the powerful rhythm (the drop) delivers such musical intensity. The technique of having each climax increase in its level of intensity can be heard in tracks such as Swedish House Mafia’s Leave The World Behind (2009)[28] and Martin Garrix & Botnek vs Phatnoize’s Animal Discorrida Mash-up (2013)[29], and it correlates with my music. Where my music differs is in the musical technique that achieves this. Classic filtering of lower frequency content does occur in my work, but the subtler use of saturation draws on notions of spectral density, frequency blocking, rhythmic complexity, and increased spatial movement to achieve this musical effect.

When composing in real-time, the layers that I use have particular characteristics that serve different types of musical function. Textural layers are often contrasted with more directed rhythmic loops. Textural layers enable me to concentrate on the spatial element, whilst rhythmic material enables me to build convincing climaxes. The layers of material that I am working with have specific musical functions. The layers themselves have two purposes: complexity and progression. Initially, the layers are often more textural, but as they get closer to the climax, those materials are often developed to be more impactful. In order to achieve a sense of musical climax, textural loops are often removed from the mix and gradually replaced with more rhythmically impactful loops that lead to a structural point of musical intensity. There are normally two or three primary rhythmic loops at the beginning of a track and then a series of loops that contribute more spatial, melodic, or textural elements. In my composition So It Goes, from the beginning until 1’51”, the musical development of impactful loops directs the music toward an initial climactic section of the piece. Following this, I mix freely between these loops, then cut out certain rhythmic ones and build up the track again.

The way I choose to expose my layers of material in a piece is through a focus on processing parameters such as EQ filtering, reverberation and beat shuffling. All of these techniques and materials are connected, so there will always be references back and forth to previously introduced musical loops. What is important for me is how the new elements are able to provide a different perspective on materials within the layers. These sonic variations coalesce to create a musical complexity and sense of progression as layers are recombined in a multitude of new configurations.

The sense of climax is not solely about the materials themselves and their combination but emerges from the combination of layers that create evolving musical textures through which I can achieve a climactic emotional flow. This idea is echoed in Garcia’s (2015) Beats, flesh, and grain: sonic tactility and affect in electronic dance music:

Texture of an object (or sound object) can thus be understood to carry the affective resonances of its past encounters while also engendering a sense of potential ones in the future. […] Texture can thus function as a node of articulation between material encounters and affective experience, through the sensate apprehension of past and future action in the present (Garcia, 2015, p. 72).

By adding layer upon layer, I am able to create the sense of epic emotional impact that I desire, akin to a sublime experience in which the individual ego is subsumed into a collective experience. Compositionally, in this layering of material, musical climax is often reached through frequency saturation rather than through layers of gradually rising pitch materials. In my piece Rocket Verstappen, from 4’39” until the climax at 6’34”, we can hear the application of this method in order to create an effective, albeit untraditional, musical climax.

The layers in the horizontal development of the piece create an impression of rhythmic independence. The off-beat syncopation of certain layers gives the feeling that they are tied to that four beats per bar (4/4) metric structure throughout the composition. When the layers work horizontally, they create a rhythmic complexity that is not just a vertical beat going all the way through the piece. The layers are often made of variable loop lengths, and although mostly continuous, these loops are subject to contraction and expansion. This allows me to use the sense of emergence and disappearance within the loops themselves as well as on the more global layer structure.

By having layers emerge I am able to create variety in both textural and rhythmic layers. An example of textural emergence would be deconstructing the whole mass of musical elements at a climax section and then building another musical flow that leads to a further climax or returns to the initial texture of the sounds. An example can be heard in the improvisation section of my composition Cyborg Talk, starting at 9’21”, where, after deconstructing and reducing the musical layers to a single beat, I reintroduce loops slowly one by one and transform all the musical layers through the use of filters, reverberation or rhythmical effects, before ending the piece with the original sound materials so as to close the symbolic master loop.

Examples of complex rhythmic patterns created through simultaneous layers in my music can be heard clearly in the piece Stix (2017), between 1’45” and 2’15”. The loops contain several rhythms that generate a dense musical bed, providing a composite beat that continues the piece’s momentum. As for textural layers, we can hear the construction and deconstruction of textural elements that produce a musical structure that develops and engenders a smooth progression between the different layers. For instance, in my piece So It Goes, starting from 26’05”, we can hear an expansion of musical ideas where the additional layers selected create a detailed mass of loops that support the momentum of the composition. In the piece Chilli & Lime (2017), there are differences in my approach to layering materials with different rhythmical elements - how I use space in between these elements and their interaction when superimposed. These become immediate moments of difference that illustrate how my work is more textural than articulated. The introduction, from 0’00” until 4’33”, demonstrates this difference particularly clearly.

It is the structural use of musical layers and how I balance textural and rhythmic materials that differentiates my music from that of other artists from an operational perspective. Additionally, the introduction of new musical ideas is often achieved through a process of emergence and disappearance. There is no use of block-like contrasts as is commonly found in EDM.

2.5 - Improvisation

When I begin a piece, the opening musical material is often more composed. In this first part, my musical thinking is more horizontal, having to do with the layers of sonic material and the temporal relationships within those layers, adding nuance and exposing the material (see Figure 23 below). The second, more improvisational part focuses more on the vertical aspects of composition: harmony, textures, adding and removing different elements, deliberately playing and cutting out parts of the sound, changing the parameters of the sound. This is the way I think about musical structure and it is a recurring element in my compositional methodology. This approach of playing on the horizontal and vertical aspects of musical materials emerged as I developed my proficiency experimenting with the Novation Launchpad Control XL as an addition to the Ableton Push 2, altering filters, reverb and shuffling effects (and other audio effects). When I compose sounds for a track, I extend them by reshaping them through diverse effects. Concerning the vertical aspect of a composition, I tend to play with the spectral evolution of sounds. This occurs more in the improvisational part than in the exposition; thus my method of composing usually starts by creating more dramatic evolutionary sections in the first part, while I frequently focus on audio effects in the second part, where it is more about sonic interventions into the musical layers rather than the combination of layers.

Figure 23 – Ableton’s software in conjunction with the Push 2 instrument allows me to develop the structured and arranged sections of my compositions, 2018.

There is a sense of intuitive musicianship when developing or performing an improvisatory section in the studio: mixing the musical layers live and selecting the elements that are most impactful or relevant to the direction of the composition. The use of textural sonic elements helps build up the piece, but I always introduce additional material towards the climax with sonic characteristics that are different from those used to develop the preceding textures.

The improvisation section is more drastic in its removal of sonic content than the exposition of the piece. At the beginning of a piece, just one or two loops are added, juxtaposed or removed to create forward movement. This can be contrasted with the improvisation sections, where the sense of movement and transformation is mostly concerned with how layers are subject to gradual effects changes and transformed by my intuitive decisions. These sonic developments take place on the top surface output layer. They are achieved by three layered effects: 1) on the rhythmical content, using a buffer shuffler effect which allows the musical progression to drift and shift to a different rhythmic pattern; 2) on the spatial dimension (reverb), by making the sounds appear closer or further away; and 3) on the frequency spectrum (filters). These effects create greater transformation of the musical content than the more subtle use of filters in the exposition of the piece, which are linked to the emergence and disappearance of material rather than its transformation.
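A minimal sketch of these three families of processes, using generic DSP building blocks in Python/SciPy rather than the actual Ableton devices I perform with (the shuffle order, decay time and cutoff frequency are illustrative assumptions), might look like this:

```python
# Hedged sketch of the three improvisation processes on a single loop:
# 1) buffer shuffle (rhythm), 2) crude reverb (space), 3) low-pass filter (spectrum).
# These stand in for the Ableton devices I actually use; parameters are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

SR = 44100
BEAT = int(SR * 60 / 125)                # one beat at 125 bpm

def buffer_shuffle(loop, order):
    """Re-order beat-length slices of the loop, shifting the rhythmic pattern."""
    beats = [loop[i * BEAT:(i + 1) * BEAT] for i in range(len(loop) // BEAT)]
    return np.concatenate([beats[i] for i in order])

def crude_reverb(x, decay_s=1.5, wet=0.3):
    """Convolve with an exponentially decaying noise tail (a very rough 'room')."""
    n = int(SR * decay_s)
    ir = np.random.randn(n) * np.exp(-np.linspace(0, 6, n))
    tail = fftconvolve(x, ir)[: len(x)]
    tail /= np.max(np.abs(tail)) + 1e-9
    return (1 - wet) * x + wet * tail

def low_pass(x, cutoff_hz):
    """Filter the frequency spectrum, as with the subtler exposition filtering."""
    sos = butter(4, cutoff_hz, btype="lowpass", fs=SR, output="sos")
    return sosfilt(sos, x)

loop = np.random.randn(4 * BEAT) * 0.1   # placeholder one-bar loop
out = low_pass(crude_reverb(buffer_shuffle(loop, [0, 2, 1, 3])), 2000)
```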

The improvisation section allows me to deconstruct the piece and rebuild it in a new way in order to highlight the musical potential of the loops, be it their rhythmic, spatial or spectral content. The manipulation of these parameters allows me to create new possibilities and directions for the music to evolve, change and transform. I never know exactly what the outcome of my improvisation will be. I have to listen carefully and balance intuition with my preconceived ideas for the musical direction of the track. Although I am thinking more from a compositional perspective, I am aware that this particular practice is important when it comes to performing in front of a large crowd. My compositions are influenced by and derive from my performance practice. When I am in the studio, I will establish certain sonic elements in the development of my piece, but it is only in the ‘flow’ of the performance that the understanding of what works or not, musically, will be realised, and thus what actions (remove the bass, filter the high frequencies, add reverb, etc.) I should take in order to achieve the desired effect and progress my musical ideas. Including pre-composed elements and improvisation in my work allows me a certain freedom to feel what is happening with the crowd and let this direct the musical flow, as well as proposing a certain musical direction through more profiled, pre-constructed material. Both elements have the intention of communicating a musical journey, one in which I choose the materials, but the overall formal shape is open and designed in the moment to create an immersive experience of sound and space.

My use of musical layers has a kinship with the work of Dubspeeka. When analysing Dubspeeka’s Techno-influenced Primary K293, we find that the use of shorter sonic gestures and silences creates a rhythmic interplay between the discrete musical layers of the loops. I work in a similar way with musical layers when producing my compositions. However, my work is more influenced by Trance and hence is more sonically linear, whereas Dubspeeka’s block-like handling of rhythm derives more from House music. In my piece Cyborg Talk at 6’35”, there is an emphasis on a synthesized melody and a House-style 4/4 beat. This section of the track is based around a heavily quantized melody, with a Trance-like, hypnotic and often repetitive feel. Essentially, I use musical ideas from both Techno and Trance to compose a hybrid genre of music.

In my work, the improvisatory section is built around the development of a maximum of eight musical elements. Improvisation will focus on either spectral, rhythmic or spatial elements and it usually occurs after the presentation of the opening composed section. Having a pre-conceived compositional structure generated from a pool of materials is a common feature of my music in the portfolio that allows me to generate a ‘fresh’ version of the piece depending on my emotional state while reading the crowd when performing. I have a pool of material that I can do anything with, but that material is already composed into a loosely-defined structure. It is up to me to navigate interesting ways through that structure anew in each performance.

2.6 - Conclusion: outlets and dissemination

I do not necessarily sit and work in the studio, producing a track that comes to fruition as a studio musician would. When I am talking about composition and improvisation, there is always an element of live performance in the studio. The pieces that I am submitting for my PhD are versions that I am satisfied with, but they are each one version among many. They are open: I have a pool of material and the choice to do what I wish with all the sounds of a piece. The performance of a piece will always create a slightly different version of it, but several versions will reveal the piece’s identity. Overall, for me, there is no such thing as a final version of a piece. My creative process is circular, which becomes apparent when I compare my practice with that of other artists who work differently or similarly (GusGus[30] and Jon Hopkins[31]).

My research also concerns finding suitable means of dissemination for my work, including a multichannel Livestream on YouTube. This works in an unusual way: the music cannot necessarily be performed in 3D every Friday night, since there are not many systems to perform on, so I am searching for further opportunities. One is the multichannel dissemination of fixed versions; this is why I have binaural versions of the work on my YouTube channel. Looking for alternative routes to a commercial market, and investigating possible new means of disseminating music not supported by current commercial formats, is also a valid part of the research. The type of music that I perform is usually played in mono format, and I wish to create multichannel content comprising the 3D space that is written into my music. I have assessed Dolby’s and other spatialisation systems that can provide this kind of sonic experience, as well as looking at different digital platforms. What I am doing is using the knowledge from those systems in order to disseminate my work, because if I create 3D EDM I need an outlet for it.

3 - Performance As Composition

My structuring logic when it comes to improvisation is to present my sonic material, to repeat it for long enough for it to become familiar, and then to apply developmental processes to it. This fundamental performance strategy can vary from composition to composition: depending on the dynamic context, I may or may not apply improvisation. A piece can begin with an improvised section or can simply expose the written structure. Essentially, my work is about being able to communicate the composition in real-time and to respond to the audience. Furthermore, there is a necessity to be seen while performing, because even though there is not necessarily a focal point in the sound, there is a need for one on the performer. One of my musical aspirations is to maintain a connection with the audience, and this relates to the concept of ‘creative authenticity’ found in Simon Zagorski-Thomas’ article concerning functional staging and perceived authenticity in record production:

Functional staging is also fused with and involved in the maintenance of the audience perception of creative authenticity and of the music maintaining the appropriate sonic qualities of what perceived to be an appropriate listening/engaging environment (Zagorski-Thomas, 2010, p. 263).

There are risk-taking elements during the performance that will influence the development and direction of the work. The risks come from manipulating the sounds in real-time and trying to push their musical potential. Playing with the audio effects on the sonic content can also be taken to exaggeration, for instance through too many repetitions. As a result, the musical flow can be lost, or the musical direction can be steered astray. Furthermore, improvisation brings its own hazards. The improvised moments occur when I loop the sonic content and transform the timbral content rather than focusing on sonic progression. If materials are repeated for too long, the audience loses the sense of immersion and flow and instead focuses on the development of individual sounds and elements. The improvisations depend on several factors that can influence or modify my interpretation: the acoustics of the location, my state of mind in the moment, and how many people are listening. The composition will never be exactly the same and will therefore always bring a dimension of risk in live performance. Another element of risk is including loops whose lengths do not fit the conventional “four on the floor” (referring to the bass drum, situated on the floor, which plays on every beat) phrase structure; sometimes they are 6, 10 or 14 bars long, and I need to calculate the right moment to bring them in or remove them. Furthermore, I apply reverb, shuffling or filtering effects to one or more loops, so I have to carefully craft and mix these processed sounds in order to generate variety in the work.
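The arithmetic behind this particular risk is simple to state: a loop whose length does not divide into the standard phrase only realigns with it at their least common multiple. A small worked example follows (the 8-bar phrase is my assumption for illustration, not a rule of the genre):

```python
# Worked example of when odd-length loops realign with a standard 8-bar phrase.
# The loop lengths are the ones mentioned above; the 8-bar phrase is assumed.
from math import lcm

PHRASE = 8                      # bars in a conventional EDM phrase (assumed)
for loop_bars in (6, 10, 14):
    realign = lcm(loop_bars, PHRASE)
    print(f"{loop_bars:2d}-bar loop realigns with the {PHRASE}-bar phrase "
          f"every {realign} bars ({realign // loop_bars} loop cycles)")
```

A 6-bar loop therefore only lines up with the phrase grid every 24 bars, a 10-bar loop every 40 bars and a 14-bar loop every 56 bars, which is why their entries and exits have to be timed deliberately.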

Another element of risk is having a preconceived idea of what I think the composition will be and, while performing it, realizing that I have to move away from my intended structure or intended means of delivery halfway through. For instance, during my performance at Electric Spring 2018, one piece was coming to an end and, because of the reaction of the audience, I decided to extend the piece by repeating the last musical idea and pushing it further. The result was unprepared and improvised, but it became a highlight of my career as a performer (the full clip of the action can be watched here, or the moment of the reprise with this link). Practically, this example demonstrates the process of being able to read the audience, to manipulate the length of a sound when it is working, and to extend musical passages.

Live performance is important for me in order to keep my music fresh, changing and different. It is a music that needs an audience. There is a mastery that comes with practice, and it is one of the reasons that keeps me looking for better ways to play and perform my work. Taking risks in a live setting has enabled me to improve and enhance my compositional practice, as well as my performance skills. I have made plenty of errors when performing, but through perseverance and repetition I have been able to master the performance presentation of my pieces. I have developed a way of composing that stems from all the risk-taking moments of my live performance, and without risk I would never have achieved that level of proficiency or developed the template within which I structure my pieces. EDM is a performed culture and, ultimately, I want to create a magical moment, a moment that did not exist before but that fits the moment so perfectly that it makes me and the audience experience something unique and special.

This unheard-of moment, this unique experience, is part of what I strive for in a live performance. It is the live experience which has importance, less so the recording of it (the recording is there to capture the magical moments when they occur and to keep a memory of them). I enjoy performing and improvising live because I like to respond to the audience. When I perform, the pacing varies according to the needs of the audience, and this is different to performing electroacoustic music, where I cannot change the pace of a work. I prefer the tension between the performer and the audience; between what I think I can get out of my instruments and what the audience wants. While I perform, I have to explore and improvise to the point that I still have the audience engaged; this can be done through the control I have over the delivery and the pace of the music. These risks are limited; I do not take risks with the tempo or the meter of the composition. It is a method of engaging the audience through control of the pacing. I have a specific BPM and that is fine within the stylistic domain. It is about judging the pacing of intensity with the audience, playing with a variety of intensities: at some moments I want a climax and at other times a calm musical section. I have done performances with DJs playing at the same event and I have perceived that the audience reacted more positively to “organic” and “human” music than to pre-configured sets of commercial tracks with a similar level of production. People resonate and connect more with that human factor than with generic DJ sets.

To add to the pitfalls of live performance, I use the “Kill switch” on my Novation controller to cut the bass frequencies from the sonic content, and this is where I risk cutting at the wrong time. I use the “Kill switch” because it is an important auditory, visual and physical cue that interacts with the audience; it is communicated by the exaggerated gesture of my hands pushing down the buttons on the controller. This performance technique is described in Mark Butler’s introduction (2006, p. 3) to his book about the perceptions and performance world of EDM, where the DJ is working the audience and making a drop:

Sometimes (DJ Stacey) Pullen cuts the bass drum out. The audience turns to him expectantly, awaiting its return. For one measure, and then another, he builds their anticipation, using the mixing board to distort the sounds that remain. As the energy level increases, he gauges their response. A third measure passes by, and a fourth, and then – with an instantaneous flick of the wrist – he brings the beat back in all of its forceful glory. As one the crowd raises their fists into the air and screams with joy, dancing even more energetically than before (Butler, 2006, p. 3).

I want people to respond physically, bodily, to my music. I can achieve this by creating a sense of pulse. Furthermore, I search for ways to interact and communicate with the audience. The removal of the bass frequencies helps me to engage with them throughout my performance. Hans Zeiner-Henriksen (2010) describes that process in his article on musical rhythm in the age of digital reproduction:

This is particularly evident in dance music. On the dance floor, the impact of the bass drum is, for example, shown through a common technique used by DJs: its removal (or the filtering out of low frequencies). This has an immediate effect on the dancers: the intensity of their movement decreases, and their attention shifts to the DJ as they await the bass drum’s return. The DJ may keep the crowd in suspense for quite some time while slowly building up to the climactic moment when the bass drum is re-introduced and the crowd delightfully satisfied returns to the dancing (Zeiner-Henriksen, 2010, p. 121).
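In signal-processing terms, the bass-drum removal that Zeiner-Henriksen describes, and my own “Kill switch”, amount to toggling a low-cut (high-pass) filter on the relevant stems. The sketch below is a generic stand-in rather than the actual device behind the buttons on my Novation; the 200 Hz corner frequency and the Butterworth design are assumptions:

```python
# Hedged sketch of a bass 'Kill switch': toggling a high-pass filter on a stem.
# The 200 Hz corner frequency and the Butterworth design are assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100
sos = butter(4, 200, btype="highpass", fs=SR, output="sos")

def kill_switch(stem: np.ndarray, engaged: bool) -> np.ndarray:
    """When engaged, remove the low frequencies; otherwise pass the stem through."""
    return sosfilt(sos, stem) if engaged else stem

bass_stem = np.random.randn(SR)                   # placeholder one-second stem
no_bass = kill_switch(bass_stem, engaged=True)    # the audience awaits its return
full = kill_switch(bass_stem, engaged=False)      # the drop brings the beat back
```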

In order to generate a sense of immersion and spatialisation in a work, I use loops with either low, mid or high frequency content, which offer a feeling of spatial movement. Zeiner-Henriksen (2010) has observed this phenomenon in Björn Vickhoff’s study on emotion and theory of music perception.

Our understanding of high and low, up and down, above and below, and ascending and descending in music presumably informs this relation. Björn Vickhoff writes: ‘Although there are no obvious directions of melody movement, most listeners feel directions in music. When the melody is moving “upwards” or “downwards” you get a feeling of spatial direction’ (Vickhoff quoted in Zeiner-Henriksen, 2010, p. 128).

Another important thing that I want to create is a sense of intimacy, where people are able to see my setup as I am performing. This can also be achieved during my Livestreams: the audience can observe my hands touching and moving the controllers, especially since I have added a second camera in order to focus on the gestures of my hands. The viewer can watch me do these things live, realizing that the music is crafted in real-time, and it keeps them engaged because what they see is not pre-recorded. Sometimes, when I am completely involved with my music, I also dance and jump as I am playing. Thus, the physical correlation between sound and bodily movement is important for connecting with the audience.

3.1 - A Case Study: GusGus’ performance setup

Growing up, I found myself very much enjoying music in a spectator (passive) mode. When I decided to take music seriously and wanted to become an (active) proponent of music, I asked myself this question before initiating my musical studies: did I want to become a DJ? I also had in mind a quotation by Gilbert Perreira: “Be yourself; everyone else is already taken”[32]. Thus, I concluded that ultimately I wanted to play my own music and not perform other people’s music. I knew this route would be a longer pathway to a musical career. I have heard plenty of successful DJs while immersed in nightclub culture. Nevertheless, becoming a composer/performer was simply the natural progression for my personal artistic career.

Electronic music is generated in the studio, which functions as a compositional tool as well as a musical instrument (Eno, 2004, p. 127).

This quotation from Brian Eno (2004) supports the idea of the performing producer discussed in this chapter. Performing electronic music live has become a lifestyle and a way to emancipate and express myself. As with traditional instruments, the hours spent playing develop a proficiency that provides the expertise and skills needed to transmit information (music) to listeners. Over the course of this research, I have developed skills that enable me to create, play and perform music, comparable to a Jazz musician who can improvise on a ‘standard’ of the repertoire; the rendition will never be the same and will always bring something new to the re-creation. I appreciate the possibility of striving for an ‘ideal’ studio composition and being capable of transforming it into a live performance while still retaining its essence. Thus, my work in the studio and in live performance are two versions of the same thing; it is the same work, yet different.

GusGus, an Icelandic band, have been musically important for me for over 20 years. Their performance setup has evolved over the decades, and their current technical setup of hardware/software is something I am interested in emulating with regard to the arrangement of my musical instruments. The two main devices that constitute the core of their setup are an Akai MPC2500 SE sampler/sequencer and a 16-channel mixer, the Mackie 1604 VLZ-3 (see Figure 24 below). The sampler contains all the sounds from each of their tracks. The mixer receives the audio loops from the Akai, i.e. bass drum, snare drum, hi-hat, percussion, synth bass, synth melodic and also voice. In this way, Biggi Veira, the member of the group who controls these instruments, can arrange and re-arrange the musical elements as he pleases. This offers flexibility over the clarity and selection of the musical focus. Along with the singer(s), they interact with each other (voice and instruments) in order to play and synchronize the different parts (intro, verse, chorus, break, outro, etc.).

Figure 24 – GusGus Live setup for performance: Sampler, Controller, Filters and Effects, 2017.

To make their music more organic, they have implemented a variety of sound effects that help them improvise with the materials of the pieces during the performance. This is done with a Doepfer filter unit consisting of a Voltage Controlled Filter (VCF), Low Frequency Oscillator (LFO), Voltage Controlled Oscillator (VCO) and Voltage Controlled Amplifier (VCA); these units are used for generating sound and modulating audio signals, as well as for compression. This live setup offers a multitude of choices for transforming and performing with their sonic elements. Compared to theirs, my setup has fewer options for manipulating individual sounds but works similarly: I use a pool of loops from a specific track and transform the piece as I re-compose/re-arrange it. The main difference is that their group of looping materials allows multiple sonic interactions that decide and influence the musical interpretation of a track. Similar to the way I approach my compositions, they have album versions of their work and use an improvisational methodology to perform them live. An example is their piece Deep Inside, from their album Arabian Horse (2011), and a live version on the KEXP Radio show in 2017.

3.2 - Rethinking Composition as Performance

The creation and manipulation of sound, along with the performance of sound in space, constitute two fundamental areas of interest that bridge the transition from instrumental music through analogue electronic music to, most recently, computer music. Spatialisation and interpretation in performance are constantly being re-evaluated and developed. In my research practice, I explore both of these aspects of music using new tools and technologies. In my practice, there are some similarities to the interpretation of acousmatic music, particularly the aspect of sound diffusion, using a stereo sound file or stems played on an array of loudspeakers. Although fixed multi-channel pieces are sometimes performed with minimal intervention, as all of the relative speaker volumes are fixed in the studio, I personally strive to take charge of the arrangement of sound in space, playing and orchestrating the sonic elements in the concert hall.

Figure 25 – SPIRAL Studio, where these 3 rings of 8 speakers can be considered as a High Density Loudspeakers Array, 2018.

As a guest editor for the Computer Music Journal in 2016, Eric Lyon wrote about composing and performing spatial audio music on High-Density Loudspeaker Arrays (HDLA) (see Figure 25 above):

A small but growing number of computer musicians have taken up the challenge of imagining a new kind of computer music for the increasingly available HDLA facilities. […] In 1950, theorist Abraham Moles wrote in a letter to Pierre Schaeffer regarding the new practice of musique concrète, “As much as music is a dialectic of duration and intensity, the new procedure is a dialectic of sound in space, and I think that the term spatial music would suit it much better” (Schaeffer 2012). The advent of HDLA computer music gives us one more opportunity to realize this early vision of electronic music with great vividness, to the benefit of artists and audiences hungry for new immersive auditory experiences (Lyon, 2016, p. 5).

Furthermore, Lyon says: “composing computer music for large numbers of speakers is a daunting process, but it is becoming increasingly practicable” (Lyon, 2016). Compositions such as Jonty Harrison’s Going Places (2017)[33] for 32-channel audio and Natasha Barrett’s Involuntary Expression (2017) for 50 or more loudspeakers (as the work is for Higher Order Ambisonics) arranged above and around the audience work with specific, fixed, pre-composed spatial movement, with little possibility for any spatialisation during the performance. In comparison, electronic dance music producers are mostly triggering stereo tracks and shaping the music with EQs, filters, etc., but rarely give any attention to spatialisation in their work. Thus, in my work, I am trying to combine both approaches: detailed spatialisation, live performance and the live manipulation of sound. I believe that having simultaneous control of these different musical aspects allows a greater interaction with the audience while performing.

The live performance of my music is essential for me, and this is why I am not presenting fixed 24-channel spatial music. As a result, my choice of software and use of hardware are determined by what I want to do. My experience as an acousmatic composer brings in a whole range of influences, approaches to space and ways of thinking about sound that are different from those of producers who came to music via a more commercial route.

I am intuitively thinking about form when I improvise/perform a composition, even if my sonic style leans towards a certain musical structure or form. I consider this different from a fully organised and planned composition. My work has a form and shape arising from my pre-compositional and in-the-moment decisions. This can be a quasi-linear quality emerging from a progressive succession of loops, somewhat like a Markov chain (illustrated in the sketch below). There are musical signposts and key points, where the musical elements are slowly transformed over time.
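
The analogy can be illustrated with a short sketch (purely illustrative, and not part of my performance software): a first-order Markov chain in which the loop currently playing weights the choice of the one that follows. The loop names and transition weights below are invented for the example.

```python
import random

# Illustrative only: hypothetical loop names and transition weights.
# The currently playing loop biases which loop is likely to follow it,
# giving a quasi-linear succession rather than a fixed arrangement.
transitions = {
    "kick_A":    {"kick_A": 0.5, "perc_loop": 0.3, "bass_riff": 0.2},
    "perc_loop": {"perc_loop": 0.4, "bass_riff": 0.4, "vocal_cut": 0.2},
    "bass_riff": {"bass_riff": 0.5, "vocal_cut": 0.3, "kick_A": 0.2},
    "vocal_cut": {"kick_A": 0.6, "perc_loop": 0.4},
}

def next_loop(current: str) -> str:
    """Pick the next loop, weighted by the one currently playing."""
    names, weights = zip(*transitions[current].items())
    return random.choices(names, weights=weights, k=1)[0]

# Generate a short succession of loops starting from the kick.
state = "kick_A"
for _ in range(8):
    print(state)
    state = next_loop(state)
```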

My acousmatic practice made use of an improvisatory sense of play in creating the initial sound materials, which I then took into the studio, processed and shaped into a piece. Now I curate samples and integrate my improvisational practice into the piece itself. I am trying to combine the behaviours of my acousmatic practice with the real-time development of sound during performance, treating them as equally important elements of my current practice. In my music, this process creates a sense of progression by transforming a sound and generating several versions of it as the piece unfolds. Through my performance, I am trying to provoke a kind of sonic immersion and musical journey that the audience experiences within the concert space. These goals are reached through the transformation and organisation of sounds, together with the surrounding environment created by the loudspeaker array.

From the experience of my performance at Electric Spring 2018 and the HISS (Huddersfield Immersive Sound System) workshop during the summer of 2017, I was able to perceive how my spatial gestural thinking from the SPIRAL Studio could translate to the HISS installation in Phipps Hall, a performance space at the University of Huddersfield. I was pleased to hear that the translation of my work from the smaller studio setup to a concert hall did not lose its sense of immersivity, and the spatial movements remained as accurate and precise as in my initial studio intention. My piece Rocket Verstappen (2017) has sonic elements circling the speaker array counter-clockwise on the upper ring of eight speakers at 01’37”, and I could follow the direction and location of the sound precisely in Phipps Hall. Not all of the spatial translations were as exact or as intended as this one. For instance, the introduction of my work Chilli & Lime (2017) produced a different perspective on the diffusion of sounds in Phipps Hall than it did in the SPIRAL Studio. In this piece, a guitar loop is subjected to various kinds of reverberation in order to play with the sense of space: a delay effect added to the guitar loop helps to expand the perceived localisation, making the sound feel as though it comes from beyond the speakers. Since Phipps Hall is a larger space (see Figure 26 below), the diffusion and distribution of sound energy was greater than in the SPIRAL Studio, and this helped me to immerse the audience in moving layers of sound.

Figure 26 – Huddersfield Immersive Sound System (HISS) during Electric Spring 2018.


There is a functional difference between my loudspeaker setup and the acousmonium’s orchestra of speakers, which is oriented towards the front of the stage. I use a dome of speakers surrounding the listeners, rather than an acousmonium, in order to create a sense of immersion. The idea of spatiomorphology (Smalley, 1997), so prevalent in acousmatic music, is less pertinent to my music because critical listening is not the same when you are moving and dancing as when you are sitting down in an acousmatic concert:

EDM is made, above all, for non-stop dancing. Admittedly, it is possible to do many other things while listening to EDM, but it is the immersion in the intense experience of non-stop dancing, more than anything else, which defines its specificity, its operative nexus (Ferreira, 2008, p. 18).

My work offers a space that the listener can occupy in order to experience a sense of immersion, with an occasional focus on certain musical elements. My music is not about a spatial interplay akin to Jonty Harrison’s work within the BEAST configuration; it is more ‘listener-centric’ than focused within the music itself. The quotation below highlights how the audience is positioned at my concerts, free to move around, in contrast to the seated and fixed listening situations of what Jonty Harrison or Christian Clozier (Clozier, 2001) are doing:

It is crucial to note that performance in EDM involves DJs and audience, humans and machines, in mixed proportions, for it concerns them not in their opposition but in their shared sonorous-motor reality. […] When sound meets movement in EDM's drive for non-stop dancing, DJ and dance floor, machine sounds and human movements insistently perform their shared reality (Ferreira, 2008, p. 19).

At the beginning of my research, I was only using 8 channels to create surround music. Progressively, I added another ring of 8 speakers above the first one. Since the audio configuration of the software (Ableton) was limited, I eventually found a solution that let me use all 24 speakers of the SPIRAL Studio (see Figure 27 below) by coupling the stereo audio files into pairs of speakers (front, right side, back and left side). This led to the development of my concept of ‘gravitational spatialisation’, which distributes sounds according to their position on the frequency spectrum; a schematic sketch of this routing logic follows below.
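
The sketch below is a purely illustrative outline of this routing logic. The crossover frequencies, loop names and output numbering are assumptions made for the example; in practice the routing is configured inside Ableton rather than written as code.

```python
# Illustrative sketch of 'gravitational spatialisation': the dominant
# register of a loop decides which ring of the 24-speaker dome it is
# routed to, and the stereo file is coupled to one of four speaker pairs
# (front, right side, back, left side). Crossover values are invented.

RINGS = {"lower": range(1, 9), "middle": range(9, 17), "upper": range(17, 25)}
STEREO_PAIRS = {"front": (1, 2), "right": (3, 4), "back": (5, 6), "left": (7, 8)}

def ring_for(dominant_hz: float) -> str:
    """Low material sits on the lower ring, highs rise to the upper ring."""
    if dominant_hz < 200:        # kicks, sub bass (assumed crossover)
        return "lower"
    elif dominant_hz < 2000:     # vocals, chords, mid percussion (assumed)
        return "middle"
    return "upper"               # hats, airy textures

def route(loop_name: str, dominant_hz: float, pair: str) -> dict:
    """Return the pair of output channels for a stereo loop."""
    ring = ring_for(dominant_hz)
    left, right = STEREO_PAIRS[pair]
    base = min(RINGS[ring]) - 1  # offset the pair into the chosen ring
    return {"loop": loop_name, "outputs": (base + left, base + right)}

print(route("kick_A", 60, "front"))      # -> lower ring, outputs (1, 2)
print(route("hats_16th", 8000, "back"))  # -> upper ring, outputs (21, 22)
```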

Figure 27 – SPIRAL Studio with its 3 rings of 8 speakers, 2018.


3.3 - Why 123-128bpm?

Humans are vibrational beings who are influenced by music, which reaches us as sound waves produced by traditional and virtual instruments. As an ethical hedonist, my philosophy is deeply rooted in the self: my individual needs and wants come first. Thus, I strive to find pleasure and to maximize this sense of enjoyment, to do everything in my power to have an enjoyable life. Therefore, when I am composing or performing, I aim to reach this feeling of aural satisfaction, to produce a musical environment where sounds provide a joyful, captivating and entertaining experience.

I am creating music for pleasure, be it physical or intellectual. Combining the two is my ultimate goal as a composer. The type of music I compose has a physical and emotive quality, yet also a technical and intellectual aspect. In my work Cyborg Talk (2017), I play with the sensorial dimension through certain sounds (introduced at 2’21”) situated in the low-frequency register, which have a visceral quality. According to Richard Middleton, writing on the viscerality of sound, such low frequencies affect us wholly:

just as physical bodies (including parts of our own) can resonate with frequencies in the pitch zone, so they can with the lower frequencies found in the rhythm zone. 'The producer of sound can make us dance to his tune by forcing his activity upon us', and when we 'find ourselves moving' in this way, there is no more call for moral criticism of the supposedly 'mechanical' quality of the response than when a loudspeaker 'feeds back' a particular pitch. Boosting the volume can force zonal crossover, as when very loud performance makes us 'feel' a pitch rather than hearing it in the normal way; our skin resonates with it, as with a rhythm (Middleton, 1993, p. 179).

Furthermore, these specific sounds spiral around our heads in order to give a whirling sensation, to hypnotize us within this sonic immersion. Later on, when Part B is introduced (at 5’45”), I reduce the musical intensity in order to bring in two new sounds (trumpet sounds an octave apart). Putting these through a reverb effect creates the impression of physical distance, which appeals to a certain nostalgic feeling, a longing for a distant past, while these sounds are spatialised around the audience. This sonic texture continues until I reintroduce a strongly visceral sound with substantial low-frequency content and add it to the distant, nostalgic sounds, thus encompassing both physical and intellectual qualities at the same time (Part A + B = C).

Viscerality, sound vibration and low frequencies work together in vibroacoustic therapy: "a recently recognized technology that uses sound in the audible range to produce mechanical vibrations that are applied directly to the body" (Boyd-Brewer, 2003). The recognition of low-frequency content as a potent vessel for the physical sensation of sound justifies my intention to include such sonic material in my work. The work of Mehdi Mark Nazemi supports this interaction between viscerality (physical response) and music:

One of the most natural responses of the human body to music is the synchronization of anatomical movements and other physiological/psychological functions with musical rhythms. Our body - through pre-conscious processes - is quick to recognize and “feel” vibrations, the foundation of what makes the rhythmic pulse. Our body senses the vibrational events, and we begin to interpret this sonic resonance as rhythmic events (Nazemi, 2017, p. 42).

Throughout my teenage years in nightclubs, I found that music with a tempo between 123 and 128 beats per minute (bpm) would make me dance more than music below (115 bpm and less) or above (130 bpm and higher)[34]. There are always exceptions, but generally there is a clear trend in the way I am naturally attracted towards certain tempi. These tempi are felt viscerally, which is how they create a somatic (bodily) connection with me; the sound oscillations resonate with my body. This is empirical observation drawn from my experience with EDM. Trance music usually has a faster bpm and higher-frequency content, whilst Hip Hop has a slower tempo, between 85 and 110 bpm.

At the beginning of the 1990s, I became interested in House music. It was different from most music I had heard until then, and more fashionable than the commercial Dance music of that era. It reached, subliminally, those specific tempi that made me vibrate and connect with the sounds. House music’s most prominent characteristics, as defined by the Oxford Music Online website[35], demonstrate that the tempi most associated with the genre are exactly the ones I am striving for:

House […] feature a 4/4 meter (“four on the floor”), with a kick drum sounding on every beat. Ample use of hi-hats, as well as snare hits or handclaps on the second and fourth beat lend a syncopated “disco” feel to a steady kick drum pulse. House music borrows heavily from disco and soul—producers emphasized melody and vocals, and added embellishments such as drum fills and strings. The tempo of house music tracks hovers around 120 beats per minute (as opposed to techno tracks, which range from 120 to upwards of 140 beats per minute). The Roland TR-808 and TR-909 drum machines were often used to provide the drum tracks for house music. The Roland TB-303, a “bass synthesizer,” was often utilized by house producers for its unique timbres—the twitchy, rubbery synth lines produced by the TB-303 spawned the sub-genre of acid house (Dayal, 2013).

Within the House music community, it is well understood that it is hard to explain exactly what this genre is; aficionados from the early days simply say that "House is a feeling". In the excerpt below, Chuck Roberts preaches the gospel of House music in Fingers Inc.’s Can You Feel It (1988):

In the beginning there was Jack, and Jack had a groove. And from this groove came the grooves of all grooves. And while one day viciously throwing down on his box, Jack boldly declared: "Let there be house!" And house music was born. I am you see, I am the creator, and this is my house, and in my house there is only house music. But I am not so selfish; because once you're into my house it then becomes our house and our house music. And you see, no one man owns house, because house music is a universal language spoken and understood by all. You see, HOUSE IS A FEELING that no one can understand really, unless you're deep into the vibe of house. House is an uncontrollable desire to jack your body. And as I told you before: This is our house and our house music. In every house, you understand, there is a keeper and in this house the keeper is Jack. Now, some of you might wonder, "Who is Jack and what is it that Jack does?" Jack is the one who gives you the power to jack your body. Jack is the one who gives you the power to do the snake. Jack is the one who gives you the key to the wiggly worm. Jack is the one who learns you how to walk your body. Jack is the one that can bring nations and nations of all jackers together under one house. You may be black, you may be white, you may be Jew or Gentile... It don’t make a difference in our house. And this is fresh (Roberts, 1987)[36].

House music has shaped my musical aesthetic and my sensibility towards visceral emotion; I need(ed) to feel the music on the dancefloor, I was looking to ‘Jack my body’, to be under the spell of the groove and beats of House music. Thus, when I am composing, I intentionally aim to recreate those blissful musical moments when I could not resist the power of the percussive beats and the DJ’s rhythm. The DJ would manipulate us like a snake charmer or shaman, putting us in a tribal trance to dance all night long. “Fans of House music may be predisposed to prefer faster beats and pulsating rhythms, or it could be that repeated exposure and a greater understanding [of] the subtleties of the genre are what draw us to it” (MN2S, 2016).

A more scientific explanation of this experience of House music is described here:

We all know that listening to music brings us pleasure and joy (and sometimes pain), but what is happening inside our brains when we listen to it? The answer is: a lot. When you hear a song, all four lobes of the brain react. Memories, emotions, thoughts and movement are all impacted. Your brain processes the tone, pitch and volume of the music in the auditory cortex, then sends this information to the rest of the brain, creating a richer experience. When it gets to the amygdala, your brain releases dopamine: the chemical behind rewards and pleasure.

Considering the many subgenres of house and wider electronic music, it is not surprising that house music can affect your brain in many different ways. Other forms of house and electronic have other effects. The release of dopamine, the pleasure chemical, is found to be greater at so-called ‘peak emotional moments’ in a song. The level of reward at this moment of peak emotion is thought to correspond to the length and intensity of anticipation during the build up (MN2S, 2016)[37].

Furthermore, the frequency content of House music shares common sonic characteristics with Hip Hop; both “fill up the lower side of the EQ spectrum with wide 808 booms and bass” (Benediktsson, 2010)[38], which I appreciate as much as the mid- and high-frequency content. These bass sounds seem to affect the lower portion of the body, making the hips move and groove to the music. In comparison, the sonorities of Trance music seem to work more on the head, making it nod back and forth.

3.4 - Physical response

The musical connection I have with my work exists on a physical, somatic (relating to or affecting the body) level as well as an intellectual one. This also extends to the structured and improvised musical segments, where I can combine these two pleasures. There is a somatic-cognitive creative loop: I am sonically pleased with the sounds, so I consider how I can develop these materials further, and that makes me feel even better. It is through this process that my compositions grew from being, on average, 8 minutes long to 10-15 minutes and then even longer (up to 50 minutes). I am trying to find a balance between the physical and intellectual, creative and reflective, real-time and non-real-time, static sound and spatialised sound, structure and improvisation. I observe commonalities between my compositional methodology and the Greek doctrines of natural balance:

the earliest concept of a balance of nature in Western thought saw it as being provided by gods but requiring human aid or encouragement for its maintenance. The natural balance implied relates to the rise of Greek natural philosophy where emphasis shifted to traits gods endowed species with at the outset, rather than human actions, as key to maintaining the balance […] Plato's Dialogues supported the idea of a balance of nature: the Timaeus myth, in which different elements of the universe, including living entities, are parts of a highly integrated “superorganism” (Simberloff, 2014, p. 1).

For me, House and Techno have rhythms and frequencies (from 100 Hz to 700 Hz) that affect the core of my body, where my solar plexus pulses like a beating heart. It is my whole ‘central’ being that moves and grooves to the tempo of those sounds; my body vibrates in synchrony with the music. So, when I am composing and performing, I want to be completely involved and immersed in the sounds: mentally, physically and spiritually.

The advent of technology has had a tremendous impact on the speed of music. The dance music of the 1970s and 1980s, disco in particular, was generally based around instruments. Since then, electronic means of producing music have allowed the tempo of dance music to expand and increase. Just as lifestyles, communication and transportation have become faster, music has followed the same trend. The professional DJ and producer C.K. stated, "There was a progression as far as the speed of music is concerned. Anyone buying vinyl every week from 1989 to 1992 noticed this” (DJ Escaba Eskyee, 2015)[39]. In regard to Drum’n’Bass, for instance, the Oxford Music Online website[40] notes:

A genre of electronic dance music that emerged in the early 1990s in England. Drum’n’Bass draws on sampled drum breaks (typically from 1960s and 70s soul and funk records), heavy bass lines from Jamaican dub reggae, and a variety of melodic and harmonic material inspired by the genres of techno, hardcore, and hip hop. Its tempo is fast—often 160 beats per minute and higher—although many listeners perceive the bass lines as moving at half the speed of the drum breaks (Ferrigno, 2014).

We can hear examples of this increase in bpm in the music of Photek (Ni Ten Ichi Ryu, 1995)[41], Roni Size & Reprazent (Brown Paper Bag, 1997)[42] and Aphex Twin (Come To Daddy, 1997)[43]. Phoebe Weston’s (2017) article The Times They Are a-Changin suggests that Pop music is slowing down as people feel more reflective in 'dark times'[44], and a Rolling Stone magazine article[45] from 15 August 2017, “Producers, Songwriters on How Pop Songs Got So Slow”, corroborates this tempo trend.

In an article, Blake Madden sheds light on studies about tempi in the range I have discussed and also considers why the body seems to prefer certain tempi (between 123 and 128 bpm):

For improving physical performance, Dr. Costas Karageorghis of England’s Brunel University suggests music with tempos of 120 bpm to 140 bpm during exercise, when our hearts beat in a similar range. […] Dirk Moelants, a musicologist and assistant at the department of musicology at The University of Ghent, argues that a ‘preferred tempo’ around 120 bpm is a part of our biology. His 2002 paper for the 7th International Conference on Music Perception and Cognition, titled “Preferred Tempo Reconsidered”, challenged psychologist Paul Fraisse’s previous conclusion that preferred tempo was somewhere in the range of 100 bpm (Madden, 2014).

The music that I have composed throughout my research does not involve lyrics; it is oriented more towards the creation of physical sensations. “If the constant 120 bpm average of charting pop hits represents our Platonic ideal of natural balance, then tempos far above or below that range represent the rest of the roller-coaster ride we call ‘life’” (Madden, 2014). Some popular examples of fast and slow bpm are:

The angst and nihilism of The Buzzcocks’ “Boredom” slashes through our ears at around 180 bpm. Grandmaster Flash and the Furious Five paint the tension of inner-city life in “The Message”—one of hip-hop’s first bonafide hits—at around 100 bpm. Bette Midler regularly inspires waterworks with her 60 bpms ode to an overshadowed friend, “The Wind Beneath My Wings”. 120bpm is our safe place, and the further we get from it tempo-wise, the more volatile we become (Madden, 2014).

This is perhaps why I am drawn towards these specific tempi. Although there are no definitive answers, these quotations provide evidence that lends support to my empirical knowledge and experience after 30 years of listening to EDM.

3.5 - Live Performance and Musical Flow

According to psychologist Mihály Csíkszentmihályi, focus and concentration hold the key to achieving ‘flow’. It is a term used to describe “feelings of enjoyment that occur when there is a balance between skill and challenge”, typically achieved when we are involved in a highly rewarding activity (Csíkszentmihályi, 1997, 1998). It is a process of having an “optimal experience; the state in which individuals are so involved in an activity that nothing else seems to matter” (Csíkszentmihályi, 1990).

In my pieces, I can start with one sonic element and add another; the combination of those two things stimulates a third, which then combines with a fourth that leads to a fifth, and so on, in the manner of a Markov chain. There is an ongoing stream of musical elements, so the progression seems musically logical; nothing stands out as a stark juxtaposition or a completely new thing, since my music flows. It is a clear musical journey in which nothing simply repeats; there is one continuous sense of flow and movement, with previous elements combining to create, or move seamlessly into, the next musical idea.

Creating a flow experience in my music is actually key to expressing myself as a musician, through the sounds that I make rather than through thinking about structure or melody. There is a rough goal or idea, but the sense of flow, the stream of consciousness, the experience and the real-time sound transformations in the present moment mean that every performance is a different rendition of the piece. We need to be in the moment, to acknowledge all the elements of life and to go with the flow in order to access and achieve a sense of immersivity. We should not let ourselves be concerned with details, but should focus on giving ourselves over completely in order to be that perfect vibrational being, in sync with the elements of life. The way I perform allows me to react and to change the musical course in order to direct the music towards the vibe that fits the mood of the event and what is happening on the dance floor, or to reflect my own energy in the moment, which is akin to communicating my immediate feelings to the audience.

With diffusion, acousmatic composers have the security of presenting a fixed piece. I prefer to face the precarity of live music. For me, this creates a more meaningful in-the-moment emotional musical event, which brings me greater satisfaction as a musician. At the Electric Spring Festival (2018)[46], towards the end of my performance, I sensed that I was concluding the concert after one hour and forty-five minutes. At that moment, the audience responded in such a way that I felt I needed to carry on the composition a little longer. This is one instance where the sense of flow was controlled not by me but by the dynamics of the performance situation itself.

Although DJs are not specifically mentioned in his discussion of optimal experience, what Csíkszentmihályi calls ‘seamless flow experience’ resonates uncannily with the DJ (producer performer) practice of programming dance music:

Ultimately, contemporary DJs (producer performers) charged with programming dance music may be regarded as humanistic visionaries, presenting and performing music in recognition, reaffirmation and celebration of those qualities that are associated with happiness, meaningfulness, fulfilment, and pleasure. Music programming, then, is an aspect of DJ musicianship that may be considered both artistic and salubrious (Fikentscher, 2013, p. 145).

The DJ [performer] is heard rather than seen and is positioned as a communicator: the vibration of the sound engages dancers kinetically, while the visual field is effectively distorted, even destroyed, enabling the participants to lose themselves in the relentless yet meditative repetitive rhythms and washes of synth sounds:

The studio producer is increasingly seen on stage with a laptop with music mixing software, improvising with the recorded ‘stems’ of their music or, as a DJ, mixing their own recordings with related music productions. Whereas the traditional DJ is a researcher, archivist, occasional remixer and interactive performer, the producer-DJ is a composer, a technician and a (sometimes self-absorbed) performer (Rietveld, 2013, p. 89).

The objective of the performer is to translate the energy and sounds of their music into a visible execution of their skills through the use of music technology:

According to this perspective, performance in EDM is not a question of localized agency, but of the effective mediation between recorded sounds and collective movements, and the performer-machine relation is not a matter of opposition but of association and transformation in the technological actualization of the sound-movement relation. We could say that, in EDM, performance corresponds to this actualization: human movements making visible what machine sounds are making audible […] EDM DJs since then have increasingly found in machines the means to refine their control over the selection and reproduction of sound, thus improving the intensity and efficacy of their relationship with the dance floor.

In the case of EDM, it seems evident that the dance floor is a very important part of the media chain through which sound is transduced, the others being vinyl records, electric wires, loudspeakers, etc. In other words, when the sounds coming out of the loudspeakers meet the movements of the dance floor, one becomes a medium for the transmission of the other, and it is this short-circuiting of machine sound and human movement that seems to constitute the specificity of performance in EDM (Ferreira, 2008, p. 18-19).

The craftsmanship required during a live performance is as important as in any other musical practice. The greater the freedom I have over the rearrangement and development of my compositions, the more vulnerable a position I am in while performing. I can obviously make mistakes during my livestreams, but they do not make my work less interesting; they actually provide a point to rebound from, allowing a different musical direction and letting me treat them as intentional moments that fit into the grand scheme of my composition or improvisation. I abide by a famous phrase attributed to Ludwig van Beethoven: “To play a wrong note is insignificant; to play without passion is inexcusable”. This sentiment is an integral characteristic of my performance. The quotation is possibly intended as a snappy summary of Beethoven's beliefs as reported by his piano pupil Ferdinand Ries:

When I left out something in a passage, a note or a skip, which in many cases he wished to have especially emphasized, or struck a wrong key, he seldom said anything; yet when I was at fault with regard to the expression, the crescendo or matters of that kind, or in the character of the piece, he would grow angry. Mistakes of the other kind, he said were due to chance; but these last resulted from want of knowledge, feeling or attention. He himself often made mistakes of the first kind, even playing in public (Rosenblum, 1991, p. 29).

Thus, the principal goal while performing and composing is to be in the moment, to be present with all ears and all mind. I need to be at the service of my art in order to channel the musical offering properly, since I am the mediator through whom the music comes alive.

In practice, this means setting up the SPIRAL Studio ready for performance, with the lights in ‘concert’ mode (see Figure 19) and my library of sounds close at hand, so that all the elements are in place for the magic of music to take place in the space.

3.6 - Methodology for the emerging artists

My elaboration of a performative work starts within the Ableton digital audio workstation (DAW), using a personal template. This template (see Figure 28 below) consists of several audio effects combined into one audio rack and this is applied to each of the eight tracks I work with. Each of the eight tracks contains multiple audio clips.

Figure 28 – My Audio Effect Rack in Ableton Live, 2019.


The first effect implemented is called Utility, which applies a simple amplitude reduction of -15 dB to the audio files. Since this type of music is often boosted in order to sound loud in nightclubs, I find it wise to reduce everything initially before increasing the loudness of the individual tracks within the mix. The second effect is Saturator, which provides an initial boost to the sounds without going into overdrive or distortion. The following effect is Drum Buss, which was developed to ‘pump’ the drum parts and make them sound more impactful; it boosts the bass and transient attack. I have also included a Pedal effect, in case I wish to distort or fuzz the sounds. The next effect is EQ Eight, which allows me to see an analysis of the frequency content of the sound and to remove or boost specific frequencies. A compressor then intensifies the audio signal of the loop, which helps to make the sounds more present in relation to the other sounds in the mix. An additional EQ Eight and a Frequency Band Compressor are available in the Audio Effect Rack if needed. Finally, a Limiter stops the sound from distorting in the final mix, and a Tuner is included in order to find the central tonality among the audio loops. This chain is summarised schematically below.
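
The chain can be summarised as an ordered list, and a simple decibel-to-linear conversion shows why the initial -15 dB trim leaves generous headroom. This is a schematic illustration only, not an Ableton API; the effect names simply follow the description above.

```python
# Schematic summary of the Audio Effect Rack described above
# (an illustration of the signal chain order, not an Ableton API).
EFFECT_CHAIN = [
    ("Utility",                   "trim every clip to -15 dB before mixing"),
    ("Saturator",                 "gentle boost without overdrive"),
    ("Drum Buss",                 "bass and transient emphasis for drum parts"),
    ("Pedal",                     "optional distortion/fuzz"),
    ("EQ Eight",                  "visual analysis, cut/boost frequencies"),
    ("Compressor",                "bring the loop forward in the mix"),
    ("EQ Eight",                  "second corrective EQ, if needed"),
    ("Frequency Band Compressor", "per-band compression, if needed"),
    ("Limiter",                   "protect the final mix from clipping"),
    ("Tuner",                     "find the central tonality of the loops"),
]

def db_to_linear(db: float) -> float:
    """Convert a dB change to a linear amplitude factor."""
    return 10 ** (db / 20)

# The initial -15 dB trim reduces amplitude to roughly 18% of the
# original, leaving headroom to raise individual tracks in the mix.
print(f"-15 dB -> x{db_to_linear(-15):.3f}")
for name, role in EFFECT_CHAIN:
    print(f"{name:27s} {role}")
```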

Figure 29 – Ableton session template of 8 audio tracks with the Novation Controller, 2019.



The session comprises 8 audio tracks. I use the Ableton Push 2 and the Novation Controller XL to perform this session in real time. The fader assignment is as follows (see Figure 29 above; a schematic mapping sketch follows the list):

 

  1. The pinky finger of the left hand controls the Kick-1 (Fader 1).

  2. A second Kick-2 (Fader 5) is placed on the index finger of the right hand.

  3. Next to them are audio tracks with effects: SimpleDelay (Fader 2) for the ring finger of the left hand and Echo-1 (Fader 6) for the middle finger of the right hand. These audio tracks provide the spatialisation effect, which suits this type of sonic material (containing higher frequencies than the kick and bass drum sounds); this decision is connected to the idea of ‘gravitational spatialisation’.

  4. The middle and the index fingers of the left hand are connected to vocal, melodic or rhythmic audio content (Audio-1 and Audio-2) (Fader 3 and Fader 4).

  5. An additional vocal, melodic or rhythmic audio loop is set for the ring finger of the right hand (Audio-3) (Fader 7).

  6. The Echo-2 (Fader 8) effect is set for the pinky finger of the right hand, and it is also spatialised.

  7. In order to allow improvisation across the 8 audio tracks, another Audio Effect Rack containing Filter, Reverb and Shuffling effects is mapped to the 3 rows of knobs above the 8 faders on the Novation Controller.

  8. In addition, the Novation has two kill-switch knobs, placed below the 8 faders, for removing Low and Mid frequency content (via the EQ Three effect).

  9. Thus, 3 audio tracks (tracks 2, 6 and 8) are spatialised around the dome on the middle and upper rings of speakers. The other audio tracks (1, 3, 4, 5 and 7) provide localised spatialisation on the lower and middle rings of speakers.
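
The mapping described in the list above can be represented as a simple table. The sketch below is illustrative only: the MIDI CC numbers are hypothetical, since the real assignment is made in Ableton’s MIDI-map mode, but the track-to-finger layout and the split between spatialised and localised tracks follow the description above.

```python
# Illustrative mapping table for the Novation Controller layout described
# in the list above. The MIDI CC numbers are hypothetical placeholders.
FADER_MAP = {
    1: {"track": "Kick-1",      "finger": "left pinky",   "cc": 77},
    2: {"track": "SimpleDelay", "finger": "left ring",    "cc": 78},
    3: {"track": "Audio-1",     "finger": "left middle",  "cc": 79},
    4: {"track": "Audio-2",     "finger": "left index",   "cc": 80},
    5: {"track": "Kick-2",      "finger": "right index",  "cc": 81},
    6: {"track": "Echo-1",      "finger": "right middle", "cc": 82},
    7: {"track": "Audio-3",     "finger": "right ring",   "cc": 83},
    8: {"track": "Echo-2",      "finger": "right pinky",  "cc": 84},
}

SPATIALISED_TRACKS = {2, 6, 8}       # circle the middle/upper rings
LOCALISED_TRACKS = {1, 3, 4, 5, 7}   # fixed on the lower/middle rings

def describe(fader: int) -> str:
    m = FADER_MAP[fader]
    mode = "spatialised" if fader in SPATIALISED_TRACKS else "localised"
    return f"Fader {fader} ({m['finger']}): {m['track']} [{mode}]"

for f in range(1, 9):
    print(describe(f))
```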

 

3.7 - Experimentation with Live Spatialisation

Throughout my research I have developed a compositional template that I implement in each work. As part of this, I have developed a spatialisation model that allows me to implement 3D sound movement in my works. This creative technique has enabled me to pre-set automated spatialisation trajectories that permit me to focus on the formal and improvisatory elements of my pieces. After several experiments using various hardware tools and plug-ins, my live spatialisation has come to focus on a pre-determined set of 3-4 audio tracks whose movement is controlled via an LFO moving in either a clockwise or a counterclockwise direction around the middle and upper circles of 8 speakers (a sketch of this trajectory logic follows below). In addition, 4-5 audio tracks are fixed on the lower and middle circles of the SPIRAL Studio speaker configuration. I have tried composing with more than 5 audio tracks spatialised simultaneously; however, the results were unmusical and did not convey my intention regarding how I wanted the work to unfold and develop. I have found empirically that 3-4 spatialised audio tracks offer clarity of spatial movement and a sense of sonic immersion that stimulates an active state of listening.
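
The trajectory logic can be sketched as an azimuth that advances at a chosen rate in either direction around a ring of eight speakers. This is a minimal illustration of the behaviour I automate with the LFO, not the Max for Live device itself; the rotation period and sampling interval are arbitrary choices for the example.

```python
import math

def lfo_azimuth(t: float, period_s: float, clockwise: bool = True) -> float:
    """Azimuth in degrees (0-360) at time t, one full rotation per period.
    A sketch of the path an LFO traces, not the Max for Live device."""
    phase = (t % period_s) / period_s            # 0.0 .. 1.0
    deg = phase * 360.0
    return deg if clockwise else (360.0 - deg) % 360.0

def nearest_speaker(azimuth_deg: float, n_speakers: int = 8) -> int:
    """0-based index of the closest speaker on a ring of n equally spaced speakers."""
    step = 360.0 / n_speakers
    return int(round(azimuth_deg / step)) % n_speakers

# One counterclockwise rotation every 8 seconds, sampled once per second:
for t in range(8):
    az = lfo_azimuth(t, period_s=8.0, clockwise=False)
    print(f"t={t}s  azimuth={az:5.1f} deg  nearest speaker={nearest_speaker(az)}")
```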

I have attempted to expand my spatialisation model for live performance by implementing direct control and manipulation of the spatial movement of selected audio loops. In order to achieve this, I had to add another MIDI controller to my setup. This addition increases the possibilities for transforming the audio content both sonically and spatially, but it will require further exploration and time to find new creative solutions and relevant mappings for the device. So far, my experiments have comprised the evaluation of three Ableton spatialisation plug-ins in addition to my currently chosen ones, the Max Api Ctrl1LFO and the Max Api SendsXnodes (see Figure 13). The recently evaluated plug-ins are Surround Panner (February 2018)[47], Spacer (September 2016)[48] and Envelop (April 2018)[49].

Figure 30 – Max for Live’s Surround Panner, 2019.


Max for Live’s Surround Panner plug-in (see Figure 30 above) enables sound to be mixed for installations, theatres and performances using multi-channel speaker setups in Ableton Live. It offers multi-speaker panning control over the X/Y axes, circular movement can be created with the rotation control in 2D space, and a focus control permits distinct positioning and speaker configurations. The plug-in is simple and intuitive to use, but it is limited to 8 outputs while my current setup requires 24. Thus, I implemented it only on the highest ring of 8 speakers; because of this limitation, I decided I could potentially use it if I had to perform on a smaller dome of speakers. To work around the limitation, I set 2 audio tracks to the upper ring of 8 speakers and 2 audio tracks to the middle ring. The result was adequate in terms of being able to direct and locate the audio on a specific ring of speakers, but the conundrum arises when deciding where and when a sound should circulate. Furthermore, once the selected sound source has been moved, it stays fixed in that position unless it is moved again as the piece progresses, which is contrary to my spatialisation ethos of perpetual motion of sound in space. A positive quality of the Surround Panner is its ability to give sounds a continuous circular motion at a selectable rate of rotation, but this is also possible with the plug-in I currently use (Max Api SendsXnodes), which offers 24 output channels. In order to compare my current modus operandi with the tested plug-ins, I documented the live modulation of space versus my automated spatialisation. These tests are video and sonic demonstrations that provide evidence that my designed automated spatialisation is as good as, if not better than, the live modulation of space. The examination procedure was to record the live spatialisation with one of the tested plug-ins and then, in order to compare it with my approach, to remove that plug-in from the session and replace it with my method of designed spatial audio movements using the Max Api Ctrl1LFO and the Max Api SendsXnodes.

Listen with headphones:

Example 1 – Sebastian DeWay - Groove Society (Surround Panner)

https://youtu.be/Waji0ke_XAk

Example 2 – Sebastian DeWay - Groove Society (Surround Panner – My Automation)

https://youtu.be/UgnsqPsBQ4c

The Max Api SendsXnodes (see Figure 13) is a device based on panning audio content between adjacent output channels (a sketch of this kind of adjacent-channel panning follows below). I use it in conjunction with the Max Api Ctrl1LFO in order to automate a selected trajectory for the centre dial (audio loop). To expand the possibilities and variety of sound manipulation and spatialisation, I connected the AKAI MidiMix to the Sync Base parameter, which selects the rate of circular motion of the audio content around the chosen speakers. I have noticed that rapid live motion of sound in space combined with a Reverb plug-in can provide some interesting musical effects, which could potentially be included in my future performance setup. The comparative test of live modulation of space using the Sync Base of the Max Api Ctrl1LFO demonstrates that there are both benefits and drawbacks: the possibility of direct motion with the instrument helps with on-the-fly spatialisation decisions, but the overall quality of the sonic transformations does not enhance the musicality of the performance.
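
A sketch of this kind of adjacent-channel panning is given below. It illustrates the general principle, an equal-power crossfade between the two speakers that bracket the source position, rather than the actual implementation of the SendsXnodes device.

```python
import math

def ring_gains(azimuth_deg: float, n_speakers: int = 8) -> list:
    """Equal-power pan between the two adjacent speakers of a ring that
    bracket the given azimuth. A sketch of adjacent-channel panning in the
    spirit of the device described above, not its actual implementation."""
    step = 360.0 / n_speakers
    pos = (azimuth_deg % 360.0) / step       # position in 'speaker units'
    lower = int(pos) % n_speakers            # speaker behind the source
    upper = (lower + 1) % n_speakers         # speaker ahead of the source
    frac = pos - int(pos)                    # 0.0 at lower, 1.0 at upper
    gains = [0.0] * n_speakers
    gains[lower] = math.cos(frac * math.pi / 2)   # equal-power crossfade
    gains[upper] = math.sin(frac * math.pi / 2)
    return gains

# A source at 22.5 degrees sits exactly between speakers 0 and 1:
print([round(g, 3) for g in ring_gains(22.5)])
# -> [0.707, 0.707, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```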

Listen with headphones:

Example 3 – Sebastian DeWay - Groove Society (Max API Ctrl1LFO)

https://youtu.be/m4X4jiBB8eA

Example 4 – Sebastian DeWay - Groove Society (Max API Ctrl1LFO – My Automation)

https://youtu.be/bpugoUw_Pv8

Figure 31 – Envelop for Live’s (E4L) Spinner, Multi-Delay, Stereo Panner and Delay Boids, 2019.


The Envelop for Live production tools enable composers to place sound in three-dimensional space. These spatial audio plug-ins can create immersive mixes for headphones and for multi-channel environments. I have experimented with several of the Envelop for Live (E4L) plug-ins (see Figure 31 above): the E4L Delay Boids, the E4L Spinner, the E4L Multi Delay and the E4L Stereo Panner. These are sophisticated tools that deserve consideration; they are efficient and easy to use but require practice in order to be exploited fully. As with the live modulations of space made with the Sync Base of the Max Api Ctrl1LFO, the E4Ls can have a negative impact on the musicality of the performance if they are not controlled properly. The Envelop tools only became available in the final months of this research, otherwise I would certainly have explored their potential further, since they are well suited to live spatial performance. The results of my spatialisation setup and of the Envelop tools are similar: some spatial moments are better served by the E4Ls, while a certain clarity of spatialisation is perceptible when using my own designed automations on the M4L APIs.

Listen with headphones:

Example 5 – Sebastian DeWay - Groove Society (Envelop)

https://youtu.be/_ibUF6V7BDk

Example 6 – Sebastian DeWay - Groove Society (Envelop – My Automation)

https://youtu.be/OSPFAD8xLB4

Figure 32 – Spacer plugin, 2019.


Experimentation with the Spacer plug-in (see Figure 32 above) was not satisfactory, since it provides only 8 channels of audio. Additionally, the Gain level was difficult to balance when moving the sound sources between the different speakers (outputs) in space. Furthermore, the audio outputs of Spacer do not permit the working session to be recorded and bounced, so I cannot provide a sonic example of its use.

Overall, these tools will inspire the further creation of spatial music, and we can see that there is a growing interest in spatialisation from developers and audiences. I can see myself continuing to search for suitable tools for the live modulation of space.

Conclusion

 

Throughout this research, I have experimented with a range of software and hardware tools that allow differing spatial formats to be explored. From this, I was able to develop a methodology for spatialising EDM. I believe that the role of spatialisation in EDM will continue to grow as nightclubs and academic institutions develop their interest in it. What inspires me are the new tools and environments that will emerge and how I will be able to incorporate them into my practice. One of the areas that excites me is opened up by the Dolby Atmos plug-in and its applications. This suggests an exciting future in which the sonic experience is enhanced; it is an evolution compared with the current stereo standard in EDM. I was pleased to find that the methodology I have developed within the Ableton and Max4Live framework adapted easily to the Dolby tool, since they function in a similar way; this demonstrates that the technology is evolving in the same direction and becoming more refined. This leads me to think that I am ready and prepared for what comes next in the field of spatial EDM, because I know what works for me and how to achieve it. I am also enthusiastic about the fact that Dolby “is bringing its object-based 3D-audio platform to the dance music sphere by way of club installations and apps aimed at studio and DJ mixing” (Rothlein, 2015). They are also looking to expand the locations where their technology is installed: after London (Ministry Of Sound), Chicago (Sound-Bar) and San Francisco (Halcyon), clubs in Tokyo and Berlin are also on their radar. This encourages me to continue to strive in that direction, in order to keep my knowledge and skills relevant. I am also curious to see what Ableton has planned with regard to the performance of multi-channel EDM. There are Max4Live objects in their latest software version (Live 10) that implement High Order Ambisonic libraries and a binaural tool, enabling the user to work with multi-channel, surround and Ambisonic audio.

As an artist, I think it is important to push boundaries, doing it step by step while still accomplishing what I want to do musically. I want to continue to promote nightclub (commercial) music as being as valid as any other genre. I see a comparable evolution with Jazz, which was considered an ‘underground’ phenomenon, not welcomed into academia, until it reached universal acceptance. EDM has followed a similar development, progressively entering academic institutions as a subject of serious study. A parallel can be observed in society’s recent shift in perception of graffiti artists such as Banksy, who once had no outlet other than the street to display his art and is now welcomed into galleries and other prominent art centers. This shows how something can exist outside the establishment and may become institutionalized. Thus, I want to elevate EDM to a noble art form in order to teach it, not just (ab)use it for commercial purposes.

As an artist, one of the reasons I undertook this long academic endeavor was to push myself further as a musician in an evolving technological landscape. I was also motivated by a trend I see among the younger generation of music students: they often have not played traditional instruments such as the guitar, violin or piano, but most likely own a computer, which has the potential to be a musical instrument. This coincided with my own situation, using the computer as a creative tool. In this research project, I have demonstrated that merging composition and improvisation can provide a rich creative environment for the composition and performance of EDM. Furthermore, I have established in this document that my live performance practice redefines what a composer of EDM can be. Thus, I want this research to promote a viable future for modern EDM composers. One way to share and develop these skills would be to teach at universities with computer music-based programs, as well as at reputable EDM institutions such as Point Blank Music School (London, Los Angeles, Ibiza, Mumbai, Online), Dubspot (New York, Los Angeles, Online) and The Red Bull Music Academy (Berlin, Online). As such, this commentary and portfolio present a methodology and a set of accessible tools for creating music as a contemporary artist, while acknowledging the growing popularity of EDM and spatialisation.

When I first encountered the SPIRAL Studio, I was intimidated by its size and complexity. Since then, I have assessed various tools and techniques for spatialisation, and what has changed most since the beginning of the research is the level of confidence I have acquired in approaching large speaker arrays. Also, since I am using accessible spatialisation tools, I have developed a method for adapting quickly and easily to various configurations of sound system.

Over the course of this research, I have learned ways to promote myself as a performer by growing a social media presence and livestreaming my music. Since July 2016, I have performed live on several streaming platforms such as YouTube, Facebook, Twitch and Periscope; I have over 150 videos in which I play music. This is an ongoing project that allows me to improve my music on a technical and practical level. It is a practical outlet for me to mature as a performer and to reach an audience around the world. I am looking forward to the day when spatial audio will be accessible in a livestreaming format; the technology is still being developed and is not yet available. Until then, I am getting ready.

Bibliography

Attali, Jacques (1985). Noise: The political economy of music. Vol. 16. Manchester: University of Manchester Press.

Attias, Bernardo, Anna Gavanas, and Hillegonda C. Rietveld (2013). DJ culture in the mix: Power, technology, and social change in electronic dance music. New York: Bloomsbury Academic.

Baalman, Marije A. J. (2010). Spatial Composition Techniques and Sound Spatialisation Technologies. Organised Sound, vol. 15, no. 3, 2010, pp. 209–218., doi:10.1017/S1355771810000245.

Barrett, Natasha (2002). “Spatio-musical composition strategies”. Organised Sound 7 (3): 313-23. (http://search.proquest.com.libaccess.hud.ac.uk/docview/215098696?accountid=11526), webpage accessed 2016-05-10.

Barrett, Natasha (2007). Trends in electroacoustic music. In: Nick Collins and Julio d'Escrivan (eds.) The Cambridge Companion to Electronic Music: 232-255. Cambridge Companions to Music. Cambridge: Cambridge University Press. (http://dx.doi.org/10.1017/CCOL9780521868617.015), webpage accessed 2016-05-24.

Barrett, Natasha (2016). A Musical Journey towards Permanent High-Density Loudspeaker Arrays. Computer Music Journal, Volume 40, Number 4, Winter 2016, pp. 35-46. Published by The MIT.

Bates, Enda (2009). The Composition and Performance of Spatial Music. A dissertation submitted to the University of Dublin for the degree of Doctor of Philosophy.

Bayle, François (2007). Space, and more. Organised Sound 12 (3): 241. (http://dx.doi.org/10.1017/S1355771807001872), webpage accessed on 2016-05-13.

Belloch, Jose A., Ferrer, Miguel. Gonzalez, Alberto. Martinez-Zaldivar, Francisco-Jose. and Vidal, Antonio (2012). Headphone-based spatial sound with a GPU accelerator. Procedia Computer Science 9: 116-25. (http://www.sciencedirect.com/science/article/pii/S1877050912001342), webpage accessed 2016-05-10.

Bennett, Gerald (1997).  A Poor Man's Techniques of Sound Diffusion (http://www.gdbennett.net/texts/A_Poor_Man’s_Techniques.pdf), [Accessed 26 June 2016].

Bissell, Arthur D. (1921). The role of expectation in music. New Haven: Yale University Press.

Bogdanov, Vladimir., Woodstra, Chris., Bush, John., Thomas Erlewine, Stephen (2001). All Music Guide to Electronica: The Definitive Guide to Electronic Music. Backbeat Books Music

Born, Georgina (2013). Music, sound and space: Transformations of public and private experience. Cambridge; New York: Cambridge University Press.

Bougaïeff, Nicolas (2013). An Approach to Composition Based on a Minimal Techno Case Study. Doctoral thesis, University of Huddersfield.

Boyd-Brewer, Chris (2003). Vibroacoustic therapy: Sound vibrations in medicine. Alternative and Complementary Therapies 9 (5): 257-63.

Boyd-Brewer, C., & Mccaffrey, R. (2004). Vibroacoustic Sound Therapy Improves Pain Management and More. Holistic Nursing Practice, 18(3), 111-118.

Bradley, John., and Soulodre, Gilbert (1995). Objective measures of listener envelopment. Journal of the Acoustical Society of America 98 (5 I;5;): 2590-7. (http://dx.doi.org.libaccess.hud.ac.uk/10.1121/1.413225), webpage accessed 2016-05-10.

Bregman, Albert S. (1990). Auditory scene analysis: The perceptual organization of sound. Cambridge, MA: Bradford Books, MIT Press.

Brillenburg Wurth, C. A. W. (2002). The musically sublime: infinity, indeterminacy, irresolvability s.n. University of Groningen.

Brümmer, Ludger (2016).  New developments for spatial music in the context of the ZKM Klangdom: A review of technologies and recent productions. Divergence Press, Huddersfield University (http://divergencepress.net/articles/2016/11/20/new-developments-for-spatial-music-in-the-context-of-the-zkm-klangdom-a-review-of-technologies-and-recent-productions) webpage accessed 2018-03-01.

Brümmer, Ludger (2008). Stockhausen on Electronics. Computer Music Journal, vol. 32, no. 4; 10-16.

Burke, Edmund, and J. T. Boulton (1958). A philosophical enquiry into the origin of our ideas of the sublime and beautiful. 1967 reprint. ed. London: Routledge and Kegan Paul.

Butler, Mark J. (2006). Unlocking the Groove: Rhythm, Meter and Musical Design in Electronic Dance Music. Indiana University Press, Bloomington.

Cage, John (1959).  Silence : Lectures and Writings. M.I.T. Press.

Clozier, C. & Olsson, J. (2001). The Gmebaphone Concept and the Cybernéphone Instrument. Computer Music Journal, vol. 25, no. 4, pp. 81-90.

Cooper, Max (2017). http://www.4dsound.net/artist/max-cooper webpage accessed 2016-04-25.

Csíkszentmihályi, Mihály (1990). Flow: The Psychology of Optimal Experience. New York: Harper & Row.

Csíkszentmihályi, Mihály (1997). Creativity: Flow and the Psychology of Discovery and Invention. New York: Harper Collins Publishers.

Dockwray, Ruth, and Allan F. Moore (2010). Configuring the sound-box 1965–1972. Popular Music 29 (2): 181-97.

Dyson, Frances (2009). Sounding new media: Immersion and embodiment in the arts and culture. 1st ed. Berkeley: University of California Press.

Emmerson, Simon (2007;2017;2013). Living electronic music. Aldershot: Ashgate.

Eno, Brian (2004). ‘The Studio as Compositional Tool’. In Christoph Cox and Daniel  Warner (eds) Audio Culture: Readings in Modern Music (127–30). London: Continuum.

Ferreira, Pedro Peixoto (2008). When sound meets movement: Performance in electronic dance music. Leonardo Music Journal 18: 17-20.

Ferrigno, Emily (2014). Drum’ n’ Bass [Jungle]. Oxford Music Online (https://doi.org/10.1093/gmo/9781561592630.article.A2256374). webpage accessed 2018-08-15.

Fikentscher, Kai (2013) in Attias, Bernardo, Anna Gavanas, and Hillegonda C. Rietveld (2013). DJ culture in the mix: Power, technology, and social change in electronic dance music. New York: Bloomsbury Academic.

Francey,  Matthew (2016). Dolby Atmos Is Here. (https://www.ministryofsound.com/posts/articles/2016/january/dolby-atmos-is-here/), webpage accessed 2016-04-25.

Garcia, Luis-Manuel (2015). Beats, flesh, and grain-sonic tactility and affect in electronic dance music. Sound Studies: An Interdisciplinary Journal, vol. 1 2015, Issue 1.

Garcia, Luis-Manuel (2005). On and On: Repetition as Process and Pleasure in Electronic Dance Music. Music Theory Online, vol. 11, no. 4, pp. 1-19.

Gilman, Todd (2009). Arne, Handel, the beautiful, and the sublime. Eighteenth-Century Studies 42 (4): 529-55.

Griesinger, David (2015). Acoustic quality, proximity, and localization in concert halls: The role of harmonic phase alignment. Psychomusicology 25 (3): 339. (http://dx.doi.org.libaccess.hud.ac.uk/10.1037/pmu0000116), webpage accessed 2016-05-10.

Harrison, Jonty and Wilson, Scott (2010). Rethinking the BEAST: Recent Developments in Multichannel Composition at Birmingham ElectroAcoustic Sound Theatre. Organised Sound, 15(3); 239-250. (http://dx.doi.org.libaccess.hud.ac.uk/10.1017/S1355771810000312), webpage accessed 2016-05-10.

Eimert, Herbert and Stockhausen, Karlheinz (1958). Die Reihe. Theodore Presser Co.; 62.

Ihde, Don (2007). Listening and voice: Phenomenologies of sound. 2nd ed. Albany: State University of New York Press.

Kant, Immanuel, et al. (2007). Critique of judgement. New York; Oxford: Oxford University Press.

Krims, Adam (2000). The Hip Hop Sublime as a Form of Commodification found in Music and marx: Ideas, practice, politics. (2002) New York: Routledge.

Lee, Hyunkook. Gribben, Christopher., and Wallis, Rory (2014). Psychoacoustic Considerations in Surround Sound with Height. 28th Tonmeistertagung: tmt 28, 20th-23rd November 2014, Cologne, Germany. (http://eprints.hud.ac.uk/23151/), webpage accessed 2016-04-25.

Meadow, Matthew (2018). Deadmau5 features in Dolby Atmos’ latest video. (https://www.youredm.com/2018/05/28/deadmau5-dolby-atmos/) webpage accessed 2018-09-25.

Meyer, Leonard B. (1956). Emotion and meaning in music. Chicago; London: University of Chicago Press.

Meyer, Leonard B. (1967). Music, the arts, and ideas. Chicago; London: University of Chicago Press.

Middleton, Richard (1993). Popular music analysis and musicology: Bridging the gap. Popular Music 12 (2): 177-90

Middleton, Richard (1996). Over And Over: Notes Towards A Politics of Repetition. The Open University.

Murcof (2014). (http://www.4dsound.net/artist/murcof) webpage accessed 2016-04-25.

Nazemi, Mehdi Mark (2017). Soundscapes as therapy: An innovative approach to chronic pain and anxiety management. PhD Thesis, Simon Fraser University.

Normandeau, Robert (2009). Timbre spatialisation: The medium is the space. Organised Sound 14 (3): 277-85. (http://dx.doi.org.libaccess.hud.ac.uk/10.1017/S1355771809990094), webpage accessed 2016-05-23.

Oomen, Paul (2016). 4DSOUND: A Retrospective by Paul Oomen. (http://www.4dsound.net/blog/a-retrospective-by-paul-oomen) webpage accessed 2016-04-25.

Puckette, Miller (2017).  Four surprises of electronic music. Juiz de Fora, PPGCOM – UFJF, v. 11, n. 2, p. 126-138, mai./ago. 2017.

Ramsay, Ben (2014). Social spatialisation: Exploring links within contemporary sonic art. (http://eprints.staffs.ac.uk/743/1/Ramsay_Social%20spatialisation%20EDITED%20FINAL.pdf). PhD Thesis, De Montfort University.

Ramsay, Ben (2013). Tools, Techniques and Composition: Bridging acousmatic and IDM. (https://econtact.ca/14_4/ramsay_acousmatic-idm.html) webpage accessed 2016-04-25.

Reynolds, Simon (1998). Energy Flash. Picador, London.

Rietveld, Hillegonda (1998). This Is Our House: House Music, Cultural Spaces and Technologies. Ashgate, Hants.

Rietveld, Hillegonda C. in Attias, Bernardo, Anna Gavanas, and Hillegonda C. Rietveld (2013). DJ culture in the mix: Power, technology, and social change in electronic dance music. New York: Bloomsbury Academic.

Rosenblum, Sandra P. (1991). Performance practices in classic piano music. Published by Indiana University Press.

Rothlein, Jordan (2015). A 60-speaker, 22-channel system will bring "an unparalleled music experience" to the main room of the long-running London nightclub. (https://www.residentadvisor.net/news/32448) webpage accessed 2018-09-25

Rumsey, Francis (2012). Spatial audio. 1st ed. Oxford: Focal.

St. John, Graham (ed.) (2004). Rave Culture and Religion. Routledge, New York.

Schaeffer, Pierre (1959). Musique concrète et connaissance de l'objet musical. Revue Belge De Musicologie / Belgisch Tijdschrift Voor Muziekwetenschap 13 (1/4).

Sfetcu, Nicolae (2014). The Music Sound. Google EBook.

Simberloff, Daniel (2014). The balance of nature-evolution of a panchreston. PLoS Biology 12 (10): e1001963.

Shapiro, Peter (ed.) (2000). Modulations, a History of Electronic Music, Throbbing Words on Sound. Caipirinha, New York.

Sherburne, Philip (2006). Digital Discipline: Minimalism in House and Techno. In Audio Culture, New York: Continuum, pp. 321–322.

Sicko, Dan (1999). Techno Rebels. Billboard Books, New York.

Smalley, Denis (2007). Space-form & The Acousmatic Image. Organised Sound, 12, pp 35-58. (http://dx.doi.org/10.1017/S1355771807001665), webpage accessed on 2016-04-25.

Smalley, Denis (1997). Spectromorphology: Explaining sound-shapes. Organised Sound 2 (2): 107-26.

Snoman, Rick (2013). Dance Music Manual. Focal Press.

Stockfelt, Ola (1989). Adequate Modes of Listening. Audio Culture; 88-93.

Stockhausen, Karlheinz (1959). "… How Time Passes …", English translation by Cornelius Cardew. Die Reihe, Vol. 5, pp. 10-40. Universal Edition.

Stockhausen, Karlheinz, and Maconie, Robin (1989). Stockhausen on Music: Lectures and Interviews. Boyars.

Thayer, Robert E. (1998). The origin of everyday moods: Managing energy, tension, and stress. New York; Oxford: Oxford University Press.

Théberge, Paul (2016). Musical Instruments as Assemblage. In Musical Instruments in the 21st Century: Identities, Configurations, Practices, edited by Till Bovermann et al. Springer Singapore, 2016. ProQuest Ebook Central, http://ebookcentral.proquest.com/lib/hud/detail.action?docID=4768412.

Till, Rupert (2010). Pop Cult: Religion in Popular Music. Continuum Publishing.

Valiquet, Patrick (2012). The spatialisation of stereophony: Taking positions in post-war electroacoustic music. International Review of the Aesthetics and Sociology of Music 43 (2): 403-21. (http://www.jstor.org.libaccess.hud.ac.uk/stable/23342829) webpage accessed on 2016-05-13.

Wallis, Rory and Lee, Hyunkook (2015). The Effect of Interchannel Time Difference on Localisation in Vertical Stereophony. Journal of the Audio Engineering Society, 63 (10); 767-776. ISSN 1549-4950 (http://eprints.hud.ac.uk/25888/), webpage accessed 2016-04-25.

Wartofsky, Alona (1997). All the Rave. The Washington Post, August 22, 1997, p. D01.

Zagorski-Thomas, Simon (2010). The stadium in your bedroom: Functional staging, authenticity and the audience-led aesthetic in record production. Popular Music 29 (2): 251-66.

Zeiner-Henriksen, Hans T. (2010). Moved by the groove: Bass drum sounds and body movements in electronic dance music. In Musical rhythm in the age of digital reproduction. Ashgate.

Zvonar, Richard (2005).  A History of Spatial Music. (http://cec.sonus.ca/econtact/7_4/zvonar_spatialmusic.html), webpage accessed on 2016-05-18.

Websites

[1] https://plato.stanford.edu/entries/beauty/, webpage accessed 20 October 2018.

[2] Tobin, Amon (n.d.). "Interview with Amon Tobin about composing the soundtrack for the video game Chaos Theory": http://www.gamespot.com/articles/qanda-chaos-theory-composer-amon-tobin/1100-6117182/, webpage accessed 20 August 2016.

[3] http://www.modulate.org.uk/modulateshop.html , webpage accessed 18 March 2018.

[4] Brown, Rob (n.d.). "Autechre Radio Interview Part 1." Available on YouTube, uploaded by user "Chris Godber" on 25 October 2010. https://www.youtube.com/watch?v=j_nt4lFvZ1w, webpage accessed 20 August 2016.

[5] Evo Lute. MASE — Multi Angle Sound Engine. https://www.eevolute.studio/

[6] http://zkm.de/en/event/2016/02/globale-performing-sound-playing-technology

[7] https://spatialsoundinstitute.com/4DSOUND-A-Retrospective-2017

[8] https://4dsound.net/About

[9] Mikula, Ondřej. (2016). Skype interview with Ondřej Mikula (aka Aid Kid).

[10] Birolini, Hervé. (2016). Skype interview with Hervé Birolini (in French) (translation by Sébastien Lavoie).

[11] Idem.

[12] https://spatialsoundinstitute.com/Murcof - https://spatialsoundinstitute.com/P_4DSOUND-A-Retrospective

[13] https://spatialsoundinstitute.com/Max-Cooper - https://spatialsoundinstitute.com/P_4DSOUND-A-Retrospective

[14] http://www.ministryofsound.com/posts/articles/2016/january/dolby-atmos-is-here/

[15] http://arstechnica.co.uk/gadgets/2016/03/ministry-of-sound-dj-dolby-atmos/

[16] https://www.soundonsound.com/techniques/dolby-atmos-ministry-sound

[17] https://www.roberthenke.com/concerts/monolake.html

[18] http://www.roberthenke.com/concerts/wfs.html

[19] https://www.roberthenke.com/concerts/surroundtb.html

[20] http://www.higher-frequency.com/e_interview/richie_hawtin02/index02.htm

[21] https://www.residentadvisor.net/reviews/3354

[22] Ableton Live is a live performance tool originally developed by Ableton AG in 2001; the version used in this research is Live 10 (https://www.ableton.com/).

[23] https://sourceforge.net/projects/octogris/

[24] https://mntn.rocks/

[25] BBC Four documentary: Can You Feel It - How Dance Music Conquered the World - https://www.bbc.co.uk/programmes/b0bl3cl5, webpage accessed 3 October 2018.

[26] https://youtu.be/0ap9GIGJmXM

[27] https://youtu.be/sfqR64_WuvE?t=1m1s

[28] https://youtu.be/xG4AhAnyqvE

[29] https://youtu.be/PYSrINgSgYA

[30] GusGus - Full Performance (Live on KEXP) https://youtu.be/msL7z6WpW7w

[31] Jon Hopkins - Full Performance (Live on KEXP) https://youtu.be/hHe0nckLiYU

[32] https://quoteinvestigator.com/tag/gilbert-perreira/

[33] Premiered at HCMF 2017 on the Huddersfield Immersive Sound System. https://www.uymp.co.uk/news/excellent-review-for-jonty-harrisons-hcmf-performance-one-of-british-electronic-musics-most-significant-figures

[34] https://learningmusic.ableton.com/make-beats/tempo-and-genre.html

[35] https://www.oxfordmusiconline.com/grovemusic/view/10.1093/gmo/9781561592630.001.0001/omo-9781561592630-e-1002249796?rskey=p8yrgh&result=1

[36] https://pressureradio.com/jacks-house/

[37] https://mn2s.com/news/features/this-is-your-brain-on-house/

[38] https://music.tutsplus.com/tutorials/quick-tip-the-foundation-of-a-hip-hop-mix--audio-7801

[39] https://www.beqbe.com/london-s-uk-drum-n-bass-music-scene

[40] https://www.oxfordmusiconline.com/grovemusic/view/10.1093/gmo/9781561592630.001.0001/omo-9781561592630-e-1002256374?rskey=76ICzW&result=1

[41] https://youtu.be/9qJKxaWb0_A

[42] https://youtu.be/cwI0gbGEyuI

[43] https://youtu.be/TZ827lkktYs

[44] http://www.dailymail.co.uk/sciencetech/article-4809010/Pop-music-slowing-dark-times.html

[45] https://www.rollingstone.com/music/features/producers-songwriters-on-how-pop-songs-got-so-slow-w495464

[46] https://youtu.be/EyNokQFqpn0 

[47] https://www.ableton.com/en/packs/surround-panner/

[48] http://www.maxforlive.com/library/device/3797/spacer

[49] http://www.envelop.us/software/