Happy New Ears
Natasha Barrett won the Nordic Council Music Prize 2006 for the work "...fetters...". The composer writes music for the ears – in her own terminology ‘acousmatic compositions’.
It sounds obvious that you compose for the ears, but it’s not that simple, according to Natasha Barrett. We have asked the composer to sketch the outline of a self-portrait.
By Natasha Barrett
Every day most of us hear sound detached from its visual causation and modified in some way by analogue or digital techniques. Whether it is on the TV, on the radio, or on a CD or DVD we are used to hearing in this manner.
This is a different situation to that of the 1950s when the first compositional exercises in electronic and acousmatic music began. A full explanation of the French word ‘acousmatic’ is complex and lengthy.
For now it is enough to say that ‘acousmatic’ is a listening process that involves no direct visual cues.
Therefore acousmatic music is music from which the visual causation is removed. One can imagine that in the 1950s such sound, heard over loudspeakers, was a ‘strange’ and maybe alien novelty.
Yet considering our society’s most common listening modes, it is paradoxical that serious electroacoustic music remains a less heard and less practised segment of the total contemporary music environment.
This compositional aesthetic is essentially less than 60 years old and still in its infancy.
Only recently have composers of electroacoustic music really begun to refine their craft rather than carry out endless experiments in search of the new or the novel.
With a Master’s and a Doctoral degree as the most important elements of my compositional education, history’s electroacoustic pioneers have clearly influenced my views and musical choices.
I have found that ‘refinement’ and ‘control’ have led to as many important discoveries in my composition as has experimentation.
In working with electroacoustic music I have experienced interesting collaborative projects with architects, visual artists and choreographers, but nearly all have sprung from the solo life in the studio – the electroacoustic composer’s laboratory.
Here composition is about sound. Sound can be taken apart and controlled in ways that are increasingly sophisticated as both individual skill and the available technologies evolve.
At a general level we can consider the basis of a (Western) listener’s perception and understanding of sound.
This understanding occupies two overlapping areas: intrinsic sound information – in other words the “pure music” or abstract approach to sound – and extrinsic information, which connects the sound to our experience of the world.
In concrete terms, intrinsic sound information deals with timbre, pitch, rhythm and articulation. Extrinsic sound information deals with identity (direct or vague), gesture, energy, motion and space.
As sound is a time-based form, in the continuum between intrinsic and extrinsic we can create a temporally relevant ‘morphology’. I find that as soon as a sound or a specific unit of information is ‘intentionally’ selected it has the potential to yield in both directions.
Composing in this way can be seen as an attempt to harness the potential of sound and place it in meaningful contexts that are related to, but not an exact mimesis of, our experienced world.
A working method
When finding ‘raw’ sound sources I nearly always choose to record my own materials – whether they be from acoustic instruments, from isolated sound making objects recorded in the studio, or from an environment offering a complex sound picture (capturing the real-world relation between sounding objects).
The reason for recording my own material relates to how I compose. All sound has an original context that appeals to hearing, seeing, smelling, touching and even tasting.
This information received by our senses allows us to understand the complete ‘picture’. Energy transfers – such as kinetic or motion energy – and the interaction between sounding objects are particularly important. First hand experience of these features provides an understanding that a recording alone is unable to capture.
But these ‘latent’ audio features can be drawn out of the source through transformation and development techniques – isolating and enhancing, prolonging, removing, or placing them in a specific context within the music.
A second reason for recording my own material is a practical one: Often I will use unconventional recording techniques such as placing two or more microphones extremely close to a resonating body.
This technique captures spectral and spatial information barely heard or imagined at a normal listening distance – as if putting the sound under a microscope.
Unless recording a ‘normal’ instrumental performance, rarely will any one person place the microphones in the same location as another person. In source recording, the microphones go in search of something specific, unique or simply unimagined.
Although my source recording technique is the same for most compositions, the way this material is developed hinges on the media for which I am composing.
The visual power of the live performer can be tantalising: The virtuosity and skill in the control of the instrument leaves us in awe of being “human”.
Therefore when I work with acoustic instruments and some electroacoustic element I want to take advantage of this live aspect. The electroacoustic material is therefore used as a springboard to reinforce the power of the ‘live’.
As everyday computers become more powerful, real-time sound processing becomes more useful: the acoustic sound produced by the performers on stage is processed live, using transformation algorithms designed alongside the score.
In earlier years, when composers relied on external effects-processing hardware, the performed action had to lead to an immediate, or simply delayed, electroacoustic response. When using a computer there are fewer technical constraints: acoustic sound can be analysed to provide control data for transformation processes later in the performance,
or dislocated to form temporal structures as complex as any composed ‘out of real-time’ in the studio. The advantages are clear: the individual performer’s virtuosity and ‘humanness’ are fed directly into the electroacoustic part of the work.
The disadvantages are maybe less clear: Even with great programming skill it is nearly impossible to form in real-time the structure of an electroacoustic sound that has been thought and laboured over for weeks in the studio.
Am I simply not clever enough, or are our computers and software still not advanced enough? Probably a bit of both.
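The principle of analysing a live sound and recalling that analysis as control data later in the performance can be sketched in a few lines. The sketch below is purely illustrative – an offline mock-up in Python with NumPy, not actual concert software – in which the amplitude envelope captured from one performed phrase later shapes entirely different material:

```python
import numpy as np

SR = 44100  # sample rate in Hz (assumed for the example)

def amplitude_envelope(signal, frame=1024):
    """Analyse the 'live' input: one RMS amplitude value per frame."""
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    return np.sqrt((frames ** 2).mean(axis=1))

def shape_with(control, sound):
    """Recall stored analysis data: stretch it over a new sound as a gain curve."""
    curve = np.interp(np.linspace(0, len(control) - 1, len(sound)),
                      np.arange(len(control)), control)
    return sound * curve

# Stand-in for a performed phrase: a decaying noise burst.
phrase = np.random.randn(SR) * np.exp(-np.linspace(0.0, 5.0, SR))
env = amplitude_envelope(phrase)          # captured during the performance

# Later in the performance, the same envelope shapes a different sound.
tone = np.sin(2 * np.pi * 220.0 * np.arange(SR) / SR)
shaped = shape_with(env, tone)
```

The point is the temporal dislocation: the analysis happens at one moment, the response at another, something that fixed hardware delays could not offer.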
The difference between acousmatic and live music is well known. From a personal standpoint I listen best with my eyes closed. But I understand that many listeners find this problematic.
Without the visual element I need to search for ways to make the acousmatic information so enticing that the visual presence of a real-time human would get in the way of the music.
Although acousmatic music is also performed when ‘diffused’ over a loudspeaker orchestra, such a performance enhances gesture, articulation, dynamics and landscapes, while the direct, real-time link between sound production and human gesture was left behind in the studio months or years earlier.
Sound transformation and connection to the world as experienced
The idea of ‘spectromorphology’ (a term coined by Denis Smalley in the 1980s) – the change over time of a sound’s identity and its spectral (frequency) content – has clearly influenced my way of transforming sound.
To take a visual analogy, one can imagine the sound to be a recognisable object with size, shape and colour that can be melted and set solid at will.
The object will nearly always maintain some characteristics of its original (either in colour, geometry or texture) but also imply other shapes and forms, or become significantly abstracted.
Unless a sound is as spectrally simple as that created by a tone generator (a sine tone being the simplest form), it is almost impossible for our cognitive system to ignore real-world connections to that sound – identity, space and context. These connections may be direct and strong, direct and weak, or indirect and abstract.
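The spectral simplicity of a sine tone can be verified numerically: its spectrum collapses to a single component, which is why it carries so little real-world connection. A minimal sketch, assuming Python with NumPy and parameters chosen only for illustration:

```python
import numpy as np

SR = 8000                             # sample rate in Hz (example value)
t = np.arange(SR) / SR                # one second of time points
sine = np.sin(2 * np.pi * 440.0 * t)  # a 440 Hz sine tone

# One second of signal gives a 1 Hz frequency resolution, so the
# sine's energy sits in (essentially) a single spectral bin.
spectrum = np.abs(np.fft.rfft(sine))
peak_hz = int(np.argmax(spectrum)) * SR / len(sine)
```

Any recorded sound, by contrast, spreads energy across many bins, and it is that spread which carries identity, space and context.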
Sitting in my studio with the window open I can hear a blackbird. If we hear an unprocessed recording of this bird the connection is ‘direct and strong’.
We not only recognise, but also understand the context in which the sound was made. If on the other hand we hear a pure, high sound, modulated in a way similar to how the bird modulates its real sound, we can say this is a direct connection, but weaker than the real recording.
Further into the world of the indirect and the abstract, we find allusion, or implication. Indirect associations do not necessarily need to connect to sound.
For example, bird flight is essentially a non-sounding gesture, but can be implied by a sounding gesture that in some way suggests ‘flight’.
Connecting to archetypal understandings of physical gesture and energy transfer can be a powerful musical device. If I find ways to control these ideas the potential for musical complexity increases.
Links to the experienced world
The process of experimenting with and developing sound in the studio may begin aurally and end in the non-real-time creation of temporal structures.
Although by using my own acoustic recordings there is an inherent sounding connection to the experienced world, what about how these materials are developed and structured over time?
Numerical data can be used as a way to control both sound transformation and the organisation of events along the time axis,
but rather than using abstract algorithms such as fractal or stochastic processes, I either extract my own data from the subject, or find a mathematical simulation that clearly describes the real-world process.
In other words I try to ensure that both sound and numerical data are integrated elements of the one source.
For example in the acousmatic work Viva La Selva, the sound-data link was made by venturing into a Central American rain forest, making my own four-channel 24-hour sound recordings
and later extracting various data sets. These data included the spatial, spectral and temporal distribution of animal sounds over the 24-hour period and influenced much of the compositional sound transformation and temporal structuring.
Needless to say the sound sources were extracted from the fantastic ecosystem of the deep rain forest. Nevertheless, in composing this work, as is almost always the case, there were departures from science into the realm of composer choice and intervention.
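The kind of data extraction behind Viva la Selva can be illustrated in miniature. The sketch below (Python with NumPy; the threshold, frame size and synthetic ‘recording’ are invented for the example and have nothing to do with the actual analysis) detects event onsets in a recording and turns their temporal distribution into a normalised density curve – the sort of data that could then steer sound transformation or temporal structuring:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def event_times(signal, frame=512, threshold=0.2):
    """Crude onset detection: frames whose RMS first crosses a threshold."""
    n = len(signal) // frame
    rms = np.sqrt((signal[:n * frame].reshape(n, frame) ** 2).mean(axis=1))
    above = rms > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets * frame / SR          # onset times in seconds

def density_profile(times, duration, bins=10):
    """Temporal distribution of detected events, normalised to 0..1."""
    hist, _ = np.histogram(times, bins=bins, range=(0.0, duration))
    return hist / max(hist.max(), 1)

# Stand-in 'field recording': three short bursts in five seconds of silence.
rec = np.zeros(5 * SR)
for start in (0.5, 2.1, 4.0):
    i = int(start * SR)
    rec[i:i + 2048] = 0.8

times = event_times(rec)
profile = density_profile(times, 5.0)
```

In the real work the equivalent data sets covered the spatial, spectral and temporal distribution of animal sounds over a full 24-hour period.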
FACTS ABOUT NATASHA BARRETT
- Educated in England: Bachelor’s degree in music, City University London; Master’s degree in composition, Birmingham University; Doctoral degree “Structuring processes in electroacoustic music”, City University.
- After completing the doctoral work, in 1998 received a grant to work on various music technology projects at NoTAM (Norsk nettverk for Teknologi, Akustikk og Musikk).
- From 1999 to 2000 spent one year in a senior-lecturer position at the music conservatoire in Tromsø, “Avdeling for Kunstfag”.
- Since 2000 worked as a freelance composer, concert arranger and performer based in Oslo.
- Composes for instruments and live electronics, sound installations, dance, theatre, and animation projects, but all inspiration stems from acousmatic composition (music for the ears).
Most important works:
- Acousmatic works: Trade Winds (2006, 54’00), commissioned by NoTAM; Exploratio Invisibilis (2003, 28’00), commissioned by the Ultima festival; ...fetters... (2002, 14’30), commissioned by NRK; Viva la Selva! (1999, 17'30), commissioned by NICEM.
- Live works: AGORA (2003, 60'00), produced by Oslo Opera Net; Symbiosis (2002, 17'00), commissioned by Tanja Orning; Microclimate I: Snow & Instability (1998, 17’00), commissioned by the International Computer Music Association.
- Installations: Mimetic Dynamics (1999); Boundary Conditions (2002); Adsonore (2004).
Most important CD releases:
"KRAFTFELT", (Aurora) ACD5037; "Isostasie", (empreintes DIGITALes), IMED 0262.
Most important publications:
- Barrett, N. Spatio-musical composition strategies. Organised Sound 7(3).
- Barrett, N. & Hammer, Ø. Techniques for studying the spatio-temporal distribution of animal vocalisations in tropical wet forests. Bioacoustics 12:21-35.
- Barrett, N. 2000. A compositional methodology based on data extracted from natural phenomena. International Computer Music Conference Proceedings 2000.