I will resist being labelled

Mads Walther-Hansen met Mark Grimshaw, Professor of Musicology at Aalborg University, for a talk about his new definition of sound, brain-computer interfaces for transferring auditory experience and imagination, and future perspectives of sound art.
8 April 2015
Interview with Professor Mark Grimshaw

Mads Walther-Hansen (MWH): When I first met you almost two years ago, I knew you from your research on audio in first-person shooter games. As your colleague, I have since learned that your research extends quite a lot beyond that, and that you have a forthcoming book that, among other things, deals with a definition of sound. I would like to talk about this definition and your approach to sound research later in the interview. But to start things off, can you please tell me a little bit about your background and how you got into the world of computer game sound?

Mark Grimshaw (MG): How did I get into computer game sound? Well, how did I get into music, really? I actually started off when I left school studying mining engineering in South Africa. At the time my parents were in Botswana, and I really did not want to go back to England. I wanted to stay somewhere in Africa and I liked Botswana, so I thought, I could either teach or I could do mining engineering. I really didn’t want to teach. And now look at me.

So, I decided to do mining engineering and I ended up studying a Bachelor of engineering on the Witwatersrand in Johannesburg – down the gold and coal mines. But after about 6 months I decided that I really was not going to spend the rest of my life going several kilometres underground, crawling into little passages listening to the rocks cracking above me. So I stopped doing that and I moved on to another town, Durban, in South Africa and after a while I decided I would like to study music.

There was a university there and it was in a nice location and, just by chance, alongside my school qualifications – I did my A Levels in biology, chemistry and physics, completely the wrong subjects – I happened to have done all the equivalent courses in music theory and music practice; my instrument was trumpet. So I went off and auditioned for the music department and, much to my surprise, got in. I spent the next four years doing a Bachelor of Music. Then I went off to the University of York in the UK to do a Master's in music technology. That is where I first really got to grips with computer sound. So, I did that for a year and then I went off to Italy for a couple of years, where I worked mainly as a sound engineer in a couple of recording studios in Milan. I also did some liaison, translation and general running-about work for touring concerts for people like Tina Turner, Steve Gadd, Wayne Shorter and Hothouse Flowers. I ended up in a full-time lecturing position at Salford University in Manchester on their popular music and recording courses and I stayed there for ten years.

Photo: Mads Walther-Hansen

MWH: Where did you do your PhD?

MG: At the University of Waikato in Hamilton, New Zealand. I was there for three years and did a PhD in computer game sound. My thesis was called ‘The Acoustic Ecology of the First-Person Shooter’. I got interested in computer game sound while I was at Salford University. I started playing games as a child on the old Atari console – ‘Pong’ and stuff like that.

My interest in computer games was re-awakened when I was at Salford. Right about 1998, I wrote an undergraduate course, a Bachelor of Science in computer and video games.

MWH: I guess there was not much research about computer game sound at that time?

MG: No, not at that time. The only one I can think of was Steven Poole, who published a book called ‘Trigger Happy’, which had a few pages on sound. And that is still the case when you are writing about computer games: there is usually very little written about sound, but the situation has improved significantly in the last eight years or so.

So I developed a new degree course; it started running in 2000 and it is still running. That is something I am quite proud of. In about 2002, I decided that I had had enough of England and went off to New Zealand. But I kept up my interest, and when I did my PhD I wanted to concentrate on computer game sound.

MWH: But why did you choose to focus on sound instead of music, for instance, or some other feature of the game? 

MG: Why sound rather than music? Maybe that has got something to do with the fact that I spent a lot of my childhood in various countries in Africa. And there are a lot of very interesting sounds there – animal sounds particularly. I have often wondered about it, and the only thing I can come up with is that perhaps it is because of those soundscape influences from my childhood.

MWH: How did you end up in Denmark, and more particularly in Aalborg?

MG: I first came to Aalborg in 1997 or 1998, I think, on an EU Socrates programme called Open to Europe. And that is how I met Martin Knakkergaard (AAU) and others here. I think I came back a couple of times with that same project over another couple of years. I made friends with Martin, and he invited me back once or twice for a couple of guest lectures. So I have long had that contact.

MWH: How do you find the research environment in Denmark and in Aalborg?

MG: I actually find it very good. I can understand that a lot of Danish academics in these last couple of years are rather concerned about the way things are going. And I can see why they would say this. But from my perspective – having spent time in several other countries, four or five other universities, and having seen how research is supported there – I think Denmark is academic heaven when it comes to research time. Maybe that has something to do with my position, but I do not think so (Mark Grimshaw is Obel Professor of Music – supported by the Obel Family Foundation, ed.).

Because I also see opportunities for conference travel, for funding, and so on, which I have never seen in any other university I have been at. So, I like the research environment. Like I said, I also think I benefit from my position. Because I am in a position to really do whatever I want. And that is a lot of fun.

Towards a new definition of sound
MWH: You have a forthcoming book, co-authored with Tom Garner (University of Kent), to be published by Oxford University Press this year, called ‘Sonic Virtuality’, in which you argue that sound is an emergent perception. What do you mean by that?

MG: Well, the full definition is: sound is an emergent perception that arises primarily in the auditory cortex and is formed through spatio-temporal processes in an embodied system. We spend a whole book talking about that. The reason we came up with this definition was that the two of us, Tom and myself, wanted to have a way of discussing sound holistically – broadening the idea of context in particular.

MWH: What is context in this case?

MG: The standard Western definition of sound is the acoustic definition: that sound is a sound wave, a moving pressure wave through a medium – typically air. This, to us, says nothing about context. It says nothing about our psychophysiology, nothing about our emotions, nothing about the space in which we actually listen to sound waves. Psychoacoustics deals with that to a certain extent, but it still treats sound as a sound wave.

Now, surprisingly, when we were researching this we discovered that there were quite a few definitions of sound – sound as an object; sound as a property of an object; sound as an event, which is something Casey O’Callaghan (Casey O’Callaghan, Washington University, is a philosopher whose research focuses on auditory perception, ed.) talks about. There is Riddoch (Malcolm Riddoch is an electroacoustic music performer and director of the Auricle Sonic Arts Gallery, New Zealand, ed.), who talks about non-cochlear sound, which is quite possibly the closest to what we are talking about. But none of these satisfied what we wanted in terms of a holistic definition of sound that we could start to use as a means to design sound.

Now, my background and Tom’s background in terms of research is really computer game sound. So, we are very interested in how sound functions with space, how sound functions with narrative, how sound functions with emotion and so on. And we wanted some way to encapsulate all of that within a definition of sound. So, when you are talking about designing sound – and are getting down to the actual design of sound – you can then start to really use the new and emergent technologies, such as brain-computer interfaces and all the devices that interface with psychophysiology. You can start to use those in a more holistic manner rather than just designing frequency, amplitude, panning and so on. You can actually design, for example, a sound that has a particular emotional contour. And you could do that in such a way that it would be designed, in real time, to fit the person who is listening to the sound waves – who is using the sound waves at that point in time.

Sound art as emotion design
MWH: Can you exemplify how this real time processing of sound will enhance the experience of the game?

MG: Well, if you play a computer game, you could have the game engine monitor your emotions or monitor your excitement level, and then respond in real time with new sound waves that are designed to enhance a particular emotion, or increase your excitement, increase your fear, or calm you down in preparation for the next shock that is coming up.

The issue with computer games these days is that you have a one-size-fits-all paradigm. A sound designer comes up with a series of sounds, puts something together in the game and makes a few variations. For example, when you have different characters, you hear different footsteps and so on. But you hear the same sounds over and over again in the computer game, and after a short time you really get bored with it. I remember thinking this playing ‘Left 4 Dead’ – a survival-horror, first-person shooter game. There is a particular sound in there that I found quite creepy the first time I heard it. When you discover what the sound is, it comes from a witch who is sitting on the ground making this particular sound. It sounds like a little girl crying. A nicely designed sound. You immediately follow the sound, and of course, it is a trap. You have this horrible witch who attacks you, and within one second you are dead. But after a while, having played the game several times, you get used to the sound and you become quite blasé about it.


So what Tom Garner and I are looking for – and Tom in particular has done a lot of work on this – is, can you use the game engine to monitor your emotional state at any particular point in time and change the sound – change the dynamics of the sound, possibly even synthesize a new sound, which you could do using procedural audio techniques? Could you do that so that each time the player plays the game, he or she is kept in this constant state of unexpectedness? Can you keep them on the emotional rollercoaster that games are usually so good at? That is my interest.
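
To make this concrete, here is a minimal, hypothetical sketch in Python of the kind of loop described above. Nothing in it comes from an actual game engine or biofeedback API: the arousal reading is simulated and the mapping from arousal to pitch, vibrato and loudness is invented purely for illustration. The shape of the idea, though, is there: measure the player, then procedurally synthesize a sound whose parameters depend on that measurement.

```python
import math
import random

def read_arousal_estimate():
    """Stand-in for a biofeedback reading (EEG, skin conductance, etc.).
    Returns a value between 0 and 1; here it is simply simulated."""
    return random.random()

def synthesize_cry(arousal, duration=1.0, sample_rate=44100):
    """Procedurally synthesize a simple cry-like tone whose pitch, vibrato
    and loudness are driven by the player's estimated arousal."""
    base_freq = 400.0 + 300.0 * arousal      # tenser player -> higher pitch...
    vibrato_depth = 20.0 + 80.0 * arousal    # ...and a wider, more unstable wobble
    samples, phase = [], 0.0
    for n in range(int(duration * sample_rate)):
        t = n / sample_rate
        freq = base_freq + vibrato_depth * math.sin(2 * math.pi * 6.0 * t)
        phase += 2 * math.pi * freq / sample_rate
        amp = (0.2 + 0.6 * arousal) * math.exp(-1.5 * t)   # decaying envelope
        samples.append(amp * math.sin(phase))
    return samples

# One pass of the loop a game engine might run each time the creepy sound
# is about to be triggered: read the player, then synthesize accordingly.
arousal = read_arousal_estimate()
buffer = synthesize_cry(arousal)
print(f"arousal = {arousal:.2f}, synthesized {len(buffer)} samples")
```

In a sketch like this, the interesting design work sits in the mapping from measurement to synthesis parameters – which is exactly where the sound designer becomes, in Grimshaw’s phrase, an emotion designer.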

We envision that in the future a sound designer will become more like an emotion designer. So you sit there with your sound creation software and you can actually say: at this particular point in the game I want a particular emotion, and here is the sound wave that will do this.

Photo: Mads Walther-Hansen

Future technologies for sound design
MWH: What kind of technology allows you to do this? 

MG: The real-time monitoring is going to be some sort of psychophysiological device – biofeedback. Typically it will be a brain-computer interface. More and more of these are coming out. There are all sorts of ways you can monitor affective states at a particular point in time. And computing technology is now fast enough to be able to take this data, analyse it, and process audio files, or synthesize sound waves, in real time.

MWH: Can you say that this new definition of sound follows from the development of new technologies?

MG: Certainly it is connected to the development of technology. I noticed that these consumer-grade biofeedback devices are becoming more and more prevalent, although, as yet, hardly any big game has really taken account of them – other than possibly things like Microsoft's Kinect or the later Sony PlayStation. But those work slightly differently – eye tracking, gesture and so on – so they can infer a lot of affective states from that. We started noticing these consumer-grade biofeedback devices coming out, and I did some empirical research on this with some colleagues from Sweden, almost ten years ago now, where we were measuring galvanic skin response, EMG, EEG and so on as players were playing. This is where the idea came from. If you are going to talk about designing sound, if you are going to come up with a real-time design system, you have to have a more holistic way of conceptualising sound.

MWH: If sound is an emergent perception, as you claim, will it then be possible to inject sound into the brain as you have hinted at earlier?

MG: (Laughing) Let me backtrack a little bit there. Yes, we have this hypothesis. But what we make quite clear is that we are not discarding other definitions of sound. We think they can co-exist quite happily, depending on the context in which they are used. And this has been the case for many, many years. Definitions come and go. Theories come and go. So, in terms of your question about injecting sound – I guess injecting is a little bit of a controversial term. But yes, there is very interesting research that has started happening in the last couple of years on human-to-human interfaces mediated via computers. I forget the precise term. But basically you have two humans hooked up to electroencephalographic (EEG) devices (devices to record electrical activity in the brain, ed.) with skullcaps and electrodes. The output from one person is fed into a computer – some magic is done with it – and then it is fed into the other person. So, one person thinks a simple motor response, because those are the signals that are reasonably well measured by EEG at the scalp. For instance, someone thinks of lifting his or her finger to press a key on the keypad, and that signal is then transferred through the computer to transcranial magnetic stimulation (TMS) on the other person. That person’s brain is then stimulated with these signals, and that translates into actions where, involuntarily, their finger lifts up, presses down on the keypad and so on.

Transferring auditory experience
But if you can do that with motor signals – to move your fingers, your arms, your legs and so on – can you then also think about transferring perceptions? Because my interest is sound, the question then becomes: if sound is a perception – and this is another reason for coming up with the definition – can you then transfer that perception from one person to another, so that the recipient can also perceive and experience the same sound, the same music, that you are thinking of or imagining right there and then?

This of course leads to the idea that the perception of imagined sound is no different from the perception of sound arising from sound waves, which a lot of neuroscience and auditory research suggests: there is very little difference between these modes of perception when you look at what is happening in the brain. Thus imagined sound is as much sound as sound in the presence of sound waves. Of course, this also leads to the other thing – if you can transfer sound from one person to another person via a computer, then there is the possibility of simply thinking sound into your digital audio workstation. A long way in the future, I should say.

MWH: It certainly sounds interesting, and it possibly opens up a whole new field for sound art. We have already talked a lot about the function of sound in different contexts, but we could perhaps get more into the art perspectives of this new definition of sound. Does the idea of sound as an emergent perception have any value for how we might go about studying the aesthetic appreciation of sound?

MG: Well, that is an interesting question. You might imagine that if you are able to transfer the sound that a sound artist is imagining, at a particular point in time in the concert hall, for example, you can transfer that to the audience. Presumably the sound artist is thinking something wonderful about the sound. You can transfer that appreciation to the others.

So, this leaves us with the question – is this the end of sound appreciation, sound aesthetics and so on, now that you can simply control it from the mind? The use of biofeedback for sound art actually has a reasonably long history. I am sure that many of Seismograf’s readers can take it back a little bit further, but if you go back to the mid-60s, composers like Alvin Lucier were doing similar things – very, very primitive compared to today. Very primitive hardware, and very primitive forms of audification, because it was a direct transferral of brain waves into sound waves. But he also did some sonification, where he did some processing of the signals and then mapped them across to various controls of the synthesizers and other interfaces of the 60s. So this kind of thing has actually been going on for some time.
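
The distinction between those two approaches can be sketched in a few lines of code. The example below is purely illustrative – the "brain wave" is simulated and the mappings are invented, so it is not a reconstruction of Lucier’s setup – but it shows the difference: audification treats the signal itself as the waveform, whereas sonification extracts a feature from the signal and maps it onto a synthesis control.

```python
import math
import random

SAMPLE_RATE = 44100
EEG_RATE = 256   # a typical EEG sampling rate

# Stand-in for a recorded brain-wave signal: a slow ~10 Hz oscillation
# (roughly alpha-band) buried in noise, five seconds long.
eeg = [math.sin(2 * math.pi * 10 * n / EEG_RATE) + random.gauss(0.0, 0.3)
       for n in range(EEG_RATE * 5)]

def audify(signal):
    """Audification: the brain-wave samples become the audio waveform
    directly. Playing the 256 Hz recording back at 44.1 kHz speeds it up
    about 172 times, so a 10 Hz rhythm is heard as a tone near 1.7 kHz."""
    peak = max(abs(s) for s in signal) or 1.0
    return [s / peak for s in signal]            # normalise only; no mapping

def sonify(signal, window=128):
    """Sonification: a measured feature of the signal (here, RMS energy per
    window) is mapped onto a synthesizer control (here, the frequency of a
    sine oscillator)."""
    audio, phase = [], 0.0
    for start in range(0, len(signal) - window + 1, window):
        chunk = signal[start:start + window]
        rms = math.sqrt(sum(s * s for s in chunk) / window)
        freq = 200.0 + 600.0 * min(rms, 1.0)     # more energy -> higher pitch
        for _ in range(SAMPLE_RATE // 10):       # hold each mapped pitch for 0.1 s
            phase += 2 * math.pi * freq / SAMPLE_RATE
            audio.append(0.3 * math.sin(phase))
    return audio

print(len(audify(eeg)), "audified samples;", len(sonify(eeg)), "sonified samples")
```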

There are interesting possibilities raised if you, as an artist – and let’s say the audience were all fitted out with some kind of biofeedback device – ask: what then are the possibilities for audience participation in the creation of your new wonderful composition, your sound art and so on?

Also, when you get a sound art installation, and you approach it in a museum or an exhibition space and hook on your biofeedback device, will you perceive the same sound as another person if the sound art has been designed according to this concept of sound that we have? It includes the personality and the past experiences within the context of that particular person’s psychophysiology, and all that. And then, of course, it leaves us with the question: do you still have the sound artist as auteur?

MWH: Do you?

MG: I think you do, because the role of the sound artist then switches to something slightly different. Not designing sound, but designing a particular space for sound – an inner space for sound.

MWH: These are interesting perspectives.

MG: Well these are interesting questions and I have not even thought about them before. But sound art and composition have always been changing in relation to new technology. So there is nothing really revolutionary about this. It is simply a matter of adapting to the new technology.

Studying sound and music
MWH: A lot of music departments have moved in the direction of more sound-based research in recent years – some call it sound studies, some call it something else. How do you see this change?

MG: How do I see the change? One way to answer that is to say that music has always been about sound. Yes, certainly in English we would say a piece of music, or can you pass me that music book over there, for example – and it has music inside it in the form of scores, and we term that music. So in one sense music is the dots on the page – the notes on the page.

MWH: Do you really think so?

MG: In one sense. Certainly in the English language, that is a definition of music.

MWH: Some composers, such as Arnold Schoenberg I think, have claimed that the only reason you need musicians is to allow people who cannot read music to experience music?

MG: Okay. Okay. So this is a question about – when you look at a piece of music, yes, you can actually hear the music in your head. Hmmm, now, there is an interesting relationship to our definition of sound. Is music that is imagined in your head music? Or is it simply imaginary music – imaginary in one sense of the word? Interesting question – I must take a sip of my beer.

But, now, back to the question of sound and the study of sound in music departments. Sound has always been a part of music departments, whether it becomes manifest as sound waves or not. So I simply see it as one step further along this particular trajectory. The tradition of music departments has always been fundamentally about sound. You can put all the society and culture and history on top of that, but fundamentally it is sound. But it is also a step further, because now it moves into the area of interdisciplinarity. Now, you can always say that music has long been interdisciplinary – as one of the first, one of the most important, and one of the prime examples of interdisciplinarity. It has been so for many, many centuries. It is intimately wrapped up with the study of acoustics, with mathematics, with society, with religion, for example, so it has long been interdisciplinary. And this is simply another step along the way.

MWH: There is also a long tradition of scholars sticking to a very narrowly defined field. It seems like you try to embody many different approaches at the same time.

MG: I would love to be a polymath. But I think the days of polymaths are over. There is just far too much stuff out there to get to grips with these days – certainly in terms of publications; there is an astounding amount out there. So, I think the days of polymaths are over. But I would love to have as wide an understanding of all these different fields as I possibly can. All these so-called different disciplines can really teach each other. And the more you look at them, the more you realize that they are all really, sort of, going on about the same thing. They just hide it under jargon.

MWH: What do you think is the most important thing, subject, topic and so on, to teach music students today? What does the future need from music students? 

MG: What the future needs of anyone is the old cliché – it needs thinkers, whether they are music students or from any other academic discipline. Whatever field you come from, what the future needs is for people to think. Think for themselves, but also think for other people as well. I am quite sure this has been said by educators down the centuries, but there is a tendency in modern education to expect students to repeat what they have been taught, rather than to take that and synthesize new knowledge – to come up with their own thinking about how they are going to deal with that knowledge, make connections between different areas and so on. That is one of the advantages of working interdisciplinarily. So, thinking, really.

Dare I say this – the facts are not important, because facts change. Particularly knowledge changes.

MWH: And this of course goes for the concept of sound also?

MG: It goes for the concept of sound as well, yes. I don’t know whether this definition we have come up with will be of much use, but it is very useful to us. It may last some time; it may not. The acoustic definition of sound, as sound in itself, may not last much longer, though it is perfectly useful for dealing with creating and processing sound waves. What I try to do in my teaching and supervision is to get students to think for themselves, really. I am hoping, if there is anything I can do here at Aalborg, that I can carry on with what I have been doing before, which is hopefully getting students to be able to stand up in their own right and to think for themselves.

MWH: Would you call yourself someone who does sound studies?

MG: No I would not. But I certainly would call myself someone who studies sound. For me the object of study is sound. And yes, I do also write about sound in a particular context, but for me the object of study really is sound.

At the European Sound Studies Association conference in Copenhagen last year, a whole day was given over to a discussion of ‘what is sound studies?’. The fact that they need to ask that question is itself illuminating. But the fact that, after a day of listening to people discussing ‘what is sound studies?’, I came away none the wiser is also illuminating. So I do not think anyone really knows what sound studies is.

There is a lot of interesting work done within the putative field or discipline, whatever you want to call it, of sound studies. So, there is a lot of useful work done there. But my problem with the actual term is twofold. Firstly, it is an expansionist term: it tries to include as much as possible and – mark my words – it will not be long before musicology is a part, a subsidiary, of sound studies. The second problem I have with it is that, with my interest in interdisciplinarity, I have a problem with pigeonholing – with categories, with attempting to define limits and so on. The most interesting thing for me about limits and boxes and categories is not what is in them, but what is left out. That is my problem with sound studies. So no, I am not in the field of sound studies, but yes, I do study sound.

MWH: Do you consider yourself as part of a field of study? Just any field?

MG: No, I do not. I am just a scholar. I will read anything and everything. And I will make use of anything and everything. There is a saying in English: ‘a little knowledge is a dangerous thing’. Going back to my previous comments about polymaths and the possibilities of being a polymath these days – you could say that, in the attempt to be a polymath, which is related to interdisciplinarity, I am gathering a lot of ‘little knowledge’ about many, many things. But my point of view would be that ‘a little knowledge’ is an interesting thing, because what it actually does is make me start thinking about the knowledge, the area that I specifically focus on, which for me is sound. So I am a little bit like a magpie. I will take bits and pieces here and there and construct my own little nest. And whatever that nest is, whatever field you wish to say I work in, well, so be it. I will resist being labelled.

Photo: Mads Walther-Hansen
