Emerging tech creates music from dance movements
For the second part of this month’s exploration of music and the brain, James Tytko spoke with Alexander Jensenius from the University of Oslo’s Department of Musicology…
James - Thank you so much for joining me, Alexander. We're exploring music in this episode of Naked Neuroscience, looking at some of the emerging technologies being developed to compose songs. We're going to talk about how this relates to your work a bit later. Specifically, you use motion capture technology, as I understand, to help compose music, a really interesting idea. But first, I think before we get there, we have to start by talking about dance, something we haven't talked about thus far in the podcast. When we were speaking earlier about music, we spoke about its evolutionary function in social cohesion. Is it right to look at dance as an extension of that social function?
Alexander - Absolutely, I would say so. So dance has a social function in terms of social bonding and relating to other people, and it's also highly connected to music. In many cultures, you wouldn't even be able to separate music and dance. In Norway where I live, in most folk music and dance, you would see that it's quite difficult to dance without the music. But also I think if you ask the musicians, a fiddler for example, to play without people dancing, it would be very difficult for them because it's really about the interaction between the musician and the dancers as they go.
James - Interesting. So there's a kind of two-way relationship between music and dance, and you've been cultivating some pretty interesting tools to help you learn more about their relationship, haven't you?
Alexander - Well, it started out for me when I was doing my PhD. I was interested in trying to understand more about how people move to music in different ways, including dancing, and I tried to understand more about music through the body. That eventually turned into using different types of motion capture methods, including the types of full-body motion capture systems that you can put on the body, the ones used for making animated films, but also all sorts of video analysis and sensor-based systems. When you have these types of systems, you get a lot of data. And when you have lots of data coming in, in real time, you can also turn that back into sound and music. So in a sense, I went from an analytic approach to understanding more about dance and movement to actually creating music through movement.
James - Kind of reversing the prevailing direction of travel from dance to music rather than the other way round. I'm not a professional dancer myself, but I imagine they start with the music and think how they might interpret that into a dance. You've flipped that on its head using body movements as the instrument for composition. How would you characterise it apart from the way it's composed? What makes it different to music produced with more traditional instruments? What does it sound like?
Alexander - Most of these examples are made with computers, so you will get more synthetic sounds, more synthesiser-type, electronic sounds. We've even tested this with our set of self-playing acoustic guitars, which is a collection of physical guitars that resonate, so we can actually set the guitar bodies in motion using human bodies as the starting point.
James - And other than just being an interesting new method of expression, could this have applications for people who, for whatever reason, don't learn to play an instrument or don't feel like they can sing to make music of their own?
Alexander - Absolutely. One of the ideas I find interesting is exactly to explore how you can make people become more actively engaged with musicking, making music of various kinds. Even if they haven't played the piano or the violin for tens of years, they can still create some kind of music on the fly using their own bodies.