Domenico Vicinanza, Anglia Ruskin University
What could be a more fitting tribute to the 2012 discovery of the Higgs particle, a discovery that went on to earn a Nobel prize, than a musical composition created from the collider's own data? Domenico Vicinanza heads the Electronics and Sound Engineering Research Group at Anglia Ruskin University. He's used a process called data sonification to turn the measurements from the experiments that uncovered the Higgs boson into a piece of music. He explained to Chris Smith how he did it...
Domenico - What we're listening to is the result of a process called data sonification, and data sonification is all about making measurements into something audible. So, if you like, it's like using notes and melodies to represent data instead of using conventional points and lines.
Chris - Which data did you use?
Domenico - So, in this case, we used energy measurements that were taken by physicists in 2012. So, what we are listening to is the distribution of the energy going from really low to high energies when the Higgs was actually discovered.
Chris - I see. So, when you see a collision happen, those recordings that were made in the detectors at CERN, they’ve given you that data and you’ve done something to it to turn it into a tune.
Domenico - Exactly. What we did to the measurements was mapping them, or basically associating to each single measurement, each single number, a music note using an algorithm. So, a set of rules that actually link the data to the music notes and give a melody.
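The mapping Domenico describes could be sketched like this (a minimal illustration, not his actual algorithm; the measurement values and the choice of a pentatonic scale here are assumptions for the example):

```python
# Hypothetical energy measurements (arbitrary units)
measurements = [12.4, 15.1, 9.8, 22.6, 30.2, 18.9]

# MIDI note numbers of a C major pentatonic scale across two octaves
SCALE = [60, 62, 64, 67, 69, 72, 74, 76, 79, 81]

def to_notes(data, scale):
    """Map each value linearly onto an index into the note list."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [scale[int((x - lo) / span * (len(scale) - 1))] for x in data]

print(to_notes(measurements, SCALE))  # [62, 64, 60, 72, 81, 69]
```

The set of rules is just the scaling plus the chosen scale: change either and the same data yields a different melody.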
Chris - There's only a small number of notes, but the energy levels must have been a continuous variable. It must have been over a huge range. So, how do you turn something with many, many possible energies into a discrete number of musical notes?
Domenico - What we did was actually using a mapping process that was compressing the range of the energy variation to a certain number of octaves in musical terms. So we had an orchestra; we had instruments able to play really low notes, like double basses, to instruments able to play really high-pitched notes, like flutes and piccolos. So, we decided beforehand what was our range, or energy range in sonic terms in some way, and we did the mapping.
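Compressing a continuous range into a fixed span of octaves can be sketched as below (the specific energy bounds, four-octave span, and C2 base note are assumptions for illustration):

```python
def energy_to_semitone(energy, e_min, e_max, octaves=4, base_midi=36):
    """Map an energy linearly onto `octaves` octaves above `base_midi` (C2),
    snapping to the nearest semitone (12 semitones per octave)."""
    frac = (energy - e_min) / (e_max - e_min)      # normalise to [0, 1]
    return base_midi + round(frac * octaves * 12)

# Low energies land in double-bass territory, high ones near the piccolos
print(energy_to_semitone(100, 100, 500))  # 36 (C2, lowest note)
print(energy_to_semitone(300, 100, 500))  # 60 (middle C)
print(energy_to_semitone(500, 100, 500))  # 84 (C6, four octaves up)
```

Deciding the range beforehand, as Domenico says, fixes `e_min`, `e_max`, and the octave span so the whole dataset fits the orchestra's playable register.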
Chris - You've got a real kind of orchestrated piece here, though. It's not just individual discrete notes. So, how did you then add the extra layers of orchestration? Did you apply one rule for one set of instrumentation and another set of rules for another, and then get something that sounds good?
Domenico - That was indeed a possibility. What I preferred to do was take the energy measurements and create one single, really long melody. Then I listened to it and extracted pieces of this long melody that sounded particularly nice, so were particularly suitable for an orchestration. As a composer, what I liked to do was use the right pieces for the right instruments. So, I started working, trying to imagine how these little pieces could layer on top of each other. I worked on my orchestration to tell a story, and the story I wanted to tell was how the discovery happened: working from low energies and low frequencies, double basses and cellos at the beginning, then having it build up with woodwinds and with horns sustaining the melody, and finally the big discovery.
Chris - Could you use the same technique to do other data? Presumably, you could.
Domenico - Indeed. So, data sonification is a really, really general technique. We can actually use it to represent whatever we like. For example, I was recently involved in a research project using data sonification to help doctors discriminate between healthy and potentially dangerous cells in cancer, so actually using sound to discriminate between healthy and unhealthy situations.
Chris - Where previously they would look down a microscope and try to discriminate visually, you would have a computer read a slide and translate what it's "seeing" into sounds, and then the doctors use their ears to discriminate rather than exclusively their eyes.
Domenico - Exactly. The reason why we are doing that is because we believe that ears can be much better than eyes at discriminating anomalies and discovering patterns. So, in some sense, hearing is a neglected sense. We are relying so much today on looking at graphs and visual representations of information that we forget we can actually use other senses. Hearing is one of the best ones: we have one of the most complex pattern-detecting systems embedded in our ears, and we're not using it.
Chris - I suppose this is the audio equivalent of creating a graph. If I've got a complex series of numbers and I want to represent them in the way that makes them easier to interpret and to show what the trend is, I draw a graph. You're doing the same thing with music for big data sets.
Domenico - Exactly. What we are hoping to do is use the natural capability of our ears to detect trends, patterns, and anomalies. One example I really like: when we think about a graph with lots of points, sometimes it's really, really difficult to identify one misplaced point. But if we think about a melody with a lot of notes, however complex, it's actually quite easy to spot a wrong, misplaced note. That's all because we are so good at detecting anomalies in patterns using our ears.
Hear the Higgs boson soundtrack composed by Domenico using data from the LHC.