Movies of the Mind
Emily Seward - Scientists have successfully decoded and reconstructed the visual images experienced by volunteers viewing a sequence of Hollywood movies. Scanning the brain using functional magnetic resonance imaging (fMRI), they matched up how changes in the moving images correlated with changes in brain activity, and were then able to reconstruct the visual images experienced when viewing unseen movies. Published in Current Biology by Professor Jack Gallant and his colleagues from the University of California, Berkeley, this work may lead to communication with brain-injured patients and even to being able to watch your own dreams like a video.
Jack - The goal of our laboratory is to build a computational model to describe how your brain processes visual information. And of course, in the real world when you're walking around, most visual information is dynamic. You see things moving, you move through the environment, and so we want to be able to understand how the brain processes this dynamic information. We came up with a computational model that allows us to predict brain activity in response to new movies, and that allows us to actually decode brain activity and sort of reconstruct a coarse representation of the movies you saw.
Emily - Their experiment involved two stages and used three volunteers. In the first stage, each volunteer watched a long series of short movie clips while lying in an fMRI scanner.
Jack - So this gave us a very long list of individual movies. They were short segments of movies, like Hollywood movie trailers, that were 10 to 20 seconds long, along with the brain activity, and that told us how the brain responded to individual shapes as they were moving through the displays.
Emily - As the scanner recorded their brain responses to the movie information, a computer program matched up how changes in the moving images were correlated with changes in the brain activity. Feeding this into a computational model enabled the researchers to create a dictionary that could be used to decode how the brain responds more generally to moving shapes. This can then be used to create predictions of what the brain is seeing.
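The "dictionary" Emily describes can be illustrated with a toy encoding model: a regularised linear regression from features of the movie to each voxel's fMRI response. Everything below — the random stand-in features, the data sizes, the ridge penalty — is a simplified sketch, not the Gallant lab's actual pipeline (which used motion-energy features of the movie frames).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 time points, 50 movie features, 30 voxels.
# In the real experiment the features described moving shapes in the
# movie; here they are random stand-ins.
n_time, n_feat, n_vox = 200, 50, 30
features = rng.standard_normal((n_time, n_feat))
true_weights = rng.standard_normal((n_feat, n_vox))
responses = features @ true_weights + 0.1 * rng.standard_normal((n_time, n_vox))

# Fit a ridge-regression "dictionary": one weight vector per voxel,
# mapping movie features to that voxel's response.
lam = 1.0
weights = np.linalg.solve(
    features.T @ features + lam * np.eye(n_feat),
    features.T @ responses,
)

# The fitted model can now predict brain activity for unseen movies.
new_features = rng.standard_normal((10, n_feat))
predicted_activity = new_features @ weights
print(predicted_activity.shape)  # → (10, 30)
```

With enough training data the fitted weights closely recover the true ones, which is what lets the model generalise to movies the subject never saw.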
Jack - In the second part of the experiment, we had people go back in the magnet and we showed them a different set of movies that they hadn't seen before. We used the computational models to predict what movies they were most likely to have seen, and then the computer basically tried to build a reconstruction of what they actually saw.
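A minimal sketch of this decoding step, using toy stand-ins: predict the brain activity each clip in a library would evoke, rank the clips by how well that prediction matches the observed activity, and average the best matches into a reconstruction. The library size, noise level, and feature-space averaging here are illustrative assumptions, not the published method.

```python
import numpy as np

rng = np.random.default_rng(1)

n_feat, n_vox, n_library = 50, 30, 500
weights = rng.standard_normal((n_feat, n_vox))   # pre-fit encoding model

# A library of candidate clips, each summarised by a feature vector.
library_features = rng.standard_normal((n_library, n_feat))

# Pretend the subject watched library clip 42: simulate noisy activity.
observed = library_features[42] @ weights + 0.2 * rng.standard_normal(n_vox)

# Predict the activity each library clip would evoke, then score clips
# by the correlation between prediction and observation.
predicted = library_features @ weights           # (n_library, n_vox)
pred_z = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
obs_z = (observed - observed.mean()) / observed.std()
scores = pred_z @ obs_z / n_vox                  # per-clip correlation

# Average the top-matching clips (in feature space) as the "reconstruction".
top = np.argsort(scores)[::-1][:10]
reconstruction = library_features[top].mean(axis=0)
print(42 in top)  # → True: the true clip ranks near the top
```

Averaging many near-matches rather than picking a single winner is what produces the characteristically blurred reconstructions, and it is also why a bigger clip library improves the result.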
Emily - Based on what it had learned from the initial training sessions, the computer was asked to predict what the subjects had been watching, and then, using a hundred clips selected from over 18 million seconds of YouTube video footage, to build a reconstruction of what it thought they had seen. Though blurred, the results are breathtaking. But what could this method be used for?
Jack - The methods we came up with to solve this problem in vision are general. This is a vision experiment, but you could apply a very similar modelling framework to, say, a dynamic thought process. So if we want to build a communication device, for example to communicate with stroke patients or people who have neurological diseases that leave them locked in so they can't communicate, having a method to decode dynamic brain activity would allow us to essentially communicate with those sorts of people. You can also imagine this would have interesting applications both for, say, entertainment and therapy. If you can decode movies then, in theory, you can decode, say, dreams from the brain, and that would be kind of an interesting application.
Emily - All these possibilities are still a little way in the future, but how can improving their algorithm help speed up the process?
Jack - Well, the algorithm we have right now is limited, but it's limited in two fundamental ways and one kind of trivial way. The first is that the reconstructions we have are essentially limited by computer power and disk space. We're reconstructing a movie that you saw using other movies that you didn't see, strangely enough. The library of movies that we use to reconstruct what you saw actually affects the quality of the reconstruction, so if we get more and more computer power then our reconstructions get better and better. At a more fundamental level, the reconstructions are limited by the quality of the models we have of the brain, and as models of the brain get better and better, our ability to reconstruct brain activity and figure out what you saw gets better and better.
Emily - But should you worry that people will be able to read your thoughts as you walk along the street?
Jack - So, it's natural for people to have concerns about the ethics of this process and about the potential for invasion of privacy, and I share those concerns. I think in the long run, say decades out, this kind of brain-reading technology is going to face major ethical issues that are going to have to be addressed and overcome. In the short term, there's no danger of anyone having their brain read without their knowledge, because it requires spending several hours in a very large MRI machine, and anyone who was undergoing this procedure would know it.
Chris - Which is very reassuring. Jack Gallant ending that report by Emily Seward.