Elon Musk's Neuralink chip implanted into a human

We explore the potential of brain computer interfaces...
02 February 2024

Interview with Andrew Jackson, Newcastle University




The billionaire founder of the company Neuralink, Elon Musk, has claimed that the first human has received an implant from the brain-chip startup and is recovering well. The US Food and Drug Administration gave Neuralink permission to test its implant on humans last year. But how might these devices work? I’ve been speaking to Andrew Jackson, who is a professor of neural interfaces at Newcastle University:

Andrew - So they're trying to build a brain computer interface. A brain computer interface is really a device that senses electrical activity in the brain. The brain is an electrical organ: our brain cells communicate by sending bits of electrical information between themselves, and if we can place a device into the brain and eavesdrop on that electrical activity, then we can take those signals out and send them to computers or assistive devices that could be helpful for people who are paralysed.

Chris - How far along this path has science, technology and medicine already advanced?

Andrew - The idea of communicating directly between our brains and technology has been around in science fiction for many years. The field really started accelerating around the turn of the millennium, and a study in the US started using electrode arrays in people for this brain computer interface idea. They started with fairly simple applications where people were controlling cursors, moving around on a computer screen. Over the years, that's developed into controlling robotic arms, and now some of the latest demonstrations involve converting brain signals into speech and language so people who are unable to speak can communicate using a voice synthesiser based on decoding their thoughts from their brain. It's worth pointing out that the accuracy of this process, the performance of these brain computer interfaces, still has some way to go to match our natural ability to move our hand or speak. The performance is slow, but as the algorithms for decoding brain activity improve, and as the technology for recording these electrical signals from more and more brain cells improves, there's significant promise that these devices could be useful for helping people who have disabilities.

Chris - And practically speaking, what's involved in doing this? When we are listening to the electrical activity of different bits of the nervous system, that is just spikes of electricity. So how is that turned into something that means something to a computer and then ultimately into the action that you want to do?

Andrew - That's a process called decoding. The easiest way to think about it is that it's a little bit like counting votes in an election. The first thing you have to do is to ask the user of the device to imagine making, say, two different movements: a movement to the left and a movement to the right. Different brain cells will have preferred directions, so some brain cells will be much more active when I imagine a movement to the right and some brain cells will be much more active when I imagine a movement to the left. And so by counting up the votes, in a way, that each brain cell is making for its preferred direction, you can infer that the population as a whole was involved in imagining a movement to the left or to the right. As you get more complicated, decoding movements in three dimensions or trying to decode language, this process obviously gets more difficult.

I think one of the main limitations of the field at the moment is that, in order to use these devices, the user first has to go through this process of training the algorithm to recognise what patterns of brain activity are associated with the different instructions or ideas that the interface is trying to decode. There's an interesting question in here, which is, how similar is your brain to my brain? When you think of a cat, does the same kind of pattern of activity occur in your brain as in mine when I think of a cat? And I think these are going to be some of the fundamental questions that the field is going to come up against: can we extend this concept to decoding much more complicated ideas from brains, or are we always going to be limited to these relatively simple applications where it's moving a cursor around on a screen or something like that?
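The vote-counting analogy Andrew describes can be sketched as a minimal decoder in a few lines of code. Everything below is illustrative: the firing rates, the preferred directions, and the function name are made-up values for the example, not drawn from any real implant or dataset.

```python
# A minimal sketch of the "counting votes" idea: each brain cell has a
# preferred direction, and its vote for that direction is weighted by
# how active it is. The direction with the larger tally wins.
# All numbers here are hypothetical, purely for illustration.

def decode_direction(firing_rates, preferred_directions):
    """Tally each cell's weighted vote and return the winning direction."""
    votes = {"left": 0.0, "right": 0.0}
    for rate, direction in zip(firing_rates, preferred_directions):
        votes[direction] += rate
    return max(votes, key=votes.get)

# Four hypothetical cells: two prefer "left", two prefer "right".
prefs = ["left", "right", "left", "right"]

# When the user imagines a leftward movement, the left-preferring
# cells fire more strongly (rates in spikes per second).
rates_imagining_left = [40.0, 5.0, 35.0, 8.0]
print(decode_direction(rates_imagining_left, prefs))  # left
```

Real decoders estimate each cell's preferred direction from a training session first, which is exactly the calibration step Andrew identifies as a current limitation.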

Chris - Everything we've discussed so far has been very much a one way street: taking information out of the brain. Is there an ultimate aspiration that we can put information in, in some way, or send signals back? And what would be the application or the purpose of doing something like that?

Andrew - Well, we're already doing that to some extent with devices like cochlear implants, used by some people who are deaf: there, information about sound is electrically transmitted into the auditory nerve, sent to the brain, and perceived as sound. You can think of perhaps doing something similar with blindness, where we may be able to send information into the retina, or potentially the visual cortex in the brain, that could be perceived by the brain as visual information. Certainly, these applications are starting to be explored. There's also potentially more interesting stuff we may be able to do in the future by interfacing in both directions with circuitry within the brain that supports some of the more complex cognitive functions. So people speculate as to whether we could improve people's memory, say, by sensing signals from memory circuits, but also putting information into those memory circuits. I think the problem is that, as we get towards those kinds of applications, we know relatively less about how the brain does these things already, and therefore it becomes more difficult to see how we'll be able to improve these more cognitive functions using these kinds of technologies anytime soon.

Chris - Are you looking forward to being a cyborg one day?

Andrew - I'm always interested in these technologies. I think that what we'll really be seeing over the coming years and decades as these technologies become more sophisticated is that we can really start to get at some of the really interesting scientific potentials of what we can do with a direct interface between the brain and technology. I'm hoping that the applications will become more than just moving cursors around on a computer screen, but I'm fascinated to see how the field develops.

