Brain power

Sensors on the surface of the brain help paralysed people to communicate
02 April 2017

Interview with 

Jaimie Henderson, Stanford University


Damage and disease affecting the brain and spinal cord are devastating. A person never recovers their lost function, because the central nervous system has a very limited ability to repair and regenerate its tissues. So scientists are turning to technology to create more effective assistive devices for patients with neurological disabilities, and they’re getting better all the time. Chris Smith heard how from Stanford University's Jaimie Henderson...

Jaimie - So, what we were trying to do with this paper is to demonstrate the ability to type using brain signals and, in fact, to create a system that allows people to move a cursor on a computer screen and to type at anywhere between approximately 4 and 8 words per minute, which is a factor of between 2 and 4 faster than what's been demonstrated before.

Chris - How would that compare with say, me or my smartphone if I'm tapping out a text message?

Jaimie - I would say the average person probably performs somewhere in the 12 to 19 words per minute range. So we’re beginning to approach that range, but we’re still probably at about half of what you can type on a cell phone.

Chris - But critically, you are doing this without a person needing to move a muscle.

Jaimie - That’s correct. This is all done through reading out brain signals and translating them into movements of a cursor on a computer screen.

Chris - Before you tell us why the competition are doing less well than you are, how does it work?

Jaimie - We began by implanting a tiny sensor on the surface of the brain. The sensor is 4 x 4 mm, about the size of a baby aspirin, with 100 probes that penetrate just into the outer layers of the brain. These pick up signals from the neurons. When those signals are read out, there's an amplifier that mounts on top of a device called a pedestal, which attaches to our research participant’s skull and protrudes through the skin. We attach a cable to the top of this, run those signals out, amplify them, digitise them, and put them into a computer where they can be decoded. We can then use our computer algorithms to understand what the brain is trying to do, what sort of movement the person is trying to make, and decode that and display it on a computer screen.

Chris - How long does it take after you put this into the person’s brain for them to learn to think along the right lines to achieve meaningful control?

Jaimie - We found that our research participants are able to get control of the cursor fairly rapidly. It’s pretty intuitive. People usually pick this up within several days and become very facile with it within several weeks. The reason for this is that we’re basing our decoders on the tuning of the brain cells. We ask our participants to imagine or attempt a movement of their opposite arm, and we then pick up the signals from those brain cells which are tuned to the particular direction of movement. So if I imagine, for example, moving my arm up and to the left, certain of those brain cells will increase their firing rate or will be more active with that movement, whereas certain other brain cells will decrease their activity. Each probe of the array that we’ve implanted can pick up signals from one or several brain cells. By decoding the tuning of each of those brain cells, we can predict what movement a person is attempting and use that to move a cursor.
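The direction tuning Jaimie describes can be sketched with a toy population-vector model: each simulated brain cell has a preferred movement direction and fires more when the imagined movement matches it, and the decoder lets each cell vote along its preferred direction, weighted by how far its rate sits above baseline. Everything below (the cosine tuning shape, the firing-rate numbers, the noise level) is an illustrative assumption, not the actual decoder used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 100  # one unit per probe of the 100-probe array, for illustration
preferred = rng.uniform(0, 2 * np.pi, n_cells)  # preferred directions (rad)
baseline, depth = 20.0, 15.0  # firing-rate parameters in spikes/s (assumed)

def firing_rates(theta):
    """Cosine tuning: a cell's rate rises as the imagined movement
    direction approaches its preferred direction."""
    return baseline + depth * np.cos(theta - preferred)

def decode_direction(rates):
    """Population-vector decode: each cell votes along its preferred
    direction, weighted by how far its rate sits above baseline."""
    w = rates - baseline
    x = np.sum(w * np.cos(preferred))
    y = np.sum(w * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

intended = 2 * np.pi / 3  # e.g. imagining a movement up and to the left
rates = firing_rates(intended) + rng.normal(0, 2.0, n_cells)  # noisy rates
estimate = decode_direction(rates)
print(f"intended {intended:.2f} rad, decoded {estimate:.2f} rad")
```

Real BCI decoders use more sophisticated machinery than this (velocity decoders with Kalman-filter-style updates, for instance), but the underlying idea of combining the direction tuning of many cells recorded across the array is the same.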

Chris - People have been eavesdropping on the brain’s motor circuits and using them to drive devices in this way for a really long time. I mean, this is not brand new – the approach you're taking, is it? So, what is the novelty here?

Jaimie - There are actually several innovations that we used to achieve this level of performance. One is real time system design and real time programming. This allows us to feed cursor position back to our participants very rapidly: the cursor position is actually updated every millisecond, which is faster than even a 120 hertz display can show it. This very rapid feedback allows participants to rapidly see where the cursor and the target are and to better control the cursor. We also use digital filtering to clean up the signal. So it’s really these innovations altogether that allow us to achieve this level of performance.
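The two rates quoted here, a 1 millisecond decoder update against a 120 hertz display, can be made concrete with a small simulation: the decoder advances the cursor state every millisecond, while the display only samples whatever position is current at each frame boundary. The loop is purely illustrative (the velocity value is made up); it just shows the decode rate being decoupled from, and faster than, the display refresh.

```python
# Illustrative only: decoder updates cursor state at 1000 Hz (every 1 ms),
# while the display redraws at 120 Hz and samples the latest position.
decode_hz = 1000           # decoder steps per second (from the interview)
display_hz = 120           # display refresh rate (from the interview)
velocity = 0.05            # decoded cursor velocity, units/s (made up)

pos = 0.0
frames = []                # positions the display actually shows
for step in range(decode_hz):              # simulate one second
    pos += velocity / decode_hz            # cursor state advances every 1 ms
    t = (step + 1) / decode_hz             # elapsed time in seconds
    if len(frames) < int(t * display_hz):  # a new frame boundary was crossed
        frames.append(pos)                 # display grabs the current position

# Roughly 8 decoder updates happen between consecutive display frames.
print(len(frames), pos)
```

The point of updating faster than the display can show is latency: whenever a frame is drawn, it reflects the freshest decoded state rather than one computed a whole frame earlier.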

Chris - What about the long term stability of the interface with the brain tissue, though? For many years, people have struggled to do this with chronic implantable devices: they find the signal degrades over time.

Jaimie - Well unfortunately, that is still a problem. We do see signal degradation over time although one of the participants in the study had had the implant for approximately 3 years at the time that we performed these research sessions. Her performance was among the best in the study.

Chris - If you ask the people who use them, what was the approval rating? In other words, how did they respond? Also, given that you're now doing this in real time, what can you do to improve this further because we’re not quite up to a teenage text rate yet, are we?

Jaimie - No, we’re not quite there yet. Participants very much enjoyed using the system. One of them said that after he had been using our system for a period of time, he went back to his old head tracking system and found it somewhat cumbersome and clunky. There's obviously still a lot of work to do. We do want to be able to type faster, as you mentioned. It’s possible that we could do so by reading out the planning phase of movement: it’s been shown in the laboratory that, when someone is beginning to plan a movement, one can very rapidly decode the intended movement even before the movement itself is generated. So we’re hoping at some point to explore that concept further.

