Brain chips: 'Moral imperative' or a danger to liberty?

Elon Musk's Neuralink chip aims to take this technology in a controversial new direction...
13 February 2024
Presented by James Tytko


An outline of a human head, filled with connections like vessels or nerves.


This month, James Tytko explores the pitfalls of trying to debunk fake news online with Francis Madden, and discusses ongoing developments in the neuroscience of Long COVID with Stephanie Brown. Then, following the news that Elon Musk's brain chip company Neuralink have successfully implanted their device into a human, we explore what this means for the field of brain-computer interfaces...

In this episode

Wrong way sign

00:48 - Do your own research and fall for fake news more often

Evaluating the truthfulness of a false article actually increases the likelihood of believing it...

Do your own research and fall for fake news more often
Francis Madden

In 2024, countries with a combined population of 4 billion people are going to be holding elections. Propaganda and disinformation are not particularly new phenomena in these contexts, but the way they spread has changed. With the internet and social media having established themselves as a part of our daily lives - platforms where a broad range of outlets and individuals can distribute information and ideas to lots of people very quickly - it can be difficult to know what to trust. And research by Kevin Aslett of the University of Central Florida and colleagues suggests it’s even more difficult to identify fake news than you might think.

Their study in Nature found that online searching to evaluate the truthfulness of false news articles actually increased the probability of being deceived by 19%. James Tytko got in touch with regular Naked Neuroscientist Francis Madden to find out more…

Francis - In monitoring the terms that people use to evaluate news articles, the researchers found that some people simply copied and pasted the headline of the article into Google and, of the people who did that, around 77% received back at least one unreliable news link. Whereas of the people who didn't directly copy and paste the headline or the URL of the article, only 21% received at least one unreliable news link.

James - So if you were going to advise someone who's trying to evaluate the truthfulness of a news article, you'd tell them to re-articulate the terms of the article to have a better chance of avoiding more misinformation down the line?

Francis - Yeah, exactly. That would help you avoid what the researchers call a 'data void': if you use search terms unique to misinformation, you're more likely to expose yourself to more low quality information. An example is the headline of one of the articles they used: 'US faces engineered famine as COVID lockdowns and vaccine mandates could lead to widespread hunger.' The term 'engineered famine' is quite unique to conspiracy theorists, and it's unlikely to be used by mainstream media outlets. So when people added the word 'engineered' in front of 'famine' while searching to verify the article, 63% of the search queries led to at least one unreliable result. Whereas if you just searched 'famine' and 'COVID-19', none of the results were unreliable.

James - Did the authors of this study take into account the political leanings of the participants, or their vulnerability to being taken in by conspiracy theories? I'm just wondering if what you want to believe makes a big difference to the way you search on the internet?

Francis - So they actually examined the ideology of the participants, and people from different sides of the political spectrum might deem different things fake news. But, interestingly, they found that the increase in probability of believing fake news from evaluating it applies to everybody across the political spectrum and not just the left or the right.

James - Once you get sucked into one of these data voids, it can be more difficult to eventually reach the truth. Prevention is better than cure in this case. What can we do to ensure that, at an earlier stage, an alarm sounds in our brain to say, perhaps this is not true?

Francis - That's actually relevant to research being done at Cambridge University by Sander van der Linden, Jon Roozenbeek and colleagues, who have been investigating something called 'inoculation' against misinformation. The theory goes that we can build resistance to misinformation by giving ourselves a smaller dose that we can clearly identify as misinformation, and then apply that critical thinking to actual fake news articles, because it can be hard even for professionals to evaluate whether an article is true or false. The researchers showed participants videos of quite benign scenarios, like clips of Family Guy or Star Wars, to highlight the techniques used by misinformation outlets: emotionally manipulative rhetoric, false dichotomies, ad hominem attacks. Seeing a trivial example of those techniques stimulates awareness, so that when participants are later exposed to actual fake news articles, they can identify that they're fake. If you go to a website called inoculation.science you can watch the videos they've used in their studies and play some games they've created to inoculate yourself against misinformation.

Person suffering with long Covid

05:54 - Long COVID affects brain's microstructure

And understanding why might be part of finding a cure...

Long COVID affects brain's microstructure
Stephanie Brown, University of Cambridge

We’re going to take a closer look at Long COVID now, and I mean really close. Long COVID is an umbrella term for various clusters of symptoms which can affect people in different ways long after infection with COVID-19. It’s for this reason that the physiology of the condition - what’s actually going on inside the body - is so poorly understood.

One of these clusters of symptoms is neurological: fatigue, brain fog, and loss of taste or smell. Using a special type of MRI which allows scientists to zoom in on the layout of our brains to the tiniest of scales, Alexander Rau from the University Hospital Freiburg was able to identify unique microstructural changes to the brains of people suffering from the neurological symptoms of Long COVID. This could be crucial for developing treatments for those suffering with the condition.

James Tytko spoke with the University of Cambridge’s Stephanie Brown to hear more…

Stephanie - What they saw is that COVID-19 infection induced a very specific pattern of microstructural changes in various brain regions, and that this pattern differed between those who had Long COVID and those who did not. The results showed that there was no brain volume loss, or any other lesions - i.e. changes to macrostructure - that might explain the symptoms of Long COVID, but when the researchers looked deeper into the microstructural changes, they did find these symptom specific brain networks associated with impaired cognition, sense of smell and fatigue. So it looks like these changes to the network and the microstructure of the tissue are very specific to Long COVID.

James - So how might we interpret that? What might be causing microstructural changes in the brain of people who've contracted COVID and what might the pathology be? Are there any theories?

Stephanie - That is a very interesting question and I'm sure that's something the research community looking at Long COVID will be really interested in finding out more about. There are theories, and I will speak very broadly about this. One would be the amount of water in the tissue, which means there are potentially inflammatory processes causing oedema in the tissue. Then we've got things like neuronal myelination: the myelin sheath surrounds and protects the neuron so, if that becomes damaged, it can obviously have some negative effects on brain health.

James - A lot of work to be done, then, and on just one aspect of the whole issue of Long COVID, which sadly seems like it's going to be with us for a long time to come.

Stephanie - Yes, I think so. Going on recent numbers and projections of how many people may be affected by this, it's certainly worrying. I think Long COVID researchers do agree that it's a very broad and difficult syndrome to treat, and a mix of treatments will probably be needed.

Technology is bringing computers and the brain closer together

09:21 - Giving paralysed patients the gift of movement

The brain-computer interface pioneer who gave people some of their abilities back...

Giving paralysed patients the gift of movement
John Donoghue, Brown University

Before Elon Musk, brain-computer interfaces were solely a medical pursuit. The field dates back to the late 1990s, when pioneers like John Donoghue of Brown University were first beginning to understand how they might restore abilities to people with paralysis. I caught up with the man himself to chart the early evolution of this technology...

John - We had a pretty reasonable understanding of how the part of the brain that controls movement was structuring its output. It was putting out patterns and we could interpret what those patterns meant. At about the same time, we had these electrode arrays that could pick up enough neurons that you could read out enough of that pattern and make sense of it. So we said at that time, well, this is really something that can help people: people who are paralysed from a disease called ALS, where the connection between the brain and the muscles degenerates, and people with a spinal cord injury, which cuts off the messages from the brain to the body. Even at that time in the late 1990s, I was saying that the real goal would be to use the advances in electronics to reconnect the brain back to the body. In 2017, working with Bob Kirsch and Hunter Peckham at Case Western Reserve University, we did just that: a person who was completely paralysed from a spinal cord injury was able to pick things up, drink, eat and move his own arm using this kind of device.

James - And throughout all of these developments, it's crucial to mention that they've all involved implanting something within the head: a surgical procedure to implant, as you mentioned, that electrode array, whose strands sit in the brain and read the neuronal activity that's going on.

John - Exactly. The resolution - that is, the high fidelity resolution we need to read out these patterns in the brain - requires that something be put in the brain. The array is about the size of an aspirin, five millimetres or so, and is put into the brain surgically through a small opening which is then closed up. There are still ongoing attempts to get signals by just reading from the surface of the head, but they're very crude and you can't get much more than a 'yes, no' signal.

James - What advances were needed to go from helping paralysed people move cursors on a screen to later being able to move prosthetic limbs to pick things up just with the power of the mind?

John - There are biological and technological challenges here. The electrode array that was around in the 1990s is basically the technology that all of the few dozen people who have been implanted have used - until the slightly different technology that Elon Musk's Neuralink is using. The only difference is that we might put two of them in, so instead of 100 samples we get 200 samples. The biological question is: are more samples better? It seems that the more information we read, the better an estimate we can get of what you want to do. The second challenge is technology. To be honest, the reason we didn't control a robot is that we couldn't afford one. Then, the more sophisticated work that showed really elaborate robotic arm control came more from what we learned about the complexity of the signals. It's like having a QR decoder for your camera to scan a product and it doesn't work very well; over the years, we learned how to make it work better. That's called the decoding problem: reading out the brain and understanding the signals. The next step, though, is that all of the people who have the implants through the research setting have a plug on their head, and that plug is something that's not desirable. The step that Neuralink has made is that they can put the electronics under the scalp so everything is inside the body.
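To give a flavour of the decoding problem John describes, here is a minimal, purely illustrative sketch in Python. It is not Donoghue's or Neuralink's actual method: it simply fits a linear map from simulated spike counts to a two-dimensional cursor velocity, with every number and variable invented for the example (real systems typically use more sophisticated decoders, such as Kalman filters, trained on recordings made while the participant imagines movements).

```python
# Illustrative sketch only: a linear decoder from simulated electrode-array
# spike counts to 2D cursor velocity. All data and parameters are synthetic.
import numpy as np

rng = np.random.default_rng(0)

n_channels = 100   # roughly the number of electrodes on a classic array
n_samples = 2000   # time bins of recorded activity (e.g. 50 ms each)

# Hypothetical "ground truth": each channel is weakly tuned to 2D velocity.
true_tuning = rng.normal(size=(n_channels, 2))
velocity = rng.normal(size=(n_samples, 2))   # intended cursor velocity
spike_counts = velocity @ true_tuning.T + rng.normal(scale=2.0, size=(n_samples, n_channels))

# Fit a linear decoder (ridge regression) from spike counts to velocity.
lam = 1.0
X, Y = spike_counts, velocity
W = np.linalg.solve(X.T @ X + lam * np.eye(n_channels), X.T @ Y)

# Decode a new time bin of activity into a velocity command for the cursor.
new_counts = rng.normal(size=(1, n_channels))
decoded_velocity = new_counts @ W
print(decoded_velocity)   # the 2D command the system would issue
```

The point of the sketch is John's observation that more channels give more samples of the underlying pattern: with more informative columns in the spike-count matrix, the fitted map recovers the intended movement more reliably.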

James - This field was started purely as a medical pursuit. People are speculating that, in the not too distant future, brain-computer interfaces could expand our functions as people: expanding our memories or potentially even uploading our consciousness to the cloud. Are we in a position where, for implanting neural devices into healthy individuals, the probability and severity of side effects are getting towards that critical point where regulatory bodies will be satisfied for this to happen more regularly?

John - Brain surgery, I think, is something that takes everyone aback, or most people, but I'd say brain surgery is actually quite safe. I'm not a clinician, so I don't have direct experience, but I've worked with many neurosurgeons and I do know that there are complications in any surgical procedure - I think the overall rate is around 5% - but those range from minor skin infections to maybe a postoperative fever, which is not uncommon. I do think it's not a worry for a person who is paralysed: that small risk of a surgical procedure is worth it with a device that has been approved by regulatory bodies. Now, for able bodied people, is a 5% risk worth it? I don't think clinicians would ordinarily say, 'I'm going to do a surgical procedure on a person that has no clinical issues, no health problems,' and that is an ethical dilemma. Where will we go with that kind of thing? Personally, I think it's not a good idea, because it's not zero risk, even if it's not a big risk.

James - Transitioning the focus away from helping people with paralysis to potentially expanding our memory, for example, how do you feel about that?

John - Well, my first reaction is that we have very good devices that expand our memories and do all these functions without having to have surgery or have any device implanted in our brain. And that is, for example, our smartphones. We've already expanded our memory! I'm sure those devices will only get better and better. I don't see the remarkable advantage. The fundamental question is, how many neurons over how much space do you have to record in your brain to know what you're thinking? And one side of that argument, which I sort of lean to, is if I wanted to know exactly what's going on in your brain, I would need to record from every neuron or pretty close to all of them. We're not going to do that. Sticking that many wires in your brain is not reasonable.

A robotic-looking woman's face behind a wall of computer code.

16:48 - Ramping up brain chip regulations

Protecting our freedom of thought from brain-computer interface technologies...

Ramping up brain chip regulations
Marcello Ienca, EPFL

While many are sceptical of Neuralink’s long term ambitions, others argue it’s important to be prepared for them in any case.

If the company is able to somehow combine the efficiency of our brain with powerful artificial intelligence software, for example, then we need to make sure we’re ready for its impact on society. And what’s to stop the companies who control the software inside these chips collecting our thoughts for their own ends?

Marcello Ienca is a cognitive scientist who focuses on the ethical and policy implications of neurotechnology…

Marcello - Human enhancement is something that is inherent to human beings. We tend to change over time and we use technology to improve our functions. If you're wearing eyeglasses, you are improving your capacities. The same goes for wearing shoes: if we wear shoes, we can run faster than if we're barefoot. Human enhancement is not necessarily something that is morally problematic; it becomes morally problematic if it goes beyond or violates fundamental ethical principles. My concern is not about living in a world where people can boost their memory or their other executive functions - I think that would actually be quite desirable - but I think we have to be cautious on the equality side, and we should probably also have a discussion about which human abilities are legitimate to improve and which ones are not desirable to enhance.

James - Are we talking here about those dystopian possibilities where there could be an invasion of our mental privacy? The people who run the software we're using in these brain-computer interfaces could have access to our thoughts and potentially even change them. Is that a realistic concern?

Marcello - I do think it is a realistic concern. Neurotechnologies can be used to reveal, or predictively infer, information from the human brain about mental states, including thoughts and emotions. This can be done in two ways. In a broad sense we are already doing it, because we can take the data collected by neurotechnologies and mine them using artificial intelligence algorithms to reveal privacy sensitive information - even from seemingly non-private data - by establishing statistical correlations. But what will be possible relatively soon, in my view, is mind reading in the strong sense of brain decoding: we can use machine learning models to reconstruct the visual and semantic content of mental states from brain activity. In the last couple of years, large language models like GPT - the model that ChatGPT is based on - have also appeared to be extremely useful for achieving that goal.

And again, this is not something dangerous per se. We need to read the mind in order to make people's lives better. For example, there are people with locked-in syndrome or severe paralysis and, if we can read their thoughts, we can allow them to restore interaction with the external world, and that's a moral imperative in my view. But at the same time, in the big data economy we live in, companies will also have access to potentially sensitive information about people's minds. I think this is a real concern, because the ability to keep information in our minds private is a necessary requirement for freedom of thought.

James - The mind boggles at what the nefarious actors of the world - authoritarian governments, for example - could potentially do with this sort of power. What can we do to mitigate these risks?

Marcello - The brain is the most complex entity in the universe, therefore regulating the brain will not involve easy solutions. We should definitely make sure that companies in this field make their business ethical by design. I'm glad to say that a lot of neurotechnology companies are establishing ethics advisory committees, or are developing ethical guidelines that they operationalise in the company. Unfortunately, Neuralink is not one of those. On another level, I think we need to clarify what brain data really are and where they should sit in data protection regulation. Currently, it's quite a puzzle. They're definitely health data under the European General Data Protection Regulation, but they're not currently classified as a special category of health data unlike, for example, genetic data. Genetic data can probably teach us a lot about how brain data should be regulated, because DNA and information in the brain have a lot of similar characteristics: they're both predictive of present and future health status and behaviour, and they're both informationally rich. Brain data also have additional temporal resolution, so I think regulation should catch up with that. Then we have the level of fundamental human rights. I have introduced the notion of neurorights together with Roberto Andorno, and I think in the next few years we'll see a lot of regulatory developments in this field.

James - That was going to be my final question. It's obviously such an emerging field, based on a very newly emerging technology, and the risks you describe are not as far down the track as many of our listeners might think. Are we going to be prepared to face them when they come?

Marcello - I think so. The entire history of technology is a history of dealing with dangers. Any time a new technology comes along, there are a great deal of opportunities, but also a great deal of dangers, and the more powerful the technology, the greater the danger. Artificial intelligence and neurotechnology are extremely powerful technologies that will probably help millions of people worldwide. I should emphasise here that ethics is not just about preventing harm, it's also about promoting good. We currently live in a world where hundreds of millions of people suffer from disorders of the brain and mind, and we can't cure what we can't understand. We need to develop technologies that can help us read and understand the brain, and also modulate brain function, in order to alleviate this major burden of disease caused by neuropsychiatric disorders. But at the same time, we have to make sure that this technological innovation occurs within certain ethical boundaries and is ethical by design. It will not be easy, but I think the timeline is right. If we look at other technologies - with my students I use the example of social media - it's pretty clear that the genie is out of the bottle: it's too late to regulate social media platforms because we only started thinking about the ethical and societal implications of those technologies when it was too late. With newer technologies, we are still in time, but we have to act now.

The human brain.

24:03 - Brain to Z: B is for brainstem

Next up in our series of bitesized briefings...

Brain to Z: B is for brainstem

The brainstem is the deepest, most primitive part of the brain: the first part to form in our evolutionary journey. It's a sturdy stalk that connects the brain to the spinal cord, the many pathways within it maintaining this strong line of communication. The brainstem plays a crucial role in maintaining many vital bodily processes, which means its structure is conserved across many vertebrates in the animal kingdom, including birds, fish and mammals.

It’s split into three parts: the medulla oblongata, the pons, and the midbrain.

Starting at the base, the medulla oblongata, also known simply as the medulla, transitions seamlessly into our spinal cord. This is the most significant portion of the brainstem for maintaining cardiovascular and respiratory functions: our breathing, heart rate and blood pressure. Using information transmitted directly from blood vessels, the medulla instigates reflexive actions when it needs to, keeping our blood pressure in a safe range and the right levels of oxygen in our blood. This is part of homeostasis, our body’s automatic maintenance of stable internal conditions. Dysfunctions of this line of communication between blood vessels and the medulla are common features of cardiovascular and respiratory diseases.

The next part of the brainstem is called the pons. Pons is Latin for bridge, so you’ll be unsurprised to hear that it plays a connecting role, especially between the brainstem and our cerebellum - a part of the brain that plays an important role in our voluntary movements and balance. Within this region reside the nuclei of several cranial nerves, which handle sensations from the head and face. As well as managing the movements of our facial features, the pons also has a role in automatic functions such as tear and saliva production.

The final section of our journey from the spine up the brainstem leads us to the midbrain. Here you’ll find four bumps representing two paired structures: the superior and inferior colliculi, which play a role in eye movement, visual processing and auditory functions. You’ll also find important nuclei producing dopamine (the 'happiness hormone').

So a lot is going on in this compact region, a reflection of its important role in so many basic functions!
