Meet Brian, the robot programmed to help look after the elderly. Plus, we explore the role of robots in combat and in the classroom, and the ethics of how science shapes society. This special Naked Neuroscience podcast series, supported by the Wellcome Trust, reports from the International Neuroethics Society annual meeting at the AAAS headquarters in Washington DC and features guest robots Brian, Casper and Tangy, as well as human contributors Goldie Nejat, Barbara Sahakian, Paul Root Wolpe and James Giordano.
In this episode
07:49 - Should robots replace humans?
Should robots replace humans?
with Professor Barbara Sahakian, Cambridge University, and Professor Goldie Nejat, University of Toronto
Exploring the issues surrounding the use of robots in warfare, healthcare, and the classroom.
Hannah - Barbara is President of the International Neuroethics Society, and she joins me to discuss the ethical issues that were raised during the meeting here.
Barbara - Yes. I mean, one reason we held this was because obviously robotics technology is taking off, and there are a lot of important issues to do with society. Within the International Neuroethics Society, we want to make sure everything is for the benefit of society, and we want to look at the impact that these new technologies will have on people.
Hannah - So, Goldie, you mentioned that one of your robots, your 3D robots, costs about $10,000 to produce at the moment, which is considerably cheaper than employing a nursing assistant or a care worker for a year. So Barbara, what are the concerns regarding that, and how on Earth do you discuss these concerns with policy makers and businesses?
Barbara - Goldie mentioned a few of the things she does in her work to make sure that things are done in an ethical way, in terms of bringing in the carer to discuss what needs to be done with a particular individual, so it's more focused on the person themselves. And I think that's what we really need to do with these things: think of them as aids for people in situations where perhaps there isn't enough help, and where if a robot doesn't do it, nobody will.
But on the other hand, as a society, how do we want to behave? Do we want to just say, "Okay, it's fine for robots to do all these things", and leave people to interact with a robot no matter how good it is? Perhaps it's not as good as a human being in terms of quality of interaction, and does it give the same sense of well-being that you might have if you're interacting with a human?
I think as a society, we have to think about how much we want to use these techniques to help with situations that we can't deal with in other ways. If you're going into a very dangerous situation, for instance removing nuclear waste and things like that, you have to weigh concern for the health of a human being against the normal social interaction that one would hope to get from another person.
Goldie of course mentioned that this is early-stage development, but what was raised in tonight's discussion, which was very interesting, was the ethical issue of vulnerable populations being exposed to robotics.
As a society, do we find it disturbing that perhaps somebody who has dementia doesn't actually realize they're being cared for by a robot, or forms unexpected, unusual attachments to the robot? Some people might feel it's duping the person into behaving in a way that most people would regard as unnatural. So, there are a lot of ethical issues that come out of this type of work as well.
Hannah - And finally, Goldie, I just wondered whether your robots would ever be used in education or child care, for example.
Goldie - Yes. So, we are actually designing social robots for search-and-rescue and other applications where we're trying to use robots as assistive tools. Like I mentioned before, the idea is: can robots be used as a tool, right? Can they be used as a training tool, or as a tool to help people in dangerous situations, where you can have a robot do it, for search-and-rescue for example, rather than put more human lives at risk? So, it's this open idea of how you interact with a robot, and if you can interact with it socially, then the applications become almost unlimited for using it as assistive technology.
Hannah - Barbara.
Barbara - I thought another very important point that Goldie raised was essentially cognitive training, or brain training as it's sometimes called, trying to improve memory in older people and in patients with dementia. I thought it was very interesting the way the robots had these facial expressions to try and encourage people, and that's a very positive thing. But one maybe has to balance that against the fact that we're now developing games that are used to do the same sorts of things. People who work in the games industry know very well how to increase motivation and, if somebody's getting frustrated, how to bring the level down easily. And it might just be easier to have an iPad with a game on it rather than perhaps a complicated robot that's trying to speak to you and encourage you to turn a card over.
Hannah - On a related note, I heard recently about an autistic boy who forged a friendship with Siri, the virtual assistant with a voice-controlled natural language interface and a synthetic voice. The boy's mother claimed it helped him to understand the world and his sense of reality, and also helped to galvanize almost a sense of empathy with this synthetic voice. So, do you think that there is - not just for 3D robots but also 2D robots - this extra way of helping autistic children, for example, or other types of vulnerable people, using robots?
Barbara - There are some very good experiments actually with avatars. They show that it helps children learn how to interact with other children, so that when autistic children go back into the classroom after they've had some of the training with the avatar, they're actually much better at knowing how they should respond to different situations and at interacting with children in a more effective way. They're more popular and they fit in better. So yes, these techniques are turning out to be very good for special groups that need help with social interaction or other forms of cognition.
Hannah - And Professor Ronald Arkin, who spoke on the use of robots in the military, mentioned that something like 45% of American soldiers admitted they would go against protocol and actually fire on civilians in certain situations, whereas you can obviously program a computer or a robot not to do that. So, you could argue that robots could decrease the number of humans, and specifically civilians, being killed in warfare.
Barbara - Absolutely. So, his idea was that collateral damage, the damage to civilians and children, would be less, and if that's the case, that'll be fantastic. But obviously, it's untried at the moment. That is what his aim is, and I thought it was very good that he's got this moral intention with the robotics, to actually make them act in an ethical way: if they identify a school bus, if they identify a hospital, they don't actually damage that building or that bus. People are concerned about who is responsible if, for instance in the context of warfare, one of the robots being used goes out of control or does something it wasn't planned to do, because we've all had the experience of computers going badly wrong. It's important that we are able to identify these problems early on and be able to deal with them. Warfare itself is horrible, and people are very concerned about the deaths of civilians involved, but it's one thing if at least it's felt that it's necessary and that human beings are involved in making the decisions. When that is taken away, people get very concerned about whether it will escalate, you know, if we're using machines and we're not actually using or wasting human lives.
Hannah - We're going to close now with the last question that was asked at the end of this evening's discussion, which was: how do you think society will change as we get more and more used to robots being used in different roles within it? How will it affect us as individuals and as a society more generally? Goldie.
Goldie - I think it's really an exciting time. If we look back in history, there have been a lot of new technologies, right? Starting from the industrial revolution, all the way to when TVs came into our households and changed how people behaved, then computers, cell phones, and so on. So, I think of us as developers looking at the applications and really where we want to take this technology. It's an exciting time to make sure that we're designing these robots for society; they need to help society, right? So I think this is the best time to really discuss and think about the policies so that we can shape this technology for the future.
Hannah - Barbara.
Barbara - Well, I agree with what Goldie said. I think it's absolutely brilliant that we've got all this new technology, and it will be very helpful. There are situations where, as we mentioned, it would be unsafe not to use a robot; to put a human on the front line, for instance where toxic waste does something unexpected, could be very hazardous. So, I think it's great that we have the ability to use all this new robotic technology, but what we have to do is what we did tonight, which is really discuss it with members of the public and amongst ourselves to make sure that we have a really good, ethical way of proceeding.
Hannah - And the very last question posed at the end of the session was: what's the role of robots in sex? Apparently, you can buy robots for sex over the internet, and the comments surrounding this concerned how they could benefit society by helping to free those currently trapped in human trafficking. Thanks to Goldie Nejat and Barbara Sahakian.
17:04 - Founding a neuroethics society
Founding a neuroethics society
with Professor Paul Root Wolpe, Emory University
Meeting one of the founders of the International Neuroethics Society, to find out about its motivations and history.
During the evening wine reception, I met with one of the founders of the International Neuroethics Society to find out how this meeting got started.
Paul - I'm Paul Root Wolpe from Emory University. I'm the Asa Griggs Candler Professor of Bioethics, and I have many other titles that they give me in lieu of salary increases. The Neuroethics Society was founded by a group of people who got together in about 2004. We decided that, with all of the attention that had been paid to genetics, which had created this kind of tsunami of ethical writing about genetics, neuroscience was being neglected. Especially because many of the things we were writing about in genetics, and I was doing some of that writing myself, things like genetic privacy and human enhancement, were going to be years away in genetics; at that time, having my genome couldn't say anything meaningful about me. But they were already here in neuroscience: we could already manipulate our brains, and we do it all the time with alcohol and other things. So, we decided the time had come to start some serious thought about the ethics of neuroscience. About twelve of us got together, including people like Mike Gazzaniga, Steve Hyman, Hank Greely, myself, Martha Farah, and Judy Illes, and we decided that it was time to try to bring together people who were interested, 'cause there was no one place where people were talking; we were scattered all over the country. So we founded the Neuroethics Society. We made Steve Hyman our first president, and now it has become the International Neuroethics Society, and it's really been a success story.
Hannah - And did neuroethics as a discipline actually exist before you founded this society?
Paul - No. A lot of people were doing neuroethics in the sense that some people were doing the ethics of psychiatry, working on, you know, psychotropics and their impact on people or their use in lifestyle. Some people on the neurology side were talking about these kinds of issues, and on the law side, about things like brain imaging in the courtroom. But it wasn't integrated under any rubric; it was just part of bioethics. There was push-back at the beginning, with some people saying, "Oh, we don't need another sub-field," and there have been a number of sub-fields in bioethics, many of which have faded or failed. But what happened with neuroethics is that, because neuroscience has really become in some ways the premier science in the United States for understanding human functioning, neuroethics just grew.
Hannah - Thanks to Paul Root Wolpe.
19:42 - An ethical evening!
An ethical evening!
with Professor James Giordano, Georgetown University
Evening relaxation and chats at a jolly good scientific poster session.
As well as wine at the reception, there were scientific posters. I met Professor James Giordano from Georgetown University Medical Centre; he had a big presence there.
James - We have twelve of our posters here, and I'm very proud of those, not because they're mine, but because they're my students' and my fellows'. What that really indicates is that they're making a presence in the field, 'cause as young and up-and-coming students, it's vital that they not only bring their lens and their voice, but that this is also a platform, a nexus, for them to interact with the current and the next generation of neuroscientists, and that's exciting.
Hannah - And what kind of work are they presenting at the conference?
James - Well, it really reflects the interests of our neuroethics studies program, and of course our program is international: it's based at Georgetown but it's internationally collaborative, so we're doing some work with colleagues in Germany, the UK, and Italy. The majority of our foci emphasize three main themes. First, we're looking at the neuroethics of deep brain stimulation. We're looking at this in a pragmatic way; we want to make sure we're not just pie in the sky, but really trying to plot the most important areas and domains that deep brain stimulation will affect, which should become the focus of neuroethical regard, deliberation, and discourse. Second, we're looking at animal neuroethics: the way we engage animals in research based upon the most contemporary knowledge we have about animal brains and animal minds. That speaks very largely to the way we engage animals not only in the laboratory but in daily life, and perhaps even the way we look at other selves, other consciousnesses. And the last area we're dealing with is the whole problem of enhancement: what constitutes treatment, what constitutes enhancement, and what constitutes enablement, which is a term our group has developed to look at the ways you might specifically augment particular tasks of neurological function in very discrete, socially sanctioned silos of performance, like peace officer, firefighter, or even soldier or doctor.
Hannah - And do you think that the area of neuroethics is becoming increasingly popular amongst students? Is there an increase in the number of people taking up and looking into this area?
James - I think neuroethics is not only a field that is growing by virtue of popularity, but I think the popularity reflects necessity. That's important, because I think our current generation of neuroscience students recognize that you can't extricate the science from the society. We don't live in a social vacuum, and certainly science is influenced by society and influences society. Neuroethics really provides that bridge; it's the nexus between what we do with our brain science and what we do with the meaning of our brain science. In many ways, we see it as the bridge from the synaptic to the social.
Hannah - And then finally, what were your top highlights from the conference today?
James - Well, you know, these conferences of the International Neuroethics Society provide a very unique forum. They allow some of the up-and-coming students, fellows, scholars, and young academicians in the field to literally meet with the old guard, if you will: those who developed the field. The field is really only about ten to twelve years old, so there are those neuroscientists who recognized the need for neuroethics, and they've become almost iconographic. So, it's a nice opportunity for the up-and-coming generation to rub elbows with the founders of the field. But it's also a great opportunity for the next generation of neuroscientists and neuroethicists to provide their voice and their lens, and make their mark on what is going to be a very exciting field as we move into neuroethics' second generation in its second decade.
Hannah - Thanks to James Giordano, and isn't his voice lovely? He told me he's also done voice-over work, and we'll be hearing more from his bassy tones later in the series, discussing how the American military agency DARPA is funding neuroscience research. Well, that's all we have time for in this special episode of Naked Neuroscience. I'm Hannah Critchlow, reporting from the International Neuroethics Society 2014 meeting, hosted at the AAAS, the American Association for the Advancement of Science, in Washington DC. Thanks to all those who took part in this episode: Goldie Nejat, Barbara Sahakian, Paul Root Wolpe, and James Giordano. In the next episode, we'll be hearing from DARPA, the Defense Advanced Research Projects Agency, which is funding brain projects, and we'll be asking: should governments wipe their secret service agents' memories after they've completed missions, and should we implant positive memories into veterans returning from combat? Join us again for this special Naked Neuroscience series to open your mind.