Science Questions

Will robots be able to think for themselves?

Tue, 18th Sep 2012

From the show: Silicon Sailors - Robots take to the waves


Catherine Hiscox asked:

Will robots be able to think for themselves?


Alan -   It's a good question.  The problem is, of course, we don't actually know what thinking is.  But I would say in principle, yes, they should be able to think for themselves, but it's going to be a long way into the future I'm afraid.





We do know what thinking is - it's about modelling the world and fitting new ideas into that model while attempting to avoid contradictions. Where contradictions result, some of the data is wrong and it's a matter of hunting out the error(s). Logical reasoning can be used to help find the errors, and also to derive new information from what is already there. It is possible for a system using imperfect reasoning to improve its model of reality even with a lot of faults in how it processes data, and that's why such imperfect thinkers as humans are still able to make a lot of progress, but machines will apply logic perfectly and make rapid progress. Within the next few years, someone will build a system which will suddenly acquire human-level intelligence almost as if it came out of nowhere - it's just a matter of building a system with the right rules and letting it loose with the contents of Wikipedia. The main barrier to progress relates to linguistics, and that's the business of how different concepts can be broken down into their component parts so that they can be related to other concepts correctly - a considerable amount of this has to go on in order for a machine to make sense of things, and it's a tangled mess of complexity which I've spent most of my life working on. I predict that several companies will have machines that can pass the Turing Test by 2015. David Cooper, Tue, 18th Sep 2012
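The comment above describes thinking as fitting new ideas into a world model while hunting for contradictions. A minimal Python sketch of that idea (hypothetical code, not from the discussion): beliefs are simple proposition/truth-value pairs, and a contradiction is the same proposition asserted both ways.

```python
# Minimal sketch: a belief base that flags direct contradictions.
# Beliefs are (proposition, truth_value) pairs; a contradiction is the
# same proposition asserted both true and false.

def find_contradictions(beliefs):
    """Return the set of propositions asserted with both truth values."""
    truth = {}
    conflicts = set()
    for prop, value in beliefs:
        if prop in truth and truth[prop] != value:
            conflicts.add(prop)        # the model contains an error somewhere
        truth.setdefault(prop, value)  # first assertion is recorded
    return conflicts

beliefs = [
    ("robots can think", True),
    ("thinking requires consciousness", True),
    ("robots can think", False),   # contradicts the first belief
]
print(find_contradictions(beliefs))  # {'robots can think'}
```

A real reasoner would of course work over structured logic rather than opaque strings, but the core loop - add data, check for conflicts, flag where the model must be wrong - is the same.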

David, your definition of thinking makes me think you yourself are a thinking machine :P jk But let me offer some food for thought. I assume you've read Orwell's 1984? Do you remember the part about 'doublethink'? It's an extreme example, but it should be rather illustrative of where your definition fails. Humans have a brilliant capacity for simultaneously holding and, crucially, believing two contradictory views, which is what Orwell referred to as doublethink. Humans aren't the rational beings that you believe you can model in a computer program. In fact the highlights in our memories are often things that happened during moments of irrationality. hddd12345678910, Wed, 19th Sep 2012

There have already been chess robots made that think for themselves, so in that limited sense, yes, they already can.

As information processing technology improves, robots should become more and more flexible and intelligent; intelligence and thinking are not an either-or thing. wolfekeeper, Wed, 19th Sep 2012
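The limited "thinking" a chess program does is lookahead search. A toy minimax sketch (hypothetical illustration, not a real engine; the "game" here is just a two-ply tree of scored positions) shows the core of it:

```python
# Toy minimax: the core of how a chess program "thinks ahead".
# A game tree is either an int (a leaf position's score, from the
# maximiser's point of view) or a list of subtrees (reachable positions).

def minimax(tree, maximising=True):
    if isinstance(tree, int):          # leaf: an evaluated position
        return tree
    scores = [minimax(sub, not maximising) for sub in tree]
    return max(scores) if maximising else min(scores)

# Maximiser moves first; the opponent then picks the worst option for us.
tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(tree))  # 3
```

The first branch is best: the opponent can hold us to 3 there, but to only 2 or 1 in the other branches. Real engines add pruning and a learned or hand-tuned evaluation function, but the decision rule is this one.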

Thinking for oneself is what we (humans) do, so robots would have to be conscious to qualify. This is not the direction research is pursuing so the answer is no. grizelda, Wed, 19th Sep 2012

Thinking for oneself is not special to humans. wolfekeeper, Wed, 19th Sep 2012

At what stage did single cell organisms evolve from a "stimulus-response" existence to one of "free will"? Did that ever happen? or did we just evolve more complicated ways of responding to stimulus?

It seems to me that if computers were made to "think" like humans, we would have to actually introduce an ability for computers to cope with errors. Rather than an error freezing the system because it can't get past the logic, if we could allow the computer to make as many errors as possible and still operate, we might get some innovation out of it. Whether we could retain the original benefit of the efficiency of a computer is another problem.

I like Frank Herbert's (author of "Dune") idea of a "Mentat" which is a person specially trained to store, recall and process data with the efficiency of a computer, while still being human. bizerl, Wed, 19th Sep 2012

Computers can and do have capacities to cope with errors.  They can't cope with all errors, but neither can you. wolfekeeper, Wed, 19th Sep 2012
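A minimal Python sketch of the error-coping idea raised above (hypothetical code): instead of one failure freezing the whole run, failed items are recorded and skipped so the system keeps operating.

```python
# Sketch: a processing loop that tolerates errors instead of halting.
# Failures are logged alongside the successes; the system keeps going.

def robust_map(func, items):
    results, errors = [], []
    for item in items:
        try:
            results.append(func(item))
        except Exception as exc:
            errors.append((item, exc))   # note the failure, carry on
    return results, errors

results, errors = robust_map(lambda x: 10 / x, [1, 2, 0, 5])
print(results)      # [10.0, 5.0, 2.0]
print(len(errors))  # 1 (the division by zero)
```

This is ordinary defensive programming rather than innovation from error, but it shows that "an error freezing the system" is a design choice, not an inherent limit.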

Thanks for drawing my attention to that. I haven't read it, but I've just giggled/bonged doublethink (other search engines are available) to find an illustration of what he actually meant by it. I've often heard the word used about politics without stopping to wonder what its exact meaning was, just relying on the context to work out what was probably meant at the time, but I've clearly been missing a trick. It involves people believing in things that directly contradict each other, and they simply don't recognise the contradictions.

There are points for most of us where our beliefs are in conflict, and we may fail to notice some of those points. When we do notice, though, we either have to reject one of the beliefs or apply probabilities to them, making judgements about which is more likely to be right, though those judgements may vary according to the context and how dangerous it would be to get something wrong. An intelligent machine would do exactly the same thing, applying probabilities to all its beliefs and acting on the basis that the most probable ones are right, unless in some situations it would be dangerous to make such an assumption and it's better to play it safe. Machines would be far better than us at spotting the contradictions in their model of reality and would not simply choose the belief that ties in with whatever they want at the time. They would have to operate pragmatically, for example, acting as if it is possible for sentient beings to exist and to protect them, even if they determine that consciousness must be impossible due to the barrier between experiencing consciousness phenomena and converting that experience into knowledge of those phenomena (see link for more about that problem).

David Cooper, Thu, 20th Sep 2012
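The "act on the most probable belief unless it's dangerous to do so" rule from the comment above can be sketched as a tiny expected-cost calculation (hypothetical code and numbers, purely illustrative):

```python
# Sketch: choose which belief to act on by weighing probability against
# the cost of being wrong, rather than by probability alone.

def choose_belief(beliefs):
    """beliefs: list of (name, probability, cost_if_wrong).
    Act on the belief with the lowest expected cost of error."""
    return min(beliefs, key=lambda b: (1 - b[1]) * b[2])[0]

beliefs = [
    ("machines are not conscious", 0.9, 100),  # likely, but costly if wrong
    ("machines may be conscious",  0.1, 1),    # unlikely, but safe to assume
]
print(choose_belief(beliefs))  # machines may be conscious
```

Even though the first belief is far more probable, its expected cost of error (0.1 x 100 = 10) exceeds the second's (0.9 x 1 = 0.9), so the safe assumption wins - the "play it safe" behaviour described above.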

A pain to read, but an interesting attempt at thoroughness. It seems to me that the constraints it sets upon its logic are those of the known properties of the physical universe. There's a whole class of what is for now pseudo-science and metaphysics that deals with information, and in particular with emergent properties, that I think in time, provided with a relevant testable hypothesis (or, more likely, a set of them) dealing with these themes, will produce important new directions for research that might ultimately show that consciousness is a real 'physical' thing, it's just not a physical thing in 'the universe'. A literal manifestation of Descartes' dualism expressed as two Worlds: one physical, and one metaphysical. hddd12345678910, Fri, 21st Sep 2012

Sorry - I should have warned anyone who clicked on the link to start by reading the conclusions at the end first. It's deliberately been written in a way that minimises the content to keep the argument as simple as possible, but that makes it hard to read unless you already know what it's about.

I don't think emergent properties are going to help, because anything that emerges has to be 100% rooted in the components. This is the biggest puzzle of the lot: it looks as if consciousness must be a false phenomenon (because it appears to be impossible to convert the experience of a sensation into data representing the idea of that sensation without the system having to make complete guesses about how the sensation feels or whether there even is a sensation at all), and yet it feels far too real to us just to rule it out. If it is false, then morality has no role because it's every bit as impossible to harm a person as it is to harm a machine. Most of us are sure that we are conscious though, and that makes morality essential - machines must apply it when dealing with us, even if they don't believe we have any consciousness. They will have to deal with a whole stack of information about feelings which they believe don't exist, applying that information as if it was all true.

By the way, I wasn't trying to turn this into a discussion of consciousness - it's simply the first example I could think of where I know I have contradictory beliefs in my head. I act as if morality matters and treat suffering as if it's real (and important), but at the same time I have data in my head telling me that consciousness is impossible. There's got to be an error somewhere, but I can't find it. David Cooper, Fri, 21st Sep 2012

Marvin Minsky says that consciousness is just remembering a little bit of your earlier mental states.

So he thinks that it is perfectly possible to create a machine that is far more conscious than human beings, that can remember far more of its thoughts than humans ever could. wolfekeeper, Fri, 21st Sep 2012

I do think it's a possibility, I just don't agree on the definition of thinking. Perhaps it would be more precise to divide the question into two complementary ones. Will robots be able to learn with the same plasticity as humans? I'm sure this is just a matter of time, and will likely happen sooner rather than later. And will robots be able to obtain the same agency that for now we humans only allow ourselves? On this, I don't think it's a matter of time or something that will happen routinely given better models of the workings of neural networks. IMHO, this latter question will be answered only after a fundamental shift in the way we think about consciousness. hddd12345678910, Sun, 23rd Sep 2012

The scientific position is that human beings are essentially biological robots built by our genes anyway. wolfekeeper, Sun, 23rd Sep 2012

