Catherine Hiscox asked:
Will robots be able to think for themselves?
Alan - It's a good question. The problem is, of course, we don't actually know what thinking is. But I would say in principle, yes, they should be able to think for themselves, but it's going to be a long way into the future, I'm afraid.
We do know what thinking is: it's about modelling the world and fitting new ideas into that model while attempting to avoid contradictions. Where contradictions arise, some of the data is wrong, and it's a matter of hunting out the error(s). Logical reasoning can be used to help find those errors, and also to derive new information from what is already there. A system using imperfect reasoning can still improve its model of reality despite many faults in how it processes data, which is why such imperfect thinkers as humans are able to make a lot of progress - but machines will apply logic perfectly and make rapid progress.

Within the next few years, someone will build a system which will suddenly acquire human-level intelligence almost as if it came out of nowhere - it's just a matter of building a system with the right rules and letting it loose on the contents of Wikipedia. The main barrier to progress is linguistics: the business of how different concepts can be broken down into their component parts so that they can be related to other concepts correctly. A considerable amount of this has to go on for a machine to make sense of things, and it's a tangled mess of complexity which I've spent most of my life working on. I predict that several companies will have machines that can pass the Turing Test by 2015. David Cooper, Tue, 18th Sep 2012
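The loop David describes - fit new facts into a model, derive consequences, hunt for contradictions - can be sketched in a few lines. This is a minimal illustrative toy, not any real system; the class and atom names are hypothetical, and negation is represented crudely by a "not " prefix.

```python
# Toy sketch of the "model + contradiction hunting" picture of thinking
# described above. Hypothetical names throughout; a minimal propositional
# illustration, not a real reasoning engine.

class ToyModel:
    def __init__(self):
        self.facts = set()   # atoms currently believed true, e.g. "wet"
        self.rules = []      # (premises, conclusion) pairs

    def add_rule(self, premises, conclusion):
        self.rules.append((frozenset(premises), conclusion))

    def add_fact(self, fact):
        """Add a fact, forward-chain to derive consequences,
        then return any atoms that clash with their negation."""
        self.facts.add(fact)
        changed = True
        while changed:       # derive new information from what is already there
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True
        # a contradiction here is an atom held alongside its negation
        return [f for f in self.facts if "not " + f in self.facts]

model = ToyModel()
model.add_rule({"raining"}, "wet")
print(model.add_fact("raining"))   # derives "wet"; no contradiction -> []
print(model.add_fact("not wet"))   # now "wet" clashes -> ["wet"]
```

Once a clash is reported, "hunting out the error" would mean retracting one of the conflicting facts - which is exactly the part humans do imperfectly and a machine would have to do systematically.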
David, your definition of thinking makes me think you yourself are a thinking machine :P jk But let me offer some food for thought. I assume you've read Orwell's 1984? Do you remember the part about 'doublethink'? It's an extreme example, but it should be rather illustrative of where your definition fails. Humans have a brilliant capacity for simultaneously holding and, crucially, believing two contradictory views - what Orwell referred to as doublethink. Humans aren't the rational beings that you believe you can model in a computer program. In fact, the highlights in our memories are often things that happened during moments of irrationality. hddd12345678910, Wed, 19th Sep 2012
There have already been chess robots made that think for themselves, so in that limited sense, yes, they already can.
Thinking for oneself is what we (humans) do, so robots would have to be conscious to qualify. This is not the direction research is pursuing, so the answer is no. grizelda, Wed, 19th Sep 2012
Thinking for oneself is not special to humans. wolfekeeper, Wed, 19th Sep 2012
At what stage did single-celled organisms evolve from a "stimulus-response" existence to one of "free will"? Did that ever happen? Or did we just evolve more complicated ways of responding to stimuli?
Computers can and do have capacities to cope with errors. They can't cope with all errors, but neither can you. wolfekeeper, Wed, 19th Sep 2012
A pain to read, but an interesting attempt at thoroughness. It seems to me that the constraints it sets upon its logic are those of the known properties of the physical universe. There's a whole class of what is, for now, pseudo-science and metaphysics dealing with information and, in particular, emergent properties. I think that in time, provided with a relevant testable hypothesis (or more likely a set of them) dealing with these themes, it will produce important new directions for research that might ultimately show that consciousness is a real 'physical' thing - it's just not a physical thing in 'the universe'. A literal manifestation of Descartes' dualism expressed as two worlds: one physical, and one meta-physical. hddd12345678910, Fri, 21st Sep 2012
Sorry - I should have warned anyone who clicked on the link to read the conclusions at the end first. It's deliberately been written in a way that minimises the content to keep the argument as simple as possible, but that makes it hard to read unless you already know what it's about.
Marvin Minsky says that consciousness is just remembering a little bit of your earlier mental states.
I do think it's a possibility; I just don't agree with the definition of thinking. Perhaps it would be more precise to divide the question into two complementary ones. Will robots be able to learn with the same plasticity as humans? I'm sure this is just a matter of time, and it will likely happen sooner rather than later. And will robots be able to obtain the same agency that, for now, we humans only allow ourselves? On this, I don't think it's a matter of time or something that will happen routinely given better models of the workings of neural networks. IMHO, this latter question will be answered only after a fundamental shift in the way we think about consciousness. hddd12345678910, Sun, 23rd Sep 2012
The scientific position is that human beings are essentially biological robots built by our genes anyway. wolfekeeper, Sun, 23rd Sep 2012