"Windows is looking for a solution to the problem". Is that a symptom of selfawareness?
What would be the end of that sentence? "An entity could be described as self-aware if..." What does it have to be able to do, that we know of so far?
One of my friends flies an airliner that won't move until it is satisfied that it is properly loaded, etc.
At the moment, no single computer, not even a supercomputer, has enough processing power to run a human intelligence.
Projections based on Moore's law suggest that we are going to reach that point in about 30 years or so; this is sometimes called the 'singularity'. Beyond that point, computers will be more intelligent than humans.
Self-awareness itself is not a particularly deep problem. In general, consciousness is just remembering and knowing about your own thought patterns; it's essentially a type of feedback loop.
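To make that "feedback loop" idea concrete, here's a purely illustrative toy (not anyone's actual proposal in this thread): a program that keeps a record of its own past choices and consults that record when choosing again, i.e. a loop over its own behaviour. The class name and the options are made up for the example.

```python
from collections import Counter

class SelfMonitoringAgent:
    """Toy feedback loop: the agent records its own past choices and
    uses that record (a crude 'self-model') to avoid repeating itself."""

    def __init__(self, options):
        self.options = options
        self.history = Counter()  # memory of its own behaviour

    def choose(self):
        # Consult the record of its own past choices: pick the least-used option.
        choice = min(self.options, key=lambda o: self.history[o])
        self.history[choice] += 1
        return choice

    def report(self):
        # "Remembering and knowing about your own thought patterns."
        return dict(self.history)

agent = SelfMonitoringAgent(["explore", "exploit"])
picks = [agent.choose() for _ in range(4)]
print(picks, agent.report())
```

Obviously this is nowhere near consciousness; the point is only that "knowing about your own thought patterns" can be modelled as a system that takes its own prior outputs as inputs.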
Could [a computer] eventually gain a level of self-awareness that is similar to our own or even more superior to ours?
To calculate how much hardware you need, you take the number of neurons and factor in the connections between them (each one has hundreds of connections or more), and then allow for the fact that silicon has a clock rate thousands or even millions of times faster than the neurons. You end up needing a very, very big computer with loads of RAM and lots and lots of interconnection. Your desktop computer is only a bit smarter than a beetle, best case.
Perhaps one day, computers may be able to communicate effectively with more than 150 individuals?
Well, you'll soon eat your words.
To over-summarise this, the thing is that it takes a huge amount less computing power to run a program than it does to learn or write a new one. Writing a new program effectively involves doing a massive search operation to work out the interrelationships between things. Human brains are massively parallel and are used to look for these relationships; they effectively write their own programs.
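A toy illustration of why "writing a program" amounts to a massive search (my own made-up example, not anything from the thread): brute-force "program synthesis" over a tiny grammar of operations, looking for a sequence that fits some input/output pairs. Even with only four primitive operations, the number of candidates grows exponentially with program length.

```python
from itertools import product

# A tiny grammar of single-step operations on an integer.
OPS = {
    "x+1": lambda x: x + 1,
    "x*2": lambda x: x * 2,
    "x-3": lambda x: x - 3,
    "x*x": lambda x: x * x,
}

def search(examples, max_len=3):
    """Try every sequence of ops up to max_len; return the first that
    reproduces all (input, output) examples, plus the number tried."""
    tried = 0
    for length in range(1, max_len + 1):
        for seq in product(OPS, repeat=length):
            tried += 1
            def run(x, seq=seq):
                for name in seq:
                    x = OPS[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return list(seq), tried
    return None, tried

# Target behaviour: f(x) = (x + 1) * 2
prog, tried = search([(1, 4), (2, 6), (5, 12)])
print(prog, "found after", tried, "candidates")
```

With 4 operations there are 4^n candidate programs of length n; a realistic instruction set and realistic program lengths make the search space astronomically larger, which is the asymmetry the post describes between running a program and finding one.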
Computers are really good at massive search operations.
Human brains are incredibly slow and need to be massively parallel in order to make up for that deficiency.
The fact that we can write our own programs is simply down to the fact that evolution has programmed us to be universal problem solving machines.
We really are just a few steps away from making this happen.
Quote from: David Cooper on 09/01/2015 18:26:42
Computers are really good at massive search operations.

Actually... no. You'd think that, but no. They're fairly good at searching for some things in highly restricted areas. But a single human can (say) play chess at Grandmaster level, drive home, talk to another human, understand a visual scene, listen to music, etc. That's just one human. Even adding together the different processing demands ends up needing a huge computer. Then add in the fact that the human taught itself to do those things... oh boy.
Quote
The fact that we can write our own programs is simply down to the fact that evolution has programmed us to be universal problem solving machines.

Yes, by making us massively parallel. The human brain has more processing, storage, interconnection and throughput than any supercomputer. A supercomputer might match one of those, but not all at the same time.
Quote
We really are just a few steps away from making this happen.

More than 50 years of AI research says you're wrong.
I think you're vastly underestimating how hard it is to learn a new skill from more or less scratch.
While a computer can indeed play Countdown better than a human, it can't learn the game from scratch and play it acceptably; there's no fundamental limit to this, it just takes a freakishly long time.
A huge amount of human brain power is associated with general learning.
I know a reasonable amount about AI, and nothing at all gives me any reason to think that general learning problems are in any way easy. Indeed, all known general learning algorithms learn extremely slowly and require a LOT of processing power, which the human brain actually has in spades; I think it actually needs it.
Good luck, but I'm still pretty damn sure that general learning is NP-complete.
So we build a nice, super-efficient, self-learning AGI. Then what? Feed it lots of books? On what subjects? OK, so it gets its 'mind' filled with our selection of knowledge. What are its motivations? What does it actually want to do with its time? It has all this knowledge and information on how to do things, maybe like walking, talking and moving around. Do we then tell it what to do with these learnt abilities, or just let it decide for itself? This is the conundrum. Do we tell it what its motivations are, or at least persuade it into our way of thinking, or let it make its own 'mind' up? What if it just gets obsessed with golf?
How well can computers make sense of visual images? I know they can recognize certain images - "a tree" "a human" "a house" but can they make inferences about the relationships between objects, or even things that should be present but aren't?
Quote
How well can computers make sense of visual images? I know they can recognize certain images - "a tree" "a human" "a house" but can they make inferences about the relationships between objects, or even things that should be present but aren't?

The simplest percept is significant in everything it is not, in everything you could possibly compare it to in a myriad of ways, and in the countless tangential associations, some possibly relevant and some not, that every object has.