The Naked Scientists

The Naked Scientists Forum

Author Topic: Could computers gain some degree of self-awareness one day?  (Read 12051 times)

Offline Musicforawhile

  • Jr. Member
  • **
  • Posts: 44
    • View Profile
If consciousness emerged as a result of the complexity of the human mind, could a computer begin to gain some self-awareness if it were complex enough? Could it eventually reach a level of self-awareness that is similar to our own, or even superior to it?



 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4707
  • Thanked: 153 times
  • life is too short to drink instant coffee
    • View Profile
"Windows is looking for a solution to the problem". Is that a symptom of self-awareness?
 

Offline Musicforawhile

Quote
"Windows is looking for a solution to the problem". Is that a symptom of self-awareness?

Well that's what I'm asking you. I am not a computer or maths specialist, so I thought I'd ask the people who understand about computers.
 

Offline alancalverd

And I'm not sure what anyone means by self-awareness!

I think, rather like "life" or "consciousness", any useful definition of the abstract noun has to derive from an adjective and a function: "an entity is classified as self-aware if it ....."

And there it all gets a bit slippery, because we can almost always imagine, or even demonstrate, a machine that does exactly whatever the definition requires. Next thing you know, we are talking about entirely mechanistic models of humans. That doesn't worry me - I care for machines as well as animals, and have a healthy relationship with everything from the dumbest tools to the brightest dogs (though humans are often disappointingly dishonest) - but some people believe (with no evidence) that Homo sapiens is somehow special.
 

Offline Musicforawhile

What would be the end of that sentence? "An entity could be described as self-aware if..." What does it have to be able to do, that we know of so far?
 

Offline CliffordK

  • Neilep Level Member
  • ******
  • Posts: 6321
  • Thanked: 3 times
  • Site Moderator
    • View Profile
It is inevitable that AI will progress, probably quite rapidly over the next century or few centuries. 

There is a question of how much will be programmed, vs how much the computer will be able to learn on its own.  How much will the computers be able to program themselves?

Certainly computers are getting "smarter". Just bring up a simple program like your favorite word processor, and it will flag spelling errors and grammar errors, and perhaps even suggest words to use. But, of course, all of that was programmed in.

So, what is self aware?
I could write answers to a number of questions and put it into a computer program.

So,
You Ask:  What are you?
Computer Responds: I am a computer.

You:
What are you made of?
Computer: Silicon chips and wires.

You could certainly program in any number of questions/responses.  So, is the computer self aware? 

Perhaps one should ask if your dog is self aware? 

Did you suddenly conclude that you are a "human"?  Or is that something that you were told?  So, are people even truly self-aware?
 

Offline alancalverd

Quote
What would be the end of that sentence? "An entity could be described as self-aware if..." What does it have to be able to do, that we know of so far?

That's the whole conundrum. I have built machines that spent half the time doing what they were "paid" for, and the rest of the time checking themselves to make sure they were working correctly. My car goes into a self-preservation mode if it detects a problem that might damage the engine, and tells me if any of the light bulbs need replacing. One of my friends flies an airliner that won't move until it is happy that it is properly loaded, all the doors are shut, etc.: there are basically two buttons - "start engines" and "takeoff" - and neither will work if any important part of the aircraft is out of specification. All these machines are at least as "self-aware" as a mouse, which knows when it's hungry, threatened, or in the mood to make baby mice, and responds appropriately.

So I'm afraid I have to put the question back to you. What do you do that is symptomatic of self-awareness and can't be replicated by an algorithm or a set of gears? Simply "knowing that I exist" won't do, because we have no way of testing it!
 

Offline wolfekeeper

  • Neilep Level Member
  • ******
  • Posts: 1092
  • Thanked: 11 times
    • View Profile
At the moment, any single computer or even a supercomputer doesn't have enough processing power to run a human intelligence.

Projections based on Moore's law suggest that we are going to reach the point in about 30 years or so; this is sometimes called the 'singularity'. Beyond that point, computers will be more intelligent than humans.

Self-awareness itself is not a particularly deep problem; in general, consciousness is just remembering and knowing about your own thought patterns - it's essentially a type of feedback loop.
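
The kind of Moore's-law extrapolation mentioned above can be sketched numerically. The starting and target figures below are illustrative assumptions (the brain-equivalent figure especially is highly uncertain), not numbers from this thread:

```python
import math

# Assumed figures, for illustration only:
CURRENT_FLOPS = 1e15    # a large 2015-era machine (assumption)
BRAIN_FLOPS = 1e19      # a guessed brain-equivalent figure (assumption)
DOUBLING_YEARS = 2      # classic Moore's-law cadence

# Number of doublings needed to close the gap, converted to calendar years.
doublings = math.log2(BRAIN_FLOPS / CURRENT_FLOPS)
years = doublings * DOUBLING_YEARS
print(f"~{years:.0f} years to close a {BRAIN_FLOPS / CURRENT_FLOPS:.0e}x gap")
```

With these assumed inputs the projection lands in the same rough ballpark as the "about 30 years" figure; change either endpoint and the answer moves, which is why such projections vary so widely.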
 

Offline evan_au

  • Neilep Level Member
  • ******
  • Posts: 4116
  • Thanked: 245 times
    • View Profile
Quote
One of my friends flies an airliner that won't move until it is happy that it is properly loaded etc
An historical view:
Traditional mainframe computers didn't have very many sensors - it was up to a human operator to ensure that the air conditioning was working and the power was connected. Most of the processing power was dedicated to number crunching, and there were no extra processors available to carry out maintenance functions. If anything broke, the computer usually crashed; there was no "limp along" mode.

High-availability workstations can keep working when a power supply fails, or a CPU chip fails, due to redundant hardware. And they can warn you when the air gets too hot, or the voltage goes out of tolerance, due to internal sensors. Modern disk drives will detect that certain sectors are not working very well, move the data to a good part of the disk, and mark the sector as "faulty". Each module (like power supply, disk drive, or fan) has its own sensors and maintenance processor that can communicate with the main CPU to tell it how they are "feeling". So they can limp along, but they don't really have any manipulators that can do something about the underlying problem.

We are seeing smartphones which are now somewhat independent of mains power due to internal batteries, and brimming with sensors - acceleration, magnetic field, battery charge, GPS, wireless, temperature, air pressure. They can communicate their status to the owner and to remote servers. Because space is at a premium, they don't have multiple redundant sensors (but sometimes WiFi can provide location data if GPS is unavailable). Newer smartphones have multiple redundant CPU cores, but I'm not sure how well the operating system and applications can survive a core crash.

Perhaps the first consumer device that was self-aware and could actually do something about its condition was the small floor-sweeping robot that could find its way back to its charging station when its battery got low. Perhaps the next step is to empty its own dustpan? In this case, low cost means no redundancy - any failure is "fatal" to its mission.

So I suggest that effective self-awareness goes beyond raw processing power to include redundancy, internal & external sensors, multiple processors (some doing maintenance functions), enough control over the environment to autonomously recover from "simple problems" and communications to request help for "difficult problems". Ideally, it should also have the ability to predict & avoid "problems" and to seek out "value" in its environment.
« Last Edit: 07/01/2015 19:29:28 by evan_au »
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Quote
At the moment, any single computer or even a supercomputer doesn't have enough processing power to run a human intelligence.

How do you know that? A simple laptop with only one processor may be hard pushed to do all the visual processing that we do, but I reckon it will be more than up to the task of thinking at a human level of intelligence if it's running the right software. The special code required to run an AGI system on top of an operating system looks as if it will sit comfortably in just 64K of memory. The data it will need to hold is a lot bulkier, but a gigabyte of RAM can hold a couple of thousand books, which can provide it with an enormous amount of knowledge, particularly if there is no repetition in the data.

Quote
Projections based on Moore's law suggest that we are going to reach the point in about 30 years or so; this is sometimes called the 'singularity'. Beyond that point, computers will be more intelligent than humans.

The "singularity" is about the point where intelligent machines no longer depend on us to feed them with new functionality and ideas, but they simply race away ahead on their own, and we'll never catch up. There are no hardware requirements specified for this and the "30 years" part is just an average of many guesses on the part of people who for the most part are a very long way from understanding what's involved.

Quote
Self-awareness itself is not a particularly deep problem; in general, consciousness is just remembering and knowing about your own thought patterns - it's essentially a type of feedback loop.

Self awareness is a massive problem, unless it's non-conscious in which case it's trivial. The issue is whether the machine is sentient or not - if it isn't, it can't be conscious of anything and can't be consciously aware of its own existence. A non-sentient machine can (in conjunction with the software running on it) calculate that it is looking at itself or reading its own internal data, but all it's doing is storing and manipulating data that says so. The closest it can get to understanding anything is to determine that data is consistent and doesn't clash with other data. Wherever there is a clash, something has not been understood. We may be the same, but it doesn't feel like that to us - we feel as if we understand things in a quite different way, but there is no known way to replicate that in a machine other than by bolting on fictions about feelings.
 

Offline wolfekeeper

Re: Could computers gain some degree of self-awareness one day?
« Reply #10 on: 06/01/2015 22:39:41 »
To do the calculation of how much hardware you need, you take the number of neurons and factor in the connections between them (each one has hundreds of connections or more), and then allow for the fact that silicon has a clock cycle rate thousands or even millions of times faster than the neurons.

You end up needing a very, very big computer with loads of RAM and lots and lots of interconnection.

Your desktop computer is only a bit smarter than a beetle, best case.
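
That back-of-the-envelope recipe can be written out explicitly; every figure below is a commonly quoted ballpark assumption, not a measurement:

```python
# Rough estimate of brain "throughput" following the recipe above:
# neurons x connections per neuron x firing rate, then compare with a
# fast serial processor. All numbers are ballpark assumptions.
NEURONS = 8.6e10            # ~86 billion neurons
SYNAPSES_PER_NEURON = 1e3   # "hundreds of connections or more"
FIRING_RATE_HZ = 100        # neurons fire at most a few hundred Hz

# Treat each synapse event per spike as one "operation":
brain_ops_per_sec = NEURONS * SYNAPSES_PER_NEURON * FIRING_RATE_HZ

SERIAL_OPS_PER_SEC = 1e10   # one core doing ~10 billion simple ops/s
cores_needed = brain_ops_per_sec / SERIAL_OPS_PER_SEC

print(f"brain: ~{brain_ops_per_sec:.1e} synaptic events/s")
print(f"equivalent serial cores: ~{cores_needed:,.0f}")
```

Even granting silicon its enormous clock-speed advantage, the sheer fan-out of the interconnections dominates the estimate, which is the point being made above.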
 

Offline alancalverd

Re: Could computers gain some degree of self-awareness one day?
« Reply #11 on: 06/01/2015 23:25:27 »
We have a very narrow interpretation of intelligence. Humans value qualities that are valuable to humans, so we think it is important to be able to recognise written words, but our olfactory system is lucky if it can tell the difference between edible and rotten food. The canine brain is quite different, adept at processing night vision, an extended range of sonic pitch and intensity, and enormous environmental and historic data from a nose that is beyond our capacity to imagine. To a dog, humans are blind, deaf, and survive by luck alone.
 

Offline evan_au

Re: Could computers gain some degree of self-awareness one day?
« Reply #12 on: 07/01/2015 09:25:39 »
Quote from: Musicforawhile
Could [a computer] eventually gain a level of self-awareness that is similar to our own or even more superior to ours?
An even more difficult problem must be overcome if computers are ever to work effectively with people, or even other computers: other-awareness.

This means self-awareness, plus awareness of the condition of others (human or machine), and your relationship with these others, and what you can do about it.

The accomplishments of humanity come about in large part from specialisation, cooperation and cultural transmission of useful skills (like hunting, fairness & justice, agriculture, education, commerce, medicine, design, architecture and art). These arise from our ability to form a "theory of mind" which represents others as members of society. As the saying goes "It takes a village to raise a child".

This can only occur through effective communication between individuals (whether human or machine).

For humans, much of that communication is subconscious, and seems to be limited to around 150 individuals (although a legal and cultural framework allows us to deal with larger groups as aggregates). Many of the failures of humanity (like nepotism, oppression and war) come about from our failures to see others as worthy individuals and to communicate effectively.

There has been some progress recently in automated extraction of emotional state from tweets and Facebook posts (I'm sure that this is a topic in which various security agencies are very interested). Perhaps one day, computers may be able to communicate effectively with more than 150 individuals?
« Last Edit: 07/01/2015 19:45:27 by evan_au »
 

Offline wolfekeeper

Re: Could computers gain some degree of self-awareness one day?
« Reply #13 on: 07/01/2015 17:27:08 »
That's right. Humans have a whole bunch of built-in programming: the ability to understand what we see, to learn to talk, some understanding of grammar, to know what animals and other humans are, to react to sounds, and to have some concept of location.

All these things and more seem to be more or less built-in genetically, or at least the capacity to learn them rapidly.
 

Offline David Cooper

Re: Could computers gain some degree of self-awareness one day?
« Reply #14 on: 07/01/2015 18:47:10 »
Quote
To do the calculation of how much hardware you need, you take the number of neurons and factor in the connections between them (each one has hundreds of connections or more), and then allow for the fact that silicon has a clock cycle rate thousands or even millions of times faster than the neurons.

You end up needing a very, very big computer with loads of RAM and lots and lots of interconnection.

Your desktop computer is only a bit smarter than a beetle, best case.

Neural computers need a lot of overcapacity because they waste most of their neurons - something that can be designed to do a simple calculation with a handful of logic gates takes hundreds of neurons, and even then it will occasionally make errors. A carefully programmed computer will not waste any of its capacity and will not make mistakes (unless there's a hardware failure). There is no lack of interconnectedness as every part of memory can be accessed from any processor. The machine on your desk is only stupid because it is not running intelligent software (assuming it isn't an ancient one), but in hardware terms it is already up to the task of bettering human-level intelligence.
 

Offline wolfekeeper

Re: Could computers gain some degree of self-awareness one day?
« Reply #15 on: 07/01/2015 18:56:26 »
Nah.
 

Offline David Cooper

Re: Could computers gain some degree of self-awareness one day?
« Reply #16 on: 08/01/2015 17:38:53 »
Well, you'll soon eat your words.
 

Offline evan_au

Re: Could computers gain some degree of self-awareness one day?
« Reply #17 on: 08/01/2015 17:59:26 »
Quote from: evan_au
Perhaps one day, computers may be able to communicate effectively with more than 150 individuals?
Isn't this the goal of Google, Amazon and every other commercial interest on the web?
To interpret our individual goals, aspirations, and interests, and to offer relevant content (which preferably brings them some profit).
 

Offline wolfekeeper

Re: Could computers gain some degree of self-awareness one day?
« Reply #18 on: 08/01/2015 18:42:47 »
Quote
Well, you'll soon eat your words.
To oversummarise this, the thing is that it takes a huge amount less computing power to run a program than it does to learn or write a new program.

Writing a new program effectively involves doing a massive search operation to work out the interrelationships between things.

Human brains are massively parallel and are used to look for these relationships; they effectively write their own programs.
 

Offline David Cooper

Re: Could computers gain some degree of self-awareness one day?
« Reply #19 on: 09/01/2015 18:26:42 »
Quote
To oversummarise this, the thing is that it takes a huge amount less computing power to run a program than it does to learn or write a new program.

Writing a new program effectively involves doing a massive search operation to work out the interrelationships between things.

Human brains are massively parallel and are used to look for these relationships; they effectively write their own programs.

Computers are really good at massive search operations. Human brains are incredibly slow and need to be massively parallel in order to make up for that deficiency. The fact that we can write our own programs is simply down to the fact that evolution has programmed us to be universal problem solving machines. AGI systems will soon have the same ability.

If you can work out what it is you want to do, you can then apply simple algorithms to find ways of doing it (if the thing you want to do is possible), and the solution you find can then be distilled down into a compact program to repeat the same task more efficiently in future. Different people have different sets of algorithms that they apply when trying to solve problems, and that makes some better than others at some tasks, so the trick with AGI is to provide it with as wide a range of these algorithms as possible so that it can approach all tasks in the way the best human thinkers do.

The algorithms themselves are simple, but the difficulty is in finding ways for the system to hold them and in designing a framework in which they can be applied, so that they can be used to manipulate ideas. For the most part, what programmers have done up to now is write unintelligent code to solve specific tasks, but the road to AGI means working on a different level and writing universal problem solving algorithms which can then be applied by the machine to solve an infinite range of specific tasks without a human programmer having to do the top-level thinking part of it every time. We really are just a few steps away from making this happen.
 

Offline wolfekeeper

Re: Could computers gain some degree of self-awareness one day?
« Reply #20 on: 09/01/2015 20:03:39 »
Quote
Computers are really good at massive search operations.
Actually... no. You'd think that, but no. They're fairly good at searching for some things in highly restricted areas. But a single human can (say) play chess at Grand Master level, drive home, talk to another human, understand a visual scene, listen to music, etc. That's just one human. Even adding together the different processing demands ends up with a huge computer. Then add in the fact that the human taught itself to do those things... oh boy.
Quote
Human brains are incredibly slow and need to be massively parallel in order to make up for that deficiency.
They're incredibly slow except for the fact that they're massively parallel, so the total throughput is stupendously vast.
Quote
The fact that we can write our own programs is simply down to the fact that evolution has programmed us to be universal problem solving machines.
Yes, by making us massively parallel. The human brain has more processing, storage, interconnection and throughput than any supercomputer. A supercomputer might match one of those, but not all at the same time.
Quote
We really are just a few steps away from making this happen.
Much more than 50 years of AI research says you're wrong.
 

Offline David Cooper

Re: Could computers gain some degree of self-awareness one day?
« Reply #21 on: 09/01/2015 23:16:44 »
Quote
Computers are really good at massive search operations.
Actually... no. You'd think that, but no. They're fairly good at searching for some things in highly restricted areas. But a single human can (say) play chess at Grand Master level, drive home, talk to another human, understand a visual scene, listen to music, etc. That's just one human. Even adding together the different processing demands ends up with a huge computer. Then add in the fact that the human taught itself to do those things... oh boy.

A simple computer playing chess using the same algorithm as a human might well be able to beat the best humans by thinking faster than them. We don't know yet, because the software used for the task so far has always used the blunderbuss approach instead of restricting itself to following a much more limited range of possibilities using a better algorithm like those applied by the best human players.

There's a number game in a popular TV show where you have to use six small numbers to make a randomly generated three-digit number by adding, subtracting, multiplying and dividing. A computer can calculate every possible solution in under a second by following all possible routes, but a human only follows a tiny fraction of one percent of the possible routes, making up for this by selecting the most likely routes instead, applying intelligent algorithms which are likely to find solutions quickly. A primitive computer from the '80s programmed to do the same would typically find a solution faster than a modern computer using the unintelligent blunderbuss approach.
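
The exhaustive "follow all possible routes" approach described above is easy to write down. This is a minimal sketch; the function names and the simplified rules (intermediate results must stay positive integers, as on the show) are my own:

```python
from itertools import combinations

def solve(nums, target):
    """Brute-force the Countdown numbers game: repeatedly combine two
    numbers with +, -, * or /, keeping intermediate results as positive
    integers, until the target appears. Returns an expression string,
    or None if the target is unreachable."""
    return _search([(n, str(n)) for n in nums], target)

def _search(state, target):
    # state is a list of (value, expression-string) pairs.
    for value, expr in state:
        if value == target:
            return expr
    for i, j in combinations(range(len(state)), 2):
        (a, ea), (b, eb) = state[i], state[j]
        rest = [state[k] for k in range(len(state)) if k not in (i, j)]
        # Order the pair so subtraction/division stay non-negative.
        hi, ehi, lo, elo = (a, ea, b, eb) if a >= b else (b, eb, a, ea)
        moves = [(hi + lo, f"({ehi}+{elo})"),
                 (hi * lo, f"({ehi}*{elo})"),
                 (hi - lo, f"({ehi}-{elo})")]
        if lo > 0 and hi % lo == 0:       # only exact division is allowed
            moves.append((hi // lo, f"({ehi}/{elo})"))
        for value, expr in moves:
            if value <= 0:                # disallow zero/negative results
                continue
            found = _search(rest + [(value, expr)], target)
            if found:
                return found
    return None
```

For example, `solve([1, 2, 3, 4], 10)` returns an expression such as `((4+3)+(2+1))`; with the show's six numbers the same search still completes, only far more slowly, which is the brute-force cost being discussed.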

Visual processing is slow if you have to take input from a high-definition camera, but the eye works with highly blurred images instead, only having high definition at the centre and moving the eyes if part of the scene needs to be looked at more closely. This trick of working with a blurred scene can be done in a computer too, but the cameras available aren't designed the right way for this. What's needed is a camera that sends multiple streams at the same time, with most of the processing work being done on the least detailed one and the high-def ones being ignored unless a small part of the scene needs to be looked at more carefully. As it stands, you have to waste masses of processing time averaging out the data from many pixels in order to create a blurred version of the scene which you can then process quickly, though an easy fix is to use multiple cameras and put something in front of some of them to blur the image so that you can just read a single pixel to get an approximate average value for a whole block of 16x16.
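
The averaging step being described, collapsing each 16x16 block of pixels into one value, takes only a few lines. This is a pure-Python sketch over a list-of-rows grayscale image; no particular camera or library API is assumed:

```python
def downsample(image, block=16):
    """Average each block x block tile of a grayscale image (a list of
    rows of pixel values) into a single pixel, producing the blurred,
    low-detail version that can be processed cheaply."""
    h, w = len(image), len(image[0])
    out = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            # Gather the tile, clipping at the image edges.
            tile = [image[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            row.append(sum(tile) / len(tile))
        out.append(row)
    return out
```

A 32x32 input comes back as a 2x2 grid of block averages; the detailed source pixels are only revisited if a region turns out to need a closer look.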

You keep making the mistake of looking at what's being done now by people who are programming things to work in highly inefficient ways instead of thinking about how they could be done magnitudes faster on today's hardware by using intelligent methods.

Quote
Quote
The fact that we can write our own programs is simply down to the fact that evolution has programmed us to be universal problem solving machines.
Yes, by making us massively parallel. The human brain has more processing, storage, interconnection and thoughput than any supercomputer. Or a supercomputer might hit one of those, but not all at the same time.

Supercomputers aren't any more intelligently programmed than desktops - they typically just use the blunderbuss approach for everything and rely on extra grunt to get things done faster. (Much of the work they do might be impossible to speed up, though, as it's often things like simulations of physics where there might not be any viable shortcuts, but then they're doing something which our brains can't compete with anyway.)

Quote
Quote
We really are just a few steps away from making this happen.
Much more than 50 years of AI research says you're wrong.

15 years of my work in AI says I'm right. The failure of most people working in the same field to make rapid progress has no bearing on the issue, other than misleading you into thinking that my claims can't come out of real research, but I've put plenty of clues in the things I've said to demonstrate that I know what I'm talking about, even if it's only after other people have covered the same ground that they will be able to recognise the fact. I've left a trail of evidence all over the Internet to make sure that future AGI will be able to look back and determine that I was in the lead and that I would have got there first if I hadn't been taken out by illness - it's an insurance policy just in case that happens (which is more than possible, given my current state of health).

E.g. 64K of memory for an AGI system (not including OS code and ordinary library routines), I said. Only a nutter would suggest something like that, unless it's someone who is actually a good long way through the process of building one and who actually knows what it would take.
« Last Edit: 09/01/2015 23:20:52 by David Cooper »
 

Offline wolfekeeper

Re: Could computers gain some degree of self-awareness one day?
« Reply #22 on: 10/01/2015 01:04:31 »
I think you're vastly underestimating how hard it is to learn a new skill from more or less scratch.

While a computer can indeed play Countdown better than a human, it can't learn the game from scratch and play it acceptably; there's no fundamental limit to this, it just takes a freaking age.

A huge amount of human brain power is associated with general learning.

I know a reasonable amount about AI, and nothing at all gives me any reason to think that general learning problems are in any way easy; indeed all known general learning algorithms learn extremely slowly, and require a LOT of processing power; which the human brain actually has in spades, and I think it actually needs it.

Hey, maybe there is another point in the speed-time-memory optimisation space that computers can reach that the human brain can't, due to the low 'clock' speed of neurons, but it seems unlikely; a lot of this seems to be NP-complete.
« Last Edit: 10/01/2015 01:09:29 by wolfekeeper »
 

Offline evan_au

Re: Could computers gain some degree of self-awareness one day?
« Reply #23 on: 10/01/2015 07:25:51 »
One thing that computers must do before they can match human intelligence is to become much more efficient.
The human brain consumes around 25 watts (25% of the human resting metabolism).
There have been some rough estimates that operating a supercomputer (built with current technology) to allow research into even a small part of the human brain would consume around 10 megawatts.

Current computer circuits are designed to generate the "right" value with an error rate below 1 in 10^13 logic operations. You need to deliver a lot of electrons (or photons) in each switching operation to ensure that statistical variation does not cause the logic level to be misread. Charging and discharging capacitance billions of times per second consumes a lot of power.

It is thought that the brain uses more approximate methods which are more tolerant of errors, plus its relatively slow switching rate (< 1000 times per second) to achieve its amazing power efficiency. Some teams are investigating circuits that operate more like the brain.
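
For a sense of scale, the gap between those two figures (both rough estimates quoted above) is simple arithmetic:

```python
BRAIN_WATTS = 25             # human brain, ~25 W (estimate above)
SUPERCOMPUTER_WATTS = 10e6   # ~10 MW brain-research supercomputer estimate

# Power efficiency gap implied by the two estimates.
ratio = SUPERCOMPUTER_WATTS / BRAIN_WATTS
print(f"supercomputer / brain power: {ratio:,.0f}x")  # 400,000x
```

A factor of several hundred thousand, which is why the approximate, error-tolerant circuits mentioned below are attractive.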
 

Offline David Cooper

Re: Could computers gain some degree of self-awareness one day?
« Reply #24 on: 11/01/2015 00:01:04 »
Quote
I think you're vastly underestimating how hard it is to learn a new skill from more or less scratch.

I'm not estimating it at all - I'm looking directly into how it can be done and then going on to try to implement it in software. Humans are very slow at learning new skills, but ultimately they achieve it by applying very simple rules. Once AGI has that set of fundamental rules built into it, it will pick up new skills at much higher speed than we can.

Quote
While a computer can indeed play Countdown better than a human, it can't learn the game from scratch and play it acceptably; there's no fundamental limit to this, it just takes a freaking age.

A thick piece of software certainly can't do so, but a piece of software designed to be able to pick up new skills is a very different kettle of fish. The inability of word processors, spreadsheets and the like to take on new skills is not a good guide as to what the hardware is actually capable of. What I'm talking about is a new kind of software which is designed to approach problems from a totally different direction, and you have never seen this approach in action other than in humans (and a few bright animals).

Quote
A huge amount of human brain power is associated with general learning.

Because we're not very good at it either and really don't make the best use of our hardware. Some people with half their brain missing are perfectly normal and show no lack of capacity or capability. Evolution took shortcuts in its design of our brain which led to it being a hugely bloated thing that uses far more energy than necessary, but it is hard for evolution to optimise it and so we're stuck with it much as it is. It is a slow pile of junk which only just scrapes in as a universal problem solving machine, while crows with tiny brains come very close to matching its performance, being almost on a level with chimps.

Quote
I know a reasonable amount about AI, and nothing at all gives me any reason to think that general learning problems are in any way easy; indeed all known general learning algorithms learn extremely slowly, and require a LOT of processing power; which the human brain actually has in spades, and I think it actually needs it.

None of the general learning algorithms you know of are doing the job the right way - most of them seem to involve using neural computers and leaving them to try to solve problems for themselves, instead of the programmers doing the work of trying to work out how problems can be solved. I take a different approach by studying how I solve problems myself and then trying to identify the algorithms I used so that I can recreate them in software, but very few people are doing this kind of work, and those who are keep most of it to themselves. So, you are again being misled by judging possibility on the basis of current failure, just like looking at the first car prototypes and concluding from them that cars will never go faster than walking pace. You won't see what's possible until there's actually an AGI system available for you to play with, and then you'll suddenly understand its power. Our brains are incredibly slow, but they make up for it by using good algorithms which evolution has found through experimentation over millions of years. Computers are incredibly fast, but the software is not taking anything remotely like full advantage of them other than in simple repetitive tasks which don't require a lot of intelligence to program.

That's my last reply on this - I'd rather put the time into completing the build of the system that will settle the argument.
 
