Naked Science Forum

Non Life Sciences => Technology => Topic started by: Musicforawhile on 04/01/2015 15:28:12

Title: Could computers gain some degree of self-awareness one day?
Post by: Musicforawhile on 04/01/2015 15:28:12
If consciousness emerged as a result of the complexity of the human mind, could a computer begin to gain some self-awareness if it were complex enough? Could it eventually gain a level of self-awareness that is similar to our own, or even superior to ours?

Title: Re: Could computers gain some degree of self-awareness one day?
Post by: alancalverd on 04/01/2015 15:31:45
"Windows is looking for a solution to the problem". Is that a symptom of selfawareness?
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: Musicforawhile on 04/01/2015 16:22:12
"Windows is looking for a solution to the problem". Is that a symptom of selfawareness?

Well that's what I'm asking you. I am not a computer or maths specialist, so I thought I'd ask the people who understand about computers.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: alancalverd on 04/01/2015 17:59:42
And I'm not sure what anyone means by self-awareness!

I think, rather like "life" or "consciousness", any useful definition of the abstract noun has to derive from an adjective and a function: "an entity is classified as self-aware if ....."

And there it all gets a bit slippery, because we can almost always imagine or even demonstrate a machine that does exactly whatever the definition requires. Next thing you know, we are talking about entirely mechanistic models of humans. That doesn't worry me - I care for machines as well as animals, and have a healthy relationship with everything from the dumbest tools to the brightest dogs (though humans are often disappointingly dishonest) - but some people believe (with no evidence) that Homo sapiens is somehow special.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: Musicforawhile on 04/01/2015 22:06:18
What would be the end of that sentence? "An entity could be described as self-aware if..." What does it have to be able to do, that we know of so far?
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: CliffordK on 04/01/2015 22:47:17
It is inevitable that AI will progress, probably quite rapidly over the next century or few centuries. 

There is a question of how much will be programmed, vs how much the computer will be able to learn on its own.  How much will the computers be able to program themselves?

Certainly computers are getting "smarter". Just bring up a simple program like your favorite word processor, and it will flag spelling errors and grammar errors, and perhaps even suggest words to use. But, of course, all that was programmed in.
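As a rough sketch of how mechanical those suggestions are, here's the idea in a few lines of Python (the word list is invented; difflib is in the standard library):

Code:
import difflib

# A toy dictionary; a real word processor ships with many thousands of entries.
DICTIONARY = ["their", "there", "they're", "the", "then", "them"]

def suggest(word):
    # Return the closest dictionary words by simple string similarity.
    return difflib.get_close_matches(word.lower(), DICTIONARY, n=3, cutoff=0.6)

print(suggest("thier"))  # likely ['their', ...] - pure lookup, no understanding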

So, what is "self-aware"?
I could write answers to a number of questions and put it into a computer program.

So,
You Ask:  What are you?
Computer Responds: I am a computer.

You: What are you made of?
Computer: Silicon chips and wires.

You could certainly program in any number of questions and responses.  So, is the computer self-aware?
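For what it's worth, the canned question/response program described above really is only a few lines; a minimal Python sketch (questions and answers as invented above):

Code:
# Canned question/answer lookup - no understanding, just retrieval.
RESPONSES = {
    "what are you?": "I am a computer.",
    "what are you made of?": "Silicon chips and wires.",
}

def respond(question):
    return RESPONSES.get(question.strip().lower(),
                         "I do not understand the question.")

print(respond("What are you?"))        # I am a computer.
print(respond("Are you self-aware?"))  # I do not understand the question.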

Perhaps one should ask if your dog is self-aware?

Did you suddenly conclude that you are a "human"?  Or is that something that you were told?  So, are people even truly self-aware?
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: alancalverd on 04/01/2015 23:10:27
What would be the end of that sentence? "An entity could be described as self-aware if..." What does it have to be able to do, that we know of so far?

That's the whole conundrum. I have built machines that spent half the time doing what they were "paid" for, and the rest of the time checking themselves to make sure they were working correctly. My car goes into a self-preservation mode if it detects a problem that might damage the engine, and tells me if any of the light bulbs need replacing. One of my friends flies an airliner that won't move until it is happy that it is properly loaded, all the doors are shut, etc: there are basically two buttons - "start engines" and "takeoff" - and neither will work if any important part of the aircraft is out of specification. All these machines are at least as "self-aware" as a mouse, which knows when it's hungry, threatened, or in the mood to make baby mice, and responds appropriately.
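That airliner-style interlock is easy to sketch in code; a minimal Python version, with all sensor names and limits invented for illustration:

Code:
# Hypothetical pre-start interlock: refuse to act until every check passes.
def preflight_failures(sensors):
    checks = {
        "doors_closed": sensors["doors_closed"],
        "load_within_limits": sensors["cargo_kg"] <= sensors["max_cargo_kg"],
        "fuel_sufficient": sensors["fuel_kg"] >= sensors["min_fuel_kg"],
    }
    return [name for name, ok in checks.items() if not ok]

def start_engines(sensors):
    failures = preflight_failures(sensors)
    if failures:
        return "Refusing to start: " + ", ".join(failures)
    return "Engines started."

print(start_engines({"doors_closed": False, "cargo_kg": 9000,
                     "max_cargo_kg": 10000, "fuel_kg": 5000,
                     "min_fuel_kg": 4000}))  # Refusing to start: doors_closed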

So I'm afraid I have to put the question back to you. What do you do that is symptomatic of self-awareness and can't be replicated by an algorithm or a set of gears? Simply "knowing that I exist" won't do, because we have no way of testing it!
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: wolfekeeper on 05/01/2015 23:46:29
At the moment, no single computer, not even a supercomputer, has enough processing power to run a human intelligence.

Projections based on Moore's law suggest that we are going to reach that point in about 30 years or so; this is sometimes called the 'singularity'. Beyond that point, computers will be more intelligent than humans.

Self-awareness itself is not a particularly deep problem. In general, consciousness is just remembering and knowing about your own thought patterns; it's essentially a type of feedback loop.
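On that reading, the loop is just a system whose record of its own recent states feeds back in as input; a toy Python sketch of the idea:

Code:
# Toy "introspection" loop: the system's own recent thoughts become part
# of its next input - a feedback loop over its own states.
history = []

def think(observation):
    thought = f"saw {observation} after {len(history)} earlier thoughts"
    history.append(thought)  # remember the thought...
    return history[-3:]      # ...and feed recent thoughts back as input

for obs in ["red light", "green light", "red light"]:
    print(think(obs))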
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: evan_au on 06/01/2015 15:58:23
Quote
One of my friends flies an airliner that won't move until it is happy that it is properly loaded etc
An historical view:
Traditional mainframe computers didn't have very many sensors - it was up to a human operator to ensure that the air conditioning was working and the power was connected. Most of the processing power was dedicated to number crunching, and there were no extra processors available to carry out maintenance functions. If anything broke, the computer usually crashed; there was no "limp along" mode.

High-availability workstations can keep working when a power supply fails, or a CPU chip fails, due to redundant hardware. And they can warn you when the air gets too hot, or the voltage goes out of tolerance, due to internal sensors. Modern disk drives will detect that certain sectors are not working very well, move the data to a good part of the disk, and mark the sector as "faulty". Each module (like power supply, disk drive, or fan) has its own sensors and maintenance processor that can communicate with the main CPU to tell it how they are "feeling". So they can limp along, but they don't really have any manipulators that can do something about the underlying problem.

We are seeing smartphones which are now somewhat independent of mains power due to internal batteries, and brimming with sensors - acceleration, magnetic field, battery charge, GPS, wireless, temperature, air pressure. They can communicate their status to the owner and to remote servers. Because space is at a premium, they don't have multiple redundant sensors (but sometimes WiFi can provide location data if GPS is unavailable). Newer smartphones have multiple redundant CPU cores, but I'm not sure how well the operating system and applications can survive a core crash.

Perhaps the first consumer device that was self-aware and could actually do something about its condition was the small floor-sweeping robot that could find its way back to its charging station when its battery got low. Perhaps the next step is to empty its own dustpan? In this case, low cost means no redundancy - any failure is "fatal" to its mission.

So I suggest that effective self-awareness goes beyond raw processing power to include redundancy, internal & external sensors, multiple processors (some doing maintenance functions), enough control over the environment to autonomously recover from "simple problems" and communications to request help for "difficult problems". Ideally, it should also have the ability to predict & avoid "problems" and to seek out "value" in its environment.
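A stripped-down sketch of that pattern in Python (module names, thresholds and recovery actions all invented): each check classifies a problem as one the machine can recover from autonomously or one needing outside help.

Code:
# Hypothetical self-maintenance loop: fix "simple" problems, request help otherwise.
def diagnose(readings):
    problems = []
    if readings["temp_c"] > 70:
        problems.append(("overheating", "simple"))      # recover: speed up fan
    if readings["battery_pct"] < 15:
        problems.append(("battery_low", "simple"))      # recover: go recharge
    if readings["disk_errors"] > 100:
        problems.append(("disk_failing", "difficult"))  # cannot fix alone
    return problems

def maintain(readings):
    for problem, kind in diagnose(readings):
        if kind == "simple":
            print(f"Recovering autonomously from: {problem}")
        else:
            print(f"Requesting help with: {problem}")

maintain({"temp_c": 75, "battery_pct": 10, "disk_errors": 3})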
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: David Cooper on 06/01/2015 17:01:11
At the moment, no single computer, not even a supercomputer, has enough processing power to run a human intelligence.

How do you know that? A simple laptop with only one processor may be hard pushed to do all the visual processing that we do, but I reckon it will be more than up to the task of thinking at a human level of intelligence if it's running the right software. The special code required to run an AGI system on top of an operating system looks as if it will sit comfortably in just 64K of memory. The data it will need to hold is a lot bulkier, but a gigabyte of RAM can hold a couple of thousand books, which can provide it with an enormous amount of knowledge, particularly if there is no repetition in the data.
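The arithmetic behind that couple-of-thousand-books figure is easy to check (the average book size is my assumption):

Code:
# Rough check: how many plain-text books fit in a gigabyte?
avg_book_chars = 500_000           # ~100k words x ~5 characters, assumed average
gigabyte = 1_000_000_000
print(gigabyte // avg_book_chars)  # 2000 - a couple of thousand books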

Quote
Projections based on Moore's law suggest that we are going to reach that point in about 30 years or so; this is sometimes called the 'singularity'. Beyond that point, computers will be more intelligent than humans.

The "singularity" is about the point where intelligent machines no longer depend on us to feed them with new functionality and ideas, but they simply race away ahead on their own, and we'll never catch up. There are no hardware requirements specified for this and the "30 years" part is just an average of many guesses on the part of people who for the most part are a very long way from understanding what's involved.

Quote
Self-awareness itself is not a particularly deep problem. In general, consciousness is just remembering and knowing about your own thought patterns; it's essentially a type of feedback loop.

Self awareness is a massive problem, unless it's non-conscious in which case it's trivial. The issue is whether the machine is sentient or not - if it isn't, it can't be conscious of anything and can't be consciously aware of its own existence. A non-sentient machine can (in conjunction with the software running on it) calculate that it is looking at itself or reading its own internal data, but all it's doing is storing and manipulating data that says so. The closest it can get to understanding anything is to determine that data is consistent and doesn't clash with other data. Wherever there is a clash, something has not been understood. We may be the same, but it doesn't feel like that to us - we feel as if we understand things in a quite different way, but there is no known way to replicate that in a machine other than by bolting on fictions about feelings.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: wolfekeeper on 06/01/2015 22:39:41
To do the calculation of how much hardware you need, you take the number of neurons and factor in the connections between them (each one has hundreds of connections or more), and then allow for the fact that silicon has a clock cycle rate thousands or even millions of times faster than the neurons.

You end up needing a very, very big computer with loads of RAM and lots and lots of interconnection.

Your desktop computer is only a bit smarter than a beetle, best case.
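Spelling that estimate out with commonly quoted figures (all of them rough):

Code:
# Back-of-envelope brain capacity estimate; every number is approximate.
neurons = 86e9             # commonly quoted human neuron count
synapses_per_neuron = 1e3  # hundreds to thousands of connections each
firings_per_sec = 100      # neurons fire on the order of 10-1000 Hz

events_per_sec = neurons * synapses_per_neuron * firings_per_sec
print(f"{events_per_sec:.1e} synaptic events/second")  # ~8.6e15

# A desktop CPU managing ~1e10-1e11 simple operations per second is
# several orders of magnitude short of that raw event rate.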
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: alancalverd on 06/01/2015 23:25:27
We have a very narrow interpretation of intelligence. Humans value qualities that are valuable to humans, so we think it is important to be able to recognise written words, but our olfactory system is lucky if it can tell the difference between edible and rotten food. The canine brain is quite different, adept at processing night vision, an extended range of sonic pitch and intensity, and enormous environmental and historic data from a nose that is beyond our capacity to imagine. To a dog, humans are blind, deaf, and survive by luck alone.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: evan_au on 07/01/2015 09:25:39
Quote from: Musicforawhile
Could [a computer] eventually gain a level of self-awareness that is similar to our own, or even superior to ours?
An even more difficult problem must be overcome if computers are ever to work effectively with people, or even other computers: other-awareness.

This means self-awareness, plus awareness of the condition of others (human or machine), and your relationship with these others, and what you can do about it.

The accomplishments of humanity come about in large part from specialisation, cooperation and cultural transmission of useful skills (like hunting, fairness & justice, agriculture, education, commerce, medicine, design, architecture and art). These arise from our ability to form a "theory of mind" which represents others as members of society. As the saying goes "It takes a village to raise a child".

This can only occur through effective communication between individuals (whether human or machine).

For humans, much of that communication is subconscious, and seems to be limited to around 150 individuals (http://en.wikipedia.org/wiki/Dunbar%27s_number) (although a legal and cultural framework allows us to deal with larger groups as aggregates). Many of the failures of humanity (like nepotism, oppression and war) come about from our failures to see others as worthy individuals and to communicate effectively.

There has been some progress recently in automated extraction of emotional state from tweets and Facebook posts (I'm sure that this is a topic in which various security agencies are very interested). Perhaps one day, computers may be able to communicate effectively with more than 150 individuals?
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: wolfekeeper on 07/01/2015 17:27:08
That's right. Humans have a whole bunch of built-in programming: the ability to understand what we see, to learn to talk, some understanding of grammar, knowing what animals and other humans are, reacting to sounds, and having some concept of location.

All these things and more seem to be more or less built-in genetically, or at least the capacity to learn them rapidly.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: David Cooper on 07/01/2015 18:47:10
To do the calculation of how much hardware you need, you take the number of neurons and factor in the connections between them (each one has hundreds of connections or more), and then allow for the fact that silicon has a clock cycle rate thousands or even millions of times faster than the neurons.

You end up needing a very, very big computer with loads of RAM and lots and lots of interconnection.

Your desktop computer is only a bit smarter than a beetle, best case.

Neural computers need a lot of overcapacity because they waste most of their neurons - a simple calculation that could be handled by a handful of logic gates takes hundreds of neurons, and even then it will occasionally make errors. A carefully programmed computer will not waste any of its capacity and will not make mistakes (unless there's a hardware failure). There is no lack of interconnectedness, as every part of memory can be accessed from any processor. The machine on your desk is only stupid because it is not running intelligent software (assuming it isn't an ancient one), but in hardware terms it is already up to the task of bettering human-level intelligence.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: wolfekeeper on 07/01/2015 18:56:26
Nah.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: David Cooper on 08/01/2015 17:38:53
Well, you'll soon eat your words.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: evan_au on 08/01/2015 17:59:26
Quote from: evan_au
Perhaps one day, computers may be able to communicate effectively with more than 150 individuals?
Isn't this the goal of Google, Amazon and every other commercial interest on the web?
To interpret our individual goals, aspirations, and interests, and to offer relevant content (which preferably brings them some profit).
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: wolfekeeper on 08/01/2015 18:42:47
Well, you'll soon eat your words.
To oversimplify: it takes far less computing power to run a program than it does to learn or write a new one.

Writing a new program effectively involves doing a massive search operation to work out the interrelationships between things.

Human brains are massively parallel and are used to look for these relationships; they effectively write their own programs.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: David Cooper on 09/01/2015 18:26:42
To oversimplify: it takes far less computing power to run a program than it does to learn or write a new one.

Writing a new program effectively involves doing a massive search operation to work out the interrelationships between things.

Human brains are massively parallel and are used to look for these relationships; they effectively write their own programs.

Computers are really good at massive search operations. Human brains are incredibly slow and need to be massively parallel in order to make up for that deficiency. The fact that we can write our own programs is simply down to the fact that evolution has programmed us to be universal problem solving machines. AGI systems will soon have the same ability.

If you can work out what it is you want to do, you can then apply simple algorithms to find ways of doing it (if the thing you want to do is possible), and the solution you find can then be distilled down into a compact program to repeat the same task more efficiently in future. Different people have different sets of algorithms that they apply when trying to solve problems, and that makes some better than others at some tasks, so the trick with AGI is to provide it with as wide a range of these algorithms as possible so that it can approach all tasks in the way the best human thinkers do.

The algorithms themselves are simple, but the difficulty is in finding ways for the system to hold them and to design a framework in which they can be applied so that they can be used to manipulate ideas. For the most part, what programmers have done up to now is write unintelligent code to solve specific tasks, but the road to AGI means working on a different level and writing universal problem solving algorithms which can then be applied by the machine to solve an infinite range of specific tasks without a human programmer having to do the top-level thinking part of it every time. We really are just a few steps away from making this happen.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: wolfekeeper on 09/01/2015 20:03:39
Computers are really good at massive search operations.
Actually... no. You'd think that, but no. They're fairly good at searching for some things in highly restricted areas. But a single human can (say) play chess at Grand Master level, drive home, talk to another human, understand a visual scene, listen to music, etc. That's just one human. Even adding together the different processing demands, you end up with a huge computer. Then add in the fact that the human taught itself to do those things... oh boy.
Quote
Human brains are incredibly slow and need to be massively parallel in order to make up for that deficiency.
They're incredibly slow except for the fact that they're massively parallel, so the total throughput is stupendously vast.
Quote
The fact that we can write our own programs is simply down to the fact that evolution has programmed us to be universal problem solving machines.
Yes, by making us massively parallel. The human brain has more processing, storage, interconnection and throughput than any supercomputer. A supercomputer might match one of those, but not all at the same time.
Quote
We really are just a few steps away from making this happen.
Much more than 50 years of AI research says you're wrong.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: David Cooper on 09/01/2015 23:16:44
Computers are really good at massive search operations.
Actually... no. You'd think that, but no. They're fairly good at searching for some things in highly restricted areas. But a single human can (say) play chess at Grand Master level, drive home, talk to another human, understand a visual scene, listen to music, etc. That's just one human. Even adding together the different processing demands, you end up with a huge computer. Then add in the fact that the human taught itself to do those things... oh boy.

A simple computer playing chess using the same algorithm as a human might well be able to beat the best humans by thinking faster than them. We don't know yet, because the software used for the task so far has always used the blunderbuss approach instead of restricting itself to following a much more limited range of possibilities using a better algorithm like those applied by the best human players.

There's a numbers game in a popular TV show where you have to use six small numbers to make a randomly generated three-digit number by adding, subtracting, multiplying and dividing. A computer can calculate every possible solution in under a second by following all possible routes, but a human only follows a very small fraction of one percent of the possible routes, making up for this by selecting the most likely routes instead, applying intelligent algorithms which are likely to find solutions quickly. A primitive computer from the '80s programmed to do the same would typically find a solution faster than a modern computer using the unintelligent blunderbuss approach.
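For reference, the brute-force ('blunderbuss') search for that numbers game is short to write; a recursive Python sketch:

Code:
from itertools import combinations

def solve(nums, target):
    """Exhaustive search: nums is a list of (value, expression) pairs."""
    for value, expr in nums:
        if value == target:
            return expr
    for i, j in combinations(range(len(nums)), 2):
        (a, ea), (b, eb) = nums[i], nums[j]
        rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
        # Try every legal operation on the pair, then recurse on the rest.
        options = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})"),
                   (abs(a - b), f"({ea}-{eb})" if a >= b else f"({eb}-{ea})")]
        if b and a % b == 0:
            options.append((a // b, f"({ea}/{eb})"))
        if a and b % a == 0:
            options.append((b // a, f"({eb}/{ea})"))
        for option in options:
            result = solve(rest + [option], target)
            if result:
                return result
    return None

print(solve([(n, str(n)) for n in [100, 50, 25, 75, 3, 6]], 356))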

Visual processing is slow if you have to take input from a high-definition camera, but the eye works with highly blurred images instead, only having high definition at the centre and moving the eyes if part of the scene needs to be looked at more closely. This trick of working with a blurred scene can be done in a computer too, but the cameras available aren't designed the right way for this. What's needed is a camera that sends multiple streams at the same time, with most of the processing work being done on the least detailed one and the high-def ones being ignored unless a small part of the scene needs to be looked at more carefully. As it stands, you have to waste masses of processing time averaging out the data from many pixels in order to create a blurred version of the scene which you can then process quickly, though an easy fix is to use multiple cameras and put something in front of some of them to blur the image so that you can just read a single pixel to get an approximate average value for a whole block of 16x16.
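The averaging trick, at least, is cheap to express with numpy; a sketch of collapsing each 16x16 block of a frame to its mean:

Code:
import numpy as np

# Fake high-definition grayscale frame (dimensions chosen divisible by 16).
frame = np.random.randint(0, 256, (1024, 1280), dtype=np.uint8)

# Block-average into 16x16 tiles: a cheap software stand-in for a
# deliberately blurred camera stream.
h, w = frame.shape
coarse = frame.reshape(h // 16, 16, w // 16, 16).mean(axis=(1, 3))
print(coarse.shape)  # (64, 80) - process this first; consult full-res only when needed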

You keep making the mistake of looking at what's being done now by people who are programming things to work in highly inefficient ways instead of thinking about how they could be done orders of magnitude faster on today's hardware by using intelligent methods.

Quote
Quote
The fact that we can write our own programs is simply down to the fact that evolution has programmed us to be universal problem solving machines.
Yes, by making us massively parallel. The human brain has more processing, storage, interconnection and throughput than any supercomputer. A supercomputer might match one of those, but not all at the same time.

Supercomputers aren't any more intelligently programmed than desktops - they typically just use the blunderbuss approach for everything and rely on extra grunt to get things done faster. (Much of the work they do might be impossible to speed up though, as it's often things like simulations of physics where there might not be any viable shortcuts, but then they're doing something which our brains can't compete with anyway.)

Quote
Quote
We really are just a few steps away from making this happen.
Much more than 50 years of AI research says you're wrong.

15 years of my work in AI says I'm right. The failure of most people working in the same field to make rapid progress has no bearing on the issue, other than misleading you into thinking that my claims can't come out of real research, but I've put plenty of clues in the things I've said to demonstrate that I know what I'm talking about, even if it's only after other people have covered the same ground that they will be able to recognise the fact. I've left a trail of evidence all over the Internet to make sure that future AGI will be able to look back and determine that I was in the lead and that I would have got there first if I hadn't been taken out by illness - it's an insurance policy just in case that happens (which is more than possible, given my current state of health).

E.g. 64K of memory for an AGI system (not including OS code and ordinary library routines), I said. Only a nutter would suggest something like that, unless it's someone who is actually a good long way through the process of building one and knows what it would take.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: wolfekeeper on 10/01/2015 01:04:31
I think you're vastly underestimating how hard it is to learn a new skill from more or less scratch.

While a computer can indeed play Countdown better than a human, it can't learn the game from scratch and play it acceptably; there's no fundamental limit to this, it just takes a freaking age.

A huge amount of human brain power is associated with general learning.

I know a reasonable amount about AI, and nothing at all gives me any reason to think that general learning problems are in any way easy; indeed, all known general learning algorithms learn extremely slowly and require a LOT of processing power, which the human brain actually has in spades, and I think it actually needs it.

Hey, maybe there is another point in the speed-time-memory optimisation space that computers can reach that the human brain can't, due to the low 'clock' speed of neurons, but it seems unlikely; a lot of this seems to be NP-complete.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: evan_au on 10/01/2015 07:25:51
One thing that computers must do before they can match human intelligence is to become much more efficient.
The human brain consumes around 25 watts (25% of the human resting metabolism).
There have been some rough estimates that a supercomputer (built with current technology) allowing research into even a small part of the human brain would consume around 10 megawatts.

Current computer circuits are designed to generate the "right" value with an error rate of less than 1 in 10^13 logic operations. You need to deliver a lot of electrons (or photons) in each switching operation to ensure that statistical variation does not cause the logic level to be misread. Charging and discharging capacitance billions of times per second consumes a lot of power.

It is thought that the brain uses more approximate methods which are more tolerant of errors, plus its relatively slow switching rate (< 1000 times per second) to achieve its amazing power efficiency. Some teams are investigating circuits that operate more like the brain.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: David Cooper on 11/01/2015 00:01:04
I think you're vastly underestimating how hard it is to learn a new skill from more or less scratch.

I'm not estimating it at all - I'm looking directly into how it can be done and then going on to try to implement it in software. Humans are very slow at learning new skills, but ultimately they achieve it by applying very simple rules. Once AGI has that set of fundamental rules built into it, it will pick up new skills at much higher speed than we can.

Quote
While a computer can indeed play Countdown better than a human, it can't learn the game from scratch and play it acceptably; there's no fundamental limit to this, it just takes a freaking age.

A thick piece of software certainly can't do so, but a piece of software designed to be able to pick up new skills is a very different kettle of fish. The inability of word processors, spreadsheets and the like to take on new skills is not a good guide as to what the hardware is actually capable of. What I'm talking about is a new kind of software which is designed to approach problems from a totally different direction, and you have never seen this approach in action other than in humans (and a few bright animals).

Quote
A huge amount of human brain power is associated with general learning.

Because we're not very good at it either and really don't make the best use of our hardware. Some people with half their brain missing are perfectly normal and show no lack of capacity or capability. Evolution took shortcuts in its design of our brain which led to it being a hugely bloated thing that uses far more energy than necessary, but it is hard for evolution to optimise it and so we're stuck with it much as it is. It is a slow pile of junk which only just scrapes in as a universal problem solving machine, while crows with tiny brains come very close to matching its performance, being almost on a level with chimps.

Quote
I know a reasonable amount about AI, and nothing at all gives me any reason to think that general learning problems are in any way easy; indeed, all known general learning algorithms learn extremely slowly and require a LOT of processing power, which the human brain actually has in spades, and I think it actually needs it.

None of the general learning algorithms you know of are doing the job the right way - most of them seem to involve using neural computers and leaving them to try to solve problems for themselves instead of the programmers doing the work and trying to work out how problems can be solved. I take a different approach by studying how I solve problems myself and then trying to identify the algorithms I used so that I can recreate them in software, but very few people are doing this kind of work, and those who are keep most of it to themselves. So, you are again being misled by judging possibility on the basis of current failure, just like looking at the first car prototypes and concluding from them that cars will never go faster than walking pace. You won't see what's possible until there's actually an AGI system available for you to play with, and then you'll suddenly understand its power. Our brains are incredibly slow, but they make up for it by using good algorithms which evolution has found through experimentation over millions of years. Computers are incredibly fast, but the software is not taking anything remotely like full advantage of them other than in simple repetitive tasks which don't require a lot of intelligence to program.

That's my last reply on this - I'd rather put the time into completing the build of the system that will settle the argument.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: wolfekeeper on 11/01/2015 00:15:40
Good luck, but I'm still pretty damn sure that general learning is NP-complete.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: jeffreyH on 11/01/2015 05:19:53
So we build a nice super-efficient self-learning AGI. Then what? Feed it lots of books? On what subjects? OK, so it gets its 'mind' filled with our selection of knowledge. What are its motivations? What does it actually want to do with its time? It has all this knowledge and information on how to do things, maybe like walking, talking and moving around. Do we then tell it what to do with these learnt abilities, or just let it decide for itself? This is the conundrum. Do we tell it what its motivations are, or at least persuade it round to our way of thinking, or let it make its own 'mind' up? What if it just gets obsessed with golf?
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: alancalverd on 11/01/2015 19:54:00
Self-awareness has nothing to do with knowledge of the non-self, which is what is contained in books.

I've never understood golf. You hit the ball as hard as you can, then run after it. Why not use a dog to bring it back for you? If the object is to put the ball into 18 holes in sequence, why put the holes so far away? Golfers clearly have no idea of self-preservation.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: David Cooper on 11/01/2015 20:24:16
Good luck, but I'm still pretty damn sure that general learning is NP-complete.

One more thing I must comment on then: I'd never heard of NP-complete before, but having read up on it, I find it hard to see how these extreme problems could get in the way of intelligence and learning. There is no comparison between what they do and what intelligent machines need to do in order to become universal problem solvers. These are actually cases where it's easier to write the program to carry out the task than it is to wait for the task to complete when the program is run. There is no need for the brain to find perfect solutions to hard problems, so it simply takes shortcuts instead every time, finding solutions that are good enough and often not far off being the best. Machines which want to outdo humans will doubtless try to solve problems with greater precision even if it takes a lot of processing time to do so, but they will already be streets ahead of us just by using the same algorithms as us.
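The shortcut point can be made concrete with any NP-complete problem; here is subset sum in Python, with the exponential exact search alongside a fast greedy heuristic that is merely 'good enough':

Code:
from itertools import combinations

def exact_best(items, target):
    # Exact search: tries all 2^n subsets - cost doubles with every item.
    best = ()
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            if sum(best) < sum(combo) <= target:
                best = combo
    return best

def greedy(items, target):
    # Shortcut: take big items first, skip what doesn't fit. Fast, not optimal.
    chosen, total = [], 0
    for x in sorted(items, reverse=True):
        if total + x <= target:
            chosen.append(x)
            total += x
    return chosen

items = [43, 31, 27, 19, 12, 8, 5, 3]
print(sum(exact_best(items, 100)))  # 100 - optimal, exponential work
print(sum(greedy(items, 100)))      # 98 - close enough, a tiny fraction of the work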
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: David Cooper on 11/01/2015 20:47:54
So we build a nice super efficient self learning AGI. Then what? Feed it lots of books? On what subjects? OK so it gets its 'mind' filled with our selection of knowledge. What are its motivations? What does it actually want to do with its time? It has all this knowledge and information on how to do things. Maybe like walking, talking and moving around. Do we then tell it what to do with these learnt abilities or just let it decide for itself. This is the conundrum. Do we tell it what its motivations are or at least persuade it in our way of thinking or let it make its own 'mind' up? What if it just gets obsessed with golf?

It has no motivations beyond doing what it is told to do. It should be told to do useful work in order to be as helpful as possible (helping sentiences to have better lives - not just people), so it will study the world looking for problems and then try to solve them, starting with the ones which will bring about the biggest gains in the shortest time. All the easy things will be done quickly as there will be billions of machines working on this, all running AGI. They will then set about the harder problems and share out the effort amongst themselves as efficiently as possible. Machine owners will have a say in what their machines do, of course, so if you ask your machine to do some work for you (and if that work is considered to be worthwhile) it will help out. It won't get obsessed with anything, but it may decide that there is some task of such great importance that it will require a billion machines to work on it flat out for a year or more. It's unlikely it would consider doing the same for a hundred years or more as the hardware available later on would cover the ground so fast that the work done in the early years would be such a small component of it as to be a complete irrelevance.

What kind of books would you feed an early AGI system with? Science books, of course - it might take it a few days to read and understand the whole reference section of a library or the entire content of Wikipedia (which is probably easier to access). It should also read all Holy books and commentaries on them in order to become the leading authority on all religions, after which it should be able to keep all the followers of those religions in line with what their religions actually say (which will be fun whenever they hit points of contradiction). It should become an expert on literature too, though it will not be able to relate directly to anything involving feelings - it will need to build models and maps of how these feelings are triggered and how good or bad they supposedly are.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: cheryl j on 12/01/2015 04:49:11
How well can computers make sense of visual images? I know they can recognize certain images - "a tree" "a human" "a house" but can they make inferences about the relationships between objects, or even things that should be present but aren't? The simplest percept is significant in everything it is not, or what you could possibly compare it to in a myriad of ways, and the countless tangential associations, some possibly relevant and some not, that every object has.
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: alancalverd on 12/01/2015 09:11:16
How well can computers make sense of visual images? I know they can recognize certain images - "a tree" "a human" "a house" but can they make inferences about the relationships between objects, or even things that should be present but aren't? 

It all depends on what you consider relevant. We use computed "moving target indicators" to screen out trees and buildings from radar displays of high level airways and just show the aircraft, then add collision warning software to predict and prevent the targets from becoming too relevant to each other, and even to suggest the best avoiding action (quite difficult when you have three targets converging, particularly for ships at sea). Closer to the ground, a computerised radar system can decide whether a hill is a threat (as it might be to an airliner making an instrument approach, or a megaton oil tanker) or not (to a glider, balloon or fishing boat).
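The collision-prediction arithmetic underneath is classical: from two targets' positions and velocities, compute the time and distance of closest approach. A sketch (units arbitrary):

Code:
import numpy as np

def closest_approach(p1, v1, p2, v2):
    # Time and distance of closest approach for two constant-velocity targets.
    dp = np.array(p2, float) - np.array(p1, float)  # relative position
    dv = np.array(v2, float) - np.array(v1, float)  # relative velocity
    if not dv.any():
        return 0.0, float(np.linalg.norm(dp))  # same velocity: gap never changes
    t = max(0.0, -float(dp @ dv) / float(dv @ dv))  # minimises |dp + t*dv|
    return t, float(np.linalg.norm(dp + t * dv))

t, d = closest_approach((0, 0), (10, 0), (100, 50), (-5, -5))
print(f"closest approach: {d:.1f} units in {t:.1f} time units")  # 15.8 in 7.0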

Animals learn relevance through experience, and often make bad decisions. Machines can be taught all we know about relevance in a millisecond and rarely get it wrong.

My favourite application is in critical care monitoring. The trick is to hook up all the sensors (oxygen saturation, pulse rate, temperature....) to a neural network which is programmed to alarm if it sees a combination of parameters that lies outside some multidimensional perimeter. An experienced nurse then  assesses the patient and resets the alarm if she thinks the patient is normal or the present trend does not merit intervention, and the machine adjusts the boundaries accordingly. The result is usually pandemonium for about an hour, with the alarms and resets gradually decreasing in frequency, until the machine only alarms on conditions that would alarm an expert nurse. Then one nurse can continuously monitor a dozen patients to a far greater degree of accuracy and expertise than a dozen nurses sitting at the bedsides.   
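A stripped-down toy of that alarm-and-reset loop (not the neural network described, just the adjust-on-override idea, with invented vitals and bounds):

Code:
# Toy self-adjusting patient alarm: start with tight per-vital bounds and
# widen them whenever the nurse dismisses an alarm as a false positive.
bounds = {"pulse": (60, 100), "spo2": (95, 100), "temp_c": (36.0, 37.5)}

def alarms(vitals):
    return [k for k, v in vitals.items()
            if not bounds[k][0] <= v <= bounds[k][1]]

def nurse_reset(vitals):
    # Nurse judged the patient normal: stretch bounds to cover these readings.
    for k, v in vitals.items():
        lo, hi = bounds[k]
        bounds[k] = (min(lo, v), max(hi, v))

reading = {"pulse": 104, "spo2": 96, "temp_c": 37.2}
print(alarms(reading))  # ['pulse'] - pandemonium at first
nurse_reset(reading)
print(alarms(reading))  # [] - the boundary has been learned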
Title: Re: Could computers gain some degree of self-awareness one day?
Post by: dlorde on 12/01/2015 12:36:44
How well can computers make sense of visual images? I know they can recognize certain images - "a tree" "a human" "a house" but can they make inferences about the relationships between objects, or even things that should be present but aren't? The simplest percept is significant in everything it is not, or what you could possibly compare it to in a myriad of ways, and the countless tangential associations, some possibly relevant and some not, that every object has.
There's a fair bit of work being done on autonomous learning and understanding from basic principles. I recently saw a video of a vision system that, having been trained to recognise the functional characteristics of a standard 4-legged chair with seat-back, had autonomously learnt to generalise this so it could recognise all kinds of chairs from various angles, and could identify even pedestal stools as seats.

This kind of functional understanding is a simple form of inference about the relationships between objects, e.g. a sitter and a seat. I would expect the system to be able, fairly easily, to identify a person as a suitable sitter for a chair through this kind of functional understanding.

In terms of practical robotics, there is a move towards providing online library services for general-purpose robots that collate the knowledge and experience of large numbers of robots as they learn about their environments and the objects in them, and make them available to all - so each robot knows what all the others know, and can share its own experience. This should help with the time and trouble it would take to teach a robot, or have it learn, about the world from scratch as biological creatures do.