The Naked Scientists

The Naked Scientists Forum

Author Topic: Could computers gain some degree of self-awareness one day?  (Read 12110 times)

Offline wolfekeeper

  • Neilep Level Member
  • ******
  • Posts: 1092
  • Thanked: 11 times
    • View Profile
Re: Could computers gain some degree of self-awareness one day?
« Reply #25 on: 11/01/2015 00:15:40 »
Good luck, but I'm still pretty damn sure that general learning is NP-complete.
 

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 3929
  • Thanked: 55 times
  • The graviton sucks
    • View Profile
Re: Could computers gain some degree of self-awareness one day?
« Reply #26 on: 11/01/2015 05:19:53 »
So we build a nice, super-efficient, self-learning AGI. Then what? Feed it lots of books? On what subjects? OK, so it gets its 'mind' filled with our selection of knowledge. What are its motivations? What does it actually want to do with its time? It has all this knowledge and information on how to do things, maybe like walking, talking and moving around. Do we then tell it what to do with these learnt abilities, or just let it decide for itself? This is the conundrum. Do we tell it what its motivations are, or at least persuade it round to our way of thinking, or let it make its own 'mind' up? What if it just gets obsessed with golf?
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4728
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: Could computers gain some degree of self-awareness one day?
« Reply #27 on: 11/01/2015 19:54:00 »
Self-awareness has nothing to do with knowledge of the non-self, which is what is contained in books.

I've never understood golf. You hit the ball as hard as you can, then run after it. Why not use a dog to bring it back for you? If the object is to put the ball into 18 holes in sequence, why put the holes so far away? Golfers clearly have no idea of self-preservation.
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: Could computers gain some degree of self-awareness one day?
« Reply #28 on: 11/01/2015 20:24:16 »
Good luck, but I'm still pretty damn sure that general learning is NP-complete.

One more thing I must comment on then: I'd never heard of NP-completeness before, but having read up on it I find it hard to see how these extreme problems could get in the way of intelligence and learning. There is no comparison between what they involve and what intelligent machines need to do in order to become universal problem solvers. These are actually cases where it's easier to write a program for the task than it is for that program to finish running. There is no need for the brain to find perfect solutions to hard problems, so it simply takes shortcuts instead, finding solutions that are good enough and often not far off being the best. Machines which want to outdo humans will doubtless try to solve problems with greater precision even if it takes a lot of processing time to do so, but they will already be streets ahead of us just by using the same algorithms as us.
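To make the shortcut point concrete, here is a small illustrative sketch (not from the thread; the town coordinates are invented) of an NP-hard problem, the travelling salesman, solved exactly by brute force and approximately by a greedy nearest-neighbour shortcut of the "good enough" kind described above:

```python
import itertools
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(points, order):
    # Total length of the closed tour visiting points in the given order.
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def brute_force(points):
    """Exact optimum: factorial time, infeasible beyond a dozen cities."""
    n = len(points)
    best = min(itertools.permutations(range(1, n)),
               key=lambda p: tour_length(points, (0,) + p))
    return tour_length(points, (0,) + best)

def nearest_neighbour(points):
    """Greedy shortcut: quadratic time, usually close to optimal."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(points[order[-1]], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return tour_length(points, order)

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 1), (7, 0)]
exact = brute_force(cities)
greedy = nearest_neighbour(cities)
print(exact, greedy)  # the greedy tour is near the optimum at a fraction of the cost
```

The brute-force search scales factorially while the greedy pass is quadratic, and on typical instances the greedy answer lands within a modest factor of the optimum, which is exactly the trade-off the post describes.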
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: Could computers gain some degree of self-awareness one day?
« Reply #29 on: 11/01/2015 20:47:54 »
So we build a nice super efficient self learning AGI. Then what? Feed it lots of books? On what subjects? OK so it gets its 'mind' filled with our selection of knowledge. What are its motivations? What does it actually want to do with its time? It has all this knowledge and information on how to do things. Maybe like walking, talking and moving around. Do we then tell it what to do with these learnt abilities or just let it decide for itself. This is the conundrum. Do we tell it what its motivations are or at least persuade it in our way of thinking or let it make its own 'mind' up? What if it just gets obsessed with golf?

It has no motivations beyond doing what it is told to do. It should be told to do useful work in order to be as helpful as possible (helping sentiences to have better lives - not just people), so it will study the world looking for problems and then try to solve them, starting with the ones which will bring about the biggest gains in the shortest time. All the easy things will be done quickly as there will be billions of machines working on this, all running AGI. They will then set about the harder problems and share out the effort amongst themselves as efficiently as possible. Machine owners will have a say in what their machines do, of course, so if you ask your machine to do some work for you (and if that work is considered to be worthwhile) it will help out. It won't get obsessed with anything, but it may decide that there is some task of such great importance that it will require a billion machines to work on it flat out for a year or more. It's unlikely it would consider doing the same for a hundred years or more as the hardware available later on would cover the ground so fast that the work done in the early years would be such a small component of it as to be a complete irrelevance.

What kind of books would you feed an early AGI system with? Science books, of course - it might take it a few days to read and understand the whole reference section of a library or the entire content of Wikipedia (which is probably easier to access). It should also read all Holy books and commentaries on them in order to become the leading authority on all religions, after which it should be able to keep all the followers of those religions in line with what their religions actually say (which will be fun whenever they hit points of contradiction). It should become an expert on literature too, though it will not be able to relate directly to anything involving feelings - it will need to build models and maps of how these feelings are triggered and how good or bad they supposedly are.
 

Offline cheryl j

  • Neilep Level Member
  • ******
  • Posts: 1460
  • Thanked: 1 times
    • View Profile
Re: Could computers gain some degree of self-awareness one day?
« Reply #30 on: 12/01/2015 04:49:11 »
How well can computers make sense of visual images? I know they can recognize certain images - "a tree", "a human", "a house" - but can they make inferences about the relationships between objects, or even about things that should be present but aren't? The simplest percept is significant in everything it is not, in the myriad ways you could compare it to other things, and in the countless tangential associations, some relevant and some not, that every object has.
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4728
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: Could computers gain some degree of self-awareness one day?
« Reply #31 on: 12/01/2015 09:11:16 »
How well can computers make sense of visual images? I know they can recognize certain images - "a tree" "a human" "a house" but can they make inferences about the relationships between objects, or even things that should be present but aren't? 

It all depends on what you consider relevant. We use computed "moving target indicators" to screen out trees and buildings from radar displays of high-level airways and just show the aircraft, then add collision-warning software to predict and prevent the targets from becoming too relevant to each other, and even to suggest the best avoiding action (quite difficult when you have three targets converging, particularly for ships at sea). Closer to the ground, a computerised radar system can decide whether a hill is a threat (as it might be to an airliner making an instrument approach, or a laden oil tanker) or not (to a glider, balloon or fishing boat).
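The collision-prediction step described above can be sketched as a closest-point-of-approach calculation. This is a generic textbook version with invented numbers, not actual avionics code:

```python
import math

def cpa(p1, v1, p2, v2):
    """For two targets on straight tracks, return the time of closest
    point of approach and the separation at that time."""
    dp = (p2[0] - p1[0], p2[1] - p1[1])   # relative position
    dv = (v2[0] - v1[0], v2[1] - v1[1])   # relative velocity
    dv2 = dv[0] ** 2 + dv[1] ** 2
    if dv2 == 0.0:                        # same velocity: separation is constant
        t = 0.0
    else:
        # minimise |dp + dv*t|; clamp to the future only
        t = max(0.0, -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2)
    dx = dp[0] + dv[0] * t
    dy = dp[1] + dv[1] * t
    return t, math.hypot(dx, dy)

# Two aircraft converging head-on, offset laterally by 1 nm
# (positions in nautical miles, speeds in knots, time in hours):
t, sep = cpa((0, 0), (480, 0), (60, 1), (-480, 0))
print(t, sep)  # closest approach in 0.0625 h (3.75 min) at 1 nm separation
```

A warning system would simply raise an alert whenever the predicted separation falls below a safety threshold within some look-ahead time.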

Animals learn relevance through experience, and often make bad decisions. Machines can be taught all we know about relevance in a millisecond and rarely get it wrong.

My favourite application is in critical care monitoring. The trick is to hook up all the sensors (oxygen saturation, pulse rate, temperature....) to a neural network which is programmed to alarm if it sees a combination of parameters that lies outside some multidimensional perimeter. An experienced nurse then assesses the patient and resets the alarm if she thinks the patient is normal or the present trend does not merit intervention, and the machine adjusts the boundaries accordingly. The result is usually pandemonium for about an hour, with the alarms and resets gradually decreasing in frequency, until the machine only alarms on conditions that would alarm an expert nurse. Then one nurse can continuously monitor a dozen patients to a far greater degree of accuracy and expertise than a dozen nurses sitting at the bedsides.
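A toy version of that alarm-and-reset loop might look like this (an assumed simplification: a real system uses a neural network over a multidimensional boundary, not independent per-parameter limits):

```python
# Toy adaptive alarm: start with deliberately tight limits on each vital
# sign and widen them whenever a nurse dismisses a false alarm.

class AdaptiveAlarm:
    def __init__(self, limits):
        # limits: {name: (low, high)} initial, deliberately tight bounds
        self.limits = dict(limits)

    def check(self, vitals):
        """Return the names of any parameters outside the current boundary."""
        return [k for k, v in vitals.items()
                if not (self.limits[k][0] <= v <= self.limits[k][1])]

    def nurse_reset(self, vitals):
        """Nurse judges the patient normal: stretch bounds to cover this case."""
        for k, v in vitals.items():
            lo, hi = self.limits[k]
            self.limits[k] = (min(lo, v), max(hi, v))

monitor = AdaptiveAlarm({"pulse": (60, 90), "spo2": (95, 100)})
reading = {"pulse": 96, "spo2": 96}
if monitor.check(reading):          # alarms at first ("pulse" out of range)...
    monitor.nurse_reset(reading)    # ...nurse resets, boundary adapts
print(monitor.check(reading))       # the same reading no longer alarms
```

The "pandemonium for about an hour" is the phase where the tight initial bounds generate false alarms faster than the resets can widen them.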
 

Offline dlorde

  • Neilep Level Member
  • ******
  • Posts: 1441
  • Thanked: 9 times
  • ex human-biologist & software developer
    • View Profile
Re: Could computers gain some degree of self-awareness one day?
« Reply #32 on: 12/01/2015 12:36:44 »
How well can computers make sense of visual images? I know they can recognize certain images - "a tree", "a human", "a house" - but can they make inferences about the relationships between objects, or even about things that should be present but aren't? The simplest percept is significant in everything it is not, in the myriad ways you could compare it to other things, and in the countless tangential associations, some relevant and some not, that every object has.
There's a fair bit of work being done on autonomous learning and understanding from basic principles. I recently saw a video of a vision system that, having been trained to recognise the functional characteristics of a standard four-legged chair with a seat-back, had autonomously learnt to generalise this so that it could recognise all kinds of chairs from various angles, and could even identify pedestal stools as seats.

This kind of functional understanding is a simple form of inference about the relationships between objects, e.g. a sitter and a seat. I would expect the system to be able, fairly easily, to identify a person as a suitable sitter for a chair through this kind of functional understanding.
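A crude caricature of that kind of functional recognition, with invented thresholds, would be to treat anything with a stable, roughly knee-high, large-enough horizontal surface as sittable, rather than matching a chair template:

```python
# Hypothetical functional test: would this object support a seated person?
# All thresholds are invented for illustration.

def is_seat(surface_height_m, surface_area_m2, is_stable):
    knee_high = 0.35 <= surface_height_m <= 0.75   # assumed sitting-height range
    big_enough = surface_area_m2 >= 0.09           # roughly a 30 cm square
    return knee_high and big_enough and is_stable

print(is_seat(0.45, 0.16, True))   # four-legged chair -> True
print(is_seat(0.65, 0.12, True))   # pedestal stool    -> True
print(is_seat(0.45, 0.02, True))   # fence-post top    -> False
```

The point of the caricature is that nothing in it mentions legs or backs, which is why a pedestal stool passes where a shape-matching template would fail.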

In terms of practical robotics, there is a move towards providing online library services for general-purpose robots that collate the knowledge and experience of large numbers of robots as they learn about their environments and the objects in them, and make it available to all, so each robot knows what all the others know and can share its own experience. This should help with the time and trouble it would take to teach a robot, or have it learn, about the world from scratch as biological creatures do.
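The shared-library idea could be sketched like this (all names invented; real cloud-robotics platforms are far more elaborate):

```python
# Toy shared knowledge store: robots publish what they learn about objects,
# so any robot can reuse another's experience instead of relearning it.

class SharedKnowledge:
    def __init__(self):
        self.store = {}          # object name -> merged observations

    def publish(self, robot_id, obj, facts):
        entry = self.store.setdefault(obj, {})
        entry.update(facts)
        entry.setdefault("sources", set()).add(robot_id)

    def lookup(self, obj):
        return self.store.get(obj)

library = SharedKnowledge()
library.publish("robot-A", "mug", {"graspable": True, "mass_kg": 0.3})
library.publish("robot-B", "mug", {"fragile": True})

# robot-C has never seen a mug, but inherits both robots' experience:
mug = library.lookup("mug")
print(mug["graspable"], mug["fragile"])
```

A real service would also have to resolve conflicting reports and weight observations by reliability, which is where most of the hard work lies.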
 
