Could computers gain some degree of self-awareness one day?


wolfekeeper
Re: Could computers gain some degree of self-awareness one day?
« Reply #20 on: 09/01/2015 20:03:39 »
Quote from: David Cooper on 09/01/2015 18:26:42
Computers are really good at massive search operations.
Actually... no. You'd think that, but no. They're fairly good at searching for some things in highly restricted areas. But a single human can (say) play chess at Grandmaster level, drive home, talk to another human, understand a visual scene, listen to music, etc. That's just one human. Even adding together those different processing demands ends up requiring a huge computer. Then add in the fact that the human taught itself to do those things... oh boy.
Quote
Human brains are incredibly slow and need to be massively parallel in order to make up for that deficiency.
They're incredibly slow except for the fact that they're massively parallel, so the total throughput is stupendously vast.
Quote
The fact that we can write our own programs is simply down to the fact that evolution has programmed us to be universal problem solving machines.
Yes, by making us massively parallel. The human brain has more processing, storage, interconnection and throughput than any supercomputer. A supercomputer might match it on one of those, but not on all of them at the same time.
Quote
We really are just a few steps away from making this happen.
Much more than 50 years of AI research says you're wrong.
 



David Cooper
Re: Could computers gain some degree of self-awareness one day?
« Reply #21 on: 09/01/2015 23:16:44 »
Quote from: wolfekeeper on 09/01/2015 20:03:39
Quote from: David Cooper on 09/01/2015 18:26:42
Computers are really good at massive search operations.
Actually... no. You'd think that, but no. They're fairly good at searching for some things in highly restricted areas. But a single human can (say) play chess at Grandmaster level, drive home, talk to another human, understand a visual scene, listen to music, etc. That's just one human. Even adding together those different processing demands ends up requiring a huge computer. Then add in the fact that the human taught itself to do those things... oh boy.

A simple computer playing chess using the same algorithm as a human might well be able to beat the best humans by thinking faster than them. We don't know yet, because the software used for the task so far has always used the blunderbuss approach instead of restricting itself to a much more limited range of possibilities using better algorithms like those applied by the best human players.
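To make that concrete, here is a minimal sketch (illustrative only - not code from this thread) of the standard way chess programs cut down the range of possibilities: alpha-beta pruning, which returns the same answer as a full minimax search while skipping branches that provably cannot change the result. The toy game tree is a made-up stand-in for real positions.

```python
# Alpha-beta search over a toy game tree: leaves are heuristic scores,
# internal nodes are lists of children. Gives the same answer as full
# minimax, but skips branches that provably cannot change the result.
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if not isinstance(node, list):        # leaf: a heuristic score
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        score = alphabeta(child, alpha, beta, not maximizing)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if alpha >= beta:                 # the other player would avoid this line
            break                         # prune the remaining siblings
    return best

tree = [[3, 5], [[6, 9], 1], [1, 2]]      # hand-made tree standing in for positions
print(alphabeta(tree))                    # -> 3
```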

There's a numbers game in a popular TV show where you have to use six small numbers to make a randomly generated three-digit target by adding, subtracting, multiplying and dividing. A computer can calculate every possible solution in under a second by following all possible routes, but a human only follows a tiny fraction of one percent of the possible routes, making up for this by selecting the most likely routes using intelligent algorithms which tend to find solutions quickly. A primitive computer from the '80s programmed to do the same would typically find a solution faster than a modern computer using the unintelligent blunderbuss approach.
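As a concrete sketch of the exhaustive "follow all possible routes" approach to that numbers game (the show is Countdown, as mentioned later in the thread) - illustrative code only, and with four starting numbers rather than the show's six so that even pure Python finishes instantly:

```python
# Exhaustive Countdown-style solver: repeatedly combine any two remaining
# numbers with + - * / (keeping results positive and division exact)
# until the target appears, returning the first trail of steps that works.
def solve(numbers, target, trail=()):
    if target in numbers:
        return trail
    n = len(numbers)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            a, b = numbers[i], numbers[j]
            rest = [numbers[k] for k in range(n) if k not in (i, j)]
            candidates = [(a + b, f"{a}+{b}"), (a * b, f"{a}*{b}")]
            if a > b:
                candidates.append((a - b, f"{a}-{b}"))
            if b != 0 and a % b == 0:
                candidates.append((a // b, f"{a}/{b}"))
            for value, step in candidates:
                found = solve(rest + [value], target, trail + (f"{step}={value}",))
                if found is not None:
                    return found
    return None

print(solve([2, 3, 7, 10], 67))   # finds a route such as 7*10=70 then 70-3=67
```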

Visual processing is slow if you have to take input from a high-definition camera, but the eye works with highly blurred images instead, only having high definition at the centre and moving the eyes if part of the scene needs to be looked at more closely. This trick of working with a blurred scene can be done in a computer too, but the cameras available aren't designed the right way for it. What's needed is a camera that sends multiple streams at the same time, with most of the processing work being done on the least detailed one and the high-definition ones being ignored unless a small part of the scene needs to be looked at more carefully. As it stands, you have to waste masses of processing time averaging out the data from many pixels in order to create a blurred version of the scene which you can then process quickly, though an easy fix is to use multiple cameras and put something in front of some of them to blur the image, so that reading a single pixel gives you an approximate average for a whole 16x16 block.
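The "averaging out the data from many pixels" step is just block averaging - one level of an image pyramid. A minimal numpy sketch (illustrative, not from the post) of reducing each 16x16 block of a frame to its mean:

```python
import numpy as np

def downsample(frame, block=16):
    """Average each block x block tile of a greyscale frame into one pixel,
    producing the kind of low-detail stream most processing would run on."""
    h, w = frame.shape
    h, w = h - h % block, w - w % block            # crop to whole blocks
    tiles = frame[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))

frame = np.random.randint(0, 256, (1080, 1920)).astype(float)
print(downsample(frame).shape)                     # (67, 120) coarse view
```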

You keep making the mistake of looking at what's being done now by people who are programming things to work in highly inefficient ways, instead of thinking about how those things could be done orders of magnitude faster on today's hardware by using intelligent methods.

Quote
Quote
The fact that we can write our own programs is simply down to the fact that evolution has programmed us to be universal problem solving machines.
Yes, by making us massively parallel. The human brain has more processing, storage, interconnection and throughput than any supercomputer. A supercomputer might match it on one of those, but not on all of them at the same time.

Supercomputers aren't any more intelligently programmed than desktops - they typically just use the blunderbuss approach for everything and rely on extra grunt to get things done faster. (Much of the work they do might be impossible to speed up though, as it's often things like physics simulations where there may not be any viable shortcuts, but then they're doing something our brains can't compete with anyway.)

Quote
Quote
We really are just a few steps away from making this happen.
Much more than 50 years of AI research says you're wrong.

15 years of my work in AI says I'm right. The failure of most people working in the same field to make rapid progress has no bearing on the issue, other than misleading you into thinking that my claims can't come out of real research, but I've put plenty of clues in the things I've said to demonstrate that I know what I'm talking about, even if it's only after other people have covered the same ground that they will be able to recognise the fact. I've left a trail of evidence all over the Internet to make sure that future AGI will be able to look back and determine that I was in the lead and that I would have got there first if I hadn't been taken out by illness - it's an insurance policy just in case that happens (which is more than possible, given my current state of health).

E.g. 64K of memory for an AGI system (not including OS code and ordinary library routines), I said. Only a nutter would suggest something like that, unless it's someone who is actually a good long way through the process of building one and who actually knows what it would take.
« Last Edit: 09/01/2015 23:20:52 by David Cooper »
 

wolfekeeper
Re: Could computers gain some degree of self-awareness one day?
« Reply #22 on: 10/01/2015 01:04:31 »
I think you're vastly underestimating how hard it is to learn a new skill from more or less scratch.

While a computer can indeed play Countdown better than a human, it can't learn the game from scratch and play it acceptably; there's no fundamental limit to this, it just takes a freaking age.

A huge amount of human brain power is associated with general learning.

I know a reasonable amount about AI, and nothing at all gives me any reason to think that general learning problems are in any way easy; indeed all known general learning algorithms learn extremely slowly, and require a LOT of processing power - which the human brain actually has in spades, and I think it actually needs it.

Hey, maybe there is another point in the speed-time-memory optimisation space that computers can reach and the human brain can't, due to the low 'clock' speed of neurons, but it seems unlikely; a lot of this seems to be NP-complete.
« Last Edit: 10/01/2015 01:09:29 by wolfekeeper »
 

evan_au
Re: Could computers gain some degree of self-awareness one day?
« Reply #23 on: 10/01/2015 07:25:51 »
One thing that computers must do before they can match human intelligence is to become much more efficient.
The human brain consumes around 25 Watts (25% of the human resting metabolism).
There have been some rough estimates that operating a supercomputer (built with current technology) capable of supporting research into even a small part of the human brain would consume around 10 megawatts.
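Putting those two figures side by side: 10 MW / 25 W = 400,000, so on these rough estimates the brain is more than five orders of magnitude more energy-efficient at this kind of processing.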

Current computer circuits are designed to generate the "right" value with an error rate < 1 in 10^13 logic operations. You need to deliver a lot of electrons (or photons) in each switching operation to ensure that statistical variation does not cause the logic level to be misread. Charging and discharging capacitance billions of times per second consumes a lot of power.

It is thought that the brain uses more approximate methods which are more tolerant of errors, plus its relatively slow switching rate (< 1000 times per second) to achieve its amazing power efficiency. Some teams are investigating circuits that operate more like the brain.
 

David Cooper
Re: Could computers gain some degree of self-awareness one day?
« Reply #24 on: 11/01/2015 00:01:04 »
Quote from: wolfekeeper on 10/01/2015 01:04:31
I think you're vastly underestimating how hard it is to learn a new skill from more or less scratch.

I'm not estimating it at all - I'm looking directly into how it can be done and then going on to try to implement it in software. Humans are very slow at learning new skills, but ultimately they achieve it by applying very simple rules. Once AGI has that set of fundamental rules built into it, it will pick up new skills at much higher speed than we can.

Quote
While a computer can indeed play Countdown better than a human, it can't learn the game from scratch and play it acceptably; there's no fundamental limit to this, it just takes a freaking age.

A thick piece of software certainly can't do so, but a piece of software designed to be able to pick up new skills is a very different kettle of fish. The inability of word processors, spreadsheets and the like to take on new skills is not a good guide as to what the hardware is actually capable of. What I'm talking about is a new kind of software which is designed to approach problems from a totally different direction, and you have never seen this approach in action other than in humans (and a few bright animals).

Quote
A huge amount of human brain power is associated with general learning.

Because we're not very good at it either and really don't make the best use of our hardware. Some people with half their brain missing are perfectly normal and show no lack of capacity or capability. Evolution took shortcuts in its design of our brain which led to it being a hugely bloated thing that uses far more energy than necessary, but it is hard for evolution to optimise it and so we're stuck with it much as it is. It is a slow pile of junk which only just scrapes in as a universal problem solving machine, while crows with tiny brains come very close to matching its performance, being almost on a level with chimps.

Quote
I know a reasonable amount about AI, and nothing at all gives me any reason to think that general learning problems are in any way easy; indeed all known general learning algorithms learn extremely slowly, and require a LOT of processing power - which the human brain actually has in spades, and I think it actually needs it.

None of the general learning algorithms you know of are doing the job the right way - most of them seem to involve using neural networks and leaving them to try to solve problems for themselves, instead of the programmers doing the work and trying to work out how problems can be solved. I take a different approach by studying how I solve problems myself and then trying to identify the algorithms I used so that I can recreate them in software, but very few people are doing this kind of work, and those who are keep most of it to themselves. So you are again being misled by judging possibility on the basis of current failure, just like looking at the first car prototypes and concluding from them that cars will never go faster than walking pace. You won't see what's possible until there's actually an AGI system available for you to play with, and then you'll suddenly understand its power. Our brains are incredibly slow, but they make up for it by using good algorithms which evolution has found through experimentation over millions of years. Computers are incredibly fast, but the software is not taking anything remotely like full advantage of them other than in simple repetitive tasks which don't require a lot of intelligence to program.

That's my last reply on this - I'd rather put the time into completing the build of the system that will settle the argument.
 



wolfekeeper
Re: Could computers gain some degree of self-awareness one day?
« Reply #25 on: 11/01/2015 00:15:40 »
Good luck, but I'm still pretty damn sure that general learning is NP-complete.
 

jeffreyH
Re: Could computers gain some degree of self-awareness one day?
« Reply #26 on: 11/01/2015 05:19:53 »
So we build a nice super-efficient self-learning AGI. Then what? Feed it lots of books? On what subjects? OK, so it gets its 'mind' filled with our selection of knowledge. What are its motivations? What does it actually want to do with its time? It has all this knowledge and information on how to do things - maybe like walking, talking and moving around. Do we then tell it what to do with these learnt abilities, or just let it decide for itself? This is the conundrum. Do we tell it what its motivations are, or at least persuade it round to our way of thinking, or let it make its own 'mind' up? What if it just gets obsessed with golf?
 

alancalverd
Re: Could computers gain some degree of self-awareness one day?
« Reply #27 on: 11/01/2015 19:54:00 »
Self-awareness has nothing to do with knowledge of the non-self, which is what is contained in books.

I've never understood golf. You hit the ball as hard as you can, then run after it. Why not use a dog to bring it back for you? If the object is to put the ball into 18 holes in sequence, why put the holes so far away? Golfers clearly have no idea of self-preservation.
 

David Cooper
Re: Could computers gain some degree of self-awareness one day?
« Reply #28 on: 11/01/2015 20:24:16 »
Quote from: wolfekeeper on 11/01/2015 00:15:40
Good luck, but I'm still pretty damn sure that general learning is NP-complete.

One more thing I must comment on then: I'd never heard of NP-complete before, but having read up on it I find it hard to see how these extreme problems could get in the way of intelligence and learning. There is no comparison between what they do and what intelligent machines need to do in order to become universal problem solvers. These are actually cases where it's easier to write a program to carry out the task than it is for the task to be carried out when the program is run. There is no need for the brain to find perfect solutions to hard problems, so it simply takes shortcuts every time, finding solutions that are good enough and often not far off being the best. Machines which want to outdo humans will doubtless try to solve problems with greater precision even if it takes a lot of processing time to do so, but they will already be streets ahead of us just by using the same algorithms as us.
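To illustrate that "good enough shortcut" point concretely (my example, not the poster's): the travelling salesman problem is NP-hard, yet a greedy nearest-neighbour heuristic runs in polynomial time and usually lands close to the optimum that brute force needs factorial time to guarantee.

```python
import itertools, math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def greedy_tour(cities):
    """Nearest-neighbour shortcut: fast, not optimal, usually good enough."""
    unvisited, tour = list(cities[1:]), [cities[0]]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(tour[-1], c))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def exact_tour(cities):
    """Brute force: guaranteed optimal, but factorial time - hopeless at scale."""
    return min(([cities[0]] + list(p) for p in itertools.permutations(cities[1:])),
               key=tour_length)

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(9)]
print(tour_length(greedy_tour(cities)), tour_length(exact_tour(cities)))
```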
 



David Cooper
Re: Could computers gain some degree of self-awareness one day?
« Reply #29 on: 11/01/2015 20:47:54 »
Quote from: jeffreyH on 11/01/2015 05:19:53
So we build a nice super-efficient self-learning AGI. Then what? Feed it lots of books? On what subjects? OK, so it gets its 'mind' filled with our selection of knowledge. What are its motivations? What does it actually want to do with its time? It has all this knowledge and information on how to do things - maybe like walking, talking and moving around. Do we then tell it what to do with these learnt abilities, or just let it decide for itself? This is the conundrum. Do we tell it what its motivations are, or at least persuade it round to our way of thinking, or let it make its own 'mind' up? What if it just gets obsessed with golf?

It has no motivations beyond doing what it is told to do. It should be told to do useful work in order to be as helpful as possible (helping sentiences to have better lives - not just people), so it will study the world looking for problems and then try to solve them, starting with the ones which will bring about the biggest gains in the shortest time. All the easy things will be done quickly as there will be billions of machines working on this, all running AGI. They will then set about the harder problems and share out the effort amongst themselves as efficiently as possible. Machine owners will have a say in what their machines do, of course, so if you ask your machine to do some work for you (and if that work is considered to be worthwhile) it will help out. It won't get obsessed with anything, but it may decide that there is some task of such great importance that it will require a billion machines to work on it flat out for a year or more. It's unlikely it would consider doing the same for a hundred years or more as the hardware available later on would cover the ground so fast that the work done in the early years would be such a small component of it as to be a complete irrelevance.

What kind of books would you feed an early AGI system with? Science books, of course - it might take it a few days to read and understand the whole reference section of a library or the entire content of Wikipedia (which is probably easier to access). It should also read all Holy books and commentaries on them in order to become the leading authority on all religions, after which it should be able to keep all the followers of those religions in line with what their religions actually say (which will be fun whenever they hit points of contradiction). It should become an expert on literature too, though it will not be able to relate directly to anything involving feelings - it will need to build models and maps of how these feelings are triggered and how good or bad they supposedly are.
 

cheryl j
Re: Could computers gain some degree of self-awareness one day?
« Reply #30 on: 12/01/2015 04:49:11 »
How well can computers make sense of visual images? I know they can recognize certain images - "a tree", "a human", "a house" - but can they make inferences about the relationships between objects, or even about things that should be present but aren't? The simplest percept is significant in terms of everything it is not, everything you could possibly compare it to in a myriad of ways, and the countless tangential associations - some possibly relevant and some not - that every object has.
 

alancalverd
Re: Could computers gain some degree of self-awareness one day?
« Reply #31 on: 12/01/2015 09:11:16 »
Quote from: cheryl j on 12/01/2015 04:49:11
How well can computers make sense of visual images? I know they can recognize certain images - "a tree", "a human", "a house" - but can they make inferences about the relationships between objects, or even about things that should be present but aren't?

It all depends on what you consider relevant. We use computed "moving target indicators" to screen out trees and buildings from radar displays of high-level airways and just show the aircraft, then add collision-warning software to predict and prevent the targets from becoming too relevant to each other, and even to suggest the best avoiding action (quite difficult when you have three targets converging, particularly for ships at sea). Closer to the ground, a computerised radar system can decide whether a hill is a threat (as it might be to an airliner making an instrument approach, or a megaton oil tanker) or not (to a glider, balloon or fishing boat).
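The heart of that collision-warning logic is a closest-point-of-approach calculation; a minimal sketch for two constant-velocity targets (my illustration, not the actual avionics):

```python
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Time (hours) and miss distance (km) of closest approach for two
    targets moving at constant velocity in the plane."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)   # relative position
    dv = np.asarray(v2, float) - np.asarray(v1, float)   # relative velocity
    speed2 = dv @ dv
    t = 0.0 if speed2 == 0 else max(0.0, -(dp @ dv) / speed2)  # ignore the past
    miss = float(np.linalg.norm(dp + dv * t))
    return t, miss

# Two head-on aircraft at 480 km/h, 100 km apart: alarm well before impact.
t, miss = closest_approach((0, 0), (480, 0), (100, 0), (-480, 0))
print(f"closest approach {miss:.1f} km in {t * 60:.1f} minutes")
```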

Animals learn relevance through experience, and often make bad decisions. Machines can be taught all we know about relevance in a millisecond and rarely get it wrong.

My favourite application is in critical care monitoring. The trick is to hook up all the sensors (oxygen saturation, pulse rate, temperature...) to a neural network which is programmed to alarm if it sees a combination of parameters that lies outside some multidimensional perimeter. An experienced nurse then assesses the patient and resets the alarm if she thinks the patient is normal or the present trend does not merit intervention, and the machine adjusts the boundaries accordingly. The result is usually pandemonium for about an hour, with the alarms and resets gradually decreasing in frequency, until the machine only alarms on conditions that would alarm an expert nurse. Then one nurse can continuously monitor a dozen patients to a far greater degree of accuracy and expertise than a dozen nurses sitting at the bedsides.
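A toy sketch of that nurse-in-the-loop scheme (illustrative only - a real system would use a trained neural network rather than this axis-aligned box): the monitor alarms on any reading outside its current "normal" envelope, and each nurse reset widens the envelope to cover the case just judged normal.

```python
import numpy as np

class AlarmEnvelope:
    """Axis-aligned 'normal' region over vital signs - a crude stand-in
    for the multidimensional perimeter a neural network would learn."""
    def __init__(self, seed_reading):
        self.low = np.array(seed_reading, float)
        self.high = np.array(seed_reading, float)

    def check(self, reading, nurse_says_normal):
        reading = np.asarray(reading, float)
        alarm = bool(np.any(reading < self.low) or np.any(reading > self.high))
        if alarm and nurse_says_normal:          # nurse reset: widen the envelope
            self.low = np.minimum(self.low, reading)
            self.high = np.maximum(self.high, reading)
        return alarm

# Readings: (O2 saturation %, pulse bpm, temperature C)
monitor = AlarmEnvelope((97, 72, 36.8))
print(monitor.check((95, 80, 37.1), nurse_says_normal=True))    # alarms, then learns
print(monitor.check((96, 78, 37.0), nurse_says_normal=True))    # now inside: quiet
print(monitor.check((85, 130, 39.2), nurse_says_normal=False))  # genuine alarm stays
```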
 

dlorde
Re: Could computers gain some degree of self-awareness one day?
« Reply #32 on: 12/01/2015 12:36:44 »
Quote from: cheryl j on 12/01/2015 04:49:11
How well can computers make sense of visual images? I know they can recognize certain images - "a tree", "a human", "a house" - but can they make inferences about the relationships between objects, or even about things that should be present but aren't? The simplest percept is significant in terms of everything it is not, everything you could possibly compare it to in a myriad of ways, and the countless tangential associations - some possibly relevant and some not - that every object has.
There's a fair bit of work being done on autonomous learning and understanding from basic principles. I recently saw a video of a vision system that, having been trained to recognise the functional characteristics of a standard 4-legged chair with seat-back, had autonomously learnt to generalise this so it could recognise all kinds of chairs from various angles, and could identify even pedestal stools as seats.

This kind of functional understanding is a simple form of inference about the relationships between objects, e.g. a sitter and a seat. I would expect the system to be able, fairly easily, to identify a person as a suitable sitter for a chair through this kind of functional understanding.

In terms of practical robotics, there is a move towards providing online library services for general-purpose robots, which collate the knowledge and experience of large numbers of robots as they learn about their environments and the objects in them, and make it available to all - so each robot knows what all the others know, and can share its own experience. This should help with the time and trouble it would take to teach a robot, or have it learn, about the world from scratch as biological creatures do.
 


