The Naked Scientists
Naked Science Forum » Non Life Sciences » Technology

Could computers gain some degree of self-awareness one day?

  • 32 Replies
  • 15928 Views

Offline Musicforawhile (OP)

Could computers gain some degree of self-awareness one day?
« on: 04/01/2015 15:28:12 »
If consciousness emerged as a result of the complexity of the human mind, could a computer begin to gain some self-awareness if it were complex enough? Could it eventually reach a level of self-awareness similar to our own, or even superior to it?

Offline alancalverd

Re: Could computers gain some degree of self-awareness one day?
« Reply #1 on: 04/01/2015 15:31:45 »
"Windows is looking for a solution to the problem". Is that a symptom of self-awareness?
Offline Musicforawhile (OP)

Re: Could computers gain some degree of self-awareness one day?
« Reply #2 on: 04/01/2015 16:22:12 »
Quote from: alancalverd on 04/01/2015 15:31:45
"Windows is looking for a solution to the problem". Is that a symptom of self-awareness?

Well, that's what I'm asking you. I'm not a computer or maths specialist, so I thought I'd ask the people who understand computers.

Offline alancalverd

Re: Could computers gain some degree of self-awareness one day?
« Reply #3 on: 04/01/2015 17:59:42 »
And I'm not sure what anyone means by self-awareness!

I think, rather like "life" or "consciousness", any useful definition of the abstract noun has to derive from an adjective and a function: "an entity is classified as self-aware if it…"

And there it all gets a bit slippery, because we can almost always imagine or even demonstrate a machine that does exactly what the definition requires. Next thing you know, we are talking about entirely mechanistic models of humans. That doesn't worry me - I care for machines as well as animals, and have a healthy relationship with everything from the dumbest tools to the brightest dogs (though humans are often disappointingly dishonest) - but some people believe (with no evidence) that Homo sapiens is somehow special.
 

Offline Musicforawhile (OP)

Re: Could computers gain some degree of self-awareness one day?
« Reply #4 on: 04/01/2015 22:06:18 »
What would be the end of that sentence? "An entity could be described as self-aware if…" What does it have to be able to do, as far as we know?

Offline CliffordK

Re: Could computers gain some degree of self-awareness one day?
« Reply #5 on: 04/01/2015 22:47:17 »
It is inevitable that AI will progress, probably quite rapidly over the next century or few centuries. 

There is a question of how much will be programmed, vs how much the computer will be able to learn on its own.  How much will the computers be able to program themselves?

Certainly computers are getting "smarter". Just bring up a simple program like your favorite word processor, and it will flag spelling and grammar errors, and perhaps even suggest words to use. But, of course, all of that was programmed in.

So, what is self-aware?
I could write answers to a number of questions and put them into a computer program.

So,
You Ask:  What are you?
Computer Responds: I am a computer.

You:
What are you made of?
Computer: Silicon chips and wires.

You could certainly program in any number of questions/responses.  So, is the computer self aware? 
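The scripted question/response scheme described above can be sketched in a few lines, which makes the point concrete: every answer is a pre-programmed lookup, so nothing in the program models itself. (This is a toy illustration; the table and function names are invented.)

```python
# A minimal sketch of a scripted "self-aware" computer: every answer is a
# pre-programmed lookup, so nothing here actually models itself.
CANNED_ANSWERS = {
    "what are you?": "I am a computer.",
    "what are you made of?": "Silicon chips and wires.",
}

def respond(question: str) -> str:
    """Return the scripted answer, or admit there is no script for it."""
    return CANNED_ANSWERS.get(
        question.strip().lower(),
        "I have no answer programmed for that.",
    )

print(respond("What are you?"))        # I am a computer.
print(respond("Are you self-aware?"))  # I have no answer programmed for that.
```

However fluent such responses look, the program's "knowledge of itself" is just whatever the programmer typed into the table.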

Perhaps one should ask if your dog is self-aware?

Did you suddenly conclude that you are a "human"?  Or is that something that you were told?  So, are people even truly self-aware?

Offline alancalverd

Re: Could computers gain some degree of self-awareness one day?
« Reply #6 on: 04/01/2015 23:10:27 »
Quote from: Musicforawhile on 04/01/2015 22:06:18
What would be the end of that sentence? "An entity could be described as self-aware if…" What does it have to be able to do, as far as we know?

That's the whole conundrum. I have built machines that spent half their time doing what they were "paid" for, and the rest of the time checking themselves to make sure they were working correctly. My car goes into a self-preservation mode if it detects a problem that might damage the engine, and tells me if any of the light bulbs need replacing. One of my friends flies an airliner that won't move until it is happy that it is properly loaded, all the doors are shut, etc.: there are basically two buttons - "start engines" and "take off" - and neither will work if any important part of the aircraft is out of specification. All these machines are at least as "self-aware" as a mouse, which knows when it's hungry, threatened, or in the mood to make baby mice, and responds appropriately.
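The airliner-style interlock described above can be sketched as a set of pre-flight self-checks that must all pass before anything runs. (A toy sketch; the sensor names and limits are invented for illustration.)

```python
# Sketch of an aircraft-style interlock: nothing runs until every
# self-check passes. Sensor names and limits are invented for illustration.
CHECKS = {
    "doors_closed": lambda state: state["doors_closed"],
    "load_within_limits": lambda state: state["load_kg"] <= 75000,
    "oil_pressure_ok": lambda state: 20 <= state["oil_psi"] <= 90,
}

def failed_checks(state: dict) -> list:
    """Names of all self-checks that the current state fails."""
    return [name for name, ok in CHECKS.items() if not ok(state)]

def start_engines(state: dict) -> str:
    faults = failed_checks(state)
    if faults:
        return "refused: " + ", ".join(faults)  # "self-preservation mode"
    return "engines started"

print(start_engines({"doors_closed": True, "load_kg": 80000, "oil_psi": 55}))
# refused: load_within_limits
```

Whether this counts as self-awareness is exactly the question: the machine monitors its own state and acts on it, yet it is nothing but a table of conditions.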

So I'm afraid I have to put the question back to you. What do you do that is symptomatic of self-awareness and can't be replicated by an algorithm or a set of gears? Simply "knowing that I exist" won't do, because we have no way of testing it!
 

Offline wolfekeeper

Re: Could computers gain some degree of self-awareness one day?
« Reply #7 on: 05/01/2015 23:46:29 »
At the moment, no single computer, not even a supercomputer, has enough processing power to run a human intelligence.

Projections based on Moore's law suggest that we will reach that point in about 30 years; this is sometimes called the 'singularity'. Beyond that point, computers will be more intelligent than humans.

Self-awareness itself is not a particularly deep problem. In general, consciousness is just remembering and knowing about your own thought patterns; it's essentially a type of feedback loop.
 

Offline evan_au

Re: Could computers gain some degree of self-awareness one day?
« Reply #8 on: 06/01/2015 15:58:23 »
Quote
One of my friends flies an airliner that won't move until it is happy that it is properly loaded etc
An historical view:
Traditional mainframe computers didn't have many sensors - it was up to a human operator to ensure that the air-conditioning was working and the power was connected. Most of the processing power was dedicated to number crunching, and there were no spare processors available to carry out maintenance functions. If anything broke, the computer usually crashed; there was no "limp along" mode.

High-availability workstations can keep working when a power supply fails, or a CPU chip fails, due to redundant hardware. And they can warn you when the air gets too hot, or the voltage goes out of tolerance, due to internal sensors. Modern disk drives will detect that certain sectors are not working very well, move the data to a good part of the disk, and mark the sector as "faulty". Each module (like power supply, disk drive, or fan) has its own sensors and maintenance processor that can communicate with the main CPU to tell it how they are "feeling". So they can limp along, but they don't really have any manipulators that can do something about the underlying problem.

We are seeing smartphones which are now somewhat independent of mains power due to internal batteries, and brimming with sensors - acceleration, magnetic field, battery charge, GPS, wireless, temperature, air pressure. They can communicate their status to the owner and to remote servers. Because space is at a premium, they don't have multiple redundant sensors (but sometimes WiFi can provide location data if GPS is unavailable). Newer smartphones have multiple redundant CPU cores, but I'm not sure how well the operating system and applications can survive a core crash.

Perhaps the first consumer device that was self-aware and could actually do something about its condition was the small floor-sweeping robot that could find its way back to its charging station when its battery got low. Perhaps the next step is to empty its own dustpan? In this case, low cost means no redundancy - any failure is "fatal" to its mission.

So I suggest that effective self-awareness goes beyond raw processing power to include redundancy, internal & external sensors, multiple processors (some doing maintenance functions), enough control over the environment to autonomously recover from "simple problems" and communications to request help for "difficult problems". Ideally, it should also have the ability to predict & avoid "problems" and to seek out "value" in its environment.
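The criteria above (internal sensors, autonomous recovery from simple problems, requesting help for difficult ones) can be sketched as a small decision loop, in the spirit of the floor-sweeping robot. (A toy sketch; the sensor names and thresholds are invented for illustration.)

```python
# Sketch of "effective self-awareness": read internal sensors, recover
# autonomously from simple problems, request help for difficult ones.
# Sensor names and thresholds are invented for illustration.
def next_action(battery_pct: float, bin_full: bool, motor_fault: bool) -> str:
    if motor_fault:
        return "request help"        # difficult problem: can't fix itself
    if battery_pct < 20:
        return "return to charger"   # simple problem: autonomous recovery
    if bin_full:
        return "return to base to empty bin"
    return "keep sweeping"           # no problems: seek "value" (dirt)

assert next_action(80, False, False) == "keep sweeping"
assert next_action(10, False, False) == "return to charger"
assert next_action(90, False, True) == "request help"
```

Everything here is reactive; the "predict & avoid problems" step would require a model of the future, which is where the real difficulty starts.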
« Last Edit: 07/01/2015 19:29:28 by evan_au »
 



Offline David Cooper

Re: Could computers gain some degree of self-awareness one day?
« Reply #9 on: 06/01/2015 17:01:11 »
Quote from: wolfekeeper on 05/01/2015 23:46:29
At the moment, any single computer or even a supercomputer doesn't have enough processing power to run a human intelligence.

How do you know that? A simple laptop with only one processor may be hard pushed to do all the visual processing that we do, but I reckon it would be more than up to the task of thinking at a human level of intelligence if it were running the right software. The special code required to run an AGI system on top of an operating system looks as if it will sit comfortably in just 64K of memory. The data it will need to hold is a lot bulkier, but a gigabyte of RAM can hold a couple of thousand books, which can provide it with an enormous amount of knowledge, particularly if there is no repetition in the data.

Quote
Projections based on Moore's law suggest that we are going to reach the point in about 30 years or so; this is sometimes called the 'singularity'. Beyond that point, computers will be more intelligent than humans.

The "singularity" is about the point where intelligent machines no longer depend on us to feed them with new functionality and ideas, but they simply race away ahead on their own, and we'll never catch up. There are no hardware requirements specified for this and the "30 years" part is just an average of many guesses on the part of people who for the most part are a very long way from understanding what's involved.

Quote
Self awareness itself is not a particularly deep problem, in general consciousness is just remembering and knowing about your own thought patterns, it's essentially a type of feedback loops.

Self awareness is a massive problem, unless it's non-conscious in which case it's trivial. The issue is whether the machine is sentient or not - if it isn't, it can't be conscious of anything and can't be consciously aware of its own existence. A non-sentient machine can (in conjunction with the software running on it) calculate that it is looking at itself or reading its own internal data, but all it's doing is storing and manipulating data that says so. The closest it can get to understanding anything is to determine that data is consistent and doesn't clash with other data. Wherever there is a clash, something has not been understood. We may be the same, but it doesn't feel like that to us - we feel as if we understand things in a quite different way, but there is no known way to replicate that in a machine other than by bolting on fictions about feelings.
 

Offline wolfekeeper

Re: Could computers gain some degree of self-awareness one day?
« Reply #10 on: 06/01/2015 22:39:41 »
To do the calculation of how much hardware you need, you take the number of neurons and factor in the connections between them (each one has hundreds of connections or more), and then allow for the fact that silicon has a clock cycle rate thousands or even millions of times faster than the neurons.

You end up needing a very, very big computer with loads of RAM and lots and lots of interconnection.

Your desktop computer is only a bit smarter than a beetle, best case.
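The scaling argument above can be made concrete with commonly quoted ballpark figures (roughly 86 billion neurons, hundreds to thousands of synapses each, firing at up to ~100 Hz). These are rough textbook estimates, not measurements, and the comparison glosses over what a "synaptic event" is worth in FLOPS:

```python
# Back-of-envelope version of the brain-vs-desktop estimate, using
# commonly quoted ballpark figures (rough estimates, not measurements).
NEURONS = 8.6e10           # human brain: ~86 billion neurons
SYNAPSES_PER_NEURON = 1e3  # hundreds to thousands of connections each
MAX_FIRING_HZ = 1e2        # neurons spike at up to ~100 Hz

brain_ops_per_sec = NEURONS * SYNAPSES_PER_NEURON * MAX_FIRING_HZ  # ~8.6e15

DESKTOP_FLOPS = 1e11       # ~100 GFLOPS: a generous 2015-era desktop CPU

print(f"brain:   ~{brain_ops_per_sec:.1e} synaptic events/s")
print(f"desktop: ~{DESKTOP_FLOPS:.1e} FLOPS")
print(f"ratio:   ~{brain_ops_per_sec / DESKTOP_FLOPS:.0f}x")
```

On these assumptions the brain is four to five orders of magnitude ahead of a desktop, which is the gap the Moore's-law projection is extrapolating across.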
 

Offline alancalverd

Re: Could computers gain some degree of self-awareness one day?
« Reply #11 on: 06/01/2015 23:25:27 »
We have a very narrow interpretation of intelligence. Humans value qualities that are valuable to humans, so we think it is important to be able to recognise written words, but our olfactory system is lucky if it can tell the difference between edible and rotten food. The canine brain is quite different, adept at processing night vision, an extended range of sonic pitch and intensity, and enormous environmental and historic data from a nose that is beyond our capacity to imagine. To a dog, humans are blind, deaf, and survive by luck alone.
 

Offline evan_au

Re: Could computers gain some degree of self-awareness one day?
« Reply #12 on: 07/01/2015 09:25:39 »
Quote from: Musicforawhile
Could [a computer] eventually gain a level of self-awareness that is similar to our own or even superior to ours?
An even more difficult problem must be overcome if computers are ever to work effectively with people, or even other computers: other-awareness.

This means self-awareness, plus awareness of the condition of others (human or machine), and your relationship with these others, and what you can do about it.

The accomplishments of humanity come about in large part from specialisation, cooperation and cultural transmission of useful skills (like hunting, fairness & justice, agriculture, education, commerce, medicine, design, architecture and art). These arise from our ability to form a "theory of mind" which represents others as members of society. As the saying goes "It takes a village to raise a child".

This can only occur through effective communication between individuals (whether human or machine).

For humans, much of that communication is subconscious, and seems to be limited to around 150 individuals (although a legal and cultural framework allows us to deal with larger groups as aggregates). Many of the failures of humanity (like nepotism, oppression and war) come about from our failures to see others as worthy individuals and to communicate effectively.

There has been some progress recently in automated extraction of emotional state from tweets and Facebook posts (I'm sure that this is a topic in which various security agencies are very interested). Perhaps one day, computers may be able to communicate effectively with more than 150 individuals?
« Last Edit: 07/01/2015 19:45:27 by evan_au »
 



Offline wolfekeeper

Re: Could computers gain some degree of self-awareness one day?
« Reply #13 on: 07/01/2015 17:27:08 »
That's right. Humans have a whole bunch of built-in programming: the ability to understand what we see, to learn to talk, some grasp of grammar, to know what animals and other humans are, to react to sounds, and to have some concept of location.

All these things and more seem to be more or less built in genetically, or at least the capacity to learn them rapidly is.
 

Offline David Cooper

Re: Could computers gain some degree of self-awareness one day?
« Reply #14 on: 07/01/2015 18:47:10 »
Quote from: wolfekeeper on 06/01/2015 22:39:41
To do the calculation of how much hardware you need, you take the number of neurons and factor in the connections between them (each one has hundreds of connections or more), and then allow for the fact that silicon has a clock cycle rate thousands or even millions of times faster than the neurons.

You end up needing a very, very big computer with loads of RAM and lots and lots of interconnection.

Your desk top computer is only a bit smarter than a beetle, best case.

Neural computers need a lot of overcapacity because they waste most of their neurons - something that can be designed to do a simple calculation with a handful of logic gates takes hundreds of neurons, and even then it will occasionally make errors. A carefully programmed computer will not waste any of its capacity and will not make mistakes (unless there's a hardware failure). There is no lack of interconnectedness, as every part of memory can be accessed from any processor. The machine on your desk is only stupid because it is not running intelligent software (assuming it isn't an ancient one); in hardware terms it is already up to the task of bettering human-level intelligence.
 

Offline wolfekeeper

Re: Could computers gain some degree of self-awareness one day?
« Reply #15 on: 07/01/2015 18:56:26 »
Nah.
 

Offline David Cooper

Re: Could computers gain some degree of self-awareness one day?
« Reply #16 on: 08/01/2015 17:38:53 »
Well, you'll soon eat your words.
 



Offline evan_au

Re: Could computers gain some degree of self-awareness one day?
« Reply #17 on: 08/01/2015 17:59:26 »
Quote from: evan_au
Perhaps one day, computers may be able to communicate effectively with more than 150 individuals?
Isn't this the goal of Google, Amazon and every other commercial interest on the web?
To interpret our individual goals, aspirations, and interests, and to offer relevant content (which preferably brings them some profit).
 

Offline wolfekeeper

Re: Could computers gain some degree of self-awareness one day?
« Reply #18 on: 08/01/2015 18:42:47 »
Quote from: David Cooper on 08/01/2015 17:38:53
Well, you'll soon eat your word.
To oversimplify: it takes far less computing power to run a program than it does to learn or write a new one.

Writing a new program effectively involves doing a massive search operation to work out the interrelationships between things.

Human brains are massively parallel and are used to look for these relationships; they effectively write their own programs.
 

Offline David Cooper

Re: Could computers gain some degree of self-awareness one day?
« Reply #19 on: 09/01/2015 18:26:42 »
Quote from: wolfekeeper on 08/01/2015 18:42:47
To oversimplify: it takes far less computing power to run a program than it does to learn or write a new one.

Writing a new program effectively involves doing a massive search operation to work out the interrelationships between things.

Human brains are massively parallel and are used to look for these relationships; they effectively write their own programs.

Computers are really good at massive search operations. Human brains are incredibly slow and need to be massively parallel to make up for that deficiency. The fact that we can write our own programs simply comes down to evolution having programmed us to be universal problem-solving machines, and AGI systems will soon have the same ability.

If you can work out what it is you want to do, you can then apply simple algorithms to find ways of doing it (if the thing you want to do is possible), and the solution you find can then be distilled down into a compact program that repeats the same task more efficiently in future. Different people have different sets of algorithms that they apply when trying to solve problems, which makes some better than others at some tasks, so the trick with AGI is to provide it with as wide a range of these algorithms as possible, so that it can approach all tasks the way the best human thinkers do.

The algorithms themselves are simple; the difficulty is in finding ways for the system to hold them, and in designing a framework in which they can be applied to manipulate ideas. For the most part, what programmers have done up to now is write unintelligent code to solve specific tasks, but the road to AGI means working on a different level: writing universal problem-solving algorithms which the machine can then apply to an infinite range of specific tasks, without a human programmer having to do the top-level thinking every time. We really are just a few steps away from making this happen.
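The "search for a solution, then distil it into a compact program" idea above can be illustrated with a toy breadth-first search over two arithmetic operations, where the plan that is found is then wrapped up as a reusable program. (A toy illustration of the general pattern, not a claim about any real AGI system.)

```python
from collections import deque

# Toy "search, then distil": breadth-first search finds a sequence of
# operations turning `start` into `goal`; the plan is then packaged as a
# compact, reusable program that repeats the task without searching again.
OPS = {"double": lambda x: x * 2, "add3": lambda x: x + 3}

def search_plan(start: int, goal: int, max_steps: int = 10):
    """Breadth-first search over OPS; returns a shortest list of op names."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        value, plan = queue.popleft()
        if value == goal:
            return plan
        if len(plan) < max_steps:
            for name, op in OPS.items():
                nxt = op(value)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, plan + [name]))
    return None  # goal unreachable within max_steps

def distil(plan):
    """Freeze a found plan into a plain function: no more searching."""
    def program(x):
        for name in plan:
            x = OPS[name](x)
        return x
    return program

plan = search_plan(1, 11)   # ['add3', 'double', 'add3']: 1 -> 4 -> 8 -> 11
fast = distil(plan)
print(fast(1))              # 11
```

Real problem solving differs in that the search space is astronomically larger and the operators must themselves be learned, but the search-then-distil shape is the same.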
 





©The Naked Scientists® 2000–2017 | The Naked Scientists® and Naked Science® are registered trademarks created by Dr Chris Smith. Information presented on this website is the opinion of the individual contributors and does not reflect the general views of the administrators, editors, moderators, sponsors, Cambridge University or the public at large.