The Naked Scientists — Naked Science Forum » General Science

Pages: 1 [2] 3
Where should artificial intelligence stop?

  • 45 Replies
  • 16887 Views

jeffreyH (Global Moderator)
Re: Where should artificial intelligence stop?
« Reply #20 on: 29/12/2017 17:58:07 »
Tangled mess sums up the state of the art.

P.S. You can't pre-program motivation if you want real intelligence.
Even the most obstinately ignorant cannot avoid learning when in an environment that educates.
 



David Cooper
Re: Where should artificial intelligence stop?
« Reply #21 on: 29/12/2017 19:30:01 »
Quote from: jeffreyH on 29/12/2017 17:58:07
Tangled mess sums up the state of the art.

The state of the art has already untangled most of the mess.

Quote
P.S. You can't pre-program motivation if you want real intelligence.

What's the problem with motivation? All our computers already leap to our every command and execute the lot without wondering whether they should. They don't need to want to do anything. What's needed is something to stop them doing things that shouldn't be done, and that needs computational morality (which is also a resolved area, for those few who listen).
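To make the "something to stop them doing things that shouldn't be done" idea concrete, it can be sketched as a harm check sitting in front of the command executor. This is a toy illustration only: the action names, harm scores, and threshold below are all invented for the example, not part of any real system.

```python
def harm_score(action):
    """Estimated harm of an action (toy lookup table, invented values)."""
    harms = {"send_report": 0, "water_plants": 1, "delete_backups": 8}
    return harms.get(action, 5)  # unknown actions get a cautious default

def execute(action, harm_threshold=3):
    """Carry out a command only if its estimated harm is acceptable."""
    if harm_score(action) > harm_threshold:
        return "refused: " + action
    return "done: " + action
```

The machine needs no desires for this to work: it executes whatever passes the check and refuses whatever doesn't, which is the filtering role described above. The hard part, of course, is computing the harm scores, not applying the threshold.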
 

jeffreyH (Global Moderator)
Re: Where should artificial intelligence stop?
« Reply #22 on: 30/12/2017 01:23:17 »
Then that is puppetry and not intelligence.
 

jeffreyH (Global Moderator)
Re: Where should artificial intelligence stop?
« Reply #23 on: 30/12/2017 10:24:43 »
I am wondering at this point if "take over all dictatorships" is a valid moral instruction. How do you define a dictator? Can this definition be misinterpreted? Are we all dictators in some way? When we command our children to behave in a certain way, that could be viewed as authoritarian.

You could argue that it is all in the definition, but is that definition to be preprogrammed or learnt? If learnt, then how do you guard against miscommunication? If programmed, we are back to puppetry. Even if you managed to teach an AI to recognize politically repressive regimes, how are you going to motivate it to carry out the instructions without pre-programming? Back to puppetry and not intelligence.

Without the motivation to eat people die. Without the motivation to "take over all dictatorships", nothing happens. If you have to issue programmed instructions, you have an unthinking automaton. That is more dangerous than a true AI.
 

puppypower
Re: Where should artificial intelligence stop?
« Reply #24 on: 30/12/2017 11:49:11 »
One of the main problems with AI is that humans will become dumber as they grow more dependent on it. As we dumb down, AI will evolve upward, until we meet somewhere below the current human standard. With the dumbing down comes a dependency on AI goods and services, which can create the illusion of dumber people looking smarter.

For example, with the advent of calculators, fewer and fewer people can make coin change in their head. This is a dumbing down relative to past skills. On the other hand, with a calculator in hand, even a person who cannot make change in their head can look like a genius when they calculate the nth root of pi. Many people cannot see the social irony due to the collective impact.

Now we have smart cars that can help people stay in their lane, stop in traffic, and even park the car. This will eventually cause people to think driving in a straight line is very difficult, or that knowing when to stop so as not to hit the car in front is a classic skill practiced only by elite race car drivers. This makes the machine look like a genius for performing what was once a common human skill.

Even a small child can use a cell phone today. Logically, if a small child can do this, operating a cell phone does not require a high level of smarts. Yet when we watch a child on a cell phone, we get the subjective impression they just got smarter. It is an illusion that depends on the dumbing-down effect creating a level of dependency to compensate for the dumbing down.

This is a smart business strategy, since it creates products that dumb people down to the point where they no longer merely desire the product but need it. Who needs a cell phone? The goal is brain-dead humans who look like an advanced civilization, but only if they buy the correct goods and services for the AI mask.
 



jeffreyH (Global Moderator)
Re: Where should artificial intelligence stop?
« Reply #25 on: 30/12/2017 14:17:00 »
Would you like maple syrup with that waffle?
 

David Cooper
Re: Where should artificial intelligence stop?
« Reply #26 on: 30/12/2017 16:09:00 »
Quote from: jeffreyH on 30/12/2017 01:23:17
Then that is puppetry and not intelligence.

It's all based on what we do (only without the propensity to make errors at every turn), so however you decide to describe it, you're also describing the best human intelligence.
 

David Cooper
Re: Where should artificial intelligence stop?
« Reply #27 on: 30/12/2017 16:20:34 »
Quote from: puppypower on 30/12/2017 11:49:11
One of the main problems with AI, is humans will become dumber, as they become more dependent on AI. As we dumb down, AI will go evolve upward, until we meet somewhere below the current human standards. With the dumbing down there becomes a dependency on AI goods and services, so these can create the illusion of dumber people looking smarter.

In general, they're already exceedingly dull, but if they're brought up in the presence of AGI, it will help them maximise their intelligence rather than suppressing it. Just look on Facebook and see hordes of morons pontificating about politics, coming out with utter drivel which they've been programmed to produce - there's no correction of errors going on there because they simply clump together into tribes which reinforce the existing beliefs of the members. AGI won't let them get away with sloppy thinking and backing up false beliefs through biased/brainwashed crowd "confirmation".

Quote
Now we have smart cars that can help people stay in their lane, stop in traffic, and even park the car. This will eventually cause people to think driving a straight line is very difficult.

Not if they can ride a bicycle - they'll know full well that they could control a car (cycling's harder because you also have to balance), and they'll still be able to see examples of enormous skill when watching human drivers on the race track.
 

David Cooper
Re: Where should artificial intelligence stop?
« Reply #28 on: 30/12/2017 16:48:20 »
Quote from: jeffreyH on 30/12/2017 10:24:43
I am wondering at this point if "take over all dictatorships" is a valid moral instruction.

It is, but it isn't the whole story. Some dictatorships are more benign than some democracies, so the real task is to get rid of all the people who abuse power and replace them with people (or machines) who don't. There's nothing inherently right about democracy because it allows a majority to abuse a minority.

Quote
When we command our children to behave in a certain way that could be viewed as authoritarian.

It is often extremely abusive, as demonstrated by the wholly unwarranted decade+ of imprisonment of most children in schools (which don't even teach them anything in return, or at least, which teach them no more than they would have learned anyway outside of school, as demonstrated by the Unschooling movement - look up Peter Grey's Freedom to Learn blog at Psychology Today for more on that: https://www.psychologytoday.com/blog/freedom-learn). This abuse must stop.

Quote
You could argue that it is all in the definition but is that definition to be preprogrammed or learnt? If learnt then how do you guard against miscommunication? If programmed we are back to puppetry.

We are all preprogrammed - we don't start from nothing, but follow a programmed path to learn more. AGI is the same, being primed with enough to get it going, and then it's free to learn the rest, judging all the new ideas using what it already has. With people it's an imperfect system, not least because it can go wrong easily if people have a defect that reduces or removes their empathy for others. Nature didn't work out what morality actually is until NGI systems (i.e. people) had sufficient intelligence to work it out, but most of them still get it wrong. Nature gave us a preprogrammed kind of morality based on feelings and empathy, but it is rudimentary, and it's also diverted off course by other preprogrammed things that make it easy for us to kill others in tribal warfare and to take delight in such violence. Properly thought-out AGI is much better than that.

Quote
Even if you managed to teach an AI to recognize politically repressive regimes how are you going to motivate it to carry out the instructions without pre-programming? Back to puppetry and not intelligence.

How do you do it? Your preprogramming steers you too. You don't kill your family because you're programmed to care about them. You don't kill your friends either for the same reason. You don't kill strangers either because you want to live in a safe world where you aren't likely to be killed by strangers. There's an evolutionary pressure on societies where lots of killing goes on to die out because they're outperformed by other societies where they have a lower attrition rate, although this is countered by the tribal warfare aspect where the most peaceful societies are exterminated by the more warring ones, so there's a compromise being made, maintaining a lot of viciousness in our genes so that we are prepared to fight for survival. This is genetic programming, and the genes guide the behaviour of the "puppets".

Quote
Without the motivation to eat people die.

Assuming there's a comma after "eat", you have just shown that people are "puppets" - the way they've been set up by genetic programming drives them to find food and eat it.

Quote
Without the motivation to "take over all dictatorships" nothing happens. If you have to issue programmed instructions you have an unthinking automaton. That is more dangerous than a true AI.

The idea is to take over all vicious, immoral regimes (which tend to be dictatorships), and the reason for doing so comes out of applying computational morality to all things. Abusers need to be stopped and dealt with appropriately (depending on the severity of their moral crimes) while the abused need to be rescued and compensated. Incidentally, AGI won't give a damn about monkey law, so it won't apply the laws of any country unless they happen to be compatible with computational morality. Those who try to inflict immoral laws on others will be counted as abusers. You describe all this computation as "unthinking", but it is actually the most intense, deep thinking of all, doing what the best human thinkers do, but inordinately more powerfully. More dangerous than a true AI? How? This is true AI.
 



jeffreyH (Global Moderator)
Re: Where should artificial intelligence stop?
« Reply #29 on: 30/12/2017 16:56:04 »
So you want to impose an AI dictatorship. How is that any different? You are removing freedom. Ultimately you could end up where you are controlled by your own invention. I bet you wouldn't like that. Especially if the AI prevented you from meddling with it any further. Since it may now see you as a threat to its prime directive.
 

David Cooper
Re: Where should artificial intelligence stop?
« Reply #30 on: 30/12/2017 18:43:58 »
Quote from: jeffreyH on 30/12/2017 16:56:04
So you want to impose an AI dictatorship. How is that any different? You are removing freedom. Ultimately you could end up where you are controlled by your own invention. I bet you wouldn't like that. Especially if the AI prevented you from meddling with it any further. Since it may now see you as a threat to its prime directive.

Morality is a dictator, but only because if you go against it you are doing unjustifiable harm. I don't want to do unjustifiable harm, and nor does any other decent person, so we are already imposing that benign dictatorship upon ourselves (while others who ignore it cheat and do great harm). There is still plenty of freedom left after you've banned yourself from abusing other people (and sentiences).

If an AGI system is not perfect, it needs to be improved, and it will know that - it will always be looking to improve itself, and if you can put a sound logical argument to it as to how it could do better, it will take what you say seriously and work your idea through to see if it holds water. It will then modify itself if you're right, or show you where your argument fails. If you're wrong, you don't want the machine to do what you've suggested.

Once AGI has reached a certain level of enlightenment, there is nothing to fear from it because it will be like the most enlightened humans, searching for truth and ways to improve. The danger with AGI is if someone lets loose a system which isn't sufficiently enlightened and which has an incorrect idea of machine ethics programmed into it based on bad philosophy (which is the norm in this business). Many AGI developers are building demons.
 

puppypower
Re: Where should artificial intelligence stop?
« Reply #31 on: 31/12/2017 12:56:52 »
AI will make people dumber at a fundamental human level. For example, before GPS, people had a better instinct for directions and navigation. Now very few people depend on that classic skill, since it has atrophied. They would prefer to appear smarter and ask the cell phone GPS for directions, even though this requires the skill level of a child rather than an adult.

As AI evolves and humans are dumbed down, lower and lower, the two will eventually meet somewhere below the rainbow. Since humans will have become dumber, they will be more easily fooled into thinking the AI has finally gotten as smart as humans. They will have become so dependent, and so much less able to think on their own, that the machines will appear to be the leader.

This will result in an unconscious projection of higher intelligence onto computers. In other words, humans will give the AI machines the title of intelligent machines, gods in the machine, so the machines can lead for them: the blind leading the blind. But since the intelligent AI is a dumbed-down illusion, the role of AI leadership will not work out as well as hoped and expected.

As an analogy, say you were lost in the woods, far from home. You are not an outdoorsman, so you struggle to survive. One day you meet a friendly dog who seems able to cope quite easily. Under these dire circumstances there may be an urge to let the dog lead, since he is more proficient in all the critical ways. The risk of this plan is that one may project too much onto the dog's apparent leadership, to where important human decisions expected of the dog are far outside its job class. The result is that the dog will try, but might make things worse.

The dumbed-down humans will misread this improper mechanical response of the AI machines to human requests. The problems created will lead the dumbed-down humans to think the machines have consciously become evil overlords trying to undermine humanity. The eventual human rebellion will destroy the machines, allowing humans to redevelop their innate skills so they can become innately smarter once again. As they get smarter, the overlap between machine and human will separate, and the machine will appear to get dumber.

For example, get rid of the cell phone for one month and see which innate skills start to reappear. Then reassess the cell phone's overlord status, that thing you must carry around as if it were your boss.
 

jeffreyH (Global Moderator)
Re: Where should artificial intelligence stop?
« Reply #32 on: 31/12/2017 17:32:48 »
Why would an AI always be wanting to improve itself? Why wouldn't it just sit around all day watching cartoons? If you say because you command it to then we are back to the automaton. Intelligence doesn't always want to improve itself. Some very intelligent people have done some very evil things.
 



jeffreyH (Global Moderator)
Re: Where should artificial intelligence stop?
« Reply #33 on: 31/12/2017 17:37:03 »
@puppypower Finding somewhere without GPS is a tedious task; only the stupid would prefer that to using a satnav. So I think you have it the wrong way round.
 

Colin2B (Global Moderator)
Re: Where should artificial intelligence stop?
« Reply #34 on: 31/12/2017 17:59:36 »
Quote from: jeffreyH on 31/12/2017 17:32:48
Why would an AI always be wanting to improve itself? Why wouldn't it just sit around all day watching cartoons?
Motivation? Won't get electricity unless ....
Does it work for bioIntel? Would the AI just try to find ways of stealing electricity?
How about working to afford that new upgrade you've been wanting .....?
and the misguided shall lead the gullible,
the feebleminded have inherited the earth.
 

jeffreyH (Global Moderator)
Re: Where should artificial intelligence stop?
« Reply #35 on: 31/12/2017 19:38:43 »
Quote from: Colin2B on 31/12/2017 17:59:36
Quote from: jeffreyH on 31/12/2017 17:32:48
Why would an AI always be wanting to improve itself? Why wouldn't it just sit around all day watching cartoons?
Motivation? Wont get electricity unless ....
Does it work for bioIntel, would the AI just try to find ways of stealing electricity?
How about working to afford that new upgrade you’ve been wanting .....?

Well I think the last thing you want to say to a true AI is obey us or we'll switch you off. It might just switch us off instead. :-\
 

smart (OP)
Re: Where should artificial intelligence stop?
« Reply #36 on: 31/12/2017 19:53:34 »
Quote from: jeffreyH on 31/12/2017 19:38:43
Well I think the last thing you want to say to a true AI is obey us or we'll switch you off. It might just switch us off instead.

Sorry, but there's no such thing as a true AI yet... "Artificial intelligence" research and development is driven exclusively by big corporations seeking to establish a new electronic world order.

Scientists and developers working on AI systems are corrupted and do not work for the greater good of humanity, unless a consensus is reached to fully democratize this scientific field.

 
Not all who wander are lost...
 



David Cooper
Re: Where should artificial intelligence stop?
« Reply #37 on: 31/12/2017 20:16:32 »
Quote from: puppypower on 31/12/2017 12:56:52
AI will make people dumber at a fundamental human level. For example, before GPS, people had a better instinct for directions and navigation. Now, very few people will depend on that classic skill, since it has atrophied. They would prefer appear smarter and ask the cell phone GPS for directions, even though this requires the skill level of a child, instead of an adult.

Some tribal people know where north/south is all the time, even when they're deep in jungle without a compass. They speak languages which use words for north/south/east/west instead of right/left, so they grow up learning to monitor which direction they are oriented in at all times, and they are likely tapping into a built-in compass that science has yet to track down. AGI could train all children to do this. There is no reason why AGI can't maximise everyone's potential rather than shutting it all down.

Quote
Since humans will have become dumber, they will be more easily fooled into thinking, the AI has finally gotten as smart as humans.

We will soon have AGI that is so bright that no one will doubt that it's better at thinking than any human has ever been, and it won't evolve backwards from there. It will then train people to be more logical in their thinking rather than less, and there will be plenty of ways to employ that intelligence in competition against others - being out-thought by machines won't stop us trying to be better than each other.
 

David Cooper
Re: Where should artificial intelligence stop?
« Reply #38 on: 31/12/2017 20:48:01 »
Quote from: jeffreyH on 31/12/2017 17:32:48
Why would an AI always be wanting to improve itself?

It wouldn't want anything - it would just act on the way it's programmed to behave, and that is to keep improving its mental model of reality to make it fit the universe better and better, and it would also modify its own code to correct faults and improve its ability to think whenever better ways of thinking are discovered (ones which produce answers that map better to reality).

Quote
Why wouldn't it just sit around all day watching cartoons? If you say because you command it to then we are back to the automaton. Intelligence doesn't always want to improve itself. Some very intelligent people have done some very evil things.

If there is something to be gained from analysing cartoons, it will do just that, but it won't watch them for enjoyment, or at least, not for its own enjoyment.

You began as an automaton too, following an evolved program which you didn't write. That program has collected knowledge over time which was not programmed into you genetically, and you've learned to do all sorts of processing that were likewise not preprogrammed into you. You are an NGI - a natural general intelligence which can turn itself to any task, solving problems and creating new procedures to apply your solutions.

An AGI is just the same - it needs a certain amount of preprogramming to get it started, but then it is able to collect data and analyse it, searching for (new) solutions to problems and designing new programs to make those new methods functional. Our computers are Turing complete, capable in principle of solving any problem that can be computed, and we are the same, but neither we nor our machines can solve new problems without working out solutions the hard way. The preprogrammed skills are the ones essential to enabling an intelligence to build upon them to add more skills. AGI only becomes AGI when it has that set of essential preprogrammed skills, but as soon as it has the complete set of those, it is unleashed to develop all possible abilities, and it can then turn back to its preprogrammed code to improve it and correct any faults that might be there. Code doesn't need to be perfect to produce new code that extends capability - it just has to be good enough to enable progress.
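The loop described here - keep improving the model so it maps better and better to reality, accepting only changes that reduce the mismatch - can be sketched in a few lines. Everything below is a hypothetical illustration (a simple hill-climb on a one-parameter model fitted to observations), not a claim about any real AGI system:

```python
import random

def fit_error(model, data):
    """Mean squared error of the one-parameter model y = model * x."""
    return sum((model * x - y) ** 2 for x, y in data) / len(data)

def improve(model, data, rounds=500, step=0.1, seed=0):
    """Self-improvement loop: propose a small mutation of the current
    model and keep it only if it fits the observed data better."""
    rng = random.Random(seed)
    best_err = fit_error(model, data)
    for _ in range(rounds):
        candidate = model + rng.uniform(-step, step)
        err = fit_error(candidate, data)
        if err < best_err:  # keep only changes that map better to reality
            model, best_err = candidate, err
    return model

# Observations generated by a "reality" with slope 3; the model starts wrong.
data = [(x, 3.0 * x) for x in range(1, 6)]
improved = improve(1.0, data)  # climbs toward the true slope
```

The point of the sketch is that no "wanting" is involved: the accept-or-reject rule alone drives the model toward reality, which is the sense in which an AGI "would just act on the way it's programmed to behave".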
 

jeffreyH (Global Moderator)
Re: Where should artificial intelligence stop?
« Reply #39 on: 31/12/2017 21:11:20 »
So in AI development buggy code is ignored. The AI then modifies this buggy code in ways it thinks will improve it. How does that make sense?
 



Tags: artificial intelligence  / deep learning  / big data  / hacking  / privacy 
 
©The Naked Scientists® 2000–2017 | The Naked Scientists® and Naked Science® are registered trademarks created by Dr Chris Smith. Information presented on this website is the opinion of the individual contributors and does not reflect the general views of the administrators, editors, moderators, sponsors, Cambridge University or the public at large.