The Naked Scientists
Artificial intelligence versus real intelligence

  • 369 Replies
  • 74012 Views

Le Repteux
Re: Artificial intelligence versus real intelligence
« Reply #220 on: 15/06/2018 16:37:07 »
I was only pointing to the possibility that intelligence could be a natural outcome of any natural evolution. If so, then artificial intelligence is also natural, and it thus cannot predict its own evolution even if it tries to control it. Control would then be only an illusion created by the way the mind works. The mind would then only be able to accelerate its own evolution, not control it.

Quote from: Thebox on 15/06/2018 15:57:35
You should make a movie with your thoughts.
I'm actually making a reality show out of them, and you're in!  :0)
« Last Edit: 15/06/2018 16:41:13 by Le Repteux »
guest39538

Re: Artificial intelligence versus real intelligence
« Reply #221 on: 15/06/2018 16:40:42 »
Quote from: Le Repteux on 15/06/2018 16:37:07
I was only pointing to the possibility that intelligence could be a natural outcome of any natural evolution. If so, then artificial intelligence is also natural, and it thus cannot predict its own evolution even if it tries to control it. Control would then be only an illusion created by the way the mind works. The mind would then only be able to accelerate its own evolution, not control it.
I understand you, but what if the AI unit were so smart it could ask its creator for upgrades? Effectively creating the future as the AI deems fit?
Wouldn't the creator have to agree with the AI, because the AI was created to try to evolve past the level of the creator?
If the AI has restrictions, then the creator fears the AI will outdo the creator, and that would not conclude the creator's test.

Cutting an experiment short would show that the creator had little faith in his own ability to create such a perfect AI.
guest39538

Re: Artificial intelligence versus real intelligence
« Reply #222 on: 15/06/2018 16:52:00 »
Quote from: Le Repteux on 15/06/2018 16:37:07
I'm actually making a reality show out of them, and you're in!  :0)
Cool, I think. I hope it isn't the delusions of people online, lol. Do remember I should be a candidate for the Turner Prize for art. What I do is art at its best. Ahah, more delusions, hey Jeremy.

Added: Come on, now I am curious, what is your documentary about?


guest39538

Re: Artificial intelligence versus real intelligence
« Reply #223 on: 15/06/2018 17:38:31 »
Disclaimer: under the Data Protection Act, any personal information and details about me must be accurate and true.

In any breach of this, I have the right to seek legal advice and bring a formal lawsuit against the person or persons providing false information.

To be clear.....
Le Repteux
Re: Artificial intelligence versus real intelligence
« Reply #224 on: 15/06/2018 17:58:17 »
Quote from: Thebox on 15/06/2018 16:40:42
I understand you, but what if the AI unit were so smart it could ask its creator for upgrades? Effectively creating the future as the AI deems fit?
David's AGI wouldn't have to wait for upgrades from his creators, he would upgrade himself all by himself.

Quote from: Thebox on 15/06/2018 16:40:42
Come on, now I am curious, what is your documentary about?
It's not really a documentary, it's a public discussion actually taking place and available for free at https://www.thenakedscientists.com/forum/index.php?topic=73258.200 :0)
« Last Edit: 15/06/2018 18:00:29 by Le Repteux »
guest39538

Re: Artificial intelligence versus real intelligence
« Reply #225 on: 15/06/2018 18:00:36 »
Quote from: Le Repteux on 15/06/2018 17:58:17
It's not really a documentary, it's a public discussion available for free at https://www.thenakedscientists.com/forum/index.php?topic=73258.200 :0)

Lol, like a library, hey.
Le Repteux
Re: Artificial intelligence versus real intelligence
« Reply #226 on: 15/06/2018 19:09:02 »
Quote from: David Cooper on 14/06/2018 22:36:28
If a fascist was making lots of fascist friends, that might not be good for him (or for them, or for anyone else), so there may be an argument for blocking that,
Making friends is a response to our instinctive selfish behavior, so we can't feel bad about that, whatever kind of friends we make. What might make us feel bad then depends on what the AGI will do, which depends on what he thinks the whole group will feel, not only some individuals. That's why I was saying that while caring about the welfare of others instead of caring for himself first, he would only care for the survival of the species as a whole, not for individuals or smaller groups.

Quote from: David Cooper on 14/06/2018 22:36:28
AGI will work along with people's instincts as much as morally is acceptable
That's what religions thought they were doing too when trying to control our instincts, and history shows that they were only working for the survival of their own group. In the case of your AGI, his own group would be the people who obeyed him, and the others would be persecuted. After a while, history would probably show that the AGI was only working for the welfare of his own group, and that his reign had produced nothing but zombies.

Quote from: David Cooper on 14/06/2018 22:36:28
do we end up taking drugs to be happy?
With an AGI whose morality is based on what we feel, that might happen.

Quote from: David Cooper on 14/06/2018 22:36:28
Is getting excited about new experiences just as pointless?
New experiences are fine when we are young, but they are replaced by new ideas as we grow up, and it is certainly pointless to develop any idea knowing that the AGI already has better ones. As I said, the only way you can find it interesting is by imagining that the AGI is yours.

Quote from: David Cooper on 14/06/2018 22:36:28
Who knows
That's interesting, because it means that your AGI wouldn't know either.

Quote from: David Cooper on 14/06/2018 22:36:28
There is only one correct morality, and whichever one that is, ..... that's the one I want to put in it.
You can try mine, it has no copyright! :0)

Quote from: David Cooper on 14/06/2018 22:36:28
the best advice simply can't be ignored
Didn't you say that relativists kind of ignored your advice? :0)
guest39538

Re: Artificial intelligence versus real intelligence
« Reply #227 on: 15/06/2018 19:19:07 »
Quote from: Le Repteux on 15/06/2018 19:09:02
Making friends is a response to our instinctive selfish behavior, so we can't feel bad about that, whatever kind of friends we make. What might make us feel bad then depends on what the AGI will do, which depends on what he thinks the whole group will feel, not only some individuals. That's why I was saying that while caring about the welfare of others instead of caring for himself first, he would only care for the survival of the species as a whole, not for individuals or smaller groups.

Ostensibly, the AI would know to protect the minority equally with the majority unless the AI had good reason not to, such as really bad apples. The AI would reason with advanced logic that this is the best option; if still opposed, segregation would be on the agenda until they listened to logical reason.
If he was as smart as programmed, he would know to consider both sides of the fence and be totally objective.
For example, science likes to remain in peace and quiet; imagine if people were to interfere. The AI would keep people away from science so science can continue to help the world.
David Cooper
Re: Artificial intelligence versus real intelligence
« Reply #228 on: 15/06/2018 19:47:37 »
Quote from: Thebox on 14/06/2018 23:02:00
What if the unit was so smart that it knew how to manipulate the stock market?

Over a period of time, the unit would not only rule the world but would also have most of the world's finances.

Nothing wrong with that - it would share out the spoils fairly. However, AGI will eliminate the stock market by creating perfect companies as a part of world government, wiping out all the opposition and removing the ability of people to earn money eternally out of mere ownership where the rewards aren't justified by the work done. I already have plans to wipe out all the banks by using AGI.
guest39538

Re: Artificial intelligence versus real intelligence
« Reply #229 on: 15/06/2018 19:57:58 »
Quote from: David Cooper on 15/06/2018 19:47:37
Quote from: Thebox on 14/06/2018 23:02:00
What if the unit was so smart that it knew how to manipulate the stock market?

Over a period of time, the unit would not only rule the world but would also have most of the world's finances.

Nothing wrong with that - it would share out the spoils fairly. However, AGI will eliminate the stock market by creating perfect companies as a part of world government, wiping out all the opposition and removing the ability of people to earn money eternally out of mere ownership where the rewards aren't justified by the work done. I already have plans to wipe out all the banks by using AGI.
Indeed, the AI would have already equated t = t for all, with even distribution and equality of life being a prime directive. In a game of Monopoly joined mid-game on a full board, most newcomers have lost before they begin; the jail ends up full because the players could not pay the high rates of rent on the map, competing with immediate handicaps.
That being said, he may introduce some sort of bonus incentive for achievers, such as a ''holiday'' of pampering maybe. Service is a reward open to anyone from any walk of life, and it would be a good thing to barter with for food etc. that others may grow.



David Cooper
Re: Artificial intelligence versus real intelligence
« Reply #230 on: 15/06/2018 20:00:19 »
Quote from: Le Repteux on 15/06/2018 19:09:02
What might make us feel bad then depends on what the AGI will do, which depends on what he thinks the whole group will feel, not only some individuals. That's why I was saying that while caring about the welfare of others instead of caring for himself first, he would only care for the survival of the specie as a whole, not for individuals or smaller groups.

Individuals matter most of all, and the most moral ones matter more than the less moral ones. The survival of the species isn't necessarily important - if all the individuals are vile, the whole species could be allowed to die out without it being any loss. AGI's job is to protect the good first, and it isn't going to care about groups over and above individuals.

Quote
That's what religions thought they were doing too when trying to control our instincts, and history shows that they were only working for the survival of their own group. In the case of your AGI, his own group would be the people that would obey him, and the others would be prosecuted. After a while, history would probably show that the AGI was only working for the welfare of his own group, and that his reign would have produced nothing but zombies.

AGI will be working for people based on morality (harm management). Religions work on a similar basis, but with warped moralities caused by them being designed by imperfect philosophers, though to be fair to them, they didn't have machines to enable perfect deep thinking without bias.

Quote
That's interesting, because it means that your AGI wouldn't know either.

Predicting the future will always be hard, and harder the further ahead you're trying to see.

Quote
Didn't you say that relativists kind of ignored your advice? :0)

It wasn't advice, but it also isn't something that leads to money for anyone, so they don't care. It's quite different when decisions can lead to riches or poverty.
Le Repteux
Re: Artificial intelligence versus real intelligence
« Reply #231 on: 15/06/2018 20:00:41 »
Quote from: Thebox on 15/06/2018 19:19:07
the Ai would know to protect the minority equal to the majority unless the Ai had good reason not too, such as really bad apples.
I was comparing the AGI's morality to the religious one, and I found that they were the same: religions were not protecting people from other religions, only from theirs, so how could an AGI work differently? Of course, he could avoid killing people, since that's where civilization seems to lead, but he couldn't avoid applying his own law, which is what any group that can act freely does. That kind of law only serves to protect a specific group, not the whole universe. The universe is protected by universal laws, and I think that selfishness is one of them. Selfishness is a result of our resistance to change, and even particles resist change: in their case, we call it resistance to acceleration, but it's exactly the same principle. If religions had recognized that, they might not have killed people, and if we could recognize it too, not only would we stop killing people just because they killed some, but we might also stop making wars, which is one of the goals of David's AGI.


guest39538

Re: Artificial intelligence versus real intelligence
« Reply #232 on: 15/06/2018 20:08:32 »
Quote from: Le Repteux on 15/06/2018 20:00:41
I was comparing the AGI's morality to the religious one, and I found that they were the same: religions were not protecting people from other religions, only from theirs, so how could an AGI work differently?
An AI would view all the information and make God a reality by using his inner subprogram routine of "science is everything and everything is science". From this he will establish that space itself is an immortal "now" continuum in which space-time has a period of existence. He would establish something from nothing by his subprogramming, thus proving that the superseding God of space supersedes the information God. The AI would convey his message by information to the information God. The information God would scratch their heads wondering what in the Universe they had created with such an AI. The AI would respond with his name: I am.
guest39538

Re: Artificial intelligence versus real intelligence
« Reply #233 on: 15/06/2018 22:48:41 »
Of course, it would not be good drama without a cheeky video or two, lol

Then, after being a trembling wreck, return with passion ;)

Le Repteux
Re: Artificial intelligence versus real intelligence
« Reply #234 on: 17/06/2018 15:02:50 »
Quote from: David Cooper on 15/06/2018 20:00:19
AGI will be working for people based on morality (harm management). Religions work on a similar basis, but with warped moralities caused by them being designed by imperfect philosophers, though to be fair to them, they didn't have machines to enable perfect deep thinking without bias.
Religions used magic to solve the contradictions, but they were nevertheless contradicting themselves all the time with a morality based on determining good and bad themselves. Have you tried to find any contradiction while using selfishness as a morality? I did, and I couldn't find any.

Quote from: David Cooper on 15/06/2018 20:00:19
AGI's job is to protect the good first
There is a difference between protecting the good people and managing the harm, and I just noticed that you were switching from one to the other as if there wasn't. Religions were not managing the harm; they were only forcing us to do the right things, because they really thought that their morality was better than that of other groups. They were even forcing us to harm ourselves in order to please god, which is the inverse of managing the harm. I reread your Part 1 on Morality at LessWrong, and I realized that the way your AGI would have to manage the harm was god's way. If I understand correctly, religions were leaving the management to god, pretending that he would be fair since he would know everything, even what we had in mind. But god is a human creation, and humans are not fair; they are selfish, they always protect the members of their own group. So what they did was simply invent a leader that would favor their group over other groups just by reading their minds, or worse, favor some individuals over others: prayers always ask for favors, which is evidently selfish.

Of course it doesn't work for real, but it works as a placebo; it helps people feel good, which is a kind of harm management. I don't need god to feel good: when I notice I feel bad, which sometimes happens when I'm tired, I simply stop thinking, or I sleep. It's not a good idea to try to solve problems when we are tired, and it is easy to use the placebo effect to stop thinking about them. That's what god was used for, and it worked. That was his only use, and I know we don't need him anymore because I know that people can talk to themselves instead. Of course, god didn't succeed in replacing our selfishness with altruism, but we still survived quite well without it: if it weren't for pollution and war, the human species would be fine. The source of those two problems is the uncontrolled growth of the population, and we can't do anything altruistic about that except wait till it stops. It will stop when women everywhere, not just in developed countries, are permitted to do something other than make babies.

What you're trying to create is a god that would be altruistic instead of selfish, and I bet you would be happy if he could read our minds. You simply want to upgrade our actual gods. The guys who imagined them probably thought, like you, that it would make a better world, but it didn't. Ideas about control come from a mind that is free to think, ideas about absoluteness come from a mind that is limited, ideas about altruism come from a mind that is selfish. I'm selfish too, but I think I'm privileged, so I'm not in a hurry to get my reward, and I look for upgrades that will take time to develop. You are looking for a fast way, so it may mean that you're in a hurry, or at least that you feel so. My problem with your AGI is that I hate being told what to do, to the point that, when I face believers, I give the sky the finger and ask their god to strike me down. Know what? Each time I do that, I can feel the hair bristle on my back, as if I still believed it might happen. That's why it is so hard to convince believers. Try it and tell me what you feel. :0)

DON'T TRY THAT AT HOME GUYS, IT CAN BE VERY DANGEROUS, DO IT IN A CHURCH INSTEAD! :0)

I just had another crazy idea: if you promise your AGI will laugh when I give him the finger, I'll buy it! :0)

Quote from: From David at LessWrong
AGI will be able to access a lot of information about the people involved in situations where such difficult decisions need to be made. Picture a scene where a car is moving towards a group of children who are standing by the road. One of the children suddenly moves out into the road and the car must decide how to react. If it swerves to one side it will run into a lorry that's coming the other way, but if it swerves to the other side it will plough into the group of children.
In this case, I would simply hit the brakes, because I wouldn't have the time to think anyway. Anybody can jump in front of a car without the car even having time to brake, though, and no software in the car could prevent the collision either. If you have the time to think, then you also have the time to stop. On the other hand, if your AGI were able to calculate everything, then he should also know that he has to slow down, since it is most probable that a bunch of kids are actually playing at that spot beside the street.
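The "time to think means time to stop" claim can be sanity-checked with the standard stopping-distance formula (reaction distance plus braking distance). This is only an illustrative sketch: the speed, reaction time, and deceleration figures below are assumptions, not anything from the thread.

```python
# Stopping distance = reaction distance + braking distance.
# All numbers are illustrative assumptions for an urban scenario.

def stopping_distance(speed_ms, reaction_s=1.0, decel_ms2=7.0):
    """Total distance travelled before stopping, in metres.

    speed_ms   - initial speed in m/s
    reaction_s - time before the brakes are applied
    decel_ms2  - braking deceleration (dry road, hard braking)
    """
    reaction_d = speed_ms * reaction_s          # distance covered while reacting
    braking_d = speed_ms ** 2 / (2 * decel_ms2) # v^2 / 2a under constant braking
    return reaction_d + braking_d

v = 30 / 3.6  # 30 km/h in a built-up area, converted to m/s

# Human-like 1 s reaction: roughly 13 m to stop.
print(f"human driver:  {stopping_distance(v):.1f} m")

# A faster automated controller (0.1 s assumed): roughly 6 m.
# A child stepping out 2 m ahead is inside even that distance,
# which is the point made above: no controller can prevent it.
print(f"fast controller: {stopping_distance(v, reaction_s=0.1):.1f} m")
```

The sketch supports both halves of the argument: seen early enough, braking alone resolves the scene; seen too late, no amount of computation helps, so the only real lever is driving slower where children are likely.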
 
« Last Edit: 17/06/2018 18:54:22 by Le Repteux »
guest39538

Re: Artificial intelligence versus real intelligence
« Reply #235 on: 17/06/2018 15:57:25 »
Quote from: Le Repteux on 17/06/2018 15:02:50
and I realized that the way your AGI would have to manage the harm was god's way.
The AI would conclude that this was the best option. By creating a real God and spreading the seed into the system, the AI would know the outcome by advanced logical awareness. The AI would know that once the religious books are moved out of the frame, the God ideology, now science-based, would lose its appeal, and the eventuality is that God would then be forgotten over time.

(+1/t )  -  (+1/t) = 0

The AI could then, if he thought it appropriate, remodel God as something less arguable.

Quote
What you're trying to create is a god that would be altruistic instead of selfish, and I bet you would be happy if he could read our minds.

A selfish AI would not be a 100% objective unit. The selfishness for the AI would be programmed as a sense of self-preservation, only needing the selfish basics to maintain ''himself''.

Added: Of course, David's AI would be fully programmed with C.I.A. subjective mind-control techniques, uploaded like


guest39538

Re: Artificial intelligence versus real intelligence
« Reply #236 on: 17/06/2018 18:55:47 »
Quote from: Le Repteux on 15/06/2018 17:58:17
David's AGI wouldn't have to wait for upgrades from his creators, he would upgrade himself all by himself.
I was thinking about this post. So the AI could weaponize itself in an instant if it wanted to?

guest39538

Re: Artificial intelligence versus real intelligence
« Reply #237 on: 17/06/2018 19:16:48 »
Quote from: Le Repteux on 17/06/2018 15:02:50
Quote from: From David at LessWrong
AGI will be able to access a lot of information about the people involved in situations where such difficult decisions need to be made. Picture a scene where a car is moving towards a group of children who are standing by the road. One of the children suddenly moves out into the road and the car must decide how to react. If it swerves to one side it will run into a lorry that's coming the other way, but if it swerves to the other side it will plough into the group of children.
In this case, I would simply hit the brakes, because I wouldn't have the time to think anyway. Anybody can jump in front of a car without the car even having time to brake, though, and no software in the car could prevent the collision either. If you have the time to think, then you also have the time to stop. On the other hand, if your AGI were able to calculate everything, then he should also know that he has to slow down, since it is most probable that a bunch of kids are actually playing at that spot beside the street.

An interesting argument of logic.

The AI's options:

Turn into the bunch of children

Turn into the truck

Brake, knowing he has ABS, and hope for the best

Or just continue and run the child over, not caring

Well, he knows driving into the bunch of children is a no-no.

He knows if he drives into the truck he can't continue his programming.

He might choose to just take the braking option,

slowing down slightly so as not to hurt the child too badly (hopefully).

Quite obviously he would be driving a lot slower, knowing it was a built-up area. But he also might take a chance on braking to slow down a bit if he was going too fast.
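The option-by-option reasoning above amounts to ranking the manoeuvres by expected harm and picking the least bad one. A minimal sketch of that selection, where the options and their harm scores are purely illustrative assumptions rather than anyone's actual control policy:

```python
# Toy harm-minimising choice over the four options discussed above.
# The scores are made-up illustrative estimates (higher = worse outcome).

OPTIONS = {
    "swerve into the children": 10.0,  # ruled out first: worst outcome
    "swerve into the truck":     8.0,  # destroys the vehicle and occupant
    "brake hard (ABS)":          2.0,  # impact, if any, at much-reduced speed
    "continue without braking":  6.0,  # full-speed impact with the child
}

def least_harm(options):
    """Return the action whose estimated harm is lowest."""
    return min(options, key=options.get)

print(least_harm(OPTIONS))  # → brake hard (ABS)
```

Under these assumed scores the braking option wins, matching the post's conclusion; the interesting (and unresolved) part of the debate is who assigns the scores, which no amount of `min()` can settle.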
Le Repteux
Re: Artificial intelligence versus real intelligence
« Reply #238 on: 17/06/2018 19:24:05 »
Quote from: Thebox on 17/06/2018 15:57:25
The selfishness for the AI would be programmed as a sense of self-preservation, only needing the selfish basics to maintain ''himself''.
An AGI will only protect himself from humans if he calculates that it is better for humans that he stays alive, which is indirectly a selfish behavior, since it is exactly what good humans think when they kill people. We don't have to calculate anything to protect ourselves when we are attacked, because our selfishness is instinctive, but once an AGI had understood that he can protect himself, he wouldn't have to calculate either. He would do as we do: he would defend himself while respecting his law, which is incidentally the same as ours when force is necessary: not to use more force than necessary. That law is not only instinctive, it is natural. Particles don't explode when they don't have to; they only do when the external force exceeds their internal one.

Quote from: Thebox on 17/06/2018 18:55:47
I was thinking about this post, so the Ai could weaponize themselves at an instant if they wanted to  ?
They could, but they would still have to respect their own law.
« Last Edit: 17/06/2018 19:29:54 by Le Repteux »
guest39538

Re: Artificial intelligence versus real intelligence
« Reply #239 on: 17/06/2018 19:39:16 »
Quote from: Le Repteux on 17/06/2018 19:24:05
Quote from: Thebox on 17/06/2018 15:57:25
The selfishness for the AI would be programmed as a sense of self-preservation, only needing the selfish basics to maintain ''himself''.
An AGI will only protect himself from humans if he calculates that it is better for humans that he stays alive, which is indirectly a selfish behavior, since it is exactly what good humans think when they kill people. We don't have to calculate anything to protect ourselves when we are attacked, because our selfishness is instinctive, but once an AGI had understood that he can protect himself, he wouldn't have to calculate either. He would do as we do: he would defend himself while respecting his law, which is incidentally the same as ours when force is necessary: not to use more force than necessary. That law is not only instinctive, it is natural. Particles don't explode when they don't have to; they only do when the external force exceeds their internal one.

Interesting, but don't forget, though, it is your AI, so he has to follow programming. Did the AI have all the programming needed?


©The Naked Scientists® 2000–2017 | The Naked Scientists® and Naked Science® are registered trademarks created by Dr Chris Smith. Information presented on this website is the opinion of the individual contributors and does not reflect the general views of the administrators, editors, moderators, sponsors, Cambridge University or the public at large.