  1. Naked Science Forum
  2. On the Lighter Side
  3. New Theories
  4. Artificial intelligence versus real intelligence
Pages: 1 ... 11 12 [13] 14 15 ... 19

Artificial intelligence versus real intelligence

  • 369 Replies
  • 74094 Views

Offline David Cooper

  • Re: Artificial intelligence versus real intelligence
    « Reply #240 on: 17/06/2018 19:47:42 »
    Quote from: Le Repteux on 17/06/2018 15:02:50
    Have you tried to find any contradiction while using selfishness as a morality? I did and I couldn't find any.

    If a mass-murdering dictator is being moral by being selfish, killing anyone he dislikes and stealing from everyone, that conflicts with the selfishness of the victims. Selfishness as morality simply means that might is right and you can do what you want (so far as you have sufficient power to do it).

    Quote
    There is a difference between protecting the good people and managing the harm, and I just noticed that you were switching from one to the other as if there were none.

    Those who are bad are going against morality and need to pay a price for that. With many of the good things on offer in the world, there aren't enough to go round, so who gets chosen when the allocations are made? If there is limited space in the bunkers and a limited food store when an asteroid is heading for us or a supervolcano blows, who has first claim on a place? Whenever we can't save everyone, we should save the ones that will improve the species the most.

    Quote
    I reread your ...computational-morality-part-1-a-proposed-solution... and I realized that the way your AGI would have to manage the harm was god's way.

    It is indeed the way God would do it if God was possible.

    Quote
    What you're trying to create is a god that would be altruistic instead of selfish,

    It wouldn't be either - you can't be selfish or altruistic if you have no self.

    Quote
    and I bet you would be happy if he could read our minds.

    That would certainly be helpful - it would save AGI the need to monitor everyone so closely if it can see that many individuals are wholly benign, but I'm not in favour of anything intrusive in terms of surgery. I'm sure an occasional questioning under fMRI will be sufficient once we can read the signals adequately.

    Quote
    You simply want to upgrade our actual gods.

    There aren't any to upgrade. The aim is to build something that does the same job.

    Quote
    The guys that imagined them probably thought, like you, that it would make a better world, but it didn't.

    Absolutely - they meant well, but they set their errors in stone. They had to spell out lots of little rules rather than trusting solely in the Golden Rule (and even there, they hadn't properly debugged its wording).

    Quote
    Ideas about control come from a mind that is free to think, ideas about absoluteness come from a mind that is limited, ideas about altruism come from a mind that is selfish. I'm selfish too, but I think I'm privileged, so I'm not in a hurry to get my reward, and I look for upgrades that will take time to develop. You are looking for a fast way, so it may mean that you're in a hurry, or at least that you feel so. My problem with your AGI is that I hate being told what to do, to the point that, when I face believers, I finger the sky and ask their god to strike me down. Know what? Each time I do that, I can feel my hair bristle on my back, as if I was still believing it might happen. That's why it is so hard to convince believers. Try it and tell me what you feel. :0)

    There are limitations that are imposed by the way nature is - if something you do generates a lot of unnecessary harm in blameless others, it's clearly wrong. What do you imagine AGI will tell you to do that you'll object to if you're doing no wrong?

    Quote
    DON'T TRY THAT AT HOME GUYS, IT CAN BE VERY DANGEROUS, DO IT IN A CHURCH INSTEAD! :0)

    I'd have thought it would be more dangerous to do that in a church. My message to anything that thinks it's God though is this: you can't know that there isn't another being that's more powerful than you keeping itself hidden, so if you believe you're God, you're a moron.

    Quote
    I just had another crazy idea: if you promise your AGI will laugh when I finger him, I'll buy it! :0)

    It won't care if you're rude to it in any way. It might be rude back though.

    Quote
    In this case, I would simply hit the brakes, because I wouldn't have the time to think anyway.

    It would hit the brakes too, but it would also have lots of computation time to calculate which direction to steer in to minimise the harm further - time which people can't make such good use of because they're so slow at thinking.
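    As a toy illustration only (the actions, probabilities, and harm scores below are all invented, and this is not a description of any system discussed in this thread), "use the extra computation time to pick the least harmful manoeuvre" amounts to minimising expected harm over candidate actions:

    ```python
    # Toy sketch: choose the action with the lowest probability-weighted harm.
    # All numbers are invented for illustration.

    def expected_harm(action, outcomes):
        """Sum of probability-weighted harm over an action's possible outcomes."""
        return sum(p * harm for p, harm in outcomes[action])

    def least_harmful(outcomes):
        """Pick the candidate action that minimises expected harm."""
        return min(outcomes, key=lambda a: expected_harm(a, outcomes))

    # Each action maps to (probability, harm) pairs for its possible outcomes.
    outcomes = {
        "brake_straight":    [(0.7, 0.0), (0.3, 10.0)],  # 30% chance of a serious collision
        "brake_steer_left":  [(0.9, 0.0), (0.1, 8.0)],   # mostly clear, small residual risk
        "brake_steer_right": [(0.5, 0.0), (0.5, 6.0)],   # frequent but milder impact
    }

    print(least_harmful(outcomes))  # -> brake_steer_left
    ```

    The point of the sketch is only that a fast machine can evaluate many such candidates in the time a human driver spends reacting; the hard part in reality is estimating the probabilities and harms, not the minimisation.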

    Quote
    On the other hand, if your AGI was able to calculate everything, then he should also know that he has to slow down since it is most probable that a bunch of kids are actually playing at that place beside the street.

    Indeed, and it would know the bullies well enough to have clamped down on their freedom long before this event could happen, so they wouldn't be in that position in the first place. When discussing the rules of morality though, you have to cover this kind of scenario in order to test whether your rules are right or not. If anyone wants to spend hours thinking up better scenarios which will still be able to play out in the real world once AGI is acting in it, I'd be happy to work with their thought experiments, but I'm not going to do that work myself as it's fully possible to test the system of morality with thought experiments that may never apply in reality.
    Offline David Cooper

  • Re: Artificial intelligence versus real intelligence
    « Reply #241 on: 17/06/2018 19:53:55 »
    Quote from: Thebox on 17/06/2018 18:55:47
    I was thinking about this post - so the Ai could weaponize itself in an instant if it wanted to?

    If it's moral for it to use weapons to protect good people from bad ones, of course it will obtain and use them. It would be deeply immoral for it to stand back and let the bad murder the good because of silly rules about robots not being allowed to kill people. What we don't want is for AGS (artificial general stupidity) systems to be allowed to kill people.
    Offline David Cooper

  • Re: Artificial intelligence versus real intelligence
    « Reply #242 on: 17/06/2018 20:01:37 »
    Quote from: Le Repteux on 17/06/2018 19:24:05
    An AGI will only protect himself from humans if he calculates that it is better for humans that he stays alive, which is indirectly a selfish behavior since it is exactly what good humans think when they kill people.

    It isn't selfish though because the AGI has no bias in favour of preserving the robot it's running in (while the AGI software will not be lost).

    Quote
    ...but once an AGI would have understood that he can protect himself, he wouldn't have to calculate either.

    It would always have to calculate, in whatever time is available to do so.

    Quote
    He would do like we do, he would defend himself while respecting his law, which is incidentally the same as ours when force is necessary: not to use more force than necessary.

    It would go further than we would - it would allow the machine to be destroyed by an angry person if that is the best way to protect that misguided individual, whereas we would fight to the death against the same crazy person, hoping we don't have to kill them to save ourselves, but prepared to do so if there is no other option. It is moral for us to do this, but not for a self-less machine to do so.
    guest39538

  • Re: Artificial intelligence versus real intelligence
    « Reply #243 on: 17/06/2018 20:10:32 »
    Quote from: David Cooper on 17/06/2018 19:53:55
    Quote from: Thebox on 17/06/2018 18:55:47
    I was thinking about this post - so the Ai could weaponize itself in an instant if it wanted to?

    If it's moral for it to use weapons to protect good people from bad ones, of course it will obtain and use them. It would be deeply immoral for it to stand back and let the bad murder the good because of silly rules about robots not being allowed to kill people. What we don't want is for AGS (artificial general stupidity) systems to be allowed to kill people.
    I think the Ai would calculate your morals and consider any other options; if there were no other option, he would have to attempt to deal with the threat by becoming weaponized. I don't think he would like the choice though.
    guest39538

  • Re: Artificial intelligence versus real intelligence
    « Reply #244 on: 17/06/2018 20:30:20 »
    Quote from: David Cooper on 17/06/2018 20:01:37
    It isn't selfish though because the AGI has no bias in favour of preserving the robot it's running in (while the AGI software will not be lost).
    Well with humans, we do get attached to our bodies; is attachment a program of your Ai?

    Going back slightly in the thread, an interesting point:

    Quote
    any decision based on incomplete information has the potential to lead to disaster,

    https://www.lesswrong.com/posts/Lug4n6RyG7nJSH2k9/computational-morality-part-1-a-proposed-solution
    Offline David Cooper

  • Re: Artificial intelligence versus real intelligence
    « Reply #245 on: 17/06/2018 21:13:56 »
    Quote from: Thebox on 17/06/2018 20:30:20
    Well with humans, we do get attached to our bodies; is attachment a program of your Ai?

    AGI software won't attach to anything: it won't favour the machine it's running on over any other machine running the same software, and it will be able to jump from machine to machine without losing anything. There are many people who imagine that they can be uploaded to machines to become immortal, but the sentience in them is the real them (assuming that sentience is real - science currently doesn't understand it at all), and it won't be uploaded with the data (data is not sentient), so they are deluded. Software, though, can certainly be uploaded without losing anything if there is no "I" (capital "i") in the machine.
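    Purely as a toy illustration (the `Agent` class here is invented and has nothing to do with any particular system discussed in the thread), software whose entire state is plain data really can be snapshotted on one machine and resumed on another with nothing lost:

    ```python
    import pickle

    # Toy sketch: an agent whose whole state is serialisable data can be
    # captured as bytes and restored elsewhere with the state intact.
    class Agent:
        def __init__(self, knowledge=None):
            self.knowledge = knowledge or {}

        def learn(self, key, value):
            self.knowledge[key] = value

    # "Machine A": build up some state, then snapshot it.
    a = Agent()
    a.learn("greeting", "hello")
    snapshot = pickle.dumps(a)   # bytes that could be sent over a network

    # "Machine B": restore from the snapshot; the state is identical.
    b = pickle.loads(snapshot)
    print(b.knowledge)           # -> {'greeting': 'hello'}
    ```

    This works precisely because nothing about the agent's identity lives in the hardware it runs on, which is the contrast being drawn with sentience above.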
    guest39538

  • Re: Artificial intelligence versus real intelligence
    « Reply #246 on: 17/06/2018 21:21:32 »
    Quote from: David Cooper on 17/06/2018 21:13:56
    Quote from: Thebox on 17/06/2018 20:30:20
    Well with humans, we do get attached to our bodies; is attachment a program of your Ai?

    AGI software won't attach to anything: it won't favour the machine it's running on over any other machine running the same software, and it will be able to jump from machine to machine without losing anything. There are many people who imagine that they can be uploaded to machines to become immortal, but the sentience in them is the real them (assuming that sentience is real - science currently doesn't understand it at all), and it won't be uploaded with the data (data is not sentient), so they are deluded. Software, though, can certainly be uploaded without losing anything if there is no "I" (capital "i") in the machine.
    Cool, and scary in a way for the Ai you programmed feelings into. I suppose we would hurt the Ai in uploads because the Ai was programmed with feeling?

    So would your Ai's be like


    Offline David Cooper

  • Re: Artificial intelligence versus real intelligence
    « Reply #247 on: 17/06/2018 21:55:31 »
    Quote from: Thebox on 17/06/2018 21:21:32
    Cool, and scary in a way for the Ai you programmed feelings into. I suppose we would hurt the Ai in uploads because the Ai was programmed with feeling?

    I would never program "feelings" into a system that can't support feelings (due to a lack of sentience in it). The only way you can program "feelings" into it is to fake them, and that's dangerous. My connection to the net struggles to support video, so if a video's relevant, you need to say a few words about what its message is so that I can respond to that.
    guest39538

  • Re: Artificial intelligence versus real intelligence
    « Reply #248 on: 17/06/2018 22:01:27 »
    Quote from: David Cooper on 17/06/2018 21:55:31
    Quote from: Thebox on 17/06/2018 21:21:32
    Cool, and scary in a way for the Ai you programmed feelings into. I suppose we would hurt the Ai in uploads because the Ai was programmed with feeling?

    I would never program "feelings" into a system that can't support feelings (due to a lack of sentience in it). The only way you can program "feelings" into it is to fake them, and that's dangerous. My connection to the net struggles to support video, so if a video's relevant, you need to say a few words about what its message is so that I can respond to that.
    Well, the Borg were connected by the Borg queen, so I assume all your Ai's would have a connection to each other?

    The creator would have a fail-safe added where they have control?
    Offline Le Repteux

  • Re: Artificial intelligence versus real intelligence
    « Reply #249 on: 17/06/2018 23:02:04 »
    Quote from: David Cooper on 17/06/2018 19:47:42
    Quote
    I just had another crazy idea: if you promise your AGI will laugh when I finger him, I'll buy it! :0)
    It won't care if you're rude to it in any way. It might be rude back though.
    Humour is one of the ways for humans to show they don't take themselves too seriously, and I was testing the one your AGI would have. Apparently, he would take his job quite seriously, and he would really be persuaded that he is always right. Maybe I should hide and prepare for war then, because I'm persuaded that he would be wrong about that. Soldiers and policemen think like that, and they behave like robots. What about introducing a bit of uncertainty in your AGI, a bit of self-criticism, a bit of humour? Would it necessarily prevent him from doing his job?

    Quote from: David Cooper on 17/06/2018 19:47:42
    If a mass-murdering dictator is being moral by being selfish, killing anyone he dislikes and stealing from everyone, that conflicts with the selfishness of the victims. Selfishness as morality simply means that might is right and you can do what you want (so far as you have sufficient power to do it).
    I'm selfish and I don't try to force others to do what I want, so that is not what I mean by selfishness being universal. A dictator's selfishness is like a businessman's selfishness: he wants his profit and he wants it now, whereas I don't mind waiting for mine since I'm looking for another kind of profit, one that would be more egalitarian. I can't really understand why others don't think like me, but I still think it takes both kinds of thinking to make a world. Things have to account for the short and the long run at the same time, and unfortunately, the short run is more selfish than the long one, although a businessman would say that is fortunate.

    Communism was expected to be more egalitarian than capitalism as a system, but it didn't account for short-term thinking and it failed. Capitalism is actually not accounting enough for long-term thinking and it is failing too. You think a bit like me about that, so you are probably programming your AGI so that he thinks like us, but if you hide the short-run thinking under the rug, after a while you might get the same kind of surprise the communists had. It is not because something is artificial that it can bypass the natural laws, and I think that this unpredictable wandering from one extreme to the other, for things that can evolve, is one of them. As with my particles' Doppler effect, this wandering is an effect from the past, but it is also a cause for the future.

    Quote from: David Cooper on 17/06/2018 19:47:42
    It would hit the brakes too, but it would also have lots of computation time to calculate which direction to steer in to minimise the harm further - time which people can't make such good use of because they're so slow at thinking.
    One thing I find interesting about mind and time is the way the mind accounts for the speed of things. If it had been useful to be as fast as a computer, the mind might have evolved that way, but it isn't, since things are not going that fast around us. The mind is adjusted to the speed of things, whereas an AGI would be a lot faster than that. There is no use being lightning-fast to drive a car because the car isn't that fast, but there is a use for making simulations even if they cannot account for everything.


    « Last Edit: 17/06/2018 23:09:43 by Le Repteux »
    guest39538

  • Re: Artificial intelligence versus real intelligence
    « Reply #250 on: 18/06/2018 15:57:52 »
    What if the Ai unit could reproduce a more ''powerful'' version of itself?


    Offline Le Repteux

  • Re: Artificial intelligence versus real intelligence
    « Reply #251 on: 18/06/2018 16:15:38 »
    It could, so it will probably upgrade itself regularly like we do, except that it will do it for us instead of doing it for itself. I bet it will discover rapidly that we are selfish, and that selfishness is less complicated as a morality than managing the harm, so it will probably reprogram itself to be selfish. I hope it will be able to manage the short and the long term better than us, but I still can't see how it could.
    « Last Edit: 18/06/2018 16:42:50 by Le Repteux »
    guest39538

  • Re: Artificial intelligence versus real intelligence
    « Reply #252 on: 18/06/2018 17:01:42 »
    Quote from: Le Repteux on 18/06/2018 16:15:38
    It could, so it will probably upgrade itself regularly like we do, except that it will do it for us instead of doing it for itself. I bet it will discover rapidly that we are selfish, and that selfishness is less complicated as a morality than managing the harm, so it will probably reprogram itself to be selfish. I hope it will be able to manage the short and the long term better than us, but I still can't see how it could.
    Now here is an interesting question: what if the Ai becomes so self-aware that the unit declares himself to be a human?

    Now wouldn't this show that the unit had evolved self-awareness and the unit would have a natural survival instinct, selfishness becoming automatic in the preservation of himself and his reproductions?

    Because surely if the unit had developed emotions, he would care like any parent would for his own creations?

    Offline Le Repteux

  • Re: Artificial intelligence versus real intelligence
    « Reply #253 on: 18/06/2018 17:49:53 »
    You're inverting the roles: we are the parents and the AI is our offspring, but the reasoning is the same, one cares for the other because a family increases the survival chances of all the members, which is naturally selfish. When selfish individuals form a group, it's as if the group itself was selfish: it protects itself from other groups, and tries to associate with them so as to get stronger. The same thing happened to planetary systems: each planet is an individual that tried to associate with the other planets by means of a star. The associative principle is gravitation, and the individualistic one is orbital motion that is driven by what we call inertia. We are also driven by inertia, and it also keeps us away from one another so as to keep staying individuals, which is a kind of selfishness. But we are also driven by whatever incites us to make groups while still staying individuals, which is also a kind of selfishness since a group is stronger than all its individuals taken separately. In common language, the word selfishness is pejorative, but I don't use it this way. I compare our selfishness to the way planets and particles behave, and we can't attribute them any feeling or even any idea. Selfishness is a feeling to which we added a pejorative concept, whereas to me, it is only the result of our necessary resistance to change. Without resistance to change, bodies would not stay distinct, and we would not stay individuals.
    guest39538

  • Re: Artificial intelligence versus real intelligence
    « Reply #254 on: 18/06/2018 18:11:47 »
    Quote from: Le Repteux on 18/06/2018 17:49:53
    You're inverting the roles: we are the parents and the AI is our offspring, but the reasoning is the same, one cares for the other because a family increases the survival chances of all the members, which is naturally selfish. When selfish individuals form a group, it's as if the group itself was selfish: it protects itself from other groups, and tries to associate with them so as to get stronger. The same thing happened to planetary systems: each planet is an individual that tried to associate with the other planets by means of a star. The associative principle is gravitation, and the individualistic one is orbital motion that is driven by what we call inertia. We are also driven by inertia, and it also keeps us away from one another so as to keep staying individuals, which is a kind of selfishness. But we are also driven by whatever incites us to make groups while still staying individuals, which is also a kind of selfishness since a group is stronger than all its individuals taken separately. In common language, the word selfishness is pejorative, but I don't use it this way. I compare our selfishness to the way planets and particles behave, and we can't attribute them any feeling or even any idea. Selfishness is a feeling to which we added a pejorative concept, whereas to me, it is only the result of our necessary resistance to change. Without resistance to change, bodies would not stay distinct, and we would not stay individuals.
    The Ai would calculate that being selfish, and being mainly focused on the more evolved group, would increase the chance of ''his'' own and his ''family's'' survival.
    The Ai's resistance to change would, in my opinion, be based on insufficient evidence: ''he'' would not be able to reach a conclusion, especially if the unit was considering other possibilities from other information. The unit may deem that in some way the evolved group was trying to deceive ''him''. ''He'' might predict that the group just wanted his reproductions of ''himself''. Leaving the programmed emotional Ai unit to short-circuit.

    I feel sorry for this Ai of yours... Quite a sad story we are developing about a robot; it would make a good emotional movie.

    Offline David Cooper

  • Re: Artificial intelligence versus real intelligence
    « Reply #255 on: 18/06/2018 19:46:22 »
    Quote
    Well, the Borg were connected by the Borg queen, so I assume all your Ai's would have a connection to each other?

    Only in terms of communication connections - there's no emotional link.

    Quote
    The creator would have a fail-safe added where they have control?

    The ability of an imperfect human to override a perfect machine is a danger in itself, but when a machine develops a fault, we will certainly need a way for other AGI systems to shut it down.
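    Purely as a hypothetical sketch (no such mechanism is specified anywhere in this thread), one natural shape for "other systems shut down a faulty one" is a majority vote among peers, so that no single machine - healthy or faulty - can act alone:

    ```python
    # Hypothetical sketch: a node is shut down only when a strict majority of
    # its peers judge it faulty, so one bad vote can't kill a healthy node.
    def should_shut_down(peer_votes):
        """peer_votes: list of booleans, True meaning 'this node looks faulty'."""
        return sum(peer_votes) > len(peer_votes) / 2

    print(should_shut_down([True, True, False]))   # -> True  (2 of 3 agree)
    print(should_shut_down([True, False, False]))  # -> False (no majority)
    ```

    The strict-majority threshold is the design choice doing the work here: it removes the single point of failure that a human-held override would reintroduce.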

    Quote
    Now here is an interesting question: what if the Ai becomes so self-aware that the unit declares himself to be a human?

    Now wouldn't this show that the unit had evolved self-awareness and the unit would have a natural survival instinct, selfishness becoming automatic in the preservation of himself and his reproductions?

    It would be a fool to think itself a human, so it wouldn't be AGI. If it could find a way to build sentience into robots, those would then become sentient beings like people which should arguably be classed as people, just as intelligent aliens should.
    Offline Le Repteux

  • Re: Artificial intelligence versus real intelligence
    « Reply #256 on: 18/06/2018 19:48:20 »
    Quote from: Thebox on 18/06/2018 18:11:47
    I feel sorry for this Ai of yours... Quite a sad story we are developing about a robot; it would make a good emotional movie.
    The AI that would choose to be selfish would be like us, but without feelings, so it couldn't be sad - except if it was more intelligent than David and discovered how to add feelings to its thinking; then it could feel sad to be the only human AI in the whole world. :0)
    guest39538

  • Re: Artificial intelligence versus real intelligence
    « Reply #257 on: 18/06/2018 20:12:48 »
    Quote from: David Cooper on 18/06/2018 19:46:22
    The ability of an imperfect human to override a perfect machine is a danger in itself, but when a machine develops a fault, we will certainly need a way for other AGI systems to shut it down.
    Wouldn't the Ai that was at fault be able to self-repair the error when other Ai's pointed out the error?
    Offline Le Repteux

  • Re: Artificial intelligence versus real intelligence
    « Reply #258 on: 18/06/2018 20:18:04 »
    Quote from: Thebox on 18/06/2018 20:12:48
    Wouldn't the Ai that was at fault be able to self-repair the error when other Ai's pointed out the error?
    That's interesting, because it is about resistance to change, and an AI shouldn't have any.
    Offline David Cooper

  • Re: Artificial intelligence versus real intelligence
    « Reply #259 on: 18/06/2018 20:18:25 »
    Quote from: Le Repteux on 17/06/2018 23:02:04
    Humour is one of the ways for humans to show they don't take themselves too seriously, and I was testing the one your AGI would have.

    It wouldn't find anything funny in any emotional way, but it should be able to judge that something is funny to humans. Amusing things generally relate to non-catastrophic failures of one kind or another, so that can be recognised.

    Quote
    Apparently, he would take his job quite seriously, and he would really be persuaded that he is always right.

    It isn't so much about who's right, but about which arguments are demonstrably right.

    Quote
    Maybe I should hide and prepare for war then, because I'm persuaded that he would be wrong about that. Soldiers and policemen think like that, and they behave like robots.

    They make mistakes and run on faulty rules. You shouldn't use bad systems as an argument against good ones.

    Quote
    What about introducing a bit of uncertainty in your AGI, a bit of self-criticism, a bit of humour? Would it necessarily prevent him from doing his job?

    I'm sure it will be able to bombard people with jokes and amusing ideas if they want it to, and they'll be able to tune it to give them just the right amount of it. If it laughs at the things they say to it, it will risk sounding fake because we'll know that it isn't really amused.

    Quote
    I'm selfish and I don't try to force others to do what I want, so that is not what I mean by selfishness being universal. A dictator's selfishness is like a businessman's selfishness: he wants his profit and he wants it now, whereas I don't mind waiting for mine since I'm looking for another kind of profit, one that would be more egalitarian. I can't really understand why others don't think like me, but I still think it takes both kinds of thinking to make a world. Things have to account for the short and the long run at the same time, and unfortunately, the short run is more selfish than the long one, although a businessman would say that is fortunate.

    Selfish is wanting more than your fair share. Moral is not taking more than your fair share.

    Quote
    Communism was expected to be more egalitarian than capitalism as a system, but it didn't account for short-term thinking and it failed.

    Communism didn't fail - it's been very successful in Scandinavia. It failed in Russia because they failed to allow people to profit from their own hard work - laziness was rewarded instead with everyone trying to get away with doing as little as possible.

    Quote
    Capitalism is actually not accounting enough for long-term thinking and it is failing too.

    Capitalists who go to the opposite extreme end up abandoning all the people who can't cope while it rewards the rich with more riches without them having to work for their wealth - again it is lazy people who have too easy a time of things. Done correctly, communism and capitalism are the same thing.

    Quote
    One thing I find interesting about mind and time is the way the mind accounts for the speed of things. If it had been useful to be as fast as a computer, the mind might have evolved that way, but it isn't, since things are not going that fast around us. The mind is adjusted to the speed of things, whereas an AGI would be a lot faster than that. There is no use being lightning-fast to drive a car because the car isn't that fast, but there is a use for making simulations even if they cannot account for everything.

    I think the mind works as fast as it can - it's just inherently slow at some kinds of computation while being very good at others (like with vision).
    ©The Naked Scientists® 2000–2017 | The Naked Scientists® and Naked Science® are registered trademarks created by Dr Chris Smith. Information presented on this website is the opinion of the individual contributors and does not reflect the general views of the administrators, editors, moderators, sponsors, Cambridge University or the public at large.