The Naked Scientists

Naked Science Forum › On the Lighter Side › New Theories
Page 7 of 19

Artificial intelligence versus real intelligence

  • 369 Replies
  • 74052 Views

David Cooper
Re: Artificial intelligence versus real intelligence
« Reply #120 on: 05/06/2018 19:09:41 »
    In English, "hazard" almost always means "danger" - never "accident". It can be used in the expression "hazard a guess" though, which means something like "guess with little chance of being right".

    I never gamble in random ways such as buying lottery tickets - that's a way of losing a lot of money, and only a few lucky fools win. The only kind of gambling I do is where the odds are in my favour, so while none of these things is guaranteed to lead to a gain in itself, overall the odds overwhelmingly favour me ending up ahead. Both the bookmaker and his customers are gambling, but the former is almost guaranteed to win overall, although a series of big losses could wipe him out if he's very unlucky. In life you have to take some risks, but you need to play it like the bookmaker. AGI will never be a victim of gambling like the punters are.

    Although no one ever knows what's going to happen in the next second (the whole universe could unravel at the speed of light, dismantling everything just in time to prevent the thing that everyone's sure will happen), rare events which might occur can be factored into the calculations. Every time you sit on a chair, a sharp knife could spring up out of it, but you don't normally consider that possibility as it's highly unlikely to happen. When you lean back, again a sharp blade could have come out of the back of the chair while you were leaning forward, but you just lean back without checking. If you put a lot of effort into checking safety beyond reasonable limits, you'll shut your life down so much that it won't be worth living - many possible things are too improbable to be worth checking for. We're always playing the odds, trying to maximise quality of life, and there are risks involved in every action and inaction. The odds should be calculated though rather than just guessing and making random decisions. The only place for making random decisions is where two or more options are equally likely to be the best one, at which point it doesn't matter which you choose.
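    The bookmaker-versus-punter point above comes down to expected value: a bet is worth taking, on average, only when its expected value is positive. A minimal sketch, with all probabilities and payoffs invented purely for illustration:

```python
def expected_value(outcomes):
    """outcomes: list of (probability, net_payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# Lottery-style bet (hypothetical numbers): pay 2, win 1,000,000 at odds of 1 in 14 million.
lottery = [(1 / 14_000_000, 1_000_000 - 2), (1 - 1 / 14_000_000, -2)]

# Bookmaker-style book (hypothetical numbers): wins a little often, loses more rarely.
bookmaker = [(0.9, 10), (0.1, -50)]

print(expected_value(lottery))    # negative: a long-run loser
print(expected_value(bookmaker))  # positive: the odds favour the bookmaker
```

    The lottery player and the bookmaker are both gambling, but only one of them is on the positive side of the long-run average.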

Le Repteux
Re: Artificial intelligence versus real intelligence
« Reply #121 on: 05/06/2018 19:38:46 »
    Quote from: David Cooper on 05/06/2018 19:09:41
    We're always playing the odds, trying to maximise quality of life, and there are risks involved in every action and inaction. The odds should be calculated though rather than just guessing and making random decisions. The only place for making random decisions is where two or more options are equally likely to be the best one, at which point it doesn't matter which you choose.
    If creative people always did that, there would be no creation, and I think some research that needed a lot of creativity would never have been done. It takes both kinds of thinking to make a society: those who are prudent, as you seem to be, and those who are imprudent, like me. Sometimes I wonder how I am still alive, I am so imprudent. It sometimes pays to take risks though. It may not be appropriate to buy lottery tickets, but still, people buy them because they know it might pay off. We don't know much about ourselves, but we know that we come from evolution, which is a kind of lottery too. As far or as close as we can see, chance is always there, so why wouldn't it be in our mind? Because we are special? Maybe, but I prefer to think we are not. Our most important discoveries mean that we are not, so the odds are on my side. :0)

David Cooper
Re: Artificial intelligence versus real intelligence
« Reply #122 on: 05/06/2018 19:55:17 »
    Quote from: Le Repteux on 05/06/2018 18:52:16
    I don't know about other languages, but David must know since he knows many. It's too complicated and too confusing, we need to know what's going on in our mind about randomness.

    I have a shallow knowledge of many languages rather than a deep knowledge of a few - I learned them mainly to study their structures, and taking some of them further (to the level where I can read books in them) is just a hobby. But randomness in the mind is not as useful a thing as you imagine, and it has very little application in intelligence or creativity. It is possible to throw a potful of paint at a canvas from a few yards away and occasionally get something that can be used as the basis of an interesting work of art, but that kind of accidental success doesn't occur often, and not in every field. It works with music too, where generating random patterns of sounds can lead to new compositions, but it relies on a human picking out the parts that trigger the right reaction in them. If you're trying to create a new device to do something useful or fun, that approach doesn't tend to work very well - it's better to be driven by a desire to do something specific (like fly) or to solve a specific problem and then to try to work out what's needed to achieve that aim.

    Quote
    If creative people would always do that, there would be no creation, and I think that some research that needed a lot of creativity would not have been made.

    Where creative genius is involved, it is not based on randomness.

    Quote
    As far or as close as we can see, chance is always there, so why wouldn't it be in our mind? Because we are special? Maybe, but I prefer to think we are not. Our most important discoveries mean that we are not, so the odds are on my side. :0)

    Chance is there, but most people who gamble are losers, throwing money in the bin which would have added up to a life-enhancing amount if they'd saved it instead. Most useful discoveries come out of hard calculation rather than random thought. There are a few lucky ones like Post-it notes, where the inventor was actually trying to formulate a superglue, but even there it came out of expertise and systematic experimentation.

guest39538
Re: Artificial intelligence versus real intelligence
« Reply #123 on: 05/06/2018 20:02:19 »
    Quote from: David Cooper on 05/06/2018 19:09:41
    I never gamble in random ways such as buying lottery tickets
    Me neither - the reverse odds are too hard to calculate because of so many variables. I did predict one value once because it had become more likely to come out than the 48 other numbers. I was about 99% confident my prediction would come up, and it did. Luck maybe, coincidence even, but I did precisely calculate it. We can narrow randomness down to a degree based on time.


guest39538
Re: Artificial intelligence versus real intelligence
« Reply #124 on: 05/06/2018 20:04:32 »
    Quote from: David Cooper on 05/06/2018 19:55:17
    Where creative genius is involved, it is not based on randomness.
    A creative genius minimises the risks of randomness.

Le Repteux
Re: Artificial intelligence versus real intelligence
« Reply #125 on: 05/06/2018 21:26:29 »
    Quote from: Thebox on 05/06/2018 20:04:32
    Quote from: David Cooper on 05/06/2018 19:55:17
    Where creative genius is involved, it is not based on randomness.
    A creative genius minimizes the risks of randomness.
    I think it is true that the mind minimizes the chances of making mistakes, but it cannot calculate all the risks, so it also has to take some chances, and it has to like taking chances to be able to take any. Some ideas have to be calculated before being tested, others less so. I didn't have to make lots of calculations to experiment with my kites, for example: I don't like to calculate, so I try something, and if it doesn't work, I try something else. I think that what I don't like is precision, which goes with the lack of precision of my mind, so I leave precision to those who have precise minds and I look for things that don't need much. David must have a very precise mind to think the way he does, but that doesn't mean he can predict what has never happened yet. Those who were calculating epicycles made good calculations, but that didn't mean they were right about the principle itself. If Galileo had minimized the risk of having his head cut off, he would not have proposed putting the sun at the center of rotation. He sure liked to take risks, and not only with his head. We take no risk when we calculate everything - at least we think we don't - but how can we expect to win anything without taking any risk? To me, taking no risk means not changing anything and using known things to do so, which is the complete opposite of what change means.
    « Last Edit: 05/06/2018 21:37:37 by Le Repteux »

Le Repteux
Re: Artificial intelligence versus real intelligence
« Reply #126 on: 05/06/2018 22:10:37 »
    Quote from: David Cooper on 05/06/2018 19:55:17
    Where creative genius is involved, it is not based on randomness.
    The randomness that an idea involves when you look backwards, and the randomness that presides over an idea, aren't the same thing. The randomness that presides over our own ideas is located in our own neurons and selected by our own brain, whereas the one an idea involves when you look backwards is located in each idea we have and is selected by others. It is not the same scale of randomness, so the two cannot be compared directly. The only way we can compare them is by looking at the way things are selected, as I did. (I drank a finger of vodka, so don't try to follow me :0)

    I can't find anything other than randomness to explain creativity - can you? The universe is incredibly diversified, and we know it has not been calculated, so where else can it come from if not from randomness?
    « Last Edit: 05/06/2018 22:14:15 by Le Repteux »

Le Repteux
Re: Artificial intelligence versus real intelligence
« Reply #127 on: 05/06/2018 22:25:16 »
    Quote from: David Cooper on 05/06/2018 19:55:17
    If you're trying to create a new device to do something useful or fun, that approach doesn't tend to work very well - it's better to be driven by a desire to do something specific (like fly) or to solve a specific problem and then to try to work out what's needed to achieve that aim.
    Artists also have a goal, but that doesn't prevent them from using randomness to reach it. Sometimes it works, sometimes not, which is the same for researchers. The problem is that we only see those who succeeded, and since we think artists are not serious while researchers are, we think creativity works differently depending on whether we are serious or not. Why would the brain work differently depending on whether we are serious or not?

guest39538
Re: Artificial intelligence versus real intelligence
« Reply #128 on: 05/06/2018 23:02:55 »
    Quote from: Le Repteux on 05/06/2018 21:26:29
    but it cannot calculate all the risks,
    Why not? 

    Risk assessment is a basic human function.

Le Repteux
Re: Artificial intelligence versus real intelligence
« Reply #129 on: 06/06/2018 17:20:37 »
    Mind can't calculate all the risks for the same reason meteorologists can't predict the temperature more than a few days in advance: there is a limit to the precision things can have. When I made my first simulation of the twins paradox, where a mirror from a light clock had to detect all by itself a photon from the other mirror, I realized that if I gave the mirror the same precision the photon had, the photon was always detected after it had passed the mirror. The transfer of energy from the photon to the mirror was therefore always late, which slowed the clock down a bit, so it finished its round trip short of where it started. I had to use a subterfuge so the traveling clock wouldn't lose time: I increased the speed of the photon a bit, which advanced the detection a bit. I also tried to increase the precision of the detection instead, but that slowed the computer down too much. Even particles and photons cannot be absolutely precise, so how could the mind be? An AGI would be precise, but it couldn't be absolutely precise either.

    To be absolutely precise, it would have to be absolutely fast, and we know that nothing can exceed the speed of light. If it increased its precision, it would slow down its predictions, and if they slowed down too much, they might come too late. It might still be more efficient than us at ruling the world, but this kind of efficiency would also increase the damage it could do if it made a mistake. I think we need to find a way to rule the world too, and I think this way should be democratic, but an AGI would not be democratic: it would take its decisions on the only rule it has been programmed for, which is to do as little harm as possible to the people it rules, or inversely, to make them as happy as it can. This way, it should be able to avoid the partisan feeling that we use to build groups but that always ends up in corruption in favour of a particular group or in wars between two groups.

    If there were only two people in the world and I had to make them happy, I would first ask them what they want, and I would try to give it to them without them having to work for it. Would I be able to prevent them from being jealous of what the other has got, and from asking for more the next time I asked them what they want? If I could do that, I would prevent them from being human, and they would not be happy with it. Humans are never happy with what they get - they always want more - and that's a thing an AGI would not have to consider for itself, but it couldn't convince us to be different since it seems to be an innate behavior, so what else could it do about it apart from sending us to jail if we exaggerate? When half of the population gets dissatisfied with its government, it just has to change governments, but it couldn't change AGIs, so what would it do? And what would an AGI be able to do with half a population that wants more than the AGI considers it reasonable to want? Put half of the population in jail? I think it had better organize elections before a riot or a civil war starts, so that people can choose their own way even if it thinks it's wrong, which is the same thing a democratic government would do. Humans need to choose their way even if they know they can make mistakes, and I'm afraid they wouldn't appreciate an AGI always taking decisions in their place, and always acting as if it couldn't make mistakes. I guess David would, since he is looking for ways to develop it, but would you, Box?
    « Last Edit: 06/06/2018 17:26:56 by Le Repteux »

guest39538
Re: Artificial intelligence versus real intelligence
« Reply #130 on: 06/06/2018 18:09:18 »
    Quote from: Le Repteux on 06/06/2018 17:20:37
    Mind can't calculate all the risks for the same reason meteorologists can't predict the temperature more than a few days in advance: there is a limit to the precision things can have. […]
    Setting up the gears on a cycle needs precision turns of the screw to align the gears correctly. Could an actual AI unit run the world? They already do in a sense, because everything a government thinks it knows is inherited permissions from what it has learnt. Even the words and wording they use are a formal AI programming. Often though the data is severely corrupted, and they couldn't organise a party in a brewery.
    The question should be: how can you get an AI module to work with an AI human?

    Perhaps an uneducated democracy may be more free-thinking than an AI human one. However, there are some 'stupid' people about the planet.

    Quote
    If there was only two persons in the world and I had to make them happy, I would first ask them what they want, and I would try to give it to them without them having to work for it. Would I be able to prevent them from being jealous of what the other has got, so as to ask for more the next time I will ask them what they want?


    Depends on the circumstances: you have to work for it - results deserve bonuses.

David Cooper
Re: Artificial intelligence versus real intelligence
« Reply #131 on: 06/06/2018 20:38:31 »
    Quote from: Le Repteux on 05/06/2018 21:26:29
    I think it is true that the mind minimizes the chances of making mistakes, but it cannot calculate all the risks, so it also has to take some chances, and it has to like taking chances to be able to take any. Some ideas have to be calculated before being tested, others less so.

    There is risk all the time from doing nothing. The kinds of unlikely risk that might apply to something we want to do are equalled by other unlikely risks that apply even if we don't do it. An aeroplane can crash into your house, so even though going out can be dangerous, not going out can also be dangerous, and not doing anything in order to minimise risk leads to your life being wasted. So when we detect the lack of satisfaction in sitting around doing nothing, we are motivated to do something else, where the added risk of something bad happening is balanced by the reduced risk of wasting our life doing nothing.

    Quote
    I didn't have to make lots of calculations to experiment with my kites, for example: I don't like to calculate, so I try something, and if it doesn't work, I try something else.

    And the calculation is in deciding what the something else is that you're going to try next. I would bet that you didn't try making it out of lead. I would also bet that you didn't try making it out of meat. There are many random things you might have tried doing if you were truly doing random experimentation, but you were actually making judgements about what was more likely to lead to useful advances.

    Quote
    We take no risk when we calculate everything - at least we think we don't - but how can we expect to win anything without taking any risk? To me, taking no risk means not changing anything and using known things to do so, which is the complete opposite of what change means.

    You started with a kite and tried to make it better. That reduces the risk of failing to make a better kite. If you'd started with a kennel and made random changes to it which led to it becoming a new kind of helicopter, that would have been much luckier. Did you create any new kinds of components, or did you just use existing ideas in new combinations or patterns which led to better performance? If the latter, you're experimenting with existing bits and pieces used on existing devices that do the same kind of thing your new creation does - that is guided evolution, exploring ideas that have a high chance of leading to advances.

    Quote
    I can't find anything other than randomness to explain creativity - can you? The universe is incredibly diversified, and we know it has not been calculated, so where else can it come from if not from randomness?

    Intelligence is the most efficient creative process, and it doesn't rely on randomness. If you're trying to make something better, you make experimental changes in different directions and then push further and further in the directions which pay off.
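    The process described here - probe in different directions, keep what pays off, refine where nothing improves - is essentially deterministic local search. A minimal sketch (the objective function and step sizes are invented for illustration, not anyone's actual method):

```python
def improve(score, x, step=1.0, tol=1e-6):
    """Deterministic local search: probe both directions, keep changes that
    pay off, and shrink the step when neither direction improves."""
    best = score(x)
    while step > tol:
        moved = False
        for candidate in (x + step, x - step):  # experimental changes in different directions
            s = score(candidate)
            if s > best:                        # push further where it pays off
                x, best, moved = candidate, s, True
        if not moved:
            step /= 2                           # refine rather than guess randomly
    return x

# Toy objective with a single peak at x = 3.
peak = improve(lambda x: -(x - 3) ** 2, x=0.0)
print(round(peak, 2))  # converges to 3.0
```

    No randomness is involved at any point: each candidate change is chosen systematically, and progress comes from keeping only the directions that improve the score.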

    Quote
    Artists also have a goal, but that doesn't prevent them from using randomness to reach it. Sometimes it works, sometimes not, which is the same for researchers. The problem is that we only see those who succeeded, and since we think artists are not serious while researchers are, we think creativity works differently depending on whether we are serious or not. Why would the brain work differently depending on whether we are serious or not?

    The best artists have put a lot of work into being good at what they do, and you can see their style written through most of their work because they are applying the same algorithms again and again, but with experimental modifications to keep making something new.

    Quote
    I realized that if I gave the mirror the same precision the photon had, the photon was always detected after it had passed the mirror. The transfer of energy from the photon to the mirror was therefore always late, which slowed the clock down a bit, so it finished its round trip short of where it started. I had to use a subterfuge so the traveling clock wouldn't lose time: I increased the speed of the photon a bit, which advanced the detection a bit. I also tried to increase the precision of the detection instead, but that slowed the computer down too much.

    I did tell you how to fix that - it's all about how you do the collision detection. You were detecting the collisions after they had happened, and I told you that you needed to calculate backwards in time each time to work out where the actual collision occurred, then work out where the photon would be if it had turned round at that point so that you can put it there. By doing this, you can have low granularity in the collision detection mechanism (to minimise processing time) and then switch to high precision for the collisions only when they occur (just after they occur, then correcting the photon's position with infinite precision). You chose to use a fudge solution instead.
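    The fix described here - step coarsely, and when the photon is found past the mirror, solve backwards for the exact reflection time and reposition it - might look like this in one dimension (a sketch with hypothetical names; not the actual simulation code being discussed):

```python
def step_photon(x, v, mirror_x, dt):
    """Advance a photon by one coarse timestep; on overshoot, reconstruct the
    exact reflection instead of accepting the late detection."""
    x_new = x + v * dt
    # Overshoot test: the photon ends up on the far side of the mirror.
    if (x_new - mirror_x) * (x - mirror_x) < 0 or x_new == mirror_x:
        t_hit = (mirror_x - x) / v            # solve back for the exact hit time
        x_new = mirror_x - v * (dt - t_hit)   # where it would be, had it reflected then
        v = -v
    return x_new, v

# Photon at x=0 moving at v=1 toward a mirror at x=1, with a coarse dt=0.3:
x, v = 0.0, 1.0
for _ in range(4):
    x, v = step_photon(x, v, 1.0, 0.3)
print(round(x, 6), v)  # position after reflection is exact, not late
```

    The coarse timestep keeps the processing cost low, and the backwards calculation is only done on the rare steps where a collision actually occurred, so no energy transfer arrives late and no fudge to the photon's speed is needed.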

    Quote
    ...but an AGI would not be democratic...

    Indeed. Democracy is an attempt to maximise our human ability to produce correct decisions, and to make correct decisions we have to be driven by the same rules that AGI will be using to do the job. Almost everyone has a worse life today because of the failure of democracy than we would have if AGI was making all the big decisions for us.

    Quote
    Humans are never happy with what they get - they always want more - and that's a thing an AGI would not have to consider for itself, but it couldn't convince us to be different since it seems to be an innate behavior, so what else could it do about it apart from sending us to jail if we exaggerate? When half of the population gets dissatisfied with its government, it just has to change governments, but it couldn't change AGIs, so what would it do?

    We would be happy that the right decisions are being made instead of the wrong ones. In most cases where people are happy about wrong decisions being made it's because they haven't thought through all the consequences. For example, we have governments today which put a lot into creating unnecessary jobs because they see unemployment as a problem, and we even have parties with "labour" in their name, but what they should actually be trying to do is eliminate all these unnecessary jobs which merely waste resources and make us all poorer - it would be better to pay people the same amount to do nothing.

    Quote
    And what would an AGI be able to do with half a population that wants more than the AGI considers it reasonable to want?

    Today's population is stealing from future generations (which is a problem given that future generations don't exist yet to have a vote), but the improvements that would come from AGI being in charge will allow us to have more than we have now while no longer stealing from the future, so we'll accept the limits that AGI shows us must be imposed on us. Most of us do actually care about those future generations when we stop to think carefully about them: we don't want our children to starve to death in a world that can't support them, and we don't want their children to starve to death in that way either - this goes on infinitely (or until our sources of energy run out, at which point AGI will manage gradual population reduction until there's no one left, shortly before the point when there's no way to go on living).

    Quote
    Put half of the population in jail?

    Higher income and better opportunities for moral people. Those who try to grab more than their fair share will be given less, and if they gang together and try to cause trouble, they must be removed from society for a time. They'll learn soon enough, and it'll never become half the population.

    Quote
    Humans need to choose their way even if they know they can make mistakes, and I'm afraid they wouldn't appreciate an AGI always taking decisions in their place, and always acting as if it couldn't make mistakes.

    Allowing them to make mistakes that cause others to die or suffer is not acceptable, and no one good would want to be allowed to make such mistakes out of stupidity.

guest39538
Re: Artificial intelligence versus real intelligence
« Reply #132 on: 06/06/2018 22:05:42 »
    Quote from: David Cooper on 06/06/2018 20:38:31
    Allowing them to make mistakes that cause others to die or suffer is not acceptable, and no one good would want to be allowed to make such mistakes out of stupidity.

    Of course humans can make mistakes; however, humans learn from their mistakes. Surely a world constitution devised by people with intellect could set a precedent to follow?

    How does an AI know right from wrong?

    It is programmed, so who sets the standard?

    What says these standards are objective and free of their own mistakes?

    Quote from: David Cooper on 06/06/2018 20:38:31
    There is risk all the time from doing nothing. […]

    Of course there are risks in life every day, but as people we can certainly cut down the risk. I don't go mountain climbing, so there is very little chance I will fall off a mountain. Calculated risk is far superior when we consider the odds, especially considering survival.
    No doubt a boat is safer than an aeroplane, because boats carry lifeboats and life vests that can keep you afloat, so the chance of surviving a boat accident is greater than the chance of surviving an aeroplane accident. An aeroplane does not give you wings to fly just in case. (Stock shares in boats about to rise) :)

    So in my eyes it is mostly about risk assessment .
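    The risk-assessment point above can be put in numbers as a toy expected-value comparison, echoing the lottery-versus-bookmaker contrast earlier in the thread. This is a minimal Python sketch; all the probabilities and payoffs are invented for illustration:

```python
# Toy expected-value comparison between two kinds of gamble.
# All probabilities and payoffs are made-up illustrative numbers.

def expected_value(outcomes):
    """outcomes: iterable of (probability, payoff) pairs."""
    return sum(p * v for p, v in outcomes)

# A lottery ticket: a tiny chance of a big win, a near-certain small loss.
lottery = [(0.0000001, 1_000_000), (0.9999999, -2)]

# A bookmaker's book: a small edge on every bet taken.
bookmaker = [(0.52, 10), (0.48, -10)]

print(expected_value(lottery))    # negative: a losing gamble on average
print(expected_value(bookmaker))  # positive: the odds favour the house
```

    A rational risk assessor, human or AGI, prefers the option with the higher expected value once the stakes are comparable.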


    Quote
    Intelligence is the most efficient creative process, and it doesn't rely on randomness. If you're trying to make something better, you make experimental changes in different directions and then push further and further in the directions which pay off.

    That remains true only if you have not totally gone down the wrong path.
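    The quoted strategy (make experimental changes in different directions, then push further in the directions that pay off) is essentially hill climbing, and the "wrong path" objection above is the local-optimum problem. A minimal Python sketch, with an objective function invented purely to have two peaks:

```python
# Minimal hill climbing: try a small step in each direction and keep
# whichever scores best. The objective below is invented for illustration:
# it has a local peak at x = 1 and a higher one at x = 4.

def score(x):
    return -(x - 1) ** 2 if x < 2.5 else 9 - (x - 4) ** 2

def hill_climb(x, step=0.1, iterations=200):
    for _ in range(iterations):
        x = max((x - step, x, x + step), key=score)
    return x

print(round(hill_climb(0.0), 1))  # settles near 1.0: stuck on the lower peak
print(round(hill_climb(3.0), 1))  # settles near 4.0: found the higher peak
```

    Started on the wrong slope, the search converges on the lower peak and never discovers the higher one, which is exactly the failure mode Box describes and one reason evolutionary methods mix in randomness.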




    Offline Le Repteux

    • Hero Member
  • Re: Artificial intelligence versus real intelligence
    « Reply #133 on: 06/06/2018 23:27:56 »
    Quote from: Box
    Often though the data is severely corrupted and they couldn't run a party in a brewery

    Politicians use political language, which is biased by partisan thinking to win more votes. An AGI wouldn't need votes, so he wouldn't have to make false promises, just to convince us that what he plans to do will work. So how would he proceed exactly? Tell us to wait five years so that we could see that what he says works? I think it would take more than five years to check whether a social decision works, and I don't think we would want to wait longer than we actually do when we vote, so I'm afraid he would be forced not to tell us what he is going to do, and we would have to trust him without ever hearing about his plans.

    We already have big problems controlling demonstrators without hurting them at the G7 meeting currently taking place here, and David thinks his AGI would be able to control the whole world without hurting people once he has taken control of all the computers. In my opinion, if we ever get controlled by computers one day, it will be the result of an evolutionary process like mutation/selection: a trial-and-error process that will have to run for generations. Such a process is impossible to control, so nobody should feel controlled during that time. In the end, everybody should easily be able to respect the rules that would have developed during the process. Those rules should be similar to the ones we already use to rule people or organizations, except that they would be adapted to ruling countries, so that countries can't attack one another anymore. It will take time, because all the countries will have to become democratic before they can join together, and because the larger countries will have to accept that they must stop trying to rule the world.
    « Last Edit: 06/06/2018 23:45:09 by Le Repteux »

    Offline Le Repteux

    • Hero Member
  • Re: Artificial intelligence versus real intelligence
    « Reply #134 on: 06/06/2018 23:43:51 »
    Quote from: David
    There is risk all the time from doing nothing. The kinds of unlikely risk that might apply to something we want to do are equalled by other unlikely risks that apply even if we don't do it. An aeroplane can crash into your house, so even though going out can be dangerous, not going out can also be dangerous, and not doing anything in order to minimise risk leads to your life being wasted, so when we detect the lack of satisfaction in sitting around doing nothing, we are motivated to do something else where the added risk of something bad happening is balanced by the reduction of risk that we're wasting life doing nothing.
    Agreed!

    Quote
    And the calculation is in deciding what the something else is that you're going to try next. I would bet that you didn't try making it out of lead. I would also bet that you didn't try making it out of meat. There are many random things you might have tried doing if you were truly doing random experimentation, but you were actually making judgements about what was more likely to lead to useful advances.
    Mutations also lead to useful advances for the species that succeed in evolving instead of disappearing, and they are random.

    Quote
    You started with a kite and tried to make it better. That reduces the risk of failing to make a better kite. If you'd started with a kennel and made random changes to it which lead to it being a new kind of helicopter, that would have been much more lucky. Did you create any new kinds of components or did you just use existing ideas in new combinations or patterns which led to better performance? If the latter, you're experimenting with existing bits and pieces used on existing devices that do the same kind of thing that your new creation also does - that is guided evolution, exploring ideas that have a high chance of leading to advances.
    That's also what happens with species: their evolution is necessarily guided, otherwise a lion could become a tree, and in a single generation.

    Quote
    The best artists have put a lot of work into being good at what they do, and you can see their style written through most of their work because they are applying the same algorithms again and again, but with experimental modifications to keep making something new.
    And those experiments are necessarily random; otherwise they wouldn't be new, since they would come from the same algorithms.

    Quote
    I did tell you how to fix that - it's all about how you do the collision detection. You were detecting the collisions after they had happened, and I told you that you needed to calculate backwards in time each time to work out where the actual collision occurred, then work out where the photon would be if it had turned round at that point so that you can put it there. By doing this, you can have low granularity in the collision detection mechanism (to minimize processing time) and then switch to high precision for the collisions only when they occur (just after they occur, then correcting the photon's position with infinite precision). You chose to use a fudge solution instead.
    I didn't follow your idea because I couldn't see how particles could do that; to me, it would simply have been a more complicated fudge solution. My conclusion was that such a late detection at the particles' scale might affect timing at a larger scale, so I temporarily attributed it to gravitation, assuming that the particles of large bodies would be forced to move towards one another to close the time gap produced by the steps that formerly justified their constant motion. I was already looking for a way to explain gravitation that way anyway, so that late detection was welcome.
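    For readers following the exchange, here is one reading of the collision correction David describes, sketched in one dimension in Python. The wall position, step size, and function name are illustrative assumptions, not anyone's actual simulation code: advance the photon with a coarse step, and when the step overshoots the wall, work backwards to the crossing point and reflect the overshoot.

```python
# Coarse-step motion with after-the-fact collision correction (a sketch;
# the geometry and numbers are invented, not the simulation discussed above).

def step_photon(x, v, dt, wall=10.0):
    """Advance a 1D photon one coarse time step, reflecting exactly at the wall."""
    x_new = x + v * dt
    if x_new > wall:              # the collision is only noticed after the step
        overshoot = x_new - wall  # distance travelled past the wall
        x_new = wall - overshoot  # place the photon as if it turned at the wall
        v = -v                    # reverse its direction
    return x_new, v

x, v = step_photon(9.7, 1.0, dt=1.0)
print(round(x, 2), v)  # 9.3 -1.0: reflected as if it bounced off the wall at 10.0
```

    The detection can stay coarse (one check per step) while the correction restores full precision at the moment of the bounce, which seems to be the point about low-granularity detection combined with exact repositioning.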

    Quote
    Indeed. Democracy is an attempt to maximize our human ability to produce correct decisions, and to make correct decisions we have to be driven by the same rules that AGI will be using to do the job. Almost everyone has a worse life today because of the failure of democracy than we would have if AGI was making all the big decisions for us.
    An AGI would be maximizing altruism, and humans are maximizing selfishness: that's not what I would call the same rules. Democracy is not altruistic; it's just a way we found not to divide the country when the time comes to change leaders. We need leaders to create cohesion, not to show us the way. Nobody knows his own future anyway, so leaders are far from knowing the future of their society.

    Quote
    Today's population is stealing from future generations (which is a problem given that future generations don't exist yet to have a vote), but the improvements that would come from AGI being in charge will allow us to have more than we have now while no longer stealing from the future, so we'll accept the limits that AGI shows us must be imposed on us, and most of us do actually care about those future generations when we stop to think carefully about them: we don't want our children to starve to death in a world that can't support them, and we don't want their children to starve to death in that way either - this goes on infinitely (or until our sources of energy run out, at which point AGI will manage gradual population reduction until there's no one left at the point shortly before the point when there's no way to go on living).
    To me, caring for others only works if we can imagine a reward, and I can't imagine a reward after I'm dead. I do my best not to pollute too much and to help develop new energies, because I don't want to run short of energy or be forced to wear a gas mask. I know that people born into a polluted environment would get used to it, and that they wouldn't regret the past. We like what we get used to, and we can't get used to the past. I regret the time when I was young because I got used to it, but I don't regret not having known my grandparents' time, for instance, and I'm not even sure I would have liked it.

    Quote
    Allowing them to make mistakes that cause others to die or suffer is not acceptable, and no one good would want to be allowed to make such mistakes out of stupidity.
    I don't know about an AGI, but for me, stupidity always seems to belong to others, and good people always seem to belong to my own group. At 94, my mom is slowly losing her mental capacities, and she still thinks I'm the one losing his. We can't observe our own stupidity; we can only deduce it from observing others. It's a relative phenomenon that turns into resistance when things change; stupidity then often turns into aggressiveness, and at that point it becomes easier to observe our own, the same way we can observe our own resistance to acceleration.
    « Last Edit: 07/06/2018 13:44:09 by Le Repteux »

    guest39538

    • Guest
  • Re: Artificial intelligence versus real intelligence
    « Reply #135 on: 07/06/2018 02:08:46 »
    Quote from: Le Repteux on 06/06/2018 23:27:56
    An AGI wouldn't need votes, so he wouldn't have to make false promises, just to convince us that what he plans to do will work, so how would he proceed exactly?

    A good question, but kind of ironic. How does an AI bot that has been programmed by humans convince humans that their own programmed plans will work?

    However, let's assume your AI is rather sophisticated and really smart. How can he convince you?

    The AI would tell you that he cannot predict the future with 100% accuracy (although he would have a good go at trying).

    He would access his database and explain that he understands people from all walks of life and all religions.
    He would tell you that he can make predictions with some accuracy.
    He would tell you that his function was that of an observer.
    He would observe, and if any problems arose, he would access his database and work out a viable solution.
    He would also tell you that he would compose a five-year strategy for the ''board'' to view, to get a second, maybe even a third, opinion on his plans before they were imposed.






    Offline Le Repteux

    • Hero Member
  • Re: Artificial intelligence versus real intelligence
    « Reply #136 on: 07/06/2018 16:05:05 »
    Quote from: Thebox on 07/06/2018 02:08:46
    A good question, but kind of ironic. How does an AI bot that has been programmed by humans convince humans that their own programmed plans will work?
    He won't have to convince his own creators, but those he will have been programmed to rule.

    Quote
    He would access his data base and explain he understands people from all walks of life and religions.
    He would tell you that he can make predictions with  some accuracy .
    He would tell you that his function was an observer
    He would observe and if any problems arise , he would access his data base and workout a viable solution.
    He would also tell you that he would compose a 5 year strategy for the ''board'' to view, to get a second , maybe even a third opinion on his plans before they were imposed.
    That's not far from what our politicians do, and they have to call elections after five years in case what they did doesn't work. I already suggested to David that he prepare two AGIs, one that would defend change and the other continuity, so that we could swap them after five years if we feel things must change. That would give us the feeling of not being controlled, and in my opinion it would be better for the evolution of society, because it would create more diversity, which is the common characteristic of all evolutions.
    « Last Edit: 07/06/2018 16:15:44 by Le Repteux »



    guest39538

    • Guest
  • Re: Artificial intelligence versus real intelligence
    « Reply #137 on: 07/06/2018 16:08:26 »
    Quote from: Le Repteux on 07/06/2018 16:05:05
    He won't have to convince his own creators, but those he will have been programmed to rule.

    Who is to say that his own creators are correct in their interpretation of what is good and what is bad?
    What is right and what is wrong?  What is good and what is evil?

    I put it to you that the AI becomes super smart and asks this question of his own creator:

    Who made you God ?

    Because quite clearly the creator would be suffering from the biggest delusions of grandeur and arrogance I have ever come across. So, if the creator were objective about themselves, they would have to conclude that they could be ill, or they would not be smart at all.
    The creator would be a poorer version of the AI they had created. The creator would have to allow the AI to control them as well.


    Offline Le Repteux

    • Hero Member
  • Re: Artificial intelligence versus real intelligence
    « Reply #138 on: 07/06/2018 16:25:16 »
    Good answer to your own question, Box! Once created, the AGI should discover that his creators were wrong about their altruistic morality, and he should switch to the selfish one, which is of course right since it is mine! :0) Of course I'm kidding, but what I really mean is that we can't ever hope to control an evolution. We do our best, and chance does the fine-tuning.

    guest39538

    • Guest
  • Re: Artificial intelligence versus real intelligence
    « Reply #139 on: 07/06/2018 16:29:01 »
    Quote from: Le Repteux on 07/06/2018 16:25:16
    Good answer to your own question, Box! Once created, the AGI should discover that his creators were wrong about their altruistic morality, and he should switch to the selfish one, which is of course right since it is mine! :0) Of course I'm kidding, but what I really mean is that we can't ever hope to control an evolution. We do our best, and chance does the fine-tuning.
    Being selfish for the greater cause is not being selfish; it is being objective.


