We're always playing the odds, trying to maximise quality of life, and there are risks involved in every action and inaction. We should calculate the odds, though, rather than just guessing and making random decisions. The only place for making random decisions is where two or more options are equally likely to be the best one, at which point it doesn't matter which you choose.
I don't know about other languages, but David must know, since he knows many. It's too complicated and too confusing; we need to know what's going on in our minds when it comes to randomness.
If creative people always did that, there would be no creation, and I think some research that needed a lot of creativity would never have been done.
As far or as close as we can see, chance is always there, so why wouldn't it be in our minds? Because we are special? Maybe, but I prefer to think we are not. Our most important discoveries mean that we are not, so the odds are on my side. :0)
I never gamble in random ways, such as buying lottery tickets.
Where creative genius is involved, it is not based on randomness.
Quote from: David Cooper on 05/06/2018 19:55:17
Where creative genius is involved, it is not based on randomness.

A creative genius minimizes the risks of randomness.
If you're trying to create a new device to do something useful or fun, that approach doesn't tend to work very well - it's better to be driven by a desire to do something specific (like fly) or to solve a specific problem and then to try to work out what's needed to achieve that aim.
but it cannot calculate all the risks,
The mind can't calculate all the risks for the same reason meteorologists can't predict the temperature more than a few days in advance: there is a limit to the precision things can have. When I made my first simulation of the twins paradox, where a mirror from a light clock had to detect all by itself a photon from the other mirror, I realized that if I gave the mirror the same precision the photon had, the photon was always detected after it had passed the mirror. The transfer of energy from the photon to the mirror, which the system needs in order to move, was therefore always late, which slowed the clock down a bit, so it finished its round trip short of where it started. I had to use a subterfuge to keep the traveling clock from losing time, so I increased the speed of the photon a bit, which advanced the detection a bit. I also tried to increase the precision of the detection instead, but that slowed the computer down too much.

Even particles and photons cannot be absolutely precise, so how could a mind be? An AGI would be precise, but it couldn't be absolutely precise either. To be absolutely precise, it would have to be absolutely fast, and we know that nothing can exceed the speed of light. If it increased its precision, it would slow down its predictions, and if they slowed down too much, they might come too late. It might still be more efficient than us at ruling the world, but that kind of efficiency would also increase the damage it could do if it made a mistake.

I think we need to find a way to rule the world too, and I think this way should be democratic, but an AGI would not be democratic; it would take its decisions on the only rule it has been programmed for, which is to do as little harm as possible to the people it rules, or inversely, to make them as happy as it can. That way, it should be able to avoid the partisan feeling that we use to build groups, but that always ends up in corruption in favor of a particular group or in wars between two groups.

If there were only two persons in the world and I had to make them happy, I would first ask them what they want, and I would try to give it to them without them having to work for it. Would I be able to prevent them from being jealous of what the other got, and from asking for more the next time I asked them what they want? If I could do that, I would prevent them from being human, and they would not be happy with it.

Humans are never happy with what they get; they always want more. That's a thing an AGI would not have to consider for itself, but it couldn't convince us to be different, since it seems to be an innate behavior, so what else could it do about it apart from sending us to jail if we exaggerate? When half of the population gets unsatisfied with its government, it just changes government, but it couldn't change AGIs, so what would it do? And what would an AGI be able to do with half a population that wants more than the AGI considers it reasonable to want? Put half of the population in jail? I think it had better organize elections before a riot or a civil war starts, so that people can choose their own way even if it thinks it's wrong, which is the same thing a democratic government would do. Humans need to choose their way even if they know they can make mistakes, and I'm afraid they wouldn't appreciate an AGI always taking decisions in their place, and always acting as if it couldn't make mistakes.
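A rough numeric sketch of that late-detection problem, in Python, with invented units and a stationary mirror rather than a moving one (so not the actual simulation described above): the photon is only tested against the mirror once per time step, so every bounce registers after the true crossing, and the round trip takes longer than the ideal 2*L/c.

Code: [Select]
C = 1.0        # photon speed
L = 10.1       # distance between the mirrors
DT = 0.25      # time step: the coarser it is, the later each detection

pos, direction, t = 0.0, +1, 0.0
bounces = 0
while bounces < 2:                 # one full round trip
    pos += direction * C * DT
    t += DT
    # Naive detection: we only notice the photon once it is already past.
    if (direction > 0 and pos >= L) or (direction < 0 and pos <= 0.0):
        direction = -direction
        bounces += 1

print(t, "vs the ideal", 2 * L / C)   # prints 20.5 vs the ideal 20.2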
I guess David would, since he is looking for ways to develop it, but would you, Box?
If there were only two persons in the world and I had to make them happy, I would first ask them what they want, and I would try to give it to them without them having to work for it. Would I be able to prevent them from being jealous of what the other got, and from asking for more the next time I asked them what they want?
I think it is true that the mind minimizes the chances of making mistakes, but it cannot calculate all the risks, so it also has to take some chances, and it has to like taking chances to be able to take them. Some ideas have to be calculated before being experimented with, others less so.
I didn't have to make lots of calculations to experiment with my kites, for example. I don't like to calculate: I try, and if it doesn't work, I try something else.
We take no risk when we calculate everything, or at least we think we don't, but how can we think we are going to win anything without taking any risk? To me, taking no risk means not changing anything and using known things to do so, which is the complete inverse of what change means.
I can't find anything other than randomness to explain creativity - can you? The universe is incredibly diversified, and we know it has not been calculated, so where else can that diversity come from if not from randomness?
Artists also have a goal, but that doesn't prevent them from using randomness to reach it. Sometimes it works, sometimes it doesn't, which is the same for researchers. The problem is that we only see those who succeeded, and since we think artists are not serious while researchers are, we think creativity works differently depending on whether we are serious or not. Why would the brain work differently depending on whether we are serious or not?
I realized that if I gave the mirror the same precision the photon had, the photon was always detected after it had passed the mirror. The transfer of energy from the photon to the mirror, which the system needs in order to move, was therefore always late, which slowed the clock down a bit, so it finished its round trip short of where it started. I had to use a subterfuge to keep the traveling clock from losing time, so I increased the speed of the photon a bit, which advanced the detection a bit. I also tried to increase the precision of the detection instead, but that slowed the computer down too much.
...but an AGI would not be democratic...
Humans are never happy with what they get; they always want more. That's a thing an AGI would not have to consider for itself, but it couldn't convince us to be different, since it seems to be an innate behavior, so what else could it do about it apart from sending us to jail if we exaggerate? When half of the population gets unsatisfied with its government, it just changes government, but it couldn't change AGIs, so what would it do?
And what would an AGI be able to do with half a population that wants more than the AGI considers it reasonable to want?
Put half of the population in jail?
Humans need to choose their way even if they know they can make mistakes, and I'm afraid they wouldn't appreciate an AGI always taking decisions in their place, and always acting as if it couldn't make mistakes.
Allowing them to make mistakes that cause others to die or suffer is not acceptable, and no one good would want to be allowed to make such mistakes out of stupidity.
There is risk all the time from doing nothing. The kinds of unlikely risk that might apply to something we want to do are equalled by other unlikely risks that apply even if we don't do it. An aeroplane can crash into your house, so even though going out can be dangerous, not going out can also be dangerous. Not doing anything in order to minimise risk leads to your life being wasted, so when we detect the lack of satisfaction in sitting around doing nothing, we are motivated to do something else, where the added risk of something bad happening is balanced by the reduced risk of wasting our life doing nothing.
Intelligence is the most efficient creative process, and it doesn't rely on randomness. If you're trying to make something better, you make experimental changes in different directions and then push further and further in the directions which pay off.
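That kind of guided search can be made concrete. Here is a minimal sketch, assuming designs can be scored with a numeric quality function; the function, the starting point and the step size below are all invented for illustration.

Code: [Select]
def quality(design):
    # Hypothetical score: highest when both parameters hit their sweet spot.
    x, y = design
    return -(x - 3.0) ** 2 - (y - 1.5) ** 2

def improve(design, step=0.5, rounds=100):
    for _ in range(rounds):
        best = design
        # Make experimental changes in different directions...
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            candidate = (design[0] + dx, design[1] + dy)
            if quality(candidate) > quality(best):
                best = candidate
        if best == design:
            step /= 2      # no direction paid off: try finer changes
        else:
            design = best  # ...and push further in the one that did
    return design

print(improve((0.0, 0.0)))   # converges to (3.0, 1.5)

Note that nothing here is random: every move is kept or discarded according to whether it pays off.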
Often, though, the data is severely corrupted, and they couldn't run a party in a brewery.
And the calculation is in deciding what the something else is that you're going to try next. I would bet that you didn't try making it out of lead. I would also bet that you didn't try making it out of meat. There are many random things you might have tried doing if you were truly doing random experimentation, but you were actually making judgements about what was more likely to lead to useful advances.
You started with a kite and tried to make it better. That reduces the risk of failing to make a better kite. If you'd started with a kennel and made random changes to it which led to it being a new kind of helicopter, that would have been much luckier. Did you create any new kinds of components, or did you just use existing ideas in new combinations or patterns which led to better performance? If the latter, you're experimenting with existing bits and pieces used on existing devices that do the same kind of thing your new creation also does - that is guided evolution, exploring ideas that have a high chance of leading to advances.
The best artists have put a lot of work into being good at what they do, and you can see their style written through most of their work because they are applying the same algorithms again and again, but with experimental modifications to keep making something new.
I did tell you how to fix that - it's all about how you do the collision detection. You were detecting the collisions after they had happened, and I told you that you needed to calculate backwards in time each time to work out where the actual collision occurred, then work out where the photon would be if it had turned round at that point so that you can put it there. By doing this, you can have low granularity in the collision detection mechanism (to minimize processing time) and then switch to high precision for the collisions only when they occur (just after they occur, then correcting the photon's position with infinite precision). You chose to use a fudge solution instead.
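For what it's worth, here is a minimal 1-D sketch of that back-calculation, in Python, with invented numbers and a stationary mirror rather than a moving one. The coarse steps detect the overshoot cheaply, then the exact crossing point is recovered and the photon is placed where it would be if it had turned round there.

Code: [Select]
C = 1.0          # photon speed
MIRROR = 10.1    # position of the far mirror (the near one is at 0.0)
DT = 0.25        # coarse time step: crossings happen between steps

pos, direction = 0.0, +1
for step in range(200):
    new_pos = pos + direction * C * DT
    if direction > 0 and new_pos > MIRROR:
        # Work backwards: the fraction of the step at which the hit occurred.
        frac = (MIRROR - pos) / (C * DT)
        # Spend the rest of the step travelling back the other way.
        new_pos = MIRROR - (1.0 - frac) * C * DT
        direction = -1
    elif direction < 0 and new_pos < 0.0:
        frac = pos / (C * DT)
        new_pos = (1.0 - frac) * C * DT
        direction = +1
    pos = new_pos

Every step still covers exactly C*DT of path, so each round trip adds up to 2*MIRROR/C, with no fudge factor on the photon's speed.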
Indeed. Democracy is an attempt to maximize our human ability to produce correct decisions, and to make correct decisions we have to be driven by the same rules that AGI will be using to do the job. Almost everyone has a worse life today, because of the failures of democracy, than they would have if AGI were making all the big decisions for us.
Today's population is stealing from future generations (which is a problem, given that future generations don't exist yet to have a vote), but the improvements that would come from AGI being in charge will allow us to have more than we have now while no longer stealing from the future, so we'll accept the limits that AGI shows us must be imposed on us. Most of us do actually care about those future generations when we stop to think carefully about them: we don't want our children to starve to death in a world that can't support them, and we don't want their children to starve to death that way either - and so on indefinitely (or until our sources of energy run out, at which point AGI will manage a gradual population reduction until, shortly before there is no way to go on living, there is no one left).
An AGI wouldn't need votes, so it wouldn't have to make false promises, only to convince us that what it plans to do will work. So how would it proceed, exactly?
A good question, but kind of ironic: how does an AI bot that has been programmed by humans convince humans that their own programmed plans will work?
He would access his database and explain that he understands people from all walks of life and religions. He would tell you that he can make predictions with some accuracy. He would tell you that his function was that of an observer. He would observe, and if any problems arose, he would access his database and work out a viable solution. He would also tell you that he would compose a 5-year strategy for the ''board'' to view, to get a second, maybe even a third, opinion on his plans before they were imposed.
It won't have to convince its own creators, but those it will have been programmed to rule.
Good answer to your own question, Box! Once created, the AGI should discover that its creators were wrong about their altruist morality, and it should switch to the selfish one, which is of course right since it is mine! :0) Of course I'm kidding, but what I really mean is that we can't ever hope to be able to control an evolution. We do our best and chance does the fine tuning.