New Theories / Re: How can I write a computer simulation to test my theory
« on: 20/03/2019 15:28:57 »
AGI will be like a superparent of all mankind. Watch out when the whole of mankind wants to jump off the nest. :0) Humans are visibly programmed to fly on their own wings around 18, and to counter the attractive force, they develop a repulsive one. No matter how comfortable the nest was, they visibly need something else. They become so aggressive that their parents sometimes start to hate them. If they went on listening, they could probably stay home all their lives, but they can't; they have to make their own lives. Trying to control them at that moment can be critical: they can leave too soon and end up on the street. It's fortunate, though, that youngsters behave like that, otherwise society would not be so diversified. We're programmed to change places when we get bored, and we're programmed to feel bored depending on precise events, which is excellent for diversity. We get bored copulating with the same woman after a while, for instance, which causes us many problems, but if we didn't, we would probably miss out on a necessary genetic diversity. Scientists sometimes get bored finding nothing, so they try something else in case it might work, and it sometimes does. We like trying to get stable, we enjoy it for a while once we succeed, and then we get bored quite fast. We need to do repetitive things for a living, what we call work, but we don't like it. Once your AGI is working, we won't have to work anymore, so we will be happy for a while, but it is evident that we will get bored again after another while. Will your AGI then be programmed to push us out of the nest, and thus to stop caring for us for a while?
I saw Terminator Genisys yesterday, a film about an AGI trying to erase humans because they are getting too dangerous for it. Naturally, it's the humans that end up erasing the AGI, even if that's completely unrealistic. Too bad these films don't treat the real problems that concern artificial intelligence. American filmmakers seem to be able to treat only good and evil, as if they couldn't grow up. It would have been interesting to see a discussion between the AGI and the people about how they felt once they had everything they wanted and had no problems left to solve. I would have liked to see the AGI unable to understand why they felt bad, then see the people immediately feeling good again, then see the AGI freeze because it is unable to find out what it did right. :0) Of course, your scenario would have been different: people would have answered that they were happy, and we could read «THEY LIVED HAPPILY FOR THE REST OF ETERNITY» in the middle of the screen while the sun slowly set in the background. No more problems, no more discussion about artificial intelligence anymore. Which of the two scenarios do you think people would prefer if we presented them both? I think that those who are unhappy because they can't solve their problems would vote for yours, and the rest would vote for mine. That would be a way to find out whether a population is generally happy or not, but it would only be a snapshot.
Lately, observing my mom constantly developing wrong ideas about me without reason, I noticed that the ideas I had depended on how I felt, as if our ideas were triggered by our feelings, in such a way that if our feelings change, the way we imagine things changes. Feelings look like shortcuts through ideas. There is no need to analyse a situation for a long time when it spontaneously gives us a good or a bad feeling, for instance; the urge to analyse it only comes when the feeling is uncertain. Your AGI won't have feelings, but it will have the means to observe ours, and to use that data to decide which way it will move. Curiously, that's exactly what we do when we need to take a decision that concerns others. Our feelings then seem to be made of the feelings we observe in others. If that is so, then your AGI will obviously weigh its own feelings the same way we do.
(That question comes from the answer you just gave Phyti.) We all do that all the time in normal life; that's how things work, so why would scientists behave differently? What we should ask ourselves is how come things work this way, not how come others work this way. It's no good for knowledge to think that our mind works differently from the minds of others. What you're describing is normal resistance to change. Resistance to change is not an intelligent behavior; it's not even an instinctive behavior; it's an intrinsic subconscious behavior that belongs to anything that exists. It's mass, and it affects the mind the same way it affects particles. Nothing that exists can avoid it. You're asking others to avoid what you can't even avoid yourself. That's simply illogical. Knowing that, we should never tell others that they are wrong, but only that we think they are, because they also get the feeling that we are wrong. Resistance induces the feeling that the change has to come from the other side, but it's impossible to tell which one of us is right when a change is happening. When we think we are right, the only thing that works is thus to keep on pushing until things start to move, and that unfortunately takes time. To push harder, we can invite people to push with us, but we should never increase our own force until it hurts others; otherwise it will take even longer to convince them. Convincing others is not a question of intelligence, it's a question of coincidence. Things change when circumstances allow it. The wall of shame fell when circumstances changed. Walls don't change things, though; they just postpone them.
Why is it so hard to get people to recognise that they are breaking their own rules, when the rules of mathematics are so clear and are so clearly being broken by the models?