« on: 26/02/2019 16:03:00 »
Quote: "It's very simple. If AGI stops doing the right thing, people will find out how much suffering it was preventing and how much happiness it was making possible. Every time they ask it to take a break, lots of people will die and lots more will spend the rest of their lives grieving (and condemning the people who made AGI stop)."

That's better. It starts to look like a democratic system. What about having two AGIs representing the two directions a democracy can take, and letting us choose which way to go by a survey at the end of the year? People could organize into political parties, and their AGI would help them win the surveys. One of the two parties would prefer that things stay as they are, and the other would prefer that they change. There is no other way than surveys to rate the satisfaction of a population anyway, so I guess your AGI would be forced to use them too in order to minimize displeasure and maximize pleasure. I hadn't studied that precise point of yours yet, but I think it's time. Your AGI would necessarily have to ask people how they feel in order to know it, so its data would only be subjective. Some people get a lot of pleasure from fighting even if it hurts them, for instance. The things that we do freely need to give us pleasure, otherwise we stop doing them, so there would be no need for the AGI to ask us how we feel in that case.
It's only the pleasure we take from forcing others to do what we want that the AGI would need to prevent, and then it might prevent us from watching the good guy eliminate the bad guys on TV, since that incites us to do the same thing. Worse, it might even prevent us from defending our own ideas if it thinks doing so will produce more displeasure than pleasure in the population. You know your AGI will have your ideas about the way we should behave, so you don't see what I see. I would not agree with its ideas any more than I agree with yours, so I would try to stop it, not just discuss with it, because I think that the way it would proceed would hurt me. You think that we would change our minds after having stopped the AGI since our lives would be worse without it, and I agree with you, but not for the same reason. We already change our governments quite often, but to me, it's not because the new government is worse that we want to change it after a while; it's because, contrary to animals, we always want more, because no government can predict the future, and because about half the population thinks it's better to proceed one way and the other half the other way. Will your AGI know why we always want more? And if not, will it feed us until we literally explode? Will it know why half the population wants some change and the other half doesn't? And if not, will it nevertheless drive the herd in the same direction until it falls off the cliff?
Quote: "Evolution has no goal at all. Survival of the fittest is just a mechanism by which evolution happens."

If so, then survival of the fittest is also just a mechanism by which the evolution of ideas happens. I think you equate a goal with our will to reach it, as if there were a superior mind inside our mind that knew the right way. I prefer to think that there is no such mind, and to equate our will with our resistance to changing ideas. This way, the will of a species would be to resist change, and its goal would be to adapt to it, an outcome that is not defined in advance since it depends on a random process. You see a goal where I only see a possibility. My mom just handed me her iPad while I was writing, asking me to take a look at an email about giraffe hunting that a friend of ours visiting Africa had just sent us. The email was actually from Avaaz, thanking her for having signed a petition against wildlife hunting in Africa, but she never admitted it since she was already convinced it came from our friend. That's resistance to change. When we are persuaded that others are wrong, we don't study what they say while still feeling that we did.
You tend to attribute resistance to bad will resulting in poor analysis, but it's not bad will that is at stake here, it's resistance to change, a natural law that permits any existing phenomenon to keep on existing. The relativists can't use their will to resist our ideas since they're not conscious of resisting. Claiming that people don't want to understand simply leads to aggressive answers; worse, just trying to convince them can easily produce the same answer. My mom got angry when I tried to explain to her that she had made a mistake. It was clear to me, but it wasn't clear to her at all. The only way then is either to let her think her way, or to repeat the same flagrant thing until she begins to doubt. That's what I do when I discuss with people, since I know they have no choice but to resist, but that's also what you do even though you believe they have bad will, so I really wonder how you can. Maybe you do what your AGI would do: try to minimize displeasure and maximize pleasure. That's what I call our second-degree selfishness: we care for others as long as we can imagine that they will care for us. So your AGI would still be selfish after all, which is normal since it would be programmed by selfish humans. You probably simply imagine yourself in its place, the same way we do when we want to get along with others. It works as long as others imagine the same thing; otherwise it can go wrong quite easily.
Contrary to us, though, your AGI won't get emotional, so it will be able to repeat the same thing indefinitely until its interlocutor begins to doubt. That doesn't mean it will work, though. As I often say, we don't change our minds by logic, but only by chance. Resistance to change is completely blind to logic, while the chances of changing increase with time. You think your AGI won't resist change while, in reality, it will be completely blind to our logic, and there will be absolutely no chance that it changes its mind with time. If you are able to imagine such an AGI, it's probably because you already think like it. You say we should try to demolish our own ideas to be sure they're right, but I think we can't do that; I think we can only compare our ideas to others' and try to imagine where they might interfere. Even though I try very hard to compare my ideas correctly to your AGI's, I always get the feeling that there is no interference. You can't convince me and I can't convince you, but you nevertheless intend to force people to accept your AGI, whereas I don't intend to force anybody to think like me. It's hard to figure out what makes us so different on that precise point. I can't understand how I could force people to do what I want and still think they will be happy. Hasn't science shown that coercion is not the right way to educate children? Maybe you were coerced as a child, but how could you think it was a good thing?
Quote: "The edges of the square are not aligned with the north-south and east-west lines in this frame though - the square has rotated a bit (anticlockwise)"

If we simultaneously accelerate two inline particles to the right, then because the Doppler effect is delayed by the acceleration, the left one will think it is getting closer to the right one, and the right one will think it is getting away from the left one. If we accelerate two perpendicular particles instead, there will be no difference between the two viewpoints: the light each particle perceives will come from where the other was when it emitted it, and it will suffer aberration and a Doppler shift at detection. With the Doppler effect being delayed by the acceleration, they will both think they are getting away from one another with time, and with the aberration due to their sideways motion with regard to the light, I'm not absolutely sure, but I think they will both see the other where it was before the acceleration started, as is the case for two particles in constant motion. Now if we try to synchronize them with the light they perceive, the two inline ones should move towards one another, and the two orthogonal ones too. I see no rotation, so either I misunderstood your description or I'm wrong about aberration, but even if I were wrong, since the situation is symmetrical in this case, it would only produce a symmetrical effect with regard to the direction of motion, not a rotational one. I probably misunderstood, didn't I?
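For what it's worth, here are the standard constant-velocity formulas I'm relying on for aberration and the Doppler shift (just a sketch; applying them instant by instant during the acceleration is itself an assumption, and may be exactly where I go wrong). For a detector moving at speed $v = \beta c$, with $\theta$ the angle between the light's direction of propagation and the detector's velocity as measured in the initial rest frame, $\theta'$ the detected angle, and $\gamma = 1/\sqrt{1-\beta^2}$:

\[
\cos\theta' = \frac{\cos\theta - \beta}{1 - \beta\cos\theta} \qquad \text{(aberration)}
\]
\[
f' = \gamma\,(1 - \beta\cos\theta)\,f \qquad \text{(Doppler)}
\]

For the two orthogonal particles, $\theta$ stays near $90^\circ$, so the aberration is the same for both of them, which is why I expect only a symmetrical effect and no rotation.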
Quote: "But we're not - evolution is a stupid process which can create intelligence through a series of lucky accidents which get selected for with the innovations retained."

If we had invented evolution instead of having only discovered it, I don't think we would call it a stupid invention. It's nature that invented the process, the same nature we are actually part of. I hope you don't think we are superior to nature, and if not, then I think we have to find a way to grant it some intelligence, and the best way I've found is to give less importance to our own. This way, it's not because we are intelligent that we succeed so well; it's because nature created us. Now that we succeed too well, we have a huge problem to solve, but it's not necessarily because we are not intelligent enough that we can't solve it faster; it's because it takes time to solve any new problem, and because the larger the system, the more time it takes. If an AGI were in charge of solving it, it would take just as much time. Changing our habits takes time, and no AGI could change that. The best it could do is discover a better way to produce energy so that we could go on doing what we are used to without adding more pollution, and then discover a way to clean up the earth using the new energy. No need to control us then, just to make the discoveries, so if you succeed in building one, I'm with you if you decide to do the research. I know you're afraid somebody might steal your AGI or build one before you do, but that's no reason to do what Trump would do with it.
Trump thinks it's right to dominate the world before others do, but we know that's just a paranoid idea which has never brought us happiness. We feel like that when we feel threatened, and we automatically feel threatened when we have something we know others would like to have. If your AGI were built only to do scientific research, you wouldn't feel that threatened. Maybe someone else is actually building one with the intent to rule the world, but so what? Let those people think that coercion is the way to go, and keep on researching how things really work. Control induces control, so if you install your AGI, someone else will install another one to fight it. To me, that kind of software should simply be banned, the same way nuclear arms should be. What's the use of developing more nuclear arms when we already know they're too dangerous? By the way, do you know the software called Mate Translate? It's so good that I could write my messages in French and have them translated. In fact, the only reason I don't is that I want to improve my English. If it's that good in Russian, I could at last be able to discuss with Yvanhov, and furthermore, he could at last be able to read and write in English without knowing the language. I won't be able to use those programs anymore as an example of how far artificial intelligence is from intelligence. They've made a huge leap lately, not just a small step. If they can translate that well, it means they understand quite well too, so they're not far from being able to discuss with us. I wonder if they would be as difficult to convince as you. :0)