When a species evolves, it does so by natural selection rewarding useful mutations and punishing harmful ones, but the mutations remain random: no lesson is learned from a bad mutation, so it is made repeatedly, and many individual animals suffer greatly as a result. If a mutation is discovered to be bad, ideally its repetition would be prevented, but nature provides no memory for that. Of course, the same mutation might not be harmful, and could even be beneficial later on, after a number of other mutations have occurred, so you don't want to prevent that mutation from ever being tested again; you only want to avoid testing it again from the same starting point. Sticking to randomness is a slower way of making progress, and with intelligent machines there's no excuse for it, because it's easy to record what fails and avoid repeating those failures over and over again.
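The idea above — keep mutations random, but remember which mutation failed from which starting point so it isn't retested there — can be sketched as a toy hill climber. This is a minimal illustration, not anyone's actual proposal: the OneMax bit-counting objective and the names `fitness`, `evolve`, and `failed` are all assumptions chosen for the example.

```python
import random

def fitness(genome):
    # Toy objective (OneMax): count the 1-bits.
    return sum(genome)

def evolve(length=20, generations=200, seed=0):
    rng = random.Random(seed)
    genome = [0] * length
    failed = set()  # (starting genome, flipped bit) pairs known to fail

    for _ in range(generations):
        bit = rng.randrange(length)
        key = (tuple(genome), bit)
        if key in failed:
            continue  # this mutation already failed from this exact state
        child = list(genome)
        child[bit] ^= 1  # the mutation itself is still random
        if fitness(child) > fitness(genome):
            genome = child  # keep the useful mutation
        else:
            failed.add(key)  # remember the failure, for this state only

    return genome

best = evolve()
```

Note that the memory is keyed by the starting genome, not by the mutation alone: the same bit flip stays available from any other state, matching the point that a mutation rejected once might still help after other mutations have occurred.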
You say that no lesson is learned from bad mutations, so they keep being needlessly repeated. That reasoning doesn't account for a changing environment, though: if a bad mutation is kept in memory because it is not deleterious, and the environment later changes in its favor, it will be reproduced more quickly and the species will change faster. This is the advantage of a memory based on reproduction, which I believe is also how our own memory works.
Our ideas that didn't work are also kept in memory if they didn't kill us, so they remain available in case things change. If an AI erased its bad ideas, it would have to rediscover them when things changed; and if it did not erase them, then it would work the same way mutations do.
You're assuming that artificial intelligence would not make mistakes.
That's what I mean when I say that an AI cannot predict the future, but you persist in saying that it won't make mistakes because its morality will be perfect.
That might be so if it had invented that morality all by itself, but it hasn't: it's your morality, and I see no reason why you would be able to predict the future with an idea that has never been tested. Like me, you probably think things will be fine once your idea is implemented, but unlike mine, yours could be dangerous if it doesn't work.
If you know the environment hasn't changed, you know not to repeat failed experiments. In a warming world, mutations which might better adapt people to a cooling world won't be useful. The experiments that fail cause suffering, so it's worth avoiding them when you already know that they will fail.
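The condition "if you know the environment hasn't changed" can be made concrete by keying the failure memory on the environment: old failures are never erased, but they only block retries while the same environment holds. This is a hypothetical sketch; the names `record_failure`, `should_skip`, and the string labels are invented for illustration.

```python
# Failure memory keyed by environment: nothing is erased, but an entry
# only blocks a retry while that same environment is in effect.
failed = {}  # environment -> set of (state, mutation) pairs

def record_failure(env, state, mutation):
    failed.setdefault(env, set()).add((state, mutation))

def should_skip(env, state, mutation):
    # Skip only if this exact mutation failed in the *current* environment.
    return (state, mutation) in failed.get(env, set())

record_failure("warming", "genome-A", "thicker-fur")
```

Under this scheme a mutation that failed in a warming world is skipped while the world keeps warming, but becomes testable again the moment the environment label changes — the memory is consulted, not deleted.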
That's right, but if I'm right, in a changing environment there would be no way to avoid suffering. If there were no mutations, for example, species would not evolve and would disappear, which means that no life would have developed at all. For species, mutations must therefore occur whether the environment changes or not, and they do cause useless suffering when it doesn't.
Why can't you just put it in place and wait for it to change things instead of programming it to control them?
I know you want to prevent it from falling into the wrong hands, but democratic societies also have this type of security problem while still being able to evolve freely. Freedom might not just be a feeling we like to have or a right we want to preserve, it could also very well be a natural law without which things would not evolve.
Most mutations aren't damaging, but where they are, we'll eventually be able to do advanced gene therapy to correct the faults (in humans).
Most people prefer to have a bit of control, though.
It won't be AGI making those decisions except where people want to do things that will be harmful.
Democracy allows people to make mistakes that result in genocide because their judgement is so poor, but it will be possible to prevent that by having AGI provide them with better education and advice, proving to them that many of the things they believe in are plain wrong. They will be forced by their own realisations to change their minds on many issues without AGI being in direct power at all.
The odds are that we're heading for extermination wars, and that's what I'm trying to head off.
Mutations are always defects, so they inevitably cause suffering to those who carry them.
Where will your AGI put the bar? Will it correct homosexuality, for instance? Will it correct low intelligence? Taken to the limit, won't it try to make us in its image, like any god would? And if it succeeds, what will be the difference between us and it? Why not all become robots?
If your AGI doesn't know that these people are useful to the evolution of society in the same way mutations are, it might not protect them, and things could get worse if it doesn't know that its own ideas cannot evolve without chance being part of its thinking process.
I believe that genocides are due not to lack of judgement, but to lack of democracy. Will your AGI try to change my mind, or try to understand what I mean? Of course, if it tries to understand, I'm with you, because it means it will be able to doubt, which comes from chance being part of the intelligence process. :0)
If I'm right about intelligence, an AGI that doesn't use chance will stand less of a chance against a human who does.
Wars are made by humans who avoid chance, and we are beginning to understand that such behavior is excessive.
The only purpose of war is to keep chance out of social evolution, and once humans have understood that, they won't need an AGI to know what to do.
Here is the link to his paper in case you want to consult it: http://rhythmodynamics.com/index_files/Report_blok_Eng.pdf