When a species evolves, it does so by natural selection rewarding useful mutations and punishing bad ones, but the mutations continue to be random - no lessons are learned about bad mutations, so they are made repeatedly and a lot of individual animals suffer greatly as a result. If a mutation is discovered to be bad, ideally the repetition of that mutation would be prevented, but nature hasn't provided a memory to prevent that. Of course, the same mutation might not be harmful and could be beneficial later on after a number of other mutations have occurred, so you don't want to prevent that mutation being tested again, but you do want to avoid testing it again from the same starting point. Sticking to random is a slower way of making progress, and with intelligent machines, there's no excuse for doing that because it's easy to record what fails and to avoid repeating those failures over and over again.
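To make that concrete, here is a toy sketch (everything in it is invented for illustration): a bit-string "genome" climbed toward a target by random single-bit mutations, with and without a record of which mutations already failed from a given starting point. A failed mutation is only banned from the genome it failed in, so it can still be tried again later from a different starting point, as the text suggests.

```python
import random

def evolve(target, avoid_repeats, seed=0, max_trials=20000):
    """Hill-climb a bit string toward `target` by random single-bit flips.

    With avoid_repeats=True, a flip known to have failed from the current
    genome is recorded and never retried from that same genome, though it
    may still be tried later from a different starting point."""
    rng = random.Random(seed)
    n = len(target)
    genome = [0] * n
    failed = {}                      # genome state -> flips known to fail there
    trials = 0
    while genome != target and trials < max_trials:
        banned = failed.setdefault(tuple(genome), set()) if avoid_repeats else set()
        i = rng.choice([j for j in range(n) if j not in banned])
        trials += 1
        genome[i] ^= 1               # try a random mutation
        if genome[i] != target[i]:   # harmful: fitness would drop
            genome[i] ^= 1           # the mutation doesn't survive
            banned.add(i)            # remember the failure (no-op when blind)
    return trials

target = [1] * 30
blind = evolve(target, avoid_repeats=False)   # failures can repeat freely
memory = evolve(target, avoid_repeats=True)   # failures never repeat from
print(blind, memory)                          # the same starting point
```

With the memory in place, the number of trials is bounded (each failure is made at most once per starting point), whereas the blind search keeps repeating the same failures from the same state.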
You say that no lesson is learned from bad mutations, so they keep being unnecessarily repeated. That reasoning leaves out a changing environment, though: if a bad mutation is kept in the genetic memory because it is not deleterious, and the environment later changes in its favor, it will be reproduced more quickly and the species will change faster. This is the advantage of a memory based on reproduction, which I believe is also how our own memory works.
Our ideas that didn't work are also kept in memory if they didn't kill us, so they remain available in case things change. If an AI erased its bad ideas, it would have to rediscover them if things changed, and if it didn't erase them, then it would work the same way mutations do.
You're assuming that artificial intelligence would not make mistakes,
That's what I mean when I say that an AI cannot predict the future, but you persist in saying that it won't make mistakes since its morality will be perfect.
That might be so if it had invented that morality all by itself, but it's not the case: it's your morality, and I see no reason why you would be able to predict the future with an idea that has never been tested. Like me, you probably think things will be fine once your idea is implemented, but unlike mine, yours could be dangerous if it doesn't work.
If you know the environment hasn't changed, you know not to repeat failed experiments. In a warming world, mutations which might better adapt people to a cooling world won't be useful. The experiments that fail cause suffering, so it's worth avoiding them when you already know that they will fail.
That's right, but if I'm right, in a changing environment there would be no way to avoid suffering. If there were no mutations, for example, species would not evolve and would disappear, which means that no life would have developed. For species, mutations must therefore occur whether the environment changes or not, and they do cause useless suffering when it doesn't.
Why can't you just put it in place and wait for it to change things instead of programming it to control them?
I know you want to prevent it from falling into the wrong hands, but democratic societies also have this type of security problem while still being able to evolve freely. Freedom might not just be a feeling we like to have or a right we want to preserve, it could also very well be a natural law without which things would not evolve.
Most mutations aren't damaging, but where they are, we'll eventually be able to do advanced gene therapy to correct the faults (in humans).
Most people prefer to have a bit of control, though.
It won't be AGI making those decisions except where people want to do things that will be harmful.
Democracy allows people to make mistakes that result in genocide because their judgement is so poor, but it will be possible to prevent that by having AGI provide them with better education and advice, proving to them that many of the things they believe in are plain wrong. They will be forced by their own realisations to change their minds on many issues without AGI being in direct power at all.
The odds are that we're heading for extermination wars, and that's what I'm trying to head off.
Mutations are always defects, so they inevitably cause suffering to the ones that carry them.
Where will your AGI put the bar? Will it correct homosexuality, for instance? Will it correct low intelligence? At the limit, won't it try to make us in its image, like any god would do? And if it succeeds, what will be the difference between us and it? Why not all become robots?
If your AGI doesn't know that these people are useful to the evolution of society the same way mutations are, it might not protect them, and things could get worse if it doesn't know its ideas cannot evolve without chance being part of its thinking process.
I believe that genocides are not due to a lack of judgement, but to a lack of democracy. Will your AGI try to change my mind or try to understand what I mean? Of course, if it tries to understand, I'm with you, because it means that it will be able to doubt, which comes from chance being part of the intelligence process. :0)
If I'm right about intelligence, an AGI that doesn't use chance will stand less of a chance against a human that does.
Wars are made by humans who avoid chance, and we are beginning to understand that such behavior goes too far.
The only purpose of war is to prevent chance from being part of social evolution, and once humans have understood that, they won't need an AGI to tell them what to do.
Here is the link to his paper in case you want to consult it: http://rhythmodynamics.com/index_files/Report_blok_Eng.pdf
Quote from: Le Repteux
"Mutations are always defects, so they inevitably cause suffering to the ones that carry them."

Mutations are often benign and can be helpful.
It's up to people to say what they want: if they want to have intelligent, good-looking children, they'll get that.
What makes you think that wars are made by humans that avoid chance?
With species that diverged millions of years ago, subtle little changes have built upon each other over time to scramble things, destroying compatibility, but that scrambling was done little by little without the changes doing any harm.
Is it fun to make your own bad decisions and lose out rather than being given better advice by machines which leads to you being better off? There are times when life is better if you trust the superior intelligence.
On another issue, we both had a problem with LaFrenière's way of holding the energy in a particle to avoid it leaking away into space and eliminating the particle. However, if we add three more dimensions which are rolled up in tight loops, we can have all the movement occur within those dimensions to serve as a container.
Once you’re seeing matter as waves moving at c, you can then see how gravity moves matter simply by having the speed of light go down as you go deeper into a gravity well.
The wiki article is misleading. It says: "It would assume that more matter is needed than what is present; however, dark matter particles are not the only theory capable of explaining the strange phenomenon."
I think that the only way humans can trust a leader is when they feel they belong to the group, because it is an instinctive behavior. We know we are stronger within a group, but we must also feel good about submitting to its rules. Since they lack our kind of intelligence, it's probably how they feel that incites animals to let a leader drive them, and I think it's also how our ideas work, which means that it's probably the feelings associated with our ideas that drive us and not the other way around. There is probably no "us" inside our minds, just ideas and feelings tagged to them, the same way hierarchical values seem to be tagged to the members of a clan of primates. This way, all our ideas would be fighting to get better values, which would create the same kind of hierarchical organisation. Once born, ideas could then be on their own like any individual thing. How about integrating this feature into an AI? Wouldn't that be close to your measuring of harm?
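To sketch what I mean (this is only my own toy interpretation, not an established AI design, and every name and number in it is invented): ideas carry value tags, selection among them is partly random, and the felt payoff of acting on an idea strengthens its tag, so a loose hierarchy of ideas emerges on its own.

```python
import random

# Toy model of "ideas with feelings tagged to them": value tags compete,
# chance keeps weak ideas alive, reward reinforces whichever idea acted.
rng = random.Random(1)

ideas = {"flee": 1.0, "freeze": 1.0, "approach": 1.0}   # value tags
payoff = {"flee": 0.2, "freeze": 0.1, "approach": 0.9}  # hidden outcomes

def pick(tags):
    """Weighted random choice: strong tags win more often, but chance
    keeps weak ideas alive in case circumstances change."""
    r = rng.uniform(0, sum(tags.values()))
    for name, w in tags.items():
        r -= w
        if r <= 0:
            return name
    return name

for _ in range(2000):
    chosen = pick(ideas)
    ideas[chosen] += payoff[chosen]      # feeling reinforces the idea's tag

print(sorted(ideas, key=ideas.get, reverse=True))
```

The best-rewarded idea ends up dominating the hierarchy, yet the losing ideas are never erased, so they can come back if their payoffs change.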
To me, the only thing that can roll up energy in tight loops is matter.
It is intellectual high-flying to claim that the presence of a body can bend space.
...so how can you not be suspicious of that curved-space stuff? What would be curved, exactly?
I don't believe in it for a moment. The path light follows past a planet can bend without space being curved in any way: it's all done by the density of the medium causing lensing effects. A planet in orbit around a star is simply being bent around it like light being bent off line. What makes them look different is that the light going past the star goes one way and is bent a little, but the energy inside the atoms of the planet moves to and fro an astronomical number of times allowing that bending effect to accumulate and become much more severe; sufficiently severe to enable it to be lensed round and round the star in circles.
If the density of the medium around a planet could change, then the density between my particles could change too, and I wouldn't need light to explain the bonding, just curved space.
No need for synchronisation, so no explanation for mass, and no explanation for motion either.
Is there a real mechanism for density change? One that we can simulate?
You are trying to find a complex explanation for something really simple, and you're doing that because you've been misled by words and by establishment thinking.
The closest to a mechanism that I've been able to think of so far is that all matter is more spread out through space than we normally think - it extends far out from the centre of every piece of mass/energy, serving directly as a medium which slows light.
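As a toy illustration of that "medium which slows light" picture, here is a rough ray trace (a sketch only, using the standard optical analogue n(r) = 1 + 2GM/(rc²); the Sun-like mass and step counts are illustrative assumptions, not part of the mechanism described above):

```python
import math

# Toy ray trace of light grazing a Sun-like mass, treating gravity as a
# refractive medium with index n(r) = 1 + 2GM/(r c^2).
G, M, c = 6.674e-11, 1.989e30, 2.998e8   # SI units, solar mass
rs = 2 * G * M / c**2                    # Schwarzschild radius, ~3 km

def grad_ln_n(x, y):
    """Gradient of ln n for n = 1 + rs/r."""
    r = math.hypot(x, y)
    f = -rs / (r**3 * (1.0 + rs / r))
    return f * x, f * y

def deflection(b, x_span=1e12, steps=400000):
    """Bend a ray that starts far to the left at height b (the impact
    parameter) and return its total deflection angle in radians."""
    x, y = -x_span, b
    dx, dy = 1.0, 0.0                    # unit direction of travel
    h = 2 * x_span / steps
    for _ in range(steps):
        gx, gy = grad_ln_n(x, y)
        dot = gx * dx + gy * dy          # keep only the transverse part,
        dx += h * (gx - dot * dx)        # which is what steers the ray
        dy += h * (gy - dot * dy)
        norm = math.hypot(dx, dy)
        dx, dy = dx / norm, dy / norm
        x += h * dx
        y += h * dy
    return abs(math.atan2(dy, dx))

b = 6.96e8                               # graze the solar surface
bend = deflection(b)
print(bend, 2 * rs / b)                  # numeric vs analytic 4GM/(b c^2)
```

The integrated bending comes out close to the textbook 4GM/(bc²) (about 1.75 arcseconds at the solar limb), which is why a density-gradient medium reproduces the lensing without any curved space.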
Accelerating a macroscopic body looks instantaneous, but it can't be since the information has to reach the whole body before it accelerates as a whole.
That matter works for things that have a speed relative to it, but what about those that don't? How does it explain the force we feel standing on the Earth, for example?
If you accelerate one particle of a bonded pair, you've already achieved the full acceleration even if the other particle hasn't begun to move yet. Everything that happens subsequently is just a transfer of energy between the two particles as they take turns in moving or share out that movement energy to take half of it each.
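That finite propagation is easy to show with a toy model (all values here are arbitrary illustration units, not anything from the discussion): a chain of masses joined by springs, where a kick at one end takes a time of roughly length divided by the chain's sound speed to reach the other end.

```python
def far_end_delay(N=200, m=1.0, k=100.0, dt=0.005, threshold=1e-6, t_max=40.0):
    """Kick the first mass of a spring chain and return the time it takes
    for the last mass to move by `threshold` (arbitrary toy units)."""
    x = [0.0] * N                    # displacements from rest
    v = [0.0] * N
    v[0] = 1.0                       # impulsive kick on the first particle
    t = 0.0
    while abs(x[-1]) < threshold and t < t_max:
        for i in range(N):           # semi-implicit Euler: update v, then x
            a = 0.0
            if i > 0:
                a += k * (x[i - 1] - x[i]) / m
            if i < N - 1:
                a += k * (x[i + 1] - x[i]) / m
            v[i] += a * dt
        for i in range(N):
            x[i] += v[i] * dt
        t += dt
    return t

# waves cross the chain at about sqrt(k/m) = 10 spacings per time unit,
# so reaching particle 199 should take roughly 20 time units, not zero
delay = far_end_delay()
print(delay)
```

The far end only starts moving after a clear delay, even though the kicked end has the full momentum from the first instant, which matches the point that the rest is just energy being passed along between the bonded particles.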
If a particle is in space near a planet, the waves of energy moving about within the particle are bent downwards, leading to the particle accelerating towards the planet.
So if I refer to my particles, your matter would be a kind of ether belonging to a particle, one that spreads out around it and changes the direction and/or the speed of the light it exchanges with the other particle because its density changes with distance.
Then I have a question: when such a particle is accelerated, does that ether accelerate instantaneously, or does it take time until it is completely accelerated? For instance, if I'm not mistaken, I think it is considered that space-time accelerates instantly even though gravitational waves only travel at c.