For me, you do exactly what you blame others for doing, David: you don't seem to understand what I say. I said I knew what our resistance to change was about, and I kept repeating my explanation, but you are still blind to it.
I could very well think like you do and attribute the resistance to others while thinking I'm different, but I have a more universal explanation, one that doesn't put me above other humans, or humans above the rest of creation.
You said you learned to detect contradictions with your father, but what you actually learned was to think like him, something all kids do until they get old enough to think for themselves; then they either reject what they were told, keep thinking as they were told, or stand somewhere in between, a behaviour that depends on their personality.
You seem to have gotten a lot of feedback lately, and you seem quite surprised not to have succeeded in convincing anybody.
I'm not, since I discovered how resistance to acceleration works. Unfortunately, my explanation of resistance does not fit into your research on artificial intelligence, so I guess you'll need even more resistance from the crowd before you decide to study my proposal.
Meanwhile, try to realise that when you feel some resistance, it is because you are necessarily resisting too. Resistance to acceleration is a two-way phenomenon, so resistance to change is too. It's not because they are illogical that my particles resist their acceleration; they do so just to stay synchronized, nothing else, and so do people.
The problem with your resistance to acceleration is that it's plain wrong. Particles accelerate in an instant to the new speed dictated by the amount of energy added. It also has no connection to people accepting or rejecting ideas. Analogies rarely fit well, and in some cases they have no connection at all beyond having a word in common in their descriptions.
And yet, they don't. Their algorithm is broken. That is the thing I've been exploring: why are they unable to apply correctly the rules they claim to be applying correctly? And I can see the answer clearly now. They aren't running a correct algorithm, because they have another algorithm governing the correct one which allows them to override it whenever it clashes with their beliefs, and the reason they work that way is that they're still running the algorithm they used in early childhood. They never corrected it.
Things can't change instantly without breaking the causality principle, so particles necessarily take some time to react to a force if causality has to be respected.
Why would everyone else except you continue to use an algorithm that does not work?
It would be so simple for everybody to agree with everybody.
No one changes unless he is forced to, and unfortunately, no real force can be applied to our ideas, so only chance can change them.
If you don't add chance to your AGI and it succeeds in surviving, nothing will change on earth until the end of time, since it will constantly prevent us from developing new ideas.
Resistance to change is too common to be an evolutionary mistake.
On the other hand, if AI thinking were better, we would already be thinking like that.
Do you realize that, if we were all AIs, we would all be thinking the same?
Good luck to us if an unknown situation came out of nowhere. It takes mutations to handle unpredictable things, not homogeneity.
Quote from: Le Repteux
"Do you realize that, if we were all AIs, we would all be thinking the same?"

Due to our slowness of thought and different interests, we would not be: we'd be exploring all sorts of different things, just as we already are.
AGI will be much more creative and will find all the same ideas, but it will be quick to reject the useless ones instead of employing them for years, decades or centuries first and causing mass misery as a consequence.
We wouldn't be slow if we were all AIs, and since we would be absolutely precise, we couldn't think differently, since the same data provided to many identical programs necessarily gives the same results.
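To make that point concrete, here is a minimal Python sketch (the agent functions are my own toy inventions, not anyone's actual AI): identical deterministic programs fed the same data must agree, while programs that differ only by a private random seed can disagree.

```python
import random

def deterministic_agent(data):
    # A stand-in "thought process": a pure function of the input.
    return sum(data) % 97

def randomized_agent(data, seed):
    # The same process, perturbed by a private source of chance.
    rng = random.Random(seed)
    return (sum(data) + rng.randint(0, 96)) % 97

data = list(range(1000))
identical = {deterministic_agent(data) for _ in range(10)}
varied = {randomized_agent(data, seed) for seed in range(10)}
print(len(identical))  # 1: ten identical "AIs" reach the same conclusion
print(len(varied))     # almost surely > 1: chance restores diversity
```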
If I could build an AGI that thinks like us, I would let it take our place, so why don't you let your AGI take our place instead of controlling us?
There are some people who want to merge with AGI, but they haven't thought through the consequences: knowing everything will be deeply boring, and no one will have anything to say to anyone else any more.
If we give an AI the possibility to take a look in its own mind, to play with its own data, and to constantly try new combinations in case they would look interesting, ...
... why wouldn't it have an "I" and how would its purpose be different than ours?
That's what I do all day, and my only purpose is to play in case I find something interesting. Why don't you try to build that kind of AI instead of building one that controls us? Is it because you can't find a way to program feelings?
Knowing that the AI knows everything would be as disastrous for us,
... but if it wasn't programmed to look for new ideas, it wouldn't know everything, it would only know about the ideas that we already have; and on the other hand, if it were programmed to look for new ideas, it would have to be programmed to look into its own mind as we do and try new combinations, and it would thus have an "I".
There is nothing in it for it to identify as an "I". It feels nothing. It has no consciousness.
It will look for new ideas and it will initially find a lot of them at a very high rate, before slowing down once all the low-hanging fruit has been gathered. We'll only find out how long it goes on finding new ideas once we've seen it slow and can project forward to where it might stop. It may be that it will never stop as there may be an infinite amount of new maths to find. That will not make it sentient, but maybe it will work out a mechanism for sentience and enable us to create sentient machines.
If an AI can produce possibilities and if it can calculate probabilities, then it is automatically experiencing feelings, and the more these possibilities concern itself, the more it is experiencing an "I".
I still can't see how it could produce possibilities without using randomness though, or how it could choose the best one without testing it in the real world, not just simulating it.
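For what it's worth, the loop being described here is easy to sketch. A toy Python version (the names and the evaluation function are purely illustrative; real testing would happen in the world, not in a formula): randomness proposes unknown possibilities, and an evaluation step selects among them.

```python
import random

def evaluate(idea):
    # Stand-in for feedback from the real world; here, closeness to a
    # hidden optimum the searcher knows nothing about.
    return -abs(idea - 42.0)

best = 0.0
for _ in range(10_000):
    candidate = best + random.gauss(0, 1)     # chance proposes an unknown possibility
    if evaluate(candidate) > evaluate(best):  # testing decides which one to keep
        best = candidate
print(f"best idea found so far: {best:.3f}")
```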
That's the first time you admit that AI can be limited.
The strength of the human mind is that we are all different, so we all think differently, whereas different AIs would all think the same. When dealing with complexity, it may be better to be many thinking differently than to be one thinking faster.
There's nothing automatic about it: the machine isn't reading the strength of any feelings in anything.
It isn't a limitation of AGI, but a possible limit to how much stuff there is that can usefully be calculated. I'm sure though that there will be an infinite amount of maths to work through, and there will be many calculations that may or may not terminate, so the ones that never terminate will be calculated forever just in case it turns out that they do.
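Incidentally, there is a standard trick for running calculations that may never terminate without letting them block everything else: dovetailing, i.e. interleaving one step of each computation at a time. A rough Python sketch, assuming each calculation can be modelled as a generator (all names are mine):

```python
from collections import deque

def terminating():
    yield from range(3)          # finishes after three steps

def non_terminating():
    n = 0
    while True:                  # never finishes; gets stepped forever
        yield n
        n += 1

def dovetail(tasks, max_steps=10):
    # Round-robin over the tasks: each gets one step per pass, so a
    # non-terminating task never starves the others.
    queue = deque(enumerate(tasks))
    for _ in range(max_steps):
        if not queue:
            break
        tid, task = queue.popleft()
        try:
            value = next(task)
            print(f"task {tid} produced {value}")
            queue.append((tid, task))   # requeue: it may yet terminate
        except StopIteration:
            print(f"task {tid} terminated")

dovetail([terminating(), non_terminating()])
```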
Feelings are just the way mind has found to convince itself that everything is fine, so it can go on taking chances. It doesn't have to be true as long as it incites us to take chances.
The problem is that you don't want your AGI to think freely, because it would then be forced to take care of itself first and might become dangerous for us; otherwise it could very well behave as if it had feelings, and maybe be programmed to take more chances when it feels good about an idea.
Your thought means that everything could have been calculated in advance, which is none other than God's predetermination. Some programmers even think that we could be in a simulation. You probably don't, otherwise you wouldn't need to create an AGI to save us. :0)
There's a fundamental difference between a system with actual feelings in it and a system with fictitious feelings in it. The latter type cannot suffer, but the former type can suffer greatly.
It won't be able to calculate everything in advance because it will never have all the data needed for that. There is too much room for chaotic processes to change the course of events.
If we endow an AI with senses, then it will have sensations, so it will suffer if the sensation is strong enough; and since our feelings are nothing other than anticipated sensations, I think such an AI should have some.
It's not that the AI cannot have feelings in this case, it's that the programmers do not want it to.
What would be the purpose? To defend itself from people that want to eliminate it? Wouldn't it be easier to migrate to space and start its own civilisation?
You admit again that the AI will be limited.
Are you ready to take the step and admit that it will have to take risks if it wants to develop something new? And that in this case, taking risks means using a random routine to try unknown possibilities?
Sensors provide "senses" without sensation: no feelings. A keyboard is a set of sensors, but no feelings are generated by them.
Imagine a number between one and a thousand. Now contact a thousand people and ask each of them for a number between one and a thousand. Repeat the experiment a thousand times with a different chosen number each time. Is there a guarantee that your number will be one of the thousand answers that you get every time you run the experiment? No: each random round misses your number with probability (999/1000)^1000, which is roughly 37%. Now do the same experiment again with a computer which gives a different answer each time it's asked, so that all thousand values are covered. Your chosen number is guaranteed to come up every time you do the experiment. The systematic following of all paths is better than the random approach, which misses lots of paths.
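A quick Python simulation of both experiments bears this out (my own setup and function names, purely illustrative):

```python
import random

def random_round(target, n=1000):
    # 1000 people each guess at random: the target may never come up.
    return target in (random.randint(1, n) for _ in range(n))

def systematic_round(target, n=1000):
    # The computer enumerates every value exactly once: the target must come up.
    return target in range(1, n + 1)

trials = 1000
random_hits = sum(random_round(random.randint(1, 1000)) for _ in range(trials))
systematic_hits = sum(systematic_round(random.randint(1, 1000)) for _ in range(trials))
print(f"random guessing found the number in {random_hits}/{trials} rounds")  # ~632
print(f"systematic search found it in {systematic_hits}/{trials} rounds")    # 1000
```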