For me, you do exactly what you blame others for doing, David: you don't seem to understand what I say. I said I knew what our resistance to change was about, and I kept repeating my explanation, but you are still blind to it.
I could very well think like you do and attribute the resistance to others while thinking I'm different, but I have a more universal explanation, one that doesn't put me above other humans and humans above creation.
You said you learned to detect contradictions with your father, but what you actually learned was to think like him, something all kids usually do until they get old enough to think for themselves, and then they either reject what they were told, keep thinking as they were told, or stand somewhere in between, a behavior that depends on their personality.
You seem to have gotten a lot of feedback lately, and you seem quite surprised not to have succeeded in convincing anybody.
I'm not, since I discovered how resistance to acceleration works. Unfortunately, my explanation of resistance does not fit into your research on artificial intelligence, so I guess you'll need even more resistance from the crowd before you decide to study my proposal.
Meanwhile, try to realise that when you feel some resistance, it is necessarily because you are resisting too. Resistance to acceleration is a two-way phenomenon, so resistance to change is too. It's not because they are illogical that my particles resist their acceleration; they do so just to stay synchronized, nothing else, and so do people.
The problem with your resistance to acceleration is that it's plain wrong. Particles accelerate in an instant to the new speed dictated by the amount of energy added. It also has no connection to people accepting or rejecting ideas. Analogies rarely fit well, and in some cases they have no connection at all beyond having a word in common in their descriptions.
And yet, they don't. Their algorithm is broken. That is the thing I've been exploring: why are they unable to apply correctly the rules they claim to be applying correctly? And I can see the answer clearly now. They aren't running a correct algorithm because they have an algorithm governing the correct one which allows them to override it whenever it clashes with their beliefs, and the reason they work that way is that they're still running the algorithm they used in early childhood. They never corrected it.
Things can't change instantly without breaking the causality principle, so particles necessarily take some time to react to a force if causality has to be respected.
Why would everyone else except you continue to use an algorithm that does not work?
It would be so simple for everybody to agree with everybody.
No one changes unless he is forced to, and unfortunately, no real force can be applied to our ideas, so only chance can change them.
If you don't add chance to your AGI and it succeeds in surviving, nothing will change on Earth until the end of time, since it will constantly prevent us from developing new ideas.
Resistance to change is too common to be an evolutionary mistake.
On the other hand, if AI thinking were better, we would already be thinking like that.
Do you realize that, if we were all AIs, we would all be thinking the same?
Good luck to us if an unknown situation came out of nowhere. It takes mutations to handle unpredictable things, not homogeneity.
Quote from: Le Repteux: "Do you realize that, if we were all AIs, we would all be thinking the same?"

Due to our slowness of thought and different interests, we would not be: we'd be exploring all sorts of different things just as we already are.
AGI will be much more creative and will find all the same ideas, but it will be quick to reject the useless ones instead of employing them for years, decades or centuries first and causing mass misery as a consequence.
We wouldn't be slow if we were all AIs, and since we would be absolutely precise, we couldn't think differently: the same data provided to many identical programs necessarily gives the same results.
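The claim that identical software fed identical data must reach identical conclusions is just determinism in action. Here's a minimal Python sketch; the `think` function is purely illustrative, standing in for any deterministic program:

```python
import hashlib

def think(data: bytes) -> str:
    # A deterministic "mind": the same input always yields
    # exactly the same output, with no room for divergence.
    return hashlib.sha256(data).hexdigest()

# Two "identical AIs" fed the same observations reach the
# same conclusion, every single time.
a = think(b"the same observations")
b = think(b"the same observations")
print(a == b)  # the two outputs are identical
```

Any difference in conclusions would have to come from a difference in the data, the program, or an injected source of randomness.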
If I could build an AGI that thinks like us, I would let it take our place, so why don't you let your AGI take our place instead of controlling us?
There are some people who want to merge with AGI, but they haven't thought through the consequences: knowing everything will be deeply boring and no one will have anything to say to each other any more.
If we give an AI the possibility to take a look in its own mind, to play with its own data, and to constantly try new combinations in case they would look interesting, ...
... why wouldn't it have an "I" and how would its purpose be different than ours?
That's what I do all day, and my only purpose is to play in case I find something interesting. Why don't you try to build that kind of AI instead of building one that controls us? Is it because you can find no way to program feelings?
Knowing that the AI knows everything would be as disastrous for us,
... but if it wasn't programmed to look for new ideas, it wouldn't know everything; it would only know about the ideas that we already have. On the other hand, if it were programmed to look for new ideas, it would have to be programmed to look into its own mind like us and try new combinations, and it would thus have an "I".
There is nothing in it for it to identify as an "I". It feels nothing. It has no consciousness.
It will look for new ideas and it will initially find a lot of them at a very high rate, before slowing down once all the low-hanging fruit has been gathered. We'll only find out how long it goes on finding new ideas once we've seen it slow and can project forward to where it might stop. It may be that it will never stop as there may be an infinite amount of new maths to find. That will not make it sentient, but maybe it will work out a mechanism for sentience and enable us to create sentient machines.
If an AI can produce possibilities and if it can calculate probabilities, then it is automatically experiencing feelings, and the more these possibilities concern itself, the more it is experiencing an "I".
I still can't see how it could produce possibilities without using randomness though, or how it could choose the best one without testing it in the real world, not just simulating it.
First time you admit that AI can be limited,
The strength of mind is that we are all different so we all think differently, whereas different AIs would all think the same. With complexity, it may be better to be many to think differently than to be one to think faster.
There's nothing automatic about it: the machine isn't reading the strength of any feelings in anything.
It isn't a limitation of AGI, but a possible limit to how much stuff there is that can usefully be calculated. I'm sure though that there will be an infinite amount of maths to work through, and there will be many calculations that may or may not terminate, so the ones that never terminate will be calculated forever just in case it turns out that they do.
Feelings are just the way mind has found to convince itself that everything is fine, so it can go on taking chances. It doesn't have to be true as long as it incites us to take chances.
The problem is that you don't want your AGI to think freely, because then it would be forced to care for itself first and might become dangerous for us; otherwise it could very well behave as if it had feelings, and maybe be programmed to take more chances when it feels good about an idea.
Your thought means that everything could have been calculated in advance, which is none other than God's predetermination. Some programmers even think that we could be in a simulation. You probably don't otherwise you wouldn't need to create an AGI to save us. :0)
There's a fundamental difference between a system with actual feelings in it and a system with fictitious feelings in it. The latter type cannot suffer, but the former type can suffer greatly.
It won't be able to calculate everything in advance because it will never have all the data needed for that. There is too much room for chaotic processes to change the course of events.
If we endow an AI with senses, then it will have sensations, so it will suffer if the sensation is strong enough, and our feelings are nothing other than anticipated sensations, so I think such an AI should have some.
It's not that the AI cannot have feelings in this case, it's that the programmers do not want it to,
What would be the purpose? To defend itself from people that want to eliminate it? Wouldn't it be easier to migrate to space and start its own civilisation?
You admit again that the AI will be limited.
Are you ready to take the step and admit that it will have to take risks if it wants to develop something new? And that in this case, taking risks means using a random routine to try unknown possibilities?
Sensors provide "senses" without sensation: no feelings. A keyboard is a set of sensors, but no feelings are generated by them.
Imagine a number between one and a thousand. Now contact a thousand people and ask them for a number between one and a thousand. Repeat the experiment a thousand times with a different chosen number each time. Is there a guarantee that your number will be one of the thousand answers that you get every time you run the experiment? No. Now do the same experiment again with a computer which gives you a different answer every time. Your chosen number is guaranteed to come up every time you do the experiment. The systematic following of all paths is better than the random approach that misses lots of paths.
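The thousand-numbers experiment above can be simulated directly. This sketch follows the description (1000 people, numbers 1 to 1000, 1000 repetitions); the seed is an arbitrary choice for reproducibility:

```python
import random

random.seed(42)  # arbitrary seed for a reproducible run

TRIALS = 1000  # repetitions of the experiment
PEOPLE = 1000  # guesses collected per trial
TOP = 1000     # numbers run from 1 to 1000

# Random approach: a thousand independent guesses can miss the target.
random_hits = 0
for _ in range(TRIALS):
    chosen = random.randint(1, TOP)
    guesses = {random.randint(1, TOP) for _ in range(PEOPLE)}
    if chosen in guesses:
        random_hits += 1

# Systematic approach: enumerating 1..1000 covers every possible
# target, so the chosen number is found in every single trial.
systematic_hits = 0
for _ in range(TRIALS):
    chosen = random.randint(1, TOP)
    if chosen in set(range(1, TOP + 1)):  # full enumeration
        systematic_hits += 1

print(f"random:     {random_hits}/{TRIALS}")
print(f"systematic: {systematic_hits}/{TRIALS}")
```

With these parameters the random approach covers the target in only about 63% of trials (1 − (1 − 1/1000)^1000 ≈ 0.632), while the systematic sweep never misses.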
Our senses serve to produce the reactions that allow us to survive, so in this case, the only thing that an AI wouldn't be able to do is try to survive.
You know I link feelings and consciousness to resistance to change,
I can say that a ball is conscious or feels its resistance to acceleration,
My question was about unknown possibilities, and your example contains none.
Here is an example that contains some. If we drive a car at high speed and we know that the road is about to change direction without our being able to see the change in time, the only way for us to stay on the road is to pick a direction at random and then wait for the road to turn. If we are numerous and we all proceed that way, one of us might have a chance of going in the right direction when the road turns. Now imagine a different AI in each of the cars, and tell me whether they would proceed differently.
An AI can have senses and reactions too, but with no sensations (feelings)
so we're dealing with a resistance to error correction rather than to change, and the more deeply someone has bought into an error, the higher the cost of their mistake becomes. They then go into denial rather than accepting that the error exists.
When a ball is accelerated by gravity, it feels nothing. What is felt in other cases of acceleration is stretch and compression due to unevenly applied force and the delays in redistributing the changes in speed of the parts. Look at the fine details of acceleration with particles and you will find no resistance to it.
If the AIs aren't allowed to communicate to ensure that they all choose a different direction rather than risk some doing the same thing as others, then a random choice should be made by each, so you have indeed identified a case where a random choice produces the best result. However, your humans won't make fully random choices, so it's less likely that any of them will stay on the road than it is for the AIs.
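The trade-off between independent random picks and coordinated coverage can be checked with a quick simulation. The ten directions and ten cars here are hypothetical parameters, not from the example:

```python
import random

random.seed(7)  # arbitrary seed for a reproducible run

DIRECTIONS = 10  # possible ways the road might turn (assumed)
CARS = 10        # one driver per car (assumed)
TRIALS = 10_000

# Random strategy: each car picks its direction independently,
# so several cars may waste themselves on the same direction.
random_successes = 0
for _ in range(TRIALS):
    road = random.randrange(DIRECTIONS)
    picks = [random.randrange(DIRECTIONS) for _ in range(CARS)]
    if road in picks:
        random_successes += 1

# Coordinated strategy: the cars communicate and divide the
# directions between them, so every direction is covered.
coordinated_successes = 0
for _ in range(TRIALS):
    road = random.randrange(DIRECTIONS)
    picks = list(range(DIRECTIONS))  # one car per direction
    if road in picks:
        coordinated_successes += 1

print(f"random:      {random_successes / TRIALS:.3f}")
print(f"coordinated: {coordinated_successes / TRIALS:.3f}")
```

With ten cars and ten directions, independent random picks save someone only about 65% of the time (1 − (9/10)^10), while coordinated coverage always does.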
You're not looking for an AI to replace humans, but if you were, you might realise that it would react to an injury exactly as we do, and that if asked whether it feels anything, it would have no choice but to answer yes, since "feelings" is the word we have invented to talk about that kind of invading data.
You're saying, in a way, that resistance to change can increase over time, but my model says resistance to change is mass, which doesn't increase with time. The only way to increase mass is to bring more particles together, by a nuclear, chemical, or gravitational process. In that case, people would only get more resistant when they start forming groups, simply because accelerating a group of particles takes more time/energy than accelerating an individual one.
Do you prefer the Higgs or do you think that mass is still a mystery?
Now, you think that our brain cannot produce randomness, and I think the contrary.
What if it could toss a coin the exact same way we do with a real one? Wouldn't it produce what we call real randomness?
In reality, the real question should be: how can such precise gestures come out of such a mess? Or more simply: why didn't evolution choose the more precise computer method of processing data from the beginning? What's your opinion?
You can do that with a coin, but try to do it with a virtual coin in your imagination. You will not reproduce the randomness of the real coin.
Mass is simply a measure of energy.
All matter is made out of energy which is moving about within it at c, so it's already moving at c and can be thought of as massless all the time.
What happens with acceleration? A photon hits a particle and is absorbed by it, with the result that the particle changes speed in an instant. This may be slightly drawn out because the photon arrives as a spread-out wave which doesn't arrive all at once, but there is no resistance there: the particle responds to each bit of energy transfer instantly.
However, when you push on a block of matter, the energy that's being transferred is in every case just like the photon hitting the particle and the particle responding by moving off at a new speed, but the particle then runs into the other particles of the block ahead of it, they push back, and the particle that you pushed comes back at you. The resistance that you feel is the result of the block being a compound object.
Let's return to the business of resistance to ideas. Suppose you have a thousand people who believe something incorrect. One of them realises it's wrong and tells the people around him. They recognise that it's wrong and pass the idea on. After a few minutes, all thousand people have recognised the error and corrected it. It takes a while for that idea to spread and generate that end result. That is like the sharing out of movement energy in an object made of many parts.
Now repeat it and have the person who realises something's incorrect tell the people around him and have them all reject the idea. The idea doesn't reach many of the thousand. He could move around and eventually tell all of the 999 other people directly, but almost all of them reject it even though he's right. That is not like the sharing out of movement energy or any resistance to acceleration. It doesn't map to the physics.
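The two scenarios above differ in a single parameter: the probability that a listener accepts the correction. A rough toy model of the spreading process, where the population size, number of rounds, and acceptance probabilities are all assumptions:

```python
import random

random.seed(1)  # arbitrary seed for a reproducible run

PEOPLE = 1000
ROUNDS = 20

def spread(accept_prob: float) -> int:
    """Each round, every informed person tells one randomly chosen
    person, who accepts the correction with probability accept_prob.
    Returns how many people end up informed."""
    informed = {0}  # person 0 spots the error first
    for _ in range(ROUNDS):
        for _teller in list(informed):
            listener = random.randrange(PEOPLE)
            if random.random() < accept_prob:
                informed.add(listener)
    return len(informed)

everyone_listens = spread(1.0)    # first scenario: all accept
almost_all_reject = spread(0.01)  # second scenario: nearly all reject

print("everyone accepts: ", everyone_listens)
print("almost all reject:", almost_all_reject)
```

When everyone accepts, the correction spreads roughly exponentially and saturates the population in a handful of rounds; when almost everyone rejects it, it barely leaves its source, which is the asymmetry the second scenario describes.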
It (the AI) wouldn't have to react the same way. It would show no sign of being in pain and it would not claim to be in pain. It would simply say that it was damaged and that it's trying to minimise further damage by putting less weight on it.