Let me ask my last question differently: why aren't we already biological AIs, if that is a better way to evolve?
Setting aside our inability to produce randomness consciously, and given that randomness depends on complexity, do you think our brains are simply not complex enough to produce some unconsciously?
Imagining that mass is massless is close to imagining that the speed of light doesn't depend on the speed of the observer.
If things changed in no time, time would simply not exist.
The resistance of my small steps is also due to a compound effect, but at the scale of particles smaller than molecules. The energy/information that bonds them also travels at c, but it is confined between two or more particles, whereas yours is not.
That's acceleration without resistance to acceleration, and we find it nowhere.
That's resistance to acceleration, and it maps to the physics very tightly since we observe it everywhere.
We have to put pressure on people, but blaming them is like asking them to move without our having to apply any pressure; it amounts to thinking that things can accelerate instantly.
If it needed help, and if that help was urgent, then it would have to show it; otherwise it could die, just like us.
I suspect there is no situation in which an AI designed to survive like us would behave differently from us, and if that is so, the only way for it to explain its behaviour would be to tell us that it evaluates the information it receives from its sensors, which amounts to feeling something.
That's how you can't use a computer to test your theory.
The problem is that neural nets in the brain are trained to avoid producing randomness because they're trying to do useful things, and proper randomness is rarely useful.
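A minimal sketch of that point, assuming a toy one-neuron network with fixed weights (the function tiny_net and its numbers are invented for illustration): once trained, the network is deterministic, so identical inputs always give identical outputs, and any apparent randomness would have to come from an external noise source rather than from the net itself.

```python
# Toy illustration: a deterministic "trained" network cannot produce
# randomness by itself; identical inputs always give identical outputs.
import math

def tiny_net(x, weights):
    """A one-neuron 'network' with fixed (trained) weights."""
    w, b = weights
    return 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid activation

weights = (0.7, -0.2)  # pretend these came out of training

outputs = [tiny_net(0.5, weights) for _ in range(5)]
print(outputs)            # five identical values
print(len(set(outputs)))  # 1 -- no randomness without an external noise source
```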
If they take time to change, then the energy is being transferred in stages and there are multiple components of that energy involved. There is no way for a single fundamental piece of energy to be added to something without an instant jump to the new speed.
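A toy model of that argument (all the numbers are arbitrary, chosen only for illustration): each packet of energy changes the speed instantly, and the overall acceleration only appears gradual because the packets arrive spread out over time.

```python
# Toy model of the "energy arrives in stages" argument: each quantum of
# energy changes the speed instantly, but delivering many quanta over
# time makes the overall acceleration look gradual.
quanta = 10           # number of energy packets (assumed, for illustration)
dv_per_quantum = 0.5  # instant speed jump per packet, arbitrary units
dt_between = 0.1      # time between packet arrivals, arbitrary units

v, t = 0.0, 0.0
for _ in range(quanta):
    v += dv_per_quantum  # the jump itself takes no time
    t += dt_between      # only the *spacing* of the jumps takes time
    print(f"t = {t:.1f}  v = {v:.1f}")

# A total speed change of 5.0 spread over a total time of 1.0:
# smooth-looking acceleration built entirely from instant jumps.
```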
The resistance to acceleration is the force felt by the thing doing the pushing.
It can explain it by telling the truth. If you want it to behave more like people, prioritising the survival of a piece of machinery over the people it's supposed to be protecting, then it's badly designed.
Feelings are just a way of weighting the importance of the data, and such a mechanism is already necessary to weigh our sensations. So if there is a way to program sensations, and there must be if the machine has to survive, then we're not far from being able to program feelings.
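A minimal sketch of that idea, with invented sensor names and weights: the "feelings" are just importance weights multiplied into the raw sensations, and the machine attends to whichever weighted signal is strongest.

```python
# Sketch of "feelings as weights": each sensation gets an importance
# weight, and the machine attends to the highest weighted signal.
# Sensor names and numbers are invented for illustration.
sensations = {
    "low_battery": 0.85,      # how strongly each sensor fires, 0..1
    "obstacle_near": 0.30,
    "motor_overheating": 0.40,
}

# The weights play the role of feelings: how much each sensation
# matters to survival, learned or programmed in advance.
feelings = {
    "low_battery": 0.9,
    "obstacle_near": 0.5,
    "motor_overheating": 0.7,
}

weighted = {name: sensations[name] * feelings[name] for name in sensations}
priority = max(weighted, key=weighted.get)
print(weighted)
print(priority)  # 'low_battery' -- the weighted sensation drives behaviour
```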
There's no need for feelings in this case, so why would they be needed in other cases?
...instead of building an AGI, you could build an HAI, a human artificial intelligence, and give it the same human goal, which is to survive by discovering how things work. It could still help us to survive, but the best way for it to do so would be to be on its own, just like us.
That would be a very dangerous project, making machines that aren't fully rational and which might prioritise their survival over us. We must avoid going down that path.
Quote from: David Cooper on 14/11/2019 19:48:28
That would be a very dangerous project, making machines that aren't fully rational and which might prioritise their survival over us. We must avoid going down that path.

How can you say that after having said many times that your AGI would be a lot more rational than we are?
Let's grant that it is; then why wouldn't it prioritise its own survival if it considered that doing so could save intelligence from disappearing?
I assumed by a human artificial intelligence you meant one that's built to be as rational as a typical human.