If we wire it to love throwing itself into a volcano, I'm afraid it won't live long; but if we wire it to retract its hand when something is too hot for its skin, it will do so, and if it can remember the sensation, it will try to avoid feeling it again, and it might live longer that way.
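Here's roughly what I have in mind, as a minimal Python sketch (the threshold, the object names, and the whole interface are just my own assumptions for illustration):

```python
# Minimal sketch: a hand that retracts from heat and remembers what burned it.
# SKIN_PAIN_THRESHOLD and the touch interface are invented for the example.

SKIN_PAIN_THRESHOLD = 50.0  # degrees C, hypothetical

class Hand:
    def __init__(self):
        self.burn_memory = set()  # objects remembered as too hot

    def touch(self, obj_name, temperature):
        if obj_name in self.burn_memory:
            return "refuses to touch"        # learned avoidance
        if temperature > SKIN_PAIN_THRESHOLD:
            self.burn_memory.add(obj_name)   # remember the sensation
            return "retracts"                # immediate withdrawal
        return "keeps touching"

hand = Hand()
print(hand.touch("stove", 200.0))  # retracts
print(hand.touch("stove", 20.0))   # refuses to touch - it remembers
```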
How is data that represents a sensation transformed into data that produces a movement, or how is data that produces a movement transformed into data that represents a sensation? Is that close to what you call the hard question? If not, I might need a simulation that shows the data going sideways to the motion. :0)
Everything in nature is a measure of degrees between two opposing states. You simply cannot have one state without the opposing state.
In the latter case, there is no opposite in the sense of negative - there is only an opposite in the sense of there being two ends of the range where they're furthest apart.
The point is that you could wire it to feel pleasure when being damaged, but use that feeling of pleasure to drive a strong reaction to get away from the damaging thing.
In the same way, it could be wired to feel more and more pain as it gets into a safer position, and that could still be used to drive it to seek greater safety. The feeling isn't felt by the information system which decides how to react to the inputs.
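To make that wiring point concrete, here's a toy Python sketch (entirely my own illustration): the behaviour is fixed by the wiring table, so you can label the damage signal "pleasure" or "pain" and still get the same withdrawal.

```python
# Toy illustration: the reaction is determined by the wiring, not by the
# label attached to the feeling. Both wirings below flee damage and seek
# safety; only the label carried alongside the action differs.

def react(signal, wiring):
    """Look up the felt label and the response wired to a signal."""
    return wiring[signal]

normal_wiring   = {"damage": ("pain",     "withdraw"),
                   "safety": ("pleasure", "approach")}
inverted_wiring = {"damage": ("pleasure", "withdraw"),   # 'feels good', still flees
                   "safety": ("pain",     "approach")}   # 'hurts', still approaches

for wiring in (normal_wiring, inverted_wiring):
    label, action = react("damage", wiring)
    print(f"damage -> felt as {label}, action: {action}")
```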
Yes, I can see that possibility. Do you have specific examples?
I spent a lot of time on this exact topic many years ago. I measured everything in true-unknown-false, effectively eliminating the concept of belief from my day-to-day life.
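For what it's worth, that true-unknown-false bookkeeping looks a lot like Kleene's three-valued logic. Here's a small Python sketch of how I'd encode it (my own illustration, not a claim about how anyone actually did it):

```python
# Three-valued (Kleene) logic: True, False, or None for "unknown".

def and3(a, b):
    if a is False or b is False:
        return False          # one False settles a conjunction
    if a is True and b is True:
        return True
    return None               # otherwise the result stays unknown

def or3(a, b):
    if a is True or b is True:
        return True           # one True settles a disjunction
    if a is False and b is False:
        return False
    return None

print(and3(True, None))  # None - can't conclude while one side is unknown
print(or3(True, None))   # True - already settled regardless of the unknown
```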
I don't see any difference between an AGI and us in this case. If you still consider that your AGI wouldn't have to use data from its environment to control its movements, then to me that means it wouldn't have to care about staying alive, which means it wouldn't be able to learn what staying alive means, which means it might not be able to help us stay alive.
Our simplest reaction system is a reflex. In the hand's case, the data from the environment is effectively reflected when it reaches our spinal cord: it is directly transformed into data that retracts the hand. There is no need to decide which way or at what speed the hand must be retracted; the arm only has to bend as fast as it can. We can't really call the data that triggered the move a feeling, since it hasn't yet been used by the central system, but it is nevertheless data that came from the environment, and I think it would be quite easy to wire an AGI to produce the same kind of move.
What if all our moves were triggered subconsciously before we could feel anything? Would that be closer to the way you need to program your AGI?
...and what we might feel from a particular spot is always late, since the data that produced the feeling has already left for another spot.
But we can't produce any rational model of how this happens in which the experience of pain (or any other feeling) can be converted into data that documents the experience of that feeling.
The data is generated by something that can't feel anything, so it's produced by a system that has no access to the truth of how the experience felt and is thus incompetent - the feeling could be the exact opposite of what it asserts it to be.
This reaction seems quite easy to program.
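It does seem programmable. Here's a minimal sketch of the reflex arc described above, with the signal "reflected" into motion before any central system sees it (the threshold and the names are my assumptions):

```python
# Reflex arc sketch: the withdrawal is triggered at the "spinal" layer,
# before the central system ever gets to interpret the data as a feeling.

REFLEX_THRESHOLD = 50.0  # hypothetical heat level that trips the reflex

def spinal_layer(heat_reading, motor, central_queue):
    """Reflect dangerous input straight into motion; forward the data upward."""
    if heat_reading > REFLEX_THRESHOLD:
        motor("retract_arm_full_speed")   # no decision about speed or direction
    central_queue.append(heat_reading)    # the feeling, if any, comes later

motor_log = []
central = []
spinal_layer(120.0, motor_log.append, central)
print(motor_log)  # ['retract_arm_full_speed'] - moved before anything was 'felt'
print(central)    # [120.0] - only now does the data reach the central system
```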
The data from pain is specific: no data means no pain, data means pain, and more data means more pain. If a brain produces the wrong move, its owner dies, so only the brains that make the right move survive; only their genes are transmitted, and the following brains make the right move. This way, at the beginning of the evolutionary process, the meaning of pain is unimportant; only the right reaction is. Now that we can think, we have given a name to that feeling because we all feel the same thing, the same way we gave the name red to that colour because we all see the same thing.
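A crude simulation of that selection story (all the numbers are invented) would look like this: reactions start out random, and only the lineages that happen to withdraw from damage persist, without the meaning of the signal ever entering into it.

```python
# Crude selection sketch: brains start with a random reaction to the damage
# signal; only those that withdraw survive to pass the reaction on.
import random

def new_brain():
    return random.choice(["withdraw", "approach"])  # meaning of 'pain' irrelevant

population = [new_brain() for _ in range(100)]
for generation in range(5):
    survivors = [b for b in population if b == "withdraw"]       # wrong move dies
    population = [random.choice(survivors) for _ in range(100)]  # offspring copy parents

print(set(population))  # {'withdraw'} - the right reaction, selected, not understood
```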
Where do thoughts come from?
Bad feelings, good feelings: they are subjective to the thought. We have no good or bad feelings prior to the moment of the thought. We don't really feel anything other than the moment we are experiencing.
Quote from: Thebox on 02/04/2018 11:24:14
Where do thoughts come from?

A mixture of memory and new input sparks them off.

Quote
Bad feelings, good feelings: they are subjective to the thought. We have no good or bad feelings prior to the moment of the thought. We don't really feel anything other than the moment we are experiencing.

They do indeed appear to be tied tightly to the thought. I've been ill for a long time with something that was misdiagnosed but turns out to be Crohn's disease. It generates a lot of pain, making it easy for me to study pain without having to stick pins in my arm. You can't just imagine pain away and not be bothered by it. That pain goes right through thought, but it isn't data, and yet data is somehow generated that documents it. What could do that? A computer has no ability to do this.

I can only think that we (minimalist souls) are something quantum and run on rules that go beyond what we understand of computation. There must be some higher level of computation in which sentience is a key component of the system, or where it somehow is the data system. Trying to understand how that might work while using the lower level of computation as the model will never find the solution, which means we can only understand the mechanism by using that mechanism itself as the model; but we don't have that model yet, so we are cut off from the solution by a barrier we can only overcome through some giant leap of the imagination. No one has yet managed to make that leap in the right direction, but understanding why our computers can't turn feelings (should they have any) into data about experiencing feelings may help someone find a way to the solution to the biggest puzzle of them all. Those who don't understand the problem certainly won't solve it.
Try it and see how far you get. Take a computer, a sensor, and put a "sentience box" in between them. The feeling is felt in the box (by something which science can't describe). If a signal comes in which feels pleasant, the sentience box sends a signal on down wire 1 into the computer to say that pleasure has been experienced. If a signal comes in that feels painful, the sentience box sends a signal down wire 2 to the computer to say that pain has been experienced. What does the computer make of these signals?
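Framed as code, the computer's side of that setup is just this (the two wires come from the description above; everything else is my sketch):

```python
# The computer's side of the "sentience box" thought experiment: all it ever
# receives is a wire number. Nothing here has, or needs, access to a feeling.

def computer_handles(wire):
    lookup = {1: "pleasure was experienced",   # asserted, not known
              2: "pain was experienced"}
    return lookup.get(wire, "unrecognised signal")

print(computer_handles(1))  # "pleasure was experienced"
print(computer_handles(2))  # "pain was experienced"
```

Swap the two entries in the table and the computer reports the opposite experience with no way of noticing, which is exactly the point: the signals carry no trace of how anything felt.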
You have heard of nanobots, right? Now you are going to hear about biobots.
...millions of chips, wired like our neurons are wired...
Let's suppose that, with the proper improvements, such a system would become a lot more efficient than a human brain. Do you think it could help us rule the world as efficiently as your AGI, or, on the contrary, that it would be no more reliable than we are?
I just realized that sentience means consciousness; I thought you were only talking about feelings.
That's what animals do, and we don't attribute to them the same kind of consciousness that we have, so what is the difference?
The difference with your AGI is that it can't intrinsically resist a change in frequency...
Quote from: Thebox on 02/04/2018 17:29:45
You have heard of nanobots, right? Now you are going to hear about biobots.

Nice, but how do they get round the problem they can't get round? What I want to see is a system with sentience in it which generates data that documents the experience of feelings in the system in an informed way, rather than just generating baseless assertions about feelings which aren't informed by any such experience.
What do you mean, they can't get round it? I explained us.
a signal came in from a port linked to the robot arm. Data was looked up to see what this meant, and it was established that it was a damage signal, so a signal was sent to another port to withdraw the arm. A message was sent to the speaker to say "Ouch! Bad doggie! Don't bite my hand!", but nothing in the system felt any pain.
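That sequence is easy to spell out. Here is a literal transcription into Python (the port number and the lookup table are invented for the example):

```python
# Literal sketch of the sequence described: look up what the port's signal
# means, withdraw the arm, voice the complaint. No pain anywhere in it.

SIGNAL_MEANINGS = {7: "damage"}   # port 7 is the robot arm, hypothetically

def on_signal(port, send, speak):
    meaning = SIGNAL_MEANINGS.get(port)
    if meaning == "damage":
        send("withdraw_arm")                             # the reaction
        speak("Ouch! Bad doggie! Don't bite my hand!")   # an assertion about a feeling

on_signal(7, send=print, speak=print)
```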
...but to be sure that your AGI won't harm us, I think we had better know why we have feelings.
We humans took a lot of risks to get where we are, and in doing so we endangered other species. You often say that your AGI would be more reliable than we are, which may mean that it wouldn't take risks; but if it did, how would it know that its long-term risks are not going to become dangerous for us one day?
When you take risks, you're really supposed to be minimising risk.