I read about an interesting experiment where they demonstrated, I think using PET scans, that a person's brain had made a decision to act or not act before the person was conscious of having made a decision. I wish I could find the source, but when I read it, I thought, wow, that does mess with the concept of free will somewhat.
I am currently building a system (A.I. software) which I reckon ought to be able to match my own intelligence on a very ordinary machine within a few months of learning after the build is complete (which should be around the middle of this year).
Geezer, I'm not sure why you brought up autonomic functions of the body just to say that doesn't prove we don't have free will. Autonomic functions are, by definition, beyond our day-to-day control. I'm not understanding what your point is.
Will it be able to tell a lie and be aware that it's lying when it does so, like your brain surely can when you say something simple like "Grass is red"?
We can learn to take control, at least for a period of time, of what are normally automatic functions.
Nizzle said "No one will argue that you can decide for yourself what you're having for dinner this evening, but some people, like David Cooper, will say that the current (quantum)physical state of your brain and body will make you choose one or the other and thus the decision will be made for you, by your brain and body.But it happens to be that that's exactly what we are.. We are a brain in a body. So if the brain and body makes the decision for us, we make it for ourselves."You lost me! You start your discussion with the statement you fall on the side of the discussion where there is no free will. Then you give the above example that shows we are making our own decisions, even if it is at a quantum level, it is still us.at level a decision is made within ourselves, it is still US making the decision, be it the subconscious or the quantum us.Unless you are stating that at the quantum it is no longer "us". My question then becomes, if the quantum level of us is not us, who or what is it?
It will know that the statement it's making clashes with its database of knowledge and is therefore a lie.
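A toy sketch of the kind of consistency check being described, purely for illustration; the fact format and the is_lie helper are invented here and are not how any real system in this thread is built.

```python
# Hypothetical sketch: flagging a statement as a lie by checking it against
# a small database of believed facts. The fact format and the contradiction
# test are invented for illustration only.

knowledge = {
    "grass": "green",   # the system believes grass is green
    "sky": "blue",
}

def is_lie(subject, claimed_property):
    """Return True if the claim clashes with what the system believes."""
    believed = knowledge.get(subject)
    return believed is not None and believed != claimed_property

# Saying "grass is red" clashes with the stored belief, so the system
# would know it is uttering a lie.
print(is_lie("grass", "red"))    # True
print(is_lie("grass", "green"))  # False
```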
Okay, but could you program your software to make a test subject who's interacting with your software believe a lie that it's telling? Humans tell lies mostly because they somehow benefit from it themselves (or at least think they'll benefit from the lie), and I know that such a motivation will be lacking in your AI software because, I assume, it's 100% unselfish. But suppose you program in the motivation "convince the test subject of a lie". Would it be capable of doing so?
BTW, once your AI program is finished, what kind of interface will it be using? Something like Cleverbot? And I want to volunteer for the Turing test if you think of doing this!
We've now reached the point where discussing free will leads to discussing consciousness. Computers lack consciousness. People generally believe themselves to be conscious. Let's try to add something conscious to a machine. A robot has sensors all over its surface designed to detect contact with other objects, and if anything hits it, it will send a signal to the processor to trigger an action. The processor then runs a bit of code to handle the situation and try to move the robot away from whatever it might be that hit it. Now, if we want to make this more like a human, the processor should perhaps experience pain. So, let's arrange for it to feel pain whenever a signal comes in from one of these sensors. What's the result?
The robot behaves exactly the same way as it did before, but with the addition that something in it feels pain. The pain becomes part of the chain of causation, but it doesn't change anything about the choice that is made,
so there is no room for it to introduce any free will into things. What it does do, however, is introduce the idea of there existing something in the machine that can feel sensations and which can be identified as "I", and that's where we run up against the real puzzle, because even if you could have a component capable of feeling pain in the system, you have the problem of how you could ever get that component to inform the system that it is actually feeling pain and not just passing on the same signal that was fed into it.
For the component that feels pain to be able to pass on knowledge of pain to the rest of the system, it would have to be a lot more complex than something that simply feels pain. What we'd need is something complex which collectively feels the pain and which understands that it is feeling the pain and which is able to articulate the fact that it is feeling the pain and which feels as if it is involved in the mechanism for responding to that pain. The last part of that is what makes people feel that they have free will (even though they don't), but the rest of it is problematic as it doesn't look as if it should be possible for something like that to exist at all.
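To make the point concrete, here is a minimal sketch in which a "pain" value is added to the causal chain but the chosen action stays exactly the same; the names and numbers are invented purely for illustration and aren't a claim about how any real robot or brain is wired.

```python
# Hypothetical sketch: a contact sensor triggers an avoidance routine.
# Adding a "pain" value to the chain changes nothing about which action
# gets chosen - it only adds a quantity that something would have to
# feel, which is exactly the puzzle discussed above.

def handle_contact(sensor_id, with_pain=False):
    pain = None
    if with_pain:
        pain = 10  # an internal "pain" magnitude; nothing here actually feels it
    # The action depends only on which sensor fired, not on the pain value.
    action = "move_away_from_sensor_%d" % sensor_id
    return action, pain

# Same choice either way; only the extra pain value differs.
print(handle_contact(3))                   # ('move_away_from_sensor_3', None)
print(handle_contact(3, with_pain=True))   # ('move_away_from_sensor_3', 10)
```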
The result is that you haven't modelled pain correctly.
I mean, a classic 'neural network' has no training system built into it, but humans clearly do have a training system, and pain is part of that system. So a thing like pain is designed into a human or animal brain by evolution. It's a really strong sign that the animal is doing something very wrong and should learn to avoid that in future. It's not simply an input, like the colour red; it tells the other neurons that they need reprogramming.
What happens is that when you feel pain, your brain notices that, correlates the neuronal activities that were happening around that time, and downvalues those things. It's a VALUE of and for the neural network, not just an input: the neural network gets a hit of pain and downgrades everything a little, changing the weights between neurons. Which weights it chooses, and where in the brain, have been selected by evolution, and it probably depends on what hurts; burning your finger is different from burning your foot is different from... there are doubtless chemical and electrical triggers that alter the weights.
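Roughly that idea in toy form; the weight layout, the update rule, and the learning rate below are all invented for illustration and are not a claim about real neural circuitry.

```python
import random

# Toy sketch of pain as a training signal rather than a plain input:
# when a pain value arrives, connections that were recently active get
# downvalued, so whatever behaviour preceded the pain becomes less likely.
# All numbers and the update rule are made up for illustration.

weights = [random.uniform(0.4, 0.6) for _ in range(5)]   # connection strengths
recent_activity = [1.0, 0.8, 0.0, 0.2, 0.0]              # activity just before the pain

def apply_pain(weights, recent_activity, pain, rate=0.1):
    """Downgrade weights in proportion to how active they were when it hurt."""
    return [w - rate * pain * a for w, a in zip(weights, recent_activity)]

print("before:", [round(w, 2) for w in weights])
weights = apply_pain(weights, recent_activity, pain=1.0)
print("after: ", [round(w, 2) for w in weights])
```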
Quote
"For the component that feels pain to be able to pass on knowledge of pain to the rest of the system, it would have to be a lot more complex than something that simply feels pain. What we'd need is something complex which collectively feels the pain and which understands that it is feeling the pain and which is able to articulate the fact that it is feeling the pain and which feels as if it is involved in the mechanism for responding to that pain. The last part of that is what makes people feel that they have free will (even though they don't), but the rest of it is problematic as it doesn't look as if it should be possible for something like that to exist at all."

That makes no sense at all. Something that feels pain, reacts to it, and learns to avoid pain is highly unlikely to involve anything we would normally describe as free will; it's going to be a very, very evolutionarily ancient process.
Quote from: wolfekeeper on 25/02/2012 17:13:30
"The result is that you haven't modelled pain correctly."

Indeed, and no one else has either: it doesn't appear to be possible to model pain at all, so if you have ideas about how it can be done, I want to hear them.

Quote
"I mean, a classic 'neural network' has no training system built into it, but humans clearly do have a training system, and pain is part of that system. So a thing like pain is designed into a human or animal brain by evolution. It's a really strong sign that the animal is doing something very wrong and should learn to avoid that in future. It's not simply an input, like the colour red; it tells the other neurons that they need reprogramming."

I don't think learning is the immediate priority when pain is generated; it's about driving you to do something as quickly as possible that might reduce or eliminate the pain.
Clearly there could be some learning associated with an event involving pain if it's a novel situation which could be avoided in future, but not at that immediate time.
Did you get this idea from somewhere I could go to read up on it more fully? It sounds like an interesting idea, even if it doesn't relate directly to the business of pain driving action.
Quote
Quote
"For the component that feels pain to be able to pass on knowledge of pain to the rest of the system, it would have to be a lot more complex than something that simply feels pain. What we'd need is something complex which collectively feels the pain and which understands that it is feeling the pain and which is able to articulate the fact that it is feeling the pain and which feels as if it is involved in the mechanism for responding to that pain. The last part of that is what makes people feel that they have free will (even though they don't), but the rest of it is problematic as it doesn't look as if it should be possible for something like that to exist at all."
"That makes no sense at all. Something that feels pain, reacts to it, and learns to avoid pain is highly unlikely to involve anything we would normally describe as free will; it's going to be a very, very evolutionarily ancient process."

It isn't free will, but my point is that it feels as if it is, because we feel as if we are something inside the machine that makes conscious decisions. If we were non-conscious machines like computers, no one would entertain the idea of free will at all, but adding consciousness into the system complicates things substantially, and no one has managed to get a handle on what consciousness is, other than that it involves feelings of a multiplicity of different kinds, and these feelings have to be experienced by something and processed in some way so that they can have a role in the chain of causation. All of that is problematic.