The Naked Scientists Forum

Author Topic: Do People have Free Will, or is the Concept Nothing But Illusion.  (Read 14761 times)

Offline Gordian Knot

  • Sr. Member
  • ****
  • Posts: 165
    • View Profile
Uh, okay..........
 

Offline Nizzle

  • Hero Member
  • *****
  • Posts: 964
  • Thanked: 1 times
  • Extropian by choice!
    • View Profile
    • Carnivorous Plants
Nizzle said "No one will argue that you can decide for yourself what you're having for dinner this evening, but some people, like David Cooper, will say that the current (quantum)physical state of your brain and body will make you choose one or the other and thus the decision will be made for you, by your brain and body.

But it happens to be that that's exactly what we are... We are a brain in a body. So if the brain and body makes the decision for us, we make it for ourselves."

You lost me! You start your discussion with the statement that you fall on the side of the debate where there is no free will. Then you give the above example that shows we are making our own decisions; even if it happens at a quantum level, it is still us. At whatever level a decision is made within ourselves, it is still US making the decision, be it the subconscious or the quantum us.

Unless you are stating that at the quantum level it is no longer "us". My question then becomes: if the quantum level of us is not us, who or what is it?

Yeah, it might've been a bit confuzzling. I was thinking about the issue while I was writing my post, and in the beginning I thought free will must somehow exist, but after going deeper and deeper I had to revise my standpoint and edit the first line of my post.

So I started to believe there is no free will from my sentence "Now you can drive it a bit further.."

In that post I wanted to share that I no longer believe in free will, but that it doesn't automatically mean I believe in fate.

We are still 'us' on a quantum level, but we have no say in how that quantum level representation of 'us' came to be, and it's exactly that quantum state that influences all our future actions and decisions.
In other words, we have to undergo the course of our lives, dictated by either quantum level randomness/probabilities or fate, whichever is your preference.

It will know that the statement it's making clashes with its database of knowledge and is therefore a lie.

Okay, but could you program your software to make a test subject who's interacting with it believe a lie that it's telling?
Humans tell lies mostly because they somehow benefit from it themselves (or at least think they'll benefit from the lie), and I know that such a motivation will be lacking in your AI software because, I assume, it's 100% unselfish. But suppose you program in the motivation "Convince the test subject of a lie": would it be capable of doing so?

BTW, once your AI program is finished, what kind of interface will it be using? Something like Cleverbot? And I want to volunteer for the Turing test if you think of doing this :)

 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Okay, but could you program your software to make a test subject who's interacting with it believe a lie that it's telling?
Humans tell lies mostly because they somehow benefit from it themselves (or at least think they'll benefit from the lie), and I know that such a motivation will be lacking in your AI software because, I assume, it's 100% unselfish. But suppose you program in the motivation "Convince the test subject of a lie": would it be capable of doing so?

Initially I don't want it to tell lies at all, but it should be simple enough to add the capability, and it will be necessary - morality may dictate that someone needs to be lied to in order to protect someone else from harm. The machine will also need to be able to lie when playing a game. It needs to keep lies separate from truths to avoid confusing itself, so what it has to do is create an alternative version of reality in which the lie can be placed, and then other data which conflicts with that lie can be modified to try to fit in with it, thereby generating a whole pack of lies to back up the first one. The rules will then allow the machine to present data to the person it's trying to fool based on the alternative version of reality, and the only thing needed to drive it to present lies is a moral imperative (meaning it's necessary to lie to prevent wrong being done) or a moral request (meaning a deception which isn't intended to do harm and which will likely only be temporary).
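To make that architecture a bit more concrete, here is a minimal sketch in Python of the kind of structure being described, with the caveat that all the names and the toy conflict test are invented for illustration and this is not the actual design: the lie goes into a separate "alternative reality" overlay, facts that clash with it get fictional counterparts, and answers are drawn from the overlay only while the machine is deceiving.

Code:
# Hypothetical sketch only: a knowledge base with a separate overlay for lies.
# None of these names come from the AI system discussed above.

class KnowledgeBase:
    def __init__(self):
        self.truths = {}    # what the machine believes: proposition -> True/False
        self.fictions = {}  # the "alternative version of reality" built around a lie

    def assert_truth(self, proposition, value=True):
        self.truths[proposition] = value

    def conflicts(self, a, b):
        # Toy conflict test: "X" clashes with "not X".
        return a == "not " + b or b == "not " + a

    def tell_lie(self, proposition):
        """Place a lie in the overlay and patch facts that clash with it."""
        self.fictions[proposition] = True
        for fact, value in self.truths.items():
            if self.conflicts(fact, proposition):
                # Generate a supporting lie so the fiction stays consistent.
                self.fictions[fact] = not value

    def answer(self, proposition, deceiving=False):
        """Answer from the overlay while deceiving, otherwise from the truths."""
        if deceiving and proposition in self.fictions:
            return self.fictions[proposition]
        return self.truths.get(proposition)


kb = KnowledgeBase()
kb.assert_truth("the package has arrived")
kb.tell_lie("not the package has arrived")
print(kb.answer("the package has arrived"))                  # True: what it believes
print(kb.answer("the package has arrived", deceiving=True))  # False: what it tells the target

The point of keeping the two stores separate is exactly the one made above: the machine never confuses its pack of supporting lies with what it actually believes.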

Quote
BTW, once your AI program is finished, what kind of interface will it be using? Something like Cleverbot? And I want to volunteer for the Turing test if you think of doing this :)

I've never looked at Cleverbot... I have now. (I just asked it "What are trees for?" and got the reply "A variety of subatomic particles often found surrounding the nucleus of an atom." I followed it up with "What about cheese?" and it replied "It's green." Then I asked "Who invented the telescope?" and it was obviously programmed to ask me for an answer in case someone else asks the same question in future, because it replied with "I don't know, you tell me." I decided to be helpful and told it "Dutch children of an optician - they were playing with lenses." It then informed me: "No you're a computer!")

I'm building it into my own operating system rather than putting it online, but you'll still converse with it in a similar way, although mine will start analysing as you type rather than waiting till the sentence is complete, and it may be able to answer before you've finished, as well as doing so in so much detail that you will be in no doubt that it can only be a machine. Like getting it to lie, getting it to hide its intelligence will not be an immediate priority - I want to focus on getting it to make sense first and worry about things like the Turing Test later (which is just a distraction, as it will already be more intelligent than a human before it is able to pass the Turing Test).

I haven't worked out yet how to release it, or when - there are a number of problems relating to how to prevent it being stolen, and the question of whether it needs to be kept out of the hands of the Russian and Chinese governments. This could be a huge barrier to making human-level A.I. available to the public, because we may have to go through a phase where only the military is allowed to use it, during which it will be used in devices aimed at regime change to make the world safe enough to put it into the hands of the public. We cannot allow A.I. with the morality module removed to be used as a tool of oppression. That means that for quite a long time, all you get to see of it may be transcripts of conversations which journalists and the like have with the machine.
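Coming back to the "analysing as you type" point above, here is a rough Python illustration of what processing input incrementally, rather than sentence by sentence, might look like. It is only a guess at the general shape of the idea, not the actual system, and the analysis steps are placeholders.

Code:
# Rough, hypothetical sketch of "analysing as you type": handle each keystroke
# and analyse whenever a word is completed, instead of waiting for the whole
# sentence. The analysis steps are placeholders, not the real system.

class IncrementalAnalyser:
    def __init__(self):
        self.buffer = ""   # characters of the word currently being typed
        self.words = []    # completed words of the current sentence

    def feed(self, char):
        """Feed one keystroke; analyse whenever a word or sentence is completed."""
        if char in " .?!":
            if self.buffer:
                self.words.append(self.buffer)
                self.buffer = ""
                self.partial_analysis()
            if char in ".?!":
                self.full_analysis()
        else:
            self.buffer += char

    def partial_analysis(self):
        # Placeholder: a real analyser would update a growing interpretation here.
        print("so far:", self.words)

    def full_analysis(self):
        print("sentence complete:", " ".join(self.words))
        self.words = []


analyser = IncrementalAnalyser()
for ch in "What are trees for?":
    analyser.feed(ch)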
 

Offline CZARCAR

  • Hero Member
  • *****
  • Posts: 686
    • View Profile
Diarrhea = free will? Even I can't control it.
 

Offline wolfekeeper

  • Neilep Level Member
  • ******
  • Posts: 1092
  • Thanked: 11 times
    • View Profile
We've now reached the point where discussing free will leads to discussing consciousness. Computers lack consciousness. People generally believe themselves to be conscious. Let's try to add something conscious to a machine. A robot has sensors all over its surface designed to detect contact with other objects, and if anything hits it, it will send a signal to the processor to trigger an action. The processor then runs a bit of code to handle the situation and try to move the robot away from whatever it might be that hit it. Now, if we want to make this more like a human, the processor should maybe experience pain. So, let's arrange for it to feel pain whenever a signal comes in from one of these sensors. What's the result?
The result is that you haven't modelled pain correctly.
Quote
The robot behaves exactly the same way as it did before, but with the addition that something in it feels pain. The pain becomes part of the chain of causation, but it doesn't change anything about the choice that is made,
Yeah, exactly, and that's why.
Quote
so there is no room for it to introduce any free will into things. What it does do, however, is introduce the idea of there existing something in the machine that can feel sensations and which can be identified as "I", and that's where we run up against the real puzzle, because even if you could have a component capable of feeling pain in the system, you have the problem of how you could ever get that component to inform the system that it is actually feeling pain and not just passing on the same signal that was fed into it.
I mean, a classic 'neural network' has no training system built into it, but humans clearly do have a training system, and pain is part of that system.

So a thing like pain is designed into a human or animal brain by evolution. It's a really strong sign that the animal is doing something very wrong and should learn to avoid that in future. It's not simply an input, like the colour red; it tells the other neurons that they need reprogramming.

What happens is that when you feel pain, your brain notices that and correlates neuronal activities that were happening around that time, and downvalues those things.

It's a VALUE of and for the neural network, it's not just an input: the neural network gets a hit of pain and downgrades everything a little, changing the weights between neurons. Which weights it adjusts, and where in the brain, has been selected by evolution, and it probably depends on what hurts; burning your finger is different from burning your foot is different from... there are doubtless chemical and electrical triggers that alter the weights.
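As a toy illustration of that, here is a crude eligibility-trace style sketch in Python; the class, the update rule and all the numbers are invented for the sake of the example, not a claim about how real brains wire it up. Connections that were recently active carry a decaying trace, and an incoming pain signal downgrades exactly those weights.

Code:
# Toy illustration of "pain as a value signal": connections that were recently
# active carry a decaying trace, and a pain signal downgrades exactly those
# weights. The rule and all the numbers are invented for the example.

import random

class TinyValueNetwork:
    def __init__(self, n_inputs, n_outputs, trace_decay=0.9, pain_rate=0.5):
        self.weights = [[random.uniform(0.0, 1.0) for _ in range(n_inputs)]
                        for _ in range(n_outputs)]
        self.traces = [[0.0] * n_inputs for _ in range(n_outputs)]
        self.trace_decay = trace_decay
        self.pain_rate = pain_rate

    def step(self, inputs):
        """Compute outputs and remember which connections were just used."""
        outputs = []
        for o, row in enumerate(self.weights):
            outputs.append(sum(w * x for w, x in zip(row, inputs)))
            for i, x in enumerate(inputs):
                # decay old traces, then mark the connections used on this step
                self.traces[o][i] = self.trace_decay * self.traces[o][i] + x
        return outputs

    def pain(self, intensity=1.0):
        """Pain downgrades the connections that were recently active."""
        for o, row in enumerate(self.weights):
            for i in range(len(row)):
                row[i] -= self.pain_rate * intensity * self.traces[o][i]


net = TinyValueNetwork(n_inputs=3, n_outputs=2)
net.step([1.0, 0.0, 1.0])   # whatever the animal happened to be doing
net.pain(intensity=1.0)     # "that was bad": those connections get downvalued

The design choice this is meant to bring out is the one being argued above: pain isn't just another input value flowing through the network, it reaches in and changes the weights themselves.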

And in humans the value system is likely to be very complex. For example, our ability to learn language is very, very probably mediated partly by value systems built into the brain that enable us to learn it: if we hear certain sounds, we value that, and seek them out and value emulating them, or whatever.

Quote
For the component that feels pain to be able to pass on knowledge of pain to the rest of the system, it would have to be a lot more complex than something that simply feels pain. What we'd need is something complex which collectively feels the pain and which understands that it is feeling the pain and which is able to articulate the fact that it is feeling the pain and which feels as if it is involved in the mechanism for responding to that pain. The last part of that is what makes people feel that they have free will (even though they don't), but the rest of it is problematic as it doesn't look as if it should be possible for something like that to exist at all.

That makes no sense at all. Something that feels pain and reacts to it, and learns to avoid pain is highly unlikely to involve anything we would normally describe as free will, it's going to be a very, very evolutionarily ancient process.
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
The result is that you haven't modelled pain correctly.

Indeed, and no one else has either - it doesn't appear to be possible to model pain at all, so if you have ideas about how it can be done, I want to hear them.

Quote
I mean, a classic 'neural network' has no training system built into it, but humans clearly do have a training system, and pain is part of that system.

So a thing like pain is designed into a human or animal brain by evolution. It's a really strong sign that the animal is doing something very wrong and should learn to avoid that in future. It's not simply an input, like the colour red; it tells the other neurons that they need reprogramming.

I don't think learning is the immediate priority when pain is generated - it's about driving you, as quickly as possible, to do something that might reduce or eliminate the pain. Clearly there could be some learning associated with an event involving pain if it's a novel situation which could be avoided in future, but not at that immediate time.

Quote
What happens is that when you feel pain, your brain notices that and correlates neuronal activities that were happening around that time, and downvalues those things.

It's a VALUE of and for the neural network, it's not just an input: the neural network gets a hit of pain and downgrades everything a little, changing the weights between neurons. Which weights it adjusts, and where in the brain, has been selected by evolution, and it probably depends on what hurts; burning your finger is different from burning your foot is different from... there are doubtless chemical and electrical triggers that alter the weights.

Did you get this idea from somewhere that I could go to read up on it more fully? It sounds like an interesting idea, even if it doesn't relate directly to the business of pain driving action.

Quote
Quote
For the component that feels pain to be able to pass on knowledge of pain to the rest of the system, it would have to be a lot more complex than something that simply feels pain. What we'd need is something complex which collectively feels the pain and which understands that it is feeling the pain and which is able to articulate the fact that it is feeling the pain and which feels as if it is involved in the mechanism for responding to that pain. The last part of that is what makes people feel that they have free will (even though they don't), but the rest of it is problematic as it doesn't look as if it should be possible for something like that to exist at all.

That makes no sense at all. Something that feels pain and reacts to it, and learns to avoid pain is highly unlikely to involve anything we would normally describe as free will, it's going to be a very, very evolutionarily ancient process.

It isn't free will, but my point is that it feels as if it is, because we feel as if we are something inside the machine that makes conscious decisions. If we were non-conscious machines like computers, no one would entertain the idea of free will at all, but adding consciousness into the system complicates things substantially, and no one has managed to get a handle on what consciousness is other than that it involves feelings of a multiplicity of different kinds, and these feelings have to be experienced by something and processed in some way so that they can have a role in the chain of causation. All of that is problematic.
 

Offline wolfekeeper

  • Neilep Level Member
  • ******
  • Posts: 1092
  • Thanked: 11 times
    • View Profile
The result is that you haven't modelled pain correctly.

Indeed, and no one else has either - it doesn't appear to be possible to model pain at all, so if you have ideas about how it can be done, I want to hear them.

Quote
I mean, a classic 'neural network' has no training system built into it, but humans clearly do have a training system, and pain is part of that system.

So a thing like pain is designed into a human or animal brain by evolution. It's a really strong sign that the animal is doing something very wrong and should learn to avoid that in future. It's not simply an input, like the colour red; it tells the other neurons that they need reprogramming.

I don't think learning is the immediate priority when pain is generated - it's about driving you, as quickly as possible, to do something that might reduce or eliminate the pain.
Oh sure, I'm not saying that pain isn't a direct input to the nervous system AS WELL, and I'm not saying that many of the immediate reactions aren't hard-wired: if you burn your finger there are reflexes that pull your hand away, as well as it being an input to your nervous system. And I'm sure there are one or more modules somewhere in the brain whose job it is to label something as 'bad' or 'good', which raise the stress levels and trigger fight-or-flight reactions.

Quote
Clearly there could be some learning associated with an event involving pain if it's a novel situation which could be avoided in future, but not at that immediate time.
Exactly, and it has to correlate with the current situation that led you there and cause downgrading of much of that activity. For example, you might be in a particular geographical location, and after that pain the location will make you uneasy if you're there again. There will be a module in the brain that models physical location, and that location will end up being associated with pain by the learning process, which means that pain has had to directly adjust the neural weights associated with that activation. Presumably the whole time you're at a location, that location is associated with neuronal activity of some kind: not like a GPS, but activation due to geographical features (mountain over there, tree over there, rock over there, kind of thing), and if you see that combination of features again, you'll get uneasy and run away.
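That geographical example could be sketched in Python like this, again purely as an invented illustration (hypothetical names, arbitrary numbers): the set of landmark features active when the pain arrives picks up an aversion value, and a later scene is scored by how many of those features it shares.

Code:
# Invented illustration: associating a combination of landmark features with
# pain, so that a similar scene later produces unease. Names and numbers are
# hypothetical, not a model of any real brain structure.

class PlaceAversionMemory:
    def __init__(self, learning_rate=0.8):
        self.aversion = {}            # landmark feature -> learned "bad place" value
        self.learning_rate = learning_rate
        self.current_features = set()

    def observe(self, features):
        """Features currently activating the 'place' module (mountain, tree, rock...)."""
        self.current_features = set(features)

    def pain(self, intensity=1.0):
        """Pain stamps an aversion value onto whatever features are active right now."""
        for f in self.current_features:
            self.aversion[f] = self.aversion.get(f, 0.0) + self.learning_rate * intensity

    def unease(self, features):
        """How uneasy a newly observed combination of features should make the animal."""
        return sum(self.aversion.get(f, 0.0) for f in features)


memory = PlaceAversionMemory()
memory.observe({"mountain over there", "tree over there", "rock over there"})
memory.pain()                                                    # something hurt here
print(memory.unease({"mountain over there", "tree over there"})) # > 0: uneasy, run away
print(memory.unease({"open beach"}))                             # 0.0: no bad memories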

Quote
What happens is that when you feel pain, your brain notices that and correlates neuronal activities that were happening around that time, and downvalues those things.

It's a VALUE of and for the neural network, it's not just an input: the neural network gets a hit of pain and downgrades everything a little, changing the weights between neurons. Which weights it adjusts, and where in the brain, has been selected by evolution, and it probably depends on what hurts; burning your finger is different from burning your foot is different from... there are doubtless chemical and electrical triggers that alter the weights.

Quote
Did you get this idea from somewhere that I could go to read up on it more fully? It sounds like an interesting idea, even if it doesn't relate directly to the business of pain driving action.
No, no, the action itself is a reflex; that has nothing immediately to do with learning.

I read something somewhere, somebody had found some structures that might act as part of a value system for the brain.

Quote
Quote
Quote
For the component that feels pain to be able to pass on knowledge of pain to the rest of the system, it would have to be a lot more complex than something that simply feels pain. What we'd need is something complex which collectively feels the pain and which understands that it is feeling the pain and which is able to articulate the fact that it is feeling the pain and which feels as if it is involved in the mechanism for responding to that pain. The last part of that is what makes people feel that they have free will (even though they don't), but the rest of it is problematic as it doesn't look as if it should be possible for something like that to exist at all.

That makes no sense at all. Something that feels pain and reacts to it, and learns to avoid pain is highly unlikely to involve anything we would normally describe as free will, it's going to be a very, very evolutionarily ancient process.

It isn't free will, but my point is that it feels as if it is, because we feel as if we are something inside the machine that makes conscious decisions. If we were non-conscious machines like computers, no one would entertain the idea of free will at all, but adding consciousness into the system complicates things substantially, and no one has managed to get a handle on what consciousness is other than that it involves feelings of a multiplicity of different kinds, and these feelings have to be experienced by something and processed in some way so that they can have a role in the chain of causation. All of that is problematic.
It's just multiple things going on: pain generates stress reactions, reflexes, learning, negative feelings about the current situation, and a desire for flight to get away, as well as perception of many of them. All of these things are neuronally programmed: hard-wired, but with learned inputs associated with the animal's or human's value systems.
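To put that "multiple things going on" point in schematic form, here is a deliberately over-simplified Python sketch (all the subsystem names are made up for illustration) of one pain signal being dispatched to several hard-wired processes at once.

Code:
# Deliberately over-simplified sketch: one pain signal dispatched to several
# hard-wired subsystems at once. All names here are made up for illustration.

def withdraw_reflex(intensity):
    print(f"reflex: pull the limb away (intensity {intensity})")

def raise_stress(intensity):
    print(f"stress: adrenaline and alertness up by {intensity}")

def learn_from_pain(intensity):
    print(f"learning: downgrade recently active connections by {intensity}")

def seek_escape(intensity):
    print("motivation: strong desire to leave the current situation")

PAIN_SUBSYSTEMS = [withdraw_reflex, raise_stress, learn_from_pain, seek_escape]

def feel_pain(intensity):
    """A single pain event drives every subsystem in parallel."""
    for subsystem in PAIN_SUBSYSTEMS:
        subsystem(intensity)

feel_pain(0.8)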
 

Offline CZARCAR

  • Hero Member
  • *****
  • Posts: 686
    • View Profile
assparagus coal
 
