There is either sentience or there is not. If you subtract sentience from something, whatever remains is not part of consciousness and has no place in its definition.
That is why I take idealism seriously - reductionism leads us to a place which denies our very existence, and few of us will ever accept that end point.
Quote from: David Cooper on 09/01/2014 19:46:14
"That is why I take idealism seriously - reductionism leads us to a place which denies our very existence, and few of us will ever accept that end point."
I disagree. An object exists to the extent that it affects other objects. If you can pick up a stone, or even get in the way of a photon, you exist, and this must be the starting axiom of any meaningful discussion. If your argument contradicts its axioms, it's wrong.
Quote from: David Cooper on 08/01/2014 19:48:23
"There is either sentience or there is not. If you subtract sentience from something, whatever remains is not part of consciousness and has no place in its definition."
I think I see your point, but it’s hard to conceive of consciousness entirely separate from what one is conscious of.
If you stripped consciousness of all of the processes “associated with” it, like intelligence or memory, looking for that mysterious essence of sentience, perhaps you would end up with absolutely nothing there. You can chip away at the concept, by saying consciousness can exist without this or without that, but if you remove all external sensory information, all internal stimuli, block access to memories, what is there to be sentient of? How does sentience exist in some pure, isolated state?
Either way, the fact that other systems – computers – can perform these functions without being conscious doesn’t mean consciousness doesn’t require them (whether or not we want to include those requirements in our definition).
Take memory, for example: one might not need long-term memory for consciousness, but I’d think you’d at the very least have to maintain something in working memory long enough to be sentient of it.
You would need enough short term or working memory to connect one event to another in any meaningful way.
It’s hard to imagine conscious experience as a series of instantly experienced and instantly forgotten snapshots of the world, or even of internal sensation, instead of the moving-picture-like stream of consciousness we are accustomed to. If every time I see a chair I am seeing it for the first time, with no memory of prior associations, my awareness of it would probably be very photo-detector-like: something is there or not there, with no significance or meaning attached to it, and probably no ability to generate any emotional response. Is that collection of parallel and perpendicular lines in my visual field something good or bad for me? With no prior associations, and no potential to create new ones, the chair remains parallel and perpendicular lines in my visual field.
People who lose the ability to lay down new long-term memories, or who have short-term memory deficits, did possess those capacities at one time. I would be surprised if a human born without any capacity for storing memory, or for learning, would still develop consciousness or a sense of self.
Maybe one can’t point to the smallest component of the brain – a neuron or feedback loop – that is still capable of suffering the way we “suffer.” What I do question is the criterion: when is a particular function of a component of a system “enough like” the display of that function in the system as a whole? With the function of movement, most people accept the explanatory link between sliding actin and myosin filaments inside muscle cells, the contraction of a muscle cell, the resulting shortening of muscles, and the locomotion of the entire body or movement of its parts. The movement of all of those things is considered “enough alike,” and the jump from one level to the next isn’t questioned. Nor does it bother anyone that if there are disruptions in the quantity, arrangement or timing of the things that move, you may not get the desired end result. (A heart muscle in V-fib is useless as a pump.) But people see sensation in cells as being too qualitatively different (too mechanical) from sentience in the brain. And they also balk at the idea of any “emergent properties” related to quantity, arrangement and timing. Why? I’m not saying sensation is or isn’t enough like sentience, but what is the qualitative demarcation?
There is also no pain quale associated with the reflex arc of jerking your hand away from a hot element: nerve impulses are transmitted from a heat receptor, through a sensory nerve to the spinal cord, and back out through a motor nerve to the muscles in the arm. A “CC” is also sent to the brain, resulting in the experience of pain, but it arrives after you have already moved your hand. So what is the point of the pain, if the body has a fully functioning and quite effective “zombie” program that prevents further damage to the skin from the hot element? Is the pain from the CC message to the brain just an epiphenomenon of consciousness, or does it accomplish something that for some reason the zombie program can’t? It would appear to be a future-behaviour reinforcer with a dimmer switch that says, in the case of mild pain, “try to avoid that next time,” or with severe pain, “Don’t ever do that again, for any reason!” Perhaps the degree of pain also affects whether that experience is even stored as a long-term memory.
Like the two visual pathways, the zombie one and the conscious one, there may be two aversion pathways, but only the one engaging the anterior cingulate generates qualia. Why? I don’t know, but I would expect one pathway accomplishes something that the other can’t, and my guess would be it involves modifying future behaviour and involves generating a multitude of meaningful associations, between that event and similar scenarios, that object and similar objects, etc.
I guess one could still argue that a machine could do all of this without sentience – it could do it some other way. But that doesn’t mean it is not the way animals like us do it. At any rate, I’d argue there is more to gain from looking at the neural pathways or areas of the brain closely associated with consciousness and asking “What’s different about them?” than from simply assuming that nothing is different, and that consciousness serves no function or doesn’t exist.
We seem to be drifting from consciousness to sentience with nothing to anchor the meaning of either,
... but it now seems that you consider sentience to be a property of things that are not machines.
So how would an intelligent alien know what is a machine and what it's not?
You would end up not with nothing, but with a sentient thing experiencing qualia. The things being chipped away are merely the things which induce those qualia in the sentient thing. ... If you decide that consciousness does require them, you end up with a problem when looking at a system which lacks one feature but which has another, such as a person who has no memory but can be made to suffer.
The only thing you can anchor it to is your own experience of sentience - there's no way to anchor the meaning to mere words without going round in circles.
But I can take even pain out of the system and it is still both sentient and conscious. I could in theory block, one by one, every type of sensory information, but more importantly, I could also just interfere with the specific neural machinery that people like Ramachandran say is needed to generate that particular quale, and as long as I leave you one form, presumably you are still sentient.
So imagine you are now "the being that experiences red". No memory – every blast of red is redness for the first time; it does not mean stop signs or apples or red lipstick, it’s not good or bad, and you can't even miss its absence forlornly. I don’t know if you are a person, another animal, or just a section of brain tissue in a laboratory. You may not know either. Are you still, by our definition, conscious?
And if I cruelly decide never to stimulate that nerve pathway in any way that signals red, then what?
Pity about that. I can define a cow in such a way that a Martian could recognise a cow and a non-cow, and I can define a colour by example.
Even such abstractions as energy and entropy are definable such that we both know what the other is talking about, and when we measure energy or calculate entropy, we both get the same number. But consciousness or sentience seems to defeat the definitive powers of those who discuss it, which makes quantification doubly impossible.
On the other hand, if emotions are qualia, and a person is angry, he can insult you and generate that qualia inside of you, or describe some injustice done to him that makes you feel angry as well. I can make your neurons do what mine are doing, or something pretty close, when it comes to anger but not red.
I must admit, though, I get very confused when it comes to brains and computers, about the difference between input-output functions versus copying or translating information. I often think I am confusing one with the other. I would appreciate any help.
We don't necessarily feel anger the same way any more than we all see red the same way.
Given his textbook-accurate description, could he draw a carrot, I wonder? Fascinating case.
Quote from: David Cooper on 12/01/2014 22:25:29
"We don't necessarily feel anger the same way any more than we all see red the same way."
Possibly, but I would be surprised if it were true.
I wonder if there is anything in AI like mirror neurons in learning, or what simulations of them do.
I sometimes wonder about the qualia red/green inversion question. If our colours were truly inverted or shifted, wouldn’t there be discrepancies in what we thought was the same or different when we compared mixed colors and added and subtracted shades of color? It seems like an obvious question, so surely someone has done the math.
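On the "surely someone has done the math" point, here is a minimal sketch in Python of one version of that math. It assumes the standard Rec. 709 relative luminance weights as a crude stand-in for perceived brightness (an engineering approximation, not a full model of colour vision), and models the inversion as a literal red/green channel swap. The sketch shows why additive mixing alone reveals nothing (the swap commutes with mixing), while brightness judgements would show a discrepancy, since green contributes far more to perceived brightness than red:

```python
# Sketch: would a red/green "inversion" be behaviourally detectable?
# Assumption: perceived brightness is approximated by the Rec. 709
# relative luminance weights (0.2126 R + 0.7152 G + 0.0722 B).

R_W, G_W, B_W = 0.2126, 0.7152, 0.0722

def luminance(rgb):
    """Approximate perceived brightness of a linear RGB colour."""
    r, g, b = rgb
    return R_W * r + G_W * g + B_W * b

def swap_rg(rgb):
    """Model a qualia inversion as a literal red/green channel swap."""
    r, g, b = rgb
    return (g, r, b)

def mix(c1, c2):
    """Additive 50/50 mix of two colours."""
    return tuple((a + b) / 2 for a, b in zip(c1, c2))

pure_red = (1.0, 0.0, 0.0)
pure_green = (0.0, 1.0, 0.0)

# Mixing commutes with the swap, so colour-matching experiments that
# only compare mixtures would show no discrepancy:
assert swap_rg(mix(pure_red, pure_green)) == mix(swap_rg(pure_red), swap_rg(pure_green))

# But brightness comparisons would: the swap changes luminance.
print(luminance(pure_red))           # 0.2126
print(luminance(swap_rg(pure_red)))  # 0.7152
```

So under these assumptions, a pure channel swap is invisible to mixing comparisons but detectable the moment two observers compare which of two colours looks brighter. Whether the brain's actual colour pathways permit an inversion that also preserves brightness is, of course, exactly the open question.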