Someone asked me: if consciousness is not generated by the physical brain, but instead exists somewhere else...
So I posted my response on my blog: http://ahmadsoomro.com/asking-ahmad-1-quantifying-consciousness/
What intelligent reasonable person would suggest that consciousness originates anywhere but in the brain?
This universe could be virtual and we could be outside of it but wired into it in some way. If that's the case though, how far can we explore the functionality of the brain while the virtual world continues to show us something that appears to provide that functionality while not actually doing so?
Quote from: David Cooper on 01/01/2014 20:40:26
"This universe could be virtual and we could be outside of it but wired into it in some way. If that's the case though, how far can we explore the functionality of the brain while the virtual world continues to show us something that appears to provide that functionality while not actually doing so?"
While only containing two letters, the word "if" is still one of the largest in our vocabulary. My dad always used to say: "If a frog had wings, he wouldn't bump his behind parts on the ground when he hopped."
This is why we have to examine the feasibility of our search and the possible fruits we may obtain through its pursuit. I doubt seriously that we'll garner any useful information about an intelligent consciousness that resides beyond our useful control.
You seem to suggest that we may be living in a Matrix of sorts. May I suggest to you that science deals with evidence we can measure. Until evidence for this spooky Matrix world view is found, I suggest we're wasting our time speculating about any such mythical theatrics.
If you're trying to work out how things are, "if" has a major role to play. If you want to ignore that, you may not explore the right paths.
A disappointing paper, alas. After wandering into a multiplicity of divergent alleyways it seems to settle on the essence of consciousness being a flexible, multivariate approach to optimisation. Hardly exciting, and to a considerable extent contradicting some of the examples the authors give of non-conscious systems. But 5/10 for trying, at least, to define what they are talking about, which puts them way ahead of the rest of the field.
Quote from: David Cooper on 02/01/2014 19:53:24
"If you're trying to work out how things are, 'if' has a major role to play. If you want to ignore that, you may not explore the right paths."
The right path is the scientific method, David, not just a bunch of speculative ifs. I'm not as interested in the ifs as I am in the whys and the hows.
We can choose to waste our time with constant metaphysical speculation, or we can look for measurable evidence. I choose to use the scientific method; you're free to waste your time if you so choose.
The scientific method does not tell you to reject any of the "if"s on a whim. You should not be rejecting any of them until they are disproved.
2. Gather information and resources (observations): can't check this one.
Where are the observations suggesting consciousness originates elsewhere?
3. Form an explanatory hypothesis: can't check this one either.
Without those observations, this research has reached as far as the scientific method allows us to go!
What we need is a model, any model, that can show a useful cause-and-effect role for consciousness in a system where the existence of that consciousness can also be recognised by the system. We don't have one of those at all at the moment and trying to shut people's minds down is not at all helpful.
So I repeat: "Where are the observations that suggest consciousness originates elsewhere?" Enough said!
It isn't all about observations.
It's about the lack of room for consciousness to exist in the brain.
OK, we now have three very intelligent people discussing how to measure something. Would any of you care to define what it is you are trying to measure? A functional definition will suffice for a start, as in "consciousness is that which..."
That which "establishes communion between the self and its environment."
If your scientific model has no room for a soul of this kind in the brain, then consciousness can only occur if it is located somewhere outside the brain.
That is why it seems so rational to look for consciousness outside of the brain, but it is also still rational to look for it inside the brain, just so long as you're prepared to look for something that's actually capable of being tortured.
...are you asserting that Ms X did not have a soul?
I never said there was no room inside the brain for what you term the soul. So that leaves us with the question I already asked you: if not in the brain, where would you suggest we find this so-called "soul"?
I do not agree that it is rational to find consciousness outside of the brain. But as you have conceded, "it is also still rational to look for it inside the brain".
This has been my issue with this whole argument from the outset. You seemed to insinuate that there wasn't room in the brain for the fullness of consciousness.
And one or two others here have tried to imply that this consciousness lives on even when the physical brain has died. The point about zombies gives no support for that notion either, because being a zombie doesn't eliminate the brain, which is still alive even though it's in a zombie's head.
And don't start bringing up near death experiences as evidence for the survival of the consciousness. They call it NEAR death for a good reason.
I assumed that you'd read the early pages of the Don's thread and would have understood my position. I didn't want to attack your position but merely defend the idea of looking for consciousness outside of the brain - I suspect that this universe is virtual and that it is likely to be incapable of hosting sentience as a result.
I must apologize for assuming too much, and for not giving enough time and effort to read that thread from start to finish. But in my own defense, it became very evident from reading several of Don's posts that most of them were only boring repetitions of his previous posts. So in my laziness and boredom I really didn't care about wasting time reading any more of his crap than I had to.
I will confess after reading your clarifications on the subject that I find much more agreement with you than I do with Don.
Nevertheless, the one issue I still disagree with you on is a viable consciousness outside the brain. I am willing to overlook that and submit that we can agree to disagree in a friendly manner.
However, finding any sort of cordial arrangement with Don has become impossible. His insults, calling some of us swine and monkeys, have him looking and sounding like a simple brat. I simply have no use for that sort of attitude, or for his unwillingness to calmly discuss; he only wants to argue his points as if nobody else is smart enough to understand his brilliance. Nothing but a waste!
The real trick is to starve a thread like that instead of feeding it. The level of attention he's getting is a substantial reward as it boosts his status - he is serving as some kind of teacher handing out reading material for the class to work through, but the quality of most of it is either shoddy or out of date. I don't understand why people are letting him manipulate them in that way when they could find far better things to read on the subject by themselves.
If this is a virtual universe, what is it a model of?
If it is an adequate model, then it should replicate or simulate all the characteristics of a real one.
If it is not an adequate model, what is its purpose, ...
...and why go looking for simulations that you know are absent?
If you create a virtual world for people to think they live in where they can have fun that they couldn't have had so conveniently (if at all) within reality, that's all the purpose you need. The model's functional incompleteness is not a barrier to it being useful in this way because the things it cannot handle can sit outside of it.
About defining Consciousness: Early working definitions always seem to be somewhat functional, and aren't always true backwards and forwards. In other words, "Something that is conscious must be able to do XYZ, but something that does XYZ may not be conscious." Is that sort of definition "a start", or a failure?
So some malevolent being has created a model of the real universe just so that people can suffer and die, eh? Or is the real universe even more unpleasant than the world we think we live in?
About defining Consciousness: Definitions are generally brief, but somehow must contain the elements that are necessary and sufficient. Consciousness appears to be very complex and multi-faceted, and even leaving aside its unknown aspects, consciousness is difficult to sum up with a definition. The list grew longer and longer when I tried to write down what I thought were key elements: sensation, awareness, self-awareness, memory, intelligence, learning, creativity, problem solving, choice or volition, emotion, integrated information, symbols, qualia, attention switching, and possessing Theory of Mind – that is, the ability to imagine or attribute the same qualities to another animal that one believes is also conscious, and adopt their point of view. (The last one might not seem that important, or might just be a consequence of the others, but if consciousness developed to foster social functioning, I suggest that empathy or the ability to alter point of view is important.)
Cooper might argue that computers can do many of these things, often better than we can, so consciousness must be something "else".
At the same time, it’s hard to conceive of consciousness functioning without, for example, memory.
Perhaps memory or intelligence is necessary but not sufficient, ...
... the same way ability to replicate, or respond to stimuli, is part of the definition of life but not enough.
But I digress. My point is that both objectively and subjectively, there appear to be levels or degrees of consciousness. How do you define something that is not a single entity but occurs across a broad spectrum? Neurologists often say “More is different” but AI people seem to say, “No, different is different” and nothing “emerges.” And what’s missing is some key element that turns information into active experience. I can’t decide.
Not a lot of progress there, because sentience is equally undefined (except possibly as consciousness).
If you want your security light to be sentient, presumably you want it to decide whether the moving target is a threat, based on previous knowledge of the general characteristics of a threat, or the absence of characteristics of a friend. Either way you are simply adding learning and a statistical algorithm, so you have to look at something you call a sentient sentinel and ask how it (or he) acts and thinks to determine the intentions of an approaching object. Then I guess you would distinguish between a human that makes some kind of instinctive guess and a machine that sticks to rigid or neural rules. But the problem then becomes that you are defining sentience or consciousness as nothing more than fallibility.
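The "learning and a statistical algorithm" point above can be made concrete with a toy sketch: a security light that tallies labelled observations and scores a new moving target statistically. Everything here – the class name, the features, the smoothing – is hypothetical, invented purely to illustrate that this behaviour needs nothing beyond counting and arithmetic.

```python
# Toy "sentient sentinel": learns characteristics of threats vs. friends from
# labelled examples, then classifies new targets with a naive-Bayes-style score.
# All feature names and numbers are hypothetical illustrations.
from collections import defaultdict

class SentinelLight:
    def __init__(self):
        # How often each (feature, value) pair appeared under each label.
        self.counts = {"threat": defaultdict(int), "friend": defaultdict(int)}
        self.totals = {"threat": 0, "friend": 0}

    def learn(self, features, label):
        """Record one labelled observation, e.g. ({"size": "large"}, "threat")."""
        self.totals[label] += 1
        for key, value in features.items():
            self.counts[label][(key, value)] += 1

    def score(self, features, label):
        # Product of per-feature likelihoods with add-one smoothing.
        total = self.totals[label]
        score = 1.0
        for key, value in features.items():
            score *= (self.counts[label][(key, value)] + 1) / (total + 2)
        return score

    def classify(self, features):
        """Return 'threat' or 'friend' for a newly detected moving target."""
        return max(("threat", "friend"), key=lambda lbl: self.score(features, lbl))

light = SentinelLight()
light.learn({"size": "large", "gait": "creeping"}, "threat")
light.learn({"size": "large", "gait": "creeping"}, "threat")
light.learn({"size": "small", "gait": "trotting"}, "friend")  # the neighbour's dog

print(light.classify({"size": "large", "gait": "creeping"}))  # threat
```

Note that nothing here guesses instinctively: the classifier follows its counts rigidly, which is exactly the human/machine distinction the paragraph above ends on.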
There is either sentience or there is not. If you subtract sentience from something, whatever remains is not part of consciousness and has no place in its definition.
That is why I take idealism seriously - reductionism leads us to a place which denies our very existence, and few of us will ever accept that end point.
Quote from: David Cooper on 09/01/2014 19:46:14
"That is why I take idealism seriously - reductionism leads us to a place which denies our very existence, and few of us will ever accept that end point."
I disagree. An object exists to the extent that it affects other objects. If you can pick up a stone, or even get in the way of a photon, you exist, and this must be the starting axiom of any meaningful discussion. If your argument contradicts its axioms, it's wrong.
Quote from: David Cooper on 08/01/2014 19:48:23
"There is either sentience or there is not. If you subtract sentience from something, whatever remains is not part of consciousness and has no place in its definition."
I think I see your point, but it's hard to conceive of consciousness entirely separate from what one is conscious of.
If you stripped consciousness of all of the processes “associated with” it, like intelligence or memory, looking for that mysterious essence of sentience, perhaps you would end up with absolutely nothing there. You can chip away at the concept, by saying consciousness can exist without this or without that, but if you remove all external sensory information, all internal stimuli, block access to memories, what is there to be sentient of? How does sentience exist in some pure, isolated state?
Either way, the fact that other systems – computers – can perform these functions but aren't conscious doesn't mean consciousness doesn't require them (whether we want to include any requirements in our definition or not).
Take memory, for example: one might not need long-term memory for consciousness, but I'd think you'd at the very least have to maintain something in working memory long enough to be sentient of it.
You would need enough short term or working memory to connect one event to another in any meaningful way.
It’s hard to imagine conscious experience as a series of instantly experienced and instantly forgotten snapshots of the world, or even of internal sensation, instead of the moving-picture-like stream of consciousness we are accustomed to. If every time I see a chair, I am seeing it for the first time with no memory of prior associations, my awareness of it would probably be very photo-detector-like. Something is there or not there, with no significance or meaning attached to it, and probably no ability to generate any emotional response. Is that collection of parallel and perpendicular lines in my visual field something good or bad for me? With no prior associations, and no potential to create new ones, the chair remains parallel and perpendicular lines in my visual field.
People who lose the ability to lay down new long-term memories, or even have some short-term memory deficits, did possess those capacities at one time. I would be surprised if a human born without any capacity for storing memory, or learning, would still develop consciousness or a sense of self.
Maybe one can’t point to the smallest component of the brain – a neuron or feedback loop – that is still capable of suffering, the way we “suffer.” What I do question is the criteria – when is a particular function of a component of a system “enough like” the display of that particular function in the system as a whole? With the function of movement, most people accept the explanatory link between sliding actin and myosin filaments inside muscle cells, the contraction of a muscle cell, the resulting shortening of muscles, and the locomotion of the entire body or movement of parts. The movement of all of those things is considered “enough alike” and the jump from one level to the next isn’t questioned. Nor does it bother anyone that if there are disruptions in quantity, arrangement or timing of things that move, you may not get the desired end result. (A heart muscle in V-fib is useless as a pump.) But people see sensation in cells as being too qualitatively different (too mechanical) from sentience in the brain. And they also balk at the idea of any “emergent properties” related to quantity, arrangement and timing. Why? I’m not saying sensation is or isn’t enough like sentience, but what is the qualitative demarcation?
There is also no pain qualia associated with the reflex arc of jerking your hand away from a hot element: nerve impulses are transmitted from a heat receptor, through a sensory nerve to the spinal cord, and back out through a motor nerve to the muscles in the arm. A “CC” is also sent to the brain, resulting in the experience of pain, but it occurs after you have already moved your hand. So what is the point of the pain, if the body has a fully functioning, and quite effective, “zombie” program that prevents further damage to the skin from the hot element? Is pain from the CC message to the brain just an epiphenomenon of consciousness, or does it accomplish something that for some reason the zombie program can’t? It would appear to be a future behaviour reinforcer with a dimmer switch that says, in the case of mild pain, “try to avoid that next time,” or with severe pain, “Don’t ever do that again for any reason!” Perhaps the degree of pain affects, too, whether that experience is even stored as a long-term memory.
Like the two visual pathways, the zombie one and the conscious one, there may be two aversion pathways, but only the one engaging the anterior cingulate generates qualia. Why? I don’t know, but I would expect one pathway accomplishes something that the other can’t, and my guess would be it involves modifying future behaviour and involves generating a multitude of meaningful associations, between that event and similar scenarios, that object and similar objects, etc.
I guess one could still argue that a machine could do all of this without sentience, it could do it some other way. But that doesn’t mean it is not the way animals like us do it. At any rate, I’d argue there is more to gain at looking at the neural pathways or areas of the brain closely associated with consciousness and asking “What’s different about them?” than simply assuming that nothing is different, and consciousness serves no function, or doesn’t exist.
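The reflex-plus-"CC" arrangement described above can be caricatured in code: a fast spinal loop withdraws the hand immediately, while the delayed pain copy only modifies future behaviour, graded by intensity (the "dimmer switch"). All names and thresholds here are invented for illustration; this is a sketch of the functional claim, not of any real neural model.

```python
# Toy caricature of the reflex arc vs. the delayed pain "CC" to the brain.
# The reflex acts before pain is felt; pain only reinforces future avoidance,
# scaled with intensity. All names and numbers are hypothetical.
class Organism:
    def __init__(self):
        self.avoidance = {}  # learned avoidance strength per stimulus, 0..1

    def touch(self, stimulus, heat):
        # Fast spinal "zombie" loop: withdraw before any pain arrives.
        withdrew = heat > 40  # hypothetical damage threshold, arbitrary units
        if withdrew:
            # Delayed CC to the brain: pain reinforces future avoidance,
            # graded with intensity rather than all-or-nothing.
            pain = min((heat - 40) / 60, 1.0)
            self.avoidance[stimulus] = max(self.avoidance.get(stimulus, 0.0), pain)
        return withdrew

    def will_approach(self, stimulus):
        # Future behaviour depends on learned avoidance, not on the reflex.
        return self.avoidance.get(stimulus, 0.0) < 0.5

o = Organism()
o.touch("stove", heat=100)   # severe pain: "don't ever do that again"
o.touch("teacup", heat=50)   # mild pain: "try to avoid that next time"
print(o.will_approach("stove"))   # False
print(o.will_approach("teacup"))  # True
```

The point of the sketch is only that the reflex and the reinforcement are separable functions: a system could have the first without the second, which is what makes the "what is the pain for?" question interesting.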
We seem to be drifting from consciousness to sentience with nothing to anchor the meaning of either,
... but it now seems that you consider sentience to be a property of things that are not machines.
So how would an intelligent alien know what is a machine and what is not?
You would end up not with nothing, but with a sentient thing experiencing qualia. The things being chipped away are merely the things which induce those qualia in the sentient thing......If you decide that consciousness does require them, you end up with a problem when looking at a system which lacks one feature but which has another, such as a person who has no memory but can be made to suffer.
The only thing you can anchor it to is your own experience of sentience - there's no way to anchor the meaning to mere words without going round in circles.
But I can take even pain out of the system and it is still both sentient and conscious. I could in theory block, one by one, every type of sensory information, but more importantly, I could also just interfere with the specific neural machinery that people like Ramachandran say is needed to generate a particular quale, and as long as I leave you one form, presumably you are still sentient.
So imagine you are now "the being that experiences red". No memory – every blast of red is redness for the first time, and it does not mean stop signs or apples or red lipstick; it's not good or bad, and you can't even miss its absence forlornly. I don't know if you are a person or other animal, or just a section of brain tissue in a laboratory. You may not know either. Are you still, by our definition, conscious?
And if I cruelly decide never to stimulate that nerve pathway in any way that signals red, then what?