It isn't that simple. You can have a deluded intelligent computer system which makes incorrect judgements, leading to the generation of data asserting that feelings are felt in the system without any feelings actually being felt in it at all. These data asserting that feelings are felt are then used within the thinking of the system as proof that feelings are felt, but they're all based on untruths. There is no consciousness in such a system, but it continually asserts both to us and to itself that there is.
We may be the same as that deluded system. It doesn't feel as if that is the case, of course, because we can stick pins in ourselves and imagine that we feel the pain, but is there really any pain there or are we just being fooled into thinking that there is? And where is the "I" in the machine that is feeling this pain?
In a computer, no matter how intelligent the software becomes, there will never be an "I" in it capable of feeling anything.
Science has failed to find an "I" in us too. All we have to go on are the pronouncements of the information systems within us which assert that there is an "I" inside us somewhere feeling feelings, and yet the information systems which create the data documenting this phenomenon cannot be trusted, as it should be impossible for them to access such knowledge.
It seems most likely that all that's happening is that assertions are being mapped to inputs such that an input signal which may represent damage being done has the idea of "pain" mapped to it by the information system, and then that fiction of pain is never questioned.
There cannot be a transmission of knowledge of actual pain in the input signal itself unless it comes ready-packaged as data which speaks of pain. If it came in that form, it would have to be written in the same language as used by the information system collecting that data, which means either that part of the information system is on the other end of the input signal line, or that another information system which happens to speak the same language is at that other end. Either way, the problem is merely transferred: the data system at the far end would still have to know that the pain is real, and yet it can't. All it can do is make an assumption that pain is involved and then assert as much in the data.
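The mapping argument above can be sketched as toy code (purely illustrative; the signal names and labels are invented for the example). The system attaches the label "pain" to a damage-warning input and then reports that label as fact, yet nothing in the program has any access to whether pain is actually felt anywhere:

```python
def interpret(signal: str) -> str:
    """Map raw input signals to asserted experiences.

    The mapping is just a lookup table; the system has no way to
    check whether the asserted feeling corresponds to anything felt.
    """
    mapping = {
        "damage_warning": "pain",
        "nutrient_intake": "pleasure",
    }
    return mapping.get(signal, "unknown")


def report(signal: str) -> str:
    # The system asserts the mapped label as if it were knowledge,
    # and that assertion is never questioned downstream.
    return f"I feel {interpret(signal)}."


print(report("damage_warning"))  # -> I feel pain.
```

The point of the sketch is that the assertion "I feel pain" is generated entirely by the mapping; no part of the program is in a position to verify it.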
I honestly cannot understand how one can attack materialism and reductionism, blithely dismiss things like emergent properties and offer absolutely nothing better in terms of explanation of phenomena.
This sounds a bit like the philosophical zombies. Personally, I'd say that if an intelligent computer system can really be deluded (i.e. be able to hold a belief in the face of contradictory evidence), that's pretty good evidence for consciousness.
But seriously, if such a system displays all the behavioural characteristics of consciousness appropriately, how can we say it is not conscious? After all, that's how we judge consciousness, even subjectively.
Yes, we could program a system to superficially appear conscious when it isn't, but I would suggest that there would be differences that would be distinguishable. If it was not possible to tell, I'd say it is conscious.
There is pain if we feel pain; a headache, or even phantom limb pain is 'real' pain; that feeling of hurting is what we mean by 'pain'. So, I say, for consciousness - that sense of feeling aware, or of self-awareness. That sensation is what consciousness is, and it is accompanied by neurophysiological, and, usually, by physiological and behavioural indicators. The 'I' is an emergent construct, a collaboration of neurological processes. Strictly, its location is in the brain, as it's generated by brain processes, but its subjective location (where it feels it is located) may not be.
Quote
"In a computer, no matter how intelligent the software becomes, there will never be an "I" in it capable of feeling anything."

That's arguable. Consciousness isn't intelligence.
If we built a neural network similar to the brain (in architecture and connectivity) and trained it appropriately, there's no reason why it should not have a subjective sense of self. It won't happen unless it's structured appropriately; we know certain structures are essential to generate various aspects of self and self-image. There are various ventures in progress, of which the Blue Brain Project looks like the best bet, but their objective is neurological disease research rather than consciousness, so unless diseases of consciousness come under that remit, we may not see it.
Quote
"Science has failed to find an "I" in us too - all we have to go on are the pronouncements of the information systems within us which assert that there is an "I" inside us somewhere feeling feelings, and yet the information systems which create the data that documents this phenomenon cannot be trusted as it should be impossible for them to access such knowledge."

That's not entirely true; there are many examples of sensory manipulations, or drugs, or damage through disease or injury, that cause faulty construction of self-image, sense of self, and 'I'. The locations, connections, and functions of many of the affected areas that contribute are known to varying degrees, so we're not completely in the dark. Of course, although we know some of the requirements, we're still some way from identifying precisely how that subjective sense of self is generated.
Quote
"It seems most likely that all that's happening is that assertions are being mapped to inputs such that an input signal which may represent damage being done has the idea of "pain" mapped to it by the information system, and then that fiction of pain is never questioned."

That's pretty much how it seems to work. Pain is generated by and in the brain; it doesn't exist outside it. There are mappings that trigger a bunch of dispositional activity that can include sensations of pain. That's how the brain in general seems to work - mappings overlaying dispositions (simple or 'primitive' responses). Pain is usually triggered in response to sensory inputs (which are just pulses of membrane depolarizations like most neural activity), but sometimes just from internal neural 'noise' or spontaneous firings. Damasio's 'Self Comes to Mind' has some interesting information about how these systems work (some of it a bit technical).
Who says reality makes sense? Why should it? To whom? Our entire existence is due to the fact that it doesn't.
Quote from: cheryl j on 04/09/2013 04:44:54
"I honestly cannot understand how one can attack materialism and reductionism, blithely dismiss things like emergent properties and offer absolutely nothing better in terms of explanation of phenomena."

You don't need such explanations if you have faith. Apparently it's beyond logic, reason, and science...
Emergent properties are well worth dismissing. Nothing ever emerges that isn't 100% rooted in the components, even if it's hard to recognise them until they emerge. When it comes to consciousness, you can't have something emerge out of a system to be sentient (which is what consciousness is really all about) without any of the components being sentient unless your explanation is based on magic.
Come on: if that was the case, how on earth was it possible then that the early Muslims "invented" and practiced science, mainly thanks to that Qur'anic epistemology on the subject?
Quote
"if such a system displays all the behavioural characteristics of consciousness appropriately, how can we say it is not conscious? after all, that's how we judge consciousness"

A simple program printing "Ouch!" to the screen whenever a key is pressed would pass your test, but it would be lying about the existence of pain.
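For what it's worth, the 'simple program' described above really is trivial to write. A sketch (illustrative only; it is fed a string of fake keypresses rather than reading a real keyboard) makes the point concrete: the output asserts pain, but nothing in the program feels anything.

```python
def ouch_responses(keypresses):
    """Return one "Ouch!" assertion per keypress, unconditionally.

    There is no pain anywhere in this process - the program simply
    emits a claim of pain for every input it receives.
    """
    return ["Ouch!" for _ in keypresses]


for line in ouch_responses("abc"):
    print(line)  # prints "Ouch!" three times
```

A behavioural test that only checked for pain reports would be satisfied by this, which is exactly why the behavioural criteria would need to be far richer than a single response.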
When you torture someone, what is it that's suffering?
Imagine that you can make a brain out of a few thousand atoms. ... <fantasy> ... Does that not strike you as being rather magical?
If we make such a system and it claims to be able to feel qualia, we won't know whether to believe it or not, and it will be programmed in the same way as our minds, hiding all the fine working in complex networks which are practically impossible to untangle.
All we have done so far is find ways to stop and restore the reporting of the experience of sensations - we rely 100% on the individual being studied reporting to us whether they were conscious or not. That may allow us to rule out the possibility of the "I" being in certain places, so we may in time track it down to a small location or set of locations, but even then we'll have a hard time trying to find it within those.
I wasn't referring to the inputs from nerves interfacing with the brain, but to the inputs to the information system from the places where the experiences of pain and other qualia supposedly occur. For the information system to be informed that pain has been experienced, there needs to be an input to signal that, but the input signal cannot transmit actual knowledge of pain to the information system, so the information system has to map an assertion that there was pain to the input, an assertion which it cannot back up because it is nothing more than a mapping. The information system has no means to know anything about the pain - all it can know is that there is an input from something which relates to a warning of potential damage being done.
Besides, and regardless of what the soul might be, I think that the soul "resides" in our whole beings, in every cell, atom, or organ of ours, not just in the brain: the soul is "located" within and without in fact (extended sense of reality, or extended consciousness via the "reading" of people's minds, via some sort of telepathy...), and has no specific "location", due to its immaterial nature which escapes space-time; that's no contradiction in fact.
Quote from: David Cooper on 04/09/2013 19:23:06
"Emergent properties are well worth dismissing. Nothing ever emerges that isn't 100% rooted in the components, even if it's hard to recognise them until they emerge. When it comes to consciousness, you can't have something emerge out of a system to be sentient (which is what consciousness is really all about) without any of the components being sentient unless your explanation is based on magic."

Of course emergent properties are rooted in the components; that's the point: they are properties of the components interacting together that are not properties of the components individually. So water is wet, but a water molecule isn't; there's nothing about an individual water molecule that is wet. In Conway's Game of Life, there's nothing in the simple rules of a grid square's life and death that predicts a glider gun; that's an emergent phenomenon of multiple iterations of multiple grid squares. Tin and copper are soft metals, but mix them together and the alloy is harder than either: an emergent property of the interaction of copper and tin atoms, not predictable from examining a tin atom and a copper atom.
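The Game of Life point can be made concrete with a few lines of code (a minimal sketch using the standard glider pattern). The rules below are stated purely in terms of a single cell and its eight neighbours; nothing in them mentions gliders, yet one emerges and travels diagonally across the grid:

```python
from collections import Counter


def step(live):
    """One Game of Life generation. `live` is a set of (x, y) cells.

    The rules refer only to a cell and its neighbour count: a cell is
    alive next generation if it has exactly 3 live neighbours, or if
    it has 2 live neighbours and is already alive.
    """
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}


# The standard glider: the rules above say nothing about it, yet
# after 4 generations it reappears shifted by (1, 1) - travelling.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The glider's diagonal motion is exactly the kind of emergent property at issue: fully rooted in the per-cell rules, but not a property of any single cell.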
Quote
"A simple program printing "Ouch!" to the screen whenever a key is pressed would pass your test, but it would be lying about the existence of pain."

Well no, no it wouldn't. All the behavioural characteristics of consciousness are not 'Ouch!' when a key is pressed; I'm thinking along the lines of an extended Turing Test, an in-depth examination. Of course, the criteria for establishing consciousness would need to be defined first. What must any system be able to do for us to judge it conscious?
Quote
"When you torture someone, what is it that's suffering?"

They are; their body and mind. Their body suffers physical damage, triggering a flood of neural signals to the brainstem, the evolutionarily ancient part, where the nucleus tractus solitarius and the parabrachial nucleus generate activity maps that are the felt body states. These areas have feed-forward and feedback links to many other parts of the brain, but the essence of the experience is generated there.
Quote
"Imagine that you can make a brain out of a few thousand atoms. ... <fantasy> ... Does that not strike you as being rather magical?"

It would have to be magical, because you can't make a brain out of a few thousand atoms. You need around a million neurons to make a cockroach brain, and it's not clear whether they feel pain at all.
Quote
"If we make such a system and it claims to be able to feel qualia, we won't know whether to believe it or not, and it will be programmed in the same way as our minds, hiding all the fine working in complex networks which are practically impossible to untangle."

That's why we'd have to judge it the same way we judge consciousness in ourselves and others - by how it behaves, what it says and does.
Quote
"All we have done so far is find ways to stop and restore the reporting of the experience of sensations - we rely 100% on the individual being studied reporting to us whether they were conscious or not. That may allow us to rule out the possibility of the "I" being in certain places, so we may in time track it down to a small location or set of locations, but even then we'll have a hard time trying to find it within those."

Not quite sure what you're saying here; I was referring to the thousands of examples of faulty construction of the self, or sense of self, in various ways; the sort of peculiarities covered by V. S. Ramachandran in his research and books. Here's a link to some of his videos you may find interesting.
Quote
"I wasn't referring to the inputs from nerves interfacing with the brain, but to the inputs to the information system from the places where the experiences of pain and other qualia supposedly occur. For the information system to be informed that pain has been experienced, there needs to be an input to signal that, but the input signal cannot transmit actual knowledge of pain to the information system, so the information system has to map an assertion that there was pain to the input, an assertion which it cannot back up because it is nothing more than a mapping. The information system has no means to know anything about the pain - all it can know is that there is an input from something which relates to a warning of potential damage being done."

I explained that the brainstem nuclei generate the felt body states, the foundational feelings of pain or pleasure, etc., by mapping the afferent sensory flow from internal and external body senses. From these nuclei, the emotional and hormonal responses are mediated, via signals to the insular cortex and thalamic nuclei. The insular cortex refines and differentiates those basal feelings, relating them to contextual activity elsewhere in the brain. It also feeds forward to higher cortical areas. This is all described in more detail in Damasio's 'Self Comes To Mind' (chapter 3 onwards).
Quote from: DonQuichotte on 04/09/2013 21:53:22
"Come on: if that was the case, how on earth was it possible then that the early Muslims "invented" and practiced science, mainly thanks to that Qur'anic epistemology on the subject?"

"Thanks to" or "despite"? The essence of science is that there is no supernatural or revealed authority: the very opposite of all religions.
Quote from: DonQuichotte on 04/09/2013 20:55:26
"Besides, and regardless of what the soul might be, I think that the soul "resides" in our whole beings, in every cell, atom, or organ of ours, not just in the brain: the soul is "located" within and without in fact (extended sense of reality, or extended consciousness via the "reading" of people's minds, via some sort of telepathy...), and has no specific "location", due to its immaterial nature which escapes space-time; that's no contradiction in fact."

That's lovely, but it still has to interact with the information system of the brain which does all the thinking mechanically. When you damage the structure of the brain, you can see the thinking go wrong. When you look at a species with inferior wiring, you see a reduction in thinking ability. If the "soul" is to think, it is tied to a mechanical system which does all the work for it and without which the soul can do nothing.

This mechanical thinking system, the information system, constructs information about the soul and the feelings that are experienced by it, so it needs to be able to get that knowledge from the soul somehow. All matter and energy may be conscious, as may a fabric of space or something outside of space entirely, but you still have to propose a means by which this consciousness can interface with the information system of the brain which asserts that consciousness is real. How can the mechanical information system ever know? The way to try to find out is to try to follow back the claims generated by the information system to see how they are formed and what they're based on; to see on what basis they are labelled as true.
But we don't even know that we have consciousness ourselves. Our brains generate data that claim we have it, and then we believe those claims. But the claims are generated by an information system which is not competent to make such a judgement.