The Naked Scientists Forum

Author Topic: Quantifying Consciousness

Offline David Cooper

Re: Quantifying Consciousness
« Reply #25 on: 07/01/2014 01:15:33 »
Quote
...are you asserting that Ms X did not have a soul?

Pain is just the best example of an unpleasant quale - there are plenty of others that can be used for torture. If you take all of them away such that a person can't feel anything unpleasant at all, there's still a sentience in there if they can feel pleasant or completely neutral sensations, so they have a "soul" if they have any of those at all. If they lack all sensation, they are a zombie.
 

Offline David Cooper

Re: Quantifying Consciousness
« Reply #26 on: 07/01/2014 01:36:37 »
Quote
I never said there was no room inside the brain for what you term as the soul. So that leaves us with the question I already asked you: "if not in the brain, where would you suggest we find this so-called 'soul'?"

Outside of this possibly-virtual universe would be the most likely place.

Quote
I do not agree that it is rational to find consciousness outside of the brain. But as you have conceded, "it is also still rational to look for it inside the brain".

I agree that it's rational to keep looking for it inside the brain, but it's also rational to give up on that and to want to look elsewhere. There's not much hope of finding it anywhere because it looks as if consciousness is not a real phenomenon.

Quote
This has been my issue with this whole argument from the outset. You seemed to insinuate that there wasn't room in the brain for the fullness of consciousness.

I assumed that you'd read the early pages of Don's thread and would have understood my position. I didn't want to attack your position but merely defend the idea of looking for consciousness outside of the brain - I suspect that this universe is virtual and that it is likely to be incapable of hosting sentience as a result.

Quote
And one or two others here have tried to imply that this consciousness lives on even when the physical brain has died. And the point about Zombies gives no support for that notion either, because being a Zombie doesn't eliminate the brain which is still alive even though it's in a Zombie's head.

The brain in a zombie can be alive like a plant, lacking sentience and lacking any kind of "soul". Consciousness depends on a sentience (a thing that experiences qualia), and there's no reason to suppose that that sentience can be destroyed by death. A plant, zombie or rock could be filled with trillions of sentiences which aren't wired into anything that will induce qualia in them in any useful way. All matter could be sentient, but no use is made of that sentience unless it is in a system which can both load it with feeling and read back the feeling status from it.

Quote
And don't start bringing up near death experiences as evidence for the survival of the consciousness. They call it NEAR death for a good reason.

I agree that they are useless as evidence for that. I once had an out-of-body experience as a child while fully awake and not anywhere near death (though I was shaking violently in a state of shock at the time) - it's just something the mind can do which results in distortions of perception.
 

Offline Ethos_

Re: Quantifying Consciousness
« Reply #27 on: 07/01/2014 03:55:55 »


Quote
I assumed that you'd read the early pages of Don's thread and would have understood my position. I didn't want to attack your position but merely defend the idea of looking for consciousness outside of the brain - I suspect that this universe is virtual and that it is likely to be incapable of hosting sentience as a result.

I must apologize for assuming too much, and for not giving enough time and effort to read that thread from start to finish. But in my own defense, it became very evident from reading several of Don's posts that most of them were only boring repetitions of his previous posts. So in my laziness and boredom I really didn't care about wasting time reading any more of his crap than I had to.

I will confess after reading your clarifications on the subject that I find much more agreement with you than I do with Don. Nevertheless, the one issue I still disagree with you on is a viable consciousness outside the brain. I am willing to overlook that and submit that we can agree to disagree in a friendly manner. However, finding any sort of cordial arrangement with Don has become impossible. His insults, calling some of us swine and monkeys, have him looking and sounding like a simple brat. I simply have no use for that sort of attitude, or for his unwillingness to calmly discuss; he only wants to argue his points as if nobody else is smart enough to understand his brilliance. Nothing but a waste!
« Last Edit: 07/01/2014 03:59:36 by Ethos_ »
 

Offline alancalverd

Re: Quantifying Consciousness
« Reply #28 on: 07/01/2014 12:48:44 »
If this is a virtual universe, what is it a model of?  If it is an adequate model, then it should replicate or simulate all the characteristics of a real one. If it is not an adequate model, what is its purpose, and why go looking for simulations that you know are absent?
 

Offline David Cooper

Re: Quantifying Consciousness
« Reply #29 on: 07/01/2014 21:10:00 »
Quote
I must apologize for assuming too much, and for not giving enough time and effort to read that thread from start to finish. But in my own defense, it became very evident from reading several of Don's posts that most of them were only boring repetitions of his previous posts. So in my laziness and boredom I really didn't care about wasting time reading any more of his crap than I had to.

I understand completely - the guy is incapable of putting his own points across in his own words in a compact form, so no one who has anything else to do with their time can read more than a tiny fraction of the thread.

Quote
I will confess after reading your clarifications on the subject that I find much more agreement with you than I do with Don.

I think if you were to try to draw our positions in a diagram, there would be two circles with a small area of overlap between them. Don's position is represented by one of those circles while the positions of the rest of us are collectively represented by the other and our individual circles within that collective one would vary very little from each other.

Quote
Nevertheless, the one issue I still disagree with you on is a viable consciousness outside the brain. I am willing to overlook that and submit that we can agree to disagree in a friendly manner.

I'm in disagreement with myself on that point - I can't see how it can be viable inside or outside of the brain.

Quote
However, finding any sort of cordial arrangement with Don has become impossible. His insults, calling some of us swine and monkeys, have him looking and sounding like a simple brat. I simply have no use for that sort of attitude, or for his unwillingness to calmly discuss; he only wants to argue his points as if nobody else is smart enough to understand his brilliance. Nothing but a waste!

The real trick is to starve a thread like that instead of feeding it. The level of attention he's getting is a substantial reward as it boosts his status - he is serving as some kind of teacher handing out reading material for the class to work through, but the quality of most of it is either shoddy or out of date. I don't understand why people are letting him manipulate them in that way when they could find far better things to read on the subject by themselves.
 

Offline Ethos_

Re: Quantifying Consciousness
« Reply #30 on: 07/01/2014 21:30:58 »


Quote
The real trick is to starve a thread like that instead of feeding it. The level of attention he's getting is a substantial reward as it boosts his status - he is serving as some kind of teacher handing out reading material for the class to work through, but the quality of most of it is either shoddy or out of date. I don't understand why people are letting him manipulate them in that way when they could find far better things to read on the subject by themselves.
You must have been reading my mind there, Dave. If you'll notice, I voluntarily removed myself from that thread shortly before you posted this note. And BTW, you have earned my respect for dealing honestly and clearly with your interpretations regarding these issues. I look forward to further discussions with you my friend..............................Ethos
 

Offline David Cooper

Re: Quantifying Consciousness
« Reply #31 on: 07/01/2014 22:31:17 »
Quote
If this is a virtual universe, what is it a model of?

It could be a model of a real universe, or it could be an experimental model of a universe with invented laws of physics. From inside it, there's no way to tell.

Quote
If it is an adequate model, then it should replicate or simulate all the characteristics of a real one.

If consciousness/sentience can't be simulated, the only way to include it would be to host consciousness/sentience outside of the simulation and connect it in. If we tried to simulate this universe in a computer, consciousness/sentience could not be real within that simulation, but we could have real people (potentially with real consciousness/sentience in them) outside of the simulation with all their brains' inputs and outputs connected to the virtual model such that it appears to them to be absolute reality.
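
To sketch what that connection might look like (purely illustrative - every name here is invented, and nothing about real brains is claimed), the virtual world would be stepped as pure data while each inhabitant's senses and actions are routed to a host outside it:

Code:
class ExternalHost:
    """Stands in for whatever, outside the simulation, actually experiences."""
    def decide(self, percept):
        # The real mind reacts to what its avatar senses inside the model.
        return "flee" if percept.get("threat", 0) else "wander"

class World:
    def __init__(self, hosts):
        self.hosts = hosts                           # one external mind per avatar
        self.state = [{"threat": 0} for _ in hosts]  # the virtual world is pure data

    def step(self):
        for i, host in enumerate(self.hosts):
            percept = self.state[i]            # inputs leave the simulation...
            action = host.decide(percept)      # ...and decisions come back in
            self.state[i]["last_action"] = action

world = World([ExternalHost()])
world.step()
print(world.state)   # [{'threat': 0, 'last_action': 'wander'}]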

But let's think through what would happen if you try to avoid having any consciousness/sentience outside of the virtual world and simulate it within the virtual world instead. If you simulate a fire, there is no real heat generated, but anyone in the virtual world will calculate that the fire was real and that real heat was generated by it (if they start from the premiss that the virtual world they inhabit is real and not virtual). The same would apply to consciousness/sentience if it could be simulated - a virtual person with simulated consciousness/sentience should determine that he/she has real consciousness/sentience in them if they start from the premiss that the virtual world they inhabit is real and not virtual. Such a simulated person could be completely convinced that suffering is absolutely real even though it is nothing more than a simulation, and they would see a need for morality which simply isn't there - there is no harm in making these virtual beings suffer because they are nothing more than data and must be incapable of real suffering. The entire simulation could be run on paper with a pencil and without any possibility of any sentience occurring within the simulation - only the pencil holder would be able to be sentient, and all he would be feeling most of the time would be intense boredom at having to perform such a mindless task, not understanding anything of what is being simulated.

If it's possible for a simulated person to be fooled into thinking they're experiencing real pain when they aren't, then it should be possible to fool a real person into thinking they're experiencing real pain when they aren't too, so however real the pain might appear to feel, it can't be guaranteed to be real and there may be no actual feeling at all. That means that the evidence we think we have about being sentient beings cannot be trusted - if a virtual person can be fooled, so can we. However, it appears to be impossible to simulate a sentient person in the first place - all we can do is simulate a zombie and then add rules to make it generate false claims about it being sentient. Science may find out some day that we do the same thing and that we are all zombies, and that would certainly be the simplest answer to the whole question (though there would still be a problem in explaining how such a system could usefully evolve: why create all these fictions when it's so easy to control a non-sentient machine's behaviour without it feeling anything - the fictions would be a pointless extra layer of stuff which has no impact on the behaviour).
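
And here is how trivially such false claims can be generated - a toy illustration, nothing more:

Code:
# Toy "zombie" agent: its claims about feeling are produced by rule,
# and nothing in this program experiences anything at all.
REPORTS = {
    "damage": "That really hurts!",
    "sugar": "Mmm, that feels pleasant.",
}

def zombie_report(stimulus):
    # A bare mapping from input to utterance. The sentence asserts a feeling,
    # but the assertion is just data - the same text a book could contain.
    return REPORTS.get(stimulus, "I feel nothing in particular.")

print(zombie_report("damage"))   # a claim of pain is printed; no pain occurred

The whole thing could be worked through on paper with a pencil without any suffering taking place anywhere.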

It doesn't look at all good for the existence of consciousness/sentience. We have discovered one way of doing computation which cannot support sentience, but there's no guarantee that there isn't another way that's radically different. What if there's some exotic alternative based on sensation and in which sensation can be understood by the system? I'm not sure that this could be possible within our universe, which is why I consider it reasonable to look outside the universe where the rules may be more flexible. Then again though, the rules of our universe may be more flexible than they appear, so it may be unnecessary to look outside of it. Either way though, what we need to find is some explanation of sentience that could allow us to model it in principle, but we don't have that even though it looks as if it should be dead easy. We can of course model sentience in so far as we can assert that something is sentient, but what we can't do is model how that sentience can make itself known to anything beyond itself.

Quote
If it is not an adequate model, what is its purpose, ...

If you create a virtual world for people to think they live in where they can have fun that they couldn't have had so conveniently (if at all) within reality, that's all the purpose you need. The model's functional incompleteness is not a barrier to it being useful in this way because the things it cannot handle can sit outside of it.

Quote
...and why go looking for simulations that you know are absent?

I can't match that part of the question up to what we're discussing.
« Last Edit: 07/01/2014 22:33:26 by David Cooper »
 

Offline alancalverd

Re: Quantifying Consciousness
« Reply #32 on: 07/01/2014 23:41:55 »
Quote
If you create a virtual world for people to think they live in where they can have fun that they couldn't have had so conveniently (if at all) within reality, that's all the purpose you need. The model's functional incompleteness is not a barrier to it being useful in this way because the things it cannot handle can sit outside of it.

So some malevolent being has created a model of the real universe just so that people can suffer and die, eh?  Or is the real universe even more unpleasant than the world we think we live in?
 

Offline cheryl j

Re: Quantifying Consciousness
« Reply #33 on: 08/01/2014 05:23:16 »
About defining Consciousness

Definitions are generally brief, but somehow must contain the elements that are necessary and sufficient. Consciousness appears to be very complex and multi-faceted, and even leaving aside its unknown aspects, consciousness is difficult to sum up with a definition. The list grew longer and longer when I tried to write down what I thought were key elements: sensation, awareness, self-awareness, memory, intelligence, learning, creativity, problem solving, choice or volition, emotion, integrated information, symbols, qualia, attention switching, and possessing Theory of Mind – that is, the ability to imagine or attribute the same qualities to another animal that one believes is also conscious, and adopt their point of view. (The last one might not seem that important, or may just be a consequence of the others, but if consciousness developed to foster social functioning, I suggest that empathy or the ability to alter point of view is important.)

Cooper might argue that computers can do many of these things, often better than we can, so consciousness must be something "else". At the same time, it’s hard to conceive of consciousness functioning without, for example, memory. Perhaps memory or intelligence is necessary but not sufficient, the same way the ability to replicate, or to respond to stimuli, is part of the definition of life but not enough.

I have a strange early childhood memory and I don’t know how accurate it is. But I seem to remember waking up from naps in my crib, and at first being only aware of whatever my eyes were looking at  - the pattern on the curtains, the light from the window, as if that were the alpha and omega  of reality for that moment, and then very, very slowly becoming aware of myself as well. Sometimes even now after a deep sleep it is still a little like that, but the transition seemed much longer when I was little. It is the closest thing I can imagine to some kind of consciousness without a sense of self. I swear I remember feeling amazed at the whole “waking up” process.

Babies and young children often resist being put down for naps or going to bed at night. I wonder if it ever occurred to baby-docs like Dr. Spock that they might find the whole “sleep” thing - losing consciousness for several hours, popping in and out of reality - a little weird and frightening once a certain level of self-awareness develops.
 
Another thing that would happen later in childhood was “zoning out” where I would just sit and stare at something for five minutes or so (I’m not sure how long it lasted.) An adult would say “Quit day dreaming!” which puzzled me because I was never imagining or thinking about anything. I was blank. Maybe it was what they call a micro-sleep.

As a small child, I can also remember thinking it was odd that I was inside me, and other people were inside themselves, and wondering what it would be like to be inside someone else instead of me - my mom, my dad, my sister, my best friend - but I could never know, because I was stuck inside me, and evidently that was just how it worked. I remember thinking that definitely around age four.

But I digress. My point is that both objectively and subjectively, there appear to be levels or degrees of consciousness. How do you define something that is not a single entity but occurs across a broad spectrum? Neurologists often say “More is different,” but AI people seem to say, “No, different is different,” nothing “emerges,” and what’s missing is some key element that turns information into active experience. I can’t decide.

There’s probably a method of creating definitions that editors of dictionaries use, combining agreed-upon meanings and documented descriptions. How thoroughly does a definition have to “explain” how something works? Do we want a definition that best reflects the end product, the holistic sum of conscious experience, or do we want a definition of consciousness that includes the vital steps that produce it, including ones that occur below our conscious awareness of them? (My thalamus or cingulate cortex is essential to my consciousness, or as Don now says, the collapse of the wave function in choosing my brain states, but I have no conscious awareness of any of them, the way I am conscious of my big toe.)

Even the best definitions appear to always be lacking in some way. The criteria for life founder on the rocks of reality, not just with viruses but with frogs that freeze solid and do nothing for several months, and with seeds found in pyramids that can still germinate after lying undisturbed for thousands of years. What’s more, there is no chemical process that occurs in living things that cannot, under the right conditions, occur outside them. The biologist says the whole is greater than the sum of its parts, but AI says, show me the links from the parts to the whole, and I might believe it.

Although I would include volition or will in my criteria for consciousness, I’m not sure if it matters whether will is “free” as in acausal, or arbitrary, or whether will results from responding to environmental stimuli, information obtained through experience and stored in memory, recognizing internal needs, and then forming a response that optimizes the organism’s state in some way. If one substituted "flexibility of response" for "will," that’d be fine by me. I don’t find the idea that free will might simply be arbitrariness necessarily a contradiction. From an evolutionary standpoint, it actually makes sense that the brain just rolls the dice sometimes, shuffles the deck now and then, to generate new potentially useful strategies or experiences that lead to them. Nature essentially does that in genetics through sexual reproduction. Why wouldn’t it do that elsewhere?
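
If it helps, that dice-rolling idea is essentially the explore-or-exploit trick programmers already use - a toy sketch, not a claim about real neurons:

Code:
import random

def choose(action_values, explore_rate=0.1):
    # Mostly exploit the best-known option; occasionally roll the dice
    # to generate potentially useful new strategies.
    if random.random() < explore_rate:
        return random.choice(list(action_values))
    return max(action_values, key=action_values.get)

values = {"forage_here": 0.7, "try_new_patch": 0.2}
print(choose(values))   # usually "forage_here", occasionally the gamble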

Early working definitions always seem to be somewhat functional, and aren't always true backwards and forwards. In other words, "Something that is conscious must be able to do XYZ, but something that does XYZ may not be conscious." Is that sort of definition "a start", or a failure?
« Last Edit: 08/01/2014 07:02:04 by cheryl j »
 

Offline Ethos_

Re: Quantifying Consciousness
« Reply #34 on: 08/01/2014 11:41:00 »
Quote
About defining Consciousness


Early working definitions always seem to be somewhat functional, and aren't always true backwards and forwards. In other words, "Something that is conscious must be able to do XYZ, but something that does XYZ may not be conscious." Is that sort of definition "a start", or a failure?
Excellent reading, Cheryl, totally unlike most of Don's rants. As you have so eloquently pointed out, defining consciousness requires much more than simple one-line phrases.
 

Offline alancalverd

Re: Quantifying Consciousness
« Reply #35 on: 08/01/2014 17:32:26 »
I can accept that it might be a portmanteau word for a whole lot of defined functions, in which case quantifying it becomes an exercise in quantifying its components, all of which seem to have observable and therefore quantifiable attributes.

The philosopher's weasel word is "plus something else" which either puts the subject neatly out of the range of science, or is pure mystification for its own sake.

Recognising self in others is a characteristic of pretty much every living cell or assembly thereof, and the more we delve into immunity and tissue rejection, the more it appears to be a consequence of "simple" chemistry (big molecules, admittedly, but with very few elements).

A noncommutative definition won't wash with me, I'm afraid. If XYZ is a necessary condition of A, I'll need a damn good reason why it is not a sufficient one. All those I have seen advanced so far have been either a reflection of human vanity that did not withstand observation of other living things (or even hypothetical machines), or mystical fairydust.
 

Offline David Cooper

Re: Quantifying Consciousness
« Reply #36 on: 08/01/2014 19:05:17 »
Quote
So some malevolent being has created a model of the real universe just so that people can suffer and die, eh?  Or is the real universe even more unpleasant than the world we think we live in?

Humans in a real universe who want to play safe games in a virtual universe where they can risk death without actually risking death at all could find it a lot more fun than a real universe in which they are not prepared to take any risks at all. We have already decided that children are not allowed to live in the real world and must be brought up in padded cells, and as we gain the ability to live for thousands of years, this business of imprisoning people for their own safety will inevitably be extended to adults as well.
 

Offline David Cooper

Re: Quantifying Consciousness
« Reply #37 on: 08/01/2014 19:48:23 »
Quote
About defining Consciousness

Definitions are generally brief, but somehow must contain the elements that are necessary and sufficient. Consciousness appears to be very complex and multi-faceted, and even leaving aside its unknown aspects, consciousness is difficult to sum up with a definition. The list grew longer and longer when I tried to write down what I thought were key elements: sensation, awareness, self-awareness, memory, intelligence, learning, creativity, problem solving, choice or volition, emotion, integrated information, symbols, qualia, attention switching, and possessing Theory of Mind – that is, the ability to imagine or attribute the same qualities to another animal that one believes is also conscious, and adopt their point of view. (The last one might not seem that important, or may just be a consequence of the others, but if consciousness developed to foster social functioning, I suggest that empathy or the ability to alter point of view is important.)

Sensation - yes.

Awareness - no. It may involve sentience in some systems, but that should not be lumped in with other things in order to bring them into the definition of consciousness. A security light which comes on when it detects a cat walking past is "aware" of something warm moving past its sensor, but it is not sentient. Add sentience to that and you have awareness in the sense of detection plus sentience, but they are two distinct things (there's a little sketch after this list making that distinction concrete).

Self-awareness - no. This just comes out of a system having enough intelligence to identify itself.

Memory - no. You are not conscious of your memories until they are recalled, and then they are run through your head in much the same way as new input.

Intelligence - no. When hard thinking is done, it is done with the processor that appears to be sentient rather than being done in the background by an automated system, but the same intelligent processing can be done without sentience.

Learning - no. Learning is just data and algorithm collection/development.

Creativity - partly. When judging the artistic merits of something, that depends a lot on feelings (qualia), though again that is just sentience. Inventions of the non-artistic variety do not require sentience to help guide them.

Problem solving - no. Mechanical.

Choice or volition - no. There is no choice.

Emotion - sort of: it involves qualia being generated, but that again comes under sentience.

Integrated information - no.

Symbols - no.

Qualia - yes. [Key part of sentience.]

Attention switching - no.

Possessing Theory of Mind - no.

Consciousness is sentience. Everything else that you might want to bring into the definition is separate from consciousness itself: subtract the sentience and whatever remains doesn't belong there.
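
To make the security light point concrete, here's a minimal sketch (the sensor interface and the numbers are invented) of that kind of "awareness" - it is nothing but a comparison:

Code:
def security_light(ir_reading, threshold=30.0):
    # The whole of the light's "awareness" is this comparison:
    # something warm moved past, so a number crossed a threshold.
    return ir_reading > threshold

print(security_light(42.0))   # True - detection, with no one home to notice it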

Quote
Cooper might argue that computers can do many of these things, often better than we can, so consciousness must be something "else".

The "else" is sentience.

Quote
At the same time, it’s hard to conceive of consciousness functioning without, for example, memory.

A person with no memory (such people do exist) lacks nothing in terms of consciousness. They merely have a lack of internal ideas to reload into their consciousness and have to make do with new input which they will be able to experience before forgetting it.

Quote
Perhaps memory or intelligence is necessary but not sufficient, ...

Neither are necessary.

Quote
... the same way ability to replicate, or respond to stimuli, is part of the definition of life but not enough.

The divide between chemistry and life is arbitrary. It's not unlike the point at which a computer operating system becomes capable of modifying and saving itself without needing external software to develop it - there is nothing that happens at this point that requires two different words for "software" making a distinction between the two cases.

Quote
But I digress. My point is that both objectively and subjectively, there appear to be levels or degrees of consciousness. How do you define something that is not a single entity but occurs across a broad spectrum? Neurologists often say “More is different,” but AI people seem to say, “No, different is different,” nothing “emerges,” and what’s missing is some key element that turns information into active experience. I can’t decide.

There is either sentience or there is not. If you subtract sentience from something, whatever remains is not part of consciousness and has no place in its definition.
« Last Edit: 08/01/2014 19:50:53 by David Cooper »
 

Offline alancalverd

Re: Quantifying Consciousness
« Reply #38 on: 08/01/2014 22:23:30 »
Not a lot of progress there, because sentience is equally undefined (except possibly as consciousness). If you want your security light to be sentient, presumably you want it to decide whether the moving target is a threat, based on previous knowledge of the general characteristics of a threat, or the absence of characteristics of a friend. Either way you are simply adding learning and a statistical algorithm, so you have to look at something you call a sentient sentinel and ask how it (or he) acts and thinks to determine the intentions of an approaching object. Then I guess you would distinguish between a human that makes some kind of instinctive guess and a machine that sticks to rigid or neural rules. But the problem then becomes that you are defining sentience or consciousness as nothing more than fallibility.
 

Offline David Cooper

Re: Quantifying Consciousness
« Reply #39 on: 09/01/2014 19:46:14 »
Quote
Not a lot of progress there, because sentience is equally undefined (except possibly as consciousness).

That is considerable progress over an analysis which brings all manner of stuff that's separate from consciousness into consciousness in the way that Don does. He doesn't want to do reductionism at all, while I want to take reductionism as far as it can go. Doing it half and half doesn't get you anywhere.

Quote
If you want your security light to be sentient, presumably you want it to decide whether the moving target is a threat, based on previous knowledge of the general characteristics of a threat, or the absence of characteristics of a friend. Either way you are simply adding learning and a statistical algorithm, so you have to look at something you call a sentient sentinel and ask how it (or he) acts and thinks to determine the intentions of an approaching object. Then I guess you would distinguish between a human that makes some kind of instinctive guess and a machine that sticks to rigid or neural rules. But the problem then becomes that you are defining sentience or consciousness as nothing more than fallibility.

A sentient equivalent would run the same computations and behave the same way as the non-sentient system - there's no reason why a sentient machine should have to guess instead of calculating. The difference is that it could be designed to feel scared when the cat first appeared, then relieved when it determines that it's only a moggy and not a burglar. There appears to be no advantage in sentience being involved though, and we can't model its involvement in any way that makes it useful. We know that the sentience we think we have could be entirely fake, no matter how well we are fooled into thinking it's real: if a simulated person in a wholly virtual world can be fooled into thinking simulated sentience is real, we can be fooled too. Sentience/consciousness appears to be a fake phenomenon: we are all zombies.
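
To illustrate (again with invented names and numbers), bolt a feelings layer onto such a detector and notice that the behaviour never consults it:

Code:
def guarded_light(ir_reading, looks_like_cat):
    feeling = "scared"               # the extra layer: set, but never consulted
    light_on = ir_reading > 30.0     # the same computation as the plain detector
    if looks_like_cat:
        feeling = "relieved"         # only a moggy, not a burglar
    return light_on                  # the behaviour ignores 'feeling' entirely

print(guarded_light(42.0, looks_like_cat=True))   # True, exactly as before

The feeling variable does no causal work, which is exactly the problem: we can't model how it could.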

It's hard to accept that, of course, but that's where reductionism takes us. That is why idealism has so much appeal (where matter only exists in thought and where if we're fixated on matter we're looking at the wrong thing - reductionism applied to something that doesn't really exist and which ignores the unseen things that really exist could potentially generate wrong answers). A machine which generates fictions about being sentient merely produces such data mechanically, so it's no different from the kind of data you find written in books - a made-up character in a story who travels about on flying carpets and who slays mythical creatures would be no less sentient than a real person. If you write a story about someone being tortured to death, you would be causing as much suffering by doing so as if you tortured someone to death for real. The amount of suffering involved would be zero though in both cases. We feel as if we are trapped inside the heads of individual apes and that we look out on the world through their eyes, but if there is no sentience there is no self in there, so we cannot be trapped any more than the imagined self of a character in a book is trapped in that fictional body within a story - we cannot be trapped in anything because we don't exist. But despite not existing, we persist in thinking that we do, and that's quite some trick. That is why I take idealism seriously - reductionism leads us to a place which denies our very existence, and few of us will ever accept that end point. The mechanics of thought which we have discovered suggest that cause-and-effect mechanisms are still heavily involved in some way even within idealism, but there may be some key, radical difference which allows everything to be built upon sentience.
 

Offline cheryl j

Re: Quantifying Consciousness
« Reply #40 on: 10/01/2014 02:44:39 »


Quote
There is either sentience or there is not. If you subtract sentience from something, whatever remains is not part of consciousness and has no place in its definition.

I think I see your point, but it’s hard to conceive of consciousness entirely separate from what one is conscious of. If you stripped consciousness of all of the processes “associated with” it, like intelligence or memory, looking for that mysterious essence of sentience, perhaps you would end up with absolutely nothing there. You can chip away at the concept, by saying consciousness can exist without this or without that, but if you remove all external sensory information, all internal stimuli, block access to memories, what is there to be sentient of? How does sentience exist in some pure, isolated state?

Either way, the fact that other systems – computers – can perform these functions but aren’t conscious, doesn’t mean consciousness doesn’t require them (whether we want to include any requirements in our definition or not).
Take memory, for example: one might not need long-term memory for consciousness, but I’d think you’d at the very least have to maintain something in working memory long enough to be sentient of it. You would need enough short term or working memory to connect one event to another in any meaningful way. It’s hard to imagine conscious experience as a series of instantly experienced and instantly forgotten snapshots of the world or even of internal sensation, instead of the moving-picture-like stream of consciousness we are accustomed to. If every time I see a chair, I am seeing it for the first time with no memory of prior associations, my awareness of it would probably be very photo-detector-like. Something is there or not there, with no significance or meaning attached to it, and probably no ability to generate any emotional response. Is that collection of parallel and perpendicular lines in my visual field something good or bad for me? With no prior associations, and no potential to create new ones, the chair remains parallel and perpendicular lines in my visual field.

People who lose the ability to lay down new long-term memory, or who have short-term memory deficits, did possess those capacities at one time. I would be surprised if a human born without any capacity for storing memory, or learning, would still develop consciousness or a sense of self.

Maybe one can’t point to the smallest component of the brain – a neuron or feedback loop – that is still capable of suffering, the way we “suffer.” What I do question is the criteria – when is a particular function of a component of a system “enough like” the display of that particular function in the system as a whole? With the function of movement, most people accept the explanatory link between sliding actin and myosin filaments inside muscle cells and the contraction of a muscle cell, the resulting shortening of muscles, and the locomotion of the entire body or movement of its parts. The movement of all of those things is considered “enough alike,” and the jump from one level to the next isn’t questioned. Nor does it bother anyone that if there are disruptions in quantity, arrangement or timing of things that move, you may not get the desired end result (a heart muscle in V-fib is useless as a pump). But people see sensation in cells as being too qualitatively different (too mechanical) from sentience in the brain. And they also balk at the idea of any “emergent properties” related to quantity, arrangement and timing. Why? I’m not saying sensation is or isn’t enough like sentience, but what is the qualitative demarcation?

Qualia are connected to consciousness; some people even define consciousness as the ability to experience qualia. It does seem evident, both subjectively and neurologically, that qualia occur where consciousness occurs. Two good examples are vision and pain. Blindsight involves the ability to avoid obstacles, identify objects and even track movement without experiencing the qualia of vision. A person with blindsight feels like they are making a wild guess, but it’s an accurate one none the less. (Interestingly, Ramachandran says patients can’t seem to use the information obtained through the more primitive, blindsight pathway to make choices.)

There is also no pain qualia associated with the reflex arc of jerking your hand away from a hot element - nerve impulses are transmitted from a heat receptor, through a sensory nerve to the spinal cord, and back out through a motor nerve to the muscles in the arm. A “CC” is also sent to the brain, resulting in the experience of pain, but it occurs after you have already moved your hand. So what is the point of the pain, if the body has a fully functioning, and quite effective, “zombie” program that prevents further damage to the skin from the hot element? Is pain from the CC message to the brain just an epiphenomenon of consciousness, or does it accomplish something that for some reason the zombie program can’t? It would appear to be a future behaviour reinforcer with a dimmer switch that says, in the case of mild pain, “try to avoid that next time,” or with severe pain, “Don’t ever do that again for any reason!” Perhaps the degree of pain affects, too, whether that experience is even stored as a long term memory.
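
Laid out as a toy timeline (invented names, no pretence of physiological accuracy), the sequence would be something like:

Code:
# The spinal "zombie program" withdraws the hand first; the late "CC"
# to the brain only matters for what it changes about future behaviour.
avoidance_strength = {}   # a crude stand-in for learned wariness

def touch_hot_element(location, pain_level):
    events = ["withdraw hand (spinal reflex - no brain, no pain yet)",
              "pain signal reaches brain (after the hand has moved)"]
    # the dimmer switch: stronger pain -> stronger future avoidance
    avoidance_strength[location] = avoidance_strength.get(location, 0) + pain_level
    return events

touch_hot_element("stove", pain_level=8)
print(avoidance_strength)   # {'stove': 8} - the lasting work the pain did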

Qualia may seem subjective, private and ethereal, but they are not without some very specific neural correlates. Ramachandran discusses two patients who laugh when they should experience pain. One lady would interpret pain as ticklish, even when stabbed with a needle. He says: “A CT scan revealed that one of the nerve pathways in her brain was damaged. Even though we think of pain as a single sensation, there are in fact several layers to it. The sensation of pain is initially processed in a small structure called the insula (‘island’) which is folded deep beneath the temporal lobe on each side of the brain. From the insula the pain information is then relayed to the anterior cingulate in the frontal lobes. It is here you feel the actual unpleasantness – the agony and the awfulness of pain – along with the expectation of danger. If this pathway is cut, as it was in Dorothy and presumably in Mihkey (his other patient), the insula continues to provide the basic sensation of pain, but it doesn’t lead to the expected awfulness and agony. The anterior cingulate doesn’t get the message. It says in effect ‘all’s okay.’ So here we have two key ingredients for laughter: A palpable and imminent indication that alarm is warranted (from the insula) followed by a ‘no big whoop’ - from the silence of the anterior cingulate. So the patient laughs uncontrollably.” (Ramachandran suggests a similar thing happens in the brain when a snake turns out to be a rubber toy, or we see someone slip on a banana peel but not get hurt – it’s funny.)

Like the two visual pathways, the zombie one and the conscious one, there may be two aversion pathways, but only the one engaging the anterior cingulate generates qualia. Why? I don’t know, but I would expect one pathway accomplishes something that the other can’t, and my guess would be it involves modifying future behaviour and involves generating a multitude of meaningful associations, between that event and similar scenarios, that object and similar objects, etc.

I guess one could still argue that a machine could do all of this without sentience, that it could do it some other way. But that doesn’t mean it is not the way animals like us do it. At any rate, I’d argue there is more to gain from looking at the neural pathways or areas of the brain closely associated with consciousness and asking “What’s different about them?” than from simply assuming that nothing is different, and consciousness serves no function, or doesn’t exist.

« Last Edit: 10/01/2014 02:51:00 by cheryl j »
 

Offline Ethos_

Re: Quantifying Consciousness
« Reply #41 on: 10/01/2014 05:00:59 »
As a very young child, I developed double pneumonia, from which I almost died. Though I was only several months old, I remember the toys that hung above my crib and the colors of the flowers that were directly outside my bedroom window. These are the first memories I can recollect. The next group of memories I can recall is the terrible taste in my mouth and the severe tightness in my chest resulting from the infection.

When I try to grasp what level of consciousness may have been active during this period of my life, a few interesting observations come to mind.

The shapes and colors of the toys in my crib seemed to be just a part of an abstract whole. By this I mean, it hadn't yet become a conscious fact to me that they were there and I was here in the crib. In fact, the "I" part of that equation hadn't become part of the whole at that point in my history. And the colorful flowers outside the window seemed more real and alive than anything in my room including myself. Not until the memory of the terrible taste and pain in my chest was "I" aware of "the self".

Consciousness can be defined in at least two different ways. First, being conscious of one's surroundings and secondly, being conscious of the self as a sentient being. In recent posts here, questions about computer consciousness have been explored. And while a computer is familiar with data, 1s and 0s, it might be said to be conscious of those two digits, but it is not sentient or self-aware.

Being conscious of details is different from being self-aware. This is why I define the precise moment when consciousness, or sentient awareness, surfaces as:

The moment when the "I" moves beyond its surroundings and becomes acquainted with the self.

Before that moment, even the "I" does not recognize its peculiar existence. Until that moment arrives, the "I" and its surroundings are a composite of existence. Only when the "I" moves beyond that point and becomes separate from its environment and individual to it do we find the birth of sentience. But this separateness is only in administration. While I contend that sentience is a separate and evolved function of the brain, its origin and process still remain there as a physical process.

It's a process, started in the brain, evolved within the brain, and completed there as well.
« Last Edit: 10/01/2014 18:24:49 by Ethos_ »
 

Offline alancalverd

Re: Quantifying Consciousness
« Reply #42 on: 10/01/2014 17:45:10 »
Quote
That is why I take idealism seriously - reductionism leads us to a place which denies our very existence, and few of us will ever accept that end point.

I disagree. An object exists to the extent that it affects other objects. If you can pick up a stone, or even get in the way of a photon, you exist, and this must be the starting axiom of any meaningful discussion. If your argument contradicts its axioms, it's wrong.
 

Offline David Cooper

Re: Quantifying Consciousness
« Reply #43 on: 10/01/2014 19:33:31 »
Quote
That is why I take idealism seriously - reductionism leads us to a place which denies our very existence, and few of us will ever accept that end point.

Quote
I disagree. An object exists to the extent that it affects other objects. If you can pick up a stone, or even get in the way of a photon, you exist, and this must be the starting axiom of any meaningful discussion. If your argument contradicts its axioms, it's wrong.

A book exists, and so do the words printed in it, but there is no link between any sentience generated in the material of the book+ink and the meaning of the sentences which may be describing events relating to sentience in imaginary characters. It's the same with a computer holding an electronic text (identical to the text in the book) - the material of the machine may be sentient, but there is no relationship between what it is feeling and the sentience described in the text.

If a machine is writing its own texts, little bits of text making up fictions about sentience supposedly being experienced in the machine, again there is no relationship between those bits of text and anything that might be being experienced by any sentient material in the machine - the generation of the texts simply maps fictions to certain inputs according to rules which have nothing to do with any actual sentience in the machine. Unless we can describe a system in which the texts are actually related to the sentience in the material of the thing holding or generating those texts, there is a total disconnect between them: the sentience of the machine is no more related to the sentience asserted in the texts than to the sentience in any rock in another galaxy. A machine exists and it is churning out fictions about sentience, but the sentient thing that is supposedly experiencing the qualia described in these fictions does not exist.
 

Offline alancalverd

Re: Quantifying Consciousness
« Reply #44 on: 10/01/2014 21:48:37 »
We seem to be drifting from consciousness to sentience with nothing to anchor the meaning of either, but it now seems that you consider sentience to be a property of things that are not machines. So how would an intelligent alien know what is a machine and what it's not?
 

Offline David Cooper

Re: Quantifying Consciousness
« Reply #45 on: 10/01/2014 22:36:24 »
Quote
There is either sentience or there is not. If you subtract sentience from something, whatever remains is not part of consciousness and has no place in its definition.

Quote
I think I see your point, but it’s hard to conceive of consciousness entirely separate from what one is conscious of.

You are conscious of qualia, and those could just be states of the thing that is sentient. Consciousness of anything beyond qualia is most likely just an illusion. I could describe to you how a car works, starting with explosions causing an expansion of gas which leads to a team of pistons pushing something round, then that rotation being transmitted to the wheels that push against the road. At the end of the process, you imagine that you consciously understand how a car works, but do you really understand it consciously? When each part of the explanation is processed, you feel as if you've consciously understood that part, and once the chain of components of the whole explanation has been processed, you feel as if you've consciously understood the whole chain, but already most of the chain is not in your immediate thoughts. You have to go back and forth along the chain of events thinking about individual interactions in order to be consciously aware of how they work, and even then there's the question as to how much of an individual thought you can be conscious of at a time. Sometimes the thing you think you're consciously understanding drifts right out of your mind and you're left feeling that you consciously understand it even though the details of the thing you feel you're consciously understanding have completely left the stage.

Quote
If you stripped consciousness of all of the processes “associated with” it, like intelligence or memory, looking for that mysterious essence of sentience, perhaps you would end up with absolutely nothing there. You can chip away at the concept, by saying consciousness can exist without this or without that, but if you remove all external sensory information, all internal stimuli, block access to memories, what is there to be sentient of? How does sentience exist in some pure, isolated state?

You would end up not with nothing, but with a sentient thing experiencing qualia. The things being chipped away are merely the things which induce those qualia in the sentient thing.

Quote
Either way, the fact that other systems – computers – can perform these functions but aren’t conscious, doesn’t mean consciousness doesn’t require them (whether we want to include any requirements in our definition or not).

If you decide that consciousness does require them, you end up with a problem when looking at a system which lacks one feature but which has another, such as a person who has no memory but can be made to suffer.

Quote
Take memory, for example: one might not need long-term memory for consciousness, but I’d think you’d at the very least have to maintain something in working memory long enough to be sentient of it.

You can torture a person who has no memory just by relying on live inputs.

Quote
You would need enough short term or working memory to connect one event to another in any meaningful way.

Pain doesn't need anything meaningful - when it's at its worst, it is just pain and nothing else.

Quote
It’s hard to imagine conscious experience as a series of instantly experienced and instantly forgotten snapshots of the world or even of internal sensation, instead of the moving-picture-like stream of consciousness we are accustomed to. If every time I see a chair, I am seeing it for the first time with no memory of prior associations, my awareness of it would probably be very photo-detector-like. Something is there or not there, with no significance or meaning attached to it, and probably no ability to generate any emotional response. Is that collection of parallel and perpendicular lines in my visual field something good or bad for me? With no prior associations, and no potential to create new ones, the chair remains parallel and perpendicular lines in my visual field.

The only difference memory makes is that feelings can be triggered by memories as well as by live inputs, or a combination of both. In the same way, the only difference intelligence makes is that feelings can be triggered by calculated ideas as well as by the ideas which were already there before the calculations were done.

Quote
People who lose the ability to lay down new long-term memory, or who have short-term memory deficits, did possess those capacities at one time. I would be surprised if a human born without any capacity for storing memory, or learning, would still develop consciousness or a sense of self.

They don't need memories to teach them how to experience qualia. Memories merely have a role in determining which qualia they are made to feel. A sense of self is something that comes out of intelligence and a feeling of understanding that you exist (even if you don't) and that you have identified something with yourself.

Quote
Maybe one can’t point to the smallest component of the brain – a neuron or feedback loop – that is still capable of suffering, the way we “suffer.” What I do question is the criteria – when is a particular function of a component of a system “enough like” the display of that particular function in the system as a whole? With the function of movement, most people accept the explanatory link between sliding actin and myosin filaments inside muscle cells and the contraction of a muscle cell, the resulting shortening of muscles, and the locomotion of the entire body or movement of its parts. The movement of all of those things is considered “enough alike,” and the jump from one level to the next isn’t questioned. Nor does it bother anyone that if there are disruptions in quantity, arrangement or timing of things that move, you may not get the desired end result (a heart muscle in V-fib is useless as a pump). But people see sensation in cells as being too qualitatively different (too mechanical) from sentience in the brain. And they also balk at the idea of any “emergent properties” related to quantity, arrangement and timing. Why? I’m not saying sensation is or isn’t enough like sentience, but what is the qualitative demarcation?

When you get into the small-scale detail of how things work, the parts of the brain controlling them are beyond the reach of consciousness. The conscious stuff that goes on is in an overseer which doesn't concern itself with little details, but within the places in the brain where the overseer functions, the small-scale action could be directly related to the qualia experienced by the sentience.

Quote
There is also no pain qualia associated with the reflex arc of jerking your hand away from a hot element - nerve impulses are transmitted from a heat receptor, through a sensory nerve to the spinal cord, and back out through a motor nerve to the muscles in the arm. A “CC” is also sent to the brain, resulting in the experience of pain, but it occurs after you have already moved your hand. So what is the point of the pain, if the body has a fully functioning, and quite effective, “zombie” program that prevents further damage to the skin from the hot element? Is pain from the CC message to the brain just an epiphenomenon of consciousness, or does it accomplish something that for some reason the zombie program can’t? It would appear to be a future behaviour reinforcer with a dimmer switch that says, in the case of mild pain, “try to avoid that next time,” or with severe pain, “Don’t ever do that again for any reason!” Perhaps the degree of pain affects, too, whether that experience is even stored as a long term memory.

If you just had a reaction to move your hand away from something hot without knowing what caused it, you'd be left not knowing what went wrong. The following pain would serve as an explanation, and a warning that damage may have been done. That too could be done non-consciously in a machine, of course, but in us it has to go through the part of the system that appears to have consciousness built into it, so it's presented to that part of the system in the appropriate form.

Quote
Like the two visual pathways, the zombie one and the conscious one, there may be two aversion pathways, but only the one engaging the anterior cingulate generates qualia. Why? I don’t know, but I would expect one pathway accomplishes something that the other can’t, and my guess would be it involves modifying future behaviour and involves generating a multitude of meaningful associations, between that event and similar scenarios, that object and similar objects, etc.

The conscious system is essentially a programmer within the brain. The background systems which lack consciousness are programmed systems (some preprogrammed through instincts) which react just like the non-conscious machines which we build. The point of informing the programmer is to give the programmer the opportunity to modify the automated systems to improve them. If something hot in the kitchen leads to burns, new automated systems can then be set up to make you more cautious in certain locations within the kitchen in case a hot thing occurs there again.
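
A crude sketch of that programmer-and-programmed arrangement (all names invented; no claim that brains literally store rules this way):

Code:
# Background habits are programmed systems; the conscious "programmer"
# only steps in when informed (e.g. by pain) that they need modifying.
automated_rules = {"kitchen_hob": "reach_freely"}

def overseer_on_pain(location, rules):
    # Informed of the burn, the overseer reprograms the automated layer
    # so the background system behaves more cautiously there next time.
    rules[location] = "approach_with_caution"

overseer_on_pain("kitchen_hob", automated_rules)
print(automated_rules)   # {'kitchen_hob': 'approach_with_caution'}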

Quote
I guess one could still argue that a machine could do all of this without sentience, that it could do it some other way. But that doesn’t mean it is not the way animals like us do it. At any rate, I’d argue there is more to gain from looking at the neural pathways or areas of the brain closely associated with consciousness and asking “What’s different about them?” than from simply assuming that nothing is different, and consciousness serves no function, or doesn’t exist.

There is indeed a lot to be gained by looking at all the things associated with consciousness, but I'm confident that what you'll eventually find is that you are either studying the mechanisms of a zombie or alternatively something virtual which may be set out to look as if it's a zombie in order to hide its connections to something external to the virtual universe. It appears to be impossible to model sentience, and that should make it impossible for us to identify it in the brain, but so long as there's anything going on in there that we don't understand, we'll be able to point at it and hope that sentience somehow happens there and magically manages to interface with the information system of the brain. It's going to be a long wait, and the conclusion at the end will most likely be that we can't know.
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: Quantifying Consciousness
« Reply #46 on: 10/01/2014 22:56:56 »
Quote
We seem to be drifting from consciousness to sentience with nothing to anchor the meaning of either,

The only thing you can anchor it to is your own experience of sentience - there's no way to anchor the meaning to mere words without going round in circles.

Quote
... but it now seems that you consider sentience to be a property of things that are not machines.

It could be a property of a machine, but for it to be so it would need to be designed into it in such a way as to enable the sentience to have a useful role in the function of the machine.

Quote
So how would an intelligent alien know what is a machine and what it's not?

You'd better ask him/her/other/it. It's going to depend on whether that intelligent alien is in the same boat as us or if he/she/other/it has access to more information or a different kind of information about the nature of reality.
 

Offline cheryl j

Re: Quantifying Consciousness
« Reply #47 on: 11/01/2014 15:32:46 »


Quote
You would end up not with nothing, but with a sentient thing experiencing qualia. The things being chipped away are merely the things which induce those qualia in the sentient thing...



Quote
...If you decide that consciousness does require them, you end up with a problem when looking at a system which lacks one feature but which has another, such as a person who has no memory but can be made to suffer.

But I can take even pain out of the system and it is still both sentient and conscious. I could in theory block, one by one, every type of sensory information, but more importantly, I could also just interfere with the specific neural machinery that people like Ramachandran say is needed to generate that particular quale, and as long as I leave you one form, presumably you are still sentient. So imagine you are now "the being that experiences red". No memory – every blast of red is redness for the first time, and it does not mean stop signs or apples or red lipstick, it’s not good or bad, you can't even miss its absence forlornly. I don’t know if you are a person or other animal, or just a section of brain tissue in a laboratory. You may not know either. Are you still, by our definition, conscious? And if I cruelly decide never to stimulate that nerve pathway in any way that signals red, then what?
 

Offline alancalverd

Re: Quantifying Consciousness
« Reply #48 on: 11/01/2014 17:44:01 »

Quote
The only thing you can anchor it to is your own experience of sentience - there's no way to anchor the meaning to mere words without going round in circles.


Pity about that. I can define a cow in such a way that a Martian could recognise a cow and a non-cow, and I can define a colour by example. Even such abstractions as energy and entropy are definable such that we both know what the other is talking about, and when we measure energy or calculate entropy, we both get the same number. But consciousness or sentience seems to defeat the definitive powers of those who discuss it, which makes quantification doubly impossible.
 

Offline David Cooper

Re: Quantifying Consciousness
« Reply #49 on: 11/01/2014 20:15:32 »
Quote
But I can take even pain out of the system and it is still both sentient and conscious. I could in theory block, one by one, every type of sensory information, but more importantly, I could also just interfere with the specific neural machinery that people like Ramachandran say is needed to generate that particular quale, and as long as I leave you one form, presumably you are still sentient.

Correct.

Quote
So imagine you are now "the being that experiences red". No memory – every blast of red is redness for the first time, and it does not mean stop signs or apples or red lipstick, it’s not good or bad, you can't even miss its absence forlornly. I don’t know if you are a person or other animal, or just a section of brain tissue in a laboratory. You may not know either. Are you still, by our definition, conscious?

Yes - any quale will do.

Quote
And if I cruelly decide never to stimulate  that nerve pathway in any way that signals red, then what?

There would be nothing cruel about that, but if you never trigger it there might as well be no connection.
 
