Naked Science Forum

General Discussion & Feedback => Just Chat! => Topic started by: Ahmad Soomro on 01/01/2014 04:54:09

Title: Quantifying Consciousness
Post by: Ahmad Soomro on 01/01/2014 04:54:09
Someone asked me:

If consciousness is not generated by the physical brain, but instead exists somewhere else and somehow is linked to the brain, shouldn’t there be some kind of evidence that can be quantified (assuming there is no ‘supernatural’ magic at work or anything). What are your thoughts?

So I posted my response on my blog:
http://ahmadsoomro.com/asking-ahmad-1-quantifying-consciousness/

Check it out, subscribe to my newsletter, and let me know what you have to say!

Happy New Year!
Ahmad Soomro
Title: Re: Quantifying Consciousness
Post by: alancalverd on 01/01/2014 11:40:53
Define consciousness, and I'll be happy to join in the discussion.
Title: Re: Quantifying Consciousness
Post by: Ethos_ on 01/01/2014 15:16:16
Someone asked me:

If consciousness is not generated by the physical brain, but instead exists somewhere else

Baloney!!

What intelligent reasonable person would suggest that consciousness originates anywhere but in the brain?

Maybe it originates across town, or maybe on Alpha Centauri? Anywhere but the logical place, go figure????????

The Don Quixote virus is becoming epidemic!
Title: Re: Quantifying Consciousness
Post by: cheryl j on 01/01/2014 20:39:50
It is an interesting question even without the non-local consciousness aspect. There's brain imaging that measures the consumption of glucose or oxygen, or the rate of blood flow in the brain. There's EEG, which looks at electrical activity from nerve transmission. But one problem is that researchers found that the brain works harder and uses more oxygen or glucose when it's struggling with a problem than when it can solve it easily. So I don't know how you could measure the quality of operations or the depth or complexity of thinking - kind of like not being able to tell from the odometer or speedometer how far or fast a car has moved, or if it's just been spinning its wheels in a snowbank.

One of the most interesting ways of eavesdropping on the brain is comparing the location of activity to brain maps and reconstructing thoughts based on computer data banks. This is a cool website. I think it's impressive how closely they managed to match some of the original images that the person was looking at with what was in a computer's data bank. And it also seems important that the accuracy increased when they combined both visual mapping and semantic brain mapping.

http://gallantlab.org/
Title: Re: Quantifying Consciousness
Post by: David Cooper on 01/01/2014 20:40:26
So I posted my response on my blog:
http://ahmadsoomro.com/asking-ahmad-1-quantifying-consciousness/

Light gray text on a white background is hard to read - most people won't bother trying to read your blog as a result.


What intelligent reasonable person would suggest that consciousness originates anywhere but in the brain?

It easily could. This universe could be virtual and we could be outside of it but wired into it in some way. If that's the case though, how far can we explore the functionality of the brain while the virtual world continues to show us something that appears to provide that functionality while not actually doing so? That would be a fun thing to program, and it may be that someone has done exactly that.
Title: Re: Quantifying Consciousness
Post by: Ethos_ on 01/01/2014 21:56:04
This universe could be virtual and we could be outside of it but wired into it in some way. If that's the case though, how far can we explore the functionality of the brain while the virtual world continues to show us something that appears to provide that functionality while not actually doing so?
While only containing two letters, the word IF is still one of the largest in our vocabulary. My dad always used to say: "IF a frog had wings, he wouldn't bump his behind parts on the ground when he hopped."

This is why we have to examine the feasibility of our search and the possible fruits we may obtain through its endeavor. I doubt seriously that we'll be garnering any useful information about an intelligent consciousness that resides beyond our useful control.

You seem to suggest that we may be living in a Matrix of sorts. May I suggest to you that science deals with evidence we can measure. Until evidence for this spooky Matrix world view is found, I suggest we're wasting our time speculating about any such mythical theatrics.

Title: Re: Quantifying Consciousness
Post by: David Cooper on 02/01/2014 19:53:24
This universe could be virtual and we could be outside of it but wired into it in some way. If that's the case though, how far can we explore the functionality of the brain while the virtual world continues to show us something that appears to provide that functionality while not actually doing so?
While only containing two letters, the word IF is still one of the largest in our vocabulary. My dad always used to say: "IF a frog had wings, he wouldn't bump his behind parts on the ground when he hopped."

If you're trying to work out how things are, "if" has a major role to play. If you want to ignore that, you may not explore the right paths.

Quote
This is why we have to examine the feasibility of our search and the possible fruits we may obtain through its endeavor. I doubt seriously that we'll be garnering any useful information about an intelligent consciousness that resides beyond our useful control.

If you restrict yourself to analysing what a potentially-virtual world allows you to see, you may be blinding yourself to the very thing you want to know about. The reason it's particularly worth considering the possibility that the universe is virtual is that it encourages us to try to work out how consciousness might work in principle in addition to trying to follow the mechanisms which we can barely get a handle on within the brain.

Quote
You seem to suggest that we may be living in a Matrix of sorts. May I suggest to you that science deals with evidence we can measure. Until evidence for this spooky Matrix world view is found, I suggest we're wasting our time speculating about any such mythical theatrics.

It is quite possible that the stars burned out long ago and we're now living in impoverished places from which we escape into virtual worlds which recreate what the universe has lost. I'm suggesting that you should keep your eyes open to both possibilities, which is quite different from the other two positions you can take on this where you open your eyes only to one of them and risk missing the truth.
Title: Re: Quantifying Consciousness
Post by: Ethos_ on 02/01/2014 21:40:40


If you're trying to work out how things are, "if" has a major role to play. If you want to ignore that, you may not explore the right paths.

The right path is the scientific method, David, not just a bunch of speculative "if"s. I'm not as interested in the ifs as I am in the whys and the hows.

We can choose to waste our time with constant metaphysical speculation, or we can look for measurable evidence. I choose to use the scientific method; you're free to waste your time if you so choose.
Title: Re: Quantifying Consciousness
Post by: cheryl j on 02/01/2014 22:17:33
I was reading an article called "A Computational Model of Machine Consciousness." And what impressed me about it was, without really insisting that it was possible or that they were going to do it, the authors just said, if one were to try to build a conscious machine, what would it look like? What would be required?

That is a kind of open ended speculation that I admire.

Here's the abstract and the link if anyone is interested:

"Despite many efforts, there are no computational models of consciousness that can be used to design conscious intelligent machines. This is mainly attributed to available definitions of consciousness being human centered, vague, and incomplete. Through a biological analysis of consciousness and concept of machine intelligence, we propose a physical definition of consciousness with the hope to model it in intelligent machines. We propose a computational model of consciousness driven by competing motivations, goals, and attention switching. We propose a concept of mental saccades that is useful for explaining the attention switching and focusing mechanism from computational perspective. Finally, we compare our model with other computational models of consciousness."

http://www.ohio.edu/people/starzykj/network../research/Papers/A%20Computational%20Model%20of%20Machine%20Consciousness.pdf

Even if it turns out to be impossible to generate consciousness in this way, working through the problem, or simulating consciousness, might still reveal other valuable insights into problem solving in machines and brains.
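
Just to make the abstract's idea a bit more concrete, here is a toy sketch of attention switching driven by competing motivations - my own invented names and numbers for illustration, not the authors' actual model. Whichever motivation is currently most urgent wins the "mental saccade" and captures attention:

Code:
import random

# Toy competition between motivations for the focus of attention.
motivations = {"hunger": 0.2, "curiosity": 0.5, "safety": 0.1}

def mental_saccade(motivations):
    # The motivation with the highest current urgency captures attention.
    return max(motivations, key=motivations.get)

for step in range(5):
    # Urgencies drift as the (simulated) environment and internal state change.
    for m in motivations:
        motivations[m] = min(1.0, max(0.0, motivations[m] + random.uniform(-0.2, 0.2)))
    print(f"step {step}: attention switches to '{mental_saccade(motivations)}'")

Running it just shows the focus hopping between goals as their urgencies change - nothing conscious about it, which is rather the point of asking what more would be required.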
Title: Re: Quantifying Consciousness
Post by: alancalverd on 03/01/2014 00:02:17
A disappointing paper, alas. After wandering into a multiplicity of divergent alleyways it seems to settle on the essence of consciousness being a flexible, multivariate approach to optimisation. Hardly exciting, and to a considerable extent contradicting some of the examples the authors give of non-conscious systems.   

But 5/10 for trying, at least, to define what they are talking about, which puts them way ahead of the rest of the field.
Title: Re: Quantifying Consciousness
Post by: cheryl j on 03/01/2014 01:28:55

A disappointing paper, alas. After wandering into a multiplicity of divergent alleyways it seems to settle on the essence of consciousness being a flexible, multivariate approach to optimisation. Hardly exciting, and to a considerable extent contradicting some of the examples the authors give of non-conscious systems.   

But 5/10 for trying, at least, to define what they are talking about, which puts them way ahead of the rest of the field.

Yes, I agree, it wasn't an earth-shaking paper, but I admired their gutsy, Brave Little Toaster approach to a daunting question.
Title: Re: Quantifying Consciousness
Post by: cheryl j on 03/01/2014 02:07:57
Dr. Giulio Tononi wants to build a "consciousness meter."

His working definition: "Consciousness, Dr. Tononi says, is nothing more than integrated information."

http://www.nytimes.com/2010/09/21/science/21consciousness.html?pagewanted=2&_r=0
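
For a feel of what "integrated information" could mean as a number, here is a toy calculation - a crude proxy invented for illustration, not Tononi's actual phi. It compares the uncertainty in a whole two-unit system with the uncertainty in its parts taken separately; when the parts are correlated, the whole carries less uncertainty than the parts suggest, and that gap (the mutual information) is a rough stand-in for integration:

Code:
from collections import Counter
from math import log2

def entropy(samples):
    # Shannon entropy (in bits) of an empirical distribution.
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

# Two binary units that usually copy each other - an "integrated" pair.
pairs = [(0, 0), (1, 1), (0, 0), (1, 1), (1, 0), (0, 0), (1, 1), (0, 1)]
xs = [x for x, _ in pairs]
ys = [y for _, y in pairs]

# Mutual information: how much the whole exceeds the sum of its parts.
integration = entropy(xs) + entropy(ys) - entropy(pairs)
print(f"integration ~ {integration:.2f} bits")

If the two units were independent the number would be zero; the more tightly they constrain each other, the larger it gets. Tononi's real measure is far more elaborate, but the flavour is similar.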
Title: Re: Quantifying Consciousness
Post by: David Cooper on 03/01/2014 21:00:53


If you're trying to work out how things are, "if" has a major role to play. If you want to ignore that, you may not explore the right paths.

The right path is the scientific method David and not just a bunch of speculative if's. I'm not as interested in the if's as I am in the why's and the how's.

The scientific method does not tell you to reject any of the "if"s on a whim. You should not be rejecting any of them until they are disproved. You can certainly prioritise them in order of likeliness, but this particular "if" is hard to put a probability value on. You may have noticed that computer games are very popular and people want them to feel more and more real. That means that we will almost certainly in the future redesign ourselves in such a way that we can plug our minds into virtual worlds and be unable to tell they are not real. We will also extend our lifespans to thousands of years, maybe tens of thousands or much more, at which point the biggest danger to your life is accidents, but these can be avoided by living in virtual worlds which can be made to feel 100% real and which can also be made more fun than the real world. We may already be in such a virtual world, playing games that can last a hundred years or more. Trying to do science with the fake data presented to us showing the supposed workings of our mind would be very stupid science indeed, so we should not have a religious belief in what we observe there being the truth.

Quote
We can choose to waste our time with constant metaphysical speculation or, we can look for measurable evidence. I choose to use the scientific method, you're free to waste your time if you so choose.

I'm not wasting any time by keeping the door open to this possibility, but you could be missing the truth by rejecting it on false grounds.
Title: Re: Quantifying Consciousness
Post by: Ethos_ on 03/01/2014 22:02:54


The scientific method does not tell you to reject any of the "if"s on a whim. You should not be rejecting any of them until they are disproved.

The scientific method:

1. Define a question.
2. Gather information and resources (observation)
3. Form an explanatory hypothesis
4. Test the hypothesis by performing experiments and collecting data in a reproducible manner
5. Analyze the data
6. Interpret the data and draw conclusions
7. Publish the results
8. Retest (frequently by other scientists)

We should all question just how far down this list we have gotten following mystical assertions.

The author of this thread starts with this question: "If consciousness is not generated by the physical brain"

1. Define a question  ...........................................(check)

2. Gather information and resources (observations)..(can't check this one)

Where are the observations suggesting consciousness originates elsewhere?   

3. Form an explanatory hypothesis..........................(can't check this one either)

Without those suggestions, this research has reached as far as the scientific method allows us to go!!
Title: Re: Quantifying Consciousness
Post by: David Cooper on 03/01/2014 23:49:35
2. Gather information and resources (observations)..(can't check this one)

What's the problem there? You can investigate your own consciousness even if it leads you beyond the brain and right out of the universe - you just have to feel it.

Quote
Where are the observations suggesting consciousness originates elsewhere?

The inability to pin down anywhere that it occurs in the brain other than by waving at the whole thing and saying "it's there, somewhere".

Quote
3. Form an explanatory hypothesis..........................(can't check this one either)

Consciousness resides outside the universe and the universe is virtual.

Quote
Without those suggestions, this research has reached as far as the scientific method allows us to go!!

No it hasn't - we're still free to explore step 3, thinking up theoretical ways in which sentience can be usefully integrated with an information system, and if anyone ever comes up with a viable model for that, however exotic that model may be, we can then search for ways to test it by looking for the points of interaction.

You can argue that it isn't much use if you get stuck at step 3 and never come up with an explanatory hypothesis beyond suggesting that the mechanism is elsewhere, but that's no different from getting stuck at step 3 and never coming up with an explanatory hypothesis beyond suggesting that the mechanism is in the brain - thus far no one has ever come up with any genuinely-explanatory hypothesis at all, because no attempted account of it can produce any kind of model showing how any aspect of consciousness can be made to serve as a functional part of the system in any way that enables consciousness to be recognised by the system.

We're all stuck at the same point and it would be a mistake to try to shut down anyone else's exotic explorations because they might just be lucky enough to trip over some kind of theoretical mechanism that unlocks the puzzle. In this particular field, what we're actually seeing is a lot of work which is pointing to consciousness being a wholly fake phenomenon with a lot of deluded people trying to shoehorn it in regardless just because they already believe in it and don't want to give up that belief. The complexities of the brain are so horrific that it may be many decades or even centuries before we can get a handle on the mechanism by which the reports of phenomena relating to consciousness are generated by the brain in order to test whether they are actually real or nothing more than fabrications. For this reason, faster progress is more likely to be made by having more people investigate step 3 by attempting to create theoretical models which might unlock mechanisms by which consciousness could occur, without being shackled in their thinking by what they know of current physics. What we need is a model, any model, that can show a useful cause-and-effect role for consciousness in a system where the existence of that consciousness can also be recognised by the system. We don't have one of those at all at the moment and trying to shut people's minds down is not at all helpful.
Title: Re: Quantifying Consciousness
Post by: Ethos_ on 04/01/2014 00:43:40
What we need is a model, any model, that can show a useful cause-and-effect role for consciousness in a system where the existence of that consciousness can also be recognised by the system. We don't have one of those at all at the moment and trying to shut people's minds down is not at all helpful.
The author of this thread made the statement; "If consciousness is not generated by the physical brain"

To follow the scientific method, the author is responsible for providing the alternative he suggests may exist. I challenge him to suggest exactly where he thinks consciousness is generated if not in the brain. And BTW, don't accuse me of trying to shut it down; it is the scientific method that has established the criteria. And it will be the author of this thread that causes the shutdown if he fails to produce what the scientific method demands.


Don't get upset when a scientist reminds you that your vision can't proceed past the 3rd step. Science needs evidence to move beyond that mark, and the 3rd step is where this argument will stay until the author of this thread tells us where he thinks consciousness is generated if not in the brain.

So I repeat: "Where are the observations that suggest consciousness originates elsewhere?"

Enough said! 
Title: Re: Quantifying Consciousness
Post by: cheryl j on 04/01/2014 15:52:33
It might still be an interesting question to ask: "If X is a simulation, how could we tell?" In what way could Truman detect the Truman Show if he tried hard enough to find out? Whatever it is that makes a simulation slightly different from the real thing would seem to be very significant, in the same way whatever makes a computer model of consciousness different from the real thing is important - although, I suppose, they could both turn out eventually to generate consciousness and differ mainly in materials or in how they do it.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 05/01/2014 00:09:22
So I repeat: "Where are the observations that suggest consciousness originates elsewhere?"

Enough said! 

It isn't all about observations. It's about the lack of room for consciousness to exist in the brain. It isn't clear that it's any easier for it to exist outside of the brain, but once you've ruled out the possibility of it existing inside the brain you have to look somewhere else.
Title: Re: Quantifying Consciousness
Post by: alancalverd on 05/01/2014 01:26:50
OK, we now have three very intelligent people discussing how to measure something. Would any of you care to define what it is you are trying to measure? A functional definition will suffice for a start, as in "consciousness is that which....." 
Title: Re: Quantifying Consciousness
Post by: Ethos_ on 05/01/2014 16:13:53

It isn't all about observations.
OK everybody, put your blindfolds on, stuff cotton in your ears, and disregard everything your senses are telling you about reality.
Quote from: David Cooper
It's about the lack of room for consciousness to exist in the brain.
Mr. Cooper, I'm frankly quite content with the storage ability we presently operate with; have you considered adding on?

Science has estimated that we only use a small fraction of our mental capacity; there should be plenty of room left up there for storage.
Title: Re: Quantifying Consciousness
Post by: Ethos_ on 05/01/2014 16:17:24
OK, we now have three very intelligent people discussing how to measure something. Would any of you care to define what it is you are trying to measure? A functional definition will suffice for a start, as in "consciousness is that which....."
Consciousness is that which: "establishes the communion between the self and its environment."

Consider the word; "Myself"

This is a compound word consisting of two words: My and self. The "My" establishes ownership of the following word, "self". To understand the significance of this union, one needs to grasp the notion that the "My" refers to the physical attributes of one's existence and the "self" extends to the ethereal portion of this alliance between body and mind.

There is no absolute evidence that this alliance exists without both participants being involved. Until that evidence surfaces, we can only speculate, and speculation is not science.
Title: Re: Quantifying Consciousness
Post by: alancalverd on 05/01/2014 17:24:53
Quote
that which: "establishes the communion between the self and its environment."
is the nervous system and its associated sensors.

My self (every object and process within this body) is simply a distinction between the speaker, your self, and themselves. Bees may not grasp the significance of individual spatial boundaries but intelligent humans certainly do.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 05/01/2014 22:04:51
The trouble is that a zombie can do all this stuff thinking about "self" without having any consciousness or actual self at all. The key aspect of consciousness is sentience - without sentience you have a zombie and no self. Sentience depends on something being in existence which experiences qualia, and that thing is a soul - not the kind of bloated soul that religious people bang on about (with the magical ability to retain memories after the data store has been destroyed and the magical ability to think after the thinking mechanisms of the brain have been destroyed), but I mean a minimal soul as in something which is capable of experiencing qualia: the "I" in the machine, the thing that can be tortured and which therefore requires the invention of morality to protect it from harm. If your scientific model has no room for a soul of this kind in the brain, then consciousness can only occur if it is located somewhere outside the brain. Theories of consciousness which have qualia experienced in something that emerges out of complexity where the sufferer has no actual substance are not viable - you cannot make something suffer if it doesn't really exist but merely emerges into some kind of "existence" by magic. That is why it seems so rational to look for consciousness outside of the brain, but it is also still rational to look for it inside the brain, just so long as you're prepared to look for something that's actually capable of being tortured. If we are just machines like the computer you're using to read these words, you have no sentience (and therefore no consciousness) - no matter how you program it, you cannot make a computer suffer and it is pointless to torture it in the hope that it will feel anything. Every aspect of consciousness is tied up with feeling - you feel sensations, you feel as if you exist, you feel as if you understand things (even when the substance of the thought has slipped away and it's left empty). Sentience is the issue and the sentient thing, if there is one in us, is the self.
Title: Re: Quantifying Consciousness
Post by: Ethos_ on 05/01/2014 23:49:50
If your scientific model has no room for a soul of this kind in the brain, then consciousness can only occur if it is located somewhere outside the brain.
I never said there was no room inside the brain for what you term as the soul. So that leaves us with the question I already asked you: if not in the brain, where would you suggest we find this so-called "soul"?

Quote from: David Cooper
That is why it seems so rational to look for consciousness outside of the brain, but it is also still rational to look for it inside the brain, just so long as you're prepared to look for something that's actually capable of being tortured.

I do not agree that it is rational to find consciousness outside of the brain. But as you have conceded, "it is also still rational to look for it inside the brain".

This has been my issue with this whole argument from the outset. You seemed to insinuate that there wasn't room in the brain for the fullness of consciousness. And one or two others here have tried to imply that this consciousness lives on even when the physical brain has died. And the point about Zombies gives no support for that notion either, because being a Zombie doesn't eliminate the brain which is still alive even though it's in a Zombie's head. And don't start bringing up near death experiences as evidence for the survival of the consciousness. They call it NEAR death for a good reason.
Title: Re: Quantifying Consciousness
Post by: alancalverd on 06/01/2014 00:04:03
A few people do not feel pain. This is sometimes caused by a rare genetic anomaly leading to a hardware fault in which some or all pain sensors just do not communicate with the brain. Such people are immune to most physical torture but the more common presentation is extreme loss of form or function due to a broken bone and the statement "I didn't realise anything was wrong until I tried to stand up".  I was shown such a case a couple of years ago in a medical research ethics committee, with the question "The proposed experiment would cause extreme pain in a normal subject but not to Ms X. Is it ethically acceptable to do it to Ms X?" 

Aside from the ethical conundrum (which actually revolved around informed consent to the permanent risk), are you asserting that Ms X did not have a soul?

Psychological torture is if anything even more mechanistic since it relies either on sleep deprivation or suchlike, or on the frustration of expectations.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 07/01/2014 01:15:33
...are you asserting that Ms X did not have a soul?

Pain is just the best example of an unpleasant quale - there are plenty of others that can be used for torture. If you take all of them away such that a person can't feel anything unpleasant at all, there's still a sentience in there if they can feel pleasant or completely neutral sensations, so they have a "soul" if they have any of those at all. If they lack all sensation, that person is a zombie.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 07/01/2014 01:36:37
I never said there was no room inside the brain for what you term as the soul. So that leaves us with the question I already asked you: if not in the brain, where would you suggest we find this so-called "soul"?

Outside of this possibly-virtual universe would be the most likely place.

Quote
I do not agree that it is rational to find consciousness outside of the brain. But as you have conceded, "it is also still rational to look for it inside the brain".

I agree that it's rational to keep looking for it inside the brain, but it's also rational to give up on that and to want to look elsewhere. There's not much hope of finding it anywhere because it looks as if consciousness is not a real phenomenon.

Quote
This has been my issue with this whole argument from the outset. You seemed to insinuate that there wasn't room in the brain for the fullness of consciousness.

I assumed that you'd read the early pages of the Don's thread and would have understood my position. I didn't want to attack your position but merely defend the idea of looking for consciousness outside of the brain - I suspect that this universe is virtual and that it is likely to be incapable of hosting sentience as a result.

Quote
And one or two others here have tried to imply that this consciousness lives on even when the physical brain has died. And the point about Zombies gives no support for that notion either, because being a Zombie doesn't eliminate the brain which is still alive even though it's in a Zombie's head.

The brain in a zombie can be alive like a plant, lacking sentience and lacking any kind of "soul". Consciousness depends on a sentience (a thing that experiences qualia), and there's no reason to suppose that that sentience can be destroyed by death. A plant, zombie or rock could be filled with trillions of sentiences which aren't wired into anything that will induce qualia in them in any useful way. All matter could be sentient, but no use is made of that sentience unless it is in a system which can both load it with feeling and read back the feeling status from it.

Quote
And don't start bringing up near death experiences as evidence for the survival of the consciousness. They call it NEAR death for a good reason.

I agree that they are useless as evidence for that. I once had an out-of-body experience as a child while fully awake and not anywhere near death (though I was shaking violently in a state of shock at the time) - it's just something the mind can do which results in distortions of perception.
Title: Re: Quantifying Consciousness
Post by: Ethos_ on 07/01/2014 03:55:55


I assumed that you'd read the early pages of the Don's thread and would have understood my position. I didn't want to attack your position but merely defend the idea of looking for consciousness outside of the brain - I suspect that this universe is virtual and that it is likely to be incapable of hosting sentience as a result.

I must apologize for assuming too much, and for not giving enough time and effort to read that thread from start to finish. But in my own defense, it became very evident from reading several of Don's posts that most of them were only boring repetitions of his previous posts. So in my laziness and boredom I really didn't care about wasting time reading any more of his crap than I had to.

I will confess after reading your clarifications on the subject that I find much more agreement with you than I do with Don. Nevertheless, the one issue I still disagree with you on is a viable consciousness outside the brain. I am willing to overlook that and submit that we can agree to disagree in a friendly manner. However, finding any sort of cordial arrangement with Don has become impossible. His insults, calling some of us swine and monkeys, have him looking and sounding like a simple brat. I simply have no use for that sort of attitude, or for his unwillingness to calmly discuss; he only wants to argue his points as if nobody else is smart enough to understand his brilliance. Nothing but a waste!
Title: Re: Quantifying Consciousness
Post by: alancalverd on 07/01/2014 12:48:44
If this is a virtual universe, what is it a model of?  If it is an adequate model, then it should replicate or simulate all the characteristics of a real one. If it is not an adequate model, what is its purpose, and why go looking for simulations that you know are absent?
Title: Re: Quantifying Consciousness
Post by: David Cooper on 07/01/2014 21:10:00
I must apologize for assuming too much, and for not giving enough time and effort to read that thread from start to finish. But in my own defense, it became very evident from reading several of Don's posts that most of them were only boring repetitions of his previous posts. So in my laziness and boredom I really didn't care about wasting time reading any more of his crap than I had to.

I understand completely - the guy is incapable of putting his own points across in his own words in a compact form, so no one who has anything else to do with their time can read more than a tiny fraction of the thread.

Quote
I will confess after reading your clarifications on the subject that I find much more agreement with you than I do with Don.

I think if you were to try to draw our positions in a diagram, there would be two circles with a small area of overlap between them. Don's position is represented by one of those circles while the positions of the rest of us are collectively represented by the other and our individual circles within that collective one would vary very little from each other.

Quote
Nevertheless, the one issue I still disagree with you on is a viable consciousness outside the brain. I am willing to overlook that and submit that we can agree to disagree in a friendly manner.

I'm in disagreement with myself on that point - I can't see how it can be viable inside or outside of the brain.

Quote
However, finding any sort of cordial arrangement with Don has become impossible. His insults, calling some of us swine and monkeys, have him looking and sounding like a simple brat. I simply have no use for that sort of attitude, or for his unwillingness to calmly discuss; he only wants to argue his points as if nobody else is smart enough to understand his brilliance. Nothing but a waste!

The real trick is to starve a thread like that instead of feeding it. The level of attention he's getting is a substantial reward as it boosts his status - he is serving as some kind of teacher handing out reading material for the class to work through, but the quality of most of it is either shoddy or out of date. I don't understand why people are letting him manipulate them in that way when they could find far better things to read on the subject by themselves.
Title: Re: Quantifying Consciousness
Post by: Ethos_ on 07/01/2014 21:30:58


The real trick is to starve a thread like that instead of feeding it. The level of attention he's getting is a substantial reward as it boosts his status - he is serving as some kind of teacher handing out reading material for the class to work through, but the quality of most of it is either shoddy or out of date. I don't understand why people are letting him manipulate them in that way when they could find far better things to read on the subject by themselves.
You must have been reading my mind there, Dave. If you'll notice, I've voluntarily removed myself from that thread shortly before you posted this note. And BTW, you have earned my respect for dealing honestly and clearly with your interpretations regarding these issues. I look forward to further discussions with you my friend..............................Ethos
Title: Re: Quantifying Consciousness
Post by: David Cooper on 07/01/2014 22:31:17
If this is a virtual universe, what is it a model of?

It could be a model of a real universe, or it could be an experimental model of a universe with invented laws of physics. From inside it, there's no way to tell.

Quote
If it is an adequate model, then it should replicate or simulate all the characteristics of a real one.

If consciousness/sentience can't be simulated, the only way to include it would be to host consciousness/sentience outside of the simulation but to connect it into it. If we tried to simulate this universe in a computer, consciousness/sentience could not be real within that simulation, but we could have real people (potentially with real consciousness/sentience in them) outside of the simulation with all their brains' inputs and outputs connected to the virtual model such that it appears to them to be absolute reality.

But let's think through what would happen if you try to avoid having any consciousness/sentience outside of the virtual world and simulate it within the virtual world instead. If you simulate a fire, there is no real heat generated, but anyone in the virtual world will calculate that the fire was real and that real heat was generated by it (if they start from the premiss that the virtual world they inhabit is real and not virtual). The same would apply to consciousness/sentience if it could be simulated - a virtual person with simulated consciousness/sentience should determine that he/she has real consciousness/sentience in them if they start from the premiss that the virtual world they inhabit is real and not virtual. Such a simulated person could be completely convinced that suffering is absolutely real even though it is nothing more than a simulation, and they would see a need for morality which simply isn't there - there is no harm in making these virtual beings suffer because they are nothing more than data and must be incapable of real suffering. The entire simulation could be run on paper with a pencil and without any possibility of any sentience occurring within the simulation - only the pencil holder would be able to be sentient, and all he would be feeling most of the time would be intense boredom at having to perform such a mindless task, not understanding anything of what is being simulated.

If it's possible for a simulated person to be fooled into thinking they're experiencing real pain when they aren't, then it should be possible to fool a real person into thinking they're experiencing real pain when they aren't too, so however real the pain might appear to feel, it can't be guaranteed to be real and there may be no actual feeling at all. That means that the evidence we think we have about being sentient beings cannot be trusted - if a virtual person can be fooled, so can we. However, it appears to be impossible to simulate a sentient person in the first place - all we can do is simulate a zombie and then add rules to make it generate false claims about it being sentient. Science may find out some day that we do the same thing and that we are all zombies, and that would certainly be the simplest answer to the whole question (though there would still be a problem in explaining how such a system could usefully evolve: why create all these fictions when it's so easy to control a non-sentient machine's behaviour without it feeling anything - the fictions would be a pointless extra layer of stuff which has no impact on the behaviour).
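
To see how cheap such fictions are, here's a trivial sketch - invented purely for illustration, obviously not a model of a brain - of a zombie that generates claims about sentience entirely mechanically, with nothing in it that could feel anything:

Code:
# A trivial "zombie": it emits first-person claims about experience
# purely mechanically - nothing here could feel anything.
def zombie_report(stimulus):
    claims = {
        "damage": "That hurts! I am really suffering.",
        "sugar": "Mmm, that tastes sweet to me.",
    }
    return claims.get(stimulus, "I feel nothing in particular.")

for stimulus in ("damage", "sugar", "silence"):
    print(zombie_report(stimulus))

The reports are just data lookups, yet a system built this way would pass on its claims of suffering as confidently as we do.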

It doesn't look at all good for the existence of consciousness/sentience. We have discovered one way of doing computation which cannot support sentience, but there's no guarantee that there isn't another way that's radically different. What if there's some exotic alternative based on sensation, one in which sensation can be understood by the system? I'm not sure that this could be possible within our universe, which is why I consider it reasonable to look outside the universe where the rules may be more flexible. Then again though, the rules of our universe may be more flexible than they appear, so it may be unnecessary to look outside of it. Either way though, what we need to find is some explanation of sentience that could allow us to model it in principle, but we don't have that even though it looks as if it should be dead easy. We can of course model sentience in so far as we can assert that something is sentient, but what we can't do is model how that sentience can make itself known to anything beyond itself.

Quote
If it is not an adequate model, what is its purpose, ...

If you create a virtual world for people to think they live in where they can have fun that they couldn't have had so conveniently (if at all) within reality, that's all the purpose you need. The model's functional incompleteness is not a barrier to it being useful in this way because the things it cannot handle can sit outside of it.

Quote
...and why go looking for simulations that you know are absent?

I can't match that part of the question up to what we're discussing.
Title: Re: Quantifying Consciousness
Post by: alancalverd on 07/01/2014 23:41:55
Quote
If you create a virtual world for people to think they live in where they can have fun that they couldn't have had so conveniently (if at all) within reality, that's all the purpose you need. The model's functional incompleteness is not a barrier to it being useful in this way because the things it cannot handle can sit outside of it.

So some malevolent being has created a model of the real universe just so that people can suffer and die, eh?  Or is the real universe even more unpleasant than the world we think we live in?
Title: Re: Quantifying Consciousness
Post by: cheryl j on 08/01/2014 05:23:16
About defining Consciousness

Definitions are generally brief, but somehow must contain the elements that are necessary and sufficient. Consciousness appears to be very complex and multi-faceted, and even leaving aside its unknown aspects, consciousness is difficult to sum up with a definition. The list grew longer and longer when I tried to write down what I thought were key elements: sensation, awareness, self-awareness, memory, intelligence, learning, creativity, problem solving, choice or volition, emotion, integrated information, symbols, qualia, attention switching, and possessing Theory of Mind - that is, the ability to imagine or attribute the same qualities to another animal that one believes is also conscious, and adopt their point of view. (The last one might not seem that important, or may be a consequence of the others, but if consciousness developed to foster social functioning, I suggest that empathy or the ability to alter point of view is important.)

Cooper might argue that computers can do many of these things, often better than we can, so consciousness must be something "else". At the same time, it’s hard to conceive of consciousness functioning without, for example, memory. Perhaps memory or intelligence is necessary but not sufficient, the same way ability to replicate, or respond to stimuli, is part of the definition of life but not enough.

I have a strange early childhood memory and I don't know how accurate it is. But I seem to remember waking up from naps in my crib, and at first being only aware of whatever my eyes were looking at - the pattern on the curtains, the light from the window, as if that were the alpha and omega of reality for that moment, and then very, very slowly becoming aware of myself as well. Sometimes even now after a deep sleep it is still a little like that, but the transition seemed much longer when I was little. It is the closest thing I can imagine to some kind of consciousness without a sense of self. I swear I remember feeling amazed at the whole "waking up" process.

Babies and young children often resist being put down for naps or going to bed at night. I wonder if it ever occurred to baby-docs like Dr. Spock that they might find the whole "sleep" thing - losing consciousness for several hours, popping in and out of reality - a little weird and frightening once a certain level of self-awareness develops.
 
Another thing that would happen later in childhood was "zoning out", where I would just sit and stare at something for five minutes or so (I'm not sure how long it lasted). An adult would say "Quit daydreaming!" which puzzled me because I was never imagining or thinking about anything. I was blank. Maybe it was what they call a micro-sleep.

As a small child, I can also remember thinking it was odd that I was inside me, and other people were inside themselves, and wondering what it would be like to be inside someone else instead of me - my mom, my dad, my sister, my best friend - but I could never know, because I was stuck inside me, and evidently that was just how it worked. I remember thinking that definitely around age four.

But I digress. My point is that both objectively and subjectively, there appear to be levels or degrees of consciousness. How do you define something that is not a single entity but occurs across a broad spectrum? Neurologists often say “More is different” but AI people seem to say, “No, different is different” and nothing “emerges.” And what’s missing is some key element that turns information into active experience. I can’t decide.

There's probably a method of creating definitions that editors of dictionaries use, combining agreed-upon meanings and documented descriptions. How thoroughly does a definition have to "explain" how something works? Do we want a definition that best reflects the end product, the holistic sum of conscious experience, or do we want a definition of consciousness that includes the vital steps that produce it, including ones that occur below our conscious awareness of them? (My thalamus or cingulate cortex is essential to my consciousness, or, as Don now says, the collapse of the wave function in choosing my brain states, but I have no conscious awareness of any of them the way I am conscious of my big toe.)

Even the best definitions appear to always be lacking in some way. The criteria for life founder on the rocks of reality, not just with viruses but with frogs that freeze solid and do nothing for several months, and seeds that go undisturbed for thousands of years (found in pyramids) yet can still germinate. What's more, there is no chemical process that occurs in living things that cannot, under the right conditions, occur outside them. The biologist says the whole is greater than the sum of its parts, but AI says, show me the links from the parts to the whole, and I might believe it.

Although I would include volition or will in my criteria for consciousness, I'm not sure if it matters if will is "free" as in acausal, or arbitrary, or if will results from responding to environmental stimuli, information obtained through experience and stored in memory, recognizing internal needs, and then forming a response that optimizes the organism's state in some way. If one substituted "flexibility of response" for "will," that'd be fine by me. I don't find the idea that free will might simply be arbitrariness necessarily a contradiction. From an evolutionary standpoint, it actually makes sense that the brain just rolls the dice sometimes, shuffles the deck now and then, to generate new potentially useful strategies or experiences that lead to them. Nature essentially does that in genetics through sexual reproduction. Why wouldn't it do that elsewhere?

Early working definitions always seem to be somewhat functional, and aren't always true backwards and forwards. In other words, "Something that is conscious must be able to do XYZ, but something that does XYZ may not be conscious." Is that sort of definition "a start", or a failure?
Title: Re: Quantifying Consciousness
Post by: Ethos_ on 08/01/2014 11:41:00
About defining Consciousness


Early working definitions always seem to be somewhat functional, and aren't always true backwards and forwards. In other words, "Something that is conscious must be able to do XYZ, but something that does XYZ may not be conscious." Is that sort of definition "a start", or a failure?
Excellent reading Cheryl, totally unlike most of Don's rants. As you have so eloquently pointed out, defining consciousness requires much more than simple one line phrases.
Title: Re: Quantifying Consciousness
Post by: alancalverd on 08/01/2014 17:32:26
I can accept that it might be a portmanteau word for a whole lot of defined functions, in which case quantifying it becomes an exercise in quantifying its components, all of which seem to have observable and therefore quantifiable attributes.

The philosopher's weasel word is "plus something else" which either puts the subject neatly out of the range of science, or is pure mystification for its own sake.

Recognising self in others is a characteristic of pretty much every living cell or assembly thereof, and the more we delve into immunity and tissue rejection, the more it appears to be a consequence of "simple" chemistry (big molecules, admittedly, but with very few elements).

A noncommutative definition won't wash with me, I'm afraid. If XYZ is a necessary condition of A, I'll need a damn good reason why it is not a sufficient one. All those I have seen advanced so far have been either a reflection of human vanity that did not withstand observation of other living things (or even hypothetical machines) , or mystical fairydust.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 08/01/2014 19:05:17
So some malevolent being has created a model of the real universe just so that people can suffer and die, eh?  Or is the real universe even more unpleasant than the world we think we live in?

Humans in a real universe who want to play safe games in a virtual universe where they can risk death without actually risking death at all could find it a lot more fun than a real universe in which they are not prepared to take any risks at all. We have already decided that children are not allowed to live in the real world and must be brought up in padded cells, and as we gain the ability to live for thousands of years, this business of imprisoning people for their own safety will inevitably be extended to adults as well.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 08/01/2014 19:48:23
About defining Consciousness

Definitions are generally brief, but somehow must contain the elements that are necessary and sufficient. Consciousness appears to be very complex and multi-faceted, and even leaving aside its unknown aspects, consciousness is difficult to sum up with a definition. The list grew longer and longer when I tried to write down what I thought were key elements: sensation, awareness, self-awareness, memory, intelligence, learning, creativity, problem solving, choice or volition, emotion, integrated information, symbols, qualia, attention switching, and possessing Theory of Mind - that is, the ability to imagine or attribute the same qualities to another animal that one believes is also conscious, and adopt their point of view. (The last one might not seem that important, or may be a consequence of the others, but if consciousness developed to foster social functioning, I suggest that empathy or the ability to alter point of view is important.)

Sensation - yes.

Awareness - no. It may involve sentience in some systems, but that should not be lumped in with other things in order to bring them into the definition of consciousness. A security light which comes on when it detects a cat walking past is "aware" of something warm moving past its sensor, but it is not sentient. Add sentience to that and you have awareness in the sense of detection plus sentience, but they are two distinct things.

Self-awareness - no. This just comes out of a system having enough intelligence to identify itself.

Memory - no. You are not conscious of your memories until they are recalled, and then they are run through your head in much the same way as new input.

Intelligence - no. When hard thinking is done, it is done with the processor that appears to be sentient rather than being done in the background by an automated system, but the same intelligent processing can be done without sentience.

Learning - no. Learning is just data and algorithm collection/development.

Creativity - partly. When judging the artistic merits of something, that depends a lot on feelings (qualia), though again that is just sentience. Inventions of the non-artistic variety do not require sentience to help guide them.

Problem solving - no. Mechanical.

Choice or volition - no. There is no choice.

Emotion - sort of: it involves qualia being generated, but that again comes under sentience.

Integrated information - no.

Symbols - no.

Qualia - yes. [Key part of sentience.]

Attention switching - no.

Possessing Theory of Mind - no.

Consciousness is sentience. Everything else that you might want to bring into consciousness is just something that doesn't involve consciousness itself being tied to sentience.

Quote
Cooper might argue that computers can do many of these things, often better than we can, so consciousness must be something "else".

The "else" is sentience.

Quote
At the same time, it’s hard to conceive of consciousness functioning without, for example, memory.

A person with no memory (such people do exist) lacks nothing in terms of consciousness. They merely have a lack of internal ideas to reload into their consciousness and have to make do with new input which they will be able to experience before forgetting it.

Quote
Perhaps memory or intelligence is necessary but not sufficient, ...

Neither are necessary.

Quote
... the same way ability to replicate, or respond to stimuli, is part of the definition of life but not enough.

The divide between chemistry and life is arbitrary. It's not unlike the point at which a computer operating system becomes capable of modifying and saving itself without needing external software to develop it - there is nothing that happens at this point that requires two different words for "software" making a distinction between the two cases.

Quote
But I digress. My point is that both objectively and subjectively, there appear to be levels or degrees of consciousness. How do you define something that is not a single entity but occurs across a broad spectrum? Neurologists often say “More is different” but AI people seem to say, “No, different is different” and nothing “emerges.” And what’s missing is some key element that turns information into active experience. I can’t decide.

There is either sentience or there is not. If you subtract sentience from something, whatever remains is not part of consciousness and has no place in its definition.
Title: Re: Quantifying Consciousness
Post by: alancalverd on 08/01/2014 22:23:30
Not a lot of progress there, because sentience is equally undefined (except possibly as consciousness). If you want your security light to be sentient, presumably you want it to decide whether the moving target is a threat, based on previous knowledge of the general characteristics of a threat, or the absence of characteristics of a friend. Either way you are simply adding learning and a statistical algorithm, so you have to look at something you call a sentient sentinel and ask how it (or he) acts and thinks to determine the intentions of an approaching object. Then I guess you would distinguish between a human that makes some kind of instinctive guess and a machine that sticks to rigid or neural rules. But the problem then becomes that you are defining sentience or consciousness as nothing more than fallibility.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 09/01/2014 19:46:14
Not a lot of progress there, because sentience is equally undefined (except possibly as consciousness).

That is considerable progress over an analysis which brings all manner of stuff that's separate from consciousness into consciousness in the way that Don does. He doesn't want to do reductionism at all, while I want to take reductionism as far as it can go. Doing it half and half doesn't get you anywhere.

Quote
If you want your security light to be sentient, presumably you want it to decide whether the moving target is a threat, based on previous knowledge of the general characteristics of a threat, or the absence of characteristics of a friend. Either way you are simply adding learning and a statistical algorithm, so you have to look at something you call a sentient sentinel and ask how it (or he) acts and thinks to determine the intentions of an approaching object. Then I guess you would distinguish between a human that makes some kind of instinctive guess and a machine that sticks to rigid or neural rules. But the problem then becomes that you are defining sentience or consciousness as nothing more than fallibility.

A sentient equivalent would run the same computations and behave the same way as the non-sentient system - there's no reason why a sentient machine should have to guess instead of calculating. The difference is that it could be designed to feel scared when the cat first appeared, then relieved when it determines that it's only a moggy and not a burglar. There appears to be no advantage in sentience being involved though, and we can't model its involvement in any way that makes it useful. We know that the sentience we think we have could be entirely fake, no matter how well we are fooled into thinking it's real: if a simulated person in a wholly virtual world can be fooled into thinking simulated sentience is real, we can be fooled too. Sentience/consciousness appears to be a fake phenomenon: we are all zombies.
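
Here's a minimal sketch of what I mean - invented names, purely illustrative. The same threat calculation runs whether or not the "feeling" states are set, and setting them changes nothing about the behaviour, which is exactly the problem:

Code:
def assess(target, sentient=False):
    # The "feelings" are set but never read by the decision logic,
    # so behaviour is identical with or without them.
    fear = sentient and target["moving"]                    # "scared" when something appears
    is_threat = target["moving"] and target["size"] > 0.5   # the actual computation
    relief = sentient and not is_threat                     # "relieved" when it's only a moggy
    return is_threat                                        # fear and relief play no causal role

cat = {"moving": True, "size": 0.2}
print(assess(cat), assess(cat, sentient=True))              # same answer either way

Until someone can show a version where the feelings do causal work that the plain calculation cannot, the zombie conclusion stands.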

It's hard to accept that, of course, but that's where reductionism takes us. That is why idealism has so much appeal (where matter only exists in thought and where if we're fixated on matter we're looking at the wrong thing - reductionism applied to something that doesn't really exist and which ignores the unseen things that really exist could potentially generate wrong answers). A machine which generates fictions about being sentient merely produces such data mechanically, so it's no different from the kind of data you find written in books - a made-up character in a story who travels about on flying carpets and who slays mythical creatures would be no less sentient than a real person. If you write a story about someone being tortured to death, you would be causing as much suffering by doing so as if you tortured someone to death for real. The amount of suffering involved would be zero though in both cases. We feel as if we are trapped inside the heads of individual apes and that we look out on the world through their eyes, but if there is no sentience there is no self in there, so we cannot be trapped any more than the imagined self of a character in a book is trapped in that fictional body within a story - we cannot be trapped in anything because we don't exist. But despite not existing, we persist in thinking that we do, and that's quite some trick. That is why I take idealism seriously - reductionism leads us to a place which denies our very existence, and few of us will ever accept that end point. The mechanics of thought which we have discovered suggest that cause-and-effect mechanisms are still heavily involved in some way even within idealism, but there may be some key, radical difference which allows everything to be built upon sentience.
Title: Re: Quantifying Consciousness
Post by: cheryl j on 10/01/2014 02:44:39


There is either sentience or there is not. If you subtract sentience from something, whatever remains is not part of consciousness and has no place in its definition.

I think I see your point, but it’s hard to conceive of consciousness entirely separate from what one is conscious of. If you stripped consciousness of all of the processes “associated with” it, like intelligence or memory, looking for that mysterious essence of sentience, perhaps you would end up with absolutely nothing there. You can chip away at the concept, by saying consciousness can exist without this or without that, but if you remove all external sensory information, all internal stimuli, block access to memories, what is there to be sentient of? How does sentience exist in some pure, isolated state?

Either way, the fact that other systems – computers – can perform these functions but aren’t conscious doesn’t mean consciousness doesn’t require them (whether we want to include any requirements in our definition or not).
Take memory for example, one might not need long term memory for consciousness, but I’d think you’d at the very least have to maintain something in working memory long enough to be sentient of it. You would need enough short term or working memory to connect one event to another in any meaningful way. It’s hard to imagine conscious experience as a series of instantly experienced and instantly forgotten snapshots of the world or even of internal sensation, instead of the moving-picture-like stream of consciousness we are accustomed to. If every time I see a chair, I am seeing it for the first time with no memory of prior associations, my awareness of it would probably be very photo-detector-like. Something is there or not there, with no significance or meaning attached to it, and probably no ability to generate any emotional response. Is that collection of parallel and perpendicular lines in my visual field something good or bad for me? With no prior associations, and no potential to create new ones, the chair remains parallel and perpendicular lines in my visual field.

People who lose the ability to lay down new long term memory, or who have short term memory deficits, did possess those capacities at one time. I would be surprised if a human born without any capacity for storing memory, or learning, would still develop consciousness or a sense of self.

Maybe one can’t point to the smallest component of the brain – a neuron or feedback loop – that is still capable of suffering, the way we “suffer.” What I do question is the criteria – when is a particular function of a component of a system “enough like” the display of that particular function in the system as a whole? With the function of movement, most people accept the explanatory link between sliding actin and myosin filaments inside muscle cells, the contraction of a muscle cell, the resulting shortening of muscles, and the locomotion of the entire body or movement of parts. The movement of all of those things is considered “enough alike” and the jump from one level to the next isn’t questioned. Nor does it bother anyone that if there are disruptions in the quantity, arrangement or timing of things that move, you may not get the desired end result. (A heart muscle in V-fib is useless as a pump.) But people see sensation in cells as being too qualitatively different (too mechanical) from sentience in the brain. And they also balk at the idea of any “emergent properties” related to quantity, arrangement and timing. Why? I’m not saying sensation is or isn’t enough like sentience, but what is the qualitative demarcation?

Qualia are connected to consciousness; some people even define consciousness as the ability to experience qualia. It does seem evident, both subjectively and neurologically, that qualia occur where consciousness occurs. Two good examples are vision and pain. Blindsight involves the ability to avoid obstacles, identify objects and even track movement without experiencing the qualia of vision. A person with blindsight feels like they are making a wild guess, but it’s an accurate one nonetheless. (Interestingly, Ramachandran says patients can’t seem to use the information obtained through the more primitive, blindsight pathway to make choices.)

There is also no pain qualia associated with the reflex arc of jerking your hand away from a hot element – nerve impulses are transmitted from a heat receptor, through a sensory nerve to the spinal cord, and back out through a motor nerve to the muscles in the arm. A “CC” is also sent to the brain, resulting in the experience of pain, but it occurs after you have already moved your hand. So what is the point of the pain, if the body has a fully functioning and quite effective “zombie” program that prevents further damage to the skin from the hot element? Is pain from the CC message to the brain just an epiphenomenon of consciousness, or does it accomplish something that for some reason the zombie program can’t? It would appear to be a future behaviour reinforcer with a dimmer switch that says, in the case of mild pain, “try to avoid that next time,” or with severe pain, “Don’t ever do that again for any reason!” Perhaps the degree of pain affects, too, whether that experience is even stored as a long term memory.

Qualia may seem subjective, private and ethereal, but they are not without some very specific neural correlates. Ramachandran discusses two patients who laugh when they should experience pain. One lady would interpret pain as ticklish, even when stabbed with a needle. He says: “A CT scan revealed that one of the nerve pathways in her brain was damaged. Even though we think of pain as a single sensation, there are in fact several layers to it. The sensation of pain is initially processed in a small structure called the insula (‘island’) which is folded deep beneath the temporal lobe on each side of the brain. From the insula the pain information is then relayed to the anterior cingulate in the frontal lobes. It is here you feel the actual unpleasantness – the agony and the awfulness of pain – along with the expectation of danger. If this pathway is cut, as it was in Dorothy and presumably in Mihkey (his other patient), the insula continues to provide the basic sensation of pain, but it doesn’t lead to the expected awfulness and agony. The anterior cingulate doesn’t get the message. It says in effect ‘all’s okay.’ So here we have two key ingredients for laughter: a palpable and imminent indication that alarm is warranted (from the insula) followed by a ‘no big whoop’ from the silence of the anterior cingulate. So the patient laughs uncontrollably.” (Ramachandran suggests a similar thing happens in the brain when a snake turns out to be a rubber toy, or when we see someone slip on a banana peel but not get hurt – it’s funny.)

Like the two visual pathways, the zombie one and the conscious one, there may be two aversion pathways, but only the one engaging the anterior cingulate generates qualia. Why? I don’t know, but I would expect one pathway accomplishes something that the other can’t, and my guess would be it involves modifying future behaviour and involves generating a multitude of meaningful associations, between that event and similar scenarios, that object and similar objects, etc.

I guess one could still argue that a machine could do all of this without sentience, that it could do it some other way. But that doesn’t mean it is not the way animals like us do it. At any rate, I’d argue there is more to be gained by looking at the neural pathways or areas of the brain closely associated with consciousness and asking “What’s different about them?” than by simply assuming that nothing is different, and that consciousness serves no function, or doesn’t exist.

Title: Re: Quantifying Consciousness
Post by: Ethos_ on 10/01/2014 05:00:59
As a very young child, I developed double pneumonia and almost died. Though I was only several months old, I remember the toys that hung above my crib and the colors of the flowers that were directly outside my bedroom window. These are the first memories I can recollect. The next group of memories I can recall is the terrible taste in my mouth and the severe tightness in my chest, resulting from the infection.

When I try to grasp what level of consciousness may have been active during this period of my life, a few interesting observations come to mind.

The shapes and colors of the toys in my crib seemed to be just a part of an abstract whole. By this I mean, it hadn't yet become a conscious fact to me that they were there and I was here in the crib. In fact, the "I" part of that equation hadn't become part of the whole at that point in my history. And the colorful flowers outside the window seemed more real and alive than anything in my room including myself. Not until the memory of the terrible taste and pain in my chest was "I" aware of "the self".

Consciousness can be defined in at least two different ways: first, being conscious of one's surroundings, and secondly, being conscious of the self as a sentient being. In recent posts here, questions about computer consciousness have been explored. And while a computer is familiar with data - 1s and 0s - it might be said that it is conscious of those two digits, but not sentient or self-aware.

Being conscious of details is different from being self-aware. This is why I define the precise moment when consciousness, or sentient awareness, surfaces as:

The moment when the "I" moves beyond its surroundings and becomes acquainted with the self.

Before that moment, even the "I" does not recognize its peculiar existence. Until that moment arrives, the "I" and its surroundings are a composite of existence. Only when the "I" moves beyond that point and becomes separate from its environment and individual to it do we find the birth of sentience. But this separateness is only in administration. While I contend that sentience is a separate and evolved function of the brain, its origin and process still remain there as a physical process.

It's a process, started in the brain, evolved within the brain, and completed there as well.

Title: Re: Quantifying Consciousness
Post by: alancalverd on 10/01/2014 17:45:10
That is why I take idealism seriously - reductionism leads us to a place which denies our very existence, and few of us will ever accept that end point.

I disagree. An object exists to the extent that it affects other objects. If you can pick up a stone, or even get in the way of a photon, you exist, and this must be the starting axiom of any meaningful discussion. If your argument contradicts its axioms, it's wrong.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 10/01/2014 19:33:31
That is why I take idealism seriously - reductionism leads us to a place which denies our very existence, and few of us will ever accept that end point.

I disagree. An object exists to the extent that it affects other objects. If you can pick up a stone, or even get in the way of a photon, you exist, and this must be the starting axiom of any meaningful discussion. If your argument contradicts its axioms, it's wrong.

A book exists, and so do the words printed in it, but there is no link between any sentience generated in the material of the book+ink and the meaning of the sentences, which may be describing events relating to sentience in imaginary characters. It's the same with a computer holding an electronic text (identical to the text in the book) - the material of the machine may be sentient, but there is no relationship between what it is feeling and the sentience described in the text.

If a machine is writing its own texts, little bits of text making up fictions about sentience supposedly being experienced in the machine, again there is no relationship between those bits of text and anything that might be being experienced by any sentient material in the machine - the generation of the texts simply maps fictions to certain inputs according to rules which have nothing to do with any actual sentience in the machine. Unless we can describe a system in which the texts are actually related to the sentience in the material of the thing holding or generating those texts, there is a total disconnect between them: the sentience of the machine is no more related to the sentience asserted in the texts than to the sentience in any rock in another galaxy. A machine exists and it is churning out fictions about sentience, but the sentient thing that is supposedly experiencing the qualia described in these fictions does not exist.
Title: Re: Quantifying Consciousness
Post by: alancalverd on 10/01/2014 21:48:37
We seem to be drifting from consciousness to sentience with nothing to anchor the meaning of either, but it now seems that you consider sentience to be a property of things that are not machines. So how would an intelligent alien know what is a machine and what it's not?
Title: Re: Quantifying Consciousness
Post by: David Cooper on 10/01/2014 22:36:24
There is either sentience or there is not. If you subtract sentience from something, whatever remains is not part of consciousness and has no place in its definition.

I think I see your point, but it’s hard to conceive of consciousness entirely separate from what one is conscious of.

You are conscious of qualia, and those could just be states of the thing that is sentient. Consciousness of anything beyond qualia is most likely just an illusion. I could describe to you how a car works, starting with explosions causing an expansion of gas which leads to a team of pistons pushing something round, then that rotation being transmitted to the wheels that push against the road. At the end of the process, you imagine that you consciously understand how a car works, but do you really understand it consciously? When each part of the explanation is processed, you feel as if you've consciously understood that part, and once the chain of components of the whole explanation has been processed, you feel as if you've consciously understood the whole chain, but already most of the chain is not in your immediate thoughts. You have to go back and forth along the chain of events thinking about individual interactions in order to be consciously aware of how they work, and even then there's the question as to how much of an individual thought you can be conscious of at a time. Sometimes the thing you think you're consciously understanding drifts right out of your mind and you're left feeling that you consciously understand it even though the details of the thing you feel you're consciously understanding have completely left the stage.

Quote
If you stripped consciousness of all of the processes “associated with” it, like intelligence or memory, looking for that mysterious essence of sentience, perhaps you would end up with absolutely nothing there. You can chip away at the concept, by saying consciousness can exist without this or without that, but if you remove all external sensory information, all internal stimuli, block access to memories, what is there to be sentient of? How does sentience exist in some pure, isolated state?

You would end up not with nothing, but with a sentient thing experiencing qualia. The things being chipped away are merely the things which induce those qualia in the sentient thing.

Quote
Either way, the fact that other systems – computers – can perform these functions but aren’t conscious doesn’t mean consciousness doesn’t require them (whether we want to include any requirements in our definition or not).

If you decide that consciousness does require them, you end up with a problem when looking at a system which lacks one feature but which has another, such as a person who has no memory but can be made to suffer.

Quote
Take memory for example, one might not need long term memory for consciousness, but I’d think you’d at the very least have to maintain something in working memory long enough to be sentient of it.

You can torture a person who has no memory just by relying on live inputs.

Quote
You would need enough short term or working memory to connect one event to another in any meaningful way.

Pain doesn't need anything meaningful - when it's at its worst, it is just pain and nothing else.

Quote
It’s hard to imagine conscious experience as a series of instantly experienced and instantly forgotten snapshots of the world or even of internal sensation, instead of the moving-picture-like stream of consciousness we are accustomed to. If every time I see a chair, I am seeing it for the first time with no memory of prior associations, my awareness of it would probably be very photo-detector-like. Something is there or not there, with no significance or meaning attached to it, and probably no ability to generate any emotional response. Is that collection of parallel and perpendicular lines in my visual field something good or bad for me? With no prior associations, and no potential to create new ones, the chair remains parallel and perpendicular lines in my visual field.

The only difference memory makes is that feelings can be triggered by memories as well as by live inputs, or a combination of both. In the same way, the only difference intelligence makes is that feelings can be triggered by calculated ideas as well as by the ideas which were already there before the calculations were done.

Quote
People who lose the ability to lay down new long term memory, or who have short term memory deficits, did possess those capacities at one time. I would be surprised if a human born without any capacity for storing memory, or learning, would still develop consciousness or a sense of self.

They don't need memories to teach them how to experience qualia. Memories merely have a role in determining which qualia they are made to feel. A sense of self is something that comes out of intelligence and a feeling of understanding that you exist (even if you don't) and that you have identified something with yourself.

Quote
Maybe one can’t point to the smallest component of the brain – a neuron or feedback loop – that is still capable of suffering, the way we “suffer.” What I do question is the criteria – when is a particular function of a component of a system “enough like” the display of that particular function in the system as a whole? With the function of movement, most people accept the explanatory link between sliding actin and myosin filaments inside muscle cells, the contraction of a muscle cell, the resulting shortening of muscles, and the locomotion of the entire body or movement of parts. The movement of all of those things is considered “enough alike” and the jump from one level to the next isn’t questioned. Nor does it bother anyone that if there are disruptions in the quantity, arrangement or timing of things that move, you may not get the desired end result. (A heart muscle in V-fib is useless as a pump.) But people see sensation in cells as being too qualitatively different (too mechanical) from sentience in the brain. And they also balk at the idea of any “emergent properties” related to quantity, arrangement and timing. Why? I’m not saying sensation is or isn’t enough like sentience, but what is the qualitative demarcation?

When you get into the small scale detail of how things work, the parts of the brain controlling them are beyond the reach of consciousness. The conscious stuff that goes on is in an overseer which doesn't concern itself with little details, but in the places in the brain where the overseer functions, the small scale action could be directly related to the qualia experienced by the sentience.

Quote
There is also no pain qualia associated with the reflex arc of jerking your hand away from a hot element – nerve impulses are transmitted from a heat receptor, through a sensory nerve to the spinal cord, and back out through a motor nerve to the muscles in the arm. A “CC” is also sent to the brain, resulting in the experience of pain, but it occurs after you have already moved your hand. So what is the point of the pain, if the body has a fully functioning and quite effective “zombie” program that prevents further damage to the skin from the hot element? Is pain from the CC message to the brain just an epiphenomenon of consciousness, or does it accomplish something that for some reason the zombie program can’t? It would appear to be a future behaviour reinforcer with a dimmer switch that says, in the case of mild pain, “try to avoid that next time,” or with severe pain, “Don’t ever do that again for any reason!” Perhaps the degree of pain affects, too, whether that experience is even stored as a long term memory.

If you just had a reaction to move your hand away from something hot without knowing what caused it, you'd be left not knowing what went wrong. The following pain would serve as an explanation, and a warning that damage may have been done. That too could be done non-consciously in a machine, of course, but in us it has to go through the part of the system that appears to have consciousness built into it, so it's presented to that part of the system in the appropriate form.

Quote
Like the two visual pathways, the zombie one and the conscious one, there may be two aversion pathways, but only the one engaging the anterior cingulate generates qualia. Why? I don’t know, but I would expect one pathway accomplishes something that the other can’t, and my guess would be it involves modifying future behaviour and involves generating a multitude of meaningful associations, between that event and similar scenarios, that object and similar objects, etc.

The conscious system is essentially a programmer within the brain. The background systems which lack consciousness are programmed systems (some preprogrammed through instincts) which react just like the non-conscious machines which we build. The point of informing the programmer is to give the programmer the opportunity to modify the automated systems to improve them. If something hot in the kitchen leads to burns, new automated systems can then be set up to make you more cautious in certain locations within the kitchen in case a hot thing occurs there again.

Quote
I guess one could still argue that a machine could do all of this without sentience, that it could do it some other way. But that doesn’t mean it is not the way animals like us do it. At any rate, I’d argue there is more to be gained by looking at the neural pathways or areas of the brain closely associated with consciousness and asking “What’s different about them?” than by simply assuming that nothing is different, and that consciousness serves no function, or doesn’t exist.

There is indeed a lot to be gained by looking at all the things associated with consciousness, but I'm confident that what you'll eventually find is that you are either studying the mechanisms of a zombie or alternatively something virtual which may be set out to look as if it's a zombie in order to hide its connections to something external to the virtual universe. It appears to be impossible to model sentience, and that should make it impossible for us to identify it in the brain, but so long as there's anything going on in there that we don't understand, we'll be able to point at it and hope that sentience somehow happens there and magically manages to interface with the information system of the brain. It's going to be a long wait, and the conclusion at the end will most likely be that we can't know.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 10/01/2014 22:56:56
We seem to be drifting from consciousness to sentience with nothing to anchor the meaning of either,

The only thing you can anchor it to is your own experience of sentience - there's no way to anchor the meaning to mere words without going round in circles.

Quote
... but it now seems that you consider sentience to be a property of things that are not machines.

It could be a property of a machine, but for it to be so it would need to be designed into it in such a way as to enable the sentience to have a useful role in the function of the machine.

Quote
So how would an intelligent alien know what is a machine and what it's not?

You'd better ask him/her/other/it. It's going to depend on whether that intelligent alien is in the same boat as us or if he/she/other/it has access to more information or a different kind of information about the nature of reality.
Title: Re: Quantifying Consciousness
Post by: cheryl j on 11/01/2014 15:32:46


You would end up not with nothing, but with a sentient thing experiencing qualia. The things being chipped away are merely the things which induce those qualia in the sentient thing...



...If you decide that consciousness does require them, you end up with a problem when looking at a system which lacks one feature but which has another, such as a person who has no memory but can be made to suffer.

But I can take even pain out of the system and it is still both sentient and conscious. I could in theory block, one by one, every type of sensory information, but more importantly, I could also just interfere with the specific neural machinery that people like Ramachandran say is needed to generate that particular quale, and as long as I leave you one form, presumably you are still sentient. So imagine you are now “the being that experiences red”. No memory – every blast of red is redness for the first time, and it does not mean stop signs or apples or red lipstick, it’s not good or bad, you can't even miss its absence forlornly. I don’t know if you are a person or other animal, or just a section of brain tissue in a laboratory. You may not know either. Are you still, by our definition, conscious? And if I cruelly decide never to stimulate that nerve pathway in any way that signals red, then what?
Title: Re: Quantifying Consciousness
Post by: alancalverd on 11/01/2014 17:44:01

The only thing you can anchor it to is your own experience of sentience - there's no way to anchor the meaning to mere words without going round in circles.


Pity about that. I can define a cow in such a way that a Martian could recognise a cow and a non-cow, and I can define a colour by example. Even such abstractions as energy and entropy are definable such that we both know what the other is talking about, and when we measure energy or calculate entropy, we both get the same number. But consciousness or sentience seems to defeat the definitive powers of those who discuss it, which makes quantification doubly impossible.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 11/01/2014 20:15:32
But I can take even pain out of the system and it is still both sentient and conscious. I could in theory block, one by one, every type of sensory information, but more importantly, I could also just interfere with the specific neural machinery that people like Ramachandran say is needed to generate that particular quale, and as long as I leave you one form, presumably you are still sentient.

Correct.

Quote
So imagine you are now “the being that experiences red”. No memory – every blast of red is redness for the first time, and it does not mean stop signs or apples or red lipstick, it’s not good or bad, you can't even miss its absence forlornly. I don’t know if you are a person or other animal, or just a section of brain tissue in a laboratory. You may not know either. Are you still, by our definition, conscious?

Yes - any quale will do.

Quote
And if I cruelly decide never to stimulate that nerve pathway in any way that signals red, then what?

There would be nothing cruel about that, but if you never trigger it there might as well be no connection.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 11/01/2014 20:19:42
Pity about that. I can define a cow in such a way that a Martian could recognise a cow and a non-cow, and I can define a colour by example.

It's easy to describe something external, including an external colour, but you can't point to an internal quale representing a colour by example, as you doubtless know.

Quote
Even such abstractions as energy and entropy are definable such that we both know what the other is talking about, and when we measure energy or calculate entropy, we both get the same number. But consciousness or sentience seems to defeat the definitive powers of those who discuss it, which makes quantification doubly impossible.

The whole field is deeply unrewarding, with every little bit of progress taking us further away from where we want it to go. It's like jumping into a black hole to study what happens inside it.
Title: Re: Quantifying Consciousness
Post by: cheryl j on 12/01/2014 00:50:16
Pity about that. I can define a cow in such a way that a Martian could recognise a cow and a non-cow,
I’m not sure that is actually true. The Martian, if his consciousness is like ours, might only be able to recognize non-cows.
Ramachandran discusses a patient, John, a former fighter pilot, who had a stroke when a blood clot related to surgery for appendicitis clogged one of his cerebral arteries. He lost the ability to recognize familiar objects. He couldn’t recognize his wife’s face, but he recognized her voice. He couldn’t recognize his own face, but when looking in a mirror, said it must be him, because it moved when he did. Although he couldn’t recognize objects, he could deal with their spatial extent, dimensions, and their movement. He could trim the hedge in his yard and make it nice and even. When shown a picture of a carrot, he said “It’s a long thing with a tuft at the end – a paint brush?” Nevertheless, he wrote this description of a carrot:
“A carrot is a root vegetable cultivated and eaten as human consumption worldwide. Grown from seed as an annual crop, the carrot produces long thin leaves growing from a root head. This is deep growing and large in comparison with the leaf growth, sometimes gaining a length of twelve inches under a leaf top of similar height when grown in good soil. Carrots may be eaten raw or cooked and can harvested during any size or state of growth. The general shape of a carrot is an elongated cone, and it’s color ranges from red and yellow.”

John’s paragraph is impressive. It shows his brain has a lot of information associated with carrots. What’s more, some of that information is related to visual aspects – size, color, shape. But that information didn’t seem to help him recognize a carrot. I would have thought that once shown a carrot, with someone saying “this is the thing you just described,” that would be all it might take. But Ramachandran said he never regained that ability to connect the two things. Ramachandran’s story about John goes on for several pages and gets even weirder.



Quote

and I can define a colour by example.


That’s the trouble with qualia. Even if a quale is just a “symbol” for something else – red is the symbol in our brains for certain wavelengths in a spectrum of electromagnetic radiation – there is no other symbol that exactly replicates that symbol or what it symbolizes. Sometimes I wonder if the problem with qualia isn’t sentience or consciousness but the fact that there is no way to translate red into words, another symbol, that exactly describes or reproduces red in another brain. On the other hand, if emotions are qualia, and a person is angry, he can insult you and generate that quale inside of you, or describe some injustice done to him that makes you feel angry as well. I can make your neurons do what mine are doing, or something pretty close, when it comes to anger but not red.

I must admit, though, I get very confused when it comes to brains and computers about the difference between input-output functions versus copying or translating information. I often think I am confusing one with the other. I would appreciate any help.
Title: Re: Quantifying Consciousness
Post by: alancalverd on 12/01/2014 02:13:02
For most of us, visual information and recognition dominates our senses because the input bandwidth is enormous and tightly correlated with our movements, but I can imagine a system where the connection between the visual processor and learned information gets broken. It doesn't surprise me that the connection is fragile since the visual processor has to act very quickly and not retain previous data (otherwise moving objects would appear blurred) so there must be some filter between sight and medium- or long-term memory.

Not sure how this relates to consciousness, though. Presumably John was able to locate and pick up - and eat - a carrot, so in at least one sense he was conscious of it and his relationship to it.  Given his textbook-accurate description, could he draw a carrot, I wonder? Fascinating case.     
Title: Re: Quantifying Consciousness
Post by: David Cooper on 12/01/2014 22:25:29
On the other hand, if emotions are qualia, and a person is angry, he can insult you and generate that qualia inside of you, or describe some injustice done to him that makes you feel angry as well. I can make your neurons do what mine are doing, or something pretty close, when it comes to anger but not red.

We don't necessarily feel anger the same way any more than we all see red the same way. It's likely to be very different between us and aliens though, even if they have a range of emotions which matches up with ours perfectly.

Quote
I must admit, though,  I get very confused when it comes to brains and computers, about the difference between input-output functions versus copying or translating information. I often think I am confusing one with other. I would appreciate any help.

Here are some ideas you may be able to build upon:-

Input and output functions simply collect values from external sources or send numbers out to external recipients. If you want to control a motor or muscle, you have to send values to it which will make it act the way you want it to act (so you either have to speak the language of the motor/muscle or have something else in between that translates the signal), and if you want to get input from a sensor like a microphone or an ear, you just take whatever values you get from it and then have to try to interpret them. Input and output is like talking to someone who speaks another language - you have to learn their language in order to communicate with them (or they have to learn yours, or you need an interpreter in between who can convert from one language to the other in either direction).

Copying a number is done by reading from one memory address and writing the value you've just read to another memory address. Translating a number may involve reading it from one address, looking up that value in a table to find a number to replace it with, and then writing that new number back to memory somewhere - this could happen when turning the symbol "9" (often represented by the value 57) into the value 9 in order to use it in an arithmetical calculation. These processes are only meaningful if the numbers represent something. Sometimes their meaning is the number itself (although there are different ways of mapping numbers to bytes too: e.g. 255 can either mean 255 or -1), but more often the meaning is something else which is mapped to that value, in which case there has to be some way of storing the details of that mapping and there have to be algorithms available to run which can process the data usefully. The meanings of some data may only be recognised by a piece of program code that processes it, meaning that the only mapping details that exist are tied up in the algorithm.
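To make that concrete, here's a little sketch in Python (the language choice is mine, purely for illustration - nothing here is tied to any real machine):

    # Copying: read a value from one "address" and write it to another.
    memory = [0] * 16
    memory[0] = 57          # the character code for the symbol "9"
    memory[1] = memory[0]   # a pure copy - no interpretation involved

    # Translating: map the value onto a different one. Here the "table"
    # is just arithmetic: character code -> numeric value (57 - 48 = 9).
    digit_value = memory[0] - 48

    # The same byte can mean different things under different mappings:
    byte = 255
    as_unsigned = byte                              # 255
    as_signed = byte - 256 if byte > 127 else byte  # -1
    print(digit_value, as_unsigned, as_signed)      # prints: 9 255 -1

The copy involves no understanding of what the value means, while the translation only works because the mapping (character codes to numbers, or unsigned to signed) is built into the code that performs it.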

If we imagine a robot with an eye with a single pixel, the colour value could be sent from the eye to the robot's brain as a series of three bytes, the first representing the colour red, the second green and the third blue. This would already be representing the colour detected by the pixel in the eye in a form that the brain of the computer wants to work with, so there is no translation step needed, unless it feels the need to change the order. A different camera could use just two bytes for the colour, with 5 bits for each colour and one bit not used - the robot's brain would perhaps need to rearrange these sets of 5 bits into three different bytes before working with them. Perhaps this robot eats plants in order to power itself, so it might be programmed to hunt for green things to eat. It can send values to its motors until the input has the greenest value and then go forwards while chomping. If the (adjusted) colour value from the camera is always stored in the same place, perhaps a feeling is generated in the material of the piece of memory holding that value, perhaps related to the magnetic field - there could be a distinct feeling felt there for each individual colour, so we could have a sentient robot, but this sentient robot cannot know that it is sentient because it can't read the feeling - all it can do is read the colour value back from memory, not able to tell if anything has been felt at all, never mind what kind of feeling it might be.
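To illustrate the rearranging step, here's a hedged sketch of the two-byte camera case (the exact 5-5-5 layout and the sample readings are my invention, just to make the example concrete - I've assumed the unused bit is the top one):

    # Unpack a 16-bit 5-5-5 colour (top bit unused, then 5 bits each
    # of red, green and blue) into three separate byte values.
    def unpack_555(pixel):
        r5 = (pixel >> 10) & 0x1F
        g5 = (pixel >> 5) & 0x1F
        b5 = pixel & 0x1F
        # Scale each 5-bit value (0-31) up to a full byte (0-255).
        return (r5 * 255 // 31, g5 * 255 // 31, b5 * 255 // 31)

    # The plant-hunting rule: steer towards whatever looks greenest.
    def greenness(rgb):
        r, g, b = rgb
        return g - max(r, b)

    readings = {"left": unpack_555(0x03E0),    # pure green
                "ahead": unpack_555(0x1CE7)}   # a dull grey
    best = max(readings, key=lambda k: greenness(readings[k]))
    print(best)   # "left"

Notice that the program only ever reads the colour value back as a number - nothing in it can detect whether any feeling was generated in the memory holding that value, which is exactly the disconnect described above.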
Title: Re: Quantifying Consciousness
Post by: cheryl j on 13/01/2014 16:02:53


We don't necessarily feel anger the same way any more than we all see red the same way.

Possibly, but I would be surprised if it were true. One of the things that is more developed in humans is learning and the ability to predict the behaviour or intentions of others. A lot of research attributes this to mirror neurons that fire not just when you perform an action but when you  see another person performing an action. And it does have to be a person or animal you believe is conscious – mirror neurons don’t, for example, fire in response to the up and down movement of a basketball. (I don't know about robots) Mirror neurons may be responsible for mimicking, learning complex skills, learning to talk. They may also have the evolutionary advantage of allowing one to learn from the misadventures of others without incurring the same risks and injuries oneself, but for this learning to be as effective, I may have to, as Bill Clinton used to say, “feel your pain.”

Mirror neurons are considered the source of empathy and allow one to predict another person’s next action or state of mind (is this person going to share their food with me or club me over the head?). If our emotional experiences were vastly different, I don’t think any of this would work very well or as quickly. I could learn that every time you seemed happy, I was about to be whacked on the head, but it might take a few times. Of course, as you said, all that really matters is that our emotional states correlate, that our behavior or display matches up, not that our subjective experience be the same. But given the similarities in brain structure and biochemistry, it’s hard to think of a good reason why the subjective experience really should be different.

I sometimes wonder about the qualia red/green inversion question. If our colours were truly inverted or shifted, wouldn’t there be discrepancies in what we thought was the same or different when we compared mixed colors and added and subtracted shades of color?  It seems like an obvious question, so surely someone has done the math.

Another thing about emotion. I always thought about it as a driver, or a positive or negative tag. But last night I was reading about patients with disruptions in connections between the amygdala and other parts of the brain. When there is a disruption between the amygdala and the part of the brain that recognizes faces, patients have the delusion that people close to them are imposters. My mother is not really my mother. She looks and acts the same, but it is not really her. My father is not my father, and my dog has been substituted with another identical dog. When the connections are missing from the amygdala to much larger areas that process sensory information, the patient feels like the entire world does not really exist, nothing is “real.” When it’s a missing connection to parts of the brain associated with the sense of self, patients believe they don’t exist, that they are actually dead. Not just depressed or lacking energy, but a literal belief that they are dead, that their body is a hollow shell, or that they are a ghost.

I suppose one might explain it by saying the association of emotion to objects is strongly conditioned, and when it goes missing they wrongly conclude the object has changed, not their emotional response to it. But that still seems odd – you would think they would just explain it by deciding “I just don’t like or care about my mother as much as I used to,” not “she’s an imposter.”

At any rate, I was kind of intrigued by the association between emotion and our sense of what is real. Of course Ramachandran doesn’t rule out the idea that maybe normal people are the ones with the delusion or illusion that the world or the self is real. But other qualia seem to help us make similar distinctions. Qualia generated by the senses are vivid and non-negotiable, the qualia of dreams less so, and imagined qualia much more fuzzy and malleable.

I wonder if there is anything in AI like mirror neurons in learning, or what simulations of them do.

Title: Re: Quantifying Consciousness
Post by: cheryl j on 13/01/2014 16:08:33
  Given his textbook-accurate description, could he draw a carrot, I wonder? Fascinating case.     

Actually, since John was a very capable artist (even if he couldn’t identify what he was drawing), Ramachandran asked him to draw pictures of various plants from memory – a rose, a daffodil, a lupin, a tulip and an iris. The drawings sort of reflect his factual knowledge but look nothing like the real thing. (The daffodil looks like a mushroom. The rose looks like a pom-pom.) The drawings look like what you might end up with if you tried to tell someone over the phone how to draw something he had never seen before, and coincidentally, Ramachandran refers to the pictures as “Martian flowers.”
Title: Re: Quantifying Consciousness
Post by: cheryl j on 13/01/2014 17:48:01
Actually, I changed my mind a bit about emotions. It occurred to me that I do know people who find anger oddly pleasurable and enjoy a good scrap. For them it's exciting and invigorating (righteous indignation), whereas I usually find it upsetting or wildly frustrating. It jams my circuits.
Title: Re: Quantifying Consciousness
Post by: alancalverd on 13/01/2014 18:54:50
But your physiological response to annoyance or frustration is likely to be the same as everyone else's - adrenalin, sweat, tachycardia....it's really a question of whether you have become habituated to noradrenalin after the action has passed. I was disturbed to find, a few months after being widowed, that I actually enjoyed crying, and I wondered if those people who apparently never recover from such an event actually feel more comfortable being miserable - endorphin addiction? - it certainly accounts for "curry addicts".
Title: Re: Quantifying Consciousness
Post by: David Cooper on 13/01/2014 19:05:49
We don't necessarily feel anger the same way any more than we all see red the same way.

Possibly, but I would be surprised if it were true.

If red and blue qualia can be switched round in one person compared with another, they'd still function normally and be unable to determine whether they're seeing things the same way or differently. The same could occur with the qualia of fear and disgust, and it might even be possible to see colours using sound qualia and to hear things through colour qualia. I'm not sure what would exchange well with an anger quale, but there may be a vast number of spare qualia which we never experience at all, and it could be that no two people experience like qualia at all. It seems more likely to me that we all experience the same qualia in the same ways, for the most part at least. Colour-blind people might give us some reason to think there will be differences though, but it's possible that red-green colour-blind people simply never experience red (or alternatively green) qualia at all and that it's a lack of a quale that results rather than an exchange.

Quote
I wonder if there is anything in AI like mirror neurons in learning, or what simulations of them do.

If you can't make a machine feel anything related to itself, you aren't going to be able to make it feel anything related to anyone it's observing either, but it will certainly try to calculate what they are likely to be feeling based on what it knows about how people feel. If it sees someone being punched in the stomach, it won't feel anything and won't be alarmed, but it will generate a lot of conclusions about potential damage, possible immorality and the possible need to intervene.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 13/01/2014 19:13:02
I forgot to comment on this bit:-

Quote
I sometimes wonder about the qualia red/green inversion question. If our colours were truly inverted or shifted, wouldn’t there be discrepancies in what we thought was the same or different when we compared mixed colors and added and subtracted shades of color?  It seems like an obvious question, so surely someone has done the math.

People have done experiments with "glasses" that invert the image and make people see everything upside down. After a few days they adapt to this and see everything as normal. I'd like to try doing the same thing with switching colours round, and inverting some (or all of them) in terms of brightness. The range of colours and shades available should be the same, though fewer can be expressed in the blue, so just switching red and green round would be best, but we can work fairly well with a reduced range without it being obvious, so it may not be too important. If you brought up a baby wearing such a device it would grow up thinking the colours it sees are absolutely normal, and that white is dark while black is light (if you're reversing the brightnesses). Whether someone could adapt to it well later in life is another issue, and it would be well worth doing the experiment to find out.
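The signal processing for such a device would be trivial - here's a rough per-pixel sketch (Python, purely illustrative; real goggles would presumably do this in hardware at video rate):

    # Swap the red and green channels, then invert every channel,
    # which flips light and dark (and shifts hue as a side effect).
    def transform_pixel(r, g, b):
        r, g = g, r
        return (255 - r, 255 - g, 255 - b)

    def transform_image(pixels):
        # pixels: rows of (r, g, b) tuples
        return [[transform_pixel(*p) for p in row] for row in pixels]

    print(transform_pixel(255, 255, 255))   # white -> (0, 0, 0), black
    print(transform_pixel(255, 0, 0))       # bright red -> (255, 0, 255), magenta

The electronics wouldn't be the hard part; whether an adult brain could rewire itself to treat the transformed feed as normal is exactly what the experiment would test.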
Title: Re: Quantifying Consciousness
Post by: bizerl on 14/01/2014 01:18:35
I'd like to try doing the same thing with switching colours round, and inverting some (or all of them) in terms of brightness. The range of colours and shades available should be the same, though fewer can be expressed in the blue, so just switching red and green round would be best, but we can work fairly well with a reduced range without it being obvious, so it may not be too important. If you brought up a baby wearing such a device it would grow up thinking the colours it sees are absolutely normal, and that white is dark while black is light (if you're reversing the brightnesses). Whether someone could adapt to it well later in life is another issue, and it would be well worth doing the experiment to find out.

Is this similar to what happens when I wear tinted ski goggles and for a while everything looks orange, but after a while I forget I'm wearing them and colours just look "normal"? I can see a spectrum and it doesn't appear to have a tint. Then when I take the goggles off, everything looks extra blue until my "eyes" (or probably more accurately "brain") re-adjust.

I've only skimmed over this thread, I must admit, but it seems that if consciousness were something external, it would be able to interact with other consciousnesses, and that would provide something to observe as evidence. I personally believe that this is not the case, and take the anthropic argument that the neurons and chemicals and what-nots that are all whizzing around doing their job in our heads create the effect of consciousness. The fact that there is a direct effect on our experience when all this grey matter is altered (i.e. with chemicals - drugs - or with direct electrical stimulation) seems to support the idea that what we call "consciousness" is contained within the structure we call "brain".

Sorry if i'm repeating points from earlier, as I said, I've only skimmed this thread.
Title: Re: Quantifying Consciousness
Post by: cheryl j on 14/01/2014 01:57:33
Since additive and subtractive color mixing works differently, I was wondering if you'd end up with discrepancies if people really did have inverted qualia.

With mixing paint, there are several ways to make brown. If you did it one way and I did another, and we had inverted red and green, would we agree on the final color?

This article has some interesting pictures.


http://plato.stanford.edu/entries/qualia-inverted/
Title: Re: Quantifying Consciousness
Post by: Aemilius on 14/01/2014 09:48:54
Would any of you care to define what it is you are trying to measure? A functional definition will suffice for a start, as in "consciousness is that which....."

Consciousness is that which perceives.
Title: Re: Quantifying Consciousness
Post by: cheryl j on 14/01/2014 16:32:22
Francis Crick was lecturing on consciousness at the Salk Institute and a student raised his hand and said "But Professor Crick, you say you are going to lecture on the neural mechanisms of consciousness, and you haven't even bothered to define the word properly."

Crick said "My dear chap, there was never a time in the history of biology when a group of us sat around a table saying 'let's define life first.' We just went out there and found out what it was - a double helix. We leave matters of semantic distinctions to you philosophers."

I agree that not having an adequate definition can be a problem, but it might also be true that good definitions only follow once you actually know more about the phenomena you are interested in. And even if they don't follow, you still end up with a lot of useful knowledge that can be applied to other things and opens doors to other interesting questions that wouldn't have occurred to you before. I would trade a "perfect" definition of life for the discovery of DNA any day.
Title: Re: Quantifying Consciousness
Post by: David Cooper on 14/01/2014 20:02:11
Since additive and subtractive color mixing works differently, I was wondering if you'd end up with discrepancies if people really did have inverted qualia.

It would remain additive, unless it's already subtractive with qualia, in which case it would remain subtractive.
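One way to see why no discrepancies would show up: swapping two channels is a permutation, and a permutation commutes with any per-channel mixing rule, so an inverted observer's mixtures stay perfectly self-consistent. A toy check in Python (illustrative only - the clamped addition is just one crude mixing model):

    # Additive mixing, clamped to the usual 0-255 range.
    def mix(c1, c2):
        return tuple(min(255, a + b) for a, b in zip(c1, c2))

    # The hypothetical red/green inversion.
    def swap_rg(c):
        r, g, b = c
        return (g, r, b)

    red, green = (200, 0, 0), (0, 200, 0)
    # Mixing then swapping equals swapping then mixing:
    assert swap_rg(mix(red, green)) == mix(swap_rg(red), swap_rg(green))

The same argument goes through for subtractive mixing, since that too acts on each channel independently.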

Quote
With mixing paint, there are several ways to make brown. If you did it one way and I did another,  and we had inverted red green, would we agree on the final color?

Are there multiple ways of making brown if you start out with only three colours of paint, those being magenta, yellow and cyan?
Title: Re: Quantifying Consciousness
Post by: Aemilius on 15/01/2014 00:09:04
Francis Crick was lecturing on consciousness at the Salk Institute and a student raised his hand and said "But Professor Crick, you say you are going to lecture on the neural mechanisms of consciousness, and you haven't even bothered to define the word properly."

Crick said "My dear chap, there was never a time in the history of biology when a group of us sat around a table saying 'let's define life first.' We just went out there and found out what it was - a double helix. We leave matters of semantic distinctions to you philosophers."

Looks to me as if he was just artfully dodging the issue of directly linking neural mechanisms to consciousness.

With his DNA research, Crick was more exploring the physical/tangible "Nuts and Bolts" side of things.  When exploring the nuts and bolts side of things in the physical sciences one can often just go out there and start finding things out.

Consciousness, on the other hand, is the non-physical/intangible product of all the nuts and bolts, so if you're planning on just going out there and finding out what that is, you shall be out there for a very, very long time.
 
I agree that not having an adequate definition can be a problem....

Can be a problem? This thread has been impaled on it right from the start.
Title: Re: Quantifying Consciousness
Post by: Aemilius on 15/01/2014 05:33:38
Consciousness is that which: "establishes the communion between the self to its environment."

Consider the word: "Myself"

This is a compound word consisting of two words: My and self. The "My" establishes ownership of the following word "self". To understand the significance of this union, one needs to grasp the notion that the "My" refers to the physical attributes of one's existence and the "self" extends to the ethereal portion of this alliance between body and mind.

There is no absolute evidence that this alliance exists without both participants being involved. Until that evidence surfaces, we can only speculate, and speculation is not science.

Consciousness is that which establishes the communion between the self to its environment? Are there any little wafers involved? I hate little wafers!

No real need for all that. Consciousness is simply that which perceives. It is perception that is the hallmark of consciousness.... whether for a man, a whale, a parakeet, a tree, a single cell, a leopard, what have you. 
Title: Re: Quantifying Consciousness
Post by: cheryl j on 15/01/2014 17:19:43


Can be a problem? This thread has been impaled on it right from the start.

Perhaps, but I still tend to agree with Crick, in that neuroscience has provided a different perspective about the nature of the things that philosophers sometimes associate with consciousness, words like "self" or "intelligence" or "awareness" or "perception."

What I like about Ramachandran's experiments is that he often demonstrates that the abilities, or types of experience, that philosophers see as being "the same" or "one thing" aren't always, and that certain mental functions viewed as inseparable sometimes are separable. Who would have intuitively thought it might be possible to write but not be able to read what you or someone else has written, to speak (in a meaningful way) but not understand what is said to you, to see but not be consciously aware of what you are seeing, or to divide consciousness in half in a split-brain patient?
