The entire AGI system will be subconscious, or more accurately non-conscious. Unless, of course, you model it precisely on the human brain, in which case it may end up working the same way, producing claims of consciousness and running lots of hidden background processes that the conscious part can't access. But an intelligent system running on silicon chips of the kind we know how to make cannot interface with any kind of feelings, and will therefore lack consciousness, so the question at the top doesn't apply.
I agree that it is easy to throw around words like consciousness, unconscious, etc. One might consider what is in "focus", but that may be a trivial aspect of the AI, although selecting what to focus on may not be so trivial. Unconscious may be related memories, events, etc., that don't quite receive the primary focus, but nonetheless influence the overall outcome of the system. As mentioned above, something like priming is testable in humans, and thus one might expect similar responses in an AI system.
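Since priming comes up here as a testable marker, a minimal sketch may help show what an "unconscious" influence could look like in code. Everything in it (the concept names, association weights, and the PrimedMemory class) is invented for illustration; the point is only that residual activation from a recently attended concept can bias a later, ambiguous choice without ever becoming the focus.

```python
# Minimal, hypothetical sketch of priming as a sub-focal bias.
# The concepts, weights, and class are made up for illustration only.

import random

# Hypothetical associative links between concepts.
ASSOCIATIONS = {
    "nurse": {"doctor": 0.8, "hospital": 0.6},
    "bread": {"butter": 0.9, "bakery": 0.5},
}

class PrimedMemory:
    def __init__(self, decay=0.5):
        self.activation = {}   # residual activation that never reaches "focus"
        self.decay = decay

    def perceive(self, concept):
        """Attend to a concept and spread a little activation to its associates."""
        for related, weight in ASSOCIATIONS.get(concept, {}).items():
            self.activation[related] = self.activation.get(related, 0.0) + weight

    def step(self):
        """Time passes; residual activation fades without being noticed."""
        self.activation = {c: a * self.decay for c, a in self.activation.items()}

    def choose(self, candidates):
        """Resolve an ambiguous choice; residual activation acts as a tie-breaker."""
        return max(candidates,
                   key=lambda c: self.activation.get(c, 0.0) + random.uniform(0, 0.1))

memory = PrimedMemory()
memory.perceive("nurse")   # the prime
memory.step()
# "doctor" now wins over "bakery", mirroring the human priming effect.
print(memory.choose(["doctor", "bakery"]))
```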
"Feelings" may be necessary for a self-directing robot to survive in the real world.Pain and fear may be necessary to force you to drop whatever you are doing, and engage in "fight or flight"Happiness & satisfaction is a reflection on past performance, which may be necessary to strengthen the steps & neural connections that led up to the current state, and increase the probability that they will be taken again in the future.Dissatisfaction is also a reflection on the past, which may weaken neural connections, and decrease the probability that the same state will be reached in the futureFrustration is an indication that nothing you are doing now is working, so stop it and do something totally different.In humans, many of these feelings are driven by chemicals floating around our internal plumbing, like adrenalin for fear, and endorphins for satisfaction. An electronic robot would not dispense chemicals onto its silicon chips, but other mechanisms may be necessary to strengthen or weaken neural links as experience grows, or the environment changes.I heard of an experiment where flies were bred without functional pain sensors. They did not survive long in the world.
Can an AI ever achieve positive goals as it sees them without satisfaction? As to pain, what would that amount to for a robotic system? Would you even want to include a pain sensation? Isn't that a cruelty?
Like you said, though, it cannot interface with feelings, so would it have any motivations of its own? Would the designers simply end up with a super-calculator that still had to be fed goals to fill the emotional void?
Quote from: David Cooper on 13/09/2013 17:51:15
consciousness

Would you care to offer a definition of this word?
I think we can distinguish conscious and subconscious responses in the sense of calculated versus reflex actions, but the abstraction of consciousness seems to float around without adding to the discussion.
Quote:
I agree that it is easy to throw around words like consciousness, unconscious, etc. One might consider what is in "focus", but that may be a trivial aspect of the AI, although selecting what to focus on may not be so trivial. Unconscious may be related memories, events, etc., that don't quite receive the primary focus, but nonetheless influence the overall outcome of the system. As mentioned above, something like priming is testable in humans, and thus one might expect similar responses in an AI system.
Focus appears to be vitally important with regard to consciousness. It helps to quickly identify potential threats. Yet an unconscious idea of what a threat is also plays a vital role, and is ultimately an automatic response built up through repeated experience and memory.
"Feelings" may be necessary for a self-directing robot to survive in the real world.
I heard of an experiment where flies were bred without functional pain sensors. They did not survive long in the world.
Quote:
Can an AI ever achieve positive goals as it sees them without satisfaction? As to pain, what would that amount to for a robotic system? Would you even want to include a pain sensation? Isn't that a cruelty?
We have plenty of machines that are capable of making computed (i.e. conscious) decisions based on neural programming from multiple inputs, and/or majority polling to minimise errors from faulty sensors. Most untended machines have "subconscious" reflex actions.
Consider a security system as previously used on the border between East and West Germany. A trip wire or light beam sensor fired a gun along the top of the fence: reflex action. Now add a fog sensor, as used in automatic weather stations, and a polling circuit that disables the light beam sensor if there is rolling fog: a hard-computed, "conscious" action.
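The fence example can be sketched in code to show the two layers: a reflex path wired from the beam sensors to the gun, and a "computed" path that polls a fog sensor and takes a majority vote over redundant beam sensors before allowing the reflex to fire. The sensor names, vote count, and return strings are invented for illustration only.

```python
# Hedged sketch of the border-fence example above: reflex layer vs. computed
# polling layer. All details (three beam sensors, fog flag) are assumptions
# made for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class SensorReadings:
    beam_broken: List[bool]   # three redundant light-beam sensors
    fog_detected: bool        # weather-station style fog sensor


def majority(votes: List[bool]) -> bool:
    """Simple majority poll to minimise errors from a single faulty sensor."""
    return sum(votes) > len(votes) // 2


def fire_gun() -> str:
    return "BANG"             # stands in for the reflex actuator


def security_system(readings: SensorReadings) -> str:
    # "Computed" layer: in rolling fog the beam sensors are untrustworthy,
    # so the reflex path is disabled entirely.
    if readings.fog_detected:
        return "beam sensors disabled (fog)"

    # "Reflex" layer: a majority-confirmed broken beam fires the gun directly.
    if majority(readings.beam_broken):
        return fire_gun()

    return "idle"


print(security_system(SensorReadings([True, True, False], fog_detected=False)))  # BANG
print(security_system(SensorReadings([True, True, True], fog_detected=True)))    # disabled
```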
Consciousness is the experiencing of feelings (qualia), including feelings of understanding and feelings of awareness.