The entire AGI system will be subconscious, or more accurately non-conscious. Unless, of course, you model it precisely on the human brain, in which case it may end up working the same way, producing claims of consciousness and running lots of hidden background processes which the conscious part can't access. But an intelligent system running on silicon chips of the kind we know how to make cannot interface with any kind of feelings, and will therefore lack consciousness, so the question at the top doesn't apply.
I agree that it is easy to throw around words like consciousness, unconscious, etc. One might consider what is in "focus", but that may be a trivial aspect of the AI, although selecting what to focus on may not be so trivial.
Unconscious may be related memories, events, etc, that don't quite receive the primary focus, but nonetheless influence the overall outcome of the system. As mentioned above, something like priming is testable in humans, and thus one might expect similar responses in an AI system.
"Feelings" may be necessary for a self-directing robot to survive in the real world.
Pain and fear may be necessary to force you to drop whatever you are doing, and engage in "fight or flight".
Happiness & satisfaction is a reflection on past performance, which may be necessary to strengthen the steps & neural connections that led up to the current state, and increase the probability that they will be taken again in the future.
Dissatisfaction is also a reflection on the past, which may weaken neural connections, and decrease the probability that the same state will be reached in the future.
Frustration is an indication that nothing you are doing now is working, so stop it and do something totally different.
In humans, many of these feelings are driven by chemicals floating around our internal plumbing, like adrenalin for fear, and endorphins for satisfaction. An electronic robot would not dispense chemicals onto its silicon chips, but other mechanisms may be necessary to strengthen or weaken neural links as experience grows, or the environment changes.
I heard of an experiment where flies were bred without functional pain sensors. They did not survive long in the world.
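The strengthening and weakening of links described above can be sketched in a few lines. This is only an illustration, not anyone's actual implementation: all the names, the learning rate, and the reward values are invented. A scalar "satisfaction" or "dissatisfaction" signal nudges the weights of whichever links were active on the way to the current state.

```python
# Hypothetical sketch: "satisfaction" (reward > 0) and "dissatisfaction"
# (reward < 0) as scalar signals that strengthen or weaken the
# connections that were active on the path to the current state.

def update_connections(weights, active_links, reward, lr=0.1):
    """Strengthen links that led to a rewarded state, weaken otherwise."""
    for link in active_links:
        weights[link] += lr * reward
        weights[link] = max(0.0, min(1.0, weights[link]))  # clamp to [0, 1]
    return weights

weights = {("see_food", "approach"): 0.5, ("see_flame", "approach"): 0.5}
# A satisfying outcome strengthens the links that produced it:
weights = update_connections(weights, [("see_food", "approach")], reward=+1.0)
# A painful outcome weakens them:
weights = update_connections(weights, [("see_flame", "approach")], reward=-1.0)
```

Chemical messengers in animals arguably do something similar in bulk, broadcasting one scalar to many synapses at once rather than addressing links individually.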
Can an AI ever achieve positive goals as it sees them without satisfaction? As to pain, what would that amount to for a robotic system? Would you even want to include a pain sensation? Isn't that a cruelty?
Like you said, though, if it cannot interface with feelings, would it have any motivations of its own? Would the designers simply end up with a super-calculator that still had to be fed goals to fill the emotional void?
Quote from: David Cooper on 13/09/2013 17:51:15
"consciousness"
Would you care to offer a definition of this word?
I think we can distinguish conscious and subconscious responses in the sense of calculated versus reflex actions, but the abstraction of consciousness seems to float around without adding to the discussion.
Focus appears to be vitally important with regard to consciousness. It helps to quickly identify potential threats. Yet an unconscious idea of what a threat is also plays a vital role and is ultimately an automatic response through repetitive experience and memory.
We have plenty of machines that are capable of making computed (i.e. conscious) decisions based on neural programming from multiple inputs, and/or majority polling to minimise errors from faulty sensors. Most untended machines have "subconscious" reflex actions.
Consider a security system as previously used on the border between East and West Germany. A trip wire or light beam sensor fired a gun along the top of the fence: a reflex action. Now add a fog sensor, as used in automatic weather stations, and a polling circuit that disables the light beam sensor if there is rolling fog - a hard-computed, "conscious" action.
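The fence example separates the two levels cleanly enough to sketch in code. This is a toy illustration of the distinction being drawn (the function names are invented): the bare trip-wire response is the "reflex", while the version that polls a second sensor and can veto the reflex is the "computed" decision.

```python
# Toy model of the border-fence example: reflex vs computed action.

def reflex(beam_broken):
    """Reflex action: fire whenever the light beam is interrupted."""
    return beam_broken

def computed(beam_broken, fog_detected):
    """Computed action: a second input can override the reflex."""
    if fog_detected:
        return False          # disable the beam sensor in rolling fog
    return reflex(beam_broken)

print(computed(beam_broken=True, fog_detected=False))  # fires
print(computed(beam_broken=True, fog_detected=True))   # held off by fog
```

The interesting step is not the extra sensor but the polling circuit that weighs one input against another before acting.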
Consciousness is the experiencing of feelings (qualia), including feelings of understanding and feelings of awareness.
Quote
"Consciousness is the experiencing of feelings (qualia), including feelings of understanding and feelings of awareness."
In that case there is no way of knowing whether an entity possesses it without being that entity. Any actor can lie convincingly about the feelings of a wholly fictional character, so a smiley computer could give a perfectly valid reason for you to believe that it had some feelings about something.
This is a dangerous definition as you can use it to justify the concept of untermensch - anyone whose expression of feelings differs from yours, or can be dismissed (without proof being necessary) as a lie. It is very close to the Catholic translation of Genesis in which, to justify bear-baiting, only humans were ascribed a soul, despite all Hebrew versions giving all animals a nefesh.
The talk of rules is probably the biggest issue with AI. Human intelligence has few or even no pre-configured rules as such. The young child who is told not to touch the stove because it is hot, and consequently touches the stove, learns not only that this causes pain but also that it is bad to do it again. This could indicate that even the concept of pain is learnt: not because it is not pre-programmed in the nervous system, but because its understanding comes with experience. That is why to pre-program rules into an AI is back to front.
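The stove example can be made concrete. In this minimal sketch (every name and number here is invented for illustration), the pain *signal* is built in, but the rule "don't touch the stove" is not pre-programmed; it emerges from one recorded experience.

```python
# Built-in nociception, learned rule: the agent is never given
# "avoid hot_stove" - it derives that rule from a painful contact.

def touch(obj):
    """Innate pain signal: hot objects hurt."""
    return -10.0 if obj == "hot_stove" else 0.0

value = {}  # learned expected outcome of touching each object

def decide_and_learn(obj):
    # Avoid anything previously experienced as painful.
    if value.get(obj, 0.0) < 0:
        return "avoid"
    value[obj] = touch(obj)  # otherwise explore, and remember the result
    return "touched"

print(decide_and_learn("hot_stove"))  # first contact: touched, and it hurt
print(decide_and_learn("hot_stove"))  # learned rule now says: avoid
```

The point of the sketch is the division of labour: the designer supplies the sensor, the world supplies the rule.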
As to damage detection, human pain is the most effective damage detector there is. It certainly gets the message across. In the case of humans it is a huge problem to replace a flesh-and-blood arm. This is not so in robotics.
Reflexes can be problematic, such as gripping an electric power line and being unable to release it.
The problem we have with robots is that, even if they operate within Asimov's simplistic laws of robotics, they are physically, intellectually and morally superior to ourselves. Can you imagine a robotic version of the Spanish Inquisition? Or of Shariah law?
I'm not sure that we have reached the point where commercially available artificial systems can create new rules outside the scope of their current rules
...or totally rewrite their own set of rules.
However, I am sure that if artificial systems are to take a productive part in the real world, they will need adaptable rules. After all, the environment is always changing, and for artificial systems to remain useful and productive in the long term, they must adapt to the changed environment. And it's not just the external environment - they must adapt to changes in the behavior of their internal systems, as components age and actuators & sensors change their characteristics.
I think the ultimate goal of a brain (and consciousness) is to predict the future as accurately as possible, so that the best actions can be taken. Rules must be adaptable to take into account additional/changed information about the present if they are to make the best predictions about the future.
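An "adaptable rule" in this sense can be as small as a predictor that is continually corrected by its own error. The sketch below is only illustrative (the class name and learning rate are invented): the rule's internal parameter chases whatever the environment is currently doing, so when the environment shifts, the prediction follows.

```python
# A minimal "adaptable rule": an estimate continually corrected by
# prediction error, so it tracks a changing environment.

class AdaptivePredictor:
    def __init__(self, lr=0.3):
        self.estimate = 0.0
        self.lr = lr

    def predict(self):
        return self.estimate

    def observe(self, actual):
        # Move the rule toward what actually happened.
        error = actual - self.estimate
        self.estimate += self.lr * error

p = AdaptivePredictor()
for reading in [10, 10, 10, 20, 20, 20]:   # environment shifts mid-stream
    p.observe(reading)
print(p.predict())   # the estimate has moved toward the new regime
```

A fixed rule trained on the first regime would keep predicting 10 forever; the error-driven one is already most of the way to 20 after three samples.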
Returning to my notion of conscious = computed, I am fascinated by my own ability to lob things into a wastepaper basket. Whether it is a cricket ball (you really don't want to share an office with me), a ball of paper, or a paper dart, it hits the target every time without any conscious thought. But it would take hours to write the equation of motion that described all three projectiles with the required accuracy, and nobody ever taught me how to do it - kids generally learn to throw accurately with a tennis ball, then pick up almost any projectile and make the requisite corrections for shape, mass and density (including choosing underarm or overarm delivery) without hesitation.
We know that walking upright on two legs requires a huge amount of real-time computation or some very slick distributed sensors, but that is all about self-corrective feedback in a wholly defined system.
Launching a standard projectile is no problem for an automated anti-aircraft gun or a tennis practice server, but has anyone built a throwing robot that can match the adaptive skill of an average office worker? Indeed, is there any other species that can do it?
With a computer you can show that the claims are false by following back the trail of how they are generated, at which point you find them to be nothing more than assertions mapped to inputs on the basis of mapping rules, all done by a processing system which has no possible access to feelings.
You then describe a skill which depends on computations being done without you being conscious of them, illustrating that conscious != computed.
Based on the error in landing point of the current shot, adjust the aiming of the next shot (real-time feedback into actions)
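That feedback rule is worth writing out, because it works without any model of the projectile at all. In this toy sketch (the "ballistics" line stands in for physics the thrower never knows; the gain value is an arbitrary choice), each shot's landing error is simply fed back into the next shot's aim.

```python
# Error-driven aiming: no equations of motion, just feedback.

def train_aim(target, shots=8, gain=0.7):
    aim = 0.0
    errors = []
    for _ in range(shots):
        landing = aim * 0.9 + 1.0   # unknown-to-the-thrower "ballistics"
        error = target - landing
        errors.append(error)
        aim += gain * error         # feed the landing error back into the aim
    return errors

errs = train_aim(target=10.0)
print([round(e, 3) for e in errs])  # the error shrinks shot by shot
```

This may be part of why a new projectile takes a thrower only a few attempts: the loop corrects toward the target without ever solving the equation of motion that describes it.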
Quote
"With a computer you can show that the claims are false by following back the trail of how they are generated, at which point you find them to be nothing more than assertions mapped to inputs on the basis of mapping rules, all done by a processing system which has no possible access to feelings."
All this means is that you can't adequately dissect the human computation sequence because you don't know all the inputs or history.
But it's quite obvious from the study of intercultural or even interpersonal differences of taste and ethics that what we call our feelings are learned rules.
Quote
"You then describe a skill which depends on computations being done without you being conscious of them, illustrating that conscious != computed."
But the point made lower down is that I don't know how to compute the necessary actions "on paper", I can't explain them, and I haven't intentionally learned them. This is the difference between subconscious neural learning and conscious von Neumann thought processes.
As for bipedal walking, electroencephalography and functional MRI studies show that it really uses a lot of brainpower and it is generally accepted as one of the most difficult aspects of robotics.
...though the ability to sidestep or stride over a rock, or walk up stairs, would be hugely useful.
No hardware problem
The problem with bipedal standing is that a body supported on two pivots below its center of gravity is inherently unstable, so standing still is an active process, requiring continual adjustment of muscle tone - hence the large amount of brain power needed by bipeds.
Walking is slightly easier to compute because as you say it is a process of continually falling forward and arresting the fall, and can be achieved with fewer muscles.
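The "standing still is an active process" point is easy to demonstrate numerically. The sketch below treats the body as an inverted pendulum (every constant is illustrative, and the controller gains are arbitrary): with a simple PD correction applied every tick it stays near vertical, and with the correction switched off it falls over.

```python
# Inverted pendulum: stable only under continual active correction.
import math

def simulate(kp=30.0, kd=8.0, dt=0.01, steps=400, g=9.8, length=1.0):
    theta, omega = 0.1, 0.0        # start leaning 0.1 rad from vertical
    for _ in range(steps):
        torque = -kp * theta - kd * omega      # "muscle tone" adjustment
        # gravity tips the body further; torque (as angular accel.) fights back
        alpha = (g / length) * math.sin(theta) + torque
        omega += alpha * dt
        theta += omega * dt
    return theta

print(abs(simulate()))                 # controlled: settles near vertical
print(abs(simulate(kp=0.0, kd=0.0)))   # no control: topples
```

Nothing here is clever; the point is only that the upright state is never reached and then left alone - the correction has to run forever.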
It's interesting to play with a pogo stick
Walking around on a flat floor with no obstacles is pretty pointless for a robot. In such a low-impedance environment, wheels are much more efficient.
I'm planning to work on the two webcam approach for vision and have thought about how to go about it quite a bit, but I think the pattern recognition side of it will take a lot of time to work out - this is needed to match up the same point in the two images so that its distance can be calculated, but even after that you have to model the whole scene and make sense of all the different surfaces, and work out which should not be stood on, so it's going to be a major undertaking. I'm also years behind other people in doing that kind of work and may not be able to catch up, so it may be better not to start on it. I'll see how I feel about that when my other work's finished and out of the way.
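The distance calculation itself, once the matching problem is solved, is just triangulation. A minimal sketch, with made-up focal length and camera-separation values: the pixel shift (disparity) of a matched point between the two webcam images gives its depth.

```python
# Depth from a matched stereo pair: depth = focal * baseline / disparity.
# focal_px and baseline_m are invented example values, not real calibration.

def depth_from_disparity(x_left, x_right, focal_px=800.0, baseline_m=0.06):
    disparity = x_left - x_right      # pixel shift between the two views
    if disparity <= 0:
        raise ValueError("point must appear further left in the left image")
    return focal_px * baseline_m / disparity

# A feature 48 px further left in the left image than in the right:
print(depth_from_disparity(400.0, 352.0))  # -> 1.0 (metres)
```

As the post says, this formula is the easy part; finding which point in the left image corresponds to which in the right, and then making sense of the recovered surfaces, is where the real work lies.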
You won't get very far playing rugby or catching rabbits if you have to look at the ground when you are running. Animals are extremely adept at traversing rough terrain without looking at their feet! It's all done by proprioception - pressure receptors and extensometers - not the eyeball.
I have already worked out pattern recognition and thought about stereoscopic vision. Maybe we should share ideas? :-)
I can pick a moving shape out of the background and isolate it.
BTW I also have ideas on focal point adjustment for a vision system.
Our semicircular canals only detect acceleration, so no problem simulating them with accelerometers
Sprinters start with a pronounced forward lean as they accelerate, and become more upright at full speed. I think if you watch a normal bipedal gait very carefully you will see that the head actually leads the movement - the body intentionally falls forward then stops itself by swinging a leg forward.
For deceleration you have to lean backwards.
Apols for not distinguishing between linear and rotational accelerometers
http://www.robotshop.com/sensors-gyroscopes.html will provide you with neat solid-state rotational accelerometers. Friends from the aerospace industry have been working on these for ages, looking for medical applications.
Three cameras? Probably not necessary. No raptor has evolved a fully functional third eye.
If you stand on the apron or inside the barn your semicircular canals adjust and you can swear that the steel columns are about 3 degrees off vertical, but a plumb line says they are perfect.