Quote: "Consciousness is the experiencing of feelings (qualia), including feelings of understanding and feelings of awareness."

In that case there is no way of knowing whether an entity possesses it without being that entity. Any actor can lie convincingly about the feelings of a wholly fictional character, so a smiley computer could give a perfectly valid reason for you to believe that it had some feelings about something.
Quote: "Consciousness is the experiencing of feelings (qualia), including feelings of understanding and feelings of awareness."
This is a dangerous definition, as you can use it to justify the concept of Untermensch - anyone whose expression of feelings differs from yours, or whose expression can be dismissed (without proof being necessary) as a lie. It is very close to the Catholic translation of Genesis in which, to justify bear-baiting, only humans were ascribed a soul, despite all Hebrew versions giving all animals a nefesh.
The talk of rules is probably the biggest issue with AI. Human intelligence has few or even no pre-configured rules as such. The young child who is told not to touch the stove because it is hot, and consequently touches the stove, learns not only that this causes pain but also that it is bad to do it again. This could indicate that even the concept of pain is learnt - not because it is not pre-programmed in the nervous system, but because its understanding comes with experience. That is why pre-programming rules into an AI is back to front.
As to damage detection, human pain is the most effective damage detection there is. It certainly gets the message across. In the case of humans it is a huge problem to replace a flesh-and-blood arm; this is not so in robotics.
Reflexes can be problematic, such as gripping an electric power line and being unable to release it.
The problem we have with robots is that, even if they operate within Asimov's simplistic laws of robotics, they are physically, intellectually and morally superior to ourselves. Can you imagine a robotic version of the Spanish Inquisition? Or of Shariah law?
I'm not sure that we have reached the point where commercially available artificial systems can create new rules outside the scope of their current rules
...or totally rewrite their own set of rules.
However, I am sure that if artificial systems are to take a productive part in the real world, they will need adaptable rules. After all, the environment is always changing, and for artificial systems to remain useful and productive in the long term, they must adapt to the changed environment. And it's not just the external environment - they must adapt to changes in the behavior of their internal systems, as components age and actuators & sensors change their characteristics.
I think the ultimate goal of a brain (and consciousness) is to predict the future as accurately as possible, so that the best actions can be taken. Rules must be adaptable to take into account additional/changed information about the present if they are to make the best predictions about the future.
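A minimal sketch of such an adaptable rule in Python: an online predictor whose single parameter is corrected by its own prediction error. The learning rate and the sensor readings are illustrative assumptions, not anything from the thread.

```python
# Adaptable rule: a predictor that updates itself from prediction error,
# rather than relying on a fixed pre-programmed value.

def make_adaptive_predictor(learning_rate=0.2):
    state = {"estimate": 0.0}

    def predict():
        return state["estimate"]

    def update(observed):
        error = observed - state["estimate"]
        state["estimate"] += learning_rate * error  # adapt to changed environment
        return error

    return predict, update

predict, update = make_adaptive_predictor()
for reading in [1.0, 1.2, 0.9, 1.1]:   # hypothetical sensor readings
    update(reading)
```

After the four updates the estimate has drifted toward the recent readings; the same structure also tracks slow drift as components age, which a fixed rule cannot.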
Returning to my notion of conscious = computed, I am fascinated by my own ability to lob things into a wastepaper basket. Whether it is a cricket ball (you really don't want to share an office with me), a ball of paper, or a paper dart, it hits the target every time without any conscious thought. But it would take hours to write the equation of motion that described all three projectiles with the required accuracy, and nobody ever taught me how to do it - kids generally learn to throw accurately with a tennis ball, then pick up almost any projectile and make the requisite corrections for shape, mass and density (including choosing underarm or overarm delivery) without hesitation.
We know that walking upright on two legs requires a huge amount of real-time computation, or some very slick distributed sensors, but that is all about self-corrective feedback in a wholly defined system.
Launching a standard projectile is no problem for an automated anti-aircraft gun or a tennis practice server, but has anyone built a throwing robot that can match the adaptive skill of an average office worker? Indeed, is there any other species that can do it?
With a computer you can show that the claims are false by following back the trail of how they are generated, at which point you find them to be nothing more than assertions mapped to inputs on the basis of mapping rules, all done by a processing system which has no possible access to feelings.
You then describe a skill which depends on computations being done without you being conscious of them, illustrating that conscious != computed.
Based on the error in landing point of the current shot, adjust the aiming of the next shot (real-time feedback into actions)
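That feedback rule can be sketched in a few lines of Python: correct the next shot's aim by a fraction of the landing error. The gain, the target, and the toy landing model (a fixed +2.0-unit bias) are all assumptions for the demo.

```python
# Shot-by-shot feedback: aim, observe where the shot lands, and fold the
# error back into the next shot's aim.

def aim_correction_loop(target, first_aim, landing_model, gain=0.8, shots=10):
    aim = first_aim
    for _ in range(shots):
        landing = landing_model(aim)
        error = target - landing
        aim += gain * error        # real-time feedback into the next action
    return aim

# Toy landing model: every shot lands 2.0 units beyond the aim point.
final_aim = aim_correction_loop(target=10.0, first_aim=10.0,
                                landing_model=lambda a: a + 2.0)
```

The loop never needs to know the bias in advance - after a few shots it settles on aiming at 8.0 so the shot lands on 10.0, which is exactly the point about adapting without a full model.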
Quote: "With a computer you can show that the claims are false by following back the trail of how they are generated, at which point you find them to be nothing more than assertions mapped to inputs on the basis of mapping rules, all done by a processing system which has no possible access to feelings."

All this means is that you can't adequately dissect the human computation sequence because you don't know all the inputs or history.
But it's quite obvious from the study of intercultural or even interpersonal differences of taste and ethics that what we call our feelings are learned rules.
Quote: "You then describe a skill which depends on computations being done without you being conscious of them, illustrating that conscious != computed."

But the point made lower down is that I don't know how to compute the necessary actions "on paper", I can't explain them, and I haven't intentionally learned them. This is the difference between subconscious neural learning and conscious von Neumann thought processes.
As for bipedal walking, electroencephalography and functional MRI studies show that it really uses a lot of brainpower and it is generally accepted as one of the most difficult aspects of robotics.
...though the ability to sidestep or stride over a rock, or walk up stairs, would be hugely useful.
No hardware problem
The problem with bipedal standing is that a body supported on two pivots below its center of gravity is inherently unstable, so standing still is an active process, requiring continual adjustment of muscle tone - hence the large amount of brain power needed by bipeds.
Walking is slightly easier to compute because as you say it is a process of continually falling forward and arresting the fall, and can be achieved with fewer muscles.
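A toy simulation makes the "standing still is active" point concrete: an inverted pendulum left alone falls over, but a simple PD correction (a stand-in for continually adjusted muscle tone) holds it upright. The masses, gains, and time step are illustrative assumptions, not measured values.

```python
import math

# Inverted pendulum with a PD "muscle tone" controller. With the gains
# zeroed the lean angle diverges; with them on, it is pulled back to zero.

def simulate_standing(kp=60.0, kd=12.0, theta0=0.05, dt=0.001, steps=3000):
    g, length = 9.81, 1.0
    theta, omega = theta0, 0.0        # lean angle (rad), angular velocity
    for _ in range(steps):
        correction = -kp * theta - kd * omega   # continual active adjustment
        alpha = (g / length) * math.sin(theta) + correction
        omega += alpha * dt
        theta += omega * dt
    return theta

final_lean = simulate_standing()                 # controlled: stays upright
fallen = simulate_standing(kp=0.0, kd=0.0)       # uncontrolled: falls over
```

Note the sign of the gravity term: unlike a hanging pendulum, any lean is amplified, which is why the correction can never be switched off.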
It's interesting to play with a pogo stick
Walking around on a flat floor with no obstacles is pretty pointless for a robot. In such a low-impedance environment, wheels are much more efficient.
I'm planning to work on the two-webcam approach for vision and have thought about how to go about it quite a bit, but I think the pattern-recognition side will take a lot of time to work out. It is needed to match up the same point in the two images so that its distance can be calculated, but even after that you have to model the whole scene, make sense of all the different surfaces, and work out which should not be stood on, so it's going to be a major undertaking. I'm also years behind other people doing that kind of work and may not be able to catch up, so it may be better not to start on it. I'll see how I feel about that when my other work's finished and out of the way.
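For what it's worth, once the matching problem is solved, the distance step itself is the easy part - standard stereo triangulation from the pixel disparity. The focal length and camera baseline below are assumed values for the demo, not calibrated ones.

```python
# Depth from stereo disparity: the same scene point appears shifted
# between the left and right images; nearer points shift more.

def depth_from_disparity(x_left_px, x_right_px, focal_px=800.0, baseline_m=0.1):
    disparity = x_left_px - x_right_px      # pixels; larger means nearer
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity   # depth in metres

d = depth_from_disparity(420.0, 380.0)   # 40 px disparity
```

With an 800-pixel focal length and a 10 cm baseline, a 40-pixel disparity puts the point 2 m away - which also shows why depth resolution degrades quickly for distant points, where the disparity shrinks toward the matching noise.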
You won't get very far playing rugby or catching rabbits if you have to look at the ground when you are running. Animals are extremely adept at traversing rough terrain without looking at their feet! It's all done by pressure receptors and stretch sensors (proprioception), not the eyeball.
I have already worked out pattern recognition and thought about stereoscopic vision. Maybe we should share ideas? :-)
I can pick a moving shape out of the background and isolate it.
BTW I also have ideas on focal point adjustment for a vision system.
Our semicircular canals only detect acceleration, so no problem simulating them with accelerometers
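A sketch of the standard substitute: at rest a 3-axis accelerometer measures only gravity, so the static lean angle can be recovered from the measured gravity direction. The reading below is an illustrative value corresponding to roughly a 10-degree forward lean.

```python
import math

# Static tilt (pitch) from a 3-axis accelerometer reading, in g units.
# Only valid when the body is not otherwise accelerating.

def tilt_from_accel(ax, ay, az):
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

pitch = tilt_from_accel(0.1736, 0.0, 0.9848)   # ~10 degree forward lean
```

During movement the trick breaks down, because the sensor cannot distinguish gravity from the body's own acceleration - which is why real IMUs fuse the accelerometer with rate gyros.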
Sprinters start with a pronounced forward lean as they accelerate, and become more upright at full speed. I think if you watch a normal bipedal gait very carefully you will see that the head actually leads the movement - the body intentionally falls forward then stops itself by swinging a leg forward.
For deceleration you have to lean backwards.
Apols for not distinguishing between linear and rotational accelerometers
http://www.robotshop.com/sensors-gyroscopes.html will provide you with neat solid-state rotational accelerometers. Friends from the aerospace industry have been working on these for ages, looking for medical applications.
3 cameras? Probably not necessary. No raptor has evolved a fully functional third eye.
If you stand on the apron or inside the barn your semicircular canals adjust and you can swear that the steel columns are about 3 degrees off vertical, but a plumb line says they are perfect.