How can artificial general intelligence systems be tested?

  • 39 Replies
  • 8511 Views

0 Members and 1 Guest are viewing this topic.

*

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4175
  • The graviton sucks
    • View Profile
If this kind of AI is to be developed how would anyone test to see if a true subconsciousness has been developed seeing as we don't have any real definition for it?
« Last Edit: 13/09/2013 17:40:17 by chris »
Fixation on the Einstein papers is a good definition of OCD.

*

Offline CliffordK

  • Neilep Level Member
  • ******
  • 6321
  • Site Moderator
    • View Profile
Re: artificial general intelligence
« Reply #1 on: 13/09/2013 09:35:38 »
One of the basic ideas in AI is the Turing Test, in which success has been achieved if the machine can provide answers indistinguishable from a person.

Certainly great strides have been made with "thinking computers" as they have excelled in games such as chess, and even Jeopardy.

As for the subconscious, I suppose that would have to be rolled into the "Turing Test".

I'm not sure there is general agreement on what the subconscious is, but there are some psychology tests that can at least probe parts of it.  For example one might consider Priming.

Priming is essentially the idea that one's response to a question or stimulus is influenced by the context in which it is presented, or perhaps by the context presented before the stimulus.

I suppose something related might be the Stroop Effect, which is difficult for humans but would presumably be rather easy for computers.
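As a toy illustration (hypothetical, not drawn from any real test battery), the asymmetry can be sketched like this: a human names the ink colour of the word "RED" printed in blue more slowly than a congruent stimulus, but a rule-based program that reads the ink attribute directly shows no interference at all.

```python
# Hypothetical Stroop-style probe: the responder follows one rule --
# report the ink colour and ignore the word itself -- so the conflicting
# word in the incongruent case causes no slowdown or errors.

def name_ink_colour(stimulus):
    """Respond with the ink colour, ignoring the word itself."""
    return stimulus["ink"]

congruent = {"word": "RED", "ink": "red"}
incongruent = {"word": "RED", "ink": "blue"}

print(name_ink_colour(congruent))    # word and ink agree
print(name_ink_colour(incongruent))  # word conflicts, but the rule doesn't care
```

An AI that showed human-like interference here would be doing something more than rule lookup, which is roughly what makes the test interesting.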

*

Offline evan_au

  • Neilep Level Member
  • ******
  • 4309
    • View Profile
Re: artificial general intelligence
« Reply #2 on: 13/09/2013 12:42:16 »
A lot of our definition of intelligence is verbal, numeric and geometric - things that require sequential logic steps to solve.

But much of our brain operates in parallel on sensory correlation and muscular coordination, below the level of conscious thought. Or, at least, if you are learning to play golf by reading "golf for dummies", you are applying the slow, intellectual, sequential processing part of your brain to train your unconscious brain to undertake a fast, highly parallel, physical task.

You won't have a good golf swing while you are sequentially thinking through your stance, your grip, etc. It has to become automatic before it works well - what some people call "muscle memory" (a rather misleading term).

This is the kind of artificial intelligence that robots lack - their movements are often awkward and inefficient, as if they are working through sequential steps; movements that are parodied in "robot dancing". Some researchers have made progress towards more "natural" movements by trying to minimise energy consumed, so that movement in one action is smoothly transferred into movement for the next action. Surely minimising wasted energy is important for both living and mechanical systems.

What may offend some people is that this unconscious kinesthetic intelligence is sometimes displayed most clearly by people who do not qualify for the degree of nerdiness needed to get a PhD, and it does not qualify as what scholars would consider "intelligence". Even worse, some of the best examples are not even human (or even primates).

*

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4893
  • life is too short to drink instant coffee
    • View Profile
Re: artificial general intelligence
« Reply #3 on: 13/09/2013 15:34:06 »
I've always considered intelligence to be either constructive laziness or the ability to surprise another animal. We don't see much evidence in normal linear computing systems because their responses are necessarily predictable: playing chess at an expert level depends more on not making mistakes than on amazing the opposition with brilliant originality. But a simple neural network incorporating fuzzy logic can indeed surprise its teachers by recognising significance under noise, or dismissing insignificant relationships as meaningless coincidence.
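A minimal sketch of that idea (hypothetical, using a bare perceptron rather than a fuzzy network): a single trainable unit can learn to pull a weak signal out of noise without ever being told the rule explicitly, which is the sense in which such systems can "surprise" their teachers.

```python
# A single perceptron unit trained online on noisy observations.
# Two inputs carry weak copies of a hidden 0/1 signal; the third is
# pure noise. The unit learns to recover the signal and to (mostly)
# ignore the meaningless input.
import random

random.seed(0)

def sample():
    """One noisy observation: two weak copies of the signal plus pure noise."""
    signal = random.choice([0, 1])
    x = [signal + random.gauss(0, 0.3),
         signal + random.gauss(0, 0.3),
         random.gauss(0, 1.0)]
    return x, signal

w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

for _ in range(2000):
    x, target = sample()
    err = target - predict(x)          # error-driven update (perceptron rule)
    w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    b += lr * err

accuracy = sum(predict(x) == t for x, t in (sample() for _ in range(500))) / 500
print(accuracy)
```

Nothing in the training loop names the significant inputs; the weights that emerge are the "recognition of significance under noise" described above.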

"Minimising wasted energy" is an aspect of "constructive laziness". Robot dancing is a good example: it's interesting to compare western ballet with traditional Chinese dance. Western steps, forms and sequences are always complete, usually ending with a pose or an exit on a bar line in the music (and some applause, if done well) whereas eastern forms flow from one movement to the next with no stops and starts. 
helping to stem the tide of ignorance

*

Offline David Cooper

  • Neilep Level Member
  • ******
  • 1505
    • View Profile
The entire AGI system will be subconscious, or more accurately non-conscious. Unless of course you model it precisely on the human brain in which case it may end up working the same way with claims of consciousness coming out of it and lots of hidden background processes going on which the conscious part can't access. But an intelligent system running on silicon chips of the kind we know how to make cannot interface with any kind of feelings and will therefore lack consciousness, so the question at the top doesn't apply.

*

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4175
  • The graviton sucks
    • View Profile
The entire AGI system will be subconscious, or more accurately non-conscious. Unless of course you model it precisely on the human brain in which case it may end up working the same way with claims of consciousness coming out of it and lots of hidden background processes going on which the conscious part can't access. But an intelligent system running on silicon chips of the kind we know how to make cannot interface with any kind of feelings and will therefore lack consciousness, so the question at the top doesn't apply.

I would agree with this to a degree. Work is being undertaken on human-level AI, which brings this question back into focus. Like you said, though, it cannot interface with feelings, so would it have any motivations of its own? Would the designers simply end up with a super-calculator that still had to be fed goals to fill the emotional void?
Fixation on the Einstein papers is a good definition of OCD.

*

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4893
  • life is too short to drink instant coffee
    • View Profile
consciousness

Would you care to offer a definition of this word?

I think we can distinguish conscious and subconscious responses in the sense of calculated versus reflex actions, but the abstraction of consciousness seems to float around without adding to the discussion. 
helping to stem the tide of ignorance

*

Offline CliffordK

  • Neilep Level Member
  • ******
  • 6321
  • Site Moderator
    • View Profile
I agree that it is easy to throw around words like consciousness, unconscious, etc.

One might consider what is in "focus", but that may be a trivial aspect of the AI, although selecting what to focus on may not be so trivial.

The unconscious may relate to memories, events, etc., that don't quite receive the primary focus, but nonetheless influence the overall outcome of the system.

As mentioned above, something like priming is testable in humans, and thus one might expect similar responses in an AI system.

*

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4175
  • The graviton sucks
    • View Profile
I agree that it is easy to throw around words like consciousness, unconscious, etc.

One might consider what is in "focus", but that may be a trivial aspect of the AI, although selecting what to focus on may not be so trivial.

The unconscious may relate to memories, events, etc., that don't quite receive the primary focus, but nonetheless influence the overall outcome of the system.

As mentioned above, something like priming is testable in humans, and thus one might expect similar responses in an AI system.

Focus appears to be vitally important with regard to consciousness; it helps to quickly identify potential threats. Yet an unconscious idea of what a threat is also plays a vital role, and that idea is ultimately an automatic response built up through repeated experience and memory.
« Last Edit: 14/09/2013 02:01:40 by jeffreyH »
Fixation on the Einstein papers is a good definition of OCD.

*

Offline evan_au

  • Neilep Level Member
  • ******
  • 4309
    • View Profile
"Feelings" may be necessary for a self-directing robot to survive in the real world.
  • Pain and fear may be necessary to force you to drop whatever you are doing, and engage in "fight or flight"
  • Happiness & satisfaction are a reflection on past performance, which may be necessary to strengthen the steps & neural connections that led up to the current state, and increase the probability that they will be taken again in the future.
  • Dissatisfaction is also a reflection on the past, which may weaken neural connections, and decrease the probability that the same state will be reached in the future
  • Frustration is an indication that nothing you are doing now is working, so stop it and do something totally different.
In humans, many of these feelings are driven by chemicals floating around our internal plumbing, like adrenalin for fear, and endorphins for satisfaction. An electronic robot would not dispense chemicals onto its silicon chips, but other mechanisms may be necessary to strengthen or weaken neural links as experience grows, or the environment changes.
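One way to sketch such a mechanism (a hypothetical toy, not a claim about how brains do it): treat "satisfaction" and "dissatisfaction" as nothing more than weight updates that raise or lower the chance of repeating an action.

```python
# Hypothetical sketch: feedback strengthens or weakens action "links",
# playing the role the text assigns to satisfaction and dissatisfaction.
# Nothing is felt; it is pure bookkeeping.
import random

class Learner:
    def __init__(self, actions, rate=0.2):
        self.weights = {a: 1.0 for a in actions}
        self.rate = rate

    def choose(self):
        # pick an action with probability proportional to its weight
        total = sum(self.weights.values())
        r = random.uniform(0, total)
        for action, weight in self.weights.items():
            r -= weight
            if r <= 0:
                return action
        return action

    def feedback(self, action, outcome):
        # outcome > 0 plays the role of satisfaction, < 0 of dissatisfaction;
        # the floor keeps a weakened action from vanishing entirely
        self.weights[action] = max(0.05, self.weights[action] + self.rate * outcome)
```

A robot built this way that repeatedly received positive feedback for, say, recharging when its battery ran low would come to "prefer" that behaviour, with no chemicals and no experience involved.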

I heard of an experiment where flies were bred without functional pain sensors. They did not survive long in the world.

*

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4175
  • The graviton sucks
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #10 on: 14/09/2013 02:39:08 »
"Feelings" may be necessary for a self-directing robot to survive in the real world.
  • Pain and fear may be necessary to force you to drop whatever you are doing, and engage in "fight or flight"
  • Happiness & satisfaction are a reflection on past performance, which may be necessary to strengthen the steps & neural connections that led up to the current state, and increase the probability that they will be taken again in the future.
  • Dissatisfaction is also a reflection on the past, which may weaken neural connections, and decrease the probability that the same state will be reached in the future
  • Frustration is an indication that nothing you are doing now is working, so stop it and do something totally different.
In humans, many of these feelings are driven by chemicals floating around our internal plumbing, like adrenalin for fear, and endorphins for satisfaction. An electronic robot would not dispense chemicals onto its silicon chips, but other mechanisms may be necessary to strengthen or weaken neural links as experience grows, or the environment changes.

I heard of an experiment where flies were bred without functional pain sensors. They did not survive long in the world.

Can an AI ever achieve positive goals as it sees them without satisfaction? As to pain, what would that amount to for a robotic system? Would you even want to include a pain sensation? Isn't that a cruelty?
Fixation on the Einstein papers is a good definition of OCD.

*

Offline CliffordK

  • Neilep Level Member
  • ******
  • 6321
  • Site Moderator
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #11 on: 14/09/2013 06:42:00 »
Can an AI ever achieve positive goals as it sees them without satisfaction? As to pain, what would that amount to for a robotic system? Would you even want to include a pain sensation? Isn't that a cruelty?
With the Turing Test, you've achieved AI when the machine can mimic humans well enough to be indistinguishable from them.

For physical pain, it is included to protect an organism from doing things like sticking a hand in a fire, or wearing a hole in one's foot with a stone in the shoe.  One could potentially set the pain tolerance to whatever the system is capable of withstanding.  However, a hand, for example, may need some very sensitive sensors to be able to pick up eggs and the like.

As for mental anguish, that may also arise from non-fulfilment of goals, or from other conditions that may be important in forming the basis of the AI's learning and personality.

*

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4893
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #12 on: 14/09/2013 10:00:48 »
It's important to define all the abstractions you want to demonstrate in your definition of intelligence. Otherwise you are asking an athlete to run a thingy in several seconds, and only defining thingy and several in retrospect. 

We have plenty of machines that are capable of making computed (i.e. conscious) decisions based on neural programming from multiple inputs, and/or majority polling to minimise errors from faulty sensors. Most untended machines have "subconscious" reflex actions.

Consider a security system as previously used on the border between East and West Germany. A trip wire or light beam sensor fired a gun along the top of the fence: reflex action. Now add a fog sensor, as used in automatic weather stations, and a polling circuit that disables the light beam sensor if there is rolling fog - hard computed conscious action. You can add any level of sophistication you like: minimum target radar to distinguish between birds and humans, self-defence to prevent anyone disabling the machine, coded entry to allow an authorised technician to service it.... Then you can either hard-program the machine or, if you want it to mimic human response, use a neural learning program with fuzzy inputs to learn the sort of conditions under which you would fire the gun.
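The layering described above can be sketched in a few lines (a hypothetical hard-programmed version; the function name and parameters are invented for illustration): a bare reflex wrapped in successively "smarter" overrides.

```python
# Sketch of the layered fence controller: the innermost check is the
# reflex (beam broken -> fire); each later check is a "computed" layer
# that can veto the reflex.

def should_fire(beam_broken, fog_detected, radar_target_size,
                min_human_size=0.5):
    if not beam_broken:
        return False   # reflex layer: nothing tripped the beam
    if fog_detected:
        return False   # polling layer: the beam is unreliable in fog
    if radar_target_size < min_human_size:
        return False   # radar layer: target too small, probably a bird
    return True        # all layers agree
```

The neural-learning alternative mentioned above would replace these hand-written vetoes with thresholds learned from examples, but the reflex-plus-override structure is the same.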

It is important not to confuse a created machine with an evolved one. Evolution is not 100% optimal in a fixed environment (you carry a lot of DNA baggage that you never use - but wouldn't it be nice to have an adjustable spanner on a third arm?), whereas creation cannot respond to a changing environment (we stopped using bolts on this production line and use Philips screws instead - your third arm is redundant). We generally build machines for 100% optimisation, not adaptation to the unknown.         
helping to stem the tide of ignorance

*

Offline David Cooper

  • Neilep Level Member
  • ******
  • 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #13 on: 14/09/2013 15:25:39 »
Like you said, though, it cannot interface with feelings, so would it have any motivations of its own? Would the designers simply end up with a super-calculator that still had to be fed goals to fill the emotional void?

The "motivations" are rules programmed in at the start. One of those would be to calculate "new" rules, but they would completely derived from existing rules, much in the way that all rules of morality (and law) can be derived from the golden rule.



consciousness

Would you care to offer a definition of this word?

Consciousness is the experiencing of feelings (qualia), including feelings of understanding and feelings of awareness.

Quote
I think we can distinguish conscious and subconscious responses in the sense of calculated versus reflex actions, but the abstraction of consciousness seems to float around without adding to the discussion. 

How do you propose that you can have conscious and subconscious responses without consciousness being involved in the former? If there is no consciousness involved, all processes in the system are non-conscious (not even subconscious). The only thing that distinguishes conscious processes from subconscious ones is that the conscious ones generate claims of consciousness. If you take those claims away, you're left with nothing to distinguish the two kinds of processing from each other.



I agree that it is easy to throw around words like consciousness, unconscious, etc.

One might consider what is in "focus", but that may be a trivial aspect of the AI, although selecting what to focus on may not be so trivial.

"Focus" is just consciousness by the back door, unless you apply it to "subconscious" processing too - everything that's processed has to be focused on by the part of the system that processes it.

Quote
The unconscious may relate to memories, events, etc., that don't quite receive the primary focus, but nonetheless influence the overall outcome of the system.

Adding the word "primary" to "focus" improves things, though. There is one part of the system which monitors what the other parts are doing, though not always closely, and it can sometimes override their decisions. For example, it may hurt to have a wound cleaned, so you have two parts of the system with rival aims, and the main one can perhaps override the lesser one that's trying to stop the pain whenever the wound is touched. That "lesser" one might sometimes be stronger and win out, but that is unimportant: it is a specialised part of the system, while the part which can try to override it is in charge of monitoring all the specialised parts of the system.

Quote
As mentioned above, something like priming is testable in humans, and thus one might expect similar responses in an AI system.

That could indeed happen - reaction speed can be improved by looking things up speculatively, in advance, on the basis of clues already provided. In a highly parallel processing system like the brain this happens automatically, without wasting any processing time. In computers with only one or a handful of processors it may be more expensive, as those processors could be running flat out on more important work. In the brain, though, many parts of the system are too specialised to do anything other than a single kind of task, so they can't be reassigned to process anything else; they might as well do speculative processing whenever they are idle, just in case the work turns out to be useful.
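A mechanical analogue of priming might look like this (hypothetical sketch; the class and method names are invented): context clues trigger speculative lookups into a fast cache during idle time, so later answers arrive sooner without any "conscious" involvement.

```python
# Hypothetical sketch of priming as speculative pre-fetching:
# spare capacity pulls likely answers from a slow store into a fast
# cache, so a later query on the primed topic is answered quickly.

class PrimedResponder:
    def __init__(self, knowledge):
        self.knowledge = knowledge  # slow, exhaustive store
        self.cache = {}             # small, fast, primed store

    def prime(self, context):
        # run speculatively whenever there is spare capacity
        for key, value in self.knowledge.items():
            if context in key:
                self.cache[key] = value

    def respond(self, query):
        if query in self.cache:
            return self.cache[query], "fast"   # primed path
        return self.knowledge.get(query), "slow"
```

A psychology-style priming test applied to such a system would show exactly the signature measured in humans: faster responses on topics related to recently presented context.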



Focus appears to be vitally important with regard to consciousness. It helps to quickly identify potential threats. Yet an unconscious idea of what a threat is also plays a vital role and is ultimately an automatic response through repetitive experience and memory.

It's a mistake to conflate focus with consciousness. The "subconscious" parts of the system have a focus too. It may be that the system with the primary focus is conscious, but the other systems could be conscious too and simply lack the means to report the fact. The system with the primary focus does report that it is conscious.

There is an ambiguity problem with the word "conscious". It may mean "functioning" or it may mean that it's experiencing feelings. The "subconscious" parts of the system are also functioning, but if they are experiencing feelings they are unable to report the fact. The same ambiguity applies to the word "consciousness", though to a lesser degree - it's in expressions like "he lost consciousness" that the main idea of "consciousness" is left out and it merely refers to a system losing function (shutting down; switching off), but ordinarily the word "consciousness" refers to more than functionality: it is about feelings (qualia).



"Feelings" may be necessary for a self-directing robot to survive in the real world.

Feelings have no logical role whatsoever in the system. The program simply acts on inputs according to rules and generates appropriate outputs to respond to them. You have put quotes round the word "feelings" though, which means you may not intend to be talking about feelings at all, but merely response rules which perform the roles which we associate with feelings.

Quote
I heard of an experiment where flies were bred without functional pain sensors. They did not survive long in the world.

If you replace the word "pain" in the above with "damage detection" you will get the same result without needing any pain in the system.



Can an AI ever achieve positive goals as it sees them without satisfaction?

A positive goal being achieved would result in a high score calculated by applying rules. A high score would not result in any feeling of satisfaction, but could be used to guide future decisions - on this occasion a good result came out of this behaviour and therefore it may be worth repeating this behaviour in similar situations, or at least prioritising it over other behaviours in similar situations which generated low scores.
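That score-guided loop can be made concrete in a few lines (hypothetical sketch; the names are invented): "satisfaction" is replaced by a plain score table, and high past scores bias future choices without anything being felt.

```python
# Hypothetical sketch: goal achievement recorded as scores; behaviour
# selection prefers whatever has scored best historically. No feeling
# of satisfaction anywhere -- just rule-applied arithmetic.
from collections import defaultdict

history = defaultdict(list)

def record(behaviour, score):
    """Log the score a rule-based evaluation assigned to a behaviour."""
    history[behaviour].append(score)

def pick(candidates):
    """Prefer the candidate with the best average historical score."""
    def avg(b):
        return sum(history[b]) / len(history[b]) if history[b] else 0.0
    return max(candidates, key=avg)
```

After `record("greet", 0.9)` and `record("ignore", 0.1)`, `pick(["greet", "ignore"])` prioritises the behaviour that produced the good result, which is all the "guidance of future decisions" described above requires.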

Quote
As to pain, what would that amount to for a robotic system? Would you even want to include a pain sensation? Isn't that a cruelty?

You probably wouldn't want to put any feelings into robots, other than in a few experiments to see if it can be done. As it happens though, we don't know of any way in which it could be done. It doesn't even look possible for us to feel pain.



We have plenty of machines that are capable of making computed (i.e. conscious) decisions based on neural programming from multiple inputs, and/or majority polling to minimise errors from faulty sensors. Most untended machines have "subconscious" reflex actions.

It's a mistake to equate computed with conscious unless you are restricting it to the sense "functional". Reflex actions are just as calculated if computation is involved.

Quote
Consider a security system as previously used on the border between East and West Germany. A trip wire or light beam sensor fired a gun along the top of the fence: reflex action. Now add a fog sensor, as used in automatic weather stations, and a polling circuit that disables the light beam sensor if there is rolling fog - hard computed conscious action.

Again, the use of the word "conscious" in there is unjustified.

*

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4893
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #14 on: 14/09/2013 18:07:35 »
Quote
Consciousness is the experiencing of feelings (qualia), including feelings of understanding and feelings of awareness.

In that case there is no way of knowing whether an entity possesses it without being that entity. Any actor can lie convincingly about the feelings of a wholly fictional character, so a smiley computer could give a perfectly valid reason for you to believe that it had some feelings about something. This is a dangerous definition, as you can use it to justify the concept of untermensch - anyone whose expression of feelings differs from yours or can be dismissed (without proof being necessary) as a lie. It is very close to the Catholic translation of Genesis in which, to justify bear-baiting, only humans were ascribed a soul, despite all Hebrew versions giving all animals a nefesh.
helping to stem the tide of ignorance

*

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4175
  • The graviton sucks
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #15 on: 14/09/2013 22:01:39 »
The talk of rules is probably the biggest issue with AI. Human intelligence has few, if any, pre-configured rules as such. The young child who is told not to touch the stove because it is hot, and who touches it anyway, learns not only that this causes pain but also that it is bad to do it again. This could indicate that even the concept of pain is learnt: not because it is not pre-programmed in the nervous system, but because its understanding comes with experience. That is why pre-programming rules into an AI is back to front.

As to damage detection, human pain is the most effective damage detection there is. It certainly gets the message across. In the case of humans it is a huge problem to replace a flesh-and-blood arm. This is not so in robotics.
Fixation on the Einstein papers is a good definition of OCD.

*

Offline CliffordK

  • Neilep Level Member
  • ******
  • 6321
  • Site Moderator
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #16 on: 14/09/2013 23:01:03 »
Yes, parts are replaceable on a robot, and with a good memory dump, even the whole thing could be replaced with the "mind" re-uploaded into the new model.  However, you still may not want your robot casually destroying million-dollar hands.

There are some things that are pre-programmed such as the withdrawal of one's hand from pain, although one may learn to override those instinctive actions.

Reflexes can be problematic, such as gripping an electric power line and being unable to release it.

A month ago I was picking blackberries and thinking about the complex movement of one's hand needed to minimise being scratched while gently pulling the berries off the vine: just enough pressure to free each berry, but not so much as to smash it (and each berry slightly different).

Most animals can walk within moments of birth.  Humans are nearly helpless for at least the first year, and require care and training for an extended period.  Part of it is learning the ability to reach into the middle of the blackberry vine to pick out that one juicy berry.

*

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4893
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #17 on: 14/09/2013 23:31:47 »
Damage detection is laudable, but by no means restricted to humans. Our ability to tolerate or repair damage, however, is pathetic compared with many other species, and our ability to inflict pain on others for political gain or out of revenge for some imagined verbal insult is contemptible.

The problem we have with robots is that, even if they operate within Asimov's simplistic laws of robotics, they are physically, intellectually and morally superior to ourselves. Can you imagine a robotic version of the Spanish Inquisition? Or of Shariah law?     
helping to stem the tide of ignorance

*

Offline evan_au

  • Neilep Level Member
  • ******
  • 4309
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #18 on: 15/09/2013 10:55:27 »
Many older artificial systems (whether virtual or real) work by fixed rules.
  • However, some newer ones work by adaptable rules.
  • I'm not sure that we have reached the point where commercially available artificial systems can create new rules outside the scope of their current rules
  • ...or totally rewrite their own set of rules.

However, I am sure that if artificial systems are to take a productive part in the real world, they will need adaptable rules. After all, the environment is always changing, and for artificial systems to remain useful and productive in the long term, they must adapt to the changed environment. And it's not just the external environment - they must adapt to changes in the behaviour of their internal systems, as components age and actuators & sensors change their characteristics.

Dogs are not born with a rule that links the sound of a ringing bell with the arrival of food - but Pavlov's dogs managed to create one quite quickly!
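That rule-creation process can be sketched mechanically (hypothetical toy; the class name and threshold are invented): repeatedly pairing a neutral signal with an outcome creates an expectation that was never programmed in.

```python
# Hypothetical sketch of Pavlovian rule creation: a pairing seen often
# enough becomes a new "rule" (an expectation), outside the scope of
# anything the designer wrote in advance.

class Conditioner:
    def __init__(self, threshold=3):
        self.pairings = {}          # (signal, outcome) -> times observed
        self.threshold = threshold

    def observe(self, signal, outcome):
        key = (signal, outcome)
        self.pairings[key] = self.pairings.get(key, 0) + 1

    def expects(self, signal, outcome):
        # the rule exists once the pairing has been seen often enough
        return self.pairings.get((signal, outcome), 0) >= self.threshold
```

Only the *capacity* to form associations is built in; the bell-means-food rule itself comes from the environment, which is the distinction the dogs illustrate.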

I think the ultimate goal of a brain (and consciousness) is to predict the future as accurately as possible, so that the best actions can be taken. Rules must be adaptable to take into account additional/changed information about the present if they are to make the best predictions about the future.

*

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4893
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #19 on: 15/09/2013 12:05:07 »
Returning to my notion of conscious = computed, I am fascinated by my own ability to lob things into a wastepaper basket. Whether it is a cricket ball (you really don't want to share an office with me), a ball of paper, or a paper dart, it hits the target every time without any conscious thought. But it would take hours to write the equation of motion that described all three projectiles with the required accuracy, and nobody ever taught me how to do it - kids generally learn to throw accurately with a tennis ball, then pick up almost any projectile and make the requisite corrections for shape, mass and density (including choosing underarm or overarm delivery) without hesitation.

We know that walking upright on two legs requires a huge amount of real-time computation or some very slick distributed sensors, but that is all about self-corrective feedback in a wholly defined system. Launching a standard projectile is no problem for an automated anti-aircraft gun or a tennis practice server, but has anyone built a throwing robot that can match the adaptive skill of an average office worker? Indeed, is there any other species that can do it?
helping to stem the tide of ignorance

*

Offline David Cooper

  • Neilep Level Member
  • ******
  • 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #20 on: 15/09/2013 16:54:26 »
Quote
Consciousness is the experiencing of feelings (qualia), including feelings of understanding and feelings of awareness.

In that case there is no way of knowing whether an entity possesses it without being that entity. Any actor can lie convincingly about the feelings of a wholly fictional character, so a smiley computer could give a perfectly valid reason for you to believe that it had some feelings about something.

With a computer you can show that the claims are false by following back the trail of how they are generated, at which point you find them to be nothing more than assertions mapped to inputs on the basis of mapping rules, all done by a processing system which has no possible access to feelings.

Quote
This is a dangerous definition as you can use it to justify the concept of untermensch - anyone whose expression of feelings differs from yours, or can be dismissed (without proof being necessary) as a lie. It is very close to the Catholic translation of Genesis in which, to justify bear-baiting,  only humans were ascribed a soul, despite all Hebrew versions giving all animals a nefesh.

The "dangerousness" of a definition does not make it wrong. If an intelligent person believes he has consciousness, why would he judge that other people lack it? He may believe that they could lack it, but he would do well to think that he did not think up the idea of consciousness by himself - someone else beat him to it my many thousands of years.

It would be different for an AGI system - it would see no proof of consciousness, and it would also see good reason to consider it impossible. But unless it can actually follow back the claims we generate about experiencing feelings and show them to be fictions, it should still consider them possible: there could be some system enabling feelings which is not understood, and which may go against the normal rules of reasoning that appear to rule it out. Machines will therefore be duty-bound to treat us (and animals) as sentient, and their job will be to help all sentient things minimise harm, within a system where some harm is necessary in order for sentient things to enjoy existing (which they generally do). The machines, lacking any "I" within them, have no cause to be jealous of us or to want to join in - they will just do what we ask of them within the bounds of computational morality.



The talk of rules is probably the biggest issue with AI. Human intelligence has few, if any, pre-configured rules as such. The young child who is told not to touch the stove because it is hot, and who touches it anyway, learns not only that this causes pain but also that it is bad to do it again. This could indicate that even the concept of pain is learnt: not because it is not pre-programmed in the nervous system, but because its understanding comes with experience. That is why pre-programming rules into an AI is back to front.

You cannot program anything significant without rules, and the same applies to biological machines like us. We run on rules. Some of our rules are set up in the form of instincts, but we are for the most part more set up to be programmed by events on the basis of rules which govern the way that programming occurs. Many people believe in free will, but there is no such thing. We are constrained in everything we do by the need to do the best thing. Systems are in place to guide that, such as pain/damage-detection circuits which may drive us away from things that could be harmful. Assuming that pain is part of the system, you don't have to learn that pain hurts because that is already wired in like an instinct - it is a drive which drives you to remove it. What you learn with hot things and sharp things is that they tend to generate pain, so you learn to be careful with them, but it's all set up in advance by the pain response which is pre-programmed into the system like a rule: if a signal comes in down a pain line, the system must try to act to stop the signal (or to try to stop it getting worse, or to reduce the rate at which it is getting worse).

Quote
As to damage detections, human pain is the most effective damage detection there is. It certainly gets the message across. In the case of humans it is a huge problem to replace a flesh and blood arm. This is not so in robotics.

Is that a guess or an actual claim? It looks to me as if it should be more effective to miss out the pain step to simplify the response, thereby speeding it up and without any lessening of the response. Contact >> signal >> pain generation >> signal >> move. In a robot the process would be: contact >> signal >> move. In each case a stronger contact would lead to a stronger signal and a more forceful move at the end, with the former case generating a superfluous stronger pain along the way.




Reflexes can be problematic, such as gripping an electric power line and being unable to release it.

Isn't that just the current directly contracting the muscles and rendering the normal control inputs incapable of making them relax?




The problem we have with robots is that, even if they operate within Asimov's simplistic laws of robotics, they are physically, intellectually and morally superior to ourselves. Can you imagine a robotic version of the Spanish Inquisition? Or of Shariah law?

That is one of the fun things that will come out of AGI, assuming that it's only released in a safe form with proper computational morality programmed into it. It will be possible for machines to make individual people live by the moral code which they want to live by, just so long as it doesn't harm anyone else in ways that clash with computational morality. A religious person will then be informed by machines whenever (s)he is in breach of the moral code of his/her religion, which will in many cases be most of the time, and particularly when the laws contradict each other. (Well, the machines wouldn't actually force them to comply, unless they're trying to impose their religious laws on other people in situations where machines are unable to prevent those other people being abused through those laws.) I can imagine how religious people will spend their lives arguing with machines and being shown to be wrong over and over again - the poor fools will never win a single point in that contest and they will see their religious laws being torn to pieces wherever they clash with other laws in the same system.




Quote
I'm not sure that we have reached the point where commercially available artificial systems can create new rules outside the scope of their current rules

That sounds impossible for any system unless it is affected by interference which prevents the rules from being applied correctly, such as data/code corruption due to hardware failings or radiation. If that kind of thing is ruled out, then any rules which the system generates will have been generated within the rules of the system. This is the case even if it creates new rules by forming them from randomly selected components as it would be one of the rules of the system that it generates random rules.

Quote
...or totally rewrite their own set of rules.

That would be possible, but the new rules would be generated in accordance with the old rules, and some of the old rules would drive their own deletion or replacement.

Quote
However, I am sure that if artificial systems are to take a productive part in the real world, they will need adaptable rules. After all, the environment is always changing, and for artificial systems to remain useful and productive in the long term, they must adapt to the changed environment. And it's not just the external environment - they must adapt to changes in the behavior of their internal systems, as components age and actuators & sensors change their characteristics.

Yes - there will be rules which enable the system to be adaptable, and it's really about the ability to write new rules by experimenting with new ideas (which may be random in some aspects) and to write programs to make use of whatever is found to be useful. We are general purpose computers which are able to discover new things/ideas, integrate them into our model of reality, calculate new ideas from the introduction of that new knowledge and experiment with them in the model to see if they are useful for achieving tasks which are currently impossible or less easy.

If a new idea comes out of that process, a program will have been written to manipulate the model to do the new thing within the model in the course of working out that the new thing can be done, but the next step is to convert that program into a more efficient form which can run on its own without all the steps of its execution having to be monitored. In us, this more efficient version of the program is written in the course of practising the activity in question, setting up neural networks to do the job directly rather than running the idea in the general purpose computer where it was thought up.

When we learn a new skill we initially apply the program by running it through the equivalent of an interpreter, but once the skill has been mastered and can be done without monitoring it, it is running directly in the equivalent of machine code. In an AGI system the practice phase will not be necessary (unless it's a neural computer, in which case practice may be the best way to do the programming) - the program can be converted to machine code in one go to create the final, most efficient version.

Quote
I think the ultimate goal of a brain (and consciousness) is to predict the future as accurately as possible, so that the best actions can be taken. Rules must be adaptable to take into account additional/changed information about the present if they are to make the best predictions about the future.

Spot on.




Returning to my notion of conscious = computed, I am fascinated by my own ability to lob things into a wastepaper basket. Whether it is a cricket ball (you really don't want to share an office with me), a ball of paper, or a paper dart, it hits the target every time without any conscious thought. But it would take hours to write the equation of motion that described all three projectiles with the required accuracy, and nobody ever taught me how to do it - kids generally learn to throw accurately with a tennis ball, then pick up almost any projectile and make the requisite corrections for shape, mass and density (including choosing underarm or overarm delivery) without hesitation.

Why the bit at the top saying conscious = computed? You then describe a skill which depends on computations being done without you being conscious of them, illustrating that conscious != computed. ["!=" is used in many computer programming languages to mean "is not equal to".] These skills are not done without computations, although the maths done may not involve equations of motion at all - it could be done by building up tables of data representing distances, weights, drag and relative elevation. You practise throwing lots of different things about at all distances and heights, and you learn to do it better by building a better table of data. The table can have big gaps in it, with empty positions being calculated from the values around them, but ultimately it would provide easy access to the amount of power needed for the throw and the direction to throw it in. Adjustments would then need to be applied according to your orientation relative to the required direction of throw, so if you're going to be good at throwing over your shoulder rather than ahead, you need to practise that to find out what adjustments to make.
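To make the table idea concrete, here is a minimal sketch in Python. It is purely illustrative - the numbers, the one-dimensional distance-to-power table and the `learn` helper are all invented, and a real motor system would index on far more variables (weight, drag, elevation):

```python
# Hypothetical sketch of a learned throw table: practised distances map to
# launch power, and unpractised distances are estimated from the neighbours.
practised = {1.0: 2.1, 2.0: 3.0, 4.0: 4.6}  # distance (m) -> power (arbitrary units)

def throw_power(distance):
    """Estimate launch power, interpolating linearly between practised throws."""
    keys = sorted(practised)
    if distance <= keys[0]:
        return practised[keys[0]]
    if distance >= keys[-1]:
        return practised[keys[-1]]
    # Find the bracketing practised distances and interpolate between them.
    for lo, hi in zip(keys, keys[1:]):
        if lo <= distance <= hi:
            t = (distance - lo) / (hi - lo)
            return practised[lo] + t * (practised[hi] - practised[lo])

def learn(distance, power):
    """Add a new practised throw, filling a gap in the table."""
    practised[distance] = power
```

Practice then amounts to calling `learn` with each successful throw, which steadily shrinks the gaps that have to be filled by interpolation.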

The same thing could be done by robots, though they can calculate so quickly that it may not be necessary to build and store such tables of data - they can just work out the optimal way to perform each throw from scratch by applying equations.
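The from-scratch calculation mentioned here is just drag-free projectile ballistics. As a hedged sketch (level ground and no air resistance assumed, which a real robot would have to correct for), the launch speed for a given range R and angle θ comes from R = v²·sin(2θ)/g:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_speed(range_m, angle_deg=45.0):
    """Speed needed to throw a drag-free projectile a given horizontal
    distance at the given launch angle, assuming level ground."""
    theta = math.radians(angle_deg)
    return math.sqrt(G * range_m / math.sin(2 * theta))
```

At 45 degrees the sine term is 1, which is why that angle gives maximum range for a given speed.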

Quote
We know that walking upright on two legs requires a huge amount of realtime computation or some very slick distributed sensors, but that is all about selfcorrective feedback in a wholly defined system.

I'm not sure that it does. I suspect the problem is the responsiveness of the power systems (hydraulics or artificial muscle) and the degree of fine control over it. You can get good sensors now that are probably better than ours, but walking is essentially just a task like balancing a long rod on one end on your finger and moving it whichever way it starts to fall to stop it falling. I know a dozen good programmers who would be capable of programming a control system to handle that task as a little project - what's stopping them is the lack of good robot hardware for their code to operate on. That situation may be reaching a point of change though, and if it isn't here yet, it would still be possible to program a virtual robot in the meantime. If I had the time, I'd give that a go myself, but having to simulate the virtual robot would probably make it a much bigger project than I can justify getting involved in at the moment.
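The rod-balancing task is the classic inverted pendulum, and the control side really is small. The following is a toy simulation, not real robot code: the physics is linearised for small angles and the controller gains are invented for illustration.

```python
# Minimal inverted-pendulum balance loop: a PD controller accelerates the
# pivot in the direction of the fall, which rights the rod. Linearised
# (small-angle) dynamics; gains kp/kd are invented, not tuned hardware values.
def simulate(theta0, steps=2000, dt=0.005, kp=60.0, kd=12.0, g=9.81, length=1.0):
    theta, omega = theta0, 0.0  # tilt angle (rad) and angular velocity (rad/s)
    for _ in range(steps):
        accel_cmd = kp * theta + kd * omega        # pivot acceleration command
        # Gravity tips the rod over; moving the pivot under it cancels that.
        alpha = (g * theta - accel_cmd) / length
        omega += alpha * dt
        theta += omega * dt
    return theta
```

With the controller on, an initial tilt decays back to vertical; with the gains set to zero, the same loop shows the rod falling over, which is the whole point of the continual correction.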

Quote
Launching a standard projectile is no problem for an automated antiaircraft gun or a tennis practice server, but has anyone built a throwing robot that can match the adaptive skill of an average office worker? Indeed is there any other species that can do it?

I don't know, but I did see a robot on "Click", I think, (BBC) which was able to catch balls thrown to it - it was able to calculate where the ball was going and to get its hand in the right place to catch it every time. It could also do it with both arms at once when two balls were thrown to it. I would imagine that throwing things would be one of the next tasks they look at programming it to do, but it will be harder as it has to work out not just how to get the hand to one position, but to a series of positions which the hand has to move through at speed.
« Last Edit: 15/09/2013 17:02:45 by David Cooper »

*

Offline jeffreyH

Re: How can artificial general intelligence systems be tested?
« Reply #21 on: 15/09/2013 19:40:51 »
The talk of rules misses the multi-dimensional representation of data within the brain. The brain can retain astounding amounts of information. Its internal 'compression' mechanism makes our attempts look feeble.

*

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4893
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #22 on: 15/09/2013 21:01:05 »
Quote
With a computer you can show that the claims are false by following back the trail of how they are generated, at which point you find them to be nothing more than assertions mapped to inputs on the basis of mapping rules, all done by a processing system which has no possible access to feelings.

All this means is that you can't adequately dissect the human computation sequence because you don't know all the inputs or history. But it's quite obvious from the study of intercultural or even interpersonal differences of taste and ethics that what we call our feelings are learned rules.

Quote
You then describe a skill which depends on computations being done without you being conscious of them, illustrating that conscious != computed.

But the point made lower down is that I don't know how to compute the necessary actions "on paper", I can't explain them, and I haven't intentionally learned them. This is the difference between subconscious neural learning and conscious von Neumann thought processes.

As for bipedal walking, electroencephalography and functional MRI studies show that it really uses a lot of brainpower and it is generally accepted as one of the most difficult aspects of robotics. Its attraction is in being hugely adaptable over a variety of terrains and releasing the forelimbs of birds and some animals for other tasks. Despite the gigabucks poured into Mars rovers and bomb disposal toys, they have not progressed beyond caterpillar tracks, though the ability to sidestep or stride over a rock, or walk up stairs, would be hugely useful. No hardware problem - we have tools that can do surgery much more accurately than any human, and the cheapest autopilot can beat the most expensive pilot when it comes to maintaining height and track in moderate turbulence (in heavy turbulence the passengers get a softer ride if you fly manually and "go with the flow" a bit - "George" has no qualms about pointing directly at the ground or the sky to maintain height). But AFAIK even the slickest servos and laser interferometers can't produce a smooth walk with present levels of or approaches to computing.

The catching robot is responding to an observed trajectory, following the rules of ballistics. No problem if you always use a cricket ball or a tennis ball, and if the trajectory is long enough you could probably compute the nonlinearities of a wad of paper. But catching is a lot simpler than throwing an unfamiliar object and predicting its aerodynamics rather than observing them. 
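The observed-trajectory approach can be sketched directly: take a few sightings of the ball's height, fit the parabola through them, and read off when it will land. This toy version assumes exact measurements and no drag - precisely the simplification that breaks down for a wad of paper:

```python
# Toy catch-by-observation: fit the unique parabola through three sightings
# of a ball's height, then predict when it reaches the ground. Idealised
# (no drag, exact measurements) purely for illustration.
def fit_parabola(p1, p2, p3):
    """Return (a, b, c) of y = a*t^2 + b*t + c through three (t, y) points."""
    (t1, y1), (t2, y2), (t3, y3) = p1, p2, p3
    denom = (t1 - t2) * (t1 - t3) * (t2 - t3)
    a = (t3 * (y2 - y1) + t2 * (y1 - y3) + t1 * (y3 - y2)) / denom
    b = (t3 * t3 * (y1 - y2) + t2 * t2 * (y3 - y1) + t1 * t1 * (y2 - y3)) / denom
    c = (t2 * t3 * (t2 - t3) * y1 + t3 * t1 * (t3 - t1) * y2
         + t1 * t2 * (t1 - t2) * y3) / denom
    return a, b, c

def landing_time(a, b, c):
    """Later root of a*t^2 + b*t + c = 0, i.e. when the ball lands."""
    disc = (b * b - 4 * a * c) ** 0.5
    return max((-b + disc) / (2 * a), (-b - disc) / (2 * a))
```

Predicting the aerodynamics of an unfamiliar object before release is the harder inverse of this: there is nothing yet to observe and fit.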
helping to stem the tide of ignorance

*

Offline evan_au

Re: How can artificial general intelligence systems be tested?
« Reply #23 on: 15/09/2013 22:58:39 »
A simple case of adaptable rules which is well within our current technical abilities...
A large gun on a tank or a warship wears out with use, in a semi-predictable way.
These guns have computerised aiming system, to work out the elevation and angle to hit the target.

You could imagine several levels of adaptable rules:
  • After a certain number of shots, raise an alarm that the barrel should be refurbished or replaced.
  • Compensate for the Coriolis effect (preprogrammed correction)
  • Taking into account the number and type of shots, predict the wear, and adjust the elevation to account for it (preprogrammed correction).
  • Measure the air pressure, wind strength and direction, and adjust the aiming accordingly (taking into account the current environment)
  • Based on the error in landing point of the current shot, adjust the aiming of the next shot (real-time feedback into actions)
  • Based on the errors over a number of shots, update the model of barrel wear, adjusting in both bearing and elevation (long-term adjustment of parameters)

This example is changing the parameters which feed into fixed rules, without dynamically changing the nature of the rules themselves.
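The real-time feedback level above can be sketched in a few lines: fold a fraction of each observed miss back into the next shot's aim. The gain value and the fixed-bias gun model below are invented for illustration, not real fire-control practice:

```python
# Minimal real-time feedback: after each shot, shift the aim point by a
# fraction of the observed miss. A gain below 1 damps overreaction to noise.
def correct_aim(aim, observed_error, gain=0.5):
    """Return the next aim offset, given where the last shot landed
    relative to the target (positive error = shot landed long)."""
    return aim - gain * observed_error

def converge(bias, shots=20):
    """Simulate a gun with a fixed unknown bias; the aim walks onto target."""
    aim = 0.0
    for _ in range(shots):
        error = bias + aim          # where the shot lands relative to target
        aim = correct_aim(aim, error)
    return aim
```

After a couple of dozen shots the aim offset has effectively cancelled the bias, without the controller ever knowing what caused it - which is why this level needs no prior knowledge of the gun's history.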

AGI requires the ability to create new rules, or incorporate additional factors into the existing rules, beyond those considered by the original designers. Biological brains seem to be able to detect correlations "from any input to any result", and if they prove significant, create a behaviour-affecting rule to utilise this correlation.
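A toy version of "detect a correlation from any input to any result, then promote it to a rule" might look like the following. The sample threshold and correlation cutoff are invented, and a real system would need far more care about confounding and significance:

```python
# Toy correlation-to-rule promotion: track paired observations of an input
# and an outcome; once their Pearson correlation is strong enough over
# enough samples, promote the link to a behaviour-affecting rule.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def maybe_promote(xs, ys, min_samples=10, threshold=0.9):
    """Return a rule description if the correlation is strong, else None."""
    if len(xs) < min_samples:
        return None
    r = pearson(xs, ys)
    if abs(r) >= threshold:
        return "input predicts outcome (r=%.2f)" % r
    return None
```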
« Last Edit: 15/09/2013 23:09:21 by evan_au »

*

Offline alancalverd

Re: How can artificial general intelligence systems be tested?
« Reply #24 on: 16/09/2013 07:17:32 »
Quote
Based on the error in landing point of the current shot, adjust the aiming of the next shot (real-time feedback into actions)

This is unique among your suggestions as the only one that does not require prior knowledge of anything except the general laws of ballistics, and can therefore be applied to all guns regardless of their history. Even so, naval gunners, who specialise in very long range ballistics, adjust their range principally by calculated elevation, but use a lot of cunning and guesswork (though they call it realtime meteorology) to apply wind corrections.   

*

Offline David Cooper

  • Neilep Level Member
  • ******
  • 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #25 on: 16/09/2013 20:32:51 »
Quote
With a computer you can show that the claims are false by following back the trail of how they are generated, at which point you find them to be nothing more than assertions mapped to inputs on the basis of mapping rules, all done by a processing system which has no possible access to feelings.

All this means is that you can't adequately dissect the human computation sequence because you don't know all the inputs or history.

The bit you quoted is talking about computers and how you can follow back the trail to find out how any claims it produces about feelings can be shown to be false; that even if there are feelings somewhere in the system, there is no possible causal connection between them and the data generated that claims to document their existence. With humans it's much harder to try to follow the trail because it isn't accessible in the way that all program code and data are within a computer. A hundred (or maybe a thousand) years from now it might be possible to follow the trail in our brains too and to see if feelings really have a role in the system.

Quote
But it's quite obvious from the study of intercultural or even interpersonal differences of taste and ethics that what we call our feelings are learned rules.

Ethics have to be worked out and there are endless ways of doing so badly, so we have lots of people tying themselves to different codes of behaviour according to culture. That isn't a surprise, but tastes in music and food are also affected by culture, showing that any pre-programmed likes and dislikes that we are given via DNA are possible to override to different degrees, to the point that the music loved by one generation may be hated by the next/previous one, while whole cultures may hate foods that other cultures enjoy, regardless of the genetic origins of individuals who are not typical in that culture.

Quote
Quote
You then describe a skill which depends on computations being done without you being conscious of them, illustrating that conscious != computed.

But the point made lower down is that I don't know how to compute the necessary actions "on paper", I can't explain them, and I haven't intentionally learned them. This is the difference between subconscious neural learning and conscious vonNeumann thought processes.

The difference is not that one system involves computations and that the other doesn't though. Both use computations, so if computed=conscious, both systems must be conscious. The systems working in the background do calculations without the main system (which monitors all the rest) seeing all the fine detail, and a lot of that fine detail cannot be accessed by any system in the brain as it's trained into complex pieces of neural net which simply do things without having the capability to report how they do them. When the main system does something, all the steps are visible to it, but that also makes it slow at doing things because it is effectively running programs by interpreting them step by step, and importantly it only ever does monotasking.

That is why learning a new skill is hard - you may have to multitask to do something, but to begin with you can only monotask, so you have to train up neural nets to automate parts of the task so that they can be run simultaneously in the background, one of them optionally still being run by the main system at times, or you might switch it around to concentrate on whichever task is most in need of further improvements in its automation from moment to moment. Eventually the entire process is automated to the point that you can perform it without thinking about it and use the main system to do something else entirely at the same time. By this point, you have lost track of what all the other systems are doing, and you may in time even forget how they work entirely.

Quote
As for bipedal walking, electroencephalography and functional MRI  studies show that it really uses a lot of brainpower and it is generally accepted as one of the most difficult aspects of robotics.

If you look at most walking robots, they never get unbalanced - they aren't attempting to do proper walking. I suspect that's because the motors are too slow and they can't react in time to correct with sufficiently high precision when the robot starts to topple. Some of the most recent ones can run, so they have probably reached the point where they could be programmed to walk just like we do, and I expect to see that becoming the norm soon. There will be a lot of processing going on, but I can't see why the algorithms themselves should be particularly difficult. If the robot is falling forwards, the position to move one foot to can be calculated by placing it ahead in the direction of the fall and slightly to the left/right to steer the robot to the right/left of that line. It then has to absorb the impact energy and apply the right amount of force to avoid the leg collapsing under the load.

I've just done an experiment with walking where I simplified things a bit by keeping each leg completely straight whenever the foot at the end of it is in contact with the ground. This isn't our normal way of walking, but it actually works very well and would be a good starting point for programming a robot to walk. Start by balancing on one leg. You can maintain balance by applying forces through the toe/heel/sides of your foot on the simple basis that if you're falling one way, press down harder with that end/side of the foot. [Note that this is much harder with your eyes shut - we use visual input to detect whether we're starting to fall and which way we're going, and while we can do this through pressure sensors in the foot as well, it is much slower.] Now allow yourself to fall forwards to land on the other foot, making sure that leg is straight before its foot contacts the ground. At the last moment, just before this foot hits the ground, push up and forwards with the rear foot - this will provide sufficient momentum for you (or the robot) to arc its centre of gravity across over the forward foot once it is planted on the ground, and the subsequent speed of this movement forwards can be further controlled by applying forces to toe and heel of that foot while moving over it, thereby allowing corrections if the launch off the other foot was too strong or too weak (and future launch forces can be modified accordingly to reduce the need for such corrections the next time). This algorithm is very simple, the leg only bending while moving forward in the air so as not to hit the ground, but the rest of the time it is completely straight (whenever the foot is on the ground). There is some freedom with the side-to-side foot placement - the right foot may land to the right of the centre line so long as the left foot compensates by landing to the left of that line on the next step, the result being that the robot will wobble from side to side as it goes along. 
Alternatively, each leg can swing round the other and the foot can be planted on the centre line each time, or to one side of it for steering purposes.

That's a simple algorithm which would provide better walking than you see in most robots, because those robots don't walk by falling forwards. What you normally see with robots is that they plant one foot on the ground ahead, then transfer their weight from back foot to front while both are on the ground, and then they lift the rear foot once the weight of the robot is balanced securely on the front foot. For good walking it should not be balanced in that way - it should flow along through a series of falls.
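The falling-forward gait described above reduces to a short cycle of phases per footfall, alternating stance legs. This is only a structural skeleton with invented phase names - real code would hang the balance corrections and launch-force calculations off each phase:

```python
# Skeleton of the straight-leg gait as a repeating phase cycle. Each footfall
# runs the same four phases with the stance leg swapping each time; phase
# names are illustrative, not from any real controller.
PHASES = ["balance_on_stance_foot", "fall_toward_swing_foot",
          "push_off_rear_foot", "plant_swing_foot_straight"]

def gait_cycle(steps):
    """Yield (step_number, stance_leg, phase) for the requested footfalls."""
    stance = "left"
    for step in range(steps):
        for phase in PHASES:
            yield step, stance, phase
        stance = "right" if stance == "left" else "left"  # swap legs
```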

A better walking algorithm (as used by us) would involve more complexity than the simple algorithm described above, because the knee does bend while the foot is on the ground, but I'm having trouble monitoring exactly what it does without affecting what it does. I would need to look at slow-mo video from the side to get a better idea of what's happening, but the knee may bend while moving the centre of mass over a foot in order to reduce the up and down movement, as well as absorbing impact energy on landing and applying power on launch. I think the knee is normally slightly bent on landing to enable immediate absorption of impact energy. Further steering inputs can be made by applying forces to the side of a foot. Even with this way of walking, the algorithms are pretty simple and should not be hard to program.

Complications are of course added when you take into account where the robot is to go and how it is to find its way there, but the actual walking algorithms should not need input from vision when walking on a flat floor - you can walk perfectly well in pitch darkness and a robot should be able to do the same, so it will be using sensors to keep on top of its orientation and balance, with these inputs leading to calculations which increase or decrease the amount of power applied on launch off one foot and adjustments (pressing to side, heel or toe) during movement over a foot.

Further complications come into play when the ground isn't flat - there may be different levels to step on, they may slope, and indeed they may be shaped in complicated ways that reduce the area of the foot that will touch down, thereby taking away some of the controls for adjustments after launch, so the launch energy needs to be calculated with greater precision. To handle such terrain in the dark is not easy for us, so it will likewise be hard for robots - we often fall over in such circumstances. If it isn't dark though, we find it pretty easy, with practice. Robots would need to generate a 3D model of the terrain ahead of them to work out where to place feet, how to orientate them, how much launch energy to use (to handle a change in elevation), and how much adjustment control will be available while balanced on one foot. That would be harder to program, but I can already imagine how it would be done.

Quote
...though the ability to sidestep or stride over a rock, or walk up stairs, would be hugely useful.

The American military (I think) has a bipedal machine that can run over debris. A bit of video was released, but I wasn't able to watch it due to a slow Net connection. Clearly they have access to the best hardware, and cost is no limit to them.

Quote
No hardware problem

I think the problem is primarily hardware - the algorithms don't look particularly hard to me, but you would need a robot that can respond quickly and with precision, and it needs good sensors as well. Once you can buy a robot which meets those requirements, it looks as if a school computer club could program them to walk around on flat floors with no obstacles.

*

Offline alancalverd

Re: How can artificial general intelligence systems be tested?
« Reply #26 on: 16/09/2013 23:50:26 »
We're wandering a bit off topic here but it's fun. The problem with bipedal standing, is that a body supported on two pivots below its center of gravity is inherently unstable, so standing still is an active process, requiring continual adjustment of muscle tone - hence the large amount of brain power needed by bipeds. Walking is slightly easier to compute because as you say it is a process of continually falling forward and arresting the fall, and can be achieved with fewer muscles. There are some cunning passive walking frames that allow a partially paralysed person to walk by leaning forward and rocking from side to side, but in every demonstration I have seen, the user had to use his hands to stand still.

It's interesting to play with a pogo stick, where the range of actuator capability is reduced to leaning and bouncing: most people can learn to move around quickly and accurately, but standing still on one spot is extremely difficult.  Interestingly, one of the earliest robots to walk up stairs was a pogo-monopod, very efficient but it couldn't stand still. At the other end of the scale of complexity there are plenty of insect-mimicking toys that stand and walk entirely open-loop (i.e. with no feedback)  because they always have 3 feet on the ground and are therefore inherently stable. 

Walking around on a flat floor with no obstacles is pretty pointless for a robot. In such a low-impedance environment, wheels are much more efficient.

*

Offline David Cooper

Re: How can artificial general intelligence systems be tested?
« Reply #27 on: 17/09/2013 16:24:10 »
Quote
The problem with bipedal standing, is that a body supported on two pivots below its center of gravity is inherently unstable, so standing still is an active process, requiring continual adjustment of muscle tone - hence the large amount of brain power needed by bipeds.

That's a good point. A robot can lock itself in position and not use any energy to stand still, but it would have to be ready to unlock fast in case the wind starts to blow it over. We can lock our knees straight well enough (though I think some power has to be applied constantly to do so), but we still move around a bit, probably because the ankles aren't able to lock into an end-of-travel position.

Quote
Walking is slightly easier to compute because as you say it is a process of continually falling forward and arresting the fall, and can be achieved with fewer muscles.

There's another factor I've thought of for helping to arrest the fall: when the mass of a leg is swung forwards, it slows the forward movement of the rest of the body. That would not happen with ultra-lightweight legs.

It's also worth considering walking on stilts where there is no ability to make adjustments by applying pressure from the sides or different ends of the foot - this makes the precision of the launch energy more critical. A robot with three stilt-like legs could be quite good at walking (on two legs) and standing still (on three). There wouldn't need to be a knee if the legs are telescopic so that they could shorten for moving forwards and lengthen to apply launch energy.

Quote
It's interesting to play with a pogo stick

That is a useful next step to thinking about how running works.

Quote
Walking around on a flat floor with no obstacles is pretty pointless for a robot. In such a low-impedance environment, wheels are much more efficient.

It's a useful step though towards getting it to walk over rough terrain. Get the walking on the flat sorted first, and then add lidar or a couple of webcams and try to calculate the best places for it to stand when there are obstacles everywhere. That would be a much tougher thing to program, even if you can afford lidar. I'm planning to work on the two webcam approach for vision and have thought about how to go about it quite a bit, but I think the pattern recognition side of it will take a lot of time to work out - this is needed to match up the same point in the two images so that its distance can be calculated, but even after that you have to model the whole scene and make sense of all the different surfaces, and work out which should not be stood on, so it's going to be a major undertaking. I'm also years behind other people in doing that kind of work and may not be able to catch up, so it may be better not to start on it. I'll see how I feel about that when my other work's finished and out of the way.
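For what it's worth, once the pattern-recognition problem is solved and the same point has been matched in both images, the distance calculation itself is the easy part: for two parallel cameras, depth = focal length × baseline / disparity. A sketch with invented camera numbers:

```python
# Depth of a matched point from a parallel two-webcam rig: Z = f * B / d,
# where f is the focal length in pixels, B the camera separation in metres,
# and d the horizontal pixel disparity. Camera numbers are invented examples.
def depth_from_disparity(x_left, x_right, focal_px=800.0, baseline_m=0.1):
    """Distance (m) to a point seen at pixel column x_left in the left
    image and x_right in the right image."""
    disparity = x_left - x_right  # positive for points in front of the rig
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or a mismatch")
    return focal_px * baseline_m / disparity
```

The hard part remains exactly as described: finding which pixel in the right image corresponds to which pixel in the left.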
« Last Edit: 17/09/2013 16:27:29 by David Cooper »

*

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4893
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #28 on: 17/09/2013 18:01:30 »
You won't get very far playing rugby or catching rabbits if you have to look at the ground when you are running. Animals are extremely adaptable to traversing rough terrain without looking at their feet! It's all done by baroreceptors and extensometers, not the eyeball. 
helping to stem the tide of ignorance

*

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4175
  • The graviton sucks
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #29 on: 17/09/2013 22:52:47 »

I'm planning to work on the two webcam approach for vision and have thought about how to go about it quite a bit, but I think the pattern recognition side of it will take a lot of time to work out - this is needed to match up the same point in the two images so that its distance can be calculated, but even after that you have to model the whole scene and make sense of all the different surfaces, and work out which should not be stood on, so it's going to be a major undertaking. I'm also years behind other people in doing that kind of work and may not be able to catch up, so it may be better not to start on it. I'll see how I feel about that when my other work's finished and out of the way.

I have already worked out pattern recognition and thought about stereoscopic vision. Maybe we should share ideas? :-)

I can pick a moving shape out of the background and isolate it.

BTW I also have ideas on focal point adjustment for a vision system.
« Last Edit: 18/09/2013 02:50:00 by jeffreyH »
Fixation on the Einstein papers is a good definition of OCD.

*

Offline David Cooper

  • Neilep Level Member
  • ******
  • 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #30 on: 18/09/2013 13:29:17 »
You won't get very far playing rugby or catching rabbits if you have to look at the ground when you are running. Animals are extremely adaptable to traversing rough terrain without looking at their feet! It's all done by baroreceptors and extensometers, not the eyeball.

If you're playing rugby you can usually assume the ground is fairly flat and that there's little need to look at it as a result. I was imagining something more challenging like a rocky shore where a misplaced foot could easily result in a bad slip leading to you falling onto sharp things and into rockpools. You can also sprain an ankle very easily, so you have to look where you're going. If you're running along a path through a wood you also have to look at the ground to avoid tripping over roots and large stones, though in this case you can do most of the work with peripheral vision. In the dark though, you will trip over things if you try to go fast.

I've never tried to catch rabbits, but the kinds of animals that chase them tend to have four legs and those legs are designed quite differently with more pointed ends, making them more like retractable sticks. This may make foot placement less critical for them, but they're still better at moving over rough terrain when they can see where they're going. The worse the terrain, the greater the need for vision, as you should realise when you take the terrain to extremes and think about goats walking around on the faces of cliffs.

*

Offline David Cooper

  • Neilep Level Member
  • ******
  • 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #31 on: 18/09/2013 15:31:30 »
I have already worked out pattern recognition and thought about stereoscopic vision. Maybe we should share ideas? :-)

I'm happy to share ideas in any area where we are behind the competition, but if you're at the cutting edge with anything you might want to keep those ideas to yourself as they may be too valuable to give away freely. Some people are happy to give their work away, while others are not, but it's entirely up to the person who has an idea as to whether to share it, as there is no moral obligation on him/her to do so, particularly in a field where most of this stuff could be used for highly undesirable purposes if it gets into the wrong hands.

Quote
I can pick a moving shape out of the background and isolate it.

Picking a moving shape out of a non-moving background should be easy as you just look for the pixels that change (though you'd need three frames to know which way the moving object is moving through the changing bit), but can you do it if the whole background is moving as well (which it will be if the robot is moving)? Even if you can't, you still have something useful though as you should be able to measure the size of the moving part of the image and determine whether it's getting bigger/smaller or staying the same, as well as looking at how fast it's going sideways. If it's getting bigger and not moving across the screen, it means it's coming straight towards the camera and a collision may occur.
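A minimal sketch of the three-frame idea, using tiny greyscale frames as plain lists (a hypothetical setup, just to make the point concrete): a pixel counts as moving only if it changed between both frame pairs, which discards the "ghost" left behind at the object's old position.

```python
def moving_mask(f0, f1, f2, thresh=20):
    """Mark pixels of the middle frame as moving only if they changed in
    both frame pairs; differencing just two frames can't tell the object's
    new position from the gap it left behind."""
    h, w = len(f1), len(f1[0])
    return [[abs(f1[y][x] - f0[y][x]) > thresh and
             abs(f2[y][x] - f1[y][x]) > thresh
             for x in range(w)] for y in range(h)]

# A bright blob moving right one pixel per frame:
f0 = [[0, 0, 0, 0]]
f1 = [[0, 200, 0, 0]]
f2 = [[0, 0, 200, 0]]
# moving_mask(f0, f1, f2)[0] is [False, True, False, False]:
# only the blob's position in the middle frame is flagged.
```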

The role of pattern recognition here would be to identify the same item (or part of an item) in two frames which are sufficiently different that you can't find them in the same pixel locations, and the shape of those items will not be the same in each frame, so it's not going to be easy to work out how to handle it. I'd be interested to know how far you have got with this, but you probably don't want to share your actual algorithms. So far I've only worked with mono images and have made very little progress with that. Working with stereo images might be more rewarding as you could start to build a 3D model of the scene from them fairly quickly, giving you something similar to a model generated by lidar. What I would try to do is isolate a distinctive part of the scene in one image on the basis of its colour/shade and then look for the best fit for it in the other image, shifting its position around (from side to side - no up and down movement is required) until the best fit for it is found - this would require many thousands of pixel comparisons and scores for similarity being counted up, but it ought to end up generating relative-distance tags to tie to different parts of the scene. This could be done on different scales, starting with the big chunks and working down to smaller ones, prioritising the processing of smaller ones on those areas that are determined in some way to be of most interest. Then again though, comparing a few large things could be just as processor intensive as comparing a large number of small things, so it may be better to work with small ones first, looking for areas where there are clear lines of change within them (running generally vertically) so that those will show up well when they're overlapping best.

Part of the problem with working with high-resolution images is that they're slow to process. Ideally you'd have a variety of cameras with different resolutions to work with so that you can work at the lowest resolution first to get the large-scale 3D layout worked out from that with a relatively small amount of processing, but if you only have high-resolution images to work with you'd have to do a lot of processing to generate low-resolution versions of them first, and that would cost as much processor time as it would save afterwards by working at low-resolution. For this reason, I'm now thinking (as I write this) that working on a small scale may be the best approach for the initial analysis, perhaps working with 8x8 pixel blocks. There could be alignment difficulties with repeating patterns where there are multiple good matches for each block, so you'd need to store multiple best fits for each block and then look for places where there is only one best fit to use to help determine the most likely one of the best fits for all those blocks where there are multiple best fits.
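A minimal sketch of the 8x8 block-matching step described above, under the assumption of level, rectified cameras (so the search is purely side to side); the synthetic image pair is invented for illustration.

```python
def block_disparity(left, right, y, x, block=8, max_d=32):
    """Search horizontally for the shift that best matches an 8x8 block of
    the left image within the right image, scoring by sum of squared
    differences.  Assumes rectified images, so no vertical search needed."""
    best_d, best_ssd = 0, float('inf')
    for d in range(max_d + 1):
        if x - d < 0:
            break  # shifted block would fall off the edge of the image
        ssd = sum((left[y + j][x + i] - right[y + j][x - d + i]) ** 2
                  for j in range(block) for i in range(block))
        if ssd < best_ssd:
            best_ssd, best_d = ssd, d
    return best_d

# A textured synthetic pair in which the whole scene has a uniform
# disparity of 4 (each left-image point appears 4 pixels further left
# in the right image):
left = [[(7 * x + 3 * y) % 50 for x in range(40)] for y in range(12)]
right = [[left[y][x + 4] if x + 4 < 40 else 0 for x in range(40)]
         for y in range(12)]
# block_disparity(left, right, 2, 10) recovers the shift of 4.
```

With a repeating texture the SSD can have several near-equal minima, which is exactly the multiple-best-fit ambiguity mentioned above; the texture here was chosen with a long enough period that the minimum within the search range is unique.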

Quote
BTW I also have ideas on focal point adjustment for a vision system.

I'm not sure what that means.

___________________________________________________


On the walking robot subject, another thing that needs to be controlled is horizontal rotation, and this could be done at the ankle. We appear to do this rotation using the whole of the lower leg, but in a robot that would not be necessary, although it may be the most efficient way to do it if artificial muscle is used - copying the designs of nature is often a good starting point. This horizontal rotation is important if you don't want the robot to be restricted to walking in a straight line, because although it can steer by placing a foot to one side and falling the other way to change its direction of travel, it will always be lined up in a single direction and it will be increasingly difficult/impossible to travel in directions other than that as the angle increases. Rotating the robot at the ankle will fix that.

I'm tempted to write a robot simulator and to make it available as an x86 32-bit-mode binary blob which can run in multiple operating systems, but I can't promise I'll find the time to do so any time soon. The idea would be to use a square of the screen as a plan view with the robot coming back in at the top if it walks out at the bottom of the box. Under that box would be a side view. The robot itself would in its simplest version just be three dots which mark the ends of legs (two at the foot end which would just be a point, one red and one green, and one at the top where they join the body - they can share the same location even though it would not be practical to build a robot quite like that for real as parts of them would have to move through each other like ghosts, but that doesn't matter in a model like this) and a fourth dot at the top of the body. Head and arms would not be necessary (or can be thought of as being part of the body, the top of which is the head). The mass and length of each section would be programmable so that you can experiment with a range of robot designs, with the masses either being located at the four points indicated or maybe at points midway in between them - that's something I'd need to work out carefully. The lengths of the legs would be varied by telescoping them for simplicity (legs that bend at the knee really just do the same job in a more complex way). The controls would be for fore and aft movement at the hip; sideways movement at the hip; lengthening/shortening of each leg; and horizontal rotation of the foot (which would be regarded as sufficiently non-point like and grippy to resist rotation against the ground), though this rotation could actually be left out in this simple model as the legs join the body at a single point and can pass through each other. 
Best to include the rotation anyway, I think, so that it's already covered when a new version of the model is made later; that means there needs to be some way of indicating the front of the body.

The model for the environment could be based on squares which can be set to different altitudes, and the robot will be able to read their locations directly such that it doesn't need vision to know where they are. The legs could pass like ghosts through any edges with only the points counting for contact. More complex models can be designed later, but the idea here is to create a simple one for working out the basics before getting tied up in extra complexity.

The physics would take a fair bit of working out, and part of the job there would be to provide sensor information which the binary blob would make available through variables. The person writing program code to control the robot would then be able to use the variables to read the sensors and write input values to other variables for the motors to act on, that being the only way to control the robot. Variables would also pass information about the current state of each motor and joint so that the program controlling the robot knows the orientation of all joints, the amount of power being applied by each motor and the speed of actual movement at each motor. All the variables would be displayed on the screen throughout and the program could be run slow or halted at any time to examine them.

It would be a lot of fun to do all that, but I can't justify putting in the time to do it at the moment because working out the physics could be hellish, not just for making it behave correctly but for generating correct values for the sensors, and I haven't thought yet about how many sensors would be needed, what kinds of sensors they should be, and where to place them. That's the part where the project would likely get bogged down.

*

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4893
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #32 on: 18/09/2013 17:11:46 »
I think my earlier point is made - the ability to pick up a completely novel object and chuck it into a waste bin requires a phenomenal amount of linear computing and unthinkable subtlety of sensors and servos, but we do it without conscious thought! 
helping to stem the tide of ignorance

*

Offline AndroidNeox

  • Sr. Member
  • ****
  • 259
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #33 on: 18/09/2013 23:49:18 »
Perhaps we should rely on the putative artificial intellect to think of its own arguments to convince us that it's aware.    ;)

*

Offline David Cooper

  • Neilep Level Member
  • ******
  • 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #34 on: 20/09/2013 17:40:45 »
More on the simulated robot idea:-

Actually, the sensors wouldn't be needed in the simplest version as it would be possible with a simulated robot just to read the coordinates of its location directly to determine how parts of the robot are aligned and moving. Later on, sensors can be simulated and the values from them can be used instead of reading the coordinates directly. The robot control software can then generate its own theory as to what the coordinates are, though it would need to be fed an average position to keep it in touch with where it is in the terrain.

We have balance sensors in our heads made of circular tubes containing fluid, with hairs that detect its movement. The signals sent from those would make it easy to tell if the robot is falling over sideways, but I don't know if robotic sensors of that kind have ever been made. Accelerometers are available though, and I'm guessing that they provide three values to indicate the force across them in three directions. If they're falling, all the values will be zero. Most of the time they will indicate which way is down, but when other forces apply they will show which way that part of the robot has started to move and how fast. That movement would be assumed to continue until an opposing force is detected, although any rotation of the robot that results from the movement needs to be taken into account. A minimum of four accelerometers would be required for the simple robot model described earlier: one at the point where both legs connect to the body, one at the top of the body, and one at each foot.
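As a sketch of how a static accelerometer reading can be turned into tilt (assuming the robot is otherwise unaccelerated; in free fall all three values read zero and tilt is unobservable from this sensor alone):

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll (radians) from one static 3-axis accelerometer
    sample in m/s^2.  Only meaningful when gravity is the dominant force
    on the sensor; otherwise the reading mixes tilt with real motion."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll

# Upright and still: gravity along z, so both angles are zero.
# tilt_from_accel(0.0, 0.0, 9.81) -> (0.0, 0.0)
```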

If a stepped terrain is in use, the average robot location still needs to be passed to the software controlling it so that it can keep track of where it is, unless it is to stumble around in the dark.

So, it would be relatively easy to create a simple robot simulation program as a binary blob which other software could interact with to make the robot walk. In its initial form there would be no simulated sensors, but four sets of coordinates would simply be read directly (and repeatedly) to determine what the robot is doing. Later versions could add four simulated accelerometers, and software to control the robot would then be rewritten to work with that data instead, after which it should be able to control a real robot compatible with that virtual design. There would be 8 motors to control (using +/- values in read/write variables to make them move in different directions and at different speeds), and there would be 8 read-only variables which report back their positions. Power can be applied without the position values changing if the limits of movement have been reached. The terrain can be read directly, and the position of the robot is available via the coordinates of four positions. [In later versions of the robot simulator, only one coordinate would be given for the robot and that would be for its centre of mass - it would be up to the control software to read the simulated accelerometers and the motor positions to calculate the orientation of the robot at any time.] [With a real robot, it would be harder to work out where the robot is relative to any terrain without adding vision to it, but that can wait anyway - what matters is to program it to walk first and only worry about extending the capability after that.]
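A minimal sketch of how control software might talk to the variable interface described here. All the names are invented for illustration; the real simulator's interface would be whatever the binary blob exposes.

```python
class SimRobot:
    """Stand-in for the proposed binary-blob interface: read-only joint
    positions, writable power demands, and four accelerometer triples."""
    def __init__(self):
        self.motor_power = [0.0] * 8          # written by the controller
        self.motor_pos = [0.0] * 8            # reported by the simulator
        self.accel = [(0.0, 0.0, 9.81)] * 4   # hip joint, body top, feet

def control_step(robot, target_pos, gain=0.5):
    """One tick of a naive proportional controller: demand power in
    proportion to each joint's position error.  A real walking controller
    would be far more involved, but it would talk to the same variables."""
    for i in range(8):
        robot.motor_power[i] = gain * (target_pos[i] - robot.motor_pos[i])
```

The point of keeping the interface this narrow is that control code written against it should transfer to a real robot with the same motor and sensor layout, as suggested above.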

*

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4893
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #35 on: 20/09/2013 18:59:16 »
Just returned from an instrument flying session. Our semicircular canals only detect acceleration, so no problem simulating them with accelerometers - I don't expect a walking robot to be able to fly a plane with its eyes shut (I have a perfectly good autopilot that does that!)

The cunning thing about the human nervous system is that it automagically adjusts to keep the ears (the tilt sensors) "above" the hip joints.  Sprinters start with a pronounced forward lean as they accelerate, and become more upright at full speed. I think if you watch a normal bipedal gait very carefully you will see that the head actually leads the movement - the body intentionally falls forward then stops itself by swinging a leg forward. It's interesting that babies have a walking reflex long before they can stand still!
helping to stem the tide of ignorance

*

Offline David Cooper

  • Neilep Level Member
  • ******
  • 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #36 on: 21/09/2013 16:06:24 »
Our semicircular canals only detect acceleration, so no problem simulating them with accelerometers

It appears from a bit of googling that the semicircular canals only detect rotation and cannot serve as linear accelerometers. However, the utricle and saccule (which I had not heard of before) do appear to be accelerometers, the latter being more sensitive to vertical movement and the former to horizontal (no info on whether it's better at fore/aft or side-to-side acceleration). A BBC science page attributes a different function to the utricle and saccule, claiming they detect head tilt, but I suspect Wikipedia's more accurate on this.

The calculations are quite different depending on whether you're getting input from rotation detectors or from linear accelerometers, but everything can be done with linear accelerometers, and I suspect that's all that's available for robotics.

Quote
Sprinters start with a pronounced forward lean as they accelerate, and become more upright at full speed. I think if you watch a normal bipedal gait very carefully you will see that the head actually leads the movement - the body intentionally falls forward then stops itself by swinging a leg forward.

It's necessary to avoid falling over backwards - the higher the acceleration, the further forward you have to lean. Once moving at a constant speed there is no need to lean forward. For deceleration you have to lean backwards.
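The required lean follows from balancing torques about the foot: the ground reaction force must pass through the centre of mass, giving tan(θ) = a/g. As a rough worked example, accelerating at a full g would require a 45° lean.

```python
import math

def lean_angle(accel, g=9.81):
    """Forward lean from vertical (radians) that keeps the ground reaction
    force through the centre of mass at horizontal acceleration `accel`:
    tan(theta) = a / g.  Negative accel (braking) gives a backward lean."""
    return math.atan2(accel, g)

# Accelerating at one g needs a 45-degree lean; constant speed
# (accel = 0) needs no lean at all; braking gives a negative angle.
```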

_________________________________________


Another thought on vision: three cameras would be better than two. If you're trying to judge the distance to horizontal lines crossing ahead of you and there's no texture on those lines, it will be much easier to judge those distances with two cameras one above the other rather than side by side.

*

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • 4893
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #37 on: 21/09/2013 17:53:38 »
Apols for not distinguishing between linear and rotational accelerometers

http://www.robotshop.com/sensors-gyroscopes.html will provide you with neat solid-state rotational accelerometers. Friends from the aerospace industry have been working on these for ages, looking for medical applications.

Quote
For deceleration you have to lean backwards.

And that's exactly what runners do after they have crossed the line. Less noticeable in 100m or shorter races where you may still be accelerating at the finish line, but above 200m you will be running at a fairly constant maximum speed at the finish, so you lean back to slow down.   


Three cameras? Probably not necessary. No raptor has evolved a fully functional third eye. Worth reading texts on night and mountain flying to see how humans have to adjust for lack of texture and distorted perspective when approaching a runway.

I've just had an interesting discussion with a builder. We are replacing some rotten wooden pillars with steel, in a barn built on a gently sloping concrete apron. The barn floor also slopes - useful for washing down horses and tractors. If you stand on the apron or inside the barn your semicircular canals adjust and you can swear that the steel columns are about 3 degrees off vertical, but a plumb line says they are perfect.   
helping to stem the tide of ignorance

*

Offline David Cooper

  • Neilep Level Member
  • ******
  • 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #38 on: 22/09/2013 19:08:01 »
Apols for not distinguishing between linear and rotational accelerometers

It was all my fault for not thinking that rotational ones would be classed as accelerometers.

Quote
http://www.robotshop.com/sensors-gyroscopes.html will provide you with neat solid-state rotational accelerometers. Friends from the aerospace industry have been working on these for ages, looking for medical applications.

Those are very affordable - it would be fun to play with them, so I've bookmarked that site. I think I should leave it to other people to build robots though and restrict myself to thinking about writing control software for them, so the best way forward would be to write a robot simulator and work towards making both kinds of accelerometer available in it so that software can be written to try to work with one type or the other, or a mixture of both. That would make it more likely that it would work with little modification on a wide range of actual robots. I expect gyroscopes will be rare in robots though as they'll wear out and drain more power, but it's worth being able to use them if they are there.

I'm still not keen to start writing a robot simulator just yet - I don't know how much work would be involved in getting the physics right. A real robot would behave the right way without any effort as the laws of physics are provided for free, but a simulated one has to be programmed to fall over correctly.

Quote
3 cameras? probably not necessary. No raptor has evolved a fully functional third eye.

It would be just about impossible to evolve an extra one in the right place - we've only evolved stereo vision by the luck of having two eyes already: stereo was found to be useful in the area of overlapping vision between the two eyes so predators evolved a greater overlap at the expense of losing the all-round vision that's more important to prey species. Most of the gains were made at that point and there would be a lot less to gain from adding a third eye, but there would be some rare situations where it could be useful. If you think about applying machine vision to tasks like driving, most of the lines of texture on the road run from side to side, so it would be much easier to judge distances to those lines by putting one camera above the other rather than side by side. I've just done the experiment with two horizontal strings just in front of my eyes (actually a loop of string going round a finger at each end), too close to focus on such that the texture is lost. It's hard to judge which string is nearer. Turn them vertical and it's suddenly very clear which one is further away and by how much. It is not a small difference.
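The string experiment can be reproduced numerically with toy images (wrap-around shifts are used purely to avoid edge effects): for a feature that only varies vertically, such as a horizontal stripe, every sideways shift matches equally well, so a side-by-side camera pair gets no depth signal from it, while vertical shifts give a sharp best match.

```python
def ssd(a, b):
    """Sum of squared differences between two equal-sized images."""
    return sum((pa - pb) ** 2
               for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def cshift(img, dx, dy):
    """Circularly shift an image by (dx, dy), wrapping at the edges."""
    h, w = len(img), len(img[0])
    return [[img[(y - dy) % h][(x - dx) % w] for x in range(w)]
            for y in range(h)]

# A horizontal stripe: bright rows 3-4 on a dark background.
stripe = [[255 if y in (3, 4) else 0 for x in range(10)] for y in range(10)]

horiz = [ssd(stripe, cshift(stripe, dx, 0)) for dx in range(4)]
vert = [ssd(stripe, cshift(stripe, 0, dy)) for dy in range(4)]
# horiz is [0, 0, 0, 0]: every sideways shift matches perfectly, so a
# left-right pair can't tell which shift is correct for this feature.
# vert rises sharply away from zero shift, so an over-under pair can.
```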

Quote
If you stand on the apron or inside the barn your semicircular canals adjust and you can swear that the steel columns are about 3 degrees off vertical, but a plumb line says they are perfect.

Do they actually adjust or is it the processing in the brain that adds in an adjustment?

*

Offline AndroidNeox

  • Sr. Member
  • ****
  • 259
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #39 on: 18/11/2013 20:11:41 »
The human mind is a physical process of the human brain. A mechanical mind would be a physical process of a different type of system. Non-biological minds are definitely possible.

How to determine whether a machine possesses an aware mind is tricky. How do you prove that other people possess awareness? Personally, I think we can wait and let the machine present its own arguments.