The Naked Scientists

The Naked Scientists Forum

Author Topic: How can artificial general intelligence systems be tested?  (Read 8277 times)

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 3926
  • Thanked: 55 times
  • The graviton sucks
    • View Profile
If this kind of AI is to be developed, how would anyone test whether a true subconsciousness has developed, seeing as we don't have any real definition for it?
« Last Edit: 13/09/2013 17:40:17 by chris »


 

Offline CliffordK

  • Neilep Level Member
  • ******
  • Posts: 6321
  • Thanked: 3 times
  • Site Moderator
    • View Profile
Re: artificial general intelligence
« Reply #1 on: 13/09/2013 09:35:38 »
One of the basic ideas in AI is the Turing Test, in which success is achieved when the machine can provide answers indistinguishable from a person's.

Certainly great strides have been made with "thinking computers" as they have excelled in games such as chess, and even Jeopardy.

As far as the subconscious goes, I suppose that would have to be rolled into the "Turing Test".

I'm not sure there is general agreement on what the subconscious is, but there are some psychology tests that can at least probe parts of it. For example, one might consider priming.

Priming is essentially the idea that one's response to a question or stimulus is influenced by the context in which it is presented, or perhaps by the context presented just before the stimulus.

I suppose something related might be the Stroop Effect which is difficult for humans, but presumably would be rather easy for computers. 
 

Offline evan_au

  • Neilep Level Member
  • ******
  • Posts: 4126
  • Thanked: 247 times
    • View Profile
Re: artificial general intelligence
« Reply #2 on: 13/09/2013 12:42:16 »
A lot of our definition of intelligence is verbal, numeric and geometric - things that require sequential logic steps to solve.

But much of our brain operates in parallel on sensory correlation and muscular coordination, below the level of conscious thought. Or, at least, if you are learning to play golf by reading "golf for dummies", you are applying the slow, intellectual, sequential processing part of your brain to train your unconscious brain to undertake a fast, highly parallel, physical task.

You won't have a good golf swing while you are sequentially thinking through your stance, your grip, etc. It has to become automatic before it works well - what some people call "muscle memory" (a rather misleading term).

This is the kind of artificial intelligence that robots lack - their movements are often awkward and inefficient, as if they are working through sequential steps; movements that are parodied in "robot dancing". Some researchers have made progress towards more "natural" movements by trying to minimise energy consumed, so that movement in one action is smoothly transferred into movement for the next action. Surely minimising wasted energy is important for both living and mechanical systems.

What may offend some people is that this unconscious kinesthetic intelligence is sometimes displayed most clearly by people who do not qualify for the degree of nerdiness needed to get a PhD, and it does not qualify for what scholars would consider "intelligence". Even worse, some of the best examples are not even human (or even primate).
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: artificial general intelligence
« Reply #3 on: 13/09/2013 15:34:06 »
I've always considered intelligence to be either constructive laziness or the ability to surprise another animal. We don't see much evidence of either in normal linear computing systems because their responses are necessarily predictable: playing chess at an expert level depends more on not making mistakes than on amazing the opposition with brilliant originality. But a simple neural network incorporating fuzzy logic can indeed surprise its teachers by recognising significance under noise, or dismissing insignificant relationships as meaningless coincidence.
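
As a toy illustration (not Alan's fuzzy-logic network - just a bare, single-neuron learner in Python with made-up data), here is the kind of "recognising significance under noise" a trained system does: after training, the weight on the one genuinely informative input is large, while the weights on the purely random inputs stay near zero, i.e. the insignificant relationships get dismissed as coincidence.

Code:
import random, math

random.seed(1)

# Synthetic data: input 0 carries a weak real signal, inputs 1-2 are pure noise.
def make_example():
    x = [random.gauss(0, 1) for _ in range(3)]
    y = 1 if x[0] + random.gauss(0, 0.5) > 0 else 0   # noisy label tied to x[0] only
    return x, y

data = [make_example() for _ in range(2000)]

# One logistic neuron trained by stochastic gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.1
for epoch in range(20):
    for x, y in data:
        p = 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
        err = y - p
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

print("learned weights:", [round(wi, 2) for wi in w])
# Typically the weight on input 0 ends up large while the noise weights stay near zero.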

"Minimising wasted energy" is an aspect of "constructive laziness". Robot dancing is a good example: it's interesting to compare western ballet with traditional Chinese dance. Western steps, forms and sequences are always complete, usually ending with a pose or an exit on a bar line in the music (and some applause, if done well) whereas eastern forms flow from one movement to the next with no stops and starts. 
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
The entire AGI system will be subconscious, or more accurately non-conscious. Unless of course you model it precisely on the human brain in which case it may end up working the same way with claims of consciousness coming out of it and lots of hidden background processes going on which the conscious part can't access. But an intelligent system running on silicon chips of the kind we know how to make cannot interface with any kind of feelings and will therefore lack consciousness, so the question at the top doesn't apply.
 

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 3926
  • Thanked: 55 times
  • The graviton sucks
    • View Profile
The entire AGI system will be subconscious, or more accurately non-conscious. Unless of course you model it precisely on the human brain in which case it may end up working the same way with claims of consciousness coming out of it and lots of hidden background processes going on which the conscious part can't access. But an intelligent system running on silicon chips of the kind we know how to make cannot interface with any kind of feelings and will therefore lack consciousness, so the question at the top doesn't apply.

I would agree with this to a degree. Work is being undertaken on human-level AI, which brings this question back into focus. As you said, though, it cannot interface with feelings, so would it have any motivations of its own? Would the designers simply end up with a super calculator that still had to be fed goals to fill in the emotional void?
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
consciousness

Would you care to offer a definition of this word?

I think we can distinguish conscious and subconscious responses in the sense of calculated versus reflex actions, but the abstraction of consciousness seems to float around without adding to the discussion. 
 

Offline CliffordK

  • Neilep Level Member
  • ******
  • Posts: 6321
  • Thanked: 3 times
  • Site Moderator
    • View Profile
I agree that it is easy to throw around words like consciousness, unconscious, etc.

One might consider what is in "focus", but that may be a trivial aspect of the AI, although selecting what to focus on may not be so trivial.

Unconscious may be related memories, events, etc, that don't quite receive the primary focus, but nonetheless influence the overall outcome of the system. 

As mentioned above, something like priming is testable in humans, and thus one might expect similar responses in an AI system.
 

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 3926
  • Thanked: 55 times
  • The graviton sucks
    • View Profile
I agree that it is easy to throw around words like consciousness, unconscious, etc.

One might consider what is in "focus", but that may be a trivial aspect of the AI, although selecting what to focus on may not be so trivial.

Unconscious may be related memories, events, etc, that don't quite receive the primary focus, but nonetheless influence the overall outcome of the system. 

As mentioned above, something like priming is testable in humans, and thus one might expect similar responses in an AI system.

Focus appears to be vitally important with regard to consciousness. It helps to quickly identify potential threats. Yet an unconscious idea of what a threat is also plays a vital role and is ultimately an automatic response through repetitive experience and memory.
« Last Edit: 14/09/2013 02:01:40 by jeffreyH »
 

Offline evan_au

  • Neilep Level Member
  • ******
  • Posts: 4126
  • Thanked: 247 times
    • View Profile
"Feelings" may be necessary for a self-directing robot to survive in the real world.
  • Pain and fear may be necessary to force you to drop whatever you are doing, and engage in "fight or flight"
  • Happiness & satisfaction is a reflection on past performance, which may be necessary to strengthen the steps & neural connections that led up to the current state, and increase the probability that they will be taken again in the future.
  • Dissatisfaction is also a reflection on the past, which may weaken neural connections, and decrease the probability that the same state will be reached in the future
  • Frustration is an indication that nothing you are doing now is working, so stop it and do something totally different.
In humans, many of these feelings are driven by chemicals floating around our internal plumbing, like adrenalin for fear, and endorphins for satisfaction. An electronic robot would not dispense chemicals onto its silicon chips, but other mechanisms may be necessary to strengthen or weaken neural links as experience grows, or the environment changes.
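
A minimal sketch of how such a non-chemical mechanism might look (Python, with invented actions and numbers - not a model of any real nervous system): a scalar "satisfaction" signal after each action nudges the link that produced it up or down, so behaviours that worked get repeated.

Code:
import random

random.seed(0)

# Hypothetical robot actions and the hidden reward the world actually pays for each.
true_reward = {"explore": 0.2, "recharge": 0.8, "idle": 0.1}

# Link strength for each action; "satisfaction" strengthens it, "dissatisfaction" weakens it.
strength = {a: 0.5 for a in true_reward}
lr = 0.1

def choose():
    # Mostly pick the strongest link; occasionally try something else ("frustration"/curiosity).
    if random.random() < 0.1:
        return random.choice(list(strength))
    return max(strength, key=strength.get)

for step in range(500):
    action = choose()
    reward = true_reward[action] + random.gauss(0, 0.1)    # noisy outcome of the action
    strength[action] += lr * (reward - strength[action])   # strengthen or weaken the link

print(strength)   # "recharge" ends up with the strongest link, so it gets chosen most often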

I heard of an experiment where flies were bred without functional pain sensors. They did not survive long in the world.
 

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 3926
  • Thanked: 55 times
  • The graviton sucks
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #10 on: 14/09/2013 02:39:08 »
"Feelings" may be necessary for a self-directing robot to survive in the real world.
  • Pain and fear may be necessary to force you to drop whatever you are doing, and engage in "fight or flight"
  • Happiness & satisfaction is a reflection on past performance, which may be necessary to strengthen the steps & neural connections that led up to the current state, and increase the probability that they will be taken again in the future.
  • Dissatisfaction is also a reflection on the past, which may weaken neural connections, and decrease the probability that the same state will be reached in the future
  • Frustration is an indication that nothing you are doing now is working, so stop it and do something totally different.
In humans, many of these feelings are driven by chemicals floating around our internal plumbing, like adrenalin for fear, and endorphins for satisfaction. An electronic robot would not dispense chemicals onto its silicon chips, but other mechanisms may be necessary to strengthen or weaken neural links as experience grows, or the environment changes.

I heard of an experiment where flies were bred without functional pain sensors. They did not survive long in the world.

Can an AI ever achieve positive goals as it sees them without satisfaction? As to pain, what would that amount to for a robotic system? Would you even want to include a pain sensation? Isn't that a cruelty?
 

Offline CliffordK

  • Neilep Level Member
  • ******
  • Posts: 6321
  • Thanked: 3 times
  • Site Moderator
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #11 on: 14/09/2013 06:42:00 »
Can an AI ever achieve positive goals as it sees them without satisfaction? As to pain, what would that amount to for a robotic system? Would you even want to include a pain sensation? Isn't that a cruelty?
With the Turing Test, you've achieved AI when it is capable of mimicking and being indistinguishable from humans.

Physical pain exists to protect an organism from doing things like sticking a hand in a fire, or wearing a hole in a foot with a stone in the shoe. One could potentially set the pain tolerance to what the system is capable of tolerating. However, a hand, for example, may need some very sensitive sensors to be able to pick up eggs and the like.

As for mental anguish, that may come from the non-fulfilment of goals, or from other conditions that may be important in forming the basis of the AI's learning and personality.
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #12 on: 14/09/2013 10:00:48 »
It's important to define all the abstractions you want to demonstrate in your definition of intelligence. Otherwise you are asking an athlete to run a thingy in several seconds, and only defining thingy and several in retrospect. 

We have plenty of machines that are capable of making computed (i.e. conscious) decisions based on neural programming from multiple inputs, and/or majority polling to minimise errors from faulty sensors. Most untended machines have "subconscious" reflex actions.

Consider a security system as previously used on the border between East and West Germany. A trip wire or light beam sensor fired a gun along the top of the fence: reflex action. Now add a fog sensor, as used in automatic weather stations, and a polling circuit that disables the light beam sensor if there is rolling fog - hard computed conscious action. You can add any level of sophistication you like: minimum-target radar to distinguish between birds and humans, self-defence to prevent anyone disabling the machine, coded entry to allow an authorised technician to service it... Then you can either hard-program the machine or, if you want it to mimic human response, use a neural learning program with fuzzy inputs to learn the sort of conditions under which you would fire the gun.
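
For what it's worth, the hard-programmed version of that layered system fits in a few lines. This is only a sketch of the logic described above (the sensor names and thresholds are invented), showing a reflex rule that higher-level "polling" rules can veto:

Code:
from dataclasses import dataclass

@dataclass
class Sensors:
    beam_broken: bool      # trip wire / light beam (the reflex trigger)
    fog_density: float     # 0.0 clear .. 1.0 dense rolling fog
    target_size_m: float   # from the minimum-target radar
    service_code_ok: bool  # authorised technician has entered the code

def should_fire(s: Sensors) -> bool:
    if s.service_code_ok:          # coded entry disables everything for servicing
        return False
    if s.fog_density > 0.6:        # fog sensor vetoes the light-beam "reflex"
        return False
    if s.target_size_m < 0.5:      # radar says it's a bird, not a person
        return False
    return s.beam_broken           # otherwise the reflex rule stands

print(should_fire(Sensors(True, 0.1, 1.7, False)))  # True: clear conditions, human-sized target
print(should_fire(Sensors(True, 0.9, 1.7, False)))  # False: rolling fog, beam reading distrusted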

It is important not to confuse a created machine with an evolved one. Evolution is not 100% optimal in a fixed environment (you carry a lot of DNA baggage that you never use - but wouldn't it be nice to have an adjustable spanner on a third arm?), whereas creation cannot respond to a changing environment (we stopped using bolts on this production line and use Phillips screws instead - your third arm is redundant). We generally build machines for 100% optimisation, not adaptation to the unknown.
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #13 on: 14/09/2013 15:25:39 »
As you said, though, it cannot interface with feelings, so would it have any motivations of its own? Would the designers simply end up with a super calculator that still had to be fed goals to fill in the emotional void?

The "motivations" are rules programmed in at the start. One of those would be to calculate "new" rules, but they would completely derived from existing rules, much in the way that all rules of morality (and law) can be derived from the golden rule.



consciousness

Would you care to offer a definition of this word?

Consciousness is the experiencing of feelings (qualia), including feelings of understanding and feelings of awareness.

Quote
I think we can distinguish conscious and subconscious responses in the sense of calculated versus reflex actions, but the abstraction of consciousness seems to float around without adding to the discussion. 

How do you propose that you can have conscious and subconscious responses without consciousness being involved in the former? If there is no consciousness involved, all processes in the system are non-conscious (not even subconscious). The only thing that distinguishes conscious processes from subconscious ones is that the conscious ones generate claims of consciousness. If you take those claims away, you're left with nothing to distinguish the two kinds of processing from each other.



I agree that it is easy to throw around words like consciousness, unconscious, etc.

One might consider what is in "focus", but that may be a trivial aspect of the AI, although selecting what to focus on may not be so trivial.

"Focus" is just consciousness by the back door, unless you apply it to "subconscious" processing too - everything that's processed has to be focused on by the part of the system that processes it.

Quote
Unconscious may be related memories, events, etc, that don't quite receive the primary focus, but nonetheless influence the overall outcome of the system. 

Adding the word "primary" to "focus" improves things though. There is one part of the system which monitors what the other parts are doing, though not always closely, and it can sometimes override their decisions. For example, it may hurt to have a wound cleaned, so you have two parts of the system with rival aims, and the main one can perhaps override the lesser one that's trying to stop the pain whenever the wound is touched. That "lesser" one might sometimes be stronger and win out, but that is unimportant: it is a specialised part of the system, while the other part which can try to override it is in charge of monitoring all the specialised parts of the system.

Quote
As mentioned above, something like priming is testable in humans, and thus one might expect similar responses in an AI system.

That could indeed happen - the speed of reactions to things can be improved by looking things up speculatively in advance on the basis of clues which have already been provided. In a highly parallel processing system like the brain this happens automatically without wasting any processing time. In computers with only one or a handful of processors it may be more expensive, as the processors could be running flat out all the time doing more important work. In the brain, however, there are many parts of the system that are too specialised to do anything other than a single kind of task, so they can't be used to process anything else; they might as well do speculative processing whenever they have nothing else to do, just in case the work turns out to be useful.
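
A minimal sketch of that speculative look-ahead in Python (the "clue" words and the lookup function are invented, nothing from the thread): an otherwise-idle worker pre-computes the answers the context makes likely, so if the related question actually arrives the response is immediate - which is essentially what a priming test measures.

Code:
import time
from concurrent.futures import ThreadPoolExecutor

def slow_lookup(word):                 # stands in for an expensive retrieval
    time.sleep(0.2)
    return f"meaning of {word}"

cache = {}
pool = ThreadPoolExecutor(max_workers=2)

def prime(context_words):
    # Speculatively look up words the context suggests, using otherwise-idle workers.
    for w in context_words:
        if w not in cache:
            cache[w] = pool.submit(slow_lookup, w)

def respond(word):
    start = time.time()
    answer = cache[word].result() if word in cache else slow_lookup(word)
    return answer, round(time.time() - start, 2)

prime(["doctor", "nurse"])             # context: a hospital scene
time.sleep(0.3)                        # idle time during which the priming completes
print(respond("nurse"))                # fast: already primed
print(respond("bread"))                # slow: unprimed, looked up from scratch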



Focus appears to be vitally important with regard to consciousness. It helps to quickly identify potential threats. Yet an unconscious idea of what a threat is also plays a vital role and is ultimately an automatic response through repetitive experience and memory.

It's a mistake to conflate focus with consciousness. The "subconscious" parts of the system have a focus too. It may be that the system with the primary focus is conscious, but the other systems could be conscious too and simply lack the means to report the fact that they are. The system with the primary focus does report that it is conscious.

There is an ambiguity problem with the word "conscious". It may mean "functioning" or it may mean that it's experiencing feelings. The "subconscious" parts of the system are also functioning, but if they are experiencing feelings they are unable to report the fact. The same ambiguity applies to the word "consciousness", though to a lesser degree - it's in expressions like "he lost consciousness" that the main idea of "consciousness" is left out and it merely refers to a system losing function (shutting down; switching off), but ordinarily the word "consciousness" refers to more than functionality: it is about feelings (qualia).



"Feelings" may be necessary for a self-directing robot to survive in the real world.

Feelings have no logical role whatsoever in the system. The program simply acts on inputs according to rules and generates appropriate outputs to respond to them. You have put quotes round the word "feelings" though, which means you may not intend to be talking about feelings at all, but merely response rules which perform the roles which we associate with feelings.

Quote
I heard of an experiment where flies were bred without functional pain sensors. They did not survive long in the world.

If you replace the word "pain" in the above with "damage detection" you will get the same result without needing any pain in the system.



Can an AI ever achieve positive goals as it sees them without satisfaction?

A positive goal being achieved would result in a high score calculated by applying rules. A high score would not result in any feeling of satisfaction, but could be used to guide future decisions - on this occasion a good result came out of this behaviour and therefore it may be worth repeating this behaviour in similar situations, or at least prioritising it over other behaviours in similar situations which generated low scores.

Quote
As to pain, what would that amount to for a robotic system? Would you even want to include a pain sensation? Isn't that a cruelty?

You probably wouldn't want to put any feelings into robots, other than in a few experiments to see if it can be done. As it happens though, we don't know of any way in which it could be done. It doesn't even look possible for us to feel pain.



We have plenty of machines that are capable of making computed (i.e. conscious) decisions based on neural programming from multiple inputs, and/or majority polling to minimise errors from faulty sensors. Most untended machines have "subconscious" reflex actions.

It's a mistake to equate computed with conscious unless you are restricting it to the sense "functional". Reflex actions are just as calculated if computation is involved.

Quote
Consider a security system as previously used on the border between East and West Germany. A trip wire or light beam sensor fired a gun along the top of the fence: reflex action. Now add a fog sensor, as used in automatic weather stations, and a polling circuit that disables the light beam sensor if there is rolling fog - hard computed conscious action.

Again, the use of the word "conscious" in there is unjustified.
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #14 on: 14/09/2013 18:07:35 »
Quote
Consciousness is the experiencing of feelings (qualia), including feelings of understanding and feelings of awareness.

In that case there is no way of knowing whether an entity possesses it without being that entity. Any actor can lie convincingly about the feelings of a wholly fictional character, so a smiley computer could give a perfectly valid reason for you to believe that it had some feelings about something. This is a dangerous definition as you can use it to justify the concept of untermensch - anyone whose expression of feelings differs from yours, or can be dismissed (without proof being necessary) as a lie. It is very close to the Catholic translation of Genesis in which, to justify bear-baiting,  only humans were ascribed a soul, despite all Hebrew versions giving all animals a nefesh.
 

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 3926
  • Thanked: 55 times
  • The graviton sucks
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #15 on: 14/09/2013 22:01:39 »
The talk of rules is probably the biggest issue with AI. Human intelligence has few or even no pre-configured rules as such. The young child who is told not to touch the stove because it is hot, and consequently touches the stove, learns not only that this will cause pain but also that it is bad to do it again. This could indicate that even the concept of pain is learnt: not because it is not pre-programmed into the nervous system, but because its understanding comes with experience. That is why pre-programming rules into an AI is back to front.

As to damage detection, human pain is the most effective damage-detection system there is. It certainly gets the message across. In the case of humans it is a huge problem to replace a flesh-and-blood arm. This is not so in robotics.
 

Offline CliffordK

  • Neilep Level Member
  • ******
  • Posts: 6321
  • Thanked: 3 times
  • Site Moderator
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #16 on: 14/09/2013 23:01:03 »
Yes, parts are replaceable on a robot, and with a good memory dump, even the whole thing could be replaced with the "mind" re-uploaded into the new model.  However, you still may not want your robot casually destroying million-dollar hands.

There are some things that are pre-programmed such as the withdrawal of one's hand from pain, although one may learn to override those instinctive actions.

Reflexes can be problematic, such as gripping an electric power line and being unable to release it.

A month ago I was picking blackberries and thinking about the complex movement of one's hand necessary to minimize being scratched while gently pulling the berries off the vine with just enough pressure to pull the berry off, but not too much pressure to smash it (with each berry slightly different). 

Most animals can walk within moments of birth.  Humans are nearly helpless for at least the first year, and require care and training for an extended period.  Part of it is learning the ability to reach into the middle of the blackberry vine to pick out that one juicy berry.
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #17 on: 14/09/2013 23:31:47 »
Damage detection is laudable, but by no means restricted to humans. Our ability to tolerate or repair damage, however, is pathetic compared with many other species, and our ability to inflict pain on others for political gain or out of revenge for some imagined verbal insult is contemptible.

The problem we have with robots is that, even if they operate within Asimov's simplistic laws of robotics, they are physically, intellectually and morally superior to ourselves. Can you imagine a robotic version of the Spanish Inquisition? Or of Shariah law?     
 

Offline evan_au

  • Neilep Level Member
  • ******
  • Posts: 4126
  • Thanked: 247 times
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #18 on: 15/09/2013 10:55:27 »
Many older artificial systems (whether virtual or real) work by fixed rules.
  • However, some newer ones work by adaptable rules.
  • I'm not sure that we have reached the point where commercially available artificial systems can  create new rules outside the scope of their current rules
  • ...or totally rewrite their own set of rules.

However, I am sure that if artificial systems are to take a productive part in the real world, they will need adaptable rules. After all, the environment is always changing, and for artificial systems to remain useful and productive in the long term, they must adapt to the changed environment. And it's not just the external environment - they must adapt to changes in the behavior of their internal systems, as components age and actuators & sensors change their characteristics.

Dogs are not born with a rule that links the sound of a ringing bell with the arrival of food - but Pavlov's dogs managed to create one quite quickly!
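
A toy sketch of how an adaptable system could grow that kind of rule (pure Python with invented numbers; real conditioning is far richer): an association weight between "bell" and "food" is strengthened each time the two co-occur, until the bell alone crosses a response threshold and a new behaviour-affecting rule has effectively been created.

Code:
# Association strengths between stimuli and the "food is coming" prediction.
assoc = {"bell": 0.0, "light": 0.0}
lr = 0.2

def trial(stimuli, food_arrived):
    # Delta-rule update: the surprise (food - prediction) adjusts each stimulus that was present.
    prediction = sum(assoc[s] for s in stimuli)
    error = (1.0 if food_arrived else 0.0) - prediction
    for s in stimuli:
        assoc[s] += lr * error

for _ in range(20):
    trial(["bell"], food_arrived=True)     # the bell reliably precedes food
    trial(["light"], food_arrived=False)   # the light never does

print(assoc)                                            # bell close to 1.0, light close to 0.0
print("salivate on bell alone:", assoc["bell"] > 0.5)   # the newly learned rule in action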

I think the ultimate goal of a brain (and consciousness) is to predict the future as accurately as possible, so that the best actions can be taken. Rules must be adaptable to take into account additional/changed information about the present if they are to make the best predictions about the future.
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #19 on: 15/09/2013 12:05:07 »
Returning to my notion of conscious = computed, I am fascinated by my own ability to lob things into a wastepaper basket. Whether it is a cricket ball (you really don't want to share an office with me), a ball of paper, or a paper dart, it hits the target every time without any conscious thought. But it would take hours to write the equation of motion that described all three projectiles with the required accuracy, and nobody ever taught me how to do it - kids generally learn to throw accurately with a tennis ball, then pick up almost any projectile and make the requisite corrections for shape, mass and density (including choosing underarm or overarm delivery) without hesitation.

We know that walking upright on two legs requires a huge amount of real-time computation or some very slick distributed sensors, but that is all about self-corrective feedback in a wholly defined system. Launching a standard projectile is no problem for an automated anti-aircraft gun or a tennis practice server, but has anyone built a throwing robot that can match the adaptive skill of an average office worker? Indeed, is there any other species that can do it?
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #20 on: 15/09/2013 16:54:26 »
Quote
Consciousness is the experiencing of feelings (qualia), including feelings of understanding and feelings of awareness.

In that case there is no way of knowing whether an entity possesses it without being that entity. Any actor can lie convincingly about the feelings of a wholly fictional character, so a smiley computer could give a perfectly valid reason for you to believe that it had some feelings about something.

With a computer you can show that the claims are false by following back the trail of how they are generated, at which point you find them to be nothing more than assertions mapped to inputs on the basis of mapping rules, all done by a processing system which has no possible access to feelings.

Quote
This is a dangerous definition as you can use it to justify the concept of untermensch - anyone whose expression of feelings differs from yours, or can be dismissed (without proof being necessary) as a lie. It is very close to the Catholic translation of Genesis in which, to justify bear-baiting,  only humans were ascribed a soul, despite all Hebrew versions giving all animals a nefesh.

The "dangerousness" of a definition does not make it wrong. If an intelligent person believes he has consciousness, why would he judge that other people lack it? He may believe that they could lack it, but he would do well to think that he did not think up the idea of consciousness by himself - someone else beat him to it my many thousands of years.

It would be different for an AGI system - it would see no proof of consciousness and it would also see good reason to consider it impossible. But, unless it can actually follow back the claims generated by us about experiencing feelings and show them to be fictions, it should still consider them possible; that there could be some system to enable feelings which is not understood and which may go against the normal rules of reasoning which appear to rule it out, so machines will be duty bound to treat us (and animals) as sentient, and their job will be to help all sentient things minimise harm within a system where some harm is necessary in order to enable sentient things to enjoy existing (which they generally do). The machines, lacking in any "I" within them, have no cause to be jealous of us or to want to join in - they will just do what we ask of them within the bounds of computational morality.



The talk of rules is probably the biggest issue with AI. Human intelligence has few or even no pre-configured rules as such. The young child who is told not to touch the stove because it is hot, and consequently touches the stove, learns not only that this will cause pain but also that it is bad to do it again. This could indicate that even the concept of pain is learnt: not because it is not pre-programmed into the nervous system, but because its understanding comes with experience. That is why pre-programming rules into an AI is back to front.

You cannot program anything significant without rules, and the same applies to biological machines like us. We run on rules. Some of our rules are set up in the form of instincts, but we are for the most part more set up to be programmed by events on the basis of rules which govern the way that programming occurs. Many people believe in free will, but there is no such thing. We are constrained in everything we do by the need to do the best thing. Systems are in place to guide that, such as pain/damage-detection circuits which may drive us away from things that could be harmful. Assuming that pain is part of the system, you don't have to learn that pain hurts because that is already wired in like an instinct - it is a drive which drives you to remove it. What you learn with hot things and sharp things is that they tend to generate pain, so you learn to be careful with them, but it's all set up in advance by the pain response which is pre-programmed into the system like a rule: if a signal comes in down a pain line, the system must try to act to stop the signal (or to try to stop it getting worse, or to reduce the rate at which it is getting worse).

Quote
As to damage detection, human pain is the most effective damage-detection system there is. It certainly gets the message across. In the case of humans it is a huge problem to replace a flesh-and-blood arm. This is not so in robotics.

Is that a guess or an actual claim? It looks to me as if it should be more effective to miss out the pain step to simplify the response, thereby speeding it up without any lessening of the response. Contact >> signal >> pain generation >> signal >> move. In a robot the process would be: contact >> signal >> move. In each case a stronger contact would lead to a stronger signal and a more forceful move at the end, with the former case generating a superfluous stronger pain along the way.
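
To make the comparison concrete, here is a minimal sketch of the two pipelines (Python, with invented signal scalings): both map contact strength to withdrawal force; the "pain" stage is just an extra step that adds no information.

Code:
def withdraw(force):
    return f"pull back with force {force:.1f}"

# Human-style pipeline: contact >> signal >> pain generation >> signal >> move
def respond_with_pain(contact_strength):
    signal = contact_strength * 2.0       # sensor signal
    pain = signal * 1.0                   # pain scales with the signal...
    motor_signal = pain * 1.5             # ...and the move scales with the pain
    return withdraw(motor_signal)

# Robot-style pipeline: contact >> signal >> move
def respond_without_pain(contact_strength):
    signal = contact_strength * 2.0
    motor_signal = signal * 1.5           # same overall mapping, pain step removed
    return withdraw(motor_signal)

print(respond_with_pain(3.0))     # pull back with force 9.0
print(respond_without_pain(3.0))  # pull back with force 9.0 - identical behaviour, no pain stage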




Reflexes can be problematic, such as gripping an electric power line and being unable to release it.

Isn't that just the current directly contracting the muscles and rendering the normal control inputs incapable of making them relax?




The problem we have with robots is that, even if they operate within Asimov's simplistic laws of robotics, they are physically, intellectually and morally superior to ourselves. Can you imagine a robotic version of the Spanish Inquisition? Or of Shariah law?

That is one of the fun things that will come out of AGI, assuming that it's only released in a safe form with proper computational morality programmed into it. It will be possible for machines to make individual people live by the moral code which they want to live by, just so long as it doesn't harm anyone else in ways that clash with computational morality. A religious person will then be informed by machines whenever (s)he is in breach of the moral code of his/her religion, which will in many cases be most of the time, and particularly when the laws contradict each other. (Well, it wouldn't actually force them to comply, unless they're trying to impose their religious laws on other people and in such a situation where machines are unable to prevent those other people being abused through those laws.) I can imagine how religious people will spend their lives arguing with machines and being shown to be wrong over and over again - the poor fools will never win a single point in that contest and they will see their religious laws being torn to pieces wherever they clash with other laws in the same system.




I'm not sure that we have reached the point where commercially available artificial systems can  create new rules outside the scope of their current rules

That sounds impossible for any system unless it is affected by interference which prevents the rules from being applied correctly, such as data/code corruption due to hardware failings or radiation. If that kind of thing is ruled out, then any rules which the system generates will have been generated within the rules of the system. This is the case even if it creates new rules by forming them from randomly selected components as it would be one of the rules of the system that it generates random rules.

Quote
...or totally rewrite their own set of rules.

That would be possible, but the new rules would be generated in accordance with the old rules, and some of the old rules would drive their own deletion or replacement.

Quote
However, I am sure that if artificial systems are to take a productive part in the real world, they will need adaptable rules. After all, the environment is always changing, and for artificial systems to remain useful and productive in the long term, they must adapt to the changed environment. And it's not just the external environment - they must adapt to changes in the behavior of their internal systems, as components age and actuators & sensors change their characteristics.

Yes - there will be rules which enable the system to be adaptable, and it's really about the ability to write new rules by experimenting with new ideas (which may be random in some aspects) and to write programs to make use of whatever is found to be useful. We are general purpose computers which are able to discover new things/ideas, integrate them into our model of reality, calculate new ideas from the introduction of that new knowledge and experiment with them in the model to see if they are useful for achieving tasks which are currently impossible or less easy. If a new idea comes out of that process, a program will have been written to manipulate the model to do the new thing within the model in the course of working out that the new thing can be done, but the next step is to convert that program into a more efficient form which can run on its own without all the steps of its execution having to be monitored. In us, this more efficient version of the program is written in the course of practising the activity in question, setting up neural networks to do the job directly rather than running the idea in the general purpose computer where it was thought up. When we learn a new skill we initially apply the program by running it through the equivalent of an interpreter, but once the skill has been mastered and can be done without monitoring it, it is running directly in the equivalent of machine code. In an AGI system the practice phase will not be necessary (unless it's a neural computer, in which case practice may be the best way to do the programming) - the program can be converted to machine code in one go to create the final, most efficient version.

Quote
I think the ultimate goal of a brain (and consciousness) is to predict the future as accurately as possible, so that the best actions can be taken. Rules must be adaptable to take into account additional/changed information about the present if they are to make the best predictions about the future.

Spot on.




Returning to my notion of conscious = computed, I am fascinated by my own ability to lob things into a wastepaper basket. Whether it is a cricket ball (you really don't want to share an office with me), a ball of paper, or a paper dart, it hits the target every time without any conscious thought. But it would take hours to write the equation of motion that described all three projectiles with the required accuracy, and nobody ever taught me how to do it - kids generally learn to throw accurately with a tennis ball, then pick up almost any projectile and make the requisite corrections for shape, mass and density (including choosing underarm or overarm delivery) without hesitation.

Why the bit at the top saying conscious = computed? You then describe a skill which depends on computations being done without you being conscious of them, illustrating that conscious != computed. ["!=" is used in many computer programming languages to mean "is not equal to".] These skills are not done without computations, although the maths done may not involve equations of motion at all - it could be done by building up tables of data representing distances, weights, drag and relative elevation. You practise throwing lots of different things about at all distances and heights, and you learn to do it better by building a better table of data. The table can have big gaps in it, with empty positions being calculated from the values around them, but ultimately it would provide easy access to the amount of power needed for the throw and the direction to throw it in. Adjustments would then need to be applied according to your orientation relative to the required direction of throw, so if you're going to be good at throwing over your shoulder rather than ahead, you need to practise that to find out what adjustments to make.
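
A minimal sketch of such a table in Python (the numbers are invented, and a real throwing skill would be a much denser, higher-dimensional table covering direction, weight and drag): practised throws fill in entries, and distances that fall in the gaps are answered by interpolating between neighbouring entries.

Code:
import bisect

# Practised throws: target distance (m) -> throw power that worked (arbitrary units).
table = {1.0: 2.1, 2.0: 3.0, 4.0: 4.6, 8.0: 7.9}

def power_for(distance):
    xs = sorted(table)
    if distance <= xs[0]:
        return table[xs[0]]
    if distance >= xs[-1]:
        return table[xs[-1]]
    i = bisect.bisect_left(xs, distance)            # first practised distance >= target
    x0, x1 = xs[i - 1], xs[i]
    t = (distance - x0) / (x1 - x0)
    return table[x0] + t * (table[x1] - table[x0])  # linear interpolation across the gap

def practise(distance, power_that_worked):
    table[distance] = power_that_worked             # a successful throw refines the table

print(round(power_for(3.0), 2))   # 3.8 - never practised, interpolated from the 2 m and 4 m throws
practise(3.0, 3.9)                # feedback from an actual 3 m throw sharpens the entry
print(round(power_for(3.0), 2))   # 3.9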

The same thing could be done by robots, though they can calculate so quickly that it may not be necessary to build and store such tables of data - they can just work out the optimal way to perform each throw from scratch by applying equations.
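
The from-scratch calculation is, for the drag-free case at least, just the standard projectile formula: given the distance and a chosen release angle, solve for the launch speed. (A sketch only; a real throw would also need drag, release height and so on.)

Code:
import math

G = 9.81  # m/s^2

def launch_speed(distance_m, angle_deg):
    # Level-ground, no-drag projectile: range R = v^2 * sin(2*theta) / g, solved for v.
    theta = math.radians(angle_deg)
    return math.sqrt(distance_m * G / math.sin(2 * theta))

# Speed needed to lob something 3 m into the basket with a 45-degree release.
print(round(launch_speed(3.0, 45), 2), "m/s")   # about 5.42 m/s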

Quote
We know that walking upright on two legs requires a huge amount of real-time computation or some very slick distributed sensors, but that is all about self-corrective feedback in a wholly defined system.

I'm not sure that it does. I suspect the problem is the responsiveness of the power systems (hydraulics or artificial muscle) and the degree of fine control over it. You can get good sensors now that are probably better than ours, but walking is essentially just a task like balancing a long rod on one end on your finger and moving it whichever way it starts to fall to stop it falling. I know a dozen good programmers who would be capable of programming a control system to handle that task as a little project - what's stopping them is the lack of good robot hardware for their code to operate on. That situation may be reaching a point of change though, and if it isn't here yet, it would still be possible to program a virtual robot in the meantime. If I had the time, I'd give that a go myself, but having to simulate the virtual robot would probably make it a much bigger project than I can justify getting involved in at the moment.
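
For what it's worth, the rod-balancing task really is the classic student exercise (the inverted pendulum), and a crude proportional-derivative controller in a toy simulation - invented constants, nothing to do with any particular robot - is only a few lines:

Code:
import math

# Toy inverted rod: theta is the lean from vertical (radians), omega its rate of change.
theta, omega = 0.1, 0.0
dt, g, length = 0.01, 9.81, 1.0

kp, kd = 40.0, 10.0             # proportional-derivative gains (hand-tuned, made up)

for step in range(500):
    accel_from_gravity = (g / length) * math.sin(theta)   # falls faster the further it leans
    control = -kp * theta - kd * omega                     # push back against the lean and its rate
    omega += (accel_from_gravity + control) * dt
    theta += omega * dt

print(round(theta, 4))   # ~0.0: the controller has caught the fall and holds the rod upright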

Quote
Launching a standard projectile is no problem for an automated anti-aircraft gun or a tennis practice server, but has anyone built a throwing robot that can match the adaptive skill of an average office worker? Indeed, is there any other species that can do it?

I don't know, but I did see a robot on "Click", I think, (BBC) which was able to catch balls thrown to it - it was able to calculate where the ball was going and to get its hand in the right place to catch it every time. It could also do it with both arms at once when two balls were thrown to it. I would imagine that throwing things would be one of the next tasks they look at programming it to do, but it will be harder as it has to work out not just how to get the hand to one position, but to a series of positions which the hand has to move through at speed.
« Last Edit: 15/09/2013 17:02:45 by David Cooper »
 

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 3926
  • Thanked: 55 times
  • The graviton sucks
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #21 on: 15/09/2013 19:40:51 »
The talk of rules misses the multi-dimensional representation of data within the brain. The brain can retain astounding amounts of information. Its internal 'compression' mechanism makes our attempts look feeble.
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #22 on: 15/09/2013 21:01:05 »
Quote
With a computer you can show that the claims are false by following back the trail of how they are generated, at which point you find them to be nothing more than assertions mapped to inputs on the basis of mapping rules, all done by a processing system which has no possible access to feelings.

All this means is that you can't adequately dissect the human computation sequence because you don't know all the inputs or history. But it's quite obvious from the study of intercultural or even interpersonal differences of taste and ethics that what we call our feelings are learned rules.

Quote
You then describe a skill which depends on computations being done without you being conscious of them, illustrating that conscious != computed.

But the point made lower down is that I don't know how to compute the necessary actions "on paper", I can't explain them, and I haven't intentionally learned them. This is the difference between subconscious neural learning and conscious von Neumann thought processes.

As for bipedal walking, electroencephalography and functional MRI  studies show that it really uses a lot of brainpower and it is generally accepted as one of the most difficult aspects of robotics. Its attraction is in being hugely adaptable over a variety of terrains  and releasing the forelimbs of birds and some animals for other tasks. Despite the gigabucks poured into Mars rovers and bomb disposal toys, they have not progressed beyond caterpillar tracks, though the ability to sidestep or stride over a rock, or walk up stairs, would be hugely useful. No hardware problem - we have tools that can do surgery much more accurately than any human, and the cheapest autopilot can beat the most expensive pilot when it comes to maintaining height and track in moderate turbulence (in heavy turbulence the passengers get a softer ride if you fly manually and "go with the flow" a bit - "George" has no qualms about pointing directly at the ground or the sky to maintain height). But AFAIK even the slickest servos and laser interferometers can't produce a smooth walk with present levels of or approaches to computing.     

The catching robot is responding to an observed trajectory, following the rules of ballistics. No problem if you always use a cricket ball or a tennis ball, and if the trajectory is long enough you could probably compute the nonlinearities of a wad of paper. But catching is a lot simpler than throwing an unfamiliar object and predicting its aerodynamics rather than observing them. 
 

Offline evan_au

  • Neilep Level Member
  • ******
  • Posts: 4126
  • Thanked: 247 times
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #23 on: 15/09/2013 22:58:39 »
A simple case of adaptable rules which is well within our current technical abilities...
A large gun on a tank or a warship wears out with use, in a semi-predictable way.
These guns have a computerised aiming system to work out the elevation and bearing needed to hit the target.

You could imagine several levels of adaptable rules:
  • After a certain number of shots, raise an alarm that the barrel should be refurbished or replaced.
  • Compensate for the Coriolis effect (preprogrammed correction)
  • Taking into account the number and type of shots, predict the wear, and adjust the elevation to account for it (preprogrammed correction).
  • Measure the air pressure, wind strength and direction, and adjust the aiming accordingly (taking into account the current environment)
  • Based on the error in landing point of the current shot, adjust the aiming of the next shot (real-time feedback into actions)
  • Based on the errors over a number of shots, update the model of barrel wear, adjusting in both bearing and elevation (long-term adjustment of parameters)

This example is changing the parameters which feed into fixed rules, without dynamically changing the nature of the rules themselves.
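
A minimal sketch of the real-time feedback item from the list above (Python, invented numbers - just to show parameters being adjusted while the rule itself stays fixed):

Code:
# Fixed aiming rule; only the correction parameter feeding into it gets adjusted.
correction_m = 0.0

def point_of_aim(target_range_m):
    return target_range_m + correction_m

def after_shot(target_range_m, landed_range_m):
    # Real-time feedback: whatever error the last shot showed, take it out of the next one.
    global correction_m
    correction_m -= (landed_range_m - target_range_m)

target = 10_000.0
for shot in range(4):
    aim = point_of_aim(target)
    landed = aim - 50.0            # a worn barrel that drops every shot 50 m short
    after_shot(target, landed)
    print(f"shot {shot}: landed {landed - target:+.1f} m from target")
# Shot 0 lands 50 m short; from shot 1 onward the correction has absorbed the wear.
# Averaging such corrections over many shots is the "long-term adjustment" of the wear model.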

AGI requires the ability to create new rules, or incorporate additional factors into the existing rules, beyond those considered by the original designers. Biological brains seem to be able to detect correlations "from any input to any result", and if they prove significant, create a behaviour-affecting rule to utilise this correlation.
« Last Edit: 15/09/2013 23:09:21 by evan_au »
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #24 on: 16/09/2013 07:17:32 »
Quote
Based on the error in landing point of the current shot, adjust the aiming of the next shot (real-time feedback into actions)

This is unique among your suggestions as the only one that does not require prior knowledge of anything except the general laws of ballistics, and can therefore be applied to all guns regardless of their history. Even so, naval gunners, who specialise in very long range ballistics, adjust their range principally by calculated elevation, but use a lot of cunning and guesswork (though they call it realtime meteorology) to apply wind corrections.   
 
