The Naked Scientists Forum

Author Topic: How can artificial general intelligence systems be tested?  (Read 8278 times)

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #25 on: 16/09/2013 20:32:51 »
Quote
With a computer you can show that the claims are false by following back the trail of how they are generated, at which point you find them to be nothing more than assertions mapped to inputs on the basis of mapping rules, all done by a processing system which has no possible access to feelings.

All this means is that you can't adequately dissect the human computation sequence because you don't know all the inputs or history.

The bit you quoted is talking about computers and how you can follow the trail back to show that any claims they produce about feelings are false: even if there are feelings somewhere in the system, there is no possible causal connection between them and the data generated that claims to document their existence. With humans it's much harder to follow the trail because it isn't accessible in the way that all the program code and data in a computer are. A hundred (or maybe a thousand) years from now it might be possible to follow the trail in our brains too and see whether feelings really have a role in the system.

Quote
But it's quite obvious from the study of intercultural or even interpersonal differences of taste and ethics that what we call our feelings are learned rules.

Ethics have to be worked out, and there are endless ways of doing so badly, so we have lots of people tying themselves to different codes of behaviour according to culture. That isn't a surprise. Tastes in music and food are also affected by culture, which shows that any pre-programmed likes and dislikes we are given via DNA can be overridden to varying degrees: the music loved by one generation may be hated by the next or previous one, and whole cultures may hate foods that other cultures enjoy, regardless of the genetic origins of individuals who are not typical of that culture.

Quote
Quote
You then describe a skill which depends on computations being done without you being conscious of them, illustrating that conscious != computed.

But the point made lower down is that I don't know how to compute the necessary actions "on paper", I can't explain them, and I haven't intentionally learned them. This is the difference between subconscious neural learning and conscious von Neumann thought processes.

The difference is not that one system involves computations and the other doesn't, though. Both use computations, so if computed = conscious, both systems must be conscious. The systems working in the background do calculations without the main system (which monitors all the rest) seeing all the fine detail, and a lot of that fine detail cannot be accessed by any system in the brain because it's trained into complex pieces of neural net which simply do things without having the capability to report how they do them. When the main system does something, all the steps are visible to it, but that also makes it slow, because it is effectively running programs by interpreting them step by step, and importantly it only ever monotasks. That is why learning a new skill is hard: you may have to multitask to do something, but to begin with you can only monotask, so you have to train up neural nets to automate parts of the task so that they can run simultaneously in the background. One of those parts may still be run by the main system at times, or you might switch around to concentrate on whichever task most needs further improvement in its automation from moment to moment. Eventually the entire process is automated to the point that you can perform it without thinking about it and use the main system to do something else entirely at the same time. By then you have lost track of what all the other systems are doing, and in time you may even forget how they work entirely.

Quote
As for bipedal walking, electroencephalography and functional MRI  studies show that it really uses a lot of brainpower and it is generally accepted as one of the most difficult aspects of robotics.

If you look at most walking robots, they never let themselves get unbalanced - they aren't attempting to do proper walking. I suspect that's because the motors are too slow and can't react in time to correct with sufficiently high precision when the robot starts to topple. Some of the most recent ones can run, so they have probably reached the point where they could be programmed to walk just as we do, and I expect to see that becoming the norm soon. There will be a lot of processing going on, but I can't see why the algorithms themselves should be particularly difficult. If the robot is falling forwards, the position to move one foot to can be calculated by placing it ahead in the direction of the fall and slightly to the left/right to steer the robot to the right/left of that line. The leg then has to absorb the impact energy and apply the right amount of force to avoid collapsing under the load.
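Here's a rough sketch of that foot-placement calculation in Python (the gain, the names and the steering convention are just placeholders of mine, not taken from any real controller):

Code:
import math

def foot_target(com_xy, vel_xy, step_gain=0.35, lateral_offset=0.0):
    """Pick a ground target for the swing foot while the robot is falling.

    com_xy: (x, y) ground projection of the centre of gravity
    vel_xy: (vx, vy) horizontal velocity of the centre of gravity
    step_gain: how far ahead to reach per unit of speed; a bigger value
               arrests the fall more aggressively
    lateral_offset: metres to the side of the fall line, used for steering
                    (stepping to one side makes the body fall the other way)
    """
    speed = math.hypot(vel_xy[0], vel_xy[1])
    if speed < 1e-6:
        return com_xy                      # not falling anywhere in particular
    ux, uy = vel_xy[0] / speed, vel_xy[1] / speed   # direction of the fall
    px, py = -uy, ux                                # perpendicular, for steering
    step = step_gain * speed
    return (com_xy[0] + step * ux + lateral_offset * px,
            com_xy[1] + step * uy + lateral_offset * py)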

I've just done an experiment with walking where I simplified things a bit by keeping each leg completely straight whenever the foot at the end of it is in contact with the ground. This isn't our normal way of walking, but it actually works very well and would be a good starting point for programming a robot to walk. Start by balancing on one leg. You can maintain balance by applying forces through the toe, heel or sides of your foot, on the simple basis that if you're falling one way, you press down harder with that end or side of the foot. [Note that this is much harder with your eyes shut - we use visual input to detect whether we're starting to fall and which way we're going, and while we can do this through pressure sensors in the foot as well, it is much slower.] Now allow yourself to fall forwards to land on the other foot, making sure that leg is straight before its foot contacts the ground. At the last moment, just before this foot hits the ground, push up and forwards with the rear foot - this provides enough momentum for you (or the robot) to arc the centre of gravity over the forward foot once it is planted on the ground. The speed of this forward movement can be further controlled by applying forces to the toe and heel of that foot while moving over it, allowing corrections if the launch off the other foot was too strong or too weak (and future launch forces can be adjusted accordingly to reduce the need for such corrections next time).

The algorithm is very simple: a leg only bends while moving forward through the air so as not to hit the ground, and the rest of the time (whenever its foot is on the ground) it is completely straight. There is some freedom with the side-to-side foot placement - the right foot may land to the right of the centre line so long as the left foot compensates by landing to the left of that line on the next step, the result being that the robot wobbles from side to side as it goes along. Alternatively, each leg can swing round the other and the foot can be planted on the centre line each time, or to one side of it for steering purposes.
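The whole cycle can be summed up as a little state machine - this is only a sketch against a made-up robot interface (every method name here is an assumption, standing in for whatever the real sensors and actuators provide):

Code:
from enum import Enum, auto

class Phase(Enum):
    BALANCE = auto()    # straight stance leg, toe/heel pressure keeps us upright
    FALL = auto()       # body falls forwards towards the next footfall
    PUSH_OFF = auto()   # rear foot pushes up and forwards just before contact
    ARC_OVER = auto()   # centre of gravity arcs over the newly planted foot

def walk_tick(phase, robot):
    """One pass of the straight-leg walking cycle described above."""
    if phase == Phase.BALANCE:
        robot.correct_with_toe_heel_pressure()   # press harder on the side you're falling towards
        if robot.want_to_walk():
            phase = Phase.FALL
    elif phase == Phase.FALL:
        robot.place_swing_foot_ahead_of_fall()   # swing leg straightened before touchdown
        if robot.swing_foot_about_to_land():
            phase = Phase.PUSH_OFF
    elif phase == Phase.PUSH_OFF:
        robot.push_off_rear_foot()               # supplies the momentum for the arc
        if robot.swing_foot_on_ground():
            robot.swap_stance_and_swing()
            phase = Phase.ARC_OVER
    elif phase == Phase.ARC_OVER:
        robot.correct_with_toe_heel_pressure()   # fix a launch that was too strong or weak
        robot.update_launch_power_estimate()     # learn a better launch for next time
        if robot.centre_of_gravity_past_foot():
            phase = Phase.FALL                   # begin the next fall
    return phase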

That's a simple algorithm which would provide better walking than you see in most robots, because those robots don't walk by falling forwards. What you normally see with robots is that they plant one foot on the ground ahead, then transfer their weight from back foot to front while both are on the ground, and then they lift the rear foot once the weight of the robot is balanced securely on the front foot. For good walking it should not be balanced in that way - it should flow along through a series of falls.

A better walking algorithm (the one we actually use) would involve more complexity than the simple algorithm described above, because the knee does bend while the foot is on the ground, but I'm having trouble monitoring exactly what it does without affecting what it does. I would need to look at slow-motion video from the side to get a better idea of what's happening, but the knee may bend while the centre of mass moves over a foot in order to reduce the up-and-down movement, as well as absorbing impact energy on landing and applying power on launch. I think the knee is normally slightly bent on landing to enable immediate absorption of impact energy. Further steering inputs can be made by applying forces to the side of a foot. Even with this way of walking, the algorithms are pretty simple and should not be hard to program.

Complications are of course added when you take into account where the robot is to go and how it is to find its way there, but the actual walking algorithms should not need input from vision when walking on a flat floor - you can walk perfectly well in pitch darkness and a robot should be able to do the same. It will instead use sensors to keep on top of its orientation and balance, with those inputs feeding calculations that increase or decrease the amount of power applied on launch off one foot, plus adjustments (pressing on the toe, heel or side) while moving over a foot.
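The corrections themselves can be very simple - something along these lines (a sketch only; the gains are invented and the sensor values are assumed to come from whatever orientation sensing the robot has):

Code:
def ankle_correction(pitch, pitch_rate, kp=40.0, kd=8.0):
    """Torque to apply through the stance ankle.

    pitch: forward lean in radians (positive = falling forwards)
    pitch_rate: its rate of change in rad/s
    Positive output presses the toe down to resist falling forwards;
    negative presses the heel down. The gains are purely illustrative.
    """
    return kp * pitch + kd * pitch_rate

def adapt_launch_power(power, arc_speed, target_speed, gain=0.1):
    """Nudge the push-off power for the next step.

    If the last arc over the stance foot was too fast, push off less hard
    next time; if it was too slow, push off harder.
    """
    return power * (1.0 + gain * (target_speed - arc_speed) / target_speed)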

Further complications come into play when the ground isn't flat - there may be different levels to step on, they may slope, and they may be shaped in complicated ways that reduce the area of the foot that will touch down, taking away some of the controls for adjustment after launch, so the launch energy needs to be calculated with greater precision. Handling such terrain in the dark is not easy for us, so it will likewise be hard for robots - we often fall over in such circumstances. If it isn't dark, though, we find it pretty easy with practice. A robot would need to generate a 3D model of the terrain ahead of it to work out where to place its feet, how to orientate them, how much launch energy to use (to handle a change in elevation), and how much adjustment control will be available while balanced on one foot. That would be harder to program, but I can already imagine how it would be done.
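Once the terrain model exists, choosing where to put the foot could start out as a simple search over it - a sketch under the assumption that the model is a grid of cell heights (the weights and limits here are made up):

Code:
import math

def pick_foothold(heightmap, stance, target_dir, reach=1.2, max_rise=0.25):
    """Choose a foothold from a grid terrain model.

    heightmap: dict {(ix, iy): height in metres} of modelled cells
    stance: (ix, iy, height) of the current stance foot
    target_dir: unit vector of the direction we want to travel
    Cells are scored by progress in the target direction, with a penalty
    for changes in elevation (which demand more launch energy and more
    precision), and rejected outright if they are out of reach.
    """
    cx, cy, ch = stance
    best, best_score = None, float("-inf")
    for (ix, iy), h in heightmap.items():
        dx, dy = ix - cx, iy - cy
        dist = math.hypot(dx, dy)
        rise = h - ch
        if dist == 0 or dist > reach or abs(rise) > max_rise:
            continue                       # not steppable in one stride
        progress = dx * target_dir[0] + dy * target_dir[1]
        score = progress - 2.0 * abs(rise)
        if score > best_score:
            best, best_score = (ix, iy), score
    return best                            # None if nothing within reach is usable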

Quote
...though the ability to sidestep or stride over a rock, or walk up stairs, would be hugely useful.

The American military (I think) has a bipedal machine that can run over debris. A bit of video was released, but I wasn't able to watch it due to a slow Net connection. Clearly they have access to the best hardware, and cost is no limit to them.

Quote
No hardware problem

I think the problem is primarily hardware - the algorithms don't look particularly hard to me, but you would need a robot that can respond quickly and with precision, and it needs good sensors as well. Once you can buy a robot which meets those requirements, it looks as if a school computer club could program them to walk around on flat floors with no obstacles.
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #26 on: 16/09/2013 23:50:26 »
We're wandering a bit off topic here but it's fun. The problem with bipedal standing is that a body supported on two pivots below its center of gravity is inherently unstable, so standing still is an active process, requiring continual adjustment of muscle tone - hence the large amount of brain power needed by bipeds. Walking is slightly easier to compute because as you say it is a process of continually falling forward and arresting the fall, and can be achieved with fewer muscles. There are some cunning passive walking frames that allow a partially paralysed person to walk by leaning forward and rocking from side to side, but in every demonstration I have seen, the user had to use his hands to stand still.

It's interesting to play with a pogo stick, where the range of actuator capability is reduced to leaning and bouncing: most people can learn to move around quickly and accurately, but standing still on one spot is extremely difficult.  Interestingly, one of the earliest robots to walk up stairs was a pogo-monopod, very efficient but it couldn't stand still. At the other end of the scale of complexity there are plenty of insect-mimicking toys that stand and walk entirely open-loop (i.e. with no feedback)  because they always have 3 feet on the ground and are therefore inherently stable. 

Walking around on a flat floor with no obstacles is pretty pointless for a robot. In such a low-impedance environment, wheels are much more efficient.
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #27 on: 17/09/2013 16:24:10 »
Quote
The problem with bipedal standing is that a body supported on two pivots below its center of gravity is inherently unstable, so standing still is an active process, requiring continual adjustment of muscle tone - hence the large amount of brain power needed by bipeds.

That's a good point. A robot can lock itself in position and not use any energy to stand still, but it would have to be ready to unlock fast in case the wind starts to blow it over. We can lock our knees straight well enough (though I think some power has to be applied constantly to do so), but we still move around a bit, probably because the ankles aren't able to lock into an end-of-travel position.

Quote
Walking is slightly easier to compute because as you say it is a process of continually falling forward and arresting the fall, and can be achieved with fewer muscles.

There's another factor I've thought of that helps arrest the fall: when the mass of a leg is swung forwards, it slows the forward movement of the rest of the body. That would not happen with ultra-lightweight legs.

It's also worth considering walking on stilts where there is no ability to make adjustments by applying pressure from the sides or different ends of the foot - this makes the precision of the launch energy more critical. A robot with three stilt-like legs could be quite good at walking (on two legs) and standing still (on three). There wouldn't need to be a knee if the legs are telescopic so that they could shorten for moving forwards and lengthen to apply launch energy.

Quote
It's interesting to play with a pogo stick

That is a useful next step to thinking about how running works.

Quote
Walking around on a flat floor with no obstacles is pretty pointless for a robot. In such a low-impedance environment, wheels are much more efficient.

It's a useful step though towards getting it to walk over rough terrain. Get the walking on the flat sorted first, and then add lidar or a couple of webcams and try to calculate the best places for it to stand when there are obstacles everywhere. That would be a much tougher thing to program, even if you can afford lidar. I'm planning to work on the two webcam approach for vision and have thought about how to go about it quite a bit, but I think the pattern recognition side of it will take a lot of time to work out - this is needed to match up the same point in the two images so that its distance can be calculated, but even after that you have to model the whole scene and make sense of all the different surfaces, and work out which should not be stood on, so it's going to be a major undertaking. I'm also years behind other people in doing that kind of work and may not be able to catch up, so it may be better not to start on it. I'll see how I feel about that when my other work's finished and out of the way.
« Last Edit: 17/09/2013 16:27:29 by David Cooper »
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #28 on: 17/09/2013 18:01:30 »
You won't get very far playing rugby or catching rabbits if you have to look at the ground when you are running. Animals are extremely adaptable to traversing rough terrain without looking at their feet! It's all done by baroreceptors and extensometers, not the eyeball. 
 

Offline jeffreyH

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 3926
  • Thanked: 55 times
  • The graviton sucks
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #29 on: 17/09/2013 22:52:47 »

Quote
I'm planning to work on the two webcam approach for vision and have thought about how to go about it quite a bit, but I think the pattern recognition side of it will take a lot of time to work out - this is needed to match up the same point in the two images so that its distance can be calculated, but even after that you have to model the whole scene and make sense of all the different surfaces, and work out which should not be stood on, so it's going to be a major undertaking. I'm also years behind other people in doing that kind of work and may not be able to catch up, so it may be better not to start on it. I'll see how I feel about that when my other work's finished and out of the way.

I have already worked out pattern recognition and thought about stereoscopic vision. Maybe we should share ideas? :-)

I can pick a moving shape out of the background and isolate it.

BTW I also have ideas on focal point adjustment for a vision system.
« Last Edit: 18/09/2013 02:50:00 by jeffreyH »
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #30 on: 18/09/2013 13:29:17 »
Quote
You won't get very far playing rugby or catching rabbits if you have to look at the ground when you are running. Animals are extremely adaptable to traversing rough terrain without looking at their feet! It's all done by baroreceptors and extensometers, not the eyeball.

If you're playing rugby you can usually assume the ground is fairly flat and that there's little need to look at it as a result. I was imagining something more challenging like a rocky shore where a misplaced foot could easily result in a bad slip leading to you falling onto sharp things and into rockpools. You can also sprain an ankle very easily, so you have to look where you're going. If you're running along a path through a wood you also have to look at the ground to avoid tripping over roots and large stones, though in this case you can do most of the work with peripheral vision. In the dark though, you will trip over things if you try to go fast.

I've never tried to catch rabbits, but the kinds of animals that chase them tend to have four legs and those legs are designed quite differently with more pointed ends, making them more like retractable sticks. This may make foot placement less critical for them, but they're still better at moving over rough terrain when they can see where they're going. The worse the terrain, the greater the need for vision, as you should realise when you take the terrain to extremes and think about goats walking around on the faces of cliffs.
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #31 on: 18/09/2013 15:31:30 »
Quote
I have already worked out pattern recognition and thought about stereoscopic vision. Maybe we should share ideas? :-)

I'm happy to share ideas in any area where we are behind the competition, but if you're at the cutting edge with anything you might want to keep those ideas to yourself, as they may be too valuable to give away freely to others. Some people are happy to give their work away, while others are not, but it's entirely up to the person who has an idea whether to share it or not - there is no moral obligation on him/her to do so, particularly in a field where most of this stuff could be used for highly undesirable purposes if it gets into the wrong hands.

Quote
I can pick a moving shape out of the background and isolate it.

Picking a moving shape out of a non-moving background should be easy, as you just look for the pixels that change (though you'd need three frames to know which way the moving object is moving through the changing region), but can you do it if the whole background is moving as well (which it will be if the robot is moving)? Even if you can't, you still have something useful, as you should be able to measure the size of the moving part of the image and determine whether it's getting bigger, smaller or staying the same, as well as how fast it's moving sideways. If it's getting bigger and not moving across the screen, it's coming straight towards the camera and a collision may occur.
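For the static-background case, the three-frame idea looks something like this (a sketch assuming greyscale frames held as numpy arrays; the threshold is arbitrary):

Code:
import numpy as np

def moving_region(frame0, frame1, frame2, threshold=20):
    """Locate a mover against a static background using three frames.

    Differencing the two consecutive pairs and AND-ing the results isolates
    the object in the middle frame; comparing the centroids of the two
    difference masks gives a rough direction of travel in pixels per frame.
    """
    d01 = np.abs(frame1.astype(int) - frame0.astype(int)) > threshold
    d12 = np.abs(frame2.astype(int) - frame1.astype(int)) > threshold
    mask = d01 & d12                       # pixels changing across both pairs
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    box = (xs.min(), ys.min(), xs.max(), ys.max())   # bounding box of the mover
    direction = np.argwhere(d12).mean(axis=0) - np.argwhere(d01).mean(axis=0)
    return box, direction                  # box growing over time = approaching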

The role of pattern recognition here would be to identify the same item (or part of an item) in two frames which are sufficiently different that you can't find them at the same pixel locations, and the shape of those items will not be the same in each frame, so it's not going to be easy to handle. I'd be interested to know how far you have got with this, but you probably don't want to share your actual algorithms. So far I've only worked with mono images and have made very little progress with that. Working with stereo images might be more rewarding, as you could start to build a 3D model of the scene from them fairly quickly, giving you something similar to a model generated by lidar. What I would try to do is isolate a distinctive part of the scene in one image on the basis of its colour/shade and then look for the best fit for it in the other image, shifting its position from side to side (no up-and-down movement is required) until the best fit is found. This would require many thousands of pixel comparisons and similarity scores to be counted up, but it ought to end up generating relative-distance tags to tie to different parts of the scene. It could be done on different scales, starting with big chunks and working down to smaller ones, prioritising the smaller ones in areas judged to be of most interest. Then again, comparing a few large things could be just as processor-intensive as comparing a large number of small things, so it may be better to work with small ones first, looking for areas containing clear (and generally vertical) lines of change, since those will show up well when the overlap is best.
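For a single distinctive patch, that side-to-side search is easy to sketch (assuming rectified greyscale images as numpy arrays, with the usual convention that a point appears shifted to the left in the right-hand camera's image; the patch size and search range are arbitrary):

Code:
import numpy as np

def patch_disparity(left, right, x, y, size=16, max_disp=64):
    """Slide a left-image patch sideways over the right image and keep the
    shift with the lowest sum of absolute differences (SAD).

    (x, y) is the top-left corner of the patch in the left image.
    A larger returned disparity means a nearer surface.
    """
    patch = left[y:y+size, x:x+size].astype(int)
    best_d, best_score = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d < 0:
            break                                    # ran off the edge of the image
        candidate = right[y:y+size, x-d:x-d+size].astype(int)
        score = np.abs(patch - candidate).sum()      # thousands of pixel comparisons
        if score < best_score:
            best_d, best_score = d, score
    return best_d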

Part of the problem with working with high-resolution images is that they're slow to process. Ideally you'd have a variety of cameras with different resolutions so that you could work at the lowest resolution first and get the large-scale 3D layout from that with a relatively small amount of processing, but if you only have high-resolution images you'd have to do a lot of processing to generate low-resolution versions of them first, and that could cost as much processor time as it would save by working at low resolution afterwards. For this reason, I'm now thinking (as I write this) that working on a small scale may be the best approach for the initial analysis, perhaps with 8x8 pixel blocks. There could be alignment difficulties with repeating patterns, where there are multiple good matches for each block, so you'd need to store several best fits per block and then use the blocks that have only one good fit to decide which candidate is most likely for the neighbouring blocks that have several.
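That tie-breaking step might look like this - a sketch which assumes the matching pass has already produced, for each 8x8 block, a list of candidate disparities with the best-scoring one first:

Code:
def resolve_ambiguous(candidates, block=8):
    """candidates: {(bx, by): [d0, d1, ...]} - candidate disparities per block.

    Blocks with exactly one candidate are trusted as they are; for the rest,
    pick the candidate closest to the average of any already-resolved
    neighbouring blocks, falling back to the best-scoring candidate.
    """
    resolved = {pos: c[0] for pos, c in candidates.items() if len(c) == 1}
    for pos, cands in candidates.items():
        if pos in resolved:
            continue
        bx, by = pos
        neighbours = [resolved.get((bx + dx, by + dy))
                      for dx in (-block, 0, block)
                      for dy in (-block, 0, block)
                      if (dx, dy) != (0, 0)]
        anchors = [d for d in neighbours if d is not None]
        if anchors:
            target = sum(anchors) / len(anchors)
            resolved[pos] = min(cands, key=lambda d: abs(d - target))
        else:
            resolved[pos] = cands[0]
    return resolved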

Quote
BTW I also have ideas on focal point adjustment for a vision system.

I'm not sure what that means.

___________________________________________________


On the walking robot subject, another thing that needs to be controlled is horizontal rotation, and this could be done at the ankle. We appear to do this rotation using the whole of the lower leg, but in a robot that would not be necessary, although it may be the most efficient way to do it if artificial muscle is used - copying the designs of nature is often a good starting point. This horizontal rotation is important if you don't want the robot to be restricted to walking in a straight line, because although it can steer by placing a foot to one side and falling the other way to change its direction of travel, it will always be lined up in a single direction and it will be increasingly difficult/impossible to travel in directions other than that as the angle increases. Rotating the robot at the ankle will fix that.

I'm tempted to write a robot simulator and make it available as an x86 32-bit-mode binary blob which could run under multiple operating systems, but I can't promise I'll find the time to do so any time soon. The idea would be to use a square of the screen as a plan view, with the robot coming back in at the top if it walks out at the bottom of the box, and a side view below that. In its simplest version the robot itself would just be three dots marking the ends of the legs (two at the foot end, which would just be a point, one red and one green, and one at the top where they join the body - they can share the same location even though it would not be practical to build a real robot quite like that, as parts of it would have to move through each other like ghosts, but that doesn't matter in a model like this), plus a fourth dot at the top of the body. Head and arms would not be necessary (or can be thought of as part of the body, the top of which is the head). The mass and length of each section would be programmable so that you could experiment with a range of robot designs, with the masses located either at the four points indicated or perhaps at points midway between them - that's something I'd need to work out carefully. The lengths of the legs would be varied by telescoping them for simplicity (legs that bend at the knee really just do the same job in a more complex way). The controls would be: fore-and-aft movement at the hip; sideways movement at the hip; lengthening/shortening of each leg; and horizontal rotation of the foot (which would be regarded as sufficiently non-point-like and grippy to resist rotation against the ground). This rotation could actually be left out of the simple model, since the legs join the body at a single point and can pass through each other, but it's best to include it anyway, I think, so that it's already covered when a new version of the model is made later - which means there also needs to be some way of indicating the front of the body.

The model for the environment could be based on squares which can be set to different altitudes, and the robot will be able to read their locations directly such that it doesn't need vision to know where they are. The legs could pass like ghosts through any edges with only the points counting for contact. More complex models can be designed later, but the idea here is to create a simple one for working out the basics before getting tied up in extra complexity.

The physics would take a fair bit of working out, and part of the job there would be to provide sensor information which the binary blob would make available through variables. The person writing program code to control the robot would then be able to use the variables to read the sensors and write input values to other variables for the motors to act on, that being the only way to control the robot. Variables would also pass information about the current state of each motor and joint so that the program controlling the robot knows the orientation of all joints, the amount of power being applied by each motor and the speed of actual movement at each motor. All the variables would be displayed on the screen throughout and the program could be run slow or halted at any time to examine them.

It would be a lot of fun to do all that, but I can't justify putting in the time to do it at the moment because working out the physics could be hellish, not just for making it behave correctly but for generating correct values for the sensors, and I haven't thought yet about how many sensors would be needed, what kinds of sensors they should be, and where to place them. That's the part where the project would likely get bogged down.
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #32 on: 18/09/2013 17:11:46 »
I think my earlier point is made - the ability to pick up a completely novel object and chuck it into a waste bin requires a phenomenal amount of linear computing and unthinkable subtlety of sensors and servos, but we do it without conscious thought! 
 

Offline AndroidNeox

  • Sr. Member
  • ****
  • Posts: 252
  • Thanked: 2 times
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #33 on: 18/09/2013 23:49:18 »
Perhaps we should rely on the putative artificial intellect to think of its own arguments to convince us that it's aware.    ;)
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #34 on: 20/09/2013 17:40:45 »
More on the simulated robot idea:-

Actually, the sensors wouldn't be needed in the simplest version as it would be possible with a simulated robot just to read the coordinates of its location directly to determine how parts of the robot are aligned and moving. Later on, sensors can be simulated and the values from them can be used instead of reading the coordinates directly. The robot control software can then generate its own theory as to what the coordinates are, though it would need to be fed an average position to keep it in touch with where it is in the terrain.

We have balance sensors in our heads made of circular tubes containing fluid, with hairs that detect its movement. The signals from those would make it easy to tell if the robot is falling over sideways, but I don't know if robotic sensors of that kind have ever been made. Accelerometers are available though, and I'm guessing that they provide three values to indicate the force across them in three directions. If they're in free fall, all the values will be zero. Most of the time they will indicate which way is down, but when other forces apply they will show which way that part of the robot has started to move and how fast. That movement would be assumed to continue until an opposing force is detected, although any rotation of the robot resulting from the movement needs to be taken into account. A minimum of four accelerometers would be required for the simple robot model described earlier: one at the point where both legs connect to the body, one at the top of the body, and one at each foot.
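Reading the tilt off one of those accelerometers is straightforward whenever the robot isn't in free fall - a sketch, with the axis convention (x forward, y left, z up) and the threshold being my own assumptions:

Code:
import math

def tilt_from_accel(ax, ay, az, free_fall_threshold=0.5):
    """Estimate lean angles from one 3-axis accelerometer reading (m/s^2).

    When the robot is moving slowly the reading is dominated by gravity, so
    its direction tells us which way is down. In free fall the magnitude
    collapses towards zero and there is no usable reference, so we report
    that instead.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g < free_fall_threshold:
        return None                        # falling: no gravity reference
    pitch = math.atan2(ax, az)             # + = leaning forwards (with this convention)
    roll = math.atan2(ay, az)              # + = leaning to the left
    return pitch, roll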

If a stepped terrain is in use, the average robot location still needs to be passed to the software controlling it so that it can keep track of where it is, unless it is to stumble around in the dark.

So, it would be relatively easy to create a simple robot simulation program as a binary blob which other software could interact with to make the robot walk. In its initial form there would be no simulated sensors, but four sets of coordinates would simply be read directly (and repeatedly) to determine what the robot is doing. Later versions could add four simulated accelerometers, and software to control the robot would then be rewritten to work with that data instead, after which it should be able to control a real robot compatible with that virtual design. There would be 8 motors to control (using +/- values in read/write variables to make them move in different directions and at different speeds), and there would be 8 read-only variables which report back their positions. Power can be applied without the position values changing if the limits of movement have been reached. The terrain can be read directly, and the position of the robot is available via the coordinates of four positions. [In later versions of the robot simulator, only one coordinate would be given for the robot and that would be for its centre of mass - it would be up to the control software to read the simulated accelerometers and the motor positions to calculate the orientation of the robot at any time.] [With a real robot, it would be harder to work out where the robot is relative to any terrain without adding vision to it, but that can wait anyway - what matters is to program it to walk first and only worry about extending the capability after that.]
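The variable-only interface might end up looking something like this in outline (the field names, the point numbering and the sample policy in the control tick are all placeholders, just to show how control software would read the sensor variables and write motor demands):

Code:
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RobotVariables:
    # Write: +/- power demand for each of the 8 motors.
    motor_power: List[float] = field(default_factory=lambda: [0.0] * 8)
    # Read-only: reported position and actual speed of each motor/joint.
    motor_position: List[float] = field(default_factory=lambda: [0.0] * 8)
    motor_speed: List[float] = field(default_factory=lambda: [0.0] * 8)
    # Read-only (initial version): coordinates of the four points of the robot.
    points: List[Tuple[float, float, float]] = field(
        default_factory=lambda: [(0.0, 0.0, 0.0)] * 4)
    # Read-only (later versions): one reading per simulated accelerometer.
    accel: List[Tuple[float, float, float]] = field(
        default_factory=lambda: [(0.0, 0.0, 0.0)] * 4)

def control_tick(io: RobotVariables):
    """One pass of a control program: read sensor variables, write motor demands."""
    hip, top = io.points[0], io.points[1]        # point numbering is arbitrary here
    falling_forwards = top[0] > hip[0]           # crude check from the coordinates
    io.motor_power[0] = 1.0 if falling_forwards else -1.0   # motor 0: stance ankle, say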
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #35 on: 20/09/2013 18:59:16 »
Just returned from an instrument flying session. Our semicircular canals only detect acceleration, so no problem simulating them with accelerometers - I don't expect a walking robot to be able to fly a plane with its eyes shut (I have a perfectly good autopilot that does that!)

The cunning thing about the human nervous system is that it automagically adjusts to keep the ears (the tilt sensors) "above" the hip joints. Sprinters start with a pronounced forward lean as they accelerate, and become more upright at full speed. I think if you watch a normal bipedal gait very carefully you will see that the head actually leads the movement - the body intentionally falls forward then stops itself by swinging a leg forward. It's interesting that babies have a walking reflex long before they can stand still!
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #36 on: 21/09/2013 16:06:24 »
Quote
Our semicircular canals only detect acceleration, so no problem simulating them with accelerometers

It appears from a bit of googling that the semicircular canals only detect rotation and cannot serve as linear accelerometers. However, the utricle and saccule (which I had not heard of before) do appear to be accelerometers, the latter being more sensitive to vertical movement and the former to horizontal (no info on whether it's better at fore/aft or side-to-side acceleration). A BBC science page attributes a different function to the utricle and saccule, claiming they detect head tilt, but I suspect Wikipedia's more accurate on this.

The calculations are quite different depending on whether you're getting input from rotation detectors or from linear accelerometers, but all of it can be done with linear accelerometers, and I suspect that's all that's available for robotics.

Quote
Sprinters start with a pronounced forward lean as they accelerate, and become more upright at full speed. I think if you watch a normal bipedal gait very carefully you will see that the head actually leads the movement - the body intentionally falls forward then stops itself by swinging a leg forward.

It's necessary to avoid falling over backwards - the higher the acceleration, the further forward you have to lean. Once moving at a constant speed there is no need to lean forward. For deceleration you have to lean backwards.

_________________________________________


Another thought on vision: three cameras would be better than two. If you're trying to judge the distance to horizontal lines crossing ahead of you and there's no texture on those lines, it will be much easier to judge those distances with two cameras one above the other rather than side by side.
 

Offline alancalverd

  • Global Moderator
  • Neilep Level Member
  • *****
  • Posts: 4719
  • Thanked: 155 times
  • life is too short to drink instant coffee
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #37 on: 21/09/2013 17:53:38 »
Apols for not distinguishing between linear and rotational accelerometers

http://www.robotshop.com/sensors-gyroscopes.html will provide you with neat solid-state rotational accelerometers. Friends from the aerospace industry have been working on these for ages, looking for medical applications.

Quote
For deceleration you have to lean backwards.

And that's exactly what runners do after they have crossed the line. Less noticeable in 100m or shorter races where you may still be accelerating at the finish line, but above 200m you will be running at a fairly constant maximum speed at the finish, so you lean back to slow down.   


3 cameras? probably not necessary. No raptor has evolved a fully functional third eye. Worth reading texts on night and mountain flying to see how humans have to adjust for lack of texture and distorted perspective when approaching a runway.

I've just had an interesting discussion with a builder. We are replacing some rotten wooden pillars with steel, in a barn built on a gently sloping concrete apron. The barn floor also slopes - useful for washing down horses and tractors. If you stand on the apron or inside the barn your semicircular canals adjust and you can swear that the steel columns are about 3 degrees off vertical, but a plumb line says they are perfect.   
 

Offline David Cooper

  • Neilep Level Member
  • ******
  • Posts: 1505
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #38 on: 22/09/2013 19:08:01 »
Quote
Apols for not distinguishing between linear and rotational accelerometers

It was all my fault for not thinking that rotational ones would be classed as accelerometers.

Quote
http://www.robotshop.com/sensors-gyroscopes.html will provide you with neat solid-state rotational accelerometers. Friends from the aerospace industry have been working on these for ages, looking for medical applications.

Those are very affordable - it would be fun to play with them, so I've bookmarked that site. I think I should leave it to other people to build robots though and restrict myself to thinking about writing control software for them, so the best way forward would be to write a robot simulator and work towards making both kinds of accelerometer available in it so that software can be written to try to work with one type or the other, or a mixture of both. That would make it more likely that it would work with little modification on a wide range of actual robots. I expect gyroscopes will be rare in robots though as they'll wear out and drain more power, but it's worth being able to use them if they are there.
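For the "mixture of both" case, the standard trick (not mine) is a complementary filter: integrate the rotational sensor for fast response and let the linear accelerometer slowly correct the drift. A minimal sketch, with the blend factor chosen arbitrarily:

Code:
def complementary_tilt(prev_angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Blend a rate-gyro integration with an accelerometer tilt estimate.

    gyro_rate: angular rate about the relevant axis (rad/s) from the
               rotational sensor; integrating it alone drifts over time.
    accel_angle: tilt estimated from the linear accelerometer (rad); noisy
                 from step to step, but it does not drift.
    alpha near 1 trusts the gyro for quick changes while the accelerometer
    slowly pulls the estimate back towards the true angle.
    """
    return alpha * (prev_angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle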

I'm still not keen to start writing a robot simulator just yet - I don't know how much work would be involved in getting the physics right. A real robot would behave the right way without any effort as the laws of physics are provided for free, but a simulated one has to be programmed to fall over correctly.

Quote
3 cameras? probably not necessary. No raptor has evolved a fully functional third eye.

It would be just about impossible to evolve an extra one in the right place - we've only evolved stereo vision by the luck of having two eyes already: stereo was found to be useful in the area of overlapping vision between the two eyes so predators evolved a greater overlap at the expense of losing the all-round vision that's more important to prey species. Most of the gains were made at that point and there would be a lot less to gain from adding a third eye, but there would be some rare situations where it could be useful. If you think about applying machine vision to tasks like driving, most of the lines of texture on the road run from side to side, so it would be much easier to judge distances to those lines by putting one camera above the other rather than side by side. I've just done the experiment with two horizontal strings just in front of my eyes (actually a loop of string going round a finger at each end), too close to focus on such that the texture is lost. It's hard to judge which string is nearer. Turn them vertical and it's suddenly very clear which one is further away and by how much. It is not a small difference.

Quote
If you stand on the apron or inside the barn your semicircular canals adjust and you can swear that the steel columns are about 3 degrees off vertical, but a plumb line says they are perfect.

Do they actually adjust or is it the processing in the brain that adds in an adjustment?
 

Offline AndroidNeox

  • Sr. Member
  • ****
  • Posts: 252
  • Thanked: 2 times
    • View Profile
Re: How can artificial general intelligence systems be tested?
« Reply #39 on: 18/11/2013 20:11:41 »
The human mind is a physical process of the human brain. A mechanical mind would be a physical process of a different type of system. Non-biological minds are definitely possible.

How to determine whether a machine possesses an aware mind is tricky. How do you prove that other people possess awareness? Personally, I think we can wait and let the machine present its own arguments.
 
