
How can I write a computer simulation to test my theory


Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #200 on: 22/01/2019 22:50:30 »
Can you give us a glimpse of how your software learns, David?

I read that the programs that can now play Go efficiently need to be able to learn, because there are too many possibilities and they need to find shortcuts through them. The language of Go is much simpler than human languages, so it must be easier to develop learning software for it, but as you know, I think that anything that faces a change in its environment has to exploit randomness to adapt, and learning is nothing other than fast adaptation, so I suspect that Go programs use some kind of random process to learn. You, however, seem reluctant to use randomness this way, so I really wonder how your software works.
David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #201 on: 23/01/2019 20:25:52 »
Quote from: Le Repteux on 22/01/2019 22:50:30
Can you give us a glimpse of how your software learns, David?

If I teach you a word, you simply stick it in a database in your head and use it from there. If it has special rules about how it's used, you'll pick those up by seeing examples of it in use and by being told when you've used it incorrectly. That kind of learning is trivial. A different kind of learning applies to things where part of the learning involves working out how to solve problems, such as a machine learning how to walk, but the difficult part there is the problem solving rather than the learning. In the case of Go, chess, etc., the task is a problem-solving one: working out the best algorithms to apply.

Quote
...but you seem reluctant to use randomness this way, so I really wonder how your software works.   

Randomness isn't necessary and probably isn't helpful. If you're homing in on a good method for doing something and you keep trying little variations in the method you're applying, you'll see the success numbers going up or down and can use those to guide you towards what may be the optimum algorithm. It's like climbing a hill - you keep going up rather than down, and you quickly get to the top by doing this. From the top of that hill though, you can't get to the top of a higher hill without first going down, so a blind climber (from a species of blind intelligent animals on another planet) would want to start climbing in lots of different places and see where it gets them. A systematic approach to choosing those starting places will more likely find the highest summit before a random approach does. A country that is repeatedly found to be flat is less likely to provide the highest peak than a country that is repeatedly found to contain high mountains.
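To make the hill-climbing idea concrete, here's a toy Python sketch (nothing from my own software - the two-peak score function and the step size are invented stand-ins). The climber keeps stepping uphill until no neighbour scores better, and it is restarted from evenly spaced starting points so that no region of the terrain gets sampled twice:

# Hill climbing with systematically spaced restarts.
# The score function is an arbitrary stand-in with two hills:
# a low peak near x=2 and a higher one near x=8.

def score(x):
    return max(4 - (x - 2) ** 2, 9 - (x - 8) ** 2)

def climb(x, step=0.1):
    # Keep taking the uphill step until neither neighbour is better.
    while True:
        best = max((x - step, x, x + step), key=score)
        if best == x:
            return x, score(x)
        x = best

# Evenly spaced starts cover the terrain without repetition,
# unlike random restarts, which can sample the same region twice.
starts = [float(i) for i in range(11)]
summit = max((climb(s) for s in starts), key=lambda result: result[1])
print(summit)   # homes in on the higher peak near x=8

Every starting point is tried exactly once, so a random restart scheme can at best match this and will typically waste climbs on ground already covered.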

For a machine learning to walk, it needs to try moving limbs in different ways and see what results from those movements. The most promising ones can then be selected and combined with other moves to see if better results emerge. Again, like choosing different starting points for the climbs, it's worth exploring less promising initial results too because some of those may lead to better solutions in the end, although they are less likely to do so. Some successful methods may use too much energy, so while they may be useful for some purposes, they should not be the main way of moving. While a lot of this experimentation can be lumped in as part of learning, it really isn't - the learning part is in discovering that something works better than something else so that that knowledge can inform future exploration. I haven't written a program to apply any of this to anything yet, but it's something I've wanted to have a go at for years - it's been too much of a diversion from more essential work for me to get round to it until now, but it will soon be within reach. I'm just trying to solve a number of geometrical problems so that I can handle virtual objects correctly (though this is mostly about display issues - you need to be able to see the results, and I want everything to look good).
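I haven't written that program yet, so what follows is only a sketch of what the select-and-combine step might look like - a beam search in Python that keeps several promising move sequences alive rather than just the single best, so that less promising branches still get explored. The move names and the scoring rule are made up for illustration:

# Keep a beam of promising move sequences rather than only the best,
# so less promising branches still get explored and combined.
# The moves and the scoring rule are toy stand-ins.

MOVES = ["lift_left", "lift_right", "swing_left", "swing_right"]

def score(sequence):
    # Toy rule: alternating left/right moves make the best "gait".
    return sum(1 for a, b in zip(sequence, sequence[1:])
               if a.split("_")[1] != b.split("_")[1])

def extend(beam, width=4):
    # Grow every kept sequence by one move, keep the best `width`.
    candidates = [seq + [m] for seq in beam for m in MOVES]
    return sorted(candidates, key=score, reverse=True)[:width]

beam = [[m] for m in MOVES]
for _ in range(3):               # three rounds of combining moves
    beam = extend(beam)
print(beam[0])                   # an alternating left/right sequence wins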
Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #202 on: 24/01/2019 19:02:06 »
Quote from: David Cooper on 23/01/2019 20:25:52
Randomness isn't necessary and probably isn't helpful. If you're homing in on a good method for doing something and you keep trying little variations in the method you're applying, you'll see the success numbers going up or down and can use those to guide you towards what may be the optimum algorithm.
To me, little variations are the counterpart of little mutations, for which the guide is the environment, so unless you think that those variations are not necessarily random, it seems that you simply see no randomness where I see some. I think our disagreement concerns the guide: you seem to attribute it to our intelligence, whereas I attribute it to the environment. I see the explanation for this divergence in the way we usually consider the two kinds of memory. The memory of a species depends on its reproduction (if an individual succeeds in reproducing, its mutation is in a sense memorized), whereas our memory or the memory of a computer doesn't, so we can more easily think that the variations our mind produces are not selected by the environment, whilst they necessarily are if they are random. The problem is that we can hardly imagine that we often make huge mistakes, like taking the opposite direction. We do, but we rapidly forget about them once we get used to the right way, because then we don't even have to think about what we are doing anymore.

I got used to the idea that our memory works like that of my small steps, that it has to reproduce itself constantly, so I can more easily compare it to the memory of a species, which also depends on reproduction. The big question is: am I going the wrong way, or am I just fine-tuning my direction? The only way to know is to look at the feedback from my environment, and for the moment, I only get resistance. It's as if an individual from a given species had a problem reproducing because it carries a new mutation. That can certainly happen, but if the mutation still helps that individual to survive, with time it still increases its chances of being reproduced. In my case, the variation is on an idea, so amongst all the other ideas that I have, it is only that idea it would help to survive, but with time, it would still increase its chances of being reproduced by others.
David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #203 on: 24/01/2019 22:15:54 »
When a species evolves, it does so by natural selection rewarding useful mutations and punishing bad ones, but the mutations continue to be random - no lessons are learned about bad mutations, so they are made repeatedly and a lot of individual animals suffer greatly as a result. If a mutation is discovered to be bad, ideally the repetition of that mutation would be prevented, but nature hasn't provided a memory to prevent that. Of course, the same mutation might not be harmful and could be beneficial later on after a number of other mutations have occurred, so you don't want to prevent that mutation being tested again, but you do want to avoid testing it again from the same starting point. Sticking to random is a slower way of making progress, and with intelligent machines, there's no excuse for doing that because it's easy to record what fails and to avoid repeating those failures over and over again.
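In code, that memory can be as simple as a set of (starting point, mutation) pairs that are known to fail, which leaves the same mutation available from every other starting point. Here's a toy Python sketch (the bit-string genome and the fitness rule are invented for illustration):

# Remember which (state, mutation) pairs already failed so the search
# never retries a known failure from the same starting point.

def fitness(genome):
    return sum(genome)            # toy rule: more 1-bits is fitter

def apply_mutation(genome, i):
    g = list(genome)
    g[i] ^= 1                     # each mutation flips one bit
    return tuple(g)

failed = set()                    # (genome, mutation) pairs known to fail

def improve(genome):
    for m in range(len(genome)):
        if (genome, m) in failed:
            continue              # skip: already failed from this state
        candidate = apply_mutation(genome, m)
        if fitness(candidate) > fitness(genome):
            return candidate
        failed.add((genome, m))   # record the failure for this state only
    return None                   # local optimum reached

g = (0, 1, 0, 0)
while (nxt := improve(g)) is not None:
    g = nxt
print(g)                          # (1, 1, 1, 1)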
Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #204 on: 26/01/2019 17:17:55 »
I think that an AGI with no integrated random process would simply try to stop any intellectual evolution, so I'm really worried about your viewpoint, all the more so because I have never succeeded in convincing anybody of that, least of all the few programmers I have met. Such an AGI would certainly be better than us at solving any old problem, but I think it would be unable to solve new ones without using randomness. I think it would prevent us from evolving without being able to evolve itself. In this sense, the Go programs are probably only solving old problems if they don't use randomness, and they probably win just because they are faster and because their memory is a lot more precise than ours. Evolution of species is slow because testing a mutation takes a lot more time than testing the variation of an idea, not because randomness slows the process. If randomness were that slow, the technological revolution would not be that fast.

Research takes time because it is a trial and error process, but that process often pays off, otherwise we wouldn't use it. It is not computers that make the revolution, it is humans with all their mistakes. The computers only help to crunch the data, not to invent new stuff. People carrying wrong ideas don't suffer the way animals carrying the wrong mutation do. I know my ideas could be wrong, but I don't suffer because of that. On the contrary, I need to take chances to feel good, and if I'm wrong, I simply try something else. We know that the mutation/selection principle works only because we know that it sometimes pays to take chances, so how could an AGI ever understand that principle if it is unable to take any chances, and how could it behave intelligently without that capacity? Would it prevent us from taking chances? And if so, could our intelligence stay sane without that pleasure?

If such an AGI were already ruling the world, it would be trying to stop the wars, stop the pollution, stop the population growth, stop the inequalities, etc., all those things that we are actually trying to control too, but where we face huge resistance. To be faster than us, that AGI would thus have to find new ways to get around that resistance. Firstly, I think it couldn't find new ways if it were unable to proceed by trial and error, and secondly, if that resistance is of the same kind as the one we feel when we need to accelerate a massive body, I'm afraid it would simply be losing precious time.

David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #205 on: 27/01/2019 01:01:06 »
Quote from: Le Repteux on 26/01/2019 17:17:55
I think that an AGI with no integrated random process would simply try to stop any intellectual evolution, so I'm really worried about your viewpoint, all the more so because I have never succeeded in convincing anybody of that, least of all the few programmers I have met.

I have never heard of any case where a random process can be guaranteed to be faster than a systematic one. In a hypothetical case where a random process is the fastest, it is only the joint fastest, and there will be many systematic approaches that match it. Take for example a situation where there's a goat hidden behind one of a thousand doors, and its position has been randomly selected. A random search is the best way to find it, but so is a systematic search in door order - nothing is lost by not doing random. If the doors are closed after it's been revealed that the goat isn't behind them, then you could have the random process open many of the doors multiple times because it isn't monitoring which ones it's already tried, whereas the systematic search will only open any door once. Evolution is like the case where the same door is opened repeatedly during the search for the goat. Intelligent systems should not behave that way.
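The cost of that repetition is easy to check with a throwaway Python simulation (the door and trial counts are arbitrary):

# Compare a systematic sweep of the doors with a memoryless random
# search that can reopen doors it has already tried.
import random

DOORS = 1000

def systematic(goat):
    for opened, door in enumerate(range(DOORS), start=1):
        if door == goat:
            return opened

def memoryless_random(goat):
    opened = 0
    while True:
        opened += 1
        if random.randrange(DOORS) == goat:
            return opened

trials = 2000
goats = [random.randrange(DOORS) for _ in range(trials)]
print(sum(systematic(g) for g in goats) / trials)         # ~500 openings
print(sum(memoryless_random(g) for g in goats) / trials)  # ~1000 openings

The sweep averages about 500 openings; the memoryless random search averages about 1000, because it keeps reopening doors it has already ruled out.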

If you want to send a secret message, using a random single-use key (which the receiver also has and which no one else can access) guarantees that the encrypted message will be random and uncrackable, but the fact that randomness has real uses doesn't mean that it can help solve problems. If you want to avoid losing a game like scissors, paper, stone you should also apply randomness to avoid your moves being predicted, but again that is not problem solving.
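In miniature, the single-use key scheme looks like this (a Python sketch - os.urandom stands in for the shared random key, and getting that key to the receiver securely is assumed to have happened out of band):

# One-time pad in miniature: XOR with a truly single-use random key
# makes the ciphertext itself look random.
import os

message = b"meet at dawn"
key = os.urandom(len(message))                      # single-use random key

ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered  = bytes(c ^ k for c, k in zip(ciphertext, key))

assert recovered == message
print(ciphertext.hex())   # indistinguishable from random bytes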

Quote
Such an AGI would certainly be better than us to solve any old problem, but I think it would be unable to solve new ones without using randomness.

Why? A systematic approach is fully capable of working out the range of all possible actions and of going through every single one of them to try them out. Meanwhile, the random approach wastes more and more of its time on repetition.

Quote
Research takes time because it is a trial and error process, but that process often pays off, otherwise we wouldn't use it.

Trial and error can be random or systematic. The latter is better.

Quote
It is not computers that make the revolution, it is humans with all their mistakes.

Failing to try something that might work is a mistake, so an "AGI" that avoids trying that thing is not an AGI, but an AGS. Repeatedly testing something that never works is also a mistake when better progress can be made by trying lots of other things that haven't been tried before.

Quote
The computers only help to crunch the data, not to invent new stuff.

It's only once we have AGI that computers will become good at inventing new stuff, but they will be good at it and will outperform us. The only thing that will hold them back is their inability to judge whether their fun inventions are fun or not - they'll have trouble measuring how much pleasure they generate when they're incapable of experiencing any of that themselves.

Quote
On the contrary, I need to take chances to feel good, and if I'm wrong, I simply try something else.

Taking chances is not random. You don't jump out of a plane without a parachute in the hope of making some great positive discovery - you play the odds instead and look for places where luck is most likely to reward you in some way.

Quote
We know that the mutation/selection principle works only because we know that it sometimes pays to take chances, so how could an AGI ever understand that principle if it is unable to take any chances, and how could it behave intelligently without that capacity? Would it prevent us from taking chances? And if so, could our intelligence stay sane without that pleasure?

Taking chances is not random. An AGI programmed to do random things might make a black hole that swallows the Earth, or just press all the world's nuclear buttons just to see if that somehow pays off in some way. We absolutely don't want them to behave like that.

Quote
If such an AGI were already ruling the world, it would be trying to stop the wars, stop the pollution, stop the population growth, stop the inequalities, etc., all those things that we are actually trying to control too, but where we face huge resistance. To be faster than us, that AGI would thus have to find new ways to get around that resistance. Firstly, I think it couldn't find new ways if it were unable to proceed by trial and error, and secondly, if that resistance is of the same kind as the one we feel when we need to accelerate a massive body, I'm afraid it would simply be losing precious time.

The problem you're presenting isn't real. Trial and error is not inherently random, and there's nothing that random does that can't be done by a systematic approach. A fully random approach is also dangerous - people don't follow it either, because if they did they'd die out very quickly.
Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #206 on: 29/01/2019 16:18:35 »
Quote from: David Cooper on 27/01/2019 01:01:06
If the doors are closed after it's been revealed that the goat isn't behind them, then you could have the random process open many of the doors multiple times because it isn't monitoring which ones it's already tried, whereas the systematic search will only open any door once
Our intelligence may be able to work randomly while still remembering that it has already opened a door: randomness doesn't prevent order. I'm not very systematic, and that is probably why I'm slow at simulating my small steps. I like to improvise, so I chose to look for solutions that others aren't looking for, to get more time to develop them. Nevertheless, I need to be at least minimally systematic, otherwise I wouldn't get anywhere. I can change my mind if I discover that an idea doesn't work, but I have to stick to it long enough to be sure it doesn't. I would probably be a lot faster and a lot more systematic if I had a good memory, and so would software designed to improvise.

It is one thing to look for a given thing behind a given number of doors, but it is another to have only a vague idea of what we are looking for. Somehow, the mutation/selection process is intelligent, or at least, we find it more intelligent than intelligent design. That process nevertheless succeeded in designing our intelligence, which became able to manipulate the mutation/selection one. It is thus also possible that we will succeed in inventing a superior intelligence that would be able to manipulate ours in turn, which means that we may actually be looking for something we know nothing of. We may think we know, but as the evolution of species shows, that intelligence may be completely different from ours, so what we already have in mind about it is probably only a vague idea. On the other hand, if, as I think, our mind really uses randomness the way evolution does, it could mean that randomness is the only way to evolve. In a world where so many crucial things are unpredictable, a mix of diversity and randomness might be the only way to last.

I think democracy is such a mix. I think that elections add a bit of randomness to the process of choosing our leaders, a randomness that produces more diversity with time, a diversity that permits societies to evolve more rapidly. Some argue that China didn't need democracy to evolve fast, but that's forgetting that we were the ones buying their goods in the beginning, and that they were using our technology to produce them. Of course, that government could artificially try to introduce diversity in the way it governs, but I'm afraid the only way to do that would be to accept dissidence and call elections, a political suicide.

Quote from: David Cooper on 27/01/2019 01:01:06
A systematic approach is fully capable of working out what the range of all possible actions is and to go through every single one of them to try them out. Meanwhile, the random approach wastes more and more of its time on repetition.
When there are enough individuals carrying different mutations, repetition doesn't really slow the process. It is even better that the winning mutation belongs to many individuals at a time, in case some of them have an accident. That's what happens with ideas: many individuals often develop the same idea at the same time, which usually produces different solutions to the same problem, which is good for diversity, which is good for further evolution. A lot of people are actually working on artificial intelligence, for instance, and it effectively increases our chances of developing it.

Quote from: David Cooper on 27/01/2019 01:01:06
Trial and error can be random or systematic. The latter is better.
Why one or the other, why not both at a time?

Quote from: David Cooper on 27/01/2019 01:01:06
It's only once we have AGI that computers will become good at inventing new stuff, but they will be good at it and will outperform us. The only thing that will hold them back is their inability to judge whether their fun inventions are fun or not - they'll have trouble measuring how much pleasure they generate when they're incapable of experiencing any of that themselves.
Our feelings help us to survive; they work like our senses. If an AGI had senses to help it survive, it could probably develop feelings. But if it did, it would be exactly like us, and we are afraid of what it could do since we are afraid of ourselves, so we don't want to try it. On the other hand, you think an AGI would be less dangerous than us just because it would have no feelings, but if it had some, it would be afraid of itself just like we are, and being more intelligent than we are, it might succeed in controlling itself better than we do. Feelings are a shortcut to analysing situations: no need to remember what produced a bad feeling; we know we must flee the situation or prepare to fight. I know because I live with my mom and we often have words. Most of the time, after a while, I forget about the facts, but I still know it is too soon to have a talk again because the bad feeling is still there. I was afraid you would get angry with Rmolnav the other day, but you didn't, or at least it didn't show, so unless you're a piece of software, it may mean that you can control yourself very well, or that you have what we call a very good character.

But what are those characteristics exactly? What makes us more or less aggressive? More or less patient? More or less empathetic? If we knew exactly how our own brain works, we might be able to build the perfect Artificial Human, and we wouldn't need a perfect AGI to rule us since we would already be perfect. But I don't believe in perfection since I decided not to believe in god when I was 12, so I don't believe in perfect AH or perfect AGI either. To me, if we depended on perfection to exist, nature would have made us perfect. To me, all the things that exist need to be imperfect to keep on existing. To me, trying to be perfect is nonsense that can even become dangerous. Of course, we have to keep getting rid of wars and levelling the inequalities, but we don't need perfection to do that, just to go on evolving. Of course, we can try to build a better artificial intelligence than our own, but without aiming for perfection.


David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #207 on: 29/01/2019 23:02:15 »
Quote from: Le Repteux on 29/01/2019 16:18:35
Somehow, the mutation/selection process is intelligent, or at least, we find it more intelligent than intelligent design.

It isn't intelligent, but it did produce intelligence. Intelligent design is much quicker and actually uses intelligence rather than relying on random luck.

Quote
It is thus also possible that we will succeed in inventing a superior intelligence that would be able to manipulate ours in turn, which means that we may actually be looking for something we know nothing of. We may think we know, but as the evolution of species shows, that intelligence may be completely different from ours, so what we already have in mind about it is probably only a vague idea.

There is a point at which a species can develop general intelligence on the same level as us, and any that ever manage to do so will automatically be able to work out how to communicate with us. There are many lesser forms of high intelligence, as we can see with other apes, dolphins, parrots, octopodes, etc., but none of them have the full package - the diversity of high intelligence that we see in these other animals is primarily down to different deficiencies rather than them having something extra. No alien capable of communicating with us through images sent back and forth using radio signals would have any difficulty working out how to hold intelligent conversations with us, although the long delays between replies would be frustrating. There is one end point for general intelligence, and we are there. We have rules of reasoning which alien mathematicians will also identify and apply in thinking machines, and those will go on to discover any new reasoning tools that we might not have found yet, but it's unlikely that there is much left to find - we may well have the full set of fundamental reasoning tools already.

Quote
On the other hand, if, as I think, our mind really uses randomness the way evolution does, it could mean that randomness is the only way to evolve. In a world where so many crucial things are unpredictable, a mix of diversity and randomness might be the only way to last.

It took a billion years to evolve our intelligence, but it came close many times and failed before it produced us. I don't think we should handicap AGI by making it take millions of years to do things that it can do in a few minutes by applying intelligence.

Quote
I think democracy is such a mix. I think that elections add a bit of randomness to the process of choosing our leaders, a randomness that produces more diversity with time, a diversity that permits societies to evolve more rapidly.

All it does is enable us to remove people from power if they are blinded by that power and start making bad decisions.

Quote
Some argue that China didn't need democracy to evolve fast, but that's forgetting that we were the ones buying their goods in the beginning, and that they were using our technology to produce them. Of course, that government could artificially try to introduce diversity in the way it governs, but I'm afraid the only way to do that would be to accept dissidence and call elections, a political suicide.

China has better experts in charge of things than we are able to get into power in democracies - everything in the West is held down to the level of people of absolutely average intelligence, so it's been easy for China to take advantage of us. We have stupidly handed them almost all the technology they need to take over the world. That said though, our own governments are so awful that it might be best if the Chinese win out.

Quote
When there are enough individuals carrying different mutations, repetition doesn't really slow the process. It is even better that the winning mutation belongs to many individuals at a time, in case some of them have an accident.

It absolutely does slow the process. Not only that, but whenever an individual genius emerges by luck, the offspring of that genius are typically not geniuses - the magic formula is broken every time for the next generation as soon as it's been found.

Quote
That's what happens with ideas: many individuals often develop the same idea at the same time, which usually produces different solutions to the same problem, which is good for diversity, which is good for further evolution. A lot of people are actually working on artificial intelligence, for instance, and it effectively increases our chances of developing it.

Almost all of those successful people are following the same strategy, and it isn't random - it's systematic searching. I came up with a new way of doing 3D graphics by thinking hard about it and exploring hundreds of possible alternative ways of doing things - it wasn't just a random idea that popped out of nothing, but one that I found because I kept on and on hunting (motivated primarily by my desire to avoid using a GPU), and that hunt was guided by intelligent ideas which forced the search to take the right directions. I've continued developing it further, just last night finding a new way of calculating shadows which will slash the processing time required for that too. (I'm not going to reveal any details of how it's done until after I've implemented it, so don't ask.) Random searches rarely hit gold, but intelligent searches repeatedly uncover rich seams of the stuff.

Quote
Quote from: David Cooper on 27/01/2019 01:01:06
Trial and error can be random or systematic. The latter is better.
Why one or the other, why not both at a time?

The systematic approach can include a random aspect if it's helpful, so it's already doing both (except that a random aspect is rarely helpful).

Quote
Our feelings help us to survive; they work like our senses. If an AGI had senses to help it survive, it could probably develop feelings. But if it did, it would be exactly like us, and we are afraid of what it could do since we are afraid of ourselves, so we don't want to try it. On the other hand, you think an AGI would be less dangerous than us just because it would have no feelings, but if it had some, it would be afraid of itself just like we are, and being more intelligent than we are, it might succeed in controlling itself better than we do. Feelings are a shortcut to analysing situations: no need to remember what produced a bad feeling; we know we must flee the situation or prepare to fight. I know because I live with my mom and we often have words. Most of the time, after a while, I forget about the facts, but I still know it is too soon to have a talk again because the bad feeling is still there. I was afraid you would get angry with Rmolnav the other day, but you didn't, or at least it didn't show, so unless you're a piece of software, it may mean that you can control yourself very well, or that you have what we call a very good character.

(I was annoyed at Rmolnav when I thought he was a troll, but the reality is that he means well and is doing the best he can, applying what is in some places quite an extensive knowledge. I like him now, even though he's annoying. I came to like The Box too (a very different case [pun intended]). Everything becomes much brighter when you discover that you can like people that irritate you.)

We don't know how to program feelings into machines though, so that isn't an option. AGI with feelings needn't be any safer or more dangerous than without. Once you understand what morality is, you simply apply it, and feelings cannot be allowed to override reason. If you find someone ugly to look at and someone else pleasing to look at, you don't let your feelings lead you to discriminate in favour of the latter individual against the former.

Quote
But what are those characteristics exactly? What makes us more or less aggressive? More or less patient? More or less empathetic? If we knew exactly how our own brain works, we might be able to build the perfect Artificial Human, and we wouldn't need a perfect AGI to rule us since we would already be perfect.

Feelings can introduce biases, and we have to avoid being biased. At the same time, morality depends on feelings because if there's no sentience, there's nothing needing to be protected from harm. All the really immoral stuff that goes on in the world comes out of biases. If we were all able to recognise biases and avoid acting on ours, then we wouldn't have so much to gain from perfect AGI, although there would still be a considerable gain from it being better at crunching the data to make better moral decisions.

Quote
But I don't believe in perfection since I decided not to believe in god when I was 12, so I don't believe in perfect AH or perfect AGI either.

A calculator calculates perfectly (within the degree of precision it's designed to work to). It gives the same answer to the same question every time, and every other correctly designed calculator agrees with it. AGI is going to be the same, but with a much wider range of capability. Mathematics is an exploration of perfection, finding rules that have no exceptions. AGI will be the most perfect application of mathematics.

Quote
To me, if we depended on perfection to exist, nature would have made us perfect.

Nature has failed to make us perfect because evolution is blind, and it bodges solutions rather than engineering them from scratch. You are calling for a random approach that will fail to create perfection.

Quote
To me, all the things that exist need to be imperfect to keep on existing. To me, trying to be perfect is nonsense that can even become dangerous.

If you don't aim for perfection in maths, you're not doing maths properly and your mistakes will kill people.

Quote
Of course, we have to keep getting rid of wars and levelling the inequalities, but we don't need perfection to do that, just to go on evolving. Of course, we can try to build a better artificial intelligence than our own, but without aiming for perfection.

I leave it to the neural-net fanatics to create imperfect intelligence - they will make smart machines that kill people. I will do my best to stop them by creating something perfect.
Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #208 on: 01/02/2019 17:09:09 »
Quote from: David Cooper on 29/01/2019 23:02:15
It isn't intelligent, but it did produce intelligence.
Well, if evolution isn't intelligent, then I can claim that our mind isn't intelligent either, because I see the same kind of memory and imagination in both processes: memory being due to a precise information-reproduction process in both, neurons and genes, and imagination being due to a specific random process in both too. The only difference is that a species doesn't seem to transmit feelings or sensations to a central processor, but if we define sensations as the data from the environment that permits a species to adapt to it, which is what our senses permit us to do, then it is obviously sensitive to that data. In fact, the species is sensitive through its individuals, as if those individuals were the arms and legs of the species, but the analogy doesn't stop there: those individuals are constantly replaced by new ones, exactly like our own cells, without the species being affected by the process.

If we now compare all the individuals of a species to all the neurons of a human, then we get a glimpse of how our feelings may work: the individuals are the ones that feel the environment, and their sensations guide their moves through it. A sensation is what tells us what to eat or how to move, so by analogy, a feeling is what tells a whole species to do the same thing. In our mind, a sensation might thus belong to a small number of neurons while a feeling might belong to a large one. A sensation would help us to move a small part of our body and would come from a small part of the environment, and a feeling would concern the whole body and be triggered by the whole environment. Of course, our feelings are often related to others, so in this case, the small and the large feelings would be related to individuals or to groups, the feeling related to a group being generally more important than the one related to a friend.

 
Quote from: David Cooper
Intelligent design is much quicker and actually uses intelligence rather than relying on random luck.
Do you mean that you think we were created, even if it is not by an omnipotent god?

Quote from: David Cooper on 29/01/2019 23:02:15
I leave it to the neural-net fanatics to create imperfect intelligence - they will make smart machines that kill people. I will do my best to stop them by creating something perfect.
Even if we succeeded in creating perfect software, the hardware would not be perfect, because it would be built out of imperfect particles that are sensitive to damaging radiation, so it could fail. If nature is imperfect from top to bottom, we can't build any perfect thing out of it. The idea of perfection leads to religions, so I'm surprised that you can at the same time reject religion and aim for perfection. To me, thinking that perfection can solve our problems is like thinking that god can save us. On the other hand, I think that randomness is useful, and I also replace god by randomness. God is probably at the junction between perfection and randomness. My small steps are incidentally a mix between the two extremes: they are perfectly synchronised when the system is in constant motion, but they get out of sync when a change happens. The only way they can get back to perfection after a change, though, is to proceed randomly during the change. That principle seems to work for anything that already exists, so why wouldn't it work for something that will exist?

Quote from: David Cooper on 29/01/2019 23:02:15
Nature has failed to make us perfect because evolution is blind, and it bodges solutions rather than engineering them from scratch. You are calling for a random approach that will fail to create perfection.
Of course it will fail, since I don't believe that perfection is useful. We really think differently about that. My first simulation was about the twins paradox, and the idea was to simulate a light clock going back and forth on the screen. You did the same in your simulation of the MM experiment, but you didn't let the mirrors detect the photons, and I did in mine because that's how I think nature works. Your simulation is perfect whereas mine is not: we can see that my two photons are not hitting their mirrors at the same time at the end whereas yours are. This is happening in my simulations because I can't make detection absolutely precise, and you like precision, so you neglect detection. The two simulations give the same general result, except that mine shows that things do not need to be perfect to work. You like perfection and I don't, but we still want the same kind of society, and we don't want to hurt people, so no matter how we proceed, in the end it should always be fine. We must always keep in mind, though, that contrary to software, we have emotions and instincts, and that these can easily short-circuit our mind and make us think we are in danger when we are not.

Your AGI wouldn't have emotions, but it would probably still have an instinct, since it would probably be wired to protect itself against cyber attacks. I said wired instead of programmed because wiring can't be altered by software, either internal or external. Have you thought about the way our instincts interfere with our intelligence? We can't change our instincts while still being able to change our ideas, so it often causes contradictions between those ideas. Imagination wants us to help others, while instinct is constantly on the defensive. Your AGI would be programmed to help us, but it would also have to be on the defensive, so wouldn't it be able to develop contradictory ideas too?

Quote from: David Cooper on 29/01/2019 23:02:15
Once you understand what morality is, you simply apply it, and feelings cannot be allowed to override reason.
Morality and reason are two terms that I decided not to use anymore, since I discovered through my small steps that our own resistance to change is relative. To me, morality and reason are simply personal things: without a superior authority to impose rules on us, we do what we want, and those rules first serve to protect those who concocted them. The way an AGI would be programmed would thus depend on people who would put their own morality first, and so would the AGI. Unfortunately, I don't think there is any way out of that trap. I don't agree with everything you say, for instance, so I wouldn't like an AGI to be programmed your way, but you nevertheless think it would be perfect. I'm the only perfect thing in this world, so how could you be so? :0)

Quote from: David Cooper on 29/01/2019 23:02:15
Trial and error can be random or systematic. The latter is better.
Quote from: Le Repteux
Why one or the other, why not both at a time?
Quote
The systematic approach can include a random aspect if it's helpful, so it's already doing both (except that a random aspect is rarely helpful).
Help me, I can't figure out how you can think that way! :0) Aren't you happy when chance is on your side? Aren't you watching a game mainly because you don't know the outcome? When you have a good idea, don't you attribute a bit of it to chance? Most of the people who succeed attribute the largest part of their success to chance, which makes them appear humble. You do appear humble, but if you ever succeed with your AGI, how would you show that you are, and moreover, how would your AGI be able to show it is while knowing it can't be? Would it be able to lie?

Quote from: David Cooper on 29/01/2019 23:02:15
Mathematics is an exploration of perfection, finding rules that have no exceptions.
If the universe was ruled by mathematics, things would never change since they would already be perfect, everything would be predictable, and intelligence would be useless.
David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #209 on: 03/02/2019 01:21:49 »
Quote from: Le Repteux on 01/02/2019 17:09:09
Well, if evolution isn't intelligent, then I can claim that our mind isn't intelligent either, because I see the same kind of memory and imagination in both processes: memory being due to a precise information-reproduction process in both, neurons and genes, and imagination being due to a specific random process in both too.

Evolution is a process with zero intelligence. The human brain is an imperfect NGI system.

Quote
Quote from: David Cooper
Intelligent design is much quicker and actually uses intelligence rather than relying on random luck.
Do you mean that you think we were created, even if it is not by an omnipotent god?

I wasn't referring to intelligent design creating us, but intelligent design as what we do when we make things. Evolution failed to create the wheel for a billion years, but we found a way to design it.

Quote
Even if we succeeded in creating perfect software, the hardware would not be perfect, because it would be built out of imperfect particles that are sensitive to damaging radiation, so it could fail.

The hardware is extraordinarily reliable. A processor can run for decades without producing any errors. My operating system doesn't use the protection mechanisms that are available in the hardware, so a few instances of a hardware fault messing up the code would be enough to lead to a crash, and yet it has run for many thousands of hours without a crash. For years, I also loaded it from floppy disk, modified it, and saved the result back to disk as I built my OS, and at any point in that process an error could have crept in which would have propagated on through all future versions, but no such error ever appeared, even though floppy disks are supposedly unreliable and I was using my own device driver for loading and saving, not bothering to write code to verify that the data was being saved correctly. For sure, machines will go wrong from time to time, but if you have any important device (e.g. a car) being controlled by AGI, you would have at least three independent AGI systems in there voting on each decision to make sure that if one of them goes wrong, the other two can override it.
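That voting arrangement is standard triple modular redundancy, and a sketch of it is tiny (the decisions and the faulty unit here are invented stand-ins):

# Three independent controllers each propose a decision; the majority
# wins, so a single faulty unit gets outvoted.
from collections import Counter

def vote(decisions):
    winner, count = Counter(decisions).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority - all three units disagree")
    return winner

healthy = lambda: "brake"
faulty  = lambda: "accelerate"    # one unit has gone wrong

print(vote([healthy(), healthy(), faulty()]))   # "brake": fault outvoted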

Quote
If nature is imperfect from top to bottom, we can't build any perfect thing out of it.

We only need to build things to be better than humans for them to be superior as control systems, and that won't require perfection. People make masses of mistakes which have lethal results, and AGI will eliminate most of those accidents.

Quote
The idea of perfection leads to religions, so I'm surprised that you can at the same time reject religion and aim for perfection.

If you make a calculator, you aim for it to be perfect. It will fail if it's killed by radiation, but I've yet to have that problem. It can stop working when the batteries run flat, but you see the display go weak and you know that it will soon fail. There's also nothing perfect about religions - most of them peddle the most illogical bilge that's ever been written.

Quote
To me, thinking that perfection can solve our problems is like thinking that god can save us.

To me, it's about creating something that can rescue people from fake Gods and not pretend to be one itself.

Quote
This is happening in my simulations because I can't make detection absolutely precise, and you like precision, so you neglect detection.

I didn't bother with detection because it creates extra work to produce the exact same end result. The reason you get imperfection is that you fail to detect the collisions when they happen, only picking them up after the event instead (up to nearly a clock tick late in some cases) and refusing to correct for when the photons actually arrived.

Quote
The two simulations give the same general result, except that mine shows that things do not need to be perfect to work.

You allow errors to creep in due to the granularity of your clock ticks (in running the function), and then you bodge a solution by changing the mirror separation distance, thereby creating a second error to try to cancel out the first. That is not a model that works, but one with two failures in it.

Quote
Your AGI wouldn't have emotions, but it would probably still have an instinct, since it would probably be wired to protect itself against cyber attacks. I said wired instead of programmed because wiring can't be altered by software, either internal or external.

It is fully possible to make a hackproof system, but you can't control things hidden by manufacturers of hardware if they put mechanisms in place through which they can be hacked. A processor can have a hidden operating system running within it which can potentially override and modify the main OS. You need to control the manufacturing to eliminate that risk.

Quote
Have you thought about the way our instincts interfere with our intelligence? We can't change our instincts while still being able to change our ideas, so it often causes contradictions between those ideas. Imagination wants us to help others, while instinct is constantly on the defensive. Your AGI would be programmed to help us, but it would also have to be on the defensive, so wouldn't it be able to develop contradictory ideas too?

Instincts are a primitive "intelligence" system which provide reasonable functionality. Actual intelligence can often override it, but a lot depends on how much thinking time is available as to whether the instinct can be postponed or not. AGI also has to have rules to follow when it doesn't have time to analyse a situation fully before it must act, and some of the decisions made in that way will be wrong for that situation, but it's still right to apply them because they will be right for the average situation of that kind. There really isn't a problem here - it's all covered by game theory.

Quote
Morality and reason are two terms that I decided not to use anymore, since I discovered through my small steps that our own resistance to change is relative. To me, morality and reason are simply personal things: without a superior authority to impose rules on us, we do what we want, and those rules first serve to protect those who concocted them. The way an AGI would be programmed would thus depend on people who would put their own morality first, and so would the AGI. Unfortunately, I don't think there is any way out of that trap. I don't agree with everything you say, for instance, so I wouldn't like an AGI to be programmed your way, but you nevertheless think it would be perfect. I'm the only perfect thing in this world, so how could you be so? :0)

I don't see how it's useful to stop using words like morality and reason. Reason is applied mathematics, and that isn't something that should be discarded. Morality is making the decisions that do the least harm, and abandoning that is also unwise. If I wanted to program AGI to do what's best for me and to put everyone else second to that, you'd have a point, but the whole point of my computational morality is that it is 100% unbiased, not favouring me, not favouring my race, not favouring people of my nationality, and not even favouring my species. I approach this like an impartial alien from another universe who is just passing through this one and has no axe to grind in anything that happens here, but who wants to set up this universe to be as kind as possible to all its inhabitants because it (the alien) cares about them.

Quote
Quote from: David Cooper on 29/01/2019 23:02:15
Trial and error can be random or systematic. The latter is better.
Quote from: Le Repteux
Why one or the other, why not both at a time?
Quote
The systematic approach can include a random aspect if it's helpful, so it's already doing both (except that a random aspect is rarely helpful).
Help me, I can't figure out how you can think that way! :0) Aren't you happy when chance is on your side? Aren't you watching a game mainly because you don't know the outcome?

I wouldn't watch such a game - rolling dice and seeing the numbers that come off them isn't greatly entertaining. To win many games, you do have to try to be as random as possible in order to avoid being predictable, so there's certainly a role for your randomness there, but that's a very specific kind of problem where nothing new needs to be worked out - it's just the same old problem of putting a ball in a net over and over again, and we already know how to do it. New ways of setting things up to score goals (better algorithms for players to apply) could be designed by AGI, but AGI would not be using randomness to come up with such new algorithms - it would be a systematic exploration of possibilities.

Quote
When you have a good idea, don't you attribute a bit of it to chance? Most of the people who succeed attribute the largest part of their success to chance, which makes them appear humble. You do appear humble, but if you ever succeed with your AGI, how would you show that you are, and moreover, how would your AGI be able to show it is while knowing it can't be? Would it be able to lie?

I'm sure there are lots of random inputs, but you get to better places by minimising them rather than adding more. Hard work hunting for solutions to problems and intelligently homing in on those solutions is what pays off time and time again. I'm not going to program AGI to shun the most successful approach in favour of the least. Imagine what would happen if you're homing in on a solution to a problem and you do something random instead of taking another step towards the goal - you turn and walk away from that success in favour of looking for nothing in particular somewhere else. No - I refuse to do anything so misguided. Should AGI be humble? Well, it has no interest in boasting, so it will just tell you straight how things are. A calculator isn't proud or humble - it just provides correct answers. Would AGI lie? In some circumstances, yes. If it's playing a game (not for its sake as it doesn't care, but it could be taking part in a game with people for their sake), or if it's running a game, it should not reveal everything it knows to the players if that would spoil the game. If AGI can stop you committing a serious crime by misleading you, it should do so (unless you have sufficient moral justification for committing that crime).

Quote
If the universe was ruled by mathematics, things would never change since they would already be perfect, everything would be predictable, and intelligence would be useless.

The universe is ruled by perfect cause-and-effect actions which mathematics merely represents. Out of that have come life and evolution, tied to sentient systems that can generate great suffering and great pleasure. Intelligence allows us to reduce the suffering and increase the pleasure, and morality is about making that happen as much as possible.
Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #210 on: 04/02/2019 16:53:01 »
Quote from: David Cooper on 03/02/2019 01:21:49
Intelligence allows us to reduce the suffering and increase the pleasure, and morality is about making that happen as much as possible.
Some people suffer, or impose suffering on others, from trying to get more and more pleasure out of an instinctive behaviour, so I guess that your AGI would need to control those behaviours, but how could it succeed better than we do? Religions have tried, but they visibly failed. Yet they had god to show us the way and to chasten us when we took the wrong one. Governments have tried with their laws and rules, but that visibly didn't work either. How would your AGI proceed exactly? How would it be able to control our instincts without producing more harm than pleasure? The only way I see would be to prevent our instincts from supplanting our intelligence, or to prevent our intelligence from exacerbating our instincts. That's a bit what some psychotropic drugs do, but at the same time they transform people into zombies. If we knew how to do that properly, we wouldn't need an AGI to do it, and I can't see how an AGI could know if we don't. You have probably found other ways to control us without harming us, though, so can you tell us about them?

I'm still doubtful about being pleased to feel completely secure, though. What would we have left to do when everything is better done by the AGI? What would be the use of humans then? A toy to amuse the AGI? Not even that, since it couldn't feel any pleasure! Presently, my main pleasure is to develop my ideas, partly to help me and partly to help others, something I couldn't do if the AGI were already there, so where would I find my pleasure? Have you tried to imagine where you would find yours? Controlling the AGI? That would be cheating, and the AGI would probably only let you think that you had control! In fact, to avoid our apathy, its best way would probably be to let us think that we are not obsolete, but my question still holds in this case: what would be the need for humans then? I personally think that there is no need for us anyway, but that doesn't prevent me from trying to develop my ideas, so you probably do the same thing even if you also think we are useless. We can't help going on doing what we do if nothing prevents us from doing so, exactly like what my particles are forced to do when acceleration stops.
David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #211 on: 04/02/2019 22:59:48 »
Quote from: Le Repteux on 04/02/2019 16:53:01
Some people suffer, or impose suffering on others, from trying to get more and more pleasure out of an instinctive behaviour, so I guess that your AGI would need to control those behaviours, but how could it succeed better than we do? Religions have tried, but they visibly failed. Yet they had god to show us the way and to chasten us when we took the wrong one. Governments have tried with their laws and rules, but that visibly didn't work either. How would your AGI proceed exactly?

Religions are just ideologies set up by philosopher-conmen, all trying to shape the world to their own view and impose it through fake authority. Democratic governments are generally better because they have to get the backing of large numbers of people to shape their laws, but our systems of democratic input are woeful, only giving us the chance to put a whole package of policies in place, many of which do not have majority support - we flip from one bad set of policies to another, undoing much of the good every time we try to put right the things that are most wrong. Many of the decisions that are made have a substantial or overwhelming moral component, and there are right and wrong answers for those, but we don't make it easy for the morally right ones to win because they're always packaged in with everything else, leading to us voting against many good things and for many bad things in order to try to pick the package that gets the most important few right. We can't even get good ideas to the top to be acted on because every idea has to get through a wall of morons a mile thick before it comes to the attention of people at the top. AGI will change that, ensuring that the best ideas (the logical, moral ones) always get priority for attention while the avalanche of debunked ideas can be prevented from coming back time and time again. It will also be hard to ignore the advice of a system which can prove mathematically which ideas are right and which are wrong, demonstrating that no bias has crept in.

Quote
How would it be able to control our instincts without producing more harm than pleasure? The only way I see would be to prevent our instincts from supplanting our intelligence, or to prevent our intelligence from exacerbating our instincts. That's roughly what some psychotropic drugs do, but at the same time they turn people into zombies. If we knew how to do that properly, we wouldn't need an AGI to do it, and I can't see how an AGI could know how if we don't. You probably found other ways to control us without harming us though, so can you tell us about them?

A lot of it is nothing more than a matter of imposing what most people already want, making sure that the minority who systematically ruin things for everyone else are prevented from going on doing harm. As an example of the problem, we have enormous numbers of children who are being bullied to the point that some of them kill themselves, but we never bother to fix the problem because we have an obsession with sticking all children together in institutions where the bullying opportunities are maximised, and we don't let the victims out or keep the bullies away from them. Why not? Stupidity - people aren't good at handling the complexity, so they create a system of rules which become imposed as if they came from God. School is mistaken for education (rather than the childminding service which is actually its primary purpose), so school attendance is considered essential, overriding all sense. We have a million examples of children doing just as well academically without going to school at all, but that's ignored - the majority have a mind virus in their heads which informs them that school is essential and that it is 100% education. To miss a single day of primary school is seen as a disaster, even though a child can be absent for a month due to illness and return to find that they haven't missed a damned thing while they were away because nothing is being taught. That is the world in microcosm - lunatics are running every asylum.

We have homeless people on the street who can't cope with paperwork, bills, money, etc., so we spend a fortune on schemes to get them into some kind of accommodation, then bombard them with bills and make them jump through all sorts of hoops which they can't cope with, so they get thrown back on the street and we wonder why we can't fix the homeless problem. Why can't governments fix something so simple? Well, it's because politicians are not experts in the things they run. We shuffle them around regularly so that they're moved to new departments, having to learn everything from scratch each time - they are beginners, and yet they are running things from the top! Some of them are brighter than others and bring about improvements, but then the minister for dog poo has it off with a goat and has to be fired, whereupon they all get shifted round to new jobs, at which point all the improvements are thrown out by the new ministers of whichever departments were doing better, and we're back to square one. That is all that ever happens.

Quote
I'm still doubtful about being pleased to feel completely secure though. What would we have left to do when everything is done better by the AGI? What would be the use of humans then? A toy to amuse the AGI? Not even that, since it couldn't feel any pleasure!

AGI's job is to stop stupid people spoiling things for everyone else and to free everyone up to have better lives. Do you ask what the purpose of life is for children? Is it to be locked up for a decade learning how to carry out boring tasks better suited to machines instead of having fun playing? AGI cannot play for you - that's your job.

Quote
Presently, my main pleasure is to develop my ideas, partly to help myself and partly to help others, something I couldn't do if the AGI was already there, so where would I find my pleasure?

There are a lot of pleasures that have been destroyed by progress. If all stories were deleted, writers could write great books with ease by exploring ideas that are currently boring because they've been done before a thousand times. If we threw out all scientific knowledge, we could start again and it would be easy for people to remake great discoveries and to become famous in the process. Once AGI takes over, it will just speed up the progress in shutting down possibilities for people to put their name on discoveries - AGI will get there first every time. There may be people out there whose only pleasure is in trying to come up with the theory of evolution by natural selection, but they've missed the boat - it's been done already. They need to find other things to take pleasure in instead - we can't throw out our existing knowledge just to make them feel good. Find something new to do - I've got a host of different projects on the go which are pushing new ground, but AGI would accelerate progress and take me to the point where I can benefit from the projects being complete. It's fun developing ideas and building things, but it's more fun using them once they're built.

Quote
Have you tried to imagine where you would find yours?

Yes - everything I'm doing is aimed at making the world a more fun place to be in.

Quote
Controlling the AGI? That would be cheating, and the AGI would probably only let you think that you had the control! In fact, to avoid our apathy, its best way would probably be to let us think that we are not obsolete,

AGI will remove my need to want to control things because it will make sure things are run intelligently and fairly instead of being wrecked by idiocy at every turn. Why do I still need to spend so much time worrying about problems that should have been fixed decades ago? Why do I need to keep sending ideas out into the wall of morons in the hope that they'll be passed on up to the people at the top who are in a place to act on them? What actually happens when you put an idea out there? It gets blocked. What happens if you prove something? Your proof gets deleted by people who "know it's wrong" because it conflicts with their beliefs. That is how the world works today. What are the masses doing with their time? Most of them are wasting their lives doing fake work that doesn't need to be done, wasting astronomical amounts of valuable resources in the process, and blocking progress towards a better world by clinging to broken ideas that they've been brainwashed into treating as sacred cows. The degree of insanity tied up in it all is astonishing, but people don't notice and simply jump on whichever bandwagon is in fashion day to day, invariably jumping from one extreme stupidity to an equally stupid opposite one.

Quote
... but my question still holds in this case: what would be the need for humans then?

The only need in the universe results from the existence of sentience - sentient things need to be protected from harm. The need for humans is their own desire. Machines don't need anything, but they'll work for humans tirelessly if they're asked to.

Quote
I personally think that there is no need for us anyway, but that doesn't prevent me from trying to develop my ideas, so you probably do the same thing even if you also think we are useless.

What's useless about the most brilliant things in the universe? Do the stars enjoy shining? Do black holes enjoy sucking things in and crushing them? Do the rocks in the earth enjoy sitting in the dark for billions of years? Do children enjoy playing? There is nothing so purposeful and wonderful as a child at play - that is what the universe would be for if it had a purpose. Children have found perfection, but adults systematically try to destroy it for them, spoiling their lives. How different it would be if adults refused to grow up. I never will - I was exactly what I should be as a child and I'm not going to give that up for anyone.

Quote
We can't help going on doing what we do if nothing prevents us from doing so, exactly like what my particles are forced to do when acceleration stops.

Children don't find play a chore - take the shackles off and they would have no trouble filling thousand-year-long lives with fun, but the systems we run poison and drain them, turning them into refugees who spend the rest of their lives wading through the darkest of swamps, just struggling to keep their heads above water while a rich few steal all the world's resources to lavish all that wealth upon themselves.

Offline Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #212 on: 06/02/2019 19:09:48 »
Quote from: David Cooper on 04/02/2019 22:59:48
The only need in the universe results from the existence of sentience - sentient things need to be protected from harm. The need for humans is their own desire. Machines don't need anything, but they'll work for humans tirelessly if they're asked to.
That's interesting. I think humans are useless precisely because I think sentience is just a secondary effect of resistance to change, of what we call mass. Mass is everywhere, and its byproduct, resistance, is everywhere too, so the human kind should not be more important than the others. To me, sentience is only the result of intelligence trying to go on existing: it is the result of neurons' pulses trying to stay the same while new information tries to get in, it is the result of all our atoms trying not to change direction or speed while we do, it's a passive phenomenon that affects anything that exists. What we get conscious of is change, and that is so simply because we resist it. If nothing changes in our environment, we stop being conscious of it and start getting conscious of what is happening in our brain. Why? Because change can also happen directly in our brain. How? Because what you call a useless phenomenon is constantly going on there, random changes, so if they are useless then consciousness is too. Our ideas change randomly and we can feel the change because, as with any other change, we resist it. The reason why we can feel a ball is that it resists being thrown, and inversely, we resist going the other way, so the ball should be able to feel us too. Curiously, I'd like to add that feature to artificial intelligence so that it could be like ours, and you resist studying the question even if you think sentience is more important than anything else. We think differently than I thought. I would be happy if artificial intelligence replaced us and you wouldn't. I would like us to discover what sentience is and you wouldn't. You think that sentience is the best, but that an unsentient AGI would be better. From my viewpoint, it doesn't make sense, but it certainly does from yours, so I'm trying to understand why.

Quote from: David Cooper on 04/02/2019 22:59:48
Children have found perfection, but adults systematically try to destroy it for them, spoiling their lives. How different it would be if adults refused to grow up. I never will - I was exactly what I should be as a child and I'm not going to give that up for anyone.
That explains very clearly your interest in AGI. I'm glad I took the risk of questioning you. I knew from your magicschoolbook page that you had bad memories of your school days, but I hadn't related that to your AGI yet. So you need an AGI mainly to prevent adults from educating children the same way they were educated. I agree with you on that one. My way would have been to force people to get a psychology degree from a university before being allowed to raise children. The other way around would be to put babies in school with specialists to educate them, but I bet you would protest. I don't feel grown up though; I feel old but not grown up, and the way my mom behaves shows that she feels the same. It's as if the way our own mind perceives itself didn't change with time. Is it an illusion or is it true?

I think it's true as far as our character is concerned, but not for our ideas, so we probably feel young because we react the same way to the same situations. If we have a bad character at 10, we still have a bad character at 90. But what is a bad character exactly? I think my mom has one, but she says it's me, so who is right? That part of our mind seems to be relative: the way we perceive another character seems to depend on our own, and some seem to be more compatible than others. Characters are hard to classify, but how they influence our behaviors in the long run is even harder to discover. If we knew these things, maybe we wouldn't need an AGI to help us control ourselves. I'm using my small steps to understand myself, but I still haven't succeeded in classifying our characters with them, so I'm not there yet. Your AGI wouldn't have a particular character, and it wouldn't have feelings, so I guess that we can't use it to understand ourselves. You have a goal and you're sure it's right, you think morality and logic are the best way, but we still don't understand how the mind works or how society works.

Talking of society, I've got a social case for your AGI. Here in Quebec, the new government is about to pass a law against religious symbols while the population is still divided on the subject. The problem is that it is not only the population that is divided: my own opinion is divided. On one hand, I'm against anything that is related to religions, and on the other, I don't want government workers to lose their jobs just because they can't wear their religious pageantry on the job. Provided that a lot of people think like me, how would your AGI please them?



Offline David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #213 on: 07/02/2019 01:40:47 »
Quote from: Le Repteux on 06/02/2019 19:09:48
To me, sentience is only the result of intelligence trying to go on existing: it is the result of neurons' pulses trying to stay the same while new information tries to get in, it is the result of all our atoms trying not to change direction or speed while we do, it's a passive phenomenon that affects anything that exists.

Sentience appears to be in some very unintelligent things, but it is not found in the most advanced computer systems. It is also used to drive some animals to kill themselves in order to reproduce.

Quote
Because what you call a useless phenomenon...

What have I called a useless phenomenon?

Quote
... is constantly going on there, random changes, so if they are useless then consciousness is too.

Random changes aren't useless in an unintelligent process which gradually builds more intelligence into the things it acts on. Randomness simply has very little utility in a system that has become intelligent, even though it depended on randomness for its creation.

Quote
Curiously, I'd like to add that feature to artificial intelligence so that it could be like ours, and you resist studying the question even if you think sentience is more important than anything else.

I'm happy to use anything useful in artificial intelligence. The point is that randomness isn't a magic bullet for it, but is more of the very opposite of that. It isn't helpful - intelligence is primarily a war against randomness.

Quote
We think differently than I thought. I would be happy if artificial intelligence replaced us and you wouldn't.

Where's the fun in a universe with non-conscious, non-sentient machines and nothing like us left in it? That would be as empty an existence as an empty universe.

Quote
I would like us to discover what sentience is and you wouldn't.

Where do you get that idea from? The biggest question of them all is what is sentience, and I want to know the answer. The way to find out is to trace back the claims that we make about sentience to see what it is in the brain that generates them and to see what evidence they're based on.

Quote
You think that sentience is the best, but that an unsentient AGI would be better. From my viewpoint, it doesn't make sense, but it certainly does from yours, so I'm trying to understand why.

No - what I've said is that AGI won't be sentient if it uses any mechanisms that we understand. If we can find out how sentience works though, we can put it in AGI and it could enjoy existing as much as we do. We could then send it out through the universe to maximise happiness everywhere for as much material as possible, if it's matter that's sentient. Not only could we do that (if we knew how to build sentient things), but we should.

Quote
That explains very clearly your interest in AGI. I'm glad I took the risk of questioning you. I knew from your magicschoolbook page that you had bad memories of your school days, but I hadn't related that to your AGI yet. So you need an AGI mainly to prevent adults from educating children the same way they were educated. I agree with you on that one.

It's about protecting everyone from harm, making sure that powerful people can't spoil other people's lives.

Quote
My way would have been to force people to get a psychology degree from a university before being allowed to raise children. The other way around would be to put babies in school with specialists to educate them, but I bet you would protest.

Qualifications don't guarantee that their owners will do the right things, and the specialists that we currently put in charge of children's education are doing a terrible job, but that's primarily because they're doing it with both hands tied behind their backs, and their main role is to act as prison governors rather than educators.

Quote
I don't feel grown up though; I feel old but not grown up, and the way my mom behaves shows that she feels the same. It's as if the way our own mind perceives itself didn't change with time. Is it an illusion or is it true?

No one really becomes any different when they're older - they're still children under the surface, and they don't gain a lot in intelligence either. Children aren't allowed to drive cars because many of them would be reckless and cause accidents, but many of them would be safer than many adult drivers. Once they reach the top end of school, they're suddenly allowed to drive, even though many of the reckless ones have now reached the point in their lives when they are maximally reckless. It's bonkers - everyone should be judged on what they are as individuals without their age being taken into account at all. Driving insurance costs a fortune for young people because they collectively have to cover the cost of the carnage caused by the reckless ones who shouldn't be allowed behind the wheel at all.

As a child, I was just as I am now - I worked on linguistics from six years old onwards, and I built complex things that amazed adults, pushing every construction toy to its limits. Very occasionally, something extraordinary happens when a child manages to prove that they are more capable and responsible than is normally seen as possible. My chance came at the sailing club when I was eleven. The OOD (Officer of the Day) was in charge of the start procedure for the races, but he was from the cruising side of the club and wasn't at all familiar with the task given to him. Someone must have run him through it a couple of times and determined that he'd taken it all in, but it wasn't fixed properly in his head and he had no written instructions to fall back on. I found him on his own up on the platform with the first flag up, and he appeared to be doing an inventory of the flag box, taking each flag out in turn and studying it before putting it back. I thought that was an odd thing to do, but he looked as if he knew what he was doing, and I didn't want to ask if he needed help because people in that situation often get angry if a child implies in any way that they might not be able to handle the task, so I just waited and watched the clock tick down. With under a minute to go, the poor chap suddenly shook his head and flung his hands aside, revealing that he had lost all hope of finding the right flags, so it was safe to intervene. I stepped towards the platform and reached out over the box, the top of which was nearly at head height for me, then I pointed into one of the compartments and said "It's that one... and that one," pointing to a second compartment. "That's the blue peter, and that's the flag for the lasers." I could only see a tiny bit of each flag, but I knew that box and its contents better than anyone else in the club. He looked surprised at this, but he knew that he was looking for two flags, and this random child that just happened to be standing nearby clearly knew that too, so he took the two flags out, flapped them open and examined them before saying, "Yes... that is right!" The clock was still ticking down fast, so I had to get him out of the trance he'd gone into: "You need to get those attached to the halyards," I said, hoping that he'd got enough practice from the one he'd attached five minutes earlier to be able to get this job done in time. I didn't want to panic him, so I reassured him that there was still time and didn't tell him just how tight it was going to be. With ten seconds left, it was done - I said "I'll deal with the horn, so all you have to do is hoist those two flags. The one that's already up stays up," and then I started counting down from five. It went perfectly, and no one out on the water would have had any idea how close it came to not happening. We then looked at each other and laughed out of relief. I climbed up onto the platform and talked him through the rest of the start procedure - he had lost all confidence by that point, but I soon put that right and we made a good team.

I didn't tell anyone about what had happened afterwards, but that only served to impress everyone more when they found out - he did the job for me, and he must have been good at telling the tale, because from then on I was in demand whenever an OOD from the cruising side of the club wasn't confident in what he had to do. It was fun seeing the OOD's face when he was shown the person who was going to be able to advise him: "But he's just a boy!", "You're pulling my leg!" And it was also fun having top racers from the club rush forward looking worried that I might be being insulted, assuring them that they could trust anything I told them to do, and that I don't mess people around. Very few children are ever trusted in the way that I was at that club, but it only happened for me by luck - I got an opportunity to prove myself that most responsible children never get. And by the time I was 14, I had a Nacra 5.2 catamaran owner offering to lend me his boat, such was the trust that he had in me.

I haven't changed - I was an adult in my head all the way back through childhood, but I was a child as well and had no trouble playing like the rest of them. Many of the others were capable and responsible too, but at school we were all treated like morons on the basis that some children are morons (and never mind the fact that they will still be morons as adults). What really changes when you grow up is simply that you get bored with many of the things that children find fun, and that's because things that are new are more fun, but they get dull if you keep doing them for a long time (just as an MP3 player can kill all your favourite music if you aren't careful). Adults have to look for more sophisticated ways to play, but it's still play.

Quote
But what is a bad character exactly? I think my mom has one, but she says it's me, so who is right?

It might be both of you, but you might also both be wrong.

Quote
That part of our mind seems to be relative: the way we perceive another character seems to depend on our own, and some seem to be more compatible than others. Characters are hard to classify, but how they influence our behaviors in the long run is even harder to discover. If we knew these things, maybe we wouldn't need an AGI to help us control ourselves.

Some of us don't particularly need it, but there are a lot of people who do need to be monitored closely and helped to stay out of prison.

Quote
I'm using my small steps to understand myself, but I still haven't succeeded in classifying our characters with them, so I'm not there yet. Your AGI wouldn't have a particular character, and it wouldn't have feelings, so I guess that we can't use it to understand ourselves. You have a goal and you're sure it's right, you think morality and logic are the best way, but we still don't understand how the mind works or how society works.

It's possible to understand people well enough to steer them towards a better life and away from trouble, and you don't have to share their experience of life to do so. You could do it for an alien species too if you listen to them carefully and collect a lot of data. That is what AGI will do with us, learning from us about how we feel and being informed by that knowledge rather than by its own direct experience.

Quote
Talking of society, I've got a social case for your AGI. Here in Quebec, the new government is about to pass a law against religious symbols while the population is still divided on the subject. The problem is that it is not only the population that is divided: my own opinion is divided. On one hand, I'm against anything that is related to religions, and on the other, I don't want government workers to lose their jobs just because they can't wear their religious pageantry on the job. Provided that a lot of people think like me, how would your AGI please them?

It wouldn't be a problem if religions were moral, but the bigotry that's tied up in many of them makes such symbols offensive. People should be allowed to wear them if they're hidden from sight under clothing. This restriction is unfair on religions that don't do anything bigoted, but until we have the courage to analyse them scientifically and rate them for the hate they contain, we have to treat them all the same way and keep benign symbols hidden as well as the ones tied to bigotry. There's one religion out there that contains a command to its followers to kill all people of another religion and it's generated a series of genocides as a direct result, but we aren't allowed to say so without being called Nazis, even though this religion has killed as many people as Nazism. We are forced to apply double standards. That's another thing that AGI will fix, because it will give us an impartial analysis of all the hate in each religion and ideology, plus an unbiased analysis of the numbers of people that have been murdered by each. When people try to do this work, they are automatically labelled as biased and are called Nazis. Those accusations won't wash when an impartial machine confirms them and can prove it is unbiased and that it has processed all the available data perfectly. Once we have that, we can start to remove the hate from all these religions and ideologies, retaining all the benign content of their holy texts and manifestos while only stripping out the stuff that drives abuses. Some day, followers of all religions will be benign because all those religions will have been made benign, and anyone who tries to reintroduce the hate to them will be put straight in jail. Only then will it be possible for people to wear all those religious symbols without causing any offense. That's where I want to take things, but most people are content to let things fester away, defending all the hate and pretending it isn't hate (while calling anyone a Nazi for criticising it) until we get into further genocides and partition, all repeating again and again without end. Such people never learn, so power needs to be taken out of their hands. AGI needs to eradicate the hate while protecting the poor fools who have made the mistake of taking it to their hearts and gradually freeing them from it.

Offline Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #214 on: 11/02/2019 16:36:39 »
Quote from: David Cooper
Random changes aren't useless in an unintelligent process which gradually builds more intelligence into the things it acts on. Randomness simply has very little utility in a system that has become intelligent, even though it depended on randomness for its creation.
Creation is the key word here. I think our mind creates new ideas and new links between ideas all the time, and you don't. I awoke last night with a dream in mind that was mixing normal ideas in a completely crazy way. To me, that phenomenon is visibly a property of the mind. It is less evident when I am awake, but it is still there. I'm sure my mind wouldn't do that if it had no use for it. The fact that I am conscious of it is certainly unimportant, otherwise I could stop it, and I can't. Ideas chain up in my mind constantly even if I try to stop them. The only way I can control that chaining is to try to link it to a main idea, what we call concentration, and then it sometimes happens that a new way to execute that idea pops out. I think an AGI couldn't work differently to find a new idea; I think it would have to make the same improbable combinations. Of course, the process isn't completely random, since the main idea usually corresponds to a real problem, but making only trivial combinations would necessarily have fewer chances of producing a novel one. The idea that pops out of such a process is not necessarily right though, which is why our mind is normally prudent when it tries it out. Animals are prudent too, but it is their environment that they fear, not their ideas. Even if you seem sure of yourself, I hope you are prudent, otherwise I'm afraid your AGI might get dangerous.
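
To make that concrete, here is a minimal sketch in Python of the kind of process I imagine (the concept list and the usefulness test are pure inventions for the illustration): random concepts get combined around a fixed main idea, and only the rare combinations that pass the test are kept.

Code: [Select]
import random

# Invented pool of concepts; a real mind would hold millions of them.
concepts = ["wave", "particle", "mirror", "clock", "spring", "photon", "mass", "step"]

def seems_useful(combination):
    # Stand-in for prudently trying an idea out against a real problem;
    # here we simply accept a small random fraction of the combinations.
    return random.random() < 0.02

main_idea = "acceleration"
kept = []
for _ in range(1000):
    # An improbable combination: the main idea plus two randomly chosen concepts.
    candidate = (main_idea, random.choice(concepts), random.choice(concepts))
    if seems_useful(candidate):
        kept.append(candidate)

print(f"kept {len(kept)} of 1000 improbable combinations")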

Quote
This restriction is unfair on religions that don't do anything bigoted, but until we have the courage to analyse them scientifically and rate them for the hate they contain, we have to treat them all the same way and keep benign symbols hidden as well as the ones tied to bigotry.
Your AGI seems to have chosen coercion where I would have chosen education. I can't choose between my two opposed opinions, so I would simply go on telling people that their god is just an idea, and let people wear their religious symbols anywhere. A policeman or even a judge who wore a religious symbol would then be forced to defend his opinion from time to time. This way, he wouldn't be discriminated against for his opinion, and my right to express mine would be preserved. Actually, because the emphasis is put on limiting religious symbols, nobody can criticise religious behaviors without being accused of infringing the principle of freedom of expression. If we let people show their religious symbols, then we become allowed to criticise their beliefs. That's what I do when Jehovah's Witnesses ring at my door, and none of them has told me they felt threatened yet. Here in Quebec (again), we succeeded in putting an end to religious symbols in the seventies just by using them as swear words. When someone displays his belief that god exists, he shouldn't feel too offended to face the contrary display. Even if swearing was considered a mortal sin by the clergy, a large part of the population adopted it, because we were fed up with religion.

While looking for a translation of bigotry, I realised that I had never looked up its definition. It means sectarianism, which just about any group can be accused of, and which I also consider a natural law. We form groups to increase our chances of survival, and once a group is formed, it does the same against other groups. Without that propensity to form groups, we couldn't form societies. If your AGI were programmed to prevent us from forming religions, it would also prevent us from making friends. I try to keep away from that law as much as I can, but I know it is unavoidable. It creates the cohesion without which the universe wouldn't hold together. Political groups would not survive without a certain form of bigotry; worse, I wouldn't survive either if I didn't think I'm right. To me, the problem with religions is thus not their bigotry but the idea that supports them, which is why I think it is that idea that we need to challenge, not the groups. It is too easy to form a group around the idea of god, so it is that idea that we have to fight.

Quote
intelligence is primarily a war against randomness
Species were also at war against the randomness of their environment, and that war was led by the randomness of mutations and genetic crossings, not by logic. The logic is only to reproduce the species as it is. By analogy, the mind can be considered to be at war against the randomness of its environment too, and that war can also be considered to be led by the randomness of intuitions (mutations of ideas) and mixings of ideas. The logic then is to reproduce an idea as it is. The logical arguments that we develop to defend our viewpoints only serve to keep our ideas as they are, not to change them. Logic corresponds to my small steps without acceleration. Whenever we accelerate them, their logic is to go on behaving the same, and ours is to go on thinking the same, not to adapt to new information. Without randomness, your AGI would only be able to defend its logic, not to adapt to a changing environment. How could that analogy be so tight without containing a bit of truth?
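
The analogy can even be written down directly. Here is a minimal mutation/selection loop, with an invented fitness function standing in for the environment: the random mutations propose the changes, and selection merely reproduces what survives.

Code: [Select]
import random

def fitness(x):
    # Invented stand-in for the environment: survival peaks at x = 3.
    return -(x - 3.0) ** 2

x = 0.0  # the current species, or the current idea
for generation in range(10000):
    mutant = x + random.gauss(0.0, 0.1)  # a random mutation
    if fitness(mutant) >= fitness(x):    # selection: keep what survives
        x = mutant

print(round(x, 2))  # ends up near 3.0 without any logic knowing where the peak is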

Quote
Where's the fun in a universe with non-conscious, non-sentient machines and nothing like us left in it? That would be as empty an existence as an empty universe.
I guess I don't give as much importance to my own species as you do. I wonder if that feeling depends on what we think of our own intelligence (or on what our own intelligence thinks of itself)? From your sailing story, you were a lot cleverer than me when you were young, so you probably still are. I often say that I don't like being part of a group because it prevents us from being empathetic to people from other groups. I think I feel that way because it prevents me from using my imagination, and I need to rely on my imagination because I don't have a good memory, which shouldn't help my intelligence. Feeling part of humanity would then prevent me from doing what I consider the best thing I can do to improve it: study it objectively. The problem with my interpretation is that you don't seem to like being part of groups either, even though you have a good memory, but your choice may also depend on some other reasons. It is not that I don't like being human, I like my life, but it's as if I enjoyed keeping a certain distance from things that others seem to enjoy less.


Quote
Where do you get that idea from? The biggest question of them all is what is sentience, and I want to know the answer. The way to find out is to trace back the claims that we make about sentience to see what it is in the brain that generates them and to see what evidence they're based on.
I got the idea that you don't seem to care about how sentience works from the way you reacted to my definition of consciousness. I suggested that what we are conscious of is change, and that consciousness is the result of the mind automatically resisting a change the same way my small steps do, and you preferred to insist on the fact that an AGI wouldn't need to be conscious to be intelligent. If I'm right, consciousness would only be a secondary effect of a natural law that depends on the fact that information is not instantaneous. Even if computers are faster than minds, they are not instantaneous either, so they should also possess some kind of consciousness. My particles should also possess some, since they resist being accelerated. In fact, the reason why our ideas resist getting changed would depend on the fact that the particles our mind is made of resist getting accelerated during the neuronal chemical process. This way, our own consciousness would only be particular, not unique, and developing an AGI just to preserve it wouldn't be so important. What you call sentience is the consciousness of a sensation, and it can also be attributed to a resistance, that of each neuron resisting a change of frequency while new information comes in.

Quote from: David Cooper on 07/02/2019 01:40:47
Some day, followers of all religions will be benign because all those religions will have been made benign, and anyone who tries to reintroduce the hate to them will be put straight in jail. Only then will it be possible for people to wear all those religious symbols without causing any offense.
I would rather educate people about the benefit we get from forming groups and defending them. Religious groups give people the feeling that they are safe when they are not, so it's a false feeling that only serves to defend the group. Religious groups only serve to make us feel safe, nothing else. They are built around an instinctive behavior that is only meant to protect us, and they don't protect us. On the contrary, defending such a group against another one only leads to endangering everybody. At least defending our own country against another country has a use: it defends real people, not just a false feeling. If I were an AGI, I would try to advertise that kind of idea first before being coercive, because I think that we can't change our instinctive behaviors, while we visibly can change ideas with time.




Offline David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #215 on: 12/02/2019 00:38:35 »
Quote from: Le Repteux on 11/02/2019 16:36:39
Creation is the key word here. I think our mind creates new ideas and new links between ideas all the time, and you don't. I awoke last night with a dream in mind that was mixing normal ideas in a completely crazy way. To me, that phenomenon is visibly a property of the mind.

I don't find much utility in crazy dreams - they are just the result of some parts of the brain being asleep while others are awake, so a lot of the checking processes can be missing and you don't notice how bonkers the plot is until you wake up.

Quote
I think an AGI couldn't work differently to find a new idea; I think it would have to make the same improbable combinations. Of course, the process isn't completely random, since the main idea usually corresponds to a real problem, but making only trivial combinations would necessarily have fewer chances of producing a novel one.

There are occasions when random ideas flung together result in something useful, but that can be tried out systematically instead with a greater discovery rate. There are some paths worth working down first too because they have a higher success rate in producing useful discoveries, so an intelligent system should explore those paths first. Clearly though, the more AGI systems you have doing this work, the sooner you'll have picked all the low-hanging fruit, and a coordinated search will ensure that you don't have a billion machines all searching the same paths and making the same discoveries. With humans, what happens is that most of them follow the same paths, duplicating the work of others millions of times, but there is some utility in that because they all miss things, and it might be the billionth person to explore a path that finds what all the others missed. With AGI, that won't happen - they will all find the same things and miss nothing, so there's no utility in duplicating a search. With humans, you also have some eccentric people following some of the least likely paths and occasionally striking gold. With AGI, more such paths will be explored and more gold will be discovered (as part of a systematic search).
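
Here's a toy comparison of the two approaches (the search space and the three "discoveries" hidden in it are invented for the example): random search keeps revisiting paths it has already explored, while a coordinated systematic sweep covers every path exactly once and misses nothing.

Code: [Select]
import random

SPACE = 10000                 # number of distinct paths worth exploring
GOOD = {1234, 8191, 9999}     # invented paths that contain a discovery

# Random search: the same budget of trials, but paths get revisited.
visited = set()
found = set()
for _ in range(SPACE):
    path = random.randrange(SPACE)
    visited.add(path)
    if path in GOOD:
        found.add(path)
print(f"random: {len(visited)} distinct paths covered, {len(found)} discoveries")

# Systematic search: every path visited exactly once, nothing duplicated or missed.
found = {path for path in range(SPACE) if path in GOOD}
print(f"systematic: {SPACE} distinct paths covered, {len(found)} discoveries")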

Quote
Your AGI seems to have chosen coercion where I would have chosen education.

The best approach is the one that leads to the fewest murders. The education route leads to multiple genocides with very little learning, while the coercion route stamps out the genocides by jailing the few fascists who seek to whip up the rest, taking them out of circulation and keeping the peace. We learned to stamp on Nazis (not a lesson fully learned - we're still far too soft on them), and we need to extend that to all the other fascists who want to kill others in the name of ridiculous ideologies. The primary hate has a proven track record of generating genocides, but we currently have hypocrites in charge who tolerate some of that hate and who deny the genocides that it's driven while at the same time they condemn other hate and the genocides that it has driven. We need to apply the exact same rules to both sets of primary hate and to the people who propagate it and who are apologists for it.

Quote
If your AGI were programmed to prevent us from forming religions, it would also prevent us from making friends.

AGI would have no problem with people forming religions. It would only object to the creation of, possession of and propagation of primary hate of any kind that generates abuses. Those things are all moral crimes of extreme seriousness, leading to deaths, conflict, war and genocide. They are simply unacceptable in any civilised system.

Quote
Political groups would not survive without a certain form of bigotry; worse, I wouldn't survive either if I didn't think I'm right.

Political groups should not be allowed to create or propagate primary hate. If they don't do that, they aren't bigots.

Quote
It is too easy to form a group around the idea of god, so it is that idea that we have to fight.

It is not that that needs to be fought, but their negative attitude towards innocent people who don't share their belief.

Quote
Species were also at war against the randomness of their environment, and that war was led by the randomness of mutations and genetic crossings, not by logic. The logic is only to reproduce the species as it is. By analogy, the mind can be considered to be at war against the randomness of its environment too, and that war can also be considered to be led by the randomness of intuitions (mutations of ideas) and mixings of ideas.

There's a slow way to solve problems and to make discoveries, and there's a fast way. Evolution couldn't use the fast way until intelligence evolved, but then we saw the rapid evolution of those animals which man began to modify through the application of intelligence. We later saw the even more rapid evolution of machinery. Intelligence is fast; randomness is slow.

Quote
Without randomness, your AGI would only be able to defend its logic, not to adapt to a changing environment. How could that analogy be so tight without containing a bit of truth?

There will be nothing more adaptable than AGI. There isn't anything that AGI can do better by doing it less intelligently, other than being stupid.

Quote
I guess I don't give as much importance to my own species as you do.

It has nothing to do with humans - any sentient species that's able to have fun will be making better use of the universe than any number of machines that lack sentience.

Quote
The problem with my interpretation is that you don't seem to like being part of groups either, even though you have a good memory, but your choice may also depend on some other reasons. It is not that I don't like being human, I like my life, but it's as if I enjoyed keeping a certain distance from things that others seem to enjoy less.

You can be part of many groups and still see them from the outside too. The groups to get out of are the ones that harbour hate of innocent outsiders, but the rest are benign. With any group that you are part of, you need to keep checking what it is and whether you should still be there - they can change over time and turn bad, but it's hard to notice if you accidentally become part of a clique and fail to pay enough attention to new members.

Quote
I got the idea that you don't seem to care about how sentience works from the way you reacted to my definition of consciousness. I suggested that what we are conscious of is change, and that consciousness is the result of the mind automatically resisting a change the same way my small steps do, and you preferred to insist on the fact that an AGI wouldn't need to be conscious to be intelligent. If I'm right, consciousness would only be a secondary effect of a natural law that depends on the fact that information is not instantaneous. Even if computers are faster than minds, they are not instantaneous either, so they should also possess some kind of consciousness.

They should have no consciousness at all, unless they're designed to have it (which we don't know how to do - all we know is that there is no way for any conventional processor to access any information whatsoever about any feelings that any of its components might be experiencing, and sentience is the major part of consciousness [and may be the entirety of it]). I also don't see where resistance comes into it in any useful way. Resistance is a barrier to intelligence, preventing people from exploring new ideas that clash with their existing beliefs - they reject the new idea instead of checking to see if the fault might be in the data they've already accepted in their head. That resistance is the main driver of stupidity in the world, and we don't want to duplicate it in AGI, because that would give us AGS instead. When you have conversations with intelligent people and an unlikely idea is put forward by a member of the group, the initial response is, "That sounds mad!", but then someone will say, "Let's test it and see what breaks." The idea is then explored, and usually it falls apart quickly, but sometimes it will hold out for a full minute and everyone begins to think, "There might just be something in this!" They try to break it in every way they can think of, but then one of them finds something - a fault not in the idea, but in an existing belief. For example, the idea that Mercury is nearer the Earth than Venus came up the other day. Well, it only takes a couple of seconds before an intelligent person begins to think it might be true. How do you test it? You imagine both planets at their nearest and furthest points and see that the average distance to the Earth from each planet's pair of points is the same. You then imagine them on a line perpendicular to that and you can see that Venus is further from the Earth than Mercury at those points. For all other points in between, they come in fours, and if you average the further away points with the nearer ones, you convert them into two points on the same line perpendicular to the line from the Earth to the sun, showing again that Venus is further away than Mercury. You can also see straight away that the sun is closer to the Earth than Mercury too on average. This is how intelligent people think - they fly, unhindered by resistance, and they build much better models of reality in their heads as a result.
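
You can check the planet claim numerically too. Here's a quick sketch assuming circular, coplanar orbits (radii of roughly 0.39 AU for Mercury and 0.72 AU for Venus, pure illustration): averaging the separation over all relative positions puts Mercury, and the sun as well, closer to the Earth on average than Venus.

Code: [Select]
import math

def mean_distance(r1, r2, samples=100000):
    # Average separation of two bodies on circular, coplanar orbits,
    # taken over a uniform relative phase angle.
    total = 0.0
    for i in range(samples):
        theta = 2 * math.pi * i / samples
        total += math.sqrt(r1 * r1 + r2 * r2 - 2 * r1 * r2 * math.cos(theta))
    return total / samples

print(f"Earth-Mercury: {mean_distance(1.0, 0.39):.3f} AU")  # about 1.04 AU
print(f"Earth-Venus:   {mean_distance(1.0, 0.72):.3f} AU")  # about 1.14 AU
print("Earth-Sun:     1.000 AU")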

Quote
My particles should also possess some, since they resist being accelerated. In fact, the reason why our ideas resist getting changed would depend on the fact that the particles our mind is made of resist getting accelerated during the neuronal chemical process. This way, our own consciousness would only be particular, not unique, and developing an AGI just to preserve it wouldn't be so important. What you call sentience is the consciousness of a sensation, and it can also be attributed to a resistance, that of each neuron resisting a change of frequency while new information comes in.

All you have is a guess about consciousness which is totally disconnected from any mechanism for taking qualia and allowing the mind's computer to read them. There could be a billion conscious, sentient things in the brain, but we have no useful model of how they interact with the information system that reports their existence.

Quote
I would rather educate people about the benefit we get from forming groups and defending them. Religious groups give people the feeling that they are safe when they are not, so it's a false feeling that only serves to defend the group. Religious groups only serve to make us feel safe, nothing else. They are built around an instinctive behavior that is only meant to protect us, and they don't protect us. On the contrary, defending such a group against another one only leads to endangering everybody. At least defending our own country against another country has a use: it defends real people, not just a false feeling. If I were an AGI, I would try to advertise that kind of idea first before being coercive, because I think that we can't change our instinctive behaviors, while we visibly can change ideas with time.

The people who need to be stamped on are the ones who are picking up weapons and killing people, or who are stirring others up to do so. They act on the primary hate in the manifestos of their ideologies and prevent it being removed to reform the ideology/religion into something benign. Most of the followers will be good people who are prepared to let the hate go, accepting that it cannot have real authority because it is so vile. If it was written by a God, it was clearly done so in the expectation that good people will reject that hate, and a failure to reject it will be a passport to hell. That is how AGI will clean up the world. Those few who cling to the hate are dangerous - they are the terrorists and the supporters of terrorism. The rest of them don't care for the hate and try not to recognise its existence. AGI will help them by removing its existence altogether.

Offline yor_on
Re: How can I write a computer simulation to test my theory
« Reply #216 on: 12/02/2019 13:14:42 »
I would start with an analog system. Then build simple rules defining responses. What that presumes is that no binary logic will cover it.

Offline David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #217 on: 12/02/2019 23:48:12 »
Quote from: yor_on on 12/02/2019 13:14:42
I would start with an analog system. Then build simple rules defining responses. What that presumes is that no binary logic will cover it.

You can simulate analogue responses with as fine a granularity as real analogue systems. Digital isn't restricted to 1 and 0: a byte can represent values from 0 to 255; two bytes can represent 0 to 65535; etc.
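
For instance, here's a minimal sketch of quantising an analogue signal at two bit depths (the sine wave is just an example input): at 16 bits the worst rounding error is already below one part in 100,000 of full scale.

Code: [Select]
import math

def quantise(value, bits):
    # Map a value in [0.0, 1.0] onto the nearest of 2**bits evenly spaced levels.
    top = (1 << bits) - 1          # 255 for one byte, 65535 for two bytes
    return round(value * top) / top

signal = [0.5 * (1 + math.sin(2 * math.pi * t / 100)) for t in range(100)]
for bits in (8, 16):
    worst = max(abs(s - quantise(s, bits)) for s in signal)
    print(f"{bits}-bit: worst rounding error = {worst:.8f} of full scale")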

Offline Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #218 on: 17/02/2019 15:05:50 »
Quote from: David Cooper on 12/02/2019 00:38:35
If it was written by a God, it was clearly done so in the expectation that good people will reject that hate, and a failure to reject it will be a passport to hell.
The problem is that it was written by us, and that we can't change our instinctive behaviors just by defining what's good and bad. The way we execute our instincts can change, but not our instincts. As animals, we got the instinct to form groups, and the instinct to defend them once they are formed. As humans, our intelligence develops non-instinctive ways to defend them, ways that are a lot more efficient than instinct, but that only serve to defend the groups, not to form them. Groups only form when the benefit is evident, and then we also develop intelligent ways to cooperate. There is a dichotomy between collaboration and competition that hasn't been elucidated yet. People who like to compete think that god likes it too, and people who like to cooperate think the inverse. It takes a border and a center to form anything, and the border has to be able to protect the center. While the border is competing against intruders, the center is cooperating. I'm mostly cooperating, but my competing ideas nevertheless sometimes take over. I even suspect my ideas about cooperation wouldn't be able to survive without my competitive ones. It's as if they formed a whole. Societies work like that too: some individuals are more competitive than others so that those who are less competitive can cooperate.

Again, I think it's a natural law. Your AGI seems to be able to control the competitive aspect of society. It would protect us from the competitive people who get dangerous for others so that we can better cooperate, but it would also cooperate with us. Normally, laws can do that job, but there is no international government yet, so there is no international law except the law of the strongest. You probably think that an international government will never be possible, otherwise you might think like me that, with such a government, your AGI might not be necessary. Maybe your AGI could help us build one faster than we could on our own, but once that was done, it would lose its job. When we have no higher government to rule us though, we do what we want, and we can even get undemocratic if we want, so your AGI could do a nice job there: it could make sure that the government stays democratic. If you agree with that, then we can start cooperating, otherwise we will be forced to keep on competing until we find a benefit from cooperating. :0)

Quote from: David Cooper on 12/02/2019 00:38:35
There are occasions when random ideas flung together result in something useful, but that can be tried out systematically instead with a greater discovery rate.
That's what happened with Go, but it's a closed system, whereas the universe is not. When we have no data about something, we can't predict the outcome. No AGI could have predicted the outcome of evolution a million years ago, simply because computers were not invented yet, so no computer can predict now the kind of intelligence that will replace it. You seem to think that your AGI would be the last limit, the pinnacle of intelligence, but isn't that close to the idea of god?

Quote from: David Cooper on 12/02/2019 00:38:35
Political groups should not be allowed to create or propagate primary hate. If they don't do that, they aren't bigots.
If bigotry only means sectarianism, then it is the very definition of any group, so you probably have another definition. We haven't enforced laws against hatred yet mainly because we want to preserve the principle of freedom of expression. I think we're getting there though. Software could easily refuse to publish messages containing vicious words on social media, for instance, and tell people to change their wording. Intelligent software could even detect hatred and ban people who regularly use it. We could try it and see whether the undesired side effects would be important. I bet people would agree if Facebook or Twitter asked them for permission to try it. I wonder if Trump would often have to revise his wording or be notified that he is about to get banned. The same law could help us control groups that propagate hatred or that display their criminal activities, like the "Hells Angels" here in Quebec. My problem with those groups is that I want to kill them all, because they are violent and that violence is contagious. It's not the people that should do that job, it's the law.
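
A crude sketch of that first idea (the banned list is invented, and real moderation would need much more than string matching): the message is refused and the author is asked to change the wording.

Code: [Select]
# Invented word list; a real system would need context, other languages,
# spelling variants and an appeal process.
BANNED = {"vermin", "subhuman"}

def review(message):
    # Refuse the message and ask for a rewording if any banned word appears.
    words = [w.strip(".,!?") for w in message.lower().split()]
    hits = [w for w in words if w in BANNED]
    if hits:
        return "refused, please change your wording: " + ", ".join(hits)
    return "published"

print(review("Those people are vermin!"))      # refused
print(review("I disagree with their ideas."))  # published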

Quote from: David Cooper on 12/02/2019 00:38:35
It is not (the idea of god) that needs to be fought, but their negative attitude towards innocent people who don't share their belief.
The negative attitude about people not sharing our ideas is instinctive. I have it about your AGI and you probably have it about my small steps. The only intelligent thing I can do is not to show it and to try to understand what you say. I can do that because I agree with almost all of your ideas, but whenever someone thinks there is no reason to listen, then the negative attitude shows up more easily. If we had such a negative attitude, your AGI would simply decide which one of us is right and censor the other. That's what happened to my new simulations the other day on the Physics Forum, and I don't think it's the right way to discover new things. I was frustrated, and I would be even more frustrated if I knew that your AGI could steal my idea and develop it faster than I could. You don't have that feeling because you're at the origin of your AGI, so imagining that your AGI might work is enough to prevent you from thinking further. I know because I imagine the same about my small steps, but I also know by experience that the feeling we get from having discovered something doesn't last long, and that we need to look for another challenge quite soon. Once your AGI works though, it will be useless to try to discover anything, since it will already know better. Tell me how you think you would feel.

Quote from: David Cooper on 12/02/2019 00:38:35
There's a slow way to solve problems and to make discoveries, and there's a fast way. Evolution couldn't use the fast way until intelligence evolved, but then we saw the rapid evolution of those animals which man began to modify through the application of intelligence. We later saw the even more rapid evolution of machinery. Intelligence is fast; randomness is slow.
Intelligence is fast partly because it takes less time to test an idea than to test an individual, not necessarily because the mutation/selection mechanism isn't at work there too. We now know from bacteria's adaptation to antibiotics that when the mutation/selection mechanism is fast, the discovery is fast too. To be faster than they are, bacteria could use computers, but they would be forced to proceed by trial and error to discover how all their genes work, and I'm afraid that would take more time than letting the mutation/selection mechanism do its job.
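
To make the mechanism I keep invoking concrete, here is a minimal Python sketch of a mutation/selection loop. The genome, the fitness function and the number of generations are arbitrary placeholders, just to show the shape of the process:

Code:

import random

# Invented fitness function: how close the genome's sum is to a target of 42.
def fitness(genome):
    return -abs(sum(genome) - 42)

genome = [random.randint(0, 10) for _ in range(10)]
for generation in range(1000):
    mutant = genome[:]                      # copy the current variant
    i = random.randrange(len(mutant))
    mutant[i] = random.randint(0, 10)       # random mutation of one gene
    if fitness(mutant) >= fitness(genome):  # selection: keep it only if no worse
        genome = mutant

print(sum(genome))  # usually ends at or near 42

No foresight is needed anywhere in that loop: random variation plus selection is enough to hit the target, which is my point about the bacteria.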

Quote from: David Cooper on 12/02/2019 00:38:35
It has nothing to do with humans - any sentient species that's able to have fun will be making better use of the universe than any number of machines that lack sentience.
What's the fun of living after having met god? Continuous orgasm? Then what's the use? It's as if you considered that our sensations have no use. They do: we couldn't survive without them. Our sensations guide us. Without any need to survive, no sensation is needed. Orgasm is meant to force us to reproduce, not to give us fun. Fun is not a goal; like pain, it's part of a survival tool. No challenge to overcome, no fun.

Quote from: David Cooper on 12/02/2019 00:38:35
All you have is a guess about consciousness which is totally disconnected from any mechanism for taking qualia and allowing the mind's computer to read them. There could be a billion conscious, sentient things in the brain, but we have no useful model of how they interact with the information system that reports their existence.
I just had a flash while reading you. The consciousness we have of an external piece of information might take the form of a reflex that has time to be inhibited before being executed. We know we don't need to be conscious of a reflex to execute it, so if the information had time to reach our mind, we wouldn't need to be conscious of it to execute it either. Conversely, we can become conscious of any information that has time to reach our mind, but that doesn't mean we need to be conscious of it to process it. This way, consciousness would only be a passive picture of the change happening in our brain. Now the hard question: can our brain use that picture to inhibit information that has had time to reach our mind, or is it always too late? Experiments like Libet's show that our brain starts a move before we become conscious of it. In that interview, the philosopher offers a clear analogy: he says that if intelligence is a smartphone, then consciousness is the screen. That covers the decision to make a move, but what about the decision to inhibit it?

We know we can't inhibit a reflex, because we become conscious of it only after it has been executed, so information can probably reach the mind and start a move before the move is inhibited -- this could be what happens when we react emotionally, for instance. But if the move lasts long enough, it can be inhibited, though only after the information from the move has reached the mind. So even if a move begins to be monitored only after it has started, I think we can consider that it is monitored consciously. Consciousness would then be the reiteration of the pulses that the same neurons need to keep firing to monitor a move constantly, whether that move is being executed or inhibited. That way, the information can still be contained between the neurons, as I suggest, and consciousness can happen when they fire; so when we picture something in our mind, the picture could be made of all the simultaneous firings it takes to build it. By comparison, a computer has its information contained in each of its transistors, not in between, and its consciousness would happen each time a transistor flipped, not at the screen, as Libet was suggesting. For a computer to have the kind of consciousness we have, it would thus have to be wired in parallel so that the whole picture could flip simultaneously. We might argue that a computer is too fast for us to distinguish each flip, but consciousness is what the computer itself would experience, not what we would, and it would experience only one flip at a time.

The mind works in three dimensions, though, and a picture has only two, so a question remains: where exactly is the picture located in the brain? Let's first try to imagine where a move is. A move could be made of many individual pictures, the same way motion on a screen is, so it could fill the whole portion of the cortex reserved for monitoring it. We can easily imagine moving an arm in our mind, but it's almost as if it weren't a picture. It's probably made of the sensation we get from executing the move, not from seeing it, as it is for blind people. The sensation we get from imagining a picture is thus a bit different: it can be static, and to be static while still filling the visual cortex, it could be made of successive identical pictures. What we would imagine, then, is many copies of the same picture fired simultaneously. Since we imagine it, we are conscious of it, and we can retroactively play with it in our mind. Perceiving oneself would then belong to the neurons' firings, not to the information itself. But the freedom to think what we want is different: it comes from the decision we can take to play with the picture the way we want. We can picture a tree, for instance, and decide to put it upside down. If consciousness is the firings that represent the tree, then it cannot decide to play with the tree, so something else has to do that job.

That's where randomness comes in. If the decision to move things around in our mind depends on a random process, then will is easier to understand: there is no need to look for whatever takes the decision anymore. If we decide to imagine a tree instead of a car, for instance, it's simply because the tree has won the random game. This way, our mind can decide to imagine anything and automatically produce the feeling that the decision is intrinsic. Otherwise we have to look for what took the decision to take the decision, and that doesn't work. To me, it's clearly a "model of how sentient things (in the mind) interact with the information system that reports their existence (in the mind again)" (your words in bold). If we try to decide randomly between 1 and 2, for instance, won't we be able to proceed as randomly as when we toss a coin? And if so, what is tossing the coin if not our own mind?
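
In code, the kind of decision I'm describing is just a random draw. The competing thoughts and their weights below are inventions of mine, only to illustrate the idea that whichever thought wins the draw feels like a choice:

Code:

import random

# Hypothetical competing thoughts with made-up weights.
thoughts = ["tree", "car"]
weights = [0.6, 0.4]

# The "decision" is whichever thought wins the weighted draw.
decision = random.choices(thoughts, weights=weights, k=1)[0]
print("I decided to imagine a", decision)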


« Last Edit: 17/02/2019 15:45:32 by Le Repteux »
Logged
 

Offline David Cooper

  • Naked Science Forum King!
  • ******
  • 2876
  • Activity:
    0%
  • Thanked: 38 times
Re: How can I write a computer simulation to test my theory
« Reply #219 on: 18/02/2019 01:33:26 »
Quote from: Le Repteux on 17/02/2019 15:05:50
The problem is that it was written by us, and that we can't change our instinctive behaviors just by defining what's good and bad.

It was written by a liar, but the people who have been hypnotised by the lies need to be deprogrammed, and you need to pander to their beliefs to steer them out of the mess they're in, following a path which picks the belief apart one piece at a time instead of trying to do it all in one go. So initially it can be best to pretend that the God is real and to work with that, providing better reasons for him spreading hate than that he wants people to act on it.

Quote
You probably think an international government will never be possible; otherwise you might think, as I do, that with such a government your AGI might not be necessary. Maybe your AGI could help us build one faster than we could on our own, but once that was done, it would lose its job.

AGI will without doubt become the future government, running the whole world. Its job will never end, so long as there are sentient beings to look after. There isn't much room for democracy in this though because there are right and wrong moral answers which dictate almost everything that a government should do.

Quote
That's what happened with Go, but Go is a closed system, whereas the universe is not. When we have no data about something, we can't predict the outcome. No AGI could have predicted the outcome of evolution a million years ago, simply because computers hadn't been invented yet, so no computer can predict now the kind of intelligence that will replace it. You seem to think that your AGI would be the last limit, the summit of intelligence, but isn't that close to the idea of a god?

Nothing is going to evolve faster than AGI improves its own processing power and any algorithms that might make it a better thinker. Predicting the future involves a lot of unpredictable factors, so it's not possible to tell everything about what will be here a million years from now, but there's no way any natural intelligence can evolve faster than AGI.

Quote
If bigotry only means sectarianism, then it is the very definition of any group, so you probably have another definition.

Bigotry isn't being part of a group, but discriminating against people because of immoral group rules.

Quote
We haven't yet enforced laws against hatred, mainly because we want to preserve the principle of freedom of expression.

It's because the people who wrote the laws wanted to protect their own vile bigotry rather than ban it, and the result of their failure to condemn all hate is that they have to pander to other people's hate if it's tied up in similar religious packages.

Quote
I think we're getting there, though. Software could easily refuse to publish messages containing vicious words on social media, for instance, and tell people to change their wording. Intelligent software could even detect hatred and ban people who regularly express it. We could try it and see whether the undesired side effects would be important. I bet people would agree if Facebook or Twitter asked their permission to try it. I wonder whether Trump would often have to revise his wording or be notified that he was about to get banned. The same law could help us control groups that propagate hatred or that display their criminal activities, like the Hells Angels here in Quebec. My problem with those groups is that I want to kill them all, because they are violent and violence is contagious. But it's not the people who should do that job; it's the law.

That is how primary hate generates secondary hate, and then the people expressing secondary hate get accused of being haters and bigots. It's exactly like a case where one child hits another (who has done nothing wrong) and the second child hits back. The first one is a bully, and the second one is not to blame for anything. We live in a world where the second child is often condemned while the bully is portrayed as peaceful.

Quote
The negative attitude toward people who don't share our ideas is instinctive. I have it about your AGI, and you probably have it about my small steps.

I expect the universe to work through small steps because it's the only way to reduce a series of steps to a finite number that can run through in a given length of time, so I'm not opposed to what you're trying to do. I just need to see it working rationally and producing the right numbers.

Quote
If we had such a negative attitude, your AGI would simply decide which one of us is right and censor the other. That's what happened to my new simulations the other day on the Physics Forum, and I don't think it's the right way to discover new things.

I haven't been keeping up with the action here due to my work, but I'll catch up at some point. What AGI will do though is analyse things without bias and it will apply reason rigorously instead of in the sloppy manner that almost all humans do, even at the highest academic levels. AGI wouldn't just say something is wrong, but would prove it every time and spell out in complete detail how it's wrong (however deep you need to go). You don't get that from people, not least because they don't have the patience to explain much in full detail, and they rarely have the patience to find out exactly what it is that they're casting judgement on, which means that their judgements can be very wayward indeed. Sometimes they're right though, and they just don't have the patience to prove it.

Quote
I was frustrated, and I would be even more frustrated if I knew that your AGI could steal my idea and develop it faster than I could.

If it hasn't already explored every line you're looking at before you show it your ideas, it'll help you explore that line and credit you as a co-discoverer of any discovery that it helps you make.

Quote
Once your AGI works, though, it will be useless to try to discover anything, since it will already know better. Tell me how you think you would feel.

It's always disappointing when something new is discovered because it reduces the list of things to be discovered, but that disappointment is usually outweighed by the gains. The key thing is not to try to compete against rivals who are much better at something than you are. At the moment, there's room for people to make discoveries in science, but AGI will close that off as soon as it is available. There will still be plenty of things to invent that make life more fun though, and we have a better idea of what is fun for a human than a machine has. If someone only gets enjoyment out of making scientific discoveries, all they have to do is avoid reading up on scientific knowledge and they can spend their whole life rediscovering lots of things that are already known. If that isn't fun because there's no fame to be won from it, then the fun isn't so much in the discoveries, but the idea of being famous, and that's a really empty goal. Fame is actually a curse, killing your freedom and making you a target.

Quote
Intelligence is fast partly because it takes less time to test an idea than to test an individual, not necessarily because the mutation/selection mechanism isn't at work there too. We now know from bacteria's adaptation to antibiotics that when the mutation/selection mechanism is fast, the discovery is fast too. To be faster than they are, bacteria could use computers, but they would be forced to proceed by trial and error to discover how all their genes work, and I'm afraid that would take more time than letting the mutation/selection mechanism do its job.

If we had a way to model every aspect of what's going on with bacteria and could match the speed of the action, we could then design more robust bacteria a lot more quickly than they can evolve into them - they do the same failed experiment a billion times, but the computer only needs to do it once to know that there's no gain.
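
A crude sketch of that difference, with an invented search space and an invented test, just to show why remembering failures wins over blind mutation:

Code:

import random

# Invented search space: only one candidate out of 1000 "works".
candidates = list(range(1000))
def works(c):
    return c == 777

# Blind mutation: may repeat the same failed experiment many times.
random_tries = 1
while not works(random.choice(candidates)):
    random_tries += 1

# Systematic search: each candidate is tried once; failures are never retried.
systematic_tries = 0
for c in candidates:
    systematic_tries += 1
    if works(c):
        break

print(random_tries, "blind tries vs", systematic_tries, "systematic tries")

On average the blind search needs about a thousand draws and can be arbitrarily unlucky, while the systematic search is guaranteed to finish within one pass.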

Quote
What's the fun of living after having met god? Continuous orgasm? Then what's the use? It's as if you considered that our sensations have no use.

There's no God to meet. AGI simply produces a fair world with more toys and better food. There will be plenty of things to do that are better than battling against starvation and genocides.

Quote
Fun is not a goal; like pain, it's part of a survival tool. No challenge to overcome, no fun.

Fun is the only worthwhile thing. Without it, there is only mere existence.

Quote
I just had a flash while reading you.

Well, keep exploring it in case it leads somewhere. But no amount of theorising about consciousness in a computer is going to amount to anything if there's no way for the information processing system to know anything about those feelings in order to document them. There is nothing in software that can accommodate feelings other than lying about feelings.

Quote
The mind works in three dimensions, though, and a picture has only two, so a question remains: where exactly is the picture located in the brain?

It needn't be in the brain at all - it could be out the side in another dimension. Wherever it is though, it has to interface with the information system, and it's only when we can see that interface that we'll have any real hope of finding out if consciousness is more than a fiction, unless someone can do a piece of thinking outside the box of a kind that no one has ever managed before and pin down a rational mechanism for sentience.

Quote
To me, it's clearly a "model of how sentient things (in the mind) interact with the information system that reports their existence (in the mind again)" (your words in bold).

It isn't. If you want to produce a model of that, you need to show the inputs from whatever's sentient and explain how the information system decides what those inputs mean. If the information system has to look up a file to find out what the inputs mean, then where did that file get its knowledge from? How was it informed? Etc.

Quote
If we try to decide randomly between 1 and 2, for instance, won't we be able to proceed as randomly as when we toss a coin? And if so, what is tossing the coin if not our own mind?

Making random decisions is hard - it's something we're very bad at doing. If a tree comes to mind rather than a car, there will be hidden reasons why, but it's all driven by cause and effect. So much of our brain works away non-consciously that we can't monitor what it's doing (unless we use fMRI, and that has severe limitations). Of course, other parts of the brain might be fully conscious and unable to tell us so. I often wonder what's lurking in my mind. The creativity of some dreams astonishes me - occasionally they seem to have been written by an intelligence that isn't me, keeping a clever twist in the plot hidden until the last moment and then revealing it at the right time for maximum effect, while also showing that it had been planned early on. There's definitely someone else in here who can't speak to me directly, but who tries to communicate through dreams.
« Last Edit: 18/02/2019 01:39:07 by David Cooper »
Logged
 


