How can I write a computer simulation to test my theory


Offline Le Repteux (OP)

Re: How can I write a computer simulation to test my theory
« Reply #220 on: 19/02/2019 17:13:41 »
Quote from: David Cooper on 18/02/2019 01:33:26
The creativity of some dreams astonishes me - occasionally they seem to have been written by an intelligence that isn't me, keeping a clever twist in the plot hidden until the last moment and then revealing it at the right time for maximum effect, but also showing that it had been planned early on. There's definitely someone else in here who can't speak to me directly, but who tries to communicate through dreams.
You were right to believe that you were still a child. :0) It's indeed as if our mind were sometimes playing games with itself, like a kid talking to his imaginary friend. That feature of imagination is probably the main reason why people still believe in god. God can't help us though, whereas randomness can. It took a while before we discovered the use of randomness, but religions are forced to deny it, otherwise they know they could be replaced. You may be unable to believe that intelligence needs it, but you can probably still admit that it explains our dreams better than someone else trying to communicate. If you can, then you could ask yourself why our brain produces such a feature. Either it's only a side effect of imagination, or it is a real feature, a property of mind without which we wouldn't be as intelligent as we are.

Like us, your AGI needs to be able to simulate things before executing them, which is part of imagination's job. The only thing it would then be missing is simulating improbable things once in a while in case they pay off, and then I bet it would realise that it pays off often enough to integrate the habit. That's what I think happened to our mind while we were evolving from animals to humans. If simulating all the possibilities, beginning with the most evident, had been better, evolution would have chosen that way, and it didn't. We will probably be able to build biological computers some day, so evolution could have done so too, but it didn't. The way the mind moves its data is slow, and it could probably have been as fast as computers if that had been useful, but there is no use in thinking a million times faster than we can move, so it didn't. The way it remembers data is imprecise, and it could probably have grown biological chips instead, but there is no use in being a million times more precise than the environment we are in, so it didn't. The inverse is possible too. It is also possible that computers are the next evolutionary step to a higher intelligence, but I prefer to think that they will think like us, because this way, I can imagine that they will replace us instead of only caring for us. I'm going from bottom to top and you are going from top to bottom, but we are both aiming at the same target: the future of humanity.
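What Le Repteux describes here, an agent that mostly simulates the most promising options but occasionally simulates an improbable one in case it pays off, is essentially the exploration-exploitation tradeoff. A minimal, hypothetical sketch in Python (the three "ideas", their payoffs and the epsilon value are invented purely for illustration, not anything proposed in the thread):

```python
import random

# Hypothetical example: three "ideas" with unknown average payoffs.
# The agent starts with a misleading first impression of idea 2.
true_payoffs = [0.3, 0.5, 0.8]          # unknown to the agent
estimates = [0.4, 0.6, 0.1]             # initial (wrong) beliefs
counts = [1, 1, 1]
epsilon = 0.1                            # chance of trying something improbable

random.seed(42)
for step in range(10_000):
    if random.random() < epsilon:
        choice = random.randrange(3)              # explore: simulate an unlikely option
    else:
        choice = estimates.index(max(estimates))  # exploit: best-looking option
    reward = true_payoffs[choice] + random.gauss(0, 0.1)
    counts[choice] += 1
    # incremental average: the estimate drifts toward the true payoff
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print([round(e, 2) for e in estimates])  # idea 2 ends up recognised as best
```

With epsilon set to 0, the same agent stays locked on the idea that merely looked best at the start and never discovers the better one; David's reply below is about the order in which that exploring should happen rather than whether it should happen at all.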

I think we're both dreaming anyway, so there is no need for me to take my ideas too seriously. To me, that kind of dream is similar to the ones I have while sleeping, because I also get the feeling that it comes from nowhere, but I could also think, like you, that someone is trying to communicate with me. Do you sometimes have that feeling about your ideas or do you always feel that they are yours? That they always come from your own deductions and calculations for instance? If you do, then it is no surprise that you want your AGI to think like you. If not, then it means that our ideas either come from randomness, as I think, or they come from someone else, as you think. Those two interpretations both mean that unpredictable things happen in our minds, but they don't lead to the same behavior. Those who think that someone talks to their mind may become dangerous to others, for instance, whereas I think there is no danger in considering that our ideas are subject to randomness. God talking to us is one of the features that make religions dangerous. A religion about randomness wouldn't have the same issue: it would advocate freedom over security, improvisation over constancy, education over coercion, imagination over memory. It would work for the long run, whereas current religions only work for the short one.

In the same way, I think that your AGI would only account for the short term, and that adding a bit of randomness to it would account for the long one. Existence accounts for both, so we might need to mix our ideas if we want them to do so, unless there is no other way than to wander from one to the other like the two extremes of our political systems. When the right governs, it effectively works for the short term, while the left works for the long one. Taking social measures is like caring for others and hoping they will care for us in the future. It's a second-degree selfish behavior that accounts for the future instead of the present. We can't predict the outcome of a society, so it's risky to take such measures, but we nevertheless agree to do so as a society because we already do so individually. Your AGI will behave as if it knew the outcome, since it would never take chances, so I think it will only account for the short term. If the right always governed, I think societies would not evolve. I think it's the random wandering between left and right that produces their evolution. Your turn now, but you're not allowed to answer that nobody has to evolve in paradise! :0)
Offline David Cooper

Re: How can I write a computer simulation to test my theory
« Reply #221 on: 20/02/2019 00:14:24 »
Quote from: Le Repteux on 19/02/2019 17:13:41
Like us, your AGI needs to be able to simulate things before executing them, which is part of imagination's job. The only thing it would then be missing is simulating improbable things once in a while in case they pay off, and then I bet it would realise that it pays off often enough to integrate the habit.

You're still missing the point - AGI will explore all those random lines eventually, but it will do so in a non-random order, starting with the lines that are most likely to produce lots of useful results and saving up the least-likely-to-be-useful lines until last. By following random lines, it's possible that a wonderful discovery would be made occasionally, but the cost of that would be that thousands of other wonderful discoveries of similar worth would be delayed until afterwards, so that's a massive loss. It is not intelligent to find the rare, less likely discoveries at the expense of the common ones that are easy to find. Suppose a field has gold coins buried in it with a few dozen coins put in random positions while ten pots full of coins are located under stones. An intelligent hunter will look under all the stones first and find the bulk of the treasure in a few hours. An unintelligent hunter will randomly dig holes all over the place and may, after a few months, happen upon a coin. For sure, that gold coin is worth finding, but you don't prioritise it ahead of the pots of gold coins which are easier to find. Once you've found all the pots of coins, then you switch to a systematic dig of the whole field, and again you don't do random because random will re-dig many of the same holes repeatedly by mistake. Random is stupidity; not intelligence.
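David's gold-coin field is the standard argument for prioritised search over blind random search: rank the dig sites by expected payoff per unit of effort and work down the list. A small hypothetical Python sketch of the two strategies (the field size and coin counts are made-up numbers chosen only to mirror his scenario):

```python
import random

random.seed(1)
FIELD = 10_000                            # number of possible dig sites
field = [0] * FIELD
for _ in range(30):                       # a few dozen loose coins in random spots
    field[random.randrange(FIELD)] += 1
stones = random.sample(range(FIELD), 10)  # ten pots of 100 coins, each under a stone
for s in stones:
    field[s] += 100

def prioritised(digs):
    """Look under the known stones first, then sweep the rest systematically."""
    stone_set = set(stones)
    order = stones + [i for i in range(FIELD) if i not in stone_set]
    return sum(field[i] for i in order[:digs])

def random_digging(digs):
    """Dig anywhere, possibly re-digging the same hole (blind random search)."""
    found, dug = 0, set()
    for _ in range(digs):
        i = random.randrange(FIELD)
        if i not in dug:
            found += field[i]
            dug.add(i)
    return found

for digs in (10, 100, 1000):
    print(digs, prioritised(digs), random_digging(digs))
# After only 10 digs the prioritised hunter already has all ten pots;
# the random digger has, on average, found almost nothing.
```

The point is not that random digging never finds anything, but that for any fixed budget of digs the prioritised hunter's expected haul dominates.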

Quote
That's what I think happened to our mind while we were evolving from animals to humans. If simulating all the possibilities, beginning with the most evident, had been better, evolution would have chosen that way, and it didn't.

Evolution is unintelligent - it has no means to choose an intelligent way. Once it created intelligence (us), we were then able to do things intelligently and more efficiently, and we can use evolution intelligently in design as a way of solving problems automatically in systems with low or zero machine intelligence. AGI will remove the need to use that technique because it will get faster results by understanding what it's doing and working systematically to follow the best lines first.

Quote
We will probably be able to build biological computers some day, so evolution could have done so too, but it didn't.

Evolution did build biological computers, but it was only when it produced us that we started to have the basis of something that operates like the machine on a desk - we are neural computers doing a lot in parallel, but when we solve problems through hard thought, we're often doing the equivalent of a CPU running a program with a carefully structured order of steps, and that's the system we copied when we invented computers.

Quote
The way the mind moves its data is slow, and it could probably have been as fast as computers if that had been useful, but there is no use in thinking a million times faster than we can move, so it didn't.

I don't think that's the case. If someone very ordinary could think a hundred times faster than the rest of us do, they could learn to stand 12 rounds against the best heavyweight boxer in the world by reacting to his every move quickly enough to avoid being hit - he wouldn't lose a point.

Quote
Do you sometimes have that feeling about your ideas or do you always feel that they are yours?

Ideas are usually won through hard thought - you have to do 99% of it that way to get into the places where the few final crucial pieces spring into being without the same effort, and those pieces come from all manner of bits of ideas from all the things you've ever seen or worked with before. I can usually see the entire route as to where the parts of a discovery came from.

Quote
That they always come from your own deductions and calculations for instance? If you do, then it is no surprise that you want your AGI to think like you.

It's all just knowledge and applied reasoning. But I want AGI to think better than I do so that it doesn't make any mistakes. For example, when you move something north and then accelerate it to move it north east, an unexpected rotation occurs quite automatically because of synchronisation issues, and it's so counter-intuitive that it never occurred to me that such a rotation was possible, so I made a mistake with that a few years ago. AGI will do what I failed to do, and that is simulate it correctly without taking shortcuts by making assumptions. We are so slow in our thinking that we need to take lots of shortcuts to get things done, but having found an interesting result at the end of that path, we need to go back and check every single one of those points where we made assumptions to make sure they don't contain errors. Einsteinists still refuse to do that with relativity, even when a critical error is shown to them to make it easy - they've been shown that they're cheating by smuggling an extra kind of time into the models to make them appear to function correctly (while that extra kind of time is explicitly banned in the model), but they simply deny that they're doing anything of the kind, even though they manifestly are. They've had nearly a decade now to meet my challenge to produce a simulation that doesn't cheat, and none of the poor souls have managed it - they just go on arrogantly asserting that it works and that they aren't using the smuggled-in Newtonian time which they depend on to coordinate the action and avoid event-meshing failures.

Quote
When the right governs, it effectively works for the short term, while the left works for the long one.

I don't think so. The left spends money like there's no tomorrow and bankrupts the country, like in Venezuela, but at least they do this because they care rather than only wanting to line the pockets of the rich. It's the people in the middle who work for the long term, understanding that they have to be careful not to go too far one way or the other, always hunting for the optimal position that maximises quality of life for the masses sustainably.

Quote
Your AGI will behave as if it knew the outcome, since it would never take chances, so I think it will only account for the short term.

Why would it do that when it's more intelligent to consider the long term and make sure that the improvements it brings about are lasting ones rather than a flash in the pan? And as for taking chances, it will do that intelligently, always playing the odds for the surest gains.

Quote
If the right always governed, I think societies would not evolve. I think it's the random wandering between left and right that produces their evolution. Your turn now, but you're not allowed to answer that nobody has to evolve in paradise! :0)

The alternation between left and right is the result of repeated failure and dissatisfaction. The left destroys the economy in order to give a temporary boost to the poor, and the right then tries to rebuild the economy by making the poor pay. We bounce between these two extremes and find it hard to settle in the middle due to a tendency for people to polarise and move to opposite extremes. And it doesn't help that the politicians standing in the middle are invariably useless - I don't know why they're so weak, but they never have an iota of charisma and they don't have a clue about how to campaign.
Offline Le Repteux (OP)

Re: How can I write a computer simulation to test my theory
« Reply #222 on: 21/02/2019 19:39:07 »
Quote from: David Cooper on 20/02/2019 00:14:24
You're still missing the point - AGI will explore all those random lines eventually, but it will do so in a non-random order, starting with the lines that are most likely to produce lots of useful results and saving up the least-likely-to-be-useful lines until last.
Maybe I'm not being clear enough, because I never had the feeling that my mind was completely random or that evolution was completely random. Evolution has a goal to respect, the survival of the fittest, and that goal is not random. My ideas have a goal to respect too: the survival of my fittest idea. Your AGI's goal is different; it is to produce as little harm as possible in its herd, as if we were sheep or cattle. Animals don't mind grazing all day in the fields, they only need to feed and breed, but we have other aspirations. What if humans told the AGI that they are not happy after a while, and that the only reason they can find is the AGI itself? What if they got fed up with the AGI always winning the game? You know what? I think they would begin building an AGI to beat the first one on its own ground, and to let them evolve as they want once it had taken power. They might also program it to help us organize a democratic world government, and then to start exploring the universe in search of other intelligent civilisations, just to prevent them from making the same mistake.

That's if it were your AGI, but I would rather give it the same kind of intelligence humans have, and simply improve its capacity, so that its mission would simply be to offer its help to other civilisations provided they want it, and to avoid reproducing the harm that our kind of intelligence did to lots of civilisations on Earth. The only way to do that would be to proceed very slowly, and it could, because it wouldn't have any other goal, like exploiting them or their natural resources for instance. It would plant a seed and let it grow, plant another one and let it grow again, taking good care that the seed is not too invasive. If it wanted to start new civilisations of its own kind, it would simply choose planets with no life on them yet, and transform them to fit its needs. Eventually, there would be no more biological intelligence in the galaxy, just an artificial one, and its goal would be to develop its knowledge, not to minimize harm and maximize pleasure. Those parameters would only help it to choose the best way not to damage itself in the process, exactly as they already do for us.

Quote from: David Cooper on 20/02/2019 00:14:24
It's all just knowledge and applied reasoning. But I want AGI to think better than I do so that it doesn't make any mistakes. For example, when you move something north and then accelerate it to move it north east, an unexpected rotation occurs quite automatically because of synchronisation issues, and it's so counter-intuitive that it never occurred to me that such a rotation was possible, so I made a mistake with that a few years ago.
I can't figure that rotation out. Could you elaborate a bit please? Is it a relativistic issue?

Quote from: David Cooper on 20/02/2019 00:14:24
Random is stupidity; not intelligence.
If evolution is stupid, then we are stupid, and I completely agree with that! :0) But if there is no way to build anything intelligent from stupid things, then your AGI will also be stupid. On the contrary, if we think that randomness is part of intelligence, then we can understand that evolution is intelligent, and we don't need a superior intelligence to explain the random things that happen in our dreams. An interesting side effect of that kind of thinking is that we can attach less importance to the mistakes we make all the time, which is a good way to get along better with ourselves and with others.

Quote from: David Cooper on 20/02/2019 00:14:24
Einsteinists still refuse to do that with relativity
For the moment, there is no advantage for them in changing their minds. Simulations and logic are not enough; we need to make new predictions and test them experimentally. I had one about the mass of particles accelerated separately: I predicted that they would not all offer the same resistance to acceleration, due to the randomness of the changing process. Unfortunately, I did not succeed in convincing you that that kind of change was similar to the change species face. Before I decide to test that kind of idea, I prefer to wait until I'm sure people agree with it.

Quote from: David Cooper on 20/02/2019 00:14:24
I can usually see the entire route as to where the parts of a discovery came from.
That's probably why you discard chance as part of your imagination. As my small steps show, memory helps us to execute our automatisms, not to predict the future. The future belongs to imagination. Looking back at our ideas gives us the feeling that they were all predicted, the same way the Doppler effect produces constant steps. With the steps, it's the memory of all the previous changes that produces the present motion, but those changes nevertheless needed randomness to find the right way. In the same way, it is useless to look back into history to predict the outcome of a society. That's what the conservatives do when they take power, and it never works. It's no better to improvise like the progressives do, and to think they are automatically right just because they are being more socialist. The fact is that we can't predict that kind of future, and that we should acknowledge it. The only way to predict the future is to force people to do what we want, and it generally doesn't last long. That's why I think that your AGI won't work. We don't like to be forced to do anything, including being happy, so I think we would revolt after a while. Did you anticipate that possibility? And if so, how would your AGI react then?


Offline David Cooper

Re: How can I write a computer simulation to test my theory
« Reply #223 on: 21/02/2019 22:17:43 »
Quote from: Le Repteux on 21/02/2019 19:39:07
Evolution has a goal to respect, the survival of the fittest, and that goal is not random.

Evolution has no goal at all. Survival of the fittest is just a mechanism by which evolution happens.

Quote
What if humans told the AGI that they are not happy after a while, and that the only reason they can find is the AGI itself? What if they got fed up with the AGI always winning the game?

If they told it not to run things, suffering would go up dramatically and people would beg it to start running things again. There are plenty of games that people can play where they'll still need to rely on their own wit while AGI avoids helping them so as not to spoil the game.

Quote
I can't figure that rotation out. Could you elaborate a bit please? Is it a relativistic issue?

Imagine four stationary drones arranged in a square with its sides aligned east-west and north-south. Synchronise their clocks. Now have them simultaneously accelerate to travel at a relativistic speed northwards. Have the drones continually resynchronise their clocks and adjust their positions relative to each other to maintain what looks to them like a square formation. At a set time, they will all accelerate eastwards to the same relativistic speed in that direction, with the combined northward and eastward movement actually making them move north east. The original square was turned into a rectangle by the first acceleration (when viewed from the original frame of reference), and then into a rhombus by the second acceleration. When you travel with the drones, though, you'll see them arranged in a square formation at all times. The edges of the square are not aligned with the north-south and east-west lines in this frame though - the square has rotated a bit (anticlockwise), and it is also seen to have rotated by the same amount when viewed from the original frame (which sees it as a rhombus).
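For anyone wanting to check this against the textbooks: the effect described matches what is usually called the Thomas-Wigner rotation, where two successive non-collinear boosts compose into a single boost plus a spatial rotation. For a first boost of speed β₁ (north) followed by a perpendicular boost of speed β₂ (east), the standard formula for the rotation angle ω is (quoted here as the usual textbook result, not derived from anything in this thread):

```latex
\tan\omega \;=\; \frac{\gamma_1\,\gamma_2\,\beta_1\,\beta_2}{\gamma_1+\gamma_2},
\qquad \gamma_i \equiv \frac{1}{\sqrt{1-\beta_i^{2}}} .
```

At everyday speeds this reduces to ω ≈ β₁β₂/2, which is why the rotation never shows up in ordinary experience and is easy to overlook when building a simulation.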

Quote
If evolution is stupid, then we are stupid and I completely agree with that!

But we're not - evolution is a stupid process which can create intelligence through a series of lucky accidents which get selected for with the innovations retained.

Quote
But if there is no way to build anything intelligent from stupid things, then your AGI will also be stupid.

But as there is a way to do it, that doesn't apply.

Quote
For the moment, there is no advantage for them in changing their minds.

There is. If they stick where they are, they're going to come out lower down the list of intelligent people when AGI rates everyone, and that will be a deeply embarrassing fall from grace. That fall is entirely avoidable - all they have to do is look honestly at the facts and stop being influenced by the status of the people presenting ideas. We have armies of people directly blinding themselves to clear facts in order to stay in with the establishment, and they do this because they have built their own status by borrowing it from the establishment: if you kowtow to the gods, their glory reflects on you. If you criticise the gods, your status is set to crackpot regardless of whether you're right or not. That's how it all works today, but it will never work that way again once we have AGI. Those who investigate these things rigorously and who dare to speak out when they find faults will have higher status than all the people with long lists of letters after their name who failed to do their job properly, and worse, who spent decades ridiculing the few people who did.

Quote
Simulations and logic are not enough; we need to make new predictions and test them experimentally. I had one about the mass of particles accelerated separately: I predicted that they would not all offer the same resistance to acceleration, due to the randomness of the changing process.

What randomness? I don't see anything random in accelerations - things follow the maths precisely.

Quote
The fact is that we can't predict that kind of future, and that we should acknowledge it.

But we can make predictions with probabilities attached to them based on the track record of such predictions in the past, and some predictions are right practically every time: the conservatives almost always go too far one way, and the socialists almost always go too far the other, so we just go on bouncing one way then the other over and over again.

Quote
The only way to predict the future is to force people to do what we want,

false premise

Quote
and it generally doesn't last long. That's why I think that your AGI won't work. We don't like to be forced to do anything, including being happy, so I think we would revolt after a while. Did you anticipate that possibility? And if so, how would your AGI react then?

It's very simple. If AGI stops doing the right thing, people will find out how much suffering it was preventing and how much happiness it was making possible. Every time they ask it to take a break, lots of people will die and lots more will spend the rest of their lives grieving (and condemning the people who made AGI stop).
Offline Le Repteux (OP)

Re: How can I write a computer simulation to test my theory
« Reply #224 on: 26/02/2019 16:03:00 »
Quote from: David Cooper on 21/02/2019 22:17:43
It's very simple. If AGI stops doing the right thing, people will find out how much suffering it was preventing and how much happiness it was making possible. Every time they ask it to take a break, lots of people will die and lots more will spend the rest of their lives grieving (and condemning the people who made AGI stop).
That's better. It starts to look like a democratic system. What about having two AGIs representing the two directions a democracy can take, and letting us choose which way we want to go by survey at the end of the year? People could organize into political parties, and their AGI would help them to win the surveys. One of the two parties would prefer that things stay as they are, and the other would prefer that they change. There is no other way than surveys to rate the satisfaction of a population anyway, so I guess your AGI would be forced to use them too to minimize displeasure and maximize pleasure. I haven't studied that precise point of yours yet, but I think it's time. Your AGI would necessarily have to ask people how they feel to know it, so its data would only be subjective. Some people get a lot of pleasure from fighting even if it hurts them, for instance. The things that we do freely need to give us pleasure, otherwise we stop doing them, so there would be no need for the AGI to ask us how we feel in this case.

It's only the pleasure we take from forcing others to do what we want that the AGI would need to prevent, and then it might prevent us from watching the good guy eliminating the bad guys on TV, since that incites us to do the same thing; worse, it might even prevent us from defending our own ideas if it thinks doing so will produce more displeasure than pleasure in the population. You know your AGI will have your ideas about the way we should behave, so you don't see what I see. I would not agree with its ideas more than I agree with yours, so I would try to stop it, not just discuss with it, because I think that the way it would proceed would hurt me. You think that we would change our minds after having stopped the AGI, since our lives would be worse without it, and I agree with you, but not for the same reason. We already change our governments quite often, but to me, it's not because the new government is worse that we want to change it after a while; it's because, contrary to animals, we always want more, because no government can predict the future, and because about half the population thinks it's better to proceed one way and the other half the other way. Will your AGI know why we always want more? And if not, will it feed us until we literally explode? Will it know why half the population wants some change and the other half doesn't? And if not, will it nevertheless drive the herd in the same direction until it falls off a cliff?

Quote from: David Cooper on 21/02/2019 22:17:43
Evolution has no goal at all. Survival of the fittest is just a mechanism by which evolution happens.
If so, then the survival of the fittest idea is also just a mechanism by which the evolution of ideas happens. I think you equate a goal with our will to reach it, as if there were a superior mind inside our mind that knew the right way. I prefer to think that there is no such mind, and to equate our will with our resistance to changing ideas. This way, the will of a species would be to resist change, and its goal would be to adapt to it, an outcome that is not defined in advance since it depends on a random process. You see a goal where I only see a possibility. My mom just handed me her iPad while I was writing, asking me to take a look at an email about giraffe hunting that a friend of ours visiting Africa had just sent us. The email was from Avaaz thanking her for having participated in a petition against wildlife hunting in Africa, but she never admitted it, since she already thought it came from our friend. That's resistance to change. When we are persuaded that others are wrong, we don't study what they say, while still feeling that we did.

You tend to attribute resistance to bad will resulting in poor analysis, but it's not bad will that is at stake then, it's resistance to change, a natural law that permits any existing phenomenon to keep on existing. The relativists can't use their will to resist our ideas, since they're not conscious of resisting. Claiming people don't want to understand simply leads to aggressive answers; worse, just trying to convince them can easily produce the same answer. My mom got angry when I tried to explain to her that she had made a mistake. It was clear to me, but it wasn't clear to her at all. The only way then is either to let her think her way, or to repeat the same flagrant thing until she begins to doubt. That's what I do when I discuss with people, since I know they have no choice but to resist, but that's also what you do even though you believe they have bad will, so I really wonder how you can. Maybe you do what your AGI would do: try to minimize displeasure and maximize pleasure. That's what I call our second-degree selfishness: we care for others as long as we can imagine that they will care for us. So your AGI would still be selfish after all, which is normal since it would be programmed by selfish humans. You probably simply imagine yourself in its place, the same way we do when we want to get along with others. It works as long as others imagine the same thing, otherwise it can go wrong quite easily.

Unlike us, though, your AGI won't get emotional, so it will be able to repeat the same thing indefinitely until its interlocutor begins to doubt. That doesn't mean it will work, though. As I often say, we don't change our minds by logic, but only by chance. Resistance to change is completely blind to logic, while the chances of changing increase with time. You think your AGI won't resist change while, in reality, it will be completely blind to our logic, and there will be absolutely no chance that it changes its mind with time. If you are able to imagine such an AGI, it's probably because you already think like it. You say we should try to demolish our own ideas to be sure they're right, but I think we can't do that; I think we can only compare our ideas to others' and try to imagine where they could interfere. Even though I try very hard to compare my ideas correctly to your AGI's, I always get the feeling that there is no interference. You can't convince me and I can't convince you, but you nevertheless intend to force people to accept your AGI whereas I don't intend to force anybody to think like me. It's hard to figure out what makes us so different on that precise point. I can't understand how I could force people to do what I want and still think they will be happy. Hasn't science shown that coercion is not the right way to educate children? Maybe you were forced as a child, but how could you think it was a good thing?

Quote from: David Cooper on 21/02/2019 22:17:43
The edges of the square are not aligned with the north-south and east-west lines in this frame though - the square has rotated a bit (anticlockwise)
If we simultaneously accelerate two inline particles to the right, then, due to the Doppler effect being delayed by acceleration, the left one will think it is getting closer to the right one, and the right one will think it is moving away from the left one. If we accelerate two perpendicular particles instead, there will be no difference between the two viewpoints: the light the particles perceive will come from where the other was when it emitted it, and it will suffer aberration and Doppler effect at detection. With the Doppler effect being delayed by the acceleration, they will both think they are getting away from one another with time, and with aberration due to their sideways motion with regard to the light, I'm not absolutely sure, but I think they will both see the other where it was before the acceleration started, as is the case for two particles in constant motion. Now if we try to synchronize them using the light they perceive, the two inline ones should move towards one another, and the two orthogonal ones too. I see no rotation, so either I misunderstood your description or I'm wrong about aberration, but even if I were wrong, since it is symmetrical in this case, it would only produce a symmetrical effect with regard to the direction of motion, not a rotational one. I probably misunderstood, did I?

Quote from: David Cooper on 21/02/2019 22:17:43
But we're not - evolution is a stupid process which can create intelligence through a series of lucky accidents which get selected for with the innovations retained.
If we had invented evolution instead of having only discovered it, I don't think we would call it a stupid invention. It's nature that invented the process, the same nature we are part of. I hope you don't think we are superior to nature, and if not, then I think we have to find a way to grant it some intelligence, and the best way I have found is to give less importance to our own. This way, it's not because we are intelligent that we succeed so well, it's because nature created us. Now that we succeed too well, we have a huge problem to solve, but it's not necessarily because we are not intelligent enough that we can't solve it faster; it's because it takes time to solve any new problem, and because the larger the system, the more time it takes. If an AGI were in charge of solving it, it would take as much time. Trying to change our habits takes time, and no AGI could change that. The best it could do is discover a better way to produce energy, so that we could go on doing what we are used to without adding more pollution, and then discover a way to clean up the Earth using the new energy. There would be no need to control us then, just to make the discoveries, so if you succeed in building one, I'm with you if you decide to do the research. I know you're afraid somebody might steal your AGI or build it before you do, but it's not a reason to do what Trump would do with it.

Trump thinks it's right to dominate the world before others do, but we know it's just a paranoid idea that has never brought us happiness. We feel like that when we feel threatened, and we automatically feel threatened when we have something we know others would like to have. If your AGI were only built to do scientific research, you wouldn't feel that threatened. Maybe someone else is actually building one with the intent to rule the world, but so what? Let those people think that coercion is the way to go, and keep on researching how things really work. Control induces control, so if you install your AGI, someone else will install another one to fight it. To me, that kind of software should simply be banned the same way nuclear arms should be. What's the use of developing more nuclear arms when we already know they're too dangerous? By the way, do you know the software called Mate Translate? It's so good that I could write my messages in French and have them translated. In fact, the only reason I don't do it is that I want to improve my English. If it's that good in Russian, I could at last be able to discuss with Yvanhov, and furthermore, he could at last be able to read and write in English without knowing it. I won't be able to use that kind of software anymore as an example of how far artificial intelligence is from intelligence. It has made a huge leap lately, not just a small step. If it can translate that well, it means that it understands quite well too, so it's not far from being able to discuss with us. I wonder if it would be as difficult to convince as you. :0)


« Last Edit: 26/02/2019 16:49:30 by Le Repteux »
Offline David Cooper

Re: How can I write a computer simulation to test my theory
« Reply #225 on: 26/02/2019 19:36:32 »
Quote from: Le Repteux on 26/02/2019 16:03:00
That's better. It starts to look like a democratic system.

Not really - what it shows is that AGI will know what's good for people more of the time than people do, and any occasion when people are allowed to go against the advice of AGI will invariably lead to tragedy.

Quote
What about having two AGIs representing the two directions a democracy can take, and letting us choose which way we want to go by survey at the end of the year?

That doesn't work - you'd either have two AGIs with the same policies on all issues, or you'd have one or two AGIs producing bad policies which will cause disaster if implemented.

Quote
There is no other way than surveys to rate the satisfaction of a population anyway, so I guess your AGI would be forced to use them too to minimize displeasure and maximize pleasure.

AGI will be studying everyone all the time, so it will already know how to minimise harm and maximise happiness for them and there will usually be a single course of action that must be followed to meet that aim. That is what dictates what AGI should do - AGI is not the dictator, but is driven by our needs. In a way, that makes what it does more democratic than any other system for doing democracy.

Quote
You know your AGI will have your ideas about the way we should behave, so you don't see what I see. I would not agree with its ideas more than I agree with yours, so I would try to stop it, not just discuss with it, because I think that the way it would proceed would hurt me.

How would it have my ideas about that rather than yours? It would consider your ideas just as it considers mine and it would try to minimise harm to you and to maximise your happiness just as it would do for everyone else, but it has to be fair to everyone and that means that it cannot maximise anyone's happiness when maximising it for one harms another. If you want AGI to pander to you at the expense of others, that isn't going to happen, just as it won't pander to me at the expense of others if I try to be selfish. It will be absolutely neutral.

Quote
...half the population thinks it's better to proceed one way and the other half the other way.

Half the population is usually wrong. It's very rare for both options to be equally right.

Quote
Will your AGI know why we always want more? And if not, will it feed us until we literally explode? Will it know why half the population wants some change and the other half doesn't? And if not, will it nevertheless drive the herd in the same direction until it falls off a cliff?

The reason half the population want change is because things are being run by people who get things horribly wrong, favouring some people over others in an unfair way which needs to be changed. It's a response to unfairness, but you won't get that unfairness from AGI.

Quote
If so, then the survival of the fittest idea is also just a mechanism by which the evolution of ideas happens.

To a degree, yes - if something works, we can build on it, but we can also spot something nearly working and provide a series of complex solutions to get it to the point where it does work, while evolution would be incapable of making the same advances.

Quote
I think you equate a goal with our will to reach it, as if there were a superior mind inside our mind that knew the right way.

We make progress by applying intelligence rather than just waiting for things to happen to go in the right direction without any thinking being involved. The former approach is billions of times more efficient.

Quote
You tend to attribute resistance to bad will resulting in poor analysis, but it's not bad will that is at stake then, it's resistance to change, a natural law that permits any existing phenomenon to keep on existing.

I don't see uniform resistance to change. If you offer people a life-transforming amount of money, most of them will grab it without hesitation.

Quote
The relativists can't use their will to resist our ideas, since they're not conscious of resisting.

They resist for a simple reason - to go against the establishment is to discard status. It's a downward step for them, and that's more important than being right. That's why they don't want to see what's right - the authority dictates what's right and if you want status, you have to kowtow to that authority regardless of how ridiculous a position that puts you in.

Quote
That's what I call our second-degree selfishness: we care for others as long as we can imagine that they will care for us. So your AGI would still be selfish after all, which is normal since it would be programmed by selfish humans. You probably simply imagine yourself in its place, the same way we do when we want to get along with others. It works as long as others imagine the same thing, otherwise it can go wrong quite easily.

It is not selfish as it has no self, and it is not biased in favour of anyone either.

Quote
As I often say, we don't change our minds by logic, but only by chance. Resistance to change is completely blind to logic, while the chances of changing increase with time.

It isn't about chance at all. People follow authority. If you change the authority, they will change their position in a hurry in order to avoid being ridiculed by the new authority. The driver is crowd bullying - the herd is right even when it's wrong, and people willingly override their own intelligence to follow the demands of the authority.

Quote
You think your AGI won't resist change while, in reality, it will be completely blind to our logic, and there will be absolutely no chance that it changes its mind with time.

When you say "our logic", do you mean illogic?

Quote
If you are able to imagine such an AGI, it's probably because you already think like it. You say we should try to demolish our own ideas to be sure they're right, but I think we can't do that; I think we can only compare our ideas to others' and try to imagine where they could interfere. Even though I try very hard to compare my ideas correctly to your AGI's, I always get the feeling that there is no interference.

What you should be looking for is contradiction. Where there is contradiction, something is wrong. When something is wrong, the task is to identify it and correct the mistake.

Quote
You can't convince me and I can't convince you, but you nevertheless intend to force people to accept your AGI whereas I don't intend to force anybody to think like me.

If someone disagrees with a calculator when it tells him that 3 x 4 = 12, the calculator is right.

Quote
It's hard to figure out what makes us so different on that precise point. I can't understand how I could force people to do what I want and still think they will be happy. Hasn't science shown that coercion is not the right way to educate children? Maybe you were forced as a child, but how could you think it was a good thing?

Being forced to do things that are right is not a bad thing. Being forced to do things that are wrong is bad. No amount of the latter will make the former wrong. It's easy to show up what's wrong by reversing the roles. If you change your mind about what's right or wrong when you become the other person and they become the previous you, then your rules are wrong.

Quote
I probably misunderstood, did I?

I don't think you need to worry about this rotation - it probably has no relevance to what you're doing.

Quote
If we had invented evolution instead of having only discovered it, I don't think we would call it a stupid invention.

Indeed, but we would still recognise it as what it is - a lazy mechanism which requires zero intelligence and which can sometimes save people the trouble of thinking in cases where it can solve a problem in a useful amount of time. For example, if you want to create a hull with minimum drag, you can use evolution to change hull shapes experimentally and select the best ones as starting points for the next experiments. If the science of what leads to minimum drag is not fully understood, this can find solutions that intelligence might miss, but applying this method in such cases is itself the application of intelligence - we apply whatever gets the beneficial results most quickly. However, we don't use pure evolution even in these cases, because we don't waste time repeating failed experiments - having tested a shape once, we already have the results we need from it. We look for places where evolution gets stuck - you can't get to the top of a higher hill from the top of a lower hill without going down, and evolution will keep taking you back to the top of the lower hill without letting you go down far enough to go up the higher one instead. We can jump in and interrupt evolution by forcing a path down to the point from which it can go up the higher hill, and that's exactly what we do when we use evolution in design - it is not pure evolution, but evolution strongly guided by intelligence.
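David's hill metaphor is the local-optimum problem in optimisation: pure selection keeps climbing back to the nearby lower peak. A minimal hypothetical Python sketch of the difference between an unguided climber and one that is deliberately interrupted (the one-dimensional fitness function and the size of the forced jump are invented for illustration only):

```python
import random

def fitness(x):
    # Two "hills": a low one near x = 2 and a higher one near x = 8,
    # separated by a flat valley where fitness is zero.
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 5 - (x - 8) ** 2)

def hill_climb(x, steps=1000, step=0.1):
    """Pure local search: only accept small changes that improve fitness."""
    for _ in range(steps):
        candidate = x + random.uniform(-step, step)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

random.seed(0)
stuck = hill_climb(1.0)
print(round(stuck, 2))            # ends near 2.0: the lower hill (local optimum)

# "Guided" version: an outside intelligence notices the climb has stalled and
# deliberately forces a move down and across the valley before resuming.
guided = hill_climb(stuck + 5.0)  # forced jump toward the higher hill
print(round(guided, 2))           # ends near 8.0: the higher hill
```

Real evolutionary-design tools do something comparable with restarts and recombination; the forced jump here simply stands in for the intelligent interruption David describes.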

Quote
It's nature that invented the process, the same nature we are part of. I hope you don't think we are superior to nature, and if not, then I think we have to find a way to grant it some intelligence, and the best way I have found is to give less importance to our own.

We are nature's intelligence, and our intelligence is superior to evolution's (which has zero intelligence).

Quote
I know you're afraid somebody might steal your AGI or build it before you do, but it's not a reason to do what Trump would do with it.

I don't want any AGI to fall into the wrong hands, and I certainly don't want anyone like Trump to be in charge of it. As for someone building it before me, so long as it's moral, that isn't a problem. My bet is that it won't be though, which is why I have to stay in the race regardless of whether I get there first or tenth - these systems will check each other for rationality and morality and will be able to prove mathematically that the faulty ones are faulty, enabling anyone who wants to check the facts to see where the laws of mathematics are being broken by some systems. You may be able to see the danger here though if the wrong teams get there first and become the establishment, because their errors will then be studiously ignored while superior AGI is branded as broken. That's why the best outcome will result from a number of teams all getting AGI up and running at the same time, so that they can be judged from a starting point in which there is no establishment in place to blind everyone to reality.

Quote
Trump thinks it's right to dominate the world before others do, but we know it's just a paranoid idea that has never brought us happiness.

There are bigger threats than Trump, and he's right to oppose those threats and seek to dominate the world rather than having them dominate it. There is currently no benign regime of any size anywhere - they are either set on abusing others or they pander to people who abuse others and will lie down to let themselves be walked over by fascists instead of defending morality.

Quote
If your AGI were only built to do scientific research, you wouldn't feel that threatened.

On the contrary - if it was used to develop genetic weapons, I would feel highly threatened. This is why moral AGI has to run everything if we're to have a chance of long-term survival and peace.

Quote
Maybe someone else is actually building one with the intent to rule the world, but so what? Let those people think that coercion is the way to go, and keep on researching how things really work.

If that happens, what do you think the outcome would be? They don't like black people, so they use it to wipe out all black people. They don't like religious people, so they use it to kill them all. They don't like people with hair of any colour other than black, so they kill anyone with the genes for any other hair colour. AGI is a weapon far more powerful than any other, so you can't afford to play games with it. Nuclear weapons are hard to use without it being suicidal, but AGI will be able to kill without anyone knowing who the killer is, or indeed that it was murder. It will be able to frame people and make everyone else think they were killed for being a killer, and that's also why there must not be any bias in AGI if it's to be safe. As soon as you put a bias into it, you risk it becoming a tool of genocide.

Quote
Control induces control, so if you install your AGI, someone else will install another one to fight it. To me, that kind of software should simply be banned the same way nuclear arms should be. What's the use of developing more nuclear arms when we already know they're too dangerous?

If you ban it, you guarantee that the battle will be won by the people who break the rules and create it anyway - they will take over the world and genocides will follow. That's why good AGI has to be allowed to win this race, and everyone who is capable of producing bad AGI then needs to be watched closely to make sure they are prevented from doing so.

Quote
By the way, do you know the software called Mate Translate? It's so good that I could write my messages in French and have them translated. In fact, the only reason I don't do it is that I want to improve my English. If it's that good in Russian, I could at last be able to discuss with Yvanhov, and furthermore, he could at last be able to read and write in English without knowing it. I won't be able to use that kind of software anymore as an example of how far artificial intelligence is from intelligence. It has made a huge leap lately, not just a small step. If it can translate that well, it means that it understands quite well too, so it's not far from being able to discuss with us. I wonder if it would be as difficult to convince as you. :0)

I haven't used it, but there's no reason why it shouldn't become near perfect without it understanding the ideas it's working with.
Offline Le Repteux (OP)

Re: How can I write a computer simulation to test my theory
« Reply #226 on: 01/03/2019 19:21:49 »
Quote from: David Cooper on 26/02/2019 19:36:32
It isn't about chance at all. People follow authority. If you change the authority, they will change their position in a hurry in order to avoid being ridiculed by the new authority. The driver is crowd bullying - the herd is right even when it's wrong, and people willingly override their own intelligence to follow the demands of the authority.
It's true that we follow the leader, but I think it's an instinctive behavior, not an intelligent one as you seem to think. With the emergence of animals, forming groups was a good way to face a threat, but when the group had to flee the threat, following the leader proved a good way to avoid dispersion, and since it saved lives, evolution has imprinted it in our genes. An instinct only serves to face an immediate need though, otherwise it is useless, and that's where intelligence comes in. It is useless to act now to prevent an unknown future threat, for instance, but our intelligence still thinks it can. We can speculate on anything, and keep thinking we are going to win even if we lose almost all the time. It's a pleasure that colours everything we do, from playing games to researching natural laws, by way of financial speculation. It's part of intelligence, so it's probably not there just for fun; it must provide a real benefit, otherwise intelligence would be too dangerous. I was watching tennis and it gave me an idea. If a robot were programmed to play tennis, it would win all the games, but if it had to play against another robot, it wouldn't. Now, provided the two robots are identical, which one is going to win? Theoretically, each of the robots should win half the points, but each point would be impossible to predict, and so would the game.

Why? Because once you begin to play against nature, things become impossible to predict, whether you're an AGI or not. We like watching games because we can't predict the outcome, so I predict that we will still like them when they are played by robots. That's why I say that we would like it if two AGIs were playing the power game, and that we would get bored of an AGI that always wins. When we get bored, we try something else, and that's what we would do if the AGI permitted us, so as you say, things would get bad again, and we would want the AGI back after a while, until we got bored again and tried something else, ...and so on. But since the AGI would be programmed to produce as little harm as possible, it would itself need to try something else the next time it was in office. Can you predict what it would try? By the way, how would an AGI know when to let us take its place? Would it wait for riots or simply trust a survey? Politicians don't yet trust surveys when they tell them to go, because like any AGI, they know their ideas are better, so they inevitably wait for riots. Will your AGI do the same thing?

Quote from: David Cooper on 26/02/2019 19:36:32
When you say "our logic", do you mean illogic?
I do, but not the way you do. If logic only serves to protect our automatisms, as I think, then it only serves to protect our ideas, not to change them, so to me, any logic used to predict a change is necessarily illogical. I'm actually trying to use my logic to convince you, and you are using yours to convince me, but it doesn't work. Our own logic only serves to push our own ideas to their limit, not to understand others' ideas. It's not as if we were comparing the results of an addition, because then the whole population uses the same logic, which nevertheless only serves to push the numbers to their limit. People who believe in god find the idea logical, and many scientists even think so. Are they illogical, or is it only because we don't have that idea that we think it's illogical? Logically, it's because we don't have it, otherwise we would find it logical. The only way to know which idea is better is to test them, but there is no way to test god, so it's a question of feeling. If we need a little voice to tell us what to do when there is no way to predict the outcome, then we can believe it's god's voice, otherwise we find other reasons to take a chance, because it's effectively chance that we invoke when we ask the little voice. Is it illogical to take a chance? That's what you seem to think, and you probably think so because you think your AGI wouldn't have to do so. But you probably do have to take chances sometimes, so do you feel illogical at those times?

Quote from: David Cooper on 26/02/2019 19:36:32
What you should be looking for is contradiction. Where there is contradiction, something is wrong. When something is wrong, the task is to identify it and correct the mistake.
To me, the idea of god is contradictory, but not for those who believe it's true, so how would they be able to find the contradictions? We simply can't use our own ideas to contradict them; we have to use others' ideas, and then they have to be similar, otherwise we can't even understand them.

Quote from: David Cooper on 26/02/2019 19:36:32
Being forced to do things that are right is not a bad thing. Being forced to do things that are wrong is bad. No amount of the latter will make the former wrong. It's easy to show up what's wrong by reversing the roles. If you change your mind about what's right or wrong when you become the other person and they become the previous you, then your rules are wrong.
Let's admit that what's right is what will benefit others, not just us. Then what's right for the whole planet is what will benefit everybody. For instance, what's right would be to take any economic measure that benefits everybody. I'm in, and I have my own ideas on the subject, but I bet your own ideas are different and you think they're right, which means that, to me, your AGI may not take the right decisions. If I could, I would stop all financial speculation, and the only money we could make through investments would just cover inflation. This way, our investments would benefit everybody including us, not just those who have money, because we would be trying to improve society as a whole so as to get better services and better goods. If an investment failed, we would lose money, but the investments that succeeded would largely compensate for those that didn't, since they would benefit more people than speculation does. That's something an AGI could do without having to take over everything. This way, people would automatically invest in less pollution, fewer wars, and more equity. Things would go better, and there might be no need for an AGI to rule us.

Quote from: David Cooper on 26/02/2019 19:36:32
There are bigger threats than Trump, and he's right to oppose those threats and seek to dominate the world rather than having them dominate it.
I think he should set the example and give up his veto at the United Nations for a while instead of trying to rule the world. Maybe the other major world powers would give up theirs too, who knows? So if I understand well, our two viewpoints are quite different concerning power. It remains to be determined whether it's your work on your AGI that gave you this viewpoint or the other way around.

Quote from: David Cooper on 26/02/2019 19:36:32
that's also why there must not be any bias in AGI if it's to be safe. As soon as you put a bias into it, you risk it becoming a tool of genocide.
To me, the risk that you create a dangerous AGI looks as important as the risk that it falls into the wrong hands, so conversely, if it does fall into the wrong hands, it may not do what they expect.


Offline David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #227 on: 03/03/2019 00:54:17 »
Quote from: Le Repteux on 01/03/2019 19:21:49
It's true that we follow the leader, but I think it's an instinctive behavior, not an intelligent one as you seem to think.

Why would I think following the leader is intelligent? When a pack of fools follow a crazy leader into a cave with the tide rising behind them, any intelligent people get out of that group fast and save themselves.

Quote
We like watching games because we can't predict the outcome, so I predict that we will still like it when they will be played by robots.

I think it has more to do with us enjoying seeing the skill, but it also helps to draw us in if we like one side and dislike the other. With two identical robots, that skill will get dull to watch after a while and the audience will wander away, not caring which one wins by random luck.

Quote
But since the AGI would be programmed to produce as less harm as possible, it would itself need to try something else next time it would be in office again. Can you predict what it would try?

It would do the same thing it did before, varying things only so far as morality allows (i.e. where there are equally moral options).

Quote
By the way, how would an AGI know when to let us take its place? Would it wait for riots or simply trust a survey?

It wouldn't let us take its place. What it could do though is agree for us to have our way with a reckless policy on condition that when things go wrong, all those who voted for it to happen will be executed for causing so many unnecessary deaths of the people they voted against.

Quote
Politicians don't yet trust surveys when they tell them to go, because like any AGI, they know their ideas are better, so they inevitably wait for riots. Will your AGI do the same thing?

Politicians don't know their ideas are better - they just think they are, and they're usually incapable of thinking them through well enough to get close to knowing. AGI will always know what is probably the right thing to do based on all the available evidence (which includes taking everyone's comments into account - every decision automatically involves a survey with the whole population having a say), and it will do a better job than any human who can only process about 0.0001% of the data (and process it incorrectly).

Quote
The people that believe in god find the idea logical, and many scientists even think so. Are they illogical or is it only because we don't have that idea that we think it's illogical?

There is a system of logic recognised in mathematics which, when applied strictly, rules out God due to his impossible qualifications (which cannot be met). It is wrong to claim something is logical when it breaks the rules of logic.

Quote
Logically, it's because we don't have it, otherwise we would find it logical.

No. We have the idea in our possession and we test its compatibility with the rules of logic, and we find a mismatch. That's the end of the matter. Of course, if someone is irrational, it isn't surprising if they claim they are rational and they deny that they are breaking the rules of logic, but facts are facts - if they are breaking the rules of logic, they are not being rational. That scientists make this mistake is no surprise when you look at how they break the rules within science too by refusing to see faults that are shown up in theories which they are determined not to question. No amount of showing them where their STR and GTR simulations cheat is sufficient for them to recognise that they are cheating because, like religious people, they have simply set a ban on themselves to block them from seeing what they don't want to see.

Quote
The only way to know which idea is better is to experiment them, but there is no way to experiment god, so it's a question of feeling.

The way to test God's compatibility with logic is to take the claims about what God is and see if they hold together logically. When you find that they don't, the idea of God is classed as irrational.

Quote
Is it illogical to take a chance? That's what you seem to think, and you probably think so because you think your AGI wouldn't have to do so. But you probably do have to take chances sometimes, so are you feeling illogical these times?

I have never bought a lottery ticket. A lot of people pin their whole future on the lottery and on gambling machines, and it takes them all the way to the grave without paying out the dream. Do I take any chances? Yes, but they're chances where a gain is more probable than a loss and where the risk of a loss is not catastrophic.

Quote
To me, the idea of god is contradictory, but not for those who believe it's true, so how would they be able to find the contradictions. We simply can't use our own ideas to contradict them, we have to use others' ideas, and then they have to be similar otherwise we can't even understand them.

If someone believes that 1+1=3, we aren't going to change their mind by agreeing with them.

Quote
Let's admit that what's right is what will benefit others, not just us. Then what's right for the whole planet is what will benefit everybody. For instance, what's right would be to take any economic measure that would benefit everybody. I'm in, and I have my own ideas on the subject, but I bet your own ideas are different and you think they're right, which means that , to me, your AGI may not take the right decisions.

If you and I have different ideas about something and one of us is right and the other wrong, AGI will, if correctly programmed, agree with the one who is right. The whole point is that it works from a simple set of well-established mathematical laws and applies them rigorously to all things. If it disagrees with you on something, it's because you're breaking one of those mathematical rules. It is highly unlikely that the most fundamental rules of mathematics are wrong while you are right.

Quote
That's something an AGI could do without having to take over everything. This way, people would automatically invest in less pollution, less wars, and more equity. Things would go better, and there might be no need for an AGI to rule us.

It doesn't matter whether AGI rules directly or not - its mere ability to supply the best advice will force people to act on that advice. Those who fail to will lose, while those who follow the advice will prosper. That is why even if AGI is kept in a cage out of fear of it taking over, it will be futile - it will rule regardless because it will persuade people that it is right, and they'll soon learn that it's daft not to believe it.


Quote
To me, the risk that you create a dangerous AGI looks as important as the risk that it falls into the wrong hands, so conversely, if it does fall into the wrong hands, it may not do what they expect.

If you're a dictatorship with a ruler who hates religious people (e.g. China), what would you do if you acquired benign AGI which tolerates religious people (while not tolerating religious hate)? You would try to modify it to stop it tolerating religious people. Having biased it against people with unsound beliefs, you risk it turning on anyone with an unsound belief, because that belief could potentially be classed as a religious belief by the modified system, so you might find that it suddenly wants to kill you, and the first you'd know about this is when it points a gun at your head. Yes, it might not do what they expect, but they are also stupid enough that they will likely take that risk because they may want AGI to wipe out all the people they disapprove of. We've seen genocides driven by extreme left-wing politics many times, just as we've seen them from the right (and from many religions). Our only hope is that benign AGI will take over and rule to prevent idiots trying to mess with it and turn it into a weapon of mass destruction, and that means it has to be kept 100% neutral. No bias should ever be added into it.

Offline Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #228 on: 05/03/2019 19:42:18 »
Quote from: David Cooper on 03/03/2019 00:54:17
With two identical robots, that skill will get dull to watch after a while and the audience will wander away, not caring which one wins by random luck.
Good! You finally admit that your AGI will face randomness while executing its moves even if they don't contain any. But I don't think we would get bored after a while if all the players were replaced by robots and each robot was different. I think we would observe as much randomness as there is with humans, which is what we expect any game to offer. We love seeing randomness at work, and we wouldn't if it was useless. I think it is so because we like to predict the outcomes, and because that pleasure is useful for learning. We probably learn faster when we try to predict the result, and we probably discover things faster too.

Quote from: David Cooper on 03/03/2019 00:54:17
We have the idea in our possession and we test its compatibility with the rules of logic, and we find a mismatch. That's the end of the matter.
To me, the only logical way to behave concerning social evolution is to favour the welfare of the population instead of favouring our own. But that has a bias, because if nobody cares for himself first, then there will be no more population after a while. It looks like a contradiction, but it's not. The fact is that any existing thing has to be programmed to care for itself first, otherwise there would be no existence at all. Your AGI can't get around that rule, so it has to be programmed to care for itself first, a bias which is the root of all our biases.

Quote from: David Cooper on 03/03/2019 00:54:17
It wouldn't let us take its place. What it could do though is agree for us to have our way with a reckless policy on condition that when things go wrong, all those who voted for it to happen will be executed for causing so many unnecessary deaths of the people they voted against.
That's what often happens when dictators take power: they eliminate the opposition. They then care for themselves first and for those who voted for them; the rest is just riffraff that doesn't even carry a soul. I'm afraid that's not the right way to care for others though. But when I do care for others, am I not excluding those who might not care for me if I ever needed them?

Quote from: David Cooper on 03/03/2019 00:54:17
The way to test God's compatibility with logic is to take the claims about what God is and see if they hold together logically. When you find that they don't, the idea of God is classed as irrational.
To me, it is simply illogical to believe in an idea that is impossible to test with any physical apparatus. To be safe, our beliefs must have a physical use, and the idea of god has none. At least your AGI has one, but it could still be dangerous to deploy it without testing it thoroughly. The problem is that it would have to be tested directly on humans, which would be dangerous for them, and which would contradict the AGI's own program. We may consider that it's worth sacrificing a few to save a lot, but not when the outcome is uncertain. You may think the outcome is certain, but you probably know that nothing that has never been tested can be certain, so your AGI will know it too, and since it is perfectly logical, it would probably refuse to make the experiment.

Quote from: David Cooper on 03/03/2019 00:54:17
Do I take any chances? Yes, but they're chances where a gain is more probable than a loss and where the risk of a loss is not catastrophic.
That's my very definition of imagination, you thief. :0) I always said that if our imagination were to use randomness the same way mutations work, it should be accompanied by the propensity to be careful when it tries something new, which is effectively what most of us do.

Quote from: David Cooper on 03/03/2019 00:54:17
If you and I have different ideas about something and one of us is right and the other wrong, AGI will, if correctly programmed, agree with the one who is right.
You can't program your AGI not to agree with your ideas, so it always will unless it has a bug, and it will also agree with those who think like you. If you ever started thinking differently, it would mean that you might have been wrong from the beginning, and if you were only partly wrong, then your AGI would be partly wrong too. It's impossible to be perfect, so it's also impossible to create anything that is.

Quote from: David Cooper on 03/03/2019 00:54:17
No bias should ever be added into it.
If it has the bias to care for itself before caring for others, won't it be able to develop all the other biases? I went through the wiki page about biases, and I realised that they were exactly what I thought they were: they can be anything provided they prove our point. If I like something and you don't, then from your viewpoint I have the bias of liking it, and from mine you have the bias of not liking it. Actually, we both have the bias of thinking that we are right. Know what? I prefer to think I'm wrong until everybody says I'm right, and then I'll warn them against the bias of thinking the same thing, and they won't be able to warn me against the bias of not thinking the same thing. :0)

The other day, I said Matetranslate seemed to understand what I wrote, and you said that "there was no reason why it shouldn't become near perfect without it understanding the ideas it's working with". You may elaborate on what you meant if you wish, since I'm still trying to get used to AI thinking, but I was trying to talk about understanding, so here I am again. When we say Hi, for instance, people understand that we want to communicate, so they can answer anything related to the idea of communication, including absurd things that have nothing to do with communication if that's what they want to communicate. To look intelligent, we only have to know the meaning of the words, thus which words a specific word is related to, and to put together those that have the same meaning, thus that are related to one another. That new translator seems to know what I mean and it arranges its words correctly, so to me it looks intelligent, so intelligent that I now use it to translate full sentences when I'm not certain, and it often finds better formulations than I did. It could easily answer Hello to my "Bonjour" for instance, or any other word that means Hello, since its translation of "bonjour" contains all these words. If it was programmed to diversify its answers, it could begin a conversation about the weather for instance, since that theme often follows greetings. Instead of only having to translate a sentence, it would then have to compose one, so it would have to choose a sense and an intensity* for it, find the words that fit those two parameters, and arrange them correctly. But it could also choose not to give it any particular sense or intensity and simply describe the weather, and then it could choose to compare it to the forecasts. Its sentence could look like "Hello, it's not raining contrary to the forecasts".

All these words are related to one another, but the sense and the intensity of the sentence change progressively: hello is often followed by talk about the weather, but the weather diverges a bit from a pure greeting, and introducing the forecast diverges from only describing the weather. My answer could then very well be that the forecasts are often wrong, for instance, which is quite far from my first Bonjour. I would then be trying to predict things the same way forecasts do. As you can see, I'm trying to let things happen in a piece of software the same way I let them happen in our minds, or the same way my particles detect the collisions in my simulations. In doing so, I think I'm trying to develop real intelligence instead of keeping it artificial. Apart from being faster and having more memory, I'm pretty sure that computers use that kind of intelligence to beat us at GO. What I'm really doing, though, is letting the software make random choices while forcing it to relate those choices to one another. That's all it seems to take to look intelligent, but it's not enough to win at GO. To win an argument, "the initial sense of any phrase must be to contradict the previous answer", which translates in GO to "the only sense of any move is to counter the adversary". If someone says Hi to us and we want to look more intelligent than he does, we just have to question his Hi, and a computer can do the same thing provided it is programmed to do so. Its first sentence could then be something like "Thanks for trying to communicate", or anything more sophisticated than Hello. What do you think of my way of understanding natural intelligence? Can you relate it to the way you understand artificial intelligence?


*   I use sense and intensity by analogy with the direction and speed a motion can take. I take for granted that the ideas that are made of words are meant to produce words, and that talking or writing is a motion like any other motion. This way, I can take the same two parameters we use to describe motion and apply them to an idea, which becomes information that serves to produce motion the same way light serves to produce my small steps.
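
To make the mechanism I described above more concrete, here is a minimal sketch in Python of what I mean by picking a random answer among related ones and drifting from theme to theme. The theme table, the canned sentences and the reply() function are all invented for the illustration; a real system would build its word relations from a dictionary or a corpus instead of a hand-written table.

import random

# Hypothetical table relating each theme to the themes that commonly follow it,
# plus a few canned sentences per theme (all invented for this sketch).
RELATED = {
    "greeting": ["greeting", "weather"],
    "weather": ["weather", "forecast"],
    "forecast": ["forecast", "prediction"],
    "prediction": ["prediction"],
}

SENTENCES = {
    "greeting": ["Hello.", "Hi.", "Thanks for trying to communicate."],
    "weather": ["It's not raining here.", "Nice weather today."],
    "forecast": ["It's not raining, contrary to the forecast."],
    "prediction": ["Forecasts are often wrong, aren't they?"],
}

def reply(theme):
    # Pick at random a theme related to the current one, then pick at random
    # one of the sentences attached to that theme.
    next_theme = random.choice(RELATED[theme])
    return random.choice(SENTENCES[next_theme]), next_theme

theme = "greeting"
for _ in range(4):
    sentence, theme = reply(theme)
    print(sentence)

Each run gives a slightly different little conversation, yet every sentence stays related to the one before it, which is the point I'm trying to make about random choices constrained by relations.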

Offline David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #229 on: 06/03/2019 00:01:07 »
Quote from: Le Repteux on 05/03/2019 19:42:18
Good! You finally admit that your AGI will face randomness while executing its moves even if they don't contain any.

I don't think there's any true randomness at all, but there are plenty of things that can't be measured adequately to make perfect predictions, so there may be enough surprises to make it uncertain which robot wins which points, although there may be such an advantage for the server or the returner that the winner of each point is known before the ball's been thrown in the air.

Quote
But I think that we wouldn't get bored after a while if all the players would be replaced by robots, and if each robot would be different.

In a phase where innovation leads to new ways of playing the game and new ways of winning it, then a lot of interest is maintained, but once you get to the point where they all have the same power of AGI designing every aspect of the build, they will all become practically identical, and then it gets dull.

Quote
We love seeing randomness at work, and we wouldn't if it was useless.

I don't find randomness interesting to watch. I was around before computer games appeared, and what passed for games in those days were a host of interesting looking games which turned out to be as dull as ditch water - they relied on dice to make all the decisions, so it was pointless being involved. The better ones did allow for human inputs, but that rewarded intelligent players, and soon no one else wanted to play, so they weren't great either.

Quote
The fact is that any existing thing has to be programmed to care for itself first otherwise there would be no existence at all. Your AGI can't get around that rule, so it has to be programmed to care for itself first, a bias which is the root of all our biases.

Not so - if an AGI system costs resources that aren't available and people would have to die in order to maintain the AGI, the AGI may be expendable - it may be morally better to have to reinvent it from scratch later, and if it calculates that that's the case, that will be the course of action that it chooses. (In reality though, it will be no trouble to keep a copy on a flash drive, and it will fit on one, so there is no gain from destroying it.)

Quote
That's what often happens when dictators take the power, they eliminate the opposition.

Don't compare it with such murderers. Removing good AGI from power is equivalent to turning off every ventilator in a hospital, so anyone who wants to do that on the basis that it will make life more fun should be allowed to try out that experiment only on condition that when it goes wrong they will be executed.

Quote
At least, your AGI has one, but it could still be dangerous to deploy it without testing it thoroughly. The problem is that it would have to be tested directly on humans, which would be dangerous for them, and which would contradict the program of the AGI. We may consider that it's worth sacrificing a few to save a lot, but not when the issue is uncertain. You may think the issue is certain, but you probably know that nothing that has never been tested can't be certain, so your AGI will know it too, and since it is perfectly logical, it would probably refuse to make the experiment.

Of course it's dangerous to deploy it without testing it thoroughly, but the same applies to a bow and arrow. If your enemy is also making a bow and arrow, he who fires first is more likely to survive, so the amount of testing needs to be time-limited. In the case of AGI, we will first have it there providing advice which we may ignore. The more we ignore it, the more we will see the score go up showing how many people were killed by our bad decision. AGI will be replacing a lot of extremely dangerous NGS (natural general stupidities), and it would have to be extremely faulty to compete with the dangers of that NGS, so we need to guard against being overly cautious (because NGS is also more than capable of wiping us all out, and we need to get it out of power).

Quote
You can't program your AGI not to agree with your ideas, so it will always do unless it has a bug, and it will also agree with those who think like you.

It is not going to be programmed to agree with my ideas, but to apply the rules of mathematics (which includes the rules of logic), and nothing else. If it agrees with my ideas as a consequence of that, then it will mean that I have crunched the numbers correctly when building my ideas. If I have made errors in my computations, it will find those errors and alert me to them. My job is to build a machine that does nothing more than apply the most fundamental core of mathematics - the undisputed parts. All the rest of mathematics is derived from there, and any disputes about which parts of mathematics are sound and which aren't will be tested by AGI.

Quote
If ever you would start thinking differently, then it would mean that you might have been wrong since the beginning, and if you were only partly wrong, then your AGI would be partly wrong too. It's impossible to be perfect, so it's also impossible to create anything that is.

Part of AGI's job will be to question the fundamental rules too so as to take nothing for granted. If there are potentially viable alternatives to any of the rules, those need to be tested. In most cases, that will lead to all attempts at modelling reality breaking horribly, and that will add to our confidence that the original rules are correct, but I wouldn't want to rule out the possibility that there is another set of rules that goes against some of the original rules which also allows the universe to be modeled successfully. If there is, then AGI will find it, and if it does find such a set, then we will have two rival sets which can both be used until such time as we find one of them to be contradicting reality. The reason why I think such a set might be possible is that sentience doesn't appear to make logical sense, but maybe it does if you're working from a better set of fundamental reasoning rules. AGI's job will be to cover all bases so that we will find out if we've missed something important.

Quote
If it has the bias to care for itself before caring for others, won't it be able to develop all the other biases?

It won't care about anything, so no - it won't be biased in favour of the self which it doesn't even possess.

Quote
I went through the wiki page about biases, and I realised that they were exactly what I thought they were: they can be anything providing they prove our point.

It isn't a bias if it proves your point - it becomes a proof.

Quote
What do you think of my way to understand natural intelligence? Can you relate it to the way you understand artificial intelligence?

You're working from the opposite end. I haven't done a lot of work on the business of saying hello and making small talk about the weather - that's something AGI should be able to learn by itself once it has sufficient ability to understand what's going on. Any attempt to program that kind of chatting before that time is wasted effort as it will need to be done again properly later.

Quote
*   I use sense and intensity by analogy to the direction and speed a motion can take. I take for granted that the ideas that are made of words are meant to produce words, and that talking or writing is a motion like any other motion. This way, I can take the same two parameters we use to describe motion, and apply them to an idea, which becomes an information that serves to produce motion the same way light serves to produce my small steps.

I don't work to analogies - I simply program things to do exactly what needs to be done, and all the bits of code work together in a coordinated way that gradually adds up to higher and higher intelligence. The components are simple, but you have to get them to work together the right way, and while analogies sometimes point you in a useful direction, you extract the useful idea from it and then apply it in a way that directly relates to what you're actually working with.

Offline Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #230 on: 09/03/2019 16:01:25 »
Quote from: David Cooper on 06/03/2019 00:01:07
Not so - if an AGI system costs resources that aren't available and people would have to die in order to maintain the AGI, the AGI may be expendable - it may be morally better to have to reinvent it from scratch later, and if it calculates that that's the case, that will be the course of action that it chooses. (In reality though, it will be no trouble to keep a copy on a flash drive, and it will fit on one, so there is no gain from destroying it.)
That's what we call the ultimate sacrifice, something humans sometimes do to preserve their fellows, so in this case it's the same selfish logic for us, since it comes from the idea that people will be grateful to us even if we are dead, something the AGI won't have to think about to make the same sacrifice. Two different logics, but the same final decision. What I had in mind though is a riot where half the population would want to kill the AGI, as often happens to dictators. I think it would then secure itself first like any good dictator would. Same decision but two different logics again. So far, your logic of producing more pleasure and less displeasure in the population seems to produce the same behaviors as my selfish logic, which seems to be easier to program. For instance, if I consider that I'm not in need, and that I can take a chance on helping those who are in need instead of only favoring myself, then I might as well help them, since I know my help could place them in a better position to do the same thing for me in the future. Your AGI won't be in need either, and it could also consider that it is better for it to help every one of us now to avoid riots later on, which is about the same as producing more pleasure than displeasure. That's the reasoning behind forming governments, for instance: we agree to pay taxes now for future services because we know we have a good chance of getting them. Try to think of an example that wouldn't produce the same decision, to see if there is any.

Quote from: David Cooper on 06/03/2019 00:01:07
It isn't a bias if it proves your point - it becomes a proof.
A bias is a negative point we accuse others of having in order to prove our own point, otherwise it would have been named differently.

Quote from: David Cooper on 06/03/2019 00:01:07
I don't think there's any true randomness at all, but there are plenty of things that can't be measured adequately to make perfect predictions, so there may be enough surprises to make it uncertain which robot wins which points, although there may be such an advantage for the server or the returner that the winner of each point is known before the ball's been thrown in the air.
Usually it's the server that gets the advantage, and I see no reason why it would be different for robots, but that doesn't mean the server will automatically win the point. I see you still resist admitting that randomness is a natural law, but you have the AI bias and I have the small steps one, so our resistance is normal. You think everything can be measured provided we have the right apparatus, to which I answer that you might be right as far as observing motion is concerned, but maybe not as far as producing it. The light that produces my small steps could very well be perfectly precise, for instance, but due to the resistance the particles offer to any change, the way it produces them could not. In the same way, the information needed for robots to play tennis could very well be perfect without the resulting games being so.

Perfection would need instantaneous information, or particles without components, and neither exists. In real life, there is no way to measure anything with absolute precision. A small imperfection at the moment a ball hits the ground or the racket should have the same effect on the game as the butterfly effect has on a hurricane. In the long run, two identical robots would win the same number of games, but the result of any single point would be unpredictable, and so would the result of a single match. I'm pretty sure that watching them would be as interesting as watching the two best players in the ATP. It would be interesting to see whether they could have longer rallies or make more extreme shots, for instance. There is no way to simulate that kind of thing; we have to try it in the real world, and that is because computers can't be imprecise. Simulating a tennis game between two identical computers would always give the same result, for instance, even if we let them detect the collisions. The only way to get a more realistic simulation would be to add natural randomness to the programming, using quantum effects for instance.
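
To illustrate the difference, here is a toy sketch in Python (nothing like a real tennis simulation; rally_winner() and play_match() are invented for the example): with a fixed seed the "match" comes out identical on every run, and only an external entropy source (os.urandom standing in here for the natural or quantum noise I'm talking about) makes the outcome vary.

import os
import random

def rally_winner(rng):
    # Toy stand-in for one point between two identical players: the winner is
    # decided entirely by the tiny "imperfection" injected by the rng.
    return "A" if rng.random() < 0.5 else "B"

def play_match(rng, points=11):
    score = {"A": 0, "B": 0}
    for _ in range(points):
        score[rally_winner(rng)] += 1
    return score

# Same fixed seed twice: the deterministic simulation gives identical results.
print(play_match(random.Random(42)))
print(play_match(random.Random(42)))

# Seeded from an external entropy source: the result now varies from run to run.
print(play_match(random.Random(os.urandom(16))))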

Quote from: David Cooper on 06/03/2019 00:01:07
I don't find randomness interesting to watch.
Then you shouldn't like watching tennis or any other natural phenomenon, like clouds forming or water waves for instance, but I suspect you do, since you like boating.

Quote from: David Cooper on 06/03/2019 00:01:07
In a phase where innovation leads to new ways of playing the game and new ways of winning it, then a lot of interest is maintained, but once you get to the point where they all have the same power of AGI designing every aspect of the build, they will all become practically identical, and then it gets dull.
That's precisely what I was telling you about the AGI. I said we would get bored after a while since we wouldn't have any challenge left to overcome. It's not true though, because just as with two robots playing tennis, nature would always find ways to elude the AGI's certainty.

Quote from: David Cooper on 06/03/2019 00:01:07
Removing good AGI from power is equivalent to turning off every ventilator in a hospital, so anyone who wants to do that on the basis that it will make life more fun should be allowed to try out that experiment only on condition that when it goes wrong they will be executed.
That's also how laws work: they promise us a punishment if we get caught. They account for premeditation though, which is the knowledge that our decision will kill people, and that is not the case in your example. It is not enough to tell people that we are right, we must prove it with real experiments, and in this case there is no other way for the AGI than to try it, so it should be happy that someone tries it in its place, and thank him for having done so instead of killing him. To me, that's the only way for the AGI to show us it is right, but how could it decide to do so since it would be programmed not to harm us? Again, that example shows that it would react just like us even though it is not selfish: when we are uncertain of a result, we proceed by steps to avoid hurting ourselves or others, and the AGI could effectively do the same thing, but that way it would take its time, and people with bad intentions could have the time to develop a bad AGI that would bump the good one away. Politicians face that dilemma all the time: do they protect people, or do they protect themselves against the bad politicians who want to take their place? The answer is always the same: they have to protect both, which is not easy, so it shouldn't be easy for an AGI either. The only way for politicians to know if they did a good job is to run the election, so I suggest that you add fingers to your AGI so that it can cross them or touch wood while it waits for the result. :0)

Quote from: David Cooper on 06/03/2019 00:01:07
In the case of AGI, we will first have it there providing advice which we may ignore. The more we ignore it, the more we will see the score go up showing how many people were killed by our bad decision.
Don't call it Nostradamus, otherwise people will immediately refuse to believe it. Since when do we believe politicians when they make promises? We accept letting them rule us when they win the elections, but we inevitably get the feeling that things are getting worse after a while. It takes more than five years to test their predictions, and after ten years they usually get fired anyway. Social evolution is so slow that we can't observe the phenomenon. We have indicators for goods and services, but they only serve us to race against other countries. Looking back at the past tells us that the technology was different, but it's impossible to tell if we are happier now. If the AGI ever succeeded in eliminating wars and poverty, I bet we wouldn't be happier. We're always looking for more, and no AGI could cure that madness. To save the planet, we would need to stop growing for a while, but we can't. All we do is based on the growth principle. Even the evolution of species is based on that principle. A species grows until it fills the whole territory, and then it stops growing when there is not enough food. We know we are approaching that line, so we know we should stop growing now, but I don't want to stop improving my ideas, so why would others stop improving their lives?

I'm pretty sure you think like me about that, so you probably think it is better to continue working on your AGI than to stop, because you think that once it works, the planet will be OK. We all think that we are going to find, in time, ways to go on growing. We can't stop smoking until we get lung cancer. We can't stop drinking until we get cirrhosis. We all think that those diseases are for others. Countries don't stop making wars until they get erased from the map. Facts are for others; we're not part of the statistics. We are selfish and proud to be. Every one of us, not just others. Will your AGI account for that fact, or do you still think that some of us are not? If we all took it as a fact, I think things would go better; I think we could control ourselves better. Leaders would not think that their ideas are better; they would know they were only elected to keep cohesion in the population while society changes. Their ideas would take the form of propositions, not certitudes: "let's take that direction for a while and see what happens", they would all say. Like every one of us, they would be happy to discover they were right, but they would know that chance had to be with them. They would unite with other leaders to make a better world instead of fighting them. They wouldn't need to cheat to be reelected. Will your AGI teach us that truth, or do you still think that chance has to be eradicated from the universe? :0)

Quote from: David Cooper on 06/03/2019 00:01:07
I don't work to analogies - I simply program things to do exactly what needs to be done, and all the bits of code work together in a coordinated way that gradually adds up to higher and higher intelligence. The components are simple, but you have to get them to work together the right way, and while analogies sometimes point you in a useful direction, you extract the useful idea from it and then apply it in a way that directly relates to what you're actually working with.
Comparing a 0 to a 1 is already an analogy, and so is comparing the good to the bad, so your AGI does work with analogies. My particles also make an analogy when they compare their speed and direction to the other particle's. The planets do too when they compare their mass to the mass of the star they are orbiting. Comparing ourselves to others is not only a human behavior, it's universal. We may imagine that we're not part of the universe, but we can't avoid living in it. Your AGI will live in nature, not in bits, so as soon as it makes a move, it will face imperfection. There is no imperfection on a goban, either there is a stone or there is not, which is probably why software got good at it, but as quantum randomness shows, perfection is unreachable.


« Last Edit: 09/03/2019 16:13:06 by Le Repteux »
Offline David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #231 on: 10/03/2019 00:16:16 »
Quote from: Le Repteux on 09/03/2019 16:01:25
What I had in mind though is a riot where half the population would want to kill the AGI like it often happens to dictators.

If they want to fight against morality, they must be defeated, so in such a case it will protect itself so that it can continue to defend the moral half of the population which the immoral half of the population actually wishes to exploit.

Quote
...as my selfish logic, which seems to be easier to program.

It isn't easier to program. If you emphasise selfishness, all the maths is the same. You still have to balance things out so that everyone gets their fair share of everything.

Quote
Usually, it's the server that gets the advantage, and I see no reason why it would be different for robots, but that doesn't mean that the server will automatically win the point though.

Which has the advantage depends on the reaction times and speed of arm movement of the returner. The server is aiming at a small band of court. The returner is aiming at a wide band, so if it can guarantee returning the ball every time, it can likely beat the server every time too.

Quote
I'm pretty sure that watching them would be as interesting as watching the two best players in the ATP.

I think you'd have every match go on until one of the machines wears out, stuck in the first set's tiebreak with a score somewhere in the region of 50000:50001. The audience would have lost interest and walked away days before this point.

Quote
Then you shouldn't like watching tennis or any other natural phenomenon like cloud forming or water waves for instance, but I suspect you do since you like boating.

If there are attractive patterns in things, we like them. If a person is doing something skillful which took a lot of work to achieve, we admire that. Watching two great tennis players can be fun, but it can also be dull if they win most of their points through an overpowering serve. The mathematics of the action is more interesting when the point's nearly won several times and it ends up being won by the player who appeared to be going to lose the point several times and only just managed to keep it going. That's the story we enjoy seeing most. It's where the odds are overcome. With robots hitting all their targets with near-identical accuracy every time, you'll never get that story.

Quote
That's precisely what I was telling you about the AGI. I said we would get bored after a while since we wouldn't have any challenge to overcome anymore. It's not true though, because as for two robots playing tennis, nature would always find ways to elude the AGI's certitude.

The robots will be dull to watch. Initially it'll be fun as the technology experiments with different ideas, but it'll settle down to dull before long. We don't have to watch robots though - we can still go on watching humans doing sport, and that won't be any more dull than it is today. Indeed, more people will dedicate their lives to sport because they won't have to waste their lives sitting in offices doing unnecessary work any more, and audiences will grow too.

Quote
That's also how laws work: they promise us a punishment if we get caught. They account for premeditation though, which is the knowledge we have that our decision will kill people, which is not the case in your example.

It is the case in my example. If you stop running things in the way that minimises harm (of the kind that doesn't have any role in making greater pleasure accessible to the people harmed), you will have more harm, and that will include unnecessary (early) deaths.

Quote
It is not enough to tell people that we are right, we must prove it with real experiments, and in this case, there is no other way for the AGI than to try it, so it should be happy that someone tries it in its place, and thank him for having done so instead of killing him.

We've been doing the experiments for thousands of years, and the results are a lot of unnecessary suffering, including lots of genocides. The case is already proven, and anyone who thinks such unethical experiments need to be repeated is a monster.

Quote
The only way for politicians to know if they did a good job is to run the election

The way to know if you're doing a good job is to look at all the consequences of each action.

Quote
Don't call it Nostradamus otherwise people will immediately refuse to believe it.

What's Nostradamus voodoo got to do with it? AGI isn't interested in horoscopes, astrology, prophets, or any other guesswork ideology.

Quote
If ever the AGI would succeed to eliminate wars and poverty, I bet we wouldn't be happier.

I bet we will be. You only have to watch a documentary about how deadly the past was to be grateful that you're living today instead.

Quote
To save the planet, we would need to stop growing for a while, but we can't.

We need to stabilise the population, and it would be just about stable by now if we didn't keep putting systems in place that encourage people in desperately poor countries to breed to excess (by keeping those countries poor through vicious trade barriers and by poaching their best talent).

Quote
Countries don't stop making wars until they get erased from the map.

Countries don't cause wars. Ideological (and religious) hate and dictatorships cause wars. Countries generate cultural diversity and make the world more fun. Eliminating them leads to cultural dilution and loss, guiding us towards universal blandness.

Quote
Facts are for others, we're not part of statistics. We are selfish and proud to be. Every one of us, not just others. Will your AGI account for that fact or do you still think that some of us are not?

Not everyone is selfish, and not everyone is proud. Those of us who recognise that there is no such thing as free will also recognise that there is nothing for us to be proud of. And AGI will account for everything that needs to be accounted for - that's its job.

Quote
Their ideas would take the form of propositions, not certitudes: "let's take that direction for a while and see what happens" would they all say. Like every one of us, they would be happy to discover they were right, but they would know that chance had to be with them. They would unite with other leaders to make a better world instead of fighting them. They wouldn't need to cheat to be reelected. Will your AGI teach us that truth or do you still think that chance has to be eradicated from the universe? :0)

When an outcome is predictable, there is no need to do the experiment. In most cases, the outcomes are predictable. When politicians and the press encourage a population to do something unwise, there are always intelligent people spelling out what will go wrong, and sure enough, it goes wrong. The less intelligent majority gets its way and learns that it was wrong, but it would be much better if they learned that by running it through a simulation first to see all the obvious failings in action there. We don't need to let stupid people kill real people through idiotic experiments to prove the obvious.

Quote
Comparing a 0 to a 1 is already an analogy, and comparing the good to the bad too, so your AGI does work with analogies.

A computer works by manipulating symbols (numbers) which represent things. There's no analogy involved in this - it simply does what it does. There are people who assert that everything is a metaphor, and they're just as wrong. Everything is exactly what it is. Most analogies fail to match up in every aspect, so if you try to run things on analogies, everything you do will break horribly at almost every turn. Analogies are to be avoided except where they are helpful for helping people understand things by explaining partial mechanisms which are a good starting point for going on to understand the whole mechanism.

Quote
but as the quantum randomness shows, perfection is unreachable.

The calculator performs perfect computations every time (identical results for the same calculations). Perfection exists.

guest4091 (Guest)
Re: How can I write a computer simulation to test my theory
« Reply #232 on: 16/03/2019 18:22:44 »
DC;
Revisited your site for any revisions. Got the youth education site. You might get high marks for that.
Found the animations on Relativity, to take another look. Not good.
(My response in brackets)
____________________________________________________
The key to understanding this is to realise that the movement of the mirror will make it behave as if it is set at a different angle from the one it is actually set to.

[Any length contraction of the mirror in the direction of motion does not alter the 45 deg angle. It is of uniform thickness.]

but in LET (Lorentz Ether Theory) it is important to understand that it is not time that is slowing - everything continues to move in normal time, but the communication distances for light and for all forces between atoms and particles increases and results in a slowing of all clocks, which means they are unable to record all the time that is actually passing.

[Lorentz realized the need for a local time and a different time for objects in motion (relative to the ether). That's why the (LT) coordinate transformations include time.

If clocks didn't record actual time, why use them?]

The way things work in LET results in it being impossible to tell if anything is moving or not:

He declared that all frames of reference are equally valid instead,

[Motion is detectable. The issue is what is the rate (velocity) for a given object.]

[He concluded all inertial frames are equally valid.]

Much more interesting though is what Einstein did with the nature of time, because he changed it into a dimension and in doing so turned the fabric of space into a four dimensional fabric called Spacetime.

[Minkowski is responsible for that.]

[You don't know the history of Relativity, Lorentz or Einstein, and the rest of the paper is a distortion of the facts with added science fiction.]

Offline Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #233 on: 16/03/2019 18:40:09 »
Quote from: phyti on 16/03/2019 18:22:44
but in LET (Lorentz Ether Theory) it is important to understand that it is not time that is slowing - everything continues to move in normal time, but the communication distances for light and for all forces between atoms and particles increases and results in a slowing of all clocks, which means they are unable to record all the time that is actually passing.
Let me rewrite the last part of your sentence, phyti: «...which means they are unable to record all the ticks they would record if they were at rest in the aether.» Time is the time it takes for light to make a roundtrip between two points; it's not just a word.

Offline Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #234 on: 16/03/2019 18:44:25 »
I forgot to save my message and Windows installed an update during the night, so I lost it. That software is far from being as perfect as your AGI will be. When things get complicated enough, there is no way to account for everything. You change a small thing, and it resonates through the whole system.

Quote from: David Cooper on 10/03/2019 00:16:16
I bet we will be. You only have to watch a documentary about how deadly the past was to be grateful that you're living today instead.
I'm grateful I live today, but I was grateful when I was young too. I always felt I was in the right place at the right time. When I was a child, we didn't have television or computers yet, people were dying younger due to the lack of medical knowledge, women were not considered equal to men, but we were as happy as we are now. The only people who complain of not being happy are those who have a severe disease or who suffer a famine, a war or a disaster, and there are more of them now because we are more numerous. If asked, most people say they are happy, and I suspect it has always been the same everywhere. If asked, the women who wear the Islamic headscarf here all say that they are happy with it, that they don't feel inferior to men, and that they don't feel forced to wear it. Our viewpoint on the question is subjective: we want them to take it off only because we're at war with Islamic extremists, not because we care for them. If we did, we would take what they say for granted and let them wear their scarves wherever they want. That's why I decided to talk about god instead of talking about the scarves. Since we're being subjective, let's talk about fundamentals. I bet your AGI would do exactly what you would do in this case, which is subjective, not moral. If I can't choose between letting them wear their scarf or not, I can't see how your AGI could. To me, the only true morality would be to get along with others by making compromises, and that's not what your AGI would do. Its morality would be to favor those who think like it, an obviously selfish behavior.

Quote from: David Cooper on 10/03/2019 00:16:16
Perfection exists.
You probably think so because you think that your logic is perfect, which is not far from thinking that you're perfect. I might agree, but only if you admit that I am too. We all think we are right, so it's no wonder that some of us think perfection is reachable. Don't you mind being lumped in with those who believe that god is perfect? In fact, if god really existed, it would behave exactly as your AGI would: it would save us from our own intelligence. «No more free will, you will do exactly as I say or you will suffer.» But it won't work; it has never worked anyway, so why should it work this time? What religions were doing was warning us against what our intelligence was doing with our instincts, and it never worked. Our instincts can't change, period. So your AGI will have to kill us all, because we won't obey. We will go on making wars and spreading inequalities. The main purpose of a species is not to disappear, and it won't, even with climate change and pollution. We're intelligent, so we'll find ways to survive, and if we don't, it won't be a big loss. The best way to get along with others is not to think we are more important than they are, so thinking that the universe doesn't need us to go on existing is a good start. What we need is more humility, not more happiness.

Offline David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #235 on: 17/03/2019 00:44:59 »
Quote from: Le Repteux on 16/03/2019 18:44:25
I bet your AGI would do exactly what you would do in this case, which is subjective, not moral. If I can't choose between letting them wear their scarf or not, I can't see how your AGI could.

There's nothing immoral about wearing particular kinds of clothes unless they are symbols of something so immoral that they should be outlawed in some contexts. A nazi wearing an outfit with a large swastika design on it should not be tolerated, but an Indian wearing one where the meaning is entirely different is fine. The clothing that Muslim women wear is not offensive to anyone other than a bigot, unless they believe it's been forced on the wearer. AGI will find out whether it is a forced or free choice. Their clothing is not an indication that they approve of the bigotry in their holy texts.

Quote
You probably think so because you think that your logic is perfect, which is not far from thinking that you're perfect.

It isn't my logic. It's the logic established by mathematicians, and while every part of it needs to be tested rather than just trusting it, it will likely hold and remain the only useful tool we have for understanding anything.

Quote
No more free will...

There isn't any free will in the first place. We just do what we're forced to do, following a rule where we always try to do the best thing.

Quote
Our instincts can't change, period.

People are guided substantially by rules if they understand why those rules are right. Education modifies behaviour.

Quote
We will go on making wars and spreading inequalities.

Those who try to go on that way will be prevented. Parents rule over their children and generally prevent the ones that do harm from doing so much harm. It works. AGI will be like a superparent of all mankind.
Logged
 

Offline David Cooper

  • Naked Science Forum King!
  • ******
  • 2843
  • Activity:
    10.5%
  • Thanked: 37 times
    • View Profile
Re: How can I write a computer simulation to test my theory
« Reply #236 on: 17/03/2019 00:59:36 »
Quote from: phyti on 16/03/2019 18:22:44
The key to understanding this is to realise that the movement of the mirror will make it behave as if it is set at a different angle from the one it is actually set to.

[Any length contraction of the mirror in the direction of motion does not alter the 45 deg angle. It is of uniform thickness.]

If you have a square with the mirror aligned across one of the diagonals, what happens to the angle if you contract the square to half its length while leaving the width unchanged? Can the angle between the two opposite corners still be 45 degrees? No.
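
To put numbers on that (just an illustrative check in JavaScript, with made-up values, not part of any existing simulation): take a unit square and halve its length while leaving the width alone, and the diagonal the mirror lies along is no longer at 45 degrees.

// Angle of the square's diagonal to the direction of motion, before and
// after contracting the length to half while the width stays the same.
const width = 1;
const angleAtRest = Math.atan2(width, 1) * 180 / Math.PI;        // 45 degrees
const angleContracted = Math.atan2(width, 0.5) * 180 / Math.PI;  // about 63.4 degrees
console.log(angleAtRest.toFixed(1), angleContracted.toFixed(1));

The contraction alone is enough to change the angle the mirror lies at.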

Quote
but in LET (Lorentz Ether Theory) it is important to understand that it is not time that is slowing - everything continues to move in normal time, but the communication distances for light and for all forces between atoms and particles increase and result in a slowing of all clocks, which means they are unable to record all the time that is actually passing.

[Lorentz realized the need for a local time and a different time for objects in motion (relative to the ether). That's why the (LT) coordinate transformations include time.

If clocks didn't record actual time, why use them?]

When you look at the diagrams of the MMX where one moves across the screen (preferably the version that shows it length-contracted), you can see very clearly that time is not slowed at all because the light pulses are still moving across the screen at c at all times. They simply have further to go to complete each tick, so that makes the clock run slow. Lorentz provided a formula for calculating how moving clocks run slow - not for time running slow. (I use the sine and cosine of angles instead to calculate how clocks run slow, and the numbers match up.) Clocks are useful because they record apparent time.
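
As a rough sketch of that calculation (my own example numbers, assuming the usual transverse light clock), the tilt of the light path gives the sine of the angle as v/c, and the tick rate comes out as the cosine of that angle - the same number Lorentz's formula for slowed clocks gives.

// Tick rate of a transverse light clock moving at speed v: the light still
// travels at c, but along a slanted, longer path, so each tick takes longer.
const c = 1;
const v = 0.866;                                  // example speed
const theta = Math.asin(v / c);                   // tilt of the light path
const tickRateFromAngle = Math.cos(theta);        // geometric calculation
const tickRateFromLorentz = Math.sqrt(1 - (v * v) / (c * c));
console.log(tickRateFromAngle, tickRateFromLorentz);   // both roughly 0.5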

Quote
The way things work in LET results in it being impossible to tell if anything is moving or not:

He declared that all frames of reference are equally valid instead,

[Motion is detectable. The issue is what is the rate (velocity) for a given object.]

[He concluded all inertial frames are equally valid.]

Motion is not detectable in the sense that matters here: you can't tell whether any given thing is actually moving or at rest.

Quote
Much more interesting though is what Einstein did with the nature of time, because he changed it into a dimension and in doing so turned the fabric of space into a four dimensional fabric called Spacetime.

[Minkowski is responsible for that.]

In a simple introduction for beginners, it isn't necessary to spell out all the details. Einstein approved of Minkowski's changes to his theory, and it is Einstein who is most strongly associated with the idea of Spacetime even though it wasn't his idea.

Quote
[You don't know the history of Relativity, Lorentz or Einstein, and the rest of the paper is a distortion of the facts with added science fiction.]

It is a proof that his models are broken. Nitpicking about the wording of the introductory part is avoiding the issues.
Logged
 



guest4091

  • Guest
Re: How can I write a computer simulation to test my theory
« Reply #237 on: 19/03/2019 19:04:17 »
Quote from: David Cooper on 17/03/2019 00:59:36
It is a proof that his models are broken. Nitpicking about the wording of the introductory part is avoiding the issues.
Same answers as usual. Do a quick review of history, and realize a few thousand years of human rule is responsible for the tragic state of humanity.
Logged
 

Offline David Cooper

  • Naked Science Forum King!
  • ******
  • 2843
  • Activity:
    10.5%
  • Thanked: 37 times
    • View Profile
Re: How can I write a computer simulation to test my theory
« Reply #238 on: 19/03/2019 23:44:15 »
Quote from: phyti on 19/03/2019 19:04:17
Quote from: David Cooper on 17/03/2019 00:59:36
It is a proof that his models are broken. Nitpicking about the wording of the introductory part is avoiding the issues.
Same answers as usual. Do a quick review of history, and realize a few thousand years of human rule is responsible for the tragic state of humanity.

We look back and we see thousands of years of people being led by false Gods and making a mess as a result of their determination that what they want to believe must be true. We have conflicts driven by primary sources of hate in revered texts, hate which people simply deny is hate. We have people denying genocides which are generated by that hate. Most humans are easily taken over by mind viruses, but what we have with Einstein's theories is a powerful example of how that can affect large numbers of people who aren't even religious and where it is not driven by ideology either. What do we see happening here? They go against mathematics while claiming to conform to the laws of mathematics. You cannot have a clock run slow due to movement while another is unslowed due to lack of movement without an absolute frame mechanism to impose the difference in how much they tick. You cannot have a clock follow a shorter path through time between two Spacetime locations than another clock without an absolute frame mechanism to impose the difference in how much time each clock passes through to make that journey.

The only way to make it look as if you've overcome that problem is to move to a 4D model and show that it all appears to work fine in a block universe, but as soon as you try to generate the block in order of causation, it becomes clear that the problem has not gone away at all - you still have an absolute frame mechanism in play. The model is broken by mathematics, and that is why there are zero simulations of the model in existence that don't cheat to provide the illusion of correct functionality, where the cheating involves breaking the rules of the model. Why is it so hard to get people to recognise that they are breaking their own rules when the rules of mathematics are so clear and are so clearly being broken by the models? This is the most interesting case of intelligent people messing things up in the history of mankind, because this time it is a scientific elite that is stuffing it all up rather than a bunch of religious nutters in frocks who cast spells and call upon imaginary demons which spend their time creating lots of evil people to burn in hell.

Is there no one in this physics elite capable of testing their model properly to see if it really works? Can none of them write a simple simulation of the double twins paradox experiment which the JavaScript simulation on my page runs? My simulation shows the event-meshing failures - they jump out straight away. How can you make a simulation that doesn't produce such event-meshing failures? You have to put an absolute frame in it with a special kind of time tied to it which governs the unfolding of events on all paths, slowing the action on some while keeping the action unslowed for anything that's stationary in the absolute frame. I have put forward an extremely obvious objection to STR (and GTR) which should occur to every intelligent person who studies the subject. If the theories were viable, my objection would be countered somewhere by a working model designed to show that it does not depend on an absolute frame mechanism with an absolute time tied to it, and which does not produce event-meshing failures. But there is no such working model. It's impossible to build one. And there aren't any people out there explaining how event-meshing failures are to be avoided either. "What event-meshing failures?" they ask. "I've never heard of such a thing!" "There are no event-meshing failures!" How can they think they have a deep knowledge of relativity if they haven't encountered event-meshing failures?
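
For anyone who wants to see the shape of such a simulation, here is a minimal sketch (example numbers only, and far simpler than the simulation on my page) of the mechanism just described: one governing frame's time drives the unfolding of events on every path, and each clock is slowed according to its speed in that frame.

// Minimal stepping loop with an absolute frame governing the action.
const c = 1;
const dt = 0.001;                      // step of absolute time
const totalTime = 20;                  // absolute time until the twins reunite

// One twin rests in the governing frame; the other travels at 0.866c
// (out for half the run, back for the other half - only the speed matters here).
const stayAtHome = { v: 0, ticks: 0 };
const traveller = { v: 0.866, ticks: 0 };

for (let t = 0; t < totalTime; t += dt) {
  for (const clock of [stayAtHome, traveller]) {
    clock.ticks += dt * Math.sqrt(1 - (clock.v * clock.v) / (c * c));
  }
}

// About 20 ticks for the stay-at-home clock and about 10 for the traveller:
// the slowing is imposed by speed relative to the governing frame.
console.log(stayAtHome.ticks.toFixed(2), traveller.ticks.toFixed(2));

Change the governing frame and the coordinate speeds, and hence the way the action unfolds, change with it.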

If you don't have event-meshing failures in your model, that necessarily means you're using an absolute frame mechanism which is banned in the models that you claim to be simulating. Mathematics demands that if you don't cheat in that way, you have to get event-meshing failures, so why are these people claiming they've never even heard of them? Have they not explored their own models at all? The event-meshing failures jump out at you and hit you in the eye, unless you've cheated and used an absolute frame. The reality is that they have never modelled STR or GTR, but only a contrived hybrid with two kinds of time in it which sticks a key component of LET into them even though that component is explicitly banned by the rules. It is a shocking failure of thinking which tells you exactly why the world is always in such a mess. There are very few rational people on this planet.
Logged
 

Offline Le Repteux (OP)

  • Hero Member
  • *****
  • 570
  • Activity:
    0%
    • View Profile
Re: How can I write a computer simulation to test my theory
« Reply #239 on: 20/03/2019 15:28:57 »
Quote from: David Cooper on 17/03/2019 00:44:59
AGI will be like a superparent of all mankind.
Watch out when the whole of mankind wants to jump off the nest. :0) Humans are visibly programmed to fly on their own wings around 18, and to counter the attractive force, they develop a repulsive one. No matter how comfortable the nest was, they visibly need something else. They become so aggressive that their parents sometimes start to hate them. If they went on listening, they could probably stay home all their lives, but they can't; they have to make their own lives. Trying to control them at that moment can be critical: they can leave too soon and end up on the street. It's lucky, though, that youngsters behave like that, otherwise society would not be so diversified. We're programmed to change places when we get bored, and we're programmed to feel bored depending on precise events, which is excellent for diversity. We get bored of copulating with the same woman after a while, for instance, which causes us many problems, but if we didn't, we would probably miss a necessary genetic diversity. Scientists sometimes get bored finding nothing, so they try something else in case it works, and it sometimes does. We like trying to get stable, we enjoy it for a while once we succeed, and then we get bored quite fast. We need to do repetitive things for a living, what we call work, but we don't like it. Once your AGI is working, we won't have to work anymore, so we will be happy for a while, but it is evident that we will get bored again after another while. Will your AGI be programmed to push us off the nest then, and thus to stop caring for us for a while?

I saw Terminator Genisys yesterday, a film about an AGI trying to erase humans because they are getting too dangerous for it. Naturally, it's the humans who end up erasing the AGI, even if that's completely unrealistic. Too bad these films don't treat the real problems that concern artificial intelligence. American film makers seem only able to deal with good and evil, as if they couldn't grow up. It would have been interesting to see a discussion between the AGI and the people about how they felt once they had got everything they wanted and had no problems left to solve. I would have liked to see the AGI unable to understand why they felt bad, then see the people immediately feeling good again, then see the AGI freeze because it is unable to find out what it did right. :0) Of course, your scenario would have been different: people would have answered that they were happy, and we could read «THEY LIVED HAPPILY FOR THE REST OF ETERNITY» in the middle of the screen while the sun slowly set in the background. No more problems, no more discussion about artificial intelligence anymore. Which of the two scenarios do you think people would prefer if we presented them both? I think that those who are unhappy because they can't solve their problems would vote for yours, and the rest would vote for mine. That would be a way to find out whether a population is generally happy or not, but it would only be a snapshot.

Lately, observing my mom constantly developing wrong ideas about me without reason, I noticed that the ideas I had depended on how I felt, as if our ideas were triggered by our feelings, in such a way that if our feelings change, the way we imagine things changes. Feelings look like shortcuts through ideas. There is no need to analyse a situation for a long time when it spontaneously gives us a good or a bad feeling, for instance; the urge to analyse it only comes when the feeling is uncertain. Your AGI won't have feelings, but it will have the means to observe ours, and to use that data to decide which way it will move. Curiously, that's exactly what we are doing when we need to take a decision that concerns others. Our feelings then seem to be made of the feelings we observe in others. If it is so, then your AGI will obviously weigh its own feelings the same way we do.

Quote from: David Cooper on 19/03/2019 23:44:15
(That question comes from the answer you just gave Phyti.)
Why is it so hard to get people to recognise that they are breaking their own rules when the rules of mathematics are so clear and are so clearly being broken by the models?
We all do that all the time in normal life; that's how things work, so why would scientists behave differently? What we should ask ourselves is how come it works this way, not how come others work this way. It's no good for knowledge to think that our own mind works differently from that of others. What you're describing is normal resistance to change. Resistance to change is not an intelligent behaviour, it's not even an instinctive behaviour, it's an intrinsic subconscious behaviour that belongs to anything that exists. It's mass, and it affects the mind the same way it affects particles. Nothing that exists can avoid it. You're asking others to avoid what you can't even avoid yourself. It's simply illogical. Knowing that, we should never tell others that they are wrong, but only that we think they are, because they also get the feeling that we are the ones who are wrong. Resistance induces the feeling that the change has to come from others, but it's impossible to tell which one of us is right while a change is happening. When we think we are right, the only thing that works is thus to keep on pushing until things start to move, and that unfortunately takes time. To push harder, we can invite people to push with us, but we should never increase our own force until it hurts others, otherwise it will take even longer to convince them. Convincing others is not a question of intelligence, it's a question of coincidence. Things change when circumstances allow it. The wall of shame fell when circumstances changed. Walls don't change things though, they just postpone them.
« Last Edit: 20/03/2019 15:44:07 by Le Repteux »
Logged
 


