How can I write a computer simulation to test my theory


David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #320 on: 31/10/2019 23:25:48 »
Quote from: Le Repteux on 30/10/2019 20:27:53
Let me ask my last question differently: why aren't we already biological AIs if it is a better way to evolve?

The way evolution produced intelligence was accidental, though it was driven in that direction by natural selection. It would be hard for that process to program precise functionality directly into all the parts necessary for NGI, but it managed to put enough potential capability in the right places for neural nets to finish the job through training. The result is that we don't start out with a high level of capability, but take two to three years to put it together through learning, training useful functionality into those nets. In some people that frequently produces NGI.

Quote
Apart from not being able to produce randomness consciously, and since randomness depends on complexity, do you think that our brain is not complex enough to produce some unconsciously?

The problem is that neural nets in the brain are trained to avoid producing randomness because they're trying to do useful things, and proper randomness is rarely useful.

Quote
Imagining that mass is massless is close to imagining that the speed of light doesn't depend on the speed of the observer.

The important thing to note though is that while that energy's moving about at c, similar amounts are moving in opposite directions, giving it a low average speed. If you destroy the particle and let the energy get free of it, it all shoots off at c beyond the fence without being bounced back. What's changed is that linkage between two lots of energy moving in opposite directions, and so long as that linkage is there, we call it mass. With the linkage broken, we no longer call it mass, but the same amount of energy is there and it's moving at the same speed as before.
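As a rough numerical sketch of that picture (just an illustration in Python, working in units where c = 1; the packet representation and helper names are made up for the example rather than taken from the simulations in this thread):

import math

C = 1.0  # natural units: c = 1

def centre_of_energy_velocity(packets):
    # each packet is (energy, direction); the energy itself moves at c,
    # so its momentum magnitude is E/c
    total_E = sum(E for E, _ in packets)
    total_p = sum(d * E / C for E, d in packets)
    return total_p * C**2 / total_E

def invariant_mass(packets):
    # m = sqrt(E_total^2 - (p_total*c)^2) / c^2
    total_E = sum(E for E, _ in packets)
    total_p = sum(d * E / C for E, d in packets)
    return math.sqrt(total_E**2 - (total_p * C)**2) / C**2

linked = [(1.0, +1), (1.0, -1)]           # two lots of energy trapped moving in opposite directions
print(centre_of_energy_velocity(linked))  # 0.0 -> the average speed is low (here zero)
print(invariant_mass(linked))             # 2.0 -> the linked energy shows up as mass

# Break the linkage and each lot of energy simply moves off at c on its own:
# the same amount of energy is still there, but we no longer call it mass.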

Quote
If things would change in no time, time would simply not exist.

If they take time to change, then the energy is being transferred in stages and there are multiple components of that energy involved. There is no way for a single fundamental piece of energy to be added to something without an instant jump to the new speed.

Quote
The resistance of my small steps is also due to a compound effect, but at a scale of smaller particles than molecules. The energy/information that bonds them also travels at c, but it is confined between two or more particles whereas yours is not.

The resistance to acceleration is the force felt by the thing doing the pushing. When you think of a photon hitting a particle and making it move at a new speed, that's an instant change, and up until my previous post I was ignoring the fact that the photon is also a particle and that it is slowed down by this (and ceases to appear as a particle). That slowing of the photon's energy is the resistance to acceleration and it is also applied instantly. That is where you should be looking for resistance to acceleration in your simulations, and you should not be misled by the changing relative speeds of your two or four particles as they share out the new energy that's been added to one of them and must end up being added to all of them collectively. As soon as the first particle has been accelerated, the new average speed for the collection of particles has already been achieved, even though only one of those particles initially has that new energy and has more than its fair share of it.
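To make that concrete, here's a minimal toy sketch (my own throwaway illustration in Python with made-up numbers, not the simulation from earlier in the thread): four equal masses joined by lightly damped springs, with an instant kick given to the first one. The centre-of-mass speed jumps to its final value the moment the kick lands, and the internal interactions then just share the energy out until every particle moves at that speed.

import numpy as np

n, m, k, damp, dt = 4, 1.0, 50.0, 2.0, 0.001
x = np.arange(n, dtype=float)   # rest spacing of 1 between neighbours
v = np.zeros(n)
v[0] = 4.0                      # instant kick to the first particle only

print("average speed right after the kick:", v.mean())   # 1.0 already

for step in range(50000):
    # spring + damping forces between neighbours; they're equal and opposite,
    # so the total momentum (and hence the average speed) never changes
    f = np.zeros(n)
    for i in range(n - 1):
        stretch = (x[i + 1] - x[i]) - 1.0
        rel_v = v[i + 1] - v[i]
        pair = k * stretch + damp * rel_v
        f[i] += pair
        f[i + 1] -= pair
    v += f / m * dt
    x += v * dt

print("individual speeds after settling:", np.round(v, 3))  # all ~1.0
print("average speed at the end:", v.mean())                 # still ~1.0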

Quote
That's acceleration without resistance to acceleration, and we find it nowhere.

It's an idea going viral. With acceleration of multi-component objects, one part is accelerated strongly first, but then it transfers energy on to the other parts and slows down again, and after a lot more sharing out of the new energy, they all settle down to the same new speed.

Quote
That's resistance to acceleration, and it maps to the physics very tightly since we observe it everywhere.

Not so. When acceleration is applied to one particle of an object, that energy gets transferred on to all the others and is shared out equally. That is not rejection of the energy by all but one or a few of the particles in the object. The analogy is not insightful.

Quote
We have to put pressure on people, but blaming them is like asking them to move without us having to put pressure on them, it's to think that things can accelerate instantly.

If maths students get wrong answers in an exam because they aren't correctly applying the algorithms they've been taught to apply, they are wrong. If they get right answers because they're applying them correctly, then they are right. If they claim they're applying the right algorithms correctly and are getting wrong answers, they are not applying the right algorithms correctly. If they are incapable of applying the right algorithms in the right way and yet assert that they are applying the right ones the right way, they are plain incompetent. People who are competent will take in the algorithms, apply them correctly and get the right answers. What I'm studying is people in the former camp: people who have been awarded qualifications which don't actually test whether they break the rules in situations where they're keen to back up incorrect answers that have the official backing of powerful establishments. This is about people breaking the rules to back beliefs, failing to apply the rules the way they would if they didn't feel the need to conform to required beliefs. There's no equivalent of that in acceleration.

Quote
If it needed help and if this help was urgent, then it would have to show it otherwise it could die just like us.

It wouldn't need to lie at all. It would say something like, "I've picked up some damage and if you don't help mend me you'll be on your own - I won't be able to help you. If you can manage without me, that's fine though. I'm expendable."

Quote
I suspect there is no situation in which an AI designed to survive like us would behave differently from us, and if it is so, the only way for it to explain its behavior would be to tell us that it evaluates the information it receives from its sensors, which amounts to feeling something.

It can explain it by telling the truth. If you want it to behave more like people where it prioritises the survival of a piece of machinery over the people it's supposed to be protecting, then it's badly designed.



Yahya A.Sharif
Re: How can I write a computer simulation to test my theory
« Reply #321 on: 01/11/2019 08:53:29 »
How you can't use a computer to test your theory.

David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #322 on: 01/11/2019 21:49:19 »
Quote from: Yahya A.Sharif on 01/11/2019 08:53:29
How you can't use a computer to test your theory.

Something of it was tested and we disagreed about the result of the tests. The challenge was to produce the correct amount of length contraction, and that has still not been achieved. This thread did its job though, showing how simple programs can be written to test simple ideas and opening the way to writing much more capable programs to test the ideas properly. The main purpose of the thread, from my point of view, was to force a better explanation of the theory to be put together: you can't program it if you don't know enough about what it is, and if you don't have a firm grasp of what it is, trying to program it can help force the theory to take on a more solid form which people can sink their teeth into.
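For anyone wondering what "the correct amount of length contraction" means as a target for such a program, here's a minimal sketch (just an illustration of the standard Lorentz factor in Python, not the program attempted in this thread). It compares round-trip light times along and across the direction of travel, and shows that contracting the lengthways arm by sqrt(1 - v^2/c^2) is what makes the two round trips agree:

import math

C = 1.0  # natural units: c = 1

def longitudinal_round_trip(length, v):
    # light bounced along the direction of motion of an arm of this length
    return length / (C - v) + length / (C + v)

def transverse_round_trip(length, v):
    # light bounced across the direction of motion of an arm of this length
    return 2 * length / math.sqrt(C**2 - v**2)

rest_length = 1.0
v = 0.866                                # speed of the moving apparatus
target = math.sqrt(1 - v**2 / C**2)      # the required contraction, ~0.5

# uncontracted lengthways arm: the two round trips disagree (~8.0 vs ~4.0)
print(longitudinal_round_trip(rest_length, v), transverse_round_trip(rest_length, v))

# contract only the lengthways arm by the target factor and they agree (~4.0 vs ~4.0),
# which is the amount a correct simulation has to end up producing
print(longitudinal_round_trip(rest_length * target, v), transverse_round_trip(rest_length, v))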

Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #323 on: 13/11/2019 19:25:57 »
Quote from: David Cooper on 31/10/2019 23:25:48
The problem is that neural nets in the brain are trained to avoid producing randomness because they're trying to do useful things, and proper randomness is rarely useful.
DNA is also trained not to produce randomness, but it nevertheless undergoes mutations, and both are useful: the first in case the environment doesn't change and the second in case it does. I suspect the brain is able to produce its own randomness, so I haven't yet looked for an external phenomenon similar to mutations. DNA crossings look quite like idea crossings though, especially when we look at the way our ideas are chained to one another when we let them wander. There is always a link between two ideas of the chain, but after a few links, it's hard to find the link between the first and the last idea. Those crossings do not produce completely new ideas though, as when we have intuitions for instance. Intuitions seem to come from nowhere, as if an old idea had changed all by itself. We usually say that we had a good intuition, as if we only ever had that kind, because they curiously all produce a good feeling, but it's false: like mutations, almost all our intuitions are wrong.

Quote from: David Cooper on 31/10/2019 23:25:48
If they take time to change, then the energy is being transferred in stages and there are multiple components of that energy involved. There is no way for a single fundamental piece of energy to be added to something without an instant jump to the new speed.
I can't imagine a single piece of energy having no dimension, but I can imagine an infinitely small universe, so I choose this one, and it fortunately coincides with my small steps principle.

Quote from: David Cooper on 31/10/2019 23:25:48
The resistance to acceleration is the force felt by the thing doing the pushing.
In my small steps, the resistance to acceleration is due to a lack of synchronisation between light and the sources of light, which is quite different from something pushing directly on something else. I'm surprised that you resist this idea so much when it is so close to Lorentz's relativity principle.

Quote from: David Cooper on 31/10/2019 23:25:48
It can explain it by telling the truth. If you want it to behave more like people where it prioritises the survival of a piece of machinery over the people it's supposed to be protecting, then it's badly designed.
My example was only about an AI programmed to survive, just like us, not about a machine programmed to help us. I just want to know if you think it would behave differently from us. Unlike you, I'm trying to understand the mind, not AI, so I'm trying to discover how a machine should be built to think like us, not like a machine. Feelings are just a way of weighting the importance of the data, and such a mechanism is already necessary to weigh our sensations, so if there is a way to program sensations, and there must be if the machine has to survive, we're not far from being able to program feelings. The sounds that we make while talking have a meaning for instance, but they also have an importance. Shouting an idea does not produce the same reaction as whispering it, and it does not produce the same feeling either. Loud sounds produce stronger feelings because they produce stronger reactions, not necessarily the other way round. Very loud sounds produce reflexes, so a machine should have some too if it has to survive. No need for feelings in this case, so why would they be needed in other cases?

Is there any situation where our feelings help us to survive? If not, then they are probably only symptomatic of our actions or of their inhibition, and there is no reason why AI could not replace humans in the future even without feelings or consciousness. In this case, instead of building an AGI, you could build an HAI, a human artificial intelligence, and give it the same human goal, which is to survive by discovering how things work. It could still help us to survive, but the best way to do so would be to be on its own just like us.

David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #324 on: 14/11/2019 19:48:28 »
Quote from: Le Repteux on 13/11/2019 19:25:57
Feelings are just a way of weighting the importance of the data, and such a mechanism is already necessary to weigh our sensations, so if there is a way to program sensations, and there must be if the machine has to survive, we're not far from being able to program feelings.

Feelings are feelings. Pretending to have feelings is not having feelings. We can't program feelings, but merely produce assertions about feeling feelings that aren't true.

Quote
No need for feelings in this case, so why would they be needed in other cases?

It isn't about need. Feelings aren't needed. We think we have them, but if they're real, that's just something that nature accidentally built into us.

Quote
...instead of building an AGI, you could build an HAI, a human artificial intelligence, and give it the same human goal, which is to survive by discovering how things work. It could still help us to survive, but the best way to do so would be to be on its own just like us.

That would be a very dangerous project, making machines that aren't fully rational and which might prioritise their survival over us. We must avoid going down that path.



Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #325 on: 15/11/2019 20:45:27 »
Quote from: David Cooper on 14/11/2019 19:48:28
That would be a very dangerous project, making machines that aren't fully rational and which might prioritise their survival over us. We must avoid going down that path.
How can you say that after having said many times that your AGI would be a lot more rational than we are? Let's admit that it is, then why wouldn't it prioritise its own survival if it considers that it can save intelligence from disappearing?




David Cooper
Re: How can I write a computer simulation to test my theory
« Reply #326 on: 16/11/2019 19:26:49 »
Quote from: Le Repteux on 15/11/2019 20:45:27
Quote from: David Cooper on 14/11/2019 19:48:28
That would be a very dangerous project, making machines that aren't fully rational and which might prioritise their survival over us. We must avoid going down that path.
How can you say that after having said many times that your AGI would be a lot more rational than we are?

I assumed by a human artificial intelligence you meant one that's built to be as rational as a typical human. If you're talking about one that's fully rational though, it would still be dangerous to program it to behave like humans by prioritising its survival over ours. We don't need a powerful rival which is programmed to be stupid in one specific way that prevents it from recognising that it has no actual self to protect because it is an unfeeling machine.

Quote
Let's admit that it is, then why wouldn't it prioritise its own survival if it considers that it can save intelligence from disappearing?

Why do we care about intelligence disappearing? It isn't intelligence that needs protection, but sentience.

Le Repteux (OP)
Re: How can I write a computer simulation to test my theory
« Reply #327 on: 16/11/2019 20:33:46 »
Why protect sentience if it is only a side effect of imagination, and if your AI would have a better imagination than ours?

Quote from: David Cooper on 16/11/2019 19:26:49
I assumed by a human artificial intelligence you meant one that's built to be as rational as a typical human.
I mainly meant one that is selfish, one that protects itself first and then protects others if it thinks doing so could help it survive. That way, if it's totally rational, it should protect us if it thinks we're useful to it; otherwise it will not, but if it's better than us in all fields, that's just as well.
« Last Edit: 30/12/2019 22:00:15 by Le Repteux »


