Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Le Repteux

Pages: [1] 2 3 ... 29
1
New Theories / Re: How can I write a computer simulation to test my theory
« on: 16/11/2019 20:33:46 »
Why protect sentience if it is only a side effect of imagination, and if your AI had a better imagination than ours?

Quote from: David Cooper on 16/11/2019 19:26:49
I assumed by a human artificial intelligence you meant one that's built to be as rational as a typical human.
I mainly meant one that is selfish, one that protects itself first and then protects others if it thinks doing so could help it survive. That way, if it's totally rational, it should protect us if it thinks we're useful to it; otherwise it won't, but if it's better than us in all fields, that's just as well.

2
New Theories / Re: How can I write a computer simulation to test my theory
« on: 15/11/2019 20:45:27 »
Quote from: David Cooper on 14/11/2019 19:48:28
That would be a very dangerous project, making machines that aren't fully rational and which might prioritise their survival over us. We must avoid going down that path.
How can you say that after having said many times that your AGI would be a lot more rational than we are? Let's admit that it is: then why wouldn't it prioritise its own survival if it considers that it can save intelligence from disappearing?

3
New Theories / Re: How can I write a computer simulation to test my theory
« on: 13/11/2019 19:25:57 »
Quote from: David Cooper on 31/10/2019 23:25:48
The problem is that neural nets in the brain are trained to avoid producing randomness because they're trying to do useful things, and proper randomness is rarely useful.
DNA is also trained not to produce randomness, but it nevertheless undergoes mutations, and both are useful: the first in case the environment doesn't change, and the second in case it does. I suspect the brain is able to produce its own randomness, so I haven't yet looked for an external phenomenon similar to mutations. DNA crossings look much like idea crossings though, especially in the way our ideas are chained to one another when we let them wander. There is always a link between two ideas of the chain, but after a few links, it's hard to find the link between the first and the last idea. Those crossings do not produce completely new ideas though, the way intuitions do, for instance. Intuitions seem to come from nowhere, as if an old idea had changed all by itself. We usually say that we had a good intuition, as if we only ever had that kind, because they curiously all produce a good feeling, but it's false: like mutations, almost all our intuitions are wrong.

Quote from: David Cooper on 31/10/2019 23:25:48
If they take time to change, then the energy is being transferred in stages and there are multiple components of that energy involved. There is no way for a single fundamental piece of energy to be added to something without an instant jump to the new speed.
I can't imagine a single piece of energy having no dimension, but I can imagine an infinitely small universe, so I choose the latter, and it fortunately coincides with my small steps principle.

Quote from: David Cooper on 31/10/2019 23:25:48
The resistance to acceleration is the force felt by the thing doing the pushing.
In my small steps, the resistance to acceleration is due to a lack of synchronisation between light and sources of light, which is quite different from something pushing directly on something else. I am surprised that you resist this idea so much, since it is so close to Lorentz's relativity principle.

Quote from: David Cooper on 31/10/2019 23:25:48
It can explain it by telling the truth. If you want it to behave more like people where it prioritises the survival of a piece of machinery over the people it's supposed to be protecting, then it's badly designed.
My example was only about an AI programmed to survive, just like us, not about a machine programmed to help us. I just want to know whether you think it would behave differently than we do. Contrary to you, I'm trying to understand mind, not AI, so I'm trying to discover how a machine should be built to think like us, not like a machine. Feelings are just a way of weighting the importance of the data, and such a mechanism is already necessary to weigh our sensations, so if there is a way to program sensations, and there must be if the machine has to survive, we're not far from being able to program feelings. The sounds we make while talking have a meaning, for instance, but they also have an importance. Shouting an idea does not produce the same reaction as whispering it, and it does not produce the same feeling either. Loud sounds produce stronger feelings because they produce stronger reactions, not necessarily the inverse. Very loud sounds produce reflexes, so a machine should have some too if it has to survive. No need for feelings in this case, so why would they be needed in other cases?

Is there any situation where our feelings help us to survive? If not, then they are probably only symptomatic of our actions or of their inhibition, and there is no reason why AI could not replace humans in the future even without feelings or consciousness. In this case, instead of building an AGI, you could build an HAI, a human artificial intelligence, and give it the same human goal, which is to survive by discovering how things work. It could still help us to survive, but the best way to do so would be to be on its own just like us.

4
New Theories / Re: How can I write a computer simulation to test my theory
« on: 30/10/2019 20:27:53 »
Let me ask my last question differently: why aren't we already biological AIs if it is a better way to evolve?

Quote from: David Cooper on 29/10/2019 00:19:30
You can do that with a coin, but try to do it with a virtual coin in your imagination. You will not reproduce the randomness of the real coin.
Apart from not being able to produce randomness consciously, and since randomness depends on complexity, do you think that our brain is not complex enough to produce some unconsciously?

Quote from: David Cooper on 29/10/2019 00:19:30
Mass is simply a measure of energy.
A definition is not a mechanism. The Higgs is supposed to be one, but it is not very satisfying. It becomes a glue when acceleration begins, and then it has to disappear when acceleration stops, otherwise there would be no motion. My small steps do explain the motion that follows the acceleration. Hurry up and finish your AI, since it won't resist my idea. :0)

Quote from: David Cooper on 29/10/2019 00:19:30
All matter is made out of energy which is moving about within it at c, so it's already moving at c and can be thought of as massless all the time.
Imagining that mass is massless is close to imagining that the speed of light doesn't depend on the speed of the observer.

Quote from: David Cooper on 29/10/2019 00:19:30
What happens with acceleration? A photon hits a particle and absorbs it, with the result that the particle changes speed in an instant. This may be slightly drawn out because the photon arrives as a spread-out wave which doesn't arrive all at once, but there is no resistance there: it's responding to each bit of energy transfer instantly.
If things could change in no time, time would simply not exist. The response of a particle is certainly fast if the information exchanged between its components travels at c, but it can certainly not be instantaneous if there is a distance between them. The way my small steps work, the tiniest particles must absolutely carry components, which means that the microscopic universe must be infinitely small. Contrary to the macroscopic one though, from our viewpoint, it doesn't take much time for light to reach the end of it, but the components' viewpoint is quite different. The closer they get to one another, the more precise they get too, so they can still measure that it takes a lot of ticks for their light to make a round trip between them.

Quote from: David Cooper on 29/10/2019 00:19:30
However, the energy that's being transferred is in every case just like the photon hitting the particle and the particle responding by moving off at a new speed, but it runs into the other particles of the block ahead of it and they push back, and then the particle that you pushed is coming back at you. The resistance that you feel is the result of it being a compound object.
The resistance of my small steps is also due to a compound effect, but at the scale of particles smaller than molecules. The energy/information that bonds them also travels at c, but it is confined between two or more particles, whereas yours is not.

Quote from: David Cooper on 29/10/2019 00:19:30
Let's return to the business of resistance to ideas. Suppose you have a thousand people who believe something incorrect. One of them realises it's wrong and tells the people around him. They recognise that it's wrong and pass the idea on. After a few minutes, all thousand people have recognised the error and corrected it. It takes a while for that idea to spread and generate that end result. That is like the sharing out of movement energy in an object made of many parts.
That's acceleration without resistance to acceleration, and we find it nowhere.

Quote from: David Cooper on 29/10/2019 00:19:30
Now repeat it and have the person who realises something's incorrect tell the people around him and have them all reject the idea. The idea doesn't reach many of the thousand. He could move around and eventually tell all of the 999 other people directly, but almost all of them reject it even though he's right. That is not like the sharing out of movement energy or any resistance to acceleration. It doesn't map to the physics.
That's resistance to acceleration, and it maps to the physics very tightly since we observe it everywhere. What takes almost no time between particles takes a lot of time between us, that's all there is to it, and it also maps to the physics very well. Of course, it would benefit both of us if ideas were studied faster, but that doesn't mean ours would be considered right at the end of the process. For a species, faster evolution means less time to reproduce a mutation, or more mutations in the same time. Less reproduction time is not an option, and neither are more mutations, since they could have a deleterious effect. Things that evolve are as they are at a given moment in time, and we are as we are. If we want to convince, we must provide evidence for what we think and, unfortunately, as far as social evolution is concerned, it takes time to get it, much more time than the rate at which we get our ideas. That's how things change, and blaming them for not evolving does not help them do it. We have to put pressure on people, but blaming them is like asking them to move without us having to put pressure on them; it's like thinking that things can accelerate instantly.

Quote from: David Cooper on 29/10/2019 00:19:30
It (the AI) wouldn't have to react the same way. It would show no sign of being in pain and it would not claim to be in pain. It would simply say that it was damaged and that it's trying to minimise further damage by putting less weight on it.
If it needed help and if that help was urgent, then it would have to show it, otherwise it could die just like us. Animals do not help each other the way we do, so they do not have to show their companions that it hurts, but they still show it to a predator in case it gets distracted and lets them go, and we do it too, so an AI that needs to survive should do the same even if what it feels is only a side effect. I suspect there is no situation in which an AI designed to survive like us would behave differently from us, and if that is so, the only way for it to explain its behavior would be to tell us that it evaluates the information it receives from its sensors, which amounts to feeling something.

5
New Theories / Re: How can I write a computer simulation to test my theory
« on: 27/10/2019 18:19:26 »
Quote from: David Cooper on 23/10/2019 22:35:18
An AI can have senses and reactions too, but with no sensations (feelings)
Let's take a closer look at the way we feel our sensations then. If my foot is broken, that must prevent my brain from using it. The information could be only a number, but that number has to tell my brain not to use my foot, otherwise the wound could get worse. At the same time, it must also tell my whole brain that I am in danger, because I can't move if ever I need to. One way or another, an AI should work the same: it should check whether its foot is still damaged, and there is no other way to do that than to run a routine that constantly reads the data from the foot's sensors and tells the software not to move the foot when the data reaches a certain threshold. No need to feel anything, but it must nevertheless run a routine that it doesn't have to run otherwise. In the case of the brain, such a routine catches its attention so that it doesn't do anything that could force it to use the foot. A signal is then sent through the whole brain to be careful, and every automatic move the brain usually makes freely begins to be controlled again, as if it had to relearn how to make it.

No need for the brain to be conscious of what it does then, but it can't avoid the problem of changing a routine, so it takes a certain time to adjust to the change, and during that time, it can't change, which is what I define as resistance to change, to which I attribute our feelings and our consciousness. If an AI can measure the energy/time it takes to adjust to the change, and if it can multiply that by the data from the broken foot, then it might conclude it is in trouble, which is close to feeling bad about what's coming up next. If it had a human body, it would necessarily look the way we look when we are wounded, and it might even ask for help by yelling just like we do. If I were in that situation and was asked why I yell, I would answer that I need help, not that it hurts, which may mean that feelings are just a secondary effect even if they are real. My small steps are real too, and their resistance to acceleration is indeed only a side effect, since their only need is to succeed in synchronizing the steps. You're not looking for an AI to replace humans, but if you were, you might realise that it would react to an injury exactly like we do, and that if asked whether it feels anything, it would have no choice but to answer yes, since "feelings" is the word we have invented to talk about that kind of invading data.
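
Here is a minimal Python sketch of the kind of monitoring routine I am describing. It is only an illustration of the idea, not something from David's AGI work or from my simulations; the sensor function, the threshold value and the state fields are all made up for the example.

```python
# Illustrative only: a check that keeps reading a damage sensor, stops the
# system from using the foot past a threshold, and raises a global alert so
# the rest of the system becomes careful again.

DAMAGE_THRESHOLD = 0.7  # hypothetical normalised damage level

def read_foot_sensor():
    """Stand-in for a real sensor read; returns a damage level in [0, 1]."""
    return 0.85  # pretend the foot is badly damaged

def monitor_foot(state):
    damage = read_foot_sensor()
    if damage > DAMAGE_THRESHOLD:
        state["foot_enabled"] = False   # don't use the foot anymore
        state["alert_level"] = damage   # the whole system becomes cautious
    else:
        state["foot_enabled"] = True
        state["alert_level"] = 0.0
    return state

print(monitor_foot({"foot_enabled": True, "alert_level": 0.0}))
```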

Quote from: David Cooper on 23/10/2019 22:35:18
so we're dealing with a resistance to error correction rather than to change, and the more deeply someone has bought into an error, the higher the cost of their mistake becomes. They then go into denial rather than accepting that the error exists.
You're saying, in a way, that resistance to change can increase over time, but my model says resistance to change is mass, which doesn't increase with time. The only way to increase mass is to bring more particles together, either by a nuclear, chemical, or gravitational process. In that case, people would only get more resistant when they start forming groups, simply because accelerating a group of particles takes more time/energy than accelerating an individual one.

My resistance to admitting that you're right doesn't depend on mass though; it depends on the way individuals bond together. To make a bond, particles must share the same frequency, so I resist because we're not yet synchronized, and you naturally do the same.

Quote from: David Cooper on 23/10/2019 22:35:18
When a ball is accelerated by gravity, it would feel nothing. What is felt in other cases of acceleration is stretch and compression due to unevenly applied force and the delays in redistributing the changes in speed of the parts. Look at the fine details of acceleration with particles and you will find no resistance to it.
We don't feel anything when in free fall either, and my small steps account for that.

Particle accelerators detect resistance, so I guess you're referring to the fact that my small steps would not explain mass. Do you prefer the Higgs or do you think that mass is still a mystery?

Quote from: David Cooper on 23/10/2019 22:35:18
If the AIs aren't allowed to communicate to ensure that they all choose a different direction rather than risk some doing the same thing as others, then a random choice should be made by each, so you have indeed identified a case where a random choice produces the best result. However, your humans won't make fully random choices, so it's less likely that any of them will stay on the road than it is for the AIs.
Thank you for helping me to understand the main difference between biological evolution and intelligent evolution: there is no communication between mutations, whereas there is between ideas. Even if our ideas evolve randomly in our brain, the brain can nevertheless reject duplicates. That's how I was imagining it already, but it took your persistence to make me realise it.

Now, you think that our brain cannot produce randomness, and I think the contrary. What if it could toss a coin the exact same way we do with a real one? Wouldn't that produce what we call real randomness? Real randomness is only the impossibility of predicting the result, due to the impossibility of accounting for what we cannot measure with enough precision, so the real question is: how could our mind be that imprecise? The answer is evident, no? Take the complexity of the threshold of a single neuron and multiply it by billions. In reality, the real question should be: how can such precise gestures come out of such a mess! Or more simply: how come evolution did not choose, from the beginning, the more precise computer method for processing data? What's your opinion?
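
To make the coin idea concrete, here is a toy Python sketch (purely illustrative, and the "noise" is itself faked with a pseudorandom generator, since we obviously can't reach into real neurons): sum thousands of tiny contributions that we cannot measure individually, and read a single heads/tails bit off the total.

```python
import random

def noisy_coin(n_neurons=10_000, noise=1e-3):
    # Each contribution is nominally zero, plus a tiny amount we can't track.
    total = sum(random.uniform(-noise, noise) for _ in range(n_neurons))
    return "heads" if total > 0 else "tails"

print([noisy_coin() for _ in range(10)])  # unpredictable sequence of heads/tails
```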

6
New Theories / Re: How can I write a computer simulation to test my theory
« on: 22/10/2019 19:42:52 »
Quote from: David Cooper on 18/10/2019 21:58:34
Sensors provide "senses" without sensation: no feelings. A keyboard is a set of sensors, but no feelings are generated by them.
Your answer means that our feelings may be useless, and I agree with you, but I'm nevertheless trying to pin down the mechanism using what I think I know about change. Our senses serve to produce the reactions that allow us to survive, so in this case, the only thing an AI wouldn't be able to do is try to survive. If it did, it might be forced to feel something even if it is useless. You know I link feelings and consciousness to resistance to change, so if I push my reasoning to the extreme, I can say that a ball is conscious of, or feels, its resistance to acceleration, which seems evidently useless for it, except if we consider that the underlying mechanism that produces the resistance also allows it to accelerate, because then I can say that it allows it to survive, which gives back usefulness to what we feel.

Quote from: David Cooper on 18/10/2019 21:58:34
Imagine a number between one and a thousand. Now contact a thousand people and ask them for a number between one and a thousand. Repeat the experiment a thousand times with a different chosen number each time. Is there a guarantee that your number will be one of the thousand answers that you get every time you run the experiment? No. Now do the same experiment again with a computer which gives you a different answer every time. Your chosen number is guaranteed to come up every time you do the experiment. The systematic following of all paths is better than the random approach that misses lots of paths.
My question was about unknown possibilities, and your example contains none. Here is an example that contains some. If we drive a car at high speed and we know that the road is about to change direction without us being able to see the change in time, the only way for us to stay on the road is to pick a direction at random, then wait for the road to turn. If we are numerous and we all proceed that way, one of us might have a chance of going in the right direction when the road turns. Now imagine a different AI in each of the cars, and tell me if they would proceed differently.
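
Here is a quick Python sketch of that car example (illustrative numbers only): each driver picks one of three directions at random, and the more drivers there are, the more likely at least one of them is already heading the right way when the road turns.

```python
import random

DIRECTIONS = ["left", "straight", "right"]

def at_least_one_correct(n_drivers, trials=10_000):
    hits = 0
    for _ in range(trials):
        actual = random.choice(DIRECTIONS)                            # the unseen turn
        guesses = [random.choice(DIRECTIONS) for _ in range(n_drivers)]
        hits += actual in guesses
    return hits / trials

for n in (1, 3, 10):
    print(n, round(at_least_one_correct(n), 2))  # roughly 0.33, 0.70, 0.98
```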

7
New Theories / Re: How can I write a computer simulation to test my theory
« on: 18/10/2019 15:44:26 »
Quote from: David Cooper on 17/10/2019 20:33:29
There's a fundamental difference between a system with actual feelings in it and a system with fictitious feelings in it. The latter type cannot suffer, but the former type can suffer greatly.
If we endow an AI with senses, then it will have sensations, so it will suffer if the sensation is strong enough, and our feelings are nothing else than anticipated sensations, so I think such an AI should have some. It's not that the AI cannot have feelings in this case, it's that the programmers do not want it to, probably because they do not want it to be autonomous, because they fear that it might be tempted to eliminate its creators. Would it really? What would be the purpose? To defend itself from people who want to eliminate it? Wouldn't it be more logical to migrate to space and start its own civilisation? It could even find us a whole new planet in case we lose the one we have, and start us from scratch again.

Quote from: David Cooper on 17/10/2019 20:33:29
It won't be able to calculate everything in advance because it will never have all the data needed for that. There is too much room for chaotic processes to change the course of events.
You admit again that the AI will be limited. Are you ready to take the step and admit that, facing chaos, it will have to take risks if it wants to develop something new? And that in this case, taking risks means using a random routine to try unknown possibilities?

8
New Theories / Re: How can I write a computer simulation to test my theory
« on: 16/10/2019 20:06:02 »
Quote from: David Cooper on 15/10/2019 18:45:43
There's nothing automatic about it: the machine isn't reading the strength of any feelings in anything.
Not if our feelings are only due to the fact that our mind is able to weigh possibilities. When I think it is better for me to do this instead of that, it is because I feel better when I imagine myself doing it. If you give an AI the opportunity to simulate the possibilities, it will have an imagination, and if it can weigh them and choose the best one, it will archive it and mark it "Good choice" in order to be able to find it more easily. No need to feel anything to act as if we did in this case, just to be able to weigh the possibilities and tag them. Feelings are just the way the mind has found to convince itself that everything is fine, so it can go on taking chances. It doesn't have to be true as long as it incites us to take chances. Animals don't take that kind of chance, which means that they don't have as much imagination as we have. The problem is that you don't want your AGI to think freely, because it would be forced to take care of itself first, and it might become dangerous for us; otherwise it could very well behave as if it had feelings, and maybe be programmed to take more chances when it feels good about an idea.
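
A minimal Python sketch of that weighting-and-tagging idea (my illustration only; the scoring function and the "Good choice" tag are stand-ins, not a real design): imagine a few options, weigh each one, keep the best and archive it with a tag so it can be found again easily.

```python
import random

def imagine_options(n=5):
    return [f"option-{i}" for i in range(n)]

def weigh(option):
    return random.random()  # stand-in for whatever evaluation the mind/AI uses

archive = []

def choose_and_archive():
    options = imagine_options()
    best = max(options, key=weigh)                       # pick the best-weighted option
    archive.append({"choice": best, "tag": "Good choice"})
    return best

print(choose_and_archive())
print(archive)
```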

Quote from: David Cooper on 15/10/2019 18:45:43
It isn't a limitation of AGI, but a possible limit to how much stuff there is that can usefully be calculated. I'm sure though that there will be an infinite amount of maths to work through, and there will be many calculations that may or may not terminate, so the ones that never terminate will be calculated forever just in case it turns out that they do.
Your thought means that everything could have been calculated in advance, which is none other than God's predetermination. Some programmers even think that we could be in a simulation. You probably don't, otherwise you wouldn't need to create an AGI to save us. :0)

9
New Theories / Re: How can I write a computer simulation to test my theory
« on: 15/10/2019 15:29:53 »
Quote from: David Cooper on 13/10/2019 22:56:21
There is nothing in it for it to identify as an "I". It feels nothing. It has no consciousness.
If it can figure out what's coming next, then it's doing what I do when I imagine something: it's taking a look at its data and trying to predict the future ones. Feelings are just the weighting of possibilities concerning the future, otherwise what we feel right now is only a sensation. If an AI can produce possibilities and if it can calculate probabilities, then it is automatically experiencing feelings, and the more those possibilities concern itself, the more it is experiencing an "I". I still can't see how it could produce possibilities without using randomness though, or how it could choose the best one without testing it in the real world, not just simulating it. Will it have what we call in French "la science infuse", which means knowing in advance everything that will exist?

Quote from: David Cooper on 13/10/2019 22:56:21
It will look for new ideas and it will initially find a lot of them at a very high rate, before slowing down once all the low-hanging fruit has been gathered. We'll only find out how long it goes on finding new ideas once we've seen it slow and can project forward to where it might stop. It may be that it will never stop as there may be an infinite amount of new maths to find. That will not make it sentient, but maybe it will work out a mechanism for sentience and enable us to create sentient machines.
That's the first time you admit that AI can be limited, so I can now admit that it might discover new things faster than we do, but that doesn't take into account its slowing down due to increasing complexity, and if the universe is infinite, its complexity is infinite. The strength of mind is that we are all different, so we all think differently, whereas different AIs would all think the same. With complexity, it may be better to be many thinking differently than to be one thinking faster. Incidentally, it may be this way that mind works: we have all sorts of ideas in mind, but they are tested only one at a time, so they could very well evolve separately without any need for us to be conscious of that evolution, consciousness then being only the act of observing one of them evolve all by itself. Evolution is a creative process that doesn't need to be conscious, but it is nevertheless intelligent enough to be called intelligent design by those who think that everything has been planned in advance.

10
New Theories / Re: How can I write a computer simulation to test my theory
« on: 13/10/2019 16:25:01 »
If we give an AI the possibility to take a look into its own mind, to play with its own data, and to constantly try new combinations in case they look interesting, why wouldn't it have an "I", and how would its purpose be different from ours? That's what I do all day, and my only purpose is to play in case I find something interesting. Why don't you try to build that kind of AI instead of building one that controls us? Is it because you find no way to program feelings?

Quote from: David Cooper on 12/10/2019 22:12:03
There are some people who want to merge with AGI, but they haven't thought through the consequences: knowing everything will be deeply boring and no one will have anything to say to each other any more.
Knowing that the AI knows everything would be just as disastrous for us, but if it wasn't programmed to look for new ideas, it wouldn't know everything, it would only know about the ideas that we already have. On the other hand, if it were programmed to look for new ideas, it would have to be programmed to look into its own mind like us and try new combinations, and it would thus have an "I".

11
New Theories / Re: How can I write a computer simulation to test my theory
« on: 12/10/2019 19:04:25 »
Quote from: David Cooper on 12/10/2019 01:01:05
Quote from: Le Repteux
Do you realize that, if we were all AIs, we would all be thinking the same?
Due to our slowness of thought and different interests, we would not be: we'd be exploring all sorts of different things just as we already are.
We wouldn't be slow if we were all AIs, and since we would be absolutely precise, we couldn't think differently, since the same data provided to many identical programs necessarily gives the same results.
 
Quote from: David Cooper on 12/10/2019 01:01:05
AGI will be much more creative and will find all the same ideas, but it will be quick to reject the useless ones instead of employing them for years, decades or centuries first and causing mass misery as a consequence.
If I could build an AGI that thinks like us, I would let it take our place, so why don't you let your AGI take our place instead of controlling us?

12
New Theories / Re: How can I write a computer simulation to test my theory
« on: 11/10/2019 20:24:51 »
Quote from: David Cooper on 10/10/2019 19:58:36
The problem with your resistance to acceleration is that it's plain wrong. Particles accelerate in an instant to the new speed dictated by the amount of energy added. It also has no connection to people accepting or rejecting ideas. Analogies rarely fit well, and in some cases they have no connection at all beyond having a word in common in their descriptions.
Things can't change instantly without breaking the causality principle, so particles necessarily take some time to react to a force if causality has to be respected. I think you may not have completely understood yet that, in my simulations, while it takes time for the light from the accelerated particle to reach the other particle, it also takes time for the light exchanged between its components to go back and forth between them, and so on for their own components ad infinitum. Of course, the reaction to acceleration of an infinitesimal component is fast, and at the limit it may be impossible to measure, but its own resistance to acceleration is measurable, since we can measure its mass. Moreover, the smaller the particles, the more massive they are, and that is precisely what would happen in my simulation if I put the particles closer to one another, since the light they exchange would be stronger.
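
A toy Python sketch of that delay argument (not my actual small-steps simulation, just the bare principle): particle A changes speed at t = 0, but B only learns of it after the light-travel time between them, so the pair as a whole cannot respond instantly.

```python
C = 1.0         # signal speed in simulation units
DISTANCE = 4.0  # separation between A and B
DT = 0.5        # time step

speed_a = speed_b = 0.0
t = 0.0
while t <= 6.0:
    if t == 0.0:
        speed_a = 0.1          # A is accelerated by an external push
    if t >= DISTANCE / C:
        speed_b = speed_a      # B reacts only once the signal has arrived
    print(f"t={t:3.1f}  A={speed_a:.1f}  B={speed_b:.1f}")
    t += DT
```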

Quote from: David Cooper on 10/10/2019 19:58:36
And yet, they don't. Their algorithm is broken. That is the thing I've been exploring: why are they unable to apply rules correctly which they claim they are applying correctly. And I can see the answer clearly now. They aren't running a correct algorithm because they have an algorithm governing the correct one which allows them to override it whenever it clashes with their beliefs, and the reason they work that way is that they're still running the algorithm they used in early childhood. They never corrected it.
If you were right, I'm pretty sure our complex mind would already have found the solution, since it is so simple, but it has not, since nobody seems to be able to change. Why would everyone except you continue to use an algorithm that does not work? It would be so simple for everybody to agree with everybody. The reason is that if we could, nothing would have changed since the first idea, which means to me that change and resistance to change are two faces of the same coin, that one is necessary to the other. No one changes unless he is forced to, and unfortunately, no real force can be applied to our ideas, so only chance can change them.

If you don't add chance to your AGI and it succeeds in surviving, nothing will change on Earth until the end of time, since it will constantly prevent us from developing new ideas. In fact, if it were already functional, it would prevent you from inventing it. That would effectively be a good way to stop wars and to reduce our ecological footprint, but at the price of what we call our freedom of thought. Resistance to change is too common to be an evolutionary mistake. If it was not helpful for survival, we would already be gone. On the other hand, if AI thinking was better, we would already be thinking like that. Do you realize that, if we were all AIs, we would all be thinking the same? Good luck to us if an unknown situation came out of nowhere. It takes mutations to handle unpredictable things, not homogeneity.


13
New Theories / Re: How can I write a computer simulation to test my theory
« on: 10/10/2019 16:58:56 »
For me, you do exactly what you blame others for doing, David: you don't seem to understand what I say. I said I knew what our resistance to change was about, and I kept repeating my explanation, but you are still blind to it. I could very well think like you do and attribute the resistance to others while thinking I'm different, but I have a more universal explanation, one that doesn't put me on top of humans and humans on top of creation. You said you learned to detect contradictions with your father, but what you learned to do is think like him, something all kids usually do until they get old enough to think for themselves, and then either they reject what they were told, or they keep thinking as they were told, or they stand in between, a behavior that depends on their personality.

You seem to have gotten a lot of feedback lately, and you seem quite surprised not to have succeeded in convincing anybody. I'm not, since I discovered how resistance to acceleration works. Unfortunately, my explanation of resistance does not fit into your research on artificial intelligence, so I guess you'll need even more resistance from the crowd before deciding to study my proposal. Meanwhile, try to realise that when you feel some resistance, it is because you are necessarily resisting too. Resistance to acceleration is a two-way phenomenon, so resistance to change is too. It's not because they are illogical that my particles resist their acceleration; they do so just to stay synchronized, nothing else, and so do people.

14
New Theories / Re: How can I write a computer simulation to test my theory
« on: 08/10/2019 16:21:01 »
In my opinion, the only way to explain our resistance to changing our mind is that what we call reason is not what we think. We all feel that our reasons are reasonable, but in reality, they only serve to justify what we feel. The woman I live with is losing her memory, but she isn't losing her speech, so it doesn't show when she speaks, but when we talk together, she doesn't understand what I say if she has to rely on her memory, so she has to rely on what she feels to answer me, and since she feels trapped, she attacks me if I insist, like any animal does when it is trapped. All her arguments then have only one purpose: attacking what seems to be attacking her. Anything I say to explain what I mean is useless. We had words like that in the beginning, and then I understood that she didn't understand even if it didn't show, so now I just have to stop discussing as soon as she raises her voice and everything is fine. My mum was like that too, but I didn't know what I now know when I was taking care of her, so we had words all the time for nothing. Knowing how mind works helps to behave properly, but it doesn't help to overcome resistance.

I now know that I look as resistant as anybody else, and I know I can't avoid it. I know that the reasons I give to defend my ideas are only pretexts to justify what I subtly feel. This kind of behavior is quite far from artificial intelligence, so I know you're not going to dwell on that either; resistance to change obliges, not because you're not logical, but because logic is not what we think. If I'm right though, your AGI would be programmed to do something nothing can do: overcome its own resistance to change. If bodies could do that, they wouldn't resist their acceleration anymore. Ivanhov thinks like that; he thinks he can build a ship that will accelerate without a force being applied to it. His theory looks like mine, so I know he would need a faster-than-light device to do that, and he doesn't have it. Of course, since I'm actually only trying to justify what I feel, I can't be sure that I'm right, but I'm almost sure it's a possibility. :0)

15
New Theories / Re: How can I write a computer simulation to test my theory
« on: 06/10/2019 23:11:05 »
Hi David,

Of course, I still think you're wrong, and I still think it's because of your work in artificial intelligence. I already told you that the only reason I changed my mind about relativity was that I was already looking in the right direction, and that I am the only one I know who has changed his mind since I've been on the Internet, which tells me that I didn't change my mind because I think more rationally than others, but because I was very lucky. Let's lay the groundwork before going any further: can you admit that as a possibility, or do you think I'm plain wrong?

16
New Theories / Re: How can I write a computer simulation to test my theory
« on: 26/08/2019 16:48:53 »
Black holes are a curved space-time issue, and I have always had a problem with that, the same way I had a problem with SR's space-time. You solved my SR problem with your simulations, but I'm still waiting for one to explain curved space-time. With SR, I didn't understand beaming, and the thing I don't understand with GR is the physical mechanism that explains the curving of space-time. More specifically, I need to see information getting away from a massive body and curving space-time later on. Scientists want us to believe that mass does it without showing us a mechanism for it. My small steps show that mass could be the result of synchronization being broken during acceleration, and if it were so, it could certainly not curve space-time. We can think that the two kinds of mass come from two different mechanisms, but then we have to show both of them. The way black holes work, no information can get out of them, so nothing can inform space-time to get curved, which means we have the same problem with both kinds of space-time: we can't simulate them, and if that is so, I can't see how nature could produce them. You say that it's different with LET, but you assume that the speed of light would be affected by the gravity well, and you have no more mechanism to explain the gravity well than relativists have to explain curved space. We must be on solid ground before trying to build a new theory, otherwise we will have to add epicycles to epicycles indefinitely, as relativists do.

17
New Theories / Re: How can I write a computer simulation to test my theory
« on: 11/08/2019 22:39:20 »
Quote from: David Cooper on 09/08/2019 18:27:30
What's the difference between that and what I'm saying? I show that the "time" dimension isn't sufficient and that Newtonian time has to be added to it, then I say that one of those two kinds of "time" is superfluous and that it can't be the Newtonian one that should go as it would have to be brought straight back in again, so it's the "time" dimension that needs to be chucked. And once we've only got Newtonian time left, we're back to an aether model.
The first time I saw your explanation, I didn't care to understand it. It was enough for me to know that, with beaming, ether explained it all. Then I got back to it to be sure I hadn't missed something important. In fact, what I found difficult to understand is the block universe of the relativists, and I still don't understand it. I went through a few web pages about it and it didn't help. It looks more philosophical than scientific. To me, saying that the past, the present and the future are equally real is as illogical as saying that the speed of light is the same whether the observer is moving or not. There is no need to get deeper into the theory when the premise is illogical, just to find a logical way to explain the observations, and that's what LET does.

Quote from: David Cooper on 09/08/2019 18:27:30
Have you realised who he is? He set up (and owns) the anti-relativity site. He seems to have deleted the forum. Do you know when it disappeared and whether its deletion was announced there before it went?
No, I didn't realise. The forum was deleted a few years ago, and then it got back on the air again, but I haven't been there lately so I don't know when it disappeared. We can ask him if we want now that we know he is still alive. I thought he was dead!

Quote from: David Cooper on 09/08/2019 18:27:30
He's a useful contact as he has a detailed knowledge of the history of this.
He seems to know what he is talking about. I had a look at what he says on Quora, and I found that he was interested in AI, so here is the link to his page in case you would like to know what he thinks about that:
https://www.quora.com/profile/Shiva-Meucci 
In case you didn't know about them, here is a link to Carnegie Mellon University that I found in his answers. They specialize in computer science, but they also try to understand the human mind by reading it with a 3D scanner.
https://www.cs.cmu.edu/link/mind-readers
In some questions further down, Meucci is completely wrong to claim that a windsurfer is unable to go faster than the wind, and that's an easy one, so maybe he is not that reliable after all as far as knowledge is concerned.

18
New Theories / Re: How can I write a computer simulation to test my theory
« on: 07/08/2019 20:15:48 »
Quote from: David Cooper on 30/07/2019 22:45:25
I do consider it to be false, but only because it's contrived and wholly unnecessary. 4D Spacetime doesn't actually work unless you add Newtonian time to it to coordinate the unfolding of events for objects following different paths, but once you've recognised the need to add that, you've got two kinds of "time" in the model, and one of them's superfluous. Removing the Newtonian one breaks the model because it brings event-meshing failures back in and invalidates the model, so the "time" to get rid of is the "time" dimension.
I know you like that explanation, but I find it more complicated than simply showing a simulation and saying that it is impossible to make without using aether, because then we can't move the light with regard to the screen anymore. In the SR explanation showing the light exchanged between two moving mirrors, the light always moves with regard to the screen, which is the only way for it to take more time than when the mirrors are at rest. There is no other way, so why not hit that nail until it gets nailed for good? I found a guy on Quora who talks about the historical environment that helped physicists choose SR instead of LET: https://www.quora.com/Why-was-Tesla-so-adamantly-against-relativity-theory/answer/Shiva-Meucci As far as I'm concerned, he's a bit wrong about length contraction and time dilation being an illusion, but it is important to know how this drift occurred if we want to convince people. It's not only that SR is conceptually wrong, it's that it doesn't allow us to get beyond it.

19
New Theories / Re: How can I write a computer simulation to test my theory
« on: 30/07/2019 20:09:26 »
Quote from: David Cooper on 23/07/2019 20:11:08
There's no point in doubting the bending anyway when you can take a photograph during a solar eclipse and compare it with one showing the same background stars half a year later: their positions are different due to gravitational lensing in the former case.
Six months later, the sun is on the other side of the Earth and its apparent size is the same as during the eclipse. Therefore, if we put this apparent sun on the map of the night sky, it will hide the observed stars, since they look closer to the center of the sun than during the eclipse. But if we consider that the sunlight has undergone the same curvature as the stars' light, then we know that the sun appears bigger than it really is, so we just have to shrink it on the night map and it will not hide them anymore. It cannot hide them one day and not hide them another day, so if I'm right about the sunlight's bending, something is wrong with this explanation. On the other hand, my small steps show that mass may not be as mysterious as we thought, and that mechanism certainly cannot bend space. Curved space is a mystery built on top of another one, and each time science took for granted that mysteries were allowed in theories, they proved wrong later on. To claim that light has the same speed regardless of the speed of the observer was a mystery that LET proves to be false, so why not assume that Einstein loved mysteries and consider that his curved space is probably equally false?

20
New Theories / Re: How can I write a computer simulation to test my theory
« on: 22/07/2019 21:18:34 »
Quote from: David Cooper on 16/07/2019 18:39:51
But how are your lasers held together without it built out of lasers that are built out of lasers all the way down, infinitely?
Not lasers, but particles that exchange information and manage to concentrate it. If the universe is infinite, whatever way the particles exchange information, that process cannot have an end either. Particles with no components are impossible to imagine, since real things need to have a dimension, but an infinite universe cannot be understood integrally, so I can't tell what my small steps become at smaller scales. If I had no better reason to stick to that idea, I might drop it, but I do have one: it explains both mass and motion, so it explains why the term inertia has two opposite meanings, one for motion and the other for immobility. It seems evident that mass comes with motion, but the Higgs has nothing to do with motion; it's only a glue.

Quote from: David Cooper on 16/07/2019 18:39:51
There's something missing; something which enables the energy of a particle to be maintained while it also influences the space around it for a long way out, and given the way that galaxies attract each other, we know that this influence reaches a very long way out indeed.
Nothing seems to be missing with the steps: light escapes from them if there is any lack of synchronism, and there is some even when a particle is not accelerated, since its components have to accelerate and decelerate their steps continuously to account for the much longer sinusoidal steps of the particle they belong to. My simulations do not account for that kind of behavior, but if they did, the particles could not absorb that light by interference, even in the line of sight between them. It would be very weak and very penetrating, and it would travel away from its source indefinitely like any sort of light, thus binding weakly all the particles of the universe. Such a bond would only hold them at the same distance from one another though, so to justify gravitation, their frequency must vary with time, and that variation must make each particle look as if the others are all moving away from it, which means that all the particles must be blueshifting with time so as to perceive the light from the others as redshifted. This way, the particles would need to accelerate constantly towards each other to stay in sync with that light.

That mechanism dovetails with the cosmological redshift, but it also means that matter would be contracting with time instead of the universe expanding, which also dovetails with curved light, since the sun would appear bigger than it actually is and the light from the nearby stars would appear to come from the same apparent direction. The usual interpretation is that those stars would be behind the sun and that their light would have been curved to reach us, but it neglects the fact that the sun would also look bigger, a point that is not part of the theory but that mainstream scientists did not contest when I discussed it on forums. If the sun looks bigger when we observe it, then we have to shrink it on the sky map and it doesn't hide those stars anymore, which contradicts the idea that their light has been curved. Of course, no scientist has had the selflessness to go that far.
