I stumbled on some unexpected information about that principle on PF. It seems that accelerating two clocks at g at the two ends of a spaceship is not the same as putting those two clocks at the same vertical distance from one another on Earth.
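For illustration, here is a minimal Python sketch of that asymmetry, using only standard textbook facts (my own illustration, not anything taken from the simulation): two clocks given the same proper acceleration g keep ticking at the same rate in the launch frame, whereas two clocks held one above the other in a gravitational field g run at rates that differ by roughly 1 + gh/c^2.

```
import math

# Hedged sketch: standard SR/GR bookkeeping, not the thread's simulation.
# Both ship clocks get the SAME proper acceleration g, so they share the same
# velocity history in the launch frame and their proper times stay equal.
g, h, dt, steps = 0.1, 0.5, 0.001, 20000   # units where c = 1; h is the clock separation

v = 0.0
tau_rear = tau_front = 0.0
for _ in range(steps):
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    tau_rear += dt / gamma      # both clocks dilate identically at every tick
    tau_front += dt / gamma
    v += g / gamma**3 * dt      # same proper acceleration applied to both

print(tau_front - tau_rear)     # 0.0: identically accelerated clocks stay in step
print(1 + g * h)                # ~1.05: approximate rate ratio of two clocks held
                                # at a height difference h in a field g (weak-field formula)
```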
David wants me to correct the imprecision by calculating back, but I keep thinking that nature wouldn't do that.
Weird things happen at the quantum scale; we have to use probabilities because things do not always happen the same way. There are two ways to consider that precision problem: either things are absolutely precise and we can predict anything if we know everything, or things are not absolutely precise and we can't predict them with absolute precision even if we know everything. The second way means that chance exists; the first means that it doesn't.
The way my acceleration simulation is currently built, it is not the photon that has a dimension but the steps it takes to execute its motion.
(By the way, do you know why the speed gets down to -3.469446951953614e-18 instead of 0?)
We also know where length contraction comes from in any case (relativistic mass increase reduces the acceleration and automatically imposes contraction).
I couldn't get it to produce that value, but it's so close to 0 that it may be a tiny error produced by the accuracy limits of the FPU (floating-point unit [maths co-processor]).
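That kind of residue is exactly what binary floating point produces: each step is rounded to the nearest representable double, and the rounding errors of the acceleration and deceleration steps rarely cancel exactly. A minimal sketch (not the simulation's own code):

```
# Ten nominally equal acceleration steps of 0.1 do not add up to exactly the
# 1.0 we subtract afterwards, because 0.1 has no exact binary representation.
speed = 0.0
for _ in range(10):
    speed += 0.1        # ten "identical" acceleration steps
speed -= 1.0            # remove the total we expected to have reached

print(speed)            # about -1.1102230246251565e-16 rather than 0.0
```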
If I let the distance between the components contract the same way I let the distance between the particles contract, the steps made by the particles will also contract, which should slow down their contraction and their acceleration a bit.
I was about to correct the error in the code, because it complicates the reading of the display; is there a better way?
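One common way to handle it, sketched below (the tolerance and the function name are illustrative assumptions, not values from the simulation), is to leave the stored physics values alone and only snap near-zero numbers to zero when formatting them for the display:

```
EPSILON = 1e-12   # assumed display tolerance, not a value from the simulation

def display_value(x, digits=6):
    """Format x for the readout, treating floating-point dust as zero."""
    if abs(x) < EPSILON:
        x = 0.0
    return f"{x:.{digits}g}"

print(display_value(-3.469446951953614e-18))   # "0"
print(display_value(0.7027))                   # "0.7027"
```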
One important thing is to adjust the amount of acceleration delivered depending on the current speed of the thing being accelerated (though I'm not sure how the numbers would need to be crunched when decelerating it).
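One standard way to crunch those numbers, sketched below as textbook SR rather than as the simulation's actual method, is to divide the proper acceleration by gamma cubed before applying it each tick; deceleration uses the same formula with the thrust pointing against the velocity.

```
import math

C = 1.0   # work in units where the speed of light is 1

def step_speed(v, proper_accel, dt):
    """Advance the coordinate speed v by one tick of constant proper acceleration."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return v + proper_accel / gamma**3 * dt   # less and less speed gained per tick

v = 0.0
for _ in range(10000):
    v = step_speed(v, proper_accel=0.001, dt=0.1)   # to decelerate, flip the sign of proper_accel
print(v)   # about 0.707c: the speed approaches but never reaches c
```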
The other thing I'd want to do is give each object its own time, so that its speed of movement can adjust its speed of functionality. Matter may be continually sending signals out at a rate related to that speed of functionality in order to detect what other matter around it is up to; if one lot of matter meets another that has traveled a long way to meet it, neither will accelerate unless they can detect and react to each other in some way, so nothing will ever travel between them unless they are continually testing their surroundings.
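A minimal sketch of that idea (the class and field names are illustrative, not taken from the existing simulation): each object carries its own proper-time counter, advanced by dt / gamma every tick, so the faster it moves the slower it functions.

```
import math
from dataclasses import dataclass

@dataclass
class SimObject:
    speed: float            # coordinate speed as a fraction of c
    proper_time: float = 0.0

    def tick(self, dt):
        gamma = 1.0 / math.sqrt(1.0 - self.speed ** 2)
        self.proper_time += dt / gamma   # this object's own clock runs slower

stay_home = SimObject(speed=0.0)
traveller = SimObject(speed=0.7027)
for _ in range(1000):
    stay_home.tick(0.01)
    traveller.tick(0.01)

print(stay_home.proper_time, traveller.proper_time)   # about 10.0 versus 7.1
```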
Another important thing to keep in mind is that, whatever we do for the contraction to coincide with SR's ad hoc assumption, the MM experiment would always give a null result, whatever the contraction and whatever the time light takes to make its round trip.
If such a simulation had been available to Michelson, things might have turned out differently.
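For anyone who wants to play with the idea, here is a minimal sketch of such a calculation (my own illustration, not the thread's simulation). It returns the round-trip light times of the two arms as seen from the aether frame for any contraction factor you choose; in the textbook analysis the two times only come out equal when the parallel arm is contracted by sqrt(1 - v^2/c^2).

```
import math

def round_trip_times(v, arm_length, contraction):
    """Round-trip light times of the two arms (c = 1), parallel arm contracted."""
    parallel = arm_length * contraction            # only the arm along the motion contracts
    t_parallel = parallel / (1 - v) + parallel / (1 + v)
    t_perpendicular = 2 * arm_length / math.sqrt(1 - v ** 2)
    return t_parallel, t_perpendicular

v = 0.7027
for contraction in (1.0, 0.5, math.sqrt(1 - v ** 2)):
    print(contraction, round_trip_times(v, 1.0, contraction))
```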
If I let the contraction happen during acceleration in the Twins Paradox simulation, for instance, the traveling twin would age more than the one at rest instead of aging less, which means there is always a possible in-between contraction rate where, for the same speed, the two twins would age the same.
For those who are skeptical, have a look at that one for a speed of .7027c and a contraction of .5.
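For comparison, standard SR at that speed gives a gamma of about 1.406, so the textbook prediction (without any modified contraction rate) is that the traveling twin ages about 0.71 year for every year of the stay-at-home twin:

```
import math

v = 0.7027                            # speed as a fraction of c
gamma = 1.0 / math.sqrt(1.0 - v * v)
print(gamma)                          # ~1.4055
print(1.0 / gamma)                    # ~0.7115: ageing ratio, traveling twin / stay-at-home
```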