« Last post by yor_on on Today at 13:29:34 »
One more point to it. Assuming that it is right, and that you can inject energy, that actually presumes a hidden parameter too, doesn't it? As it won't matter 'how' we destroy the original entanglement, the other 'side' of it must still exist until measured. So if you want to 'inject energy' you now have to explain why it won't destroy both sides. On the other hand, if the entanglement indeed is one entity, as I presume, then? Forgot what I thought :) Getting senile here.
« Last post by yor_on on Today at 13:21:58 »
This uses 'energy' as a minimalistic common denominator in all transformations. It also assumes that change costs.
« Last post by yor_on on Today at 13:18:20 »
Let's get back to entanglements, and the idea of injections of energy 'teleported' to another location. It has to be wrong, because if you destroy one 'side', the logical conclusion, if it were right, would be that the other side should be destroyed too, unless we assume some restrictions. Another way to use this example would be to consider, if it is wrong, what it says about hidden parameters. What if weak measurements on one side could influence both without destroying the entanglement, while at the same time injecting energy into one side (measuring it) does not carry over to the other? Weak measurements as an idea for communicating 'instantly' become questionable here, wouldn't you agree?
And what does it say about the possibility of there being a hidden parameter?
« Last post by yor_on on Today at 13:03:40 »
How about this: define the universe as a clock, one dimensional :) Or, if you prefer (I do, I do:) as having one degree of freedom to 'vibrate in'. That's where your local constants, such as 'c', come from. One degree of freedom does not define what this degree is 'free' in, and that seems better to me than assuming 'preexisting dimensions' as some original in which things 'exist'.
Becoming a 'field' through frames of reference interacting, if you like. That then is our 'global definition' of what makes 'repeatable experiments' come true. Still local, though.
« Last post by yor_on on Today at 12:44:23 »
Sorry, but "What happens when time slows down at light speed?" is wrong. Time stops for no one. It's about observer dependencies. I thought I had made that clear with the example of three observers at different speeds, each one defining different time dilations for the others, no one agreeing with another's local measurements? What is unclear about that example? That time dilation is measurable, as in a twin experiment, is a result of 'c' locally giving you the same answer (equilibrium) in each uniformly moving frame of reference, no matter your speed or mass.
As for massless 'photons', they have a speed that is a limit. If a neutrino has an ever so slight rest mass it can't be 'moving' at 'c'. If it could, Einstein would be wrong.
Time does not slow down locally.
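The observer-dependent dilation described above follows from the Lorentz factor. As a quick illustrative sketch (the three speeds below are arbitrary example values, not from the post), here is the factor each observer would assign to a clock moving past at a fraction of 'c':

```python
from math import sqrt

def gamma(beta):
    """Lorentz factor for a speed given as a fraction of c."""
    return 1.0 / sqrt(1.0 - beta ** 2)

# Three observers at different speeds each assign the others a different
# dilation factor; locally, each observer's own clock ticks normally.
for beta in (0.1, 0.5, 0.9):
    print(f"v = {beta}c -> gamma = {gamma(beta):.3f}")
```

Each observer gets a different gamma for the others, while measuring no dilation at all on their own local clock, which is the point of the three-observer example.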
« Last post by alan hess on Today at 12:19:52 »
Sir, a down quark with a charge of -1/3 converting to an up quark with a charge of +2/3 is a total state change of one unit. Call it math, call it physics, it's still the same. An example that may prove it to you: take an apple and cut off 1/3; this brings the charge to 0. Now give me 2/3 more. You have no apple left and your charge is plus, while I have a whole apple with a charge of -1. If you still don't believe that, go to any site on neutron decay and look at the examples. There is no change of total charge; there is no magic.
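The arithmetic here can be checked directly. A small sketch using exact fractions, with the standard quark and lepton charges (in units of the elementary charge) for quark-level neutron decay, d -> u + e- + anti-neutrino:

```python
from fractions import Fraction

# Electric charges in units of the elementary charge e (standard values)
charge = {
    "down": Fraction(-1, 3),
    "up": Fraction(2, 3),
    "electron": Fraction(-1),
    "antineutrino": Fraction(0),
}

# Quark-level neutron decay: d -> u + e- + anti-nu
before = charge["down"]
after = charge["up"] + charge["electron"] + charge["antineutrino"]

print(before == after)                  # True: total charge is conserved
print(charge["up"] - charge["down"])    # 1: the quark's charge changes by one unit
```

The quark's own charge jumps by exactly one unit (-1/3 to +2/3), while the emitted electron's -1 keeps the total unchanged.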
« Last post by jeffreyH on Today at 10:33:13 »
I'm not misreading you, Sciconoclast. Science can only talk about what we, as massive observers, can access via experiments. Saying that time is frozen at the speed of light is ambiguous at best and probably meaningless. Because neither we, nor any other observer, nor our experimental apparatus can take measurements from the point of view of a photon (or other light-speed entity), we can't measure time (or distance) from its point of view. We can talk about how we as massive observers experience photons, and say something about photons from that perspective.
It can be said that the spectrum of light remains unchanged barring any interactions. Time is a problematic concept to apply generally but it is all we have for purposes of measurement.
« Last post by Expectant_Philosopher on Today at 10:32:49 »
Physicists postulate that higher dimensions exist but are scrunched down to a very tiny structure that we cannot perceive, and that this is why we have no evidence of their existence. I'd like to turn that idea on its head. It is not that these higher dimensions are very tiny; I postulate the fifth dimension is quite large, that it encompasses every point in four dimensions, yet because it is unaffected by time, it never registers in our physical perception. We need time to form our perceptions, to register memories, but since this dimension exists outside of time, our standard physical experimentation would never illustrate its existence. Entering the fifth dimension you should be able to go anywhere, anywhen. You don't travel; you are here, then you are there.
« Last post by flr on Today at 08:52:24 »
Could it be that the simulation slows down to accommodate more "computationally expensive" regions of space? The massive object responsible for the gravitational field probably has a lot of interacting particles, or more information (i.e. more calculations to make per unit volume). My gut feeling is that this could fit into explanations of time dilation due to planets, stars, black holes, atomic nuclei...
If the computations are quantum (i.e. it uses that crazy parallelism due to quantum entanglement) then those "computationally more expensive" (or denser) regions of space may have exponentially more local 'hardware' to do computations.
For example, in a quantum computation model, if we entangle 292 qubits together, then one single 'step' can perform a number of parallel operations equal to 2^292 ≈ 10^88, roughly the number of protons in the entire visible universe!
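The size estimate itself is easy to verify. A quick sketch checking that 2^292 really does land around 10^88:

```python
from math import log10

n = 292
states = 2 ** n                      # number of basis states of n entangled qubits

print(len(str(states)))              # 88: the number has 88 decimal digits
print(round(n * log10(2), 1))        # 87.9: so 2^292 is about 10^88
```

So the exponent in the post checks out: 2^292 is just under 10^88.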