# The Naked Scientists Forum

### Author Topic: Unified field theory and the structure of the universe  (Read 5117 times)

#### Odysseus2

• First timers
• Posts: 9
##### Unified field theory and the structure of the universe
« on: 02/08/2013 07:33:27 »
Unified field theory and the structure of the universe

The theory that I am proposing provides an understanding of the structure of the universe and how all matter behaves within the universe.  It also determines the behaviour of the forces of gravity, magnetism and electric charge within the universe.

To support this theory I will use it to explain the results of the double slit experiment and at the same time show how and why the quantum world appears to behave like a wave.  I will also explain why the universe expands in all directions and not from a single centralised point in space.

The theory requires that we view the universe (past, present and future) as having a lattice that provides a framework for 3-dimensional space.  We can imagine the lattice as a set of bars that run parallel to each other in 3 dimensions and intersect to create cubes with no floors, ceilings or walls.  This lattice has the following properties: it remains stationary for all time, and no mass or force can pass through the bars of the lattice; mass and forces can only pass through the space between the bars into a discrete cube that is defined by the boundaries of the bars.  Now that we have defined discrete 3-dimensional spaces that remain fixed for all time, we can apply the following laws.  If we view the sum of the contents of each cube over all time, then the contents of each cube would be identical to those of every other cube.  This would be true for mass, gravity, charge and magnetic moment.  So if we consider mass as an example, then the following law would apply:

Sum of mass for cube 1 from time = 0 to infinity

is equal to

Sum of mass for cube 2 from time = 0 to infinity

is equal to

Sum of mass for cube n from time = 0 to infinity,

where n runs over every cube in the lattice.

The equivalent of the above law also applies to each of the forces of gravity, charge and magnetic moment.
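Stated more compactly, the mass law above might be written as follows (the integral notation is my own shorthand, not part of the original statement of the laws):

```latex
% Equal cumulative mass for every pair of cubes, summed over all time
\int_{0}^{\infty} m_i(t)\,dt \;=\; \int_{0}^{\infty} m_j(t)\,dt
\qquad \text{for all cubes } i, j
```

where \(m_i(t)\) denotes the mass contained in cube \(i\) at time \(t\); the theory asserts the same equality for the cumulative gravity, charge and magnetic moment passing through each cube.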

The double slit experiment

Now we can apply this theory to explain the results of the double slit experiment.

In this case, we can view each single photon in the experiment as a marble and show how each marble will travel through the lattice of 3-dimensional space (described above) over time.  First we need to understand that the physical universe will look to keep the forces in each cube, including mass, as balanced as possible at any cumulative instant in time compared to the neighbouring cubes.  That is to say, before anything enters a 3-dimensional cube, each individual cube will have a set of forces that define the properties of the cube at that instant.  The properties of each cube will be determined by the sum of the mass and the individual sums of the respective forces that have entered the cube since time zero.

Now we can look at the path that the marble must take to arrive at the point of measurement.  In essence the marble has to move from one cube to the next in the general direction in which it has been propelled.  Now we need to understand that it cannot simply travel in a straight line due to the bars of the lattice (unless it was perfectly aligned with the lattice, but even in this case a straight line of travel would not be guaranteed).

At this stage we will describe the path of the marble in a 2-dimensional plane, although it is straightforward to extend the reasoning to 3-dimensional space.

For the 2-dimensional plane we view the experiment as the surface of a table that is tilted downwards, with the table surface acting as our horizontal plane.  We imagine that the surface of the table is covered with a single layer of aligned cubes, that the cubes have been hollowed out, and that only the vertical corner posts of the cubes can be seen above the surface of the table.  Now we will place two doors halfway down the table that are sufficiently wide that at least one full cube is framed by each doorway.  We will also place a wall at the end of the table in order to detect the location of the impact of any marbles making their way through either of the two doors.

As the marble travels from the cube in which it starts its journey (in row 1), it needs to move further down the table towards the cube in row 2.  We will assume that the marble cannot move diagonally, but can only move to an adjacent cube to the left, right, forward, backward, up or down.   In this 2-dimensional example we only need to consider the cube directly in front of the cube housing the marble (in row 2) and the cubes immediately to the left and right of it in row 1.   The marble is now subject to its own mass and forces and to the properties of the cube directly in front of it and the cubes to either side of it.

The cube into which the marble moves will be the cube whose current properties best fit the marble's properties (mass, gravity, charge and magnetic moment).  If the cumulative properties of the cube in front of the current cube are a better fit for the properties of the marble at this instant in time, then the marble will move forward into that cube.   For example, suppose the cube in front of the cube currently housing the marble has a deficit in the mass that has passed through it up to this instant in time, compared to the surrounding cubes.  If the marble passing into this cube would improve the balance of the sum of masses passed into it to date, compared to its surrounding cubes, then the cube will exert a gravitational pull on the marble.  Similarly the cumulative (from time zero to this instant in time) magnetic force and charge will also generate a net force of attraction or repulsion on the marble.  This will determine which of the neighbouring cubes the marble enters.   Let’s say in this instance the marble moves forward into the cube directly in front of it.

This process would then repeat.  Let’s say in row 2 the marble moves forward one cube, so it is in the same column in which it started but now in row 3.  Next, suppose the cube with the properties of best fit for the marble’s new properties is the one to its left, so the marble moves into the left-hand cube in row 3.  As there will be considerable forward momentum on the marble, it will generally move forward as opposed to left or right through the different rows.

This process will continue from row to row, until we reach the row in front of the row in which the doors are situated.  Exactly the same process will occur in this case.  If the properties of the marble at this new instant in time are best matched by the cube framed by the doorway then the marble will pass into the cube framed by the doorway, otherwise it will not.

This process will continue row after row (moving left or right or straight ahead) until the marble hits the wall at the end of the table (in the double slit experiment it is the device that measures the arrival of the photon).
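As a rough illustration only, the row-by-row walk rule described above can be sketched as a toy simulation.  Everything here (the grid size, the "deficit" scoring based on cumulative visits, and the forward-momentum bias) is my own simplification for the sake of illustration, not part of the theory itself:

```python
import random
from collections import defaultdict

def run_marbles(n_marbles=2000, n_rows=40, n_cols=81, forward_bias=5.0):
    """Toy sketch: each marble steps row by row, preferring the
    neighbouring cube with the smallest cumulative mass received so far
    (a stand-in for the 'deficit' balancing rule described above)."""
    visits = defaultdict(int)   # cumulative mass passed through each cube
    impacts = defaultdict(int)  # where each marble hits the end wall
    mid = n_cols // 2
    for _ in range(n_marbles):
        col = mid
        for row in range(n_rows):
            # candidate moves: straight ahead, or one cube left/right
            options = [col]
            if col > 0:
                options.append(col - 1)
            if col < n_cols - 1:
                options.append(col + 1)

            # score = mass already accumulated in the target cube,
            # minus a bias favouring the straight-ahead move;
            # ties are broken at random
            def score(c):
                return visits[(row, c)] - (forward_bias if c == col else 0.0)

            col = min(options, key=lambda c: (score(c), random.random()))
            visits[(row, col)] += 1
        impacts[col] += 1
    return impacts

random.seed(0)
impacts = run_marbles()
print(sorted(impacts.items())[:5])
```

With the cumulative visit counts acting as the "deficit", early marbles travel nearly straight while later marbles are pushed into less-visited columns, so the impact counts spread out around the central column.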

In this way, provided the doorways are sufficiently close together, a proportion of the individual marbles will pass through one of the two doors, eventually arriving at the wall at the end of the table.  All other individual marbles will be stopped by the wall in which the doorways are located.

It should be noted that the properties of the nearby cubes should be relatively similar at an instant in time and as such the forces attracting the marble away from the most direct path would not generally be large, but sufficient to create small deviations from a straight path when interacting with the properties of the marble.

The wall would show the pattern of impact from the marbles that resembles the pattern shown by the double slit experiment.  Due to the discrete nature of the lattice cubes, we will arrive at a set of discrete impact points that will be normally distributed around a central point.

Now we need to explain why the path changes when a sensor is added to determine if the marble passes through a slit.  If the sensor uses a magnetic or electromagnetic field to identify the path of the photon, the sensor will align the magnetic field through which the photon passes and will entirely bias the path that the photon can take, providing a direct path through the slits from the point of origin of the photon with minimal deviation and showing the double slit image in the horizontal plane.

In the case of the marble, the sensor is biasing the magnetic and electrical properties of both the marble and the cubes through which the marble can enter.  In effect aligning the pathway through which the marble can travel.

The expansion of the known universe

I believe a question has arisen over how the universe expands in all directions as opposed to from a central point.

The above logic needs to be applied to each point in the universe at a given point in time.  In this case, instead of a single marble, we can consider a row of 5 marbles.  The path that these marbles will take will again be determined by the cubes available for each of the 5 marbles to move into and by the properties of each of the 5 marbles, but in this case it will also be determined by the competition from the neighbouring marbles.  This will lead to some expansion in the band of columns that the marbles could fall into.  For example, if the marbles originally started their journey in adjacent columns, it is quite likely that by the end of their journey, even if they are in the same row, they will no longer be in adjacent columns.  Whether the space between the marbles expands in the horizontal plane at a particular instant in time will be determined by the balance of the forces of gravity, magnetism and charge.

In the same way this will also occur in the vertical plane as well as the horizontal plane.   Extrapolating this process would result in a universe where there is not a central point of expansion of the universe, but it would show the universe expanding in all directions.

#### Odysseus2

• First timers
• Posts: 9
##### Re: Unified field theory and the structure of the universe
« Reply #1 on: 14/08/2013 09:52:18 »
I feel I should provide a little further detail on the proposed theory and its implications.  In particular I will expand on the explanation for the results observed under the double slit experiment.

The proposed theory states that the sum of masses (and the distribution of those masses) passing through a given cube when summed over all time, (for any of the cubes defined in the proposed theory above), will be identical to the sum of the masses (and the distribution of those masses) when summed over all time for any other cube as defined above.  Under the proposed theory the same would be true of magnetic field strength and its direction and electric field strength and its direction.  In order to better understand the explanation of the processes that are occurring we should imagine the dimensions of the cubes to be on the subatomic scale.

The consequences of the proposed theory and its laws are that the path that any matter or gravitational force (or magnetic or electrical forces) can take through the 3-dimensional space at a given instant in time is dependent upon the cumulative sum of the forces and masses that have passed through that point in space from the beginning of time up to but not including the current instant in time.

These past forces and masses will in effect create a 3-dimensional surface described by the differences in the cumulative sums of the three forces (magnetic, electrical and mass / gravitational) when compared to the neighbouring points in space.  It is this 3-dimensional surface that the masses and forces present at the current instant in time must map on to.

As these past forces and masses are not present in the current instant in time they are not directly measurable in the current time, although they can to some extent be measured indirectly by measuring their influence on matter in the present time.

Double slit experimental results

I will summarise the results as I see them.

Single slit

First we have the single slit.  If the slit is wide enough then a pattern emerges that shows the photons distributed in a normal distribution in both the horizontal and vertical planes.

This I believe is widely accepted as an unsurprising result.  If we consider the proposed theory, using the analogy of a marble moving through the cube lattice, then in this case the photon (represented by the marble) has no charge or magnetic moment, and as a result should only be affected by the net difference in the masses that have passed through the neighbouring cubes over time.  In this case the way the marbles would travel through the lattice would be similar to the scenario where we have many marbles travelling in the same row in nearby columns, reducing the possibility of spread in the central region.  The spread would become more possible where there is less mass travelling through a column, i.e. away from the centre.

Double slit – photons emitted as a stream (i.e. not one at a time) and no photon detector present

Here the photons appear at the photosensitive screen as a series of bands.

It is also important to note the shape of the distribution of the photons.  They appear to follow a series of normal distributions in both the horizontal and vertical planes.  If we take the horizontal plane (the vertical plane shows similar results), we can see that the normal distribution in the centre appears to have a smaller standard deviation about its mean, and as we move out from the centre the standard deviation about the mean gets progressively larger; i.e. points are more concentrated in the central distribution, and the individual normal distributions become progressively less concentrated about their respective central points as we move away from the centre of the overall photon pattern.

It would be interesting to know whether these points over a longer time frame would ultimately result in the same distribution as shown for the single slit.

In this case we have two potential explanations for the distribution of the photons.

The first possible explanation is that the cumulative time surface has a random distribution of its net mass in this specific area in space at the beginning of the experiment.  In this case the net mass effect in the cumulative time surface would build over the duration of the experiment to show an increasing effect on the distribution of the photons as they pass through either of the two slits.  As the experiment progresses the individual photons that have passed through a specific point in space will contribute towards the cumulative net mass / gravitational effect on the cumulative time surface in that specific area, this in turn will act upon any subsequent photons passing sufficiently near to this point in space.

The distribution observed could be due to the potential for a greater spread of mass, resulting from the space between the two slits.  In the marble analogy, although we have marbles in the same row, the columns could be more sparsely populated and there would be some columns (in the row in which the slits are located) that are not populated, due to some marbles hitting the wall between the slits.  Once this spread occurs it is quite plausible that, due to the gravitational pull of other marbles, there is the potential for the photons to group in the way shown.

The second possible explanation is that the cumulative time surface could already have a wave like distribution in the net mass effect in this particular region of space.  In this case the masses when travelling through the cubes would align in some way to fit the existing distribution.

If this were the case, then in the single slit experiment the volume of photons could prevent the same level of spread.  In the marble analogy this would represent a situation where marbles travelling in a given row have a high number of columns populated, preventing the same level of movement into neighbouring columns; in a sense overwhelming the underlying distribution.

As the effects appear to build over time, it would seem that the first scenario is the more likely (i.e. that the cumulative time surface has a random distribution of its net mass in this specific area in space at the beginning of the experiment).  I would assume that if the underlying distribution were present from the beginning of the experiment then we would see these patterns clearly emerging from the outset.

Double slit – photons only emitted one at a time and no photon detector present

Here the photons appear at the photosensitive screen as a series of bands, just as they did when the photons were emitted as a stream.

It is my understanding that this outcome is where part of the central mystery lies, as it cannot be readily explained through the current laws of Physics.

If we assume that the proposed theory is correct then we have pretty much the same explanation for the final photon pattern emerging as we did when the photons were emitted in a stream.  In this case instead of a photon being affected by the presence of a neighbouring photon, it is affected by the net mass effect that has built up in the cumulative time surface in this area, as a result of photons that have already passed nearby.

This is a key difference between the proposed theory and current Physics theories.  The proposed theory suggests that what has happened in the past could affect what happens in the present, whereas, as far as I know, current Physics theories do not have any similar laws.

Double slit – photons only emitted one at a time with photon detector present

Here the photons appear at the photosensitive screen as two single bands, each like the band that resulted from the single slit experiment.

I believe this further adds to the mystery, as although I suspect the sensor uses a magnetic field (and detects disturbances in the magnetic field), since photons carry no charge then according to the current laws of Physics their path should not be affected by the presence of the magnetic field.

I believe what might be happening in this case, is that although the presence of the magnetic field should not interact with the properties of the photon directly, I believe it has a significant effect on the cumulative time surface in that region in space.

As a possible explanation, the presence of a magnetic field will align any matter present in the region of space that has a magnetic moment.  It will similarly align any ions that are present in the room due to their net charge.  In this way it could be possible that the properties of the cumulative time surface are significantly altered, and the level of change will build over time in the presence of a magnetic field generator.   In practice I suspect that all magnetic fields could be due to the alignment of net magnetic, electrical and mass / gravitational properties of the cumulative time surface.

If this is the case, then the cumulative time surface would be developed in an ordered non-random fashion, and it may have a normal distribution of mass already in place before the experiment begins, or could develop one as the experiment progresses.  It could be that the distribution of mass on the cumulative time surface then determines the distribution of photons during the experiment.

In short, I believe it is possible that the magnetic field “prepares” the cumulative time surface, removing the irregularities in the net effects of mass / gravity, charge and magnetic forces on its surface.

Other effects of the cumulative time surface

At the macroscopic level I believe it is the net forces on the cumulative time surface, acting at the subatomic level, that propel the planets and determine planetary motion.

#### Odysseus2

• First timers
• Posts: 9
##### Re: Unified field theory and the structure of the universe
« Reply #2 on: 31/08/2013 11:38:37 »
Other effects of the cumulative space-time surface continued…

Before discussing any further effects of the cumulative time surface theory proposed, I will reiterate its effects.

The consequences of the proposed theory and its laws are that the path that any matter or gravitational force (or magnetic or electrical forces) can take through the 3-dimensional space at a given instant in time is dependent upon the cumulative sum of the forces and masses that have passed through that point in space from the beginning of time up to but not including the current instant in time.

These past forces and masses will in effect create a 3-dimensional surface described by the differences in the cumulative sums of the three forces (magnetic, electrical and mass / gravitational) when compared to the neighbouring points in space.  It is this 3-dimensional surface that the masses and forces present at the current instant in time must map on to.

Going forward the cumulative time surface will be referred to as the cumulative space-time surface as the theory specifically relates to the cumulative net differences in charge, mass (gravity) and magnetic moment at a given area in space at a specific instant in time.

Planetary rotation

The current generally accepted explanation for planetary rotation is that as the Sun and the Planets formed from a gas cloud, the gas cooled and as it condensed it caused a rotation due to the conservation of angular momentum.  As there are relatively few external forces acting on the planets, it is generally accepted that through the conservation of angular momentum any planetary motion originally generated has been largely retained.  The major exception to this is the rotation of the planet Uranus.  It has been suggested that the unexpected rotation of the planet in relation to its polar axis may be due to a past meteorite impact.

Possible alternative explanation for Planetary Rotation

As mentioned in my previous note on the proposed explanation of the results of the double slit experiment, in practice I suspect that all magnetic fields could be due to the alignment of net magnetic, electrical and mass / gravitational properties of the cumulative space-time surface.

If we assume at this stage that the cumulative space-time surface does represent a magnetic field in our particular region of space at the present time, then using Fleming's left-hand rule we would expect that a lightning strike (of negative charge) would result in a force that would rotate the planet from West to East.  It could also contribute to the direction of the winds.

It is also interesting to note that the planet Mars is believed to have had a similar atmosphere to our own.  It also has a similar rotational speed and inclination of the equator to its orbit.  Although the atmosphere on Mars has been largely lost, by the conservation of angular momentum it should have retained the rotational speed and the angle of rotation from when an atmosphere was present.

The planet Venus could similarly have its rotation generated by lightning strikes, but in this case the planet has a slow retrograde motion.  As the atmosphere of Venus is very different to that of the Earth and to what is believed to have existed on Mars, the slow retrograde motion could possibly be the result of a more closely balanced positive and negative lightning strike rate, with the balance in favour of positive strikes.

In addition to the Earth and Venus, Jupiter and Saturn are both known to have lightning storms.  Both planets have relatively fast planetary rotations compared to Mercury and Pluto, which have negligible magnetic fields.  The planets Uranus and Neptune also have relatively fast planetary rotations but currently, I believe, there have been no documented lightning storms.  Personally I suspect they will have lightning storms, but in the case of the planet Uranus I believe that its planetary mass is non-symmetrical and its rotation will be the result of a combination of lightning strikes and gravitational forces.

Continental drift (plate tectonics) and Earthquake prediction

Although approximately 90% of the lightning strikes on the Earth are negatively charged, it is important to note that a lightning strike of positive charge should create a force in the opposite direction.  There is considerable recent interest in the possibility that positive lightning strikes may help the prediction of tornado formation.

If lightning is found to contribute to the rotation of the planet by applying a force to the tectonic plates then this mechanism must have a significant effect on continental drift (plate tectonics).  It is also quite conceivable that there could be a correlation between lightning strikes and earthquake activity.  In this case it would require complex mathematical modelling to identify all the rotational forces acting on the earth’s tectonic plates.  I would also expect that positive lightning strikes could be very significant.

#### Skyli

• Full Member
• Posts: 54
##### Re: Unified field theory and the structure of the universe
« Reply #3 on: 15/09/2013 12:28:56 »
I'm no physicist but there seems to be an inconsistency in this hypothesis. Paraphrased, my understanding of the argument is that every fixed cube of space is changed by everything that has previously happened in that cube, and these changes in turn affect every future event in that cube. So a photon entering that cube today would not exit the cube in the same way that a photon did yesterday; something would be different. Furthermore, today's photon would itself change the attributes of that cube of space such that the next photon, and the next, would result in an ever increasing degree of "differentness" in that cube - the cube's attributes would diverge more and more from their original values. Now the idea that every cube of space will, eventually, share the same set of events breaks down. Cube A modifies the photons entering it and passes these modified photons on to the adjacent cubes; no other cubes in the universe receive "Cube A-modified" photons.

In this case, if we view the contents of each cube over all time, would they not all be unique rather than identical?

#### alancalverd

• Global Moderator
• Neilep Level Member
• Posts: 4492
• Thanked: 137 times
• life is too short to drink instant coffee
##### Re: Unified field theory and the structure of the universe
« Reply #4 on: 15/09/2013 13:48:32 »
Quote
To support this theory I will use it to explain the results of the double slit experiment and at the same time show how and why the quantum world appears to behave like a wave.

But it doesn't. The photoelectric effect is best modelled as a particle interaction and pair production is a relativistic quantum phenomenon that produces massive particles from photons.

It's not a good idea to build a theory of everything on an untrue statement.

The more likely explanation of the rotation of planets is the inherent spin caused by gravitational accretion of objects with random trajectories. Since clouds are part of the planet, no interaction such as lightning will alter the angular momentum of the whole system.

Quote
The consequences of the proposed theory and its laws are that the path that any matter or gravitational force (or magnetic or electrical forces) can take through the 3-dimensional space at a given instant in time is dependent upon the cumulative sum of the forces and masses that have passed through that point in space from the beginning of time up to but not including the current instant in time.

If only that were true! Formation flying and rifle shooting would be an absolute doddle. But they are actually quite difficult. It might however explain why the last man to throw in a shot put contest often exceeds his personal best to date - physics is so much simpler than psychology!
« Last Edit: 15/09/2013 13:56:48 by alancalverd »

#### Odysseus2

• First timers
• Posts: 9
##### Re: Unified field theory and the structure of the universe
« Reply #5 on: 20/10/2013 04:00:54 »
Response to the post from Skyli

Thank you for your question and apologies for the delay in getting back to you.  I think it might be best if I describe an example to illustrate how this could work in practice.

When we view the matter in the Universe we see it at an instant in time.  The proposed theory suggests that the cumulative mass at every point in space will tend to a constant as time tends to infinity.

Single particle Universe

If we imagine a Universe that only contains a single particle, then according to the proposed theory the particle would need to lay down a mass footprint that tends to a constant as time tends to infinity, at every point in space.

As an example, we can imagine that the planet Earth will represent our particle.  The reason for using the planet Earth as an example is that we know that the mass of the planet has a non-uniform distribution, so we need to explain how we get anything approaching a constant from here.

Let’s imagine that our particle suddenly appears from nowhere.  If we also imagine our cubes to be on the subatomic scale, clearly our particle is at least partially occupying a very large number of cubes.  (Please note, that we mean ‘partially occupying’ in the sense that a cube can only be considered to be fully occupied for mass when it has received sufficient mass to reach the final constant value that will be populated in each cube over all time).

We can now spiral our particle out from its initial centre of mass at a very gradual rate, for example at the nanometre scale for each complete revolution about its centre.  In this way our particle will describe an ever-increasing sphere over time.  As we move beyond the diameter of the Earth from the initial centre of origin, we can see that the mass that has passed through each cube would tend towards a constant.

It would be best to imagine that the particle is rotating around its centre of origin: it is attracted to the mass that has gone before it, and can only fall into a space that has so far been sufficiently unoccupied to accept its mass at the present instant in time.

In this way, over a vast period of time we could imagine that the particle would appear to have an orbit around a central point, and could eventually have an orbit that would appear identical to the one that we have for the Earth today.
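As a rough 2-dimensional check of the "constant footprint" claim, we can sample an Archimedean spiral (constant spacing between successive turns, standing in for the nanometre pitch) and count how much of it falls into each grid cell.  The pitch, cell size and radius below are arbitrary toy values of my own choosing, not anything taken from the theory:

```python
import math
from collections import defaultdict

def spiral_cell_counts(pitch=0.05, cell=1.0, r_max=6.0, step=0.01):
    """Sample the Archimedean spiral r = (pitch / 2*pi) * theta at roughly
    constant arc-length steps and count samples per grid cell.
    Near-equal counts mean a near-constant 'footprint' per cell."""
    b = pitch / (2 * math.pi)
    counts = defaultdict(int)
    theta = 0.0
    while b * theta < r_max:
        r = b * theta
        x, y = r * math.cos(theta), r * math.sin(theta)
        counts[(math.floor(x / cell), math.floor(y / cell))] += 1
        # polar arc length: ds^2 = dr^2 + r^2 dtheta^2 = (b^2 + r^2) dtheta^2
        theta += step / math.sqrt(b * b + r * r)
    return counts

counts = spiral_cell_counts()
# keep only cells lying fully inside the sampled disc, away from the edge
interior = [v for (i, j), v in counts.items()
            if math.hypot(max(abs(i), abs(i + 1)), max(abs(j), abs(j + 1))) < 4.5]
print(len(interior), min(interior), max(interior))
```

In this toy run the interior cell counts come out close to one another, which illustrates the near-constant footprint the single-particle example appeals to.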

Interestingly under the proposed theory this example shows that a central Sun would not be required to have planets moving about an orbit from a central point.  In practice, I understand that we do have orbiting planets without a central Sun present in the Universe and that these planetary orbits are used as evidence of the presence of black holes.

It is also interesting to note that the proposed theory provides a possible reason for why all matter rotates, and why we have spiral galaxies.  Even on the atomic scale all particles are rotating or spiraling about their direction of motion.

Multiple particle Universe

In the multiple particle Universe we would get the same outcome as for the single particle, but here multiple particles are occupying the space to provide the same result.

In this way, as an example, we could imagine that the planets in our solar system are behaving as a collective particle, rotating in a similar way to that which we described for the single particle, and ultimately still providing a constant footprint over a vast time period.

The proposed theory suggests that all particles will take on a wave like motion as they pass through the cumulative space-time surface, giving rise to wave particle duality.

#### Odysseus2

• First timers
• Posts: 9
##### Re: Unified field theory and the structure of the universe
« Reply #6 on: 02/11/2013 02:38:48 »
Response to post from alancalverd

Response to lightning comment

“The more likely explanation of the rotation of planets is the inherent spin caused by gravitational accretion of objects with random trajectories. Since clouds are part of the planet, no interaction such as lightning will alter the angular momentum of the whole system”.

Since the cumulative space-time surface suggested by the proposed theory would not be part of the "whole system" (the planet and clouds), I believe that in theory lightning could alter the angular momentum of the planet.

I am aware that gravitational accretion is the most commonly accepted cause of planetary rotation, and clearly it is entirely plausible.  Using that theory the rotation of any planet can be explained away.  Personally, I suspect a different underlying reason for planetary rotation, since the theory of gravitational accretion implies that planetary rotation is essentially random.  If with more data we can conclude that lightning plays no part in planetary rotation (for example, if we find planets with lightning strikes but no rotation), then I would be inclined to suspect that planetary rotation is caused by direct gravitational effects arising from the uneven distribution of the planet's mass.

Response to formation flying comment

“If only that were true! Formation flying and rifle shooting would be an absolute doddle. But they are actually quite difficult. It might however explain why the last man to throw in a shot putt contest often exceeds his personal best to date - physics is so much simpler than psychology!”.

As we are talking about gravitational effects: as far as formation flying is concerned, a mountain could in theory have flown past previously, and its gravitational effects would be negligible.  The same would be true for the other examples given.

Response to double slit comment

“But it doesn't. The photoelectric effect is best modelled as a particle interaction and pair production is a relativistic quantum phenomenon that produces massive particles from photons.”

In his paper on quantum mechanics, Richard Feynman wrote: "Probability can show the typical phenomena of interference, usually associated with waves, whose intensity is given by the square of the sum of the different sources."

(Source: R. P. Feynman, "Space-Time Approach to Non-Relativistic Quantum Mechanics".)

My understanding of the modeling of the results of the double slit experiment using quantum mechanics is that if we apply the experimental parameters to the model, we obtain the probability density function of the expected final recorded particle positions.  It is this probability density function that shows the expected interference pattern.  Furthermore, the probability density function relates to the individual particle, i.e. it describes the probability that a specific particle will arrive at a certain location or area of the detector.

As I understand it, in the case of the double slit experiment the model only assumes that there are two sources of waves: a wave from slit 1 and a wave from slit 2.  Beyond the parameters of the experiment, through which the interference pattern is predicted, the model does not explain, or even attempt to explain, the source of the wave.  That is, it has nothing to say about whether the interference arises from pair production, or from current particles somehow being influenced by particles that have previously passed nearby, as suggested by the proposed theory.  It is therefore my understanding that the proposed theory and the existing theories would be modeled identically through quantum physics.

In order to develop a test that can eliminate one of the two theories, we need to understand how they differ.  The key difference between the existing theories and the proposed theory is the source of the interference in the wave pattern produced.  In the proposed theory, for particles that pass one at a time through one of the two slits, the source of interference for a particle passing through slit 1 would be any previous particles that have passed through slit 2, and vice versa.

In the current physics theories put forward to explain the results of the double slit experiment, I am not aware of any concept of particles from the past influencing matter in the present.  In this case we are left with a single particle that is somehow aware of the second slit.  The proposed solution is that the particle must in some way subdivide and pass through both slits, so that the path of the particle can be influenced by itself.

Unfortunately the double slit experiment in its current form is not sufficient to enable us to eliminate one of the two competing theories.  As a result I would like to propose a further refinement to the double slit experiment that should enable us to determine which of the two theories is correct.

Proposed refinement to the Double slit experiment

In theory, and hopefully in practice, the proposed refinement to the double slit experiment is relatively straightforward.  Provided we can move the mask that covers the individual slits without disturbing the remainder of the apparatus, it is anticipated that the refined version of the experiment could be completed within a day.

As a precaution, prior to starting the experiment the equipment should be set up in a new location, so that any paths taken by particles from previous experiments would be too far away to influence current particles.  The equipment should also be configured exactly as it was when double slit interference was detected in a previous experiment, since no adjustment can be made to check that we have the interference pattern before recording the first photon location at the detector.

Herman Batelaan of the University of Nebraska-Lincoln, together with colleagues there and at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, conducted an excellent example of the double slit experiment.  The experiment took approximately 2 hours and shows the build-up of an interference pattern from over 6,000 electrons.  (Source: http://physicsworld.com/cws/article/news/2013/mar/14/feynmans-double-slit-experiment-gets-a-makeover)

Using an apparatus similar to that of the experiment above, the refinement to the existing double slit set-up would be that, instead of sending particles with the possibility of travelling through both slits at the same time, we randomly send each particle through one of the two slits, using a set of random numbers to determine which slit is open and which is closed until a particle is detected at the detection device.  In this way the experiment can be repeated, up to say 6,000 times, to see if the interference pattern is generated.

If no interference pattern is registered at the detector, then we can conclude that particles from the past are not the source of the interference pattern.  Alternatively, if a pattern is generated, the existing theories cannot explain it, as there would never be an opportunity for a particle to pass through both slits at the same time, since one of the two slits would always be closed.
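To make the two possible outcomes concrete, here is a minimal sketch in Python of the standard-physics predictions for this protocol.  All parameter values, function names, and the Gaussian envelope are my own toy assumptions, not taken from the post or from any real apparatus: a photon passing one open slit lands according to a plain diffraction envelope with no fringes, while the ordinary both-slits-open case produces a cos²-modulated fringe pattern.

```python
import math
import random

# Toy parameters (my own assumptions); positions are in arbitrary
# units along the detection screen.
SLIT_SEPARATION = 4.0  # distance between the slit centres projected on the screen
ENVELOPE_SIGMA = 6.0   # width of the single-slit diffraction envelope
FRINGE_SPACING = 1.0   # fringe spacing when both slits act together

def single_slit_sample(rng, slit_centre):
    """Landing position for a photon that passed one open slit: a plain
    diffraction envelope, no fringes (standard prediction)."""
    return rng.gauss(slit_centre, ENVELOPE_SIGMA)

def both_slits_sample(rng):
    """Landing position with both slits open (standard prediction): the
    envelope modulated by cos^2 fringes, drawn by rejection sampling."""
    while True:
        x = rng.gauss(0.0, ENVELOPE_SIGMA)
        if rng.random() < math.cos(math.pi * x / FRINGE_SPACING) ** 2:
            return x

def refined_experiment(rng, n_photons=6000):
    """The proposed protocol: a random number opens exactly one slit per
    photon; the other slit stays closed until the photon is detected."""
    hits = []
    for _ in range(n_photons):
        centre = SLIT_SEPARATION / 2 if rng.random() < 0.5 else -SLIT_SEPARATION / 2
        hits.append(single_slit_sample(rng, centre))
    return hits
```

Comparing a histogram of `refined_experiment` hits against a histogram of `both_slits_sample` draws would show directly whether fringes survive the one-slit-at-a-time protocol; under standard theory they do not.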

« Last Edit: 22/11/2013 20:44:07 by Odysseus2 »

#### Odysseus2

• First timers
• Posts: 9
##### Re: Unified field theory and the structure of the universe
« Reply #7 on: 10/11/2013 05:50:34 »
The double slit experimental results explained due to the presence of the cumulative space-time surface.

Please note:  I have changed the explanation of what I believe is happening in the next post but I have left this in for completeness.

If we set up the double slit apparatus with a slit separation width appropriate to obtain the interference pattern then we can imagine the pattern build up under the various scenarios, as follows:

Single slit open

With a single slit open (let's say slit 1), photons will be emitted at random from the light source.  Under the proposed theory the cumulative space-time surface, in order to obtain a constant mass over time, will influence the paths that the particles take from their point of origin, such that over time a constant mass is realised at a specific point (and ultimately all points) in space.  I am suggesting that as particles pass through the single open slit (where the slit width is very large compared to the particle width), and are emitted at random from the source, they should arrive in localised random areas.  I believe that the constraints applied by the cumulative space-time surface will organise the paths of the early particle emissions from the single slit into a number of localised normal distributions.  Over a short time period, with further particle emissions, we should imagine a set of normal distributions being generated across the whole of a wave front.  These normal distributions will be larger directly in front of the light source and become progressively smaller as we move away from the central location.  Although I am describing what I believe happens with only a single slit open, it is a key step in understanding how the interference pattern is produced with both slits open.

It is very important to realise that, after many particle emissions, these normal distributions will simply combine into a single large normal distribution, with its centre in line with the centre of the light source.  Consequently, as the experiment progresses with a single slit open, we get a result consistent with a single normal distribution of photons (in both the vertical and horizontal directions).  This produces a single band pattern at the photosensitive screen.
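The claim that many small normal distributions combine into a single large one can be checked numerically.  The sketch below (Python; the function name and parameters are my own illustration, not part of the post) samples from a mixture whose component centres are themselves normally spread across the wave front; the mixture is again a single normal distribution, with variance equal to the sum of the two variances.

```python
import random
import statistics

def sample_wave_front(rng, n, centre_sigma, component_sigma):
    """Draw n hits from many small normal distributions whose centres are
    normally spread across the wave front: for each hit, pick a component
    centre at random, then scatter around it.  The resulting mixture is
    normal with variance centre_sigma**2 + component_sigma**2."""
    return [rng.gauss(rng.gauss(0.0, centre_sigma), component_sigma)
            for _ in range(n)]
```

With `centre_sigma = 3` and `component_sigma = 1` the combined spread is sqrt(10) ≈ 3.16: a single broad bell curve, with no discernible sub-peaks, exactly as the single-band pattern requires.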

If we refer to the diagram in the following link:

we should imagine that all of the individual wave fronts shown schematically in the diagram would, at the very early stages of the experiment only, be made up of a wave comprising small normal distributions.  In effect this would create a wave pattern of normal distributions across the wave front, but its frequency and amplitude would be so small as to be indiscernible.

We should also imagine that, as we move further away from the light source, these normal distributions become less focused.  This is a result of Heisenberg's uncertainty principle.

Both slits open

If we open both slits, we now have two sets of wave fronts, each initially behaving as described above for the early stages with only a single slit open, i.e. both sets of wave fronts carrying the set of normal distributions described for slit 1.  If we refer to the diagram (in the attached link), the dashed lines show the points of intersection for the two sources of diffracting waves.  Essentially, this is the key to the interference pattern.  As the waves radiate out from the source in the initial stages of the double slit experiment, with each wave front carrying a set of normal distributions as described above for a single open slit, the paths of groups of particles (from each normal distribution) that have passed through slit 1 will intersect the paths of groups of particles from slit 2.  I believe that this intersection of the paths of the two groups of particles is sufficient to produce gravitational centres.

If we imagine a cone of increasing radius around each of the dashed lines in the diagram, as we move away from the photon source, encompassing the increasing radii of the points of intersection, then as future particles are emitted from the light source their paths will be diverted very early in their journey, and these photons will take a spiral path through the gravitational centres produced.  Ultimately this produces the spheres shown at the photosensitive screen.  As we move further from the light source, the intersecting groups of particles will be more widely spread, resulting in larger spheres.

Why don’t we get the interference pattern if we leave slit 1 open for some time and then slit 2?

Unless we alternate the source of the photons (between slit 1 and slit 2; see ‘Proposed refinement to the double slit experiment’ in my previous post), the small normal distributions at the wave fronts will be lost within the larger overall normal distribution over the wave fronts.  We will then no longer get intersecting paths of groups of particles, which prevents gravitational centres from forming.  As a result the interference pattern is lost.

Why is the interference pattern lost when we attempt to detect which slit the photon passes through?

Again, any attempt to measure the location of the particle will have an effect on the cumulative space-time surface (that is, on the mass in a particular localised area of space), and as a result will destroy the wave front of normal distributions required to produce the interference pattern.

Proposed refinement to the Double slit experiment

As set out in full in my previous post, the proposed refinement (randomly sending each particle through only one of the two slits, with the other closed) would ultimately determine which of the two theories is correct.
« Last Edit: 04/01/2014 00:20:03 by Odysseus2 »

#### Odysseus2

• First timers
• Posts: 9
##### Re: Unified field theory and the structure of the universe
« Reply #8 on: 04/01/2014 00:27:50 »
The double slit experimental results explained due to the presence of the cumulative space-time surface.

Short explanation

I believe the interference pattern observed in the double slit experiment is due to the centre of gravity of the system randomly oscillating around a central line located at the perpendicular midpoint between the two slits.  (In addition to oscillating about the central line, the centre of gravity increases in intensity as more mass is added to the system.)  This suggests that the pattern emerges as a result of sequential particle emissions, and not, as currently theorized, as the result of each particle carrying an identical probability density function.  This solution could only be realized if past particle paths contribute to the shape of a cumulative space-time surface, which in turn contributes to the localized gravitational field.

A more detailed explanation

The proposed cumulative space-time surface theory suggests that all points (or states) on the paths taken by each previous particle, and by the current particle, should be considered present at the current time.  This is similar to, but not the same as, current quantum theory.  In quantum theory it is assumed that previous particle emissions do not affect the current particle in any way and that the paths of previous particles are not relevant; instead, the current particle subdivides in some way to occupy all possible paths at the present instant in time (this is known as quantum superposition), and finally the particle appears at a single location.

The key difference between the two theories is that in the quantum model every particle is assumed to carry the same probability density function, where the probability density function determines the likelihood of a particle arriving at a specific location in space.  In my view this is effectively the same as saying that we know what the final distribution of many particle emissions should look like, and that if every particle carries this identical probability density function then, after a sufficient number of emissions, we will arrive at the expected distribution.  In the cumulative space-time surface model each particle does not carry an identical probability density function; instead it is assumed that, given a sufficient number of particle emissions, the final particle locations observed at the detection screen will converge to the expected solution.

In practice the key difference is that the cumulative space-time surface model assumes the sequential development of the final probability density function, whereas the current quantum model assumes an instantaneous solution: since each particle carries an identical probability density function, each particle would be expected to contribute to the final solution identically.  In short, under the quantum model every random sample of sufficient volume should be representative of the final solution.  If we took the first 2,000 observations, they should have statistically the same distribution as the next 2,000, and the next 2,000, and so on.

In the cumulative space-time surface model we should imagine that the solution is arrived at sequentially, and that the third set of 2,000 observations is more likely to have converged towards the gravitational centres than the first 2,000.  Data to test whether the observations converge over time to the interference pattern may already exist from the experiments conducted by Professor Batelaan and his colleagues.  If there is any evidence of convergence towards the gravitational centres (see attached diagram) in the formation of the interference pattern, then the solution is sequential, and past particle emissions should be considered the probable cause.  In that case the mean distances to the gravitational centres for each group of particles (e.g. the first 2,000 observations versus the third 2,000) could be measured, and the differences in the sample mean distances checked for statistical significance.  To prove definitively that past particle emissions are responsible for the interference pattern, the experiment outlined in my earlier post could be conducted.
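The statistical check suggested here could be sketched as a permutation test on the mean distance to the nearest fringe centre, comparing an early batch of detections with a later one.  This is a hypothetical analysis in Python using only the standard library; the centre positions and the detection data themselves would have to come from a real experiment such as Professor Batelaan's, and all names below are my own.

```python
import random
import statistics

def distance_to_nearest_centre(x, centres):
    """Distance from a detector hit x to the nearest fringe centre."""
    return min(abs(x - c) for c in centres)

def mean_centre_distance(hits, centres):
    """Mean distance of a batch of hits to their nearest fringe centre."""
    return statistics.mean(distance_to_nearest_centre(x, centres) for x in hits)

def permutation_p_value(batch_a, batch_b, centres, n_perm=500, seed=0):
    """Two-sided permutation test on the difference in mean centre distance
    between two batches of detections.  A small p-value indicates the two
    batches are concentrated around the centres to significantly different
    degrees."""
    rng = random.Random(seed)
    observed = abs(mean_centre_distance(batch_a, centres)
                   - mean_centre_distance(batch_b, centres))
    pooled = list(batch_a) + list(batch_b)
    n_a = len(batch_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(mean_centre_distance(pooled[:n_a], centres)
                   - mean_centre_distance(pooled[n_a:], centres))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm
```

A small p-value on real early-versus-late batches would indicate that the later detections sit significantly closer to (or further from) the centres than the earlier ones, i.e. evidence of sequential convergence; a large p-value would be consistent with every batch sharing one fixed distribution.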

The attached diagram illustrates what I believe occurs during the double slit experiment.  It shows rays emerging from the two slits, taking paths from each slit from 90 degrees to approximately 45 degrees towards the centre of the system; in practice the rays from each slit would span a full 180 degrees.  A centre line has been included at the midpoint between the two slits, reflecting the line of the overall centre of gravity of the system.  Further lines run from the midpoint between the slits to what I believe are localized centres of gravity, represented by the blue circles in the diagram.  The sizes of the circles do not indicate the strength of the gravitational centres.  The strength of the gravitational field within each centre should be normally distributed from the centre of each circle in both the horizontal and vertical directions.  As we move away from the photon source, and away from the central line between the two slits, the gravitational centres become weaker, as illustrated at the bottom of the diagram.  I believe it is precisely this distribution that is observed in the double slit experiment.  As the particles passing through each gravitational centre hit the detection screen, we have the greatest intensity at the central location, with the intensity maxima weakening as we move away from the centre.

If we set up the double slit apparatus with a slit separation width appropriate to obtain the interference pattern then I believe that we can consider the interference pattern to develop as follows:

Single slit open

If we consider just a single slit open, let’s say slit 1, then we need to understand where the centre of gravity is located.  In this case the line of the centre of gravity will be the vertical line from the centre of slit 1.  It is very important to realize that the actual centre of gravity will oscillate about this central line; if we were to plot the changing centre of gravity around the midpoint of slit 1 (as new particles arrive at either side of the slit), this would appear as a circular plot with an increasing number of observations as we approach the centre of the circle, i.e. normally distributed in both the horizontal and vertical directions.

More specifically, in the plane across the length of slit 1, the centre of gravity will oscillate to the left or right of the central line as particles arrive randomly on either side of the centre of the slit.  We would expect the number of times a new observation arrives on the same side of the slit as the previous observation to follow a normal distribution.  As a result, the centre of gravity in the plane across the length of slit 1 should be normally distributed about the central point, and this normal distribution should become wider as we move away from the slit.  Extending this to the width of slit 1, we would expect the centre of gravity to oscillate randomly around the central point of the slit, forming a normal distribution from the central point in all directions in the horizontal plane, creating the circular plot described above.  (I believe that these oscillations are the root cause of the interference pattern observed when both slits are open, showing multiple circular plots at points of intersection.)

Now that we understand what the centre of gravity is doing, we can consider any random local path taken by a number of particle emissions.  If we consider particles that are emitted from the same very localized point in the same direction, they will follow a path that is normally distributed.  This is a result of the path being affected by the centre of gravity of the system, and we have just shown that the centre of gravity for slit 1 is, over time, normally distributed about a central line.

Where only a single slit is open, we would expect these normal distributions simply to add into an overall normal distribution.  As the centre of gravity will be strongest on the line perpendicular to the midpoint of slit 1, we would expect the intensities of the normal distributions to be greatest directly in line with the centre of the slit and to weaken as we move away from the central point.  With just a single slit open, the overall observed pattern is indeed a normal distribution with the greatest intensity at the centre and weakening intensity away from it.

Both slits open

With both slits open we have a very similar scenario.  In this case the overall centre of gravity of the system moves from the perpendicular line through the centre of slit 1 to the perpendicular line at the midpoint between slit 1 and slit 2.  In the same way that the centre of gravity was shown to oscillate around the central line for slit 1, the centre of gravity of the system will randomly oscillate around the central line between the two slits.  We can now imagine that as particles travel down each of the rays (as illustrated by the example rays in the diagram), their paths will be affected by the oscillating gravitational centre of the system to produce localized normal distributions.  As these normal distributions intersect, I believe they create the gravitational centres illustrated in the diagram, where the strength of each localized gravitational centre is proportional to the product of the two intersecting masses from the localized normal distributions.

Tube theory

In order to provide a possible explanation of how the cumulative space-time surface and gravity are related and how this impacts matter at the subatomic level, I imagine that particles experience what we understand as gravity in a way that I refer to as tube theory.

I imagine that subatomic particles, when travelling, form tubes in the cumulative space-time surface.  The particles then simply roll around the inside surface of the tube.  I also believe that the diameter of the tube (if we consider it to be circular) is determined by the mass and velocity of the particle, where a greater mass or velocity results in a smaller tube diameter.  This, I believe, determines the wavelength of a particle.  In practice, for the individual particle, as a result of the cumulative space-time surface theory, I believe a particle essentially revolves about its own centre of gravity, and that this centre of gravity is determined by the previous states occupied along its path of travel.

The relevance of tube theory is that it provides a possible illustration of how gravitational centres could form.  Imagine that a tube’s diameter shrinks or grows instantly as it detects stronger or weaker gravitational fields (which to some extent I believe is reflected in the de Broglie wavelength).  Then, as a particle travels over a region of cumulative space-time that has experienced a greater mass, the tube diameter will decrease.  If we assume the tube has a circular cross section (although in practice I think it should be regarded as elliptical, like the planetary orbits), the particle path will take a tighter circular orbit around its central point.  If we then put a detection screen in front of the particle, it will be detected within a much more precise region of space, i.e. somewhere on the perimeter of the circular orbit at a small distance from the centre of the orbit.  Alternatively, if the particle detects less past matter in its current location, the tube diameter will expand, and the particle will be seen further from its central location on impact with a surface.  I believe this is how we observe mass concentrations that result in gravitational centres of increasing intensity.
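For reference, the standard de Broglie relation that the tube diameter is being compared to is λ = h / (m·v): greater mass or velocity does indeed give a smaller wavelength.  A short Python sketch of the standard relation follows; the mapping from wavelength to tube diameter is this post's speculation, not standard physics, and the example speed is my own choice.

```python
PLANCK_H = 6.62607015e-34      # Planck's constant, J*s (exact SI value)
ELECTRON_MASS = 9.1093837e-31  # electron rest mass, kg
LIGHT_SPEED = 2.99792458e8     # speed of light in vacuum, m/s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Standard (non-relativistic) de Broglie wavelength: lambda = h / (m * v)."""
    return PLANCK_H / (mass_kg * speed_m_s)

# An electron at 1% of light speed, roughly the regime of electron
# double slit experiments:
electron_wavelength = de_broglie_wavelength(ELECTRON_MASS, 0.01 * LIGHT_SPEED)
```

This gives a wavelength of roughly 0.24 nm, and doubling either the mass or the speed halves it, matching the inverse relationship proposed here for the tube diameter.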

« Last Edit: 04/01/2014 00:30:56 by Odysseus2 »

#### Odysseus2

• First timers
• Posts: 9
##### Re: Unified field theory and the structure of the universe
« Reply #9 on: 08/06/2014 11:48:47 »
Dark Matter

A consequence of the cumulative space-time surface theory is that past matter continues to exert a gravitational influence, even though it is no longer visible in the present time.  I believe this is also the defining property of dark matter: something that exerts a gravitational influence but is otherwise undetectable.  In practice, I believe that dark matter is really past matter.
