« on: 11/04/2021 05:36:10 »
The single slit experiment is usually explained using Huygens' principle, as in these videos.
Is it really the best explanation we can provide?
The purpose of this article is to point out that Maxwell's electromagnetic theory,
believed by the majority of scientists to be a fundamental theory of physics, is in fact
built on an unsupported assumption and on a faulty method of theoretical investigation.
The result is that the whole theory cannot be considered reliable, nor its conclusions
accurate descriptions of reality. This work calls into question whether radio waves
(and light) travelling in vacuum are indeed composed of mutually inducing electric and
magnetic fields.
This study is addressed to that small percentage of students and researchers who suspect
that there is something wrong with the way we currently understand how radio waves are
generated and how they propagate in space.
I know that there is always a feeling of distrust amongst students when university
professors derive the wave equation from Maxwell's four equations. I felt it myself as a
student, and I have seen it again in the open courses made available on the Internet by
prestigious universities around the world. Students ask pertinent questions, but the
professor fails to address the issue.
[See min. 0:35:00].
What do you think is wrong with his argumentation? Or do you agree with him?
In conclusion, this article has shown that Maxwell's theory of electromagnetic waves
contains an unfounded assumption, employs a faulty method of theoretical investigation,
and makes a prediction that is contrary to observations:
(i) the unfounded assumption that a changing magnetic field B creates (induces) an
electric field E (a.k.a. Faraday's law of electromagnetic induction). In fact, a changing
magnetic field B is observed to produce an electric current J, not an electric field E,
and there is a great difference between an electric current J and an electric field E.
(ii) the assumption that a changing electric field E creates (induces) a magnetic field B
(a.k.a. Maxwell's correction to Ampère's Law). Maxwell derived this through a faulty
method of theoretical investigation; no such effect was known in Maxwell's time, and no
experiment made since then has proved this assumption.
(iii) the prediction that radio waves and light are composed of entangled electric and
magnetic waves that create (induce) one another in vacuum. No experiment has revealed
that radio waves and light have a structure containing electric and magnetic fields.
Although it seemed an easy and straightforward matter to accomplish, Faraday failed
in his attempt to change the plane of polarization of light travelling in vacuum by the
application of strong electric and magnetic fields. Only when the polarized beam of light
passed through glass of great density could this be accomplished, and even then by the
application of a magnetic field only.
Furthermore, Faraday initially applied the magnetic field perpendicular to the ray,
believing that this would change the direction of the plane of polarization. Not obtaining
any positive result, he then placed the magnetic field parallel to the direction of the ray,
and he finally obtained the change he was looking for. But then how can this result be
reconciled with the theory in which light is considered to be composed of two transverse
magnetic and electric fields? It does not seem that the magnetic field applied by Faraday
and the magnetic field of the light-ray vibrating perpendicular to it give a resultant in a
Three Polarizer "Paradox"
http://www.users.csbsju.edu/~frioux/polarize/POLAR-sup.pdf
If the polarizers are opposed at a 90° angle, the polarized light from the first polarizer is stopped by the second. If a third polarizer is sandwiched between the two opposed polarizers at a 45° angle some light gets through the last polarizer.
A beam of unpolarized light illuminates a vertical polarizer and 50% of the light emerges vertically polarized. This light beam encounters a diagonal polarizer oriented at a 45 degree angle to the original vertical polarizer and 50% of it emerges as diagonally polarized light. Finally 50% of the diagonally polarized light passes a horizontally oriented polarizer. In other words, 12.5% of the light illuminating the first vertical polarizer passes the final horizontal polarizer. However, if the diagonal polarizer sandwiched between the vertical and horizontal polarizers is removed, no light emerges from the final horizontal polarizer.
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Quantum_Tutorials_(Rioux)/Quantum_Optics/268%3A_The_Three-Polarizer_Paradox
Using the figure below, vector algebra will be used to analyze this so-called "three-polarizer paradox." The paradox is that, surprisingly, inserting the diagonal polarizer between crossed polarizers allows photons to pass the final horizontal polarizer.
If you take two crossed polarizers (for example, a horizontal and vertical one), no light will get through them. Yet when you insert a third polarizer between the two, oriented diagonally, then some photons make it through. How does adding that polarizer (which will block some photons) cause photons to get through?
https://www.scientificamerican.com/article/quantum-eraser-answer-to-three-polarizer-puzzle/
Say that the first polarizer is horizontal. Any photons that make it through that one are then horizontally polarized. If the vertical polarizer comes next, it will block all of these photons. When the diagonal polarizer is in place, however, it will let half of them through and these transmitted photons will then be diagonally polarized. When these diagonally polarized photons arrive at the vertical polarizer, now half of them will get through—they have no "memory" of ever having been horizontally polarized.
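For classical light, the fractions quoted above follow Malus's law, I = I0·cos²θ, where θ is the angle between successive polarizer axes. A minimal sketch (the function name is my own, for illustration):

```python
import math

def through_polarizers(angles_deg):
    """Fraction of unpolarized light transmitted by a chain of ideal
    polarizers set at the given absolute angles (Malus's law)."""
    if not angles_deg:
        return 1.0
    intensity = 0.5  # an ideal polarizer passes half of unpolarized light
    for prev, cur in zip(angles_deg, angles_deg[1:]):
        intensity *= math.cos(math.radians(cur - prev)) ** 2
    return intensity

print(through_polarizers([0, 90]))      # crossed polarizers: ~0
print(through_polarizers([0, 45, 90]))  # diagonal inserted: 0.125
```

Removing the middle polarizer replaces two 45° steps (each passing half the light) with a single 90° step, which passes nothing; that is the whole "paradox" in classical terms.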
Dirac Three Polarizers Experiment
https://www.informationphilosopher.com/solutions/experiments/dirac_3-polarizers/
In his 1930 textbook The Principles of Quantum Mechanics, Paul Dirac introduced the uniquely quantum concepts of superposition and indeterminacy using polarized photons.
Dirac's examples suggest a very simple and inexpensive experiment to demonstrate the notions of quantum states, the projection or representation of a given state vector in another basis set of vectors, the preparation of quantum systems in states with known properties, and the measurement of various properties.
Albert Einstein said of Dirac and polarization,
"Dirac, to whom, in my opinion, we owe the most perfect exposition, logically, of this [quantum] theory, rightly points out that it would probably be difficult, for example, to give a theoretical description of a photon such as would give enough information to enable one to decide whether it will pass a polarizer placed (obliquely) in its way or not."
"Maxwell's Influence on the Evolution of the Idea of Physical Reality" (1931), in Ideas and Opinions, p. 270
Any measuring apparatus is also a state preparation system. We know that after a measurement of a photon which has shown it to be in a state of vertical polarization, for example, a second measurement with the same (vertical polarization detecting) capability will show the photon to be in the same state with probability unity. Quantum mechanics is not always uncertain. There is also no uncertainty if we measure the vertically polarized photon with a horizontal polarization detector. There is zero probability of finding the vertically polarized photon in a horizontally polarized state.
Since any measurement increases the amount of information, there must be a compensating increase in entropy absorbed by or radiated away from the measuring apparatus. This is the Ludwig-Landauer Principle.
The natural basis set of vectors is usually one whose eigenvalues are the observables of our measurement system. In Dirac's bra and ket notation, the orthogonal basis vectors in our example are | v >, the photon in a vertically polarized state, and | h >, the photon in a horizontally polarized state. These two states are eigenstates of our measuring apparatus.
The interesting case to consider is a third measuring apparatus that prepares a photon in a diagonally polarized state 45° between | v > and | h >.
Dirac tells us this diagonally polarized photon can be represented as a superposition of vertical and horizontal states, with complex number coefficients that represent "probability amplitudes."
| d > = (1/√2) | v > + (1/√2) | h >     (1)
Note that vector lengths are normalized to unity, and the sum of the squares of the probability amplitudes is also unity. This is the orthonormality condition needed to interpret the (squares of the) wave functions as probabilities, as first proposed by Max Born in 1926.
When these complex number coefficients are squared (actually when they are multiplied by their complex conjugates to produce positive real numbers), the numbers represent the probabilities of finding the photon in one or the other state, should a measurement be made. Dirac's bra vector is the complex conjugate of the corresponding ket vector.
It is the probability amplitudes that interfere in the two-slit experiment. To get the probability of finding a photon, we must square the probability amplitudes (more generally, we calculate the expectation value of an operator representing an observable). The probability P of finding the photon in state |ψ> at location r (in configuration space) is
P(r) = |< r | ψ >|² = < ψ | r >< r | ψ >
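The amplitudes in Eq. (1) can be checked numerically; a small sketch in plain Python (real 2-vectors suffice for linearly polarized states):

```python
import math

# Basis states |v> (vertical) and |h> (horizontal) as real 2-vectors.
v = (1.0, 0.0)
h = (0.0, 1.0)

def dot(a, b):
    """Inner product <a|b> for real vectors."""
    return sum(x * y for x, y in zip(a, b))

# Diagonal state |d> = (1/sqrt(2))|v> + (1/sqrt(2))|h>, as in Eq. (1).
d = tuple((x + y) / math.sqrt(2) for x, y in zip(v, h))

# Born rule: probability of finding |d> in |v> (or |h>) is |<v|d>|^2.
p_v = dot(v, d) ** 2
p_h = dot(h, d) ** 2
print(p_v, p_h)  # 0.5 and 0.5 (within floating point)
```

The two probabilities sum to one, which is the normalization condition the text describes.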
No single experiment can convey all the wonder and non-intuitive character of quantum mechanics. But we believe Dirac's simple examples of polarized photons can teach us a lot. He thought that his simple examples provided a good introduction to the subject and we agree.
We animated Dirac's idea of introducing an oblique polarizer between the two crossed polarizers A and B that are blocking all light. Adding this filter actually allows more photons to pass through, which is counter-intuitive.
In the early 20th century, experiments by Ernest Rutherford established that atoms consist of a diffuse cloud of negatively charged electrons surrounding a small, dense, positively charged nucleus. Given this experimental data, Rutherford naturally considered a planetary model of the atom, the Rutherford model of 1911, with electrons orbiting a sun-like nucleus. However, this planetary model has a technical difficulty: the laws of classical electrodynamics (the Larmor formula) predict that an orbiting electron will emit electromagnetic radiation. Because the electron would lose energy, it would rapidly spiral inwards, collapsing into the nucleus on a timescale of around 16 picoseconds. This atom model is disastrous, because it predicts that all atoms are unstable.
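The quoted ~16 ps can be reproduced with the standard back-of-the-envelope classical estimate of the infall time, t = a0³/(4·r0²·c), where a0 is the Bohr radius and r0 the classical electron radius (a sketch, not a full derivation):

```python
# Classical estimate of the time for an electron to spiral into the
# nucleus of a hydrogen-like atom under Larmor radiation.
a0 = 5.29177e-11   # Bohr radius, m
r0 = 2.81794e-15   # classical electron radius, m
c = 2.99792458e8   # speed of light, m/s

t = a0 ** 3 / (4 * r0 ** 2 * c)  # seconds
print(f"{t * 1e12:.1f} ps")      # about 16 ps, matching the text
```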
An extremely sensitive method for the purpose, pioneered by Onnes himself, is the
technique of estimating the upper limit of the resistivity by studying the decay rate of the
persistent current in a superconducting ring. Once established, the time dependence of the
current I(t) through the ring is given by I(t) = I0 e^(−(R/L)t), where I0 is the current at t = 0, R is the
resistance and L is the inductance of the ring. If the superconductor had zero resistance, the
current would not decay even for infinitely long times. However, an experiment can be
performed only over a limited amount of time. In a number of such experiments no detectable
decay of the current was found for periods of time extending to several years.
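As an illustration of how such experiments bound the resistance: if no fractional decay larger than some detection threshold eps is seen over an observation time t, then R < −(L/t)·ln(1 − eps). The inductance, observation time and threshold below are assumed values for illustration, not those of any specific experiment:

```python
import math

# Upper bound on ring resistance from non-decay of a persistent current,
# using I(t) = I0 * exp(-(R/L) * t).
L_ring = 1e-4                  # ring inductance, H (assumed value)
t_obs = 2 * 365 * 24 * 3600.0  # two years of observation, s (assumed)
eps = 1e-6                     # smallest detectable fractional decay (assumed)

R_max = -(L_ring / t_obs) * math.log(1 - eps)
print(f"R < {R_max:.2e} ohm")  # on the order of 1e-18 ohm
```

Even with these modest assumptions the bound is many orders of magnitude below anything measurable with ordinary four-terminal techniques, which is why the persistent-current method is so sensitive.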
In a minor variation of the experiment, after the loop became superconducting, the
source current was switched off, the superconducting loop being driven into the persistent
current mode. It was observed that even now the field generated by coil B remained much
larger than the value in the normal state, indicating that the resistances in the two paths are
exactly zero. This provides additional evidence that no extraneous effects such as differential
terminal resistances have any role to play.
In summary, we have demonstrated that the dc resistance of a superconducting wire
is indeed zero and not just unmeasurably small, thus resolving the uncertainty that had lingered
on for nearly a century after the discovery of the phenomenon of superconductivity.
Temperature is a measure of the average kinetic energy of the molecules in a gas. As the temperature, and therefore the kinetic energy, of a gas changes, the RMS speed of its molecules also changes. The RMS speed is the square root of the average of the squared speeds of the individual molecules.
The kinetic theory of gases is a historically significant, but simple, model of the thermodynamic behavior of gases, with which many principal concepts of thermodynamics were established. The model describes a gas as a large number of identical submicroscopic particles (atoms or molecules), all of which are in constant, rapid, random motion. Their size is assumed to be much smaller than the average distance between the particles. The particles undergo random elastic collisions between themselves and with the enclosing walls of the container. The basic version of the model describes the ideal gas and considers no other interactions between the particles; thus, the nature of kinetic energy transfers during collisions is strictly thermal. The average kinetic energy per molecule of an ideal gas is (3/2) k_B T.
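Combining the two statements above gives the RMS speed as v_rms = √(3·k_B·T/m). A quick numerical check, using nitrogen at room temperature as an example:

```python
import math

# RMS speed of ideal-gas molecules: v_rms = sqrt(3 * k_B * T / m).
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K (room temperature, for illustration)
m_N2 = 4.652e-26     # approximate mass of an N2 molecule, kg

v_rms = math.sqrt(3 * k_B * T / m_N2)
print(f"{v_rms:.0f} m/s")  # roughly 517 m/s
```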
The Stern–Gerlach experiment demonstrated that the spatial orientation of angular momentum is quantized. Thus an atomic-scale system was shown to have intrinsically quantum properties. In the original experiment, silver atoms were sent through a spatially varying magnetic field, which deflected them before they struck a detector screen, such as a glass slide. Particles with non-zero magnetic moment are deflected, due to the magnetic field gradient, from a straight path. The screen reveals discrete points of accumulation, rather than a continuous distribution, owing to their quantized spin. Historically, this experiment was decisive in convincing physicists of the reality of angular-momentum quantization in all atomic-scale systems.
An experiment caught a quantum system in the middle of a jump — something the originators of quantum mechanics assumed was impossible.
When quantum mechanics was first developed a century ago as a theory for understanding the atomic-scale world, one of its key concepts was so radical, bold and counter-intuitive that it passed into popular language: the “quantum leap.” Purists might object that the common habit of applying this term to a big change misses the point that jumps between two quantum states are typically tiny, which is precisely why they weren’t noticed sooner. But the real point is that they’re sudden. So sudden, in fact, that many of the pioneers of quantum mechanics assumed they were instantaneous.
A new experiment shows that they aren’t. By making a kind of high-speed movie of a quantum leap, the work reveals that the process is as gradual as the melting of a snowman in the sun. “If we can measure a quantum jump fast and efficiently enough,” said Michel Devoret of Yale University, “it is actually a continuous process.” The study, which was led by Zlatko Minev, a graduate student in Devoret’s lab, was published on Monday in Nature. Already, colleagues are excited. “This is really a fantastic experiment,” said the physicist William Oliver of the Massachusetts Institute of Technology, who wasn’t involved in the work. “Really amazing.”
But there’s more. With their high-speed monitoring system, the researchers could spot when a quantum jump was about to appear, “catch” it halfway through, and reverse it, sending the system back to the state in which it started. In this way, what seemed to the quantum pioneers to be unavoidable randomness in the physical world is now shown to be amenable to control. We can take charge of the quantum.
Hidden beneath the expanding lobes of gas and dust in this multi-wavelength view is one of the most massive and volatile stars in the Milky Way. Eta Carinae was just another nondescript star until 1843, when it underwent a dramatic outburst and briefly became the second-brightest star in the night sky. It’s unclear exactly what prompted this explosive episode, but we do know that Eta Carinae – actually a double star system concealed at the epicentre of the two lobes – shed an enormous amount of mass from its outer layers in the process.
Hubble's false-colour image combines visible light observations by its Wide Field Camera 3 with ultraviolet-light data from its Ultraviolet Imaging Spectrograph. It shows the presence of gas – magnesium in blue, and shocked nitrogen gas presented here in red – that could have been ejected by the star shortly before its outburst, and which could therefore provide clues as to what caused the tumultuous eruption.