But usually you start at the simple end, right? I don't know, perhaps the way momentum starts with a mass and a velocity, but depending on whether you have a relativistic or a quantum context you then have a more complex physical thing. Or do you just have a more complex idea?Did you want a broad and general discussion about the history of scientific development or how new theories in physics are usually developed? Are you asking me what I do? (I don't usually develop new theories).
neatly expressed by my engineering colleagues: mechanical engineers build things that move, civil engineers build things that don't move.Who builds swing bridges?
I was asked more than once, in the discussion I mentioned, to define what I thought "physical" means. So I said anything with physical units is physical.Are there nonphysical units or do you just mean units? The reason I ask is because energy has units and energy itself is not what I would call a physical thing.
energy itself is not what I would call a physical thing.Would you call it a religious thing, or a grammatical thing or an agricultural thing?
My opinion is that physics is the business of building mathematical models of things that happen (or don't happen - see below*) in order to predict what will happen next or if we alter something.Yes, fair enough. I would add that the models arise because Physics is also the science of measurements; it's about "how are we measuring", not so much what the thing being measured is except that "it's physical".
Are there nonphysical units or do you just mean units? The reason I ask is because energy has units and energy itself is not what I would call a physical thing.Ok. Richard Feynman said nobody knows what energy is, I'm guessing that still holds. So, can you or anyone say it's not a physical thing?
Ok. Richard Feynman said nobody knows what energy is, I'm guessing that still holds. So, can you or anyone say it's not a physical thing?Yes.
Hopefully you aren't thinking in Metaphysical terms, are you?No. I think I might be thinking in terms of: If Feynman is correct, nobody can say what energy is.
How do we measure a distance? We use a fixed unit of . . . distance. Algebraically speaking, we tile a one dimensional space or make a pattern appear.Actually that is a debatable point. It's not really how the modern idea of a physical distance is thought about.
Ok. Richard Feynman said nobody knows what energy is, I'm guessing that still holds.@Origin actually did very well bringing this up as a point of discussion.
An entity is a distinct object - electron, motor car, whateverOk. I got into this a bit with other people who seemed keen to point out that once you define an entity, it has attributes or properties. But are the properties the things that are identified and in what way is an entity separate from its attributes?
The way you define distance matters, it matters a lot. The modern understanding of distance is NOT based on fitting many sticks between two points or "tiling" a one dimensional space as you described. Under the modern understanding of "distance" we cannot meaningfully say that light has slowed down compared to yesterday.Isn't the modern understanding about having a more precise way to measure distances? I can still use my 'tiling algorithm' and it's quite serviceable.
Isn't the modern understanding about having a more precise way to measure distances?That is only one half of it. Yes we have a more precise or accurate way of specifying a distance.
I can still use my 'tiling algorithm' and it's quite serviceable.For practical purposes, yes. Your model might have c, the speed of light, vary over time or space and it might be unsuitable for predicting real world phenomena in regions of space with unusual curvature (gravity). For planet earth and over short timescales (like the lifetime of planet earth) it should be OK.
For example the electron is a distinct particle, an entity, it "has" mass, charge, and spin. It seems to me the electron is, in fact, mass, charge, and spin, in a kind of superposition. That is, the electron is its attributes, not something separate from them, or in addition to them.I've walked across a nylon carpet and am sitting in my rotating office chair. I have mass, charge and spin. I am not an electron. Nor is an electron a proton.
An entity is a distinct object - electron, motor car, whateverDo physicists really use the term "entity" that much? Sounds a bit metaphysical.
OK. I got into this a bit with other people who seemed keen to point out that once you define an entity, it has attributes or properties. But are the properties the things that are identified and in what way is an entity separate from its attributes?It depends how much science you want to discuss. To keep it simple, then yes - real objects have properties and it makes some sense to ask further questions along the lines you have presented. However, if you wanted to consider Quantum Mechanics (QM) then it is no longer so obvious that objects must have properties.
Do physicists really use the term "entity" that much?The high priests of pedantry, the International Standards Organisation, use the term exactly as I did. It's been a while since I was involved in such matters but I don't think energy is an entity in ISO - it is a quantity. Feynman was a good musician but I think he got this one wrong.
I've walked across a nylon carpet and am sitting in my rotating office chair. I have mass, charge and spin. I am not an electron. Nor is an electron a proton.Ok. Sort of.
OK, let's suppose the quantities are identical. I have £50 and The Boss has £50. I am not a woman, and she is not Alan.
More identities: John and Tom are identical twins, down to the last atom of their DNA. They are still distinct entities.
But if you want to be mystical, conduction electrons really are indistinguishable!
It depends how much science you want to discuss. To keep it simple, then yes - real objects have properties and it makes some sense to ask further questions along the lines you have presented. However, if you wanted to consider Quantum Mechanics (QM) then it is no longer so obvious that objects must have properties.The mass, charge, and spin of the electron are things we can safely assume all occupy the same place, in classical experiments. But those are what we might term the local properties or attributes; the position and momentum are also "properties" because--spacetime . . .
nobody can say what energy is.A dictionary editor can.
To measure a distance you MUST actually measure THE TIME it would take for light to travel there, you cannot do it just by putting some sticks between the two points.No
how far light would travel in a vacuum after a small fraction ( 1 / 299 792 458 ) of a second.And it's arbitrary.
Apart from anything else, how do you measure time?Is a time unit arrived at by counting the spontaneous and random emissions in the radioactive decay of an atom?
The mass, charge, and spin of the electron are things we can safely assume all occupy the same place, in classical experiments.The spin of an electron is exactly the sort of property that could become entangled with the spin of another particle and illustrate behaviour similar to photons with entangled polarisation that were described in Bell's inequality.
ES said: To measure a distance you MUST actually measure THE TIME it would take for light to travel there,.....1. Nobody eats 5 portions of vegetables every day but they should. Distance is defined a certain way, that's not my fault, that's just how it is in the SI system.
BC replied: ...Nobody does that.
Also there's a reason for the weird number in ( 1 / 299 792 458 ) of a second.Yes. Setting it to 1 would have been easier but, I suppose, there were limits on what number they could choose without putting the modern definition too far out of tolerance with the older standards already in use. It's an integer, so... glass half full.
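As an aside, the definition can be checked mechanically; here is a minimal sketch using exact rational arithmetic so no floating-point rounding creeps in:

```python
from fractions import Fraction

c = 299_792_458               # m/s: exact, fixed by the SI definition
t = Fraction(1, 299_792_458)  # the "weird" fraction of a second
metre = c * t                 # distance light travels in that time
print(metre)                  # 1 -- one metre, exactly, by construction
```

The point of the sketch is only that the metre and the second are now tied together through an exact integer, so "the speed of light" can never drift in SI units.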
And it's arbitrary.
Apart from anything else, how do you measure time?Well, as I expect you already know there is a standard established for the second. It looks like @geordief has already asked about this. I was tempted not to even make a start discussing time, it's too late in the day. For practical purposes I use the elephant method ( just count 1 elephant, 2 elephants, 3 elephants...etc. ).
Apart from anything else, how do you measure time?Is a time unit arrived at by counting the spontaneous and random emissions in the radioactive decay of an atom?No, it's defined in terms of microwaves emitted by caesium atoms.
Some people do measure distances just by measuring the time for light to travel. Estate agents have radar guns to measure the dimensions of a room quickly. As you know, estate agents are paragons of truth.Those are interesting gadgets, but they don't work by time of flight of photons.
nobody can say what energy is.A dictionary editor can.And I have, at least twice in this thread.
The mass, charge, and spin of the electron are things we can safely assume all occupy the same place, in classical experiments.The sentence is meaningless, so you can't "safely assume" its validity.
So, if you have anything nearby that you want to measure the length of, your clock is wrong and so is your rulerWhy is that important?We can never attain absolute accuracy in measurements, can we?
But the tricky bit is that, in principle, anything perturbs that emission- even if it's only by gravitational shifts.Not true, my friend! Both space and time are distorted by the same phenomenon so your measurement is correct!
So, if you have anything nearby that you want to measure the length of, your clock is wrong and so is your ruler.
2. Some people do measure distances just by measuring the time for light to travel. Estate agents have radar guns to measure the dimensions of a room quickly. As you know, estate agents are paragons of truth.I rarely deal with estate agents, but I frequently rely on the opinion of radar operators who use exactly this method to tell me about conflicting traffic.
Those are interesting gadgets, but they don't work by time of flight of photons.I thought most radar systems measure distance this way. I'm almost tempted to ask how estate agents devices do work - but I have only a passing interest so a short answer would be fine.
But the tricky bit is that, in principle, anything perturbs that emission- even if it's only by gravitational shifts.I've got to agree with @alancalverd and @geordief here. It's not "wrong" as such, it just is the local flow of time. It's not the same rate as you might obtain elsewhere but since Einstein's results we were expecting or counting on that. It is exactly how we would want time to be measured or defined.
Is a time unit arrived at by counting the spontaneous and random emissions in the radioactive decay of an atom?Are you happy with the answers / discussion so far, or did you want a reference to the way that time is measured?
Hopefully you aren't thinking in Metaphysical terms, are you?No. I think I might be thinking in terms of: If Feynman is correct, nobody can say what energy is.
If someone else says energy isn't physical they need to explain how they aren't saying they know what energy is.
It's just logic, really. I don't think metaphysics comes into it.
Or perhaps there's an idea that if you can't say what it is, because nobody knows, you can still say what it isn't.
Like, you can say energy isn't time, or distance. Can you say energy isn't physical? What does that mean?
The mass, charge, and spin of the electron are things we can safely assume all occupy the same place, in classical experiments.If the sentence is meaningless, how did Millikan do his experiment? What did he assume?
The sentence is meaningless, so you can't "safely assume" its validity.
If the sentence is meaningless, how did Millikan do his experiment? What did he assume?There are no assumptions in Millikan's experiment. He "simply" discovered that charge is quantised - it's a clever and quite difficult experiment but yielded a remarkably accurate value of the quantum of charge.
Then Energy seems to be a Calculable entity.It is a quantity, not an entity. See reply #7 above.
I am not sure I agreed with you regarding yard sticks.You've asked a lot of sensible questions about yard sticks and you are recognising the problems. The main point is precisely that real physical sticks do tend to be made of something and they could change dimensions for all sorts of reasons. When distance is defined as how far light travels in a unit of time then we avoid the need to worry about any of this.
As per what I understand you to be saying, those yard sticks remain unchanged in length.
I was wondering if the force of electric attraction was proportional to the speed of em radiation - i.e. the speed of light.The answer is yes, assuming Maxwell's equations continue to model the behaviour of light. The speed of light, c, is given by c = 1 / √(με) where ε and μ are the permittivity and permeability of space (things that will influence how strong the electric or magnetic attractions will be). So if c changes over time then at least one of μ or ε must be changing with time. Assuming it's ε then we have: c decreasing => ε must be increasing => the electric attraction between charges is proportional to 1/ε => so that would have been decreasing. That might end up with the atoms in the stick being less strongly pulled together so that the stick might grow longer. (It might also be that the attraction is now so weak the molecules can't even hold together and the stick falls apart). However, the reason or cause for light to be slowing down was deliberately left arbitrary and hypothetical - it may be that it wasn't following Maxwell's equations in that future due to some as yet unknown physics.
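To put rough numbers on that reasoning, here is a small sketch using the CODATA 2018 values for μ and ε (the specific digits are an assumption of the example, not part of the argument):

```python
import math

# Vacuum permeability and permittivity, CODATA 2018 values.
mu_0 = 1.25663706212e-6    # H/m
eps_0 = 8.8541878128e-12   # F/m

# c = 1 / sqrt(mu * eps), as in the post above.
c = 1 / math.sqrt(mu_0 * eps_0)
print(round(c))            # ~299792458 m/s

# Hypothetical scenario from the post: c halves while mu_0 stays fixed.
# Then eps must satisfy c/2 = 1 / sqrt(mu_0 * eps_new):
eps_new = 1 / (mu_0 * (c / 2) ** 2)
print(eps_new / eps_0)     # 4: the Coulomb attraction (~1/eps) drops fourfold
```

So in this (deliberately hypothetical) scenario a halved c means a fourfold weaker electric attraction between charges, which is the mechanism suggested above for the stick growing longer.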
There are no assumptions in Millikan's experiment.Sorry, I just can't see how that could be possible.
Mass and charge are not wave functions and thus do not have a phase.The mass and charge of an electron aren't a part of its wavefunction? Or have I misinterpreted you?
What Millikan (and countless subsequent undergraduates) did was to measure the voltage required to prevent a charged droplet from falling under gravity between two parallel plates. It turned out that the required voltage has discrete values, from which he deduced that charge is quantised. Clever, difficult (I've never known an undergraduate get it to work first time), but no assumptions.
The mass and charge of an electron aren't a part of its wavefunction? Or have I misinterpreted you?You have misinterpreted physics. A wavefunction is the mathematical model we use to describe the probability of finding an object at a point in space. The electron doesn't "have" a wavefunction, but we assign one to it. You are not alone in your misconception, by any means!
You have misinterpreted physics. A wavefunction is the mathematical model we use to describe the probability of finding an object at a point in space. The electron doesn't "have" a wavefunction, but we assign one to it. You are not alone in your misconception, by any means!Ok. If there's a Schrodinger equation that describes the probability of finding an electron near a proton, say in a Hydrogen atom, in what sense does it not describe the position of the electron's mass or any other property of that electron?
If the droplets weren't charged, they wouldn't be prevented from falling by the electric field. If the charge dissipated, the stationary droplet wouldn't remain stationary. Nobody said anything about electrons.Nobody said anything about electrons and having measured their charge? I thought that was the whole point of the experiment.
If there's a Schrodinger equation that describes the probability of finding.......Minor detail: It's not the Schrodinger equation that describes these things, it's the wave function which appears in that equation.
in what sense does it (the wave function) not describe the position of the electron's mass* or any other property of that electron?Various ways exist. These are some of the easiest ones to explain:
Hi. I recently had a discussion online about the subject of physics, in which I posted something about simplification, and how that seems to be where physics starts, at least.I assumed you might be wanting to get some new ideas for that other online discussion from this web forum. That's why I have sometimes rambled on about a very small detail or niche area.
Ok. If there's a Schrodinger equation that describes the probability of finding an electron near a proton, say in a Hydrogen atom, in what sense does it not describe the position of the electron's mass or any other property of that electron?You are beginning to get the picture. The wave function does what I said - it maps the probability density of finding the electron. It is not a property of the electron, because it is a function of the environment (it's different for an electron in a hydrogen atom compared with a hydrogen molecule), not just the entity.
Nobody said anything about electrons and having measured their charge? I thought that was the whole point of the experiment.I repeat: Millikan demonstrated that charge is quantised. How did he know what he measured? Whenever an oil drop was stationary he looked at the voltmeter. No assumptions (other than that the voltmeter measured volts). You (and almost everyone else) have made the assumption that the quantum is the charge of an electron. Why not just read the Wikipedia article?
How then did Millikan know what he measured?
Measurement of one property will cause a wave function collapse and the wave function is then changed.I seriously disparage this statement!
So collapsing the wave function is a bit like swatting a fly?Measurement of one property will cause a wave function collapse and the wave function is then changed.I seriously disparage this statement!
You can plot the outcome of dice throws as a wave function. You throw the dice and get a number. You haven't done anything to the hypothetical wave function, just chosen one value of it, which you could not predict. If you roll the dice again, you will get an equally unpredictable number. If you had "collapsed" the wave function, you would have restricted the range of future possibilities - the gambler's fallacy.
It is perfectly true that if you measure any property of a subatomic particle you will have altered its state in some way, e.g. by bouncing a photon off it, and thus biased its future state and wave function because you have changed its energy and momentum, but the notion of "collapse" rather militates against Heisenberg.
I seriously disparage this statement!OK.
You can plot the outcome of dice throws as a wave function.It's an awful example but if you really want to use it we can.
Do you collapse the wave function of a pair of dice?Yes. You haven't really clearly defined what you are considering as a "wave function" but I'll just assume it involves a description of the system that is sufficient to predict the outcome of dice rolling (and nothing much else). Exactly how you're going to get that into the Schrodinger equation or what you are considering as the Hamiltonian is questionable - but hey, whatever, this is 2023 and we'll just go with it. Let's just assume you are conceptualizing the dice as some system that is described by something which is loosely like a wave function.
It's just a pointless and confusing expression, implying that a wave function has more significance than a mere model.Well, the premise of QM is that every system can be described by a wave function and that wave function describes everything that is knowable about the system. So, for example, if the mathematics says that you can't determine all the components (x-, y- and z-axis components) of angular momentum simultaneously (which it does), then many physicists will assume that this would be true and you can not actually do that in reality.
You haven't really clearly defined what you are considering as a "wave function" but I'll just assume it involves a description of the system that is sufficient to predict the outcome of dice rollingNo! A Schrodinger wave function does not (cannot) predict the position of an object, but the probability density of its distribution in space. You can't predict the outcome of a dice roll but you can write down the probability density of each possible outcome. Same thing.
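The dice analogy can be made concrete; this is a toy sketch (ordinary probability, nothing quantum) showing that the probability density of each outcome is written down in advance, and that a roll merely samples it without changing it:

```python
import random
from collections import Counter

random.seed(42)  # arbitrary seed, just so the run is repeatable

# The probability of each sum of two dice is known before any roll.
ways = Counter(a + b for a in range(1, 7) for b in range(1, 7))
probs = {s: n / 36 for s, n in ways.items()}
print(probs[7])  # 6/36: the most likely sum

# Rolling samples the distribution; it does not alter it.
rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(10_000)]
freq = Counter(rolls)
print(freq[7] / 10_000)  # close to 1/6 over a long run
```

Each roll remains equally unpredictable after any number of previous rolls, which is the point about the gambler's fallacy made above.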
What I'm saying is that for many physicists, Quantum mechanics is of earth-shaking importance that changes our understanding of what reality might actually be.I think you are confusing physicists with philosophers. There is nothing earthshaking about quantum mechanics - it is a good model of what happens: the everyday currency of physicists.
Have a first filter that will collapse the wave function so that the light's polarisation is entirely in the x-axis direction. Put that through a second filter which permits only light polarised along the y-axis to pass and no light will get through. However, if you add a third filter turned at 45 degrees between the x- and y-axis and insert that in between the two other filters, then you will now get some light to pass the last filter. The measurement of the polarisation along the 45 degree axis has forced a wave function collapse which has now made sure that the wave function is back in a superposition of states for polarisation along the x-axis or y-axis.Er, no. There being no polarisation along the 45 degree axis (because you said the first filter confined the beam to x polarisation only) the intermediate filter cannot have "measured" anything. What it did was to rotate the plane of polarisation between its input and output. If each filter had measured rather than reformatted the incoming beam, there would be very little loss and the superposition would result in close to 100% transmitted intensity overall.
Given that the input to the second S-G apparatus consisted only of z+, it can be inferred that a S-G apparatus must be altering the states of the particles that pass through it.(my italics)
I've been thinking about the 45 degree polariser. Quite simply, it comes down to the resolution of a vector.Yes. The Classical explanation of how a polarising filter works is exactly like this. Well done and keep the dust off those old notes, they're always useful.
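For anyone dusting off those notes, the vector-resolution picture is just Malus's law, I = I0 cos²θ, applied at each filter; a minimal sketch, assuming ideal lossless polarisers:

```python
import math

def malus(intensity, angle_deg):
    """Malus's law: intensity transmitted by an ideal polariser set at
    angle_deg to the incoming plane of polarisation."""
    return intensity * math.cos(math.radians(angle_deg)) ** 2

I0 = 1.0  # intensity after the first (x-axis) polariser

# Crossed filters (0 then 90 degrees): nothing gets through.
print(malus(I0, 90))         # 0 (to rounding)

# Insert a 45 degree filter between them: 0 -> 45 -> 90.
I_mid = malus(I0, 45)        # 0.5
I_out = malus(I_mid, 45)     # 0.25: a quarter of the light now passes
print(I_out)
```

This reproduces the three-filter result discussed above entirely classically: the middle filter rotates the plane of polarisation, so the final filter sees a component it can pass.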
Interesting that the Wikipedia article, when discussing sequential S-G systems, talks about "measuring" when a particle passes through a nonhomogeneous magnetic field. When we use magnetic fields to select regions for analysis by spin resonance (e.g. MRI) we talk about polarising or forcing, not measuring.
It occurred to me whilst driving home yesterday but you beat me to publication!Since it's a friendly forum, it's not as if it matters a lot. I only made the comment since there may have been 1 other person reading it and getting confused for a day.
Your option c is the only one that makes sense and describes the "black box" transfer function of multiple polarisers,...Yes absolutely. A "black box" approach is all I would have tried for if I had continued developing the argument for how a polarising filter works when a photon approaches it. I wasn't entirely sure exactly how I was going to finish that development and make plausible arguments for polarisation being thought of more as a blend of answers "yes" and "no" for whether a photon would pass a filter of a given orientation etc. I was just fairly sure that going straight for a mathematical representation or any argument that polarisation is spin would not be useful for many readers. (I don't know why I worry about that too much, even on a busy day there will be only 2 readers plus a moderator or two who felt obliged to read it).
It seems to me the application of an entity/attribute model to physics theories or experiments isn't optimal.Which seems like a very reasonable statement to me. It's quite different to what was suggested in some of your earlier posts. In much of QM it is very unclear how attributes behave and should be associated with an entity.
As for the photographic use of polarisers, it seems like a complicated way of reducing intensity, compared with simply reducing the aperture of the lens,....Aperture may be limited in adjustment (1, 2, 3, 4 units, etc.), while polarisers aren't - you can spin them to any angle.
If you can't put it in a bottle, it isn't an entityWell, again, what kind of bottle?
Excellent sunglasses for coarse fishing and gliding, but dangerous for driving!Why?
And, is the universe an entity? If it is, what kind of bottle does it go in?A really big one. Or, if you are of a mathematical turn of mind, a Klein bottle, which resolves the dispute between Big Bang and Continuous Creation.
1. You can see below the reflecting surface of still water.Excellent sunglasses for coarse fishing and gliding, but dangerous for driving!Why?
Quick question concerning the nature of energy: work can definitely be described as a boundary phenomenon, can energy also be categorised as such? (WAG question).The conversion between energy and work involves at least a hypothetical boundary, and in most engineering embodiments there is a material boundary between the energy source and the object we want to change, so some energy is "lost" in heating or moving the boundary. I would term that loss as a boundary phenomenon, but stick to considering work and energy as quantities, not phenomena.
Quick question concerning the nature of energy: work can definitely be described as a boundary phenomenon, can energy also be categorised as such?I don't know, that's something I had to look up. Thanks for expanding my knowledge of stuff. To be honest I'm still not sure what a WAG is (internet says: a wife or girlfriend of a football player).
I worked for a short time in the video industry and all the lens assemblies we used had continually variable irises....And also relating to @alancalverd 's comments about photography and polarising filters earlier...
I haven't brought this up yet, but here goes.Actually, it wasn't hard to guess that you were going to talk about "information" in some profound sense:
...in terms of the quantum information...post # 22:
...Copying information classically never gives you identical copies...- - - - - - -
What do you think of the use of modern day information science in theoretical physics?In principle, there's no need to draw boundary lines separating "physics" from "computer science" or "geology" or any other science. If something is useful, then it's useful.
Do you think there is a better understanding of information these days,Yes, most fields of knowledge do develop over time; information science isn't any different from other areas of human knowledge or endeavour.
If particles in the Standard Model are fundamental, does that mean they are a form of information...As I expect you already know there are some recent theories that tie some concepts in physics to some concepts in information science.
I think you need to distinguish between mapping (where you lose dimensionality that can be inferred), minimisation (losing genuinely redundant information) and lossless compression (coding recognised sequences into shorter sequences that fully identify the original).Ok well, I think the concept most people have of classical information is related to our sense of vision, primarily.
beware of LCD displays on all GPS systems.Some years ago I had a car with a small driver information panel. It used LCD technology, with a linear polariser.
How many dots are needed to see the interference pattern from a distance? Is it the same number for any kind of particle?Er, no. If you are thinking about single-photon two-slit experiments, we know that each receiver event involves a single photon with the same energy as the one that left the transmitter: the photon clearly doesn't split and interfere with itself because that would give you two red dots from each blue photon, but what we observe is a pattern of blue dots.
And so on. We know that up close, each particle that leaves a dot actually leaves a lot more than a pointlike mark, but we ignore it.
Er, no. If you are thinking about single-photon two-slit experiments, we know that each receiver event involves a single photon with the same energy as the one that left the transmitter: the photon clearly doesn't split and interfere with itself because that would give you two red dots from each blue photon, but what we observe is a pattern of blue dots.I'm not sure that you got the gist of the question: how many dots are needed so a pattern is recognisable?
So how many events constitute a pattern? That is pretty much the same question as how long is a piece of string. The more you know about the cause, the less you need to know about the effect to calculate the entire pattern - a case of fully encoded lossless compression. If I know you have a blue light source and two slits with a defined geometric relation, I can tell you what the interference pattern of an infinity of photons will look like as soon as I have detected just one.I'm talking about doing the experiment. In that case you wouldn't expect to see the same pattern twice, especially any time before the pattern emerges.
If the laws of physics are constant (which seems to be the case) you would always expect to see the same pattern because the probability distribution will be the same each time.The only issue I have with the phrase "the same pattern" is if, say, you have a single-particle beam and stop after say 20 particles. Now replace the screen and repeat: the next pattern of 20 dots will be different from the first.
The question is how many dots are required to recognise that a sampled distribution is consistent with your theoretical continuous distribution.Yes, and because it is a sample of a statistical distribution you expect different samples to be statistically different.
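That run-to-run variation is easy to simulate. A toy sketch follows; the cos² density is a stand-in for a real two-slit intensity profile, sampled by rejection:

```python
import math
import random

random.seed(0)  # arbitrary seed, just for repeatability

def sample_dot():
    """One detection position in [-1, 1], drawn from a cos^2 fringe
    pattern by rejection sampling."""
    while True:
        x = random.uniform(-1, 1)
        if random.random() < math.cos(2 * math.pi * x) ** 2:
            return x

def histogram(dots, bins=16):
    """Bin detection positions across [-1, 1]."""
    counts = [0] * bins
    for x in dots:
        counts[min(int((x + 1) / 2 * bins), bins - 1)] += 1
    return counts

# Two short runs of 20 dots rarely look alike...
print(histogram([sample_dot() for _ in range(20)]))
print(histogram([sample_dot() for _ in range(20)]))

# ...but a long run traces the same underlying fringes every time.
big = histogram([sample_dot() for _ in range(10_000)])
print(big)  # peaks at the antinodes, dips at the nodes
```

Two 20-dot screens differ noticeably, while the 10,000-dot screen reproduces the fixed probability distribution, which is the distinction being drawn in the exchange above.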
I'm sure ES has a better grasp of formal statistics than I, but the χ² "goodness of fit" test is rattling about in the recesses of my memory....I don't think exactly which test you use is going to be important for answering the OP, the important point is only that it's all about rejecting some hypothesis with some probabilities. You may need an infinite amount of data from an infinite number of repetitions of the experiment before you can conclude with certainty that the data conforms to the distribution predicted by the model.
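For the curious, the χ² test mentioned above is simple to apply by hand. A minimal sketch for a fair die; the 11.07 threshold is the standard 5% critical value for 5 degrees of freedom:

```python
import random

random.seed(1)  # arbitrary seed, just for repeatability

n, sides = 600, 6
expected = n / sides  # 100 throws per face if the die is fair

throws = [random.randint(1, sides) for _ in range(n)]
observed = [throws.count(face) for face in range(1, sides + 1)]

# Pearson's chi-squared statistic over the six faces.
chi2 = sum((o - expected) ** 2 / expected for o in observed)
print(chi2)

# Compare with the 5% critical value for 5 degrees of freedom.
print("consistent with fair" if chi2 < 11.07 else "fair hypothesis rejected")
```

As the post says, the test only rejects hypotheses probabilistically: even a genuinely fair die will fail at the 5% level about one run in twenty, and no finite sample settles the question with certainty.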
So why no consistent quantum philosophy?A few philosophers are throwing out papers and discussions involving quantum theory. See, for example, https://plato.stanford.edu/entries/qt-issues/ in the online Stanford Encyclopedia of Philosophy, which has a lengthy discussion and cites many other modern articles. It's still a bit too early to know if there will be a consistent quantum philosophy (in my opinion).
On the other hand I'm embarrassed on behalf of my colleagues if they give you different answers to "what is a photon". It is a quantum of electromagnetic energy, modelled as a particle with zero mass. Anything else would have a different name.Ok. I agree that most sources will say that a photon is a discrete particle or quantum of the electromagnetic field.
Given that such "new" quantum effects have appeared unexpectedly, what does that do to any quantum philosophy?
The philosophical problem might be connected to how the theories we have, don't tell us all that much in terms of what to expect. Unlike Newtonian mechanics which generally does.
"Complete" surprise is rare. A lot of particles were hypothesised because of an apparent breach of the usual conservation rules, and then discovered when we have worked out where to look.Ok. I'd say that the Higgs boson and the top quark qualify as discoveries that at least had an expectation of being detected.
So that could explain to some extent why philosophers struggle with, you know, ontology or objective reality.Philosophers invent things to struggle with, because they don't have meaningful lives.
In my world people have real problems and I get paid to understand and solve them with physics, chemistry, maths, brute force and duct tape. Very satisfying.I would agree that hands-on is more interesting than sitting around wondering what everything is, erm, is.
What a headacheOr what fun.
Or what fun.Abstruse considerations being the "can it go in a bottle" rule?
Plus, what's to say that those abstruse considerations may not give rise to practical outcomes eventually?
I'd like to canvass some responses about the subject of energy and how it is understood.There was a fairly recent (end of 2022) discussion in this thread:
In simple terms, there isn't a simple explanation or even a simple definition. Energy is one of the quantities that is conserved in classical physics, and very few adults have any idea what that means.
In the simplest terms, I have always understood energy as the ability to do work.
To return to the original question, I propose the following: energy is the capacity to do work, with the limitation that, in the case of thermal energy, some or all of it (worst case) will not be able to do useful work.
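Feynman's point, discussed below, that energy is just a conserved number can be illustrated with a toy free-fall calculation. The mass, height and step size are made up for the example:

```python
# Energy as "a number that doesn't change": track E = 1/2 m v^2 + m g h
# for a falling mass, step by step (illustrative values only).
m, g = 2.0, 9.8          # kg, m/s^2
h, v, dt = 100.0, 0.0, 1e-4

for _ in range(5000):    # 0.5 s of free fall
    v += g * dt
    h -= v * dt          # semi-implicit Euler, which tracks energy well here

E = 0.5 * m * v**2 + m * g * h
print(E, m * g * 100.0)  # the bookkeeping total stays ~1960 J throughout
```

Kinetic energy grows, potential energy shrinks, and the sum stays put: exactly Feynman's "blocks" accounting, with no claim about what the number *is*.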
One more detail: Feynman says that energy is a conserved numerical quantity. What's your opinion of that? Do you think he's saying energy is just a number?Yes, that is pretty much the gist of what he was saying in that lecture.
Yes, that is pretty much the gist of what he was saying in that lecture.
There were some other lectures discussing symmetries and conservation laws but they are more specialised. He discusses symmetry and conservation laws mainly in the context of Quantum Mechanics:
You see, therefore, the relation between the conservation laws and the symmetry of the world. Symmetry with respect to displacements in time implies the conservation of energy;...
I think you need to be careful about the phrase "numerical quantity" that Feynman uses. My opinion of it is that he's reminding everyone he also used an analogy of counting up 'abstract' children's toy blocks.You are free to make of Feynman's lectures what you please, however there are some conventional understandings of what was said and intended. Feynman wasn't really using language or addressing his lectures to an audience of philosophers, he was aiming at scientists and that particular lecture was an early one for the students (in their progress through undergraduate studies). So he was aiming to break some misconceptions from school and provide a good introduction to undergraduate level physics.
It can't be that nobody knows what energy is, but we know it's a number, because . . . we then know what it is.It is commonly understood that what Feynman was saying is that all you (his students) should accept about energy is just it is a numerical quantity which seems to be conserved (does not change with time) in all experiments and observations. Just don't make any further assumptions about it - because you can't.
You are free to make of Feynman's lectures what you please, however there are some conventional understandings of what was said and intended.I've noticed that plenty of people seem to be able to make what they please of what Feynman said in that lecture. For instance, they maintain doggedly that Feynman means energy is a concept. That's what it is in any theory, but what about the physics?
Feynman wasn't really using language or addressing his lectures to an audience of philosophers, he was aiming at scientists and that particular lecture was an early one for the students (in their progress through undergraduate studies). So he was aiming to break some misconceptions from school and provide a good introduction to undergraduate level physics.Yep. I think he was trying to uncover the big secret about physics; theories don't really tell you what physical things are, mathematics is about relations between sets of numbers. I can't see that therefore taking away the idea that he says "energy is a number", when he actually says "energy is a conserved numerical quantity", follows at all.
Given the audience, it's fair to say that Feynman was attempting to communicate something different when he said "Energy is a numerical quantity". He meant that it is a quantity AND ALSO it has numerical properties.Yes. I would caution though, that that conclusion needs to stay in the theoretical domain. Recall that properties are things that entities have, except so far we have that energy is numerical. Numbers are entities because they have properties or attributes, right? Numbers most certainly don't have a physical existence, all they have is a value.
I've noticed that plenty of people seem to be able to make what they please of what Feynman said in that lecture.Yes, that's what people will do when they read a document that is about 3 pages (or listen to lecture of about 1 hour), they will summarise and/or condense it. Different words are bound to be used by them.
I think he was trying to uncover the big secret about physics;OK, sure. I wasn't looking that deeply. On the face of it, he was just presenting a lecture to educate his students. So trying to introduce them to some of the big ideas in physics or uncover the big secrets etc. is precisely what he was trying to do. Although, in the wider sense, his own motivation for studying and teaching physics may very well have been trying to answer some of the fundamental questions he had himself (why are we here? what is the nature of our world? etc.)
I can't see that therefore taking away the idea that he says "energy is a number", when he actually says "energy is a conserved numerical quantity", follows at all.OK. Although you asked about it in post #107, so you got a reply.
Do you think he's saying energy is just a number?There is at least one sentence where he does DIRECTLY talk about energy with the word "number":
If we don't know what energy "really" is, and if mathematics doesn't tell us beyond it being conserved (numerically), that's as far as it goes.As discussed previously. Feynman is not the only source of information about what energy is or isn't. However, just confining our attention to this one lecture, yes, that is pretty much what he was saying here. Well done.
Numbers are entities because they have properties or attributes, right? Numbers most certainly don't have a physical existence, all they have is a value.I'm not sure I can discuss the nature of numbers in a small amount of space and this post has already taken too much time (to write or to read).
Overall, a scientist should have some appreciation of what energy is and isn't by the end of that lecture.I think it goes further, in that all physical quantities are abstract numerical quantities.
Split the lecture into thirds:
1st section: Smash pre-conceived ideas. Illustrate that all we know is that energy is some abstract numerical quantity.
At best, non-numerical quantities can be ordered but they do not have all the properties of numbers.
What is a bottle, first of all?A bottle is something we make (or imagine) to contain something else. It is a member of the set of containers, which includes boxes, cages, and finite bounded universes.
Friendly grumble: I'd reserve quantity for something that can be associated with a numerical valueAgreed and it was agreed originally.
Even the word "quantities" is an issue since in the English language one tends to think of something you can count and number. Overlooking that....Since Feynman said ".... Energy is a numerical quantity...", it was much more natural just to change one word in that and have numerical quantities vs. non-numerical quantities rather than changing everything and talking about numerical data vs. non-numerical data.
Abstractly, information can be thought of as the resolution of uncertainty.
I never really appreciated that information could be a learning discipline in itself.I took a course in communications, and I was surprised that information has entropy. At the time I figured it was something somebody borrowed from "real" physics. But what did I know?
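That surprise is easy to reproduce: Shannon's entropy of a message source really is just a formula you can evaluate. A minimal sketch, with probabilities chosen arbitrarily:

```python
import math

def shannon_entropy(probs):
    """Entropy in bits: H = -sum p*log2(p). Information = resolved uncertainty."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit per toss; a certain outcome carries none.
print(shannon_entropy([0.5, 0.5]))   # -> 1.0
print(shannon_entropy([1.0]))        # -> 0.0
# A biased coin resolves less uncertainty per toss.
print(shannon_entropy([0.9, 0.1]))   # -> ~0.469
```

The borrowing actually ran the other way round from what I assumed: Shannon took the name "entropy" from thermodynamics because his formula has the same shape as Boltzmann's.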
Suppose a sentient being received some sensory input, is there a maximum amount of physical sensory inputs it must receive so as to output something like a work of art (maybe extremely primitive)?I'd try asking ChatGPT about it. What would you need to tell it, in descriptive terms, so it outputs the required work?
Is there a correlation between the physical input and the mental output or can a minimal physical input produce an unrelated large mental (informational?) output?I'd say that depends on what the sentient being knows already.
I'd say that depends on what the sentient being knows alreadyYes, I thought that too. The brain creates its own inputs. Any external input goes through a huge amount of processing before anything like an output can be observed.
I'd try asking ChatGPT about it. What would you need to tell it, in descriptive terms, so it outputs the required work?"Required" disqualifies it from being art. The more closely the customer specifies the end product, the more the process becomes engineering rather than art.
More abstractly (assuming this is what the questioner thought he was asking), information requires a carrier and a terminator. A bottle serves as both.Actually I thought the questioner believed their question was a way to show that information is a concept, it's not something you can physically put in a bottle. This person is either a bit crazy or doesn't understand what information is.
I think this questioner would reject my "hypothesis", given they style themselves as a philosopher.You have correctly defined a philosopher as a person who makes a living by telling you that you don't understand what you just said. A narcissist with no redeeming features.
The information is the position of an electron. You have no idea where a free electron might be found, but if you know the location of a proton you have significantly reduced your uncertainty.Electron position has more entropy than proton position because of the mass difference (I guess). But classical measurements sort of smear this out, because classical measurements are always a representative sample.
ChatGPT seems like a longwinded philosopher rather than someone who actually understands information theory. Try Wikipedia - the entry seems to have been written by folk who know what they are talking about.Thanks. Are you really an old guy living in Cambridge, and is that the US or the UK? I had online dealings with a person who was studying string theory at King's College. Bit of a maths rottweiler.
Last time I looked, I was definitely an old bloke, and AFAIK this Cambridge is cold, wet, and near the East Coast, just like the other one.Does the idea of a convention have import when discussing what "information" is?
You see, my friend, it all depends on what you think you mean by Cambridge. Is there a universal meme that can be deconstructed as a set of paradigms sufficiently delocalised in spacetime that your Cambridge and mine are the same but not identical, or identical but not, in whatever sense you think you are talking about, the same? In what sense does the Pythagorean essence of Cambridge heuristically or existentially conflict with the Aristotelian ur-Cambridge such that they cannot coexist?
I could say that in a few minutes I can walk across a bridge over the river Cam, but a philosopher would ask "how do you know that it is really you, and whilst the river was named by the Saxons, since the water that was there at the time is no longer there, is it still the Cam?"
Does the idea of a convention have import when discussing what "information" is?Information is what we say it is. But you have to choose a physical basis for it in order that it is "representative".
Ps ,what on earth is ur-Cambridge when it is at home?It is Cambridge, at its original home!
We decide when it looks like an interference pattern, or when there is enough information content.I've just been learning about "hyperuniformity", particularly in the case of the distribution of prime numbers. Humans are very good at discerning patterns but unlike pigeons (who are brilliant at it) we tend to see patterns where by mathematical proof (prime numbers) or statistical sampling (pseudoartefacts on medical images), none exist.
Thus many people think there is a disjuncture between classical and quantum physics because you can't explain the gross interference pattern in terms of particles and you can't explain the distribution of individual events in terms of waves.Waves and particles are things we inject into the experiment, we attempt to use that context so we can understand the effects we see.
Oh dear! I think you are still not distinguishing between observations and models.Yes, perhaps I should have said that we decide when we are observing wavelike effects and when we aren't.
I'll allow "inject" as a colloquialism in this context: I think you mean "assign to the model of" an experiment. When we actually inject energy or stuff, the object is to see what happens next - understanding may come later.
Procedures and algorithms are predetermined processes through which we force data or real stuff. There is nothing predetermined about the fate of a photon in the double-slit experiment, and there can't be: the outcome of any given event is essentially random.
what do random or regular patterns of dots mean?The dots arrive at random times, and the position of the next dot cannot be predicted from past events, but every time we repeat the experiment for long enough we get the same overall distribution.
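This "random dots, same overall distribution" behaviour is easy to mimic by sampling. A toy sketch, using an invented cos² fringe profile rather than any real slit geometry - a handful of dots looks like noise, while many dots reveal the fringes:

```python
import math
import random

def sample_position(rng, k=6.0):
    """Draw a hit position x in [-1, 1] from a cos^2(k*x) fringe envelope
    by rejection sampling (a toy two-slit intensity; real patterns also
    carry a diffraction envelope)."""
    while True:
        x = rng.uniform(-1, 1)
        if rng.random() < math.cos(k * x) ** 2:
            return x

def histogram(xs, bins=10):
    """Bin positions in [-1, 1] into equal-width bins."""
    counts = [0] * bins
    for x in xs:
        counts[min(int((x + 1) / 2 * bins), bins - 1)] += 1
    return counts

rng = random.Random(42)
few  = [sample_position(rng) for _ in range(5)]      # looks like noise
many = [sample_position(rng) for _ in range(20000)]  # fringes emerge
print("5 dots:   ", histogram(few))
print("20k dots: ", histogram(many))
```

Each dot is individually unpredictable, yet every long run of the program reproduces the same overall distribution - which is exactly the situation described above, minus the quantum mechanics.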
One of the first observations if not the very first one of the implications of the quantum mechanics to the computational complexity was made by a most famous physicist, Nobel Prize winner Richard P. Feynman, who proposed in his seminal article [8] that a quantum physical system of R particles cannot be simulated by an ordinary computer without an exponential slowdown in the speed of the simulation. On the other hand, the simulation of a system of R particles in classical physics is possible with only a polynomial slowdown.--https://www.utupub.fi/bitstream/handle/10024/162051/Hirvensalo_InterferenceV3.pdf?sequence=1
The main reason for this is that the mathematical description size of a particle system is linear in R in classical physics but exponential in R according to quantum physics. As Feynman himself expressed:
. . . But the full description of quantum mechanics for a large system with R particles is given by a function ψ(x1, x2, . . . , xR, t) which we call the amplitude to find the particles x1, . . ., xR, and therefore, because it has too many variables, it cannot be simulated with a normal computer with a number of elements proportional to R or proportional to N . [8]
Number N in the previous citation refers to the accuracy of the simulation: the number of points in the space, as Feynman formulates. In the same article, Feynman considered the problem of negative probabilities, and returned to the same issue a couple of years later [9]. Feynman's approach may be among the earliest formulations of the role of interference in the probabilities induced by quantum mechanics.
Following Feynman's idea and using quantum mechanical systems for bearing the information and carrying out the computation, it is possible to design algorithms that benefit from the interference: the undesired computational paths may cancel each other, whereas the desired ones may amplify.--ibid.
This phenomenon is generally believed to be the very source of the power of quantum computing.
So, is that to say that when we have just two (or even one?) dots on the screen, they can be graphically represented as an interference pattern even though our own optical system (the eyes and the brain) does not process it that way?Is there any other way we can reproduce that effect?I think I can give a tentative answer to your question, which is: it depends.
And what is "long enough"?
Two dots? Three?
In experiments that demonstrate interference, it's nice to see a pattern that we can say is definitely there.
But in quantum computers, the wavefunctions of two particles can be in superposition, such that it's a form of constructive or destructive interference.
We arrange for this to happen that way, and so it must have a nonzero probability of occurring in that case.
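As a toy illustration of amplitudes reinforcing or cancelling (the amplitude values here are arbitrary; only the |a₁ + a₂|² rule is standard):

```python
# Two paths with complex amplitudes a1, a2; the probability is |a1 + a2|^2.
a1 = complex(1 / 2**0.5, 0)
a2_in_phase  = a1      # constructive interference
a2_out_phase = -a1     # destructive interference (pi phase shift)

p_constructive = abs(a1 + a2_in_phase) ** 2
p_destructive  = abs(a1 + a2_out_phase) ** 2
p_classical    = abs(a1) ** 2 + abs(a2_in_phase) ** 2  # no cross term

print(p_constructive, p_destructive, p_classical)  # ~2.0, 0.0, ~1.0
```

The difference between adding amplitudes and adding probabilities is the cross term, and amplifying desired paths while cancelling undesired ones is exactly the mechanism the Hirvensalo quote above attributes the power of quantum computing to.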
the electromagnetic field is a connection in a fiber bundle over spacetime.I'm willing to put my hand up and say that I don't understand that sentence. I'm no expert and not too worried about looking ignorant - it was ambiguous and confusing to me.
A photon is representative of the field in that it has the same dimensions as the field.What dimensions does the field have? Do you mean length, time, mass - those sorts of dimensions OR the dimension of some linear space (the number of vectors in a basis etc.) OR a measurement of just space occupied ( 3 cm x 10 cm x 12 cm ) OR something else? i.d.k.
An electromagnetic force is the analog of a Newtonian mechanical force; there's a symmetry in the equations of motion for an LRC circuit and a simple pendulum.There is some analogy between how an LRC circuit behaves and how a pendulum or harmonic oscillator behaves, but you've been very human in finding this analogy. You've ignored the thousands of electrical circuits and situations where current was not like velocity and/or there wasn't anything like mass or inductance. Instead you've identified a situation where analogies can be made, assumed that would be important and run with it. That's OK in that sometimes these analogies and ideas will lead somewhere and turn out to be very useful in very general situations. However, sometimes they remain just happy coincidences - an analogy that existed for that one situation only.
This only tells you that Newtonian momentum has an electromagnetic equivalent, which is inductance multiplied by the current.Does it? It's an analogy you can make for an LRC circuit but not necessarily for other electrical circuits or for all things involving electromagnetism.
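To be fair to the analogy, for a series LRC circuit the correspondence is exact at the level of the equations: L q″ + R q′ + q/C = 0 maps onto m x″ + b x′ + k x = 0 under L↔m, R↔b, 1/C↔k, q↔x. A sketch with made-up component values:

```python
import math

# Series RLC:  L q'' + R q' + q/C = 0   <->   damped oscillator: m x'' + b x' + k x = 0
# Correspondence (one common convention): L <-> m, R <-> b, 1/C <-> k, charge q <-> position x.
def damped_frequency(m, b, k):
    """Angular frequency of the underdamped solution of m x'' + b x' + k x = 0."""
    return math.sqrt(k / m - (b / (2 * m)) ** 2)

L_, R_, C_ = 0.1, 2.0, 1e-4          # henry, ohm, farad (made-up values)
w_circuit = damped_frequency(L_, R_, 1 / C_)

m_, b_, k_ = 0.1, 2.0, 1e4           # kg, kg/s, N/m -- the same numbers under the mapping
w_mech = damped_frequency(m_, b_, k_)

print(w_circuit, w_mech)  # identical, by construction
```

Which supports both sides of the argument above: within this one situation the mapping is airtight, but nothing in it promises that other electromagnetic situations will have a mechanical twin.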
if optical measuring instrumentation was sufficiently sensitive might it ,in theory pick up directly the interference pattern from one ,two or three dots?Easier with x-rays, but the answer is yes, we can detect individual photons, and they do indeed arrive at random with the spatial probability distribution as calculated from the continuum-wave analysis.
it is commonly said that as a student of physics all you will do is study the harmonic oscillator in ever increasing levels of complexity and detail.Spoken like a mathematician!
Not complex. They are both real particles with charge and mass.In field theories particles have complex probability amplitudes.
When you started this thread you ( @varsigma ) were interested in a Feynman lecture, so you might like section 27-6 of the Feynman lecture documented here: https://www.feynmanlectures.caltech.edu/II_27.htmlYes, there you describe the free field; in an LRC circuit the evolution of the field is constrained. It's an interesting exercise finding the correspondences; although you can say there's an electromagnetic momentum which is "equivalent" to Newtonian momentum in the context of field oscillations, it's far from a unifying principle. But it's still interesting that, in order for oscillations to appear certain constraints are needed. Certain physical things need to be fixed in place.
In that lecture it discusses precisely how much momentum is contained in the E and B fields. In any region with non-zero E and B fields, there is a momentum density g = ε₀ E × B (equivalently S/c², with S the Poynting vector). That's all we need - just non-zero E and B fields, no current has to flow from somewhere to anywhere for momentum to exist in the electromagnetic field. (Minor note: It's commonly said that the momentum is "in the field" but I would prefer to say only that it is in the space permeated by those fields - i.e. avoid suggesting that the fields could hold it, just that momentum is in the space somehow when the fields are in that space).
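A quick numerical check that the two ways of writing the momentum density agree (the field values below are arbitrary):

```python
EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 299_792_458.0         # speed of light, m/s
MU0 = 1 / (EPS0 * C**2)   # vacuum permeability, H/m

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Crossed static fields (made-up values): E along x, B along y.
E = (1000.0, 0.0, 0.0)    # V/m
B = (0.0, 0.01, 0.0)      # T

S = tuple(x / MU0 for x in cross(E, B))           # Poynting vector S = (E x B)/mu0, W/m^2
g = tuple(x / C**2 for x in S)                    # momentum density as S/c^2
g_feynman = tuple(EPS0 * x for x in cross(E, B))  # Feynman II-27 form: g = eps0 E x B

print(S, g, g_feynman)  # the last two agree, since eps0 = 1/(mu0 c^2)
```

No charges, no currents, yet a non-zero field momentum pointing along z - which is the point being made about momentum residing in the space the fields permeate.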
The main problem is understanding what is meant by a "connection" and the possibility that "a fibre bundle over spacetime" meant the usual one associated with general relativity. A "connection" would then be understood as an "affine connection". However, you probably didn't mean that but just used the word "connection" to imply some relationship or link between elements of a fibre bundle that just has 4-D spacetime as a base space.The connection and its curvature is the field. Sean Carroll explains what that means to some extent in one of his online lectures (it's number 15, Gauge Theory).
What dimensions does the field have? Do you mean length, time, mass - those sorts of dimensions OR the dimension of some linear space (the number of vectors in a basis etc.) OR a measurement of just space occupied ( 3 cm x 10 cm x 12 cm ) OR something else? i.d.k.The dimensions are the electric and magnetic fields--photons have two field dimensions; when they propagate in the vacuum they 'occupy' all three spatial dimensions. The dimensions of spacetime aren't the dimensions of the electromagnetic field.
1. Idea--https://ncatlab.org/nlab/show/Aharonov-Bohm+effect
The Aharonov-Bohm effect is a configuration of the electromagnetic field which has vanishing electric/magnetic field strength (vanishing Faraday tensor F=0) but is nevertheless non-trivial, in that the vector potential A is non-trivial. Since the vector potential affects the quantum mechanical phase on the wavefunction of electrons moving in an electromagnetic field, in such a configuration classical physics sees no effect, but the phase of quantum particles, which may be observed as an interference pattern on some screen, does.
More technically, a configuration of the electromagnetic field is generally given by a circle-principal connection and an Aharonov-Bohm configuration is one coming from a flat connection, whose curvature/field strength hence vanishes, but which is itself globally non-trivial. This is only possible on spaces (spacetimes) which have a non-trivial fundamental group, hence for instance it doesn't happen on Minkowski spacetime.
In practice one imagines an idealized electric current-carrying solenoid in Euclidean space. Away from the solenoid itself the magnetic field produced by it gives such a configuration.
But the probability is A² so real particles have real distributions.Yeah. I don't know that I can explain why the square of a complex amplitude is a mass term (in a Lagrangian). Sean Carroll might be able to, or we might be able to discuss what he's talking about.
And once again you are in danger of confusing model with reality. We can predict an interference pattern by superposing wave functions, but where an individual photon/electron/buckyball goes within that distribution is entirely random. It's no big deal: you can predict the outcome distribution of an infinite number of dice throws or even coin tosses very accurately, but each throw is unpredictable. Nothing is "coded in a superposition" to determine what happens next.I think the best model of an interference pattern is that it's a way to verify that your quantum computer is working.
In short: all global structure in field theory is controlled by fiber bundles, and all the more the more the field theory is quantum and gauge. The only reason why this can be ignored to some extent is because field theory is a complex subject and maybe the majority of discussions about it concerns really only a small little perturbative local aspect of it. But this is not the reality. The QCD vacuum that we inhabit is filled with a sea of non-trivial bundles and the whole quantum structure of the laws of nature are bundle-theoretic . . .--https://ncatlab.org/nlab/show/fiber+bundles+in+physics
What does the existence of energy mean, anyway?Physics is the business of constructing mathematical models of what happens (or doesn't happen - the branch of physics known as civil engineering). Within those models, energy is a conserved quantity in classical physics, and remains conserved as mass-energy in relativistic physics. Nothing more or less.
Philosophy tries to decide which questions are meaningful; Physics just does some experiments.No. Philosophers assert which questions are meaningful, then question the meaning of meaningful until they disappear through their own anal sphincters. Physics is about discovering and predicting useful and interesting stuff.
nobody has ever insulted me with the title of philosopher!I had assumed you were a PhD.
But a gentleman wouldn't draw attention to it, surely?nobody has ever insulted me with the title of philosopher!I had assumed you were a PhD.
I wasn't aware that PhDs were handed out by gentlemen.But a gentleman wouldn't draw attention to it, surely?nobody has ever insulted me with the title of philosopher!I had assumed you were a PhD.
The Aharonov–Bohm effect is important conceptually because it bears on three issues apparent in the recasting of (Maxwell's) classical electromagnetic theory as a gauge theory, which before the advent of quantum mechanics could be argued to be a mathematical reformulation with no physical consequences. The Aharonov–Bohm thought experiments and their experimental realization imply that the issues were not just philosophical.--https://en.wikipedia.org/wiki/Aharonov%E2%80%93Bohm_effect#Magnetic_solenoid_effect
The three issues are:
1. whether potentials are "physical" or just a convenient tool for calculating force fields;
2. whether action principles are fundamental;
3. the principle of locality.
We all have to deal with philosophy. Physics, "by itself", doesn't really try to cover philosophical questions or answers.Best way to deal with philosophy is to hold your nose and walk away. Philosophical questions do not have answers because if they did, philosophers would be out of work.
A physics joke.I believe the neutron would also be affected by a magnetic field due to the neutron's magnetic moment. Of course it would be much less than for the proton.
A proton and a neutron go into a magnetic field bar. The neutron heads straight towards the bartender, but the proton tries to and ends up smacking into a wall.
The bartender says to the neutron, "Your mate ok?", The neutron says, "I think he's a bit charged up about something".
I believe the neutron would also be affected by a magnetic field due to the neutron's magnetic moment. Of course it would be much less than for the proton.The neutron scattering experiment was finely tuned. Actually both experiments that Bernstein and Phillips explain are highly engineered to very close tolerances.
The total rotation angle induced along the path through the magnetic field is equal to the Larmor precession frequency multiplied by the time the neutrons spend in the field. The angle can therefore be calculated from measurements of the velocity of the beam, the intensity of the field and the distance across the field. In the version of the experiment done by Rauch, Bonse and their colleagues the neutrons travel through a magnetic field 1.5 centimeters wide at a speed of 2,170 meters per second, so that each neutron spends a little less than seven microseconds in the field.
When the electromagnet is operating at maximum current, the strength of the field is 400 gauss, which corresponds to a Larmor frequency of 433 million degrees per second. At this rate, in seven microseconds the spin vector of each neutron rotates about eight full turns. If each 360-degree rotation of the spin vector restored a neutron to its original state, one would expect to observe eight cycles of maximum and minimum counts. The actual result is significantly different. As the magnetic field increases from zero to its maximum the number of neutrons detected at the counter passes through only four cycles.
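The arithmetic in that passage can be verified directly from the quoted numbers:

```python
# Figures quoted from the Rauch/Bonse neutron experiment as described above.
path_cm = 1.5          # width of the field region, cm
speed   = 2170.0       # neutron speed, m/s
larmor  = 433e6        # precession rate at 400 gauss, degrees per second

time_in_field = (path_cm / 100) / speed       # seconds in the field
total_angle   = larmor * time_in_field        # total precession, degrees
turns         = total_angle / 360

print(f"time in field: {time_in_field*1e6:.2f} us")            # ~6.9 us
print(f"rotation: {total_angle:.0f} deg = {turns:.1f} turns")  # ~8 turns
```

So "a little less than seven microseconds" and "about eight full turns" both check out, and the observed four intensity cycles instead of eight is the famous 4π-periodicity of spin-1/2 rotation.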
Inside a solenoid the (electric) current is in the same direction as the magnetic flux along the field lines, outside it's inverted and opposes the current.Not sure we are looking at the same picture here! The field of a solenoid is merely an extension of the field of a single loop. The current direction is circular, with the field lines axial to the solenoid.
The "turns" business is just a matter of engineering practicality: it's much easier to make a long homogeneous field by driving 1 amp through 100 turns of wire than by driving 100 amps around 1 turn of rolled sheet - though superconductors do allow very large currents to circulate through very few turns.Yes, I would say that it's because of the convenience of having the same current density in a metal coil as in a metal cylinder.
how can simplification and sophistication, work together?Said it before, will say it again. Rocket science is just two equations. Rocket engineering is a lot more complicated. The trick is to keep adding bits of science until your model is good enough for practical purposes.
Assume a current I = 1 ampere, and a wire of 2 mm diameter (radius = 0.001 m). This wire has a cross-sectional area A of π × (0.001 m)² = 3.14×10⁻⁶ m² = 3.14 mm². The charge of one electron is q = −1.6×10⁻¹⁹ C. The drift velocity therefore can be calculated as.....
.....2.3×10⁻⁵ m/s
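The same estimate as code, using the figures above together with the electron density for copper quoted later in the thread:

```python
import math

# Textbook drift-velocity estimate: v = I / (n * A * q)
I = 1.0                  # current, A
r = 0.001                # wire radius, m (2 mm diameter)
A = math.pi * r**2       # cross-section, ~3.14e-6 m^2
n = 8.5e28               # free electrons per m^3 in copper
q = 1.6e-19              # elementary charge magnitude, C

v_drift = I / (n * A * q)
print(f"A = {A:.2e} m^2, v_drift = {v_drift:.1e} m/s")  # ~2.3e-05 m/s
```

About 23 micrometres per second: the electrons carrying a 1 A current drift slower than a snail, which is the point being made about signal speed versus electron speed.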
There is movement of electrons, transmitted at the speed of light in the medium, but the net flow (drift) is very slow.You mean the movement is transmitted, not the electrons?
What does anyone here think of the idea, commonly found, that electrons flow along wires and that's what you pay for, when the bill is due?Advice: If you plan on making a significant change in topic from what you started in this thread, then it may be best to start a new thread. I'm not staff and it doesn't bother me, it's just that you might get more replies and/or more relevant replies if people know they don't have to read the last 11 pages to join the current discussion.
You mean the movement is transmitted, not the electrons?effectively, yes
Another reason the electrons don't move much is, they don't have to, because electrons pack a lot of energy into a small volume.Er, no. There are just an awful lot of free electrons in a conductor! Current is the quantity of charge passing through a plane per unit time.
In 1 m³ of copper, there are about 8.5×10²⁸ atoms. Copper has one free electron per atom, so n is equal to 8.5×10²⁸ electrons per cubic metre.I was trying to make the point that a small fraction of that number is what moves under an applied electric field.
I was trying to make the point that a small fraction of that number is what moves under an applied electric field.No, all the conduction electrons move. Same as my "poking tar with a stick" example.
No, all the conduction electrons move. Same as my "poking tar with a stick" example.All the electrons move, but only a fraction move in the same direction at the drift velocity. This is much lower than the random velocities, with no net flow.
If there is no net flow, why do we measure a current?The "no net flow" condition is when there is no applied field (no voltage or current source). An applied field gives some of the valence electrons a net flow, with a corresponding flow of holes in the opposite direction. The bulk of the electrons remains dynamically thermal though, so I say there is no net flow for those.
The picture is simple - just think of the stick model, or if that's too complicated, imagine a crowd leaving a stadium. When the final whistle blows they all move. The ones nearest the gate leave immediately; the number leaving the stadium per unit time (the current) depends on the ratio of the width of the gate to the width of a person, and the drift velocity of those inside the stadium may be very slow indeed.Ok. There's always a problem with analogies but I get the gist of what you say. The big difference is that when electrons do this, the changes in the fields move too, at the speed of light, and most of the field is outside the conductor, in free space.
Not sure what you mean by changes in the external field.I mean in the sense electromagnetic signals travel at c.
A d.c. flow of electrons produces a static magnetic field around the conductor but no external electric field.That's only a simplified truth. A more complete truth is that there will also be a static electric field around the conductor.
Wires have some small resistance.Interesting. I've just spent the morning measuring the magnetic field outside a closed superconducting ring where this statement is clearly not true!
Say you have a superconducting wire delivering power to a load; what would the Poynting vector be?
Now, Alancalverd seems to be denying E and B fields and their importance in delivering power in electrical circuits, that's something I might be able to change with some discussion.It is entirely reasonable that a static E field can be detected where there is a potential gradient, and obvious that there is a static B field around a conductor carrying a steady current, but varsigma was talking about an electromagnetic field, which I take to mean a time-varying and self-propagating field generated by accelerating charges.
It is entirely reasonable that a static E field can be detected where there is a potential gradient, and obvious that there is a static B field around a conductor carrying a steady current, but varsigma was talking about an electromagnetic field, which I take to mean a time-varying and self-propagating field generated by accelerating charges.Actually I was talking about the difference between an open circuit with no current flowing, and a circuit with a DC current. When you apply the current (resp. the voltage), the changes in the circuit (inside and outside) propagate through it, at c.
Actually I was talking about the difference between an open circuit with no current flowing, and a circuit with a DC current.
If instead you attach an open wire to a battery terminal, it has the same potential as the terminal for the same reason.
When you apply the current (resp. the voltage), the changes in the circuit (inside and outside) propagate through it, at c.There needs to be a careful use and understanding of the word "changes". It would be helpful to make a distinction between the changing of the potential and the (final) change in potential that is achieved.
power has passed in a straight line through the open space between the switch and the bulb - it has clearly not travelled all along the length of the wires.AAAGH! (to quote the bad guy getting his comeuppance in all the best comics).
And before the pedantic sharks attack, yes, the power dissipated in an LED or spark gap does indeed increase with time but again this is a thermal effect (tungsten has a positive temperature coefficient of resistance, semiconductors and plasmas generally negative) not a consequence of the finite value of v or u in the wires!LEDs and transistors have a smooth v-i response, except at the beginning or end of a switch from off to on (or on to off).
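The "smooth v-i response" is conventionally modelled by the Shockley diode equation; a minimal sketch with assumed, not measured, parameter values (and with series resistance neglected, so currents at higher voltages come out unrealistically large):

```python
import math

# Shockley diode equation: I = I_s * (exp(V / (n * V_T)) - 1).
# Assumed illustrative parameters; real LEDs vary, and series
# resistance is neglected here.
I_s = 1e-12    # saturation current, A
n = 2.0        # ideality factor (LEDs are typically near 2)
V_T = 0.02585  # thermal voltage at ~300 K, volts

def diode_current(v):
    return I_s * (math.exp(v / (n * V_T)) - 1)

for v in (0.5, 1.0, 1.5):
    print(f"V = {v:.1f} V -> I = {diode_current(v):.3e} A")
```

The curve is steep but perfectly smooth: there is no discontinuity in v-i, only in the switching transients at turn-on and turn-off.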
A rigorous analysis would require a transmission line treatment with the parameters of inductance and resistance of the conductor and interconductor capacitance and admittance (in air, 0).Yes. I've seen transmission lines modelled like this:
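The diagram itself isn't reproduced here, but a minimal lossless version of that kind of lumped-element model (taking R = 0 and G = 0, matching the "in air, 0" remark, with assumed per-unit-length values for a 50 Ω air-spaced line) can be sketched as:

```python
import math

# Lumped-element model of a lossless transmission line: a ladder of
# series inductors L'dx and shunt capacitors C'dx. The per-unit-length
# values below are assumed, chosen to roughly match a 50-ohm air line.
L_per_m = 1.67e-7   # series inductance, H/m
C_per_m = 6.67e-11  # shunt capacitance, F/m

Z0 = math.sqrt(L_per_m / C_per_m)        # characteristic impedance
v = 1.0 / math.sqrt(L_per_m * C_per_m)   # propagation speed

print(f"Z0 = {Z0:.1f} ohm")
print(f"v = {v:.2e} m/s")  # close to c, as expected for an air line
```

The two derived quantities are the point of the model: the ladder's L' and C' fix both the impedance the source sees and the speed at which changes propagate along the line, which ties back to the earlier discussion of signals travelling at c.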
I'd like to canvas an opinion or two about the likeness between two well-understood things, namely energy and information.You could start a new thread and put a Poll on it. It appears right at the top of the thread and is automatically updated. People can easily select a response and a bar chart of results is automatically generated (or hidden until some deadline is reached). Experiment with the options and you'll see for yourself.
With one tap on a keyboard I can eliminate any arbitrarily large amount of information, in principle.Can you explain what you mean by "eliminate"?
but the National Curriculum used to require primary school teachers to establish that "Energy is the 'go' of things"That phrase is sometimes attributed to J C Maxwell.
Maybe not, but it was quoted in the NC even if they didn't coin it. Maybe that's why so many people still think about aether - the Dept of Education's only physics textbook is 100 years out of date!With all the water vapour out there it can't be long before they track down the aether.
Aether does not exist.'twas said in jest (not joust).
The heuristic I'm using is that, given a pattern from a double-slit 'interaction', that is, given that interference is detected, there is a symmetry in that, along any vertical scan line there should be a regular (semi-regular?) distribution of dots.No.