The Naked Scientists Forum

Author Topic: What physics and math topics do people find hardest to grasp?  (Read 18191 times)

Offline damocles

  • Hero Member
  • *****
  • Posts: 756
  • Thanked: 1 times
    • View Profile
Thank you, Yor_on! Your example just highlights my reasoning! Because you only had a 1 in 100 chance of getting the first door right, you are 99% sure to be right if you change doors after the game leader has opened 98 of them.
That's not how probability works. Take a guess out of the 100 doors. Your probability of guessing right is 1/100. A door is opened and it's empty. Regardless of whether you keep or change doors, the probability of having the right one is then 1/99, and so on. This is different from playing the lottery. When playing the lottery, always play the same number, since your goal is to win in your lifetime, not merely today, even though any new number you pick has the same probability of winning as any other. Each problem is specific and needs to be addressed on its own terms. In the Monty Hall problem the winning door never changes, whereas in the lottery the winning number changes with every draw.

That's not how the game works! If you choose the wrong door, the host is obliged to show you where the prize is by revealing the 98 doors where he knows the prize is not, giving you a sure pointer to the prize. So your chances of winning are 1% if you stand, but 99% if you swap.
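A minimal Python simulation of the 100-door version (an illustration assuming the standard rules: the host knows where the prize is and always opens 98 empty doors other than the one you picked):

Code:
import random

random.seed(0)
n_games, n_doors = 100_000, 100
stick_wins = swap_wins = 0

for _ in range(n_games):
    prize = random.randrange(n_doors)
    choice = random.randrange(n_doors)
    # The host opens 98 doors he knows are empty, leaving your door and
    # one other closed. If your first pick was wrong, the remaining
    # closed door must hide the prize, so swapping wins.
    stick_wins += (choice == prize)
    swap_wins += (choice != prize)

print(f"stick: {stick_wins / n_games:.3f}")   # about 0.01
print(f"swap:  {swap_wins / n_games:.3f}")    # about 0.99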
 

Offline dlorde

  • Neilep Level Member
  • ******
  • Posts: 1441
  • Thanked: 9 times
  • ex human-biologist & software developer
    • View Profile
When playing the lottery, always play the same number, since your goal is to win in your lifetime, not merely today, even though any new number you pick has the same probability of winning as any other.
Can you explain the reasoning here? You surely have the same chance whether you change your number each time or not.

As I understand it, the only criterion for selecting a lottery number is to avoid one that other people might be likely to pick too; it doesn't help your chances, but if you do win, you're less likely to be sharing the prize.
 

Offline dlorde

  • Neilep Level Member
  • ******
  • Posts: 1441
  • Thanked: 9 times
  • ex human-biologist & software developer
    • View Profile
I disagree with these interpretations of quantum mechanics. A cat is a macroscopic animal whereas an atom is not. A cat is either alive or dead and not in a superposition of both.
At what point from atom to cat do you draw the line? Objects visible to the eye (40 microns, about 10 trillion atoms) have apparently been put into quantum superposition (see the quantum microphone). I don't doubt that decoherence would rule out any animate organic creature, but what about viruses or bacteria at cryogenic temperatures? Or a cryptobiotic tardigrade?

From the POV of size alone, it would be interesting to discover the practical size limit for a measurable duration of superposition.

The comments about Schrodinger's Cat were just whimsy.
 

Offline confusious says

  • Jr. Member
  • **
  • Posts: 38
    • View Profile
Reading my electricity bill :)
 

Offline yor_on

  • Naked Science Forum GOD!
  • *******
  • Posts: 11993
  • Thanked: 4 times
  • (Ah, yes:) *a table is always good to hide under*
    • View Profile
That one is weird too. There are two ways to look at it, and I don't know which one, or if both, are true, but they seem contradictory. If I flip a coin and get a hundred tails, that's a very unlikely result; I would call it a very low probability. But the coin is in a way reset each time you flip it, meaning it shouldn't care about those earlier results; each flip is a new instance, and the previous results have no bearing on the outcome. So one does not lead to the other, but statistically you should get a 50/50 split of tails and heads, flipping it in a series long enough. Maybe the question should be whether there is a way to define when an improbable series, only tails, becomes so overextended that you could expect it to come up heads? It makes me think of Feigenbaum's constant, this one; I wonder if there is some mathematical way to define an 'overextension'?

Common sense tells me that one thousand flips, all coming out tails, should be a very seldom-seen pattern. Against that you have the equally valid point that each new flip should be counted as starting anew. Looked at that way, there is no reason to choose any number over any other, although you might want to pick the ones that other people don't tend to pick. On the other hand, there should be some breaking point for those tails, when they become so improbable that ??
=

Keep jumping over words, and, eh, spellings :)
=

One can think of it this way, maybe: the pattern of a thousand tails is no more uncommon than any other specific pattern, like tail - tail - head repeated over and over for a thousand flips. That should make a thousand tails, or heads, seem uncommon only because it is so easy for us to recognize. That makes the view of the coin being 'reset' with each new flip the one that makes most sense. What I mean is that if you track the way a pattern evolves over time as you flip a coin, all possible patterns have an equal chance of evolving, singling out no pattern as more probable. A pattern would then be whatever way heads and tails happen to arrange themselves over a thousand throws.
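A minimal Python sketch of that 'patterns' argument, scaled down to 10 flips so the full enumeration stays small: with a fair coin, every specific sequence, all tails included, has exactly the same probability, 1/2^10.

Code:
from itertools import product
from fractions import Fraction

n = 10  # 10 flips instead of 1000 to keep the enumeration small
sequences = list(product("HT", repeat=n))   # all 2**n possible outcome patterns

all_tails = tuple("T" * n)
alternating = tuple("HT" * (n // 2))        # an 'unremarkable looking' pattern

# With a fair coin every specific sequence is equally likely,
# so each one has probability 1 / 2**n.
p = Fraction(1, len(sequences))
print(f"total patterns: {len(sequences)}")
print(f"P(all tails)    = {p} = {float(p):.6f}")
print(f"P(H,T,H,T,...)  = {p} = {float(p):.6f}")
assert all_tails in sequences and alternating in sequences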
« Last Edit: 14/05/2013 23:58:02 by yor_on »
 

Offline damocles

  • Hero Member
  • *****
  • Posts: 756
  • Thanked: 1 times
    • View Profile
If a coin were flipped and came up tails one thousand times in a row, I would be inclined to bet on it coming up tails the next time, because I would regard the previous one thousand tails as statistical evidence in favour of the coin (or the toss) being biased.
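A rough Bayesian sketch of that reasoning in Python; the 1-in-100 prior on a double-tailed coin is an arbitrary number chosen purely for illustration, but even a tiny prior on bias swamps the fair-coin hypothesis after a long run of tails.

Code:
# Hypothetical prior: a 1-in-100 chance the coin is double-tailed,
# otherwise it is fair. After observing k tails in a row, compare
# the posterior odds of "biased" vs "fair".
prior_biased = 0.01
prior_fair = 1.0 - prior_biased

k = 50                        # 1000 tails would underflow a float; 50 makes the point
likelihood_biased = 1.0       # a double-tailed coin always shows tails
likelihood_fair = 0.5 ** k    # a fair coin shows k tails with probability (1/2)**k

posterior_odds = (prior_biased * likelihood_biased) / (prior_fair * likelihood_fair)
print(f"posterior odds (biased : fair) after {k} tails = {posterior_odds:.3e}")
# Roughly 1e13 to 1 in favour of the coin being biased, so betting on
# tails for the next flip is the rational call.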
 

Offline yor_on

  • Naked Science Forum GOD!
  • *******
  • Posts: 11993
  • Thanked: 4 times
  • (Ah, yes:) *a table is always good to hide under*
    • View Profile
Yeah, I know, Damocles :) Can't help that it goes against my basic instincts, though. Which is why I don't gamble; can't trust those instincts ::))
=

Then again, thinking of it as patterns, the probability of it coming out 'any which way' should still be 50/50, shouldn't it, as no pattern is more probable than any other (over 1000 flips, or just 1)? They should all be equally probable, as it seems to me; otherwise we have found a bias. And I think one can look at the constant 'reset', as well as the 'equally probable patterns', as a logical argument that the coin, flipped enough times, must also give us a roughly equal share of heads and tails.
« Last Edit: 15/05/2013 00:23:31 by yor_on »
 

Offline yor_on

  • Naked Science Forum GOD!
  • *******
  • Posts: 11993
  • Thanked: 4 times
  • (Ah, yes:) *a table is always good to hide under*
    • View Profile
There is one more thing, though: calling an outcome randomly before the flip. Would that give you a statistically better chance of being correct, more than a 50% probability, than always calling the same set outcome, like calling 'tails' before each flip? And if that were true (introducing a random call before the actual throw), why would that be?
 

Offline dlorde

  • Neilep Level Member
  • ******
  • Posts: 1441
  • Thanked: 9 times
  • ex human-biologist & software developer
    • View Profile
Seems to me that if it's a fair (random) coin, you'll average 50% success calling the same each time or calling randomly. Each call has a 50% chance of being right, regardless of previous calls.
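A quick simulation sketch of that in Python, assuming a fair coin: both calling strategies come out at about 50%.

Code:
import random

random.seed(1)
n = 100_000
flips = [random.choice("HT") for _ in range(n)]   # a fair coin

same_call_hits = sum(f == "T" for f in flips)                     # always call tails
random_call_hits = sum(f == random.choice("HT") for f in flips)   # a fresh random call each flip

print(f"always call tails: {same_call_hits / n:.3f}")
print(f"random call:       {random_call_hits / n:.3f}")
# Both hover around 0.500; the caller's strategy cannot change the coin's odds.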
 

Offline yor_on

  • Naked Science Forum GOD!
  • *******
  • Posts: 11993
  • Thanked: 4 times
  • (Ah, yes:) *a table is always good to hide under*
    • View Profile
Yes, that's what I thought too, but it seems to have been a professor of statistics arguing that by introducing randomness on your side too, you will get a better probability? I don't think it can be right myself, but I'm not sure?
 

Offline dlorde

  • Neilep Level Member
  • ******
  • Posts: 1441
  • Thanked: 9 times
  • ex human-biologist & software developer
    • View Profile
If you can consistently get better than 50%, the coin sequence isn't random. Your choice (prediction) can't affect the coin odds.
 

Offline wolfekeeper

  • Neilep Level Member
  • ******
  • Posts: 1092
  • Thanked: 11 times
    • View Profile
I think people have a lot of trouble with statistics; the Monty Hall problem is a classic example, and statistics in general seems to fry people's brains.

Relativity, again, people usually can't hack it.

QM, practically nobody really has much of a clue!
 

Offline Pmb

  • Neilep Level Member
  • ******
  • Posts: 1838
  • Physicist
    • View Profile
    • New England Science Constortium
Quote from: damocles
That's not how the game works! If you choose the wrong door, the host is obliged to show you where the prize is by revealing the 98 doors where he knows the prize is not, giving you a sure pointer to the prize. So your chances of winning are 1% if you stand, but 99% if you swap.
Then what I said was caca?? ;)

I don't understand. Suppose that the prize is behind door number 2. If I choose door number 1, then he reveals that doors 3-100 have nothing behind them? If so, then it seems to me that you have to choose door number 2 after that and have a 100% chance of winning.

I don't think I understand that game. In any case I'm not interested. It's getting off topic for me.
 

Offline Pmb

  • Neilep Level Member
  • ******
  • Posts: 1838
  • Physicist
    • View Profile
    • New England Science Constortium
Quote from: dlorde
Can you explain the reasoning here? You surely have the same chance whether you change your number each time or not.
Do two experiments using a single die.

Experiment Number 1: Roll the die 100 times. Every time the number 1 comes up give yourself a penny.

Experiment Number 2: Roll the die 100 times.
On the first roll if the number that comes up is a 1 give yourself a penny.
Roll the die again. If the number that comes up is a 2 give yourself a penny.
Roll the die again. If the number that comes up is a 3 give yourself a penny.
Roll the die again. If the number that comes up is a 4 give yourself a penny.
Roll the die again. If the number that comes up is a 5 give yourself a penny.
Roll the die again. If the number that comes up is a 6 give yourself a penny.
Roll the die again. If the number that comes up is a 1 give yourself a penny.
Roll the die again. If the number that comes up is a 2 give yourself a penny.
(keep doing this until you've rolled the die 100 times)

The probability of getting more money in experiment number one is greater than in experiment number two.

My expertise in combinatorics is too rusty to calculate the exact probabilities. It's been over twenty years since I took that course. Blech! :)

Quote from: dlorde
As I understand it, the only criteria for selecting a lottery number is to avoid one that other people might be likely to pick too; it doesn't help your chances, but if you do win, you're less likely to be sharing the prize.
Never worry about that, because it's beyond your control and doesn't affect the probability of winning or the amount.
 

Offline burning

  • Full Member
  • ***
  • Posts: 71
    • View Profile
The probability of getting more money in experiment number one is greater than in experiment number two.


I'm pretty sure that's wrong.

The probability of the number you guessed coming up is 1 in 6 for every roll, regardless of whether you guess the same number each time, a different number each time following a pattern, or a different number each time chosen at random.  The expectation value for the number of wins will then be 100/6 for either experiment.

Can you explain your reasoning why you expect differently?

By the way, I ran the experiments using the random number generator in Excel. I know that it's not a high-quality random number generator, but it should be good enough to imitate a fair die. I "rolled" 1000 dice at a time and compared the number of wins under the two schemes. While I didn't repeat the experiments enough times to give a conclusive statistical analysis, both methods gave results within a reasonable error range of 1000/6, and neither method showed a tendency to win more often than the other.
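The same check is easy to reproduce outside Excel; here is a minimal Python sketch of the two experiments (a fair six-sided die, 1000 rolls each), not burning's actual spreadsheet:

Code:
import random

random.seed(42)
n_rolls = 1000

# Experiment 1: always guess 1.
wins_fixed = sum(random.randint(1, 6) == 1 for _ in range(n_rolls))

# Experiment 2: cycle the guess 1, 2, 3, 4, 5, 6, 1, 2, ...
wins_cycling = sum(random.randint(1, 6) == (i % 6) + 1 for i in range(n_rolls))

print(f"fixed guess:   {wins_fixed} wins")
print(f"cycling guess: {wins_cycling} wins")
# Both land near 1000/6 (about 167); the guessing scheme makes no difference,
# because each roll is independent and every face has probability 1/6.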
 

Offline Pmb

  • Neilep Level Member
  • ******
  • Posts: 1838
  • Physicist
    • View Profile
    • New England Science Constortium
Quote from: burning
Can you explain your reasoning why you expect differently?
As I explained above, my combinatorics is very rusty. Think of changing the number as trying to hit a moving target rather than a stationary one. But who knows, I could be wrong. You need a solid knowledge of combinatorics to determine this exactly, and I haven't done that in decades. So sure, perhaps you're right and I'm wrong. You can always try it and see what happens.
 

Offline dlorde

  • Neilep Level Member
  • ******
  • Posts: 1441
  • Thanked: 9 times
  • ex human-biologist & software developer
    • View Profile
The probability of getting more money in experiment number one is greater than in experiment number two.

I don't see how that's possible. The odds are one in six each time; your choice can't change that. OTOH, if your idea had legs, we could clean out the casinos  ;D
 

Offline yor_on

  • Naked Science Forum GOD!
  • *******
  • Posts: 11993
  • Thanked: 4 times
  • (Ah, yes:) *a table is always good to hide under*
    • View Profile
Is it this you're thinking of Pete?

"If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero. Turning this around, if the probability that the random object has the property is greater than zero, then this proves the existence of at least one object in the collection that has the property. It doesn't matter if the probability is vanishingly small; any positive probability will do.

Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does not satisfy the prescribed properties.

Another way to use the probabilistic method is by calculating the expected value of some random variable. If it can be shown that the random variable can take on a value less than the expected value, this proves that the random variable can also take on some value greater than the expected value."

"In probabilistic combinatorics, the questions are of the following type: what is the probability of a certain property for a random discrete object, such as a random graph? For instance, what is the average number of triangles in a random graph? Probabilistic methods are also used to determine the existence of combinatorial objects with certain prescribed properties (for which explicit examples might be difficult to find), simply by observing that the probability of randomly selecting an object with those properties is greater than 0. This approach (often referred to as the probabilistic method) proved highly effective in applications to extremal combinatorics and graph theory. A closely related area is the study of finite Markov chains, especially on combinatorial objects. Here again probabilistic tools are used to estimate the mixing time.

Often associated with Paul Erdős, who did the pioneer work on the subject, probabilistic combinatorics was traditionally viewed as a set of tools to study problems in other parts of combinatorics. However, with the growth of applications to analysis of algorithms in computer science, as well as classical probability, additive and probabilistic number theory, the area recently grew to become an independent field of combinatorics."

And this http://www.goldsim.com/Web/Introduction/Probabilistic/MonteCarlo/

"In order to compute the probability distribution of predicted performance, it is necessary to propagate (translate) the input uncertainties into uncertainties in the results. A variety of methods exist for propagating uncertainty.  Monte Carlo simulation is perhaps the most common technique for propagating the uncertainty in the various aspects of a system to the predicted performance.
 
In Monte Carlo simulation, the entire system is simulated a large number (e.g., 1000) of times. Each simulation is equally likely, referred to as a realization of the system. For each realization, all of the uncertain parameters are sampled (i.e., a single random value is selected from the specified distribution describing each parameter). The system is then simulated through time (given the particular set of input parameters) such that the performance of the system can be computed. This results in a large number of separate and independent results, each representing a possible “future” for the system (i.e., one possible path the system may follow through time). The results of the independent system realizations are assembled into probability distributions of possible outcomes. As a result, the outputs are not single values, but probability distributions."

That one sounds close to what I called 'patterns' to me. And the number fits too :)
Anyone have a simple example of it, maybe?

How about using uncertainty to make a guess more certain, without hidden parameters?
Or do I need to assume hidden parameters for it to work? Like the game master 'knowing' which door contains the car, and so never opening that one. (And he can't open my first choice of door either, as that would destroy my later choice, or so it seems to me?)
 

Offline JP

  • Neilep Level Member
  • ******
  • Posts: 3366
  • Thanked: 2 times
    • View Profile
You don't have to do 100 rolls to check it.  There's nothing fundamentally different between the way the dice behave after 100 rolls or 2 rolls, and you can substitute a 2-sided die (a coin with numbered faces) for a 6-sided die without changing the fundamentals of the problem.  Your possible outcomes of two flips are:

1,1
1,2
2,1
2,2

and they're all equally likely.

If you choose method 1 (predicting 1,1 as the outcome), your earnings are:
2 cents 25% of the time
1 cent 50% of the time
0 cents 25% of the time

If you choose method 2 (predict 1,2 as the outcome), your earnings are:
2 cents 25% of the time
1 cent 50% of the time
0 cents 25% of the time

And it's easy to verify that any guess will have the same odds of winning, since each roll is independent of the others and each call has a 50% chance of being right. This extends straightforwardly to more rolls and to 6-sided dice. Things do change if you just want to guess the numbers rolled, independent of ordering: if someone asks what the two values in 2 flips of this die are, you're best off guessing 1 and 2, since that combination shows up half the time.
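A short Python sketch that brute-forces the same two-flip case and confirms both earnings distributions, plus the unordered "a 1 and a 2" guess:

Code:
from itertools import product
from collections import Counter

outcomes = list(product([1, 2], repeat=2))   # (1,1), (1,2), (2,1), (2,2), all equally likely

def earnings(prediction):
    # One cent for each flip whose value matches the corresponding prediction.
    dist = Counter(sum(flip == guess for flip, guess in zip(o, prediction)) for o in outcomes)
    return {cents: count / len(outcomes) for cents, count in sorted(dist.items())}

print("predict (1, 1):", earnings((1, 1)))   # {0: 0.25, 1: 0.5, 2: 0.25}
print("predict (1, 2):", earnings((1, 2)))   # {0: 0.25, 1: 0.5, 2: 0.25}

# Unordered guess: "the two values are a 1 and a 2" is right half the time.
print("P(one 1 and one 2):", sum(sorted(o) == [1, 2] for o in outcomes) / len(outcomes))   # 0.5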
 

Offline yor_on

  • Naked Science Forum GOD!
  • *******
  • Posts: 11993
  • Thanked: 4 times
  • (Ah, yes:) *a table is always good to hide under*
    • View Profile
That's pure probability, as I read you JP :)

But using uncertainty to lower an uncertainty on the other side, aka not knowing the dice's outcome before they show it? Is there really a way to do that? And an example of it please :)
=

Or better expressed: can you fight the dice's or the lottery's randomness by introducing your own randomness? It's not exactly the same as what Pete suggested, but the idea caught my imagination. Is it possible? And in what ways/situations?
« Last Edit: 16/05/2013 17:51:47 by yor_on »
 

Offline dlorde

  • Neilep Level Member
  • ******
  • Posts: 1441
  • Thanked: 9 times
  • ex human-biologist & software developer
    • View Profile
In Monte Carlo simulation, the entire system is simulated a large number (e.g., 1000) of times. Each simulation is equally likely, referred to as a realization of the system. For each realization, all of the uncertain parameters are sampled (i.e., a single random value is selected from the specified distribution describing each parameter). The system is then simulated through time (given the particular set of input parameters) such that the performance of the system can be computed. This results in a large number of separate and independent results, each representing a possible “future” for the system (i.e., one possible path the system may follow through time). The results of the independent system realizations are assembled into probability distributions of possible outcomes. As a result, the outputs are not single values, but probability distributions."

That one sounds close to what I called 'patterns' to me. And the number fits too :)
Anyone have a simple example of it, maybe?

Weather forecasting. They run numerous projections with a number of models, varying the initial parameters. This gives them a spectrum of possible futures. If the weather is in a reasonably non-chaotic state there will be groups of similar patterns in the result spectrum. The sizes of the groups can be used to give a probability estimate for each predicted weather pattern.
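A toy Python sketch of that ensemble idea; the 'model' here is just a made-up noisy growth process, purely for illustration, not a weather model: run it many times from slightly perturbed starting values and read a probability off the spread of outcomes.

Code:
import random

random.seed(0)

def toy_model(x0, steps=30):
    """A made-up dynamical 'forecast': noisy multiplicative growth."""
    x = x0
    for _ in range(steps):
        x *= 1.0 + random.gauss(0.0, 0.05)
    return x

# Ensemble: many realizations, each from a slightly perturbed initial condition.
realizations = [toy_model(10.0 + random.gauss(0.0, 0.1)) for _ in range(1000)]

# Turn the ensemble into a probability estimate for an outcome of interest.
p_above_12 = sum(x > 12.0 for x in realizations) / len(realizations)
print(f"estimated P(final value > 12) = {p_above_12:.2f}")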
 

Offline yor_on

  • Naked Science Forum GOD!
  • *******
  • Posts: 11993
  • Thanked: 4 times
  • (Ah, yes:) *a table is always good to hide under*
    • View Profile
Nice example, dlorde. It reminds me of a 'fractal approach' too, as you look for larger patterns in the patterns visible. It also reminds me of assigning 'weights' in neural networks, as those 'clumps' of patterns closest to each other might be said to be 'weighted up' by probability.

Then we just have randomness left, it seems, and Pete's suggestion going the other way, defining an ordered approach. Both involve decision-making, though, even if it is random in the first case. And those two are the ones I find most difficult to imagine, but rather intriguing.
 

Offline dlorde

  • Neilep Level Member
  • ******
  • Posts: 1441
  • Thanked: 9 times
  • ex human-biologist & software developer
    • View Profile
Then we just have randomness left, it seems, and Pete's suggestion going the other way, defining an ordered approach.
There is another major option: chaos. For example, when the weather is in a chaotic state, the simulations come out very differently regardless of how close together the initial parameters are set. It's not random, it's entirely deterministic; but it's totally unpredictable... non-linear dynamics, the Butterfly Effect; it was all the rage in the '80s.
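A tiny Python sketch of that sensitivity, using the logistic map as a stand-in (a standard textbook example, not anything from the posts above): two starting values that differ by one part in a billion end up nowhere near each other, even though every step is deterministic.

Code:
def logistic(x, r=4.0, steps=50):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic(0.200000000)
b = logistic(0.200000001)   # initial condition nudged by one part in a billion
print(f"{a:.6f} vs {b:.6f}")   # deterministic, yet the trajectories diverge completely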
 

Offline yor_on

  • Naked Science Forum GOD!
  • *******
  • Posts: 11993
  • Thanked: 4 times
  • (Ah, yes:) *a table is always good to hide under*
    • View Profile
Yes, I agree. Let me put it this way, though: when I think of randomness, I think of it from chaos. Maybe that's not correct, but to me they become equivalent, although you might want to confine randomness to superpositions microscopically, as an example of how I think :) versus chaos macroscopically. Even though you can call chaos deterministic, as in if we only knew all the parameters we could describe it, I personally relate it to randomness.

Maybe I could express it as: I don't think there will ever be a possibility of knowing the whole history of anything. That one seems to run through all of physics, no matter what scale you look at it from?
 

Offline JP

  • Neilep Level Member
  • ******
  • Posts: 3366
  • Thanked: 2 times
    • View Profile
That's pure probability, as I read you JP :)

But using uncertainty to lower an uncertainty on the other side, aka not knowing the dice's outcome before they show it? Is there really a way to do that? And an example of it please :)
=

Or better expressed: can you fight the dice's or the lottery's randomness by introducing your own randomness? It's not exactly the same as what Pete suggested, but the idea caught my imagination. Is it possible? And in what ways/situations?

No, you can't if the dice are actually random. You don't have to take my word for it, though. The two-flip case is easy enough to think about: you can list out all the possibilities very easily, and each flip has a 50/50 chance of producing either 1 or 2.
 
