Naked Science Forum

On the Lighter Side => New Theories => Topic started by: Chondrally on 26/09/2015 10:58:40

Title: What is the State Space Method of Information Theory?
Post by: Chondrally on 26/09/2015 10:58:40
It is derived from statistical mechanics and the set-theoretic approach. Please follow the derivation:
Imagine overlapping sets A and B, so that there is an intersection A^B and a union AUB, and for each of these in a universe Omega there is a definite probability of occurring. These would be:
p(Omega)=1, containing N items.
p(A) is the probability of set A, containing Na items, so p(A)=Na/N
p(B) is the probability of set B, containing Nb items, so p(B)=Nb/N
p(A^B) is the probability of the intersection of sets A and B.
So p(AUB) = p(A) + p(B) - p(A^B), or equivalently p(A^B) = p(A) + p(B) - p(AUB), and

Omega = Omega(A) + Omega(B) - Omega(A^B) + Omega(Not AUB)
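As a quick sanity check, here is a minimal Python sketch of these two identities (the universe and sets are invented for illustration):

# Minimal check of the inclusion-exclusion identities above,
# using small invented sets.
Omega = set(range(20))            # universe, N = 20 items
A = {0, 1, 2, 3, 4, 5, 6, 7}      # Na = 8
B = {5, 6, 7, 8, 9, 10}           # Nb = 6

N = len(Omega)
pA = len(A) / N                       # p(A) = Na/N
pB = len(B) / N                       # p(B) = Nb/N
pAB = len(A & B) / N                  # p(A^B)
pAuB = len(A | B) / N                 # p(AUB)

# p(AUB) = p(A) + p(B) - p(A^B)
assert abs(pAuB - (pA + pB - pAB)) < 1e-12
# N = Na + Nb - N(A^B) + N(Not AUB)
assert N == len(A) + len(B) - len(A & B) + len(Omega - (A | B))
print(pA, pB, pAB, pAuB)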
therefore from statistical mechanics:
1 = Sum over A: p(xi/A) + Sum over B: p(xi/B) - Sum over A^B: p(xi/A^B) + Sum over Not AUB: p(xi/Not AUB)
-k*ln(Omega) = k*ln(Sum over A: p(xi/A)) + k*ln(Sum over B: p(xi/B)) - k*ln(Sum over A^B: p(xi/A^B)) + k*ln(Sum over Not AUB: p(xi/Not AUB))


if p(xi/A) = Sum over A: exp(beta*xi)*p(A) / (Sum over A: exp(beta*xi)) = k*Na + k*ln(p(A)),
therefore, simplifying:
Entropy total = RTot = -k*ln(p(A)) - k*ln(p(B)) + k*ln(p(A) + p(B) - p(AUB)) - k*ln(p(Not AUB))
If A and B are disjoint, so that A^B = NULL, then
RTot = -k*ln(p(A)) - k*ln(p(B)) - k*ln(p(Not AUB))
If Na + Nb = N and Not AUB = NULL, then
p(B) = 1 - p(A) = 1 - p, and
R = -k*ln(p*(1-p))
Imagine a statistical distribution over the xi; then RTot = -k * Sum over xi: ln(p*(1-p)), with p evaluated at each xi.
If xi is continuous, with values x, then
RTot = -k * Integral from a to b: ln(p(x)*(1-p(x))) dx
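Since p(x) is left unspecified above, here is a hedged numerical sketch in Python: k = 1, the interval [a, b], and the profile p(x) are all invented for illustration, and the integral is taken by a simple Riemann sum.

import math

# Riemann-sum evaluation of RTot = -k * Integral_a^b ln(p(x)*(1-p(x))) dx.
# p(x) must stay strictly between 0 and 1, or the integrand diverges.
k = 1.0
a, b, n = 0.0, 1.0, 100000

def p(x):
    return 0.25 + 0.5 * x   # invented profile, ranges over [0.25, 0.75]

dx = (b - a) / n
RTot = -k * dx * sum(math.log(p(a + (i + 0.5) * dx) * (1 - p(a + (i + 0.5) * dx)))
                     for i in range(n))
print(RTot)   # finite because p(x) is bounded away from 0 and 1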

So this formula would replace Shannon's formula:
ShanSTot = -p*ln(p)
Shannon's formula implicitly states that if the probability of an event is zero, it carries infinite information entropy and is highly informative, but if it is certain to occur, with probability 1, it carries no information entropy.

This new law asserts that if the probability is zero or one, i.e. completely impossible or absolutely certain to occur, the event is highly informative and the information entropy is infinite. If the probability is 0.5, i.e. completely random in occurrence, then the information is at a minimum.
This latter formula makes more intuitive sense to me, but that is my personal opinion. We will have to let the community of information theorists decide.
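To see the two behaviours side by side, here is a small Python comparison (k = 1 is an assumption, and the probabilities are arbitrary sample points):

import math

# Compare the per-event Shannon term -p*ln(p) with the proposed
# R = -k*ln(p*(1-p)) across a few probabilities.
k = 1.0
for p in (0.001, 0.1, 0.5, 0.9, 0.999):
    shannon = -p * math.log(p)
    R = -k * math.log(p * (1 - p))
    print(f"p={p:<6}  -p*ln(p)={shannon:8.4f}  -k*ln(p(1-p))={R:8.4f}")

# R is smallest at p = 0.5 and grows without bound as p -> 0 or p -> 1,
# whereas -p*ln(p) goes to 0 at both p -> 0 and p = 1.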
Best wishes,
Richard.
Similarly inspired by statistical mechanics, we can define an information temperature. Making use of the formula
L = exp(5/2) * V * (2*pi*m*k*T)^(3/2) / (N*h^3)
where
R = N*k*ln(L),
it follows after some work that:
5/2 + 3/2*ln(2*pi*k*Ta/Na) = Ta*ln(Na)
Solve for Ta, similarly for Tb.
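Here is one way that solve might look numerically, as a Python sketch; k = 1 and Na = 5 are invented toy values (for many choices of k and Na the equation has no real root, and when a root exists there are generally two, one on each side of the maximum):

import math

# Bisection solve of 5/2 + 3/2*ln(2*pi*k*Ta/Na) = Ta*ln(Na) for Ta.
k, Na = 1.0, 5

def f(T):
    return 2.5 + 1.5 * math.log(2 * math.pi * k * T / Na) - T * math.log(Na)

def bisect(lo, hi, tol=1e-10):
    assert f(lo) * f(hi) < 0, "no sign change on this bracket"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

T_peak = 1.5 / math.log(Na)   # f is largest here, so any roots bracket it
print(bisect(1e-6, T_peak), bisect(T_peak, 50.0))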

R = k*Ta*ln(Na) + k*Tb*ln(Nb) = k*ln(N^(Ta+Tb) * p^Ta * (1-p)^Tb)
approx. k*(p*Ta + (1-p)*Tb)*ln(N) = k*ln(N^(p*Ta + (1-p)*Tb))
R = -k*ln(p*(1-p)) in its simplest form (when Ta = Tb = 1)
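A quick numeric check of this chain in Python, taking Na = p*N and Nb = (1-p)*N (the values below are invented and k = 1). The first equality is exact; the k*(p*Ta + (1-p)*Tb)*ln(N) form is only an approximation:

import math

k, N, p, Ta, Tb = 1.0, 1000, 0.3, 1.2, 0.8
Na, Nb = p * N, (1 - p) * N

exact = k * Ta * math.log(Na) + k * Tb * math.log(Nb)
expanded = k * math.log(N ** (Ta + Tb) * p ** Ta * (1 - p) ** Tb)
approx = k * (p * Ta + (1 - p) * Tb) * math.log(N)

print(exact, expanded, approx)   # exact == expanded; approx can differ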
Title: Re: If certainty is informative, what is the formula in Information Theory?
Post by: Colin2B on 26/09/2015 17:35:02
Shannon's formula implicitly states that if the probability of an event is zero, it carries infinite information entropy and is highly informative, but if it is certain to occur, with probability 1, it carries no information entropy.
Not quite.
If something is certain not to occur and we don't know what will occur, there is infinite entropy. However, if it's definitely not going to happen because something else is certain to happen, then there is zero entropy. So to me it depends...

This new law asserts that if the probability is zero or one, i.e. completely impossible or absolutely certain to occur, the event is highly informative and the information entropy is infinite. If the probability is 0.5, i.e. completely random in occurrence, then the information is at a minimum.
This latter formula makes more intuitive sense to me, but that is my personal opinion. We will have to let the community of information theorists decide.
I think you will need to call your law something other than "entropy". The current usage is too well established and useful to be worth changing.
To me the current usage makes sense as it is defining the degree of uncertainty. Remember, what is called information in information theory is not what most laypeople would recognise as information. For example, most modern languages have low entropy because they are very predictable, but most people might think that a language is very informative.
In information theory random is high entropy and very informative, predictable is less informative.
You also need to be careful with your examples. 0.5 may be random, but in information theory an event with 0.5 probability, e.g. the toss of a coin, has a very different entropy to an event with 0.5 probability where there are 5 other possible outcomes, each with 0.1 probability.
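A minimal Python illustration of that point, using the two distributions just described:

import math

# Shannon entropy H = -sum(p*log2(p)) of the full distribution.
def H(dist):
    return -sum(p * math.log2(p) for p in dist)

print(H([0.5, 0.5]))                         # fair coin: 1.0 bit
print(H([0.5, 0.1, 0.1, 0.1, 0.1, 0.1]))     # about 2.16 bits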
Best of luck with your new law, now you need to find a useful application.
Title: Re: If certainty is informative, what is the formula in Information Theory?
Post by: Chondrally on 27/09/2015 18:55:39
I disagree that I can't call it ENTROPY.  It is entropy in its purest sense derived from Statistical Mechanics.  I believe the entropy of Shannon to be a contrived definition.
A useful application for this law is in questionnaire theory and statistical testing: to see whether two sets of probabilities, one from a pure distribution and one from data, come from the same distribution.
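As a speculative sketch only (not an established test, and all data below are invented), the comparison might look like computing the proposed statistic for both distributions and checking how close the values are:

import math

k = 1.0

def R_stat(dist):
    # Proposed -k*sum(ln(p*(1-p))), skipping degenerate p values.
    return -k * sum(math.log(p * (1 - p)) for p in dist if 0 < p < 1)

model = [0.25, 0.25, 0.25, 0.25]      # the "pure" distribution
counts = [21, 30, 24, 25]             # invented observed data
empirical = [c / sum(counts) for c in counts]

print(R_stat(model), R_stat(empirical))   # close values suggest agreement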
Title: Re: If certainty is informative, what is the formula in Information Theory?
Post by: puppypower on 28/09/2015 00:30:53
In engineering (the original need for the concept of entropy), entropy is a state variable, meaning it has a specific measured value for a given state of matter. For example, the standard molar entropy of liquid water at 25 °C and standard pressure is about 69.95 J mol-1 K-1. This will be the same in all labs. The value does not change with time or from lab to lab, nor does it increase over time under the second law. This state comes about due to randomness at the micro level, but a state always averages out the same.

Another interesting example of this state effect of entropy is osmotic pressure. If you take a semi-permeable membrane with water on one side and water/salt on the other side, the pure water will diffuse through the membrane in the direction of the salt water. This is driven by an entropy increase as the pure water tries to randomize with the salt water. The entropy increase, in turn, will generate a pressure called the osmotic pressure. We have micro-level randomization, due to entropy, leading to a definite bulk value: pressure. The liquid state is unique, such as with osmosis, in that the macro and micro states can work in opposite ways: micro randomness leading to bulk order.
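For a sense of scale, the standard van 't Hoff relation Pi = i*c*R*T estimates this pressure for dilute solutions; the 0.1 mol/L concentration below is an invented example:

# Van 't Hoff estimate of osmotic pressure, Pi = i*c*R*T.
R_gas = 8.314      # J/(mol*K), gas constant
T = 298.15         # K, about 25 C
i = 2              # NaCl dissociates into two ions
c = 0.1 * 1000     # mol/m^3 (0.1 mol/L)

Pi = i * c * R_gas * T
print(f"{Pi:.0f} Pa  (~{Pi / 101325:.1f} atm)")   # roughly 4.9 atm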

In terms of information, with "liquid" information one could theoretically create definite bulk conclusions (state) from what appear to be random pieces of information (micro).
Title: Re: If certainty is informative, what is the formula in Information Theory?
Post by: Colin2B on 29/09/2015 16:09:19
...I believe the entropy of Shannon to be a contrived definition.
I agree.
The point I was making is that this is accepted usage in information theory. If you use the same term people will just concentrate on explaining why you are wrong rather than understanding your idea. If you use a different term and show the benefit of your idea you are more likely to get it accepted.
However, if you enjoy paddling upriver!
Title: Re: If certainty is informative, what is the formula in Information Theory?
Post by: Chondrally on 29/09/2015 17:20:29
We can call it the StateSpace Method for Information Theory Derived from Statistical Mechanics,  or the Set Theoretic approach.
Where R = Ta*k*ln(Na) + Tb*k*ln(Nb) = k*ln(N^(Ta+Tb) * p^Ta * (1-p)^Tb)
or, when Ta = Tb = 1, R = -k*ln(p*(1-p))
It has a link to control theory as well: one would have to minimize p to get the certainty needed while maximizing R simultaneously. It can also be used in genetics to examine string entropy, in pattern recognition to assess the rarity or frequency of strings if p is included in the categorization, and in telecommunications, artificial intelligence, and bioinformatics, to name a few.
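As a rough illustration of the genetics case (the sequence is invented and k = 1 is an assumption), the symbol frequencies of a DNA string can feed the proposed R directly:

import math
from collections import Counter

k = 1.0
seq = "ACGTACGGTTAACCGTACGT"   # invented DNA string
n = len(seq)

# Proposed string entropy: -k*sum(ln(p*(1-p))) over symbol frequencies.
R = -k * sum(math.log((c / n) * (1 - c / n)) for c in Counter(seq).values())
print(R)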
Title: Re: If certainty is informative, what is the formula in Information Theory?
Post by: Colin2B on 29/09/2015 23:25:09
We can call it the StateSpace Method for Information Theory Derived from Statistical Mechanics,  or the Set Theoretic approach.
Where R = Ta*k*ln(Na) + Tb*k*ln(Nb) = k*ln(N^(Ta+Tb) * p^Ta * (1-p)^Tb)
or, when Ta = Tb = 1, R = -k*ln(p*(1-p))
It has a link to control theory as well: one would have to minimize p to get the certainty needed while maximizing R simultaneously.
I like "StateSpace Method for Information Theory", but I would drop the "derived from" bit.
"Set Theoretic Method" also has a nice ring.
I'll have a think about your examples.
