Naked Science Forum

General Science => General Science => Topic started by: BenVitale on 01/11/2007 16:07:58

Title: finite average level of complexity??
Post by: BenVitale on 01/11/2007 16:07:58
Complexity can only be loosely defined. Anyway, the world is somewhat complex, but not as much as it could be; it could have been more intricate. We can fit fractal models to natural forms (e.g. Fractals and Chaos in Geology and Geophysics, Turcotte, Cambridge University Press, Cambridge, 1997) and derive the complexity of some shapes from basic laws (e.g. basic erosion and tectonics yield the fractal behavior of numerous landscapes).
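One concrete way to put a number on shape complexity is the box-counting dimension. Below is a minimal Python sketch of the idea (the function name and the numpy-based test image are my own, purely for illustration): count how many s-by-s boxes touch the shape at each scale s, then read the dimension off the slope of log N(s) versus log(1/s).

import numpy as np

def box_counting_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    # Estimate the fractal (box-counting) dimension of a 2-D binary mask:
    # for each box size s, count the boxes containing part of the shape,
    # then fit log N(s) against log(1/s); the slope estimates the dimension.
    counts = []
    for s in box_sizes:
        # Trim so the grid divides evenly, then pool each s-by-s box.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        pooled = mask[:h, :w].reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(pooled.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Sanity check: a completely filled image is 2-dimensional.
print(box_counting_dimension(np.ones((64, 64), dtype=bool)))  # ~2.0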

Question: is there an approach to estimating the average global complexity?
Title: finite average level of complexity??
Post by: another_someone on 01/11/2007 16:55:19
Question: is there an approach to estimating the average global complexity?

Is complexity the same thing as information entropy?
Title: finite average level of complexity??
Post by: BenVitale on 01/11/2007 19:38:35
another_someone, could you expand on this?
Doesn't it depend on the whim of the one doing the calculation?
It is in no way obvious what is even meant by complexity in the absolute sense needed to calculate a total complexity level, right?
I watched the program NOVA on PBS (Public Broadcasting Service), which featured "Emergence of Complexity".
Did you watch it?
Title: finite average level of complexity??
Post by: JimBob on 02/11/2007 01:20:12
I do not know if NOVA is broadcast in the UK. And perhaps this subject needs a different place to live?
Title: finite average level of complexity??
Post by: another_someone on 02/11/2007 01:59:16
another_someone, could you expand on this?
Doesn't it depend on the whim of the one doing the calculation?
It is in no way obvious what is even meant by complexity in the absolute sense needed to calculate a total complexity level, right?
I watched the program NOVA on PBS (Public Broadcasting Service), which featured "Emergence of Complexity".
Did you watch it?

My understanding of information entropy is that it is a measure of the number of independent pieces of information within a system (i.e. the number of measurements within a system that cannot be predicted from other measurements taken upon that system).
Title: finite average level of complexity??
Post by: BenVitale on 04/11/2007 00:05:31
Let's consider an image of a natural scene; we've got some individuals (pixels) that may differ from one another (different colors).
The entropy E of the system is E = -Σ p_i log p_i, where the sum runs over the colors i and p_i is the probability of occurrence of color i.
The issue here is that E simply does not depend on the location of the pixels within the image, and thus does not depend on the "shapes" or "objects" that one can perceive in the image (trees, etc.): E depends only on the histogram of the pixels, not on the geometry of the image...
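To make the limitation concrete, here is a minimal Python sketch of that histogram entropy (numpy assumed; the two test "images" are made up): an ordered image and a shuffled copy get exactly the same score, since pixel locations never enter the computation.

import numpy as np

def histogram_entropy(pixels):
    # E = -sum(p_i * log2(p_i)) over the color histogram; pixel
    # locations are ignored, which is exactly the limitation above.
    _, counts = np.unique(np.asarray(pixels).ravel(), return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

ordered = np.repeat(np.arange(4), 64)       # four flat color bands
shuffled = np.random.permutation(ordered)   # same colors, no structure
print(histogram_entropy(ordered), histogram_entropy(shuffled))  # both 2.0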

Can anyone suggest a way to estimate the complexity of images of the world?
Title: finite average level of complexity??
Post by: lyner on 04/11/2007 00:07:31
Sorry - the question is too complex.
Title: finite average level of complexity??
Post by: another_someone on 04/11/2007 00:51:13
Let's consider an image of a natural scene; we've got some individuals (pixels) that may differ from one another (different colors).
The entropy E of the system is E = -Σ p_i log p_i, where the sum runs over the colors i and p_i is the probability of occurrence of color i.
The issue here is that E simply does not depend on the location of the pixels within the image, and thus does not depend on the "shapes" or "objects" that one can perceive in the image (trees, etc.): E depends only on the histogram of the pixels, not on the geometry of the image...

Can anyone suggest a way to estimate the complexity of images of the world?

I am guessing that you are misinterpreting the equation:

E = -Σ p_i log p_i

We are not interested specifically in the probability of a colour occurring, but in the probability of any particular attribute (for instance, the probability that two adjacent pixels have the same colour).  Clearly, if we look at the probability of a pixel having the same colour as an adjacent pixel, and we have large areas where pixels are of the same colour, then the entropy of the system is lower than it would be if adjacent pixels matched no more often than chance would predict.
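As a rough sketch of that idea in Python (a toy measure of my own, not a standard one): compute the entropy over horizontally adjacent pixel pairs instead of single pixels, and an image with large uniform areas scores lower than a scattered image with the identical colour histogram.

import numpy as np

def pair_entropy(image):
    # Entropy over horizontally adjacent pixel pairs of a 2-D array.
    # Large same-colour regions yield few distinct pairs, lowering the score.
    pairs = np.stack([image[:, :-1].ravel(), image[:, 1:].ravel()], axis=1)
    _, counts = np.unique(pairs, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

bands = np.repeat(np.arange(4), 16).reshape(8, 8)  # big same-colour blocks
noise = np.random.randint(0, 4, size=(8, 8))       # same palette, scattered
print(pair_entropy(bands), pair_entropy(noise))    # bands scores lower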

Another way of looking at this is to think of an ideal lossless compression algorithm.  If one passes the image through such an ideal compression algorithm, then the smaller the compressed image, the lower the entropy of the image.
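A quick sketch of this in Python, with zlib standing in for the ideal compressor (a real compressor only gives an upper bound on the entropy):

import zlib
import numpy as np

def compressed_fraction(data):
    # Size after lossless compression, as a fraction of the original;
    # the smaller the result, the lower the (estimated) entropy.
    return len(zlib.compress(data, 9)) / len(data)

flat = np.zeros(4096, dtype=np.uint8).tobytes()                   # one colour
noisy = np.random.randint(0, 256, 4096, dtype=np.uint8).tobytes() # random pixels
print(compressed_fraction(flat), compressed_fraction(noisy))      # flat << noisy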
Title: finite average level of complexity??
Post by: Atomic-S on 05/11/2007 04:20:27
Would p_i in the said equation be not simply the total number of pixels of that color divided by the number of all pixels, but rather the probability that that particular pixel would be such and such, given the circumstances surrounding the creation of the picture (I guess this would strictly be the sum, over all possible colors, of the probabilities that the pixel takes each one)?
Title: finite average level of complexity??
Post by: Atomic-S on 05/11/2007 04:22:05
Another way to say this is that in a signal consisting of a sequence of symbols, the information value of each symbol is the logarithm of its improbability (at that particular position, I assume), and the entropy of the message is the sum of all those information values.
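For instance, a small Python sketch of that bookkeeping, using each symbol's overall frequency in a made-up message as its probability:

import math
from collections import Counter

message = "aabac"
counts = Counter(message)

total = 0.0
for sym in message:
    p = counts[sym] / len(message)     # probability of this symbol
    bits = -math.log2(p)               # log of its improbability
    total += bits
    print(sym, round(bits, 3))
print("total information:", round(total, 3), "bits")  # ~6.855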
Title: finite average level of complexity??
Post by: another_someone on 05/11/2007 11:55:06
Another way to say this is that in a signal consisting of a sequence of symbols, the information value of each symbol is the logarithm of its improbability (at that particular position, I assume), and the entropy of the message is the sum of all those information values.

I would not disagree with this, but the key point is that the information value of each symbol is not merely the value of that symbol in isolation, but the value of the symbol within the context of all the other symbols, and the probability of the symbol within that context is as important as the probability of the symbol just taken in isolation.

A simple example of this is if one looks at two words:

'q?een' and 't?n'

With the first word, if you add the letter 'u', you have added almost no information at all, except to confirm that the word is a valid English word; whereas in the second word you have, within the English language, a choice of 'e', 'a', 'i', 'o', and 'u', so the letter carries considerably more information value.  Thus simply looking at the random probability of a letter within the data stream is only meaningful when looking at it within its local context in the data stream.
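To put rough numbers on that (the 0.99 below is an assumed figure, purely for illustration):

import math

# 'q?een': 'q' is almost always followed by 'u' in English, so the 'u'
# is nearly certain and carries almost no information.
p_u_after_q = 0.99                  # assumed figure, for illustration only
print(-math.log2(p_u_after_q))      # ~0.014 bits

# 't?n': ten, tan, tin, ton and tun are all words; if the five vowels
# were equally likely here, the missing letter carries log2(5) bits.
print(math.log2(5))                 # ~2.32 bits

# Taken in isolation, a letter drawn uniformly from 26 would carry more:
print(math.log2(26))                # ~4.70 bits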