Minimum photon requirements for pictures:
One can see an object only insofar as one receives light from it. Therefore it is generally assumed that one cannot take a meaningful picture without a certain minimum number of photons. This is considered the absolute limit of photography.
What is the minimum number of photons required? It depends upon how accurately the shades must be represented (let us consider only grayscale photography for now). If you want to distinguish 16 levels of lightness, then it would seem that each pixel need receive only 0 to 15 photons.
Unfortunately, that analysis does not take into account the statistical nature of quantum mechanics. When light of an intensity equivalent to some number of photons reaches a detector, there is no certainty that exactly that many photons will be detected, only a probability; the number actually detected may differ. To produce a level of gray reliably, the light intensity must be high enough that chance fluctuations do not push the detected count away from that level by more than an acceptable margin.
To calculate the margin of error in the number of photons arriving at a pixel, we might divide the total number of photons for the whole scene by the number of pixels, take the result as the probable count for each pixel, and then compute the deviation on the basis that any photon may by chance arrive at any pixel. This model is unrealistic, however, because each photon does not in fact have an equal chance of arriving at each pixel (otherwise the scene would be entirely blank!). Adjacent pixels will in general have markedly different probable photon counts. So we must approach the problem differently.
Let us look at a single pixel, and say that for this pixel, certain photon detections may occur. We can regard each possible photon detection as the toss of a die, some of whose faces are painted white (meaning detection) and the others painted black (meaning non-detection). The number of white faces is the same on all the dice, as is the number of black faces. These dice do not necessarily have 6 faces; each may have as many faces as we deem necessary, so long as the number of faces is the same for every die. The fraction of faces on any one die that are white is the probability that the photon will be detected; the fraction that are black, the probability that it will not.
The following quantities are still unknown: the fraction of faces on any one die which are white or black, and the number of dice being tossed. What we do know is that when the dice are tossed, the expected (i.e. most probable) outcome is that a certain number of white faces will show up, equal to the expected number of photons. However, if 1200 dice are tossed, each of which has 1/12 of its faces white, then the probable outcome (100 white faces) is the same as if 120000 dice were tossed, each of which had 1/1200 of its faces white. That means we are free to adjust (within reason) these two numbers as we wish, and the results will be the same, so long as the expected number of white faces is kept equal to the expected number of photons.
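To make the equivalence concrete, here is a small Python sketch (an illustration of mine, not part of the argument above; the helper name dice_stats is just for this example) that works out the expected number of white faces and the spread around it for the two setups just mentioned, using ordinary binomial statistics. Both come out to an expected 100 white faces, with very nearly the same spread.

    import math

    def dice_stats(num_dice, white_fraction):
        # Binomial statistics: the expected number of white faces, and the
        # standard deviation of that number, when num_dice independent dice
        # are tossed and each shows white with probability white_fraction.
        mean = num_dice * white_fraction
        spread = math.sqrt(num_dice * white_fraction * (1 - white_fraction))
        return mean, spread

    for num_dice, white_fraction in [(1200, 1/12), (120000, 1/1200)]:
        mean, spread = dice_stats(num_dice, white_fraction)
        print(f"{num_dice} dice, white fraction {white_fraction:.5f}: "
              f"expect {mean:.0f} white faces, spread about +/- {spread:.1f}")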
Let us choose the most convenient numbers. The simplest possible die has but 2 faces, one white and one black. I.e., a coin. The expected outcome of a toss of n coins is that n/2 will come up white. Thus, we set n/2 equal to the expected number of photons, and calculate the resultant statistics based upon the toss of n coins.
As you know, the formula for coin-tossing tells us that if n coins are tossed, n being a large number, then there is an approximately 95% chance that the number of whites will fall between n/2 - sqrt(n) and n/2 + sqrt(n). For small values of n, this formula is somewhat inaccurate, but it will be close enough for our purposes. Thus, if a pixel "ought" (based upon the classical physics of the situation) to receive 50 photons, then the number it actually receives has a good chance of falling anywhere between 50 - sqrt(100) and 50 + sqrt(100), that is, between 40 and 60.
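As a quick check of that figure (again my own illustration, not part of the original argument), the short Python calculation below takes the 50-photon example, models it as n = 100 coins, and computes the exact binomial probability that the number of whites lands between 40 and 60. The answer comes out a little above 95 percent, in line with the approximation above.

    import math

    n = 100                           # number of coins: twice the expected 50 photons
    half_width = int(math.sqrt(n))    # the +/- sqrt(n) margin used above
    lo, hi = n // 2 - half_width, n // 2 + half_width

    # Exact binomial probability that the count of whites falls in [lo, hi].
    p_inside = sum(math.comb(n, k) for k in range(lo, hi + 1)) / 2 ** n
    print(f"P({lo} <= whites <= {hi}) = {p_inside:.3f}")   # prints about 0.965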
Using this analysis, the following table of statistically adjacent and effectively non-overlapping intervals is constructed:
Nominal number   Half-width of uncertainty   Resultant boundaries
of photons       (rounded to integer)        of this gray level
-----------------------------------------------------------------
      2                    2                      0 -   4
      7                    3                      4 -  10
     13                    4                     10 -  17
     22                    5                     17 -  27
     33                    6                     27 -  39
     46                    7                     39 -  53
     61                    8                     53 -  69
     78                    9                     69 -  88
     97                   10                     88 - 107
    118                   11                    107 - 129
    141                   12                    129 - 153
    166                   13                    153 - 179
    193                   14                    179 - 207
    222                   15                    207 - 237
    253                   16                    237 - 269
    286                   17                    269 - 303
(Each gray level's upper boundary is also the lower boundary of the next, so adjacent levels do not overlap.)
This table shows the minimum numbers of photons needed to photograph a scene in 16 dependable shades of gray. The scale is somewhat nonlinear, because the margin of error grows as the number of photons increases; but we may say, roughly, that to photograph a typical scene in 16 shades of gray, the detector must accumulate an average of about 140 photons per pixel (corresponding to an average tone, or middle gray) as an absolute minimum, in order to overcome quantum statistical noise.
Now let us consider transmitting this picture. How many photons do we need to transmit it? Well, each pixel specifies 1 of 16 levels, which can be sent as 4 bits. How many photons does it take to send 4 bits? It was mentioned earlier in this thread that 10 bits, or even more, can be sent using one photon, if the system is correctly arranged. Let us say it is set up to send 12 bits per photon, by identifying the time of arrival of the photon to an accuracy of 1 part in 2^12. With that, we can send THREE pixels with one photon.
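To see the bookkeeping, here is a toy Python sketch (my own illustration, assuming an ideal, noiseless timing channel) that packs three 4-bit gray levels into one 12-bit number, uses that number as the photon's time-slot index out of 2^12 = 4096 possible slots, and unpacks the three pixels at the other end.

    BITS_PER_PIXEL = 4        # 16 gray levels per pixel
    PIXELS_PER_PHOTON = 3     # 12 timing bits / 4 bits per pixel

    def pack(pixels):
        # Combine three 4-bit gray levels into a single time-slot index (0..4095).
        slot = 0
        for value in pixels:
            slot = (slot << BITS_PER_PIXEL) | value
        return slot

    def unpack(slot):
        # Recover the three gray levels from the received time-slot index.
        pixels = []
        for _ in range(PIXELS_PER_PHOTON):
            pixels.append(slot & (2 ** BITS_PER_PIXEL - 1))
            slot >>= BITS_PER_PIXEL
        return list(reversed(pixels))

    slot = pack([5, 0, 12])
    print(slot, unpack(slot))  # the arrival time of one photon carries all three pixels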
But it took about 140 photons to record one pixel.
This sounds like a violation of the conservation of something.
It is generally thought that it is not possible to send a message that took 140 units (photons) to record through a channel that expends only 1/3 of a unit on it. If that were possible, then the original message would have to be considered highly redundant and compressible.
But as we have just observed, the original message is already photon-limited: the picture was taken with the fewest photons possible.
How is this possible?