Concerning Photographic Resolution

The article published at The Luminous Landscape leaves lots of room for question and doubt, if not outright disbelief. Allow me to explain.

I noted that the location of the diffraction limit, as it applies to resolution, is open to interpretation and debate. Before we can even begin to define the resolution limit, there are some things we need to know about light and the way it works.

Most work on diffraction has assumed that the light in question is "monochromatic"; that is, the light is restricted to just one frequency, wavelength or colour. Sunlight, on the other hand, is "white": it includes a random mixture of all frequencies within approximately an octave band, a two-to-one ratio of frequencies ranging from the lowest (8000 Angstroms wavelength, or red) to the highest (4000 Angstroms wavelength, or violet). Furthermore, the relative amounts of red to green to blue and so on can vary over a significant range; that's what "colour temperature" describes. Yet we still call the light "white". Diffraction effects are quite different for monochromatic and white light.

Describing the light as monochromatic, polychromatic, white or whatever is still not enough to let us characterize diffraction effects, however. We also need to consider the "coherence" of the light. If we measure light at two locations, coherence is a measure of how nearly identical the light at the two locations (or at two times) is. If the light at the two locations is absolutely identical, as if it were produced by a single bright tiny point source of light, the light is said to be coherent. If the light at the two locations is produced by two completely separate sources, point sources or otherwise, the light will be "incoherent". The light can also be partially coherent. The light can be coherent, incoherent or partially coherent whether it is monochromatic or not. If that's not enough, there are at least two kinds of coherence: spatial coherence and temporal coherence. The coherence measured at two locations is spatial coherence. We can also measure the light (at one location or at two different locations) at two different times and determine how much the same it is. That's temporal coherence. When light is coherent, we need to take light's wave nature into account when calculating intensities. When light is incoherent, we can just add measured intensities directly, ignoring wave-like interactions. If light is partially coherent, we have to work out the details in a more complex fashion.
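
To make that bookkeeping distinction concrete, here is a minimal Python sketch (an illustration only, not taken from any of the references below): with coherent light we add the complex amplitudes and then square, while with incoherent light we simply add the measured intensities.

    import numpy as np

    # Two equal-strength waves arriving with a relative phase difference "delta".
    # Coherent: add the complex amplitudes, then square the magnitude.
    # Incoherent: just add the two measured intensities.
    for delta in (0.0, np.pi / 2, np.pi):
        a1 = 1.0                                   # amplitude of wave 1
        a2 = np.exp(1j * delta)                    # amplitude of wave 2
        coherent = abs(a1 + a2) ** 2               # ranges from 0 to 4 as the phase changes
        incoherent = abs(a1) ** 2 + abs(a2) ** 2   # always 2; the phase is irrelevant
        print(f"delta = {delta:4.2f}: coherent = {coherent:4.2f}, incoherent = {incoherent:4.2f}")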

The use of monochromatic light in normal photography is extremely rare. It is typically only in an optical laboratory where one is using lasers that one might encounter significant amounts of monochromatic light. As typical photographers we normally encounter only white or coloured light. Even our coloured filters do not give us truly monochromatic light. A laser pointer, on the other hand, does produce monochromatic (and coherent) light, so many people probably have seen true monochromatic light, and may have witnessed the speckled appearance most objects have under true monochromatic and coherent light.

We commonly encounter both coherent and incoherent light. Our lenses only work because the light from a point source is spatially coherent across the aperture of the lens. If the light were not coherent across the lens aperture, the lens could not focus it. And yet the light from any two point sources of light in our world, no matter how close together they are, is probably incoherent. Normally we are looking at scenes illuminated by incoherent light: the sun, lightbulbs etc. are groupings of many independent sources. This makes the light from them incoherent. Yet it is possible to create coherent lighting. When a scene is illuminated with coherent light, all the light scattered by objects in the scene is 'related', and this leads to a major change in appearance: the speckle mentioned in the previous paragraph.

Now we can talk about diffraction and resolution. The standard Rayleigh-Airy resolution criterion is based on what Rayleigh somewhat arbitrarily decided was needed to resolve two equally bright "stars" (point sources of light). He assumes the two stars produce monochromatic light of the same frequency or wavelength (or colour), but that the light from one star is incoherent with respect to the other. The diffraction-limited image of each star is a distribution of light intensity known as an "Airy disk". This disk has a central bright spot with surrounding alternating dark and bright rings, each bright ring of progressively lower intensity. Rayleigh says that the two stars can be resolved if the maximum brightness of the Airy disk of one star coincides with the first dark ring of the other. When this condition exists, the light intensity of the combined pattern has two intensity peaks but falls off by 26.5% in brightness at the midpoint between the peaks.

(If there were really only one star, but its light reached us via two equal-length paths, the light in the two resulting star images would be coherent. One could place a weak prism or two in the optical path to do this. For resolution, this can be a good thing or a bad thing. It can make the two star images more easily discernible, or it can make it impossible to resolve the two stars! It all depends upon the details!)
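
The 26.5% figure is easy to verify numerically. Here is a short Python sketch (an illustration, not part of my original calculations) that adds two Airy intensity patterns whose peaks are one Rayleigh separation apart and reports the drop in intensity at the midpoint:

    import numpy as np
    from scipy.special import j1

    def airy(x):
        """Normalised Airy intensity pattern; x is the usual scaled radial coordinate."""
        x = np.asarray(x, dtype=float)
        out = np.ones_like(x)                       # limiting value 1 at x = 0
        nz = x != 0
        out[nz] = (2.0 * j1(x[nz]) / x[nz]) ** 2
        return out

    x_zero = 3.8317                                 # first zero of the Airy pattern
    x = np.linspace(-2.0, x_zero + 2.0, 4001)

    # Two incoherent stars: the intensities of the two Airy patterns simply add.
    combined = airy(x) + airy(x - x_zero)

    peak = combined.max()
    midpoint = combined[np.argmin(np.abs(x - x_zero / 2))]
    print(f"intensity dip at the midpoint: {100 * (1 - midpoint / peak):.1f}%")   # about 26.5%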

OK, so Lord Rayleigh has provided us with a fairly clear definition to use for resolution. For photographers there are just two problems with this. First, standard resolution tests use lines, not point sources. Especially with film, there is generally too much noise (grain) to discriminate between images of points unless the diffraction images are quite large. Second, we typically use white light, not monochromatic light.

Now, Rayleigh did also describe a resolution criterion to use with lines. It's similar to the example for point sources, except that the intensity dip between the two thin bright line images is only 19%. I think we could live with that, but there's another issue. Even with lines, Rayleigh is talking about infinitely narrow line sources against a black background. What standards committees have dictated for photographic testing are alternating black and white bars having equal width. This will have the effect of reducing the intensity dip between white lines further. In order to do the calculations we need to determine the coherence conditions. Rayleigh implicitly assumes the line sources of light are coherent along their length, but incoherent between the two lines. In photography, we can probably assume the light from every point on our resolution chart is incoherent with respect to every other point. This is not an inviolable truth, however. We could illuminate our chart with coherent light if we wanted to do so.
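
The 19% figure for thin lines can likewise be checked with a few lines of Python, assuming the classic sinc-squared profile for the diffraction image of a narrow incoherent line:

    import numpy as np

    def line_image(u):
        """Diffraction image of a narrow incoherent line: the classic sinc-squared profile."""
        return np.sinc(u) ** 2          # np.sinc(u) = sin(pi*u)/(pi*u); first zero at u = 1

    u = np.linspace(-1.5, 2.5, 4001)
    combined = line_image(u) + line_image(u - 1.0)   # peaks one Rayleigh unit apart

    peak = combined.max()
    midpoint = combined[np.argmin(np.abs(u - 0.5))]
    print(f"dip between the two line images: {100 * (1 - midpoint / peak):.1f}%")   # about 19%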

In normal photographic applications we use white light. This means that we shouldn't really use the Airy disk; it needs to be replaced by its white-light equivalent. The white-light image of a point source has no dark rings. Instead, the rings have subtle colour! So we can't use Rayleigh's "put the maximum of one star image on the first dark ring of the other" process; if we were to employ this criterion, two white stars would never be resolved! We can still use the drop-in-intensity criterion, however.
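
A quick numerical illustration of why the dark rings disappear: if we crudely model white light as eight equal-strength wavelengths spread across the 400 to 800 nm octave (an assumption, chosen only for illustration), the averaged intensity at the radius where 550 nm green light has its first dark ring is clearly non-zero.

    import numpy as np
    from scipy.special import j1

    def airy(r, wavelength, n):
        """Airy intensity at radius r (mm) for a given wavelength (mm) and f-number n."""
        x = np.pi * r / (wavelength * n)
        return (2.0 * j1(x) / x) ** 2               # valid for r > 0

    n = 8.0                                         # f-number, just an example
    r_dark = 1.22 * 550e-6 * n                      # first dark ring radius for 550 nm green

    # Crude "white" light: eight equal-strength wavelengths across the 400-800 nm octave.
    octave = np.linspace(400e-6, 800e-6, 8)
    white = np.mean([airy(r_dark, w, n) for w in octave])

    print(f"green intensity at the ring:   {airy(r_dark, 550e-6, n):.4f}")   # essentially zero
    print(f"'white' intensity at the ring: {white:.4f}")                     # clearly non-zero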

I have tried to do the appropriate calculations (integrals) approximately (using MS Excel) based on Rayleigh's result for the resolution of two lines, extending the calculations to white light and a black space equal to the width of each of the two white lines. I did make the (admittedly improper) assumption that the light reflected from the white bars of the resolution target is coherent along the length of the lines but incoherent across the width. (This allowed me to use a single integral of a function that Excel can calculate, rather than a two-dimensional integral of the Airy function, a function Excel can't deal with easily.) I also assumed that white light can be simulated by eight equal-strength, geometrically spaced frequencies populating an octave band, and that these components are mutually incoherent. The calculations accounting for wide bars reduced resolution by about 5%, but the use of white light pretty well returned the results to those for stars, with a slightly reduced dip in light intensity (23% instead of 26.5% for stars or 19% for thin lines).
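
For anyone who wants to experiment, here is a rough sketch of that style of calculation in Python rather than Excel. It is not my spreadsheet reproduced line for line: it assumes a sinc-squared line-spread profile that scales with wavelength, blurs two white bars separated by an equal-width black gap, adds the intensities of eight mutually incoherent spectral components across an octave, and reads off the dip between the bars. The bar width and f-number are illustrative choices only.

    import numpy as np

    def line_spread(x, wavelength, n):
        """Assumed sinc-squared line-spread profile; first zero at x = wavelength * n."""
        return np.sinc(x / (wavelength * n)) ** 2

    n = 8.0                                            # f-number (illustrative)
    waves = 400e-6 * 2.0 ** (np.arange(8) / 7.0)       # 8 geometric steps across an octave, mm

    x = np.linspace(-0.02, 0.03, 5001)                 # image coordinate, mm
    dx = x[1] - x[0]

    w = 0.0022                                         # bar width = gap width, mm (illustrative)
    bars = ((np.abs(x) <= w / 2) | (np.abs(x - 2 * w) <= w / 2)).astype(float)

    # For each mutually incoherent spectral component: blur the bars with the
    # line spread, then add the resulting intensities and average.
    image = np.zeros_like(x)
    for lam in waves:
        psf = line_spread(x - x.mean(), lam, n)
        psf /= psf.sum() * dx                          # normalise to unit area
        image += np.convolve(bars, psf, mode="same") * dx
    image /= len(waves)

    dip = 1.0 - image[np.argmin(np.abs(x - w))] / image.max()
    print(f"dip between the two white bars: {100 * dip:.1f}%")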

Yet another factor that influences the results is the number of equal-width black and white bars used in the resolution test, as well as whether a black background or a white background is used. Different proposed standard testing methods have specified 2, 3 and 5 black bars on a white background. We should expect to get the best results with two white bars on a black background, as assumed in my calculations described above. Then, adding more white bars should lead to slight degradation. There is a valid practical reason for using three to five bars that I will describe in the third paper. I would expect the number of bars to have a lesser effect if we are talking about black bars on a white background. That white background necessarily leads to overall contrast-reducing flare. In this case adding black bars should reduce that flare slightly! The actual contrast ratio between black and white will also influence the final result.

Back in the real photographic world, just about every person writing about photographic resolution, including this one, has simply made one or two "hand-waving" assumptions about how to apply the Rayleigh-Airy resolution criterion to photographic resolution testing. Typically we assume a representative colour, like yellow or green, and apply the Rayleigh criterion as it applies to stars. This leads to an optimistic estimate of what we should expect in real resolution tests as dictated by various standards organizations. It is not unreasonable to apply a fudge factor of up to about 2 to account for the degradation resulting from white light and equal-width black and white bars, less-than-infinite contrast, as well as the actual dip in light intensity required to declare the lines resolved in the presence of noise (or grain). I have never seen a full, proper analysis of what the mandated photographic tests "should" give with perfect lenses. I would guess that such a calculation has been done, but I can't direct you to an authoritative document. I have looked in all of the dozen or so books on photographic optics that I possess. None of them gives other than a "hand-waving" estimate.

Assuming green stars, one gets a quite simple formula for diffraction-limited lens performance. The diffraction-limited resolution, R, measured in line-pairs per millimeter is given by:

R = 1600/N

where N is the lens aperture, the f-number. In turn, N is simply the focal length of the lens divided by the diameter of its open aperture. Greenleaf gives

R = 1300/N

based on using yellow light. Greenleaf also describes how resolution should be expected to deteriorate off-axis, that is, away from the center of the lens field of view. My calculations suggest that a requirement for 50% final image contrast would imply about an 18% reduction in apparent resolution:

R = 1312/N for green light, or

R = 1066/N for yellow light.

I have also seen arguments suggesting that we need one Rayleigh distance between black and white and another such distance from that white to the next black. This leads to the rather pessimistic:

R = 800/N.

I personally believe this is probably a result of a misunderstanding of Rayleigh's resolution criterion applied to photography. It can, however, be justified in the limit of digital photography, near the resolution limit of the sensor itself. But that is more properly part of the next consideration.
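
For convenience, here are the figures quoted above evaluated at a few common apertures (a simple Python sketch using the constants given in the text):

    # The diffraction-limit formulas above evaluated at a few common apertures.
    # Values are in line pairs per millimeter; 1600/N is just the Rayleigh form
    # R = 1 / (1.22 * wavelength * N) with a green wavelength of about 0.00051 mm.
    for n in (2.8, 5.6, 8, 16, 22):
        print(f"f/{n:<4}  green: {1600 / n:5.0f}   yellow: {1300 / n:5.0f}   "
              f"50% contrast: {1312 / n:5.0f} (green) / {1066 / n:5.0f} (yellow)   "
              f"pessimistic: {800 / n:5.0f}")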

The above tells us what to expect the ideal lens to do. The final photograph will also be affected by the resolution the film or electronic imager can achieve. Then we need a rule to explain how the two interact.

The resolution capability of photographic film is another whole complex story and I don't intend to go there. Electronic imaging systems are, fortunately, fundamentally easier to characterize. The imaging chip will have its individual sensors arranged in a regular geometric pattern. The pitch of this pattern provides a pretty firm limit to resolution. In order to resolve the black and white bars, we need to be able to put a black bar on one pixel and a white bar on an adjacent pixel. This gives a pretty simple rule: to resolve 100 lines per millimeter we need to have 200 pixels per millimeter. In this case 100 lines per millimeter is known as the "Nyquist frequency". Of course, with Bayer colour filter patterns and anti-aliasing filters and such it does get more complicated. The fundamental limit is clear, however. We may not be able to do quite as well as the Nyquist limit, but we know we can't do better. (Well, unless we know more about the structure of the scene being photographed; see my "Post Script" below!)
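
Expressed as a formula, the Nyquist limit is simply one over twice the pixel pitch. A one-line check in Python (the 5 micron pitch is just an example):

    # The Nyquist limit from the pixel pitch: a black bar and a white bar need
    # two pixels, so the highest resolvable frequency is 1 / (2 * pitch).
    def nyquist_lp_per_mm(pixel_pitch_mm):
        return 1.0 / (2.0 * pixel_pitch_mm)

    print(nyquist_lp_per_mm(0.005))    # 5 micron pixels (200 per mm) -> 100.0 lp/mm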

I have not personally researched the issue of how to calculate the combined resolution capability of lens and image sensor. There has been some debate, but the rule usually used is as follows:

(1/Rs)² = (1/Rl)² + (1/Rf)²

In this formula Rs is the resolution of the combined system, Rl is the resolution of the lens and Rf is the resolution of the "film". This means, by way of example, that if both the lens and the "film" are good for 100 lines per millimeter, the combined system should be able to resolve only 71 lines per millimeter. Whatever the rule, we should not normally expect our images to display the maximum resolution of the lens or of the image chip: it will be worse than the lesser of the two. In order to observe resolution at the Rayleigh limit, either the lens or the 'film' needs to have resolution ability much better than that limit.
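
The rule is easy to evaluate. A short Python sketch confirms the 71 line-per-millimeter example and shows that even a much better lens still leaves the system below the 'film' limit:

    # The reciprocal-squares rule above, evaluated for a couple of cases.
    def combined_resolution(r_lens, r_film):
        return 1.0 / ((1.0 / r_lens) ** 2 + (1.0 / r_film) ** 2) ** 0.5

    print(round(combined_resolution(100, 100)))   # 71 lp/mm: both components at 100 lp/mm
    print(round(combined_resolution(200, 100)))   # 89 lp/mm: still below the weaker component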

References

Max Born and Emil Wolf, Principles of Optics, 3rd Edition, Pergamon Press, Oxford, 1965.

Alan R. Greenleaf, Photographic Optics, Macmillan, New York, 1950.

John B. Williams, Image Clarity, Focal Press, Boston, 1990.


Post Script

Be careful about what you read. There are always hidden assumptions! Even the Nyquist frequency does not set the absolute upper limit of resolution. Strictly speaking, it determines the useful "bandwidth" of the digital image. If we know that the scene contains no spatial frequency components below 50 lines/mm (or above 150 lines/mm), and if the lens, anti-aliasing filter and pixel size will allow it, we can reliably resolve spatial frequencies up to 150 lines/mm. Or, given an appropriate anti-aliasing filter, our 200 pixels/mm sensor can reliably detect spatial frequencies between, say, 400 and 500 lines/mm. Any 100 line/mm bandwidth will work. And that, in brief, explains precisely why we need the anti-aliasing filter! This sort of thing is done routinely in sonar, radar and radio.
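
For the curious, here is a small numerical illustration of that bandpass-sampling idea (my own sketch, assuming an ideal noiseless sensor sampling at 200 pixels per millimeter and a scene known to contain only the 400 to 500 lines/mm band): a 450 cycles/mm pattern records as a 50 cycles/mm alias, but knowing which band it came from lets us map it back unambiguously.

    import numpy as np

    fs = 200.0                                  # samples per millimeter (5 micron pitch)
    x = np.arange(2048) / fs                    # sample positions, mm
    f_true = 450.0                              # pattern frequency, well above the 100 lp/mm Nyquist limit

    samples = np.cos(2 * np.pi * f_true * x)    # what the ideal sensor records
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    f_alias = freqs[np.argmax(spectrum)]
    print(f"apparent (aliased) frequency: {f_alias:.1f} cycles/mm")    # 50.0

    # Knowing a priori that the scene lives entirely in the 400-500 cycles/mm band
    # (the fifth Nyquist zone), the alias maps back to the true frequency.
    zone = 4                                    # zone 0 is 0-100 cycles/mm, zone 4 is 400-500
    f_recovered = zone * (fs / 2) + f_alias     # even-numbered zone: spectrum not flipped
    print(f"recovered frequency: {f_recovered:.1f} cycles/mm")         # 450.0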
