Dirk Bertels

The greatest malfunction of spirit
is to believe things (Louis Pasteur)

Digital Imaging Resolution

Last updated 25 September 2011


Index

Introduction
Charge Coupled Devices
Optical Format
Physical Pixel Size
Spatial Resolution
High Resolution Colour Imaging
Medical Imaging Application

Introduction

Generally, resolution is understood to be whatever number (in megapixels) is engraved on your digital camera. But in essence this number only tells you how many pixels the camera can store. For any serious imaging application, much more needs to be considered. This article covers these more complex aspects of resolution and strays into high resolution colour imaging for medical applications.

back to index

Charge Coupled Devices - CCD

The brain of the camera is the sensor. Effectively, it records the number of photons hitting an array of storage capacitors. The two main types of sensor are the CCD and the CMOS.


The CCD transfers a charge across the chip and reads it at one corner of the array. CMOS devices use several transistors at each pixel to amplify and move the charge using more traditional wires. - ref How Stuff Works: Digital Cameras.


However, for the purpose of this discussion, they can be regarded as nearly identical devices. For a detailed understanding of CCDs, read Nikon Microscopy: CCD Intro on Nikon's website. A simplified rundown of the workings of the CCD follows:

  • Charge Coupled Devices transfer charge along the surface of a semiconductor from one storage capacitor to the next.
  • An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location.
  • Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register).
  • The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage.
  • By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages.
  • In a digital device, these voltages are then sampled, digitized, and usually stored in memory.
  • Digital color cameras generally use a Bayer mask over the CCD. Each square of four pixels has one filtered red, one blue, and two green (the human eye is more sensitive to green than either red or blue). The result of this is that luminance information is collected at every pixel, but the color resolution is lower than the luminance (greyscale) resolution. - ref Wikipedia: Charge Coupled Devices
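
To make the Bayer point concrete, here is a minimal Python (numpy) sketch that collapses each RGGB square of four sensor pixels into one colour pixel by averaging the two green samples. Real cameras interpolate (demosaic) rather than bin the pixels like this, so treat it purely as an illustration of why colour resolution ends up lower than luminance resolution.

    import numpy as np

    def rggb_to_rgb(raw):
        # Collapse a Bayer RGGB mosaic into a half-resolution RGB image.
        # raw: 2-D array of sensor values laid out as
        #      R G R G ...
        #      G B G B ...
        r  = raw[0::2, 0::2]            # red sites
        g1 = raw[0::2, 1::2]            # first green site in each square
        g2 = raw[1::2, 0::2]            # second green site
        b  = raw[1::2, 1::2]            # blue sites
        g  = (g1 + g2) / 2.0            # average the two greens
        return np.dstack([r, g, b])     # shape (H/2, W/2, 3)

    mosaic = np.arange(16, dtype=float).reshape(4, 4)   # dummy 4 x 4 sensor read-out
    print(rggb_to_rgb(mosaic).shape)                    # (2, 2, 3): 16 capacitors, 4 colour pixels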

Note that a pixel in the quoted excerpts refers to one of these storage capacitors. This is one of the many confusing terminology conflicts in digital imaging. In computing, a pixel refers to one dot on the screen, which is already a combination of the RGB colours. Staying in the spirit of this discussion, assume that each pixel represents a storage capacitor, then ...

  • Each pixel is a MOS capacitor operated as a photodiode and storage device.
  • The larger the size of a pixel in the digital camera the more light-gathering capacity it has - improving dynamic range and signal to noise ratio (SNR).
  • A 10 x 10 micrometer pixel (diode) has a full-well capacity (FWC) of approximately 100,000 electrons.
  • Larger sensors require smaller apertures to achieve the same depth of field.
    • The aperture is the hole through which the light travels.
    • Depth of field describes how sharply the areas in front of and behind the focussed plane are rendered.
    • Depth of field decreases as the actual focal length of the lens increases.
  • The larger the pixels, the larger the object being captured must be. In microscopy, for example, the objects are tiny, in which case large pixels are not practical.
  • The larger the sensor, the more expensive it is.

back to index

Optical Format

Optical format is a measure of the maximum diagonal size of an imaged
object in the focal plane of an optical system.
- Wikipedia: Optical Format.


Optical Format, sometimes referred to as the Inch Fraction Designation, gives a measure of the sensor's dimensions. However, this measure is outdated, being based on vacuum tubes: it does not define the diagonal of the sensor area but rather the outer diameter of the long glass envelope of the tube. It turns out that the diagonal of the actual sensor is roughly two thirds of this Optical Format value.

For example, my Olympus 'M 725 SW' has an Optical Format of 1/2.33 inches (10.9 mm). Two thirds of this is 7.2 mm. As a general rule the following equation is used:

Optical Format(inches) = Diagonal of Sensor(mm) / 16

Note that conversion between inches and mm is implicit in this equation. Reworking this equation to derive the diagonal for my example:

diagonal = OF * 16 = (1/2.33) * 16 = 6.87 mm

This is close to the 7.2 mm value we found earlier.
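
As a quick sanity check, the same rule of thumb in a couple of lines of Python (the factor of 16 mm per inch is the approximation used above, not an exact standard):

    def sensor_diagonal_mm(optical_format_inches):
        # Approximate sensor diagonal from the optical format (rule of thumb).
        return optical_format_inches * 16.0

    print(round(sensor_diagonal_mm(1 / 2.33), 2))   # 6.87 mm for the Olympus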

You can also use this table to derive CCD dimension values. The values for my Olympus (OF = 1/2.33) are:

  • Diagonal: 7.7 mm
  • Width: 6.16 mm
  • Height: 4.62 mm
  • Area: 28.5 mm²
  • Crop Factor: 5.62

So, using 3 different methods, we derived a value between 6.87 and 7.7 mm. I presume it's safest to adopt the values derived from the table, in this case 7.7 mm.

Physical Pixel Size

Another value we can calculate is the size of one pixel. For this we use the equation

OF = p * SQRT(w² + h²) / 16,000

  • w = width of array in pixels
  • h = height of array in pixels
  • p = pixel size in micrometers

Given that my camera's image size in pixels is 3072 * 2304, the size of one pixel for my camera is:

p = (OF * 16,000) / SQRT(w² + h²)
p = ((1/2.33) * 16,000) / SQRT(3072² + 2304²)
p = 1.788 microns

This value seems quite small (I've generally seen values closer to 6 microns), but it's not in the realm of impossibility.
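
The same calculation as a small Python function, just a sketch of the equation above:

    from math import sqrt

    def pixel_size_um(optical_format_inches, width_px, height_px):
        # Pixel pitch in micrometers, derived from the optical format
        # (16,000 micrometers of diagonal per inch of optical format).
        diagonal_um = optical_format_inches * 16000
        return diagonal_um / sqrt(width_px ** 2 + height_px ** 2)

    print(round(pixel_size_um(1 / 2.33, 3072, 2304), 3))   # 1.788 microns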

back to index

Spatial Resolution

It is a well-known fact that the resolution quoted for digital cameras is not true resolution; it just gives you the number of pixels that the image consists of. For example, if the image size is 4288 x 2848, then the given resolution is 4288 x 2848 = 12.2 megapixels. A more accurate definition of resolution is the number of pixels that covers a particular area. It is generally measured in pixels per inch (ppi) on cameras and dots per inch (dpi) on printers. (The standard rule for high quality printing is 300 dpi at actual size; at 300 dpi, for example, a 4288 x 2848 image prints at roughly 14.3 x 9.5 inches. A low resolution image would have 72 dpi.)

The quality of an image depends on more than just pixels per inch; we also need to take into account factors such as sensor quality, optics, the interpolation used, etc. There is a somewhat empirical method we can use to measure what is sometimes referred to as Spatial Resolution.


Spatial Resolution is the hypothetical maximal number of distinguishable squares that could be fitted in an image. - ref Wikipedia: Image Resolution.


The spatial resolution of current CCDs is similar to that of film, while their resolution of light intensity is one or two orders of magnitude better than that achieved by film or video cameras - ref Nikon Microscopy.


Spatial Resolution is calculated to determine the optimal resolution required for a particular camera. Any pixels above this optimal value don't add anything to the quality of the picture - they just waste storage space.


The following describes one method to measure spatial resolution. It is an elaboration on the article Wikipedia: Image Resolution. You need a free image editor, such as Gimp, and an image that contains 100 * 100 = 10,000 squares. We will refer to this image as the model.

  • Open the model in Gimp and zoom in until you can just see the black/white alternating squares. The zoom factor is not crucial; just ensure that your PC displays the squares sharply.
  • Assuming adequate lighting, take images of this model with your camera at ever increasing distances (no need to measure your distances). Rely on the autofocus to keep things in focus.
  • Open each image in Gimp, zoom into the model, and determine whether you can still distinguish the squares.
  • From the images that were successful in the previous step, select the one taken from the greatest distance away.
  • Open this image in Gimp and
    • note the resolution of the image in the top bar
    • measure the model (the 100 * 100 squares section) with the compasses tool (click on one corner and drag to the next corner)

Insert your data in the following equation:

Max distinguishable squares =
(number of squares in model * image width * image height) / (model width * model height)

(all widths and heights are measured in pixels)

The optimal pixel count can be found by multiplying this number by 6 (minimum 4, maximum 8). A resolution higher than this value does not significantly add to the quality of the graphical information.


For example, my 7.1 Megapixel Olympus m 725 SW measured as follows:

  • image size: 3072 * 2304
  • model size: 284 * 284
  • Max distinguishable squares = (10,000 * 3072 * 2304) / (284 * 284) = 877,540
  • Optimal number of pixels = 6 * 877,540 = 5.265 megapixels

In this case, the value of 5.265 megapixels is not a far cry from the camera's 7.1 megapixel capacity. This method may be too rough for some, but it nevertheless offers an empirical way to determine optimal resolution without resorting to complex calculations.
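
For convenience, the whole calculation in a few lines of Python (the factor of 6 is the midpoint of the 4 to 8 range mentioned above):

    def optimal_pixels(squares_in_model, image_w, image_h, model_w, model_h, factor=6):
        # Estimate the maximum number of distinguishable squares and the
        # corresponding optimal pixel count from the measured model size.
        max_squares = squares_in_model * image_w * image_h / (model_w * model_h)
        return max_squares, factor * max_squares

    max_squares, optimal = optimal_pixels(10000, 3072, 2304, 284, 284)
    print(round(max_squares))        # ~877,542 distinguishable squares
    print(round(optimal / 1e6, 3))   # ~5.265 megapixels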

back to index

High Resolution Colour Imaging

There is a reason why greyscale (black and white) images appear much sharper than colour images: they are. It takes 3 to 4 times as much information to produce colour as it does to produce greyscale using the same apparatus.

There are 2 standard ways of producing colour:

Additive

  • Primary colours R, G, B are added
  • Can be demonstrated with spotlights
  • Red + Green = Yellow, Red + Blue = Magenta, Green + Blue = Cyan, Red + Green + Blue = White
  • As more colours get added, the result gets lighter and tends to white.
  • Examples: the R, G, B pixels in CCDs and LCDs.

Subtractive

  • Substances absorb light of certain frequencies and reflect other wavelengths.
  • Cyan + Magenta = Blue, Cyan + Yellow = Green, Magenta + Yellow = Red, Cyan + Magenta + Yellow = Black
  • As more colours get added, the result gets darker and tends to black.
  • Examples: the Cyan, Magenta, Yellow and Black inks in a printer.
  • Printing inks are transparent. This allows light to pass through and reflect off the paper. It is the paper that reflects any unabsorbed light back to the viewer - ref RGB World.

Both of these techniques require extra information, resulting in a loss of resolution when compared to their greyscale counterparts. However, there is a way to produce colour images of the same resolution as greyscale ...

As is customary in various science disciplines such as Astronomy, where the subjects are slow moving, 3 greyscale images are taken of the same subject, each through a different colour filter (R, G, and B). Each greyscale image thus represents the intensity of that particular colour. Each image is then filtered through its representative colour so we end up with one R, one G and one B image. The 3 images are then added (additive colouring) to produce the final colour image.
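
A minimal numpy sketch of this additive combination, assuming the three filtered exposures have already been loaded as equal-sized greyscale arrays (the names red_img, green_img and blue_img are placeholders):

    import numpy as np

    def combine_filtered_exposures(red_img, green_img, blue_img):
        # Stack three greyscale exposures (taken through R, G and B filters)
        # into one colour image; each exposure keeps its full spatial resolution.
        return np.dstack([red_img, green_img, blue_img])

    # Dummy 4 x 4 exposures standing in for the three filtered images:
    red_img, green_img, blue_img = (np.full((4, 4), v) for v in (0.8, 0.5, 0.1))
    print(combine_filtered_exposures(red_img, green_img, blue_img).shape)   # (4, 4, 3)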

Visit the Faulkes Telescope website for some interactive flash animations that demonstrate this concept.

back to index

Medical Imaging Application

I am currently developing software for a Retinal Camera - hence this investigation of the optimal digital camera setup. From the information put forth so far and its application to retinal images, we need to take the following into consideration:

  • Since the pixel array is overlaid with RGGB filters, for a colour image each image pixel needs 4 CCD pixels in order to determine its colour, effectively reducing the resolution by a factor of 4.
  • Since the retinal image is round, a fifth or so of the CCD pixels are not used.
  • The real measure is how many pixels per millimeter we can map onto the retina. Given the limited number of pixels at our disposal, the larger the area of the retina we are trying to capture, the fewer pixels we can allocate per mm.
  • The eye itself is a major component in the optical system and brings its own limitations, the severity of which depends on the condition of the patient's eyes. Research has indicated that for a good eye, the limitation is one pixel per 7 micrometers. For a 30 degree view through the pupil, this amounts to 1.3 megapixels! (A rough check of this figure follows after this list.) More resolution will not improve the quality.
  • Blue light is so strongly absorbed by the eye that it is of little use. Infrared light, on the other hand, which is filtered out by regular cameras because it affects image quality, would greatly enhance the retinal image, which consists of various degrees of 'redness'.
  • Only a limited amount of light can be shone through the pupil to avoid damage. However, the blue spectrum of light (which is the most intense) is not needed for retinal images, which effectively eliminates this problem.
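
A rough back-of-the-envelope check of the 1.3 megapixel figure, assuming roughly 0.3 mm of retina per degree of visual angle (a common approximation, not a number taken from the research referred to above):

    import math

    field_deg = 30       # field of view through the pupil, in degrees
    mm_per_deg = 0.3     # assumed retinal scale (approximation)
    pixel_um = 7         # one pixel per 7 micrometers

    diameter_px = field_deg * mm_per_deg * 1000 / pixel_um   # ~1286 pixels across
    area_px = math.pi * (diameter_px / 2) ** 2               # round retinal image
    print(round(area_px / 1e6, 2))                           # ~1.3 megapixels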

In conclusion, in order to get optimal resolution and colour contrast for a retinal image, one needs a (puny) square 1000 x 1000 (1 megapixel) monochrome CCD. Three monochrome images are taken in quick succession through Infrared, Red and Green filters. Each of these images is then digitally filtered through its representative colour and digitally added to produce the final colour retinal image.

back to index

