Sensor
In this section, we provide an in-depth discussion of sensor technologies and specifications, along with recommendations on which to choose for a particular purpose.
Megapixels
Sensor size
Crop factor
Pixel pitch (relative size of the photosites/pixels)
Pixel size and diffraction
Sensor type
Video
Rolling shutter explained
CMOS vs. CCD
Micro lenses and filters
Filters in front of the sensor (low-pass, anti-aliasing, anti-moiré):
Quality of the analog-to-digital converter (ADC) and its read speed
Sensor bit depth (8-bit, 12-bit, 14-bit, 16-bit)
Megapixels
Digital camera sensors are made up of millions of tiny wells that collect photons of light. Each of these wells is called a photosite. During an exposure, each photosite collects a certain number of photons based on the light characteristics of the scene and the length of the exposure. Each photosite then transmits an electrical signal corresponding to the number of photons it collected during the exposure.
These electrical signals are converted to luminance values and collected for all of the photosites. This aggregate of luminance values forms the basis of an image. If the sensor has 1500 photosites on one side and 2000 photosites on the other side, it has 1500 x 2000 or 3,000,000 photosites. Since each photosite represents a pixel in the final image, the sensor is said to have 3,000,000 pixels, or 3 megapixels.
The number of pixels on a sensor can be directly translated into a print size by dividing the pixel dimensions by the pixels per inch (ppi) of the final output or output device. For instance, 1500 x 2000 pixels divided by 300 pixels per inch for printed output equals a print size of 5" x 6.7" at 300 ppi. If the output device is a computer screen with a resolution of 72 ppi, the image at 100% will be 20.8" x 27.8".
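To make the arithmetic above concrete, here is a minimal Python sketch (the pixel dimensions and ppi values are simply the ones from the example in the text) that computes the megapixel count and native print sizes:

```python
# Megapixel count and native print size from pixel dimensions.
# The numbers match the worked example in the text; substitute your own.

def megapixels(width_px: int, height_px: int) -> float:
    """Total number of photosites/pixels, in millions."""
    return width_px * height_px / 1_000_000

def print_size_inches(width_px: int, height_px: int, ppi: float) -> tuple[float, float]:
    """Native print dimensions in inches at a given output resolution."""
    return width_px / ppi, height_px / ppi

w, h = 1500, 2000
print(f"{megapixels(w, h):.1f} MP")                                     # 3.0 MP
print("at 300 ppi: %.1f x %.1f inches" % print_size_inches(w, h, 300))  # 5.0 x 6.7
print("at 72 ppi:  %.1f x %.1f inches" % print_size_inches(w, h, 72))   # 20.8 x 27.8
```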
FIGURE 1 The more megapixels, the larger the native print size. Digital image files can make larger prints than illustrated here either by reducing the print resolution (using 180 ppi instead of 300 ppi, for instance) or by upsampling. Digitally captured files can be upsampled up to 400% in many cases, as long as the original image is sharp and well exposed to begin with.
Sensor size
Digital sensors come in a wider variety of sizes than we had with film formats. They range from the tiny (5.76 x 4.29 mm) to the large (53.9 x 40.4 mm).
FIGURE 2 Larger sensors can hold more photosites, or larger photosites (or both). Both larger sensors and larger photosites are an advantage from an image quality standpoint. Another important factor for photographers to consider is whether the sensor is “full frame” or whether the sensor size will crop the angle of view of the lens. Equally important is the aspect ratio. Some photographers are used to and prefer the 1.5:1 ratio of 35mm, while many others are wedded to the 1.33:1 ratio of medium-format. The 1.33:1 ratio fits normal publication page sizes better and is often preferred for that reason. The so-called “crop factor” was an impediment for many photographers in the early days of digital capture. Some have grown to like its advantages, while others were relieved when full frame sensors allowed full use of wide-angle lenses. Since digital sensors are not tied to any particular film format and only need to be compatible with lens coverage, we wonder why there hasn’t been more innovation in aspect ratios. Why wouldn’t we have a 1.33:1 or even a 1.25:1 DSLR sensor that uses the full 36 mm width, i.e. 36x27mm or 36x29mm?
Sensor size and video
Perhaps the fiercest debate when it comes to shooting DSLR video is full frame versus cropped sensors. A full frame sensor matches the size of a 35mm film frame, approximately 36mm x 24mm. Manufacturers like Canon, Nikon and Panasonic use different sizes for their cropped (smaller) sensors but generally adhere to standardized sizes based on APS (Advanced Photo System); APS-C and APS-H are the most common.
So why does sensor size matter? As a general rule of thumb, the larger the sensor, the greater its influence on depth of field (DOF). Put simply, a large sensor lets you blur the background more easily than a small sensor with the same lens. The artistic use of shallow depth of field is one of the aesthetic reasons DSLRs are so popular for shooting video. Large sensors allow for greater control over DOF (which many equate to a cinematic look). Generally, full frame bodies are more expensive than cropped-sensor bodies, so you’ll need to decide whether the added DOF control of a full frame sensor is worth it to you.
Crop factor
If you choose a body that has a cropped sensor, you’ll need to examine the sensor’s crop factor. Smaller sensors show a smaller field of view for a given focal length. The term crop factor describes how the smaller sensor relates to a 35mm (24x36mm) frame.
If the sensor’s linear dimensions are half those of a normal 35mm frame, it is said to have a crop factor of 2. If you multiply the focal length by the crop factor, you’ll see how the field of view on the small sensor relates to 35mm. For instance, a 50mm lens on a camera with a crop factor of 2 has the same field of view as a 100mm lens does on a 35mm camera.
Here are some common crop factors used in cameras today:
- 1.3: This crop factor is used by Canon on some of its 1-series bodies that use an APS-H sensor like the 1D Mark IV.
- 1.5: This crop factor is used by Nikon for all of its non-full frame cameras.
- 1.6: This crop factor is used by Canon for its APS-C bodies like the 7D and the Digital Rebel series.
- 2.0: This is a large crop factor used by Micro Four Thirds image sensors, like the one in the Panasonic Lumix GH1.
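To make the crop-factor arithmetic concrete, here is a minimal Python sketch (the factors are the ones listed above; the function name is ours, purely for illustration) that converts an actual focal length into its 35mm-equivalent field of view:

```python
# 35mm-equivalent focal length = actual focal length x crop factor.

CROP_FACTORS = {
    "APS-H (Canon)": 1.3,
    "APS-C (Nikon DX)": 1.5,
    "APS-C (Canon)": 1.6,
    "Micro Four Thirds": 2.0,
}

def equivalent_focal_length(focal_length_mm: float, crop_factor: float) -> float:
    """Focal length giving the same field of view on a full 35mm frame."""
    return focal_length_mm * crop_factor

for name, factor in CROP_FACTORS.items():
    eq = equivalent_focal_length(50, factor)
    print(f"50mm on {name} (x{factor}) looks like ~{eq:.0f}mm on 35mm")
```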
Crop factors can be a benefit or a drawback, depending on what you are trying to accomplish. With a higher crop factor, you get much longer reach from your lenses without having to purchase expensive telephoto lenses. Unfortunately, the opposite is true when trying to shoot wide: to get a nice wide-angle view, you’ll need to purchase wider lenses.
And, as pointed out above, it’s easier to achieve an out-of-focus background with a larger sensor.
Pixel pitch (relative size of the photosites/pixels)
Increasing the number of megapixels results in higher resolution, but if the sensor size remains the same, it also results in smaller photosites (pixels). Photosite size is referred to as the pixel pitch. Smaller photosites gather less light, so they produce a weaker signal. A weaker signal, all other things being equal, results in a lower signal-to-noise ratio and therefore more noise.
This effect is much more pronounced at higher ISO settings, because increasing the ISO setting requires cranking up the amplification of the signal, which, in turn, increases the noise level.
This is why point-and-shoot cameras produce noisy images at high ISO settings, and why the Nikon D3 with its large pixel pitch (8.4 µm) has much lower noise at high ISO settings than a point-and-shoot that might have a pixel pitch as small as 1.7 µm.
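Pixel pitch can be estimated by dividing the sensor's physical width by its horizontal pixel count. The sketch below is only an approximation (it ignores the gaps between photosites, and the example dimensions are approximate published figures, not exact specifications):

```python
# Approximate pixel pitch = sensor width / horizontal pixel count.

def pixel_pitch_um(sensor_width_mm: float, horizontal_pixels: int) -> float:
    """Approximate photosite spacing in micrometers (µm)."""
    return sensor_width_mm * 1000 / horizontal_pixels

# Full-frame DSLR with a modest pixel count (roughly 36 mm wide, ~4256 pixels across)
print(f"{pixel_pitch_um(36.0, 4256):.1f} µm")   # ~8.5 µm, close to the ~8.4 µm cited above
# Small point-and-shoot sensor (roughly 5.76 mm wide, ~3456 pixels across)
print(f"{pixel_pitch_um(5.76, 3456):.1f} µm")   # ~1.7 µm
```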
CCD chips, commonly used for medium and large format sensors, tend to generate more heat than CMOS sensors: more heat contributes to noise.
Some medium-format backs have active cooling to counteract that effect. CCD chips in cameras used by astronomers are cooled with liquid nitrogen to combat the noise generated during the long exposures required for their specialized type of photography.
Pixel size and diffraction
Smaller photosites can also affect image quality through an optical effect known as diffraction. Diffraction limits image resolution when the lens is stopped down, nullifying the gains of higher megapixel counts. As photosites get smaller, diffraction becomes limiting at wider apertures (lower f-numbers). For instance, a point-and-shoot camera’s resolution can become diffraction limited between f/4.0 and f/5.6, whereas a full frame DSLR with large photosites will not become diffraction limited until between f/16 and f/22.
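One common back-of-the-envelope way to estimate where diffraction starts to bite is to compare the diameter of the Airy disk, roughly 2.44 x wavelength x f-number, with the pixel pitch. The sketch below assumes green light (about 550 nm) and treats the sensor as diffraction limited once the Airy disk spans roughly three pixels; both the wavelength and the three-pixel criterion are simplifying assumptions, not a definitive rule:

```python
# Rough diffraction-limit estimate.
# Airy disk diameter ~ 2.44 * wavelength * f-number; assume the limit is
# reached when the disk spans about three pixel widths.

WAVELENGTH_UM = 0.55  # green light, in micrometers

def airy_disk_diameter_um(f_number: float) -> float:
    return 2.44 * WAVELENGTH_UM * f_number

def diffraction_limited_f_number(pixel_pitch_um: float, pixels_per_disk: float = 3.0) -> float:
    """f-number at which the Airy disk grows to `pixels_per_disk` pixel widths."""
    return pixels_per_disk * pixel_pitch_um / (2.44 * WAVELENGTH_UM)

print(f"Airy disk at f/11: {airy_disk_diameter_um(11):.1f} µm")      # ~14.8 µm
print(f"1.7 µm pixels: ~f/{diffraction_limited_f_number(1.7):.1f}")  # ~f/3.8
print(f"8.4 µm pixels: ~f/{diffraction_limited_f_number(8.4):.1f}")  # ~f/18.8
```

With those assumptions the estimates land near the f/4-f/5.6 and f/16-f/22 ranges quoted above, but the exact crossover depends on the criterion used and on the lens.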
Keep in mind that cameras with smaller sensors have greater depth of field at any given f-stop, so diffraction is not quite as limiting as it might appear. Still, the effect is noticeable enough that a review of the Canon 50D (15.1 megapixels) concluded that it required higher quality lenses to realize the same image quality as its predecessor, the Canon 40D (10.1 megapixels).
Medium-format sensors are becoming just as crowded as some higher resolution DSLR sensors. Since medium-format lenses are longer for any given angle of view, achieving great depth of field requires small apertures, which may result in detail loss with high megapixel count sensors. Medium-format camera makers have joined DSLR manufacturers in touting new digital lenses that claim higher resolving power, and wide-angle lens designs that aim to send light at straighter angles back to ever smaller photosites.
A tutorial on diffraction issues associated with digital cameras is available at Cambridgeincolor.com. Another important discussion is available on the Luminous-Landscape site. The bottom line is that at a certain point, increasing resolution by adding more photosites without also increasing total sensor size will not add anything to image quality and, in fact, will decrease image quality, especially at small apertures.
Phase One has implicitly acknowledged that packing sensors with ever higher numbers of ever smaller photosites creates limitations in terms of ISO and f-stop settings by introducing a “binning” function in its Sensor+ camera backs. For instance, the Phase One P65+ back has 60 million 6 micron pixels. This gives extremely high resolution, but at the expense of image quality if pushed much beyond ISO 200 or stopped down much past f/11. Combining four photosites into one through a proprietary technology results in a 15 MP sensor with relatively large 12 micron pixels. This function completely changes the character of the sensor, allowing faster image capture, higher ISO settings and likely fewer diffraction issues.
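The arithmetic behind that binning mode is simple: combining each 2 x 2 block of photosites into one output pixel divides the pixel count by four and doubles the effective pixel pitch. A minimal sketch using the P65+ numbers above (illustrative only; this is not Phase One's actual algorithm):

```python
# Effect of 2x2 pixel binning on resolution and effective pixel pitch.

def bin_2x2(megapixels: float, pixel_pitch_um: float) -> tuple[float, float]:
    """Megapixels and pixel pitch (µm) after combining each 2x2 block of photosites."""
    return megapixels / 4, pixel_pitch_um * 2

mp, pitch = bin_2x2(60, 6.0)
print(f"{mp:.0f} MP at {pitch:.0f} µm pixels")   # 15 MP at 12 µm pixels
```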
FIGURE 3 Detail from a 39MP camera at f/22; detail from a 21MP DSLR camera at f/11.
Sensor type
An interesting note about megapixel counts is that they relate to image quality differently depending on whether the camera sensor uses color filter array technology, Foveon technology, multi-shot technology, tri-linear array technology, or beam-splitter technology.
Color filter arrays are composed of red, green, and blue (and sometimes other colors) filters placed over the sensor photosites. The great majority of digital cameras use color filter array technology, specifically a Bayer-pattern color filter array (shown on the left). A few manufacturers have experimented with other types of arrays, notably Kodak and Sony (shown on the right). You'll notice that the Sony color filter array introduces a fourth color, which Sony calls emerald, to the array.
FIGURE 4 A normal RGB Bayer pattern array.
FIGURE 5 The Sony Bayer array with emerald as a fourth color.
A feature of color filter array cameras is that each photosite actually measures only one color: the color of the filter over that photosite. The other two color values for that pixel are interpolated from the measured color values of nearby photosites. This interpolation process is called demosaicing, since it replaces the mosaic color filter array pattern with continuous-tone color information. Although demosaicing works very well, it can create some issues such as color artifacts and moiré.
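To make the idea of demosaicing concrete, here is a deliberately naive sketch that fills in the two missing color values at each photosite by averaging the nearest measured neighbors on an RGGB Bayer mosaic. Real raw converters use far more sophisticated, edge-aware algorithms; this is only meant to illustrate the interpolation step:

```python
import numpy as np

def naive_demosaic(mosaic: np.ndarray) -> np.ndarray:
    """Very basic demosaic of an RGGB Bayer mosaic by neighborhood averaging.

    mosaic: 2-D array of raw sensor values laid out as
            R G R G ...   (even rows)
            G B G B ...   (odd rows)
    Returns an H x W x 3 RGB image.
    """
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3), dtype=float)

    # Masks marking which photosites measured each color.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        values = np.where(mask, mosaic, 0.0)
        weights = mask.astype(float)
        # Average the measured values of this color in each 3x3 neighborhood
        # (edges wrap around, which is acceptable for a toy example).
        shifts = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        value_sum = sum(np.roll(values, s, axis=(0, 1)) for s in shifts)
        weight_sum = sum(np.roll(weights, s, axis=(0, 1)) for s in shifts)
        rgb[..., ch] = value_sum / np.maximum(weight_sum, 1)

    return rgb

# Example: demosaic a small synthetic "raw" frame.
print(naive_demosaic(np.random.rand(8, 8)).shape)  # (8, 8, 3)
```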
Foveon sensors don't require color filter arrays at all since they record all three colors of light at each photosite. Foveon sensors don't need as many photosites (megapixels) in order to achieve the same apparent image quality at any given output size. The issue with Foveon sensors is that the pixel interpolation required to make a Foveon image the same size as a color filter array camera image is roughly equivalent to the color interpolation that occurs with demosaicing.
Multi-shot cameras require three exposures, one each through red, green, and blue filters. Multi-shot cameras don't require color interpolation, so their megapixel counts are understated in terms of image quality when compared to single-shot color filter array cameras. The same can be said of tri-linear array cameras. These cameras are called scan-back cameras because they work like scanners: a three-part sensor array, with one row each for red, green, and blue, sweeps across the image area, recording all three colors from the scene in the process. Image quality is similar to that of a multi-shot camera, although the means of recording the image is different.
FIGURE 6 The Foveon sensor captures all three RGB colors at each photosite.
Video
The final technology is the beam-splitter camera, which is in wide use in dedicated digital video cameras. The light from the lens passes through a dichroic mirrored prism that splits it into red, green, and blue beams. Each beam is directed to a small CCD chip, and the electrical signals from the three CCD chips are combined to create a full color image. The use of three small CCD chips gives these video cameras a characteristically wide depth of field.
This contrasts with digital video shot on the comparatively large sensors used in DSLR cameras, which can produce a shallow depth of field. Shallow depth of field and excellent image quality are leading even some filmmakers to use cameras such as the Canon 5D II for serious film projects.
Convergence is coming from the other end just as rapidly with the 4K RED One camera system. This camera can shoot 24 frames per second, each a 12 megapixel frame, in a raw file format. The sensor, at 24.4 x 13.7 mm, is approximately the size of a Nikon DX sensor, allowing for reasonably shallow depth of field with fast lenses. It has adapters allowing the use of many standard 35mm format lenses. Its raw file format is groundbreaking in several respects: it is adjustable in post-production, just like still digital raw formats, and it has two levels of visually lossless compression, which makes on-camera recording of such high definition files practical.
The RED One shares another characteristic with DSLR CMOS cameras that can record video: the rolling shutter effect, explained below. To complete the convergence picture, the RED One is capable of producing single frames with high enough resolution to use for magazine covers.
Rolling shutter explained
One problem with DSLR CMOS cameras that can record video is the rolling shutter effect. Since these cameras read the data off the sensor line by line rather than all at once (as CCD sensors do), moving objects or camera panning can result in vertical objects that appear to lean, since the top of the frame is scanned at a slightly different time than the bottom of the frame.
This skewing effect is more pronounced in some cameras, such as the Nikon D90, than it is in the Canon 5D II or the RED One. To minimize this image distortion, avoid fast pans. You can also use a camera dolly to move the camera smoothly and minimize distortion.
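The amount of lean is easy to estimate: if the sensor takes T seconds to read out from top to bottom and the scene moves sideways across the frame at v pixels per second during a pan, a vertical edge is displaced by roughly v x T pixels between the first and last row. A rough sketch (the readout time and pan rate below are made-up illustrative numbers, not measured values for any particular camera):

```python
import math

def rolling_shutter_skew_px(readout_time_s: float, pan_rate_px_per_s: float) -> float:
    """Horizontal displacement of a vertical edge between the top and bottom rows."""
    return readout_time_s * pan_rate_px_per_s

def lean_angle_deg(skew_px: float, frame_height_px: int) -> float:
    """Apparent lean of a vertical line, in degrees from vertical."""
    return math.degrees(math.atan2(skew_px, frame_height_px))

# Hypothetical numbers: ~1/30 s top-to-bottom readout, pan sweeping 600 px/s.
skew = rolling_shutter_skew_px(1 / 30, 600)
print(f"skew ~ {skew:.0f} px, lean ~ {lean_angle_deg(skew, 1080):.1f} degrees")  # ~20 px, ~1.1 degrees
```

Faster pans or slower sensor readout increase the skew, which is why avoiding fast pans (or using a camera with faster readout) reduces the effect.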
CMOS vs. CCD
Although CCD (Charge Coupled Device) chips preceded CMOS (Complementary Metal Oxide Semiconductor) chips to market, most smaller sensor cameras (up to full-frame 35mm) now use CMOS chips. The larger sensors found in medium and large format cameras are currently all CCD technology.
At this point, both technologies can achieve the highest image quality. The Foveon sensor is of the CMOS variety, although its method of collecting light is entirely different from color filter array sensors. The fundamental difference between CMOS and CCD technology is that CMOS integrated circuits have more processing functions on the chip itself.
CCD integrated circuits have the single function of collecting light and converting it to electrical signals. CMOS chips tend to use less power, generate less heat, and provide faster read-out of the electrical signal data. In addition, CMOS sensors can perform additional signal processing, such as noise reduction, on the chip itself. This explains to some extent why cameras with CMOS chips tend to provide more highly processed, less "raw" data than the CCD chips normally found in medium-format backs.
Micro lenses and filters
FIGURE 7 Digital sensor photosites have significantly more depth than the relatively flat surface of film.
The use of micro lenses to focus incoming light from the lens down into the sensor has become a common way to increase sensor efficiency and to avoid problems caused by the angled light path that commonly occurs with wide-angle lenses. Until recently, most medium-format sensors did not have micro lenses, which is why medium-format backs did not work as well with wide-angle lenses, and why the use of shifting lenses often resulted in lens cast (a magenta/green color shift across the image). The use of micro lenses has greatly improved the efficiency of digital sensors while also helping to offset the effects of squeezing in ever more resolution without increasing sensor size. In addition, effective micro lenses can help to alleviate diffraction limits by capturing light that would otherwise have fallen outside the photosites.
Filters in front of the sensor (low-pass, anti-aliasing, anti-moiré):
Digital sensors that use Bayer pattern arrays (which is most of them) are subject to aliasing (jaggies) and moiré (a colorful pattern appearing when a scene contains a repetitive pattern that lines up with the Bayer pattern overlaying the sensor).
Moiré can be alleviated by:
- using higher resolution sensors
- moving the camera closer to or farther from the subject
- most reliably, using a filter that creates a one-pixel-wide blur over the image
Various sharpening strategies can then be used to restore the apparent sharpness lost to this blur.
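Moiré is an aliasing phenomenon: when detail in the scene repeats at a spacing close to (or finer than) the photosite spacing, the samples reconstruct a false, much coarser pattern. The sketch below demonstrates the effect in one dimension by sampling a fine sinusoidal pattern with too few photosites; the numbers are arbitrary and chosen only to make the alias obvious:

```python
import numpy as np

# A "scene" containing a fine repeating pattern: 48 cycles across the frame.
scene_cycles = 48
num_photosites = 50   # far fewer than the 2x oversampling needed to record 48 cycles

x = np.arange(num_photosites) / num_photosites   # photosite positions (0..1)
samples = np.sin(2 * np.pi * scene_cycles * x)   # what the sensor records

# The dominant frequency in the sampled data is the alias, not the true pattern.
spectrum = np.abs(np.fft.rfft(samples))
alias_cycles = int(np.argmax(spectrum[1:]) + 1)  # skip the DC term
print(f"true pattern: {scene_cycles} cycles, recorded as ~{alias_cycles} cycles")
# -> the 48-cycle pattern is recorded as a coarse 2-cycle pattern: moiré.
# A slight pre-sampling blur (the anti-aliasing filter) suppresses this false detail.
```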
Quality of the analog-to-digital converter (ADC) and its read speed
Just as higher megapixel counts are in tension with keeping digital noise at bay, there is tension between the need for higher frame rates and the higher image quality gained by giving the ADC more time to collect data from the sensor.
As usual, manufacturers are working towards doing both. The quality available from medium-format sensors is partly due to their slower frame rates (in some cases by a factor of 10).
The extent to which your photography requires high frame rates will factor into your speed vs. quality equation. The ADC also determines the sampling bit depth, as it converts the analog voltage to luminance levels. The ADC needs to be matched to the dynamic range of the sensor, thus defining the sensor bit depth for a camera.
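As a rough rule, a sensor's dynamic range in stops is the base-2 logarithm of the ratio between the largest signal a photosite can hold (its full-well capacity) and its noise floor, and the ADC needs at least that many bits to encode the range without discarding information. The sketch below uses hypothetical full-well and read-noise figures purely for illustration:

```python
import math

def dynamic_range_stops(full_well_electrons: float, read_noise_electrons: float) -> float:
    """Approximate usable dynamic range of a sensor, in stops (factors of two)."""
    return math.log2(full_well_electrons / read_noise_electrons)

def min_adc_bits(dr_stops: float) -> int:
    """Smallest whole number of ADC bits that covers that range with linear encoding."""
    return math.ceil(dr_stops)

# Hypothetical sensor: 60,000-electron full well, 12 electrons of read noise.
dr = dynamic_range_stops(60_000, 12)
print(f"~{dr:.1f} stops of dynamic range -> at least a {min_adc_bits(dr)}-bit ADC")  # ~12.3 -> 13-bit
```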
Sensor bit depth (8-bit, 12-bit, 14-bit, 16-bit)
The trend has been to increase bit depth from the 8 bits available in JPEG capture to 12 bits, 14 bits and even 16 bits. Newer DSLR cameras have progressed from recording raw data with 12-bit tonal gradation to 14-bit tonal depth. Most medium-format sensors record 14-bit depth data with some claiming 16-bit depth. The question is whether higher bit depth translates into higher image quality.
First we should dispel the myth that higher bit depth translates into higher dynamic range. It does not. Dynamic range is determined by the sensor's ability to read very low brightness as well as very high brightness levels. High bit depth slices the data more finely, but does not increase the ratio between the lowest and highest brightness levels a sensor can record.
Since 12-bit data has 4,096 tonal levels and 14-bit data has 16,384 tonal levels, one might expect to see smoother tonal transitions and less possibility of posterization with 14-bit capture, both of which would result in higher image quality. However, this is currently not the case for most 14-bit cameras, because digital noise overwhelms the extra bit depth. Slicing the data more finely than the level of the noise means that the extra bit depth doesn't contribute to image quality.
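A quick way to see this is to express the noise as a number of raw levels at each bit depth. If the noise in a midtone already spans several levels, quantizing more finely than the noise adds data without adding visible quality. The noise figure below is hypothetical and chosen only to illustrate the comparison:

```python
# Compare the number of tonal levels with the noise level at 12-bit and 14-bit depth.
# Assume, hypothetically, midtone noise equal to 1/1000 of full scale.

NOISE_FRACTION_OF_FULL_SCALE = 1 / 1000

for bits in (12, 14):
    levels = 2 ** bits
    noise_in_levels = NOISE_FRACTION_OF_FULL_SCALE * levels
    print(f"{bits}-bit: {levels} levels, noise spans ~{noise_in_levels:.0f} of them")
# 12-bit: 4096 levels, noise spans ~4 of them
# 14-bit: 16384 levels, noise spans ~16 of them
# In both cases the quantization step is already finer than the noise, so the
# extra two bits mostly record noise rather than additional image detail.
```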
Another factor that keeps us from realizing extra image quality from a 14-bit sensor as opposed to a 12-bit sensor is that most current DSLR cameras have less than 12 stops of dynamic range. Unfortunately, this makes the extra bit depth superfluous data. There are some exceptions. Fuji cameras employ two sets of pixels of differing sensitivity that allow for spanning more than 13 stops, so 14-bit depth is useful in that case. The latest Sony sensors, used in Nikon, Pentax and Sony cameras, have very low noise levels and high dynamic range and so are close to being able to generate genuinely useful 14-bit data.
As sensor technology continues to evolve, lower noise levels and greater dynamic range promise to make high bit depth capture a useful feature in future cameras.
For more information about noise, dynamic range and bit depth, see: http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/noise-p3.html#bitdepth
Bit depth and video
While high-bit capture offers benefits for still workflows, it offers no advantage for DSLR video workflows. The file formats used to record video are currently limited to 8 bits per channel. For this reason, greater care should be taken to monitor exposure accurately, as the types of adjustment possible in raw photo workflows are not as easily implemented on a video image.