
What is High Speed Video?
Why Use High Speed Video?
What is Required?
Frame Rate
Sensor Dimensions
Exposure
Depth of Field
Sensitivity
Record Time
Resolution
Record Modes
Time Magnification
Lighting Techniques
Color

What is High Speed Video?

High frame rate imaging has its origins in the work of the English photographer Eadweard Muybridge, who used a sequence of still cameras in the 1870s to photograph moving horses.

Most people at that time believed that a horse's gait moved one pair of legs at a time, from side to side. What Muybridge discovered from his high speed imaging technique, however, was that all four legs leave the ground at the same time. This was the first time that high speed imaging was used to analyze an event that moved faster than the eye could perceive.

Up to the early 1960s, film was the only medium available for recording motion that was too fast for direct observation. In the late 1960s, the development of reliable video technology gave researchers and engineers another tool for motion analysis, along with the benefit of immediate review of the recorded event.

Film processing has changed little over the past century. It has become more automated, the chemical solutions are better balanced, and the processing time has been reduced, but time and special developing facilities are still required. Video therefore became the technology of choice in motion analysis applications. High speed events that are random in nature, extreme in size or speed, or have other challenging characteristics can be difficult or impossible to study using conventional video (camcorder) imaging techniques. New video technology was required to capture images of such demanding applications. The combination of these new video technologies into a recording system is commonly referred to as a high speed video system or a motion analyzer. Since the 1970s, when the first electronic motion analyzers became commercially available, the cost, capabilities and user friendliness of high-speed video and electronic cameras have improved dramatically. Today's high-speed video cameras offer far more capabilities and advantages than their forerunners. And while film certainly continues to have important applications in high-speed photography, the increasing sophistication of electronic motion analyzers ensures their place in future image data acquisition.

Why Use High Speed Video?

High-speed video cameras offer the advantages of ease of use, live picture setup, reusable recording media, and most importantly, immediate playback. The technology also offers specific cost benefits: there are no chemicals or film to buy, and the high-speed electronic camera can be used repeatedly without concern for the cost of disposables.

Other applications currently using high-speed motion analysis include production line troubleshooting, machine diagnostics, destructive testing, automated assembly, packaging, paper manufacturing and converting, and a variety of impact, shock, and drop tests. Some research groups use the technology to study combustion, ballistics, aerodynamics, flow visualization and human performance.

Because film cameras require a "wind-up" time to get up to full speed, electronic imaging has distinct advantages when events are unpredictable or intermittent. Examples include lightning strikes, a jam in a production line, a blade failure in a turbine engine, or a vessel subjected to increasing pressure until it ruptures. Due to the unpredictable nature of such events, it is hard to know in advance when to start the camera. Electronic cameras, on the other hand, can be triggered automatically by a variety of means, or they can record continuously in a loop until triggered to stop.

For example, suppose that in a canning operation one out of every thousand cans jams the production line. Because the jam is completely unpredictable, there is no indication of when a problem may occur, and by the time the event happens it is too late to start recording. Keeping the camera running at high speed while waiting for an event is also extremely costly and impractical. Shown below is a table of playback times (in seconds) for a 1/2 second recording made at various record rates and played back at various rates.

Record Frame Rate (fps):      250      500     1000     2000     3000     4500
Playback @  1 fps          125.00   250.00   500.00  1000.00  1500.00  2250.00
Playback @  5 fps           25.00    50.00   100.00   200.00   300.00   450.00
Playback @ 10 fps           12.50    25.00    50.00   100.00   150.00   225.00
Playback @ 15 fps            8.33    16.67    33.33    66.67   100.00   150.00
Playback @ 30 fps            4.17     8.33    16.67    33.33    50.00    75.00

Playback Time (sec) vs. Record Frame Rate for a 0.5 sec Record Time
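
Every value in the table above follows from one relationship: the frames captured equal the record rate multiplied by the record time, and the playback duration is that frame count divided by the playback rate. A minimal sketch in Python:

  # Playback duration (s) = (record rate x record time) / playback rate.
  def playback_time(record_rate_fps, record_time_s, playback_rate_fps):
      frames = record_rate_fps * record_time_s   # total frames captured
      return frames / playback_rate_fps          # seconds of playback

  # 0.5 s recorded at 1000 fps, played back at 30 fps:
  print(playback_time(1000, 0.5, 30))  # ~16.67 seconds, matching the table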

Electronic cameras offer another distinct advantage: synchronization. Multiple electronic cameras can be set up at different angles to record an event or series of events. The cameras can be triggered together or in any particular sequence. Most importantly, when the cameras are running at the same time, they capture data at exactly the same moment. This allows for more complete data, better quantitative measurements and deeper analysis. Such precise synchronization is not possible with high-speed film cameras.

In an airbag deployment test, for example, engineers want to see the event from a variety of angles. The information is far more valuable if an exact moment can be viewed from different angles. Because such events are of extremely short duration, having cameras even slightly out of synchronization tremendously reduces their value for 3-D analysis.

The electronic camera's immediate playback capability may be its greatest asset. The cost of film is minuscule when compared to the cost of an engineer's time. If a test can be reviewed immediately, engineers know right away whether they need to plan another test. Immediate playback also speeds up the entire process of finding a problem and then correcting it. The lengthy delays between tests and the expensive set-up and tear-down of test equipment are things of the past.


What is Required?

To obtain satisfactory motion analysis results from a high-speed video camera, a number of factors have to be considered. The frame rate, the image resolution and the method of recording all determine the imagery obtained from a test. How much light is available? How much light is needed? What are the sensitivity and resolution capabilities of the imager? The answers to these questions determine not only the test's equipment requirements, but also strongly influence the test's results.

The first question that must be asked is: What do I want to be able to see and/or measure from the motion analysis test? That answer determines everything else. But because of the technology's flexibility, the questions don't have to be answered perfectly. One of high-speed video's greatest assets is immediate playback. If in the first test the frame rate is too slow, the frame rate is simply increased for the next one. If more light is needed to get a sharper image, another lamp can be added, the lens aperture may be opened, or a light amplifier (intensifier) could be used. Engineers can also experiment with various settings to find the optimal parameters.

The following sections describe a few of the parameters that determine the end result. In any motion analysis test, all imaging parameters must be determined to some degree, even if the experimenter must arrive at them through trial and error.


Frame Rate

Frame rate, sample rate, capture rate and imager (or camera) speed are interchangeable terms. Measured in frames per second, the imager's speed is one of the most important considerations in motion analysis. The frame rate is determined after considering the event's speed, the size of the area under study, the number of images needed to obtain all the event's essential information, and the frame rates available from the particular motion analyzer. For example, at 1,000 fps a picture is taken once every millisecond. If an event takes place in 15 milliseconds, the imager will capture 15 frames of that event. If the frame rate is set too low, the imager will not capture enough images. If the frame rate is set higher than necessary, the analyzer's limited storage may not be able to hold all the necessary frames. In other instances, too high a frame rate sacrifices the area of coverage. This happens when an imager's frame rate is set higher than its ability to provide full-frame coverage. In most of the new generation of motion analyzers, the imagers have an option that provides "partial frames per second." At this setting, the height of the image is sacrificed, but in return the frame rate can be as much as twelve times the imager's full frames per second rate. Some lower frame rate motion analyzers quote increased frame rates achieved by recording partial frames. Currently, the fastest motion analyzer provides 4,500 full fps and up to 40,500 partial fps.
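
The frame count arithmetic in the 1,000 fps example is simple enough to sketch. The second helper below inverts it, giving the minimum frame rate needed to collect a desired number of frames of an event:

  # Frames captured = event duration (s) x frame rate (fps).
  def frames_captured(event_duration_s, frame_rate_fps):
      return event_duration_s * frame_rate_fps

  # Minimum frame rate to collect a desired number of frames of an event.
  def min_frame_rate(event_duration_s, frames_needed):
      return frames_needed / event_duration_s

  print(frames_captured(0.015, 1000))  # 15 frames of a 15 ms event at 1,000 fps
  print(min_frame_rate(0.015, 30))     # 2,000 fps to capture 30 frames of it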

When considering the frame rate performance of a motion analyzer, be specific about your requirements. Look closely at a manufacturer's specification sheet to see what the true resolution is at any given frame rate. Some lower frame rate motion analyzers use a technique called line doubling to increase their apparent full frame rate performance. The true resolution at the stated frame rate is actually lower; upon display, the lines are doubled to fill out the image (4:3 aspect ratio). If no analysis is intended for the images, this presents no problem. However, if measurements are to be made, it is important to know the true frame size (resolution) so that measurements in the direction in which lines are doubled can be corrected in the calculations. Typically, the imaging sensor in this type of motion analyzer was designed for standard video. Such a sensor costs less than one designed for high frame rates, but it is being pushed beyond its original specification; to achieve the higher frame rate, the amount of image data read out of the sensor must be reduced (lower resolution). Therefore, make sure the frame rate performance matches the motion analyzer's true capability.


Sensor Dimensions

The size of the image sensor in a camera is important to know. Common sensor sizes include 1/2 inch, 2/3 inch and 1 inch. The 1 inch sensor has an effective width of 12.8 millimeters, while the 2/3 inch sensor has an effective width of 8.8 millimeters. A lens that works properly on a camera with a small sensor may not produce a large enough image to work correctly on a camera with a large sensor, due to the distortion in the fringe areas of the lens. Knowing the width of the sensor helps prevent image blur, because it allows users to calculate parameters such as the correct exposure time. The sensor's width also allows users to calculate the depth of field for a given aperture.


Exposure

Many factors influence the amount of light required to produce the best image possible. Without sufficient light, the image may be:

  • under-exposed, with detail lost in dark regions
  • unbalanced, with poor color reproduction
  • blurred, due to the lack of depth-of-field

The time that the imaging sensor is exposed to light depends on several factors. These include the lens f-stop, the frame rate, the shutter time, the light levels, the reflectance of surrounding material, the imaging sensor's well capacity, and the sensor's signal-to-noise ratio (SNR). All of these factors can significantly impact image quality. An often overlooked factor is the exposure time, also known as the shutter time.

Exposure time, shutter rate and shutter angle are interchangeable terms. The exposure time for a mechanical shutter is set in terms of the number of degrees that the shutter is open. The exposure time for an electronic sensor is either the inverse of the frame rate, if no electronic shutter exists, or the time in microseconds that an electronically shuttered sensor is exposed. Shown below are the relationships that define the exposure time.

mechanical shutter = (angle / 360) / revolutions per second
no shutter = 1 / frame rate
electronic shutter = period of time that the sensor is exposed
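
A minimal sketch of these relationships in Python; the 36 degree opening and 1,000 revolutions per second are assumed values chosen only for illustration:

  # Exposure time (s) for the shutter cases above.
  def mechanical_shutter_exposure(revs_per_second, open_angle_degrees):
      # Fraction of each revolution the shutter is open, times the revolution period.
      return (open_angle_degrees / 360.0) / revs_per_second

  def no_shutter_exposure(frame_rate_fps):
      return 1.0 / frame_rate_fps

  print(mechanical_shutter_exposure(1000, 36))  # 0.0001 s (100 microseconds)
  print(no_shutter_exposure(1000))              # 0.001 s (1 millisecond)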

The exposure time determines how sharp, or blur free, an image is, regardless of the frame rate. The exposure time needed to avoid blur depends on the subject's velocity and direction, the amount of lens magnification, the shutter speed or frame rate (whichever is faster) and the resolution of the imaging system.

A high velocity subject may be blurred in an image if it moves too far during the integration of light on the sensor. If a sharp edge is imaged, and the object moves more than 2 pixels (one line pair) within one frame, the edge may be blurred. This is because multiple pixels each image an averaged value of the edge, creating a smear or blur effect. To get good picture quality, the shutter rate should be 10x the rate implied by the subject's velocity.

The lens magnification influences the relative velocity of the subject being imaged. The velocity of an object moving across a magnified field-of-view (FOV) increases linearly with the magnification level. Intuitively, if an object is viewed from far away, its relative velocity across the FOV is less than when it is viewed up close.

Motion analyzers use electronic or mechanical shutters that operate as fast as 10 microseconds (1/100,000 of a second), which is fast enough to provide blur-free images of high-speed events. The shutter controls the amount of light reaching the sensor through the cycle rate of the shutter and the time that the shutter is open. The cycle time is set by the frame rate; the shutter then determines the exposure time. If the imaging sensor has no shutter capability, the frame period is the effective exposure time, so higher frame rates are required for a high velocity object. The shutter is synchronized to the sensor timing, and multiple cameras can be synchronized if their shutters can be controlled in unison. The table below lists subjects whose typical velocities have been averaged and converted to minimum frame rates and exposures.

SUBJECT                                            Min. Frame Rate (fps)   Exposure (µSec)
Money sorting machine (single bill time)                   500                  100
Flame pattern test (fuel combustion)                      3000                   20
Wire bonding (one cycle)                                  1000                   50
Surface mount (one placement cycle, no pickup)            1000                  100
Food—crackers on process line (three samples)              250                 1000
Potato chips being bagged (one cycle)                      250                 1000
Tire testing, front and rear over glass plate              500                  100
Hot glue applied to film box flap                          500                  500
Blood stream (one cell motion across screen)              1000                   20
High voltage circuit breaker (one cycle)                  1000                 1000
Label pickup (one label)                                   250                 1000
Golf ball impact and flight (club)                        1000                   20
Composite material fracture                               1000                  100
Car crash test (impact)                                   1000                  100
Air bag inflation                                         3000                   70

A proper shutter speed may be calculated as follows.

Exposure (shutter time) <= 2 x Pixel Size / Vr
where:
Vr = object's velocity x (sensor dimension / field-of-view)
Pixel Size = sensor dimension / total pixels
Note: the sensor dimension and total pixel count should correspond to the same axis (x or y).

If the object's velocity, the field-of-view, the imaging sensor's dimensions and the pixel count are known, the shutter speed required to produce a sharp image can be calculated. The relative velocity (Vr) at the sensor is found by reducing the subject's velocity by the optical reduction from the field-of-view down to the sensor. The pixel size is found by dividing the sensor size in the dimension of interest (x or y) by the total number of pixels in that dimension. Knowing that motion of less than 2 pixels (one line pair) at the sensor plane during the exposure will produce a good image, we multiply the pixel size by two and divide by the relative velocity (Vr) to obtain the maximum exposure time. Its inverse yields the minimum shutter speed or, in the case of an imaging system without a shutter, the minimum frame rate for sharp images.
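
A minimal sketch of this calculation in Python; the 8.8 mm sensor width, 512 pixel count and 500 mm field-of-view are assumed values chosen only to illustrate the arithmetic:

  # Maximum exposure (s) that keeps motion at the sensor plane within 2 pixels.
  def max_exposure(object_velocity, field_of_view, sensor_dim, total_pixels):
      # All lengths in the same unit (e.g. mm); velocity in that unit per second.
      pixel_size = sensor_dim / total_pixels               # size of one pixel
      vr = object_velocity * (sensor_dim / field_of_view)  # velocity at the sensor
      return 2 * pixel_size / vr

  # Object at 5,000 mm/s across a 500 mm FOV on an 8.8 mm wide, 512 pixel sensor:
  t = max_exposure(5000, 500, 8.8, 512)
  print(t)      # ~0.00039 s (about 390 microseconds)
  print(1 / t)  # ~2,560 fps minimum frame rate if the system has no shutter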


Depth of Field

Depth-of-field (DOF) is the range over which an object remains in focus within a scene. The largest DOF occurs when a lens is focused at infinity. The smaller the f-stop, the smaller the DOF. If the object is moved closer to the lens, the DOF also decreases. Lenses of different focal lengths will not have the same DOF for a given f-stop.


Sensitivity

Most modern image sensors have a sensitivity equivalent to a film exposure index of between ISO 125 and ISO 480 in color, and up to ISO 3200 in monochrome. Sensitivity is a very important factor in obtaining clear images. An inexperienced user may confuse motion blur with poor depth-of-field. If the sensitivity of the camera is not high enough for a given scene, the lens aperture must be opened up, which reduces the depth-of-field within which the object remains in focus. As the object moves, it can take a path outside the area that is in focus, giving the appearance of motion blur when, in reality, the object is simply out of focus.

In practice, a single 600-watt incandescent lamp placed four feet from a typical subject provides sufficient illumination to make recordings at 1,000 fps with an exposure of one millisecond (1/1,000 of a second) at f/4. This level of performance is fine for many applications, although some demanding high-speed events have characteristics for which greater light sensitivity may be preferred.


Record Time

The recording time of a high-speed video system depends on the frame rate selected and the amount of storage medium available. Continuing technological advances in DRAM make higher storage levels affordable, but DRAM is still a limiting factor. However, as the following table shows, most high-speed events occur over such short durations that 2,000 frames is usually more than enough to capture the event. As memory chips get denser, the storage capacity of motion analyzers will increase. The table below provides average event times for various applications, measured from actual imaging data. An event time is defined as the duration of the event that produced significant information for motion analysis.

SUBJECT                                            EVENT TIME (sec)   FRAMES (1K fps)
Money sorting machine (single bill time)                 1.2               1,200
Flame pattern test (fuel combustion)                     0.7                 700
Wire bonding (one cycle)                                 0.8                 800
Surface mount (one placement cycle, no pickup)           0.3                 300
Food—crackers on process line (three samples)            0.3                 300
Potato chips being bagged (one cycle)                    1.1               1,100
Tire testing, front and rear over glass plate            0.4                 400
Hot glue applied to film box flap                        0.2                 200
Blood stream (one cell motion across screen)             0.8                 800
High voltage circuit breaker (one cycle)                 0.2                 200
Label pickup (one label)                                 0.6                 600
Golf ball impact and flight (club)                       0.6                 600
Composite material fracture                              0.1                 100
Car crash test (impact)                                  0.3                 300
Air bag inflation                                        0.035                35
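
The frame counts in the table are simply the event time multiplied by the 1,000 fps record rate, and the same arithmetic shows whether an event fits in a given amount of frame memory. A minimal sketch:

  # Frames needed = event time (s) x frame rate (fps).
  def frames_needed(event_time_s, frame_rate_fps):
      return event_time_s * frame_rate_fps

  def fits_in_memory(event_time_s, frame_rate_fps, memory_frames):
      return frames_needed(event_time_s, frame_rate_fps) <= memory_frames

  print(frames_needed(0.035, 1000))       # 35 frames for an air bag inflation
  print(fits_in_memory(1.2, 1000, 2000))  # True: a 1.2 s bill-sorting cycle fits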


Resolution

The resolution of a motion analyzer is generally expressed as the number of pixels in the horizontal and vertical dimensions. A pixel is defined as the smallest unit of a picture that can be individually addressed and read. At present, high-speed camera resolutions range from 128 x 128 to 512 x 512 pixels. Future resolutions will go as high as 1024 x 1024 pixels. Generally, the limiting element in the resolution of the imaging system is the imaging sensor.

A rule of thumb for capturing high-speed events is that the smallest object or displacement to be detected by the camera should not be less than 2 pixels within the camera’s horizontal field of view.

Sensor resolution may also be expressed in terms of line pairs per millimeter (lp/mm), which states how many black-to-white transitions (line pairs) can be resolved in one millimeter. To calculate a sensor's theoretical limiting resolution in lp/mm, take the inverse of two times the pixel size. Shown below is the limiting resolution of a sensor with a 16 micron pixel.

Theoretical Limiting Resolution = (1 / (2 x pixel size in microns)) x 1000
  = (1 / (2 x 16)) x 1000
  = 31.25 lp/mm
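
The same calculation as a short Python function, with the pixel size assumed to be given in microns:

  # Theoretical limiting resolution in line pairs per millimeter.
  def limiting_resolution_lp_per_mm(pixel_size_microns):
      return 1000.0 / (2 * pixel_size_microns)

  print(limiting_resolution_lp_per_mm(16))  # 31.25 lp/mm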


Record Modes

Motion analyzers have various methods of recording, and this variety is one of the most distinguishing features of high-speed video. Certain recording methods cannot be matched by high-speed film cameras. The motion analyzer's most useful recording mode is called continuous record. In continuous record mode the camera runs continuously, replacing its oldest images with the newest until an event occurs and triggers the camera to stop. Further flexibility allows the operator to program exactly how many images before and after the event are saved. For engineers and technicians trying to record something unpredictable or intermittent, continuous record with triggering is the only feasible method of capturing the event.

One of the most powerful recording modes, but the least understood and hence least used, is Record-On-Command (ROC). ROC is powerful because images can be selected according to a user-supplied signal. Consider a packaging application in which the objective is to capture over a thousand images of a box lid being closed. An intermittent error causes the lid to be damaged as it closes. A problem like this is difficult to trigger on, since the damage may only be discovered further down the packaging line. By using a tachometer pulse off the shaft driving the closing mechanism, precise timing can be derived to indicate the exact position at which the lid is being closed. This timing pulse is used to qualify images for storage in memory: if the pulse is present, images are written into the motion analyzer's memory; in the absence of the pulse, no images are recorded. Therefore, only images of the lid in the exact position of interest are recorded, and the recording continues until memory is full. In addition, a range of motion may be recorded if the pulse is longer than a single frame period; as illustrated below, if the motion analyzer is operating at 1000 fps and the pulse into ROC is 5.5 milliseconds long, 5 images per pulse will be stored. The use of this recording technique is limited only by the user's imagination.
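
The images-per-pulse arithmetic from that example is just the pulse width divided by the frame period, rounded down to whole frames. A minimal sketch:

  # Images qualified by one ROC pulse = pulse width / frame period, in whole frames.
  def images_per_pulse(pulse_width_s, frame_rate_fps):
      frame_period = 1.0 / frame_rate_fps
      return int(pulse_width_s / frame_period)

  print(images_per_pulse(0.0055, 1000))  # 5 images for a 5.5 ms pulse at 1,000 fps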

Another less familiar recording technique for motion analyzers with DRAM memory is slip sync. This technique operates the motion analyzer at a frame rate defined by a user-supplied signal. Again, consider the application above. Slip sync imaging is very similar to imaging with a strobe synchronized to an object with repetitious movement. In our example, the user would input a frequency synchronized to the tachometer. As the frequency is varied, the captured images slip in or out of phase with the tachometer in a positive or negative direction, allowing any position of the lid movement to be observed and captured. Another example is an accelerometer voltage fed to a voltage-to-frequency converter. As the acceleration changes, so does the frequency out of the converter, and this frequency drives the frame rate of the motion analyzer. Why is this useful? Objects that move faster need a higher frame rate than objects that move more slowly, so the rate of recording tracks the rate of change, as sketched below. Application examples include a crush test for materials using a strain gauge, a flame propagation study in a combustion engine using a pressure sensor, an automotive crash test using an accelerometer, or an explosion with a light sensor detecting the detonation. This mode of recording is uniquely possible with DRAM based motion analyzers.
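
A sketch of the voltage-to-frequency idea; the converter gain of 500 Hz per volt is a hypothetical value chosen purely for illustration:

  # Slip sync sketch: an external signal drives the record rate. Here an
  # accelerometer voltage passes through a voltage-to-frequency converter
  # whose output frequency clocks the motion analyzer's frame rate.
  VF_GAIN_HZ_PER_VOLT = 500.0  # hypothetical converter gain

  def frame_rate_from_voltage(accel_volts):
      return accel_volts * VF_GAIN_HZ_PER_VOLT

  # As the acceleration (and therefore the voltage) rises, so does the record rate:
  for volts in (0.5, 2.0, 6.0):
      print(volts, "V ->", frame_rate_from_voltage(volts), "fps")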


Time Magnification

The goal in using a high-speed camera is to obtain a series of pictures that can be observed in slow motion after the high-speed event has been captured. Time magnification describes the degree of "slowing down" of motion during the playback of an event. To determine the amount of time magnification, divide the recording rate by the replay rate. For example, a recording made at 1,000 fps and replayed at 30 fps has a time magnification of 33:1; one second of real time will last for about 33 seconds on the television or computer monitor. If the same recording were replayed at only 1 fps, that one second event would take more than 16 minutes to play back! Most systems allow replay in forward or reverse at variable playback speeds. It is therefore important to capture only the information that is necessary; otherwise, long recordings can take hours to play back. Some examples are shown below.

Record Rate (fps)   Time (sec)   Frames Recorded   Playback @ 30 fps   Playback @ 1 fps
 250                   20             5,000             167 sec             83 min
 500                   60            30,000           1,000 sec            500 min
1000                    1.5           1,500              50 sec             25 min
4500                    0.11            500              17 sec              8 min


Lighting Techniques

Lighting an application properly can produce dramatically better results than poor light management. There are four fundamental directions for lighting high speed video subjects: front, side, fill and backlight. Placing a light behind or adjacent to the lens (front lighting) is the most common method of illuminating a subject; it is advisable to keep the light behind the lens to avoid specular reflections off the lens. However, some fill or side lighting may be needed to eliminate the shadows produced by front lighting. Side lighting is the next most common technique. As the name implies, the light comes in at an angle from the side, which can produce very pleasing illumination; in fact, for low contrast subjects, a low incident lighting angle from the side can enhance detail. Fill lighting may be used to remove shadows or other dark areas, and may also lessen the flicker from lamps with poor uniformity; fill comes from the side or the top of a scene. Backlighting illuminates a translucent subject from behind. It is not used frequently in high speed video, but certain applications such as microscopy, web analysis and flow visualization are well suited to it. All of these techniques are important for getting a high quality image.

Lighting Sources

There are a number of lighting sources available for high speed video, and some care must be taken in selecting among them. The areas to consider include the type of light, the uniformity of the light source, the intensity of the light, the color temperature, the amount of flicker, the size of the light, the beam focus and the handling requirements. All of these factors are important in matching the light to the application.

Type of Lighting

Lighting types can be identified by two characteristics: the physical design and the method of producing the light. The physical characteristics include the lens, the reflector, the packaging and the bulb design. The methods of producing light include tungsten, carbon arc, fluorescent and HMI.

  • Tungsten
    Tungsten lamps are also referred to as incandescent lamps. Tungsten color temperature is 3200K. One type of tungsten lamp is the halogen lamp, which runs hotter since the bulb must heat the tungsten for the regenerative halogen cycle. Tungsten lamps are efficient in their light output.
  • Carbon Arcs
    This type of lamp forms an arc between two carbon electrodes. The arc produces a gas that fuels a bright flame that burns from one electrode to the other. In time, this consumes the carbon.
  • Gas Discharge
    The fluorescent tube is one type of gas discharge lamp. At the end of each tube are electrodes, and the tube is normally filled with argon and a small amount of mercury. As current is applied at the electrodes, the mercury is vaporized and emits ultraviolet light. This strikes the phosphor coating on the inside of the tube, which transforms the ultraviolet into visible light. Most fluorescent lamps emit a dominant green hue, which makes them a poorly balanced light source. Additionally, the discharge produces a non-uniform light that is easily detected as 60 cycle flicker when playing images back from a high-speed motion analyzer.
  • Arc Discharge
    HMI (mercury medium-arc iodide) is the most common lamp in this class of lighting. As current passes through the HMI electrodes, an arc is generated and the gas in the lamp is excited to a light emitting state. The emitted spectrum includes visible as well as ultraviolet light, so this source typically has a UV filter to block the harmful emissions. The HMI lamp is a balanced light source that generates an intense white light. If a switching ballast is used with the HMI, it produces a uniform light with very low flicker; other types of ballast are not as well regulated.


Color

Understanding color is difficult but necessary, even for monochrome imaging. The color of light is determined by its wavelength. The longer wavelengths are warmer in color (red); the shorter wavelengths are cooler (blue).

Color perception is a function of the human eye. The surface of an object either reflects or absorbs different wavelengths of light. The light that the human eye perceives is unique in that it produces a physiological effect in our brain; what is red to one person may be perceived slightly differently by another. Terms that further describe the color of an object are hue, saturation and brightness. Hue is the base color, such as red, blue, violet or yellow. Saturation describes the shades that vary from a pure base color; an example of a hue would be green, with lime (light green) as a less saturated shade. Brightness, also known as luminance, is the intensity of the light. The subject of color would take an entire book to explain fully, but studying a color chart can give the user some insight into the composition of a color scene.

Color temperature is a common way of describing a light source. The term originally derived its meaning from heating a theoretical black body to temperatures at which it gives off colors ranging from red hot to white hot. The scale was developed by Lord Kelvin, whose name is associated with the unit of measure.

Color versus Monochrome

Most early high-speed film was black-and-white. Once color film became available, the use of black and white declined. High-speed color film set the format standard that video has attempted to meet. For years, monochrome images were all that could be recorded on most motion analyzers. Today's motion analyzers can produce images that replace color film for some high speed applications; full 24-bit color images are now possible. To understand the strengths and weaknesses of both color and monochrome in varying high speed video applications, some background must be discussed.

There are various methods of producing color in high speed video. The three most widely used techniques are the color wheel, the beam splitter, and the color filter array. The color wheel is used in still imaging: the subject does not move, but the wheel rotates to position a primary color filter and an image is taken; the wheel then moves to the next filter and another image is taken; finally, the last filter is positioned and a third image is taken. The three images taken through the primary filters are combined into a three color plane image (RGB). This technique is not suitable for high speed video because of the motion differences between successive images. Using three imaging sensors with stationary color filters and a beam splitter, true color reproduction is possible: the primary colors and all their saturations can be reproduced. This technique is costly, since the electronics are tripled by the need for three imaging sensors, and the alignment of the three sensors must be very precise or color misregistration will occur. The last technique is a cost-saving compromise. Color Filter Arrays (CFA) provide a more cost effective means of producing color, since only one imaging device is needed. Individual color filters are deposited on the surface of each pixel, in some combination of red, green and blue or a complementary color scheme, so each pixel is isolated to a certain color spectrum. Although the pixels are filtered, the raw data must be interpolated to fill in the missing pixels in each color plane.
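
As an illustration of the interpolation a CFA requires, here is a minimal sketch that fills in the missing green samples of a checkerboard (Bayer-style) pattern by averaging the neighboring green pixels; real cameras use considerably more sophisticated demosaicing:

  import numpy as np

  # Green pixels sit on a checkerboard; red/blue sites have no green sample
  # and must be interpolated from their nearest green neighbors.
  def interpolate_green(raw):
      h, w = raw.shape
      green = np.zeros((h, w))
      for y in range(h):
          for x in range(w):
              if (x + y) % 2 == 0:   # a green site on this checkerboard
                  green[y, x] = raw[y, x]
              else:                  # a red or blue site: average the neighbors
                  neighbors = [raw[y + dy, x + dx]
                               for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                               if 0 <= y + dy < h and 0 <= x + dx < w]
                  green[y, x] = sum(neighbors) / len(neighbors)
      return green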

Now that the main methods of producing color have been discussed, we should review why one would image in color rather than monochrome. Generally, monochrome images are better in image quality. Monochrome cameras are more sensitive because there is no color filtering, and their resolving capability is better than that of CFA imaging sensors because no interpolation is involved. The disadvantage of a monochrome image is the loss of color differentiation: a subtle change in gray levels is harder to observe than a change in hue or saturation. Color is valuable for differentiating shades, and it also provides a bridge from color film to color video.

