• 1394a: Also known as FireWire (see below).
  • 1394b: Also known as FireWire-b (800 Mb/s).
  • Aberration: Factors in an optical system that degrade the resulting image. Optical design entails many different approaches to correcting various aberrations, such as spherical and chromatic aberration, astigmatism, comatic flare (coma), and distortion.
  • Acquisition: Image acquisition refers to the process by which image data is transferred from a camera into the computer.
  • AGC: Abbreviation of Automatic Gain Control. A feature built into a camera to automatically control the gain level.
  • Analog: Analog cameras do not have a digital output. These cameras generally provide a TV-like signal that needs to be digitized in the host computer if it is to be used in machine vision. Although analog cameras are still widely used in machine vision, they are quickly being displaced by digital cameras, which provide a much higher-performance machine vision solution. When comparing analog vs. digital cameras, the main differences are image quality, exposure control, speed, and ease of integration.
  • Aperture Ratio: The ratio of the effective lens opening to its focal length (1/F#).
  • Area scan: Area scan refers to a camera sensor consisting of a rectangular array of pixels. Area scan cameras are sometimes called matrix cameras. By way of contrast, line scan cameras are those with a sensor comprising a single line of pixels (see Linescan).
  • Aspherical: An optical element processed with non-spherical surface(s). There are several ways to create aspherics, e.g. grinding, press molding, injection molding, and hybrid methods, all of which require high-precision technology.
  • Autoiris (Auto Iris) : Some lenses, particularly those used in outdoor imaging, incorporate a galvanometer-type drive to automatically control the aperture, or iris, of the lens. There are basically two types of auto-iris: DC-type and video type.
  • Automatic Light Compensation (ALC Control): Photometric control that sets how the auto-iris reacts to bright objects in a picture that do not affect the overall video level. Turning the control towards Peak increases sensitivity; turning it towards Average decreases sensitivity. It is normally set to "Average" under factory-shipped conditions.
  • Barrel: The chassis of a lens, usually cylindrical, that contains the lens elements and iris diaphragm.
  • Binning: Binning is the technique of combining pixels together on a CCD to create fewer but larger pixels. True binning combines charge in adjacent pixels in a manner that increases the effective sensitivity of the camera. Machine vision cameras do not generally have true binning functions.
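    The effect of binning can be illustrated in software; the NumPy sketch below mimics 2x2 binning by summing each 2x2 block of pixels into one larger pixel. The image size and dtype are illustrative assumptions, and true binning combines charge on the sensor itself rather than in software:

      import numpy as np

      # Illustrative 16-bit frame; real data would come from the camera driver.
      img = np.random.randint(0, 256, size=(480, 640), dtype=np.uint16)

      # Sum each 2x2 block into one "super-pixel", mimicking how true binning
      # adds the charge of adjacent pixels before readout.
      h, w = img.shape
      binned = img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

      print(img.shape, "->", binned.shape)   # (480, 640) -> (240, 320)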
  • Blob Analysis: A machine vision algorithm that identifies segmented objects (blobs) according to geometric properties such as area, perimeter, color, etc.
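    A minimal blob analysis sketch using OpenCV in Python; the threshold value, file name, and the choice of connectedComponentsWithStats are illustrative assumptions rather than a prescribed implementation:

      import cv2

      # Illustrative segmentation: threshold a grayscale image into foreground blobs.
      gray = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image file
      assert gray is not None, "image not found"
      _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)

      # Label connected regions ("blobs") and report simple geometric properties.
      num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
      for i in range(1, num):                     # label 0 is the background
          x, y, w, h, area = stats[i]
          print(f"blob {i}: area={area}, bbox=({x}, {y}, {w}, {h}), centroid={centroids[i]}")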
  • Brightness: In reference to cameras, an offset setting applied equally to all pixels regardless of the pixel value. Similar to the brightness setting on a typical computer monitor or television. See "Offset"
  • Camera Link: One of the common digital camera hardware interfaces on the market today. It offers high data-transfer rates, but is limited by cable length and does not have a standard communications protocol. Camera Link is largely being displaced by more modern high-performance digital interfaces such as Gigabit Ethernet (GigE Vision).
  • CCD: An abbreviation for charge-coupled device. A CCD sensor is a light-sensitive semiconductor device, which converts light particles (photons) to electrical charge (electrons). CCD cameras are one of two dominant types of sensor technologies used in machine vision. The other sensor technology is called CMOS.
  • CMOS: Complementary Metal Oxide Semiconductor. CMOS refers to an image sensor technology that is manufactured using the same processes as computer chips. This technology works like a photodiode where the light 'gates' a current that is representative of the amount of light impinging on each pixel. This differs significantly from CCD technology. There are a number of advantages in using CMOS sensors over CCD, including cost, speed, anti-blooming, and programmable response characteristics (i.e. multiple-slope response). CCDs also have certain advantages.
  • Dark Current: Dark current is the accumulation of electrons within a CCD or CMOS image sensor that are generated thermally rather than by light. This is a form of noise that is most problematic in low light applications requiring long exposure times.
  • DCAM: DCAM or IIDC is a software interface standard for communicating with cameras over FireWire. It is a standardized set of registers etc. If a camera is DCAM compliant then its control registers and data structures comply with the DCAM spec. Such a camera can be truly plug-and-play in a way that other cameras are not.
  • Decibel or dB: A logarithmic unit of measure. When applied to digital cameras, this unit is usually used to describe signal-to-noise ratio or dynamic range.
  • Depth of Field (DOF): The maximum object depth that can be maintained entirely in focus. DOF is also the amount of object movement (in and out of best focus) allowable while maintaining a desired amount of focus.
  • Digital Imaging: Refers to the capture of a video image in such a way that the resulting image data is in digital format useful for analysis by a computer.
  • Dynamic Range: The ratio of the maximum signal relative to the minimum measurable signal, often expressed in decibels (dB). Dynamic range is sometimes used interchangeably with SNR. It can also refer to Optical Dynamic Range.
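    A small sketch of expressing such a ratio in decibels; the full-well capacity and noise floor below are hypothetical numbers chosen for illustration, not figures from this glossary:

      import math

      full_well_e = 20000.0   # hypothetical maximum signal, in electrons
      read_noise_e = 10.0     # hypothetical noise floor (minimum measurable signal)

      # An amplitude ratio is expressed in decibels as 20*log10(ratio).
      dynamic_range_db = 20 * math.log10(full_well_e / read_noise_e)
      print(f"{dynamic_range_db:.1f} dB")   # 66.0 dB for these numbers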
  • E-Flip: A feature found in some PTZ cameras that have a 180° tilt range. It detects the position of the camera and automatically flips the image to the correct orientation so that it always appears right side up. This allows a moving target to be tracked as it moves directly underneath a security camera. Normally, if a target moves underneath the camera, the image will be upside down once it has passed the 90° point.
  • Exposure Time: This is the amount of time that the sensor is exposed to the light. This is the control that is used first (before gain and offset) to adjust the camera. In LabVIEW, the shutter controls are a little confusing: there are 'manual relative', 'manual absolute', 'one-push', and 'auto' controls. Normally, you should use 'manual absolute', where each unit corresponds to 1 µs of exposure time. When using the 'relative' controls, the units are different: 20 µs per unit. This control is called "shutter" in LabVIEW and some DCAM controls.
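    A small sketch of the unit conversion described above, assuming the 1 µs-per-unit (manual absolute) and 20 µs-per-unit (manual relative) scalings stated in this entry; the function names are hypothetical helpers for illustration:

      def exposure_to_absolute_units(exposure_us: float) -> int:
          """Manual absolute control: 1 unit per microsecond (per the entry above)."""
          return round(exposure_us)

      def exposure_to_relative_units(exposure_us: float) -> int:
          """Manual relative control: 1 unit per 20 microseconds (per the entry above)."""
          return round(exposure_us / 20)

      # A 10 ms exposure expressed in each unit system:
      print(exposure_to_absolute_units(10_000))   # 10000 absolute units
      print(exposure_to_relative_units(10_000))   # 500 relative units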
  • Fast Lens: A lens that admits a lot of light, i.e. a lens with a low F-number. A typical fast lens will have an F-number of less than 1.2.
  • Field of View (FOV): The viewable area of the object under inspection. In other words, this is the portion of the object that fills the camera’s sensor.
  • FireWire: A standard computer interface and its various versions otherwise called IEEE 1394, IEEE-1394a, or IEEE-1394b. It is an especially fast serial interface that is low cost with plug and play simplicity of integration. It is currently the only interface for digital industrial cameras that is standardized both in hardware and software communications protocols.
  • Filter Driver: With respect to Gigabit Ethernet cameras, a filter driver, or "filter" is used to reduce the CPU burden when handling large volumes of data. The filter strips out, or "filters", the image data from the Ethernet packets at the lowest level so that the CPU does not have to do this. Using a filter driver can significantly reduce the CPU load associated with image acquisition.
  • Flange Back (Flange Back Focal Distance): The distance from the mechanical flange of the lens (rear edge surface of the lens mount) to the focal plane. C-mount lenses have a flange back distance of 17.526mm while CS-mount lenses have 12.5mm. Because of this, C-mount lenses can be used on CS-mount cameras with an adapter ring of 5mm thickness (however, CS-mount lenses cannot be used on C-mount cameras).
  • F-Number (F/#): Expression denoting the ratio of the equivalent focal length of a lens to the diameter of its entrance pupil (smaller F/# provides larger aperture of the lens, transmitting greater amount of light).
  • Frame Rate: Frame rate is the measure of camera speed. The unit of this measurement is "frames per second" (fps) and is the number of images a camera can capture in a second of time.
  • Frame Grabber (or Framegrabber): This is the industry name for the circuit board (usually a PCI card) that serves as the interface connecting analog cameras, or Camera Link cameras, to a computer system. With the wide range of FireWire and GigE Vision (Gigabit Ethernet) cameras, which do not require such specialized interface cards, frame grabbers are generally no longer required.
  • Gaging (or Gauging): In reference to machine vision, this is non-contact dimensional examination and measurement of an object using an imaging system or machine vision camera.
  • Gain: This is the same as the contrast control on your TV. It is a multiplication of the signal. In math terms, it controls the slope of the sensor's response curve (output versus light). The camera should normally be operated at the lowest gain possible, because gain not only multiplies the signal, but also multiplies the noise. Gain comes in very handy when you require a short exposure (say, because the object is moving and you do not want any blur), but do not have adequate lighting. In this situation the gain can be increased so that the image signal is strong.
  • Gigabit Ethernet: An industry standard interface, variously called 'GigE (gig-ee)', 'GbE', '1000-speed', etc., that is used for high-speed computer networks capable of data transfer rates of 1000 megabits per second. This generalized networking interface has been adapted for use as a standard interface for high-performance machine vision cameras, called GigE Vision.
  • GigE Vision: 'GigE Vision' is an interface standard from the Automated Imaging Association (AIA) for high-performance machine vision cameras. GigE (Gigabit Ethernet), on the other hand, is simply the network technology on which GigE Vision is built. The GigE Vision standard includes a hardware interface standard (Gigabit Ethernet), communications protocols, and standardized camera control registers. The camera control registers are based on a command structure called GenICam. GenICam seeks to establish a common software interface so that third-party software can communicate with cameras from various manufacturers without customization. GenICam is incorporated as part of the GigE Vision standard. GigE Vision is analogous to FireWire's DCAM (IIDC) interface standard and has great value for reducing camera system integration costs and for improving ease of use.
  • Global Shutter: Generally speaking, when someone says "global shutter", they really mean "snapshot shutter". See "Snapshot Shutter" below. In actuality, a global shutter starts all of a camera's pixels imaging at the same time, but during readout mode some pixels continue to image while others are read out (see Rolling Shutter, Snapshot Shutter). For machine vision applications, a snapshot shutter is generally a 'must have'.
  • Gray Scale: Refers to a monochrome image with gradations of gray. An 8-bit camera, for example, would represent images in 256 shades of gray. A 12-bit camera would represent images in 4096 shades of gray.
  • Histogram: A graphical representation of the pixel values in an image. Generally the left edge of the histogram represents black (zero) and the right edge represents white (the maximum pixel value, e.g. 255 for 8-bit or 4095 for 12-bit data). The histogram curve represents how many pixels have each luminance value.
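    A minimal sketch of computing an image histogram with NumPy; the synthetic 8-bit frame is an illustrative assumption standing in for a captured image:

      import numpy as np

      # Illustrative 8-bit image; in practice this would be a frame from the camera.
      img = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

      # One bin per possible pixel value: counts[v] is how many pixels have value v.
      counts, _ = np.histogram(img, bins=256, range=(0, 256))

      print("black bin (0):", counts[0], "  white bin (255):", counts[255])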
  • IIDC: IIDC (also known as DCAM; see above) is a software interface standard for communicating with cameras over FireWire. It is a standardized set of registers, data structures, etc. If a camera is IIDC compliant, then its control registers and data structures comply with the IIDC spec, and such a camera can be truly plug-and-play in a way that other cameras are not.
  • Image Analysis: The software process of generating a set of descriptors or features by which a computer may make a decision about objects in an image.
  • Image Size: Reference to the size of an image formed by the lens onto the camera pick-up device. The current standards are: 1", 2/3", 1/2", and 1/3", corresponding to 16mm, 11mm, 8mm and 6mm measured diagonally.
  • Integration: Generally refers to the task of assembling the components of a machine vision system (camera, lens, lighting, software, etc.). Usually used as a short form of "System Integration". When used in reference to what the camera does, it is another word for exposure time (see Integration Time).
  • Integration Time: Also referred to as exposure time. This is the length of time that the image sensor is exposed to light while capturing an image. This is equivalent to the exposure time of film in a photographic camera. The longer the exposure time, the more light will be acquired. Low light conditions require longer exposure times.
  • Interlaced Scan: Refers to one of two common methods for "painting" a video image on an electronic display screen (the second is progressive scan) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame. One field contains all the odd lines in the image, the other contains all the even lines of the image.
  • Interline Transfer: A CCD architecture where there exists an opaque transfer channel between pixel columns. Such a CCD does not require a mechanical shutter but spatial resolution, dynamic range, and sensitivity are reduced due to the masked column between light sensitive columns.
  • IR Lens: A lens that is specially designed so that chromatic aberrations in the infrared wavelengths are corrected. An IR-lens should be used in cases where both visible and IR illumination is being received by the camera; otherwise the resulting image would be blurred.
  • ISO 9000, 9002: Internationally recognized standards that certify a company's manufacturing record keeping. ISO accreditation does not imply any product quality endorsement, but it is rather an acknowledgement of the manufacturing and/or engineering record keeping practices of the accredited company.
  • Jumbo Frames: With respect to Gigabit Ethernet, jumbo frames refers to the data packet size used for each Ethernet frame. Since each data frame must be handled by the operating system, it makes sense to use large data frames to minimize the amount of overhead when receiving data into the host computer. Such large data blocks are called jumbo frames.
  • Level Control: Main iris control. Used to set the auto-iris circuit to a video level desired by the user. After set-up, the circuit will adjust the iris to maintain this video level in changing lighting conditions. Turning the control towards High will open the iris, towards Low will close the iris.
  • Linescan (or Linear Array): A line scan, or linear array camera has a single row of pixels and captures an image by scanning an object that moves past the lens. Conceptually similar to a desktop scanner (compare "area scan").
  • Machine Vision: Machine vision is the application of cameras and computers to cause some automated action based on images received by the camera(s) in a manufacturing process. Generally, the term "machine vision" applies specifically to manufacturing applications and has an automated aspect related to the vision sensors. However, it is common to use machine vision equipment and algorithms outside of the manufacturing realm.
  • Manual Focus: Refers to a lens which requires a human user to set the focus as opposed to an auto-focus lens which is controlled via a computer or camera.
  • Manual Iris: Refers to a lens which requires a human user to set the iris as opposed to an auto-iris lens which is controlled via a computer or camera.
  • Megapixel: Refers to one million pixels, relating to the spatial resolution of a camera. Any camera with a resolution of roughly 1000 x 1000 or higher would be called a megapixel camera.
  • Microlens: A type of technology used in some interline transfer CCDs whereby each pixel is covered by a small lens which channels light directly into the sensitive portion of the CCD.
  • Morphology: The mathematics of shape analysis. An algebra whose variables are shapes and whose operations transform those shapes.
  • Motorized Lens: A lens whereby zoom, aperture, and focus (or one or more of these) are operated electronically. Usually, a computer operated controller is used to drive such lenses. The controller often has an RS-232 port through which a camera, or computer, controls the lens.
  • Network Adaptor: Another word for the Ethernet interface card or port found on many computers.
  • Neutral Density (ND) Filter: A type of filter that reduces the amount of light transmission without cutting off any particular frequency range of the light. Some lenses incorporate an ND filter as a built-in feature for the purpose of helping the diaphragm function toward the minimum aperture range. Optional filters of different diameters are available for attachment to the front of a lens (ND 2X, 4X, etc.).
  • OCR: Stands for Optical Character Recognition and refers to the use of machine vision cameras and computers to read and recognize human-readable alphanumeric characters.
  • OHCI: (Open Host Controller Interface) describes the standards created by software and hardware industry leaders--including Microsoft, Apple, Compaq, Intel, Sun Microsystems, National Semiconductor, and Texas Instruments--to assure that software (operating systems, drivers, applications) works properly with any compliant hardware.
  • Offset: This is the same as the brightness control on your TV. It is a positive DC offset of the image signal, used primarily to set the level of "black". Generally speaking, for the best signal, the black level should be set so that it is near zero (but not below zero) on the histogram. Increasing the brightness beyond this point just lightens the image without improving the image data.
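    A sketch of how gain (see the Gain entry) and offset act on the signal, shown digitally for illustration; in a real camera these adjustments are applied to the analog signal before digitization, and the values below are arbitrary:

      import numpy as np

      def apply_gain_and_offset(signal, gain, offset):
          """Gain multiplies the signal (slope); offset adds a constant (black level)."""
          out = signal * gain + offset
          return np.clip(out, 0, 255).astype(np.uint8)   # assume an 8-bit output range

      raw = np.linspace(0, 200, 5)                       # illustrative sensor values
      print(apply_gain_and_offset(raw, gain=1.2, offset=10))   # [ 10  70 130 190 250]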
  • Pixel: An abbreviated form of picture element. The individual elements that make up a digitized image array.
  • Preset Lens (Pre-position lens): Zoom lenses which incorporate variable resistors (potentiometers) to index zoom, focus, and/or aperture positions to the lens controller. After initial set-up, this allows the operator to view different preset areas quickly without having to readjust the zoom, focus and/or aperture each time.
  • Progressive Scan: Also known as non-interlaced scanning, progressive scan is a method for displaying, storing or transmitting moving images in which all the lines of each frame are drawn in sequence. This is in contrast to the interlacing used in traditional television systems, where the odd lines and then the even lines of each frame (each image now called a field) are drawn alternately.
  • Readout: Readout refers to how data is transferred from the CCD or CMOS sensor to the host computer. Readout rate is an important specification for high-resolution digital cameras. Higher readout rates mean that more images can be captured in a given length of time.
  • Region of Interest: Region of interest readout (ROI) refers to a camera function whereby only a portion of the available pixels are read out from the camera. This is also referred to as "partial scan" or "area of interest" (AOI).
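    A software-side illustration of region-of-interest readout using NumPy array slicing; in a real camera the ROI is applied on the sensor so that only those pixels are transferred, and the frame size and coordinates below are illustrative assumptions:

      import numpy as np

      full_frame = np.zeros((1200, 1600), dtype=np.uint8)   # illustrative full sensor frame

      # Read out only a 400 x 600 pixel window starting at row 300, column 500.
      top, left, height, width = 300, 500, 400, 600
      roi = full_frame[top:top + height, left:left + width]

      print(full_frame.shape, "->", roi.shape)   # (1200, 1600) -> (400, 600)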
  • Rolling Shutter: Some CMOS sensors operate in "rolling shutter" mode only so that the rows start, and stop, exposing at different times. This type of shutter is not suitable for moving subjects except when using flash lighting because this time difference causes the image to smear. (see Global Shutter, Snapshot Shutter).
  • Sensitivity: A measure of how sensitive the camera sensor is to light input. Unfortunately there is no standardized method of describing sensitivity for digital CCD or CMOS cameras, so apples-to-apples comparisons are often difficult on the basis of this specification.
  • Sensor Size: The size of a camera sensor’s active area, typically specified in the horizontal dimension. This parameter is important in determining the proper lens magnification required to obtain a desired field of view. The primary magnification (PMAG) of the lens is defined as the ratio between the sensor size and the FOV. Although sensor size and field of view are fundamental parameters, it is important to realize that PMAG is not.
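    A worked example of the relationship stated above, PMAG = sensor size / field of view, using illustrative numbers that are not from this glossary:

      # Hypothetical values: an 8.8 mm-wide sensor imaging a 44 mm-wide field of view.
      sensor_width_mm = 8.8
      fov_width_mm = 44.0

      pmag = sensor_width_mm / fov_width_mm   # primary magnification of the lens
      print(f"PMAG = {pmag:.2f}x")            # 0.20x for these numbers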
  • Smart Camera: Sometimes called "intelligent camera", or "smart sensor", the term smart camera refers to a camera with a built-in computer running image processing software in a single compact package capable of doing some simple machine vision tasks.
  • Snapshot shutter: Sometimes called a global shutter, snapshot shutter refers to an electronic shutter on CCD or CMOS sensors. A snapshot shutter is a feature of the image sensor that causes all of the pixels on the sensor to begin imaging simultaneously and to stop imaging simultaneously. This feature makes the camera especially suitable for capturing images of moving objects. (see Rolling Shutter, Global Shutter).
  • Spatial resolution: A measure of how well the CCD or camera can resolve small objects. Usually refers not only to the pixel resolution, but also to lens resolution, i.e. the resolution of the whole optical system. See also High Resolution.
  • Spot Filter: A supplement to the iris which allows the lens to have a smaller effective aperture than is physically possible with the iris only. These usually range from F/88 to F/1600. This allows very sensitive cameras to view bright scenes easily. The iris of a lens without a spot filter would not be able to close down enough in bright light without creating image degradation caused by diffraction.
  • System Integrator: A company or person who provides turnkey vision systems using cameras, computers, software, and possibly robotics and other mechanical hardware usually aimed at a specific customer application and installation.
  • Sync: Refers to an external signal generated by a camera that can be used to synchronize the camera with outside events such as flash illumination, or other cameras.
  • Trigger: An input to an industrial digital camera that initiates the image capture sequence. Otherwise, an electrical signal or set of signals used to synchronize a camera, or cameras, to an external event.
  • Varifocal Lens: Optical assembly containing several movable elements to permit changing the effective focal length (EFL). Unlike a zoom lens a varifocal lens requires refocusing with each change.
  • Video-type auto iris: There are two major types of auto-iris lenses: DC-type, and video-type. The video-type auto-iris requires a video signal to determine how far to open the iris on the lens.
  • Vignetting: Fall-off of illumination observed at the image corners. When gradual, it is likely to be inherent to the optical system. When abrupt (an eclipse), it might be caused by mechanical factors such as the housing. (The porthole effect seen when a 1/2" lens is used on a 1" camera results from the lens's image circle being smaller than the imager.)
  • Working Distance (WD): The distance from the front of the lens to the object under inspection.
  • Zoom Lens: A lens that delivers different focal lengths without creating a shift of focus regardless of the focal length setting.
  • Zoom Ratio: The ratio of the longest focal length (tele end) to the shortest focal length (wide end) of a zoom lens. A lens with a 10X zoom ratio will magnify the image at the tele end by 10 times relative to the wide end.