List of digital imaging terms

From LabAutopedia


A Labautopedia compendium of words and terms related to Digital Imaging. Click on linked terms for more detail. Refer to the Contributing section for author information.





  • Acquisition - The manner in which outside information is brought into an analysis system.
  • Aberration - The failure of an optical lens to produce an exact point-to-point correspondence between the object and its resulting image. Various types are chromatic, spherical, coma, astigmatism and distortion.
  • Absorption - The loss of light of certain wavelengths as it passes through a material and is converted to heat or other forms of energy. (-)
  • Accuracy - The extent to which a machine vision system can correctly measure or obtain a true value of a feature. The closeness of the average value of the measurements to the actual dimension.
  • Active Illumination - Lighting a scene with a light source coordinated with the acquisition of an image. Strobed flash tubes, pulsed lasers and scanned LIDAR beams are examples.
  • Algorithm - A set of well-defined rules or procedures for solving a problem or providing an output from a specific set of inputs.
  • Alpha Risk (α-risk) - The risk of rejecting good product.
  • Ambient light - Light which is present in the environment of the imaging front end of a vision system and generated from outside sources. This light, unless used for actual scene illumination, will be treated as background noise by the vision system.
  • Analog - A smooth, continuous voltage or current signal or function whose magnitude (value) is the information. From the word "analogous," meaning "similar to."
  • Analog-to-Digital Converter (A/D) - A device which converts an analog voltage or current signal to a discrete series of digitally encoded numbers (signal) for computer processing.
  • Architecture - For a vision system, the hardware organization designed for high-speed image analysis.
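As a hedged illustration of the mapping an A/D converter performs (the 0-5 V input range and 8-bit depth here are assumed for the example, not taken from the source):

```python
import numpy as np

# Hypothetical sketch of 8-bit A/D conversion: a continuous voltage in
# [0 V, 5 V] is sampled and mapped onto 256 discrete digital levels.
def adc_8bit(voltage, v_min=0.0, v_max=5.0):
    """Quantize an analog voltage to an 8-bit code (0-255)."""
    v = np.clip(voltage, v_min, v_max)
    return int(round((v - v_min) / (v_max - v_min) * 255))

codes = [adc_8bit(v) for v in (0.0, 2.5, 5.0)]
```

A real converter also samples at discrete instants in time; this sketch shows only the amplitude quantization.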
  • Application-Specific Machine Vision (ASMV) - A turnkey system that addresses a single specific application that one can find widely throughout industry or within an industry.
  • Area - Portion of the image to be analyzed. Area analysis measures the number of pixels which fall in a specified range of gray levels for the feature of interest.
  • Area Array Camera - A solid state imaging device with both rows and columns of pixels, forming an array which produces a 2D image.
  • Area scan: Area scan refers to a camera sensor consisting of a rectangular array of pixels. Area scan cameras are sometimes called matrix cameras. By way of contrast, line scan cameras are those with a sensor comprising a single line of pixels. Currently, all of Prosilica's cameras are area scan devices.
  • Array Processor - A specially designed vision engine peripheral which attaches to the host to speed up arithmetical calculations by using parallel processing techniques. The host manages image data access and analysis results.
  • Artifact - An artificially created structure (by accident or on purpose), form or shape, usually part of the background, used to assist in measurement or object location.
  • Artificial Intelligence - The capability of a computer to perform functions normally attributed to human intelligence, such as learning, adapting, recognizing, classifying, reasoning, self-correction and improvement. Rarely found connected to vision systems.
  • ASIC - An acronym for Application Specific Integrated Circuit. All vision system elements including firmware can be integrated onto one ASIC.
  • Aspect ratio - The ratio of the width to the height of a frame of a video image. The U.S. television standard is 4:3, or 1.33.
  • Astigmatism - A defect in a lens which causes blur or imperfect image results, since the rays from a given point fail to meet at the focal point. (-)
  • Asynchronous - A camera characteristic which allows the return to top-of-frame to occur on demand, rather than synchronously following the 60 Hz power line scanning frequency.
  • Attribute List - List of distinguishing features which are selected for IP calculation.
  • Autofocus - The ability of an imaging system to control the focus of the lens to obtain the sharpest image on the detector. Edge crispness is a typical control variable.
  • Autoiris (Auto Iris) : Some lenses, particularly those used in outdoor imaging, incorporate a galvanometer-type drive to automatically control the aperture, or iris, of the lens. There are basically two types of auto-iris: DC-type and video type. Prosilica's GC-Series GigE Vision cameras can operate the video-type auto-iris.


  • Background - The part of a scene behind the object to be imaged. (-)
  • Backlighting - Placement of a light source behind an object so that a silhouette of that object is formed. It is used where outline information of the object and its features is important rather than surface features.
  • Backpropagation - A training technique which adjusts the weights of the hidden and input layers of a neural net to force the correct decision for a given feature vector data input set.
  • Baffle - A type of shield that prohibits light from entering an optical system. (-)
  • Bandpass Filter - An absorbing filter which allows a known range of wavelengths to pass, blocking those of lower or higher frequency. (2)
  • Bar Code - An identification system that employs a series of machine-readable lines of varying widths of black and white. Usually read with a laser scanner. (-)
  • Bar Code (2D) - An arrangement of rectangles and spaces that contains far more information than a traditional bar code. (-)
  • Barrel Distortion - An optical imperfection which causes an image to bulge convexly on all sides similar to a barrel. (-)
  • Beamsplitter - An optical device which divides one beam into two or more separate beams. A simple coated piece of glass in the optical path might reflect 60% of the light down onto the object, while allowing the other 40% to pass. (2)
  • Beta Risk (β-risk) - The risk of accepting bad or defective product.
  • Binary - An image with pixel values of one or zero.
  • Binary image - A black and white image whose data is represented as a single bit either zeros or ones, in which objects appear as silhouettes. The result of backlighting or thresholding.
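The thresholding step that produces such a binary image can be sketched as follows (the array values are invented for illustration):

```python
import numpy as np

# Minimal sketch of thresholding: every pixel above a chosen gray level
# becomes 1 (object), everything else 0, yielding a binary silhouette.
gray = np.array([[ 10,  20, 200],
                 [ 15, 210, 220],
                 [ 12,  18,  25]], dtype=np.uint8)

threshold = 128
binary = (gray > threshold).astype(np.uint8)
```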
  • Binning - Binning is the technique of combining pixels together on a CCD to create fewer but larger pixels. True binning combines charge in adjacent pixels in a manner that increases the effective sensitivity of the camera. Machine vision cameras do not generally have true binning functions. However, Prosilica's CCD-based cameras have a wide range of binning functions.
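Software binning, summing each 2x2 neighbourhood into one super-pixel (as distinct from true on-chip charge binning), might be sketched like this:

```python
import numpy as np

# Sketch of 2x2 software binning: the values of each 2x2 neighbourhood
# are summed into one larger "super-pixel", trading spatial resolution
# for effective sensitivity. True binning does this on the CCD itself.
img = np.arange(16, dtype=np.float64).reshape(4, 4)

# Reshape so axes 1 and 3 index within each 2x2 block, then sum them.
binned = img.reshape(2, 2, 2, 2).sum(axis=(1, 3))
```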
  • Bit - An acronym for a Binary digit. It is the smallest unit of information which can be represented. A bit may be in one of two states, on or off, represented by a zero or a one.
  • Bit Map - A representation of graphics or characters by individual pixels arranged in rows and columns. Black and white require one bit, while high definition color up to 32. (-)
  • Blanking - The time during a raster scan retrace when the video signal is suppressed. (-)
  • Blob - A single, connected region in a binary or grayscale image. (2)
  • Blob Analysis - Identification of segmented objects in an image based on their geometric features (ie area, length, number of holes). (SRI) (2)
  • Borescope - A device for internal inspection of difficult access locations such as pipes, engines and rifle barrels. Its long narrow tube contains a telescope system with a number of relay lenses. Light is provided via the optical path or fiber bundles. A 45 degree mirror at the end allows inspection of tube walls.
  • Boundary - The line formed by the joining of two image regions, each having a different light intensity. The edge of a region or object.
  • Bounding Box - The four coordinates which define a box around the object parallel to the major and minor axis. (SRI feature)
  • Brewster's Angle - The angle at which incident light, by reflecting at a boundary between two mediums of different refractive indices (ie air/glass or air/water), becomes plane polarized. For air/glass it is about 56 degrees.
  • Brightness - The total amount of light or incident illumination on a scene or object per unit area. Also called intensity.
  • Bus - A set of parallel conductors which allow devices attached to it to communicate with the CPU. The bus consists of three main parts: control lines, address lines, and data lines. Control lines allow the CPU to control which operations the attached devices should perform, i.e. read or write. The address lines allow the CPU to reference certain (memory) locations within the device. The meaningful data which is to be sent or retrieved from a device is placed onto the data lines.
  • Byte - Eight bits of digital information. A byte has values from 0 to 255, and is the unit most commonly used to represent the gray scale value of one pixel.


  • C-mount - A threaded means of mounting a lens to a camera.
  • Calibration - 1. A measurement or comparison against a standard. 2. The determination of any equipment deviation from a standard source so as to ascertain the proper correction factors.
  • Camera Link - Camera Link is one of the most common digital camera hardware interfaces on the market today. It offers high data-transfer rates, but is seriously limited by cable length and does not have a standard communications protocol. Camera Link is largely being displaced by more modern high-performance digital interfaces such as Gigabit Ethernet (GigE Vision). Compare GigE Vision and Firewire.
  • CCD - Charge Coupled Device. A photo-sensitive image sensor implemented with large scale integration technology.
  • CCD - (Frame Transfer) The entire image is transferred from the sensing area to a storage area on chip. Data (charge) is read out from the storage area in a full frame mode. This workhorse of the industry is also capable of non-RS-170 operation.
  • CCD - (Interline Transfer) Data (charge) is transferred simultaneously out by odd and even lines or fields directly from the image sensors to their corresponding sensor registers. The output from the camera is always one field (frame) behind the image being captured.
  • Centroid - Points that are, respectively, the center of a given area or midpoint of a given line segment.
  • Character - A single letter, digit or punctuation symbol requiring one byte storage. (-)
  • Character Recognition (OCR) - Imaging and recognizing individual text characters in a scene. Also called Optical Character Recognition. (-)
  • Character Verification (OCV) - Imaging and verifying the correctness, quality and legibility of known text characters in an image. Also Optical Character Verification. (-)
  • Child - Computer programming term. In data structures, any node in a tree except the root; a direct descendant of a given node.
  • Chroma - The quality of a color including both the hue and saturation. Not present in gray. (-)
  • CID - Charge Injection Device - A photo-sensitive image sensor implemented with large scale integration technology. Based on charge injection technology, a CID can be randomly addressed, non-destructively read, can be subscanned in a small region and is less susceptible to charge overflow from bright pixels to neighbors. The pixel structure is contiguous with maximum surface to capture incident light which is useful for sub-pixel measurement.
  • CIE - An acronym for a chromaticity coordinate system developed by the Commission Internationale de l'Eclairage, the international commission on illumination. In the CIE system, a plot of ratios (x, y and z) of the three standard primary colors (tristimulus values) to their sum. The most common diagram is the 2 dimensional CIE (x,y).
  • Classification - Assignment of image objects to one of two or more possible groups. Decisions are made by evaluating features either 1) structurally based on relationships or 2) statistically. For example, 1) a penny is round, a certain diameter (+/- a tolerance) and has a histogram of a mean value; or 2) statistically, the object is measured a number of times, then the average and standard deviation are recorded. After training the features are weighted based on significance in object identification. For multiple features, absolute values are used.
  • Closing - A dilation followed by an erosion. A morphological operator useful to close holes and boundaries.
  • CMOS: Complementary Metal Oxide Semiconductor. CMOS refers to an image sensor technology that is manufactured using the same processes as computer chips. This technology works like a photodiode where the light 'gates' a current that is representative of the amount of light impinging on each pixel. This differs significantly from CCD technology. There are a number of advantages in using CMOS sensors over CCD including cost, speed, anti-blooming, and programmable response characteristics (ie multiple slope response). CCDs also have certain advantages.
  • Coaxial Illumination - Front lighting with the illumination path running along the imaging optical axis and usually introduced with a 45 degree angle beam splitter.
  • Coherent Fiber Optics - A bundle of optical fibers with the input and output spatial x-y relationship maintained, resulting in near spatially correct image transmission.
  • Collimate - To produce light with parallel rays. (-)
  • Collimated Lighting - Radiation from a given point with every light ray considered parallel. In actuality, even light from a very distant point source (ie a star) diverges somewhat. Note that all collimators have some aberrations.
  • Color - A visual object attribute which may be described by a "coordinate system" such as hue, saturation and intensity (HSI), CIE or LAB. Wavelengths in the visible part of the electromagnetic spectrum to which retinal cones respond.
  • Color Space - A two or three dimensional space used to represent an absolute color coordinate. RGB, HSI, LAB and CIE are all representations of color spaces.
  • Color Temperature - A colorimetric concept related to the apparent visual color of a source, but not its actual temperature.
  • Colorimetry - Techniques used to measure color of an object or region and to define the results in a comparison or coordinate system.
  • Compact Vision System (CVS): an industrial computer designed for machine vision applications that is manufactured by National Instruments. NI's official name for their compact vision system is variously "NI CVS-1454", "NI 1455" , "NI 145x". The NI CVS is configured to operate 1394 cameras using Labview IMAQ1394 driver. The National Instruments CVS also has special machine vision features like advanced trigger I/O that match the back panel of Prosilica's CV-series cameras.
  • Composite Video - A television signal which is produced by combining both a video or picture signal with horizontal and vertical synch and blanking signals. (-)
  • Condenser Lens - Used to collect and redirect light for the purpose of illumination. Often used to collect light from a small source and project even light onto an object.
  • Connectivity Analysis - A Stanford Research Institute routine used to determine which pixels are interconnected and part of the same object or region. The results are used for blob analysis.
  • Contrast - The difference of light intensity between two adjacent regions in the image of an object. Often expressed as the difference between the lightest and darkest portion of an image. Contrast between a flaw or feature and its background is the goal of illumination. (2)
  • Contrast Enhancement - Stretching of the gray level values between dark and light portions of an image to improve both visibility and feature detection.
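The gray-level stretching described in this entry can be sketched as a linear remapping (the pixel values are invented for illustration):

```python
import numpy as np

# Sketch of linear contrast stretching: the occupied gray-level range
# [lo, hi] is remapped onto the full 0-255 range, so dark and light
# portions of the image are pushed further apart.
img = np.array([[100, 110],
                [120, 150]], dtype=np.float64)

lo, hi = img.min(), img.max()
stretched = (img - lo) * 255.0 / (hi - lo)
```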
  • Convolution - Superimposing a m x n operator (usually a 3x3 or 5x5 mask) over an area of the image, multiplying the points together, summing the results to replace the original pixel with the new value. This operation is often performed on the entire image to enhance edges, features, remove noise and other filtering operations.
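The sliding-mask operation can be sketched as below; the 3x3 box-averaging (smoothing) kernel is an assumed example, not one named by the source:

```python
import numpy as np

# Minimal sketch of a 3x3 convolution: the mask is slid over the image,
# the overlapping products are summed, and the sum replaces the centre
# pixel. No border handling here, so the output shrinks by 2 in each axis.
def convolve3x3(img, kernel):
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y+3, x:x+3] * kernel)
    return out

img = np.ones((4, 4))
box = np.full((3, 3), 1.0 / 9.0)  # averaging/smoothing mask
smoothed = convolve3x3(img, box)
```

Edge-enhancement or noise-removal filters use the same mechanism with different kernel values.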
  • Correlation - A mathematical measure of the similarity between images or areas within an image. Pattern matching or correlation of an X by Y array size template to the same size image produces a scalar number, the percentage of match. Typically, the template is walked through a larger array to find the highest match.
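Walking a template through a larger array and scoring each position by normalized correlation might look like this (the image and template values are invented for illustration):

```python
import numpy as np

# Sketch of template matching by normalized correlation: the template is
# walked across the image and the position of the best score is kept.
def best_match(image, template):
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    best, pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            win = image[y:y+th, x:x+tw]
            w = win - win.mean()
            denom = np.sqrt((t * t).sum() * (w * w).sum())
            score = (t * w).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, pos = score, (y, x)
    return pos, best

img = np.zeros((5, 5))
img[2:4, 1:3] = np.array([[1.0, 2.0],
                          [3.0, 4.0]])   # patch hidden in the image
template = np.array([[1.0, 2.0],
                     [3.0, 4.0]])
location, score = best_match(img, template)
```

At the true position the window equals the template, so the score is 1.0 (a perfect match).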
  • CPU - An acronym for Central Processing Unit. A VLSI chip such as the 80486 or Pentium.
  • Cross section - A 3D profile of a slice of an object.


  • Dark Current: Dark current is the accumulation of electrons within a CCD or CMOS image sensor that are generated thermally rather than by light. This is a form of noise that is most problematic in low light applications requiring long exposure times.
  • Darkfield Illumination - Lighting of objects, surfaces or particles at very shallow or low angles, so that light does not directly enter the optics. Objects are bright with a dark background. This grazing illumination causes specular reflections from abrupt surface irregularities.
  • Data Reduction - The process of lowering the data content of a pixel or image such as thresholding or run length encoding. (-)
  • DCAM: DCAM or IIDC is a software interface standard for communicating with cameras over Firewire. It is a standardized set of registers etc. If a camera is DCAM compliant then its control registers and data structures comply with the DCAM spec. Such a camera can be truly plug-and-play in a way that other cameras are not. All of Prosilica's firewire cameras are DCAM-compliant (IIDC 1.30 and IIDC 1.31).
  • Decibel or dB: A logarithmic unit of measure. When used of digital cameras this unit is usually used for describing signal-to-noise or dynamic range.
  • Decision Tree - A structural classification technique based on relationships of feature measurements. Useful for differentiating a number of objects.
  • Dedicated System - Refers to a system which is configured for a specific application. Able to function when plugged in with no further development. Also called turnkey.
  • Depth-of-field - Depth of field refers to the in-focus region of an imaging system. When using a lens, especially in close proximity, objects at and near a certain distance will be in focus whereas other objects in the field of view that are closer or farther away will appear fuzzy, or out of focus. The depth of the region that appears in focus is called the depth of field. Generally speaking, the depth of field will be large if the lens aperture is small (large f-number), and the depth of field will be small with a wide aperture (small f-number).
  • Depth Perception - The perception of solidity of a visual object and its location in the spatial field, through the fusion in the brain of the two slightly dissimilar images from the two eyes.
  • Dichroic Filter - A filter used to transmit light based on its wavelength, rather than on its plane of vibration. Transmits one color, while reflecting a second when illuminated with white light. Often used in heads-up displays. (2)
  • Diffraction Pattern Sampling - Inspection by comparing portions of the interference pattern formed on a screen or special sensor from light waves diffracted by object edges. (-)
  • Diffuse Reflection - Light which bounces off an object surface in many different directions. Light radiated from a matte surface is highly diffused.
  • Diffused lighting - Scattered soft lighting from a wide variety of angles used to eliminate shadows and specular glints from profiled, highly reflective surfaces.
  • Digital Camera - The newest generation of video cameras transforms visual information into pixels, then translates each pixel's level of light into a number in the camera.
  • Digital-to-Analog Converter - A VLSI circuit used to convert digital computer processed images to analog for display on a monitor. DAC is the acronym.
  • Digital Image - A video image converted into pixels. The numeric value of each pixel's value can be stored in computer memory for subsequent processing and analysis.
  • Digital Signal Processor (DSP) - A VLSI chip designed for ultra high speed arithmetic processing. Often embedded in a vision engine. TI's TMS320C40 is the industry standard.
  • Digitization - Sampling and conversion of an incoming video or other analog signal into a digital value for subsequent storage and processing.
  • Dilation - A morphological operation which moves a probe or structuring element of a particular shape over the image, pixel by pixel. When an object boundary is contacted by the probe, a pixel is preserved in the output image. The effect is to "grow" the objects.
  • Dispersion - Separation of a beam of light into its wavelength components, each of which travel at slightly different speeds. Also called chromatic dispersion.
  • Dust - Finely divided, dry, solid matter of silt- and clay-sized earthy particles, less than 0.0625 millimeter in diameter.
  • Dynamic Range - The range in signal amplitude over which a communication receiver or audio amplifier is capable of operating while producing an acceptable output; usually expressed in decibels.


  • Edge - A change in pixel values exceeding some threshold amount. Edges represent borders between regions on an object or in a scene.
  • Edge Detection - The ability to determine the edge of an object.
  • Edge Enhancement - Image processing method to strengthen high-spatial frequencies in the image.
  • Edge Operator - Templates for finding edges in images.
  • Electrical Noise - 1. An unwanted, often random disturbance to a signal that tends to obscure the signal's information content; caused primarily by the random thermal motions of particles in the system. 2. Any signal disturbance that interferes with the operation of a system. 3. Any random disturbance that obscures the clarity of a signal.
  • Electro-magnetic Spectrum - The total range of wavelengths, extending from the longest (radio) to the shortest (gamma rays) which can be physically generated. This entire spectrum is potentially useful for imaging, well beyond just the visible spectrum.
  • Encoder (Shaft or position) - Provides rotation information for control of image acquisition, especially for moving web processes. Outputs either pulses for counting or BCD parallel with absolute position information.
  • Endoscope - A medical instrument used to view inside the human body. It may use borescope optics or coherent fibers to relay the image to the eye or camera. Illumination is provided by a non-coherent bundle of optical fibers.
  • Erosion - The converse of the morphology dilation operator. A morphological operation which moves a probe or structuring element of a particular shape over the image, pixel by pixel. When the probe fits inside an object boundary, a pixel is preserved in the output image. The effect is to "shrink or erode" objects as they appear in the output image. Any shape smaller than the probe (ie noise) disappears.
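The erosion and dilation operators described in these entries can be sketched with a 3x3 square probe; a closing, from the earlier entry, is a dilation followed by an erosion:

```python
import numpy as np

# Sketch of binary morphology with a 3x3 square structuring element.
def dilate(img):
    h, w = img.shape
    padded = np.pad(img, 1)          # zero border so the probe fits everywhere
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y+3, x:x+3].max()  # "grow" objects
    return out

def erode(img):
    h, w = img.shape
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y+3, x:x+3].min()  # "shrink" objects
    return out

blob = np.zeros((5, 5), dtype=np.uint8)
blob[1:4, 1:4] = 1            # a 3x3 object
eroded = erode(blob)          # shrinks to the single centre pixel
closed = erode(dilate(blob))  # closing restores the original shape here
```

Shapes smaller than the probe vanish under erosion, which is why it is useful for removing small noise.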
  • Extension Tube - A cylindrical threaded tube used to change the magnification, effective focal length and field of view of a lens when inserted between the lens and imaging sensor.
  • Extended Dynamic Range: Prosilica's CMOS sensor devices all employ a feature called multislope or extended dynamic range. Multislope provides a non-linear mode that allows the camera to image very bright and very dark detail in the same image frame. This is not possible with CCD-type sensors.
  • Exposure Time: This is the amount of time that the sensor is exposed to the light. This is the control that is used first (before gain and offset) to adjust the camera. In LabVIEW, the shutter controls are a little confusing: there are 'manual relative', 'manual absolute', 'one-push' and 'auto' controls. Normally, you should use 'manual absolute', where each unit corresponds to 1 µs of exposure time. When using the 'relative' controls, the units are different: 20 µs per unit. This control is called "shutter" in LabVIEW and some DCAM controls.


  • F-number or f-stop - The ratio of the focal length to the lens aperture diameter. The smaller the f-number, the larger the lens diameter, the brighter the image and the narrower the depth-of-field. (-)
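The ratio itself is simple arithmetic; a sketch with assumed example values (the 50 mm lens and aperture figures are illustrative, not from the source):

```python
# Sketch of the f-number relationship: N = focal_length / aperture_diameter.
# A 50 mm lens with a 25 mm aperture is an f/2 lens; stopping the aperture
# down to 12.5 mm gives f/4, which admits one quarter of the light
# (light gathering scales with 1/N^2).
def f_number(focal_length_mm, aperture_mm):
    return focal_length_mm / aperture_mm

wide_open = f_number(50.0, 25.0)     # f/2
stopped_down = f_number(50.0, 12.5)  # f/4
```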
  • Fast Lens: A lens that admits a lot of light; a lens with a low f-number. A typical fast lens will have an f-number of less than 1.2.
  • Fast Fourier Transform - Produces a new image which represents the frequency domain content of the spatial or time domain image information. Data is represented as a series of sinusoidal waves.
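A sketch of the idea using a 1D sampled sinusoid (NumPy's FFT; the signal is invented for illustration):

```python
import numpy as np

# Sketch of a 1D FFT: a pure sinusoid in the spatial/time domain shows
# up as a single dominant peak in the magnitude spectrum.
n = 64
x = np.arange(n)
signal = np.sin(2 * np.pi * 4 * x / n)  # 4 cycles across the sample

spectrum = np.abs(np.fft.fft(signal))
peak_bin = int(np.argmax(spectrum[:n // 2]))  # dominant frequency bin
```

For images the same idea extends to 2D (`np.fft.fft2`), where spatial frequencies in each axis appear as peaks in the 2D spectrum.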
  • Features - Simple image data attributes such as pixel amplitudes, edge point locations and textural descriptors, center of mass, number of holes in an object with distinctive characteristics defined by boundaries or regions.
  • Feature Extraction - Determining image features by applying feature detectors to distinguish or segment them from the background.
  • Feature Vectors - A set of features of an object (such as area, number of holes, etc) that can be used for its identification or inspection.
  • Fiber Optics - Light source or optical image delivery via a long, flexible fiber(s) of transparent material, usually bundled together. Light is transmitted via internal reflection inside each fiber. Coherent fiber optics are spatially organized so images can be relayed.
  • Fiberscope - An optical instrument similar to a borescope, but uses a flexible, coherent fiber or bundle (usually silicon), an objective lens and an eyepiece or camera.
  • Fiducial - A line, mark or shape used as a standard of reference for measurement or location.
  • Field - One of the two parts of a television frame in an interlaced scanning system. The odd plus the even field comprise one video frame. A field is scanned every 1/60th of a second.
  • Field-of-view - The 2D area which can be seen through the optical imaging system. (FOV)
  • Filtering - The use of an optical filter for picture or color enhancement in front of the camera lens or light source. Also analog or digital image processing (IP) operations to enhance or modify an image. May be linear & non-linear.
  • Filter - A device or process that selectively transmits frequencies. In optics, the material either reflects or absorbs certain wavelengths of light, while passing others. (2)
  • Filter Driver: With respect to Gigabit Ethernet cameras, a filter driver, or "filter", is used to reduce the CPU burden when handling large volumes of data coming from Prosilica's GigE Vision cameras. The filter strips out, or "filters", the image data from the Ethernet packets at the lowest level so that the CPU does not have to do this. Using a filter driver can significantly reduce the CPU load associated with image acquisition.
  • Firewire: A standard computer interface and its various versions otherwise called IEEE 1394, IEEE-1394a, or IEEE-1394b. It is an especially fast serial interface that is low cost with plug and play simplicity of integration. It is currently the only interface for digital industrial cameras that is standardized both in hardware and software communications protocols.
  • Firmware - Software hard-coded in non-volatile memory (ROM), usually to increase speed.
  • Fixture - A device to hold and locate a workpiece during processing or inspection operations.
  • Fluorescence - The emission of light or other electromagnetic radiation at longer wavelengths by matter as a result of absorption of a shorter wavelength. The emission lasts only as long as the stimulating irradiation is present.
  • Focal Length - The distance from a lens' principal point to its corresponding focal point.
  • Focal Plane - Usually found at the image sensor, it is a plane perpendicular to the lens axis at the point of focus (-).
  • Focus - The point at which rays of light converge for any given point on the object in the image. Also called the focal point.
  • Focus Following - A ranging and tracking technique that uses image processing to measure object range based on best focus.
  • Fourier Domain Inspection - Evaluation of the Fourier transform (frequency information) of a 2D spatial image for features of interest. (-)
  • Frame - The total area scanned in an image sensor while the video signal is not blanked. In interlaced scanning, two fields comprise one frame. Frame rate is typically 30 Hz.
  • Frame Buffer - Image memory in a frame grabber.
  • Frame Grabber - A device that interfaces with a camera and, on command, samples the video, converts the sample to a digital value and stores that in a computer's memory.
  • Frame Rate: Frame rate is the measure of camera speed. The unit of this measurement is "frames per second" (fps) and is the number of images a camera can capture in a second of time. Using region of interest readout, some of Prosilica's cameras are capable of thousands of frames per second.
  • Front End System - The object, illumination, optics and imager blocks of a vision system. Includes all components useful to acquire a good image for subsequent processing.
  • Front Lighting - The use of illumination on the camera side of an object so that surface features can be observed.


  • General-Purpose Machine Vision (GPMV) - A vision system that can be configured or adapted to many different applications.
  • Gaging - In machine vision, non-contact dimensional examination of an object.
  • Gain: This is the same as the contrast control on your TV. It is a multiplication of the signal. In math terms, it controls the "slope" of the exposure/time curve. The camera should normally be operated at the lowest gain possible, because gain not only multiplies the signal, but also multiplies the noise. Gain comes in very handy when you require a short exposure (say, because the object is moving and you do not want any blur), but do not have adequate lighting. In this situation the gain can be increased so that the image signal is strong.
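The point that gain multiplies noise along with signal can be sketched numerically (synthetic data; the signal level and noise width are assumed for the example):

```python
import numpy as np

# Sketch of why gain does not improve signal-to-noise: multiplying the
# signal also multiplies the noise already present, so the ratio is fixed.
rng = np.random.default_rng(0)
signal = 100.0
noise = rng.normal(0.0, 5.0, size=10_000)

raw = signal + noise
amplified = 4.0 * raw  # gain of 4: brighter image, same SNR

snr_raw = raw.mean() / raw.std()
snr_amplified = amplified.mean() / amplified.std()
```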
  • Gamma (γ) - The numeric value for the degree of contrast in a television picture. The exponent in the power law relating output to input signal magnitude. Describes the non-linearity of a camera tube.
  • GenICam: GenICam is a camera interface standard from the European Machine Vision Association (EMVA) that offers a software interface that is independent from camera hardware.
  • Gigabit Ethernet: An industry standard interface, variously called 'GigE (gig-ee)', 'GbE', '1000-speed', etc., used for high-speed computer networks capable of data transfer rates in excess of 1000 megabits per second. This generalized networking interface has been adapted for use as a standard interface for high-performance machine vision cameras, called GigE Vision.
  • GigE Vision: 'GigE Vision' is an interface standard from the Automated Imaging Association (AIA) for high-performance machine vision cameras. GigE (Gigabit Ethernet), on the other hand, is simply the network structure on which GigE Vision is built. The GigE Vision standard includes a hardware interface standard (Gigabit Ethernet), communications protocols, and standardized camera control registers. The camera control registers are based on a command structure called GenICam. GenICam seeks to establish a common software interface so that third-party software can communicate with cameras from various manufacturers without customization. GenICam is incorporated as part of the GigE Vision standard. GigE Vision is analogous to Firewire's DCAM (IIDC) interface standard and has great value for reducing camera system integration costs and for improving ease of use.
  • Glints - Shiny, specular reflections from smooth objects or surfaces.
  • Global Shutter: Generally speaking, when someone says "global shutter", they really mean "snapshot shutter" (see "Snapshot Shutter" below). In actuality, a global shutter starts all of a camera's pixels imaging at the same time, but during readout some pixels continue to image while others are read out (see Rolling Shutter, Snapshot Shutter). Prosilica's cameras have snapshot shutters, meaning that all the pixels start and stop imaging together. For machine vision applications, a snapshot shutter is generally a 'must have'.
  • Global Method - An image processing operation uniformly applied to the whole image. (-)
  • Gradient - The rate of change of pixel intensity (first derivative).
  • Gradient Space - A matrix containing values for the rate of change of pixel values or gray level intensity of the image.
  • Gradient Vector - The orientation and magnitude of the rate of change in intensity at a point or pixel location in the image.
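The gradient vector can be sketched with central differences; `gradient_vector` is a hypothetical helper operating on a list-of-rows grayscale image:

```python
import math

def gradient_vector(image, x, y):
    """Return (magnitude, orientation in degrees) of the intensity
    change at (x, y), using central differences on a list-of-rows image."""
    gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
    gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
    return math.hypot(gx, gy), math.degrees(math.atan2(gy, gx))

# A vertical step edge: intensity rises left to right, so the gradient
# points along +x with magnitude 50 gray levels per pixel.
img = [[0, 0, 100, 100],
       [0, 0, 100, 100],
       [0, 0, 100, 100]]
mag, angle = gradient_vector(img, 1, 1)
```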
  • Grating - An optical element with an even arrangement of rods or stripes with spaces between them for light to pass. Its ability to separate wavelengths is expressed in line pairs per millimeter, for example. A moire grating of parallel dark and light stripes is an example. Also used for structured light projection. (2)
  • Gray level - A quantized measurement of image irradiance (brightness), or other pixel property typically in the range between pure white and black.
  • Grayscale Image - An image consisting of an array of pixels which can have more than two values. Typically, up to 256 levels (8 bits) are used for each pixel.
  • GUI - An acronym for Graphical User Interface. Pronounced "gooie." A Windows based user interface screen or series of screens allowing the user to point-and-click to select icons rather than typing commands.
  • Guidance - Deriving properties in an image to describe a position at various points in time.


  • Halogen lamp - An incandescent lamp with a gas similar to iodine inside which is constantly evaporated then redeposited on the filament.
  • Hardware - Electronic integrated circuits, boards and systems used by the system.
  • HDTV - High Definition TV, a proposed broadcast standard that doubles the current 525 lines per picture to 1,050 lines and increases the screen aspect ratio from 12:9 to 16:9. The typical TV image of about 336,000 pixels would increase to about 2 million. (-)
  • Height/Range - Object profile is usually measured by changes in range or distances from the sensor. 3D techniques are usually used.
  • High Pass Filter - Passes detailed high frequency image information, while attenuating low frequency, slow changing data.
  • High Speed Imaging - Image capture near, at or above 1800 parts per minute. (30 parts per second) (-)
  • Histogram - A graphical representation of the frequency of occurrence of each intensity or range of intensities (gray levels) of pixels in an image. The height represents the number of observations occurring in each interval. (2)
  • Histogram Analysis - Determination of the presence or absence of a feature or flaw based on the histogram values in a certain gray scale region.
  • Histogram Equalization - Modification of the histogram to evenly distribute a narrow range of image gray scale values across the entire available range.
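A minimal sketch of both operations (the function names are illustrative, not from any library):

```python
def histogram(pixels, levels=256):
    """Count how many pixels fall at each gray level."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    return counts

def equalize(pixels, levels=256):
    """Histogram equalization: map each gray level through the
    normalized cumulative histogram to spread values over the range."""
    cdf, total = [], 0
    for c in histogram(pixels, levels):
        total += c
        cdf.append(total)
    n = len(pixels)
    return [round((levels - 1) * cdf[p] / n) for p in pixels]
```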
  • Holography - Optically recording of the interference pattern from two coherent waves which forms a 3 dimensional record or hologram. (-)
  • Hough Transform - A global parallel method for locating both curved and straight lines. All points on the curve map into a single location in the transform space.
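A rough sketch of the voting step for straight lines, assuming a simple (theta, rho) quantization:

```python
import math

def hough_votes(points, n_theta=180):
    """Each edge point votes for every (theta, rho) line through it;
    collinear points pile their votes into a single accumulator cell."""
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(t, rho)] = votes.get((t, rho), 0) + 1
    return votes

# Four points on the horizontal line y = 5 all vote together
# in the cell at theta = 90 degrees, rho = 5.
votes = hough_votes([(0, 5), (1, 5), (2, 5), (3, 5)])
```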
  • HSI Conversion - A mathematical conversion from the color RGB space to hue, saturation and intensity values.
  • HSI - An acronym for the Hue-Saturation-Intensity color representation. A mathematical conversion from RGB. Often used for machine vision analysis.
  • Hue - One of the three properties of HSI color perception. A color attribute used to express the amount of red, green, blue or yellow a certain color possesses. White, gray and black do not exhibit any hue.
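The textbook RGB-to-HSI formulas can be sketched as follows (component values assumed in 0..1; the function name is illustrative):

```python
import math

def rgb_to_hsi(r, g, b):
    """Classic RGB -> hue/saturation/intensity conversion.
    Hue is in degrees; saturation and intensity are in 0..1."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    h = 0.0 if den == 0 else math.degrees(math.acos(num / den))
    if b > g:
        h = 360.0 - h
    return h, s, i
```

Note that for achromatic colors (white, gray, black) the denominator is zero and hue is undefined, matching the definition above.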
  • Hueckel Operator - An edge finding operator which fits an intensity surface to the neighborhood of each pixel and selects surface gradients above a specified threshold.
  • Hybrid Electro-Optic Sensor - A silicon sensor fabricated in a configuration to match spatial information generated by the imaging system, such as a PSD (position sensitive detector), concentric rings, pie shapes and others.
  • Hz - An abbreviation for Hertz or cycles per second. Often used with metric prefixes such as kHz or MHz for kilohertz and megahertz respectively. (-)


  • Identification - The process of specifically identifying an object from a large class of objects through reading symbols.
  • IEEE 1394 - The IEEE 1394 standard is a digital interface that integrates the worlds of consumer electronics and personal computers by defining a backplane physical layer and a point-to-point cable-connected virtual bus implementation.
  • IIDC: IIDC (DCAM) is a software interface standard for communicating with cameras over Firewire, defining a standardized set of control registers and data structures. If a camera is IIDC compliant, its control registers and data structures comply with the IIDC specification, and such a camera can be truly plug-and-play in a way that other cameras are not. All of Prosilica's Firewire cameras are IIDC 1.30 and IIDC 1.31 compliant.
  • Illumination - Normally a wavelength or range of wavelengths of light or visible light used to enhance a scene so the detector, normally a camera, can produce an image.
  • Image - Projection of an object or scene onto a plane (ie screen or image sensor). (-)
  • Image Analysis - Evaluation of an image based on its features for decision making. (-)
  • Image Capture - The process of acquiring an image of a part or scene, from sensor irradiation to acquisition of a digital image.
  • Image Distortion - A situation in which the image is not exactly true to scale with the object scale.
  • Image Enhancement - Image processing operations which improve the visibility of image detail and features. Usually performed for humans.
  • Image Formation - Generation of an image of an object or scene on the imaging sensor. It includes effects from the optics, filters, illumination and sensor itself.
  • Image Intensifier - Usually an electron tube equipped with a light sensitive electron emitter at one end and a phosphor screen at the other. Used to provide electron gain for imaging in low light conditions such as night vision.
  • Image Memory - An internal, high speed, large capacity storage area on a frame grabber card or in a computer dedicated to image retention.
  • Image Plane - The plane surface of the imaging sensor, perpendicular to the viewing direction, at which the optics are focused.
  • Image Processing - Digital manipulation of an image to aid feature visibility, make measurements or alter image contents.
  • Incandescent lamp - An electrical lamp in which the filament radiates visible light when heated in a vacuum by an electrical current.
  • Incident Light - Light which falls directly onto an object. (-)
  • Index of Refraction - A property of a medium that measures the degree that light bends when passing between it and a vacuum.
  • Infrared - The region of the electromagnetic spectrum adjacent to the visible spectrum, just beyond red with longer wavelengths.
  • Infrared Imaging - Image formation using wavelengths just above the visible spectrum. (-)
  • Inspection - Non-destructive examination of a workpiece to verify conformance to some criteria.
  • Integration: generally refers to the task of assembling the components of a machine vision system (camera, lens, lighting, software, etc). Usually used as short form for "System Integration". When used in reference to what the camera does, it is another word for exposure time (see Integration Time).
  • Integration Time: Also referred to as exposure time. This is the length of time that the image sensor is exposed to light while capturing an image. This is equivalent to the exposure time of film in a photographic camera. The longer the exposure time, the more light will be acquired. Low light conditions require longer exposure times.
  • Interline Transfer: A CCD architecture in which an opaque transfer channel lies between pixel columns. Such a CCD does not require a mechanical shutter, but spatial resolution, dynamic range, and sensitivity are reduced by the masked columns between the light-sensitive columns. Certain sensors, such as the Sony ICX285 used in Prosilica's EC1380 camera, have microlenses that mitigate the effect of the masked columns.
  • IR Lens: A lens specially designed so that chromatic aberrations in the infra-red wavelengths are corrected. An IR lens should be used in cases where both visible and IR illumination are received by the camera; otherwise the resulting image would be blurred.
  • ISO 9000, 9002: Internationally recognized standards that certify a company's manufacturing record keeping. ISO accreditation does not imply any product quality endorsement; it is rather an acknowledgement of the manufacturing and/or engineering record keeping practices of the accredited company.
  • Intensity - The relative brightness of a portion of the image or illumination source.
  • Interlaced Scanning - A scanning process in which all odd lines then all even lines are alternately scanned. Adjacent lines belong to different fields.
  • I/O - An acronym for Input/Output data either entering or leaving a system. (-)


  • Jumbo Frames: With respect to Gigabit Ethernet, jumbo frames refers to the data packet size used for each Ethernet frame. Since each data frame must be handled by the operating system, it makes sense to use large data frames to minimize overhead when receiving data into the host computer; such large data blocks are called jumbo frames. To achieve maximum performance from Gigabit Ethernet cameras, jumbo frames should be enabled and sized to at least 9000 bytes (although the cameras will also work with smaller frame sizes).




  • LAB - CIELAB color gets its name from a color space that uses three values to describe the precise three-dimensional location of a color inside a visible color space. CIE stands for Commission Internationale de l'Eclairage, an international body of color scientists whose standards make it possible to communicate color information accurately. L describes relative lightness; A represents relative redness-greenness; and B represents relative yellowness-blueness.
  • Labview: Labview is a graphical programming language/software application manufactured by National Instruments. It is widely used for testing and control applications and is increasingly being used for machine vision applications. National Instruments also has a number of machine vision-specific software products including Vision Builder for Automated Inspection and the Vision Assistant. Prosilica's GigE and firewire cameras are supported natively by National Instruments vision software.
  • Laplacian Operator - The sum of the second derivatives of the image intensity in both the x and y directions is called the Laplacian. The Laplacian operator is used to find edge elements by locating points where the Laplacian is zero.
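A discrete approximation can be sketched with 4-neighbor second differences (hypothetical helper on a list-of-rows image):

```python
def laplacian(image, x, y):
    """4-neighbor discrete Laplacian at (x, y): the sum of second
    differences in x and y. Zero in flat regions; changes sign
    from one side of an edge to the other."""
    return (image[y][x + 1] + image[y][x - 1] +
            image[y + 1][x] + image[y - 1][x] - 4 * image[y][x])
```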
  • Laser Illumination - Lighting an object with a laser source for frequency selection, pulse width (strobe) control or for accurate positioning.
  • Laser Radar - See LIDAR.
  • LED - Light emitting diode. Often used as a strobe for medium speed objects.
  • Lens - A transparent piece of material, usually glass or plastic, with curved surfaces which either converge or diverge light rays. Often used in groups for light control and focusing.
  • Lens Types - The lenses most commonly used in machine vision are: 35mm, CCTV, Copying, Cylindrical, Enlarger, Micrographic, Video, and Wide Angle.
  • LIDAR - An acronym for Light Detection And Ranging. A system that uses light instead of microwaves for range and tracking measurements. LIDAR uses a laser light source to measure velocity, altitude, height, range or profile.
  • Light Tent - An arrangement of diffusing surfaces above the object to create a horizon to horizon diffuse illumination.
  • Lightpen - A pen on a cable used to select items from a display screen.
  • Line(s) of Light - One or more light stripes projected at a known angle onto the object. Deformation of this type of structured light results in 3D information in a 2D image.
  • Line Scan Camera - A solid state video camera consisting of a single row of pixels. Also called a linear array camera.
  • Linear Array - see Line Scan Camera.
  • Link Aggregation: Certain Gigabit Ethernet switches have a feature called Link Aggregation whereby the switch dynamically manages the data bandwidth between multiple NIC cards. This is very useful in some cases. Consider three fast GigE Vision cameras running through a switch to two NIC cards. Without link aggregation, you would need to allocate two of the three cameras to one NIC and the remaining camera to the other NIC, which does not take advantage of the total bandwidth provided by two NIC cards. With Link Aggregation, however, the switch optimally divides the data created by the three cameras and allocates it equally between the two NIC cards, maximizing data throughput.
  • Lighting - See illumination. (-)
  • Location - The point in X and Y image space where a recognized object is found.
  • Look-Up Table (LUT) - High speed digital memory used to transform image input values to outputs for thresholding, windowing and other mappings such as pseudo-color. (-)
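A software analogue of a hardware LUT, sketched for a simple threshold mapping:

```python
# Precompute a 256-entry lookup table that thresholds at gray level 128,
# then map each pixel through it with a single indexing step -- the same
# idea a hardware LUT implements at video rates.
lut = [0 if v < 128 else 255 for v in range(256)]

pixels = [12, 130, 254, 127]
mapped = [lut[p] for p in pixels]
print(mapped)  # [0, 255, 255, 0]
```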
  • Low Angle Illumination - See darkfield. Very useful to enhance and highlight surface texture features.
  • Low Pass Filter - A digital or optical filter which passes slow changing, low frequency information, while attenuating high frequency, detailed edge information.


  • Machine Vision - The use of devices for optical non-contact sensing to automatically receive and interpret an image of a real scene, in order to obtain information and/or control machines or processes. (-)
  • Magnification - The relationship between the length of a line or size of a feature in the object plane with the length or size of the same in the image plane.
  • Manual Focus: Refers to a lens which requires a human user to set the focus as opposed to an auto-focus lens which is controlled via a computer or camera.
  • Manual Iris: Refers to a lens which requires a human user to set the iris as opposed to an auto-iris lens which is controlled via a computer or camera.
  • Mask - 1) Setting portions of an image to a constant value; 2) A filter matrix used as a convolution operator; 3) A logical or physical structure placed in an optical system to prevent viewing or passing of information in a certain spatial or frequency region.
  • Material Handling - Hardware systems that provide motion, indexing and/or orientation both during manufacture and the inspection process. (-)
  • Matrix Array Camera - See Area Array Camera.
  • Median Filter - A method of image smoothing which replaces each pixel value with the median grayscale value of its immediate neighbors.
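A one-dimensional sketch of the idea (the function name is illustrative); the 2-D case uses a square neighborhood instead of a window along a row:

```python
def median_filter(row, radius=1):
    """Replace each sample with the median of its neighborhood.
    Isolated noise spikes are removed while step edges are preserved."""
    out = []
    for i in range(len(row)):
        window = sorted(row[max(0, i - radius):i + radius + 1])
        out.append(window[len(window) // 2])
    return out

# A single bright noise spike is rejected entirely.
cleaned = median_filter([10, 10, 200, 10, 10])
```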
  • Megapixel: Refers to one million pixels - relating to the spatial resolution of a camera. Any camera that is roughly 1000 x 1000 or higher resolution would be called a megapixel camera.
  • Memory - The internal, high-speed, large capacity working storage in a computer where data and images may be both stored and retrieved.
  • Micron - One millionth of a meter also called a micrometer. (-)
  • Microlens: A type of technology used in some interline transfer CCD's whereby each pixel is covered by a small lens which channels light directly into the sensitive portion of the CCD. Prosilica's CCD products, the EC1350 and EC1380 cameras both have microlenses on the sensors.
  • Mirror - A smooth, highly polished surface, for reflecting light. It may be plane or curved. Mirrors are fabricated by depositing a thin coating of silver or aluminum on a glass substrate. First surface mirrors are coated on the top surface, thus avoiding a second ghost image produced when light is reflected off the back surface after passing through the glass twice. (2)
  • MIPS - Millions of Instructions per Second measure for computer processing speed. (-)
  • Modulation Transfer Function (MTF) - The ability of a lens or optical system to reproduce (transfer) various levels of detail (modulation) of an object to the image as the frequency (usually sinusoidal) increases.
  • Moire Interferometry - A method to determine 3D profile information of an object or scene, using interference of light stripes. Two identical gratings of known pitch are used. The first creates a shadow of parallel lines of light projected on the object. The second is placed in the imaging train, and superimposed on the shadow cast by the first grating, forming a moire fringe pattern. Distance between the fringes or dark bands is directly related to range or profile. Varying the gap between the lines changes the sensitivity. (2)
  • Moire Pattern - A pattern resulting from the interference of light when gratings, screens or regularly spaced patterns are superimposed on one another. Two stacked window screens create this effect.
  • Moire Topography - A contour mapping technique in which the object is both illuminated and viewed through the same grating. The resulting moire fringes form contour lines of object elevation or profile.
  • Monochromatic - Refers to light having only one color or a single wavelength of radiation.
  • Monochrome - Refers to a black and white image with shades of gray but no color. (-)
  • Morphology - An image algebra group of mathematical operations based on manipulation and recognition of shapes; also called mathematical morphology. Operations may be performed on either binary or gray scale images. The mathematics of shape analysis: an algebra whose variables are shapes and whose operations transform those shapes.
  • MOS Array - Metal Oxide Semiconductor camera array sensor with random addressing capability, rows and columns of photodiodes and charge sent directly from the photodiode to the camera output.
  • Motorized Lens: A lens whereby zoom, aperture, and focus (or one or more of these) are operated electronically. Usually, a computer operated controller is used to drive such lenses. The controller often has an RS-232 port through which a camera, or computer, controls the lens.
  • Mouse - A device, thought of as somewhat resembling a mouse in appearance and movement, that allows the user to control cursor movement on a video display screen by rolling the device over a flat surface. It is also used to select commands, designate text blocks, and for other functions.


  • Neural Networks - A computing paradigm which processes information based on biological neural systems. No programming is involved as in artificial intelligence; rather, decisions are made based on weighted features analyzed by interconnected nodes of simple processing elements using analog computer-like techniques.
  • NI: National Instruments. National Instruments is a large company in the scientific and industrial control market that is also a significant player in the machine vision market. See "Labview".
  • Noise - Irrelevant or meaningless data resulting from various causes unrelated to the source. Random, undesired video signals.
  • Normalized Correlation - Removes the absolute illumination value from a traditional correlation, making the algorithm less sensitive to light variations.
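The "remove the absolute illumination" step amounts to subtracting each patch's mean before correlating; a minimal sketch (illustrative helper, not a library routine):

```python
import math

def normalized_correlation(template, window):
    """Zero-mean normalized cross-correlation between two equal-length
    pixel sequences. Returns +1.0 for a perfect match regardless of
    uniform brightness shifts, -1.0 for an inverted match."""
    mt = sum(template) / len(template)
    mw = sum(window) / len(window)
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    den = math.sqrt(sum((t - mt) ** 2 for t in template) *
                    sum((w - mw) ** 2 for w in window))
    return 0.0 if den == 0 else num / den
```

Because the means are removed, `[1, 2, 3]` matches `[11, 12, 13]` perfectly even though the window is uniformly 10 gray levels brighter.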


  • OCR/OCV - Acronyms for optical character recognition (OCR) and optical character verification (OCV).
  • Object - The 3D item to be imaged, gauged or inspected.
  • Object Features - Any characteristic that is descriptive of an image or region, and useful for distinguishing one from another. A feature may be any measurable item such as length, size, number of holes, surface texture amount or center of mass.
  • Object Plane - An imaginary plane at the object, which is focused by the optical system at the image plane on the sensor.
  • Oblique Illumination - A lighting direction at an angle which emphasizes object features by shadows produced. (-)
  • OEM - Original Equipment Manufacturer that supplies components to another for resale. (-)
  • Off-the-Shelf - Refers to a general purpose system, readily available for immediate shipment, which is not configured for a specific application.
  • OHCI: (Open Host Controller Interface) describes the standards created by software and hardware industry leaders--including Microsoft, Apple, Compaq, Intel, Sun Microsystems, National Semiconductor, and Texas Instruments--to assure that software (operating systems, drivers, applications) works properly with any compliant hardware. In order to operate Prosilica's firewire cameras, the firewire interface in the host computer must be OHCI-compliant.
  • Offset: This is the same as the brightness control on your TV: a positive DC offset of the image signal, used primarily to set the level of "black". Generally speaking, for the best signal the black level should be set so that it is near zero (but not below zero) on the histogram; increasing the brightness beyond this point merely lightens the image without improving the image data. The factory setting for brightness on the EC1380 is generally optimal and should only be changed if it solves a specific lighting problem. The camera units for brightness are 0 to 255 in linear increments. In Labview this control is normally set to "off" or "ignore", and it is best left hidden unless an application really requires it to be exposed.
  • Oil mist - An environmental contaminant which builds up on vision optical surfaces.
  • Opaqueness - Degree to which an object does not transmit light.
  • Opening - An erosion followed by a dilation; it is the opposite of the closing morphological operator.
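A one-dimensional sketch using min/max erosion and dilation (helper names illustrative); opening removes bright features narrower than the structuring element while leaving wider ones intact:

```python
def erode(row, size=3):
    """Grayscale erosion: local minimum over the structuring element."""
    r = size // 2
    return [min(row[max(0, i - r):i + r + 1]) for i in range(len(row))]

def dilate(row, size=3):
    """Grayscale dilation: local maximum over the structuring element."""
    r = size // 2
    return [max(row[max(0, i - r):i + r + 1]) for i in range(len(row))]

def opening(row, size=3):
    """Erosion followed by dilation, as in the definition above."""
    return dilate(erode(row, size), size)

# The isolated 1-pixel speck is removed; the 3-pixel run survives.
result = opening([0, 0, 1, 0, 0, 1, 1, 1, 0])
```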
  • Optical Computing - Performing operations usually handled by electronic, serial computers with optical or photonic circuits/elements in parallel at near the speed of light. (-)
  • Orientation - The angle or degree of difference between the object coordinate system major axis relative to a reference axis as defined in a 3D measurement space.


  • Pantone Matching System (PMS) - A system of describing colors by assigning numbers. (-)
  • Parallax - The change in perspective of an object when viewed from two slightly different positions. The object appears to shift position relative to its background, and also appears to rotate slightly.
  • Parallel Processor - A redundant hardware design using a number of processors so multiple pixels may be processed at the same time.
  • Parent - 1. The previous generation of an item or file that is required to create a new record. 2. In data structures, a node on a tree that has a given node as one of its subtrees.
  • Pattern Recognition - A process which identifies an object based on analysis of its features. (-)
  • Perceptron - The basic processing element used in neural networks. A simple analog circuit with weighted inputs and a nonlinear decision element such as a hard limiter, threshold logic or sigmoid nonlinearity.
  • Photodiode - A single photoelectric sensor element, either used stand-alone or a pixel site, part of a larger sensor array.
  • Photometry - Measurement of light which is visible to the human eye (photopic response). (-)
  • Photopic Response - The color response of the eye's retinal cones.
  • Pinhole - A small, sharp-edged hole which acts as a lens aperture, producing a soft-edged, distortion-free image with a wide field of view and large depth of field.
  • Pixel - An acronym for "picture element." The smallest distinguishable and resolvable area in an image. The discrete location of an individual photo-sensor in a solid state camera.
  • Pixel Counting - A simple technique for object identification representing the number of pixels contained within its boundaries.
  • Polarized Light - Light which has had the vibrations of the electric or magnetic field vector typically restricted to a single direction in a plane perpendicular to its direction of travel. It is created by a type of filter which absorbs one of the two perpendicular light rays. Crossing polarizers theoretically blocks all light transmission.
  • Polarizer - An optical device which converts natural or unpolarized light into polarized light by selective absorption of rays in one direction, and passing of rays perpendicular to the polarizing medium. Usually fabricated from stretched plastic sheets with oriented, parallel birefringent crystals. The first polarizers were constructed with parallel wires. (2)
  • Positioning Equipment - Used to bring the part into the field of view, or to translate when multiple images or views are required.
  • Precision - The degree of spread or deviation between each measurement of the same part or feature. Repeatability.
  • Prism - An optical device with two or more non-parallel, polished faces from which light is either reflected or refracted. Often used to redirect light as in binoculars. (2)
  • Processing Speed - A measure of the time used by a vision system to receive, analyze and interpret image information. Often expressed in parts per minute.
  • Profile - The 3D contour of an object. (2)


  • QImaging: a digital camera company founded by Prosilica founders Marty Furse, Brian Pontifex and one other person. QImaging focuses on scientific imaging applications, especially microscopy.


  • Radiometry - Measurement of light within the entire optical spectrum. (-)
  • RAM - An acronym for Random Access Memory for storage and retrieval of data. (-)
  • Random Access - The ability to read out chosen lines or windows of information from an imager as needed, without following the RS-170 standards.
  • Range Measurement - Determination of the distance from a sensor to the object.
  • Raster Scan - A scanning pattern, generally from left to right while progressing from top to bottom of the imaging sensor or the display monitor. Generally comprised of two fields composed of odd and even lines.
  • Real Time Processing - In machine vision, the ability of a system to perform a complete analysis and take action on one part before the next one arrives for inspection.
  • Reflection - The process by which incident light leaves the surface from the same side as it is illuminated. (2)
  • Refraction - The bending of light rays as they pass from one medium (ie air) to another (ie glass), each with a different index of refraction.
  • Region - Area of an image. Also called a region of interest for image processing operations.
  • Registration - The closeness of the part to the actual position expected for image acquisition.
  • Reject - A mechanism used on a manufacturing line to remove defective or sample product from the main stream or conveyor. Reject design is usually customized to the process.
  • Repeatability - The ability of a system to reproduce or duplicate the same measurement. See precision. The total range of variation of a dimension is called the 6-sigma repeatability.
  • Resolution, Pixel Grayscale - The number of resolvable shades of gray (ie 256).
  • Resolution, Image - The number of rows and columns of pixels in an image.
  • Resolution, Spatial - A direct function of pixel spacing. Pixel size relative to the image field of view is key.
  • Resolution, Feature - The smallest object or feature in an image which may be sensed.
  • Resolution, Measurement - The smallest movement measurable by a vision system.
  • Reticle - An optical element with a pattern located in the image plane to assist in calibration, measurement or alignment of a system or instrument. Examples are cross lines or grids.
  • RGB - An acronym for the Red-Green-Blue color space. This three primary color system is used for video color representation. (2)
  • Ringlight - A circular lamp or bundles of optical fibers arranged around the perimeter of an objective lens to illuminate the object in the field below it. A wide variety of sizes are available on both a stock and custom basis.
  • RS-170 - The Electronic Industries Association (EIA) standard governing monochrome television studio electrical signals. The broadcast standard of 30 complete images per second.
  • RS-232-C - The Electronic Industries Association (EIA) standard governing serial communications over a twisted pair. Good to about 150 feet.
  • RS-330 - Standard governing color television studio electrical signals.
  • RS-422; RS-423; RS-449 - The Electronic Industries Association (EIA) standards for serial communication protocols intended to gradually replace the widely used RS-232-C standard.
  • Rotation - Translation of a part about its center axis from the expected orientation in X and Y space. Expressed in degrees. (2)
  • Rolling Shutter: Some CMOS sensors operate in "rolling shutter" mode only so that the rows start, and stop, exposing at different times. This type of shutter is not suitable for moving subjects except when using flash lighting because this time difference causes the image to smear. (see Global Shutter, Snapshot Shutter).
  • Run Length Encoding - A data reduction method to code a binary image. For each line in an image, data is stored denoting only the starting location of a blob or object and the length of the run of that line over the object.
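A minimal sketch of the encoding for one binary row (helper name illustrative):

```python
def run_length_encode(row):
    """Encode a binary row as (start, length) pairs, one per run of 1s."""
    runs, start = [], None
    for i, v in enumerate(row):
        if v and start is None:
            start = i                       # a run begins
        elif not v and start is not None:
            runs.append((start, i - start))  # a run ends
            start = None
    if start is not None:
        runs.append((start, len(row) - start))  # run reaches row end
    return runs
```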


  • Scanner (galvo & polygon mirror) - An image sensor which uses a swept or scanned beam of light (usually a laser) to generate or acquire a one or two dimensional grayscale reflectance pattern.
  • Scene - The object and a background, in its simplest form. A portion of space imaged by a vision system for investigation or measurement.
  • Scattering - Redirection of light reflecting off a surface or through an object. See diffuse. (-)
  • Scene Analysis - Performing image processing and pattern recognition on an entire image.
  • Segmentation - The process of dividing a scene into a number of individual objects or contiguous regions, differentiating them from each other and the image background.
  • Sensitivity: A measure of how sensitive the camera sensor is to light input. Unfortunately there is no standardized method of describing sensitivity for digital CCD or CMOS cameras, so apples-to-apples comparisons are often difficult on the basis of this specification.
  • Shading - The variation of the brightness or relative illumination over the surface of an object, often caused by color variations or surface curvature.
  • Shape - An object characteristic, often referring to its spatial contour.
  • Shape from Shading - A 3D technique that uses shadows from interaction of the object and the light source to determine shape.
  • Sharpening - An image processing operation which enhances edges. An unsharp mask subtracts a low-pass filtered copy of the image from the original, resulting in edge enhancement.
  • Shutter - An electrical or mechanical device used to control the amount of time the imaging surface is exposed to light. Often used to stop blur from moving objects.
  • Siblings - In Stanford Research Institute (SRI) terminology, several child objects within a parent object are siblings.
  • Silhouette - A black and white image of an object illuminated by backlighting.
  • Simple Lens - A lens with only a single element. (-)
  • Sinusoidal Projection - Use of a grating in which the dark stripes vary in their density sinusoidally across each one, rather than constant black. Improved profile or range discrimination is possible when used in a moire type configuration.
  • Size - An object characteristic typically measured by x and y dimensions. Size may be expressed in pixels, the system calibrated units of measure or classes or size groups.
  • Smart Camera - A term for a complete vision system contained in the camera body itself, including imaging, image processing and decision-making functions.
  • Snapshot shutter - Sometimes called a global shutter; an electronic shutter on CCD or CMOS sensors, a feature of the image sensor that causes all of the pixels on the sensor to begin imaging simultaneously and to stop imaging simultaneously. This feature makes a camera especially suitable for capturing images of moving objects. (see Rolling Shutter, Global Shutter)
  • SMB - A simple coaxial connector (the gold-colored ones on the back of the CV-Series cameras) that is used for triggering and synchronization. The CV-Series cameras have two SMB connectors (Trigger-in and Sync-out) that correspond to SMB connectors on the NI 145x compact vision system.
  • Sobel Transform - A 3x3 convolution used for edge enhancement and locating.
  • Solid-state Camera - A camera which uses a solid state integrated circuit chip to convert incident light or other radiation into an analog electrical signal.
  • Span - The allowance of gray level acceptance for thresholding, adjustable from black to white from 0 to 100%. (-)
  • Spatial Light Modulator - (Also SLM) A transparent screen used in optical computer systems to introduce an image into the optical processing path. Similar to liquid crystal computer display screens, their resolution approaches 512x512 and grayscale imaging 8 bits.
  • Spatial resolution - A measure of how well the CCD or camera can resolve small objects. Usually refers not only to the pixel resolution but also to lens resolution, i.e. the resolution of the whole optical system. See also the following article: High Resolution.
  • Spectral Analysis - Evaluation of the wavelength composition of object irradiance. (-)
  • Spectral Characteristics - The unique combination of wavelengths of light radiated from a source or transmitter or reflected from an object.
  • Spectral Response - The characteristic of a sensor to respond to a distribution of light by wavelength in the electromagnetic spectrum.
  • Specular Reflection - Light rays that are highly redirected at or near the same angle of incidence to a surface. Observation at this angle allows the viewer to "see" the light source.
  • Speed - An object characteristic expressed in distance moved per unit time. Velocity. Image blur may be caused by high speeds unless strobes or shutters are used to "stop motion."
  • SRI Algorithms - A rich set of routines used for geometric analysis and identification developed at the Stanford Research Institute in the early 1970s. Four main steps are: 1) Convert the image to binary; 2) Perform connectivity analysis to identify each blob or object; 3) Calculate the core statistical features for image objects; and 4) Calculate additional user selected features.
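The first two SRI steps (binarization and connectivity analysis) can be sketched in plain Python; the function names and 4-connectivity flood fill here are illustrative, not the original SRI implementation:

```python
# Step 1: threshold to binary.  Step 2: label each connected blob.

def to_binary(image, threshold):
    return [[1 if p >= threshold else 0 for p in row] for row in image]

def label_blobs(binary):
    """Assign an integer label to each 4-connected region of 1-pixels."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and labels[y][x] == 0:
                next_label += 1
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and binary[cy][cx] == 1 and labels[cy][cx] == 0):
                        labels[cy][cx] = next_label
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return labels, next_label

img = [[0, 200, 0],
       [0, 200, 0],
       [0,   0, 180]]
labels, count = label_blobs(to_binary(img, 128))
print(count)   # two separate blobs
```

Steps 3 and 4 would then compute per-blob statistics (area, centroid, bounding box, ...) from the label map.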
  • Stadimetry - A range measuring technique based on the apparent size measurement of a known size object in the field-of-view.
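As a worked example of stadimetry, the similar-triangles relation of a pinhole model gives range from the apparent image size of a known-size object (numbers below are made up for illustration):

```python
# Pinhole model: real_height / range = image_height / focal_length,
# so range = focal_length * real_height / image_height.

def stadimetric_range(focal_mm, real_height_mm, image_height_mm):
    return focal_mm * real_height_mm / image_height_mm

# A 100 mm tall object imaged 2.5 mm tall through a 25 mm lens:
print(stadimetric_range(25.0, 100.0, 2.5))   # 1000.0 mm away
```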
  • Statistical (Theoretic) Pattern Recognition - Statistical analysis of object features to perform recognition and classification.
  • Stereo (Passive) - For imaging, the use of two cameras, offset by a known distance and angle, to image the same object and provide range, depth or 3D information. Active stereo uses a controlled or structured light source to provide 3D data.
  • Stereo Photogrammetry - See Stereoscopic Approach.
  • Stereoscopic Approach - The use of triangulation between two or more image views from differing positions. Used to determine range or depth.
  • Strobe Duration - The amount of time, expressed in microseconds, during which the flash lamp (strobe) is at 90% intensity.
  • Strobed Light - Brief flashes of light for observing an object during a short interval of time, typically used to "stop" movement and resulting image blur. Strobes may use xenon flash tubes, banks of LEDs or a laser to illuminate the scene.
  • Structural (Syntactic) Pattern Recognition - Evaluation of the relationship of object features in a specific order, e.g. decision trees, to perform recognition and classification.
  • Structured Light - Points, lines, circles, sheets and other projected configurations used to directly determine shape and/or range information by observing their deformation as they intersect the object in a known geometric configuration.
  • Subpixel Resolution - Mathematical techniques used on gray scale images to resolve an edge location to less than one pixel. A one tenth pixel resolution is reasonable in the factory.
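One common subpixel technique fits a parabola through three gradient samples around a peak and takes the vertex location; this is a hedged sketch (one approach among several, with illustrative names and data):

```python
# Refine an edge/peak location to a fraction of a pixel by
# parabolic interpolation through three neighbouring samples.

def subpixel_peak(samples, i):
    """Refine the peak at integer index i using its two neighbours."""
    a, b, c = samples[i - 1], samples[i], samples[i + 1]
    # Vertex offset of the parabola through (-1, a), (0, b), (1, c).
    offset = 0.5 * (a - c) / (a - 2 * b + c)
    return i + offset

gradient = [1, 2, 7, 10, 9, 3, 1]   # slightly asymmetric peak at index 3
print(subpixel_peak(gradient, 3))    # lands between pixels 3 and 4
```

Because the samples around the peak are asymmetric, the refined location falls between integer pixel positions, which is exactly the effect the entry describes.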
  • Syntactic PR - See Structural Pattern Recognition
  • System Performance Measures - Accuracy, precision or repeatability, and alpha and beta risk for a given throughput rate specify the performance of a vision system. (-)
  • Synch Pulse - Timing signals used to control the television scanning and display process. The horizontal synch triggers tracing of a new line from left to right, while the vertical synch initiates the start of a new field.
  • Synchronous - A camera characteristic denoting operation at a fixed frequency locked to the AC power line (typically 60 or 50 Hz).
  • Systems Integration - The art of assembling hardware, software, components, mounts and enclosures to produce a system that meets a customer's specification.
  • Systems Integrator - A company that provides a turnkey machine vision system, adapting the vision system to a specific customer's requirements.
  • Saturation - The degree to which a color is free of white. One of the three properties of color perception along with hue and intensity (HSI).


  • Tail End System - The operator interface, I/O and communications blocks of a vision system. Includes all aspects of information display and handling.
  • TDI Camera - Time Delay Integration. Similar to a line scan, a TDI camera is comprised of a number of rows of pixels. As an object such as a web moves, the charge from one row is passed to the next row, synchronously continuing the integration. Requires far less illumination intensity than the standard line scan.
  • Template - An artificial model of an object or a region or feature within an object. (-)
  • Template Matching - A form of correlation used to find out how well two images match.
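A minimal sketch of template matching by normalized cross-correlation on a 1-D signal (real systems slide a 2-D template over the image; the data and names here are illustrative):

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def best_match(signal, template):
    """Return the offset where the template correlates best."""
    scores = [ncc(signal[i:i + len(template)], template)
              for i in range(len(signal) - len(template) + 1)]
    return scores.index(max(scores))

signal = [0, 1, 2, 5, 9, 5, 2, 1, 0]
template = [5, 9, 5]
print(best_match(signal, template))   # offset where the match score is 1.0
```

A perfect match scores 1.0; poorer matches score closer to 0 (or negative for inverted patterns), which is the sense in which correlation measures "how well two images match".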
  • Texture - The degree of smoothness of an object surface. Texture affects light reflection, and is made more visible by shadows formed by its vertical structures.
  • Thickness - The measurement in the third dimension (length and width being the other two) from one object surface to another using one or two 3D range sensors or other technique.
  • Thresholding - The process of converting a gray scale image into a binary image. If a pixel's value is above the threshold, it is converted to white; if below the threshold, it is converted to black.
  • Throughput Rate - The maximum parts per minute inspection rate of a system.
  • Top Hat - A morphological operator comprised of an opening followed by a subtraction of the output image from the original input image.
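The top-hat operator is easy to demonstrate in 1-D; this sketch uses a flat 3-sample structuring element and illustrative helper names (real implementations work on 2-D images with arbitrary structuring elements):

```python
# Grayscale top-hat: opening (erosion then dilation), then
# original minus opened.  Narrow bright features survive.

def erode(sig):
    return [min(sig[max(0, i - 1):i + 2]) for i in range(len(sig))]

def dilate(sig):
    return [max(sig[max(0, i - 1):i + 2]) for i in range(len(sig))]

def top_hat(sig):
    opened = dilate(erode(sig))
    return [s - o for s, o in zip(sig, opened)]

print(top_hat([0, 0, 9, 0, 0]))   # a 1-sample spike survives
print(top_hat([0, 9, 9, 9, 0]))   # a wide plateau is removed
```

This shows why the top-hat is used to isolate features smaller than the structuring element, such as bright specks on a varying background.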
  • Trackball - A stationary ball used as a pointing device to select items from a display screen.
  • Transition - For an edge in a binary image, the location where pixels change between light and dark. (-)
  • Translation - Movement in the X and/or Y direction from a known point.
  • Translucent - An object characteristic in which part of the incident light is reflected and part is transmitted. The transmitted light emerges from the object diffused.
  • Transmittance - The ratio of the radiant power transmitted by an optical element or object to the incident radiant power.
  • Transputer - A type of computer architecture with several CPUs connected in parallel.
  • Triangulation - A method of determining distance by forming a right triangle consisting of a light source, camera and the object. The distance or range can be calculated if the camera-to-light source distance and the incident to reflected beam angle are both known. Based on the Pythagorean relation.
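A hedged worked example of the triangulation geometry, assuming a right triangle between light source, camera and object (the baseline and angle below are made-up values):

```python
import math

def range_from_triangulation(baseline_m, theta_deg):
    """Distance to the object: tan(theta) = baseline / range."""
    return baseline_m / math.tan(math.radians(theta_deg))

# 0.1 m source-to-camera baseline, 5 degree beam angle:
print(range_from_triangulation(0.1, 5.0))   # roughly 1.14 m
```

Note that small angle errors translate into large range errors at long distances, which is why triangulation sensors work best at short range.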
  • Trigger - An input to an industrial digital camera that initiates the image capture sequence; more generally, an electrical signal or set of signals used to synchronize a camera, or cameras, to an external event.
  • Tube Type Camera - A camera in which the image is formed on a fluorescent screen, then read out sequentially in a raster scan type pattern by an electron beam for conversion to an analog voltage proportional to incoming light intensity.


  • Ultrasonic Imaging - Use of ultrasound waves as the imaging "illumination" source. (-)
  • Ultrasound - High frequency acoustic waves just above the range of human hearing, useful for penetrating and "illuminating" solid objects for inspection.
  • Ultraviolet - The region of the electromagnetic spectrum adjacent to the visible spectrum, but of higher frequency (shorter wavelength) than blue, ranging from roughly 10 to 400 nm. UV A ranges from 320 to 400 nm while UV B falls between 280 and 320 nm.
  • User Interface - Includes display, operator, user controls and a means to access and modify custom user programming. See operator interface.


  • Validation - A rigid set of tests to verify that a system performs as documented.
  • Variable Scan Input - Frame grabber capability to accept a variety of non-RS-170 input formats from a variety of cameras. Allows operation above the standard 30 frames-per-second limit.
  • Verification - Activity providing qualitative assurance that a fabrication or assembly process was successfully completed.
  • VESA - Video Electronics Standards Association. A 32 bit display or other hardware card. (-)
  • VGA - An acronym for Video Graphics Array. The IBM video display standard providing 640 x 480 resolution with 16 colors.
  • Video - Visual information encoded in a specific bandwidth and frequency spectrum location originally developed for television and radar imaging. (-)
  • Video-type autoiris - There are two major types of auto-iris lenses: DC-type and video-type. The video-type auto-iris requires a video signal to determine how far to open the iris on the lens. Some digital cameras simulate a video signal on one of their outputs, which is used to drive the video auto-iris lens; camera software controls may also include features to adjust how the auto-iris behaves. (see Autoiris)
  • Virtual Instrument (VI) - A set of instructions, or software, that runs a process in National Instruments LabVIEW software.
  • Vidicon - A generic name for a camera tube of normal light sensitivity. It outputs an analog voltage stream corresponding to the intensity of the incoming light.
  • Visible Light - The region of the electromagnetic spectrum in which the human retina is sensitive, ranging from about 400 to 750 nm in wavelength.
  • Vision Engine - Analyzes the image and makes decisions, using a very fast processor inside a computer. It performs dedicated evaluation of the pre-processed image data to find features and make measurements. Unlike a personal computer, the vision engine is built for speed, not flexibility.


  • Wavelength - The distance covered by one cycle of a sinusoidally varying wave as it travels at or near the speed of light. It is inversely proportional to frequency.
  • Well - A morphological operator comprised of a closing followed by a subtraction of the output image from the original input image.
  • Window - A selected portion of an image or a narrow range of gray scale values.
  • Windowing - Performing image processing operations only within a predefined window or area in the image.


  • Xenon Strobe - A gas filled electronic discharge tube, useful for high speed, short duration illumination for inspection.
  • X-ray - A portion of the electromagnetic spectrum beyond the ultraviolet with higher frequency and shorter wavelengths. Able to penetrate solid objects for internal, non-destructive evaluation.



  • Zoom Lens - A compound lens which remains in focus as the image size is varied continuously. May be motorized or manually operated.

External Links

wikipedia:Machine Vision Glossary

Contributing to this article

All entries should be entered under the proper alphabetical heading as a bullet item.  The term or word being defined should be in Bold.  The brief definition should be in normal text.  If a definition of more than two lines is needed, please create a new page using the term or word being defined as the page title, and place the more full description or definition on that page.  Also feel free to link to non-LabAutopedia web references.  Refer to the help page Creating a page and  Editing a page for information about these editing operations.  Please keep all entries as factual and vendor-neutral as possible. 
