WO2002001209A1 - Compensation system and related techniques for use in a printed circuit board inspection system - Google Patents


Info

Publication number
WO2002001209A1
Authority
WO
WIPO (PCT)
Prior art keywords
memory
pixel
values
compensation
offset
Application number
PCT/US2001/017574
Other languages
French (fr)
Inventor
Douglas W. Raymond
Original Assignee
Teradyne, Inc.
Application filed by Teradyne, Inc. filed Critical Teradyne, Inc.
Priority to AU2001266631A priority Critical patent/AU2001266631A1/en
Publication of WO2002001209A1 publication Critical patent/WO2002001209A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8806 Specially adapted optical and illumination features
    • G01N21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/956 Inspecting patterns on the surface of objects
    • G01N21/95684 Patterns showing highly reflecting parts, e.g. metallic elements

Definitions

  • This invention relates generally to automated optical inspection (AOI) systems and more particularly to illumination systems used in AOI systems.
  • an automated optical inspection (AOI) system typically includes an illumination system which projects or otherwise provides light to illuminate a specimen being inspected and a camera which captures an image of the specimen and converts it to electrical signals.
  • One or more frame grabbers transfer the electrical signals representing the image to a computer or other processing device for further processing.
  • a processing device which is unaware of the specific nonuniformities in lighting cannot differentiate between those patterns of light in the image which are due to the appearance of the specimen and those patterns of light in the image which are due to irregularities in the illumination system.
  • an image of a uniformly illuminated plain grey surface ideally has pixel values which are all equal. In practice, however, the pixel values will not be equal unless the camera is ideal and the illumination is perfectly uniform.
  • the computer or other processing device receives a "true" image only if the effects of nonuniform illumination can somehow be cancelled out. Nonuniformity of illumination is thus a source of noise in AOI systems.
  • a typical illumination system includes one or more fluorescent lamps. One problem with fluorescent and other types of lamps is that the intensity of the lamp varies along the length of the lamp.
  • the intensity also varies with the amount of current provided to the lamp, the age of the lamp, specific variations in the manufacture of the lamp including its shape, and the temperature at which the lamp operates. It is a daunting prospect to attempt to regulate the light at one point, let alone regulate the light at all points on the object. Thus, suffice it to say that for a variety of reasons, it is relatively difficult to produce uniform illumination over the field of view of a camera.
  • one technique to overcome this illumination problem is to use lamps which provide relatively uniform intensity. Such lamps, however, are relatively expensive. More importantly, perhaps, such lamps still have variations in intensity, although the variations are less severe than the intensity variations in relatively low cost lamps.
  • an automated optical inspection (AOI) system includes an image acquisition system for capturing an image and for providing a digital signal to a compensation circuit which compensates the pixel values to provide compensated or corrected pixel values.
  • the compensation circuit provides the compensated pixel values in real time to a video frame memory and to a direct memory access (DMA) channel.
  • the DMA channel transfers the compensated pixel values to a storage device which is coupled to or provided as part of a processor.
  • the image acquisition system includes a digital camera and in another embodiment the image acquisition system includes an analog camera having a video digitizer coupled thereto.
  • the details of the compensation memory architecture vary with the nature of the aberrations expected during inspection. For example, if a pure linear correction with no offset errors is expected, only a scale factor memory is needed. On the other hand, if offset errors are expected, then an offset memory is also required (e.g. to eliminate the effects of unexcludable stray light). If a nonlinear correction is needed, additional scale memories and associated arithmetic circuits to perform scale factor computations may be needed. In a preferred embodiment, only offset and scale factor are linearly corrected. It should be appreciated, however, that the concept of performing real time correction can be extended beyond simple linear calculations or reduced to simpler linear multiples without offset correction.
  • the system uses a plurality of lighting modes and it may be desirable to provide the compensation memory from a plurality of separate banks of memories with each bank storing a set of correction or compensation coefficients.
  • each lighting mode uses a separate bank and thus a separate set of compensation coefficients.
  • a bank switch coupled between the processor and the compensation memory allows the processor to select a particular one or ones of the banks of compensation memory. The appropriate memory bank is selected by the processor prior to the time a frame is acquired. In particular, the processor selects particular values in the memory to be used according to which lighting mode will be used.
  • the compensation circuit includes a compensation memory coupled to an adder circuit and a multiplier circuit.
  • the image acquisition circuit provides a pixel value (in the form of a digital signal) to a first port of the adder circuit and the compensation memory provides an offset value to a second port of the adder circuit.
  • the adder circuit combines the two values fed thereto and provides a partially compensated pixel value to the input of the multiplication circuit.
  • the multiplication circuit further compensates the pixel value prior to providing the fully compensated pixel value to the video frame memory and the DMA channel.
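  • As an illustration only (not part of the original disclosure), the adder/multiplier datapath described above can be sketched in software as follows; the variable names and the 8-bit clamping are assumptions made for the sketch.

```python
def compensate_pixel(raw_pixel: int, offset: int, scale: float) -> int:
    """Mimic the adder/multiplier datapath: add the stored offset value,
    then multiply by the stored scale factor (names are illustrative)."""
    partially_compensated = raw_pixel + offset           # adder circuit
    fully_compensated = partially_compensated * scale    # multiplier circuit
    # Clamp to an 8-bit range; the bit width is an assumption for this sketch.
    return max(0, min(255, round(fully_compensated)))


# Hypothetical values: a raw reading of 120, an offset of -5, a scale of 1.1.
print(compensate_pixel(120, -5, 1.1))   # prints 126
```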
  • a further advantage of the present invention is that the cost of illumination hardware can be reduced.
  • relatively low cost illuminators tend to have worse nonuniformities than relatively expensive illuminators.
  • the present invention makes it possible to tolerate greater irregularity in the illumination, and therefore makes it possible to employ relatively low cost illuminators while still achieving an accuracy which is equivalent to that obtained with relatively high cost illuminators.
  • an AOI system includes an image acquisition system having an output port coupled to a first input port of an adder circuit.
  • a second input of the adder circuit is coupled to an offset memory and an output of the adder circuit is coupled to a first input of a multiplier circuit.
  • a second input of the multiplier circuit is coupled to a scale memory and an output of the multiplier circuit is coupled to a video frame memory and a DMA channel.
  • the DMA channel is coupled to a processor memory.
  • the processor memory may be provided, for example, either as a main memory in a so-called "motherboard" or as a memory on a frame grabber module which includes both a frame grabber processor and a frame grabber memory.
  • an AOI system which corrects pixel values in a digital image on a pixel by pixel basis as the signals representing the video frame are transferred from the image acquisition system to the frame memory is provided.
  • the adder, multiplier and memory circuits operate at a speed which is sufficient to allow the system to compensate pixel values in real time, thus allowing compensated or corrected pixel values to be stored in the memory for later processing.
  • a further advantage of the present invention is that in the case where the image acquisition system includes both a camera and a digitizer, the invention simultaneously compensates for linear nonuniformities in the camera and in the digitizer as well as for illumination nonuniformity. If the digitizer had a constant offset, for example, that offset would be removed in the calibration process described here.
  • the digitized pixel values contain signal information from the inspected specimen and error information caused by uneven illumination.
  • the digitized pixel values are presented to the first input of the adding circuit.
  • the output of the adding circuit is a pixel value which has been corrected for offset.
  • the output of the multiplication circuit is a pixel value which has been corrected for scale and offset.
  • the addition and multiplication circuits can be implemented in any of several ways known to those skilled in the art of arithmetic circuitry.
  • the adder and multiplier can be generated using very high speed integrated circuit hardware description language (VHDL) software statements compiled for a programmable gate array.
  • the correction coefficient memories may be organized in any convenient way. In the preferred embodiment, it is sixteen million sixteen-bit words in which each sixteen-bit word is broken up into eight bits of offset coefficient and eight bits of scale coefficient. Embodiments are envisioned in which fewer bits of offset are used and more bits of scale, or vice versa - this would be appropriate if the variation in offset were small and the variation in scale were large. It might be reasonable in some cases to represent both scale and offset values in a total of eight bits, with three bits for offset and five bits for scale. Other ways of managing multiple blocks of correction coefficients for multiple lighting modes are contemplated as well. For example, if memory cost were important and lighting modes were infrequently changed, the time of reloading a lighting mode's information would not be important.
  • the bank switch could be eliminated (or set to a constant value) and the entire memory could be reloaded from the computer for each separate lighting mode. This would allow the invention to be realized with smaller memories, but would use time in reloading the memories, as opposed to the much faster bank switch. In the case where only one lighting mode is used, only one memory bank (or one set of memory banks) is needed.
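  • By way of illustration only, the sixteen-bit coefficient word described above might be packed and unpacked as in the following sketch; the field ordering (offset in the high byte) is an assumption rather than a detail taken from the text.

```python
def pack_coefficients(offset: int, scale: int) -> int:
    """Pack an 8-bit offset coefficient and an 8-bit scale coefficient into one
    16-bit word; the offset occupies the high byte here (an assumed ordering)."""
    return ((offset & 0xFF) << 8) | (scale & 0xFF)


def unpack_coefficients(word: int) -> tuple[int, int]:
    """Recover the (offset, scale) coefficient pair from a 16-bit word."""
    return (word >> 8) & 0xFF, word & 0xFF


word = pack_coefficients(offset=3, scale=200)
assert unpack_coefficients(word) == (3, 200)
```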
  • the output of the multiplication circuit is a corrected pixel value - the value the pixel would have had if the illumination had been uniform.
  • the pixel has been corrected for offset and for scale.
  • the corrected value is stored in frame memory in lieu of the original raw digitized pixel, as would have been stored by a conventional framegrabber.
  • the correction has been made in real time, so the computer can process corrected pixels as fast as if they had not been corrected. No central processor cycles are used in making the correction.
  • a method for correcting errors in an AOI system includes the steps of (a) acquiring a frame and digitizing each pixel in the frame, (b) generating an address to indicate where the pixel is to be stored in a video frame memory, (c) retrieving correction values related to the pixel position and (d) combining the correction values with the digitized pixel value to provide a corrected pixel value.
  • the correction values correspond to offset and scale correction values.
  • the correction values are combined with the pixel values by first adding an offset correction value to the digitized pixel value to provide a partially corrected digitized pixel value and then by multiplying the partially corrected digitized pixel value by a scale correction value to provide a fully corrected pixel value, as illustrated in the sketch below.
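  • A minimal software rendering of steps (a) through (d) above might read as follows; the list-based "memories" and all variable names are hypothetical stand-ins for the hardware described elsewhere.

```python
def correct_frame(raw_frame, offset_mem, scale_mem):
    """Walk a digitized frame pixel by pixel: the position doubles as the
    frame-memory address, and the correction values for that address are
    retrieved and combined with the pixel value."""
    corrected = [0] * len(raw_frame)
    for address, raw in enumerate(raw_frame):    # (a) digitized pixel stream, (b) address
        offset = offset_mem[address]             # (c) retrieve correction values
        scale = scale_mem[address]
        # (d) combine: add the offset, then multiply by the scale factor.
        corrected[address] = max(0, min(255, round((raw + offset) * scale)))
    return corrected


# Hypothetical three-pixel "frame" with per-pixel correction tables.
print(correct_frame([100, 150, 200], [-2, 0, 3], [1.0, 1.1, 0.95]))  # [98, 165, 193]
```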
  • Fig. 1 is a block diagram of an automated optical inspection system which performs correction of images
  • Fig. 2 is a block diagram of an automated optical inspection system which includes an offset memory and a scale memory for use in performing correction on a pixel by pixel basis as the signals representing the video frame are being loaded into frame memory from a camera;
  • Fig. 3 is a flow diagram of a method to perform real time compensation
  • Fig. 4 is a flow diagram of a method for determining compensation values.
  • An analog or continuous parameter image such as a still photograph may be represented as a matrix of digital values and stored in a storage device of a computer or other digital processing device.
  • the matrix of digital data values is generally referred to as a "digital image” or more simply an “image” and may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of energy at different wavelengths in a scene.
  • an image sequence such as a scanned view of a printed circuit board, for example, may be converted to a digital video signal as is generally known.
  • the digital video signal is provided from a sequence of discrete digital images or frames.
  • Each frame may be represented as a matrix of digital data values which may be stored in a storage device of a computer or other digital processing device.
  • a matrix of digital data values is generally referred to as an "image frame" or more simply an "image" or a "frame."
  • Each of the frames in the digital video signal may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of energy at different wavelengths in a scene in a manner similar to the manner in which an image of a still photograph is stored
  • each of the numbers in the array corresponds to a digital word (e.g. an eight-bit binary value) typically referred to as a "picture element" or a "pixel" or as "image data."
  • the image may be divided into a two dimensional array of pixels with each of the pixels represented by a digital word.
  • each digital word corresponds to the intensity of the pixel and thus the image at that particular pixel location.
  • each pixel being represented by a predetermined number of bits (e.g. eight bits) which represent the color red (R bits), a predetermined number of bits (e.g. eight bits) which represent the color green (G bits) and a predetermined number of bits (e.g. eight bits) which represent the color blue (B-bits) using the so-called RGB color scheme in which a color and luminance value for each pixel can be computed from the RGB values.
  • the techniques described herein are applicable to a plurality of color schemes including but not limited to the above mentioned RGB, HSB, CMYK schemes as well as the Luminosity and color axes a & b (Lab), YUV color difference color coordinate system, the Karhunen-Loeve color coordinate system, the retinal cone color coordinate system and the X, Y, Z scheme.
  • Reference is also sometimes made herein to an image as a two-dimensional pixel array.
  • An example of an array size is 512 pixels x 512 pixels.
  • One of ordinary skill in the art will of course recognize that the techniques described herein are applicable to various sizes and shapes of pixel arrays including irregularly shaped pixel arrays.
  • An image region or more simply a region is a portion of an image. For example, if an image is provided as a 32 X 32 pixel array, a region may correspond to a 4 X 4 portion of the 32 X 32 pixel array.
  • an automated optical inspection (AOI) system 10 includes an image acquisition system 12 which obtains analog or continuous parameter images of a specimen to be inspected.
  • the specimen is illuminated by an illumination system 13.
  • the image acquisition system 12 provides a digital signal (e.g. a stream of digital bits) to a compensation circuit 14.
  • the compensation circuit 14 includes a compensation function circuit 16 coupled to a compensation memory circuit 19.
  • the compensation memory circuit 19 has stored therein compensation values which are provided to the compensation function circuit 16.
  • the compensation function circuit 16 receives the compensation values and appropriately applies them to pixel values being provided thereto from the image acquisition system 12.
  • the compensation values stored in the compensation memory 19 can be generated via a system calibration technique such as the techniques described below in conjunction with Figs. 3 and 4.
  • the compensation function circuit 16 includes one or more compensating circuits 18a - 18N generally denoted 18.
  • compensating circuit 18a corresponds to an offset circuit 18a
  • compensating circuits 18b -18N correspond to scale circuits 18b — 18N.
  • Although only one offset circuit is here shown, it should be noted that in some embodiments it may be desirable to use more than one offset circuit.
  • the operation of the offset and scale circuits 18a-18N will be described in detail below. Suffice it here to say that the offset circuit 18a and scale circuits 18b-18N receive compensation values from the compensation memory 19 and apply the compensation values to pixel values received from the image acquisition system 12.
  • the compensation circuit 14 thus corrects for errors due to non-uniform illumination and other errors.
  • the compensation memory 19 can be provided from a single memory or a bank of memories.
  • compensation memory 19 is shown provided from a plurality of memories 20a and 21a-21M.
  • the memory 20 is provided as an offset memory and memories 21a-21M are provided as scale memories.
  • each of the individual memories 20 and 21a-21M could comprise a bank of memories, with each of the different memory banks having different compensation values stored therein.
  • a first bank of memory 20 can hold compensation values to be used for a first lighting mode and a second different bank of memory 20 can hold compensation values used for a second different lighting mode.
  • the particular bank of offset memory 20 selected for use during an inspection process depends upon the lighting mode being used in the system. When the system is operating in the first lighting mode, the first memory bank of offset memory 20 is used and when the system is operating in the second lighting mode, the second memory bank of offset memory 20 is used. Thus, different compensation values can be used for each lighting mode.
  • Banks of scale memories 21a-21M could operate in a similar manner.
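  • One possible software analogue of the bank-switching arrangement just described is sketched below; the dictionary layout and method names are assumptions for illustration, not details of the patented circuit.

```python
class CompensationMemory:
    """Hold one (offsets, scales) coefficient table per lighting mode and let a
    processor select the active bank before a frame is acquired (a sketch only)."""

    def __init__(self, tables_per_mode):
        # tables_per_mode maps a lighting-mode index to an (offsets, scales) pair.
        self._banks = tables_per_mode
        self._active = 0

    def select_bank(self, lighting_mode: int) -> None:
        """Called by the processor prior to frame acquisition."""
        self._active = lighting_mode

    def coefficients(self, address: int):
        """Return the (offset, scale) pair for one pixel address in the active bank."""
        offsets, scales = self._banks[self._active]
        return offsets[address], scales[address]


# Two hypothetical lighting modes with two-pixel coefficient tables each.
memory = CompensationMemory({0: ([0, -1], [1.00, 1.02]),
                             1: ([2, 0], [0.98, 1.05])})
memory.select_bank(1)                   # processor picks the bank before the frame is grabbed
print(memory.coefficients(address=0))   # (2, 0.98)
```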
  • Although N scale circuits 18b-18N are shown in Fig. 1, those of ordinary skill in the art will appreciate that one or more scale circuits 18b-18N could be used.
  • Although M scale memories 21a-21M are shown in Fig. 1, those of ordinary skill in the art will appreciate that one or more scale memory circuits could be used.
  • Each of the one or more offset memories and scale memories provide compensation values to one or more of the offset and scale circuits.
  • the compensation circuit 14 compensates the pixel values from the image acquisition circuit 12
  • the compensated pixel values are transmitted from the compensation circuit 14 to a memory for storage.
  • the compensated pixel values are transmitted from the compensation circuit 14 through a direct memory access (DMA) channel 24 to a memory 26 of a processor 28.
  • the processor 28 may be provided, for example, as a general purpose computer, an image processing processor or any other type of processor capable of processing image information and performing all or portions of an inspection process.
  • the system can include a frame grabber module.
  • the frame grabber module could include the processor 28 and the memory 26.
  • the memory 26 would correspond to a video frame memory 26 on the frame grabber module.
  • the DMA channel 24 is coupled between the compensation circuit 14 and the frame grabber module (which includes the processor 28 and the video frame memory 26) and corrected pixel values are transferred via a DMA process directly from the compensation circuit 14 to the frame grabber module memory 26.
  • a frame grabber module could include a video frame memory (illustrated as video frame memory 27 in Fig. 1) and the processor 28 can be provided as part of a module separate from but coupled to the frame grabber module.
  • the frame grabber module is coupled between the compensation circuit 14 and the processor 28 and corrected pixel values are transferred directly from the compensation circuit 14 to the frame grabber module memory (illustrated as video frame memory 27 in Fig. 1) and subsequently accessed by the processor 28.
  • the DMA channel 24 can be omitted.
  • the processor 28 selects particular memory addresses in which compensation values for particular pixel regions are stored. It should be appreciated that to compensate the pixel values in real time, it is necessary to provide at least one memory with correction information in it. Thus, the compensation memory 19 stores the correction information needed to compensate the pixel values.
  • the correction is performed pixel by pixel as the signals representing the video frame are being transferred from the image capture system 12 through the compensation circuit 14 and to the video frame memory 27 or DMA channel 24. Compensating the pixels for irregularities in real time makes it possible to continue processing video information at the same rate as without correction.
  • the compensation memory 19 can be provided from a single scale memory (e.g. memory 21a) and an offset memory (e.g. offset memory 20a which includes, for example, compensation values to eliminate the effects of unexcludable stray light). If a nonlinear correction is needed, more memories (e.g. scale memories 21a - 21M) and more compensation function circuits (e.g. compensation function circuits 18a - 18N) may be needed. Thus, the compensation scheme of the present invention can be extended beyond simple linear calculations or can be reduced to simpler linear multiples without offset correction.
  • compensation circuit 14, memory 26 and processor 28 may be provided as part of a general purpose computer properly programmed to perform the functions of all of the above circuits.
  • an automated optical inspection (AOI) system 30 includes an image acquisition system 32 which obtains analog or continuous parameter images of a specimen 33 to be inspected.
  • the image capture system 32 may be provided, for example, as a digital camera (i.e. a camera which captures an analog input signal and provides a digital signal output).
  • the image acquisition system 32 may also be provided from a camera 34 having a video digitizer 36 coupled thereto.
  • the video digitizer 36 receives analog signals from the camera 34 and provides digital signals at an output thereof.
  • the camera 34 obtains a new image frame every 1/60 of a second.
  • cameras which obtain new image frames at rates faster or slower than one frame every 1/60 of a second may also be used.
  • the video digitizer 36 receives the analog signal from the camera 34, samples the analog signal, converts the analog signal into a stream of bits and provides a digital signal at an output thereof. Thus, in this manner, each pixel of every image obtained by the camera 34 is digitized.
  • Each of the freshly digitized pixel values, which contain signal information from the inspected specimen 33 and error information caused by uneven illumination, is presented to a first input of an adder circuit 38.
  • Error information can be generated, for example, by stray light which reaches the inspection or calibration area or by pixel sensitivity variations across a camera detector (e.g. a CCD, a CMOS detector or any other type of detector suitable for use in an image acquisition system).
  • a correction coefficient memory 39 having correction coefficient values stored therein comprises a bank of offset memories 40a - 40N and a bank of scale memories 41a - 41M.
  • the correction coefficient memory 39 is coupled to a second input port of the adder circuit 38.
  • a selected one of the offset memories 40a - 40N provides an offset value to the second port of the adder 38.
  • the offset values can be transferred from the selected one of the offset memories 40a - 40N to the second port of the adder circuit 38 either before, after or at the same time the pixel value is transferred from the image acquisition system to the first port of the adder circuit 38.
  • the particular one of the offset memories 40a - 40N which is selected is determined in a manner described below.
  • the adder 38 combines the values provided to the first and second inputs and provides a corrected offset digitized pixel value to a first input of a multiplier circuit 42.
  • the output of the adding circuit 38 is a pixel value which has been corrected for offset. It is presented to a first input of the multiplication circuit 42.
  • the second port of the multiplier circuit 42 is coupled to the correction coefficient memory 39.
  • a selected one of the scale memories 41a — 41M provides a scale correction value to the second input of the multiplier circuit 42.
  • the scale values can be transferred from the selected one of the scale memories 41a - 41M to the second port of the multiplier circuit 42 either before, after or at the same time the pixel value is transferred from the adder circuit 38 to the first port of the multiplier circuit 42.
  • the multiplier circuit 42 provides a corrected pixel value (i.e. the value the pixel would have had if the illumination had been uniform) at an output thereof.
  • the pixel has been corrected for offset and for scale.
  • the offset memory 40 is provided from a bank of memories 40a-40N and the scale memory 41 is provided from a bank of memories 41a-41M.
  • Each of the different offset memory banks 40a-40N have different compensation values stored therein.
  • the inspection system 30 operates in a plurality of different lighting modes. Depending upon the particular lighting mode being used, the compensation values are selected from a predetermined one of the offset memories 40a- 40N.
  • offset memory bank 40a can hold compensation values used for a first lighting mode
  • offset memory bank 40b can hold compensation values for a second different lighting mode. The particular compensation values used during an inspection process thus depends upon the lighting mode being used in the system.
  • each of the different scale memory banks 41a-41M has different compensation values stored therein.
  • the compensation values are selected from a predetermined one of the scale memories 41a-41M.
  • scale memory bank 41a can hold compensation values used for a first lighting mode
  • scale memory bank 41b can hold compensation values for a second different lighting mode.
  • the output of the multiplier circuit 42 is coupled to a DMA channel 50 through which the corrected pixel data is transferred to a memory 54 of an image processing computer 52.
  • the transfer is made via a DMA process.
  • the computer 52 may be provided, for example, as a general purpose computer, an image processing processor or any other type of processor capable of processing image information and performing all or portions of an inspection process.
  • the processor 52 and memory 54 could be provided as part of a frame grabber module.
  • the DMA channel 50 would be coupled between the multiplier circuit 42 and the frame grabber module (which includes the processor 52 and the memory 54).
  • a frame grabber module could include a video frame memory (illustrated in phantom as video frame memory 44 in Fig. 2) and the processor 52 can be provided as part of a module separate from but coupled to the frame grabber module.
  • the video frame memory 44 of the frame grabber module is coupled between the multiplier circuit 42 and the processor 52 and corrected pixel values are transferred directly from the multiplier circuit 42 to the video frame memory 44 of the frame grabber module and subsequently accessed by the processor 52.
  • the corrected value is stored in the video frame memory 44 in lieu of the original raw digitized pixel, as would have been stored by a conventional framegrabber, and the DMA channel 50 can be omitted.
  • a bank switch 46 coupled between a computer interface and the memory 39 allows the computer to select the appropriate one of the several memory banks 40a-40N and 41a-41M. Prior to frame acquisition time, the computer 52 selects appropriate ones of the banks 40a-40N and 41a-41M.
  • An address generator 48 is coupled to the correction coefficient memory and to the DMA channel 50 (or the video frame memory 44 in the embodiments which include the video frame memory and omit the DMA channel).
  • the address generator 48 is also coupled to a computer interface 56 which in turn is coupled to an illuminator control 58.
  • the illuminator control 58 controls an illumination source 60 during the calibration process and implements different lighting modes.
  • the address generator 48 generates an address to activate the video frame memory
  • the address generator 48 also activates the offset and scale factor memories 40a, 40b in the selected bank.
  • the offset correction value is retrieved from the offset memory 40a and presented to a second input of the adding circuit 38.
  • the adding circuit 38 adds the offset value to the pixel value to provide a corrected offset pixel value.
  • the scale correction is retrieved from the scale memory 41a and presented to a second input of the multiplication circuit 42.
  • the multiplication circuit 42 multiplies the offset corrected pixel value by the scale correction value.
  • the resultant corrected pixel value is transmitted to the processor and memory 52, 54 through the DMA channel 50.
  • the resultant corrected pixel value is transmitted to the video frame memory 44 and accessed by the processor 52.
  • the correction has thus been made in real time, so the computer 52 can process corrected pixels as fast as if they had not been corrected. It should be noted that no central processor cycles are used in making the correction with the technique of the present invention.
  • a computer interface 56 is coupled between the computer 52 and an illumination control 58 which in turn is coupled to an illuminator 60.
  • the illuminator control 58 is used to control (e.g. vary) the intensity and/or flash duration of the illuminator 60. This is done to generate variable lighting modes during use and during calibration.
  • correction coefficient memory 39 may be organized in any convenient way. In a preferred embodiment, it is sixteen million sixteen-bit words in which each sixteen-bit word is broken up into eight bits of offset coefficient and eight bits of scale coefficient which are stored in offset and scale memories 20a, 20b, respectively. In some embodiments, however, it may be desirable to use fewer bits of offset and more bits of scale, or vice versa. Such modifications may be desirable or even necessary if the variation in offset were small and the variation in scale were large, for example. It might be reasonable in some cases to include both scale and offset values in eight bits, with three bits used to represent an offset value and five bits used to represent a scale value.
  • addition and multiplication circuits can be implemented in any of several ways known to those skilled in the art of arithmetic circuitry.
  • the adder and multiplier circuits 38, 42 are generated using VHDL software statements compiled for a programmable gate array.
  • the adder 38, memory 39, multiplier 42, switch 46 and address generator 48 may be provided as part of a general purpose computer properly programmed to perform the functions of all of the above circuits.
  • the camera 34 could provide image signals to a properly programmed general purpose computer which would then perform all of the calibration and inspection steps described above in conjunction with Figs. 1 and 2.
  • the image acquisition system 32 does not store the image in a local memory (e.g. video frame memory 44). Rather, the image is placed in the computer's main memory through the DMA channel 50.
  • the addresses generated by the DMA channel 50 may not be identical to those of the offset and scale memories 40a, 40b.
  • the physical addresses in the memory 54 may not even be contiguous, depending upon the memory management scheme employed in the computer 52. Thus, in order for accurate corrections to be made to correct the pixel, there must be coordination between the address generator for the offset and scale memories and the addressing sequence of the DMA channel 50.
  • the correction is performed pixel by pixel as the signals representing the video frame are being transferred from the camera 34 through the compensation circuits 38, 42 and to the video frame memory 44 or the DMA channel 50. Compensating the pixels for irregularities in real time makes it possible to continue processing video information at the same rate as without correction.
  • the time it takes for the frame data to be transmitted from the camera 34 output to the frame memory 44 or DMA channel must be no greater than the time it takes the camera to capture a frame (e.g. 1/60 sec).
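  • As a rough, illustrative check of this timing constraint (using the example frame size and rate mentioned elsewhere in the text, which are assumptions here): a 640 by 480 pixel frame delivered every 1/60 of a second implies that the correction path must sustain roughly 18.4 million corrected pixels per second.

```python
# Illustrative throughput check; the frame size and rate are example figures.
frame_width, frame_height = 640, 480
frame_period_s = 1 / 60
required_pixel_rate = frame_width * frame_height / frame_period_s
print(f"{required_pixel_rate / 1e6:.1f} Mpixels/s")  # about 18.4 Mpixels/s
```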
  • Figs. 3 and 4 are a series of flow diagrams showing the processing performed by a processing apparatus which may, for example, be provided as part of an AOI system such as that shown in Fig. 1.
  • the rectangular elements (e.g. block 64 in FIG. 3) in the flow diagram(s) are herein denoted "processing blocks” and represent steps or instructions or groups of instructions. Some of the processing blocks can represent an empirical procedure or a database operation while others can represent computer software instructions or groups of instructions.
  • the diamond shaped elements in the flow diagrams (e.g. block 74 in FIG. 3) are herein denoted "decision blocks" and represent steps or instructions or groups of instructions which affect the processing of the processing blocks.
  • Some of the decision blocks can also represent an empirical procedure or a database operation while others can represent computer software instructions or groups of instructions. Thus, some of the steps described in the flow diagram may be implemented via computer software while others may be implemented in a different manner e.g. via an empirical procedure.
  • some of the blocks can represent steps performed by functionally equivalent circuits such as a digital signal processor circuit or an application specific integrated circuit (ASIC).
  • still all of the blocks in Figs. 3 and 4 can represent steps performed by a properly programmed general purpose computer or processor.
  • the flow diagram does not depict the syntax of any particular programming language. Rather, the flow diagram illustrates the functional information one of ordinary skill in the art requires to perform the steps or to fabricate circuits or to generate computer software to perform the processing required of the particular apparatus. It should be noted that where computer software can be used, many routine program elements, such as initialization of loops and variables and the use of temporary variables are not shown. It will be appreciated by those of ordinary skill in the art that unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the spirit of the invention.
  • processing begins with steps 64 and 66 in which a pixel is received from a camera and a digitized pixel value is generated.
  • a first compensation value is combined with the digitized pixel value generated in step 66 to provide a first order corrected digitized pixel value.
  • the first compensation value is provided as an offset correction value.
  • the offset correction value is combined with the digitized pixel value to provide an offset corrected digitized pixel value.
  • a second correction value is applied to the first order corrected digitized pixel value to provide a second order corrected digitized pixel value.
  • the second correction value is provided as a scale correction value which multiplies the corrected offset digitized pixel value to provide a corrected digitized pixel value.
  • step 68 or step 70 may be omitted. That is, in some applications only the first correction value is used while in other applications only the second correction value is used.
  • in step 72 the corrected digitized pixel value is provided to the video frame memory where it is then available for further processing, by an image processor for example.
  • Processing then proceeds to decision block 74 where a decision is made as to whether more pixels remain to be processed. If no additional pixels remain to be processed, then processing ends. If, on the other hand, additional pixels remain to be processed, then processing returns to step 64 where the next pixel is received from the camera and steps 66-74 are repeated until no more pixels remain to be processed.
  • Obtaining the correction coefficients in the first place requires a calibration procedure.
  • a camera to be corrected with its illuminator is aimed at a uniform neutral surface such as a gray card or a white card.
  • Uniform neutral media can be obtained from a number of sources supplying the photography and optical trade and known to those skilled in the art of photometry and photography.
  • a number of exposures of the chosen uniform neutral surface are taken at different intensities or flash durations which can be achieved via an illuminator control (e.g. illuminator control 58 in Fig. 2).
  • Another calibration procedure for automatically obtaining correction coefficients is described below in conjunction with Fig. 4.
  • the number of exposures taken in a calibration process must be sufficient to provide enough data to calculate all necessary pixel correction coefficients for each pixel. For example, if only scale is to be corrected, a single exposure is sufficient. If scale and offset are to be corrected, two exposures at different illumination values will be required. Suitable arithmetic will be performed on each pixel value in order to separate the effects of offset and scale. If the corrections to be performed are other than straight line offset and scale, more than two sets of coefficients will likely be required, and more than two calibrating exposures will also be required. The number of coefficients is particular to the kind of compensation function being used. If more than two coefficients are needed per pixel, then more than the two memories shown in the drawing will also be required.
  • linear scale and offset are corrected, and thus two memories are used (e.g. memories 40 and 41 in Fig. 2), and two calibration exposures are performed.
  • One exposure is taken at a low or zero value of illumination and one at an illumination value known not to saturate any pixel.
  • saturation can be detected automatically and its effects eliminated. Methods of detecting saturation and backing off from it are well known to those of ordinary skill in the art.
  • both "raw frames" are transferred without compensation to arrays in central computer memory.
  • the first pixel array is designated Pdark[0 .. N] and the second pixel array is designated Pbright[0..N].
  • the pixel arrays Pdark and Pbright are used to calculate two other arrays Scale[0 .. N] and Offset[0 .. N].
  • the index n (ranging over 0 .. N) represents not only a memory address but also a physical location in the field of view.
  • the Pdark values are not all zero, perhaps because the illuminator cannot be fully turned off, or there is stray light which cannot be shut out. Some pixel offsets may be negative, and others positive. The Pbright values are not all equal, because the illuminator is not uniform. A preferred embodiment of the invention corrects both of these sources of nonuniformity at the same time, provided a linear model describes the aberrations.
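  • Under this linear model, the two calibration arrays can be reduced to per-pixel correction coefficients roughly as in the sketch below; the nominal target value of 200 follows the example given later in the text, and the guard against division by zero is an added assumption.

```python
import numpy as np


def derive_coefficients(p_dark: np.ndarray, p_bright: np.ndarray, nominal: float = 200.0):
    """Compute per-pixel Offset and Scale arrays from dark and bright exposures.
    Offset cancels the dark reading; Scale maps the offset-corrected bright
    reading onto a constant nominal value (a sketch, not the patented circuit)."""
    offset = -p_dark.astype(np.float64)
    corrected_bright = p_bright.astype(np.float64) + offset
    scale = nominal / np.maximum(corrected_bright, 1e-6)   # avoid dividing by zero
    return offset, scale
```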
  • each incoming pixel will be added to the contents of the Offset memory and the result of the addition will be multiplied by the contents of the Scale memory, i.e. Truepixel(n) = (Rawpixel(n) + Offset(n)) x Scale(n).
  • the corrected pixel value Truepixel(n) is then provided to the appropriate memories and DMA channel.
  • Referring now to Fig. 4, a process for learning the scale and offset values stored in a compensation memory (e.g. compensation memory 19 in Fig. 1 or memories 39, 40a, 40b in Fig. 2) is shown.
  • a processor learns the scale and offset values for each pixel by utilizing an automatic process as next described.
  • a uniform neutral surface is positioned in a calibration position.
  • the neutral surface may be provided, for example, as a white card or a gray card (e.g. an eight inch by ten inch gray card with 18% density of the type manufactured by Eastman Kodak and referred to as Kodak Publication No. R-27).
  • the calibration position is a position normally occupied by the specimen to be measured and is within a field of view of the image acquisition and illumination systems.
  • step 84 the compensation memory is initialized. It should be appreciated by those of ordinary skill in the art that in the case where the compensation memory is provided as separate memories (e.g. by partitioning a single physical memory or by utilizing physically separate memories) all of the memories should be appropriately initialized. In the AOI system of Fig. 2, for example, all cells of the offset memory 40a should be set to zero and all cells of the scale memory 40b should be set to unity.
  • step 82 an illumination level is set to a first desired level to provide a particular illumination characteristic to the uniform neutral surface which is positioned in the calibration position.
  • the illumination level is set to zero meaning that no illumination is provided.
  • step 84 a first image frame is acquired or "grabbed.” In the case where the first desired level corresponds to zero, the first frame corresponds to a "dark" frame. At this point, the pixel values in the first frame are uncompensated and ideally all pixel values in this frame should be zero. In practice, however, some of the pixel values in the dark frame are nonzero values. The nonzero values are due to offsets caused primarily by non-uniform or stray illumination of the neutral surface.
  • step 86 the additive inverse of the value found in the first frame is stored in the compensation memory.
  • this may be accomplished by first storing the measured values in a temporary array of pixels, computing the inverse of the measured values and then storing this array in a mass storage file for future use.
  • the array of additive inverse values is also stored in the offset memory (e.g. offset memory 40a in Fig. 2) for immediate use.
  • the offset memory 40a now contains sufficient information to correct the next grabbed frame for offset values.
  • the illumination level is set to a desired second level to provide a second particular illumination characteristic to the uniform neutral surface which is positioned in the calibration position.
  • the second illumination level preferably corresponds to the brightest non-saturating level contemplated for use.
  • a second image frame is acquired or "grabbed.”
  • the second frame corresponds to a "bright" frame.
  • the pixel values are again "raw" and ideally all pixel values in this frame should have one value. In practice, however, some of the pixel values in the bright frame are not equal to one another. Because all pixel values are compensated for offset, all pixel value variations in this frame are presumed to be due to intensity aberrations in the illumination field caused by non-uniform illumination of the neutral surface.
  • optics can lead to nonuniformities because more light passes through the center of an optical path than through the sides, and variations across a detector field of view or the optical system of a camera can also lead to nonuniformities. It should be noted that when grabbing the bright frame, a check is made to determine whether any pixels are saturated. If any are, then the illumination is reduced and another frame is grabbed. It is preferable to process pixel values which are not in saturation since this results in a linear system, while processing of pixel values which are in saturation results in a non-linear system. While it is possible to calibrate using saturated values, it is more efficient in terms of time and computational efficiency to process a linear system.
  • a scale factor is calculated for each location in a temporary array of pixels.
  • the scale factor will, when multiplied by the bright-frame offset-corrected pixel value, result in a constant nominal value.
  • the constant nominal value must be less than the arithmetic processing limit of the data type being used.
  • the data type is an unsigned eight bit integer, and the constant value is two hundred. Other embodiments may use different data types and different constant values, if appropriate, without departing from the spirit of the present invention.
  • the values from the temporary array are stored in a mass-storage device for future use and are also stored in a scale memory (e.g. scale memory 40b in Fig. 2) for immediate use.
  • Processing then proceeds to processing step 94 and decision block 96 in which a next frame is grabbed and a decision is made as to whether that frame contains pixels of value equal to the chosen constant nominal value (in the example given above, this corresponds to a value of two hundred).
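  • A compact software analogue of this calibration flow might read as follows; grab_frame() and set_illumination() are hypothetical stand-ins for the camera and the illuminator control, and the saturation back-off factor is an added assumption.

```python
import numpy as np

NOMINAL = 200      # constant nominal value from the example above
SATURATED = 255    # saturation level assumed for an unsigned 8-bit pixel


def calibrate(grab_frame, set_illumination, bright_level=1.0):
    """Sketch of the automatic calibration flow: grab a dark frame to learn the
    offsets, grab a non-saturating bright frame to learn the scales, then verify."""
    set_illumination(0.0)                        # first exposure: "dark" frame
    dark = grab_frame().astype(np.float64)
    offsets = -dark                              # additive inverse of the dark values

    while True:                                  # second exposure: "bright" frame
        set_illumination(bright_level)
        bright = grab_frame().astype(np.float64)
        if not np.any(bright >= SATURATED):      # back off if any pixel saturates
            break
        bright_level *= 0.9

    scales = NOMINAL / np.maximum(bright + offsets, 1e-6)

    # Verification: a further corrected frame of the neutral surface should sit
    # close to the chosen nominal value.
    check = (grab_frame().astype(np.float64) + offsets) * scales
    print("maximum deviation from nominal:", float(np.max(np.abs(check - NOMINAL))))
    return offsets, scales
```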
  • an inspection program may have several different lighting modes. For example, different lighting modes may be used to light a specimen being inspected from different angles in order to accentuate or wash out certain features. In each lighting mode, some of the available lighting elements are turned on and others are turned off. It should be noted that the shape of the light distribution may be different for each lighting mode, since each lighting mode uses a different combination of lights, and each different light possibly has its own individual distribution.
  • the lighting system is a dome having numerous light-emitting diodes all aimed at the specimen but separately controlled, similar to the dome described in US Patent nos. 5,060,065 and 5,254,421.
  • the various sets of correction coefficients can each be brought into play at the time when the corresponding lighting mode is in use by using, for example, a bank switch (e.g. bank switch 46 in Fig. 2).
  • the bank switch could be a distinct item (as shown in Fig. 2) or could simply be an extension of an address generator (e.g. the address generator 48 in Fig. 2).
  • the bank switch is set to zero for lighting mode zero, one for lighting mode one, and so forth up to thirty-two (the number thirty-two is arbitrary and could in practice be larger or smaller than thirty-two without violating the spirit of this invention).
  • because each picture in one embodiment is 640 by 480 pixels, it consumes in round numbers half a million addresses, and the memory storing correction coefficients must be large enough to store all the coefficients needed for half a million pixels. Thirty-two banks of a half million addresses each occupy 16 million addresses.
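  • As a quick arithmetic check of this sizing (illustrative only): 640 x 480 is 307,200 pixels, which the text rounds up to half a million addresses per bank, and thirty-two such banks come to roughly sixteen million addresses.

```python
# Illustrative sizing arithmetic using the round numbers from the text.
pixels_per_frame = 640 * 480        # 307,200 pixels
addresses_per_bank = 500_000        # "half a million" addresses, in round numbers
banks = 32
print(pixels_per_frame, addresses_per_bank * banks)   # 307200 16000000
```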
  • a nonlinear compensation could be provided as well, using any of a number of well known polynomial, piecewise-linear, tabular or function-based compensation functions. It is understood that the real-time arithmetic functions could take many forms, and working implementations could easily be formulated by one of ordinary skill in the art of framegrabber design. The specific realization of the circuits and software must operate in real time, providing arrays of corrected frame pixel values ready for immediate analysis by a processor.
  • One advantage of the present invention is that the cost of illumination hardware can be reduced. Relatively low cost illuminators tend to have worse nonuniformities than relatively expensive illuminators.
  • the present invention makes it possible to tolerate greater irregularity in the illumination, and therefore makes it possible to employ relatively low cost illuminators while still achieving an accuracy which is equivalent to that obtained with relatively high cost illuminators.
  • a further advantage of the present invention is that in the case where the image acquisition system includes both a camera and a digitizer, the invention simultaneously compensates for linear nonuniformities in the camera and in the digitizer as well as for illumination nonuniformity. If the digitizer has a constant offset, for example, that offset would be silently removed in the calibration process described herein.

Abstract

A system and method for compensating pixel values in an inspection machine for inspecting printed circuit boards includes an image acquisition system for providing pixel values from a digitized image to a compensation circuit. The compensation circuit applies one or more compensation values to the digitized pixel values to provide compensated digitized pixel values for storage in a memory. The compensated digitized pixel values are then available for use by an image processor which implements inspection techniques during a printed circuit board manufacturing process. With this technique, the system corrects the errors on a pixel by pixel basis as the pixel values representing an image of a printed circuit board are transferred from the image acquisition system to the memory.

Description

COMPENSATION SYSTEM AND RELATED TECHNIQUES FOR USE IN A PRINTED CIRCUIT BOARD INSPECTION SYSTEM
STATEMENTS REGARDING FEDERALLY SPONSORED RESEARCH Not Applicable.
CROSS-REFERENCE TO RELATED APPLICATIONS Not applicable.
FIELD OF THE INVENTION
This invention relates generally to automated optical inspection (AOI) systems and more particularly to illumination systems used in AOI systems.
BACKGROUND OF THE INVENTION As is known in the art, an automated optical inspection (AOI) system typically includes an illumination system which projects or otherwise provides light to illuminate a specimen being inspected and a camera which captures an image of the specimen and converts it to electrical signals. One or more frame grabbers transfer the electrical signals representing the image to a computer or other processing device for further processing.
It is difficult, however, to produce uniform illumination over the field of view of a camera. A processing device which is unaware of the specific nonuniformities in lighting cannot differentiate between those patterns of light in the image which are due to the appearance of the specimen and those patterns of light in the image which are due to irregularities in the illumination system.
For example, an image of a uniformly illuminated plain grey surface ideally has pixel values which are all equal. In practice, however, the pixel values will not be equal unless the camera is ideal and the illumination is perfectly uniform. The computer or other processing device receives a "true" image only if the effects of nonuniform illumination can somehow be cancelled out. Nonuniformity of illumination is thus a source of noise in AOI systems. There are a number of reasons for nonuniform illumination in AOI systems. A typical illumination system includes one or more fluorescent lamps. One problem with fluorescent and other types of lamps is that the intensity of the lamp varies along the length of the lamp. The intensity also varies with the amount of current provided to the lamp, the age of the lamp, specific variations in the manufacture of the lamp including its shape, and the temperature at which the lamp operates. It is a daunting prospect to attempt to regulate the light at one point, let alone regulate the light at all points on the object. Thus, suffice it to say that for a variety of reasons, it is relatively difficult to produce uniform illumination over the field of view of a camera.
One technique to overcome this illumination problem is to use lamps which provide relatively uniform intensity. Such lamps, however, are relatively expensive. More importantly, perhaps, such lamps still have variations in intensity, although the variations are less severe than the intensity variations in relatively low cost lamps.
It would, therefore, be desirable to provide a system for compensating nonuniformity in an image due to variations in illumination characteristics. It would also be desirable to provide an AOI system for printed circuit boards which compensates for variations in illumination characteristics. It would be further desirable to provide a system which performs compensation in real time.
SUMMARY OF THE INVENTION
In accordance with the present invention, an automated optical inspection (AOI) system includes an image acquisition system for capturing an image and for providing a digital signal to a compensation circuit which compensates the pixel values to provide compensated or corrected pixel values. The compensation circuit provides the compensated pixel values in real time to a video frame memory and to a direct memory access (DMA) channel. The DMA channel transfers the compensated pixel values to a storage device which is coupled to or provided as part of a processor. With this particular arrangement, an AOI system which corrects errors in a digital image due to illumination is provided. The system corrects the errors on a pixel by pixel basis as signals representing the video frame are being transferred from the image acquisition system to a frame memory. By storing compensation values in the compensation circuit and coupling the compensation circuit to a DMA channel, it is possible to compensate the pixel values in real time as the pixel values are transmitted from the image acquisition system to the frame memory. Compensating the pixels for irregularities in real time makes it possible to continue processing video information obtained by the image acquisition system at the same rate as without correction. In one embodiment, the image acquisition system includes a digital camera and in another embodiment the image acquisition system includes an analog camera having a video digitizer coupled thereto.
The details of the compensation memory architecture vary with the nature of the aberrations expected during inspection. For example, if a pure linear correction with no offset errors is expected, only a scale factor memory is needed. On the other hand, if offset errors are expected, then an offset memory is also required (e.g. to eliminate the effects of unexcludable stray light). If a nonlinear correction is needed, additional scale memories and associated arithmetic circuits to perform scale factor computations may be needed. In a preferred embodiment, only offset and scale factor are linearly corrected. It should be appreciated, however, that the concept of performing real time correction can be extended beyond simple linear calculations or reduced to simpler linear multiples without offset correction.
In preferred embodiments, the system uses a plurality of lighting modes and it may be desirable to provide the compensation memory from a plurality of separate banks of memories with each bank storing a set of correction or compensation coefficients. In this case, each lighting mode uses a separate bank and thus a separate set of compensation coefficients. When multiple memory banks are used, a bank switch coupled between the processor and the compensation memory allows the processor to select a particular one or ones of the banks of compensation memory. The appropriate memory bank is selected by the processor prior to the time a frame is acquired. In particular, the processor selects particular values in the memory to be used according to which lighting mode will be used.
In one embodiment, the compensation circuit includes a compensation memory coupled to an adder circuit and a multiplier circuit. The image acquisition circuit provides a pixel value (in the form of a digital signal) to a first port of the adder circuit and the compensation memory provides an offset value to a second port of the adder circuit. The adder circuit combines the two values fed thereto and provides a partially compensated pixel value to the input of the multiplication circuit. The multiplication circuit further compensates the pixel value prior to providing the fully compensated pixel value to the video frame memory and the DMA channel.
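By way of a non-limiting illustration, the offset-and-scale operation performed by the adder and multiplier circuits can be expressed in software. The following Python sketch is illustrative only; the function name, the eight-bit clamping and the example values are assumptions introduced here for clarity and are not features of the circuit itself.

    def compensate_pixel(raw_pixel, offset, scale, max_value=255):
        # Adder circuit: combine the digitized pixel value with the offset value.
        partially_compensated = raw_pixel + offset
        # Multiplier circuit: apply the scale value to the partially compensated pixel.
        compensated = partially_compensated * scale
        # Clamp to the range of the pixel data type (an eight-bit word is assumed here).
        return max(0, min(max_value, int(round(compensated))))

    # Example: a raw pixel of 90 with an offset of -2 and a scale of 1.1 yields 97.
    print(compensate_pixel(90, -2, 1.1))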
A further advantage of the present invention is that the cost of illumination hardware can be reduced. As mentioned above, relatively low cost illuminators tend to have worse nonuniformities than relatively expensive illuminators. The present invention, however, makes it possible to tolerate greater irregularity in the illumination, and therefore makes it possible to employ relatively low cost illuminators while still achieving an accuracy which is equivalent to that obtained with relatively high cost illuminators.
In accordance with a further aspect of the present invention, an AOI system includes an image acquisition system having an output port coupled to a first input port of an adder circuit. A second input of the adder circuit is coupled to an offset memory and an output of the adder circuit is coupled to a first input of a multiplier circuit. A second input of the multiplier circuit is coupled to a scale memory and an output of the multiplier circuit is coupled to a video frame memory and a DMA channel. The DMA channel is coupled to a processor memory. The processor memory may be provided, for example, either as a main memory on a so-called "motherboard" or as a frame grabber memory on a frame grabber module which includes both a frame grabber processor and a frame grabber memory.
With this particular arrangement, an AOI system which corrects pixel values in a digital image on a pixel by pixel basis as the signals representing the video frame are transferred from the image acquisition system to the frame memory is provided. The adder, multiplier and memory circuits operate at a speed which is sufficient to allow the system to compensate pixel values in real time, thus allowing compensated or corrected pixel values to be stored in the memory for later processing.
A further advantage of the present invention is that in the case where the image acquisition system includes both a camera and a digitizer, the invention simultaneously compensates for linear nonuniformities in the camera and in the digitizer as well as for illumination nonuniformity. If the digitizer had a constant offset, for example, that offset would be removed in the calibration process described here.
The digitized pixel values contain signal information from the inspected specimen and error information caused by uneven illumination. The digitized pixel values are presented to the first input of the adding circuit.
The output of the adding circuit is a pixel value which has been corrected for offset.
It is presented to the first input of the multiplication circuit. The output of the multiplication circuit is a pixel value which has been corrected for scale and offset.
The addition and multiplication circuits can be implemented in any of several ways known to those skilled in the art of arithmetic circuitry. In the preferred embodiment the adder and multiplier can be generated using Very High Speed Integrated Circuit Hardware Description Language (VHDL) software statements compiled for a programmable gate array.
The correction coefficient memories may be organized in any convenient way. In the preferred embodiment, it is sixteen million sixteen-bit words in which each sixteen-bit word is broken up into eight bits of offset coefficient and eight bits of scale coefficient. Embodiments are envisioned in which fewer bits of offset are used and more bits of scale, or vice versa - this would be appropriate if the variation in offset were small and the variation in scale were large. It might be reasonable in some cases to represent both scale and offset values in a total of eight bits, with three bits for offset and five bits for scale. Other ways of managing multiple blocks of correction coefficients for multiple lighting modes are contemplated as well. For example, if memory cost were important and lighting modes were infrequently changed, the time of reloading a lighting mode's information would not be important. The bank switch could be eliminated (or set to a constant value) and the entire memory could be reloaded from the computer for each separate lighting mode. This would allow the invention to be realized with smaller memories, but would use time in reloading the memories, as opposed to the much faster bank switch. In the case where only one lighting mode is used, only one memory bank (or one set of memory banks) is needed.
The output of the multiplication circuit is a corrected pixel value - the value the pixel would have had if the illumination had been uniform. The pixel has been corrected for offset and for scale. The corrected value is stored in frame memory in lieu of the original raw digitized pixel, as would have been stored by a conventional framegrabber.
The correction has been made in real time, so the computer can process corrected pixels as fast as if they had not been corrected. No central processor cycles are used in making the correction.
In accordance with a further aspect of the present invention, a method for correcting errors in an AOI system includes the steps of (a) acquiring a frame and digitizing each pixel in the frame, (b) generating an address to indicate where the pixel is to be stored in a video frame memory, (c) retrieving correction values related to the pixel position and (d) combining the correction values with the digitized pixel value to provide a corrected pixel value.
With this particular arrangement, a method for real-time compensation of pixel values in an AOI system is provided. The correction values correspond to offset and scale correction values. The correction values are combined with the pixel values by first adding an offset correction value to the digitized pixel value to provide a partially corrected digitized pixel value and then by multiplying the corrected digitized pixel value by a scale correction value to provide a fully corrected pixel value.
BRIEF DESCRIPTION OF THE DRAWINGS The foregoing features of this invention, as well as the invention itself, may be more fully understood from the following description of the drawings in which:
Fig. 1 is a block diagram of an automated optical inspection system which performs correction of images;
Fig. 2 is a block diagram of an automated optical inspection system which includes an offset memory and a scale memory for use in performing correction on a pixel by pixel basis as the signals representing the video frame are being loaded into frame memory from a camera;
Fig. 3 is a flow diagram of a method to perform real time compensation; and
Fig. 4 is a flow diagram of a method for determining compensation values.
DETAILED DESCRIPTION OF THE INVENTION
Before describing an automated optical inspection (AOI) system in accordance with the present invention, and the operations performed to make corrections due to non-uniform illumination and other non-time varying errors, some introductory concepts and terminology are explained.
An analog or continuous parameter image such as a still photograph may be represented as a matrix of digital values and stored in a storage device of a computer or other digital processing device. Thus, as described herein, the matrix of digital data values is generally referred to as a "digital image" or more simply an "image" and may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of energy at different wavelengths in a scene. Similarly, an image sequence such as a scanned view of a printed circuit board, for example, may be converted to a digital video signal as is generally known. The digital video signal is provided from a sequence of discrete digital images or frames. Each frame may be represented as a matrix of digital data values which may be stored in a storage device of a computer or other digital processing device. Thus, in the case of video signals, as described herein, a matrix of digital data values is generally referred to as an "image frame" or more simply an "image" or a "frame." Each of the frames in the digital video signal may be stored in a digital data storage device, such as a memory for example, as an array of numbers representing the spatial distribution of energy at different wavelengths in a scene in a manner similar to the manner in which an image of a still photograph is stored.
Whether provided from a still photograph or a video sequence, each of the numbers in the array corresponds to a digital word (e.g. an eight-bit binary value) typically referred to as a "picture element" or a "pixel" or as "image data." The image may be divided into a two dimensional array of pixels with each of the pixels represented by a digital word.
Reference is also sometimes made herein to images having only a luminance component. Such images are known as grey scale images. Thus, a pixel represents a single sample which is located at specific spatial coordinates in the image. It should be noted that the techniques described herein may be applied equally well to either grey scale images or color images.
In the case of a grey scale image, the value of each digital word corresponds to the intensity of the pixel and thus the image at that particular pixel location. In the case of a color image, reference is sometimes made herein to each pixel being represented by a predetermined number of bits (e.g. eight bits) which represent the color red (R bits), a predetermined number of bits (e.g. eight bits) which represent the color green (G bits) and a predetermined number of bits (e.g. eight bits) which represent the color blue (B-bits) using the so-called RGB color scheme in which a color and luminance value for each pixel can be computed from the RGB values. Thus, in an eight bit color RGB representation, a pixel is represented by a twenty-four bit digital word.
It is of course possible to use greater or fewer than eight bits for each of the RGB values. It is also possible to represent color pixels using other color schemes such as a hue, saturation, brightness (HSB) scheme or a cyan, magenta, yellow, black (CMYK) scheme. It should thus be noted that the techniques described herein are applicable to a plurality of color schemes including but not limited to the above mentioned RGB, HSB, CMYK schemes as well as the Luminosity and color axes a & b (Lab), YUV color difference color coordinate system, the Karhunen-Loeve color coordinate system, the retinal cone color coordinate system and the X, Y, Z scheme.
Reference is also sometimes made herein to an image as a two-dimensional pixel array. An example of an array size is 512 pixels x 512 pixels. One of ordinary skill in the art will of course recognize that the techniques described herein are applicable to various sizes and shapes of pixel arrays including irregularly shaped pixel arrays.
An image region or more simply a region is a portion of an image. For example, if an image is provided as a 32 X 32 pixel array, a region may correspond to a 4 X 4 portion of the 32 X 32 pixel array.
Referring now to Fig. 1, an automated optical inspection (AOI) system 10 includes an image acquisition system 12 which obtains analog or continuous parameter images of a specimen to be inspected. The specimen is illuminated by an illumination system 13. The image acquisition system 12 provides a digital signal (e.g. a stream of digital bits) to a compensation circuit 14.
The compensation circuit 14 includes a compensation function circuit 16 coupled to a compensation memory circuit 19. The compensation memory circuit 19 has stored therein compensation values which are provided to the compensation function circuit 16. The compensation function circuit 16 receives the compensation values and appropriately applies them to pixel values being provided thereto from the image acquisition system 12. The compensation values stored in the compensation memory 19 can be generated via a system calibration technique such as the techniques described below in conjunction with Figs. 3 and 4.
The compensation function circuit 16 includes one or more compensating circuits 18a - 18N generally denoted 18. In Fig. 1, compensating circuit 18a corresponds to an offset circuit 18a and compensating circuits 18b - 18N correspond to scale circuits 18b - 18N. Although only one offset circuit is here shown, it should be noted that in some embodiments, it may be desirable to use more than one offset circuit. The operation of the offset and scale circuits 18a-18N will be described in detail below. Suffice it here to say that the offset circuit 18a and scale circuits 18b-18N receive compensation values from the compensation memory 19 and apply the compensation values to pixel values received from the image acquisition system 12. The compensation circuit 14 thus corrects for errors due to non-uniform illumination and other errors.
The compensation memory 19 can be provided from a single memory or a bank of memories. In Fig. 1, compensation memory 19 is shown provided from a plurality of memories 20 and 21a-21M. In this particular embodiment, the memory 20 is provided as an offset memory and memories 21a-21M are provided as scale memories.
It should be appreciated that each of the individual memories 20 and 21a-21M could comprise a bank of memories with each of the different memory banks having different compensation values stored therein. For example, a first bank of memory 20 can hold compensation values to be used for a first lighting mode and a second different bank of memory 20 can hold compensation values used for a second different lighting mode. The particular bank of offset memory 20 selected for use during an inspection process depends upon the lighting mode being used in the system. When the system is operating in the first lighting mode, the first memory bank of offset memory 20 is used and when the system is operating in the second lighting mode, the second memory bank of offset memory 20 is used. Thus, different compensation values can be used for each lighting mode. Banks of scale memories 21a-21M could operate in a similar manner.
Although N scale circuits 18b-18N are shown in Fig. 1, those of ordinary skill in the art will appreciate that one or more scale circuits 18b-18N could be used. Similarly, although M scale memories 21a-21M are shown in Fig. 1, those of ordinary skill in the art will appreciate that one or more scale memory circuits could be used. Each of the one or more offset memories and scale memories provide compensation values to one or more of the offset and scale circuits.
After the compensation circuit 14 compensates the pixel values from the image acquisition circuit 12, the compensated pixel values are transmitted from the compensation circuit 14 to a memory for storage.
In one embodiment, the compensated pixel values are transmitted from the compensation circuit 14 through a direct memory access (DMA) channel 24 to a memory 26 of a processor 28. The processor 28 may be provided, for example, as a general purpose computer, an image processing processor or any other type of processor capable of processing image information and performing all or portions of an inspection process.
It should be appreciated, however, that other embodiments are also possible. For example, in another embodiment, the system can include a frame grabber module. The frame grabber module could include the processor 28 and the memory 26. In this case, the memory 26 would correspond to a video frame memory 26 on the frame grabber module. Thus, in this embodiment, the DMA channel 24 is coupled between the compensation circuit 14 and the frame grabber module (which includes the processor 28 and the video frame memory 26) and corrected pixel values are transferred via a DMA process directly from the compensation circuit 14 to the frame grabber module memory 26.
In still another embodiment, a frame grabber module could include a video frame memory (illustrated as video frame memory 27 in Fig. 2) and the processor 28 can be provided as part of a module separate from but coupled to the frame grabber module. Thus, in such an embodiment, the frame grabber module is coupled between the compensation circuit 14 and the processor 28 and corrected pixel values are transferred directly from the compensation circuit 14 to the frame grabber module memory (illustrated as video frame memory 27) and subsequently accessed by the processor 28. Thus, in this case, the DMA channel 24 can be omitted.
In operation, prior to frame acquisition, the processor 28 selects particular memory addresses in which compensation values for particular pixel regions are stored. It should be appreciated that to compensate the pixel values in real time, it is necessary to provide at least one memory with correction information in it. Thus, the compensation memory 19 (e.g. the memories 20 and 21a-21M) has stored therein compensation values for particular pixel regions.
The correction is performed pixel by pixel as the signals representing the video frame are being transferred from the image capture system 12 through the compensation circuit 14 and to the video frame memory 27 or DMA channel 24. Compensating the pixels for irregularities in real time makes it possible to continue processing video information at the same rate as without correction.
It should be understood by those of ordinary skill in the art, that the details of the architecture of the compensation memory 19 vary with the nature of the aberrations expected. For example, if a pure linear correction with no offset errors is expected, then the error values can be represented mathematically as y = mx, where m corresponds to the scale factor. Thus, in this case, the compensation memory 19 can be provided from a single scale memory (e.g. memory 21a).
On the other hand, if a pure linear correction with an offset error is expected, then the error values can be mathematically represented as y = mx + b, where m corresponds to the scale factor and b corresponds to the value of the offset. In this case, the compensation memory 19 can be provided from a single scale memory (e.g. memory 21a) and an offset memory (e.g. offset memory 20 which includes, for example, compensation values to eliminate the effects of unexcludable stray light). If a nonlinear correction is needed, more memories (e.g. scale memories 21a - 21M) and more compensation function circuits (e.g. compensation function circuits 18a - 18N) may be needed. Thus, the compensation scheme of the present invention can be extended beyond simple linear calculations or can be reduced to simpler linear multiples without offset correction.
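The following Python sketch, offered as a non-limiting illustration, summarizes the three cases just described. The polynomial form used for the nonlinear case is merely one possible extension and is an assumption made here for the sake of a concrete example.

    # Scale-only correction (error model y = mx): a single scale memory and a multiplier.
    def correct_scale_only(raw_pixel, scale):
        return scale * raw_pixel

    # Linear correction with offset (error model y = mx + b): an offset memory feeding
    # an adder, followed by a scale memory feeding a multiplier.
    def correct_linear(raw_pixel, offset, scale):
        return scale * (raw_pixel + offset)

    # One possible nonlinear extension: a low-order polynomial, which would require
    # additional memories and arithmetic circuits for the extra coefficients.
    def correct_polynomial(raw_pixel, coefficients):
        return sum(c * raw_pixel ** k for k, c in enumerate(coefficients))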
It should be appreciated that compensation circuit 14, memory 26 and processor 28 may be provided as part of a general purpose computer properly programmed to perform the functions of all of the above circuits.
Referring now to Fig. 2, an automated optical inspection (AOI) system 30 includes an image acquisition system 32 which obtains analog or continuous parameter images of a specimen 33 to be inspected. The image capture system 32 may be provided, for example, as a digital camera (i.e. a camera which captures an analog input signal and provides a digital signal output). Alternatively, as shown in Fig. 2, the image acquisition system 32 may also be provided from a camera 34 having a video digitizer 36 coupled thereto. The video digitizer 36 receives analog signals from the camera 34 and provides digital signals at an output thereof. In the embodiment shown in Fig. 2, the camera 34 obtains a new image frame every 1/60 second. Those of ordinary skill in the art will appreciate of course that cameras which obtain new image frames at rates faster or slower than one frame every 1/60 second may also be used.
The video digitizer 36 receives the analog signal from the camera 34, samples the analog signal, converts the analog signal into a stream of bits and provides a digital signal at an output thereof. Thus, in this manner, each pixel of every image obtained by the camera 34 is digitized.
Each of the freshly digitized pixel values, which contain signal information from the inspected specimen 33 and error information caused by uneven illumination, are presented to a first input of an adder circuit 38. Error information can be generated, for example, by stray light which reaches the inspection or calibration area or by pixel sensitivity variations across a camera detector (e.g. a CCD, a CMOS detector or any other type of detector suitable for use in an image acquisition system).
A correction coefficient memory 39, having correction coefficient values stored therein, comprises a bank of offset memories 40a - 40N and a bank of scale memories 41a - 41M. The correction coefficient memory 39 is coupled to a second input port of the adder circuit 38.
When a pixel is transferred from the image acquisition system 32 to the first input of the adder 38, a selected one of the offset memories 40a - 40N provides an offset value to the second port of the adder 38. It should be appreciated that the offset values can be transferred from the selected one of the offset memories 40a - 40N to the second port of the adder circuit 38 either before, after or at the same time the pixel value is transferred from the image acquisition system to the first port of the adder circuit 38. The particular one of the offset memories 40a - 40N which is selected is determined in a manner described below. The adder 38 combines the values provided to the first and second inputs and provides an offset corrected digitized pixel value to a first input of a multiplier circuit 42. Thus, the output of the adding circuit 38 is a pixel value which has been corrected for offset. It is presented to the first input of the multiplication circuit 42.
The second port of the multiplier circuit 42 is coupled to the correction coefficient memory 39. In particular, a selected one of the scale memories 41a - 41M provides a scale correction value to the second input of the multiplier circuit 42. It should be appreciated that the scale values can be transferred from the selected one of the scale memories 41a - 41M to the second port of the multiplier circuit 42 either before, after or at the same time the pixel value is transferred from the adder circuit 38 to the first port of the multiplier circuit 42.
The multiplier circuit 42 provides a corrected pixel value (i.e. the value the pixel would have had if the illumination had been uniform) at an output thereof. The pixel has been corrected for offset and for scale. As mentioned above, in the embodiment of Fig. 2, the offset memory 40 is provided from a bank of memories 40a-40N and the scale memory 41 is provided from a bank of memories 41a-41M. Each of the different offset memory banks 40a-40N has different compensation values stored therein. The inspection system 30 operates in a plurality of different lighting modes. Depending upon the particular lighting mode being used, the compensation values are selected from a predetermined one of the offset memories 40a-40N. For example, offset memory bank 40a can hold compensation values used for a first lighting mode and offset memory bank 40b can hold compensation values for a second different lighting mode. The particular compensation values used during an inspection process thus depend upon the lighting mode being used in the system.
Similarly, each of the different scale memory banks 41a-41M has different compensation values stored therein. Depending upon the particular lighting mode being used, the compensation values are selected from a predetermined one of the scale memories 41a-41M. For example, scale memory bank 41a can hold compensation values used for a first lighting mode and scale memory bank 41b can hold compensation values for a second different lighting mode.
The output of the multiplier circuit 42 is coupled to a DMA channel 50 through which the corrected pixel data is transferred to a memory 54 of an image processing computer 52. The transfer is made via a DMA process. The computer 52 may be provided, for example, as a general purpose computer, an image processing processor or any other type of processor capable of processing image information and performing all or portions of an inspection process.
It should be appreciated, however, that other embodiments are also possible. For example, in another embodiment, the processor 52 and memory 54 could be provided as part of a frame grabber module. Thus, in this embodiment, the DMA channel 50 would be coupled between the multiplier circuit 42 and the frame grabber module (which includes the processor 52 and the memory 54). In still another embodiment, a frame grabber module could include a video frame memory (illustrated in phantom as video frame memory 44 in Fig. 2) and the processor 52 can be provided as part of a module separate from but coupled to the frame grabber module. Thus, in such an embodiment, the video frame memory 44 of the frame grabber module is coupled between the multiplier circuit 42 and the processor 52 and corrected pixel values are transferred directly from the multiplier circuit 42 to the video frame memory 44 of the frame grabber module and subsequently accessed by the processor 52. Thus, in this case, the corrected value is stored in the video frame memory 44 in lieu of the original raw digitized pixel, as would have been stored by a conventional framegrabber, and the DMA channel 50 can be omitted.
A bank switch 46 coupled between a computer interface and the memory 39 allows the computer to select the appropriate one of the several memory banks 40a-40N and 41a-41M. Prior to frame acquisition time, the computer 52 selects appropriate ones of the banks 40a-40N and 41a-41M.
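The bank selection step can be modelled in software as follows. This Python sketch is illustrative only; the use of two lighting modes, the array sizes and the names offset_banks and scale_banks are assumptions made here for the example, and the actual bank switch 46 is a hardware element.

    # Two lighting modes are assumed here for brevity; the described embodiment
    # contemplates up to thirty-two banks.
    NUM_PIXELS = 640 * 480

    offset_banks = [[0] * NUM_PIXELS for _ in range(2)]    # one offset bank per lighting mode
    scale_banks = [[1.0] * NUM_PIXELS for _ in range(2)]   # one scale bank per lighting mode

    def select_bank(lighting_mode):
        # Performed by the computer prior to frame acquisition, analogous to setting
        # the bank switch to the number of the lighting mode.
        return offset_banks[lighting_mode], scale_banks[lighting_mode]

    offsets, scales = select_bank(lighting_mode=1)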
An address generator 48 is coupled to the correction coefficient memory and to the DMA channel 50 (or the video frame memory 44 in the embodiments which include the video frame memory and omit the DMA channel). The address generator 48 is also coupled to a computer interface 56 which in turn is coupled to an illuminator control 58. The illuminator control 58 controls an illuminator 60 during the calibration process and implements different lighting modes.
The address generator 48 generates an address to activate the video frame memory 44 in which the pixel is to be stored. The address indicates where the corrected digitized pixel value is to be stored in the video frame memory 44. The address generator 48 also activates the offset and scale memories in the selected bank (e.g. offset memory 40a and scale memory 41a).
In operation in a first lighting mode, in response to the address generator 48, the offset correction value is retrieved from the offset memory 40a and presented to the second input of the adding circuit 38. The adding circuit 38 adds the offset value to the pixel value to provide an offset corrected pixel value. Also in response to the address generator 48, the scale correction is retrieved from the scale memory 41a and presented to the second input of the multiplication circuit 42. The multiplication circuit 42 multiplies the offset corrected pixel value by the scale correction value. In an embodiment which includes a DMA channel, the resultant corrected pixel value is transmitted to the processor and memory 52, 54 through the DMA channel 50. In an embodiment which does not include a DMA channel, the resultant corrected pixel value is transmitted to the video frame memory 44 and accessed by the processor 52. In either case, the correction has thus been made in real time, so the computer 52 can process corrected pixels as fast as if they had not been corrected. It should be noted that no central processor cycles are used in making the correction with the technique of the present invention.
A computer interface 56 is coupled between the computer 52 and an illumination control 58 which in turn is coupled to an illuminator 60. The illuminator control 58 is used to control (e.g. vary) the intensity and/or flash duration of the illuminator 60. This is done to generate variable lighting modes during use and during calibration.
It should be appreciated that the correction coefficient memory 39 may be organized in any convenient way. In a preferred embodiment, it is sixteen million sixteen-bit words in which each sixteen-bit word is broken up into eight bits of offset coefficient and eight bits of scale coefficient which are stored in the offset and scale memories 40a and 41a, respectively. In some embodiments, however, it may be desirable to use fewer bits of offset and more bits of scale, or vice versa. Such modifications may be desirable or even necessary if the variation in offset were small and the variation in scale were large, for example. It might be reasonable in some cases to include both scale and offset values in eight bits, with three bits used to represent an offset value and five bits used to represent a scale value.
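As a non-limiting illustration of one possible organization, the following Python sketch packs an eight-bit offset coefficient and an eight-bit scale coefficient into a single sixteen-bit word. The placement of the offset in the high byte, the two's-complement treatment of the offset and the fixed-point scale resolution of 1/128 are assumptions made here only to produce a concrete example.

    def pack_word(offset, scale):
        offset_byte = offset & 0xFF                    # signed offset, two's complement
        scale_byte = int(round(scale * 128)) & 0xFF    # scale in [0, 2) with 1/128 steps
        return (offset_byte << 8) | scale_byte

    def unpack_word(word):
        offset_byte = (word >> 8) & 0xFF
        offset = offset_byte - 256 if offset_byte > 127 else offset_byte
        scale = (word & 0xFF) / 128.0
        return offset, scale

    # Example: an offset of -3 and a scale of 1.25 survive a round trip.
    print(unpack_word(pack_word(-3, 1.25)))            # -> (-3, 1.25)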
Other ways of managing multiple blocks of correction coefficients for multiple lighting modes are contemplated as well. For example, if memory costs were important and lighting modes were infrequently changed, the time of reloading a lighting mode's information would not be important. In this case, the bank switch 46 could be eliminated (or set to a constant value) and the entire memory 39 could be reloaded from the computer 52 for each separate lighting mode. This would allow the calibration technique to be realized with smaller memories, but would use time in reloading the memories, as opposed to the much faster bank switch 46.
It should also be appreciated that the addition and multiplication circuits can be implemented in any of several ways known to those skilled in the art of arithmetic circuitry. In a preferred embodiment, the adder and multiplier circuits 38, 42 are generated using VHDL software statements compiled for a programmable gate array.
Likewise, the adder 38, memory 39, multiplier 42, switch 46 and address generator 48 may be provided as part of a general purpose computer properly programmed to perform the functions of all of the above circuits. In short, the camera 34 could provide image signals to a properly programmed general purpose computer which would then perform all of the calibration and inspection steps described above in conjunction with Figs. 1 and 2.
It should further be appreciated that in this exemplary embodiment of the inspection system, the image acquisition system 32 does not store the image in a local memory (e.g. video frame memory 44). Rather, the image is placed in the computer's main memory through the DMA channel 50. It should thus be appreciated that the addresses generated by the DMA channel 50 may not be identical to those of the offset and scale memories 40a-40N and 41a-41M. The physical addresses in the memory 54 may not even be contiguous, depending upon the memory management scheme employed in the computer 52. Thus, in order for accurate corrections to be made to each pixel, there must be coordination between the address generator for the offset and scale memories and the addressing sequence of the DMA channel 50.
Also, in the embodiment shown in Fig. 2, the correction is performed pixel by pixel as the signals representing the video frame are being transferred from the camera 34 through the compensation circuits 38, 42 and to the video frame memory 44 or the DMA channel 50. Compensating the pixels for irregularities in real time makes it possible to continue processing video information at the same rate as without correction. Thus, the time it takes for the frame data to be transmitted from the output of the camera 34 to the frame memory 44 or the DMA channel 50 must be no greater than the time in which the camera captures a frame (e.g. 1/60 sec).
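To give a sense of the throughput this constraint implies, the following short Python calculation assumes the 640 by 480 frame size mentioned later in this description and a 1/60 second frame time; the numbers are illustrative only and other frame sizes and rates are equally possible.

    pixels_per_frame = 640 * 480        # 307,200 pixels per frame
    frame_time = 1.0 / 60.0             # seconds per frame
    pixel_rate = pixels_per_frame / frame_time
    print(pixel_rate)                   # roughly 18.4 million corrected pixels per second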
Figs. 3 and 4 are a series of flow diagrams showing the processing performed by a processing apparatus which may, for example, be provided as part of an AOI system such as that shown in Fig. 1. The rectangular elements (e.g. block 64 in FIG. 3) in the flow diagram(s) are herein denoted "processing blocks" and represent steps or instructions or groups of instructions. Some of the processing blocks can represent an empirical procedure or a database operation while others can represent computer software instructions or groups of instructions. The diamond shaped elements in the flow diagrams (e.g. block 74 in FIG. 3) are herein denoted "decision blocks" and represent steps or instructions or groups of instructions which affect the processing of the processing blocks. Some of the decision blocks can also represent an empirical procedure or a database operation while others can represent computer software instructions or groups of instructions. Thus, some of the steps described in the flow diagram may be implemented via computer software while others may be implemented in a different manner e.g. via an empirical procedure.
Alternatively, some of the blocks can represent steps performed by functionally equivalent circuits such as a digital signal processor circuit or an application specific integrated circuit (ASIC). Alternatively still, all of the blocks in Figs. 3 and 4 can represent steps performed by a properly programmed general purpose computer or processor.
The flow diagram does not depict the syntax of any particular programming language. Rather, the flow diagram illustrates the functional information one of ordinary skill in the art requires to perform the steps or to fabricate circuits or to generate computer software to perform the processing required of the particular apparatus. It should be noted that where computer software can be used, many routine program elements, such as initialization of loops and variables and the use of temporary variables are not shown. It will be appreciated by those of ordinary skill in the art that unless otherwise indicated herein, the particular sequence of steps described is illustrative only and can be varied without departing from the spirit of the invention.
Turning now to Fig. 3, processing begins with steps 64 and 66 in which a pixel is received from a camera and a digitized pixel value is generated. These operations are familiar to anyone skilled in the art of designing video frame grabbers.
In step 68, a first compensation value is combined with the digitized pixel value generated in step 66 to provide a first order corrected digitized pixel value. In one embodiment (e.g. an embodiment such as that shown in Fig. 2), the first compensation value is provided as an offset correction value. In this case the offset correction value is combined with the digitized pixel value to provide an offset corrected digitized pixel value.
In some applications, it may only be necessary to provide a first order correction. Thus, in this case no other corrections to the digitized pixel values are required. In other applications, however, further correction of the digitized pixel values may be desired or warranted. In this case, processing proceeds to step 70.
In processing step 70, a second correction value is applied to the first order corrected digitized pixel value to provide a second order corrected digitized pixel value. In the system of Fig. 2, for example, the second correction value is provided as a scale correction value which multiplies the corrected offset digitized pixel value to provide a corrected digitized pixel value.
It should be appreciated that in some applications only a single correction will be necessary and thus either of steps 68 or 70 may be omitted. That is, in some applications only the first correction value would be used while in other applications only the second correction value will be used.
Processing then proceeds to step 72 in which the corrected digitized pixel value is provided to video frame memory where the corrected digitized pixel value is then available for further processing by an image processor for example.
Processing then proceeds to decision block 74 where a decision is made as to whether more pixels remain to be processed. If no additional pixels remain to be processed, then processing ends. If, on the other hand, additional pixels remain to be processed, then processing returns to step 64 where the next pixel is received from the camera and steps 66-74 are repeated until no more pixels remain to be processed.
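The processing of Fig. 3 can be summarized by the following Python sketch. The names raw_pixels, offsets, scales and frame_memory are placeholders introduced here for illustration; in the hardware embodiments these correspond to the camera output, the compensation memories and the video frame memory, respectively.

    def correct_frame(raw_pixels, offsets, scales):
        frame_memory = []
        for address, raw_pixel in enumerate(raw_pixels):    # steps 64 and 66
            corrected = raw_pixel + offsets[address]        # step 68: first order (offset) correction
            corrected = corrected * scales[address]         # step 70: second order (scale) correction
            frame_memory.append(corrected)                  # step 72: store in video frame memory
        return frame_memory                                 # step 74: done when no pixels remain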
It should be noted that before this system can be used, offset and scale memories (e.g. memories 40 and 41 in Fig. 2) must be filled with appropriate values. During machine operation, the offset and scale memories 40 and 41 will normally be filled from files recorded in mass storage such as disk or flash memory.
Obtaining the correction coefficients in the first place requires a calibration procedure. In one calibration procedure, a camera to be corrected with its illuminator is aimed at a uniform neutral surface such as a gray card or a white card. Uniform neutral media can be obtained from a number of sources supplying the photography and optical trade and known to those skilled in the art of photometry and photography. A number of exposures of the chosen uniform neutral surface are taken at different intensities or flash durations which can be achieved via an illuminator control (e.g. illuminator control 58 in Fig. 2). Another calibration procedure for automatically obtaining correction coefficients is described below in conjunction with Fig. 4.
The number of exposures taken in a calibration process must be sufficient to provide enough data to calculate all necessary pixel correction coefficients for each pixel. For example, if only scale is to be corrected, a single exposure is sufficient. If scale and offset are to be corrected, two exposures at different illumination values will be required. Suitable arithmetic will be performed on each pixel value in order to separate the effects of offset and scale. If the corrections to be performed are other than straight line offset and scale, more than two sets of coefficients will likely be required, and more than two calibrating exposures will also be required. The number of coefficients is particular to the kind of compensation function being used. If more than two coefficients are needed per pixel, then more than the two memories shown in the drawing will also be required.
In a preferred embodiment, linear scale and offset are corrected, and thus two memories are used (e.g. memories 40 and 41 in Fig. 2), and two calibration exposures are performed. One exposure is taken at a low or zero value of illumination and one at an illumination value known not to saturate any pixel. In practice, saturation can be detected automatically and its effects eliminated. Methods of detecting saturation and backing off from it are well known to those of ordinary skill in the art.
In a preferred embodiment, both "raw frames" (i.e. the dark and bright frames) are transferred without compensation to arrays in central computer memory. It should be noted that any memory architecture capable of presenting the data to a processing unit could be used. The first pixel array is designated Pdark[0 .. N] and the second pixel array is designated Pbright[0..N]. The pixel arrays Pdark and Pbright are used to calculate two other arrays Scale[0 .. N] and Offset[0 .. N]. The variable 0 .. N represents not only a memory address but also a physical location in the field of view. If the illumination were truly uniform and the camera truly perfect, all the Pdark values would be zero, and all the Pbright values would be equal to each other and less than the saturation value. In this ideal case, all Scale values would be equivalent to unity, and all Offset values would be zero.
In practice, however, the Pdark values are not all zero, perhaps because the illuminator cannot be fully turned off, or there is stray light which cannot be shut out. Some pixel offsets may be negative, and others positive. The Pbright values are not all equal, because the illuminator is not uniform. A preferred embodiment of the invention corrects both of these sources of nonuniformity at the same time, provided a linear model describes the aberrations.
According to the earlier description, each incoming pixel will be added to the contents of the Offset memory and the result of the addition will be multiplied by the contents of the Scale memory, i.e.
Truepixel(n) = Scale(n) * (rawpixel(n) + Offset(n))
The corrected pixel value Truepixel(n) is then provided to the appropriate memories and DMA channel.
Referring now to Fig. 4, a process for learning the scale and offset values stored in a compensation memory (e.g. compensation memory 19 in Fig. 1 or memories 39, 40a-40N and 41a-41M in Fig. 2) is shown. In a preferred embodiment, a processor learns the scale and offset values for each pixel by utilizing an automatic process as next described.
Processing begins with step 78 in which a uniform neutral surface is positioned in a calibration position. The neutral surface may be provided, for example, as a white card or a gray card (e.g. an eight inch by ten inch gray card with 18% density of the type manufactured by Eastman Kodak and referred to as Kodak Publication No. R-27). The calibration position is a position normally occupied by the specimen to be measured and is within a field of view of the image acquisition and illumination systems.
Next, as shown in step 80, the compensation memory is initialized. It should be appreciated by those of ordinary skill in the art that in the case where the compensation memory is provided as separate memories (e.g. by partitioning a single physical memory or by utilizing physically separate memories) all of the memories should be appropriately initialized. In the AOI system of Fig. 2, for example, all cells of the offset memory 40a should be set to zero and all cells of the scale memory 41a should be set to unity.
Processing next proceeds to step 82 in which an illumination level is set to a first desired level to provide a particular illumination characteristic to the uniform neutral surface which is positioned in the calibration position. In one particular embodiment, the illumination level is set to zero meaning that no illumination is provided. In step 84, a first image frame is acquired or "grabbed." In the case where the first desired level corresponds to zero, the first frame corresponds to a "dark" frame. At this point, the pixel values in the first frame are uncompensated and ideally all pixel values in this frame should be zero. In practice, however, some of the pixel values in the dark frame are nonzero values. The nonzero values are due to offsets caused primarily by non-uniform or stray illumination of the neutral surface.
In step 86, the additive inverse of each value found in the first frame is stored in the compensation memory. In practice, this may be accomplished by first storing the measured values in a temporary array of pixels, computing the additive inverse of the measured values and then storing this array in a mass storage file for future use.
The array of additive inverse values is also stored in the offset memory (e.g. offset memory 40a in Fig. 2) for immediate use. The offset memory 40a now contains sufficient information to correct the next grabbed frame for offset values.
Next in step 88, the illumination level is set to a desired second level to provide a second particular illumination characteristic to the uniform neutral surface which is positioned in the calibration position. In the case where the first illumination level was set to zero, the second illumination level preferably corresponds to the brightest non-saturating level contemplated for use.
It should be appreciated that the order in which the lighting levels are used during calibration could be reversed (e.g. use the bright level first and the dark level second). It is also possible to use other lighting levels besides bright and dark.
In step 90, a second image frame is acquired or "grabbed." In the case where the second desired level corresponds to the brightest non-saturating level contemplated for use, the second frame corresponds to a "bright" frame. The pixel values are again "raw" and ideally all pixel values in this frame should be equal to one another. In practice, however, some of the pixel values in the bright frame are not equal to the others. Because all pixel values are compensated for offset, all pixel value variations in this frame are presumed to be due to intensity aberrations in the illumination field. These variations are caused by non-uniform illumination of the neutral surface. Also, optics can lead to nonuniformities because more light passes through the center of an optical path than through the sides, and variations across a detector field of view or optical system of a camera can lead to nonuniformities. It should be noted that when grabbing the bright frame, a check is made to determine whether any pixels are saturated. If any are, then the illumination is reduced and another frame is grabbed. It is preferable to process pixel values which are not in saturation since this results in a linear system while processing of pixel values which are in saturation results in a non-linear system. While it is possible to calibrate using saturated values, it is more efficient in terms of time and computational efficiency to process a linear system.
In step 92, for each location in a temporary array of pixels, a scale factor is calculated. The scale factor will, when multiplied by the bright-frame offset-corrected pixel value, result in a constant nominal value. The constant nominal value must be less than the arithmetic processing limit of the data type being used. In a preferred embodiment, the data type is an unsigned eight bit integer, and the constant value is two hundred. Other embodiments may use different data types and different constant values, if appropriate, without departing from the spirit of the present invention. The values from the temporary array are stored in a mass-storage device for future use and are also stored in a scale memory (e.g. scale memory 41a in Fig. 2) for immediate use.
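The calibration arithmetic of the dark-frame and bright-frame steps just described can be expressed in software as follows. This Python sketch is illustrative only: it uses the nominal value of two hundred from the preferred embodiment, assumes eight-bit pixels with a saturation value of 255, and reduces the saturation handling to a single check rather than the reduce-and-re-grab procedure described above.

    NOMINAL_VALUE = 200

    def compute_coefficients(p_dark, p_bright, saturation=255):
        offsets, scales = [], []
        for dark, bright in zip(p_dark, p_bright):
            if bright >= saturation:
                raise ValueError("saturated pixel: reduce illumination and grab another frame")
            offset = -dark                          # additive inverse of the dark-frame value
            corrected_bright = bright + offset      # bright-frame value corrected for offset
            scales.append(NOMINAL_VALUE / corrected_bright)
            offsets.append(offset)
        return offsets, scales

    # Example: three pixels from a dark frame and the corresponding bright frame.
    offsets, scales = compute_coefficients([2, 0, 5], [180, 210, 195])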
Processing then proceeds to processing step 94 and decision block 96 in which a next frame is grabbed and a decision is made as to whether that frame contains pixels of value equal to the chosen constant nominal value (in the example given above, this corresponds to a value of two hundred).
If a decision is made that the frame does not contain pixels of value equal to the chosen constant nominal value, then an error is reported as shown in step 98. If a decision is made in step 96 that the frame does contain pixels of value equal to the chosen constant nominal value, then processing ends. It should be appreciated that in an automatic optical inspection machine for inspecting printed circuit boards at the time of assembly, an inspection program may have several different lighting modes. For example, different lighting modes may be used to light a specimen being inspected from different angles in order to accentuate or wash out certain features. In each lighting mode, some of the available lighting elements are turned on and others are turned off. It should be noted that the shape of the light distribution may be different for each lighting mode, since each lighting mode uses a different combination of lights, and each different light possibly has its own individual distribution.
In a preferred embodiment of the present invention, the lighting system is a dome having numerous light-emitting diodes all aimed at the specimen but separately controlled, similar to the dome described in US Patent nos. 5,060,065 and 5,254,421. To the extent that one lighting mode's intensity distribution differs substantially from another's, it will be necessary and useful to prepare a separate set of correction coefficients for each separate lighting mode. Having prepared the various sets of correction coefficients, it is also necessary to bring each set into play at the time when its corresponding lighting mode is in use.
Each of the various sets of correction coefficients can be brought into play at the time when its corresponding lighting mode is in use by using, for example, a bank switch (e.g. bank switch 46 in Fig. 2). The bank switch could be a distinct item (as shown in Fig. 2) or could simply be an extension of an address generator (e.g. the address generator 48 in Fig. 2). The bank switch is set to zero for lighting mode zero, one for lighting mode one, and so forth up to thirty-two (the number thirty-two is arbitrary and could in practice be larger or smaller than thirty-two without violating the spirit of this invention). Since each picture in one embodiment is 640 by 480 pixels, it consumes in round numbers half a million addresses, and the memory storing correction coefficients must be large enough to store all the coefficients needed for half a million pixels. Thirty-two banks of a half million addresses each occupy 16 million addresses.
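The sizing of the correction coefficient memory follows directly from these figures, as the following short calculation (in Python, for illustration only) shows.

    pixels_per_frame = 640 * 480        # 307,200 pixels, roughly half a million in round numbers
    banks = 32                          # one bank per lighting mode in this example
    print(banks * pixels_per_frame)     # 9,830,400 addresses actually used
    print(banks * 500_000)              # 16,000,000 addresses with each bank rounded up to half a million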
It should be understood by those of ordinary skill in the art that the correction being applied in the preferred embodiment is a linear one involving an offset and a scale factor. There are many different ways of computing the offset and scale without departing from the spirit of the invention.
It should also be appreciated by those of ordinary skill in the art that a nonlinear compensation could be provided as well, using any of a number of well known polynomial, piecewise-linear, tabular or function-based compensation functions. It is understood that the real-time arithmetic functions could take many forms, and working implementations could easily be formulated by one of ordinary skill in the art of framegrabber design. The specific realization of the circuits and software must operate in real time, providing arrays of corrected frame pixel values ready for immediate analysis by a processor.
One advantage of the present invention is that the cost of illumination hardware can be reduced. Relatively low cost illuminators tend to have worse nonuniformities than relatively expensive illuminators. The present invention makes it possible to tolerate greater irregularity in the illumination, and therefore makes it possible to employ relatively low cost illuminators while still achieving an accuracy which is equivalent to that obtained with relatively high cost illuminators.
A further advantage of the present invention is that in the case where the image acquisition system includes both a camera and a digitizer, the invention simultaneously compensates for linear nonuniformities in the camera and in the digitizer as well as for illumination nonuniformity. If the digitizer has a constant offset, for example, that offset would be silently removed in the calibration process described herein.
Having described preferred embodiments of the invention, it will now become apparent to one of ordinary skill in the art that other embodiments incorporating their concepts may also be used. It is felt, therefore, that the invention should not be limited to the disclosed embodiments but rather should be limited only by the spirit and scope of the appended claims. All publications and references cited herein are expressly incorporated herein by reference in their entirety. What is claimed is:

Claims

1. A system for compensating pixel values in a printed circuit board inspection machine, the system comprising: an image acquisition system for digitizing a pixel representing at least a part of the printed circuit board to provide a digitized pixel value at an output of the image acquisition system; a compensation circuit coupled to the output of said image acquisition system for receiving the digitized pixel value from the image acquisition system and for applying one or more compensation values to the digitized pixel value to provide a compensated digitized pixel value at an output of the compensation circuit; a memory for storing the compensated digitized pixel value; and means for transferring the compensated digitized pixel value between said compensation circuit and said memory.
2. The system of Claim 1 wherein said compensation circuit comprises: a compensation memory having compensation values stored therein; and a compensating circuit having a first port coupled to said image acquisition system, having a second port coupled to said compensation memory and having an output port coupled to said means for transferring, said compensating circuit for combining the digitized pixel value received from said image acquisition circuit at the first port with at least one compensation value received from said compensation memory at the second port to provide a compensated digitized pixel value at the output port.
3. The system of Claim 2 wherein said compensation memory comprises: a plurality of memory banks, each of the memory banks adapted to store a separate set of compensation values.
4. The system of Claim 3 further comprising: an illumination system for illuminating the printed circuit board in a plurality of different lighting modes with each memory bank adapted to hold a set of compensation values for at least one particular lighting mode.
5. The system of Claim 4 further comprising a processor coupled to said memory, said processor for accessing the compensated digitized pixel value stored in said memory.
6. The system of Claim 4 wherein: said processor and said memory are provided as part of a frame grabber module; and said means for transferring is provided as a direct memory access (DMA) channel coupled between said compensating circuit and said frame grabber module.
7. The system of Claim 4 wherein said processor is coupled to said illumination system to control said illumination system.
8. The system of Claim 2 wherein said compensating circuit comprises: an adder circuit having a first port coupled to the first port of said image acquisition circuit and a second port; and a multiplier circuit having a first port coupled to the second port of said adder circuit and having a second port corresponding to the output port of said compensating circuit.
9. The system of Claim 8 wherein said memory circuit comprises: one or more offset memories, each of the one or more offset memories coupled to said adder circuit; and one or more scale memories, each of the one or more scale memories coupled to said multiplier circuit.
10. The system of Claim 9 wherein: said system operates in a plurality of lighting modes; said memory circuit comprises a plurality of offset memories, each of said plurality of offset memories adapted for use in a particular one of the plurality of lighting modes; and said scale circuit comprises a plurality of scale memories, each of said plurality of scale memories adapted for use in a particular one of the lighting modes.
11. The system of Claim 10 further comprising a bank switch having at least one terminal coupled to each of said plurality of offset memories and said plurality of scale memories and having at least one terminal coupled to said processor, wherein in response to a signal from said processor, said bank switch couples at least one of said plurality of offset memories to said adder circuit and at least one of said plurality of scale memories to said multiplier circuit.
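By way of illustration only (this sketch is not part of the claims), the offset-and-scale compensation recited in Claims 8 through 11 can be modeled in software as adding a per-pixel offset and multiplying by a per-pixel scale, with one pair of memory banks per lighting mode. The array shapes, pixel range and bank layout below are assumptions.

import numpy as np

def compensate_frame(raw, offset_banks, scale_banks, mode):
    """Model of the adder/multiplier datapath: (raw + offset) * scale per pixel."""
    offset = offset_banks[mode]   # offset memory bank selected for this lighting mode
    scale = scale_banks[mode]     # scale memory bank selected for this lighting mode
    corrected = (raw.astype(np.float32) + offset) * scale
    return np.clip(corrected, 0, 255).astype(np.uint8)   # keep within 8-bit pixel range

# Hypothetical example: two lighting modes, 480 x 640 frames
h, w = 480, 640
offset_banks = {0: np.zeros((h, w), np.float32), 1: np.full((h, w), -3.0, np.float32)}
scale_banks = {0: np.ones((h, w), np.float32), 1: np.full((h, w), 1.08, np.float32)}
raw = np.random.randint(0, 256, size=(h, w), dtype=np.uint8)
frame_memory = compensate_frame(raw, offset_banks, scale_banks, mode=1)

In this sketch the dictionary lookup plays the role of the bank switch of Claim 11: changing the mode selects a different pair of offset and scale memories without altering the adder/multiplier datapath.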
12. An automated optical inspection system comprising: (a) a camera; (b) a video digitizer having an input coupled to receive an analog input from said camera and having an output adapted to provide a digital pixel value; (c) an adder having a first input coupled to the output of said video digitizer, a second input and an output; (d) a multiplier having a first input coupled to the output of said adder, a second input and an output; (e) at least one offset memory having offset correction values stored therein and having a first output coupled to the second input of said adder; (f) at least one scale memory having scale correction values stored therein and having a first output coupled to the second input of said multiplier; and (g) a video frame memory having an input coupled to the output of said multiplier circuit.
13. The system of Claim 12 further comprising a processor coupled to an output of said video frame memory.
14. The system of Claim 13 wherein said processor and said video frame memory are provided as part of a frame grabber module.
15. The system of Claim 14 further comprising a direct memory access (DMA) channel having a first port coupled to the output of said multiplier and a second port coupled to said video frame memory.
16. The system of Claim 15 further comprising an illumination system for illuminating the printed circuit board in a plurality of different lighting modes.
17. The system of Claim 16 wherein: said offset memory circuit comprises a plurality of offset memories, each of said plurality of offset memories adapted for use in a particular one of the plurality of lighting modes; and said scale memory circuit comprises a plurality of scale memories, each of said plurality of scale memories adapted for use in a particular one of the lighting modes.
18. The system of Claim 17 further comprising: a processor; and a bank switch having at least one terminal coupled to each of said plurality of offset memories and said plurality of scale memories and having at least one terminal coupled to said processor, wherein in response to a signal from said processor, said bank switch couples a selected one of said plurality of offset memories to said adder circuit and a selected one of said plurality of scale memories to said multiplier circuit.
19. The system of Claim 18 further comprising an address generator having at least one port coupled to each of said plurality of offset and scale memories, at least one port coupled to said video frame memory, and at least one port coupled to said processor.
20. A method for compensating an inspection system, the method comprising the steps of: (a) exposing a calibration surface to a plurality of different illumination levels; (b) obtaining an uncompensated pixel value for each pixel address at each of the plurality of different illumination levels; (c) obtaining a set of compensation coefficients for each uncompensated pixel value; and (d) storing the set of compensation coefficients in a memory.
21. The method of Claim 20 further comprising the steps of: applying the set of compensation coefficients to uncompensated pixel values on a pixel-by-pixel basis to provide compensated pixel values; and storing the compensated pixel values in a memory.
22. The method of Claim 21 wherein: the step of applying the set of compensation coefficients includes the step of sequentially applying the set of compensation coefficients to uncompensated pixel values on a pixel-by-pixel basis to provide compensated pixel values; and the step of storing the compensated pixel values in a memory further includes the step of sequentially storing the compensated pixel values in a memory.
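As an illustrative sketch only of how the compensation coefficients of Claims 20 through 22 might be derived, the following assumes a two-point calibration in which each pixel is measured at a known low and a known high illumination level and fitted so that (measured + offset) * scale maps those measurements to common target values; the target values t_low and t_high and the two-point model are assumptions, not taken from the specification.

import numpy as np

def fit_coefficients(m_low, m_high, t_low=20.0, t_high=220.0):
    """Per-pixel offset/scale so (m + offset) * scale maps m_low -> t_low and m_high -> t_high."""
    m_low = m_low.astype(np.float32)
    m_high = m_high.astype(np.float32)
    span = np.maximum(m_high - m_low, 1e-3)   # guard against a pixel with no response
    scale = (t_high - t_low) / span           # per-pixel scale (gain) coefficient
    offset = t_low / scale - m_low            # per-pixel offset coefficient
    return offset, scale

With these coefficients, (m_low + offset) * scale equals t_low and (m_high + offset) * scale equals t_high for every pixel, and the coefficients can then be applied and stored pixel by pixel as recited in Claims 21 and 22.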
23. A method for correcting errors in an inspection system, the method comprising the steps of: (a) acquiring a frame; (b) digitizing each pixel in the frame to provide a plurality of digitized pixel values; (c) generating a video frame memory address for each pixel in the frame, each video frame memory address indicating where the pixel is to be stored in a video frame memory; (d) for each pixel in the frame, combining correction values with the digitized pixel values to provide corrected pixel values; and (e) storing each of the corrected pixel values in a video frame memory address of the video frame memory.
24. The method of Claim 23 wherein the step of combining correction values with the digitized pixel value to provide a corrected pixel value includes the steps of: adding an offset correction value to the digitized pixel value to provide a corrected offset digitized pixel value; and multiplying the corrected offset digitized pixel value by a scale correction value to provide a corrected pixel value.
25. The method of Claim 23 further comprising the steps of: generating a video frame memory address for each pixel in the frame, each video frame memory address indicating where the pixel is to be stored in a video frame memory; for each pixel in the frame, combining correction values with the digitized pixel value to provide a corrected pixel value; and storing the corrected pixel value in a video frame memory address of the video frame memory.
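Purely as a software analogue of the per-frame flow in Claims 23 through 25 (not the claimed adder/multiplier hardware or DMA path), the sketch below generates a frame memory address for each pixel, combines the offset and scale correction values with the digitized value, and stores the result at that address; the row-major addressing scheme is an assumption.

import numpy as np

def correct_and_store(raw, offset, scale):
    """Digitized frame in, corrected values written to a flat video frame memory."""
    rows, cols = raw.shape
    frame_memory = np.empty(rows * cols, dtype=np.uint8)    # flat video frame memory
    for row in range(rows):
        for col in range(cols):
            addr = row * cols + col                          # video frame memory address
            value = (float(raw[row, col]) + offset[row, col]) * scale[row, col]
            frame_memory[addr] = min(max(int(round(value)), 0), 255)
    return frame_memory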
26. A method of manufacturing a printed circuit board comprising the steps of: (a) performing a manufacturing operation on a printed circuit board; (b) acquiring an image of the printed circuit board wherein the image includes at least a portion of the printed circuit board; (c) digitizing each pixel in the acquired image to provide a plurality of digitized pixel values; and (d) for each pixel in the acquired image, combining correction values with the digitized pixel values to provide corrected pixel values.
27. The method of Claim 26 further comprising the steps of: storing each of the corrected pixel values in a memory; retrieving the corrected pixel values from the memory; and processing the retrieved pixel values to inspect the portion of the printed circuit board where the image was acquired.
28. The method of Claim 26 wherein the step of combining correction values with the digitized pixel values to provide corrected pixel values includes the step of applying a first correction value to the digitized pixel value to provide a first order corrected digitized pixel value.
29. The method of Claim 28 further comprising the step of applying a second correction value to the first order corrected digitized pixel value to provide a second order corrected digitized pixel value.
30. A system for compensating pixel values in a printed circuit board inspection machine, the system comprising: an illumination system; an image acquisition system, coupled to said illumination system, said image acquisition system for digitizing a pixel representing at least a part of the printed circuit board for providing a digitized pixel value at an output of the image acquisition system; and a processor coupled to the image acquisition system for receiving the digitized pixel value from the image acquisition system and for applying one or more compensation values to the digitized pixel value to compensate for variations in illumination.
31. The system of Claim 30 further comprising a memory coupled to said processor and wherein said illumination system is adapted to illuminate an object being inspected in a plurality of different lighting modes and wherein said memory has stored therein compensation values for each of the plurality of different lighting modes.
32. The system of Claim 31 wherein said memory has stored therein a plurality of offset compensation values and a plurality of scale compensation values, with each of said plurality of offset compensation values adapted for use in a particular one of the plurality of lighting modes and each of said plurality of scale compensation values adapted for use in a particular one of the lighting modes.
33. A method of manufacturing a printed circuit board comprising the steps of: (a) acquiring a set of calibration values for each of one or more lighting modes of an illumination system of a printed circuit board inspection system; (b) acquiring an image of a printed circuit board which has undergone a manufacturing operation wherein the image includes at least a portion of the printed circuit board; and (c) applying predetermined ones of the calibration values to each pixel in the acquired image of the printed circuit board to provide a plurality of corrected pixel values.
34. The method of Claim 33 wherein the step of acquiring calibration values includes the steps of: exposing an image acquisition system to a calibration surface illuminated at a known lighting level; and computing calibration values to compensate for variations in illumination detected by the image acquisition system at the known lighting level.
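As a final illustrative sketch (the camera and illuminator interfaces named here are hypothetical and not taken from the specification), the per-lighting-mode calibration of Claims 33 and 34 could be organized as one calibration pass per mode against a calibration surface, with the stored per-pixel coefficients applied later to images acquired during inspection.

def calibrate_all_modes(camera, illuminator, modes, fit_coefficients):
    """Acquire per-pixel (offset, scale) coefficients for each lighting mode."""
    banks = {}
    for mode in modes:
        illuminator.set_mode(mode)        # hypothetical: select the lighting mode
        illuminator.set_level("low")      # hypothetical: known low lighting level
        m_low = camera.grab()             # uncompensated calibration frame
        illuminator.set_level("high")     # hypothetical: known high lighting level
        m_high = camera.grab()
        banks[mode] = fit_coefficients(m_low, m_high)
    return banks

def inspect_board(camera, illuminator, banks, mode):
    """Apply the stored coefficients for the selected mode to a newly acquired image."""
    illuminator.set_mode(mode)
    raw = camera.grab()
    offset, scale = banks[mode]
    return (raw.astype("float32") + offset) * scale   # corrected pixel values

Keeping one coefficient bank per lighting mode mirrors the banked offset and scale memories of the system claims: calibration is performed once per mode, and inspection simply selects the matching bank before applying the correction.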
PCT/US2001/017574 2000-06-23 2001-05-31 Compensation system and related techniques for use in a printed circuit board inspection system WO2002001209A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001266631A AU2001266631A1 (en) 2000-06-23 2001-05-31 Compensation system and related techniques for use in a printed circuit board inspection system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/602,272 2000-06-23
US09/602,272 US6760471B1 (en) 2000-06-23 2000-06-23 Compensation system and related techniques for use in a printed circuit board inspection system

Publications (1)

Publication Number Publication Date
WO2002001209A1 true WO2002001209A1 (en) 2002-01-03

Family

ID=24410692

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/017574 WO2002001209A1 (en) 2000-06-23 2001-05-31 Compensation system and related techniques for use in a printed circuit board inspection system

Country Status (4)

Country Link
US (1) US6760471B1 (en)
AU (1) AU2001266631A1 (en)
TW (1) TW563389B (en)
WO (1) WO2002001209A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6549647B1 (en) 2000-01-07 2003-04-15 Cyberoptics Corporation Inspection system with vibration resistant video capture
US6577405B2 (en) 2000-01-07 2003-06-10 Cyberoptics Corporation Phase profilometry system with telecentric projector
US6593705B1 (en) 2000-01-07 2003-07-15 Cyberoptics Corporation Rapid-firing flashlamp discharge circuit
FR2853068A1 (en) * 2003-03-26 2004-10-01 Bernard Pierre Andre Genot AUTOMATIC CORRECTION PROCEDURE FOR OPTOELECTRONIC MEASURING DEVICE AND ASSOCIATED MEANS
US8059280B2 (en) 2008-01-31 2011-11-15 Cyberoptics Corporation Method for three-dimensional imaging using multi-phase structured light
DE10343496B4 (en) * 2003-09-19 2015-08-06 Siemens Aktiengesellschaft Correction of an X-ray image taken by a digital X-ray detector and calibration of the X-ray detector
US10126252B2 (en) 2013-04-29 2018-11-13 Cyberoptics Corporation Enhanced illumination control for three-dimensional imaging

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6920624B2 (en) * 2002-01-17 2005-07-19 Seagate Technology, Llc Methodology of creating an object database from a Gerber file
JP3931152B2 (en) * 2003-04-10 2007-06-13 オリンパス株式会社 Imaging device evaluation method, image correction method, and imaging device
US7720580B2 (en) 2004-12-23 2010-05-18 Donnelly Corporation Object detection system for vehicle
WO2007049266A1 (en) * 2005-10-28 2007-05-03 Hi-Key Limited A method and apparatus for calibrating an image capturing device, and a method and apparatus for outputting image frames from sequentially captured image frames with compensation for image capture device offset
JP2007188128A (en) * 2006-01-11 2007-07-26 Omron Corp Measurement method and measurement device using color image
GB0612805D0 (en) * 2006-06-28 2006-08-09 Xact Pcb Ltd Registration system and method
EP2179892A1 (en) 2008-10-24 2010-04-28 Magna Electronics Europe GmbH & Co. KG Method for automatic calibration of a virtual camera
US8964032B2 (en) 2009-01-30 2015-02-24 Magna Electronics Inc. Rear illumination system
US9150155B2 (en) 2010-01-13 2015-10-06 Magna Electronics Inc. Vehicular camera and method for periodic calibration of vehicular camera
US10529053B2 (en) * 2016-12-02 2020-01-07 Apple Inc. Adaptive pixel uniformity compensation for display panels
WO2023014708A1 (en) * 2021-08-02 2023-02-09 Arch Systems Inc. Method for manufacturing system analysis and/or maintenance
US11868971B2 (en) 2021-08-02 2024-01-09 Arch Systems Inc. Method for manufacturing system analysis and/or maintenance
US11886177B1 (en) 2022-08-26 2024-01-30 Arch Systems Inc. System and method for manufacturing system data analysis
US11947340B1 (en) 2022-08-26 2024-04-02 Arch Systems Inc. System and method for machine program analysis

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4500202A (en) * 1982-05-24 1985-02-19 Itek Corporation Printed circuit board defect detection of detecting maximum line width violations
US4801810A (en) * 1987-07-13 1989-01-31 Gerber Scientific, Inc. Elliptical reflector illumination system for inspection of printed wiring boards
US4811410A (en) * 1986-12-08 1989-03-07 American Telephone And Telegraph Company Linescan inspection system for circuit boards
EP0426166A2 (en) * 1989-10-31 1991-05-08 Kla Instruments Corp. Automatic high speed optical inspection system
US5455870A (en) * 1991-07-10 1995-10-03 Raytheon Company Apparatus and method for inspection of high component density printed circuit board

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5060065A (en) * 1990-02-23 1991-10-22 Cimflex Teknowledge Corporation Apparatus and method for illuminating a printed circuit board for inspection
US5047861A (en) * 1990-07-31 1991-09-10 Eastman Kodak Company Method and apparatus for pixel non-uniformity correction
US5245421A (en) 1990-09-19 1993-09-14 Control Automation, Incorporated Apparatus for inspecting printed circuit boards with surface mounted components
US5260779A (en) * 1992-02-21 1993-11-09 Control Automation, Inc. Method and apparatus for inspecting a printed circuit board
US5764209A (en) * 1992-03-16 1998-06-09 Photon Dynamics, Inc. Flat panel display inspection system
US5757981A (en) * 1992-08-20 1998-05-26 Toyo Ink Mfg. Co., Ltd. Image inspection device
US5519441A (en) * 1993-07-01 1996-05-21 Xerox Corporation Apparatus and method for correcting offset and gain drift present during communication of data
US5638465A (en) * 1994-06-14 1997-06-10 Nippon Telegraph And Telephone Corporation Image inspection/recognition method, method of generating reference data for use therein, and apparatuses therefor
US5811808A (en) * 1996-09-12 1998-09-22 Amber Engineering, Inc. Infrared imaging system employing on-focal plane nonuniformity correction
US6304826B1 (en) * 1999-02-05 2001-10-16 Syscan, Inc. Self-calibration method and circuit architecture of image sensor

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4500202A (en) * 1982-05-24 1985-02-19 Itek Corporation Printed circuit board defect detection of detecting maximum line width violations
US4811410A (en) * 1986-12-08 1989-03-07 American Telephone And Telegraph Company Linescan inspection system for circuit boards
US4801810A (en) * 1987-07-13 1989-01-31 Gerber Scientific, Inc. Elliptical reflector illumination system for inspection of printed wiring boards
EP0426166A2 (en) * 1989-10-31 1991-05-08 Kla Instruments Corp. Automatic high speed optical inspection system
US5455870A (en) * 1991-07-10 1995-10-03 Raytheon Company Apparatus and method for inspection of high component density printed circuit board

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6549647B1 (en) 2000-01-07 2003-04-15 Cyberoptics Corporation Inspection system with vibration resistant video capture
US6577405B2 (en) 2000-01-07 2003-06-10 Cyberoptics Corporation Phase profilometry system with telecentric projector
US6593705B1 (en) 2000-01-07 2003-07-15 Cyberoptics Corporation Rapid-firing flashlamp discharge circuit
US7027639B2 (en) 2000-01-07 2006-04-11 Cyberoptics Corporation High speed optical image acquisition system with extended dynamic range
FR2853068A1 (en) * 2003-03-26 2004-10-01 Bernard Pierre Andre Genot AUTOMATIC CORRECTION PROCEDURE FOR OPTOELECTRONIC MEASURING DEVICE AND ASSOCIATED MEANS
WO2004088287A2 (en) * 2003-03-26 2004-10-14 Bernard Genot Method of automatic correction for an optoelectronic measuring device
WO2004088287A3 (en) * 2003-03-26 2004-11-18 Bernard Genot Method of automatic correction for an optoelectronic measuring device
DE10343496B4 (en) * 2003-09-19 2015-08-06 Siemens Aktiengesellschaft Correction of an X-ray image taken by a digital X-ray detector and calibration of the X-ray detector
US8059280B2 (en) 2008-01-31 2011-11-15 Cyberoptics Corporation Method for three-dimensional imaging using multi-phase structured light
US10126252B2 (en) 2013-04-29 2018-11-13 Cyberoptics Corporation Enhanced illumination control for three-dimensional imaging

Also Published As

Publication number Publication date
TW563389B (en) 2003-11-21
AU2001266631A1 (en) 2002-01-08
US6760471B1 (en) 2004-07-06

Similar Documents

Publication Publication Date Title
US6760471B1 (en) Compensation system and related techniques for use in a printed circuit board inspection system
CN101124611B (en) Method and device of automatic white balancing of colour gain values
US20050046739A1 (en) System and method using light emitting diodes with an image capture device
KR930011510B1 (en) Scene based nonuniformity compensation for staring focal plane arrays
JP3587433B2 (en) Pixel defect detection device for solid-state imaging device
US6546203B2 (en) Camera with adjustable strobe energy
US20050117045A1 (en) Image pickup system, image processor, and camera
CN108174172B (en) Image pickup method and device, computer readable storage medium and computer equipment
CN1843028A (en) Method and apparatus for reducing effects of dark current and defective pixels in an imaging device
US20070262235A1 (en) Compensating for Non-Uniform Illumination of Object Fields Captured by a Camera
JPH05501793A (en) image correction circuit
WO2004045219A1 (en) Light source estimating device, light source estimating method, and imaging device and image processing method
JP2006033861A (en) Method and device for adjusting white balance
CN103546661A (en) Information processing apparatus, information processing method, and information processing program
KR20080049458A (en) Apparatus to automatically controlling white balance and method thereof
KR20120062722A (en) Method for estimating a defect in an image-capturing system, and associated systems
EP1808680A2 (en) Measuring method and apparatus using color images
WO2019112040A1 (en) Inspection system, inspection method, program, and storage medium
US7477294B2 (en) Method for evaluating and correcting the image data of a camera system
JP2005311581A (en) Image input system, transformation matrix calculating method, and program
CN1906633B (en) Image subtraction of illumination artifacts
JP2002330336A (en) Apparatus and method for capturing image
CN109945794B (en) Image processing system, computer-readable recording medium, and image processing method
JP3943167B2 (en) Color conversion method
WO2008120182A2 (en) Method and system for verifying suspected defects of a printed circuit board

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP