|Publication number||USRE38716 E1|
|Application number||US 09/607,318|
|Publication date||Mar 22, 2005|
|Filing date||Jun 30, 2000|
|Priority date||Dec 20, 1984|
|Inventors||Amiram Caspi, Zvi Lapidot|
|Original Assignee||Orbotech, Ltd.|
Notice: More than one reissue application has been filed for the reissue of U.S. Pat. No. 5,774,572. The reissue applications are application numbers: Appl. No. 09/607,318, filed Jun. 30, 2000 (the present parent reissue application), and Appl. No. 10/151,248, filed May 21, 2002 (a continuation reissue application of the present parent reissue application), all of which are reissues of U.S. Pat. No. 5,774,572.
This application is a continuation of application Ser. No. 07/961,070, filed Oct. 14, 1992, now abandoned, which is a continuation of application Ser. No. 07/804,511, filed Dec. 10, 1991, now abandoned; which is a continuation of application Ser. No. 06/684,583, filed Dec. 20, 1984, now abandoned.
This invention relates to automatic visual inspection systems, more particularly to systems for inspecting printed circuit boards, hybrid boards, and integrated circuits.
In its simplest form, a printed circuit board or panel comprises a non-conductive substrate on one or both surfaces of which are deposited conductive tracks or lines in a pattern dictated by the design of the electronic equipment supported by the board. More complex boards are constructed by laminating a number of single panels into a composite or multi-layered board; and the use of the latter has increased dramatically in recent years in an effort to conserve space and weight.
As component size has shrunk, component density on boards has increased, with the result that line size and spacing have decreased over the years. Because of the “fine geometry” of modern boards, variations in line width and spacing have become more critical to proper operation of the boards. That is to say, minor variations in line thickness or spacing have a much greater chance to adversely affect performance of the printed circuit board. As a consequence, visual inspection, the conventional approach to quality control, has employed visual aids, such as magnifiers or microscopes, to detect defects in a board during its manufacture. Such defects include deviations in line width and spacing, pad position relative to hole location, etc. Unfortunately, visual inspection is a time-consuming, tedious task that causes operator fatigue and a consequential reduction in the consistency and reliability of inspection, as well as in throughput.
Because multi-layered boards cannot be tested electrically before lamination, visual inspection of the component panels of a multi-layered board before lamination is critical. A flaw in a single layer of an assembled board can result in scrapping of the entire board, or involve costly and time-consuming rework. Thus, as board complexity, component density, and production requirements have increased, automation of manufacturing processes has been undertaken. However, a larger and larger fraction of the cost of producing boards lies in the inspection of the boards during various stages of manufacture.
Automatic visual inspection techniques have been developed in response to industry needs to more quickly, accurately and consistently inspect the printed circuit boards. Conventional systems include an electro-optical sub-system that intensely illuminates a board being inspected along a narrow strip perpendicular to the linear displacement of the board through the system, and a solid state camera that converts the brightness of each elemental area of the illuminated strip, termed a pixel, to a number representative of such brightness; and the number is stored in a digital memory. Scanning of the entire board is achieved by moving the board relative to the camera. The result is a grey scale image of the board, or part of the board stored in memory. A relatively small number in a cell of the memory represents a relatively dark region of the object (i.e., the substrate), and a relatively large number represents a brighter portion of the object, (i.e., a conductive line).
The contents of the memory are processed for the purpose of determining the location of transitions between bright and dark regions of the object. Such transitions represent the edges of lines, and the processing of the data in the digital memory is carried out so as to produce what is termed a binary bit map of the object, which is a map of the printed circuit board in terms of ZERO's and ONE's, where the ONE's trace the lines on the printed circuit board and the ZERO's represent the substrate. Measurement of line width and spacing between lines can then be carried out by analyzing the binary map.
The time required to scan a given board, given a camera with a predetermined data processing rate, typically 10-15 MHz, will depend on the resolution desired. For example, a typical camera with an array of 2048 photodiodes imaging a board is capable of scanning a one inch swath of the board in each pass if a resolution of ½ mil is required. At 0.5 mil resolution, a swath one inch wide and 24 inches long is composed of 96 million pixels. Assuming a camera speed of 10 MHz, about 10 seconds would be required for completing one pass during which data from one swath would be acquired. If the board were 18 inches wide, then at least 18 passes would be required to complete the scan of the board. More than 18 passes are required, however, to complete a scan of the board because an overlap of the passes is required to ensure adequate coverage of the “seams” between adjacent passes. Combined with the overhead time required, e.g., the time required to reposition the camera from swath to swath, data acquisition time becomes unacceptably large under the conditions outlined above.
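The arithmetic above can be checked directly. A short calculation using the figures stated in the text (2048-element array, 0.5 mil resolution, 10 MHz pixel rate, nominal one-inch swath):

```python
RES_IN = 0.0005                     # 0.5 mil = 0.0005 inch per pixel
ARRAY_PIXELS = 2048                 # photodiodes in the linear array
CAMERA_RATE_HZ = 10e6               # 10 MHz pixel rate

swath_in = ARRAY_PIXELS * RES_IN    # ~1.02 inch imaged per pass
# Taking the nominal one-inch swath used in the text:
px_per_line = 1.0 / RES_IN          # 2000 pixels across a one-inch swath
lines = 24.0 / RES_IN               # 48,000 scan lines along a 24-inch board
pixels = px_per_line * lines        # 96 million pixels, as stated
seconds = pixels / CAMERA_RATE_HZ   # ~10 s per pass, before overhead

print(swath_in, pixels / 1e6, seconds)
```

With 18 or more such passes plus repositioning overhead, total acquisition time indeed runs to several minutes per board.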
The basic problems with any automatic visual inspection system can be summarized in terms of speed of data acquisition, amount of light to illuminate the board, and the depth of field of the optical system. Concomitant with increased requirements for reducing pixel size (i.e., increasing resolution) is an increase in the amount of light required for data acquisition. Physical constraints limit the amount of light that can be concentrated on the printed circuit boards, so that decreasing the pixel size to increase resolution and detect variations in line width or spacing of “fine geometry” boards actually slows the rate of data acquisition. Finally, decreasing pixel size, as resolution is increased, is accompanied by a reduction in the depth of field which adversely affects the accuracy of the acquired data from board to board.
It is therefore an object of the present invention to provide a new and improved automatic visual inspection system which is capable of acquiring data faster than conventional automatic visual inspection systems, and/or reducing the amount of illumination required for the board, and increasing the depth of field.
According to the present invention, a binary map of an object having edges is produced by first producing a digital grey scale image of the object with a given resolution, and processing the grey scale image to produce a binary map of the object at a resolution greater than said given resolution. If the ultimate resolution required is, for example, one mil (0.001 inches), then the resolution of the digital grey scale image can be considerably less than one mil, and may be, for example, three mils. The larger-than-final pixel size used in acquiring data from an object permits objects to be scanned faster, and either reduces the amount of light required to illuminate the object or, if the same amount of light is used, decreases the effect on accuracy of noise due to statistical variations in the amount of light. Finally, increasing the pixel size during data acquisition improves the depth of field and renders the system less sensitive to variations in the thickness of the boards being tested.
Processing of the grey scale image includes the step of convolving the 2-dimensional digital grey scale image with a filter function related to the second derivative of a Gaussian function, forming a 2-dimensional convolved image having signed values. The location of an edge in the object is determined by finding zero crossings between adjacent oppositely signed values. Preferably, the zero crossings are located by an interpolation process that produces a binary bit map of the object at a resolution greater than the resolution of the grey scale image. The nature of the Gaussian function whose second derivative is used in the convolution with the grey scale image, namely its standard deviation, is empirically selected in accordance with system noise and the pattern of the traces on the printed circuit board such that the resulting bit map conforms as closely as desired to the lines on the printed circuit board.
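As a rough 1-D illustration of this step, the following sketch (the σ value and the step profile are invented for the demo) convolves a brightness profile with the sampled second derivative of a Gaussian and locates the edge as a zero crossing between oppositely signed values:

```python
import numpy as np

def d2_gaussian(sigma, radius):
    # Sampled second derivative of a Gaussian, mean-subtracted so that a
    # flat region convolves to (numerically) zero.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = (x**2 / sigma**4 - 1.0 / sigma**2) * np.exp(-x**2 / (2 * sigma**2))
    return k - k.mean()

# Dark substrate (grey level 20) meeting a bright line (200) between
# samples 9 and 10.
profile = np.array([20.0] * 10 + [200.0] * 10)

pad = 5
kernel = d2_gaussian(sigma=1.5, radius=pad)
padded = np.pad(profile, pad, mode="edge")        # avoid border artifacts
response = np.convolve(padded, kernel, mode="same")[pad:-pad]

# An edge lies between adjacent samples of opposite (significant) sign.
tol = 1e-3 * np.abs(response).max()
crossings = [i for i in range(len(response) - 1)
             if (response[i] > tol and response[i + 1] < -tol)
             or (response[i] < -tol and response[i + 1] > tol)]
print(crossings)   # -> [9]: the crossing straddles the true edge
```

The signed response is large only near the transition and essentially zero elsewhere, which is the property the interpolation and the pre-processor discussed later both rely on.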
The convolution can be performed with a difference-of-two-Gaussians, one positive and one negative. It may be achieved by carrying out a one-dimensional convolution of successive lines of the grey scale image to form a one-dimensional convolved image, and then carrying out an orthogonal one-dimensional convolution of successive lines of the one-dimensional convolved image to form a two-dimensional convolved image. Each one-dimensional image may be formed by multiple convolutions with a boxcar function.
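The row-then-column decomposition can be sketched as follows; the 3-tap kernel stands in for the boxcar or Gaussian factors described above, and the check confirms that two orthogonal 1-D passes reproduce a full 2-D convolution with the separable (outer-product) kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((6, 6))
k = np.array([1.0, 2.0, 1.0])    # any symmetric 1-D factor (boxcar, Gaussian, ...)

def conv_rows(a):
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, a)

def conv_cols(a):
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, a)

separable = conv_cols(conv_rows(image))     # two orthogonal 1-D passes

# Direct 2-D convolution with the outer-product kernel, zero-padded,
# computed with an explicit window sum for comparison.
K = np.outer(k, k)
padded = np.pad(image, 1)
direct = np.array([[np.sum(padded[i:i + 3, j:j + 3] * K)
                    for j in range(6)] for i in range(6)])

print(np.allclose(separable, direct))    # the two routes agree
```

The practical payoff is cost: two 1-D passes take O(n) multiplies per pixel for an n-tap factor, versus O(n²) for the equivalent 2-D kernel, which is what makes the hardware pipeline described later feasible.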
Detection of the presence of lines less than a predetermined minimum width can be accomplished, independently of the attitude of the lines in the bit map, by superimposing on an edge of a line a quadrant of a circle whose radius is the minimum line thickness. By ANDing the contents of pixels in the bit map with ONE's in the corresponding pixels in the superposed quadrant, the production of a ZERO indicates a line width less than the predetermined width. A similar approach can be taken to detect line spacings less than a predetermined minimum. One quadrant is used for lines and spaces whose orientations on the board lie between 0° and 90°, and another quadrant is used for orientations between 90° and 180°.
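A minimal sketch of the quadrant test follows. The anchoring and pixel-counting conventions (where the quadrant is placed relative to the edge pixel, and the quadrant orientation) are chosen here for illustration, not taken from the patent figures:

```python
import numpy as np

def quadrant(r):
    # ONEs inside a quarter circle of radius r (the minimum legal line width).
    ys, xs = np.mgrid[0:r + 1, 0:r + 1]
    return ys**2 + xs**2 <= r**2

def width_at_least(bitmap, y, x, r):
    # AND the bit map under the quadrant anchored at edge pixel (y, x);
    # any ZERO beneath a ONE of the quadrant flags a width below the minimum.
    window = bitmap[y:y + r + 1, x:x + r + 1]
    return bool(np.all(window[quadrant(r)]))

# A horizontal line 4 pixels thick (rows 5-8) on a 16x16 bit map,
# tested at a pixel on its upper edge.
bm = np.zeros((16, 16), dtype=np.uint8)
bm[5:9, :] = 1

print(width_at_least(bm, 5, 5, 3))   # True: the line clears radius 3
print(width_at_least(bm, 5, 5, 5))   # False: the quadrant pokes into substrate
```

For the spacing test the same mask is simply anchored on the substrate side of the edge, and a ONE under the quadrant flags the violation, mirroring the description above.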
An embodiment of the present invention is shown in the accompanying drawings wherein:
Referring now to the drawing, reference numeral 10 designates a conventional printed circuit board comprising substrate 11 on one surface of which are deposited conductive tracks or lines 12 in a manner well known in the art. A typical board may have 3 mil lines, and spacing between lines of a comparable dimension.
As is well known, the technique of depositing lines 12 on substrate 11 involves a photographic and etching process which may produce a result shown in
The photoetching process involved in producing lines on a printed circuit board sometimes results in the spacing s being less than the design spacing. In such a case, quality control should reject the board or note the occurrence of a line spacing less than the specified line spacing.
In order to achieve these and other ends, conventional automatic visual inspection systems will produce the results shown in FIG. 4. That is to say, a grey scale image of the printed circuit board will be obtained and stored in a digital memory, the resolution of the grey scale image being selected to be consistent with the accuracy with which measurements in the image are to be made. Thus, if the requirement is for measuring the edge 13 of a trace to within say 1 mil, then the resolution of the grey scale image should be less than that, say 0.5 mil.
Curve 14 in
Conventionally, an algorithm is used for the purpose of determining within which pixel an edge will fall and this is illustrated by the assigned pixel values in vector 18 as shown in FIG. 4. That is to say, value 15a is assumed to exceed a predetermined threshold; and where this occurs, a bit map can be established based on such threshold in a manner illustrated in FIG. 4. Having assigned binary values to the bit map, the edge is defined as illustrated by curve 16 in FIG. 4.
One of the problems with the approach illustrated in
The present invention contemplates using larger pixels to acquire the grey scale image of the printed circuit board than are used in constructing the bit map, while maintaining resolution accuracy. Using relatively large pixels to acquire the data increases the area scanned by the optical system in a given period of time as compared to the approach taken with a conventional device. This also increases the amount of light incident on each pixel in the object, thus decreasing the effect of noise due to statistical variations in the amount of light incident on the pixel. Finally, this approach also increases the depth of field because the larger pixel size accommodates larger deviations in board thickness.
Apparatus in accordance with the present invention is designated by reference numeral 30 and is illustrated in
Linear light source 36, shown in cross-section in
The output of circuit 45, which is serial, is a representation of the grey scale image of the object, namely surface 39 of board 31. This grey scale image is developed line by line as board 31 is moved with respect to the electro-optical sub-system 32. Discussion of the function of preprocessor 70 is deferred; at this time, it is sufficient to state that the digital values of the brightness of the elemental areas of the object are stored in a digital memory that is a part of convolver 47, whose operation is detailed below.
Referring at this time to
Convolver 47 carries out, on the digital data representative of the grey scale image of the printed circuit board, a two-dimensional convolution with the second derivative of a Gaussian function, or an approximation thereof, producing in the associated memory of the convolver, a convolved image of the object having signed values.
The precise location of the zero crossing need not be determined; only the pixel within which the crossing occurs is needed. In order to make a direct comparison with the conventional technique illustrated in
Reference is now made to
where the quantity A represents the magnitude of the convolved image at 54, B represents the magnitude of the convolved image at data point 53, a represents the dimension of a pixel, and b represents the distance of the zero crossing from data point 54. The object of this exercise is to assign a binary value to bits associated with data points 53 and 54, as well as the two interpolated data points 58 and 59. The binary values for data points 54 and 53 are known: they are ZERO and ONE, respectively, as shown in FIG. 7. What is unknown is the value associated with the interpolated data points 58 and 59, these values being indicated by the quantities x1 and x2. By inspection of Eq. (1), one can see that if b lies within the interval between zero and a as shown in
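A minimal sketch of this interpolation, assuming Eq. (1) is the usual similar-triangles (linear) estimate of the crossing position between the two oppositely signed samples; the sub-pixel count and function names are illustrative:

```python
def crossing_offset(A, B, a=1.0):
    # Linear interpolation of the zero crossing between two oppositely signed
    # convolution samples one pixel (dimension a) apart: A is the magnitude at
    # the first sample, B at the second; returns the distance b from the first.
    return a * A / (A + B)

def subpixel_bits(A, B, n_sub=4, a=1.0):
    # Bits for n_sub sub-pixel cells spanning the interval from the first
    # (ZERO-side) sample toward the second (ONE-side) sample: a cell gets
    # ONE when its center lies beyond the interpolated crossing.
    b = crossing_offset(A, B, a)
    return [1 if (i + 0.5) * a / n_sub > b else 0 for i in range(n_sub)]

print(subpixel_bits(1.0, 1.0))   # crossing at mid-pixel  -> [0, 0, 1, 1]
print(subpixel_bits(3.0, 1.0))   # crossing at 3/4 pixel  -> [0, 0, 0, 1]
```

This is how a bit map finer than the acquisition grid is obtained: the coarse pixels supply the signed magnitudes, and the interpolation distributes the edge among sub-pixel cells.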
Below curve 56 in
In actual practice, a two-dimensional convolution of the grey scale image with a two-dimensional second derivative (Laplacian) of a Gaussian is carried out. The result is a two-dimensional convolved image of signed values; an interpolation is carried out on this convolved image as indicated in
Returning now to
As indicated previously, the signed values of the convolved image differ from essentially zero only adjacent to transitions or edges in the object image. No information is contained in the convolved image by which a determination can be made whether a pixel containing an essentially zero value derives from a pixel associated with the substrate or from a pixel associated with a line. Thus, the edges of lines can be accurately determined by the process and apparatus described above, but the attribute of pixels remote from an edge (e.g., pixels farther than the radius of the derivative of the Gaussian operator) is unknown. The purpose of pre-processor 70 is to furnish an attribute to interpolator 65 to enable it to assign a binary value to each bit of the bit map in accordance with whether its corresponding pixel is located in a line or in the substrate. To this end, pre-processor 70 applies to each pixel in the grey scale image a threshold test and stores in associated memory 71 a record that indicates whether the threshold is exceeded. The threshold will be exceeded only for pixels located in a line on the printed circuit board. When convolver 47 produces the convolved image of the grey scale image of the board, the address of each pixel lying in a line on the board is available from memory 71. Thus, the attribute of each pixel in the bit map can be established: it is determined directly by the convolution sign near a zero-crossing, and by the threshold test farther away from the zero-crossing. The threshold test is restricted in this way because unavoidable variations in contrast, which always exist, make it inaccurate, particularly near an edge transition where large variations in contrast occur. In the method described here, therefore, the threshold test is used only for pixels completely surrounded by dark or light areas; the attributes of pixels near a transition are determined directly by the convolution sign.
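The two-rule attribute assignment can be sketched in a few lines. The sign convention (negative response on the bright side of an edge, as a second-derivative operator gives for a rising step), the `near_edge` mask, and the sample values are all assumptions of this sketch:

```python
import numpy as np

def attributes(gray, conv, near_edge, threshold):
    # Near a zero crossing the convolution sign decides the attribute
    # (line = 1, substrate = 0); elsewhere the grey-level threshold decides.
    return np.where(near_edge, conv < 0, gray > threshold).astype(np.uint8)

gray      = np.array([20, 20, 25, 190, 200, 200])        # substrate | line
conv      = np.array([0.0, 0.6, 2.1, -2.1, -0.6, 0.0])   # signed response near the edge
near_edge = np.array([False, True, True, True, True, False])

print(attributes(gray, conv, near_edge, threshold=100))  # -> [0 0 0 1 1 1]
```

Note how the threshold alone would be unreliable at the third and fourth samples, where contrast is changing rapidly; there the sign of the convolved value decides instead.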
The present invention also contemplates determining whether any line on the board has a portion with a thickness less than a predetermined value, regardless of the orientation of the line relative to the axes defining the pixel orientation. This result is achieved in the manner illustrated in
In practice, analysis of line width can be carried out automatically by sequentially applying the principles set forth above to each point on the edge of a line. A record can be made of each pixel in the bit map at which a ZERO detection occurs in the offset addresses, and hence the coordinates of each point on the board having too narrow a line can be determined and stored. It should be noted that the technique disclosed herein is applicable to any line on the board at any orientation.
The principles described above are equally applicable to determining whether the spacing between lines is less than a predetermined minimum. In this case, however, the imaginary circle is placed at an edge of a line such that it overlies the substrate, and the presence of a ONE in the offset addresses indicates reduced spacing.
The convolution function used in the present invention need not be a 2-dimensional function, and the convolution operation need not be carried out in one step. Rather, the function may be the difference of two Gaussian functions, one positive and one negative. The convolution operation can be carried out in two steps: convolving with the positive Gaussian function, and then convolving with the negative one. In implementation, the effect of the convolution can be achieved by convolving a line of data in the grey scale image multiple times with a boxcar function in one dimension, and then convolving the 1-dimensional convolved image with a boxcar function in the orthogonal direction.
In order to facilitate two dimensional filtering, or the convolution operation as described above, apparatus 100 shown in
The operation of apparatus 100 is based on a mathematical theorem that states that a 1-dimensional convolution of a given function with a Gaussian function can be closely approximated by multiple 1-dimensional convolutions of the given function with a boxcar function (i.e., a function that is unity between prescribed limits and zero elsewhere). This procedure is described in Bracewell, R.N. The Fourier Transform and Its Applications, McGraw-Hill Inc., 1978, chapter 8. Application of this theorem and its implementation to the grey-scale image of the board is achieved in the present invention by apparatus 100 which comprises a plurality of identical convolver unit modules, only one of which (designated by numeral 101) is shown in detail. Each module accepts a stream of values from a scanned two dimensional function, and performs a partial filtering operation. The output of that module is then fed to the next module for further filtering.
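The central-limit behaviour this theorem describes is easy to verify numerically. In the sketch below (the 5-tap boxcar width and the four passes are illustrative choices, not values from the patent), repeated self-convolution of a boxcar is compared against a Gaussian of matching variance:

```python
import numpy as np

box = np.ones(5) / 5.0            # unit-area boxcar, 5 samples wide
kernel = box.copy()
for _ in range(3):                # four boxcar passes in total
    kernel = np.convolve(kernel, box)

# Each discrete boxcar of width w contributes variance (w**2 - 1) / 12,
# so four passes give sigma**2 = 4 * (25 - 1) / 12 = 8.
x = np.arange(len(kernel), dtype=float) - (len(kernel) - 1) / 2
gauss = np.exp(-x**2 / 16.0)      # 2 * sigma**2 = 16
gauss /= gauss.sum()

err = np.abs(kernel - gauss).max()
print(err)   # small: four boxcar passes already look closely Gaussian
```

Because a boxcar convolution needs only one add and one subtract per output sample in a running-sum form, cascading such passes is far cheaper in hardware than a direct Gaussian multiply-accumulate, which is the point of the module cascade described here.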
Each module contains a shift register made of many (e.g., 2048) cells which are fed sequentially with a stream of grey level values from the camera. Under control of pulses from a clock (not shown), the contents of each cell are shifted (to the right as seen in
Another embodiment of convolver, shown in
The horizontal block of apparatus 110 contains m units, each of which performs partial horizontal filtering or convolution. Two adjacent samples in cells 112 and 113 are summed by adder 114, which here represents the boxcar function. The output of the adder is fed into output cell 115. Cascading many horizontal units performs a 1-dimensional horizontal filtering. The output of the horizontal block is then fed into the vertical block.
The vertical block is made of identical units, each of which performs partial vertical filtering. Apparatus 116 shows one vertical unit. The signal is fed into the input cell 117. The output of that cell is down shifted along the shift register 118. Adder 119 adds the output of the shift register and the output of cell 117. The output of module 116 is fed into the input of the next module. The vertical modules perform a 1-dimensional convolution on the output of the horizontal module, completing in this manner a 2-dimensional convolution on the grey-scale image. All memory cells in the vertical or horizontal units as well as all shift registers are pulsed by a common clock (not shown) feeding the value of each cell into the adjacent cell.
While the above described apparatus performs repeating convolutions with a boxcar function comprised of two adjacent pixels, the convolutions can be achieved using a boxcar function comprising more than two adjacent pixels. This can be achieved, for example, by increasing the number of sampling cells and the number of shift registers, and consequently also increasing the number of inputs entering the adders per module.
As previously indicated, the convolution process requires a 2-dimensional convolution with the difference between Gaussian functions, and this can be achieved in the manner indicated in
Finally, while the invention has been described in detail with reference to optical scanning of printed circuit boards, the inventive concept is applicable to other optical scanning problems, and more generally, to any 2-dimensional convolution problem. For example, the invention can be applied to inspecting hybrid boards as well as integrated circuits.
The advantages and improved results furnished by the method and apparatus of the present invention are apparent from the foregoing description of the preferred embodiment of the invention. Various changes and modifications may be made without departing from the spirit and scope of the invention as described in the claims that follow.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US3928842||May 9, 1974||Dec 23, 1975||Veriprint Systems Corp||Fingerprint comparator|
|US4048485||Mar 25, 1976||Sep 13, 1977||International Business Machines Corporation||Digital filter generating a discrete convolution function|
|US4200861||Sep 1, 1978||Apr 29, 1980||View Engineering, Inc.||Pattern recognition apparatus and method|
|US4259661||Sep 1, 1978||Mar 31, 1981||Burroughs Corporation||Apparatus and method for recognizing a pattern|
|US4282510||Jan 7, 1980||Aug 4, 1981||Rca Corporation||Apparatus for discerning the noticeable presence of spatial fluctuations of intensity within a two-dimensional visual field|
|US4303947||Apr 28, 1980||Dec 1, 1981||Xerox Corporation||Image interpolation system|
|US4330833||May 26, 1978||May 18, 1982||Vicom Systems, Inc.||Method and apparatus for improved digital image processing|
|US4442542 *||Jan 29, 1982||Apr 10, 1984||Sperry Corporation||Preprocessing circuitry apparatus for digital data|
|US4462046||Jul 2, 1982||Jul 24, 1984||Amaf Industries Incorporated||Machine vision system utilizing programmable optical parallel processing|
|US4472786 *||Apr 23, 1982||Sep 18, 1984||The United States Of America As Represented By The Secretary Of The Navy||Analog Gaussian convolver|
|US4543660||Apr 14, 1983||Sep 24, 1985||Tokyo Shibaura Denki Kabushiki Kaisha||Pattern features extracting apparatus and method|
|US4553260||Mar 18, 1983||Nov 12, 1985||Honeywell Inc.||Means and method of processing optical image edge data|
|US4555770 *||Oct 13, 1983||Nov 26, 1985||The United States Of America As Represented By The Secretary Of The Air Force||Charge-coupled device Gaussian convolution method|
|US4570180||May 26, 1983||Feb 11, 1986||International Business Machines Corporation||Method for automatic optical inspection|
|US4578812 *||Nov 30, 1983||Mar 25, 1986||Nec Corporation||Digital image processing by hardware using cubic convolution interpolation|
|US4648120||Jan 29, 1985||Mar 3, 1987||Conoco Inc.||Edge and line detection in multidimensional noisy, imagery data|
|US4658372||May 13, 1983||Apr 14, 1987||Fairchild Camera And Instrument Corporation||Scale-space filtering|
|US4724482||Oct 19, 1984||Feb 9, 1988||Telecommunications Radioelectriques||Infrared thermography system with sensitivity improved by progressive accumulation of image lines|
|US4736437||Apr 21, 1987||Apr 5, 1988||View Engineering, Inc.||High speed pattern recognizer|
|US4758888||Feb 17, 1987||Jul 19, 1988||Orbot Systems, Ltd.||Method of and means for inspecting workpieces traveling along a production line|
|US4863268||Apr 2, 1987||Sep 5, 1989||Diffracto Ltd.||Diffractosight improvements|
|US5146509||Aug 28, 1990||Sep 8, 1992||Hitachi, Ltd.||Method of inspecting defects in circuit pattern and system for carrying out the method|
|US5204910||Jun 8, 1992||Apr 20, 1993||Motorola, Inc.||Method for detection of defects lacking distinct edges|
|US5495535||Jan 28, 1993||Feb 27, 1996||Orbotech Ltd||Method of inspecting articles|
|US5586058||Apr 21, 1992||Dec 17, 1996||Orbot Instruments Ltd.||Apparatus and method for inspection of a patterned object by comparison thereof to a reference|
|US5619429||Nov 27, 1991||Apr 8, 1997||Orbot Instruments Ltd.||Apparatus and method for inspection of a patterned object by comparison thereof to a reference|
|US5774572||May 17, 1993||Jun 30, 1998||Orbotech Ltd.||Automatic visual inspection system|
|EP0594146A2||Oct 20, 1993||Apr 27, 1994||Advanced Interconnection Technology, Inc.||System and method for automatic optical inspection|
|WO2000011454A1||Aug 18, 1998||Mar 2, 2000||Orbotech Ltd.||Inspection of printed circuit boards using color|
|WO2000019372A1||Sep 28, 1998||Apr 6, 2000||Orbotech Ltd.||Pixel coding and image processing method|
|1||*||"A Very High Speed, Very Versatile Automatic PCB Inspection System", by Shimon Ullman, presented in Washington, D.C., May 22-25, 1984.*|
|2||*||"Automatic Visual Inspection Of PC Boards in Seconds", Published on May 1, 1984.*|
|3||*||"Finding Edges and Lines in Images" by John F. Canny.*|
|4||"Orbot PC-20 Automatic Visual Inspection of PC Boards in Seconds," Literature published by Orbot Systems, Ltd., 1984, 6 pages.|
|5||*||"Theory of Edge Detection", Proc. R. Soc. Lond., B 207 (1980), pp. 187-212, by Marr and Hildreth.*|
|6||*||A Survey of Edge Detection Techniques, Larry S. Davis, (1975) pp. 248-270.*|
|7||*||An Optimal Frequency Domain Filter For Edge Detection In Digital Pictures; K. Sam Shanmugam et al, IEEE Transactions On Pattern Analysis And Machine Intelligence, vol. PAMI-1, No. 1, Jan. 1979, pp. 37-49.*|
|8||Canny John Francis "Finding Edges and Lines in Images," Master of Science Thesis, Massachusetts Institute of Technology, Jun. 1983 (entire thesis submitted, pp. 1-145).|
|9||Chapron Michel "A New Chromatic Edge Detector Used for Color Image Segmentation," Proceedings of the 11th IAPR International Conference on Pattern Recognition, The Hague, The Netherlands, IEEE, vol. III, Aug. 30-Sep. 3, 1992, Conference C: Image, Speech, and Signal Analysis.|
|10||Comaniciu Dorin et al., "Robust Analysis of Feature Spaces: Color Image Segmentation," Proceedings of the 1997 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 17-19, 1997, IEEE Computer Society, pp. 750-755.|
|11||Davis, Larry S. "A Survey of Edge Detection Techniques," Computer Graphics and Image Processing, Academic Press, Inc., vol. 4, 1975, pp. 248-270.|
|12||*||Edge and Region Analysis for Digital Image Data, Robert M. Haralick, pp. 60-73.*|
|13||Gonzalez Rafael C., et al., "Digital Image Processing," Addison-Wesley Publishing Company, Inc., 1992, Chapters 7 & 8, pp. 413-570.|
|14||Haralick Robert M. "Edge and Region Analysis for Digital Image Data," Computer Graphics and Image Processing, Academic Press, Inc., vol. 12, No. 1, Jan. 1980, pp. 60-73.|
|15||Joyce Lawrence et al., "Precision bounds in superresolution processing," Journal of the Optical Society of America A, Optics and Image Science, Optical Society of America, vol. 1, No. 2, Feb. 1984, pp. 149-168.|
|16||Mammone R. et al., "Superresolving image restoration using linear programming," Applied Optics, Optical Society of America, vol. 21, No. 3, Feb. 1, 1982, 496-501.|
|17||Marr, D., et al., "Theory of edge detection," Proceeding of Royal Society London, B 207, pp. 187-217, 1980.|
|18||Okuyama H. et al., "High-speed digital image processor with special-purpose hardware for two-dimensional convolution," Review of Scientific Instruments, American Institute of Physics, vol. 50, No. 10, Oct. 1979, pp. 1208-1212.|
|19||*||Okuyama, H. et al, "High-speed digital image processor with special-purpose hardware for two-dimensional convolution," Rev. Sci. Instrum., vol. 50, No. 10, Oct. 1979, pp. 1208-1212.*|
|20||*||Precision bounds in superresolution processing, Lawrence S. Joyce and William L. Root, vol. 1, No. 2, Feb. 1984.*|
|21||Pujas Philippe, et al., "Robust Colour Image Segmentation," Laboratoire d'Informatique, de Robotique et de Microelectronique de Montpellier, Universite Montpellier II/CNRS, France, pp. 1-14.|
|22||Russ John C. "The Image Processing Handbook," 2nd Edition, CRC Press, Inc., 1995, Chapters 6 & 7, pp. 347-480.|
|23||Shafarenko Leila et al., "Automatic Watershed Segmentation of Randomly Textured Color Images," IEEE Transactions on Image Processing, IEEE, vol. 6, No. 11, Nov. 1997, pp. 1530-1544.|
|24||Shanmugam, K. Sam et al., "An Optimal Frequency Domain Filter for Edge Detection in Digital Pictures," IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE, vol. PAMI-1, No. 1, Jan. 1979, pp. 37-49.|
|25||*||Superresolving image restoration using linear programming, R. Mammone and G. Eichmann, vol. 21, No. 3 Feb. 01, 1982, pp. 496-501.*|
|26||*||Theory Of Edge Detection; D. Marr et al, Proceedings of the Royal Society, B 207; pp. 187-217 (1980).*|
|27||Ullman Shimon "A Very High Speed, Very Versatile Automatic PCB Inspection System," presented in Washington, D.C., Printed Circuit World Convention III, May 22-25, 1984, pp. 1-8.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7706592||Sep 20, 2006||Apr 27, 2010||Primax Electronics Ltd.||Method for detecting a boundary of a monetary banknote within an image|
|US7706593 *||Sep 20, 2006||Apr 27, 2010||Primax Electronics Ltd.||Verification method for determining areas within an image corresponding to monetary banknotes|
|US7738690 *||Sep 20, 2006||Jun 15, 2010||Primax Electronics Ltd.||Verification method for determining areas within an image corresponding to monetary banknotes|
|US7885450 *||Sep 20, 2006||Feb 8, 2011||Primax Electronics Ltd.||Method for characterizing texture of areas within an image corresponding to monetary banknotes|
|US7916924||Sep 19, 2006||Mar 29, 2011||Primax Electronics Ltd.||Color processing method for identification of areas within an image corresponding to monetary banknotes|
|US8139115||Oct 30, 2006||Mar 20, 2012||International Business Machines Corporation||Method and apparatus for managing parking lots|
|US8594383 *||May 27, 2009||Nov 26, 2013||Hewlett-Packard Development Company, L.P.||Method and apparatus for evaluating printed images|
|US9035673||Jan 25, 2010||May 19, 2015||Palo Alto Research Center Incorporated||Method of in-process intralayer yield detection, interlayer shunt detection and correction|
|US9147267 *||Nov 12, 2012||Sep 29, 2015||Siemens Aktiengesellschaft||Reconstruction of image data|
|US20060017676 *||Jul 25, 2005||Jan 26, 2006||Bowers Gerald M||Large substrate flat panel inspection system|
|US20080069423 *||Sep 19, 2006||Mar 20, 2008||Xu-Hua Liu||Color processing method for identification of areas within an image corresponding to monetary banknotes|
|US20080069424 *||Sep 20, 2006||Mar 20, 2008||Xu-Hua Liu||Method for characterizing texture of areas within an image corresponding to monetary banknotes|
|US20080069425 *||Sep 20, 2006||Mar 20, 2008||Xu-Hua Liu||Method for detecting a boundary of a monetary banknote within an image|
|US20080069426 *||Sep 20, 2006||Mar 20, 2008||Xu-Hua Liu||Verification method for determining areas within an image corresponding to monetary banknotes|
|US20080069427 *||Sep 20, 2006||Mar 20, 2008||Xu-Hua Liu||Verification method for determining areas within an image corresponding to monetary banknotes|
|US20100303305 *||May 27, 2009||Dec 2, 2010||Hila Nachlieli||Method and apparatus for evaluating printed images|
|US20110185322 *||Jan 25, 2010||Jul 28, 2011||Palo Alto Research Center Incorporated||Method of in-process intralayer yield detection, interlayer shunt detection and correction|
|US20130121555 *||Nov 12, 2012||May 16, 2013||Herbert Bruder||Reconstruction of image data|
|US20130265410 *||Apr 9, 2013||Oct 10, 2013||Mahle Powertrain, Llc||Color vision inspection system and method of inspecting a vehicle|
|International Classification||G06T5/00, G06T5/20, G06T7/00|
|Cooperative Classification||G06T2200/28, G06T2207/30141, G06T3/403, G06T7/0004, G06T7/0083|
|European Classification||G06T7/00S2, G06T7/00B1, G06T3/40E|
|Mar 13, 2001||AS||Assignment|
Owner name: ORBOTECH LTD., ISRAEL
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CASPI, AMIRAM;LAPIDOT, ZVI;REEL/FRAME:011587/0587;SIGNING DATES FROM 20010218 TO 20010221
|Dec 2, 2005||FPAY||Fee payment|
Year of fee payment: 8
|Feb 1, 2010||REMI||Maintenance fee reminder mailed|
|Jun 25, 2010||LAPS||Lapse for failure to pay maintenance fees|