|Publication number||US6606421 B1|
|Application number||US 09/578,843|
|Publication date||Aug 12, 2003|
|Filing date||May 25, 2000|
|Priority date||May 25, 2000|
|Inventors||Doron Shaked, Avraham Levy, Izhak Baharav|
|Original Assignee||Hewlett-Packard Development Company, L.P.|
The present invention relates generally to image processing and more specifically to a geometric deformation correction method and system for dot pattern images.
Bar-codes are information-carrying graphical patterns designed for easy and reliable automatic retrieval. The most common bar-codes are known as one-dimensional bar-codes. These graphical patterns vary in a single dimension (e.g., the horizontal dimension) and are constant in the other dimension (e.g., the vertical dimension). One-dimensional bar-codes are employed in low-information-content applications such as product index registry (e.g., automatic price tagging and inventory management) or serial number registry (e.g., test-tube tagging in automated medical tests). Common examples of one-dimensional bar-codes are those affixed or printed on the packages of items purchased at a supermarket or other store. These bar-codes typically encode only limited information, such as the price of the item and the manufacturer. The items bearing the bar-codes are scanned at a checkout counter to facilitate tallying the total receipt.
In order to convey more information on the same surface area, two-dimensional bar-codes were developed. Two-dimensional bar-codes involve intricate patterns that vary in both the horizontal and the vertical dimensions. Two-dimensional bar-codes are used in applications that require more information content. For example, two-dimensional bar-codes can be used to encode mail addresses for automated mail reading and distribution systems. Mail carrier companies can use the two-dimensional bar code on shipping packages to encode shipper information, recipient information, tracking information, etc. In another example, two-dimensional bar-codes can be used to encode the compressed content of a printed page to avoid the need for optical character recognition at the receiving end.
Two-dimensional bar-codes are typically graphical patterns composed of dots that are rendered by using two-toned dots (e.g. black dots on a white background). These dots usually occupy a rectangular area. Most current systems use a bar-code printer to print an original bar-code, and the bar-code readers detect that original bar-code. However, it is desirable in many office applications to have a bar-code system that can scan and reliably recover information from copies of the original bar-code. For example, if the original bar-code is embedded in an office document and is given to a first worker, and the first worker desires to share the document with a co-worker, it would be desirable for the first worker to copy the document and provide the same to the co-worker having confidence that the information embedded in a bar-code in the document could be recovered by the co-worker if needed.
Unfortunately, the prior art bar-code and bar-code reading systems cannot reliably recover information encoded in the bar-code except from an original bar-code that is newly printed by a bar-code printer. For example, most systems have difficulty in reliably reading and recovering information from a bar-code that is a photocopy of the original. Moreover, prior art systems have an even greater difficulty in accurately reading and recovering information from a bar-code that is a photocopy of another photocopy of the bar-code (e.g., a bar-code that has been photocopied two or more times).
Accordingly, a challenge in the design of 2D bar-codes and systems to read such bar-codes is to develop a scheme that can produce bar-codes and reliably recover information from bar-codes, even after successive copies of the original, using office equipment. In other words, the system needs to be designed in such a way as to compensate for degradation of the bar-code, thereby making such a system robust. In summary, it is desirable that a bar-code design and bar-code system be designed in such a way as to ensure that the bar-codes can be recognized and the encoded information recovered even after successive copying and handling in a paper path.
The bar-code pattern is often degraded between the time of creation and its use. These degradations can include contrast reduction, stains, marks, and deformations. Many degradations can be corrected by utilizing one or more traditional methods, such as contrast enhancement, adaptive thresholding, and error-correction coding. However, geometric pattern deformation remains a challenge and does not lend itself to resolution by prior art methods. Geometric pattern deformation can occur, for example, when a pattern is photocopied.
The photocopying process can introduce the following types of geometric deformations to the dots in a bar-code pattern. The first type is shape deformation. Shape deformations cause the dots to change size, either shrinking or expanding them. Shape deformations typically depend on the brightness setting of the copier: when the brightness is set to a darker setting, the dots tend to expand; when it is set to a lighter setting, the dots tend to shrink.
The second type is space deformation. Space deformations cause dots located at certain coordinates in the original image to appear at different coordinates in the copy. There are two kinds of space deformations: global and local. Global space deformations, such as translation, rotation, or affine transformations, change the coordinates of the dots in a way that is consistent with a single equation describing the deformation over the entire image. Local space deformations are those that cannot be modeled as a global space deformation; they are especially difficult to characterize and correct.
Accordingly, there remains a need for a method for correcting geometric deformations in bar-code patterns that overcomes the disadvantages set forth previously.
It is an object of the present invention to provide a method for correcting geometric deformations in a bar-code pattern.
It is another object of the present invention to provide a method for correcting shape deformations in a bar-code pattern.
It is a further object of the present invention to provide a method for correcting space deformations in a bar-code pattern.
A method and system are provided for correcting geometric deformations in an aligned image. A shape deformation correction unit receives the aligned image and, based thereon, generates a shape-corrected image. A space deformation correction unit is coupled to the shape deformation correction unit and receives the shape-corrected image. The space deformation correction unit uses the shape-corrected image to generate edges and interfaces, and further generates a corrected image based on the interfaces and the shape-corrected image.
The present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements.
FIG. 1 illustrates a bar-code decoding system in which the geometric deformation correction unit of the present invention can be implemented.
FIG. 2 illustrates in greater detail the geometric deformation correction unit of FIG. 1 configured according to one embodiment of the present invention.
FIG. 3 is a flowchart illustrating a method for correcting geometric deformation in patterns in accordance with one embodiment of the present invention.
FIG. 4 is an exemplary structuring element that can be utilized for shape deformation correction in accordance with one embodiment of the present invention.
FIG. 5 illustrates how the present invention corrects shape deformation in an exemplary dot pattern image.
FIG. 6 illustrates how the present invention corrects space deformation in an exemplary dot pattern image.
In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to avoid unnecessarily obscuring the present invention. The following description and the drawings are illustrative of the invention and are not to be construed as limiting the invention.
As described previously, an image is typically rendered using two tones (e.g., black dots on a white background).
FIG. 1 illustrates a bar code decoding system 10 configured in accordance with one embodiment of the present invention. The bar code decoding system 10 includes a pre-processing unit 12 for receiving a scanned image 13 (e.g., an image that has been distorted through successive copying), performing pre-processing operations, and correcting global space deformations in the scanned image 13 through enhancement and alignment operations to generate an aligned image 15. As described previously, global space deformations, such as translation, rotation, or affine transformations, are those that change the coordinates of the dots in a way that is consistent with an equation describing the deformation over the entire image. Global space deformations may be corrected by determining one or more parameters (e.g., a translation vector or rotation angle) that model the global deformation.
The bar code decoding system 10 also has a geometric deformation correction unit 14 that is coupled to the pre-processing unit 12 for receiving the aligned image 15 and for correcting deformations in the aligned image 15 to generate a corrected image 17. The geometric deformation correction unit 14 also optionally includes a second input for receiving one or more reference features 11 (e.g., the average gray level or dot size of the original image) that can be utilized to correct deformations in the aligned image 15.
The bar code decoding system 10 also has a graphic demodulation and decoding unit 16 that is coupled to the geometric deformation correction unit 14 for receiving the corrected image 17 and for recovering information 19 (e.g., information encoded in a two-dimensional bar code) from the corrected image 17. Demodulation and decoding operations are well known by those of ordinary skill in the art and will not be described in greater detail herein.
FIG. 2 illustrates in greater detail the geometric deformation correction unit 14 of FIG. 1 configured according to one embodiment of the present invention. The geometric deformation correction unit 14 includes a shape-deformation correction unit 30 for correcting shape deformations in the aligned image 15, and a space-deformation correction unit 34 for correcting space deformations in the shape-corrected image 47 generated by the shape-deformation correction unit 30.
The shape-deformation correction unit 30 includes a distortion measure determination unit 40 and a dot size modification unit 44. The distortion measure determination unit 40 receives the aligned image 15 and based thereon determines the distortion measure 41 of the aligned image 15. Preferably, the distortion measure determination unit 40 also receives one or more features 11 of a reference image (e.g., one or more features of the original image) and uses those features to determine the distortion measure 41. The dot size modification unit 44, which is coupled to the distortion measure determination unit 40, receives the aligned image 15 and the distortion measure 41 and performs shape deformation correction on the aligned image 15 based on the distortion measure 41 to generate a shape-corrected image 47. The terms "dot" or "dots," as used herein, refer to any objects that are used to render an image, regardless of actual shape. For example, the dots can have a rectangular shape or any other geometric shape.
The present invention ensures that the image rendered with dot patterns, and any information encoded therein, is preserved even after successive copies of the original image.
The space-deformation correction unit 34 includes a directed edge detection unit 50 for generating edges 52, an interface detection unit 54 for generating interfaces 56 based on the edges 52, and a dot alignment unit 58 for performing space-deformation correction on the shape-corrected image based on the interfaces 56 to generate a corrected image 17.
FIG. 3 is a flowchart illustrating the steps of a method of geometric deformation correction in accordance with one embodiment of the present invention. First, at least one morphological operation is utilized to correct for shape deformations. Preferably, the shape deformations are modeled as a morphological dilation or erosion of the black pattern, which are then corrected by the present invention by erosion or dilation, respectively. Second, row/column gradient statistics are utilized to correct for local, approximately-separable, space deformations. A separable space deformation is one in which all dots in a column move the same amount in the horizontal direction, and all dots in a row move the same amount in the vertical direction, so that the deformation of each dot can be deduced from its position. The term "approximately separable deformation" refers to deformations with a negligible amount of movement that cannot be explained by the separable model.
In step 300, the distortion measure determination unit 40 determines a distortion measure (e.g., a shape deformation measure) that reflects the extent to which portions of the pattern (e.g., the black dots) have eroded or expanded relative to the original pattern. As will be explained hereinafter, this step can be performed with or without utilizing one or more reference features 11.
In an exemplary implementation of step 300, the following steps are performed. First, an average tone or darkness of the aligned pattern is calculated. Next, an average tone or darkness of a reference pattern (e.g., the original pattern) is calculated. The average tone of the aligned pattern is then compared with the average tone of the reference pattern, and a tone difference is generated. The tone difference is then utilized to generate a shape deformation measure, which in one example, is an erosion radius or a dilation radius.
In general, the relative area of the black pattern in a visually significant bar-code is equal to the average gray value of the original image (i.e., the image that the bar-code renders). Accordingly, the following specific steps may be performed for determining the distortion measure. First, the aligned image is binarized using a threshold function, and the relative area, b, of the black part is compared to the average gray value, g, of the original image. If the relative area, b, is smaller than the average gray value, g, the required morphological operation is dilation of the black dots. On the other hand, if the relative area, b, is larger than the average gray value, g, the required morphological operation is erosion of the black dots. The radius, r, of the required morphological correction is a function of the absolute difference |g−b|. In one embodiment, this function may be approximated as a linear function: r=9|g−b|, where b and g are represented as fractions in the range [0, 1].
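The binarize-and-compare procedure above can be sketched briefly. This is an illustrative sketch only: the function name, the fixed threshold of 0.5, and the convention that the reference gray value g is expressed as the fraction of black ink are assumptions, not part of the patent text.

```python
import numpy as np

def shape_deformation_measure(aligned, g, threshold=0.5):
    """Tone-based distortion measure (a sketch of step 300).

    aligned: 2-D array of gray values in [0, 1], with black = 0.
    g:       reference average gray value of the original pattern,
             expressed here as the fraction of black ink (an assumed
             convention; the text leaves the gray scale unspecified).
    Returns the required morphological operation on the black dots and
    the correction radius r = 9|g - b| per the linear approximation.
    """
    b = float((aligned < threshold).mean())   # relative area of black part
    op = "dilate" if b < g else "erode"       # grow or shrink the black dots
    r = 9.0 * abs(g - b)
    return op, r
```

For example, an image whose binarized black area is 0.3 against a reference gray value of 0.4 calls for dilating the black dots with radius 9 × 0.1 = 0.9.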
In an alternative exemplary implementation of step 300, an average approach performs the following steps. First, single dots in the aligned pattern are located. Second, information, such as an average gray level or dot size, is received. Next, the radius of the single dots in the aligned pattern is then compared to the information, and a shape deformation measure is generated based on this comparison. Alternatively, the radius of a single white dot may be compared to the radius of a single black dot in the aligned image, and a shape deformation measure may be generated based on this comparison without using information from the original pattern or other reference information (e.g., average gray value or dot size).
This approach first applies a super-resolution edge-detection method, which is well known to those of ordinary skill in the art, to the deformed dot shapes. Next, horizontal and vertical black runs are measured. Runs originating in n dots measure n·R+2r, where R is the dot radius and r the deformation radius. The deformation radius r is then calculated as the radius that gives the best robust square fit of the measurements to the above model. For an example of this approach, see Carl Staelin and Larry McVoy, "mhz: Anatomy of a micro-benchmark," in Proceedings of the USENIX 1998 Annual Technical Conference, p. 155, New Orleans, La., June 1998.
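Fitting r to the run model n·R+2r can be sketched as follows. The text calls for an unspecified robust square fit; a median of per-run estimates is substituted here as one simple robust choice, and the function name is hypothetical.

```python
import numpy as np

def estimate_deformation_radius(run_lengths, dot_counts, R):
    """Fit the deformation radius r to the run model L = n*R + 2*r.

    run_lengths: measured lengths of horizontal/vertical black runs.
    dot_counts:  number of dots n each run originates from.
    R:           nominal dot radius.
    A median over per-run estimates stands in for the robust square
    fit mentioned in the text (an assumption of this sketch).
    """
    L = np.asarray(run_lengths, dtype=float)
    n = np.asarray(dot_counts, dtype=float)
    per_run_r = (L - n * R) / 2.0   # solve each measurement L = n*R + 2*r
    return float(np.median(per_run_r))
```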
In step 304, the dot size modification unit 44 generates a shape-corrected image 47 based on the aligned image 15 and the distortion measure 41. Preferably, the dot size modification unit 44 selectively modifies the size of the dots to compensate for the respective expansion or shrinkage thereof caused by the shape deformation. Step 304 may be implemented by utilizing at least one morphological operation that either erodes or dilates a dot pattern with a specified shape deformation measure (e.g., a specified radius). It is noted that the dot size modification unit 44 is not limited to a module that changes the size of dots in an image, but instead can be any unit that compensates for shape deformation of an image by using a distortion measure.
Referring to FIG. 4, the present invention preferably utilizes a structuring element having a cross shape. The element includes entries: b0 in the center, b1 at coordinates (±1,0) and (0,±1), b2 at coordinates (±2,0) and (0,±2), and so on.
The shape compensation unit 44 generates a corrected image by performing a morphological gray-scale dilation or erosion of the image with the above structuring element. In other words, the shape compensation unit 44 places the structuring element on every pixel in the input image. The value of the dilation at the corresponding pixel is then determined by calculating the minimum of the sums of neighboring pixel values with the structuring-element values placed on them. For erosion one simply takes the maximum of differences:

D_{m,n} = min_{(k,l) ∈ SE} ( I_{m+k,n+l} + S_{k,l} ),   E_{m,n} = max_{(k,l) ∈ SE} ( I_{m+k,n+l} − S_{k,l} ),

where SE is the set of valid structuring-element coordinates, and I_{m,n}, S_{m,n} are image and structuring-element values at coordinates (m,n). (S_{k,l} takes one of the values b0, b1, b2, . . . .)
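A direct, unoptimized sketch of this gray-scale operation follows. Note that "dilation" here grows the black (low-valued) pattern and is therefore a minimum of sums, matching the text's convention. The flat element (all b_i = 0) and replicate border padding are assumptions the text leaves open.

```python
import numpy as np

def cross_element(radius):
    """Cross-shaped structuring element: offsets along the axes out to
    `radius`, each paired with its value b_i (flat here: all zeros)."""
    offs = [(0, 0)]
    for i in range(1, radius + 1):
        offs += [(i, 0), (-i, 0), (0, i), (0, -i)]
    return [(dk, dl, 0.0) for dk, dl in offs]

def gray_morph(img, se, op="dilate"):
    """Dilation of the black (low-valued) pattern = min of sums;
    erosion = max of differences. Borders use replicate padding."""
    r = max(max(abs(dk), abs(dl)) for dk, dl, _ in se)
    pad = np.pad(img.astype(float), r, mode="edge")
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape
    for m in range(H):
        for n in range(W):
            if op == "dilate":
                out[m, n] = min(pad[m + r + dk, n + r + dl] + b
                                for dk, dl, b in se)
            else:
                out[m, n] = max(pad[m + r + dk, n + r + dl] - b
                                for dk, dl, b in se)
    return out
```

With a flat element, eroding a dilated pattern by the same radius recovers the original single-dot image, which is the sense in which the correction undoes a modeled deformation.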
The values b0, b1, b2, . . . should be chosen such that they perform as a structuring element of a given radius. For integer-valued radii the following can be used:
For non-integer radii, the following transformation can be used:
which was found to give a linear correction in terms of the distortion measure (e.g., deformation radius) as measured by the distortion measure determination unit 40. For example, if an image with a measured deformation radius of r0 is corrected with radius −r1, the subsequent deformation radius will measure r0−r1.
Alternatively, step 304 can be implemented by modifying the predetermined threshold that defines the black pattern in the aligned image. Since the black pattern is obtained from the aligned image by thresholding, the shape compensation unit 44 can modify the threshold, thereby modifying the area of the black pattern.
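One way to realize this alternative is to pick the threshold so that the resulting black area matches the reference gray value. The quantile rule below is an assumption (the text does not specify how the threshold is adjusted), and the function name is hypothetical:

```python
import numpy as np

def black_area_threshold(aligned, g):
    """Pick a binarization threshold so the black area is about g.

    Choosing the g-quantile of the pixel values makes the fraction of
    pixels below the threshold (the black area) approximately g. This
    quantile rule is an assumed mechanism, not the patent's.
    """
    return float(np.quantile(aligned, g))
```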
In step 308, directed edge detection unit 50 detects edges (e.g., horizontal edges and vertical edges). Preferably, the horizontal edges are detected by performing a directed horizontal gradient estimation, and the vertical edges are detected by performing a directed vertical gradient estimation. For example, a forward derivative can be utilized in the horizontal or vertical direction to estimate the directed edges in the respective direction. It is noted that any directed edge detection kernel can be utilized in this step.
Alternatively, the horizontal edges are detected by determining the zero crossings of a directed horizontal Laplacian operation, and the vertical edges by determining the zero crossings of a directed vertical Laplacian operation. The Laplacian operation and the determination of zero crossings are well known to those of ordinary skill in the image processing art and will not be described herein.
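The forward-derivative variant of step 308 can be sketched as below. This is one simple choice of directed edge-detection kernel among the many the text allows; the function name is hypothetical.

```python
import numpy as np

def directed_gradients(img):
    """Directed edge estimates using forward derivatives.

    Returns (horizontal, vertical) gradient images; the last column
    (respectively row) is left zero because the forward difference has
    no successor pixel there.
    """
    img = img.astype(float)
    gh = np.zeros_like(img)
    gv = np.zeros_like(img)
    gh[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal forward derivative
    gv[:-1, :] = img[1:, :] - img[:-1, :]   # vertical forward derivative
    return gh, gv
```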
In step 314, interface detection unit 54 determines the interfaces (e.g., dot-row interfaces and dot-column interfaces) based on edge information 52 (e.g., horizontal edges and vertical edges). When the bar-code covers a relatively small area of the total area (e.g., a corner of a paper), the space deformation can be safely approximated as separable. A separable space deformation is defined as the situation in which the row and column interfaces are aligned with pixel rows and columns, and the deformation is expressed only in the uneven distribution of the interfaces.
At column interfaces there are many transitions between black dots on the right of the interface and white dots on the left, or vice versa. Accordingly, in order to find the column interfaces, the interface detection unit 54 sums the absolute values of the horizontal gradients in each column and locates the columns with high peaks in the gradient sum. To find the row interfaces, the interface detection unit 54 simply transposes the image and performs the same operation described above. Alternatively, the interface detection unit 54 sums the absolute values of the vertical gradients in each row and locates the rows with high peaks in the gradient sum.
In the preferred embodiment, the interface detection unit 54 also provides for outlier interfaces, i.e., interfaces that for some reason have weak gradient activity (e.g., a situation where, in most rows, the dots on both sides of the interface have identical values). First, the interface detection unit 54 determines a range (e.g., a range measured from the last interface) in which to look for the next interface. If no interface is detected in that range, the interface detection unit 54 places the interface a standard dot-size away from the last interface. In this manner, outlier interfaces are handled.
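The summed-gradient peak search together with the outlier fallback can be sketched as below. The peak threshold (`peak_frac`) and the half-dot-size search range are assumed parameters; the text only requires "high peaks" and "a range measured from the last interface."

```python
import numpy as np

def column_interfaces(img, dot_size, peak_frac=0.5):
    """Column-interface detection by summed horizontal gradients.

    img: 2-D gray image (black = low). A column is treated as a peak if
    its gradient sum reaches peak_frac * max (assumed rule). Where no
    peak lies within half a dot-size of the expected position, an
    interface is placed a standard dot-size after the last one, per the
    outlier rule in the text.
    """
    g = np.abs(np.diff(img.astype(float), axis=1)).sum(axis=0)
    if g.max() == 0:
        return []
    peaks = [c for c in range(len(g)) if g[c] >= peak_frac * g.max()]
    interfaces = [peaks[0]]
    expected = peaks[0] + dot_size
    while expected < img.shape[1]:
        near = [p for p in peaks if abs(p - expected) <= dot_size // 2]
        if near:
            interfaces.append(min(near, key=lambda p: abs(p - expected)))
        else:
            interfaces.append(expected)   # outlier: standard dot-size away
        expected = interfaces[-1] + dot_size
    return interfaces
```

Row interfaces follow by transposing the image and calling the same routine.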
Alternatively, the interface detection unit 54 performs the following procedure that is especially useful in applications where the space deformation is distinctly not separable. First, the interface detection unit 54 estimates column interfaces row-by-row. In every row the location of each column interface is determined so as to satisfy one or more consistency requirements.
These consistency requirements can include, but are not limited to: (1) the interface should preferably agree with local large gradient magnitudes; (2) the interface should not deviate much from its location in the previous row; and (3) the interface should form a quasi-uniform pattern with nearby interface locations in the same row.
The following is an exemplary implementation of the above-described alternative embodiment of the interface detection unit 54. First, interfaces are recorded with sub-pixel accuracy. Next, binary gradients are determined by thresholding the aligned image. If a gradient is located within a predetermined pixel distance (e.g., 1.5 pixels) of an interface location in the previous row, the gradient is associated with that interface. Interfaces with no gradient association keep their location from the previous row. Interfaces with multiple gradient associations relate only to the closest gradient, and determine their new location as a weighted average between the gradient location (e.g., weight of 0.3) and their location in the previous row (e.g., weight of 0.7). When all the interfaces for a row have been determined this way, their final location is a weighted average between these locations and the average location of their respective neighbors on the left and right. It is noted that the averaging extent may range up to several interfaces on each side.
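One row of this update can be sketched as follows. The 0.3/0.7 association weights and the 1.5-pixel distance come from the text; the equal self/neighbor weighting in the final smoothing pass, the single-neighbor extent, and the function name are assumptions of this sketch.

```python
def track_interfaces_row(prev, grad_cols, assoc_dist=1.5,
                         w_new=0.3, w_prev=0.7, w_self=0.5):
    """Update sub-pixel column-interface locations for one image row.

    prev:      interface locations (floats) from the previous row.
    grad_cols: column positions of binary gradients found in this row.
    Each interface moves toward its closest associated gradient with
    weight w_new; unassociated interfaces keep their old location. A
    final pass averages each interior interface with the mean of its
    left/right neighbours to keep the pattern quasi-uniform.
    """
    updated = []
    for x in prev:
        near = [g for g in grad_cols if abs(g - x) <= assoc_dist]
        if near:
            g = min(near, key=lambda c: abs(c - x))  # closest gradient only
            updated.append(w_new * g + w_prev * x)
        else:
            updated.append(x)                        # no association: keep
    out = list(updated)
    for i in range(1, len(updated) - 1):
        nbr_mean = 0.5 * (updated[i - 1] + updated[i + 1])
        out[i] = w_self * updated[i] + (1 - w_self) * nbr_mean
    return out
```

For uniformly spaced interfaces the neighbor average coincides with the interface itself, so the smoothing pass only damps local deviations.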
In step 318, the dot alignment unit 58 corrects space deformation in the shape-corrected image 47 based on the interfaces 56. In the preferred implementation, the dot alignment unit 58 virtually aligns the dots by augmenting the aligned image with a list of dot centers. The list describes a potentially non-uniform square grid of dot centers. The dot alignment unit 58 computes dot center coordinates as the center of the interface coordinates on both sides.
Alternatively, the dot alignment unit 58 can compose a new image from sub-images cropped around located dot centers. In this manner, dot centers in the shape-corrected image 47 are located on a uniform square grid, and each dot covers a square patch of pixels around it. This square patch is cropped out of the respective dot location in the shape-corrected image 47. In this approach, there are pixels in the shape-corrected image 47 that will not be found in the corrected image 17, and other pixels that are copied to several locations in the corrected image 17.
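The patch-cropping alternative can be sketched as below; the function name and the centered square-patch indexing are illustrative assumptions (centers are assumed far enough from the borders that each patch lies inside the image).

```python
import numpy as np

def realign_dots(img, centers, dot_size):
    """Compose a corrected image from patches cropped around dot centers.

    centers: nested list (dot rows x dot columns) of (y, x) dot-center
    coordinates in `img`. Each dot_size x dot_size patch is pasted onto
    a uniform square grid; as the text notes, some source pixels may be
    dropped and others copied to several output locations.
    """
    rows, cols = len(centers), len(centers[0])
    h = dot_size // 2
    out = np.zeros((rows * dot_size, cols * dot_size), dtype=img.dtype)
    for i in range(rows):
        for j in range(cols):
            y, x = centers[i][j]
            patch = img[y - h:y - h + dot_size, x - h:x - h + dot_size]
            out[i * dot_size:(i + 1) * dot_size,
                j * dot_size:(j + 1) * dot_size] = patch
    return out
```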
FIG. 5 illustrates how the present invention corrects shape deformation in an exemplary dot pattern image 500. The original dot pattern image 500 has a plurality of black and white pixels (e.g., black and white squares) that are arranged in rows and columns. After successive copies, the pixels dilate and cross row and column interfaces. For example, a copied image 508 shows how the original pattern is deformed after five copies. It is readily apparent that the white pixels, for example, have dilated to cross row interface 540 after successive copies. The present invention utilizes the shape deformation correction unit 30 to receive the copied image 508 and correct shape deformations to generate a shape corrected image 530. It is noted that corrected image 530 features row interfaces and column interfaces that are better aligned than those of the copied image 508. Also, single dots are easier to determine in the corrected image 530 than in the copied image 508.
FIG. 6 illustrates how the present invention corrects space deformation in an exemplary dot pattern image 600. The dot pattern image 600 has sixteen pixels arranged in rows and columns. In this example, there are two rows, each having eight pixels. A first copy 610 shows dots that have undergone a separable deformation (e.g., all the dots in the first row have been moved an equal amount in the negative y direction). Similarly, all the dots in the second row have undergone a separable deformation in that they have been moved an equal amount in the negative y direction. A second copy 620 shows dots that have undergone a non-separable deformation (e.g., the movement of dots in the first row is independent and different for each dot in the first row). Similarly, the dots in the second row have undergone a non-separable deformation in that the movement of dots in the second row is independent and different for each dot. In either case, the present invention corrects for the space deformation either by measuring the coordinate deformation for each row when the deformation is separable, as with the first copy 610, or by measuring the coordinate deformation for each dot in each row when the deformation is non-separable, as with the second copy 620. The present invention receives either the first copy 610 or the second copy 620 and corrects the space deformations to generate a corrected image 630.
The foregoing description has provided numerous examples of the present invention. It will be appreciated that various modifications and changes may be made thereto without departing from the broader scope of the invention as set forth in the appended claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5249242 *||Dec 23, 1991||Sep 28, 1993||Adobe Systems Incorporated||Method for enhancing raster pixel data|
|US5469267 *||Apr 8, 1994||Nov 21, 1995||The University Of Rochester||Halftone correction system|
|US5477244 *||Apr 4, 1994||Dec 19, 1995||Canon Kabushiki Kaisha||Testing method and apparatus for judging a printing device on the basis of a test pattern recorded on a recording medium by the printing device|
|US5541743 *||May 26, 1995||Jul 30, 1996||Dainippon Screen Mfg. Co., Ltd.||Method of and apparatus for generating halftone image with compensation for different dot gain characteristics|
|US5594815 *||May 22, 1995||Jan 14, 1997||Fast; Bruce B.||OCR image preprocessing method for image enhancement of scanned documents|
|US5694224 *||Dec 8, 1994||Dec 2, 1997||Eastman Kodak Company||Method and apparatus for tone adjustment correction on rendering gray level image data|
|US6128414 *||Sep 29, 1997||Oct 3, 2000||Intermec Ip Corporation||Non-linear image processing and automatic discriminating method and apparatus for images such as images of machine-readable symbols|
|US6282326 *||Dec 14, 1998||Aug 28, 2001||Eastman Kodak Company||Artifact removal technique for skew corrected images|
|US6295386 *||Nov 13, 1998||Sep 25, 2001||Samsung Electronics Co., Ltd.||Apparatus and a method for correcting image errors in a shuttle type of a scanner|
|US6297889 *||Dec 22, 1998||Oct 2, 2001||Xerox Corporation||Logic-based image processing method|
|US6317219 *||Sep 29, 1998||Nov 13, 2001||Samsung Electronics Co., Ltd.||Method and apparatus for compensating for a distortion between blocks of a scanned image|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US6728004 *||May 30, 2001||Apr 27, 2004||Xerox Corporation||Logic-based image processing method|
|US7085430 *||Nov 25, 2002||Aug 1, 2006||Imaging Dynamics Company Ltd.||Correcting geometric distortion in a digitally captured image|
|US7106902||Nov 27, 2002||Sep 12, 2006||Sanyo Electric Co., Ltd.||Personal authentication system and method thereof|
|US7143948 *||Nov 27, 2002||Dec 5, 2006||Sanyo Electric Co., Ltd.||Reading method of the two-dimensional bar code|
|US7463774 *||Jan 7, 2004||Dec 9, 2008||Microsoft Corporation||Global localization by fast image matching|
|US7481369 *||Jan 6, 2005||Jan 27, 2009||Denso Wave Incorporated||Method and apparatus for optically picking up an image of an information code|
|US7557799||Jun 17, 2004||Jul 7, 2009||Avago Technologies Ecbu Ip (Singapore) Pte. Ltd.||System for determining pointer position, movement, and angle|
|US7885465||Sep 16, 2008||Feb 8, 2011||Microsoft Corporation||Document portion identification by fast image mapping|
|US7889395 *||Jun 29, 2007||Feb 15, 2011||Canon Kabushiki Kaisha||Image processing apparatus, image processing method, and program|
|US7924469 *||Jun 29, 2007||Apr 12, 2011||Canon Kabushiki Kaisha||Image processing apparatus, image processing method, and program|
|US8237991 *||Jul 26, 2010||Aug 7, 2012||Canon Kabushiki Kaisha||Image processing apparatus, image processing method, and program|
|US8279179||Jun 3, 2009||Oct 2, 2012||Avago Technologies Ecbu Ip (Singapore) Pte. Ltd.||System for determining pointer position, movement, and angle|
|US8363282||Oct 31, 2003||Jan 29, 2013||Hewlett-Packard Development Company, L.P.||Halftone screening and bitmap based encoding of information in images|
|US8554012 *||Aug 31, 2009||Oct 8, 2013||Pfu Limited||Image processing apparatus and image processing method for correcting distortion in photographed image|
|US8614830||Sep 27, 2004||Dec 24, 2013||Hewlett-Packard Development Company, L.P.||Pixel exposure as a function of subpixels|
|US20010038460 *||May 30, 2001||Nov 8, 2001||Xerox Corporation||Logic-based image processing method|
|US20050147299 *||Jan 7, 2004||Jul 7, 2005||Microsoft Corporation||Global localization by fast image matching|
|US20050167498 *||Jan 6, 2005||Aug 4, 2005||Kunihiko Ito||Method and apparatus for optically picking up an image of an information code|
|US20050275621 *||Jun 14, 2004||Dec 15, 2005||Humanscale Corporation||Ergonomic pointing device|
|US20100135595 *||Aug 31, 2009||Jun 3, 2010||Pfu Limited||Image processing apparatus and image processing method|
|US20100290090 *||Jul 26, 2010||Nov 18, 2010||Canon Kabushiki Kaisha||Image processing apparatus, image processing method, and program|
|US20130120555 *||May 16, 2013||Georgia-Pacific Consumer Products Lp||Methods and Systems Involving Manufacturing Sheet Products|
|CN102265591B||Dec 26, 2008||Dec 18, 2013||富士通株式会社||Image processing system, image processing device, and image processing method|
|EP1553486A1 *||Jan 5, 2005||Jul 13, 2005||Microsoft Corporation||Global localization by fast image matching|
|EP2372996A1 *||Dec 26, 2008||Oct 5, 2011||Fujitsu Limited||Image processing system, image processing device, and image processing method|
|International Classification||G06K5/04, G06T5/00, G06K5/00, G06K19/06|
|Cooperative Classification||G06K5/04, G06K19/06037, G06T5/006, G06K5/00|
|European Classification||G06K5/04, G06K5/00, G06K19/06C3, G06T5/00G|
|Sep 7, 2000||AS||Assignment|
Owner name: HEWLETT-PACKARD COMPANY, COLORADO
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHAKED, DORON;LEVY, AVRAHAM;BAHARAV, IZHAK;REEL/FRAME:011180/0372;SIGNING DATES FROM 20000822 TO 20000830
|Jul 31, 2003||AS||Assignment|
|Feb 12, 2007||FPAY||Fee payment|
Year of fee payment: 4
|Nov 30, 2010||FPAY||Fee payment|
Year of fee payment: 8
|Mar 20, 2015||REMI||Maintenance fee reminder mailed|