US 7170644 B2

Abstract

Methods and systems for dewarping images allow a user to reduce distortion in scanned images. A scanned image is processed using an optics model and an illumination model, and resampled to reduce distortion. The dewarping may be improved through iterative operations in which a model parameter is varied. An optimal image is obtained corresponding to the maximum value of a metric that measures the quality of a resampled image.
Claims (29)

1. A method of reducing distortion in a scanned image having illumination data, comprising:
determining defocus distance data according to the illumination data;
determining dewarping factor data according to the defocus distance data; and
resampling the image according to the dewarping factor data.
2. The method of
3. The method of
4. The method of
Δs = ln(c₁/i)/c₂, where:
Δs is a defocus distance;
i is the illumination data; and
c₁ and c₂ are parameters.
5. The method of
6. The method of
measuring calibration illumination data at a plurality of defocus points from a uniform calibration target; and
determining one or more parameters according to the calibration illumination data.
7. The method of
stepping one of the one or more parameters through a range to obtain metrics; and
determining a desired metric from the metrics.
8. The method of
determining a value of one of the one or more parameters associated with the desired metric;
determining a desired illumination model by using the value of the one of the one or more parameters associated with the desired metric; and
determining the defocus distance data according to the illumination data from the desired illumination model.
9. The method of
a. selecting a first value for one of the one or more parameters as a current value of the one of the one or more parameters;
b. updating the illumination model based on the current values of the one or more parameters;
c. determining the defocus distance data according to the illumination data from the updated illumination model;
d. determining dewarping factor data according to the determined defocus distance data;
e. resampling the image according to the determined dewarping factor data;
f. determining a metric according to the resampled image;
g. changing the current value of one of the one or more parameters; and
h. repeating the steps b–g until the current value of the one of the one or more parameters is no longer within the range.
10. The method of
11. The method of
12. The method of
where:
D is the dewarping factor;
Δs is the defocus distance; and
s is a parameter.
13. The method of
determining whether the parameter is known; and
calibrating the optics model if the parameter is not known.
14. The method of claim 13, wherein calibrating the optics model comprises:
measuring illumination of an object having a known size at two distances; and
determining the parameter according to the measured illumination at the two distances.
15. The method of claim 11, wherein determining the dewarping factor data comprises:
stepping the nominal parameter through a range to obtain metrics; and
determining a desired metric from the metrics.
16. The method of
determining a value of the parameter associated with the desired metric;
determining a desired optics model by using the value of the parameter associated with the desired metric; and
determining the dewarping factor data according to the defocus distance data from the desired optics model.
17. The method of
a. selecting a first value for the parameter as a current value of the parameter;
b. updating the optics model based on the current value of the parameter;
c. determining the dewarping factor data according to the defocus distance data from the updated optics model;
d. resampling the image according to the determined dewarping factor;
e. determining a metric according to the resampled image;
f. changing the current value of the parameter; and
g. repeating steps b–f until the current value of the parameter is no longer within the range.
18. The method of
19. The method of
20. The method of
F(x) = exp(x/a)/b, where:
F(x) is the dewarping factor;
x is a pixel position; and
a and b are parameters.
21. The method of
stepping one of the one or more parameters through a range to obtain metrics; and
determining a desired metric from the metrics.
22. The method of
determining a value of one of the one or more parameters associated with the desired metric;
determining a desired page curvature model by using the value of the one of the one or more parameters associated with the desired metric; and
determining the dewarping factor data according to the defocus distance data using the desired page curvature model.
23. The method of
a. selecting a first value for one of the one or more parameters as a current value of the one of the one or more parameters;
b. updating the page curvature model based on the current values of the one or more parameters;
c. determining the dewarping factor data according to the defocus distance data from the updated page curvature model;
d. resampling the image according to the determined dewarping factor data;
e. determining a metric according to the resampled image;
f. changing the current value of one of the one or more parameters; and
g. repeating steps b–f until the current value of the one of the one or more parameters is no longer within the range.
24. The method of
25. A system for dewarping an image having illumination data, comprising:
a controller;
a database;
an optics model circuit, routine or application;
an illumination model circuit, routine or application; and
an image-resampling circuit, routine or application;
wherein:
the illumination model circuit, routine or application determines defocus distance data from the illumination data,
the optics model circuit, routine or application determines dewarping factor data based on the determined defocus distance data, and
the image-resampling circuit, routine or application resamples the image based on the determined dewarping factor data.
26. The system of
27. The system of
28. The system of
29. The system of
a metric measuring circuit, routine or application; and
an optimization circuit, routine or application;
wherein:
the optimization circuit, routine or application steps a parameter of a model through a range, and updates the model based on a current value of the parameter at each step,
the metric measuring circuit, routine or application determines metrics for resampled images, each resampled image resampled based on the updated model at each step, and
the optimization circuit, routine or application further identifies a desired metric from the determined metrics.
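As an illustrative sketch only, the three-step method of claim 1 can be expressed in code. The sketch assumes the exponential illumination model of claim 4, a hypothetical linear optics relation D = 1 + Δs/s standing in for the optics-model equation of claim 12 (which appears only as an image in the source), and simple nearest-neighbor resampling; all names and default parameter values are assumptions.

```python
import math

def defocus_from_illumination(i, c1=1.0, c2=0.05):
    """Illumination model of claim 4: delta_s = ln(c1 / i) / c2.
    The default values of c1 and c2 are illustrative only."""
    return math.log(c1 / i) / c2

def dewarp_factor(delta_s, s=300.0):
    """Hypothetical optics relation standing in for claim 12's equation:
    the magnification correction grows with the defocus distance."""
    return 1.0 + delta_s / s

def resample_row(row, factors):
    """Nearest-neighbor resampling of one pixel row: each column is
    stretched by its per-column dewarping factor."""
    out = []
    pos = 0.0
    for pixel, d in zip(row, factors):
        pos += d
        while len(out) < round(pos):
            out.append(pixel)
    return out
```

A column far from the binding has illumination i ≈ c₁, hence Δs ≈ 0 and a dewarping factor of 1 (no stretching); darkened columns near the binding receive factors greater than 1 and are widened.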
Description

1. Field of Invention

This invention relates to methods and systems for reducing distortion in scanned images.

2. Description of Related Art

Distortion in scanned images is well known. For example, when book pages are scanned, the resulting digital images usually contain some geometric distortion or warping. This distortion is caused by book pages not being in uniform intimate contact with the scanning surface or platen surface of a scanner. For example, portions of book pages that are near the binding of the book are usually the portions that are not in intimate contact with the platen surface. Accordingly, distortion occurs in image parts corresponding to these portions.

Specialized scanners with height sensors have been suggested to achieve distortion-free scanning. For example, U.S. Pat. No. 5,659,404 discloses an image reading apparatus which reads the document surface of an open-book-like document placed on a document platen in a face-upward condition and detects the height of the document. U.S. Pat. Nos. 5,855,926, 5,969,829, 6,014,470 and 6,041,146 disclose similar apparatus. These apparatus are suggested largely to prevent image distortion from occurring.

Correction methods have also been developed to correct skew in scanned images with distortion. For example, U.S. Pat. No. 5,497,236 discloses a method and apparatus for correcting for splay. U.S. Pat. No. 5,187,753 discloses a method and apparatus for identifying and correcting for document skew.

However, the correction methods mentioned above rely on estimates of text rows as an image texture. The inventors have determined that such reliance is problematic because text is not always present throughout a page of interest, text-row methods have limitations in accuracy, and text-row methods can be computationally costly. Accordingly, there exists a need for methods that use other image features to compensate for and/or reduce such geometric distortions.
This invention provides systems and methods that allow a user to reduce distortion in scanned images. This invention separately provides systems and methods that allow a user to use an optical model to reduce distortion in scanned images. This invention additionally provides systems and methods that allow a user to use an illumination model to reduce distortion in scanned images. This invention separately provides systems and methods that allow a user to use a page curvature model to reduce distortion in scanned images. This invention separately provides systems and methods that allow a user to use an iterative process to reduce distortion in scanned images. This invention separately provides systems and methods that allow a user to use a metric from a resampled image to reduce distortion in scanned images. This invention additionally provides systems and methods that allow a user to use a metric from a resampled image in dewarping a distorted image. Various exemplary embodiments of systems and methods according to this invention reduce distortion in scanned images. The systems and methods of the invention may be used in any of a wide variety of devices, including, for example, copiers, scanners and digital cameras. In various exemplary embodiments of systems and methods according to this invention, an image to be dewarped is obtained so that a user may obtain illumination data as a function of pixel position. Further, the user may obtain defocus data as a function of pixel position, according to an illumination model, and dewarping factor data as a function of pixel position, according to an optics model. Thereafter, the user may resample the image according to the dewarping factor data to reduce distortion in the image. 
In various exemplary embodiments of systems and methods according to this invention, the user may improve the reduction of distortion in the scanned image by varying one or more model parameters, iterating the dewarping process according to the varied one or more model parameters, measuring a quality metric of the resampled image after each dewarping process, and obtaining a desired resampled image according to an appropriate metric.

These and other features and advantages of this invention are described in, or are apparent from, the following detailed description of various exemplary embodiments of the systems and methods according to this invention.

Various exemplary embodiments of the systems and methods of this invention will be described in detail, with reference to the following figures, wherein:

When a portion of the page, such as a portion near a binding of a book, is displaced from the platen surface Accordingly, the magnification ratio M
(Eqs. (5) through (11), which set out the geometric relations of the optical system, appear only as images in the original and are not reproduced here.) Therefore:
In view of Eqs. (4), (6), (8) and (9), Eq. (11) leads to:
Similarly, in In view of Eqs. (5), (7), (8) and (10), the following can be derived from Eq. (13):
Furthermore, combining Eqs. (3), (12) and (14) provides:
In Eq. (15), s is a parameter of the optical system. Eq. (15) may be used to reduce distortion and/or to dewarp distorted images. Because a distortion in the scanned image involves varying or nonuniform magnification of the width of a page across the length of that page due to a changing defocus distance Δs, once the defocus distance Δs is known for any given region of the page, the ratio of the widths Y′

The inventors have discovered that the defocus distance Δs can be estimated from a measurement of a feature, such as illumination or reflectance, in the scanned image. For example, an illumination model may be employed to obtain a relationship between an illumination i and the defocus distance Δs. In one exemplary embodiment of an illumination model according to this invention, the illumination i falls off exponentially as the defocus distance Δs increases, as described below. In various exemplary embodiments, before use, the illumination model is calibrated and/or established.

In a book binding setting, there may be several parameters that affect illumination level, the most important being distance, angle, geometry of the gap, and proximity to the book edge. For a reasonable approximation, the inventors have found that illumination level is primarily controlled by the distance. The inventors have found a strong correlation between defocus distance Δs and illumination level. Thus, it is reasonable to construct a model of defocus distance versus illumination level. Accordingly, in various exemplary embodiments of methods and systems according to this invention, the illumination model uses the defocus distance Δs as the only factor that determines illumination variation. However, it should be appreciated that the other parameters could be used in a model, especially under certain circumstances where illumination level alone is insufficient.
In various exemplary embodiments of methods and systems according to this invention, the relationship between the illumination i and the defocus distance Δs may be modeled. For example, one exemplary embodiment of the illumination model takes the form of an exponential equation:

Δs = ln(c₁/i)/c₂  (16)

where:
Δs is the defocus distance; i is the illumination; and c₁ and c₂ are constants. The constants c₁ and c₂ may be determined by calibration.

For ease of description, it is assumed that an image is scanned using a conventional document scanner with a flat glass platen, with conventional optics and an image sensor, such as a continuous scan charge coupled device (CCD) array. Also, it is assumed that a book is scanned with the binding parallel to one dimension of the sensor array. However, it should be appreciated that a book can be placed in any other orientation and scanned in different ways without departing from the scope and spirit of this invention. Furthermore, the pages of a book can be scanned in different orders. The pages can be scanned one after another, one page at a time. The pages can also be scanned such that two facing pages are scanned together, resulting in two scanned images, one for each of the two facing pages. In this situation, the left and right scanned images are treated separately, and the two scanned images can be segmented from the single set of scanned illumination values by splitting that set into two images according to a dark column, corresponding to the binding, between the two images.

Once the scanned image is obtained, illumination data can be measured to obtain the relationship between illumination and defocus distance, such as the relationship expressed in Eq. (16). In one exemplary embodiment, an image is viewed as an array of pixels, with each column parallel to the binding and each row perpendicular to the binding. In various exemplary embodiments, for each pixel column, illumination data can be measured as disclosed in U.S. patent application Ser. No. 09/750,568, which is incorporated herein by reference in its entirety. In the incorporated 568 application, foreground pixels and background pixels are defined for a scanned image.
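The calibration of the constants c₁ and c₂ can be sketched as a least-squares fit. Since Eq. (16), Δs = ln(c₁/i)/c₂, is equivalent to ln(i) = ln(c₁) − c₂·Δs, the constants follow from a straight-line fit through (Δs, ln i) pairs measured from a uniform calibration target at known defocus heights; the numeric values below are synthetic, for illustration only.

```python
import math

def fit_illumination_model(pairs):
    """Fit (c1, c2) of Eq. (16) by a least-squares line through
    (delta_s, ln i): slope = -c2, intercept = ln c1."""
    n = len(pairs)
    xs = [ds for ds, _ in pairs]
    ys = [math.log(i) for _, i in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope  # (c1, c2)

# Synthetic check: data generated with c1 = 0.9, c2 = 0.04.
pairs = [(ds, 0.9 * math.exp(-0.04 * ds)) for ds in (0, 5, 10, 20)]
c1, c2 = fit_illumination_model(pairs)
```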
For a scanned text image with black text lines and white background, for example, the foreground pixels are pixels associated with the black text lines and the background pixels are pixels associated with the white background margins. The illumination data of the foreground pixels and the background pixels are measured. Also, the distribution of the foreground pixel illumination data and the distribution of the background pixel illumination data are used to determine illumination values of pixels corresponding to regions of the input image that are in intimate contact with the platen surface and illumination values of darkened pixels corresponding to regions of the input image that are near the binding of the page carrying the input image. Furthermore, illumination compensation values are derived, and image pixel data is compensated or scaled to lighten up the darkened pixels corresponding to regions of the input image that are near the binding of the page carrying the input image. The illumination measuring method disclosed in the incorporated 568 application is used to establish a relationship between the illumination i and pixel position x. The pixel position x is defined as the distance between a pixel column and the left-hand side boundary of the scanned image. The pixel position x increases as it moves away from the left-hand side boundary and approaches the “binding” that appears in the scanned image. The method of lightening the darkened portions disclosed in the incorporated 568 application may also be used, as will be discussed in greater detail below. From the relationship between the illumination values i and the pixel position x shown in The estimated defocus distance Δs at each pixel position, as shown in In various exemplary embodiments of methods and systems according to this invention, when the scanned image is locally scaled and resampled, each pixel in the scanned image is magnified according to the dewarping factor. 
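The compensation that lightens darkened pixels near the binding can be sketched as a per-column gain. The simple ratio rule and the names below are assumptions for illustration, not the exact procedure of the incorporated 568 application.

```python
def compensate_columns(image, column_illum, flat_level):
    """Scale each pixel column of a grayscale image so that its measured
    background illumination matches the flat-contact level.

    image: list of rows of gray values; column_illum: per-column
    background illumination; flat_level: illumination of regions in
    intimate contact with the platen."""
    gains = [flat_level / i for i in column_illum]
    return [[min(255, round(p * gains[c])) for c, p in enumerate(row)]
            for row in image]
```

A column whose background is half the flat-contact level receives a gain of 2, lightening its pixels back toward the nominal white point.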
The magnification is two-dimensional, and the pixel is magnified both in the direction that is parallel to the book binding, and in the direction that is perpendicular to the book binding. Accordingly, the resampling stretches and widens the scanned image in the direction that is perpendicular to the book binding. At the same time, the resampling straightens the text lines near the book binding in the direction that is parallel to the book binding. In various exemplary embodiments, the relationship between dewarping factor D and the pixel position x shown in x is the pixel position; and a and b are parameters that can be determined from data to create an exponential fit to the dewarping factor value D shown in Using the page curvature model F(x) shown in Eq. (18) is optional. The page curvature model is used when needed to represent a smooth, monotonic behavior expected from page curvature and/or when the parameters a and/or b are used in optimization, which will be disclosed in greater detail below. Nevertheless, determining the dewarping factor D used in resampling a scanned image can be directly made from Although the dewarping procedure described above is presented in separate steps for clarity of description, it should be appreciated that these steps can be combined without departing from the scope and spirit of this invention. In one exemplary embodiment of the systems and methods according to this invention, a dewarping process is improved by optimization. In this embodiment, a metric is defined to measure the quality of a resampled/dewarped image. An iterative or parallel process is used to maximize the quality metric. In such an optimization process, one or more model parameters are chosen. In various exemplary embodiments, the parameter a of the page curvature model of Eq. 
(18) may be chosen; the parameter b of the page curvature model may be chosen; and/or other parameters, such as the parameters c₁ and c₂ of the illumination model, may be chosen.

As used herein, the terms “optimize”, “optimal” and “optimization” connote a condition where one entity is deemed better than another entity because the difference between the two entities is greater than a desired difference. It should be appreciated that it would always be possible to determine a better entity as the desired difference decreases toward zero. Also, the terms “maximize”, “maximum” and “maximization” connote a condition where one entity is deemed greater than another entity because the difference between the two entities is greater than a desired difference. It should be appreciated that it would always be possible to determine a greater entity as the desired difference decreases toward zero. Similarly, the terms “minimize” and “minimal” connote a condition where one entity is deemed less than another entity because the difference between the two entities is greater than a desired difference. Again, it should be appreciated that it would always be possible to determine a lesser entity as the desired difference approaches zero. Accordingly, it should be appreciated that these terms are not intended to describe an ultimate or absolute condition. Rather, these terms are intended to describe a condition that is relative to a desired level of accuracy represented by the magnitude of the desired difference between two or more entities.

In various embodiments of systems and methods according to this invention, when approaching a result that is optimal, it is satisfactory to stop at a desired result, without having to reach the optimal result. Similarly, when approaching a result that is maximum, it is satisfactory to stop at a desired result, without having to reach the maximum result.
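Fitting the page curvature model of Eq. (18), F(x) = exp(x/a)/b, to measured per-column dewarping factors reduces to another straight-line fit, since ln F(x) = x/a − ln(b) is linear in the pixel position x. A sketch, with synthetic data generated from a = 500 and b = 1 for illustration:

```python
import math

def fit_page_curvature(xs, factors):
    """Least-squares fit of ln F = x/a - ln b over pixel positions xs;
    returns the parameters (a, b) of Eq. (18)."""
    ys = [math.log(f) for f in factors]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return 1.0 / slope, math.exp(slope * mx - my)  # (a, b)

# Synthetic check: dewarping factors generated with a = 500, b = 1.
xs = [0, 100, 200, 400]
a, b = fit_page_curvature(xs, [math.exp(x / 500.0) for x in xs])
```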
A parameter chosen for optimization may be given a range and a step size. The parameter is stepped through the range based on the step size. For each step, a dewarping process is completed, a resampled image is generated and a metric that measures the quality of the resampled image is determined. The process continues until the metric is maximized.

For example, when the parameter a of Eq. (18) is chosen for optimization, the parameter a is varied within a predetermined range of values with a predefined step size. For each value of the parameter a, the corresponding value of the parameter b is determined, as is done in the process of fitting Eq. (18). A dewarping process is completed for this pair of parameter values for a and b, producing a resampled image. A metric is determined for this resampled image resulting from this pair of parameter values for the parameters a and b. Then, a next pair of parameter values for the parameters a and b is used, a new resampled image is produced and another metric is determined. The process continues until the metric is maximized. The pair of parameter values for the parameters a and b associated with the maximum metric is used in a final dewarping process, producing a final resampled image as an optimal result.

In various exemplary embodiments, the metric is defined in a way similar to that disclosed in U.S. Pat. No. 5,187,753, which is incorporated herein by reference in its entirety. In the 753 patent, skew, rather than distortion, is determined. In the process disclosed in the 753 patent, the number of black/dark pixels (ON pixels) is summed for each row of pixels. Then, a variance in the number of black pixels among the lines (rows) is determined. Different variances are determined for different angles of image orientation. Skew is identified when the variance is a maximum. In various exemplary embodiments according to this invention, the concept of variance is used as a metric in optimization.
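The iterative optimization described above can be sketched as a loop that steps the chosen parameter through its range and keeps the value with the maximum metric. The functions dewarp_with and metric are hypothetical stand-ins for the dewarping and quality-measurement steps:

```python
def optimize_parameter(image, a_range, step, dewarp_with, metric):
    """Step a model parameter from a_range[0] to a_range[1] by `step`,
    dewarp the image at each value, score the result with `metric`,
    and return (best parameter value, best resampled image)."""
    best_a, best_score, best_image = None, float("-inf"), None
    a = a_range[0]
    while a <= a_range[1]:
        resampled = dewarp_with(image, a)
        score = metric(resampled)
        if score > best_score:
            best_a, best_score, best_image = a, score, resampled
        a += step
    return best_a, best_image
```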
Similar to what is disclosed in the 753 patent, the number of ON pixels is summed for each row of pixels, and a variance among the row sums is determined. However, different from the disclosure of the 753 patent, different variances are obtained for different values of the particular parameters chosen for optimization, not for different angles. In various exemplary embodiments, a chosen parameter for optimization may be stepped through the whole span of a predefined range to locate the maximum variance. The chosen parameter may also be stepped through only part of the span of the predetermined range. In this case, stepping the value of the chosen parameter stops when a maximum variance is identified, without having to step through the remainder of the span of the predetermined range. In this situation, the maximum variance can be identified when the variance corresponding to a current step is less, by a predetermined amount, than the variance corresponding to a previous step obtained when stepping through the span of the predetermined range. Of course, it should be appreciated that this assumes that there is only one maximum variance. In various exemplary embodiments, only top and bottom portions of the scanned image are used in determining the metric.

(The flowchart step descriptions and the figure-keyed description of the I/O interface, the control and/or data buses, and the application programming interfaces appear only in the figures of the original and are not reproduced in this text.)

It should also be appreciated that, while the electronic image data can be generated at the time of printing an image from an original physical document, the electronic image data could have been generated at any time in the past.
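The row-sum variance metric adapted from the 753 patent can be sketched as follows: ON pixels are summed for each pixel row of a binarized image, and the variance of the row sums is returned. Straight, well-dewarped text lines concentrate ON pixels into few rows and yield a high variance; warped lines smear ON pixels across rows and yield a low variance.

```python
def row_variance_metric(binary_image):
    """binary_image: rows of 0/1 values (1 = ON pixel).
    Returns the population variance of per-row ON-pixel counts."""
    sums = [sum(row) for row in binary_image]
    n = len(sums)
    mean = sum(sums) / n
    return sum((s - mean) ** 2 for s in sums) / n
```

For a two-row toy image, a fully dark row above a fully white row scores higher than the same ON pixels split evenly between the rows.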
(The figure-keyed description of an exemplary operation of the dewarping system — the image source, the controller, and the illumination model, page curvature model, optics model and optimization circuits, routines or applications — appears only in the figures of the original and is not reproduced in this text.)

While this invention has been described in conjunction with the exemplary embodiments outlined above, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the exemplary embodiments of the invention, as set forth above, are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the invention.