WO2002067568A1 - A system and method for the dynamic thresholding of grayscale image data - Google Patents

Info

Publication number
WO2002067568A1
WO2002067568A1 PCT/US2002/004559
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
average
image data
result
focus
Prior art date
Application number
PCT/US2002/004559
Other languages
French (fr)
Inventor
Richard G. Haltmaier
Original Assignee
Oak Technology, Inc.
Priority date
Filing date
Publication date
Application filed by Oak Technology, Inc. filed Critical Oak Technology, Inc.
Publication of WO2002067568A1 publication Critical patent/WO2002067568A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40 Picture signal circuits
    • H04N1/40062 Discrimination between different image types, e.g. two-tone, continuous tone
    • H04N1/405 Halftoning, i.e. converting the picture signal of a continuous-tone original into a corresponding signal showing only two levels
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/16 Image preprocessing
    • G06V30/162 Quantising the image signal


Abstract

A method for conversion of grayscale images with a non-white background into halftone images is disclosed. The illustrated embodiment of the present invention addresses the difficulties current conversion methods have in converting multi-bit grayscale images with non-white backgrounds into single-bit halftone images. Through manipulation of the grayscale image data, the edges of text and line art features in the grayscale image data are located and extracted from the non-white background by performing calculations on subsets of pixels in horizontal rows (4) and vertical columns (6) centered around a focus pixel. The extracted edge information is used to determine a weight factor to drive a threshold conversion algorithm (16), together with running horizontal (10), vertical (12), and local (14) pixel averages, resulting in a more defined halftone image output (22).

Description

A SYSTEM AND METHOD FOR THE DYNAMIC THRESHOLDING OF GRAYSCALE IMAGE DATA
Related Application
The current application claims priority from U.S. Application Serial No.
09/785,900, entitled A SYSTEM AND METHOD FOR THE DYNAMIC
THRESHOLDING OF GRAYSCALE IMAGE DATA, which was filed on 16 February
2001, names the same inventor and the same assignee as this application, and is incorporated by reference herein.
Technical Field
This invention relates generally to the processing of grayscale image data. More particularly, this invention relates to the conversion of grayscale image data containing text and line art features on a non-white background into a halftone image.
Background of the Invention
Grayscale images use many shades of gray to represent an image. Halftone images, in contrast, typically use black and white dots to form an image. The pattern and density of the black and white dots are varied to represent different shades. Problems arise when converting a grayscale image into a halftone image when the grayscale image background is a non-white color or the foreground color is not black. Current methods of converting grayscale image data into halftone image data result in the text and line art features of a grayscale image on a non-white background being obscured when converted into a halftone image. When converting a grayscale image which contains many different shades of gray, the conversion algorithm must determine whether to fill the corresponding position on the halftone image being created with a black dot or a white space. Current algorithms used in the conversion process have difficulty picking out the edges of text and line art features in grayscale images from backgrounds which are not white, resulting in a lack of definition of the text and line art features in the new halftone image.
Summary of the Invention
The present invention addresses problems caused by the current methods of converting grayscale image data into halftone images when the grayscale image data has a non-white background and contains text and line art features. One embodiment of the invention utilizes a conversion algorithm which uses a combination of pixel averages formed from different subsets of pixel values of pixels located within the grayscale image data. The different pixel averages are weighted differently depending on the particular characteristics of the area of the grayscale image data being converted to halftone image data. Pixels are examined one at a time in the grayscale image. The pixel being examined is the focus pixel. Running averages are maintained for both the average pixel value of pixels located in the row of pixels containing the focus pixel and the average pixel value of pixels located in the column of pixels containing the focus pixel. Additionally, a pixel window is superimposed over the grayscale image area near the focus pixel, and the average pixel value of the pixels in that pixel window is tracked. The conversion algorithm examines the area near the focus pixel in the grayscale image data for edges indicative of text and line art features and adjusts the weight given to the different averages depending upon whether or not an edge indicating a text or line art feature was found in the grayscale image data. If an edge is detected in the grayscale image, the conversion algorithm gives more weight to the local pixel average. If, on the other hand, the presence of an edge is not detected in the area of the focus pixel, more weight is given in the conversion algorithm to the running horizontal and vertical averages.
After assigning the proper weight to the averages, the conversion algorithm produces a threshold value which is compared against the pixel value and used to determine whether that pixel will be depicted as a 1 (a black dot) or a 0 (a white space) in the halftone image. By shifting the weight given to the various pixel averages depending upon the presence or absence of an edge, the conversion algorithm is better able to isolate text and line art features from a non-white background in a grayscale image.
Brief Description of the Drawings
Figure 1 is a flow chart of the steps taken by an illustrated embodiment of the present invention to produce a halftone pixel value from a grayscale pixel value; Figure 2 is a block diagram depicting the calculating of the horizontal and vertical running averages used by the illustrated embodiment;
Figure 3 is a block diagram depicting the calculation of the local pixel average used by the illustrated embodiment;
Figure 4A is a block diagram of the horizontal edge correlator used in the illustrated embodiment of the present invention; and
Figure 4B is a depiction of the vertical edge correlator used by the illustrated embodiment of the present invention.
Detailed Description of the Invention
The illustrated embodiment of the present invention addresses the problems encountered when converting grayscale images into halftone images. In particular, the illustrated embodiment addresses the difficulties encountered in converting grayscale images which contain line art and text features on a non-white background into halftone images. Current conversion methods have difficulty converting multi-bit grayscale values into single-bit halftone values when the background of the image is non-white because their algorithms have difficulty extracting the edges of the text and line art features from the non-white background. The conversion process of the illustrated embodiment tracks continuous horizontal and vertical running pixel value averages, which are aggregated into a single average, and also tracks a separate local pixel value average. A weight-driven conversion algorithm is used which takes the aggregate average and the local pixel average into account, thereby enabling more efficient extraction of text and line art features from the non-white background of a grayscale image.
Figure 1 depicts the steps followed by the conversion algorithm of the illustrated embodiment to convert a multi-bit grayscale pixel value into a single-bit halftone pixel value of either 1 or 0. A halftone value of 1 corresponds to a black pixel color, and a halftone value of 0 corresponds to a white pixel color. The conversion algorithm begins by selecting a pixel to examine in the grayscale image data, a focus pixel (step 2). The pixels immediately adjacent to the focus pixel are examined to see if their pixel values indicate the presence of an edge in the grayscale image data horizontally oriented to the focus pixel (step 4). Similarly, the pixels in the grayscale image data immediately adjacent to the focus pixel are examined to see if their pixel values indicate the presence of an edge vertically oriented to the focus pixel (step 6). These examinations utilize horizontal and vertical edge correlator algorithms which are explained more fully below.
The outputs from the horizontal edge examination (step 4) and the vertical edge examination (step 6) are added together and then divided by a normalizing factor such that the division always yields a result between 0 and 1 (step 8). This result is utilized as a weight factor by the conversion algorithm. The conversion algorithm uses the weight factor to shift the level of emphasis on the different pixel averages it employs.
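The normalization of step 8 can be sketched as follows. This is an illustrative reading, not the patented implementation: the patent specifies only that the result must fall between 0 and 1, so the choice of normalizing factor, the use of absolute values (a correlator difference can be negative), and the final clamp are all assumptions.

```python
def weight_factor(h_edge, v_edge, norm=1530.0):
    """Combine the horizontal and vertical edge correlator outputs
    into a single weight in [0, 1] (step 8).

    `norm` is a hypothetical normalizing factor: for 8-bit pixels a
    three-pixel correlator is bounded by 3 * 255 = 765 per direction,
    so the two absolute results together cannot exceed 1530. The
    clamp guards against any other choice of `norm`."""
    w = (abs(h_edge) + abs(v_edge)) / norm
    return max(0.0, min(1.0, w))
```

With this choice of `norm`, a flat region yields a weight of 0 and a maximal edge in both directions yields a weight of 1, matching the patent's requirement that the weight range from zero to one.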
The conversion algorithm next determines three pixel value averages from subsets of pixels located within the grayscale image data. A horizontal pixel average is determined by averaging the pixel values of pixels located in the grayscale image data on the same horizontal row as the focus pixel (step 10). A vertical pixel average is determined by averaging the pixel values of pixels located in the grayscale image data in the same vertical column as the focus pixel (step 12). A local pixel average is also determined, consisting of the average pixel value of the pixels located in a pixel window centered on the focus pixel (step 14). In one embodiment of the present invention, such a pixel window is 5 pixels by 5 pixels. The 25 individual pixel values are added together by the conversion algorithm and divided by 25 to arrive at the local pixel average.
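The three averages of steps 10 through 14 can be sketched as follows. Two caveats: the patent maintains running averages, whereas this sketch recomputes full row and column means for clarity, and the patent does not say how the window behaves at image borders, so the clipping here is an assumption.

```python
def pixel_averages(image, row, col, window=5):
    """Return (horizontal, vertical, local) pixel value averages for
    the focus pixel at (row, col) in a 2-D list of grayscale values.

    horizontal: mean of the focus pixel's row (step 10)
    vertical:   mean of the focus pixel's column (step 12)
    local:      mean of a window x window block centered on the focus
                pixel (step 14); clipping the block at the image
                borders is an assumption."""
    height, width = len(image), len(image[0])
    horizontal = sum(image[row]) / width
    vertical = sum(image[r][col] for r in range(height)) / height
    half = window // 2
    rows = range(max(0, row - half), min(height, row + half + 1))
    cols = range(max(0, col - half), min(width, col + half + 1))
    block = [image[r][c] for r in rows for c in cols]
    local = sum(block) / len(block)
    return horizontal, vertical, local
```

For the interior of a large image the local term reduces to the patent's sum of 25 pixel values divided by 25.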
Once the three pixel averages and the weight factor have been calculated, the conversion algorithm calculates a threshold value which is used to decide whether the focus pixel will be represented as a black or a white pixel in the halftone image being created (step 16). The conversion algorithm applies the weight factor (step 8) to the horizontal and vertical pixel averages (steps 10 and 12) and the local pixel average (step 14). Specifically, the conversion algorithm averages the horizontal and vertical running averages, multiplies that result by one minus the weight factor, and adds it to the local pixel average multiplied by the weight factor (both multipliers being numbers between 0 and 1). When the weight factor is large, the first multiplier used with the horizontal and vertical running averages will be smaller than the second multiplier used with the local pixel average. When the weight factor is small, the opposite is true. This calculation gives more weight to the local pixel average if an edge is detected in the immediate vicinity of the focus pixel, and more weight to the horizontal and vertical running averages in the event no edge has been detected. The result of the algorithm is a threshold value (step 18) which is applied to the focus pixel value (step 20) to produce a halftone output (step 22). If the focus pixel value exceeds the threshold value, a halftone output of 1 is produced, indicating a black halftone pixel. If the focus pixel value does not exceed the threshold value, a halftone output of 0 is produced, indicating a white halftone pixel. Every pixel in the grayscale image data is analyzed in this manner.
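The weighted blend and binarization of steps 16 through 22 can be sketched as follows; the function and variable names are illustrative, not drawn from the patent.

```python
def halftone_value(focus, h_avg, v_avg, local_avg, weight):
    """Compute the dynamic threshold and binarize the focus pixel.

    threshold = (1 - weight) * mean(h_avg, v_avg) + weight * local_avg
    Output 1 (black) if the focus pixel value exceeds the threshold,
    otherwise 0 (white), per steps 16 through 22."""
    running = (h_avg + v_avg) / 2.0  # aggregate of the two running averages
    threshold = (1.0 - weight) * running + weight * local_avg
    return 1 if focus > threshold else 0
```

At weight 0 the threshold is purely the aggregated running average; at weight 1 it is purely the local window average, reproducing the shift of emphasis the paragraph above describes.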
Figure 2 depicts a focus pixel 26 located within grayscale image data. The pixels in the same horizontal row 28 as the focus pixel 26 are used to calculate the running pixel value average of the horizontal pixels. Similarly, the pixels in the same vertical column 30 as the focus pixel 26 are used to calculate the pixel value average of the vertical pixels. These averages are given more weight by the conversion algorithm in the illustrated embodiment in the event the examination of the grayscale image data near the focus pixel does not detect the presence of an edge. If the running horizontal and vertical pixel value averages are given more weight by the conversion algorithm, a lower threshold value is generated compared to the occasions when an edge is detected. The lower the threshold value, the higher the likelihood that the focus pixel value exceeds the threshold value and is represented by a black pixel in the halftone image being created.
Figure 3 depicts a focus pixel 34 located within grayscale image data and a pixel window 36 (indicated by darkened lines in Figure 3) superimposed on the grayscale image data and centered on the focus pixel 34. Those skilled in the art will realize that while the depicted pixel window 36 is centered on the focus pixel 34, in other embodiments of the present invention the pixel window may overlay the focus pixel in a non-centered orientation. The pixel window 36 is used to calculate a local average of pixel values. The local average is given more weight by the conversion algorithm in the event the horizontal and vertical edge correlator algorithms indicate the likely presence of an edge in the grayscale image data adjacent to the focus pixel 34 being examined. The edge correlator algorithms are explained more fully below. By giving more weight to the local pixel value average in the presence of an edge, the conversion algorithm generates a threshold value closer to the value of the focus pixel than would be generated by emphasizing the horizontal and vertical running pixel averages. A larger threshold value forces the focus pixel value to be larger in order to generate a dark halftone output value. The presence of the edge values increases the local pixel value average, and the conversion algorithm must adjust accordingly.
Figure 4A depicts the horizontal edge correlator used in the illustrated embodiment of the present invention. The pixel values of pixels adjacent to a focus pixel 38 are examined to evaluate the likely presence of an edge in the grayscale image data horizontally oriented to the focus pixel. The pixel values for pixels 39, 40, and 41 are added together and subtracted from the aggregate of the pixel values for pixels 42, 43, and 44. The result is used to inform the conversion algorithm of the presence of a horizontally oriented edge in the grayscale image data near the focus pixel. In general terms, the larger the result of the horizontal edge correlator, the more likely it is that a horizontally oriented edge is present in the grayscale image data.
Similarly, Figure 4B depicts a vertical edge correlator being applied around a selected focus pixel 46. The pixel values for pixels 50, 51, and 52 are added together to form a first aggregate value, and the pixel values 47, 48, and 49 are added together to form a second aggregate value which is subtracted from the first aggregate value. The result is used to inform the conversion algorithm of a vertically oriented edge in the grayscale image data near the focus pixel. In general terms, the higher the value produced by the computation, the more likely it is that an edge vertically oriented to the focus pixel exists in the grayscale image data.
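Under one plausible reading of Figures 4A and 4B, pixels 39 through 41 occupy the row above the focus pixel and pixels 42 through 44 the row below, and likewise pixels 47 through 49 and 50 through 52 occupy the columns to either side. Since the figures are not reproduced here, those positions are assumptions; the sketch below shows only the shape of the computation.

```python
def horizontal_edge(image, row, col):
    """Three-pixel horizontal edge correlator (Figure 4A): sum of the
    row below the focus pixel minus the sum of the row above. Which
    triplet of reference numerals sits on which side is assumed."""
    above = sum(image[row - 1][col - 1:col + 2])  # assumed pixels 39-41
    below = sum(image[row + 1][col - 1:col + 2])  # assumed pixels 42-44
    return below - above

def vertical_edge(image, row, col):
    """Three-pixel vertical edge correlator (Figure 4B): the first
    aggregate minus the second, with the side assignment assumed."""
    left = sum(image[r][col - 1] for r in range(row - 1, row + 2))
    right = sum(image[r][col + 1] for r in range(row - 1, row + 2))
    return right - left
```

On a uniform region both correlators return zero; a step in brightness across the focus row or column produces a large magnitude, which the weight factor of step 8 then turns into emphasis on the local pixel average.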
It will thus be seen that the invention efficiently attains the objectives stated in the previous description. Since certain changes may be made without departing from the scope of the present invention, it is intended that all matter contained in the above description or shown in the accompanying drawings be interpreted as illustrative and not in a literal sense. Practitioners of the art will realize that the sequence of steps in the conversion algorithm may be altered without departing from the scope of the present invention and that the illustrations contained herein are singular examples of a multitude of possible depictions of the present invention.

Claims

We claim:
1. A method of processing image data containing pixel values for pixels, said pixels being oriented in said image data in horizontal rows and vertical columns, said method comprising the steps of: examining a focus pixel in said image data; performing calculations on pixel values of a subset of pixels in horizontal rows immediately above and below the row of said focus pixel to form a first result, said calculations to detect whether the image region contains an edge; performing calculations on pixel values of a subset of pixels in vertical columns immediately to the left and to the right of the column of said focus pixel to form a second result, said calculations to detect whether the image region contains an edge.
2. The method of claim 1, wherein the method further comprises the steps of: using a pixel window superimposed on the image data and said focus pixel in order to determine a local pixel average; and calculating a pixel value average for the pixels contained within said pixel window to arrive at said local average.
3. The method of claim 2, wherein the method further comprises the steps of: determining a weight factor for use in said method, said weight factor being determined by summing said first result and said second result; and dividing said summing result by a pre-established normalization variable so as to form a third result with a value ranging from zero to one.
4. The method of claim 3, wherein the method further comprises the step of: determining a horizontal pixel average representing the average pixel value for a pixel in the row in which said focus pixel is located.
5. The method of claim 4, wherein the method further comprises the step of: determining a vertical pixel average representing the average pixel value for a pixel in the column of pixels in which said focus pixel is located.
6. The method of claim 5, wherein the method further comprises the step of: performing calculations on said vertical pixel average, said horizontal pixel average and said local pixel average, said calculations using said weight factor to determine a fourth result.
7. The method of claim 6 wherein the method further comprises the step of: performing a calculation with said fourth result and the pixel value of said focus pixel so as to produce a halftone output of zero.
8. The method of claim 6 wherein the method further comprises the step of: performing a calculation with said fourth result and the pixel value of said focus pixel so as to produce a halftone output of one.
9. The method of claim 1, said method comprising the additional step of: examining every pixel in said image data in sequence.
10. The method of claim 9 wherein a halftone value is produced for each pixel in said image data.
11. A medium for use with an electronic device, said medium holding computer- executable instructions for a method, said method comprising the steps of: examining a focus pixel in image data, said image data containing pixel values for pixels, said pixels being oriented in said image data in horizontal rows and vertical columns; performing calculations on pixel values of a subset of pixels in horizontal rows immediately above and below the row of said focus pixel to form a first result, said calculations to detect whether the image region contains an edge; performing calculations on pixel values of a subset of pixels in vertical columns immediately to the left and to the right of the column of said focus pixel to form a second result, said calculations to detect whether the image region contains an edge.
12. The medium of claim 11, wherein said method further comprises the steps of: using a pixel window superimposed on the image data and said focus pixel in order to determine a local pixel average; and summing the pixel values of every pixel in said pixel window and dividing the sum by the number of pixels in said pixel window to arrive at said local pixel average.
13. The medium of claim 12, wherein said method further comprises the steps of: determining a weight factor for use in said method, said weight factor being determined by summing said first result and said second result; and dividing said summing result by a pre-established normalization variable so as to form a third result with a value ranging from zero to one.
14. The medium of claim 13 wherein said method further comprises the step of: determining a horizontal pixel average representing the average pixel value for a pixel in the row in which said focus pixel is located.
15. The medium of claim 14 wherein said method further comprises the step of: determining a vertical pixel average representing the average pixel value for a pixel in the column of pixels in which said focus pixel is located.
16. The medium of claim 15 wherein said method further comprises the step of: performing calculations on said vertical pixel average, said horizontal pixel average and said local pixel average, said calculations using said weight factor to determine a fourth result.
17. The medium of claim 16 wherein said method further comprises the step of: performing a calculation with said fourth result and the pixel value of said focus pixel so as to produce a halftone output of zero.
18. The medium of claim 16 wherein said method further comprises the step of: performing a calculation with said fourth result and the pixel value of said focus pixel so as to produce a halftone output of one.
19. The medium of claim 11, wherein said method comprises the additional step of: examining every pixel in said image data in sequence.
20. The medium of claim 19 wherein said method produces a halftone value for each pixel in said image data.
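Read together, claims 1-10 describe a per-pixel pipeline: horizontal and vertical edge correlators (the first and second results), a normalized edge weight clamped to [0, 1] (the third result), local, row, and column pixel averages, a blended threshold (the fourth result), and a binary halftone output. A minimal sketch follows; the particular blend of the three averages, the 3×3 window, the normalization constant, and the mapping of dark pixels to a halftone value of one are illustrative assumptions, not details fixed by the claims:

```python
import numpy as np

def dynamic_threshold(image: np.ndarray, norm: float = 765.0,
                      win: int = 3) -> np.ndarray:
    """Binarize a grayscale image with a per-pixel dynamic threshold,
    following the steps of claims 1-10. The blend of the local, row,
    and column averages is one plausible reading of claim 6."""
    img = image.astype(float)
    rows, cols = img.shape
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros((rows, cols), dtype=np.uint8)

    row_avg = img.mean(axis=1)   # horizontal pixel average (claim 4)
    col_avg = img.mean(axis=0)   # vertical pixel average (claim 5)

    for r in range(rows):
        for c in range(cols):
            w = padded[r:r + win, c:c + win]
            # First and second results: edge correlators (claim 1).
            first = abs(w[0, :].sum() - w[win - 1, :].sum())
            second = abs(w[:, 0].sum() - w[:, win - 1].sum())
            # Third result: weight factor in [0, 1] (claim 3).
            weight = min((first + second) / norm, 1.0)
            local_avg = w.mean()                         # claim 2
            line_avg = (row_avg[r] + col_avg[c]) / 2.0   # claims 4-5
            # Fourth result: near an edge, favour the local window
            # average; in flat regions, favour the row/column averages
            # (assumed blend for claim 6).
            threshold = weight * local_avg + (1.0 - weight) * line_avg
            # Claims 7-8: compare the focus pixel to the threshold;
            # mapping dark pixels to 1 is an assumption.
            out[r, c] = 1 if img[r, c] < threshold else 0
    return out
```

On a uniform image the correlators are zero everywhere, so every pixel is compared against the global row/column average and the output is all zeros; an isolated dark pixel falls below its local threshold and is set to one.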
PCT/US2002/004559 2001-02-16 2002-02-15 A system and method for the dynamic thresholding of grayscale image data WO2002067568A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/785,900 2001-02-16
US09/785,900 US6498660B2 (en) 2001-02-16 2001-02-16 System and method for the dynamic thresholding of grayscale image data

Publications (1)

Publication Number Publication Date
WO2002067568A1 true WO2002067568A1 (en) 2002-08-29

Family

ID=25136969

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2002/004559 WO2002067568A1 (en) 2001-02-16 2002-02-15 A system and method for the dynamic thresholding of grayscale image data

Country Status (2)

Country Link
US (1) US6498660B2 (en)
WO (1) WO2002067568A1 (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6992789B2 (en) * 2001-06-15 2006-01-31 International Business Machines Corporation Method, system, and program for managing a multi-page document
US6950210B2 (en) * 2001-11-21 2005-09-27 Matsushita Electric Industrial Co., Ltd. Method and apparatus for adaptively binarizing color document images
JP4169522B2 (en) * 2002-03-22 2008-10-22 株式会社リコー Image processing apparatus, image processing program, and storage medium for storing the program
US7502525B2 (en) * 2003-01-27 2009-03-10 Boston Scientific Scimed, Inc. System and method for edge detection of an image
US7496237B1 (en) * 2004-01-02 2009-02-24 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Image processing for binarization enhancement via fuzzy reasoning
US8215556B2 (en) 2004-06-28 2012-07-10 Konica Minolta Laboratory U.S.A., Inc. Color barcode producing, reading and/or reproducing method and apparatus
US7533817B2 (en) * 2004-08-09 2009-05-19 Konica Minolta Systems Laboratory, Inc. Color barcode producing method and apparatus, color barcode reading method and apparatus and color barcode reproducing method and apparatus
US7669769B2 (en) * 2005-03-28 2010-03-02 Konica Minolta Systems Laboratory, Inc. Systems and methods for preserving and maintaining document integrity
US7551179B2 (en) 2005-08-10 2009-06-23 Seiko Epson Corporation Image display apparatus and image adjusting method
US7628330B2 (en) * 2006-09-29 2009-12-08 Konica Minolta Systems Laboratory, Inc. Barcode and decreased-resolution reproduction of a document image
US7766241B2 (en) * 2006-09-29 2010-08-03 Konica Minolta Systems Laboratory, Inc. Barcode for two-way verification of a document
EP1935340B1 (en) * 2006-12-19 2017-11-01 Agfa HealthCare NV Method for neutralizing image artifacts prior to the determination of the Signal-to-noise ratio in CR/DR radiography systems
CA2650180C (en) * 2008-01-17 2015-04-07 Imds America Inc. Image binarization using dynamic sub-image division
US9332154B2 (en) * 2008-01-17 2016-05-03 Imds America Inc. Image binarization using dynamic sub-image division
US20110158658A1 (en) 2009-12-08 2011-06-30 Vello Systems, Inc. Optical Subchannel-Based Cyclical Filter Architecture
JP6554887B2 (en) * 2015-04-14 2019-08-07 富士ゼロックス株式会社 Image generating apparatus, evaluation system, and program
CN109724137A (en) * 2018-07-23 2019-05-07 永康市蜂蚁科技有限公司 Multifunctional aluminium-copper composite heat supply pipe
CN117409090A (en) * 2023-12-14 2024-01-16 北京科技大学 Method and system for determining LF refining slagging effect based on image processing

Citations (1)

Publication number Priority date Publication date Assignee Title
US4638369A (en) * 1984-09-04 1987-01-20 Xerox Corporation Edge extraction technique

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US6337925B1 (en) * 2000-05-08 2002-01-08 Adobe Systems Incorporated Method for determining a border in a complex scene with applications to image masking

Non-Patent Citations (2)

Title
BAXES, G.A.: "Digital image processing principles and applications", JOHN WILEY & SONS, 1994, pages 350 - 351, XP002950153 *
GONZALEZ, R.C. AND WOODS, R.E.: "Digital image processing", ADDISON-WESLEY PUBLISHING, 1992, pages 416 - 421, XP002950154 *

Also Published As

Publication number Publication date
US20020114010A1 (en) 2002-08-22
US6498660B2 (en) 2002-12-24

Similar Documents

Publication Publication Date Title
US6498660B2 (en) System and method for the dynamic thresholding of grayscale image data
KR101986592B1 (en) Recognition method of license plate number using anchor box and cnn and apparatus using thereof
RU2469403C2 (en) Image processing apparatus, method and program
Gatos et al. Adaptive degraded document image binarization
Lu et al. Color filter array demosaicking: new method and performance measures
US6195459B1 (en) Zone segmentation for image display
US8396324B2 (en) Image processing method and apparatus for correcting distortion caused by air particles as in fog
EP2545499B1 (en) Text enhancement of a textual image undergoing optical character recognition
US7519231B2 (en) Hierarchical scheme for blur detection in a digital image
EP1857975B1 (en) Histogram adjustment for high dynamic range image mapping
US8121403B2 (en) Methods and systems for glyph-pixel selection
JP4795473B2 (en) Image processing apparatus and control method thereof
US20020031268A1 (en) Picture/graphics classification system and method
US8351720B2 (en) Method and system providing edge enhanced image binarization
JP2007507802A (en) Text-like edge enhancement in digital images
US6701026B1 (en) Method and apparatus for cancelling lighting variations in object recognition
US7522781B2 (en) Method and apparatus for image processing based on a mapping function
CN115131351B (en) Engine oil radiator detection method based on infrared image
JP4852059B2 (en) Noise removal apparatus and noise removal program for improving binarization performance of document image
CN114005090A (en) Suspected smoke proposed area and deep learning-based smoke detection method
US7620246B2 (en) Method and apparatus for image processing
JP2007249436A (en) Image signal processor and processing method
JP5089797B2 (en) Image processing apparatus and control method thereof
CN112364871A (en) Part code spraying character segmentation method based on improved projection algorithm
US20070086059A1 (en) Image sharpness device and method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 EP: the EPO has been informed by WIPO that EP was designated in this application
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 EP: PCT application non-entry in European phase
NENP Non-entry into the national phase

Ref country code: JP

WWW WIPO information: withdrawn in national office

Country of ref document: JP