US20010022675A1 - Electronic image registration for a scanner - Google Patents

Electronic image registration for a scanner

Info

Publication number
US20010022675A1
Authority
US
United States
Prior art keywords
document
edge
corner
image
coordinate value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/867,901
Inventor
Nancy Kelly
Ramesh Nagarajan
Francis Tse
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xerox Corp
Original Assignee
Xerox Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xerox Corp filed Critical Xerox Corp
Priority to US09/867,901 priority Critical patent/US20010022675A1/en
Publication of US20010022675A1 publication Critical patent/US20010022675A1/en
Assigned to BANK ONE, NA, AS ADMINISTRATIVE AGENT reassignment BANK ONE, NA, AS ADMINISTRATIVE AGENT SECURITY AGREEMENT Assignors: XEROX CORPORATION
Assigned to JPMORGAN CHASE BANK, AS COLLATERAL AGENT reassignment JPMORGAN CHASE BANK, AS COLLATERAL AGENT SECURITY AGREEMENT Assignors: XEROX CORPORATION
Priority to US10/854,010 priority patent/US6999209B2/en
Assigned to XEROX CORPORATION reassignment XEROX CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: JPMORGAN CHASE BANK, N.A. AS SUCCESSOR-IN-INTEREST ADMINISTRATIVE AGENT AND COLLATERAL AGENT TO JPMORGAN CHASE BANK
Assigned to XEROX CORPORATION reassignment XEROX CORPORATION RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: JPMORGAN CHASE BANK, N.A. AS SUCCESSOR-IN-INTEREST ADMINISTRATIVE AGENT AND COLLATERAL AGENT TO BANK ONE, N.A.
Abandoned legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/60 - Rotation of a whole image or part thereof
    • G06T3/608 - Skewing or deskewing, e.g. by two-pass or three-pass rotation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/24 - Aligning, centring, orientation detection or correction of the image
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387 - Composing, repositioning or otherwise geometrically modifying originals
    • H04N1/3877 - Image rotation
    • H04N1/3878 - Skew detection or correction

Definitions

  • the present invention is directed to an electronic image registration system for an image input terminal. More specifically, the present invention is directed to an electronic image registration system for a scanner which reduces skew without utilizing additional mechanical hardware.
  • a conventional registration device is disclosed in the Xerox Disclosure Journal , Vol. 12, No. 1, entitled, “IMPROVED PERFORMANCE OF A DOCUMENT REGISTRATION FIGURE/PLATEN INTERFACE,” the entire contents of which are hereby incorporated by reference.
  • a copier scans the image through a glass platen.
  • the registration system also includes a plastic ramp which adjusts the height of the glass platen by means of an adjusting screw.
  • the plastic ramp also serves as a guide for registration fingers to move from an out of registration position to a registration position. In the registration position, the registration fingers butt up against the edge of the platen and protrude above the top surface such that a document conveyed on top of the surface of the platen is stopped at the fingers in the required registration position.
  • Xerox Disclosure Journal Vol. 10, No. 2, entitled, “LEADING EDGE DESKEW AND REGISTRATION DEVICE,” the entire contents of which are hereby incorporated by reference.
  • the Xerox Disclosure Journal article discloses a device which provides both deskewing and registration.
  • In the registration system, two paper guide plates are utilized. An original document moves along a first guide plate towards a second guide plate by means of a conveyor system or pinch rollers. The document stops at a wait station as the leading edge of the document enters an area between the first guide plate and the second guide plate.
  • As the document enters this area, the leading edge registration device is in the raised position such that two guides on the registration device project up into the area between the first paper guide plate and the second guide plate. The leading edge registration device then moves a fixed amount opposite the direction of the travel of the document so as to deskew the document and place the document into a proper registration position. After the deskewing motion, the leading edge registration device lowers out of the paper path and the document can continue from the first paper guide plate to the second paper guide plate.
  • a drawback associated with conventional registration systems is that conventional registration systems require additional mechanical hardware in the paper path so as to physically register the document in the proper scanning position. This utilization of mechanical devices to physically register the document for proper scanning is a relatively slow solution which cannot be readily utilized in a high speed scanner. Moreover, such mechanical systems cannot be utilized in a centered registered constant velocity transport scanner since the exact fast scan location of the input can vary from document to document. Therefore, for a registration system to be utilized in a high speed scanner or in conjunction with a centered registered constant velocity transport system scanner, the registration system must be able to register the input of the document quickly, dynamically, variably, and accurately.
  • the present invention is directed to a method for detecting all the four corners of a document placed on a platen. Unlike the process mentioned in U.S. Pat. No. 5,528,387, the present invention makes no assumption with respect to the input size of the document to determine the third and fourth corners (C 2 , C 3 ) of the input document.
  • One aspect of the present invention is a system for electronically registering an image on an input document.
  • the system includes scanning means for generating an image data stream representing an electronic image of the image on the input document; edge detecting means, operatively connected to said scanning means, for detecting edge data within the image data stream; first corner detecting means, operatively connected to said edge detecting means, for detecting a first corner of a leading edge of the input document based on the detected edge data and for establishing a first coordinate value therefrom; second corner detecting means, operatively connected to said edge detecting means, for detecting a second corner of a leading edge of the input document based on the detected edge data and for establishing a second coordinate value therefrom; edge range determining means for determining a minimum and maximum location for a leading edge of the scanned document and for determining a minimum and maximum location for a trailing edge of the scanned document; and window means for generating an image window representing valid image data to be processed and rendered based on said minimum and maximum location for a leading edge of the scanned document, said minimum and maximum location for a trailing edge of the scanned document, said first coordinate value, and said second coordinate value.
  • Another aspect of the present invention is a method for electronically registering an image on an input document.
  • the method generates an image data stream representing an electronic image of the image on the input document; detects edge data within the image data stream; detects a first corner of a leading edge of the input document based on the detected edge data and establishes a first coordinate value therefrom; detects a second corner of a leading edge of the input document based on the detected edge data and establishes a second coordinate value therefrom; determines a minimum and maximum location for a leading edge of the scanned document; determines a minimum and maximum location for a trailing edge of the scanned document; and generates an image window representing valid image data to be processed and rendered based on the minimum and maximum location for a leading edge of the scanned document, the minimum and maximum location for a trailing edge of the scanned document, the first coordinate value, and the second coordinate value.
  • FIG. 1 shows a graphical representation illustrating an overlaid relationship between an input document and an output document
  • FIG. 2 shows another graphical representation illustrating an overlaid relationship between an input document and an output document
  • FIG. 3 shows a flowchart illustrating the setting of C 0 for an input document
  • FIG. 4 shows a flowchart illustrating the generation of white fill areas for registering the input image area
  • FIG. 5 shows a block diagram illustrating a circuit which electronically registers an image area from an input document
  • FIG. 6 shows a block diagram illustrating a preferred circuit which electronically registers an image area from an input document
  • FIG. 7 is a block diagram showing the architecture for determining the corners of a scanned document
  • FIG. 8 is a flowchart illustrating the edge detection and corner detection process.
  • FIG. 9 is an illustration of the deskewing measurements.
  • FIG. 3 shows a flowchart illustrating the setting of the coordinate value VC 0 .
  • VC 0 is a coordinate value representing one corner of the input document 1 as illustrated in FIGS. 1 and 2.
  • the VC 0 can be defined as (S C0 , P C0 ) wherein S C0 is a scanline location value and P C0 is a pixel location value.
  • image (video) data received from the scanner, a full width array of CCD sensor cells, is analyzed to detect edge data.
  • Edge data is the data representing the transition between image data representing a background of the platen cover or the background of a constant velocity transport (CVT) device and a leading edge of an input document. This edge data is received in step S 1 .
  • the edge data is analyzed, as illustrated in FIG. 3, at step S 3 to detect the physical corner C 0 of the input document 1 as illustrated in FIGS. 1 and 2. If the physical corner of the input document C 0 is not detected within a predetermined number of scanlines, coordinate value VC 0 is set to a default value.
  • the coordinate value VC 0 (S C0 , P C0 ) is set to be equal to the measured coordinate value of the physical corner C 0 of the input document 1 of FIGS. 1 and 2 at step S 5 .
  • step S 11 determines whether the set coordinate value VC 0 is within a predetermined number of pixels of a nominal center value; i.e., is P CO within a predetermined number of pixels of the nominal center value.
  • This nominal center value is related to the center of the scanning area, and in the preferred embodiment, the nominal center value is equal to 2480. More specifically, the nominal center value corresponds to the pixel of the full width array which is centered in the fast scan direction for a particular paper width; i.e., if the full width array is 11 inches wide, the nominal center value will correspond to the pixel located at 5.5 inches.
  • step S 13 sets the coordinate VC 0 value to a second default value.
  • step S 15 determines whether the set coordinate value VC 0 was detected before the nominal center pixel. If the set coordinate value VC 0 was not detected before the nominal center pixel, step S 17 sets the coordinate value VC 0 to a third default value. Moreover, if the coordinate value VC 0 was detected before the nominal center pixel, the set coordinate value VC 0 remains the same.
  • the coordinate value of the corner VC 0 of the input document is determined by analyzing the image data being received by a scanner.
  • step S 19 continues to receive video data so that the center of the input document can be detected.
  • the received video data is analyzed at step S 21 to determine if the center of the input document has been detected.
  • the nominal center pixel of the full width array is monitored for the presence of edge data. When edge data is present, the center of the input document has been detected. If the center of the document has not been detected, step S 45 determines whether a predetermined number of scanlines have been processed. If a predetermined number of scanlines have not been processed, the process returns to step S 19 . It is noted that during this process to detect the center of the input document, a counter keeps track of the number of scanlines that have been processed.
  • a center value is set at step S 23 . It is further noted that if edge data is detected at the nominal center pixel at step S 21 , the center value is set at step S 23 to the value corresponding to the position of the detected leading edge data.
  • the center value is a coordinate value wherein the fast scan coordinate is already known by the position of the nominal center pixel of the full width array and the slow scan coordinate is set equal to the number of scanlines which have been processed at this point in time.
  • step S 25 creates a first white fill area which is initially one scanline high and equal in width to the page width (fast scan direction length) of the input document.
  • the page width of the input document is determined from sensors which are set prior to feeding the input document into the scanning area or can be inputted by the user through a user interface.
  • step S 27 determines whether the physical corner C 1 of the input document has been detected. If the physical corner C 1 (S C1 , P C1 ) of the input document has not been detected, step S 29 adds a scanline to the first white fill area.
  • step S 31 determines whether a predetermined number of scanlines have been processed since the setting of the center value. If a predetermined number of scanlines has not been processed, the process returns to step S 27 , wherein further image data is analyzed to detect the presence of the physical corner C 1 of the input document. On the other hand, if a predetermined number of scanlines has been processed, the process proceeds to step S 39 , which sets the coordinate value VC 1 to a fourth default setting.
  • step S 27 detects the presence of the physical corner C 1 of the input document
  • step S 33 determines whether the detection of this corner is closer than a predetermined number of pixels from the nominal center pixel of the full width array. If the detected physical corner C 1 of the input document is closer than the predetermined number of pixels from the nominal center pixel of the full width array, it is assumed that the document is either dog-eared or black edged and step S 35 sets the coordinate value VC 1 to a fifth default setting. However, if the detected physical corner of the input document is not closer than a predetermined number of pixels from the nominal center, step S 37 sets the coordinate value VC 1 to the detected value.
  • step S 41 determines the skew angle of the input document and the undetected corners C 2 and C 3 from the coordinate values VC 0 and VC 1 using conventional methods.
  • Upon determining the skew angle and the calculated coordinate values of VC 2 and VC 3 , step S 43 generates second and third white fill areas so as to define the actual image area to be processed as the output image.
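As an illustration of how the skew angle and the two undetected trailing corners might be derived from VC 0 and VC 1 by such conventional methods, the following sketch treats each coordinate as a (scanline, pixel) pair and assumes the slow scan length of the document (in scanlines) is supplied, for example from the feeder sensors. The function name, the use of a known length, and the labelling of which trailing corner is C 2 and which is C 3 are assumptions for illustration only, not part of the disclosure.

```python
import math

def estimate_skew_and_trailing_corners(vc0, vc1, doc_length_scanlines):
    """Sketch only: vc0 and vc1 are (scanline, pixel) coordinates of the detected
    leading-edge corners; doc_length_scanlines is an assumed slow scan length."""
    s0, p0 = vc0
    s1, p1 = vc1
    ds, dp = s1 - s0, p1 - p0
    skew = math.atan2(ds, dp)              # skew angle of the leading edge, in radians
    edge_len = math.hypot(ds, dp)
    # unit vector perpendicular to the leading edge, pointing downstream (increasing scanline)
    perp_s, perp_p = dp / edge_len, -ds / edge_len
    vc2 = (s0 + doc_length_scanlines * perp_s, p0 + doc_length_scanlines * perp_p)
    vc3 = (s1 + doc_length_scanlines * perp_s, p1 + doc_length_scanlines * perp_p)
    return skew, vc2, vc3
```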
  • FIG. 5 illustrates a block diagram of a circuit which registers the input image from an input document to be stored as a bitmap in a memory.
  • the input document is scanned by a full width array (FWA) 11 which in the preferred embodiment of the present invention is a line scanner utilized in connection with a constant velocity transport (CVT) system.
  • the FWA 11 produces image data which is fed into an edge detection circuit 13 which looks for edge data in the image data input stream.
  • the image data is also fed into a multiplexer 33 .
  • the FWA 11 also produces a scanline signal which indicates the beginning of each new scanline and a pixel clock signal which indicates the appearance of the new set of image data corresponding to a single pixel.
  • the edge detecting circuit 13 produces a signal indicating the presence of edge data in the image data input stream. This signal indicating the detection of edge data is fed into a C 0 detecting circuit 21 , a center detecting circuit 23 , and a C 1 detecting circuit 15 .
  • the C 0 detecting circuit 21 detects the initial presence of the first corner of the input document labeled as C 0 .
  • the C 0 detecting circuit 21 represents a detection implementation which can be done in hardware or in software by a microprocessor.
  • the C 0 detecting circuit 21 outputs a signal indicating the detection of the first corner to a VC 0 setting circuit 25 and the C 1 detecting circuit 15 .
  • the VC 0 setting circuit 25 sets the coordinate value of the first corner of the input document VC 0 according to the various parameters illustrated in FIG. 3.
  • the VC 0 setting circuit 25 represents a setting implementation which can be done in hardware or in software by a microprocessor.
  • the VC 0 setting circuit 25 receives a signal from counter 19 which maintains the present number of scanlines which have been processed, a signal from the C 0 detecting circuit 21 , the pixel clock signal, and a center detecting signal from center detecting circuit 23 . From these four input signals, the VC 0 setting circuit 25 produces the coordinate value VC 0 and outputs this value to a white field area generator 29 and image area circuit 31 .
  • the center detecting circuit 23 is a circuit which detects the presence of edge data associated with the nominal center of the FWA.
  • the center detecting circuit 23 represents a detection implementation which can be done in hardware or in software by a microprocessor. More specifically, the center detecting circuit 23 detects when the nominal center produces edge data.
  • the center detecting circuit 23 outputs a center detection signal to the VC 0 setting circuit 25 , a center setting circuit 27 , the white field area generator 29 , and the C 1 detecting circuit 15 .
  • the center setting circuit 27 sets a coordinate value of the center of the leading edge of the input document based on the center detecting signal from the center detecting circuit 23 and the value in the counter 19 .
  • the counter 19 represents a tracking implementation which can be done in hardware or in software by a microprocessor.
  • the center setting circuit 27 sets the coordinate center value and outputs this value to the white field area generator 29 , image area circuit 31 , and a VC 1 setting circuit 17 .
  • the center setting circuit 27 represents a setting implementation which can be done in hardware or in software by a microprocessor.
  • the C 1 detecting circuit 15 only detects for edge data after the first corner of the input document and the center of the input document have been detected.
  • the C 1 detecting circuit 15 detects the second physical corner of the input document and produces a detection signal which is fed to the VC 1 setting circuit 17 .
  • the C 1 detecting circuit 15 represents a detection implementation which can be done in hardware or in software by a microprocessor.
  • the VC 1 setting circuit 17 sets the coordinate value of the second corner of the input document and outputs this value to the white field area generator 29 .
  • the VC 1 setting circuit 17 represents a setting implementation which can be done in hardware or in software by a microprocessor.
  • the white field area generator 29 produces blocks of data which are associated with a non-image area of the output image. This generation of blocks of data by the white field area generator 29 can be done in hardware or in software by a microprocessor or a combination of both. More specifically, in the preferred embodiment of the present invention, the non-image area of the output image is considered white which will be used to fill around an image area being printed on a document. However, it is noted that the non-image area of the output document could be associated as a non-printing area so as not to interfere with the color of the document.
  • the blocks of image data for the non-image areas of the output image are fed into multiplexer 33 .
  • Multiplexer 33 selects between the image data being received from the FWA 11 or the image data for the non-image area received from the white field area generator 29 .
  • the selection by multiplexer 33 can be realized by a hardware multiplexing circuit or a software masking or suppression routine done by a microprocessor.
  • the multiplexer 33 makes its selection based on a control signal from the image area circuit 31 .
  • Image area circuit 31 receives the scanline signal and pixel clock signal from the FWA 11 . Also, the image area circuit 31 receives the set coordinate values of VC 0 , VC 1 , and the center value of the input document.
  • the image area circuit determines the skew angle of the input document, calculates the position (coordinate values VC 2 and VC 3 ) of the third and fourth corners (C 2 , C 3 ) of the input document and determines the white field area of the output image.
  • the functions performed by the image area circuit 31 can be carried out in hardware or in software by a microprocessor.
  • the image area circuit produces a select signal which selects the data generated by the white field area generator 29 when the image area circuit determines that that particular pixel in the output image corresponds to a non-image area.
  • the data selected by the multiplexer 33 is stored in a memory 35 as a bitmap of the output image for later transmission, storage, or printing.
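The per-pixel selection performed by the multiplexer 33 can be pictured with a small sketch; the white value, the function name, and the callable standing in for the image area circuit 31 are assumptions for illustration, not part of the disclosure.

```python
WHITE = 255  # assumed 8-bit grey value for the non-image (white fill) areas

def registered_scanline(scan_pixels, scanline_no, in_image_area):
    """Keep scanner video inside the image area, substitute white-fill data elsewhere;
    in_image_area(scanline_no, pixel_no) -> bool stands in for the image area circuit 31."""
    return [pix if in_image_area(scanline_no, i) else WHITE
            for i, pix in enumerate(scan_pixels)]
```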
  • FIG. 6 illustrates a preferred embodiment of the electronic document registration system of the present invention.
  • a sync circuit 101 receives grey video data, an lnsync signal and a pgsync signal from an image input terminal (IIT) (not shown).
  • the IIT may be a digital platen scanner or a constant velocity transport digital scanner.
  • the sync circuit 101 generates a video signal, a valid video pixel signal, an lnsync signal, and a pgsync signal which are fed to an edge detect circuit 105 .
  • the lnsync and pgsync signals are also fed to a processor 103 .
  • the edge detect circuit 105 generates a video signal, a valid signal, a lnsync signal, and a pgsync signal which are fed to a window generator 107 .
  • the window generator 107 generates an effptr (effect pointer) signal, a video signal, a valid signal, a lnsync signal, and a psyncw signal which are fed to a suppress and mask video circuit 109 .
  • the suppress and mask video circuit 109 generates a video signal which is fed to an image processing section for further image processing.
  • processor 103 programs the page length value (number of scan lines) and line length value (number of pixels) registers of the sync circuit 101 with values that are a predetermined percentage larger than the input document image size.
  • the predetermined percentage is 10%.
  • the IIT platen has a sensor which will assert the pgsync that is received by the sync circuit 101 .
  • the IIT will also mark the beginning of each scanline of video by asserting the lnsync that is received by the sync circuit 101 .
  • the pgsync and lnsync signals are momentary signals.
  • the sync circuit 101 will, in response to the pgsync and lnsync signals from the IIT, assert a pgsync signal and a lnsync signal.
  • the duration of these lnsync and pgsync signals are determined by counters in the sync circuit 101 which count up to the values stored in page length and line length registers. These values are the values programmed into the sync circuit 101 , prior to scanning, by the processor 103 .
  • Processor 103 maintains a count of the sync circuit's lnsync signal after the assertion of pgsync by the sync circuit 101 . It is noted that initially the pgsync signal from the sync circuit 101 is not gated through to the output of the edge detect circuit. More specifically, the psync signal generated by the edge detect circuit 105 remains in a deasserted state until processor 103 locates the slow scan lead edge of the document.
  • Processor 103 locates the edges of the document in the video using the edge position data and gray video of the center pixel provided by the edge detect circuit 105 .
  • the location of the center pixel is programmable in the edge detect circuit 105 .
  • Edge detect circuit 105 measures the valid video pixel signal from the sync circuit 101 and detects for each scanline the first “black-to-white” transition pixel in the video as the fast scan start edge, and the last “white-to-black” transition pixel in the video as the fast scan end edge of the document. The pixel positions of the two edges are stored along with the gray video value of the center pixel.
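A rough software equivalent of this per-scanline edge search is sketched below. The binarisation threshold and the center pixel location are assumed parameters (the hardware presumably uses programmed values), and the simple transition test stands in for whatever filtering the circuit actually applies.

```python
def scanline_edges(pixels, threshold=128, center=2480):
    """Sketch: return (fast scan start edge, fast scan end edge, grey value of the
    center pixel) for one scanline of 8-bit grey values."""
    start_edge = end_edge = None
    for i in range(1, len(pixels)):
        if pixels[i - 1] < threshold <= pixels[i] and start_edge is None:
            start_edge = i                      # first "black-to-white" transition
        if pixels[i - 1] >= threshold > pixels[i]:
            end_edge = i                        # last "white-to-black" transition so far
    center_value = pixels[center] if center < len(pixels) else None
    return start_edge, end_edge, center_value
```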
  • Processor 103 , responding to an interrupt generated by the edge detect circuit 105 at the end of the lnsync signal, reads the two edge positions and the center pixel value from the edge detect circuit 105 via a CPU bus. The processor 103 continues to acquire the edge position data from the edge detect circuit 105 until it has located the leading edges of the document. Default registration is applied when the processor 103 is unable to locate the corners and/or the lead edge of the document.
  • the edge detect circuit 105 is programmed by processor 103 to gate through the pgsync signal to the psync output.
  • the assertion of the psync signal by the edge detect circuit 105 marks the beginning of the imaged document and controls slow scan registration.
  • Processor 103 determines the slow scan trail edge of the document by extrapolating from the located start edge and corners, the adjusted width of the source document programmed prior to start of scan. The adjusted width is equal to the nominal width of the source document minus a tolerance value.
  • Once the psync signal has been asserted, a white masking window is generated by the window generator 107 and applied by the suppress and masking video circuit 109 to the slow scan lead edge of the detected document image to serve as “edge fade-out” and to prevent black wedges in the printed output if the document image is skewed.
  • the processor 103 then reprograms a new page length value in the sync circuit 101 to properly terminate the pgsync signal.
  • the sync circuit 101 will deassert pgsync when the page length counter reaches the new value stored in the page length register.
  • Processor 103 also locates the fast scan start and end of the document.
  • Processor 103 programs the window generator 107 to setup windows to suppress the video outside the located fast scan document image area.
  • processor 103 programs the width and height of the three non-image windows and one image window and the effect pointers for each window.
  • the effect pointers in the three non-image windows are programmed so the pixels in these windows will be suppressed by the suppress and masking video circuit 109 .
  • the effect pointer for the image window is programmed to apply the desired image processing effect to the pixels in this window.
  • the location of the image window sets the fast scan registration of the imaged document.
  • processor 103 may program an additional window in the window generator 107 to deassert psyncw. The height of this window and the page length value stored in the sync circuit 101 are coordinated to terminate before the arrival of the next pgsync from the IIT.
  • the pixels outside the document image can be suppressed rather than masked. If the size of the document image located by the processor 103 is larger than the size programmed prior to start of scan, the difference is suppressed as well. If the document image is less than the size programmed prior to start of scan, additional windows are created to mask the background video in addition to suppression to fill out the document image to the size programmed prior to start of scan.
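The suppress-versus-mask decision described above reduces to a size comparison. The sketch below is one reading of that paragraph, with hypothetical names and both quantities expressed in pixels of fast scan width.

```python
def plan_fast_scan_windows(located_width, programmed_width):
    """Return (pixels of located image to suppress beyond the programmed size,
    pixels of background to white-mask to fill out to the programmed size)."""
    if located_width > programmed_width:
        return located_width - programmed_width, 0   # trim the excess by suppression
    return 0, programmed_width - located_width       # pad with masking windows
```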
  • FIG. 1 shows a graphical representation illustrating the overlaid relationship between the input document and the output image. It is noted that the skew illustrated in FIGS. 1 and 2 has been exaggerated in order to show the concepts of the present invention in a clearer manner.
  • the input document travels in the direction of the arrow such that the leading edge of the input document 1 will be the first edge read by the scanner.
  • the actual physical corners of the input document are denoted as C 0 , C 1 , C 2 , and C 3 .
  • C 0 represents the first corner of the input document that will be detected by the present invention.
  • C 1 represents the second physical corner of the document that will be detected by the present invention.
  • the full width array pixels read the background of the CVT scanner until the input document actually is placed in the optical path between the light source and the full width array.
  • the transition of the input document into the optical path causes the full width array to generate edge data which is detected by an edge detecting circuit.
  • If the input document is not skewed, its leading edge will cause the full width array to generate a full scanline of edge data. However, if the input document is skewed, the first corner of the input document transitioning into the optical path will create a partial scanline of edge data. As more of the input document is positioned into the optical path, the edge data produced by the full width array will migrate in the fast scan direction until the next corner of the input document is transitioned into the optical path.
  • the center pixel of the full width array is monitored to determine when that pixel produces edge data.
  • the center value of the input document is determined.
  • the boundary of the first white field area is established.
  • the first white field area 4 is increased in area, scanline by scanline, until the present invention detects or establishes the coordinate value for VC 1 .
  • the width of the first white field area is equal to the number of scanlines between the center value and the established coordinate value VC 1 .
  • the length of the first white field area 4 is the width of the document which is established by the sensors in the document feeder apparatus.
  • the present invention is able to calculate the skew of the input document as well as the coordinate values VC 2 and VC 3 of the corner C 2 and C 3 , respectively, of the input document.
  • the present invention is able to electronically map out the position of the input document as it traverses across the full width array.
  • the mapped input document is rotated by any conventional rotation method such that the corners C 0 , C 1 , C 2 , and C 3 are transformed to newly calculated output image corners illustrated by solid squares in FIG. 1.
  • the deskewed output image is represented by the dotted line boundary area 2 of FIG. 1.
  • the output image is filled in with the proper non-image areas and image areas. More specifically, the deskewed output image includes the first white field area 4 , which is placed in the output image.
  • a second white field area 5 is generated to cover an area associated with the scanlines between the coordinate values VC 1 and VC 3 (S C1 to S C3 ) of corners C 1 and C 3 and the pixels between the transformed corner corresponding to C 1 and the coordinate pixel value of VC 2 .
  • a third white field area is generated to cover an area having a width corresponding to the number of scanlines between the coordinate value VC 3 of corner C 3 and the transformed corner associated with C 3 (S C3 to CS C3 ).
  • the other dimension of this third white field area corresponds to the pixel value of the transformed corner associated with C 2 and the pixel value of the coordinate value VC 0 of corner value C 0 (CP C2 to P C0 ).
  • the output image is bordered with non-image data so as to prevent black edges in the output image.
  • the remaining area 3 of FIG. 1 is the image data area wherein the image data from the input document associated with this area is placed into the output image.
  • the image data from the input document, which is transferred to the output image area, is bounded by a first corner having the corner value associated with the pixel value of the coordinate value VC 0 and the scanline value of the coordinate value VC 1 .
  • the second corner of this bounded area has the coordinate value of the pixel value of the coordinate value VC 2 value (P C2 ) and the scanline value of the coordinate value VC 1 value (S C1 ).
  • a third corner of this bounded output image area has the pixel value of the coordinate value VC 2 (P C2 ) corner and the scanline value of the coordinate value VC 3 (S C3 ).
  • the last corner of this bounded output image area has a pixel value of the coordinate value VC 0 value (P C0 ) and a scanline value of the coordinate value VC 3 value (S C3 ).
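Restating the four corners just listed, the bounded output image area is an axis-aligned rectangle built from the pixel values of VC 0 and VC 2 and the scanline values of VC 1 and VC 3 ; the helper below simply collects them. The function name and the coordinate ordering are illustrative assumptions.

```python
def bounded_image_area(vc0, vc1, vc2, vc3):
    """Each VCx is a (scanline, pixel) pair; returns the four corners in the order
    described above: (S_C1, P_C0), (S_C1, P_C2), (S_C3, P_C2), (S_C3, P_C0)."""
    (_, p0), (s1, _), (_, p2), (s3, _) = vc0, vc1, vc2, vc3
    return [(s1, p0), (s1, p2), (s3, p2), (s3, p0)]
```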
  • FIG. 2 illustrates a variation of the registration process of FIG. 1.
  • the areas 4 ′, 5 ′, 6 ′, 7 , and 8 represent output image areas. More specifically, the white fill area generation method is adjusted to expand the output image area into the previously created non-image area. This process is realized by utilizing the skew information to more closely fit the image read from the input document onto the output document.
  • the output image area 3 can be increased by reading the edge point along e 0 at the line where the center of the input document is detected.
  • This edge point e 0 would be utilized as the fast scan end rather than using the coordinate value VC 0 .
  • the use of this technique depends on the accuracy of the edge data which cannot be determined until the center of the document has been detected.
  • the electronic document registration process described can be incorporated into a digital scanner or digital copier having two operating modes, copy-normal and copy-all.
  • the edges of the document image are located within the video scanned by the IIT.
  • a border of white-masking windows is applied to prevent the black backup roll of a CVT from appearing on the printed output as black borders.
  • the copy-all mode is provided when a user desires a maximum amount of document image and will accept black borders on the printed output.
  • a white-masking window is applied at the lead edge to serve as “edge fade-out”.
  • the first case, which will be described briefly below, is when the original is of good quality; in this case, both fast scan and slow scan registration are determined from the edges of the document.
  • a processor locates the corners of the document, C 0 and C 1 as shown in FIGS. 1 and 2, and calculates the skew angle of the document.
  • the processor locates the slow scan lead edge of the document and asserts psync by executing an edge detection algorithm which, in the preferred embodiment of the present invention, is a software routine that detects the first slow scan “black-to-white” transition in the scanned video.
  • a white-masking window is applied at the slow scan lead edge of the document to prevent the black backup roll of the CVT from being imaged and to serve as “edge fade-out”.
  • the processor installs white-masking windows to prevent black wedges from being imaged on the fast scan start and end edge and the slow scan trailing edge of the printed output.
  • the processor also installs suppression windows outside of the white-masking windows to frame the detected image for processing. Together the size and location of the suppression windows and the white-masking windows determines the fast scan registration of the image.
  • the processor programs the psync or psyncw to be deasserted to mark the end of the processed image.
  • the number of lines in the processed image is predetermined based on the size of the original document detected by side guides and a sensor in the input paper tray of the CVT.
  • the second case, which will be described briefly below, is when the condition of the original prevents the electronics from locating the corners of the document with sufficiently high confidence (dog-eared corners, torn or missing corners, . . . ) but the document still has a well defined lead edge; in this case, the document registration system uses the detected lead edge for slow scan registration but applies a default fast scan registration based on the document length detected by the side guides in the CVT.
  • the fast scan length of the processed image is assigned a default value equal to the size of the original document detected by the side guides of the CVT input paper tray. The length is centered about the center pixel location to enable the installation of the suppression and masking windows.
  • the processor locates the slow scan lead edge of the document image and asserts psync by executing the edge locating algorithm noted above which detects the first slow scan “black-to-white” transition in the scanned video.
  • the slow scan size of the lead edge white-masking window in this mode is assigned a default value equal to a minimum of 13 scanlines after the detection of the document lead edge.
  • the 13-scanline value is based on a maximum CVT skew of 3.75 mrads extending over 11 inches of document length.
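The 13-scanline default is consistent with a simple worst-case calculation if a slow scan resolution of 300 scanlines per inch is assumed (the resolution is not stated in this excerpt):

11 in × 0.00375 rad × 300 scanlines/in ≈ 12.4 scanlines, which rounds up to 13.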
  • the processor then installs suppression windows and white-masking windows based on the located slow scan lead edge and defaulted fast scan start and end for the document image.
  • the processor programs the psync or psyncw to be deasserted to mark the end of the processed image.
  • the number of scanlines in the processed image are predetermined based on the size of the original document detected by the side guides and the sensor in the input paper tray of the CVT.
  • the processor will assert psync if after a predetermined number of scanlines the slow scan lead edge of the document has yet to be located.
  • the predetermined number of scanlines is determined according to the concept of “dead reckoning” from the assertion of pgsync from the IIT.
  • the installation of the lead edge white-masking windows and the fast scan suppression/masking windows is the same as described above with respect to the second case.
  • automatic document detection is intended to locate “any rectangular piece” of document placed “anywhere” on the platen.
  • the present invention is directed to such a process wherein the process requires an initial prescan to locate edges (black->white transition followed by white->black transition). With the detection of the corners of the document with respect to the platen edges, a second scan is performed to image the appropriate region (either the outer bounding box containing the entire document, or the largest possible rectangular box that could be enclosed within the four corners to avoid any black borders).
  • the document size obtained during prescan enables the machine to either setup reduction/enlargement parameters to fit onto a preselected output paper or to select the appropriate output paper to fit 100% of the input image.
  • the present invention is directed to a method for detecting all the four corners of a document placed on a platen. This method is suited for systems that do not have a windowing chip to blank out non-rectangular regions. Unlike the process mentioned in U.S. Pat. No. 5,528,387, there is no assumption made on the input size of the document to determine the third and fourth corners (C 2 , C 3 ) of the input document.
  • An automatic document detection or AutoFind process can be broken into two distinctive blocks as seen in FIG. 7.
  • the first block 1000 involves hardware which looks for edges, either Black->White or White->Black transitions as described in U.S. Pat. No. 5,528,387.
  • the second block 1002 then analyzes the edge information to accurately determine the four corners of the document.
  • the corner values are initialized at step S 1001 . Thereafter, at step S 1002 , the edges of the document are detected and at step S 1003 it is determined if the first corner has been detected. If yes, the first corner coordinates are stored at step S 1008 . If it is not determined that the first corner was detected, it is determined if the start or leading edge is less than a first threshold at step S 1004 .
  • If it is determined that the start edge is less than a first threshold, it is determined if the edge is a valid corner at step S 1005 . If not a valid corner, the process returns to detecting edges. On the other hand, if it is a valid corner, step S 1009 stores the coordinates of the start edges.
  • step S 1006 determines if the end or trailing edge is greater than a second threshold. If it is determined that the end edge is greater than a second threshold, it is determined if the edge is a valid corner at step S 1007 . If not a valid corner, the process returns to detecting edges. On the other hand, if it is a valid corner, step S 1010 stores the coordinates of the end edges.
  • the edge information from the edge detecting circuitry is obtained by averaging over a few pixels which eliminates picking up any noise in the data. An additional validity check is done to further ensure that the corners found are not due to some “dust” on the platen. A possible checking criterion for both the edges (StartEdge and EndEdge) is given below:
  • | Edge i − Edge i−j | ≤ δ, for j ∈ [−2, 2], where the tolerance δ is to compensate for skewed corners.
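One plausible reading of this criterion, written as a small check over the per-scanline edge positions (the tolerance value and the function name are assumptions):

```python
def is_valid_edge(edges, i, delta):
    """Accept edge position edges[i] only if it differs from each neighbouring
    scanline's edge position (j in [-2, 2], j != 0) by at most delta, the tolerance
    that compensates for skewed corners."""
    return all(abs(edges[i] - edges[i - j]) <= delta
               for j in range(-2, 3)
               if j != 0 and 0 <= i - j < len(edges))
```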
  • the present invention programs the scanner to scan either a window that encloses the four corners of the document or a window that is enclosed within the four corners of the document, based on the user's preference ⁇ Black Border Erase (“BBE”) feature OFF/ON ⁇ .
  • FIG. 9 shows an example of a skewed document placed on the platen.
  • the size of the output image can also be estimated in terms of a fast scan size (FS size ) and a slow scan size (SS size ) determined from the detected corners.
  • the appropriate paper tray can be selected to fit the entire image. But if a particular output paper is selected by the user, the scanned image could be automatically scaled, using ratios derived from FS size and SS size , to fit the paper.
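Since the ratio expressions themselves are not reproduced in this excerpt, the sketch below shows one reasonable form: the ratios of the selected output paper dimensions to the detected FS size and SS size, reduced to a single common ratio so the aspect ratio is preserved. Whether the system uses a common ratio or independent fast scan and slow scan ratios is not stated here.

```python
def scale_ratio(fs_size, ss_size, paper_fs, paper_ss):
    """Reduction/enlargement ratio that fits the detected image (fs_size x ss_size)
    onto the selected output paper (paper_fs x paper_ss)."""
    return min(paper_fs / fs_size, paper_ss / ss_size)   # common ratio, aspect preserved
```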
  • the present invention provides a flexible electronic registration system which provides registration of an input document without requiring a mechanical device.

Abstract

A system electronically registers an image on an input document. The system includes a scanner for generating an image data stream representing an electronic image of the image on the input document and an edge detecting circuit for detecting edge data within the image data stream. A circuit detects a first corner of a leading edge of the input document based on the detected edge data and establishes a first coordinate value therefrom, and detects a second corner of the leading edge based on the detected edge data and establishes a second coordinate value therefrom. The system further determines a minimum and maximum location for a leading edge of the scanned document and a minimum and maximum location for a trailing edge of the scanned document. An image window is generated representing valid image data to be processed and rendered based on the minimum and maximum location for the leading edge of the scanned document, the minimum and maximum location for the trailing edge of the scanned document, the first coordinate value, and the second coordinate value.

Description

    FIELD OF THE PRESENT INVENTION
  • The present invention is directed to an electronic image registration system for an image input terminal. More specifically, the present invention is directed to an electronic image registration system for a scanner which reduces skew without utilizing additional mechanical hardware. [0001]
  • BACKGROUND OF THE PRESENT INVENTION
  • In conventional scanners which utilize a document handler to convey original documents to the scanning or input area, some type of mechanism is employed to register the document at a required exposure position. These mechanisms position the document more accurately and reduce the skew. In the conventional scanners, mechanical hardware is utilized to register the document for proper scanning. [0002]
  • An example of a conventional registration device is disclosed in the [0003] Xerox Disclosure Journal, Vol. 12, No. 1, entitled, “IMPROVED PERFORMANCE OF A DOCUMENT REGISTRATION FIGURE/PLATEN INTERFACE,” the entire contents of which are hereby incorporated by reference. In this conventional registration system, a copier scans the image through a glass platen. The registration system also includes a plastic ramp which adjusts the height of the glass platen by means of an adjusting screw. The plastic ramp also serves as a guide for registration fingers to move from an out of registration position to a registration position. In the registration position, the registration fingers butt up against the edge of the platen and protrude above the top surface such that a document conveyed on top of the surface of the platen is stopped at the fingers in the required registration position.
  • Another example of a conventional registration and deskew device is disclosed in [0004] Xerox Disclosure Journal, Vol. 10, No. 2, entitled, “LEADING EDGE DESKEW AND REGISTRATION DEVICE,” the entire contents of which are hereby incorporated by reference. The Xerox Disclosure Journal article discloses a device which provides both deskewing and registration. In the registration system, two paper guide plates are utilized. An original document moves along a first guide plate towards a second guide plate by means of a conveyor system or pinch rollers. The document stops at a wait station as the leading edge of the document enters an area between the first guide plate and the second guide plate. As the document enters this area, the leading edge registration device is in the raised position such that two guides on the registration device project up into the area between the first paper guide plate and the second guide plate. The leading edge registration device then moves a fixed amount opposite the direction of the travel of the document so as to deskew the document and place the document into a proper registration position. After the deskewing motion, the leading edge registration device lowers out of the paper path and the document can continue from the first paper guide plate to the second paper guide plate.
  • A drawback associated with conventional registration systems is that conventional registration systems require additional mechanical hardware in the paper path so as to physically register the document in the proper scanning position. This utilization of mechanical devices to physically register the document for proper scanning is a relatively slow solution which cannot be readily utilized in a high speed scanner. Moreover, such mechanical systems cannot be utilized in a centered registered constant velocity transport scanner since the exact fast scan location of the input can vary from document to document. Therefore, for a registration system to be utilized in a high speed scanner or in conjunction with a centered registered constant velocity transport system scanner, the registration system must be able to register the input of the document quickly, dynamically, variably, and accurately. [0005]
  • Automatic document detection is intended to locate “any rectangular piece” of document placed “anywhere” on the platen. Such a process requires an initial prescan to locate edges (black->white transition followed by white->black transition). With the detection of the corners of the document with respect to the platen edges, a second scan is performed to image the appropriate region (either the outer bounding box containing the entire document, or the largest possible rectangular box that could be enclosed within the four corners to avoid any black borders). The document size obtained during prescan enables the machine to either setup reduction/enlargement parameters to fit onto a preselected output paper or to select the appropriate output paper to fit 100% of the input image. [0006]
  • Although this process is available, it is desirable to have a process which does not rely on the assumption of the input document's size. The present invention is directed to a method for detecting all the four corners of a document placed on a platen. Unlike the process mentioned in U.S. Pat. No. 5,528,387, the present invention makes no assumption with respect to the input size of the document to determine the third and fourth corners (C[0007] 2, C3) of the input document.
  • SUMMARY OF THE PRESENT INVENTION
  • One aspect of the present invention is a system for electronically registering an image on an input document. The system includes scanning means for generating an image data stream representing an electronic image of the image on the input document; edge detecting means, operatively connected to said scanning means, for detecting edge data within the image data stream; first corner detecting means, operatively connected to said edge detecting means, for detecting a first corner of a leading edge of the input document based on the detected edge data and for establishing a first coordinate value therefrom; second corner detecting means, operatively connected to said edge detecting means, for detecting a second corner of a leading edge of the input document based on the detected edge data and for establishing a second coordinate value therefrom; edge range determining means for determining a minimum and maximum location for a leading edge of the scanned document and for determining a minimum and maximum location for a trailing edge of the scanned document; and window means for generating an image window representing valid image data to be processed and rendered based on said minimum and maximum location for a leading edge of the scanned document, said minimum and maximum location for a trailing edge of the scanned document, said first coordinate value, and said second coordinate value. [0008]
  • Another aspect of the present invention is a method for electronically registering an image on an input document. The method generates an image data stream representing an electronic image of the image on the input document; detects edge data within the image data stream; detects a first corner of a leading edge of the input document based on the detected edge data and establishes a first coordinate value therefrom; detects a second corner of a leading edge of the input document based on the detected edge data and establishes a second coordinate value therefrom; determines a minimum and maximum location for a leading edge of the scanned document; determines a minimum and maximum location for a trailing edge of the scanned document; and generates an image window representing valid image data to be processed and rendered based on the minimum and maximum location for a leading edge of the scanned document, the minimum and maximum location for a trailing edge of the scanned document, the first coordinate value, and the second coordinate value. [0009]
  • Further objects and advantages of the present invention will become apparent from the following descriptions of the various embodiments and characteristic features of the present invention.[0010]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The following is a brief description of each drawing used to describe the present invention, and thus are being presented for illustrative purposes only and should not be limitative of the scope of the present invention, wherein: [0011]
  • FIG. 1 shows a graphical representation illustrating an overlaid relationship between an input document and an output document; [0012]
  • FIG. 2 shows another graphical representation illustrating an overlaid relationship between an input document and an output document; [0013]
  • FIG. 3 shows a flowchart illustrating the setting of C[0014] 0 for an input document;
  • FIG. 4 shows a flowchart illustrating the generation of white fill areas for registering the input image area; [0015]
  • FIG. 5 shows a block diagram illustrating a circuit which electronically registers an image area from an input document; [0016]
  • FIG. 6 shows a block diagram illustrating a preferred circuit which electronically registers an image area from an input document; [0017]
  • FIG. 7 is a block diagram showing the architecture for determining the corners of a scanned document; [0018]
  • FIG. 8 is a flowchart illustrating the edge detection and corner detection process; and [0019]
  • FIG. 9 is an illustration of the deskewing measurements.[0020]
  • DETAILED DESCRIPTION OF THE PRESENT INVENTION
  • The following will be a detailed description of the drawings illustrated in the present invention. In this description, as well as in the drawings, like references represent like devices, circuits, or circuits performing equivalent functions. [0021]
  • As noted above, FIG. 3 shows a flowchart illustrating the setting of the coordinate value VC[0022]0. VC0 is a coordinate value representing one corner of the input document 1 as illustrated in FIGS. 1 and 2.
  • More specifically, the VC[0023] o can be defined as (SCO, PCO) wherein SCO is a scanline location value and PCO is a pixel location value. To determine the coordinate value VC0, image (video) data received from the scanner, a full width array or CCD sensor cells, is analyzed to detect edge data. Edge data is the data representing the transition between image data representing a background of the platen cover or the background of a constant velocity transport (CVT) device and a leading edge of an input document. This edge data is received in step S1. The edge data is analyzed, as illustrated in FIG. 3, at step S3 to detect the physical corner C0 of the input document 1 as illustrated in FIGS. 1 and 2. If the physical corner of the input document C0 is not detected within a predetermined number of scanlines, coordinate value VC0 is set to a default value.
  • However, if the physical corner of the input document is detected at step S[0024]3, the coordinate value VC0 (SC0, PC0) is set to be equal to the measured coordinate value of the physical corner C0 of the input document 1 of FIGS. 1 and 2 at step S5. At step S7, it is determined whether the set coordinate value VC0 is within a predetermined number of lines from the start of the scanning process; i.e., is SC0 less than or equal to a predetermined scanline value. If the set coordinate value VC0 is not within a predetermined number of lines, the coordinate value VC0 is set to a first default value at step S9.
  • On the other hand, if the set coordinate value VC[0025] 0 is within a predetermined number of lines, step S11 determines whether the set coordinate value VC0 is within a predetermined number of pixels of a nominal center value; i.e., is PCO within a predetermined number of pixels of the nominal center value.
  • This nominal center value is related to the center of the scanning area, and in the preferred embodiment, the nominal center value is equal to 2480. More specifically, the nominal center value corresponds to the pixel of the full width array which is centered in the fast scan direction for a particular paper width; i.e., if the full width array is 11 inches wide, the nominal center value will correspond to the pixel located at 5.5 inches. [0026]
  • If the set coordinate value VC[0027] 0 is within a predetermined number of pixels of the nominal center pixel, step S13 sets the coordinate VC0 value to a second default value.
  • On the other hand, if the set coordinate VC[0028] 0 value is not within a predetermined number of pixels of the nominal center pixel, step S15 determines whether the set coordinate value VC0 was detected before the nominal center pixel. If the set coordinate value VC0 was not detected before the nominal center pixel, step S17 sets the coordinate value VC0 to a third default value. Moreover, if the coordinate value VC0 was detected before the nominal center pixel, the set coordinate value VC0 remains the same.
  • As demonstrated in FIG. 3, the coordinate value VC0 of the corner C0 of the input document is determined by analyzing the image data being received by the scanner. [0029]
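  • Purely for illustration, the decision flow of FIG. 3 can be condensed into a short software sketch. The thresholds, the tolerance around the nominal center pixel, and the three default coordinate values below are hypothetical placeholders, not values taken from the preferred embodiment.

```python
# Illustrative sketch of the FIG. 3 decision flow; not the patented implementation.
# The thresholds and default coordinate values are hypothetical placeholders.

MAX_LEAD_SCANLINES = 100       # assumed "predetermined number of lines"
CENTER_PIXEL = 2480            # nominal center pixel named in the description
CENTER_TOLERANCE_PX = 16       # assumed "predetermined number of pixels"

FIRST_DEFAULT = (0, 0)             # used when no usable corner is found in time
SECOND_DEFAULT = (0, 0)            # used when the corner lies too close to center
THIRD_DEFAULT = (0, CENTER_PIXEL)  # used when the corner lies past the center

def set_vc0(detected_corner):
    """detected_corner is (SC0, PC0), or None if no corner was detected in time."""
    if detected_corner is None:
        return FIRST_DEFAULT
    sc0, pc0 = detected_corner
    if sc0 > MAX_LEAD_SCANLINES:                         # step S7 fails -> step S9
        return FIRST_DEFAULT
    if abs(pc0 - CENTER_PIXEL) <= CENTER_TOLERANCE_PX:   # step S11 -> step S13
        return SECOND_DEFAULT
    if pc0 >= CENTER_PIXEL:                              # step S15 fails -> step S17
        return THIRD_DEFAULT
    return (sc0, pc0)                                    # value from step S5 retained
```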
  • Once the coordinate value VC0 is set, the registration process moves on to the flowchart illustrated in FIG. 4. In FIG. 4, after the coordinate value VC0 is set, step S19 continues to receive video data so that the center of the input document can be detected. The received video data is analyzed at step S21 to determine if the center of the input document has been detected. To detect this center point, the nominal center pixel of the full width array is monitored for the presence of edge data. When edge data is present, the center of the input document has been detected. If the center of the document has not been detected, step S45 determines whether a predetermined number of scanlines have been processed. If a predetermined number of scanlines have not been processed, the process returns to step S19. It is noted that during this process to detect the center of the input document, a counter keeps track of the number of scanlines that have been processed. [0030]
  • If a predetermined number of scanlines have been processed, a center value is set at step S23. It is further noted that if edge data is detected at the nominal center pixel at step S21, the center value is set at step S23 to the value corresponding to the position of the detected leading edge data. The center value is a coordinate value wherein the fast scan coordinate is already known from the position of the nominal center pixel of the full width array and the slow scan coordinate is set equal to the number of scanlines which have been processed at this point in time. [0031]
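  • The center-detection loop of steps S19 through S45 can likewise be pictured in software. This is a minimal sketch assuming the scanner delivers, per scanline, a flag indicating whether edge data appeared at the nominal center pixel; the scanline limit is a hypothetical placeholder.

```python
CENTER_PIXEL = 2480           # nominal center pixel of the full width array
MAX_CENTER_SCANLINES = 300    # assumed "predetermined number of scanlines"

def detect_center(center_edge_flags):
    """center_edge_flags yields True, per scanline, when edge data appears at the
    nominal center pixel. Returns the center value as (slow scan, fast scan)."""
    count = 0                                     # counter of processed scanlines
    for has_center_edge in center_edge_flags:
        count += 1
        if has_center_edge:                       # step S21: edge at nominal center
            return (count, CENTER_PIXEL)          # step S23: center value is set
        if count >= MAX_CENTER_SCANLINES:         # step S45: limit reached
            break
    return (count, CENTER_PIXEL)                  # defaulted center value
```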
  • After the center coordinate value has been set at step S23, step S25 creates a first white fill area which is initially one scanline high and equal in width to the page width (fast scan direction length) of the input document. The page width of the input document is determined from sensors which are set prior to feeding the input document into the scanning area, or it can be input by the user through a user interface. [0032]
  • Upon initiating the creation of the first white fill area, step S27 determines whether the physical corner C1 of the input document has been detected. If the physical corner C1 (SC1, PC1) of the input document has not been detected, step S29 adds a scanline to the first white fill area. Upon adding to the first white fill area, step S31 determines whether a predetermined number of scanlines have been processed since the setting of the center value. If the predetermined number of scanlines has not been processed, the process returns to step S27, wherein further image data is analyzed to detect the presence of the physical corner C1 of the input document. On the other hand, if the predetermined number of scanlines has been processed, the coordinate value VC1 is set to a fourth default setting at step S39. [0033]
  • If step S27 detects the presence of the physical corner C1 of the input document, step S33 determines whether the detected corner is closer than a predetermined number of pixels to the nominal center pixel of the full width array. If the detected physical corner C1 of the input document is closer than the predetermined number of pixels to the nominal center pixel of the full width array, it is assumed that the document is either dog-eared or black edged, and step S35 sets the coordinate value VC1 to a fifth default setting. However, if the detected physical corner of the input document is not closer than the predetermined number of pixels to the nominal center, step S37 sets the coordinate value VC1 to the detected value. [0034]
  • Next, at step S41, the process determines the skew angle of the input document and the undetected corners C2 and C3 from the coordinate values VC0 and VC1 using conventional methods. Upon determining the skew angle and the calculated coordinate values VC2 and VC3, step S43 generates second and third white fill areas so as to define the actual image area to be processed as the output image. [0035]
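  • Although the skew and corner computation is left to conventional methods, a plane-geometry sketch shows how VC2 and VC3 might be derived from VC0 and VC1. The document-length argument, the unit conventions, and the assignment of which trailing corner is C2 and which is C3 are assumptions made only for this illustration.

```python
import math

def estimate_skew_and_trailing_corners(vc0, vc1, doc_length):
    """vc0 and vc1 are the lead-edge corner values (SC, PC); doc_length is the
    page length along the side perpendicular to the lead edge, expressed in the
    same units as the scanline/pixel grid (an assumption for this sketch)."""
    sc0, pc0 = vc0
    sc1, pc1 = vc1
    # Skew angle of the lead edge relative to the fast scan (pixel) axis.
    skew = math.atan2(sc1 - sc0, pc1 - pc0)
    # Unit vector perpendicular to the lead edge, pointing toward the trail edge.
    d_px, d_sc = -math.sin(skew), math.cos(skew)
    vc3 = (sc0 + doc_length * d_sc, pc0 + doc_length * d_px)  # trailing corner under C0
    vc2 = (sc1 + doc_length * d_sc, pc1 + doc_length * d_px)  # trailing corner under C1
    return skew, vc2, vc3
```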
  • FIG. 5 illustrates a block diagram of a circuit which registers the input image from an input document to be stored as a bitmap in a memory. Initially, the input document is scanned by a full width array (FWA) 11, which in the preferred embodiment of the present invention is a line scanner utilized in connection with a constant velocity transport (CVT) system. The FWA 11 produces image data which is fed into an edge detection circuit 13, which looks for edge data in the image data input stream. The image data is also fed into a multiplexer 33. The FWA 11 also produces a scanline signal, which indicates the beginning of each new scanline, and a pixel clock signal, which indicates the appearance of a new set of image data corresponding to a single pixel. [0036]
  • The edge detecting circuit 13 produces a signal indicating the presence of edge data in the image data input stream. This signal indicating the detection of edge data is fed into a C0 detecting circuit 21, a center detecting circuit 23, and a C1 detecting circuit 15. The C0 detecting circuit 21 detects the initial presence of the first corner of the input document, labeled C0. The C0 detecting circuit 21 represents a detection implementation which can be done in hardware or in software by a microprocessor. When the first corner C0 has been detected by the C0 detecting circuit 21, the C0 detecting circuit 21 outputs a signal indicating the detection of the first corner to a VC0 setting circuit 25 and the C1 detecting circuit 15. The VC0 setting circuit 25 sets the coordinate value of the first corner of the input document, VC0, according to the various parameters illustrated in FIG. 3. The VC0 setting circuit 25 represents a setting implementation which can be done in hardware or in software by a microprocessor. [0037]
  • Thus, to properly set the coordinate value VC0, the VC0 setting circuit 25 receives a signal from counter 19, which maintains the present number of scanlines which have been processed, a signal from the C0 detecting circuit 21, the pixel clock signal, and a center detecting signal from center detecting circuit 23. From these four input signals, the VC0 setting circuit 25 produces the coordinate value VC0 and outputs this value to a white field area generator 29 and an image area circuit 31. [0038]
  • The center detecting circuit 23 is a circuit which detects the presence of edge data associated with the nominal center of the FWA. The center detecting circuit 23 represents a detection implementation which can be done in hardware or in software by a microprocessor. More specifically, the center detecting circuit 23 detects when the nominal center cell produces edge data. When the nominal center cell of the FWA 11 produces edge data, the center detecting circuit 23 outputs a center detection signal to the VC0 setting circuit 25, a center setting circuit 27, the white field area generator 29, and the C1 detecting circuit 15. The center setting circuit 27 sets a coordinate value of the center of the leading edge of the input document based on the center detecting signal from the center detecting circuit 23 and the value in the counter 19. The counter 19 represents a tracking implementation which can be done in hardware or in software by a microprocessor. The center setting circuit 27 sets the coordinate center value and outputs this value to the white field area generator 29, the image area circuit 31, and a VC1 setting circuit 17. The center setting circuit 27 represents a setting implementation which can be done in hardware or in software by a microprocessor. [0039]
  • The C1 detecting circuit 15 detects edge data only after the first corner of the input document and the center of the input document have been detected. The C1 detecting circuit 15 detects the second physical corner of the input document and produces a detection signal which is fed to the VC1 setting circuit 17. The C1 detecting circuit 15 represents a detection implementation which can be done in hardware or in software by a microprocessor. The VC1 setting circuit 17 sets the coordinate value of the second corner of the input document and outputs this value to the white field area generator 29. The VC1 setting circuit 17 represents a setting implementation which can be done in hardware or in software by a microprocessor. [0040]
  • The white field area generator 29 produces blocks of data which are associated with a non-image area of the output image. This generation of blocks of data by the white field area generator 29 can be done in hardware, in software by a microprocessor, or in a combination of both. More specifically, in the preferred embodiment of the present invention, the non-image area of the output image is considered white, and it will be used to fill around an image area being printed on a document. However, it is noted that the non-image area of the output document could instead be designated as a non-printing area so as not to interfere with the color of the document. The blocks of image data for the non-image areas of the output image are fed into multiplexer 33. [0041]
  • Multiplexer 33 selects between the image data being received from the FWA 11 and the image data for the non-image area received from the white field area generator 29. The selection by multiplexer 33 can be realized by a hardware multiplexing circuit or by a software masking or suppression routine executed by a microprocessor. The multiplexer 33 makes its selection based on a control signal from the image area circuit 31. The image area circuit 31 receives the scanline signal and pixel clock signal from the FWA 11. The image area circuit 31 also receives the set coordinate values VC0 and VC1 and the center value of the input document. From these values, the image area circuit determines the skew angle of the input document, calculates the positions (coordinate values VC2 and VC3) of the third and fourth corners (C2, C3) of the input document, and determines the white field area of the output image. The functions performed by the image area circuit 31 can be carried out in hardware or in software by a microprocessor. [0042]
  • Thus, the image area circuit produces a select signal which selects the data generated by the white field area generator 29 when the image area circuit determines that a particular pixel in the output image corresponds to a non-image area. The data selected by the multiplexer 33 is stored in a memory 35 as a bitmap of the output image for later transmission, storage, or printing. [0043]
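  • A software analogue of the selection performed by multiplexer 33 is sketched below: for every output pixel, either the scanned video or the white fill value is stored, depending on whether the pixel falls inside the registered image area. The in_image_area test is a hypothetical stand-in for the decision made by the image area circuit 31 and is assumed to be supplied by the caller.

```python
WHITE = 255   # assumed fill value for 8-bit gray video

def assemble_output(scanned_rows, in_image_area):
    """scanned_rows: iterable of scanlines (lists of pixel values) from the FWA.
    in_image_area(scanline, pixel) -> True when the pixel belongs to the
    registered document image; False selects the white fill value instead."""
    bitmap = []
    for scanline, row in enumerate(scanned_rows):
        out_row = [value if in_image_area(scanline, pixel) else WHITE
                   for pixel, value in enumerate(row)]
        bitmap.append(out_row)            # stored, as in memory 35, for later use
    return bitmap
```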
  • Although the determination of the various values above with respect to corners and skew has been described with respect to specific circuitry, in the preferred embodiment of the present invention these values are calculated in a processor 103 as illustrated in FIG. 6. The architecture of FIG. 6 will be described below. [0044]
  • FIG. 6 illustrates a preferred embodiment of the electronic document registration system of the present invention. A sync circuit 101 receives grey video data, an lnsync signal, and a pgsync signal from an image input terminal (IIT) (not shown). The IIT may be a digital platen scanner or a constant velocity transport digital scanner. The sync circuit 101 generates a video signal, a valid video pixel signal, an lnsync signal, and a pgsync signal, which are fed to an edge detect circuit 105. The lnsync and pgsync signals are also fed to a processor 103. [0045]
  • The edge detect circuit 105 generates a video signal, a valid signal, an lnsync signal, and a pgsync signal, which are fed to a window generator 107. The window generator 107 generates an effptr (effect pointer) signal, a video signal, a valid signal, an lnsync signal, and a psyncw signal, which are fed to a suppress and mask video circuit 109. The suppress and mask video circuit 109 generates a video signal which is fed to an image processing section for further image processing. [0046]
  • The system illustrated in FIG. 6 and discussed above operates as follows: [0047]
  • Prior to the start of scanning, processor 103 programs the page length (number of scanlines) and line length (number of pixels) registers of the sync circuit 101 with values that are a predetermined percentage larger than the input document image size. In the preferred embodiment, the predetermined percentage is 10%. [0048]
  • The IIT platen has a sensor which asserts the pgsync signal that is received by the sync circuit 101. The IIT also marks the beginning of each scanline of video by asserting the lnsync signal that is received by the sync circuit 101. The pgsync and lnsync signals are momentary signals. [0049]
  • The sync circuit 101 will, in response to the pgsync and lnsync signals from the IIT, assert a pgsync signal and an lnsync signal. The duration of these lnsync and pgsync signals is determined by counters in the sync circuit 101 which count up to the values stored in the page length and line length registers. These are the values programmed into the sync circuit 101, prior to scanning, by the processor 103. [0050]
  • Processor 103 maintains a count of the sync circuit's lnsync signal after the assertion of pgsync by the sync circuit 101. It is noted that initially the pgsync signal from the sync circuit 101 is not gated through to the output of the edge detect circuit. More specifically, the psync signal generated by the edge detect circuit 105 remains in a deasserted state until processor 103 locates the slow scan lead edge of the document. [0051]
  • Processor 103 locates the edges of the document in the video using the edge position data and the gray video of the center pixel provided by the edge detect circuit 105. The location of the center pixel is programmable in the edge detect circuit 105. [0052]
  • Edge detect circuit 105 measures the valid video pixel signal from the sync circuit 101 and detects, for each scanline, the first “black-to-white” transition pixel in the video as the fast scan start edge and the last “white-to-black” transition pixel in the video as the fast scan end edge of the document. The pixel positions of the two edges are stored along with the gray video value of the center pixel. [0053]
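  • The per-scanline edge measurement can be pictured as follows. The sketch assumes 8-bit gray video and a single fixed threshold separating “black” from “white”, which is a simplification of the gray-video comparison actually performed by the edge detect circuit 105.

```python
def scanline_edges(row, threshold=128):
    """Return (fs_start, fs_end) for one scanline of 8-bit gray video.

    fs_start: first black-to-white transition pixel (fast scan start edge).
    fs_end:   last white-to-black transition pixel (fast scan end edge).
    Either value is None when no such transition exists in the scanline."""
    fs_start = fs_end = None
    for i in range(1, len(row)):
        dark_to_light = row[i - 1] < threshold <= row[i]
        light_to_dark = row[i - 1] >= threshold > row[i]
        if dark_to_light and fs_start is None:
            fs_start = i
        if light_to_dark:
            fs_end = i                    # keep only the last such transition
    return fs_start, fs_end
```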
  • Processor 103, responding to an interrupt generated by the edge detect circuit 105 at the end of the lnsync signal, reads the two edge positions and the center pixel value from the edge detect circuit 105 via a CPU bus. The processor 103 continues to acquire the edge position data from the edge detect circuit 105 until it has located the leading edges of the document. Default registration is applied when the processor 103 is unable to locate the corners and/or the lead edge of the document. [0054]
  • Once processor 103 has located the slow scan lead edge of the document, the edge detect circuit 105 is programmed by processor 103 to gate the pgsync signal through to the psync output. The assertion of the psync signal by the edge detect circuit 105 marks the beginning of the imaged document and controls slow scan registration. Processor 103 determines the slow scan trail edge of the document by extrapolating, from the located start edge and corners, the adjusted width of the source document programmed prior to the start of scan. The adjusted width is equal to the nominal width of the source document minus a tolerance value. [0055]
  • Once the psync signal has been asserted, a white masking window is generated by the window generator 107 and applied by the suppress and mask video circuit 109 to the slow scan lead edge of the detected document image to serve as “edge fade-out” and to prevent black wedges in the printed output if the document image is skewed. The processor 103 then reprograms a new page length value in the sync circuit 101 to properly terminate the pgsync signal. The sync circuit 101 will deassert pgsync when the page length counter reaches the new value stored in the page length register. [0056]
  • Processor 103 also locates the fast scan start and end of the document. Processor 103 programs the window generator 107 to set up windows to suppress the video outside the located fast scan document image area. Referring to the example illustrated in FIG. 1, processor 103 programs the width and height of the three non-image windows and the one image window, and the effect pointers for each window. The effect pointers for the three non-image windows are programmed so that the pixels in these windows will be suppressed by the suppress and mask video circuit 109. The effect pointer for the image window is programmed to apply the desired image processing effect to the pixels in this window. The location of the image window sets the fast scan registration of the imaged document. [0057]
  • It is noted that instead of reloading the sync circuit 101 with a new page length value to terminate pgsync, processor 103 may program an additional window in the window generator 107 to deassert psyncw. The height of this window and the page length value stored in the sync circuit 101 are coordinated to terminate before the arrival of the next pgsync from the IIT. [0058]
  • To reduce memory, the pixels outside the document image can be suppressed rather than masked. If the size of the document image located by the processor 103 is larger than the size programmed prior to the start of scan, the difference is suppressed as well. If the document image is smaller than the size programmed prior to the start of scan, additional windows are created to mask the background video, in addition to the suppression, to fill out the document image to the size programmed prior to the start of scan. [0059]
  • Further operations and principles of the present invention will be explained in more detail utilizing the illustrations of FIGS. 1 and 2. As noted before, FIG. 1 shows a graphical representation illustrating the overlaid relationship between the input document and the output image. It is noted that the skew illustrated in FIGS. 1 and 2 has been exaggerated in order to show the concepts of the present invention more clearly. [0060]
  • As illustrated in FIG. 1, the input document travels in the direction of the arrow such that the leading edge of the input document 1 will be the first edge read by the scanner. In the drawing, the actual physical corners of the input document are denoted as C0, C1, C2, and C3. C0 represents the first corner of the input document that will be detected by the present invention. Moreover, C1 represents the second physical corner of the document that will be detected by the present invention. [0061]
  • As the input document travels through a CVT scanner, the full width array pixels read the background of the CVT scanner until the input document actually is placed in the optical path between the light source and the full width array. The transition of the input document into the optical path causes the full width array to generate edge data which is detected by an edge detecting circuit. [0062]
  • If the input document is not skewed, a properly registered input document will cause the full width array to generate a full scanline of edge data. However, if the input document is skewed, the first corner of the input document transitioning into the optical path will create a partial scanline of edge data. As more of the input document is positioned into the optical path, the edge data produced by the full width array will migrate in the fast scan direction until the next corner of the input document transitions into the optical path. [0063]
  • As discussed above with respect to the present invention, the center pixel of the full width array is monitored to determine when that pixel produces edge data. When the center cell produces edge data, the center value of the input document is determined. As illustrated in FIG. 1, upon determining the center value of the input document, the boundary of the first white field area is established. The first white field area 4 is increased in area, scanline by scanline, until the present invention detects or establishes the coordinate value VC1. Thus, the width of the first white field area is equal to the number of scanlines between the center value and the established coordinate value VC1. The length of the first white field area 4 is the width of the document, which is established by the sensors in the document feeder apparatus. [0064]
  • Once the values VC0 and VC1 are established, the present invention is able to calculate the skew of the input document as well as the coordinate values VC2 and VC3 of the corners C2 and C3, respectively, of the input document. In other words, the present invention is able to electronically map out the position of the input document as it traverses across the full width array. To produce a deskewed output image, the mapped input document is rotated by any conventional rotation method such that the corners C0, C1, C2, and C3 are transformed to the newly calculated output image corners illustrated by solid squares in FIG. 1. The deskewed output image is represented by the dotted line boundary area 2 of FIG. 1. [0065]
  • After the output image has been deskewed and properly mapped, the output image is filled in with the proper non-image areas and image areas. More specifically, the deskewed output image includes the first white field area 4, which is placed in the output image. A second white field area 5 is generated to cover an area associated with the scanlines between the coordinate values VC1 and VC3 (SC1 to SC3) of corners C1 and C3 and the pixels between the transformed corner corresponding to C1 and the coordinate pixel value of VC2 (CPC1 to PC2). Lastly, a third white field area is generated to cover an area having a width corresponding to the number of scanlines between the coordinate value VC3 of corner C3 and the transformed corner associated with C3 (SC3 to CSC3). The other dimension of this third white field area corresponds to the pixel value of the transformed corner associated with C2 and the pixel value of the coordinate value VC0 of corner C0 (CPC2 to PC0). [0066]
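  • As an illustration of how the three white field areas might be laid out from the corner coordinates described above, the sketch below returns each area as a pair of (scanline, pixel) ranges. The transformed-corner values CPC1, CPC2, and CSC3 are passed in as assumptions, taken to be available from the rotation step; this is not the patented generator itself.

```python
def white_field_areas(center_sc, vc0, vc1, vc2, vc3, cpc1, cpc2, csc3, page_width_px):
    """Return the three white field areas as ((sc_low, sc_high), (px_low, px_high)).

    vc0..vc3 are the (SC, PC) corner values of the mapped input document; cpc1,
    cpc2 and csc3 are the pixel/scanline values of the corresponding transformed
    (deskewed) corners, assumed to be available from the rotation step."""
    _, pc0 = vc0
    sc1, _ = vc1
    _, pc2 = vc2
    sc3, _ = vc3
    first_area = ((center_sc, sc1), (0, page_width_px))  # center value to VC1, full page width
    second_area = ((sc1, sc3), (cpc1, pc2))              # SC1..SC3 by CPC1..PC2
    third_area = ((sc3, csc3), (cpc2, pc0))              # SC3..CSC3 by CPC2..PC0
    return first_area, second_area, third_area
```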
  • By generating these three white field areas, the output image is bordered with non-image data so as to prevent black edges in the output image. The remaining area 3 of FIG. 1 is the image data area, wherein the image data from the input document associated with this area is placed into the output image. [0067]
  • The image data from the input document, which is transferred to the output image area, is bounded by a first corner having the pixel value of the coordinate value VC0 (PC0) and the scanline value of the coordinate value VC1 (SC1). The second corner of this bounded area has the pixel value of the coordinate value VC2 (PC2) and the scanline value of the coordinate value VC1 (SC1). A third corner of this bounded output image area has the pixel value of the coordinate value VC2 (PC2) and the scanline value of the coordinate value VC3 (SC3). The last corner of this bounded output image area has the pixel value of the coordinate value VC0 (PC0) and the scanline value of the coordinate value VC3 (SC3). [0068]
  • FIG. 2 illustrates a variation of the registration process of FIG. 1. In this example, the areas 4′, 5′, 6′, 7, and 8 represent output image areas. More specifically, the white fill area generation method is adjusted to expand the output image area into the previously created non-image area. This process is realized by utilizing the skew information to more closely fit the image read from the input document onto the output document. [0069]
  • It is also noted that the output image area 3 can be increased by reading the edge point along e0 at the line where the center of the input document is detected. This edge point e0 would be utilized as the fast scan end rather than the coordinate value VC0. However, the use of this technique depends on the accuracy of the edge data, which cannot be determined until the center of the document has been detected. [0070]
  • It is further noted that the electronic document registration process described can be incorporated into a digital scanner or digital copier having two operating modes, copy-normal and copy-all. In the copy-normal mode the edges of the document image are located within the video scanned by the IIT. A border of white-masking windows is applied to prevent the black backup roll of a CVT from appearing on the printed output as black borders. The copy-all mode is provided when a user desires the maximum amount of document image and will accept black borders on the printed output. In both modes a white-masking window is applied at the lead edge to serve as “edge fade-out”. [0071]
  • There are three possible registration cases within the copy-normal mode. The first case, which will be described briefly below, is when the original is of good quality; in this case, both fast scan and slow scan registration are determined from the edges of the document. [0072]
  • In this situation, a processor locates the corners of the document, C0 and C1 as shown in FIGS. 1 and 2, and calculates the skew angle of the document. The processor then locates the slow scan lead edge of the document and asserts psync by executing an edge detection algorithm (in the preferred embodiment of the present invention, a software routine) that detects the first slow scan “black-to-white” transition in the scanned video. Subsequently, a white-masking window is applied at the slow scan lead edge of the document to prevent the black backup roll of the CVT from being imaged and to serve as “edge fade-out”. [0073]
  • Using the locations of the corners C0 and C1, the detected slow scan lead edge of the document, and the calculated lead edge skew angle, the processor installs white-masking windows to prevent black wedges from being imaged at the fast scan start and end edges and at the slow scan trailing edge of the printed output. The processor also installs suppression windows outside of the white-masking windows to frame the detected image for processing. Together, the size and location of the suppression windows and the white-masking windows determine the fast scan registration of the image. [0074]
  • The processor then programs the psync or psyncw signal to be deasserted to mark the end of the processed image. The number of lines in the processed image is predetermined based on the size of the original document detected by side guides and a sensor in the input paper tray of the CVT. The second case, which will be described briefly below, is when the condition of the original prevents the electronics from locating the corners of the document with sufficiently high confidence (dog-eared corners, torn or missing corners, etc.) but the document still has a well defined lead edge; in this case, the document registration system uses the detected lead edge for slow scan registration but applies a default fast scan registration based on the document length detected by the side guides in the CVT. [0075]
  • In this situation, if the processor cannot locate corner C1, the fast scan length of the processed image is assigned a default value equal to the size of the original document detected by the side guides of the CVT input paper tray. This length is centered about the center pixel location to enable the installation of the suppression and masking windows. The processor then locates the slow scan lead edge of the document image and asserts psync by executing the edge locating algorithm noted above, which detects the first slow scan “black-to-white” transition in the scanned video. The slow scan size of the lead edge white-masking window in this mode is assigned a default value equal to a minimum of 13 scanlines after the detection of the document lead edge. The 13-scanline value is based on a maximum CVT skew of 3.75 mrads extending over 11 inches of document length. [0076]
  • The processor then installs suppression windows and white-masking windows based on the located slow scan lead edge and the defaulted fast scan start and end for the document image. The processor programs the psync or psyncw signal to be deasserted to mark the end of the processed image. The number of scanlines in the processed image is predetermined based on the size of the original document detected by the side guides and the sensor in the input paper tray of the CVT. [0077]
  • The third case, which will be described briefly below, is when neither the corners nor the lead edge of the document is detected by the electronics; in this case, defaults are applied to both the slow scan and fast scan registration. [0078]
  • In this situation, the processor will assert psync if, after a predetermined number of scanlines, the slow scan lead edge of the document has yet to be located. The predetermined number of scanlines is determined according to the concept of “dead reckoning” from the assertion of pgsync from the IIT. The installation of the lead edge white-masking windows and the fast scan suppression/masking windows is the same as described above with respect to the second case. [0079]
  • As noted above, automatic document detection is intended to locate “any rectangular piece” of document placed “anywhere” on the platen. The present invention is directed to such a process, which requires an initial prescan to locate edges (a black->white transition followed by a white->black transition). With the detection of the corners of the document with respect to the platen edges, a second scan is performed to image the appropriate region (either the outer bounding box containing the entire document, or the largest possible rectangular box that could be enclosed within the four corners to avoid any black borders). The document size obtained during the prescan enables the machine either to set up reduction/enlargement parameters to fit onto a preselected output paper or to select the appropriate output paper to fit 100% of the input image. [0080]
  • The present invention is directed to a method for detecting all four corners of a document placed on a platen. This method is suited for systems that do not have a windowing chip to blank out non-rectangular regions. Unlike the process described in U.S. Pat. No. 5,528,387, no assumption is made about the input size of the document to determine the third and fourth corners (C2, C3) of the input document. An automatic document detection, or AutoFind, process can be broken into two distinct blocks as seen in FIG. 7. The first block 1000 involves hardware which looks for edges, either black->white or white->black transitions, as described in U.S. Pat. No. 5,528,387. The second block 1002 then analyzes the edge information to accurately determine the four corners of the document. In the current implementation, a maximum of two edges can be detected in each scanline scanned: an FSStart (black->white transition) and an FSEnd (white->black transition). Scanners with gray platen covers provide enough contrast between the cover and the document, but those with white platen covers require the platen cover to be open for detecting the transitions. The steps involved in detecting the four corners of the document from the edge information are shown in the flowchart of FIG. 8. This flowchart represents the functionality carried out in the second block 1002 of FIG. 7. [0081]
  • As illustrated in FIG. 8, the corner values are initialized at step S1001. Thereafter, at step S1002, the edges of the document are detected, and at step S1003 it is determined whether the first corner has been detected. If yes, the first corner coordinates are stored at step S1008. If the first corner is not detected, it is determined at step S1004 whether the start or leading edge is less than a first threshold. [0082]
  • If it is determined that the start edge is less than the first threshold, it is determined whether the edge is a valid corner at step S1005. If it is not a valid corner, the process returns to detecting edges. On the other hand, if it is a valid corner, step S1009 stores the coordinates of the start edge. [0083]
  • If it is determined that the start edge is greater than the first threshold at step S1004, step S1006 determines whether the end or trailing edge is greater than a second threshold. If it is determined that the end edge is greater than the second threshold, it is determined whether the edge is a valid corner at step S1007. If it is not a valid corner, the process returns to detecting edges. On the other hand, if it is a valid corner, step S1010 stores the coordinates of the end edge. [0084]
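  • The analysis performed in the second block 1002 can also be pictured as a single pass over the per-scanline (FSStart, FSEnd) pairs gathered during the prescan, recording the first corner, the extreme FSStart and FSEnd positions, and the last corner. The function below is an illustrative sketch, not the patented routine; it mirrors the prescan outputs listed further below.

```python
def find_corners(edges):
    """edges: per-scanline (fs_start, fs_end) pixel pairs from the prescan, with
    None entries for scanlines in which no document edge was seen.

    Returns four (fast scan, slow scan) estimates: the first corner detected,
    the minimum FSStart location, the maximum FSEnd location, and the last
    corner detected, in that order."""
    first = last = min_start = max_end = None
    for scanline, pair in enumerate(edges):
        if pair is None:
            continue
        fs_start, fs_end = pair
        if first is None:
            first = (fs_start, scanline)                 # C0: first corner detected
        last = (fs_end, scanline)                        # C3: last corner detected
        if min_start is None or fs_start < min_start[0]:
            min_start = (fs_start, scanline)             # C1: minimum FSStart
        if max_end is None or fs_end > max_end[0]:
            max_end = (fs_end, scanline)                 # C2: maximum FSEnd
    return first, min_start, max_end, last
```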
  • In the preferred embodiment of the present invention, the edge information from the edge detecting circuitry is obtained by averaging over a few pixels, which avoids picking up noise in the data. An additional validity check is done to further ensure that the corners found are not due to some “dust” on the platen. A possible checking criterion for both edges (StartEdge and EndEdge) is given below: [0085]
  • Edge_i = Edge_{i−j} ± δ, j ∈ [−2, 2]
  • δ is to compensate for skewed corners. [0086]
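  • A minimal sketch of this validity check, assuming the per-scanline edge positions are held in a list; the values used for δ and the neighborhood size are hypothetical.

```python
def is_valid_corner(edge_positions, i, delta=3, window=2):
    """Check that edge position i is consistent with its neighbors, following the
    criterion Edge_i = Edge_{i-j} +/- delta for j in [-window, window]. The
    values of delta and window here are hypothetical."""
    reference = edge_positions[i]
    for j in range(-window, window + 1):
        if j == 0 or not 0 <= i - j < len(edge_positions):
            continue
        neighbor = edge_positions[i - j]
        if neighbor is None or abs(neighbor - reference) > delta:
            return False      # more likely dust or noise than a document corner
    return True
```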
  • At the end of a prescan, the following information can be obtained: [0087]
  • The first corner detected (C0FS, C0SS). [0088]
  • The minimum FSStart location (C1FS, C1SS) and the corresponding FSEnd (C1FS). [0089]
  • The maximum FSEnd location (C2FS, C2SS) and the corresponding FSStart (CsFS). [0090]
  • The last corner detected (C3FS, C3SS). [0091]
  • Using these coordinates, the present invention programs the scanner to scan either a window that encloses the four corners of the document or a window that is enclosed within the four corners of the document, based on the user's preference {Black Border Erase (“BBE”) feature OFF/ON}. FIG. 9 shows an example of a skewed document placed on the platen. BBE OFF results in imaging the entire input image at the expense of black borders around the edges, while BBE ON eliminates the black borders at the cost of losing part of the image. Based on the specific application, the user might choose one or the other. [0092]
  • To avoid jamming of paper while printing, typical xerographic engines require the edges of the output image to be white. Therefore, a few (X) millimeters of the output image are always blanked around the edges. So if BBE is ON, an additional test is performed to check whether the document is skewed by more than X millimeters. The additional test involves analyzing the statistics of a few edges taken every ¼″ of the document. A “roughly straight” document (skewed less than X millimeters) would have all the points centered around the mean (μ); in other words, the standard deviation (σ) would be less than some value σo. If a document is skewed and the standard deviation σ < σo, the outer rectangular coordinates are used for scanning, and the regular X-millimeter edge erase feature deletes the black borders around the edges. This ensures no extra loss of image when BBE is ON. [0093]
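  • The skew test can be sketched as follows. Deriving the threshold σo from X, as done by default here, is an assumption made only for this sketch; the description above requires only that σ be compared against some value σo.

```python
import statistics

def can_use_outer_box(edge_samples_px, dpi=300, x_mm=3.0, sigma_o_px=None):
    """Decide whether the outer bounding box can be scanned even with BBE ON.

    edge_samples_px: edge positions sampled every 1/4 inch along the document.
    x_mm: the border width always blanked by the print engine.
    sigma_o_px: threshold on the standard deviation; deriving it from x_mm, as
    done by default here, is an assumption made for this sketch."""
    x_px = x_mm / 25.4 * dpi
    if sigma_o_px is None:
        sigma_o_px = x_px
    sigma = statistics.pstdev(edge_samples_px)
    # A "roughly straight" document keeps its sampled edges close to the mean,
    # so the regular X-millimeter edge erase removes any residual black border.
    return sigma < sigma_o_px
```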
  • Once the appropriate coordinates are determined, the size of the output image can also be estimated as follows: [0094]
  • FS size = |C2FS − C1FS|, if BBE is OFF or FS skew < X millimeters [0095]
  • FS size = |C1FS − C2FS|, if BBE is ON [0096]
  • SS size = |C3SS − C0SS|, if BBE is OFF or SS skew < X millimeters [0097]
  • SS size = |C2SS − C1SS|, if BBE is ON [0098]
  • With the knowledge of the FS and SS sizes, the appropriate paper tray can be selected to fit the entire image. However, if a particular output paper is selected by the user, the scanned image can be automatically scaled with the following ratios to fit the paper. [0099]
  • FS Ratio=(FS Output Paper Size)/(FS Size) [0100]
  • SS Ratio=(SS Output Paper Size)/(SS Size) [0101]
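  • Putting the size estimates and scaling ratios together, a sketch might look like the following. The primed (inner-box) corner coordinates are not distinguished from the unprimed ones here, which is a simplification of the formulas above; the corner and paper arguments are assumed to be in the same units.

```python
def output_size_and_ratios(c0, c1, c2, c3, paper_fs, paper_ss, bbe_on):
    """Corners are (FS, SS) coordinates from the prescan; paper_fs and paper_ss
    are the output paper dimensions in the same units. The primed (inner-box)
    corners are not distinguished here, a simplification of the formulas above."""
    if bbe_on:
        fs_size = abs(c1[0] - c2[0])       # inner box, fast scan direction
        ss_size = abs(c2[1] - c1[1])       # inner box, slow scan direction
    else:
        fs_size = abs(c2[0] - c1[0])       # outer box: min FSStart to max FSEnd
        ss_size = abs(c3[1] - c0[1])       # outer box: first to last corner
    fs_ratio = paper_fs / fs_size          # scale factors to fit the chosen paper
    ss_ratio = paper_ss / ss_size
    return (fs_size, ss_size), (fs_ratio, ss_ratio)
```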
  • The present invention has been described in detail above; however, various modifications can be implemented without departing from the spirit of the present invention. For example, the preferred embodiment of the present invention has been described with respect to a CVT system; however, the present invention can be readily implemented with a platen scanning system whether the document is being placed on the platen manually or by a document handler. [0102]
  • Moreover, the various circuits illustrated in FIG. 6 may be implemented on ASICs and the various calculations implemented in software. [0103]
  • In recapitulation, the present invention provides a flexible electronic registration system which provides registration of an input document without requiring a mechanical device. [0104]
  • While the present invention has been described with reference to the various embodiments disclosed hereinbefore, it is not to be confined to the details set forth above, but is intended to cover such modifications or changes as may be made within the scope of the attached claims. [0105]

Claims (10)

What is claimed is:
1. A system for electronically registering an image on an input document, comprising:
scanning means for generating an image data stream representing an electronic image of the image on the input document;
edge detecting means, operatively connected to said scanning means, for detecting edge data within the image data stream;
first corner detecting means, operatively connected to said edge detecting means, for detecting a first corner of a leading edge of the input document based on the detected edge data and for establishing a first coordinate value therefrom;
second corner detecting means, operatively connected to said edge detecting means, for detecting a second corner of a leading edge of the input document based on the detected edge data and for establishing a second coordinate value therefrom;
edge range determining means for determining a minimum and maximum location for a leading edge of the scanned document and for determining a minimum and maximum location for a trailing edge of the scanned document; and
window means for generating an image window representing valid image data to be processed and rendered based on said minimum and maximum location for a leading edge of the scanned document, said minimum and maximum location for a trailing edge of the scanned document, said first coordinate value, and said second coordinate value.
2. The system as claimed in
claim 1
, wherein said first corner detecting means establishes the first coordinate value as being equal to the coordinate value of the detected corner when the first corner is detected within a predetermined number of scanlines.
3. The system as claimed in
claim 1
, wherein said first corner detecting means establishes the first coordinate value as being equal to the coordinate value of the detected corner when the first corner is detected within a predetermined number of pixels of a nominal center value.
4. The system as claimed in
claim 1
, wherein said window means creates a scanning window which encloses all four corners of the document being scanned.
5. The system as claimed in
claim 1
, wherein said window means creates a scanning window which is within all four corners of the document being scanned.
6. A method for electronically registering an image on an input document, comprising the steps of:
(a) generating an image data stream representing an electronic image of the image on the input document;
(b) detecting edge data within the image data stream;
(c) detecting a first corner of a leading edge of the input document based on the detected edge data and establishing a first coordinate value therefrom;
(d) detecting a second corner of a leading edge of the input document based on the detected edge data and establishing a second coordinate value therefrom;
(e) determining a minimum and maximum location for a leading edge of the scanned document;
(f) determining a minimum and maximum location for a trailing edge of the scanned document; and
(g) generating an image window representing valid image data to be processed and rendered based on the minimum and maximum location for a leading edge of the scanned document, the minimum and maximum location for a trailing edge of the scanned document, the first coordinate value, and the second coordinate value.
7. The method as claimed in
claim 6
, wherein said step (c) establishes the first coordinate value as being equal to the coordinate value of the detected corner when the first corner is detected within a predetermined number of scanlines.
8. The method as claimed in
claim 6
, wherein said step (c) establishes the first coordinate value as being equal to the coordinate value of the detected corner when the first corner is detected within a predetermined number of pixels of a nominal center value.
9. The method as claimed in
claim 6
, wherein said step (g) creates a scanning window which encloses all four corners of the document being scanned.
10. The method as claimed in
claim 6
, wherein said step (g) creates a scanning window which is within all four corners of the document being scanned.
US09/867,901 1998-09-23 2001-05-30 Electronic image registration for a scanner Abandoned US20010022675A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US09/867,901 US20010022675A1 (en) 1998-09-23 2001-05-30 Electronic image registration for a scanner
US10/854,010 US6999209B2 (en) 1998-09-23 2004-05-26 Electronic image registration for a scanner

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15900198A 1998-09-23 1998-09-23
US09/867,901 US20010022675A1 (en) 1998-09-23 2001-05-30 Electronic image registration for a scanner

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15900198A Division 1998-09-23 1998-09-23

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/854,010 Division US6999209B2 (en) 1998-09-23 2004-05-26 Electronic image registration for a scanner

Publications (1)

Publication Number Publication Date
US20010022675A1 true US20010022675A1 (en) 2001-09-20

Family

ID=22570647

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/867,833 Abandoned US20010022674A1 (en) 1998-09-23 2001-05-30 Electronic image registration for a scanner
US09/867,901 Abandoned US20010022675A1 (en) 1998-09-23 2001-05-30 Electronic image registration for a scanner
US10/854,010 Expired - Fee Related US6999209B2 (en) 1998-09-23 2004-05-26 Electronic image registration for a scanner

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/867,833 Abandoned US20010022674A1 (en) 1998-09-23 2001-05-30 Electronic image registration for a scanner

Family Applications After (1)

Application Number Title Priority Date Filing Date
US10/854,010 Expired - Fee Related US6999209B2 (en) 1998-09-23 2004-05-26 Electronic image registration for a scanner

Country Status (1)

Country Link
US (3) US20010022674A1 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050162707A1 (en) * 2004-01-24 2005-07-28 Owens Brian K. Scanning apparatus and method for full page scans
WO2005081829A2 (en) 2004-02-26 2005-09-09 Mediaguide, Inc. Method and apparatus for automatic detection and identification of broadcast audio or video programming signal
US7133638B2 (en) * 2004-11-18 2006-11-07 Xerox Corporation Scanning method and an image-processing device including the same
US7679625B1 (en) * 2005-01-07 2010-03-16 Apple, Inc. Straightening digital images
US7755808B2 (en) 2005-11-17 2010-07-13 Xerox Corporation Document scanner dust detection systems and methods
US8164762B2 (en) * 2006-09-07 2012-04-24 Xerox Corporation Intelligent text driven document sizing
US7944592B2 (en) * 2006-12-18 2011-05-17 Hewlett-Packard Development Company, L.P. Image capture device
WO2008106465A1 (en) * 2007-02-26 2008-09-04 Mediaguide, Inc. Method and apparatus for automatic detection and identification of unidentified video signals
JP4867946B2 (en) * 2008-03-31 2012-02-01 ブラザー工業株式会社 Image reading device
US8064729B2 (en) * 2008-04-03 2011-11-22 Seiko Epson Corporation Image skew detection apparatus and methods
JP4525787B2 (en) * 2008-04-09 2010-08-18 富士ゼロックス株式会社 Image extraction apparatus and image extraction program
JP4807406B2 (en) * 2008-12-16 2011-11-02 ブラザー工業株式会社 Image reading device
CN101957991A (en) * 2010-09-17 2011-01-26 中国科学院上海技术物理研究所 Remote sensing image registration method
JP5966569B2 (en) * 2012-04-27 2016-08-10 ブラザー工業株式会社 Image reading apparatus and reading control program
US8971637B1 (en) 2012-07-16 2015-03-03 Matrox Electronic Systems Ltd. Method and system for identifying an edge in an image
JP6330505B2 (en) * 2014-06-18 2018-05-30 ブラザー工業株式会社 Image reading device
JP6330506B2 (en) 2014-06-18 2018-05-30 ブラザー工業株式会社 Image reading device
JP6880654B2 (en) * 2016-10-31 2021-06-02 株式会社リコー Image processing device, image forming device, image processing method and image processing program
JP6999511B2 (en) 2018-07-02 2022-01-18 東芝テック株式会社 Document reader and document scanning method

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4335954A (en) 1981-03-04 1982-06-22 Xerox Corporation Copier registration method and apparatus
US4391505A (en) 1981-10-19 1983-07-05 Xerox Corporation Over-platen document registration apparatus
US4668995A (en) * 1985-04-12 1987-05-26 International Business Machines Corporation System for reproducing mixed images
US4708468A (en) 1985-12-30 1987-11-24 Xerox Corporation Self adjusting paper guide
US4809968A (en) 1988-03-21 1989-03-07 Xerox Corporation Side registration with subtle transverse corrugation
US5189711A (en) 1989-11-24 1993-02-23 Isaac Weiss Automatic detection of elliptical shapes
JP3249605B2 (en) * 1992-11-25 2002-01-21 イーストマン・コダックジャパン株式会社 Document edge detection device
KR0158123B1 (en) 1995-09-26 1998-12-15 김광호 Image data arranging method for a facsimile
JPH09163121A (en) * 1995-12-12 1997-06-20 Minolta Co Ltd Digital image forming device
US5901253A (en) * 1996-04-04 1999-05-04 Hewlett-Packard Company Image processing system with image cropping and skew correction
US5912448A (en) 1997-05-16 1999-06-15 Hewlett-Packard Company Method and apparatus for detecting paper skew in image and document scanning devices

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4450579A (en) * 1980-06-10 1984-05-22 Fujitsu Limited Recognition method and apparatus
US4833722A (en) * 1987-07-24 1989-05-23 Eastman Kodak Company Apparatus and methods for locating edges and document boundaries in video scan lines
US5093653A (en) * 1988-11-10 1992-03-03 Ricoh Company, Ltd. Image processing system having skew correcting means
US5021674A (en) * 1989-01-14 1991-06-04 Erhardt & Leimer Gmbh Process for determining the location of edges and photoelectronic scanning device for scanning edges
US5359677A (en) * 1990-12-11 1994-10-25 Sharp Kabushiki Kaisha Image reader and facsimile machine using such image reader
US5257325A (en) * 1991-12-11 1993-10-26 International Business Machines Corporation Electronic parallel raster dual image registration device
US5506918A (en) * 1991-12-26 1996-04-09 Kabushiki Kaisha Toshiba Document skew detection/control system for printed document images containing a mixture of pure text lines and non-text portions
US5854854A (en) * 1992-04-06 1998-12-29 Ricoh Corporation Skew detection and correction of a document image representation
US5313311A (en) * 1992-12-23 1994-05-17 Xerox Corporation Hybrid mechanical and electronic deskew of scanned images in an image input terminal
US5487116A (en) * 1993-05-25 1996-01-23 Matsushita Electric Industrial Co., Ltd. Vehicle recognition apparatus
US5818976A (en) * 1993-10-25 1998-10-06 Visioneer, Inc. Method and apparatus for document skew and size/shape detection
US5384621A (en) * 1994-01-04 1995-01-24 Xerox Corporation Document detection apparatus
US5528387A (en) * 1994-11-23 1996-06-18 Xerox Corporation Electronic image registration for a scanner
US5940544A (en) * 1996-08-23 1999-08-17 Sharp Kabushiki Kaisha Apparatus for correcting skew, distortion and luminance when imaging books and the like
US6094501A (en) * 1997-05-05 2000-07-25 Shell Oil Company Determining article location and orientation using three-dimensional X and Y template edge matrices
US6360026B1 (en) * 1998-03-10 2002-03-19 Canon Kabushiki Kaisha Method for determining a skew angle of a bitmap image and de-skewing and auto-cropping the bitmap image
US6310984B2 (en) * 1998-04-09 2001-10-30 Hewlett-Packard Company Image processing system with image cropping and skew correction
US6271935B1 (en) * 1998-09-23 2001-08-07 Xerox Corporation Method to remove edge artifacts from skewed originals

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040076341A1 (en) * 2001-03-30 2004-04-22 Sharp Laboratories Of America, Inc. System and method for digital document alignment
US7145699B2 (en) 2001-03-30 2006-12-05 Sharp Laboratories Of America, Inc. System and method for digital document alignment
US20040240001A1 (en) * 2003-05-29 2004-12-02 Tehrani Justin A. System and method for fast scanning
US7457010B2 (en) 2003-05-29 2008-11-25 Hewlett-Packard Development Company, L.P. System and method for fast scanning
US7515772B2 (en) * 2004-08-21 2009-04-07 Xerox Corp Document registration and skew detection system
US20060039629A1 (en) * 2004-08-21 2006-02-23 Xerox Corporation Document registration and skew detection system
US20060109520A1 (en) * 2004-11-19 2006-05-25 Xerox Corporation Method and apparatus for identifying document size
US7564593B2 (en) 2004-11-19 2009-07-21 Xerox Corporation Method and apparatus for identifying document size
US20090208065A1 (en) * 2008-02-19 2009-08-20 Kabushiki Kaisha Toshiba Image reading apparatus, image reading method, and sheet processing apparatus
US8159728B2 (en) * 2008-02-19 2012-04-17 Kabushiki Kaisha Toshiba Image reading apparatus, image reading method, and sheet processing apparatus
US20090271691A1 (en) * 2008-04-25 2009-10-29 Microsoft Corporation Linking digital and paper documents
US8286068B2 (en) * 2008-04-25 2012-10-09 Microsoft Corporation Linking digital and paper documents
US20100322519A1 (en) * 2009-06-23 2010-12-23 Yuuji Kasuya Image extraction device and image extraction method
US8494219B2 (en) * 2009-06-23 2013-07-23 Ricoh Company, Ltd. Image extraction device and image extraction method

Also Published As

Publication number Publication date
US20040212853A1 (en) 2004-10-28
US6999209B2 (en) 2006-02-14
US20010022674A1 (en) 2001-09-20

Similar Documents

Publication Publication Date Title
US6999209B2 (en) Electronic image registration for a scanner
US5528387A (en) Electronic image registration for a scanner
KR100394202B1 (en) Image correction device
US8009931B2 (en) Real-time processing of grayscale image data
US8064729B2 (en) Image skew detection apparatus and methods
US6064778A (en) Method and apparatus for near real-time document skew compensation
US8018629B2 (en) Image reading apparatus, image reading method, and program for implementing the method
JP3486587B2 (en) Image reading device, image forming device, image forming system, and storage medium
SE9603138D0 (en) Procedure and device for quality assurance when scanning / copying images / documents
JP2010166442A (en) Image reader and method of correcting wrinkle area thereof, and program
JPH1042157A (en) Picture processing method and picture processor
JP2002262083A (en) Image processor
US7085012B2 (en) Method for an image forming device to process a media, and an image forming device arranged in accordance with the same method
JP3881455B2 (en) Image correction apparatus, image correction method, and medium on which image correction method is recorded
JP2019004314A (en) Image reading apparatus
JPH11298683A (en) Image processor and image reader
JP2002290727A (en) Image processing method, its apparatus, its storage medium, and its program
JPH0468658A (en) Picture reader
JPH0235869A (en) Picture reader
JP2009284299A (en) Image processors, image forming apparatus, and program
JP2003134304A (en) Image reading apparatus, image processor, dust detecting method, storage medium and program
JPH10229486A (en) Image reader
JPH10224615A (en) Picture reader
JPH09130600A (en) Image reader
JPH0413363A (en) Offset correction method in picture reader

Legal Events

Date Code Title Description
AS Assignment

Owner name: BANK ONE, NA, AS ADMINISTRATIVE AGENT, ILLINOIS

Free format text: SECURITY AGREEMENT;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:013111/0001

Effective date: 20020621

Owner name: BANK ONE, NA, AS ADMINISTRATIVE AGENT,ILLINOIS

Free format text: SECURITY AGREEMENT;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:013111/0001

Effective date: 20020621

AS Assignment

Owner name: JPMORGAN CHASE BANK, AS COLLATERAL AGENT, TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:015134/0476

Effective date: 20030625

Owner name: JPMORGAN CHASE BANK, AS COLLATERAL AGENT,TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNOR:XEROX CORPORATION;REEL/FRAME:015134/0476

Effective date: 20030625

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: XEROX CORPORATION, CONNECTICUT

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. AS SUCCESSOR-IN-INTEREST ADMINISTRATIVE AGENT AND COLLATERAL AGENT TO BANK ONE, N.A.;REEL/FRAME:061388/0388

Effective date: 20220822

Owner name: XEROX CORPORATION, CONNECTICUT

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:JPMORGAN CHASE BANK, N.A. AS SUCCESSOR-IN-INTEREST ADMINISTRATIVE AGENT AND COLLATERAL AGENT TO JPMORGAN CHASE BANK;REEL/FRAME:066728/0193

Effective date: 20220822