CA2019134C - Method for compressing and decompressing forms by means of very large symbol matching - Google Patents

Method for compressing and decompressing forms by means of very large symbol matching

Info

Publication number
CA2019134C
Authority
CA
Canada
Prior art keywords
filled
information
empty
forms
accordance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CA002019134A
Other languages
French (fr)
Other versions
CA2019134A1 (en)
Inventor
Dan Shmuel Chevion
Ehud Dov Karnin
Eugeniusz Walach
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of CA2019134A1 publication Critical patent/CA2019134A1/en
Application granted granted Critical
Publication of CA2019134C publication Critical patent/CA2019134C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/41Bandwidth or redundancy reduction
    • H04N1/411Bandwidth or redundancy reduction for the transmission or storage or reproduction of two-tone pictures, e.g. black and white pictures
    • H04N1/4115Bandwidth or redundancy reduction for the transmission or storage or reproduction of two-tone pictures, e.g. black and white pictures involving the recognition of specific patterns, e.g. by symbol matching

Abstract

This method relates to the compression of information contained in filled-in forms (O) by separate handling of the corresponding empty forms (CP) and of the information written into them (VP). Samples of the empty forms are prescanned, the data obtained digitized and stored in a computer memory to create a forms library. The original, filled-in form (O) to be compressed is then scanned, the data obtained digitized, and the retrieved representation of the empty form (CP) subtracted, the difference being the digital representation of the filled-in information (VP), which may now be compressed by conventional methods or, preferably, by an adaptive compression scheme using at least two compression ratios depending on the relative content of black pixels in the data to be compressed.

Description

METHOD FOR COMPRESSING AND DECOMPRESSING FORMS BY MEANS OF VERY LARGE SYMBOL MATCHING

DESCRIPTION

This invention relates to a method for compressing and decompressing image information, in particular in cases where part of the image is invariant or standard, such as in printed forms and, thus, does not contribute to the information content. The inventive method employs the matching of prerecorded data representing the said standard information with the image information obtained from scanning the image in question.

The handling of paper documents is a daily routine in today's office environments. Considerations having to do with the care for the natural resources required for manufacturing the paper used for the documents, the speed of preparing and handling the documents to make them serve their purpose, and their storage and possible later retrieval, have resulted in efforts to reduce the number of documents circulated and to leave the working with the documents to automatic record handling devices. The physical handling of the documents is one important aspect in this connection, the other, possibly more important aspect is the treatment of the information contained in the documents.

The treatment of the information contained in documents generally involves the acquisition of the information by some reading device, the transformation of the acquired information into a machine-readable code, the storing of the coded information for later, and possibly repeated processing, the actual processing of the information, and finally the output of the results of the processing. This output may take visual form, as on a display unit, or in print, or be purely electronic.

The acquisition of the information by a reading device, such as an optical scanner, should be performed with a reasonably high resolution lest the information content should become mutilated or partially lost.
Accordingly, the reading device will produce a high volume of scan data which in turn require a large memory capacity for storage. As a typical example, a page of the A4 size (297 x 210 mm), scanned at 100 pels/cm (where "pel" stands for picture element and is either a white or black dot), requires about 700 kbytes of storage space. Even a rather modest number of documents, say a couple of thousand, would call for an unreasonably big memory.
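The order of magnitude quoted above can be checked with a short calculation; this is only an illustration, since the exact byte count depends on the scanner and on how the pels are packed.

```python
# Raw storage for a bilevel A4 page (297 x 210 mm) scanned at 100 pels/cm,
# with one bit per pel.
width_pels = round(21.0 * 100)    # 2100 pels across the page
height_pels = round(29.7 * 100)   # 2970 pels down the page
total_pels = width_pels * height_pels           # 6,237,000 pels
raw_kbytes = total_pels / 8 / 1024              # one bit per pel
print(f"{total_pels:,} pels -> about {raw_kbytes:.0f} kbytes uncompressed")
# roughly 760 kbytes, i.e. the "about 700 kbytes" order of magnitude cited above
```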

To alleviate this problem, known document scanning systems are provided with data compression routines which save about one order of magnitude so that the compressed data of a scanned A4 page can be accommodated on 25 to 75 kbytes of storage space, depending, of course, on the content of the scanned image. Very clever algorithms, based on arithmetic coding, can achieve a further reduction of about 16%. It goes without saying that any compression system must permit later decompression of the information, be it for processing or output.

In a number of applications where very large volumes of documents must be handled, such as, e.g., in connection with a census, it is essential to further increase the compression ratio quite significantly, say by another order of magnitude.

One possible approach to this problem is described in "Combined Symbol Matching Facsimile Data Compression System" by W.K. Pratt, P.J. Capitant, W. Chen, E.R.
Hamilton and R.H. Wallis, in Proc. IEEE, Vol. 68, No.7, July 1980. There it was proposed to feed binary images into a character matching process. Recognized characters will be represented very efficiently by means of their alphanumeric form. Then the remaining residue information will be compressed separately as a conventional binary image. This amounts to an about two-fold increase in compression ratio, although the efficiency of this scheme largely depends on the percentage of recognized characters and on the degree of digitization noise.

An improved technique is disclosed in US-A-4,499,499 (Brickman et al.), where instead of single character recognition the matching of large symbols, such as individual words in the text, is considered. No attempt is, however, proposed to consider larger symbols than words for compression. Accordingly, this reference fails to disclose a method that would enable compression up to a satisfactory ratio.

It is an object of the present invention to propose a method for the compression of image data with a compression ratio considerably larger than offered by conventional compression methods. The inventive method is in particular useful for those images having a definite structure like the one to be found in forms. A large percentage of the documents used in the transaction of business is constituted by forms of various designs and layout, among them administrative questionnaires, checks, traffic tickets and geographical maps, to mention just a few examples.

Each original form (O), i.e. the form presented for processing, is composed of a standard or constant part (CP) which is recurrent in every form of its kind, and a variable part (VP), i.e. the information which is filled in to complete the form. Thus, in mathematical notation, O = CP ∪ VP, where both CP and VP are bilevel images and ∪ denotes a per-pixel union operation such that a pixel in O is black if the corresponding pixel in either CP or VP (or both) is black.
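To make the notation concrete, the union and the idealized, noise-free extraction of the variable part can be sketched on small boolean arrays; NumPy is used here purely for illustration and is not implied by the patent.

```python
import numpy as np

# Bilevel images as boolean arrays: True = black pixel.
CP = np.array([[0, 1, 1, 1, 0],
               [0, 1, 0, 1, 0],
               [0, 1, 1, 1, 0]], dtype=bool)    # constant part: a small box
VP = np.array([[0, 0, 0, 0, 0],
               [0, 0, 1, 0, 0],
               [0, 0, 1, 0, 0]], dtype=bool)    # variable part: a filled-in stroke

O = CP | VP                  # the scanned, filled-in form: O = CP union VP

# In the ideal, noise-free case the variable part is recovered by removing CP:
VP_estimate = O & ~CP
print(VP_estimate.astype(int))
# Note: the pixel where the stroke crosses the box belongs to both CP and VP
# and is lost here; handling such intersections is discussed further below.
```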

The variable part VP generally differs from one form to another. Regarding the information contained in a great number of forms having the same constant part (CP), it is immediately obvious that it is sufficient to have one single copy of the constant part of that type of form, and one copy of each variable part (VP), i.e. as many variable parts as there are individual forms of that particular type of form. Accordingly, each type of form can be stored efficiently by storing its CP, and the lot of individual forms of that type can be stored by just storing their respective variable parts (VP). Actually, this approach is employed frequently in the management of electronic forms.

While this is a simple idea, the question is how the idea can be applied to the vast number of paper forms which must be treated as binary images. One possible solution would be to print the basic pattern of lines, boxes and text of these forms using a special color ink which is transparent to conventional scanners. If a completed form of this type is scanned, the basic pattern (i.e. CP) would be invisible to the scanner, and only the variable part VP would be entered into the computer.
Considerable effort has been devoted to the development of a practical implementation of this approach as communicated by D.E. Nielsen, R.B. Arps and T.H. Morin, "Evaluation of Scanner Spectral Response for Insurance Industry Documents", 16/A44 NCI Program, Working Paper No.
2, May 1973, and F.B. Woods and R.B. Arps, "Evaluation of Scanner Spectral Response for Insurance Industry Documents", 16/A44 NCI Program, Working Paper No. 4, September 1973. The obvious disadvantage of this approach is that the use of special-ink-sensitive scanners would exclude the application of this approach to existing archives. Also, the use of special ink is certainly more cumbersome and costly.

Accordingly, it is an object of this invention to propose an efficient method to digitize the information content of forms, including those in existing archives.

The basic idea is to first scan an empty form to obtain the constant part CP and to store this in a memory.
In this way, a library of possible empty forms of interest in a particular application can be generated. As a completed form is presented to the system, it is scanned, digitized, and the resulting binary information stored.
Then the respective CP of the form scanned is identified and "subtracted" from the binary information of the completed form. The difference thus obtained is the variable part VP, i.e. the information of interest. This will usually comprise only a fraction of the data of the completed form. Further compression by conventional means is possible, and the final storage will be very efficient.

For reconstruction, the compressed data is retrieved from storage, decompressed as usual, and then recombined with the data representing CP. However, in most cases, the further processing will involve only the variable part VP.

The method in accordance with this invention is directed to compressing, for storage or transmission, the information contained in filled-in forms (O) by separate handling of the corresponding empty forms (CP) and of the information written into them (VP), and involves the steps of:
- prescanning the empty forms (CP), digitizing the data obtained, and storing the digitized representations relating to each of the empty forms (CP) in a computer memory to create a forms library,
- scanning the original, filled-in form (O) to be compressed and digitizing the data obtained,
- identifying the particular one of said empty forms (CP) in said forms library and retrieving the digital representation thereof,
- subtracting said retrieved representation of the empty form (CP) from said digital representation of the scanned filled-in form (O), the difference being the digital representation of the filled-in information (VP), and
- compressing the digital representation of the filled-in information (VP) by conventional methods.

Details of the inventive method will hereafter be described by way of example and with reference to the drawings in which:

Figure 1 represents a portion of an original filled-in form as it was scanned and stored in a computer memory (at enlarged scale);
Figure 2 represents the corresponding portion of the empty form;

Figure 3 depicts the result of the registration process;
Figure 4 shows the result of a straightforward subtraction of the images of Figs. 1 and 2;
Figure 5 shows the filled-in portion of the original form with all traces of the empty form removed;
Figure 6 depicts the recombination of the filled-in information of Figure 5 with the empty form of Figure 2;
Figure 7 represents a complete, filled-in form;
Figure 8 represents the result of scanning the corresponding empty form CP;
Figure 9 shows the result of a straightforward subtraction of the images of Figs. 7 and 8;
Figure 10 shows the result of the application of the method in accordance with the invention to Figs. 7 and 8;
Figure 11 represents a reconstruction of the filled-in form from Figs. 8 and 10.

The method in accordance with the invention comprises essentially four stages, viz. the acquisition of the information, the registration of the prestored data with the scanned data, the "subtraction" process, and the compression. An additional stage would be the reconstruction of the original completed form.

The acquisition of the image information has to take into account that the brightness parameters for the constant and variable parts of the form may differ significantly: The hand-written variable part is usually heavier than the printed lines and text of the constant part CP. Therefore, the scanning parameters should be optimized separately for the CP and VP parts. The scanning itself is standard; the scanning device may be a conventional scanner or a digitizing video camera. The scanning is performed line by line, with a resolution on the order of 100 picture elements per centimeter line length, that is, with a dot size of about 0.1 mm diameter.

The scanning results in two strings of binary data, one string representing the constant part CP (i.e. the empty form), the other representing the completed or original, scanned form O = CP ∪ VP, which contains the constant part CP and the filled-in information VP. The task is then simply to extract the VP from the data representing the original form O.

In practical scanning devices, when the same form is scanned twice, the data representing the scanned forms will be slightly different owing to tiny relative displacements which result in small linear or angular misalignments and, hence, deviations in their binary scanning signals. Also, the digitization process may introduce some scaling. This observation also applies to the alignment of the empty form when scanned for storing the CP data, and to the original form 0 when scanned for the acquisition of the VP data. It is, therefore, necessary to register the original form 0 with respect to the prestored empty form with the CP content. This is done by optimizing the parameters of a geometrical transformation capable of transforming one image onto the other. The optimization aims at rendering a certain error measure a minimum at the termination of the transformation process.

Known in the prior art are various techniques for data registration in a number of technical areas, such as pattern recognition, inspection, change detection, character recognition, etc. These techniques are disclosed in the following references: US-A-4,028,531; US-A-4,441,207; US-A-4,644,582; US-A-4,651,341; US-A-4,654,873; US-A-4,672,676; US-A-4,706,296; US-A-4,815,146; H.S. Ranganath, "Hardware Implementation of Image Registration Algorithms", Image and Vision Computing, Vol. 4, No. 3, August 1986, pp. 151-158; W.K. Pratt, P.J. Capitant, W. Chen, E.R. Hamilton and R.H. Wallis, "Combined Symbol Matching Facsimile Data Compression System", Proc. IEEE, Vol. 68, No. 7, July 1980; and B. Silverman, "Algorithm for Fast Digital Image Registration", IBM Technical Disclosure Bulletin, 1971, pp. 1291-1294. Some of the known techniques warrant a brief resume.

In accordance with the Least Squares (LS) or Least Absolute Value (LAV) approach of the last-mentioned reference, one has to take either the square or the absolute value of the difference between a transformed current image and a library image, and to look for the minimum on the set of all permitted transformations.

Under the Cross Correlation scheme, one maximizes, on the set of permitted transformations, the cross correlation of a transformed current image and a library image. Under the Moments Invariance concept, one assumes that the first and second moments of an image are invariant to the translation and rotation, and one computes the eigenvectors of a given distribution, and from these one determines the relative rotation of the two binary images.

A simple yet useful way to compute the transformation parameters is to solve a system of equations stemming from a given match between a set of points in one image and a corresponding set in the reference image. This scheme is divided into automatic point extraction and manual point extraction processes.

Unfortunately, all of the aforementioned approaches tend to be computationally heavy owing to the registration being a two-dimensional problem, and the number of operations being proportional to the number of image pixels. In the case of the handling of entire forms, as in connection with this invention, where huge data arrays must be registered, the prior art techniques are prohibitive from a computational point of view. Consider, for example, a form of the A4 (297 x 210 mm) size. The subsampling of this form, with a standard density of 100 pixels/cm, will yield a binary image having more than 5 million pixels. The registration process with prior art devices will require a number of computations of the same order of magnitude. This is impractical for personal computers of any present design.

In accordance with the present invention, it is proposed to solve the registration problem by means of a dimensionality reduction. This is done by projecting the image on the x- and y-axes and using the Least Absolute Value approach for registering the resulting one-dimensional histograms. Thus, a one-dimensional histogram is defined as an array having only one dimension, i.e. a vector, in contrast to an array having two or more dimensions, such as a matrix. The dimensionality reduction permits the required number of computations to be proportional to the height and to the width of the image (less than 5000 pixels for one A4 page). At the same time, the speed of the registration process will be drastically increased.

In order to make the registration process of the present invention work even for the case of slightly rotated and scaled images, the original image is partitioned into a number of relatively small overlapping segments. For each segment only a simple shift transformation is allowed. The transformation of the entire image can be represented as a combination of shifts of all of the individual segments. Naturally, the smaller the segments, the better can the scheme handle complicated transformations such as rotations. It has been found empirically that for a typical A4 form and a standard scanner, it is sufficient to have 16 blocks per page in a 4 x 4 arrangement.

Some degree of inter-block overlap is necessary in order to avoid the forming of undesirable white schisms separating the blocks which may otherwise be caused by differences in shift among the segments. On the other hand, an increase in the overlap margin reduces the flexibility of the transformation. It was found experimentally that an overlap by two pixels works well in the majority of practical applications.

The generation of the x- and y-histograms will now be explained. To generate the y-histogram, for each segment a vector containing in its ith component the number of black pixels of the corresponding line is constructed.
This is done efficiently by scanning each line byte-wise, without unpacking the bytes to their pixels. The number of "1"s in the current byte is obtained by means of an appropriate look-up table, e.g., one having 256 entries and 9 outputs, and added to the running sum. This process yields the number of black pixels in each line.
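A minimal sketch of this byte-wise counting is given below; it assumes the scan lines of a segment are held as packed bytes in a NumPy array, which is a convention of the example rather than a detail taken from the patent.

```python
import numpy as np

# Look-up table: for each of the 256 possible byte values, the number of "1" bits
# (9 possible outputs, 0 through 8).
POPCOUNT = np.array([bin(b).count("1") for b in range(256)], dtype=np.int32)

def y_histogram(packed_segment):
    """Number of black pixels per scan line.

    packed_segment: uint8 array of shape (lines, bytes_per_line), 1 bit per pixel.
    Returns a vector whose i-th component is the black-pixel count of line i."""
    return POPCOUNT[packed_segment].sum(axis=1)

# Example: two lines of 16 pixels each.
segment = np.array([[0b11110000, 0b00000001],
                    [0b00000000, 0b10000000]], dtype=np.uint8)
print(y_histogram(segment))   # [5 1]
```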

The y-histogram is then registered one-dimensionally using the least absolute difference as follows: The current vector, computed as explained above, is shifted for matching with respect to the library vector (which was obtained in a similar manner from the pre-stored representation of the empty form). The shifting of the current vector is performed by the machine shift command either to the left or to the right by as many places as necessary. For each shift, the absolute difference is computed. The difference being a minimum indicates the optimum shift which will be used subsequently as the relative displacement of the corresponding segments.
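The one-dimensional matching can be sketched as a search over candidate shifts that minimizes the sum of absolute differences; the cyclic shift and the search range used below are simplifications chosen for illustration, not the machine shift instructions described above.

```python
import numpy as np

def best_shift(current, library, max_shift=10):
    """Find the shift of `current` that minimizes the sum of absolute
    differences against `library` (both are 1-D projection histograms)."""
    best, best_err = 0, None
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(current, s)      # simple cyclic shift for this sketch
        err = np.abs(shifted - library).sum()
        if best_err is None or err < best_err:
            best, best_err = s, err
    return best

# Example: the current histogram is the library histogram displaced by 3 lines.
library = np.array([0, 0, 5, 9, 5, 0, 0, 0, 0, 0])
current = np.roll(library, 3)
print(best_shift(current, library))   # -3: shift the current vector back by 3 lines
```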

The generation of the x-histogram is performed similarly: For each segment a vector containing in its ith component the number of black pixels of the corresponding column is constructed. This is done efficiently by scanning each line byte-wise, without unpacking the bytes to their pixels. Obviously, different action needs to be taken for each byte. For each one of the 256 possible byte values, an appropriate short program is prepared in advance. Also, a look-up table with 256 entries and one output for each entry giving the address of the corresponding part of the program is provided. For each byte, one proceeds to one of the 256 program sections via the look-up table, the appropriate section being chosen on the basis of the binary value of the given byte, so that the ith component of the histogram vector is increased only if the corresponding pixel is black.
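In a high-level language the 256 precompiled program sections can be replaced by 256 precomputed increment vectors selected through the same kind of look-up table; the sketch below is such a simplification and assumes NumPy arrays, not the byte-level dispatch of the patent.

```python
import numpy as np

# For every possible byte value, an 8-component 0/1 vector marking its black pixels
# (bit 7 of the byte is taken as the leftmost pixel).
BITS = np.array([[(b >> (7 - i)) & 1 for i in range(8)] for b in range(256)],
                dtype=np.int32)

def x_histogram(packed_segment):
    """Number of black pixels per column of a packed bilevel segment."""
    lines, bytes_per_line = packed_segment.shape
    expanded = BITS[packed_segment]                    # (lines, bytes_per_line, 8)
    return expanded.reshape(lines, bytes_per_line * 8).sum(axis=0)

segment = np.array([[0b10000000, 0b00000001]], dtype=np.uint8)
print(x_histogram(segment))   # 1 at column 0 and at column 15, 0 elsewhere
```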

This may be illustrated by the following example:
Assume that the current byte is 10000000. Clearly, a 1 should be added to the histogram at the place corresponding to the first pixel, leaving the remaining seven pixels unaffected. Going into the look-up table to the line corresponding to the byte 10000000, we find the appropriate address. At that address we find the required short program, which is executed, and we return to the next byte. This procedure ensures that the required number of operations will be proportional to the number of black pixels (this is typically less than 10% of the total number of pixels).

The x-histogram is then one-dimensionally registered using the least mean absolute difference, much in the same way as explained above in connection with the registration of the y-histogram.

The above-described procedure for the computation of the optimum translation parameters is repeated for each segment of the image. There is a need, however, to verify the consistency of the results computed for different blocks. This task is performed by a displacement control module having a twofold purpose: (1) Detect any registration errors. This can, for example, be done by computing the average of the shifts of the eight nearest segments. If the difference between any two results exceeds a certain threshold, e.g. 4, then the normal registration may be assumed to have failed, and the displacement control unit should take over. (2) Estimate the shift parameter for blocks where the normal procedure fails owing to a lack of information. This can occur where an empty form in a given segment has no black pixel. In this case the appropriate shift is estimated on the basis of the shift parameters computed for the nearest neighbours.
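One possible form of the displacement control module's two tasks (error detection against the average of the neighbouring shifts, and estimation of missing shifts) is sketched below; the neighbourhood definition and the use of NaN to mark failed segments are assumptions of this example.

```python
import numpy as np

def check_and_fill_shifts(shifts, threshold=4):
    """shifts: float array of shape (rows, cols, 2) with (dy, dx) per segment;
    NaN entries mark segments where registration failed (e.g. an empty block).
    Returns the array with failed entries replaced by the average of their
    nearest neighbours, plus a mask of segments flagged as inconsistent."""
    rows, cols, _ = shifts.shape
    filled = shifts.copy()
    suspicious = np.zeros((rows, cols), dtype=bool)
    for r in range(rows):
        for c in range(cols):
            # collect the (up to eight) nearest neighbouring segments
            neigh = [shifts[i, j]
                     for i in range(max(0, r - 1), min(rows, r + 2))
                     for j in range(max(0, c - 1), min(cols, c + 2))
                     if (i, j) != (r, c) and not np.isnan(shifts[i, j]).any()]
            if not neigh:
                continue
            avg = np.mean(neigh, axis=0)
            if np.isnan(shifts[r, c]).any():
                filled[r, c] = avg                       # estimate a missing shift
            elif np.abs(shifts[r, c] - avg).max() > threshold:
                suspicious[r, c] = True                  # flag for intervention
    return filled, suspicious
```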

Once the optimum shifts are established, each segment is placed in the appropriate area of the output image array. The output image array is the place in the memory where the result of the registration process is created.

The segments of the scanned image are placed there at their associated locations after the respective shifts have been performed.

The placement in the direction of the y-axis can be controlled by a simple choice of the displacement index.
Assume, for example, that the segment under consideration starts at line 100 of the scanned image, and that after comparison with the corresponding segment in the forms library it is determined that an upward shift of 5 should be performed. This means that the segment under consideration should be added to the output image array starting from line 95. In view of the fact that virtually all computers work in the indexing mode (i.e. all memory addresses are written relative to a certain displacement index, whereby the absolute memory location is obtained by summing the relative address and the displacement index), a shift of the entire segment in the y-direction can be performed by changing a single register value, viz. the value of the index.

It is somewhat more difficult to control the placement in the direction of the x-axis. Here it might be necessary to actually shift the data the appropriate number of places (from 1 to 7). In this manner it is possible to obtain the output array without having to unpack the data to the bit form and back to the byte form.
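Shifting a packed scan line horizontally without unpacking can be done byte by byte, carrying bits over from the neighbouring byte; the sketch below assumes that the leftmost pixel is the most significant bit of the first byte, which is a common but not universal packing convention.

```python
import numpy as np

def shift_row_right(packed_row, k):
    """Shift a packed bilevel row (uint8 array, MSB = leftmost pixel) right by
    k pixels, 0 <= k <= 7, without unpacking the bytes to individual pixels."""
    if k == 0:
        return packed_row.copy()
    cur = packed_row.astype(np.uint16)
    prev = np.concatenate(([0], cur[:-1]))        # bits carried in from the byte to the left
    shifted = ((cur >> k) | ((prev << (8 - k)) & 0xFF)) & 0xFF
    return shifted.astype(np.uint8)

row = np.array([0b10110000, 0b00000001], dtype=np.uint8)
print([format(b, "08b") for b in shift_row_right(row, 3)])
# ['00010110', '00000000'] -- every pixel moved three places to the right
```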

As explained above, the segmentation of the scanned image is performed with a little overlap. Also, the mutual shifting of the individual segments may cause some overlap. Yet we are interested in a smooth output image with no visible traces of the segmentation procedure.
Obviously, those pixels in the output image array which are affected by more than one segment of the scanned image need special treatment. The preferred solution is to perform a Boolean OR operation on all segments involved.
The recommended procedure is to first clear the output image array and then process the scanned image segment by segment. Each new segment is added to the output array at the appropriate location by performing an OR operation for each pixel.
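The placement then amounts to clearing the output array and OR-ing each shifted segment into place; for clarity the sketch below works on unpacked boolean arrays, whereas the patent keeps the data packed in bytes throughout.

```python
import numpy as np

def assemble(segments, out_shape):
    """Build the registered output image from overlapping, individually shifted segments.

    segments: iterable of (image, top, left), where `image` is a boolean array and
    (top, left) is its target position after the estimated shift has been applied."""
    out = np.zeros(out_shape, dtype=bool)          # first clear the output image array
    for image, top, left in segments:
        h, w = image.shape
        out[top:top + h, left:left + w] |= image   # Boolean OR where segments overlap
    return out
```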

As mentioned above, one of the steps of the method in accordance with the invention calls for the registration of two one-dimensional arrays, viz. the x- and y-histograms. One possibility of implementation is by way of the known Least Absolute Value (LAV) approach. However, there are other approaches available which may be more advantageous depending on the circumstances, such as the conventional Cross Correlation (CC) approach. The latter is indeed advantageous in terms of performance, but it is also much more complex from the computational point of view.

This computational complexity can often be reduced by a comparison of the relative locations of the peaks on the two histograms involved, i.e., where the histograms are maximum. For example, if the first histogram has a maximum value of 100 at location 10, and the second histogram has its maximum at location 20, then a shift of 10 would yield an acceptable match between the two histograms, at very low computational expense.

While the registration procedure has been explained in connection with binary images, i.e. images having black and white components only, it may be applied to grey level images as well. To this end, the grey level image is converted to its binary counterpart by thresholding the image or its gradient version, and then calculating the transformation parameters in the manner explained above.
The histogram projections may be calculated directly from the grey level image by summing all the grey levels of the pixels in a given line (or column).

Assuming that the registration of the scanned image O with the prestored image CP has successfully been completed, the next step to be performed would be the subtraction of the constant part CP from the original image O. Unfortunately, in the great majority of practical cases, the scanning of the original image O as well as the retrieval of the prestored data representing the constant part CP introduce noise (in addition to the noise stemming, e.g., from uncleanliness or crumpling of the form). Accordingly, the straightforward subtraction would not yield the desired result, as can be seen from a comparison of Figs. 1, 2 and 4, which respectively show the original filled-in form O, the empty form CP and their straightforward difference. Of course, the aim is to remove the scanning noise; in the case of Figure 4 it is the faint remainder of the box surrounding the handwritten portion.

A method to do this will have to (1) remove from the original O as many black (i.e. equal to 1) pixels of the constant part CP as possible; and (2) leave unaltered all the pixels which belong to the variable part VP.

Clearly, it is relatively easy to achieve one of these goals at the expense of the other. Conventional solutions fail to achieve both goals at the same time. The method of the present invention starts from the work of W.K. Pratt, P.J. Capitant, W. Chen, E.R. Hamilton and R.H. Wallis, "Combined Symbol Matching Facsimile Data Compression System", Proc. IEEE, Vol. 68, No. 7, pp. 786-796, July 1980. Their approach was to obtain an estimate Pvp of the variable part VP by

(Equation 1)  Pvp = O − Pcp

or, stated differently,

(Equation 2)  Pvp = O ∩ ¬Pcp

where ∩ denotes intersection and ¬ denotes logical negation. In this case, goal (2) would be fulfilled completely. However, since a lot of black CP pixels are located in the vicinity of Pcp but not on Pcp itself, a considerable number of pixels belonging to CP will remain in Pvp. As a result, Pvp will be "larger" than VP. This effect is, of course, undesirable because in the context of our image compression it means that the compression of Pvp will require many more bits of code than is necessary to represent VP.

Because of this drawback, Duda and Hart, "Pattern Classification and Scene Analysis", Wiley & Sons, 1976, suggested to first broaden Pcp. In the broadened version, BCP, one sets to "1" all the pixels such that in their vicinity there is at least one black pixel of Pcp. With the broadening procedure, Pvp is obtained as:

(Equation 3)  Pvp = O − BCP = O ∩ ¬BCP

With this approach, provided the broadening process was wide enough, it is possible to remove the entire CP
area. Unfortunately, however, some parts of VP are also removed. This occurs whenever CP and VP intersect. Near the intersection areas, Pvp will be white, that is, the corresponding pixels will be equal to 0, even if VP was black, with a concomitant deterioration of the image quality.
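A small sketch of this broadened subtraction (a single broadening step, chosen only for illustration) shows both sides of the trade-off: the constant part disappears completely, but variable-part pixels adjacent to it are lost as well.

```python
import numpy as np

def broaden(img):
    """One broadening step: OR of the image with its four one-pixel shifts."""
    out = img.copy()
    out[1:, :]  |= img[:-1, :]    # shifted down
    out[:-1, :] |= img[1:, :]     # shifted up
    out[:, 1:]  |= img[:, :-1]    # shifted right
    out[:, :-1] |= img[:, 1:]     # shifted left
    return out

Pcp = np.zeros((5, 7), dtype=bool); Pcp[2, :] = True   # a horizontal form line
VP  = np.zeros((5, 7), dtype=bool); VP[:, 3]  = True   # a handwritten stroke crossing it
O   = Pcp | VP                                         # the scanned filled-in form

BCP = broaden(Pcp)
Pvp = O & ~BCP               # Equation 3: the whole of CP is removed ...
print(Pvp.astype(int))       # ... but the VP pixels next to the line are lost as well
```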

The method of this invention avoids the drawbacks mentioned, i.e. it permits a good approximation of VP to be obtained, without undue compromise of either of the goals mentioned above, and with an efficiency that makes it applicable even to personal computers. At the same time, the method of the invention lends itself to an easy reconstruction of the original image O from Pcp and Pvp. To this end, under the method of the invention, Equation 1 (or 2) will be used wherever this does not cause distortion, namely where Pcp is black; Equation 3 will be used where the broadened version BCP is white; and special tests will be made to determine the optimum solution in cases where CP intersects VP (problematic areas).

The original image O is scanned one pixel at a time.
For each pixel P its immediate vicinity (an n×n square) is considered. As many black pixels (1) of CP as possible are removed from O. If a pixel is black in the empty image CP, the corresponding pixel may as well be white in the difference image. If in the original image O a black pixel is found far away from any black pixel belonging to the empty form, then that pixel should also be black in the difference image. If none of these alternatives applies, more sophisticated tests must be performed.

21~19134 SZ9-~9-001 - 16 -We shall denote by N the neighbourhood of P in 0, by Ncp the corresponding neighbourhood of P in P~p, and by Nvp the same neighbourhood in the final array VP (which is initially set to 0). The possible values of the pixel P in the various arrays can be as follows:

a. Po (the value of pixel P in O) is 0.

Naturally, in this case no action needs to be taken (i.e. no additional computations are required), and we can proceed to the next pixel.

b. Po = Pcp = 1. Here, the pixel P should be set to 0 (i.e. in the approximation VP the value of Pvp will be 0); however, since the array VP is initialized to 0, in practice no additional action needs to be taken, and one may proceed to the next pixel.

c. Po = 1 and Pcp = 0.

In this problematic case, it is necessary to consider not only the values of P but also the values of pixels located in its vicinity. The determination of the desired value of Pvp can be based on the following tests:

1. Check whether P can belong to CP. If the answer is negative, then set Pvp = 1, and proceed to the next pixel. If the answer is positive, proceed to the next test (c.2.).

The checking can be done by verifying whether the entire window Ncp vanishes or, in general, whether the number of black pixels in Ncp does not exceed a predetermined threshold. Indeed, if the vicinity of P
in CP is empty, then the black pixel in O could not have resulted from the distortion of CP and, accordingly, it must belong to VP.

2. Check whether P is connected to VP. If the answer is positive, then P itself belongs to VP, and Pvp should be set to "1". If the answer is negative, then the conclusion will be that P belongs to CP and, accordingly, Pvp = 0.

In order to determine whether P is connected to VP, one must know Nvp. In other words, in order to be able to compute Pvp, one must know VP for all pixels in the vicinity of P. In practical cases, however, only part of Nvp will be known, viz. the part which was already traversed in the past, so that its corresponding estimate of VP is already known.
Accordingly, in the connectivity test, instead of the actual Nvp, an array N will have to be used which by itself is a "first order" approximation of Nvp. This can be done as follows:

a. Expand (broaden) the neighbourhood Ncp by a factor of m.

This, in turn, can be achieved by shifting Ncp to the left, to the right, up and down. Then the broadened version BN1cp will be obtained by Boolean summation of all five of the aforementioned arrays, one original array and four shifted arrays. The broadening will be repeated on the array BN1cp in order to obtain BN2cp. This process is continued until, after m steps, the final array BNmcp is obtained.

b. Compute the local approximation of VP as:

N = No − BNmcp = No ∩ ¬BNmcp

c. Find N by combining the approximation computed above with the "known" values of Nvp. Assume, for example, that the image is traversed from left to right and from top to bottom, and that the window size is n = 5. Then each of the neighbourhood arrays mentioned above will be organized as follows:

Z~19134 SZ9~9-001 - 18 -where the number ij defines the location of the pixel in the ith line and in the jth column. Under these assumptions, the pixels 11, 12, 13, 14, 15, 21, 22, 23, 24, 25, 31, 32 of the array O have already been analyzed and, therefore, the appropriate values for VP have been computed. The values at the remaining locations (33, 34, 35, 41, 42, 43, 44, 45. 51, 52, 53, 54, 55) will be taken from the array computed at point b) above.

Once N is known, it is easy to determine whether the pixel P, at the center, is connected to VP. A very simple criterion may be used: If the number of black pixels in N exceeds a predetermined threshold, then the pixel P under consideration belongs to VP.
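Cases a to c can be combined into a single per-pixel procedure; the following sketch follows the steps above, but the window size, the broadening factor m and both thresholds are illustrative choices, not values fixed by the patent.

```python
import numpy as np

def extract_vp(O, CP, n=5, m=2, t_cp=0, t_conn=1):
    """Estimate the variable part VP from registered bilevel images O and CP.

    O, CP: boolean arrays (True = black). n: window size, m: broadening steps,
    t_cp: threshold for "P may stem from CP", t_conn: connectivity threshold."""
    h, w = O.shape
    r = n // 2
    VP = np.zeros_like(O)
    for y in range(h):
        for x in range(w):
            if not O[y, x]:                       # case a: white in O -> nothing to do
                continue
            if CP[y, x]:                          # case b: black in both O and CP
                continue                          #   -> stays white in VP (array is 0)
            # case c: black in O, white in CP
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            Ncp = CP[y0:y1, x0:x1]
            if Ncp.sum() <= t_cp:                 # test c.1: P cannot stem from CP
                VP[y, x] = True
                continue
            # test c.2: connectivity to VP, using a locally broadened CP window
            BN = Ncp.copy()
            for _ in range(m):                    # broaden by OR-ing four shifted copies
                B = BN.copy()
                B[1:, :] |= BN[:-1, :]; B[:-1, :] |= BN[1:, :]
                B[:, 1:] |= BN[:, :-1]; B[:, :-1] |= BN[:, 1:]
                BN = B
            No = O[y0:y1, x0:x1]
            N = No & ~BN                          # local approximation of VP (point b)
            known = VP[y0:y1, x0:x1]              # values already computed earlier
            N[: y - y0, :] = known[: y - y0, :]              # lines above the current one
            N[y - y0, : x - x0] = known[y - y0, : x - x0]    # pixels to the left of P
            if N.sum() > t_conn:                  # simple connectivity criterion
                VP[y, x] = True
    return VP
```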

Consider now, for example, a black pixel P with a neighbourhood No in a filled-in array, and a neighbourhood Ncp in the empty form array:

No:  0 0 P 1 1        Ncp:  0 0 1 1 1

This is clearly the first of the above-mentioned cases, and in the output array, P should be white (0).
However, if the following happens (second case):

No:  0 0 P 1 1        Ncp:  0 0 0 0 0

then P should be left black. On the other hand, if the neighbourhood looks like this:

No NCP

then a more sophisticated test is necessary in order to determine whether P resulted from the noise at the bottom of Ncp (and, hence, should be set to 0 in the output array), or whether P is black because it belongs to a line which was filled in (and, therefore, must be set to 1 in the output image). In the last example above, the decision would be to set P to 0. However, in the following example, it would be more reasonable to decide for P = 1:

No NCP

The performance of the method in accordance with the invention will now be described with reference to the drawings. By way of example, a small part of the original form O is shown at enlarged scale in Figure 1. It comprises a box from the pattern of boxes which make up the form, and some handwritten entry into that box. After the appropriate empty form was identified in the storage, its binary representation is retrieved. The corresponding output is shown in Figure 2. Besides some deviations in line thickness and continuity, the representation of the empty form appears grossly misaligned with the representation of the original form of Figure 1. With the registration process performed, the images are made to be almost exactly superimposed as shown in Figure 3.

Now the data representing the empty form CP are subtracted from the data representing the original form O. The result of a straightforward subtraction is shown in Figure 4. As already mentioned above, the scanning process is prone to introduce some noise, and this is seen in Figure 4 as a remaining silhouette or shadow of the box shown in Figure 2. It should be noted that Figure 4 was only created to illustrate the subtraction; it is not normally generated. The application of the subtraction process in accordance with the invention will yield the result shown in Figure 5, where no trace of the constant part CP of the originally scanned form O remains.

Figure 6 is the result one would see at the end of the process, i.e. it represents a recombination of the empty form CP of Figure 2 with the filled-in portion of Figure 5. As desired, the image of Figure 6 looks essentially the same as the original of Figure 1 although, of course, on a pixel-by-pixel basis, the images are different.

Figure 7 is a representation of the entire original form (O) from which the portion of Figure 1 was taken. This form with its Hebrew text has been chosen on purpose so that the readers, most of whom are assumed not to be able to read Hebrew, may view this as an example of an abstract binary image. This image is scanned and, after the appropriate registration process has been completed, the data prestored for the corresponding empty form of Figure 8 are subtracted from the resulting data in a straightforward manner, i.e. in accordance with Equation 1. The result is shown in Figure 9. Clearly, in the framework of the present invention, this would not be acceptable as the image contains a great deal of undesired information, such as black pixels which result from the imperfect removal of CP. This shows that, as previously pointed out, straightforward subtraction does not yield acceptable results.

With the subtraction scheme of the present invention, the result of the subtraction will look like Figure 10:
The information content of the empty form CP has entirely been removed. Where a black pixels of the empty form CP
happens to overlay a black pixel belonging to the handwritten information, it may occur that the pixel is replaced by a white pixel visible as a "blank" in Figure 10. In order to check whether the change detection process might have introduced noise, such as a distortion of VP, for example, one may wish to combine the empty form of Figure 8 and the "change image" of Figure 10. The result is shown in Figure 11. While there are a few visible discontinuities, the match may be considered to be nearly perfect since the readability of the text is not impaired.

The method of the invention thus permits the constant part CP of the original image to be completely removed and an undistorted image of the variable part VP of the original image to be obtained. Consequently, it will not be necessary to store the entire original form with its information content on the order of 30 kBytes; it will rather suffice to store the image of Figure 10, having an information content of only about 6 kBytes, thus achieving a 5-fold compression ratio with practically no image quality deterioration. The compression of the variable part VP of the image may be made with conventional methods.

The compression step may also be performed with a more elegant scheme than those known in the art. The preferred scheme involves the application of at least two different compression ratios depending on the "density" of the information to be compressed. Assuming, for example, a two-ratio compression scheme, the first ratio may be termed "lossless" and the second "lossy". Accordingly, those portions of the data to be compressed that are very dense, i.e. that contain a comparatively great number of black pixels, will be treated with the small ratio so as not to lose any of the black pixels when compressing, while portions of the data to be compressed containing comparatively few black pixels are treated with a large compression ratio with the accepted possibility that a few pixels that are in fact relevant to the information content may be lost.

This scheme, therefore, requires a prefiltering step to determine the dense and less dense portions of the image in order to control the application of the lossless or lossy compression ratios. Applied to the processing of forms containing information of a constant, i.e.
preprinted nature, and of a variable, i.e. handwritten nature, a very considerable saving in terms of bandwidth and storage space can be achieved since the handwritten portion usually qualifies for a great compression ratio.
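A sketch of this prefiltering and two-ratio routing is given below; the tile size, the density threshold, the 2x2 subsampling used as the "lossy" path and zlib as the back-end coder are all stand-ins chosen for illustration, not the coders contemplated by the patent.

```python
import zlib
import numpy as np

def adaptive_compress(VP, tile=64, density_threshold=0.05):
    """Two-ratio compression sketch: dense tiles are coded losslessly,
    sparse tiles are subsampled (lossy) before coding."""
    h, w = VP.shape
    pieces = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = VP[y:y + tile, x:x + tile]
            density = block.mean()                   # prefilter: fraction of black pixels
            if density > density_threshold:          # dense -> small ratio, lossless
                payload = np.packbits(block)
                mode = b"L"
            else:                                    # sparse -> large ratio, lossy
                payload = np.packbits(block[::2, ::2])
                mode = b"S"
            pieces.append((mode, (y, x), zlib.compress(payload.tobytes())))
    return pieces
```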

Claims (11)

1. Method for compressing, for storage or transmission, the information contained in filled-in forms (O) by separate handling of the corresponding empty forms (CP) and of the information written into them (VP), characterized by the steps of:
- pre-scanning the empty forms (CP), digitizing the data obtained, and storing the digitized representations relating to each of the empty forms (CP) in a computer memory to create a forms library, - scanning the original, filled-in form (O) to be compressed, digitizing the data obtained, - identifying the particular one of said empty forms (CP) in said forms library and retrieving the digital representation thereof, - subtracting said retrieved representation of the empty form (CP) from said digital representation of the scanned filled-in form (O), the difference being the digital representation of the filled-in information (VP), and - compressing the digital representation of the filled-in information (VP) by appropriate methods.
2. Method in accordance with claim 1, characterized in that the scanning parameters, such as brightness and threshold level, are determined separately for the empty form (CP) and for said original filled-in form (O).
3. Method in accordance with claim 1, characterized in that prior to the subtraction step, registration information relating to the relative position of the filled-in information (VP) with respect to the completed form (O) is determined and registration of said original filled-in form (O) with said empty form (CP) is performed.
4. Method in accordance with claim 3, characterized in that the said registration information is determined through dimensionality reduction.
5. Method in accordance with claim 3, characterized in that said registration is performed in the following sequence of steps:
- partitioning the scanned filled-in form (O) into small segments, - estimating, for each segment, the optimum shifts to be performed in the x- and y-directions, - placing each segment of said original filled-in form (O) at the appropriate area of the output image array using the shift information previously established, so that a complete, shifted image is obtained when the placements for all segments have been completed.
6. Method in accordance with claim 5, characterized in that said original filled-in form (O) is partitioned in a proportion corresponding at least approximately to 16 segments per page of 210 x 297 mm size.
7. Method in accordance with claim 5, characterized in that said partitioning is performed such that the segments overlap at their margins by a distance corresponding to about two pixels.
8. Method in accordance with claim 5, characterized in that said estimated optimum shifts for each pair of neighbouring segments are checked for consistency by assuring that their difference does not exceed a predetermined threshold and, if it does, automatically calling for operator intervention.
9. Method in accordance with claim 1, characterized in that said subtraction step involves removing, from the original filled-in form (O), all black pixels which belong to said corresponding empty form (CP) or which are located close to black pixels belonging to said corresponding empty form (CP), and retaining all black pixels which belong to said filled-in information (VP).
10. Method in accordance with claim 1, characterized in that said compression step involves the application of at least two different compression ratios depending on the content, in the data to be compressed, of portions containing a comparatively large number of black pixels and portions containing a comparatively small number of black pixels, with the data to be compressed being first subjected to a prefiltering step to determine the relative compressibility thereof.
11. Method in accordance with any one of the preceding claims, characterized in that all steps are performed with the said binary representations being maintained in a byte format without unpacking any of the bytes to their component pixels.
CA002019134A 1989-08-04 1990-06-15 Method for compressing and decompressing forms by means of very large symbol matching Expired - Fee Related CA2019134C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL9122089A IL91220A (en) 1989-08-04 1989-08-04 Compression of information
IL91220 1989-08-04

Publications (2)

Publication Number Publication Date
CA2019134A1 CA2019134A1 (en) 1991-02-04
CA2019134C true CA2019134C (en) 1996-04-09

Family

ID=11060247

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002019134A Expired - Fee Related CA2019134C (en) 1989-08-04 1990-06-15 Method for compressing and decompressing forms by means of very large symbol matching

Country Status (7)

Country Link
US (1) US5182656A (en)
EP (1) EP0411231B1 (en)
JP (1) JPH03119486A (en)
CA (1) CA2019134C (en)
DE (1) DE68922998T2 (en)
ES (1) ES2074480T3 (en)
IL (1) IL91220A (en)

Families Citing this family (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5363214A (en) * 1990-05-30 1994-11-08 Xerox Corporation Facsimile transmission system
JP3170299B2 (en) * 1991-01-31 2001-05-28 株式会社リコー Image reading processing device
JPH04343190A (en) * 1991-05-21 1992-11-30 Hitachi Ltd Character data input system
EP0594901A1 (en) * 1992-10-27 1994-05-04 International Business Machines Corporation Image compression method
US5793887A (en) * 1993-11-16 1998-08-11 International Business Machines Corporation Method and apparatus for alignment of images for template elimination
CA2134255C (en) * 1993-12-09 1999-07-13 Hans Peter Graf Dropped-form document image compression
US5668897A (en) * 1994-03-15 1997-09-16 Stolfo; Salvatore J. Method and apparatus for imaging, image processing and data compression merge/purge techniques for document image databases
JPH08207380A (en) * 1994-11-25 1996-08-13 Xerox Corp Method and device for automatic entry in original form
JPH0981763A (en) * 1995-07-07 1997-03-28 Oki Data:Kk Method and device for compressing character and image mixed data
US5894525A (en) * 1995-12-06 1999-04-13 Ncr Corporation Method and system for simultaneously recognizing contextually related input fields for a mutually consistent interpretation
US5815595A (en) * 1995-12-29 1998-09-29 Seiko Epson Corporation Method and apparatus for identifying text fields and checkboxes in digitized images
US6072598A (en) * 1996-02-27 2000-06-06 Intel Corporation Method for enhancing usability of fax on small device
AU2116397A (en) * 1996-02-27 1997-09-16 Intel Corporation Method for enhancing usability of fax on small devices
US6519046B1 (en) * 1997-03-17 2003-02-11 Fuji Photo Film Co., Ltd. Printing method and system for making a print from a photo picture frame and a graphic image written by a user
JP3558493B2 (en) * 1997-06-10 2004-08-25 富士通株式会社 Paper alignment device, paper alignment method, and computer-readable recording medium recording paper alignment program
US6564319B1 (en) * 1997-12-29 2003-05-13 International Business Machines Corporation Technique for compressing digital certificates for use in smart cards
US6507662B1 (en) * 1998-09-11 2003-01-14 Quid Technologies Llc Method and system for biometric recognition based on electric and/or magnetic properties
US6507671B1 (en) 1998-12-11 2003-01-14 International Business Machines Corporation Method and system for dropping template from a filled in image
JP3581265B2 (en) * 1999-01-06 2004-10-27 シャープ株式会社 Image processing method and apparatus
US6728426B1 (en) 1999-08-23 2004-04-27 International Business Machines Corporation Compression of form images in gray-level
JP4424845B2 (en) 1999-12-20 2010-03-03 本田 正 Image data compression method and decompression method
JP4078009B2 (en) * 2000-02-28 2008-04-23 東芝ソリューション株式会社 CHARACTERISTIC RECORDING AREA DETECTION DEVICE FOR FORM, CHARACTER RECORDING AREA DETECTION METHOD FOR FORM, STORAGE MEDIUM, AND FORM FORMAT CREATION DEVICE
US6351566B1 (en) 2000-03-02 2002-02-26 International Business Machines Method for image binarization
US6658166B1 (en) 2000-03-08 2003-12-02 International Business Machines Corporation Correction of distortions in form processing
US6778703B1 (en) 2000-04-19 2004-08-17 International Business Machines Corporation Form recognition using reference areas
US7917844B1 (en) 2000-07-14 2011-03-29 International Business Machines Corporation Directory service for form processing
US6760490B1 (en) 2000-09-28 2004-07-06 International Business Machines Corporation Efficient checking of key-in data entry
US6640009B2 (en) 2001-02-06 2003-10-28 International Business Machines Corporation Identification, separation and compression of multiple forms with mutants
US7239747B2 (en) * 2002-01-24 2007-07-03 Chatterbox Systems, Inc. Method and system for locating position in printed texts and delivering multimedia information
US9224040B2 (en) 2003-03-28 2015-12-29 Abbyy Development Llc Method for object recognition and describing structure of graphical objects
US20110188759A1 (en) * 2003-06-26 2011-08-04 Irina Filimonova Method and System of Pre-Analysis and Automated Classification of Documents
US9015573B2 (en) 2003-03-28 2015-04-21 Abbyy Development Llc Object recognition and describing structure of graphical objects
RU2003108434A (en) * 2003-03-28 2004-09-27 "Аби Софтвер Лтд." (CY) METHOD FOR PRE-PROCESSING THE IMAGE OF THE MACHINE READABLE FORM OF THE UNFIXED FORMAT
RU2003108433A (en) * 2003-03-28 2004-09-27 Аби Софтвер Лтд. (Cy) METHOD FOR PRE-PROCESSING THE MACHINE READABLE FORM IMAGE
US7305612B2 (en) * 2003-03-31 2007-12-04 Siemens Corporate Research, Inc. Systems and methods for automatic form segmentation for raster-based passive electronic documents
RU2635259C1 (en) 2016-06-22 2017-11-09 Общество с ограниченной ответственностью "Аби Девелопмент" Method and device for determining type of digital document
US9740692B2 (en) 2006-08-01 2017-08-22 Abbyy Development Llc Creating flexible structure descriptions of documents with repetitive non-regular structures
US8233714B2 (en) 2006-08-01 2012-07-31 Abbyy Software Ltd. Method and system for creating flexible structure descriptions
US8108764B2 (en) * 2007-10-03 2012-01-31 Esker, Inc. Document recognition using static and variable strings to create a document signature
JP2010033360A (en) * 2008-07-29 2010-02-12 Canon Inc Information processor, job processing method, storage medium and program
JP5420363B2 (en) * 2009-09-28 2014-02-19 大日本スクリーン製造株式会社 Image inspection apparatus, image inspection method, and image recording apparatus
US8285074B2 (en) * 2010-09-01 2012-10-09 Palo Alto Research Center Incorporated Finding low variance regions in document images for generating image anchor templates for content anchoring, data extraction, and document classification
US8825409B2 (en) * 2010-09-08 2014-09-02 International Business Machines Corporation Tracing seismic sections to convert to digital format
JP5703898B2 (en) * 2011-03-30 2015-04-22 富士通株式会社 Form management system, form image management method, and program
US9082007B2 (en) * 2013-02-15 2015-07-14 Bank Of America Corporation Image recreation using templates
US11830605B2 (en) * 2013-04-24 2023-11-28 Koninklijke Philips N.V. Image visualization of medical imaging studies between separate and distinct computing system using a template
US10395133B1 (en) * 2015-05-08 2019-08-27 Open Text Corporation Image box filtering for optical character recognition
US10437778B2 (en) 2016-02-08 2019-10-08 Bank Of America Corporation Archive validation system with data purge triggering
US10460296B2 (en) 2016-02-08 2019-10-29 Bank Of America Corporation System for processing data using parameters associated with the data for auto-processing
US9823958B2 (en) 2016-02-08 2017-11-21 Bank Of America Corporation System for processing data using different processing channels based on source error probability
US10437880B2 (en) 2016-02-08 2019-10-08 Bank Of America Corporation Archive validation system with data purge triggering
US9952942B2 (en) 2016-02-12 2018-04-24 Bank Of America Corporation System for distributed data processing with auto-recovery
US10067869B2 (en) 2016-02-12 2018-09-04 Bank Of America Corporation System for distributed data processing with automatic caching at various system levels

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5776969A (en) * 1980-10-30 1982-05-14 Canon Inc Image editing device
DE3107521A1 (en) * 1981-02-27 1982-09-16 Siemens AG, 1000 Berlin und 8000 München METHOD FOR AUTOMATICALLY DETECTING IMAGE AND TEXT OR GRAPHIC AREAS ON PRINT ORIGINALS
US4430526A (en) * 1982-01-25 1984-02-07 Bell Telephone Laboratories, Incorporated Interactive graphics transmission system employing an adaptive stylus for reduced bandwidth
JPS58148565A (en) * 1982-02-26 1983-09-03 Mitsubishi Electric Corp Encoding method of multi-gradation picture signal
JPS58207184A (en) * 1982-05-27 1983-12-02 Ricoh Co Ltd Recording information recognizer
JPH0750483B2 (en) * 1985-05-22 1995-05-31 株式会社日立製作所 How to store additional information about document images
GB2190560B (en) * 1986-05-08 1990-06-20 Gen Electric Plc Data compression
US4754487A (en) * 1986-05-27 1988-06-28 Image Recall Systems, Inc. Picture storage and retrieval system for various limited storage mediums
EP0262462A3 (en) * 1986-09-30 1991-02-27 Siemens Aktiengesellschaft Method for the interpretation of form-type documents
JPS63115267A (en) * 1986-10-31 1988-05-19 Nippon I C S Kk Restoration processing device for entry item in slip or the like
US5001769A (en) * 1988-12-20 1991-03-19 Educational Testing Service Image processing system

Also Published As

Publication number Publication date
EP0411231A3 (en) 1991-07-31
CA2019134A1 (en) 1991-02-04
ES2074480T3 (en) 1995-09-16
US5182656A (en) 1993-01-26
IL91220A0 (en) 1990-03-19
IL91220A (en) 1995-03-30
DE68922998D1 (en) 1995-07-13
JPH03119486A (en) 1991-05-21
DE68922998T2 (en) 1995-12-14
EP0411231B1 (en) 1995-06-07
EP0411231A2 (en) 1991-02-06

Similar Documents

Publication Publication Date Title
CA2019134C (en) Method for compressing and decompressing forms by means of very large symbol matching
US5793887A (en) Method and apparatus for alignment of images for template elimination
US5631984A (en) Method and apparatus for separating static and dynamic portions of document images
US7376266B2 (en) Segmented layered image system
US5754697A (en) Selective document image data compression technique
US5778092A (en) Method and apparatus for compressing color or gray scale documents
JP3925971B2 (en) How to create unified equivalence classes
KR100938099B1 (en) Clustering
EP0411232B1 (en) Method for high-quality compression by binary text images
US5822454A (en) System and method for automatic page registration and automatic zone detection during forms processing
US7184589B2 (en) Image compression apparatus
US4494150A (en) Word autocorrelation redundancy match facsimile compression for text processing systems
KR100937542B1 (en) Segmented layered image system
US20120294524A1 (en) Enhanced Multilayer Compression of Image Files Using OCR Systems
US5307422A (en) Method and system for identifying lines of text in a document
US20070292028A1 (en) Activity detector
US7133559B2 (en) Image processing device, image processing method, image processing program, and computer readable recording medium on which image processing program is recorded
JP3977468B2 (en) Symbol classification device
JPH0879536A (en) Picture processing method
US5542007A (en) Form dropout compression method which handles form white-out and writing in shaded and white-out areas of the form
EP0649246B1 (en) A method of reducing document size for optical display
Viswanathan et al. Characteristics of digitized images of technical articles
JP2908495B2 (en) Character image extraction device
Shapiro How to Reduce the Size of Bank Check Image Archive?
CN113808225A (en) Lossless coding method for image

Legal Events

Date Code Title Description
EEER Examination request
MKLA Lapsed