CA2155891A1 - Optical character recognition system having context analyzer - Google Patents

Optical character recognition system having context analyzer

Info

Publication number
CA2155891A1
Authority
CA
Canada
Prior art keywords
ocr
character
image
text
hypotheses
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002155891A
Other languages
French (fr)
Inventor
Raymond Amand Lorie
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of CA2155891A1 publication Critical patent/CA2155891A1/en
Abandoned legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition
    • G06V30/26 - Techniques for post-processing, e.g. correcting the recognition result
    • G06V30/262 - Techniques for post-processing, e.g. correcting the recognition result using context analysis, e.g. lexical, syntactic or semantic context
    • G06V30/274 - Syntactic or semantic context, e.g. balancing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 - Character recognition

Abstract

An optical character recognition (OCR) system is provided, in which syntactical and semantic rules, provided along with an input image to be scanned and applicable to the contents of the scanned image, are used in connection with the results of the OCR scan to identify the scanned characters. As a result, the recognition rate and confidence are enhanced. By providing the checking based on syntactical and semantic rules within the OCR system, application programs which would receive and use the OCR results are freed from the added burden of having to perform their own syntactical and/or semantic checking on the OCR results the application programs receive from the OCR system.

Description

OPTICAL CHARACTER RECOGNITION SYSTEM HAVING CONTEXT ANALYZER

Field of the Invention

The invention generally relates to the field of optical character recognition (OCR). More particularly, the invention provides a method and system architecture for optical character recognition which employs syntax and semantics information relating to the characters to be recognized, to improve recognition accuracy and confidence.
Background of the Invention

The problem at hand is to recognize the textual information contained in a scanned image. The scanned images can come from a wide variety of source material, such as written answers to printed questions on forms, or mailing addresses on postal envelopes.
OCR systems employ various strategies for isolating small portions of the image (such as groups of numeric digits within telephone numbers or ZIP codes) as connected components, segmenting a connected component into one or several character images and recognizing each such image as representing a specific character.
The process of OCR is prone to errors. Therefore an OCR system may be designed to identify several alternatives for segmenting a connected component, and several character choices for each character inside a segmentation alternative. The results are typically provided from the output of the OCR system to an application program, such as a text processor or a printer driver.
Conventional OCR recognition engines exist, which recognize characters with a reasonable accuracy. However, even a 90% accuracy rate at the character level means less than 50% at the word level, so over half of all words contain at least one error.
It is well known that the use of context information in conjunction with OCR helps to improve the level of accuracy realized. For instance, if a connected component is identified as a ZIP code (which consists only of numeric characters), then any character choice within the connected component which is not a numeric character can be dismissed as an incorrect choice.
In conventional systems, the OCR subsystem simply provides any character choices it recognizes to the application program, and the exploitation of context is performed by the application program. However, there are drawbacks in such an approach.
If an OCR system scans an image and outputs a single choice for each scanned character, then the application program can try alternate character values for well known confusion possibilities (i and l, O and D, 3 and 5, etc.), or use a fuzzy search to find the best matching word in a dictionary. If the engine returns several choices, then the application program may try to find the most likely sequence of characters based on maximizing some total confidence estimate. But this is by no means an easy task; and it is an inefficient use of programming resources to include, within application programs, the necessary logic and code for each particular application. Therefore, there is a need to relieve application programs of the responsibility for performing context-based checking of OCR results.
Summary of the Invention

It therefore is an object of the present invention to provide an OCR system which performs its own context-based checking, to improve recognition accuracy.
To achieve this and other objectives, there is provided in accordance with the invention a system architecture for an OCR system including a general purpose, highly customizable, context analyzer. Once the initial scan of the input image to be recognized has been made, the tentatively identified characters are subjected to a context analysis, in which rules of syntax and semantics are applied in order to verify that the scanned characters are consistent with those rules, thereby improving the confidence level that the recognized characters are indeed correct.
Since the constraints are application-dependent, there is preferably a means for customizing the analyzer. In accordance with the invention, this is done through a language called Document Specification Language (DSL). A program, or "specification," written in the Document Specification Language specifies both the syntax and the semantics of the information to be recognized. DSL is specialized, concise, and non-procedural. Its inherent search mechanism makes it a high level language.
Once a specification for a given type of text to be recognized is written in DSL, the specification can be compiled into an internal representation (the context structure).
While the invention is primarily disclosed as a system, it will be understood by a person of ordinary skill in the art that a method, such as a method executable on a conventional data processor, including a CPU, memory, I/O, program storage, a connecting bus, and other appropriate components, could be programmed or otherwise designed to facilitate the practice of the method of the invention. Such a method would include appropriate method steps for performing the invention. Also, an article of manufacture, such as a pre-recorded disk or other similar computer program product, for use with a data processing system, could include a storage medium and program means recorded thereon for directing the data processing system to facilitate the practice of the method of the invention. It will be understood that such apparatus and articles of manufacture also fall within the spirit and scope of the invention.

Brief Description of the Drawings

FIG. 1 is a system block diagram showing an OCR system in accordance with the invention, and an application program which is to work in concert with the OCR system.
FIG. 2 is a table of OCR results from scanning an input image of the character sequence 2/5/94, each column of the table corresponding with one of the connected components (94 being a single connected component), and each tabulated item representing a recognized character sequence and a corresponding probability-of-accurate-recognition measurement.
FIG. 3 is a table of possible character types (n representing a numeric, and - and / respectively representing a dash and a slash) produced from the tabulated items in the OCR scan results table of FIG. 2.
FIG. 4 is a table of character choices for the model 2/5/94, based on the OCR results given in FIG. 2.
FIG. 5 is a table of character choices for a scan of the city name FRESNO, which satisfied the syntax phase of the recognition method of the invention, and are to be used in the semantics phase thereof.

Description of the Preferred Embodiment

FIG. 1 shows the general architecture of an OCR system in accordance with the invention, and its operating environment.
The OCR system is generally shown as 2. The OCR system 2 receives as inputs an input image 4, containing an image of text to be recognized, and a document description 6. The image may be an array of image pixels produced by an image scanner in known fashion, or some other suitable representation of the character image. The document description 6 is preferably provided using the DSL, which will be described in detail below.
The document description is part of a predetermined text content constraint. More specifically, it includes syntax and semantics information with which the information in the input image is expected to conform. The document description is used to improve the accuracy of the OCR system's output in a manner to be described below.
The OCR system 2 performs optical character recognition on the input image 4, using the document description 6 to improve the accuracy of recognition, and provides the resultant recognized text as an output to an application program 8. The nature of the application program 8 is not essential to the invention. However, through the use of an OCR system in accordance with the invention, the application program 8 is relieved of the need to perform context based checking on the OCR results, thereby saving processing time as well as the application program designer's programming time. Additionally, the application program is less expensive to purchase or develop, because its capabilities need not include context-based checking of the results of the OCR.
The system of FIG. 1 operates essentially as follows: the document description 6 is compiled to produce a context structure 10, which is usable by the OCR system 2 in character recognition.
The application program 8 invokes a context analyzer 12, within the OCR system 2, and directs the context analyzer 12 to access the context structure 10 and the input image 4.
The context analyzer 12 invokes a recognition engine 14, which may be any suitable device for performing OCR, such as a neural network, image processor, etc. Such recognition engines, which are known in the art, produce tokens which identify characters recognized (at least tentatively) in the scanned input image, and can also include confidence or probability measurements corresponding with the recognized characters.
The context analyzer 12 and the recognition engine 14 communicate through an object oriented buffer 16. The objects in the buffer are all character string variables identified by name.
Also, a set of semantics routines 18, for providing additional predetermined text content constraints which are used to increase the recognition accuracy, are provided. (Preferably, the semantics routines 18 include a suitable mechanism to include new user-defined routines. Many such mechanisms, such as system editors, are known.) The context analyzer 12 also accesses the semantic routines 18 through the object oriented buffer 16.
At execution time, the context analyzer 12 behaves like a parser. The recognition engine 14 produces tokens. The context analyzer 12 tries to match the tokens with the syntax specification expressed by the context structure 10, and makes sure that semantic constraints given in the semantic routines 18 are not violated. Since there may be several choices for a character (that is, a given input image might be recognized as either a 0, an O, or some other similar character), the parsing generally involves backtracking (known to those skilled in the art of OCR). When the recognition process completes, the application program 8 retrieves the recognized information from the object buffer 16.
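A minimal illustration of this backtracking behaviour, written in C, is given below. It is only a sketch under stated assumptions: the hypothesis table, the digit-only constraint, and all identifiers are invented for the example and are not taken from the patent.

#include <stdio.h>

#define MAX_POS 8
#define MAX_ALT 4

/* Hypothetical per-character hypotheses: each position carries a few
   candidate characters, ordered by descending confidence. */
typedef struct {
    char alt[MAX_ALT];   /* candidate characters for this position */
    int  n_alt;          /* number of candidates                   */
} Hypotheses;

/* Example syntactic constraint: every character must be a digit. */
static int satisfies_syntax(const char *s, int len)
{
    for (int i = 0; i < len; i++)
        if (s[i] < '0' || s[i] > '9')
            return 0;
    return 1;
}

/* Depth-first enumeration with backtracking: try the most likely
   character first, and back up as soon as the partial string can no
   longer satisfy the constraint. */
static int backtrack(const Hypotheses *h, int len, int pos, char *out)
{
    if (pos == len) {
        out[len] = '\0';
        return 1;                      /* every prefix already checked */
    }
    for (int k = 0; k < h[pos].n_alt; k++) {
        out[pos] = h[pos].alt[k];
        if (!satisfies_syntax(out, pos + 1))
            continue;                  /* prune: violates the syntax   */
        if (backtrack(h, len, pos + 1, out))
            return 1;                  /* first acceptable solution    */
    }
    return 0;                          /* backtrack to the caller      */
}

int main(void)
{
    /* A ZIP code scanned with the classic l/1 and O/0 confusions. */
    Hypotheses zip[5] = {
        { "9", 1 }, { "5", 1 }, { "l1", 2 }, { "2", 1 }, { "O0", 2 }
    };
    char out[MAX_POS + 1];
    if (backtrack(zip, 5, 0, out))
        printf("accepted: %s\n", out); /* prints "accepted: 95120" */
    return 0;
}

A real analyzer would additionally weigh each path by the recognition confidences and test it against the compiled context structure and semantic routines, rather than against a hard-coded digit test.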
At this point, it is useful to include the following two definitions, which are taken from Webster's II New Riverside Dictionary (New York: Berkley Publishing Group, 1984):
SYNTAX: the way in which words are put together to form sentences, clauses, and phrases.
SEMANTICS: 1. the study of meaning in language, esp. with regard to historical changes. 2. the study of the relationships between signs and symbols.

As will be seen from the discussion which follows, the syntax of the text being recognized, i.e., the sequence of alphas, numerics, punctuation marks, etc., will be used in a first phase of checking of the OCR results. The semantics, i.e., the existence of a recognized name within a dictionary of known valid names, the consistency between city name and ZIP code within that city, etc., will be used in a second phase of checking.
The details of the DSL language are not essential to the invention, but are exemplified in a preferred implementation as follows. A DSL program has one FIELD statement for each field.
A field is defined as a portion of the text which is expected to be found within a scanned text image, the portion having a known continuity. For instance, if addresses are to be scanned, the ZIP code would be one field of the address, and the continuity is the fact that five numeric digits make up a ZIP code. A FIELD statement has the following format:

FIELD field name field type field coordinates

where the field name is a character string, the field type is actually the name of the type, and the field coordinates are four integers which delimit dimensions within the scanned image.
Here is an example of a FIELD statement for a certain field w1 which contains a ZIP code:

FIELD w1 ZIP 124, 1200, 150, 1400

TYPE INFORMATION
The type information, such as the ZIP code in the above example, must be known to the OCR system. It can be a basic alphabet, a defined alphabet, an elementary type, or a composite type. Elementary and composite types may exist in a predefined library, or be defined by the user, using the following facilities.

A basic alphabet is one of the following: NUM, UPPER_CASE, LOWER_CASE, etc.
An alphabet can also be defined in the DSL program by an ALPHABET statement containing the following items:
ALPHABET alphabet name alphabet definition

where the alphabet definition is a union of basic alphabets and/or characters that belong to a given string. Here are a few examples of alphabet definitions:
ALPHABET hexadec (NUM, "ABCDEF")
ALPHABET octal ("01234567")

It will be recognized that, for instance, a hexadecimal alphabet includes the digits 0 through 9 and the letters A through F. Therefore, by using a predefined expression NUM to specify 0 through 9, and the individually specified letters A through F, the first of these two examples includes all sixteen hexadecimal digits.
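Checking a token against such an alphabet amounts to a simple character-membership test. The C fragment below is only an illustrative sketch; the function name and the expanded alphabet string are assumptions for the example, not part of the DSL itself.

#include <string.h>

/* Returns 1 if every character of s belongs to the given alphabet. */
static int in_alphabet(const char *s, const char *alphabet)
{
    return s[strspn(s, alphabet)] == '\0';
}

/* The hexadec alphabet (NUM, "ABCDEF") expands to these sixteen characters. */
static const char hexadec[] = "0123456789ABCDEF";

/* Example: in_alphabet("2AF9", hexadec) == 1, in_alphabet("2G9", hexadec) == 0. */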
ELEMENTARY and COMPOSITE types may be easily understood from their relationship to each other, elementary types generally being subsets of composite types. Each will be discussed separately.
First, consider several examples of elementary types. There are no two ways, or formats, of writing a 5-digit ZIP code such as 95120. That is, the only format in which a ZIP code is ever expected is simply five digits, with no intervening spaces or other characters.
Similarly, there are no two ways of writing elementary types taken alone, such as the digits of an area code (such as 408), the three-digit prefix or "exchange" of a telephone number (such as 927), or the extension of a telephone number (such as 3999).
By contrast, now let us consider a few examples of composite types. There are several ways of writing a full telephone number, such as (408) 927-3999. For example, 408-927-3999 or 408 9273999 may also be valid.
The strings 95120, 408, 927, 3999 can be seen as instances of elementary types, while the complete telephone number is an example of a composite type. An elementary type is defined by an ELEM_TYPE statement in the DSL program. Such a statement has the following format:

ELEM_TYPE type_name PHRASE or WORD name LENGTH COND.

where PHRASE and WORD are keywords which specify whether or not spaces are allowed, name specifies the alphabet to which characters in this field must belong, and LENGTH COND. is a condition such as LENGTH=5 or 6<=LENGTH<=9. Optionally, the name of a dictionary or routine may also be added at the end of the statement. For example, defining an elementary type for a ZIP code may be done as follows (temporarily disregarding the final name clause):

ELEM_TYPE ZIP WORD, NUM, LENGTH=5

Each composite type is defined by a TYPE statement in the DSL program. It has the following format:

TYPE name pairs list output

where pairs is a list of pairs of items in the format: element name (element type), describing the set of elements that compose this composite type, list is a list of acceptable representation(s), and output is the representation to be used for the result string. Typically, the output representation will be a preferred, or frequently occurring, one of the acceptable representations, but for particular applications, some other suitable representation may be used as the output representation.
In this format, the representation is a sequence of element names and/or string constants. For example, the definition of a phone number as a composite type using three elementary types is given as follows (three elementary types followed by the composite type which includes the elementary types):

ELEM_TYPE area WORD, NUM, LENGTH=3
ELEM_TYPE prefix WORD, NUM, LENGTH=3
ELEM_TYPE ext WORD, NUM, LENGTH=4
TYPE phone a(area), p(prefix), e(ext)
REP "(" a ")" p "-" e
REP a p e
REP "(" a ")" p e
OUTPUT "(" a ")" p "-" e

Note that the REP statements given in the composite TYPE statement are preferably ordered by order of likelihood of occurrence of the representations. The OUTPUT representation will be the one that the output string will have, independently of how the information was initially written. Thus, by having several different REP statements geared to recognizing different ways in which the composite type may be expressed, and a single OUTPUT statement, all strings recognized, regardless of format, will be output in the same format.
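To illustrate how several REP statements feed a single OUTPUT statement, the following C sketch accepts a phone number written in any of the three representations above and always re-emits it in the single output format. It is an illustration only; in the actual system this behaviour is derived from the compiled DSL specification rather than from hand-written parsing code.

#include <stdio.h>

/* Try the three acceptable representations in order of likelihood;
   on a match, fill in the three elementary values. */
static int parse_phone(const char *s, char *area, char *prefix, char *ext)
{
    if (sscanf(s, "(%3[0-9]) %3[0-9]-%4[0-9]", area, prefix, ext) == 3) return 1;
    if (sscanf(s, "%3[0-9] %3[0-9] %4[0-9]",   area, prefix, ext) == 3) return 1;
    if (sscanf(s, "(%3[0-9]) %3[0-9] %4[0-9]", area, prefix, ext) == 3) return 1;
    return 0;
}

int main(void)
{
    const char *inputs[] = { "(408) 927-3999", "408 927 3999", "(408) 927 3999" };
    char area[4], prefix[4], ext[5];
    for (int i = 0; i < 3; i++)
        if (parse_phone(inputs[i], area, prefix, ext))
            /* single OUTPUT representation, independent of the REP that matched */
            printf("(%s) %s-%s\n", area, prefix, ext);
    return 0;
}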

NAMING OF ELEMENTS
Since all fields are named and elements inside a field are also named, each element in the form is uniquely identified by the following:

field_name.element_name

For example, using the above definition of the composite type "phone", one may define a field containing the home phone number as:
FIELD home_phone, phone

Then, home_phone.area uniquely identifies the home area code.

DICTIONARIES AND ROUTINES
The definition of an elementary type, as shown above, may specify the name of a routine or dictionary to improve recognition. Routines are incorporated in the system from a system library or a user library. Their definition in C is always:

int routine_name (p_in, p_out)

where p_in points to the input information (a list of data elements, confidence, number of choices to be returned) and p_out points to the output information (choices and confidences). An integer returned is a 1 if there is a solution, 0 otherwise.
Suppose, for example, that the validity of a number can be checked by invoking a routine rtn1. The Routine clause in the ELEM_TYPE statement will be:

ROUTINE rtn1

A routine may also modify the received data element.
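A sketch of what such a routine might look like is shown below. The patent does not define the layout of the structures behind p_in and p_out, so the types here are assumptions that simply follow the prose description (candidate values and confidences in, accepted choices and confidences out); the year-validity test anticipates the date example discussed later.

#include <ctype.h>
#include <string.h>

/* Assumed input/output records for a semantic routine (illustrative only). */
typedef struct {
    const char *value;        /* candidate string for the data element */
    double      confidence;   /* OCR confidence for this candidate     */
} Choice;

typedef struct {
    const Choice *choices;    /* candidates produced by the OCR engine */
    int           n_choices;
    int           n_wanted;   /* number of choices to be returned      */
} RoutineIn;

typedef struct {
    Choice *choices;          /* accepted choices, best first (caller-allocated) */
    int     n_choices;
} RoutineOut;

/* Example semantic routine in the documented style: returns 1 if at least
   one candidate is an acceptable two-digit year (here: not later than 94),
   0 otherwise, and copies the accepted candidates to p_out. */
int year(const RoutineIn *p_in, RoutineOut *p_out)
{
    p_out->n_choices = 0;
    for (int i = 0; i < p_in->n_choices && p_out->n_choices < p_in->n_wanted; i++) {
        const char *v = p_in->choices[i].value;
        if (strlen(v) == 2 && isdigit((unsigned char)v[0]) && isdigit((unsigned char)v[1])) {
            int yy = (v[0] - '0') * 10 + (v[1] - '0');
            if (yy <= 94)                      /* e.g. reject years after the form date */
                p_out->choices[p_out->n_choices++] = p_in->choices[i];
        }
    }
    return p_out->n_choices > 0;
}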
Dictionaries are defined, using a DICTIONARY statement with the following format:
DICTIONARY dict_name (field_name_1) IN file_name

The argument of the IN portion of the DICTIONARY record identifies a location (such as a subdirectory) where the dictionary file resides. Here is an example of a DICTIONARY record, for a dictionary of first names:

DICTIONARY fstname (name) IN /dir/name1.dict

In this example, the dictionary would have the following format:

John
Peter
Mary

The format may also include an extra column containing the frequency of each entry, i.e., the frequency of occurrence of each of the first names in the dictionary.
The TYPE definition for a first_name would then refer to this dictionary as follows:
TYPE first_name WORD, ..., DICT (fstname)

In this example, the dictionary context condition is associated with an elementary type. That is, first names are an elementary type because they do not come in a plurality of different formats (except, of course, for combinations of upper and lower case characters). However, dictionaries and routines can also be applied to a collection of elements. For example, suppose we have a dictionary with multiple columns, expressed in the following general format:
DICTIONARY dict_name (field_name_1, field_name_2, ...) IN file_name

The following is a concrete example of a dictionary containing address information that can be used in connection with scans of mailing addresses:

DICTIONARY address (state, ZIP, city) IN /dir1/dir2/my_addr.text

Items listed in the address dictionary would have the following format:
CA 95120 SAN_JOSE

Suppose, now, that three fields, designated w7, w8, and w9, are defined for state, ZIP code, and city. Then, one can associate with the last field w9 a CHECK statement, as follows:
CHECK w9 address (w7.state, w8.ZIP, w9.city)

that will enforce the constraint that the submitted arguments constitute a triple that is in the dictionary "address". That is, if an OCR scan of a text string identifies a city by name in part of the string, then any state and ZIP code information also identified elsewhere in the string must match one of the listings in the dictionary which include the identified city and any different valid states and ZIP codes.
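The effect of such a CHECK statement can be pictured as a membership test over (state, ZIP, city) triples. The C sketch below uses a tiny in-memory table as a stand-in for the dictionary file; the names and entries are assumptions made only for illustration, and a practical analyzer would combine this with the fuzzy matching discussed later.

#include <stdio.h>
#include <string.h>

typedef struct {
    const char *state, *zip, *city;
} AddressEntry;

/* A tiny stand-in for the "address" dictionary file. */
static const AddressEntry address_dict[] = {
    { "CA", "95120", "SAN_JOSE" },
    { "CA", "93650", "FRESNO"   },
};

/* Returns 1 if the submitted (state, ZIP, city) arguments form a triple
   that is present in the dictionary, mirroring CHECK w9 address (...). */
static int check_address(const char *state, const char *zip, const char *city)
{
    for (size_t i = 0; i < sizeof address_dict / sizeof address_dict[0]; i++)
        if (strcmp(address_dict[i].state, state) == 0 &&
            strcmp(address_dict[i].zip,   zip)   == 0 &&
            strcmp(address_dict[i].city,  city)  == 0)
            return 1;
    return 0;
}

int main(void)
{
    printf("%d\n", check_address("CA", "95120", "SAN_JOSE"));  /* 1: valid triple   */
    printf("%d\n", check_address("CA", "95120", "FRESNO"));    /* 0: mismatched ZIP */
    return 0;
}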
THE CONTEXT ANALYZER
As mentioned earlier, the analyzer interprets the tokens returned by the recognition engine according to the constraints imposed by the DSL program. The overall problem may be couched in terms of a mathematical programming problem. Interpreting the complete document consists of receiving a set of choices provided by the OCR system from the input image, and picking up a certain OCR character choice and/or a certain syntactic or semantic alternative in the DSL specification. Any OCR choice has a certain probability, and therefore a solution (an acceptable sequence of decisions) has itself a probability obtained by combining the elementary probabilities of the choices.
If all possible combinations of choices are considered as possible solutions, such a method leads to a combinatorial explosion of different combinations. However, in accordance with the invention, special techniques are used to limit the explosion to a manageable number of combinations, and to increase performance.
To explain the overall functioning of the system, a simple execution algorithm which relies on controlled enumeration is used. The implementation will make sure that techniques such as branch and bound are used everywhere possible.
For example, let us consider the common numerical way of expressing the calendar date. The date February 5th, 1994 may be expressed as 2/5/94 or 2-5-94 (among others). Let us assume the following DSL specification:

ELEM_TYPE smalln WORD, NUM, LENGTH<=2
TYPE date (mm(smalln), dd(smalln), yy(smalln))
REP (mm "/" dd "/" yy)
REP (mm "-" dd "-" yy)
OUTPUT (mm "/" dd "/" yy)
FIELD w6, date

together with a set of OCR results, which are shown in FIG. 2.
In this DSL specification, the two REP statements define two representations of the date, which differ in that the delimiters are dashes and slashes, respectively.
The context analyzer processes the OCR results in two phases: Phase 1 handles syntactic constraints; Phase 2 takes care of semantic constraints.

PHASE 1: SYNTACTIC CONSTRAINTS
In its first phase, the algorithm utilizes syntactic constraints by essentially enumerating the (generally small number of) syntax models for a particular field. The following steps are used:
1. Determine the character types that are relevant to the syntax definition of the field.
In the present example, this essentially involves distinguishing between numeric characters (or connected components of more than one numeric character) and delimiter characters, and determining which of the two delimiter characters (dashes and slashes) have been recognized.
2. Convert the character hypotheses returned by the recognition engine into the corresponding type hypotheses. For example, for dates as given above, the relevant types are numeric, dash and slash. Then, for each connected component (multiple digit dates, months, and years), the type hypotheses can be derived from the character hypotheses.
For example, FIG. 3 is a table of possible types which were hypothesized from the OCR results in FIG. 2. The dashes and slashes are as shown, and all numerics are represented by the letter n. In the right-most column of FIG. 2, the three numerics 9, 4, and 2 are possible choices. They are all single-digit numerics, and thus are represented by the single entry of n (representing a single-digit numeric) in the right-most column of FIG. 3. In the left-most column of FIG. 2, three possible choices were recognized, the two numerics 2 and 7, and the two-character sequence of 9-. Two entries are in the left-most column of FIG. 3: one with a single numeric digit n, representing the 2 and 7, and a numeric followed by a dash, representing the 9-.
3. Enumerate the possible models, starting with the most probable types.
In the example, the analysis yields a list of possible syntactic models that could match the OCR results while satisfying the syntactic constraints in the DSL specification.
In this example, it happens that there is one possibly matching model for each of the two representations defined in the DSL specification, above.
For the first representation, the model is n/n/nn, and for the second representation, the model is n-nn-nn. Each of the two models is made possible because of the occurrence of dashes and slashes in the scanned results of FIG. 2, and the possible types of FIG. 3. Conversely, if there were, for instance, no slash along with the 9 at the top of the fourth column of FIG. 2, then the model for the first representation would not be available based on the characters as they exist in the OCR results. It would be possible, however, to go into another level of context analysis by interpreting the 1 in the 10, immediately below the /9 in the fourth column of FIG. 4, as a slash rather than a 1.
This interpretation would then revive the model for the first representation.
The first representation is deemed more probable based on the probabilities given along with the individual OCR results which are consistent with the respective models.
4. Replace the types by the actual values. This operation is straightforward. FIG. 4 shows a set of actual character choices for the first syntax. For instance, the month digit was recognized as either a 2 or a 7, with 2 having a higher recognition confidence. The probabilities can be accumulated along the paths. The full set of solutions would yield 2/5/99, 2/5/94, 2/5/92, 7/5/99, 7/5/94, and 7/5/92. However, if we assume that we have no other a priori information on the dates, the best choice is obtained by simply picking up the best choice for each character, according to the confidence levels expressed in parentheses in FIG. 2, yielding 2/5/99. The same is done for the other valid syntaxes; global probabilities are used to choose the optimum.
However, if more a priori knowledge on the semantics of dates exists, Phase 2 will exploit it.
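Step 4 above can be illustrated with a short C sketch that, assuming independent per-character confidences, scores every candidate string by the product of the confidences along its path and keeps the best one. The two-position hypothesis table below is a made-up stand-in and does not reproduce the figures of the patent.

#include <stdio.h>
#include <string.h>

#define MAX_POS 8
#define MAX_ALT 3

typedef struct {
    char   ch[MAX_ALT];     /* candidate characters    */
    double conf[MAX_ALT];   /* recognition confidences */
    int    n;
} Slot;

static double best_p;
static char   best_s[MAX_POS + 1];

/* Enumerate every combination, accumulate the product of confidences
   along the path, and remember the highest-scoring string. */
static void search(const Slot *slots, int len, int pos, char *cur, double p)
{
    if (pos == len) {
        if (p > best_p) { best_p = p; cur[len] = '\0'; strcpy(best_s, cur); }
        return;
    }
    for (int k = 0; k < slots[pos].n; k++) {
        cur[pos] = slots[pos].ch[k];
        search(slots, len, pos + 1, cur, p * slots[pos].conf[k]);
    }
}

int main(void)
{
    /* Made-up choices for the last two characters of the year ("9x"). */
    Slot year[2] = {
        { { '9' },           { 0.95 },             1 },
        { { '9', '4', '2' }, { 0.60, 0.30, 0.10 }, 3 },
    };
    char cur[MAX_POS + 1];
    best_p = 0.0;
    search(year, 2, 0, cur, 1.0);
    printf("best: %s (p = %.2f)\n", best_s, best_p);
    return 0;
}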

PHASE 2: SEMANTIC CONSTRAINTS
Let us suppose that the a priori knowledge is embedded in a year routine that returns a Boolean expression, of value 1 if the year is valid and 0 if the year is invalid. Then, the DSL program is modified to include the semantic check, as follows:

ELEM_TYPE smalln WORD, NUM, LENGTH<=2
ELEM_TYPE year WORD, NUM, LENGTH<=2, "year"
TYPE date (mm(smalln), dd(smalln), yy(year))
REP (mm "/" dd "/" yy)
REP (mm "-" dd "-" yy)
OUTPUT (mm "/" dd "/" yy)
FIELD w6, date

Phase 2 is invoked as soon as a result for a single element (such as yy) is identified in process step 4, discussed above. If the element is associated with a semantic constraint (such as "year"), the constraint is checked. If the constraint is satisfied, the value is accepted. Otherwise the process continues to find the best solution.
When the current representation has been matched, the results and their overall confidence levels are stored in the object buffer. When all representations have been processed, the best solution of all those accumulated is picked. In the example, the process unfolds as follows:

for the first rep: first and only syntax: n/n/nn
replacement by characters: 2/5/99
semantic check: 99 fails (assume the check is for past dates)
next hypothesis: 94
semantic check: OK. Accept 2/5/94
store this hypothesis in the object buffer.
for the second rep: similar process. Accept 9-13-69
Pick up best.

GENERALIZATION
This concludes the discussion of the Date field (February 5, 1994) example. Now, more general cases will be considered.
Essentially, the generalization goes in two directions:

1. Extension to handle dictionaries and routines that are not boolean.
2. Extension to multi-phase checking.

NON-BOOLEAN CHECKING
Until now we have only seen how boolean routines (returning 1 or 0) are exploited. In fact, the process explained above generates solutions made out of combinations of letters that exist in the OCR results.
In the non-boolean case, routines are considered which, from the OCR values, generate results by computation. Then, results may have characters that are not in the OCR choices. The use of a dictionary enters into that category.
Consider, for example, a form that is only used in the State of California, and which contains a field city. Assume its content for a particular form is 'FRESNO'. At the beginning of the semantic checking phase, sets are available, containing character choices, as shown in FIG. 5.
That whole set of characters can be passed to a fuzzy search routine that will find the best matching values in a dictionary.
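One common way to implement such a fuzzy search is to rank dictionary entries by edit (Levenshtein) distance. The C sketch below is only an assumption about how such a routine might work; the patent does not prescribe a particular matching algorithm, and for simplicity the sketch matches a single best OCR string rather than the full sets of character choices shown in FIG. 5.

#include <stdio.h>
#include <string.h>

static int min3(int a, int b, int c) { int m = a < b ? a : b; return m < c ? m : c; }

/* Classic dynamic-programming edit distance between two strings. */
static int edit_distance(const char *a, const char *b)
{
    int la = (int)strlen(a), lb = (int)strlen(b);
    int d[64][64];                       /* assumes short words */
    for (int i = 0; i <= la; i++) d[i][0] = i;
    for (int j = 0; j <= lb; j++) d[0][j] = j;
    for (int i = 1; i <= la; i++)
        for (int j = 1; j <= lb; j++)
            d[i][j] = min3(d[i-1][j] + 1, d[i][j-1] + 1,
                           d[i-1][j-1] + (a[i-1] != b[j-1]));
    return d[la][lb];
}

int main(void)
{
    /* Hypothetical best OCR string for the city field and a tiny
       stand-in for a city dictionary. */
    const char *ocr = "FRE5N0";
    const char *dict[] = { "FREMONT", "FRESNO", "MODESTO", "SAN_JOSE" };
    const char *best = NULL;
    int best_d = 1 << 30;
    for (size_t i = 0; i < sizeof dict / sizeof dict[0]; i++) {
        int dd = edit_distance(ocr, dict[i]);
        if (dd < best_d) { best_d = dd; best = dict[i]; }
    }
    printf("best match: %s (distance %d)\n", best, best_d);
    return 0;
}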

MULTI-STAGE CHECKING
We now extend the example to a field that contains the city and the ZIP code: FRESNO 93650. It is clear that several semantic constraints are relevant: (1) Fresno must be a valid city name, (2) 93650 must be a valid ZIP code, and (3) last but not least, 93650 must be a ZIP code in Fresno. That is where multi-stage checking comes in.
One possible solution is to write, in DSL:
ELEM_TYPE ZIP WORD, NUM, LENGTH=5
ELEM_TYPE city PHRASE, my_string
FIELD w2, city
FIELD w4, ZIP
CHECK DICT address (w2.city, w4.ZIP)

The syntax checking will unfold as explained before. But, since future semantic checks will happen as a result of CHECK, the choices of solutions for city can simply accumulate the OCR results in the object buffer. The same will happen for ZIP.
Then the execution of CHECK will execute a fuzzy search that covers both city and ZIP.
The mechanism provides much flexibility. For example, another reasonable option is to accept only valid city names during the city checking (using a city dictionary), only valid ZIP codes during the ZIP checking, and then use a checking routine that picks up the best combination. The choice among alternative methods depends on the semantics. The needed flexibility is provided both by the language and the analyzer.

What follows is high-level pseudocode that describes the overall functioning of the context processor.
Context_Processor:
for each window
    for each representation
        find all syntactic models that satisfy the grammar (looking only at types);
        for each syntactic model
            for each data element in the model
                /* replace by actual characters */
                if no semantic, use most probable chars only;
                if semantic is delayed (until CHECK with dict)
                    save OCR results in Object buffer (OB);
                if semantic through dict for elem type
                    invoke search routine with OCR results;
                if other semantic on individual element
                    consider next character combination (and stack position);
                    if satisfies semantics, accept and go on with next data element;
                    else backtrack to get next combination;
            After last data element
                save solution in OB (keep only the n best, where n can be specified);
                backtrack to try next choice;
            When no backtracking left, loop to handle next syntactic model;
        When no syntactic model left, loop to handle next representation;
    When last representation, loop to handle next window;
When last window, end;

CHECK_with_dictionary:
use info stored in OB to find the best dictionary entries and update OB.
For each "column" in the dictionary, the initial info is OCR result choices or a set of values (see parsing algorithm above).

CHECK_with_routine:
always use data elements that have been identified during the parsing.

THE OBJECT BUFFER
Earlier, the object buffer was described as a simple mechanism to pass values of named variables. However, semantic checking, as explained above, requires more information about the values and confidence of the OCR engine. It also relies on the capability of changing a result or a set of results available for a subsequent stage of checking.
The recognition engine puts its results in the buffer as a specific predetermined structure. The preferred structure is defined hierarchically as follows: The structure is set up in terms of fields. A field includes a sequence of connected components. A connected component includes a set of options. An option is a sequence of elementary patterns. A pattern is a set of possible characters.
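The hierarchy just described (field, connected component, option, pattern) could be represented by data structures along the following lines. This is purely an illustrative C sketch; the member names and layout are assumptions rather than the structure actually defined by the system.

#include <stddef.h>

/* A pattern is a set of possible characters, each with a confidence. */
typedef struct {
    char   *chars;          /* candidate characters         */
    double *confidences;    /* one confidence per candidate */
    size_t  n_candidates;
} Pattern;

/* An option is one way of segmenting a connected component into
   elementary patterns. */
typedef struct {
    Pattern *patterns;
    size_t   n_patterns;
} Option;

/* A connected component carries the competing segmentation options. */
typedef struct {
    Option *options;
    size_t  n_options;
} ConnectedComponent;

/* A field is a sequence of connected components, identified by name. */
typedef struct {
    const char         *name;
    ConnectedComponent *components;
    size_t              n_components;
} Field;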
Similarly, the temporary and final results of the context processor, produced as described above, are also stored in the buffer.
Finally, semantic routines can get data from the buffer;
they can also update the buffer by changing values, reducing a set of hypotheses to a single one, or even replacing a single value by a set for further checking and selection.

SUMMARY
An architecture has been disclosed for context checking in OCR. It is centered on a Document Specification Language.
Document description languages have been used for a long time as a means of communication, and mainly to help formatting. Here, DSL is geared towards helping recognition.
The system organization allows for sharing many functions among applications:
1. it factors out the central mechanism of choosing among choices,
2. it provides a uniform mechanism for invoking semantic routines, and
3. it provides a unified mechanism for interchanging the kind of data involved in the recognition process.
Once such a framework is used, it is expected to trigger the development of some library of DSL functions, very much like classes in an object-oriented programming language. The results will be more striking because the universe of types is much more restricted. The same thing will be true for some important semantic routines. A first example is a fuzzy search of a dictionary.
The following are some details regarding the implementation of the invention. Some components were initially implemented as ad hoc routines for processing some tax forms, credit applications and other forms. But, very quickly, the amount of ad hoc code proved to be substantial, and an implementation of an operational system was made, based on a somewhat simplified version of the proposal, in which process optimization was disregarded. This simplification did not adversely impact the ability to handle the above applications which were the primary objective of the invention.
Here are some preliminary results from evaluating the implementation of the invention. Note that the recognition rates are at the field level, not the character level:
telephone number (10 digits) syntactic checking ONLY:
field recognition rate increases from 28% to 44%

single word dictionary lookup (last name, incomplete dictionary):
recognition rate increases from 9% to 27%

multi-field dictionary lookup (city, state, ZIP) recognition rate increases:
from 8% to 60% for city
from 18% to 64% for state
from 28% to 50% for ZIP code

For the experiments that produced these results, the quality of the images was poor, so that the OCR engine would produce a relatively low recognition rate. The experiment does, however, illustrate the efficiency of the context analyzer, in terms of the increase in percentages given above.
While the preferred embodiments of the present invention have been illustrated in detail, it should be apparent that modifications and adaptations to those embodiments may occur to one skilled in the art without departing from the scope of the present invention as set forth in the following claims.

Claims (14)

1. An optical character recognition (OCR) system comprising:
means for producing a scan of an input image of text to be recognized; and a context analyzer, coupled to receive the scan, for checking the scan for consistency with a predetermined text content constraint.
2. An OCR system as recited in claim 1, wherein:
the predetermined text content constraint includes a syntactical constraint and a semantic constraint; and the OCR system further comprises:
syntax means for checking the preliminary scan for consistency with a syntactical constraint, and semantics means for checking the preliminary scan for consistency with a semantic constraint.
3. An OCR system as recited in claim 2, wherein the semantics means is operative responsive to completion of operation of the syntax means.
4. An OCR system as recited in claim 2, wherein the syntax means and the semantic means each include means for interpreting text content constraints in terms of a user-programmable document specification language.
5. An OCR system as recited in claim 4, wherein:
the syntax means includes:
(i) means for receiving a document description, programmed in the document specification language, pertaining to the text to be recognized, and (ii) means for compiling the document description to produce a context structure; and the context analyzer includes means for checking the preliminary scan for consistency with the context structure.
6. An OCR system as recited in claim 4, wherein:
the semantics means includes a library of semantic routines;
and the context analyzer includes means for checking the preliminary scan for consistency with the semantic routines.
7. An OCR system as recited in claim 6, further comprising means for facilitating modification of the library of semantic routines by a user of the OCR system.
8. An OCR system as recited in claim 4, wherein the document specification language includes instructions for defining at least one of:
(i) a field within the text to be recognized, (ii) a character type for characters occurring within the field, (iii) an alphabet, and (iv) a representation of a sequence of characters in terms of types of the characters of the sequence.
9. An OCR system as recited in claim 1, further comprising an object oriented buffer coupled to the context analyzer for passing values of variables.
10. An OCR system as recited in claim 1, wherein:

the OCR system further comprises a dictionary containing a set of valid text items; and the context analyzer includes means for performing a fuzzy search through the dictionary to identify best matching values, among the text items, for the text to be recognized.
11. An OCR system as recited in claim 1, wherein:
the OCR system includes a plurality of dictionaries containing sets of valid text items for respective fields of the text to be recognized; and the context analyzer includes:
(i) means for performing respective fuzzy searches through the dictionaries to identify best matching values, among the text items, for respective fields of the text to be recognized, and (ii) means for comparing the best matching values of the respective fuzzy searches to identify a best combination of the best matching values.
12. An OCR system as recited in claim 2, further comprising:
a recognition engine coupled to the context analyzer for performing an initial character recognition procedure on the image; and an object oriented buffer coupled to the recognition engine for receiving and storing results of the initial character recognition procedure in a predetermined structure, for storing results of the context analyzer, and for providing updatable data to the semantic means.
13. A method for performing optical character recognition (OCR) on an image to recognize text in the image, the method comprising the steps of:

providing syntax definitions of an expected content of the field of the image;
determining character types that are relevant to the syntax definitions of the field;
operating a recognition engine on the image to produce character hypotheses of a content of the image, and probability values for the character hypotheses;
converting the character hypotheses into character type hypotheses;
enumerating possible models for the image content based on the character type hypotheses and on the probability values;
replacing the character type hypotheses with character values to produce a set of solutions; and selecting one of the solutions as the recognized text.
14. A computer program product, for use with a processing system, for directing the processing system to perform optical character recognition (OCR) on an image to recognize text in the image, the computer program product comprising:
a recording medium;
means, recorded on the recording medium, for directing the processing system to provide syntax definitions of an expected content of the field of the image;
means, recorded on the recording medium, for directing the processing system to determine character types that are relevant to the syntax definitions of the field;
means, recorded on the recording medium, for directing the processing system to operate a recognition engine on the image to produce character hypotheses of a content of the image, and probability values for the character hypotheses;
means, recorded on the recording medium, for directing the processing system to convert the character hypotheses into character type hypotheses;
means, recorded on the recording medium, for directing the processing system to enumerate possible models for the image content based on the character type hypotheses and on the probability values;
means, recorded on the recording medium, for directing the processing system to replace the character type hypotheses with character values to produce a set of solutions; and means, recorded on the recording medium, for directing the processing system to select one of the solutions as the recognized text.
CA002155891A 1994-10-18 1995-08-11 Optical character recognition system having context analyzer Abandoned CA2155891A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US32584994A 1994-10-18 1994-10-18
US08/325,849 1994-10-18

Publications (1)

Publication Number Publication Date
CA2155891A1 true CA2155891A1 (en) 1996-04-19

Family

ID=23269717

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002155891A Abandoned CA2155891A1 (en) 1994-10-18 1995-08-11 Optical character recognition system having context analyzer

Country Status (3)

Country Link
US (1) US6577755B1 (en)
EP (1) EP0708412A2 (en)
CA (1) CA2155891A1 (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7349892B1 (en) 1996-05-10 2008-03-25 Aol Llc System and method for automatically organizing and classifying businesses on the World-Wide Web
US7594176B1 (en) * 2001-09-05 2009-09-22 Intuit Inc. Automated retrieval, evaluation, and presentation of context-sensitive user support
AU2002952106A0 (en) * 2002-10-15 2002-10-31 Silverbrook Research Pty Ltd Methods and systems (npw008)
JP2004152036A (en) * 2002-10-31 2004-05-27 Nec Saitama Ltd Cellular phone with character recognizing function, correction method of recognized character, and program
WO2004109587A1 (en) * 2003-06-02 2004-12-16 Sharp Kabushiki Kaisha Portable information terminal
US20050097019A1 (en) * 2003-11-04 2005-05-05 Jacobs Ronald F. Method and system for validating financial instruments
US7292726B2 (en) * 2003-11-10 2007-11-06 Microsoft Corporation Recognition of electronic ink with late strokes
US7499588B2 (en) 2004-05-20 2009-03-03 Microsoft Corporation Low resolution OCR for camera acquired documents
US20050278250A1 (en) * 2004-06-10 2005-12-15 Kays Zair Transaction processing payment system
US20060083431A1 (en) * 2004-10-20 2006-04-20 Bliss Harry M Electronic device and method for visual text interpretation
WO2006059246A2 (en) * 2004-11-08 2006-06-08 Dspv, Ltd. System and method of enabling a cellular/wireless device with imaging capabilities to decode printed alphanumeric characters
US7551782B2 (en) * 2005-02-15 2009-06-23 Dspv, Ltd. System and method of user interface and data entry from a video call
US20060287996A1 (en) * 2005-06-16 2006-12-21 International Business Machines Corporation Computer-implemented method, system, and program product for tracking content
US20070005592A1 (en) * 2005-06-21 2007-01-04 International Business Machines Corporation Computer-implemented method, system, and program product for evaluating annotations to content
US20070011012A1 (en) * 2005-07-11 2007-01-11 Steve Yurick Method, system, and apparatus for facilitating captioning of multi-media content
US7575171B2 (en) 2005-09-01 2009-08-18 Zvi Haim Lev System and method for reliable content access using a cellular/wireless device with imaging capabilities
US20090017765A1 (en) * 2005-11-04 2009-01-15 Dspv, Ltd System and Method of Enabling a Cellular/Wireless Device with Imaging Capabilities to Decode Printed Alphanumeric Characters
US7613632B2 (en) * 2005-11-14 2009-11-03 American Express Travel Related Services Company, Inc. System and method for performing automated testing of a merchant message
US7505180B2 (en) * 2005-11-15 2009-03-17 Xerox Corporation Optical character recognition using digital information from encoded text embedded in the document
US9020811B2 (en) * 2006-10-13 2015-04-28 Syscom, Inc. Method and system for converting text files searchable text and for processing the searchable text
US7986843B2 (en) 2006-11-29 2011-07-26 Google Inc. Digital image archiving and retrieval in a mobile device system
US20080162602A1 (en) * 2006-12-28 2008-07-03 Google Inc. Document archiving system
US8155444B2 (en) * 2007-01-15 2012-04-10 Microsoft Corporation Image text to character information conversion
US20090144056A1 (en) * 2007-11-29 2009-06-04 Netta Aizenbud-Reshef Method and computer program product for generating recognition error correction information
US9003531B2 (en) * 2009-10-01 2015-04-07 Kaspersky Lab Zao Comprehensive password management arrangment facilitating security
US11610653B2 (en) * 2010-09-01 2023-03-21 Apixio, Inc. Systems and methods for improved optical character recognition of health records
US9418385B1 (en) * 2011-01-24 2016-08-16 Intuit Inc. Assembling a tax-information data structure
US9082035B2 (en) * 2011-08-29 2015-07-14 Qualcomm Incorporated Camera OCR with context information
US9727535B2 (en) 2013-06-11 2017-08-08 Microsoft Technology Licensing, Llc Authoring presentations with ink
US10331948B1 (en) * 2015-05-08 2019-06-25 EMC IP Holding Company LLC Rules based data extraction
US10346702B2 (en) 2017-07-24 2019-07-09 Bank Of America Corporation Image data capture and conversion
US10192127B1 (en) 2017-07-24 2019-01-29 Bank Of America Corporation System for dynamic optical character recognition tuning
RU2691214C1 (en) * 2017-12-13 2019-06-11 Общество с ограниченной ответственностью "Аби Продакшн" Text recognition using artificial intelligence
CN111652219B (en) * 2020-06-03 2023-08-04 有米科技股份有限公司 Image-text identification detection and identification method, device, server and storage medium
US11881041B2 (en) 2021-09-02 2024-01-23 Bank Of America Corporation Automated categorization and processing of document images of varying degrees of quality

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3969698A (en) 1974-10-08 1976-07-13 International Business Machines Corporation Cluster storage apparatus for post processing error correction of a character recognition machine
US4674065A (en) * 1982-04-30 1987-06-16 International Business Machines Corporation System for detecting and correcting contextual errors in a text processing system
US4523330A (en) * 1982-12-23 1985-06-11 Ncr Canada Ltd - Ncr Canada Ltee Banking system and method
DE3870571D1 (en) 1987-10-16 1992-06-04 Computer Ges Konstanz METHOD FOR AUTOMATIC CHARACTER RECOGNITION.
US5040227A (en) 1990-03-12 1991-08-13 International Business Machines Corporation Image balancing system and method
US5151948A (en) 1990-03-12 1992-09-29 International Business Machines Corporation System and method for processing documents having amounts recorded thereon
US5267327A (en) * 1990-03-30 1993-11-30 Sony Corporation Apparatus and method for registering the handwriting of a user so it can be translated into block characters
US5497319A (en) * 1990-12-31 1996-03-05 Trans-Link International Corp. Machine translation and telecommunications system

Also Published As

Publication number Publication date
US6577755B1 (en) 2003-06-10
EP0708412A2 (en) 1996-04-24

Similar Documents

Publication Publication Date Title
US6577755B1 (en) Optical character recognition system having context analyzer
US5151950A (en) Method for recognizing handwritten characters using shape and context analysis
US4991094A (en) Method for language-independent text tokenization using a character categorization
Celentano et al. Compiler testing using a sentence generator
US6023536A (en) Character string correction system and method using error pattern
JP4350109B2 (en) Character recognition system for identifying scanned and real-time handwritten characters
JP2726568B2 (en) Character recognition method and device
Caprile et al. Nomen est omen: Analyzing the language of function identifiers
EP0583083A2 (en) Finite-state transduction of related word forms for text indexing and retrieval
JP3152868B2 (en) Search device and dictionary / text search method
EP1011057B1 (en) Identifying a group of words using modified query words obtained from successive suffix relationships
Dundas III Implementing dynamic minimal‐prefix tries
Heering et al. Incremental generation of lexical scanners
US6912516B1 (en) Place name expressing dictionary generating method and its apparatus
KR100692327B1 (en) An expression method of names of places, a recognition method of names of places and a recognition apparatus of names of places
Du et al. A unified object-oriented toolkit for discrete contextual computer vision
Wolff 'Computing'as Information Compression by Multiple Alignment, Unification and Search
Daciuk et al. Natural Language Dictionaries Implemented as Finite Automata.
Ehrenfeucht et al. String searching
Marti RLISP'88: an evolutionary approach to program design and reuse
Thomason Syntactic/semantic techniques in pattern recognition: a survey
Searls et al. Document image analysis using logic-grammar-based syntactic pattern recognition
Dengel et al. Fragmentary string matching by selective access to hybrid tries
Lucas Rapid best-first retrieval from massive dictionaries by lazy evaluation of a syntactic neural network
CA2617416C (en) Character recognition system identification of scanned and real time handwritten characters

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued