|Publication number||US5335289 A|
|Application number||US 07/837,450|
|Publication date||Aug 2, 1994|
|Filing date||Feb 14, 1992|
|Priority date||Feb 13, 1991|
|Also published as||EP0498978A1|
|Inventors||Hazem Y. Abdelazim|
|Original Assignee||International Business Machines Corporation|
1. Field of Invention
The invention relates to a system and method for character recognition. More particularly, the invention relates to recognizing characters in cursive script of any kind, and in particular in typewritten Arabic script, since Arabic writing is by its nature cursive and it is unacceptable to write Arabic with isolated characters.
2. Prior Art
Character recognition has long been a subject of intensive research, aimed at providing practical means for automatically processing large volumes of existing data.
Taking typewritten script as an example, the characters used to produce words are, for many languages, isolated. However, in some languages such as Arabic, the characters are generally not isolated but are strung together to form cursive script. Thus a major problem in recognizing cursive script is how to identify the constituent characters within it.
A number of approaches have been described in the prior art. One approach involves a segmentation algorithm developed to resolve the cursiveness problem based on an Energy-like approach. The process first detects lines of script and 'Pieces of Arabic Word' (PAWs) by standard image processing techniques. A PAW is defined in the article as a part of an Arabic word separated by vertical boundaries (assuming the lines of text are horizontal).
Next the PAWs are segmented by producing energy-like curves based on the `black-to-black` height of their constituent pixel columns. A preselected threshold value traverses this curve yielding significant primitives and leaving out `dead` or silent zones.
Then two consecutive clustering procedures are carried out to group similar primitives into groups so as to facilitate the recognition process. Once the primitives have been recognized the appropriate characters are reconstructed.
This process has the disadvantage that segmentation of the PAWs takes place before recognition, which adds complexity by increasing the number of pattern classes to be recognized. In addition, the recognition system is relatively slow due to the time consumed in the segmentation step which precedes recognition.
In the article `Automatic Recognition of Bilingual Typewritten Text`, Proc. of COMPEURO '89 VLSI & Computer Peripherals IEEE Conference in Hamburg, vol. 2, pp. 140-144, 8-12 May 1989, Abdelazim and Hashish discuss a recognition system based on a combined statistical/structural approach. The process follows a routine similar to the previous piece of prior art in that the `Pieces of Bilingual Word` (PBWs) are segmented into primitives using the same segmentation method as above.
The primitives are then clustered and passed through a decision tree module to produce a set of candidate primitives. The input primitive is then correlated with these candidates and depending on the degree of correlation is matched to one of the candidates. If the correlation is not above a desired level the input primitive is shifted one pixel in its reference frame and then fed back into the decision tree to repeat the process. Hence this piece of prior art also carries out segmentation before recognition and suffers from the same drawbacks as the previous piece of prior art.
The article `Arabic Typewritten Character Recognition Using Dynamic Comparison`, Proc. 1st Kuwait Computer Conference, pp. 455-462, Mar. 1989, describes a method based on dynamic programming techniques which allows character recognition with no segmentation phase. A word containing a set of characters (T) is isolated and an image of the word formed. Then the image is compared with a concatenated combination of representative character patterns obtained from a library of reference patterns. Several combinations may be tried and the boundaries of the characters in T are obtained by projecting the intersections of the `optimal path` (as described in the article) with the boundaries of the characters contained in the best combined reference from the library. This approach involves a large number of comparisons and is accordingly extremely time consuming.
The article `Computer Recognition of Arabic Cursive Scripts`, Pattern Recognition, Vol. 21, No. 4, 1988, pp. 293-302, discusses a method involving a contour following algorithm to produce the segmentation of Arabic words into their constituent characters. The input to the segmentation algorithm is a closed contour representing either a complete word or a subword. The segmentation process is based on the calculation of the contour height (h) which is the distance between the two extreme points of intersection of the contour with a vertical line.
Working on the principle that the contour height is usually very small at the boundaries between different characters the constituent characters are extracted. These characters are represented by their outer contour and any diacritics present, and are classified as a particular known character in a classifying stage.
This method lacks generality and is font-specific. It also is dependent on accurate segmentation occurring in order to allow the recognition phase to work well.
It is an object of the present invention to provide an improved system and method for recognizing cursive script, which does not require segmentation before recognition.
Viewed from a first aspect a method is provided for recognizing characters in cursive script in which the script is scanned to detect word boundaries and words are then segmented into characters, characterized by the steps of:
(i) choosing a word boundary and extracting portions of a word associated with said boundary;
(ii) comparing selected portions successively with a set of reference portions representing known characters until a character within said word has been identified as one of said known characters;
(iii) skipping a number of portions depending on the identified character and then repeating the process from step (ii) for the identification of next and subsequent characters.
The method of the present invention can be implemented in a variety of ways but in preferred embodiments the method is characterized by the steps of:
(a) forming a series of vectors representing features of the script at different positions in the characters constituting the script;
(b) comparing an initial vector, chosen with reference to a word boundary, with a known vector from a set of reference vectors;
(c) determining an accumulated uncertainty value relating to the degree with which the compared vector identifies a known character;
(d) comparing said accumulated uncertainty value with a predetermined threshold value;
(e) if said accumulated uncertainty value is less than said threshold value then recognizing the character as said known character, otherwise selecting subsequent vectors for comparison with said set of reference vectors, and repeating steps (c) to (d) until said uncertainty value is less than said threshold value;
(f) selecting a new initial vector in accordance with the character so identified, comparing said vector with said set of reference vectors, and repeating the process from step (c) for the identification of the next and subsequent characters.
Also in preferred embodiments the method is able to repeat steps (ii) and (iii) until another word boundary is reached, at which time the method is repeated from step (i) for a different word boundary. Hence the method can continue indefinitely until all the script has been read.
The method can be used to recognize any cursive script, but in preferred embodiments said cursive script is typewritten cursive script using characters selected from a predetermined font. Typewritten cursive script is advantageous since its characters are more accurately reproducible than those in handwritten script.
The set of reference portions can be formed in a variety of ways, such as by loading libraries of reference portions for different fonts supplied as software, or by allowing the user to store his own fonts on a storage device using the facilities provided by the system. In preferred embodiments, the set of reference portions is formed by the latter method during a learning phase comprising the steps of: entering into a storage device said set of reference portions representative of possible portions relating to characters in said font, and assigning each such portion a particular label; and scanning each character belonging to said font in turn to produce a sequence of portions, and hence a sequence of labels identified with said character, and storing said sequences in said storage device. This is advantageous since the user can enter his own customized fonts as required.
Viewed from a second aspect a system is provided for recognizing characters in cursive script in which the script is scanned to detect word boundaries (40) and words are then segmented into characters, characterized by:
sectioning means (50) for forming a series of portions representing features of the script at different positions in the characters constituting the script;
recognition and segmentation means (60) for comparing an initial portion, chosen with reference to a word boundary, with a known portion from a set of reference portions and to compare subsequent portions in said series similarly with known portions, until the cumulative results of the comparison identify a character with an uncertainty less than a predetermined threshold value, and then to repeat the comparison process starting with a new initial portion selected in accordance with the character identified.
The present invention will be described further, by way of example only, with reference to an embodiment thereof as illustrated in the accompanying drawings, in which:
FIG. 1 is a block diagram showing the processes carried out by an embodiment of the present invention.
FIG. 2(a) is a diagram which shows the feature extraction process employed in the embodiment.
FIG. 2(b) is a graph which illustrates the form of the a-priori probability distribution calculated in the learning phase.
FIG. 3 is a diagram and graph which illustrate the recognition/segmentation mechanism of the embodiment.
In the preferred embodiment a system will be described for recognizing characters in typewritten Arabic script. In the following description the term `word` will be used to denote a string of characters contained within two adjacent word boundaries.
FIG. 1 shows as a block diagram the processes carried out by the embodiment. Initially the cursive script enters a process 10 where it is scanned by a detector (not shown) to produce a digital representation of the script. This digital representation next enters a process 20 where the line structure of the script is determined and then, via a process 30, word boundaries are detected and patterns relating to words are isolated. The aforementioned processes 10, 20, 30, collectively referred to as image preprocessing 40, are well known in the art and hence will not be discussed further.
In the embodiment the script shall be considered to be written in the form of black print on white paper, although it will be appreciated by one skilled in the art that the invention is in no way limited to this particular representation. The digital pattern of a word thus consists of a series of black and white pixels. The process 50 of extracting features of the script represented in this format will be discussed with reference to FIG. 2(a).
Taking a line of text as a horizontal reference and starting at a word boundary, an initial column of pixels is read and a vector (v) is formed, such that the dimension of the vector is the number of black/white segments in the column, and the elements of the vector are the numbers of pixels in the corresponding segments. This vector is then passed on to the next process, the detector moves one column to the left, and the next column of pixels is read to form another vector. Taking the example in FIG. 2(a), the detector is positioned over the twelfth column 52 from the rightmost edge 53 of a sample word 51, the parameter Zi indicating the ith column and thus being assigned the value 12 in this instance. This column consists of four black pixels 54, followed by three white pixels 55, followed by five black pixels 56. The vector 57 formed by the above process is:

v = (4, 3, 5)

a vector of dimension three.
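The column-to-vector step amounts to run-length encoding a single pixel column. The sketch below is an illustrative reading of the feature extraction process 50, not code from the patent:

```python
def column_vector(column):
    """Run-length encode one pixel column (1 = black, 0 = white).

    The vector's dimension is the number of black/white segments in
    the column; each element is the length of the corresponding
    segment in pixels.
    """
    vector = []
    run_value, run_length = column[0], 0
    for pixel in column:
        if pixel == run_value:
            run_length += 1
        else:
            vector.append(run_length)
            run_value, run_length = pixel, 1
    vector.append(run_length)
    return vector

# The column of FIG. 2(a): four black, three white, five black pixels
print(column_vector([1]*4 + [0]*3 + [1]*5))  # [4, 3, 5]
```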
Returning to FIG. 1 and before describing the `Recognition/Segmentation` phase 60 of the embodiment it is convenient to describe the learning process employed during the setting up of the system to provide a reference library for the recognition phase.
The ultimate goal of any character recognition system is the capability to read a wide variety of typewritten fonts. This is accomplished in the learning phase, where the system is set up to recognize a particular typewritten font. The learning phase is divided into two successive phases, namely, the unsupervised 70 and supervised 80 learning phases.
As shown in FIG. 1, Branch A, the main task performed in the unsupervised learning phase 70 is the generation of a "Code Book" 90. The "Code Book" 90 is a set of reference vectors representative of possible vectors relating to characters in the chosen font. The characters associated with the chosen font are entered into the image preprocessing process 40 and the associated vectors extracted by the feature extraction process 50. Several scans of the characters are made to increase statistical accuracy and the resultant vectors are fed into the unsupervised learning process 70 through Branch A of FIG. 1. In this process 70 an unsupervised clustering algorithm determines the different vectors present and stores them in the Code Book 90 as a set of reference vectors, assigning each vector a label by which it can be identified.
The learning process next enters the supervised learning phase 80. In this phase the statistics needed for the recognition/segmentation process 60 are computed. Isolated characters for a given font are fed to this module 80 as shown in FIG. 1, branch B, after passing through the feature extraction process 50. Assuming that the total number of characters (pattern classes) is K, and that S samples are available for each character, the necessary statistics, expressed by the a-priori conditional probability P((Zi→Cj)/Ωk), are given in an operation 100 by the following equation:

P((Zi→Cj)/Ωk) = N((Zi→Cj)/Ωk) / S
The terms of the above equation are defined as below:
Ωk : is the kth pattern class (character), k=1...K.
Zi : is the ith column position within the character starting from the rightmost edge.
Cj : is the jth element in the Code Book
Zi->Cj : is the labelling of the ith column position with the jth element in the Code Book.
N(.) : is the frequency of occurrence of the event inside the argument.
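Under these definitions, operation 100 is a relative-frequency count over the S training samples of each character. A minimal sketch follows, with a hypothetical data layout (the patent does not specify one):

```python
from collections import Counter

def apriori_table(samples_by_class):
    """Estimate P((Zi -> Cj) / Omega_k) by relative frequency.

    samples_by_class[k] is the list of S training samples for pattern
    class k; each sample is the sequence of Code Book labels Cj
    assigned to columns Z1, Z2, ... of that character (layout is an
    assumption for illustration).

    Returns table[(k, i, j)] = P((Zi -> Cj) / Omega_k).
    """
    table = {}
    for k, samples in samples_by_class.items():
        counts = Counter()
        for labels in samples:
            for i, j in enumerate(labels, start=1):
                counts[(i, j)] += 1          # N((Zi -> Cj) / Omega_k)
        S = len(samples)
        for (i, j), n in counts.items():
            table[(k, i, j)] = n / S
    return table
```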
The value of `Zi→Cj` is found by extracting a feature vector from the ith column using the feature extraction process 50 already discussed, and then assigning to this vector the label of the nearest Code Book vector (using a simple absolute distance measure). The absolute distance measure used is known as the `City-Block` distance and is defined by the equation:

dj = Σ (m=1 to n) |Xm − Vjm|
The terms of the above equation are defined as below:
dj : is the absolute distance measure.
Vj : is the jth reference vector that corresponds to an element in the Code book.
X : is the feature vector extracted from the input character.
n : is the number of segments in the vector (the dimension of the vector).
Thus Cj is the label of the reference vector Vj that is closest to X, and hence produces the smallest value of dj; j ranges over all the vectors in the Code Book.
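The nearest-vector matching can be sketched as follows. The dictionary-based Code Book layout is an assumption for illustration, and the patent does not say how vectors of unequal dimension are handled; treating them as non-matching is my assumption:

```python
def city_block(x, v):
    """City-Block (absolute) distance dj between feature vector x and
    reference vector v. Vectors of different dimension are treated as
    infinitely far apart (assumption, not stated in the patent)."""
    if len(x) != len(v):
        return float("inf")
    return sum(abs(a - b) for a, b in zip(x, v))

def nearest_label(x, code_book):
    """Return the label Cj of the Code Book vector Vj closest to x,
    i.e. the one producing the smallest dj."""
    return min(code_book, key=lambda j: city_block(x, code_book[j]))
```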
An example of the a-priori probability distribution is shown in FIG. 2(b).
The a-posteriori probability needed in the recognition/segmentation process 60 can now simply be obtained by Bayes' rule in an operation 110 as follows:

P(Ωk/(Zi→Cj)) = P((Zi→Cj)/Ωk) P(Ωk) / Σ (k'=1 to K) P((Zi→Cj)/Ωk') P(Ωk')
This distribution is computed off-line for all possible Ωk's, Zi's and Cj's before the recognition process.
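Given the a-priori table and the class priors P(Ωk), the Bayes inversion of operation 110 can be sketched as below; the table layout and the zero-total fallback are assumptions for illustration:

```python
def aposteriori(apriori, priors, i, j):
    """Bayes' rule (operation 110): P(Omega_k / Zi->Cj) for each class k.

    apriori[(k, i, j)] holds P((Zi->Cj)/Omega_k) and priors[k] holds
    P(Omega_k); both layouts are hypothetical.
    """
    joint = {k: apriori.get((k, i, j), 0.0) * p for k, p in priors.items()}
    total = sum(joint.values())
    if total == 0.0:          # label never observed at this position
        return {k: 0.0 for k in priors}
    return {k: v / total for k, v in joint.items()}
```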
Also in the supervised learning process 80 the average width of each character is determined by calculating this width for several samples (e.g. 10 samples) and then computing a simple statistical average given by:

W = (1/Ns) Σ (i=1 to Ns) Wi

where `Wi` is the width in pixels of the ith sample for a given character, and `Ns` is the number of samples. This completes the learning process of the system.
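The width average is a plain arithmetic mean; a one-line sketch:

```python
def average_width(sample_widths):
    """Average character width W over the Ns sample widths Wi measured
    in the supervised learning phase 80."""
    return sum(sample_widths) / len(sample_widths)

print(average_width([12, 13, 11, 12]))  # 12.0
```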
A script to be read is entered into the system through the detector and is first passed through the image preprocessing process 40. Then the feature extraction process 50 and the Recognition/Segmentation process 60 operate as described below:
1. Starting at a word boundary, a feature vector of the rightmost column of the digital form of the associated cursive word is extracted by the feature extraction process 50.
2. This vector is then passed to the recognition/segmentation process 60 where in a step 1 it is first assigned a label Cj, after matching with the nearest vector in the Code Book, as shown by the dotted line 112 in FIG. 1.
3. Based on the current position within the word (Zi), and the label value Cj, the instantaneous a-posteriori probabilities P(Ωk /Zi→Cj) are retrieved from the a-posteriori probabilities block 110, as shown by the dotted line 111 in a step 2 of process 60.
4. In a step 3 of process 60 an accumulated probability distribution Pa is computed by adding together the previous instantaneous probability distributions, and then normalising on the sum of the probabilities.
5. In a step 4 of process 60, the uncertainty at the present position Zi, given by the entropy H(i), is computed by the formula:

H(i) = −Σ (k=1 to K) Pa(Ωk) log Pa(Ωk)
6. In a step 5 of process 60, a decision is taken based on a Recognition rule which employs a preselected entropy threshold Hmin related to the desired recognition performance. The recognition rule involves the following steps:
If H(i) < Hmin:
1. Recognize the unknown character as the reference character with maximum probability.
2. Suspend the recognition process.
3. Jump to the left edge of the recognized character, based on the average width of the recognized character as determined in the learning phase, then extract a feature vector corresponding to the next column and go to step 2.
If H(i) > Hmin:
1. Set i → i+1.
2. If i corresponds to the last column of the word then terminate; otherwise extract a feature vector corresponding to the (i+1)th column and go to step 2.
In a step 7 the process is repeated from step 1 when the end of a word is reached and another word boundary is selected.
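Steps 1 to 6 above, together with the recognition rule, can be drawn into a single loop. The sketch below is an interpretation: the function names, the `label_of`/`posterior` interfaces and the natural-log entropy are all assumptions, since the patent specifies behaviour rather than code:

```python
import math

def recognize_word(num_columns, label_of, posterior, widths, h_min):
    """Sketch of the combined recognition/segmentation process 60.

    num_columns : width of the word in pixel columns
    label_of    : absolute column position -> Code Book label Cj
                  (feature extraction 50 plus nearest-vector matching)
    posterior   : (i, j) -> {class: P(class / Zi->Cj)}, retrieved from
                  the precomputed a-posteriori table 110
    widths      : class -> average character width from the learning phase
    h_min       : entropy threshold Hmin
    """
    recognized = []
    start = 0                              # right border of current character
    while start < num_columns:
        accumulated = None
        for i in range(num_columns - start):
            j = label_of(start + i)
            inst = posterior(i + 1, j)     # step 2: retrieve probabilities
            if accumulated is None:        # step 3: accumulate, then normalise
                accumulated = dict(inst)
            else:
                for k in accumulated:
                    accumulated[k] += inst[k]
            total = sum(accumulated.values())
            pa = {k: v / total for k, v in accumulated.items()}
            # step 4: entropy H(i) of the accumulated distribution Pa
            h = -sum(p * math.log(p) for p in pa.values() if p > 0.0)
            if h < h_min:                  # step 5: recognition rule
                best = max(pa, key=pa.get)
                recognized.append(best)
                start += widths[best]      # jump to the left edge
                break
        else:
            break                          # reached the last column
    return recognized
```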
Although the above description of the recognition/segmentation process 60 discusses starting at the rightmost edge of a word it will be appreciated by those skilled in the art that this is not a necessary feature of the invention, and that the process could work equally well by reading the columns starting from the leftmost edge, for example in reading script other than Arabic where the leftmost edge is a more appropriate reference.
FIG. 3 illustrates the recognition/segmentation mechanism of the preferred embodiment. It shows a word containing three characters and having a height 64 of 15 pixels and a width 65 of 50 pixels. The current pointer 63 is at a position 44 pixels from the rightmost edge 72 of the word. Two characters have already been recognized, the numeral 61 denoting the left border of the first character and the numeral 62 denoting the left border of the second character.
The recognition process is now trying to recognize the third character. The current position within the character 75 is Zi=12. The corresponding Code value 67 is Cj=15. The instantaneous a-posteriori probability distribution P(Ωk/(Zi=12)→(Cj=15)) and the accumulated probability distribution Pa with respect to the right border of the third character 74 are as shown in FIG. 3 (reference numerals 69 and 71 respectively). The uncertainty value `H` is 0.887, while the threshold `Hmin` is set at 0.1. Hence the uncertainty must fall by at least 0.787 during the analysis of the remaining six feature vectors if the third character is to be recognized.
The preferred embodiment described hereinbefore treats the recognition of cursive script using a new approach. It describes a combined recognition/segmentation process 60 that includes the following features that distinguish it from the prior art:
1. The approach is systematic and generally applicable to any cursive font.
2. The developed system is quick, since no time consuming computations are involved in the recognition phase. The a-posteriori probabilities used in the recognition are only "retrieved" and not "computed", since the computation has been done off-line in the learning phase.
3. A remarkable property of this recognition/segmentation process is that most Arabic characters can be recognized by observing only 40% of the character, thus saving considerable computational effort and leading to a fast algorithm.
The above described preferred embodiment has been used with 31 typewritten fonts of which 20 were Arabic fonts and 1 was a cursive English font. The recognition speed reached 130 words per minute on average and showed a recognition rate of 98%. The 2% error rate was mainly due to skew noise imposed from the detector, and can be improved by increasing the number of samples used in the statistics.
While the invention has been described with respect to a specific embodiment, it will be understood by those having skill in the art that changes can be made to the specific embodiment without departing from the spirit of the invention.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US3643215 *||Nov 13, 1968||Feb 15, 1972||Emi Ltd||A pattern recognition device in which allowance is made for pattern errors|
|US4288779 *||Jul 5, 1979||Sep 8, 1981||Agency Of Industrial Science & Technology||Method and apparatus for character reading|
|US5034991 *||Apr 6, 1990||Jul 23, 1991||Hitachi, Ltd.||Character recognition method and system|
|US5067166 *||Mar 23, 1990||Nov 19, 1991||International Business Machines Corporation||Method and apparatus for dp matching using multiple templates|
|US5075896 *||Oct 25, 1989||Dec 24, 1991||Xerox Corporation||Character and phoneme recognition based on probability clustering|
|DE2516671A1 *||Apr 16, 1975||Jan 22, 1976||Robotron Veb K||Logic for reliable automatic character recognition - can distinguish between printed characters with variable spacing|
|1||Bozinovic et al. "Off-Line Cursive Script Word Recognition" IEEE Transactions on Pattern Analysis and Machine Intelligence, VII #1 Jan. 1989, pp. 68-83.|
|2||H. Y. Abdelazim & M. A. Haskish, "Automatic Reading of Bilingual Typewriter Text", IEEE Confer., Hamburg, GE, vol. 2, pp. 140-144, May 8-12, 1989.|
|3||M. Krlemakhem & M. C. Ferri, "Arabic Typewritten Character Recognition Using Dynamic Comparison", Proc. 1st Kuwait Computer Conf., pp. 455-462, Mar. 1989.|
|4||R. Casey et al., "Recursive Segmentation & Classification of Composite Character Patterns", IEEE Computer Society Pres., pp. 1023-1026, Oct. 19-22, 1982.|
|5||T. Sheikh & Guindi, "Computer Recognition of Arabic Cursive Scripts" Pattern Recognition, vol. 21, No. 4, 1988, pp. 293-302.|
|6||T. Shimizu, "Pattern Recognition Device", Patent Abstracts of Japan, vol. 12, No. 468, p. 797, Dec. 8, 1988.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US5553284 *||Jun 6, 1995||Sep 3, 1996||Panasonic Technologies, Inc.||Method for indexing and searching handwritten documents in a database|
|US5594810 *||Jun 5, 1995||Jan 14, 1997||Apple Computer, Inc.||Method and apparatus for recognizing gestures on a computer system|
|US5602938 *||May 20, 1994||Feb 11, 1997||Nippon Telegraph And Telephone Corporation||Method of generating dictionary for pattern recognition and pattern recognition method using the same|
|US5931940 *||Jan 23, 1997||Aug 3, 1999||Unisys Corporation||Testing and string instructions for data stored on memory byte boundaries in a word oriented machine|
|US5991439 *||May 15, 1996||Nov 23, 1999||Sanyo Electric Co., Ltd||Hand-written character recognition apparatus and facsimile apparatus|
|US6044171 *||May 9, 1995||Mar 28, 2000||Polyakov; Vladislav G.||Method and apparatus for pattern recognition and representation using fourier descriptors and iterative transformation-reparametrization|
|US6212299||Mar 12, 1997||Apr 3, 2001||Matsushita Electric Industrial Co., Ltd.||Method and apparatus for recognizing a character|
|US6594393 *||May 12, 2000||Jul 15, 2003||Thomas P. Minka||Dynamic programming operation with skip mode for text line image decoding|
|US6973214 *||Jul 30, 2001||Dec 6, 2005||Mobigence, Inc.||Ink display for multi-stroke hand entered characters|
|US7254269 *||Apr 19, 2002||Aug 7, 2007||Hewlett-Packard Development Company, L.P.||Character recognition system|
|US7599530 *||Oct 1, 2004||Oct 6, 2009||Authentec, Inc.||Methods for matching ridge orientation characteristic maps and associated finger biometric sensor|
|US7885464 *||Dec 8, 2005||Feb 8, 2011||Kabushiki Kaisha Toshiba||Apparatus, method, and program for handwriting recognition|
|US8111911 *||Apr 27, 2009||Feb 7, 2012||King Abdulaziz City For Science And Technology||System and methods for arabic text recognition based on effective arabic text feature extraction|
|US8369612 *||Dec 14, 2011||Feb 5, 2013||King Abdulaziz City for Science & Technology||System and methods for Arabic text recognition based on effective Arabic text feature extraction|
|US8472707 *||Nov 26, 2012||Jun 25, 2013||King Abdulaziz City for Science & Technology||System and methods for Arabic text recognition based on effective Arabic text feature extraction|
|US8682077||Jun 11, 2010||Mar 25, 2014||Hand Held Products, Inc.||Method for omnidirectional processing of 2D images including recognizable characters|
|US8761500||May 12, 2013||Jun 24, 2014||King Abdulaziz City For Science And Technology||System and methods for arabic text recognition and arabic corpus building|
|US8908961 *||Apr 23, 2014||Dec 9, 2014||King Abdulaziz City for Science & Technology||System and methods for arabic text recognition based on effective arabic text feature extraction|
|US20030059115 *||Apr 19, 2002||Mar 27, 2003||Shinya Nakagawa||Character recognition system|
|US20050117785 *||Oct 1, 2004||Jun 2, 2005||Authentec, Inc.||Methods for matching ridge orientation characteristic maps and associated finger biometric sensor|
|US20060088216 *||Dec 8, 2005||Apr 27, 2006||Akinori Kawamura||Apparatus, method, and program for handwriting recognition|
|US20100272361 *||Apr 27, 2009||Oct 28, 2010||Khorsheed Mohammad S||System and methods for arabic text recognition based on effective arabic text feature extraction|
|US20120087584 *||Dec 14, 2011||Apr 12, 2012||Khorsheed Mohammad S||System and methods for arabic text recognition based on effective arabic text feature extraction|
|US20120281919 *||May 6, 2011||Nov 8, 2012||King Abdul Aziz City For Science And Technology||Method and system for text segmentation|
|US20140219562 *||Apr 23, 2014||Aug 7, 2014||King Abdulaziz City for Science & Technology||System and methods for arabic text recognition based on effective arabic text feature extraction|
|CN100492404C||Apr 30, 2007||May 27, 2009||哈尔滨工程大学||Method for recognizing print hand Arabic alphabets based on boundary characteristic|
|CN102142088A *||Aug 17, 2010||Aug 3, 2011||侯塞因K·艾尔奥玛依||Effective Arabic feature extraction-based Arabic identification method and system|
|CN102142088B||Aug 17, 2010||Jan 23, 2013||穆罕默德S·卡尔希德||Effective Arabic feature extraction-based Arabic identification method and system|
|WO1997022947A1 *||Nov 27, 1996||Jun 26, 1997||Motorola Inc.||Method and system for lexical processing|
|U.S. Classification||382/177, 382/228, 382/161, 382/186|
|Cooperative Classification||G06K2209/013, G06K9/68|
|Nov 3, 1997||FPAY||Fee payment|
Year of fee payment: 4
|Dec 14, 2001||FPAY||Fee payment|
Year of fee payment: 8
|Feb 15, 2006||REMI||Maintenance fee reminder mailed|
|Aug 2, 2006||LAPS||Lapse for failure to pay maintenance fees|
|Sep 26, 2006||FP||Expired due to failure to pay maintenance fee|
Effective date: 20060802