Publication number: US 20060062461 A1
Publication type: Application
Application number: US 11/262,214
Publication date: Mar 23, 2006
Filing date: Oct 27, 2005
Priority date: Jul 25, 2002
Also published as: CN1606753A, CN100550036C, US6970599, US20040017946, WO2004012135A1
Inventors: Michael Longe, Jianchao Wu, Lu Zhang
Original Assignee: Michael Longe, Jianchao Wu, Lu Zhang
Chinese character handwriting recognition system
US 20060062461 A1
Abstract
A handwritten Chinese character input method and system is provided that allows users to enter Chinese characters into a data processor by adding fewer than three strokes and one selection movement, such as a mouse click or a stylus or finger tap. The system is interactive, predictive, and intuitive to use. By adding the one or two strokes used to begin writing a Chinese character, or in some cases no strokes at all, users can find a desired character in a list of characters. The list is context sensitive: it varies depending on the prior character entered. Compared to other existing systems, this system can save users considerable time and effort in entering handwritten characters.
Images (20)
Claims(18)
1. A Chinese character handwriting input system, comprising:
recognition means for recognizing one or more categories of handwriting stroke from a predefined number of stroke categories;
collection means for organizing a list of characters that commonly start with said one or more recognized categories of handwriting stroke, said list of characters being displayed in a predefined sequence, wherein said predefined sequence is based on any of:
number of strokes necessary to write out a character;
use frequency of a character; and
contextual relation to the last character entered; and
selection means for selecting a desired character from said list of characters.
2. The system of claim 1, further comprising:
wildcard entry means for matching any stroke category.
3. A Chinese character handwriting input system, comprising:
recognition means for recognizing a category of handwriting stroke from a predefined number of stroke categories; and
collection means for organizing a list of characters that commonly start with one or more recognized categories of handwriting stroke, said list of characters being displayed in a predefined sequence, wherein said predefined sequence is based on any of:
number of strokes necessary to write out a character;
use frequency of a character; and
contextual relation to the last character entered;
selection means for selecting a desired character from said list of characters;
wherein said predefined number of stroke categories comprises more than five basic categories.
4. A method for inputting handwritten Chinese characters, comprising the steps of:
adding a stroke into a pattern recognition system;
categorizing said added stroke into one of a predefined number of stroke categories;
finding characters based on frequency of character use; and
displaying a list of found characters.
5. The method of claim 4, further comprising the steps of:
if a desired character is in said list, selecting said desired character from said list;
if a desired character is not visible in said list, adding another stroke; and
displaying another list of found characters.
6. The method of claim 4, further comprising the steps of:
displaying a numeric or iconic representation for a stroke that is added; and
displaying full stroke numeric or iconic representation for a character that is selected.
7. The method of claim 4, further comprising the steps of:
if a desired character is in said list, either of selecting said desired character from said list or adding another stroke and displaying another list of found characters.
8. The method of claim 4, further comprising the step of:
retaining an ink trail of each stroke that is added until a character is selected.
9. The method of claim 8, further comprising the step of:
color coding each ink trail either to indicate a level of confidence or differentiation in said categorization step.
10. The method of claim 4, further comprising the step of:
prompting a user to clarify between ambiguous stroke interpretations and/or to remedy a stroke's misinterpretation.
11. The method of claim 4, further comprising the step of:
providing means for removing one or more strokes of an input stroke sequence in reverse order.
12. The method of claim 4, further comprising the step of:
providing means for matching any of Latin letters, punctuation symbols, and emoticons with predefined or user-defined stroke sequences.
13. The method of claim 4, further comprising the step of:
selecting a character from said list with a user gesture;
wherein said user gesture allows said user to begin entry of strokes for a next character.
14. The method of claim 4, further comprising the step of:
providing user-defined gestures for any of stroke categories, sequences of strokes, and character components.
15. The method of claim 4, further comprising the step of:
providing means for explicit selection of stroke categories.
16. The method of claim 4, further comprising the step of:
displaying character components that start with one or more recognized stroke categories;
wherein selecting a character component results in the display of only the characters containing or starting with said selected component.
17. The method of claim 4, further comprising the step of:
allowing alternative stroke sequences for character or character component entry.
18. The method of claim 4, said step of finding characters based on frequency of use further comprising the step of:
finding said characters based on context.
Description
    CROSS REFERENCE TO RELATED APPLICATION
  • [0001]
    This application is a continuation of U.S. patent application Ser. No. 10/205,950, filed Jul. 25, 2002.
  • BACKGROUND OF THE INVENTION
  • [0002]
    1. Technical Field
  • [0003]
This invention relates generally to text input technology. More particularly, the invention relates to a method and system that allows users to input handwritten Chinese characters into a data processor by entering the first few strokes required to write a character, so that users can perform character input tasks in a fast, predictive way.
  • [0004]
    2. Description of the Prior Art
  • [0005]
Around the globe, over 1.2 billion people speak Chinese. This includes the People's Republic of China, Taiwan, Singapore, and a large community of overseas Chinese in Asia and North America. Chinese character strokes and symbols are so varied and so complicated that they can be sorted and grouped in a wide variety of ways. One can analytically sort out as many as 35-40 strokes of 4-10 symbols or more per Chinese character, depending on how they are grouped. Because of this unique structure of the Chinese language, computer users cannot input Chinese characters using alphabetic keyboards as easily as they input Western languages.
  • [0006]
A number of methods and systems for inputting Chinese characters to a screen, such as the Three Corners method, the Goo Coding System, the 5-Stroke method, Changjie's Input scheme, etc., have been developed. However, none of these input methods provides an easy-to-use, standardized input/output scheme that speeds up the retrieval and typing process by taking full advantage of computer technology.
  • [0007]
Several other methods and systems for inputting handwritten Chinese characters are also known. For example, Apple Computer and the Institute of System Science in Singapore (Apple-ISS) have developed a system which features an application for dictation and a handwriting input method for Chinese. This system incorporates a dictionary assistance service wherein, when a first character is recognized, the device displays a list of phrases based on the first character, and the user may select the proper phrase without inputting any stroke. This technique effectively increases the input speed.
  • [0008]
    Another example is Synaptics' QuickStroke system which incorporates a prediction function based on a highly sophisticated neural network engine. This is not a graphics capture application where the users have to write out the entire character before the software can recognize which character is intended. Instead, it can recognize a character after only three to six strokes of the character have been written. It can be used with a standard mouse, Synaptics TouchPad™, or a Synaptics pen input TouchPad.
  • [0009]
    Another example is Zi Corporation's text input solutions based on an intelligent indexing engine which intuitively predicts and displays desired candidates. The solutions also include powerful personalization and learning capabilities—providing prediction of user-created terms and frequently used vocabulary.
  • [0010]
    It would be advantageous to provide a handwritten Chinese character input method and system to allow users to enter Chinese characters to a data processor by drawing just the first few strokes and one selection movement such as mouse clicking or stylus or finger tapping.
  • SUMMARY OF INVENTION
  • [0011]
A handwritten Chinese character input method and system is provided to allow users to enter Chinese characters into a data processor by drawing just the first few strokes and one selection movement, such as a mouse click or a stylus or finger tap. The system is interactive, predictive, and intuitive to use. By adding the one or two strokes used to begin writing a Chinese character, users can find a desired character in a list of characters. The list is context sensitive, so in some cases no strokes are needed. It varies depending on the prior character entered. The system puts the handwritten-stroke-to-category mapping on top of the stroke category matching technology, including an optional “Match any stroke category” key or gesture. Compared to other existing systems, this system can save users considerable time and effort in entering handwritten characters.
  • [0012]
    In one preferred embodiment, the handwritten Chinese character input system includes: (1) recognition means for recognizing a category of handwriting stroke from a list of stroke categories; (2) collection means for organizing a list of characters that commonly start with one or more recognized categories of handwriting strokes, the list of characters being displayed in a predetermined sequence; and (3) selection means for selecting a desired character from the list of characters.
  • [0013]
In a typical embodiment, the strokes are classified into five basic categories, each having one or more sub-categories. The collection means contains predefined stroke order information. It also contains display means to display a list of the most frequently used characters when no strokes are entered, while strokes are being entered, and/or after a character is selected. The list of most frequently used characters is context sensitive. It varies depending upon the last Chinese character entered. The predetermined sequence may be based on any of: (1) the number of strokes necessary to write out a character; (2) the use frequency of a character; and (3) the contextual relation to the last character entered.
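The predetermined display sequence described above can be sketched as a three-level sort: contextual relation to the last character first, then use frequency, then stroke count. The following Python sketch and its data are illustrative assumptions, not taken from the patent.

```python
def display_order(candidates, freq, strokes, bigram, prev=None):
    """Order candidates by contextual relation to the previous character,
    then by use frequency, then by stroke count (fewer strokes first)."""
    def key(ch):
        # Context score only applies once a previous character exists.
        ctx = bigram.get((prev, ch), 0) if prev is not None else 0
        return (-ctx, -freq.get(ch, 0), strokes.get(ch, 0))
    return sorted(candidates, key=key)


# Toy data: without context, 三 outranks 王 on frequency; after 国,
# the (国, 王) bigram promotes 王 to the front of the list.
freq = {"三": 90, "王": 70}
strokes = {"三": 3, "王": 4}
bigram = {("国", "王"): 10}
```

With this data, `display_order(["王", "三"], freq, strokes, bigram)` yields a frequency ordering, while passing `prev="国"` promotes the contextually related character.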
  • [0014]
    The selection means is associated with any of: (1) mouse clicking; (2) stylus tapping; (3) finger tapping; and (4) button/key pressing.
  • [0015]
    The system also contains “stroke entry means,” such as an LCD touchscreen, stylus or finger pad, trackball, data glove, or other touch-sensitive (possibly flexible) surface.
  • [0016]
The system may further include means for displaying a numeric or iconic representation of each stroke that is entered and a full numeric or iconic representation of strokes for a Chinese character that is selected.
  • [0017]
    According to the preferred embodiment, a method for inputting handwritten Chinese characters includes the following steps:
      • adding a stroke into the stroke recognition apparatus;
      • categorizing the added stroke into one of a predetermined number of categories;
      • finding characters based on frequency of character use;
      • displaying a list of found characters;
      • if a desired character is in the list, selecting the desired character from the list;
      • if a desired character is not visible in the list, adding another stroke;
      • finding most common characters that appear after a previously selected character based on a present stroke sequence; and
      • displaying another list of found characters.
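The steps above form a simple entry loop. The sketch below assumes a `lookup` function mapping a stroke-category prefix to an ordered candidate list and a `select` callback returning the user's choice (or None to keep writing); both names are illustrative, not from the patent.

```python
def enter_character(strokes, lookup, select):
    """Accumulate stroke categories one at a time; after each stroke,
    display the found characters and let the user pick or keep writing."""
    prefix = []
    for stroke in strokes:
        prefix.append(stroke)
        candidates = lookup(tuple(prefix))  # find + display found characters
        chosen = select(candidates)         # mouse click / stylus / finger tap
        if chosen is not None:
            return chosen
    return None  # strokes exhausted without a selection
```

A character is typically selected well before all of its strokes are entered, which is where the claimed time saving comes from.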
  • [0026]
    The method may further comprise the steps of:
      • displaying a numeric representation for a stroke that is added; and
      • displaying full stroke numeric representation for a character that is selected.
  • [0029]
As an alternative, the method may comprise the steps of:
      • displaying an iconic representation for a stroke that is added; and
      • displaying full stroke iconic representation for a character that is selected.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0032]
    FIG. 1 is a schematic diagram illustrating an apparatus for inputting handwritten Chinese characters according to one preferred embodiment of the invention;
  • [0033]
    FIG. 2 is a flow diagram illustrating a method for inputting handwritten Chinese characters in a predictive manner according to another preferred embodiment of the invention;
  • [0034]
    FIG. 3 is a diagram illustrating five basic strokes and their numeric representation;
  • [0035]
    FIG. 4A is a pictorial diagram illustrating an overview of the Stroke Recognition Interface prior to any input;
  • [0036]
    FIG. 4B is a pictorial diagram illustrating the Stroke Recognition Interface when a first single horizontal stroke is added;
  • [0037]
    FIG. 4C is a pictorial diagram illustrating the Stroke Recognition Interface when a second horizontal stroke is added;
  • [0038]
    FIG. 4D is a pictorial diagram illustrating the Stroke Recognition Interface when a third horizontal stroke is added;
  • [0039]
    FIG. 4E is a pictorial diagram illustrating the Stroke Recognition Interface when a desired character appears to be the first character in the Selection List;
  • [0040]
    FIG. 4F is a pictorial diagram illustrating the Stroke Recognition Interface when the first character in the selection list is selected;
  • [0041]
    FIG. 4G is a pictorial diagram illustrating the Stroke Recognition Interface when a desired character is not the first character in the selection list;
  • [0042]
    FIG. 4H is a pictorial diagram illustrating the Stroke Recognition Interface when the desired character rather than the first character in the selection list is selected;
  • [0043]
    FIG. 4I is a pictorial diagram illustrating the Stroke Recognition Interface when the first desired character is selected and a stroke is added for another character;
  • [0044]
    FIG. 4J is a pictorial diagram illustrating the Stroke Recognition Interface when two strokes are added;
  • [0045]
    FIG. 4K is a pictorial diagram illustrating the Stroke Recognition Interface when third stroke is added;
  • [0046]
    FIG. 4L is a pictorial diagram illustrating the Stroke Recognition Interface where the desired character is indicated;
  • [0047]
    FIG. 4M is a pictorial diagram illustrating the Stroke Recognition Interface when the second desired character is selected;
  • [0048]
    FIG. 4N is a pictorial diagram illustrating the Stroke Recognition Interface where a third desired character appears in the most frequently used characters;
  • [0049]
    FIG. 4O is a pictorial diagram illustrating the Stroke Recognition Interface when a third desired character is selected without adding any stroke; and
  • [0050]
    FIG. 5 is a schematic diagram illustrating the input interface for touchscreen PDA according to the most preferred embodiment of the invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • [0051]
    FIG. 1 is a schematic diagram illustrating an apparatus for inputting handwritten Chinese characters according to one preferred embodiment of this invention. The apparatus includes three basic components: a Stroke Recognition Interface 20 for recognizing entered stroke patterns, an Input Device 24 for entering strokes, and a Processor 30 for performing data process tasks.
  • [0052]
    The Stroke Recognition Interface 20 has three basic areas: a Message Display Area 28, a Selection List Area 26, and a Stroke Input Area 22.
  • [0053]
    Message Display Area 28 is the place where the selected characters are displayed. It represents an email or SMS message, or whatever application intends to use the generated text.
  • [0054]
    Selection List Area 26 is the place to display the most common character choices for the strokes currently entered on the stroke input window. This area may also list common characters that follow the last character in the Message Display Area 28, that also begin with the strokes entered in the Stroke Input Area 22.
  • [0055]
Stroke Input Area 22 is the heart of the Stroke Recognition Interface 20. The user begins drawing a character onscreen in this area, using an Input Device 24 such as a stylus, a finger, or a mouse, depending on the input and display devices used. The display device echoes and retains each stroke (an “ink trail”) until the character is selected.
  • [0056]
Stroke Recognition Interface 20 may further include a Stroke Number Display Area to display the interface's interpretation, either numeric or iconic, of the strokes entered by the user. When a character is selected, the full stroke representation, either by numbers or by icons, is displayed here. This area is optional, but can be useful for helping users learn stroke orders and stroke categories.
  • [0057]
The system may further include: the capability to match Latin letters, punctuation symbols, and emoticons with predefined or user-defined stroke sequences; user-defined gestures for predefined stroke categories, and unique gestures representing entire components, sequences, or symbols; learning of and adaptation to a user's handwriting style, skew, or cursive writing; an optional training session with known characters; optional prompting of the user to clarify between ambiguous stroke interpretations, a means to enter explicit strokes (e.g. via stroke category keys), and/or a means to remedy a stroke misinterpretation; optional indication of the level of confidence of stroke interpretations, e.g. color-coding each ink trail, or a smiley face that frowns when the system is uncertain; means to display all strokes that make up a character, e.g. by drag and drop from a text editor to the Stroke Number Display Area; and the ability to delete the last stroke(s), and their ink trail(s), in reverse order by some means.
  • [0058]
    FIG. 2 is a flow diagram illustrating a method for inputting handwritten Chinese characters in a predictive manner according to the preferred embodiment of the invention. The method includes the following steps:
    • Step 50: Adding a stroke into the Stroke Input Area 22;
    • Step 52: Categorizing the added stroke into a stroke category.
    • Step 54: Finding characters based on frequency of character use;
    • Step 56: Displaying a list of found characters. The list of characters is displayed in a predetermined sequence. The predetermined sequence may be based on (1) number of strokes necessary to write out a Chinese character; (2) use frequency of a Chinese character entered; or (3) contextual relation to the prior character entered;
• Step 58: Checking whether the desired character is in the list;
• Step 60: If the desired character is not in the list, adding the next stroke in the Stroke Input Area 22;
    • Step 70: If a desired character is in the list, selecting it by clicking a mouse or tapping a stylus or finger, depending on the input and display devices used;
    • Step 72: Putting the selected character in the Message Display Area 28;
    • Step 74: Checking whether the message is complete;
    • Step 76: Adding next stroke if the message is not complete;
• Step 62 (continued from Step 60 or Step 76): Finding the most common characters that appear after a previously selected character, based on the present stroke sequence. This also happens before the first stroke, i.e. before Step 50; and
    • Step 80: Displaying a list of found characters and the process continues on Step 58.
  • [0071]
The apparatus may have a function to actively display the interface's interpretation, either numeric or iconic, of the strokes entered by the user. Therefore, the method described above may further comprise the steps of:
      • Displaying a numeric representation for a stroke that is added;
      • Displaying full stroke numeric representation for a character that is selected;
      • Displaying an iconic representation for a stroke that is added; and
      • Displaying full stroke iconic representation for a character that is selected.
  • [0076]
    As an alternative, Step 54 may be replaced by:
      • Finding characters that commonly start with one or more recognized stroke patterns.
  • [0078]
FIG. 3 is a diagram showing five basic strokes and their numeric representation. There is a government standard of five stroke categories for simplified Chinese characters. There are other classifications of stroke categories. The method and system according to this invention apply to any such classification.
  • [0079]
One of the major advantages of the recognition system according to this invention is the great reduction of the ambiguities arising from the subtle distinctions between certain subtypes of the stroke categories. To reduce ambiguities, there are further definitions of the subtypes. For example, a horizontal line with a slight hook upwards is stroke 1; a horizontal line with a slight hook down is stroke 5; a horizontal line angled upwards is stroke 1; a curved line that starts right diagonally and then evens out to horizontal or curves up is stroke 4; and so on.
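A coarse rule-based version of this stroke-to-category mapping can be sketched from the overall direction of the drawn line. The thresholds below are illustrative assumptions, not the patent's actual rules; a real recognizer would use trained parameters and would also examine mid-stroke turning points.

```python
import math

def classify_stroke(points):
    """Map a stroke (a list of (x, y) points, with y growing downward)
    to a category 1-5 from its coarse start-to-end direction."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    angle = math.degrees(math.atan2(dy, dx))  # 0 deg = rightward, 90 deg = downward
    if -30 <= angle <= 30:
        return 1   # héng: roughly horizontal, drawn left to right
    if 60 <= angle <= 120:
        return 2   # shù: roughly vertical, drawn top to bottom
    if 120 < angle <= 180:
        return 3   # piě: falls down and to the left
    if 30 < angle < 60:
        return 4   # nà / diǎn: falls down and to the right
    return 5       # zhé: anything else, e.g. a turning or hooked stroke
```

The subtype rules quoted above (slight hooks, slight upward angles) would refine these coarse bands rather than replace them.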
  • [0080]
    One technique for resolving, or at least limiting, ambiguities, is the use of limited wildcards. These are stroke keys that match with any stroke that fits one type of ambiguity. For example, if the stroke may fit into either stroke category 4 or stroke category 5, the limited wildcard would match both 4 and 5.
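The limited-wildcard matching described above can be sketched as follows: a wildcard entry is a set of categories, and a character's stroke order matches an input sequence if every entered item is compatible with the corresponding stroke. The data and function names are illustrative assumptions.

```python
# A limited wildcard for the 4-versus-5 ambiguity mentioned above.
WILDCARD_45 = frozenset({4, 5})

def matches(entered, stroke_order):
    """True if each entered item (a category number, or a wildcard set of
    categories) is compatible with the corresponding stroke of a character
    whose full stroke order is `stroke_order`."""
    if len(entered) > len(stroke_order):
        return False  # more input strokes than the character has
    for e, s in zip(entered, stroke_order):
        if isinstance(e, frozenset):
            if s not in e:
                return False  # stroke outside the wildcard's ambiguity set
        elif e != s:
            return False
    return True
```

For example, the input `[1, WILDCARD_45]` matches a character starting (1, 4) or (1, 5), but not one starting (1, 2), so the candidate list still narrows usefully despite the ambiguity.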
  • [0081]
Often the difference between a stroke of one type and a similar stroke of another type is too subtle for a computer to differentiate. This gets even more confusing when the user is sloppy and curves his straight strokes, or straightens his curved strokes, or gets the angle slightly off.
  • [0082]
    To account for all of the variation of an individual user, the system may learn the specific idiosyncrasies of its one user, and adapt to fit that person's handwriting style.
  • [0083]
    The specifics of the exaggeration needed may be determined as appropriate. Key to this aspect of the invention is that the user has to make diagonal strokes very diagonal, straight strokes very straight, curved strokes very curved, and angled strokes very angled.
  • [0084]
    The result on paper is a character that would look somewhat artificial and a caricature of its intended character. However, this greatly simplifies the disambiguation process for finding the strokes, which then helps the disambiguation of characters.
  • [0085]
    In the following paragraphs in conjunction with a series of pictorial diagrams, the operation process is described.
  • [0086]
    FIG. 4A illustrates an overview of the Stroke Recognition Interface before any stroke is added. Note that the Character Selection List shows the first ten most frequently used characters. If a user's first desired character is in the list, he just selects the character by clicking the mouse or by tapping a stylus or his finger, without need to add a stroke. If the desired character is not in the list, the user adds a stroke using mouse, stylus, or finger.
  • [0087]
    FIG. 4B illustrates the Stroke Recognition Interface when a first single horizontal stroke is added. The stroke category is determined to be “1”, and is listed in the Stroke Number Area. The Selection List is re-ordered to predict the most likely character to be chosen based on the first stroke.
  • [0088]
    FIG. 4C illustrates the Stroke Recognition Interface when a second horizontal stroke is added. After a second horizontal line is entered, the selection list is re-ordered again, showing only the most likely characters that start with two horizontal lines (stroke category 1). Note that the position and relative lengths of the strokes do not affect the selection list, only the stroke categories.
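The re-ordering in FIGS. 4B-4D amounts to filtering a lexicon by stroke-category prefix and sorting the survivors by frequency. The tiny lexicon and its frequency values below are illustrative assumptions, not data from the patent.

```python
LEXICON = {
    # character: (stroke-category sequence, relative frequency)
    "二": ((1, 1), 80),
    "三": ((1, 1, 1), 90),
    "王": ((1, 1, 2, 1), 70),
    "十": ((1, 2), 95),
}

def selection_list(prefix):
    """Characters whose stroke order starts with `prefix`, most frequent
    first; position and length of strokes are ignored, only categories."""
    hits = [(ch, f) for ch, (order, f) in LEXICON.items()
            if order[:len(prefix)] == tuple(prefix)]
    return [ch for ch, _ in sorted(hits, key=lambda t: -t[1])]
```

Each added stroke extends the prefix, so the list shrinks monotonically: one horizontal stroke keeps all four characters, two keep three, and three leave only 三.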
  • [0089]
    FIG. 4D illustrates the Stroke Recognition Interface when a third horizontal stroke is added. After a third horizontal line is entered, the selection list is re-ordered again, showing only the most likely characters that start with three horizontal lines (stroke category 1).
  • [0090]
    FIG. 4E illustrates the Stroke Recognition Interface when a desired character appears to be the first character in the Selection List. Note that the character drawn so far is identical to the first character listed in the selection list. If this were the character desired, simply click that character from the list.
  • [0091]
FIG. 4F illustrates the Stroke Recognition Interface when the first character in the selection list is selected. If the user chooses the first character, it is added to the message; at the same time, the stroke numbers are displayed at the bottom, and the input area is cleared, ready for the next character. Note that to select a character, the user has to make one more mouse click (or stylus or finger tap) than there are strokes. Novice users may find this annoying until they get used to the system and learn to take advantage of its predictive features.
  • [0092]
FIG. 4G illustrates the Stroke Recognition Interface when a desired character is not the first character in the selection list. The strength of this system is its predictive ability. If the user desires the very complex, but somewhat common, character pointed to in the illustration, he need not complete the strokes for that character. As soon as it is displayed in the selection list, it can be selected by clicking a mouse (or tapping a stylus or finger) on the character.
  • [0093]
    FIG. 4H illustrates the Stroke Recognition Interface when the desired character rather than the first character in the selection list is selected. Once the complex character is selected, we see that it is a 15-stroke character, added to the message with only three strokes and one additional click. The user gets a 15-stroke character using four movements. The saving of movement and hence time is about four to one. Additionally, the entire stroke order is displayed now, so if the user was used to an alternate stroke order for the character, he can learn the Government Standard stroke order used by this system.
  • [0094]
    FIG. 4I illustrates the Stroke Recognition Interface when the first desired character is selected and a stroke is added for another character. Once the character is entered, the program is ready to accept the strokes for another character. Here the initial stroke is a different category, to enter in a very different character. Notice that the selection list is very different than it was with the first stroke of the previous character.
  • [0095]
    FIG. 4J illustrates the Stroke Recognition Interface when two strokes are added. Note that the strokes entered already form a character that matches the most likely choice in the selection list. The character that we are aiming for in this example is already displayed (see the fifth character from the left) after the second stroke is added. But we want to continue to demonstrate the disambiguation feature of the system.
  • [0096]
FIG. 4K illustrates the Stroke Recognition Interface when the third stroke is added. After a third stroke is entered, the selection list contains two characters that are only slightly different from each other. In fact, these two characters have exactly the same stroke order, and choosing from the selection list is the only way to disambiguate the two. Note that the second character, the one being pointed to, is not only less commonly used than the first, but is also slightly more complex.
  • [0097]
FIG. 4L illustrates the Stroke Recognition Interface where the desired character is indicated. Note that the desired character was first visible after the second stroke was entered, and is still a likely choice in the selection list (see the fourth character from the left). If a desired character is removed from the selection list for some reason, it is an indication that the stroke order entered by the user does not match the Government Standard stroke order used in the system.
  • [0098]
    FIG. 4M illustrates the Stroke Recognition Interface when the second desired character is selected. The character is selected, and added to the message. It is a 9-stroke character. We selected it at three strokes, but could have selected it at two strokes.
  • [0099]
    FIG. 4N illustrates the Stroke Recognition Interface where a third desired character appears in the most frequently used characters. For very common characters, there is no need to enter any strokes. The ten most frequently used characters are displayed even when no strokes are entered. If the user wants to enter one of these common characters, simply selecting it will add it to the message. Note that the selection list of the most frequently used characters is context sensitive. The system displays the ten most frequent characters to follow the last character entered.
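The context-sensitive default list just described can be sketched as a bigram lookup: with no strokes entered, show the characters most likely to follow the last one, falling back to overall frequency when no context applies. The counts below are illustrative, not real corpus data.

```python
# Toy bigram counts: (previous character, next character) -> frequency.
BIGRAM_COUNTS = {
    ("中", "国"): 50, ("中", "心"): 30, ("中", "文"): 20,
    ("你", "好"): 60, ("你", "们"): 40,
}
# Fallback overall frequencies when there is no usable context.
UNIGRAM_COUNTS = {"的": 100, "一": 90, "是": 80}

def default_list(prev_char=None, n=10):
    """The zero-stroke selection list: the n most likely followers of
    prev_char, or the n most frequent characters overall."""
    if prev_char is not None:
        followers = [(b, c) for (a, b), c in BIGRAM_COUNTS.items()
                     if a == prev_char]
        if followers:
            return [ch for ch, _ in sorted(followers, key=lambda t: -t[1])][:n]
    return [ch for ch, _ in sorted(UNIGRAM_COUNTS.items(), key=lambda t: -t[1])][:n]
```

After 中 has been entered, the list leads with 国, 心, and 文; with no prior character, or an unseen one, it falls back to the globally most frequent characters.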
  • [0100]
    FIG. 4O illustrates the Stroke Recognition Interface when a third desired character is selected without adding any stroke. This is a saving of seven to one for the third character.
  • [0101]
FIG. 5 illustrates a recommended layout of the input interface according to the most preferred embodiment, where the message area is omitted and the text goes directly into the active application.
  • [0102]
    In a typical embodiment, the stroke entry means is a handwriting input area displayed on a touchscreen on a PDA. Each entered stroke is recognized as one of a set of stroke categories. The graphical keys, each assigned to a stroke category, are optionally available to display and enter strokes, as an alternative input means. One of the graphical keys represents “match any stroke category”.
  • [0103]
The method described above may be carried out by a computer-usable medium containing instructions in computer-readable form. In other words, the method may be incorporated in a computer program, a logic device, a mobile device, or firmware, and/or may be downloaded from a network, e.g. from a Web site over the Internet. It may be applied in all sorts of text entry.
  • [0104]
    Although the invention is described herein with reference to some preferred embodiments, one skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention.
  • [0105]
    Accordingly, the invention should only be limited by the claims included below.
Classifications
U.S. Classification: 382/185
International Classification: G06K9/18, G06F3/048, G06F3/00, G06F3/033, G06F3/01, G06K9/22
Cooperative Classification: G06K9/222, G06F3/018, G06F3/04883
European Classification: G06F3/0488G, G06F3/01M, G06K9/22H
Legal Events
Date: Jan 27, 2006
Code: AS (Assignment)
Owner name: TEGIC COMMUNICATIONS, INC., WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: LONGE, MICHAEL; WU, JIANCHAO; ZHANG, LU; REEL/FRAME: 017078/0654
Effective date: Sep 26, 2005