|Publication number||US6442522 B1|
|Application number||US 09/415,660|
|Publication date||Aug 27, 2002|
|Filing date||Oct 12, 1999|
|Priority date||Oct 12, 1999|
|Also published as||US7254539, US20020156629|
|Inventors||Martin D. Carberry, Mark E. Epstein, Glenn T. Puchtel, Susan M. Wingate|
|Original Assignee||International Business Machines Corporation|
1. Technical Field
The present invention relates to natural language command systems, and more specifically, to natural language command systems that interface with multiple back-end applications.
2. Related Art
The ability to provide efficient mechanisms for interfacing with today's electronic commerce environment remains an ongoing challenge. A fundamental problem is that, in any given field, there exist numerous potential resources capable of servicing an end user's needs. Depending upon the situation, certain resources may be more effective or cost-efficient than others. Deciding which resource to utilize and providing the systems to interface with multiple resources can become unmanageable in an ever-changing electronic environment. When dealing with multiple resources, it is not unusual to be confronted with a lack of uniformity among service providers. Thus, for a given service, providers will often employ proprietary or independent systems that utilize their own formats or syntax for receiving and/or distributing information. This often results in a situation where an end-user begins to favor a resource that they are comfortable with or knowledgeable about. However, there is no guarantee that a chosen resource will provide the best response to a given query or instruction, since better suited resources may exist.
One exemplary field where the aforementioned problems are prevalent involves computer reservation systems (CRS). Travel agents use CRS to make reservations with airlines, hotels, car rental agencies, bus lines, and railroads. For example, a travel agent may want to know what the available flights are from Miami to Boston on a given date. Examples of CRS's include SABRE™, SYSTEM ONE™, APOLLO™, AMADEUS™ and WORLDSPAN™. Unfortunately, code language formats, which are a combination of cryptic codes and characters, vary widely from one CRS to another, even though a typical request is simple and common regardless of the CRS being used.
Because different computer reservation systems utilize different code language formats, a travel agent needs to learn a number of different code languages if that agent desires to interact with more than one CRS. Learning each new CRS's code language costs the travel agency additional time and money. Moreover, a travel agent can easily become confused when interacting with one of the computer reservation systems because of the variation in code words. As a result, a travel agent may erroneously enter a code for one CRS while attempting to interact with a second CRS. Such mistakes decrease the travel agent's efficiency in serving customers.
U.S. Pat. No. 5,781,892, METHOD AND APPARATUS FOR INTERACTING WITH A COMPUTER RESERVATION SYSTEM, issued to Hunt et al. on Jul. 14, 1998, hereby incorporated by reference, describes a CRS that uses an application program interface to convert the cryptic CRS codes into a single user friendly language. While this is helpful to some extent, the end-user must still learn to interact with the system in a defined syntax. U.S. Pat. No. 5,842,176, METHOD AND APPARATUS FOR INTERACTING WITH A COMPUTER RESERVATION SYSTEM, issued to Hunt et al. on Nov. 24, 1998, hereby incorporated by reference, describes a system for allowing multiple sessions to be established with a single CRS. This invention does not help identify the best CRS for a given query. Moreover, while it is possible to manually query multiple CRS's, the cost and time associated with multiple queries can become prohibitive, because each query into a CRS has an associated cost.
A further problem not adequately solved by the prior art is that an end-user often must submit a series of queries into a CRS to obtain the exact information required. For example, an end-user might first query for flights between Boston and Miami on a given date. If an adequate flight does not exist, the end-user may have to query for a flight on a different day, or query for a flight to a nearby airport. This series of queries can continue indefinitely until an adequate flight is located, at which time the end-user might book the flight. Accordingly, given the described problems in submitting queries, there exists a tremendous burden involved in re-entering the same set of information over and over again, with only minor variations. U.S. Pat. No. 5,832,454, RESERVATION SOFTWARE EMPLOYING MULTIPLE VIRTUAL AGENTS, issued to Jafri et al. on Nov. 3, 1998, hereby incorporated by reference, attempts to address the above problem by using embedded rules to generate "near-immediate" results to provide multiple priced itineraries. While this teaching may be helpful in certain limited circumstances, it often will merely generate unwanted information, and do nothing to limit the data entry required for refining an end-user query.
Accordingly, a need exists to provide a comprehensive system that comprises a uniform and easy-to-use interface that can communicate with multiple resources, a system for effectively selecting from the multiple resources, and a system that will allow users to easily modify and refine requests during a series of related queries.
3. Summary of the Invention
The present invention addresses the above-mentioned problems by providing a system and method that allows requests to be serviced by one or more resources, or hosts. The system utilizes a program product with a natural language (NL) user interface, wherein the computer program comprises: (1) an input system for inputting an NL command; (2) a translation system that extracts a request from the NL command and stores the request in a host-independent format; and (3) a routing system for servicing the request, wherein the routing system comprises a mechanism for selecting a host, for converting the request into a host dependent directive, and for forwarding the directive to the selected host. The system may further include a speech recognition system, a local data source for servicing the NL command, templates for converting the request into the host dependent directive, a heuristic for selecting the host, and an output system for obtaining and outputting intelligent natural language responses.
In addition, the invention may comprise a context mechanism to interpret natural language instructions, wherein the context mechanism comprises: (1) a context database that stores responses obtained in response to previous requests, wherein each response comprises response elements; (2) a context requirement mechanism that determines if a current request comprising a current set of request elements is ambiguous; (3) a context retrieving mechanism that retrieves response elements from the context database; and (4) a disambiguation mechanism that uses the retrieved response elements to disambiguate the current set of request elements. In addition, the context mechanism can also store request elements and use the previous request elements to disambiguate the current set of request elements. The context mechanism can be in the form of a system or program product.
It is therefore an advantage of the present invention to provide a system for interfacing with a plurality of hosts using a natural language interface.
It is therefore a further advantage of the present invention to provide a host-independent format for storing requests.
It is therefore a further advantage of the present invention to provide format templates for converting host-independent requests into host-dependent directives.
It is therefore a further advantage of the present invention to provide a heuristic for selecting the host.
It is therefore a further advantage of the present invention to provide a context mechanism that allows an end-user to submit a series of related commands without having to resubmit each command in its entirety.
It is therefore a further advantage of the present invention to provide intelligent responses to the end-user.
These and other advantages will be described in more detail in reference to the drawings and detailed description provided below.
4. Brief Description of the Drawings
The preferred exemplary embodiment of the present invention will hereinafter be described in conjunction with the appended drawings where like designations denote like elements, and:
FIG. 1 depicts a block diagram of a computer system with a computer program in accordance with a preferred embodiment of the present invention;
FIG. 2 depicts a block diagram of the computer program residing in the computer system of FIG. 1;
FIG. 3 depicts a flow chart of an operation of the computer program residing in the computer system of FIG. 1;
FIG. 4 depicts a block diagram of a context mechanism;
FIG. 5 depicts the contents of a natural language request object;
FIG. 6 depicts a request database; and
FIG. 7 depicts a response database.
5. Detailed Description of the Preferred Embodiment
Referring now to FIG. 1, a computer system 10 depicting an embodiment of the present invention is shown comprising memory 12, a central processing unit (CPU) 14, input/output system (I/O) 16, and bus 18. Memory 12 may comprise any known type of data storage and/or transmission media, including magnetic media, optical media, random access memory (RAM), read-only memory (ROM), a data cache, a data object, etc. Moreover, memory 12 may reside at a single physical location, comprising one or more types of data storage, or be distributed across a plurality of physical systems in various forms. CPU 14 may likewise comprise a single processing unit, or be distributed across one or more processing units in one or more locations, e.g., on a client and server. I/O 16 may comprise any known type of input/output device, including a keyboard, mouse, voice recognition, speech output, CRT, printer, disk drives, etc. Bus 18 provides a communication link between each of the components in the computer system 10 and likewise may comprise any known type of transmission link, including electrical, optical, radio, etc. In addition, although not shown, additional components, such as cache memory, communication systems, system software, etc., may be incorporated into computer system 10.
It is understood that the present invention can be realized in hardware, software, or a combination of hardware and software. The computer system 10 according to the present invention can be realized in a centralized fashion in a single computer, or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system—or other apparatus adapted for carrying out the methods described herein—is suited. A typical combination of hardware and software could be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein. The present invention can also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which—when loaded in a computer system—is able to carry out these methods. Computer program, software program, or software, in the present context mean any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: (a) conversion to another language, code or notation; and/or (b) reproduction in a different material form.
Stored in memory 12 is software program 30 that allows an end-user to interface with a plurality of back end “hosts” using natural language (NL) commands. A host may be defined as a computerized resource for providing on-line information and services, such as a computerized reservation system (CRS). While this embodiment is described with reference to CRS's, it should be recognized that the invention covers any application that may potentially interact with a plurality of resources or hosts. As noted, the present invention uses NL commands for the interface. A natural language interface provides a system by which information can be extracted from normally spoken or written commands, phrases or sentences. An example of a software application that utilizes a natural language interface is VIA VOICE™ by IBM Corporation.
Referring now to FIG. 2, software program 30 is shown in detail. It should be appreciated that the grouping of the many functional components shown in FIG. 2 into systems 20, 22, 24, and 26, has been done primarily for the purposes of convenience in describing this embodiment and such groupings are not meant to limit the invention. Control agent 28 controls and manages the operation for, and flow of information among, each of the systems 20, 22, 24, and 26. While depicted as a separate entity, control agent 28 could be implemented in any manner without departing from the scope of the invention, including being incorporated into one or more of systems 20, 22, 24, and 26.
Software program 30 comprises input system 20 that includes two mechanisms, text input 31 and speech input 32, for inputting NL commands. A typical NL command might be “show me the flights from Boston to Miami this Friday that leave December 31 after 5:00 P.M. on AMERICAN AIRLINES™.” This could be inputted in a text format via text input 31 implemented as, for example, a keyboard and/or mouse that interface with a graphical user interface (GUI). Alternatively, the command could be inputted as speech (i.e., an uttered command) into a speech engine 34, which forwards the uttered command to a speech recognition system 36. Speech recognition system 36 converts the uttered command into a text format. Speech engine 34 and speech recognition system 36 can comprise any system (e.g., VIA VOICE™ by IBM Corp., NATURALLY SPEAKING™ by DRAGON SYSTEMS™, etc.) and use any type of interface, including SMAPI, SAPI, JSAPI, etc. In addition, a grammar switch 38 may be included to restrict the end-user to a specific set of phrases. For example, grammar switch 38 may be set to recognize grammar specific to airline reservations, train reservations, cruise reservations, auto reservations, etc.
Once the command is in a text format, control agent 28 forwards the command to translation system 22, and more specifically to an NL translator 40, which parses the command into a set of "request elements." There exist many different possible implementations for NL translator 40. For example, VIA VOICE™ comprises a grammar-based system that utilizes a grammar compiler 41 to convert a command to an annotated string comprised of the request elements. Other possible implementations could comprise statistical parsing or machine translation techniques, such as maximum entropy or source channel, that utilize a probabilistic scoring metric to score different interpretations. In a statistics-based system, the space of possible semantic meanings is searched, and each candidate meaning is scored based on the likelihood that the English sentence matches it. For example, if the user said "show me the flights from A to B on UNITED on Saturday morning," the system would compare this to hundreds of possible meanings in some canonical space. The statistical model learns that certain phrases in English lead to certain "clauses" in the "formal language" sentence. Then, at run time, the statistical translator searches the space of all possible formal language sentences (of which there are exponentially many) and scores them according to the model. The output is the most probable formal language sentence.
Furthermore, the system could utilize word or phrase spotters to recognize specific semantic clauses. Such systems may use regular expressions, such as:
AIRLINE=[semantically irrelevant words] airline [more semantically irrelevant words],
which recognizes specific airlines, not by a grammar, but instead by seeing the airline name somewhere in the sentence. Other implementations involve “information retrieval,” where the system finds the most likely document from a set of candidates by scoring how well the words in the query match the ones in the candidate documents. Accordingly, while this embodiment describes a grammar based parsing system, it is understood that any other system for extracting elements from a sentence, such as those described above, could be used.
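The phrase-spotting approach described above could be sketched roughly as follows. This is a minimal illustration, not the patent's implementation: the vocabularies, element names, and regular expressions are invented, and a real system would load complete IATA tables rather than the few entries shown.

```python
import re

# Hypothetical vocabularies; a real system would load full IATA tables.
AIRLINES = {"american": "AA", "united": "UN", "delta": "DL"}
CITIES = {"boston": "BOS", "miami": "MIA", "albany": "ALB", "orlando": "MCO"}

def spot_elements(command: str) -> dict:
    """Spot request elements anywhere in the sentence, ignoring the rest."""
    text = command.lower()
    elements = {}
    m = re.search(r"from (\w+) to (\w+)", text)
    if m:
        elements["origin"] = CITIES.get(m.group(1), m.group(1).upper())
        elements["destination"] = CITIES.get(m.group(2), m.group(2).upper())
    for name, code in AIRLINES.items():
        if name in text:  # the airline name may appear anywhere in the sentence
            elements["airline"] = code
    m = re.search(r"after (\d{1,2}):(\d{2})\s*p\.?m\.?", text)
    if m:
        elements["time"] = f"{int(m.group(1)) + 12}{m.group(2)}"  # 24-hour HHMM
    return elements
```

Note that, in keeping with the phrase-spotter idea, the airline is found by its mere presence in the sentence rather than by a grammar rule about where it may occur.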
In the case of the command stated above, "show me the flights from Boston to Miami this Friday that leave December 31 after 5:00 P.M. on AMERICAN AIRLINES™," the request elements might include: day of the month=31, month of the year=12, time=1700, city of origin=BOS, city of destination=MIA, and airline=AA. This information may then be outputted back to the user via output system 26, for example, displayed on a screen within a GUI or as speech.
In the event NL translator 40 cannot extract a complete set of request elements from the command, a context mechanism 42 may be used to disambiguate the request, e.g., fill in needed details, or cause control agent 28 to query the user for more details. Disambiguation may be defined as the process of translating an ambiguous or incomplete request into a complete request. For example, the command, "book me on the UNITED™ flight," by itself is ambiguous. Without additional details (e.g., flight number, etc.), the command could not be submitted to a host. Context mechanism 42 disambiguates current requests by utilizing elements from previous response(s) and/or request(s) stored in a data cache. For example, suppose that after a response to the initial command, "show me the flights from Boston to Miami that leave December 31 after 5:00 P.M. on AMERICAN AIRLINES," the user submits a new command, "how about on UNITED?" The command, "how about on UNITED?" by itself clearly lacks enough context to form a complete request. However, context mechanism 42 can compute a complete request by borrowing elements from the previous response or request, namely, day of the month=31, month of the year=12, time=1700, city of origin=BOS, city of destination=MIA, and combining them with elements from the current request, namely, airline=UN. In this case, the request element from the new request, i.e., UNITED AIRLINES, has replaced the corresponding request element from the previous request.
Accordingly, context mechanism 42 allows an end user to submit a series of related commands without having to re-enter repeated portions of those commands. Context mechanism 42 may utilize any algorithm that borrows data from prior response(s) and/or request(s) to disambiguate a present request. Moreover, when context mechanism 42 cannot provide context based upon prior request or response elements, context mechanism 42 can cause control agent 28 to query the user for more details. For example, if the first command was “how about on UNITED,” and no previous context existed, context mechanism 42 can cause control agent 28 to ask “where do you want to fly from and to?” The operation of context mechanism 42 is described in more detail below with reference to FIG. 4.
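The element-borrowing step described above amounts to overlaying the new request's elements onto the remembered ones. A minimal sketch, with invented dictionary keys standing in for the request-element fields:

```python
def disambiguate(current: dict, previous: dict) -> dict:
    """Borrow missing elements from the prior request/response; elements
    stated in the new command replace the remembered ones."""
    merged = dict(previous)
    merged.update(current)
    return merged

# Elements remembered from "show me the flights from Boston to Miami that
# leave December 31 after 5:00 P.M. on AMERICAN AIRLINES" ...
previous = {"day": 31, "month": 12, "time": "1700", "origin": "BOS",
            "destination": "MIA", "airline": "AA"}
# ... and the follow-up "how about on UNITED?" contributes a single element.
request = disambiguate({"airline": "UN"}, previous)
```

The merged `request` carries the cities, date, and time forward while the airline element is replaced, which is exactly the behavior described for context mechanism 42.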
Once NL translator 40 has translated the command into a request comprised of a set of request elements, the request elements are forwarded to routing system 24, and are stored in a generic format in a natural language request (NLR) object 44. NLR object 44 comprises a set of fields that stores each of the request elements. A detailed example and description of NLR object 44 is provided below with regard to FIG. 5. At this point, the end user may want to modify the “un-serviced” request, which is displayed by output system 26. This can be accomplished by directly modifying the information in the NLR object, via the input system 20 with, for example, a graphical user interface (GUI).
Routing system 24 "services" the request either locally, e.g., via a local database or cache, or remotely, e.g., via a host. A request may be serviced, for example, by obtaining a response such as flight availability, or by performing some action, such as booking a flight. In order to service a request, a back-end router 46 first determines whether the request can be serviced locally by a data source 48. Data source 48 may, for example, comprise a cache of recent requests and responses, thereby avoiding the need to remotely service repetitive requests. Accordingly, using a local data source reduces costs and saves time. Data source 48 may simply point to, or include, the same data stored by context mechanism 42.
If the request cannot be serviced locally, back-end router 46 must then determine which host or hosts 58 should be used to remotely service the request. Thus, for an airline reservation embodiment, hosts may comprise different CRS's, such as APOLLO, SABRE, WORLDSPAN, etc. The choice of which host to use may be defined by the user (via a command), by the system (e.g., set by a systems administrator), or by a heuristic 50. Heuristic 50 may comprise any decision making system, and include factors such as cost, capability, availability, speed and/or reliability. Thus, for example, heuristic 50 may comprise a software routine that calculates which of the plurality of hosts 58 is most likely to service the request the fastest based on previous inquiries. Alternatively, heuristic 50 could compute the likely cost to have each host 58 service the request, and choose the cheapest one. Obviously, many possible implementations for heuristic 50 exist and any such implementation is believed to fall within the scope of this invention. Once a host is chosen, the request, which is stored in NLR object 44 in a host-independent format, must be converted into a host-dependent directive, since each host 58 typically utilizes a unique syntax. An example of a host dependent directive for a CRS such as Apollo might look something like:
where A=flight availability, 31=day of the month, DEC=month, BOS=departing city, MIA=arriving city, and AA=airline. Encoder 52 converts the request into a host dependent directive using format templates 54. Each of format templates 54 corresponds to one of the plurality of hosts 58 and provides a mapping from a host-independent request to a unique host-dependent directive. Format templates 54 may be implemented such that they are easily accessible should a new host be added, or should an existing host change its syntax. For example, each of format templates 54 may be accessible through a GUI. Additional data, such as IATA codes 56, which provide the three letter codes for airports (e.g., MIA=Miami International Airport), rental cars, city codes, etc., may also be made available. Easy access to the database of IATA codes is also provided through a GUI since the codes are subject to change from time to time. Having the codes 56 stored locally avoids the need to make remote host inquiries to determine, for example, what airport a code represents.
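The template mapping could be sketched as follows. The template strings and host names here are invented for illustration and do not reproduce any real CRS syntax; the point is only that one host-independent request renders into a different directive per host.

```python
# Invented template strings for two fictional hosts. The field names mirror
# the host-independent request elements stored in the NLR object.
TEMPLATES = {
    "HOST_A": "A{day:02d}{month}{origin}{dest}-{airline}",
    "HOST_B": "AVAIL/{origin}/{dest}/{day:02d}{month}/{airline}",
}

def encode(request: dict, host: str) -> str:
    """Render the host-independent request with the chosen host's template."""
    return TEMPLATES[host].format(**request)

request = {"day": 31, "month": "DEC", "origin": "BOS", "dest": "MIA",
           "airline": "AA"}
directive = encode(request, "HOST_A")   # "A31DECBOSMIA-AA"
```

Because each host's syntax lives in a single template string, adding a host or tracking a syntax change is a one-line edit, which is the maintainability property claimed for format templates 54.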
Once the host-dependent directive is created, it is sent to control agent 28, and is forwarded to a selected back-end application 60, which submits the directive to the appropriate host 58. The selected host 58 services the directive by providing a response and/or taking some action. The response is returned to back-end application 60, and then to control agent 28. The response is then passed to the output system 26 where it is decoded via decoder 62 and provided to the end-user as output 64. Output 64 may comprise any type of output, e.g., a text display within a GUI, audio output via a telephony system, etc. During this process, selected host 58 and back-end application 60 may provide information (e.g., the availability of the selected CRS, the cost to service the request, the time to service the request, etc.) that heuristic 50 can use to select hosts 58 in the future. In addition, the response may be returned to translation system 22, where it is passed through normalizer 43 and default field filler 45, and then stored in context mechanism 42 for the purposes described above. Normalizer 43 converts the host response from a host specific format into a generic format, and the default field filler 45 inserts details not provided by the host, for purposes described below.
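One simple way heuristic 50 could combine the cost and timing feedback described above is a weighted score per host, choosing the host with the lowest score. The weights, per-query costs, and timings below are invented for illustration; the patent leaves the decision-making system open.

```python
# A sketch of heuristic 50: score each host on an assumed per-query cost and
# on observed average response time, then pick the lowest-scoring host.
def choose_host(hosts: dict, cost_weight: float = 1.0,
                speed_weight: float = 0.5) -> str:
    def score(stats: dict) -> float:
        return cost_weight * stats["cost"] + speed_weight * stats["avg_seconds"]
    return min(hosts, key=lambda name: score(hosts[name]))

# Illustrative statistics accumulated from previous inquiries.
hosts = {
    "APOLLO":    {"cost": 0.10, "avg_seconds": 2.0},
    "SABRE":     {"cost": 0.15, "avg_seconds": 1.0},
    "WORLDSPAN": {"cost": 0.08, "avg_seconds": 4.0},
}
best = choose_host(hosts)
```

As each serviced directive returns fresh cost and timing information, the statistics dictionary would be updated, so the selection adapts over time.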
Decoder 62 provides NL generation by utilizing various pieces of system data 63 to provide “intelligent” responses for the end-user via output 64. Intelligent responses comprise formulated NL responses that take into account relevant pieces of information from the system, in addition to the response data that may have been returned by the selected host. For instance, decoder 62 could examine the way in which a command was stated to decode and formulate an intelligent response. For example, if a user asked “what are the morning flights . . . ,” decoder 62 could generate a response “the morning flights are . . . ” Alternatively, if the user asked, “what are the flights between six and ten,” decoder 62 could output “the flights between six and ten are . . . ” This could be accomplished by having the decoder 62 formulate output based on the user's command, as well as the response. Thus, system data 63 may comprise a command cache that stores commands. The command cache could be implemented, for example, in context mechanism 42, NL translator 40, NLR object 44, or by a totally separate database.
Another reason for providing intelligent responses is that users like confirmation that the system understood the command. Thus, if the user first asked for flights between A and B, the system should reply "flights between A and B are . . . " If the user then says "UNITED flights," the system should say "the UNITED flights are . . . " The decision on which pieces of information to omit (e.g., A and B in the second response) can be implementation specific. However, failing to omit "implicitly confirmed" data results in responses like, "the UNITED morning flights from A to B serving breakfast and stopping in Chicago are . . . ," which are typically not necessary. Thus, decoder 62 may be implemented to only provide confirmation of the new data, which could likewise be determined by examining the command cache.
A further implementation of decoder 62 involves responding to “explicit” portions of a command. For example, consider the case where the user asks something like, “what are the arrival times of flights from A to B.” This query will likely be interpreted as a request for flights from A to B, even though the user is explicitly interested in arrival times. An intelligent response to this query could include the list of flights with the arrival times highlighted in the output stream displayed within a GUI, or mentioned in the audio for telephony (e.g., “arrival times for flights from A to B are . . . ”). To implement this, NLR object 44 could include one or more control bits for each field to indicate if the user explicitly requested this information. In the case above, control bits in the arrival time field would be enabled to indicate that the response should include or highlight arrival time information.
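The control-bit idea could be sketched as follows. The flight data and field names are illustrative (the flight numbers echo the examples in FIG. 7), and the set of explicitly requested fields stands in for the per-field control bits in NLR object 44:

```python
# A sketch of how per-field control bits could drive decoder 62: fields the
# user explicitly asked about are flagged, and the response highlights them.
def decode_response(flights: list, explicit_fields: set) -> str:
    if "arrival_time" in explicit_fields:
        lines = [f"flight {f['number']} arrives at {f['arrival_time']}"
                 for f in flights]
        return "Arrival times: " + "; ".join(lines)
    lines = [f"flight {f['number']}" for f in flights]
    return "Flights: " + ", ".join(lines)

flights = [{"number": 832, "arrival_time": "1805"},
           {"number": 1014, "arrival_time": "2130"}]
# "what are the arrival times of flights from A to B" flags arrival_time.
reply = decode_response(flights, {"arrival_time"})
```

With no fields flagged, the same function falls back to a plain flight listing, mirroring the default behavior described for ordinary availability queries.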
It is understood that decoder 62 may comprise any type of algorithm or system for providing intelligent responses based upon system data 63, and any such implementations are believed to fall within the scope of this invention. Thus, system data 63 may comprise any information obtainable from software program 30, including but not limited to: data in the command cache, NLR object 44, and context mechanism 42; IATA codes 56; response data; and data source 48.
Referring now to FIG. 3, a flowchart is shown depicting the operation of software program 30. First an NL command is submitted at step 70. Next, the NL command is translated and broken down into request elements at step 72. The request elements are then output for the user to inspect at step 74. The user is then given an opportunity to modify the request at step 76. If the request requires modification, the request is updated 78 and the new request is outputted 74. Subsequently, a back-end host is selected 80, and the request is encoded into a host-dependent directive 82. Next, the system checks to see if the request can be serviced by a local database 84. If it can be serviced locally, a response is generated and the response elements are displayed/spoken 92. If the request cannot be serviced locally 84, the directive is submitted to the chosen host 88, and a response is generated. The response is decoded into a set of response elements 90, and the elements are displayed or spoken 92. The user may at that point further modify or refine the request by submitting a new NL command.
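The flow of FIG. 3 can be sketched as straight-line control flow. Every helper below is a hypothetical placeholder for the corresponding system described above (translation system 22, routing system 24, output system 26, etc.), passed in as a parameter so the sketch stays self-contained:

```python
# The steps of FIG. 3 as control flow; each helper is a placeholder for the
# corresponding system described in the text.
def handle_command(command, translate, confirm, select_host, encode,
                   local_lookup, submit, decode, output):
    request = translate(command)            # steps 70-72: break into elements
    request = confirm(request)              # steps 74-78: user inspects/modifies
    host = select_host(request)             # step 80: choose a back-end host
    directive = encode(request, host)       # step 82: host-dependent directive
    response = local_lookup(request)        # step 84: try the local data source
    if response is None:
        response = submit(directive, host)  # step 88: remote host services it
    output(decode(response))                # steps 90-92: decode and present
    return response
```

Note that, following the flowchart as described, host selection and encoding happen before the local-service check, so the directive is ready if the local lookup misses.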
Referring now to FIG. 4, a diagram is depicted showing the operation of context mechanism 42. Context mechanism 42, as noted above, may be used to “disambiguate” a request 104. Context mechanism 42 creates a disambiguated request 106 by using details or context from one or more previous responses 108 stored in host data cache 94 and/or previous requests stored in request data cache 95. Thus, for example, suppose a user first submitted the command, “show me the flights from Albany to Orlando on November 20,” and the system returned information for three flights, one on UNITED, one on DELTA, and one on AMERICAN. Now suppose a subsequent command was submitted, “book me on the UNITED flight.” Context mechanism 42 would search host data cache 94 to obtain the necessary details (e.g., the flight number for UNITED returned by the previous response) and create disambiguated request 106. Because a user is typically responding to what they were just told, context mechanism 42 allows the end-user to interact with the system in a natural way, without having to repeat or retrieve every detail.
Context mechanism 42 comprises a storing mechanism 96 that stores previous responses 108 in the host (or response) data cache 94. An example of host data cache 94 is described in more detail below with respect to FIG. 7. Storing mechanism 96 may also store previous requests in request data cache 95. An example of request data cache 95 is described in more detail below with respect to FIG. 6. It should be recognized that the host and request data caches 94 and 95, as well as any other data source described herein, can be implemented in any known fashion, e.g., as an SQL database, in a shared database, as temporary storage, in RAM, as a data object, etc. In addition, it is understood that the number of records maintained in each cache depends upon the specific implementation. For example, request data cache 95 may be implemented such that only a single past request 104 is stored. In this case, NLR object 44 could act as request data cache 95.
Context mechanism 42 also includes a context requirement mechanism 98 that determines whether context is required, i.e., whether disambiguation is required. This process can be implemented in any known fashion. For example, suppose a user states “show me the later flights.” The word “later” could be used to indicate that context is required. Other options would be to include a “checking” system that could analyze a request 104 and determine if enough details are provided. If enough details are not provided, additional details would be pulled from a previous response or request using context retrieval mechanism 100. For the case where additional context is needed to disambiguate a request, a disambiguation mechanism 102 would be used.
Disambiguation mechanism 102 comprises logic that generates disambiguated request 106 based on: (1) an ambiguous set of request elements, and (2) elements from previous request(s) and/or response(s). The logic can be implemented in any manner. For example, disambiguation mechanism 102 may overlay current request elements (e.g., UNITED AIRLINES) onto elements from a previous response (e.g., TWA flights from Paris to London leaving on July 1) to form a disambiguated request (UNITED flights from Paris to London leaving on July 1). Disambiguation mechanism 102 can also include logic to perform calculations. For example, in order to disambiguate the command, “what DELTA flights return in ten days,” information from a prior request/response must be obtained, followed by a calculation of adding ten to the departing day to determine a return date. The disambiguated request can be outputted back to the user for verification. If disambiguation mechanism 102 cannot build a disambiguated request, it can query the user for the needed request elements.
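The calculation example above ("what DELTA flights return in ten days") could be sketched as follows. The field names, the city-swapping convention for the return leg, and the airline code are invented for illustration:

```python
import datetime

# Sketch of disambiguation mechanism 102 for a command like "what DELTA
# flights return in ten days": overlay the new airline element, then derive
# the return date by calculation from the remembered departure date.
def disambiguate_return(previous: dict, airline: str, days_later: int) -> dict:
    depart = datetime.date(previous["year"], previous["month"], previous["day"])
    ret = depart + datetime.timedelta(days=days_later)
    return {"airline": airline,
            "origin": previous["destination"],   # return leg swaps the cities
            "destination": previous["origin"],
            "day": ret.day, "month": ret.month, "year": ret.year}

# Remembered from "show me the flights from Albany to Orlando on November 20."
previous = {"year": 1999, "month": 11, "day": 20,
            "origin": "ALB", "destination": "MCO"}
request = disambiguate_return(previous, "DL", 10)
```

Using real calendar arithmetic (rather than naively adding ten to the day number) keeps the calculation correct across month and year boundaries.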
Referring now to FIG. 5, a diagram is depicted showing NLR object 44. NLR object 44 may comprise multiple types of records for different types of travel. For example, record 110 is set up to store flight reservation requests, while records 112 and 114 may be set up to handle car reservation requests, cruise reservation requests, rail reservation requests, etc. Each record comprises a number of fields that contain different types of information. Record 110 comprises a flight information request that could have resulted from the command, “show me the flights from Miami to Boston on December 31 on AMERICAN departing around 3:00 P.M.” As is evident from the diagram, additional details, e.g., arrival time, could have been included in the command, but would not be necessary to obtain a response. Moreover, the type and number of fields are in no way limited, and may be determined by the particular implementation. For example, in addition to including flight request information, record 110 comprises a control info field 115 that can be used by output system 26 to generate an intelligent response. Thus, control info 115 could comprise a control bit for each field, which could be used to indicate whether to include that piece of data as part of the response. It is understood that there exists any number of possible implementations and uses for control information, and all such implementations and uses are believed to fall within the scope of this invention.
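A flight record with a per-field control bit, as described above, might look like the following. The field names and the dictionary encoding of control info 115 are assumptions chosen for illustration; the patent leaves both to the particular implementation.

```python
from dataclasses import dataclass, field

@dataclass
class FlightRequestRecord:
    """Hypothetical sketch of record 110 in NLR object 44."""
    origin: str = None
    destination: str = None
    date: str = None
    airline: str = None
    depart_time: str = None
    arrive_time: str = None
    # control info 115: one bit per field; True means the output system
    # should echo that piece of data as part of the response.
    control_info: dict = field(default_factory=dict)

    def response_fields(self):
        return [name for name, bit in self.control_info.items() if bit]
```

An output system could then consult `response_fields()` to decide which details to speak or display back to the user.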
Referring now to FIGS. 6 and 7, a sample request data cache 95 is depicted as well as a corresponding sample host data cache 94. Request data cache 95, shown in FIG. 6, comprises two stored records, request #1 and request #2. Record 120, which contains request #1, reflects the most recent request stored in NLR object 44 shown in FIG. 5. Record 122, which contains request #2, reflects a previous request. Host data cache 94 comprises five records. The first two records 116 in host data cache 94 correspond to request #1 in request data cache 95. The last three records 118 in host data cache 94 correspond to request #2 in request data cache 95. Thus, two records (response #1 and response #2) were returned in response to request #1, “show me the flights from Miami to Boston on December 31 on AMERICAN departing around 3:00 P.M.” The responding records include flights 832 and 1014. The second request, “show me the flights from Albany to Orlando on November 20,” returned three records 118, which include flights 111 on UNITED, 907 on DELTA, and 4210 on AMERICAN.
Also included in each record of host data cache 94 are default fields 124 that comprise a host identifier (e.g., host #), response identifier (response #), and request identifier (request #). The host identifier determines which host, e.g., Worldspan, was used to service the request. The response identifier keeps track of each response for a given request. The request identifier determines which request the records were generated in response to. Request data cache 95 likewise includes a host identifier (in the event the user specifies which host to use) and a request identifier. The request number acts as a time stamp to let the system know which request was most recently submitted. Although not required, each of the records in host data cache 94 is stored in a uniform or normalized fashion. This is accomplished by passing responses to normalizer 43 (see FIG. 2), which converts a host dependent response into a generic format. After a response is passed through normalizer 43, default field filler 45 (see FIG. 2) is used to add default information into default fields 124, such as the host identifier, response identifier, etc. Since default information is important for keeping track of response information within context mechanism 42, but is not typically provided by a host, default field filler 45 is utilized to insert such information. It is understood that the type of default information provided by default field filler 45 depends upon the particular implementation of context mechanism 42.
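The normalize-then-fill pipeline described above can be sketched as two small functions. The per-host `field_map` translation table and the specific default-field keys are assumptions; the patent only requires that host-dependent responses be converted to a generic format before bookkeeping fields are added.

```python
def normalize(host_response, field_map):
    """Sketch of normalizer 43: map a host-dependent response onto
    generic field names using a per-host translation table."""
    return {generic: host_response.get(native)
            for native, generic in field_map.items()}

def fill_defaults(record, host_id, request_id, response_id):
    """Sketch of default field filler 45: insert bookkeeping values the
    host does not supply (default fields 124)."""
    record.update({"host #": host_id,
                   "request #": request_id,
                   "response #": response_id})
    return record
```

A response would thus flow host → `normalize` → `fill_defaults` → host data cache, so every cached record carries the identifiers the context mechanism needs to tie it back to its originating request.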
The foregoing description of the preferred embodiments of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and obviously many modifications and variations are possible in light of the above teachings. Such modifications and variations that are apparent to a person skilled in the art are intended to be included within the scope of this invention as defined by the accompanying claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4887212 *||Oct 29, 1986||Dec 12, 1989||International Business Machines Corporation||Parser for natural language text|
|US5677835 *||Dec 22, 1994||Oct 14, 1997||Caterpillar Inc.||Integrated authoring and translation system|
|US5781892||Nov 13, 1995||Jul 14, 1998||Electronic Data Systems Corporation||Method and apparatus for interacting with a computer reservation system|
|US5832454||Oct 24, 1995||Nov 3, 1998||Docunet, Inc.||Reservation software employing multiple virtual agents|
|US5842176||Nov 13, 1995||Nov 24, 1998||Electronic Data Systems Corporation||Method and apparatus for interacting with a computer reservation system|
|US5897620||Jul 8, 1997||Apr 27, 1999||Priceline.Com Inc.||Method and apparatus for the sale of airline-specified flight tickets|
|US5924089 *||Sep 3, 1996||Jul 13, 1999||International Business Machines Corporation||Natural language translation of an SQL query|
|US6044347 *||Aug 5, 1997||Mar 28, 2000||Lucent Technologies Inc.||Methods and apparatus object-oriented rule-based dialogue management|
|EP0782318A2||Sep 26, 1996||Jul 2, 1997||International Business Machines Corporation||Client-server system|
|1||*||"VoiceXpress™ Installation & Getting Started Guide" Lernout & Hauspie® 1992-1997.|
|2||*||Creative Labs, "User's Guide" © Jul. 1993.|
|3||Eric Jarvis "'Vans': Risks and Rewards" Mar. 1986 pp. 119-128.|
|4||Flight Data Management, Inc. The Airline Reservation Systems (TARS) Jul., 1999 pp. 1-9, (Available at http://fdminc.net/tarsdesc.htm).|
|5||*||Newman ("The Dragon® Naturally Speaking™ Guide" © 1999).|
|6||Perry Flint Air Transport World Sep., 1998 vol. 35, Issue 9 pp. 54, 56, and 58.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US6873991||Oct 2, 2002||Mar 29, 2005||Matter Associates, L.P.||System and method for organizing information|
|US6963831 *||Oct 25, 2000||Nov 8, 2005||International Business Machines Corporation||Including statistical NLU models within a statistical parser|
|US6973639 *||Jan 24, 2001||Dec 6, 2005||Fujitsu Limited||Automatic program generation technology using data structure resolution unit|
|US6980949||Mar 14, 2003||Dec 27, 2005||Sonum Technologies, Inc.||Natural language processor|
|US6985865 *||Sep 26, 2001||Jan 10, 2006||Sprint Spectrum L.P.||Method and system for enhanced response to voice commands in a voice command platform|
|US7398209||Jun 3, 2003||Jul 8, 2008||Voicebox Technologies, Inc.||Systems and methods for responding to natural language speech utterance|
|US7447635 *||Oct 19, 2000||Nov 4, 2008||Sony Corporation||Natural language interface control system|
|US7502738||May 11, 2007||Mar 10, 2009||Voicebox Technologies, Inc.||Systems and methods for responding to natural language speech utterance|
|US7526466 *||Aug 15, 2006||Apr 28, 2009||Qps Tech Limited Liability Company||Method and system for analysis of intended meaning of natural language|
|US7536374 *||Aug 15, 2006||May 19, 2009||Qps Tech. Limited Liability Company||Method and system for using voice input for performing device functions|
|US7539619 *||Sep 7, 2004||May 26, 2009||Spoken Translation Ind.||Speech-enabled language translation system and method enabling interactive user supervision of translation and speech recognition accuracy|
|US7555431 *||Mar 2, 2004||Jun 30, 2009||Phoenix Solutions, Inc.||Method for processing speech using dynamic grammars|
|US7599831||Nov 10, 2005||Oct 6, 2009||Sonum Technologies, Inc.||Multi-stage pattern reduction for natural language processing|
|US7620549||Aug 10, 2005||Nov 17, 2009||Voicebox Technologies, Inc.||System and method of supporting adaptive misrecognition in conversational speech|
|US7634409||Dec 15, 2009||Voicebox Technologies, Inc.||Dynamic speech sharpening|
|US7640160||Dec 29, 2009||Voicebox Technologies, Inc.||Systems and methods for responding to natural language speech utterance|
|US7647225||Nov 20, 2006||Jan 12, 2010||Phoenix Solutions, Inc.||Adjustable resource based speech recognition system|
|US7657424||Feb 2, 2010||Phoenix Solutions, Inc.||System and method for processing sentence based queries|
|US7672841||Mar 2, 2010||Phoenix Solutions, Inc.||Method for processing speech data for a distributed recognition system|
|US7693720||Apr 6, 2010||Voicebox Technologies, Inc.||Mobile systems and methods for responding to natural language speech utterance|
|US7698131||Apr 9, 2007||Apr 13, 2010||Phoenix Solutions, Inc.||Speech recognition system for client devices having differing computing capabilities|
|US7702508||Dec 3, 2004||Apr 20, 2010||Phoenix Solutions, Inc.||System and method for natural language processing of query answers|
|US7711672||Dec 27, 2002||May 4, 2010||Lawrence Au||Semantic network methods to disambiguate natural language meaning|
|US7725307||Aug 29, 2003||May 25, 2010||Phoenix Solutions, Inc.||Query engine for processing voice based queries including semantic decoding|
|US7725320||Apr 9, 2007||May 25, 2010||Phoenix Solutions, Inc.||Internet based speech recognition system with dynamic grammars|
|US7725321||Jun 23, 2008||May 25, 2010||Phoenix Solutions, Inc.||Speech based query system using semantic decoding|
|US7729904||Dec 3, 2004||Jun 1, 2010||Phoenix Solutions, Inc.||Partial speech processing device and method for use in distributed systems|
|US7809570||Jul 7, 2008||Oct 5, 2010||Voicebox Technologies, Inc.||Systems and methods for responding to natural language speech utterance|
|US7818176||Oct 19, 2010||Voicebox Technologies, Inc.||System and method for selecting and presenting advertisements based on natural language processing of voice-based input|
|US7831426||Nov 9, 2010||Phoenix Solutions, Inc.||Network based interactive speech recognition system|
|US7873519||Oct 31, 2007||Jan 18, 2011||Phoenix Solutions, Inc.||Natural language speech lattice containing semantic variants|
|US7912702||Mar 22, 2011||Phoenix Solutions, Inc.||Statistical language model trained with semantic variants|
|US7917367||Mar 29, 2011||Voicebox Technologies, Inc.||Systems and methods for responding to natural language speech utterance|
|US7949529||May 24, 2011||Voicebox Technologies, Inc.||Mobile systems and methods of supporting natural language human-machine interactions|
|US7983917||Oct 29, 2009||Jul 19, 2011||Voicebox Technologies, Inc.||Dynamic speech sharpening|
|US8015006||May 30, 2008||Sep 6, 2011||Voicebox Technologies, Inc.||Systems and methods for processing natural language speech utterances with context-specific domain agents|
|US8069046||Oct 29, 2009||Nov 29, 2011||Voicebox Technologies, Inc.||Dynamic speech sharpening|
|US8073681||Oct 16, 2006||Dec 6, 2011||Voicebox Technologies, Inc.||System and method for a cooperative conversational voice user interface|
|US8112275||Apr 22, 2010||Feb 7, 2012||Voicebox Technologies, Inc.||System and method for user-specific speech recognition|
|US8135660||Oct 10, 2009||Mar 13, 2012||Qps Tech. Limited Liability Company||Semantic network methods to disambiguate natural language meaning|
|US8140327||Apr 22, 2010||Mar 20, 2012||Voicebox Technologies, Inc.||System and method for filtering and eliminating noise from natural language utterances to improve speech recognition and parsing|
|US8140335||Dec 11, 2007||Mar 20, 2012||Voicebox Technologies, Inc.||System and method for providing a natural language voice user interface in an integrated voice navigation services environment|
|US8145489||Jul 30, 2010||Mar 27, 2012||Voicebox Technologies, Inc.||System and method for selecting and presenting advertisements based on natural language processing of voice-based input|
|US8150694||Jun 1, 2011||Apr 3, 2012||Voicebox Technologies, Inc.||System and method for providing an acoustic grammar to dynamically sharpen speech interpretation|
|US8155962||Jul 19, 2010||Apr 10, 2012||Voicebox Technologies, Inc.||Method and system for asynchronously processing natural language utterances|
|US8195468||Jun 5, 2012||Voicebox Technologies, Inc.||Mobile systems and methods of supporting natural language human-machine interactions|
|US8200608||Mar 2, 2010||Jun 12, 2012||Qps Tech. Limited Liability Company||Semantic network methods to disambiguate natural language meaning|
|US8204844||Oct 10, 2009||Jun 19, 2012||Qps Tech. Limited Liability Company||Systems and methods to increase efficiency in semantic networks to disambiguate natural language meaning|
|US8229733||Feb 9, 2006||Jul 24, 2012||John Harney||Method and apparatus for linguistic independent parsing in a natural language systems|
|US8229734||Jun 23, 2008||Jul 24, 2012||Phoenix Solutions, Inc.||Semantic decoding of user queries|
|US8326627||Dec 30, 2011||Dec 4, 2012||Voicebox Technologies, Inc.||System and method for dynamically generating a recognition grammar in an integrated voice navigation services environment|
|US8326634||Dec 4, 2012||Voicebox Technologies, Inc.||Systems and methods for responding to natural language speech utterance|
|US8326637||Dec 4, 2012||Voicebox Technologies, Inc.||System and method for processing multi-modal device interactions in a natural language voice services environment|
|US8332224||Oct 1, 2009||Dec 11, 2012||Voicebox Technologies, Inc.||System and method of supporting adaptive misrecognition conversational speech|
|US8352277||Jan 8, 2013||Phoenix Solutions, Inc.||Method of interacting through speech with a web-connected server|
|US8370147||Dec 30, 2011||Feb 5, 2013||Voicebox Technologies, Inc.||System and method for providing a natural language voice user interface in an integrated voice navigation services environment|
|US8396824||May 30, 2007||Mar 12, 2013||Qps Tech. Limited Liability Company||Automatic data categorization with optimally spaced semantic seed terms|
|US8447607||Jun 4, 2012||May 21, 2013||Voicebox Technologies, Inc.||Mobile systems and methods of supporting natural language human-machine interactions|
|US8452598||May 28, 2013||Voicebox Technologies, Inc.||System and method for providing advertisements in an integrated voice navigation services environment|
|US8515765||Oct 3, 2011||Aug 20, 2013||Voicebox Technologies, Inc.||System and method for a cooperative conversational voice user interface|
|US8527274||Feb 13, 2012||Sep 3, 2013||Voicebox Technologies, Inc.||System and method for delivering targeted advertisements and tracking advertisement interactions in voice recognition contexts|
|US8589161||May 27, 2008||Nov 19, 2013||Voicebox Technologies, Inc.||System and method for an integrated, multi-modal, multi-device natural language voice services environment|
|US8612209 *||Sep 20, 2011||Dec 17, 2013||Nuance Communications, Inc.||Classifying text via topical analysis, for applications to speech recognition|
|US8620659||Feb 7, 2011||Dec 31, 2013||Voicebox Technologies, Inc.||System and method of supporting adaptive misrecognition in conversational speech|
|US8682660 *||May 16, 2009||Mar 25, 2014||Resolvity, Inc.||Method and system for post-processing speech recognition results|
|US8719009||Sep 14, 2012||May 6, 2014||Voicebox Technologies Corporation||System and method for processing multi-modal device interactions in a natural language voice services environment|
|US8719026||Feb 4, 2013||May 6, 2014||Voicebox Technologies Corporation||System and method for providing a natural language voice user interface in an integrated voice navigation services environment|
|US8731929||Feb 4, 2009||May 20, 2014||Voicebox Technologies Corporation||Agent architecture for determining meanings of natural language utterances|
|US8738380||Dec 3, 2012||May 27, 2014||Voicebox Technologies Corporation||System and method for processing multi-modal device interactions in a natural language voice services environment|
|US8762152||Oct 1, 2007||Jun 24, 2014||Nuance Communications, Inc.||Speech recognition system interactive agent|
|US8849652||May 20, 2013||Sep 30, 2014||Voicebox Technologies Corporation||Mobile systems and methods of supporting natural language human-machine interactions|
|US8849670||Nov 30, 2012||Sep 30, 2014||Voicebox Technologies Corporation||Systems and methods for responding to natural language speech utterance|
|US8886536||Sep 3, 2013||Nov 11, 2014||Voicebox Technologies Corporation||System and method for delivering targeted advertisements and tracking advertisement interactions in voice recognition contexts|
|US8942985 *||Nov 16, 2004||Jan 27, 2015||Microsoft Corporation||Centralized method and system for clarifying voice commands|
|US8965753||Nov 13, 2013||Feb 24, 2015||Nuance Communications, Inc.||Method to assign word class information|
|US8983839||Nov 30, 2012||Mar 17, 2015||Voicebox Technologies Corporation||System and method for dynamically generating a recognition grammar in an integrated voice navigation services environment|
|US9015049||Aug 19, 2013||Apr 21, 2015||Voicebox Technologies Corporation||System and method for a cooperative conversational voice user interface|
|US9031845 *||Feb 12, 2010||May 12, 2015||Nuance Communications, Inc.||Mobile systems and methods for responding to natural language speech utterance|
|US9076448||Oct 10, 2003||Jul 7, 2015||Nuance Communications, Inc.||Distributed real time speech recognition system|
|US9105266||May 15, 2014||Aug 11, 2015||Voicebox Technologies Corporation|
|US9171541||Feb 9, 2010||Oct 27, 2015||Voicebox Technologies Corporation||System and method for hybrid processing in a natural language voice services environment|
|US9190063||Oct 31, 2007||Nov 17, 2015||Nuance Communications, Inc.||Multi-language speech recognition system|
|US9263039||Sep 29, 2014||Feb 16, 2016||Nuance Communications, Inc.||Systems and methods for responding to natural language speech utterance|
|US9269097||Nov 10, 2014||Feb 23, 2016||Voicebox Technologies Corporation||System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements|
|US9305548||Nov 18, 2013||Apr 5, 2016||Voicebox Technologies Corporation||System and method for an integrated, multi-modal, multi-device natural language voice services environment|
|US20010044932 *||Jan 24, 2001||Nov 22, 2001||Keiji Hashimoto||Automatic program generation technology using data structure resolution unit|
|US20030130976 *||Dec 27, 2002||Jul 10, 2003||Lawrence Au||Semantic network methods to disambiguate natural language meaning|
|US20040068513 *||Oct 2, 2002||Apr 8, 2004||Carroll David B.||System and method for organizing information|
|US20040236580 *||Mar 2, 2004||Nov 25, 2004||Bennett Ian M.||Method for processing speech using dynamic grammars|
|US20050080614 *||Dec 3, 2004||Apr 14, 2005||Bennett Ian M.||System & method for natural language processing of query answers|
|US20050086059 *||Dec 3, 2004||Apr 21, 2005||Bennett Ian M.||Partial speech processing device & method for use in distributed systems|
|US20060106614 *||Nov 16, 2004||May 18, 2006||Microsoft Corporation||Centralized method and system for clarifying voice commands|
|US20060167678 *||Nov 10, 2005||Jul 27, 2006||Ford W R||Surface structure generation|
|US20070055525 *||Aug 31, 2006||Mar 8, 2007||Kennewick Robert A||Dynamic speech sharpening|
|US20070067155 *||Sep 20, 2005||Mar 22, 2007||Sonum Technologies, Inc.||Surface structure generation|
|US20070094222 *||Aug 15, 2006||Apr 26, 2007||Lawrence Au||Method and system for using voice input for performing network functions|
|US20070094223 *||Aug 15, 2006||Apr 26, 2007||Lawrence Au||Method and system for using contextual meaning in voice to text conversion|
|US20070094225 *||Aug 15, 2006||Apr 26, 2007||Lawrence Au||Method and system for using natural language input to provide customer support|
|US20070185702 *||Feb 9, 2006||Aug 9, 2007||John Harney||Language independent parsing in natural language systems|
|US20070244847 *||Aug 15, 2006||Oct 18, 2007||Lawrence Au||Semantic network methods to disambiguate natural language meaning|
|US20070294200 *||May 30, 2007||Dec 20, 2007||Q-Phrase Llc||Automatic data categorization with optimally spaced semantic seed terms|
|US20070294229 *||May 30, 2007||Dec 20, 2007||Q-Phrase Llc||Chat conversation methods traversing a provisional scaffold of meanings|
|US20080059188 *||Oct 31, 2007||Mar 6, 2008||Sony Corporation||Natural Language Interface Control System|
|US20080189268 *||Oct 3, 2007||Aug 7, 2008||Lawrence Au||Mechanism for automatic matching of host to guest content via categorization|
|US20090313005 *||Jun 11, 2008||Dec 17, 2009||International Business Machines Corporation||Method for assured lingual translation of outgoing electronic communication|
|US20100030723 *||Oct 10, 2009||Feb 4, 2010||Lawrence Au||Semantic network methods to disambiguate natural language meaning|
|US20100030724 *||Feb 4, 2010||Lawrence Au||Semantic network methods to disambiguate natural language meaning|
|US20100145700 *||Feb 12, 2010||Jun 10, 2010||Voicebox Technologies, Inc.||Mobile systems and methods for responding to natural language speech utterance|
|US20100161317 *||Mar 2, 2010||Jun 24, 2010||Lawrence Au||Semantic network methods to disambiguate natural language meaning|
|US20100191553 *||Jan 26, 2010||Jul 29, 2010||Mcintosh Michael David||System and Method for GDS Cryptic Code Interaction with Various Travel Content Sources|
|US20110012943 *||Mar 5, 2008||Jan 20, 2011||Leonard Tsai||Liquid Crystal Display Uniformity|
|US20120010875 *||Jan 12, 2012||Nuance Communications Austria Gmbh||Classifying text via topical analysis, for applications to speech recognition|
|US20150006518 *||Jun 27, 2013||Jan 1, 2015||Microsoft Corporation||Visualizations based on natural language query|
|WO2004032395A2 *||Oct 2, 2003||Apr 15, 2004||Matter Associates, L.P.||System and method for organizing information|
|WO2004032395A3 *||Oct 2, 2003||Oct 28, 2004||Matter Associates L P||System and method for organizing information|
|WO2004049306A1 *||Nov 24, 2003||Jun 10, 2004||Roy Rosser||Autonomous response engine|
|U.S. Classification||704/257, 704/275, 704/E15.026|
|International Classification||G10L15/18, H04M3/493|
|Cooperative Classification||Y10S707/99934, G10L15/1822, H04M3/4936|
|European Classification||H04M3/493S, G10L15/18U|
|Dec 21, 1999||AS||Assignment|
Owner name: IBM CORPORATION, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CARBERRY, MARTIN D.;EPSTEIN, MARK E.;PUCHTEL, GLENN T.;AND OTHERS;REEL/FRAME:010517/0200;SIGNING DATES FROM 19991201 TO 19991203
|Nov 18, 2005||FPAY||Fee payment|
Year of fee payment: 4
|Mar 6, 2009||AS||Assignment|
Owner name: NUANCE COMMUNICATIONS, INC., MASSACHUSETTS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:022354/0566
Effective date: 20081231
|Mar 1, 2010||FPAY||Fee payment|
Year of fee payment: 8
|Jan 29, 2014||FPAY||Fee payment|
Year of fee payment: 12