US6604075B1 - Web-based voice dialog interface - Google Patents


Info

Publication number
US6604075B1
Authority
United States
Prior art keywords
information, interpreter, web, dialog, grammar
Legal status
Expired - Lifetime
Application number
US09/524,964
Inventor
Michael Kenneth Brown
Stephen Charles Glinski
Brian Carl Schmult
Current Assignee
Alcatel Lucent SAS
Sound View Innovations LLC
Original Assignee
Lucent Technologies Inc
Application filed by Lucent Technologies Inc
Priority: US09/524,964
Assigned to Lucent Technologies Inc. (assignment of assignors' interest); assignors: Brian Carl Schmult, Michael Kenneth Brown, Stephen Charles Glinski
Application granted; publication of US6604075B1
Assigned to Credit Suisse AG (security interest); assignor: Alcatel-Lucent USA Inc.
Assigned to Alcatel-Lucent USA Inc. (merger); assignor: Lucent Technologies Inc.
Assigned to Sound View Innovations, LLC (assignment of assignors' interest); assignor: Alcatel Lucent
Assigned to Alcatel-Lucent USA Inc. (release by secured party); assignor: Credit Suisse AG
Assigned to Nokia of America Corporation (change of name); assignor: Alcatel-Lucent USA Inc.
Assigned to Alcatel Lucent (nunc pro tunc assignment); assignor: Nokia of America Corporation


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 3/00: Automatic or semi-automatic exchanges
    • H04M 3/42: Systems providing special services or facilities to subscribers
    • H04M 3/487: Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M 3/493: Interactive information services, e.g. directory enquiries; arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H04M 3/4938: Interactive information services comprising a voice browser which renders and interprets, e.g. VoiceXML
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/226: Procedures used during a speech recognition process using non-speech characteristics
    • G10L 2015/228: Procedures used during a speech recognition process using non-speech characteristics of application context
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 2201/00: Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/40: Electronic components, circuits, software, systems or apparatus used in telephone systems using speech recognition

Definitions

  • the present invention relates generally to the Internet and other computer networks, and more particularly to techniques for communicating information over such networks via an audio interface.
  • the continued growth of the Internet has made it a primary source of information on a wide variety of topics.
  • Access to the Internet and other types of computer networks is typically accomplished via a computer equipped with a browser program.
  • the browser program provides a graphical user interface which allows a user to request information from servers accessible over the network, and to view and otherwise process the information so obtained.
  • Techniques for extending Internet access to users equipped with a telephone or other type of audio interface device have been developed, and are described in, for example, D. L. Atkins et al., “Integrated Web and Telephone: A Language Interface to Networked Voice Response Units,” Workshop on Internet Programming Languages, ICCL '98, Loyola University, Chicago, Ill., May 1998, which is incorporated by reference herein.
  • the first category includes those approaches that use HyperText Markup Language (HTML) and extensions such as Cascading Style Sheets (CSS) to redefine the meaning of HTML tags.
  • the second of the two categories noted above includes those approaches that utilize a new language specialized for voice interfaces, such as Voice eXtensible Markup Language (VoiceXML) from the VoiceXML Forum (which includes Lucent, AT&T and Motorola), Speech Markup Language (SpeechML) from IBM, or Talk Markup Language (TalkML) from Hewlett-Packard.
  • client-side programming such as Java and Javascript
  • server-side methods such as Server-Side Include (SSI) and Common Gateway Interface (CGI) programming.
  • In order to create a rich dialog interface to a computer application using these language-based approaches, an application developer generally must write explicit specifications of the sentences to be understood by the system, such that the actual spoken input can be transformed into the equivalent of a mouse-click or keyboard entry to a web form.
  • the speech synthesizer generates speech which characterizes the structure and content of a web page retrieved over the network.
  • the speech is delivered to a user via a telephone or other type of audio interface device.
  • the grammar generator utilizes textual information parsed from the retrieved web page to produce a grammar.
  • the grammar is then supplied to the speech recognizer and used to interpret voice commands generated by the user.
  • the grammar may also be utilized by the speech synthesizer to create phonetic information, such that similar phonemes are used in both the speech recognizer and the speech synthesizer.
  • the speech synthesizer, grammar generator and speech recognizer, as well as other elements of the Interactive Voice Response (IVR) platform, may be used to implement a dialog system in which a dialog is conducted with the user in order to control the output of the web page information to the user.
  • a given retrieved web page may include, for example, text to be read to the user by the speech synthesizer, a program script for executing operations on a host processor, and a hyperlink for each of a set of designated spoken responses which may be received from the user.
  • the web page may also include one or more hyperlinks that are to be utilized when the speech recognizer rejects a given spoken user input as unrecognizable.
  • the present invention provides an improved voice dialog interface for use in web-based applications implemented over the Internet or other computer network.
  • a web-based voice dialog interface is configured to communicate information between a user at a client machine and one or more servers coupled to the client machine via the Internet or other computer network.
  • the interface in an illustrative embodiment includes a web page interpreter for receiving information relating to one or more web pages.
  • the web page interpreter generates a rendering of at least a portion of the information for presentation to a user in an audibly-perceptible format.
  • the web page interpreter may make use of certain pre-specified voice-related tags, e.g., HTML extensions.
  • a grammar processing device utilizes interpreted web page information received from the web page interpreter to generate syntax information and semantic information.
  • a speech recognizer processes received user speech in accordance with the syntax information, and a natural language interpreter processes the resulting recognized speech in accordance with the semantics information to generate output for delivery to a web server in conjunction with a voice dialog which includes the user speech and the rendering of the web page(s).
  • the output may be processed by a common gateway interface (CGI) formatter prior to delivery to a CGI associated with the web server.
  • the grammar processing device may include a grammar compiler, and may implement a grammar generation process to generate a grammar specification language which is supplied as input to a grammar compiler.
  • the grammar generation process may utilize a thesaurus to expand the grammar specification language.
  • the web page interpreter may further generate a client library associated with interpretations of web pages previously performed on a common client machine.
  • the client library will generally include a script language definition of semantic actions, and may be utilized by a web server in generating an appropriate response to a user speech portion of a dialog.
  • dialog control may be handled by representing a given dialog turn in a single web page.
  • a finite-state dialog controller may be implemented as a sequence of web pages each representing a dialog turn.
  • the processing operations of the web-based voice dialog interface are associated with an application developed using a dialog application development tool.
  • the dialog application development tool may include an authoring tool which (i) utilizes a grammar specification language to generate output in a web page format for delivery to one or more clients, and (ii) parses code to generate a CGI output for delivery to the web server.
  • the techniques of the invention allow a voice dialog processing system to reduce client-server traffic and perform immediate execution of client-side operations.
  • Other advantages include less computational burden on the web server, the elimination of any need for specialized natural language knowledge at the web server, a simplified interface, and unified control at both the client and the server.
  • FIG. 1 is a block diagram of an illustrative web-based processing system which includes a voice dialog interface in accordance with the invention.
  • FIG. 2 illustrates a finite-state dialog process involving a set of web pages and implemented using the web-based processing system of FIG. 1 .
  • FIG. 3 illustrates the operation of a web-based dialog application development tool in accordance with the invention.
  • web page as used herein is intended to include a single web page, a set of web pages, a web site, and any other type or arrangement of information accessible over the World Wide Web, over other portions of the Internet, or over other types of communication networks.
  • processing system as used herein is intended to include any type of computer-based system or other type of system which includes hardware and/or software elements configured to provide one or more of the voice dialog functions described herein.
  • the present invention in an illustrative embodiment automates the application development process in a web-based voice dialog interface.
  • the interface in the context of the illustrative embodiment will be described herein using a number of extensions to conventional HyperText Markup Language (HTML).
  • the illustrative embodiment utilizes HTML, the invention can be implemented in conjunction with other languages, e.g., Phone Markup Language (PML), Voice eXtensible Markup Language (VoiceXML), Speech Markup Language (SpeechML), Talk Markup Language (TalkML), etc.
  • HTML extensions may be embedded in the scope of an HTML anchor as follows:
  • URL represents the Uniform Resource Locator and title is the string of mouse-sensitive words of the hyperlink.
  • the special_tags are generally ignored by conventional visual web browsers that are not designed to recognize them, but have special meaning to voice browsers, such as the PhoneBrowser built on the Lucent Speech Processing System (LSPS) platform developed by Lucent Technologies Inc. of Murray Hill, N.J. Examples of the special tags include the following:
  • VOICE "parameters": Set parameters for voice synthesis.
  • IGNORETITLE: Inhibits Automatic Speech Recognition (ASR) processing of the title of this link; usually used with Grammar Specification Language (GSL).
  • NOPERMUTE: Inhibits combinatoric processing of the title of this link for ASR; forces the user to speak the entire title.
  • DISOVERRIDE: Causes the link title to take precedence over normal anchor titles during disambiguation, including built-in PhoneBrowser commands. If several items specify DISOVERRIDE, then disambiguation will take place among them.
  • PRIORITY #: Set the command priority level; higher #'s take precedence.
  • URLINSERT: Causes the ASR or DTMF response string triggering this anchor to be inserted in the URL in place of a "%s".
  • BARGEIN {"ON" | "OFF"}: Turn barge-in on or off (default is on).
  • INITIALTIMEOUT seconds: Specify how many seconds can elapse from the time the recognizer is started to the time the user starts speaking. If the user does not start speaking during this time, the URL (required) is taken.
  • GAPTIMEOUT seconds: Specify how many seconds can elapse from the time the user stops speaking to the time that recognition takes place. If nothing is recognized during this time, it is presumed that the utterance was not recognized, and the URL (required) is taken. A default value of two seconds is normally supplied, and this should be specified only in special circumstances.
  • MAXTIMEOUT seconds: Specify how many seconds can elapse from the time the recognizer is started to the time that recognition takes place. If no speech starts by this time, or nothing has been recognized, the URL (required) is taken.
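A short sketch can illustrate how a voice browser might pick these tags out of an anchor and apply URLINSERT. The anchor markup and the parsing approach are assumptions for illustration (the patent's own anchor example is not reproduced here); only the tag names come from the list above.

```python
import re

# Hypothetical anchor: the patent embeds special tags inside the scope of an
# HTML anchor; the exact syntax shown here is an assumption for illustration.
anchor = '<a href="http://host/cgi-bin/act?cmd=%s" URLINSERT NOPERMUTE>rotate the cup</a>'

def parse_voice_anchor(tag):
    """Extract href, title, and any special voice tags from an anchor string."""
    href = re.search(r'href="([^"]*)"', tag).group(1)
    title = re.search(r'>([^<]*)</a>', tag).group(1)
    known = {"IGNORETITLE", "NOPERMUTE", "DISOVERRIDE", "URLINSERT"}
    special = {word for word in re.findall(r'\b[A-Z]+\b', tag) if word in known}
    return href, title, special

def resolve_url(href, special, sr_output):
    """URLINSERT: splice the recognized speech into the URL in place of '%s'."""
    if "URLINSERT" in special:
        return href.replace("%s", sr_output.replace(" ", "+"))
    return href

href, title, special = parse_voice_anchor(anchor)
print(resolve_url(href, special, "rotate the cup"))
# → http://host/cgi-bin/act?cmd=rotate+the+cup
```

Conventional visual browsers would simply ignore the unknown attributes, which is why the tags can ride along inside ordinary anchors.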
  • Three tags, LSPSGSL, LSPSGSLHREF and URLINSERT, form the basis for defining a language interface that is richer than simple hyperlink titles.
  • The first two allow the specification of a rich speech recognition (SR) grammar and vocabulary. In a more general-purpose implementation, these might be replaced with other tags, such as GRAMMAR and GRAMHREF, respectively, as described in the above-cited U.S. patent application Ser. No. 09/168,405.
  • the third tag, URLINSERT, allows arbitrary SR output to be communicated to a web server through a Common Gateway Interface (CGI) program.
  • the above-listed IGNORETITLE and NOPERMUTE tags will now be described in greater detail.
  • the current implementation of PhoneBrowser normally processes hyperlink titles to automatically generate navigation command grammars.
  • the processing involves computing all possible combinations of meaningful words of a title (i.e., simple function words like “the,” “and,” etc. are not used in isolation), thereby allowing word deletions so that the user may speak some, and not all, of the words in a title phrase.
  • This simple language model expansion mechanism gives the user some flexibility to speak a variety of commands to obtain the same results.
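The combinatoric title processing just described can be sketched as follows. The function-word list is an illustrative assumption, not the browser's actual list, and the real grammar is a finite-state network rather than a set of strings.

```python
from itertools import combinations

# Small illustrative function-word list; the browser's actual list is not
# given in the patent.
FUNCTION_WORDS = {"the", "a", "an", "and", "or", "of", "to"}

def title_grammar(title, permute=True):
    """Expand a hyperlink title into the phrases a user may speak.

    With NOPERMUTE (permute=False) only the full title is allowed; otherwise
    every non-empty combination of the title's meaningful words (in original
    order) is accepted, so the user may speak some, but not all, of the words.
    Function words are never offered in isolation."""
    if not permute:
        return {title}
    meaningful = [w for w in title.split() if w.lower() not in FUNCTION_WORDS]
    phrases = set()
    for n in range(1, len(meaningful) + 1):
        for combo in combinations(meaningful, n):
            phrases.add(" ".join(combo))
    phrases.add(title)  # the full title is always accepted
    return phrases

print(sorted(title_grammar("rotate the cup")))
# → ['cup', 'rotate', 'rotate cup', 'rotate the cup']
```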
  • the IGNORETITLE tag causes the system to inhibit all processing of the hyperlink title. This is usually only useful when combined with one of the grammar definition tags, but may also be used for certain timeout effects.
  • the NOPERMUTE tag inhibits processing of the title word combinatorics, making only the full explicit title phrase available in the speech grammar.
  • tags are shown by way of illustrative example only, and should not be construed as limiting the invention in any way. Other embodiments of the invention may utilize other types of tags.
  • Conventional methods for creating web-based speech applications generally involve design of speech grammars for SR and the design of a natural language command interpreter to process the SR output.
  • Grammars are usually defined in finite-state form but are sometimes expressed as context-free grammars (CFGs).
  • Natural language interpreters generally include a natural language parser and an execution module to perform the actions specified in the natural language input. This combination provides the basic mechanism for processing a discourse of spoken utterances.
  • Discourse, in this case, is defined as a one-sided sequence of expressions, e.g., one agent speaking one or more sentences.
  • the process of developing web-based speech applications can be automated by using an extension of these principles for HTML-based speech applications.
  • each statement is a sentence.
  • Each word could become a phrase in a more general example.
  • Parentheses enclose exclusive OR forms, where each word or phrase is separated by vertical bars, and these expressions can be nested.
  • Square brackets contain the name of a C function that will be called when the adjoining word (or phrase) is spoken in this sentence.
  • Curly brackets enclose argument strings that will be sent to the C function. When the user says “rotate the green cup” the outcome is the C function call:
  • the actual GSL implementation is also more complicated than illustrated here.
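The exclusive-OR notation itself can be sketched in miniature. This is an assumed Python stand-in, far simpler than the actual GSL compiler described below; it only expands nested "(a|b)" alternatives into the sentences they generate.

```python
# Toy expander for GSL-style exclusive-OR forms: parentheses enclose
# alternatives separated by vertical bars, and these expressions can nest.
def expand(spec):
    """Return all sentences generated by a spec like '(rotate|spin) the cup'."""
    i = spec.find("(")
    if i == -1:
        return [" ".join(spec.split())]  # no alternatives left; normalize spaces
    # find the matching close parenthesis
    depth, j = 0, i
    while True:
        if spec[j] == "(":
            depth += 1
        elif spec[j] == ")":
            depth -= 1
            if depth == 0:
                break
        j += 1
    head, group, tail = spec[:i], spec[i + 1:j], spec[j + 1:]
    # split the group on top-level vertical bars only
    alts, depth, start = [], 0, 0
    for k, ch in enumerate(group):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "|" and depth == 0:
            alts.append(group[start:k])
            start = k + 1
    alts.append(group[start:])
    # substitute each alternative and recurse on the remainder
    return [s for alt in alts for s in expand(head + alt + tail)]

print(expand("(rotate|spin) the (green|red) cup"))
# → ['rotate the green cup', 'rotate the red cup', 'spin the green cup', 'spin the red cup']
```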
  • the compiler performs macro expansion, takes cyclic and recursive expressions, performs recursion transformations, performs four stages of optimization, and generates syntactic and semantic parsers.
  • the semantic function interface follows the Unix protocol, using the well-known func(argc, argv) format.
  • the semantic parser can be separated from the syntactic parser and used as a natural language keyboard interface.
  • semantic specification expressions can be written by attaching C functions to verbs while collecting adjectives and nouns into arguments.
  • this process can be simplified further for the application developer by providing a natural language lexicon containing word classifications.
  • This lexicon can either reside in the client (e.g., in a browser) or in a web server.
  • a server-side lexicon would generally be needed.
  • Each HTML page may use a different lexicon and it is desirable to share lexicons across many servers, so a lexicon may reside on a server different from the semantics-processing server.
  • the lexicon information could be sent to the server using the POST mechanism of the HyperText Transfer Protocol (HTTP).
  • Lexicon driven semantics generally require a higher level representation of language structure.
  • Phrase structure grammar variables are used to define the sentence structure, which can be broken down into more detailed descriptions, eventually leading to word categories. Word categories are typically parts of speech such as noun, adjective and verb designators. Parsing of a sentence is performed bottom up until a complete phrase structure is recognized. The semantics are then extracted from the resultant parse tree. Verb phrases are mapped into semantic actions while noun phrases are mapped into function arguments.
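The verb-to-action and noun/adjective-to-argument mapping described above can be sketched as follows. The lexicon entries and the Rotate/Move action names are illustrative assumptions, in the spirit of the "rotate the green cup" example; a real implementation would extract semantics from a full parse tree rather than a flat word scan.

```python
# Toy lexicon-driven semantic extraction: verbs map to semantic actions,
# adjectives and nouns are collected into arguments, function words are
# skipped. Lexicon contents and action names are illustrative assumptions.
LEXICON = {
    "rotate": "verb", "move": "verb",
    "green": "adjective", "red": "adjective",
    "cup": "noun", "plate": "noun",
    "the": "function_word", "a": "function_word",
}

ACTIONS = {"rotate": "Rotate", "move": "Move"}  # verb -> client-library function

def extract_semantics(sentence):
    """Return (function_name, argv) in the func(argc, argv) style."""
    func, argv = None, []
    for word in sentence.lower().split():
        category = LEXICON.get(word)
        if category == "verb":
            func = ACTIONS[word]
        elif category in ("adjective", "noun"):
            argv.append(word)
    return func, argv

print(extract_semantics("rotate the green cup"))
# → ('Rotate', ['green', 'cup'])
```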
  • Converting syntax to semantics at the client has a number of advantages, including: less computational burden on the web server; distribution of computation to clients; no need for specialized knowledge of natural language at the server; a simplified interface; unified control at both the client and server; and fast response to local commands.
  • FIG. 1 shows a processing system 100 which implements a web-based voice dialog interface in accordance with the illustrative embodiment of the invention.
  • the portions of the system 100 other than web server 128 are assumed for this example to be implemented on the client-side, e.g., in a browser associated with a client computer or other type of client processing device.
  • a client in accordance with the invention may be any type of computer, computer system, processing device or other type of device, e.g., a telephone, a television set-top box, a computer equipped with telephony features, etc., capable of receiving and/or transmitting audio information.
  • the client-side portions of the system 100 are assumed to be coupled to the web server 128 via a conventional network connection, e.g., a connection established over a network in a conventional manner using the Transmission Control Protocol/Internet Protocol (TCP/IP) standard or other suitable communication protocol(s).
  • the system 100 receives HTML information from the Internet or other computer network in an HTML interpreter 102 which processes the HTML information to generate a rendering 104 , i.e., an audibly-perceptible output of the corresponding HTML information for delivery to a user.
  • the rendering 104 may include both visual and audio output.
  • the HTML information is also delivered to a grammar compiler 106 which processes the information to generate a syntax 110 and a set of lexical semantics 112 .
  • the grammar compiler 106 may be of the type described in M. K. Brown and J. G. Wilpon, “A Grammar Compiler for Connected Speech Recognition,” IEEE Trans. ASSP, Vol. 39, No. 1, pp. 17-28, January 1991, which is incorporated by reference herein.
  • the HTML interpreter 102 also generates a client library 114 .
  • the grammar compiler 106 may incorporate or otherwise utilize a grammar generation process, such as that described in greater detail in the above-cited U.S. patent application Ser. No. 09/168,405, filed Oct. 6, 1998 in the name of inventors M. K. Brown et al. and entitled “Web-Based Platform for Interactive Voice Response.”
  • a grammar generation process can receive as input parsed HTML, and generate GSL therefrom.
  • the grammar compiler 106 may be configured to take this GSL as input and create an optimized finite-state network for a speech recognizer. More particularly, the GSL may be used, e.g., to program the grammar compiler 106 with an expanded set of phrases so as to allow a user to speak partial phrases taken from a hyperlink title.
  • a stored thesaurus can be used to replace words with synonyms so as to further expand the allowed language.
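A minimal sketch of the thesaurus expansion: every phrase the grammar already allows is multiplied by the synonym sets of its words. The thesaurus entries are illustrative assumptions.

```python
from itertools import product

# Illustrative thesaurus; a word not listed stands only for itself.
THESAURUS = {"rotate": ["rotate", "turn", "spin"], "cup": ["cup", "mug"]}

def expand_with_synonyms(phrase):
    """Return every variant of the phrase with words replaced by synonyms."""
    choices = [THESAURUS.get(w, [w]) for w in phrase.split()]
    return {" ".join(words) for words in product(*choices)}

print(sorted(expand_with_synonyms("rotate the cup")))
```

The expanded set would then be fed to the grammar compiler like any other phrase list.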
  • the grammar compiler 106 is an example of a “grammar processing device” suitable for use with the present invention. Such a device in other embodiments may incorporate a grammar generator, or may be configured to receive input from a grammar generator.
  • speech received from a user is processed in an automatic speech recognizer (ASR) 120 utilizing the syntax 110 generated by the grammar compiler 106 .
  • the output of the ASR is applied to a natural language interpreter 122 which utilizes the lexical semantics 112 generated by the grammar compiler 106 .
  • the output of the natural language interpreter 122 is supplied to client executive 124 and CGI formatter 126 , both of which communicate with a web server 128 .
  • the client executive 124 processes the interpreted speech from the interpreter 122 in accordance with information in the client library 114 .
  • the client executive 124 can be one of a variety of interpreters, such as Java, Javascript or VisualBasic interpreters.
  • the CGI formatter 126 can also be written in one of these languages and executed from the client executive 124 , but may be more efficiently implemented as part of a client browser.
  • the ASR 120 and natural language interpreter 122 may be different elements of a single speech recognition device.
  • the system 100 can of course be utilized in conjunction with multiple servers in numerous different arrangements.
  • the incoming HTML information in the system 100 of FIG. 1 is thus processed for multiple simultaneous purposes, i.e., to generate the rendering 104 , to extract a natural language model containing both syntactic and semantic information in the form of respective syntax 110 and lexical semantics 112 , and to generate a script language definition of semantic actions via the client library 114 .
  • extracting semantics on the client side in the manner illustrated in FIG. 1 allows the system 100 to reduce client-server traffic and perform immediate execution of client-side operations.
  • a general URL format suitable for use in calling a CGI in the illustrative embodiment includes five components: protocol, host, path, PATH_INFO, and QUERY_STRING, in the following syntax:
  • protocol can generally be one of a number of known protocols, e.g., http, ftp, wais, etc., but for use with a CGI the protocol is generally http; host is usually a fully qualified domain name but may be relative to the local domain; path is a slash-separated list of directories ending with a recognized file; PATH_INFO is additional slash-separated information that may contain a root directory for CGI processing; and QUERY_STRING is an ampersand-separated list of name-value pairs for use by a CGI program. The last two items become available to the CGI program as environment variables at the web server 128 . Processing of the URL by the client and web server is as follows:
  • client connects to host (or sends complete URL to proxy and proxy connects to host) web server;
  • server parses path searching from the public filesystem root until it recognizes a path element
  • server sets QUERY_STRING with remaining URL string.
  • the URL may not contain white-space characters but QUERY_STRING blanks can be represented with “+” characters.
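The five-component split and QUERY_STRING handling described above can be sketched as follows. The URL syntax assumed here is the standard CGI form protocol://host/script-path/PATH_INFO?QUERY_STRING (the patent's own syntax line is not reproduced in this extract), and a small known-script set stands in for the server's filesystem search.

```python
KNOWN_SCRIPTS = {"/cgi-bin/query"}  # illustrative stand-in for the server's filesystem

def split_cgi_url(url):
    """Split a CGI URL into (protocol, host, path, PATH_INFO, QUERY_STRING)."""
    protocol, rest = url.split("://", 1)
    host, _, full_path = rest.partition("/")
    full_path = "/" + full_path
    path_part, _, query_string = full_path.partition("?")
    # walk path elements until one names a recognized CGI program
    elements = path_part.split("/")
    for i in range(2, len(elements) + 1):
        candidate = "/".join(elements[:i])
        if candidate in KNOWN_SCRIPTS:
            path = candidate
            path_info = path_part[len(candidate):]
            break
    else:
        path, path_info = path_part, ""
    # a "+" represents a blank inside QUERY_STRING
    query_string = query_string.replace("+", " ")
    return protocol, host, path, path_info, query_string

print(split_cgi_url("http://www.example.com/cgi-bin/query/extra/info?cmd=rotate+the+cup&obj=1"))
# → ('http', 'www.example.com', '/cgi-bin/query', '/extra/info', 'cmd=rotate the cup&obj=1')
```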
  • the underlying platform has been extracted from the grammar specification tag.
  • the presence of semantics in the GSL string indicates that the QUERY_STRING should contain a preprocessed semantic expression rather than the unprocessed SR output string.
  • URLINSERT will result in analysis of the SR output text yielding the URL:
  • the function name does not need to appear first within the execution scope, although it may be easier to read this style.
  • the Rotate operation is performed by calling the Rotate function defined in the client library 114 of FIG. 1 .
  • the Rotate function can be defined in Java, for example, and called upon receiving the appropriate speech command.
  • a dialog generally refers to a multi-sided sequence of expressions. Handling dialog in a voice dialog interface generally requires an ability to sequence through what is commonly called a dialog turn.
  • a dialog turn may be defined as two or more “plys” in a dialog tree or other type of dialog graph necessary to complete an exchange of information.
  • a dialog graph refers generally to a finite-state representation of a complete set of dialog exchanges between two or more agents, and generally contains states and edges as does any mathematical graph.
  • the dialog graph may be virtual in the sense that the underlying implementation is rule-based, since rule-based systems maintain “state” but may not be finite in scope.
  • a “ply” is a discourse by one agent. When discussing dialogs of more than two agents, the conventional terminology “dialog turn” may be inadequate, and other definitions may be used.
  • web-based dialogs may model a given computer or other processing device as a single agent that may be multi-faceted, even though the actual system may include multiple servers.
  • the primary, multi-faceted agent may then serve as a portal to the underlying agents.
  • control of dialog for the single agent can be handled by representing a single two-ply dialog turn in a single HTML page.
  • a sequence of such pages forms a finite-state dialog controller.
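The page-per-turn controller can be sketched as a small state machine, with one page per dialog turn holding a prompt and the spoken commands that lead to the next page. The page names, prompts and commands below are illustrative assumptions, loosely following the Welcome/Help pages of FIG. 2.

```python
# Toy finite-state dialog controller: each state is one "page" with a
# synthesized prompt and hyperlinks keyed by spoken command.
PAGES = {
    "welcome": {"prompt": "Welcome.", "links": {"start": "task", "help": "help"}},
    "help":    {"prompt": "How to use this service.", "links": {"start": "task"}},
    "task":    {"prompt": "What do you want to do?", "links": {"done": "welcome"}},
}

def run_dialog(start, spoken_commands):
    """Step through pages, one dialog turn per page; unrecognized input stays put."""
    state, transcript = start, []
    for command in spoken_commands:
        transcript.append(PAGES[state]["prompt"])   # synthesize this page's prompt
        state = PAGES[state]["links"].get(command, state)
    transcript.append(PAGES[state]["prompt"])
    return state, transcript

state, transcript = run_dialog("welcome", ["help", "start", "done"])
print(state)  # → welcome
```

In the patent's arrangement each such page would be computed server-side by the CGI, so the "state machine" is simply the sequence of generated pages.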
  • FIG. 2 illustrates a finite state dialog controller 200 of this type.
  • the dialog controller 200 uses the HTML extensions described previously. Controlled speech synthesis output of a given web page is presented to a user, and the current context of command grammar is defined and utilized, in a manner similar to that previously described in conjunction with FIG. 1 .
  • the finite state dialog controller 200 of FIG. 2 operates on a set of web pages which include in this example web pages 202 , 204 , 206 and 208 .
  • Web page 202 is an HTML page which represents a “Welcome” page, and includes “Start” and “Help” hyperlinks.
  • the “Help” hyperlink leads to web page 204 , which includes a “How to” section and a “Start” hyperlink.
  • the “Start” hyperlinks on pages 202 and 204 both lead to page 206 , which includes computed HTML corresponding to an output of the form “I want to do ⁇ 1 . . . ⁇ to ⁇ 2 . . . ⁇ .”
  • the web page 208 represents the next dialog turn.
  • the HTML for a given dialog turn is constructed using a CGI 210 which may be configured to include application-specific knowledge.
  • the CGI 210 interacts with a database interface (DBI) 212 and a database driver (DBD) 214 .
  • the DBI 212 is coupled via the DBD 214 to a commercial database management system (DBMS) 216 .
  • Suitable DBIs and DBDs are freely available on the Internet for most of the popular commercial DBMS products.
  • the CGI 210 further interacts with an application program interface (API) 218 to an underlying set of one or more application(s) 220 .
  • Conditions are system states that prompt the interface system or the application to take the initiative. Such a mechanism was used in the SAM system described in the above-cited M. K. Brown et al. reference. Additional details regarding conditions in the context of dialog can be found in, e.g., J. Chu-Carroll and M. K. Brown, “An evidential model for tracking initiative in collaborative dialogue interactions,” User Modeling and User-Adapted Interaction Journal, Special Issue on Computational Models for Mixed Initiative Interaction, 1998; J. Chu-Carroll and M. K. Brown, “Initiative in collaborative interactions—Its cues and effects,” In Working Notes of the AAAI-97 Spring Symposium on Computational Models for Mixed Initiative Interaction, pages 16-22, 1997; and J. Chu-Carroll and M. K. Brown, “Tracking initiative in collaborative dialogue interactions,” In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL-97), pages 262-270, 1997, all of which are incorporated by reference herein.
  • Dialog system conditions may be used to trigger a dialog manager to take charge for a particular period, with the dialog manager subsequently relinquishing control as the system returns to normal operation.
  • condition types include the following: error conditions, task constraints, missing information, new language, ambiguity, user confusion, more assistance available, hazard warning, command confirmation, and hidden event explanation.
  • Error conditions generally fall into three classes: application errors, interface errors, and user errors.
  • Application errors occur when the application is given information or commands that are invalid in the current application state. For example, database information may be inconsistent with new data, etc. This kind of error needs to be handled by an application having knowledge of the associated processing, but may also require additional HTML content to provide user feedback. For example, the user may be taken to a help system.
  • Interface errors in this context are speech recognition errors, which in many cases are easy for the user to correct simply by issuing a designated command such as “go back.” In some cases, processing may not easily be reversed, so an additional confirmation step is advisable when speech recognition errors could be costly. Keeping the grammar context limited, whenever possible, decreases the likelihood of recognition errors, but can also create a variety of other problems when the user is mistaken about how the application functions.
  • a user command may be syntactically and semantically correct but not possible because the application is unable to comply. Handling task constraints requires a tighter coupling between the application and the interface. In most cases, the application will need to signal the interface of its inability to process a command and perhaps suggest ways that the desired goal can be achieved. This signal may be at a low application level having no knowledge of natural language. The interface then must expand this low level signal into a complete natural language expression, perhaps initiating a side dialog to deal with the problem.
  • the user will provide only some of the information necessary to complete a task. For example, the user might tell a travel information agent that they “want to go to Boston.” While the system might already know that the user is in, e.g., New York City, it is still necessary to know the travel date(s), time of day, and possible ground transportation desired. In this case, offering more assistance may be desirable, or simply asking for the needed information may suffice.
  • commands can be ambiguous.
  • the system can handle this by listing a number of possible explicit interpretations using, e.g., different words to express the same meaning or a more elaborate full description of the possible interpretations. The user can then choose an interpretation or rephrase the command and try again.
  • User confusion may be detected by measuring user performance parameters such as long response times, frequent use of incomplete or ambiguous commands, lack of progress toward a goal, etc. User confusion is therefore not detected quickly by the system, but is a condition that results from an averaging of user performance. As the resulting user confusion index slowly increases, the system should offer increasing levels of assistance, increasing the verbosity of conversation. An expert user will thus be able to quickly achieve goals with low confusion scores.
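One simple way to realize such an averaged confusion index is an exponential moving average over per-turn performance signals. The sketch below is an assumption about how this could be implemented; the thresholds, the smoothing factor, and the verbosity levels are all hypothetical.

```python
def update_confusion(index, signal, alpha=0.1):
    # Exponential moving average: a single bad turn moves the index only
    # slightly, so confusion emerges from averaged performance, not one event.
    return (1 - alpha) * index + alpha * signal

def verbosity_level(index):
    # Map the slowly varying index to increasing levels of assistance.
    if index < 0.3:
        return "terse"
    if index < 0.6:
        return "normal"
    return "verbose"

index = 0.0
# Simulated session: 1.0 marks a turn with a long response time or an
# ambiguous command, 0.0 a clean turn.
for signal in [1.0] * 10:
    index = update_confusion(index, signal)
```

With this smoothing, an expert user who recovers after one bad turn never leaves the "terse" level, matching the behavior described above.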
  • Hazard warnings and command confirmation work together to protect the user and system from performing dangerous, possibly irreversible actions. Examples include changing database entries that remove previous data, purchasing non-refundable airline tickets, etc. In many cases, these actions may not be visible or obvious to the user, or it may be desirable to explain to the user not only what the system is doing on behalf of the user, but also how the system is doing it.
  • Explicit requests for help can be handled either by a built-in help system that can offer general help about how to use the voice interface commands, or by navigating to a help site populated with HTML pages containing a help system dialog and/or CGI programs to implement a more sophisticated help interface.
  • CGIs have the additional advantage that the calling page can send its URL in the QUERY_STRING, thereby enabling the help dialog system to return automatically to the same place in the application dialog after the help system has completed its work.
  • the QUERY_STRING information can also be used by the help system to offer context-sensitive help accessed from a global help system database. The user can also return to the application either by using a “go back” command or using a “go home” command to start over.
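The round trip through the help system can be sketched as follows. This is a minimal Python illustration under assumed conventions: the parameter name "return", the CGI paths, and the fallback home page are all hypothetical.

```python
from urllib.parse import parse_qs, quote

def help_link(calling_url):
    # The calling page embeds its own URL in the QUERY_STRING of the help CGI.
    return "/cgi-bin/help?return=" + quote(calling_url, safe="")

def return_url(query_string, home="/index.html"):
    # Inside the help CGI: recover the caller's URL so the help dialog can
    # send the user back automatically; fall back to "go home" otherwise.
    values = parse_qs(query_string).get("return")
    return values[0] if values else home

link = help_link("/cgi-bin/travel?turn=3")
back = return_url(link.split("?", 1)[1])
```

The same "return" value could also serve as the key into a global help database for context-sensitive help.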
  • the system can take the initiative when the user fails to respond or fails to speak a recognizable command within specified time periods.
  • Each type of timeout can take the user to a specific part of a help system that explains why the system took charge and what the user can do next.
  • the present invention also provides dialog application development tools, which help an application developer quickly build new web-based dialog applications. These tools may be implemented at least in part as extensions of conventional HTML authoring tools, such as Netscape Composer or Microsoft Word.
  • a dialog application development tool in accordance with the invention may, e.g., use the word classification lexicon described earlier so as to allow default function assignments to be made automatically while a grammar is being specified. The application developer can then override these defaults with explicit choices. Simultaneously, the tool can automatically write code for parsing the QUERY_INFO strings containing the encoded semantic expressions. This parsing code may then be combined with a semantic transformation processor provided to the developer as part of a web-based dialog system development kit (SDK).
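The parsing code that such a tool writes automatically might look like the following Python sketch. The encoding shown (a function name plus positional arguments in the QUERY_INFO string) is an assumption for illustration; the patent does not specify the exact wire format.

```python
from urllib.parse import parse_qs

def parse_query_info(query_info):
    """Decode a hypothetical encoded semantic expression of the form
    "func=Rotate&arg0=green+cup" into a function name and argument list."""
    params = parse_qs(query_info)
    func = params["func"][0]
    args = []
    i = 0
    while f"arg{i}" in params:
        args.append(params[f"arg{i}"][0])
        i += 1
    return func, args

func, args = parse_query_info("func=Rotate&arg0=green+cup")
```

The resulting (function, arguments) pair is what a semantic transformation processor would turn into actual application calls through the API.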
  • FIG. 3 illustrates the operation of a dialog application development tool 300 in accordance with the invention.
  • the application development tool 300 includes an authoring tool 302 which utilizes GSL to generate an HTML output 304 , and parses included or called code to generate CGI output 306 .
  • the HTML output 304 is delivered via Internet or other web service to a client 310 , e.g., to a browser program running on a client computer.
  • the CGI output 306 is delivered to a web server 128 which also has associated therewith an API 312 and a semantic transformation processor 316 .
  • the web server 128 communicates with the client 310 over a suitable network connection.
  • the semantic transformation processor 316 runs on the web server 128 , e.g., as a module of the web server CGI program, and it transforms the parsed semantic expressions from the authoring tool 302 into calls to application functions that perform semantic actions through the API 312 .
  • the API 312 may be written using any of a variety of well-known languages. Language interface definitions to be included in the CGI code can be provided as part of the dialog application development tool for the most popular languages, e.g., C, C++, Java, Javascript, VisualBasic, Perl, etc.
  • Simple language model expansion relaxes the constraints on the user slightly, allowing the user to speak a variety of phrases containing key words from the original title. Further language model expansion can be obtained, e.g., by using a thesaurus to substitute other words having similar meaning for words that appeared in the original title.
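Thesaurus-driven expansion can be sketched in a few lines of Python. The toy thesaurus below is hypothetical; a real system would draw synonym sets from a full thesaurus database.

```python
# Toy thesaurus; entries here are invented for illustration only.
THESAURUS = {"purchase": ["buy", "order"], "ticket": ["fare"]}

def expand_title(title):
    """Expand a hyperlink title into the set of phrases obtained by
    substituting a synonym for one word at a time."""
    phrases = {title}
    words = title.split()
    for i, word in enumerate(words):
        for synonym in THESAURUS.get(word, []):
            phrases.add(" ".join(words[:i] + [synonym] + words[i + 1:]))
    return phrases

phrases = expand_title("purchase a ticket")
```

Each generated phrase would be added to the recognition grammar alongside the original title.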
  • a hyperlink title can be parsed into its phrase structure representation, and then transformed into another phrase structure of the same type, e.g., interrogatory, assertion or imperative, from which more phrase expressions can be derived.
  • the application developer can then write simple hyperlink title statements representing the basic meaning assigned to that link, using either a natural language expression (e.g., English sentences as used in the above example) or a higher level description using phrase structure grammar tags.
  • When using natural language, the system generally must first convert the natural language into phrase structure form to perform structure transformations.
  • When using phrase structure format, the application developer generally must use an intermediate level of expression that specifies word classes or categories, so that the system will know how to expand the phrase structure tokens into natural language words.
  • This capability can be built into a dialog application development tool, providing the application developer with a wide variety of choices in developing new speech controlled web content.
  • this additional capability makes the development of speech-activated web sites with rich dialog control easy to implement for application developers who are not experts in speech processing.

Abstract

A web-based voice dialog interface for use in communicating dialog information between a user at a client machine and one or more servers coupled to the client machine via the Internet or other computer network. The interface in an illustrative embodiment includes a web page interpreter for receiving information relating to one or more web pages. The web page interpreter generates a rendering of at least a portion of the information for presentation to a user in an audibly-perceptible format. A grammar processing device utilizes interpreted web page information received from the web page interpreter to generate syntax information and semantic information. A speech recognizer processes received user speech in accordance with the syntax information, and a natural language interpreter processes the resulting recognized speech in accordance with the semantic information to generate output for delivery to a web server in conjunction with a voice dialog which includes the user speech and the rendering of the web page(s). The output may be processed by a common gateway interface (CGI) formatter prior to delivery to a CGI associated with the web server.

Description

PRIORITY CLAIM
The present application claims the priority of U.S. Provisional Application No. 60/135,130 filed May 20, 1999 and entitled “Web-Based Voice Dialog Interface.”
FIELD OF THE INVENTION
The present invention relates generally to the Internet and other computer networks, and more particularly to techniques for communicating information over such networks via an audio interface.
BACKGROUND OF THE INVENTION
The continued growth of the Internet has made it a primary source of information on a wide variety of topics. Access to the Internet and other types of computer networks is typically accomplished via a computer equipped with a browser program. The browser program provides a graphical user interface which allows a user to request information from servers accessible over the network, and to view and otherwise process the information so obtained. Techniques for extending Internet access to users equipped with a telephone or other type of audio interface device have been developed, and are described in, for example, D. L. Atkins et al., “Integrated Web and Telephone: A Language Interface to Networked Voice Response Units,” Workshop on Internet Programming Languages, ICCL '98, Loyola University, Chicago, Ill., May 1998, which is incorporated by reference herein.
Current approaches to web-based voice dialog generally fall into two categories. The first category includes those approaches that use HyperText Markup Language (HTML) and extensions such as Cascading Style Sheets (CSS) to redefine the meaning of HTML tags.
The second of the two categories noted above includes those approaches that utilize a new language specialized for voice interfaces, such as Voice eXtensible Markup Language (VoiceXML) from the VoiceXML Forum (which includes Lucent, AT&T and Motorola), Speech Markup Language (SpeechML) from IBM, or Talk Markup Language (TalkML) from Hewlett-Packard. These languages may be viewed as presentation mechanisms that address primarily the syntactic issues of the voice interface. The semantics of voice applications on the web are generally handled using custom solutions involving either client-side programming such as Java and Javascript or server-side methods such as Server-Side Include (SSI) and Common Gateway Interface (CGI) programming. In order to create a rich dialog interface to a computer application using these language-based approaches, an application developer generally must write explicit specifications of the sentences to be understood by the system, such that the actual spoken input can be transformed into the equivalent of a mouse-click or keyboard entry to a web form.
Examples of web-based voice dialog systems are described in U.S. patent application Ser. No. 09/168,405, filed Oct. 6, 1998 in the name of inventors M. K. Brown et al. and entitled “Web-Based Platform for Interactive Voice Response,” which is incorporated by reference herein. More specifically, this application discloses an Interactive Voice Response (IVR) platform which includes a speech synthesizer, a grammar generator and a speech recognizer. The speech synthesizer generates speech which characterizes the structure and content of a web page retrieved over the network. The speech is delivered to a user via a telephone or other type of audio interface device. The grammar generator utilizes textual information parsed from the retrieved web page to produce a grammar. The grammar is then supplied to the speech recognizer and used to interpret voice commands generated by the user. The grammar may also be utilized by the speech synthesizer to create phonetic information, such that similar phonemes are used in both the speech recognizer and the speech synthesizer.
The speech synthesizer, grammar generator and speech recognizer, as well as other elements of the IVR platform, may be used to implement a dialog system in which a dialog is conducted with the user in order to control the output of the web page information to the user. A given retrieved web page may include, for example, text to be read to the user by the speech synthesizer, a program script for executing operations on a host processor, and a hyperlink for each of a set of designated spoken responses which may be received from the user. The web page may also include one or more hyperlinks that are to be utilized when the speech recognizer rejects a given spoken user input as unrecognizable.
Despite the advantages provided by the existing approaches described above, a need remains for further improvements in web-based voice dialog interfaces. More specifically, a need exists for a technique which can provide many of the advantages of both categories of approaches, while avoiding the application development difficulties often associated with the specialized language based approaches.
SUMMARY OF THE INVENTION
The present invention provides an improved voice dialog interface for use in web-based applications implemented over the Internet or other computer network.
In accordance with the invention, a web-based voice dialog interface is configured to communicate information between a user at a client machine and one or more servers coupled to the client machine via the Internet or other computer network. The interface in an illustrative embodiment includes a web page interpreter for receiving information relating to one or more web pages. The web page interpreter generates a rendering of at least a portion of the information for presentation to a user in an audibly-perceptible format. The web page interpreter may make use of certain pre-specified voice-related tags, e.g., HTML extensions. A grammar processing device utilizes interpreted web page information received from the web page interpreter to generate syntax information and semantic information. A speech recognizer processes received user speech in accordance with the syntax information, and a natural language interpreter processes the resulting recognized speech in accordance with the semantic information to generate output for delivery to a web server in conjunction with a voice dialog which includes the user speech and the rendering of the web page(s). The output may be processed by a common gateway interface (CGI) formatter prior to delivery to a CGI associated with the web server.
The grammar processing device may include a grammar compiler, and may implement a grammar generation process to generate a grammar specification language which is supplied as input to a grammar compiler. The grammar generation process may utilize a thesaurus to expand the grammar specification language.
In accordance with another aspect of the invention, the web page interpreter may further generate a client library associated with interpretations of web pages previously performed on a common client machine. The client library will generally include a script language definition of semantic actions, and may be utilized by a web server in generating an appropriate response to a user speech portion of a dialog.
In accordance with a further aspect of the invention, dialog control may be handled by representing a given dialog turn in a single web page. In this case, a finite-state dialog controller may be implemented as a sequence of web pages each representing a dialog turn.
In accordance with yet another aspect of the invention, the processing operations of the web-based voice dialog interface are associated with an application developed using a dialog application development tool. The dialog application development tool may include an authoring tool which (i) utilizes a grammar specification language to generate output in a web page format for delivery to one or more clients, and (ii) parses code to generate a CGI output for delivery to the web server.
Advantageously, the techniques of the invention allow a voice dialog processing system to reduce client-server traffic and perform immediate execution of client-side operations. Other advantages include less computational burden on the web server, the elimination of any need for specialized natural language knowledge at the web server, a simplified interface, and unified control at both the client and the server.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of an illustrative web-based processing system which includes a voice dialog interface in accordance with the invention.
FIG. 2 illustrates a finite-state dialog process involving a set of web pages and implemented using the web-based processing system of FIG. 1.
FIG. 3 illustrates the operation of a web-based dialog application development tool in accordance with the invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention will be illustrated below in conjunction with an exemplary web-based processing system. It should be understood, however, that the invention is not limited to use with any particular type of system, network, network communication protocol or configuration. The term “web page” as used herein is intended to include a single web page, a set of web pages, a web site, and any other type or arrangement of information accessible over the World Wide Web, over other portions of the Internet, or over other types of communication networks. The term “processing system” as used herein is intended to include any type of computer-based system or other type of system which includes hardware and/or software elements configured to provide one or more of the voice dialog functions described herein.
The present invention in an illustrative embodiment automates the application development process in a web-based voice dialog interface. The interface in the context of the illustrative embodiment will be described herein using a number of extensions to conventional HyperText Markup Language (HTML). It should be noted that, although the illustrative embodiment utilizes HTML, the invention can be implemented in conjunction with other languages, e.g., Phone Markup Language (PML), Voice eXtensible Markup Language (VoiceXML), Speech Markup Language (SpeechML), Talk Markup Language (TalkML), etc.
HTML Extensions
The above-noted HTML extensions may be embedded in the scope of an HTML anchor as follows:
<A HREF="URL" special_tags>title</A>
where URL represents the Uniform Resource Locator and title is the string of mouse-sensitive words of the hyperlink. The special_tags are generally ignored by conventional visual web browsers that are not designed to recognize them, but have special meaning to voice browsers, such as the PhoneBrowser built on the Lucent Speech Processing System (LSPS) platform developed by Lucent Technologies Inc. of Murray Hill, N.J. Examples of the special tags include the following:
SILENT: Inhibits Text-to-Speech (TTS) processing of the title of this link, making it silent.
VOICE="parameters": Sets parameters for voice synthesis.
IGNORETITLE: Inhibits Automatic Speech Recognition (ASR) processing of the title of this link; usually used with Grammar Specification Language (GSL).
NOPERMUTE: Inhibits combinatoric processing of the title of this link for ASR; forces the user to speak the entire title.
LSPSGSL="string": Defines a GSL grammar to be used by ASR for this link. This must use the LSPS syntax, and is platform-dependent.
LSPSGSLHREF="URL": Defines a GSL grammar, as above, obtained from a URL.
DISOVERRIDE: Causes the link title to take precedence over normal anchor titles during disambiguation, including built-in PhoneBrowser commands. If several items specify DISOVERRIDE, then disambiguation will take place among them.
PRIORITY=#: Sets the command priority level; higher numbers take precedence.
URLINSERT: Causes the ASR or DTMF response string triggering this anchor to be inserted in the URL in place of a "%s". Typically used in a QUERY_INFO string.
BARGEIN={"ON"|"OFF"}: Turns barge-in on or off (default is on).
INITIALTIMEOUT=seconds: Specifies how many seconds can elapse from the time the recognizer is started to the time the user starts speaking. If no speech starts by this time, the URL (required) is taken.
GAPTIMEOUT=seconds: Specifies how many seconds can elapse from the time the user stops speaking to the time that recognition takes place. If nothing is recognized during this time, it is presumed that the utterance was not recognized, and the URL (required) is taken. A default value of two seconds is normally supplied, and this should be specified only in special circumstances.
MAXTIMEOUT=seconds: Specifies how many seconds can elapse from the time the recognizer is started to the time that recognition takes place. If no speech starts by this time, or nothing has been recognized, the URL (required) is taken.
Three of the above-listed tags form the basis for defining a language interface that is richer than simple hyperlink titles. For the LSPS platform, which will be used in the illustrative embodiment, these are LSPSGSL, LSPSGSLHREF, and URLINSERT. The first two allow the specification of a rich speech recognition (SR) grammar and vocabulary. In a more general purpose implementation, these might be replaced with other tags, such as GRAMMAR and GRAMHREF, respectively, as described in the above-cited U.S. patent application Ser. No. 09/168,405. The third tag, URLINSERT, allows arbitrary SR output to be communicated to a web server through a Common Gateway Interface (CGI) program. As will be described in greater detail below, these extensions provide the basis for a more powerful set of web-based speech application tools.
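The effect of the URLINSERT tag can be illustrated with a short Python sketch. The URL template and parameter name below are hypothetical; only the "%s" substitution convention comes from the tag description above.

```python
from urllib.parse import quote_plus

def apply_urlinsert(url_template, response_string):
    """Splice the ASR (or DTMF) response string into the anchor's URL
    in place of the literal "%s", URL-encoding it on the way."""
    return url_template.replace("%s", quote_plus(response_string))

url = apply_urlinsert("/cgi-bin/search?QUERY_INFO=%s", "flights to Boston")
```

The resulting URL carries the raw recognizer output to the server-side CGI program.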
The above-listed IGNORETITLE and NOPERMUTE tags will now be described in greater detail. The current implementation of PhoneBrowser normally processes hyperlink titles to automatically generate navigation command grammars. The processing involves computing all possible combinations of meaningful words of a title (i.e., simple function words like “the,” “and,” etc. are not used in isolation), thereby allowing word deletions so that the user may speak some, and not all, of the words in a title phrase. This simple language model expansion mechanism gives the user some flexibility to speak a variety of commands to obtain the same results. The IGNORETITLE tag causes the system to inhibit all processing of the hyperlink title. This is usually only useful when combined with one of the grammar definition tags, but may also be used for certain timeout effects. The NOPERMUTE tag inhibits processing of the title word combinatorics, making only the full explicit title phrase available in the speech grammar.
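The title-combinatorics processing described above can be sketched as follows. This is a simplified Python illustration: the function-word list is a hypothetical stand-in for the browser's actual lexicon, and word order is preserved while deletions are allowed.

```python
from itertools import combinations

# Hypothetical list of simple function words that are not used in isolation.
FUNCTION_WORDS = {"the", "a", "an", "and", "of", "to"}

def title_grammar(title):
    """All ordered word subsets of the title that the user may speak,
    excluding subsets consisting only of function words."""
    words = title.split()
    phrases = set()
    for n in range(1, len(words) + 1):
        for combo in combinations(range(len(words)), n):
            subset = [words[i] for i in combo]
            if all(w.lower() in FUNCTION_WORDS for w in subset):
                continue  # e.g. "the" alone is never a command
            phrases.add(" ".join(subset))
    return phrases

phrases = title_grammar("the weather report")
```

Specifying NOPERMUTE would amount to replacing this set with the single full title phrase.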
It should be understood that the above-described tags are shown by way of illustrative example only, and should not be construed as limiting the invention in any way. Other embodiments of the invention may utilize other types of tags.
Unified Syntactic/Semantic Specifications
Conventional methods for creating web-based speech applications generally involve the design of speech grammars for SR and the design of a natural language command interpreter to process the SR output. Grammars are usually defined in finite-state form but are sometimes expressed as context-free grammars (CFGs). Natural language interpreters generally include a natural language parser and an execution module to perform the actions specified in the natural language input. This combination provides the basic mechanism for processing a discourse of spoken utterances. Discourse, in this case, is defined as a one-sided sequence of expressions, e.g., one agent speaking one or more sentences.
Many existing SR products use a grammar definition language called Grammar Specification Language (GSL). GSL in its original versions was generally limited to syntactic definition. Later versions of GSL incorporate semantic definitions into the syntactic specification. The resulting grammar compiler automatically creates the command interpreter as well as the finite-state or CFG representation of the language syntax.
In accordance with the present invention, the process of developing web-based speech applications can be automated by using an extension of these principles for HTML-based speech applications.
Original semantic GSL expressions take the following example form, from a robot control grammar described in M. K. Brown, B. M. Buntschuh and J. G. Wilpon, “SAM: A Perceptive Spoken Language Understanding Robot,” IEEE Trans. SMC, Vol. 22, No. 6, pp. 1390-1402, September 1992, which is incorporated by reference herein:
{(move[Move]|rotate[Rotate]) the {1 (red|green)(cup|block)}}.
In this example, each statement is a sentence. Each word could become a phrase in a more general example. Parentheses enclose exclusive OR forms, where each word or phrase is separated by vertical bars, and these expressions can be nested. Square brackets contain the name of a C function that will be called when the adjoining word (or phrase) is spoken in this sentence. Curly brackets enclose argument strings that will be sent to the C function. When the user says “rotate the green cup” the outcome is the C function call:
Rotate(“green cup”);
Another way to implement semantic actions is to use a dispatch function as follows:
{[Exec]{0 (move|rotate)} the {1 (red|green)(cup|block)}}.
In this case, the dispatch function Exec is called with argument 0 set to “rotate,” thereby signaling Exec to call the Rotate function.
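The dispatch pattern can be illustrated with a Python analogue of the C mechanism. The handlers and their return values are invented for illustration; only the argv-style calling convention mirrors the GSL example.

```python
# Hypothetical Python stand-ins for the C semantic functions of the example.
def rotate(obj):
    return "rotating the " + obj

def move(obj):
    return "moving the " + obj

HANDLERS = {"rotate": rotate, "move": move}

def exec_dispatch(argc, argv):
    """Dispatch in the spirit of [Exec]: argv[0] carries the spoken verb,
    signaling which handler to call with the collected noun phrase."""
    assert argc == len(argv)
    return HANDLERS[argv[0]](" ".join(argv[1:]))

result = exec_dispatch(3, ["rotate", "green", "cup"])
```

So the utterance "rotate the green cup" reaches the Rotate handler with "green cup" as its argument, as in the C function call shown above.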
This specification form is very general. C functions can be defined anywhere within a sentence statement and arguments can be arbitrarily scoped and nested (even reusing the same text repeatedly). Functions defined within the scope of an argument in the scope of another function will return a computed argument value to the enclosing function at execution time. Hence, a complete function call tree is created.
The simple example given above specifies only eight sentence possibilities. More typical definitions would specify complex syntax and semantics having many thousands of sentence possibilities (the full robot grammar for this example specified 6×10^20 sentences in about 1.5 pages of GSL code).
The actual GSL implementation is also more complicated than illustrated here. The compiler performs macro expansion, takes cyclic and recursive expressions, performs recursion transformations, performs four stages of optimization, and generates syntactic and semantic parsers. The semantic function interface follows the Unix protocol using the well-known Unix func (argc, argv) format. The semantic parser can be separated from the syntactic parser and used as a natural language keyboard interface.
Lexicon Driven Semantics
It is known that semantic specification expressions can be written by attaching C functions to verbs while collecting adjectives and nouns into arguments. In accordance with the invention, this process can be simplified further for the application developer by providing a natural language lexicon containing word classifications. This lexicon can either reside in the client (e.g., in a browser) or in a web server.
Using the above-noted URLINSERT mechanism that inserts an SR output string directly into a URL, a server-side lexicon would generally be needed. Each HTML page may use a different lexicon and it is desirable to share lexicons across many servers, so a lexicon may reside on a server different from the semantics-processing server. With a minor extension of the URLINSERT mechanism the lexicon information could be sent to the server using the POST mechanism of the HyperText Transfer Protocol (HTTP). However, this approach puts an increased burden on the server. A server-side solution using a variety of such lexicons is also inconsistent with the stateless nature of existing web server technology.
Lexicon driven semantics generally require a higher level representation of language structure. Phrase structure grammar variables are used to define the sentence structure, which can be broken down into more detailed descriptions, eventually leading to word categories. Word categories are typically parts of speech such as noun, adjective and verb designators. Parsing of a sentence is performed bottom up until a complete phrase structure is recognized. The semantics are then extracted from the resultant parse tree. Verb phrases are mapped into semantic actions while noun phrases are mapped into function arguments.
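A drastically simplified Python sketch of lexicon-driven interpretation follows. The word-classification lexicon here is hypothetical and flat (no real phrase structure parsing); it only illustrates the mapping of verbs to actions and of adjectives and nouns to arguments.

```python
# Hypothetical word-classification lexicon: word -> part of speech.
LEXICON = {
    "move": "verb", "rotate": "verb",
    "red": "adjective", "green": "adjective",
    "cup": "noun", "block": "noun",
    "the": "function",
}

def interpret(sentence):
    """Toy bottom-up pass: the verb selects the semantic action, while the
    adjectives and nouns are collected into its argument."""
    action, argument = None, []
    for word in sentence.lower().split():
        pos = LEXICON.get(word)
        if pos == "verb":
            action = word
        elif pos in ("adjective", "noun"):
            argument.append(word)
    return action, " ".join(argument)

action, argument = interpret("rotate the green cup")
```

A real implementation would build a parse tree bottom up and extract verb and noun phrases from it, rather than scanning word by word.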
Client-Side Semantics
Converting syntax to semantics at the client has a number of advantages, including: less computational burden on the web server; distribution of computation to clients; no need for specialized knowledge of natural language at the server; a simplified interface; unified control at both the client and server; and fast response to local commands.
FIG. 1 shows a processing system 100 which implements a web-based voice dialog interface in accordance with the illustrative embodiment of the invention. The portions of the system 100 other than web server 128 are assumed for this example to be implemented on the client-side, e.g., in a browser associated with a client computer or other type of client processing device. A client in accordance with the invention may be any type of computer, computer system, processing device or other type of device, e.g., a telephone, a television set-top box, a computer equipped with telephony features, etc., capable of receiving and/or transmitting audio information.
The client-side portions of the system 100 are assumed to be coupled to the web server 128 via a conventional network connection, e.g., a connection established over a network in a conventional manner using the Transmission Control Protocol/Internet Protocol (TCP/IP) standard or other suitable communication protocol(s).
The system 100 receives HTML information from the Internet or other computer network in an HTML interpreter 102 which processes the HTML information to generate a rendering 104, i.e., an audibly-perceptible output of the corresponding HTML information for delivery to a user. The rendering 104 may include both visual and audio output. The HTML information is also delivered to a grammar compiler 106 which processes the information to generate a syntax 110 and a set of lexical semantics 112. The grammar compiler 106 may be of the type described in M. K. Brown and J. G. Wilpon, “A Grammar Compiler for Connected Speech Recognition,” IEEE Trans. ASSP, Vol. 39, No. 1, pp. 17-28, January 1991, which is incorporated by reference herein. The HTML interpreter 102 also generates a client library 114.
It should be noted that the grammar compiler 106 may incorporate or otherwise utilize a grammar generation process, such as that described in greater detail in the above-cited U.S. patent application Ser. No. 09/168,405, filed Oct. 6, 1998 in the name of inventors M. K. Brown et al. and entitled “Web-Based Platform for Interactive Voice Response.” For example, such a grammar generation process can receive as input parsed HTML, and generate GSL therefrom. The grammar compiler 106 may be configured to take this GSL as input and create an optimized finite-state network for a speech recognizer. More particularly, the GSL may be used, e.g., to program the grammar compiler 106 with an expanded set of phrases so as to allow a user to speak partial phrases taken from a hyperlink title. In addition, a stored thesaurus can be used to replace words with synonyms so as to further expand the allowed language.
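The partial-phrase expansion just described can be sketched as follows: from a hyperlink title, admit any contiguous word fragment that still contains at least one content word, so a user can speak pieces of the title rather than the whole phrase. This is a minimal illustration only; the stop-word list and function name are assumptions, not part of the patent.

```python
# Illustrative stop words; a real grammar generator would use a fuller list.
STOP = {"the", "a", "an", "to", "of"}

def partial_phrases(title):
    """Return every contiguous fragment of the title that contains at
    least one non-stop word, as candidate phrases for the grammar."""
    words = title.lower().split()
    out = set()
    for i in range(len(words)):
        for j in range(i + 1, len(words) + 1):
            fragment = words[i:j]
            if any(w not in STOP for w in fragment):
                out.add(" ".join(fragment))
    return out
```

For the title "Move the cup", this admits fragments such as "move", "the cup" and "cup", while rejecting "the" on its own.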
The grammar compiler 106 is an example of a “grammar processing device” suitable for use with the present invention. Such a device in other embodiments may incorporate a grammar generator, or may be configured to receive input from a grammar generator.
In the system 100 of FIG. 1, speech received from a user is processed in an automatic speech recognizer (ASR) 120 utilizing the syntax 110 generated by the grammar compiler 106. The output of the ASR is applied to a natural language interpreter 122 which utilizes the lexical semantics 112 generated by the grammar compiler 106. The output of the natural language interpreter 122 is supplied to client executive 124 and CGI formatter 126, both of which communicate with a web server 128. The client executive 124 processes the interpreted speech from the interpreter 122 in accordance with information in the client library 114. The client executive 124 can be one of a variety of interpreters, such as Java, Javascript or VisualBasic interpreters. The CGI formatter 126 can also be written in one of these languages and executed from the client executive 124, but may be more efficiently implemented as part of a client browser.
Although shown as separate elements in the system 100, the ASR 120 and natural language interpreter 122 may be different elements of a single speech recognition device. Moreover, although illustrated as including a single web server, the system 100 can of course be utilized in conjunction with multiple servers in numerous different arrangements.
The incoming HTML information in the system 100 of FIG. 1 is thus processed for multiple simultaneous purposes, i.e., to generate the rendering 104, to extract a natural language model containing both syntactic and semantic information in the form of respective syntax 110 and lexical semantics 112, and to generate a script language definition of semantic actions via the client library 114.
Advantageously, extracting semantics on the client side in the manner illustrated in FIG. 1 allows the system 100 to reduce client-server traffic and perform immediate execution of client-side operations.
The CGI format as implemented in the CGI formatter 126 will now be described in greater detail. A general URL format suitable for use in calling a CGI in the illustrative embodiment includes five components: protocol, host, path, PATH_INFO, and QUERY_STRING, in the following syntax:
{protocol}://{host}/{path}/{PATH_INFO}?{QUERY_STRING}
where protocol can generally be one of a number of known protocols, such as, e.g., http, ftp, wais, etc., but for use with a CGI the protocol is generally http; host is usually a fully qualified domain name but may be relative to the local domain; path is a slash-separated list of directories ending with a recognized file; PATH_INFO is additional slash-separated information that may contain a root directory for CGI processing; and QUERY_STRING is an ampersand-separated list of name-value pairs for use by a CGI program. The last two items become available to the CGI program as environment variables at the web server 128. Processing of the URL by the client and web server is as follows:
1. client connects to host (or sends complete URL to proxy and proxy connects to host) web server;
2. client issues GET or POST request using the remainder of the URL after the host;
3. server parses path searching from the public filesystem root until it recognizes a path element;
4. server continues parsing path until either end of string or ‘?’ token is seen, setting PATH_INFO; and
5. server sets QUERY_STRING with remaining URL string. The URL may not contain white-space characters but QUERY_STRING blanks can be represented with “+” characters.
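The five-component split and the "+"-for-blank convention described above can be sketched as follows. The CGI script name used to separate path from PATH_INFO is a hypothetical stand-in for the "recognized file" at which a real server stops parsing; everything else follows the steps enumerated above.

```python
from urllib.parse import urlsplit, parse_qs

def parse_cgi_url(url, script="lookup.cgi"):
    """Split a URL into protocol, host, path, PATH_INFO and a dict of
    QUERY_STRING name-value pairs. 'script' is an assumed CGI program
    name marking where path ends and PATH_INFO begins."""
    parts = urlsplit(url)
    # path ends at the recognized CGI program; the remainder is PATH_INFO.
    head, sep, path_info = parts.path.partition("/" + script)
    path = head + sep
    # QUERY_STRING blanks arrive as "+"; parse_qs decodes them to spaces.
    query = {k: v[0] for k, v in parse_qs(parts.query).items()}
    return parts.scheme, parts.netloc, path, path_info, query
```

For example, "http://host/cgi-bin/lookup.cgi/extra/info?name=red+cup&n=2" splits into protocol "http", host "host", path "/cgi-bin/lookup.cgi", PATH_INFO "/extra/info", and the pairs name="red cup", n="2".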
Continuing with the previous robot grammar example, for server-side execution the speech grammar specification can be written into a hyperlink:
<A HREF=“http://host/pathinfo?%s” URLINSERT
GSL=“{(move[Move]|rotate[Rotate])
the{1(red|green)(cup|block)}.”>
Title</A>
In this example, the underlying platform has been extracted from the grammar specification tag. The presence of semantics in the GSL string indicates that QUERY_STRING should contain a preprocessed semantic expression rather than the unprocessed SR output string. In this case, URLINSERT will result in analysis of the SR output text yielding the URL:
http://host/pathinfo?EXEC=“{Rotate+1=‘green+cup’}”
A concise format is used. The curly brackets delimit scope. Argument numbers indicate argument positions, and do not need to be in order or consecutive (i.e., some or all arguments can be undefined). Nested functions can be handled by nesting the call format as the following example illustrates:
. . . ?EXEC=“{func1+1=‘{func2+1=‘arg1’+2=‘arg2’}’}”
The function name does not need to appear first within the execution scope, although this style may be easier to read.
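The concise call format above — curly brackets delimiting scope, numbered arguments, nested calls as argument values — can be decoded with a small recursive-descent parser. This is a sketch under the assumption that straight quotes are used and that each argument value, nested or literal, is quote-delimited; "+" characters in literal values are restored to blanks as in QUERY_STRING.

```python
import re

def parse_call(s, i=0):
    """Parse one {name+N='value'...} expression starting at index i.
    Returns ((name, {argnum: value}), index-after-expression); nested
    braced values are parsed recursively into the same tuple form."""
    i += 1                                     # skip opening "{"
    m = re.match(r"\w+", s[i:])                # function name
    name, i = m.group(0), i + m.end()
    args = {}
    while s[i] != "}":
        m = re.match(r"\+(\d+)='", s[i:])      # argument marker +N='
        n, i = int(m.group(1)), i + m.end()
        if s[i] == "{":                        # nested call as value
            args[n], i = parse_call(s, i)
        else:                                  # literal up to closing quote
            j = s.index("'", i)
            args[n], i = s[i:j].replace("+", " "), j
        i += 1                                 # skip closing quote
    return (name, args), i + 1                 # skip closing "}"
```

Decoding "{Rotate+1='green+cup'}" yields ("Rotate", {1: "green cup"}); argument numbers need not be consecutive, so missing positions simply stay absent from the dictionary.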
Execution on the client side would normally be limited by security measures, since the content from the web server may originate from an unreliable source. For purposes of simplicity and clarity of illustration, however, such security concerns will not be considered in the present description. These concerns can be addressed using conventional security techniques that are well understood in the art.
On the client side, the Rotate operation is performed by calling the Rotate function defined in the client library 114 of FIG. 1. The Rotate function can be defined in Java, for example, and called upon receiving the appropriate speech command.
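A minimal sketch of this client-side dispatch follows: a decoded (function-name, numbered-arguments) pair is executed against functions registered in a client library. The Python names here are illustrative stand-ins for the Java functions the patent describes in client library 114.

```python
def dispatch(call, library):
    """Execute a (name, {position: value}) pair against the client
    library, passing arguments in position order."""
    name, args = call
    return library[name](*(args[k] for k in sorted(args)))

# Hypothetical client-library binding for the Rotate command.
actions = []
CLIENT_LIBRARY = {"Rotate": lambda obj: actions.append(("Rotate", obj))}
```

Speaking "rotate the green cup" would thus reach dispatch(("Rotate", {1: "green cup"}), CLIENT_LIBRARY) and invoke the registered Rotate function immediately, without a round trip to the web server.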
Web-Based Dialog
The term “dialog” generally refers to a multi-sided sequence of expressions. Handling dialog in a voice dialog interface generally requires an ability to sequence through what is commonly called a dialog turn. A dialog turn may be defined as two or more “plys” in a dialog tree or other type of dialog graph necessary to complete an exchange of information. A dialog graph refers generally to a finite-state representation of a complete set of dialog exchanges between two or more agents, and generally contains states and edges as does any mathematical graph. The dialog graph may be virtual in the sense that the underlying implementation is rule-based, since rule-based systems maintain “state” but may not be finite in scope. A “ply” is a discourse by one agent. When discussing dialogs of more than two agents, the conventional terminology “dialog turn” may be inadequate, and other definitions may be used.
It should be noted that web-based dialogs may model a given computer or other processing device as a single agent that may be multi-faceted, even though the actual system may include multiple servers. The primary, multi-faceted agent may then serve as a portal to the underlying agents.
In accordance with the invention, control of dialog for the single agent can be handled by representing a single two-ply dialog turn in a single HTML page. A sequence of such pages forms a finite-state dialog controller.
FIG. 2 illustrates a finite state dialog controller 200 of this type. The dialog controller 200 uses the HTML extensions described previously. Controlled speech synthesis output of a given web page is presented to a user, and the current context of command grammar is defined and utilized, in a manner similar to that previously described in conjunction with FIG. 1.
The finite state dialog controller 200 of FIG. 2 operates on a set of web pages which include in this example web pages 202, 204, 206 and 208. Web page 202 is an HTML page which represents a “Welcome” page, and includes “Start” and “Help” hyperlinks. The “Help” hyperlink leads to web page 204, which includes a “How to” section and a “Start” hyperlink. The “Start” hyperlinks on pages 202 and 204 both lead to page 206, which includes computed HTML corresponding to an output of the form “I want to do {1 . . . } to {2 . . . }.” The web page 208 represents the next dialog turn.
In the controller 200, the HTML for a given dialog turn is constructed using a CGI 210 which may be configured to include application-specific knowledge. As shown in FIG. 2, the CGI 210 interacts with a database interface (DBI) 212 and a database driver (DBD) 214. The DBI 212 is coupled via the DBD 214 to a commercial database management system (DBMS) 216. Suitable DBIs and DBDs are freely available on the Internet for most of the popular commercial DBMS products. The CGI 210 further interacts with an application program interface (API) 218 to an underlying set of one or more application(s) 220.
When a user speaks a client-side command, such as “speak faster” or “speak louder,” the command is executed immediately and the presentation continues. When a navigation command associated with a hyperlink is spoken, control is transferred to the corresponding new web page, dialog turn, and presentation and speech grammar context. The process can then continue on to a new dialog state. In this way, using many relatively small web pages, a complete client-server dialog system can be created.
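The finite-state behavior described above can be sketched as a small graph in which each web page is a state and each hyperlink a labeled edge, so that following an edge is one dialog turn. The page and command names below follow FIG. 2 loosely but are illustrative assumptions.

```python
# Each page maps spoken navigation commands to the page they lead to.
PAGES = {
    "welcome": {"start": "task", "help": "howto"},
    "howto":   {"start": "task"},
    "task":    {"next": "turn2"},
    "turn2":   {},
}

def navigate(page, commands):
    """Follow spoken navigation commands page to page; a command outside
    the current page's grammar context causes no transition."""
    for cmd in commands:
        page = PAGES[page].get(cmd, page)
    return page
```

Saying "help" then "start" from the Welcome page thus lands on the task page, while an unrecognized command leaves the dialog state unchanged.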
Condition Handling
Conditions are system states that prompt the interface system or the application to take the initiative. Such a mechanism was used in the SAM system described in the above-cited M. K. Brown et al. reference. Additional details regarding conditions in the context of dialog can be found in, e.g., J. Chu-Carroll and M. K. Brown, “An evidential model for tracking initiative in collaborative dialogue interactions,” User Modeling and User-Adapted Interaction Journal, Special Issue on Computational Models for Mixed Initiative Interaction, 1998; J. Chu-Carroll and M. K. Brown, “Initiative in collaborative interactions—Its cues and effects,” In Working Notes of the AAAI-97 Spring Symposium on Computational Models for Mixed Initiative Interaction, pages 16-22, 1997; and J. Chu-Carroll and M. K. Brown, “Tracking initiative in collaborative dialogue interactions,” In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL-97), pages 262-270, 1997, all of which are incorporated by reference herein.
Dialog system conditions may be used to trigger a dialog manager to take charge for a particular period, with the dialog manager subsequently relinquishing control as the system returns to normal operation.
Examples of condition types include the following: error conditions, task constraints, missing information, new language, ambiguity, user confusion, more assistance available, hazard warning, command confirmation, and hidden event explanation.
These conditions can be created by the user, the system or both, and are listed above in approximate order of severity. The first five conditions are severe enough to prevent processing of a command until the condition is addressed. User confusion is a more general condition that may prevent further progress or may simply slow progress. The remaining conditions will not prevent progress but will prompt the system to issue declarative statements to the user.
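The severity ordering above suggests a simple triage: the first five condition types block the pending command, user confusion may block or merely slow progress, and the remaining types only prompt a declarative statement. The condition identifiers and three-way outcome below are illustrative assumptions, not terminology from the patent.

```python
# First five condition types: severe enough to block the command.
BLOCKING = {"error", "task_constraint", "missing_info",
            "new_language", "ambiguity"}
# Remaining types: prompt a declarative statement without blocking.
ADVISORY = {"more_assistance", "hazard_warning",
            "command_confirmation", "hidden_event"}

def triage(condition, confusion_blocks=False):
    """Map a condition type to the system's response mode."""
    if condition in BLOCKING:
        return "block"
    if condition == "user_confusion":
        return "block" if confusion_blocks else "slow"
    return "advise"
```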
Error conditions generally fall into three classes: application errors, interface errors, and user errors. Application errors occur when the application is given information or commands that are invalid in the current application state. For example, database information may be inconsistent with new data, etc. This kind of error needs to be handled by an application having knowledge of the associated processing, but may also require additional HTML content to provide user feedback. For example, the user may be taken to a help system.
Interface errors in this context are speech recognition errors that in many cases are easy for the user to correct by simply issuing a designated command such as a “go back” command. In some cases, processing may not easily be reversed, so an additional confirmation step is advisable when speech recognition errors could be costly. Keeping the grammar context limited, whenever possible, decreases the likelihood of recognition errors but can also create a variety of other problems when the user is prone to making a mistake about how the application functions.
A user command may be syntactically and semantically correct but not possible because the application is unable to comply. Handling task constraints requires a tighter coupling between the application and the interface. In most cases, the application will need to signal the interface of its inability to process the command and perhaps suggest ways that the desired goal can be achieved. This signal may be at a low application level having no knowledge of natural language. The interface then must expand this low level signal into a complete natural language expression, perhaps initiating a side dialog to deal with the problem.
Often the user will provide only some of the information necessary to complete a task. For example, the user might tell a travel information agent that they “want to go to Boston.” While the system might already know that the user is in, e.g., New York City, it is still necessary to know the travel date(s), time of day, and possible ground transportation desired. In this case, offering more assistance may be desirable, or simply asking for the needed information may suffice.
Occasionally the user will speak a new word or words that the system has not heard before. This causes the interface to divert to a dialog about the new word(s). The user can be asked to tell the system the type of word (adjective, noun, verb, etc.) and possibly associate the new word with other words the system already knows about. Acquiring the acoustic patterns of new words is also possible using phonetic transcription grammars, with speech recognition, but is technically more difficult.
It should be noted that commands can be ambiguous. The system can handle this by listing a number of possible explicit interpretations using, e.g., different words to express the same meaning or a more elaborate full description of the possible interpretations. The user can then choose an interpretation or rephrase the command and try again.
User confusion may be detected by measuring user performance parameters such as long response times, frequent use of incomplete or ambiguous commands, lack of progress toward a goal, etc. User confusion is therefore not detected quickly by the system, but is a condition that results from an averaging of user performance. As such a user confusion index slowly increases, the system should offer increasing levels of assistance, raising the verbosity of the conversation. An expert user will thus be able to quickly achieve goals with low confusion scores.
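One way to realize such an averaged confusion index is an exponential moving average over per-turn performance signals, with verbosity thresholds applied to the smoothed score. The weights and thresholds below are illustrative assumptions; the patent does not specify them.

```python
class ConfusionIndex:
    """Smoothed user-confusion score: each dialog turn contributes a
    signal in [0, 1] (0 = fluent, 1 = confused), averaged over time."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha      # weight given to the newest observation
        self.score = 0.0

    def observe(self, signal):
        self.score = (1 - self.alpha) * self.score + self.alpha * signal
        return self.score

    def verbosity(self):
        """Assistance level rises as the averaged score climbs."""
        if self.score < 0.3:
            return "terse"
        return "normal" if self.score < 0.6 else "verbose"
```

An expert user issuing quick, unambiguous commands keeps the score near zero and receives terse prompts; repeated signs of trouble gradually push the system toward verbose assistance.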
Hazard warnings and command confirmation work together to protect the user and system from performing dangerous, possibly irreversible actions. Examples include changing database entries that remove previous data, purchasing non-refundable airline tickets, etc. In many cases, these actions may not be visible or obvious to the user, or it may be desirable to explain to the user not only what the system is doing on behalf of the user, but also how the system is doing it.
It is usually important not to prevent the user from making mistakes by simply ignoring invalid requests, because the user will find it difficult to learn about such mistakes. Leaving all invalid commands out of the grammar for a given context may therefore result in user confusion. Instead, a well designed error handling system will recognize the erroneous command and send the user to a source of context-sensitive help for information on the proper use of commands in the current system state. User errors involving misunderstanding of the application may require cooperation between an application help system and an interface help system, since the user may not only be using the application incorrectly at a given point but have thereby arrived at an incorrect state in the dialog. The help facility then needs to know how to quickly get the user to the correct state and instruct the user on how to proceed.
There are several ways the system can help the user either automatically or explicitly. Explicit requests for help can be handled either by a built-in help system that can offer general help about how to use the voice interface commands, or by navigating to a help site populated with HTML pages containing a help system dialog and/or CGI programs to implement a more sophisticated help interface. CGIs have the additional advantage that the calling page can send its URL in the QUERY_STRING, thereby enabling the help dialog system to return automatically to the same place in the application dialog after the help system has completed its work. The QUERY_STRING information can also be used by the help system to offer context-sensitive help accessed from a global help system database. The user can also return to the application either by using a “go back” command or using a “go home” command to start over.
Using the above-described INITIALTIMEOUT, GAPTIMEOUT, and MAXTIMEOUT special tags and a standard HTML <META HTTP-EQUIV=“Refresh” . . .> tag, the system can take the initiative when the user fails to respond or fails to speak a recognizable command within specified time periods. Each type of timeout can take the user to a specific part of a help system that explains why the system took charge and what the user can do next.
Dialog Application Development Tools
The present invention also provides dialog application development tools, which help an application developer quickly build new web-based dialog applications. These tools may be implemented at least in part as extensions of conventional HTML authoring tools, such as Netscape Composer or Microsoft Word.
A dialog application development tool in accordance with the invention may, e.g., use the word classification lexicon described earlier so as to allow default function assignments to be made automatically while a grammar is being specified. The application developer can then override these defaults with explicit choices. Simultaneously, the tool can automatically write code for parsing the QUERY_STRING values containing the encoded semantic expressions. This parsing code may then be combined with a semantic transformation processor provided to the developer as part of a web-based dialog system development kit (SDK).
Additional details regarding elements suitable for use in such an SDK are described in, e.g., M. K. Brown and B. M. Buntschuh, “A Context-Free Grammar Compiler for Speech Understanding Systems,” ICSLP'94, Vol. 1, pp. 21-24, Yokohama, Japan, September 1994, which is incorporated by reference herein.
FIG. 3 illustrates the operation of a dialog application development tool 300 in accordance with the invention. The application development tool 300 includes an authoring tool 302 which utilizes GSL to generate an HTML output 304, and parses included or called code to generate CGI output 306. The HTML output 304 is delivered via Internet or other web service to a client 310, e.g., to a browser program running on a client computer. The CGI output 306 is delivered to a web server 128 which also has associated therewith an API 312 and a semantic transformation processor 316. The web server 128 communicates with the client 310 over a suitable network connection.
At execution time, the semantic transformation processor 316 runs on the web server 128, e.g., as a module of the web server CGI program, and it transforms the parsed semantic expressions from the authoring tool 302 into calls to application functions that perform semantic actions through the API 312. The API 312 may be written using any of a variety of well-known languages. Language interface definitions to be included in the CGI code can be provided as part of the dialog application development tool for the most popular languages, e.g., C, C++, Java, Javascript, VisualBasic, Perl, etc.
Automatic Language Model Expansion
One possible difficulty remaining for the application developer is definition of all the ways a user might state each possible command to the speech interface. Simple language model expansion, as described previously, relaxes the constraints on the user slightly, allowing the user to speak a variety of phrases containing key words from the original title. Further language model expansion can be obtained, e.g., by using a thesaurus to substitute other words having similar meaning for words that appeared in the original title. In addition, a hyperlink title can be parsed into its phrase structure representation, and then transformed into another phrase structure of the same type, e.g., interrogatory, assertion or imperative, from which more phrase expressions can be derived.
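The thesaurus-based expansion can be sketched as a cross product: each title word is replaced by the set containing itself plus its synonyms, and the allowed phrases are all combinations. The thesaurus entries here are illustrative assumptions; a deployed system would draw on a stored thesaurus as described earlier.

```python
from itertools import product

# Illustrative synonym table standing in for a stored thesaurus.
THESAURUS = {"move": ["shift", "relocate"], "cup": ["mug"]}

def expand_title(title, thesaurus=THESAURUS):
    """Expand a hyperlink title into the set of phrases obtained by
    substituting each word with itself or any of its synonyms."""
    choices = [[w] + thesaurus.get(w, []) for w in title.lower().split()]
    return [" ".join(combo) for combo in product(*choices)]
```

The title "Move the cup" expands to six phrases, including "shift the mug", all of which the grammar compiler could then admit as equivalent commands.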
The application developer can then write simple hyperlink title statements representing the basic meaning assigned to that link, using either a natural language expression (e.g., English sentences as used in the above example) or a higher level description using phrase structure grammar tags. When using natural language, the system generally must first convert the natural language into phrase structure form to perform structure transformations. When using phrase structure format, the application developer generally must use an intermediate level of expression that specifies word classes or categories, so that the system will know how to expand the phrase structure tokens into natural language words.
This capability can be built into a dialog application development tool, providing the application developer with a wide variety of choices in developing new speech controlled web content. In combination with existing web development tool technology, this additional capability makes the development of speech-activated web sites with rich dialog control easy to implement for application developers who are not experts in speech processing.
It should be noted that various evolving web-based voice browser language proposals are now being considered by the World Wide Web Consortium (W3C) Voice Browser Working Group. These emerging standards may influence the particular implementation details associated with a given embodiment of the invention.
The above-described embodiments of the invention are intended to be illustrative only. Numerous alternative embodiments within the scope of the following claims will be apparent to those skilled in the art.

Claims (20)

What is claimed is:
1. An apparatus for implementing a web-based voice dialog interface, the apparatus comprising:
a first interpreter for receiving information relating to one or more web pages, the first interpreter generating a rendering of at least a portion of the information for presentation to a user in an audibly-perceptible format;
a grammar processing device having an input coupled to an output of the first interpreter, the grammar processing device utilizing interpreted web page information received from the first interpreter to generate syntax information and semantic information;
a speech recognizer which processes user speech in accordance with the syntax information generated by the grammar processing device; and
a second interpreter having an input coupled to an output of the speech recognizer, the second interpreter processing recognized speech in accordance with the semantic information from the grammar processing device to generate output for delivery to a web server in conjunction with a dialog which includes at least a portion of the rendering and the user speech.
2. The apparatus of claim 1 wherein the grammar processing device comprises a grammar compiler.
3. The apparatus of claim 2 wherein the grammar processing device implements a grammar generation process to generate a grammar specification language which is supplied as input to the grammar compiler.
4. The apparatus of claim 3 wherein the grammar generation process utilizes a thesaurus to expand the grammar specification language.
5. The apparatus of claim 1 wherein the first interpreter comprises a web page interpreter capable of interpreting web pages formatted at least in part using HTML.
6. The apparatus of claim 1 wherein the second interpreter comprises a natural language interpreter.
7. The apparatus of claim 1 wherein the output generated by the second interpreter is further processed by a common gateway interface formatter prior to delivery to the web server.
8. The apparatus of claim 7 wherein the common gateway interface formatter formats the output generated by the second interpreter into a format suitable for a common gateway interface associated with the web server.
9. The apparatus of claim 8 wherein the common gateway interface is coupled to a database management system.
10. The apparatus of claim 1 wherein the first interpreter further generates a client library associated with interpretations of web pages previously performed on a common client machine, the client library including a script language definition of semantic actions.
11. The apparatus of claim 10 further including a client executive program which processes information in the client library for delivery to the web server.
12. The apparatus of claim 1 wherein the web page information is at least partially in an HTML format.
13. The apparatus of claim 12 wherein the first interpreter includes a capability for interpreting a plurality of voice-related HTML tags.
14. The apparatus of claim 1 wherein dialog control is handled by representing a given dialog turn in a single web page.
15. The apparatus of claim 14 wherein a finite state dialog controller is implemented as a sequence of web pages each representing a dialog turn.
16. The apparatus of claim 1 wherein the processing operations of the dialog are associated with an application developed using a dialog application development tool.
17. The apparatus of claim 16 wherein the dialog application development tool comprises an authoring tool which utilizes a grammar specification language to generate output in a web page format for delivery to one or more clients, and parses code to generate a common gateway interface output for delivery to the web server.
18. A method for implementing a web-based voice dialog interface, the method comprising the steps of:
generating a rendering of at least a portion of a set of information relating to one or more web pages received over a network, for presentation to a user in an audibly-perceptible format;
utilizing interpreted web page information to generate syntax information and semantic information;
processing user speech in accordance with the syntax information; and
processing recognized speech in accordance with the semantic information to generate output for delivery to a web server in conjunction with a dialog which includes at least a portion of the rendering and the user speech.
19. A machine-readable medium for storing one or more programs for implementing a web-based dialog interface, wherein the one or more programs when executed by a processing system carry out the steps of:
generating a rendering of at least a portion of a set of information relating to one or more web pages received over a network, for presentation to a user in an audibly-perceptible format;
utilizing interpreted web page information to generate syntax information and semantic information;
processing user speech in accordance with the syntax information to generate recognized speech; and
processing the recognized speech in accordance with the semantic information to generate output for delivery to a web server in conjunction with a dialog which includes at least a portion of the rendering and the user speech.
20. A processing system comprising:
at least one computer for implementing at least a portion of a web-based voice dialog interface, the interface including: (i) a first interpreter for receiving information relating to one or more web pages, the first interpreter generating a rendering of at least a portion of the information for presentation to a user in an audibly-perceptible format; (ii) a grammar processing device having an input coupled to an output of the first interpreter, the grammar processing device utilizing interpreted web page information received from the first interpreter to generate syntax information and semantic information; (iii) a speech recognizer which processes user speech in accordance with the syntax information generated by the grammar processing device; and (iv) a second interpreter having an input coupled to an output of the speech recognizer, the second interpreter processing recognized speech in accordance with the semantic information from the grammar processing device to generate output for delivery to a web server in conjunction with a dialog which includes at least a portion of the rendering and the user speech.
US09/524,964 1999-05-20 2000-03-14 Web-based voice dialog interface Expired - Lifetime US6604075B1 (en)

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US13513099P | 1999-05-20 | 1999-05-20 |
US09/524,964 | 1999-05-20 | 2000-03-14 | Web-based voice dialog interface

Publications (1)

Publication Number | Publication Date
US6604075B1 | 2003-08-05

Family ID: 27624847

Country Status (1): US
US20030046074A1 (en) * 2001-06-15 2003-03-06 International Business Machines Corporation Selective enablement of speech recognition grammars
US20030130854A1 (en) * 2001-10-21 2003-07-10 Galanes Francisco M. Application abstraction with dialog purpose
US20030139930A1 (en) * 2002-01-24 2003-07-24 Liang He Architecture for DSR client and server development platform
US20030167168A1 (en) * 2002-03-01 2003-09-04 International Business Machines Corporation Automatic generation of efficient grammar for heading selection
US20030187658A1 (en) * 2002-03-29 2003-10-02 Jari Selin Method for text-to-speech service utilizing a uniform resource identifier
US20030195751A1 (en) * 2002-04-10 2003-10-16 Mitsubishi Electric Research Laboratories, Inc. Distributed automatic speech recognition with persistent user parameters
US20030200077A1 (en) * 2002-04-19 2003-10-23 Claudia Leacock System for rating constructed responses based on concepts and a model answer
US20030200080A1 (en) * 2001-10-21 2003-10-23 Galanes Francisco M. Web server controls for web enabled recognition and/or audible prompting
US20030221181A1 (en) * 2002-05-24 2003-11-27 Petr Hejl Developing and running selected applications in communications
US20030233239A1 (en) * 2002-06-14 2003-12-18 International Business Machines Corporation Voice browser with integrated TCAP and ISUP interfaces
US20040034531A1 (en) * 2002-08-15 2004-02-19 Wu Chou Distributed multimodal dialogue system and method
US20040153323A1 (en) * 2000-12-01 2004-08-05 Charney Michael L Method and system for voice activating web pages
US6775805B1 (en) * 1999-10-06 2004-08-10 International Business Machines Corporation Method, apparatus and program product for specifying an area of a web page for audible reading
US6804331B1 (en) 2002-03-27 2004-10-12 West Corporation Method, apparatus, and computer readable media for minimizing the risk of fraudulent receipt of telephone calls
US6819758B2 (en) 2001-12-21 2004-11-16 West Corporation Method, system, and computer-readable media for performing speech recognition of indicator tones
US20040230434A1 (en) * 2003-04-28 2004-11-18 Microsoft Corporation Web server controls for web enabled recognition and/or audible prompting for call controls
US20040230637A1 (en) * 2003-04-29 2004-11-18 Microsoft Corporation Application controls for speech enabled recognition
US20040236580A1 (en) * 1999-11-12 2004-11-25 Bennett Ian M. Method for processing speech using dynamic grammars
US20040258217A1 (en) * 2003-05-30 2004-12-23 Kim Ki Chul Voice notice relay service method and apparatus
US20050015238A1 (en) * 2003-07-17 2005-01-20 International Business Machines Corporation Computational linguistic statements for providing an autonomic computing environment
US6862343B1 (en) 2002-03-27 2005-03-01 West Corporation Methods, apparatus, scripts, and computer readable media for facilitating secure capture of sensitive data for a voice-based transaction conducted over a telecommunications network
US20050055218A1 (en) * 2001-10-24 2005-03-10 Julia Luc E. System and method for speech activated navigation
US20050091059A1 (en) * 2003-08-29 2005-04-28 Microsoft Corporation Assisted multi-modal dialogue
US6895084B1 (en) * 1999-08-24 2005-05-17 Microstrategy, Inc. System and method for generating voice pages with included audio files for use in a voice page delivery system
US20050111651A1 (en) * 2003-11-21 2005-05-26 Armando Chavez Script translation
US20050135571A1 (en) * 2003-12-19 2005-06-23 At&T Corp. Method and apparatus for automatically building conversational systems
US20050143975A1 (en) * 2003-06-06 2005-06-30 Charney Michael L. System and method for voice activating web pages
US20050154591A1 (en) * 2004-01-10 2005-07-14 Microsoft Corporation Focus tracking in dialogs
US20050171781A1 (en) * 2004-01-08 2005-08-04 Poploskie Jon M. Speech information system
US20050182630A1 (en) * 2004-02-02 2005-08-18 Miro Xavier A. Multilingual text-to-speech system with limited resources
US6937702B1 (en) 2002-05-28 2005-08-30 West Corporation Method, apparatus, and computer readable media for minimizing the risk of fraudulent access to call center resources
US6950793B2 (en) 2001-01-12 2005-09-27 International Business Machines Corporation System and method for deriving natural language representation of formal belief structures
US20050261901A1 (en) * 2004-05-19 2005-11-24 International Business Machines Corporation Training speaker-dependent, phrase-based speech grammars using an unsupervised automated technique
US20050278179A1 (en) * 2004-06-09 2005-12-15 Overend Kevin J Method and apparatus for providing network support for voice-activated mobile web browsing for audio data streams
US20050283367A1 (en) * 2004-06-17 2005-12-22 International Business Machines Corporation Method and apparatus for voice-enabling an application
US20060025997A1 (en) * 2002-07-24 2006-02-02 Law Eng B System and process for developing a voice application
US20060092915A1 (en) * 2004-10-28 2006-05-04 Bellsouth Intellectual Property Management Corporation Methods and systems for accessing information across a network
DE102004056166A1 (en) * 2004-11-18 2006-05-24 Deutsche Telekom Ag Speech dialogue system and method of operation
US20060111906A1 (en) * 2004-11-19 2006-05-25 International Business Machines Corporation Enabling voice click in a multimodal page
US7054308B1 (en) * 2000-11-07 2006-05-30 Verizon Laboratories Inc. Method and apparatus for estimating the call grade of service and offered traffic for voice over internet protocol calls at a PSTN-IP network gateway
US20060122833A1 (en) * 2000-10-16 2006-06-08 Nasreen Quibria Method of and system for providing adaptive respondent training in a speech recognition application based upon the inherent response of the respondent
US20060122837A1 (en) * 2004-12-08 2006-06-08 Electronics And Telecommunications Research Institute Voice interface system and speech recognition method
US20060190422A1 (en) * 2005-02-18 2006-08-24 Beale Kevin M System and method for dynamically creating records
US20060190252A1 (en) * 2003-02-11 2006-08-24 Bradford Starkie System for predicting speech recognition accuracy and development for a dialog system
US20060203980A1 (en) * 2002-09-06 2006-09-14 Telstra Corporation Limited Development system for a dialog system
US7110745B1 (en) * 2001-12-28 2006-09-19 Bellsouth Intellectual Property Corporation Mobile gateway interface
US7130800B1 (en) 2001-09-20 2006-10-31 West Corporation Third party verification system
US20070043570A1 (en) * 2003-07-18 2007-02-22 Koninklijke Philips Electronics N.V. Method of controlling a dialoging process
US7191133B1 (en) * 2001-02-15 2007-03-13 West Corporation Script compliance using speech recognition
US7203653B1 (en) 1999-11-09 2007-04-10 West Corporation Automated third party verification system
US20070088556A1 (en) * 2005-10-17 2007-04-19 Microsoft Corporation Flexible speech-activated command and control
US20070088677A1 (en) * 2005-10-13 2007-04-19 Microsoft Corporation Client-server word-breaking framework
US20070094032A1 (en) * 1999-11-12 2007-04-26 Bennett Ian M Adjustable resource based speech recognition system
US20070136067A1 (en) * 2003-11-10 2007-06-14 Scholl Holger R Audio dialogue system and voice browsing method
US20070162280A1 (en) * 2002-12-12 2007-07-12 Khosla Ashok M Auotmatic generation of voice content for a voice response system
US20070185716A1 (en) * 1999-11-12 2007-08-09 Bennett Ian M Internet based speech recognition system with dynamic grammars
US7275086B1 (en) * 1999-07-01 2007-09-25 Intellisync Corporation System and method for embedding a context-sensitive web portal in a computer application
US7286521B1 (en) * 2000-07-21 2007-10-23 Tellme Networks, Inc. Localized voice over internet protocol communication
US20070260972A1 (en) * 2006-05-05 2007-11-08 Kirusa, Inc. Reusable multimodal application
US20070294927A1 (en) * 2006-06-26 2007-12-27 Saundra Janese Stevens Evacuation Status Indicator (ESI)
US20080052076A1 (en) * 2006-08-22 2008-02-28 International Business Machines Corporation Automatic grammar tuning using statistical language model generation
US20080059153A1 (en) * 1999-11-12 2008-03-06 Bennett Ian M Natural Language Speech Lattice Containing Semantic Variants
US7373300B1 (en) 2002-12-18 2008-05-13 At&T Corp. System and method of providing a spoken dialog interface to a website
US20080114747A1 (en) * 2006-11-09 2008-05-15 Goller Michael D Speech interface for search engines
US20080126078A1 (en) * 2003-04-29 2008-05-29 Telstra Corporation Limited A System and Process For Grammatical Interference
US20080134058A1 (en) * 2006-11-30 2008-06-05 Zhongnan Shen Method and system for extending dialog systems to process complex activities for applications
US7457397B1 (en) * 1999-08-24 2008-11-25 Microstrategy, Inc. Voice page directory system in a voice page creation and delivery system
US20090055179A1 (en) * 2007-08-24 2009-02-26 Samsung Electronics Co., Ltd. Method, medium and apparatus for providing mobile voice web service
US20090055184A1 (en) * 2007-08-24 2009-02-26 Nuance Communications, Inc. Creation and Use of Application-Generic Class-Based Statistical Language Models for Automatic Speech Recognition
US7526539B1 (en) * 2000-01-04 2009-04-28 Pni Corporation Method and apparatus for a distributed home-automation-control (HAC) window
US7552055B2 (en) 2004-01-10 2009-06-23 Microsoft Corporation Dialog component re-use in recognition systems
US20090254348A1 (en) * 2008-04-07 2009-10-08 International Business Machines Corporation Free form input field support for automated voice enablement of a web page
US20090254346A1 (en) * 2008-04-07 2009-10-08 International Business Machines Corporation Automated voice enablement of a web page
US7653545B1 (en) * 1999-06-11 2010-01-26 Telstra Corporation Limited Method of developing an interactive system
US7664641B1 (en) 2001-02-15 2010-02-16 West Corporation Script compliance and quality assurance based on speech recognition and duration of interaction
US7684989B1 (en) * 2003-04-14 2010-03-23 Travelers Property Casualty Corp. Method and system for integrating an interactive voice response system into a host application system
US7739326B1 (en) 2002-06-18 2010-06-15 West Corporation System, method, and computer readable media for confirmation and verification of shipping address data associated with transaction
US7739115B1 (en) 2001-02-15 2010-06-15 West Corporation Script compliance and agent feedback
US7917367B2 (en) 2005-08-05 2011-03-29 Voicebox Technologies, Inc. Systems and methods for responding to natural language speech utterance
US7949529B2 (en) * 2005-08-29 2011-05-24 Voicebox Technologies, Inc. Mobile systems and methods of supporting natural language human-machine interactions
US7966187B1 (en) 2001-02-15 2011-06-21 West Corporation Script compliance and quality assurance using speech recognition
US7983917B2 (en) 2005-08-31 2011-07-19 Voicebox Technologies, Inc. Dynamic speech sharpening
US8015006B2 (en) 2002-06-03 2011-09-06 Voicebox Technologies, Inc. Systems and methods for processing natural language speech utterances with context-specific domain agents
US20110238414A1 (en) * 2010-03-29 2011-09-29 Microsoft Corporation Telephony service interaction management
US8060371B1 (en) 2007-05-09 2011-11-15 Nextel Communications Inc. System and method for voice interaction with non-voice enabled web pages
US8065151B1 (en) * 2002-12-18 2011-11-22 At&T Intellectual Property Ii, L.P. System and method of automatically building dialog services by exploiting the content and structure of websites
US8073681B2 (en) 2006-10-16 2011-12-06 Voicebox Technologies, Inc. System and method for a cooperative conversational voice user interface
US8140335B2 (en) 2007-12-11 2012-03-20 Voicebox Technologies, Inc. System and method for providing a natural language voice user interface in an integrated voice navigation services environment
US8145489B2 (en) 2007-02-06 2012-03-27 Voicebox Technologies, Inc. System and method for selecting and presenting advertisements based on natural language processing of voice-based input
US8180643B1 (en) 2001-02-15 2012-05-15 West Corporation Script compliance using speech recognition and compilation and transmission of voice and text records to clients
US8219407B1 (en) 2007-12-27 2012-07-10 Great Northern Research, LLC Method for processing the output of a speech recognizer
US8260619B1 (en) 2008-08-22 2012-09-04 Convergys Cmg Utah, Inc. Method and system for creating natural language understanding grammars
US8326637B2 (en) 2009-02-20 2012-12-04 Voicebox Technologies, Inc. System and method for processing multi-modal device interactions in a natural language voice services environment
US8332224B2 (en) 2005-08-10 2012-12-11 Voicebox Technologies, Inc. System and method of supporting adaptive misrecognition conversational speech
US20120323580A1 (en) * 2010-11-17 2012-12-20 International Business Machines Corporation Editing telecom web applications through a voice interface
US8448059B1 (en) * 1999-09-03 2013-05-21 Cisco Technology, Inc. Apparatus and method for providing browser audio control for voice enabled web applications
US8571606B2 (en) 2001-08-07 2013-10-29 Waloomba Tech Ltd., L.L.C. System and method for providing multi-modal bookmarks
US8589161B2 (en) 2008-05-27 2013-11-19 Voicebox Technologies, Inc. System and method for an integrated, multi-modal, multi-device natural language voice services environment
US20140040722A1 (en) * 2012-08-02 2014-02-06 Nuance Communications, Inc. Methods and apparatus for voiced-enabling a web application
US20140039885A1 (en) * 2012-08-02 2014-02-06 Nuance Communications, Inc. Methods and apparatus for voice-enabling a web application
US8830831B1 (en) * 2003-10-09 2014-09-09 NetCracker Technology Solutions Inc. Architecture for balancing workload
US8868425B2 (en) 1998-10-02 2014-10-21 Nuance Communications, Inc. System and method for providing network coordinated conversational services
US8898065B2 (en) 2011-01-07 2014-11-25 Nuance Communications, Inc. Configurable speech recognition system using multiple recognizers
US8918506B1 (en) 2002-10-10 2014-12-23 NetCracker Technology Solutions Inc. Architecture for a system and method for work and revenue management
US9031845B2 (en) 2002-07-15 2015-05-12 Nuance Communications, Inc. Mobile systems and methods for responding to natural language speech utterance
US20150281446A1 (en) * 2014-03-25 2015-10-01 Intellisist, Inc. Computer-Implemented System And Method For Protecting Sensitive Information Within A Call Center In Real Time
US9171541B2 (en) 2009-11-10 2015-10-27 Voicebox Technologies Corporation System and method for hybrid processing in a natural language voice services environment
US9292252B2 (en) 2012-08-02 2016-03-22 Nuance Communications, Inc. Methods and apparatus for voiced-enabling a web application
US9292253B2 (en) 2012-08-02 2016-03-22 Nuance Communications, Inc. Methods and apparatus for voiced-enabling a web application
US9305548B2 (en) 2008-05-27 2016-04-05 Voicebox Technologies Corporation System and method for an integrated, multi-modal, multi-device natural language voice services environment
US9502025B2 (en) 2009-11-10 2016-11-22 Voicebox Technologies Corporation System and method for providing a natural language content dedication service
US20160381220A1 (en) * 2000-02-04 2016-12-29 Parus Holdings, Inc. Personal Voice-Based Information Retrieval System
US20170069315A1 (en) * 2015-09-09 2017-03-09 Samsung Electronics Co., Ltd. System, apparatus, and method for processing natural language, and non-transitory computer readable recording medium
US9594737B2 (en) 2013-12-09 2017-03-14 Wolfram Alpha Llc Natural language-aided hypertext document authoring
US9626703B2 (en) 2014-09-16 2017-04-18 Voicebox Technologies Corporation Voice commerce
US9632650B2 (en) 2006-03-10 2017-04-25 Microsoft Technology Licensing, Llc Command searching enhancements
US9684721B2 (en) 2006-09-07 2017-06-20 Wolfram Alpha Llc Performing machine actions in response to voice input
US20170186427A1 (en) * 2015-04-22 2017-06-29 Google Inc. Developer voice actions system
US9734817B1 (en) * 2014-03-21 2017-08-15 Amazon Technologies, Inc. Text-to-speech task scheduling
US9747896B2 (en) 2014-10-15 2017-08-29 Voicebox Technologies Corporation System and method for providing follow-up responses to prior natural language inputs of a user
US20170337923A1 (en) * 2016-05-19 2017-11-23 Julia Komissarchik System and methods for creating robust voice-based user interface
US9851950B2 (en) 2011-11-15 2017-12-26 Wolfram Alpha Llc Programming in a precise syntax using natural language
US9886944B2 (en) 2012-10-04 2018-02-06 Nuance Communications, Inc. Hybrid controller for ASR
US9898459B2 (en) 2014-09-16 2018-02-20 Voicebox Technologies Corporation Integration of domain information into state transitions of a finite state transducer for natural language processing
US9965663B2 (en) 2015-04-08 2018-05-08 Fractal Antenna Systems, Inc. Fractal plasmonic surface reader antennas
US9972317B2 (en) 2004-11-16 2018-05-15 Microsoft Technology Licensing, Llc Centralized method and system for clarifying voice commands
US10068016B2 (en) 2013-10-17 2018-09-04 Wolfram Alpha Llc Method and system for providing answers to queries
US10095691B2 (en) 2016-03-22 2018-10-09 Wolfram Research, Inc. Method and apparatus for converting natural language to machine actions
US10157612B2 (en) 2012-08-02 2018-12-18 Nuance Communications, Inc. Methods and apparatus for voice-enabling a web application
US10175938B2 (en) 2013-11-19 2019-01-08 Microsoft Technology Licensing, Llc Website navigation via a voice user interface
US10255921B2 (en) 2015-07-31 2019-04-09 Google Llc Managing dialog data providers
US10331784B2 (en) 2016-07-29 2019-06-25 Voicebox Technologies Corporation System and method of disambiguating natural language processing requests
US10338959B2 (en) 2015-07-13 2019-07-02 Microsoft Technology Licensing, Llc Task state tracking in systems and services
US10431214B2 (en) 2014-11-26 2019-10-01 Voicebox Technologies Corporation System and method of determining a domain and/or an action related to a natural language input
US20190372916A1 (en) * 2018-05-30 2019-12-05 Allstate Insurance Company Processing System Performing Dynamic Training Response Output Generation Control
US10528670B2 (en) * 2017-05-25 2020-01-07 Baidu Online Network Technology (Beijing) Co., Ltd. Amendment source-positioning method and apparatus, computer device and readable medium
US10614799B2 (en) 2014-11-26 2020-04-07 Voicebox Technologies Corporation System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance
US10635281B2 (en) 2016-02-12 2020-04-28 Microsoft Technology Licensing, Llc Natural language task completion platform authoring for third party experiences
US10971157B2 (en) 2017-01-11 2021-04-06 Nuance Communications, Inc. Methods and apparatus for hybrid speech recognition processing
US11093708B2 (en) * 2018-12-13 2021-08-17 Software Ag Adaptive human to machine interaction using machine learning
US11250093B2 (en) 2018-07-25 2022-02-15 Accenture Global Solutions Limited Natural language control of web browsers
US20220399014A1 (en) * 2021-06-15 2022-12-15 Motorola Solutions, Inc. System and method for virtual assistant execution of ambiguous command
US11599332B1 (en) 2007-10-04 2023-03-07 Great Northern Research, LLC Multiple shell multi faceted graphical user interface
US11659041B2 (en) * 2012-09-24 2023-05-23 Blue Ocean Robotics Aps Systems and methods for remote presence

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5819220A (en) * 1996-09-30 1998-10-06 Hewlett-Packard Company Web triggered word set boosting for speech interfaces to the world wide web
US5915001A (en) 1996-11-14 1999-06-22 Vois Corporation System and method for providing and using universally accessible voice and speech data files
US5937422A (en) * 1997-04-15 1999-08-10 The United States Of America As Represented By The National Security Agency Automatically generating a topic description for text and searching and sorting text by topic using the same
US5974413A (en) * 1997-07-03 1999-10-26 Activeword Systems, Inc. Semantic user interface
US5999904A (en) 1997-07-02 1999-12-07 Lucent Technologies Inc. Tracking initiative in collaborative dialogue interactions
US6101473A (en) * 1997-08-08 2000-08-08 Board Of Trustees, Leland Stanford Jr., University Using speech recognition to access the internet, including access via a telephone
US6144938A (en) * 1998-05-01 2000-11-07 Sun Microsystems, Inc. Voice user interface with personality
US6173266B1 (en) * 1997-05-06 2001-01-09 Speechworks International, Inc. System and method for developing interactive speech applications
US6173279B1 (en) * 1998-04-09 2001-01-09 At&T Corp. Method of using a natural language interface to retrieve information from one or more data resources
US6421453B1 (en) * 1998-05-15 2002-07-16 International Business Machines Corporation Apparatus and methods for user recognition employing behavioral passwords

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
A. Abella et al., "Development Principles for Dialog-Based Interfaces," European Coordinating Committee for Artificial Intelligence (ECCAI) Conference, Budapest University of Economic Sciences, Hungary, pp. 1-7, Aug. 11-16, 1996.
D.L. Atkins et al., "Integrated Web and Telephone Service Creation," Bell Labs Technical Journal, pp. 19-35, Winter 1997.
E. Szurkowski et al., "An Interactive Consumer Video Services Platform Architecture," Telecom '95 Technical Forum, Geneva, Switzerland, 6 pages, Oct. 1995.
J. Chu-Carroll et al., "Initiative in Collaborative Interactions: Its Cues and Effects," in Working Notes of AAAI-97, 7 pages, 1997.
J.C. Ramming, "PML: A Language Interface to Networked Voice Response Units," Workshop on Internet Programming Languages, ICCL '98, Loyola University, Chicago, Illinois, pp. 1-11, May 1998.
M.K. Brown et al., "A Context-Free Grammar Compiler for Speech Understanding Systems," in ICSLP '94, vol. 1, Yokohama, Japan, pp. 21-24, Sep. 1994.
M.K. Brown et al., "A Grammar Compiler for Connected Speech Recognition," IEEE Transactions on Signal Processing, vol. 39, No. 1, pp. 17-28, Jan. 1991.

Cited By (415)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9761241B2 (en) 1998-10-02 2017-09-12 Nuance Communications, Inc. System and method for providing network coordinated conversational services
US8868425B2 (en) 1998-10-02 2014-10-21 Nuance Communications, Inc. System and method for providing network coordinated conversational services
US7653545B1 (en) * 1999-06-11 2010-01-26 Telstra Corporation Limited Method of developing an interactive system
US7275086B1 (en) * 1999-07-01 2007-09-25 Intellisync Corporation System and method for embedding a context-sensitive web portal in a computer application
US6895084B1 (en) * 1999-08-24 2005-05-17 Microstrategy, Inc. System and method for generating voice pages with included audio files for use in a voice page delivery system
US7457397B1 (en) * 1999-08-24 2008-11-25 Microstrategy, Inc. Voice page directory system in a voice page creation and delivery system
US8448059B1 (en) * 1999-09-03 2013-05-21 Cisco Technology, Inc. Apparatus and method for providing browser audio control for voice enabled web applications
US6775805B1 (en) * 1999-10-06 2004-08-10 International Business Machines Corporation Method, apparatus and program product for specifying an area of a web page for audible reading
US10019713B1 (en) 1999-11-09 2018-07-10 Red Hat, Inc. Apparatus and method for verifying transactions using voice print
US7225133B1 (en) 1999-11-09 2007-05-29 West Corporation Automated third party verification system
US8768709B1 (en) 1999-11-09 2014-07-01 West Corporation Apparatus and method for verifying transactions using voice print
US8954331B1 (en) 1999-11-09 2015-02-10 West Corporation Automated third party verification system utilizing a video file
US7206746B1 (en) 1999-11-09 2007-04-17 West Corporation Third party verification system
US9674353B1 (en) 1999-11-09 2017-06-06 Open Invention Network, Llc Automated third party verification system
US20020111809A1 (en) * 1999-11-09 2002-08-15 West Teleservices Holding Company Automated third party verification system
US7895043B1 (en) 1999-11-09 2011-02-22 West Corporation Automated third party verification system
US8532997B1 (en) 1999-11-09 2013-09-10 West Corporation Automated third party verification system
US9530136B1 (en) 1999-11-09 2016-12-27 Open Invention Network, Llc Apparatus and method for verifying transactions using voice print
US6990454B2 (en) 1999-11-09 2006-01-24 West Corporation Automated third party verification system
US7457754B1 (en) 1999-11-09 2008-11-25 West Corporation Automated third party verification system
US8046230B1 (en) 1999-11-09 2011-10-25 West Corporation Automated third party verification system
US7788102B1 (en) 1999-11-09 2010-08-31 West Corporation Automated third party verification system
US8849671B1 (en) 1999-11-09 2014-09-30 West Corporation Automated third party verification system
US8095369B1 (en) 1999-11-09 2012-01-10 West Corporation Apparatus and method for verifying transactions using voice print
US7533024B1 (en) 1999-11-09 2009-05-12 West Corporation Automated third party verification system
US7203653B1 (en) 1999-11-09 2007-04-10 West Corporation Automated third party verification system
US8219405B1 (en) 1999-11-09 2012-07-10 West Corporation Automated third party verification system
US7725321B2 (en) 1999-11-12 2010-05-25 Phoenix Solutions, Inc. Speech based query system using semantic decoding
US20080059153A1 (en) * 1999-11-12 2008-03-06 Bennett Ian M Natural Language Speech Lattice Containing Semantic Variants
US7873519B2 (en) 1999-11-12 2011-01-18 Phoenix Solutions, Inc. Natural language speech lattice containing semantic variants
US8229734B2 (en) 1999-11-12 2012-07-24 Phoenix Solutions, Inc. Semantic decoding of user queries
US7657424B2 (en) 1999-11-12 2010-02-02 Phoenix Solutions, Inc. System and method for processing sentence based queries
US7672841B2 (en) 1999-11-12 2010-03-02 Phoenix Solutions, Inc. Method for processing speech data for a distributed recognition system
US9190063B2 (en) 1999-11-12 2015-11-17 Nuance Communications, Inc. Multi-language speech recognition system
US8352277B2 (en) 1999-11-12 2013-01-08 Phoenix Solutions, Inc. Method of interacting through speech with a web-connected server
US20070094032A1 (en) * 1999-11-12 2007-04-26 Bennett Ian M Adjustable resource based speech recognition system
US7698131B2 (en) 1999-11-12 2010-04-13 Phoenix Solutions, Inc. Speech recognition system for client devices having differing computing capabilities
US7702508B2 (en) 1999-11-12 2010-04-20 Phoenix Solutions, Inc. System and method for natural language processing of query answers
US20070185716A1 (en) * 1999-11-12 2007-08-09 Bennett Ian M Internet based speech recognition system with dynamic grammars
US9076448B2 (en) 1999-11-12 2015-07-07 Nuance Communications, Inc. Distributed real time speech recognition system
US7647225B2 (en) 1999-11-12 2010-01-12 Phoenix Solutions, Inc. Adjustable resource based speech recognition system
US20080052063A1 (en) * 1999-11-12 2008-02-28 Bennett Ian M Multi-language speech recognition system
US8762152B2 (en) 1999-11-12 2014-06-24 Nuance Communications, Inc. Speech recognition system interactive agent
US7725307B2 (en) 1999-11-12 2010-05-25 Phoenix Solutions, Inc. Query engine for processing voice based queries including semantic decoding
US20040236580A1 (en) * 1999-11-12 2004-11-25 Bennett Ian M. Method for processing speech using dynamic grammars
US7725320B2 (en) 1999-11-12 2010-05-25 Phoenix Solutions, Inc. Internet based speech recognition system with dynamic grammars
US20080052077A1 (en) * 1999-11-12 2008-02-28 Bennett Ian M Multi-language speech recognition system
US7831426B2 (en) 1999-11-12 2010-11-09 Phoenix Solutions, Inc. Network based interactive speech recognition system
US7729904B2 (en) 1999-11-12 2010-06-01 Phoenix Solutions, Inc. Partial speech processing device and method for use in distributed systems
US20080021708A1 (en) * 1999-11-12 2008-01-24 Bennett Ian M Speech recognition system interactive agent
US20050086046A1 (en) * 1999-11-12 2005-04-21 Bennett Ian M. System & method for natural language processing of sentence based queries
US20050086049A1 (en) * 1999-11-12 2005-04-21 Bennett Ian M. System & method for processing sentence based queries
US7912702B2 (en) 1999-11-12 2011-03-22 Phoenix Solutions, Inc. Statistical language model trained with semantic variants
US7526539B1 (en) * 2000-01-04 2009-04-28 Pni Corporation Method and apparatus for a distributed home-automation-control (HAC) window
US20160381220A1 (en) * 2000-02-04 2016-12-29 Parus Holdings, Inc. Personal Voice-Based Information Retrieval System
US10320981B2 (en) 2000-02-04 2019-06-11 Parus Holdings, Inc. Personal voice-based information retrieval system
US20080183469A1 (en) * 2000-03-24 2008-07-31 Eliza Corporation Web-Based Speech Recognition With Scripting and Semantic Objects
US20020002463A1 (en) * 2000-03-24 2002-01-03 John Kroeker Web-based speech recognition with scripting and semantic objects
US20020138262A1 (en) * 2000-03-24 2002-09-26 John Kroeker Web-based speech recognition with scripting and semantic objects
US8510412B2 (en) 2000-03-24 2013-08-13 Eliza Corporation Web-based speech recognition with scripting and semantic objects
US7366766B2 (en) * 2000-03-24 2008-04-29 Eliza Corporation Web-based speech recognition with scripting and semantic objects
US7370086B2 (en) * 2000-03-24 2008-05-06 Eliza Corporation Web-based speech recognition with scripting and semantic objects
US8024422B2 (en) 2000-03-24 2011-09-20 Eliza Corporation Web-based speech recognition with scripting and semantic objects
US8457970B2 (en) * 2000-05-16 2013-06-04 Swisscom Ag Voice portal hosting system and method
US20010055370A1 (en) * 2000-05-16 2001-12-27 Kommer Robert Van Voice portal hosting system and method
US20020165988A1 (en) * 2000-06-07 2002-11-07 Khan Umair A. System, method, and article of manufacture for wireless enablement of the world wide web using a wireless gateway
US20080095147A1 (en) * 2000-07-21 2008-04-24 Jackson Donald C Method and apparatus for localized voice over internet protocol usage
US8705519B2 (en) 2000-07-21 2014-04-22 Microsoft Corporation Method and apparatus for localized voice over internet protocol usage
US7286521B1 (en) * 2000-07-21 2007-10-23 Tellme Networks, Inc. Localized voice over internet protocol communication
US7568227B2 (en) * 2000-09-28 2009-07-28 Symantec Corporation System and method for analyzing protocol streams for a security-related event
US20020124187A1 (en) * 2000-09-28 2002-09-05 Recourse Technologies, Inc. System and method for analyzing protocol streams for a security-related event
US20110231190A1 (en) * 2000-10-16 2011-09-22 Eliza Corporation Method of and system for providing adaptive respondent training in a speech recognition application
US10522144B2 (en) 2000-10-16 2019-12-31 Eliza Corporation Method of and system for providing adaptive respondent training in a speech recognition application
US20170162200A1 (en) * 2000-10-16 2017-06-08 Eliza Corporation Method of and system for providing adaptive respondent training in a speech recognition application
US7933775B2 (en) * 2000-10-16 2011-04-26 Eliza Corporation Method of and system for providing adaptive respondent training in a speech recognition application based upon the inherent response of the respondent
US9578169B2 (en) 2000-10-16 2017-02-21 Eliza Corporation Method of and system for providing adaptive respondent training in a speech recognition application
US20060122833A1 (en) * 2000-10-16 2006-06-08 Nasreen Quibria Method of and system for providing adaptive respondent training in a speech recognition application based upon the inherent response of the respondent
US7054308B1 (en) * 2000-11-07 2006-05-30 Verizon Laboratories Inc. Method and apparatus for estimating the call grade of service and offered traffic for voice over internet protocol calls at a PSTN-IP network gateway
US7146323B2 (en) * 2000-11-23 2006-12-05 International Business Machines Corporation Method and system for gathering information by voice input
US20020062216A1 (en) * 2000-11-23 2002-05-23 International Business Machines Corporation Method and system for gathering information by voice input
US20040153323A1 (en) * 2000-12-01 2004-08-05 Charney Michael L Method and system for voice activating web pages
US7640163B2 (en) * 2000-12-01 2009-12-29 The Trustees Of Columbia University In The City Of New York Method and system for voice activating web pages
US20020069059A1 (en) * 2000-12-04 2002-06-06 Kenneth Smith Grammar generation for voice-based searches
US7487440B2 (en) * 2000-12-04 2009-02-03 International Business Machines Corporation Reusable voiceXML dialog components, subdialogs and beans
US6973429B2 (en) * 2000-12-04 2005-12-06 A9.Com, Inc. Grammar generation for voice-based searches
US20020198719A1 (en) * 2000-12-04 2002-12-26 International Business Machines Corporation Reusable voiceXML dialog components, subdialogs and beans
US20020169798A1 (en) * 2001-01-10 2002-11-14 Tomoo Ooishi Contents inspecting system and contents inspecting method used therefor
US7496514B2 (en) * 2001-01-12 2009-02-24 International Business Machines Corporation Method and Apparatus for managing dialog management in a computer conversation
US8438031B2 (en) * 2001-01-12 2013-05-07 Nuance Communications, Inc. System and method for relating syntax and semantics for a conversational speech application
US7085723B2 (en) 2001-01-12 2006-08-01 International Business Machines Corporation System and method for determining utterance context in a multi-context speech application
US20020133354A1 (en) * 2001-01-12 2002-09-19 International Business Machines Corporation System and method for determining utterance context in a multi-context speech application
US20020095286A1 (en) * 2001-01-12 2002-07-18 International Business Machines Corporation System and method for relating syntax and semantics for a conversational speech application
US7127402B2 (en) 2001-01-12 2006-10-24 International Business Machines Corporation Method and apparatus for converting utterance representations into actions in a conversational system
US20020133355A1 (en) * 2001-01-12 2002-09-19 International Business Machines Corporation Method and apparatus for performing dialog management in a computer conversational interface
US20020138266A1 (en) * 2001-01-12 2002-09-26 International Business Machines Corporation Method and apparatus for converting utterance representations into actions in a conversational system
US6950793B2 (en) 2001-01-12 2005-09-27 International Business Machines Corporation System and method for deriving natural language representation of formal belief structures
US20080015864A1 (en) * 2001-01-12 2008-01-17 Ross Steven I Method and Apparatus for Managing Dialog Management in a Computer Conversation
US20070265847A1 (en) * 2001-01-12 2007-11-15 Ross Steven I System and Method for Relating Syntax and Semantics for a Conversational Speech Application
US7249018B2 (en) 2001-01-12 2007-07-24 International Business Machines Corporation System and method for relating syntax and semantics for a conversational speech application
US7257537B2 (en) * 2001-01-12 2007-08-14 International Business Machines Corporation Method and apparatus for performing dialog management in a computer conversational interface
US8504371B1 (en) 2001-02-15 2013-08-06 West Corporation Script compliance and agent feedback
US8180643B1 (en) 2001-02-15 2012-05-15 West Corporation Script compliance using speech recognition and compilation and transmission of voice and text records to clients
US8352276B1 (en) 2001-02-15 2013-01-08 West Corporation Script compliance and agent feedback
US7966187B1 (en) 2001-02-15 2011-06-21 West Corporation Script compliance and quality assurance using speech recognition
US9299341B1 (en) 2001-02-15 2016-03-29 Alorica Business Solutions, Llc Script compliance using speech recognition and compilation and transmission of voice and text records to clients
US8489401B1 (en) 2001-02-15 2013-07-16 West Corporation Script compliance using speech recognition
US8811592B1 (en) 2001-02-15 2014-08-19 West Corporation Script compliance using speech recognition and compilation and transmission of voice and text records to clients
US8484030B1 (en) 2001-02-15 2013-07-09 West Corporation Script compliance and quality assurance using speech recognition
US8990090B1 (en) 2001-02-15 2015-03-24 West Corporation Script compliance using speech recognition
US8229752B1 (en) 2001-02-15 2012-07-24 West Corporation Script compliance and agent feedback
US7664641B1 (en) 2001-02-15 2010-02-16 West Corporation Script compliance and quality assurance based on speech recognition and duration of interaction
US9131052B1 (en) 2001-02-15 2015-09-08 West Corporation Script compliance and agent feedback
US8219401B1 (en) 2001-02-15 2012-07-10 West Corporation Script compliance and quality assurance using speech recognition
US7191133B1 (en) * 2001-02-15 2007-03-13 West Corporation Script compliance using speech recognition
US8108213B1 (en) 2001-02-15 2012-01-31 West Corporation Script compliance and quality assurance based on speech recognition and duration of interaction
US7739115B1 (en) 2001-02-15 2010-06-15 West Corporation Script compliance and agent feedback
US8326626B1 (en) 2001-02-15 2012-12-04 West Corporation Script compliance and quality assurance based on speech recognition and duration of interaction
US20020133627A1 (en) * 2001-03-19 2002-09-19 International Business Machines Corporation Intelligent document filtering
US7415538B2 (en) * 2001-03-19 2008-08-19 International Business Machines Corporation Intelligent document filtering
US6832196B2 (en) * 2001-03-30 2004-12-14 International Business Machines Corporation Speech driven data selection in a voice-enabled program
US20020173964A1 (en) * 2001-03-30 2002-11-21 International Business Machines Corporation Speech driven data selection in a voice-enabled program
US6941509B2 (en) 2001-04-27 2005-09-06 International Business Machines Corporation Editing HTML DOM elements in web browsers with non-visual capabilities
US20020161824A1 (en) * 2001-04-27 2002-10-31 International Business Machines Corporation Method for presentation of HTML image-map elements in non visual web browsers
US20020161805A1 (en) * 2001-04-27 2002-10-31 International Business Machines Corporation Editing HTML dom elements in web browsers with non-visual capabilities
US7409349B2 (en) 2001-05-04 2008-08-05 Microsoft Corporation Servers for web enabled speech recognition
US7610547B2 (en) * 2001-05-04 2009-10-27 Microsoft Corporation Markup language extensions for web enabled recognition
US20020178182A1 (en) * 2001-05-04 2002-11-28 Kuansan Wang Markup language extensions for web enabled recognition
US7506022B2 (en) 2001-05-04 2009-03-17 Microsoft Corporation Web enabled recognition architecture
US20020169806A1 (en) * 2001-05-04 2002-11-14 Kuansan Wang Markup language extensions for web enabled recognition
US20030009517A1 (en) * 2001-05-04 2003-01-09 Kuansan Wang Web enabled recognition architecture
US20020165719A1 (en) * 2001-05-04 2002-11-07 Kuansan Wang Servers for web enabled speech recognition
US20020193998A1 (en) * 2001-05-31 2002-12-19 Dvorak Joseph L. Virtual speech interface system and method of using same
US6760705B2 (en) * 2001-05-31 2004-07-06 Motorola, Inc. Virtual speech interface system and method of using same
US20100049521A1 (en) * 2001-06-15 2010-02-25 Nuance Communications, Inc. Selective enablement of speech recognition grammars
US20030046074A1 (en) * 2001-06-15 2003-03-06 International Business Machines Corporation Selective enablement of speech recognition grammars
US20080189111A1 (en) * 2001-06-15 2008-08-07 International Business Machines Corporation Selective enablement of speech recognition grammars
US7610204B2 (en) 2001-06-15 2009-10-27 Nuance Communications, Inc. Selective enablement of speech recognition grammars
US9196252B2 (en) * 2001-06-15 2015-11-24 Nuance Communications, Inc. Selective enablement of speech recognition grammars
US7366673B2 (en) * 2001-06-15 2008-04-29 International Business Machines Corporation Selective enablement of speech recognition grammars
US20030046346A1 (en) * 2001-07-11 2003-03-06 Kirusa, Inc. Synchronization among plural browsers
US7584249B2 (en) * 2001-07-11 2009-09-01 Inderpal Singh Mumick Synchronization among plural browsers using a state manager
US7886004B2 (en) * 2001-07-11 2011-02-08 Kirusa Inc. Exchange of events based synchronization of browsers
USRE48126E1 (en) * 2001-07-11 2020-07-28 Gula Consulting Limited Liability Company Synchronization among plural browsers using a state manager
US20080098130A1 (en) * 2001-07-11 2008-04-24 Mumick Inderpal S Synchronization among plural browsers
US6983307B2 (en) * 2001-07-11 2006-01-03 Kirusa, Inc. Synchronization among plural browsers
US20090287849A1 (en) * 2001-07-11 2009-11-19 Inderpal Singh Mumick Exchange Of Events Based Synchronization Of Browsers
US20060080391A1 (en) * 2001-07-11 2006-04-13 Kirusa, Inc. Synchronization among plural browsers
US20030023431A1 (en) * 2001-07-26 2003-01-30 Marc Neuberger Method and system for augmenting grammars in distributed voice browsing
US8571606B2 (en) 2001-08-07 2013-10-29 Waloomba Tech Ltd., L.L.C. System and method for providing multi-modal bookmarks
US7130800B1 (en) 2001-09-20 2006-10-31 West Corporation Third party verification system
US8165883B2 (en) 2001-10-21 2012-04-24 Microsoft Corporation Application abstraction with dialog purpose
US7711570B2 (en) 2001-10-21 2010-05-04 Microsoft Corporation Application abstraction with dialog purpose
US20040113908A1 (en) * 2001-10-21 2004-06-17 Galanes Francisco M Web server controls for web enabled recognition and/or audible prompting
US8229753B2 (en) * 2001-10-21 2012-07-24 Microsoft Corporation Web server controls for web enabled recognition and/or audible prompting
US8224650B2 (en) 2001-10-21 2012-07-17 Microsoft Corporation Web server controls for web enabled recognition and/or audible prompting
US20030200080A1 (en) * 2001-10-21 2003-10-23 Galanes Francisco M. Web server controls for web enabled recognition and/or audible prompting
US20030130854A1 (en) * 2001-10-21 2003-07-10 Galanes Francisco M. Application abstraction with dialog purpose
US20050131675A1 (en) * 2001-10-24 2005-06-16 Julia Luc E. System and method for speech activated navigation
US7289960B2 (en) * 2001-10-24 2007-10-30 Agiletv Corporation System and method for speech activated internet browsing using open vocabulary enhancement
US20050055218A1 (en) * 2001-10-24 2005-03-10 Julia Luc E. System and method for speech activated navigation
US7716055B1 (en) 2001-11-01 2010-05-11 West Corporation Apparatus and method for verifying transactions using voice print
US6819758B2 (en) 2001-12-21 2004-11-16 West Corporation Method, system, and computer-readable media for performing speech recognition of indicator tones
US7110745B1 (en) * 2001-12-28 2006-09-19 Bellsouth Intellectual Property Corporation Mobile gateway interface
US7062444B2 (en) * 2002-01-24 2006-06-13 Intel Corporation Architecture for DSR client and server development platform
US20030139930A1 (en) * 2002-01-24 2003-07-24 Liang He Architecture for DSR client and server development platform
US20030167168A1 (en) * 2002-03-01 2003-09-04 International Business Machines Corporation Automatic generation of efficient grammar for heading selection
US7054813B2 (en) * 2002-03-01 2006-05-30 International Business Machines Corporation Automatic generation of efficient grammar for heading selection
US6804331B1 (en) 2002-03-27 2004-10-12 West Corporation Method, apparatus, and computer readable media for minimizing the risk of fraudulent receipt of telephone calls
US6862343B1 (en) 2002-03-27 2005-03-01 West Corporation Methods, apparatus, scripts, and computer readable media for facilitating secure capture of sensitive data for a voice-based transaction conducted over a telecommunications network
US20030187658A1 (en) * 2002-03-29 2003-10-02 Jari Selin Method for text-to-speech service utilizing a uniform resource identifier
US20030195751A1 (en) * 2002-04-10 2003-10-16 Mitsubishi Electric Research Laboratories, Inc. Distributed automatic speech recognition with persistent user parameters
US9069836B2 (en) 2002-04-10 2015-06-30 Waloomba Tech Ltd., L.L.C. Reusable multimodal application
US9489441B2 (en) 2002-04-10 2016-11-08 Gula Consulting Limited Liability Company Reusable multimodal application
US9866632B2 (en) 2002-04-10 2018-01-09 Gula Consulting Limited Liability Company Reusable multimodal application
US8380491B2 (en) * 2002-04-19 2013-02-19 Educational Testing Service System for rating constructed responses based on concepts and a model answer
US20030200077A1 (en) * 2002-04-19 2003-10-23 Claudia Leacock System for rating constructed responses based on concepts and a model answer
US20030221181A1 (en) * 2002-05-24 2003-11-27 Petr Hejl Developing and running selected applications in communications
US6937702B1 (en) 2002-05-28 2005-08-30 West Corporation Method, apparatus, and computer readable media for minimizing the risk of fraudulent access to call center resources
US8112275B2 (en) 2002-06-03 2012-02-07 Voicebox Technologies, Inc. System and method for user-specific speech recognition
US8015006B2 (en) 2002-06-03 2011-09-06 Voicebox Technologies, Inc. Systems and methods for processing natural language speech utterances with context-specific domain agents
US8140327B2 (en) 2002-06-03 2012-03-20 Voicebox Technologies, Inc. System and method for filtering and eliminating noise from natural language utterances to improve speech recognition and parsing
US8731929B2 (en) 2002-06-03 2014-05-20 Voicebox Technologies Corporation Agent architecture for determining meanings of natural language utterances
US8155962B2 (en) 2002-06-03 2012-04-10 Voicebox Technologies, Inc. Method and system for asynchronously processing natural language utterances
US20110002449A1 (en) * 2002-06-14 2011-01-06 Nuance Communications, Inc. Voice browser with integrated tcap and isup interfaces
US20030233239A1 (en) * 2002-06-14 2003-12-18 International Business Machines Corporation Voice browser with integrated TCAP and ISUP interfaces
US8364490B2 (en) 2002-06-14 2013-01-29 Nuance Communications, Inc. Voice browser with integrated TCAP and ISUP interfaces
US7822609B2 (en) * 2002-06-14 2010-10-26 Nuance Communications, Inc. Voice browser with integrated TCAP and ISUP interfaces
US8817953B1 (en) 2002-06-18 2014-08-26 West Corporation System, method, and computer readable media for confirmation and verification of shipping address data associated with a transaction
US9232058B1 (en) 2002-06-18 2016-01-05 Open Invention Network, Llc System, method, and computer readable media for confirmation and verification of shipping address data associated with a transaction
US8239444B1 (en) 2002-06-18 2012-08-07 West Corporation System, method, and computer readable media for confirmation and verification of shipping address data associated with a transaction
US7739326B1 (en) 2002-06-18 2010-06-15 West Corporation System, method, and computer readable media for confirmation and verification of shipping address data associated with transaction
US9031845B2 (en) 2002-07-15 2015-05-12 Nuance Communications, Inc. Mobile systems and methods for responding to natural language speech utterance
US7712031B2 (en) 2002-07-24 2010-05-04 Telstra Corporation Limited System and process for developing a voice application
US20060025997A1 (en) * 2002-07-24 2006-02-02 Law Eng B System and process for developing a voice application
US20040034531A1 (en) * 2002-08-15 2004-02-19 Wu Chou Distributed multimodal dialogue system and method
US20060203980A1 (en) * 2002-09-06 2006-09-14 Telstra Corporation Limited Development system for a dialog system
US8046227B2 (en) * 2002-09-06 2011-10-25 Telstra Corporation Limited Development system for a dialog system
US8918506B1 (en) 2002-10-10 2014-12-23 NetCracker Technology Solutions Inc. Architecture for a system and method for work and revenue management
US10360563B1 (en) 2002-10-10 2019-07-23 Netcracker Technology Solutions LLC Architecture for a system and method for work and revenue management
US20070162280A1 (en) * 2002-12-12 2007-07-12 Khosla Ashok M Automatic generation of voice content for a voice response system
US8090583B1 (en) 2002-12-18 2012-01-03 At&T Intellectual Property Ii, L.P. System and method of automatically generating building dialog services by exploiting the content and structure of websites
US8065151B1 (en) * 2002-12-18 2011-11-22 At&T Intellectual Property Ii, L.P. System and method of automatically building dialog services by exploiting the content and structure of websites
US7373300B1 (en) 2002-12-18 2008-05-13 At&T Corp. System and method of providing a spoken dialog interface to a website
US7580842B1 (en) 2002-12-18 2009-08-25 At&T Intellectual Property Ii, Lp. System and method of providing a spoken dialog interface to a website
US20090292529A1 (en) * 2002-12-18 2009-11-26 At&T Corp. System and method of providing a spoken dialog interface to a website
US8949132B2 (en) 2002-12-18 2015-02-03 At&T Intellectual Property Ii, L.P. System and method of providing a spoken dialog interface to a website
US8688456B2 (en) 2002-12-18 2014-04-01 At&T Intellectual Property Ii, L.P. System and method of providing a spoken dialog interface to a website
US8249879B2 (en) 2002-12-18 2012-08-21 At&T Intellectual Property Ii, L.P. System and method of providing a spoken dialog interface to a website
US8442834B2 (en) 2002-12-18 2013-05-14 At&T Intellectual Property Ii, L.P. System and method of providing a spoken dialog interface to a website
US8060369B2 (en) 2002-12-18 2011-11-15 At&T Intellectual Property Ii, L.P. System and method of providing a spoken dialog interface to a website
US7917363B2 (en) 2003-02-11 2011-03-29 Telstra Corporation Limited System for predicting speech recognition accuracy and development for a dialog system
US20060190252A1 (en) * 2003-02-11 2006-08-24 Bradford Starkie System for predicting speech recognition accuracy and development for a dialog system
US8010365B1 (en) 2003-04-14 2011-08-30 The Travelers Indemnity Company Method and system for integrating an interactive voice response system into a host application system
US7684989B1 (en) * 2003-04-14 2010-03-23 Travelers Property Casualty Corp. Method and system for integrating an interactive voice response system into a host application system
US7260535B2 (en) 2003-04-28 2007-08-21 Microsoft Corporation Web server controls for web enabled recognition and/or audible prompting for call controls
US20040230434A1 (en) * 2003-04-28 2004-11-18 Microsoft Corporation Web server controls for web enabled recognition and/or audible prompting for call controls
US20080126078A1 (en) * 2003-04-29 2008-05-29 Telstra Corporation Limited A System and Process for Grammatical Inference
US20040230637A1 (en) * 2003-04-29 2004-11-18 Microsoft Corporation Application controls for speech enabled recognition
US8296129B2 (en) 2003-04-29 2012-10-23 Telstra Corporation Limited System and process for grammatical inference
US20040258217A1 (en) * 2003-05-30 2004-12-23 Kim Ki Chul Voice notice relay service method and apparatus
US20050143975A1 (en) * 2003-06-06 2005-06-30 Charney Michael L. System and method for voice activating web pages
US9202467B2 (en) * 2003-06-06 2015-12-01 The Trustees Of Columbia University In The City Of New York System and method for voice activating web pages
US20050015238A1 (en) * 2003-07-17 2005-01-20 International Business Machines Corporation Computational linguistic statements for providing an autonomic computing environment
US20080140386A1 (en) * 2003-07-17 2008-06-12 International Business Machines Corporation Computational Linguistic Statements for Providing an Autonomic Computing Environment
US7328156B2 (en) * 2003-07-17 2008-02-05 International Business Machines Corporation Computational linguistic statements for providing an autonomic computing environment
US7788082B2 (en) 2003-07-17 2010-08-31 International Business Machines Corporation Computational linguistic statements for providing an autonomic computing environment
US20070043570A1 (en) * 2003-07-18 2007-02-22 Koninklijke Philips Electronics N.V. Method of controlling a dialoging process
US8311835B2 (en) 2003-08-29 2012-11-13 Microsoft Corporation Assisted multi-modal dialogue
US20050091059A1 (en) * 2003-08-29 2005-04-28 Microsoft Corporation Assisted multi-modal dialogue
US8830831B1 (en) * 2003-10-09 2014-09-09 NetCracker Technology Solutions Inc. Architecture for balancing workload
US20070136067A1 (en) * 2003-11-10 2007-06-14 Scholl Holger R Audio dialogue system and voice browsing method
US20050111651A1 (en) * 2003-11-21 2005-05-26 Armando Chavez Script translation
US8718242B2 (en) 2003-12-19 2014-05-06 At&T Intellectual Property Ii, L.P. Method and apparatus for automatically building conversational systems
US20050135571A1 (en) * 2003-12-19 2005-06-23 At&T Corp. Method and apparatus for automatically building conversational systems
US7660400B2 (en) 2003-12-19 2010-02-09 At&T Intellectual Property Ii, L.P. Method and apparatus for automatically building conversational systems
US20100098224A1 (en) * 2003-12-19 2010-04-22 At&T Corp. Method and Apparatus for Automatically Building Conversational Systems
US8175230B2 (en) 2003-12-19 2012-05-08 At&T Intellectual Property Ii, L.P. Method and apparatus for automatically building conversational systems
US8462917B2 (en) 2003-12-19 2013-06-11 At&T Intellectual Property Ii, L.P. Method and apparatus for automatically building conversational systems
US20050171781A1 (en) * 2004-01-08 2005-08-04 Poploskie Jon M. Speech information system
US7552055B2 (en) 2004-01-10 2009-06-23 Microsoft Corporation Dialog component re-use in recognition systems
US8160883B2 (en) 2004-01-10 2012-04-17 Microsoft Corporation Focus tracking in dialogs
US20050154591A1 (en) * 2004-01-10 2005-07-14 Microsoft Corporation Focus tracking in dialogs
US7596499B2 (en) * 2004-02-02 2009-09-29 Panasonic Corporation Multilingual text-to-speech system with limited resources
US20050182630A1 (en) * 2004-02-02 2005-08-18 Miro Xavier A. Multilingual text-to-speech system with limited resources
US7778830B2 (en) 2004-05-19 2010-08-17 International Business Machines Corporation Training speaker-dependent, phrase-based speech grammars using an unsupervised automated technique
US20050261901A1 (en) * 2004-05-19 2005-11-24 International Business Machines Corporation Training speaker-dependent, phrase-based speech grammars using an unsupervised automated technique
US7818178B2 (en) 2004-06-09 2010-10-19 Alcatel-Lucent Usa Inc. Method and apparatus for providing network support for voice-activated mobile web browsing for audio data streams
US20050278179A1 (en) * 2004-06-09 2005-12-15 Overend Kevin J Method and apparatus for providing network support for voice-activated mobile web browsing for audio data streams
US8768711B2 (en) 2004-06-17 2014-07-01 Nuance Communications, Inc. Method and apparatus for voice-enabling an application
US20050283367A1 (en) * 2004-06-17 2005-12-22 International Business Machines Corporation Method and apparatus for voice-enabling an application
US20060092915A1 (en) * 2004-10-28 2006-05-04 Bellsouth Intellectual Property Management Corporation Methods and systems for accessing information across a network
US10748530B2 (en) 2004-11-16 2020-08-18 Microsoft Technology Licensing, Llc Centralized method and system for determining voice commands
US9972317B2 (en) 2004-11-16 2018-05-15 Microsoft Technology Licensing, Llc Centralized method and system for clarifying voice commands
DE102004056166A1 (en) * 2004-11-18 2006-05-24 Deutsche Telekom Ag Speech dialogue system and method of operation
US20060111906A1 (en) * 2004-11-19 2006-05-25 International Business Machines Corporation Enabling voice click in a multimodal page
US7650284B2 (en) 2004-11-19 2010-01-19 Nuance Communications, Inc. Enabling voice click in a multimodal page
US20060122837A1 (en) * 2004-12-08 2006-06-08 Electronics And Telecommunications Research Institute Voice interface system and speech recognition method
US20060190422A1 (en) * 2005-02-18 2006-08-24 Beale Kevin M System and method for dynamically creating records
US7593962B2 (en) * 2005-02-18 2009-09-22 American Tel-A-Systems, Inc. System and method for dynamically creating records
US8849670B2 (en) 2005-08-05 2014-09-30 Voicebox Technologies Corporation Systems and methods for responding to natural language speech utterance
US9263039B2 (en) 2005-08-05 2016-02-16 Nuance Communications, Inc. Systems and methods for responding to natural language speech utterance
US8326634B2 (en) 2005-08-05 2012-12-04 Voicebox Technologies, Inc. Systems and methods for responding to natural language speech utterance
US7917367B2 (en) 2005-08-05 2011-03-29 Voicebox Technologies, Inc. Systems and methods for responding to natural language speech utterance
US8620659B2 (en) 2005-08-10 2013-12-31 Voicebox Technologies, Inc. System and method of supporting adaptive misrecognition in conversational speech
US8332224B2 (en) 2005-08-10 2012-12-11 Voicebox Technologies, Inc. System and method of supporting adaptive misrecognition in conversational speech
US9626959B2 (en) 2005-08-10 2017-04-18 Nuance Communications, Inc. System and method of supporting adaptive misrecognition in conversational speech
US8195468B2 (en) 2005-08-29 2012-06-05 Voicebox Technologies, Inc. Mobile systems and methods of supporting natural language human-machine interactions
US8849652B2 (en) 2005-08-29 2014-09-30 Voicebox Technologies Corporation Mobile systems and methods of supporting natural language human-machine interactions
US8447607B2 (en) 2005-08-29 2013-05-21 Voicebox Technologies, Inc. Mobile systems and methods of supporting natural language human-machine interactions
US7949529B2 (en) * 2005-08-29 2011-05-24 Voicebox Technologies, Inc. Mobile systems and methods of supporting natural language human-machine interactions
US9495957B2 (en) 2005-08-29 2016-11-15 Nuance Communications, Inc. Mobile systems and methods of supporting natural language human-machine interactions
US8069046B2 (en) 2005-08-31 2011-11-29 Voicebox Technologies, Inc. Dynamic speech sharpening
US8150694B2 (en) 2005-08-31 2012-04-03 Voicebox Technologies, Inc. System and method for providing an acoustic grammar to dynamically sharpen speech interpretation
US7983917B2 (en) 2005-08-31 2011-07-19 Voicebox Technologies, Inc. Dynamic speech sharpening
US7624099B2 (en) * 2005-10-13 2009-11-24 Microsoft Corporation Client-server word-breaking framework
US20070088677A1 (en) * 2005-10-13 2007-04-19 Microsoft Corporation Client-server word-breaking framework
US20070088556A1 (en) * 2005-10-17 2007-04-19 Microsoft Corporation Flexible speech-activated command and control
US8620667B2 (en) * 2005-10-17 2013-12-31 Microsoft Corporation Flexible speech-activated command and control
US9632650B2 (en) 2006-03-10 2017-04-25 Microsoft Technology Licensing, Llc Command searching enhancements
US10104174B2 (en) 2006-05-05 2018-10-16 Gula Consulting Limited Liability Company Reusable multimodal application
US11539792B2 (en) 2006-05-05 2022-12-27 Gula Consulting Limited Liability Company Reusable multimodal application
US11368529B2 (en) 2006-05-05 2022-06-21 Gula Consulting Limited Liability Company Reusable multimodal application
US20070260972A1 (en) * 2006-05-05 2007-11-08 Kirusa, Inc. Reusable multimodal application
US10785298B2 (en) 2006-05-05 2020-09-22 Gula Consulting Limited Liability Company Reusable multimodal application
US8670754B2 (en) 2006-05-05 2014-03-11 Waloomba Tech Ltd., L.L.C. Reusable multimodal application
US10516731B2 (en) 2006-05-05 2019-12-24 Gula Consulting Limited Liability Company Reusable multimodal application
US8213917B2 (en) 2006-05-05 2012-07-03 Waloomba Tech Ltd., L.L.C. Reusable multimodal application
US20070294927A1 (en) * 2006-06-26 2007-12-27 Saundra Janese Stevens Evacuation Status Indicator (ESI)
US20080052076A1 (en) * 2006-08-22 2008-02-28 International Business Machines Corporation Automatic grammar tuning using statistical language model generation
US8346555B2 (en) 2006-08-22 2013-01-01 Nuance Communications, Inc. Automatic grammar tuning using statistical language model generation
US10380201B2 (en) 2006-09-07 2019-08-13 Wolfram Alpha Llc Method and system for determining an answer to a query
US9684721B2 (en) 2006-09-07 2017-06-20 Wolfram Alpha Llc Performing machine actions in response to voice input
US11222626B2 (en) 2006-10-16 2022-01-11 Vb Assets, Llc System and method for a cooperative conversational voice user interface
US8515765B2 (en) 2006-10-16 2013-08-20 Voicebox Technologies, Inc. System and method for a cooperative conversational voice user interface
US9015049B2 (en) 2006-10-16 2015-04-21 Voicebox Technologies Corporation System and method for a cooperative conversational voice user interface
US10755699B2 (en) 2006-10-16 2020-08-25 Vb Assets, Llc System and method for a cooperative conversational voice user interface
US10515628B2 (en) 2006-10-16 2019-12-24 Vb Assets, Llc System and method for a cooperative conversational voice user interface
US8073681B2 (en) 2006-10-16 2011-12-06 Voicebox Technologies, Inc. System and method for a cooperative conversational voice user interface
US10297249B2 (en) 2006-10-16 2019-05-21 Vb Assets, Llc System and method for a cooperative conversational voice user interface
US10510341B1 (en) 2006-10-16 2019-12-17 Vb Assets, Llc System and method for a cooperative conversational voice user interface
US20080114747A1 (en) * 2006-11-09 2008-05-15 Goller Michael D Speech interface for search engines
US7742922B2 (en) 2006-11-09 2010-06-22 Goller Michael D Speech interface for search engines
US20080134058A1 (en) * 2006-11-30 2008-06-05 Zhongnan Shen Method and system for extending dialog systems to process complex activities for applications
US9082406B2 (en) * 2006-11-30 2015-07-14 Robert Bosch Llc Method and system for extending dialog systems to process complex activities for applications
US9542940B2 (en) 2006-11-30 2017-01-10 Robert Bosch Llc Method and system for extending dialog systems to process complex activities for applications
US9269097B2 (en) 2007-02-06 2016-02-23 Voicebox Technologies Corporation System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements
US8145489B2 (en) 2007-02-06 2012-03-27 Voicebox Technologies, Inc. System and method for selecting and presenting advertisements based on natural language processing of voice-based input
US8527274B2 (en) 2007-02-06 2013-09-03 Voicebox Technologies, Inc. System and method for delivering targeted advertisements and tracking advertisement interactions in voice recognition contexts
US9406078B2 (en) 2007-02-06 2016-08-02 Voicebox Technologies Corporation System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements
US10134060B2 (en) 2007-02-06 2018-11-20 Vb Assets, Llc System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements
US8886536B2 (en) 2007-02-06 2014-11-11 Voicebox Technologies Corporation System and method for delivering targeted advertisements and tracking advertisement interactions in voice recognition contexts
US11080758B2 (en) 2007-02-06 2021-08-03 Vb Assets, Llc System and method for delivering targeted advertisements and/or providing natural language processing based on advertisements
US8060371B1 (en) 2007-05-09 2011-11-15 Nextel Communications Inc. System and method for voice interaction with non-voice enabled web pages
US8335690B1 (en) 2007-08-23 2012-12-18 Convergys Customer Management Delaware Llc Method and system for creating natural language understanding grammars
US9251786B2 (en) * 2007-08-24 2016-02-02 Samsung Electronics Co., Ltd. Method, medium and apparatus for providing mobile voice web service
US20090055179A1 (en) * 2007-08-24 2009-02-26 Samsung Electronics Co., Ltd. Method, medium and apparatus for providing mobile voice web service
US20090055184A1 (en) * 2007-08-24 2009-02-26 Nuance Communications, Inc. Creation and Use of Application-Generic Class-Based Statistical Language Models for Automatic Speech Recognition
US8135578B2 (en) * 2007-08-24 2012-03-13 Nuance Communications, Inc. Creation and use of application-generic class-based statistical language models for automatic speech recognition
US11599332B1 (en) 2007-10-04 2023-03-07 Great Northern Research, LLC Multiple shell multi faceted graphical user interface
US8326627B2 (en) 2007-12-11 2012-12-04 Voicebox Technologies, Inc. System and method for dynamically generating a recognition grammar in an integrated voice navigation services environment
US9620113B2 (en) 2007-12-11 2017-04-11 Voicebox Technologies Corporation System and method for providing a natural language voice user interface
US8452598B2 (en) 2007-12-11 2013-05-28 Voicebox Technologies, Inc. System and method for providing advertisements in an integrated voice navigation services environment
US8370147B2 (en) 2007-12-11 2013-02-05 Voicebox Technologies, Inc. System and method for providing a natural language voice user interface in an integrated voice navigation services environment
US8983839B2 (en) 2007-12-11 2015-03-17 Voicebox Technologies Corporation System and method for dynamically generating a recognition grammar in an integrated voice navigation services environment
US8140335B2 (en) 2007-12-11 2012-03-20 Voicebox Technologies, Inc. System and method for providing a natural language voice user interface in an integrated voice navigation services environment
US8719026B2 (en) 2007-12-11 2014-05-06 Voicebox Technologies Corporation System and method for providing a natural language voice user interface in an integrated voice navigation services environment
US10347248B2 (en) 2007-12-11 2019-07-09 Voicebox Technologies Corporation System and method for providing in-vehicle services via a natural language voice user interface
US8219407B1 (en) 2007-12-27 2012-07-10 Great Northern Research, LLC Method for processing the output of a speech recognizer
US9502027B1 (en) 2007-12-27 2016-11-22 Great Northern Research, LLC Method for processing the output of a speech recognizer
US9753912B1 (en) 2007-12-27 2017-09-05 Great Northern Research, LLC Method for processing the output of a speech recognizer
US9805723B1 (en) 2007-12-27 2017-10-31 Great Northern Research, LLC Method for processing the output of a speech recognizer
US8793137B1 (en) 2007-12-27 2014-07-29 Great Northern Research LLC Method for processing the output of a speech recognizer
US20090254346A1 (en) * 2008-04-07 2009-10-08 International Business Machines Corporation Automated voice enablement of a web page
US20090254348A1 (en) * 2008-04-07 2009-10-08 International Business Machines Corporation Free form input field support for automated voice enablement of a web page
US8831950B2 (en) 2008-04-07 2014-09-09 Nuance Communications, Inc. Automated voice enablement of a web page
US9047869B2 (en) * 2008-04-07 2015-06-02 Nuance Communications, Inc. Free form input field support for automated voice enablement of a web page
US10089984B2 (en) 2008-05-27 2018-10-02 Vb Assets, Llc System and method for an integrated, multi-modal, multi-device natural language voice services environment
US9711143B2 (en) 2008-05-27 2017-07-18 Voicebox Technologies Corporation System and method for an integrated, multi-modal, multi-device natural language voice services environment
US10553216B2 (en) 2008-05-27 2020-02-04 Oracle International Corporation System and method for an integrated, multi-modal, multi-device natural language voice services environment
US9305548B2 (en) 2008-05-27 2016-04-05 Voicebox Technologies Corporation System and method for an integrated, multi-modal, multi-device natural language voice services environment
US8589161B2 (en) 2008-05-27 2013-11-19 Voicebox Technologies, Inc. System and method for an integrated, multi-modal, multi-device natural language voice services environment
US8260619B1 (en) 2008-08-22 2012-09-04 Convergys Cmg Utah, Inc. Method and system for creating natural language understanding grammars
US8738380B2 (en) 2009-02-20 2014-05-27 Voicebox Technologies Corporation System and method for processing multi-modal device interactions in a natural language voice services environment
US9953649B2 (en) 2009-02-20 2018-04-24 Voicebox Technologies Corporation System and method for processing multi-modal device interactions in a natural language voice services environment
US8719009B2 (en) 2009-02-20 2014-05-06 Voicebox Technologies Corporation System and method for processing multi-modal device interactions in a natural language voice services environment
US9570070B2 (en) 2009-02-20 2017-02-14 Voicebox Technologies Corporation System and method for processing multi-modal device interactions in a natural language voice services environment
US8326637B2 (en) 2009-02-20 2012-12-04 Voicebox Technologies, Inc. System and method for processing multi-modal device interactions in a natural language voice services environment
US9105266B2 (en) 2009-02-20 2015-08-11 Voicebox Technologies Corporation System and method for processing multi-modal device interactions in a natural language voice services environment
US10553213B2 (en) 2009-02-20 2020-02-04 Oracle International Corporation System and method for processing multi-modal device interactions in a natural language voice services environment
US9502025B2 (en) 2009-11-10 2016-11-22 Voicebox Technologies Corporation System and method for providing a natural language content dedication service
US9171541B2 (en) 2009-11-10 2015-10-27 Voicebox Technologies Corporation System and method for hybrid processing in a natural language voice services environment
US8990071B2 (en) 2010-03-29 2015-03-24 Microsoft Technology Licensing, Llc Telephony service interaction management
US20110238414A1 (en) * 2010-03-29 2011-09-29 Microsoft Corporation Telephony service interaction management
US8676589B2 (en) * 2010-11-17 2014-03-18 International Business Machines Corporation Editing telecom web applications through a voice interface
US8788272B2 (en) 2010-11-17 2014-07-22 International Business Machines Corporation Systems and methods for editing telecom web applications through a voice interface
US20120323580A1 (en) * 2010-11-17 2012-12-20 International Business Machines Corporation Editing telecom web applications through a voice interface
US10049669B2 (en) 2011-01-07 2018-08-14 Nuance Communications, Inc. Configurable speech recognition system using multiple recognizers
US10032455B2 (en) 2011-01-07 2018-07-24 Nuance Communications, Inc. Configurable speech recognition system using a pronunciation alignment between multiple recognizers
US8930194B2 (en) 2011-01-07 2015-01-06 Nuance Communications, Inc. Configurable speech recognition system using multiple recognizers
US8898065B2 (en) 2011-01-07 2014-11-25 Nuance Communications, Inc. Configurable speech recognition system using multiple recognizers
US9953653B2 (en) 2011-01-07 2018-04-24 Nuance Communications, Inc. Configurable speech recognition system using multiple recognizers
US10248388B2 (en) 2011-11-15 2019-04-02 Wolfram Alpha Llc Programming in a precise syntax using natural language
US10606563B2 (en) 2011-11-15 2020-03-31 Wolfram Alpha Llc Programming in a precise syntax using natural language
US10929105B2 (en) 2011-11-15 2021-02-23 Wolfram Alpha Llc Programming in a precise syntax using natural language
US9851950B2 (en) 2011-11-15 2017-12-26 Wolfram Alpha Llc Programming in a precise syntax using natural language
US20140040722A1 (en) * 2012-08-02 2014-02-06 Nuance Communications, Inc. Methods and apparatus for voice-enabling a web application
US9292252B2 (en) 2012-08-02 2016-03-22 Nuance Communications, Inc. Methods and apparatus for voice-enabling a web application
US20140039885A1 (en) * 2012-08-02 2014-02-06 Nuance Communications, Inc. Methods and apparatus for voice-enabling a web application
US9292253B2 (en) 2012-08-02 2016-03-22 Nuance Communications, Inc. Methods and apparatus for voice-enabling a web application
US9400633B2 (en) * 2012-08-02 2016-07-26 Nuance Communications, Inc. Methods and apparatus for voice-enabling a web application
US9781262B2 (en) * 2012-08-02 2017-10-03 Nuance Communications, Inc. Methods and apparatus for voice-enabling a web application
US10157612B2 (en) 2012-08-02 2018-12-18 Nuance Communications, Inc. Methods and apparatus for voice-enabling a web application
US11659041B2 (en) * 2012-09-24 2023-05-23 Blue Ocean Robotics Aps Systems and methods for remote presence
US9886944B2 (en) 2012-10-04 2018-02-06 Nuance Communications, Inc. Hybrid controller for ASR
US10068016B2 (en) 2013-10-17 2018-09-04 Wolfram Alpha Llc Method and system for providing answers to queries
US10175938B2 (en) 2013-11-19 2019-01-08 Microsoft Technology Licensing, Llc Website navigation via a voice user interface
US9594737B2 (en) 2013-12-09 2017-03-14 Wolfram Alpha Llc Natural language-aided hypertext document authoring
US9734817B1 (en) * 2014-03-21 2017-08-15 Amazon Technologies, Inc. Text-to-speech task scheduling
US10469663B2 (en) * 2014-03-25 2019-11-05 Intellisist, Inc. Computer-implemented system and method for protecting sensitive information within a call center in real time
US20150281446A1 (en) * 2014-03-25 2015-10-01 Intellisist, Inc. Computer-Implemented System And Method For Protecting Sensitive Information Within A Call Center In Real Time
US10430863B2 (en) 2014-09-16 2019-10-01 Vb Assets, Llc Voice commerce
US11087385B2 (en) 2014-09-16 2021-08-10 Vb Assets, Llc Voice commerce
US9898459B2 (en) 2014-09-16 2018-02-20 Voicebox Technologies Corporation Integration of domain information into state transitions of a finite state transducer for natural language processing
US10216725B2 (en) 2014-09-16 2019-02-26 Voicebox Technologies Corporation Integration of domain information into state transitions of a finite state transducer for natural language processing
US9626703B2 (en) 2014-09-16 2017-04-18 Voicebox Technologies Corporation Voice commerce
US10229673B2 (en) 2014-10-15 2019-03-12 Voicebox Technologies Corporation System and method for providing follow-up responses to prior natural language inputs of a user
US9747896B2 (en) 2014-10-15 2017-08-29 Voicebox Technologies Corporation System and method for providing follow-up responses to prior natural language inputs of a user
US10614799B2 (en) 2014-11-26 2020-04-07 Voicebox Technologies Corporation System and method of providing intent predictions for an utterance prior to a system detection of an end of the utterance
US10431214B2 (en) 2014-11-26 2019-10-01 Voicebox Technologies Corporation System and method of determining a domain and/or an action related to a natural language input
US9965663B2 (en) 2015-04-08 2018-05-08 Fractal Antenna Systems, Inc. Fractal plasmonic surface reader antennas
US10740578B2 (en) 2015-04-08 2020-08-11 Fractal Antenna Systems, Inc. Fractal plasmonic surface reader
US20170186427A1 (en) * 2015-04-22 2017-06-29 Google Inc. Developer voice actions system
US10008203B2 (en) * 2015-04-22 2018-06-26 Google Llc Developer voice actions system
US11657816B2 (en) 2015-04-22 2023-05-23 Google Llc Developer voice actions system
US10839799B2 (en) 2015-04-22 2020-11-17 Google Llc Developer voice actions system
US10338959B2 (en) 2015-07-13 2019-07-02 Microsoft Technology Licensing, Llc Task state tracking in systems and services
US10255921B2 (en) 2015-07-31 2019-04-09 Google Llc Managing dialog data providers
US11120806B2 (en) 2015-07-31 2021-09-14 Google Llc Managing dialog data providers
US11727941B2 (en) 2015-07-31 2023-08-15 Google Llc Managing dialog data providers
US20170069315A1 (en) * 2015-09-09 2017-03-09 Samsung Electronics Co., Ltd. System, apparatus, and method for processing natural language, and non-transitory computer readable recording medium
US11756539B2 (en) * 2015-09-09 2023-09-12 Samsung Electronics Co., Ltd. System, apparatus, and method for processing natural language, and non-transitory computer readable recording medium
US10553210B2 (en) * 2015-09-09 2020-02-04 Samsung Electronics Co., Ltd. System, apparatus, and method for processing natural language, and non-transitory computer readable recording medium
US10635281B2 (en) 2016-02-12 2020-04-28 Microsoft Technology Licensing, Llc Natural language task completion platform authoring for third party experiences
US10095691B2 (en) 2016-03-22 2018-10-09 Wolfram Research, Inc. Method and apparatus for converting natural language to machine actions
US20170337923A1 (en) * 2016-05-19 2017-11-23 Julia Komissarchik System and methods for creating robust voice-based user interface
US10331784B2 (en) 2016-07-29 2019-06-25 Voicebox Technologies Corporation System and method of disambiguating natural language processing requests
US10971157B2 (en) 2017-01-11 2021-04-06 Nuance Communications, Inc. Methods and apparatus for hybrid speech recognition processing
US10528670B2 (en) * 2017-05-25 2020-01-07 Baidu Online Network Technology (Beijing) Co., Ltd. Amendment source-positioning method and apparatus, computer device and readable medium
US20190372916A1 (en) * 2018-05-30 2019-12-05 Allstate Insurance Company Processing System Performing Dynamic Training Response Output Generation Control
US11601384B2 (en) 2018-05-30 2023-03-07 Allstate Insurance Company Processing system performing dynamic training response output generation control
US10897434B2 (en) * 2018-05-30 2021-01-19 Allstate Insurance Company Processing system performing dynamic training response output generation control
US11250093B2 (en) 2018-07-25 2022-02-15 Accenture Global Solutions Limited Natural language control of web browsers
US11093708B2 (en) * 2018-12-13 2021-08-17 Software Ag Adaptive human to machine interaction using machine learning
US20220399014A1 (en) * 2021-06-15 2022-12-15 Motorola Solutions, Inc. System and method for virtual assistant execution of ambiguous command
US11935529B2 (en) * 2021-06-15 2024-03-19 Motorola Solutions, Inc. System and method for virtual assistant execution of ambiguous command

Similar Documents

Publication Publication Date Title
US6604075B1 (en) Web-based voice dialog interface
US8572209B2 (en) Methods and systems for authoring of mixed-initiative multi-modal interactions and related browsing mechanisms
EP1163665B1 (en) System and method for bilateral communication between a user and a system
CA2280331C (en) Web-based platform for interactive voice response (ivr)
US20020077823A1 (en) Software development systems and methods
US6456974B1 (en) System and method for adding speech recognition capabilities to java
US9263039B2 (en) Systems and methods for responding to natural language speech utterance
US9626959B2 (en) System and method of supporting adaptive misrecognition in conversational speech
US7640163B2 (en) Method and system for voice activating web pages
US5819220A (en) Web triggered word set boosting for speech interfaces to the world wide web
CA2467134C (en) Semantic object synchronous understanding for highly interactive interface
US8645122B1 (en) Method of handling frequently asked questions in a natural language dialog service
US20060235694A1 (en) Integrating conversational speech into Web browsers
JP2003015860A (en) Speech driven data selection in voice-enabled program
JP2001034451A (en) Method, system and device for automatically generating human machine dialog
GB2407657A (en) Automatic grammar generator comprising phase chunking and morphological variation
US20050131695A1 (en) System and method for bilateral communication between a user and a system
Brown et al. Web page analysis for voice browsing
Niesler et al. Natural language understanding in the DACST-AST dialogue system
Liu Building complex language processors in VoiceXML.
Qureshi Reconfiguration of speech recognizers through layered-grammar structure to provide ease of navigation and recognition accuracy in speech-web.
Ju Voice-enabled click and dial system
Zhuk Speech Technologies on the Way to a Natural User Interface
Chitte Constructing modular speech interface to remote applications.
Yang et al. Research on realizing speech-operated on-board traveler information system

Legal Events

Date Code Title Description
AS Assignment

Owner name: LUCENT TECHNOLOGIES INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BROWN, MICHAEL KENNETH;GLINSKI, STEPHEN CHARLES;SCHMULT, BRIAN CARL;REEL/FRAME:010680/0317;SIGNING DATES FROM 20000310 TO 20000313

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: CREDIT SUISSE AG, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNOR:ALCATEL-LUCENT USA INC.;REEL/FRAME:030510/0627

Effective date: 20130130

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: ALCATEL-LUCENT USA INC., NEW JERSEY

Free format text: MERGER;ASSIGNOR:LUCENT TECHNOLOGIES INC.;REEL/FRAME:033053/0885

Effective date: 20081101

AS Assignment

Owner name: SOUND VIEW INNOVATIONS, LLC, NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ALCATEL LUCENT;REEL/FRAME:033416/0763

Effective date: 20140630

AS Assignment

Owner name: ALCATEL-LUCENT USA INC., NEW JERSEY

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG;REEL/FRAME:033949/0531

Effective date: 20140819

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: NOKIA OF AMERICA CORPORATION, DELAWARE

Free format text: CHANGE OF NAME;ASSIGNOR:ALCATEL-LUCENT USA INC.;REEL/FRAME:050476/0085

Effective date: 20180103

AS Assignment

Owner name: ALCATEL LUCENT, FRANCE

Free format text: NUNC PRO TUNC ASSIGNMENT;ASSIGNOR:NOKIA OF AMERICA CORPORATION;REEL/FRAME:050668/0829

Effective date: 20190927