Publication number: US20050261908 A1
Publication type: Application
Application number: US 10/849,642
Publication date: Nov 24, 2005
Filing date: May 19, 2004
Priority date: May 19, 2004
Also published as: US7925512
Inventors: Charles Cross, Brien Muschett
Original Assignee: International Business Machines Corporation
Method, system, and apparatus for a voice markup language interpreter and voice browser
US 20050261908 A1
Abstract
The present invention can include a method of allocating an interpreter module within an application program. The application program can create one or more interpreter module instances. The method also can include updating a property descriptor of the interpreter module instance and directing the interpreter module instance to allocate speech and audio resources. Content then can be loaded into the interpreter module instance and run.
Claims (20)
1. Within an application program, a method of allocating an interpreter module comprising:
the application program creating an interpreter module instance;
updating a property descriptor of the interpreter module instance;
directing the interpreter module instance to allocate speech and audio resources; and
loading content into the interpreter module instance and running the content.
2. The method of claim 1, further comprising configuring event listeners for the interpreter module instance.
3. The method of claim 1, wherein the application program is a visual browser and the interpreter module instance is a voice markup language interpreter.
4. The method of claim 1, wherein the application program is a voice server and the interpreter module instance is a voice browser.
5. The method of claim 1, said directing step comprising instructing the interpreter module instance to allocate a text-to-speech component, an automatic speech recognition component, and an audio processing subsystem, wherein the audio processing subsystem is distinct from the text-to-speech component and the automatic speech recognition component.
6. The method of claim 5, wherein the audio processing subsystem records audio for the automatic speech recognition component and plays audio for the text-to-speech component.
7. The method of claim 6, wherein the audio processing subsystem further records user speech received via a communications link.
8. The method of claim 1, wherein a plurality of interpreter module instances are created, said method further comprising establishing a threading policy within the application program for operation of the plurality of interpreter module instances, wherein each interpreter module instance operates asynchronously from the other interpreter module instances.
9. A system for processing speech within a host application program comprising:
a voice markup language interpreter that is instantiated by the host application program;
an application programming interface through which the voice markup language interpreter communicates with the host application program; and
an updateable property descriptor specifying a listening mode and a language to be used by the voice markup language interpreter;
wherein said voice markup language interpreter is configured to allocate speech resources and audio resources under direction of the host application program, wherein the audio resources are distinct from the speech resources.
10. The system of claim 9, wherein the host application program is a visual browser.
11. The system of claim 9, wherein the host application program is a voice server.
12. The system of claim 9, wherein said system functions within a processing architecture comprising the speech resources and audio resources, wherein the speech resources comprise a text-to-speech component and an automatic speech recognition component, wherein the audio resources are configured to record audio and provide recorded audio to the automatic speech recognition component and to play audio generated by the text-to-speech component.
13. A machine readable storage, having stored thereon a computer program having a plurality of code sections executable by a portable computing device for causing the portable computing device to perform the steps of:
the application program creating an interpreter module instance;
updating a property descriptor of the interpreter module instance;
directing the interpreter module instance to allocate speech and audio resources; and
loading content into the interpreter module instance and running the content.
14. The machine readable storage of claim 13, further comprising configuring event listeners for the interpreter module instance.
15. The machine readable storage of claim 13, wherein the application program is a visual browser and the interpreter module instance is a voice markup language interpreter.
16. The machine readable storage of claim 13, wherein the application program is a voice server and the interpreter module instance is a voice browser.
17. The machine readable storage of claim 13, said directing step comprising instructing the interpreter module instance to allocate a text-to-speech component, an automatic speech recognition component, and an audio processing subsystem, wherein the audio processing subsystem is distinct from the text-to-speech component and the automatic speech recognition component.
18. The machine readable storage of claim 17, wherein the audio processing subsystem records audio for the automatic speech recognition component and plays audio for the text-to-speech component.
19. The machine readable storage of claim 18, wherein the audio processing subsystem further records user speech received via a communications link.
20. The machine readable storage of claim 13, wherein a plurality of interpreter module instances are created, said method further comprising establishing a threading policy within the application program for operation of the plurality of interpreter module instances, wherein each interpreter module instance operates asynchronously from the other interpreter module instances.
Description
    BACKGROUND
  • [0001]
    1. Field of the Invention
  • [0002]
    The present invention relates to multimodal browsers and voice servers and, more particularly, to voice markup language interpreters.
  • [0003]
    2. Description of the Related Art
  • [0004]
    Visual browsers are complex application programs that can render graphic markup languages such as Hypertext Markup Language (HTML) or Extensible HTML (XHTML). Visual browsers, however, lack the ability to process audible input and/or output. Still, visual browsers enjoy a significant user base.
  • [0005]
    Voice browsers are the audio counterparts of visual browsers. More particularly, voice browsers can render voice markup languages such as Voice Extensible Markup Language (VXML), thereby allowing users to interact with the voice browser using speech. Voice browsers, however, are unable to process or render graphic markup languages.
  • [0006]
    Recent developments in Web-based applications have led to the development of multimodal interfaces. Multimodal interfaces allow users to access multimodal content, or content having both graphical and audible cues. Through a multimodal interface, the user can choose to interact with or access content using graphic input such as a keyboard or pointer entry, using an audible cue such as a speech input, or using a combination of both. For example, one variety of multimodal interface is a multimodal browser that can render XHTML and Voice markup language, also referred to as X+V markup language.
  • [0007]
    To provide both graphic and voice functionality, developers are left with the option of developing a new multimodal browser or, alternatively, redesigning existing visual browsers to provide voice functionality. The complexity of visual browsers, and browsers in general, however, makes such efforts both time consuming and costly.
  • SUMMARY OF THE INVENTION
  • [0008]
    The inventive arrangements disclosed herein provide a solution for providing speech and/or voice processing functionality within a host application program. In one embodiment, a library of voice markup language functions is provided as a voice markup language interpreter that is accessible via an application programming interface. In another embodiment, one or more instances of the voice interpreter can be created by a host application program thereby providing speech processing capabilities for the host application program. For example, the inventive arrangements disclosed herein can be used to voice-enable a visual browser or as a voice browser for use in a voice server.
  • [0009]
    One aspect of the present invention can include a method of allocating an interpreter module within an application program. The application program can create one or more interpreter module instances. The method also can include updating a property descriptor of the interpreter module instance and directing the interpreter module instance to allocate speech and audio resources. Content then can be loaded into the interpreter module instance and run.
  • [0010]
    Another aspect of the present invention can include a system for processing speech within a host application program. The system can include a voice markup language interpreter that is instantiated by the host application program and an application programming interface through which the voice markup language interpreter communicates with the host application program. The system further can include an updateable property descriptor specifying a listening mode and a language to be used by the voice markup language interpreter. The voice markup language interpreter can be configured to allocate speech resources and audio resources under direction of the host application program, wherein the audio resources are distinct from the speech resources.
  • [0011]
    Another aspect of the present invention can include a machine readable storage being programmed to cause a machine to perform the various steps disclosed herein.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0012]
    There are shown in the drawings, embodiments that are presently preferred; it being understood, however, that the invention is not limited to the precise arrangements and instrumentalities shown.
  • [0013]
    FIG. 1 is a schematic diagram illustrating a system in which a voice markup language interpreter can be used in accordance with one embodiment of the present invention.
  • [0014]
    FIG. 2 is a flow chart illustrating a method of allocating a voice markup language interpreter in accordance with another embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • [0015]
    FIG. 1 is a schematic diagram illustrating a system 100 in which a voice markup language interpreter can be used in accordance with one embodiment of the present invention. As shown, the system 100 can include a computer system 102 having an application program (application) 105, and a voice markup language interpreter (interpreter) 115. The system 100 further can include audio resources such as an audio subsystem 125 and speech processing resources such as an automatic speech recognition (ASR) engine 130 and a text-to-speech (TTS) engine 135. As shown, the interpreter 115 can run in the same address space as the application 105.
  • [0016]
    The computer system 102 can be a server for hosting one or more applications such as voice browsers, interactive voice response systems, voice servers, or the like. For example, in one embodiment, the application 105 can be a visual browser that is to be voice or speech enabled. Accordingly, the application 105 can function as a multimodal browser once the interpreter 115 is instantiated. In another embodiment, the application 105 can be a voice server. In that case, the interpreter 115 can function as, or form, a voice browser. Regardless, the application 105 can be configured to create one or more instances of the interpreter 115, for example a pool of interpreters 115, as may be required, depending upon intended use.
  • [0017]
    The interpreter 115 can include an application programming interface (API) 110 and a property descriptor 120. The interpreter 115 can be implemented as a lightweight software component. When more than one instance of the interpreter 115 is instantiated, for example, the interpreter 115 instances can function as multiple concurrent and serially reusable processing modules.
  • [0018]
    The API 110 provides a library of functions, methods, and the like for accessing the functionality of the interpreter 115. As such, the API 110 provides an interface through which the application 105 and the interpreter 115 can communicate. The property descriptor 120 is a configurable electronic document that specifies operational parameters of the interpreter 115. In one embodiment, the property descriptor 120 can specify modes of operation and a locale. For example, one mode of operation can include a listening mode such as “always listening”, “push to talk”, or “push to activate”. The listening mode determines when audio data is streamed to the speech recognition engine and how the end of an utterance is determined. That is, the listening mode can specify how audio events are to be detected and handled. The locale can specify the language to be used in speech processing functions, whether speech recognition or text-to-speech.
  • [0019]
    Table 1 below illustrates additional properties that can be specified in or by the property descriptor 120.
    TABLE 1
    CACHE_FC_SIZE: Property used to define the maximum size of the file cache.
    CACHE_FC_THOLD: Property used to define the file cache threshold.
    CACHE_FSE_LEN: Property used to define the maximum size of a file entry for the platform file system.
    CACHE_MC_SIZE: Property used to define the maximum size of the memory cache.
    CACHE_NAME: Property used to define the symbolic name of the resource cache to use.
    CALL_TIMEOUT: Property used to configure the length of time the browser should wait to connect to a call if not provided one.
    FETCH_EXPIRES: Property used to define the default expiration time for fetched resources.
    FETCH_THREADS: Property used to define the initial number of fetch threads to use for fetching resources.
    FETCH_TIMEOUT: Property used to define the default fetch timeout.
    LOCALE_LIST: Property used to define the possible set of locales to be used by the VoiceXML application.
    OVERRIDE_SERVICES: Property used to override the default mechanism for obtaining browser services.
    OVERRIDE_SITE_DOC: Property used to override the site document URL for this browser session.
    PP_CAPACITY: Property used to set the capacity of the parser pool of the interpreter.
    PP_PRELOAD: Property used to set the preload count for the parser pool of the interpreter.
    SITE_DOC: Property used to set the site document URL.
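    The example below is a minimal sketch of how a host application might populate a property descriptor before allocating an interpreter instance. The Java class name, the LISTENING_MODE key, and the specific values are assumptions made for illustration; only the Table 1 property names and the listening mode and locale concepts come from the text above.

```java
import java.util.Properties;

// Hypothetical helper that builds a property descriptor for an interpreter
// instance. The key names mirror Table 1; the class name, the LISTENING_MODE
// key, and all values are illustrative assumptions.
public class InterpreterProperties {

    public static Properties defaults() {
        Properties props = new Properties();
        // Listening mode controls when audio is streamed to the recognizer
        // and how the end of an utterance is detected.
        props.setProperty("LISTENING_MODE", "push-to-talk"); // or "always-listening", "push-to-activate"
        // Locale(s) used for speech recognition and text-to-speech.
        props.setProperty("LOCALE_LIST", "en-US,en-GB");
        // A few of the cache and fetch properties listed in Table 1.
        props.setProperty("CACHE_MC_SIZE", "4194304");   // maximum memory cache size (bytes, assumed)
        props.setProperty("FETCH_TIMEOUT", "30000");     // default fetch timeout (ms, assumed)
        props.setProperty("FETCH_THREADS", "2");         // initial number of fetch threads
        return props;
    }

    public static void main(String[] args) {
        defaults().list(System.out); // dump the configured property values
    }
}
```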
  • [0020]
    In one embodiment, the interpreter 115 can function as a voice markup language interpreter. Such can be the case, for example, where the application 105 is implemented as a visual browser. The interpreter 115 can be configured to parse and render any of a variety of voice markup languages such as Voice Extensible Markup Language (VXML) or any subset thereof. For example, the interpreter 115 can be configured to render the subset of VXML used by the Extensible Hypertext Markup Language (XHTML) and Voice markup language, commonly referred to as X+V markup language. In this manner, the interpreter 115 can function in a complementary fashion with the application 105 to provide multimodal browsing. The application 105 can process graphical markup language and provide any voice markup language to the interpreter 115 for rendering.
  • [0021]
    In another embodiment, the interpreter 115 can provide the core voice markup language rendering capabilities for implementing a voice browser. In that case, the application 105 can be a voice server.
  • [0022]
    As noted, the system 100 can include a variety of resources such as the audio subsystem 125, the ASR engine 130, and the TTS engine 135. The audio resources are distinct from the speech resources. More particularly, the audio subsystem 125 is distinct from both the ASR engine 130 and the TTS engine 135. Rather than incorporating audio handling capabilities within the speech resources, i.e. the ASR engine 130 and/or the TTS engine 135, the audio subsystem 125 can handle such functions. The interpreter 115 can manipulate the speech resources through the speech services API 116. This allows the interpreter 115 to be implemented independently of the speech resources, thereby facilitating the use of speech resources from different vendors.
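    Because the interpreter reaches the speech engines only through the speech services API, one way to picture that boundary is a vendor-neutral interface layer. The sketch below is an assumption: the text names a speech services API (116) but does not define its methods, so the interface, method, and type names here are hypothetical placeholders for how such a layer could look.

```java
// Vendor-neutral speech services layer, sketched as an interface. Every name
// below is a hypothetical placeholder; the patent does not define this API.
public interface SpeechServices {

    /** Allocate an automatic speech recognition engine for the given locale. */
    Recognizer allocateRecognizer(String locale);

    /** Allocate a text-to-speech engine for the given locale. */
    Synthesizer allocateSynthesizer(String locale);

    interface Recognizer {
        void loadGrammar(String grammarUri);   // e.g. a link or form grammar
        String recognize(byte[] audio);        // recorded audio in, recognized text out
        void release();
    }

    interface Synthesizer {
        byte[] synthesize(String text);        // text in, rendered audio out
        void release();
    }
}
```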
  • [0023]
    Thus, in one embodiment, the audio subsystem 125 can capture or record audio from a user input and provide that audio to the ASR engine 130. Similarly, the audio subsystem 125 can obtain recorded and/or synthetic speech from the TTS engine 135 and/or other audio playback system and provide that audio to a user. The audio subsystem 125 further can route audio between the various speech resources and a user device.
  • [0024]
    The audio subsystem 125 can include one or more audio listeners. For example, the audio subsystem 125 can include play and record listeners. The record listener can detect and record audio, including speech, received from a user, for example via a communications link. Such speech can be recorded and provided to the ASR engine 130. The play listener can detect speech generated by the TTS engine 135 to be played back to a user.
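    A minimal sketch of the play and record listeners described above might look as follows. The listener names and method signatures are assumptions; the text states only that the audio subsystem can include listeners that record user speech for the ASR engine and detect TTS output for playback.

```java
// Sketch of the record and play listeners; names and signatures are assumed.

/** Receives audio captured from the user, e.g. over a communications link, for the ASR engine. */
interface RecordListener {
    void audioRecorded(byte[] audio);
}

/** Receives audio generated by the TTS engine (or recorded prompts) to be played to the user. */
interface PlayListener {
    void audioAvailable(byte[] audio);
}
```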
  • [0025]
    Because each of the processing resources is distinct, i.e., the audio subsystem 125, the ASR engine 130, and the TTS engine 135, each can be allocated individually. Such an arrangement further allows audio to be handled in a manner that is independent from the processing functions to be performed upon the audio.
  • [0026]
    While the application 105 and the interpreter 115 can function in a cooperative manner, the audio subsystem 125, the ASR engine 130, and the TTS engine 135 need not be part of the same system. That is, in one embodiment, the processing resources can execute in one or more other computer systems. Such computer systems can be proximate to, or remotely located from the computer system 102. For example, the audio and speech resources can be provided as individual services that are accessible to the interpreter 115 and application 105 via a communications network 122, which can include, but is not limited to, a local area network, a wide area network, the public switched telephone network, a wireless or mobile communications network, the Internet, and/or the like. Still, in another embodiment, the resources can be located within a same computer system as the application 105 and/or the interpreter 115.
  • [0027]
    In operation, one or more instances of the interpreter 115 can be created by the application 105. Once created, the application 105 can access the audio and speech resources via the interpreter 115. That is, the interpreter 115 can render voice markup languages and utilize the audio subsystem 125, the ASR engine 130, and the TTS engine 135. Accordingly, voice services can be provided to a user accessing the computer system 102 via a telephone 140 or a computer system 145 over another communications network 122.
  • [0028]
    The application program 105 can be synchronized with the interpreter 115 through events and state change information, i.e., through the addition of XML event listeners and state listeners. Events and state changes are propagated from the interpreter 115 to the application 105 through these listeners. The application 105 uses the APIs for adding event and state change listeners to the interpreter 115. A listener is an object-oriented programming technique for implementing a callback function. Using a state change event allows APIs to function properly, as some APIs may fail if the interpreter 115 is in the wrong state. Accordingly, the application 105 can wait until the interpreter 115 is in the correct state, using the state change listener, before calling those APIs that are sensitive to the internal state of the interpreter 115.
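    The following sketch illustrates the state-listener pattern just described: the host application registers a state change listener and blocks until the interpreter reports a state in which a state-sensitive call is safe. The interpreter interface, state names, and use of a CountDownLatch are assumptions made for illustration; only the listener/callback pattern itself comes from the text.

```java
import java.util.concurrent.CountDownLatch;

// Host application waits for the interpreter to reach a safe state before
// calling a state-sensitive API. Interpreter, listener, and state names are
// hypothetical.
public class StateSynchronizationExample {

    enum InterpreterState { ALLOCATING, READY, RUNNING, DONE }

    interface StateListener { void stateChanged(InterpreterState newState); }

    interface Interpreter {
        void addStateListener(StateListener listener);
        void runContent();   // assumed to fail unless the interpreter is READY
    }

    static void runWhenReady(Interpreter interpreter) throws InterruptedException {
        CountDownLatch ready = new CountDownLatch(1);
        interpreter.addStateListener(state -> {
            if (state == InterpreterState.READY) {
                ready.countDown();   // unblock the waiting application thread
            }
        });
        ready.await();               // block until the READY state is reported
        interpreter.runContent();    // now safe to call the state-sensitive API
    }
}
```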
  • [0029]
    FIG. 2 is a flow chart illustrating a method 200 of allocating a voice markup language interpreter in accordance with another embodiment of the present invention. The method 200 can be performed by an application program having a need for voice processing functionality. Accordingly, the method 200 can begin in a state where the application program has detected a need for voice processing or multimodal operation, for example by parsing a markup language document and identifying one or more tags associated with speech and/or audio processing.
  • [0030]
    In step 205, the application program, via the API provided as part of the interpreter, can create an instance of the interpreter. For example, the instance can be created using a factory design pattern or a constructor. In step 210, the application program can modify the property descriptor of the interpreter in accordance with the desired listening mode and language to be used to interact with the interpreter. The application can be programmed to configure the property descriptor to cause the interpreter to operate in a particular fashion or for a particular mode of operation.
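    A minimal sketch of steps 205 and 210, assuming hypothetical factory and interpreter types, might look like this. The property keys echo the listening mode and locale properties discussed earlier; none of the identifiers are taken from an actual product API.

```java
// Steps 205 and 210: create an interpreter instance through a factory and
// update its property descriptor. All type, method, and property names are
// hypothetical.
public final class AllocationSteps {

    interface VoiceInterpreter {
        void setProperty(String name, String value);
    }

    interface InterpreterFactory {
        VoiceInterpreter createInterpreter();
    }

    static VoiceInterpreter allocate(InterpreterFactory factory) {
        VoiceInterpreter interpreter = factory.createInterpreter();  // step 205: factory creation
        interpreter.setProperty("LISTENING_MODE", "push-to-talk");   // step 210: listening mode
        interpreter.setProperty("LOCALE_LIST", "en-US");             // step 210: language/locale
        return interpreter;
    }
}
```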
  • [0031]
    In step 212, an ECMAScript Scope and Scope Factory can be set. The interpreter used with the multimodal browser can share the ECMAScript engine from the visual browser, i.e. the application, through an abstract interface called Scope. Scope is an abstraction of the hierarchical VoiceXML variable scopes. A setScopeFactory method enables the application to pass a callback function to the interpreter which allows the interpreter to create new scopes (ECMAScript objects) at runtime.
  • [0032]
    Additionally, the interpreter used with the multimodal browser shares the Document Object Model (DOM) of the document being rendered by the visual browser. This is done with an API setECMAScriptScope. Synchronization between speech recognition events and update of visual input elements can then be implemented by the interpreter directly updating the DOM using the Scope interface and the “document” variable contained in the Scope object passed in through setECMAScriptScope.
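    The sketch below illustrates the Scope sharing described in the two preceding paragraphs. The method names setScopeFactory and setECMAScriptScope come from the text; the shapes of the Scope and ScopeFactory interfaces shown here are assumptions.

```java
// Scope sharing between the visual browser and the interpreter. Only the
// setScopeFactory and setECMAScriptScope names appear in the text; the
// interface shapes are assumed.
public final class ScopeSharingExample {

    /** Abstraction of the visual browser's hierarchical ECMAScript variable scopes. */
    interface Scope {
        Object get(String name);     // e.g. get("document") returns the shared DOM
        void put(String name, Object value);
        Scope createChild();         // a nested VoiceXML variable scope
    }

    /** Callback allowing the interpreter to create new scopes at runtime. */
    interface ScopeFactory {
        Scope newScope(Scope parent);
    }

    interface VoiceInterpreter {
        void setScopeFactory(ScopeFactory factory);
        void setECMAScriptScope(Scope documentScope);
    }

    static void shareEngine(VoiceInterpreter interpreter, Scope browserScope) {
        // Lets the interpreter create ECMAScript objects inside the browser's engine.
        interpreter.setScopeFactory(parent -> parent.createChild());
        // Gives the interpreter the "document" variable so it can update the DOM
        // directly when speech recognition events fill visual input elements.
        interpreter.setECMAScriptScope(browserScope);
    }
}
```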
  • [0033]
    In step 215, the application program can instruct the newly created interpreter instance to begin allocating resources. More particularly, the interpreter can be instructed to allocate speech resources such as an ASR engine and/or a TTS engine. In step 220, the application program can instruct the interpreter to allocate the audio subsystem. As noted, the audio subsystem can be allocated separately from the speech resources as the audio subsystem is distinct from the speech resources. In step 225, the application program optionally can instruct the interpreter to add event listeners. For example, in the case where the interpreter is to function with a visual browser, the event listeners can be Extensible Markup Language (XML) listeners.
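    A sketch of steps 215 through 225, assuming hypothetical method names, is shown below. It emphasizes that the audio subsystem is allocated separately from the speech resources and that adding event listeners is optional.

```java
// Steps 215 through 225: allocate speech resources and the audio subsystem
// separately, then optionally add an event listener. Method names are assumed.
public final class ResourceAllocationSteps {

    interface XmlEventListener {
        void handleEvent(String eventType, String payload);
    }

    interface VoiceInterpreter {
        void allocateSpeechResources();                       // step 215: ASR and/or TTS engines
        void allocateAudioSubsystem();                        // step 220: audio handled separately
        void addEventListener(XmlEventListener listener);     // step 225: optional XML event listener
    }

    static void prepare(VoiceInterpreter interpreter) {
        interpreter.allocateSpeechResources();
        interpreter.allocateAudioSubsystem();
        interpreter.addEventListener((type, payload) ->
                System.out.println("XML event: " + type + " -> " + payload));
    }
}
```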
  • [0034]
    In step 230, content can be loaded into the interpreter from the application program. For example, in the case where the interpreter functions as a voice browser in a voice server context, a site VXML or other site voice markup language document can be set. The current VXML or other voice markup language document can be set for the current browser session. In the case where the interpreter functions as a multimodal browser, VXML link fragments for Command and Control and Content Navigation (C3N) can be loaded. Further, VXML form fragments can be loaded as content to be rendered.
  • [0035]
    In step 235, the content can be executed or run. For example, where the interpreter functions with a visual browser, the interpreter can enable document level link grammars and run a form fragment by identifier. Where the interpreter functions as a voice browser, the current voice markup language document can be run. In any case, the interpreter can begin listening for events.
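    The following sketch illustrates steps 230 and 235 for the multimodal browser case: C3N link fragments and form fragments are loaded, document-level link grammars are enabled, and a form is run by identifier. The method names and fragment contents are illustrative assumptions.

```java
// Steps 230 and 235 for the multimodal browser case. Method names and the
// fragment contents are placeholders, not an actual product API.
public final class ContentSteps {

    interface VoiceInterpreter {
        void loadLinkFragment(String vxml);                // C3N link grammar fragment
        void loadFormFragment(String id, String vxml);     // VXML form fragment to be rendered
        void enableLinkGrammars();                         // enable document-level link grammars
        void runForm(String id);                           // run a loaded form fragment by identifier
    }

    static void loadAndRun(VoiceInterpreter interpreter) {
        // Step 230: content supplied by the host application (the visual browser).
        interpreter.loadLinkFragment("<link>...</link>");
        interpreter.loadFormFragment("checkout", "<form id=\"checkout\">...</form>");
        // Step 235: execute the content and begin listening for events.
        interpreter.enableLinkGrammars();
        interpreter.runForm("checkout");
    }
}
```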
  • [0036]
    The application can listen and respond to events generated by the interpreter in step 240. Notably, the application can determine whether the received event is a user event, such as a VoiceXML user event from a C3N grammar. If so, the interpreter can execute a user interface response to the C3N event. If the event is an XML event, a Document Object Model Level 2 (DOM2) event or an event formatted using another suitable protocol can be created and propagated through the DOM.
  • [0037]
    In step 245, if the interpreter is finished running the loaded content, the method can continue to step 230 to load and execute additional content. If not, the method can loop back to step 240 to continue listening and responding to further events.
  • [0038]
    While the method 200 has been described with reference to a single interpreter, it should be appreciated that multiple instances of the interpreter can be created and run. Accordingly, in another embodiment, a pool of one or more interpreter instances can be created by the application program. A threading policy can be established in the application program to facilitate the asynchronous operation of each of the interpreter instances.
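    As one way to picture the pooling described above, the sketch below runs each interpreter instance on its own worker thread under a simple fixed-size executor. The pool size, interpreter interface, and threading policy shown are assumptions; the text requires only that instances operate asynchronously from one another.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A pool of interpreter instances run under an application-level threading
// policy, one worker thread per instance, so each runs asynchronously from the
// others. Pool size, interface, and executor choice are assumptions.
public final class InterpreterPoolExample {

    interface VoiceInterpreter {
        void runContent(String voiceMarkup);
    }

    static void runPool(List<VoiceInterpreter> interpreters, List<String> documents) {
        ExecutorService executor = Executors.newFixedThreadPool(interpreters.size());
        for (int i = 0; i < interpreters.size(); i++) {
            VoiceInterpreter interpreter = interpreters.get(i);
            String document = documents.get(i);
            executor.submit(() -> interpreter.runContent(document));  // asynchronous per instance
        }
        executor.shutdown();   // accept no new work; running interpreters continue to completion
    }
}
```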
  • [0039]
    The present invention can be realized in hardware, software, or a combination of hardware and software. The present invention can be realized in a centralized fashion in one computer system or in a distributed fashion where different elements are spread across several interconnected computer systems. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software can be a general-purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the methods described herein.
  • [0040]
    The present invention also can be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein, and which when loaded in a computer system is able to carry out these methods. Computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
  • [0041]
    This invention can be embodied in other forms without departing from the spirit or essential attributes thereof. Accordingly, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the invention.
Classifications
U.S. Classification: 704/270.1, 704/E15.044
International Classification: G10L15/26, H04M3/493, G10L11/00
Cooperative Classification: H04M3/4938
European Classification: H04M3/493W
Legal Events
Jun 22, 2004 - AS - Assignment
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CROSS, JR., CHARLES W.;MUSCHETT, BRIEN H.;REEL/FRAME:015517/0597
Effective date: 20040519
May 13, 2009 - AS - Assignment
Owner name: NUANCE COMMUNICATIONS, INC., MASSACHUSETTS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:022689/0317
Effective date: 20090331
Sep 10, 2014 - FPAY - Fee payment
Year of fee payment: 4