WO2007130256A2 - Reusable multimodal application - Google Patents

Reusable multimodal application

Info

Publication number
WO2007130256A2
WO2007130256A2 (PCT/US2007/009102)
Authority
WO
WIPO (PCT)
Prior art keywords
multimodal
application
visual
user
reusable
Prior art date
Application number
PCT/US2007/009102
Other languages
French (fr)
Other versions
WO2007130256A3 (en)
Inventor
Ewald C. Anderl
Original Assignee
Anderl Ewald C
Priority date
Filing date
Publication date
Application filed by Anderl Ewald C
Priority to EP07755388.1A (EP2050015B1)
Priority to EP07107463A (EP1873661A2)
Publication of WO2007130256A2
Publication of WO2007130256A3

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 File systems; File servers
    • G06F 16/17 Details of further file system functions
    • G06F 16/178 Techniques for file synchronisation in file systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 Information retrieval; Database structures therefor of structured data, e.g. relational data
    • G06F 16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/951 Indexing; Web crawling techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9538 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F 3/038 Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/487 Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M 3/493 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/487 Arrangements for providing information services, e.g. recorded voice services or time announcements
    • H04M 3/493 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals
    • H04M 3/4938 Interactive information services, e.g. directory enquiries; Arrangements therefor, e.g. interactive voice response [IVR] systems or voice portals comprising a voice browser which renders and interprets, e.g. VoiceXML
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/038 Indexing scheme relating to G06F3/038
    • G06F 2203/0381 Multimodal input, i.e. interface arrangements enabling the user to issue commands by simultaneous use of input devices of different nature, e.g. voice plus gesture on digitizer
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2201/00 Electronic components, circuits, software, systems or apparatus used in telephone systems
    • H04M 2201/42 Graphical user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 2207/00 Type of exchange or network, i.e. telephonic medium, in which the telephonic communication takes place
    • H04M 2207/18 Type of exchange or network, i.e. telephonic medium, in which the telephonic communication takes place wireless networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 3/00 Automatic or semi-automatic exchanges
    • H04M 3/42 Systems providing special services or facilities to subscribers
    • H04M 3/42204 Arrangements at the exchange for service or number selection by voice

Definitions

  • the present invention relates generally to the field of networked computing. More particularly, the invention provides a reusable multimodal application on a mobile device.
  • multimodality comprises any human mode of interaction on the input side of an application, for example, the user's voice, and/or any visual mode, etc., that allows users to speak, hear, type, touch or see in that application, and one or more human interaction modes on the output side of the application such as the ability to hear and visually see the output.
  • Multimodal interactions thus extend web or other application user interface to allow multiple modes of interaction, offering users, for example, the choice of using their voice, or an input device such as a key pad, keyboard, mouse or stylus.
  • users will, for example, be able to listen to spoken prompts and audio, and to view information on graphical displays.
  • a reusable multimodal application is provided on the mobile device.
  • a user transmits multimodal commands to the multimodal platform via the mobile network.
  • the one or more modes of communication that are inputted are transmitted to the multimodal platform(s) via the mobile network(s) and thereafter synchronized and processed at the multimodal platform.
  • the synchronized and processed information is transmitted to the multimodal application. If required, the user verifies and appropriately modifies the synchronized and processed information.
  • the verified and modified information is transferred from the multimodal application to the visual application.
  • the final result(s) are derived by inputting the verified and modified information into the visual application.
  • the multimodal application seamlessly combines graphics, text and audio output with speech, text, and touch input to deliver dramatically enhanced end user and services experiences. Compared to single-mode voice and visual search applications, the multimodal application of this invention is easier and more intuitive to use.
  • the method and system disclosed herein provides a multimodal application that allows the use of a plurality of modes of communication, whichever is desired or most effective depending on the data needed or the usage environment.
  • Also disclosed herein is a method using a multimodal application that serves, using a simple set of interface rules, a standard multimodal interface to any visual search interface.
  • the multimodal application of this invention requires input from the user to determine a specific selection among a list of possible results. For example, the multimodal application could be used for searching a music album from among hundreds of possible selections.
  • the method disclosed herein enhances the selection process of menus on a mobile device by allowing the user to select from a visual list of choices using one or more modes of input.
  • Also disclosed herein is a method providing a multimodal application that precludes the need of performing custom development for each application in order to provide a multimodal functionality to the mobile device.
  • Also disclosed herein is a method of implementing multimodal functionality without requiring a replacement of the entire software or hardware infrastructure of the mobile device, for example, without the need to install a new browser on the mobile device.
  • Also disclosed herein is a method to enable telecommunications carriers to extend their existing visual based portals with text or touchtone input modes, and store-fronts on the mobile device, with a search capability capable of accepting multiple modes of input. For example, it is convenient for the users to speak the service or information topic that they are interested in, and see the service appear immediately on the phone, often bypassing several levels and menus in the process. It is also much easier to speak the name of a category, title, or artist and see that filled in as the text search criteria automatically. By making more content on the mobile device easily accessible, content providers and carriers can realize increased revenues.
  • the method disclosed herein also reduces the time taken to access a desired menu choice by optimally enabling multiple modes on a mobile device. For example, the method disclosed herein can reduce the time of choosing a desired menu from approximately thirty seconds and five key clicks down to three seconds and only one key click. Even if the menu structure in a portal on the mobile device were to change frequently, the multimodal application would enable the user to continue to effectively conduct transactions on the mobile device without difficulty.
  • a reusable multimodal application offers a significant revenue opportunity, and more importantly, a market capture and retention opportunity for the mobile operator. Capturing this opportunity is essential in the face of declining average revenue per user (ARPU) and increasing competitive pressure.
  • a multimodal application offers opportunities including additional usage, bi-directional pull through of voice and data services, increased revenue from content providers, advertising revenue, premium services, churn reduction and upgrade potentials.
  • the multimodal application gives workers operating in a mobility context, the opportunity to access and leverage the same systems and information that colleagues close to intranet resources enjoy.
  • FIG. 1 illustrates a method of accepting multimodal inputs and deriving synchronized and processed information, the method implemented in a system comprising a plurality of mobile devices operated by users who are connected to a plurality of mobile networks that contains a plurality of multimodal platforms.
  • FIG.2 illustrates a system for accepting multimodal inputs and deriving synchronized and processed information, comprising a plurality of mobile devices operated by users who are connected to a plurality of mobile networks that contains a plurality of multimodal platforms.
  • FIG.3 illustrates the multiple modes of interaction between the user and the multimodal application.
  • FIG. 1 illustrates a method of accepting multimodal inputs and deriving synchronized and processed information, the method implemented in a system comprising a plurality of mobile devices operated by users who are connected to a plurality of mobile networks that contains a plurality of multimodal platforms.
  • the plurality of mobile devices contain a plurality of applications.
  • the plurality of applications also comprise visual applications.
  • a multimodal application enables multimodality in the plurality of applications that reside on a plurality of mobile devices 101.
  • the multimodal application is invoked by invoking the visual application on the mobile device based on the request of the user 102.
  • the multimodal application accepts input information from the user in one or more modes of communication 103, such as in voice 103a, text 103b and other input modes 103c.
  • the one or more modes of communication that are inputted are transmitted to the multimodal platform(s) via the mobile network(s) 104 and then synchronized and processed 105 at the multimodal platform.
  • the synchronized and processed information is transmitted to the multimodal application 106.
  • the synchronized and processed information is provided to the user for verification and modification in one or more communication modes 107.
  • the following example illustrates the synchronizing and processing step. If the input information is a search request in the form of an audio command along with text input, then the audio command and the text input, along with the associated search grammar of the multimodal application, are transferred to the multimodal platform through the mobile network.
  • the two modes of input i.e., the audio command and text input are synchronized.
  • Grammar elements associated with the command and search grammar are recognized by the multimodal platform.
  • Processed information in the form of search words is determined by the multimodal platform based on the recognized grammar elements, and the synchronized and processed information is transferred back to the multimodal application.
  • the user verifies and appropriately modifies the synchronized and processed information 108.
  • the verified and modified information is transferred from the multimodal application to the visual application 109.
  • the final result(s) is derived by inputting the verified and modified results into the visual application.
  • the final results are provided to the user in one or more modes of communication 110.
  • the system and method disclosed herein allows users to simultaneously use voice, text, graphics, keypad, stylus and haptic modes to interface with wireless services and applications.
  • FIG. 2 illustrates a system for accepting multimodal inputs and deriving synchronized and processed information, comprising a plurality of mobile devices operated by users who are connected to a plurality of mobile networks that contains a plurality of multimodal platforms.
  • the mobile device 202 comprises a multimodal application 202a that is capable of receiving inputs from the user 201 in multiple modes of input.
  • the multimodal application 202a uses a set of interface rules to provide a standard input interface on the mobile device 202.
  • the mobile device 202 communicates with a multimodal platform 204 via a mobile network 203.
  • the system disclosed herein comprises a plurality of mobile devices, multimodal platforms and mobile networks.
  • the multimodal platform 204 further comprises a voice browser 204a, a stack 204b, a user personalization module 204c, a multimodal module 204d, a billing interface 204e, a markup content handler 204f, an event and session manager 204g, and a synchronization module 204h.
  • the voice browser 204a allows users to conduct searches using audio commands.
  • the stack 204b is a reserved area of memory used to keep track of internal operations.
  • the user personalization module 204c stores user specific information.
  • the multimodal module 204d contains a grammar module for recognizing the grammar elements associated with an audio command.
  • the billing interface 204e generates user specific billing information.
  • the markup content handler 204f provides the visual markup or data content associated with the visual interface.
  • telecommunication carriers may monetize multimodal applications immediately, thereby leveraging devices already widely deployed in their networks.
  • the event and session manager 204g manages the events and sessions for networking activities associated with the multimodal platform 204.
  • the synchronization module 204h synchronizes the voice, visual and haptic modes of communication.
  • the method and system disclosed herein supports a plurality of mobile networks 203, inclusive of, but not restricted to, code division multiple access (CDMA), CDMA 1x/3x, global system for mobile communications (GSM), general packet radio service (GPRS), universal mobile telecommunications system (UMTS), integrated digital enhanced network (iDEN), etc.
  • the multimodal platform 204 receives the multimodal commands from the multimodal application 202a.
  • the multimodal platform 204 synchronizes and processes the input information and transfers the synchronized and processed information to the multimodal application 202a located on the mobile device 202.
  • the multimodal platform 204 enables wireless carriers and service providers to offer applications with integrated voice and visual interfaces.
  • the multimodal platform 204 may facilitate communication with mobile device 202 in multiple communication modes.
  • the multimodal platform 204 may be adapted to send audio information to and receive audio information from wireless telephone through a switch using a voice channel.
  • the multimodal platform 204 may likewise be adapted to send visual data to and receive visual data from the mobile device 202 through a switch using a data channel.
  • the multimodal platform 204 may be adapted to change between these multiple modes of communication, or make multiple modes available simultaneously, according to instructions or existing communications conditions.
  • the multimodal platform 204 may be embodied as a computing device programmed with instructions to perform these functions. In one embodiment of the invention, the voice and data connections run simultaneously over an internet protocol (IP) connection between the multimodal platform 204 and the multimodal application 202a.
  • the multimodal platform 204 is described in greater detail in U.S. patent no. 6,983,307, titled “Synchronization Among Plural Browsers", and U.S. application no. 10/369,361 titled “Technique for Synchronizing Visual and Voice Browsers to Enable Multi-Modal Browsing,” filed February 18, 2003.
  • the multimodal application 202a that accepts a plurality of modes of input, can be implemented in a number of ways.
  • the multimodal application can be implemented as a Java 2 micro edition (J2ME MIDlet), as a browser plug-in, etc.
  • the visual application invokes the multimodal application 202a with appropriate input parameters.
  • the input parameters comprise the search grammar, the search service to use, the base home URL to visit in order to display, etc.
  • an appropriate audible prompt such as text, for text to speech (TTS) output, or as an audio file can be provided.
  • the multimodal application's 202a appearance can be customized by specifying user interface (UI) parameters. For example, a custom logo can be introduced in the multimodal application.
  • the multimodal application 202a comprises global grammar elements that can be invoked by the user using predefined multimodal invocation commands.
  • the multimodal application 202a can accept audio input, which, along with the search grammar, is transferred, via the mobile network, to a multimodal platform.
  • the search grammar can be directly passed as an extensible markup language (XML) document, or a URL to a voice extensible markup language (VXML) page, or an extensible hyper text markup language (xHTML).
  • VXML is a predefined set of rules or a language that enables a user to browse or interact with a device using voice recognition technology.
  • XML is a text document that contains mark-up tags for conveying the structure of data and enables efficient data interchange between devices on the internet.
  • xHTML is a combination of HTML and XML that is specifically applicable for internet enabled devices.
  • the multimodal application 202a provides multiple modes of communication, inclusive of but not restricted to the voice mode, visual and haptic modes.
  • in voice mode, the microphone on the mobile device captures audio commands of the user.
  • in visual mode, data is captured on the mobile device through the keypad. For example, alpha-numeric data, which may be represented in American standard code for information interchange (ASCII) form, can be visually displayed.
  • the multimodal application 202a interfaces with the native visual and voice resources of the mobile device.
  • the multimodal application 202a can be installed on devices running operating systems such as, but not restricted to, the Symbian operating system of Symbian Inc., USA, MS Smartphone of Microsoft Inc., J2ME, the binary run-time environment for wireless (BREW), the Palm operating system of Palm Inc., USA, MS Pocket PC of Microsoft Inc., and MS Pocket PC phone edition of Microsoft Inc.
  • the mobile device comprises a communication component and computing component.
  • the computing component typically has a memory that stores data and instructions; a processor adapted to execute the instructions and manipulate the data stored in the memory; means for input, for example, a keypad, touch screen, microphone, etc.; and, means for output, for example, liquid crystal display (LCD), cathode ray tube (CRT), audio speaker, etc.
  • the communication component is a means for communicating with other mobile devices over a network, for example, an Ethernet port, a modem, a wireless transmitter/receiver for communicating in a wireless communications network, etc.
  • the multimodal application 202a can take multiple forms and can address a variety of user needs and enable different types of multimodality.
  • a user desires to fill in a visual form on the UI using voice commands.
  • the grammar elements in the voice command are recognized by the multimodal platform, and the synchronized and processed information is transferred back to the multimodal application 202a.
  • the multimodal application 202a provides the synchronized and processed information as input to the visual application, for example in the form of an extended URL and the search term(s) filled into the visual form.
  • the synchronized and processed information may be in the form of a single recognized search term, or as a list of possible recognized terms.
  • the user can choose the correct item from the list of possible recognized terms and, once satisfied with the search term, the user can activate the search as is normally done with a visual-only application.
  • Examples of different variants of the multimodal application 202a are as follows.
  • the "SearchBar” variant of the multimodal application 202a accepts audio input, which, along with the associated search grammar, is transferred via the mobile network to the multimodal platform.
  • the SearchBar enables a user to go directly to a specific page of interest through voice input instead of having to navigate through several links.
  • the SearchBar provides the result as input to the visual application, for example, in the form of an extended URL, and the search term(s) filled into the visual form.
  • the "Inputbar" variant of the multimodal application 202a is applied where more general information is required by the visual application. For example, consider the case when a user needs to purchase an item using their mobile device.
  • the user needs to fill in their residential address in the "shipping address" section of the form displayed on the mobile device.
  • the user then brings up the InputBar, and fills in the form using multiple modes of input, for example, fills in the form using both voice commands and the keypad.
  • the "DictationBar" version of the multimodal application 202a is applied where the input is freeform, such as a text or e-mail message. For example, consider a case where a user sends a short message service (SMS) reply. The user selects DictationBar to input the text. The user can then visually correct the text that is not spelled accurately, i.e., recognized incorrectly.
  • the user can accomplish this correction activity by visually selecting the inaccurate text section and thereafter speaking or modifying the text section by visually typing or selecting from alternate displayed text that has a close recognition confidence.
  • the "PortalBar" version of the SearchBar is used to access web pages directly, using multiple modes of input, from a general portal, for example Yahoo!, without the requirement of navigating through multiple links.
  • the "IP Bar” version of the multimodal application 202a enables a user to bookmark desired URL's with predefined voice commands, and the user can then access the desired URL's using voice commands.
  • the bookmarking function is further described in co-pending application 10/211,117, titled “System and Method for Providing Multi-Modal Bookmarks," filed August 2, 2002.
  • the multimodal application 202a can be preloaded on the mobile device, or downloaded onto the mobile device on demand, or it may be pre-burned onto the read only memory (ROM) of the mobile device.
  • the multimodal application 202a can also be implemented as a multimodal web page, or as a web browser.
  • the multimodal system architecture of this invention allows the use of standard capabilities, for example Java, Applets, the integration and interfacing with web, installing new applications on the device, etc.
  • the multimodal system architecture of this invention can leverage all these capabilities without requiring a replacement of the entire software or hardware infrastructure of the mobile device, for example, without requiring the installation of a new browser on the mobile device.
  • a multimodal infrastructure with a complete and simultaneous activation of all its modes of communication, including voice, key input and visuals demands a significant amount of the mobile device's and the multimodal platform's resources.
  • the method and system disclosed herein provides a preferential mode activation feature, wherein, only a preferred mode chosen by the user is activated at any point in time. For example, the visual mode will be activated when the user taps once on the multimodal application 202a and indicates the preference for the visual mode only, following which, the user activates the voice mode by speaking or tapping twice.
  • the multimodal application 202a supports both sequential multimodality and simultaneous multimodality.
  • Sequential multimodality allows users to move seamlessly between visual and voice modes. Sequential multimodality offers real value when different steps of a single application are more effective or efficient in one mode than the other. For example, in a navigation application, it may be easier to speak the name of the place (voice mode) than to type it, yet it may be preferable to view a map (visual mode) than to listen to directions that may involve a half dozen turns. The swap between two modes may be initiated by the application, or by the user. Sequential multimodality is described in greater detail in United States patent application 10/119,614, titled “Mode-Swapping in Multi-Modal Telephonic Application," filed April 10, 2002. Briefly, the state of the two modes, i.e., the visual and voice mode are synchronized.
  • the multimodal application generates events relating to navigational activity being performed by a user in one mode.
  • a representation of the events that have occurred in a multimodal session are recorded and are subsequently used to set the input in the second mode to a state equivalent to that which the input in the first mode would be in if the user had performed, on the input in the second mode, a navigational activity equivalent to that which the user performed on the input in the first mode.
  • in simultaneous multimodality, where the device has both modes active, the user can communicate in the visual and voice modes simultaneously.
  • a user can point to a street on the map and say: "Plan route, avoiding this street.”
  • the user may enter the number "5000" in the amount box using the keypad, then simply speak "Transfer from Account 123 to Account 456; Enter" and all three entry boxes will be populated correctly and the information delivered to the multimodal platform 204 (a sketch of this keypad-plus-voice merge appears after this list).
  • the synchronized and processed information from the multimodal platform 204 can be delivered in voice mode, visual, or both and provide positive confirmation of the transaction.
  • FIG. 3 illustrates the multiple modes of interaction between the user and the multimodal application.
  • the user communicates with the multimodal application 202a, using one or more of the following modes: audio mode 301, visual mode 302, such as through a stylus, and haptic mode 303, such as through a haptic device.
  • the different modes are synchronized 305.
  • a haptic mode is a mode of communication, or interface with a computing device.
  • a mobile phone with haptic capabilities enables a haptic mode of input.
  • the haptic mode of communication is enabled through a tactile method, and uses a haptic device that senses body movement or in general, a user's intention. For example, using a haptic glove, a user can feel and move a ball, and this movement is simultaneously effected on the display of the device, wherein the ball can be made to move correspondingly on the display of the device.
  • the multimodal application 202a can also accept other modes of input 304, for example, global positioning system (GPS) inputs.
  • the user can say: "Make a reservation over there" while pointing their mobile device at a restaurant across the road.
  • the mobile device is GPS enabled and is capable of deriving the position coordinates of objects it is pointed at.
  • the GPS input is also transferred to the multimodal platform 204.
  • the multimodal system architecture of this invention enables push to talk over cellular (PoC) phones, wherein the push to talk command (PTT command) is transmitted to the multimodal platform for initiating or terminating a session.
  • a session is initiated when the multimodal platform 204 becomes aware of the user, i.e., when the user is provided a plurality of modality interfaces and then invokes an initiation command through a predefined input. Similarly, the user can end a session with a predefined input, or the session ends if the multimodal platform 204 ceases to register activity at the user's end.
  • the following example illustrates the multimodal application's 202a ability to provide multimodal capabilities.
  • a user desires to locate "Edison, NJ 08817" using the weblink Yahoo Maps on the user's mobile device.
  • the user can double tap on the multimodal application 202a residing in the UI of the mobile device, and then the user can provide the spoken command: "Locate Edison, NJ 08817".
  • the method and system disclosed herein provides a means for mixing visual input tapping and speech inputs in a manner that is an easy and natural experience to the end user. Once the user has finished inputting the search parameters, the user can submit the map request.
  • This invention can be effectively applied in a variety of networks and usage environments. For example, it can be internet based with web and WAP interfaces to mobile devices, or it can be linked to a corporate intranet, or other private networks.
  • the multimodal application 202a can allow an employee of a firm to provide search input parameters specific to the local and network applications and resources of the firm. For example, an employee using the multimodal application 202a can search for "John Smith", and access John Smith's contact information in the corporate address book.
  • the multimodal application 202a could formulate input parameters and have them available for accessing not only network resources and web based applications, but also for accessing resources within the mobile device.
  • the following example illustrates the use of this invention in field services.
  • While inspecting a defective Coke machine at the local gas station, Bill pulls out his handset and initiates the diagnostics application. Bill then says: "Diagnostics for Coke machine". The device returns a list of available diagnostic tests. Bill scrolls and selects the "Cooling diagnostics" link, the second in the list, and sees a summary of the recommended diagnostics procedures for testing the machine. After performing a few diagnostic procedures, Bill concludes that one part needs to be replaced. Again using his handset, he switches to the purchasing part of the field application by saying: "New quote". The spoken command opens a quotation/order form. Bill says: "Add compressor XRT-65, quantity one", adding the correct part to the parts quotation. Then he issues the verbal commands: "Close quote" and "Fax to 555-233-2390", which faxes the parts quotation directly to the main office for processing.
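
As referenced in the simultaneous-multimodality example above (entering "5000" on the keypad while speaking the transfer command), the following minimal sketch shows one way keypad and voice inputs could be merged into a single set of form fields before submission. The class, field names, and parsing rule are purely illustrative assumptions; the patent does not specify this code.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical sketch of simultaneous multimodality: keypad input and a spoken
    // command are merged into one set of form fields before submission.
    public class TransferFormSketch {

        public static void main(String[] args) {
            Map<String, String> fields = new LinkedHashMap<>();

            // Visual/keypad mode: the user types the amount directly.
            fields.put("amount", "5000");

            // Voice mode: the platform returns the recognized utterance; it is
            // parsed locally here for illustration only.
            String recognized = "Transfer from Account 123 to Account 456";
            Pattern p = Pattern.compile("from Account (\\d+) to Account (\\d+)");
            Matcher m = p.matcher(recognized);
            if (m.find()) {
                fields.put("fromAccount", m.group(1));
                fields.put("toAccount", m.group(2));
            }

            // All three entry boxes are now populated; a real application would
            // deliver them to the multimodal platform for confirmation.
            System.out.println(fields); // {amount=5000, fromAccount=123, toAccount=456}
        }
    }
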

Abstract

A method and system are disclosed herein for accepting multimodal inputs and deriving synchronized and processed information. A reusable multimodal application is provided on the mobile device. A user transmits a multimodal command to the multimodal platform via the mobile network. The one or more modes of communication that are inputted are transmitted to the multimodal platform(s) via the mobile network(s) and thereafter synchronized and processed at the multimodal platform. The synchronized and processed information is transmitted to the multimodal application. If required, the user verifies and appropriately modifies the synchronized and processed information. The verified and modified information is transferred from the multimodal application to the visual application. The final result(s) are derived by inputting the verified and modified results into the visual application.

Description

REUSABLE MULTIMODAL APPLICATION
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is related to U.S. patent application no. 10/211,117, titled "System and Method for Providing Multi-Modal Bookmarks," filed August 2, 2002, U.S. patent no. 6,983,307, titled "Synchronization among plural browsers", U.S. application no. 10/119,614, titled "Mode-Swapping in Multi-Modal Telephonic Application," filed April 10, 2002, and U.S. application no. 10/369,361 titled "Technique for Synchronizing Visual and Voice Browsers to Enable Multi-Modal Browsing," filed February 18, 2003. The entirety of each of the aforementioned patent applications is hereby incorporated herein for all purposes.
BACKGROUND
The present invention relates generally to the field of networked computing. More particularly, the invention provides a reusable multimodal application on a mobile device. As used herein, multimodality comprises any human mode of interaction on the input side of an application, for example, the user's voice, and/or any visual mode, etc., that allows users to speak, hear, type, touch or see in that application, and one or more human interaction modes on the output side of the application such as the ability to hear and visually see the output. Multimodal interactions thus extend web or other application user interface to allow multiple modes of interaction, offering users, for example, the choice of using their voice, or an input device such as a key pad, keyboard, mouse or stylus. For output, users will, for example, be able to listen to spoken prompts and audio, and to view information on graphical displays.
The market for ring tones, wall papers, and other content is a large and rapidly growing business for mobile operators and content providers. In addition, a significant number of commercial transactions take place over wireless application protocol (WAP) capable mobile devices. The content in the top-level menu visual interface of WAP capable mobile devices needs to be easily accessible to the user in order to effectively perform commercial transactions. Content that cannot be easily found and located by subscribers directly is a lost revenue opportunity for mobile operators and content providers.
Increasingly, applications are moved from a static environment, for example, a desktop computer, to a mobile environment or a set-top box environment, where the mobile devices are smaller and packed with functionalities. The keypad input facility in the mobile device is not user friendly for all types of input operations, and the ability to interact is constrained by the form factor of the device. There is an opportunity to improve the effectiveness in the use of current mobile visual applications on mobile devices, for example, for mobile devices using a browser, WAP, or extensible hypertext markup language (xHTML).
There is an unmet market need for a method and system that precludes the need of performing custom development for each application in order to provide a multimodal functionality to the mobile device.
There is an unmet market need for a method and system that implements multimodal functionality without requiring a replacement of the entire software or hardware infrastructure of the mobile device.
SUMMARY OF THE INVENTION
Disclosed herein is a method and system for accepting multimodal inputs and deriving synchronized and processed information. A reusable multimodal application is provided on the mobile device. A user transmits multimodal commands to the multimodal platform via the mobile network. The one or more modes of communication that are inputted are transmitted to the multimodal platform(s) via the mobile network(s) and thereafter synchronized and processed at the multimodal platform. The synchronized and processed information is transmitted to the multimodal application. If required, the user verifies and appropriately modifies the synchronized and processed information. The verified and modified information is transferred from the multimodal application to the visual application. The final result(s) are derived by inputting the verified and modified information into the visual application.
The multimodal application seamlessly combines graphics, text and audio output with speech, text, and touch input to deliver dramatically enhanced end user and services experiences. Compared to single-mode voice and visual search applications, the multimodal application of this invention is easier and more intuitive to use. The method and system disclosed herein provides a multimodal application that allows the use of a plurality of modes of communication, whichever is desired or most effective depending on the data needed or the usage environment.
Also disclosed herein is a method using a multimodal application that serves, using a simple set of interface rules, a standard multimodal interface to any visual search interface. The multimodal application of this invention requires input from the user to determine a specific selection among a list of possible results. For example, the multimodal application could be used for searching a music album from among hundreds of possible selections.
The method disclosed herein enhances the selection process of menus on a mobile device by allowing the user to select from a visual list of choices using one or more modes of input.
Also disclosed herein is a method providing a multimodal application that precludes the need of performing custom development for each application in order to provide a multimodal functionality to the mobile device.
Also disclosed herein is a method of implementing multimodal functionality without requiring a replacement of the entire software or hardware infrastructure of the mobile device, for example, without the need to install a new browser on the mobile device.
Also disclosed herein is a method to enable telecommunications carriers to extend their existing visual based portals with text or touchtone input modes, and store-fronts on the mobile device, with a search capability capable of accepting multiple modes of input. For example, it is convenient for the users to speak the service or information topic that they are interested in, and see the service appear immediately on the phone, often bypassing several levels and menus in the process. It is also much easier to speak the name of a category, title, or artist and see that filled in as the text search criteria automatically. By making more content on the mobile device easily accessible, content providers and carriers can realize increased revenues.
The method disclosed herein also reduces the time taken to access a desired menu choice by optimally enabling multiple modes on a mobile device. For example, the method disclosed herein can reduce the time of choosing a desired menu from approximately thirty seconds and five key clicks down to three seconds and only one key click. Even if the menu structure in a portal on the mobile device were to change frequently, the multimodal application would enable the user to continue to effectively conduct transactions on the mobile device without difficulty.
A reusable multimodal application offers a significant revenue opportunity, and more importantly, a market capture and retention opportunity for the mobile operator. Capturing this opportunity is essential in the face of declining average revenue per user (ARPU) and increasing competitive pressure. By delivering a user-friendly multimodal experience, barriers to a user's adoption of new mobile applications and services are significantly reduced. A multimodal application offers opportunities including additional usage, bi-directional pull through of voice and data services, increased revenue from content providers, advertising revenue, premium services, churn reduction and upgrade potentials. The multimodal application gives workers operating in a mobility context, the opportunity to access and leverage the same systems and information that colleagues close to intranet resources enjoy.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing summary, as well as the following detailed description of the embodiments, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, there is shown in the drawings exemplary constructions of the invention; however, the invention is not limited to the specific methods and instrumentalities disclosed.
FIG. 1 illustrates a method of accepting multimodal inputs and deriving synchronized and processed information, the method implemented in a system comprising a plurality of mobile devices operated by users who are connected to a plurality of mobile networks that contains a plurality of multimodal platforms.
FIG. 2 illustrates a system for accepting multimodal inputs and deriving synchronized and processed information, comprising a plurality of mobile devices operated by users who are connected to a plurality of mobile networks that contains a plurality of multimodal platforms.
FIG. 3 illustrates the multiple modes of interaction between the user and the multimodal application.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 illustrates a method of accepting multimodal inputs and deriving synchronized and processed information, the method implemented in a system comprising a plurality of mobile devices operated by users who are connected to a plurality of mobile networks that contains a plurality of multimodal platforms. The plurality of mobile devices contain a plurality of applications. The plurality of applications also comprise visual applications. A multimodal application enables multimodality in the plurality of applications that reside on a plurality of mobile devices 101. The multimodal application is invoked by invoking the visual application on the mobile device based on the request of the user 102. The multimodal application accepts input information from the user in one or more modes of communication 103, such as in voice 103a, text 103b and other input modes 103c. The one or more modes of communication that are inputted are transmitted to the multimodal platform(s) via the mobile network(s) 104 and then synchronized and processed 105 at the multimodal platform. The synchronized and processed information is transmitted to the multimodal application 106. The synchronized and processed information is provided to the user for verification and modification in one or more communication modes 107.
The following example illustrates the synchronizing and processing step. If the input information is a search request in the form of an audio command along with text input, then the audio command and the text input, along with the associated search grammar of the multimodal application, are transferred to the multimodal platform through the mobile network. The two modes of input, i.e., the audio command and the text input, are synchronized. Grammar elements associated with the command and search grammar are recognized by the multimodal platform. Processed information in the form of search words is determined by the multimodal platform based on the recognized grammar elements, and the synchronized and processed information is transferred back to the multimodal application.
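A minimal sketch of how such a platform-side step might look is given below. The SpeechRecognizer abstraction, the class and method names, and the prefix-filtering rule used to reconcile the two input modes are assumptions made for illustration, not the patent's implementation.

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    // Illustrative only: one way a platform-side step might combine an audio
    // command with typed text against a search grammar.
    public class SynchronizeAndProcessSketch {

        /** Stand-in for an ASR engine constrained by a search grammar. */
        interface SpeechRecognizer {
            List<String> recognize(byte[] audio, String grammarXml);
        }

        /**
         * Merges the two input modes: audio candidates are filtered so they stay
         * consistent with whatever the user has already typed, and the surviving
         * candidates are returned to the multimodal application for verification.
         */
        static List<String> process(byte[] audioCommand, String typedText,
                                    String searchGrammarXml, SpeechRecognizer recognizer) {
            List<String> candidates = recognizer.recognize(audioCommand, searchGrammarXml);
            return candidates.stream()
                    .filter(c -> typedText == null || typedText.isEmpty()
                            || c.toLowerCase().startsWith(typedText.toLowerCase()))
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            // Stub recognizer so the sketch runs on its own.
            SpeechRecognizer stub = (audio, grammar) ->
                    Arrays.asList("madonna greatest hits", "modern jazz quartet");
            System.out.println(process(new byte[0], "mad", "<grammar/>", stub));
            // -> [madonna greatest hits]
        }
    }
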
If required, the user verifies and appropriately modifies the synchronized and processed information 108. The verified and modified information is transferred from the multimodal application to the visual application 109. The final result(s) is derived by inputting the verified and modified results into the visual application. The final results are provided to the user in one or more modes of communication 110. The system and method disclosed herein allows users to simultaneously use voice, text, graphics, keypad, stylus and haptic modes to interface with wireless services and applications.
FIG. 2 illustrates a system for accepting multimodal inputs and deriving synchronized and processed information, comprising a plurality of mobile devices operated by users who are connected to a plurality of mobile networks that contains a plurality of multimodal platforms. The mobile device 202 comprises a multimodal application 202a that is capable of receiving inputs from the user 201 in multiple modes of input. The multimodal application 202a uses a set of interface rules to provide a standard input interface on the mobile device 202. The mobile device 202 communicates with a multimodal platform 204 via a mobile network 203. The system disclosed herein comprises a plurality of mobile devices, multimodal platforms and mobile networks. The multimodal platform 204 further comprises a voice browser 204a, a stack 204b, a user personalization module 204c, a multimodal module 204d, a billing interface 204e, a markup content handler 204f, an event and session manager 204g, and a synchronization module 204h. The voice browser 204a allows users to conduct searches using audio commands. The stack 204b is a reserved area of memory used to keep track of internal operations. The user personalization module 204c stores user specific information. The multimodal module 204d contains a grammar module for recognizing the grammar elements associated with an audio command. The billing interface 204e generates user specific billing information. The markup content handler 204f provides the visual markup or data content associated with the visual interface. Using the proposed invention, telecommunication carriers may monetize multimodal applications immediately, thereby leveraging devices already widely deployed in their networks. The event and session manager 204g manages the events and sessions for networking activities associated with the multimodal platform 204. The synchronization module 204h synchronizes the voice, visual and haptic modes of communication.
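The module list above can be read as a composition, sketched below with illustrative Java interfaces. The interface and method signatures are assumptions made for readability; the patent names the modules but does not define a programming API for them.

    // Illustrative only: the platform components named above expressed as a simple
    // composition (the stack's memory bookkeeping is omitted).
    public class MultimodalPlatformSketch {

        interface VoiceBrowser              { void handleAudioCommand(byte[] audio); }
        interface UserPersonalizationModule { String preference(String userId, String key); }
        interface MultimodalModule          { java.util.List<String> matchGrammar(byte[] audio, String grammarXml); }
        interface BillingInterface          { void record(String userId, String event); }
        interface MarkupContentHandler      { String visualMarkupFor(String url); }
        interface EventSessionManager       { String openSession(String userId); }
        interface SynchronizationModule     { void synchronize(String sessionId); }

        // The platform is the composition of these modules.
        VoiceBrowser voiceBrowser;
        UserPersonalizationModule personalization;
        MultimodalModule multimodalModule;
        BillingInterface billing;
        MarkupContentHandler markupHandler;
        EventSessionManager sessionManager;
        SynchronizationModule synchronization;
    }
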
The method and system disclosed herein supports a plurality of mobile networks 203, inclusive of, but not restricted to, code division multiple access (CDMA), CDMA 1x/3x, global system for mobile communications (GSM), general packet radio service (GPRS), universal mobile telecommunications system (UMTS), integrated digital enhanced network (iDEN), etc. The multimodal platform 204 receives the multimodal commands from the multimodal application 202a. The multimodal platform 204 synchronizes and processes the input information and transfers the synchronized and processed information to the multimodal application 202a located on the mobile device 202. The multimodal platform 204 enables wireless carriers and service providers to offer applications with integrated voice and visual interfaces. In accordance with the embodiments of the method disclosed herein, the multimodal platform 204 may facilitate communication with the mobile device 202 in multiple communication modes. For example, the multimodal platform 204 may be adapted to send audio information to and receive audio information from a wireless telephone through a switch using a voice channel. The multimodal platform 204 may likewise be adapted to send visual data to and receive visual data from the mobile device 202 through a switch using a data channel. Moreover, the multimodal platform 204 may be adapted to change between these multiple modes of communication, or make multiple modes available simultaneously, according to instructions or existing communications conditions. The multimodal platform 204 may be embodied as a computing device programmed with instructions to perform these functions. In one embodiment of the invention, the voice and data connections run simultaneously over an internet protocol (IP) connection between the multimodal platform 204 and the multimodal application 202a. The multimodal platform 204 is described in greater detail in U.S. patent no. 6,983,307, titled "Synchronization Among Plural Browsers", and U.S. application no. 10/369,361 titled "Technique for Synchronizing Visual and Voice Browsers to Enable Multi-Modal Browsing," filed February 18, 2003.
The multimodal application 202a that accepts a plurality of modes of input can be implemented in a number of ways. For example, the multimodal application can be implemented as a Java 2 micro edition (J2ME) MIDlet, as a browser plug-in, etc. When a visual application, for example a WAP or xHTML browser, requires a search, the visual application invokes the multimodal application 202a with appropriate input parameters. The input parameters comprise the search grammar, the search service to use, the base home URL to visit in order to display, etc. If required, an appropriate audible prompt can be provided, either as text for text to speech (TTS) output or as an audio file. The multimodal application's 202a appearance can be customized by specifying user interface (UI) parameters. For example, a custom logo can be introduced in the multimodal application. The multimodal application 202a comprises global grammar elements that can be invoked by the user using predefined multimodal invocation commands.
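As a rough illustration of this invocation, the sketch below shows a visual application handing the reusable component its input parameters. The class name, parameter names, and URLs are hypothetical; the patent only enumerates the kinds of parameters (search grammar, search service, base home URL, optional audible prompt, UI customization).

    // Hypothetical invocation sketch: a visual application (e.g. a WAP/xHTML browser
    // wrapper) hands the reusable multimodal component its input parameters.
    public class InvocationSketch {

        /** Parameters enumerated in the text; the field names are assumptions. */
        static class MultimodalParams {
            String searchGrammarUrl;   // grammar as an XML/VXML document or a URL to one
            String searchService;      // which search service to use
            String baseHomeUrl;        // base URL to display
            String audiblePrompt;      // text for TTS output, or a URL to an audio file
            String customLogoUrl;      // UI customization
        }

        /** Stand-in for the reusable multimodal application's entry point. */
        static void invokeMultimodalApplication(MultimodalParams p) {
            System.out.println("Launching multimodal UI for " + p.baseHomeUrl
                    + " with grammar " + p.searchGrammarUrl);
        }

        public static void main(String[] args) {
            MultimodalParams p = new MultimodalParams();
            p.searchGrammarUrl = "http://example.com/grammars/music-search.vxml"; // hypothetical
            p.searchService    = "music-catalog";
            p.baseHomeUrl      = "http://example.com/store/home";
            p.audiblePrompt    = "Say the artist or album you are looking for";
            p.customLogoUrl    = "http://example.com/branding/logo.png";
            invokeMultimodalApplication(p);
        }
    }
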
The multimodal application 202a can accept audio input, which, along with the search grammar, is transferred, via the mobile network, to a multimodal platform. For example, the search grammar can be directly passed as an extensible markup language (XML) document, or as a URL to a voice extensible markup language (VXML) page or an extensible hypertext markup language (xHTML) page. VXML is a predefined set of rules or a language that enables a user to browse or interact with a device using voice recognition technology. XML is a text document that contains mark-up tags for conveying the structure of data and enables efficient data interchange between devices on the internet. xHTML is a combination of HTML and XML that is specifically applicable for internet enabled devices.
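For concreteness, the sketch below shows the two delivery options this paragraph mentions: a grammar carried inline as an XML document, and a grammar referenced by a URL to a VXML page. The request object, field names, toy SRGS-style grammar, and URLs are illustrative assumptions rather than a format defined by the patent.

    // Sketch of supplying a search grammar either inline or by reference.
    public class GrammarPassingSketch {

        static class SearchRequest {
            String inlineGrammarXml; // option 1: grammar passed directly as XML
            String grammarUrl;       // option 2: URL of a VXML (or xHTML) page
            byte[] audio;            // the captured audio command
        }

        public static void main(String[] args) {
            // Option 1: a toy SRGS-style XML grammar carried inline.
            SearchRequest inline = new SearchRequest();
            inline.inlineGrammarXml =
                    "<grammar xmlns=\"http://www.w3.org/2001/06/grammar\" root=\"album\">"
                  + "  <rule id=\"album\"><one-of>"
                  + "    <item>greatest hits</item><item>modern jazz quartet</item>"
                  + "  </one-of></rule>"
                  + "</grammar>";

            // Option 2: the same grammar referenced as a VXML page (hypothetical URL).
            SearchRequest byUrl = new SearchRequest();
            byUrl.grammarUrl = "http://example.com/grammars/music-search.vxml";

            System.out.println("inline grammar length: " + inline.inlineGrammarXml.length());
            System.out.println("grammar by reference: " + byUrl.grammarUrl);
        }
    }
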
The multimodal application 202a provides multiple modes of communication, inclusive of, but not restricted to, the voice, visual and haptic modes. When the voice mode is used, the microphone on the mobile device captures audio commands of the user. When the visual mode is used, data is captured on the mobile device through the keypad. For example, alpha-numeric data, which may be represented in American standard code for information interchange (ASCII) form, can be visually displayed. The multimodal application 202a interfaces with the native visual and voice resources of the mobile device. The multimodal application 202a can be installed on devices running operating systems such as, but not restricted to, the Symbian operating system of Symbian Inc., USA, MS Smartphone of Microsoft Inc., J2ME, the binary run-time environment for wireless (BREW), the Palm operating system of Palm Inc., USA, MS Pocket PC of Microsoft Inc., and MS Pocket PC phone edition of Microsoft Inc. The mobile device comprises a communication component and a computing component. The computing component typically has a memory that stores data and instructions; a processor adapted to execute the instructions and manipulate the data stored in the memory; means for input, for example, a keypad, touch screen, microphone, etc.; and means for output, for example, liquid crystal display (LCD), cathode ray tube (CRT), audio speaker, etc. The communication component is a means for communicating with other mobile devices over a network, for example, an Ethernet port, a modem, a wireless transmitter/receiver for communicating in a wireless communications network, etc.
Depending on the usage context, the multimodal application 202a can take multiple forms and can address a variety of user needs and enable different types of multimodality. Consider the case wherein a user desires to fill in a visual form on the UI using voice commands. The grammar elements in the voice command are recognized by the multimodal platform, and the synchronized and processed information is transferred back to the multimodal application 202a. The multimodal application 202a provides the synchronized and processed information as input to the visual application, for example in the form of an extended URL and the search term(s) filled into the visual form. The synchronized and processed information may be in the form of a single recognized search term, or as a list of possible recognized terms. The user can choose the correct item from the list of possible recognized terms and, once satisfied with the search term, the user can activate the search as is normally done with a visual-only application.
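One way to picture the hand-back to the visual application is as a small helper that appends the recognized term to the base search URL as an "extended URL". The query-parameter name and base URL below are assumptions for illustration; the patent does not specify the URL layout.

    import java.io.UnsupportedEncodingException;
    import java.net.URLEncoder;

    // Sketch: build the extended URL that carries the recognized search term back
    // to the visual application.
    public class ExtendedUrlSketch {

        static String extendUrl(String baseUrl, String recognizedTerm)
                throws UnsupportedEncodingException {
            return baseUrl + "?q=" + URLEncoder.encode(recognizedTerm, "UTF-8");
        }

        public static void main(String[] args) throws Exception {
            // e.g. a term recognized by the multimodal platform from a voice command
            System.out.println(extendUrl("http://example.com/music/search", "greatest hits"));
            // -> http://example.com/music/search?q=greatest+hits
        }
    }
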
Examples of different variants of the multimodal application 202a are as follows. The "SearchBar" variant of the multimodal application 202a accepts audio input, which, along with the associated search grammar, is transferred via the mobile network to the multimodal platform. The SearchBar enables a user to go directly to a specific page of interest through voice input instead of having to navigate through several links. The SearchBar provides the result as input to the visual application, for example, in the form of an extended URL with the search term(s) filled into the visual form. The "InputBar" variant of the multimodal application 202a is applied where more general information is required by the visual application. For example, consider the case where a user needs to purchase an item using their mobile device. The user needs to fill in their residential address in the "shipping address" section of the form displayed on the mobile device. The user brings up the InputBar and fills in the form using multiple modes of input, for example, using both voice commands and the keypad. The "DictationBar" variant of the multimodal application 202a is applied where the input is freeform, such as a text or e-mail message. For example, consider a case where a user sends a short message service (SMS) reply. The user selects the DictationBar to input the text. The user can then visually correct any text that is not spelled accurately, i.e., recognized incorrectly. The user can accomplish this correction by visually selecting the inaccurate text section and thereafter speaking the correction, typing it, or selecting from alternate displayed text that has a close recognition confidence, as sketched below. The "PortalBar" variant of the SearchBar is used to access web pages directly from a general portal, for example Yahoo!, using multiple modes of input, without the requirement of navigating through multiple links. The "IP Bar" variant of the multimodal application 202a enables a user to bookmark desired URLs with predefined voice commands, after which the user can access the desired URLs using voice commands. The bookmarking function is further described in co-pending application 10/211,117, titled "System and Method for Providing Multi-Modal Bookmarks," filed August 2, 2002.
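The sketch referenced above is a minimal, hypothetical Java illustration of the DictationBar-style correction step: for a selected span of dictated text, only alternates whose recognition confidence is close to the best hypothesis are offered, and the user either taps one or re-speaks the span. None of the names below come from the disclosure.

import java.util.ArrayList;
import java.util.List;

public class DictationCorrectionExample {

    static final class Alternate {
        final String text;
        final double confidence;
        Alternate(String text, double confidence) { this.text = text; this.confidence = confidence; }
    }

    // Return alternates whose confidence lies within `window` of the best hypothesis.
    static List<String> closeAlternates(List<Alternate> alternates, double window) {
        double best = alternates.stream().mapToDouble(a -> a.confidence).max().orElse(0.0);
        List<String> out = new ArrayList<>();
        for (Alternate a : alternates) {
            if (best - a.confidence <= window) {
                out.add(a.text);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Alternates for the text span the user visually selected as incorrect.
        List<Alternate> forSelectedSpan = List.of(
                new Alternate("meet at eight", 0.82),
                new Alternate("meet at Kate's", 0.79),
                new Alternate("meat ate", 0.41));
        // Only the first two are close enough in confidence to be displayed.
        System.out.println(closeAlternates(forSelectedSpan, 0.05));
    }
}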
The multimodal application 202a can be preloaded on the mobile device, or downloaded onto the mobile device on demand, or it may be pre-burned onto the read only memory (ROM) of the mobile device. The multimodal application 202a can also be implemented as a multimodal web page, or as a web browser.
The multimodal system architecture of this invention allows the use of standard capabilities, for example Java, applets, integration and interfacing with the web, installation of new applications on the device, etc. The multimodal system architecture of this invention can leverage all of these capabilities without requiring a replacement of the entire software or hardware infrastructure of the mobile device, for example, without requiring the installation of a new browser on the mobile device. In the current art, a multimodal infrastructure with complete and simultaneous activation of all its modes of communication, including voice, key input, and visuals, demands a significant amount of the mobile device's and the multimodal platform's resources. The method and system disclosed herein provides a preferential mode activation feature, wherein only the preferred mode chosen by the user is activated at any point in time. For example, the visual mode alone is activated when the user taps once on the multimodal application 202a, indicating a preference for the visual mode; the user can then activate the voice mode by speaking or by tapping twice.
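A minimal Java sketch of that preferential mode activation idea follows; it assumes a hypothetical gesture handler and is not drawn from the disclosure itself. Only the mode the user asks for is kept active, so neither the device nor the platform carries every mode simultaneously.

public class ModeActivationExample {

    enum Mode { NONE, VISUAL, VOICE }

    static Mode active = Mode.NONE;

    // Single tap -> visual mode only; double tap or speech onset -> voice mode.
    static void onUserGesture(int tapCount, boolean speechDetected) {
        if (speechDetected || tapCount == 2) {
            active = Mode.VOICE;
        } else if (tapCount == 1) {
            active = Mode.VISUAL;
        }
        System.out.println("Active mode: " + active);
    }

    public static void main(String[] args) {
        onUserGesture(1, false); // user taps once -> visual mode only
        onUserGesture(2, false); // user taps twice -> voice mode
        onUserGesture(0, true);  // user starts speaking -> voice mode
    }
}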
The multimodal application 202a supports both sequential multimodality and simultaneous multimodality.
Sequential multimodality allows users to move seamlessly between visual and voice modes. Sequential multimodality offers real value when different steps of a single application are more effective or efficient in one mode than in the other. For example, in a navigation application, it may be easier to speak the name of a place (voice mode) than to type it, yet it may be preferable to view a map (visual mode) than to listen to directions that may involve a half dozen turns. The swap between the two modes may be initiated by the application, or by the user. Sequential multimodality is described in greater detail in United States patent application 10/119,614, titled "Mode-Swapping in Multi-Modal Telephonic Application," filed April 10, 2002. Briefly, the state of the two modes, i.e., the visual and voice modes, is synchronized. The multimodal application generates events relating to the navigational activity being performed by a user in one mode. A representation of the events that have occurred in a multimodal session is recorded and is subsequently used to set the input in the second mode to a state equivalent to that which the input in the first mode would be in if the user had performed, on the input in the second mode, a navigational activity equivalent to that which the user performed on the input in the first mode. In the case of simultaneous multimodality, where the device has both modes active, the user can communicate in the visual and voice modes simultaneously. For example, in a mapping application, a user can point to a street on the map and say: "Plan route, avoiding this street." In a retail banking application, with "From Account", "To Account", and "Amount" boxes on the screen, the user may enter the number "5000" in the amount box using the keypad, then simply speak "Transfer from Account 123 to Account 456; Enter", and all three entry boxes will be populated correctly and the information delivered to the multimodal platform 204. The synchronized and processed information from the multimodal platform 204 can be delivered in voice mode, visual mode, or both, and provides positive confirmation of the transaction.
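As an illustrative sketch only, the following Java code captures the sequential mode-swap idea described above: navigation events from the active mode are recorded, and on a swap they are replayed so the other mode starts in an equivalent state. All names are hypothetical and the retail banking fields are used merely to echo the example in the text.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ModeSwapExample {

    static final class NavEvent {
        final String field;
        final String value;
        NavEvent(String field, String value) { this.field = field; this.value = value; }
    }

    // Representation of the events that have occurred in the multimodal session.
    static final List<NavEvent> sessionLog = new ArrayList<>();

    static void recordVoiceEvent(String field, String value) {
        sessionLog.add(new NavEvent(field, value));
    }

    // On a swap to visual mode, replay the log so the visual form matches the voice state.
    static void swapToVisual(Map<String, String> visualForm) {
        for (NavEvent e : sessionLog) {
            visualForm.put(e.field, e.value);
        }
    }

    public static void main(String[] args) {
        recordVoiceEvent("fromAccount", "123"); // spoken while in voice mode
        recordVoiceEvent("toAccount", "456");
        Map<String, String> visualForm = new HashMap<>();
        swapToVisual(visualForm);               // user swaps to visual mode
        System.out.println(visualForm);         // both fields carried over
    }
}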
FIG. 3 illustrates the multiple modes of interaction between the user and the multimodal application. The user communicates with the multimodal application 202a using one or more of the following modes: audio mode 301, visual mode 302, such as through a stylus, and haptic mode 303, such as through a haptic device. The different modes are synchronized 305. A haptic mode is a mode of communication with, or interface to, a computing device. For example, a mobile phone with haptic capabilities enables a haptic mode of input. The haptic mode of communication is enabled through a tactile method and uses a haptic device that senses body movement or, in general, a user's intention. For example, using a haptic glove, a user can feel and move a ball, and this movement is simultaneously effected on the display of the device, where the ball moves correspondingly.
In addition to the audio, visual and haptic modes of input, the multimodal application 202a can also accept other modes of input 304, for example, global positioning system (GPS) inputs. For example, the user can say: "Make a reservation over there" while pointing their mobile device at a restaurant across the road. Assume that the mobile device is GPS enabled and is capable of deriving the position coordinates of objects it is pointed at. In this case, in addition to the audio and haptic input, the GPS input is also transferred to the multimodal platform 204. The multimodal system architecture of this invention also supports push to talk over cellular (PoC) phones, wherein the push to talk (PTT) command is transmitted to the multimodal platform for initiating or terminating a session. A session is initiated when the multimodal platform 204 becomes aware of the user, i.e., when the user is provided a plurality of modality interfaces and then invokes an initiation command through a predefined input. Similarly, the user can end a session with a predefined input, or the session ends if the multimodal platform 204 ceases to register activity at the user's end.
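The short Java sketch below, again with hypothetical names and offered only as an illustration, shows how additional signals such as GPS coordinates and a PTT event might be bundled with the audio into the message sent to the multimodal platform.

public class ExtendedInputExample {

    static final class MultimodalMessage {
        final byte[] audio;       // e.g. "Make a reservation over there"
        final Double latitude;    // optional GPS fix of what the device points at
        final Double longitude;
        final boolean pttPressed; // push-to-talk event used to start or end the session

        MultimodalMessage(byte[] audio, Double latitude, Double longitude, boolean pttPressed) {
            this.audio = audio;
            this.latitude = latitude;
            this.longitude = longitude;
            this.pttPressed = pttPressed;
        }
    }

    static void send(MultimodalMessage msg) {
        if (msg.pttPressed) {
            System.out.println("Session initiated via PTT");
        }
        if (msg.latitude != null && msg.longitude != null) {
            System.out.println("Resolving target at " + msg.latitude + "," + msg.longitude);
        }
        // ...transmit the audio plus context to the multimodal platform over the mobile network...
    }

    public static void main(String[] args) {
        send(new MultimodalMessage(new byte[0], 40.5187, -74.4121, true));
    }
}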
The following example illustrates the multimodal application's 202a ability to provide multimodal capabilities. Consider a case wherein a user desires to locate "Edison, NJ 08817" using the Yahoo Maps weblink on the user's mobile device. The user can double tap on the multimodal application 202a residing in the UI of the mobile device and then provide the spoken command: "Locate Edison, NJ 08817". The method and system disclosed herein provides a means of mixing visual tap inputs and speech inputs in a manner that is an easy and natural experience for the end user. Once the user has finished inputting the search parameters, the user can submit the map request.
This invention can be effectively applied in a variety of networks and usage environments. For example, it can be internet based with web and WAP interfaces to mobile devices, or it can be linked to a corporate intranet or other private networks. For example, in a corporate application, the multimodal application 202a can allow an employee of a firm to provide search input parameters specific to the local and network applications and resources of the firm. For example, an employee using the multimodal application 202a can search for "John Smith" and access John Smith's contact information in the corporate address book. The multimodal application 202a could formulate input parameters and have them available for accessing not only network resources and web-based applications, but also resources within the mobile device. The following example illustrates the use of this invention in field services. While inspecting a defective coke machine at the local gas station, Bill pulls out his handset and initiates the diagnostics application. Bill then says: "Diagnostics for Coke machine". The device returns a list of available diagnostic tests. Bill scrolls and selects the "Cooling diagnostics" link, the second in the list, and sees a summary of the recommended diagnostic procedures for testing the machine. After performing a few diagnostic procedures, Bill concludes that one part needs to be replaced. Again using his handset, he switches to the purchasing part of the field application by saying: "New quote". The spoken command opens a quotation/order form. Bill says: "Add compressor XRT-65, quantity one", adding the correct part to the parts quotation. Then he issues the verbal commands: "Close quote" and "Fax to 555-233-2390", which faxes the parts quotation directly to the main office for processing.
The foregoing examples have been provided merely for the purpose of explanation and are in no way to be construed as limiting of the present method and system disclosed herein. While the invention has been described with reference to various embodiments, it is understood that the words which have been used herein are words of description and illustration, rather than words of limitation. Further, although the invention has been described herein with reference to particular means, materials and embodiments, the invention is not intended to be limited to the particulars disclosed herein; rather, the invention extends to all functionally equivalent structures, methods and uses, such as are within the scope of the appended claims. Those skilled in the art, having the benefit of the teachings of this specification, may effect numerous modifications thereto, and changes may be made without departing from the scope and spirit of the invention in its aspects.

Claims

I claim:
1. A method for inputting information and deriving results from a plurality of mobile devices operated by users who are connected to a mobile network containing a plurality of multimodal platforms, wherein said plurality of mobile devices contain a plurality of applications, wherein said plurality of applications also include visual applications, the method comprising the steps of:
providing a reusable multimodal application that enables multimodality in said plurality of applications on said plurality of mobile devices;
invoking said reusable multimodal application, wherein said step of invoking is performed by a visual application on a mobile device on the request of the user of said mobile device;
accepting the input information of the user by the reusable multimodal application in one or more modes of communication;
transmitting said input information to said multimodal platform via said mobile network;
synchronizing said input information that is in one or more modes of communication to create synchronized information;
processing said synchronized information and transmitting the synchronized and processed information to the reusable multimodal application;
providing said synchronized and processed information to the user for verification and modification in one or more input modes;
transferring said verified and modified information from the reusable multimodal application to the visual application; and
providing final results in one or more modes of communication, wherein said final results are derived by inputting said verified and modified information into the visual application.
2. The method of claim 1, wherein said modes of communication comprise audio, visual and haptic modes.
3. The method of claim 1, wherein if the mode of the user's input to the mobile device is audio and text, the steps of transmitting said search request and determining the synchronized and processed information further comprise the steps of:
transmitting said search request along with the associated search grammar derived by the reusable multimodal application via the mobile network to the multimodal platform;
recognizing grammar elements associated with the search request and said search grammar, wherein said grammar element associations are predetermined;
synchronizing the multiple modes of input to create synchronized information; and
processing said synchronized information, determining search words based on said recognized grammar elements, and transferring said search words to the reusable multimodal application.
4. The method of claim 1, wherein the visual application invokes appropriate input parameters at the reusable multimodal application comprising the search grammar and the search service to use.
5. The method of claim 1, wherein said step of invoking the reusable multimodal application is activated upon a voice command from the user.
6. The method of claim 1, wherein the reusable multimodal application is a search bar that enables a user to go directly to a specific web page of interest through voice input instead of having to navigate through several links on a browser.
7. The method of claim 1, wherein the reusable multimodal application is an input bar wherein more general information is required by the visual application.
8. The method of claim 1, wherein the input to the multimodal mobile search is freeform text.
9. The method of claim 1, wherein the reusable multimodal application is a web portal.
10. The method of claim 1, wherein the input to the reusable multimodal application is a uniform resource locator address.
11. The method of claim 1, wherein the reusable multimodal application supports sequential multimodality, allowing users to move seamlessly between visual and voice modes.
12. The method of claim 1, wherein the reusable multimodal application supports simultaneous use of voice and visual modes.
13. The method of claim 1, wherein the reusable multimodal application further comprises global grammar elements that can be invoked by the user using predefined multimodal invocations.
14. A system for conducting a search on a mobile device of a user, comprising: a mobile network;
a mobile device connected to said mobile network, further comprising a visual application;
a reusable multimodal application installed on said mobile device, wherein said reusable multimodal application uses a set of interface rules to provide a standard input interface to a visual search interface on the mobile device, wherein said reusable multimodal application receives multimodal commands from said user;
a multimodal platform located in said mobile network that receives and synchronizes said multimodal commands, processes said synchronized multimodal commands to create synchronized and processed information, and transfers said synchronized and processed information to said reusable multimodal application; and,
a visual interface that displays the synchronized and processed information to the user.
PCT/US2007/009102 2006-05-05 2007-04-14 Reusable multimodal application WO2007130256A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP07755388.1A EP2050015B1 (en) 2006-05-05 2007-04-14 Reusable multimodal application
EP07107463A EP1873661A2 (en) 2006-05-05 2007-05-03 Reusable multimodal application

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/418,896 US8213917B2 (en) 2006-05-05 2006-05-05 Reusable multimodal application
US11/418,896 2006-05-05

Publications (2)

Publication Number Publication Date
WO2007130256A2 true WO2007130256A2 (en) 2007-11-15
WO2007130256A3 WO2007130256A3 (en) 2008-05-02

Family

ID=38662555

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/009102 WO2007130256A2 (en) 2006-05-05 2007-04-14 Reusable multimodal application

Country Status (3)

Country Link
US (11) US8213917B2 (en)
EP (1) EP2050015B1 (en)
WO (1) WO2007130256A2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8213917B2 (en) 2006-05-05 2012-07-03 Waloomba Tech Ltd., L.L.C. Reusable multimodal application
US8571606B2 (en) 2001-08-07 2013-10-29 Waloomba Tech Ltd., L.L.C. System and method for providing multi-modal bookmarks

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080045201A1 (en) * 2006-08-17 2008-02-21 Kies Jonathan K Remote feature control of a mobile device
US9208783B2 (en) * 2007-02-27 2015-12-08 Nuance Communications, Inc. Altering behavior of a multimodal application based on location
US8862475B2 (en) * 2007-04-12 2014-10-14 Nuance Communications, Inc. Speech-enabled content navigation and control of a distributed multimodal browser
WO2009049196A1 (en) * 2007-10-11 2009-04-16 Manesh Nasser K Multi-modal mobile platform
US10133372B2 (en) * 2007-12-20 2018-11-20 Nokia Technologies Oy User device having sequential multimodal output user interface
US20090182562A1 (en) * 2008-01-14 2009-07-16 Garmin Ltd. Dynamic user interface for automated speech recognition
SG142399A1 (en) * 2008-05-02 2009-11-26 Creative Tech Ltd Apparatus for enhanced messaging and a method for enhanced messaging
US8862681B2 (en) 2008-06-25 2014-10-14 Microsoft Corporation Multimodal conversation transfer
US8788977B2 (en) 2008-11-20 2014-07-22 Amazon Technologies, Inc. Movement recognition as input mechanism
US8832585B2 (en) * 2009-09-25 2014-09-09 Apple Inc. Device, method, and graphical user interface for manipulating workspace views
EP4318463A3 (en) 2009-12-23 2024-02-28 Google LLC Multi-modal input on an electronic device
US11416214B2 (en) 2009-12-23 2022-08-16 Google Llc Multi-modal input on an electronic device
US8457883B2 (en) 2010-04-20 2013-06-04 Telenav, Inc. Navigation system with calendar mechanism and method of operation thereof
US9263045B2 (en) * 2011-05-17 2016-02-16 Microsoft Technology Licensing, Llc Multi-mode text input
US8943150B2 (en) * 2011-09-12 2015-01-27 Fiserv, Inc. Systems and methods for customizing mobile applications based upon user associations with one or more entities
US9847083B2 (en) * 2011-11-17 2017-12-19 Universal Electronics Inc. System and method for voice actuated configuration of a controlling device
CN103946838B (en) 2011-11-24 2017-10-24 微软技术许可有限责任公司 Interactive multi-mode image search
US9035874B1 (en) 2013-03-08 2015-05-19 Amazon Technologies, Inc. Providing user input to a computing device with an eye closure
US9832452B1 (en) 2013-08-12 2017-11-28 Amazon Technologies, Inc. Robust user detection and tracking
US11199906B1 (en) * 2013-09-04 2021-12-14 Amazon Technologies, Inc. Global user input management
GB2518002B (en) * 2013-09-10 2017-03-29 Jaguar Land Rover Ltd Vehicle interface system
JP6593008B2 (en) 2014-10-07 2019-10-23 株式会社リコー Information processing apparatus, communication method, program, and system
US10425418B2 (en) * 2014-10-07 2019-09-24 Ricoh Company, Ltd. Information processing apparatus, communications method, and system
US20160198499A1 (en) * 2015-01-07 2016-07-07 Samsung Electronics Co., Ltd. Method of wirelessly connecting devices, and device thereof
US10262660B2 (en) 2015-01-08 2019-04-16 Hand Held Products, Inc. Voice mode asset retrieval
US10061565B2 (en) 2015-01-08 2018-08-28 Hand Held Products, Inc. Application development using multiple primary user interfaces
US11081087B2 (en) * 2015-01-08 2021-08-03 Hand Held Products, Inc. Multiple primary user interfaces
US10402038B2 (en) 2015-01-08 2019-09-03 Hand Held Products, Inc. Stack handling using multiple primary user interfaces
US10726197B2 (en) * 2015-03-26 2020-07-28 Lenovo (Singapore) Pte. Ltd. Text correction using a second input
DE102015215044A1 (en) * 2015-08-06 2017-02-09 Volkswagen Aktiengesellschaft Method and system for processing multimodal input signals
CN105701165B (en) * 2015-12-30 2019-08-13 Oppo广东移动通信有限公司 Browser model switching method and switching device
KR102592907B1 (en) * 2018-06-22 2023-10-23 삼성전자주식회사 Method and device for recognizing a text
JP7205697B2 (en) * 2019-02-21 2023-01-17 株式会社リコー Communication terminal, shared system, display control method and program

Family Cites Families (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5844979A (en) 1995-02-16 1998-12-01 Global Technologies, Inc. Intelligent switching system for voice and data
US5828468A (en) 1996-05-17 1998-10-27 Nko, Inc. Point of presence (POP) for digital facsimile network with spoofing capability to maintain fax session
US6195357B1 (en) 1996-09-24 2001-02-27 Intervoice Limited Partnership Interactive information transaction processing system with universal telephony gateway capabilities
US5944791A (en) 1996-10-04 1999-08-31 Contigo Software Llc Collaborative web browser
US6282511B1 (en) 1996-12-04 2001-08-28 At&T Voiced interface with hyperlinked information
US6018710A (en) 1996-12-13 2000-01-25 Siemens Corporate Research, Inc. Web-based interactive radio environment: WIRE
US6208839B1 (en) 1996-12-19 2001-03-27 Motorola, Inc. Remote token based information acquistion system
US6101510A (en) 1997-01-29 2000-08-08 Microsoft Corporation Web browser control for incorporating web browser functionality into application programs
US6211869B1 (en) 1997-04-04 2001-04-03 Avid Technology, Inc. Simultaneous storage and network transmission of multimedia data with video host that requests stored data according to response time from a server
US6125376A (en) 1997-04-10 2000-09-26 At&T Corp Method and apparatus for voice interaction over a network using parameterized interaction definitions
CA2401726C (en) 1997-06-25 2010-10-19 Richard James Humpleman Browser based command and control home network
US6157705A (en) 1997-12-05 2000-12-05 E*Trade Group, Inc. Voice control of a server
WO1999046920A1 (en) 1998-03-10 1999-09-16 Siemens Corporate Research, Inc. A system for browsing the world wide web with a traditional telephone
IE980959A1 (en) 1998-03-31 1999-10-20 Datapage Ireland Ltd Document Production
EP1068693B1 (en) 1998-04-03 2011-12-21 Vertical Networks, Inc. System and method for transmitting voice and data using intelligent bridged tdm and packet buses
US6859451B1 (en) 1998-04-21 2005-02-22 Nortel Networks Limited Server for handling multimodal information
US6496122B2 (en) 1998-06-26 2002-12-17 Sharp Laboratories Of America, Inc. Image display and remote control system capable of displaying two distinct images
AR020608A1 (en) 1998-07-17 2002-05-22 United Video Properties Inc A METHOD AND A PROVISION TO SUPPLY A USER REMOTE ACCESS TO AN INTERACTIVE PROGRAMMING GUIDE BY A REMOTE ACCESS LINK
US6807254B1 (en) 1998-11-06 2004-10-19 Nms Communications Method and system for interactive messaging
US6757718B1 (en) 1999-01-05 2004-06-29 Sri International Mobile navigation of network-based electronic information using spoken input
SE9900652D0 (en) 1999-02-24 1999-02-24 Pipebeach Ab A voice browser and a method at a voice browser
US6606611B1 (en) 1999-02-27 2003-08-12 Emdadur Khan System and method for audio-only internet browsing using a standard telephone
US6604075B1 (en) 1999-05-20 2003-08-05 Lucent Technologies Inc. Web-based voice dialog interface
US6766298B1 (en) 1999-09-03 2004-07-20 Cisco Technology, Inc. Application server configured for dynamically generating web pages for voice enabled web applications
US7685252B1 (en) 1999-10-12 2010-03-23 International Business Machines Corporation Methods and systems for multi-modal browsing and implementation of a conversational markup language
US6807574B1 (en) 1999-10-22 2004-10-19 Tellme Networks, Inc. Method and apparatus for content personalization over a telephone interface
US6424945B1 (en) 1999-12-15 2002-07-23 Nokia Corporation Voice packet data network browsing for mobile terminals system and method using a dual-mode wireless connection
US6349132B1 (en) 1999-12-16 2002-02-19 Talk2 Technology, Inc. Voice interface for electronic documents
US7116765B2 (en) 1999-12-16 2006-10-03 Intellisync Corporation Mapping an internet document to be accessed over a telephone system
KR20010063197A (en) 1999-12-22 2001-07-09 윤종용 Data service method with character which is requested information by user in telecommunication system
JP4498523B2 (en) 2000-02-29 2010-07-07 パナソニック株式会社 Bookmark list display method and mobile phone
US6570966B1 (en) 2000-03-17 2003-05-27 Nortel Networks Limited Intermixing data and voice on voice circuits
US20030208472A1 (en) 2000-04-11 2003-11-06 Pham Peter Manh Method and apparatus for transparent keyword-based hyperlink
GB2364480B (en) 2000-06-30 2004-07-14 Mitel Corp Method of using speech recognition to initiate a wireless application (WAP) session
US6808254B2 (en) * 2000-11-30 2004-10-26 Brother Kogyo Kabushiki Kaisha Ink jet printer head
US7028306B2 (en) 2000-12-04 2006-04-11 International Business Machines Corporation Systems and methods for implementing modular DOM (Document Object Model)-based multi-modal browsers
US7020841B2 (en) 2001-06-07 2006-03-28 International Business Machines Corporation System and method for generating and presenting multi-modal applications from intent-based markup scripts
US6983307B2 (en) 2001-07-11 2006-01-03 Kirusa, Inc. Synchronization among plural browsers
US8238881B2 (en) 2001-08-07 2012-08-07 Waloomba Tech Ltd., L.L.C. System and method for providing multi-modal bookmarks
US7289606B2 (en) 2001-10-01 2007-10-30 Sandeep Sibal Mode-swapping in multi-modal telephonic applications
US20030187656A1 (en) 2001-12-20 2003-10-02 Stuart Goose Method for the computer-supported transformation of structured documents
JP4199670B2 (en) * 2002-01-15 2008-12-17 アバイア テクノロジー コーポレーション Communication application server for converged communication services
US7177814B2 (en) 2002-02-07 2007-02-13 Sap Aktiengesellschaft Dynamic grammar for voice-enabled applications
US7210098B2 (en) 2002-02-18 2007-04-24 Kirusa, Inc. Technique for synchronizing visual and voice browsers to enable multi-modal browsing
US6912581B2 (en) * 2002-02-27 2005-06-28 Motorola, Inc. System and method for concurrent multimodal communication session persistence
US7327833B2 (en) 2002-03-20 2008-02-05 At&T Bls Intellectual Property, Inc. Voice communications menu
US6999930B1 (en) 2002-03-27 2006-02-14 Extended Systems, Inc. Voice dialog server method and system
US8213917B2 (en) 2006-05-05 2012-07-03 Waloomba Tech Ltd., L.L.C. Reusable multimodal application
US20040034531A1 (en) 2002-08-15 2004-02-19 Wu Chou Distributed multimodal dialogue system and method
US20040214555A1 (en) * 2003-02-26 2004-10-28 Sunil Kumar Automatic control of simultaneous multimodality and controlled multimodality on thin wireless devices
US7657652B1 (en) * 2003-06-09 2010-02-02 Sprint Spectrum L.P. System for just in time caching for multimodal interaction
US20050010892A1 (en) * 2003-07-11 2005-01-13 Vocollect, Inc. Method and system for integrating multi-modal data capture device inputs with multi-modal output capabilities
KR20050023941A (en) 2003-09-03 2005-03-10 삼성전자주식회사 Audio/video apparatus and method for providing personalized services through voice recognition and speaker recognition
US7552055B2 (en) * 2004-01-10 2009-06-23 Microsoft Corporation Dialog component re-use in recognition systems
US7676754B2 (en) * 2004-05-04 2010-03-09 International Business Machines Corporation Method and program product for resolving ambiguities through fading marks in a user interface
US20050273487A1 (en) 2004-06-04 2005-12-08 Comverse, Ltd. Automatic multimodal enabling of existing web content
KR100617711B1 (en) 2004-06-25 2006-08-28 삼성전자주식회사 Method for initiating voice recognition in wireless terminal
US20060165104A1 (en) * 2004-11-10 2006-07-27 Kaye Elazar M Content management interface
US7917365B2 (en) * 2005-06-16 2011-03-29 Nuance Communications, Inc. Synchronizing visual and speech events in a multimodal application
US7672931B2 (en) 2005-06-30 2010-03-02 Microsoft Corporation Searching for content using voice search queries
EP1873661A2 (en) 2006-05-05 2008-01-02 Kirusa, Inc. Reusable multimodal application

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of EP2050015A4 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8571606B2 (en) 2001-08-07 2013-10-29 Waloomba Tech Ltd., L.L.C. System and method for providing multi-modal bookmarks
US9069836B2 (en) 2002-04-10 2015-06-30 Waloomba Tech Ltd., L.L.C. Reusable multimodal application
US9489441B2 (en) 2002-04-10 2016-11-08 Gula Consulting Limited Liability Company Reusable multimodal application
US9866632B2 (en) 2002-04-10 2018-01-09 Gula Consulting Limited Liability Company Reusable multimodal application
US8213917B2 (en) 2006-05-05 2012-07-03 Waloomba Tech Ltd., L.L.C. Reusable multimodal application
US8670754B2 (en) 2006-05-05 2014-03-11 Waloomba Tech Ltd., L.L.C. Reusable multimodal application
US10104174B2 (en) 2006-05-05 2018-10-16 Gula Consulting Limited Liability Company Reusable multimodal application
US10516731B2 (en) 2006-05-05 2019-12-24 Gula Consulting Limited Liability Company Reusable multimodal application
US10785298B2 (en) 2006-05-05 2020-09-22 Gula Consulting Limited Liability Company Reusable multimodal application
US11368529B2 (en) 2006-05-05 2022-06-21 Gula Consulting Limited Liability Company Reusable multimodal application
US11539792B2 (en) 2006-05-05 2022-12-27 Gula Consulting Limited Liability Company Reusable multimodal application

Also Published As

Publication number Publication date
US20150379102A1 (en) 2015-12-31
US8213917B2 (en) 2012-07-03
US8670754B2 (en) 2014-03-11
US20180131760A1 (en) 2018-05-10
US20140195231A1 (en) 2014-07-10
US11368529B2 (en) 2022-06-21
US20230208912A1 (en) 2023-06-29
WO2007130256A3 (en) 2008-05-02
EP2050015A4 (en) 2013-02-27
US9866632B2 (en) 2018-01-09
EP2050015A2 (en) 2009-04-22
US20190124148A1 (en) 2019-04-25
US10516731B2 (en) 2019-12-24
US9489441B2 (en) 2016-11-08
US20120245946A1 (en) 2012-09-27
US10785298B2 (en) 2020-09-22
US20200128074A1 (en) 2020-04-23
US20070260972A1 (en) 2007-11-08
US10104174B2 (en) 2018-10-16
US20210243254A1 (en) 2021-08-05
US20170214740A1 (en) 2017-07-27
US11539792B2 (en) 2022-12-27
US20220286506A1 (en) 2022-09-08
US9069836B2 (en) 2015-06-30
EP2050015B1 (en) 2014-11-26

Similar Documents

Publication Publication Date Title
US11539792B2 (en) Reusable multimodal application
EP1952279B1 (en) A system and method for conducting a voice controlled search using a wireless mobile device
EP2747389B1 (en) Mobile terminal having auto answering function and auto answering method for use in the mobile terminal
WO2008065662A2 (en) A method and apparatus for starting applications
US7881705B2 (en) Mobile communication terminal and information acquisition method for position specification information
US20030040341A1 (en) Multi-modal method for browsing graphical information displayed on mobile devices
US20080304639A1 (en) System and method for communicating with interactive service systems
KR20140099606A (en) Sharing Method of Service Page and Electronic Device operating the same
US20020174177A1 (en) Voice activated navigation of a computer network
CN107818046A (en) The A/B method of testings and device of the application program page
KR20010039743A (en) Method and apparatus for splitting markup flows into discrete screen displays
EP1873661A2 (en) Reusable multimodal application
US20060034443A1 (en) Method of displaying a map containing locations, and accesses to information resources relating to said locations
US20030073433A1 (en) Mobile telecommunications device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07755388

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2007755388

Country of ref document: EP