Publication number: US 20050137877 A1
Publication type: Application
Application number: US 10/738,460
Publication date: Jun 23, 2005
Filing date: Dec 17, 2003
Priority date: Dec 17, 2003
Also published as: US 8751241, US 20080215336
Inventors: Christopher Oesterling, William Mazzara, Jeffrey Stefan
Original Assignee: General Motors Corporation
Method and system for enabling a device function of a vehicle
US 20050137877 A1
Abstract
The current invention provides a method and system for enabling a device function of a vehicle. A speech input stream is received at a telematics unit. A speech input context is determined for the received speech input stream. The received speech input stream is processed based on the determination and the device function of the vehicle is enabled responsive to the processed speech input stream. A vehicle device in control of the enabled device function of the vehicle is directed based on the processed speech input stream. A computer usable medium with suitable computer program code is employed for enabling a device function of a vehicle.
Claims(18)
1. A method for enabling a device function of a vehicle, the method comprising:
receiving a speech input stream at a telematics unit;
determining a speech input context for the received speech input stream;
processing the received speech input stream based on the determination; and
enabling the device function of the vehicle responsive to the processed speech input stream.
2. The method of claim 1 wherein determining a speech input context for the received speech input stream comprises:
monitoring the speech input stream at a context recognizer, the context recognizer comprising a context verbiage;
comparing the speech input stream to the context verbiage; and
selecting one of a plurality of domain specific actuators based on the determined speech input context.
3. The method of claim 1 wherein processing the received speech input stream comprises:
accessing a set of rules and structures for formatting the speech input stream according to the determined speech input context; and
formatting the received speech input stream based on the set of rules and the structures.
4. The method of claim 3, wherein the set of rules and structures are contained in a domain specific actuator.
5. The method of claim 1 wherein enabling the device function of the vehicle comprises:
writing the processed speech input stream in an activation cache;
activating a vehicle device corresponding to the device function of the vehicle; and
supplying the processed speech input stream from the activation cache to the vehicle device.
6. The method of claim 1 further comprising:
directing a vehicle device in control of the enabled device function of the vehicle based on the processed speech input stream.
7. A computer usable medium including computer program code for enabling a device function of a vehicle comprising:
computer program code for receiving a speech input stream at a telematics unit;
computer program code for determining a speech input context for the received speech input stream;
computer program code for processing the received speech input stream based on the determination; and
computer program code for enabling the device function of the vehicle responsive to the processed speech input stream.
8. The computer usable medium of claim 7 wherein computer program code for determining a speech input context for the received speech input stream comprises:
computer program code for monitoring the speech input stream at a context recognizer, the context recognizer comprising a context verbiage;
computer program code for comparing the speech input stream to the context verbiage; and
computer program code for selecting one of a plurality of domain specific actuators based on the determined speech input context.
9. The computer usable medium of claim 7 wherein processing the received speech input stream comprises:
computer program code for accessing a set of rules and structures for formatting the speech input stream according to the determined speech input context; and
computer program code for formatting the received speech input stream based on the set of rules and the structures.
10. The computer usable medium of claim 9 wherein the set of rules and structures are contained in a domain specific actuator.
11. The computer usable medium of claim 7 wherein enabling the device function of the vehicle comprises:
computer program code for writing the processed speech input stream in an activation cache;
computer program code for activating a vehicle device corresponding to the enabled device function of the vehicle; and
computer program code for supplying the processed speech input stream from the activation cache to the vehicle device.
12. The computer usable medium of claim 7 further comprising:
computer program code for directing a vehicle device in control of the enabled device function of the vehicle based on the processed speech input stream.
13. A system for enabling a device function of a vehicle, the system comprising:
means for receiving a speech input stream at a telematics unit;
means for determining a speech input context for the received speech input stream;
means for processing the received speech input stream based on the determination; and
means for enabling the device function of the vehicle responsive to the processed speech input stream.
14. The system of claim 13 wherein determining a speech input context for the received speech input stream comprises:
means for monitoring the speech input stream at a context recognizer, the context recognizer comprising a context verbiage;
means for comparing the speech input stream to the context verbiage; and
means for selecting one of a plurality of domain specific actuators based on the determined speech input context.
15. The system of claim 13 wherein processing the received speech input stream comprises:
means for accessing a set of rules and structures for formatting the speech input stream according to the determined speech input context; and
means for formatting the received speech input stream based on the set of rules and the structures.
16. The system of claim 15 wherein the set of rules and structures are contained in a domain specific actuator.
17. The system of claim 13 wherein enabling the device function of the vehicle comprises:
means for writing the processed speech input stream in an activation cache;
means for activating a vehicle device corresponding to the enabled device function of the vehicle; and
means for supplying the processed speech input stream from the activation cache to the vehicle device.
18. The system of claim 13 further comprising:
means for directing a vehicle device in control of the enabled device function of the vehicle based on the processed speech input stream.
Description
    FIELD OF THE INVENTION
  • [0001]
    This invention relates generally to telematics systems. In particular, the invention relates to a method and system for enabling a device function of a vehicle.
  • BACKGROUND OF THE INVENTION
  • [0002]
    One of the fastest growing areas of communications technology is related to automobile network solutions. The demand and potential for wireless vehicle communication, networking and diagnostic services have recently increased. Although many vehicles on the road today have limited wireless communication functions, such as unlocking a door and setting or disabling a car alarm, new vehicles offer additional wireless communication systems that help personalize comfort settings, run maintenance and diagnostic functions, place telephone calls, access call-center information, update controller systems, determine vehicle location, assist in tracking a vehicle after a theft, and provide other vehicle-related services. Drivers can call telematics call centers and receive navigational, concierge, emergency, and location services, as well as other specialized help such as locating the geographical position of a stolen vehicle or honking the horn of a vehicle when the owner cannot locate it in a large parking garage. Telematics service providers can offer enhanced telematics services by supplying a subscriber with a digital handset.
  • [0003]
    With speech recognition available in today's vehicles, a driver can control devices within the vehicle without taking his or her hands from the steering wheel. Drivers receive various forms of information while operating a vehicle, such as phone numbers or destination addresses. While a driver is on the road, it is not convenient to record that information and then input it to a vehicle device such as an in-vehicle phone or navigation system. Information of interest to a driver can be part of a conversation the driver has with another person and therefore not in a format directly usable by a vehicle device.
  • [0004]
    For example, the driver can receive a business address as part of a conversation with a person at the business. To use that address with the vehicle's navigation system, the driver must remember or record the address, enable the navigation system, and input the address to the navigation system. This requirement is both an inconvenience for the driver and a limitation that decreases the driver's satisfaction with the capabilities of the navigation system.
  • [0005]
    It is desirable, therefore, to provide a method and system for enabling a device function of a vehicle that overcomes the challenges and obstacles described above.
  • SUMMARY OF THE INVENTION
  • [0006]
    The current invention provides a method for enabling a device function of a vehicle. A speech input stream is received at a telematics unit. A speech input context is determined for the received speech input stream. The received speech input stream is processed based on the determination and the device function of the vehicle is enabled responsive to the processed speech input stream. The method further comprises directing a vehicle device in control of the device function based on the processed speech input stream.
  • [0007]
    Another aspect of the current invention provides a computer usable medium including computer program code for enabling a device function of a vehicle. The computer usable medium comprises: computer program code for receiving a speech input stream at a telematics unit; computer program code for determining a speech input context for the received speech input stream; computer program code for processing the received speech input stream based on the determination; and computer program code for enabling the device function of the vehicle responsive to the processed speech input stream. The computer usable medium further comprises computer program code for directing a vehicle device in control of the device function based on the processed speech input stream.
  • [0008]
    Another aspect of the current invention provides a system for enabling a device function of a vehicle. The system comprises: means for receiving a speech input stream at a telematics unit; means for determining a speech input context for the received speech input stream; means for processing the received speech input stream based on the determination; and means for enabling the device function of the vehicle responsive to the processed speech input stream. The system further comprises means for directing a vehicle device in control of the device function based on the processed speech input stream.
  • [0009]
    The aforementioned and other features and advantages of the invention will become further apparent from the following detailed description of the presently preferred embodiment, read in conjunction with the accompanying drawings. The detailed description and drawings are merely illustrative of the invention rather than limiting, the scope of the invention being defined by the appended claims and equivalents thereof.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0010]
    FIG. 1 is a schematic diagram of a system for enabling a device function of a vehicle in accordance with one embodiment of the current invention;
  • [0011]
    FIG. 2 is a flow diagram of a method for enabling a device function of a vehicle in accordance with one embodiment of the current invention;
  • [0012]
    FIG. 3 is a flow diagram detailing the step of determining the speech input context at block 220 of FIG. 2;
  • [0013]
    FIG. 4 is a flow diagram detailing the step of processing the received speech input stream at block 230 of FIG. 2; and
  • [0014]
    FIG. 5 is a flow diagram detailing the step of enabling the device function of the vehicle at block 240 of FIG. 2.
  • DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EMBODIMENTS
  • [0015]
    FIG. 1 is a schematic diagram of a system for enabling a device function of a vehicle in accordance with one embodiment of the current invention at 100. The system for enabling a device function of a vehicle at 100 comprises: a mobile vehicle 110, a telematics unit 120, one or more wireless carrier systems 140, or one or more satellite carrier systems 141, one or more communication networks 142, and one or more call centers 180. Mobile vehicle 110 is a vehicle such as a car or truck equipped with suitable hardware and software for transmitting and receiving speech and data communications. Vehicle 110 has a multimedia system 118 having one or more speakers 117.
  • [0016]
    In one embodiment of the invention, telematics unit 120 comprises: a digital signal processor (DSP) 122 connected to a wireless modem 124; a global positioning system (GPS) receiver or GPS unit 126; an in-vehicle memory 128; a microphone 130; one or more speakers 132; an embedded or in-vehicle phone 134 or an email access appliance 136; and a display 138. DSP 122 is also referred to as a microcontroller, controller, host processor, ASIC, or vehicle communications processor. GPS unit 126 provides longitude and latitude coordinates of the vehicle, as well as a time stamp and a date stamp. In-vehicle phone 134 is an analog, digital, dual-mode, dual-band, multi-mode or multi-band cellular phone.
  • [0017]
    Telematics unit 120 can store a processed speech input stream, GPS location data, and other data files in in-vehicle memory 128. Telematics unit 120 can set or reset calling-state indicators and can enable or disable various cellular-phone functions, telematics-unit functions and vehicle functions when directed by program code running on DSP 122. Telematics unit 120 can send and receive over-the-air messages using, for example, a pseudo-standard air-interface function or other proprietary and non-proprietary communication links.
  • [0018]
    DSP 122 executes various computer programs and computer program code, within telematics unit 120, which control programming and operational modes of electronic and mechanical systems. DSP 122 controls communications between telematics unit 120, wireless carrier system 140 or satellite carrier system 141, and call center 180. A speech-recognition engine 119, which can translate human speech input through microphone 130 to digital signals used to control functions of telematics unit 120, is installed in telematics unit 120. The interface to telematics unit 120 includes one or more buttons (not shown) on telematics unit 120, on multimedia system 118, or on an associated keyboard or keypad that are also used to control functions of telematics unit 120. A text-to-speech synthesizer 121 can convert text strings to audible messages that are played through speaker 132 of telematics unit 120 or through speakers 117 of multimedia system 118.
  • [0019]
    Speech recognition engine 119 and the buttons are used to activate and control various functions of telematics unit 120. For example, programming of in-vehicle phone 134 is controlled with verbal commands that are translated by speech-recognition software executed by DSP 122. Alternatively, in-vehicle phone 134 is programmed by pushing buttons on the interface of telematics unit 120 or on in-vehicle phone 134. In another embodiment, the interface to telematics unit 120 includes other forms of preference and data entry including touch screens, wired or wireless keypad remotes, or other wirelessly connected devices such as Bluetooth-enabled devices or 802.11-enabled devices.
  • [0020]
    In one embodiment of the current invention, speech recognition engine 119 comprises a configurable listener automaton 111 that receives a speech input stream and processes the speech input stream according to a set of rules and structures defined in a domain specific actuator. The listener automaton 111 writes the processed speech input stream to an activation cache that is a portion of in-vehicle memory 128. DSP 122 executes computer program code comprising a context recognizer and associated domain specific actuators, within telematics unit 120, which control operation and configuration of the listener automaton 111. DSP 122 controls communications between telematics unit 120, listener automaton 111, and activation cache in in-vehicle memory 128. Data in the activation cache is supplied to the vehicle devices 115 through vehicle bus 112.
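    For illustration only, a minimal Python sketch of the data flow described above, under assumed class and method names (the patent does not specify an implementation): a listener automaton, configured with a domain specific actuator, formats the speech input stream and writes the result to an activation cache that vehicle devices can read.

        class ActivationCache:
            """Hypothetical stand-in for the portion of in-vehicle memory 128
            that holds processed speech data for vehicle devices."""
            def __init__(self):
                self._slots = {}

            def write(self, device_function, payload):
                self._slots[device_function] = payload

            def read(self, device_function):
                return self._slots.get(device_function)

        class ListenerAutomaton:
            """Hypothetical configurable listener automaton: formats a speech input
            stream using the currently selected domain specific actuator and writes
            the result to the activation cache."""
            def __init__(self, cache):
                self._cache = cache
                self._actuator = None

            def configure(self, actuator):
                # The context recognizer selects and installs the actuator.
                self._actuator = actuator

            def process(self, speech_stream):
                payload = self._actuator.format(speech_stream)
                self._cache.write(self._actuator.device_function, payload)
                return payload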
  • [0021]
    DSP 122 controls, generates and accepts digital signals transmitted between telematics unit 120 and a vehicle communication bus 112 that is connected to various vehicle components 114, vehicle devices 115, various sensors 116, and multimedia system 118 in mobile vehicle 110. DSP 122 can activate various programming and operation modes, as well as provide for data transfers. In facilitating interactions among the various communication and electronic modules, vehicle communication bus 112 utilizes bus interfaces such as controller-area network (CAN), J1850, International Organization for Standardization (ISO) Standard 9141, ISO Standard 11898 for high-speed applications, and ISO Standard 11519 for lower speed applications.
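    As a hedged aside, the short sketch below (not from the patent) only illustrates why a payload from the activation cache would be split into frames before traveling over a classic CAN bus, which carries at most 8 data bytes per frame; the arbitration ID and framing scheme are assumptions.

        def to_can_frames(payload: bytes, arbitration_id: int = 0x3A0):
            """Split a payload into (id, data) tuples of at most 8 bytes each."""
            return [(arbitration_id, payload[i:i + 8])
                    for i in range(0, len(payload), 8)]

        # e.g. a processed phone number supplied to a device over the bus
        frames = to_can_frames(b"5551212")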
  • [0022]
    Mobile vehicle 110 via telematics unit 120 sends and receives radio transmissions from wireless carrier system 140, or satellite carrier system 141. Wireless carrier system 140, or satellite carrier system 141 is any suitable system for transmitting a signal from mobile vehicle 110 to communication network 142.
  • [0023]
    Communication network 142 includes services from mobile telephone switching offices, wireless networks, public-switched telephone networks (PSTN), and Internet protocol (IP) networks. Communication network 142 comprises a wired network, an optical network, a fiber network, another wireless network, or any combination thereof. Communication network 142 connects to mobile vehicle 110 via wireless carrier system 140, or satellite carrier system 141.
  • [0024]
    Communication network 142 can send and receive short messages according to established protocols such as dedicated short range communication standard (DSRC), IS-637 standards for short message service (SMS), IS-136 air-interface standards for SMS, and GSM 03.40 and 09.02 standards. In one embodiment of the invention, similar to paging, an SMS communication is posted along with an intended recipient, such as a communication device in mobile vehicle 110.
  • [0025]
    Call center 180 is a location where many calls are received and serviced at the same time, or where many calls are sent at the same time. In one embodiment of the invention, the call center is a telematics call center, facilitating communications to and from telematics unit 120 in mobile vehicle 110. In another embodiment, the call center 180 is a voice call center, providing verbal communications between a communication service advisor 185, in call center 180 and a subscriber. In another embodiment, call center 180 contains each of these functions.
  • [0026]
    Communication services advisor 185 is a real advisor or a virtual advisor. A real advisor is a human being in verbal communication with a user or subscriber. A virtual advisor is a synthesized speech interface responding to requests from user or subscriber. In one embodiment, the virtual advisor includes one or more recorded messages. In another embodiment, the virtual advisor generates speech messages using a call center based text to speech synthesizer (TTS). In another embodiment, the virtual advisor includes both recorded and TTS generated messages.
  • [0027]
    Call center 180 provides services to telematics unit 120. Communication services advisor 185 provides one of a number of support services to a subscriber. Call center 180 can transmit and receive data via a data signal to telematics unit 120 in mobile vehicle 110 through wireless carrier system 140, satellite carrier systems 141, or communication network 142.
  • [0028]
    Call center 180 can determine mobile identification numbers (MINs) and telematics unit identifiers associated with a telematics unit access request, compare MINs and telematics unit identifiers with a database of identifier records, and send calling-state messages to the telematics unit 120 based on the request and identification numbers.
  • [0029]
    Communication network 142 connects wireless carrier system 140 or satellite carrier system 141 to a user computer 150, a wireless or wired phone 160, a handheld device 170, such as a personal digital assistant, and call center 180. User computer 150 or handheld device 170 has a wireless modem to send data through wireless carrier system 140, or satellite carrier system 141, which connects to communication network 142. In another embodiment, user computer 150 or handheld device 170 has a wired modem that connects to communications network 142. Data is received at call center 180. Call center 180 has any suitable hardware and software capable of providing web services to help transmit messages and data signals from user computer 150 or handheld device 170 to telematics unit 120 in mobile vehicle 110.
  • [0030]
    FIG. 2 is a flow diagram of a method for enabling a device function of a vehicle in accordance with one embodiment of the current invention at 200. The method for enabling a device function of a vehicle at 200 begins (block 205) when a speech input stream is received at a telematics unit from a speech source (block 210). The speech source can be human speech or speech generated by a speech synthesizer. A speech input context is determined for the received speech input stream (block 220). The speech input context identifies the framework in which to interpret the received speech input stream. The speech input context associates the speech input stream with a specific device function of the vehicle, such as navigation or personal calling.
  • [0031]
    The received speech input stream is processed based on the determined speech input context (block 230). The device function of the vehicle is enabled responsive to the processed speech input stream (block 240). The vehicle device in control of the enabled device function of the vehicle is directed based on the processed speech input stream (block 250). An example of a vehicle device is the navigation system of the vehicle, and the corresponding device function of the vehicle is navigation. The method ends (block 295).
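    A top-level Python sketch of this flow, with placeholder arguments standing in for the steps detailed in FIGS. 3-5 (the names are illustrative, not the patent's):

        def enable_device_function_from_speech(speech_stream, determine_context,
                                               select_actuator, cache, devices):
            context = determine_context(speech_stream)          # block 220 (FIG. 3)
            actuator = select_actuator(context)                 # block 330
            payload = actuator.format(speech_stream)            # block 230 (FIG. 4)
            cache[actuator.device_function] = payload           # block 240 (FIG. 5)
            devices[actuator.device_function].direct(payload)   # block 250
            return payload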
  • [0032]
    FIG. 3 is a flow diagram detailing the step of determining the speech input context at block 220 of FIG. 2. The step of determining the speech input context at 300 begins (block 305) with monitoring the speech input stream at a context recognizer (block 310). The context recognizer comprises a context verbiage. The speech input stream is compared to the context verbiage (block 320). An example of verbiage contained in the context recognizer is the word “street” preceded by a text string. This verbiage is used to identify an address as a component of the speech input stream.
  • [0033]
    In one embodiment, a speech input stream comprised of numerical utterances followed by non-numerical utterances is associated with a navigation destination address context. In another embodiment, a speech input stream comprised of numerical utterances is associated with a directory assistance context.
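    A minimal sketch of such context heuristics, assuming the speech input stream has already been tokenized into utterances; the rules below encode only the two example associations given in this paragraph.

        def determine_context(utterances):
            has_digits = any(u.isdigit() for u in utterances)
            has_words = any(u.isalpha() for u in utterances)
            if has_digits and has_words:
                # e.g. ["1200", "main", "street"] -> a destination address
                return "navigation"
            if has_digits:
                # e.g. ["555", "1212"] -> a directory assistance phone number
                return "directory_assistance"
            return "unknown"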
  • [0034]
    Each device function of the vehicle is assigned a domain specific actuator. The domain specific actuator contains a set of rules and structures that determine how to format the speech input stream for the corresponding vehicle device that controls the particular device function of the vehicle. One of a plurality of domain specific actuators is selected based on the comparison of the speech input stream to the context verbiage (block 330) and the step ends (block 395).
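    Continuing the sketch, one way to model the domain specific actuators, with illustrative formatting rules (the actual rules and structures are implementation details the patent leaves open):

        class NavigationActuator:
            device_function = "navigation"
            def format(self, utterances):
                # e.g. ["1200", "main", "street"] -> "1200 Main Street"
                return " ".join(utterances).title()

        class CallingActuator:
            device_function = "personal_calling"
            def format(self, utterances):
                # keep only the digits, e.g. ["555", "1212"] -> "5551212"
                return "".join(u for u in utterances if u.isdigit())

        ACTUATORS = {
            "navigation": NavigationActuator(),
            "directory_assistance": CallingActuator(),  # a phone number context enables personal calling
        }

        def select_actuator(context):
            """Select one of a plurality of domain specific actuators (block 330)."""
            return ACTUATORS[context]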
  • [0035]
    In one example of the system and method for enabling a device function of a vehicle, a subscriber contacts directory assistance to obtain a phone number for a business. The directory assistance operator speaks the phone number for the business. The spoken phone number is the speech input stream in this example. The context recognizer identifies the string of numbers as a phone number by matching the received phone number to context verbiage corresponding to a phone number string. Having determined that a phone number is being received, the context recognizer selects a domain specific actuator for personal calling. The speech input stream is then formatted so that the phone number is available for use by the subscriber's in-vehicle phone or personal phonebook. The phone number is written to the activation cache, and the personal calling device function is thereby enabled with the phone number data.
  • [0036]
    In another example, following on the previous example, the subscriber's personal calling device is directed to request what action the subscriber would like to take regarding the received phone number. The personal calling device sends the subscriber a prompt asking whether the subscriber wishes to dial or store the phone number.
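    A compact, self-contained sketch of this worked example, with an assumed word-to-digit table and assumed prompt wording:

        DIGIT_WORDS = {"zero": "0", "one": "1", "two": "2", "three": "3", "four": "4",
                       "five": "5", "six": "6", "seven": "7", "eight": "8", "nine": "9"}

        def phone_number_from_speech(utterances):
            return "".join(DIGIT_WORDS[u] for u in utterances if u in DIGIT_WORDS)

        activation_cache = {}
        activation_cache["personal_calling"] = phone_number_from_speech(
            ["five", "five", "five", "one", "two", "one", "two"])

        # The personal calling device then prompts the subscriber.
        prompt = f"Dial or store {activation_cache['personal_calling']}?"  # "Dial or store 5551212?"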
  • [0037]
    FIG. 4 is a flow diagram detailing the step of processing the received speech input stream at block 230 of FIG. 2. The step of processing the received speech input stream at 400 begins (block 405) by accessing a set of rules and structures for formatting the speech input stream according to the determined speech input context (block 410). The set of rules and structures are contained in the domain specific actuator. The received speech input stream is formatted based on the set of rules and structures (block 420). For example, if the speech input stream includes a phone number, the speech input stream is formatted so that the phone number and other relevant data, such as the entity associated with the phone number, are available to and in the proper format for personal calling. The step ends (block 495).
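    For example, a sketch of block 420 under assumed rules, packaging the number together with the associated entity so that both are available to personal calling (the field names are illustrative):

        def format_for_personal_calling(number_digits, entity=None):
            return {
                "device_function": "personal_calling",
                "phone_number": number_digits,
                "label": entity or "unknown",
            }

        format_for_personal_calling("5551212", entity="the called business")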
  • [0038]
    FIG. 5 is a flow diagram detailing the step of enabling the device function of the vehicle at block 240 of FIG. 2. The step of enabling the device function of the vehicle at 500 begins (block 505) with writing the processed speech input stream in an activation cache (block 510). The activation cache is a memory location where a vehicle device can access the processed speech input stream. The vehicle device corresponding to the enabled device function of the vehicle is activated (block 520). The processed speech input stream from the activation cache is supplied to the vehicle device (block 530), and the step ends (block 595). In the example where the device function of the vehicle is personal calling, the vehicle device corresponding to personal calling is the in-vehicle phone. A phone number processed from the speech input stream and written to the activation cache would be supplied to the in-vehicle phone for dialing or storing.
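    A sketch of this sequence, reusing a plain dictionary as the activation cache and an illustrative in-vehicle phone class (both assumptions, not the patent's implementation):

        class InVehiclePhone:
            """Illustrative vehicle device for the personal calling function."""
            def activate(self):
                self.active = True
            def receive(self, data):
                self.pending_number = data   # ready to be dialed or stored

        def enable_device_function(cache, device, device_function, processed_stream):
            cache[device_function] = processed_stream   # block 510
            device.activate()                           # block 520
            device.receive(cache[device_function])      # block 530

        phone = InVehiclePhone()
        enable_device_function({}, phone, "personal_calling", "5551212")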
  • [0039]
    While embodiments of the invention disclosed herein are presently considered to be preferred, various changes and modifications can be made without departing from the spirit and scope of the invention. The scope of the invention is indicated in the appended claims, and all changes that come within the meaning and range of equivalents are intended to be embraced therein.
Classifications
U.S. Classification: 704/275, 704/E15.045
International Classification: G10L15/26
Cooperative Classification: G10L15/26
European Classification: G10L15/26A
Legal Events
Date: Dec 17, 2003
Code: AS
Event: Assignment
Owner name: GENERAL MOTORS CORPORATION, MICHIGAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:OSTERLING, CHRISTOPHER;MAZZARA, WILLIAM E.;STEFAN, JEFFREY M.;REEL/FRAME:014825/0765
Effective date: 20031201