|Publication number||US7395959 B2|
|Application number||US 11/260,584|
|Publication date||Jul 8, 2008|
|Filing date||Oct 27, 2005|
|Priority date||Oct 27, 2005|
|Also published as||US7980465, US8328089, US20070098145, US20080221883, US20110276595|
|Inventors||Dustin Kirkland, Ameet M. Paranjape|
|Original Assignee||International Business Machines Corporation|
1. Technical Field
The present invention relates in general to improved telephony devices and in particular to providing hands-free contact database information entry at a communication device by recording an ongoing conversation and extracting contact database information from the converted text of the ongoing conversation.
2. Description of the Related Art
Many communication devices now include address books provided through a database organized for holding contact information. Contact databases often include the same type of information that a person would include in a paper address book, such as names, business associations, telephone numbers, physical addresses, email addresses, and other contact information.
Many communication devices include multiple buttons or a touch-sensitive touch screen, through which a user may select a series of buttons to enter a name or phone number for storage within the contact database. While button selection or touch-screen selection of letters and numbers provides one way for a user to enter information into a contact database, this method is often difficult when the user is trying to enter information and simultaneously hold a conversation through the same communication device. Further, button selection or touch-screen selection of contact information is more difficult when a user is also driving.
In one example, if a user is driving while also carrying on a conversation through a portable communication device and the other participant in the conversation provides the user with a telephone number during the conversation, the user could try to remember the number, could write the number down on paper, or could attempt to select buttons to enter the number into the contact database of the portable communication device. All of these options, however, are limited: in the first, the user may quickly forget the number or address; in the second and third, the user must take at least one hand off the wheel to record the information, either written or through selection of buttons, while driving.
Therefore, in view of the foregoing, there is a need for a method, system, and program for hands-free entry of contact information for storage in a contact database of a communication device.
Therefore, the present invention provides a method, system, and program for hands-free entry of contact information for storage in a contact database of a communication device.
In one embodiment, a recording system at a communication device detects a user initiation to record. Responsive to detecting the user initiation to record, the recording system records the ongoing conversation supported between the communication device and a second remote communication device. The recording system converts the recording of the conversation into text. Next, the recording system extracts contact information from the text. Then, the recording system stores the extracted contact information in an entry of the contact database, such that contact information is added to the contact database of the communication device without manual entry of the contact information by the user.
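The record, convert, extract, store flow described in this embodiment may be sketched as follows. The function names, the list-based contact database, and the single digit-run extraction rule are illustrative assumptions for the sketch, not the claimed implementation:

```python
import re

def convert_to_text(recording: str) -> str:
    # Stand-in for the speech-to-text converter; here the "recording"
    # is already text, so conversion just normalizes case.
    return recording.lower()

def extract_contact_info(text: str) -> dict:
    # Minimal extraction: treat any run of 7-10 digits as a phone number.
    match = re.search(r"\b\d{7,10}\b", text)
    return {"CELL_NUMBER": match.group()} if match else {}

def store_entry(contact_db: list, entry: dict) -> None:
    contact_db.append(entry)

def on_recording_finished(contact_db: list, recording: str) -> dict:
    # Record -> convert -> extract -> store, as described above.
    text = convert_to_text(recording)
    entry = extract_contact_info(text)
    store_entry(contact_db, entry)
    return entry

db = []
on_recording_finished(db, "My cell number is 5551234567")
# db now holds one entry with the extracted number
```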
The communication device detects user initiation to record by detecting a predesignated keystroke, voice command, or other preselected input trigger. Once recording starts, the communication device detects entry of the same input trigger or a different input trigger as a user indication to stop recording.
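The case where the same input trigger both starts and stops recording reduces to a toggle. A minimal sketch, assuming an illustrative "*" keystroke as the trigger:

```python
class RecordingTrigger:
    """Toggle recording state when the preselected trigger is detected."""

    def __init__(self, trigger: str = "*"):
        self.trigger = trigger   # illustrative trigger value
        self.recording = False

    def handle_input(self, user_input: str) -> bool:
        # Flip the recording state when the trigger is seen;
        # return the current recording state.
        if user_input == self.trigger:
            self.recording = not self.recording
        return self.recording

t = RecordingTrigger()
t.handle_input("*")      # starts recording -> True
t.handle_input("hello")  # unrelated input, no change -> True
t.handle_input("*")      # stops recording -> False
```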
During recording, the communication device may record both the local voice at the communication device and the remote voice at the second remote communication device, only the local voice, or only the remote voice. In addition, during recording, the communication device may prompt the user to enter a particular type of information and may mute the speaker at the communication device.
In extracting contact information from the text, the communication device detects at least one preselected tag wording within the text, searches the text for at least one portion matching a textual characteristic associated with the preselected tag, and, responsive to identifying the portion matching the textual characteristic, assigns the portion to a particular field of the entry of the contact database associated with the preselected tag. In addition, in extracting contact information from the text, the communication device detects the absence of at least one specified tag wording within the text, infers the information for the specified tag wording from other information collectable at the communication device about a second user at the second communication device, and assigns the inferred information to a particular field of the entry of the contact database associated with the tag.
In response to extracting the contact information from the text, the communication device may present the contact information to the user at the communication device or to the second user at the second communication device for approval prior to storing the extracted contact information in the entry of the contact database of the communication device.
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objects and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
With reference now to
As illustrated, a network 140 facilitates a communicative connection between telephony device 110 and telephony device 120. Each of telephony devices 110 and 120 enable a user at each telephony device to communicate with users at other telephony devices through voice communications and other communication media.
Telephony devices 110 and 120 may include stand-alone or portable telephony devices or may be implemented through any computing system communicatively connected to network 140. In addition, telephony devices 110 and 120 may include multiple types of peripheral interfaces that support input and output of communications. While telephony devices 110 and 120 are described with respect to voice-based communications, it will be understood that telephony devices 110 and 120 may support multiple types of communication media, including, but not limited to, text messaging, electronic mail, video and audio streaming, and other forms of communication.
Network 140 may include multiple types of networks for supporting communications via a single or multiple separate telephony service providers. For example, network 140 may be the public switched telephone network (PSTN) and the Internet. It will be understood that network 140 may also include private telephone networks, other types of packet switching based networks, and other types of networks that support communications.
A telephony service provider may support telephony service through infrastructure within network 140. In addition, a telephony service provider may support telephony service through infrastructure, including server systems, such as a telephony server 130, that communicatively connect to network 140 and support telephony service to one or more telephony devices via network 140.
One or more telephony service providers provide telephony service to each of telephony devices 110 and 120. A user may subscribe to a particular service provider to provide telephony service via network 140 or may communicatively connect to network 140 and then select from among available service providers for a particular call or communication session.
In the example, each of telephony devices 110 and 120 include a contact database, such as contact database 112 and 122. Contact databases 112 and 122 include multiple entries with contact information for individuals, businesses, and other entities. Each entry may include multiple fields of contact data including, but not limited to, first name, last name, title, company, work phone number, home phone number, mobile phone number, email address, home address, and other personal and business data fields. It will be understood that while contact databases 112 and 122 are depicted within telephony devices 110 and 120 respectively, contact databases 112 and 122 may be remotely available from a separate computer system or data storage system via network 140 or a direct connection.
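The entry fields listed above can be modeled as a simple record type. The field names below mirror those used later in the text (FIRST_NAME, CELL_NUMBER, and so on); the structure itself is an illustrative assumption, with `None` marking fields not yet filled:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ContactEntry:
    """One entry of a contact database, as described above."""
    first_name: Optional[str] = None
    last_name: Optional[str] = None
    title: Optional[str] = None
    company: Optional[str] = None
    work_number: Optional[str] = None
    home_number: Optional[str] = None
    cell_number: Optional[str] = None
    email_address: Optional[str] = None
    home_address: Optional[str] = None

entry = ContactEntry(first_name="Alice", last_name="Doe",
                     cell_number="5551234567")
# asdict(entry) yields a plain dict suitable for storage as a database row
```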
Entries may be added to contact databases 112 and 122 from multiple sources. In one example, a user may manually enter contact data for an entry in a contact database at one of telephony devices 110 and 120 through the input interface of the telephony device. In another example, entries for contact databases 112 and 122 may be downloaded from other systems communicatively connected or directly linked to telephony devices 110 and 120, respectively. Additionally, in the embodiment depicted, recording systems 114 and 124 may record spoken contact data, convert the spoken contact data into text, and parse the text to detect contact data for the fields of a new entry in the contact database. By recording a dialog during a telephone call, extracting the contact data from the dialog, and writing the contact data into the fields of an entry of a contact database of the telephony device, the contact information is then accessible from the contact database at a later time without requiring manual entry of the information. It will be understood that while recording systems 114 and 124 are depicted as components of telephony devices 110 and 120 respectively, recording systems 114 and 124 may be remotely accessible from a separate computer system or data storage system via network 140 or a direct connection.
Referring now to
Computer system 200 includes a bus 222 or other communication device for communicating information within computer system 200, and at least one processing device such as processor 212, coupled to bus 222 for processing information. Bus 222 preferably includes low-latency and higher latency paths that are connected by bridges and adapters and controlled within computer system 200 by multiple bus controllers. When implemented as an email server, computer system 200 may include multiple processors designed to improve network servicing power.
Processor 212 may be a general-purpose processor such as IBM's PowerPC™ processor that, during normal operation, processes data under the control of an operating system 260, application software 270, middleware (not depicted), and other code accessible from a dynamic storage device such as random access memory (RAM) 214, a static storage device such as Read Only Memory (ROM) 216, a data storage device, such as mass storage device 218, or other data storage medium. Operating system 260 may provide a graphical user interface (GUI) to the user. In one embodiment, application software 270 contains machine executable instructions for controlling telephony communications that when executed on processor 212 carry out the operations depicted in the flowcharts of
The recording system of the present invention may be provided as a computer program product, included on a machine-readable medium having stored thereon the machine executable instructions used to program computer system 200 to perform a process according to the present invention. The term “machine-readable medium” as used herein includes any medium that participates in providing instructions to processor 212 or other components of computer system 200 for execution. Such a medium may take many forms including, but not limited to, non-volatile media, volatile media, and transmission media. Common forms of non-volatile media include, for example, a floppy disk, a flexible disk, a hard disk, magnetic tape or any other magnetic medium, a compact disc ROM (CD-ROM) or any other optical medium, punch cards or any other physical medium with patterns of holes, a programmable ROM (PROM), an erasable PROM (EPROM), electrically EPROM (EEPROM), a flash memory, any other memory chip or cartridge, or any other medium from which computer system 200 can read and which is suitable for storing instructions. In the present embodiment, an example of a non-volatile medium is mass storage device 218, which as depicted is an internal component of computer system 200 but may also be provided by an external device. Volatile media include dynamic memory such as RAM 214. Transmission media include coaxial cables, copper wire or fiber optics, including the wires that comprise bus 222. Transmission media can also take the form of acoustic or light waves, such as those generated during radio frequency or infrared data communications.
Moreover, the present invention may be downloaded as a computer program product, wherein the program instructions may be transferred from a remote computer such as a server 240 or client system 242 to requesting computer system 200 by way of data signals embodied in a carrier wave or other propagation medium via a network link 234 (e.g. a modem or network connection) to a communications interface 232 coupled to bus 222. Communications interface 232 provides a two-way data communications coupling to network link 234 that may be connected, for example, to a local area network (LAN), wide area network (WAN), or directly to an Internet Service Provider (ISP). In particular, network link 234 may provide wired and/or wireless network communications to one or more networks, such as network 140.
Network link 234 and network 140 both use electrical, electromagnetic, or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 234 and through communication interface 232, which carry the digital data to and from computer system 200, are forms of carrier waves transporting the information.
When implemented as a telephony device, communication interface 232 may support a single or multiple communication channels with a single or multiple other telephony devices or other communication devices, such as telephony device 246, via network 140.
When implemented as a telephony server or network server, computer system 200 may include multiple communication interfaces accessible via multiple peripheral component interconnect (PCI) bus bridges connected to an input/output controller. In this manner, computer system 200 allows connections to multiple network computers via multiple separate ports.
In addition, computer system 200 typically includes multiple peripheral components that facilitate communication. These peripheral components are connected to multiple controllers, adapters, and expansion slots, such as input/output (I/O) interface 226, coupled to one of the multiple levels of bus 222. For example, input device 224 may include, for example, a microphone, a keyboard, a mouse, or other input peripheral device, communicatively enabled on bus 222 via I/O interface 226 controlling inputs. In addition, for example, a display device 220 communicatively enabled on bus 222 via I/O interface 226 for controlling outputs may include, for example, one or more graphical display devices, but may also include other output interfaces, such as an audio output interface. In alternate embodiments of the present invention, additional input and output peripheral components may be added.
Those of ordinary skill in the art will appreciate that the hardware depicted in
With reference now to
In the example, recording system 114 includes recording preferences 310. Recording preferences 310 specify a user's preferences for recording ongoing conversations and extracting contact data information from those ongoing conversations. In one embodiment, a user may select recording preferences 310 through a user interface of telephony device 110. In another embodiment, a user may download recording preferences 310 to telephony device 110 from another system or access recording preferences 310 from a network accessible location, such as telephony server 130. In addition, it will be understood that user or enterprise selection of recording preferences 310 may be performed through other available interfaces of telephony device 110.
In addition, in the example, recording system 114 includes a recording trigger controller 302. Recording trigger controller 302 detects triggers for recording ongoing conversations and controls set up of a new recording entry in a recordings database 312 and recording of the ongoing conversation into the new recording entry of recordings database 312. In one example, recording trigger controller 302 detects entry of a button specified by a user in recording preferences 310 as a trigger. Triggers may include, for example, a keystroke entry or a voice entry.
Further, in the example, recording system 114 includes a speech to text converter 304. Speech to text converter 304 converts the recordings of ongoing conversations from speech into text. In one example, speech to text converter 304 converts recordings of ongoing conversations from speech to text as the recording is occurring. In another example, speech to text converter 304 converts recordings of ongoing conversations from speech to text once the recording is concluded or upon detection of a user entered trigger to convert.
Recording system 114 also includes a tag detector 306. Tag detector 306 scans both speech and text and identifies speech or text that matches tags set within a tag database 308. Tag detector 306 or another accessible controller may automatically set tags within tag database 308. In addition, a user may specify tags within tag database 308 through a same or different interface from the interface for setting recording preferences 310.
A tag within tag database 308 or recording preferences 310 may include, for example, “name”, “address”, “business number” or other identifiers that are matched with fields specified for contact database 112. For example, the tag “name” may be matched with the fields FIRST_NAME and LAST_NAME. In another example, the tag “work number” may be matched with the field WORK_NUMBER.
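The tag-to-field associations described above reduce to a lookup table, where one spoken tag may map to several database fields (e.g., "name" covers both name fields). The exact mapping below is an illustrative assumption:

```python
# Illustrative tag-to-field mapping; field names follow the examples
# in the text (FIRST_NAME, WORK_NUMBER, ...).
TAG_FIELDS = {
    "name": ["FIRST_NAME", "LAST_NAME"],
    "work number": ["WORK_NUMBER"],
    "cell number": ["CELL_NUMBER"],
    "email": ["EMAIL_ADDRESS"],
}

def fields_for_tag(tag: str) -> list:
    # Case-insensitive lookup; unknown tags map to no fields.
    return TAG_FIELDS.get(tag.lower(), [])

fields_for_tag("Name")         # -> ["FIRST_NAME", "LAST_NAME"]
fields_for_tag("cell number")  # -> ["CELL_NUMBER"]
```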
In addition, a tag within tag database 308 may include an associated characteristic of the type of information or pattern of the information typically associated with the tag. For example, the tag “name” may be specified to typically include two words, but would also include other characteristics, such as a name prefix of “Mr”, “Mrs” or “Miss” or a name suffix of “Jr” or “3rd”. In another example, the tag “business number” may be specified to typically include either seven or ten digits unless the first digit is a “0” or other indicator of a country prefix.
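The per-tag characteristics above — two-word names with optional prefixes or suffixes, and seven- or ten-digit numbers — can be expressed as regular expressions. These patterns are a sketch of the idea, not the patent's actual matching rules:

```python
import re

TAG_PATTERNS = {
    # Optional prefix, two capitalized words, optional suffix.
    "name": re.compile(
        r"(?:(?:Mr|Mrs|Miss)\.?\s+)?[A-Z][a-z]+\s+[A-Z][a-z]+"
        r"(?:\s+(?:Jr|3rd))?"),
    # Exactly ten or exactly seven consecutive digits.
    "business number": re.compile(r"\b(?:\d{10}|\d{7})\b"),
}

# The name pattern accepts prefixed and plain two-word names alike.
assert TAG_PATTERNS["name"].search("Mrs Alice Doe")
assert TAG_PATTERNS["business number"].search("call 5551234567 today")
```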
Thus, as tag detector 306 identifies tags within recorded conversations, tag detector then identifies the unique information associated with the tag and assigns the information to the associated field or fields. For example, if tag detector 306 detects a text tag of “email” within the converted text of a recorded conversation, then tag detector 306 searches the text before and after the tag for text matching the characteristic pattern of an email address, such as a phrase ending in “dot com” or “dot org”, and assigns the identified unique email address to the field EMAIL_ADDRESS.
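The "search the text before and after the tag" step from the email example above may be sketched by splitting the converted text at the tag word and testing both halves for an email-like phrase ending in "dot com" or "dot org". The phrase pattern and the search order are illustrative assumptions:

```python
import re

# Spoken email phrase, e.g. "alice at domain dot com".
EMAIL_PHRASE = re.compile(r"\b\S+\s+at\s+\S+\s+dot\s+(?:com|org)\b")

def find_email_phrase(text: str, tag: str = "email"):
    before, _, after = text.lower().partition(tag)
    # Prefer text after the tag, then fall back to text before it.
    for segment in (after, before):
        match = EMAIL_PHRASE.search(segment)
        if match:
            return match.group()
    return None

find_email_phrase("my email is alice at domain dot com")
# -> "alice at domain dot com"
```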
In addition to detecting tags, tag detector 306 may detect the absence of particular tags designated within tag database 308 as required or requested. For example, tag database 308 may include the tag “name” marked as required and may set the default for the tag to the caller identification if tag detector 306 detects the absence of the tag “name” within a particular conversation.
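The required-tag fallback just described may be sketched as follows: when a required tag (here "name") never appears in the converted text, fill it with a default drawn from caller identification. The names and dict structure are illustrative assumptions:

```python
# Illustrative set of tags marked "required" in the tag database.
REQUIRED_TAGS = {"name"}

def apply_required_defaults(found: dict, caller_id: str) -> dict:
    """Fill any required tag missing from the extracted data with
    a default inferred from caller identification."""
    entry = dict(found)
    for tag in REQUIRED_TAGS:
        if tag not in entry:
            # Infer the missing value from information the device
            # already has about the remote caller.
            entry[tag] = caller_id
    return entry

apply_required_defaults({"cell number": "5551234567"}, "Alice Doe")
# -> {"cell number": "5551234567", "name": "Alice Doe"}
```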
Once tag detector 306 detects the end of a recording, recording preferences 310 may specify the next actions to be taken by tag detector 306. For example, if recording preferences 310 specify automatic entry creation, then tag detector 306 will automatically create a new database entry within contact database 112 and automatically fill in the fields of the new database entry with the assigned information.
Referring now to
In particular, in the example depicted at reference numeral 402, current recording preferences specify triggers of spoken keywords and buttons to apply to starting and stopping recording of an ongoing conversation. By positioning cursor 404 or through other input, a user may select button 412 to add or change the current triggers. For example, a user may select to add a different voice trigger for stopping a recording. It will be understood that a user may specify triggers dependent upon the types of detectable inputs at input interfaces of telephony device 110. It is important to note that a user may select spoken keyword triggers, and thus trigger the input of information into a contact database of a telephony device hands-free through voice-driven commands and voice recording. Alternatively, a user may select a trigger button that is convenient for the user to press to trigger the input of information into a contact database of a telephony device through voice-driven commands and voice recording.
In addition, in the example, current recording preferences specify conversion and tagging after a conversation ends. A user may select button 414 to add or change the conversion and tagging time point. For example, a user may select button 414 to then select a different conversion point dependent upon the identity of the other participant in the call or type of call, such as a long distance call. Further, a user may select a different conversion time point such as while the recording of the ongoing conversation is occurring or at the conclusion of the recording of the ongoing conversation. It will be understood that a user may select from among those conversion preferences available and that a user may select multiple conversion preferences.
Additionally, the conversion and tagging preferences indicate a preference for tag detector 306 to automatically create a new database entry for information parsed from the converted and tagged conversation. In another example, a user may select button 414 and select a preference to automatically prompt the local user or remote user with the tagged conversation and obtain approval prior to creating a new database entry for the information.
Further, in the example, current recording preferences control recording upon a start trigger: if a button is the trigger, then recording should include only the local voice; if the trigger is the vocal trigger of “RECORD NOW”, then recording trigger controller 302 should mute the output of the local voice to the remote telephony device so that the local user may speak the name into the recording of the ongoing conversation. A user may also select button 416 to add or change the recording controls. It will be understood that a user may select one or more control preferences based on selected triggers and the types of recordings available.
With reference now to
In the example, Bob would like to record the contact information rather than manually entering the information or writing the information down separately, so Bob speaks the command “RECORD NOW”, as illustrated at reference numeral 502. Recording trigger controller 302 detects the voice command of “RECORD NOW” because recording trigger controller 302 monitors a conversation for particular voice triggers or for other specified triggers. As illustrated at reference numeral 504, once recording trigger controller 302 detects the start trigger of “RECORD NOW”, recording trigger controller 302 creates a new recording entry within recordings database 312, starts recording the conversation with Bob's local speech muted from broadcast to Alice's telephony device, and plays a prompt to Bob to “speak the name”. Bob speaks “Alice Doe”, as illustrated at reference numeral 506. Next, as illustrated at reference numeral 508, recording trigger controller 302 detects Bob speak following the prompt and detects a pause of longer than two seconds in Bob's speech. Upon detection of the pause after the spoken entry, recording trigger controller 302 returns volume to the speaker and continues recording both Bob's local speech and Alice's remote speech.
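The two-second pause detection in this example may be sketched by checking the gaps between timestamps at which speech activity was detected. A real device would derive such timestamps from audio energy; the list-of-timestamps interface here is a simplified stand-in:

```python
def detect_pause(speech_times, threshold: float = 2.0) -> bool:
    """Return True if any gap between consecutive speech timestamps
    (in seconds) exceeds the threshold."""
    for earlier, later in zip(speech_times, speech_times[1:]):
        if later - earlier > threshold:
            return True
    return False

detect_pause([0.0, 0.4, 0.9, 1.1])  # continuous speech -> False
detect_pause([0.0, 0.4, 3.1])       # 2.7 s gap -> True
```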
Next in the example, Bob begins to vocally prompt Alice to speak certain pieces of information. In the example, Bob prompts Alice to speak a cell number and a work number. In addition, in the example, Alice unilaterally offers her email address and then speaks her email address. Bob concludes the recording of contact information by speaking “RECORD NOW” again, as illustrated at reference numeral 510. As illustrated at reference numeral 512, upon detection of the trigger “RECORD NOW”, recording trigger controller 302 stops the recording and converts the recorded speech into text. Then, as illustrated at reference numeral 520, tag detector 306 parses the text to detect tags and information associated with tags.
As illustrated at reference numeral 322, tag detector 306 first detects a scripted tag of “name” included in the prompt by recording trigger controller 302. Tag detector 306 retrieves from tag database 308 any fields associated with the tag “name” and identifies the fields FIRST_NAME and LAST_NAME. As illustrated at reference numeral 324, tag detector 306 associates the first word of the converted text following the scripted tag with the field FIRST_NAME and associates the second word of the converted text with the field LAST_NAME. It will be understood that in other embodiments, where a single tag includes multiple associated fields for multiple words, tag detector 306 may prompt the user to approve the association or may mark the fields in a particular manner to prompt the user to check the associations of fields with text.
Next, as illustrated at reference numeral 326, tag detector 306 detects from Bob's speech the tag of “cell number”. Tag detector 306 retrieves from tag database 308 the characteristic of a “cell number” as a consecutive string of seven to ten numbers, and also any fields associated with the tag “cell number”, which in the example is the field CELL_NUMBER. Tag detector 306 searches the converted text, identifies the number 5551234567 matching the characteristic string pattern following the tag, and associates the number with the field CELL_NUMBER. Similarly, as depicted at reference numeral 330, tag detector 306 detects from Bob's speech the tag of “work number”, retrieves the associated field of WORK_NUMBER from tag database 308, and associates the next number in the converted text with the field WORK_NUMBER, as illustrated at reference numeral 332.
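The lookup just described may be sketched as: locate the tag in the converted text, then scan the text that follows it for a run of seven to ten consecutive digits. The helper name and the tag default are illustrative assumptions:

```python
import re

def number_after_tag(text: str, tag: str = "cell number"):
    """Return the first 7-10 digit run appearing after the tag,
    or None if the tag or number is absent."""
    idx = text.lower().find(tag)
    if idx < 0:
        return None
    match = re.search(r"\b\d{7,10}\b", text[idx + len(tag):])
    return match.group() if match else None

number_after_tag("what is your cell number 5551234567 thanks")
# -> "5551234567"
```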
Thereafter, as illustrated at reference numeral 334, tag detector 306 detects in the converted text spoken by Alice the tag “email” and retrieves, from tag database 308, the pattern of data that identifies information associated with the tag “email”. In the example, tag detector 306 identifies the spoken text following the tag that matches the pattern of an email address, “alice at domain dot com”, and converts that text into the written email address associated with the field EMAIL_ADDRESS, as illustrated at reference numeral 336.
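Rewriting a spoken email phrase such as "alice at domain dot com" into the written address "alice@domain.com" may be sketched as a word-level substitution. The replacement table below is an illustrative assumption covering only the common spoken forms named in the example:

```python
def spoken_to_email(phrase: str) -> str:
    """Convert a spoken email phrase into a written address."""
    out = []
    for word in phrase.lower().split():
        if word == "at":
            out.append("@")
        elif word == "dot":
            out.append(".")
        else:
            out.append(word)
    # Join with no separators so "@" and "." attach to their neighbors.
    return "".join(out)

spoken_to_email("alice at domain dot com")  # -> "alice@domain.com"
```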
Finally, as illustrated at reference numeral 338, tag detector 306 detects the end of the recording. In the example, tag detector 306 automatically triggers creation of a new entry in contact database 112 and automatically fills in the fields of the new entry according to the information matched with each field identifier. It will be understood that in filling in the new entry in contact database 112, tag detector 306 may prompt the local user and may prompt the remote user to verify the converted information. Alternatively, tag detector 306, at the end of the recording, may store the converted information with the recording in recordings database 312.
Referring now to
As illustrated at reference numeral 600, a recorded conversation is detected with the steps performed by recording trigger controller 302 indicated within textual marks of “<” and “>”. In the example, network 140 supports a conversation between a user “Alice” and a user “Bob”. The telephony device used by Bob, however, records only those portions of the conversation spoken by Bob. In particular, the conversation begins with Alice wanting to give Bob her contact information. Bob enters a key of “*” to trigger recording, as illustrated at reference numeral 602. Based on the recording preferences illustrated in
In particular, in the example, recording trigger controller 302 records Bob's question of “What is your cell number” indicated at reference numeral 606, his statement of “I heard 5551234567” indicated at reference numeral 610 and his statement of “Hey thanks” indicated at reference numeral 612. Recording trigger controller 302 does not record, however, Alice's statements from the remote telephony device of “5551234567” indicated at reference numeral 608 and of “that is correct” indicated at reference numeral 611. Bob then concludes the recording of the conversation by entering a key of “*”, as indicated at reference numeral 614. Upon detection of the trigger to end the recording, as indicated at reference numeral 616, recording trigger controller 302 stops the recording and converts the recorded speech into text.
Next, as illustrated at reference numeral 620, tag detector 306 parses the text to detect tags and information associated with tags.
First, in the example, a user may select a preference that if no name tag is detected in a recording, then tag detector 306 should infer that the caller identification should be designated in the fields associated with the name. In the example, once tag detector 306 reaches the end of the converted text recording and detects that no name tag was included in the converted text, then as illustrated at reference numerals 622 and 624, tag detector 306 infers that the name for the entry is the same as the caller identification and assigns the fields associated with the name tag with the caller identification.
Next, in the example tag detector 306 detects the tag “cell number” as illustrated at reference numeral 626. Upon detection of the tag “cell number”, tag detector locates numbers in the text matching the expected pattern for the tag and assigns the numbers “5551234567” to the field CELL_NUMBER as illustrated at reference numeral 628.
Finally, as illustrated at reference numeral 630, tag detector 306 detects the end of the recording. In the example, tag detector 306 automatically triggers creation of a new entry in contact database 112 and automatically fills in the fields of the new entry according to the information matched with each field identifier, as previously described.
With reference now to
Block 710 depicts converting the recorded conversation speech into text. Next, block 712 illustrates detecting the contact tags within the text. Thereafter, block 714 depicts detecting the information associated with the tag characteristics of detected tags. Next, block 716 illustrates assigning the associated information to the fields for each tag. Thereafter, block 718 depicts extracting other non-tagged information from the converted text and associating the non-tagged information with a contact database entry field. Further, block 720 illustrates detecting those tags that are required but not included in the text and inferring or requesting information for each field associated with the required tags. Thereafter, block 722 depicts storing the detected contact information in the corresponding fields of a newly created contact database entry, and the process ends.
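The flowchart blocks above can be sketched as one linear function. The helpers are stand-ins named after the blocks they mirror; the regular expressions, tag wordings, and caller-identification format are illustrative assumptions:

```python
import re

def process_recording(text: str, caller_id: str) -> dict:
    entry = {}
    text = text.lower()
    # Blocks 712-716: detect tags and assign matched info to fields.
    m = re.search(r"cell number\D*(\d{7,10})", text)
    if m:
        entry["CELL_NUMBER"] = m.group(1)
    m = re.search(r"name is (\w+) (\w+)", text)
    if m:
        entry["FIRST_NAME"], entry["LAST_NAME"] = m.group(1), m.group(2)
    # Block 720: required tag missing -> infer from caller identification.
    if "FIRST_NAME" not in entry and caller_id:
        parts = caller_id.split()
        entry["FIRST_NAME"] = parts[0]
        if len(parts) > 1:
            entry["LAST_NAME"] = parts[-1]
    # Block 722: the returned dict stands in for the stored entry.
    return entry

process_recording("what is your cell number 5551234567", "Alice Doe")
# -> {"CELL_NUMBER": "5551234567", "FIRST_NAME": "Alice", "LAST_NAME": "Doe"}
```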
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US6832245 *||Nov 30, 2000||Dec 14, 2004||At&T Corp.||System and method for analyzing communications of user messages to rank users and contacts based on message content|
|US6961414 *||Jan 31, 2001||Nov 1, 2005||Comverse Ltd.||Telephone network-based method and system for automatic insertion of enhanced personal address book contact data|
|US7050551 *||Jun 24, 2002||May 23, 2006||Kyocera Wireless Corp.||System and method for capture and storage of forward and reverse link audio|
|US7257210 *||Feb 11, 2005||Aug 14, 2007||Intellect Wireless Inc.||Picture phone with caller id|
|US20050069095 *||Sep 25, 2003||Mar 31, 2005||International Business Machines Corporation||Search capabilities for voicemail messages|
|US20060159242 *||Nov 30, 2005||Jul 20, 2006||Clark David W||Systems and methods for registration and retrieval of voice mail contact information|
|US20070041521 *||Aug 10, 2005||Feb 22, 2007||Siemens Communications, Inc.||Method and apparatus for automated voice dialing setup|
|1||TelephonyWorld, "LG Electronics Selects Art to Supply Advanced Cell Phone Voice Control", published May 16, 2004. Retrieved online from <http://www.telephonyworld.com/cgi-bin/news/viewnews.cgi?category=all&id=1084682793> on Sep. 21, 2005.|
|2||TelephonyWorld, "Z-TEL Launches Personal Voice Assistant-Hands-Free Calling, Computer-Free Emailing and Device-Independent Contact Storage", published Feb. 3, 2003. Retrieved online from <http://www.telephonyworld.com/cgi-bin/news/viewnews.cgi?category=all&id=1044327864> on Sep. 21, 2005.|
|3||voicedex Carkit, copyright Hotech. Retrieved online from <http://www.hotech.com.tw/download/carkitdeluxe.pdf> on Sep. 21, 2005.|
|4||Wrolstad, Jay, "4G Wireless Voice-to-Text Technology Enters Test Phase", published date Aug. 13, 2001 by wirelessnewsfactor.com. Retrieved online from <http://www.wirelessnewsfactor.com/perl/story/12756.html> on Sep. 21, 2005.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7813481 *||Feb 18, 2005||Oct 12, 2010||At&T Mobility Ii Llc||Conversation recording with real-time notification for users of communication terminals|
|US7907705 *||Oct 10, 2006||Mar 15, 2011||Intuit Inc.||Speech to text for assisted form completion|
|US8296152||Mar 22, 2011||Oct 23, 2012||Oto Technologies, Llc||System and method for automatic distribution of conversation topics|
|US8326622 *||Sep 23, 2008||Dec 4, 2012||International Business Machines Corporation||Dialog filtering for filling out a form|
|US8328089 *||Jul 18, 2011||Dec 11, 2012||Nuance Communications, Inc.||Hands free contact database information entry at a communication device|
|US8340974 *||Dec 30, 2008||Dec 25, 2012||Motorola Mobility Llc||Device, system and method for providing targeted advertisements and content based on user speech data|
|US8600025||Dec 22, 2009||Dec 3, 2013||Oto Technologies, Llc||System and method for merging voice calls based on topics|
|US9191502||Aug 31, 2010||Nov 17, 2015||At&T Mobility Ii Llc||Conversation recording with real-time notification for users of communication terminals|
|US9344560||Oct 30, 2015||May 17, 2016||At&T Mobility Ii Llc||Conversation recording with real-time notification for users of communication terminals|
|US20100076760 *||Sep 23, 2008||Mar 25, 2010||International Business Machines Corporation||Dialog filtering for filling out a form|
|US20100169091 *||Dec 30, 2008||Jul 1, 2010||Motorola, Inc.||Device, system and method for providing targeted advertisements and content|
|US20100332220 *||Aug 31, 2010||Dec 30, 2010||Hursey John T||Conversation Recording with Real-Time Notification for Users of Communication Terminals|
|US20110150198 *||Dec 22, 2009||Jun 23, 2011||Oto Technologies, Llc||System and method for merging voice calls based on topics|
|US20110200181 *||Mar 22, 2011||Aug 18, 2011||Oto Technologies, Llc||System and method for automatic distribution of conversation topics|
|US20110276595 *||Jul 18, 2011||Nov 10, 2011||Nuance Communications, Inc.||Hands free contact database information entry at a communication device|
|U.S. Classification||235/380, 379/93.15, 379/88.14, 704/E15.045|
|Cooperative Classification||H04M1/274516, H04M1/2745, H04M2250/68, G10L15/26|
|European Classification||G10L15/26A, H04M1/2745, H04M1/2745C|
|Nov 9, 2005||AS||Assignment|
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KIRKLAND, DUSTIN;PARANJAPE, AMEET M.;REEL/FRAME:016997/0491;SIGNING DATES FROM 20050927 TO 20050929
|May 13, 2009||AS||Assignment|
Owner name: NUANCE COMMUNICATIONS, INC., MASSACHUSETTS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:INTERNATIONAL BUSINESS MACHINES CORPORATION;REEL/FRAME:022689/0317
Effective date: 20090331
|Jan 9, 2012||FPAY||Fee payment|
Year of fee payment: 4
|Dec 23, 2015||FPAY||Fee payment|
Year of fee payment: 8