|Publication number||US6393305 B1|
|Application number||US 09/326,717|
|Publication date||May 21, 2002|
|Filing date||Jun 7, 1999|
|Priority date||Jun 7, 1999|
|Also published as||EP1074974A2, EP1074974A3|
|Inventors||Vesa Ulvinen, Jari Paloniemi|
|Original Assignee||Nokia Mobile Phones Limited|
This invention relates generally to biometric systems and methods and, in particular, to systems that identify a speaker by the automatic recognition of the speaker's voice and, more particularly, to a wireless telecommunications system employing voice recognition.
Biometric systems typically employ and measure some physical characteristic of a particular individual to uniquely identify that individual. The characteristic could be, by example, a fingerprint, a retinal pattern, or a voice pattern. The use of this latter characteristic is especially attractive for those systems that already include a microphone, such as a telecommunications system, as no hardware expense may need to be incurred in order to implement the identification system. After having uniquely identified a speaker as being a particular, authorized individual, the system can then grant the speaker access to some location or to some resource. That is, this type of biometric system can be viewed as an electronic, voice actuated lock.
One problem that arises in many such systems is that the system is trained to recognize a particular speaker using a limited set of spoken words. For example, the speaker may be expected to say his or her name, and/or some predetermined password. While this approach may be suitable for many applications, in other applications the limited set of words used for identification may not be desirable, and may in fact lead some other persons to attempt to defeat the voice recognition-based biometric system. For example, a person attempting to defeat the system may simply surreptitiously tape record a person speaking the word or words that the biometric system expects to be spoken, and then play back the authorized person's speech to the voice input transducer of the biometric system.
It is well known in the mobile telecommunications art to provide a mobile telephone, such as a vehicle-installed cellular telephone, with a voice recognition capability in order to replace or augment the normal user input device(s). For example, the user can dial a number by speaking the digits, or by speaking a name having a stored telephone number. Some commands could be given to the telephone in the same manner.
In general, current user identification methods are based on measuring one static feature: e.g., a written password, a spoken password (voice recognition), a fingerprint, an image of the eye, and so on. In the identification situation, the user knows beforehand what is measured and how.
It is an object of this invention to provide an improved biometric system, in particular a voice actuated recognition system, that relies on a random set of words and/or images.
It is a further object of this invention to provide a mobile station having a speech transducer, and a method and apparatus to authenticate or authorize a user of a wireless telecommunication system to operate in, or through, or with a resource reachable through the wireless telecommunication system, only if the user's speech characteristics match pre-stored characteristics associated with a word selected randomly from a training set of words.
The foregoing and other problems are overcome and the objects of the invention are realized by methods and apparatus in accordance with embodiments of this invention.
According to this invention, when a user enters an identifying situation he or she does not know beforehand what the identification stimulus will be and, thus, what the user's reaction or response will be. Using current technology, the most straightforward way to implement the invention is with voice recognition. In this case the user is presented with a voice stimulus, or a text stimulus, or a graphical image stimulus, and the user reacts with his or her voice. The stimulus can be direct (e.g., the user speaks a displayed word) or indirect (e.g., the user responds to a question that only the user knows the answer to). Since even the correct user does not know beforehand the details of the identification situation, it becomes very difficult or impossible to know beforehand what the expected correct response will be.
A method is disclosed to authorize or authenticate a user of a wireless telecommunication system, and includes steps of (a) selecting a word at random from a set of reference words, or synthesizing a random reference word; (b) prompting the user to speak the reference word; and (c) authenticating the user to operate in, or through, or with a resource reachable through the wireless telecommunication system, only if the user's speech characteristics match predetermined characteristics associated with the reference word.
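The three disclosed steps can be sketched in pseudocode-style Python. All names here (REFERENCE_WORDS, match_characteristics, and so on) are illustrative assumptions, not from the patent, and the string comparison merely stands in for real speech-characteristic matching:

```python
import random

# Hypothetical enrollment set of reference words (step (a) selects from it).
REFERENCE_WORDS = ["birch", "Chicago", "harbor", "seven", "meadow"]

def match_characteristics(spoken_features, stored_features):
    """Placeholder comparison: a real SRF would compare spectral and
    temporal speech features, not simple values."""
    return spoken_features == stored_features

def authenticate(prompt_user, stored_profile, rng=random):
    # (a) select a reference word at random from the set of reference words
    word = rng.choice(REFERENCE_WORDS)
    # (b) prompt the user to speak the reference word
    spoken_features = prompt_user(word)
    # (c) authenticate only if the user's characteristics match those
    #     enrolled for that word
    return match_characteristics(spoken_features, stored_profile.get(word))
```

Because the word changes on every attempt, a recording of a single earlier response is unlikely to match the word selected next time.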
In one embodiment the steps of selecting or synthesizing, prompting, and authenticating are performed in a mobile station having a speech transducer for inputting the user's speech, while in another embodiment at least one of the steps of selecting or synthesizing, prompting, and authenticating is performed in a wireless telecommunications network that is coupled between the mobile station and a telephone network. In yet another embodiment at least one of the steps of selecting or synthesizing, prompting, and authenticating is performed in a data communications network resource that is coupled through a data communications network, such as the Internet, and the wireless telecommunications network to the mobile station.
The step of prompting may include a step of displaying alphanumeric text and/or a graphical image to the user using a display of the mobile station.
The above set forth and other features of the invention are made more apparent in the ensuing Detailed Description of the Invention when read in conjunction with the attached Drawings, wherein:
FIG. 1 is a block diagram of a mobile station that is constructed and operated in accordance with this invention;
FIG. 2 is an elevational view of the mobile station shown in FIG. 1, and which further illustrates a cellular communication system to which the mobile station is bidirectionally coupled through wireless RF links; and
FIG. 3 is a block diagram that shows in greater detail a plurality of data communications network resources in accordance with further embodiments of this invention.
Reference is made to FIGS. 1 and 2 for illustrating a wireless user terminal or mobile station 10, such as but not limited to a cellular radiotelephone or a personal communicator, that is suitable for practicing this invention. The mobile station 10 includes an antenna 12 for transmitting signals to and for receiving signals from a base site or base station 30. The base station 30 is a part of a wireless telecommunications network or system 32, that may include a mobile switching center (MSC) 34. The MSC 34 provides a connection to landline trunks, such as the public switched telephone network (PSTN) 35, when the mobile station 10 is involved in a call.
The mobile station includes a modulator (MOD) 14A, a transmitter 14, a receiver 16, a demodulator (DEMOD) 16A, and a controller 18 that provides signals to and receives signals from the transmitter 14 and receiver 16, respectively. These signals include signaling information in accordance with the air interface standard of the applicable cellular system, and also user speech and/or user generated data. The particular air interface standard and/or access type is not germane to the operation of this system, as mobile stations and wireless systems employing most if not all air interface standards and access types (e.g., TDMA, CDMA, FDMA, etc.) can benefit from the teachings of this invention.
It is understood that the controller 18 also includes the circuitry required for implementing the audio and logic functions of the mobile station. By example, the controller 18 may be comprised of a digital signal processor device, a microprocessor device, and various analog to digital converters, digital to analog converters, and other support circuits. The control and signal processing functions of the mobile station 10 are allocated between these devices according to their respective capabilities. In many embodiments the mobile station 10 will include a voice encoder/decoder (vocoder) 18A of any suitable type.
A user interface includes a conventional earphone or speaker 17, a conventional microphone 19, a display 20, and a user input device, typically a keypad 22, all of which are coupled to the controller 18. The keypad 22 includes the conventional numeric (0-9) and related keys (#, *) 22a, and other keys 22b used for operating the mobile station 10. These other keys 22b may include, by example, a SEND key, various menu scrolling and soft keys, and a PWR key. The mobile station 10 also includes a battery 26 for powering the various circuits that are required to operate the mobile station. The mobile station 10 also includes various memories, shown collectively as the memory 24, wherein are stored a plurality of constants and variables that are used by the controller 18 during the operation of the mobile station. The memory 24 may also store all or some of the values of various wireless system parameters and the number assignment module (NAM). An operating program for controlling the operation of controller 18 is also stored in the memory 24 (typically in a ROM device).
In accordance with the teachings of this invention, the controller 18 includes a speech recognition function (SRF) 29 that receives digitized input that originates from the microphone 19, and which is capable of processing the digitized input and for comparing the characteristics of the user's speech with pre-stored characteristics stored in the memory 24. If a match occurs then the controller 18 is operable to grant the speaker access to some resource, for example to a removable electronic card 28 which authorizes or enables the speaker to, in a typical application, make a telephone call from the mobile station 10. For example, the subscriber data required to make a telephone call, such as the Mobile Identification Number (MIN), and/or some authentication-related key or other data, can be stored in the card 28, and access to this information is only granted when the user speaks a word or words that are expected by the SRF 29, and which match predetermined enrollment (training) data already stored in the memory 24.
Further in accordance with this invention, the training data could as well be stored in some other memory, such as a memory 28A within the card 28, or in a memory 32A located in the system 32 (FIG. 3), or in some remote memory that is accessible through the system 32. For example, and referring specifically to FIG. 2, a memory 39 storing the training data set could be located in a data communications network (e.g., the Internet) entity or resource 38, which is accessible from the PSTN 35 through a network interface 36 (e.g., an Internet Service Provider or ISP), and a local area or wide area data communications network 37 (e.g., the Internet). In this case it can be appreciated that at least some of the data is packetized and sent in TCP/IP format.
In general, the identification system and software, as well as the prestored speech samples and characteristics may be located in the mobile station 10, in a server of the network 37 or the system 32, or in the system of a service provider.
In accordance with an aspect of this invention the user can be prompted to speak one or a set of words, with the specific word to be spoken being selected randomly from the set of known words by the SRF 29. Assuming that the set of known words has a non-trivial number of elements, then it becomes difficult for another person to defeat the SRF 29 by recording a word or words expected to be spoken by the user.
The user can be prompted to speak the selected word or words in various ways. In the simplest approach the SRF 29 displays the selected word on the display 20. Alternatively, the SRF 29 can use a speech synthesizer and the mobile station's speaker 17 to audibly prompt the user for the word to be spoken. In another embodiment the display 20 is used to present some graphical image corresponding to a word to be spoken (e.g., a tree). In a further embodiment some generic graphical image is used to suggest to the user a predetermined word to be spoken, and that was previously agreed upon during the training or enrollment stage. For example, it can be agreed upon that when presented with the graphical image of a tree the user will speak the word “birch”, and that when presented with a graphical image of a city skyline the user will speak the word “Chicago”. In this latter embodiment, and even if an unauthorized person were to gain possession of the user's mobile station 10, it is unlikely that the unauthorized person will give the correct reply word when presented with a particular graphical image or icon, let alone speak the reply word in a manner that would be recognized by the SRF 29 as a valid response.
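The indirect-stimulus embodiment amounts to a private, enrollment-time mapping from image to reply word. The sketch below models that idea; the identifier names and the example mapping are assumptions for illustration only:

```python
import random

# Agreed upon during enrollment; known only to the user and the SRF.
ENROLLED_MAPPING = {
    "tree_icon": "birch",       # image of a tree    -> user says "birch"
    "skyline_icon": "Chicago",  # city skyline image -> user says "Chicago"
}

def select_stimulus(rng=random):
    """Pick a graphical stimulus at random; only the enrolled user knows
    which reply word each image calls for."""
    return rng.choice(sorted(ENROLLED_MAPPING))

def verify_reply(stimulus, reply_word):
    # An impostor holding the phone sees only the image, not the mapping,
    # and would still have to pass the speech-characteristic match.
    return ENROLLED_MAPPING.get(stimulus) == reply_word
```

Note that this check is in addition to, not instead of, the comparison of the speaker's voice characteristics.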
If the set of training words are stored in the mobile station 10, whether in the memory 24 or the card 28, the words can be encrypted to prevent unauthorized access and/or modification.
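The patent says the stored words "can be encrypted" but names no cipher. Purely as an illustration, the sketch below derives a keystream from SHA-256 and XORs it with the word; a real device would use a vetted cipher (e.g., AES) rather than this toy construction:

```python
import hashlib

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Deterministic keystream from repeated hashing (illustration only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_word(word: str, key: bytes, nonce: bytes) -> bytes:
    data = word.encode("utf-8")
    ks = _keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def decrypt_word(blob: bytes, key: bytes, nonce: bytes) -> str:
    ks = _keystream(key, nonce, len(blob))
    return bytes(a ^ b for a, b in zip(blob, ks)).decode("utf-8")
```

Encrypting the stored set protects against both reading the words out of the memory 24 or card 28 and tampering with the enrolled set.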
Referring to FIG. 3, it can also be appreciated that the SRF 29 can be resident outside of the mobile station 10, such as at one or more network entities or resources 38A-38D (e.g., a credit card supplier, stock broker, retailer, or bank). In this embodiment, and assuming for example that the user wishes to access his account at the bank 38D, the SRF 29 signals back to the mobile station 10 a randomly selected word to be spoken by the user, via the network 37, network interface 36, and wireless system 32. The user speaks the word and, in one embodiment, the spectral and temporal characteristics of the user's utterance are transmitted from the mobile station 10 as a digital data stream (not as speech per se) to the SRF 29 of the bank 38D for processing and comparison. In another embodiment the user's spoken utterance is transmitted in a normal manner, such as by transmitting voice encoder/decoder (vocoder 18A) parameters, which are converted to speech in the system 32. This speech is then routed to the SRF 29 of the bank 38D for processing and comparison. It should be noted that the spectral and temporal characteristics transmitted in the first embodiment could be the vocoder 18A output parameters as well, which are then transmitted on further to the SRF 29 of the bank 38D, without being first converted to a speech signal in the system 32. In this case the necessary signaling protocol must first be defined and established so that the system 32 knows to bypass its speech decoder.
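The server-side comparison of transmitted characteristics can be sketched very simply. Here the "spectral and temporal characteristics" are stood in for by plain numeric vectors and the match is a Euclidean-distance threshold; real speaker verification uses far richer features and models, so treat every name and value below as an assumption:

```python
import math

def feature_distance(received, enrolled):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(received, enrolled)))

def verify_features(received, enrolled, threshold=1.0):
    """Check performed at the network resource (e.g., the bank's SRF):
    accept only if the received features lie close enough to enrollment."""
    return feature_distance(received, enrolled) <= threshold
```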
It is also within the scope of the teaching of this invention to provide a centralized SRF 29A, whose responsibility it is to authenticate users for other locations. For example, assume that the user of the mobile station 10 telephones the bank 38D and wishes to access an account. In this case the user authentication process is handled by the intervention of the SRF 29A which has a database (DB) 29B of recognition word sets and associated speech characteristics for a plurality of different users. The SRF 29A, after processing the user's speech signal, signals the bank 38D that the user is either authorized or is not authorized. This process could be handled in several ways, such as by connecting the user's call directly to the SRF 29A, or by forwarding the user's voice characteristics from the bank 38D to the SRF 29A. In either case the bank 38D is not required to have the SRF 29, nor are the other network resources 38A-38C.
It should be noted that the set of recognition words stored in the DB 29B could be different for every user. It should be further noted that this process implies that at some time the user interacts with the SRFs 29, or just with the SRF 29A, in order to execute an enrollment or training process whereby the user's database entries (set of recognition words and the associated speech temporal and spectral characteristics) are created. As was noted above, at least some of these speech characteristics could be based on or include voice encoder 18A parameters.
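The centralized SRF 29A with its database 29B can be sketched as a single service holding per-user word sets and returning only an authorized/not-authorized verdict to relying parties such as the bank 38D. The structure and names below are illustrative assumptions:

```python
# DB 29B analogue: per-user recognition words mapped to enrolled speech
# characteristics (placeholder strings here; real entries would hold
# temporal/spectral data, possibly vocoder parameters).
USER_DB = {
    "user-001": {"birch": "feat-birch-001", "Chicago": "feat-chi-001"},
}

def central_authenticate(user_id, word, received_features):
    """Centralized check: the relying party (e.g., the bank) receives only
    this boolean verdict and needs no SRF of its own."""
    enrolled = USER_DB.get(user_id, {}).get(word)
    return enrolled is not None and enrolled == received_features
```

Keeping enrollment data in one place means each user trains once with the central SRF rather than separately with every service.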
As an exemplary embodiment of this invention about 20-50 prestored voice samples can be used, and the stimulus and the sample are randomly or pseudorandomly selected among these (e.g., text-dependent speaker verification). Because the user records the samples himself or herself, the connection between the stimulus and the sample may be meaningful only to the user. Also, because a stimulus is provided, the user is not required to memorize one or more passwords or numeric codes. Furthermore, there can be different sets of samples for different network services. For example, one set of samples may be used to obtain access to a network e-mail facility, while another set of samples may be used to obtain access to a network voice mail facility. As employed herein the term “random” is considered to encompass both truly random as well as pseudorandom.
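The per-service sample sets can be sketched as a dictionary keyed by service, with the stimulus drawn pseudorandomly from the set for the requested service. The service names and sample words are assumptions for illustration; a deployed set would hold the 20-50 entries mentioned above:

```python
import random

# Hypothetical per-service sample sets (abbreviated for illustration).
SERVICE_SAMPLE_SETS = {
    "email": ["alpha", "birch", "canyon"],
    "voicemail": ["delta", "ember", "fjord"],
}

def pick_stimulus(service: str, rng=random) -> str:
    """Select the stimulus for one authentication attempt; per the text,
    "random" here covers pseudorandom selection as well."""
    return rng.choice(SERVICE_SAMPLE_SETS[service])
```

Separate sets per service mean that compromising the samples for one service (e.g., voice mail) does not expose those used for another (e.g., e-mail).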
For the case where speech synthesizing techniques improve sufficiently, it is also possible that the prestored samples are not required, but instead the system creates one or more synthesized reference word(s) that are compared to the user's voice response (text-independent speaker verification). The generated reference word is preferably generated randomly or pseudorandomly.
Furthermore, it should be appreciated that the teachings of this invention could be combined with the use of one or more other types of identification systems and techniques, such as fingerprint identification. Also, various ones of the stimulus types described above could be used in combination. For example, the user may be presented with a randomly selected or generated alphanumeric string that the user is expected to vocalize, as well as with a related or totally unrelated graphical image to which the user is expected to verbally respond.
While the invention has been described in the context of preferred and exemplary embodiments, it should be realized that a number of modifications to these teachings may occur to one skilled in the art. By example, any suitable speech processing techniques that are known for use in speech recognition systems can be employed, and the teachings of this invention are not limited for use to any specific technique.
Furthermore, while the user may be prompted to speak a reference “word”, it can be appreciated that the “word” may actually be a phrase comprised of a plurality of words and also possibly numbers (e.g., a date, or an address).
Thus, while the invention has been particularly shown and described with respect to preferred embodiments thereof, it will be understood by those skilled in the art that changes in form and details may be made therein without departing from the scope and spirit of the invention.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5640485||Apr 6, 1995||Jun 17, 1997||Nokia Mobile Phones Ltd.||Speech recognition method and system|
|US5692032||Nov 27, 1995||Nov 25, 1997||Nokia Mobile Phones Ltd.||Mobile terminal having one key user message acknowledgment function|
|US5774525 *||Aug 14, 1997||Jun 30, 1998||International Business Machines Corporation||Method and apparatus utilizing dynamic questioning to provide secure access control|
|US5794142||Jan 29, 1996||Aug 11, 1998||Nokia Mobile Phones Limited||Mobile terminal having network services activation through the use of point-to-point short message service|
|US5805674 *||Mar 8, 1996||Sep 8, 1998||Anderson, Jr.; Victor C.||Security arrangement and method for controlling access to a protected system|
|US5845205||Aug 1, 1997||Dec 1, 1998||Nokia Mobile Phones Ltd.||Fully automatic credit card calling system|
|US5870683||Sep 18, 1996||Feb 9, 1999||Nokia Mobile Phones Limited||Mobile station having method and apparatus for displaying user-selectable animation sequence|
|US5897616 *||Jun 11, 1997||Apr 27, 1999||International Business Machines Corporation||Apparatus and methods for speaker verification/identification/classification employing non-acoustic and/or acoustic models and databases|
|US5903832||Dec 21, 1995||May 11, 1999||Nokia Mobile Phones Limited||Mobile terminal having enhanced system selection capability|
|US6161090 *||Mar 24, 1999||Dec 12, 2000||International Business Machines Corporation||Apparatus and methods for speaker verification/identification/classification employing non-acoustic and/or acoustic models and databases|
|US6185536 *||Mar 4, 1998||Feb 6, 2001||Motorola, Inc.||System and method for establishing a communication link using user-specific voice data parameters as a user discriminator|
|US6263311 *||Jan 11, 1999||Jul 17, 2001||Advanced Micro Devices, Inc.||Method and system for providing security using voice recognition|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US6741851 *||Aug 31, 2000||May 25, 2004||Samsung Electronics Co., Ltd.||Method for protecting data stored in lost mobile terminal and recording medium therefor|
|US7188110 *||Dec 11, 2000||Mar 6, 2007||Sony Corporation||Secure and convenient method and apparatus for storing and transmitting telephony-based data|
|US7212613||Sep 18, 2003||May 1, 2007||International Business Machines Corporation||System and method for telephonic voice authentication|
|US7222072||Feb 13, 2003||May 22, 2007||Sbc Properties, L.P.||Bio-phonetic multi-phrase speaker identity verification|
|US7240036||Oct 17, 2000||Jul 3, 2007||Gtech Global Services Corporation||Method and system for facilitation of wireless e-commerce transactions|
|US7328012 *||Jun 22, 2005||Feb 5, 2008||Harris Corporation||Aircraft communications system and related method for communicating between portable wireless communications device and ground|
|US7415410 *||Dec 26, 2002||Aug 19, 2008||Motorola, Inc.||Identification apparatus and method for receiving and processing audible commands|
|US7549170 *||Apr 30, 2003||Jun 16, 2009||Microsoft Corporation||System and method of inkblot authentication|
|US7567901||Apr 13, 2007||Jul 28, 2009||At&T Intellectual Property 1, L.P.||Bio-phonetic multi-phrase speaker identity verification|
|US7636855 *||Jan 30, 2004||Dec 22, 2009||Panasonic Corporation||Multiple choice challenge-response user authorization system and method|
|US7734930||Jul 9, 2007||Jun 8, 2010||Microsoft Corporation||Click passwords|
|US7793109 *||Dec 17, 2002||Sep 7, 2010||Mesa Digital, Llc||Random biometric authentication apparatus|
|US7805310 *||Feb 26, 2002||Sep 28, 2010||Rohwer Elizabeth A||Apparatus and methods for implementing voice enabling applications in a converged voice and data network environment|
|US7933589 *||Oct 17, 2000||Apr 26, 2011||Aeritas, Llc||Method and system for facilitation of wireless e-commerce transactions|
|US8139723||Jun 16, 2006||Mar 20, 2012||International Business Machines Corporation||Voice authentication system and method using a removable voice ID card|
|US8321209||Nov 10, 2009||Nov 27, 2012||Research In Motion Limited||System and method for low overhead frequency domain voice authentication|
|US8326625||Nov 10, 2009||Dec 4, 2012||Research In Motion Limited||System and method for low overhead time domain voice authentication|
|US8396711 *||May 1, 2006||Mar 12, 2013||Microsoft Corporation||Voice authentication system and method|
|US8458485||Jun 17, 2009||Jun 4, 2013||Microsoft Corporation||Image-based unlock functionality on a computing device|
|US8504365 *||Apr 11, 2008||Aug 6, 2013||At&T Intellectual Property I, L.P.||System and method for detecting synthetic speaker verification|
|US8510104||Sep 14, 2012||Aug 13, 2013||Research In Motion Limited||System and method for low overhead frequency domain voice authentication|
|US8630391||Mar 2, 2012||Jan 14, 2014||International Business Machines Corporation||Voice authentication system and method using a removable voice ID card|
|US8650636||Jun 17, 2011||Feb 11, 2014||Microsoft Corporation||Picture gesture authentication|
|US8805685 *||Aug 5, 2013||Aug 12, 2014||At&T Intellectual Property I, L.P.||System and method for detecting synthetic speaker verification|
|US8817964||Feb 11, 2008||Aug 26, 2014||International Business Machines Corporation||Telephonic voice authentication and display|
|US8910253||Oct 19, 2012||Dec 9, 2014||Microsoft Corporation||Picture gesture authentication|
|US8976943 *||Sep 25, 2012||Mar 10, 2015||Ebay Inc.||Voice phone-based method and system to authenticate users|
|US9042867||Dec 12, 2012||May 26, 2015||Agnitio S.L.||System and method for speaker recognition on mobile devices|
|US9142218 *||Aug 7, 2014||Sep 22, 2015||At&T Intellectual Property I, L.P.||System and method for detecting synthetic speaker verification|
|US9236051||Jun 24, 2009||Jan 12, 2016||At&T Intellectual Property I, L.P.||Bio-phonetic multi-phrase speaker identity verification|
|US9239993 *||May 23, 2013||Jan 19, 2016||Bytemark, Inc.||Method and system for distributing electronic tickets with visual display|
|US9246914 *||May 16, 2011||Jan 26, 2016||Nokia Technologies Oy||Method and apparatus for processing biometric information using distributed computation|
|US9355239||May 8, 2013||May 31, 2016||Microsoft Technology Licensing, Llc||Image-based unlock functionality on a computing device|
|US9412382 *||Sep 21, 2015||Aug 9, 2016||At&T Intellectual Property I, L.P.||System and method for detecting synthetic speaker verification|
|US20010011028 *||Jan 30, 2001||Aug 2, 2001||Telefonaktiebolaget Lm Ericsson||Electronic devices|
|US20020026419 *||Dec 12, 2000||Feb 28, 2002||Sony Electronics, Inc.||Apparatus and method for populating a portable smart device|
|US20020026423 *||Dec 12, 2000||Feb 28, 2002||Sony Electronics, Inc.||Automated usage-independent and location-independent agent-based incentive method and system for customer retention|
|US20020029203 *||May 2, 2001||Mar 7, 2002||Pelland David M.||Electronic personal assistant with personality adaptation|
|US20030004726 *||Nov 22, 2001||Jan 2, 2003||Meinrad Niemoeller||Access control arrangement and method for access control|
|US20030061173 *||Sep 27, 2001||Mar 27, 2003||Hiroshi Ogino||Electronic gathering of product information and purchasing of products|
|US20030120934 *||Dec 17, 2002||Jun 26, 2003||Ortiz Luis Melisendro||Random biometric authentication apparatus|
|US20030191947 *||Apr 30, 2003||Oct 9, 2003||Microsoft Corporation||System and method of inkblot authentication|
|US20030199267 *||May 19, 2003||Oct 23, 2003||Fujitsu Limited||Security system for information processing apparatus|
|US20040107108 *||Feb 26, 2002||Jun 3, 2004||Rohwer Elizabeth A||Apparatus and methods for implementing voice enabling applications in a converged voice and data network environment|
|US20040128131 *||Dec 26, 2002||Jul 1, 2004||Motorola, Inc.||Identification apparatus and method|
|US20040162726 *||Feb 13, 2003||Aug 19, 2004||Chang Hisao M.||Bio-phonetic multi-phrase speaker identity verification|
|US20050063522 *||Sep 18, 2003||Mar 24, 2005||Kim Moon J.||System and method for telephonic voice authentication|
|US20050171851 *||Jan 30, 2004||Aug 4, 2005||Applebaum Ted H.||Multiple choice challenge-response user authorization system and method|
|US20050273333 *||Jun 2, 2004||Dec 8, 2005||Philippe Morin||Speaker verification for security systems with mixed mode machine-human authentication|
|US20060085189 *||Oct 15, 2004||Apr 20, 2006||Derek Dalrymple||Method and apparatus for server centric speaker authentication|
|US20060183474 *||Jun 22, 2005||Aug 17, 2006||Harris Corporation||Aircraft communications system and related method for communicating between portable wireless communications device and ground|
|US20060293898 *||Jun 22, 2005||Dec 28, 2006||Microsoft Corporation||Speech recognition system for secure information|
|US20070036289 *||Jun 16, 2006||Feb 15, 2007||Fu Guo K||Voice authentication system and method using a removable voice id card|
|US20070055517 *||Aug 30, 2005||Mar 8, 2007||Brian Spector||Multi-factor biometric authentication|
|US20070198264 *||Apr 13, 2007||Aug 23, 2007||Chang Hisao M||Bio-phonetic multi-phrase speaker identity verification|
|US20070255564 *||May 1, 2006||Nov 1, 2007||Microsoft Corporation||Voice authentication system and method|
|US20080016369 *||Jul 9, 2007||Jan 17, 2008||Microsoft Corporation||Click Passwords|
|US20080195395 *||Feb 8, 2007||Aug 14, 2008||Jonghae Kim||System and method for telephonic voice and speech authentication|
|US20090202060 *||Feb 11, 2008||Aug 13, 2009||Kim Moon J||Telephonic voice authentication and display|
|US20090259468 *||Apr 11, 2008||Oct 15, 2009||At&T Labs||System and method for detecting synthetic speaker verification|
|US20090259470 *||Jun 24, 2009||Oct 15, 2009||At&T Intellectual Property 1, L.P.||Bio-Phonetic Multi-Phrase Speaker Identity Verification|
|US20100325721 *||Jun 17, 2009||Dec 23, 2010||Microsoft Corporation||Image-based unlock functionality on a computing device|
|US20110112830 *||Nov 10, 2009||May 12, 2011||Research In Motion Limited||System and method for low overhead voice authentication|
|US20120016662 *||May 16, 2011||Jan 19, 2012||Nokia Corporation||Method and apparatus for processing biometric information using distributed computation|
|US20130022180 *||Sep 25, 2012||Jan 24, 2013||Ebay Inc.||Voice phone-based method and system to authenticate users|
|US20130262163 *||May 23, 2013||Oct 3, 2013||Bytemark, Inc.||Method and System for Distributing Electronic Tickets with Visual Display|
|US20130317824 *||Aug 5, 2013||Nov 28, 2013||At&T Intellectual Property I, L.P.||System and Method for Detecting Synthetic Speaker Verification|
|US20140350938 *||Aug 7, 2014||Nov 27, 2014||At&T Intellectual Property I, L.P.||System and method for detecting synthetic speaker verification|
|US20160012824 *||Sep 21, 2015||Jan 14, 2016||At&T Intellectual Property I, L.P.||System and method for detecting synthetic speaker verification|
|WO2007027931A2 *||Aug 30, 2006||Mar 8, 2007||Authentivox||Multi-factor biometric authentication|
|U.S. Classification||455/563, 379/88.02, 455/566, 455/411, 704/246, 704/E17.003, 704/273|
|International Classification||G06Q20/32, G06Q20/40, H04M1/27, G10L17/00, H04M1/725, G07C9/00|
|Cooperative Classification||G10L17/00, G06Q20/32, H04M1/72519, G06Q20/40, G06Q20/40145, G06Q20/322, G07C9/00087, G07C9/00103, H04M1/271|
|European Classification||G06Q20/40, G06Q20/32, G06Q20/40145, G06Q20/322, G07C9/00B8, G10L17/00U, G07C9/00B6D4|
|Aug 26, 1999||AS||Assignment|
Owner name: NOKIA MOBILE PHONES LIMITED, FINLAND
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ULVINEN, VESA;PALONIEMI, JARI;REEL/FRAME:010194/0262;SIGNING DATES FROM 19990727 TO 19990809
|Oct 28, 2005||FPAY||Fee payment|
Year of fee payment: 4
|Oct 21, 2009||FPAY||Fee payment|
Year of fee payment: 8
|Oct 23, 2013||FPAY||Fee payment|
Year of fee payment: 12
|Jul 7, 2015||AS||Assignment|
Owner name: NOKIA TECHNOLOGIES OY, FINLAND
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:NOKIA CORPORATION;REEL/FRAME:036067/0222
Effective date: 20150116