WO1992006468A1 - Methods and apparatus for verifying the originator of a sequence of operations - Google Patents

Methods and apparatus for verifying the originator of a sequence of operations

Info

Publication number
WO1992006468A1
Authority
WO
WIPO (PCT)
Prior art keywords
derived
features
operations
models
sequence
Prior art date
Application number
PCT/GB1991/001681
Other languages
French (fr)
Inventor
Michael John Carey
Eluned Sarah Parris
John Scott Bridle
Original Assignee
Ensigma Limited
The Secretary Of State For Defence
Priority date
Filing date
Publication date
Application filed by Ensigma Limited and The Secretary Of State For Defence
Priority to AU86496/91A (AU665745B2)
Priority to US08/039,054 (US5526465A)
Publication of WO1992006468A1

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 - Speaker identification or verification

Definitions

  • the personal HMMs typically have 7 states and are also left to right models. Again each feature in each state is described by a normal or Gaussian distribution.
  • HMM parameters can be estimated by the Baum-Welch algorithm or by the Viterbi algorithm. Since the Viterbi algorithm is simpler and faster it is used for the personal HMMs where a person being enrolled for speaker verification has to wait until the enrolment process is complete.
  • the algorithm used for the personal HMMs is the same as that of Figure 8 except that the operations 67, 68 and 69 are replaced by an operation which calculates the forward probabilities using the Viterbi algorithm and a following operation to trace back to find the best sequence of states for each word. Also the operations 70 and 71 are carried out in a different way as explained below.
  • the transition probabilities, means and variances are initialised in the same way as for the world models but while the means and variances are re-estimated the transition probabilities remain fixed during the re-estimation.
  • the Viterbi operation is as described in the above mentioned Section 8.4 of Chapter 8 of the book by J.N. Holmes.
  • the trace back operation keeps track of the state giving rise to the maximum value for each frame of each word and shows the sequence of states having the highest probability for each word.
  • the frames for each word are assigned to the best fitting state found on trace back.
  • the new mean μ(j)(k) for feature k in state j is re-estimated in the operation 70 as the average of feature k over the frames assigned to state j.
  • the re-estimation process is repeated for the number of iterations required. This is either a fixed number, for example 3, or until the model has converged.
  • Continuous speech can also be used in deriving the world and personal models and the algorithms of Figures 6 and 7 may be used for this purpose when during training a prompt again shows a string of numerals which are to be spoken.
  • the frame count available from the operation 97 when the test 96 indicates that the end of a word has been reached is used to segment the words and to identify the frames in each word. Since the initial word models are based on the features of these frames, the operation 81 also stores the features of each frame when the world and personal models are being derived. As before the frames are initially allocated linearly to model states to allow means and variances to be calculated for initialisation and then the models are re-estimated using either the Baum-Welch or the Viterbi algorithm.
  • the resulting world and personal models are used for the operation of the system described above but improvements in discrimination between personal models and the world model, and hence the overall operation of the system, can be expected if the models are further adapted using discriminative training to make best possible use of the differences between sets of utterances used in forming the personal models and the world models rather than using the utterances to improve the likelihood results provided by the models.
  • the preferred way of doing this is to use a gradient algorithm but, as is mentioned below, for this purpose the rate of change of the likelihood function for the output probability of a model as a function of means and variances is required. The rate of change of the output probability has to be calculated with respect to each state probability in turn.
  • the error in the output (that is the difference between the actual and required output) is taken and the error back-propagation algorithm for the multi-layer perceptron (MLP), a neural-net technique, is used to work out the appropriate error derivative with respect to a given state.
  • Each pair of models, the personal model and the world model, for each digit is treated as an example of an alpha-net and the alpha-net training technique is used to increase discrimination between the two models. When the discrimination has been maximised the models are ready to be used for the process of verification.
  • the Alpha algorithm used in the application of HMMs to speech recognition computes sums over alternative state sequences. This Alpha computation can be thought of as being performed by a particular form of recurrent network which is called an alpha-net.
  • the parameters of the network are parameters of the HMMs, such as the means and variances.
  • the net computes αjt, the likelihood of the model generating all the observed data up to and including time t and being in state j at time t, in terms of bjt, the likelihood of it generating the data at time t given that it is in state j at time t, aij, the probability of state j given that the state at the previous time was i, and the α's at the previous time, that is αjt = bjt Σi aij αi(t-1).
  • ⁇ for the final state of each model is the likelihood of all the data given that model.
  • the prior probabilities reflect the amount of training material of each kind, that is, expected-speaker trials and other-speaker trials. During use the prior probabilities will depend on other factors.
  • an example of an alpha-net is shown in Figure 9, where the personal model of a digit includes three states 25, 26 and 27 with transitions between the states and each state having a transition to itself.
  • the corresponding world model of the same digit has states 28, 29 and 30 with similar transitions.
  • the alpha-net is formed by assuming an initial silent state 31 and transitions from this state to the two models.
  • the outputs of the net represent the probabilities that the digit, when uttered, was generated by the personal and world models, respectively.
  • adaptation is carried out by changing the means and variances for each of the states 25 to 30 to give optimum results, and the technique used for training makes use of the identity between the Baum-Welch backward-pass algorithm and MLP back-propagation of partial derivatives.
  • a coefficient controls the rate of adaptation, and the last part of the last term of the equations signifies that the coefficient is to be multiplied by the sign of the rate of change.
  • the adaptation rate can be decreased periodically, either by deducting a fixed amount from the coefficient or by taking a proportion of it.
  • equations 1 and 2, or 3 and 4, together with 5 and 6, are used repeatedly to calculate new means and variances for all states of the two models (a schematic sketch of such an update is given after this list).
  • the result is a pair of models: a modified personal and a corresponding world model.
  • Next sequences of stored vectors representing the digit when spoken by about 50 speakers other than the speaker corresponding to the personal model are used to test for improvements in the discrimination afforded by the modified models.
  • the process is then repeated for the models of every other digit so that a pair of models is obtained for every digit.
  • two distributions of output probabilities are obtained, one corresponding to the world model and one corresponding to the personal model.
  • Speaker verification has applications other than by way of telephone links; for example, in access control both for locations and buildings and for computers. Applications of the invention also occur in recognising spoken PINs for cash dispensing machines.
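The published text does not reproduce equations 1 to 6, but the bullets above describe a sign-of-gradient update of every state mean and variance, with a step size that is reduced as training proceeds. The Python sketch below is purely illustrative: the function and parameter names are assumptions, the derivatives are taken to come from the alpha-net back-propagation, and the variance floor is a safeguard added here rather than something stated in the patent.

```python
def sign_gradient_step(means, variances, grad_means, grad_vars, eta):
    """Adapt every state mean and variance by a step of size eta in the direction
    of the sign of the corresponding partial derivative of the discrimination
    measure (the derivatives are assumed to come from the alpha-net
    back-propagation described above)."""
    def sign(x):
        return (x > 0) - (x < 0)
    new_means = [[m + eta * sign(g) for m, g in zip(row, g_row)]
                 for row, g_row in zip(means, grad_means)]
    new_vars = [[max(v + eta * sign(g), 1e-6) for v, g in zip(row, g_row)]  # floor keeps variances positive
                for row, g_row in zip(variances, grad_vars)]
    return new_means, new_vars

def decay_rate(eta, amount=None, factor=0.5):
    """Periodically decrease the adaptation rate, either by deducting a fixed
    amount from the coefficient or by taking a proportion of it."""
    return max(eta - amount, 0.0) if amount is not None else eta * factor
```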

Abstract

Speaker verification is important in such applications as financial transactions which are to be carried out automatically by telephone. False acceptances of a speaker cause serious problems, but so do frequent false rejections in view of the annoyance caused. Some of the problems of speaker verification are reduced in the invention by forming Hidden Markov Models (HMMs) for each of a number of words using features of utterances of these words from a large number of speakers. These models are known as world models. In addition, for every person whose speech is to be recognised, one HMM is formed for each of the words as uttered by that person. These models are known as personal models. In verification a person is prompted to repeat a string of isolated or connected words (15) and features from each of these words are extracted (16). Next the probabilities that these features could have been generated by the world models for these words and by the personal models of that person are calculated (17 and 18), respectively, and these probabilities are compared (19) for each word. A decision (23) on verification is based on a poll (22) of these comparisons.

Description

METHODS AND APPARATUS FOR VERIFYING
THE ORIGINATOR OF A SEQUENCE OF OPERATIONS
The present invention relates to verifying that a sequence of operations has been carried out by a specific entity. Usually the entity is a person and the sequence of operations may, for example, be speaking a digit or a letter, or writing a letter or a word. Thus the invention relates particularly to the verification that an utterance was made by a predetermined person, but it is believed that the invention can be applied to other actions carried out by persons such as recognising written words.
Speech recognition using Hidden Markov Models (HMM) is a well known technique and HMMs have also been applied to signature verification. Speaker verification is important in many applications, particularly where financial transactions are to be carried out automatically by telephone and where access to premises is to be controlled. False acceptances of speech are likely to cause serious problems when unauthorised transactions or access are allowed. Almost as important are false rejections where a person who should be verified is not. False rejections cause annoyance especially when they occur frequently.
Speech verification over telephone links raises its own problems due to the limited bandwidth of such links and distortion which often occurs.
Three previous examples of speaker verification systems are described in U.S. Patents 4,363,102, 4,694,493 and 4,910,782.
According to a first aspect of the present invention there is provided a method of verifying that a sequence of operations originates from a specific entity, comprising the steps of
extracting a test sequence of sets of features of the results of the operations, one set corresponding to each operation,
matching the said test sequence of sets of features against a first stored probabilistic finite state machine model derived from sets of features of the results of the same sequence of operations when originated by a plurality of entities,
matching the said test sequence of features against a second stored finite state machine model derived from sets of features of the results of the same sequence of operations when originated by the specific entity, and
comparing the results of the matching steps to indicate whether the test sequence of operations originated from the specific entity.
According to a second aspect of the invention there is provided apparatus for verifying that a sequence of operations originated from a specific entity, comprising
means for storing data specifying first and second finite state machines, the data for the first machine having been derived from sets of features of the results of a sequence of operations originated by a plurality of entities, the data for the second machine having been derived from sets of features of the results of the same sequence of operations originated by a specific entity,
means for extracting a test sequence of sets of features from the results of a sequence of operations which are alleged to have been originated by the said specific entity,
means for matching the said test sequence against the first and second said machines, respectively, and
means for comparing results from the matching means to indicate whether the test sequence was originated by the said specific entity.
The specific entity is usually a person, although the entity may be an object, for example an object undergoing non-destructive testing when the sequence of operations may be signals originated by the object under test. As has been mentioned, where the entity is a person the sequence of operations may, for example, be the utterance of a sound or the signing of a signature. The sounds may be alpha-numeric characters or words and the characters or words may be uttered as isolated items, or connected items as in continuous speech.
The invention has the advantage that it tends to reduce false acceptances and false rejections in speaker verification. Signals resulting from incoming speech may be digitised at relatively short intervals and processed over relatively long intervals to provide sets or "frames" of digital signals derived from spectral components. By rejecting some of these components before or after further processing, the effects of telephone link limitations and distortion can be reduced so that speaker verification over telephone systems is possible.
According to a third aspect of the invention, therefore, there is provided a method of speech verification or recognition including obtaining digital signals representative of speech,
carrying out cepstral processing of the digital signals, and carrying out speech verification or recognition based on cepstral coefficients resulting from the processing but omitting the zero and/or first of the coefficients.
By using a gradient algorithm the finite state machine models employed by the invention, usually HMMs, may be refined when an appropriate method of finding a suitable partial differential is known. Such a method is described below.
Thus, according to a fourth aspect of the present invention there is provided a method of modifying Hidden Markov Models using a gradient based algorithm. Preferably a number of iterations are carried out, and after each iteration the modified models are tested against stored data to determine whether improvements have taken place, the processes finishing when improvements become insignificant. The invention also includes apparatus for carrying out the third and fourth aspects of the invention.
Certain embodiments of the invention will now be described by way of example with reference to the accompanying drawings, in which:-
Figure 1 is a block diagram of an apparatus according to the invention,
Figure 2 is a block diagram of a computer card shown as a block in Figure 1,
Figures 3 and 4 are flow charts showing how cepstral and related features can be extracted from signals representing sounds,
Figure 5 is a flow chart showing speaker verification for isolated words,
Figures 6 and 7 form a flow chart showing the calculation of probabilities in speaker verification using connected words,
Figure 8 is a flow chart showing the construction of HMM models, and
Figure 9 is a diagram illustrating an alpha-net which may be used in modifying HMMs employed in the invention.
In the arrangement of Figure 1, a person whose speech is to be verified may use a telephone 10 at a location remote from a personal computer 11 containing a circuit card 12 which together carry out verification and indicate the result. In general the telephone 10 will be connected by way of exchanges and telephone lines 13 to the input of the card 12 which contains an analogue-to-digital (A/D) and digital-to-analogue (D/A) converter 32 (see Figure 2), and a digital signal processor (DSP) 33 in which the program for speaker verification and data for the program are stored. The card 12 also contains a memory 34, and interface logic 35 for coupling the DSP to a host computer such as the personal computer mentioned above. A telephony interface 36 converts from a two wire telephone line to four wires: an A/D input pair and a D/A output pair. The interface 36 also contains a circuit for ring detection which provides an output on a control line 37, and "on hook" and "off hook" operations at the beginning and end of telephone messages. An audio interface 38 includes a pre-amplifier allowing an audio input for the card 12 to be connected to a microphone. An output for connection to a loudspeaker is also provided to allow audio messages or synthesised speech as an alternative to screen messages. A switch 39 is operated as required to connect either the telephony interface or the audio interface to the converter 32.
The A/D samples incoming speech typically at a rate of 8,000 samples per second and spectral representations of the input samples are produced at a frequency called the frame rate, typically every 20 ms. Spectral representation is in the form of the outputs of a bank of narrow band filters each centred on a different frequency, with these frequencies spread across the spectrum of the incoming telephone signals. The use of a bank of filters for this purpose is well known, and the filters may for example be formed by discrete components or by digital filters achieved by programming the DSP, for example as described in Chapter 4 of the book "Digital Signal Processing Design" by A. Bateman and W. Yates, see particularly Section 4.27 including Example 4.2 on page 148. Table 1 gives an example of centre frequencies and bandwidths for a suitable filter bank having 11 filters.
(Table 1: centre frequencies and bandwidths of the 11-filter bank)
At each sample, each filter gives a power output and the DSP program sums these outputs over each frame to give a frame output for each filter. The outputs of the filters are, in this embodiment, subjected to the known technique of cepstral processing which is described for example in the paper "Comparison of Parametric Representations for Monosyllabic Word Recognition in Continuously Spoken Sentences", by S.B. Davis and P. Mermelstein, IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-28, No. 4, August 1980, pages 357 to 366, see particularly page 359. Cepstral processing results in the derivation of a number of coefficients which can be regarded as descriptors for the spectrum of the speech signal. For example the first coefficient represents the total energy of the spectrum, the second coefficient represents the general slope of the spectrum with increase in frequency and the third coefficient gives an indication of how "peaky" the spectrum is. The following steps may be used to carry out cepstral processing: the logarithms of the outputs of the filters are calculated and then a discrete cosine transform is carried out on the log outputs.
In Figure 3 which shows an algorithm for cepstral processing, a variable n designating the filters in the filter bank is set to zero and then incremented (operations 41 and 42). The logarithm of the output power for each frame of the first filter is calculated and stored in an operation 43 and then the operations 42 to 43 are repeated for the other filters as n increases until under the control of a test 44 the logarithms of all the filter outputs have been stored.
The next part of the algorithm of Figure 3 calculates and stores each of the cepstral coefficients mj according to the expression
mj = Σn fn cos[j(n - ½)π/N], the sum being taken over the N filters of the bank,
where fn is the log of the power output of the nth filter and j is the number of the cepstrum coefficient.
This expression is given in a different form as equation (1) on page 359 of the above mentioned paper by Davis and Mermelstein.
After setting n to 1 (operation 45) the jth cepstral coefficient is found by an operation 46 and a test 47 where a variable mj is increased each time n is incremented by adding the product of the logarithm of the output power of the nth filter and the appropriate cosine transform coefficient ajn (which equals cos[j(n - ½)π/N]), so
mj = ajnfn + mj
providing the sum of n such operations. Next a test 48 in conjunction with previous operations 49 and 50 causes the operations 45 and 46, and the test 47, to be repeated j times, so generating j cepstral coefficients.
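Taken together, the loop of Figure 3 is simply a discrete cosine transform of the filter-bank log powers. The following Python sketch is illustrative only (the function name, the example filter powers and the choice of seven coefficients are assumptions, not taken from the patent):

```python
import math

def cepstral_coefficients(frame_powers, num_coeffs=7):
    """Compute cepstral coefficients for one frame of filter-bank powers,
    mirroring Figure 3: take the log of each filter's summed power
    (operations 41-44), then form  m_j = sum_n f_n * cos(j*(n - 0.5)*pi/N)
    (operations 45-50).  Coefficient j = 0 is the total log energy (MFCC0)."""
    n_filters = len(frame_powers)                       # N, here 11
    f = [math.log(p) for p in frame_powers]             # log power per filter
    coeffs = []
    for j in range(num_coeffs):
        m_j = sum(f[n - 1] * math.cos(j * (n - 0.5) * math.pi / n_filters)
                  for n in range(1, n_filters + 1))
        coeffs.append(m_j)
    return coeffs

# Example with made-up powers for the 11 filters of Table 1.
print(cepstral_coefficients([3.1, 4.0, 6.2, 5.5, 4.8, 3.9, 2.7, 2.1, 1.6, 1.2, 0.9]))
```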
The resulting representation of the spectrum has advantages in that the cepstrum can be readily processed for further purposes by giving different weightings to the coefficients. For example, to mitigate the effect of telephone line distortion on the spectrum of the speech signal, the zero and first cepstral coefficients, known as MFCC0 and MFCC1, may be given zero weights. Processing preferably also includes deriving an indication related to the rate of change of each coefficient (first order difference) and its second order difference. An algorithm for this purpose (see Figure 4) begins by setting the number j of the coefficient to one in an operation 52 and setting (operation 53) a variable k nominally representing a particular recent frame to -kmax, where kmax is the number of previous and succeeding frames relative to the frame k to be used in forming each jth first order difference dj.
An operation 54 and a test 55 cause (2kmax + 1) iterations to occur of a calculation:- dj = k mj(k) + dj
where mj(k) is the jth coefficient of the kth frame.
For example the first and last iterations of the five iterations when kmax = 2 are:- dj = -2mj(-2) + 0, and
dj = 2mj(2) + 1mj(1) + 0 - 1mj(-1) - 2mj(-2) + 0, respectively, where the figures in brackets give the frame position relative to the kth frame.
By incrementing j (operation 56) and carrying out a test 57, the operations 54 and 55 are repeated until j first order differences have been obtained. Other values may be used for kmax and, in effect, kmax = 1 is used to calculate the second order differences in the remainder of Figure 4. With kmax = 1, the operation 53, the incremental part of the operation 54 (k = k + 1), the test 55 and the associated loop are not required. Controlling the difference calculation is by operations 52' and 56', and test 57', which are the same as 52, 56 and 57, respectively.
The second order jth difference ej is calculated in an operation 58 and uses
ej = dj(k+2) - dj(k)
to give ej in a single operation. The ej calculated in this way is for the nominal frame k but derived partly from a frame k+2 two frames later. The second order difference derived is a trend which is almost always nearly the same as could, alternatively, be obtained from using dj(k-1) and dj(k+1). Second order coefficients could be calculated with values of dj from more frames and first order coefficients could be calculated with values from two frames only.
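The first and second order differences of Figure 4 can be sketched as follows (illustrative only; the frame layout and the choice of kmax are assumptions consistent with the text above):

```python
def first_order_difference(frames, t, j, k_max=2):
    """d_j for frame t: sum over k = -k_max..k_max of k * m_j(t + k)
    (operations 52-57).  `frames` holds one list of cepstral coefficients per
    frame; positions outside the utterance contribute nothing."""
    d = 0.0
    for k in range(-k_max, k_max + 1):
        if 0 <= t + k < len(frames):
            d += k * frames[t + k][j]
    return d

def second_order_difference(frames, t, j, k_max=1):
    """e_j for frame t, computed as d_j(t + 2) - d_j(t) (operation 58); the
    text indicates kmax = 1 is effectively used for this step."""
    return (first_order_difference(frames, t + 2, j, k_max)
            - first_order_difference(frames, t, j, k_max))

# Example: six frames of three made-up coefficients each.
frames = [[1.0, 0.2, -0.1], [1.1, 0.25, -0.05], [1.3, 0.3, 0.0],
          [1.2, 0.28, 0.05], [1.0, 0.2, 0.1], [0.9, 0.15, 0.12]]
print(first_order_difference(frames, 2, 0), second_order_difference(frames, 2, 0))
```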
At the end of processing which is known as "feature extraction" each frame is represented by a number of values which form elements in a "feature vector". There is one such vector for each frame. Typically in this embodiment there are 14 elements in each feature vector, 7 corresponding to the cepstral coefficients (two of which may be omitted, as mentioned above) and 7 corresponding to the first order differences of these coefficients. Preferably an additional 7 corresponding to second order differences are also used.
Feature extraction may be carried out by any suitable alternative method such as the known methods of linear prediction and discrete Fourier transform whose outputs may also be converted to the cepstrum of the speech signal.
Speaker verification in this invention depends on the use of
Hidden Markov Models. Briefly, an HMM is a finite state machine, which in the field of speech recognition typically comprises from 3 to 10 states coupled by transitions, usually from one state to the next and from one state to itself. Each state has an associated probability distribution function (pdf) which allows the calculation of the probability that a given feature vector would be produced by the HMM when in that state. Each pdf is a multidimensional function specified by a plurality of pairs of mean values and variances each of which is derived from the normal distribution of an element in the feature vectors as is mentioned in more detail below.
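By way of illustration, such a model can be represented as per-state mean and variance vectors plus transition probabilities, and the per-state log likelihood of a feature vector then follows directly from the diagonal-Gaussian assumption. This sketch is not the patent's data layout; the names are assumed:

```python
import math
from dataclasses import dataclass

@dataclass
class HMM:
    means: list       # means[j][k]     - mean of feature k in state j
    variances: list   # variances[j][k] - variance of feature k in state j
    log_trans: dict   # log_trans[(i, j)] - log probability of transition i -> j

def state_log_likelihood(model, j, feature_vector):
    """Log of the multidimensional (diagonal) Gaussian pdf of state j
    evaluated at the given feature vector."""
    total = 0.0
    for k, x in enumerate(feature_vector):
        mean, var = model.means[j][k], model.variances[j][k]
        total += -0.5 * (math.log(2.0 * math.pi * var) + (x - mean) ** 2 / var)
    return total

# A toy two-state, two-feature model.
toy = HMM(means=[[0.0, 1.0], [2.0, -1.0]],
          variances=[[1.0, 0.5], [1.5, 1.0]],
          log_trans={(0, 0): math.log(0.5), (0, 1): math.log(0.5), (1, 1): math.log(0.5)})
print(state_log_likelihood(toy, 0, [0.1, 0.9]))
```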
Where, for example, a speaker is to be verified from a string of, for example, 5 digits then the DSP on the card 12 stores data for two sets of HMMs: the first set is means and variances derived from the nominally normal distributions of the elements of feature vectors of a large number of utterances of the digits 0 to 9 from many different persons (males and females) and the models in this set are known as world models; and the second set in which each model is derived from the nominally normal distributions of the elements of feature vectors of, typically, 5 utterances of each digit spoken by a person whose speech is to be verified. Thus there are as many models in the second set as there are speakers to be verified, these models being known as personal models. The data for the HMMs of these two sets is stored as means and variances in files in the memory of the DSP. The memory may also store probabilities of transition from one state to another and also from one state to itself, these transition probabilities also being calculated in a known way from the digit utterances.
In the algorithm for speaker verification (Figure 5) which is carried out by the DSP, a person whose speech is to be verified enters an identification code into the computer 11. The card 12 then causes the computer to carry out an operation 15 to prompt for a random digit by displaying a request for this digit to be spoken or by generating a voice synthesised request. When the person utters the digit a sequence of feature vectors is extracted and stored in an operation 16 and the probability of this sequence being generated by the world model for the digit prompted is calculated in an operation 17. This probability is calculated using the Viterbi algorithm, which again is well known in the field of speech recognition. Briefly the Viterbi algorithm considers each feature vector in the sequence and the probability that each state of the HMM could have produced that vector in deriving a probability. The Viterbi algorithm takes into account the transition probabilities from one state to another and the probability calculated from the previous state. In this way the Viterbi algorithm finds the most likely combination of states and transitions and calculates a log probability that a sequence of feature vectors matches a particular HMM model. The Viterbi algorithm and its use in calculating the probabilities from HMM models is described in Chapter 8, particularly Sections 8.4 and 8.11 of the book "Speech Synthesis and Recognition" by J.N. Holmes, published by Van Nostrand Reinhold (UK) in 1988.
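A minimal Viterbi scorer for a left-to-right model might look like the sketch below. It assumes the per-state log likelihoods of each frame have already been computed (for example with the diagonal-Gaussian pdf of the previous sketch), that only self and next-state transitions exist, and that the path must start in the first state and end in the last; none of these names come from the patent:

```python
import math

def viterbi_log_prob(emit, log_self, log_next):
    """Log probability of the best state path through a left-to-right HMM.

    emit:     emit[t][j] = log likelihood of frame t under state j
    log_self: log_self[j] = log probability of the j -> j transition
    log_next: log_next[j] = log probability of the j -> j+1 transition
    """
    n_frames, n_states = len(emit), len(emit[0])
    NEG_INF = float("-inf")
    score = [NEG_INF] * n_states
    score[0] = emit[0][0]                          # path starts in the first state
    for t in range(1, n_frames):
        new = [NEG_INF] * n_states
        for j in range(n_states):
            stay = score[j] + log_self[j]
            enter = score[j - 1] + log_next[j - 1] if j > 0 else NEG_INF
            new[j] = max(stay, enter) + emit[t][j]
        score = new
    return score[-1]                               # best path ending in the last state

# Three frames scored against a toy two-state model.
emit = [[-1.0, -5.0], [-2.0, -1.5], [-4.0, -0.5]]
print(viterbi_log_prob(emit, [math.log(0.5)] * 2, [math.log(0.5)] * 2))
```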
The log probability that the sequence of vectors could have been generated by the alleged speaker's personal model of the prompted digit is now calculated in the same way (operation 18), the calculated world model log probability is subtracted from the calculated personal model probability and the result is stored (operation 19). A positive value for this result indicates that the personal probability is greater than the world probability and that therefore it is more likely that the digit was uttered by the alleged speaker than by an impostor.
A test 20 is used to determine whether the last prompt in the operation 15 was, in this example, for the fifth in a string of random digits. If not then operations 15 to 20 are repeated but otherwise an operation 22 is carried out in which the results stored in the operation 19 are compared and a decision on verification is given on the basis of a poll with the majority of acceptances or rejections determining the decision. The decision is indicated on the display of the computer 11 or by means of a voice synthesised message in an operation 23 and the algorithm ends.
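In outline, operations 19 to 22 amount to the comparison and poll sketched below (illustrative; the threshold of zero follows from subtracting log probabilities as described above):

```python
def verify_by_poll(score_pairs):
    """score_pairs: one (personal_log_prob, world_log_prob) pair per prompted digit.
    A digit counts as an acceptance when the personal model scores higher
    (operation 19); the speaker is verified if acceptances are in the majority
    (operation 22)."""
    acceptances = sum(1 for personal, world in score_pairs if personal - world > 0)
    return acceptances > len(score_pairs) / 2

# Five prompted digits with illustrative log-probability pairs.
print(verify_by_poll([(-110.2, -115.0), (-98.7, -97.9), (-120.4, -126.1),
                      (-101.3, -104.8), (-99.0, -99.5)]))
```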
The beginning and end of an utterance is found by comparing the probability that the features currently extracted could have been generated by a "silence" state, defined by means and variances, with the probability that the features could have been generated by the beginning and end state, respectively, of an HMM representing an expected word.
Improvements in speaker verification can usually be achieved if a phrase of connected words, that is continuously spoken words, is used in preference to isolated words. For example five numerals could be spoken as a continuous phrase or a string of five numerals could be split into two continuously spoken parts. In recognition, a computer such as the PC 11 may be programmed to display a prompt for the required phrase so that the model of the expected response can be formed by joining models for individual words end to end to make an overall model in the form of a string of word models. It is preferable, however, to allow for silences between words by including a state representing silence between the end of one word model and the beginning of the next word model and to allow transitions either directly from one word model to the next or by way of the silence state which may also have a transition to itself to allow for longer silences.
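One possible way of composing such a string model, sketched here purely for illustration, is to concatenate the per-word state lists and place an optional silence state between consecutive words, keeping a record of which word each state belongs to; the transitions (word to word directly, via silence, or looping in silence) would then be handled by the decoder:

```python
def build_string_model(word_models, silence_state):
    """word_models: list of per-word state lists (each state holding, say, its
    means and variances); silence_state: a single state modelling inter-word
    silence.  Returns (states, word_of_state) where word_of_state[s] is the
    word index of state s, or None for an optional silence state."""
    states, word_of_state = [], []
    for w, word in enumerate(word_models):
        for state in word:
            states.append(state)
            word_of_state.append(w)
        if w < len(word_models) - 1:
            states.append(silence_state)     # optional: may be skipped or looped in
            word_of_state.append(None)
    return states, word_of_state
```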
Figures 6 and 7 are in the form of a flow chart for calculating probabilities of connected words. In Figure 6 each incoming frame in a complete utterance is dealt with in turn to calculate the probability, for each state in the string of models, that a sequence of states ending with that state could have generated the utterance. Figure 7 uses these probabilities to segment the utterance into words and extract the probability that each of these words was spoken.
In operation 80 a variable FRAME is set to 1 corresponding to the first frame of an utterance received, and then feature extraction is carried out as described above in connection with Figures 3 and 4 (operation 81). Since the next group of operations is to be carried out for every state in the string of models a variable STATE is set to the total number of states in this string (operation 82) so that in this group of operations the last state in the string is considered first. The probability that the features of frame 1 could have been generated by this last state is calculated in an operation 83 from the probability distribution for this state. This probability is now added to the maximum probability obtained in previous iterations for states having transitions to this last state (operation 84), this maximum including the multiplication by the probability of the transition to the last state as calculated by addition of the log of the transition probability. In the first iteration there are no such previously calculated probabilities but it can be seen that as the operation 84 is repeated a type of Viterbi algorithm is operated. Next an operation 85 is carried out in which the identification of the state which had this previous maximum probability is stored. An operation 86 and a test 87 cause operations 83, 84 and 85 to be repeated for every state in the string. Having dealt with the first frame the frame number is incremented, in an operation 88, unless a test 89 for a period of silence of about half a second indicates that the response has ended. Thus while frames of the response are still available the operations 81 to 86 are repeated continuously.
When silence is detected by the test 89 the STATE variable is again set to the total number of states in the string of models in an operation 91, the variable FRAME is set to the last frame occurring before silence and a variable WORD is set to the total number of words represented by the string of models (operations 92 and 93). The probability of the last word in the utterance is the probability calculated in the operation 84 for the last state in the string and the last frame in the utterance. An operation 94 stores this probability. A variable FRAME COUNT is set to zero in an operation 95 and then a test 96 is carried out to determine whether the previous state of this word model is in the previous word model as indicated by the identification stored in the operation 85. If not the frame count is increased by 1 and the frame number is decreased by 1 in operations 97 and 98. The test 96 now determines for the previous frame of the last word whether, from the indication for this frame stored in the operation 85, the previous state was in the last or previous word model. This process continues until the test 96 gives a positive response indicating that the algorithm has backtracked through all the states in the last word to the beginning of the word. The number of frames in the word is available from the variable FRAME COUNT. The variable WORD is now decremented (operation 100) and if all the words in the string have not been considered as indicated by a test 101, the operations 94 to 100 are repeated for the previous word in the string after decrementing the variable FRAME in an operation 102. When the outcome of the test 101 indicates that all the words in the string have been considered the probabilities of each word are available as stored in the operation 94 and these probabilities can be considered as above in an acceptance calculation to give an indication as to whether the speaker is verified as genuine or not.
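The forward pass of Figure 6 and the trace back of Figure 7 can be sketched together as follows. For clarity the optional silence states are omitted and the string of word models is treated as one long left-to-right chain; the emission log likelihoods are assumed to be supplied, and the variable names are not the patent's:

```python
def decode_connected(emit, log_self, log_next, word_of_state):
    """Viterbi pass over a string of concatenated word models (Figure 6) followed
    by a trace back that segments the utterance into words (Figure 7).

    emit:          emit[t][s]  - log likelihood of frame t under string state s
    log_self:      log_self[s] - log probability of staying in state s
    log_next:      log_next[s] - log probability of moving from state s to s + 1
    word_of_state: word index of each state in the string
    Returns (frame_counts, word_end_scores): frames per word on the best path and
    the cumulative log probability at the last frame of each word (operation 94).
    """
    n_frames, n_states = len(emit), len(emit[0])
    NEG_INF = float("-inf")
    delta = [[NEG_INF] * n_states for _ in range(n_frames)]
    back = [[0] * n_states for _ in range(n_frames)]        # best predecessor state
    delta[0][0] = emit[0][0]
    for t in range(1, n_frames):                             # operations 83-87
        for s in range(n_states):
            stay = delta[t - 1][s] + log_self[s]
            enter = delta[t - 1][s - 1] + log_next[s - 1] if s > 0 else NEG_INF
            if stay >= enter:
                delta[t][s], back[t][s] = stay + emit[t][s], s
            else:
                delta[t][s], back[t][s] = enter + emit[t][s], s - 1

    n_words = max(word_of_state) + 1
    frame_counts = [0] * n_words
    word_end_scores = [NEG_INF] * n_words
    s, t = n_states - 1, n_frames - 1                        # operations 91-93
    word_end_scores[word_of_state[s]] = delta[t][s]          # operation 94
    while t > 0:                                             # trace back, operations 95-102
        frame_counts[word_of_state[s]] += 1
        prev = back[t][s]
        if word_of_state[prev] != word_of_state[s]:          # crossed a word boundary
            word_end_scores[word_of_state[prev]] = delta[t - 1][prev]
        s, t = prev, t - 1
    frame_counts[word_of_state[s]] += 1
    return frame_counts, word_end_scores
```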
Many variations and alternatives to the various flow charts are apparent and one such alternative for Figure 5 or in connected speech is to require the speaker to utter all, or part, of a personal identification number (PIN) having identified himself to the computer, so that the computer is "aware" of the expected digits and uses a program in which world and personal models of these digits are used in calculating probabilities and deciding on the verification result. As an additional security measure a speech recogniser algorithm, of which many are known, may be used to recognise the PIN. Another alternative is to allow the speaker to utter a string of digits which he chooses, when the DSP employs a program which first recognises each digit in the string and then calculates probabilities from the world and personal models of these digits. Instead the program may compare each sequence of vectors against every world and personal model and select the highest probability pair relating to the same digit and base a decision on probabilities calculated from this pair.
As an alternative to the operations 19 and 22, a program may be used in which the probabilities derived from the world models of each of, for example, five digits may be multiplied together and compared with a similar product derived by matching feature vectors against personal models. If the product of probabilities from the world models is smaller than the product from the personal models then the speech is verified.
Any set of feature vectors which gives rise to probabilities, calculated from the world and personal models, which are below a certain level is rejected, to prevent spurious or arbitrary utterances from giving a false validation.
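A minimal sketch, in Python, of this acceptance calculation, assuming per-digit log probabilities have already been obtained from the two sets of models (the floor value and the function name verify are illustrative only):

```python
import numpy as np

def verify(personal_log_probs, world_log_probs, floor=-200.0):
    """Acceptance calculation over a string of digits.

    personal_log_probs, world_log_probs : per-digit log probabilities from the
    personal and world models.  Multiplying probabilities corresponds to
    summing their logarithms.  The floor guards against spurious utterances;
    its value here is purely illustrative.
    """
    lp = float(np.sum(personal_log_probs))   # log of the product over the digits
    lw = float(np.sum(world_log_probs))
    if lp < floor and lw < floor:
        return False                         # arbitrary utterance: reject outright
    return lp > lw                           # verified if the personal product is larger
```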
Of course speaker verification using the invention, for example by the methods described above, is not limited to the utterance of digits. Other characters such as letters from the English or other alphabets may be used, as may be complete words if either each character of the word is spoken separately or a known continuous speech recognition algorithm is used to separate one character or word from another.
In building the models for the digits, a set of world models, one for each digit, is built using the Baum-Welch re-estimation process, which is another well known technique in the field of speech recognition. These initial world models are common to all speakers. Personal models for each of the digits are then constructed using the same process from, typically, five examples of each of the digits collected for each person whose speech is to be verified.
Derivation of the world and personal models is now described in more detail.
Each world model is derived from a number of utterances (QMAX) of the digit represented by that model each taken from a different speaker.
Referring to Figure 8, the transition probabilities, means and variances are initialised in an operation 60 as follows. The transition probabilities a(i)(i) from state i to itself and the transition probabilities a(i)(j) from state i to state j are initialised to
a(i)(i) = 0.5

a(i)(j) = 0.5

For the purpose of initialisation the frames of each utterance are assigned linearly to the HMM states so that each state has, typically, one tenth of the frames (PMAX) of the utterance assigned to it. Assuming each frame has k features, the mean μ(j)(k) for the normal or Gaussian distribution of state j feature k is initialised from
μ(j)(k) = (1/Sj) Σ xp(k)

where xp(k) is the value of feature k in frame p, the summation is taken over all frames p assigned to state j and Sj is the total number of frames assigned to j.
The variance σ²(j)(k) of state j feature k is calculated using

σ²(j)(k) = (1/Sj) Σ (xp(k) - μ(j)(k))²
where the summation is taken over all frames assigned to state j and Sj is the total number of frames assigned to j.
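A minimal sketch in Python of this linear initialisation, assuming each utterance is already available as an array of feature vectors (the function name and the ten-state default are illustrative only):

```python
import numpy as np

def initialise_state_gaussians(utterances, n_states=10):
    """Linear initialisation of the state Gaussians of a left-to-right HMM.

    utterances : list of (n_frames, n_features) arrays, one per training utterance.
    Each utterance's frames are divided evenly among the states; the mean and
    variance of every feature are then computed per state over the assigned frames.
    """
    n_features = utterances[0].shape[1]
    pooled = [[] for _ in range(n_states)]
    for x in utterances:
        bounds = np.linspace(0, len(x), n_states + 1).astype(int)
        for j in range(n_states):
            pooled[j].append(x[bounds[j]:bounds[j + 1]])
    mu = np.zeros((n_states, n_features))
    var = np.zeros((n_states, n_features))
    for j in range(n_states):
        frames = np.concatenate(pooled[j])   # the Sj frames assigned to state j
        mu[j] = frames.mean(axis=0)          # mean of feature k over the assigned frames
        var[j] = frames.var(axis=0)          # variance of feature k about that mean
    # The allowed transition probabilities are simply set to a(i)(i) = a(i)(j) = 0.5.
    return mu, var
```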
Next P and Q are set to zero in an operation 61 to allow the probabilities of each of the PMAX frames of each of the QMAX utterances of the digit to be calculated, given the HMM for that digit, in an operation 62 which is repeated PMAX × QMAX times by operation of tests 63 and 64 and increment operations 65 and 66.
Five operations follow, all according to the Baum-Welch algorithm described in Chapter 8 of the above mentioned book by J.N. Holmes. These operations are
calculate the forward probabilities (operation 67),
calculate the backward probabilities (operation 68),
calculate new transition probabilities (operation 69),
calculate new means (operation 70), and
calculate new variances (operation 71).
The operations 69, 70 and 71 provide a new re-estimated model which is then used to recalculate the frame probabilities of the QMAX utterances, followed by repetitions of the operations 67 to 71, this cycle being carried out for a number of iterations. The number of iterations is determined by a test 73 using a value "MAX ITER" which is typically about 10, but alternatively iteration may be continued until a test (not shown) indicates that convergence of the transition probabilities, means and variances has occurred. These parameters as finally calculated then define the HMM world model for the digit whose utterances were used.
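By way of illustration only, the same Baum-Welch re-estimation could today be obtained with an off-the-shelf library; the sketch below assumes the third-party hmmlearn package and a ten-state diagonal-covariance Gaussian HMM, and is not the implementation described with reference to Figure 8:

```python
import numpy as np
from hmmlearn import hmm   # third-party library, used here purely for illustration

def train_world_model(utterances, n_states=10, n_iter=10):
    """Baum-Welch re-estimation of a diagonal-covariance Gaussian HMM from
    several utterances of one digit.

    utterances : list of (n_frames, n_features) feature arrays (QMAX of them).
    n_iter plays the role of MAX ITER; the library also stops on convergence.
    """
    X = np.concatenate(utterances)           # stack the frames of all utterances
    lengths = [len(u) for u in utterances]   # so the library knows utterance boundaries
    model = hmm.GaussianHMM(n_components=n_states,
                            covariance_type="diag",
                            n_iter=n_iter)
    model.fit(X, lengths)                    # forward-backward passes and re-estimation
    return model
```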
The personal HMMs typically have 7 states and are also left to right models. Again each feature in each state is described by a normal or Gaussian distribution.
HMM parameters can be estimated by the Baum-Welch algorithm or by the Viterbi algorithm. Since the Viterbi algorithm is simpler and faster it is used for the personal HMMs, because a person being enrolled for speaker verification has to wait until the enrolment process is complete.
The algorithm used for the personal HMMs is the same as that of Figure 8 except that the operations 67, 68 and 69 are replaced by an operation which calculates the forward probabilities using the Viterbi algorithm and a following operation to trace back to find the best sequence of states for each word. Also the operations 70 and 71 are carried out in a different way as explained below.
Each personal HMM is built from 5 utterances of the digit it models so QMAX = 5. The transition probabilities, means and variances are initialised in the same way as for the world models but while the means and variances are re-estimated the transition probabilities remain fixed during the re-estimation.
The Viterbi operation is as described in the above mentioned Section 8.4 of Chapter 8 of the book by J.N. Holmes. The trace back operation keeps track of the state giving rise to the maximum value for each frame of each word and shows the sequence of states having the highest probability for each word. The frames for each word are assigned to the best fitting state found on trace back.
The new mean μ̂(j)(k) for feature k in state j is re-estimated in the operation 70 using the frames assigned to state j as follows:

μ̂(j)(k) = (1/Sj) Σ xp(k)

where the summation is taken over all frames p assigned to state j and Sj is the total number of frames assigned to j. Similarly the new variance σ̂²(j)(k) is re-estimated in the operation 71 for feature k of state j using the frames assigned to state j as follows:

σ̂²(j)(k) = (1/Sj) Σ (xp(k) - μ̂(j)(k))²

where the summation is taken over all frames p assigned to state j and Sj is the total number of frames assigned to j.
The re-estimation process is repeated for the number of iterations required. This is either a fixed number, for example 3, or until the model has converged.
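A minimal sketch in Python of this Viterbi-style re-estimation, assuming the traceback has already assigned every frame to a state (array names are illustrative only):

```python
import numpy as np

def reestimate_from_alignment(frames, state_of_frame, n_states):
    """Viterbi-style re-estimation of means and variances.

    The traceback has already assigned every frame to its best-fitting state;
    the new mean and variance of each feature in each state are then computed
    over the frames assigned to that state, exactly as in the formulas above.

    frames         : (n_frames, n_features) feature vectors
    state_of_frame : (n_frames,) state index assigned to each frame
    """
    state_of_frame = np.asarray(state_of_frame)
    n_features = frames.shape[1]
    mu = np.zeros((n_states, n_features))
    var = np.zeros((n_states, n_features))
    for j in range(n_states):
        assigned = frames[state_of_frame == j]   # the Sj frames assigned to state j
        if len(assigned):
            mu[j] = assigned.mean(axis=0)        # new mean of feature k for state j
            var[j] = assigned.var(axis=0)        # new variance about the new mean
    return mu, var
```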
Continuous speech can also be used in deriving the world and personal models, and the algorithms of Figures 6 and 7 may be used for this purpose; during training a prompt again shows a string of numerals which are to be spoken. The frame count available from the operation 97 when the test 96 indicates that the end of a word has been reached is used to segment the words and to identify the frames in each word. Since the initial word models are based on the features of these frames, the operation 81 also stores the features of each frame when the world and personal models are being derived. As before the frames are initially allocated linearly to model states to allow means and variances to be calculated for initialisation and then the models are re-estimated using either the Baum-Welch or the Viterbi algorithm.
The resulting world and personal models are used for the operation of the system described above but improvements in discrimination between personal models and the world model, and hence the overall operation of the system, can be expected if the models are further adapted using discriminative training to make best possible use of the differences between sets of utterances used in forming the personal models and the world models rather than using the utterances to improve the likelihood results provided by the models. The preferred way of doing this is to use a gradient algorithm but, as is mentioned below, for this purpose the rate of change of the likelihood function for the output probability of a model as a function of means and variances is required. The rate of change of the output probability has to be calculated with respect to each state probability in turn. To do this, the error in the output (that is the difference between the actual and required output) is taken and the error back-propagation algorithm for the perceptron (a neural net concept) is used to work out the appropriate error derivative with respect to a given state. Treating a number of Markov models as a single unit and then using multi-layer perceptron (MLP) error back-propagation is described by J. S. Bridle in his paper "Alpha-nets: a recurrent 'neural' network architecture with a Hidden Markov Model interpretation", Speech Communication, Vol. 9, No. 1, February 1990, pp 83-92. Each pair of models, the personal model and the world model, for each digit is treated as an example of an alpha-net and the alpha-net training technique is used to increase discrimination between the two models. When the discrimination has been maximised the models are ready to be used for the process of verification.
The Alpha algorithm, used in the application of HMMs to speech recognition, computes sums over alternative state sequences. This Alpha computation can be thought of as being performed by a particular form of recurrent network which is called an alpha-net. The parameters of the network are parameters of the HMMs, such as the means and variances. The alpha calculation
αjt = bjt Σi aij αi(t-1)
computes αjt, the likelihood of the model generating all the observed data up to and including time t, in terms of bjt, the likelihood of it generating the data at time t given that it is in state j at time t, aij, the probability of state j given that the state at the previous time was i, and the α's at the previous time. When all the input sequence has been processed, the value of α for the final state of each model is the likelihood of all the data given that model. For any given word we have two models: that for the individual person gives us Lp, and that for the rest of the world gives us Lw. The probability Pp that the given data was produced by the supposed person is now computed using Bayes' rule and the prior probability P of the person (that is, the probability, determined by previous factors, that the word is spoken by the expected speaker):
Pp = P Lp / (P Lp + (1 - P) Lw)
During training the prior probabilities reflect the amount of training material of each kind, that is expected speaker trials and other speaker trials. During use the prior probabilities will depend on other factors.
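A minimal sketch in Python of the alpha recursion and the Bayes' rule combination of Lp and Lw, working with logarithms for numerical stability (the function names and the two-alternative prior weighting shown here are assumptions for illustration):

```python
import numpy as np
from scipy.special import logsumexp

def log_alpha_final(log_pi, log_a, log_b):
    """Alpha recursion in the log domain; returns the log likelihood of the
    whole observation sequence for one model (personal or world)."""
    log_alpha = log_pi + log_b[0]
    for t in range(1, len(log_b)):
        # alpha_j(t) = b_j(t) * sum_i a_ij * alpha_i(t-1), computed with logarithms
        log_alpha = log_b[t] + logsumexp(log_alpha[:, None] + log_a, axis=0)
    return log_alpha[-1]                     # final state of a left-to-right model

def posterior_person(log_lp, log_lw, prior_p):
    """Bayes' rule combination of the personal-model likelihood Lp, the
    world-model likelihood Lw and the prior probability of the person."""
    num = np.log(prior_p) + log_lp
    den = logsumexp([num, np.log(1.0 - prior_p) + log_lw])
    return np.exp(num - den)                 # Pp; Pw is simply 1 - Pp
```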
An example of an alpha-net is shown in Figure 9 where the personal model of a digit includes three states 25, 26 and 27 with transitions between the states and each state having a transition to itself. The corresponding world model of the same digit has states 28, 29 and 30 with similar transitions. The alpha-net is formed by assuming an initial silent state 31 and transitions from this state to the two models. The outputs of the net represent the probability of the digit when uttered being generated by the personal and world models, respectively. The maximum discrimination when a word is uttered by the person whose speech is to be verified occurs when Pp = 1 and Pw = 0. Thus the alpha-net is to be optimised to approximate to this result and the contrary result of Pp = 0 and Pw = 1 when the digit is spoken by someone else. Adaption is by changing the means and variances for each of the states 25 to 30 to give optimum results and the technique used for training makes use of the identity of the Baum-Welch backward pass algorithm and the MLP back propagation of partial derivatives.
Use is made of a log probability score
J = -log Pc,
where the correct class is c (c = p or c = w); this score can be optimised using the gradient algorithm. To adapt the means (m) and standard deviations (σ) according to a simple sign-of-gradient rule the following equations can be used

mj(t) = mj(t - 1) - ξ sign(∂J/∂mj)     equation 1, and

σj(t) = σj(t - 1) - ξ sign(∂J/∂σj)     equation 2,

where the subscript j refers to the jth state in a model, ξ is a coefficient which controls the rate of adaption and the sign function signifies that the coefficient is to be multiplied by the sign of the rate of change of J with respect to the parameter concerned. Alternatively the full gradient algorithm may be used, when the equations are

mj(t) = mj(t - 1) - ξ ∂J/∂mj     equation 3, and

σj(t) = σj(t - 1) - ξ ∂J/∂σj     equation 4.
The adaption rate can be decreased periodically by changing the value of ξ by deducting a fixed amount from ξ or by taking a proportion of ξ.
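A minimal sketch in Python of the sign-of-gradient and full-gradient updates of equations 1 to 4, assuming the partial differentials of J have already been computed (names are illustrative only):

```python
import numpy as np

def sign_gradient_step(m, sigma, dJ_dm, dJ_dsigma, xi):
    """One pass of the sign-of-gradient rule (equations 1 and 2): each mean and
    standard deviation moves by a fixed step xi against the sign of the partial
    differential of the score J."""
    return m - xi * np.sign(dJ_dm), sigma - xi * np.sign(dJ_dsigma)

def full_gradient_step(m, sigma, dJ_dm, dJ_dsigma, xi):
    """The full-gradient alternative (equations 3 and 4)."""
    return m - xi * dJ_dm, sigma - xi * dJ_dsigma

# Between passes the adaption rate can be lowered, for example xi = xi - 0.01
# or xi = 0.9 * xi, corresponding to the two ways of decreasing xi described above.
```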
In order to use these equations it is necessary to be able to calculate the partial differentials which they include.
By using an equation expressing the output probability in terms of Lwj, the likelihood of the jth state of the model, it can be shown (equation 5) that the partial differential of J with respect to the mean of the jth state of the world model can be expressed in terms of Pwj, the probability of the jth state of the world model, of δwc, which is 1 if w = c and otherwise 0, and of the Baum-Welch re-estimate of the mean. By using the corresponding equation for the variance it can be shown (equation 6) that the partial differential of J with respect to the variance can be expressed in terms of the same quantities together with the Baum-Welch re-estimate of the variance.
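As an illustration of what such partial differentials look like, the following is a sketch under standard assumptions for diagonal-covariance Gaussian states, not a reproduction of equations 5 and 6; the model posterior Pw, the occupancy count Swj and the feature index k are notation introduced here for illustration only:

```latex
% Sketch under standard assumptions: diagonal-covariance Gaussian states,
% model posterior P_w from Bayes' rule, state occupancy count
% S_{wj} = sum_t gamma_{wj}(t) from the forward-backward pass, and
% \hat m, \hat\sigma^2 the Baum-Welch re-estimates of mean and variance.
\[
\frac{\partial J}{\partial m_{wj}(k)}
   = \bigl(P_w - \delta_{wc}\bigr)\,
     \frac{S_{wj}}{\sigma^2_{wj}(k)}
     \bigl(\hat m_{wj}(k) - m_{wj}(k)\bigr)
\]
\[
\frac{\partial J}{\partial \sigma^2_{wj}(k)}
   = \bigl(P_w - \delta_{wc}\bigr)\,
     \frac{S_{wj}}{2\,\sigma^4_{wj}(k)}
     \Bigl(\hat\sigma^2_{wj}(k)
           + \bigl(\hat m_{wj}(k) - m_{wj}(k)\bigr)^2
           - \sigma^2_{wj}(k)\Bigr)
\]
```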
Thus to improve the discrimination between a personal model and the world model for a digit, the equations 1 and 2, or 3 and 4, with 5 and 6 are used repeatedly to calculate new means and variances for all states of the two models. The result is a pair of models: a modified personal model and a corresponding world model. Next, sequences of stored vectors representing the digit when spoken by about 50 speakers other than the speaker corresponding to the personal model are used to test for improvements in the discrimination afforded by the modified models. The process is then repeated for the models of every other digit so that a pair of models is obtained for every digit. By applying the stored sequences of vectors to one of the pairs of models two distributions of output probabilities are obtained, one corresponding to the world model and one corresponding to the personal model. In general the two distributions overlap and where overlap occurs there is a region of uncertainty or error. By comparing the overlap of the distributions before and after the models are modified by the above process a check can be made to determine whether an improvement has occurred. This checking process is carried out for all pairs of models. As a result adjustments to ξ can be made, if necessary, and the process of recalculation of all models can be carried out again in a further iteration to obtain further improvements. After a number of iterations has been carried out it is found that further iterations provide only negligible improvements and the models are then in their final state ready for use.

Having described several specific examples it will be clear that the invention can be put into operation in many different ways and for many different purposes. Non-destructive testing has been mentioned, and the invention may be applied to recognising specific written words or letters where the words or letters are modelled by HMMs. Speaker verification has applications other than by way of telephone links, for example in access control, both for locations such as buildings and for computers. Applications of the invention also occur in recognising spoken PINs for cash dispensing machines.
The references given in this specification form part of, and are hereby incorporated into, the specification.

Claims

1. A method of verifying that a sequence of operations originates from a specific entity, comprising the steps of
extracting a test sequence of sets of features of the results of the operations, one set corresponding to each operation,
matching the said test sequence of sets of features against a first stored finite state machine model derived from sets of features of the results of the same sequence of operations when originated by a plurality of entities,
matching the said test sequence of features against a second stored finite state machine model derived from sets of features of the results of the same sequence of operations when originated by the specific entity, and
comparing the results of the matching steps to indicate whether the test sequence of operations originated from the specific entity.
2. A method according to Claim 1 wherein the entity is a person and the sequence of operations is the utterance of sequences of sounds.
3. A method according to Claim 1 wherein the entity is a person and the sequence of operations is the writing of a word or letter.
4. A method according to Claim 2 wherein each feature comprises a multi-element vector in which the elements together provide a representation of the sequence of sounds in an interval, and the set of vectors represents the sequence of sounds over a succession of the said intervals.
5. A method according to Claim 2 or 4 wherein the utterance is the utterance of an item chosen from an alpha-numeric character and a word.
6. A method according to Claim 2 or 4 wherein the utterance is the utterance of a string of connected spoken items chosen from alphanumeric characters and words.
7. A method according to Claim 2 or 4 wherein
each matching step comprises the calculation of a probability, and
comparing the results of the matching steps is by subtracting one log probability from another, the sign of the result indicating whether the utterance was from a specific person.
8. A method according to Claim 5 or 7 insofar as dependent on Claim 5 wherein
matching the said test sequence against a first stored finite state machine model comprises matching against a selected one of a plurality of first stored finite state machine models each derived from respective utterances of an item when uttered by each of a plurality of persons, and
matching the said test sequence against a second stored finite state machine model comprises matching against that one of a plurality of second finite state machine models which is derived from the same item as the said selected one model, each second model being derived from utterances of a respective item when uttered by a specific person.
9. A method according to Claim 6 wherein
matching the said test sequence against a first stored finite state machine model comprises matching against a model comprising a stored item model for each item in the utterance, the item models being serially linked and each derived from respective utterances of that item when uttered by each of a plurality of persons, and
matching the said test sequence against a second stored finite state machine model comprises matching against a model comprising a stored item model for each item in the utterance, the item models being serially linked and each derived from respective utterances of that item when uttered by a specific person.
10. A method of speaker verification including
prompting a speaker to utter a series of isolated or connected items chosen from alpha-numeric characters and words,
carrying out a method according to any of Claims 5 to 10 insofar as dependent on Claim 5 or 6 for each utterance made in response to prompting.
11. A method according to Claim 8 or 10 wherein
comparing the results of the matching step is by comparing probabilities calculated for each of a plurality of utterances from first and second said models derived from utterances of the same items, and indicating whether the plurality of utterances was from the specific person on the basis of the majority of results of the probability comparisons.
12. A method according to Claim 9 or 10 wherein matching the said test sequence includes for each of the first and second finite state machine models
calculating for each state in that finite state machine model the maximum probability that a sequence of states, obtainable from that finite state machine model and ending in that state, could generate the test sequence,
determining the end of each item in the sequence of sets of features based on the said maximum probability for each state,
using the said maximum probability for each state at the end of an item as the probability of that item, and
comparing the result of the matching steps includes comparing the maximum probabilities for each item as obtained for the first and second finite state machine models.
13. A method according to any preceding claim wherein extracting the test sequence includes cepstral processing to provide features in the form of cepstral coefficients.
14. A method according to Claim 13 wherein the zero cepstral coefficient is omitted from the sets of features.
15. A method according to Claim 13 or 14 wherein the first cepstral coefficient is omitted from the sets of features.
16. A method according to Claim 13, 14 or 15 wherein extracting the test sequence includes
providing a set of power values for each feature in which each power value represents the power of sound in one of a series of equal intervals over a respective frequency band,
providing cepstral processing for the power values by calculating a logarithm value for each power value and carrying out a cosine transform on the logarithm values, and
using at least some of the resulting cepstral coefficients as respective said features.
17. A method according to Claim 16 including, for at least one cepstral coefficient, determining first difference related features based on values of the said one coefficient as derived from a number of the said sets of power values which occur in different said intervals, and using the first difference related features as features of the said sets of features.
18. A method according to Claim 17 including deriving a number of second difference related features based on a plurality of first difference related features derived from different values of one cepstral coefficient, and using the second difference related features as features of the said sets of features.
19. A method according to Claim 17 or 18 wherein the first difference related feature is derived by calculating the difference between alternate values of a cepstral coefficient in a series of values derived from the power values in a series of the said intervals.
20. A method according to Claim 18, or Claim 19 insofar as dependent on Claim 18, wherein the second difference related feature is derived by calculating the difference between alternate first differences in a series derived from the series of cepstral coefficient values.
21. A method according to any of Claims 17 to 20 wherein each first difference related feature is derived by weighting values of a cepstral coefficient in a series of values derived from the power values in a series of the said intervals using respective multipliers which decrease to a minimum and increase after the minimum, forming the sum of the coefficients weighted with decreasing multipliers, forming the sum of the coefficients weighted with increasing multipliers, and forming the difference between the two sums to give the first difference related feature.
22. A method according to any of Claims 18 to 21 wherein each second difference related feature is derived by weighting first difference related features in a series derived from the series of cepstral coefficients using respective multipliers which decrease to a minimum and increase after the minimum, forming the sum of first difference related features weighted with decreasing multipliers, forming the sum of first difference related features weighted with increasing multipliers, and forming the difference between the two sums to give the second difference related features.
23. A method according to any preceding claim wherein the finite state machines are Hidden Markov Models.
24. Apparatus for verifying that a sequence of operations originated from a specific entity, comprising
means for storing data specifying first and second finite state machines, the data for the first machine having been derived from sets of features of the results of a sequence of operations originated by a plurality of entities, the data for the second machine having been derived from sets of features of the results of the same sequence of operations originated by a specific entity, means for extracting a test sequence of sets of features from the results of a sequence of operations which are alleged to have been originated by the said specific entity,
means for matching the said test sequence against the first and second said machines, respectively, and
means for comparing results from the matching means to indicate whether the test sequence was originated by the said specific entity.
25. Apparatus according to Claim 24 wherein the entity is a person and the sequences of operations are the utterances of sequences of sounds.
26. Apparatus according to Claim 24 wherein the entity is a person and the sequences of operations are the signing of signatures.
27. Apparatus according to Claim 25 wherein the finite state machines are Hidden Markov Models.
28. Apparatus according to Claim 27 wherein the data specifying the Hidden Markov Models comprises a set of pairs of means (m) and variances (σ2) for each state, the pairs of means and variances having been derived from respective distributions of values each representing a spectral component of a sound in the sequences used to derive the Hidden Markov Models.
29. Apparatus according to Claim 28 wherein the means and variances are calculated from cepstral coefficients derived from the said spectral components and first and/or second difference related features derived from the cepstral coefficients.
30. Apparatus according to Claim 29 wherein the zero and first cepstral coefficients are omitted when deriving the means and variances.
31. Apparatus according to Claim 28, 29 or 30 wherein the means and variances are modified means and modified variances calculated using a gradient algorithm in an alpha-net comprising the two finite state machines.
32. Apparatus according to Claim 31 wherein the modified mean mj(t) and the modified standard deviation σj(t) are calculated from

mj(t) = mj(t - 1) - ξ sign(∂J/∂mj), and

σj(t) = σj(t - 1) - ξ sign(∂J/∂σj)
where mj(t - 1) and σj(t - 1) represent the previous mean and standard deviation,
ξ is a constant, and
J is a score value derived from the likelihood output of one of the first and second finite state machines.
33. Apparatus according to Claim 31 wherein the modified mean mj(t) and the modified standard deviation σj(t) are calculated from

mj(t) = mj(t - 1) - ξ ∂J/∂mj, and

σj(t) = σj(t - 1) - ξ ∂J/∂σj
where mj(t - 1) and σj(t - 1) represent the previous mean and standard deviation,
ξ is a constant, and
J is a score value derived from the likelihood output of one of the first and second finite state machines.
34. A method of verifying that operations originate from a specific entity, comprising the steps of
extracting test results of the operations,
matching the said test results against a first model derived from results of the same operations when originated by a plurality of entities,
matching the said test results against a second model derived from results of the same operations when originated by the specific entity, and
comparing the results of the matching steps to indicate whether the test results originated from the specific entity.
35. A method of speaker verification substantially as hereinbefore described with reference to the accompanying drawings.
36. Apparatus for speaker verification substantially as hereinbefore described with reference to the accompanying drawings.
PCT/GB1991/001681 1990-10-03 1991-09-30 Methods and apparatus for verifying the originator of a sequence of operations WO1992006468A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU86496/91A AU665745B2 (en) 1990-10-03 1991-09-30 Methods and apparatus for verifying the originator of a sequence of operations
US08/039,054 US5526465A (en) 1990-10-03 1991-09-30 Methods and apparatus for verifying the originator of a sequence of operations

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB9021489.1 1990-10-03
GB909021489A GB9021489D0 (en) 1990-10-03 1990-10-03 Methods and apparatus for verifying the originator of a sequence of operations

Publications (1)

Publication Number Publication Date
WO1992006468A1 true WO1992006468A1 (en) 1992-04-16

Family

ID=10683160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1991/001681 WO1992006468A1 (en) 1990-10-03 1991-09-30 Methods and apparatus for verifying the originator of a sequence of operations

Country Status (5)

Country Link
US (1) US5526465A (en)
AU (1) AU665745B2 (en)
GB (2) GB9021489D0 (en)
WO (1) WO1992006468A1 (en)
ZA (1) ZA917886B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5679001A (en) * 1992-11-04 1997-10-21 The Secretary Of State For Defence In Her Britannic Majesty's Government Of The United Kingdom Of Great Britain And Northern Ireland Children's speech training aid
US5799278A (en) * 1995-09-15 1998-08-25 International Business Machines Corporation Speech recognition system and method using a hidden markov model adapted to recognize a number of words and trained to recognize a greater number of phonetically dissimilar words.
US6125284A (en) * 1994-03-10 2000-09-26 Cable & Wireless Plc Communication system with handset for distributed processing

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2105034C (en) * 1992-10-09 1997-12-30 Biing-Hwang Juang Speaker verification with cohort normalized scoring
WO1995005656A1 (en) * 1993-08-12 1995-02-23 The University Of Queensland A speaker verification system
DE4416598A1 (en) * 1994-05-11 1995-11-16 Deutsche Bundespost Telekom Securing telecommunication connection against unauthorised use
JPH0973440A (en) * 1995-09-06 1997-03-18 Fujitsu Ltd System and method for time-series trend estimation by recursive type neural network in column structure
US5960391A (en) * 1995-12-13 1999-09-28 Denso Corporation Signal extraction system, system and method for speech restoration, learning method for neural network model, constructing method of neural network model, and signal processing system
US6073101A (en) * 1996-02-02 2000-06-06 International Business Machines Corporation Text independent speaker recognition for transparent command ambiguity resolution and continuous access control
US6137863A (en) * 1996-12-13 2000-10-24 At&T Corp. Statistical database correction of alphanumeric account numbers for speech recognition and touch-tone recognition
US6061654A (en) * 1996-12-16 2000-05-09 At&T Corp. System and method of recognizing letters and numbers by either speech or touch tone recognition utilizing constrained confusion matrices
US5924070A (en) * 1997-06-06 1999-07-13 International Business Machines Corporation Corporate voice dialing with shared directories
US5897616A (en) 1997-06-11 1999-04-27 International Business Machines Corporation Apparatus and methods for speaker verification/identification/classification employing non-acoustic and/or acoustic models and databases
US6219453B1 (en) 1997-08-11 2001-04-17 At&T Corp. Method and apparatus for performing an automatic correction of misrecognized words produced by an optical character recognition technique by using a Hidden Markov Model based algorithm
US6154579A (en) * 1997-08-11 2000-11-28 At&T Corp. Confusion matrix based method and system for correcting misrecognized words appearing in documents generated by an optical character recognition technique
US5913192A (en) * 1997-08-22 1999-06-15 At&T Corp Speaker identification with user-selected password phrases
US6141661A (en) * 1997-10-17 2000-10-31 At&T Corp Method and apparatus for performing a grammar-pruning operation
US6205428B1 (en) 1997-11-20 2001-03-20 At&T Corp. Confusion set-base method and apparatus for pruning a predetermined arrangement of indexed identifiers
US6208965B1 (en) 1997-11-20 2001-03-27 At&T Corp. Method and apparatus for performing a name acquisition based on speech recognition
US6122612A (en) * 1997-11-20 2000-09-19 At&T Corp Check-sum based method and apparatus for performing speech recognition
US6205261B1 (en) 1998-02-05 2001-03-20 At&T Corp. Confusion set based method and system for correcting misrecognized words appearing in documents generated by an optical character recognition technique
EP0956818B1 (en) * 1998-05-11 2004-11-24 Citicorp Development Center, Inc. System and method of biometric smart card user authentication
US6421453B1 (en) * 1998-05-15 2002-07-16 International Business Machines Corporation Apparatus and methods for user recognition employing behavioral passwords
US6400805B1 (en) 1998-06-15 2002-06-04 At&T Corp. Statistical database correction of alphanumeric identifiers for speech recognition and touch-tone recognition
US7937260B1 (en) 1998-06-15 2011-05-03 At&T Intellectual Property Ii, L.P. Concise dynamic grammars using N-best selection
US6157731A (en) * 1998-07-01 2000-12-05 Lucent Technologies Inc. Signature verification method using hidden markov models
US6233557B1 (en) 1999-02-23 2001-05-15 Motorola, Inc. Method of selectively assigning a penalty to a probability associated with a voice recognition system
IL129451A (en) 1999-04-15 2004-05-12 Eli Talmor System and method for authentication of a speaker
US7590538B2 (en) * 1999-08-31 2009-09-15 Accenture Llp Voice recognition system for navigating on the internet
US6526544B1 (en) * 1999-09-14 2003-02-25 Lucent Technologies Inc. Directly verifying a black box system
US6711699B1 (en) * 2000-05-04 2004-03-23 International Business Machines Corporation Real time backup system for information based on a user's actions and gestures for computer users
US6961703B1 (en) * 2000-09-13 2005-11-01 Itt Manufacturing Enterprises, Inc. Method for speech processing involving whole-utterance modeling
US7143044B2 (en) * 2000-12-29 2006-11-28 International Business Machines Corporation Translator for infants and toddlers
GB2372366A (en) * 2001-02-16 2002-08-21 Imagination Tech Ltd Speaker verification
WO2002067245A1 (en) * 2001-02-16 2002-08-29 Imagination Technologies Limited Speaker verification
US20040104062A1 (en) * 2002-12-02 2004-06-03 Yvon Bedard Side panel for a snowmobile
US7240007B2 (en) * 2001-12-13 2007-07-03 Matsushita Electric Industrial Co., Ltd. Speaker authentication by fusion of voiceprint match attempt results with additional information
US20030149881A1 (en) * 2002-01-31 2003-08-07 Digital Security Inc. Apparatus and method for securing information transmitted on computer networks
US7143073B2 (en) * 2002-04-04 2006-11-28 Broadcom Corporation Method of generating a test suite
US8171298B2 (en) * 2002-10-30 2012-05-01 International Business Machines Corporation Methods and apparatus for dynamic user authentication using customizable context-dependent interaction across multiple verification objects
JP2004191705A (en) * 2002-12-12 2004-07-08 Renesas Technology Corp Speech recognition device
TWI223791B (en) * 2003-04-14 2004-11-11 Ind Tech Res Inst Method and system for utterance verification
US7363223B2 (en) * 2004-08-13 2008-04-22 International Business Machines Corporation Policy analysis framework for conversational biometrics
US7584098B2 (en) * 2004-11-29 2009-09-01 Microsoft Corporation Vocabulary-independent search of spontaneous speech
US8239200B1 (en) * 2008-08-15 2012-08-07 Google Inc. Delta language model
DK2364495T3 (en) * 2008-12-10 2017-01-16 Agnitio S L Method of verifying the identity of a speaking and associated computer-readable medium and computer
US9015093B1 (en) 2010-10-26 2015-04-21 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
US8775341B1 (en) 2010-10-26 2014-07-08 Michael Lamport Commons Intelligent control with hierarchical stacked neural networks
US10438591B1 (en) 2012-10-30 2019-10-08 Google Llc Hotword-based speaker recognition
US9384738B2 (en) 2014-06-24 2016-07-05 Google Inc. Dynamic threshold for speaker verification
US9715874B2 (en) * 2015-10-30 2017-07-25 Nuance Communications, Inc. Techniques for updating an automatic speech recognition system using finite-state transducers

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0099476A2 (en) * 1982-06-25 1984-02-01 Kabushiki Kaisha Toshiba Identity verification system
EP0121248A1 (en) * 1983-03-30 1984-10-10 Nec Corporation Speaker verification system and process

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4783804A (en) * 1985-03-21 1988-11-08 American Telephone And Telegraph Company, At&T Bell Laboratories Hidden Markov model speech recognition arrangement
US4910782A (en) * 1986-05-23 1990-03-20 Nec Corporation Speaker verification system
US5033087A (en) * 1989-03-14 1991-07-16 International Business Machines Corp. Method and apparatus for the automatic determination of phonological rules as for a continuous speech recognition system
US5311601A (en) * 1990-01-12 1994-05-10 Trustees Of Boston University Hierarchical pattern recognition system with variable selection weights
US5293452A (en) * 1991-07-01 1994-03-08 Texas Instruments Incorporated Voice log-in using spoken name input

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0099476A2 (en) * 1982-06-25 1984-02-01 Kabushiki Kaisha Toshiba Identity verification system
EP0121248A1 (en) * 1983-03-30 1984-10-10 Nec Corporation Speaker verification system and process

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ICASSP '88/1988 International Conference on Acoustics, Speech and Signal Processing, 11-14 April 1988, New York, US, volume 1, IEEE (New York, US) G. Velius: "Variants of cepstrum based speaker identity verification", pages 583-586, see paragraph 5: "Experiment II: Feature weighting and distance measures" *
ICASSP '88/1988 International Conference on Acoustics, Speech and Signal Processing, 11-14 April 1988, New York, US, volume 1, IEEE (New York, US) N. Tishby: "Information theoretic factorization of speaker and language in hidden Markov models, with application to speaker recognition", pages 87-90, see paragraph 5 "Application to speaker verification" *
ICASSP '89/1989 International Conference on Acoustics, Speech and Signal Processing, 23-26 May 1989, Glasgow, GB, volume 1, IEEE (New York, US) J.M. Naik et al.: "Speaker verification over long distance telephone lines", pages 524-527, see page 527 "Speaker verification using hidden Markov modeling" *
ICASSP '90/1990 International Conference on Acoustics, Speech and Signal Processing, 3-6 April 1990, Albuquerque US, volume 1, IEEE (New York, US) H. Gish: "Robust discrimination in automatic speaker identification", pages 289-292, see paragraph II: "The basis ISIS model" *
IEEE Communications Magazine, volume 28, no. 1, January 1990 (New York, US) J.M. Naik: "Speaker verification: A tutorial", pages 42-48, see pages 43,44: "Pattern matching"; pages 45,46: "An example of a speaker verification system" *
IEEE Transactions on Acoustics, Speech and Signal Processing, volume 36, no. 6, June 1988 (New York, US) F.K. Soong et al.: "On the use of instantaneous and transitional spectral information in speaker recognition", pages 871-879, see paragraph I: "Introduction" *
The Journal of the Acoustical Society of America, volume 46, no. 4, part 2, 1969 (New York, US) J.E. Luck: "Automatic speaker verification using cepstral measurements", pages 1026-1032, see page 1029, right-hand column, lines 11-17 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5679001A (en) * 1992-11-04 1997-10-21 The Secretary Of State For Defence In Her Britannic Majesty's Government Of The United Kingdom Of Great Britain And Northern Ireland Children's speech training aid
US5791904A (en) * 1992-11-04 1998-08-11 The Secretary Of State For Defence In Her Britannic Majesty's Government Of The United Kingdom Of Great Britain And Northern Ireland Speech training aid
US6125284A (en) * 1994-03-10 2000-09-26 Cable & Wireless Plc Communication system with handset for distributed processing
US5799278A (en) * 1995-09-15 1998-08-25 International Business Machines Corporation Speech recognition system and method using a hidden markov model adapted to recognize a number of words and trained to recognize a greater number of phonetically dissimilar words.

Also Published As

Publication number Publication date
GB2248513A (en) 1992-04-08
US5526465A (en) 1996-06-11
ZA917886B (en) 1992-10-28
GB9021489D0 (en) 1990-11-14
GB9120698D0 (en) 1991-11-13
GB2248513B (en) 1994-08-31
AU8649691A (en) 1992-04-28
AU665745B2 (en) 1996-01-18

Similar Documents

Publication Publication Date Title
US5526465A (en) Methods and apparatus for verifying the originator of a sequence of operations
US9646614B2 (en) Fast, language-independent method for user authentication by voice
US6278970B1 (en) Speech transformation using log energy and orthogonal matrix
US6760701B2 (en) Subword-based speaker verification using multiple-classifier fusion, with channel, fusion, model and threshold adaptation
US6195634B1 (en) Selection of decoys for non-vocabulary utterances rejection
EP0744734B1 (en) Speaker verification method and apparatus using mixture decomposition discrimination
US6401063B1 (en) Method and apparatus for use in speaker verification
US6249760B1 (en) Apparatus for gain adjustment during speech reference enrollment
EP0647344B1 (en) Method for recognizing alphanumeric strings spoken over a telephone network
US6922668B1 (en) Speaker recognition
EP0953972A2 (en) Simultaneous speaker-independent voice recognition and verification over a telephone network
JPS62231997A (en) Voice recognition system and method
EP1417677B1 (en) Method and system for creating speaker recognition data, and method and system for speaker recognition
US8032380B2 (en) Method of accessing a dial-up service
US6519563B1 (en) Background model design for flexible and portable speaker verification systems
CA2304747C (en) Pattern recognition using multiple reference models
US20080071538A1 (en) Speaker verification method
Pandey et al. Multilingual speaker recognition using ANFIS
Karthikeyan et al. Hybrid machine learning classification scheme for speaker identification
Kasuriya et al. Comparative study of continuous hidden Markov models (CHMM) and artificial neural network (ANN) on speaker identification system
Thakur et al. Speaker Authentication Using GMM-UBM
Emori et al. Vocal tract length normalization using rapid maximum‐likelihood estimation for speech recognition
WO1997037345A1 (en) Speech processing
JPH0247758B2 (en)

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE