

Publication number: US 3700815 A
Publication type: Grant
Publication date: Oct 24, 1972
Filing date: Apr 20, 1971
Priority date: Apr 20, 1971
Also published as: CA938725A1
Inventors: George Rowland Doddington, James Loton Flanagan, Robert Carl Lummis
Original Assignee: Bell Telephone Labor Inc
Automatic speaker verification by non-linear time alignment of acoustic parameters
US 3700815 A
Speaker verification, as opposed to speaker identification, is carried out by matching a sample of a person's speech with a reference version of the same text derived from prerecorded samples of the same speaker. Acceptance or rejection of the person as the claimed individual is based on the concordance of a number of acoustic parameters, for example, formant frequencies, pitch period and speech energy. The degree of match is assessed by time aligning the sample and reference utterance. Time alignment is achieved by a nonlinear process which maximizes the similarity between the sample and reference through a piece-wise linear continuous transformation of the time scale. The extent of time transformation that is required to achieve maximum similarity also influences the decision to accept or reject the identity claim.
Description

United States Patent [11] 3,700,815
Doddington et al. [45] Oct. 24, 1972

[54] AUTOMATIC SPEAKER VERIFICATION BY NON-LINEAR TIME ALIGNMENT OF ACOUSTIC PARAMETERS

[72] Inventors: George Rowland Doddington, Richardson, Tex.; James Loton Flanagan, Somerset; Robert Carl Lummis, Berkeley Heights, both of N.J.
[73] Assignee: Bell Telephone Laboratories, Incorporated, Murray Hill, N.J.
[22] Filed: April 20, 1971
[21] Appl. No.: 135,697

[52] U.S. Cl. 179/1 SA
[51] Int. Cl. G10l 1/02
[58] Field of Search: 179/1 SA, 1 SB, 15.55 R, 15.55

[56] References Cited
UNITED STATES PATENTS
3,509,280  4/1970  Jones   179/1 SB
3,525,811  8/1970  Trice   179/1 SB
3,466,394  9/1969  French  179/1 SB

Primary Examiner: Kathleen H. Claffy
Assistant Examiner: Jon Bradford Leaheey
Attorneys: R. J. Guenther and William L. Keefauver

12 Claims, 10 Drawing Figures




[Drawing sheet: PATENTED Oct. 24, 1972 — Sheet 2 of 5, FIG. 7]





[Drawing sheet: FIG. 7 block diagram — square-root, multiply, divide, sensitivity-measure, and gating units computing the normalized correlation; Sheet 5 of 5 — parameter traces for the phrase "We were away a year ago."]


AUTOMATIC SPEAKER VERIFICATION BY NON-LINEAR TIME ALIGNMENT OF ACOUSTIC PARAMETERS

This invention relates to speech signal analysis and, more particularly, to a system for verifying the identity of an individual by reference to acoustic characteristics of his speech.

BACKGROUND OF THE INVENTION Many business transactions might be conducted by voice over a telephone if the identity of a caller could be verified. It might, for example, be convenient if a person could telephone his bank and ascertain the balance of his account. He might dial the bank and enter both his identification number and his request by keying the dial. A computer could (via synthetic speech) ask him to speak his verification phrase. If a verification of sufficiently high confidence was achieved, the machine would proceed to read out the requested balance. Other instances are apparent where verification by voice would prove useful.

From the practical point of view, the problem of verification appears both more important and more tractable than the problem of absolute identification. The former consists of deciding whether to accept or reject an identity claim made by an unknown voice. In identification, the problem is to decide which member of a reference set the unknown voice most resembles. In verification, the expected probability of error tends to remain constant regardless of the size of the user population, whereas in identification the expected probability of error tends to unity as the population becomes large. In the usual context of the verification problem one has a closed set of cooperative customers, who wish to be verified and who are willing to pronounce prescribed code phrases (tailored to the individual voices if necessary). The machine may ask for repeats and might adjust its acceptance threshold in accordance with the importance of the transaction. Further, the machine may control the average mix of the two kinds of errors it can make: i.e., accept a false speaker (a miss), or reject a true speaker (a false alarm).

DESCRIPTION OF THE PRIOR ART

Verification by a human listener is a possibility, but it is generally inconvenient and it occupies talent that might be better applied otherwise. Also, present indications are that auditory verification is not as reliable as machine verification.

Accordingly, several proposals have been made for the automatic recognition of speech sounds based entirely on acoustic information. These have shown some degree of promise, provided that the sample words to be recognized or identified are limited in number. Most of these recognition techniques are based on individual words, with each word being compared to a corresponding word. Some work has been done on comparing selected parameters in a sample utterance, for example, peaks and valleys of pitch periods, against corresponding reference data.

SUMMARY OF THE INVENTION

It is, accordingly, an object of this invention to verify the identity of a human being on the basis of certain unique acoustic cues in his speech. In accordance with the invention, verification of a speaker is achieved by comparing the characteristic way in which he utters a test sentence with a previously prepared utterance of the same sentence. A number of different tests are made on the speech signals and a binary decision is then made; the identity claim of the talker is either rejected or accepted.

The problem may be defined as follows. A person asserts a certain identity and then makes a sample utterance of a special test phrase. Previously prepared information about the voice of the person whose identity is claimed, i.e., a reference utterance, embodies the typical way in which that person utters the test phrase, as well as measures of the variability to be expected in separate repetitions of the phrase by that person. The sample utterance is compared with the reference information and a decision is rendered as to the veracity of the identity claim. For the sake of exposition, it is convenient to divide the verification technique into three basic operations: time registration, construction of a reference, and measurement of the distance from the reference to a particular sample utterance.

Time registration is the process in which the time axis of a sample time function is warped so as to make the function most nearly similar to the unwarped version of a reference function. The warped time scale may be specified by any continuous transformation. One suitable function is a piece-wise linear continuous function of unwarped time. In this case the warping is uniquely determined by the two coordinates of each breakpoint in the piece-wise linear function. Typically, 10 breakpoints may be used for warping a two-second-long function, so the registration task amounts to the optimal assignment of values to 20 parameters.
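The piece-wise linear warp described above can be sketched as follows; this is a minimal Python illustration, not the patent's apparatus, and all names are illustrative. Each breakpoint contributes two free parameters (its t and τ coordinates), so 10 breakpoints give the 20 parameters mentioned in the text.

```python
def piecewise_linear_warp(breakpoints):
    """Return a warp function tau(t) defined by sorted (t, tau) pairs.

    The warp is linear between consecutive breakpoints, so each
    breakpoint contributes two free parameters.
    """
    def tau(t):
        if t <= breakpoints[0][0]:
            return breakpoints[0][1]
        for (t0, w0), (t1, w1) in zip(breakpoints, breakpoints[1:]):
            if t <= t1:
                # linear interpolation between adjacent breakpoints
                frac = (t - t0) / (t1 - t0)
                return w0 + frac * (w1 - w0)
        return breakpoints[-1][1]
    return tau

# A toy 3-breakpoint warp of a 2 s span that stretches the first half.
warp = piecewise_linear_warp([(0.0, 0.0), (1.0, 1.2), (2.0, 2.0)])
```

In practice 10 interior breakpoints over a two-second utterance would be used, and the breakpoint coordinates would be the quantities adjusted during registration.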

The coefficient of correlation between warped sample and unwarped reference may be used as one index of the similarity of the two functions. The 20 warping parameters are iteratively modified to maximize the correlation coefficient. One suitable technique is the method of steepest ascent. That is, in every iteration, each of the 20 parameters is incremented by an amount proportional to the partial derivative of the correlation coefficient with respect to that parameter.
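The steepest-ascent iteration above can be sketched as follows; this toy Python version estimates the partial derivatives numerically and uses a one-parameter objective standing in for the 20-parameter correlation coefficient. The function names and the finite-difference scheme are assumptions for illustration, not the patent's procedure.

```python
def steepest_ascent_step(params, objective, step=0.1, eps=1e-4):
    """One iteration: increment each parameter in proportion to the
    (numerically estimated) partial derivative of the objective."""
    base = objective(params)
    grads = []
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += eps
        grads.append((objective(bumped) - base) / eps)
    return [p + step * g for p, g in zip(params, grads)]

# Toy objective with a single maximum at p = 1, standing in for the
# correlation coefficient as a function of the warp parameters.
f = lambda p: -(p[0] - 1.0) ** 2
p = [0.0]
for _ in range(200):
    p = steepest_ascent_step(p, f)
```

After repeated steps the parameter climbs to the maximizer, mirroring how the warp parameters climb toward maximum correlation.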

Success of this procedure hinges on the avoidance of certain degenerate outcomes. Accordingly, several constraints on the steepest ascent iteration process are employed. In effect, these constraints prevent the original function from being distorted too severely, and prevent unreasonably large steps on any one iteration.

A reference phrase is formed by collecting a number of independent utterances of the phrase by the same speaker. Each is referred to as a "specimen" utterance. A typical phrase which has been used in practice is "We were away a year ago." Each utterance is analyzed to yield, for an all-voiced utterance such as this one, five control functions (so called because they can be used to control a formant synthesizer to generate a signal similar to the original voice signal). It has been found that gain, pitch period, and first, second, and third formant frequencies are satisfactory as control functions. The gain function is scaled to have a particular peak value independent of the talking level.

The reference consists of a version of each of the five control functions chosen to represent a typical utterance by that speaker. By convention, the length of the reference is always the same; a value of 1.9 seconds may be used as the standard length. Any value may be used that is not grossly different from the natural length of the utterance.

The reference functions are constructed by averaging together the specimen functions after each has been time-warped to bring them all into mutual registration with each other. One way this mutual registration has been achieved is as follows. One of the five control functions is singled out to guide the registration. This control function is called the guide function. Either gain or second formant may be used for this purpose. The guide function from each specimen is linearly expanded (or contracted) to the desired reference length, and then all of the expanded guide functions are averaged together. This average is the first trial reference for the control function serving as guide. Each of the specimen guide functions is then registered to the trial reference by non-linear time-warping, and a new trial reference is generated by averaging the warped specimens. This process is continued iteratively, i.e., warp each specimen guide function for registration with the current trial reference, and then make a new trial reference by averaging the warped guide functions, until the reference does not change significantly. The other four control functions for each specimen utterance are then warped by the final guide warping function for that utterance, and then each control function is averaged across all specimens to form a reference. The reference control functions are stored for future use, along with computed variance values which indicate the reliability of the function as a standard in selected intervals of the utterance.
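The iterate, warp, and re-average loop above might be sketched as below. This is a hedged reading of the text: the `register` callable is a hypothetical stand-in for the nonlinear time-warping step, the guide functions are assumed already expanded to the standard reference length, and the convergence test is one simple interpretation of "does not change significantly."

```python
def average(funcs):
    # point-by-point average of equal-length specimen functions
    return [sum(col) / len(col) for col in zip(*funcs)]

def build_reference(guides, register, n_iter=5):
    """guides: specimen guide functions, already linearly expanded to
    the standard reference length.  register(spec, ref) must return
    the specimen nonlinearly warped into registration with ref."""
    ref = average(guides)                     # first trial reference
    for _ in range(n_iter):
        warped = [register(g, ref) for g in guides]
        new_ref = average(warped)
        if max(abs(a - b) for a, b in zip(ref, new_ref)) < 1e-6:
            break                             # reference has converged
        ref = new_ref
    return ref
```

The final guide warp for each specimen would then be applied to that specimen's other four control functions before averaging them into references.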

When a sample of the standard utterance is presented for verification, a distance value is computed that is a measure of the unlikelihood that that sample would have been generated by the person whose identity is claimed. Distances are always positive numbers; a distance value of zero means that the utterance is identical to the reference in every detail.

The sample is first analyzed to generate the five control functions in terms of which the reference is stored. The control functions are then brought into temporal registration with the reference. This is done by choosing one of the control functions (e.g., gain) to serve as the guide. The guide function of the sample utterance is registered with its counterpart in the reference by non-linear warping, and the other control functions are then warped in an identical way.

After registration of the control functions, a variety of distances between the sample and reference utterance are measured. Included are measures of the difference in local average, local linear variation, and local quadratic variation for all control functions; local and global correlation coefficients between sample and reference control functions; and measures that represent the difficulty of time registration. In forming these separate measures, various time segments of the utterance are weighted in proportion to the constancy of the given measure in that time segment across the set of warped specimens. These measures are then combined to form a single overall distance that represents the degree to which the sample utterance differs from the reference.

The verification decision is based on the single overall distance. If it is less than a pre-determined criterion, the claimed identity is accepted ("verified"); if it is greater than the criterion, the identity claim is rejected. In addition, an indeterminate zone may be established around the criterion value within which neither definite decision would be rendered. In this event, additional information about the person is sought.
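The decision rule, including the indeterminate zone, reduces to a simple comparison; a minimal sketch with illustrative names:

```python
def decide(distance, criterion, zone=0.0):
    """Accept, reject, or defer an identity claim from its overall
    distance.  zone is the half-width of the indeterminate band
    around the criterion value."""
    if distance < criterion - zone:
        return "accept"
    if distance > criterion + zone:
        return "reject"
    return "indeterminate"  # seek additional information
```

With `zone=0.0` this collapses to the plain binary accept/reject decision.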

BRIEF DESCRIPTION OF THE DRAWING The invention will be fully apprehended from the following detailed description of a preferred illustrative embodiment thereof taken in connection with the appended drawings.

In the drawings:

FIG. 1 is a block schematic diagram of a speech verification system in accordance with the invention;

FIG. 2 illustrates an alternative analyzer arrangement;

FIG. 3 illustrates graphically the registration technique employed in the practice of the invention;

FIG. 4 is a chart which illustrates the dependence of two kinds of error ratios on the choice of threshold;

FIG. 5 is a block schematic diagram of a time adjustment configuration which may be employed for nonlinearly warping parameter values;

FIG. 6 illustrates a criterion for maximizing similarity of acoustic parameters in accordance with the invention;

FIG. 7 illustrates a number of distance measures used in establishing an identity between two speech samples; and

FIGS. 8A, 8B and 8C are graphic illustrations of speech parameters of an unknown and a reference talker. FIG. 8A illustrates the time-normalized parameters before the nonlinear time-warping procedure; FIG. 8B illustrates parameters for a reference and specimen utterance which match after time registration using the second formant as the guide function; and FIG. 8C illustrates a fully time-normalized set of parameters for an impostor, i.e., a no-match condition.

DETAILED DESCRIPTION A system for verifying an individual's claimed identity is shown schematically in FIG. 1. A library of reference utterances is established to maintain a voice standard for each individual subscriber to the system. A later claim of identity is verified by reference to the appropriate stored reference utterance. Accordingly, an individual speaks a reference sentence, for example, by way of a microphone at a subscriber location, or over his telephone to a central location (indicated generally by the reference numeral 10). Although any reference phrase may be used, the phrase should be capable of representing a number of prosodic characteristics and variations of his speech. Since vowel or voiced sounds contain a considerable number of such features, the reference sentence "We were away a year ago." has been used in practice. This phrase is effective, in part, because of its lack of nasal sounds and its totally voiced character. Moreover, it is long enough to require more than passing attention to temporal registration, and is short enough to afford economical analysis and storage.

Whatever the phrase spoken by the individual to establish a standard, it is delivered to speech analyzer 11, of any known construction, wherein a number of different acoustic parameters are derived to represent it. For example, individual formant frequencies, amplitudes, and pitch, at the Nyquist rate are satisfactory. These speech parameters are commonly used to synthesize speech in vocoder apparatus and the like. One entirely suitable speech signal analyzer is described in detail in a copending application of L. R. Rabiner and R. W. Schafer, Ser. No. 872,050, filed Oct. 29, 1969. In essence, analyzer 11 includes individual channels for identifying formant frequencies F1, F2, F3, pitch period P, and gain G control signals. In addition, fricative identifying signals may be derived if desired.

In order that variations in the manner in which the individual speaks the phrase may be taken into account, it is preferable to have him repeat the reference sentence a number of times in order that an average set of speech parameters may be prepared. It is convenient to analyze the utterance as it is spoken, and to adjust the duration of the utterance to a standard length T. Typically, a two-second sample is satisfactory. Each spoken reference sentence therefore is either stretched or contracted in apparatus 12 to adjust it to the standard duration. Each adjusted set of parameters is then stored, either as analog or, after conversion, as digital signals, for example, in unit 12. When all of the test utterances have been analyzed and brought into mutual time registration, an average set of parameters is developed in averaging apparatus 13. The single resultant set of reference parameter values is then stored for future use in storage unit 15.
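The stretch-or-contract adjustment to the standard duration amounts to linear resampling of each parameter track; a minimal sketch (the names are illustrative, and apparatus 12 in the patent is hardware, not code):

```python
def stretch_to_standard(params, n_standard):
    """Linearly stretch or contract a sampled parameter track to a
    standard number of points (e.g. representing a 2 s duration)."""
    n = len(params)
    if n == 1:
        return [params[0]] * n_standard
    out = []
    for i in range(n_standard):
        # position of output sample i on the input time axis
        x = i * (n - 1) / (n_standard - 1)
        lo = int(x)
        hi = min(lo + 1, n - 1)
        out.append(params[lo] + (x - lo) * (params[hi] - params[lo]))
    return out
```

Each of the five control functions of each specimen would be resampled this way before averaging.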

In addition, a set of variance signals is prepared and stored in unit 15. Variance values are developed, in the manner described hereinafter, for parameters in each of a number of time segments within the span of the reference utterance to indicate the extent of any difference in the manner in which the speaker utters that segment of the test phrase. Hence, variance values provide a measure of the reliability with which parameters in different segments may be used as a standard.

It is evident that a non-vocal identification of each individual is also stored, preferably in library store 15. The identification may be either in the form of a separate address or some other key to the storage location of the reference utterance for each individual. Any suitable form of identification may be used.

First, the individual identifies himself, for example, by means of his name and address or his credit card number. This data is entered into reading unit 16, of any desired construction, in order that a request to verify may be initiated. Secondly, upon command, i.e., a ready light from unit 17, the individual speaks the reference sentence. These operations are indicated generally by block 18 in FIG. 1. The sample voice signal is delivered to analyzer 19 where it is broken down to develop parameter values equivalent to those previously stored for him. Analyzer 19, accordingly, should be of identical construction to analyzer 11, and preferably, is located physically at the central processing station. The resultant set of sample parameter values is thereupon delivered to unit 17 to initiate all subsequent operations.

Since it is unlikely that the sample utterance will be in time registration with the reference sample, it is necessary to adjust its time scale to bring it into temporal alignment with the reference. This operation is carried out in time adjustment apparatus 20. In essence, iterative processing is employed to maximize the similarity between the specimen parameters and the reference parameters. Similarity may be measured by the coefficient of correlation between the sample and reference. Sample parameters are initially adjusted to start and stop in registry with the reference. It is also in accordance with the invention to match the time spread of variables within the speech sample. Internal time registration is achieved by a nonlinear process which maximizes the similarity between the sample and the reference by way of a monotonic continuous transformation of time.

Accordingly, values of the sample signal parameters s(t), alleged to be the same as the reference signal r(t), are delivered to adjustment apparatus 20. They are remapped, i.e., converted, by a substitution process to values s(τ), where

τ(t) = a + bt + q(t).   (1)

In the equation, coefficients a and b are determined so as to cause the end points of the sample to coincide with those of the reference when q(t) is zero. The function q(t) defines the character of the time scale transformation between the end points of the utterance. In practice, q(t) may be a continuous piece-wise linear function. The time adjustment operation is illustrated graphically in FIG. 3. A reference function r(t) extends through the period 0 to T. It is noted, however, that the sample function s(t) is considerably shorter in duration. It is necessary, therefore, to stretch it to a duration T. This is done by means of the substitute function τ(t) shown in the third line of the illustration. A so-called gradient climbing procedure may be employed in which the values q_i at times t_i are varied in order that values of q_i and t_i may be found that maximize the normalized correlation I between the reference speech and the sample speech, where

I = ⟨r(t)s(τ)⟩ / [⟨r²(t)⟩^(1/2) ⟨s²(τ)⟩^(1/2)].   (2)

The angle brackets denote a time average value of the enclosed expression. By thus maximizing the correlation between the two, a close match between prominent features in the utterance, e.g., formants, pitch, and intensity values, is achieved.
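The end-point condition that fixes a and b in Equation (1) can be solved directly; a small sketch, under the assumption that the sample onset should map to time 0 and the sample end to time T when q(t) = 0:

```python
def endpoint_coefficients(t_onset, t_end, T):
    """Solve tau(t) = a + b*t for a and b such that, with q = 0,
    tau(t_onset) = 0 and tau(t_end) = T."""
    b = T / (t_end - t_onset)   # linear stretch factor
    a = -b * t_onset            # shift so the onset maps to 0
    return a, b

# e.g. a 1.5 s sample (0.2 s to 1.7 s) stretched onto a 2 s reference
a, b = endpoint_coefficients(0.2, 1.7, 2.0)
```

The remaining nonlinear freedom then resides entirely in the piece-wise linear q(t).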

Details of the time normalization process are described hereinafter with reference to FIG. 5. Suffice it to say at this point that the substitute values of the sample s(τ), together with values of r(t) and variance values, are delivered to measurement apparatus 25. Values of q_i and t_i, which reflect the amount of nonlinear squeezing used to maximize I, are delivered to measurement apparatus 26.

Since the reference speech and the sample speech are now in time registry, it is possible to measure internal similarities between the two. Accordingly, a value is prepared in measurement apparatus 25 which denotes the internal dissimilarities between the two speech signals. Similarly, a measure is prepared in apparatus 26 which denotes the extent of warping required to bring the two into registry. If the dissimilarities are found to be small, it is likely that a match has been found. Yet, if the warping function value is extremely high, there is a likelihood that the match is a false one, resulting solely from extensive registration adjustment. The two measures of dissimilarity are combined in apparatus 27 and delivered to comparison unit 28 wherein a judgment is made in accordance with preestablished rules, i.e., threshold levels, balanced between similarity and inconsistencies. An accept or reject signal is thereupon developed. Ordinarily, this signal is returned to unit 16 to verify or reject the claim of identity made by the speaker.

It is evident that there is redundancy in the apparatus illustrated in FIG. 1. Thus, for example, analyzer 11 is used only to prepare reference samples. It may, of course, be switched as required to analyze identity claim samples. Such an arrangement is illustrated in FIG. 2. Reference and sample information is routed by way of switch 29a to analyzer 11 and delivered by way of switch 29b to the appropriate processing channel. Other redundancies within the apparatus may, of course, be minimized by judicious construction. Moreover, it is evident that all of the operations described may equally well be performed on a computer. All of the steps and all of the apparatus functions may be incorporated in a program for implementation on a general-purpose or dedicated computer. Indeed, in practice, a computer implementation has been found to be most effective. No unusual programming steps are required for carrying out the indicated operations.

FIG. 4 illustrates the manner in which acceptance or rejection of a sample is established. Since absolute discrimination between reference and sample values would require near-perfect matching, it is evident that a compromise must be used. FIG. 4 indicates, therefore, the error rate of the verification procedure as a function of the value of the dissimilarity measure between the reference and sample, taken as a threshold value for acceptance or rejection. A compromise value is selected that then determines the number of true matches that are rejected, i.e., customers whose claim to identity is disallowed, versus the number of impostors whose claim to identity is accepted. Evidently, the crossover point may be adjusted in accordance with the particular identification application.
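The two error ratios of FIG. 4 could be estimated empirically from distance values measured for true speakers and for impostors; a hedged sketch of the bookkeeping (the patent does not spell out such a procedure, and the names are illustrative):

```python
def error_rates(true_dists, impostor_dists, threshold):
    """Fraction of true speakers rejected (false alarm) and of
    impostors accepted (miss) at a given distance threshold."""
    false_alarm = sum(d > threshold for d in true_dists) / len(true_dists)
    miss = sum(d <= threshold for d in impostor_dists) / len(impostor_dists)
    return false_alarm, miss
```

Sweeping the threshold traces out the two curves of FIG. 4; the crossover point is where the two rates are equal.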

FIG. 5 illustrates in block schematic form the operations employed in accordance with the invention for registering the time scales of a reference utterance, in parameter value form, with the sample utterance in like parameter value form. It is evident that the Figure illustrates a hardware implementation. The Figure also constitutes a working flow chart for an equivalent computer program. Indeed, FIG. 5 represents the flow chart of the program that has been used in the practice of the invention. As with the overall system, no unusual programming steps are required to implement the arrangement.

The system illustrated in FIG. 5 corresponds generally to the portion of FIG. 1 depicted in unit 20. Reference values of speech signal parameters r(t) from store 15 are read into store 51 as a set. Similarly, samples from analyzer 19 are stored in unit 52. In order to register the time scale of the samples with those of the reference, samples s(t) are converted into a new set of values s(τ) in transformation function generator 53. This operation is achieved by developing, in generator 54, values of τ(t) as discussed above in Equation (1). Coefficients a and b are determined to cause the end points of the sample utterance, as determined for example by speech detector 55, to coincide with those of the reference when q(t) is zero. Detector 55 issues a first marker signal at the onset of the sample and a second marker signal at the cessation of the utterance. These signals are delivered directly to generator 54. Values of q_i and t_i for the interval between the terminal points of the utterance are initially entered into the system as prescribed sets of constants. These values are delivered to OR gates 56 and 57, respectively, and by way of adders 58 and 59 to the input of generator 54. Accordingly, with these initial values, a set of values τ(t) is developed in generator 54 in accordance with Equation (1). Values of the specimen s(t) are thereupon remapped in generator 53 according to the functions developed in generator 54 to produce a time-warped version of the sample, designated s(τ).

Values of s(τ) are next compared with the reference samples to determine whether or not the transformed specimen values have been brought into satisfactory time alignment with the reference. The normalized correlation I, as defined above in Equation (2), is used for this comparison. Since I is developed on the basis of root mean square values of the sample functions, the necessary algebraic terms are prepared by developing a product in multiplier 60 and summing the resultant over the period T in accumulator 61. This summation establishes the numerator of Equation (2). Similarly, values of r(t) and s(τ) are squared, integrated, and rooted, respectively, in units 62, 63, and 64, and 65, 66, and 67. The two normalized values are delivered to multiplier 68 to form the denominator of Equation (2). Divider network 69 then delivers as its output a value of the normalized correlation function I in accordance with Equation (2). It indicates the similarity of the sample and reference for the previous values of q_i and t_i supplied to generator 54. Accordingly, the partial derivative values of I with respect to q_i and with respect to t_i are prepared and delivered to multipliers 72 and 73, respectively. These values are equalized by multiplying by constants Cq and Ct in order to enhance the subsequent evaluation. These products constitute incremental values of q_i and t_i. The mean squares of the sets of incremental values Δq_i and Δt_i are thereupon compared in gates 74 and 75 to selected small constants U and V. Constants U and V are selected to indicate the required degree of correlation that will assure a low error rate in the ultimate decision. If either of the comparisons is unsatisfactory, incremental values of q_i or t_i, or both, are returned to adders 58 and 59. The previously developed values q_i and t_i are incremented thereby to provide a new set of values as inputs to function generator 54. Values of s(τ) are thereupon developed using the new data and the process is repeated. In essence, the values of q_i at times t_i, as shown in FIG. 3, are individually altered to determine an appropriate set of values for maximizing the correlation between the reference utterance and the altered sample utterance.

FIG. 6 illustrates the correlation operation mathematically. The relationships shown are those used in a computer program implementing the steps discussed above.

Thus, each q_i and t_i is adjusted until a further change in its value produces only a small change in correlation. When the change is sufficiently small, the last generated value is held, e.g., in generator 54. When the sensitivity measures are found to meet the above-discussed criterion, i.e., maximum normalized correlation, gates 74 and 75 deliver the last values of s(τ) by way of AND gate 78 to store 79. These values then are used as the time-registered specimen samples and, in the apparatus of FIG. 1, are delivered to dissimilarity measuring apparatus 25. The values of q_i at times t_i from function generator 54 are similarly delivered, for example, by way of a gate (not shown in the Figure to avoid undue complexity) energized by the output of gate 78, to function measurement apparatus 26 of FIG. 1.

With the sample speech utterance appropriately registered with the averaged reference speech utterance, it is then in accordance with the invention to assess the similarities between the two and to develop a single numerical evaluation of them. The numerical evaluation is used to accept or reject the claim of identity. For convenience, it has been found best to generate a measure of dissimilarity such that a numerical value of zero denotes a perfect match between the two, and progressively higher numerical values denote greater degrees of dissimilarity. Such a value is sometimes termed a distance value.

To provide a satisfactory measure of dissimilarity, the two registered utterances are examined in a variety of different ways and at a variety of different locations in time. The resultant measures are combined to form a single distance value. One convenient way of assessing dissimilarity comprises dividing the interval of the utterances, 0 to T, into N equal intervals. If T = 2 seconds, as discussed in the above example for a typical application, it is convenient to divide the interval into N = 20 equal parts. FIG. 7 illustrates such a subdivision. Each subdivision i is then treated individually and a number of measures of dissimilarity are developed. These are based on (1) differences in average values between the reference speech r(t) and the registered sample s(τ), (2) the differences between linear components of variation of the two functions, (3) differences between quadratic components of variation of the two functions, and (4) the correlation between the two functions. In addition, a correlation coefficient (5) over the entire interval is obtained. Five such evaluations are made for each of the speech signal parameters used in representing the utterances. Thus, in the example of practice discussed herein, five evaluations are made for each of the formants F1, F2, and F3, for the pitch P of the signal, and for its gain G. Accordingly, 25 individual signal values of dissimilarity are produced.
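As one concrete illustration of the per-segment measures, the squared difference in local averages (measure (1) above) might be computed as follows; this is a sketch of a single measure for a single parameter track, not the patent's measurement apparatus:

```python
def segment_distances(r, s, n_segments=20):
    """Split the registered reference r and warped sample s into
    equal segments and return, per segment, the squared difference
    of the local averages."""
    seg = len(r) // n_segments
    dists = []
    for i in range(n_segments):
        rs = r[i * seg:(i + 1) * seg]
        ss = s[i * seg:(i + 1) * seg]
        dists.append((sum(rs) / len(rs) - sum(ss) / len(ss)) ** 2)
    return dists
```

Analogous per-segment values for slope, curvature, and correlation, repeated for each of the five control functions, yield the 25 dissimilarity values described above.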

It has also been found that the reliability of these measures varies between individual segments of the utterances. That is to say, certain speakers appreciably vary the manner in which they deliver certain portions of an utterance but are relatively consistent in delivering other portions. It is preferable, therefore, to use the most reliable segments for matching purposes and to reduce the relative weight of, or eliminate entirely, the measures in those segments known to be unreliable. The degree of reliability in each segment is based on the variance between the reference speech signal in each segment for each of the several reference utterances used in preparing the average reference in unit 13 of FIG. 1. The average values are thus compared, and a value σ², representative of the variance, is developed and stored along with the values r(t) in storage unit 15.

Dissimilarity measurement apparatus 25 thus is supplied with the functions r(t), s(τ), and σ². It performs the necessary mathematical evaluation to divide the functions into N equal parts and to compute a measure of the squared difference in average values of the reference utterance and adjusted sample utterance, the squared difference in linear components between the two (also designated slope), the squared difference in quadratic components between the two (also designated curvature), and the correlation between the two. Each of the measures is scaled in accordance with the reliability factor as measured by the variance σ² discussed above.

The equations which define these mathematical operations are set forth in FIG. 7. In the equations, the subscripts r and s refer, respectively, to the reference utterance and the warped sample utterance, and the functions x, y, and z are the coefficients of the first three terms of an orthogonal polynomial expansion of the corresponding utterance value. The symbol ρ represents the correlation coefficient between the sample and reference functions computed over the full length of the sample. The function ρᵢ represents the correlation coefficient between the sample and reference computed for the ith segment. Similarly, σ² represents the variance of the reference parameters computed for the entire set of reference utterances used to produce the average. The numerical evaluations for each of these measures are combined to form a single number, and a signal representative of the number is delivered to combining network 27.

Although the numerical value of dissimilarity thus prepared is sufficient to permit a reasonably reliable verification decision to be made, it will be recalled that the sample may have been adjusted severely to maximize the correlation between it and the reference. The degree of adjustment used constitutes another clue as to the likelihood of identity between the sample and the reference. If the warping values qᵢ and tᵢ were excessively large, it is less likely that the sample corresponds to the reference than if maximum correlation was achieved with less severe warping. Accordingly, the final values of q and t developed in generator 24 (FIG. 1) are delivered to measurement apparatus 26. Three measures of warping are thereupon prepared in apparatus 26.

For convenience, an expression Δᵢ for the amount of warping employed at each warp point is defined. Typically, 10 values of τ are employed, so that 10 values of Δ are produced. These values are averaged to get a single numerical value X. A value of X is developed for each of the reference speech utterances used to prepare the average, and all values of X are next averaged over the reference utterances to produce a value X̄. A first measure of distance for warping is then evaluated as

d1 = (X − X̄)².

In similar fashion, a number Y representative of the linear component of variation in the values of Δ is prepared, and a quadratic component of variation is evaluated as Z. A second measure of distance is then evaluated as

d2 = Y² + Z².

Finally, a third measure of distance is developed as

d3 = (1/N) Σᵢ [Δᵢ − X − Y(tᵢ − tₘ) − Z(tᵢ − tₘ)²]²,

where tₘ is the value of t at the midpoint of the utterance.
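The three warping distances can be sketched in Python. This is a hedged reconstruction, since the patent's equations are partly garbled in this OCR copy: the mean X and the linear and quadratic components Y and Z are taken as least-squares components of the warp amounts about the utterance midpoint, and d3 is the mean squared residual once those components are removed.

```python
def warp_distances(deltas, times, x_bar):
    """Three warping-distance measures.  deltas: warp amounts at the
    warp points; times: the corresponding time values; x_bar: mean
    warp averaged over the reference utterances."""
    n = len(deltas)
    t_m = (times[0] + times[-1]) / 2      # midpoint of the utterance
    u = [t - t_m for t in times]          # time measured from midpoint
    x = sum(deltas) / n                   # mean warp X
    # linear component Y: projection of the warp onto the centred axis
    y = sum(ui * di for ui, di in zip(u, deltas)) / sum(ui * ui for ui in u)
    # quadratic component Z: projection onto the centred-squared axis
    u2 = [ui * ui for ui in u]
    u2m = sum(u2) / n
    z = (sum((v - u2m) * di for v, di in zip(u2, deltas))
         / sum((v - u2m) ** 2 for v in u2))
    d1 = (x - x_bar) ** 2                 # deviation from the typical warp
    d2 = y * y + z * z                    # systematic (trend) warping
    # d3: mean squared residual warp after removing the mean, linear,
    # and quadratic components
    d3 = sum((di - x - y * ui - z * (v - u2m)) ** 2
             for di, ui, v in zip(deltas, u, u2)) / n
    return d1, d2, d3
```

A uniform warp equal to the reference average thus gives d1 = d2 = d3 = 0, while an erratic warp inflates d3 even when its average matches.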

The three warping distance measures, d1, d2, and d3, from system 26 are then delivered, together with the 25 dissimilarity measures from system 25, to combining unit 27, wherein a single distance measure is developed. Preferably, each of the individual distance values is suitably weighted. If the weighting function is equal to one for each distance value, a simple summation is performed. Other weighting systems may be employed in accordance with experience, i.e., with the error rate experienced in verifying claims of identity of those references accommodated by the system.
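The combining step reduces, under unit weights, to a plain summation. A minimal sketch (the weight values in the usage note are placeholders, not taken from the patent):

```python
def combine_distances(distances, weights=None):
    """Weighted sum of the individual distance values (the 25
    dissimilarity measures plus the 3 warping measures).  Unit
    weights give the simple summation described in the text."""
    if weights is None:
        weights = [1.0] * len(distances)  # default: plain summation
    return sum(w * d for w, d in zip(weights, distances))
```

In practice the weights would be tuned from the observed error rate, e.g., down-weighting any measure found unreliable for the enrolled speakers.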

The warping function measurements are therefore delivered to combining network 27, where they are combined with the numerical values developed in apparatus 25. The composite distance measure is thereupon used in threshold comparison network 28 to determine whether the sample speech should be accepted or rejected as being identical with the reference, i.e., to verify or reject the claim of identity. Since the distance measure is in the form of a numerical value, it may be matched directly against a stored numerical value in apparatus 28. The stored threshold value is selected to distribute the error possibility between the rejection of true claims of identity and the acceptance of false claims of identity, as illustrated in FIG. 4, discussed above. It is also possible that the distance value is too close to the threshold limit to permit a positive decision to be made. In this case, i.e., in an intermediate zone, a signal may be used to suggest that additional information about the individual claiming identity is needed, e.g., in the form of other tangible identification.
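The three-way decision can be sketched as follows; the width of the no-decision band is an assumed tuning parameter, not a value given in the patent.

```python
def decide(distance, threshold, band):
    """Compare a composite distance (0 = perfect match) with a stored
    threshold.  Distances inside a narrow band around the threshold
    yield no decision, prompting a request for other tangible
    identification."""
    if abs(distance - threshold) <= band:
        return "no decision"
    return "accept" if distance < threshold else "reject"
```

Widening the band trades fewer outright errors for more requests for supplementary identification.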

FIGS. 8A, 8B and 8C illustrate the overall performance of the system of the invention based on data developed in practice. In FIG. 8A, waveforms of the sample sentence "We were away a year ago" are shown for the first three formants, for the pitch period, and for signal gain, both for a sample utterance and for an averaged reference utterance. It will be observed that the waveforms of the sample and reference are not in time registry. FIG. 8B illustrates the same parameters after time adjustment, i.e., after warping, for a sample utterance determined to be substantially identical to the reference. In this case, the dissimilarity measure is sufficiently low to yield an accept signal, thus verifying the claim of identity. In FIG. 8C, the sample and reference utterances of the test sentence have been registered; yet it is evident that severe disparities remain between the two. Hence, the resulting measure of dissimilarity is sufficiently high to yield a reject signal.

Since the basic features of the invention involve the computation of certain numerical values and certain comparison operations, it is evident that the invention may most conveniently be turned to account by way of a suitable program for a computer. Indeed, the block schematic diagrams of FIGS. 1 and 5, together with the mathematical relationships set forth in the specification and figures constitute in essence a flowchart diagram illustrative of the programming steps used in the practice of the invention.

What is claimed is:

1. In an auditory verification system in which acoustic parameters of a test sample of an individual's speech are matched for identity to like parameters of a reference sample of his speech, that improvement which includes the steps of:

time adjusting said test sample parameters with said reference parameters according to a nonlinear registration schedule, measuring internal dissimilarities and irregularities between said time adjusted parameters and said average reference parameters, and

measuring internal dissimilarities and irregularities between said test sample parameters and said reference parameters, and

verifying said individual's identity on the basis of said measures of dissimilarities and irregularities.

3. In an auditory verification system in which acoustic parameters of a test sample of an individual's speech are matched for identity to like parameters of a reference sample of his speech, that improvement which comprises the steps of:

developing said reference sample from time registered values of a plurality of different speech signal parameters,

developing a like plurality of different speech signal parameters from said test speech sample,

time adjusting said test sample parameters with said reference parameters according to a nonlinear registration schedule,

measuring internal dissimilarities and irregularities between said time adjusted parameters and said average reference parameters, and

verifying said individual's identity on the basis of said measures of dissimilarities and irregularities.

4. In a speech signal verification system wherein selected speech signal parameters derived from a test phrase spoken by an individual to produce a sample are compared to reference parameters derived from the same test phrase spoken by the same individual, and wherein verification or rejection of the identity of the individual is determined by the similarities of said sample and reference parameters,

means for bringing the time span of said sample parameters into temporal registration with the time span of said reference parameters, and

means for temporally adjusting the time distribution of parameters of said sample within said adjusted time span to maximize similarities between said sample parameters and said reference parameters.

5. The speech signal verification system as defined in claim 4, wherein said similarities between said sample parameters and said reference parameters are measured by the coefficient of correlation therebetween.

6. The speech signal verification system as defined in claim 5, wherein said temporal adjustment of parameters within said adjusted time span comprises,

means for iteratively incrementing the time locations of selected parameter features until said measure of correlation between said sample parameters and said reference parameters does not increase significantly for a selected number of consecutive iterations.

7. The speech signal verification system as defined in claim 4, wherein said means for temporally adjusting said parameters within said adjusted time span comprises,

means for temporally transforming said sample parameters, designated s(t), into a set of parameters, designated s(τ), in which τ(t) = a + bt + q(t), in which a and b are constants selected to align the ... tinuous piece-wise linear function described by N selected amplitude values qᵢ and time values tᵢ within said time span, wherein i = 0, 1, ..., N.

9. An auditory speech signal verification system,

which comprises, in combination,

means for analyzing a plurality of individual utterances of a test phrase spoken by an individual to develop a prescribed set of acoustic parameter signals for each utterance,

means for developing from each of said sets of parameter signals a reference set of parameter signals and a set of signals which denotes variations between parameter signals used to develop said reference set of signals,

means for storing a set of reference parameter signals and a set of variation signals for each of a number of different individuals,

means for analyzing a sample utterance of said test phrase spoken by an individual purported to be one of said number of different individuals to develop a set of acoustic parameter signals,

means for adjusting selected parameter signals of said sample to bring the time scale of said utterance represented by said parameters into registry with the time scale of a designated one of said stored reference utterances represented by said reference parameters, said means including means for adjusting selected values of said sample parameter signals to maximize similarities between said sample utterance and said reference utterance,

means responsive to said reference parameter signals, said adjusted sample parameter signals, and said variation signals for developing a plurality of signals representative of selected similarities between each of said sample parameters and each of said corresponding reference parameters,

means for developing signals representative of the extent of adjustment employed to register said time scales,

means responsive to said plurality of similarity signals and said signals representative of the extent of adjustment for developing a signal representative of the overall degree of similarity between said sample utterance and said designated reference utterance, and

threshold comparison means supplied with said overall similarity signal for matching the magnitude of said similarity signal to the magnitude of a stored threshold signal, and for issuing an accept signal for similarity signals above threshold, a reject signal for signals below threshold, and a no decision signal for similarity signals within a prescribed narrow range of signal magnitudes near said threshold magnitude.

10. An auditory speech signal verification system, as defined in claim 9, wherein,

said means for developing a plurality of signals representative of selected similarities includes, means for measuring a plurality of different speech signal characteristics for similarity in each of a number of time subintervals within the interval of said designated time scale.

11. An auditory speech signal verification system, as defined in claim 10, wherein,

said different speech signal characteristics are based,

respectively, on (1) the difference in average values between said reference speech signal parameters and said sample speech signal parameters, (2) the squared difference in linear components between the two, (3) the squared difference in quadratic components between the two, (4) the correlation between the two in each of said subintervals, and (5) the correlation between the two over said entire interval of said designated time scale.

12. An auditory speech signal verification system, as

defined in claim 11, wherein, each of said signals representative of selected
