CA2250649A1 - Method and apparatus to detect and delimit foreground speech


Info

Publication number
CA2250649A1
Authority
CA
Canada
Prior art keywords
signal
difference
quantile
channel
equal
Prior art date
Legal status
Abandoned
Application number
CA002250649A
Other languages
French (fr)
Inventor
Stephen Douglas Peters
Daniel Boies
Current Assignee
Nortel Networks Ltd
Original Assignee
Northern Telecom Ltd
Priority date
Filing date
Publication date
Application filed by Northern Telecom Ltd
Publication of CA2250649A1


Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78Detection of presence or absence of voice signals
    • G10L25/87Detection of discrete points within a voice signal

Abstract

The present invention provides improved foreground-speech signal endpointing by computing a spectral stationarity statistic. This statistic is used by a finite state machine to endpoint speech. Endpointing using the spectral stationarity statistic is less susceptible to background noise than endpointing using conventional measures. The present invention uses frame-synchronous quantile estimation to generate a mask signal for Signal-to-Noise Ratio normalization.

Description

METHOD AND APPARATUS TO DETECT AND DELIMIT FOREGROUND SPEECH

Background
The present invention relates generally to speech recognition. In particular, it relates to speech recognition methods and apparatuses that delimit speech in noisy environments.
The automatic recognition of human speech in arbitrary environments is a difficult task. The problem is yet more difficult when the recognition is to be performed in real time, i.e., the delay between the end of speech and the system response is no more than the speaker might expect in a typical human conversation.
One of the key components of a real time speech recognition system is the ability to reliably detect the start and end of speech. While the best way to do this would involve a feedback path from the speech recognizer itself, it is not feasible to do this in real time using current technology. Because feedback is not a viable option, there is a need for methods and apparatus to determine the start and end of speech in a computationally efficient manner.
Endpointing is one technique that delimits the start and end of speech.
Endpointing is difficult, however, when speech is acquired over a telephone network because of system noise. Additionally, the variety of modes and environments in which conventional as well as cellular, cordless, and hands-free telecommunications devices are used all add to the challenge.
The key difficulty in any telecommunication system is the background noise of a telephone call. The background noise can be due to any number of phenomena, including cars, crowds, music, and other speakers. Moreover, the intensity of this background noise can be constantly changing and is impossible to predict accurately.
Currently, telephone-network real-time speech recognition system endpointers are based primarily on the energy in the received signal, which includes the speech and the background noise. They may also use other statistics derived from the received signal, including zero-crossings (for more information on zero-crossings see U.S. Patent No. 5,598,466, issued to David L. Graumann on January 28, 1997) or energy variance (for more information on energy variance see U.S. Patent No. 5,323,337, issued to Denis L. Wilson et al. on June 21, 1994). The endpointer statistic is fed to a finite state machine, which signals the start and end of speech on the basis of a number of thresholds and timeouts. An example of how such a state machine operates is given in Fig. 1.
Fig. 1 is a flow chart showing the operation of a finite state machine. First, the finite state machine receives an endpointer statistic (step 102). Next, the state machine determines whether the current statistic exceeds a first threshold for a first predetermined amount of time (a first timeout) (step 104). If the determination is negative, steps 102 and 104 are repeated. If the determination is positive, the state machine identifies the beginning of speech (step 106). The state machine then enters the in-speech state (step 108). While in the in-speech state, the state machine determines whether the statistic falls below a second threshold for a second predetermined amount of time (step 110). If the determination is negative, steps 108 and 110 are repeated. If the determination is positive, the state machine enters a tentative silence state (step 112). During the tentative silence state, the state machine determines whether the statistic exceeds the first threshold for the first predetermined amount of time. If the determination is positive, the state machine returns to the in-speech state, step 108. If the determination is negative, the finite state machine determines whether the statistic has remained below the first threshold for a third predetermined amount of time (step 116). If the determination is negative, steps 112 to 116 are repeated. Finally, if the determination is positive, the state machine identifies the end of speech (step 118). Thus, the speech recognition system performs recognition on only that portion of the input signal between the beginning of speech and the end of speech (i.e., while the state machine is in the in-speech state).
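For illustration only, the following sketch implements a threshold/timeout state machine of this general shape. The threshold values, the frame-count timeouts, and the function, parameter, and state names are assumptions chosen for the sketch, not values taken from the flow chart.

```python
# Sketch of a threshold/timeout endpointing state machine in the style of Fig. 1.
# Thresholds, timeouts, and naming are illustrative assumptions.

def endpoint(stats, t_start, t_end, n_start, n_end, n_final):
    """Yield (frame_index, state) for each endpointer statistic in stats."""
    state = "silence"
    above = below = 0                       # consecutive-frame counters
    for i, s in enumerate(stats):
        if state == "silence":
            above = above + 1 if s > t_start else 0
            if above >= n_start:            # steps 104-108: beginning of speech
                state, above = "speech", 0
        elif state == "speech":
            below = below + 1 if s < t_end else 0
            if below >= n_end:              # steps 110-112: tentative silence
                state, above, below = "tentative", 0, 0
        else:                               # tentative silence state
            above = above + 1 if s > t_start else 0
            below = below + 1 if s <= t_start else 0
            if above >= n_start:            # back to the in-speech state
                state, above, below = "speech", 0, 0
            elif below >= n_final:          # steps 116-118: end of speech
                state, above, below = "silence", 0, 0
        yield i, state

# e.g. list(endpoint([0.1, 0.9, 0.9, 0.9, 0.2, 0.1, 0.1, 0.1], 0.5, 0.3, 2, 2, 3))
```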
Typically, the effectiveness of an endpointer decreases as the intensity of the background noise increases. Loud background noise may cause the endpointer to signal a start of speech too soon or delay the detection of the end of speech. The latter condition can be quite damaging to the performance of a real time speech recognition system.
Clearly, the endpointer requires some adaptation to compensate for the background.
Therefore, it would be desirable to provide an endpointer that pre-processes the inputted signal in real time so that foreground speech delimitation using a fixed threshold endpointing method is less susceptible to background noise.


Summary of the Invention
Methods and apparatus consistent with the invention pre-process a channel energy signal to establish a spectral stationarity statistic that an endpointer can use to delimit speech. The spectral stationarity statistic allows an endpointer to perform with less susceptibility to background noise.
To attain the advantages and in accordance with the purpose of the invention, as embodied and broadly described herein, a method consistent with the present invention for processing data in a voice recognition system capable of receiving foreground speech in the presence of background noise includes the steps of extracting a channel signal, generating a mask signal from the channel signal, masking the extracted channel signal with the mask signal, and taking a sample standard deviation of the masked channel signal over a temporal window.
An apparatus in a voice recognition system capable of receiving foreground speech in the presence of background noise consistent with the present invention comprises means for extracting a channel signal, means for generating a mask signal from the computed channel signal, means for masking the extracted channel signal with the mask signal, and means for taking a sample standard deviation of the masked energy signal over a temporal window.
A method for generating a quantile estimate of a channel signal, comprising the steps of defining a quantile estimate, initializing a plurality of buffers, receiving a channel signal, computing a plurality of differences, adjusting the quantile estimate based on the plurality of differences, and incrementing the plurality of buffers based on the plurality of differences.
Also, an apparatus for generating a quantile estimate of a channel signal, comprising means for defining an initial quantile estimate, means for initializing a plurality of buffers, means for receiving a channel signal, means for computing a plurality of differences, means for adjusting the quantile estimate based on the plurality of differences, and means for incrementing the plurality of buffers based on the plurality of differences.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.

Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate preferred embodiments of the invention and, together with the description, explain the goals, advantages and principles of the invention. In the drawings:
Fig. 1 is a flow chart illustrating prior art speech signal endpointing;
Fig. 2 is a flow chart illustrating a method of preprocessing a noisy signal consistent with the present invention;
Fig. 3 is a block diagram of a pre-endpointer processor consistent with the present invention;
Fig. 4 is a block diagram of the quantile estimator of Fig. 3;
Fig. 5 is a flow chart illustrating a method of computing quantile estimates consistent with the present invention;
Fig. 6 is a graphical representation of the high and low quantile estimates in relation to the channel energy; and
Fig. 7 is a block diagram of the sample deviation estimator of Fig. 3.
Like reference numerals refer to corresponding parts throughout the several figures of the drawings.
Description of the Preferred Embodiment
Reference will now be made in detail to the present preferred embodiments of the invention, examples of which are illustrated in the accompanying drawings. The matter contained in the description below or shown in the accompanying drawings shall be interpreted as illustrative, and not limiting.
Methods and apparatus consistent with this invention provide improved foreground-speech signal endpointing. To improve endpointing, a spectral stationarity statistic ("S3") is computed. The statistic S3 is more robust to background noise than more conventional measures. Additionally, the statistic S3 can be made even less susceptible to variable background noise by using background normalization.
Fig. 2 is a flow chart showing a method of pre-processing a received noisy signal to produce the statistic S3 for each frame consistent with the present invention. A frame comprises a series of digital samples of the noisy signal over a pre-determined length of time. First, a pre-endpointer processor receives a noisy signal, which includes foreground speech (step 202). As used in this application, foreground speech refers to that portion of the input signal that is to be recognized by the speech recognition system.
Next, using conventional techniques, the pre-endpointer processor extracts a channel energy signal from the received noisy signal (step 204). For simplicity, Fig. 2 only refers to a single recording channel, but multiple recording channels are preferred (i.e., 2, 3, 5, 20, or more channels). As explained in more detail below, the pre-endpointer processor then computes both a high and a low quantile estimation of the channel energy signal (step 206).
Using the quantile estimations to generate a mask signal, the noisy signal is masked with the mask signal using a Signal-to-Noise Ratio ("SNR") normalization procedure (step 208).
Finally, the pre-endpointer processor takes a sample standard deviation of the masked signal over a temporal window (step 210). The finite state machine then uses the sample standard deviation, i.e., the statistic S3, in a conventional manner to generate the foreground speech endpoints (step 212).
Fig. 3 is a block diagram of a pre-endpointer processor ("PEP") 300 consistent with the present invention. PEP 300 includes an energy extractor 302, an energy root transformer 304, a quantile estimator 306, a masker 308, a smoothing filter 310, a sample deviation processor 312, two parallel linear filters 314 and 316, a minimizer 318, and a summer 320. As seen in Fig. 3, each recording channel signal is inputted to PEP 300 and received by energy extractor 302. Energy extractor 302 outputs an extracted channel energy signal to energy root transformer 304 and to masker 308. Energy root transformer 304 performs a non-linear root transformation on the extracted channel energy signal and outputs the transformed signal to quantile estimator 306, which computes high and low quantile estimates for the transformed energy signal. Quantile estimator 306 outputs high and low quantile estimate signals to masker 308. Masker 308 uses the quantile estimate signals to generate a mask signal and perform SNR normalization on the channel energy signal outputted from energy extractor 302 (i.e., adds the mask signal to the channel energy signal). Additionally, masker 308 has a memory (not shown) associated with it to save the current mask signal for use in computing the next mask signal. The masked channel energy signal is sent through smoothing filter 310 to sample deviation processor 312, which takes a sample deviation of the masked channel energy signal over a temporal window, as described in more detail below. The sample deviation signal passes through

two parallel linear filters 314 and 316 to minimizer 318. Minimizer 318 outputs the lesser of the two filter outputs to summer 320, and summer 320 subtracts the output of minimizer 318 from the sample deviation signal to generate the statistic S3. Finally, the statistic S3 is outputted to the finite state machine, whose operation is shown in Fig. 1. The state machine uses the statistic S3 in a conventional manner to determine the foreground speech endpoints.
In one embodiment, PEP 300, and its associated components, is implemented in software executed by a processor of a host computer (not shown). In other embodiments, PEP 300 is implemented in circuit hardware, or a combination of hardware and software.
When implemented in software, a preferred operating environment is a C-based operating environment.
One of skill in the art would now recognize that the channel energy signals used to calculate the statistic S3 are in the power domain. These energy signals may vary over a large range, which makes it difficult to take the high and low quantile estimations of the channel energy signal. Energy root transformer 304, therefore, performs a conventional non-linear transformation (Eq. 1) on the channel energy signal to obtain a root channel energy signal ("RCE"). The only requirement of this conventional conversion is that the "root" operator be predefined such that, as the exponent γ approaches 0, RCE approaches log CE, where CE is the channel energy signal. This tends to compress the range of the actual channel energies.

root(CE, γ) is defined as RCE = (1/γ)(CE^γ − 1)   (Eq. 1)
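For illustration, a minimal Python sketch of this root operator and its inverse follows. The helper names root() and root_inverse() and the example exponent γ = 0.25 are assumptions; the text only requires that the transform approach log CE as γ approaches 0.

```python
import math

def root(ce, gamma):
    """Generalized logarithm of (Eq. 1): RCE = (CE**gamma - 1) / gamma."""
    if gamma == 0.0:
        return math.log(ce)                    # limiting case: plain logarithm
    return (ce ** gamma - 1.0) / gamma

def root_inverse(rce, gamma):
    """Map a root-domain value back to the power (energy) domain."""
    if gamma == 0.0:
        return math.exp(rce)
    return (gamma * rce + 1.0) ** (1.0 / gamma)

# Example: a channel energy of 1e6 compresses to about 122.5 with gamma = 0.25
# (compared with about 13.8 for the log limit), and root_inverse() recovers 1e6.
```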

Fig. 4 is a block diagram of quantile estimator 306. For each RCE, quantile estimator 306 comprises two non-linear filters 402 and 404; two above integer buffers (counters) 406 and 410; two below integer buffers (counters) 408 and 412; and eight floating point buffers 414, 416, 418, 420, 422, 424, 426, and 428. As can be seen in Fig. 4, quantile estimator 306 receives RCE at non-linear filters 402 and 404. Non-linear filter 402 communicates with above and below integer buffers 406 and 408, and floating point buffers 414, 416, and 418, to generate the high quantile estimate ("HQE"). Non-linear filter 404 communicates with above and below integer buffers 410 and 412, and floating point buffers 424, 426, and 428, to generate the low quantile estimate ("LQE").

Fig. 5 is a flow chart representing how quantile estimator 306 computes HQE.
First, above integer buffer 406 and below integer buffer 408 are initialized to a value of one (step 502). Floating point buffers 414, 416, and 418 are initialized by, for example, receiving three frames of channel energy signals prior to the initiation of any foreground speech (step 504). These three frames are classified as a highest, a middle, and a lowest channel energy signal. Quantile estimator 306 stores the highest channel energy signal less the middle channel energy signal in floating point buffer 414 as a higher bound, the middle channel energy signal less the lowest channel energy signal in floating point buffer 416 as a lower bound, and the middle channel energy signal in floating point buffer 418 as an initial HQE
(step 506). Quantile estimator 306 uses above integer buffer 406 to count the number of channel energies that are above HQE and below integer buffer 408 to count the number of channel energies that are below HQE. The counting process is described below, in steps 508-538. Because the middle channel energy is set to be HQE, above and below integer buffers 406 and 408, respectively, are set to a value of 1, which indicates one channel energy signal is above HQE and one channel energy signal is below HQE. Once the initialization portion is complete, the quantile estimator runs in steady-state mode.
Although steps 508-538 are shown as a discrete series of steps, during steady state operation the process is continual in nature.
In the steady state, quantile estimator 306 continually receives root channel energy signals (step 508). The HQE output from quantile estimator 306 depends on two differences. The first difference is the quantile target ratio subtracted from the ratio between the above integer buffer 406 and the below integer buffer 408 (step 510). The quantile target ratio is determined from a predetermined quantile specification. For example, if the quantile specification is fifty percent, the target ratio would be unity (i.e., for every sample above the estimate, there should be one below). If the quantile specification were ninety percent, the target ratio would be 1:9.
The second difference is the previous quantile estimate stored in floating point buffer 418 subtracted from the current channel energy sample stored in filter 402 (step 512). If both of the differences are positive (step 514), the quantile estimate is increased by the lesser of the higher bound stored in floating point buffer 414 and the second difference (step 516), and the below integer buffer 408 is incremented (step 518). Similarly, if both of the differences are negative (step 520), the quantile estimate stored in floating point buffer 418 is reduced by the lesser of the lower bound stored in floating point buffer 416 and the absolute value of the second difference (step 522), and the above integer buffer 406 is incremented (step 524).
If the first difference is positive and the second difference is negative (step 526), the below integer buffer 408 is incremented (step 528). If the second difference is positive and the first difference is negative (step 530), the above integer buffer is incremented (step 532).
Also, if the second difference is negative and the absolute value of the second difference is less than the lower bound stored in floating point buffer 416, then the absolute value of the second difference is stored in floating point buffer 416 as the new lower bound (step 534). Additionally, if the second difference is positive and the second difference is less than the higher bound currently stored in floating point buffer 414, then the second difference is stored in floating point buffer 414 as the new higher bound (step 536). After all these tests and adjustments, floating point buffers 414 and 416 are floored so that they are not permitted to vanish (step 538). Steps 508 to 538 are repeated as long as the state machine is on-line. The LQE is determined in a manner similar to determining HQE outlined above. In the preferred embodiment of this invention, the HQE is a quantile estimator with a quantile specification of ninety percent, i.e., a target ratio of 1:9, and the LQE is a quantile estimator with a quantile specification of ten percent, i.e., a target ratio of 9:1.
The remaining two floating point buffers 420 and 422, which are shared between the HQE and LQE, are used to store the maxima and minima of the channel energy. The absolute differences between these values and the quantile estimate are used to regulate the bounds. In the preferred embodiment of this invention, the floor on the higher bounds stored in floating point buffers 414 and 424 is one quarter of the ratio between (a) the difference of the maximum stored in floating point buffer 420 and the quantile estimates stored in floating point buffers 418 and 428, and (b) the above integer buffers 406 and 410. Similarly, the floor on the lower bounds stored in floating point buffers 416 and 426 is one quarter of the ratio between (a) the difference of the quantile estimates stored in floating point buffers 418 and 428 and the minimum stored in floating point buffer 422, and (b) the below integer buffers 408 and 412.
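As a concrete illustration, the following sketch implements the estimator of Figs. 4 and 5 for a single channel, following the sign conventions and update rules of the description above (steps 502-538). The class layout and field names are assumptions; only the update rules and the one-quarter flooring ratio come from the text.

```python
# Sketch of a frame-synchronous quantile estimator in the style of Figs. 4 and 5.

class QuantileEstimator:
    def __init__(self, init_samples, target_ratio):
        """init_samples: three root-domain energies received before any speech.
        target_ratio: desired (samples above)/(samples below), e.g. 1/9 for HQE."""
        low, mid, high = sorted(init_samples)
        self.estimate = mid                 # buffer 418: initial quantile estimate
        self.higher_bound = high - mid      # buffer 414
        self.lower_bound = mid - low        # buffer 416
        self.maximum = high                 # buffer 420
        self.minimum = low                  # buffer 422
        self.above = 1                      # counter 406
        self.below = 1                      # counter 408
        self.target = target_ratio

    def update(self, rce):
        """One steady-state iteration (steps 508-538) for a root channel energy."""
        self.maximum = max(self.maximum, rce)
        self.minimum = min(self.minimum, rce)
        d1 = self.above / self.below - self.target   # count ratio vs. target ratio
        d2 = rce - self.estimate                     # sample vs. previous estimate
        if d1 > 0 and d2 > 0:                        # steps 514-518
            self.estimate += min(self.higher_bound, d2)
            self.below += 1
        elif d1 < 0 and d2 < 0:                      # steps 520-524
            self.estimate -= min(self.lower_bound, abs(d2))
            self.above += 1
        elif d1 > 0 and d2 < 0:                      # steps 526-528
            self.below += 1
        elif d1 < 0 and d2 > 0:                      # steps 530-532
            self.above += 1
        if d2 < 0:                                   # step 534: tighten lower bound
            self.lower_bound = min(self.lower_bound, abs(d2))
        elif d2 > 0:                                 # step 536: tighten higher bound
            self.higher_bound = min(self.higher_bound, d2)
        # Step 538: floor the bounds so they are not permitted to vanish.
        self.higher_bound = max(self.higher_bound,
                                0.25 * (self.maximum - self.estimate) / self.above)
        self.lower_bound = max(self.lower_bound,
                               0.25 * (self.estimate - self.minimum) / self.below)
        return self.estimate
```

In this sketch an HQE would be constructed with target_ratio = 1/9 (one sample above for every nine below) and an LQE with target_ratio = 9.0, each fed the root-transformed channel energy of every frame through update().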


Fig. 6 is a graphical representation of a channel energy signal and the HQE and LQE generated from the channel energy signal. As can be seen in Fig. 6, HQE and LQE are adjusted for every frame based, in part, on what the quantile estimates should have been for the immediately preceding frame. One of ordinary skill in the art will now recognize that the quantile estimator has many applications, of which only one is outlined above.
Once generated, masker 308 uses HQE and LQE to generate a mask signal in a manner analogous to (Eq. 2),

(HQE + μ_t) / (LQE + μ_t) = Target   (Eq. 2)

where μ_t equals the mask signal and Target equals a predetermined threshold. Preferably, Target is set to make the distances between the high and low quantile estimates and the channel energy equal. Not only do HQE and LQE affect μ_t, but μ_t also depends upon a previously computed μ_(t-1), where μ_t equals the instantaneous mask signal and μ_(t-1) equals the previously computed mask signal (Eq. 3),

μ_t = max{ [root^-1(HQE) − Target · root^-1(LQE)] / (Target − 1), λ·μ_(t-1), μ_min }   (Eq. 3)

where λ is a preset forgetting factor, close to but less than unity, and μ_min is a lower bound on the mask signal, close to or equal to zero.
Masker 308 adds the mask signal μ_t to the extracted channel energy signal to obtain a masked channel energy signal ("MCES") (Eq. 4).

MCES = root( (CE + μ_t) / (root^-1(LQE) + μ_t), γ )   (Eq. 4)

For more information regarding SNR normalization see Tom Claes and Dirk Van Compernolle, SNR-NORMALISATION FOR ROBUST SPEECH RECOGNITION, ICASSP 96, pp. 331-334, 1996 ("Claes"). While Claes identifies the general SNR normalization procedure, mask signals consistent with the present invention are significantly different. The SNR normalization in Claes, for example, predictively estimates the mask signal by tracking the maxima and minima of the instantaneous SNR. Conversely, methods consistent with the present invention use quantile approximation, or its equivalent, to generate the target mask signal. Thus, instead of predictively estimating the mask signal, methods consistent with the present invention determine what the mask signal for the previous frame should have been and correspondingly adjust the instantaneous mask signal.
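A sketch of the masker stage follows, restating Eq. 3 and Eq. 4 as code. The helper names, the Target value, the forgetting factor λ = 0.98, and the mask floor are illustrative assumptions; HQE and LQE are the root-domain quantile estimates, CE is the raw channel energy in the power domain, and _root_inv plays the role of root^-1.

```python
def _root(x, gamma):
    return (x ** gamma - 1.0) / gamma          # Eq. 1 forward transform

def _root_inv(r, gamma):
    return (gamma * r + 1.0) ** (1.0 / gamma)  # back to the power domain

def update_mask(hqe, lqe, mu_prev, gamma, target=10.0, lam=0.98, mu_min=0.0):
    """Instantaneous mask per Eq. 3: what the previous frame's mask should have
    been, limited below by a decayed previous mask and a fixed floor."""
    ideal = (_root_inv(hqe, gamma) - target * _root_inv(lqe, gamma)) / (target - 1.0)
    return max(ideal, lam * mu_prev, mu_min)

def mask_energy(ce, lqe, mu, gamma):
    """Masked channel energy (MCES) per Eq. 4: SNR normalization by the low
    quantile (background) estimate, followed by the root transform."""
    return _root((ce + mu) / (_root_inv(lqe, gamma) + mu), gamma)
```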
The MCES is fed through smoothing filter 310, which is a conventional three-tap FIR smoothing filter, into sample deviation processor 312. Fig. 7 is a block diagram of sample deviation processor 312. Sample deviation processor 312 comprises a delay shift register 702, a variance calculator 704, and a square root calculator 706. Delay shift register 702 has seven register slots 702-1 through 702-7. The instantaneous MCES is inputted to register slot 702-1, the contents of register slots 702-1 through 702-6 are shifted up one register slot (i.e., the contents of 702-1 are transferred to 702-2, etc.), and the content of register slot 702-7 is discarded. Thus, each register slot 702-1 through 702-7 stores an associated MCES-1 through MCES-7. Variance calculator 704 computes the variance between the MCESs stored in delay shift register 702, and square root calculator 706 takes the square root of the variance (Eq. 5); the output is the sample standard deviation over the temporal window ("SDTW").

SDTW = { (1/6) [ Σ(k=1..7) (MCES_k)^2 − (1/7) ( Σ(k=1..7) MCES_k )^2 ] }^(1/2)   (Eq. 5)

For more information see United States Patent Numbers 5,579,431 and 5,617,508, issued to Benjamin K. Reaves on November 26, 1996 and April 1, 1997, respectively. A sample deviation processor can calculate the variance over any number of stored MCESs, but the use of the current value and the six previous values is satisfactory. Preferably, SDTW is computed for each recording channel energy signal level. Sample deviation processor 312 combines the SDTWs into a "frame-synchronous scalar statistic." This combining process includes developing an Average SDTW and a Weighted Average SDTW. Assuming twenty recording channels, the Average SDTW is simply the sum of the twenty SDTWs divided by twenty (Eq. 6), where i is the recording channel.

Average SDTW = ( Σ(i=1..20) SDTW_i ) / 20   (Eq. 6)

The Weighted Average SDTW can vary depending on the application, but lends a greater significance to the higher frequency channels. The Weighted Average SDTW is determined by assigning a Weight Factor (WF) to each channel and multiplying the SDTW by the WF for each channel. The sum of all the WFs will equal twenty. The Weight Adjusted SDTWs are summed and divided by twenty (Eq. 7).

Weighted Average SDTW = ( Σ(i=1..20) (WF_i)(SDTW_i) ) / 20   (Eq. 7)

The frame-synchronous scalar statistic is the greater of the Weighted Average SDTW and the Average SDTW. Although it is preferable to have twenty recording channels, more or fewer could be used depending on system characteristics.
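The following sketch restates the seven-slot deviation window of Fig. 7 and the channel combination of Eq. 5-7. The deque stands in for the explicit shift register; the weight factors are assumed to be supplied by the caller and to sum to the number of channels.

```python
import math
from collections import deque

class SampleDeviation:
    """Sample standard deviation of the current and six previous MCES values (Eq. 5)."""
    def __init__(self, length=7):
        self.window = deque(maxlen=length)      # plays the role of shift register 702

    def update(self, mces):
        self.window.append(mces)
        n = len(self.window)
        if n < 2:
            return 0.0
        s1 = sum(self.window)
        s2 = sum(x * x for x in self.window)
        return math.sqrt(max(0.0, (s2 - s1 * s1 / n) / (n - 1)))

def frame_statistic(sdtws, weights):
    """Greater of the Average SDTW (Eq. 6) and the Weighted Average SDTW (Eq. 7)."""
    n = len(sdtws)
    average = sum(sdtws) / n
    weighted = sum(w * s for w, s in zip(weights, sdtws)) / n
    return max(average, weighted)
```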
The frame-synchronous scalar statistic could be used by the endpointer to delimit speech in the conventional manner. It is preferred, however, to apply background normalization to the frame-synchronous scalar statistic. Background normalization comprises filtering the frame-synchronous scalar statistic using separate and parallel linear filters 314 and 316 (Fig. 3). Filter 314 is a conventional one-pole filter with a preset number of frame delays, i.e., a previous background estimator. Filter 316 is a conventional non-causal rectangular impulse response FIR filter that estimates a preset number of frames ahead, i.e., an advanced background estimator. Preferably, the number of frames filters 314 and 316 deviate from the current frame is equal. Adequate background normalization can be achieved with a three-frame deviation. For more information regarding the background normalization procedure see Davies & Knappe, NOISE BACKGROUND NORMALIZATION FOR SIMULTANEOUS BROADBAND AND NARROWBAND DETECTION, ICASSP 1988, pp. 2733-36 ("Davies et al."). While similar to Davies et al., one of ordinary skill in the art would now recognize that background normalization methods and apparatuses consistent with the present invention need to be modified, because the signal of interest is neither broadband nor narrowband noise. Satisfactory background normalization can be achieved, however, by removing the minimum of filters 314 and 316 from the frame-synchronous scalar statistic to achieve the statistic S3.
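Finally, a sketch of the background normalization stage. The one-pole coefficient, the three-frame delay and look-ahead, and the buffering used to approximate the non-causal filter are all assumptions; the text only requires a previous and an advanced background estimate whose minimum is subtracted from the frame-synchronous scalar statistic.

```python
from collections import deque

class BackgroundNormalizer:
    """Subtract min(previous background estimate, advanced background estimate)."""
    def __init__(self, delay=3, lookahead=3, pole=0.95):
        self.delayed = deque([0.0] * delay, maxlen=delay)  # delay line ahead of filter 314
        self.future = deque(maxlen=lookahead)              # buffered frames for filter 316
        self.prev_estimate = 0.0                           # state of the one-pole filter
        self.pole = pole

    def update(self, stat):
        """Return the statistic S3 for the oldest buffered frame, or None while
        the look-ahead buffer is still filling."""
        oldest = self.delayed[0]                 # frame seen `delay` frames ago
        self.delayed.append(stat)
        # Filter 314: one-pole smoother of delayed frames (previous background).
        self.prev_estimate = self.pole * self.prev_estimate + (1.0 - self.pole) * oldest
        # Filter 316: rectangular average over the buffered upcoming frames
        # (advanced background for the oldest buffered frame).
        self.future.append(stat)
        if len(self.future) < self.future.maxlen:
            return None
        current = self.future[0]
        advanced = sum(self.future) / len(self.future)
        return current - min(self.prev_estimate, advanced)   # statistic S3
```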
It will be apparent to those skilled in the art that various modifications and variations can be made in the methods and apparatus consistent with the present invention without departing from the scope or spirit of the invention. Other modifications will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. The specification and examples should be considered as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.

Claims (42)

1. A method for processing data in a voice recognition system capable of receiving foreground speech in the presence of background noise, comprising the steps, performed by a processor, of:
extracting a channel signal;
generating a mask signal from the channel signal;
masking the extracted channel signal with the mask signal; and taking a sample standard deviation of the masked channel signal over a temporal window.
2. The method of claim 1, wherein the extracting step extracts a channel energy signal.
3. The method of claim 2, further comprising the step of:
performing a background normalization on the sample standard deviation.
4. The method of claim 2, wherein generating the mask signal includes the substeps of:
storing a previous mask signal; and generating the mask signal from the channel signal and the stored previous mask signal.
5. The method of claim 2, further comprising the step of:
computing a high quantile estimation and a low quantile estimation.
6. The method of claim 5, wherein the step of generating the mask signal includes the substep of:
equalizing the separations between the computed high quantile estimate and the extracted channel energy signal and between the computed low quantile estimate and the extracted channel energy signal.
7. The method of claim 2, wherein the step of masking the extracted channel energy signal includes the substep of:
adding the generated mask signal to the extracted channel energy signal.
8. The method of claim 2, further comprising the step of:
smoothing the masked channel energy signal.
9. The method of claim 2, wherein the step of taking the sample standard deviation comprises the substeps of:
storing a plurality of previously taken masked signal values in a buffer;
replacing a least current of the plurality of masked signal values with the current masked signal value; and computing the sample variance between the plurality of masked signal values stored in the buffer.
10. The method of claim 8, further comprising the step of:
taking a square root of the variance.
11. The method of claim 3, wherein the step of performing background normalization comprises the substeps of:
filtering the masked channel energy signal to produce an estimated background signal; and subtracting the estimated background signal from the masked channel energy signal.
12. The method of claim 11, wherein the step of filtering comprises the substeps of:
filtering the masked signal using a previous background estimator;
filtering the masked signal using an advanced background estimator; and selecting the minimum of the filtered masked signals as the estimated background signal.
13. The method of claim 2, further comprising the step of:
transforming the extracted channel energy signal.
14. The method of claim 13, wherein the transforming step includes taking a generalized logarithm (root) of the extracted channel energy signal.
15. An apparatus in a voice recognition system capable of receiving foreground speech in the presence of background noise, comprising:
means for extracting a channel signal;
means for generating a mask signal from the channel signal;
means for masking the extracted channel signal using the generated mask signal;
and means for taking a sample standard deviation of the masked channel signal over a temporal window.
16. The apparatus of claim 15, wherein the extracting means extracts a channel energy signal.
17. The apparatus of claim 15, further comprising:
means for performing a background normalization on the sample standard deviation.
18. The apparatus of claim 15, further comprising:
a smoothing filter.
19. The apparatus of claim 15, further comprising:
means for computing a high quantile estimate and a low quantile estimate.
20. The apparatus of claim 15, further comprising:
means for generating a background estimate signal; and means for subtracting the background estimate signal from the sample standard deviation.
21. The apparatus of claim 15, wherein the means for generating a background estimate signal comprises:
a previous background estimator;
an advance background estimator; and a minimizer to output the minimum of the previous background estimator and the advance background estimator as the background estimate signal.
22. A computer program product comprising:
a computer usable medium having computer readable code embodied therein for processing data in a voice recognition system, the computer usable medium comprising an extracting module configured to extract a channel energy signal;
a mask generating module configured to generate a mask signal from the channel energy signal;
a masking module configured to mask the extracted channel energy signal with the generated mask signal; and a standard deviation module configured to take a sample standard deviation of the masked extracted channel energy signal over a temporal window.
23. The computer program product of claim 22, further comprising:
a background normalization module configured to perform background normalization on the sample standard deviation.
24. The computer program product of claim 22, further comprising:
a computing module configured to compute a high quantile estimation and a low quantile estimation.
25. A method for generating a quantile estimate of a channel signal, comprising the steps, performed on a processor, of:
defining a quantile estimate;
initializing a plurality of buffers;
receiving a channel signal;
computing a plurality of differences;
adjusting the quantile estimate based on the plurality of differences; and incrementing the plurality of buffers based on the plurality of differences.
26. The method of claim 25, wherein the initializing step includes the substeps of:
initializing an above counter to one; and initializing a below counter to one.
27. The method of claim 26, wherein the computing step includes the substep of:
computing a first difference and a second difference, the first difference being equal to a quantile ratio less the above counter divided by the below counter, the second difference being equal to the quantile estimate less the channel signal.
28. The method of claim 25, wherein the defining step includes the substeps of:
receiving a plurality of background signals;
designating a high signal, a low signal, and a middle signal from the plurality of background signals;
storing a higher bound, a lower bound, and a quantile estimate, the higher bound being equal to the high signal less the middle signal, the lower bound being equal to the middle signal less the low signal, the quantile estimate being equal to the middle signal;
and establishing a quantile ratio.
29. A method for generating a quantile estimate of a channel signal, comprising the steps, performed on a processor, of:
designating a high signal, a low signal, and a middle signal from the plurality of background signals;
storing a higher bound, a lower bound, and a quantile estimate, the higher bound being equal to the high signal less the middle signal, the lower bound being equal to the middle signal less the low signal, the quantile estimate being equal to the middle signal;
and establishing a quantile ratio.
initializing an above counter and a below counter;
receiving a channel signal;
computing a first difference and a second difference, the first difference being equal to the quantile ratio less the above counter divided by the below counter, the second difference being equal to the quantile estimate less the channel signal;
adjusting the quantile estimate based on the plurality of differences; and incrementing the plurality of counters based on the plurality of differences.
30. The method of claim 29, wherein the adjusting step includes the substeps of:
if the first difference and second difference are positive, increase the quantile estimate by the lesser of the higher bound and the second difference; and if the first difference and second difference are negative, increase the quantile estimate by the lesser of the lower bound and an absolute value of the second difference.
31. The method of claim 29, wherein the incrementing step includes the substep of:
if the first difference and second difference are positive, increment the below counter;
if the first difference and second difference are negative, increment the above counter;
if the first difference is positive and the second difference is negative, increase the below counter; and if the first difference is negative and the second difference is positive, increase the above counter.
32. A method for generating a quantile estimate of a channel signal, comprising the steps, performed on a processor, of:
initializing a below counter and an above counter;
receiving three background signals designated a high signal, a low signal, and a middle signal;
storing a higher bound, a lower bound, and a quantile estimate, the higher bound being equal to the high signal less the middle signal, the lower bound being equal to the middle signal less the low signal, the quantile estimate being equal to the middle signal;
establishing a quantile ratio;
receiving a channel signal;
computing a first difference and a second difference, the first difference being equal to the quantile ratio less the above counter divided by the below counter, the second difference being equal to the quantile estimate less the channel signal;
if the first difference and second difference are positive, increase the quantile estimate by the lesser of the higher bound and the second difference and increment the below counter;
if the first difference and second difference are negative, increase the quantile estimate by the lesser of the lower bound and an absolute value of the second difference and increment the above counter;
if the first difference is positive and the second difference is negative, increase the below counter;
if the absolute value of the second difference is less than the lower bound, store the absolute value of the second difference as the lower bound;
if the first difference is negative and the second difference is positive, increase the above counter;
if the second difference is less than the higher bound, store the second difference as the higher bound; and flooring the higher bound, lower bound, and quantile estimate.
33. A computer program product comprising:
a computer usable medium having computer readable code embodied therein for processing data in a voice recognition system, the computer usable medium comprising:

a defining module configured to define a quantile estimate;
an initializing module configured to initialize a plurality of buffers;
a receiving module configured to receive a channel signal;
a computing module configured to compute a plurality of differences;
an adjusting module configured to adjust the quantile estimate based on the plurality of differences; and an incrementing module configured to increment the plurality of buffers based on the plurality of differences.
34. The computer program product of claim 33, wherein the initializing module is further configured to initialize an above counter to one and a below counter to one.
35. The computer program product of claim 34, wherein the computing module is further configured to compute at least one of a first difference and a second difference, the first difference being equal to a quantile ratio less the above counter divided by the below counter, the second difference being equal to the quantile estimate less the channel signal.
36. The computer program product of claim 33, wherein the defining module comprises:
a receiving module configured to receive a plurality of background signals;
a designating module configured to designate a high signal, a low signal, and a middle signal from the plurality of background signals; and a storing module configured to store a higher bound, a lower bound, and a quantile estimate, the higher bound being equal to the high signal less the middle signal, the lower bound being equal to the middle signal less the low signal, and the quantile estimate being equal to the middle signal.
37. A computer program product comprising:
a computer usable medium having computer readable code embodied therein for processing data in a voice recognition system, the computer usable medium comprising:
a receiving module configured to receive a plurality of background signals;
a designating module configured to designate a high signal, a low signal, and a middle signal from the plurality of background signals;
a storing module configured to store a higher bound, a lower bound, and a quantile estimate, the higher bound being equal to the high signal less the middle signal, the lower bound being equal to the middle signal less the low signal, and the quantile estimate being equal to the middle signal;
an initializing module configured to initialize a plurality of buffers;
the receiving module further configured to receive a channel signal;
a computing module configured to compute at least one of a first difference and a second difference, the first difference being equal to a quantile ratio less an above counter divided by a below counter, the second difference being equal to the quantile estimate less the channel signal;
an adjusting module configured to adjust the quantile estimate based on the plurality of differences; and an incrementing module configured to increment the above counter and the below counter based on the plurality of differences.
38. The computer program product of claim 37, wherein the adjusting module is further configured to increase the quantile estimate by the lesser of the higher bound and the second difference if the first difference and the second difference are positive and increase the quantile estimate by the lesser of the lower bound and an absolute value of the second difference if the first difference and second difference are negative.
39. An apparatus for generating a quantile estimate of a channel signal, comprising:
means for defining an initial quantile estimate;
means for initializing a plurality of buffers;
means for receiving a channel signal;
means for computing a plurality of differences;
means for adjusting the quantile estimate based on the plurality of differences;and means for incrementing the plurality of buffers based on the plurality of differences.
40. The apparatus of claim 39, wherein the defining means further comprises:
means for receiving a plurality of background signals;
means for designating a high signal, a low signal, and a middle signal from the plurality of background signals;

means for storing a higher bound, a lower bound, and a quantile estimate, the higher bound being equal to the high signal less the middle signal, the lower bound being equal to the middle signal less the low signal, and the quantile estimate being equal to the middle signal.
41. An apparatus for generating a quantile estimate, comprising:
a plurality of counters;
a plurality of buffers;
means to initialize the plurality of counters and the plurality of buffers, the initializing means including at least means for storing a quantile estimate;
means for receiving a channel signal;
means for communicating between the plurality of counters and the plurality of buffers to adjust the quantile estimate based on the received channel signal.
42. An apparatus for generating a quantile estimate, comprising:
a non-linear filter coupled to receive an energy signal, the non-linear filter communicating with an above integer buffer, a below integer buffer, and a plurality of floating point buffers;
a first of the plurality of floating point buffers initialized with a value;
a second of the plurality of floating point buffers initialized with a higher bound;
a third of the plurality of floating point buffers initialized with a lower bound;
a fourth of the plurality of floating point buffers initialized with a maximum;
a fifth of the plurality of floating point buffers initialized with a minimum;
means for incrementing the above integer buffer by one when the energy signal received is greater than or equal to the value;
means for incrementing the below integer buffer by one when the energy signal received is less than the value;
means for computing a first difference and a second difference, the first difference being equal to a quantile ratio less the above integer buffer divided by the below integer buffer, the second difference being equal to the value less the energy signal;
means for adjusting the value by one of the lesser of the higher bound and the second difference if the first difference and the second difference are positive, and the lesser of the lower bound and an absolute value of the second difference if the first difference and the second difference are negative; and means for outputting the value as the quantile estimate.
CA002250649A 1997-10-24 1998-10-20 Method and apparatus to detect and delimit foreground speech Abandoned CA2250649A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/950,417 US6134524A (en) 1997-10-24 1997-10-24 Method and apparatus to detect and delimit foreground speech
US08/950,417 1997-10-24

Publications (1)

Publication Number Publication Date
CA2250649A1 true CA2250649A1 (en) 1999-04-24

Family

ID=25490403

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002250649A Abandoned CA2250649A1 (en) 1997-10-24 1998-10-20 Method and apparatus to detect and delimit foreground speech

Country Status (4)

Country Link
US (1) US6134524A (en)
EP (1) EP0911806B1 (en)
CA (1) CA2250649A1 (en)
DE (1) DE69811310T2 (en)

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3753384B2 (en) * 1997-03-19 2006-03-08 株式会社日立製作所 Device for detecting the start point of a voiced section in video
US6321197B1 (en) * 1999-01-22 2001-11-20 Motorola, Inc. Communication device and method for endpointing speech utterances
JP2001075594A (en) * 1999-08-31 2001-03-23 Pioneer Electronic Corp Voice recognition system
US6621834B1 (en) * 1999-11-05 2003-09-16 Raindance Communications, Inc. System and method for voice transmission over network protocols
US7263074B2 (en) * 1999-12-09 2007-08-28 Broadcom Corporation Voice activity detection based on far-end and near-end statistics
EP1279164A1 (en) * 2000-04-28 2003-01-29 Deutsche Telekom AG Method for detecting a voice activity decision (voice activity detector)
US7421393B1 (en) 2004-03-01 2008-09-02 At&T Corp. System for developing a dialog manager using modular spoken-dialog components
US7412393B1 (en) * 2004-03-01 2008-08-12 At&T Corp. Method for developing a dialog manager using modular spoken-dialog components
US8345890B2 (en) 2006-01-05 2013-01-01 Audience, Inc. System and method for utilizing inter-microphone level differences for speech enhancement
US9185487B2 (en) * 2006-01-30 2015-11-10 Audience, Inc. System and method for providing noise suppression utilizing null processing noise subtraction
US8204252B1 (en) 2006-10-10 2012-06-19 Audience, Inc. System and method for providing close microphone adaptive array processing
US8194880B2 (en) 2006-01-30 2012-06-05 Audience, Inc. System and method for utilizing omni-directional microphones for speech enhancement
US8744844B2 (en) 2007-07-06 2014-06-03 Audience, Inc. System and method for adaptive intelligent noise suppression
US8204253B1 (en) 2008-06-30 2012-06-19 Audience, Inc. Self calibration of audio device
US8849231B1 (en) 2007-08-08 2014-09-30 Audience, Inc. System and method for adaptive power control
US8949120B1 (en) 2006-05-25 2015-02-03 Audience, Inc. Adaptive noise cancelation
US8150065B2 (en) * 2006-05-25 2012-04-03 Audience, Inc. System and method for processing an audio signal
US8934641B2 (en) 2006-05-25 2015-01-13 Audience, Inc. Systems and methods for reconstructing decomposed audio signals
US8259926B1 (en) 2007-02-23 2012-09-04 Audience, Inc. System and method for 2-channel and 3-channel acoustic echo cancellation
US8189766B1 (en) 2007-07-26 2012-05-29 Audience, Inc. System and method for blind subband acoustic echo cancellation postfiltering
US8143620B1 (en) 2007-12-21 2012-03-27 Audience, Inc. System and method for adaptive classification of audio sources
US8180064B1 (en) 2007-12-21 2012-05-15 Audience, Inc. System and method for providing voice equalization
US8194882B2 (en) 2008-02-29 2012-06-05 Audience, Inc. System and method for providing single microphone noise suppression fallback
US8355511B2 (en) 2008-03-18 2013-01-15 Audience, Inc. System and method for envelope-based acoustic echo cancellation
US8751227B2 (en) * 2008-04-30 2014-06-10 Nec Corporation Acoustic model learning device and speech recognition device
US8521530B1 (en) * 2008-06-30 2013-08-27 Audience, Inc. System and method for enhancing a monaural audio signal
US8774423B1 (en) 2008-06-30 2014-07-08 Audience, Inc. System and method for controlling adaptivity of signal modification using a phantom coefficient
US9008329B1 (en) 2010-01-26 2015-04-14 Audience, Inc. Noise reduction using multi-feature cluster tracker
US8798290B1 (en) 2010-04-21 2014-08-05 Audience, Inc. Systems and methods for adaptive signal equalization
GB2504341A (en) * 2012-07-26 2014-01-29 Snell Ltd Determining the value of a specified quantile using iterative estimation
US9640194B1 (en) 2012-10-04 2017-05-02 Knowles Electronics, Llc Noise suppression for speech processing based on machine-learning mask estimation
US9536540B2 (en) 2013-07-19 2017-01-03 Knowles Electronics, Llc Speech signal separation and synthesis based on auditory scene analysis and speech modeling
DE112015003945T5 (en) 2014-08-28 2017-05-11 Knowles Electronics, Llc Multi-source noise reduction
US10109277B2 (en) * 2015-04-27 2018-10-23 Nuance Communications, Inc. Methods and apparatus for speech recognition using visual information
US9898847B2 (en) * 2015-11-30 2018-02-20 Shanghai Sunson Activated Carbon Technology Co., Ltd. Multimedia picture generating method, device and electronic device

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS58143394A (en) * 1982-02-19 1983-08-25 株式会社日立製作所 Detection/classification system for voice section
US4718096A (en) * 1983-05-18 1988-01-05 Speech Systems, Inc. Speech recognition system
JPS603700A (en) * 1983-06-22 1985-01-10 日本電気株式会社 Voice detection system
US4696039A (en) * 1983-10-13 1987-09-22 Texas Instruments Incorporated Speech analysis/synthesis system with silence suppression
US4821325A (en) * 1984-11-08 1989-04-11 American Telephone And Telegraph Company, At&T Bell Laboratories Endpoint detector
US4764966A (en) * 1985-10-11 1988-08-16 International Business Machines Corporation Method and apparatus for voice detection having adaptive sensitivity
US4742537A (en) * 1986-06-04 1988-05-03 Electronic Information Systems, Inc. Telephone line monitoring system
US5276765A (en) * 1988-03-11 1994-01-04 British Telecommunications Public Limited Company Voice activity detection
US5007000A (en) * 1989-06-28 1991-04-09 International Telesystems Corp. Classification of audio signals on a telephone line
US5062137A (en) * 1989-07-27 1991-10-29 Matsushita Electric Industrial Co., Ltd. Method and apparatus for speech recognition
DE69127134T2 (en) * 1990-05-28 1998-02-26 Matsushita Electric Ind Co Ltd Speech encoder
US5323322A (en) * 1992-03-05 1994-06-21 Trimble Navigation Limited Networked differential GPS system
US5323337A (en) * 1992-08-04 1994-06-21 Loral Aerospace Corp. Signal detector employing mean energy and variance of energy content comparison for noise detection
US5579431A (en) * 1992-10-05 1996-11-26 Panasonic Technologies, Inc. Speech detection in presence of noise by determining variance over time of frequency band limited energy
US5617508A (en) * 1992-10-05 1997-04-01 Panasonic Technologies Inc. Speech detection device for the detection of speech end points based on variance of frequency band limited energy
US5596680A (en) * 1992-12-31 1997-01-21 Apple Computer, Inc. Method and apparatus for detecting speech activity using cepstrum vectors
US5459814A (en) * 1993-03-26 1995-10-17 Hughes Aircraft Company Voice activity detector for speech signals in variable background noise
US5490204A (en) * 1994-03-01 1996-02-06 Safco Corporation Automated quality assessment system for cellular networks
EP0721257B1 (en) * 1995-01-09 2005-03-30 Daewoo Electronics Corporation Bit allocation for multichannel audio coder based on perceptual entropy
US5598466A (en) * 1995-08-28 1997-01-28 Intel Corporation Voice activity detector for half-duplex audio communication system

Also Published As

Publication number Publication date
EP0911806A2 (en) 1999-04-28
EP0911806B1 (en) 2003-02-12
US6134524A (en) 2000-10-17
DE69811310T2 (en) 2003-10-16
DE69811310D1 (en) 2003-03-20
EP0911806A3 (en) 2001-03-21

Similar Documents

Publication Publication Date Title
EP0911806B1 (en) Method and apparatus to detect and delimit foreground speech
US6023674A (en) Non-parametric voice activity detection
US7508948B2 (en) Reverberation removal
Martin Spectral subtraction based on minimum statistics
DE69831288T2 (en) Sound processing adapted to ambient noise
JP3626492B2 (en) Reduce background noise to improve conversation quality
US7155385B2 (en) Automatic gain control for adjusting gain during non-speech portions
Lin et al. Adaptive noise estimation algorithm for speech enhancement
US5970441A (en) Detection of periodicity information from an audio signal
CA2192397C (en) Method and system for performing speech recognition
US20060224382A1 (en) Noise reduction and audio-visual speech activity detection
DE69920461T2 (en) Method and apparatus for robust feature extraction for speech recognition
CN1210608A (en) Noisy speech parameter enhancement method and apparatus
MX2008013753A (en) Audio gain control using specific-loudness-based auditory event detection.
US6073152A (en) Method and apparatus for filtering signals using a gamma delay line based estimation of power spectrum
JPH0916194A (en) Noise reduction for voice signal
Hardwick et al. Speech enhancement using the dual excitation speech model
US5768392A (en) Blind adaptive filtering of unknown signals in unknown noise in quasi-closed loop system
Kokkinis et al. A Wiener filter approach to microphone leakage reduction in close-microphone applications
EP1533791A2 (en) Voice/unvoice determination and dialogue enhancement
JP4965891B2 (en) Signal processing apparatus and method
Mukai et al. Removal of residual cross-talk components in blind source separation using time-delayed spectral subtraction
EP1286334A2 (en) Method and circuit arrangement for reducing noise during voice communication in communications systems
Chhetri et al. Regression-based residual acoustic echo suppression
Hendriks et al. Speech reinforcement in noisy reverberant conditions under an approximation of the short-time SII

Legal Events

Date Code Title Description
EEER Examination request
FZDE Dead