US20030018472A1 - Vocoder-based voice recognizer - Google Patents

Vocoder-based voice recognizer

Info

Publication number
US20030018472A1
Authority
US
United States
Prior art keywords
vocoder
data
energy
word
lpc
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/051,350
Inventor
Yehuda Hershkovits
Gabriel Ilan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ART Advanced Recognition Technologies Inc
Original Assignee
ART Advanced Recognition Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ART Advanced Recognition Technologies Inc filed Critical ART Advanced Recognition Technologies Inc
Priority to US10/051,350 priority Critical patent/US20030018472A1/en
Assigned to ART ADVANCED RECOGNITION TECHNOLOGIES INC. reassignment ART ADVANCED RECOGNITION TECHNOLOGIES INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HERSHKOVITS, YEHUDA, ILAN, GABRIEL
Publication of US20030018472A1 publication Critical patent/US20030018472A1/en
Abandoned legal-status Critical Current


Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L13/00Speech synthesis; Text to speech systems
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems
    • G10L15/30Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/02Feature extraction for speech recognition; Selection of recognition unit
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78Detection of presence or absence of voice signals
    • G10L2025/783Detection of presence or absence of voice signals based on threshold decision


Abstract

A vocoder based voice recognizer recognizes a spoken word using linear prediction coding (LPC) based, vocoder data without completely reconstructing the voice data. The recognizer generates at least one energy estimate per frame of the vocoder data and searches for word boundaries in the vocoder data using the associated energy estimates. If a word is found, the LPC word parameters are extracted from the vocoder data associated with the word and recognition features are calculated from the extracted LPC word parameters. Finally, the recognition features are matched with previously stored recognition features of other words, thereby to recognize the spoken word.

Description

    FIELD OF THE INVENTION
  • The present invention relates to voice recognizers generally and to voice recognizers which use LPC vocoder data as input. [0001]
  • BACKGROUND OF THE INVENTION
  • Voice recognizers are well known in the art and are used in many applications. For example, voice recognition is used in command and control applications for mobile devices, in computer Dictaphones, in children's toys and in car telephones. In all of these systems, the voice signal is digitized and then parametrized. The parametrized input signal is compared to reference parametrized signals whose utterances are known. The recognized utterance is the utterance associated with the reference signal which best matches the input signal. [0002]
  • Voice recognition systems have found particular use in voice dialing systems where, when a user says the name of the person he wishes to call, the voice recognition system recognizes the name from a previously provided reference list and provides the phone number associated with the recognized name. The telephone then dials the number. The result is that the user is connected to his destination without having to look for the dialed number and/or use his hands to dial the number. [0003]
  • Voice dialing is especially important for car mobile telephones where the user is typically the driver of the car and thus, must continually concentrate on the road. If the driver wants to call someone, it is much safer that the driver speak the name of the person to be called, rather than dialing the number himself. [0004]
  • FIG. 1, to which reference is now made, shows the major elements of a digital mobile telephone. Typically, a mobile telephone includes a microphone 10, a speaker 12, a unit 14 which converts between analog and digital signals, a vocoder 16 implemented in a digital signal processing (DSP) chip labeled DSP-1, an operating system 18 implemented in a microcontroller or a central processing unit (CPU), a radio frequency interface unit 19 and an antenna 20. On transmit, the microphone 10 generates analog voice signals which are digitized by unit 14. The vocoder 16 compresses the voice samples to reduce the amount of data to be transmitted, via RF unit 19 and antenna 20, to another mobile telephone. The antenna 20 of the receiving mobile telephone provides the received signal, via RF unit 19, to vocoder 16 which, in turn, decompresses the received signal into voice samples. Unit 14 converts the voice samples to an analog signal which speaker 12 projects. The operating system 18 controls the operation of the mobile telephone. [0005]
  • For voice dialing systems, the mobile telephone additionally includes a voice recognizer 22, implemented in a separate DSP chip labeled DSP-2, which receives the digitized voice samples as input, parametrizes the voice signal and matches the parametrized input signal to reference voice signals. The voice recognizer 22 typically either provides the identification of the matched signal to the operating system 18 or, if a phone number is associated with the matched signal, the recognizer 22 provides the associated phone number. [0006]
  • FIG. 2, to which reference is now made, generally illustrates the operation of voice recognizer 22. The digitized voice samples are organized into frames, of a predetermined length such as 5-20 msec, and it is these frames which are provided (step 28) to recognizer 22. For each frame, the recognizer 22 first calculates (step 30) the energy of the frame. [0007]
  • FIG. 3, to which reference is now also made, illustrates the per frame energy for the spoken word “RICHARD”, as a function of time. The energy signal has two bumps 31 and 33, corresponding with the two syllables of the word. Where no word is spoken, as indicated by reference numeral 35, and even between syllables, the energy level is significantly lower. [0008]
  • Thus, the recognizer 22 searches (step 32 of FIG. 2) for the start and end of a word within the energy signal. The start of a word is defined as the point 37 where a significant rise in energy begins after the energy signal has been low for more than a predetermined length of time. The end of a word is defined as the point 39 where a significant drop in energy finishes after which the energy signal remains low for more than a predetermined length of time. In FIG. 3, the start point 37 occurs at about 0.37 sec and endpoint 39 occurs at about 0.85 sec. [0009]
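  • As a rough illustration of that endpoint search, the sketch below scans per-frame energies with a single threshold and a minimum quiet duration; the threshold and frame counts are illustrative assumptions, not values taken from this document.

```python
import numpy as np

def find_word_boundaries(energies, threshold, min_quiet_frames):
    """Return (start_frame, end_frame) of a word in a per-frame energy sequence.

    A start is declared at the first frame whose energy rises above `threshold`
    after at least `min_quiet_frames` consecutive low-energy frames; an end is
    declared where the energy drops below `threshold` and stays low for
    `min_quiet_frames` frames.  Returns None if no complete word is found.
    """
    quiet_run = min_quiet_frames        # assume silence before the recording
    start = None
    for i, e in enumerate(energies):
        if start is None:
            if e > threshold and quiet_run >= min_quiet_frames:
                start = i               # significant rise after a quiet stretch
            quiet_run = quiet_run + 1 if e <= threshold else 0
        else:
            if e <= threshold:
                quiet_run += 1
                if quiet_run >= min_quiet_frames:
                    return start, i - min_quiet_frames + 1   # drop finished here
            else:
                quiet_run = 0
    return None

# With 20 ms frames, a word from about 0.37 s to 0.85 s spans roughly frames 18-42.
```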
  • If a word is found, as checked in step 34, the voice recognizer 22 performs (step 36) a linear prediction coding (LPC) analysis to produce parameters of the spoken word. In step 38, the voice recognizer 22 calculates recognition features of the spoken word and, in step 40, the voice recognizer 22 searches for a match from among recognition features of reference words in a reference library. Alternatively, the voice recognizer 22 stores the recognition features in the reference library, in a process known as “training”. [0010]
  • Unfortunately, the voice recognition process is computationally intensive and, thus, must be implemented in the second DSP chip, DSP-2. This adds significant cost to the mobile telephone. [0011]
  • SUMMARY OF THE INVENTION
  • An object of the present invention is to provide a voice recognizer which operates with compressed voice data, compressed by LPC-based vocoders, rather than with sampled voice data, thereby reducing the amount of computation which the recognizer must perform. Accordingly, the voice recognition can be implemented in the microcontroller or CPU which also implements the operating system. Since the voice recognizer does not analyze the voice signal, the microcontroller or CPU can be one of limited processing power and/or one which does not receive the voice signal. [0012]
  • Moreover, the present invention provides a feature generator which can extract the same type of feature data, for use in recognition, from different types of LPC based vocoders. Thus, the present invention performs the same recognition (e.g. matching and training) operations on compressed voice data which is compressed by different types of LPC based vocoders. [0013]
  • There is therefore provided, in accordance with a preferred embodiment of the present invention, a method for recognizing a spoken word using linear prediction coding (LPC) based, vocoder data without completely reconstructing the voice data. The vocoder based recognizer implements the method described herein. The method includes the steps of generating at least one energy estimate per frame of the vocoder data and searching for word boundaries in the vocoder data using the associated energy estimates. If a word is found, the LPC word parameters are extracted from the vocoder data associated with the word and recognition features are calculated from the extracted LPC word parameters. Finally, the recognition features are matched with previously stored recognition features of other words, thereby to recognize the spoken word. [0014]
  • Additionally, in accordance with a preferred embodiment of the present invention, the energy is estimated from residual data found in the vocoder data. This estimation can be performed in many ways. In one embodiment, the residual data is reconstructed from the vocoder data and the estimate is formed from the norm of the residual data. In another embodiment, a pitch-gain value is extracted from the vocoder data and this value is used as the energy estimate. In a further embodiment, the pitch-gain values, lag values and remnant data are extracted from the vocoder data. A remnant signal is generated from the remnant data and from that, a remnant energy estimate is produced. A non-remnant energy estimate is produced from a non-remnant portion of the residual by using the pitch-gain value and a previous energy estimate defined by the lag value. Finally, the two energy estimates, remnant and non-remnant, are combined. [0015]
  • Moreover, in accordance with a preferred embodiment of the present invention, the vocoder data can be from any of the following vocoders: RPE-LTP full and half rate, QCELP 8 and 13 Kbps, EVRC, LD CELP, VSELP, CS ACELP, Enhanced Full Rate Vocoder and LPC10. [0016]
  • There is also provided, in accordance with a further preferred embodiment of the present invention, a digital cellular telephone which includes a mobile telephone operating system, an LPC based vocoder and a vocoder based voice recognizer. The recognizer includes a front end processor which processes the vocoder data to determine when a word was spoken and to generate recognition features of the spoken word and a recognizer which at least recognizes the spoken word as one of a set of reference words. [0017]
  • Further, in accordance with a preferred embodiment of the present invention, the front end processor includes an energy estimator, an LPC parameter extractor and a recognition feature generator. The energy estimator uses residual information forming part of the vocoder data to estimate the energy of a voice signal. The LPC parameter extractor extracts the LPC parameters of the vocoder data. The recognition feature generator generates the recognition features from the LPC parameters. [0018]
  • Still further, in accordance with a preferred embodiment of the present invention, the front end processor is selectably operable with multiple vocoder types. [0019]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the appended drawings in which: [0020]
  • FIG. 1 is a block diagram illustration of a prior art cellular telephone with voice recognition capabilities; [0021]
  • FIG. 2 is a flow chart illustration of a prior art, LPC-based, voice recognition method; [0022]
  • FIG. 3 is a graphical illustration of the energy of a spoken word; [0023]
  • FIG. 4 is a schematic illustration of a compressed voice data structure; [0024]
  • FIG. 5 is a block diagram illustration of a cellular telephone with a vocoder based voice recognizer, constructed and operative in accordance with a preferred embodiment of the present invention; [0025]
  • FIG. 6 is a flow chart illustration of a voice recognition method, in accordance with a preferred embodiment of the present invention; [0026]
  • FIG. 7 is a graphical illustration of the energy of a spoken word as estimated from a residual signal; [0027]
  • FIG. 8 is a graphical illustration of a residual signal, useful in understanding the operation of the present invention; [0028]
  • FIG. 9 is a block diagram illustration of a GSM decoder; and [0029]
  • FIG. 10 is a graphical illustration of the energy of a spoken word as estimated from an estimated residual signal. [0030]
  • DETAILED DESCRIPTION OF THE PRESENT INVENTION
  • There are many types of voice compression algorithms, the most common of which are those based on linear prediction coding (LPC). Applicants have realized that, since most voice recognition algorithms utilize linear prediction coding analysis in order to parametrize the voice signals, elements of the compressed voice signal can be provided to the voice recognizer to significantly reduce the amount of analysis which the voice recognizer must perform. Thus, the present invention is a vocoder based, voice recognizer to be implemented in the microcontroller or CPU of a cellular mobile telephone, as detailed hereinbelow with respect to FIGS. 5, 6 and 7. [0031]
  • Linear Prediction Analysis
  • The following is a short description of the operation of LPC based vocoder 16. A discussion of speech coding in general, which includes a more complete description of linear prediction coding than that provided here, can be found in the article “Speech Coding: A Tutorial Review” by Andreas S. Spanias, Proceedings of the IEEE, Vol. 82, No. 10, October 1994, pp. 1541-1582. [0032]
  • Vocoder 16 divides the voice signal into a series of frames, each of a length N, typically representing about 20 msec of the voice signal. On each frame, vocoder 16 performs linear prediction coding (LPC) analysis. [0033]
  • Linear prediction coding describes a voice signal y(n) as follows: [0034]
  • y(n) = α_1 y(n−1) + α_2 y(n−2) + … + α_p y(n−p) + ε(n)  Equation 1
  • where the α_i are known as the LPC coefficients and ε(n) is known as the residual signal. Typically, each frame has p LPC coefficients α_i and the residual signal ε(n) is of length N. The LPC coefficients and the residual signal form the parameters of the frame. The vocoder typically further parametrizes the residual signal ε(n) in terms of at least pitch and gain values. The vocoder can also generate any of the many types of LPC based parameters which are known in the art of LPC vocoders, such as cepstrum coefficients, MEL cepstrum coefficients, line spectral pairs (LSPs), reflection coefficients, log area ratio (LAR) coefficients, etc., all of which are easily calculated from the LPC coefficients. [0035]
  • The resultant values are then encoded, thereby producing a typical voice compression frame, such as frame 52 shown in FIG. 4 to which reference is now made. Voice compression frame 52 includes encoded and/or parametrized versions of the LPC coefficients α_i and encoded versions of the residual signal ε(n). [0036]
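  • As a point of reference for Equation 1, the sketch below performs a generic, textbook LPC analysis of one frame (autocorrelation method with the Levinson-Durbin recursion) and then forms the residual; it illustrates the relationship between y(n), the α_i and ε(n), and is not the analysis of any specific vocoder.

```python
import numpy as np

def lpc_analysis(frame, p):
    """Return LPC coefficients alpha[1..p] and the residual eps(n) for one frame."""
    frame = np.asarray(frame, dtype=float)
    # Autocorrelation values r[0..p] of the frame.
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:]) for k in range(p + 1)])
    a = np.zeros(p + 1)                 # a[i] will hold alpha_i
    err = r[0] + 1e-12                  # small floor avoids dividing by zero on silence
    for i in range(1, p + 1):           # Levinson-Durbin recursion
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / err
        a_next = a.copy()
        a_next[i] = k
        a_next[1:i] = a[1:i] - k * a[i - 1:0:-1]
        a, err = a_next, err * (1.0 - k * k)
    # Residual from Equation 1: eps(n) = y(n) - sum_i alpha_i * y(n - i).
    pred = np.zeros_like(frame)
    for i in range(1, p + 1):
        pred[i:] += a[i] * frame[:-i]
    return a[1:], frame - pred
```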
  • A Vocoder Based, Voice Recognizer
  • Reference is now made to FIG. 5 which illustrates a vocoder based, voice recognizer 50 within a cellular telephone. Since the cellular telephone is similar to the prior art telephone shown in FIG. 1, similar reference numerals refer to similar elements. Reference is also made to FIGS. 6 and 7 which are useful in understanding the operation of vocoder based, voice recognizer 50. [0037]
  • The cellular telephone of FIG. 5 includes microphone 10, speaker 12, conversion unit 14, vocoder 16, operating system 18, RF interface unit 19 and antenna 20. In addition, the cellular telephone of FIG. 5 includes vocoder based, voice recognizer 50 which receives the LPC-based compressed voice signal, which vocoder 16 produces, as input. [0038]
  • In accordance with a preferred embodiment of the present invention, the vocoder based, voice recognizer 50 is implemented in the device, labeled CPU 51, which also implements the operating system 18. Device 51 can be a CPU, as labeled, or a microcontroller. Since voice recognizer 50 does not analyze the voice signal, voice recognizer 50 can be implemented on any type of microcontroller or CPU, including those which have only limited processing power and those which do not receive the voice signal. [0039]
  • FIG. 6 illustrates, in general form, the operations of vocoder based, voice recognizer 50 on a compressed frame such as the frame 52. [0040]
  • As in the prior art, the energy of the frame is determined once the frame, in step 58, has been received. However, in the present invention, the energy is estimated (step 60) from the vocoder data, rather than from the sampled data, and the energy estimation does not involve reconstructing the sampled data. [0041]
  • Applicants have recognized that the residual signal ε(n) can be utilized to estimate the energy since, as is known in the art, the residual signal describes the air pressure through the vocal tract while the LPC parameters describe the structure of the vocal tract and are, thus, generally independent of speech volume. As a result, the residual signal is highly correlated to how loudly or quietly a person talks. [0042]
  • In accordance with a preferred embodiment of the present invention, one method of estimating the energy is to determine the energy in the residual signal, per frame, or, if the frames are divided into subframes, per subframe. Mathematically, this can be written as: [0043]

    \tilde{E}_i = \sum_{n=1}^{M} \varepsilon(n)^2   Equation 2

  • where Ẽ_i is the energy in the ith frame, the residual signal ε(n) is reconstructed from the vocoder data and the number M is the number of sample points in the frame or subframe. [0044]
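  • A minimal sketch of Equation 2, assuming the residual has already been decoded from the vocoder frame and is split into subframes of M samples:

```python
import numpy as np

def residual_energies(residual, M):
    """Per-subframe energy estimates, E_i = sum of eps(n)^2 over M samples (Equation 2)."""
    residual = np.asarray(residual, dtype=float)
    n_sub = len(residual) // M
    subframes = residual[:n_sub * M].reshape(n_sub, M)
    return np.sum(subframes ** 2, axis=1)
```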
  • FIG. 7 illustrates the estimated energy signal produced from the reconstructed residual signals of the voiced word “RICHARD”. As can be seen, the estimated energy signal of FIG. 7 is not a replica of the energy signal of FIG. 3. However, the estimated energy signal is highly correlated with the prior art energy signal. The start and end points for the signal of FIG. 7, labeled 62 and 63, respectively, are also at about 0.37 sec and 0.85 sec, respectively. [0045]
  • Other methods of estimating the energy from the vocoder data are incorporated in the present invention, some of which are described hereinbelow. [0046]
  • Returning to FIG. 6, the vocoder based, voice recognizer 50 searches (step 64) for word boundaries in the estimated energy signal. If desired, voice recognizer 50 can refine the location of the word boundaries by using any of the characteristics of the LPC parameters (such as their mean and/or variance) which change sharply at a word boundary. [0047]
  • If a word is found, as checked by step 66, recognizer 50 extracts (step 68) the LPC word parameters from the vocoder data. Step 68 typically involves decoding the encoded LPC parameters provided in voice compression frame 52 and converting them to the LPC coefficients. [0048]
  • Recognizer 50 then calculates (step 70) its recognition features from the extracted LPC coefficients. These recognition features can be any of the many LPC based parameters, such as cepstrum coefficients, MEL cepstrum coefficients, line spectral pairs (LSPs), reflection coefficients, log area ratio (LAR) coefficients, etc., all of which are easily calculated from the LPC coefficients. Thus, if the vocoder uses one type of LPC parameter and the recognizer 50 uses another type of LPC parameter, recognizer 50 can convert from one to the other either directly or through the LPC coefficients. [0049]
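  • As one example of such a conversion, the widely used recursion from LPC coefficients to LPC-derived cepstrum coefficients can be written as below; this is a generic formulation (with α_i as defined in Equation 1), offered only to illustrate the kind of computation step 70 may perform.

```python
import numpy as np

def lpc_to_cepstrum(alpha, n_ceps):
    """Convert LPC coefficients alpha[1..p] into cepstrum coefficients c[1..n_ceps]
    via the recursion c_n = alpha_n + sum_{k=1}^{n-1} (k/n) * c_k * alpha_{n-k}."""
    p = len(alpha)
    c = np.zeros(n_ceps + 1)
    for n in range(1, n_ceps + 1):
        acc = alpha[n - 1] if n <= p else 0.0
        for k in range(1, n):
            if n - k <= p:
                acc += (k / n) * c[k] * alpha[n - k - 1]
        c[n] = acc
    return c[1:]
```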
  • Finally, recognizer 50 utilizes the recognition features produced in step 70 to either recognize the input signal as one of the reference words in its reference library or to train a new reference word into its library. Since the recognition features produced by recognizer 50 can be the same as those used in the prior art, this step is equivalent to the recognition/training step 40 of the prior art and thus is so labeled. The book, Fundamentals of Speech Recognition, by Lawrence Rabiner and Biing Hwang Juang, Prentice-Hall, 1993, describes suitable recognizers 50 and is incorporated herein by reference. [0050]
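  • One classical realization of the recognition/training step, in the spirit of the Rabiner and Juang text cited above, is template matching with dynamic time warping (DTW); the sketch below is a generic DTW matcher under that assumption, not necessarily the matcher used in any particular product.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic-time-warping distance between two feature sequences,
    each shaped (n_frames, n_features)."""
    na, nb = len(seq_a), len(seq_b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[na, nb] / (na + nb)            # length-normalized path cost

def recognize(input_features, reference_library):
    """Return the reference word whose stored features are closest to the input;
    training simply adds a new (name, features) entry to `reference_library`."""
    return min(reference_library,
               key=lambda name: dtw_distance(input_features, reference_library[name]))
```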
  • It will be appreciated that steps 60-70 convert from the vocoder data to the recognition features needed for the recognition/training step. There are many LPC based vocoders, each of which performs somewhat different operations on the voice signal. Steps 60-70 can be tailored to each type of vocoder, in order to produce the same recognition features, regardless of vocoder type. Thus, steps 60-70 form a processing “front end” to the recognition/training step 40. [0051]
  • The present invention incorporates a vocoder based, voice recognizer which has a plurality of front ends and a single recognition/training unit. This is particularly useful for those mobile telephones which are sold to operate with multiple types of digital cellular telephone systems, each of which uses a different type of vocoder. With many front ends, the voice recognizer of the present invention can operate with many vocoder types. [0052]
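  • Structurally, the “plurality of front ends, single recognition/training unit” arrangement can be pictured as a dispatch on vocoder type. The registry and function names below are hypothetical placeholders, shown only to make the architecture concrete.

```python
def rpe_ltp_front_end(compressed_frames):
    """Placeholder: decode GSM RPE-LTP frames into energies and recognition features."""
    raise NotImplementedError

def qcelp_front_end(compressed_frames):
    """Placeholder: decode QCELP frames into energies and recognition features."""
    raise NotImplementedError

# One front end per supported vocoder, all producing features in a common format.
FRONT_ENDS = {"RPE-LTP": rpe_ltp_front_end, "QCELP": qcelp_front_end}

def process_utterance(vocoder_type, compressed_frames, recognizer):
    features = FRONT_ENDS[vocoder_type](compressed_frames)   # vocoder-specific step
    return recognizer.recognize(features)                    # shared recognition unit
```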
  • Energy Estimation Methods for use in Determining the Word Boundaries
  • Some simple vocoders, such as the vocoder known as the LPC10 described in the US Department of Defense standard 1015 V.53, describe the residual signal ε(n) with just the pitch and gain values. FIG. 8, to which reference is now made, illustrates an exemplary residual signal, of a voiced signal, which has a series of repeating peaks 70, all of approximately the same magnitude. The distance between peaks 70 is defined as the pitch P and the magnitude of the peaks 70 is defined as the gain G. A non-voiced signal has a gain value but no pitch value. [0053]
  • Thus, the energy of the residual signal of the frame or subframe can be estimated by the gain value G. In this example, the energy of the frame or subframe is not estimated by reconstructing the residual signal ε(n) but by extracting the gain value G, a parameter of the residual signal ε(n), from the compressed voice data. [0054]
  • Other vocoders, such as the vocoders used in Global System for Mobile Communications (GSM), Time Division Multiple Access (TDMA) and Code Division Multiple Access (CDMA) digital cellular communication systems, correlate the residual signal of the current frame or subframe with a concatenated version of the residual signals of previous frames. The point at which the residual signal of the current frame most closely matches previous residual signals, when multiplied by a pitch gain PG, is known as the LAG value. The vocoders then determine a “remnant signal” which is the difference between the previous residual signal multiplied by the pitch gain PG and the current residual signal. The current residual signal is then characterized by the pitch gain PG, the LAG value and the remnant signal. [0055]
  • For the latter type of vocoder, the energy of the current frame or subframe, i, can be estimated from the remnant signal and from the non-remnant portion of the residual signal, by: [0056]

    \tilde{E}_i = \sqrt[m]{E_{LAG}^m + E_{rem}^m} \qquad (m = 1 \text{ or } 2)

    E_{LAG} = PG \cdot \frac{1}{FL} \left\{ (LAG \bmod 40)\, E_{i - \lceil LAG/FL \rceil} + (FL - LAG \bmod 40)\, E_{i - \lfloor LAG/FL \rfloor} \right\}   Equation 3

  • where E_rem is the energy estimate of the remnant signal and E_LAG is the non-remnant energy of the residual, as determined from the pitch gain and from the energy of the frame or subframe which is ⌈LAG/FL⌉ frames or subframes behind the current frame or subframe. [0057]
  • The former can be produced by reconstructing the remnant signal, a relatively simple operation, or by any other method. The symbols ⌈ ⌉ and ⌊ ⌋ indicate the “ceiling” and “floor” operations, respectively, and the mth root operation need not be performed. [0058]
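  • Under the reading of Equation 3 given above (whose exact layout required some reconstruction), a per-subframe estimate might be sketched as follows; `prev_energies` is assumed to hold the already-computed estimates of earlier frames or subframes, and FL = 40 corresponds to the GSM case discussed below.

```python
import math

def combined_energy(pg, lag, e_rem, prev_energies, i, FL=40, m=2):
    """Energy estimate of subframe i from remnant and non-remnant parts (Equation 3).

    pg            -- pitch gain PG decoded from the vocoder data
    lag           -- LAG value decoded from the vocoder data
    e_rem         -- remnant energy estimate (e.g. from Equation 5)
    prev_energies -- energy estimates of earlier subframes, indexed like i
    """
    w = lag % 40
    e_lag = pg * (1.0 / FL) * (w * prev_energies[i - math.ceil(lag / FL)]
                               + (FL - w) * prev_energies[i - math.floor(lag / FL)])
    # The m-th root need not be taken when the value is only compared against thresholds.
    return (e_lag ** m + e_rem ** m) ** (1.0 / m)
```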
  • Energy Estimation for GSM Vocoders
  • Reference is now briefly made to FIG. 9 which illustrates the decoder portion of a vocoder which forms part of the GSM standard. FIG. 9 is similar to FIG. 3.4 of the March 1992 version of the I-ETS 300 036 specification from the European Telecommunications Standards Institute, found on page 34 thereof. The details of the decoder are provided in the above-identified specification, which is incorporated herein by reference. For clarity, only the aspects of the decoder necessary for understanding the energy and feature calculations of the present invention are provided hereinbelow. [0059]
  • FIG. 9 indicates input data with thick lines and internal signals with thin lines. The input data includes the values M_cr, x_maxcr, x_mcr, b_cr, N_cr and LAR_cr, all of which are defined in the I-ETS specification. [0060]
  • FIG. 9 shows that the decoder includes an RPE decoder 80, a long term predictor 84, a short term synthesis filter 86, and a de-emphasizer 88. The RPE decoder 80 receives the M_cr, x_maxcr and x_mcr signals and generates a remnant signal e_r′. The long term predictor 84 uses the b_cr and N_cr signals to generate a residual signal d_r′ from the remnant signal e_r′. The short term synthesis filter 86 generates the voice signal from the residual signal d_r′ and the short term LPC parameters, transmitted in the form of the LAR_cr data. [0061]
  • One energy calculation, similar to that described hereinabove, takes the first or second norm of the residual signal d_r′, as follows: [0062]

    \tilde{E}_i = \sum_{n=0}^{39} \left| d_r'[n] \right|^m \qquad (m = 1 \text{ or } 2)   Equation 4
  • Another energy calculation uses the remnant signal e_r′ and the internal data values b_r′ and N_r′ of the long term predictor 84. Specifically, predictor 84 includes a parameter decoder 90, a delay unit 92, a multiplier 94 and a summer 96. Decoder 90 converts the input values b_cr and N_cr to the internal data values b_r′ and N_r′, where b_r′ is a multiplier, similar to the pitch gain PG discussed hereinabove, and N_r′ is a delay amount, similar to the value LAG discussed hereinabove. Long term predictor 84 adds the signal d_r″ to the remnant signal e_r′, where the signal d_r″ is the previous residual signal d_r′, delayed by N_r′ samples in delay unit 92 and multiplied by the amount b_r′ via multiplier 94. [0063]
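  • That description of the long term predictor can be sketched directly: each 40-sample subframe's residual is the decoded remnant plus a gain-scaled copy of the residual N_r′ samples earlier. The code below is a simplified, non-bit-exact rendering for illustration only.

```python
import numpy as np

def reconstruct_residual(remnants, gains, lags, subframe_len=40, history_len=160):
    """Rebuild d_r' subframe by subframe: d_r'[n] = e_r'[n] + b_r' * d_r'[n - N_r'].

    remnants -- decoded remnant subframes e_r' (each of length subframe_len)
    gains    -- decoded long-term gains b_r', one per subframe
    lags     -- decoded long-term lags N_r', one per subframe
    """
    history = np.zeros(history_len)                 # previously reconstructed residual
    subframes = []
    for e, b, lag in zip(remnants, gains, lags):
        d = np.empty(subframe_len)
        for n in range(subframe_len):
            past = d[n - lag] if n >= lag else history[n - lag]
            d[n] = e[n] + b * past
        subframes.append(d)
        history = np.concatenate([history, d])[-history_len:]
        # A per-subframe energy for the word-boundary search could be np.sum(d**2) here.
    return np.concatenate(subframes)
```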
  • The energy can be estimated using Equation 3, where N_r′ and b_r′ replace the LAG and PG values and FL is set to 40. Furthermore, the energy estimate of the remnant, E_rem, is calculated by: [0064]

    E_{rem} = \sum_{n=0}^{39} \left| e_r'[n] \right|^m   Equation 5
  • FIG. 10, to which reference is now briefly made, shows the estimated energy using the above calculation. The start and stop word boundaries, labeled 98 and 99 respectively, occur at the same locations as in the prior art. [0065]
  • Another method of estimating the energy from the extracted parameters also uses N_r′ and b_r′ as above, with FL set to 40, and estimates the energy estimate of the remnant, E_rem, as: [0066]

    E_{rem} = \left| x_{maxcr} \right|^m   Equation 6
  • Returning to FIG. 9, the LPC word parameters are extracted from the transmitted data within the short term synthesis filter 86 which includes an LAR decoder 100, an interpolator 102, a reflection coefficients determining unit 104 and a filter 106. Together, units 100, 102 and 104 convert the received LAR_cr data to the reflection coefficients r_r′, where the latter are easily transformed into LPC coefficients. [0067]
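  • The last conversion, from reflection coefficients to LPC coefficients, can be done with the standard step-up recursion sketched below; sign conventions differ between references and codecs, so an actual implementation should be checked against the I-ETS specification.

```python
import numpy as np

def reflection_to_lpc(reflection):
    """Step-up recursion: convert reflection coefficients k_1..k_p into the LPC
    coefficients alpha_1..alpha_p of the same-order predictor (Equation 1 convention)."""
    alpha = np.array([])
    for i, k in enumerate(reflection, start=1):
        alpha_next = np.empty(i)
        alpha_next[:i - 1] = alpha - k * alpha[::-1]   # update lower-order coefficients
        alpha_next[i - 1] = k                          # the new i-th coefficient is k_i
        alpha = alpha_next
    return alpha
```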
  • As mentioned hereinabove with respect to FIG. 6, once the LPC coefficients are extracted, they are transformed (step 70) into the recognition features which the recognizer/training step requires. [0068]
  • It will be appreciated by those skilled in the art that, while a full explanation has been provided for the vocoder of the GSM digital cellular communication system, the present invention is applicable to all types of digital cellular communication systems and to all types of LPC-based vocoders. For each type of vocoder, the type of information stored in the compressed voice data must be analyzed to determine how to utilize it for the energy and feature calculations. The compressed voice data is described in detail in the standard defining each vocoder. [0069]
  • The following table lists some currently available cellular communication systems, the vocoders they work with and the standards defining the vocoders and/or the systems. [0070]
    Digital Cellular Communication System | LPC-based Vocoder          | Standard
    GSM                                   | RPE-LTP full rate          | I-ETS 300 036 6.1
    GSM                                   | RPE-LTP half rate          | I-ETS 300 581-2 ver. 4
    CDMA                                  | QCELP 8 Kbps, 13 Kbps      | IS 96 A
    CDMA                                  | EVRC                       | IS 127
    CDMA                                  | LD CELP                    | ITU G.728
    TDMA                                  | VSELP                      | IS 54 B
    PHS, PCS                              | CS ACELP                   | ITU G.729
    PCS-TDMA                              | Enhanced Full Rate Vocoder | IS 641
    PDC (in Japan)                        | VSELP                      | RCR STD 27
  • It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the invention is defined by the claims that follow: [0071]

Claims (20)

1. A method for recognizing a spoken word using linear prediction coding (LPC) based, vocoder data without completely reconstructing the voice data, the vocoder data formed into a series of frames, the method comprising the steps of:
generating at least one energy estimate per frame of said vocoder data;
searching for word boundaries in said vocoder data using the associated energy estimates;
if a word is found, extracting the LPC word parameters from the vocoder data associated with said word;
calculating recognition features from said extracted LPC word parameters; and
matching said recognition features with previously stored recognition features of other words, thereby to recognize the spoken word.
2. A method for preparing to recognize a spoken word using linear prediction coding (LPC) based, vocoder data without completely reconstructing the voice data, the vocoder data formed into a series of frames, the method comprising the steps of:
generating at least one energy estimate per frame of said vocoder data;
searching for word boundaries in said vocoder data using the associated energy estimates;
if a word is found, extracting the LPC word parameters from the vocoder data associated with said word;
calculating recognition features from said extracted LPC word parameters.
3. A method according to claim 2 and wherein said step of generating comprises the step of estimating the energy from residual data found in said vocoder data.
4. A method according to claim 3 and wherein said step of estimating comprises the steps of reconstructing residual data from said vocoder data and generating the norm of said residual data.
5. A method according to claim 3 and wherein said step of estimating comprises the steps of extracting a pitch-gain value from said vocoder data and using said extracted pitch-gain value as said energy estimate.
6. A method according to claim 3 and wherein said step of generating comprises the steps of:
extracting pitch-gain values, lag values and remnant data from said vocoder data;
reconstructing a remnant signal from said remnant data;
generating an energy estimate of said remnant signal;
generating an energy estimate of a non-remnant portion of said residual by using said pitch-gain value and a previous energy estimate defined by said lag value; and
combining said remnant and non-remnant energy estimates.
7. A method according to claim 1 and wherein said vocoder data is of the type produced by any of the following vocoders: RPE-LTP full and half rate, QCELP 8 and 13 Kbps, EVRC, LD CELP, VSELP, CS ACELP, Enhanced Full Rate Vocoder and LPC10.
8. A method according to claim 2 and wherein said vocoder data is of the type produced by any of the following vocoders: RPE-LTP full and half rate, QCELP 8 and 13 Kbps, EVRC, LD CELP, VSELP, CS ACELP, Enhanced Full Rate Vocoder and LPC10.
9. The use of LPC-based vocoder data as an input to a voice recognition system.
10. A digital cellular telephone comprising:
a mobile telephone operating system;
a vocoder which compresses a voice signal using at least linear prediction coding (LPC) thereby to produce vocoder data; and
a vocoder based voice recognizer comprising:
a front end processor which processes said vocoder data to determine when a word was spoken and to generate recognition features of said spoken word; and
a recognizer which at least recognizes said spoken word as one of a set of reference words.
11. A digital cellular telephone according to claim 10 and wherein said front end processor includes:
an energy estimator which uses residual information forming part of said vocoder data to estimate the energy of a voice signal;
an LPC parameter extractor which extracts the LPC parameters of said vocoder data; and
a recognition feature generator which generates said recognition features from said LPC parameters.
12. A cellular telephone according to claim 10 and wherein said front end processor is selectably operable with multiple vocoder types.
13. A cellular telephone according to claim 10 and wherein said vocoder is any of the following vocoders: RPE-LTP full and half rate, QCELP 8 and 13 Kbps, EVRC, LD CELP, VSELP, CS ACELP, Enhanced Full Rate Vocoder and LPC10.
14. A vocoder based voice recognizer operable with the data produced by an LPC-based vocoder, the voice recognizer comprising:
a front end processor which processes said vocoder data to determine when a word was spoken and to generate recognition features of said spoken word; and
a recognizer which at least recognizes said spoken word as one of a set of reference words.
15. A voice recognizer according to claim 14 and wherein said front end processor comprises:
an energy estimator which uses residual information forming part of said vocoder data to estimate the energy of a voice signal;
an LPC parameter extractor which extracts the LPC parameters of said vocoder data; and
a recognition feature generator which generates said recognition features from said LPC parameters.
16. A voice recognizer according to claim 15 and wherein said energy estimator comprises a residual energy estimator which estimates the energy from residual data found in said vocoder data.
17. A voice recognizer according to claim 16 and wherein said residual energy estimator comprises a residual reconstructor which reconstructs residual data from said vocoder data and a norm generator which generates the norm of said residual data thereby to produce said energy estimate.
18. A voice recognizer according to claim 16 and wherein said residual energy estimator comprises an extractor which extracts a pitch-gain value from said vocoder data thereby to produce said energy estimate.
19. A voice recognizer according to claim 16 and wherein said residual energy estimator comprises:
an extractor which extracts pitch-gain values, lag values and remnant data from said vocoder data;
a reconstructor which reconstructs a remnant signal from said remnant data;
a remnant energy estimator which generates an energy estimate of said remnant signal;
a non-remnant energy estimator which generates an energy estimate of a non-remnant portion of said residual by using said pitch-gain value and a previous energy estimate defined by said lag value; and
a combiner which combines said remnant and non-remnant energy estimates thereby to produce said energy estimate.
20. A voice recognizer according to claim 14 and wherein said vocoder is any of the following vocoders: RPE-LTP full and half rate, QCELP 8 and 13 Kbps, EVRC, LD CELP, VSELP, CS ACELP, Enhanced Full Rate Vocoder and LPC10.
US10/051,350 1998-01-08 2002-01-22 Vocoder-based voice recognizer Abandoned US20030018472A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/051,350 US20030018472A1 (en) 1998-01-08 2002-01-22 Vocoder-based voice recognizer

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/002,616 US6003004A (en) 1998-01-08 1998-01-08 Speech recognition method and system using compressed speech data
US09/412,406 US6377923B1 (en) 1998-01-08 1999-10-05 Speech recognition method and system using compression speech data
US10/051,350 US20030018472A1 (en) 1998-01-08 2002-01-22 Vocoder-based voice recognizer

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/412,406 Continuation US6377923B1 (en) 1998-01-08 1999-10-05 Speech recognition method and system using compression speech data

Publications (1)

Publication Number Publication Date
US20030018472A1 true US20030018472A1 (en) 2003-01-23

Family

ID=21701631

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/002,616 Expired - Lifetime US6003004A (en) 1998-01-08 1998-01-08 Speech recognition method and system using compressed speech data
US09/412,406 Expired - Lifetime US6377923B1 (en) 1998-01-08 1999-10-05 Speech recognition method and system using compression speech data
US10/051,350 Abandoned US20030018472A1 (en) 1998-01-08 2002-01-22 Vocoder-based voice recognizer

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US09/002,616 Expired - Lifetime US6003004A (en) 1998-01-08 1998-01-08 Speech recognition method and system using compressed speech data
US09/412,406 Expired - Lifetime US6377923B1 (en) 1998-01-08 1999-10-05 Speech recognition method and system using compression speech data

Country Status (12)

Country Link
US (3) US6003004A (en)
EP (1) EP1046154B1 (en)
JP (1) JP2001510595A (en)
KR (1) KR100391287B1 (en)
CN (1) CN1125432C (en)
AT (1) ATE282881T1 (en)
AU (1) AU8355398A (en)
DE (1) DE69827667T2 (en)
IL (1) IL132449A (en)
RU (1) RU99124623A (en)
TW (1) TW394925B (en)
WO (1) WO1999035639A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050149333A1 (en) * 2003-12-31 2005-07-07 Sebastian Thalanany System and method for providing talker arbitration in point-to-point/group communication
US20060232868A1 (en) * 2002-02-26 2006-10-19 Wu David C System and method of performing digital multi-channel audio signal decoding
US11153472B2 (en) 2005-10-17 2021-10-19 Cutting Edge Vision, LLC Automatic upload of pictures from a camera

Families Citing this family (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6370504B1 (en) * 1997-05-29 2002-04-09 University Of Washington Speech recognition on MPEG/Audio encoded files
US6134283A (en) * 1997-11-18 2000-10-17 Amati Communications Corporation Method and system for synchronizing time-division-duplexed transceivers
US6003004A (en) * 1998-01-08 1999-12-14 Advanced Recognition Technologies, Inc. Speech recognition method and system using compressed speech data
KR100277105B1 (en) * 1998-02-27 2001-01-15 윤종용 Apparatus and method for determining speech recognition data
US6223157B1 (en) * 1998-05-07 2001-04-24 Dsc Telecom, L.P. Method for direct recognition of encoded speech data
JP4081858B2 (en) * 1998-06-04 2008-04-30 ソニー株式会社 Computer system, computer terminal device, and recording medium
US6321197B1 (en) * 1999-01-22 2001-11-20 Motorola, Inc. Communication device and method for endpointing speech utterances
US6411926B1 (en) * 1999-02-08 2002-06-25 Qualcomm Incorporated Distributed voice recognition system
US6792405B2 (en) * 1999-12-10 2004-09-14 At&T Corp. Bitstream-based feature extraction method for a front-end speech recognizer
US6795698B1 (en) * 2000-04-12 2004-09-21 Northrop Grumman Corporation Method and apparatus for embedding global positioning system (GPS) data in mobile telephone call data
US6564182B1 (en) 2000-05-12 2003-05-13 Conexant Systems, Inc. Look-ahead pitch determination
US6999923B1 (en) * 2000-06-23 2006-02-14 International Business Machines Corporation System and method for control of lights, signals, alarms using sound detection
US7203651B2 (en) * 2000-12-07 2007-04-10 Art-Advanced Recognition Technologies, Ltd. Voice control system with multiple voice recognition engines
US7155387B2 (en) * 2001-01-08 2006-12-26 Art - Advanced Recognition Technologies Ltd. Noise spectrum subtraction method and system
US7089184B2 (en) * 2001-03-22 2006-08-08 Nurv Center Technologies, Inc. Speech recognition for recognizing speaker-independent, continuous speech
US20030028386A1 (en) * 2001-04-02 2003-02-06 Zinser Richard L. Compressed domain universal transcoder
US7319703B2 (en) * 2001-09-04 2008-01-15 Nokia Corporation Method and apparatus for reducing synchronization delay in packet-based voice terminals by resynchronizing during talk spurts
US7050969B2 (en) * 2001-11-27 2006-05-23 Mitsubishi Electric Research Laboratories, Inc. Distributed speech recognition with codec parameters
US7024353B2 (en) * 2002-08-09 2006-04-04 Motorola, Inc. Distributed speech recognition with back-end voice activity detection apparatus and method
US20040073428A1 (en) * 2002-10-10 2004-04-15 Igor Zlokarnik Apparatus, methods, and programming for speech synthesis via bit manipulations of compressed database
FI20021936A (en) * 2002-10-31 2004-05-01 Nokia Corp Variable speed voice codec
CN1302454C (en) * 2003-07-11 2007-02-28 中国科学院声学研究所 Method for rebuilding probability weighted average deletion characteristic data of speech recognition
KR100647290B1 (en) * 2004-09-22 2006-11-23 삼성전자주식회사 Voice encoder/decoder for selecting quantization/dequantization using synthesized speech-characteristics
US7533018B2 (en) * 2004-10-19 2009-05-12 Motorola, Inc. Tailored speaker-independent voice recognition system
US20060095261A1 (en) * 2004-10-30 2006-05-04 Ibm Corporation Voice packet identification based on celp compression parameters
US20060224381A1 (en) * 2005-04-04 2006-10-05 Nokia Corporation Detecting speech frames belonging to a low energy sequence
GB0710211D0 (en) * 2007-05-29 2007-07-11 Intrasonics Ltd AMR Spectrography
US20090094026A1 (en) * 2007-10-03 2009-04-09 Binshi Cao Method of determining an estimated frame energy of a communication
US9208796B2 (en) 2011-08-22 2015-12-08 Genband Us Llc Estimation of speech energy based on code excited linear prediction (CELP) parameters extracted from a partially-decoded CELP-encoded bit stream and applications of same
KR20220140002A (en) * 2013-04-05 2022-10-17 돌비 레버러토리즈 라이쎈싱 코오포레이션 Companding apparatus and method to reduce quantization noise using advanced spectral extension
CN104683959B (en) * 2013-11-27 2018-09-18 深圳市盛天龙视听科技有限公司 Instant messaging type portable audio and its account loading method
KR20150096217A (en) * 2014-02-14 2015-08-24 한국전자통신연구원 Digital data compressing method and device thereof
TWI631556B (en) * 2017-05-05 2018-08-01 英屬開曼群島商捷鼎創新股份有限公司 Device and method for data compression
US10460749B1 (en) * 2018-06-28 2019-10-29 Nuvoton Technology Corporation Voice activity detection using vocal tract area information

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3909532A (en) * 1974-03-29 1975-09-30 Bell Telephone Labor Inc Apparatus and method for determining the beginning and the end of a speech utterance
US4475189A (en) * 1982-05-27 1984-10-02 At&T Bell Laboratories Automatic interactive conference arrangement
US4519094A (en) * 1982-08-26 1985-05-21 At&T Bell Laboratories LPC Word recognizer utilizing energy features
US4866777A (en) * 1984-11-09 1989-09-12 Alcatel Usa Corporation Apparatus for extracting features from a speech signal
US4908865A (en) * 1984-12-27 1990-03-13 Texas Instruments Incorporated Speaker independent speech recognition method and system
US5548647A (en) * 1987-04-03 1996-08-20 Texas Instruments Incorporated Fixed text speaker verification method and apparatus
US5208897A (en) * 1990-08-21 1993-05-04 Emerson & Stern Associates, Inc. Method and apparatus for speech recognition based on subsyllable spellings
US5371853A (en) * 1991-10-28 1994-12-06 University Of Maryland At College Park Method and system for CELP speech coding and codebook for use therewith
US5305422A (en) * 1992-02-28 1994-04-19 Panasonic Technologies, Inc. Method for determining boundaries of isolated words within a speech signal
GB2272554A (en) * 1992-11-13 1994-05-18 Creative Tech Ltd Recognizing speech by using wavelet transform and transient response therefrom
ZA948426B (en) * 1993-12-22 1995-06-30 Qualcomm Inc Distributed voice recognition system
AU684872B2 (en) * 1994-03-10 1998-01-08 Cable And Wireless Plc Communication system
US5704009A (en) * 1995-06-30 1997-12-30 International Business Machines Corporation Method and apparatus for transmitting a voice sample to a voice activated data processing system
US6003004A (en) * 1998-01-08 1999-12-14 Advanced Recognition Technologies, Inc. Speech recognition method and system using compressed speech data

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060232868A1 (en) * 2002-02-26 2006-10-19 Wu David C System and method of performing digital multi-channel audio signal decoding
US7912153B2 (en) * 2002-02-26 2011-03-22 Broadcom Corp. System and method of performing digital multi-channel audio signal decoding
US20110211658A1 (en) * 2002-02-26 2011-09-01 David Chaohua Wu System and method of performing digital multi-channel audio signal decoding
US20050149333A1 (en) * 2003-12-31 2005-07-07 Sebastian Thalanany System and method for providing talker arbitration in point-to-point/group communication
US7558736B2 (en) * 2003-12-31 2009-07-07 United States Cellular Corporation System and method for providing talker arbitration in point-to-point/group communication
US11153472B2 (en) 2005-10-17 2021-10-19 Cutting Edge Vision, LLC Automatic upload of pictures from a camera
US11818458B2 (en) 2005-10-17 2023-11-14 Cutting Edge Vision, LLC Camera touchpad

Also Published As

Publication number Publication date
CN1125432C (en) 2003-10-22
JP2001510595A (en) 2001-07-31
DE69827667D1 (en) 2004-12-23
US6377923B1 (en) 2002-04-23
CN1273662A (en) 2000-11-15
KR100391287B1 (en) 2003-07-12
AU8355398A (en) 1999-07-26
IL132449A (en) 2005-07-25
EP1046154B1 (en) 2004-11-17
EP1046154A4 (en) 2001-02-07
TW394925B (en) 2000-06-21
KR20010006401A (en) 2001-01-26
EP1046154A1 (en) 2000-10-25
DE69827667T2 (en) 2005-10-06
WO1999035639A1 (en) 1999-07-15
US6003004A (en) 1999-12-14
IL132449A0 (en) 2001-03-19
RU99124623A (en) 2001-09-27
ATE282881T1 (en) 2004-12-15

Similar Documents

Publication Publication Date Title
US6377923B1 (en) Speech recognition method and system using compression speech data
KR100923896B1 (en) Method and apparatus for transmitting speech activity in distributed voice recognition systems
US6411926B1 (en) Distributed voice recognition system
US6381569B1 (en) Noise-compensated speech recognition templates
EP1395978B1 (en) Method and apparatus for speech reconstruction in a distributed speech recognition system
US7089178B2 (en) Multistream network feature processing for a distributed speech recognition system
EP1006509B1 (en) Automatic speech/speaker recognition over digital wireless channels
US20030004720A1 (en) System and method for computing and transmitting parameters in a distributed voice recognition system
EP1588354B1 (en) Method and apparatus for speech reconstruction
JP2004527006A (en) System and method for transmitting voice active status in a distributed voice recognition system
US20040148160A1 (en) Method and apparatus for noise suppression within a distributed speech recognition system
WO2002103675A1 (en) Client-server based distributed speech recognition system architecture
CA2297191A1 (en) A vocoder-based voice recognizer
WO2008001991A1 (en) Apparatus and method for extracting noise-robust speech recognition vector by sharing preprocessing step used in speech coding
JP3523579B2 (en) Speech recognition system
Kader EFFECT OF GSM SYSTEM ON TEXT-INDEPENDENT SPEAKER RECOGNITION PERFORMANCE.
WO2001031636A2 (en) Speech recognition on gsm encoded data
JP2002527796A (en) Audio processing method and audio processing device

Legal Events

Date Code Title Description
AS Assignment

Owner name: ART ADVANCED RECOGNITION TECHNOLOGIES INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HERSHKOVITS, YEHUDA;ILAN, GABRIEL;REEL/FRAME:013064/0913

Effective date: 20020703

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION