WO2002077976A1 - Method of performing speech recognition of dynamic utterances - Google Patents
- Publication number
- WO2002077976A1 (PCT/US2002/009045)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- dynamic
- utterances
- audio
- correct
- Prior art date
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
Description
- Automated data provider systems are used to provide data such as stock quotes and bank balances to users over phone lines.
- The information provided by these automated systems typically comprises two parts.
- The first part of the information is known as static data. This can be, for example, a standard greeting or prompt, which may be the same for a number of users.
- The second part of the information is known as dynamic data. In the case of a stock quotation, the name of the company and the current stock price are dynamic data, because they change continuously as users of the automated data provider systems make their selections and prices fluctuate.
- Automated data provider systems need to be tested at two levels.
- One level of testing is to test the static data provided by the automated data provider. This can be accomplished by, for example, testing the voice prompts that guide the user through the menus, ensuring that the correct prompts are presented in the correct order.
- A second level of testing is to test that the dynamic data reported to the user is correct, for example, that the reported stock price is actually the price for the named company at the time reported.
- In existing test systems used to test automated data provider systems, the speech data must be presented to the test system in a training phase prior to the testing phase, which prepares the system to recognize the same speech utterances when presented during the testing phase.
- This recognition scheme is generally known as discrete, speaker-dependent speech recognition. Thus, the system is limited to testing speech utterances presented to it a priori, and it is impractical to recognize dynamically changing utterances except where the set of all possible utterances is small.
- One system that utilizes speech recognition as part of its testing capability is the HAMMER IT™ test system available from Empirix Inc. of Wilmington, MA.
- The HAMMER IT test system recognizes the responses from the system under test and verifies that the received responses are the responses expected from the system under test. This test system works extremely well for recognizing static responses and for recognizing a limited number of dynamic responses which are known to the test system; however, the HAMMER IT currently cannot test for a wide variety of dynamic responses which are unknown to the test system.
- IQS Interactive Quality Systems of Hopkins, Minnesota utilizes an alternative recognition scheme, namely length of utterance, but is still limited to recognizing utterances presented to it a priori. It would be difficult for this system to recognize typical dynamic data, such as numbers, since the utterance "one two three" would often have the same duration as the utterances "two one three", "three two one", and so on, particularly if the utterances were generated by an automated system.
- A possible alternative would be a semi-automated system, in which the dynamic portion of the utterance would be recorded and presented to a human operator for encoding in machine-readable characters.
- It would be desirable to have a test system that tests the responses of automated data provider systems which present both static data and dynamic data. It would be further desirable to have a test system which does not need to know the possible dynamic data beforehand.
- The invention utilizes continuous, speaker-independent speech recognition together with a process known generally as natural language recognition to reduce dynamic utterances to machine-encoded text without requiring a prior training phase.
- The test system will convert common examples of dynamic speech, such as numbers, dates, times, and currency utterances, into their usual textual representation. For instance, it will convert the utterance "four hundred fifty four dollars and twenty nine cents" into the more usual representation "454.29". This eliminates the limitation that all tested utterances be known by the test system in advance of the test.
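The number normalization described above can be sketched as follows. This is an illustrative implementation only, not the patent's actual recognizer logic; the function names and word tables are assumptions made for the example.

```python
# Illustrative word tables for parsing spoken numbers (an assumption for
# this sketch, not taken from the patent).
UNITS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10,
         "eleven": 11, "twelve": 12, "thirteen": 13, "fourteen": 14,
         "fifteen": 15, "sixteen": 16, "seventeen": 17, "eighteen": 18,
         "nineteen": 19}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50,
        "sixty": 60, "seventy": 70, "eighty": 80, "ninety": 90}
SCALES = {"thousand": 1000, "million": 1_000_000}

def words_to_number(words):
    """Fold a list of number words into an integer value."""
    total, current = 0, 0
    for w in words:
        if w in UNITS:
            current += UNITS[w]
        elif w in TENS:
            current += TENS[w]
        elif w == "hundred":
            current *= 100          # "four hundred" -> 400
        elif w in SCALES:
            total += current * SCALES[w]
            current = 0
    return total + current

def currency_to_text(utterance):
    """Convert e.g. 'four hundred fifty four dollars and twenty nine cents'
    into the usual textual representation '454.29'."""
    words = utterance.lower().replace("-", " ").split()
    if "dollars" in words:
        dollar_words = words[:words.index("dollars")]
    elif "cents" in words:
        dollar_words = []           # cents-only utterance
    else:
        dollar_words = words
    cents = 0
    if "cents" in words:
        # The cents portion follows "and" (or "dollars") in this sketch.
        start = words.index("and") + 1 if "and" in words else 0
        cents = words_to_number(words[start:words.index("cents")])
    return f"{words_to_number(dollar_words)}.{cents:02d}"
```

A fuller implementation would also handle dates, times, and ordinals, as the patent text contemplates.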
- The invention facilitates automated validation of the data so converted, by allowing its use as input into an automated system which can independently access and validate the data.
- Fig. 1 is a flow chart of the presently disclosed method.
- A second function of the test system is to provide a conversion from the verbal report of the data (dynamic data) by the system under test into a textual representation.
- The textual representation, in the form of machine-encoded characters, can then be used as input into an automated system which can independently access the data in question and validate it in the appropriate manner, for example, in the case of a stock quotation, by accessing the stock exchange database.
- One advantage of the present invention is that it directly reduces arbitrary dynamic utterances presented over telecommunications devices, such as dollar amounts, times, account numbers, and so on, into machine encoded character representations suitable for input into an automated independent validation system, without intermediate human intervention.
- The present method eliminates the limitation imposed on known test systems that all possible tested utterances be known in advance of the test.
- The result of the testing of data from an automated data provider system will be one or more of the following three results: the data is fully correct, the static data is incorrect, or the dynamic data is incorrect.
- The presently disclosed system is able to perform speaker-independent recognition, so creating the vocabulary would not be necessary, except for special words.
- The first step, step 10, is to establish a communications path between the test system and the system under test.
- This communications path may be a telephone connection, a wireless or cellular connection, a network or Internet connection or other types of connections as would be known by someone of reasonable skill in the art.
- Step 20 comprises receiving audio data from the system under test by the test system through the communication path established in step 10.
- This audio data may include static data, dynamic data or a combination of static and dynamic data.
- The list below contains the possible instances of audio data to be received from the system under test.
- If the audio data comprises "This is the MegaMaximum bank", then the entire data is static data.
- If the audio data received is "Your current balance is <dollars>", then the data comprises both static data ("Your current balance is") and dynamic data (the <dollars> amount).
- In step 40, a determination is made as to whether the static data is correct. If the static data corresponds to the expected data, then step 50 is executed. If the static data is incorrect, then an error condition is indicated, as shown in step 90.
- In step 50, a determination is made as to whether the received audio data contains dynamic data. If no dynamic data has been received, then step 80 is executed and the process ends. If dynamic data has been received as part of the received audio data, then step 60 is executed.
- Step 60 converts the dynamic data to non-audio data. This can be, for example, a textual format such as machine-encoded text; other formats could also be used. Once the conversion is complete, step 70 is executed.
- Step 70 determines whether the non-audio data is correct. The non-audio data could be a stock price, a dollar amount, or the like, and is typically compared against a database which contains the correct data. If the non-audio data is correct, then step 80 is executed and the process ends. If the non-audio data is not correct, then step 90 is executed, wherein an error condition is reported.
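The flow of steps 10 through 90 can be sketched as a single test routine. All of the callables here (connect, receive_audio, recognize, normalize, lookup_expected) are hypothetical stand-ins for the connection, recognizer, converter, and independent data source described in the text.

```python
def run_test(connect, receive_audio, recognize, normalize,
             expected_static, lookup_expected):
    """Sketch of the Fig. 1 test flow; all dependencies are injected."""
    channel = connect()                           # step 10: establish communications path
    audio = receive_audio(channel)                # step 20: receive audio data
    static_text, dynamic_text = recognize(audio)  # step 30: recognize the audio
    if static_text != expected_static:            # step 40: is the static data correct?
        return "ERROR: unexpected static data"    # step 90: report error
    if dynamic_text is None:                      # step 50: any dynamic data?
        return "PASS"                             # step 80: done
    value = normalize(dynamic_text)               # step 60: convert to non-audio data
    if value != lookup_expected():                # step 70: validate against independent source
        return "ERROR: incorrect dynamic data"    # step 90: report error
    return "PASS"                                 # step 80: done
```

In a real deployment, `lookup_expected` would query something like the stock exchange database mentioned earlier, independently of the system under test.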
- The grammar could also assign tags (names) to each utterance, which the recognizer would return along with the text and/or interpretation. For the simpler applications, this would provide a solution conceptually similar to how speech prompt recognition is typically performed.
- The grammar would correspond to the vocabulary, and the tag would be a symbolic version of the clip number received as a recognition result.
- <phrase1> (this is the megamaximum bank) {greeting}
- <phrase2> (if you need assistance just say help) {help_prompt}
- <phrase3> (please enter or say your account number) {account}
- <phrase4> (please enter or say your pin number) {pin}
- <dollars> [NUMBER]
- When running the script, as each prompt is presented by the system under test, the prompt is sent off to be recognized, and a string, tag, and understanding, if any, are returned as the result.
- The script compares the returned string against the expected string, or simply checks the tag to see if it is the expected one. For phrase number five above, the script compares only the first four words (static data), and compares the dollar amount (dynamic data) to the expected value as a separate operation.
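The two-part comparison just described can be sketched as follows. The function name and inputs are hypothetical, and the recognizer is assumed to have already reduced the dollar amount to digits.

```python
def check_balance_prompt(recognized, expected_amount):
    """Check a recognized balance prompt in two parts: the first four
    words as static data, the remainder as dynamic data."""
    words = recognized.split()
    # Static portion: the first four words must match exactly (case-insensitive).
    static_ok = [w.lower() for w in words[:4]] == ["your", "current", "balance", "is"]
    # Dynamic portion: the remaining text is compared to the independently
    # obtained expected value as a separate operation.
    dynamic_ok = " ".join(words[4:]) == expected_amount
    return static_ok and dynamic_ok
```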
- A utility to enroll "MegaMaximum" into the speech recognizer's vocabulary.
- A utility to set up a grammar.
- A command to connect the running script with the created grammar.
- A command to compare strings and substrings on a word-by-word basis (rather than the character basis of most string utilities).
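The word-by-word comparison command could behave roughly as sketched below: tokens are compared case-insensitively, ignoring spacing and punctuation, unlike the character-by-character comparison of most string utilities. These function names are illustrative, not from the patent.

```python
import re

def _tokenize(s):
    """Split a string into lowercase word tokens, dropping punctuation."""
    return re.findall(r"[a-z0-9']+", s.lower())

def words_equal(a, b):
    """Compare two strings word by word, case-insensitively."""
    return _tokenize(a) == _tokenize(b)

def word_diff(a, b):
    """Return (position, word_in_a, word_in_b) for each mismatched word;
    missing words are reported as None."""
    ta, tb = _tokenize(a), _tokenize(b)
    length = max(len(ta), len(tb))
    ta += [None] * (length - len(ta))
    tb += [None] * (length - len(tb))
    return [(i, x, y) for i, (x, y) in enumerate(zip(ta, tb)) if x != y]
```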
- The system could institute a mode to automatically collect the speech clips and translate them into the format of a grammar (a specialized dictation function). This would be useful for setting up tests on undocumented IVR systems, to get the test up and running faster.
- The presently disclosed invention performs recognition on larger and more varied utterances than currently available systems.
- The present invention also handles noise better than currently available systems.
- The present invention also scales to a larger number of channels (via a separate recognition server). Further, the presently disclosed invention handles dynamic prompts seamlessly with static ones.
- A computer-usable medium can include a readable memory device, such as a hard drive device, a CD-ROM, a DVD-ROM, or a computer diskette, having computer-readable program code segments stored thereon.
- The computer-readable medium can also include a communications link, either optical, wired, or wireless, having program code segments carried thereon as digital or analog signals.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE60217313T DE60217313T2 (en) | 2001-03-22 | 2002-03-19 | METHOD FOR PERFORMING LANGUAGE RECOGNITION OF DYNAMIC REPORTS |
EP02721563A EP1374227B1 (en) | 2001-03-22 | 2002-03-19 | Method of performing speech recognition of dynamic utterances |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/814,576 | 2001-03-22 | ||
US09/814,576 US6604074B2 (en) | 2001-03-22 | 2001-03-22 | Automatic validation of recognized dynamic audio data from data provider system using an independent data source |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2002077976A1 true WO2002077976A1 (en) | 2002-10-03 |
Family
ID=25215468
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2002/009045 WO2002077976A1 (en) | 2001-03-22 | 2002-03-19 | Method of performing speech recognition of dynamic utterances |
Country Status (5)
Country | Link |
---|---|
US (1) | US6604074B2 (en) |
EP (1) | EP1374227B1 (en) |
AT (1) | ATE350746T1 (en) |
DE (1) | DE60217313T2 (en) |
WO (1) | WO2002077976A1 (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
AU2002361710A1 (en) * | 2001-12-17 | 2003-06-30 | Empirix Inc. | Method of testing a voice application |
JP3983765B2 (en) * | 2002-09-27 | 2007-09-26 | ヴァーコ アイ/ピー インコーポレイテッド | Method and apparatus for reducing hydrostatic pressure in subsea risers using floating spheres |
GB2406183A (en) * | 2003-09-17 | 2005-03-23 | Vextra Net Ltd | Accessing audio data from a database using search terms |
US7440895B1 (en) * | 2003-12-01 | 2008-10-21 | Lumenvox, Llc. | System and method for tuning and testing in a speech recognition system |
US8473295B2 (en) * | 2005-08-05 | 2013-06-25 | Microsoft Corporation | Redictation of misrecognized words using a list of alternatives |
JP2010160316A (en) * | 2009-01-08 | 2010-07-22 | Alpine Electronics Inc | Information processor and text read out method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5168548A (en) * | 1990-05-17 | 1992-12-01 | Kurzweil Applied Intelligence, Inc. | Integrated voice controlled report generating and communicating system |
US5231670A (en) * | 1987-06-01 | 1993-07-27 | Kurzweil Applied Intelligence, Inc. | Voice controlled system and method for generating text from a voice controlled input |
US6108632A (en) * | 1995-09-04 | 2000-08-22 | British Telecommunications Public Limited Company | Transaction support apparatus |
US6125347A (en) * | 1993-09-29 | 2000-09-26 | L&H Applications Usa, Inc. | System for controlling multiple user application programs by spoken input |
US6246981B1 (en) * | 1998-11-25 | 2001-06-12 | International Business Machines Corporation | Natural language task-oriented dialog manager and method |
US6332120B1 (en) * | 1999-04-20 | 2001-12-18 | Solana Technology Development Corporation | Broadcast speech recognition system for keyword monitoring |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5715369A (en) * | 1995-11-27 | 1998-02-03 | Microsoft Corporation | Single processor programmable speech recognition test system |
2001
- 2001-03-22 US US09/814,576 patent/US6604074B2/en not_active Expired - Lifetime
2002
- 2002-03-19 AT AT02721563T patent/ATE350746T1/en not_active IP Right Cessation
- 2002-03-19 DE DE60217313T patent/DE60217313T2/en not_active Expired - Lifetime
- 2002-03-19 EP EP02721563A patent/EP1374227B1/en not_active Expired - Lifetime
- 2002-03-19 WO PCT/US2002/009045 patent/WO2002077976A1/en active IP Right Grant
Also Published As
Publication number | Publication date |
---|---|
EP1374227B1 (en) | 2007-01-03 |
ATE350746T1 (en) | 2007-01-15 |
US6604074B2 (en) | 2003-08-05 |
US20020138261A1 (en) | 2002-09-26 |
EP1374227A4 (en) | 2005-09-14 |
DE60217313D1 (en) | 2007-02-15 |
DE60217313T2 (en) | 2007-10-04 |
EP1374227A1 (en) | 2004-01-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20030115066A1 (en) | Method of using automated speech recognition (ASR) for web-based voice applications | |
US7302392B1 (en) | Voice browser with weighting of browser-level grammar to enhance usability | |
US6832196B2 (en) | Speech driven data selection in a voice-enabled program | |
US20030144846A1 (en) | Method and system for modifying the behavior of an application based upon the application's grammar | |
US6173266B1 (en) | System and method for developing interactive speech applications | |
US7933766B2 (en) | Method for building a natural language understanding model for a spoken dialog system | |
US9088652B2 (en) | System and method for speech-enabled call routing | |
López-Cózar et al. | Assessment of dialogue systems by means of a new simulation technique | |
CA2576605C (en) | Natural language classification within an automated response system | |
US8457966B2 (en) | Method and system for providing speech recognition | |
US8175248B2 (en) | Method and an apparatus to disambiguate requests | |
US20030091163A1 (en) | Learning of dialogue states and language model of spoken information system | |
US20050165607A1 (en) | System and method to disambiguate and clarify user intention in a spoken dialog system | |
US20060287868A1 (en) | Dialog system | |
US20060161434A1 (en) | Automatic improvement of spoken language | |
US8488750B2 (en) | Method and system of providing interactive speech recognition based on call routing | |
US20050234720A1 (en) | Voice application system | |
USH2187H1 (en) | System and method for gender identification in a speech application environment | |
US6604074B2 (en) | Automatic validation of recognized dynamic audio data from data provider system using an independent data source | |
US7451086B2 (en) | Method and apparatus for voice recognition | |
US20080243498A1 (en) | Method and system for providing interactive speech recognition using speaker data | |
Larson | W3c speech interface languages: Voicexml [standards in a nutshell] | |
US20040258217A1 (en) | Voice notice relay service method and apparatus | |
Natarajan et al. | Natural Language Call Routing with BBN Call Director | |
Legal Events

- AK (Designated states): Kind code of ref document: A1. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG UZ VN YU ZA ZM ZW
- AL (Designated countries for regional patents): Kind code of ref document: A1. Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG
- 121 (EP): The EPO has been informed by WIPO that EP was designated in this application.
- WWE (WIPO information: entry into national phase): Ref document number: 2002721563. Country of ref document: EP
- WWP (WIPO information: published in national office): Ref document number: 2002721563. Country of ref document: EP
- REG (Reference to national code): Ref country code: DE. Ref legal event code: 8642
- NENP (Non-entry into the national phase): Ref country code: JP
- WWW (WIPO information: withdrawn in national office): Country of ref document: JP
- WWG (WIPO information: grant in national office): Ref document number: 2002721563. Country of ref document: EP