WO1997021201A1 - Method and apparatus for combined information from speech signals for adaptive interaction in teaching and testing - Google Patents

Method and apparatus for combined information from speech signals for adaptive interaction in teaching and testing

Info

Publication number
WO1997021201A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
prompt
assisted method
semaphore
computer assisted
Prior art date
Application number
PCT/US1996/019264
Other languages
French (fr)
Inventor
Jared C. Bernstein
Original Assignee
Bernstein Jared C
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bernstein Jared C
Priority to DE69622439T (DE69622439T2)
Priority to AU11285/97A (AU1128597A)
Priority to JP09521379A (JP2000501847A)
Priority to CA002239691A (CA2239691C)
Priority to DK96942132T (DK0956552T3)
Priority to AT96942132T (ATE220817T1)
Priority to EP96942132A (EP0956552B1)
Publication of WO1997021201A1
Priority to HK00102835A (HK1023638A1)

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B7/00Electrically-operated teaching apparatus or devices working with questions and answers
    • G09B7/02Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student
    • G09B7/04Electrically-operated teaching apparatus or devices working with questions and answers of the type wherein the student is expected to construct an answer to the question which is presented or wherein the machine gives an answer to the question presented by a student characterised by modifying the teaching programme in response to a wrong answer, e.g. repeating the question, supplying a further explanation
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00Teaching not covered by other main groups of this subclass
    • G09B19/04Speaking
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue

Abstract

A computer system with a speech recognition component provides a method and apparatus for instructing and evaluating the proficiency of human users in skills that can be exhibited through speaking. The computer system tracks linguistic, indexical and paralinguistic characteristics of the spoken input of users, and implements games, data access, instructional systems, and tests. The computer system combines characteristics of the spoken input automatically to select appropriate material and present it in a manner suitable for the user. In one embodiment, the computer system measures the response latency and speaking rate of the user and presents its next spoken display at an appropriate speaking rate. In other embodiments, the computer system identifies the gender and native language of the user, and combines that information with the relative accuracy of the linguistic content of the user's utterance to select and display material that may be easier or more challenging for speakers with these characteristics.

Description

METHOD AND APPARATUS FOR COMBINED INFORMATION
FROM SPEECH SIGNALS FOR ADAPTIVE INTERACTION IN
TEACHING AND TESTING
BACKGROUND
1. Field of the Invention
The area of the present invention relates generally to interactive language proficiency testing systems using speech recognition and, more particularly, to such systems which track the linguistic, indexical and paralinguistic characteristics of spoken inputs.
2. Background Art
Many computer systems support a function whereby a human user may exert control over the system through spoken language. These systems often perform speech recognition with reference to a language model that includes a rejection path for utterances that are beyond the scope of the application as designed. The speech recognition component of the application, therefore, either returns the best match within the language model designed for the application, or it rejects the speech signal. A good description of a variety of systems which incorporate such methods can be found in "Readings in Speech Recognition," edited by Alex Waibel and Kai-Fu Lee (1990).
Computer assisted language learning (CALL) systems for second language instruction have been improved by the introduction of speech recognition; Bernstein & Franco (1995) and the references therein show some examples. In most cases, the speech recognition component of the CALL system has been used as a best match (with rejection) or as a scored performance for testing and skill refinement, either for non-native speakers of the target language or for hearing-impaired speakers. Prior laboratory demonstration systems have been designed to offer instruction in reading in the user's native language. Two systems have emulated selected aspects of the interaction of a reading instructor while the human user reads a displayed text aloud. One system based its spoken displays on the running average of poor pronunciations by the reader (Rtischev, Bernstein, and Chen), and the other developed models of common false starts and based its spoken displays on recognizing occurrences of these linguistic elements (Mostow at CMU).
Expert teachers and other human interlocutors are sensitive not only to the linguistic content of a person's speech, but to other apparent characteristics of the speaker and the speech signal. The prior art includes systems that respond differentially depending on the linguistic content of speech signals. Prior art systems have also extracted indexical information like speaker identity or speaker gender, and calculated pronunciation scores or speaking rates in reading. However, these extra-linguistic elements of human speech signals have not been combined with the linguistic content to estimate the global proficiency of a human user in a spoken skill, and thus to control the operation of the computer system in a manner appropriate to that user's global skill level. Such control of computer-based graphic and audio displays is useful and desirable in order to facilitate fine-grained adaptation to the cognitive, verbal and vocal skills of the human user.
SUMMARY OF THE INVENTION
According to one embodiment of the present invention, computer systems that interact with human users via spoken language are improved by the combined use of linguistic and extra-linguistic information manifest in the speech of the human user. The present invention extracts linguistic content, speaker state, speaker identity, vocal reaction time, rate of speech, fluency, pronunciation skill, native language, and other linguistic, indexical, or paralinguistic information from an incoming speech signal. The user produces a speech signal in the context of a computer-produced display that is conventionally interpreted by the user as a request for information, or a request to read or repeat a word, phrase, sentence, or larger linguistic unit, or a request to complete, fill in, or identify missing elements in graphic or verbal aggregates (e.g., pictures or paragraphs), or an example to imitate, or any similar graphical or verbal presentation that conventionally serves as a prompt to speak. The display is presented through a device either integral or peripheral to a computer system, such as a local or remote video display terminal or telephone. The extracted linguistic and extra-linguistic information is combined in order to differentially select subsequent computer output for the purpose of amusement, instruction, or evaluation of that person by means of computer-human interaction.
Combining the linguistic and extra-linguistic sources of information in a speech signal to select the next audio or graphic display simulates the integrative judgment of a skilled tutor or other interlocutor. The benefits in language instruction and language testing are direct in that language proficiency is a combination of linguistic and extra-linguistic skills, but use of the invention in any content area (e.g., arithmetic or geography) could be advantageous. Synthesis of corresponding indexical, paralinguistic and linguistic information in the speech displays produced by the computer system facilitates communication in the same context.
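By way of illustration only, the integrative judgment described above can be sketched as a weighted blend of per-utterance measures. The following Python fragment is a minimal sketch; the measure names and weights are hypothetical and do not appear in the patent.

```python
def global_proficiency(measures):
    """Blend linguistic and extra-linguistic measures (each scaled to [0, 1])
    into one global score; measure names and weights are hypothetical."""
    weights = {
        "content_match": 0.4,   # linguistic: how well the words matched the prompt
        "pronunciation": 0.3,   # segmental and prosodic accuracy
        "fluency": 0.2,         # pausing, false starts
        "rate_norm": 0.1,       # speaking rate normalized to a target band
    }
    return sum(w * measures.get(k, 0.0) for k, w in weights.items())

print(global_proficiency({"content_match": 0.9, "pronunciation": 0.7,
                          "fluency": 0.8, "rate_norm": 0.6}))  # 0.79
```

A system built this way could swap the fixed weights for a function fit to human proficiency ratings without changing the surrounding control flow.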
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 illustrates a computer system which serves as an exemplary platform for the apparatus and methods of the present invention. Figure 2 illustrates the transducers and the component subsystems for speech recognition, semaphore construction, and application interface control according to one embodiment of the present invention.
Figure 3 shows a block diagram of the automatic speech recognition component system according to one embodiment of the invention.
Figure 4 shows a schematic block diagram of the logic used in constructing the semaphore fields for one embodiment of the present invention.
Figure 5 shows a schematic block diagram of one embodiment of the application display controller.
Figure 6 is a flow diagram representing the conjoint use of semaphore fields in changing application display states.
DETAILED DESCRIPTION
Referring to the drawings in detail wherein like numerals designate like parts and components, the following description sets forth numerous specific details in order to provide a thorough understanding of the present invention. However, after reviewing this specification, it will be apparent to those skilled in the art that the present invention may be practiced without these specific details. In other instances, well known structures, techniques and devices have not been described in detail in order to not unnecessarily obscure the present invention.
Figure 1 illustrates a computer system 10 implementing the apparatus and methods of the present invention. Although the present invention can be used with any number of integrated or stand-alone systems or devices, computer system 10 represents one preferred embodiment of the platform for the present invention. As shown in Figure 1, computer system 10 comprises a host CPU 12, memory 14, hard disk drive 16, and floppy disk drive 18, all of which are coupled together via system bus 19. Upon review of this specification, it will be appreciated that some or all of these components can be eliminated from various embodiments of the present invention. It will further be appreciated that operating system software and other software needed for the operation of computer system 10 will be loaded into main memory 14 from either hard disk drive 16 or floppy disk drive 18 upon power up. It will be appreciated that some of the code to be executed by CPU 12 on power up is stored in a ROM or other non-volatile storage device.
Computer system 10 is further equipped with a conventional keyboard 20 and a cursor positioning device 22. In one embodiment, cursor positioning device 22 includes a trackball and two switches which are actuated by two contoured buttons. Keyboard 20 and cursor positioning device 22 comprise part of the user interface of computer system 10 and allow a user to communicate with the other elements of computer system 10. Although any keyboard 20 and cursor positioning device 22 could be used with computer system 10, in one embodiment, these items are distinct units which are coupled to the system bus 19 via input/output controller 24. Other embodiments may eliminate the input/output controller and may further integrate keyboard 20 and cursor positioning device 22 into a single unit.
Computer system 10 further includes a display unit 26 which is coupled to the system bus 19 through display controller 28. Display 26 may comprise any one of a number of familiar display devices and may be a liquid crystal display unit or video display terminal. It will be appreciated by those skilled in the art, however, that in other embodiments, display 26 can be any one of a number of other display devices. Display controller 28, which typically includes video memory (not shown), receives command and data information via system bus 19 and then provides the necessary signals to display 26, thereby accomplishing the display of text, graphical and other information to the user. When computer system 10 is in use, menus and other input/output displays which comprise part of the user interface of the computer system 10 are displayed on display 26 and an associated cursor can be moved on the screen using cursor positioning device 22 in the familiar fashion.
The printer functions of computer system 10 are implemented via printer controller 30 and printer 32. Printer controller 30 is coupled to system bus 19, thereby allowing for the transfer of command and data information. Printer 32 is coupled to printer controller 30 in the familiar fashion. It will be appreciated that some embodiments of computer system 10 will not utilize printer controller 30 and printer 32.
Application interface unit 34 is coupled to system bus 19 and acts as an interface between telephone handset 36, display 38 and speaker 40 and the system bus 19. Application interface unit 34 is further coupled to semaphore logic 42 which, in turn, is coupled to automatic speech recognizer (ASR) 44. Microphone 46 and telephone handset 36 are coupled to ASR 44. In operation, voice signals are converted to electrical signals by either microphone 46 or telephone handset 36. The electrical signals are then digitized and analyzed by ASR 44 in accordance with the methods of the present invention as described in detail below. The output signals of ASR 44 are passed to semaphore logic 42 which extracts values associated with the signals. These values are presented to application interface unit 34 for further processing as described below. Results of the processing are presented via display 38 and/or speaker 40 and telephone handset 36. It will be appreciated that in some embodiments display 38 and display 26 may comprise the same unit. In other embodiments, display 38 may be a dedicated unit. Although application interface unit 34 has been depicted as a separate unit, upon review of this specification it will be apparent to those skilled in the art that the functions of application interface unit 34 may be implemented via host CPU 12.
Having thus described the overall computer system 10, the description will now turn to the particular methods and apparatus which comprise the present invention. Although in the description which follows, details of the implementation may be referred to as being in software, hardware alternatives may also be used, and vice versa.
Computer systems that support spoken language interaction are based on speech recognition systems integrated with application interface logic and other components such as databases and peripherals. Computer system 10, shown in Figure 1, is such a system. Three principal components of computer system 10 (the automatic speech recognizer 44, the semaphore logic 42, and the application interface controller 34) are shown in further detail in Figure 2. These components are directly or indirectly connected to three transducers: a video display terminal (VDT) 38, a loudspeaker 40, and a microphone 46. It will be appreciated that in other embodiments, VDT 38 may comprise an alternative type of display device such as a liquid crystal display. The components and transducers are connected by logical data streams 50-58. The embodiment shown in Figure 2 resembles a system in which a user interacts at a console with a VDT, microphone and a loudspeaker. However, the microphone and speaker in Figure 2 could both be replaced by a telephone handset 36.
A language proficiency testing system that operates over the telephone is one embodiment of the invention shown in Figure 2. In such an embodiment, the human user may be remote from the computer system 10. The computer system 10 displays speech signals over the outbound data stream 58, which is a telephone line. The user responds by speaking into the microphone 46 or the telephone handset 36. The user's speech signal is transmitted over the phone line 50, and processed by the speech recognizer 44, with reference to the current state of the application interface, as received in data stream 56 from the application interface controller 34.
The speech recognizer 44 produces a data stream 52 that contains an augmented representation of the linguistic content of the user's speech signal, including a representation of the speech signal aligned with segment, syllable, word, phrase, and clause units. The semaphore logic 42 is implemented as a sequentially separate processing component in the embodiment shown in Figure 2, although its function may also be performed in whole or in part in the speech recognizer 44. The semaphore logic 42 extracts a series of nominal and numerical values that are associated with each unit level. This embedded semaphore structure is data stream 54, which is stored in application interface controller 34 and combined in various forms to drive the branching decisions and determine the state of the application interface controller 34. The state of the application interface controller 34 then generates two data streams: 56, which updates ASR 44 and semaphore logic 42 with its current state as relevant to the processing done in ASR 44 and semaphore logic 42, and 58, which is the audio signal that plays out through the loudspeaker 40 or the user's telephone handset 36.
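The unit-aligned representation in data stream 52 and the per-utterance semaphore of data stream 54 can be pictured with a small data model. The Python sketch below uses illustrative field names only; the patent does not prescribe a concrete record layout.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class AlignedUnit:
    """One recognized unit (segment, syllable, word, phrase, or clause)
    aligned with the speech signal; field names are illustrative."""
    level: str      # e.g. "word" or "syllable"
    label: str      # linguistic identity of the unit
    start_s: float  # alignment start time, in seconds
    end_s: float    # alignment end time, in seconds

@dataclass
class Semaphore:
    """Nominal and numerical values attached to one utterance (data stream 54)."""
    nominal: Dict[str, str] = field(default_factory=dict)      # e.g. {"gender": "f"}
    numerical: Dict[str, float] = field(default_factory=dict)  # e.g. {"rate_wps": 2.1}
    units: List[AlignedUnit] = field(default_factory=list)     # from data stream 52
```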
Figure 3 is a block diagram of one embodiment of a speech recognizer 44. Speech recognizer 44 is consistent with a Hidden Markov Model (HMM)-based system for this embodiment, although the invention is applicable to systems that use other speech recognition techniques. The component comprises a feature extractor 60, implemented by digital signal processing techniques well known in the art, and a decoder 62 that searches the language model 64 as appropriate to the current state of the application interface controller 34. The techniques required to implement an HMM-based speech recognizer are well known in the art. For example, U.S. Patent No. 5,268,990 to Cohen, et al. describes such a system wherein words are modeled as probabilistic networks of phonetic segments, each being represented as a context-independent hidden Markov phone model mixed with a plurality of context-dependent phone models. Such speech recognizers sample and process the input speech to derive a number of spectral features. Such processing is accomplished using codebook techniques familiar to those skilled in the art. Recognition of the speech is then achieved by solving for the state sequence that is most likely to have produced the input features.
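For concreteness, solving for the state sequence most likely to have produced the input features is classically done with the Viterbi algorithm. The sketch below is a toy, discrete-observation version in log space; it is not the recognizer of the cited Cohen et al. patent, which operates on spectral features with mixtures of phone models.

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most likely HMM state sequence for a discrete observation sequence.
    log_A[i, j]: log P(state j | state i); log_B[j, o]: log P(obs o | state j);
    log_pi[j]: log P(initial state j)."""
    T, N = len(obs), log_pi.shape[0]
    delta = np.full((T, N), -np.inf)   # best log-score ending in each state
    psi = np.zeros((T, N), dtype=int)  # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[i, j]: i at t-1, j at t
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    states = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):      # follow backpointers
        states.append(int(psi[t][states[-1]]))
    return states[::-1]

# Toy two-state model with three observation symbols
log_pi = np.log(np.array([0.6, 0.4]))
log_A = np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))
log_B = np.log(np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]]))
print(viterbi([0, 1, 2], log_A, log_B, log_pi))  # [0, 0, 1]
```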
Figure 4 is a schematic block diagram of the semaphore logic 42, which operates on the data stream 52 and produces the data stream 54. Semaphore logic 42 implements a set of estimation routines 70-76 that logically operate in parallel, with partial inter-process communication. These processes include, in the embodiment for telephone language proficiency testing, measures of speaking rate and of fluency, estimates of speaker gender and native language, and measures of segmental and prosodic accuracy for the spoken response. Each of these processes is implemented using programming techniques well known in the art.
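Two of the simpler estimation routines, speaking rate and vocal reaction time, fall out directly from the unit alignments in data stream 52. The sketch below is illustrative only; it assumes each word carries a (start, end) alignment in seconds, which the patent implies but does not specify in this form.

```python
def speaking_rate_wps(word_times):
    """Words per second across the spoken portion of the response;
    word_times is a list of (start_s, end_s) alignments, one per word."""
    if not word_times:
        return 0.0
    duration = word_times[-1][1] - word_times[0][0]
    return len(word_times) / duration if duration > 0 else 0.0

def response_latency_s(word_times, prompt_end_s=0.0):
    """Vocal reaction time: seconds from end of prompt to first word onset."""
    return word_times[0][0] - prompt_end_s if word_times else float("inf")

# Example: three words spoken between 1.2 s and 2.4 s after the prompt ended
words = [(1.2, 1.5), (1.6, 2.0), (2.1, 2.4)]
print(speaking_rate_wps(words))   # 2.5 words per second
print(response_latency_s(words))  # 1.2 s vocal reaction time
```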
Figure 5 is a block diagram of the application interface controller 34, which comprises a semaphore silo 80 that stores a 10-utterance FIFO of semaphores, a display sequence state machine 82, a display driver 84, and a display content library 86 containing the audio files specified for display by the display sequence state machine 82. Display sequence state machine 82 changes state depending on the content of the semaphore silo 80. The current state of display sequence state machine 82 generates data stream 56 and controls the display driver 84, which copies or adapts content from display content library 86 and produces data stream 58. Figure 6 represents a decision logic element in the state network implemented in the display sequence state machine 82. The combination logic 90 in this embodiment is a deterministic, state-dependent function of the last semaphore value. Combination logic 90 allows display sequence state machine 82 to transition from current state 92 to next state 94 based on the input from semaphore silo 80. Other possibilities within the scope of the invention include probabilistic functions of the last semaphore values, and probabilistic or deterministic functions on the values of the last n (n < 11) semaphores.
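The semaphore silo and combination logic can be pictured as a bounded FIFO feeding a state-transition function. In the sketch below, only the 10-utterance depth and the deterministic, state-dependent use of the last semaphore come from the patent; the state names, semaphore fields, and thresholds are hypothetical.

```python
from collections import deque

class SemaphoreSilo:
    """Holds the semaphores of the last 10 utterances (oldest dropped first)."""
    def __init__(self, depth=10):
        self.fifo = deque(maxlen=depth)

    def push(self, semaphore):
        self.fifo.append(semaphore)

    def last(self):
        return self.fifo[-1]

def next_state(current_state, silo):
    """Deterministic, state-dependent function of the last semaphore value,
    in the spirit of combination logic 90; all branches are hypothetical."""
    sem = silo.last()
    if current_state == "warmup":
        return "easy_items" if sem["pronunciation"] < 0.5 else "standard_items"
    if current_state == "standard_items":
        return "advanced_items" if sem["fluency"] > 0.8 else "standard_items"
    return current_state

silo = SemaphoreSilo()
silo.push({"pronunciation": 0.9, "fluency": 0.85})
print(next_state("standard_items", silo))  # advanced_items
```

A probabilistic variant, also contemplated in the text, would replace the threshold comparisons with a draw from a state-dependent distribution over next states, optionally conditioned on all n (n < 11) semaphores in the silo.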
According to one embodiment of the present invention, a remote user initiates contact with computer system 10 via a standard telephone handset 36. It will be appreciated that this can be accomplished by dialing up a telephone number associated with computer system 10 whereupon the user's call will be automatically answered. The user initiates the operation of a desired speech testing or other routine in the typical fashion, for example, by responding to audio prompts using the touch-tone keypad of the telephone. In response to the user input, computer system 10 loads the desired application program from hard drive 16 into main memory 14 and begins to execute the instructions associated therewith. This further causes computer system 10 to configure its circuitry accordingly so as to implement the operation of the selected application program.
Once operation has commenced, computer system 10 begins testing the user's speech abilities by generating a series of displays. These displays may be purely audio, as in the case of contact solely by telephone, or audio-visual, where the user is positioned at a remote terminal or has accessed computer system 10 via a modem. It will be appreciated that one method of accessing computer system 10 may be via a gateway to the network of computer systems commonly referred to as the Internet. Regardless of the method of connection, the displays initiated by computer system 10 may take the form of a request to read or repeat a word, phrase, or sentence (or larger linguistic unit); a request to complete, fill in, or identify missing elements in a graphic or verbal aggregate (e.g., a picture or a paragraph); an example to imitate; or any similar graphical or verbal presentation that conventionally serves as a prompt for the user to speak. In response to this prompt, the user provides a speech signal which is transmitted via the telephone handset 36 (or other device) to ASR 44.
As described above, the user's speech signal is processed by ASR 44 to produce data stream 52. This information (data stream 52) is passed on to semaphore logic 42 where the above-described processes operate to extract linguistic and extra-linguistic information. For example, in one embodiment, the response latency and speaking rate of the user are identified. Other embodiments might extract gender and native language information.
This extracted information is then utilized by application interface 34 to select the subsequent output of computer system 10. In the context of a language test, this might include displaying advanced graphical or verbal aggregates to those users whose speech characteristics demonstrate a high level of fluency. Of course, it will be appreciated that other implementations of the present invention may have targets other than language proficiency. For example, geographical familiarity or competency in arithmetic could be examined. Also, the present invention could be used as a means by which users interact with an amusement game running on computer system 10.
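As a closing illustration, the selection step might resemble the sketch below, which maps a combined proficiency score to a difficulty tier and, following the abstract's example, sets the playback rate of the next spoken display near the user's own measured rate. The tier names, the nominal 2.5 words/s reference rate, and the clamping bounds are all hypothetical.

```python
def select_next_display(proficiency, user_rate_wps, library):
    """Pick the next display item and a matched playback rate; `library`
    maps difficulty tiers to lists of audio/graphic items (hypothetical)."""
    if proficiency > 0.75:
        tier = "advanced"
    elif proficiency > 0.4:
        tier = "intermediate"
    else:
        tier = "basic"
    # Speak the next display near the user's own rate, within sensible bounds
    playback_rate = min(max(user_rate_wps / 2.5, 0.8), 1.2)
    return library[tier][0], playback_rate

library = {"basic": ["item_b1"], "intermediate": ["item_i1"], "advanced": ["item_a1"]}
print(select_next_display(0.82, 2.5, library))  # ('item_a1', 1.0)
```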
Thus, a novel computer-implemented method and apparatus for combining information from speech signals for adaptive interaction has been described. Although the teachings have been presented in connection with a particular circuit embodiment, it should be understood that the method of the present invention is equally applicable to a number of systems. Therefore, the disclosure should be construed as being exemplary and not limiting, and the scope of the invention should be measured only in terms of the appended claims.

Claims

CLAIMS
What is claimed is:
1. A computer assisted method, comprising the steps of: presenting a prompt to a user; receiving a spoken response to the prompt from the user, the spoken response including one or more linguistic units; deriving from the spoken response one or more semaphore values; and selecting a next prompt for presentation to the user based at least in part on the semaphore values.
2. A computer assisted method as in claim 1 wherein said semaphore values include one or more values representing semaphores chosen from the list comprising: an identity of the user; a native language of the user; a speaking rate of the user; an identity of the linguistic units; a latency of the spoken response; an amplitude of the spoken response; a fluency of the spoken response; a pronunciation quality of the spoken response; values of the fundamental frequency of the spoken response; and a gender of the user.
3. A computer assisted method as in claim 2 wherein said semaphore values are further derived from an information source other than said spoken response.
4. A computer assisted method as in claim 3 wherein said information source includes a user identification code.
5. A computer assisted method as in claim 4 wherein said user identification code includes an ANI (automatic number identification).
6. A computer assisted method as in claim 3 wherein said next prompt comprises a phrase.
7. A computer assisted method as in claim 3 wherein said next prompt comprises a sentence.
8. A computer assisted method as in claim 3 wherein said next prompt comprises a request to complete a verbal aggregate.
9. A computer assisted method as in claim 3 wherein said next prompt comprises a request to identify missing elements in a verbal aggregate.
10. A computer assisted method as in claim 3 wherein said next prompt comprises an example to be imitated by said user.
11. A computer assisted method as in claim 3 wherein said next prompt is presented via a graphical user interface.
12. A computer assisted method as in claim 3 wherein said next prompt is presented via a telephone system.
13. A computer assisted method as in claim 3 wherein said spoken response is received via a telephone system.
14. A computer assisted method as in claim 13 wherein said step of deriving comprises the steps of: presenting said spoken response as a series of digital signals to a semaphore logic configured to extract said one or more semaphore values using one or more linguistic feature estimation routines.
15. A computer assisted method as in claim 1 wherein said step of selecting a next prompt comprises selecting material for presentation to the user that may be easier or more challenging for the user based on one or more of the linguistic units in the prompt; the indexical properties of the prompt; the timing of the initiation of the prompt; the time scale of the prompt; or the relative amplitude of the prompt in relation to a level and characteristics of noise in the prompt.
16. A computer assisted method of determining proficiency of spoken language comprising the steps of: receiving at a computer digital signals representing an utterance, said utterance including one or more linguistic units; extracting from the digital signals semaphore values of said one or more linguistic units; combining two or more of said extracted semaphore values of said one or more linguistic units to produce a combined result and comparing said combined result to a stored model to derive a comparison result; and assigning said utterance a level of proficiency based on said comparison result.
17. A computer assisted method as in claim 16 further comprising the step of: displaying for a user a prompt selected according to said level of proficiency, wherein said prompt includes one or more selected linguistic units.
18. A computer assisted method as in claim 17 further comprising the steps of: receiving at said computer further digital signals representing a further utterance in response to said prompt.
19. A computer assisted method as in claim 18 further comprising the step of refining said assigned level of proficiency by analyzing said further utterance with reference to said prompt.
20. A computer assisted method as in claim 19 wherein said prompt includes selected linguistic units comprising a word or a phrase.
21. A computer assisted method as in claim 19 wherein said prompt includes selected linguistic units comprising a sentence.
22. A computer assisted method as in claim 19 wherein said prompt includes selected linguistic units comprising a request to complete a verbal aggregate.
23. A computer assisted method as in claim 19 wherein said prompt includes selected linguistic units comprising a request to identify missing elements in a verbal aggregate.
24. A computer assisted method as in claim 19 wherein said prompt includes selected linguistic units comprising an example to be imitated by a user.
25. A computer assisted method as in claim 16 wherein said digital signals are received via a telephone system.
26. A computer assisted method as in claim 16 wherein said digital signals are received via the Internet.
27. A computer assisted method as in claim 16 wherein said step of extracting comprises the steps of: presenting said digital signals to a semaphore logic including one or more linguistic feature estimation routines; and extracting, using said semaphore logic, a series of values from said digital signals, said values associated with said semaphore values of said linguistic units.
28. A computer assisted method as in claim 16 wherein said step of extracting is accomplished using a speech recognizer consistent with a Hidden Markov Model feature extractor.
29. A digital system, comprising: a first user interface component configured to translate a user speech signal into a first electrical signal; an automatic speech recognition unit coupled to the first user interface component and configured to digitize and analyze said first electrical signal so as to generate a second electrical signal which includes a representation of said speech signal aligned with one or more linguistic units; a semaphore logic coupled to said automatic speech recognition unit and configured to extract values associated with each linguistic unit level within said second electrical signal; and an application interface unit coupled to said semaphore logic and configured to process said values so as to generate a third electrical signal comprising information to be presented to said user according to a determined state, trait or attribute of said user, wherein said state, trait or attribute is determined from said values.
30. A digital system as in claim 29, wherein said automatic speech recognition unit comprises: a feature extractor configured to receive and digitize said first electrical signal and generate an output signal; a decoder coupled to said feature extractor; and a language model coupled to said decoder, wherein said decoder is configured to search said language model according to a current state of said application interface unit and to spectral features contained within said output signal.
31. A digital system as in claim 30, wherein said semaphore logic comprises one or more estimation routines.
32. A digital system as in claim 31, wherein said application interface unit comprises: a semaphore silo configured to store a plurality of semaphores and to receive an input from said semaphore logic; a display sequence state machine coupled to said semaphore silo and configured to change state according to the content of said semaphore silo; and a display driver having an associated display library and coupled to said display sequence state machine, said display driver configured to generate said third electrical signal according to a current state of said display sequence state machine.
33. A digital system as in claim 32, wherein said third electrical signal comprises an audio signal for playback through said first user interface component.
34. A digital system as in claim 32, further comprising a second user interface component coupled so as to receive said third electrical signal.
35. A digital system as in claim 34, wherein said third electrical signal is an audio signal for playback through said second user interface component.
36. A digital system as in claim 34, wherein said third electrical signal comprises graphical information for display on said second user interface component.
37. A digital system as in claim 34, wherein said third electrical signal comprises textual information for display on said second user interface component.
38. A digital system as in claim 34, wherein said first user interface component and said second user interface component are housed in a user terminal.
39. A computer readable medium having stored thereon a plurality of sequences of instructions, said plurality of sequences of instructions which, when executed by a processor, cause said processor to perform the steps of: receiving digital signals representing an utterance including one or more linguistic units; extracting from the digital signals semaphore values of said one or more linguistic units; combining two or more of said extracted semaphore values of said one or more linguistic units to produce a combined result and comparing said combined result to a stored model to derive a comparison result; and assigning said utterance a level of a user state, trait or attribute based on said comparison result.
40. A computer readable medium as in claim 39 having further stored thereon instructions which cause said processor to perform the steps of: displaying for a user a prompt selected according to said level of a user state, trait or attribute, wherein said prompt includes one or more selected linguistic units.
41. A computer readable medium as in claim 40 having further stored thereon instructions which cause said processor to perform the steps of: receiving further digital signals representing a further utterance corresponding to said selected linguistic units of said prompt; and refining said assigned level of a user state, trait or attribute by analyzing said further utterance against said prompt.
42. A computer readable medium as in claim 41 wherein said prompt includes selected linguistic units comprising a word or a phrase.
43. A computer readable medium as in claim 41 wherein said prompt includes selected linguistic units comprising a sentence.
44. A computer readable medium as in claim 41 wherein said prompt includes selected linguistic units comprising a request to complete a verbal aggregate.
45. A computer readable medium as in claim 41 wherein said prompt includes selected linguistic units comprising a request to identify missing elements in a verbal aggregate.
46. A computer readable medium as in claim 41 wherein said prompt includes selected linguistic units comprising an example to be imitated by a user.
47. A computer readable medium as in claim 39 wherein said digital signals are received via a telephone system.
48. A computer readable medium as in claim 39 wherein said digital signals are received via the Internet.
49. A computer readable medium as in claim 39 having further stored thereon instructions, which when executed by said processor during said step of extracting said one or more linguistic units, cause said processor to perform the steps of: presenting said digital signals to a semaphore logic including one or more linguistic feature estimation routines; and extracting, using said semaphore logic, a series of numerical values from said digital signals, said values associated with said semaphore values of said linguistic units.
50. A computer readable medium as in claim 39 wherein said processor accomplishes said step of extracting by executing a sequence of operations consistent with a Hidden Markov Model extraction.
PCT/US1996/019264 1995-12-04 1996-11-25 Method and apparatus for combined information from speech signals for adaptive interaction in teaching and testing WO1997021201A1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
DE69622439T DE69622439T2 (en) 1995-12-04 1996-11-25 METHOD AND DEVICE FOR DETERMINING COMBINED INFORMATION FROM VOICE SIGNALS FOR ADAPTIVE INTERACTION IN TEACHING AND EXAMINATION
AU11285/97A AU1128597A (en) 1995-12-04 1996-11-25 Method and apparatus for combined information from speech signals for adaptive interaction in teaching and testing
JP09521379A JP2000501847A (en) 1995-12-04 1996-11-25 Method and apparatus for obtaining complex information from speech signals of adaptive dialogue in education and testing
CA002239691A CA2239691C (en) 1995-12-04 1996-11-25 Method and apparatus for combined information from speech signals for adaptive interaction in teaching and testing
DK96942132T DK0956552T3 (en) 1995-12-04 1996-11-25 Methods and devices for combined information from speech signals for adaptive interaction for teaching and test purposes
AT96942132T ATE220817T1 (en) 1995-12-04 1996-11-25 METHOD AND DEVICE FOR DETERMINING COMBINED INFORMATION FROM VOICE SIGNALS FOR ADAPTIVE INTERACTION IN TEACHING AND TESTING
EP96942132A EP0956552B1 (en) 1995-12-04 1996-11-25 Method and apparatus for combined information from speech signals for adaptive interaction in teaching and testing
HK00102835A HK1023638A1 (en) 1995-12-04 2000-05-12 Method and apparatus for combined information from speech signals for adaptive interaction in teaching and testing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US791495P 1995-12-04 1995-12-04
US60/007,914 1995-12-04

Publications (1)

Publication Number Publication Date
WO1997021201A1 (en)

Family

ID=21728782

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/019264 WO1997021201A1 (en) 1995-12-04 1996-11-25 Method and apparatus for combined information from speech signals for adaptive interaction in teaching and testing

Country Status (12)

Country Link
US (1) US5870709A (en)
EP (1) EP0956552B1 (en)
JP (2) JP2000501847A (en)
AT (1) ATE220817T1 (en)
AU (1) AU1128597A (en)
CA (1) CA2239691C (en)
DE (1) DE69622439T2 (en)
DK (1) DK0956552T3 (en)
ES (1) ES2180819T3 (en)
HK (1) HK1023638A1 (en)
PT (1) PT956552E (en)
WO (1) WO1997021201A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000014700A1 (en) * 1998-09-04 2000-03-16 N.V. De Wilde Cbt Apparatus and method for personalized language exercise generation
WO2000022597A1 (en) * 1998-10-15 2000-04-20 Planetlingo Inc. Method for computer-aided foreign language instruction
WO2002005248A1 (en) * 2000-07-11 2002-01-17 Kabushiki Kaisha Nihon Toukei Jim Center Test conducting method and on-line test system
DE19752907C2 (en) * 1997-11-28 2002-10-31 Egon Stephan Method for conducting a dialogue between a single or multiple users and a computer

Families Citing this family (110)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10260692A (en) * 1997-03-18 1998-09-29 Toshiba Corp Method and system for recognition synthesis encoding and decoding of speech
US6493426B2 (en) * 1997-09-08 2002-12-10 Ultratec, Inc. Relay for personal interpreter
US6603835B2 (en) 1997-09-08 2003-08-05 Ultratec, Inc. System for text assisted telephony
US6594346B2 (en) * 1997-09-08 2003-07-15 Ultratec, Inc. Relay for personal interpreter
US5927988A (en) * 1997-12-17 1999-07-27 Jenkins; William M. Method and apparatus for training of sensory and perceptual systems in LLI subjects
US6192341B1 (en) * 1998-04-06 2001-02-20 International Business Machines Corporation Data processing system and method for customizing data processing system output for sense-impaired users
US7203649B1 (en) * 1998-04-15 2007-04-10 Unisys Corporation Aphasia therapy system
GB2348035B (en) * 1999-03-19 2003-05-28 Ibm Speech recognition system
US6224383B1 (en) 1999-03-25 2001-05-01 Planetlingo, Inc. Method and system for computer assisted natural language instruction with distracters
WO2000057386A1 (en) * 1999-03-25 2000-09-28 Planetlingo, Inc. Method and system for computer assisted natural language instruction with adjustable speech recognizer
US6397185B1 (en) * 1999-03-29 2002-05-28 Betteraccent, Llc Language independent suprasegmental pronunciation tutoring system and methods
US7062441B1 (en) * 1999-05-13 2006-06-13 Ordinate Corporation Automated language assessment using speech recognition modeling
US6299452B1 (en) * 1999-07-09 2001-10-09 Cognitive Concepts, Inc. Diagnostic system and method for phonological awareness, phonological processing, and reading skill testing
US6665644B1 (en) * 1999-08-10 2003-12-16 International Business Machines Corporation Conversational data mining
DE19941227A1 (en) * 1999-08-30 2001-03-08 Philips Corp Intellectual Pty Method and arrangement for speech recognition
US6963837B1 (en) * 1999-10-06 2005-11-08 Multimodal Technologies, Inc. Attribute-based word modeling
US6513009B1 (en) * 1999-12-14 2003-01-28 International Business Machines Corporation Scalable low resource dialog manager
KR20000049483A (en) * 2000-03-28 2000-08-05 이헌 Foreign language learning system using a method of voice signal comparison
KR20000049500A (en) * 2000-03-31 2000-08-05 백종관 Method of Practicing Foreign Language Using Voice Recognition and Text-to-Speech and System Thereof
US6424935B1 (en) 2000-07-31 2002-07-23 Micron Technology, Inc. Two-way speech recognition and dialect system
CA2424397A1 (en) * 2000-10-20 2002-05-02 Eyehear Learning, Inc. Automated language acquisition system and method
US20020115044A1 (en) * 2001-01-10 2002-08-22 Zeev Shpiro System and method for computer-assisted language instruction
US20020147587A1 (en) * 2001-03-01 2002-10-10 Ordinate Corporation System for measuring intelligibility of spoken language
US20020169604A1 (en) * 2001-03-09 2002-11-14 Damiba Bertrand A. System, method and computer program product for genre-based grammars and acoustic models in a speech recognition framework
US6876728B2 (en) 2001-07-02 2005-04-05 Nortel Networks Limited Instant messaging using a wireless interface
US20030039948A1 (en) * 2001-08-09 2003-02-27 Donahue Steven J. Voice enabled tutorial system and method
US8416925B2 (en) 2005-06-29 2013-04-09 Ultratec, Inc. Device independent text captioned telephone service
US7881441B2 (en) * 2005-06-29 2011-02-01 Ultratec, Inc. Device independent text captioned telephone service
US8644475B1 (en) 2001-10-16 2014-02-04 Rockstar Consortium Us Lp Telephony usage derived presence information
AU2002240872A1 (en) * 2001-12-21 2003-07-09 Telefonaktiebolaget Lm Ericsson (Publ) Method and device for voice recognition
US20030135624A1 (en) * 2001-12-27 2003-07-17 Mckinnon Steve J. Dynamic presence management
US6953343B2 (en) * 2002-02-06 2005-10-11 Ordinate Corporation Automatic reading system and methods
TW520488B (en) * 2002-03-12 2003-02-11 Inventec Corp Computer-assisted foreign language audiolingual teaching system for contextual read-after assessment and method thereof
EP1506472A1 (en) * 2002-05-14 2005-02-16 Philips Intellectual Property & Standards GmbH Dialog control for an electric apparatus
USH2187H1 (en) 2002-06-28 2007-04-03 Unisys Corporation System and method for gender identification in a speech application environment
US7299188B2 (en) * 2002-07-03 2007-11-20 Lucent Technologies Inc. Method and apparatus for providing an interactive language tutor
US7181392B2 (en) * 2002-07-16 2007-02-20 International Business Machines Corporation Determining speech recognition accuracy
US7249011B2 (en) * 2002-08-12 2007-07-24 Avaya Technology Corp. Methods and apparatus for automatic training using natural language techniques for analysis of queries presented to a trainee and responses from the trainee
KR20040017896A (en) * 2002-08-22 2004-03-02 박이희 Recording media for learning a foreign language using a computer and method using the same
US7305336B2 (en) * 2002-08-30 2007-12-04 Fuji Xerox Co., Ltd. System and method for summarization combining natural language generation with structural analysis
US20040049391A1 (en) * 2002-09-09 2004-03-11 Fuji Xerox Co., Ltd. Systems and methods for dynamic reading fluency proficiency assessment
US8392609B2 (en) * 2002-09-17 2013-03-05 Apple Inc. Proximity detection for media proxies
US7455522B2 (en) * 2002-10-04 2008-11-25 Fuji Xerox Co., Ltd. Systems and methods for dynamic reading fluency instruction and improvement
US7752045B2 (en) * 2002-10-07 2010-07-06 Carnegie Mellon University Systems and methods for comparing speech elements
US7324944B2 (en) * 2002-12-12 2008-01-29 Brigham Young University, Technology Transfer Office Systems and methods for dynamically analyzing temporality in speech
US7424420B2 (en) * 2003-02-11 2008-09-09 Fuji Xerox Co., Ltd. System and method for dynamically determining the function of a lexical item based on context
US7363213B2 (en) * 2003-02-11 2008-04-22 Fuji Xerox Co., Ltd. System and method for dynamically determining the function of a lexical item based on discourse hierarchy structure
US7369985B2 (en) * 2003-02-11 2008-05-06 Fuji Xerox Co., Ltd. System and method for dynamically determining the attitude of an author of a natural language document
US7260519B2 (en) * 2003-03-13 2007-08-21 Fuji Xerox Co., Ltd. Systems and methods for dynamically determining the attitude of a natural language speaker
US9118574B1 (en) 2003-11-26 2015-08-25 RPX Clearinghouse, LLC Presence reporting using wireless messaging
US8515024B2 (en) 2010-01-13 2013-08-20 Ultratec, Inc. Captioned telephone service
GB2435373B (en) * 2004-02-18 2009-04-01 Ultratec Inc Captioned telephone service
US20060008781A1 (en) * 2004-07-06 2006-01-12 Ordinate Corporation System and method for measuring reading skills
US7433819B2 (en) * 2004-09-10 2008-10-07 Scientific Learning Corporation Assessing fluency based on elapsed time
US20060069562A1 (en) * 2004-09-10 2006-03-30 Adams Marilyn J Word categories
US9520068B2 (en) * 2004-09-10 2016-12-13 Jtt Holdings, Inc. Sentence level analysis in a reading tutor
US8109765B2 (en) * 2004-09-10 2012-02-07 Scientific Learning Corporation Intelligent tutoring feedback
US7243068B2 (en) * 2004-09-10 2007-07-10 Soliloquy Learning, Inc. Microphone setup and testing in voice recognition software
US7624013B2 (en) * 2004-09-10 2009-11-24 Scientific Learning Corporation Word competition models in voice recognition
US20060058999A1 (en) * 2004-09-10 2006-03-16 Simon Barker Voice model adaptation
US20060057545A1 (en) * 2004-09-14 2006-03-16 Sensory, Incorporated Pronunciation training method and apparatus
US20060136215A1 (en) * 2004-12-21 2006-06-22 Jong Jin Kim Method of speaking rate conversion in text-to-speech system
US11258900B2 (en) 2005-06-29 2022-02-22 Ultratec, Inc. Device independent text captioned telephone service
US20070055523A1 (en) * 2005-08-25 2007-03-08 Yang George L Pronunciation training system
US20070055514A1 (en) * 2005-09-08 2007-03-08 Beattie Valerie L Intelligent tutoring feedback
JP2006053578A (en) * 2005-09-12 2006-02-23 Nippon Tokei Jimu Center:Kk Test implementation method
US7697827B2 (en) 2005-10-17 2010-04-13 Konicek Jeffrey C User-friendlier interfaces for a camera
US20070166685A1 (en) * 2005-12-22 2007-07-19 David Gilbert Automated skills assessment
US20070179788A1 (en) * 2006-01-27 2007-08-02 Benco David S Network support for interactive language lessons
JP4911756B2 (en) * 2006-06-12 2012-04-04 株式会社日本統計事務センター Online testing system
JP2008022493A (en) * 2006-07-14 2008-01-31 Fujitsu Ltd Reception support system and its program
JP2008026463A (en) * 2006-07-19 2008-02-07 Denso Corp Voice interaction apparatus
WO2008024377A2 (en) * 2006-08-21 2008-02-28 Power-Glide Language Courses, Inc. Group foreign language teaching system and method
US20080140397A1 (en) * 2006-12-07 2008-06-12 Jonathan Travis Millman Sequencing for location determination
US20080140412A1 (en) * 2006-12-07 2008-06-12 Jonathan Travis Millman Interactive tutoring
US20080140411A1 (en) * 2006-12-07 2008-06-12 Jonathan Travis Millman Reading
US20080140652A1 (en) * 2006-12-07 2008-06-12 Jonathan Travis Millman Authoring tool
US20080140413A1 (en) * 2006-12-07 2008-06-12 Jonathan Travis Millman Synchronization of audio to reading
US20080160487A1 (en) * 2006-12-29 2008-07-03 Fairfield Language Technologies Modularized computer-aided language learning method and system
GB2451907B (en) * 2007-08-17 2010-11-03 Fluency Voice Technology Ltd Device for modifying and improving the behaviour of speech recognition systems
US8271281B2 (en) * 2007-12-28 2012-09-18 Nuance Communications, Inc. Method for assessing pronunciation abilities
US20090209341A1 (en) * 2008-02-14 2009-08-20 Aruze Gaming America, Inc. Gaming Apparatus Capable of Conversation with Player and Control Method Thereof
US20100075289A1 (en) * 2008-09-19 2010-03-25 International Business Machines Corporation Method and system for automated content customization and delivery
US8494857B2 (en) 2009-01-06 2013-07-23 Regents Of The University Of Minnesota Automatic measurement of speech fluency
US10088976B2 (en) 2009-01-15 2018-10-02 Em Acquisition Corp., Inc. Systems and methods for multiple voice document narration
JP5281659B2 (en) * 2009-01-20 2013-09-04 旭化成株式会社 Spoken dialogue apparatus, dialogue control method, and dialogue control program
US20110166862A1 (en) * 2010-01-04 2011-07-07 Eyal Eshed System and method for variable automated response to remote verbal input at a mobile device
US8392186B2 (en) 2010-05-18 2013-03-05 K-Nfb Reading Technology, Inc. Audio synchronization for document narration with user-selected playback
WO2012137131A1 (en) * 2011-04-07 2012-10-11 Mordechai Shani Providing computer aided speech and language therapy
JP2012128440A (en) * 2012-02-06 2012-07-05 Denso Corp Voice interactive device
WO2013138633A1 (en) 2012-03-15 2013-09-19 Regents Of The University Of Minnesota Automated verbal fluency assessment
US9635067B2 (en) 2012-04-23 2017-04-25 Verint Americas Inc. Tracing and asynchronous communication network and routing method
US20130282844A1 (en) 2012-04-23 2013-10-24 Contact Solutions LLC Apparatus and methods for multi-mode asynchronous communication
BR112016017972B1 (en) 2014-02-06 2022-08-30 Contact Solutions LLC METHOD FOR MODIFICATION OF COMMUNICATION FLOW
US20180034961A1 (en) 2014-02-28 2018-02-01 Ultratec, Inc. Semiautomated Relay Method and Apparatus
US10748523B2 (en) 2014-02-28 2020-08-18 Ultratec, Inc. Semiautomated relay method and apparatus
US10878721B2 (en) 2014-02-28 2020-12-29 Ultratec, Inc. Semiautomated relay method and apparatus
US10389876B2 (en) 2014-02-28 2019-08-20 Ultratec, Inc. Semiautomated relay method and apparatus
US20180270350A1 (en) 2014-02-28 2018-09-20 Ultratec, Inc. Semiautomated relay method and apparatus
US9166881B1 (en) 2014-12-31 2015-10-20 Contact Solutions LLC Methods and apparatus for adaptive bandwidth-based communication management
CN104464399A (en) * 2015-01-03 2015-03-25 杨茹芹 Novel display board for English teaching
US9947322B2 (en) * 2015-02-26 2018-04-17 Arizona Board Of Regents Acting For And On Behalf Of Northern Arizona University Systems and methods for automated evaluation of human speech
WO2017024248A1 (en) 2015-08-06 2017-02-09 Contact Solutions LLC Tracing and asynchronous communication network and routing method
US10063647B2 (en) 2015-12-31 2018-08-28 Verint Americas Inc. Systems, apparatuses, and methods for intelligent network communication and engagement
US9799324B2 (en) 2016-01-28 2017-10-24 Google Inc. Adaptive text-to-speech outputs
US10431112B2 (en) 2016-10-03 2019-10-01 Arthur Ward Computerized systems and methods for categorizing student responses and using them to update a student model during linguistic education
US10049664B1 (en) * 2016-10-27 2018-08-14 Intuit Inc. Determining application experience based on paralinguistic information
US20180197438A1 (en) 2017-01-10 2018-07-12 International Business Machines Corporation System for enhancing speech performance via pattern detection and learning
US10593351B2 (en) * 2017-05-03 2020-03-17 Ajit Arun Zadgaonkar System and method for estimating hormone level and physiological conditions by analysing speech samples
US11539900B2 (en) 2020-02-21 2022-12-27 Ultratec, Inc. Caption modification and augmentation systems and methods for use by hearing assisted user

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5065345A (en) * 1988-11-04 1991-11-12 Dyned International, Inc. Interactive audiovisual control mechanism
US5133560A (en) * 1990-08-31 1992-07-28 Small Maynard E Spelling game method
US5268990A (en) * 1991-01-31 1993-12-07 Sri International Method for recognizing speech using linguistically-motivated hidden Markov models
US5302132A (en) * 1992-04-01 1994-04-12 Corder Paul R Instructional system and method for improving communication skills
US5458494A (en) * 1993-08-23 1995-10-17 Edutech Research Labs, Ltd. Remotely operable teaching system and method therefor

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1990001203A1 (en) * 1988-07-25 1990-02-08 British Telecommunications Public Limited Company Language training
US5036539A (en) * 1989-07-06 1991-07-30 Itt Corporation Real-time speech processing development system
JPH06204952A (en) * 1992-09-21 1994-07-22 Internatl Business Mach Corp <Ibm> Training of speech recognition system utilizing telephone line
US5475792A (en) * 1992-09-21 1995-12-12 International Business Machines Corporation Telephony channel simulator for speech recognition application
US5393236A (en) * 1992-09-25 1995-02-28 Northeastern University Interactive speech pronunciation apparatus and method
WO1994010666A1 (en) * 1992-11-04 1994-05-11 The Secretary Of State For Defence In Her Britannic Majesty's Government Of The United Kingdom Of Great Britain And Northern Ireland Children's speech training aid
WO1994017508A1 (en) * 1993-01-21 1994-08-04 Zeev Shpiro Computerized system for teaching speech
WO1994020952A1 (en) * 1993-03-12 1994-09-15 Sri International Method and apparatus for voice-interactive language instruction
US5540589A (en) * 1994-04-11 1996-07-30 Mitsubishi Electric Information Technology Center Audio interactive tutor

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"READING TUTOR USING AN AUTOMATIC SPEECH RECOGNITION", IBM TECHNICAL DISCLOSURE BULLETIN, vol. 36, no. 8, 1 August 1993 (1993-08-01), pages 287 - 289, XP000390225 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19752907C2 (en) * 1997-11-28 2002-10-31 Egon Stephan Method for conducting a dialogue between a single user or multiple users and a computer
WO2000014700A1 (en) * 1998-09-04 2000-03-16 N.V. De Wilde Cbt Apparatus and method for personalized language exercise generation
WO2000042589A2 (en) * 1998-09-04 2000-07-20 N.V. De Wilde Cbt Apparatus and method for personalized language exercise generation
WO2000042589A3 (en) * 1998-09-04 2000-10-05 N.V. De Wilde Cbt Apparatus and method for personalized language exercise generation
WO2000022597A1 (en) * 1998-10-15 2000-04-20 Planetlingo Inc. Method for computer-aided foreign language instruction
WO2002005248A1 (en) * 2000-07-11 2002-01-17 Kabushiki Kaisha Nihon Toukei Jim Center Test conducting method and on-line test system

Also Published As

Publication number Publication date
JP2000501847A (en) 2000-02-15
CA2239691C (en) 2006-06-06
JP2005321817A (en) 2005-11-17
CA2239691A1 (en) 1997-06-12
DE69622439T2 (en) 2002-11-14
DE69622439D1 (en) 2002-08-22
EP0956552A1 (en) 1999-11-17
AU1128597A (en) 1997-06-27
ATE220817T1 (en) 2002-08-15
PT956552E (en) 2002-10-31
DK0956552T3 (en) 2002-11-04
EP0956552B1 (en) 2002-07-17
ES2180819T3 (en) 2003-02-16
US5870709A (en) 1999-02-09
HK1023638A1 (en) 2000-09-15

Similar Documents

Publication Publication Date Title
US5870709A (en) Method and apparatus for combining information from speech signals for adaptive interaction in teaching and testing
US6157913A (en) Method and apparatus for estimating fitness to perform tasks based on linguistic and other aspects of spoken responses in constrained interactions
US11527174B2 (en) System to evaluate dimensions of pronunciation quality
Bernstein et al. Automatic evaluation and training in English pronunciation.
US6324507B1 (en) Speech recognition enrollment for non-readers and displayless devices
US5487671A (en) Computerized system for teaching speech
JP3520022B2 (en) Foreign language learning device, foreign language learning method and medium
Cutler The comparative perspective on spoken-language processing
CN108431883B (en) Language learning system and language learning program
US20030154080A1 (en) Method and apparatus for modification of audio input to a data processing system
Cucchiarini et al. Second language learners' spoken discourse: Practice and corrective feedback through automatic speech recognition
WO2019215459A1 (en) Computer implemented method and apparatus for recognition of speech patterns and feedback
JP2000019941A (en) Pronunciation learning apparatus
Kabashima et al. DNN-based scoring of language learners’ proficiency using learners’ shadowings and native listeners’ responsive shadowings
JP2007148170A (en) Foreign language learning support system
JPH06348297A (en) Pronunciation trainer
Wik Designing a virtual language tutor
CN111508523A (en) Voice training prompting method and system
JP7039637B2 (en) Information processing equipment, information processing method, information processing system, information processing program
JP2005241767A (en) Speech recognition device
JP7195593B2 (en) Language learning devices and language learning programs
JP2001282098A (en) Foreign language learning device, foreign language learning method and medium
Dobrovolskyi et al. An approach to synthesis of a phonetically representative English text of minimal length
JP2020129023A (en) Language learning device and language learning program
JPH10326074A (en) Control method for language training device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AT AU AZ BA BB BG BR BY CA CH CN CU CZ CZ DE DE DK DK EE EE ES FI FI GB GE HU IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC

121 EP: the EPO has been informed by WIPO that EP was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (PCT application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 1996942132

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2239691

Country of ref document: CA

Ref country code: JP

Ref document number: 1997 521379

Kind code of ref document: A

Format of ref document f/p: F

Ref country code: CA

Ref document number: 2239691

Kind code of ref document: A

Format of ref document f/p: F

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1996942132

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 1996942132

Country of ref document: EP