WO2003038808A1 - Method of and system for transcribing dictations in text files and for revising the texts


Info

Publication number
WO2003038808A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
file
dictation
confidence
passages
Prior art date
Application number
PCT/IB2002/004466
Other languages
French (fr)
Inventor
Kwaku Frimpong-Ansah
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to JP2003540981A priority Critical patent/JP4145796B2/en
Priority to EP02777662A priority patent/EP1442451B1/en
Priority to DE60211197T priority patent/DE60211197T2/en
Publication of WO2003038808A1 publication Critical patent/WO2003038808A1/en


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/06 Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 Training
    • G10L 2015/0631 Creating reference templates; Clustering

Abstract

The invention relates to a method and a transcription system (T) for transcribing dictations, in which a dictation file (5) is converted into a text file (8), and subsequently the text file (8) is compared with the dictation file (5). To increase the speed for the subsequent correction, provision is made that during transcription of the dictation file (5) a confidence value is generated for a transcribed text passage of the text file (8), and a comparison of the text file (8) with the dictation file (5) takes place only in respect of those text passages for which the confidence value of the text passage is below a confidence limit, i.e. a text passage recognized as possibly defective is present.

Description

METHOD OF AND SYSTEM FOR TRANSCRIBING DICTATIONS IN TEXT FILES AND FOR REVISING THE TEXTS
The invention relates to a method for transcribing dictations, in which a dictation file is converted into a text file.
The invention also relates to a transcription system for transcribing dictations with means for converting a dictation file into a text file.
Dictations which have been recorded in various ways are converted or transcribed into text files by transcription services. Normally, automatic speech recognition systems are used for the transcription of dictations. Since the texts obtained in this way always contain a certain percentage of errors or unsuitable text passages, the transcribed dictations have to be checked after conversion, and errors contained in the text file corrected. Normally, this correction is undertaken by means of a comparison of the text file with the dictation file by correction operatives, who play back the dictation file and check the text file in parallel with this. In the event of a defective or unsuitable transcription or text passage picked up by the correction operatives, the defective or unsuitable text passage is replaced with a different text passage. This correction work is extremely time-consuming, thereby considerably increasing the costs of the transcription. Since an error-free transcription will virtually never be achieved, this subsequent correction cannot be dispensed with. One of the aims, therefore, is to make the correction work following a transcription as rapid and efficient as possible. In patent document US 5 712 957, a method for the correction of transcribed dictations is disclosed in which the transcribed text and possible hypotheses, i.e. alternative text passages, are offered and evaluated in two different ways. The transcription result is supplied by combining the two evaluations. Although this method reduces the probability of error in a transcribed text, it still makes a subsequent, time-consuming check by a correction operative necessary.
Patent document US 6 064 961 discloses a method for showing a transcribed text in a window for checking, in which the text section currently under review is always shown in a defined, centralized position in the window. This facilitates the proofreading of the transcribed text, accelerating it slightly at best. It is an object of the invention to accelerate a method for the transcription of dictations by improving the time-consuming correction method, so that the transcription result, i.e. the finished text, can be delivered to the author of the dictation as rapidly as possible. It should also be possible to reduce the costs of transcription. A further object of the invention consists in the creation of a transcription system for the transcription of dictations, which enables the fastest, most efficient transcription possible, so that the finished text can arrive with the author of the dictation as rapidly and as error-free as possible.
The object according to the invention is achieved in respect of the method in that for the converted or transcribed text passages information concerning their reliability is generated and a confidence value is generated for the relevant text passage, and a comparison of the text file with the dictation file takes place only in the case of text passages for which the confidence value is below a confidence limit, i.e. where text passages recognized as possibly defective are present. With the proviso of as good a determination of the confidence value as possible for the transcribed text passages, enormous time savings can be made with this method when correcting the transcribed text. Experience has shown that, when the method according to the invention is applied, only 10% - 20% of a dictation has to be listened to by a correction operative.
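The selection rule at the heart of the method can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the patent; the names `Passage` and `flag_for_review` are invented, and the 0..1000 confidence range anticipates the embodiment described later:

```python
from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    confidence: int  # confidence value, e.g. in the range 0..1000

def flag_for_review(passages, confidence_limit):
    # Core of the claimed method: only passages whose confidence value lies
    # at or below the confidence limit are recognized as possibly defective
    # and need be compared with the dictation file by the operative.
    return [p for p in passages if p.confidence <= confidence_limit]

flagged = flag_for_review(
    [Passage("the patient was", 950),
     Passage("hyper tension", 420),   # low confidence: possibly defective
     Passage("diagnosed with", 880)],
    confidence_limit=800)
print([p.text for p in flagged])
```

With a limit of 800, only the low-confidence passage is returned, so the operative listens to a fraction of the dictation.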
It is additionally advantageous if the text passages recognized as possibly defective are marked. This can be done by, for example, underlining the text passages in question or by color marking to highlight them.
It is preferable for the dictation file to be converted into a text file automatically using a speech recognition device.
According to a further feature of the invention, provision is made that during a correction procedure the playback speed for a dictation is altered depending on the confidence value of the relevant transcribed text passage when the text file is compared with the dictation file. Here, the dependency may be multi-stage in accordance with the marking of the text passage recognized as possibly defective. For example, in the case of a text passage recognized as very probably defective, the playback speed is considerably reduced, whereas it is increased in the case of a text passage recognized as less probably defective. In the case of defect-free text passages, the playback speed for a dictation can be increased to a stipulated maximum value; for example, the playback speed may be varied between 50% and the stipulated maximum value. If the confidence limit can be advantageously set, it is possible to achieve a further increase in efficiency.
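The multi-stage dependency described above could be implemented as a simple lookup. The concrete factors (0.5, 0.8, 2.0) and the half-limit threshold for "very probably defective" are assumptions for illustration; the patent fixes neither:

```python
def playback_speed(confidence, limit, strong_defect_limit=None):
    # Multi-stage speed selection: very probably defective passages are
    # played back considerably slower, less probably defective ones somewhat
    # slower, and defect-free ones at an increased, capped speed.
    if strong_defect_limit is None:
        strong_defect_limit = limit // 2
    if confidence > limit:
        return 2.0   # defect-free: stipulated maximum speed
    if confidence > strong_defect_limit:
        return 0.8   # less probably defective: mild slowdown
    return 0.5       # very probably defective: considerable slowdown
```

For example, with a limit of 800, a passage at confidence 900 plays at double speed, one at 600 slightly slowed, and one at 300 at half speed.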
To improve the end result, it is possible to repeat the comparison of the text file with the dictation file using an increased confidence limit, so that only text passages with a high error probability are recognized, and a correction is undertaken only for these errors. Although the overall time for the transcription is increased by a second comparison procedure, this can be very advantageous, or even prescribed, for certain applications. The object according to the invention is also achieved by a transcription system for transcribing dictations, comprising conversion means for converting a dictation file into a text file with text passages, and comprising file comparison means for comparing the text file with the dictation file, and comprising confidence-value generation means by which a confidence value can be generated for each converted text passage, and comprising comparison means for comparing the confidence value with a confidence limit, in which the file comparison means undertake the comparison of the text file with the dictation file only in the case of text passages for which the confidence value is below a confidence limit, i.e. where text passages recognized as possibly defective are present.
Hereby, marking means for marking the text passages recognized as possibly defective are advantageously provided. This marking may take place as a function of a confidence value which is assigned to a recognized text passage during the transcription. A marking can be used e.g. to highlight the text passage recognized as possibly defective for which the confidence value is below a confidence limit.
The means for converting the dictation file into a text file are advantageously in the form of a speech recognition device.
For one embodiment of the invention, a device for changing the playback speed for a dictation file as a function of text-file passages recognized as possibly defective can be provided. The changing of the playback speed can take place between two fixed values or between several values as a function of the result of the comparison of the confidence value of the particular transcribed text passage with the confidence limit. Means for inputting the confidence limit and thereby for changing it are advantageously provided, with which means a matching of the confidence value for the particular text passage to the particular requirements or according to the experience of a correction operative can also take place. Furthermore, a further correction run with a changed confidence limit can be provided. To facilitate final correction for the author of a dictation, means may be provided for weighting the text passages recognized as possibly defective in the transcribed text in which possible errors or inconsistencies have been found. These means may also be used by the author of the dictation for the final correction in order to indicate to the correction operative which text passages remained defective even after the correction, as a result of which information important to the transcription process can be gathered.
The invention will be further described with reference to examples of embodiment shown in the drawings to which, however, the invention is not restricted. Fig. 1 shows a block diagram of a conventional transcription system. Fig. 2 shows a flowchart which is followed when correcting a text file with text passages recognized as possibly defective.
Fig. 3 shows a flowchart of a conventional method for correcting a transcribed text.
Fig. 4 shows two variants of a method according to the invention for correcting a transcribed text.
Fig. 5 shows schematically a method for changing a confidence limit in a method according to the invention. Fig. 6 shows a block diagram of a part of a transcription system according to the invention.
Fig. 1 shows schematically a block diagram of a transcription system T, with which an author A creates a dictation which is stored either in a dictation device 1 or in a personal computer 2 or in a portable computer 3. It is also possible for author A to dictate into a telephone 4, after which the dictation is stored in, for example, a central computer. The dictation device 1 supplies a dictation file 5, which contains a digitized speech signal. A suitable format for such a file, which contains a digitized speech signal, is a WAV file, for instance. Likewise, the personal computer 2 or the portable computer 3, or a central computer addressed via telephone 4, supplies the corresponding dictation file 5 which contains the digital speech signal. The dictation file 5 or a speech signal 6 is normally fed to a speech recognition device 7 in which an automatic conversion of dictation file 5 or of speech signal 6 into a text file 8 takes place. For the speech recognition, the speech recognition device 7 accesses an information database 9 in which a multiplicity of possible words that could be recognized are contained. Hereby, account can be taken of, for instance, a voice profile and a sentence structure for certain application areas (e.g. from the field of medicine). Naturally, the text file 8 contains a certain number of defective or unsuitable text passages which subsequently have to be corrected. To this end, text file 8 is transferred to file comparison means 10 provided for the purpose, which file comparison means may also be referred to in the following as a correction device. In correction device 10, text file 8 is compared with dictation file 5, this normally being done by a correction operative, whereby the acoustic signal of author A is played back or reproduced, and compared with the text from text file 8 shown on a screen or on another display device.
This correction process naturally requires a particularly large amount of time, and accounts for a large proportion of the total processing time. The correction process is often repeated at least once more.
Fig. 3 shows a flowchart 400 of a conventional procedural sequence for correcting a transcribed text. Above a section of the speech signal 6 of dictation file 5 are five text passages W(n-3), W(n-2), W(n-1), W(n) and W(n+1) of text file 8. In accordance with a block 408 of flowchart 400, the start of speech signal 6 or of dictation file 5 is sought, and the playback of dictation file 5 or of speech signal 6 and a synchronous representation of text file 8, e.g. on a screen, starts up. In accordance with block 409, to assist the orientation of the correction operative, a cursor or similar is carried along in the text of text file 8 according to the position in speech signal 6, or the current position in the text is shown by corresponding marking of the relevant text passage W(n) and, at most, of the preceding text passage W(n-1) and the subsequent text passage W(n+1). In accordance with a block 410, the current text passages are highlighted e.g. by underlining or by changing the color of the text passages. The correction operative reads the displayed text of text file 8 and simultaneously listens to the speech signal 6, and corrects text passages which, in his estimation, are defective or unsuitable. Correction takes place e.g. by overwriting a text passage marked as defective W(n) with a correct or more suitable text or section of text.
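The cursor tracking of block 409 amounts to mapping the current playback time onto a passage index. A minimal sketch, assuming each passage carries start and end timestamps from the recognizer (the data layout here is an assumption, not taken from the patent):

```python
def passage_at(time_s, timestamps):
    # timestamps: list of (start_s, end_s, passage_index) tuples.
    # A linear scan keeps the displayed cursor on the passage that is
    # currently being played back; None means playback is past the text.
    for start, end, idx in timestamps:
        if start <= time_s < end:
            return idx
    return None
```

A real editor would call this on every playback tick and move the highlight to the returned passage.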
Following the correction process, a corrected text 11 can be fed to a device 12 for quality control. This quality control stage is normally also undertaken by a correction operative, who compares the dictation file 5 with the corrected text 11. Finally, in accordance with a block 14 in Fig. 1, a checked text file 13 is sent to the author A for perusal. This is done, for example, by sending the corrected, checked text file 13 via email. Once the author A has checked the text, he sends a message to this effect to the transcription location, whereupon the transcription is concluded, e.g. by issuing of the invoice. It is important in transcription processes of this kind to minimize the time span between the recording of a dictation by the author A and the receipt of the finished text by the author A in accordance with block 14. In automatic speech recognition systems, a large proportion of this time span is taken up by the correction and any quality control. It is therefore a prime objective to reduce this time span and thereby to shorten considerably the overall transcription process and, as a result, to keep the costs of the transcription low. Fig. 6 shows a block diagram of a part of a transcription system T that is important for the invention. The dictation file 5 is transferred to the speech recognition device 7 and converted into a text file 8, as already described in connection with Fig. 1. The speech recognition device 7 is equipped with confidence-value generation means 25, which is designed to generate a confidence value for a converted text passage W(n). The generation of confidence values of this kind is known in expert circles and is dealt with in, for example, A. Wendemuth, G. Rose, J.G.A. Dolfing: Advances in Confidence Measures for Large Vocabulary; Int. Conf. on Acoustics, Speech and Signal Processing, 1999. By virtue of the reference to this document, the disclosure contained therein is deemed as being included here too.
The confidence values supplied by the confidence-value generation means 25 may be within a confidence-value range from zero (0) to one thousand (1,000), whereby a confidence value of one thousand (1,000) means that the text passage W(n) has been correctly recognized or transcribed with 99.99% reliability. It can be mentioned here that the confidence value can equally be represented by a different range of figures, e.g. from zero (0) to one hundred (100).
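Moving between such ranges is a simple linear rescaling. A one-line sketch (illustrative only; the patent merely notes that other ranges are possible):

```python
def rescale(confidence, old_max=1000, new_max=100):
    # Map a confidence value from one range to another,
    # e.g. from 0..1000 down to 0..100 (integer result).
    return confidence * new_max // old_max
```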
The text file 8 produced is sent from the speech recognition device 7 to the downstream correction device 10, which is designed to display the text file 8 and play back dictation file 5, and to recognize and mark possibly defective text passages W(n). Connected to the correction device 10 are a display device 20, which is designed to display text file 8, and inputting means 19, which is also designed for manually altering a confidence value. The correction device 10 is equipped with weighting means 21, which is provided and designed for manually weighting the text passages W(n) of text file 8. The correction device 10 is also equipped with a device 22, which is designed for altering a playback speed of text passages W(n) of text file 8. Also contained in correction device 10 are marking means 23, which are designed for marking the text passages W(n), and comparison means 24, which are designed for comparing the confidence value with a confidence limit.
Fig. 2 shows a flowchart 300 of a process which runs in correction device 10 of transcription system T according to the invention. In accordance with a block 301, the dictation file, e.g. a WAV file, is opened, and the confidence value or confidence information is reproduced in accordance with a block 302 in the display device 20, which may be, for instance, a screen. The confidence information is represented, or the text passages are marked, in accordance with Fig. 6, in the marking means 23, and this may happen in various ways, e.g. by altering the color of the text displayed on the screen, i.e. by coloring the text passage W(n) according to the associated confidence value, or by coloring the background of the text passage W(n) according to the associated confidence value. Here, the color representation of the text passage W(n) can, for example, be determined from a linear color profile, from a color red for a minimum confidence value to a color green for a maximum confidence value. It can be mentioned that the marking of the text passage W(n) may also take place indirectly in that the color representation of all other text passages is changed as compared with the text passage to be marked W(n). In accordance with a block 303, a confidence limit CG is selected by the user or the correction operative, and in accordance with block 304, the text is checked for possible errors. The confidence limit CG may lie, for example, at 80% or 90% of a maximum confidence-value range. Accordingly, for each text passage W(n), an inquiry takes place at a block 305 as to whether the confidence value C(n) is smaller than, equal to, or greater than the confidence limit CG. In the event that the confidence limit CG is exceeded, then in accordance with a block 306, the selected text passage W(n) is not marked as possibly defective. If the confidence limit CG is undershot or equaled, the corresponding text passage W(n) is marked as possibly defective.
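The linear red-to-green color profile mentioned above can be sketched as a mapping from a confidence value to an RGB hex color. The function name and the hex-string output format are illustrative assumptions:

```python
def confidence_color(confidence, c_max=1000):
    # Linear color profile from red (minimum confidence) to green (maximum),
    # one of the marking options described for the marking means 23.
    t = max(0.0, min(1.0, confidence / c_max))       # clamp to [0, 1]
    return f"#{round(255 * (1 - t)):02x}{round(255 * t):02x}00"
```

A confidence of 0 yields pure red (`#ff0000`) and the maximum value pure green (`#00ff00`), with intermediate values blending between the two.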
Using the defects in text file 8 recognized in accordance with flowchart 300, a more efficient, considerably more rapid correction of the transcribed text or text file 8 can take place. The correction takes place in such a way that, when the text file 8 is compared with the dictation file 5 during the correction sequence, a jump takes place only to the text passages recognized as possibly defective, and only the text passages recognized as possibly defective have to be corrected by the correction operative. A considerable amount of time can be saved in this way, since the correction operative does not have to listen to the complete dictation file 5. The correction sequence can, for example, take place in such a way that the playback speed for the dictation or the dictation file 5 is altered as a function of the text passages recognized as possibly defective, whereby the playback speed is increased to, for instance, twice its value in the case of text passages not marked as possibly defective, whereas the playback speed is reduced when playing back possibly defective text passages.
Fig. 4 shows flowcharts 500A and 500B of two variants of the method according to the invention. A sequence of six successive text passages W(n-3) to W(n+2) is again shown schematically above the speech signal 6. In the example shown, three text passages, namely W(n-2), W(n-1) and W(n+1) have been recognized as possibly defective and marked accordingly, as shown by the hatching.
In accordance with the flowchart 500A, according to a block 511, the text file 8 and, in parallel, the dictation file 5 or the speech signal 6 is opened and played back, and, according to a block 512, the transcribed text is shown on the display device 20, which may be a monitor. According to a block 513, those text passages which have been classified as not defective are skipped during playback of the speech signal 6 or the dictation file 5, and a jump takes place to the start of the next text passage marked as defective W(n), with playback taking place from there to a next, successive text passage marked as not defective. According to a block 514, a check is made as to whether the end of dictation file 5 or the text file 8 has been reached, whereby, if the result of this decision question is negative, continuation takes place at block 513, and, if the result is positive, the sequence is terminated. In accordance with flowchart 500B, firstly, according to a block 520, the speech signal 6 or the dictation file 5 and, synchronously with this, the associated text file 8 are started and, according to a block 521, the playback of the speech signal 6 or dictation file 5 is started. According to a block 522, a check is made as to whether the end of text file 8 or dictation file 5 has been reached, whereby, in the case of a positive result, the sequence is terminated. Otherwise, in the case of a negative result to the check at block 522, a check is made at a block 523 as to whether the text passage W(n) has been marked as defective, whereby, in the case of a positive result, the sequence is continued at a block 524 or, otherwise, a jump to a block 525 is made. The playback speed for playing back the speech signal 6 and representing text file 8 is altered according to both block 524 and block 525. 
For example, according to block 525, the playback speed for the text passages marked as not defective W(n-3), W(n) and W(n+2) can be twice as fast as the normal playback speed and, according to block 524, the playback speed for the text passages recognized as possibly defective and marked accordingly W(n-2), W(n-1) and W(n+1) can be selected to be half as fast as the normal playback speed.
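Both playback variants can be expressed together in a small sketch. The passage representation and factor values follow the example in the text; everything else (names, tuple layout) is illustrative:

```python
def playback_plan(passages, marked, slow=0.5, fast=2.0):
    # Variant of flowchart 500A: skip passages not marked as possibly
    # defective entirely, playing back only the marked ones.
    variant_a = [p for p in passages if p in marked]
    # Variant of flowchart 500B: play everything, at half speed for marked
    # passages and at double speed for unmarked ones (factors from the text).
    variant_b = [(p, slow if p in marked else fast) for p in passages]
    return variant_a, variant_b

skip_plan, speed_plan = playback_plan(
    ["W(n-3)", "W(n-2)", "W(n-1)", "W(n)", "W(n+1)", "W(n+2)"],
    marked={"W(n-2)", "W(n-1)", "W(n+1)"})
```

In this example, variant A plays only the three marked passages, while variant B plays all six with per-passage speed factors.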
Fig. 5 shows schematically a method with which the confidence values are altered manually. One part of a text file 8 is again shown in the form of six successive text passages W(n-3) to W(n+2), and the profile of the automatically produced confidence values is sketched in a profile 15. According to the profile 15, the text passages W(n-2), W(n) and W(n+2) have a lower confidence value than the remaining text passages. If the correction operative now manually makes a contribution to the confidence values according to a profile 16, a correction of the confidence-value profile can take place. For example, the correction operative can, during the playing of text file 8, record with the input means 19, which may be, for instance, a keyboard, that the text passages W(n-2) and W(n) and W(n+2) are probably defective. In accordance with a profile 17, by combining the automatically determined confidence-value profile 15 and the manual confidence-value contribution 16, a resultant confidence-value profile is generated and, as a result, only the text passage W(n) is classified as possibly defective. Through a contribution by experienced correction operatives, a considerable reduction in the number of text passages recognized or classified as possibly defective can thereby be achieved, saving time on the subsequent correction. The method or system according to the invention for transcribing dictations can be used both in the conventional correction of a transcribed text and in the quality control of the transcribed text. Experience has shown that savings of up to 90% in correction time are achievable compared with conventional correction methods in which the entire dictation has to be listened to.
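The patent leaves the combination rule for profiles 15 and 16 open. One plausible reading, sketched here purely as an assumption, is to add the operative's signed contribution to each automatic value and re-apply the confidence limit to the resultant profile:

```python
def combine_profiles(auto_conf, manual_contrib, limit):
    # auto_conf: passage -> automatically generated confidence value
    # manual_contrib: passage -> signed manual correction (may be empty)
    # Returns the passages still classified as possibly defective after
    # combining both profiles, i.e. whose resultant value is at or below
    # the confidence limit.
    resultant = {p: c + manual_contrib.get(p, 0) for p, c in auto_conf.items()}
    return [p for p, c in resultant.items() if c <= limit]
```

With automatic values that flag W(n-2), W(n) and W(n+2), a positive manual contribution for W(n-2) and W(n+2) lifts those two above the limit, leaving only W(n) classified as possibly defective, which matches the outcome described for profile 17.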

Claims

CLAIMS:
1. A method for transcribing dictations, in which a dictation file (5) is converted into a text file (8) with text passages (W(n)), and in which the text file (8) is compared with the dictation file (5), in which, during the conversion, a respective confidence value is generated for the converted text passages (W(n)), and in which the comparison of the text file (8) with the dictation file (5) takes place only in the case of those text passages (W(n)) for which the confidence value is below a confidence limit (CG), i.e. where possibly defective text passages (W(n)) are present.
2. A method as claimed in claim 1, in which the text passages recognized as possibly defective (W(n)), for which the confidence value is below a confidence limit (CG), are marked.
3. A method as claimed in claim 1, in which the dictation file (5) is converted into a text file (8) automatically using a speech recognition device (7).
4. A method as claimed in claim 1, in which the text passages recognized as possibly defective (W(n)), for which the confidence value is below a confidence limit (CG), are equipped with a weighting factor.
5. A method as claimed in claim 1, in which a playback speed for the dictation file (5) is altered during comparison of the text file (8) with the dictation file (5) as a function of the confidence value of the relevant text passage (W(n)).
6. A method as claimed in claim 1, in which the confidence limit (CG) is adjustable.
7. A method as claimed in claim 1, in which the comparison of the text file (8) with the dictation file (5) is repeated with an increased confidence limit (CG).
8. A transcription system (T) for transcribing dictations comprising conversion means (7) for converting a dictation file (5) into a text file (8) with text passages (W(n)) and comprising file comparison means (10) for comparing the text file (8) with the dictation file (5), and comprising confidence-value generation means (25) by which a confidence value can be generated for each converted text passage (W(n)), and comprising comparison means (24) for comparing the confidence value with a confidence limit (CG), in which the file comparison means (10) undertake the comparison of the text file (8) with the dictation file (5) only in the case of those text passages (W(n)) for which the confidence value is below a confidence limit (CG), i.e. where text passages recognized as possibly defective are present.
9. A transcription system (T) as claimed in claim 8, in which marking means (23) are provided for marking the text passages recognized as possibly defective (W(n)), for which the confidence value is below a confidence limit (CG).
10. A transcription system (T) as claimed in claim 8, in which the conversion means (7) for converting the dictation file (5) into a text file (8) is in the form of a speech recognition device.
11. A transcription system (T) as claimed in claim 8, in which means (21) are provided for weighting the text passages (W(n)) of text file (8).
12. A transcription system (T) as claimed in claim 8, in which a device (22) is provided for altering a playback speed for the dictation file (5) during comparison of the text file (8) with the dictation file (5) as a function of the result of the comparison of the confidence value for the relevant text passage (W(n)) with the confidence limit (CG).
13. A transcription system (T) as claimed in claim 8, in which means (19) are provided for inputting the confidence limit (CG).
PCT/IB2002/004466 2001-10-31 2002-10-24 Method of and system for transcribing dictations in text files and for revising the texts WO2003038808A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2003540981A JP4145796B2 (en) 2001-10-31 2002-10-24 Method and system for writing dictation of text files and for correcting text
EP02777662A EP1442451B1 (en) 2001-10-31 2002-10-24 Method of and system for transcribing dictations in text files and for revising the texts
DE60211197T DE60211197T2 (en) 2001-10-31 2002-10-24 METHOD AND DEVICE FOR THE CONVERSION OF SPANISHED TEXTS AND CORRECTION OF THE KNOWN TEXTS

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP01890304.7 2001-10-31
EP01890304 2001-10-31

Publications (1)

Publication Number Publication Date
WO2003038808A1 true WO2003038808A1 (en) 2003-05-08

Family

ID=8185163

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2002/004466 WO2003038808A1 (en) 2001-10-31 2002-10-24 Method of and system for transcribing dictations in text files and for revising the texts

Country Status (7)

Country Link
US (1) US7184956B2 (en)
EP (1) EP1442451B1 (en)
JP (1) JP4145796B2 (en)
CN (1) CN1269105C (en)
AT (1) ATE325413T1 (en)
DE (1) DE60211197T2 (en)
WO (1) WO2003038808A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004088635A1 (en) * 2003-03-31 2004-10-14 Koninklijke Philips Electronics N.V. System for correction of speech recognition results with confidence level indication
EP1471502A1 (en) * 2003-04-25 2004-10-27 Sony International (Europe) GmbH Method for correcting a text produced by speech recognition

Families Citing this family (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5093966B2 (en) 2001-03-29 2012-12-12 ニュアンス コミュニケーションズ オーストリア ゲーエムベーハー Alignment of voice cursor and text cursor during editing
DE10142232B4 (en) 2001-08-29 2021-04-29 Roche Diabetes Care Gmbh Process for the production of an analytical aid with a lancet and test element
JP2005301953A (en) * 2004-04-12 2005-10-27 Kenichi Asano Method of relating speech and sentence corresponding to the same at hearer side pace
JP2005301811A (en) * 2004-04-14 2005-10-27 Olympus Corp Data processor, related data generating device, data processing system, data processing software, related data generating software, data processing method, and related data generating method
US8504369B1 (en) 2004-06-02 2013-08-06 Nuance Communications, Inc. Multi-cursor transcription editing
US7844464B2 (en) * 2005-07-22 2010-11-30 Multimodal Technologies, Inc. Content-based audio playback emphasis
US7836412B1 (en) 2004-12-03 2010-11-16 Escription, Inc. Transcription editing
US7640158B2 (en) 2005-11-08 2009-12-29 Multimodal Technologies, Inc. Automatic detection and application of editing patterns in draft documents
US7708702B2 (en) * 2006-01-26 2010-05-04 Roche Diagnostics Operations, Inc. Stack magazine system
US20070208567A1 (en) * 2006-03-01 2007-09-06 At&T Corp. Error Correction In Automatic Speech Recognition Transcripts
US7831423B2 (en) * 2006-05-25 2010-11-09 Multimodal Technologies, Inc. Replacing text representing a concept with an alternate written form of the concept
JP5167256B2 (en) * 2006-06-22 2013-03-21 マルチモーダル・テクノロジーズ・エルエルシー Computer mounting method
US8286071B1 (en) 2006-06-29 2012-10-09 Escription, Inc. Insertion of standard text in transcriptions
US8521510B2 (en) * 2006-08-31 2013-08-27 At&T Intellectual Property Ii, L.P. Method and system for providing an automated web transcription service
US8943394B2 (en) * 2008-11-19 2015-01-27 Robert Bosch Gmbh System and method for interacting with live agents in an automated call center
US8572488B2 (en) * 2010-03-29 2013-10-29 Avid Technology, Inc. Spot dialog editor
US8831940B2 (en) 2010-03-30 2014-09-09 Nvoq Incorporated Hierarchical quick note to allow dictated code phrases to be transcribed to standard clauses
US9760920B2 (en) 2011-03-23 2017-09-12 Audible, Inc. Synchronizing digital content
US9697871B2 (en) 2011-03-23 2017-07-04 Audible, Inc. Synchronizing recorded audio content and companion content
US9703781B2 (en) 2011-03-23 2017-07-11 Audible, Inc. Managing related digital content
US9706247B2 (en) 2011-03-23 2017-07-11 Audible, Inc. Synchronized digital content samples
US9734153B2 (en) 2011-03-23 2017-08-15 Audible, Inc. Managing related digital content
US8855797B2 (en) 2011-03-23 2014-10-07 Audible, Inc. Managing playback of synchronized content
US8862255B2 (en) * 2011-03-23 2014-10-14 Audible, Inc. Managing playback of synchronized content
US8948892B2 (en) 2011-03-23 2015-02-03 Audible, Inc. Managing playback of synchronized content
DE102011080145A1 (en) 2011-07-29 2013-01-31 Robert Bosch Gmbh Method and device for processing sensitivity data of a patient
US8849676B2 (en) 2012-03-29 2014-09-30 Audible, Inc. Content customization
US9037956B2 (en) 2012-03-29 2015-05-19 Audible, Inc. Content customization
GB2502944A (en) * 2012-03-30 2013-12-18 Jpal Ltd Segmentation and transcription of speech
US9075760B2 (en) 2012-05-07 2015-07-07 Audible, Inc. Narration settings distribution for content customization
US9317500B2 (en) 2012-05-30 2016-04-19 Audible, Inc. Synchronizing translated digital content
US8972265B1 (en) 2012-06-18 2015-03-03 Audible, Inc. Multiple voices in audio content
US9141257B1 (en) 2012-06-18 2015-09-22 Audible, Inc. Selecting and conveying supplemental content
US9536439B1 (en) 2012-06-27 2017-01-03 Audible, Inc. Conveying questions with content
US9679608B2 (en) 2012-06-28 2017-06-13 Audible, Inc. Pacing content
US10109278B2 (en) 2012-08-02 2018-10-23 Audible, Inc. Aligning body matter across content formats
US9367196B1 (en) 2012-09-26 2016-06-14 Audible, Inc. Conveying branched content
US9632647B1 (en) 2012-10-09 2017-04-25 Audible, Inc. Selecting presentation positions in dynamic content
US9223830B1 (en) 2012-10-26 2015-12-29 Audible, Inc. Content presentation analysis
US9280906B2 (en) 2013-02-04 2016-03-08 Audible. Inc. Prompting a user for input during a synchronous presentation of audio content and textual content
US9472113B1 (en) 2013-02-05 2016-10-18 Audible, Inc. Synchronizing playback of digital content with physical content
US9317486B1 (en) 2013-06-07 2016-04-19 Audible, Inc. Synchronizing playback of digital content with captured physical content
US9489360B2 (en) 2013-09-05 2016-11-08 Audible, Inc. Identifying extra material in companion content
CN106782627B (en) * 2015-11-23 2019-08-27 广州酷狗计算机科技有限公司 Audio file rerecords method and device
JP2018091954A (en) 2016-12-01 2018-06-14 オリンパス株式会社 Voice recognition device and voice recognition method
CN108647190B (en) * 2018-04-25 2022-04-29 北京华夏电通科技股份有限公司 Method, device and system for inserting voice recognition text into script document
CN108984529B (en) * 2018-07-16 2022-06-03 北京华宇信息技术有限公司 Real-time court trial voice recognition automatic error correction method, storage medium and computing device
CN110889309A (en) * 2018-09-07 2020-03-17 上海怀若智能科技有限公司 Financial document classification management system and method

Family Cites Families (12)

Publication number Priority date Publication date Assignee Title
US5712957A (en) 1995-09-08 1998-01-27 Carnegie Mellon University Locating and correcting erroneously recognized portions of utterances by rescoring based on two n-best lists
US5960447A (en) * 1995-11-13 1999-09-28 Holt; Douglas Word tagging and editing system for speech recognition
GB2303955B (en) * 1996-09-24 1997-05-14 Allvoice Computing Plc Data processing method and apparatus
US6006183A (en) * 1997-12-16 1999-12-21 International Business Machines Corp. Speech recognition confidence level display
DE19821422A1 (en) * 1998-05-13 1999-11-18 Philips Patentverwaltung Method for displaying words determined from a speech signal
US6064961A (en) 1998-09-02 2000-05-16 International Business Machines Corporation Display for proofreading text
US6366296B1 (en) * 1998-09-11 2002-04-02 Xerox Corporation Media browser using multimodal analysis
US6219638B1 (en) * 1998-11-03 2001-04-17 International Business Machines Corporation Telephone messaging and editing system
FI116991B (en) * 1999-01-18 2006-04-28 Nokia Corp A method for speech recognition, a speech recognition device and a voice controlled wireless message
JP2003518266A (en) * 1999-12-20 2003-06-03 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Speech reproduction for text editing of speech recognition system
US7092496B1 (en) * 2000-09-18 2006-08-15 International Business Machines Corporation Method and apparatus for processing information signals based on content
US6973428B2 (en) * 2001-05-24 2005-12-06 International Business Machines Corporation System and method for searching, analyzing and displaying text transcripts of speech after imperfect speech recognition

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
EP0651372A2 (en) * 1993-10-27 1995-05-03 AT&T Corp. Automatic speech recognition (ASR) processing using confidence measures
EP0987683A2 (en) * 1998-09-16 2000-03-22 Philips Corporate Intellectual Property GmbH Speech recognition method with confidence measure

Non-Patent Citations (5)

Title
GLASS ET AL: "Real-Time Telephone-Based Speech Recognition in the Jupiter Domain", ICASSP 1999, pages 61 - 64 *
KAMPPARI S O ET AL: "Word and phone level acoustic confidence scoring", ICASSP 1999, XP010507710 *
KRISTJANSSON T ET AL: "A unified structure-based framework for indexing and gisting of meetings", MULTIMEDIA COMPUTING AND SYSTEMS, 1999. IEEE INTERNATIONAL CONFERENCE ON FLORENCE, ITALY 7-11 JUNE 1999, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 7 June 1999 (1999-06-07), pages 572 - 577, XP010519451, ISBN: 0-7695-0253-9 *
PALMER D D ET AL: "Robust information extraction from automatically generated speech transcriptions", SPEECH COMMUNICATION, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 32, no. 1-2, September 2000 (2000-09-01), pages 95 - 109, XP004216248, ISSN: 0167-6393 *
WEINTRAUB M ET AL: "Neural-network based measures of confidence for word recognition", ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 1997. ICASSP-97., 1997 IEEE INTERNATIONAL CONFERENCE ON MUNICH, GERMANY 21-24 APRIL 1997, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 21 April 1997 (1997-04-21), pages 887 - 890, XP010225937, ISBN: 0-8186-7919-0 *

Cited By (3)

Publication number Priority date Publication date Assignee Title
WO2004088635A1 (en) * 2003-03-31 2004-10-14 Koninklijke Philips Electronics N.V. System for correction of speech recognition results with confidence level indication
EP1471502A1 (en) * 2003-04-25 2004-10-27 Sony International (Europe) GmbH Method for correcting a text produced by speech recognition
US7356467B2 (en) 2003-04-25 2008-04-08 Sony Deutschland Gmbh Method for processing recognized speech using an iterative process

Also Published As

Publication number Publication date
DE60211197T2 (en) 2007-05-03
EP1442451A1 (en) 2004-08-04
ATE325413T1 (en) 2006-06-15
JP4145796B2 (en) 2008-09-03
EP1442451B1 (en) 2006-05-03
DE60211197D1 (en) 2006-06-08
US20030083885A1 (en) 2003-05-01
CN1578976A (en) 2005-02-09
JP2005507536A (en) 2005-03-17
CN1269105C (en) 2006-08-09
US7184956B2 (en) 2007-02-27

Similar Documents

Publication Publication Date Title
US7184956B2 (en) Method of and system for transcribing dictations in text files and for revising the text
US7818175B2 (en) System and method for report level confidence
US6735565B2 (en) Select a recognition error by comparing the phonetic
US7702512B2 (en) Natural error handling in speech recognition
US6839667B2 (en) Method of speech recognition by presenting N-best word candidates
US7562014B1 (en) Active learning process for spoken dialog systems
US7881930B2 (en) ASR-aided transcription with segmented feedback training
US6138099A (en) Automatically updating language models
US20100057461A1 (en) Method and system for creating or updating entries in a speech recognition lexicon
US20060161434A1 (en) Automatic improvement of spoken language
KR20050076697A (en) Automatic speech recognition learning using user corrections
US20200043496A1 (en) Ensemble modeling of automatic speech recognition output
CN111883137A (en) Text processing method and device based on voice recognition
US20060100854A1 (en) Computer generation of concept sequence correction rules
US7689414B2 (en) Speech recognition device and method
Greenberg et al. An introduction to the diagnostic evaluation of Switchboard-corpus automatic speech recognition systems
JP2001282779A (en) Electronized text preparation system
CA2597826C (en) Method, software and device for uniquely identifying a desired contact in a contacts database based on a single utterance
JP7216771B2 (en) Apparatus, method, and program for adding metadata to script
GB2418764A (en) Combining perturbed speech input signals prior to recogntion
JPH08328582A (en) Learning method of hidden-markov-model(hmm)

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the EPO has been informed by WIPO that EP was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2003540981

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2002777662

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 20028217691

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 2002777662

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 2002777662

Country of ref document: EP