|Publication number||USRE42000 E1|
|Application number||US 10/038,153|
|Publication date||Dec 14, 2010|
|Filing date||Oct 19, 2001|
|Priority date||Dec 13, 1996|
|Also published as||DE19753453A1, DE19753453B4, US5970459|
|Inventors||Jae Woo Yang, Jung Chul Lee, Min Soo Hahn, Hang Seop Lee, YoungJik Lee|
|Original Assignee||Electronics And Telecommunications Research Institute|
1. Field of the Invention
The present invention relates to a system for synchronization between a moving picture and a text-to-speech (TTS) converter, and more particularly to a system for synchronization between a moving picture and a text-to-speech converter which can realize synchronization between the moving picture and a synthesized speech by using lip motion timing and speech duration information.
2. Description of the Related Art
In general, a speech synthesizer provides a user with various types of information in an audible form. For this purpose, the speech synthesizer should provide a high quality speech synthesis service for the given input texts. In addition, in order for the speech synthesizer to be operatively coupled to a database constructed in a multi-media environment, or to various media provided by a counterpart involved in a conversation, the speech synthesizer should be able to generate a synthesized speech that is synchronized with these media. In particular, synchronization between a moving picture and the TTS is essential to providing a user with a high quality service.
At step 1, a language processing unit 1 converts an input text into a phoneme string, estimates prosodic information, and symbolizes it. The symbols for the prosodic information are estimated from the phrase boundaries, clause boundaries, accent positions, sentence patterns, etc., by analyzing the syntactic structure. At step 2, a prosody processing unit 2 calculates the values of the prosody control parameters from the symbolized prosodic information by using rules and tables. The prosody control parameters include phoneme durations and pause interval information. Finally, a signal processing unit 3 generates a synthesized speech by using a synthesis unit DB 4 and the prosody control parameters. That is, the conventional synthesizer must estimate the prosodic information related to naturalness and speaking rate from the input text alone, in the language processing unit 1 and the prosody processing unit 2.
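For illustration only, the three conventional steps can be sketched as the following toy pipeline; the lexicon, duration table, and function names are hypothetical stand-ins for the language processing unit 1, prosody processing unit 2, and signal processing unit 3 described above, not the actual implementation.

```python
# A minimal, hypothetical sketch of the conventional three-step pipeline
# (language processing -> prosody processing -> signal processing).
# The lexicon, duration table, and rules are toy stand-ins, not the patent's data.

PHONEME_TABLE = {"hello": ["h", "eh", "l", "ow"], "world": ["w", "er", "l", "d"]}  # toy lexicon
DURATION_TABLE = {"h": 60, "eh": 120, "l": 70, "ow": 160, "w": 70, "er": 130, "d": 60}  # ms, toy values

def language_processing(text):
    """Step 1: convert the input text into a phoneme string plus symbolic prosodic information."""
    phonemes = [p for word in text.lower().split() for p in PHONEME_TABLE.get(word, [])]
    symbols = {"sentence_pattern": "declarative", "phrase_boundary_after": len(phonemes)}
    return phonemes, symbols

def prosody_processing(phonemes, symbols):
    """Step 2: turn the symbolic prosody into numeric control parameters (durations, pauses)."""
    params = [{"phoneme": p, "duration_ms": DURATION_TABLE[p], "pause_ms": 0} for p in phonemes]
    if params:
        params[-1]["pause_ms"] = 200  # pause at the phrase boundary
    return params

def signal_processing(params, sample_rate=16000):
    """Step 3: generate speech; here we only compute how many samples each phoneme occupies."""
    return [int(p["duration_ms"] / 1000 * sample_rate) for p in params]

phonemes, symbols = language_processing("hello world")
print(signal_processing(prosody_processing(phonemes, symbols)))
```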
Presently, much research on TTS is being conducted around the world for application to native languages, and some countries have already launched commercial services. However, the conventional synthesizer is aimed only at synthesizing a speech from an input text, and there has been no research on a synthesis method that can be used in connection with multi-media. In addition, when dubbing is performed on a moving picture or animation by using the conventional TTS method, the information required to synchronize the media with the synthesized speech cannot be estimated from the text alone. Thus, it is not possible to generate a synthesized speech that is smoothly and operatively coupled to a moving picture from text information only.
If the synchronization between a moving picture and a synthesized speech is regarded as a kind of dubbing, there are three possible implementation methods. The first method synchronizes the moving picture with the synthesized speech on a sentence basis. It regulates the time duration of the synthesized speech by using information on the start point and end point of the sentence. This method has the advantage that it is easy to implement and the additional effort is minimal; however, smooth synchronization cannot be achieved with this method. As an alternative, there is a method wherein the start point, end point, and phoneme symbol of every phoneme are transcribed for the interval of the moving picture related to the speech signal and used in generating the synthesized speech. Since the synchronization of the moving picture with the synthesized speech can be achieved for each phoneme, the accuracy is enhanced. However, this method has the disadvantage that additional effort is required to detect and record time duration information for every phoneme in the speech interval of the moving picture.
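As a minimal illustration of the sentence-basis method, the phoneme durations predicted by the prosody module could be uniformly rescaled so that the synthesized sentence fills the interval between the sentence's start and end points in the moving picture; the function and parameter names below are hypothetical.

```python
# Hypothetical sketch of sentence-level synchronization: uniformly rescale the
# predicted phoneme durations so that the sentence fills the video interval
# given by its start and end points. Names and values are illustrative.

def scale_to_sentence_interval(phoneme_durations_ms, start_ms, end_ms):
    """Return durations rescaled so that their sum equals the sentence interval."""
    target = end_ms - start_ms
    total = sum(phoneme_durations_ms)
    if total <= 0 or target <= 0:
        return list(phoneme_durations_ms)
    factor = target / total
    return [d * factor for d in phoneme_durations_ms]

# A 1.5 s sentence interval in the video, versus 1.2 s of predicted speech:
print(scale_to_sentence_interval([120, 300, 200, 280, 300], start_ms=10000, end_ms=11500))
```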
As another alternative, there is a method wherein synchronization information is recorded based on patterns by which a lip motion can be easily distinguished, such as the start and end points of the speech, the opening and closing of the lips, protrusion of the lips, etc. This method can enhance the efficiency of synchronization while minimizing the additional effort required to produce the synchronization information.
It is therefore an object of the present invention to provide a method of formatting and normalizing continuous lip motions into events in a moving picture, in addition to a text, for use in a text-to-speech converter.
It is another object of the invention to provide a system for synchronization between moving picture and a synthesized speech by defining an interface between event information and the TTS and using it in generating the synthesized speech.
In accordance with one aspect of the present invention, a system for synchronization between a moving picture and a text-to-speech converter is provided which comprises distributing means for receiving multi-media input information, transforming it into the respective data structures, and distributing it to each medium; image output means for receiving image information of the multi-media information from said distributing means; language processing means for receiving language texts of the multi-media information from said distributing means, transforming the texts into phoneme strings, and estimating and symbolizing prosodic information; prosody processing means for receiving the processing results from said language processing means and calculating the values of the prosodic control parameters; synchronization adjusting means for receiving the processing results from said prosody processing means, adjusting the time durations of every phoneme for synchronization with the image signals by using the synchronization information of the multi-media information from said distributing means, and inserting the adjusted time durations into the results of said prosody processing means; signal processing means for receiving the processing results from said synchronization adjusting means to generate a synthesized speech; and a synthesis unit database block for selecting the units required for synthesis in accordance with requests from said signal processing means and transferring the required data.
The present invention will become more apparent upon a detailed description of the preferred embodiments for carrying out the invention as rendered below. In the description to follow, references will be made to the accompanying drawings, where like reference numerals are used to identify like or similar elements in the various drawings and in which:
Data comprising multi-media such as an image, text, etc. is input to the multi-data input unit 5, which outputs the input data to the central processing unit 6. The algorithm in accordance with the present invention is embedded in the central processing unit 6. The synthesis database 7, a synthesis DB used by the synthesis algorithm, is stored in a storage device and transmits the necessary data to the central processing unit 6. The digital/analog converter 8 converts the synthesized digital data into an analog signal and outputs it to the exterior. The image output unit 9 displays the input image information on the screen.
Table 1 as shown below illustrates one example of structured multi-media input information to be used in connection with the present invention. The structured information includes a text, moving picture, lip shape, information on positions in the moving picture, and information on the time duration. The lip shape can be transformed into numerical values based on a degree of a down motion of a lower lip, up and down motion at the left edge of an upper lip, up and down motion at the right edge of an upper lip, up and down motion at the left edge of a lower lip, up and down motion at the right edge of a lower lip, up and down motion at the center portion of an upper lip, up and down motion at the center portion of a lower lip, degree of protrusion of an upper lip, degree of protrusion of a lower lip, distance from the center of a lip to the right edge of a lip, and distance from the center of a lip to the left edge of a lip. The lip shape can also be defined in a quantified and normalized pattern in accordance with the position and manner of articulation for each phoneme. The information on positions is defined by the position of a scene in a moving picture, and the time duration is defined by the number of the scenes in which the same lip shape is maintained.
TABLE 1. Example of Synchronization Information

|Item||Content|
|Lip shape||degree of a down motion of a lower lip, up and down motion at the left edge of an upper lip, up and down motion at the right edge of an upper lip, up and down motion at the left edge of a lower lip, up and down motion at the right edge of a lower lip, up and down motion at the center portion of an upper lip, up and down motion at the center portion of a lower lip, degree of protrusion of an upper lip, degree of protrusion of a lower lip, distance from the center of a lip to the right edge of a lip, and distance from the center of a lip to the left edge of a lip|
|Position||position of scene in moving picture|
|Time duration||number of continuous scenes|
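For illustration, the synchronization information of Table 1 can be represented by a record like the following sketch; the class and field names are hypothetical, but the fields mirror the table: the quantified lip-shape parameters, the position of the scene in the moving picture, and the duration expressed as a number of continuous scenes.

```python
# Hypothetical record mirroring Table 1: quantified lip-shape parameters, the
# scene position in the moving picture, and the duration as a number of
# continuous scenes in which the same lip shape is maintained. Class and field
# names are illustrative, not the patent's actual format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LipShape:
    lower_lip_down: float          # degree of a down motion of the lower lip
    upper_lip_left: float          # up/down motion at the left edge of the upper lip
    upper_lip_right: float         # up/down motion at the right edge of the upper lip
    lower_lip_left: float          # up/down motion at the left edge of the lower lip
    lower_lip_right: float         # up/down motion at the right edge of the lower lip
    upper_lip_center: float        # up/down motion at the center of the upper lip
    lower_lip_center: float        # up/down motion at the center of the lower lip
    upper_lip_protrusion: float    # degree of protrusion of the upper lip
    lower_lip_protrusion: float    # degree of protrusion of the lower lip
    center_to_right: float         # distance from the lip center to the right lip edge
    center_to_left: float          # distance from the lip center to the left lip edge

@dataclass
class SyncEvent:
    lip_shape: LipShape
    scene_position: int            # position of the scene in the moving picture
    scene_count: int               # number of continuous scenes with this lip shape

@dataclass
class MultimediaInput:
    text: str
    sync_events: List[SyncEvent] = field(default_factory=list)
```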
The multi-media information in the multi-media information input unit 10 is structured in the format shown above in Table 1, and comprises a text, a moving picture, lip shapes, information on positions in the moving picture, and information on time durations. The multi-media distributor 11 receives the multi-media information from the multi-media information input unit 10, and transfers the images and texts of the multi-media information to the image output unit 17 and the language processing unit 12, respectively. When the synchronization information is transferred, it is converted into a data structure that can be used in the synchronization adjusting unit 14.
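As a minimal sketch of this routing, assuming simple objects stand in for each unit (the class, method, and key names below are illustrative, not the patent's interfaces):

```python
# Hypothetical sketch of the multi-media distributor: images are routed to the
# image output unit, texts to the language processing unit, and the synchronization
# information is converted into the structure expected by the synchronization
# adjusting unit. All class, method, and key names are illustrative.

class MultimediaDistributor:
    def __init__(self, image_output, language_unit, sync_unit):
        self.image_output = image_output
        self.language_unit = language_unit
        self.sync_unit = sync_unit

    def distribute(self, multimedia):
        # route the image and text media to their consumers
        self.image_output.show(multimedia["moving_picture"])
        self.language_unit.process(multimedia["text"])
        # convert the synchronization records into (lip_shape, position, scene_count) tuples
        events = [(e["lip_shape"], e["scene_position"], e["scene_count"])
                  for e in multimedia["sync_info"]]
        self.sync_unit.set_events(events)
```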
The language processing unit 12 converts the texts received from the multi-media distributor 11 into a phoneme string, estimates and symbolizes prosodic information, and transfers it to the prosody processing unit 13. The symbols for the prosodic information are estimated from the phrase boundaries, clause boundaries, accent positions, sentence patterns, etc., by using the results of the syntactic structure analysis.
The prosody processing unit 13 receives the processing results from the language processing unit 12, and calculates the values of the prosodic control parameters. The prosodic control parameters include the time durations of the phonemes, the pitch contour, the energy contour, and the positions and lengths of pauses. The calculated results are transferred to the synchronization adjusting unit 14.
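For illustration, these prosodic control parameters can be grouped into a per-phoneme record like the following sketch; the class and field names are hypothetical.

```python
# Hypothetical per-phoneme record for the prosodic control parameters named above:
# phoneme duration, pitch contour, energy contour, and pause position/length.
# Field names and values are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProsodyParams:
    phoneme: str
    duration_ms: float                                          # time duration of the phoneme
    pitch_contour: List[float] = field(default_factory=list)   # pitch samples (Hz) across the phoneme
    energy_contour: List[float] = field(default_factory=list)  # relative energy samples
    pause_after_ms: float = 0.0                                 # pause length following the phoneme, if any

utterance = [ProsodyParams("a", 120.0, [180, 175, 170], [0.8, 0.9, 0.7]),
             ProsodyParams("n", 90.0, [170, 168, 165], [0.6, 0.6, 0.5], pause_after_ms=150.0)]
```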
The synchronization adjusting unit 14 receives the processing results from the prosody processing unit 13, and adjusts the time durations of every phoneme for synchronization with the image signal by using the synchronization information received from the multi-media distributor 11. To adjust the time durations of the phonemes, a lip shape is allocated to each phoneme in accordance with the position and manner of articulation of that phoneme, and the series of phonemes is divided into small groups corresponding to the number of lip shapes recorded in the synchronization information by comparing the lip shape allocated to each phoneme with the lip shapes in the synchronization information.
The time durations of the phonemes in each small group are then recalculated by using the information on the time durations of the lip shapes included in the synchronization information. The adjusted time duration information is inserted into the results of the prosody processing unit 13, and the combined results are transferred to the signal processing unit 15.
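A hedged sketch of this adjustment follows: each phoneme is mapped to a lip-shape class by its place and manner of articulation, the phoneme sequence is segmented into groups matching the lip shapes in the synchronization information, and the durations within each group are rescaled to the time that group's lip shape occupies in the moving picture (number of scenes multiplied by the frame interval). The articulation mapping, the 30 frames-per-second assumption, and all names are illustrative, not the patent's actual rules.

```python
# Hypothetical sketch of the synchronization adjusting step: allocate a lip-shape
# class to each phoneme, split the phoneme sequence into groups matching the lip
# shapes recorded in the synchronization information, and rescale the durations
# inside each group to the time that lip shape occupies in the moving picture.
# The articulation-based mapping, the 30 fps frame rate, and all names are
# illustrative assumptions.

FRAME_MS = 1000.0 / 30.0                      # assumed duration of one scene/frame

# toy mapping from phoneme to a lip-shape class based on place/manner of articulation
PHONEME_TO_LIPSHAPE = {"p": "closed", "b": "closed", "m": "closed",
                       "a": "open", "o": "round", "u": "round", "s": "spread"}

def adjust_durations(phonemes, durations_ms, sync_events):
    """phonemes: phoneme symbols; durations_ms: predicted durations;
    sync_events: (lip_shape, scene_count) pairs from the synchronization info."""
    adjusted = []
    i = 0
    for lip_shape, scene_count in sync_events:
        # collect the run of phonemes whose allocated lip shape matches this event
        group = []
        while i < len(phonemes) and PHONEME_TO_LIPSHAPE.get(phonemes[i]) == lip_shape:
            group.append(durations_ms[i])
            i += 1
        target = scene_count * FRAME_MS        # time this lip shape is held in the video
        total = sum(group)
        factor = target / total if total else 0.0
        adjusted.extend(d * factor for d in group)
    adjusted.extend(durations_ms[i:])          # any remaining phonemes keep their durations
    return adjusted

print(adjust_durations(["m", "a", "s"], [80.0, 150.0, 110.0],
                       [("closed", 3), ("open", 5), ("spread", 4)]))
```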
The signal processing unit 15 receives the processing results from the synchronization adjusting unit 14, generates a synthesized speech by using the synthesis unit DB 16, and outputs it. The synthesis unit DB 16 selects the synthesis units required for synthesis in accordance with the requests from the signal processing unit 15, and transfers the required data to the signal processing unit 15.
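As a minimal, hypothetical sketch of this final step, a stored waveform could be fetched for each phoneme from a synthesis-unit database and crudely stretched to the adjusted duration before concatenation; the database contents, sample rate, and names below are illustrative, and real synthesizers use far more careful unit selection and smoothing.

```python
# Hypothetical sketch of the signal processing step: fetch a stored waveform for
# each phoneme from a synthesis-unit database and repeat/truncate it to the
# adjusted duration before concatenation. Names and data are illustrative.

SAMPLE_RATE = 16000
SYNTHESIS_UNIT_DB = {                      # toy "database": phoneme -> waveform samples
    "m": [0.1] * 1200,
    "a": [0.5] * 2400,
    "s": [0.2] * 1600,
}

def synthesize(phonemes, durations_ms):
    speech = []
    for phoneme, dur in zip(phonemes, durations_ms):
        unit = SYNTHESIS_UNIT_DB[phoneme]
        target_len = int(dur / 1000.0 * SAMPLE_RATE)
        # crude length adjustment: repeat the unit and cut it to the target length
        stretched = (unit * (target_len // len(unit) + 1))[:target_len]
        speech.extend(stretched)
    return speech

print(len(synthesize(["m", "a", "s"], [100.0, 166.7, 133.3])))
```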
In accordance with the present invention, a synthesized speech can be synchronized with a moving picture by analyzing the real speech data and the lip shapes in the moving picture, and directly using the estimated lip shape information and the text information in generating the synthesized speech. Accordingly, dubbing into a target language can be performed on foreign-language movies. Further, since the synchronization of image information with the TTS is made possible in the multi-media environment, the present invention can be used in various applications such as communication services, office automation, education, etc.
The present invention has been described with reference to a particular embodiment in connection with a particular application. Those having ordinary skill in the art and access to the teachings of the present invention will recognize additional modifications and applications within the scope thereof.
It is therefore intended by the appended claims to cover any and all such applications, modifications, and embodiments within the scope of the present invention.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4260229||Jan 23, 1978||Apr 7, 1981||Bloomstein Richard W||Creating visual images of lip movements|
|US4305131||Mar 31, 1980||Dec 8, 1981||Best Robert M||Dialog between TV movies and human viewers|
|US4841575||Nov 14, 1986||Jun 20, 1989||British Telecommunications Public Limited Company||Image encoding and synthesis|
|US5111409||Jul 21, 1989||May 5, 1992||Elon Gasper||Authoring and use systems for sound synchronized animation|
|US5313522||Apr 15, 1993||May 17, 1994||Slager Robert P||Apparatus for generating from an audio signal a moving visual lip image from which a speech content of the signal can be comprehended by a lipreader|
|US5386581||Sep 3, 1993||Jan 31, 1995||Matsushita Electric Industrial Co., Ltd.||Multimedia data editing apparatus including visual graphic display of time information|
|US5500919||Nov 18, 1992||Mar 19, 1996||Canon Information Systems, Inc.||Graphics user interface for controlling text-to-speech conversion|
|US5557661||Oct 26, 1994||Sep 17, 1996||Nec Corporation||System for coding and decoding moving pictures based on the result of speech analysis|
|US5608839||Jun 21, 1994||Mar 4, 1997||Lucent Technologies Inc.||Sound-synchronized video system|
|US5615300||May 26, 1993||Mar 25, 1997||Toshiba Corporation||Text-to-speech synthesis with controllable processing time and speech quality|
|US5630017||May 31, 1995||May 13, 1997||Bright Star Technology, Inc.||Advanced tools for speech synchronized animation|
|US5636325||Jan 5, 1994||Jun 3, 1997||International Business Machines Corporation||Speech synthesis and analysis of dialects|
|US5650629||Jun 28, 1994||Jul 22, 1997||The United States Of America As Represented By The Secretary Of The Air Force||Field-symmetric beam detector for semiconductors|
|US5657426||Jun 10, 1994||Aug 12, 1997||Digital Equipment Corporation||Method and apparatus for producing audio-visual synthetic speech|
|US5677739||Mar 2, 1995||Oct 14, 1997||National Captioning Institute||System and method for providing described television services|
|US5677993||Aug 31, 1993||Oct 14, 1997||Hitachi, Ltd.||Information processing apparatus using pointing input and speech input|
|US5689618||May 31, 1995||Nov 18, 1997||Bright Star Technology, Inc.||Advanced tools for speech synchronized animation|
|US5729694||Feb 6, 1996||Mar 17, 1998||The Regents Of The University Of California||Speech coding, reconstruction and recognition using acoustics and electromagnetic waves|
|US5751906||Jan 29, 1997||May 12, 1998||Nynex Science & Technology||Method for synthesizing speech from text and for spelling all or portions of the text by analogy|
|US5774854||Nov 22, 1994||Jun 30, 1998||International Business Machines Corporation||Text to speech system|
|US5777612||Nov 22, 1995||Jul 7, 1998||Fujitsu Limited||Multimedia dynamic synchronization system|
|US5860064||Feb 24, 1997||Jan 12, 1999||Apple Computer, Inc.||Method and apparatus for automatic generation of vocal emotion in a synthetic text-to-speech system|
|AT72083B||Title not available|
|DE4101022A1||Jan 16, 1991||Jul 23, 1992||Medav Digitale Signalverarbeit||Variable speed reproduction of audio signal without spectral change - dividing digitised audio signal into blocks, performing transformation, and adding or omitting blocks before reverse transformation|
|EP0225729A1||Nov 10, 1986||Jun 16, 1987||BRITISH TELECOMMUNICATIONS public limited company||Image encoding and synthesis|
|EP0689362A2||Jun 14, 1995||Dec 27, 1995||AT&T Corp.||Sound-synchronised video system|
|EP0706170A2||May 24, 1995||Apr 10, 1996||CSELT Centro Studi e Laboratori Telecomunicazioni S.p.A.||Method of speech synthesis by means of concatenation and partial overlapping of waveforms|
|JP4359299B2||Title not available|
|JP5313686B2||Title not available|
|JPH0564171A||Title not available|
|JPH0738857A||Title not available|
|JPH02234285A||Title not available|
|JPH03241399A||Title not available|
|JPH04285769A||Title not available|
|JPH04359299A||Title not available|
|JPH05188985A||Title not available|
|JPH05313686A||Title not available|
|JPH06326967A||Title not available|
|JPH06348811A||Title not available|
|WO1985004747A1||Dec 4, 1984||Oct 24, 1985||First Byte||Real-time text-to-speech conversion system|
|1||Nakamura et al. "Speech Recognition and Lip Movement Synthesis"; HMM based Audio-Visual Integration; pp. 93-98.|
|2||Yamamoto et al. pp. 245-246 Nara Institute of Science and Technology.|
|U.S. Classification||704/276, 704/260|
|International Classification||G10L21/06, G10L13/08, G10L13/00, G11B20/04, G06F17/30, G10L13/04, G06F17/28, G10L13/06|
|Cooperative Classification||G10L13/08, G06F17/30056|
|European Classification||G10L13/08, G06F17/30E4P1|