WO1996028932A1 - Method and apparatus for translating closed caption data - Google Patents


Info

Publication number
WO1996028932A1
Authority
WO
WIPO (PCT)
Prior art keywords
text
caption
output
providing
signal
Prior art date
Application number
PCT/US1996/003504
Other languages
French (fr)
Inventor
Wen Feng Chang
Original Assignee
Wen Feng Chang
Priority date
Filing date
Publication date
Application filed by Wen Feng Chang
Publication of WO1996028932A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/025Systems for the transmission of digital non-picture data, e.g. of text during the active part of a television frame
    • H04N7/0255Display systems therefor
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/08Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division
    • H04N7/087Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only
    • H04N7/088Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only the inserted signal being digital
    • H04N7/0882Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only the inserted signal being digital for the transmission of character code signals, e.g. for teletext

Abstract

A closed caption processing system processes a television signal having caption text therein. A detector receives the television signal. A decoder (32) coupled to the detector receives the television signal, and removes the caption data from the signal to form a stripped video signal and a caption signal. The stripped signal is provided to a television. The closed caption data is displayed as text on a screen. A microcontroller (38) receives the caption data from the decoder and provides the caption text to the screen. The caption text is updated unless the microcontroller receives a user-selected command for freezing the displayed text. A memory (44) stores definitions and translations of text. The microcontroller retrieves either the definition or translation of a selected portion of the text responsive to a user request, which is then displayed.

Description

METHOD AND APPARATUS FOR TRANSLATING CLOSED CAPTION DATA
Field of the Invention
This invention relates to transmitting closed caption data with television signals, and more particularly to processing the closed caption data.
Background of the Invention
Closed caption decoders decode closed caption data embedded in television signals and display in real time the closed caption text with the associated video image. The closed caption decoder may be a stand-alone box or integral with a television. The caption text displayed on the screen of the television allows hearing-impaired persons to read the text of the audio while watching television.
Closed caption systems provide less benefit to persons having a limited vocabulary in the language of the text, such as persons with English as a second language, because the displayed text may contain words that the viewer may not understand. Such persons using the closed caption system, upon the occurrence of an unknown word, may either skip the unknown word, or write down the word and look it up in a dictionary. For the latter approach, writing the word correctly after seeing it for only a short time may be difficult. Also, looking up the word in a dictionary is time-consuming.
Summary of the Invention
In the present invention, a method for processing closed caption data includes the step of receiving a television signal that includes closed caption data. The closed caption data is removed from the television signal to form a stripped video signal, which is provided to a television, and a closed caption text signal, which is provided to a screen other than the television.
A user command is received and a portion of the closed caption data is stored in response to the user command. The portion of the closed caption data is processed, such as by defining the portion or translating the portion. In response to a user-selected command, the updating of the displayed closed caption text on the screen is ceased. The closed caption data continues to be removed from the television signal, and the stripped video signal continues to be provided to the television.
An apparatus processes a television signal having caption data therein. A detector receives the television signal. A decoder coupled to the detector receives the television signal and provides the television signal to a television and provides the caption data to a display having a screen for displaying the caption data. A microcontroller coupled to the decoder receives the caption data, and provides the caption data to the display. The microcontroller receives user selected commands. The caption data provided to the display is not updated responsive to a first user selected command.
A memory stores definitions of text and provides a definition of selected displayed data responsive to a definition request. The microcontroller provides the definition request to the memory and receives the definition of the text from the memory, which is provided to the display. A memory stores translations of text and provides a translation of selected displayed data responsive to a translation request. The microcontroller provides the translation request to the memory and receives the translation of selected displayed data from the memory, which is provided to the display.
Description of the Drawings
FIG. 1 is a block diagram illustrating a closed caption processing system in accordance with the present invention.
FIG. 2 is a block diagram illustrating a caption decoder and data generator of the closed caption processing system of FIG. 1.
FIG. 3 is a block diagram illustrating a caption data manager of the closed caption processing system of FIG. 1.
FIG. 4 is a flowchart illustrating the operation of the closed caption processing system of FIG. 1.
FIGs. 5a and 5b are flowcharts illustrating the operation of the caption data manager of FIG. 3.
FIGs. 6a, 6b, 6c are schematic views illustrating the display screen of the caption data manager of FIG. 3.
FIGs. 7a and 7b are schematic views illustrating display screens, in a window configuration, of the caption data manager of FIG. 3 in an alternate embodiment of the displays of FIGs. 6a, 6b, and 6c.
FIG. 8 is a block diagram illustrating a caption decoder and data generator in accordance with a second embodiment of the invention.
FIG. 9 is a block diagram illustrating a caption data manager in accordance with a second embodiment of the invention.
Detailed Description
Referring to FIG. 1, there is shown a block diagram illustrating a closed caption processing system 10 in accordance with the present invention. A broadcaster (not shown) transmits a television signal 12 having an encoded radio signal modulated therein. The encoded radio signal contains a video signal, an audio signal, and encoded data, such as caption data. The caption data may be closed caption text, which is program-related caption text or service-related text transmitted in real time with the video signal. In a National Television Standards Committee (NTSC) video signal, the closed caption text is typically encoded on line 21 of the television signal. The closed caption processing system 10 has a receiver 14 for receiving the television signal 12. The receiver 14 may be, for example, an antenna for receiving broadcast television signals, a coupler for receiving signals from a cable television system, or a satellite dish and down converter for receiving a satellite transmission and for downconverting the received signal from a microwave signal to a received television signal 15. The receiver 14 provides the received television signal 15 to a tuner 16 for selecting, in response to a user selected command, the channel of the received television signal and converting the signal into an encoded radio signal 18 at a predetermined frequency and into an audio signal 19. The predetermined frequency may be, for example, a seldom-used channel such as channel 3, or a video signal. The tuner 16 may be incorporated into a video cassette recorder (VCR) 17 or a cable box, which may also descramble the television signal if necessary. The tuner 16 provides the encoded radio signal 18 to a caption decoder and data generator (CDG) 20, which provides a decoded radio signal 22 to a conventional television 24 and provides a caption data signal 26 to a caption data manager (CDM) 28. The decoded radio signal 22 contains video, sliced caption text on lines of the video, and audio. The television 24 may selectively display or not display the caption text with the video. In other words, the caption decoder and data generator 20 may selectively strip or retain the closed caption text in the decoded radio signal 22. The communication link between the caption decoder and data generator 20 and the caption data manager 28 may be, for example, wireless, such as by infrared or RF.
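The patent does not reproduce the line-21 data format, but under the standard NTSC closed caption convention (EIA-608), line 21 carries two 7-bit characters, each with an odd-parity bit, per video field. The sketch below decodes one such byte pair; the function names and the example bytes are illustrative assumptions, not part of the patent.

```python
def odd_parity_ok(byte: int) -> bool:
    """Return True if the 8-bit value has odd parity (7 data bits plus one parity bit)."""
    return bin(byte & 0xFF).count("1") % 2 == 1


def decode_caption_pair(b1: int, b2: int) -> str:
    """Decode one line-21 byte pair into printable caption characters.

    Values below 0x20 (after the parity bit is stripped) are control codes
    rather than text and are simply skipped in this sketch.
    """
    chars = []
    for byte in (b1, b2):
        if not odd_parity_ok(byte):
            continue              # parity error: drop the byte
        ch = byte & 0x7F          # strip the parity bit
        if ch >= 0x20:            # printable character range
            chars.append(chr(ch))
    return "".join(chars)


# Example: the pair (0xC8, 0xE9) carries the text "Hi" with odd parity.
print(decode_caption_pair(0xC8, 0xE9))
```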
The caption data manager 28 displays caption text without a video image on the same screen or window. The caption text may be displayed, for example, in an alphanumeric format. As described later herein, the user may manipulate the caption data without the associated video image, such as ceasing the scrolling or updating of the displayed text, storing the displayed text, defining the displayed text, or translating the displayed text.
Referring to FIG. 2, there is shown a block diagram of the caption decoder and data generator 20. The encoded radio signal 18 from the tuner 16 is provided to a closed caption decoder 32 and to a synchronization generator 34. The synchronization generator 34 provides a conventional vertical synchronization signal and a conventional horizontal synchronization signal to the closed caption decoder 32 for timing the processing of the encoded radio signal 18. The synchronization generator 34 may be, for example, a model EL4583C manufactured by Elantec of Milpitas, California. The closed caption decoder 32 demodulates the encoded radio signal 18 to generate a decoded video signal 36 and provide the signal 36 to a radio frequency (RF) modulator 30 and to a terminal 37 for coupling to the television 24. The closed caption decoder 32 slices the encoded radio signal 18 to generate caption data 42. The RF modulator 30 generates the decoded radio signal 22 responsive to the audio signal 19 and the decoded video signal 36. The terminal 37 provides the decoded video signal 36 to the television 24 in systems in which the audio signal 19 is provided to a separate sound system or provided separately to a sound system integral with the television. For systems separately using the decoded video signal 36 and the audio signal 19, the RF modulator 30 may be omitted. The closed caption decoder 32 may be, for example, a model CCD3000 manufactured by ITT Semiconductors of Hoffman Estates, Illinois. The RF modulator 30 may be, for example, a model RF-3406 manufactured by EPD Electronics of Gardena, California.
A microcontroller 38 having a central processing unit, a memory, an input/output (I/O) port, and other function units provides control signals 40 to the closed caption decoder 32 and receives the caption data 42 from the closed caption decoder 32. The microcontroller 38 may be, for example, a model 8751 manufactured by Philips Semiconductors of Sunnyvale, California. Alternatively, a separate central processing unit, memory, and I/O port may be used instead of a microcontroller. A memory 44 for storing caption data and parameters of software code is coupled to the microcontroller 38. The memory 44 may be, for example, a conventional static random access memory (SRAM).
An interface circuit 46 couples the microcontroller 38 to the caption data manager 28 for communicating the caption data signal 26 and control signals 50. The communication channel between the interface circuit 46 and the caption data manager 28 may be, for example, an RS232 channel. For an RS232 channel, the interface circuit 46 may be, for example, a model MAX202 manufactured by Maxim of Sunnyvale, California.
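As an illustration of how the caption data signal 26 might travel over such an RS232 channel, the sketch below uses the pyserial package to send and receive newline-terminated caption lines. The port name, baud rate, and framing are assumptions made for the example; the patent specifies only that the channel may be RS232.

```python
import serial  # pyserial package

# Assumed settings; the patent does not specify a baud rate or framing.
PORT = "/dev/ttyS0"
BAUD = 9600


def send_caption_line(link: serial.Serial, text: str) -> None:
    """Send one newline-terminated line of caption text over the serial link."""
    link.write(text.encode("ascii", errors="replace") + b"\n")


def receive_caption_line(link: serial.Serial) -> str:
    """Wait for one newline-terminated caption line and return it as a string."""
    return link.readline().decode("ascii", errors="replace").rstrip("\r\n")


if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1.0) as link:
        send_caption_line(link, "GOOD EVENING, AND WELCOME.")
```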
Referring to FIG. 3, there is shown a block diagram illustrating the caption data manager 28. An interface circuit 52 couples the interface circuit 46 of the caption decoder and data generator 20 to a microcontroller 54 for receiving the caption data signal 26 from and providing control signals 50 to the caption decoder and data generator 20. For an RS232 channel, the interface circuit 52 may be, for example, a model MAX202 manufactured by Maxim of Sunnyvale, California. The microcontroller 54 has a central processing unit, a memory, an I/O port, and other function units. The microcontroller 54 may be, for example, a model 8752 manufactured by Philips Semiconductors of Sunnyvale, California. Alternatively, a separate central processing unit, memory, and I/O port may be used instead of the microcontroller 54. A memory 56 coupled to the microcontroller 54 stores the caption data, a dictionary for defining words of the caption data, or a translation dictionary for translating words of the caption data from a first language into at least one second language. The dictionary may be in multiple languages. The memory 56 may include, for example, a conventional random access memory and a conventional read only memory.
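The memory 56 is, in effect, a pair of lookup tables keyed by words of the caption text: one holding definitions and one holding translations into a second language. A minimal sketch of that arrangement is shown below; the sample entries (drawn from the words used in FIG. 7a) and the function names are illustrative only.

```python
# Illustrative entries only; the patent does not list actual dictionary contents.
DEFINITIONS = {
    "mogul": "an important or powerful person",
    "infrastructure": "the basic facilities and systems serving an area",
}

# Translations from a first language (here English) into a second language (here Chinese).
TRANSLATIONS = {
    "zh": {
        "mogul": "大亨",
        "infrastructure": "基础设施",
    },
}


def define(word: str) -> str | None:
    """Return the stored definition of a caption word, or None if it is not in memory."""
    return DEFINITIONS.get(word.lower())


def translate(word: str, language: str = "zh") -> str | None:
    """Return the stored translation of a caption word into the given second language."""
    return TRANSLATIONS.get(language, {}).get(word.lower())


print(define("mogul"))
print(translate("infrastructure"))
```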
An input device 58 receives data or commands from a user and provides same to the microcontroller 54. The input device 58 may be, for example, a keyboard or a voice recognition device. The microcontroller 54 provides a display signal 60 to a display 62. The display 62 may be, for example, a model HDM-40416H-5 liquid crystal display manufactured by Hantronix of Cupertino, California. The display signal 60 may include caption text, definitions of a portion of the caption text, and translations of a portion of the caption text.
The caption data manager 28 has a CDM select setting for establishing an operational mode of the microcontroller 54. A first operational mode is an automatic select mode in which the microcontroller 54 selects text having an identifier with a value greater than a preselected threshold value or level for processing, such as translating or defining. The identifier is a flag that represents the degree of difficulty of a word, phrase, information, or the like stored in the memory 56. The degree of difficulty may be, for example, a grade level rating of the word. A second operational mode is a manual select mode for selecting text for processing in response to user selected commands.
Referring to FIG. 4, there is shown a flowchart illustrating the operation of the closed caption processing system 10. The caption decoder and data generator 20 receives 402 and decodes 404 the encoded radio signal 18 and generates 404 the caption signal 26 and the decoded radio signal 22. The caption signal 26 is transmitted 406 to the caption data manager 28 for processing as described later herein in conjunction with FIGs. 5a and 5b. The decoded radio signal 22 is transmitted 408 to the television 24 for selectively displaying 410 the video images with the caption text or without the caption text.
Referring to FIGs. 5a and 5b, there are shown flowcharts illustrating the operation of the caption data manager 28. Referring to FIGs. 6a, 6b, and 6c, there are shown schematic views illustrating the display screen of the caption data manager 28. The caption data manager 28 has a default setting as either a caption function or a non-caption function. This setting may be changed at any time through a user command from the input device 58. In the non-caption function, the caption data manager 28 receives input data from the input device 58. In the caption function, the caption data manager 28 processes data received from the caption decoder and data generator 20.
Referring now to FIG. 5a, the microcontroller 54 of the caption data manager 28 determines 502 the initial setting of the caption data manager 28. If the initial setting is a caption function, the microcontroller 54 performs 504 the caption function.
Referring now to FIG. 5b, the microcontroller 54 receives 505 the caption data from the caption decoder and data generator 20 and stores the caption data in the memory 56. Concurrently, the microcontroller 54 provides 506 the caption data to the display 62 for updating the displayed data either in full screen or a window, without an associated video image or without overlapping the associated video image. An example of a full screen display 602 of caption text is shown in FIG. 6a. An indicator 604 indicates the nature of the displayed data. An indicator 604 having a value "C" indicates that the displayed data is caption text. An indicator 604 having a value "D" indicates that the displayed text is dictionary data, such as definitions (as shown in FIG. 6b) or translations (as shown in FIG. 6c).
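A small sketch of the display-update step described above, showing how the indicator character can mark the displayed data as caption text ("C") or dictionary data ("D"). The function, the 40-character width, and the formatting are assumptions made for illustration.

```python
import textwrap


def format_display(text: str, is_dictionary_data: bool, width: int = 40) -> str:
    """Prefix an indicator and wrap the text for a small character display.

    "C" marks live caption text; "D" marks dictionary data such as a definition
    or a translation of a selected word.
    """
    indicator = "D" if is_dictionary_data else "C"
    body = textwrap.fill(text, width=width - 3)
    return f"{indicator}: {body}"


print(format_display("THE CITY WILL REBUILD ITS AGING INFRASTRUCTURE.", False))
print(format_display("infrastructure: the basic facilities and systems serving an area", True))
```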
The microcontroller 54 determines 508 whether the CDM select setting of the caption data manager 28 is in an automatic selection mode or a manual selection mode. If the CDM select setting is the manual selection mode, while displaying the caption text, the microcontroller 54 monitors 510 the input device 58 for a user selected command. If the user selected command is a command other than a pause command or if there is no user selected command, the microcontroller 54 returns to updating 506 the caption text on the display 62. If the microcontroller 54 is in an automatic mode but had responded to a user selected command in a soft manual mode, the microcontroller 54 returns to the automatic selection mode. If a pause command from the input device 58 is detected, the microcontroller 54 freezes 512 the caption text on the display 62 or window by ceasing the updating of displayed caption text. The microcontroller 54 continues to process 505 the caption data and execute other functions during the pause. If a caption command is received 514 from the input device 58, the microcontroller 54 returns to updating 506 the caption text on the display 62. If a select command is received 514, the microcontroller 54 saves 516 the selected caption text 606. Selected text 606 (FIG. 6a) may be indicated on the display 62 in a conventional manner, such as highlighted or displayed in reverse video. In response to a definition command, the microcontroller 54 retrieves from the memory 56 a definition of the selected text and displays such definition on the display 62 or window. An example of a definition display 608 is shown in FIG. 6b, which shows the definition of the selected word 606. In response to a translate command, the microcontroller 54 translates the selected caption text and displays the translation content on the display or window. An example of a translation display 610 is shown in FIG. 6c, which shows the translation of the selected word 606 into a second language, here Chinese. The microcontroller 54 determines 518 the type of user selected command. In response to a caption command, the microcontroller 54 returns to updating 506 the caption text on the display 62. In response to a pause command, the microcontroller 54 freezes 512 the caption text and proceeds as described earlier herein. For other commands, the microcontroller 54 performs 520 the requested function. Such functions may be, for example, a dictionary or encyclopedia function. The microcontroller 54 returns to determining 518 the type of user selected command.
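The manual-mode behavior just described amounts to a small command loop: caption text scrolls until a pause command freezes it, after which the user may resume, select text, or request a definition or translation of the selection. The sketch below captures that control flow in simplified form; the command names and the caller-supplied helper functions are assumptions, not the patent's firmware.

```python
def caption_loop(get_command, next_caption_line, show, define, translate):
    """Simplified manual-mode control flow for the caption data manager.

    get_command() returns one of "pause", "caption", "select", "define",
    "translate", "quit", or None; the remaining arguments are caller-supplied
    helpers for fetching caption lines, updating the display, and looking up
    definitions and translations.
    """
    paused = False
    selected = None
    while True:
        if not paused:
            show("C", next_caption_line())         # keep updating the caption text
        cmd = get_command()
        if cmd == "pause":
            paused = True                          # freeze the displayed caption text
        elif cmd == "caption":
            paused, selected = False, None         # resume updating the caption text
        elif cmd == "select" and paused:
            selected = input("word to examine: ")  # save the selected caption text
        elif cmd == "define" and selected:
            show("D", define(selected))            # display the stored definition
        elif cmd == "translate" and selected:
            show("D", translate(selected))         # display the translation
        elif cmd == "quit":
            break
```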
On the other hand, if the determination 508 is that the CDM select setting is the automatic selection mode, the microcontroller 54 retrieves 521 from the memory 56 the identifier value for each word in the caption text. The microcontroller 54 compares 522 the identifier with the preselected threshold level. If the identifier value of the caption text is less than the threshold level, the microcontroller 54 proceeds to determining 510 the request. Conversely, if the identifier value of the caption text is greater than the threshold level, the microcontroller 54 selects 524 such caption text and stores 526 the selected caption text in the memory 56. The microcontroller 54 displays the translation or definition of the selected text. The microcontroller 54 monitors 510 the input device 58 for a user command as described earlier herein.
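A minimal sketch of the automatic selection step, assuming the per-word identifier values are held in a simple table. The sample identifiers and the threshold of 4 mirror the FIG. 7b example discussed below; because that example treats a word whose identifier equals the threshold as selected, the comparison here is "greater than or equal to" rather than strictly greater than.

```python
# Identifier values mirroring the FIG. 7b example; real values would reside in the memory 56.
DIFFICULTY = {"infrastructure": 6, "entertainment": 4, "mogul": 3}


def auto_select(caption_text: str, threshold: int) -> list[str]:
    """Return the caption words whose difficulty identifier reaches the threshold."""
    selected = []
    for raw in caption_text.lower().split():
        word = raw.strip('.,;:"!?')
        if DIFFICULTY.get(word, 0) >= threshold:
            selected.append(word)
    return selected


# With a preselected threshold level of 4, "entertainment" and "infrastructure"
# are selected automatically, while "mogul" (identifier 3) is left for manual selection.
print(auto_select("The entertainment mogul funded new infrastructure.", threshold=4))
```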
Referring back to FIG. 5a, if the user command is a non-caption function or if the default setting is a non-caption function 502, the microcontroller 54 performs 528 the selected or default function. For example, the microcontroller 54 may operate as a general purpose electronic system, such as an electronic dictionary or encyclopedia, for a non-caption function. The microcontroller 54 continues to perform the non-caption function until a caption command is received 530. Responsive to the caption command, the microcontroller 54 executes 504 the caption function as described above.
Referring to FIGs. 7a and 7b, there are shown schematic views illustrating the display screen in a window configuration of the caption data manager in an alternate embodiment of the display of FIGs. 6a, 6b, and 6c. For a display of sufficient size, the display 702 may be partitioned to show multiple windows. The display 702 is arranged in a caption text window 704, a definition window 706, a translation window 708, and an examined word window 710. The text in the caption text window 704 is frozen in response to a pause command. The user may select words for definition or translation by moving a cursor 712 and selecting text. For example, in the manual mode of FIG. 7a, the words "infrastructure" and "mogul" have been selected 516. In FIG. 7b, say the words "infrastructure" and "entertainment" have identifier values of 6 and 4, respectively. For a preselected threshold level of 4, these words are selected 524 in the automatic selection mode and are displayed 526 in the window 704. Say the word "mogul" has an identifier value of 3. The word "mogul" is not selected 524 in the automatic mode. The user may select 516 the word "mogul", which is then displayed 516 in the windows 704 and 710. The words may be defined 516 or translated 516 and the definition or translation displayed 516 in the respective window 706, 708. The windows may be scrolled using conventional scroll bars 714, 716. The definitions, the translations into a second language (here Chinese), and a list of each of these words are shown in the definition window 706, the translation window 708, and the examined word window 710, respectively. The indicator 718 shows the threshold level for the selection mode. A control display selection icon 720 allows the user to alter the characteristics of the display, such as font, size, and the like.
Referring to FIG. 8, there is shown a block diagram illustrating a caption decoder and data generator 120 in accordance with a second embodiment of the invention. Like elements of FIGs. 2 and 8 have like reference numerals. The RF modulator 30, the closed caption decoder 32, the synchronization generator 34, and the terminal 37 are arranged as in the caption decoder and data generator 20 of FIG. 2. The closed caption decoder 32 provides the caption data to a modulator 70 which provides a modulating signal 71 to a transmitter 72 for modulating the data onto a carrier signal. The transmitter 72 provides the modulated carrier signal to an emitter 74 for transmitting the modulated carrier signal 75 to a caption data manager 128. The emitter 74 may be, for example, an antenna or an infrared light emitting diode. The modulated carrier signal may be, for example, a radio signal, an infrared signal, a light signal, and the like.
Referring to FIG. 9, there is shown a block diagram illustrating the caption data manager 128 in accordance with the second embodiment of the invention. Like elements of FIGs. 3 and 9 have like reference numbers. The microcontroller 54, the display 62, the input device 58, and the memory 56 are arranged as in the caption data manager 28 of FIG. 3.
A detector 76 receives the transmitted modulated carrier signal 75 and provides the received signal to a receiver 78 for signal amplification. The detector 76 may be, for example, an RF antenna or an infrared detector. A demodulator 80 demodulates the received carrier signal from the receiver 78 to recover the caption data. The demodulator 80 receives control signals from the microcontroller 54 for power consumption reduction control and data synchronization and provides the recovered caption text data to the microcontroller 54. The microcontroller 54 operates as described above.
In a third embodiment, the closed caption decoder 32 and the synchronization generator 34 are located after the demodulator 80. The tuner 16 provides the encoded radio signal 18 to the modulator 70, the output of which is processed by the transmitter 72. The emitter 74 transmits the modulated encoded radio signal, for example as a video signal, to the detector 76. After processing in the receiver 78 and the demodulator 80, the encoded signal is provided to the closed caption decoder 32, which slices the caption data and provides the data to the microcontroller 54 for processing as described earlier herein.

Claims

I Claim:
1. A method comprising the steps of: receiving a television signal including caption data; removing the caption data from the television signal to form a stripped video signal and a caption data signal; providing the stripped video signal to a television; providing the caption data signal to a screen other than the television; and displaying on the screen the caption data.
2. The method of claim 1 further comprising the steps of: receiving a user command; and storing a portion of the caption data in response to the user command.
3. The method of claim 2 further comprising the step of processing the portion of the caption data.
4. The method of claim 3 wherein the processing step includes the step of defining the portion of the caption data.
5. The method of claim 3 wherein the processing step includes the step of translating the portion of the caption data.
6. The method of claim 1 further comprising the steps of: ceasing updating of the caption text on the screen in response to a user selected command; continuing the step of removing the caption data; and continuing the step of providing the stripped video signal to the television.
7. An apparatus for processing a television signal having caption data therein comprising: a detector for receiving the television signal; a decoder having an input coupled to the detector for receiving the television signal, having a first output for providing the television signal to a television, and having a second output for providing caption data; a display having an input for receiving caption text and having a screen for displaying the caption text; and a microcontroller having a first input coupled to the output of the decoder for receiving the caption data, and having a first output coupled to the input of the display for providing the caption text.
8. The apparatus of claim 7 wherein the microcontroller has a second input for receiving at least one user selected command and the caption text provided to the display is not updated responsive to a first user selected command.
9. The apparatus of claim 8 further comprising a memory for storing definitions of text and having an output for providing a definition of text responsive to a definition request and wherein the microcontroller has a second input coupled to the memory for receiving the definition of the text, has a second output coupled to the memory for providing a definition request in response to a third user selected command, and the first output provides the definition of the text to the display.
10. The apparatus of claim 8 further comprising a memory for storing translations of text and having an output for providing a translation of text responsive to a translation request and wherein the microcontroller has a second input coupled to the memory for receiving the translation of the text, has a second output coupled to the memory for providing a translation request in response to a third user selected command, and the first output provides the translation of the text to the display.
11. An apparatus for processing caption text comprising: a detector for receiving caption data separate from a television signal; a display having an input for receiving caption text and having a screen for displaying the caption text; and a microcontroller having a first input coupled to the output of the detector for receiving the caption data, having a first input for receiving user selected commands, and having a first output coupled to the input of the display for providing the caption text as received in a first operational mode and for providing the same text in a second operational mode responsive to a first user selected command.
12. The apparatus of claim 11 further comprising a memory for storing definitions of text and having an output for providing a definition of text responsive to a definition request and wherein the microcontroller has a second input coupled to the memory for receiving the definition of the text, has a second output coupled to the memory for providing a definition request in response to a third user selected command, and the first output further provides the definition of the text to the display.
13. The apparatus of claim 11 further comprising a memory for storing translations of text and having an output for providing a translation of text responsive to a translation request and wherein the microcontroller has a second input coupled to the memory for receiving the translation of the text, has a second output coupled to the memory for providing a translation request in response to a third user selected command, and the first output further provides the translation of the text to the display.
14. An apparatus for processing a television signal having caption text therein comprising: a detector for receiving the television signal; a decoder having an input coupled to the detector for receiving the television signal, having a first output for providing the television signal to a television, and having a second output for providing caption data; a modulator having an input coupled to the second output of the decoder and having an output for providing a modulating signal responsive to the caption data; and a transmitter having an input coupled to the output of the modulator for receiving the modulating signal and having an output for providing a modulated carrier signal responsive to the modulating signal.
PCT/US1996/003504 1995-03-13 1996-03-13 Method and apparatus for translating closed caption data WO1996028932A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/403,203 US5543851A (en) 1995-03-13 1995-03-13 Method and apparatus for translating closed caption data
US08/403,203 1995-03-13

Publications (1)

Publication Number Publication Date
WO1996028932A1 (en)

Family

ID=23594870

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/003504 WO1996028932A1 (en) 1995-03-13 1996-03-13 Method and apparatus for translating closed caption data

Country Status (2)

Country Link
US (1) US5543851A (en)
WO (1) WO1996028932A1 (en)

Families Citing this family (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2188707C (en) * 1995-11-13 2000-08-01 Aaron Hal Dinwiddie System providing freeze of closed captioning data
US6005536A (en) * 1996-01-16 1999-12-21 National Captioning Institute Captioning glasses
US6018369A (en) * 1996-03-21 2000-01-25 Samsung Electronics Co., Ltd. Video decoder with closed caption data on video output
US5801782A (en) * 1996-03-21 1998-09-01 Samsung Information Systems America Analog video encoder with metered closed caption data on digital video input interface
CN1178479C (en) * 1996-05-03 2004-12-01 三星电子株式会社 Selective-caption-displayed scrolling method suitable for syllable characters
KR100212134B1 (en) * 1996-05-03 1999-08-02 윤종용 Soft scroll method of viewer selection type caption display
US5774857A (en) * 1996-11-15 1998-06-30 Motorola, Inc. Conversion of communicated speech to text for tranmission as RF modulated base band video
US7139031B1 (en) 1997-10-21 2006-11-21 Principle Solutions, Inc. Automated language filter for TV receiver
US6972802B2 (en) * 1997-10-21 2005-12-06 Bray J Richard Language filter for home TV
US6075550A (en) * 1997-12-23 2000-06-13 Lapierre; Diane Censoring assembly adapted for use with closed caption television
US6373526B1 (en) * 1999-03-19 2002-04-16 Sony Corporation Processing of closed caption in different formats
US6513003B1 (en) 2000-02-03 2003-01-28 Fair Disclosure Financial Network, Inc. System and method for integrated delivery of media and synchronized transcription
US7047191B2 (en) * 2000-03-06 2006-05-16 Rochester Institute Of Technology Method and system for providing automated captioning for AV signals
KR100367675B1 (en) * 2000-04-27 2003-01-15 엘지전자 주식회사 Tv text information translation system and control method the same
US6898799B1 (en) 2000-10-23 2005-05-24 Clearplay, Inc. Multimedia content navigation and playback
US6889383B1 (en) 2000-10-23 2005-05-03 Clearplay, Inc. Delivery of navigation data for playback of audio and video content
US7975021B2 (en) 2000-10-23 2011-07-05 Clearplay, Inc. Method and user interface for downloading audio and video content filters to a media player
US7130790B1 (en) * 2000-10-24 2006-10-31 Global Translations, Inc. System and method for closed caption data translation
US7747434B2 (en) * 2000-10-24 2010-06-29 Speech Conversion Technologies, Inc. Integrated speech recognition, closed captioning, and translation system and method
US20020067428A1 (en) * 2000-12-01 2002-06-06 Thomsen Paul M. System and method for selecting symbols on a television display
US7221405B2 (en) * 2001-01-31 2007-05-22 International Business Machines Corporation Universal closed caption portable receiver
US8270820B2 (en) * 2001-02-23 2012-09-18 Hewlett-Packard Development Company, L.P. Archiving and sharing of multimedia content
JP2003037792A (en) * 2001-07-25 2003-02-07 Toshiba Corp Data reproducing device and data reproducing method
US20030035075A1 (en) * 2001-08-20 2003-02-20 Butler Michelle A. Method and system for providing improved user input capability for interactive television
US20030065503A1 (en) * 2001-09-28 2003-04-03 Philips Electronics North America Corp. Multi-lingual transcription system
CN1218574C (en) * 2001-10-15 2005-09-07 华为技术有限公司 Interactive video equipment and its caption superposition method
ATE441284T1 (en) * 2001-12-07 2009-09-15 Accenture Global Services Gmbh METHOD AND DEVICE FOR DISPLAYING DEFINITIONS OF SELECTED TERMS IN A TELEVISION PROGRAM
KR100426090B1 (en) * 2001-12-18 2004-04-06 삼성전자주식회사 Apparatus and method for receiving digital broadcasting
FR2836733B1 (en) * 2002-03-01 2004-05-28 France Telecom METHOD AND DEVICE FOR COMPUTER PROCESSING OF AUDIOVISUAL CONTENT WITH SUBTITLES
EP1491053A1 (en) * 2002-03-21 2004-12-29 Koninklijke Philips Electronics N.V. Multi-lingual closed-captioning
US8667547B2 (en) * 2002-07-31 2014-03-04 Comverse, Ltd. Method and system for editing text messages conveyed via a CATV infrastructure at a TV set-top box
US20040044532A1 (en) * 2002-09-03 2004-03-04 International Business Machines Corporation System and method for remote audio caption visualizations
US6980257B2 (en) * 2002-11-18 2005-12-27 Sharp Laboratories Of America, Inc. Plural-source, selectable, combined image/text single-screen display
US20070005338A1 (en) * 2003-08-25 2007-01-04 Koninklijke Philips Electronics, N.V Real-time media dictionary
JP2007504495A (en) * 2003-08-26 2007-03-01 クリアプレイ,インク. Method and apparatus for controlling the performance of an acoustic signal
US20050075857A1 (en) * 2003-10-02 2005-04-07 Elcock Albert F. Method and system for dynamically translating closed captions
US20050086702A1 (en) * 2003-10-17 2005-04-21 Cormack Christopher J. Translation of text encoded in video signals
AU2004304818A1 (en) * 2003-10-22 2005-07-07 Clearplay, Inc. Apparatus and method for blocking audio/visual programming and for muting audio
US20050188404A1 (en) * 2004-02-19 2005-08-25 Sony Corporation System and method for providing content list in response to selected content provider-defined word
US20050188411A1 (en) * 2004-02-19 2005-08-25 Sony Corporation System and method for providing content list in response to selected closed caption word
KR100678938B1 (en) * 2004-08-28 2007-02-07 삼성전자주식회사 Apparatus and method for synchronization between moving picture and caption
US8117282B2 (en) * 2004-10-20 2012-02-14 Clearplay, Inc. Media player configured to receive playback filters from alternative storage mediums
US7707044B2 (en) * 2005-02-11 2010-04-27 Avaya Inc. Use of location awareness to transfer communications sessions between terminals in a healthcare environment
US20060227240A1 (en) * 2005-03-30 2006-10-12 Inventec Corporation Caption translation system and method using the same
US20080064326A1 (en) * 2006-08-24 2008-03-13 Stephen Joseph Foster Systems and Methods for Casting Captions Associated With A Media Stream To A User
US7508450B1 (en) * 2008-05-06 2009-03-24 International Business Machines Corporation Method for closed captioning of audio video content on a portable playback device
WO2009157893A1 (en) * 2008-06-24 2009-12-30 Thomson Licensing Method and system for redisplaying text
KR101479079B1 (en) * 2008-09-10 2015-01-08 삼성전자주식회사 Broadcast receiver for displaying description of terminology included in digital captions and method for processing digital captions applying the same
US8359399B2 (en) * 2008-10-16 2013-01-22 Echostar Technologies L.L.C. Method and device for delivering supplemental content associated with audio/visual content to a user
US8330864B2 (en) * 2008-11-02 2012-12-11 Xorbit, Inc. Multi-lingual transmission and delay of closed caption content through a delivery system
US20100201617A1 (en) * 2009-02-06 2010-08-12 Inventec Corporation Real-time Translation display interface and display method thereof
US20100265397A1 (en) * 2009-04-20 2010-10-21 Tandberg Television, Inc. Systems and methods for providing dynamically determined closed caption translations for vod content
US8751284B2 (en) 2009-04-30 2014-06-10 United Parcel Service Of America, Inc. Systems and methods for a real-time workflow platform using Petri net model mappings
US8332811B2 (en) * 2009-04-30 2012-12-11 United Parcel Service Of America, Inc. Systems and methods for generating source code for workflow platform
US20110020774A1 (en) * 2009-07-24 2011-01-27 Echostar Technologies L.L.C. Systems and methods for facilitating foreign language instruction
US8990854B2 (en) * 2009-09-14 2015-03-24 Broadcom Corporation System and method in a television for providing user-selection of objects in a television program
KR101797202B1 (en) * 2009-11-13 2017-11-14 삼성전자주식회사 Display apparatus and method for remotly outputting audio thereof and mobile device and method for outputting audio thereof
WO2011158010A1 (en) * 2010-06-15 2011-12-22 Jonathan Edward Bishop Assisting human interaction
US8707378B2 (en) * 2010-09-29 2014-04-22 Verizon Patent And Licensing Inc. Catalog and user application for a video provisioning system
WO2014198035A1 (en) * 2013-06-13 2014-12-18 Google Inc. Techniques for user identification of and translation of media
KR102129536B1 (en) * 2013-08-06 2020-07-03 삼성전자주식회사 Mobile terminal and method for controlling the mobile terminal
KR20150019931A (en) * 2013-08-16 2015-02-25 삼성전자주식회사 Display apparatus and control method thereof
US10289677B2 (en) 2015-02-19 2019-05-14 Tribune Broadcasting Company, Llc Systems and methods for using a program schedule to facilitate modifying closed-captioning text
US9854329B2 (en) * 2015-02-19 2017-12-26 Tribune Broadcasting Company, Llc Use of a program schedule to modify an electronic dictionary of a closed-captioning generator

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2149627B (en) * 1983-10-26 1987-06-10 Philips Electronic Associated Teletext television receiver with multi-language display
GB2207329A (en) * 1987-07-17 1989-01-25 Philips Electronic Associated Teletext decoders
GB2241089B (en) * 1990-02-20 1993-12-08 Xerox Corp Electrostatographic machine
JPH05252442A (en) * 1992-03-04 1993-09-28 Mitsubishi Electric Corp Title broadcast television receiver and decoder
US5262860A (en) * 1992-04-23 1993-11-16 International Business Machines Corporation Method and system communication establishment utilizing captured and processed visually perceptible data within a broadcast video signal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4739402A (en) * 1985-09-12 1988-04-19 Sony Corporation Protocol converting apparatus for videotex system
US4894789A (en) * 1988-02-22 1990-01-16 Yee Keen Y TV data capture device

Also Published As

Publication number Publication date
US5543851A (en) 1996-08-06

Similar Documents

Publication Publication Date Title
US5543851A (en) Method and apparatus for translating closed caption data
US6816201B1 (en) XDS enhancement system
US6058430A (en) Vertical blanking interval encoding of internet addresses for integrated television/internet devices
US7929057B2 (en) Display control method
US5694176A (en) Method and apparatus for generating television program guides with category selection overlay
EP0735750B1 (en) Program guide signal receiver for television
US6888576B2 (en) Digital/analog broadcast receiving device capable of limiting change in setting in operation setting menu thereof
US8599314B2 (en) Display device, program, and storage medium
JPH07162818A (en) Method and equipment for data display
JP2002223400A (en) Broadcast receiver having on-screen display function for channel information
JP2002344829A (en) Digital broadcast signal receiver
JP4320741B2 (en) Broadcast reception system
US20090300682A1 (en) Control device and method for channel searching in image display device
KR100304884B1 (en) Apparatus for screen expressing of program guide information and method for the same
EP1387580B1 (en) Teleweb apparatus
KR100531354B1 (en) Apparatus and method for displaying information of detail in TV program
JP2002335465A (en) Broadcast receiver with menu item display function
EP0876729B1 (en) Television receiver
JPH06225174A (en) Method and apparatus for controlling caption display
KR20070028651A (en) Display apparatus and method for displaying caption thereof
KR100228494B1 (en) Added information output apparatus
JP2002290930A (en) Broadcasting receiver
KR20080051350A (en) Recording command transmission method
JPH08317349A (en) Information receiver
US20080092167A1 (en) Display apparatus and information saving method thereof

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP KR