WO2003085938A2 - Communications terminal device for receiving and recording content, starting recording when a change of status in a voice communication is detected - Google Patents

Communications terminal device for receiving and recording content, starting recording when a change of status in a voice communication is detected

Info

Publication number
WO2003085938A2
WO2003085938A2 (application PCT/JP2003/004153)
Authority
WO
WIPO (PCT)
Prior art keywords
section
terminal device
content
operable
voice communication
Application number
PCT/JP2003/004153
Other languages
French (fr)
Other versions
WO2003085938A3 (en)
Inventor
Mami Kuramitsu
Original Assignee
Matsushita Electric Industrial Co., Ltd.
Application filed by Matsushita Electric Industrial Co., Ltd. filed Critical Matsushita Electric Industrial Co., Ltd.
Priority to KR1020037016115A priority Critical patent/KR100921303B1/en
Priority to EP03745884.1A priority patent/EP1407601B1/en
Publication of WO2003085938A2 publication Critical patent/WO2003085938A2/en
Publication of WO2003085938A3 publication Critical patent/WO2003085938A3/en


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/426Internal components of the client ; Characteristics thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M11/00Telephonic communication systems specially adapted for combination with other electrical systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/725Cordless telephones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/414Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N21/4147PVR [Personal Video Recorder]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/4223Cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4333Processing operations in response to a pause request
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/61Network physical structure; Signal processing
    • H04N21/6156Network physical structure; Signal processing specially adapted to the upstream path of the transmission network
    • H04N21/6181Network physical structure; Signal processing specially adapted to the upstream path of the transmission network involving transmission via a mobile phone network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/141Systems for two-way working between two video terminals, e.g. videophone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309Transmission or handling of upstream communications
    • H04N7/17318Direct or substantially direct transmission and handling of requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/38Transmitter circuitry for the transmission of television signals according to analogue transmission standards

Definitions

  • the present invention relates to communications terminal devices, and more particularly, to a communications terminal device having a content reception function and a voice communication function.
  • BACKGROUND ART Terrestrial digital broadcasting is scheduled to commence in Japan first in the three biggest metropolitan areas in 2003 and then nationwide in 2006.
  • a feature of terrestrial digital broadcasting is that mobile reception of contents is possible.
  • the third-generation cellular phone service started in 2001, enabling distribution of moving pictures and a portable videophone.
  • when a voice call arrives while viewing a content with such a communications terminal device, the user tends to choose to answer the call rather than continue viewing the content.
  • the user tends to continue the voice communication.
  • a conventional television which receives a video signal externally broadcast on the channel selected by the viewer, reproduces the received video signal, and outputs a video represented by the received video signal.
  • the conventional television includes a built-in modem which outputs a status signal when a fixed telephone receives a call during the reception of the video signal.
  • In response to the status signal, the conventional television begins recording of the currently received video signal in a storage device built therein (that is, performs video recording). After the voice communication is finished, the conventional television reproduces the video signal recorded in the storage device. Thus, the viewer can view the video missed due to the voice communication.
  • the conventional communications terminal device has a problem that the user is not allowed to view the content at least until the voice communication is finished.
  • the conventional television described above needs to have a storage device with a large capacity capable of storing a long content.
  • the communications terminal device described above, which is a mobile unit, is allowed to include only a small-capacity storage device and thus cannot store a long content therein. In view of this, it is difficult to implement the technology developed for televisions as described above for a mobile communications terminal device without substantial modifications.
  • an object of the present invention is to provide a communications terminal device capable of outputting a portion of a content missed by the user due to voice communication at a time shifted from the actual broadcast time.
  • Another object of the present invention is to provide a communications terminal device capable of recording/reproducing a portion of a content missed by the user due to voice communication by a technique suitable for mobile units.
  • the present invention has the following aspects.
  • a first aspect of the present invention is directed to a communications terminal device including: a reproduction section operable to receive and reproduce a content transmitted from an external source; a telephony processing section operable to receive and reproduce at least voice of a party on the other end of voice communication; a status detection section operable to detect a status change of voice communication; a storage section operable to store the content received by the reproduction section; a write section operable to write the content received by the reproduction section in the storage section while the status detection section detects a status change of voice communication; and a read section operable to read the content stored in the storage section.
  • the reproduction section is further operable to reproduce the content read by the read section.
  • the reproduction section receives a program composed of a video and audio from a remote broadcast station as a content.
  • the communications terminal device stores the received content in the storage section while the user is engaged in voice communication, and reads and reproduces the stored content after the voice communication is finished.
  • the communications terminal device can provide the user with the portion of the content the user failed to view due to the voice communication.
  • the read section is operable to start read of the content stored in the storage section while the status detection section detects a next status change of voice communication.
  • the status detection section is operable to detect an incoming call at the telephony processing section as a start point of voice communication, and detect that the telephony processing section has disconnected voice communication.
  • the status detection section is operable to detect that the telephony processing section has entered an off-hook state as a start point of voice communication, and detect that the telephony processing section has entered an on-hook state as an end point of voice communication. In this way, the status can be detected using the voice communications function originally possessed by the communications terminal device. It is therefore possible to reduce the number of components of the communications terminal device and minimize the fabrication cost.
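  • As an illustration only (not from the specification), the following Python sketch shows one way the write section and the read section could be driven by such status changes; handle_stream, play, and the string status events are hypothetical stand-ins for the sections described above.

    # Illustrative sketch only: hypothetical names mirroring the claimed sections.
    from collections import deque

    def handle_stream(stream, play):
        """stream yields (status_event, ts_packet) pairs; status_event is
        'incoming_call', 'disconnected', or None when nothing changed."""
        storage = deque()                         # stand-in for the storage section
        in_call = False
        for status_event, packet in stream:
            if status_event == "incoming_call":   # start point of voice communication
                in_call = True
            elif status_event == "disconnected":  # end point of voice communication
                in_call = False
            if in_call:
                storage.append(packet)            # write section stores the content
            else:
                play(packet)                      # reproduction section outputs it live
        return storage                            # read section can later replay this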
  • the telephony processing section is operable to receive and reproduce an image of the party on the other end of voice communication.
  • the reproduction section is operable to reproduce the content read by the read section at n times speed (n is a positive number satisfying n > 1), and also is operable to receive and reproduce the content transmitted from the external source when the read by the read section is completed.
  • the communications terminal device reproduces the receiving content once substantially no data is left in the storage section. Therefore, it is unnecessary to store all the content received after the start of voice communication in the storage section. This enables recording/reproduction suitable for a communications terminal device allowed to include only a small-capacity memory.
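  • As a back-of-the-envelope illustration (not stated in the specification itself): if the backlog stored during voice communication amounts to D seconds of content and playback then proceeds at n times speed while the broadcast continues in real time, the backlog shrinks by (n - 1) seconds of content per second of playback, so roughly D / (n - 1) further seconds are needed before substantially no data is left in the storage section. The small Python sketch below merely evaluates this relation; the function name is a hypothetical placeholder.

    def catchup_seconds(backlog_s: float, n: float) -> float:
        """Time for n-times-speed playback to drain a backlog of backlog_s seconds,
        assuming the broadcast keeps arriving in real time (n > 1)."""
        if n <= 1:
            raise ValueError("n must exceed 1 for the backlog to shrink")
        return backlog_s / (n - 1)

    # Example: a 3-minute backlog replayed at 1.5x speed is drained after 6 minutes.
    print(catchup_seconds(180, 1.5))  # 360.0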
  • the communications terminal device may further include: an image generation section operable to generate image information relating to voice communication; and an image combining section operable to generate combined image information by combining the content received by the reproduction section and the image information generated by the image generation section while the status detection section detects a status change of voice communication.
  • the image combining section is operable to generate the combined image information to which an image of the party on the other end of the voice communication is additionally included. Furthermore, when the reproduction section can capture an image of the user, the image combining section can generate the combined image information to which the captured image of the user is additionally included.
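  • The following sketch (illustrative only, not part of the specification) shows one straightforward way such combined image information could be produced, overlaying a small call-related image on the received video frame; it assumes the Pillow imaging library and uses placeholder names.

    from PIL import Image  # Pillow is assumed to be available for this sketch

    def combine_frame(video_frame: Image.Image, call_image: Image.Image) -> Image.Image:
        """Overlay a quarter-size call-related image in the top-right corner of the frame."""
        combined = video_frame.copy()
        small = call_image.resize((video_frame.width // 4, video_frame.height // 4))
        combined.paste(small, (combined.width - small.width, 0))
        return combined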
  • the communications terminal device may further include: a mute detection section operable to detect a mute time period of voice communication; and a voice switch section operable to output the audio reproduced by the reproduction section during the mute time period detected by the mute detection section.
  • The voice switch section can further output a voice signal reproduced by the telephony processing section when the mute detection section detects no mute time period. This enables the user to hear audio constituting the content even during voice communication. It is therefore possible to provide a communications terminal device having further enhanced operability.
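  • A minimal sketch of the voice switch behaviour described above, assuming a simple energy threshold as the mute criterion (the threshold value and all names are illustrative, not taken from the specification):

    def make_mute_detector(threshold: float = 0.01):
        """Return a crude mute detector over PCM sample frames scaled to [-1, 1]."""
        def is_mute(frame):
            if not frame:
                return True
            energy = sum(s * s for s in frame) / len(frame)
            return energy < threshold
        return is_mute

    def select_audio(call_frame, content_frame, is_mute):
        """Voice switch: content audio during mute periods, the caller's voice otherwise."""
        return content_frame if is_mute(call_frame) else call_frame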
  • the communications terminal device further includes first and second speakers operable to output the audio reproduced by the reproduction section and the voice reproduced by the telephony processing section while a status change of voice communication is detected by the status detection section. This enables the user to hear audio of the content during voice communication.
  • the communications terminal device further includes a start detection section operable to detect a predetermined content transmission start time.
  • the write section can further store the content received by the reproduction section while a transmission status change is detected by the start detection section.
  • the communications terminal device can provide the user with the portion of the content the user failed to view and hear due to voice communication.
  • the read section can further read the content stored in the storage section from a head thereof during the progress of writing of the content in the storage section by the write section.
  • the communications terminal device can provide the user with the portion of the content the user failed to view and hear due to voice communication.
  • the communications terminal device further includes: an end time determination section operable to determine an end time of the content received by the reproduction section and currently being written in the storage section; and a write terminating section operable to terminate the write of the content in the storage section when the end time determination section determines that the end time has passed. With the write terminating section, the write of the object content in the storage section can be terminated once the content is finished, even during voice communication. This enables recording/reproduction suitable for a communications terminal device allowed to include only a small-capacity memory.
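  • The write-terminating behaviour could look roughly like the sketch below (illustrative only; the time source and the helper names are assumptions, not taken from the specification):

    import time

    def write_until_end(tuner, storage, content_end_epoch_s: float):
        """Store received packets only until the content's scheduled end time passes."""
        while time.time() < content_end_epoch_s:   # end time determination section
            storage.append(tuner.next_packet())    # write section
        # write terminating section: nothing further is stored after the end time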
  • the communications terminal device further include a remaining capacity detection section operable to detect the remaining recording capacity of the storage section.
  • the write section can further determine a bit rate based on the remaining capacity detected by the remaining capacity detection section, and write the content received by the reproduction section based on the determined bit rate.
  • the bit rate of the content can be controlled according to the remaining capacity of the storage section. This enables recording/reproduction suitable for a communications terminal device allowed to include only a small-capacity memory.
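  • One simple way to derive such a bit rate is to divide the remaining capacity by the expected remaining duration of the content and clamp the result to the encoder's limits, as in the sketch below (all parameter values and names are illustrative assumptions):

    def choose_bitrate(remaining_capacity_bytes: int,
                       remaining_content_seconds: float,
                       max_bitrate_bps: int = 384_000,
                       min_bitrate_bps: int = 64_000) -> int:
        """Bit rate (bits per second) at which the rest of the content still fits."""
        if remaining_content_seconds <= 0:
            return min_bitrate_bps
        fitting_bps = int(remaining_capacity_bytes * 8 / remaining_content_seconds)
        return max(min_bitrate_bps, min(max_bitrate_bps, fitting_bps))

    # Example: 8 MiB free and 10 minutes of programme left -> about 112 kbit/s.
    print(choose_bitrate(8 * 1024 * 1024, 600))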
  • a second aspect of the present invention is a computer program for providing a function of broadcast reception and a function of voice communication to a computer, including the steps of: receiving and reproducing a content transmitted from an external source; receiving and reproducing at least voice of a party on the other end of voice communication; detecting a status change time point of the voice communication; writing the content received in the step of receiving and reproducing a content while a status change of voice communication is detected in the step of detecting; and reading the content written in the step of writing.
  • the step of receiving and reproducing a content can further reproduce the content read in the step of reading.
  • the computer program is recorded in a recording medium.
  • FIG. 1 is a block diagram showing the construction of a terminal device E1 of an embodiment of the present invention;
  • FIG. 2 is a timing chart showing an outline of an operation of the terminal device E1 of FIG. 1;
  • FIG. 3 is a flowchart showing a detailed operation of the terminal device E1 of FIG. 1;
  • FIG. 4 is a block diagram showing the construction of a terminal device E2 that is a first variant of the terminal device E1 of FIG. 1;
  • FIG. 5 is a timing chart showing an outline of an operation of the terminal device E2 of FIG. 4;
  • FIG. 6 is a flowchart showing a detailed operation of the terminal device E2 of FIG. 4;
  • FIG. 7 is a block diagram showing the construction of a terminal device E3 that is a second variant of the terminal device E1 of FIG. 1;
  • FIG. 8 is a timing chart showing an outline of an operation of the terminal device E3 of FIG. 7;
  • FIG. 9 is a flowchart showing a detailed operation of the terminal device E3 of FIG. 7;
  • FIG. 10 is a view showing a first example of a combined image signal SLM generated by an image combining section 104 in FIG. 7;
  • FIG. 11 is a view showing a second example of the combined image signal SLM generated by the image combining section 104 in FIG. 7;
  • FIG. 12 is a view showing a third example of the combined image signal SLM generated by the image combining section 104 in FIG. 7;
  • FIG. 13 is a block diagram showing the construction of a terminal device E4 that is a third variant of the terminal device E1 of FIG. 1;
  • FIG. 14 is a timing chart showing an outline of an operation of the terminal device E4 of FIG. 13;
  • FIG. 15 is a flowchart showing a detailed operation of the terminal device E4 of FIG. 13;
  • FIG. 16 is a block diagram showing the construction of a terminal device E5 that is a fourth variant of the terminal device E1 of FIG. 1;
  • FIG. 17 is a timing chart showing an outline of an operation of the terminal device E5 of FIG. 16;
  • FIG. 18 is a flowchart showing a detailed operation of the terminal device E5 of FIG. 16;
  • FIG. 19 is a block diagram showing the construction of a terminal device E6 that is a fifth variant of the terminal device E1 of FIG. 1;
  • FIG. 20 is a timing chart showing an outline of an operation of the terminal device E6 of FIG. 19;
  • FIG. 21 is a flowchart showing a detailed operation of the terminal device E6 of FIG. 19;
  • FIG. 22 is a block diagram showing the construction of a terminal device E7 that is a sixth variant of the terminal device E1 of FIG. 1;
  • FIG. 23 is a timing chart showing an outline of an operation of the terminal device E7 of FIG. 22;
  • FIG. 24 is a flowchart showing the detailed operation of the terminal device E7 of FIG. 22;
  • FIG. 25 is a block diagram showing the construction of a terminal device E8 that is a seventh variant of the terminal device E1 of FIG. 1;
  • FIG. 26 is a timing chart showing an outline of an operation of the terminal device E8 of FIG. 25;
  • FIG. 27 is a flowchart showing a detailed operation of the terminal device E8 of FIG. 25.
  • FIG. 1 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E1 of an embodiment of the present invention.
  • the terminal device E1 includes a content reproduction section 1, a telephony processing section 2, an image switch section 3, a display device 4, a voice switch section 5, a speaker 6, a status detection section 7, a control section 8, a content storage section 9, and an input device 10.
  • The content reproduction section 1 receives a transport stream STT which is composed of at least one channel and broadcast from a terrestrial digital broadcast station 101, and reproduces a content from the received transport stream STT.
  • the content is assumed to be a TV program broadcast in a scheduled time frame according to a timetable made up by the broadcasting provider, for example.
  • the TV program is essentially composed of a video represented by a video signal SLV and audio represented by an audio signal SLA.
  • the video signal SLV and the audio signal SLA are encoded at the broadcast station 101 according to Moving Picture Experts Group (MPEG).
  • the resultant encoded video signal CSLV and encoded audio signal CSLA are multiplexed for generating the transport stream STT.
  • the content reproduction section 1 is also made operable to reproduce the video signal SLV and the audio signal SLA from the transport stream STT read from the content storage section 9 (described below) in an event that voice communication is started during reception/reproduction of contents.
  • the content reproduction section 1 includes an antenna 11, a tuner 12, a TS switch section 13, a demultiplexer 14, a video decoder 15, and an audio decoder 16.
  • the antenna 11 receives transport streams STT broadcast from a plurality of broadcast stations 101 (a single station is shown in FIG. 1), and outputs the received streams to the tuner 12.
  • the tuner 12 selects a transport stream STT transmitted on the channel designated by the user among ones transmitted on the channels receivable by the antenna 11, and outputs the selected transport stream STT to both the TS switch section 13 and the control section 8.
  • the TS switch section 13 outputs the transport stream STT sent from the tuner 12 to the demultiplexer 14.
  • the TS switch section 13 also receives the transport stream STT read from the content storage section 9 by the control section 8, and outputs the received transport stream STT to the demultiplexer 14.
  • the TS switch section 13 switches these two input lines in accordance with a control signal CSc sent from the control section 8.
  • the demultiplexer 14 demultiplexes the transport stream STT output from the TS switch section 13 into the encoded video signal CSLV and the encoded audio signal CSLA, which are sent to the video decoder 15 and the audio decoder 16, respectively.
  • the video decoder 15 decodes the encoded video signal CSLV received from the demultiplexer 14 in accordance with MPEG, and reproduces the video signal SLV representing a video constituting the content.
  • The reproduced video signal SLV is output to the image switch section 3.
  • the audio decoder 16 decodes the encoded audio signal CSLA received from the demultiplexer 14 in accordance with MPEG, and reproduces the audio signal SLA representing audio synchronizing with the video and constituting the content.
  • the reproduced audio signal SLA is output to the voice switch section 5.
  • the telephony processing section 2 communicates with a base station 102 included in a mobile communication system, and receives/sends a multiplexed signal SLS from/to the base station 102.
  • the multiplexed signal SLS includes at least an encoded voice signal CSLS1 representing the speech of the party with which the user speaks using the terminal device E1 and an encoded voice signal CSLS2 representing the speech of the user.
  • the telephony processing section 2 typically includes an antenna 21, a wireless communications part 22, a voice decoder 23, a voice input part 24, and a voice encoder 25.
  • the antenna 21 receives the multiplexed signal SLS sent from the base station 102.
  • the wireless communications part 22, as a demultiplexer, demultiplexes the multiplexed signal SLS to obtain the encoded voice signal CSLS1, and outputs the demultiplexed signal to the voice decoder 23.
  • the voice decoder 23 decodes the encoded voice signal CSLS1 output from the wireless communications part 22 according to the voice encoding scheme described above, and outputs the resultant voice signal SLS1 to the voice switch section 5.
  • the voice input part 24 generates a voice signal SLS2 representing the speech of the user, and outputs the generated signal to the voice encoder 25.
  • the voice encoder 25 encodes the voice signal SLS2 received from the voice input part 24 according to the voice encoding scheme described above, and outputs the resultant encoded voice signal CSLS2 to the wireless communications part 22.
  • the wireless communications part 22, as a multiplexer, multiplexes the encoded voice signal CSLS2 received from the voice encoder 25 for generating the multiplexed signal SLS, and outputs the generated signal SLS to the antenna 21.
  • the antenna 21 sends the multiplexed signal SLS received from the wireless communications part 22 to the base station 102.
  • the image switch section 3 outputs the video signal SLV received from the video decoder 15 to the display device 4.
  • the image switch section 3 also receives an image signal SLI, which is used during voice communication typically for displaying the current time, the radio-wave reception state, and the amount of remaining battery time.
  • the image signal SLI is generated by the control section 8.
  • the image switch section 3 switches between the output of the input video signal SLV and the output of the input image signal SLI in accordance with the control signal CSa or CSb sent from the control section 8.
  • the display device 4 displays a video or an image in accordance with the video signal SLV or the image signal SLI output from the image switch section 3.
  • the voice switch section 5 outputs the audio signal SLA sent from the audio decoder 16 to the speaker 6.
  • the voice switch section 5 also outputs the voice signal SLS1 sent from the voice decoder 23 to the speaker 6.
  • the voice switch section 5 switches between the output of the input audio signal SLA and the output of the input voice signal SLS1 in accordance with the control signal CSa or CSb sent from the control section 8.
  • the speaker 6 outputs the audio synchronizing with the video or the speech of the party on the other end of the voice communication.
  • control signals CSM, such as those indicating an incoming call and disconnection of voice communication, are exchanged between the terminal device E1 and the base station 102.
  • the wireless communications part 22 sends and receives such control signals CSM via the antenna 21.
  • Such control signals CSM, received or to be sent, are also supplied to the status detection section 7 from the wireless communications part 22.
  • the status detection section 7 decodes the control signals CSM sent from the wireless communications part 22, and outputs a signal CSST (hereinafter, referred to as status notification) indicating status changes of voice communication, which are typically an incoming call or disconnection of the voice communication, to the control section 8.
  • the control section 8 includes a program memory 81, a processor 82, and a working area 83.
  • the program memory 81 stores an operating system (OS), computer programs for receiving/reproducing of contents, and computer programs for voice communication processing. In this embodiment, these programs are collectively referred to as a program CP1 for the sake of convenience.
  • the processor 82 executes the program CP1, using the working area 83.
  • the content storage section 9 stores the transport stream STT transferred from the tuner 12 under control of the control section 8.
  • the input device 10 outputs, to the control section 8, a signal SLP (hereinafter, referred to as start instruction) for instructing read of the transport stream STT stored in the content storage section 9 in response to an input from the user.
  • Referring to FIG. 2, assuming that a call arrives when the terminal device E1 receives/reproduces the transport stream STT (that is, content) at time t0, the user is prevented from viewing the content from time t0 until the voice communication is finished.
  • the terminal device E1 therefore stores the transport stream STT received during the voice communication in the content storage section 9 while the user is prevented from viewing the content. Assuming that the voice communication is finished and the terminal device E1 disconnects it at time t1, the terminal device E1 restarts the reception/reproduction of the transport stream STT at time t1, and the reception/reproduction thereof is finished at time t2.
  • the start instruction SLP described above is generated.
  • the terminal device E1 reads the transport stream STT stored in the content storage section 9, and reproduces the transport stream STT. In this way, the user can view the portion of the content missed due to the voice communication.
  • the processor 82 executes the program for receiving/reproducing of contents, which is included in the program CP1.
  • during the execution of the program CP1, if the user designates a channel in order to view a desired content (hereinafter, referred to as an object content), the following setting is performed by the processor 82.
  • the terminal device E1 reproduces the video signal SLV and the audio signal SLA from the received transport stream STT, and outputs a video and audio synchronizing with the video (step S1). More specifically, the tuner 12 selects a transport stream STT transmitted via the set channel among the transport streams STT output from the antenna 11, and outputs the selected transport stream STT to the TS switch section 13.
  • the TS switch section 13 outputs the input transport stream STT to the demultiplexer 14.
  • the demultiplexer 14 demultiplexes the input transport stream STT, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively.
  • the video decoder 15 decodes the input encoded video signal CSLV, and outputs the resultant video signal SLV to the display device 4 via the image switch section 3.
  • the audio decoder 16 decodes the input encoded audio signal CSLA, and outputs the resultant audio signal SLA to the speaker 6 via the voice switch section 5.
  • a video constituting the object content is displayed on the display device 4 while audio synchronizing with the displayed video is output from the speaker 6.
  • the output transport stream STT is also sent to the control section 8 from the tuner 12.
  • the control section 8 preferably abandons the input transport stream STT without transferring it to the content storage section 9.
  • a switch (not shown) may be provided somewhere between the tuner 12 and the TS switch section 13 so as to block the transport stream STT output from the tuner 12 from being input into the control section 8 in step S1.
  • after step S1, the processor 82 determines whether or not status notification CSST indicating a status change of the voice communication, that is, an incoming call, has been received from the status detection section 7 (step S2). If not received, indicating no voice communication processing is necessary, execution of step S1 is repeated. If status notification CSST indicating an incoming call has been received, the processor 82 first generates the control signal CSa, and sends the signal to the image switch section 3 for switching between the two input lines and the voice switch section 5 for switching between the two input lines (step S3).
  • the image switch section 3 is set to the state ready to receive the output of the control section 8
  • the voice switch section 5 is set to the state ready to receive the output of the voice decoder 23.
  • the processor 82 then starts execution of the program for voice communication processing included in the program CP1.
  • the terminal device E1 exchanges the multiplexed signal SLS with the base station 102 for voice communication, reproduces the voice signal SLS1 included in the multiplexed signal SLS, and outputs the speech of the caller.
  • the terminal device E1 also generates the encoded voice signal CSLS2 from the voice signal SLS2 representing the speech of the user, multiplexes the encoded voice signal CSLS2, and sends the resultant multiplexed signal SLS to the base station 102. That is, the terminal device E1 performs voice communication processing (step S4). More specifically, the wireless communications part 22 switches its function between that of a demultiplexer and that of a multiplexer. The wireless communications part 22, as a demultiplexer, demultiplexes the multiplexed signal SLS output from the antenna 21 to obtain the encoded voice signal CSLS1, and outputs the encoded voice signal CSLS1 to the voice decoder 23. The voice decoder 23 decodes the received encoded voice signal CSLS1, and outputs the decoded voice signal SLS1 to the speaker 6 via the voice switch section 5. By the processing described above, the speech of the caller is output from the speaker 6.
  • the voice input part 24 generates the voice signal SLS2 representing the speech of the user, and outputs the voice signal SLS2 to the voice encoder 25.
  • the voice encoder 25 encodes the input voice signal SLS2, and outputs the resultant encoded voice signal CSLS2 to the wireless communications part 22.
  • the wireless communications part 22, as a multiplexer, multiplexes the input encoded voice signal CSLS2, and sends the multiplexed signal SLS to the base station 102.
  • the processor 82 also generates the image signal SLI on the working area 83 if required, and sends the generated signal to the image switch section 3.
  • in addition, the transport stream STT output from the tuner 12 is stored in the content storage section 9 under control of the processor 82 (step S5).
  • the processor 82 determines whether or not status notification CSST indicating a next status change of the voice communication, that is, disconnection of the voice communication, has been received from the status detection section 7 (step S6). If not received, indicating that no restart of reception/reproduction of the content is necessary, steps S4 and S5 are executed until disconnection of the voice communication is detected. If status notification CSST indicating disconnection of the voice communication has been received, this means that time t1 (see FIG. 2) has been detected.
  • in order to restart reception/reproduction of the content, the processor 82 generates the control signal CSb, and sends the signal CSb to the image switch section 3 for switching between the two input lines and the voice switch section 5 for switching between the two input lines (step S7).
  • the image switch section 3 is set to the state ready to receive the output of the video decoder 15, and the voice switch section 5 is set to the state ready to receive the output of the audio decoder 16.
  • the content reproduction section 1 outputs a video and audio constituting the content in the same manner as that in step S1 (step S8).
  • during step S8, the processor 82 determines whether or not start instruction SLP has been received from the input device 10 (step S9). If not received, indicating that no read of the transport stream STT from the content storage section 9 is necessary, execution of step S8 is repeated. If start instruction SLP has been received, this means that time t2 (see FIG. 2) has been detected.
  • the processor 82 generates the control signal CSc for changing the input of the TS switch section 13 from the tuner 12 side to the control section 8 side, and sends the signal CSc to the TS switch section 13 (step S10).
  • the TS switch section 13 changes its input line as described above.
  • the processor 82 then reads the transport stream STT stored in the content storage section 9, and transfers the transport stream STT to the TS switch section 13.
  • the demultiplexer 14 receives the transport stream STT transferred via the TS switch section 13, demultiplexes the transport stream STT, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively.
  • the video decoder 15 and the audio decoder 16 operate in the same manner as that in step S1, reproducing the video signal SLV and the audio signal SLA from the input encoded video signal CSLV and encoded audio signal CSLA, and outputting the respective signals to the display device 4 and the speaker 6. That is, the terminal device E1 reads and reproduces the object content (step S11). As a result, the portion of a video constituting the object content missed by the user during the voice communication is displayed on the display device 4, and audio synchronizing with the video is output from the speaker 6.
  • the control section 8 preferably controls the relevant components to block the transport stream STT output from the tuner 12 from being stored in the content storage section 9.
  • the processor 82 then determines whether or not there is any part of the transport stream STT which has yet to be reproduced left in the content storage section 9 (step S12). If there is, execution of step S11 is repeated. If no part of the transport stream STT is left, meaning that the entire portion missed by the user during the voice communication has been reproduced, the processor 82 terminates the processing shown in FIG. 3.
  • the terminal device E1 stores the transport stream STT in the content storage section 9 during voice communication. After the user finishes the voice communication and after the reception of the transport stream STT from the broadcast station 101 is finished, the terminal device E1 starts reproduction of the transport stream STT stored in the content storage section 9 in response to the start instruction SLP. In this way, it is possible to provide the terminal device E1 capable of outputting a portion of a content missed by the user due to voice communication at a time shifted from the actual broadcast time. As described above, the terminal device E1 can stop storing the content in the content storage section 9 once the read from the content storage section 9 is started in step S11. Therefore, no unnecessary contents are recorded in the content storage section 9. This enables efficient use of the recording capacity of the content storage section 9.
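  • For orientation only, the control flow of FIG. 3 can be paraphrased as the Python sketch below; the step comments map only loosely onto steps S1 to S12, and every helper callable (tuner, telephony, play, start_requested) is a hypothetical stand-in rather than part of the specification.

    from collections import deque

    def fig3_loop(tuner, telephony, play, start_requested):
        storage = deque()                        # content storage section 9
        while not telephony.incoming_call():     # S1-S2: reproduce until a call arrives
            play(tuner.next_packet())
        while not telephony.disconnected():      # S4-S6: during the voice communication
            telephony.process_call()             # voice communication processing
            storage.append(tuner.next_packet())  # S5: write the content to storage
        while not start_requested():             # S7-S9: live viewing until SLP arrives
            play(tuner.next_packet())
        while storage:                           # S10-S12: read and reproduce the backlog
            play(storage.popleft())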
  • the status detection section 7 detects status changes of voice communication by way of the control signals CSM.
  • the terminal device E1, typified by a cellular phone, is normally provided with an input device for entering the on-hook or off-hook state. Therefore, using a signal output from this input device, the status detection section 7 may detect the time point at which the telephony processing section 2 enters the off-hook state as the first status change of the voice communication and the time point at which the telephony processing section 2 enters the on-hook state as the next status change of the voice communication.
  • the telephony processing section 2 performs processing related to voice communication.
  • the telephony processing section 2 may perform processing related to a videophone.
  • the telephony processing section 2 is required to additionally perform reception/reproduction of image information on the side of the party on the other end of the voice communication, and display a video showing the party on the other end, in place of the image given by the image signal SLI described above, and also required to capture and encode image information on the side of the user.
  • the content is assumed to be a TV program, but is not restricted thereto.
  • the content may be a radio program broadcast in a scheduled time frame according to a timetable made up by the radio broadcasting provider.
  • a radio program is composed of audio represented by the audio signal SL A .
  • the content may be music, composed of a video and audio, or composed of audio only, distributed as a stream from a server via a digital network typified by the Internet. Such music is provided as the audio signal SLA.
  • FIG. 4 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E2 that is a first variant of the terminal device E1 described above.
  • the terminal device E2 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP2 is stored in the program memory 81 and that an input device 103 is used in place of the input device 10. Therefore, in FIG. 4, the same components as those of the terminal device E1 in FIG. 1 are denoted by the same reference numerals, and the description thereof is omitted here.
  • the program CP2 is the same in configuration as the program CP1. By executing the program CP2, however, the terminal device E2 performs some processing items different from those performed by the terminal device E1. This will be described below with reference to FIGS. 5 and 6.
  • the input device 103 outputs a signal SLF instructing end of reception/reproduction of the transport stream STT (hereinafter, referred to as end instruction) in response to an input from the user.
  • Referring to FIG. 5, assuming that a call arrives when the terminal device E2 receives/reproduces the transport stream STT at time t0, the user is prevented from viewing the object content from time t0 until the voice communication is finished.
  • the terminal device E2 therefore stores the transport stream STT received during the voice communication in the content storage section 9 while the user is prevented from viewing the content.
  • the terminal device E2 reads the transport stream STT stored in the content storage section 9 from time t1, and reproduces the transport stream STT at n times speed (n is a number satisfying n > 1).
  • the portion of the object content missed by the user due to the voice communication is read sequentially from the head thereof. During this time, therefore, the user is prevented from viewing the portion of the object content being broadcast according to the actual broadcast time (the portion after time t1). Therefore, the terminal device E2 continues storing the transport stream STT, that is, the object content being broadcast according to the actual broadcast time after time t1, in the storage section 9. In this state, as n times speed time-shifted reproduction of the stored content is performed, the time lag between the content being reproduced and the content being broadcast is gradually reduced. Hence the transport stream STT to be stored in the content storage section 9 decreases. In other words, n times speed time-shifted reproduction is performed from time t1 to time t2. As will be understood from the above description, by means of the n times speed time-shifted reproduction, the object content is reproduced at n times speed along a time axis different from the actual digital broadcast time.
  • at time t2, the terminal device E2 terminates the write of the content in the content storage section 9 and the read of the content from the content storage section 9, and instead receives/reproduces the transport stream STT being broadcast from the broadcast station 101 according to the actual broadcast time.
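  • The n times speed time-shifted reproduction from time t1 to time t2 can be pictured with the sketch below (illustrative only; n is taken as an integer here and the helper names are assumptions): live packets keep being written while the backlog is read n packets at a time, so the storage section gradually empties.

    from collections import deque

    def catch_up(storage: deque, tuner, play, n: int = 2):
        """Drain the stored backlog at n times speed while the broadcast continues."""
        while storage:
            storage.append(tuner.next_packet())  # content still being broadcast (written)
            for _ in range(n):                   # read n packets per newly arriving packet
                if not storage:
                    break
                play(storage.popleft())          # n-times-speed time-shifted reproduction
        # storage empty: switch back to normal reception/reproduction of the broadcast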
  • Referring to FIG. 6, the operation of the terminal device E2 outlined with reference to FIG. 5 will be described in detail.
  • the flowchart of FIG. 6 is the same as that of FIG. 3, except that steps S21 to S27 are included in place of steps S7 to S12. Therefore, in FIG. 6, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here.
  • when the processor 82 determines that the voice communication is disconnected in step S6, this means that time t1 (see FIG. 5) has been detected. Therefore, to perform the n times speed time-shifted reproduction, the processor 82 generates the control signal CSb and sends the signal to the image switch section 3 for switching between the two input lines and the voice switch section 5 for switching between the two input lines (step S21). With this control signal CSb, the image switch section 3 is set to the state ready to receive the output of the video decoder 15, and the voice switch section 5 is set to the state ready to receive the output of the audio decoder 16.
  • the processor 82 generates the control signal CSc for changing the input of the TS switch section 13 from the tuner 12 side to the control section 8 side and also instructing the video decoder 15 and the audio decoder 16 to perform the n times speed time-shifted reproduction, and sends this signal to the TS switch section 13, the video decoder 15, and the audio decoder 16 (step S22).
  • the TS switch section 13 changes its input lines as described above, and the reproduction speed of the video decoder 15 and the audio decoder 16 is set at n times.
  • the processor 82 reads the transport stream STT stored in the content storage section 9 and transfers the transport stream STT to the TS switch section 13.
  • the portion of the object content missed by the user due to the voice communication is sequentially read.
  • the portion of the transport stream STT stored during the voice communication is sequentially read from the head thereof.
  • the demultiplexer 14 demultiplexes the transport stream STT received via the TS switch section 13, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively.
  • the video decoder 15 selects pictures required for the n times speed time-shifted reproduction from the received encoded video signal CSLV, decodes the selected pictures according to MPEG, and reproduces the video signal SLV.
  • the reproduced video signal SLV is output to the display device 4 via the image switch section 3.
  • the audio decoder 16 selects portions required for the n times speed time-shifted reproduction from the received encoded audio signal CSLA, decodes the selected portions according to MPEG, and reproduces the audio signal SLA.
  • the reproduced audio signal SLA is output to the speaker 6 via the voice switch section 5.
  • the transport stream STT output from the tuner 12 is written in the content storage section 9 under control of the processor 82.
  • by the above processing (step S23), the terminal device E2 performs the n times speed time-shifted reproduction.
  • the content missed by the user due to the voice communication is displayed from the head thereof on the display device 4 at n times speed, and audio synchronizing with the video is output from the speaker 6.
  • the processor 82 determines whether or not there is any part of the transport stream STT which has yet to be reproduced left in the content storage section 9 (step S24). If there is, execution of step S23 is repeated. If no part of the transport stream STT is left, this means that the entire portion of the object content missed by the user due to the voice communication has been reproduced and also that time t2 (see FIG. 5) has been detected.
  • to perform reception/reproduction of the content, the processor 82 generates a control signal CSd for changing the input lines of the TS switch section 13 from the control section 8 side to the tuner 12 side and also instructing the video decoder 15 and the audio decoder 16 to perform normal-speed reproduction, and sends this signal to the TS switch section 13, the video decoder 15, and the audio decoder 16 (step S25).
  • with such a control signal CSd, the TS switch section 13 changes its input lines as described above, and the reproduction speed of the video decoder 15 and the audio decoder 16 is set at normal. Thereafter, the content reproduction section 1 reproduces the object content in the same manner as that in step S1 (step S26).
  • the processor 82 determines whether or not end instruction SLF has been received (step S27). If not received, it is determined that the user is still viewing the object content, and the processor 82 repeats the execution of step S26. If having received the end instruction SLF, the processor 82 determines that the user has finished viewing the content and terminates the processing shown in FIG. 6.
  • the terminal device E2 writes the transport stream STT in the content storage section 9 from the start point of the voice communication (that is, time t0) till time t2 as shown in FIG. 5.
  • the terminal device E2 starts n times speed time-shifted reproduction of the portion of the object content stored in the content storage section 9 missed by the user from the head thereof. This n times speed time-shifted reproduction is performed from time t1 until time t2.
  • the video decoder 15 and the audio decoder 16 select portions required for the n times speed time-shifted reproduction, and reproduce the selected portions.
  • the processor 82 may read only portions required for the n times speed time-shifted reproduction from the transport stream STT stored in the content storage section 9, and transfer these portions to the TS switch section 13.
  • the processor 82 determines when the n times speed time-shifted reproduction is returned to normal reproduction by examining whether or not any part of the transport stream STT is left in the content storage section 9.
  • the n times speed time-shifted reproduction may be returned to the normal reproduction when the difference between the value of a presentation time stamp (PTS) included in the transport stream STT being written and the value of the PTS included in the transport stream STT being read becomes substantially zero.
  • the content storage section 9 will become substantially vacant.
  • the terminal device E2 may detect this time t2, and then may receive/reproduce the transport stream STT being broadcast from the broadcast station 101 according to the actual broadcast time. In this case, it is preferred not to write the portions that the user will presumably consider unnecessary in the content storage section 9. This can reduce the storage area occupied by the object content in the content storage section 9.
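  • The PTS-based switch-over test mentioned above might be sketched as follows (illustrative only; MPEG presentation time stamps are counted in 90 kHz clock ticks, the tolerance is an arbitrary choice, and PTS wrap-around is ignored in this sketch):

    PTS_CLOCK_HZ = 90_000  # MPEG system clock rate for presentation time stamps

    def caught_up(write_pts: int, read_pts: int, tolerance_s: float = 0.5) -> bool:
        """True when the stream being read has substantially reached the stream being written."""
        return abs(write_pts - read_pts) <= tolerance_s * PTS_CLOCK_HZ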
  • FIG. 7 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E3 that is a second variant of the terminal device E1 described above.
  • The terminal device E3 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP3 is stored in the program memory 81 and that an image combining section 104 is provided in place of the image switch section 3.
  • The program CP3 is the same in configuration as the program CP1. By executing the program CP3, however, the terminal device E3 performs some processing items different from those performed by the terminal device E1. This will be described below with reference to FIGS. 8 and 9.
  • The image combining section 104 receives the video signal SLV from the video decoder 15 and the image signal SLi generated by the control section 8.
  • During voice communication, the image combining section 104 combines the input video signal SLV and the input image signal SLi to generate a combined image signal SLM, and outputs the combined signal to the display device 4.
  • Otherwise, the image combining section 104 outputs the video signal SLV from the video decoder 15 to the display device 4 as it is.
  • In FIG. 8, assume that a call arrives at time t0 while the terminal device E3 receives/reproduces the transport stream STT, and that the voice communication is disconnected at time t1.
  • In the embodiment described above, the user is prevented from viewing the object content during the time period from t0 to t1.
  • In contrast, the terminal device E3 generates the combined image signal SLM as described above, and displays this image during the time period from t0 to t1. By displaying this image, the user can view the object content during the voice communication. In this way, the terminal device E3 can provide more enhanced operability.
  • The flowchart of FIG. 9 is the same as that of FIG. 3, except that step S31 is included in place of step S4. Therefore, in FIG. 9, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here.
  • The processor 82 starts execution of the program for voice communication processing included in the program CP3.
  • The terminal device E3 then performs processing required for voice communication, and also generates and displays the combined image signal SLM (step S31).
  • The processing required for voice communication is the same as that performed in the embodiment described above. In this variant, therefore, only the generation/display of the combined image signal SLM will be described in detail.
  • The processor 82 generates the image signal SLi on the working area 83 if required, and sends the signal to the image combining section 104.
  • The video signal SLV is also sent to the image combining section 104 from the video decoder 15, as described above.
  • The image combining section 104 combines the input image signal SLi and the input video signal SLV to generate the combined image signal SLM, in which a video of the broadcast content is superimposed on the image used during voice communication.
  • The display device 4, receiving the combined image signal SLM and performing necessary display processing on the received signal, displays the image represented by the image signal SLi and a video of the object content.
  • By the processing described above, the terminal device E3 can output the object content even during voice communication.
  • Although the terminal device E3 has been described as a variant of the terminal device E1, it may be a variant of the terminal device E2. That is, step S31 described above may be executed in place of step S4 in FIG. 6.
  • In the above description, the combined image signal SLM was obtained by combining the object content and the image to be displayed during voice communication.
  • If a caption relating to the object content is received, the image combining section 104 may generate the combined image signal SLM additionally including the caption, as shown in FIG. 10. If the telephony processing section 2 performs processing required for a videophone, the image combining section 104 may generate the combined image signal SLM additionally including an image of the party on the other end of voice communication, as shown in FIG. 11. The image combining section 104 may also generate the combined image signal SLM further including an image of the user, as shown in FIG. 12.
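As a concrete illustration of the compositing performed by the image combining section 104, the following sketch overlays a reduced broadcast frame and an optional caption on the call-time screen. It uses the Pillow imaging library purely for illustration; the library choice, the inset size, and the caption position are assumptions of this example, not part of the patent.

```python
# Minimal sketch of E3-style image combining; library and layout are
# illustrative assumptions only.
from PIL import Image, ImageDraw


def combine(call_screen: Image.Image, broadcast_frame: Image.Image,
            caption: str | None = None) -> Image.Image:
    """Superimpose the broadcast video on the image used during the call
    (the combined image signal SLM of FIGS. 10 to 12)."""
    combined = call_screen.copy()
    # Shrink the broadcast frame to a half-size inset placed in the top-right corner.
    inset = broadcast_frame.resize((call_screen.width // 2, call_screen.height // 2))
    combined.paste(inset, (call_screen.width - inset.width, 0))
    if caption:
        # Draw the caption (text data accompanying the content) near the bottom edge.
        ImageDraw.Draw(combined).text((8, call_screen.height - 20), caption)
    return combined
```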
  • FIG. 13 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E4 that is a third variant of the terminal device E1 described above.
  • The terminal device E4 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP4 is stored in the program memory 81 and that a mute detection section 105 is additionally included.
  • The program CP4 is the same in configuration as the program CP1. By executing the program CP4, however, the terminal device E4 performs some processing items different from those performed by the terminal device E1. This will be described below in detail with reference to FIGS. 14 and 15.
  • The mute detection section 105 receives the voice signal SLS1 output from the voice decoder 23.
  • The mute detection section 105 typically detects a mute time period BNS, during which the party on the other end of voice communication does not speak, based on the amplitude value of the input voice signal SLS1, generates a timing signal SLT indicating a start or end point of the mute time period, and outputs the timing signal to the control section 8.
  • In FIG. 14, assume that times t0 and t1 are defined as described above.
  • In the embodiment described above, the user is prevented from hearing audio constituting the object content during the time period from t0 to t1.
  • In contrast, the terminal device E4 detects the mute time period BNS during which the party on the other end of voice communication does not speak based on the voice signal SLS1, and controls the input line of the voice switch section 5 so that the audio signal SLA from the audio decoder 16 is input into the speaker 6 during the detected mute time period BNS (an illustrative sketch of such amplitude-based detection is given below).
  • In this way, the terminal device E4 can provide more enhanced operability.
  • The operation of the terminal device E4 outlined above is described in more detail with reference to the flowchart of FIG. 15.
  • The flowchart of FIG. 15 is the same as that of FIG. 3, except that steps S41 to S44 are additionally included. Therefore, in FIG. 15, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here.
  • Subsequent to step S5, the processor 82 determines whether or not the timing signal SLT has been received from the mute detection section 105 (step S41). If not received, the processing proceeds to step S6 because no switching of the voice switch section 5 is required. If the timing signal SLT has been received, the processor 82 determines whether or not the signal indicates the end point of a mute time period BNS (step S42). If not, this means that the received timing signal SLT indicates the start point of the mute time period BNS.
  • In this case, the processor 82 generates a control signal CSa for changing the input line of the voice switch section 5 from the voice decoder 23 to the audio decoder 16, and outputs the generated control signal to the voice switch section 5 (step S43).
  • If the timing signal SLT indicating the end point of the mute time period BNS has been received in step S42, the processor 82 generates a control signal CSe for changing the input line of the voice switch section 5 from the audio decoder 16 to the voice decoder 23, and outputs the generated control signal to the voice switch section 5 (step S44).
  • By the processing described above, the terminal device E4 can output audio constituting the object content during voice communication, as long as the voice communication is within the mute time period BNS.
  • Although the terminal device E4 was described as a variant of the terminal device E1, it may be a variant of the terminal device E2 or E3. That is, steps S41 to S44 described above may be incorporated in the flowchart of FIG. 6 or 9.
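The amplitude-based mute detection attributed to the mute detection section 105 can be sketched as follows. This is a minimal illustration rather than the patent's implementation; the 16-bit PCM frames, the threshold value, and the hold time are assumptions of this example.

```python
# Minimal sketch of amplitude-based mute detection (E4 variant); thresholds
# and frame sizes are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class MuteDetector:
    threshold: int = 500     # peak amplitude below which a frame counts as silent
    hold_frames: int = 25    # consecutive silent frames before "mute start"
    _silent_run: int = 0
    _muted: bool = False

    def feed(self, frame: list[int]) -> str | None:
        """Feed one frame of far-end voice samples; return 'mute_start',
        'mute_end' or None, mirroring the timing signal SLT."""
        silent = max((abs(s) for s in frame), default=0) < self.threshold
        self._silent_run = self._silent_run + 1 if silent else 0
        if not self._muted and self._silent_run >= self.hold_frames:
            self._muted = True
            return "mute_start"   # route the broadcast audio SLA to the speaker
        if self._muted and not silent:
            self._muted = False
            return "mute_end"     # route the call voice SLS1 back to the speaker
        return None
```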
  • FIG. 16 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E5 as a fourth variant of the terminal device E1 described above.
  • The terminal device E5 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP5 is stored in the program memory 81 and that a voice switch section 106 and first and second speakers 107 and 108 are provided in place of the voice switch section 5 and the speaker 6.
  • The program CP5 is the same in configuration as the program CP1. By executing the program CP5, however, the terminal device E5 performs some processing items different from those performed by the terminal device E1. This will be described below in detail with reference to FIGS. 17 and 18.
  • The voice switch section 106 receives the audio signal SLA output from the audio decoder 16 and the voice signal SLS1 output from the voice decoder 23. During reception/reproduction of the transport stream STT, the voice switch section 106 outputs the input audio signal SLA to the first and second speakers 107 and 108. During voice communication, however, the voice switch section 106 outputs the input audio signal SLA to one of the first and second speakers 107 and 108 (the second speaker 108 in FIG. 16), and outputs the input voice signal SLS1 to the other speaker.
  • The voice switch section 106 switches its input/output lines in accordance with the control signal CSa or CSb output from the control section 8.
  • The first and second speakers 107 and 108 are L-side and R-side speakers, respectively, for stereo output.
  • During voice communication, the terminal device E5 controls the voice switch section 106 so that the voice signal SLS1 received from the voice decoder 23 is output from the first speaker 107 and the audio signal SLA received from the audio decoder 16 is output from the second speaker 108 (a sketch of this routing is given below).
  • Thus, the user can hear the audio of the object content even during voice communication. In this way, the terminal device E5 can provide more enhanced operability.
  • The operation of the terminal device E5 outlined above will be described in detail with reference to the flowchart of FIG. 18.
  • The flowchart of FIG. 18 is the same as that of FIG. 3, except that steps S51 to S53 are included in place of steps S3, S4 and S7. Therefore, in FIG. 18, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here.
  • If status notification CSST indicating an incoming call has been received in step S2, the processor 82 generates the control signal CSa, and sends the signal to the image switch section 3 and the voice switch section 106 for switching their input lines (step S51).
  • With this control signal CSa, the image switch section 3 is set to the state ready to receive the output of the control section 8, and the voice switch section 106 is set to the state ready to receive both the outputs of the audio decoder 16 and the voice decoder 23.
  • The processor 82 then starts execution of the program for voice communication processing included in the program CP5.
  • The terminal device E5 exchanges the multiplexed signal SLS with the base station 102 for voice communication, demultiplexes the encoded voice signal CSLS1 included in the multiplexed signal to reproduce the voice signal SLS1, and thus outputs the speech of the caller.
  • The terminal device E5 also generates the encoded voice signal CSLS2 representing the speech of the user, multiplexes the encoded voice signal, and sends the resultant multiplexed signal SLS to the base station 102 (step S52). More specifically, the wireless communications part 22 switches its function between that of a demultiplexer and that of a multiplexer.
  • The wireless communications part 22 demultiplexes the multiplexed signal SLS input from the antenna 21 to obtain the encoded voice signal CSLS1, and outputs the encoded voice signal to the voice decoder 23.
  • The voice decoder 23 decodes the input encoded voice signal CSLS1, and outputs the decoded voice signal SLS1 to one of the first and second speakers 107 and 108 via the voice switch section 106.
  • The audio decoder 16 outputs the reproduced audio signal SLA to the other speaker 107 or 108 via the voice switch section 106.
  • The voice encoder 25 encodes the voice signal SLS2 from the voice input part 24, and outputs the encoded voice signal CSLS2 to the wireless communications part 22.
  • The wireless communications part 22, as a multiplexer, multiplexes the input encoded voice signal CSLS2 and sends the resultant multiplexed signal SLS to the base station 102 via the antenna 21.
  • The processor 82 generates the image signal SLi on the working area 83 if required, and sends the signal to the display device 4 via the image switch section 3. By this processing, an image represented by the image signal SLi is displayed on the display device 4.
  • If it is determined in step S6 that the voice communication has been disconnected, the processor 82 generates the control signal CSb and sends the signal to both the image switch section 3 and the voice switch section 106 for switching the input lines (step S53). As a result, the image switch section 3 is set to the state ready to receive the output of the video decoder 15, and the voice switch section 106 is set to the state ready to receive the output of the audio decoder 16.
  • By the processing described above, the operation outlined with reference to FIG. 17 is obtained. That is, the terminal device E5 can output audio constituting the object content even during voice communication.
  • Although the terminal device E5 has been described as a variant of the terminal device E1, it may be a variant of the terminal device E2 or E3. That is, steps S51 to S53 described above may be incorporated in the flowchart of FIG. 6 or 9.
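The per-sample routing performed by the voice switch section 106 amounts to the small function below. It is a minimal sketch under the assumption that speaker 107 carries the left channel and speaker 108 the right channel, as stated for the stereo output above; the sample representation is illustrative only.

```python
# Minimal sketch of E5-style routing to the two speakers; channel assignment
# follows the L/R description above, sample types are illustrative.

def route(broadcast_sample: float, voice_sample: float,
          in_call: bool) -> tuple[float, float]:
    """Return (speaker 107, speaker 108) output samples."""
    if in_call:
        # During a call: far-end voice on one speaker, broadcast audio on the other.
        return voice_sample, broadcast_sample
    # Normal reception/reproduction: broadcast audio on both speakers.
    return broadcast_sample, broadcast_sample
```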
  • FIG. 19 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E6 as a fifth variant of the terminal device E1 described above.
  • The terminal device E6 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP6 is stored in the program memory 81 and that an input device 109 and a preselection storage section 110 are additionally provided.
  • The program CP6 is the same in configuration as the program CP1.
  • By executing the program CP6, however, the terminal device E6 performs some processing items different from those performed by the terminal device E1. This will be described below with reference to FIGS. 20 and 21.
  • The input device 109 outputs a signal SLR indicating the channel and the broadcast start time of a content the user wants to view in the future (hereinafter, referred to as preselection information) to the control section 8 in response to the input from the user.
  • In FIG. 20, assume that the user is still engaged in voice communication using the terminal device E6 when broadcast of a content specified by the preselection information SLR (hereinafter, referred to as an object content) starts at time t0.
  • In this case, the user is prevented from viewing the object content from time t0 until the voice communication is finished.
  • The content as used in this variant has the same definition as that described in the above embodiment.
  • During the voice communication, the terminal device E6 stores the received transport stream STT in the content storage section 9. Assuming that the voice communication is finished and disconnected at time t1, the terminal device E6 reproduces the stored transport stream STT at time t2, which is after time t1. In this way, the user can view the portion of the content broadcast during the voice communication.
  • First, the user inputs the channel and the broadcast start time of the object content with the input device 109 of the terminal device E6.
  • The input device 109 generates preselection information SLR indicating the input information.
  • The generated preselection information SLR is stored in the preselection storage section 110 (step S61).
  • When a control signal CSM indicating an incoming call is received by the telephony processing section 2, the processor 82 receives the status notification CSST from the status detection section 7 and, in response to this notification, executes the program for voice communication processing included in the program CP6. That is, the image switch section 3 is set to the state ready to receive the output of the control section 8, and the voice switch section 5 is set to the state ready to receive the output of the voice decoder 23. The terminal device E6 then exchanges the multiplexed signal SLS with the base station 102 for voice communication as in step S4 described above (step S62).
  • The processor 82 accesses the preselection storage section 110 and determines whether or not the time of broadcast of the object content designated by the preselection information SLR has come (step S63). If not, indicating that write of the transport stream STT is unnecessary, execution of step S62 is repeated. If the time of the start of the object content has come, the tuner 12 is set to receive the preselected channel under the control of the processor 82, and the processor 82 writes the transport stream STT output from the tuner 12 in the content storage section 9, as in step S5 described above (step S64). Subsequent to the write operation, the processor 82 determines whether or not the status notification CSST indicating disconnection of the voice communication has been received (step S65).
  • If not received, indicating that reproduction of the transport stream STT stored in the content storage section 9 is unnecessary, execution of step S62 is repeated. If the status notification CSST indicating disconnection of the voice communication has been received, the processor 82 performs necessary processing in response to this notification and then terminates the write of the content in the content storage section 9. At the same time, the processor 82 generates the control signal CSa and sends the signal to the image switch section 3 and the voice switch section 5 for switching their input lines (step S66).
  • By the switching in step S66, a video constituting the object content is displayed on the display device 4, and audio synchronizing with the video is output from the speaker 6, as in step S1 described above (step S67).
  • The processor 82 then determines whether or not the start instruction SLP has been received from the input device 10 (step S68). If not received, indicating that read of the transport stream STT from the content storage section 9 is unnecessary, execution of step S67 is repeated. If the start instruction SLP has been received, the processor 82 generates the control signal CSC for changing the input of the TS switch section 13 from the tuner 12 side to the control section 8 side, and sends the signal to the TS switch section 13 (step S69). By this step S69, the TS switch section 13 changes its input accordingly.
  • The processor 82 reads the transport stream STT stored in the content storage section 9 and transfers the transport stream to the TS switch section 13.
  • The demultiplexer 14 demultiplexes the transport stream STT transferred via the TS switch section 13, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively.
  • The video decoder 15 and the audio decoder 16 operate as in step S1 for reproducing the video signal SLV and the audio signal SLA (step S610).
  • By the processing described above, a video constituting the object content missed by the user during the voice communication is displayed on the display device 4, and audio synchronizing with the video is output from the speaker 6.
  • The processor 82 determines whether or not there is any part of the transport stream STT which has yet to be reproduced left in the content storage section 9 (step S611). If there is, execution of step S610 is repeated. If no part of the transport stream STT is left, this means that the user has viewed all the portion of the content missed due to the voice communication. Therefore, the processor 82 terminates the processing shown in FIG. 21.
  • By the processing described above, the operation outlined with reference to FIG. 20 is obtained. That is, during voice communication, the terminal device E6 writes the transport stream STT in the content storage section 9 once broadcast of the object content is started. After the voice communication is finished and after the reception of the transport stream STT from the broadcast station 101 is finished, reproduction of the transport stream STT stored in the content storage section 9 is started. In this way, it is possible to provide a communications terminal device capable of outputting the portion of the content missed by the user due to the voice communication at a time shifted from the actual broadcast time.
  • Although the terminal device E6 was described as a variant of the terminal device E1, it may be a variant of any of the terminal devices E2 to E5. Otherwise, the terminal device E6 may be combined with any of the terminal devices E1 to E5.
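The control flow of FIG. 21 (steps S61 to S611) can be summarized as below. This is a minimal sketch only: the tuner, storage, player, and call objects are hypothetical stand-ins for the corresponding sections of the terminal device E6, and the polling interval is an arbitrary choice for the example.

```python
# Minimal sketch of the E6 flow (record a preselected broadcast during a call,
# reproduce it afterwards); every object used here is a hypothetical stand-in.
import time


def record_preselected_during_call(preselection, tuner, storage, player, call,
                                   poll_s: float = 0.5) -> None:
    # Steps S62-S65: handle the call; once the preselected start time arrives,
    # write the received transport stream into the storage section.
    while call.active():
        if time.time() >= preselection["start_time"]:
            tuner.select(preselection["channel"])
            storage.append(tuner.read_packets())
        time.sleep(poll_s)
    # Steps S66/S67: after disconnection, resume live display of the broadcast.
    player.play_live()
    # Steps S68-S611: on the start instruction, reproduce the stored portion
    # until nothing is left in the storage section.
    if player.start_requested():
        for packets in storage.read_all():
            player.play(packets)
```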
  • FIG. 22 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E7 as a sixth variant of the terminal device E1 described above.
  • The terminal device E7 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP7 is stored in the program memory 81.
  • The program CP7 is the same in configuration as the program CP1. By executing the program CP7, the terminal device E7 performs some processing items different from those performed by the terminal device E1. This will be described below with reference to FIGS. 23 and 24.
  • First, an operation of the terminal device E7 is outlined with reference to FIG. 23.
  • In FIG. 23, the user is prevented from viewing the object content during the time period from t0 to t1 due to voice communication, as in the cases described above.
  • The terminal device E7 writes the received transport stream STT in the content storage section 9 after time t1, until at least time t2 at which the broadcast of the object content is finished.
  • From time t1, the terminal device E7 reads the transport stream STT stored in the content storage section 9 and reproduces it at normal speed. This read is performed sequentially from the head of the portion of the object content missed by the user due to the voice communication, and the read portion is displayed for viewing by the user.
  • The terminal device E7 displays the object content stored in the content storage section 9 for the user from time t1 until time t3, at which read of the object content from the content storage section 9 is finished. In other words, after the voice communication, the terminal device E7 performs time-shifted reproduction of the content along a time axis shifted from the actual broadcast time by the time (t1 - t0).
  • The flowchart of FIG. 24 is the same as that of FIG. 3, except that steps S71 to S74 are included in place of steps S7 to S12. Therefore, in FIG. 24, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here.
  • If it is determined in step S6 that the voice communication has been disconnected, meaning that time t1 (see FIG. 23) has been detected, the processor 82 generates and sends the control signal CSb described above in relation to step S21 in FIG. 6 (step S71). With this control signal, the image switch section 3 and the voice switch section 5 are set to the respective states described above in relation to step S21. The processor 82 also changes the input line of the TS switch section 13 to the control section 8 side by sending the control signal CSC to the TS switch section 13 (step S72).
  • In step S72, the control signal CSC is also sent to the video decoder 15 and the audio decoder 16 to set their reproduction speed to normal.
  • The processor 82 then reads the transport stream STT stored in the content storage section 9 and transfers the stream to the TS switch section 13. By this transfer, the portion of the object content missed by the user due to the voice communication is sequentially read from the head thereof.
  • The demultiplexer 14 demultiplexes the transport stream STT transferred via the TS switch section 13, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively, which reproduce the video signal SLV and the audio signal SLA (step S73).
  • By the processing in step S73, a video of the content missed by the user due to the voice communication is displayed from the head thereof on the display device 4 at the normal speed, while audio synchronizing with the video is output from the speaker 6.
  • The processor 82 then determines whether or not the time-shifted reproduction is finished (step S74). If not finished, execution of step S73 is repeated. If the time-shifted reproduction is finished, the processing shown in FIG. 24 is terminated.
  • As described above, the terminal device E7 writes the transport stream STT in the content storage section 9 from the start point of voice communication (that is, time t0). From the end point of the voice communication (that is, time t1), the terminal device E7 starts time-shifted reproduction of the portion of the object content missed by the user and stored in the content storage section 9, from the head of the portion. In this way, it is possible to provide a communications terminal device capable of outputting the portion of the content missed by the user due to the voice communication at a time shifted from the actual broadcast time.
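A minimal sketch of this E7-style behaviour, writing the live stream while simultaneously reading it back from the head, is given below; the deque, the decode callable, and the polling sleep are illustrative assumptions rather than the patent's structures.

```python
# Minimal sketch of E7-style time-shifted reproduction: the tail of the stream
# is still being written while the head is read back at normal speed.
import time
from collections import deque

storage: deque = deque()   # stands in for the content storage section 9


def write_packet(packet: bytes) -> None:
    """Called for every packet received while the broadcast is still on air."""
    storage.append(packet)


def time_shifted_read(decode, broadcast_finished) -> None:
    """Read from the head (steps S72/S73) until the broadcast has ended and
    the stored portion is exhausted (step S74)."""
    while storage or not broadcast_finished():
        if storage:
            decode(storage.popleft())   # reproduce the oldest stored packet
        else:
            time.sleep(0.01)            # wait for more of the live broadcast
```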
  • FIG. 25 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E8 as a seventh variant of the terminal device E1 described above.
  • The terminal device E8 has the same construction as the terminal device E1, except that a demultiplexer 120 is included in place of the demultiplexer 14 and that a computer program (hereinafter, referred to as a program for simplification) CP8 is stored in the program memory 81.
  • In FIG. 25, therefore, the same components as those of the terminal device E1 in FIG. 1 are denoted by the same reference numerals, and the description thereof is omitted here.
  • The demultiplexer 120 demultiplexes the transport stream STT output from the TS switch section 13, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively, as does the demultiplexer 14. By this demultiplexing, the demultiplexer 120 also obtains a program map table (PMT) including at least the broadcast end time of the content being received, and sends the PMT to the processor 82.
  • The program CP8 is the same in configuration as the program CP1. By executing the program CP8, however, the terminal device E8 performs some processing items different from those performed by the terminal device E1. This will be described below with reference to FIGS. 26 and 27.
  • First, an operation of the terminal device E8 is outlined with reference to FIG. 26.
  • In FIG. 26, the user has voice communication during the time period from t0 to t1, as in the above cases. If the broadcast of the object content finishes at time t2, which is between time t0 and time t1, the processor 82 terminates the write of the object content in the content storage section 9.
  • The operation of the terminal device E8 outlined above with reference to FIG. 26 will be described in more detail with reference to the flowchart of FIG. 27.
  • The flowchart of FIG. 27 is the same as that of FIG. 3, except that steps S81 to S83 are additionally included. Therefore, in FIG. 27, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here.
  • Subsequent to step S4, the processor 82 determines whether or not write terminating processing has been executed in step S83 (step S81). If not executed, the processor 82 proceeds to step S5. If the write terminating processing has been executed, the write of the received transport stream STT in the content storage section 9 is no longer necessary. Therefore, the processor 82 skips step S5 and the following steps and proceeds to step S6.
  • Subsequent to step S5, the processor 82 determines whether or not the broadcast end time of the object content has come, based on the PMT sent from the demultiplexer 120 (step S82). If determining that the broadcast end time has not come, the processor 82 proceeds to step S6. If it is determined that the broadcast end time has come, the processor 82 terminates the write of the object content in the content storage section 9 (step S83). In other words, the processor 82 discards the transport stream STT sent from the tuner 12, not transferring it to the content storage section 9. The processor 82 then performs step S6. By the processing described above, the write of the object content in the content storage section 9 is terminated upon end of the broadcast of the object content. This enables recording/reproduction suitable for the content storage section 9 whose recording capacity is small.
  • Although the terminal device E8 was described as a variant of the terminal device E1, it may be a variant of any of the terminal devices E2 to E7.
  • If a switch (not shown) is provided between the tuner 12 and the TS switch section 13, the processor 82 may control this switch to block the transport stream STT output from the tuner 12 from being input into the control section 8 in step S83.
  • In the above description, the processor 82 determines whether or not the end time of the object content has come based on the PMT. If an electronic program guide (EPG) is obtainable, the processor 82 may determine whether or not the end time of the object content has come based on the obtained EPG.
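A minimal sketch of the end-time check of steps S81 to S83 follows. The end time is assumed to have already been extracted from the PMT (or an EPG) and converted to an epoch timestamp; `write_packet` and `stop_writing` are hypothetical helpers standing in for the control section's write path.

```python
# Minimal sketch of E8-style write termination at the broadcast end time.
import time


def handle_received_packet(packet: bytes, broadcast_end_time: float,
                           write_packet, stop_writing, writing: bool) -> bool:
    """Process one received packet and return the new 'writing' state."""
    if not writing:                           # step S81: write already terminated
        return False
    if time.time() >= broadcast_end_time:     # step S82: broadcast end time reached
        stop_writing()                        # step S83: discard further packets
        return False
    write_packet(packet)                      # step S5: keep storing the content
    return True
```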
  • The processor 82 may detect the remaining recording capacity of the content storage section 9, determine the bit rate for the write of the object content based on the detected remaining capacity, and then store the object content in the content storage section 9 according to the determined bit rate.
  • A program stream converted from the transport stream STT may be stored in the content storage section 9. Otherwise, an MPEG-4-encoded transport stream STT may be stored in the content storage section 9.
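The bit-rate selection suggested above can be sketched as follows; the rate ladder and the assumed maximum recording duration are illustrative values chosen for the example, not figures from the patent.

```python
# Minimal sketch of choosing a recording bit rate from the remaining capacity
# of the content storage section 9; all numeric values are illustrative.

def choose_bit_rate(remaining_bytes: int, max_duration_s: int = 30 * 60) -> int:
    """Pick the highest bit rate (bit/s) whose recording of max_duration_s
    seconds still fits into the remaining capacity."""
    affordable = (remaining_bytes * 8) // max_duration_s
    ladder = [2_000_000, 1_000_000, 500_000, 250_000, 125_000]
    for rate in ladder:
        if rate <= affordable:
            return rate
    return ladder[-1]    # fall back to the lowest rate when space is very tight
```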
  • The communications terminal device of the present invention is applicable to a digital device, such as a cellular phone, a personal digital assistant (PDA), or a personal computer (PC), capable of incorporating the content receiving function and the voice communication function.

Abstract

In a terminal, a content reproduction section reproduces a video signal and an audio signal from a received transport stream. A telephony processing section performs processing necessary for voice communication. A status detection section detects a time point at which the status of voice communication changes. A control section writes the transport stream received by the content reproduction section in a content storage section when the status detection section detects start of voice communication, and reads and sends the transport stream stored in the content storage section to the content reproduction section according to a predetermined timing. The content reproduction section reproduces a video signal and an audio signal from the transport stream read by the control section.

Description

DESCRIPTION
COMMUNICATIONS TERMINAL DEVICE ALLOWING CONTENT RECEPTION AND VOICE COMMUNICATION
TECHNICAL FIELD
The present invention relates to communications terminal devices, and more particularly, to a communications terminal device having a content reception function and a voice communication function.
BACKGROUND ART
Terrestrial digital broadcasting is scheduled to commence in Japan first in the three biggest metropolitan areas in 2003 and then nationwide in 2006. A feature of terrestrial digital broadcasting is that mobile reception of contents is possible. As for mobile communications, the third-generation cellular phone service started in 2001, enabling distribution of moving pictures and a portable videophone. Under the present situation as described above, there has been recently announced a concept of a communications terminal device having both the content reception function and the voice communication function.
However, when a voice call arrives while the user is viewing a content with such a communications terminal device, the user tends to choose to answer the call rather than to continue viewing the content. In the opposite case, that is, if a content that the user desires to view has started while the user is engaged in voice communication using a mobile communications terminal device, the user tends to continue the voice communication. There is known in the art a conventional television which receives a video signal externally broadcast on the channel selected by the viewer, reproduces the received video signal, and outputs a video represented by the received video signal. The conventional television includes a built-in modem which outputs a status signal when a fixed telephone receives a call during the reception of the video signal. In response to the status signal, the conventional television begins recording of the currently received video signal in a storage device built therein (that is, performs video recording). After the voice communication is finished, the conventional television reproduces the video signal recorded in the storage device. Thus, the viewer can view the video missed due to the voice communication.
The conceptual communications terminal device has a problem in that the user is not allowed to view the content at least until the voice communication is finished. As another problem, the conventional television described above needs to have a storage device with a large capacity capable of storing a long content. However, the communications terminal device described above, which is a mobile unit, is allowed to include only a small-capacity storage device and thus cannot store a long content therein. In view of this, it is difficult to implement the technology developed for televisions as described above for a mobile communications terminal device without substantial modifications.
Therefore, an object of the present invention is to provide a communications terminal device capable of outputting a portion of a content missed by the user due to voice communication at a time shifted from the actual broadcast time.
Another object of the present invention is to provide a communications terminal device capable of recording/reproducing a portion of a content missed by the user due to voice communication by a technique suitable for mobile units .
DISCLOSURE OF THE INVENTION
To achieve the above objects, the present invention has the following aspects.
A first aspect of the present invention is directed to a communications terminal device including: a reproduction section operable to receive and reproduce a content transmitted from an external source; a telephony processing section operable to receive and reproduce at least voice of a party on the other end of voice communication; a status detection section operable to detect a status change of voice communication; a storage section operable to store the content received by the reproduction section; a write section operable to write the content received by the reproduction section in the storage section while the status detection section detects a status change of voice communication; and a read section operable to read the content stored in the storage section. The reproduction section is further operable to reproduce the content read by the read section. Typically, the reproduction section receives a program composed of a video and audio from a remote broadcast station as a content. As described above, the communications terminal device stores the received content in the storage section while the user is engaged in voice communication, and reads and reproduces the stored content after the voice communication is finished. Thus, the communications terminal device can provide the user with the portion of the content the user failed to view due to the voice communication.
Preferably, the read section is operable to start read of the content stored in the storage section while the status detection section detects a next status change of voice communication. By this processing, reproduction of the content automatically starts. It is therefore possible to provide a communications terminal device having more enhanced operability.
Typically, the status detection section is operable to detect an incoming call at the telephony processing section as a start point of voice communication, and detect that the telephony processing section has disconnected voice communication as an end point of voice communication. Alternatively, the status detection section is operable to detect that the telephony processing section has entered an off-hook state as a start point of voice communication, and detect that the telephony processing section has entered an on-hook state as an end point of voice communication. In this way, the status can be detected using the voice communication function originally possessed by the communications terminal device. It is therefore possible to reduce the number of components of the communications terminal device and minimize the fabrication cost.
Furthermore, the telephony processing section is operable to receive and reproduce an image of the party on the other end of voice communication. Alternatively, the reproduction section is operable to reproduce the content read by the read section at n times speed (n is a positive number satisfying n > 1) , and also is operable to receive and reproduce the content transmitted from the external source when the read by the read section is completed. The communications terminal device reproduces the receiving content once substantially no data is left in the storage section. Therefore, it is unnecessary to store all the content received after the start of voice communication in the storage section. This enables recording/reproduction suitable for a communications terminal device allowed to include only a small-capacity memory.
The communications terminal device may further include: an image generation section operable to generate image information relating to voice communication; and an image combining section operable to generate combined image information by combining the content received by the reproduction section and the image information generated by the image generation section while the status detection section detects a status change of voice communication. With these components, the user can view the content even during voice communication. It is therefore possible to realize a communications terminal device having further enhanced operability.
The reproduction section is further operable to receive text data relating to the content, and the image combining section is operable to generate the combined image information in which the received text data is additionally included. The user can view the text data even during voice communication. It is therefore possible to realize a communications terminal device having further enhanced operability.
The image combining section is operable to generate the combined image information to which an image of the party on the other end of the voice communication is additionally included. Furthermore, when the reproduction section can capture an image of the user, the image combining section can generate the combined image information to which the captured image of the user is additionally included.
When the reproduction section can reproduce at least audio constituting the received content, the communications terminal device may further include: a mute detection section operable to detect a mute time period of voice communication; and a voice switch section operable to output the audio reproduced by the reproduction section during the mute time period detected by the mute detection section. The voice switch section can further output a voice signal reproduced by the telephony processing section when the mute detection section detects no mute time period. This enables the user to hear audio constituting the content even during voice communication. It is therefore possible to provide a communications terminal device having further enhanced operability.
The communications terminal device further includes first and second speakers operable to output the audio reproduced by the reproduction section and the voice reproduced by the telephony processing section while a status change of voice communication is detected by the status detection section. This enables the user to hear audio of the content during voice communication.
The communications terminal device further includes a start detection section operable to detect a predetermined content transmission start time. The write section can further store the content received by the reproduction section while a transmission status change is detected by the start detection section. Thus, the communications terminal device can provide the user with the portion of the content the user failed to view and hear due to voice communication.
For time-shifted reproduction, the read section can further read the content stored in the storage section from a head thereof during the progress of writing of the content in the storage section by the write section. Thus, the communications terminal device can provide the user with the portion of the content the user failed to view and hear due to voice communication. The communications terminal device further includes: an end time determination section operable to determine an end time of the content received by the reproduction section and currently being written in the storage section; and a write terminating section operable to terminate the write of the content in the storage section when the end time determination section determines that the end time has passed. With the write terminating section, the write of the object content in the storage section can be terminated once the content is finished, even during voice communication. This enables recording/reproduction suitable for a communications terminal device allowed to include only a small-capacity memory.
The communications terminal device further includes a remaining capacity detection section operable to detect the remaining recording capacity of the storage section. The write section can further determine a bit rate based on the remaining capacity detected by the remaining capacity detection section, and write the content received by the reproduction section based on the determined bit rate. As described above, the bit rate of the content can be controlled according to the remaining capacity of the storage section. This enables recording/reproduction suitable for a communications terminal device allowed to include only a small-capacity memory.
A second aspect of the present invention is a computer program for providing a function of broadcast reception and a function of voice communication to a computer, including the steps of: receiving and reproducing a content transmitted from an external source; receiving and reproducing at least voice of a party on the other end of voice communication; detecting a status change time point of the voice communication; writing the content received in the step of receiving and reproducing a content while a status change of voice communication is detected in the step of detecting; and reading the content written in the step of writing. The step of receiving and reproducing a content can further reproduce the content read in the step of reading. Typically, the computer program is recorded in a recording medium.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram showing the construction of a terminal device E1 of an embodiment of the present invention;
FIG. 2 is a timing chart showing an outline of an operation of the terminal device E1 of FIG. 1;
FIG. 3 is a flowchart showing a detailed operation of the terminal device E1 of FIG. 1;
FIG. 4 is a block diagram showing the construction of a terminal device E2 that is a first variant of the terminal device E1 of FIG. 1;
FIG. 5 is a timing chart showing an outline of an operation of the terminal device E2 of FIG. 4;
FIG. 6 is a flowchart showing a detailed operation of the terminal device E2 of FIG. 4;
FIG. 7 is a block diagram showing the construction of a terminal device E3 that is a second variant of the terminal device E1 of FIG. 1;
FIG. 8 is a timing chart showing an outline of an operation of the terminal device E3 of FIG. 7;
FIG. 9 is a flowchart showing a detailed operation of the terminal device E3 of FIG. 7;
FIG. 10 is a view showing a first example of a combined image signal SLM generated by an image combining section 104 in FIG. 7;
FIG. 11 is a view showing a second example of the combined image signal SLM generated by the image combining section 104 in FIG. 7;
FIG. 12 is a view showing a third example of the combined image signal SLM generated by the image combining section 104 in FIG. 7;
FIG. 13 is a block diagram showing the construction of a terminal device E4 that is a third variant of the terminal device E1 of FIG. 1;
FIG. 14 is a timing chart showing an outline of an operation of the terminal device E4 of FIG. 13;
FIG. 15 is a flowchart showing a detailed operation of the terminal device E4 of FIG. 13;
FIG. 16 is a block diagram showing the construction of a terminal device E5 that is a fourth variant of the terminal device E1 of FIG. 1;
FIG. 17 is a timing chart showing an outline of an operation of the terminal device E5 of FIG. 16;
FIG. 18 is a flowchart showing a detailed operation of the terminal device E5 of FIG. 16;
FIG. 19 is a block diagram showing the construction of a terminal device E6 that is a fifth variant of the terminal device E1 of FIG. 1;
FIG. 20 is a timing chart showing an outline of an operation of the terminal device E6 of FIG. 19;
FIG. 21 is a flowchart showing a detailed operation of the terminal device E6 of FIG. 19;
FIG. 22 is a block diagram showing the construction of a terminal device E7 that is a sixth variant of the terminal device E1 of FIG. 1;
FIG. 23 is a timing chart showing an outline of an operation of the terminal device E7 of FIG. 22;
FIG. 24 is a flowchart showing a detailed operation of the terminal device E7 of FIG. 22;
FIG. 25 is a block diagram showing the construction of a terminal device E8 that is a seventh variant of the terminal device E1 of FIG. 1;
FIG. 26 is a timing chart showing an outline of an operation of the terminal device E8 of FIG. 25; and
FIG. 27 is a flowchart showing a detailed operation of the terminal device E8 of FIG. 25.
BEST MODE FOR CARRYING OUT THE INVENTION
FIG. 1 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E1 of an embodiment of the present invention. In FIG. 1, the terminal device E1 includes a content reproduction section 1, a telephony processing section 2, an image switch section 3, a display device 4, a voice switch section 5, a speaker 6, a status detection section 7, a control section 8, a content storage section 9, and an input device 10.
The content reproduction section 1 receives a transport stream STT which is composed of at least one channel and broadcast from a terrestrial digital broadcast station 101, and reproduces a content from the received transport stream STT. In this embodiment, the content is assumed to be a TV program broadcast in a scheduled time frame according to a timetable made up by the broadcasting provider, for example. The TV program is essentially composed of a video represented by a video signal SLV and audio represented by an audio signal SLA. The video signal SLV and the audio signal SLA are encoded at the broadcast station 101 according to Moving Picture Experts Group (MPEG). The resultant encoded video signal CSLV and encoded audio signal CSLA are multiplexed for generating the transport stream STT. The content reproduction section 1 is also made operable to reproduce the video signal SLV and the audio signal SLA from the transport stream STT read from the content storage section 9 (described below) in an event that voice communication is started during reception/reproduction of contents. To implement the reception/reproduction of contents as described above, the content reproduction section 1 includes an antenna 11, a tuner 12, a TS switch section 13, a demultiplexer 14, a video decoder 15, and an audio decoder 16.
The antenna 11 receives transport streams STT broadcast from a plurality of broadcast stations 101 (a single station is shown in FIG. 1), and outputs the received streams to the tuner 12. The tuner 12 selects a transport stream STT transmitted on the channel designated by the user among ones transmitted on the channels receivable by the antenna 11, and outputs the selected transport stream STT to both the TS switch section 13 and the control section 8. The TS switch section 13 outputs the transport stream STT sent from the tuner 12 to the demultiplexer 14. The TS switch section 13 also receives the transport stream STT read from the content storage section 9 by the control section 8, and outputs the received transport stream STT to the demultiplexer 14. The TS switch section 13 switches these two input lines in accordance with a control signal CSC sent from the control section 8. The demultiplexer 14 demultiplexes the transport stream STT output from the TS switch section 13 into the encoded video signal CSLV and the encoded audio signal CSLA, which are sent to the video decoder 15 and the audio decoder 16, respectively. The video decoder 15 decodes the encoded video signal CSLV received from the demultiplexer 14 in accordance with MPEG, and reproduces the video signal SLV representing a video constituting the content. The reproduced video signal SLV is output to the image switch section 3. The audio decoder 16 decodes the encoded audio signal CSLA received from the demultiplexer 14 in accordance with MPEG, and reproduces the audio signal SLA representing audio synchronizing with the video and constituting the content. The reproduced audio signal SLA is output to the voice switch section 5. The telephony processing section 2 communicates with a base station 102 included in a mobile communication system, and receives/sends a multiplexed signal SLS from/to the base station 102. The multiplexed signal SLS, multiplexed and encoded according to a multiplexing scheme and a voice encoding scheme adopted by the mobile communication system, includes at least an encoded voice signal CSLS1 representing the speech of the party with which the user speaks using the terminal device E1 and an encoded voice signal CSLS2 representing the speech of the user. To implement the voice communication described above, the telephony processing section 2 typically includes an antenna 21, a wireless communications part 22, a voice decoder 23, a voice input part 24, and a voice encoder 25.
The antenna 21 receives the multiplexed signal SLS sent from the base station 102. The wireless communications part 22, as a demultiplexer, demultiplexes the multiplexed signal SLS to obtain the encoded voice signal CSLS1, and outputs the demultiplexed signal to the voice decoder 23. The voice decoder 23 decodes the encoded voice signal CSLS1 output from the wireless communications part 22 according to the voice encoding scheme described above, and outputs the resultant voice signal SLS1 to the voice switch section 5. The voice input part 24 generates a voice signal SLS2 representing the speech of the user, and outputs the generated signal to the voice encoder 25. The voice encoder 25 encodes the voice signal SLS2 received from the voice input part 24 according to the voice encoding scheme described above, and outputs the resultant encoded voice signal CSLS2 to the wireless communications part 22. The wireless communications part 22, as a multiplexer, multiplexes the encoded voice signal CSLS2 received from the voice encoder 25 for generating the multiplexed signal SLS, and outputs the generated signal SLS to the antenna 21. The antenna 21 sends the multiplexed signal SLS received from the wireless communications part 22 to the base station 102.
The image switch section 3 outputs the video signal SLV received from the video decoder 15 to the display device 4. The image switch section 3 also receives an image signal SLi, which is used during voice communication typically for displaying the current time, the radio-wave reception state, and the amount of remaining battery time. The image signal SLi is generated by the control section 8. The image switch section 3 switches between the output of the input video signal SLV and the output of the input image signal SLi in accordance with the control signal CSa or CSb sent from the control section 8. The display device 4 displays a video or an image in accordance with the video signal SLV or the image signal SLi output from the image switch section 3.
The voice switch section 5 outputs the audio signal SLA sent from the audio decoder 16 to the speaker 6. The voice switch section 5 also outputs the voice signal SLS1 sent from the voice decoder 23 to the speaker 6. The voice switch section 5 switches between the output of the input audio signal SLA and the output of the input voice signal SLS1 in accordance with the control signal CSa or CSb sent from the control section 8. The speaker 6 outputs the audio synchronizing with the video or the speech of the party on the other end of the voice communication. In the mobile communication system, in addition to the multiplexed signal SLS described above, various control signals CSM, such as those indicating an incoming call and disconnection of voice communication, are exchanged between the terminal device E1 and the base station 102. The wireless communications part 22 sends and receives such control signals CSM via the antenna 21. Such control signals CSM, received or to be sent, are also supplied to the status detection section 7 from the wireless communications part 22. The status detection section 7 decodes the control signals CSM sent from the wireless communications part 22, and outputs a signal CSST (hereinafter, referred to as status notification) indicating status changes of voice communication, which are typically an incoming call or disconnection of the voice communication, to the control section 8.
To control the components described above, the control section 8 includes a program memory 81, a processor 82, and a working area 83. The program memory 81 stores an operating system (OS), computer programs for receiving/reproducing of contents, and computer programs for voice communication processing. In this embodiment, these programs are collectively referred to as a program CP1 for the sake of convenience. The processor 82 executes the program CP1, using the working area 83.
The content storage section 9 stores the transport stream STT transferred from the tuner 12 under control of the control section 8. The input device 10 outputs, to the control section 8, a signal SLP (hereinafter, referred to as start instruction) for instructing read of the transport stream STT stored in the content storage section 9 in response to an input from the user.
Next, an operation of the terminal device E1 described above is outlined with reference to FIG. 2. In FIG. 2, assuming that a call arrives at time t0 while the terminal device E1 receives/reproduces the transport stream STT (that is, content), the user is prevented from viewing the content from time t0 until the voice communication is finished. The terminal device E1 therefore stores the transport stream STT received during the voice communication in the content storage section 9 while the user is prevented from viewing the content. Assuming that the voice communication is finished and the terminal device E1 disconnects it at time t1, the terminal device E1 restarts the reception/reproduction of the transport stream STT at time t1, and the reception/reproduction thereof is finished at time t2. After time t2, the start instruction SLP described above is generated. In response to the start instruction SLP, the terminal device E1 reads the transport stream STT stored in the content storage section 9, and reproduces the transport stream STT. In this way, the user can view the portion of the content missed due to the voice communication.
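A minimal sketch of this record-then-replay behaviour is given below. It only illustrates the timing in FIG. 2: the `storage` list stands in for the content storage section 9, `decode` for the demultiplexer/decoder chain, and the step references are to the flowchart of FIG. 3 only where the surrounding text confirms them.

```python
# Minimal sketch of the E1 behaviour outlined in FIG. 2; all names are
# hypothetical stand-ins for the sections of the terminal device.

def handle_received_packet(packet: bytes, in_call: bool, storage: list) -> None:
    """Buffer the stream only while a call is in progress."""
    if in_call:
        storage.append(packet)   # write the portion the user is missing (step S5)
    # Otherwise the packet is decoded and shown live (step S1) and not stored.


def handle_start_instruction(storage: list, decode) -> None:
    """On the start instruction SLP, replay everything buffered during the call."""
    while storage:
        decode(storage.pop(0))   # reproduce the missed portion in broadcast order
```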
The operation outlined above with reference to FIG. 2 is described in detail with reference to the flowchart of FIG. 3. In FIG. 3, the processor 82 executes the program for receiving/reproducing of contents, which is included in the program CP1. During the execution of the program CP1, if the user designates a channel in order to view a desired content (hereinafter, referred to as an object content), the following setting is performed by the processor 82. That is, the designated channel to be received by the tuner 12 is set, the TS switch section 13 is set to the state ready to receive the output of the tuner 12, the image switch section 3 is set to the state ready to receive the output of the video decoder 15, and the voice switch section 5 is set to the state ready to receive the output of the audio decoder 16. After this setting, the terminal device E1 reproduces the video signal SLV and the audio signal SLA from the received transport stream STT, and outputs a video and audio synchronizing with the video (step S1). More specifically, the tuner 12 selects a transport stream STT transmitted via the set channel among the transport streams STT output from the antenna 11, and outputs the selected transport stream STT to the TS switch section 13. The TS switch section 13 outputs the input transport stream STT to the demultiplexer 14. The demultiplexer 14 demultiplexes the input transport stream STT, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively. The video decoder 15 decodes the input encoded video signal CSLV, and outputs the resultant video signal SLV to the display device 4 via the image switch section 3. The audio decoder 16 decodes the input encoded audio signal CSLA, and outputs the resultant audio signal SLA to the speaker 6 via the voice switch section 5. By the processing described above, a video constituting the object content is displayed on the display device 4 while audio synchronizing with the displayed video is output from the speaker 6. As is found from FIG. 1, the output transport stream STT is also sent to the control section 8 from the tuner 12. In step S1, however, the control section 8 preferably abandons the input transport stream STT without transferring it to the content storage section 9. A switch (not shown) may be provided somewhere between the tuner 12 and the TS switch section 13 so as to block the transport stream STT output from the tuner 12 from being input into the control section 8 in step S1.
Subsequent to step S1, the processor 82 determines whether or not status notification CSST indicating a status change of the voice communication, that is, an incoming call, has been received from the status detection section 7 (step S2). If not received, indicating that no voice communication processing is necessary, execution of step S1 is repeated. If status notification CSST indicating an incoming call has been received, the processor 82 first generates the control signal CSa, and sends the signal to the image switch section 3 and the voice switch section 5, each for switching between its two input lines (step S3). With this control signal CSa, the image switch section 3 is set to the state ready to receive the output of the control section 8, and the voice switch section 5 is set to the state ready to receive the output of the voice decoder 23. The processor 82 then starts execution of the program for voice communication processing included in the program CP1. The terminal device E1 exchanges the multiplexed signal SLS with the base station 102 for voice communication, reproduces the voice signal SLS1 included in the multiplexed signal SLS, and outputs the speech of the caller. The terminal device E1 also generates the encoded voice signal CSLS2 from the voice signal SLS2 representing the speech of the user, multiplexes the encoded voice signal CSLS2, and sends the resultant multiplexed signal SLS to the base station 102. That is, the terminal device E1 performs voice communication processing (step S4). More specifically, the wireless communications part 22 switches its function between that of a demultiplexer and that of a multiplexer. The wireless communications part 22, as a demultiplexer, demultiplexes the multiplexed signal SLS output from the antenna 21 to obtain the encoded voice signal CSLS1, and outputs the encoded voice signal CSLS1 to the voice decoder 23. The voice decoder 23 decodes the received encoded voice signal CSLS1, and outputs the decoded voice signal SLS1 to the speaker 6 via the voice switch section 5. By the processing described above, the speech of the caller is output from the speaker 6.
The voice input part 24 generates the voice signal SLS2 representing the speech of the user, and outputs the voice signal SLS2 to the voice encoder 25. The voice encoder 25 encodes the input voice signal SLS2, and outputs the resultant encoded voice signal CSLS2 to the wireless communications part 22. The wireless communications part 22, as a multiplexer, multiplexes the input encoded voice signal CSLS2, and sends the multiplexed signal SLS to the base station 102. The processor 82 also generates the image signal SLI on the working area 83 if required, and sends the generated signal SLI to the display device 4 via the image switch section 3. By this processing, an image represented by the image signal SLI is displayed on the display device 4.
Subsequent to step S4 described above, the transport stream STT output from the tuner 12 is stored in the content storage section 9 under control of the processor 82 (step S5). After storing of the transport stream STT, the processor 82 determines whether or not status notification CSST indicating the next status change of the voice communication, that is, disconnection of the voice communication, has been received from the status detection section 7 (step S6). If not received, indicating that no restart of reception/reproduction of the content is necessary, steps S4 and S5 are executed until disconnection of the voice communication is detected. If status notification CSST indicating disconnection of the voice communication has been received, this means that time t1 (see FIG. 2) has been detected. Therefore, in order to restart reception/reproduction of the content, the processor 82 generates the control signal CSb, and sends the signal CSb to the image switch section 3 and the voice switch section 5, each for switching between its two input lines (step S7). With this control signal CSb, the image switch section 3 is set to the state ready to receive the output of the video decoder 15, and the voice switch section 5 is set to the state ready to receive the output of the audio decoder 16. Thereafter, the content reproduction section 1 outputs a video and audio constituting the content in the same manner as that in step S1 (step S8).
During the repetition of processing steps S4 and S5, the video decoder 15 and the audio decoder 16 are free from operation. Therefore, to reduce the power consumption of the terminal device E1, the processor 82 may stop supplying power to these components. In this case, the processor 82 has to restart supplying power to these components in step S8. After step S8, the processor 82 determines whether or not start instruction SLP has been received from the input device 10 (step S9). If not received, indicating that no read of the transport stream STT from the content storage section 9 is necessary, execution of step S8 is repeated. If start instruction SLP has been received, this means that time t2 (see FIG. 2) has been detected. Therefore, the processor 82 generates the control signal CSc for changing the input of the TS switch section 13 from the tuner 12 side to the control section 8 side, and sends the signal CSc to the TS switch section 13 (step S10). By this processing of step S10, the TS switch section 13 changes its input line as described above. The processor 82 then reads the transport stream STT stored in the content storage section 9, and transfers the transport stream STT to the TS switch section 13. The demultiplexer 14 receives the transport stream STT transferred via the TS switch section 13, demultiplexes the transport stream STT, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively. The video decoder 15 and the audio decoder 16 operate in the same manner as that in step S1, reproducing the video signal SLV and the audio signal SLA from the input encoded video signal CSLV and encoded audio signal CSLA, and outputting the respective signals to the display device 4 and the speaker 6. That is, the terminal device E1 reads and reproduces the object content (step S11). As a result, the portion of a video constituting the object content missed by the user during the voice communication is displayed on the display device 4, and audio synchronizing with the video is output from the speaker 6. In step S11, as in step S1, the control section 8 preferably controls the relevant components to block the transport stream STT output from the tuner 12 from being stored in the content storage section 9.
Thereafter, the processor 82 determines whether or not there is any part of the transport stream STT which has yet to be reproduced left in the content storage section 9 (step S12). If there is, execution of step S11 is repeated. If no part of the transport stream STT is left, meaning that the entire portion missed by the user during the voice communication has been reproduced, the processor 82 terminates the processing shown in FIG. 3.
By the processing described above, the terminal device E1 stores the transport stream STT in the content storage section 9 during voice communication. After the user finishes the voice communication and after the reception of the transport stream STT from the broadcast station 101 is finished, the terminal device E1 starts reproduction of the transport stream STT stored in the content storage section 9 in response to the start instruction SLP. In this way, it is possible to provide the terminal device E1 capable of outputting a portion of a content missed by the user due to voice communication at a time shifted from the actual broadcast time. As described above, the terminal device E1 can stop storing the content in the content storage section 9 once the read from the content storage section 9 is started in step S11. Therefore, no unnecessary contents are recorded in the content storage section 9. This enables efficient use of the recording capacity of the content storage section 9.
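For illustration only, and not as part of the disclosed embodiment, the control flow of FIG. 3 can be sketched as follows in Python. The names run_terminal, events, and broadcast are hypothetical stand-ins for the processor 82, the status notifications CSST and the start instruction SLP, and the transport stream STT.

```python
from collections import deque

def run_terminal(events, broadcast):
    """Minimal sketch of the FIG. 3 flow: reproduce the broadcast live,
    buffer it during a call, and replay the buffered portion afterwards.
    'events' maps a time index to 'incoming_call' or 'disconnect';
    'broadcast' is the sequence of transport-stream packets (hypothetical)."""
    storage = deque()          # stands in for the content storage section 9
    in_call = False
    output = []                # what the display device and speaker would present
    for t, packet in enumerate(broadcast):
        event = events.get(t)
        if event == "incoming_call":   # status notification CSST (step S2)
            in_call = True
        elif event == "disconnect":    # next status change (step S6)
            in_call = False
        if in_call:
            storage.append(packet)     # step S5: write during the call
        else:
            output.append(packet)      # steps S1/S8: live reproduction
    # after the broadcast ends, a start instruction SLP would trigger read-out
    while storage:
        output.append(storage.popleft())   # steps S10-S12: time-shifted read
    return output

# usage: a call from t=3 to t=6 delays packets 3-6 until after the broadcast
print(run_terminal({3: "incoming_call", 7: "disconnect"}, list(range(10))))
```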
In the embodiment described above, the status detection section 7 detects status changes of voice communication by way of the control signals CSM. The terminal device E1, typified by a cellular phone, is normally provided with an input device for entering the on-hook or off-hook state. Therefore, using a signal output from this input device, the status detection section 7 may detect the time point at which the telephony processing section 2 enters the off-hook state as the first status change of the voice communication and the time point at which the telephony processing section 2 enters the on-hook state as the next status change of the voice communication.
In the embodiment described above, the telephony processing section 2 performs processing related to voice communication. Alternatively, the telephony processing section 2 may perform processing related to a videophone. In this case, the telephony processing section 2 is required to additionally perform reception/reproduction of image information on the side of the party on the other end of the voice communication, and display a video showing the party on the other end, in place of the image given by the image signal SLI described above, and is also required to capture and encode image information on the side of the user.
In the embodiment described above, the content is assumed to be a TV program, but is not restricted thereto. For example, the content may be a radio program broadcast in a scheduled time frame according to a timetable made up by the radio broadcasting provider. Such a radio program is composed of audio represented by the audio signal SLA. Otherwise, the content may be music, composed of a video and audio, or composed of audio only, distributed as a stream from a server via a digital network typified by the Internet. Such music is provided as the audio signal SLA.
FIG. 4 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E2 that is a first variant of the terminal device E1 described above. In FIG. 4, the terminal device E2 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP2 is stored in the program memory 81 and that an input device 103 is used in place of the input device 10. Therefore, in FIG. 4, the same components as those of the terminal device E1 in FIG. 1 are denoted by the same reference numerals, and the description thereof is omitted here. The program CP2 is the same in configuration as the program CP1. By executing the program CP2, however, the terminal device E2 performs some processing items different from those performed by the terminal device E1. This will be described below with reference to FIGS. 5 and 6.
The input device 103 outputs a signal SLF instructing the end of reception/reproduction of the transport stream STT (hereinafter, referred to as end instruction) in response to an input from the user.
Next, an operation of the terminal device E2 described above is outlined with reference to FIG. 5. In FIG. 5, assuming that a call arrives when the terminal device E2 receives/reproduces the transport stream STT at time t0, the user is prevented from viewing the object content from time t0 until the voice communication is finished. The terminal device E2 therefore stores the transport stream STT received during the voice communication in the content storage section 9 while the user is prevented from viewing the content. Assuming that the voice communication is disconnected at time t1, the terminal device E2 reads the transport stream STT stored in the content storage section 9 from time t1, and reproduces the transport stream STT at n times speed (n is a number satisfying n > 1). Specifically, the portion of the object content missed by the user due to the voice communication is read sequentially from the head thereof. During this time, therefore, the user is prevented from viewing the portion of the object content being broadcast according to the actual broadcast time (the portion after time t1). Therefore, the terminal device E2 continues storing the transport stream STT, that is, the object content being broadcast according to the actual broadcast time after time t1, in the content storage section 9. In this state, as n times speed time-shifted reproduction of the stored content is performed, the time lag between the content being reproduced and the content being broadcast is gradually reduced. Hence, the amount of the transport stream STT to be stored in the content storage section 9 decreases. In other words, n times speed time-shifted reproduction is performed from time t1 to time t2. As will be understood from the above description, by means of the n times speed time-shifted reproduction, the object content is reproduced at n times speed along a time axis different from the actual digital broadcast time.
As a result of the n times speed time-shifted reproduction, the content storage section 9 becomes substantially vacant, and thus the read of the transport stream STT is no longer possible at time t2. When this time t2 is detected, the terminal device E2 terminates the write of the content in the content storage section 9 and the read of the content from the content storage section 9, and instead receives/reproduces the transport stream STT being broadcast from the broadcast station 101 according to the actual broadcast time.
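As a rough illustration (this arithmetic is not part of the disclosed embodiment and ignores skipped portions and storage overhead), the catch-up time can be estimated from t0, t1, and n: the backlog buffered by time t1 is t1 - t0, and playback at n times speed drains it at a net rate of n - 1 while new content keeps arriving, so t2 is approximately t1 + (t1 - t0)/(n - 1).

```python
def catch_up_time(t0: float, t1: float, n: float) -> float:
    """Estimate when n-times-speed time-shifted playback catches up with
    the live broadcast, assuming recording starts at t0 and playback at t1."""
    if n <= 1:
        raise ValueError("catch-up requires n > 1")
    backlog = t1 - t0                  # content buffered during the call
    return t1 + backlog / (n - 1)      # backlog drains at (n - 1) per unit time

# usage: a 6-minute call replayed at 1.5x speed catches up 12 minutes later
print(catch_up_time(0.0, 6.0, 1.5))    # -> 18.0
```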
Referring to the flowchart of FIG. 6, the operation of the terminal device E2 outlined with reference to FIG. 5 will be described in detail. The flowchart of FIG. 6 is the same as that of FIG. 3, except that steps S21 to S27 are included in place of steps S7 to S12. Therefore, in FIG. 6, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here.
If the processor 82 determines that the voice communication is disconnected in step S6, this means that time t1 (see FIG. 5) has been detected. Therefore, to perform the n times speed time-shifted reproduction, the processor 82 generates the control signal CSb and sends the signal to the image switch section 3 and the voice switch section 5, each for switching between its two input lines (step S21). With this control signal CSb, the image switch section 3 is set to the state ready to receive the output of the video decoder 15, and the voice switch section 5 is set to the state ready to receive the output of the audio decoder 16. In addition, for the n times speed time-shifted reproduction, the processor 82 generates the control signal CSc for changing the input of the TS switch section 13 from the tuner 12 side to the control section 8 side and also instructing the video decoder 15 and the audio decoder 16 to perform the n times speed time-shifted reproduction, and sends this signal to the TS switch section 13, the video decoder 15, and the audio decoder 16 (step S22). With this control signal CSc, the TS switch section 13 changes its input lines as described above, and the reproduction speed of the video decoder 15 and the audio decoder 16 is set at n times.
Thereafter, the processor 82 reads the transport stream STT stored in the content storage section 9 and transfers the transport stream STT to the TS switch section 13. During such read processing, the portion of the object content missed by the user due to the voice communication is sequentially read. In other words, the portion of the transport stream STT stored during the voice communication is sequentially read from the head thereof. The demultiplexer 14 demultiplexes the transport stream STT received via the TS switch section 13, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively. The video decoder 15 selects pictures required for the n times speed time-shifted reproduction from the received encoded video signal CSLV, decodes the selected pictures according to MPEG, and reproduces the video signal SLV. The reproduced video signal SLV is output to the display device 4 via the image switch section 3. The audio decoder 16 selects portions required for the n times speed time-shifted reproduction from the received encoded audio signal CSLA, decodes the selected portions according to MPEG, and reproduces the audio signal SLA. The reproduced audio signal SLA is output to the speaker 6 via the voice switch section 5. During such n times speed time-shifted reproduction, the transport stream STT output from the tuner 12 is written in the content storage section 9 under control of the processor 82. In this way, the terminal device E2 performs the n times speed time-shifted reproduction (step S23). As a result of the processing in step S23, the content missed by the user due to the voice communication is displayed from the head thereof on the display device 4 at n times speed, and audio synchronizing with the video is output from the speaker 6.
The processor 82 then determines whether or not there is any part of the transport stream STT which has yet to be reproduced left in the content storage section 9 (step S24). If there is, execution of step S23 is repeated. If no part of the transport stream STT is left, this means that the entire portion of the object content missed by the user due to the voice communication has been reproduced and also that time t2 (see FIG. 5) has been detected. Therefore, to perform reception/reproduction of the content, the processor 82 generates a control signal CSd for changing the input lines of the TS switch section 13 from the control section 8 side to the tuner 12 side and also instructing the video decoder 15 and the audio decoder 16 to perform normal-speed reproduction, and sends this signal to the TS switch section 13, the video decoder 15, and the audio decoder 16 (step S25). With this control signal CSd, the TS switch section 13 changes its input lines as described above, and the reproduction speed of the video decoder 15 and the audio decoder 16 is set at normal speed. Thereafter, the content reproduction section 1 reproduces the object content in the same manner as that in step S1 (step S26). The processor 82 then determines whether or not end instruction SLF has been received (step S27). If not received, it is determined that the user is still viewing the object content, and the processor 82 repeats the execution of step S26. If the end instruction SLF has been received, the processor 82 determines that the user has finished viewing the content and terminates the processing shown in FIG. 6.
By the processing described above, the terminal device E2 writes the transport stream STT in the content storage section 9 from the start point of the voice communication (that is, time t0) until time t2 as shown in FIG. 5. At the end point of the voice communication (that is, time t1), the terminal device E2 starts n times speed time-shifted reproduction, from the head thereof, of the portion of the object content stored in the content storage section 9 and missed by the user. This n times speed time-shifted reproduction is performed from time t1 until time t2. By this processing, the amount of the transport stream STT stored in the content storage section 9 can be reduced, and the recording area for the transport stream STT can be freed up sooner. This enables effective use of the recording area of the content storage section 9.
In the variant described above, the video decoder 15 and the audio decoder 16 select portions required for the n times speed time-shifted reproduction, and reproduce the selected portions. Alternatively, the processor 82 may read only portions required for the n times speed time-shifted reproduction from the transport stream STT stored in the content storage section 9, and transfer these portions to the TS switch section 13. In the variant described above, the processor 82 determines when the n times speed time-shifted reproduction is to be returned to normal reproduction by examining whether or not any part of the transport stream STT is left in the content storage section 9. Alternatively, the n times speed time-shifted reproduction may be returned to the normal reproduction when the difference between the value of a presentation time stamp (PTS) included in the transport stream STT being written and the value of the PTS included in the transport stream STT being read becomes substantially zero. As another case, if portions of the object content that the user will presumably consider unnecessary (typically, commercials) are skipped and only a TV program is reproduced, the content storage section 9 will become substantially vacant. The terminal device E2 may detect this time t2, and then may receive/reproduce the transport stream STT being broadcast from the broadcast station 101 according to the actual broadcast time. In this case, it is preferred not to write the portions that the user will presumably consider unnecessary in the content storage section 9. This can reduce the storage area occupied by the object content in the content storage section 9.
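One way to picture the PTS-based switch-back described above is the following minimal sketch (illustrative only; the function name, the tolerance value, and the representation of PTS values as seconds are assumptions, and real transport streams carry PTS values in 90 kHz units inside PES headers):

```python
CATCH_UP_THRESHOLD = 0.5   # seconds; assumed tolerance for "substantially zero"

def should_return_to_live(pts_being_written: float, pts_being_read: float) -> bool:
    """Return True when the PTS of the packet currently being written and the
    PTS of the packet currently being read are substantially equal, i.e. the
    time-shifted playback has caught up with the live broadcast."""
    return abs(pts_being_written - pts_being_read) < CATCH_UP_THRESHOLD

# usage: playback 0.2 s behind the live stream counts as caught up
print(should_return_to_live(1234.0, 1233.8))   # -> True
print(should_return_to_live(1234.0, 1200.0))   # -> False
```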
FIG. 7 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E3 that is a second variant of the terminal device E1 described above. In FIG. 7, the terminal device E3 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP3 is stored in the program memory 81 and that an image combining section 104 is provided in place of the image switch section 3. In FIG. 7, the same components as those of the terminal device E1 in FIG. 1 are denoted by the same reference numerals, and the description thereof is omitted here.
The program CP3 is the same in configuration as the program CP1. By executing the program CP3, however, the terminal device E3 performs some processing items different from those performed by the terminal device E1. This will be described below with reference to FIGS. 8 and 9.
During voice communication, the image combining section 104 receives the video signal SLV from the video decoder 15 and the image signal SLI generated by the control section 8. The image combining section 104 combines the input video signal SLV and the input image signal SLI for generating a combined image signal SLM, and outputs the combined signal to the display device 4. During reception/reproduction of the transport stream STT, the image combining section 104 outputs the video signal SLV from the video decoder 15 to the display device 4 as it is.
Next, an operation of the terminal device E3 described above is outlined with reference to FIG. 8. In FIG. 8, assume that a call arrives when the terminal device E3 receives/reproduces the transport stream STT at time t0 and that the voice communication is disconnected at time t1. In the embodiment and the variant described above, the user is prevented from viewing the object content during the time period from t0 to t1. In this variant, however, the terminal device E3 generates the combined image signal SLM as described above, and displays the corresponding image during the time period from t0 to t1. By displaying this image, the user can view the object content during the voice communication. In this way, the terminal device E3 can provide more enhanced operability.
Referring to the flowchart of FIG. 9, the operation of the terminal device E3 outlined with reference to FIG. 8 will be described in more detail. The flowchart of FIG. 9 is the same as that of FIG. 3, except that step S31 is included in place of step S4. Therefore, in FIG. 9, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here. If the status notification CSST indicating an incoming call has been received in step S2, the processor 82 starts execution of the program for voice communication processing included in the program CP3. The terminal device E3 then performs processing required for voice communication, and also generates and displays the combined image signal SLM (step S31). The processing required for voice communication is the same as that performed in the embodiment described above. In this variant, therefore, only the generation/display of the combined image signal SLM will be described in detail. The processor 82 generates the image signal SLI on the working area 83 if required, and sends the signal to the image combining section 104. The video signal SLV is also sent to the image combining section 104 from the video decoder 15 as described above. The image combining section 104 combines the input image signal SLI and the input video signal SLV for generating the combined image signal SLM, in which a video of the broadcast content is superimposed on the image used during voice communication. The display device 4, receiving the combined image signal SLM and performing necessary display processing for the received signal, displays the image represented by the image signal SLI and a video of the object content. By the processing described above, the terminal device E3 can output the object content even during voice communication.
Although the terminal device E3 has been described as a variant of the terminal device E1, it may be a variant of the terminal device E2. That is, step S31 described above may be executed in place of step S4 in FIG. 6.
In the variant described above, the combined signal SLM was obtained by combining the object content and the image to be displayed during voice communication. Alternatively, if the transport stream STT includes multiplexed text data representing in characters what is expressed by the voice constituting the content, that is, caption data, the image combining section 104 may generate the combined image signal SLM additionally including the caption as shown in FIG. 10. If the telephony processing section 2 performs processing required for a videophone, the image combining section 104 may generate the combined image signal SLM additionally including an image of the party on the other end of voice communication as shown in FIG. 11. The image combining section 104 may also generate the combined image signal SLM further additionally including an image of the user as shown in FIG. 12.
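As an illustration only (not part of the disclosed embodiment), the superimposition performed by the image combining section 104 can be sketched as follows; frames are represented here as plain 2D lists of pixel values, whereas real hardware would blend decoded image planes, and a caption or a videophone image could be pasted in the same way at another position.

```python
def superimpose(background, overlay, top, left):
    """Minimal sketch of the image combining section 104: paste a reduced
    broadcast-video frame onto the image used during voice communication."""
    combined = [row[:] for row in background]       # copy the call-time image
    for y, row in enumerate(overlay):
        for x, pixel in enumerate(row):
            combined[top + y][left + x] = pixel     # overwrite with video pixels
    return combined

# usage: a 2x2 "video" pasted into the corner of a 4x4 call-time image
call_image = [[0] * 4 for _ in range(4)]
video_frame = [[9, 9], [9, 9]]
for row in superimpose(call_image, video_frame, 0, 2):
    print(row)
```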
FIG. 13 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E4 that is a third variant of the terminal device E1 described above. In FIG. 13, the terminal device E4 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP4 is stored in the program memory 81 and that a mute detection section 105 is additionally included. In FIG. 13, the same components as those of the terminal device E1 in FIG. 1 are denoted by the same reference numerals, and the description thereof is omitted here.
The program CP4 is the same in configuration as the program CP1. By executing the program CP4, however, the terminal device E4 performs some processing items different from those performed by the terminal device E1. This will be described below in detail with reference to FIGS. 14 and 15.
The mute detection section 105 receives the voice signal SLS1 output from the voice decoder 23. The mute detection section 105 typically detects a mute time period BNS, during which the party on the other end of voice communication does not speak, based on the amplitude value of the input voice signal SLS1, generates a timing signal SLT indicating a start or end point of the mute time period, and outputs the timing signal to the control section 8.
Next, an operation of the terminal device E4 described above is outlined with reference to FIG. 14. In FIG. 14, assume that times t0 and t1 are defined as described above. The user is prevented from hearing audio constituting the object content during the time period from t0 to t1 in the embodiment described above. In this variant, however, the terminal device E4 detects the mute time period BNS, during which the party on the other end of voice communication does not speak, based on the voice signal SLS1, and controls the input line of the voice switch section 5 so that the audio signal SLA from the audio decoder 16 is input into the speaker 6 during the detected mute time period BNS. By this processing, the user can hear the audio of the object content during the voice communication if it is in the mute time period BNS. In this way, the terminal device E4 can provide more enhanced operability. The operation of the terminal device E4 outlined above is described in more detail with reference to the flowchart of FIG. 15. The flowchart of FIG. 15 is the same as that of FIG. 3, except that steps S41 to S44 are additionally included. Therefore, in FIG. 15, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here.
After step S5, the processor 82 determines whether or not the timing signal SLT has been received from the mute detection section 105 (step S41). If not received, the processing proceeds to step S6 because no switching of the voice switch section 5 is required. If the timing signal SLT has been received, the processor 82 determines whether or not the signal indicates the end point of a mute time period BNS (step S42). If not, this means that the received timing signal SLT indicates the start point of the mute time period BNS. Therefore, the processor 82 generates a control signal CSa for changing the input line of the voice switch section 5 from the voice decoder 23 to the audio decoder 16, and outputs the generated control signal to the voice switch section 5 (step S43). On the other hand, if the timing signal SLT indicating the end point of the mute time period BNS has been received in step S42, the processor 82 generates a control signal CSe for changing the input line of the voice switch section 5 from the audio decoder 16 to the voice decoder 23, and outputs the generated control signal to the voice switch section 5 (step S44). Once the processing of step S43 or S44 described above is finished, the processor 82 executes step S4 again. By the processing described above, the terminal device E4 can output audio constituting the object content during voice communication if it is in the mute time period BNS. Although the terminal device E4 was described as a variant of the terminal device E1, it may be a variant of the terminal device E2 or E3. That is, steps S41 to S44 described above may be incorporated in the flowchart of FIG. 6 or 9.
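For illustration only, an amplitude-based mute detection of the kind performed by the mute detection section 105 might look like the sketch below. The threshold and the minimum run length are assumed values, not taken from the embodiment.

```python
def mute_events(samples, threshold=0.05, min_silent_samples=3):
    """Minimal sketch of the mute detection section 105: scan the decoded
    voice signal SLS1 (here a list of normalized amplitude values) and yield
    ('start', index) / ('end', index) events corresponding to the timing
    signal SLT."""
    silent_run = 0
    in_mute = False
    for i, amplitude in enumerate(samples):
        if abs(amplitude) < threshold:
            silent_run += 1
            if not in_mute and silent_run >= min_silent_samples:
                in_mute = True
                yield ("start", i - min_silent_samples + 1)   # mute period BNS begins
        else:
            if in_mute:
                yield ("end", i)                              # caller speaks again
            in_mute = False
            silent_run = 0

# usage: a burst of speech, a pause, then speech again
signal = [0.4, 0.3, 0.0, 0.0, 0.0, 0.0, 0.5, 0.6]
print(list(mute_events(signal)))   # -> [('start', 2), ('end', 6)]
```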
FIG. 16 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E5 as a fourth variant of the terminal device E1 described above. In FIG. 16, the terminal device E5 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP5 is stored in the program memory 81 and that a voice switch section 106 and first and second speakers 107 and 108 are provided in place of the voice switch section 5 and the speaker 6. In FIG. 16, therefore, the same components as those of the terminal device E1 in FIG. 1 are denoted by the same reference numerals, and the description thereof is omitted here.
The program CP5 is the same in configuration as the program CP1. By executing the program CP5, however, the terminal device E5 performs some processing items different from those performed by the terminal device E1. This will be described below in detail with reference to FIGS. 17 and 18.
The voice switch section 106 receives the audio signal SLA output from the audio decoder 16 and the voice signal SLS1 output from the voice decoder 23. During reception/reproduction of the transport stream STT, the voice switch section 106 outputs the input audio signal SLA to the first and second speakers 107 and 108. However, during voice communication, the voice switch section 106 outputs the input audio signal SLA to one of the first and second speakers 107 and 108 (the second speaker 108 in FIG. 16), and outputs the input voice signal SLS1 to the other speaker 107 or 108 (the first speaker 107 in FIG. 16). The voice switch section 106 switches its input/output lines in accordance with the control signal CSa or CSb output from the control section 8.
The first and second speakers 107 and 108 are L-side and R-side speakers, respectively, for stereo output.
Next, an operation of the terminal device E5 described above is outlined with reference to FIG. 17. The user is prevented from hearing audio constituting the object content during the time period from t0 to t1 in the embodiment described above. In this variant, however, as shown in FIG. 17, the terminal device E5 controls the voice switch section 106 so that the voice signal SLS1 received from the voice decoder 23 is output from the first speaker 107 and the audio signal SLA received from the audio decoder 16 is output from the second speaker 108. By this processing, the user can hear the audio of the object content even during voice communication. In this way, the terminal device E5 can provide more enhanced operability.
The operation of the terminal device E5 outlined above will be described in detail with reference to the flowchart of FIG. 18. The flowchart of FIG. 18 is the same as that of FIG. 3, except that steps S51 to S53 are included in place of steps S3, S4 and S7. Therefore, in FIG. 18, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here. If status notification CSST indicating an incoming call has been received in step S2, the processor 82 generates the control signal CSa, and sends the signal to the image switch section 3 and the voice switch section 106, each for switching its input lines (step S51). As a result, the image switch section 3 is set to the state ready to receive the output of the control section 8, and the voice switch section 106 is set to the state ready to receive both the outputs of the audio decoder 16 and the voice decoder 23.
The processor 82 then starts execution of the program for voice communication processing included in the program CP5. The terminal device E5 exchanges the multiplexed signal SLS with the base station 102 for voice communication, demultiplexes the encoded voice signal CSLS1 included in the multiplexed signal to reproduce the voice signal SLS1, and thus outputs the speech of the caller. The terminal device E5 also generates the encoded voice signal CSLS2 representing the speech of the user, multiplexes the encoded voice signal, and sends the resultant multiplexed signal SLS to the base station 102 (step S52). More specifically, the wireless communications part 22 switches its function between that of a demultiplexer and that of a multiplexer. The wireless communications part 22, as a demultiplexer, demultiplexes the multiplexed signal SLS input from the antenna 21 to obtain the encoded voice signal CSLS1, and outputs the encoded voice signal to the voice decoder 23. The voice decoder 23 decodes the input encoded voice signal CSLS1, and outputs the decoded voice signal SLS1 to one of the first and second speakers 107 and 108 via the voice switch section 106. At the same time, the audio decoder 16 outputs the reproduced audio signal SLA to the other speaker 107 or 108 via the voice switch section 106. By the processing described above, the speech of the caller and the audio of the content are output from the speakers 107 and 108.
The voice encoder 25 encodes the voice signal SLS2 from the voice input part 24, and outputs the encoded voice signal CSLS2 to the wireless communications part 22. The wireless communications part 22, as a multiplexer, multiplexes the input encoded voice signal CSLS2 and sends the resultant multiplexed signal SLS to the base station 102 via the antenna 21.
The processor 82 generates the image signal SLI on the working area 83 if required, and sends the signal to the display device 4 via the image switch section 3. By this processing, an image represented by the image signal SLI is displayed on the display device 4.
If it is determined that the voice communication has been disconnected in step S6, the processor 82 generates the control signal CSb and sends the signal to both the image switch section 3 and the voice switch section 106 for switching their input lines (step S53). As a result, the image switch section 3 is set to the state ready to receive the output of the video decoder 15, and the voice switch section 106 is set to the state ready to receive the output of the audio decoder 16. By the processing described above, the operation outlined with reference to FIG. 17 is obtained. That is, the terminal device E5 can output audio constituting the object content even during voice communication.
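As an illustration only (not part of the disclosed embodiment), the routing performed by the voice switch section 106 can be sketched as follows; the function name and the sample representation are hypothetical.

```python
def route_audio(content_samples, voice_samples, in_call):
    """Minimal sketch of the voice switch section 106: build (left, right)
    stereo pairs. Outside a call the content audio SLA goes to both speakers;
    during a call the caller's voice SLS1 goes to the first (L) speaker 107
    and the content audio to the second (R) speaker 108."""
    stereo = []
    for content, voice in zip(content_samples, voice_samples):
        if in_call:
            stereo.append((voice, content))    # L: caller's speech, R: content audio
        else:
            stereo.append((content, content))  # normal reproduction on both sides
    return stereo

# usage: two samples of content audio and caller speech, during a call
print(route_audio([0.2, 0.3], [0.7, 0.1], in_call=True))
# -> [(0.7, 0.2), (0.1, 0.3)]
```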
Although the terminal device E5 has been described as a variant of the terminal device E1, it may be a variant of the terminal device E2 or E3. That is, steps S51 to S53 described above may be incorporated in the flowchart of FIG. 6 or 9.
FIG. 19 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E6 as a fifth variant of the terminal device E1 described above. In FIG. 19, the terminal device E6 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP6 is stored in the program memory 81 and that an input device 109 and a preselection storage section 110 are additionally provided. In FIG. 19, therefore, the same components as those of the terminal device E1 in FIG. 1 are denoted by the same reference numerals, and the description thereof is omitted here.

The program CP6 is the same in configuration as the program CP1. By executing the program CP6, however, the terminal device E6 performs some processing items different from those performed by the terminal device E1. This will be described below with reference to FIGS. 20 and 21. The input device 109 outputs a signal SLR indicating the channel and the broadcast start time of a content the user wants to view in the future (hereinafter, referred to as preselection information) to the control section 8 in response to the input from the user.

Next, an operation of the terminal device E6 described above is outlined with reference to FIG. 20. Preselection information SLR generated according to the user's input with the input device 109 is stored in the preselection storage section 110 of the terminal device E6. In FIG. 20, assume that the user is still engaged in voice communication using the terminal device E6 when broadcast of a content specified by the preselection information SLR (hereinafter, referred to as an object content) starts at time t0. The user is prevented from viewing the object content from time t0 until the voice communication is finished. Note that the content as used in this variant has the same definition as that described in the above embodiment. During the time period for which the user misses the content, the terminal device E6 stores the received transport stream STT in the content storage section 9. Assuming that the voice communication is finished and disconnected at time t1, the terminal device E6 reproduces the received transport stream STT at time t2, which is after time t1. In this way, the user can view the portion of the content broadcast during the voice communication.
The operation outlined above with reference to FIG. 20 is described in detail with reference to the flowchart of FIG. 21. In FIG. 21, the user inputs the channel and the broadcast start time of the object content with the input device 109 of the terminal device E6. In response to this input, the input device 109 generates preselection information SLR indicating the input information. The generated preselection information SLR is stored in the preselection storage section 110 (step S61).
When a control signal CSM indicating an incoming call is received by the telephony processing section 2, the processor 82 receives the status notification CSST from the status detection section 7 and, in response to this notification, executes the program for voice communication processing included in the program CP6. That is, the image switch section 3 is set to the state ready to receive the output of the control section 8, and the voice switch section 5 is set to the state ready to receive the output of the voice decoder 23. The terminal device E6 then exchanges the multiplexed signal SLS with the base station 102 for voice communication as in step S4 described above (step S62). Thereafter, the processor 82 accesses the preselection storage section 110 and determines whether or not the time of broadcast of the object content designated by the preselection information SLR has come (step S63). If not, indicating that write of the transport stream STT is unnecessary, execution of step S62 is repeated. If the start time of the object program has come, the tuner 12 is set to receive the preselected channel under the control of the processor 82, and the processor 82 writes the transport stream STT output from the tuner 12 in the content storage section 9, as in step S5 described above (step S64). Subsequent to the write operation, the processor 82 determines whether or not the status notification CSST indicating disconnection of the voice communication has been received (step S65). If not received, indicating that reproduction of the transport stream STT stored in the content storage section 9 is unnecessary, execution of step S62 is repeated. If the status notification CSST indicating disconnection of the voice communication has been received, the processor 82 performs necessary processing in response to this notification and then terminates the write of the content in the content storage section 9. At the same time, the processor 82 generates the control signal CSa and sends the signal to the image switch section 3 and the voice switch section 5, each for switching between its two input lines (step S66).
As a result of the switching of the input lines in step S66, a video constituting the object content is displayed on the display device 4, and audio synchronizing with the video is output from the speaker 6, as in step S1 described above (step S67). Thereafter, the processor 82 determines whether or not the start instruction SLP has been received from the input device 10 (step S68). If not received, indicating that read of the transport stream STT from the content storage section 9 is unnecessary, execution of step S67 is repeated. If the start instruction SLP has been received, the processor 82 generates the control signal CSc for changing the input of the TS switch section 13 from the tuner 12 side to the control section 8 side, and sends the signal to the TS switch section 13 (step S69). By this step S69, the TS switch section 13 changes its input accordingly.
The processor 82 reads the transport stream STT stored in the content storage section 9 and transfers the transport stream to the TS switch section 13. The demultiplexer 14 demultiplexes the transport stream STT transferred via the TS switch section 13, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively. The video decoder 15 and the audio decoder 16 operate as in step S1 for reproducing the video signal SLV and the audio signal SLA (step S610). As a result, a video constituting the object content missed by the user during the voice communication is displayed on the display device 4, and audio synchronizing with the video is output from the speaker 6.
As in step S11, the processor 82 determines whether or not there is any part of the transport stream STT which has yet to be reproduced left in the content storage section 9 (step S611). If there is, execution of step S610 is repeated. If no part of the transport stream STT is left, this means that the user has viewed all of the portion of the content missed due to the voice communication. Therefore, the processor 82 terminates the processing shown in FIG. 21.
By the processing described above, the operation outlined with reference to FIG. 20 is obtained. That is, during voice communication, the terminal device E6 writes the transport stream STT in the content storage section 9 once broadcast of the object program is started. After the voice communication is finished and after the reception of the transport stream STT from the broadcast station 101 is finished, reproduction of the transport stream STT stored in the content storage section 9 is started. In this way, it is possible to provide a communications terminal device capable of outputting the portion of the content missed by the user due to the voice communication at a time shifted from the actual broadcast time.
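For illustration only (not part of the disclosed embodiment), the step S63 check against the stored preselection information can be sketched as follows; the representation of SLR as a (channel, start time) pair is an assumption.

```python
from datetime import datetime

def preselection_due(preselection, now):
    """Minimal sketch of the step S63 check: the preselection information SLR
    is assumed to be a (channel, broadcast_start_time) pair stored in the
    preselection storage section 110; recording starts once the start time
    has come."""
    channel, start_time = preselection
    return now >= start_time

# usage: a program preselected on channel 5 starting at 21:00
slr = (5, datetime(2003, 4, 1, 21, 0))
print(preselection_due(slr, datetime(2003, 4, 1, 20, 59)))  # -> False
print(preselection_due(slr, datetime(2003, 4, 1, 21, 0)))   # -> True
```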
Although the terminal device E6 was described as a variant of the terminal device E1, it may be a variant of any of the terminal devices E2 to E5. Otherwise, the terminal device E6 may be combined with any of the terminal devices E1 to E5.
FIG. 22 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E7 as a sixth variant of the terminal device E1 described above. In FIG. 22, the terminal device E7 has the same construction as the terminal device E1, except that a computer program (hereinafter, referred to as a program for simplification) CP7 is stored in the program memory 81 and that the input device 10 is unnecessary. In FIG. 22, therefore, the same components as those of the terminal device E1 in FIG. 1 are denoted by the same reference numerals, and the description thereof is omitted here.
The program CP7 is the same in configuration as the program CP1. By executing the program CP7, the terminal device E7 performs some processing items different from those performed by the terminal device E1. This will be described below with reference to FIGS. 23 and 24.
Next, an operation of the terminal device E7 described above is outlined with reference to FIG. 23. In FIG. 23, the user is prevented from viewing the object content during the time period from t0 to t1 due to voice communication, as in the cases described above. The terminal device E7 writes the received transport stream STT in the content storage section 9 after time t1 until at least time t2, at which the broadcast of the object content is finished. After disconnection of the voice communication (time t1), the terminal device E7 reads the transport stream STT stored in the content storage section 9 and reproduces it at the normal speed. This read is performed sequentially from the head of the portion of the object content missed by the user due to the voice communication, and the read portion is displayed for viewing by the user. Therefore, the terminal device E7 displays the object content stored in the content storage section 9 for the user from time t1 until time t3, at which read of the object content from the content storage section 9 is finished. In other words, after the voice communication, the terminal device E7 performs time-shifted reproduction of the content along a time axis shifted from the actual broadcast time by time (t1 - t0).
The operation outlined above with reference to FIG. 23 will be described in detail with reference to the flowchart of FIG. 24. The flowchart of FIG. 24 is the same as that of FIG. 3, except that steps S71 to S74 are included in place of steps S7 to S12. Therefore, in FIG. 24, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here.
If it is determined that the voice communication has been disconnected in step S6, meaning that time t1 (see FIG. 23) has been detected, the processor 82 generates and sends the control signal CSb described above in relation to step S21 in FIG. 6 (step S71). With this control signal, the image switch section 3 and the voice switch section 5 are set to the respective states described above in relation to step S21. The processor 82 also changes the input line of the TS switch section 13 to the control section 8 side by sending the control signal CSc to the TS switch section 13 (step S72). As described above in relation to step S22 in FIG. 6, the control signal CSc is also sent to the video decoder 15 and the audio decoder 16 to set the reproduction speed thereof at the normal speed.
The processor 82 then reads the transport stream STT stored in the content storage section 9 and transfers the stream to the TS switch section 13. By this transfer, the portion of the object content missed by the user due to the voice communication is sequentially read from the head thereof. The demultiplexer 14 demultiplexes the transport stream STT transferred via the TS switch section 13, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively. The video decoder 15 reproduces the video signal SLV, which is output to the display device 4 via the image switch section 3. The audio decoder 16 reproduces the audio signal SLA, which is output to the speaker 6 via the voice switch section 5. At the same time, the transport stream STT output from the tuner 12 continues to be written in the content storage section 9 under the control of the processor 82. In this way, the terminal device E7 performs time-shifted reproduction (step S73). By the processing in step S73, an image of the content missed by the user due to the voice communication is displayed from the head thereof on the display device 4 at the normal speed, while audio synchronizing with the video is output from the speaker 6.
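The simultaneous write and read of step S73 can be pictured, purely as an illustration and under simplified assumptions, as a first-in first-out buffer that is read from its head while the newly received stream keeps being appended:

```python
from collections import deque

def time_shifted_playback(broadcast, call_end_index):
    """Minimal sketch of the FIG. 24 flow: packets arriving before the call
    ends (index < call_end_index) are only buffered; from the end of the call
    onward, each tick both writes the newly received packet and reads the
    oldest buffered packet, so reproduction runs shifted by the call length."""
    buffer = deque()               # stands in for the content storage section 9
    played = []
    for i, packet in enumerate(broadcast):
        buffer.append(packet)      # the tuner output keeps being written
        if i >= call_end_index:
            played.append(buffer.popleft())   # step S73: read from the head
    while buffer:                  # drain what remains after the broadcast ends
        played.append(buffer.popleft())
    return played

# usage: a call spanning the first three packets delays playback by three slots
print(time_shifted_playback(list(range(8)), call_end_index=3))
# -> [0, 1, 2, 3, 4, 5, 6, 7]  (same order, but started three slots late)
```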
Thereafter, the processor 82 determines whether or not the time-shifted reproduction is finished (step S74). If not finished, execution of step S73 is repeated. If the time-shifted reproduction is finished, the processing shown in FIG. 24 is terminated.
By the processing described above, the operation outlined with reference to FIG. 23 is obtained. That is, the terminal device E7 writes the transport stream STT in the content storage section 9 from the start point of voice communication (that is, time t0). From the end point of the voice communication (that is, time t1), the terminal device E7 starts time-shifted reproduction of the portion of the object content missed by the user and stored in the content storage section 9, from the head of the portion. In this way, it is possible to provide a communications terminal device capable of outputting the portion of the content missed by the user due to the voice communication at a time shifted from the actual broadcast time.

FIG. 25 is a block diagram showing the construction of a mobile communications terminal device (hereinafter, referred to as a terminal device for simplification) E8 as a seventh variant of the terminal device E1 described above. In FIG. 25, the terminal device E8 has the same construction as the terminal device E1, except that a demultiplexer 120 is included in place of the demultiplexer 14 and that a computer program (hereinafter, referred to as a program for simplification) CP8 is stored in the program memory 81. In FIG. 25, therefore, the same components as those of the terminal device E1 in FIG. 1 are denoted by the same reference numerals, and the description thereof is omitted here.
The demultiplexer 120 demultiplexes the transport stream STT output from the TS switch section 13, and outputs the resultant encoded video signal CSLV and encoded audio signal CSLA to the video decoder 15 and the audio decoder 16, respectively, as does the demultiplexer 14. By this demultiplexing, the demultiplexer 120 also obtains a program map table (PMT) including at least the broadcast end time of the content being received, and sends the PMT to the processor 82.
The program CP8 is the same in configuration as the program CP1. By executing the program CP8, however, the terminal device E8 performs some processing items different from those performed by the terminal device E1. This will be described below with reference to FIGS. 26 and 27.
Next, an operation of the terminal device E8 described above is outlined with reference to FIG. 26. In FIG. 26, the user has voice communication during the time period from t0 to t1, as in the above cases. If the object content finishes at time t2, which is between time t0 and time t1, the processor 82 terminates the write of the object content in the content storage section 9. The operation of the terminal device E8 outlined above with reference to FIG. 26 will be described in more detail with reference to the flowchart of FIG. 27. The flowchart of FIG. 27 is the same as that of FIG. 3, except that steps S81 to S83 are additionally included. Therefore, in FIG. 27, the same steps as those in FIG. 3 are denoted by the same step numbers, and the description thereof is omitted here.
After step S4, the processor 82 determines whether or not the write terminating processing has been executed in step S83 (step S81). If not executed, the processor 82 proceeds to step S5. If the write terminating processing has been executed, the write of the received transport stream STT in the content storage section 9 is no longer necessary. Therefore, the processor 82 skips step S5 and some of the following steps, and proceeds to step S6.
After step S5, the processor 82 determines whether or not the broadcast end time of the object content has come, based on the PMT sent from the demultiplexer 120 (step S82). If determining that the broadcast end time has not come, the processor 82 proceeds to step S6. If it is determined that the broadcast end time has come, the processor 82 terminates the write of the object content in the content storage section 9 (step S83). In other words, the processor 82 discards the transport stream STT sent from the tuner 12, without transferring it to the content storage section 9. The processor 82 then performs step S6. By the processing described above, the write of the object content in the content storage section 9 is terminated upon the end of the broadcast of the object content. This enables recording/reproduction suitable for a content storage section 9 whose recording capacity is small.
Although the terminal device E8 was described as a variant of the terminal device E1, it may be a variant of any of the terminal devices E2 to E7.
When a switch (not shown) is provided between the tuner 12 and the TS switch section 13, the processor 82 may control this switch to block the transport stream STT output from the tuner 12 from being input into the control section 8 in step S83.
In this variant, the processor 82 determines whether or not the end time of the object content has come based on the PMT. If an electronic program guide (EPG) is obtainable, the processor 82 may instead determine whether or not the end time of the object content has come based on the obtained EPG.
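Purely as a sketch, and assuming the obtained EPG can be reduced to a small table keyed by program number, an end-time lookup might look as follows; the structure and field names are invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical EPG entry; a real electronic program guide carries far
 * more fields, but only the end time matters for this variant. */
typedef struct {
    uint16_t program_number;
    uint32_t end_time;           /* assumed encoding: seconds since midnight */
} epg_entry_t;

/* Look up the end time of the object content in an obtained EPG table.
 * Returns 0 when the program is not listed. */
static uint32_t epg_end_time(const epg_entry_t *epg, int n, uint16_t program)
{
    for (int i = 0; i < n; ++i)
        if (epg[i].program_number == program)
            return epg[i].end_time;
    return 0;
}

int main(void)
{
    epg_entry_t epg[] = { { 1, 19 * 3600 }, { 2, 20 * 3600 } };
    printf("object content end time: %u\n", (unsigned)epg_end_time(epg, 2, 2));
    return 0;
}
```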
In the embodiment and the variants described above, the processor 82 may detect the remaining recording capacity of the content storage section 9, determine the bit rate for the write of the object content based on the detected remaining capacity, and then store the object content in the content storage section 9 according to the determined bit rate.
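The disclosure does not fix a particular sizing rule, so the following is only a hedged sketch of one possible policy: choose the highest supported bit rate that still fits the detected remaining capacity over the time left in the broadcast. The candidate rates and the fitting rule are assumptions.

```c
#include <stdint.h>
#include <stdio.h>

/* Pick a write bit rate from the remaining capacity of the content
 * storage section 9 and the time left until the broadcast end. */
static uint32_t choose_bit_rate(uint64_t free_bytes, uint32_t seconds_left)
{
    static const uint32_t rates_bps[] = { 2000000, 1000000, 384000, 128000 };
    const unsigned n = sizeof(rates_bps) / sizeof(rates_bps[0]);

    if (seconds_left == 0)
        return rates_bps[0];                     /* nothing left to fit */

    uint64_t budget_bps = free_bytes * 8ULL / seconds_left;

    for (unsigned i = 0; i < n; ++i)
        if (rates_bps[i] <= budget_bps)
            return rates_bps[i];                 /* highest rate that still fits */

    return rates_bps[n - 1];                     /* fall back to the lowest rate */
}

int main(void)
{
    /* e.g. about 90 MB free with 20 minutes of the object content remaining */
    printf("chosen bit rate: %u bps\n",
           (unsigned)choose_bit_rate(90ULL << 20, 20 * 60));
    return 0;
}
```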
In the embodiment and the variants described above, a program stream converted from the transport stream STT may be stored in the content storage section 9. Alternatively, an MPEG-4-encoded transport stream STT may be stored in the content storage section 9.
While the invention has been described in detail, the foregoing description is in all aspects illustrative and not restrictive. It is understood that numerous other modifications and variations can be devised without departing from the scope of the invention.
INDUSTRIAL APPLICABILITY
The communications terminal device of the present invention is applicable to a digital device such as a cellular phone, a personal digital assistant (PDA), and a personal computer (PC) capable of incorporating the content receiving function and the voice communication function.

Claims

1. A communications terminal device comprising: a reproduction section operable to receive and reproduce a content transmitted from an external source; a telephony processing section operable to receive and reproduce at least voice of a party on the other end of voice communication; a status detection section operable to detect a status change of voice communication; a storage section operable to store the content received by the reproduction section; a write section operable to write the content received by the reproduction section in the storage section while the status detection section detects a status change of voice communication; and a read section operable to read the content stored by the storage section, wherein the reproduction section is further operable to reproduce the content read by the read section.
2. The communications terminal device according to claim 1, wherein the reproduction section receives a program composed of a video and audio from a remote broadcast station as a content.
3. The communications terminal device according to claim 1, wherein the read section is operable to start read of the content stored in the storage section while the status detection section detects a next status change of voice communication.
4. The communications terminal device according to claim 1, wherein the status detection section is operable to detect an incoming call at the telephony processing section as a start point of voice communication.
5. The communications terminal device according to claim 3, wherein the status detection section is operable to detect that the telephony processing section has disconnected voice communication.
6. The communications terminal device according to claim 1, wherein the status detection section is operable to detect that the telephony processing section has entered an off-hook state as a start point of voice communication.
7. The communications terminal device according to claim 3, wherein the status detection section is operable to detect that the telephony processing section has entered an on-hook state as an end point of voice communication.
8. The communications terminal device according to claim 1, wherein the telephony processing section is operable to receive and reproduce an image of the party on the other end of voice communication.
9. The communications terminal device according to claim 3, wherein the reproduction section is operable to reproduce the content read by the read section at n times speed (n is a positive number satisfying n > 1), and also is operable to receive and reproduce the content transmitted from the external source when the read by the read section is completed.
10. The communications terminal device according to claim 1, further comprising: an image generation section operable to generate image information relating to voice communication; and an image combining section operable to generate combined image information by combining the content received by the reproduction section and the image information generated by the image generation section while the status detection section detects a status change of voice communication.
11. The communications terminal device according to claim 10, wherein the reproduction section is further operable to receive text data relating to the content, and the image combining section is operable to generate the combined image information to which the received text data is additionally included.
12. The communications terminal device according to claim 10, wherein the image combining section is operable to generate the combined image information to which an image of the party on the other end of the voice communication is additionally included.
13. The communications terminal device according to claim 10, wherein the telephony processing section is further operable to capture an image of the user side, and the image combining section is operable to generate the combined image information to which the captured image of the user is additionally included.
14. The communications terminal device according to claim 2, wherein the reproduction section is operable to reproduce at least audio constituting the received content, the communications terminal device further comprises: a mute detection section operable to detect a mute time period of voice communication; and a voice switch section operable to output the audio reproduced by the reproduction section during the mute time period detected by the mute detection section, and the voice switch section is further operable to output a voice signal reproduced by the telephony processing section when the mute detection section detects no mute time period.
15. The communications terminal device according to claim 2, further comprising first and second speakers operable to output the audio reproduced by the reproduction section and the voice reproduced by the telephony processing section while a status change of voice communication is detected by the status detection section.
16. The communications terminal device according to claim 3, further comprising a start detection section operable to detect a predetermined content transmission start time, wherein the write section is further operable to store the content received by the reproduction section while a transmission status change is detected by the start detection section.
17. The communications terminal device according to claim 3, wherein for time-shifted reproduction, the read section is further operable to read the content stored in the storage section from a head thereof during the progress of writing of the content in the storage section by the write section.
18. The communications terminal device according to claim 1, further comprising: an end time determination section operable to determine an end time of the content received by the reproduction section and currently being written in the storage section; and a write terminating section operable to terminate the write of the content in the storage section when the end time determination section determines that the end time has passed.
19. The communications terminal device according to claim 1, further comprising a remaining capacity detection section operable to detect the remaining recording capacity of the storage section, wherein the write section is further operable to determine a bit rate based on the remaining capacity detected by the remaining capacity detection section, and write the content received by the reproduction section based on the determined bit rate.
20. A computer program for providing a function of broadcast reception and a function of voice communication to a computer, comprising the steps of: receiving and reproducing a content transmitted from an external source; receiving and reproducing at least voice of a party on the other end of voice communication; detecting a status change time point of the voice communication; writing the content received in the step of receiving and reproducing a content while a status change of voice communication is detected in the step of detecting; and reading the content written in the step of writing, wherein the step of receiving and reproducing a content is further operable to reproduce the content read in the step of reading.
21. A computer program according to claim 20, wherein the computer program is recorded in a recording medium.
PCT/JP2003/004153 2002-04-05 2003-04-01 Communications terminal device for receiving and recording content, starting recording when a chance of status in a voice communication is detected WO2003085938A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
KR1020037016115A KR100921303B1 (en) 2002-04-05 2003-04-01 Communications terminal device allowing content reception and voice communication
EP03745884.1A EP1407601B1 (en) 2002-04-05 2003-04-01 Communications terminal device allowing content reception and voice communication

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2002104067 2002-04-05
JP2002/104067 2002-04-05

Publications (2)

Publication Number Publication Date
WO2003085938A2 true WO2003085938A2 (en) 2003-10-16
WO2003085938A3 WO2003085938A3 (en) 2004-01-15

Family

ID=28786328

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2003/004153 WO2003085938A2 (en) 2002-04-05 2003-04-01 Communications terminal device for receiving and recording content, starting recording when a chance of status in a voice communication is detected

Country Status (6)

Country Link
US (1) US7221903B2 (en)
EP (1) EP1407601B1 (en)
JP (1) JP4528845B2 (en)
KR (1) KR100921303B1 (en)
CN (1) CN1320819C (en)
WO (1) WO2003085938A2 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102004026577A1 (en) * 2004-06-01 2005-12-29 Infineon Technologies Ag Controlling communications device involves determining if communications request to be processed/generated or not, temporarily storing data stream, at least partly displaying it with time offset after processing/generation of request
EP1624679A1 (en) * 2004-08-03 2006-02-08 Nagra France Sarl Device and method for controlling audio/video devices when a phone call is detected
EP1665562A1 (en) * 2003-08-29 2006-06-07 Varovision Co., Ltd. Contents providing system and mobile communication terminal therefor
EP1801803A1 (en) * 2005-12-21 2007-06-27 Advanced Digital Broadcast S.A. Audio/video device with replay function and method for handling replay function
WO2007074959A1 (en) * 2005-12-26 2007-07-05 Kt Corporation System for providing share of contents based on packet network in voice comunication based on circuit network

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100678204B1 (en) * 2002-09-17 2007-02-01 삼성전자주식회사 Device and method for displaying data and television signal according to mode in mobile terminal
JP4073819B2 (en) * 2003-04-10 2008-04-09 エボリウム・エス・アー・エス Push-type distribution method of video information to mobile phones
KR100514685B1 (en) * 2003-05-13 2005-09-13 주식회사 팬택앤큐리텔 Handset for embodying function of time shift and method thereof
US20070265031A1 (en) * 2003-10-22 2007-11-15 Sandy Electric Co., Ltd. Mobile Phone, Display Method, and Computer Program
US20050245240A1 (en) * 2004-04-30 2005-11-03 Senaka Balasuriya Apparatus and method for storing media during interruption of a media session
CN1943227B (en) 2004-06-02 2011-08-03 松下电器产业株式会社 Mobile terminal device, control method thereof, program, and semiconductor device
WO2006001481A1 (en) * 2004-06-29 2006-01-05 Kyocera Corporation Digital broadcast receiving apparatus
KR100595708B1 (en) * 2004-12-30 2006-07-20 엘지전자 주식회사 Apparatus and method for pause function of broadcasting streaming in mobile communication terminal
KR100686157B1 (en) * 2005-05-04 2007-02-26 엘지전자 주식회사 A mobile terminal having a digital multimedia data recording function and the recording method thereof
KR100557185B1 (en) * 2005-05-17 2006-03-03 삼성전자주식회사 Digital multimedia broadcasting receiving mobile terminal and concurrent call method for accomplishing simultaneously digital multimedia broadcasting receiving with call
JP2006345251A (en) * 2005-06-09 2006-12-21 Kyocera Corp Radio communication terminal and communication method thereof
KR100743035B1 (en) 2005-07-18 2007-07-26 엘지전자 주식회사 Mobile phone and method for forwarding broadcasting by video telephony
US20070033617A1 (en) * 2005-08-08 2007-02-08 Sony Ericsson Mobile Communications Ab Redirecting broadcast signals for recording programming
KR100792983B1 (en) * 2005-10-11 2008-01-08 엘지전자 주식회사 Method for processing digital broadcasting data
KR100656159B1 (en) * 2005-10-17 2006-12-13 삼성전자주식회사 Optical disc reproducing method
KR100781266B1 (en) * 2005-10-20 2007-11-30 엘지전자 주식회사 Communicating device and processing method
KR100702705B1 (en) * 2005-11-28 2007-04-02 (주)케이티에프테크놀로지스 Multiband-multimode portable terminal and multitasking method for multiband-multimode portable terminal
KR100757231B1 (en) * 2006-06-08 2007-09-10 삼성전자주식회사 Method and apparatus for simultaneous watching of multi scene plural channel broadcasting in dmb mobile phone
KR100799670B1 (en) * 2006-06-30 2008-01-30 삼성전자주식회사 Method and apparatus for screen partition as receiving broadcast signal with a mobile terminal
KR101086423B1 (en) * 2007-01-12 2011-11-25 삼성전자주식회사 Method of improving channel switching speed in a digital TV receiver and digital TV receiver thereof
KR101448616B1 (en) * 2007-04-05 2014-10-10 엘지전자 주식회사 A method of controlling a broadcasting outputting in mobile communication terminal and the mobile communication terminal
DE102007060053A1 (en) * 2007-12-13 2009-06-18 Robert Bosch Gmbh Device and method for controlling an acoustic reproduction of at least two audio signals
KR101919787B1 (en) * 2012-05-09 2018-11-19 엘지전자 주식회사 Mobile terminal and method for controlling thereof
US8971690B2 (en) * 2012-12-07 2015-03-03 Intel Corporation Technique to coordinate activities between a content device and a wireless device based on context awareness
CN105721088A (en) * 2016-03-02 2016-06-29 浙江吉利控股集团有限公司 Radioing method, radioing device and vehicular system
JP2018084878A (en) * 2016-11-21 2018-05-31 ソニー株式会社 Information processing device, information processing method, and program
US11330034B1 (en) * 2020-10-27 2022-05-10 T-Mobile Usa, Inc. Data disruption tracking for wireless networks, such as IMS networks

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5241428A (en) * 1991-03-12 1993-08-31 Goldwasser Eric P Variable-delay video recorder
JPH0730872A (en) * 1993-07-08 1995-01-31 Casio Comput Co Ltd Video telephone system with video broadcast receiver
JPH0730622A (en) * 1993-07-12 1995-01-31 Casio Comput Co Ltd Telephone set
US6002832A (en) * 1995-02-09 1999-12-14 Matsushita Electric Industrial Co., Ltd. Apparatus and method for recording and reproducing data
US6005564A (en) * 1996-12-05 1999-12-21 Interval Research Corporation Display pause with elastic playback
US6229810B1 (en) * 1997-12-31 2001-05-08 At&T Corp Network server platform for a hybrid fiber twisted pair local loop network service architecture
JPH11252471A (en) 1998-03-03 1999-09-17 Matsushita Electric Ind Co Ltd Center device and terminal equipment for broadcasting program and program information
US6400804B1 (en) * 1998-12-10 2002-06-04 At&T Corp. On-hold activity selection apparatus and method
JP4350185B2 (en) * 1998-12-25 2009-10-21 キヤノン株式会社 Display device with videophone function, control method for display device with videophone function, and storage medium
US20010038690A1 (en) * 1999-12-30 2001-11-08 Douglas Palmer Method and apparatus for management and synchronization of telephony services with video services over an HFC network
US20020040475A1 (en) * 2000-03-23 2002-04-04 Adrian Yap DVR system
US20010036254A1 (en) * 2000-04-25 2001-11-01 Robert Davis DVR Telephone answering device
KR20010097757A (en) * 2000-04-26 2001-11-08 윤종용 System for servicing multimedia by wireless mobile telecommunication terminal
JP2001333334A (en) 2000-05-22 2001-11-30 Matsushita Electric Ind Co Ltd Television receiver
US6768722B1 (en) * 2000-06-23 2004-07-27 At&T Corp. Systems and methods for managing multiple communications
GB2364211A (en) * 2000-06-30 2002-01-16 Nokia Oy Ab A terminal comprising two receivers for receiving an encrypted first signal from a first network and a decryption second signal from a second network
DE60235651D1 (en) * 2002-03-27 2010-04-22 Mitsubishi Electric Corp COMMUNICATION DEVICE AND COMMUNICATION PROCESS

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5706388A (en) * 1993-10-29 1998-01-06 Ricoh Company, Ltd. Recording system recording received information on a recording medium while reproducing received information previously recorded on the recording medium
GB2343074A (en) * 1998-10-23 2000-04-26 Sony Uk Ltd Concurrent recording and playback of broadcast material

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1665562A1 (en) * 2003-08-29 2006-06-07 Varovision Co., Ltd. Contents providing system and mobile communication terminal therefor
EP1665562A4 (en) * 2003-08-29 2008-09-03 Varovision Co Ltd Contents providing system and mobile communication terminal therefor
DE102004026577A1 (en) * 2004-06-01 2005-12-29 Infineon Technologies Ag Controlling communications device involves determining if communications request to be processed/generated or not, temporarily storing data stream, at least partly displaying it with time offset after processing/generation of request
EP1624679A1 (en) * 2004-08-03 2006-02-08 Nagra France Sarl Device and method for controlling audio/video devices when a phone call is detected
EP1801803A1 (en) * 2005-12-21 2007-06-27 Advanced Digital Broadcast S.A. Audio/video device with replay function and method for handling replay function
US7885519B2 (en) 2005-12-21 2011-02-08 Advanced Digital Broadcast S.A. Audio/video device with replay function and method for handling replay function
WO2007074959A1 (en) * 2005-12-26 2007-07-05 Kt Corporation System for providing share of contents based on packet network in voice comunication based on circuit network

Also Published As

Publication number Publication date
JP4528845B2 (en) 2010-08-25
CN1602629A (en) 2005-03-30
US20040204020A1 (en) 2004-10-14
US7221903B2 (en) 2007-05-22
WO2003085938A3 (en) 2004-01-15
KR100921303B1 (en) 2009-10-09
CN1320819C (en) 2007-06-06
EP1407601A2 (en) 2004-04-14
JP2008206189A (en) 2008-09-04
KR20040091525A (en) 2004-10-28
EP1407601B1 (en) 2018-10-10

Similar Documents

Publication Publication Date Title
EP1407601B1 (en) Communications terminal device allowing content reception and voice communication
CN102123232B (en) Digital broadcast receiving apparatus
KR20050083086A (en) Method and device for outputting data of wireless terminal to external device
WO2002049362A1 (en) Broadcast viewing method, broadcast transmitting server, mobile terminal, and multi-location calling/broadcast control viewing apparatus
JP4137520B2 (en) Mobile device
JPH0528190U (en) External remote control compatible image providing device
US7937074B2 (en) Information terminal, and event notifying method
US20120194633A1 (en) Digital Broadcast Receiver
CN101779458B (en) Video distribution device and video distribution program
JP4368125B2 (en) Communication terminal device capable of content reception and voice call
JP2005064592A (en) Portable communication terminal
US5892537A (en) Audio-visual telecommunications unit designed to form a videophone terminal
US20070040897A1 (en) Video communication apparatus and video communication method
JP4525548B2 (en) Mobile terminal device, broadcast data receiving system, and broadcast data output method
CN1977460B (en) Digital broadcast receiver
JPH11102195A (en) Karaoke-video communication terminal and its use method
JP5267214B2 (en) Synchronous recording / reproducing apparatus, synchronous recording / reproducing method, synchronous recording / reproducing system, and communication terminal apparatus
JP3627435B2 (en) Information communication terminal and moving picture data transmission method
JP2005117149A (en) Mobile telephone
KR100223590B1 (en) Multi-function tv
EP1286540A1 (en) Consumer electronics appliances with shared memory
JP2831013B2 (en) Videophone equipment
KR100499521B1 (en) Apparatus and method for VOD service of TV system using the personal communication device
JP2003348222A (en) Portable telephone set with video telephone function
JPH09107530A (en) Transmitter for isdb and its receiver

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): CN KR

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR

WWE Wipo information: entry into national phase

Ref document number: 2003745884

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 1020037016115

Country of ref document: KR

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 20038008181

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 2003745884

Country of ref document: EP