US20040202249A1 - Real-time MPEG video encoding method of maintaining synchronization between video and audio - Google Patents

Real-time MPEG video encoding method of maintaining synchronization between video and audio

Info

Publication number
US20040202249A1
US20040202249A1 (application US10/408,555)
Authority
US
United States
Prior art keywords
frame
frames
null
discrete cosine
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/408,555
Inventor
Chien-Shun Lo
Wei-Chuan Hsiao
Shih-Chin Hsu
Teng-chou Chang
Wei-Jen Huang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Newsoft Tech Corp
Original Assignee
Newsoft Tech Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Newsoft Tech Corp filed Critical Newsoft Tech Corp
Priority to US10/408,555 priority Critical patent/US20040202249A1/en
Assigned to NEWSOFT TECHNOLOGY CORPORATION reassignment NEWSOFT TECHNOLOGY CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CHANG, TENG-CHOU, HSIAO, WEI-CHUAN, HSU, SHIH-CHIN, HUANG, WEI-JEN, LO, CHIEN-SHUN
Publication of US20040202249A1 publication Critical patent/US20040202249A1/en
Abandoned legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N 19/89 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
    • H04N 19/895 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder in combination with error concealment

Definitions

  • If any frames are lost, a step S23 is performed to output null frames that fully compensate for the lost frames (the encoding of the null frames is explained later). After the lost frames have been compensated, video encoding of the input frames begins.
  • A step S24 determines whether the current GOP has accumulated enough frames. Since a GOP consists of at least one I picture and a plurality of P pictures, which may be substituted by null frames, an input frame is encoded and output as an I picture when the GOP has accumulated enough frames (S25) to start a new GOP; otherwise, P pictures are output (S26) until the current GOP has accumulated enough frames. The video encoding methods for both the I and P pictures are conventional and are not explained further. Thereafter, the processing flow returns to step S21 to check whether further frames are input. If no frames have been input, the whole video encoding process terminates.
  • A frame checking procedure 27 is performed to check whether the number of frames received equals the number of frames encoded, that is, whether any frames have been lost. If lost frames are detected, a frame compensating procedure 28 compensates for them with null frames. After all lost frames have been compensated, a frame encoding procedure 29 encodes the received frames with a standard MPEG video encoding procedure and outputs the encoded frames. If no frame is lost, the frame encoding procedure 29 is performed directly.
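The check-compensate-encode loop of procedures 27 through 29 can be sketched as follows. `ToyEncoder` and `encode_stream` are illustrative stand-ins, not APIs from the patent or from any real MPEG library:

```python
class ToyEncoder:
    """Placeholder for a real MPEG encoder: it records what kind of
    frame would be emitted instead of producing an actual bitstream."""
    def encode(self, frame):
        return ("coded", frame)

    def encode_null(self):
        return ("null", None)

def encode_stream(input_frames, encoder):
    """Frame checking, compensating, and encoding in one pass.
    input_frames yields (sequence_number, frame) pairs, 1-based,
    with gaps wherever frames were lost."""
    output = []
    coded = 0  # frames encoded so far
    for seq, frame in input_frames:
        # Frame checking: lost count follows equation (1) of the patent,
        # F_lost = F_input - F_coded - 1.
        for _ in range(seq - coded - 1):
            output.append(encoder.encode_null())  # compensate a lost frame
            coded += 1
        output.append(encoder.encode(frame))      # normal encoding
        coded += 1
    return output

# Frames 3 and 4 never arrive, so two null frames are substituted.
out = encode_stream([(1, "f1"), (2, "f2"), (5, "f5")], ToyEncoder())
print([kind for kind, _ in out])  # ['coded', 'coded', 'null', 'null', 'coded']
```

Because the null frames are inserted before the late-arriving frame is encoded, the output always contains one entry per source frame slot, which is what keeps the video track the same length as the audio.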
  • First, a null predictive frame 31 with a pixel value of 0 is received (S31).
  • The null predictive frame is transformed into a discrete cosine transform coefficient 32 by a discrete cosine transform algorithm (S32).
  • The discrete cosine transform coefficient 32 is then transformed into a quantized discrete cosine transform coefficient 33 by quantization (S33).
  • The quantized discrete cosine transform coefficient 33 is encoded with a null motion vector by variable length coding (VLC) (S34) to output a null frame (S35).
  • The value of the null motion vector is (0,0). Because motion estimation is omitted for null frames and the discrete cosine transformation and quantization of an all-zero frame are fast to compute, null frames are encoded considerably faster than the I, P, and B pictures.
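The speed advantage is easy to see: every DCT coefficient of an all-zero block is zero, so quantization and VLC have almost nothing left to do. A minimal sketch with a reference (deliberately unoptimized) 8×8 2-D DCT-II; the quantization step size of 16 is an arbitrary illustration:

```python
import math

def dct2_8x8(block):
    """Naive 2-D DCT-II on an 8x8 block (textbook formula, not optimized)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

zero_block = [[0] * 8 for _ in range(8)]
coeffs = dct2_8x8(zero_block)
quantized = [[round(c / 16) for c in row] for row in coeffs]  # arbitrary step
# Every coefficient of an all-zero block is 0, so quantization, scanning,
# and VLC with a (0,0) motion vector are nearly free.
print(all(q == 0 for row in quantized for q in row))  # True
```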
  • Another embodiment of the real-time MPEG video encoding method of maintaining synchronization between video and audio is described with reference to FIG. 4.
  • First, the system performance for video encoding of an MPEG video encoding system 41 is detected (S41), and from this performance evaluation, the number of frames that the system 41 can encode and output in one second is estimated (S42) and denoted by Fp.
  • A suitable output frame sequence is then selected with reference to a frame-type output sequence table (S43).
  • The frame-type output sequence table is predefined on the basis of experimental results.
  • The content of the table is the frame-type output sequences of the frames included in each GOP.
  • The frame types include the I picture, the P picture, and the null frame.
  • For example, the frame-type output sequence for NTSC is IPNPPNPPNPPNPPN in each GOP with a length of 15 frames.
  • As another example, the frame-type output sequence is IPPNPPNPPNPPN in each GOP with a length of 13 frames.
  • The symbol I represents an intra picture.
  • The symbol P represents a predicted picture.
  • The symbol N represents a null frame.
  • MPEG video encoding is then performed in accordance with the selected frame-type output sequence (S45).
  • For example, if the video is encoded in accordance with the frame-type output sequence IPNPPNPPNPPNPPN, a GOP consists of an I picture as the first frame, a P picture as the second frame, a null frame as the third frame, P pictures as the fourth and fifth frames, a null frame as the sixth frame, and so on.
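A sketch of how steps S42, S43, and S45 might fit together. The two sequences are the ones quoted above ("N" marking a null frame), but the Fp thresholds pairing them with an encoding rate are illustrative assumptions, since FIG. 5 itself is not reproduced here:

```python
# Frame-type output sequence table in the spirit of FIG. 5.
# The thresholds (minimum frames encodable per second) are made up
# for illustration; only the sequences come from the text.
SEQUENCE_TABLE = [
    (20, "IPNPPNPPNPPNPPN"),   # 15-frame GOP, every third frame null
    (18, "IPPNPPNPPNPPN"),     # 13-frame GOP
]

def select_sequence(fp: float) -> str:
    """S43: pick the first sequence whose threshold the estimated
    encoding rate fp (frames per second, from S42) meets."""
    for threshold, sequence in SEQUENCE_TABLE:
        if fp >= threshold:
            return sequence
    return SEQUENCE_TABLE[-1][1]   # fall back to the cheapest sequence

def encode_gop(frames, sequence):
    """S45: tag each input frame with the type it would be encoded as."""
    return [(ftype, frame) for ftype, frame in zip(sequence, frames)]

gop = encode_gop(range(15), select_sequence(21.0))
print("".join(t for t, _ in gop))  # IPNPPNPPNPPNPPN
```

The table makes the null frames uniformly distributed through the GOP, so the skipped encoding work is spread evenly rather than bunched at the end of each second.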
  • The I and P pictures are encoded by conventional MPEG video encoding techniques.
  • The method for encoding null frames is as described above. It should be noted that the frame-type output sequences listed in FIG. 5 are for illustration only. A person skilled in the art can set appropriate output sequences based on experimental results or special requirements. Besides, the length of a GOP may also be modified according to demands such as video quality requirements.
  • Because motion estimation is omitted for null frames and the discrete cosine transformation and quantization are fast to compute, a null frame is encoded much faster than the I, P, and B pictures. Therefore, when real-time MPEG video encoding is performed and a loss of frames is detected, null frames can quickly compensate for the lost frames. Alternatively, when the system performance for video encoding is insufficient, a frame-type output sequence can be defined beforehand so that the system uses null frames in place of the frames that cannot be encoded in real time, thereby reaching the number of frames required to maintain synchronization between video and audio in the encoded MPEG file.

Abstract

When real-time MPEG video encoding is performed, null frames are used to compensate for lost frames. Alternatively, when the performance of the real-time MPEG video encoding is insufficient, the encoding is performed in accordance with a predetermined frame-type output sequence in which null frames are uniformly distributed. A null frame is encoded by transforming a null predictive frame with a pixel value of 0 into a discrete cosine transform coefficient through a discrete cosine transform algorithm. The discrete cosine transform coefficient is then transformed into a quantized discrete cosine transform coefficient by quantization. The quantized discrete cosine transform coefficient is then encoded with a null motion vector of (0,0) by variable length coding.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • This invention relates to a video encoding method. More particularly, this invention relates to a real-time MPEG (Motion Picture Experts Group) video encoding method of maintaining synchronization between video and audio. [0002]
  • 2. Description of the Related Art [0003]
  • Video compact discs (VCDs) and digital versatile discs (DVDs) are both small in size with excellent data storage. At the same time, a normal user can easily produce and edit a personalized VCD or DVD for giving to relatives and friends as a gift. The characteristics of the DVDs, such as a high picture quality and a large storage capacity, can even be used for immediate recording of television programs. Because of the advantages brought by the digital video and audio technology, the VCDs and DVDs have gradually taken over the market place of conventional videotapes, and have won wide acceptance among the people. [0004]
  • In the specifications of the VCDs and DVDs, MPEG 1 and MPEG 2 are used respectively as the primary formats for video encoding to store videos. For the National Television System Committee (NTSC) standard, which is used in North America and Japan, the image resolution for the VCD is 352×240 pixels, the number of frames played per second is 29.97, and the uncompressed data rate is up to 58 Mbps. For the DVD, the image resolution is 720×480 pixels, the number of frames played per second is 29.97, and the uncompressed data rate is up to 237 Mbps. For Phase Alternating Line (PAL), on the other hand, which is used in most European countries and Australia, the image resolution for the VCD is 352×288 pixels, the number of frames played per second is 25, and the uncompressed data rate is also up to 58 Mbps. For the DVD, the image resolution is 720×576 pixels, the number of frames played per second is 25, and the uncompressed data rate is also up to 237 Mbps. With such a large amount of data, how to perform real-time encoding compression is a major challenge today. [0005]
  • Hereinafter, the principle of MPEG 1 and MPEG 2 video encoding is briefly explained. Sequential video coding is divided into two parts. The first is called "intra-coding" and generates an intra picture (I picture); it is a compression encoding method for a single image using the discrete cosine transform (DCT) algorithm. The I picture can therefore be transformed into a static frame without reference to forward or backward frames, and it requires the largest amount of data. The second part is called "inter-coding," a method that predicts the current frame from adjacent frames. This method can generate a predicted picture (P picture) or a bi-directionally predicted picture (B picture). The difference between the P and B pictures is that the P picture uses only forward prediction; that is, only the forward frame is referred to for reconstructing the P picture. The reconstruction of the B picture, on the other hand, uses the forward and backward frames or an average of the two, and therefore has the highest encoding efficiency. However, the B picture itself cannot be used as a reference for other prediction encoding. [0006]
  • For example, suppose a video of a ball falling from the top of a building is recorded in MPEG format. First, the DCT technique is employed to encode a first I picture. If the P pictures are generated with a forward-predicted distance of 1, the reconstruction of the next P picture will use motion estimation and compensation techniques with reference to the previous I picture. For instance, assume the first I picture is a static frame showing the ball at the top of the building. Because the building is a static object, the building in the second frame can refer to the building in the previous I picture. As for the falling ball, the motion estimation technique can be utilized to calculate motion vectors and motion compensation with reference to the previous I picture, reconstructing a second P picture. Likewise, a third P picture is reconstructed with reference to the second P picture. The P pictures can thus be reconstructed by recording only the differences from the previous frame, thereby dramatically reducing the amount of data required and achieving compression. [0007]
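The prediction idea in the falling-ball example can be reduced to a toy sketch: a P picture stores only the per-pixel difference from its reference frame. Motion estimation is left out here (all motion vectors are effectively (0,0)), and the pixel values are made up purely for illustration:

```python
def p_residual(frame, reference):
    """Per-pixel difference actually stored for a P picture."""
    return [f - r for f, r in zip(frame, reference)]

def reconstruct(residual, reference):
    """What the decoder does: add the residual back to the reference."""
    return [r + d for r, d in zip(reference, residual)]

building = [50, 50, 50, 50]   # static background pixels (hypothetical)
frame1 = building + [200]     # bright ball pixel near the top
frame2 = building + [0]       # the ball has left this row of pixels

residual = p_residual(frame2, frame1)
print(residual)                                  # [0, 0, 0, 0, -200]
print(reconstruct(residual, frame1) == frame2)   # True
```

The static building contributes nothing but zeros to the residual, which is exactly why a P picture needs far less data than an I picture.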
  • Besides, as shown in FIG. 1, it is necessary to re-establish another I picture after a specified number of P pictures follow the first I picture, in order to maintain the quality of the video. Only then can a user start playing the video from any point without having to play sequentially from the first I picture. If two I pictures are too far apart, that is, too many P pictures lie between them, the image quality deteriorates. On the other hand, if two I pictures are too close together, the compression is less effective, because an I picture is a complete static frame and holds a considerably large amount of data. The frames starting from one I picture and ending before the next I picture are called a group of pictures (GOP). The encoding rule for how many frames a GOP contains may be set in accordance with practical demands. [0008]
  • However, when real-time encoding compression is performed using the above-mentioned MPEG video encoding method, the video and audio may not stay synchronized. The main reason is that the MPEG video encoder cannot keep up with the speed required for real-time encoding compression, resulting in an insufficient number of frames. For instance, in accordance with NTSC, 30 frames should be played per second, so a source video 1 second in length supplies 30 frames. If the MPEG video encoder encodes only 15 frames per second, the encoded MPEG file will contain only 15 frames. When the video is played, the MPEG file with 15 frames will be played in 0.5 second while the audio keeps playing for 1 second, a synchronization failure in which the video finishes 0.5 second earlier than the audio. Besides, if the source video has an insufficient number of frames per second, the video and audio output of the encoded MPEG file will not be synchronized even when the encoding speed of the MPEG video encoder meets the requirement. To sum up, it is a major challenge to ensure synchronized output of video and audio after MPEG encoding when the speed of the MPEG video encoder is insufficient for real-time MPEG video encoding or the source video has an insufficient number of frames per second. [0009]
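The mismatch in this example is plain arithmetic: the playback duration of the video track is the number of encoded frames divided by the playback rate, while the audio runs for the full source duration. A minimal sketch using the NTSC numbers from the text:

```python
def video_duration(encoded_frames: int, playback_fps: float) -> float:
    """Time the decoder takes to play back the encoded video frames."""
    return encoded_frames / playback_fps

# 1 second of NTSC source should yield 30 frames, but the encoder
# only managed 15; the audio still lasts the full 1 second.
video_seconds = video_duration(encoded_frames=15, playback_fps=30.0)
audio_seconds = 1.0
print(video_seconds)                  # 0.5
print(audio_seconds - video_seconds)  # 0.5 -> video ends 0.5 s early
```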
  • SUMMARY OF THE INVENTION
  • In view of the above-mentioned problems, an object of the invention is to provide a real-time MPEG video encoding method of maintaining synchronization between video and audio, which is capable of ensuring a synchronous output of the video and audio after the MPEG encoding when a speed of an MPEG video encoder is insufficient for a real-time MPEG video encoding. [0010]
  • Another object of the invention is to provide a real-time MPEG video encoding method of maintaining synchronization between video and audio, which is capable of ensuring a synchronous output of the video and audio after the MPEG encoding when the source video has an insufficient number of frames per second. [0011]
  • In order to achieve the aforementioned objects, a real-time MPEG video encoding method of maintaining synchronization between video and audio according to the present invention comprises a frame checking procedure, a frame compensating procedure, and a frame encoding procedure. The frame checking procedure checks whether any frames are lost. The frame compensating procedure uses null frames to compensate for lost frames if any frames are lost. The frame encoding procedure uses a standard MPEG video encoding procedure to encode a received frame and output an encoded frame. A null frame is encoded as a predicted frame with a pixel value of 0; when variable length coding (VLC) is performed, the motion vector used is (0,0). During MPEG video encoding, because the motion estimation step is omitted for null frames and the discrete cosine transformation and quantization are fast to compute, null frames are encoded considerably faster than the I, P, and B pictures, so the lost frames can be compensated immediately. [0012]
  • Another real-time MPEG video encoding method of maintaining synchronization between video and audio of the invention comprises a system performance checking procedure, a frame-type output sequence selecting procedure, and a frame encoding procedure. The system performance checking procedure estimates the system performance for MPEG video encoding and predicts the number of frames that the system can encode per second. The frame-type output sequence selecting procedure selects a frame-type output sequence by checking the number of frames that can be encoded per second against a frame-type output sequence table. The frame-type output sequence table is predefined and contains the frame-type output sequences of the frames in each GOP. The frame types comprise the I picture, the P picture, and the null frame. The frame encoding procedure performs MPEG video encoding on the received frames in sequence according to the selected frame-type output sequence. [0013]
  • According to the real-time MPEG video encoding method of maintaining synchronization between video and audio of the invention, because the null-frame encoding speed is faster, it can immediately compensate the lost frames or frames that cannot be encoded and output in time due to insufficient system performance. A sufficient number of frames for the encoded MPEG file are ensured for maintaining synchronization between video and audio.[0014]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above-mentioned and other objects, features, and advantages of the present invention will become apparent with reference to the following descriptions and accompanying drawings, wherein: [0015]
  • FIG. 1 is a schematic diagram of a GOP consisting of frame-type output sequences output by a conventional real-time MPEG video encoding method; [0016]
  • FIG. 2a is a flow diagram illustrating a real-time MPEG video encoding method of maintaining synchronization between video and audio in one embodiment of the invention; [0017]
  • FIG. 2b is a flow diagram showing a simplified process of the real-time MPEG video encoding method of maintaining synchronization between video and audio of FIG. 2a; [0018]
  • FIG. 3 is a flow diagram showing a null-frame encoding method of FIG. 2a; [0019]
  • FIG. 4 is a flow diagram showing a real-time MPEG video encoding method of maintaining synchronization between video and audio in another embodiment of the invention; and [0020]
  • FIG. 5 is a frame-type output sequence table showing frame-type output sequences of GOPs corresponding to numbers of frames that can be encoded per second.[0021]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • A real-time MPEG video encoding method of maintaining synchronization between video and audio is explained in detail by descriptions of embodiments with reference to relevant drawings. In the accompanying drawings, similar elements will be denoted with similar reference symbols. [0022]
  • The real-time MPEG video encoding method of maintaining synchronization between video and audio in a preferred embodiment of the invention uses forward prediction to generate P pictures with a prediction distance of 1. That is, for each P picture, the reference frame is the immediately previous frame. Referring to FIG. 2a, when frames are transmitted from a video source (S21), the method checks whether any frames have been lost (S22). Frames may be lost because the system performance for MPEG video encoding is insufficient, because too few frames are transmitted from the video source in the first place, or because the transmission quality is poor. The number of lost frames may be determined by comparing the current number of encoded frames with the timestamps or frame sequence numbers carried by the input frames. [0023]
  • For instance, NTSC plays 30 frames in one second; that is, 30 frames are transmitted to the system in one second for video encoding. Assume a video 1 second in length. Each transmitted frame is sequentially assigned a frame sequence number from 1 to 30, and the system performs the video encoding on each of the 30 frames in sequence. Normally, an input frame with a sequence number of 25 implies that the system has already encoded 24 frames and is encoding the 25th frame. However, as a result of insufficient system performance or poor transmission quality, the system may have encoded only 20 frames when the sequence number of the input frame is 25. Obviously, some frames have been lost. The number of lost frames Flost can be calculated using the following equation: [0024]
  • Flost = Finput − Fcoded − 1  (1)
  • Here, Finput is the sequence number of the frame currently being input, and Fcoded is the number of frames the system has already encoded. Accordingly, the number of lost frames in the above example is 25 − 20 − 1 = 4. Similarly, the sequence number of a frame can also be determined from its timestamp. In the above example, the time difference between the timestamps of two adjacent frames is about 1/30 second. If the first frame, with frame sequence number 1, is indicated by a timestamp, the sequence numbers of the following frames can be determined by dividing the corresponding timestamp differences by 1/30 second. The number of lost frames can then be calculated using equation (1). [0025]
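The bookkeeping above can be sketched in Python. This is an illustrative sketch, not code from the patent; the function names, the NTSC frame-rate constant, and the rounding strategy for timestamps are assumptions made for the example.

```python
FRAME_RATE = 30  # NTSC: 30 frames per second (assumed constant for this sketch)

def lost_frames(f_input: int, f_coded: int) -> int:
    """Equation (1): Flost = Finput - Fcoded - 1, where f_input is the
    sequence number of the frame currently being input and f_coded is the
    number of frames already encoded."""
    return f_input - f_coded - 1

def sequence_number_from_timestamp(timestamp: float, first_timestamp: float = 0.0) -> int:
    """Derive a 1-based frame sequence number from a timestamp, assuming
    adjacent frames are spaced about 1/30 second apart."""
    return round((timestamp - first_timestamp) * FRAME_RATE) + 1

# The worked example from the text: input frame 25, only 20 frames encoded.
print(lost_frames(25, 20))  # 4
```

A frame whose timestamp is 24/30 second after the first frame maps back to sequence number 25, matching the example.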
  • If a loss of frames is determined (S22), step S23 is performed to output null frames to fully compensate for the lost frames (the encoding of null frames is explained later). After the lost frames have been compensated, video encoding of the input frames begins. Step S24 determines whether the current GOP has accumulated enough frames. Since a GOP consists of at least one I picture and a plurality of P pictures, which can be substituted by null frames, an input frame is encoded and output as an I picture when the GOP has accumulated enough frames (S25), generating a new GOP; otherwise, P pictures are output (S26) so that the current GOP continues to accumulate frames. The video encoding methods for the I and P pictures are both conventional and are not explained further. Thereafter, the processing flow returns to step S21 to check whether further frames are input. If no frames have been input, the whole video encoding process terminates. [0026]
  • For a better understanding of the spirit of the invention, refer to FIG. 2b for the following description. When frames are received from the video source, a frame checking procedure 27 checks whether the number of frames received equals the number of frames encoded, that is, whether any frames have been lost. If lost frames are detected, a procedure 28 compensates for them with null frames. After all lost frames have been compensated, a frame-encoding procedure 29 encodes the received frames with a standard MPEG video encoding procedure and outputs the encoded frames. When no frame is lost, the frame-encoding procedure 29 is performed directly to encode and output the received frames. [0027]
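A minimal sketch of the simplified flow of FIG. 2b, assuming a hypothetical encoder object with `emit_null_frame` and `emit_coded_frame` methods (names invented for illustration; the real procedures 27, 28, and 29 are the patent's, the code is not):

```python
class StubEncoder:
    """Stand-in for an MPEG encoder; records what would be emitted."""
    def __init__(self):
        self.output = []
    def emit_null_frame(self):
        self.output.append("null")
    def emit_coded_frame(self, frame):
        self.output.append(frame)

def encode_stream(frames, encoder):
    """Frame checking (27), null-frame compensation (28), then standard
    frame encoding (29). `frames` yields (sequence number, frame data)."""
    coded = 0
    for seq, frame in frames:
        lost = seq - coded - 1            # procedure 27, via equation (1)
        for _ in range(lost):             # procedure 28: compensate lost frames
            encoder.emit_null_frame()
            coded += 1
        encoder.emit_coded_frame(frame)   # procedure 29: standard encoding
        coded += 1
    return coded

# Frames 3 and 4 were lost in transit:
enc = StubEncoder()
encode_stream([(1, "f1"), (2, "f2"), (5, "f5")], enc)
print(enc.output)  # ['f1', 'f2', 'null', 'null', 'f5']
```

Two null frames are inserted where frames 3 and 4 went missing, so the output still contains five frames for the five frame periods.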
  • Referring now to FIG. 3, the encoding method for null frames is described. First, a null predictive frame 31 with a pixel value of 0 is received (S31). The null predictive frame is transformed into a discrete cosine transform coefficient 32 by a discrete cosine transform algorithm (S32). The discrete cosine transform coefficient 32 is then transformed into a quantized discrete cosine transform coefficient 33 by quantization (S33). The quantized discrete cosine transform coefficient 33 is encoded together with a null motion vector by variable length coding (VLC) (S34) to output a null frame (S35). The value of the null motion vector is (0,0). Because motion estimation is omitted for null frames, and because the discrete cosine transform and quantization of an all-zero frame are fast, null frames are encoded considerably faster than I, P, and B pictures. [0028]
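The null-frame pipeline of FIG. 3 can be illustrated on a single 8×8 block. The DCT implementation and the quantizer scale below are illustrative stand-ins, not the patent's, and the VLC step (S34) is reduced to a comment: for an all-zero block every coefficient quantizes to zero, which is precisely why null frames are cheap to encode.

```python
import numpy as np

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block (illustrative, not optimized)."""
    N = block.shape[0]
    n = np.arange(N)
    # Basis matrix: rows index frequency k, columns index sample n.
    C = np.sqrt(2 / N) * np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    C[0, :] = np.sqrt(1 / N)
    return C @ block @ C.T

def encode_null_block(qscale=8):
    block = np.zeros((8, 8))                  # S31: null predictive frame, pixels 0
    coeff = dct2(block)                       # S32: discrete cosine transform
    q = np.round(coeff / qscale).astype(int)  # S33: quantization
    motion_vector = (0, 0)                    # null motion vector
    # S34: a real encoder would VLC-code q and motion_vector; the all-zero
    # coefficients collapse to an end-of-block code almost immediately.
    return q, motion_vector
```

Running `encode_null_block()` yields an all-zero quantized block and the (0,0) motion vector, regardless of the quantizer scale.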
  • Another embodiment of the real-time MPEG video encoding method of maintaining synchronization between video and audio is described with reference to FIG. 4. First, the video encoding performance of an MPEG video encoding system 41 is detected (S41), and from this performance evaluation the number of frames that the system 41 can encode and output in one second is estimated (S42) and denoted Fp. Then, depending on the number Fp of frames that can be encoded per second, a suitable frame-type output sequence is selected with reference to a frame-type output sequence table (S43). The frame-type output sequence table is predefined on the basis of experimental results; its content is the frame-type output sequences of the frames included in each GOP. The frame types include the I picture, the P picture, and the null frame. As shown in FIG. 5, for example, when the number Fp of frames encoded per second by a system is 20 to 21, the frame-type output sequence for NTSC is IPμPPμPPμPPμPPμ in each GOP with a length of 15 frames. For PAL, the frame-type output sequence is IPPμPPμPPμPPμ in each GOP with a length of 13 frames. Here, the symbol I represents an intra picture, the symbol P represents a predicted picture, and the symbol μ represents a null frame. [0029]
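A sketch of the table lookup of step S43. Only the Fp = 20 to 21 rows come from the text above; the rest of FIG. 5 is not reproduced here, and the table layout, names, and fallback behavior are assumptions for illustration. The letter 'u' stands in for the null-frame symbol μ.

```python
# Hypothetical excerpt of the frame-type output sequence table of FIG. 5.
# Keys are (min Fp, max Fp); values are (sequence, GOP length).
NTSC_TABLE = {
    (20, 21): ("IPuPPuPPuPPuPPu", 15),  # from the example in the text
}
PAL_TABLE = {
    (20, 21): ("IPPuPPuPPuPPu", 13),    # from the example in the text
}

def select_sequence(fp, table):
    """Return (frame-type output sequence, GOP length) for an estimated
    number fp of frames encodable per second, or None if no row matches."""
    for (low, high), row in table.items():
        if low <= fp <= high:
            return row
    return None

seq, gop_len = select_sequence(20, NTSC_TABLE)
print(seq, gop_len)  # IPuPPuPPuPPuPPu 15
```

The GOP length falls out of the sequence itself (15 symbols for NTSC, 13 for PAL), so the second tuple element is redundant but convenient.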
  • After a suitable frame-type output sequence has been selected, when frames are received from the video source (S44), MPEG video encoding is performed in accordance with that sequence (S45). Continuing the above example for NTSC, the video is encoded in accordance with the frame-type output sequence IPμPPμPPμPPμPPμ. That is, a GOP consists of an I picture as the first frame, a P picture as the second frame, a null frame as the third frame, a P picture as the fourth frame, a P picture as the fifth frame, a null frame as the sixth frame, and so on. The video encoding procedure repeats in this way until no more frames are input. The I and P pictures are encoded by a conventional MPEG video encoding technique; the method for encoding null frames is as described above. It should be noted that the frame-type output sequences listed in FIG. 5 are for illustration only. A person skilled in the art can, on the basis of experimental results or special requirements, set appropriate frame-type output sequences. In addition, the length of a GOP may also be modified according to demands such as video quality requirements. [0030]
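The sequence-driven encoding loop of step S45 might be sketched as follows, with the conventional I/P encoders and the null-frame encoder replaced by labeled stand-ins (the letter 'u' again stands in for μ; all names are illustrative):

```python
from itertools import cycle

def encode_with_sequence(frames, sequence):
    """Encode frames by cycling through the selected frame-type output
    sequence: each GOP repeats the same I/P/null pattern (S45)."""
    out = []
    for frame, ftype in zip(frames, cycle(sequence)):
        if ftype == "I":
            out.append(("I", frame))      # conventional intra coding
        elif ftype == "P":
            out.append(("P", frame))      # conventional forward prediction
        else:                             # 'u': emit a null frame instead;
            out.append(("null", None))    # the input frame is not encoded
    return out

coded = encode_with_sequence(range(6), "IPuPPu")
print([t for t, _ in coded])  # ['I', 'P', 'null', 'P', 'P', 'null']
```

Because the null slots cost almost nothing to encode, the loop keeps pace with the frame rate even when the system cannot encode every frame conventionally.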
  • In accordance with embodiments of the invention, motion estimation is omitted for null frames and the discrete cosine transform and quantization calculations are fast, so a null frame is encoded much faster than an I, P, or B picture. Therefore, when real-time MPEG video encoding is performed and a loss of frames is detected, null frames can quickly compensate for the lost frames. Alternatively, when the system performance for video encoding is insufficient, a frame-type output sequence can be defined beforehand, so that the system uses null frames to compensate for frames that cannot be encoded in real time, thereby reaching the number of frames required to maintain synchronization between video and audio in an encoded MPEG file. [0031]
  • While the invention has been described by way of examples and in terms of preferred embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications. [0032]

Claims (19)

What is claimed is:
1. A real-time Motion Picture Experts Group (MPEG) video encoding method of maintaining synchronization between video and audio, comprising:
a frame checking procedure for determining whether a number of received frames is the same as a number of encoded frames;
a frame compensating procedure for using a null frame to compensate a lost frame when the number of frames encoded is less than the number of frames received; and
a frame encoding procedure for encoding a received frame with a standard procedure of MPEG video encoding and then outputting an encoded frame.
2. The method according to claim 1, wherein said encoded frame has two frame types of an intra picture (I picture) and a predicted picture (P picture), in which the P picture is generated by a forward prediction with a prediction distance of 1.
3. The method according to claim 1, wherein said number of received frames is a current frame sequence number provided by a received frame.
4. The method according to claim 1, wherein said number of received frames is a current frame sequence number calculated from a timestamp provided by a received frame.
5. The method according to claim 3, wherein a number of lost frames is calculated by using the following equation:
Flost = Finput − Fcoded − 1
wherein Flost is said number of lost frames, Finput is said sequence number of the current frame, and Fcoded is said number of encoded frames.
6. The method according to claim 1, wherein said null frame is encoded by the steps of:
receiving a null predictive frame;
generating a discrete cosine transform coefficient by using a discrete cosine transformation;
quantizing said discrete cosine transform coefficient to generate a quantized discrete cosine transform coefficient; and
encoding said quantized discrete cosine transform coefficient and a null motion vector by a variable length coding so as to output said null frame.
7. The method according to claim 6, wherein said null predictive frame has a pixel value of 0.
8. The method according to claim 6, wherein said null motion vector is (0,0).
9. A real-time MPEG video encoding method of maintaining synchronization between video and audio, comprising:
a system performance checking procedure for estimating a performance of a system and predicting a number of frames that can be encoded per second by the system;
a frame-type output sequence selecting procedure for selecting a frame-type output sequence for a plurality of frames included in a group of pictures (GOP) in accordance with said number of frames that can be encoded per second with reference to a frame-type output sequence table; and
a frame encoding procedure for performing an MPEG video encoding on received frames in accordance with one frame type corresponding to said frame-type output sequence.
10. The method according to claim 9, wherein said frame-type output sequence table is predefined and at least includes said number of frames that can be encoded per second and said frame-type output sequence.
11. The method according to claim 10, wherein said frame type includes an intra picture (I picture), a predicted picture (P picture), and a null frame.
12. The method according to claim 11, wherein said P picture is generated by using a forward prediction with a predicting distance of 1.
13. The method according to claim 11, wherein said null frame is encoded by the steps of:
receiving a null predictive frame;
generating a discrete cosine transform coefficient by using a discrete cosine transformation;
quantizing said discrete cosine transform coefficient to generate a quantized discrete cosine transform coefficient; and
encoding said quantized discrete cosine transform coefficient and a null motion vector by a variable length coding so as to output said null frame.
14. The method according to claim 13, wherein said null predictive frame has a pixel value of 0.
15. The method according to claim 13, wherein said null motion vector is (0,0).
16. A real-time video encoding method for null frames used in MPEG video encoding to compensate lost frames, said method comprising:
receiving a null predictive frame;
generating a discrete cosine transform coefficient by using a discrete cosine transformation;
quantizing said discrete cosine transform coefficient to generate a quantized discrete cosine transform coefficient; and
encoding said quantized discrete cosine transform coefficient and a null motion vector by a variable length coding so as to output said null frame.
17. The method according to claim 16, wherein said null predictive frame has a pixel value of 0.
18. The method according to claim 16, wherein said null motion vector is (0,0).
19. The method according to claim 16, wherein
said null predictive frame has a pixel value of 0; and
said null motion vector is (0,0).
US10/408,555 2003-04-08 2003-04-08 Real-time MPEG video encoding method of maintaining synchronization between video and audio Abandoned US20040202249A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/408,555 US20040202249A1 (en) 2003-04-08 2003-04-08 Real-time MPEG video encoding method of maintaining synchronization between video and audio

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/408,555 US20040202249A1 (en) 2003-04-08 2003-04-08 Real-time MPEG video encoding method of maintaining synchronization between video and audio

Publications (1)

Publication Number Publication Date
US20040202249A1 true US20040202249A1 (en) 2004-10-14

Family

ID=33130518

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/408,555 Abandoned US20040202249A1 (en) 2003-04-08 2003-04-08 Real-time MPEG video encoding method of maintaining synchronization between video and audio

Country Status (1)

Country Link
US (1) US20040202249A1 (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5214507A (en) * 1991-11-08 1993-05-25 At&T Bell Laboratories Video signal quantization for an mpeg like coding environment
US5608450A (en) * 1994-09-07 1997-03-04 Intel Corporation Video conferencing system with fast packet loss recovery
US5933532A (en) * 1996-08-30 1999-08-03 Sony Corporation Video data compression apparatus and method of same
US6163574A (en) * 1991-09-30 2000-12-19 Sony Corporation Motion picture encoding system for either intra-frame encoding or inter-frame encoding
US6324217B1 (en) * 1998-07-08 2001-11-27 Diva Systems Corporation Method and apparatus for producing an information stream having still images
US6353703B1 (en) * 1996-10-15 2002-03-05 Matsushita Electric Industrial Co., Ltd. Video and audio coding method, coding apparatus, and coding program recording medium


Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080224358A1 (en) * 2005-09-15 2008-09-18 John Rogers Nano-Molding Process
US7976748B2 (en) 2005-09-15 2011-07-12 The Board Of Trustees Of The University Of Illinois Nano-molding process
US20070094579A1 (en) * 2005-10-07 2007-04-26 Miguel Cerrato Sanchez Method for handling audio packet loss in a windows® media decoder
WO2007081537A2 (en) * 2006-01-04 2007-07-19 University Of Dayton Frame decimation through frame simplification
WO2007081537A3 (en) * 2006-01-04 2007-10-04 Univ Dayton Frame decimation through frame simplification
US9198084B2 (en) 2006-05-26 2015-11-24 Qualcomm Incorporated Wireless architecture for a traditional wire-based protocol
US9398089B2 (en) 2008-12-11 2016-07-19 Qualcomm Incorporated Dynamic resource sharing among multiple wireless devices
US9264248B2 (en) 2009-07-02 2016-02-16 Qualcomm Incorporated System and method for avoiding and resolving conflicts in a wireless mobile display digital interface multicast environment
US10165286B2 (en) * 2009-07-08 2018-12-25 Dejero Labs Inc. System and method for automatic encoder adjustment based on transport data
US10117055B2 (en) 2009-07-08 2018-10-30 Dejero Labs Inc. System and method for providing data services on vehicles
US10701370B2 (en) 2009-07-08 2020-06-30 Dejero Labs Inc. System and method for automatic encoder adjustment based on transport data
US9756468B2 (en) 2009-07-08 2017-09-05 Dejero Labs Inc. System and method for providing data services on vehicles
US20150341646A1 (en) * 2009-07-08 2015-11-26 Dejero Labs Inc. System and method for automatic encoder adjustment based on transport data
US10033779B2 (en) 2009-07-08 2018-07-24 Dejero Labs Inc. Multipath data streaming over multiple wireless networks
US11689884B2 (en) 2009-07-08 2023-06-27 Dejero Labs Inc. System and method for providing data services on vehicles
US11503307B2 (en) * 2009-07-08 2022-11-15 Dejero Labs Inc. System and method for automatic encoder adjustment based on transport data
US11006129B2 (en) * 2009-07-08 2021-05-11 Dejero Labs Inc. System and method for automatic encoder adjustment based on transport data
US11838827B2 (en) 2009-07-08 2023-12-05 Dejero Labs Inc. System and method for transmission of data from a wireless mobile device over a multipath wireless router
US11563788B2 (en) 2009-07-08 2023-01-24 Dejero Labs Inc. Multipath data streaming over multiple networks
US9582238B2 (en) 2009-12-14 2017-02-28 Qualcomm Incorporated Decomposed multi-stream (DMS) techniques for video display systems
EP2547106A1 (en) * 2010-01-14 2013-01-16 Sumitomo Electric Industries, Ltd. Video image coded data display method, device, and communications system
US9219925B2 (en) 2010-01-14 2015-12-22 Sumitomo Electric Industries, Ltd. Video image coded data display method, device, and communications system
EP2547106A4 (en) * 2010-01-14 2014-05-21 Sumitomo Electric Industries Video image coded data display method, device, and communications system
US10028163B2 (en) 2010-07-15 2018-07-17 Dejero Labs Inc. System and method for transmission of data from a wireless mobile device over a multipath wireless router
US10575206B2 (en) 2010-07-15 2020-02-25 Dejero Labs Inc. System and method for transmission of data from a wireless mobile device over a multipath wireless router
US9065876B2 (en) 2011-01-21 2015-06-23 Qualcomm Incorporated User input back channel from a wireless sink device to a wireless source device for multi-touch gesture wireless displays
US9787725B2 (en) 2011-01-21 2017-10-10 Qualcomm Incorporated User input back channel for wireless displays
US9582239B2 (en) 2011-01-21 2017-02-28 Qualcomm Incorporated User input back channel for wireless displays
US10135900B2 (en) 2011-01-21 2018-11-20 Qualcomm Incorporated User input back channel for wireless displays
US10382494B2 (en) 2011-01-21 2019-08-13 Qualcomm Incorporated User input back channel for wireless displays
US9413803B2 (en) 2011-01-21 2016-08-09 Qualcomm Incorporated User input back channel for wireless displays
US8964783B2 (en) 2011-01-21 2015-02-24 Qualcomm Incorporated User input back channel for wireless displays
US20130003624A1 (en) * 2011-01-21 2013-01-03 Qualcomm Incorporated User input back channel for wireless displays
US10911498B2 (en) 2011-01-21 2021-02-02 Qualcomm Incorporated User input back channel for wireless displays
US9723359B2 (en) 2011-02-04 2017-08-01 Qualcomm Incorporated Low latency wireless display for graphics
US10108386B2 (en) 2011-02-04 2018-10-23 Qualcomm Incorporated Content provisioning for wireless back channel
US9503771B2 (en) 2011-02-04 2016-11-22 Qualcomm Incorporated Low latency wireless display for graphics
US9525998B2 (en) 2012-01-06 2016-12-20 Qualcomm Incorporated Wireless display with multiscreen service
US20150327197A1 (en) * 2012-08-14 2015-11-12 Nokia Corporation Method and apparatuses for adjusting time, computer-readable storage media and a computer program product
US10757435B2 (en) 2016-01-22 2020-08-25 Hewlett-Packard Development Company, L.P. Video frame drift correction
WO2017127115A1 (en) * 2016-01-22 2017-07-27 Hewlett-Packard Development Company, L.P. Drift correction
CN105898343A (en) * 2016-04-07 2016-08-24 广州盈可视电子科技有限公司 Video live broadcasting method and device and terminal video live broadcasting method and device
CN108810281A (en) * 2018-06-22 2018-11-13 Oppo广东移动通信有限公司 Frame losing compensation method, device, storage medium and terminal

Similar Documents

Publication Publication Date Title
US20040202249A1 (en) Real-time MPEG video encoding method of maintaining synchronization between video and audio
JP3443867B2 (en) Image signal encoding / decoding method and image signal recording medium
JP5021502B2 (en) Method and apparatus for controlling rate distortion trade-off by mode selection of video encoder
US6323914B1 (en) Compressed video recording device with integrated effects processing
US20050169371A1 (en) Video coding apparatus and method for inserting key frame adaptively
US20050238100A1 (en) Video encoding method for encoding P frame and B frame using I frames
JP2001292451A (en) Moving picture signal compression device and method
JPH08205180A (en) Method and system for two-stage video film compression
JP2000278692A (en) Compressed data processing method, processor and recording and reproducing system
US7881366B2 (en) Moving picture encoder
US6314139B1 (en) Method of inserting editable point and encoder apparatus applying the same
JP3403168B2 (en) Image processing method, image processing apparatus capable of using the method, and television receiver
US7636482B1 (en) Efficient use of keyframes in video compression
JP3147792B2 (en) Video data decoding method and apparatus for high-speed playback
JP4869147B2 (en) Image recording / playback device
US8165217B2 (en) Image decoding apparatus and method for decoding prediction encoded image data
JP2002199392A (en) Method and device for encoding image
KR100860661B1 (en) Image reproducing method and image processing method, and image reproducing device, image processing device, and television receiver capable of using the methods
JP3487205B2 (en) Image data editing device
JPH08251582A (en) Encoded data editing device
JP4073541B2 (en) Video re-encoding device
JP4193224B2 (en) Moving picture coding apparatus and method, and moving picture decoding apparatus and method
JP2004015351A (en) Encoding apparatus and method, program, and recording medium
KR101054644B1 (en) Motion Estimation Based Image Coding / Decoding Apparatus and Method
JP4765961B2 (en) Digital video recording device

Legal Events

Date Code Title Description
AS Assignment

Owner name: NEWSOFT TECHNOLOGY CORPORATION, TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LO, CHIEN-SHUN;HSIAO, WEI-CHUAN;HSU, SHIH-CHIN;AND OTHERS;REEL/FRAME:013952/0884

Effective date: 20030214

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION