|Publication number||USRE43129 E1|
|Application number||US 12/819,209|
|Publication date||Jan 24, 2012|
|Filing date||Jun 20, 2010|
|Priority date||Mar 14, 1998|
|Also published as||CN1159915C, CN1229323A, EP1076998A1, EP1076998B1, EP1933567A2, EP1933567A3, EP1933567B1, EP1940177A2, EP1940177A3, EP1940177B1, US6259732, USRE41383, USRE41951, USRE43060, USRE43061, USRE43130, WO1999048298A1|
|Original Assignee||Daewoo Electronics Corporation|
More than one reissue application has been filed, in that reissue application Ser. Nos. 12/819,207, 12/819,208 and 12/819,210, all filed Jun. 20, 2010, are continuation cases of reissue application Ser. No. 12/131,723 filed Jun. 2, 2008. The present reissue application Ser. No. 12/819,209, filed Jun. 20, 2010, is a continuation of the reissue application Ser. No. 12/131,723 filed Jun. 2, 2008, which issued on Nov. 23, 2010 as U.S. Pat. No. Re. 41,951 E, which is a continuation case of reissue application Ser. No. 10/611,938 filed Jul. 3, 2003, now abandoned, which is a reissue application of U.S. Pat. No. 6,259,732 B1, which issued on Jul. 10, 2001 from U.S. application Ser. No. 09/088,375, and which claims priority under 35 U.S.C. §119 from Korean Patent Application No. KR 98-8637 filed Mar. 14, 1998.
The present invention relates to a method and apparatus for encoding interlaced macroblock texture information; and, more particularly, to a method and apparatus for padding interlaced texture information on a reference VOP on a texture macroblock basis in order to perform a motion estimation while using the interlaced coding technique.
In digitally televised systems such as video-telephone, teleconference and high definition television systems, a large amount of digital data is needed to define each video frame signal since a video line signal in the video frame signal comprises a sequence of digital data referred to as pixel values. Since, however, the available frequency bandwidth of a conventional transmission channel is limited, in order to transmit the large amount of digital data therethrough, it is necessary to compress or reduce the volume of data through the use of various data compression techniques, especially in the case of such low bit-rate video signal encoders as video-telephone and teleconference systems.
One of such techniques for encoding video signals for a low bit-rate encoding system is the so-called object-oriented analysis-synthesis coding technique, wherein an input video image is divided into objects, and three sets of parameters for defining the motion, contour and pixel data of each object are processed through different encoding channels.
One example of object-oriented coding scheme is the so-called MPEG(Moving Picture Experts Group) phase 4(MPEG-4), which is designed to provide an audio-visual coding standard for allowing content-based interactivity, improved coding efficiency and/or universal accessibility in such applications as low-bit rate communication, interactive multimedia(e.g., games, interactive TV, etc.) and area surveillance(see, for instance, MPEG-4 Video Verification Model Version 7.0, International Organisation for Standardisation, ISO/IEC JTC1/SC29/WG11 MPEG97/N1642, Apr. 1997).
According to the MPEG-4, an input video image is divided into a plurality of video object planes(VOP's), which correspond to entities in a bitstream that a user can access and manipulate. A VOP can be referred to as an object and represented by a bounding rectangle whose width and height may be the smallest multiples of 16 pixels(a macroblock size) surrounding each object so that the encoder may process the input video image on a VOP-by-VOP basis, i.e., an object-by-object basis.
A VOP disclosed in the MPEG-4 includes shape information and texture information for an object therein which are represented by a plurality of macroblocks on the VOP, each of macroblocks having, e.g., 16×16 pixels, wherein the shape information is represented in binary shape signals and the texture information includes luminance and chrominance data.
Since the texture information for two input video images sequentially received has temporal redundancies, it is desirable to reduce the temporal redundancies therein by using a motion estimation and compensation technique in order to efficiently encode the texture information.
In order to perform the motion estimation and compensation, a reference VOP, e.g., a previous VOP, should be padded by a progressive image padding technique, i.e., a conventional repetitive padding technique. In principle, the repetitive padding technique fills the transparent area outside the object of the VOP by repeating boundary pixels of the object, wherein the boundary pixels are located on the contour of the object. It is preferable to perform the repetitive padding technique with respect to the reconstructed shape information. If transparent pixels in a transparent area outside the object can be filled by the repetition of more than one boundary pixel, the average of the repeated values is taken as a padded value. This progressive padding process is generally divided into 3 steps: a horizontal repetitive padding; a vertical repetitive padding and an exterior padding(see, MPEG-4 Video Verification Model Version 7.0).
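The horizontal step of the repetitive padding described above can be sketched as follows. This is an illustrative Python sketch rather than the normative procedure of the Verification Model: the function name `pad_row` is hypothetical, and the integer averaging rule for pixels reachable from both sides is an assumption.

```python
def pad_row(pixels, mask):
    """Horizontally pad one row of texture pixels.

    mask[i] is True for defined (object) pixels.  Each undefined pixel
    is filled by repeating the nearest defined boundary value; a pixel
    reachable from both sides takes the average of the two values.
    """
    n = len(pixels)
    out = list(pixels)

    # Nearest defined value, scanning left-to-right.
    left = [None] * n
    last = None
    for i in range(n):
        if mask[i]:
            last = pixels[i]
        left[i] = last

    # Nearest defined value, scanning right-to-left.
    right = [None] * n
    last = None
    for i in range(n - 1, -1, -1):
        if mask[i]:
            last = pixels[i]
        right[i] = last

    for i in range(n):
        if not mask[i]:
            if left[i] is not None and right[i] is not None:
                out[i] = (left[i] + right[i]) // 2  # filled from both sides
            elif left[i] is not None:
                out[i] = left[i]
            elif right[i] is not None:
                out[i] = right[i]
    return out
```

For instance, a row whose two defined pixels are 10 and 20 is padded outward by repetition and inward by averaging.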
While the progressive padding process as described above may be used to encode progressive texture information which has a larger spatial correlation between rows on a macroblock basis, the coding efficiency thereof may be low if the motion of an object within a VOP or a frame is considerably large. Therefore, prior to performing the motion estimation and compensation on a field-by-field basis for interlaced texture information with fast movement, such as a sporting event, horse racing or car racing, an interlaced padding process may be preferable to the progressive padding process, wherein in the interlaced padding process a macroblock is divided into two field blocks and padding is carried out on a field block basis.
However, if all field blocks are padded without considering their correlation between fields, certain field blocks may not be properly padded.
It is, therefore, an object of the invention to provide a method and apparatus capable of padding the interlaced texture information considering its correlation between fields.
In accordance with the invention, there is provided a method for encoding interlaced texture information on a texture macroblock basis through a motion estimation between a current VOP and its one or more reference VOP's, wherein each texture macroblock of the current and the reference VOP's has M×N defined or undefined texture pixels, M and N being positive even integers, respectively, the method comprising the steps of:
The above and other objects and features of the present invention will become apparent from the following description of preferred embodiments given in conjunction with the accompanying drawings, in which:
The division circuit 102 divides each texture macroblock into a top and a bottom field blocks which may be referred to as interlaced texture information, wherein the top field block having M/2×N texture pixels contains every odd row of each texture macroblock and the bottom field block having the other M/2×N texture pixels contains every even row of each texture macroblock. The top and the bottom field blocks for each texture macroblock are sequentially provided as a current top and a current bottom field blocks, respectively, to a subtractor 104 and a motion estimator 116.
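The field split performed by the division circuit 102 can be sketched as below; `split_fields` is a hypothetical helper name, and since Python indexes rows from zero, the patent's "odd rows" (first, third, ...) are the even indices.

```python
def split_fields(macroblock):
    """Divide an M x N texture macroblock (a list of M rows) into a top
    field block holding the 1st, 3rd, ... rows and a bottom field block
    holding the 2nd, 4th, ... rows, each with M/2 x N texture pixels."""
    top = macroblock[0::2]     # odd rows of the macroblock
    bottom = macroblock[1::2]  # even rows of the macroblock
    return top, bottom
```

A 16×16 macroblock thus yields two 8×16 field blocks that are processed sequentially as the current top and current bottom field blocks.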
Reference, e.g., previous, interlaced texture information, i.e., interlaced texture information of a reference VOP, is read out from a reference frame processing circuit 114 and provided to the motion estimator 116 and a motion compensator 118. The reference VOP is also partitioned into a plurality of search regions and each search region is divided into a top and a bottom search region, wherein the top search region having a predetermined number, e.g., P(M/2×N), of reference pixels contains every odd row of each search region and the bottom search region having the predetermined number of reference pixels contains every even row of each search region, P being a positive integer, typically, 2.
The motion estimator 116 determines a motion vector for each current top or bottom field block on a field-by-field basis. First, the motion estimator 116 detects two reference field blocks, i.e., a reference top and a reference bottom field block for each current top or bottom field block, wherein the two reference field blocks, within the top and the bottom search regions, respectively, are located at a same position as each current top or bottom field block. Since the top and the bottom search regions have a plurality of candidate top and candidate bottom field blocks including the reference top and the reference bottom field blocks, respectively, each current top or bottom field block is displaced on a pixel-by-pixel basis within the top and the bottom search regions so as to correspond with a candidate top or a candidate bottom field block at each displacement. At all possible displacements, errors between each current top or bottom field block and all of the candidate top and bottom field blocks are calculated and compared with one another, and the motion estimator 116 selects, as an optimum candidate field block or a most similar field block, the candidate top or bottom field block which yields a minimum error. Outputs from the motion estimator 116, i.e., a motion vector and a field indication flag, are provided to the motion compensator 118 and to a statistical coding circuit 108, which encodes them by using, e.g., a variable length coding(VLC) discipline, wherein the motion vector denotes a displacement between each current top or bottom field block and the optimum candidate field block and the field indication flag represents whether or not the optimum candidate field block belongs to the top search region.
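The field-by-field search can be sketched as a full search minimizing an error measure over both search regions. This is a sketch under stated assumptions: the sum of absolute differences (SAD) as the error measure and all helper names are illustrative choices, and for simplicity the position returned is absolute within the search region rather than a displacement from the same-position reference field block.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_match(cur, region):
    """Full search of `cur` over every candidate position in `region`;
    returns (error, row, col) of the minimum-error candidate block."""
    h, w = len(cur), len(cur[0])
    best = None
    for dy in range(len(region) - h + 1):
        for dx in range(len(region[0]) - w + 1):
            cand = [row[dx:dx + w] for row in region[dy:dy + h]]
            err = sad(cur, cand)
            if best is None or err < best[0]:
                best = (err, dy, dx)
    return best

def field_motion_search(cur, top_region, bottom_region):
    """Search both field search regions for a current field block and
    return the optimum candidate's position together with a field
    indication flag (True if it lies in the top search region)."""
    err_t, y_t, x_t = best_match(cur, top_region)
    err_b, y_b, x_b = best_match(cur, bottom_region)
    if err_t <= err_b:
        return (y_t, x_t, True)
    return (y_b, x_b, False)
```

The flag corresponds to the field indication flag provided to the motion compensator 118 and the statistical coding circuit 108.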
The motion compensator 118 provides the optimum candidate field block as a predicted top or bottom field block for each current top or bottom field block based on the motion vector and the field indication flag to the subtractor 104 and an adder 112.
The subtractor 104 obtains an error field block by subtracting the predicted top or bottom field block from each current top or bottom field block on a corresponding pixel-by-pixel basis, to provide the error field block to a texture encoding circuit 106.
In the texture encoding circuit 106, the error field block is subjected to an orthogonal transform for removing spatial redundancy thereof and then transform coefficients are quantized, to thereby provide the quantized transform coefficients to the statistical coding circuit 108 and a texture reconstruction circuit 110. Since a conventional orthogonal transform such as a discrete cosine transform(DCT) is performed on a DCT block-by-DCT block basis, each DCT block having typically 8×8 texture pixels, the error field block having 8×16 error texture pixels may be preferably divided into two DCT blocks in the texture encoding circuit 106. If necessary, before performing the DCT, each error field block may be DCT-padded based on the shape information or the reconstructed shape information of each VOP in order to reduce higher frequency components which may be generated in the DCT processing. For example, a predetermined value, e.g., ‘0’, may be assigned to the error texture pixels at the exterior of the contour in each VOP.
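The split of the 8×16 error field block into two 8×8 DCT blocks, together with the optional DCT padding of exterior error pixels, might look as follows. The helper names and the left/right split order are assumptions made for illustration.

```python
def split_into_dct_blocks(field_block):
    """Split an 8 x 16 error field block into two 8 x 8 DCT blocks,
    taken here as the left and the right halves of each row."""
    left = [row[:8] for row in field_block]
    right = [row[8:16] for row in field_block]
    return left, right

def dct_pad(block, shape_mask, fill=0):
    """Assign a predetermined value (e.g. 0) to error pixels at the
    exterior of the object contour before the DCT, where shape_mask
    is True for pixels inside the contour."""
    return [[p if m else fill for p, m in zip(p_row, m_row)]
            for p_row, m_row in zip(block, shape_mask)]
```

Zeroing the exterior error pixels limits the high-frequency components that sharp, arbitrary values outside the contour would otherwise inject into the transform.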
The statistical coding circuit 108 performs a statistical encoding on the quantized transform coefficients fed from the texture encoding circuit 106 and the field indication flag and the motion vector, for each current top or bottom field block, fed from the motion estimator 116 by using, e.g., a conventional variable length coding technique, to thereby provide statistically encoded data to a transmitter (not shown) for the transmission thereof.
In the meantime, the texture reconstruction circuit 110 performs an inverse quantization and inverse transform on the quantized transform coefficients to provide a reconstructed error field block, which corresponds to the error field block, to the adder 112. The adder 112 combines the reconstructed error field block from the texture reconstruction circuit 110 and the predicted top or bottom field block from the motion compensator 118 on a pixel-by-pixel basis, to thereby provide a combined result as a reconstructed top or bottom field block for each current top or bottom field block to the reference frame processing circuit 114.
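The pixel-by-pixel combination performed by the adder 112 can be sketched as below; clipping each sum to the valid 8-bit sample range is an assumption, since the text only describes the addition itself.

```python
def reconstruct_field_block(error_block, predicted_block, lo=0, hi=255):
    """Add the reconstructed error field block to the predicted top or
    bottom field block on a pixel-by-pixel basis, clipping each sum to
    the range [lo, hi]."""
    return [[min(hi, max(lo, e + p)) for e, p in zip(e_row, p_row)]
            for e_row, p_row in zip(error_block, predicted_block)]
```

The result serves as the reconstructed top or bottom field block passed on to the reference frame processing circuit 114.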
The reference frame processing circuit 114 sequentially pads the reconstructed top or bottom field block based on the shape information or the reconstructed shape information for the current VOP, to thereby store the padded top and bottom field blocks and provide them, as reference interlaced texture information for a subsequent current VOP, to the motion estimator 116 and the motion compensator 118.
At step S201, the reconstructed top or bottom field block is sequentially received and, at step S203, exterior pixels in the reconstructed top or bottom field block are eliminated based on the shape information, wherein the exterior pixels are located at the outside of the contour for the object. The reconstructed shape information may be used in place of the shape information. While the exterior pixels are eliminated to be set as transparent pixels, i.e., undefined texture pixels, the remaining interior pixels in the reconstructed top or bottom field block are provided as defined texture pixels on a field block-by-field block basis.
At step S204, it is determined whether or not each reconstructed block, having a reconstructed top and its corresponding reconstructed bottom field block, is traversed by the contour of the object. In other words, each reconstructed block is determined as an interior block, a boundary block, or an exterior block, wherein the interior block has only the defined texture pixels, the exterior block has only the undefined texture pixels and the boundary block has both the defined texture pixels and the undefined texture pixels. If the reconstructed block is determined as an interior block, at step S210, no padding is performed and the process goes to step S208.
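The classification at step S204 amounts to inspecting the binary shape mask of the reconstructed block. A minimal sketch, assuming the mask is represented as rows of booleans with True marking a defined texture pixel:

```python
def classify_block(shape_mask):
    """Classify a reconstructed block from its binary shape mask:
    'interior' if every texture pixel is defined, 'exterior' if none
    is, and 'boundary' if the block holds both kinds."""
    flat = [m for row in shape_mask for m in row]
    if all(flat):
        return "interior"
    if not any(flat):
        return "exterior"
    return "boundary"
```

Only boundary blocks undergo the field-based padding steps; interior blocks are stored as they are, and exterior blocks are handled later as undefined adjacent blocks.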
If the reconstructed block is a boundary block BB as shown in
First, at step S221, the boundary block is divided into a top and a bottom boundary field blocks T and B as shown in
At step S222, the undefined texture pixels are padded on a row-by-row basis by using a horizontal repetitive padding technique as shown in
If there exist one or more transparent rows, having the undefined texture pixels only, on each top or bottom field block, at step S223, each transparent row is padded by using one or more nearest defined or padded rows among the corresponding top or bottom field block, wherein the defined row has all the defined texture pixels therein. For example, as shown in
If there exists one transparent boundary field block in the boundary block as shown in
After all the interior and boundary blocks are padded as described above, in order to cope with a VOP of fast motion, the padding must be further extended to undefined adjacent blocks, i.e., exterior blocks which are adjacent to one or more interior or boundary blocks. The adjacent blocks can stretch outside the VOP, if necessary. At step S208, the undefined texture pixels in the undefined adjacent block are padded based on one of the extrapolated boundary blocks and the interior blocks to generate an extrapolated adjacent block for the undefined adjacent block, wherein each extrapolated boundary block has a part of the contour A of an object and each undefined adjacent block is shown as a shaded region as shown in
As described above, at step S211, the extrapolated boundary and the extrapolated adjacent blocks as well as the interior blocks are stored.
While the present invention has been described with respect to the particular embodiments, it will be apparent to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5623310 *||Mar 29, 1996||Apr 22, 1997||Daewoo Electronics Co., Ltd.||Apparatus for encoding a video signal employing a hierarchical image segmentation technique|
|US5929915 *||Dec 2, 1997||Jul 27, 1999||Daewoo Electronics Co., Ltd.||Interlaced binary shape coding method and apparatus|
|US5991453 *||Sep 30, 1997||Nov 23, 1999||Kweon; Ji-Heon||Method of coding/decoding image information|
|US6026195 *||Apr 28, 1999||Feb 15, 2000||General Instrument Corporation||Motion estimation and compensation of video object planes for interlaced digital video|
|US6035070 *||Sep 24, 1997||Mar 7, 2000||Moon; Joo-Hee||Encoder/decoder for coding/decoding gray scale shape data and method thereof|
|US6055330 *||Oct 9, 1996||Apr 25, 2000||The Trustees Of Columbia University In The City Of New York||Methods and apparatus for performing digital image and video segmentation and compression using 3-D depth information|
|US6069976 *||Apr 3, 1998||May 30, 2000||Daewoo Electronics Co., Ltd.||Apparatus and method for adaptively coding an image signal|
|EP0577365A2||Jun 28, 1993||Jan 5, 1994||Sony Corporation||Encoding and decoding of picture signals|
|U.S. Classification||375/240.01, 375/E07.105, 375/E07.12, 375/E07.081, 375/E07.228, 375/240.24, 375/E07.226, 375/E07.211, 375/240|
|International Classification||H04N19/20, H04N19/196, H04N19/503, H04N19/61, H04N19/563, H04N19/51, H04N19/21, H04N19/91, H04N19/625, H04N19/17, H04N19/167, H04N19/132, H04N19/176, H04N19/85, H04N19/137, H04N19/59, H04N7/24, H04N7/12|
|Cooperative Classification||H04N19/563, H04N19/61, H04N19/649, H04N19/20, H04N19/51, H04N19/593, H04N19/16|
|European Classification||H04N7/26A6S4, H04N7/26M2, H04N7/26J2, H04N7/50, H04N7/26M4P, H04N7/34B, H04N7/30B|
|Jan 9, 2013||AS||Assignment|
Owner name: ZTE CORPORATION, CHINA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DAEWOO ELECTRONICS CORPORATION;REEL/FRAME:029594/0117
Effective date: 20121214