CA2877241A1 - Method and apparatus for encoding and decoding image by using large transformation unit - Google Patents

Method and apparatus for encoding and decoding image by using large transformation unit

Info

Publication number
CA2877241A1
Authority
CA
Canada
Prior art keywords
unit
coding unit
prediction
transformation
coding
Prior art date
Legal status
Granted
Application number
CA 2877241
Other languages
French (fr)
Other versions
CA2877241C (en)
Inventor
Tammy Lee
Woo-Jin Han
Jianle Chen
Hae-Kyung Jung
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of CA2877241A1 publication Critical patent/CA2877241A1/en
Application granted granted Critical
Publication of CA2877241C publication Critical patent/CA2877241C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/24Systems for the transmission of television signals using pulse code modulation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/625Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using discrete cosine transform [DCT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/107Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/119Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N19/122Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96Tree coding, e.g. quad-tree coding

Abstract

Disclosed are an image encoding method and apparatus for encoding an image by grouping a plurality of adjacent prediction units into a transformation unit and transforming the plurality of adjacent prediction units into a frequency domain, and an image decoding method and apparatus for decoding an image encoded by using the image encoding method and apparatus.

Description

METHOD AND APPARATUS FOR ENCODING AND DECODING IMAGE BY USING LARGE TRANSFORMATION UNIT
This application is a divisional of Canadian Patent Application No. 2,768,181 filed August 13, 2010.
Technical Field
[1] The exemplary embodiments relate to a method and apparatus for encoding and decoding an image, and more particularly, to a method and apparatus for encoding and decoding an image by transforming an image of a pixel domain into coefficients of a frequency domain.
Background Art
[2] In order to perform image compression, most image encoding and decoding methods and apparatuses encode an image by transforming an image of a pixel domain into coefficients of a frequency domain. A discrete cosine transform (DCT), which is one of the frequency transform techniques, is a well-known technique that is widely used in image or sound compression. An image encoding method using the DCT involves performing the DCT on an image of a pixel domain, generating discrete cosine coefficients, quantizing the generated discrete cosine coefficients, and performing entropy coding on the generated discrete cosine coefficients.
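The pipeline summarized above (DCT, quantization, entropy coding) can be sketched in a few lines. The following Python snippet is an illustration only, not taken from the application; the block size, quantization step, and function names are assumptions, and the entropy-coding stage is omitted.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix of size n x n.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] /= np.sqrt(2.0)
    return m

def dct_and_quantize(block, q_step):
    # Transform a pixel-domain block into the frequency domain and quantize
    # the resulting coefficients; entropy coding would follow this step.
    d = dct_matrix(block.shape[0])
    coeffs = d @ block @ d.T
    return np.round(coeffs / q_step).astype(int)

# Example: an 8x8 block with an assumed quantization step of 16.
levels = dct_and_quantize(np.random.rand(8, 8) * 255, q_step=16)
```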
Disclosure of Invention
Solution to Problem
[3] The exemplary embodiments provide a method and apparatus for encoding and decoding an image by using a more efficient discrete cosine transform (DCT), and also provide a computer readable recording medium having recorded thereon a program for executing the method.
Advantageous Effects of Invention
[4] According to the one or more exemplary embodiments, it is possible to set the transformation unit so as to be greater than the prediction unit, and to perform the DCT, so that an image may be efficiently compressed and encoded.
Brief Description of Drawings
[5] The above and other features of the exemplary embodiments will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings in which:
[6] FIG. 1 is a block diagram of an image encoding apparatus according to an exemplary embodiment;
[7] FIG. 2 is a diagram of an image decoding apparatus according to another exemplary embodiment;
[8] FIG. 3 is a diagram of a hierarchical coding unit according to another exemplary embodiment;
[9] FIG. 4 is a block diagram of an image encoder based on a coding unit according to another exemplary embodiment;
[10] FIG. 5 is a block diagram of an image decoder based on a coding unit according to another exemplary embodiment;
[11] FIG. 6 illustrates a maximum coding unit, sub-coding units, and prediction units according to another exemplary embodiment;
[12] FIG. 7 is a diagram of a coding unit and a transformation unit according to another exemplary embodiment;
[13] FIGS. 8A and 8B illustrate division shapes of a maximum coding unit, a prediction unit, and a transformation unit according to another exemplary embodiment;
[14] FIG. 9 is a block diagram of an image encoding apparatus according to another exemplary embodiment;
[15] FIG. 10 is a diagram of the transformer;
[16] FIGS. 11A through 11C illustrate types of a transformation unit according to another exemplary embodiment;
[17] FIG. 12 illustrates different transformation units according to another exemplary embodiment;
[18] FIG. 13 is a block diagram of an image decoding apparatus according to another exemplary embodiment;
[19] FIG. 14 is a flowchart of an image encoding method, according to an exemplary embodiment; and
[20] FIG. 15 is a flowchart of an image decoding method, according to another exemplary embodiment.
Best Mode for Carrying out the Invention
[21] According to an aspect of an exemplary embodiment, there is provided an image encoding method including the operations of setting a transformation unit by selecting a plurality of adjacent prediction units; transforming the plurality of adjacent prediction units into a frequency domain according to the transformation unit, and generating frequency component coefficients; quantizing the frequency component coefficients; and performing entropy encoding on the quantized frequency component coefficients.
[22] The operation of setting the transformation unit may be performed based on a depth indicating a level of size-reduction that is gradually performed from a maximum coding unit of a current slice or a current picture to a sub-coding unit comprising the plurality of adjacent prediction units.
[23] The operation of setting the transformation unit may be performed by selecting a plurality of adjacent prediction units on which prediction is performed according to a same prediction mode.
[24] The same prediction mode may be an inter-prediction mode or an intra-prediction mode.
[25] The image encoding method may further include the operation of setting an optimal transformation unit by repeatedly performing the aforementioned operations on different transformation units, wherein the aforementioned operations include the operations of setting the transformation unit by selecting a plurality of adjacent prediction units, transforming the plurality of adjacent prediction units into the frequency domain according to the transformation unit and generating the frequency component coefficients, quantizing the frequency component coefficients and performing the entropy encoding on the quantized frequency component coefficients.
[26] According to another aspect of an exemplary embodiment, there is provided an image encoding apparatus including a transformer for setting a transformation unit by selecting a plurality of adjacent prediction units, transforming the plurality of adjacent prediction units into a frequency domain according to the transformation unit, and generating frequency component coefficients; a quantization unit for quantizing the frequency component coefficients; and an entropy encoding unit for performing entropy encoding on the quantized frequency component coefficients.
[27] According to another aspect of an exemplary embodiment, there is provided an image decoding method including the operations of entropy-decoding frequency component coefficients that are generated by being transformed to a frequency domain according to a transformation unit; inverse-quantizing the frequency component coefficients; and inverse-transforming the frequency component coefficients into a pixel domain, and reconstructing a plurality of adjacent prediction units comprised in the transformation unit.
[28] According to another aspect of an exemplary embodiment, there is provided an image decoding apparatus including an entropy decoder for entropy-decoding frequency component coefficients that are generated by being transformed to a frequency domain according to a transformation unit; an inverse-quantization unit for inverse-quantizing the frequency component coefficients; and an inverse-transformer for inverse-transforming the frequency component coefficients into a pixel domain, and reconstructing a plurality of adjacent prediction units comprised in the transformation unit.
[29] According to another aspect of an exemplary embodiment, there is provided a computer readable recording medium having recorded thereon a program for executing the image encoding and decoding methods.
Mode for the Invention
[30] Hereinafter, the exemplary embodiments will be described in detail with reference to the attached drawings. In the exemplary embodiments, "unit" may or may not refer to a unit of size, depending on its context, and "image" may denote a still image for a video or a moving image, that is, the video itself.
[31] FIG. 1 is a block diagram of an apparatus 100 for encoding an image, according to an exemplary embodiment.
[32] Referring to FIG. 1, the apparatus 100 includes a maximum encoding unit dividing unit 110, an encoding depth determining unit 120, an image data encoder 130, and an encoding information encoder 140.
[33] The maximum encoding unit dividing unit 110 can divide a current picture or slice based on a maximum coding unit that is an encoding unit of the largest size.
That is, the maximum encoding unit dividing unit 110 can divide the current picture or slice to obtain at least one maximum coding unit.
[34] According to an exemplary embodiment, an encoding unit can be represented using a maximum coding unit and a depth. As described above, the maximum coding unit indicates an encoding unit having the largest size from among coding units of the current picture, and the depth indicates the size of a sub coding unit obtained by hierarchically decreasing the coding unit. As a depth increases, a coding unit can decrease in size from a maximum coding unit to a minimum coding unit, wherein a depth of the maximum coding unit is defined as a minimum depth and a depth of the minimum coding unit is defined as a maximum depth. Since the size of a coding unit decreases from a maximum coding unit as a depth increases, a sub coding unit of a kth depth can include a plurality of sub coding units of a (k+n)th depth (k and n are integers equal to or greater than 1).
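As a concrete illustration of the depth notion above, assuming (as stated later in paragraph [47]) that height and width are halved at each depth increase, the size of a sub coding unit follows directly from the maximum coding unit size and the depth. The helper name below is hypothetical.

```python
def coding_unit_size(max_size, depth):
    # Each depth increase halves both height and width, so a coding unit
    # of a kth depth with size 2Nx2N yields (k+1)th-depth units of size NxN.
    return max_size >> depth

# With a 64x64 maximum coding unit, depths 0..4 give 64, 32, 16, 8, 4.
assert [coding_unit_size(64, d) for d in range(5)] == [64, 32, 16, 8, 4]
```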
[35] According to an increase of the size of a picture to be encoded, encoding an image in a greater coding unit can result in a higher image compression ratio. However, if a greater coding unit is fixed, an image cannot be efficiently encoded by taking into account the continuously changing image characteristics.
[36] For example, when a smooth area such as the sea or sky is encoded, the greater a coding unit is, the more a compression ratio can increase. However, when a complex area such as people or buildings is encoded, the smaller a coding unit is, the more a compression ratio can increase.
[37] Accordingly, according to an exemplary embodiment, a different maximum image coding unit and a different maximum depth are set for each picture or slice. Since a maximum depth denotes the maximum number of times by which a coding unit can decrease, the size of each minimum coding unit included in a maximum image coding unit can be variably set according to a maximum depth.
[38] The encoding depth determining unit 120 determines a maximum depth.
The maximum depth can be determined based on calculation of Rate-Distortion (R-D) cost.
The maximum depth may be determined differently for each picture or slice or for each maximum coding unit. The determined maximum depth is provided to the encoding information encoder 140, and image data according to maximum coding units is provided to the image data encoder 130.
[39] The maximum depth denotes a coding unit having the smallest size, which can be included in a maximum coding unit, i.e., a minimum coding unit. In other words, a maximum coding unit can be divided into sub coding units having different sizes according to different depths. This is described in detail later with reference to FIGS. 8A and 8B. In addition, the sub coding units having different sizes, which are included in the maximum coding unit, can be predicted or transformed based on processing units having different sizes. In other words, the apparatus 100 can perform a plurality of processing operations for image encoding based on processing units having various sizes and various shapes. To encode image data, processing operations such as prediction, transformation, and entropy encoding are performed, wherein processing units having the same size may be used for every operation or processing units having different sizes may be used for every operation.
[40] For example, the apparatus 100 can select a processing unit that is different from a coding unit to predict the coding unit.
[41] When the size of a coding unit is 2Nx2N (where N is a positive integer), processing units for prediction may be 2Nx2N, 2NxN, Nx2N, and NxN. In other words, motion prediction may be performed based on a processing unit having a shape whereby at least one of height and width of a coding unit is equally divided by two. Hereinafter, a processing unit, which is the base of prediction, is defined as a 'prediction unit'.
[42] A prediction mode may be at least one of an intra mode, an inter mode, and a skip mode, and a specific prediction mode can be performed for only a prediction unit having a specific size or shape. For example, the intra mode can be performed for only prediction units having the sizes of 2Nx2N and NxN of which the shape is a square.
Further, the skip mode can be performed for only a prediction unit having the size of 2Nx2N. If a plurality of prediction units exist in a coding unit, the prediction mode with the least encoding errors can be selected after performing prediction for every prediction unit.
[43] Alternatively, the apparatus 100 can perform frequency transformation on image data based on a processing unit having a different size from a coding unit. For the frequency transformation in the coding unit, the frequency transformation can be performed based on a processing unit having a size equal to or smaller than that of the coding unit. Hereinafter, a processing unit, which is the base of frequency transformation, is defined as a 'transformation unit'. The frequency transformation may be a Discrete Cosine Transform (DCT) or a Karhunen Loeve Transform (KLT).
[44] The encoding depth determining unit 120 can determine sub coding units included in a maximum coding unit using RD optimization based on a Lagrangian multiplier.
In other words, the encoding depth determining unit 120 can determine the shapes of a plurality of sub coding units divided from the maximum coding unit, wherein the plurality of sub coding units have different sizes according to their depths.
The image data encoder 130 outputs a bitstream by encoding the maximum coding unit based on the division shapes, i.e., the shapes which divide the maximum coding unit, as determined by the encoding depth determining unit 120.
[45] The encoding information encoder 140 encodes information about an encoding mode of the maximum coding unit determined by the encoding depth determining unit 120.
In other words, the encoding information encoder 140 outputs a bitstream by encoding information about a division shape of the maximum coding unit, information about the maximum depth, and information about an encoding mode of a sub coding unit for each depth. The information about the encoding mode of the sub coding unit can include information about a prediction unit of the sub coding unit, information about a prediction mode for each prediction unit, and information about a transformation unit of the sub coding unit.
[46] Since sub coding units having different sizes exist for each maximum coding unit and information about an encoding mode must be determined for each sub coding unit, information about at least one encoding mode can be determined for one maximum coding unit.
[47] The apparatus 100 can generate sub coding units by equally dividing both height and width of a maximum coding unit by two according to an increase of depth. That is, when the size of a coding unit of a kth depth is 2Nx2N, the size of a coding unit of a (k+1)th depth is NxN.
[48] Accordingly, the apparatus 100 according to an exemplary embodiment can determine an optimal division shape for each maximum coding unit based on sizes of maximum coding units and a maximum depth in consideration of image characteristics. By variably adjusting the size of a maximum coding unit in consideration of image characteristics and encoding an image through the division of a maximum coding unit into sub coding units of different depths, images having various resolutions can be more efficiently encoded.
[49] FIG. 2 is a block diagram of an apparatus 200 for decoding an image according to an exemplary embodiment.

[50] Referring to FIG. 2, the apparatus 200 includes an image data obtaining unit 210, an encoding information extracting unit 220, and an image data decoder 230.
[51] The image data obtaining unit 210 acquires image data according to maximum coding units by parsing a bitstream received by the apparatus 200 and outputs the image data to the image data decoder 230. The image data obtaining unit 210 can extract information about a maximum coding unit of a current picture or slice from a header of the current picture or slice. In other words, the image data obtaining unit 210 divides the bitstream in the maximum coding unit so that the image data decoder 230 can decode the image data according to maximum coding units.
[52] The encoding information extracting unit 220 extracts information about a maximum coding unit, a maximum depth, a division shape of the maximum coding unit, and an encoding mode of sub coding units from the header of the current picture by parsing the bitstream received by the apparatus 200. The information about a division shape and the information about an encoding mode are provided to the image data decoder 230.
[53] The information about a division shape of the maximum coding unit can include information about sub coding units having different sizes according to depths included in the maximum coding unit, and the information about an encoding mode can include information about a prediction unit according to sub coding unit, information about a prediction mode, and information about a transformation unit.
[54] The image data decoder 230 restores the current picture by decoding image data of every maximum coding unit based on the information extracted by the encoding information extracting unit 220. The image data decoder 230 can decode sub coding units included in a maximum coding unit based on the information about a division shape of the maximum coding unit. A decoding process can include a prediction process including intra prediction and motion compensation and an inverse transformation process.
[55] The image data decoder 230 can perform intra prediction or inter prediction based on information about a prediction unit and information about a prediction mode in order to predict a prediction unit. The image data decoder 230 can also perform inverse transformation for each sub coding unit based on information about a transformation unit of a sub coding unit.
[56] FIG. 3 illustrates hierarchical coding units according to an exemplary embodiment.
[57] Referring to FIG. 3, the hierarchical coding units according to an exemplary embodiment can include coding units whose widthxheights are 64x64, 32x32, 16x16, 8x8, and 4x4. Besides these coding units having perfect square shapes, coding units whose widthxheights are 64x32, 32x64, 32x16, 16x32, 16x8, 8x16, 8x4, and 4x8 may also exist.

[58] Referring to FIG. 3, for image data 310 whose resolution is 1920x1080, the size of a maximum coding unit is set to 64x64, and a maximum depth is set to 2.
[59] For image data 320 whose resolution is 1920x1080, the size of a maximum coding unit is set to 64x64, and a maximum depth is set to 4. For image data 330 whose resolution is 352x288, the size of a maximum coding unit is set to 16x16, and a maximum depth is set to 1.
[60] When the resolution is high or the amount of data is great, it is preferable, but not necessary, that a maximum size of a coding unit is relatively great to increase a compression ratio and exactly reflect image characteristics. Accordingly, for the image data 310 and 320 having higher resolution than the image data 330, 64x64 can be selected as the size of a maximum coding unit.
[61] A maximum depth indicates the total number of layers in the hierarchical coding units. Since the maximum depth of the image data 310 is 2, a coding unit 315 of the image data 310 can include a maximum coding unit whose longer axis size is 64 and sub coding units whose longer axis sizes are 32 and 16, according to an increase of a depth.
[62] On the other hand, since the maximum depth of the image data 330 is 1, a coding unit 335 of the image data 330 can include a maximum coding unit whose longer axis size is 16 and coding units whose longer axis size is 8, according to an increase of a depth.
[63] However, since the maximum depth of the image data 320 is 4, a coding unit 325 of the image data 320 can include a maximum coding unit whose longer axis size is 64 and sub coding units whose longer axis sizes are 32, 16, 8 and 4 according to an increase of a depth. Since an image is encoded based on a smaller sub coding unit as a depth increases, the exemplary embodiment is suitable for encoding an image including more minute details in scenes.
[64] FIG. 4 is a block diagram of an image encoder 400 based on a coding unit, according to an exemplary embodiment.
[65] An intra predictor 410 performs intra prediction on prediction units of the intra mode in a current frame 405, and a motion estimation unit 420 and a motion compensation unit 425 perform inter prediction and motion compensation on prediction units of the inter mode using the current frame 405 and a reference frame 495.
[66] Residual values are generated based on the prediction units output from the intra predictor 410, the motion estimation unit 420, and the motion compensation unit 425, and the generated residual values are output as quantized transform coefficients by passing through a transformer 430 and a quantization unit 440.
[67] The quantized transform coefficients are restored to residual values by passing through an inverse-quantization unit 460 and a frequency inverse-transformer 470, and the restored residual values are post-processed by passing through a deblocking unit 480 and a loop filtering unit 490 and output as the reference frame 495. The quantized transform coefficients can be output as a bitstream 455 by passing through an entropy encoder 450.
[68] To perform encoding based on an encoding method according to an exemplary embodiment, components of the image encoder 400, i.e., the intra predictor 410, the motion estimation unit 420, the motion compensation unit 425, the transformer 430, the quantization unit 440, the entropy encoder 450, the inverse-quantization unit 460, the frequency inverse-transformer 470, the deblocking unit 480 and the loop filtering unit 490, perform image encoding processes based on a maximum coding unit, a sub coding unit according to depths, a prediction unit, and a transformation unit.
[69] FIG. 5 is a block diagram of an image decoder 500 based on a coding unit, according to an exemplary embodiment.
[70] A bitstream 505 passes through a parsing unit 510 so that encoded image data to be decoded and encoding information necessary for decoding are parsed. The encoded image data is output as inverse-quantized data by passing through an entropy decoder 520 and an inverse-quantization unit 530 and restored to residual values by passing through a frequency inverse-transformer 540. The residual values are restored according to coding units by being added to an intra prediction result of an intra predictor 550 or a motion compensation result of a motion compensation unit 560. The restored coding units are used for prediction of next coding units or a next picture by passing through a deblocking unit 570 and a loop filtering unit 580.
[71] To perform decoding based on a decoding method according to an exemplary embodiment, components of the image decoder 500, i.e., the parsing unit 510, the entropy decoder 520, the inverse-quantization unit 530, the frequency inverse-transformer 540, the intra predictor 550, the motion compensation unit 560, the deblocking unit 570 and the loop filtering unit 580, perform image decoding processes based on a maximum coding unit, a sub coding unit according to depths, a prediction unit, and a transformation unit.
[72] In particular, the intra predictor 550 and the motion compensation unit 560 determine a prediction unit and a prediction mode in a sub coding unit by considering a maximum coding unit and a depth, and the frequency inverse-transformer 540 performs inverse transformation by considering the size of a transformation unit.
[73] FIG. 6 illustrates a maximum coding unit, a sub coding unit, and a prediction unit, according to an exemplary embodiment.
[74] The apparatus 100 and the apparatus 200 according to an exemplary embodiment use hierarchical coding units to perform encoding and decoding in consideration of image characteristics. A maximum coding unit and a maximum depth can be adaptively set according to the image characteristics or variably set according to requirements of a user.
[75] A hierarchical coding unit structure 600 according to an exemplary embodiment illustrates a maximum coding unit 610 whose height and width are 64 and maximum depth is 4. A depth increases along a vertical axis of the hierarchical coding unit structure 600, and as a depth increases, heights and widths of sub coding units 620 to 650 decrease. Prediction units of the maximum coding unit 610 and the sub coding units 620 to 650 are shown along a horizontal axis of the hierarchical coding unit structure 600.
[76] The maximum coding unit 610 has a depth of 0 and the size of a coding unit, i.e., height and width, of 64x64. A depth increases along the vertical axis, and there exist a sub coding unit 620 whose size is 32x32 and depth is 1, a sub coding unit 630 whose size is 16x16 and depth is 2, a sub coding unit 640 whose size is 8x8 and depth is 3, and a sub coding unit 650 whose size is 4x4 and depth is 4. The sub coding unit 650 whose size is 4x4 and depth is 4 is a minimum coding unit, and the minimum coding unit may be divided into prediction units, each of which is less than the minimum coding unit.
[77] Referring to FIG. 6, examples of a prediction unit are shown along the horizontal axis according to each depth. That is, a prediction unit of the maximum coding unit 610 whose depth is 0 may be a prediction unit whose size is equal to the coding unit 610, i.e., 64x64, or a prediction unit 612 whose size is 64x32, a prediction unit 614 whose size is 32x64, or a prediction unit 616 whose size is 32x32, which has a size smaller than the coding unit 610 whose size is 64x64.
[78] A prediction unit of the coding unit 620 whose depth is 1 and size is 32x32 may be a prediction unit whose size is equal to the coding unit 620, i.e., 32x32, or a prediction unit 622 whose size is 32x16, a prediction unit 624 whose size is 16x32, or a prediction unit 626 whose size is 16x16, which has a size smaller than the coding unit 620 whose size is 32x32.
[79] A prediction unit of the coding unit 630 whose depth is 2 and size is 16x16 may be a prediction unit whose size is equal to the coding unit 630, i.e., 16x16, or a prediction unit 632 whose size is 16x8, a prediction unit 634 whose size is 8x16, or a prediction unit 636 whose size is 8x8, which has a size smaller than the coding unit 630 whose size is 16x16.
[80] A prediction unit of the coding unit 640 whose depth is 3 and size is 8x8 may be a prediction unit whose size is equal to the coding unit 640, i.e., 8x8, or a prediction unit 642 whose size is 8x4, a prediction unit 644 whose size is 4x8, or a prediction unit 646 whose size is 4x4, which has a size smaller than the coding unit 640 whose size is 8x8.
[81] Finally, the coding unit 650 whose depth is 4 and size is 4x4 is a minimum coding unit and a coding unit of a maximum depth, and a prediction unit of the coding unit 650 may be a prediction unit 650 whose size is 4x4, a prediction unit 652 having a size of 4x2, a prediction unit 654 having a size of 2x4, or a prediction unit 656 having a size of 2x2.
[82] FIG. 7 illustrates a coding unit and a transformation unit, according to an exemplary embodiment.
[83] The apparatus 100 and the apparatus 200, according to an exemplary embodiment, perform encoding with a maximum coding unit itself or with sub coding units, which are equal to or smaller than the maximum coding unit, divided from the maximum coding unit.
[84] In the encoding process, the size of a transformation unit for frequency transformation is selected to be no larger than that of a corresponding coding unit. For example, when a coding unit 710 has the size of 64x64, frequency transformation can be performed using a transformation unit 720 having the size of 32x32.
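A sketch of the constraint stated above: candidate transformation unit sizes for a coding unit are those no larger than the coding unit itself. The function name and the minimum size of 4 are assumptions made only for illustration.

```python
def candidate_transform_sizes(coding_unit_size, min_size=4):
    # Enumerate square transformation unit sizes that do not exceed the
    # coding unit, e.g. a 64x64 coding unit admits 64, 32, 16, 8 and 4.
    sizes = []
    size = coding_unit_size
    while size >= min_size:
        sizes.append(size)
        size //= 2
    return sizes

assert candidate_transform_sizes(64) == [64, 32, 16, 8, 4]
```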
[85] FIGS. 8A and 8B illustrate division shapes of a coding unit, a prediction unit, and a transformation unit, according to an exemplary embodiment.
[86] FIG. 8A illustrates a coding unit and a prediction unit, according to an exemplary embodiment.
[87] A left side of FIG. 8A shows a division shape selected by the apparatus 100, according to an exemplary embodiment, in order to encode a maximum coding unit 810. The apparatus 100 divides the maximum coding unit 810 into various shapes, performs encoding, and selects an optimal division shape by comparing encoding results of various division shapes with each other based on R-D cost. When it is optimal to encode the maximum coding unit 810 as it is, the maximum coding unit 810 may be encoded without dividing the maximum coding unit 810 as illustrated in FIGS. 8A and 8B.
[88] Referring to the left side of FIG. 8A, the maximum coding unit 810 whose depth is 0 is encoded by dividing it into sub coding units whose depths are equal to or greater than 1. That is, the maximum coding unit 810 is divided into 4 sub coding units whose depths are 1, and all or some of the sub coding units whose depths are 1 are divided into sub coding units whose depths are 2.
[89] A sub coding unit located in an upper-right side and a sub coding unit located in a lower-left side among the sub coding units whose depths are 1 are divided into sub coding units whose depths are equal to or greater than 2. Some of the sub coding units whose depths are equal to or greater than 2 may be divided into sub coding units whose depths are equal to or greater than 3.
[90] The right side of FIG. 8A shows a division shape of a prediction unit for the maximum coding unit 810.

[91] Referring to the right side of FIG. 8A, a prediction unit 860 for the maximum coding unit 810 can be divided differently from the maximum coding unit 810. In other words, a prediction unit for each of sub coding units can be smaller than a corresponding sub coding unit.
[92] For example, a prediction unit for a sub coding unit 854 located in a lower-right side among the sub coding units whose depths are 1 can be smaller than the sub coding unit 854. In addition, prediction units for some (814, 816, 850, and 852) of sub coding units 814, 816, 818, 828, 850, and 852 whose depths are 2 can be smaller than the sub coding units 814, 816, 850, and 852, respectively. In addition, prediction units for sub coding units 822, 832, and 848 whose depths are 3 can be smaller than the sub coding units 822, 832, and 848, respectively. The prediction units may have a shape whereby respective sub coding units are equally divided by two in a direction of height or width or have a shape whereby respective sub coding units are equally divided by four in directions of height and width.
[93] FIG. 8B illustrates a prediction unit and a transformation unit, according to an exemplary embodiment.
[94] A left side of FIG. 8B shows a division shape of a prediction unit for the maximum coding unit 810 shown in the right side of FIG. 8A, and a right side of FIG. 8B shows a division shape of a transformation unit of the maximum coding unit 810.
[95] Referring to the right side of FIG. 8B, a division shape of a transformation unit 870 can be set differently from the prediction unit 860.
[96] For example, even though a prediction unit for the coding unit 854 whose depth is 1 is selected with a shape whereby the height of the coding unit 854 is equally divided by two, a transformation unit can be selected with the same size as the coding unit 854.
Likewise, even though prediction units for coding units 814 and 850 whose depths are 2 are selected with a shape whereby the height of each of the coding units 814 and 850 is equally divided by two, a transformation unit can be selected with the same size as the original size of each of the coding units 814 and 850.
[97] A transformation unit may be selected with a smaller size than a prediction unit. For example, when a prediction unit for the coding unit 852 whose depth is 2 is selected with a shape whereby the width of the coding unit 852 is equally divided by two, a transformation unit can be selected with a shape whereby the coding unit 852 is equally divided by four in directions of height and width, which has a smaller size than the shape of the prediction unit.
[98] FIG. 9 is a block diagram of an image encoding apparatus 900 according to another exemplary embodiment.
[99] Referring to FIG. 9, the image encoding apparatus 900 according to the present exemplary embodiment includes a transformer 910, a quantization unit 920, and an entropy encoder 930.
[100] The transformer 910 receives an image processing unit of a pixel domain, and transforms the image processing unit into a frequency domain. The transformer receives a plurality of prediction units including residual values generated due to intra-prediction or inter-prediction, and transforms the prediction units into a frequency domain. As a result of the transform to the frequency domain, coefficients of frequency components are generated. According to the present exemplary embodiment, the transform to the frequency domain may occur via a discrete cosine transform (DCT) or Karhunen Loeve Transform (KLT), and as a result of the DCT or KLT, coefficients of frequency domain are generated. Hereinafter, the transform to the frequency domain may be the DCT; however, it is obvious to one of ordinary skill in the art that the transform to the frequency domain may be any transform involving transformation of an image of a pixel domain into a frequency domain.
[101] Also, according to the present exemplary embodiment, the transformer 910 sets a transformation unit by grouping a plurality of prediction units, and performs the transformation according to the transformation unit. This process will be described in detail with reference to FIGS. 10, 11A, 11B, and 12.
[102] FIG. 10 is a diagram of the transformer 910.
[103] Referring to FIG. 10, the transformer 910 includes a selection unit 1010 and a transform performing unit 1020.
[104] The selection unit 1010 sets a transformation unit by selecting a plurality of adjacent prediction units.
[105] An image encoding apparatus according to the related art performs intra-prediction or inter-prediction according to a block having a predetermined size, i.e., according to a prediction unit, and performs the DCT based on a size that is less than or equal to that of the prediction unit. In other words, the image encoding apparatus according to the related art performs the DCT by using transformation units that are less than or equal to the prediction unit.
[106] However, due to a plurality of pieces of header information added to the transformation units, added overheads are increased as the transformation units are decreased, such that a compression rate of an image encoding operation deteriorates. In order to solve this problem, the image encoding apparatus 900 according to the present exemplary embodiment groups a plurality of adjacent prediction units into a transformation unit, and performs transformation according to the transformation unit that is generated by the grouping. There is a high possibility that the adjacent prediction units may include similar residual values, so that, if the adjacent prediction units are grouped into one transformation unit and then the transformation is performed thereon, a compression rate of an encoding operation may be greatly increased.
[107] For this increase, the selection unit 1010 selects the adjacent prediction units to be grouped into one transformation unit. This process will be described in detail with reference to FIGS. 11A through 11C and 12.
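A minimal sketch of the grouping idea described in paragraphs [106] and [107]: the residuals of adjacent prediction units are tiled into one larger transformation unit and transformed once, so header information is signalled per grouped unit rather than per small unit. The unit layout and quantization step are assumptions, and SciPy's dctn is used as a stand-in for the DCT.

```python
import numpy as np
from scipy.fft import dctn

def group_and_transform(residual_units, q_step=16):
    # Tile adjacent prediction-unit residuals (given as a 2D layout of
    # equally sized blocks) into one transformation unit and apply a
    # single 2D DCT to the grouped block.
    grouped = np.block(residual_units)
    coeffs = dctn(grouped, type=2, norm='ortho')
    return np.round(coeffs / q_step).astype(int)

# Four adjacent 8x8 prediction-unit residuals grouped into one 16x16 unit.
units = [[np.ones((8, 8)), np.ones((8, 8))],
         [np.ones((8, 8)), np.ones((8, 8))]]
levels = group_and_transform(units)
```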
[108] FIGS. 11A through 11C illustrate types of a transformation unit according to another exemplary embodiment.
[109] Referring to FIGS. 11A through 11C, a prediction unit 1120 with respect to a coding unit 1110 may have a division shape obtained by halving a width of the coding unit 1110. The coding unit 1110 may be a maximum coding unit, or may be a sub-coding unit having a smaller size than the maximum coding unit.
[110] As illustrated in FIG. 11A, a size of the transformation unit 1130 may be less than the prediction unit 1120, or as illustrated in FIG. 11B, a size of the transformation unit 1140 may be equal to the prediction unit 1120. Also, as illustrated in FIG.
11C, a size of the transformation unit 1150 may be greater than the prediction unit 1120.
That is, the transformation units 1130 through 1150 may be set while having no connection with the prediction unit 1120.
[111] Also, FIG. 11C illustrates an example in which the transformation unit 1150 is set by grouping a plurality of the prediction units 1120 included in the coding unit 1110.
However, a transformation unit may be set to be greater than a coding unit in a manner that a plurality of prediction units, which are included not in one coding unit but in a plurality of coding units, are set as one transformation unit. In other words, as described with reference to FIGS. 11A through 11C, a transformation unit may be set to be equal to or less than a size of a coding unit, or to be greater than the size of the coding unit. That is, the transformation unit may be set while having no connection with the prediction unit and the coding unit.
[112] Although FIGS. 11A through 11C illustrate examples in which the transformation unit has a square form, according to a method of grouping adjacent prediction units, the transformation unit may have a rectangular form. For example, in a case where the prediction unit is not set to have rectangular forms as illustrated in FIGS. 11A through 11C but is set to have four square forms obtained by quadrisecting the coding unit 1110, upper and lower prediction units, or left and right prediction units, are grouped so that the transformation unit may have a rectangular form whose horizontal side or vertical side is long.
[113] Referring back to FIG. 10, there is no limit in a criterion by which the selection unit 1010 selects the adjacent prediction units. However, according to the exemplary embodiment, the selection unit 1010 may select the transformation unit according to a depth. As described above, the depth indicates a level of size-reduction that is gradually performed from a maximum coding unit of a current slice or a current picture to a sub-coding unit. As described above with reference to FIGS. 3 and 6, as the depth is increased, a size of a sub-coding unit is decreased, and thus a prediction unit included in the sub-coding unit is also decreased. In this case, if the transformation is performed according to a transformation unit that is less than or equal to the prediction unit, a compression rate of an image encoding operation deteriorates since header information is added to every transformation unit.
[114] Thus, with respect to a sub-coding unit at a depth of a predetermined value, it is preferable, but not necessary, that prediction units included in the sub-coding unit are grouped and set as a transformation unit, and then the transformation is performed thereon. For this, the selection unit 1010 sets the transformation unit based on the depth of the sub-coding unit. For example, in the case where a depth of the coding unit 1110 in FIG. 11C is greater than k, the selection unit 1010 groups prediction units 1120 and sets them as a transformation unit 1150.
[115] Also, according to another exemplary embodiment, the selection unit 1010 may group a plurality of adjacent prediction units on which prediction is performed according to the same prediction mode, and may set them as one transformation unit.
The selection unit 1010 groups the adjacent prediction units on which prediction is performed according to intra-prediction or inter-prediction, and then sets them as one transformation unit. Since there is a high possibility that the adjacent prediction units on which prediction is performed according to the same prediction mode include similar residual values, it is possible to group the adjacent prediction units into the transformation unit and then to perform the transformation on the adjacent prediction units.
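The two selection criteria just described (grouping by depth, and grouping prediction units that share a prediction mode) can be sketched as follows. The data layout, the dictionary key 'mode', and the threshold name are hypothetical; the application does not prescribe a particular data structure.

```python
def select_groupings(prediction_units, depth, depth_threshold):
    # If the sub-coding unit is deep enough, group all of its prediction
    # units into one transformation unit; otherwise group only adjacent
    # units that were predicted with the same mode (intra or inter).
    if depth > depth_threshold:
        return [prediction_units]
    groups = {}
    for unit in prediction_units:
        groups.setdefault(unit["mode"], []).append(unit)
    return list(groups.values())

units = [{"mode": "intra"}, {"mode": "intra"}, {"mode": "inter"}]
print(select_groupings(units, depth=1, depth_threshold=2))
```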
[116] When the selection unit 1010 sets the transformation unit, the transform performing unit 1020 transforms the adjacent prediction units into a frequency domain, according to the transformation unit. The transform performing unit 1020 performs the DCT on the adjacent prediction units according to the transformation unit, and generates discrete cosine coefficients.
[117] Referring back to FIG. 9, the quantization unit 920 quantizes frequency component coefficients generated by the transformer 910, e.g., the discrete cosine coefficients. The quantization unit 920 may quantize the discrete cosine coefficients that are input according to a predetermined quantization step.
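A sketch of uniform quantization by a predetermined step, with the matching inverse used later at the decoder; rounding is one common choice and is an assumption here, since the application does not fix a particular quantization rule.

```python
import numpy as np

def quantize(coeffs, q_step):
    # Map frequency component coefficients to integer levels.
    return np.round(coeffs / q_step).astype(int)

def dequantize(levels, q_step):
    # Inverse quantization, as performed later by the decoder.
    return levels * q_step

levels = quantize(np.array([[52.3, -7.5], [3.2, 0.4]]), q_step=8)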
[118] The entropy encoder 930 performs entropy encoding on the frequency component coefficients that are quantized by the quantization unit 920. The entropy encoder 930 may perform the entropy encoding on the discrete cosine coefficients by using context-adaptive binary arithmetic coding (CABAC) or context-adaptive variable length coding (CAVLC).
[119] The image encoding apparatus 900 may determine an optimal transformation unit by repeatedly performing the DCT, the quantization, and the entropy encoding on different transformation units. A procedure for selecting the adjacent prediction units may be repeated to determine the optimal transformation unit. The optimal transformation unit may be determined in consideration of an RD cost calculation, and this will be described in detail with reference to FIG. 12.
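The repeated encoding over different transformation units can be seen as a rate-distortion search. Below is a generic sketch, not the application's specific procedure: 'encode' is a hypothetical callable returning the distortion and bit rate obtained with one candidate grouping, and the candidate with the lowest cost D + lambda * R is kept.

```python
def choose_optimal_transformation_unit(candidates, encode, lam):
    # Try every candidate grouping of adjacent prediction units and keep
    # the one minimizing the Lagrangian rate-distortion cost.
    best_unit, best_cost = None, float("inf")
    for unit in candidates:
        distortion, rate = encode(unit)
        cost = distortion + lam * rate
        if cost < best_cost:
            best_unit, best_cost = unit, cost
    return best_unit
```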
[120] FIG. 12 illustrates different transformation units according to another exemplary embodiment.
[121] Referring to FIG. 12, the image encoding apparatus 900 repeatedly performs an encoding operation on the different transformation units.
[122] As illustrated in FIG. 12, a coding unit 1210 may be predicted and encoded based on a prediction unit 1220 having a smaller size than the coding unit 1210. A transformation is performed on residual values that are generated by a result of the prediction, and here, as illustrated in FIG. 12, the DCT may be performed on the residual values based on the different transformation units.
[123] A first-illustrated transformation unit 1230 has the same size as the coding unit 1210, and has a size obtained by grouping all prediction units included in the coding unit 1210.
[124] A second-illustrated transformation unit 1240 has sizes obtained by halving a width of the coding unit 1210, and the sizes are obtained by grouping every two prediction units adjacent to each other in a vertical direction, respectively.
[125] A third-illustrated transformation unit 1250 has sizes obtained by halving a height of the coding unit 1210, and the sizes are obtained by grouping every two prediction units adjacent to each other in a horizontal direction, respectively.
[126] A fourth-illustrated transformation unit 1260 is used when the transformation is performed based on the fourth-illustrated transformation unit 1260 having the same size as the prediction unit 1220.
[127] FIG. 13 is a block diagram of an image decoding apparatus 1300 according to another exemplary embodiment.
[128] Referring to FIG. 13, the image decoding apparatus 1300 according to the present exemplary embodiment includes an entropy decoder 1310, an inverse-quantization unit 1320, and an inverse-transformer 1330.
[129] The entropy decoder 1310 performs entropy decoding on frequency component coefficients with respect to a predetermined transformation unit. As described above with reference to FIGS. 11A through 11C and 12, the predetermined transformation unit may be a transformation unit generated by grouping a plurality of adjacent prediction units.
[130] As described above with reference to the image encoding apparatus 900, the transformation unit may be generated by grouping the adjacent prediction units according to a depth, or may be generated by grouping a plurality of adjacent prediction units on which prediction is performed according to the same prediction mode, that is, according to an intra-prediction mode or an inter-prediction mode.
[131] The plurality of prediction units may not be included in one coding unit but included in a plurality of coding units. In other words, as described above with reference to FIGS. 11A through 11C, the transformation unit that is entropy-decoded by the entropy decoder 1310 may be set to be equal to or less than a size of a coding unit, or to be greater than the size of the coding unit.
[132] Also, as described above with reference to FIG. 12, the transformation unit may be an optimal transformation unit selected by repeating a procedure for grouping a plurality of adjacent prediction units, and by repeatedly performing a transformation, quantization, and entropy encoding on different transformation units.
[133] The inverse-quantization unit 1320 inverse-quantizes the frequency component coefficients that are entropy-decoded by the entropy decoder 1310.
[134] The inverse-quantization unit 1320 inverse-quantizes the entropy-decoded frequency component coefficients according to a quantization step that is used in encoding of the transformation unit.
[135] The inverse-transformer 1330 inverse-transforms the inverse-quantized frequency component coefficients into a pixel domain. The inverse-transformer may perform an inverse-DCT on inverse-quantized discrete cosine coefficients (i.e., the inverse-quantized frequency component coefficients), and then may reconstruct a transformation unit of the pixel domain. The reconstructed transformation unit may include adjacent prediction units.
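The decoder-side chain described in paragraphs [133] through [135] (inverse quantization followed by an inverse DCT back to the pixel domain) can be sketched as follows; the quantization step value is an assumption, and SciPy's idctn stands in for the inverse DCT.

```python
import numpy as np
from scipy.fft import idctn

def reconstruct_transformation_unit(levels, q_step):
    # Inverse-quantize entropy-decoded coefficient levels, then apply an
    # inverse 2D DCT; the reconstructed block spans the grouped adjacent
    # prediction units of the transformation unit.
    coeffs = levels.astype(float) * q_step
    return idctn(coeffs, type=2, norm='ortho')

residuals = reconstruct_transformation_unit(np.ones((16, 16), dtype=int), q_step=8)
```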
[136] FIG. 14 is a flowchart of an image encoding method, according to an exemplary embodiment.
[137] Referring to FIG. 14, in operation 1410, an image encoding apparatus sets a transformation unit by selecting a plurality of adjacent prediction units. The image encoding apparatus may select a plurality of adjacent prediction units according to a depth, or may select a plurality of adjacent prediction units on which prediction is performed according to the same prediction mode.
[138] In operation 1420, the image encoding apparatus transforms the adjacent prediction units into a frequency domain according to the transformation unit set in operation 1410. The image encoding apparatus groups the adjacent prediction units, performs a DCT on the adjacent prediction units, and thus generates discrete cosine coefficients.
[139] In operation 1430, the image encoding apparatus quantizes frequency component coefficients, generated in operation 1420, according to a quantization step.
[140] In operation 1440, the image encoding apparatus performs entropy encoding on the frequency component coefficients quantized in operation 1430. The image encoding apparatus performs the entropy encoding on the discrete cosine coefficients by using CABAC or CAVLC.
[141] An image encoding method according to another exemplary embodiment may further include an operation of setting an optimal transformation unit by repeatedly performing operations 1410 through 1440 on different transformation units.
That is, by repeatedly performing the transformation, the quantization, and the entropy encoding on different transformation units as illustrated in FIG. 12, it is possible to set the optimal transformation unit.
[142] FIG. 15 is a flowchart of an image decoding method, according to another exemplary embodiment.
[143] Referring to FIG. 15, in operation 1510, an image decoding apparatus performs entropy decoding on frequency component coefficients with respect to a predetermined transformation unit. The frequency component coefficients may be discrete cosine coefficients.
[144] In operation 1520, the image decoding apparatus inverse-quantizes the frequency component coefficients that are entropy-decoded in operation 1510. The image decoding apparatus inverse-quantizes the discrete cosine coefficients by using a quantization step used in an encoding operation.
[145] In operation 1530, the image decoding apparatus inverse-transforms the frequency component coefficients, which have been inverse-quantized in operation 1520, into a pixel domain and then reconstructs the transformation unit. The reconstructed transformation unit is set by grouping a plurality of adjacent prediction units. As described above, the transformation unit may be set by grouping the adjacent prediction units according to a depth, or may be set by grouping a plurality of adjacent prediction units on which prediction is performed according to the same prediction mode.
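Putting the hypothetical helpers from the earlier sketches together, a small round-trip example of operations 1520 and 1530 on a 16x16 transformation unit that spans four 8x8 prediction units might look as follows; the residual values are synthetic and the reconstruction matches the tiled residuals only up to quantization error.

    import numpy as np

    # Round trip over a 16x16 transformation unit made of four 8x8 prediction-unit
    # residuals (synthetic values, for illustration only).
    rng = np.random.default_rng(0)
    residual_blocks = [rng.integers(-32, 32, size=(8, 8)).astype(np.float64)
                       for _ in range(4)]
    q_step = 4.0

    levels = quantize(transform_grouped_units(residual_blocks, units_per_row=2), q_step)
    reconstructed = idct2(inverse_quantize(levels, q_step))   # operations 1520 and 1530
    print(reconstructed.shape)                                # prints (16, 16)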
[146] According to the one or more exemplary embodiments, it is possible to set the transformation unit so as to be greater than the prediction unit, and to perform the DCT, so that an image may be efficiently compressed and encoded.
[147] The exemplary embodiments can also be embodied as computer-readable codes on a computer-readable recording medium. The computer-readable recording medium is any data storage device that can store data, which can be thereafter read by a computer system. Examples of the computer-readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
[148] For example, each of the image encoding apparatus, the image decoding apparatus, the image encoder, and the image decoder according to the one or more embodiments may include a bus coupled to each unit in an apparatus as illustrated in FIGS. 1-2, 4-5, 9-10, and 14, and at least one processor coupled to the bus. Also, each of the image encoding apparatus, the image decoding apparatus, the image encoder, and the image decoder according to the one or more embodiments may include a memory coupled to the at least one processor that is coupled to the bus so as to store commands, received messages or generated messages, and to execute the commands.
[149] While this invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
The exemplary embodiments should be considered in a descriptive sense only and not for purposes of limitation. Therefore, the scope of the invention is defined not by the detailed description of the invention but by the appended claims, and all differences within the scope will be construed as being included in the present invention.

Claims (4)

1. A method of decoding an image, the method comprising:
obtaining quantized transformation coefficients of a transformation unit in a coding unit by performing entropy decoding on a bitstream encoded based on the coding unit;
determining the coding unit which is hierarchically split from a maximum coding unit using split information of the coding unit, wherein the split information is parsed from the bitstream;
determining at least one prediction unit which is split from the coding unit using information about a partition type, wherein the information about a partition type is parsed from the bitstream;
determining at least one transformation unit which is split from the coding unit using information about division for the at least one transformation unit, wherein the information about division for the at least one transformation unit is parsed from the bitstream;
obtaining residuals of the transformation unit by performing inverse quantization and inverse transformation on the quantized transformation coefficients of the transformation unit parsed from the bitstream; and performing intra prediction or inter prediction using at least one prediction unit included in the coding unit to generate a predictor, and reconstructing the coding unit using the residuals and the predictor, wherein the image is split into a plurality of maximum coding units comprising the maximum coding unit, and a size of the transformation unit split from the coding unit is determined without considering a size of the at least one prediction unit in the coding unit.
2. The method of claim 1, wherein the size of the transformation unit is different from the size of the prediction unit.
3. The method of claim 1, wherein, when the split information indicates a split for a current depth, a coding unit of the current depth is split into four square coding units of a lower depth, independently from neighboring coding units.
4. An apparatus for decoding an image, the apparatus comprising:
a processor which is configured for obtaining quantized transformation coefficients of a transformation unit in a coding unit by performing entropy decoding on a bitstream encoded based on the coding unit; and a decoder which is configured for reconstructing residuals of the transformation unit by performing inverse quantization and inverse transformation on the quantized transformation coefficients of the transformation unit parsed from the bitstream, and performing intra prediction or inter prediction using at least one prediction unit included in the coding unit, wherein the processor is configured for determining the coding unit which is hierarchically split from a maximum coding unit using split information of the coding unit, wherein the split information is parsed from the bitstream, determining at least one prediction unit which is split from the coding unit using information about a partition type, wherein the information about a partition type is parsed from the bitstream, and determining at least one transformation unit which is split from the coding unit using information about division for the at least one transformation unit, wherein the information about division for the at least one transformation unit is parsed from the bitstream, and wherein the image is split into a plurality of maximum coding units comprising the maximum coding unit, and a size of the transformation unit split from the coding unit is determined without considering a size of the at least one prediction unit in the coding unit.
CA2877241A 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit Active CA2877241C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR10-2009-0074895 2009-08-13
KR1020090074895A KR101474756B1 (en) 2009-08-13 2009-08-13 Method and apparatus for encoding and decoding image using large transform unit
CA 2768181 CA2768181C (en) 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CA 2768181 Division CA2768181C (en) 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit

Publications (2)

Publication Number Publication Date
CA2877241A1 true CA2877241A1 (en) 2011-02-17
CA2877241C CA2877241C (en) 2016-10-11

Family

ID=43586668

Family Applications (5)

Application Number Title Priority Date Filing Date
CA2877255A Active CA2877255C (en) 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit
CA 2768181 Active CA2768181C (en) 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit
CA2877241A Active CA2877241C (en) 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit
CA 2815777 Active CA2815777C (en) 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit
CA 2815893 Active CA2815893C (en) 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CA2877255A Active CA2877255C (en) 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit
CA 2768181 Active CA2768181C (en) 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit

Family Applications After (2)

Application Number Title Priority Date Filing Date
CA 2815777 Active CA2815777C (en) 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit
CA 2815893 Active CA2815893C (en) 2009-08-13 2010-08-13 Method and apparatus for encoding and decoding image by using large transformation unit

Country Status (24)

Country Link
US (10) US8792741B2 (en)
EP (7) EP2866442B1 (en)
JP (7) JP5746169B2 (en)
KR (1) KR101474756B1 (en)
CN (6) CN102484703B (en)
AU (1) AU2010283113B2 (en)
BR (3) BR112012001757A2 (en)
CA (5) CA2877255C (en)
CY (5) CY1119836T1 (en)
DK (6) DK3282696T3 (en)
ES (6) ES2648089T3 (en)
HR (5) HRP20171769T1 (en)
HU (6) HUE039342T2 (en)
IN (3) IN2015MN00400A (en)
LT (5) LT2629518T (en)
MX (1) MX2012000614A (en)
MY (3) MY157499A (en)
NO (4) NO2866442T3 (en)
PL (6) PL3282696T3 (en)
PT (5) PT2890123T (en)
RU (4) RU2551794C2 (en)
SI (5) SI2629526T1 (en)
WO (1) WO2011019234A2 (en)
ZA (5) ZA201201157B (en)

Families Citing this family (38)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7956930B2 (en) 2006-01-06 2011-06-07 Microsoft Corporation Resampling and picture resizing operations for multi-resolution video coding and decoding
US8711948B2 (en) * 2008-03-21 2014-04-29 Microsoft Corporation Motion-compensated prediction of inter-layer residuals
US9571856B2 (en) 2008-08-25 2017-02-14 Microsoft Technology Licensing, Llc Conversion operations in scalable video encoding and decoding
US8503527B2 (en) 2008-10-03 2013-08-06 Qualcomm Incorporated Video coding with large macroblocks
KR101474756B1 (en) 2009-08-13 2014-12-19 삼성전자주식회사 Method and apparatus for encoding and decoding image using large transform unit
PL2991353T3 (en) 2009-10-01 2017-09-29 Sk Telecom Co., Ltd. Apparatus for encoding image using split layer
CN104618720B (en) * 2009-10-20 2018-11-13 夏普株式会社 Dynamic image encoding device, moving image decoding apparatus, dynamic image encoding method and dynamic image decoding method
KR101487687B1 (en) 2010-01-14 2015-01-29 삼성전자주식회사 Method and apparatus for encoding and decoding image using large transform unit
KR101602873B1 (en) * 2010-04-13 2016-03-11 삼성전자주식회사 Method and apparatus for video encoding with deblocking filtering based on tree-structured data unit, and method and apparatus for video decoding with the same
EP2955918B1 (en) * 2010-08-17 2018-07-25 Samsung Electronics Co., Ltd Video decoding using transformation unit of variable tree structure
KR20120035096A (en) * 2010-10-04 2012-04-13 한국전자통신연구원 A method and apparatus of side information signaling for quadtree transform
US9532059B2 (en) 2010-10-05 2016-12-27 Google Technology Holdings LLC Method and apparatus for spatial scalability for video coding
US8605567B2 (en) * 2010-12-02 2013-12-10 Adtran, Inc. Apparatuses and methods for enabling crosstalk vectoring in expandable communication systems
EP4340362A2 (en) 2010-12-13 2024-03-20 Electronics And Telecommunications Research Institute Method for determining reference unit
JP5594841B2 (en) * 2011-01-06 2014-09-24 Kddi株式会社 Image encoding apparatus and image decoding apparatus
US8494290B2 (en) * 2011-05-05 2013-07-23 Mitsubishi Electric Research Laboratories, Inc. Method for coding pictures using hierarchical transform units
RU2620718C2 (en) * 2011-06-30 2017-05-29 Самсунг Электроникс Ко., Лтд. Method for video encoding with control of bits depth conversion to fixed-point and device for it, and method for video decoding and device for it
CA2840887C (en) 2011-07-01 2018-06-19 Samsung Electronics Co., Ltd. Method and apparatus for entropy encoding using hierarchical data unit, and method and apparatus for decoding
US9807426B2 (en) * 2011-07-01 2017-10-31 Qualcomm Incorporated Applying non-square transforms to video data
WO2013076888A1 (en) * 2011-11-21 2013-05-30 パナソニック株式会社 Image processing device and image processing method
US9094681B1 (en) 2012-02-28 2015-07-28 Google Inc. Adaptive segmentation
US8396127B1 (en) * 2012-06-27 2013-03-12 Google Inc. Segmentation for video coding using predictive benefit
TWI535222B (en) * 2012-06-29 2016-05-21 Sony Corp Image processing apparatus and method
CN115052156A (en) * 2012-07-02 2022-09-13 韩国电子通信研究院 Video encoding/decoding method and non-transitory computer-readable recording medium
DK3361733T3 (en) * 2012-07-02 2020-01-20 Samsung Electronics Co Ltd ENTROPY CODING A VIDEO AND ENTROPY DECODING A VIDEO
US9332276B1 (en) 2012-08-09 2016-05-03 Google Inc. Variable-sized super block based direct prediction mode
US9380298B1 (en) 2012-08-10 2016-06-28 Google Inc. Object-based intra-prediction
US9445124B2 (en) 2013-03-15 2016-09-13 Samsung Electronics Co., Ltd. Electronic system with frequency mechanism and method of operation thereof
JP2016519519A (en) * 2013-04-11 2016-06-30 エルジー エレクトロニクス インコーポレイティド Video signal processing method and apparatus
JP6402520B2 (en) * 2014-07-22 2018-10-10 沖電気工業株式会社 Encoding apparatus, method, program, and apparatus
US20160029022A1 (en) * 2014-07-25 2016-01-28 Mediatek Inc. Video processing apparatus with adaptive coding unit splitting/merging and related video processing method
CN109155857B (en) * 2016-03-11 2023-05-30 数字洞察力有限公司 Video coding method and device
KR102416804B1 (en) * 2016-10-14 2022-07-05 세종대학교산학협력단 Image encoding method/apparatus, image decoding method/apparatus and and recording medium for storing bitstream
KR101823533B1 (en) * 2017-03-21 2018-01-30 삼성전자주식회사 Method and apparatus for encoding and decoding image using large transform unit
CN117834920A (en) * 2018-01-17 2024-04-05 英迪股份有限公司 Method of decoding or encoding video and method for transmitting bit stream
KR101913734B1 (en) 2018-01-24 2018-10-31 삼성전자주식회사 Method and apparatus for encoding and decoding image using large transform unit
CN116320411A (en) * 2018-03-29 2023-06-23 日本放送协会 Image encoding device, image decoding device, and program
JP7378035B2 (en) * 2018-09-12 2023-11-13 パナソニックIpマネジメント株式会社 Conversion device, decoding device, conversion method and decoding method

Family Cites Families (79)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6263422B1 (en) * 1992-06-30 2001-07-17 Discovision Associates Pipeline processing machine with interactive stages operable in response to tokens and system and methods relating thereto
US5842033A (en) * 1992-06-30 1998-11-24 Discovision Associates Padding apparatus for passing an arbitrary number of bits through a buffer in a pipeline system
JP3133517B2 (en) * 1992-10-15 2001-02-13 シャープ株式会社 Image region detecting device, image encoding device using the image detecting device
US5598514A (en) * 1993-08-09 1997-01-28 C-Cube Microsystems Structure and method for a multistandard video encoder/decoder
US5610657A (en) * 1993-09-14 1997-03-11 Envistech Inc. Video compression using an iterative error data coding method
US5446806A (en) 1993-11-15 1995-08-29 National Semiconductor Corporation Quadtree-structured Walsh transform video/image coding
JP3169783B2 (en) * 1995-02-15 2001-05-28 日本電気株式会社 Video encoding / decoding system
EP0731614B1 (en) * 1995-03-10 2002-02-06 Kabushiki Kaisha Toshiba Video coding/decoding apparatus
DE69637068T2 (en) * 1995-03-15 2007-12-27 Kabushiki Kaisha Toshiba System for decoding moving pictures
US5680129A (en) * 1995-07-18 1997-10-21 Hewlett-Packard Company System and method for lossless image compression
US5764805A (en) * 1995-10-25 1998-06-09 David Sarnoff Research Center, Inc. Low bit rate video encoder using overlapping block motion compensation and zerotree wavelet coding
CN1165180C (en) * 1996-05-17 2004-09-01 松下电器产业株式会社 Image coding method
KR100403077B1 (en) 1996-05-28 2003-10-30 마쯔시다덴기산교 가부시키가이샤 Image predictive decoding apparatus and method thereof, and image predictive cording apparatus and method thereof
US6292589B1 (en) * 1996-06-21 2001-09-18 Compaq Computer Corporation Method for choosing rate control parameters in motion-compensated transform-based picture coding scheme using non-parametric technique
US5995080A (en) * 1996-06-21 1999-11-30 Digital Equipment Corporation Method and apparatus for interleaving and de-interleaving YUV pixel data
US6101276A (en) * 1996-06-21 2000-08-08 Compaq Computer Corporation Method and apparatus for performing two pass quality video compression through pipelining and buffer management
US6292591B1 (en) * 1996-07-17 2001-09-18 Sony Corporation Image coding and decoding using mapping coefficients corresponding to class information of pixel blocks
FR2755527B1 (en) 1996-11-07 1999-01-08 Thomson Multimedia Sa MOTION COMPENSATED PREDICTION METHOD AND ENCODER USING SUCH A METHOD
US6173013B1 (en) * 1996-11-08 2001-01-09 Sony Corporation Method and apparatus for encoding enhancement and base layer image signals using a predicted image signal
US5956467A (en) * 1996-12-17 1999-09-21 Eastman Kodak Company Encoding color image data for multipass color printers
JPH10178639A (en) 1996-12-19 1998-06-30 Matsushita Electric Ind Co Ltd Image codec part and image data encoding method
US6157746A (en) * 1997-02-12 2000-12-05 Sarnoff Corporation Apparatus and method for encoding wavelet trees generated by a wavelet-based coding method
CN1110963C (en) * 1997-03-26 2003-06-04 松下电器产业株式会社 Image decoding device
JPH11146367A (en) 1997-11-06 1999-05-28 Matsushita Electric Ind Co Ltd Mobile video-phone
US6393060B1 (en) * 1997-12-31 2002-05-21 Lg Electronics Inc. Video coding and decoding method and its apparatus
US5995150A (en) * 1998-02-20 1999-11-30 Winbond Electronics Corporation America Dual compressed video bitstream camera for universal serial bus connection
JP3132456B2 (en) * 1998-03-05 2001-02-05 日本電気株式会社 Hierarchical image coding method and hierarchical image decoding method
IL129203A (en) * 1999-03-28 2002-07-25 Univ Ramot Method and system for compression of images
WO2003021971A1 (en) 2001-08-28 2003-03-13 Ntt Docomo, Inc. Moving picture encoding/transmission system, moving picture encoding/transmission method, and encoding apparatus, decoding apparatus, encoding method, decoding method, and program usable for the same
US6980596B2 (en) * 2001-11-27 2005-12-27 General Instrument Corporation Macroblock level adaptive frame/field coding for digital video content
CN101448162B (en) * 2001-12-17 2013-01-02 微软公司 Method for processing video image
EP1322121A3 (en) 2001-12-19 2003-07-16 Matsushita Electric Industrial Co., Ltd. Video encoder and decoder with improved motion detection precision
JP2003250161A (en) 2001-12-19 2003-09-05 Matsushita Electric Ind Co Ltd Encoder and decoder
MXPA03010827A (en) * 2002-03-27 2004-02-17 Matsushita Electric Ind Co Ltd Variable length encoding method, variable length decoding method, storage medium, variable length encoding device, variable length decoding device, and bit stream.
JP2003319394A (en) * 2002-04-26 2003-11-07 Sony Corp Encoding apparatus and method, decoding apparatus and method, recording medium, and program
KR100491530B1 (en) * 2002-05-03 2005-05-27 엘지전자 주식회사 Method of determining motion vector
US6795584B2 (en) 2002-10-03 2004-09-21 Nokia Corporation Context-based adaptive variable length coding for adaptive block transforms
KR20050085385A (en) * 2002-12-04 2005-08-29 코닌클리케 필립스 일렉트로닉스 엔.브이. Video coding method and device
HUP0301368A3 (en) * 2003-05-20 2005-09-28 Amt Advanced Multimedia Techno Method and equipment for compressing motion picture data
US7580584B2 (en) * 2003-07-18 2009-08-25 Microsoft Corporation Adaptive multiple quantization
US7317839B2 (en) * 2003-09-07 2008-01-08 Microsoft Corporation Chroma motion vector derivation for interlaced forward-predicted fields
JP5280003B2 (en) * 2003-09-07 2013-09-04 マイクロソフト コーポレーション Slice layer in video codec
KR20050045746A (en) 2003-11-12 2005-05-17 삼성전자주식회사 Method and device for motion estimation using tree-structured variable block size
CN101695132B (en) 2004-01-20 2012-06-27 松下电器产业株式会社 Picture coding method, picture decoding method, picture coding apparatus, and picture decoding apparatus thereof
EP2373033A3 (en) * 2004-01-30 2011-11-30 Panasonic Corporation Picture coding and decoding method, apparatus, and program thereof
US7565020B2 (en) * 2004-07-03 2009-07-21 Microsoft Corp. System and method for image coding employing a hybrid directional prediction and wavelet lifting
JP2006174415A (en) * 2004-11-19 2006-06-29 Ntt Docomo Inc Image decoding apparatus, image decoding program, image decoding method, image encoding apparatus, image encoding program, and image encoding method
JP4889231B2 (en) * 2005-03-31 2012-03-07 三洋電機株式会社 Image encoding method and apparatus, and image decoding method
KR101127221B1 (en) * 2005-07-15 2012-03-29 삼성전자주식회사 Apparatus and method for encoding/decoding of color image and video using prediction of color components in frequency domain
US8467450B2 (en) * 2005-09-26 2013-06-18 Mitsubishi Electric Corporation Moving image coding apparatus and moving image decoding apparatus
KR100763196B1 (en) 2005-10-19 2007-10-04 삼성전자주식회사 Method for coding flags in a layer using inter-layer correlation, method for decoding the coded flags, and apparatus thereof
CN101129063B (en) 2005-11-18 2010-05-19 索尼株式会社 Encoding device and method, decoding device and method, and transmission system
JP2007243427A (en) * 2006-03-07 2007-09-20 Nippon Hoso Kyokai <Nhk> Encoder and decoder
KR101200865B1 (en) * 2006-03-23 2012-11-13 삼성전자주식회사 An video encoding/decoding method and apparatus
WO2007116551A1 (en) * 2006-03-30 2007-10-18 Kabushiki Kaisha Toshiba Image coding apparatus and image coding method, and image decoding apparatus and image decoding method
KR100745765B1 (en) * 2006-04-13 2007-08-02 삼성전자주식회사 Apparatus and method for intra prediction of an image data, apparatus and method for encoding of an image data, apparatus and method for intra prediction compensation of an image data, apparatus and method for decoding of an image data
EP2055108B1 (en) 2006-08-25 2011-04-20 Thomson Licensing Methods and apparatus for reduced resolution partitioning
KR20080045516A (en) * 2006-11-20 2008-05-23 삼성전자주식회사 Method for encoding and decoding of rgb image, and apparatus thereof
JP5026092B2 (en) 2007-01-12 2012-09-12 三菱電機株式会社 Moving picture decoding apparatus and moving picture decoding method
JP2008245016A (en) * 2007-03-28 2008-10-09 Canon Inc Image encoding apparatus and method, and program
US8619853B2 (en) * 2007-06-15 2013-12-31 Qualcomm Incorporated Separable directional transforms
US8265144B2 (en) 2007-06-30 2012-09-11 Microsoft Corporation Innovations in video decoder implementations
US8483282B2 (en) 2007-10-12 2013-07-09 Qualcomm, Incorporated Entropy coding of interleaved sub-blocks of a video block
CN101159875B (en) * 2007-10-15 2011-10-05 浙江大学 Double forecast video coding/decoding method and apparatus
WO2009051719A2 (en) * 2007-10-16 2009-04-23 Thomson Licensing Methods and apparatus for video encoding and decoding geometrically partitioned super blocks
JP2009111691A (en) * 2007-10-30 2009-05-21 Hitachi Ltd Image-encoding device and encoding method, and image-decoding device and decoding method
US7444596B1 (en) 2007-11-29 2008-10-28 International Business Machines Corporation Use of template messages to optimize a software messaging system
US8503527B2 (en) * 2008-10-03 2013-08-06 Qualcomm Incorporated Video coding with large macroblocks
US8619856B2 (en) 2008-10-03 2013-12-31 Qualcomm Incorporated Video coding with large macroblocks
KR101452859B1 (en) * 2009-08-13 2014-10-23 삼성전자주식회사 Method and apparatus for encoding and decoding motion vector
KR101474756B1 (en) * 2009-08-13 2014-12-19 삼성전자주식회사 Method and apparatus for encoding and decoding image using large transform unit
CN106101717B (en) * 2010-01-12 2019-07-26 Lg电子株式会社 The processing method and equipment of vision signal
KR101487687B1 (en) * 2010-01-14 2015-01-29 삼성전자주식회사 Method and apparatus for encoding and decoding image using large transform unit
JP6056122B2 (en) * 2011-01-24 2017-01-11 ソニー株式会社 Image encoding apparatus, image decoding apparatus, method and program thereof
WO2012176381A1 (en) * 2011-06-24 2012-12-27 三菱電機株式会社 Moving image encoding apparatus, moving image decoding apparatus, moving image encoding method and moving image decoding method
EP2942961A1 (en) * 2011-11-23 2015-11-11 HUMAX Holdings Co., Ltd. Methods for encoding/decoding of video using common merging candidate set of asymmetric partitions
JP5887909B2 (en) * 2011-12-19 2016-03-16 コニカミノルタ株式会社 Image forming apparatus and control method thereof
JP5917127B2 (en) * 2011-12-19 2016-05-11 株式会社ジャパンディスプレイ Liquid crystal display
US9667994B2 (en) * 2012-10-01 2017-05-30 Qualcomm Incorporated Intra-coding for 4:2:2 sample format in video coding

Also Published As

Publication number Publication date
ZA201201157B (en) 2020-02-26
SI2890123T1 (en) 2017-12-29
JP5753327B2 (en) 2015-07-22
CN102484703B (en) 2015-02-25
JP5579310B2 (en) 2014-08-27
HUE043938T2 (en) 2019-09-30
KR20110017300A (en) 2011-02-21
RU2543519C2 (en) 2015-03-10
HUE038258T2 (en) 2018-10-29
CY1120725T1 (en) 2019-12-11
RU2510945C1 (en) 2014-04-10
US20130064291A1 (en) 2013-03-14
HRP20180692T1 (en) 2018-06-29
JP2013502138A (en) 2013-01-17
ZA201502024B (en) 2015-11-25
CA2768181A1 (en) 2011-02-17
CA2815893A1 (en) 2011-02-17
US8204320B2 (en) 2012-06-19
ES2760475T3 (en) 2020-05-14
US8515190B2 (en) 2013-08-20
HRP20171769T1 (en) 2017-12-29
US20130336391A1 (en) 2013-12-19
JP5753328B2 (en) 2015-07-22
CN104581162A (en) 2015-04-29
JP2015173484A (en) 2015-10-01
EP2629526A3 (en) 2013-12-18
CA2877255C (en) 2015-10-06
EP2629518B1 (en) 2017-11-15
DK2890123T3 (en) 2017-11-27
EP3282696A1 (en) 2018-02-14
RU2014104800A (en) 2015-04-27
US20130336392A1 (en) 2013-12-19
PT2629518T (en) 2017-11-23
DK2629518T3 (en) 2017-11-27
KR101474756B1 (en) 2014-12-19
US8842921B2 (en) 2014-09-23
PL2890123T3 (en) 2018-01-31
NO2629526T3 (en) 2018-09-29
EP3282696B1 (en) 2018-12-05
PT2890123T (en) 2017-11-23
JP2015109686A (en) 2015-06-11
US20130336390A1 (en) 2013-12-19
HUE039342T2 (en) 2018-12-28
CY1119803T1 (en) 2018-06-27
EP2629526B1 (en) 2018-05-02
MX2012000614A (en) 2012-01-27
PL3448039T3 (en) 2020-02-28
JP2013179707A (en) 2013-09-09
EP2866442B1 (en) 2017-11-15
IN2015MN00400A (en) 2015-09-04
CA2815893C (en) 2015-02-03
CN103220528B (en) 2017-03-01
CN104581162B (en) 2016-05-04
AU2010283113B2 (en) 2014-07-03
LT2890123T (en) 2017-12-11
EP2866442A1 (en) 2015-04-29
US8971650B2 (en) 2015-03-03
DK2866442T3 (en) 2017-11-27
IN2015MN00401A (en) 2015-09-04
JP6023261B2 (en) 2016-11-09
US20110038554A1 (en) 2011-02-17
ZA201502025B (en) 2015-12-23
EP3448039B1 (en) 2019-11-13
CY1119838T1 (en) 2018-06-27
EP2449778A4 (en) 2013-12-25
DK3448039T3 (en) 2019-11-25
BR122013019724A2 (en) 2016-05-10
NO2866442T3 (en) 2018-04-14
EP2629518A2 (en) 2013-08-21
ES2648089T3 (en) 2017-12-28
CA2815777C (en) 2015-04-28
LT3282696T (en) 2018-12-27
HRP20171768T1 (en) 2017-12-29
WO2011019234A3 (en) 2011-06-23
HUE038282T2 (en) 2018-10-29
EP2449778A2 (en) 2012-05-09
US20120106637A1 (en) 2012-05-03
ES2648091T3 (en) 2017-12-28
PL2629518T3 (en) 2018-01-31
ZA201304973B (en) 2013-09-25
EP3448039A1 (en) 2019-02-27
SI3282696T1 (en) 2019-01-31
PT3282696T (en) 2018-12-17
BR112012001757A2 (en) 2016-04-12
US20150156513A1 (en) 2015-06-04
WO2011019234A2 (en) 2011-02-17
RU2014104796A (en) 2015-08-20
US8792741B2 (en) 2014-07-29
RU2012104828A (en) 2013-08-20
CA2768181C (en) 2015-04-28
PL3282696T3 (en) 2019-02-28
EP2890123A1 (en) 2015-07-01
CA2877241C (en) 2016-10-11
DK2629526T3 (en) 2018-05-22
JP5579309B2 (en) 2014-08-27
CA2815777A1 (en) 2011-02-17
PT2866442T (en) 2017-11-23
CY1119836T1 (en) 2018-06-27
MY153787A (en) 2015-03-13
HRP20182055T1 (en) 2019-02-08
RU2551794C2 (en) 2015-05-27
US8792737B2 (en) 2014-07-29
ES2647908T3 (en) 2017-12-27
SI2866442T1 (en) 2017-12-29
EP2890123B1 (en) 2017-11-15
EP2629518A3 (en) 2013-12-18
US8311348B2 (en) 2012-11-13
CN104581161A (en) 2015-04-29
ZA201502023B (en) 2017-08-30
DK3282696T3 (en) 2019-01-07
LT2866442T (en) 2017-12-11
CN103220525A (en) 2013-07-24
CN103220528A (en) 2013-07-24
LT2629518T (en) 2017-12-11
US20120236938A1 (en) 2012-09-20
ES2701979T3 (en) 2019-02-26
IN2015MN00402A (en) 2015-09-04
PL2866442T3 (en) 2018-01-31
LT2629526T (en) 2018-05-25
US8971649B2 (en) 2015-03-03
SI2629526T1 (en) 2018-06-29
HUE038255T2 (en) 2018-10-29
PL2629526T3 (en) 2018-07-31
NO2890123T3 (en) 2018-04-14
CN104581161B (en) 2016-06-01
SI2629518T1 (en) 2017-12-29
CN104581163A (en) 2015-04-29
JP2015109687A (en) 2015-06-11
BR122013019725A2 (en) 2016-05-10
JP2013214989A (en) 2013-10-17
CN104581163B (en) 2017-05-24
US9386325B2 (en) 2016-07-05
US8798381B2 (en) 2014-08-05
JP2015180086A (en) 2015-10-08
RU2013113038A (en) 2014-04-10
US20140294311A1 (en) 2014-10-02
US20140286585A1 (en) 2014-09-25
MY157501A (en) 2016-06-15
RU2514777C1 (en) 2014-05-10
MY157499A (en) 2016-06-15
AU2010283113A1 (en) 2012-01-12
CN102484703A (en) 2012-05-30
ES2668472T3 (en) 2018-05-18
JP6023260B2 (en) 2016-11-09
NO2629518T3 (en) 2018-04-14
EP2629526A2 (en) 2013-08-21
CY1121001T1 (en) 2019-12-11
JP5746169B2 (en) 2015-07-08
CA2877255A1 (en) 2011-02-17
PT2629526T (en) 2018-05-11
HRP20171767T1 (en) 2017-12-29
HUE048402T2 (en) 2020-08-28

Similar Documents

Publication Publication Date Title
CA2877241C (en) Method and apparatus for encoding and decoding image by using large transformation unit
AU2013201911B2 (en) Method and apparatus for encoding and decoding image by using large transformation unit

Legal Events

Date Code Title Description
EEER Examination request

Effective date: 20150109
