|Publication number||US7639263 B2|
|Application number||US 11/627,754|
|Publication date||Dec 29, 2009|
|Filing date||Jan 26, 2007|
|Priority date||Jan 26, 2007|
|Also published as||CN101589609A, CN101589609B, EP2111714A1, EP2111714A4, US20080180456, WO2008091955A1|
|Inventors||Donald Karlov, Gilles Khouzam|
|Original Assignee||Microsoft Corporation|
This Background is intended to provide the basic context of this patent application and is not intended to describe a specific problem to be solved.
Computer monitors emit color within a color space comprising RGB (red, green, blue) light. Although all colors of the visible spectrum can be produced by merging red, green, and blue light, monitors are capable of displaying only a limited gamut (i.e., range) of the visible spectrum. Each pixel presented in the RGB format includes a separate value within a range of 0 to 255 for each of R, G, and B. However, computers may also emit color within a variety of other color spaces. For example, another color space may comprise data consisting of luminance (Y), chrominance of the blue to yellow color content (U or Cb), and chrominance of the red to cyan color content (V or Cr). As with RGB, pixels in the YUV format are also comprised of separate values for each of Y, U, and V. However, the ranges for each value of R, G, and B do not correspond directly to the ranges for Y, U, and V. For example, in one YUV format, the range of values for Y is 16 to 235, while the range for both U and V is 16 to 239. Therefore, for a computer to properly display YUV video content in RGB, the YUV values for each pixel must be converted to corresponding RGB values.
Present methods for conversion from one video format to another are computationally expensive and may require the processing of input data through a matrix transform to produce output. To convert from YUV to RGB, data from each pixel must be processed through a matrix transform with 7 multiplication and 11 add/subtract operations. In practice, a compiler may reduce the common sub-expressions of a matrix transform. For example, the matrix transform to convert YUV to RGB may be readily reduced to 5 multiplication and 7 add/subtract operations. However, even a compiler-reduced matrix transform is computationally expensive. Due to the complexity of pixel conversion, the process typically requires extensive support, such as Single Instruction, Multiple Data (SIMD) parallel processing extensions, for execution within a useful time. Further, computers that are unable to implement SIMD extensions are unable to easily perform pixel conversion.
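The conventional per-pixel transform described above can be sketched as follows (a minimal illustration, using the BT.601-style coefficients that appear later in Table 1; the function name and the final clip to 0-255 are choices of this sketch):

```python
def yuv_to_rgb_naive(y, u, v):
    """Direct per-pixel matrix transform: several multiplies and
    adds for every single pixel, which is the cost the lookup-table
    method described later avoids."""
    r = (y - 16) * 1.164 + (v - 128) * 1.596
    g = (y - 16) * 1.164 - (v - 128) * 0.831 - (u - 128) * 0.400
    b = (y - 16) * 1.164 + (u - 128) * 2.018
    clip = lambda x: max(0, min(255, round(x)))  # keep output in RGB range
    return clip(r), clip(g), clip(b)
```

Even in this reduced form, every pixel pays for the multiplications, which is why such loops are usually vectorized with SIMD when available.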
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The values of each possible component output R, G, and B may be pre-computed for all values of each possible component input Y, U, and V. Each contribution of Y, U, and V input may then be loaded into a register and added in parallel, without overflow, resulting in a computationally inexpensive RGB output from a YUV input. In one embodiment, contributions of Y, U, and V to each of R, G, and B are retrieved from pre-computed tables. The YUV contributions for each value of R, G, and B are packed into three data elements and added together in parallel, resulting in a value for an RGB output.
Although the following text sets forth a detailed description of numerous different embodiments, it should be understood that the legal scope of the description is defined by the words of the claims set forth at the end of this patent. The detailed description is to be construed as exemplary only and does not describe every possible embodiment since describing every possible embodiment would be impractical, if not impossible. Numerous alternative embodiments could be implemented, using either current technology or technology developed after the filing date of this patent, which would still fall within the scope of the claims.
It should also be understood that, unless a term is expressly defined in this patent using the sentence “As used herein, the term ‘______’ is hereby defined to mean . . . ” or a similar sentence, there is no intent to limit the meaning of that term, either expressly or by implication, beyond its plain or ordinary meaning, and such term should not be interpreted to be limited in scope based on any statement made in any section of this patent (other than the language of the claims). To the extent that any term recited in the claims at the end of this patent is referred to in this patent in a manner consistent with a single meaning, that is done for sake of clarity only so as to not confuse the reader, and it is not intended that such claim term be limited, by implication or otherwise, to that single meaning. Finally, unless a claim element is defined by reciting the word “means” and a function without the recital of any structure, it is not intended that the scope of any claim element be interpreted based on the application of 35 U.S.C. §112, sixth paragraph.
With reference to
The computer 110 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 180, via a local area network (LAN) 171 and/or a wide area network (WAN) 173 via a modem 172 or other network interface 170.
Computer 110 typically includes a variety of computer readable media that may be any available media that may be accessed by computer 110 and includes both volatile and nonvolatile media, removable and non-removable media. The system memory 130 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 131 and random access memory (RAM) 132. The ROM may include a basic input/output system 133 (BIOS). RAM 132 typically contains data and/or program modules that include operating system 134, application programs 135, other program modules 136, and program data 137. The computer 110 may also include other removable/non-removable, volatile/nonvolatile computer storage media such as a hard disk drive 141, a magnetic disk drive 151 that reads from or writes to a magnetic disk 152, and an optical disk drive 155 that reads from or writes to an optical disk 156. The drives 141, 151, and 155 may interface with system bus 121 via interfaces 140, 150.
A user may enter commands and information into the computer 110 through input devices such as a keyboard 162 and pointing device 161, commonly referred to as a mouse, trackball or touch pad. Other input devices (not illustrated) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 120 through a user input interface 160 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 191 or other type of display device may also be connected to the system bus 121 via an interface, such as a video interface 190. A video conversion module 192 may be connected to the system bus 121. The video conversion module 192 may convert or modify pixels in accordance with the method described below. In other embodiments, the video conversion module 192 is a component of another element of the computer 110. For example, the video conversion module 192 may be a component of the processing unit 120 and/or the video interface 190. In addition to the monitor 191, computers may also include other peripheral output devices such as speakers 197 and printer 196, which may be connected through an output peripheral interface 190.
Much of the inventive functionality and many of the inventive principles described herein are best implemented with or in software programs or instructions and integrated circuits (ICs) such as application specific ICs. It is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions, programs, and ICs with minimal experimentation. Therefore, in the interest of brevity and minimization of any risk of obscuring the principles and concepts in accordance with the present invention, further discussion of such software and ICs, if any, will be limited to the essentials with respect to the principles and concepts of the preferred embodiments.
With reference to
TABLE 1
red = (Y − 16) × 1.164 + (V − 128) × 1.596
green = (Y − 16) × 1.164 − (V − 128) × 0.831 − (U − 128) × 0.400
blue = (Y − 16) × 1.164 + (U − 128) × 2.018
From Table 1, in a conversion from YUV to RGB, a value in the RGB color space for red may be composed of some degree of luminance (Y) and some degree of chrominance in the red to cyan range (V or Cr). Likewise, green may be composed of some degree of luminance (Y), some degree of chrominance in the red to cyan range (V or Cr), and some degree of chrominance in the blue to yellow range (U or Cb). Also, blue may be composed of some degree of luminance (Y), and some degree of chrominance in the blue to yellow range (U or Cb). Therefore, as in Table 2, the formula for converting YUV pixels into RGB pixels may be simplified to emphasize the contributions of the Y, U, and V components.
TABLE 2
|Red|Green|Blue|
|(Y − 16) × 1.164|(Y − 16) × 1.164|(Y − 16) × 1.164|
|+ (V − 128) × 1.596|+ (V − 128) × −0.831|+ (U − 128) × 2.018|
||+ (U − 128) × −0.400||
|= Red Output|= Green Output|= Blue Output|
As illustrated in Table 2, each of the inputs Y, U, and V contribute, in some degree, to a resulting output of Red, Green, and Blue. The total contribution of each value of Y, U, and V may be added together in planar form to produce a respective Red, Green, and Blue output value. Of course, there may be other simplifications of other matrix transforms that may emphasize component contributions.
At block 210, tables may be computed to determine all possible output values corresponding to all possible input values. The tables may consist of a number of values equaling the range of the target format. For example, where the target format is RGB and the range for each component is 0 to 255, there may be three tables of 256 values each representing an entire set of possible output values from an input. The tables may be stored in any form of computer memory described in relation to
a set of tables 300. In one embodiment, an input 305 for Y of 17 is converted by the formula (Y−16)×1.164 to produce a table value 320 of 1.164 as the luminance contribution to each of R 325, G 330, and B 335. With reference to
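The table construction of block 210 can be sketched as one precomputed 256-entry table per (input component, output component) pair. Clamping inputs to the nominal video ranges from the Background (Y: 16 to 235; U, V: 16 to 239) and rounding entries to integers are assumptions of this sketch, not requirements of the method:

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

# One table per contribution an input makes to an output.
# Y contributes equally to R, G, and B, so one table serves all three.
Y_TO_RGB = [round(1.164 * (clamp(y, 16, 235) - 16)) for y in range(256)]
U_TO_G   = [round(-0.400 * (clamp(u, 16, 239) - 128)) for u in range(256)]
U_TO_B   = [round( 2.018 * (clamp(u, 16, 239) - 128)) for u in range(256)]
V_TO_R   = [round( 1.596 * (clamp(v, 16, 239) - 128)) for v in range(256)]
V_TO_G   = [round(-0.831 * (clamp(v, 16, 239) - 128)) for v in range(256)]
```

With these tables, the lookup of block 220 is a plain index: for the Y input of 17 used in the example above, `Y_TO_RGB[17]` recovers the luminance contribution (1.164, which rounds to 1 in this integer sketch).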
At block 215, the method 200 may receive an input value for conversion. In one embodiment, pixel values are received in planar mode in response to any modification of an image that results in a per-pixel computing cost. Some examples of operations that may require a per-pixel computing cost are conversions from one video format to another (e.g., YUV to RGB), a shader transform, a lightness or brightness correction, a saturation change, or a color correction. For example, a tuple of YUV pixel values may be received by the method 200 as a plane of Y data, a plane of U data, and a plane of V data, in response to a requested conversion from the YUV format to another format (e.g., YUV to/from RGB). In a further embodiment, the method 200 receives non-planar, or "chunky" data. For example, the YUV data may be received as a single memory unit containing all three Y, U, and V values. The method 200 may consider each piece of chunky data as a 1×1 pixel plane. Each piece of chunky data may be converted into separate planes of data on a pixel-by-pixel basis. Many other embodiments for receiving and interpreting input values also exist.
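The chunky-to-planar step mentioned above can be sketched simply (the list-of-tuples input representation is an assumption of this sketch):

```python
def chunky_to_planes(pixels):
    """Split packed (Y, U, V) tuples into three planar lists,
    one per component, as block 215 describes."""
    y_plane = [p[0] for p in pixels]
    u_plane = [p[1] for p in pixels]
    v_plane = [p[2] for p in pixels]
    return y_plane, u_plane, v_plane
```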
At block 220, the method 200 may use the lookup tables of block 210 to find a contribution of an input to the value of an output. In one embodiment, a value from a YUV tuple is used to find its contribution to the desired output. For example, with reference to
At block 225, the method 200 may store the input contributions of block 220. In one embodiment, each component's contributions are stored in a particular order within a single memory location. For example, the data may be stored to prevent an overflow error during a subsequent addition process. With reference to
In one embodiment, the memory space is a 32-bit dword in which each contribution is allocated ten bits 405 of the dword with a "buffer" or "gap" 410 between each value. The buffer may be any size that is suitable to prevent an overflow error during a subsequent addition operation. For example, the buffer may be 1 bit in size. In a further embodiment, the memory space may be any structure that may allocate enough memory to store each ordered contribution described above wherein the number of memory spaces may be equal to the number of elements in a target format. For example, the format RGB is made up of three elements; therefore, the method 200 may use three memory spaces 415, 420, 425. Also, each element of a contribution tuple may be stored in the same order over multiple spaces. For example, each of three memory spaces 415, 420, 425 may have data representing a Y value stored in the first ten bits, a U value in the second ten bits, and a V value in the third ten bits, with a 1-bit buffer between each value. Therefore, the contributions of Y, U (Cb), and V (Cr) to the components R, G, and B may be stored in different memory spaces 415, 420, 425, and arranged in the order of contributions to R, contributions to G, and contributions to B. For example, in an embodiment that converts YUV data to RGB, the space 415 stores, in order, the contribution of Y to each of R, G, and B, space 420 stores, in order, the contribution of U (Cb) to each of R, G, and B, and space 425 stores, in order, the contribution of V (Cr) to each of R, G, and B. Of course, there may be many embodiments of memory spaces that store the components in a particular order as well as many sizes of the components stored.
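A sketch of the 32-bit layout described above: three 10-bit fields at bits 0-9, 11-20, and 22-31, with single buffer bits at positions 10 and 21. The exact bit positions are an assumption of this sketch; the text only requires ten bits per contribution with a gap between values:

```python
FIELD_BITS = 10
GAP_BITS = 1
SHIFT = FIELD_BITS + GAP_BITS  # successive fields start 11 bits apart

def pack(c0, c1, c2):
    """Pack three non-negative 10-bit contributions into one dword,
    leaving a zero buffer bit between fields to absorb carries."""
    assert all(0 <= c < 1 << FIELD_BITS for c in (c0, c1, c2))
    return c0 | c1 << SHIFT | c2 << 2 * SHIFT

def unpack(word):
    mask = (1 << FIELD_BITS) - 1
    return word & mask, (word >> SHIFT) & mask, (word >> 2 * SHIFT) & mask
```

Because 10 + 1 + 10 + 1 + 10 = 32, the three buffered fields exactly fill a dword.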
At block 230, with reference to
In another embodiment, the resulting total values are clipped to ensure they remain within an acceptable range for the target format. For example, with reference to
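The parallel addition of block 230 and the clipping step can be sketched together. This sketch assumes negative chroma contributions were stored with a +256 bias so that every packed field is non-negative; one 32-bit add then sums all three components at once, and the bias is removed before clipping:

```python
BIAS = 256  # added to each chroma field so negative contributions pack cleanly

def add_and_clip(y_word, u_word, v_word):
    """Sum three packed contribution words in one add, then extract,
    de-bias, and clip each 10-bit field to the 0-255 RGB range."""
    s = (y_word + u_word + v_word) & 0xFFFFFFFF  # one add covers R, G, and B
    def component(shift):
        value = ((s >> shift) & 0x3FF) - 2 * BIAS  # strip the two chroma biases
        return max(0, min(255, value))             # clip to target range
    return component(0), component(11), component(22)
```

For in-range inputs each field sum stays below 1024, so the buffer bits are never set and no field corrupts its neighbor.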
At block 235, the method 200 may determine if there are more values that need to be converted. In one embodiment, the method 200 checks a buffer into which all pre-conversion pixel values are decompressed to determine if any values remain. For example, the buffer may be a FIFO queue into which YUV pixel values are loaded for processing by the method 200. If the method 200 determines that more values need to be converted, the method may return to block 220. If no values remain, the method may proceed to block 240.
At block 240, the method 200 may store the converted or modified image. In one embodiment, the method 200 stores the image directly, on a pixel-by-pixel basis, to a memory, such as a back buffer. For example, in a video display, a compositing layer may decode compressed video to YUV planes, convert the YUV planes to an intermediate RGB buffer, and merge/composite the RGB buffer to a back buffer for later display. A compositor incorporating an intermediate buffer may be useful when converting between video formats that display images at different rates. For example, video playback may operate at a frames-per-second rate that is slower than a typical digital animation sequence. Because video playback is slower, some frames of the slower format may be repeatedly displayed during playback in the faster format to account for the different rates. Rather than re-converting the same frame every time it is displayed, the converted YUV/RGB data may be saved to an intermediate buffer to achieve faster recall of previously-displayed frames. Further, where the conversion step consumes an appreciable amount of computer processing (as when the step converts using the Table 1 matrix transform for each pixel), it may be more efficient to convert a repeated frame only once and cache the resulting data. Employing the transform at each pixel conversion may, therefore, generate an additional Read/Modify/Write command for each processed pixel.
However, as described above in relation to blocks 220 through 235, the processing cost of conversion may be very low. For example, the conversion from YUV to RGB may involve only a table lookup for each target format component followed by parallel addition. Therefore, because the conversion cost may be very low, it may be performed as each pixel is decompressed on a pixel-by-pixel basis and the resulting data may be saved directly to a back buffer. Saving directly to a back buffer may eliminate an intermediate (RGB) buffer as well as a Read/Modify/Write command during the conversion process and may also improve the cache coherency. Further, the Write/Combine capabilities of certain processors may substantially improve the conversion and output performance. Of course, there are many other embodiments for storing and utilizing converted data to eliminate R/M/W commands, intermediate buffers, or hardware components employing these structures.
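Putting blocks 210 through 230 together, a complete per-pixel conversion reduces to three table lookups, one add, and a clip. The field layout (bits 0, 11, 22), the +256 bias on chroma fields, and the clamping of inputs to nominal video ranges are all assumptions of this sketch rather than requirements of the claims:

```python
def build_tables():
    """Precompute one packed 32-bit contribution word per possible
    value of each of Y, U, and V (block 210)."""
    def clamp(x, lo, hi):
        return max(lo, min(hi, x))
    BIAS = 256
    yw, uw, vw = [], [], []
    for i in range(256):
        yc = round(1.164 * (clamp(i, 16, 235) - 16))
        yw.append(yc | yc << 11 | yc << 22)        # Y feeds R, G, and B equally
        uc = clamp(i, 16, 239) - 128
        ug = round(-0.400 * uc) + BIAS
        ub = round(2.018 * uc) + BIAS
        uw.append(BIAS | ug << 11 | ub << 22)      # U feeds G and B only
        vc = clamp(i, 16, 239) - 128
        vr = round(1.596 * vc) + BIAS
        vg = round(-0.831 * vc) + BIAS
        vw.append(vr | vg << 11 | BIAS << 22)      # V feeds R and G only
    return yw, uw, vw

YW, UW, VW = build_tables()

def yuv_to_rgb(y, u, v):
    """Blocks 220-230: three lookups, one parallel add, then clip."""
    s = (YW[y] + UW[u] + VW[v]) & 0xFFFFFFFF
    def comp(shift):
        return max(0, min(255, ((s >> shift) & 0x3FF) - 512))
    return comp(0), comp(11), comp(22)
```

With the biases chosen here, every field sum stays under 1024 for nominal-range inputs, so the single 32-bit add never carries across the buffer bits, and no multiplication occurs in the per-pixel path.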
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5872556||Apr 6, 1993||Feb 16, 1999||International Business Machines Corp.||RAM based YUV-RGB conversion|
|US5873990 *||Aug 21, 1996||Feb 23, 1999||Andcare, Inc.||Handheld electromonitor device|
|US5923316||Oct 15, 1996||Jul 13, 1999||Ati Technologies Incorporated||Optimized color space conversion|
|US5936683||Sep 29, 1997||Aug 10, 1999||Neo Magic Corp.||YUV-to-RGB conversion without multiplies using look-up tables and pre-clipping|
|US6097219 *||May 4, 1998||Aug 1, 2000||Oki Data Corporation||Output buffer circuit with adjustable driving capability|
|US6118724 *||Feb 18, 1998||Sep 12, 2000||Canon Kabushiki Kaisha||Memory controller architecture|
|US6172714||May 22, 1996||Jan 9, 2001||Compaq Computer Corporation||Color adjustment table system for YUV to RGB color conversion|
|US6268847||Jun 2, 1999||Jul 31, 2001||Ati International Srl||Method and apparatus for more accurate color base conversion of YUV video data|
|US6349379 *||Feb 18, 1998||Feb 19, 2002||Canon Kabushiki Kaisha||System for executing instructions having flag for indicating direct or indirect specification of a length of operand data|
|US6356277||Jan 22, 1998||Mar 12, 2002||Seiko Epson Corporation||YUV-RGB digital conversion circuit and picture display device and electronic equipment using the same|
|US6384838||Apr 17, 1995||May 7, 2002||Intel Corporation||Optimized lookup table method for converting Yuv pixel values to RGB pixel values|
|US6487308 *||May 22, 1996||Nov 26, 2002||Compaq Computer Corporation||Method and apparatus for providing 64-bit YUV to RGB color conversion|
|US6828982||Oct 31, 2002||Dec 7, 2004||Samsung Electronics Co., Ltd.||Apparatus and method for converting of pixels from YUV format to RGB format using color look-up tables|
|US20010021971 *||Feb 18, 1998||Sep 13, 2001||Ian Gibson||System for executing instructions having flag for indicating direct or indirect specification of a length of operand data|
|US20030052894 *||Sep 6, 2002||Mar 20, 2003||Yuji Akiyama||Method and apparatus for processing image data, storage medium and program|
|US20030120886 *||Dec 21, 2001||Jun 26, 2003||Moller Hanan Z.||Method and apparatus for buffer partitioning without loss of data|
|US20030160900 *||Apr 16, 2002||Aug 28, 2003||Adriana Dumitras||Image and video processing with chrominance attenuation|
|US20060176313||Feb 10, 2005||Aug 10, 2006||Samsung Electronics Co., Ltd.||Luminance preserving color conversion from YUV to RGB|
|1||"A Low-Power Multiplierless YUV to RGB Converter Based on Human Vision Perception," http://ieeexplore.ieee.org/iel2/4387/12506/00574765.pdf?sNumber=.|
|2||"A Low-Power Video Decoder with Power, Memory, Bandwidth and Quality Scalability," http://ieeexplore.ieee.org/iel3/4015/11533/00527516.pdf?sNumber=.|
|3||"An End to End Software Only Scalable Video Delivery System," http://suif.standford.edu/~bks/publications/scalable-video.ps.|
|4||"Integrating Video Rendering into Graphics Accelerator Chips," http://www.hpl.hp.com/personal/Robert-Ulichney/papers/1996-graphics-chip.pdf.|
|5||"RGB/YUV Pixel Conversion" http://www.fourcc.org/fccvrgb.php.|
|6||International Search Report based on International Application No. PCT/US2008/051813-Filed Jan. 23, 2008; Date of Mailing: Jul. 1, 2008.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US9053523 *||Jun 21, 2013||Jun 9, 2015||Intel Corporation||Joint enhancement of lightness, color and contrast of images and video|
|US9137488 *||Oct 26, 2012||Sep 15, 2015||Google Inc.||Video chat encoding pipeline|
|US20110072236 *||Mar 24, 2011||Mimar Tibet||Method for efficient and parallel color space conversion in a programmable processor|
|US20140118477 *||Oct 26, 2012||May 1, 2014||Google Inc.||Video chat encoding pipeline|
|U.S. Classification||345/589, 358/523, 382/274, 358/518, 348/649, 348/630, 345/559, 348/453, 345/549, 382/167, 345/602, 382/162, 345/591|
|International Classification||G09G5/02, H04N9/64, G03F3/08, G09G5/22, G09G5/36, G06K9/00|
|Cooperative Classification||H04N9/67, H04N1/646, H04N1/6019|
|European Classification||H04N9/67, H04N1/60D2, H04N1/64D|
|May 16, 2007||AS||Assignment|
Owner name: MICROSOFT INC., WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KARLOV, DONALD;KHOUZAM, GILLES;REEL/FRAME:019301/0933
Effective date: 20070126
|Mar 18, 2013||FPAY||Fee payment|
Year of fee payment: 4
|Dec 9, 2014||AS||Assignment|
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034542/0001
Effective date: 20141014