|Publication number||US7129962 B1|
|Application number||US 10/104,011|
|Publication date||Oct 31, 2006|
|Filing date||Mar 25, 2002|
|Priority date||Mar 25, 2002|
|Inventors||Jean-François Côté, Jean-Jacques Ostiguy|
|Original Assignee||Matrox Graphics Inc.|
The present invention relates generally to video decoding or encoding and, in particular, to the efficient transformation of image data between the frequency and color domains.
It is desirable to compress image signals that are used with computer systems, since an image signal for a single uncompressed high-resolution digitized color image can easily consume several megabytes of memory. Because images tend to have low information content, very good compression rates are usually possible. This is especially true if, as is often the case for image signals used with computer systems, perfect reproduction is not required. In such instances, the low-frequency components of a frequency-domain image signal are perceptually more important in reproducing the image than the high-frequency components of the frequency-domain image signal. Thus, compression schemes that are applied to the frequency-domain version of an image signal do not waste bits in attempting to represent the relatively less significant high-frequency portions of the image signal.
Accordingly, it is desirable to transform an image signal from the spatial domain (also referred to as the “color domain”) to the frequency domain prior to compressing the image signal. Naturally, an inverse operation is required to transform the image signal from the frequency domain back into the color domain prior to representation on a screen, such as a computer monitor. One type of mathematical transform that is suitable for transforming image signals in this manner is the discrete cosine transform (DCT). A DCT includes a pair of transforms, namely a forward DCT (FDCT), which maps a digitized (color-domain) signal to a frequency-domain signal, and an inverse DCT (IDCT), which maps a frequency-domain signal to a signal in the color domain. The DCT and IDCT are important steps in several international standards, including MPEG-1, MPEG-2 (ITU-T H.262) and MPEG-4, as well as the ITU-T H.261 and H.263 video coding recommendations.
However, although the encoding of an image signal using the FDCT provides a definite advantage in terms of compression, there is an associated penalty in terms of the amount of processing required to decode the image signal when successive images are to be displayed. That is to say, a processing unit designed to decode image signals must be capable of performing the IDCT operation sufficiently quickly to allow full-frame video playback in real time.
By way of example, decoding a 1024-by-1024-pixel color image requires 49,152 IDCTs of size 8-by-8 (namely, 3×(1024×1024)/(8×8)). Furthermore, the calculation of a single 8-by-8 IDCT in a conventional manner requires more than 9,200 multiplications and more than 4,000 additions. Thus, if 30 images are to be displayed each second, as is suggested to provide full-motion video, then the total number of multiplications per second rises to over 13 billion and the total number of additions per second reaches more than 5 billion. Such tremendous processing requirements heavily influence the design of the processing unit hardware and software.
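These figures follow from simple arithmetic; a quick sanity check, assuming 3 color planes, 8-by-8 blocks, 30 frames per second and the per-block costs quoted above, confirms them:

```python
# Back-of-envelope verification of the operation counts quoted above.
blocks_per_frame = 3 * (1024 * 1024) // (8 * 8)   # 8x8 IDCTs per image
mults_per_second = blocks_per_frame * 9_200 * 30  # >9,200 mults per block
adds_per_second = blocks_per_frame * 4_000 * 30   # >4,000 adds per block

assert blocks_per_frame == 49_152
assert mults_per_second > 13_000_000_000   # over 13 billion multiplications
assert adds_per_second > 5_000_000_000     # more than 5 billion additions
```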
With the aim of providing the requisite processing power, several approaches have been considered. One such approach consists of performing a set of one-dimensional IDCTs on the individual rows of each successive received matrix of frequency-domain values, followed by a series of one-dimensional IDCTs on the resultant columns. Such a technique can reduce the number of multiplications and additions required to perform a complete IDCT. It has also been known to use fused multiply-add instructions to further reduce the number of computations involved in performing the IDCT. In both of these cases, however, the decoding speed of the algorithms is directly dependent on the degree to which the main processor of the host system is occupied with other tasks, such as input/output (I/O) handling.
Other prior approaches have consisted of using a relatively simple software algorithm in conjunction with dedicated hardware support in the form of a dedicated IDCT co-processor. The use of a dedicated IDCT co-processor in the graphics processing unit (GPU) has the potential to offer a more scalable solution since the decoding speed of the IDCT is no longer tied to the speed or availability of the main processor of the host system. However, the addition of a dedicated IDCT co-processor increases the cost and complexity of the hardware employed to effect video decoding.
Yet another way of providing hardware support for IDCT execution has consisted of providing specialized logic gates in the GPU. This approach affords marginal reductions in cost and complexity relative to the IDCT co-processor approach, while continuing to provide a decoding speed that is independent of the processing speed or availability of the main processor. However, the sheer amount of semiconductor real estate occupied by specialized logic gates capable of implementing a standard IDCT algorithm leaves little room for the implementation of other important functional blocks of the GPU.
Therefore, it would be advantageous to enable fast computation of the IDCT in a GPU, but without the need to provide additional dedicated hardware and without unduly monopolizing the resources of the main processor of the host system.
The present invention provides a method and system that allow a transform to be computed efficiently using a graphics processing unit.
According to a first embodiment, the present invention provides a graphics processing device for converting coefficients in a video data stream from a first type, e.g., frequency-domain, to a second type, e.g., color-domain. The device includes an input for receiving the video data stream including a set of coefficients of the first type and a storage medium holding a data structure containing a first set of coefficients of the second type. The device further includes a processor communicating with the input and with the storage medium. The processor uses the data structure to convert the set of coefficients of the first type to a second set of coefficients of the second type. The device also includes an output in communication with the processor, for releasing an output video data stream including the second set of coefficients of the second type.
In a specific embodiment, the video data stream at the input is organized into plural sets of coefficients of the first type. The processor uses the data structure repeatedly to convert each such set of coefficients of the first type to a corresponding second set of coefficients of the second type. The processor may also perform a scaling operation on the set of coefficients of the first type prior to using the data structure to convert the set of coefficients of the first type to the second set of coefficients of the second type.
In using the data structure to convert the set of coefficients of the first type to the second set of coefficients of the second type, the processor may execute parallel multiply-accumulate instructions involving the set of coefficients of the first type and the first set of coefficients of the second type.
According to a second broad aspect, the present invention provides computer-readable media tangibly embodying a program of instructions executable by a computer to perform various methods of transforming an ordered set of N first coefficients representative of a signal in a first domain into an ordered set of M second coefficients representative of the signal in a second domain.
In one case, the method includes accessing an ordered set of N ordered sets of M pre-computed factors each, the nth ordered set of pre-computed factors, 1≦n≦N, being associated with the nth one of the first coefficients; and computing the second coefficients as a function of the first coefficients and the ordered sets of pre-computed factors.
In another case, the method includes receiving the video data stream including coefficients of the first type; performing a scaling operation on the coefficients of the first type; converting the scaled coefficients of the first type to coefficients of the second type; performing an inverse scaling operation on the coefficients of the second type; and releasing a video data stream including the inversely scaled coefficients of the second type.
In yet another case, the method includes receiving the video data stream including coefficients of the first type; grouping the coefficients of the first type into a plurality of distinct groups on a basis of a characteristic of each of the coefficients of the first type; converting the distinct groups of the coefficients of the first type to respective intermediate coefficient groups; combining the intermediate coefficient groups, thereby to generate the coefficients of the second type; and releasing an output video data stream including the coefficients of the second type.
In still another case, the method includes receiving the video data stream including coefficients of the first type; performing a scaling operation on each of a plurality of distinct groups of the coefficients of the first type; converting the distinct groups of the coefficients of the first type to respective intermediate coefficient groups; performing an inverse scaling operation on each of the intermediate coefficients groups; combining the inversely scaled intermediate coefficient groups, thereby to generate the coefficients of the second type; and releasing an output video data stream including the coefficients of the second type.
The present invention may also be summarized broadly as a graphics processing unit for converting a frequency-domain video data stream into a corresponding spatial-domain video data stream. The graphics processing unit includes an input for receiving the frequency-domain video data stream including a set of frequency-domain coefficients and a storage medium holding a data structure containing a plurality of color-domain basis sets, one for each of the coefficients in the set of frequency-domain coefficients. The graphics processing unit also includes a processor communicating with the input and with the storage medium. The processor uses the color-domain basis sets to convert the set of frequency-domain coefficients to a set of color-domain coefficients. The graphics processing unit further includes an output in communication with the processor, for releasing the spatial-domain video data stream including the set of color-domain coefficients.
In the following, the term “matrix” is not to be limited to a two-dimensional logical or physical arrangement of elements but is to be interpreted broadly as an ordered set of elements, which may occupy up to three physical dimensions when stored in a memory and any number of logical dimensions when referred to in software.
These and other aspects and features of the present invention will now become apparent to those of ordinary skill in the art upon review of the following description of specific embodiments of the invention in conjunction with the accompanying drawings.
In the accompanying drawings:
With reference to
The role of the processor 12 is to transform each successive N-by-N matrix “F” of frequency-domain coefficients f(x,y), 1≦x≦N, 1≦y≦N, into a corresponding N-by-N matrix “C” of color-domain coefficients c(x,y), 1≦x≦N, 1≦y≦N. To this end, the processor 12 is configured to have access to a memory medium 22 which stores an assortment of pre-computed factors, to be described in greater detail herein below. Once generated, the color-domain coefficients c(x,y), 1≦x≦N, 1≦y≦N, are provided to an output buffer 18, from which individual color-domain coefficients are passed to other subsequent processing stages.
Desirably, the transformation effected by the processor 12 is achieved using IDCT techniques, the details of which are described herein below. However, other transform techniques are possible, which may allow the number of rows in either of the two matrices “C” and “F” to be different from the number of columns and which may also lead to the matrix “C” having different dimensions than the matrix “F”. In the following, it will be assumed, although without requiring this to be the case in all embodiments, that the matrices in both domains are square and have the same dimensions.
It should be mentioned that the term “matrix” has been used because it is easy for one to visualize the pixels of a two-dimensional image being represented as a matrix. However, in its broadest sense, the term “matrix” is not intended to connote a limitation on the manner in which data is stored or on the dimensionality of an array used in software to represent the data. Rather, the term “matrix” is to be interpreted broadly as an ordered set of elements, which may occupy up to three physical dimensions when stored in a memory and any number of logical dimensions when referred to in software.
In one embodiment, each frequency-domain coefficient f(x,y), 1≦x≦N, 1≦y≦N may be represented using 12 bits and each color-domain coefficient c(x,y), 1≦x≦N, 1≦y≦N may be represented using 9 bits. The available dynamic ranges for the frequency-domain and color-domain coefficients have been selected in this case to provide compatibility with certain international standards such as IEEE Standard 1180-1990, dated Dec. 6, 1990, entitled “Standard Specifications for the Implementations of 8 by 8 Inverse Discrete Cosine Transform” and incorporated by reference herein. However, it should be understood that the techniques described herein are not limited to any particular number of bits for the frequency-domain or color-domain coefficients.
In a real-time environment, frames are read from the output buffer 18 at substantially the same rate as frames arrive at the input buffer 14 (e.g., at a rate of 30 frames per second or more in order to give a scene the appearance of smooth motion). Therefore, in order to provide full-motion video with a limited frame latency between input and output, an IDCT-based video decoding operation needs to be performed on each matrix “F” within the amount of time allotted to a frame (on the order of 33 milliseconds or less).
Mathematically, the IDCT relates the color-domain coefficients c(x,y), 1≦x≦N, 1≦y≦N to the frequency-domain coefficients f(x,y), 1≦x≦N, 1≦y≦N in the following manner:
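The equation itself does not survive in this extract; the standard two-dimensional N-by-N IDCT (reconstructed here in the common orthonormal convention with 0-based indices, whereas the surrounding text counts from 1) has the form:

```latex
c(x,y) = \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} \alpha(u)\,\alpha(v)\, f(u,v)\,
         \cos\!\frac{(2x+1)u\pi}{2N}\,\cos\!\frac{(2y+1)v\pi}{2N},
\qquad
\alpha(0) = \sqrt{1/N}, \quad \alpha(k) = \sqrt{2/N}\ \ (k \neq 0).
```

The constant portion multiplying each f(u,v) can then be collected into a pre-computed factor tu,v(x,y) = α(u) α(v) cos((2x+1)uπ/2N) cos((2y+1)vπ/2N), which is the notation used in the discussion of the pre-computed matrices.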
From the above equation for c(x,y), it can be seen that the term f(u,v) is the only term that varies, as it corresponds to the frequency-domain coefficient in row u, column v of the matrix “F”. The other terms in the equation are all constant and therefore can be pre-computed and stored as constant factors.
In particular, it is noted that each element c(x,y) of the matrix “C” is the linear combination, over all u and v, of the elements f(u,v) of the matrix “F” and the pre-computed factors tu,v(x,y). Thus, the matrix “C” is the linear combination, over all u and v, of each element f(u,v) of the matrix “F” with a corresponding pre-computed matrix Tu,v, where the pre-computed matrix Tu,v is defined as the IDCT of a sparse matrix βu,v with all zeros except for a “one” in row u and column v. It should therefore be apparent that the pre-computed matrices Tu,v represent color-domain basis sets for the computation of the complete IDCT. Pre-computation of the individual basis sets, i.e., matrices Tu,v, may be achieved off-line by performing arithmetic operations or through the use of look-up tables.
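As an illustrative sketch (not the patent's own code), the basis-set decomposition can be checked numerically: by linearity, the IDCT of any matrix “F” equals the linear combination of the pre-computed matrices Tu,v, each being the IDCT of a matrix that is zero except for a single “one” at (u,v). NumPy and 0-based indices are assumed here.

```python
import numpy as np

N = 8

def alpha(k):
    # DCT normalization factors (orthonormal convention)
    return np.sqrt(1.0 / N) if k == 0 else np.sqrt(2.0 / N)

def idct2(F):
    # Direct two-dimensional inverse DCT, 0-based indices
    C = np.zeros((N, N))
    for x in range(N):
        for y in range(N):
            acc = 0.0
            for u in range(N):
                for v in range(N):
                    acc += (alpha(u) * alpha(v) * F[u, v]
                            * np.cos((2 * x + 1) * u * np.pi / (2 * N))
                            * np.cos((2 * y + 1) * v * np.pi / (2 * N)))
            C[x, y] = acc
    return C

# Pre-compute the basis matrices T[u, v]: the IDCT of a sparse matrix
# that is all zeros except for a single "one" in row u, column v.
T = np.empty((N, N, N, N))
for u in range(N):
    for v in range(N):
        B = np.zeros((N, N))
        B[u, v] = 1.0
        T[u, v] = idct2(B)

# Any IDCT is then the linear combination of the basis matrices,
# weighted by the frequency-domain coefficients f(u, v).
F = np.random.default_rng(0).standard_normal((N, N))
C = sum(F[u, v] * T[u, v] for u in range(N) for v in range(N))
assert np.allclose(C, idct2(F))
```

The pre-computation of T is done once (off-line, in the patent's terms); only the final linear combination needs to be executed per block.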
As shown in
The above technique allows the GPU 10 to perform IDCT computations more efficiently than in prior approaches. In some embodiments of the invention, the efficiency of the IDCT computation can be further increased by virtue of the properties of the instruction set of the processor 12 in the GPU 10.
Specifically, the processor 12 may be equipped with the ability to perform, in a single clock cycle, a multiply-accumulate instruction involving a pair of multi-element vectors. For example, pixel shaders commonly written for the rendering of three-dimensional objects provide such multiply-accumulate capability in their instruction set. Moreover, pixel shaders also often allow independent multiply-accumulate operations to be performed, in parallel, on multiple pairs (e.g., up to 64) of multi-element vectors.
Hence, a multiply-accumulate instruction capable of performing independent multiply-accumulates on 64 pairs of 8-vectors can be viewed as forming a single 8-by-8 matrix that is the element-by-element sum of 8 original matrices, each having been multiplied by a respective one of 8 scalars. This concept can be applied to advantage in the execution of the IDCT in the following way.
As illustrated in
(T1,1 × f(1,1)) + (T1,2 × f(1,2)) + … + (T1,8 × f(1,8)),
resulting in a first intermediate 8-by-8 matrix, denoted 32. Next, as illustrated in
(T2,1 × f(2,1)) + (T2,2 × f(2,2)) + … + (T2,8 × f(2,8)),
resulting in a second intermediate 8-by-8 matrix, denoted 34. This process is repeated six more times, resulting in the generation of additional intermediate matrices. It is therefore apparent that eight instantiations of the 8-way parallel multiply-accumulate instruction would be required to execute all of the scalar multiplications required of the IDCT computation.
A final stage is then needed to add the eight intermediate matrices in order to produce the matrix “C”, and this can be achieved via one final instantiation of the parallel multiply-add instruction (with the vector of scalars set to [1 1 1 1 1 1 1 1], since no multiplication is required at this final stage).
Thus, it is noted that a mere 9 instantiations of the above-described parallel multiply-add instruction are sufficient for computation of a complete 8-by-8 IDCT. More generally, an N-by-N IDCT can be computed using parallel multiply-add instructions capable of multiplying and accumulating the product of W matrices of size N×N with a vector of W scalars in as few as N²/W + N²/W² + … + 1 operations. This is noteworthy not only because of the savings in terms of computation time with respect to conventional computation of the IDCT, but also because the savings are achieved through the exploitation of hardware already extant within the GPU 10. Of course, it should be understood that W need not be equal to a root of N². If W is indeed not a root of N², then up to ⌈log_W(N²)⌉ additional operations may need to be performed.
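The nine-instruction count can be sketched as follows (a simplified model, with W = 8; the basis matrices are random stand-ins here, since any linear combination exercises the same instruction count):

```python
import numpy as np

N = W = 8
rng = np.random.default_rng(0)
F = rng.standard_normal((N, N))          # frequency-domain coefficients
T = rng.standard_normal((N, N, N, N))    # stand-ins for the matrices Tu,v

def parallel_mac(mats, scalars):
    # Models one W-way parallel multiply-accumulate instruction: the
    # element-by-element sum of W matrices, each scaled by its own scalar.
    return sum(s * m for s, m in zip(scalars, mats))

# Stage 1: one instruction per row of "F" -> 8 intermediate matrices
intermediates = [parallel_mac([T[u, v] for v in range(N)], F[u])
                 for u in range(N)]          # 8 instructions

# Stage 2: one final instruction with unit scalars sums the intermediates
C = parallel_mac(intermediates, np.ones(N))  # the 9th instruction

direct = sum(F[u, v] * T[u, v] for u in range(N) for v in range(N))
assert np.allclose(C, direct)
```

The function name `parallel_mac` is illustrative; on actual hardware each call corresponds to a single pixel-shader multiply-accumulate instruction rather than a Python loop.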
Notwithstanding the remarkable efficiency of the above-described IDCT computation technique, there are embodiments of the invention for which even more efficient performance can be achieved. This is based on the observation that an image will tend to have a predominance of lower-frequency components. Thus, it is often the case that one or more higher-frequency components, i.e., frequency-domain coefficients f(x,y), 1≦x≦N, 1≦y≦N, occupying positions in the matrix “F” for which the values of x and y are larger rather than smaller, will be zero upon either generation or compression of the matrix “F”. Consequently, any product involving any of these null coefficients will be zero.
Clearly, therefore, performing scalar multiplication and addition involving a null coefficient wastes computing and input/output (I/O) resources and thus it would be advantageous to avoid performing these unnecessary operations by first examining the elements of the matrix “F” prior to computation of the IDCT.
Stated differently, it is not necessary to fixedly assign the range of elements of the matrix “F” to which each parallel multiply-accumulate instruction will be applied. Rather, it is possible to select, for each new N×N matrix “F”, the particular range of elements that will be used to generate the intermediate matrices 32, 34, etc. Hence, for a matrix “F” having Z non-zero elements, a complete IDCT computation could be performed using ⌈Z/W⌉ + ⌈Z/W²⌉ + … + 1 parallel multiply-accumulate instructions, where ⌈·⌉ denotes the rounding up of a fraction to the next highest integer. In the limit, for a highly sparse matrix “F” with W (or fewer) non-zero frequency-domain coefficients, a single parallel multiply-accumulate instruction would suffice to perform the entire IDCT.
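The instruction-count formula can be expressed as a small helper (a hypothetical function name, for illustration only):

```python
import math

def mac_instructions(Z, W):
    # Number of W-way parallel multiply-accumulate instructions needed
    # for a matrix with Z non-zero coefficients: each pass reduces the
    # number of matrices still to be combined by a factor of W (rounded
    # up), until a single result matrix remains.
    count = 0
    while Z > 1:
        Z = math.ceil(Z / W)
        count += Z
    return max(count, 1)

assert mac_instructions(64, 8) == 9   # dense 8x8 matrix: 8 + 1
assert mac_instructions(20, 8) == 4   # 20 non-zeros: 3 + 1
assert mac_instructions(8, 8) == 1    # W or fewer non-zeros: a single MAC
```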
Those skilled in the art will appreciate that arithmetic operations involving numerous multiplications and additions are subject to the possibility of overflow and/or loss of precision. By “overflow” is meant the situation in which, even though the final result of an arithmetic manipulation may lie in a range that is representable by a given number of bits, the intermediate results that lead to the final result are too large to be represented by that number of bits. This leads to a distorted value for the final result of the operation. By “loss of precision” (sometimes termed “underflow”), on the other hand, is meant the situation in which very small values are truncated or rounded at intermediate stages, causing the individual errors generated by numerous roundings or truncations to accumulate and bias the final result of the operation.
The problems of overflow and loss of precision are more prevalent with fixed-point arithmetic than with floating-point arithmetic, since the dynamic range of a fixed-point representation is much narrower than that of a floating-point representation. However, although the use of floating-point arithmetic leads to fewer overflow/underflow issues, its use is not always recommended for high-speed real-time processing applications. Therefore, it is expected that overflow and loss of precision problems will need to be overcome in a graphics processing unit 10 employing a fixed-point processor 12.
One possible solution to the overflow problem is to apply a preventative measure (a “reduction factor”) prior to the execution of any arithmetic operations. In addition, an opposite, compensatory measure is applied to the result obtained after the last arithmetic operation has been executed. The preventative measure, which may be achieved by bit shifting, for example, effectively moves the dynamic range of representable numbers to the subset of rational numbers where it is needed most.
Therefore, rather than limit the absolute smallest and absolute largest values that may be assigned a base-two representation, the preventative measure adjusts the absolute largest value as appropriate, and the granularity associated with the representation of smaller values depends on the number of available bits. Of course, since a result represented in this “shifted” domain is necessarily biased, the compensatory measure, which follows execution of the arithmetic operations, introduces an equal and opposite bias, so that the base-two number accurately represents the result of the entire operation.
By way of example, the overflow problem can be addressed by determining the hypothetical worst-case maximum value of the final result and consequently reducing the value of each of the frequency-domain coefficients as a function of this hypothetical worst-case maximum value. The worst-case overflow arises when all of the frequency-domain coefficients are of the same sign and multiply the greatest amongst all pre-computed factors in the matrix 30.
To prevent overflow, therefore, the IDCT operation performed by the processor 12 may include the steps shown in
Specifically, at step 410, the processor 12 performs a summation (denoted “S”) of the absolute value of all of the frequency-domain coefficients in the matrix “F”. At step 420, the processor 12 determines the largest magnitude (denoted “R”) amongst all of the pre-computed factors (i.e., the largest absolute value of any of the elements of any of the matrices Tu,v, 1≦u≦N, 1≦v≦N). At step 430, the processor 12 determines the product of the values found in steps 410 and 420, resulting in the worst-case maximum value for a color-domain coefficient. At step 440, the product found at step 430 is compared against the maximum value that can be represented by the number of bits available to a color-domain coefficient.
If the number of bits is adequate to represent the worst-case maximum value for a color-domain coefficient, then no reduction in coefficient value is required, and the processor 12 proceeds to step 450, consisting of performing the arithmetic operations as described earlier in this specification. If, however, the worst-case maximum color-domain coefficient is greater than the maximum value that can be represented using the available number of bits, then the processor 12 proceeds to step 460, where a sufficiently high “reduction factor” is applied in order to guarantee that the product formed at step 430, scaled by the reduction factor, will be amenable to representation by the available number of bits. The processor 12 then proceeds to step 450, described above.
Upon execution of the arithmetic operations at step 450, the processor 12 proceeds to step 470, where a compensatory measure, i.e., the inverse of the reduction factor used at step 460, is applied to each color-domain coefficient. Of course, if step 460 was never executed, then no inverse reduction factor needs to be applied. This is the end of the computation of the IDCT for the current input frame. It should be noted that the above-described technique is particularly advantageous in those instances where the final color-domain coefficients (obtained upon completion of step 470) “fit” within the given number of bits, even though certain intermediate results would otherwise have exceeded the maximum value for a color-domain coefficient.
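Steps 410 through 470 can be sketched as follows. This is a simplified floating-point model assuming power-of-two reduction factors (i.e., bit shifts); the function and variable names are illustrative, not from the patent:

```python
import numpy as np

def idct_with_overflow_guard(F, T, bits=16):
    # Model of steps 410-470: bound the worst case, apply a reduction
    # factor if needed, compute, then apply the compensatory measure.
    max_rep = 2 ** (bits - 1) - 1              # largest representable value
    S = np.abs(F).sum()                        # step 410: sum of |f(u,v)|
    R = np.abs(T).max()                        # step 420: largest |factor|
    worst = S * R                              # step 430: worst-case result
    k = 0                                      # reduction factor = 2**k
    while worst / 2 ** k > max_rep:            # steps 440/460: pick k
        k += 1
    Fr = F / 2 ** k                            # preventative bit shift
    N = F.shape[0]
    C = sum(Fr[u, v] * T[u, v]
            for u in range(N) for v in range(N))   # step 450: arithmetic
    return C * 2 ** k                          # step 470: inverse factor
```

In floating point the reduction is lossless, so the guarded result matches the unguarded one exactly; on fixed-point hardware the shift instead trades a small loss of low-order precision for freedom from intermediate overflow.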
For its part, the loss of precision (underflow) problem can be addressed by providing a magnification factor rather than a reduction factor. Thus, upon execution of step 440 in
If step 442 reveals that the color-domain coefficients are as large as possible, then it is not necessary to provide protection for underflow and the processor 12 proceeds to step 450, described above. However, if the color-domain coefficients are small enough to be magnified, then a suitable magnification factor is applied at step 445. The object of applying a magnification factor is to cause the product formed at step 430, when scaled by the magnification factor, to require the maximum number of available bits for its representation.
Following step 445, the processor 12 then executes steps 450 and 470, described above. It will be appreciated that the compensatory measure applied at step 470 should now take into account whether a reduction factor was applied at step 460 or whether a magnification factor was applied at step 445. Naturally, it is within the realm of possibility that no preventative measure, i.e., neither a reduction factor nor a magnification factor, will have been applied, in which case no compensatory measure needs to be applied at step 470.
It will also be appreciated that the above-described technique for underflow prevention is particularly advantageous in those instances where the final color-domain coefficients (obtained upon execution of step 470) are large enough to require one or more bits of representation, even though the values used in the arithmetic operations corresponding to the IDCT computation (step 450) are, in the absence of magnification, inferior to the smallest number to which a base-two representation can be assigned.
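The magnification counterpart (steps 442/445) can be sketched the same way: choose the largest power-of-two factor that still keeps the worst-case result representable. The helper below is illustrative, not patent code:

```python
def magnification_shift(worst, bits=16):
    # Steps 442/445: find the largest k such that worst * 2**k still
    # fits in the available bits; the coefficients are magnified by
    # 2**k before the arithmetic and divided by 2**k afterwards (470).
    if worst <= 0:
        return 0
    max_rep = 2 ** (bits - 1) - 1
    k = 0
    while worst * 2 ** (k + 1) <= max_rep:
        k += 1
    return k

assert magnification_shift(100, bits=16) == 8      # 100 * 256 <= 32767
assert magnification_shift(40_000, bits=16) == 0   # already at capacity
```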
Of further note is the fact that when the IDCT computation involves the creation of intermediate results, such intermediate results may be subject to different conditions, i.e., some may be subject to overflow and others to loss of precision. To this end, it may be advantageous to combine the above-described overflow and underflow prevention techniques and to apply them to intermediate results obtained in the course of executing the IDCT in a stage-wise fashion.
Accordingly, as exemplified in
It is noted that eight intermediate matrices 60_v, 1≦v≦8, are in this case obtained, each such intermediate matrix containing eight rows of eight elements per row, the element in row x, column y of intermediate matrix 60_v being the dot product of the scaled eight-vector of elements in row v of the matrix “F” and the eight-vector of elements occupying position (x,y) in each of the eight matrices in row v of the matrix 30. Since each element of intermediate matrix 60_v is scaled by the scaling factor Δ_v, and since the scaling factors for different intermediate matrices are independently selected, it is apparent that, in order to add corresponding elements of the eight intermediate matrices 60_1, . . . , 60_8, an inverse scaling factor first needs to be applied to each intermediate matrix.
However, simply applying the inverse scaling factor prior to addition of corresponding elements of the intermediate matrices 60_1, . . . , 60_N may result in underflow, which would defeat the purpose of having applied the scaling factors Δ_x, 1≦x≦N, in the first place. Therefore, it is advantageous to split the inverse scaling process into two parts, one performed prior to addition of the corresponding elements of the intermediate matrices and the other performed after this element-by-element summation.
A suitable technique for applying inverse scaling to the intermediate matrices in this manner is now described with reference to the flowchart of
At step 720, an inverse scaling factor is applied to each intermediate matrix, if applicable, in order to normalize the scaling factors applied to the intermediate matrices (i.e., in order to render the net scaling for all normalized intermediate matrices equal to the residual scaling factor Δres). Upon such normalization, the matrices could conceivably be directly added together, followed by application of a de-normalizing factor.
However, there is a risk of overflow and this is averted in the following manner. Specifically, at step 730, a “base scaling factor” is set to 2. At step 740, a first intermediate matrix is selected and denoted “P”. For each of the N−1 remaining intermediate matrices, the following steps are performed. Specifically, at step 750, a new intermediate matrix is selected and denoted “Q”. At step 760, the elements of “P” are divided by 2 and the elements of “Q” are divided by the base scaling factor. Addition of the corresponding elements of “P” and “Q” is performed at step 770, resulting in a “temporary sum matrix”.
Now, at step 780, the existence of other intermediate matrices that have not yet been added is determined. If such intermediate matrices exist, then the base scaling factor is doubled (at step 790), matrix “P” is set equal to the temporary sum matrix (at step 792) and the process returns to step 750. However, if all intermediate matrices have been added, then the temporary sum matrix is converted to the final matrix “C” of color-domain coefficients at step 794. Specifically, the matrix “C” is formed by a de-normalization process, namely, each element of the temporary sum matrix, multiplied by the most recent value of the base scaling factor (i.e., 2^(N−1)), is divided by the residual scaling factor Δres.
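The pairwise addition of steps 730 through 794 can be sketched as follows. Floating point stands in for fixed point here (so the scaling is exact), and the function name is illustrative:

```python
import numpy as np

def progressive_sum(mats):
    # Steps 730-794: add the normalized intermediate matrices one at a
    # time, halving the running sum "P" and dividing each newcomer "Q"
    # by a doubling base scaling factor, so that no single addition can
    # overflow a fixed-point accumulator.
    base = 2.0                       # step 730
    P = mats[0]                      # step 740
    for Q in mats[1:]:               # steps 750-770
        P = P / 2.0 + Q / base
        base *= 2.0                  # step 790 (doubling the base factor)
    return P * (base / 2.0)          # step 794: de-normalize by 2**(N-1)
```

With eight intermediate matrices, `base / 2.0` at the end equals 2^7, matching the de-normalization factor 2^(N−1) described above; the residual scaling factor Δres would be divided out at the same final step.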
The above process allows overflow and underflow to be addressed on the basis of subsets of the elements of the matrix of frequency-domain coefficients “F”, while continuing to reap the benefits arising from pre-computation of the matrices Tu,v and continuing to afford the advantages of using parallel multiply-accumulate instructions.
As a further extension of the technique of
In fact, the dynamic clustering can be based on the absolute value of the frequency-domain coefficients, such that scaling factors will be associated to different groups of elements of “F” having absolute values in approximately the same range. This can further help minimize the risk of underflow and overflow. It is noted that the clusters of elements associated with a common scaling factor may change from one block of frequency-domain coefficients to the next.
It is also within the scope of the invention to further reduce the risk of underflow by ensuring that the order in which the temporary sum matrix is built up at step 770 begins with the addition of intermediate matrices with small-absolute-valued coefficients and continues with the addition of intermediate matrices having elements of progressively greater absolute value. To this end, it may be advantageous to select, as the initial matrix “P”, the intermediate matrix associated with the frequency-domain coefficients of smallest absolute value, and to select, as successive matrices “Q”, those intermediate matrices associated with the frequency-domain coefficients of increasing absolute value. This allows the color-domain coefficients to be accurately represented in a fixed-point notation, even when there is a wide disparity amongst the non-zero values of the frequency-domain coefficients.
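The underflow-reducing ordering described above amounts to a sort by coefficient magnitude. A hypothetical helper (the name `order_for_accumulation` is illustrative), assuming each intermediate matrix is paired with the frequency-domain coefficient it was derived from:

```python
def order_for_accumulation(coeffs, intermediates):
    """Return the intermediate matrices ordered so that the one whose
    frequency-domain coefficient has the smallest absolute value comes
    first.  Summing the small-valued matrices together first lets their
    combined contribution grow before it is merged with the large-valued
    matrices, rather than being swamped by them in a fixed-point
    representation with limited precision.
    """
    paired = sorted(zip(coeffs, intermediates), key=lambda cm: abs(cm[0]))
    return [m for _, m in paired]
```

The reordered list can then be fed directly to the pairwise accumulation of steps 740 through 794, with the smallest-magnitude matrix serving as the initial matrix "P".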
Persons skilled in the art should of course appreciate that the present invention may be advantageously applied to transforms other than the inverse discrete cosine transform. For instance, the present invention could just as easily be applied to the forward discrete cosine transform, the forward and inverse discrete sine transforms, the discrete Fourier, Walsh-Hadamard, Hartley and various wavelet transforms, etc. Also, applicability of the present invention extends beyond the realm of image processing to such fields of endeavour as speech processing, audio processing and spread spectrum code-division multiple-access (CDMA), where one- or two-dimensional arrays are subject to transformation.
Those skilled in the art will also appreciate that the processor 12 may be implemented as an arithmetic and logic unit (ALU) having access to a code memory (not shown) which stores program instructions for the operation of the ALU. The program instructions could be stored on a medium which is fixed, tangible and readable directly by the processor (e.g., removable diskette, CD-ROM, ROM, or fixed disk), or the program instructions could be stored remotely but transmittable to the processor 12 via a modem or other interface device (e.g., a communications adapter) connected to a network over a transmission medium. The transmission medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented using wireless techniques (e.g., microwave, infrared or other transmission schemes).
Those skilled in the art should further appreciate that the program instructions stored in the code memory can be compiled from a program written in any of a number of programming languages for use with many computer architectures or operating systems. For example, the program may be written in an assembly language such as one suitable for use with a pixel shader, while other versions may be written in a procedural programming language (e.g., "C") or an object-oriented programming language (e.g., "C++" or "JAVA").
Those skilled in the art will also appreciate that in some embodiments of the invention, the functionality of the processor 12 may be implemented as pre-programmed hardware or firmware elements (e.g., application specific integrated circuits (ASICs), electrically erasable programmable read-only memories (EEPROMs), etc.), or other related components.
While specific embodiments of the present invention have been described and illustrated, it will be apparent to those skilled in the art that numerous modifications and variations can be made without departing from the scope of the invention as defined in the appended claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4176684 *||Oct 26, 1977||Dec 4, 1979||Kelso Eileen E||Splash down|
|US4196448 *||May 15, 1978||Apr 1, 1980||The United States Of America As Represented By The Secretary Of The Navy||TV bandwidth reduction system using a hybrid discrete cosine DPCM|
|US4348186 *||Dec 17, 1979||Sep 7, 1982||The United States Of America As Represented By The Secretary Of The Navy||Pilot helmet mounted CIG display with eye coupled area of interest|
|US4479784 *||Mar 3, 1981||Oct 30, 1984||The Singer Company||Eye line-of-sight responsive wide angle visual system|
|US4588382 *||Jan 26, 1984||May 13, 1986||The Singer Company||Wide angle area-of-interest visual image projection system|
|US4634384 *||Feb 2, 1984||Jan 6, 1987||General Electric Company||Head and/or eye tracked optically blended display system|
|US5136401 *||Dec 7, 1990||Aug 4, 1992||Kabushiki Kaisha Toshiba||Image layout apparatus for performing pattern modification such as coloring of an original image|
|US5137450 *||Nov 5, 1990||Aug 11, 1992||The United States Of America As Represented By The Secretary Of The Air Force||Display for advanced research and training (DART) for use in a flight simulator and the like|
|US5242306 *||Feb 11, 1992||Sep 7, 1993||Evans & Sutherland Computer Corp.||Video graphic system and process for wide field color display|
|US5278646 *||Mar 17, 1993||Jan 11, 1994||At&T Bell Laboratories||Efficient frequency scalable video decoding with coefficient selection|
|US5487665 *||Oct 31, 1994||Jan 30, 1996||Mcdonnell Douglas Corporation||Video display system and method for generating and individually positioning high resolution inset images|
|US5726671 *||Oct 7, 1996||Mar 10, 1998||Hughes Electronics||Helmet/head mounted projector system|
|US5748770 *||May 15, 1995||May 5, 1998||Polaroid Corporation||System and method for color recovery using discrete cosine transforms|
|US6196845 *||Jun 29, 1998||Mar 6, 2001||Harold R. Streid||System and method for stimulating night vision goggles|
|US6252576 *||Aug 6, 1998||Jun 26, 2001||In-System Design, Inc.||Hardware-efficient system for hybrid-bilinear image scaling|
|US6314136 *||Aug 1, 1997||Nov 6, 2001||Creative Technology Ltd.||Method for performing wavelet-based image compaction losslessly and low bit precision requirements|
|US6421094 *||Sep 8, 1998||Jul 16, 2002||Lg Electronics Inc.||HDTV video display processor|
|US6452970 *||Oct 22, 1998||Sep 17, 2002||Siemens Aktiengesellschaft||Method and device for processing a digitized image|
|US6504533 *||Apr 17, 2000||Jan 7, 2003||Sony Corporation||Image display apparatus|
|US6683987 *||Mar 8, 2000||Jan 27, 2004||Victor Company Of Japan, Ltd.||Method and apparatus for altering the picture updating frequency of a compressed video data stream|
|US20010008428 *||Jan 17, 2001||Jul 19, 2001||Lg Electronics, Inc.||Device and method for decoding televison video signal|
|US20010019079 *||Dec 22, 2000||Sep 6, 2001||Jean-Louis Massieu||Optoelectronic device and process for acquiring symbols, such as bar codes, using a two-dimensional sensor|
|US20010043163 *||Mar 14, 1997||Nov 22, 2001||Jonathan David Waldern||Method of and apparatus for viewing an image|
|US20020097921 *||Mar 19, 2002||Jul 25, 2002||Shinji Wakisawa||Resolution conversion system and method|
|US20020113898 *||Apr 10, 1998||Aug 22, 2002||Satoshi Mitsuhashi||Picture processing apparatus and method, and recording medium|
|US20020118019 *||Feb 21, 2002||Aug 29, 2002||Konica Corporation||Image processing methods and image processing apparatus|
|US20030021347 *||Jul 24, 2001||Jan 30, 2003||Koninklijke Philips Electronics N.V.||Reduced comlexity video decoding at full resolution using video embedded resizing|
|US20030021486 *||Jul 27, 2001||Jan 30, 2003||Tinku Acharya||Method and apparatus for image scaling|
|US20030053702 *||Feb 21, 2001||Mar 20, 2003||Xiaoping Hu||Method of compressing digital images|
|US20030112333 *||Nov 16, 2001||Jun 19, 2003||Koninklijke Philips Electronics N.V.||Method and system for estimating objective quality of compressed video data|
|US20030206582 *||Feb 28, 2003||Nov 6, 2003||Microsoft Corporation||2-D transforms for image and video coding|
|1||A Faster Discrete Fourier Transform, posted in Oct. 2001 by Ian Kaplan as part of A notebook Compiled While Reading Understanding Digital Signal Processing by Lyons, downloaded from http://www.bearcave.com/misl/misl-tech/signal/fasterdft on Jan. 17, 2002.|
|2||*||Cabeen et al., "Image Compression and the Discrete Cosine Transform", Fall 1998.|
|3||CL 480 MPEG System Decoder User's Manual, Chapter 2: MPEG Overview, pp. 9-20; C-Cube Microsystems 1995.|
|4||Fast DCT (Discrete Cosine Transform) algorithms, posted on unknown date, downloaded from http://www.faqs.org/faqs/compression-faq/part1/section-19.html on Jan. 17, 2002.|
|5||Information Technology-Generic Coding of Moving Pictures and Associated Audio Information: Video, ISO/IEC 13818-2, ITU-T Draft Rec. H.262, May 10, 1994, p. 131.|
|6||Information Technology-Generic Coding of Moving Pictures and Associated Audio Information: Video, ISO/IEC 13818-2: 2000(E), ITU-T Rec. H.262(2000E), pp. 1-120.|
|7||*||Lee, Woobin. "DCT and IDCT" (1995).|
|8||*||LF3320, by Logic Devices Incorporated (Copyright Aug. 24, 1999 and Apr. 18, 2001).|
|9||*||Neelamani et al. "Compression Color Space Estimation of JPEG Images Using Lattice Basis Reduction". May 3, 2001.|
|10||Video Demystified, by Keith Jack, Chapter 10: Video Compression / Decompression, pp. 374-385.|
|11||*||Zwernemann, Brad. "An 8X8 DCT Implementation on the Motorola DSP56800E". Rev. 0, Aug. 2001.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7768519 *||Sep 19, 2006||Aug 3, 2010||Nvidia Corporation||High-performance crossbar for high throughput pipelines|
|US8081847 *||Dec 31, 2007||Dec 20, 2011||Brandenburgische Technische Universitaet Cottbus||Method for up-scaling an input image and an up-scaling system|
|US8259814||Nov 12, 2009||Sep 4, 2012||Cisco Technology, Inc.||Processing of a video program having plural processed representations of a single video signal for reconstruction and output|
|US8259817||Nov 12, 2009||Sep 4, 2012||Cisco Technology, Inc.||Facilitating fast channel changes through promotion of pictures|
|US8320465||Nov 12, 2009||Nov 27, 2012||Cisco Technology, Inc.||Error concealment of plural processed representations of a single video signal received in a video program|
|US8326131||Feb 22, 2010||Dec 4, 2012||Cisco Technology, Inc.||Signalling of decodable sub-sequences|
|US8416858||Mar 1, 2009||Apr 9, 2013||Cisco Technology, Inc.||Signalling picture encoding schemes and associated picture properties|
|US8416859||May 21, 2008||Apr 9, 2013||Cisco Technology, Inc.||Signalling and extraction in compressed video of pictures belonging to interdependency tiers|
|US8681876||Nov 12, 2009||Mar 25, 2014||Cisco Technology, Inc.||Targeted bit appropriations based on picture importance|
|US8699578||Jun 17, 2008||Apr 15, 2014||Cisco Technology, Inc.||Methods and systems for processing multi-latticed video streams|
|US8705622||Apr 8, 2009||Apr 22, 2014||Qualcomm Incorporated||Interpolation filter support for sub-pixel resolution in video coding|
|US8705631 *||Jun 17, 2008||Apr 22, 2014||Cisco Technology, Inc.||Time-shifted transport of multi-latticed video for resiliency from burst-error effects|
|US8718388||Dec 11, 2008||May 6, 2014||Cisco Technology, Inc.||Video processing with tiered interdependencies of pictures|
|US8761266||Nov 12, 2009||Jun 24, 2014||Cisco Technology, Inc.||Processing latticed and non-latticed pictures of a video program|
|US8782261||Apr 3, 2009||Jul 15, 2014||Cisco Technology, Inc.||System and method for authorization of segment boundary notifications|
|US8804831||Apr 8, 2009||Aug 12, 2014||Qualcomm Incorporated||Offsets at sub-pixel resolution|
|US8804843||Apr 10, 2012||Aug 12, 2014||Cisco Technology, Inc.||Processing and managing splice points for the concatenation of two video streams|
|US8804845||Jul 31, 2007||Aug 12, 2014||Cisco Technology, Inc.||Non-enhancing media redundancy coding for mitigating transmission impairments|
|US8873932||Dec 11, 2008||Oct 28, 2014||Cisco Technology, Inc.||Inferential processing to ascertain plural levels of picture interdependencies|
|US8875199||Jul 31, 2007||Oct 28, 2014||Cisco Technology, Inc.||Indicating picture usefulness for playback optimization|
|US8886022||Jun 12, 2009||Nov 11, 2014||Cisco Technology, Inc.||Picture interdependencies signals in context of MMCO to assist stream manipulation|
|US8949883||May 12, 2010||Feb 3, 2015||Cisco Technology, Inc.||Signalling buffer characteristics for splicing operations of video streams|
|US8958486||Jul 31, 2007||Feb 17, 2015||Cisco Technology, Inc.||Simultaneous processing of media and redundancy streams for mitigating impairments|
|US8971402||Jun 17, 2008||Mar 3, 2015||Cisco Technology, Inc.||Processing of impaired and incomplete multi-latticed video streams|
|US9077971||Apr 8, 2009||Jul 7, 2015||Qualcomm Incorporated||Interpolation-like filtering of integer-pixel positions in video coding|
|US9235769 *||Mar 15, 2012||Jan 12, 2016||Herta Security, S.L.||Parallel object detection method for heterogeneous multithreaded microarchitectures|
|US9350999||Apr 15, 2014||May 24, 2016||Tech 5||Methods and systems for processing latticed time-skewed video streams|
|US9407935||Dec 30, 2014||Aug 2, 2016||Cisco Technology, Inc.||Reconstructing a multi-latticed video signal|
|US9467696||Oct 2, 2012||Oct 11, 2016||Tech 5||Dynamic streaming plural lattice video coding representations of video|
|US9521420||Aug 12, 2014||Dec 13, 2016||Tech 5||Managing splice points for non-seamless concatenated bitstreams|
|US9609039||Jan 7, 2015||Mar 28, 2017||Cisco Technology, Inc.||Splice signalling buffer characteristics|
|US20050289523 *||Apr 14, 2005||Dec 29, 2005||Broadcom Corporation||Method and apparatus for transforming code of a non-proprietary program language into proprietary program language|
|US20090169128 *||Dec 31, 2007||Jul 2, 2009||Brandenburgische Technische Universitat Cottbus||Method for up-scaling an input image and an up-scaling system|
|US20090257493 *||Apr 8, 2009||Oct 15, 2009||Qualcomm Incorporated||Interpolation filter support for sub-pixel resolution in video coding|
|US20090257500 *||Apr 8, 2009||Oct 15, 2009||Qualcomm Incorporated||Offsets at sub-pixel resolution|
|US20090257501 *||Apr 8, 2009||Oct 15, 2009||Qualcomm Incorporated||Interpolation-like filtering of integer-pixel positions in video coding|
|US20090313668 *||Jun 17, 2008||Dec 17, 2009||Cisco Technology, Inc.||Time-shifted transport of multi-latticed video for resiliency from burst-error effects|
|US20100053863 *||Nov 12, 2009||Mar 4, 2010||Research In Motion Limited||Handheld electronic device having hidden sound openings offset from an audio source|
|US20100118973 *||Nov 12, 2009||May 13, 2010||Rodriguez Arturo A||Error concealment of plural processed representations of a single video signal received in a video program|
|US20100118978 *||Nov 12, 2009||May 13, 2010||Rodriguez Arturo A||Facilitating fast channel changes through promotion of pictures|
|US20130243329 *||Mar 15, 2012||Sep 19, 2013||Herta Security, S.L.||Parallel object detection method for heterogeneous multithreaded microarchitectures|
|WO2008152513A2 *||Jun 11, 2008||Dec 18, 2008||Mercury Computer Systems Sas||Methods and apparatus for image compression and decompression using graphics processing unit (gpu)|
|WO2008152513A3 *||Jun 11, 2008||Jul 7, 2011||Mercury Computer Systems Sas||Methods and apparatus for image compression and decompression using graphics processing unit (gpu)|
|U.S. Classification||345/643, 382/239|
|Cooperative Classification||G06F17/147, H04N19/42, H04N19/60|
|European Classification||H04N7/30, H04N7/26L, G06F17/14M|
|Mar 25, 2002||AS||Assignment|
Owner name: MATROX GRAPHICS INC., CANADA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:COTE, JEAN-FRANCOIS;OSTIGUY, JEAN-JACQUES;REEL/FRAME:012733/0613
Effective date: 20020322
|Sep 4, 2007||CC||Certificate of correction|
|Apr 23, 2010||FPAY||Fee payment|
Year of fee payment: 4
|Mar 17, 2014||FPAY||Fee payment|
Year of fee payment: 8