US 7321625 B2
Abstract
Wavelet based multiresolution video representations generated by multi-scale motion compensated temporal filtering (MCTF) and spatial wavelet transform are disclosed. Since temporal filtering and spatial filtering are separated in generating such representations, there are many different ways to intertwine single-level MCTF and single-level spatial filtering, resulting in many different video representation schemes with spatially scalable motion vectors for the support of different combinations of spatial scalability and temporal scalability. The problem of designing such a video representation scheme to fulfill the spatial/temporal scalability requirements is studied. Signaling of the scheme to the decoder is also investigated. Since MCTF is performed subband by subband, motion vectors are available for reconstructing video sequences of any possible reduced spatial resolution, restricted by the dyadic decomposition pattern and the maximal spatial decomposition level. It is thus clear that the family of decomposition schemes provides efficient and versatile multiresolution video representations for fully scalable video coding.
Claims (23)
1. A method for determining multiresolution video representations for scalable video coding, the method comprising:
performing subband motion compensated temporal filtering (MCTF) on input video signals, including performing a spatial decomposition on the input video signals, wherein motion vectors are available for subbands of the spatial decomposition; and
obtaining a multiresolution representation of the filtered input video signals along both spatial directions and temporal direction,
wherein spatial scalability is supported by video coders that use the multiresolution video representations, and, when combined with temporal scalability, is limitedly supported by video coders, depending on the representation that is used by the video coders, and further wherein combined spatial scalability and temporal scalability supported by a representation can be characterized by a spatial-temporal scalability (ST) table.
2. The method of
3. The method of
4. A method for determining multiresolution video representations for scalable video coding, the method comprising:
performing subband motion compensated temporal filtering (MCTF) on input video signals, and
obtaining a multiresolution representation of the filtered input video signals along both spatial directions and temporal direction,
wherein the MCTF intertwines arbitrarily with spatial filtering to generate the multiresolution video representations, where the intertwining determines availability of spatially scalable motion vectors for support of combined spatial scalability and temporal scalability and the intertwining is critical for design of a video representation scheme to fulfill scalability requirements for a video coder,
wherein the scalability requirements are signified by a spatial-temporal scalability (ST) table, and the intertwining of MCTF and spatial filtering may be determined by spatial decomposition parameters derived from the given ST-table.
5. The method of
6. The method of
7. The method of
8. The method of
9. A method for determining multiresolution video representations for scalable video coding, the method comprising:
performing subband motion compensated temporal filtering (MCTF) on input video signals, and
obtaining a multiresolution representation of the filtered input video signals along both spatial directions and temporal direction,
wherein the MCTF intertwines arbitrarily with spatial filtering to generate the multiresolution video representations, where the intertwining determines availability of spatially scalable motion vectors for support of combined spatial scalability and temporal scalability, and the intertwining is critical for design of a video representation scheme to fulfill scalability requirements for a video coder,
wherein the scalability requirements are signified by a spatial-temporal scalability (ST) table, and the intertwining of MCTF and spatial filtering may be determined by temporal decomposition parameters derived from the given ST-table.
10. The method of
11. The method of
identifying a staircase shape of the upper-right boundary of the ST-table, the staircase shape having vertical directions and horizontal directions; and
coding the staircase shape with two symbols,
1 for the vertical directions and 0 for the horizontal directions.
12. The method of
coding the staircase shape starting from the upper-left corner of the upper-right boundary or from the lower-right corner of the upper-right boundary.
13. A family of multiresolution representations of video signals obtained by performing subband motion compensated temporal filtering (MCTF) for scalable video coding, the multiresolution representation of video signals being specified along two spatial directions and a temporal direction, wherein
the multiresolution representations are obtained by performing MCTF subband by subband for a spatial decomposition on the video signals,
subband MCTF requires spatially scalable motion vectors for subbands of the spatial decomposition,
the spatially scalable motion vectors are critical for reconstruction of spatial resolution reduced video signals, and spatial scalability is supported by video coders that use the multiresolution video representations,
the spatial scalability, when combined with temporal scalability, is limitedly supported by video coders, depending on a video representation that is used by the video coders, and further wherein the combined spatial scalability and temporal scalability supported by the video representation is characterized by a spatial-temporal scalability (ST) table.
14. The family of multiresolution video representations of
15. The family of multiresolution video representations of
16. A family of multiresolution representations of video signals obtained by performing subband motion compensated temporal filtering (MCTF) for scalable video coding, the multiresolution representation of video signals being specified along two spatial directions and a temporal direction, wherein the MCTF intertwines arbitrarily with spatial filtering to generate the family of video representations, and the intertwining patterns determine the availability of spatially scalable motion vectors for support of combined spatial scalability and temporal scalability, and further wherein scalability requirements for a video coder are signified by a spatial-temporal scalability (ST) table, and wherein the intertwining patterns of MCTF and spatial filtering may be determined by one of spatial decomposition parameters and temporal decomposition parameters derived from the given ST-table.
17. The family of multiresolution video representations of
18. The family of multiresolution video representations of
19. The family of multiresolution video representations of
20. The family of multiresolution video representations of
21. The family of multiresolution video representations of
22. The family of multiresolution video representations of
23. The family of multiresolution video representations of
Description
The present application relates generally to video coding. More particularly, the present invention relates to a wavelet based multiresolution video representation with spatially scalable motion vectors, and to video coders employing such techniques.
1 Introduction
Video streaming applications require video coding technologies that provide flexible scalability of a single bit stream, allowing seamless integration of servers, heterogeneous networks, terminals, and acquisition and storage devices with different characteristics in a multimedia framework as defined in the activities and publications of the Moving Picture Experts Group (MPEG) in connection with the standard MPEG-21. The so-called universal scalability of a video bitstream requires the flexible reconstruction from a single bitstream of video sequences of reduced temporal resolution, spatial resolution and/or quality resolution with fine granularity. It has been identified that scalability with high flexibility and arbitrary combination of spatial scalability, temporal scalability and signal-to-noise ratio (SNR) scalability is desired. In particular, transmission over variable-bandwidth networks, storage on a variety of media and operation of different-capability display devices would benefit from such functionality. Conventional video coding processes have a hybrid motion compensation and discrete cosine transform (DCT) coding architecture, and various types of scalability have been supported in the standards. The different types of scalability are achieved by layered video coding in these standards, but the approaches have not achieved the desired universal scalability due to the inflexibility of scalability and the sacrifice in coding performance. Alternatively, technologies such as wavelet coding, which inherently possess scalability features, can be potential candidates to achieve the universal scalability, if their performance matches the state of the art.
Wavelet transform has emerged as a tool for statistical signal, image and video processing. The wavelet domain provides a natural setting for many applications involving real-world signals, including estimation, detection, classification, compression and synthesis. Wavelet coding has been a well-known image coding tool which results in highly scalable and extremely efficient image coders. The wavelet transform provides a natural multiresolution representation for digital image signals; it also has other important properties such as energy compaction, locality, decorrelation, edge detection, etc., which are all important for scalable image coding. There have been many approaches extending wavelet techniques from the image coding area to the video coding area since the early 1990's. There are also many MPEG contributions promoting wavelet video coding technologies in MPEG. Most of these approaches take advantage of the highly efficient energy compaction property of wavelet transform to exploit the spatial redundancy of images to achieve coding efficiency in video coding. To exploit the interframe redundancy in video signals, a differential predictive coding technique is usually used in conventional hybrid motion compensation and transform coding processes. Recently, however, the wavelet transform has been used to effectively exploit the temporal redundancy or the interframe redundancy in video coding. The wavelet transform generates a multiresolution representation of video signals in both spatial direction and temporal direction which provides a natural and easy way to achieve spatial scalability and temporal scalability in video coding. In addition, fine granular scalability is easy to accomplish with wavelet based video coding without sacrificing coding efficiency. Recently, MPEG has created an Ad hoc Group (AhG) for exploration of new tracks in video coding in the area of interframe wavelet technology. 
The present work is related to various multiresolution video representations for scalable video coding with an emphasis on spatially scalable motion vectors. In order to address this problem, a new family of video decomposition processes is introduced; these processes utilize subband MCTF to generate multi-scale representations along the temporal direction. Since MCTF is performed subband by subband, motion vectors are available for reduced spatial resolutions, thus facilitating the support of spatial scalability by video coders that use the multiresolution representation in video coding. The family of video decomposition processes is generated by intertwining single-level temporal filtering (MCTF) and spatial filtering. A different intertwining pattern results in a multiresolution video representation that supports scalable motion vectors for different combinations of spatial scalability and temporal scalability. Thus, a video coder with specified scalability requirements requires a specific video representation process which has to be designed. The present disclosure studies the design of a multiresolution video representation based on scalability requirements. Techniques to transmit the designed video representation process to the decoder are also considered. The disclosed techniques are not restricted to any specific video coder; only the video representation processes for video coding are considered. With a video representation such as those disclosed herein, a video coder may code the representation coefficients with or without quantization, and the coder may or may not use a bitplane coding technique. However, these techniques are not discussed herein. The foregoing summary has been provided only by way of introduction. Nothing in this section should be taken as a limitation on the following claims, which define the scope of the invention.
By way of introduction, the video data communication system includes a video source, a video coder and a video decoder.
2 Multiresolution Representation with Discrete Wavelet Transform
2.1 Wavelet Basis and Multiresolution Representation
A wavelet is a small wave having its energy concentrated in time and frequency, thus giving a very good tool for analysis of transient, non-stationary, or time-varying phenomena. With a mother wavelet function ψ(t) and a corresponding scaling function φ(t), a multiscale wavelet basis can be formed by the functions
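The display that followed here was lost in extraction. The standard dyadic construction, which the surrounding text appears to assume (a hedged reconstruction; equation numbering in the original is not preserved), is:

```latex
\psi_{j,k}(t) = 2^{j/2}\,\psi\!\left(2^{j}t - k\right), \qquad
\varphi_{k}(t) = \varphi(t - k), \qquad j,k \in \mathbb{Z},
```

so that a signal s(t) admits the multiresolution expansion

```latex
s(t) = \sum_{k} c_{k}\,\varphi_{k}(t) \;+\; \sum_{j \ge 0}\sum_{k} d_{j,k}\,\psi_{j,k}(t).
```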
The coefficients of this linear expansion define the discrete wavelet transform. A multiresolution representation similar to equation (3) can be generated for multi-dimensional signals such as digital images and digital videos, and the coefficients of the linear representation define a multi-dimensional discrete wavelet transform. The linear representation is based on a separable wavelet basis which is constructed based on a one-dimensional (1D) wavelet basis. For example, a two-dimensional (2D) wavelet basis for the linear representation may be formed by the following 2D functions
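The 2D display was also lost; the standard separable construction, with both factors at the same dilation scale j as the next paragraph requires (a hedged reconstruction), is:

```latex
\bigl\{\,
\varphi_{j,k_1}(t_1)\,\varphi_{j,k_2}(t_2),\quad
\varphi_{j,k_1}(t_1)\,\psi_{j,k_2}(t_2),\quad
\psi_{j,k_1}(t_1)\,\varphi_{j,k_2}(t_2),\quad
\psi_{j,k_1}(t_1)\,\psi_{j,k_2}(t_2)
\,\bigr\}_{j,\,k_1,\,k_2 \in \mathbb{Z}}
```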
Notice that every element of the 2D separable wavelet basis (5) is made of two elements of a 1D wavelet basis, and it is important to point out that the two elements of the 1D wavelet basis have the identical dilation scale j, since the two dimensions of a digital image are tied together and only the frequency information at the same scale along directions t1 and t2 is of interest.
2.2 Filter Bank Implementation
The discrete wavelet transform is usually implemented with a multi-stage analysis filter bank. Previous work has shown the relation between wavelet coefficient calculation and filter banks. In the example below, a one-dimensional discrete signal s is decomposed by such a multi-stage analysis filter bank.
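A one-level two-channel stage and its multi-stage iteration can be sketched in Python. This is an illustrative sketch using the Haar filter pair; the patent does not fix a particular filter, and the function names are mine:

```python
import math

def haar_analysis(signal):
    """One stage of a two-channel Haar analysis filter bank:
    lowpass and highpass filtering followed by downsampling by 2."""
    assert len(signal) % 2 == 0
    low = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2)
           for i in range(len(signal) // 2)]
    high = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2)
            for i in range(len(signal) // 2)]
    return low, high

def haar_synthesis(low, high):
    """Inverse stage: upsample and filter to reconstruct the signal."""
    out = []
    for l, h in zip(low, high):
        out.append((l + h) / math.sqrt(2))
        out.append((l - h) / math.sqrt(2))
    return out

def multistage_analysis(signal, levels):
    """Multi-stage analysis filter bank: recursively split the lowpass
    branch, producing the pyramidal (multiresolution) decomposition."""
    subbands = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_analysis(approx)
        subbands.append(detail)
    subbands.append(approx)   # coarsest approximation last
    return subbands
```

Iterating only the lowpass branch is what yields the dyadic decomposition pattern referred to throughout the disclosure.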
A three-dimensional (3D) discrete wavelet transform for video signals can also be implemented with a multi-stage filter bank, similar to the 1D case.
2.3 Image/Video Coding
Discrete wavelet transforms generate a multiresolution representation for images and videos, while significantly reducing the redundancy of the signals. There have been many wavelet image coders that achieve significant coding efficiency by exploiting the residual intra-band/inter-band redundancy. Spatial scalability in a wavelet image coder can be provided by the multiresolution generated by a wavelet transform, and fine granularity scalability can also be achieved by incorporating bit-plane coding techniques. There have also been video coders that use a 3D discrete wavelet transform and achieve reasonable coding efficiency. However, for exploitation of temporal redundancy in video sequences, motion compensation seems more effective than a temporal wavelet transform, since the temporal redundancy exists among pixels along the motion directions, not among co-located pixels. In addition, since the decomposition of video signals using a 3D discrete wavelet transform does not provide separate temporal frequency information and spatial frequency information, it is not flexible to reconstruct a video sequence of non-uniformly reduced frame rate and image size. In other words, video coders using a 3D discrete wavelet transform cannot support an arbitrary combination of spatial scalability and temporal scalability.
3 Multiresolution Video Representation with Motion Compensated Temporal Filtering
3.1 Modified Multiscale Wavelet Basis for Video Signals
3.1.1 Separable Wavelet Basis with Hybrid Scale
Every element of a 2D separable wavelet basis, such as equation (5), is made of two elements of a 1D wavelet basis, and the two elements of the 1D wavelet basis have the identical dilation scale j.
If a 2D wavelet basis allows different dilation scales for the two components that generate an element of the 2D basis, then the coefficients of a signal with respect to such a basis carry frequency information at different scales along the two directions. It does not appear attractive to represent an image based on a wavelet basis with hybrid scales, since it generally does not make sense to consider the horizontal frequency and the vertical frequency separately. However, since the temporal frequency information and the spatial frequency information are not tightly tied together in video signals, it is beneficial to separately present the two types of frequency information in a multiresolution representation of video signals. Therefore, an element of a separable multiscale wavelet basis for video signals may have different scales for the temporal direction and the spatial directions. Accordingly, a multiresolution representation of video signals may be obtained by separating the 2D discrete wavelet transform along the spatial direction and the 1D discrete wavelet transform along the temporal direction. In other words, the decomposition of video signals along the temporal direction is not intertwined with the decomposition along the spatial direction. There are two such implementations which are equivalent. The first implementation is pixel-domain temporal filtering. The second implementation is wavelet-domain temporal filtering. The pixel-domain temporal filtering involves a 1D discrete wavelet transform along the temporal direction followed by a 2D wavelet transform for each frame. The wavelet-domain temporal filtering involves a 2D discrete wavelet transform for each frame followed by a 1D discrete wavelet transform along the temporal direction.
3.1.2 Separable Wavelet Basis with Hybrid Component
A separable wavelet basis of higher dimensions is based on one single 1D wavelet basis. The basis of equation (5), above, is an example. In other words, the components of every element of the basis come from the same 1D wavelet basis.
However, a similar separable wavelet basis may be formed based on multiple 1D wavelet bases, each element of the basis being constructed with components from these 1D bases. For example, a 2D wavelet basis may be formed in terms of two wavelet systems.
3.2 Motion Compensated Temporal Filtering (MCTF)
Since interframe redundancy exists between pixels along motion directions, it is desirable to apply the Haar wavelet transform to pixels along the motion trajectory rather than to pixels co-located in two consecutive frames. Therefore, motion should be compensated when the Haar wavelet transform is applied to two consecutive frames. Consequently, the corresponding filtering using the Haar wavelet transform is called motion compensated temporal filtering or MCTF. To clearly understand MCTF, it is important to have a detailed analysis of a one-level filtering of two frames. The filtering operation is essentially a pixel-wise operation. Suppose A(m,n) and B(s,t) are two pixels in Frame A and Frame B, respectively.
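The display that followed here (the parallel Haar filtering of a connected pixel pair) was lost in extraction. A common orthonormal form, which may differ from the patent's exact normalization, is:

```latex
L = \frac{A(m,n) + B(s,t)}{\sqrt{2}}, \qquad
H = \frac{A(m,n) - B(s,t)}{\sqrt{2}},
```

where B(s,t) is the pixel of Frame B connected to A(m,n) by the motion vector, L is the resulting lowpass (temporal average) frame and H the highpass (temporal detail) frame.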
This parallel implementation is equivalent to a sequential implementation, called a lifting implementation, given as follows:
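The equivalence of the two implementations can be sketched for a single connected pixel pair. The sketch assumes the common orthonormal Haar normalization (not necessarily the patent's exact equations), and the function names are illustrative:

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_parallel(a, b):
    """Parallel (direct) Haar filtering of a connected pixel pair:
    both outputs are computed independently from the inputs."""
    low = (a + b) / SQRT2
    high = (a - b) / SQRT2
    return low, high

def haar_lifting(a, b):
    """Sequential lifting implementation: a predict step, then an
    update step, then scaling to match the orthonormal normalization."""
    d = a - b              # predict: detail along the motion vector
    s = b + d / 2.0        # update: s becomes the pair average (a + b) / 2
    return s * SQRT2, d / SQRT2
```

Algebraically, s = b + (a - b)/2 = (a + b)/2, so after scaling the lifting outputs coincide with the parallel ones; the lifting form is preferred in practice because each step is trivially invertible even when the motion compensation is not.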
It remains how to determine the correspondence between pixels A(m,n) and B(s,t). Since pixels along the motion direction have the strongest correlation, any algorithm that establishes the correspondence between pixels in two frames has to involve motion estimation. Two pixels that are located along a motion vector are called connected. Unconnected pixels in Frame A and Frame B require special treatment.
3.3 Video Coding
The multiresolution representation of video signals generated by pixel-domain MCTF and spatial wavelet transform is used in coders such as MC-EZBC. The MC-EZBC coder is a coder that is fully embedded in quality/bit-rate, and is capable of supporting scalability in spatial resolution and frame rate. The fine granular quality scalability is achieved due to the bitplane coding of EZBC and the inherent pyramidal decomposition in the spatial domain. In addition, MC-EZBC can easily achieve constant quality at the frame level and at the GOF level, by stopping the bitplane coding at the same bitplane for each frame. Since MCTF generates a pyramidal decomposition of a GOF along the temporal direction, sequences at ½, ¼ or ⅛ frame rate can be easily reconstructed, and the reconstructed frame rate can be determined at transmission time or at decoding time. Therefore, temporal scalability is flexibly supported by the MC-EZBC coder. The multiresolution representation in the spatial domain naturally provides flexibility in reconstructing video sequences of reduced sizes. However, the motion compensation in MCTF complicates the problem of spatial scalability in the MC-EZBC coder, since subband-level motion vectors required for size-reduced video reconstruction in MCTF are not available. Recall that motion vectors are available only for frames of full resolution. Thus, motion vectors for lower resolutions have to be derived from the motion vectors for the full resolution, by scaling, for example. This clearly leads to a drifting problem.
Due to the independence between temporal decomposition and spatial decomposition, the combination of these two types of scalability is trivial. However, the combination of quality scalability with either one of these two types of scalability will have a quality degradation problem.
4 Multiresolution Video Representation with Scalable Motion Vectors
4.1 Wavelet-Domain MCTF
With the introduction of hybrid scales to the separable wavelet basis for video signals, a multiresolution representation in terms of the wavelet basis can be obtained by either pixel-domain temporal filtering or wavelet-domain temporal filtering. These two implementations are equivalent, since the operation of temporal filtering of co-located pixels in two frames commutes with the spatial wavelet transform. However, the introduction of motion compensation into the temporal filtering alters this, since motions in video signals are generally nonlinear. Consequently, the multiresolution representation generated by pixel-domain MCTF is not equivalent to that generated by wavelet-domain MCTF. The process of generating a multiresolution representation of video signals via pixel-domain MCTF has been discussed above in Section 3. In terms of implementation, the difference between the processes of generating a multiresolution video representation via pixel-domain MCTF and via wavelet-domain MCTF is the order of the MCTF and the spatial wavelet transform. Another way to see this difference is that subband MCTF is performed in the wavelet-domain MCTF process but not in the other. The advantage of subband MCTF is that spatially scaled motion vectors corresponding to reduced resolutions are obtained during the process of wavelet-domain MCTF. Due to the independence of the temporal wavelet decomposition and the spatial wavelet decomposition in generating a multiresolution representation for video signals, single-level temporal filtering and single-level spatial filtering can be arbitrarily intertwined.
The process of generating a multiresolution representation of video signals via pixel-domain MCTF has been described above.
4.2 Spatially Scalable Motion Vectors
Since subband-level motion vectors required for size-reduced video reconstruction are not available when the pixel-domain MCTF is used to generate the multiresolution representation, motion vectors for lower spatial resolutions have to be derived from the motion vectors for the full spatial resolution by scaling. It is thus desired to have explicit and accurate subband-level motion vectors in a wavelet video coder for the support of spatial scalability. The availability of subband-level motion vectors implies the spatial scalability of motion vectors. Subband MCTF filtering processes are advantageous over the pixel-domain MCTF process, since subband motion vectors are obtained when performing MCTF subband by subband. In other words, subband MCTF provides spatially scalable motion vectors. A typical example is the wavelet-domain MCTF process described above. The spatial scalability of motion vectors is fully supported by the wavelet-domain MCTF process, since scalable motion vectors corresponding to all possible subbands are available when the first-level MCTF is applied to frames which have been spatially decomposed to the maximal spatial decomposition level. If the first-level MCTF is applied to a spatial decomposition of some intermediate level, motion vectors corresponding to lower spatial resolutions are not available. Therefore, the size of the lowest spatial subband is the lowest resolution that can be reconstructed with available scalable motion vectors. A similar situation happens for other MCTF iterations. This indicates a close relationship between spatial decomposition and temporal decomposition with respect to the reconstruction of spatially scaled and temporally scaled video sequences.
For example, motion vectors corresponding to half spatial resolution are obtained in the second-level MCTF of the wavelet-domain process. The above discussion shows that the spatial scalability of motion vectors supported by a multiresolution video representation process is closely related to the support of reconstruction of spatially scaled and temporally scaled video sequences, i.e., combined spatial scalability and temporal scalability. Actually, such a video representation process can be characterized by the availability of spatially scalable motion vectors for the support of combined spatial scalability and temporal scalability. A two-dimensional function is defined for this purpose. For a given video representation process Λ, denote by α(s,t) whether spatially scalable motion vectors are available for reconstruction at spatial resolution level s and temporal resolution level t. According to the property of equation (16), if there is a bullet at the position (s,t) in an ST-table, i.e., α(s,t)=1, then the combination of spatial level s and temporal level t is supported.
4.3 Design of a Video Representation Scheme
It has been seen that for a given maximal level of temporal wavelet decomposition and a given maximal level of spatial wavelet decomposition, there are many ways to intertwine single-level temporal filtering and spatial filtering, yielding many different multiresolution video representations which include spatially scalable motion vectors for the support of different combinations of spatial scalability and temporal scalability. Such diversity of multiresolution video representations offers flexibility in selecting a video representation scheme to fulfill the desired scalability requirements. Consequently, the fundamental problems are how to design such a video representation scheme and how to signal the scheme in a video coder. This subsection is devoted to answering the first question, and the second will be discussed in the following subsection. The requirements on spatial scalability and temporal scalability imposed by a coder may be represented by an ST-table.
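An ST-table and a staircase consistency check can be sketched as follows. This is a hedged sketch: it represents the table as a set of supported (s, t) level pairs and assumes, for illustration, that support of a combination implies support of every combination with smaller spatial and temporal levels, which is what gives the upper-right boundary its staircase shape; the names are mine, not the patent's:

```python
def is_valid_st_table(supported):
    """Check the staircase property of an ST-table.

    supported: set of (s, t) pairs (spatial level, temporal level) that
    carry a 'bullet'. The table is consistent if every bullet at (s, t)
    implies bullets at all (s2, t2) with s2 <= s and t2 <= t, so the
    supported region is staircase-shaped."""
    for (s, t) in supported:
        for s2 in range(s + 1):
            for t2 in range(t + 1):
                if (s2, t2) not in supported:
                    return False
    return True
```

For example, {(0,0), (1,0), (0,1)} is a valid staircase, while a lone bullet at (1,1) is not, since the less demanding combinations it implies are missing.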
Therefore, the design of a video representation scheme starts with designing an ST-table. Designing an ST-table involves determining the entries of the ST-table at the positions marked by a question mark. The spatial decomposition parameter gives the spatial decomposition level of the frames when a given level of MCTF is applied. Below, an example is used to illustrate the design of a video representation scheme to accomplish the scalability requirements for coding 30 Hz 4CIF video sequences. It is generally desirable that the scalable video coder allow reconstruction of a 30 Hz CIF video sequence and a 15 Hz QCIF video sequence from the scalable video bitstream. These requirements on the scalability for the video coder can be equivalently represented in an ST-table. Besides the above method to determine the intertwining pattern, there exists another way which is symmetric to the above method. That is, to determine the temporal decomposition level when a spatial filtering is applied, for each spatial wavelet decomposition level. Similarly, these temporal decomposition levels are determined by the temporal decomposition parameters, which are defined as follows:
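Both parameter sets amount to counting the bullets in each column or row of the ST-table, as Section 5 notes. A hedged sketch, reusing the set-of-pairs representation and with illustrative names (the patent's exact defining equations are not reproduced here):

```python
def spatial_parameters(table, max_s, max_t):
    """For each temporal level k, count the supported spatial levels
    (bullets in column k of the ST-table) - one reading of the
    intertwining pattern."""
    return [sum(1 for s in range(max_s + 1) if (s, k) in table)
            for k in range(max_t + 1)]

def temporal_parameters(table, max_s, max_t):
    """Symmetric reading: for each spatial level k, count the supported
    temporal levels (bullets in row k of the ST-table)."""
    return [sum(1 for t in range(max_t + 1) if (k, t) in table)
            for k in range(max_s + 1)]
```

Either list alone determines the staircase, which is why transmitting one of the two sets suffices to signal the scheme.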
The temporal decomposition parameter gives the temporal decomposition level when a given level of spatial filtering is applied.
4.4 Encoding a Video Representation Scheme
When a multiresolution video representation is used in video coding, it is necessary to signal the selected representation to the decoder. Since its ST-table characterizes the video representation scheme as described above in Section 4.3, encoding the selection of the video representation scheme only requires encoding its corresponding ST-table. Since the spatial decomposition parameters determine the ST-table, it is sufficient to transmit only these parameters. Similarly, it is also possible to transmit only the temporal decomposition parameters. According to the properties of equations (11-17), the bullets in an ST-table form a region whose upper-right boundary has a staircase shape, so the table can also be coded by signifying this boundary shape. Note that the shape codes can be read starting from either the upper-left corner or the lower-right corner of the boundary.
4.5 Wavelet Video Coding with Subband MCTF
The effectiveness of MCTF in exploiting temporal redundancy in video coding has been shown by coders such as MC-EZBC and the fully scalable zerotree (FSZ) coder, which use pixel-domain MCTF. The FSZ coder is described by the following document: V. Bottreau, M. Benetiere, B. Felts, and B. Pesquet-Popescu, “A fully scalable 3d subband video codec,” in Proceedings of the IEEE International Conference on Image Processing. Since a video sequence is decomposed in multiresolution along the spatial directions and the temporal direction, subband MCTF coders can easily provide spatial/temporal scalability with corresponding spatially scalable motion vectors. Also, subband MCTF coders can easily support quality scalability by using bitplane coding. The subband MCTF wavelet video coding framework is thus a candidate for universal scalable video coding. However, subband MCTF approaches also have an evident disadvantage which is related to wavelet-domain motion estimation/filtering. Not only may the complexity of the motion estimation increase, but the performance of the motion estimation/filtering in the wavelet domain may decrease. The inefficiency of motion estimation/filtering may thus decrease the coding performance of the in-band MCTF approaches.
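The staircase boundary coding described in Section 4.4 (and in claims 11-12) can be sketched as follows. The traversal convention is an assumption: starting from the upper-left corner, emit 1 for each vertical step of the boundary and 0 for each horizontal step, with column heights (bullets per column) assumed non-increasing:

```python
def encode_staircase(column_heights):
    """Encode the staircase upper-right boundary of an ST-table.

    column_heights[k] = number of bullets in column k (non-increasing).
    Walking from the upper-left corner of the boundary to the baseline:
    '1' marks a vertical step, '0' a horizontal step (one per column)."""
    bits = []
    prev = column_heights[0]
    for h in column_heights:
        bits.append("1" * (prev - h))  # vertical steps down to this column
        bits.append("0")               # horizontal step across the column
        prev = h
    bits.append("1" * prev)            # vertical steps down to the baseline
    return "".join(bits)

def decode_staircase(bits):
    """Recover the column heights from the boundary shape code."""
    total_ones = bits.count("1")
    ones_seen = 0
    heights = []
    for b in bits:
        if b == "1":
            ones_seen += 1
        else:
            heights.append(total_ones - ones_seen)
    return heights
```

Reading the same code from the lower-right corner, as claim 12 permits, corresponds to reversing the bit string; either direction identifies the same boundary.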
5 Conclusions
From the foregoing, it can be seen that the present embodiments provide improved video coding methods and apparatus. Wavelet technology was previously established in still image coding, as it favorably combines high coding efficiency with additional advantages such as scalability, efficient localized access, etc. However, for video coding, motion compensation seems to be crucial to achieve high coding efficiency, especially at low bit rates. Therefore, incorporation of motion compensation into the wavelet image coding framework for video coding is a fundamental issue in wavelet video coding that is intended to achieve universal scalability and high coding efficiency at the same time. There are two different ways to incorporate motion compensation into the wavelet image coding framework. Motion compensated temporal prediction (MCTP) coders have a recursive closed-loop structure and achieve temporal scalability by introducing B-frames. Motion compensated temporal filtering (MCTF) coders have a non-recursive open-loop structure and support flexible temporal scalability due to the multi-scale temporal decomposition. Both MCTP-type and MCTF-type coders may be further classified according to the domain, pixel-domain or wavelet-domain, in which MCTP or MCTF is applied. One advantage that both wavelet-domain MCTP coders and wavelet-domain MCTF coders have is the spatial scalability of motion vectors. In other words, there are motion vectors corresponding to wavelet coefficients in each subband of the wavelet decomposition. Multiresolution video representations generated with wavelet-domain MCTF are the subject of the present application. There are many different video representation schemes which separate spatial-direction decomposition and temporal decomposition but all generate a multiresolution representation in the temporal direction and the spatial directions.
The schemes in this family are determined by the pattern of intertwining of the spatial filtering and MCTF in the process of decomposing video signals. The major difference among the representation schemes is the availability of motion vectors that are needed in reconstruction of spatial resolution reduced video sequences. The feature that a representation scheme possesses related to the spatially scalable motion vectors determines the support of spatial/temporal scalability by the video representation scheme. Therefore, it is desirable to design a video representation in video coding to fulfill the requirements on scalability imposed by applications. The design of a video representation scheme based on specified scalability requirements was discussed herein. The first step of the design was to determine an ST-table based on the scalability requirements. Once the ST-table is formed, the spatial decomposition parameters or the temporal decomposition parameters can be used to easily construct the video representation or filtering process which possesses the desired property. These two methods are essentially two ways to determine the intertwining pattern. One way is to determine the spatial decomposition level of the frames when a level of MCTF is applied. The other way is to determine the temporal decomposition level when a level of spatial filtering is applied. The problem of how to encode the video representation was also considered. Since the two sets of parameters, spatial decomposition parameters and temporal decomposition parameters, can each uniquely determine the filtering process, it is only necessary to transmit one of these two sets of parameters. These parameters essentially represent the numbers of bullets in each row or column of the ST-table. The representation scheme can also be coded by signifying the shape of the upper-right boundary of its ST-table.
While a particular embodiment of the present invention has been shown and described, modifications may be made. It is therefore intended in the appended claims to cover such changes and modifications which follow in the true spirit and scope of the invention.