WO2009047692A2 - Method and system for communicating compressed video data - Google Patents

Method and system for communicating compressed video data

Info

Publication number
WO2009047692A2
WO2009047692A2 (PCT/IB2008/054083)
Authority
WO
WIPO (PCT)
Prior art keywords
video
data
video data
units
encoded
Prior art date
Application number
PCT/IB2008/054083
Other languages
French (fr)
Other versions
WO2009047692A3 (en)
Inventor
Hugues J. M. De Perthuis
Stephane E. Valente
Original Assignee
Nxp B.V.
Priority date
Filing date
Publication date
Application filed by Nxp B.V.
Publication of WO2009047692A2 publication Critical patent/WO2009047692A2/en
Publication of WO2009047692A3 publication Critical patent/WO2009047692A3/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/103 Selection of coding mode or of prediction mode
    • H04N 19/11 Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N 19/174 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A system (600) and method of processing encoded video data employs two decoders (610/620). A first decoder (610) decodes units of encoded video data of a current video frame of a compressed video signal according to an intra picture decoding format, and outputs a first output (615). A memory (630) stores one or more units of encoded video data from one or more previous video frames. A second video decoder (620) decodes units of the encoded video data from the memory (630), and outputs a second output (625). A multiplexer (640) selects between the first output (615) and the second output (625) based on control data indicating units in the current video frame that are reused from one or more preceding video frames, and outputs a decoded video signal (645).

Description

Method and system for communicating compressed video data
FIELD OF THE INVENTION
This invention pertains to the field of compressed video data, and more specifically, a system and method for communicating compressed video data between a transmitter and a receiver.
BACKGROUND AND SUMMARY OF THE INVENTION
New wireless communications protocols and devices are being developed that allow high bandwidth data, such as compressed video data, to be communicated between devices. For example, under the developing WiMedia radio standard, a protocol is provided whereby devices may transmit data at rates of up to a few hundred Mbps over a radio channel at distances of up to a few meters. For a video image size of 1920x1200 pixels and a frame rate of 60 frames/second, the raw video data rate is about 5 Gbps. In order to transmit this video data across a channel that is limited to a data rate of several hundred Mbps, video compression must be employed.
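The raw data rate quoted above can be reproduced with a short, purely illustrative calculation; the assumption of three colour components at 12 bits each is carried over from the buffer-sizing example later in this description and is not an additional requirement of the system:

```python
# Back-of-the-envelope check of the raw video data rate quoted above.
# Assumption: 3 colour components per pixel at 12 bits each.
width, height = 1920, 1200
components_per_pixel = 3
bits_per_component = 12
frame_rate = 60  # frames per second

bits_per_frame = width * height * components_per_pixel * bits_per_component
raw_rate_gbps = bits_per_frame * frame_rate / 1e9
print(f"raw video data rate = {raw_rate_gbps:.2f} Gbps")  # about 4.98 Gbps, i.e. roughly 5 Gbps
```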
Accordingly, it would be desirable to provide a method and system of communicating high bandwidth video data over a limited bandwidth channel. It would further be desirable to provide such an arrangement that lends itself to low cost implementations for consumer devices.
In one aspect of the invention, a video receiving device comprises: a first video decoder for decoding units of encoded video data of a current video frame of a compressed video signal according to an intra picture decoding format, and for outputting a first output; a memory for storing one or more units of encoded video data from one or more previous video frames; a second video decoder for decoding the units of the encoded video data from the memory and outputting a second output; and a multiplexer for selecting between the first output and the second output based on control data indicating units in the current video frame that are reused from one or more preceding video frames, and for outputting a decoded video signal.
In another aspect of the invention, a video transmitting device comprises: a multiplexer adapted to receive from a video source input video data of a current video frame to be displayed, and also adapted to receive dummy video data having fixed values, and to output selected video data; a video encoder for encoding the video data output by the multiplexer into units of encoded video data according to an intra picture encoding format, and for outputting an encoded output; and control data generating means for generating control data indicating units in the current video frame that have been encoded with the dummy video data, wherein the multiplexer is adapted to select and output the dummy video data when a unit of encoded video data for the input video data matches a corresponding reference unit of encoded input video data from one or more previous video frames, and otherwise to select and output the input video data.
In yet another aspect of the invention, a method is provided for processing encoded video data. The method comprises: using a first video decoder to decode units of encoded video data of a current video frame of a compressed video signal according to an intra picture decoding format, and outputting a first output; storing one or more units of encoded video data from one or more previous video frames; using a second video decoder to decode the units of the encoded video data from the memory and outputting a second output; and selecting between the first output and the second output based on control data indicating units in the current video frame that are reused from one or more preceding video frames, and outputting a decoded video signal.
In still another aspect of the invention, a method is provided for communicating encoded video data. The method comprises: encoding video data of a current video frame into units of encoded video data according to an intra picture encoding format and outputting an encoded output; selectively removing one or more units of encoded video data in the encoded output that each match a corresponding unit of encoded video data from one or more previous video frames, or replacing said one or more units of encoded video data with a dummy unit; and transmitting the encoded output together with information indicating the one or more units in the current video frame that have been removed or replaced.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 illustrates a plurality of slices within a frame of video data.
Fig. 2 is a functional block diagram of one exemplary embodiment of a video encoder.
Fig. 3 illustrates various modes of intra-picture encoding.
Figs. 4A-E illustrate operations of exemplary embodiments of a method of communicating compressed video data.
Fig. 5 is a functional block diagram of one exemplary embodiment of a video transmitting device.
Fig. 6 is a functional block diagram of one embodiment of a video receiver.
Fig. 7 is a block diagram of a portion of one embodiment of a video decoder.
DETAILED DESCRIPTION OF EMBODIMENTS
In the following detailed description, for purposes of explanation and not limitation, example embodiments disclosing specific details are set forth in order to provide a thorough understanding of an embodiment according to the present teachings. However, it will be apparent to one having ordinary skill in the art, having had the benefit of the present disclosure, that other embodiments according to the present teachings that depart from the specific details disclosed herein remain within the scope of the appended claims. Moreover, descriptions of well-known apparatuses and methods may be omitted so as not to obscure the description of the example embodiments. Such methods and apparatuses are clearly within the scope of the present teachings.
Many video encoding protocols have been developed. Some particularly efficient video compression protocols that can be employed include the ITU-T H.264 standard (also known as ISO MPEG-4 Part 10) and the SMPTE 421M standard (also known as "VC-1"). In a system employing H.264 compression, an image or video frame is divided into slices. Each slice is a self-contained unit, which can be decoded separately. Fig. 1 illustrates a plurality of slices within a frame of video data. As shown in Fig. 1, a slice is not necessarily aligned with the beginning of a line. Also, the slice mapping can sometimes change from one image or video frame to the next. Each slice in turn includes a plurality of macroblocks of 16x16 pixels each. A slice can include any number of macroblocks; typically, a slice includes several lines of macroblocks.
Fig. 2 is a functional block diagram of one exemplary embodiment of a video encoder 200. As will be appreciated by those skilled in the art, the various functions shown in Fig. 2 may be physically implemented using a software-controlled microprocessor, hard-wired logic circuits, or a combination thereof. Also, while the functional blocks are illustrated as being segregated in Fig. 2 for explanation purposes, they may be combined in any physical implementation. In the H.264 standard, three types of encoded pictures are defined: Intra pictures (I-pictures), forward Predicted pictures (P-pictures), and Bi-directionally predicted pictures (B-pictures).
For the I-Pictures, for each macroblock a prediction is made using the pixels belonging to the macroblocks to the left and above the current macroblock (provided they belong to the same slice).
Fig. 3 illustrates various modes of intra-picture encoding. As shown in Fig. 2, in the I-Picture mode, the video encoder 200: determines a prediction mode; calculates a predicted macroblock according to the prediction mode and the lines of pixels located to the left and/or above the current macroblock; computes the difference between the predicted macroblock and the actual input video data; transforms the result to the frequency domain; quantizes the frequency-domain data; reorders the quantized data; and performs a further entropy compression on the reordered data to produce an encoder output. For the B-Pictures and P-Pictures, the macroblock is further predicted using macroblocks from other images (e.g., a previous image). This takes advantage of the fact that there is typically a great deal of redundancy from video frame to video frame, and it yields a very large part of the efficiency that H.264 compression provides for video transmission via satellite, terrestrial airwaves, and DVD.
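The I-Picture encoding steps listed above can be summarised in a simplified sketch. This is illustrative only: the spatial prediction modes, transforms and entropy coding of H.264 are considerably more elaborate, and the helper functions below are stand-ins rather than the actual blocks of encoder 200.

```python
import numpy as np

def predict_block(left_col, top_row):
    """Very crude 'DC'-style prediction: average of the neighbouring pixels.
    Real H.264 chooses among several spatial prediction modes."""
    neighbours = np.concatenate([left_col, top_row])
    return np.full((16, 16), neighbours.mean())

def encode_macroblock(block, left_col, top_row, qstep=8):
    # 1. prediction from pixels to the left of and above the macroblock
    pred = predict_block(left_col, top_row)
    # 2. difference between prediction and actual input video data
    residual = block - pred
    # 3. transform to the frequency domain (a plain 2-D FFT stands in for
    #    the integer transforms of H.264)
    freq = np.fft.fft2(residual).real
    # 4. quantisation
    quant = np.round(freq / qstep).astype(int)
    # 5. zigzag reordering and entropy coding of 'quant' would follow here
    return quant

block = np.random.randint(0, 256, (16, 16)).astype(float)
left = np.random.randint(0, 256, 16).astype(float)
top = np.random.randint(0, 256, 16).astype(float)
coeffs = encode_macroblock(block, left, top)
```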
However, when video data from a previous picture is employed in the encoding and decoding process, a video receiver requires a buffer to store the previous picture data, and a memory bus is needed to read and write the picture data to and from the buffer. For the previous example of a video image size of 1920x1200 pixels with 12-bit components, a buffer of 1920x1200x3x16 bits (assuming the 12-bit components are aligned on 16-bit memory locations), or 13.8 Mbytes, would be required. The memory bus bandwidth required to store the current picture and retrieve the previous one would be up to 2x13.8x60 = 1.66 GB/s. The bandwidth could actually be higher because of the inherent inefficiency of memories, especially if it is decided to use non-aligned prediction. In any case, this extremely large amount of bandwidth would require memory with a wide data bus, such as a DDR2 with a 32-bit bus. The additional cost of this memory would make the solution too expensive for many consumer devices.
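The buffer and bus figures above can be reproduced with a short, purely illustrative calculation (the 16-bit alignment of the 12-bit samples is the assumption already stated above):

```python
# Reproducing the frame-buffer and bus-bandwidth figures above.
width, height = 1920, 1200
components = 3
bits_per_sample_stored = 16   # 12-bit samples aligned on 16-bit memory locations
frame_rate = 60

buffer_bytes = width * height * components * bits_per_sample_stored // 8
print(f"reference frame buffer = {buffer_bytes / 1e6:.1f} MB")      # about 13.8 MB

# Writing the current picture and reading the previous one every frame period:
bus_bandwidth = 2 * buffer_bytes * frame_rate
print(f"memory bus bandwidth = {bus_bandwidth / 1e9:.2f} GB/s")     # about 1.66 GB/s
```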
Accordingly, to allow for a low cost implementation, it is proposed to use only I-Picture video compression. Therefore, a simplified version of the video encoder 200 shown in Fig. 2 may be employed that does not include the motion estimation ("ME") and motion compensation ("MC") blocks. As explained above, each slice is a self-contained unit, which can be decoded separately. Accordingly, an uncorrected error in one slice of the image is contained within that slice and does not propagate to the rest of the image. A disadvantage of this arrangement is that, because a slice is a self-contained unit, it cannot make reference to other slices. That is, a prediction for one macroblock based on the values of the pixels in the macroblocks to the left and above will not be possible if those macroblocks belong to a different slice, so coding efficiency is reduced. Therefore, the number of slices is a trade-off between error resilience and efficiency.
Using only I-Picture compression means that to encode and decode a macroblock of 16x16 pixels, it is only required to keep in memory the lines of pixels located along the left and top boundaries of the current macroblock. In a typical implementation, where pixels arrive in scan-line order, it is only required to have one line of macroblocks in memory plus one line of pixels. This small amount of memory can reasonably be stored in an internal buffer. An additional advantage of intra prediction is that it allows a short latency: as soon as a line of macroblocks has been received (i.e., 16 lines of pixels), compression can start.
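A rough, purely illustrative estimate of that internal buffer, again assuming 12-bit samples stored in 16-bit locations as in the earlier example, is:

```python
# Working memory for intra-only coding at 1920-pixel width (illustrative).
width = 1920
components = 3
bytes_per_sample = 2          # 12-bit samples stored in 16-bit locations (assumption)
mb_height = 16

macroblock_line = width * mb_height * components * bytes_per_sample  # one row of macroblocks
pixel_line = width * components * bytes_per_sample                   # one extra line of pixels
print(f"internal buffer = {(macroblock_line + pixel_line) / 1024:.0f} KB")  # about 191 KB
```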
Also, intra prediction allows a constant quality to be provided by using a constant quantization parameter, i.e., if there is enough bandwidth then a variable bit rate can be used rather than a constant bit rate. Using temporal prediction might result in flickering on the screen, which would be objectionable, especially in the case of a fixed image from a computer, for instance. Indeed, the quality would differ between successive images and each time a new reference picture is used.
In summary, it is proposed to use only I-Picture prediction, because using prediction from one image to another would be expensive to implement and would not result in optimal quality.
Nevertheless, it would still make sense to take advantage of the fact that some parts of the image are completely identical, for instance to give more bandwidth to error correction or to the compression of other parts of the image which are changing. In particular, it would still be desirable to take advantage, at some level, of the redundancy between consecutive pictures. For example, for applications that transmit video data of a personal computer display, a large part of the screen in general does not change from frame to frame.
Figs. 4A-E illustrate operations of exemplary embodiments of a method of communicating compressed video data. Fig. 4A illustrates the original source video data for three consecutive images or video frames, N, N+1, and N+2. In Fig. 4A, each video frame is divided into five slices. It can be seen that from video frame N to video frame N+1, the video data in slices 1 and 5 is unchanged. However, a portion of the video data in slices 2-4 changes from video frame N to video frame N+1. Similarly, from video frame N+1 to video frame N+2, the video data in slices 3-5 changes, while the video data in slices 1 and 2 is unchanged.
As explained above, the video data in each slice is compressed independently. Accordingly, Fig. 4C shows, for each video frame, which video slices can be reused from a previous video frame, and Fig. 4B shows which video slices have to be communicated for each video frame. It can be seen in Fig. 4C that: the compressed video data of slice 5 from video frame N can be used in video frame N+1; the compressed video data of slice 1 from video frame N can be used in video frames N+1 and N+2; and the compressed video data of slice 2 from video frame N+1 can be used in video frame N+2.
In that case, as shown in Fig. 4B, the video transmitting device has to transmit all five slices 1-5 for video frame N, but only has to transmit slices 2-4 for video frame N+1, and slices 3-5 for video frame N+2.
By eliminating the transmission of redundant slices, the limited data bandwidth of the communication channel can be allocated more efficiently to communicating the slices that contain non-redundant video data. This can be realized in one embodiment in an arrangement wherein the video transmitting device only transmits those slices that have changed from the previous video frame, and the video receiver maintains copies of reference slices from previous video frames in a buffer memory. In one embodiment, the video transmitting device can transmit some explicit control information to the video receiver (perhaps at a different layer of the communication stack) that indicates which slices are being transmitted in a current video frame, and/or which slices the video receiver should reuse from a previous video frame. In another embodiment, the transmitting device does not send any explicit control data. In this case, the transmitting device may simply send the full slices that have changed, and not send the slices whose macroblocks do not change. In the example of Figs. 4A-C, this means that for video frame N+1 the transmitting device sends only slices 2-4 and does not send slices 1 and 5. The video receiver then detects a gap in the received slices, and thus infers that the video data for these slices has to be decoded from the slice data of a previously received picture. For example, the receiving device could simply examine the slice headers to detect where each slice starts; if it detects a gap, the video receiver knows that a slice has been omitted and that it has to use the slice data of a previously received picture.
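A minimal sketch of this implicit-signalling variant, as it might be implemented at the receiver, is given below. The slice indexing and helper structure are illustrative assumptions; a real receiver would obtain the slice positions by parsing the slice headers of the compressed bitstream.

```python
def assemble_frame(received_slices, previous_frame_slices, slices_per_frame):
    """received_slices: dict mapping slice index -> encoded slice data for the
    current frame. Slices that were not transmitted (gaps) are reused from the
    previously received picture."""
    frame = {}
    for idx in range(slices_per_frame):
        if idx in received_slices:            # slice header seen for this index
            frame[idx] = received_slices[idx]
        else:                                 # gap detected: reuse stored slice
            frame[idx] = previous_frame_slices[idx]
    return frame

# Example from Figs. 4A-C: for frame N+1 only slices 2-4 arrive (0-based indices 1-3).
prev = {i: f"slice{i + 1}@N" for i in range(5)}
rcvd = {1: "slice2@N+1", 2: "slice3@N+1", 3: "slice4@N+1"}
print(assemble_frame(rcvd, prev, 5))
```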
Although the description above pertains to a case where the communication of redundant slices is prevented, even greater efficiencies can be obtained if this approach is extended to the macroblock level.
Figs. 4D-E illustrate the same example of Figs. 4B-C extended to the case where the video transmitting device eliminates redundancy on a macroblock level, rather than on a slice level. Accordingly, Fig. 4E shows, for each video frame, which video slices include macroblocks that can be reused from a previous video frame, and Fig. 4D shows which video slices and macroblocks have to be communicated for each video frame. Fig. 4D shows that for video frame N+1, only slices 2, 4 and a portion of slice 3 need to be transmitted. In particular, for slice 3 of video frame N+1, only those macroblocks inside the boxed area need to be transmitted, as the macroblocks outside the boxed area are the same as for video frame N and can be reused by the video receiver. Indeed, even though the example indicates that all macroblocks in slices 2 and 4 are communicated, it is possible that slices 2 and 4 could be handled like slice 3, such that only those macroblocks inside the boxed area need to be transmitted. The macroblocks that do not need to be transmitted can be omitted or replaced by dummy macroblocks that have simple values (e.g., constant values) for encoding and so consume very little data rate.
The decision as to whether to transmit the entire slice or only those macroblocks within a slice that have changed from the previous video frame is made by the video transmitting device. For example, the video transmitting device may compare against a threshold the number of macroblocks in a slice that are to be reused, and if the number is below the threshold, then the entire slice is compressed and transmitted. The threshold may be predefined, or may be changed dynamically based on how quickly the rest of the image is changing, any change in the communication channel capacity, etc. In another example, in a case where the rest of the image is hardly changing and there is plenty of available data capacity in the communication channel, the transmitting device could decide to transmit the entire slice even though very few macroblocks have changed from the previous video frame.
As in the case of the example of Figs. 4B-C where the redundant slices are omitted, in the example of Figs. 4D-E where redundant macroblocks are omitted the transmitting device can transmit some explicit control information to the video receiver (perhaps at a different layer of the communication stack) that indicates which macroblocks of each slice are being transmitted in a current video frame, and/or which macroblocks the video receiver should reuse from a corresponding slice of a previous video frame. Alternatively, in another embodiment, the transmitting device does not send any explicit control data at all. For example, the H.264 standard provides for a "skipped macroblock." Accordingly, the video transmitting device could define a slice as a "P slice" conveying "Intra" macroblocks and "P Skip" macroblocks. The meaning of a P Skip macroblock is to use a temporal prediction mode, where the motion vector is inferred directly from the motion vector predictor and no residual texture information is transmitted. If the slice comprises only Intra macroblocks and P Skip macroblocks, the motion vector predictors will be all zeros. So the video receiver would interpret the skipped macroblock coding to determine that the macroblock is a direct copy of the corresponding macroblock from a previous image.
The same functionality is thus obtained: a P Skip macroblock is a macroblock whose decoded pixels are exactly the same as those in the reference macroblock.
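The per-slice decision described above (transmit only the changed macroblocks unless too few macroblocks are reusable) can be sketched as follows. This is an illustrative outline under simplifying assumptions, not the exact behaviour of the transmitting device of Fig. 5; in particular, the byte-wise comparison of macroblocks stands in for whatever matching criterion the encoder applies.

```python
def classify_slice(current_mbs, reference_mbs, reuse_threshold):
    """Decide, for one slice, which macroblocks to transmit.

    current_mbs / reference_mbs: lists of macroblock data (e.g. bytes) for the
    current and previous frame. Returns a list of ("send", mb) or ("skip", None)
    entries. If fewer than reuse_threshold macroblocks are reusable, the whole
    slice is compressed and transmitted unchanged."""
    reusable = [cur == ref for cur, ref in zip(current_mbs, reference_mbs)]
    if sum(reusable) < reuse_threshold:
        # Not worth the overhead: encode and transmit the entire slice.
        return [("send", mb) for mb in current_mbs]
    # Otherwise transmit only changed macroblocks; unchanged ones become
    # skipped/dummy macroblocks that cost almost no data rate.
    return [("skip", None) if same else ("send", mb)
            for same, mb in zip(reusable, current_mbs)]

cur = [b"A", b"B2", b"C", b"D"]
ref = [b"A", b"B",  b"C", b"D"]
print(classify_slice(cur, ref, reuse_threshold=2))
```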
As it is not possible to decode macroblocks independently within a slice, a video receiver implementing the approach illustrated in Figs. 4D-E requires two video decoders, as will be described in further detail with respect to Fig. 6 below.
Fig. 5 is a functional block diagram of one exemplary embodiment of a video transmitting device 500. As will be appreciated by those skilled in the art, the various functions shown in Fig. 5 may be physically implemented using a software-controlled microprocessor, hard-wired logic circuits, or a combination thereof. Also, while the functional blocks are illustrated as being segregated in Fig. 5 for explanation purposes, they may be combined in any physical implementation.
The video transmitting device 500 includes a multiplexer 510, a video encoder 520, and a control data generator 530. Video encoder 520 may be implemented as a simplified version of video encoder 200 of Fig. 2, with the motion estimation (ME) and motion compensation (MC) blocks omitted.
Multiplexer 510 receives from a video source input video data of a current video frame to be displayed, and also receives dummy video data 540 having fixed values. Multiplexer 510 selects and outputs dummy video data 540 whenever a unit (e.g., macroblock) of encoded video data for the input video data matches a corresponding reference unit of encoded input video data from one or more previous video frames. Otherwise, multiplexer 510 selects and outputs the input video data.
Video encoder 520 encodes the video data output by multiplexer 510 into units (e.g., macroblocks) of encoded video data according to an intra picture encoding format, and outputs the encoded video data. When video encoder 520 receives dummy video data 540 having simple (e.g., fixed) values, it generates a dummy unit (e.g., macroblock) of encoded video data that consumes very little data rate.
Thus, video transmitting device 500 replaces macroblocks that are the same as a corresponding reference macroblock from a previous video frame with a "dummy macroblock" that consumes very little data rate. Meanwhile, control data generator 530 generates control data that indicates explicitly which macroblocks have been replaced with "dummy macroblocks" in a given slice or video frame, and/or which macroblocks the video receiver should reuse from a previous video frame. This control information can be communicated from the video transmitting device to the video receiver, perhaps at a different layer of the communication stack.
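One possible, purely illustrative form of such explicit control data is a per-slice bitmap with one bit per macroblock; the bit ordering chosen below is an assumption, not something specified in this description.

```python
def make_reuse_bitmap(decisions):
    """Pack one bit per macroblock: 1 = reuse from a previous frame (a dummy
    macroblock was encoded), 0 = macroblock transmitted normally.
    'decisions' is a list of booleans, one per macroblock in the slice."""
    bitmap = bytearray((len(decisions) + 7) // 8)
    for i, reused in enumerate(decisions):
        if reused:
            bitmap[i // 8] |= 1 << (7 - (i % 8))
    return bytes(bitmap)

# e.g. a slice of 12 macroblocks where macroblocks 0-3 and 10-11 are reused
decisions = [True] * 4 + [False] * 6 + [True] * 2
print(make_reuse_bitmap(decisions).hex())   # 'f030'
```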
Alternatively, in another embodiment, the video transmitting device is modified from that shown in Fig. 5 so that the control information is communicated implicitly in the transmitted compressed video signal, as explained in greater detail above. In that case, for implicit description of the reused macroblocks of each slice, the functionality of multiplexer 510 is included in the video encoder, and control data generator 530 and dummy data 540 are omitted.
Fig. 6 is a functional block diagram of one embodiment of a video receiver 600. As will be appreciated by those skilled in the art, the various functions shown in Fig. 6 may be physically implemented using a software-controlled microprocessor, hard-wired logic circuits, or a combination thereof. Also, while the functional blocks are illustrated as being segregated in Fig. 6 for explanation purposes, they may be combined in any physical implementation.
Video receiver 600 includes a buffer memory 630, first and second video decoders 610 and 620, and a multiplexer 640 for switching between the outputs of the first and second video decoders 610 and 620. First video decoder 610 decodes compressed macroblocks for a current frame and produces a first output 615. Meanwhile, second video decoder 620 decodes compressed macroblocks from a previous video frame that are stored in memory 630 and produces a second output 625. Multiplexer 640 switches between first output 615 and second output 625 based on control information 655 received from a video transmitting device, and outputs a decoded video signal 645. The control information may be in the form of a bitmap with one bit per macroblock in each slice, where each bit identifies whether the corresponding macroblock has been transmitted and needs to be decoded, or whether instead the corresponding macroblock should be reused from a previous video frame, and therefore retrieved from memory 630. Fig. 7 is a block diagram of a portion of one embodiment of a video decoder 700 that may be employed as the first and/or second video decoder 610/620 in video receiver 600. Video decoder 700 is a simplified decoder corresponding to the video encoder 200 of Fig. 2.
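The per-macroblock selection performed by multiplexer 640 can be sketched as follows; the decoder stand-ins and the boolean reuse flags are illustrative assumptions rather than a specification of decoders 610 and 620 or of the control information 655.

```python
def decode_frame(current_slice_data, stored_slice_data, reuse_flags,
                 decode_current, decode_stored):
    """Per-macroblock switch between two decoder outputs (cf. multiplexer 640).

    reuse_flags[i] is True when the control data says macroblock i of the
    current frame should be reused from the stored (previous-frame) data.
    decode_current / decode_stored stand in for decoders 610 and 620."""
    out = []
    for i, reuse in enumerate(reuse_flags):
        if reuse:
            out.append(decode_stored(stored_slice_data, i))    # second output (625)
        else:
            out.append(decode_current(current_slice_data, i))  # first output (615)
    return out

# Trivial stand-ins for the two decoders:
decode_current = lambda data, i: f"cur[{i}]"
decode_stored = lambda data, i: f"prev[{i}]"
print(decode_frame(None, None, [False, True, True, False], decode_current, decode_stored))
```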
Also, in the case where the video receiver 600 itself detects which macroblocks are to be reused from memory 630, first video decoder 610 may include a control unit which emulates the motion estimation (ME) and motion compensation (MC) blocks of encoder 200 to determine that a macroblock is completely predicted from the corresponding macroblock of a previous video frame and therefore can be skipped and instead decoded from memory by second video decoder 620. Because memory 630 only stores the compressed video data comprising the slices for one video frame, the data capacity and the bus bandwidth requirements are greatly reduced compared to an arrangement where P-Pictures or B-Pictures are processed as explained above. For example, in a typical embodiment the required memory capacity may be less than 1 MB, and the required memory bus bandwidth may be a few tens of Mbytes/second. Within memory 630, the compressed video data is updated periodically. In particular, when the video receiver 600 receives an entire slice of data without reusing any of the macroblocks from a previous video frame (i.e., when the entire slice is coded), video receiver 600 updates memory 630 with the new slice from the current video frame.
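As a rough, purely illustrative sanity check of those figures, assume a compressed video rate of about 300 Mbps (an assumed value, not one given in this description); storing one compressed frame instead of a raw reference picture then reduces both the buffer size and the bus traffic by more than an order of magnitude:

```python
# Illustrative estimate only; the assumed compressed rate is hypothetical.
compressed_rate_bps = 300e6     # assumed compressed video / channel rate
frame_rate = 60

compressed_frame_bytes = compressed_rate_bps / frame_rate / 8
print(f"stored compressed frame = {compressed_frame_bytes / 1e6:.2f} MB")  # about 0.63 MB (< 1 MB)

# One write and one read of that compressed data per frame period gives an upper bound;
# actual traffic is lower because only changed slices are written and only reused
# slices are read back.
bus = 2 * compressed_frame_bytes * frame_rate
print(f"memory bus traffic <= {bus / 1e6:.0f} MB/s")                       # tens of MB/s
```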
While preferred embodiments are disclosed herein, many variations are possible which remain within the concept and scope of the invention. For example, in a case where image quality is more important than latency, for a changed image N+1, the video transmitting device could transmit a few slices during the time when it would normally transmit all of the compressed video data for frame N+1, while the video receiver still presents image N. Then, during the time when the video transmitting device would normally transmit the compressed video data for frame N+2, it could instead transmit the remaining slices for frame N+1 and present the image for N+1 at the time when the image for N+2 is normally presented. Such variations would become clear to one of ordinary skill in the art after inspection of the specification, drawings and claims herein. The invention therefore is not to be restricted except within the spirit and scope of the appended claims.

Claims

CLAIMS:
1. A video receiving device (600), comprising:
- a first video decoder (610) for decoding units of encoded video data of a current video frame of a compressed video signal according to an intra picture decoding format, and for outputting a first output (615);
- a memory (630) for storing one or more units of encoded video data from one or more previous video frames;
- a second video decoder (620) for decoding the units of the encoded video data from the memory (630) and outputting a second output (625); and
- a multiplexer (640) for selecting between the first output (615) and the second output (625) based on control data indicating units in the current video frame that are reused from one or more preceding video frames, and for outputting a decoded video signal (645).
2. The device (600) of claim 1, wherein the units are macroblocks of encoded video data.
3. The device (600) of claim 2, wherein the memory (630) stores groups of macroblocks as slices, and wherein the control data indicates which macroblocks in each slice in the current video frame should be reused from the memory (630).
4. The device (600) of claim 2, wherein the memory (630) stores groups of macroblocks as slices, and wherein the control data indicates whether an entire slice in the current video frame should be reused from the memory (630).
5. The device (600) of claim 2, wherein the memory (630) is updated to replace a slice of encoded video data from one or more previous frames stored in the memory (630) with a corresponding slice of encoded video data from the current video frame when the control data indicates that none of the macroblocks of the slice of encoded video data stored in the memory (630) are reused in the current video frame.
6. The device (600) of claim 1, wherein the control data is received from a data transmitter as explicit side data accompanying the compressed video signal.
7. The device (600) of claim 1, wherein the control data is implicit in the compressed video signal and the device detects the control data from the compressed video signal.
8. A video transmitting device (500), comprising:
- a multiplexer (510) adapted to receive from a video source input video data of a current video frame to be displayed, and also adapted to receive dummy video data (540) having fixed values, and to output selected video data;
- a video encoder (520) for encoding the video data output by the multiplexer (510) into units of encoded video data according to an intra picture encoding format, and for outputting an encoded output; and
- control data generating means (530) for generating control data indicating units in the current video frame that have been encoded with the dummy video data,
wherein the multiplexer (510) is adapted to select and output the dummy video data (540) when a unit of encoded video data for the input video data matches a corresponding reference unit of encoded input video data from one or more previous video frames, and otherwise to select and output the input video data.
9. The device (500) of claim 8, wherein the units are slices of encoded video data.
10. The device (500) of claim 8, wherein the units are macroblocks of encoded video data.
11. The device (500) of claim 10, wherein the multiplexer does not select the dummy video data (540) for any portion of a slice when a number of macroblocks in the slice that match their corresponding reference macroblock is less than a threshold number.
12. The device (500) of claim 11, wherein the video transmitting device (500) updates the reference macroblocks of the slice when the number of macroblocks in the slice that match their corresponding reference macroblock is less than the threshold number.
13. A method of processing encoded video data, comprising:
- using a first video decoder (610) to decode units of encoded video data of a current video frame of a compressed video signal according to an intra picture decoding format, and outputting a first output (615);
- storing in a memory (630) one or more units of encoded video data from one or more previous video frames;
- using a second video decoder (620) to decode the units of the encoded video data from the memory (630) and outputting a second output (625); and
- selecting between the first output (615) and the second output (625) based on control data indicating units in the current video frame that are reused from one or more preceding video frames, and outputting a decoded video signal (645).
14. The method of claim 13, wherein the units are macroblocks of encoded video data.
15. The method of claim 14, wherein storing in memory (630) one or more units of encoded video data from one or more previous video frames comprises storing groups of macroblocks as slices, and wherein the control data indicates which macroblocks in each slice in the current video frame should be reused from the memory (630).
16. The method of claim 14, wherein storing in memory (630) one or more units of encoded video data from one or more previous video frames comprises storing groups of macroblocks as slices, and wherein the control data indicates whether an entire slice in the current video frame should be reused from the memory (630).
17. The method of claim 14, further comprising updating the memory (630) to replace a slice of encoded video data from one or more previous frames stored in the memory with a corresponding slice of encoded video data from the current video frame when the control data indicates that none of the macroblocks of the slice of encoded video data stored in the memory (630) are reused in the current video frame.
18. The method of claim 13, further comprising receiving side data including control data.
19. The method of claim 13, further comprising detecting the control data implicitly from the compressed video signal.
20. A method of communicating encoded video data, comprising:
- encoding video data of a current video frame into units of encoded video data according to an intra picture encoding format and outputting an encoded output;
- selectively removing one or more units of encoded video data in the encoded output that each match a corresponding unit of encoded video data from one or more previous video frames, or replacing said one or more units of encoded video data with a dummy unit; and
- transmitting the encoded output together with information indicating the one or more units in the current video frame that have been removed or replaced.
21. The method of claim 20, further comprising generating control data comprising the information indicating the one or more units in the current video frame that have been removed or replaced, wherein transmitting the information indicating the one or more units in the current video frame that have been removed or replaced comprises transmitting the control information as side data.
22. The method of claim 20, comprising transmitting a compressed video signal including the encoded output, wherein the information indicating the one or more units in the current video frame that have been removed or replaced is implicit in the compressed video signal.
23. The method of claim 20, wherein the units are slices of encoded video data.
24. The method of claim 20, wherein the units are macroblocks of encoded video data.
25. The method of claim 24, further comprising updating the reference macroblocks of the slice when the number of macroblocks in the slice that match their corresponding reference macroblock is less than the threshold number.
PCT/IB2008/054083 2007-10-08 2008-10-06 Method and system for communicating compressed video data WO2009047692A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP07291218 2007-10-08
EP07291218.1 2007-10-08

Publications (2)

Publication Number Publication Date
WO2009047692A2 true WO2009047692A2 (en) 2009-04-16
WO2009047692A3 WO2009047692A3 (en) 2011-10-13

Family

ID=40549678

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2008/054083 WO2009047692A2 (en) 2007-10-08 2008-10-06 Method and system for communicating compressed video data

Country Status (1)

Country Link
WO (1) WO2009047692A2 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006006077A1 (en) * 2004-07-08 2006-01-19 Canon Kabushiki Kaisha Conditional replenishment for motion jpeg2000
US20060282855A1 (en) * 2005-05-05 2006-12-14 Digital Display Innovations, Llc Multiple remote display system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
STEVEN MCCANNE ET AL: "Low-Complexity Video Coding for Receiver-Driven Layered Multicast", IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, IEEE SERVICE CENTER, PISCATAWAY, US, vol. 15, no. 6, 1 August 1997 (1997-08-01) , XP011054678, ISSN: 0733-8716 *

Also Published As

Publication number Publication date
WO2009047692A3 (en) 2011-10-13

Similar Documents

Publication Publication Date Title
US20210314584A1 (en) Signaling change in output layer sets
US9848203B2 (en) Decoding and encoding of pictures of a video sequence
CA2686438C (en) Digital signal coding/decoding apparatus
US10250897B2 (en) Tile alignment signaling and conformance constraints
AU2018236689A1 (en) Video data stream concept
CN107197313B (en) Method for encoding/decoding video stream and apparatus for decoding video stream
CN109196866B (en) Method, apparatus and storage medium for substream multiplexing for display stream compression
WO2014006854A1 (en) Device for signaling a long-term reference picture in a parameter set
KR20040099101A (en) Variable length encoding method, variable length decoding method, storage medium, variable length encoding device, variable length decoding device, and bit stream
US20200304837A1 (en) Adaptive syntax grouping and compression in video data
US20110299605A1 (en) Method and apparatus for video resolution adaptation
WO2016144445A1 (en) Method and apparatus for video coding using adaptive tile sizes
US20150003536A1 (en) Method and apparatus for using an ultra-low delay mode of a hypothetical reference decoder
US8179960B2 (en) Method and apparatus for performing video coding and decoding with use of virtual reference data
US8774273B2 (en) Method and system for decoding digital video content involving arbitrarily accessing an encoded bitstream
WO2009047692A2 (en) Method and system for communicating compressed video data
KR20060068254A (en) Video encoding method, video decoding method, and video decoder
WO2009047696A2 (en) Method and system for processing compressed video having image slices
AU2015224479B2 (en) Decoding and encoding of pictures of a video sequence
WO2024080916A1 (en) Inter-predicted reference picture lists

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 08807895; Country of ref document: EP; Kind code of ref document: A2)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 08807895; Country of ref document: EP; Kind code of ref document: A2)