WO1999018714A2 - Method and circuit for the capture of video data in a PC - Google Patents

Method and circuit for the capture of video data in a PC

Info

Publication number
WO1999018714A2
Authority
WO
WIPO (PCT)
Prior art keywords
color space
component data
space component
dct
dct coefficients
Prior art date
Application number
PCT/US1998/020907
Other languages
French (fr)
Other versions
WO1999018714A9 (en)
WO1999018714A3 (en)
Inventor
Hu Xiaoping
Original Assignee
Sigma Designs, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sigma Designs, Inc. filed Critical Sigma Designs, Inc.
Publication of WO1999018714A2 publication Critical patent/WO1999018714A2/en
Publication of WO1999018714A3 publication Critical patent/WO1999018714A3/en
Publication of WO1999018714A9 publication Critical patent/WO1999018714A9/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour

Definitions

  • the present invention relates to bufferless compression of video data.
  • Video capture chips are used for capturing still image or live video, and may be used together with a video sensor and signal processing circuit to create a video camera.
  • although it would be desirable to include a USB interface in the video capture chip to interface with a computer, the USB interface has a much smaller bandwidth than the camera generates.
  • a USB interface has a bandwidth of 12 M bits per second, and only 8 M bits per second can be allocated to a single isochronous channel.
  • the image data could be compressed.
  • a data rate for Common Interchange Format (CIF) resolution video (352 x 288) in 4:2:0 format at a rate of 30 frames per second is approximately 35.6 M bits/s.
  • One way to transmit this data across a USB using an 8 M bits/s channel is to compress this data at a compression ratio of approximately 4.5:1.
  • known lossless compression engines are not generally this effective, and all lossy compression engines utilize an intermediate buffer for compression of video data. This intermediate buffer substantially increases the manufacturing costs of such a system. Accordingly, hardware costs could be substantially reduced if this intermediate buffer were eliminated. Moreover, less CPU power is required to decompress the data.
  • each macroblock is processed.
  • Each macroblock comprises a plurality of pixels, each of which is defined by color space components.
  • a color space is a mathematical representation for a color. For example, RGB, YIQ, and YUV are different color spaces which provide different ways of representing a color which will ultimately be displayed in a video system.
  • a macroblock in YUV format contains data for all Y, U, V components. Y is the luma component, or black and white portion, while U and V are color difference components.
  • Pixels in each macroblock are traditionally stored in blocks since they are compressed. Each block comprises 8 lines, each line having 8 pixels.
  • Three types of macroblocks are available in MPEG 2.
  • the 4:2:0 macroblock consists of four Y blocks, one U block, and one V block.
  • a 4:2:2 macroblock consists of four Y blocks, two U blocks, and two V blocks.
  • a 4:4:4 macroblock consists of four Y blocks, four U blocks, and four V blocks.
  • a Discrete Cosine Transform is performed on each 8 x 8 block of pixels within each macroblock, resulting in an 8 x 8 block of horizontal and vertical frequency coefficients.
  • the DCT process is two dimensional, where DCT is performed on each row and column of pixels.
  • the two dimensional process is difficult to perform without an intermediate buffer to store 8 lines of video data. It would be desirable to perform the DCT process without this intermediate buffer, resulting in an increase in efficiency of the DCT process and a decrease in hardware costs.
  • Resolution of video is often different from the resolution of the computer display on which the video will be displayed.
  • the video resolution often should be scaled to fit within a desired window, such as by vertical and horizontal scaling. Scaling down can be performed by averaging, while scaling up can be accomplished by interpolation.
  • the present invention provides a video capture chip with a USB interface.
  • when combined with a video sensor and signal processing circuit, the video capture chip is capable of capturing live video and still images, and sending the data through a USB to a computer.
  • the present invention may be used in a video camera, surveillance watcher, scanner, copier, fax machine, digital still picture camera, or other similar device.
  • a method for combining vertical scaling and color format conversion is disclosed.
  • Vertical scaling and 4:2:2 to 4:2:0 color format conversion are simultaneously performed on incoming Y, U, and V data.
  • each byte of the Y, U, and V data are separated.
  • a scaling factor is determined, the scaling factor indicating a number of bytes to average.
  • when the scaling factor is equal to 1, a 2:1 scale down is performed for each U and V byte.
  • when the scaling factor is equal to f, where f is greater than 1, a 2f:1 scale down is performed for each U and V byte and an f:1 scale down is performed for each Y byte.
  • a method for performing a one dimensional DCT on a line of pixels to create a DCT coefficient y(u) is disclosed.
  • a sequence of pixels is accepted.
  • a cosine operation is then performed on adjacent sets of the sequence of pixels to generate a sequence of one dimensional DCT coefficients. This is accomplished without storing the sequence in a buffer through use of a register. Through elimination of the buffer required in the traditional two dimensional DCT, efficiency is improved, and manufacturing costs are substantially reduced.
  • a method for compressing DCT coefficients, or other data is disclosed to offset the lower compression ratio resulting from the one dimensional DCT.
  • a plurality of DCT coefficients are accepted.
  • a pattern code is then generated for the plurality of DCT coefficients.
  • the pattern code comprises a plurality of bits, each one of the plurality of bits corresponding to one of the plurality of DCT coefficients.
  • Each one of the plurality of bits is 0 when the DCT coefficient is 0, and is otherwise 1.
  • Nonzero DCT coefficients are identified using the pattern code.
  • Each zero DCT coefficient is encoded with zero bits.
  • a coefficient table is prepared, the coefficient table having a plurality of code pairs, each of the plurality of pairs having a length code and a Huffman code.
  • a pattern table is prepared, the pattern table having a plurality of code pairs, each of the plurality of pairs having a length code and a Huffman code.
  • a table lookup is performed for each non-zero DCT coefficient within the coefficient table.
  • a table lookup is performed for each pattern code within the pattern table. Optimum compression is achieved since a majority of the non-zero coefficients have common values which can be compressed through Huffman encoding.
  • the present invention provides a method and system for vertically scaling the live video signal data and performing a 4:2:2 to 4:2:0 color format conversion simultaneous with the vertical scaling step. Moreover, a one-dimensional bufferless discrete cosine transform is performed on the scaled live video signal data to create a plurality of scaled DCT coefficients. Each of the plurality of the scaled DCT coefficients is then Huffman encoded.
  • FIG. 1 illustrates a USB video capture chip according to a presently preferred embodiment of the present invention.
  • FIG. 2 illustrates a scaler according to a presently preferred embodiment of the present invention.
  • FIG. 3 illustrates an implementation of the vertical scaler according to a presently preferred embodiment of the present invention.
  • FIG. 4 illustrates a compression engine according to the present invention.
  • FIG. 5 is a flow diagram illustrating a method for performing a one-dimensional DCT according to a presently preferred embodiment of the present invention.
  • FIG. 6 illustrates an interface between a scaled one dimensional DCT and Huffman Encoder according to the present invention.
  • FIG. 7 illustrates a Huffman Encoder according to a presently preferred embodiment of the present invention.
  • FIG. 8 illustrates a coefficient selection module of the Huffman Encoder according to a presently preferred embodiment of the present invention.
  • FIG. 9 illustrates a presently preferred embodiment of DC adjustment performed during the coefficient selection.
  • FIG. 10 is a flow diagram illustrating the DC adjustment performed according to a presently preferred embodiment of the present invention.
  • FIG. 11 illustrates a pattern code generation module of the Huffman Encoder according to a presently preferred embodiment of the present invention.
  • FIG. 12 illustrates a table lookup module of the Huffman Encoder according to a presently preferred embodiment of the present invention.
  • a USB video capture chip according to a presently preferred embodiment of the present invention is shown.
  • a video sensor and signal processor 20 provides color space component data 22 to the USB video capture chip.
  • the USB video capture chip comprises a scaler 24, a DCT module 26, a Huffman coding module 28, and a sync and syntax control module 30.
  • still image data 32 bypasses the video capture chip and goes directly to a USB interface 34 for transmitting data to a CPU.
  • live video is compressed by the USB video capture chip before being sent to the USB interface 34.
  • a software driver may then decompress the video data and send the decompressed data to an application.
  • Each line of incoming color space component data 36 comprises first color space component data, second color space component data, and third color space component data.
  • the first, second, and third color space component data correspond to Y, U, and V data, respectively, and each line of 4:2:2 YUV data is split by a color space component separator, or YUV separator 38 into Y 40, U 42, and V 44 buffers.
  • the Y, U and V buffers each comprise a four-byte buffer.
  • a horizontal sync signal 46 indicates the start of a new horizontal scan line of a video frame. Scaling is synchronized with a video clock signal 48. Tracking of the Y, U, and V components is performed by counting each byte received at the horizontal sync signal 46.
  • input data is interleaved YUYV data. Therefore, even bytes comprise Y bytes, while odd bytes comprise U or V bytes.
  • the separator may be implemented with a multiplexer, or equivalent means for separating the Y, U and V bytes.
  • the Y, U, and V data is then multiplexed by a 3:1 32-bit multiplexer 50.
  • the multiplexer 50 controls buffer selection and sends 4 bytes of Y, U or V data to be scaled.
  • when the Y buffer 40 is selected, the Y buffer 40 is accessed twice before switching to the U 42 or V 44 buffer.
  • the multiplexed data is then processed by a horizontal scaler 52 and a vertical scaler 54 according to the present invention.
  • the horizontal 52 and vertical 54 scalers may be implemented in a pipeline.
  • the horizontal scaler 52 is adapted for performing a 2:1 or 4:1 averaging operation on each color component, depending on a horizontal scale factor.
  • when the horizontal scale factor is 2, two bytes are selected from one of the four-byte buffers 40-44.
  • when the horizontal scale factor is 4, all four bytes are selected from one of the four-byte buffers 40-44.
  • the selected bytes are then averaged and rounded.
  • the horizontal scaler 52 then outputs a single averaged byte.
  • the vertical scaler 54 is adapted for performing vertical scaling and color format conversion on the horizontally scaled data in a single process according to the present invention.
  • a 2f:1 scale down on each byte of the U and V components is performed for a scaling factor equal to f.
  • an f:1 scale down is performed on each byte of the Y component where f is an integer greater than 1, since no scaling is required where f is equal to 1.
  • This scaled data is then sent to a DCT module.
  • a line buffer control module 56 controls data flow to a YUV line buffer, or DCT buffer 58.
  • the line buffer control module 56 comprises a multiplexer which dispatches data to the YUV line buffer, or DCT input buffer 58, for use by a DCT module.
  • the YUV line buffer 58 may be used to store intermediate accumulation results for the vertical scaler 54.
  • data is dispatched in 10 bit blocks. However, one of ordinary skill in the art will readily recognize that blocks comprising greater or fewer bits may be dispatched.
  • the multiplexer dispatches YUV data from the vertical scaler to a Y, U, or V block, respectively, within the YUV line buffer.
  • the DCT module may then process selected bytes 60 within the YUV line buffer.
  • referring now to FIG. 3, an implementation of the vertical scaler 54 according to a presently preferred embodiment of the present invention is presented.
  • a means for vertically scaling the live video signal data and means for performing a 4:2:2 to 4:2:0 color format conversion simultaneous with the vertical scaling step are provided.
  • Incoming color space component data 62 is obtained from the horizontal scaler.
  • a means for adding vertically aligned component values is provided.
  • a 10-bit accumulator 64 performs adding required during averaging of this color space component data to produce a sum.
  • An accumulator 64 is provided having a first input operatively coupled to the incoming color space component data 62, a second input operatively coupled to an initializer value 66 for rounding accumulated data, a third input operatively coupled to a component signal 68 adapted for selecting the first, second, or third color space component to be scaled, a fourth input operatively coupled to a set_initial signal 70 used to reset the accumulator, a fifth input 72 for receiving intermediate accumulation results, and an output 74 producing the sum of the color space component data to be averaged. Rounding is performed by adding an initializer value to the sum.
  • a shifting means is provided.
  • a shifter 76 is provided having a first input 78 operatively coupled to the accumulator output, a second input 80 indicating a number of bits to shift the sum right, and an output 82.
  • the shifter shifts the sum right by a number of bits equal to shift_bits to divide the sum by a power of 2 to produce an averaged sum.
  • a multiplexing means is operatively coupled to the shifter 76 and accumulator 64 for selecting YUV data to be sent to a line buffer control module.
  • the multiplexer 84 includes a first input 86 operatively coupled to the accumulator output, a second input 88 operatively coupled to the shifter output, a select line 90 operatively coupled to a final_shift signal indicating when a final shift is to be performed, and an output 92, the select line 90 selecting the second input 88 when the final shift is to be performed, and otherwise selecting the first input 86.
  • a buffer control module 94 is provided for storing the multiplexer output, the buffer control module 94 adapted for providing the multiplexer output to a DCT module when the final_shift signal indicates the final shift is to be performed, and otherwise providing the multiplexer output to the fifth accumulator input.
  • the line buffer control module is operatively coupled to the accumulator to store intermediate accumulation results.
  • the buffer control module 94 is adapted for storing the multiplexer output in a YUV line buffer 96.
  • An extract bits module 98 sends this data to the DCT module.
  • Control logic 100 generates necessary control signals for the accumulator 64, shifter 76, multiplexer 84 and line buffer control module 94. For example, the number of bits to shift the data, shift_bits, is sent to the shifter 76.
  • the control logic is regulated by a scaling factor 102, a vertical_sync signal 104 indicating the start of a frame, and the rate 106 at which the vertical scaler receives bytes from the horizontal scaler.
  • the scaling factor 102 is an integer, and will generally be 1 or 2.
  • during vertical scaling, the control logic 100 generates three signals for use by the accumulator 64.
  • the initializer value is generated indicating a value to initialize the accumulator 64 for rounding.
  • a y_comp signal indicates that the present component being scaled is the Y component. For example, if the component is a Y component, the y_comp signal is 1. In all other instances, the y_comp signal is 0. As described above, this is performed by clock counting.
  • a set_initial signal is used to reset the accumulator 64 to the initializer value at the beginning of scaling each Y, U, or V component.
  • Data flow during vertical scaling varies according to the scale factor.
  • the control logic 100 generates a two bit path_select signal 108 indicating the direction of the data flow, since data may flow in three directions: from the FMUX 84 to the line buffer 96, from the line buffer 96 to the extract bits module 98, and from the line buffer 96 to the accumulator 64.
  • during 1:1 scaling, data flows from the FMUX 84 to the line buffer 96.
  • during 2:1 scaling, data flows from the FMUX 84 to the line buffer 96 for even lines.
  • for odd lines, data flows sequentially from the line buffer 96 to the extract bits module 98, and from the line buffer 96 to the accumulator 64.
  • during 4:1 scaling, four input lines are processed.
  • the control logic 100 sends a final_shift signal to the FMUX 84 indicating when the accumulation process is complete. Therefore, when final_shift is 1, the FMUX 84 selects the output of the shifter 76, and otherwise selects the output of the accumulator 64.
  • the control logic 100 further generates a lineout_parity 110 indicating a line number of the line after scaling is completed, as well as a signal 112 indicating a start of a new horizontal line.
  • data flows from the line buffer 96 to an extract bits module 98. According to a presently preferred embodiment, the lowest 8 bits from the 10-bit line buffer data are extracted.
  • the compression engine comprises a one dimensional DCT 114 integrated with quantizers, a Huffman encoding block 116, and a syntax protocol and sync control block 118 coupled to the USB interface 120.
  • the compression engine encodes each frame on a scanline basis. Each line comprises 8-pixel segments.
  • each frame starts with a picture_start_code and each scanline starts with a line_start_code.
  • the line_start_code distinguishes between even lines comprising Y components only and odd lines comprising Y, U and V components.
  • referring to FIG. 5, a flow diagram illustrating a method for performing a one-dimensional DCT according to a presently preferred embodiment of the present invention is presented.
  • a means for performing a one-dimensional bufferless discrete cosine transform on the scaled live video signal data to create a plurality of scaled DCT coefficients is provided.
  • the one dimensional DCT is performed on each line of 8 pixels to create a DCT coefficient y(u).
  • a plurality of pixels is accepted at step 122, each of the plurality of pixels x_i designated by an integer i, where i is an integer selected from the group consisting of 0, 1, 2, 3, 4, 5, 6, and 7.
  • a DCT coefficient selector u is initialized at step 124.
  • a pixel is selected and intermediate values are initialized at step 126.
  • a cosine operation is performed on ((2i + 1) * uπ/16) to create a result, where u is an integer selected from the group consisting of 0, 1, 2, 3, 4, 5, 6, and 7, and where u designates a DCT coefficient.
  • the pixel x_i and the result of the cosine operation are multiplied to create a value for summation, and the values for summation are successively added to create a summed value.
  • a DCT coefficient y(u) is determined at step 138.
  • a constant is determined, the constant being 1/ sqrt(2) when u is 0, the constant otherwise being 1.
  • the summed value is multiplied by the constant to create a product at step 142.
  • the product is then divided by 2 at step 144.
  • the steps of performing and multiplying are repeated for each of the plurality of pixels until all DCT coefficients u are determined to be calculated at step 146. These steps are performed for each DCT coefficient u at step 148 until the process is completed at step 150.
  • the scaled DCT is further divided by a quantizer.
  • a quantizer q(u) corresponding to the DCT coefficient y(u) is selected, where u is an integer selected from the group consisting of 0, 1, 2, 3, 4, 5, 6, and 7, where the quantizer q(0) is 5.656, the quantizer q(1) is 11.0, the quantizer q(2) is 13.0, the quantizer q(3) is 15.0, the quantizer q(4) is 17.0, the quantizer q(5) is 19.0, the quantizer q(6) is 21.0, and the quantizer q(7) is 23.0.
  • the DCT coefficient y(u) is then divided by the quantizer q(u).
  • the method for performing a one-dimensional DCT may be implemented in software or firmware, as well as in programmable gate array devices, ASIC and other hardware.
  • each DCT coefficient byte is written to the buffer in synchronization with a DCT clock when enabled by a WRITE_ENABLE signal.
  • the Huffman Encoder reads each byte from the buffer when enabled by a READ_ENABLE signal.
  • the READ_ENABLE signal is enabled during coefficient selection, and disabled during Huffman encoding.
  • a Huffman Encoder according to the present invention is illustrated.
  • a coefficient to be Huffman encoded is selected at 154.
  • pattern code generation is performed at 156.
  • table lookup is performed at 158. Therefore, a means for Huffman encoding each of the plurality of scaled DCT coefficients includes a means for selecting a coefficient to be Huffman encoded, means for pattern code generation, and table lookup means.
  • coefficient selection means of the Huffman Encoder according to a presently preferred embodiment of the present invention is presented.
  • a multiplexer DC_MUX 160 has a select line 162, a first input 164 coupled to an incoming DCT coefficient received from the one dimensional DCT output, a second input 166 coupled to a DC Adjustment block 168, and an output 170.
  • when the incoming DCT coefficient is a DC component, the select line 162 is 1. In all other instances, the select line 162 is 0.
  • when the select line 162 is 1, the multiplexer DC_MUX 160 selects the second input 166 and places it at the multiplexer output 170.
  • when the select line 162 is 0, the first input 164 is selected and passed through to the multiplexer output 170.
  • the DC adjustment block 168 includes a DC prediction block 174 and a subtraction block 176.
  • the DC prediction block 174 includes a horizontal sync input 178 indicating the start of a new line, a component_id input 180 indicating a Y, U or V component, an initial_pred input 182 used for initialization, a DC component input 184 providing the Y, U, or V component as indicated by the component_id input 180, and a DC_pred output 186.
  • a plurality of registers is provided for initialization, with each one of the plurality of registers allocated for each of the Y, U, and V components.
  • the DC prediction block 174 initializes each of the plurality of registers with the initial_pred input 182 value.
  • the initial_pred input value is 64.
  • the subtraction block 176 has a first input coupled to the DC component input 172, a second input coupled to the DC prediction block output 186, and an output 188. For each 8-byte Y, U, and V component, the second input, or corresponding register value, is subtracted from the first input, or DC component value 172. The plurality of registers are then initialized to contain the DC component input value 172.
  • the DC adjustment process is illustrated in FIG. 10.
  • the horizontal sync signal indicates the start of a new line.
  • each one of the plurality of registers is initialized. For each 8-byte component segment, steps 192-196 are performed.
  • the most recent DC component value is assigned to a temporary memory location.
  • the register value corresponding to the Y, U, or V component is subtracted from the most recent DC component value and sent to the DC_MUX 160.
  • the value stored in the temporary memory location is stored in the register corresponding to the Y, U, or V component.
  • the component_id 0, 1, and 2 may be provided for components Y, U, and V, respectively.
  • a state machine may provide the component_id in the sequence of ⁇ 0, 1, 0, 2, 0, 1, 0, 2,... ⁇ where the Huffman encoding block will process each scanline on an 8-pixel basis in the order of Y, U, Y, V, Y, U, Y, V...
  • components may be received in an alternative order.
  • pattern code generation means according to a presently preferred embodiment of the present invention is illustrated.
  • a plurality of DCT coefficients are generated by the DCT module.
  • a pattern code is then generated for each of the plurality of DCT coefficients to identify which coefficients are coded, since only the nonzero coefficients are coded.
  • the pattern code generated includes a plurality of bits, each one of the plurality of bits corresponding to one of the plurality of DCT coefficients.
  • each one of the plurality of bits is 0 when the DCT coefficient is 0. In all other instances, the corresponding bit is 1.
  • This pattern code may be generated by performing a bitwise OR operation for each one of the plurality of DCT coefficients.
  • an adjusted DCT coefficient 198 is provided by the multiplexer DC_MUX.
  • a bitwise OR operation 200 is performed on the adjusted DCT coefficient 198 to produce an output comprising one of the plurality of bits in the pattern code.
  • a 1:n 1-bit MUX 202 having an input 204, a plurality of select lines 206, and n outputs 208 is provided.
  • a pattern code byte 210 will be generated. Therefore, the 1:n MUX 202 comprises a 1:8 MUX to accommodate 8 DCT coefficients and a corresponding 8 bit pattern code.
  • the output of the bitwise OR operation 200 is operatively coupled to the 1:8 1-bit MUX 202.
  • a coefficient id is operatively coupled to the 1:8 1-bit MUX and 1:8 8-bit MUX select lines 206 for selecting which one of 8 coefficients is to be processed.
  • the output of the bitwise OR operation 200 is then placed in the corresponding bit in the pattern code 210.
  • the adjusted DCT coefficient is similarly stored in a corresponding byte in an n byte Huffman Input Buffer 212.
  • a delay 214 of one clock is provided for synchronization with the pattern code generation.
  • a 1:n n-bit MUX 216 having an input 218, n outputs 220, and a plurality of select lines 206 coupled to the coefficient id is provided for storing the adjusted DCT coefficient in the Huffman Input Buffer 212.
  • the MUX 216 comprises a 1:8 8-bit MUX.
  • the adjusted DCT coefficient 198 is passed through the input of the 8-bit MUX 216 to a byte in the n byte Huffman Input Buffer 212 corresponding to the coefficient id.
  • a coefficient table is prepared including a plurality of code pairs, each of the plurality of pairs having a length code and a Huffman code.
  • a pattern table is prepared including a plurality of code pairs, each of the plurality of pairs having a length code and a Huffman code.
  • a multiplexer HMUX 222 having a plurality of inputs 224 operatively coupled to the pattern code and the Huffman Input Buffer, a plurality of select lines 226 coupled to the coefficient id and a selection bit 228 for selecting a pattern code 230 or a DCT coefficient 232 for Huffman coding, and an output is provided.
  • the selection bit 228 indicates the start of the 1 byte pattern code 230 and 8 bytes of DCT coefficients 232 which form a segment.
  • the pattern code 230 is operatively coupled to a first one of the plurality of inputs and each of the DCT coefficients in the Huffman Input Buffer 232 are operatively coupled to a different one of the plurality of inputs.
  • the selection bit 228 is in a first state
  • the pattern code 230 is passed through to the multiplexer 222 output.
  • the selection bit 228 is in a second state
  • one of the plurality of bytes in the Huffman Input Buffer 232 corresponding to the coefficient id 226 is passed through to the multiplexer 222 output.
  • Nonzero DCT coefficients are then identified using the pattern code.
  • Table select 234 selects a pattern table or coefficient table.
  • the selection bit 228 and table select 234 can be made the same signal.
  • a table lookup 236 is performed for each non-zero DCT coefficient within the coefficient table to Huffman encode the non-zero DCT coefficient.
  • Each zero DCT coefficient is encoded with zero bits, meaning that the coefficient is skipped in the bitstream.
  • the pattern code is always coded and transmitted.
  • a table lookup 236 is performed for the pattern code within the pattern table to Huffman encode the pattern code.
  • Huffman encoding of the pattern code and DCT coefficients produces a 4 bit length code 238 and a 14 bit Huffman code 240.
  • the length and Huffman code for a zero DCT coefficient are zero.
  • the Huffman encoded pattern code and DCT coefficients are then sent to a Sync and Syntax control block 242.
  • the sync and syntax control block provides control logic for sending each Huffman Code to a USB FIFO buffer.
  • the sync and control block provides a line dropping mechanism, a state machine, and a data multiplexer.
  • the line dropping mechanism drops a line if the USB FIFO almost full condition is true and the current line is an even line.
  • a Y line is dropped to prevent the USB FIFO buffer from becoming full, which would cause incoming data to be discarded.
  • the USB FIFO almost full condition may be true if the USB FIFO has less than 256 bytes of free space.
  • the state machine and data multiplexer provide a compressed bitstream to the USB interface from the Huffman Encoder. If the compressed bitstream does not lie on a byte boundary, the bitstream is stuffed with 1's. The resulting bitstream is then output to the USB bus (a sketch of this byte-aligned packing follows this list).
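As an illustration only (the function name, bit order, and example codes are assumptions, not taken from this text), the following Python sketch shows how variable-length (length, code) pairs produced by the Huffman encoder could be packed into a byte-aligned stream for the USB FIFO, stuffing with 1 bits when the compressed bitstream does not end on a byte boundary, as described above.

```python
def pack_codes(codes):
    """Pack (length, code) pairs MSB-first and stuff trailing 1's to a byte edge."""
    bits = ""
    for length, code in codes:
        bits += format(code, "b").zfill(length)
    if len(bits) % 8:                       # bitstream not on a byte boundary
        bits += "1" * (8 - len(bits) % 8)   # stuff with 1's
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

# Three codes totalling 11 bits, so 5 stuffing bits are appended.
print(pack_codes([(3, 0b010), (4, 0b1101), (4, 0b0011)]).hex())  # -> "5a7f"
```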

Abstract

A method and system for capturing live video signal data using bufferless data compression are disclosed. Live video signal data is vertically scaled. A 4:2:2 to 4:2:0 color format conversion is performed simultaneous with the vertical scaling step. A one-dimensional bufferless discrete cosine transform is performed on the scaled live video signal data to create a plurality of scaled DCT coefficients. Each of the plurality of scaled DCT coefficients is then Huffman coded. Each of the Huffman encoded DCT coefficients may then be sent via a USB interface to a USB bus.

Description

This application is submitted in the name of Inventor Xiaoping Hu, assignor to Sigma Designs Inc., a California Corporation.
SPECIFICATION Multi-function USB Video Capture Chip Using Bufferless Data Compression
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to bufferless compression of video data.
2. The Prior Art
With the development of multi-media systems, the prospect of inputting live video into a computer system has become common. Video capture chips are used for capturing still images or live video, and may be used together with a video sensor and signal processing circuit to create a video camera. Although it would be desirable to include a USB interface in the video capture chip to interface with a computer, the USB interface has a much smaller bandwidth than the camera generates.
At present, a USB interface has a bandwidth of 12 M bits per second, and only 8 M bits per second can be allocated to a single isochronous channel. In order to capture live video at a high resolution, the image data could be compressed. For example, a data rate for Common Interchange Format (CIF) resolution video (352 x 288) in 4:2:0 format at a rate of 30 frames per second is approximately 35.6 M bits/s. One way to transmit this data across a USB using an 8 M bits/s channel is to compress this data at a compression ratio of approximately 4.5:1. However, known lossless compression engines are not generally this effective, and all lossy compression engines utilize an intermediate buffer for compression of video data. This intermediate buffer substantially increases the manufacturing costs of such a system. Accordingly, hardware costs could be substantially reduced if this intermediate buffer were eliminated. Moreover, less CPU power is required to decompress the data.
During MPEG I and MPEG II encoding, each macroblock is processed. Each macroblock comprises a plurality of pixels, each of which is defined by color space components. A color space is a mathematical representation for a color. For example, RGB, YIQ, and YUV are different color spaces which provide different ways of representing a color which will ultimately be displayed in a video system. A macroblock in YUV format contains data for all Y, U, V components. Y is the luma component, or black and white portion, while U and V are color difference components.
Pixels in each macroblock are traditionally stored in blocks since they are compressed. Each block comprises 8 lines, each line having 8 pixels. Three types of macroblocks are available in MPEG 2. The 4:2:0 macroblock consists of four Y blocks, one U block, and one V block. A 4:2:2 macroblock consists of four Y blocks, two U blocks, and two V blocks. A 4:4:4 macroblock consists of four Y blocks, four U blocks, and four V blocks.
During encoding, a Discrete Cosine Transform (DCT) is performed on each 8 x 8 block of pixels within each macroblock, resulting in an 8 x 8 block of horizontal and vertical frequency coefficients. Typically, the DCT process is two dimensional, where DCT is performed on each row and column of pixels. However, the two dimensional process is difficult to perform without an intermediate buffer to store 8 lines of video data. It would be desirable to perform the DCT process without this intermediate buffer, resulting in an increase in efficiency of the DCT process and a decrease in hardware costs.
Resolution of video is often different from the resolution of the computer display on which the video will be displayed. In order to display the video on various computer displays, the video resolution often should be scaled to fit within a desired window, such as by vertical and horizontal scaling. Scaling down can be performed by averaging, while scaling up can be accomplished by interpolation.
Various color formats have been developed for use with image and video encoding and decoding. To facilitate the transfer of data, most MPEG II video encoders accept various video formats, such as the 4:2:2 YUV video format, and use the 4:2:0 format for data storage. Therefore, color format conversion from the 4:2:2 format to the 4:2:0 format is known to be performed. In known systems, color format conversion and scaling are performed in two separate processes. It would be extremely advantageous if vertical scaling and color format conversion could be combined into one process. Through combining these two processes, efficiency of the video capture chip could be improved with a reduced hardware cost.
Accordingly, it would be desirable to provide a method and system for capturing still images or live video with improved efficiency and reduced hardware costs. These advantages are achieved in an embodiment of the invention in which color format conversion and vertical scaling are performed in one process, in which a one-dimensional DCT process is performed without an intermediate buffer, and in which Huffman coding is tailored to the particular DCT.
BRIEF DESCRIPTION OF THE INVENTION The present invention provides a video capture chip with a USB interface. When combined with a video sensor and signal processing circuit, the video capture chip is capable of capturing live video and still images, and sending the data through a USB to a computer. With the addition of application software, the present invention may be used in a video camera, surveillance watcher, scanner, copier, fax machine, digital still picture camera, or other similar device.
According to a first aspect of the present invention, a method for combining vertical scaling and color format conversion is disclosed. Vertical scaling and 4:2:2 to 4:2:0 color format conversion are simultaneously performed on incoming Y, U, and V data. According to a presently preferred embodiment of the present invention, each byte of the Y, U, and V data is separated. A scaling factor is determined, the scaling factor indicating a number of bytes to average. When the scaling factor is equal to 1, a 2:1 scale down is performed for each U and V byte. When the scaling factor is equal to f, where f is greater than 1, a 2f:1 scale down is performed for each U and V byte. In addition, when the scaling factor is equal to f, where f is greater than 1, an f:1 scale down is performed for each Y byte. Through the reduction of the vertical scaling and color format conversion into one process, the line buffer size and logical gate count may be reduced by half.
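As a rough functional sketch of this first aspect, the Python below averages vertically aligned samples of one pixel column with an f:1 ratio for Y and a 2f:1 ratio for U and V, so that the chroma line count is halved (4:2:2 to 4:2:0) in the same pass as the vertical scale-down. The function name, the list-based input format, and the round-half-up rounding are illustrative assumptions, not the patent's interface.

```python
def vertical_scale_column(samples, f, is_luma):
    """Average vertically aligned samples for one column, one value per input line.

    Y is averaged f:1; U and V are averaged 2f:1, so the 4:2:2 to 4:2:0
    conversion happens in the same pass as the vertical scale-down.
    """
    group = f if is_luma else 2 * f
    out = []
    for i in range(0, len(samples), group):
        block = samples[i:i + group]
        # Add half the divisor before dividing so the result is rounded.
        out.append((sum(block) + group // 2) // group)
    return out

# Example: factor f = 2 gives 2:1 averaging for Y and 4:1 for U and V.
y_col = [100, 104, 96, 100, 120, 124, 116, 120]
u_col = [60, 62, 58, 60, 80, 82, 78, 80]
print(vertical_scale_column(y_col, 2, True))    # 4 Y samples remain
print(vertical_scale_column(u_col, 2, False))   # 2 U samples remain
```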
According to a second aspect of the present invention, a method for performing a one dimensional DCT on a line of pixels to create a DCT coefficient y(u) is disclosed. According to a presently preferred embodiment of the present invention, a sequence of pixels is accepted. A cosine operation is then performed on adjacent sets of the sequence of pixels to generate a sequence of one dimensional DCT coefficients. This is accomplished without storing the sequence in a buffer through use of a register. Through elimination of the buffer required in the traditional two dimensional DCT, efficiency is improved, and manufacturing costs are substantially reduced.
According to a third aspect of the present invention, a method for compressing DCT coefficients, or other data, is disclosed to offset the lower compression ratio resulting from the one dimensional DCT. According to a presently preferred embodiment of the present invention, a plurality of DCT coefficients are accepted. A pattern code is then generated for the plurality of DCT coefficients. The pattern code comprises a plurality of bits, each one of the plurality of bits corresponding to one of the plurality of DCT coefficients. Each one of the plurality of bits is 0 when the DCT coefficient is 0, and is otherwise 1. Nonzero DCT coefficients are identified using the pattern code. Each zero DCT coefficient is encoded with zero bits. A coefficient table is prepared, the coefficient table having a plurality of code pairs, each of the plurality of pairs having a length code and a Huffman code. In addition, a pattern table is prepared, the pattern table having a plurality of code pairs, each of the plurality of pairs having a length code and a Huffman code. A table lookup is performed for each non-zero DCT coefficient within the coefficient table. Similarly, a table lookup is performed for each pattern code within the pattern table. Optimum compression is achieved since a majority of the non-zero coefficients have common values which can be compressed through Huffman encoding.
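A rough Python sketch of this third aspect follows. The (length, code) coefficient and pattern tables are hypothetical placeholders, since the actual Huffman tables are not published in this text, and the MSB-first bit ordering of the pattern code is likewise an assumption; only the overall structure (one pattern bit per coefficient, zero coefficients encoded with zero bits, and a table lookup for the pattern byte and for each nonzero coefficient) follows the description above.

```python
# Hypothetical (length, code) tables; the real tables are not given in the text.
COEFF_TABLE = {1: (2, 0b10), -1: (2, 0b11), 2: (4, 0b0100), -2: (4, 0b0101)}
PATTERN_TABLE = {0x00: (2, 0b00), 0x80: (3, 0b010), 0xC0: (4, 0b0110)}
DEFAULT = (14, 0)  # fallback entry, purely illustrative

def encode_segment(coeffs):
    """Encode 8 quantized DCT coefficients as a pattern code plus nonzero codes."""
    assert len(coeffs) == 8
    pattern = 0
    for i, c in enumerate(coeffs):
        if c != 0:
            pattern |= 1 << (7 - i)            # one bit per coefficient, MSB first
    bits = [PATTERN_TABLE.get(pattern, DEFAULT)]   # the pattern code is always sent
    for c in coeffs:
        if c != 0:                              # zero coefficients use zero bits
            bits.append(COEFF_TABLE.get(c, DEFAULT))
    return pattern, bits

pattern, codes = encode_segment([5, -1, 0, 0, 1, 0, 0, 0])
print(f"pattern=0b{pattern:08b}", codes)
```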
Therefore, the present invention provides a method and system for vertically scaling the live video signal data and performing a 4:2:2 to 4:2:0 color format conversion simultaneous with the vertical scaling step. Moreover, a one-dimensional bufferless discrete cosine transform is performed on the scaled live video signal data to create a plurality of scaled DCT coefficients. Each of the plurality of the scaled DCT coefficients is then Huffman encoded.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a USB video capture chip according to a presently preferred embodiment of the present invention.
FIG. 2 illustrates a scaler according to a presently preferred embodiment of the present invention.
FIG. 3 illustrates an implementation of the vertical scaler according to a presently preferred embodiment of the present invention.
FIG. 4 illustrates a compression engine according to the present invention.
FIG. 5 is a flow diagram illustrating a method for performing a one-dimensional DCT according to a presently preferred embodiment of the present invention.
FIG. 6 illustrates an interface between a scaled one dimensional DCT and Huffman Encoder according to the present invention.
FIG. 7 illustrates a Huffman Encoder according to a presently preferred embodiment of the present invention.
FIG. 8 illustrates a coefficient selection module of the Huffman Encoder according to a presently preferred embodiment of the present invention.
FIG. 9 illustrates a presently preferred embodiment of DC adjustment performed during the coefficient selection.
FIG. 10 is a flow diagram illustrating the DC adjustment performed according to a presently preferred embodiment of the present invention.
FIG. 11 illustrates a pattern code generation module of the Huffman Encoder according to a presently preferred embodiment of the present invention.
FIG. 12 illustrates a table lookup module of the Huffman Encoder according to a presently preferred embodiment of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In the following description, a preferred embodiment of the invention is described with regard to preferred process steps and data structures. However, those skilled in the art would recognize, after perusal of this application, that embodiments of the invention may be implemented using a set of general purpose computers operating under program control, and that modification of a set of general purpose computers to implement the process steps and data structures described herein would not require undue invention.
Referring first to FIG. 1, a USB video capture chip according to a presently preferred embodiment of the present invention is shown. A video sensor and signal processor 20 provides color space component data 22 to the USB video capture chip. The USB video capture chip comprises a scaler 24, a DCT module 26, a Huffman coding module 28, and a sync and syntax control module 30. According to a presently preferred embodiment of the present invention, still image data 32 bypasses the video capture chip and goes directly to a USB interface 34 for transmitting data to a CPU. However, live video is compressed by the USB video capture chip before being sent to the USB interface 34. A software driver may then decompress the video data and send the decompressed data to an application.
Referring now to FIG. 2, a scaler according to a presently preferred embodiment of the present invention is shown. Each line of incoming color space component data 36 comprises first color space component data, second color space component data, and third color space component data. According to a presently preferred embodiment of the present invention, the first, second, and third color space component data correspond to Y, U, and V data, respectively, and each line of 4:2:2 YUV data is split by a color space component separator, or YUV separator 38, into Y 40, U 42, and V 44 buffers. However, one of ordinary skill in the art will readily recognize that the present invention may be easily modified without undue experimentation to accommodate other color space components and formats. According to a presently preferred embodiment, the Y, U and V buffers each comprise a four-byte buffer. A horizontal sync signal 46 indicates the start of a new horizontal scan line of a video frame. Scaling is synchronized with a video clock signal 48. Tracking of the Y, U, and V components is performed by counting each byte received at the horizontal sync signal 46. According to a presently preferred embodiment, input data is interleaved YUYV data. Therefore, even bytes comprise Y bytes, while odd bytes comprise U or V bytes. The separator may be implemented with a multiplexer, or equivalent means for separating the Y, U and V bytes.
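The interleaved 4:2:2 byte order just described (even bytes Y, odd bytes alternating U and V) can be pictured with a few lines of Python; the function below is an illustrative stand-in for the YUV separator 38, not a description of the hardware multiplexer.

```python
def separate_yuyv(line):
    """Split one interleaved YUYV scan line into Y, U and V byte lists."""
    y = line[0::2]          # even bytes are luma
    u = line[1::4]          # odd bytes alternate U, V, U, V, ...
    v = line[3::4]
    return y, u, v

y, u, v = separate_yuyv([16, 64, 18, 96, 20, 66, 22, 98])  # Y0 U0 Y1 V0 ...
print(y, u, v)   # [16, 18, 20, 22] [64, 66] [96, 98]
```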
The Y, U, and V data is then multiplexed by a 3:1 32-bit multiplexer 50. The multiplexer 50 controls buffer selection and sends 4 bytes of Y, U or V data to be scaled. When the Y buffer 40 is selected, the Y buffer 40 is accessed twice before switching to the U 42 or V 44 buffer. The multiplexed data is then processed by a horizontal scaler 52 and a vertical scaler 54 according to the present invention. The horizontal 52 and vertical 54 scalers may be implemented in a pipeline.
The horizontal scaler 52 is adapted for performing a 2:1 or 4:1 averaging operation on each color component, depending on a horizontal scale factor. When the horizontal scale factor is 2, two bytes are selected from one of the four-byte buffers 40-44. When the horizontal scale factor is 4, all four bytes are selected from one of the four-byte buffers 40-44. The selected bytes are then averaged and rounded. The horizontal scaler 52 then outputs a single averaged byte.
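A minimal sketch of this 2:1 / 4:1 horizontal averaging, assuming plain integer samples and round-half-up rounding (the exact rounding rule is not spelled out in this text):

```python
def horizontal_average(buf4, scale):
    """Average 2 or 4 bytes from a four-byte component buffer into one byte."""
    assert scale in (2, 4)
    selected = buf4[:scale]
    return (sum(selected) + scale // 2) // scale  # average and round

print(horizontal_average([100, 104, 96, 92], 2))  # -> 102
print(horizontal_average([100, 104, 96, 92], 4))  # -> 98
```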
The vertical scaler 54 is adapted for performing vertical scaling and color format conversion on the horizontally scaled data in a single process according to the present invention. A 2f:1 scale down on each byte of the U and V components is performed for a scaling factor equal to f. An f:1 scale down is performed on each byte of the Y component where f is an integer greater than 1, since no scaling is required where f is equal to 1. This scaled data is then sent to a DCT module. A line buffer control module 56 controls data flow to a YUV line buffer, or DCT buffer 58. According to a presently preferred embodiment of the present invention, the line buffer control module 56 comprises a multiplexer which dispatches data to the YUV line buffer, or DCT input buffer 58, for use by a DCT module. Moreover, the YUV line buffer 58 may be used to store intermediate accumulation results for the vertical scaler 54. According to a presently preferred embodiment of the present invention, data is dispatched in 10 bit blocks. However, one of ordinary skill in the art will readily recognize that blocks comprising greater or fewer bits may be dispatched. The multiplexer dispatches YUV data from the vertical scaler to a Y, U, or V block, respectively, within the YUV line buffer. The DCT module may then process selected bytes 60 within the YUV line buffer.
Referring now to FIG. 3, an implementation of the vertical scaler 54 according to a presently preferred embodiment of the present invention is presented. According to the present invention, a means for vertically scaling the live video signal data and means for performing a 4:2:2 to 4:2:0 color format conversion simultaneous with the vertical scaling step are provided. Incoming color space component data 62 is obtained from the horizontal scaler.
According to a preferred embodiment, a means for adding vertically aligned component values is provided. A 10-bit accumulator 64 performs adding required during averaging of this color space component data to produce a sum. An accumulator 64 is provided having a first input operatively coupled to the incoming color space component data 62, a second input operatively coupled to an initializer value 66 for rounding accumulated data, a third input operatively coupled to a component signal 68 adapted for selecting the first, second, or third color space component to be scaled, a fourth input operatively coupled to a set_initial signal 70 used to reset the accumulator, a fifth input 72 for receiving intermediate accumulation results, and an output 74 producing the sum of the color space component data to be averaged. Rounding is performed by adding an initializer value to the sum.
In addition, a shifting means is provided. A shifter 76 is provided having a first input 78 operatively coupled to the accumulator output, a second input 80 indicating a number of bits to shift the sum right, and an output 82. Thus, the shifter shifts the sum right by a number of bits equal to shift_bits to divide the sum by a power of 2 to produce an averaged sum.
A multiplexing means, multiplexer FMUX 84, is operatively coupled to the shifter 76 and accumulator 64 for selecting YUV data to be sent to a line buffer control module. The multiplexer 84 includes a first input 86 operatively coupled to the accumulator output, a second input 88 operatively coupled to the shifter output, a select line 90 operatively coupled to a final_shift signal indicating when a final shift is to be performed, and an output 92, the select line 90 selecting the second input 88 when the final shift is to be performed, and otherwise selecting the first input 86. A buffer control module 94 is provided for storing the multiplexer output, the buffer control module 94 adapted for providing the multiplexer output to a DCT module when the final_shift signal indicates the final shift is to be performed, and otherwise providing the multiplexer output to the fifth accumulator input. Thus, the line buffer control module is operatively coupled to the accumulator to store intermediate accumulation results. The buffer control module 94 is adapted for storing the multiplexer output in a YUV line buffer 96. An extract bits module 98 sends this data to the DCT module.
Control logic 100 generates necessary control signals for the accumulator 64, shifter 76, multiplexer 84 and line buffer control module 94. For example, the number of bits to shift the data, shift_bits, is sent to the shifter 76. The control logic is regulated by a scaling factor 102, a vertical_sync signal 104 indicating the start of a frame, and the rate 106 at which the vertical scaler receives bytes from the horizontal scaler. The scaling factor 102 is an integer, and will generally be 1 or 2.
During vertical scaling, the control logic 100 generates three signals for use by the accumulator 64. First, the initializer value is generated indicating a value to initialize the accumulator 64 for rounding. Second, a y_comp signal indicates that the present component being scaled is the Y component. For example, if the component is a Y component, the y_comp signal is 1. In all other instances, the y_comp signal is 0. As described above, this is performed by clock counting. Third, a set_initial signal is used to reset the accumulator 64 to the initializer value at the beginning of scaling each Y, U, or V component. According to a presently preferred embodiment of the present invention, the initializer value = shift_bits = scale_factor - y_comp.
Data flow during vertical scaling varies according to the scale factor. The control logic 100 generates a two bit path_select signal 108 indicating the direction of the data flow, since data may flow in three directions: from the FMUX 84 to the line buffer 96, from the line buffer 96 to the extract bits module 98, and from the line buffer 96 to the accumulator 64. During 1:1 scaling, data flows from the FMUX 84 to the line buffer 96. During 2:1 scaling, data flows from the FMUX 84 to the line buffer 96 for even lines. For odd lines, data flows sequentially from the line buffer 96 to the extract bits module 98, and from the line buffer 96 to the accumulator 64. During 4:1 scaling, four input lines are processed. For the first line, data flows from the FMUX 84 to the line buffer 96. For the second and third lines, data flows from the line buffer 96 to the accumulator 64 and from the FMUX 84 to the line buffer 96, sequentially. For the fourth line, data flows sequentially from the line buffer 96 to the accumulator 64 and from the line buffer 96 to the extract bits module 98. According to a preferred embodiment, the control logic 100 sends a final_shift signal to the FMUX 84 indicating when the accumulation process is complete. Therefore, when final_shift is 1, the FMUX 84 selects the output of the shifter 76, and otherwise selects the output of the accumulator 64. The control logic 100 further generates a lineout_parity 110 indicating a line number of the line after scaling is completed, as well as a signal 112 indicating a start of a new horizontal line. During DCT data access, data flows from the line buffer 96 to an extract bits module 98. According to a presently preferred embodiment, the lowest 8 bits from the 10-bit line buffer data are extracted.
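Read as software, the accumulator/shifter data path above amounts to accumulating vertically aligned samples across lines in the line buffer and then shifting right to average, with shift_bits = scale_factor - y_comp and a rounding offset (half the divisor, which equals the shift_bits value for the factors of 1 and 2 used here) added when accumulation starts. The following behavioural sketch is an interpretation under those assumptions, not a gate-level description; the FMUX path sequencing and buffer widths are simplified.

```python
def scale_lines(lines, scale_factor, is_luma):
    """Accumulate vertically aligned samples, then shift right to average."""
    y_comp = 1 if is_luma else 0
    shift_bits = scale_factor - y_comp          # 0, 1 or 2 in practice
    group = 1 << shift_bits                     # number of lines averaged together
    init = group >> 1                           # rounding offset, half the divisor
    out_lines = []
    line_buffer = None
    for n, line in enumerate(lines):
        if n % group == 0:
            line_buffer = [init + s for s in line]       # set_initial plus first line
        else:
            line_buffer = [a + s for a, s in zip(line_buffer, line)]
        if n % group == group - 1:                        # final_shift asserted
            out_lines.append([a >> shift_bits for a in line_buffer])
    return out_lines

lines = [[10, 20, 30], [14, 22, 34], [12, 18, 28], [16, 24, 32]]
print(scale_lines(lines, 2, True))    # Y, 2:1 -> two output lines
print(scale_lines(lines, 2, False))   # U/V, 4:1 -> one output line
```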
Referring now to FIG. 4, a compression engine according to the present invention is presented. The compression engine comprises a one dimensional DCT 114 integrated with quantizers, a Huffman encoding block 116, and a syntax protocol and sync control block 118 coupled to the USB interface 120. The compression engine encodes each frame on a scanline basis. Each line comprises 8-pixel segments. According to a presently preferred embodiment of the present invention, each frame starts with a picture_start_code and each scanline starts with a line_start_code. The line_start_code distinguishes between even lines comprising Y components only and odd lines comprising Y, U and V components.
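The scanline framing can be sketched as below; the actual picture_start_code and line_start_code values are not disclosed in this text, so the constants used here are purely hypothetical placeholders, and only the structure (one picture code per frame, one line code per scanline that distinguishes Y-only even lines from Y, U and V odd lines) follows the description.

```python
# Hypothetical marker values; the real start codes are not given in the text.
PICTURE_START_CODE = 0x01
LINE_START_EVEN = 0x02     # even line: Y components only
LINE_START_ODD = 0x03      # odd line: Y, U and V components

def frame_header():
    return [PICTURE_START_CODE]

def line_header(line_number):
    return [LINE_START_ODD if line_number % 2 else LINE_START_EVEN]

stream = frame_header()
for n in range(4):
    stream += line_header(n)       # followed by the Huffman-coded 8-pixel segments
print(stream)                      # [1, 2, 3, 2, 3]
```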
Referring now to FIG. 5, a flow diagram illustrating a method for performing a one-dimensional DCT according to a presently preferred embodiment of the present invention is presented. A means for performing a one-dimensional bufferless discrete cosine transform on the scaled live video signal data to create a plurality of scaled DCT coefficients is provided. According to a presently preferred embodiment of the present invention, the one dimensional DCT is performed on each line of 8 pixels to create a DCT coefficient y(u). First, a plurality of pixels is accepted at step 122, each of the plurality of pixels x_i designated by an integer i, where i is an integer selected from the group consisting of 0, 1, 2, 3, 4, 5, 6, and 7. At step 124, a DCT coefficient selector, u, is initialized. In addition, a pixel is selected and intermediate values are initialized at step 126. Next, at step 128, a cosine operation is performed on ((2i + 1) * uπ/16) to create a result, where u is an integer selected from the group consisting of 0, 1, 2, 3, 4, 5, 6, and 7, and where u designates a DCT coefficient. Next, at step 130, the pixel x_i and the result of the cosine operation are multiplied to create a value for summation. In addition, the value for summation is successively added to create a summed value at step 132. If it is determined at step 134 that steps 128-132 have not been performed for all pixels, a next pixel is selected at step 136, and steps 128-132 are repeated. Once calculations are performed for all pixels, a DCT coefficient y(u) is determined at step 138. First, at step 140, a constant is determined, the constant being 1/sqrt(2) when u is 0, the constant otherwise being 1. The summed value is multiplied by the constant to create a product at step 142. The product is then divided by 2 at step 144. The steps of performing and multiplying are repeated for each of the plurality of pixels until all DCT coefficients u are determined to be calculated at step 146. These steps are performed for each DCT coefficient u at step 148 until the process is completed at step 150.
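In equation form, the steps above compute y(u) = (C(u)/2) * sum over i = 0..7 of x_i * cos((2i + 1)uπ/16), with C(0) = 1/sqrt(2) and C(u) = 1 otherwise. A direct Python transcription of FIG. 5 follows; because it consumes one 8-pixel segment at a time, no 8-line intermediate buffer is required (only a full two-dimensional DCT would need one).

```python
import math

def dct_1d(pixels):
    """One-dimensional 8-point DCT of a single 8-pixel line segment."""
    assert len(pixels) == 8
    coeffs = []
    for u in range(8):
        total = 0.0
        for i, x in enumerate(pixels):
            total += x * math.cos((2 * i + 1) * u * math.pi / 16)
        c = 1 / math.sqrt(2) if u == 0 else 1.0   # scaling constant C(u)
        # y(u) = (C(u) / 2) * sum_i x_i * cos((2i + 1) * u * pi / 16)
        coeffs.append(c * total / 2)
    return coeffs

print([round(c, 2) for c in dct_1d([52, 55, 61, 66, 70, 61, 64, 73])])
```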
According to a presently preferred embodiment of the present invention, the scaled DCT is further divided by a quantizer. A quantizer q(u) corresponding to the DCT coefficient y(u) is selected, where u is an integer selected from the group consisting of 0, 1, 2, 3, 4, 5, 6, and 7, where the quantizer q(0) is 5.656, the quantizer q(1) is 11.0, the quantizer q(2) is 13.0, the quantizer q(3) is 15.0, the quantizer q(4) is 17.0, the quantizer q(5) is 19.0, the quantizer q(6) is 21.0, and the quantizer q(7) is 23.0. The DCT coefficient y(u) is then divided by the quantizer q(u). According to a preferred embodiment, the method for performing a one-dimensional DCT may be implemented in software or firmware, as well as in programmable gate array devices, ASICs and other hardware.
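A short sketch of this quantization step, using the q(u) values listed above; truncation toward zero is assumed here, since the rounding rule is not stated in this text.

```python
Q = [5.656, 11.0, 13.0, 15.0, 17.0, 19.0, 21.0, 23.0]   # q(0) .. q(7)

def quantize(coeffs):
    """Divide each DCT coefficient y(u) by its quantizer q(u)."""
    return [int(y / q) for y, q in zip(coeffs, Q)]  # truncation assumed

print(quantize([180.3, -22.0, 13.5, 0.4, -1.2, 0.0, 0.0, 0.0]))
```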
Referring now to FIG. 6, an interface between the scaled one dimensional DCT 26 and Huffman Encoder 28 shown in FIG. 1 is illustrated. The one dimensional DCT 26 outputs each DCT coefficient, which is stored in a buffer 152 for use by the Huffman Encoder 28. The buffer 152 is provided to store accumulated DCT coefficients, since according to a presently preferred embodiment, the Huffman Encoder 28 uses a greater number of clock cycles than the DCT module to process each 8 bytes of DCT coefficients. According to a presently preferred embodiment of the present invention, each DCT coefficient byte is written to the buffer in synchronization with a DCT clock when enabled by a WRITE_ENABLE signal. The Huffman Encoder reads each byte from the buffer when enabled by a READ_ENABLE signal. The READ_ENABLE signal is enabled during coefficient selection, and disabled during Huffman encoding.
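A behavioral model of this interface might look as follows (the software framing and unbounded depth are assumptions; the actual interface is clocked hardware driven by the WRITE_ENABLE and READ_ENABLE signals):

from collections import deque

class CoefficientBuffer:
    # Behavioral sketch of buffer 152 between the one dimensional DCT and the Huffman Encoder.
    def __init__(self):
        self.fifo = deque()   # actual hardware depth not specified here; unbounded in this sketch

    def write(self, coeff_byte, write_enable):
        # DCT side: a byte is written in synchronization with the DCT clock when enabled.
        if write_enable:
            self.fifo.append(coeff_byte)

    def read(self, read_enable):
        # Huffman side: READ_ENABLE is asserted during coefficient selection only.
        if read_enable and self.fifo:
            return self.fifo.popleft()
        return None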
Referring now to FIG. 7, a Huffman Encoder according to the present invention is illustrated. A coefficient to be Huffman encoded is selected at 154. Next, pattern code generation is performed at 156. Finally, table lookup is performed at 158. Therefore, a means for Huffman encoding each of the plurality of scaled DCT coefficients includes a means for selecting a coefficient to be Huffman encoded, means for pattern code generation, and table lookup means. Referring now to FIG. 8, coefficient selection means of the Huffman Encoder according to a presently preferred embodiment of the present invention is presented. A multiplexer DC_MUX 160 has a select line 162, a first input 164 coupled to an incoming DCT coefficient received from the one dimensional DCT output, a second input 166 coupled to a DC Adjustment block 168, and an output 170. When the incoming DCT coefficient is a DC component, the select line 162 is 1. In all other instances, the select line 162 is 0. When the select line 162 is 1, the multiplexer DC_MUX 160 selects the second input 166 and places it at the multiplexer output 170. When the select line 162 is 0, the first input 164 is selected and passed through to the multiplexer output 170.
Referring now to FIG. 9, a DC component adjustment block according to a presently preferred embodiment of the present invention is illustrated. When the incoming DCT coefficient is a DC component 172, the DC component 172 is adjusted. The DC adjustment block 168 includes a DC prediction block 174 and a subtraction block 176.
The DC prediction block 174 includes a horizontal sync input 178 indicating the start of a new line, a component_id input 180 indicating a Y, U or V component, an initial_pred input 182 used for initialization, a DC component input 184 providing the Y, U, or V component as indicated by the component_id input 180, and a DC_pred output 186. According to a presently preferred embodiment, a plurality of registers is provided for initialization, one register being allocated to each of the Y, U, and V components. When the horizontal sync input 178 indicates the start of a new line, the DC prediction block 174 initializes each of the plurality of registers with the initial_pred input 182 value. According to a presently preferred embodiment of the present invention, the initial_pred input value is 64.
The subtraction block 176 has a first input coupled to the DC component input 172, a second input coupled to the DC prediction block output 186, and an output 188. For each 8-byte Y, U, and V component, the second input, or corresponding register value, is subtracted from the first input, or DC component value 172. The register corresponding to that component is then updated to contain the DC component input value 172.
The DC adjustment process is illustrated in FIG. 10. The horizontal sync signal indicates the start of a new line. At step 190, each one of the plurality of registers is initialized. For each 8-byte component segment, steps 192-196 are performed. At step 192, the most recent DC component value is assigned to a temporary memory location. Next, at step 194, the register value corresponding to the Y, U, or V component is subtracted from the most recent DC component value and sent to the DC_MUX 160. At step 196, the value stored in the temporary memory location is stored in the register corresponding to the Y, U, or V component. For example, the component_id 0, 1, and 2 may be provided for components Y, U, and V, respectively. A state machine may provide the component_id in the sequence of {0, 1, 0, 2, 0, 1, 0, 2,... } where the Huffman encoding block will process each scanline on an 8-pixel basis in the order of Y, U, Y, V, Y, U, Y, V... However, one of ordinary skill in the art will readily recognize that components may be received in an alternative order.
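Under the assumption that component_id values 0, 1, and 2 denote Y, U, and V as in the example above, the adjustment of FIGS. 9-10 can be sketched as:

INITIAL_PRED = 64   # initial_pred value of the preferred embodiment

class DCAdjustment:
    # Behavioral sketch of the DC prediction block 174 and subtraction block 176.
    def __init__(self):
        self.registers = {}

    def horizontal_sync(self):
        # Step 190: re-initialize one prediction register per Y, U, V component.
        self.registers = {0: INITIAL_PRED, 1: INITIAL_PRED, 2: INITIAL_PRED}

    def adjust(self, component_id, dc_value):
        temp = dc_value                                      # step 192: latch the most recent DC value
        adjusted = dc_value - self.registers[component_id]   # step 194: subtract the prediction
        self.registers[component_id] = temp                  # step 196: store it as the new prediction
        return adjusted

# A state machine would then call adjust() with component_id in the sequence
# 0, 1, 0, 2, 0, 1, 0, 2, ... for each 8-byte segment of a scanline.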
Referring now to FIG. 11 , pattern code generation means according to a presently preferred embodiment of the present invention is illustrated. A plurality of DCT coefficients are generated by the DCT module. A pattern code is then generated for each of the plurality of DCT coefficients to identify which coefficients are coded, since only the nonzero coefficients are coded. The pattern code generated includes a plurality of bits, each one of the plurality of bits corresponding to one of the plurality of DCT coefficients. According to a presently preferred embodiment of the present invention, each one of the plurality of bits is 0 when the DCT coefficient is 0. In all other instances, the corresponding bit is 1. This pattern code may be generated by performing a bitwise OR operation for each one of the plurality of DCT coefficients.
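As a short illustration (the bit ordering within the pattern byte is an assumption, since the text does not fix it):

def pattern_code(coefficients):
    # Bit i of the pattern code is 1 when DCT coefficient i is nonzero, 0 otherwise.
    code = 0
    for i, c in enumerate(coefficients):
        nonzero_flag = 1 if c != 0 else 0   # equivalent to OR-ing all bits of the coefficient together
        code |= nonzero_flag << i
    return code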
According to a presently preferred embodiment of the present invention, an adjusted DCT coefficient 198 is provided by the multiplexer DC_MUX. A bitwise OR operation 200 is performed on the adjusted DCT coefficient 198 to produce an output comprising one of the plurality of bits in the pattern code. A 1:n 1-bit MUX 202 having an input 204, a plurality of select lines 206, and n outputs 208 is provided. According to a presently preferred embodiment, for each 8 bytes of DCT coefficients, a pattern code byte 210 will be generated. Therefore, the 1:n MUX 202 comprises a 1:8 MUX to accommodate 8 DCT coefficients and a corresponding 8-bit pattern code. The output of the bitwise OR operation 200 is operatively coupled to the 1:8 1-bit MUX 202. A coefficient id is operatively coupled to the 1:8 1-bit MUX and 1:8 8-bit MUX select lines 206 for selecting which one of 8 coefficients is to be processed. The output of the bitwise OR operation 200 is then placed in the corresponding bit in the pattern code 210.
The adjusted DCT coefficient is similarly stored in a corresponding byte in an n-byte Huffman Input Buffer 212. A delay 214 of one clock is provided for synchronization with the pattern code generation. A 1:n n-bit MUX 216 having an input 218, n outputs 220, and a plurality of select lines 206 coupled to the coefficient id is provided for storing the adjusted DCT coefficient in the Huffman Input Buffer 212. According to a presently preferred embodiment of the present invention, the MUX 216 comprises a 1:8 8-bit MUX. The adjusted DCT coefficient 198 is passed through the input of the 8-bit MUX 216 to a byte in the n-byte Huffman Input Buffer 212 corresponding to the coefficient id.
Referring now to FIG. 12, a Table Lookup module, or table lookup means, for Huffman-coding the pattern code and DCT coefficients according to a presently preferred embodiment of the present invention is shown. A coefficient table is prepared including a plurality of code pairs, each of the plurality of pairs having a length code and a Huffman code. A pattern table is prepared including a plurality of code pairs, each of the plurality of pairs having a length code and a Huffman code. A multiplexer HMUX 222 is provided having a plurality of inputs 224 operatively coupled to the pattern code and the Huffman Input Buffer, a plurality of select lines 226 coupled to the coefficient id, a selection bit 228 for selecting a pattern code 230 or a DCT coefficient 232 for Huffman coding, and an output. According to a presently preferred embodiment of the present invention, the selection bit 228 indicates the start of the 1-byte pattern code 230 and 8 bytes of DCT coefficients 232 which form a segment. The pattern code 230 is operatively coupled to a first one of the plurality of inputs and each of the DCT coefficients in the Huffman Input Buffer 232 is operatively coupled to a different one of the plurality of inputs. When the selection bit 228 is in a first state, the pattern code 230 is passed through to the multiplexer 222 output. When the selection bit 228 is in a second state, the one of the plurality of bytes in the Huffman Input Buffer 232 corresponding to the coefficient id 226 is passed through to the multiplexer 222 output. Nonzero DCT coefficients are then identified using the pattern code. Table select 234 selects the pattern table or the coefficient table. When their timing coincides, the selection bit 228 and the table select 234 can be made the same signal. Thus, when the selection bit, or table select 234, is in the second state, a table lookup 236 is performed for each non-zero DCT coefficient within the coefficient table to Huffman encode the non-zero DCT coefficient. Each zero DCT coefficient is encoded with zero bits, meaning that the coefficient is skipped in the bitstream. However, the pattern code is always coded and transmitted. When the selection bit, or table select 234, is in the first state, a table lookup 236 is performed for the pattern code within the pattern table to Huffman encode the pattern code. According to a presently preferred embodiment, Huffman encoding of the pattern code and DCT coefficients produces a 4-bit length code 238 and a 14-bit Huffman code 240. The length and Huffman code for a zero DCT coefficient are zero. The Huffman encoded pattern code and DCT coefficients are then sent to a Sync and Syntax control block 242.
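A minimal software sketch of this per-segment lookup follows; the table contents are placeholders, since the actual 4-bit length codes and 14-bit Huffman codes are not reproduced in this text, and the coefficient range shown is an assumption:

# Hypothetical (length_code, huffman_code) tables; real code values are not given here.
PATTERN_TABLE = {value: (8, value) for value in range(256)}
COEFFICIENT_TABLE = {value: (6, value & 0x3F) for value in range(-128, 128)}

def encode_segment(pattern, coefficients):
    # The pattern code is always coded; zero coefficients are skipped (zero bits);
    # each nonzero coefficient, as identified by the pattern code, is looked up.
    codes = [PATTERN_TABLE[pattern]]
    for i, c in enumerate(coefficients):
        if pattern & (1 << i):
            codes.append(COEFFICIENT_TABLE[c])
    return codes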
The sync and syntax control block provides control logic for sending each Huffman code to a USB FIFO buffer. The sync and control block provides a line dropping mechanism, a state machine, and a data multiplexer. The line dropping mechanism drops a line if the USB FIFO almost full condition is true and the current line is an even line. Thus, a Y line is dropped to prevent the USB FIFO buffer from becoming full, which would otherwise cause incoming data to be discarded. For example, the USB FIFO almost full condition may be true if the USB FIFO has less than 256 bytes of free space.
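A brief sketch of the dropping decision (the 256-byte threshold is the example value given above; the even-line convention follows the scanline syntax of FIG. 4):

ALMOST_FULL_THRESHOLD = 256   # bytes of free space below which the FIFO is treated as almost full

def should_drop_line(fifo_free_bytes, line_number):
    # Drop only when the FIFO is almost full and the current line is an even (Y-only) line.
    almost_full = fifo_free_bytes < ALMOST_FULL_THRESHOLD
    return almost_full and (line_number % 2 == 0)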
The state machine and data multiplexer provide a compressed bitstream to the USB interface from the Huffman Encoder. If the compressed bitstream does not end on a byte boundary, the bitstream is stuffed with 1's. The resulting bitstream is then output to the USB bus.
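For illustration, the byte-alignment step can be sketched as follows (a bit-level list representation is assumed here purely for clarity):

def stuff_to_byte_boundary(bits):
    # 'bits' is a list of 0/1 values; pad with 1's so the stream ends on a byte boundary.
    remainder = len(bits) % 8
    if remainder:
        bits = bits + [1] * (8 - remainder)
    return bits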
While embodiments and applications of this invention have been shown and described, it would be apparent to those skilled in the art that many more modifications than mentioned above are possible without departing from the inventive concepts herein. The invention, therefore, is not to be restricted except in the spirit of the appended claims.

Claims

What is Claimed is:
1. A method for simultaneously performing vertical scaling and 4:2:2 to 4:2:0 color format conversion on incoming color space component data of a video frame, the incoming color space component data having first color space component data, second color space component data, and third color space component data, the method comprising the following steps: separating each byte of the first, second, and third color space component data; determining a scaling factor, the scaling factor indicating a number of bytes to average; first performing an f:1 scale down for each byte of the first color space component data when the scaling factor is equal to f; and second performing a 2f:1 scale down for each byte of the second and third color space component data when the scaling factor is equal to f.
2. The method according to claim 1, wherein the first performing step is performed when f is an integer greater than 1.
3. A method for simultaneously performing vertical scaling and 4:2:2 to 4:2:0 color format conversion on incoming Y, U, and V data of a video frame, the method comprising the following steps: separating each byte of the Y, U, and V data; determining a scaling factor, the scaling factor indicating a number of bytes to average; first performing an f:1 scale down for each Y byte when the scaling factor is equal to f; and second performing a 2f:1 scale down for each U and V byte when the scaling factor is equal to f.
4. The method according to claim 3, wherein the first performing step is performed when f is an integer greater than 1.
5. A method for simultaneously performing vertical scaling and 4:2:2 to 4:2:0 color format conversion on incoming 4:2:2 color space component data of a video frame, the incoming color space component data having first color space component data, second color space component data, and third color space component data, the method comprising the following steps: separating the incoming 4:2:2 color space component data into the first color space component data, the second color space component data, and the third color space component data; first performing vertical scaling on the first color space component data, the second color space component data, and the third color space component data; and second performing a 4:2:2 to a 4:2:0 color format conversion on the first color space component data, the second color space component data, and the third color space component data, the second performing step being executed simultaneously with the first performing step.
6. The method according to claim 5, the video frame having a plurality of horizontal scan lines, the method being performed in response to a signal indicating a start of a new horizontal scan line.
7. A method for simultaneously performing vertical scaling and 4:2:2 to 4:2:0 color format conversion on incoming 4:2:2 color space component data of a video frame, the incoming color space component data having first color space component data, second color space component data, and third color space component data, the method comprising the following steps: separating the incoming 4:2:2 color space component data into the first color space component data, the second color space component data, and the third color space component data; multiplexing the first, second, and third color space component data to allow one of the first, second, and third color space component data to be scaled; horizontally scaling the multiplexed color space component data; first performing vertical scaling on the multiplexed color space component data; and second performing a 4:2:2 to a 4:2:0 color format conversion on the multiplexed color space component data, the second performing step being executed simultaneously with the first performing step.
8. A method for performing a DCT on a line of pixels to create a DCT coefficient y(u), the method comprising the following steps: accepting a sequence of pixels; performing a cosine operation on adjacent sets of the sequence of pixels without storing the sequence in a buffer more than 16 bits in size; and generating a sequence of one dimensional DCT coefficients in response to the step of performing.
9. A method for performing a DCT on a line of pixels to create a DCT coefficient y(u), the method comprising the following steps: accepting a sequence of pixels; and performing a one dimensional DCT on the sequence of pixels without a buffer through use of a register.
10. A method for performing a one dimensional DCT on a line of pixels to create a DCT coefficient y(u), the method comprising the following steps: accepting a plurality of pixels, each of the plurality of pixels x(i) designated by an integer i, where i is an integer selected from the group consisting of 0, 1, 2, 3, 4, 5, 6, and 7; performing a cosine operation on ((2i + 1) * uπ/16) to create a result, where u is an integer selected from the group consisting of 0, 1, 2, 3, 4, 5, 6, and 7, and where u designates a DCT coefficient; multiplying the pixel x(i) and the result of the cosine operation to create a value for summation; repeating the steps of performing and multiplying for each of the plurality of pixels, the step of repeating further including the step of successively adding the value for summation to create a summed value; determining a constant, the constant being 1/sqrt(2) when u is 0, the constant otherwise being 1; multiplying the summed value by the constant to create a product; and dividing the product by 2.
11. The method according to claim 10, the method further comprising the following steps: selecting a quantizer q(u) corresponding to the DCT coefficient y(u), where u is an integer selected from the group consisting of 0, 1, 2, 3, 4, 5, 6, and 7, where the quantizer q(0) is about 5.656, the quantizer q(1) is about 11.0, the quantizer q(2) is about 13.0, the quantizer q(3) is about 15.0, the quantizer q(4) is about 17.0, the quantizer q(5) is about 19.0, the quantizer q(6) is about 21.0, and the quantizer q(7) is about 23.0; and dividing the DCT coefficient y(u) by the quantizer q(u).
12. A method for compressing a DCT coefficient, the method comprising the following steps: accepting a plurality of DCT coefficients; generating a pattern code for the plurality of DCT coefficients, the pattern code having a plurality of bits, each one of the plurality of bits corresponding to one of the plurality of DCT coefficients, the each one of the plurality of bits being 0 when the DCT coefficient is 0, and otherwise being 1 ; identifying nonzero DCT coefficients using the pattern code; encoding each zero DCT coefficient with zero bits; preparing a coefficient table, the coefficient table having a plurality of code pairs, each of the plurality of pairs having a length code and a Huffman code; preparing a pattern table, the pattern table having a plurality of code pairs, each of the plurality of pairs having a length code and a Huffman code; performing a table lookup for each non-zero DCT coefficient within the coefficient table; and performing a table lookup for the pattern code within the pattern table.
13. The method according to claim 12, the step of generating further comprising the following steps: performing a bitwise OR operation for each one of the plurality of DCT coefficients.
14. A method for Huffman encoding a DCT coefficient, the method comprising the following steps: accepting a plurality of DCT coefficients; selecting one of the plurality of DCT coefficients; generating a pattern code for the selected DCT coefficient; and performing a table lookup for the pattern code and the selected DCT coefficient.
15. A method for Huffman encoding a plurality of DCT coefficients, the method comprising the following steps: accepting a plurality of DCT coefficients; generating a pattern code having a plurality of bits corresponding to the plurality of DCT coefficients; and performing a table lookup for the pattern code and the plurality of DCT coefficients.
16. A method for compressing a DCT coefficient, comprising: performing a one-dimensional bufferless discrete cosine transform on live video signal data to create a plurality of DCT coefficients; and Huffman encoding each of the plurality of DCT coefficients.
17. A method for compressing a DCT coefficient, comprising: performing a one-dimensional bufferless discrete cosine transform on live video signal data to create a plurality of DCT coefficients; selecting one of the plurality of DCT coefficients; generating a pattern code for the selected DCT coefficient; and performing a table lookup for the pattern code and the selected DCT coefficient.
18. The method according to claim 17, the selecting step further including the following step: adjusting the DCT coefficient if the DCT coefficient is a DC component.
19. A method for compressing a plurality of DCT coefficients, comprising: first performing a one-dimensional bufferless discrete cosine transform on live video signal data to create a plurality of DCT coefficients; generating a pattern code for the plurality of DCT coefficients; and second performing a table lookup for the pattern code and each one of the plurality of DCT coefficients.
20. The method according to claim 19, wherein the generating step includes the following sub-step: generating a pattern code having a plurality of bits, each one of the plurality of bits corresponding to one of the plurality of DCT coefficients, the each one of the plurality of bits being 0 when the corresponding DCT coefficient is 0, and otherwise being 1.
21. The method according to claim 19, the generating step including the following sub-step: performing a bitwise OR operation for each one of the plurality of DCT coefficients.
22. The method according to claim 19, the second performing step including the following sub-steps: preparing a coefficient table, the coefficient table having a plurality of code pairs, each of the plurality of code pairs having a length code and a Huffman code; preparing a pattern table, the pattern table having a plurality of code pairs, each of the plurality of code pairs having a length code and a Huffman code; performing a table lookup for each non-zero DCT coefficient within the coefficient table; and performing a table lookup for each pattern code within the pattern table.
23. A method for capturing live video signal data using bufferless data compression, the method comprising the following steps: vertically scaling the live video signal data; performing a 4:2:2 to 4:2:0 color format conversion simultaneous with the vertical scaling step; performing a one-dimensional bufferless discrete cosine transform on the scaled live video signal data to create a plurality of scaled DCT coefficients; and Huffman encoding each of the plurality of scaled DCT coefficients.
24. The method according to claim 23, the method further including the following steps: sending each of the Huffman encoded DCT coefficients via a USB interface to a USB bus.
25. An apparatus for simultaneously performing vertical scaling and 4:2:2 to 4:2:0 color format conversion on incoming 4:2:2 color space component data of a video frame, the incoming color space component data having first color space component data, second color space component data, and third color space component data, the apparatus comprising: an accumulator having a first input operatively coupled to the incoming color space component data, a second input operatively coupled to an initializer value for rounding accumulated data, a third input operatively coupled to a component signal adapted for selecting the first, second, or third color space component to be scaled, a fourth input operatively coupled to a set_initial signal used to reset the accumulator, a fifth input for receiving intermediate accumulation results, and an output producing a sum of the incoming color space component data; a shifter having a first input operatively coupled to the accumulator output, a second input indicating a number of bits to shift the sum right, and an output; a multiplexer having a first input operatively coupled to the accumulator output, a second input operatively coupled to the shifter output, a select line operatively coupled to a final_shift signal indicating when a final shift is to be performed, and an output, the select line selecting the second input when the final shift is to be performed, and otherwise selecting the first input; and a buffer control module for storing the multiplexer output, the buffer control module adapted for providing the multiplexer output to a DCT module when the final_shift signal indicates the final shift is to be performed, and otherwise providing the multiplexer output to the fifth accumulator input.
26. An apparatus for simultaneously performing vertical scaling and 4:2:2 to 4:2:0 color format conversion on incoming 4:2:2 color space component data of a video frame, the incoming color space component data having first color space component data, second color space component data, and third color space component data, the apparatus comprising: means for adding vertically aligned component data values to produce a sum; shifting means for shifting the sum right to average the sum over a number of lines for a given scaling factor; and multiplexing means for providing the averaged sum to a DCT module.
27. An apparatus for capturing live video signal data using bufferless data compression, the apparatus comprising: means for vertically scaling the live video signal data simultaneous with a 4:2:2 to 4:2:0 color format conversion; means for performing a one-dimensional bufferless discrete cosine transform on the scaled live video signal data to create a plurality of scaled DCT coefficients; and means for Huffman encoding each of the plurality of scaled DCT coefficients.
28. An apparatus for Huffman encoding a DCT coefficient, comprising: means for selecting a DCT coefficient; means for generating a pattern for the DCT coefficient; and means for performing a table lookup for the pattern and the DCT coefficient.
29. An apparatus for compressing a DCT coefficient, comprising: means for performing a one-dimensional bufferless discrete cosine transform on live video signal data to create a plurality of scaled DCT coefficients; and means for Huffman encoding each of the plurality of scaled DCT coefficients.
30. An apparatus for compressing a DCT coefficient, comprising: means for performing a one-dimensional bufferless discrete cosine transform on live video signal data to create a plurality of scaled DCT coefficients; means for selecting one of the plurality of scaled DCT coefficients; means for generating a pattern code for the selected one of the plurality of scaled DCT coefficients; and means for performing a table lookup for the pattern code and the scaled DCT coefficients.
31. An apparatus for compressing a plurality of DCT coefficients, comprising: means for performing a one-dimensional bufferless discrete cosine transform on live video signal data to create a plurality of scaled DCT coefficients; means for generating a pattern code for the plurality of scaled DCT coefficients; and means for performing a table lookup for the pattern code and the scaled DCT coefficients.
PCT/US1998/020907 1997-10-06 1998-10-05 Method and circuit for the capture of video data in a pc ________________________________________________________ WO1999018714A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/943,772 1997-10-06
US08/943,772 US6184936B1 (en) 1997-10-06 1997-10-06 Multi-function USB capture chip using bufferless data compression

Publications (3)

Publication Number Publication Date
WO1999018714A2 true WO1999018714A2 (en) 1999-04-15
WO1999018714A3 WO1999018714A3 (en) 1999-09-02
WO1999018714A9 WO1999018714A9 (en) 1999-10-07

Family

ID=25480231

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/020907 WO1999018714A2 (en) 1997-10-06 1998-10-05 Method and circuit for the capture of video data in a pc ________________________________________________________

Country Status (2)

Country Link
US (3) US6184936B1 (en)
WO (1) WO1999018714A2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6614486B2 (en) 1997-10-06 2003-09-02 Sigma Designs, Inc. Multi-function USB video capture chip using bufferless data compression
US6654956B1 (en) 2000-04-10 2003-11-25 Sigma Designs, Inc. Method, apparatus and computer program product for synchronizing presentation of digital video data with serving of digital video data
US6687770B1 (en) 1999-03-08 2004-02-03 Sigma Designs, Inc. Controlling consumption of time-stamped information by a buffered system
EP1653371A2 (en) * 2004-11-02 2006-05-03 GILARDONI S.p.A. Electronic system for the transfer of digital data strictly in real time
EP1692655A2 (en) * 2003-12-11 2006-08-23 Infocus Corporation System and method for processing image data
JP2009153128A (en) * 2001-06-05 2009-07-09 Qualcomm Inc Selective chrominance decimation for digital images

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6690834B1 (en) 1999-01-22 2004-02-10 Sigma Designs, Inc. Compression of pixel data
US6704310B1 (en) * 1999-06-30 2004-03-09 Logitech Europe, S.A. Header encoding method and apparatus for packet-based bus
US6429900B1 (en) * 1999-07-30 2002-08-06 Grass Valley (U.S.) Inc. Transmission of wideband chroma signals
TW452708B (en) * 1999-11-24 2001-09-01 Winbond Electronics Corp Architecture for fast compression of 2-dimensional image data
US6829016B2 (en) * 1999-12-20 2004-12-07 Texas Instruments Incorporated Digital still camera system and method
KR100525785B1 (en) 2001-06-15 2005-11-03 엘지전자 주식회사 Filtering method for pixel of image
US6741263B1 (en) * 2001-09-21 2004-05-25 Lsi Logic Corporation Video sampling structure conversion in BMME
US6934337B2 (en) 2001-09-27 2005-08-23 Intel Corporation Video capture device and method of sending high quality video over a low data rate link
US20040183948A1 (en) * 2003-03-19 2004-09-23 Lai Jimmy Kwok Lap Real time smart image scaling for video input
US7852405B1 (en) * 2003-06-27 2010-12-14 Zoran Corporation Method and apparatus for high definition capture
US7292233B2 (en) * 2004-02-11 2007-11-06 Seiko Epson Corporation Apparatus and method to connect an external camera to an LCD without requiring a display buffer
JP4794911B2 (en) 2005-05-31 2011-10-19 キヤノン株式会社 Image processing device
TWM299424U (en) * 2006-02-22 2006-10-11 Genesys Logic Inc Web camera
US7724305B2 (en) * 2006-03-21 2010-05-25 Mediatek Inc. Video data conversion method and system for multiple receivers
US20080012953A1 (en) * 2006-07-13 2008-01-17 Vimicro Corporation Image Sensors
US8237830B2 (en) 2007-04-11 2012-08-07 Red.Com, Inc. Video camera
ES2486295T3 (en) 2007-04-11 2014-08-18 Red.Com, Inc. Video camera
KR101409526B1 (en) 2007-08-28 2014-06-20 한국전자통신연구원 Apparatus and Method for keeping Bit rate of Image Data
WO2009028830A1 (en) * 2007-08-28 2009-03-05 Electronics And Telecommunications Research Institute Apparatus and method for keeping bit rate of image data
TWI521939B (en) * 2008-02-27 2016-02-11 恩康普丁公司 System and method for low bandwidth display information transport
US8872856B1 (en) * 2008-08-14 2014-10-28 Zenverge, Inc. Macroblock based scaling of images using reduced memory bandwidth
US8723891B2 (en) * 2009-02-27 2014-05-13 Ncomputing Inc. System and method for efficiently processing digital video
TW201225654A (en) 2010-12-13 2012-06-16 Sonix Technology Co Ltd Image accessing apparatus and image data transmission method thereof
WO2014127153A1 (en) 2013-02-14 2014-08-21 Red. Com, Inc. Video camera
US9588925B2 (en) * 2014-09-17 2017-03-07 Valens Semiconductor Ltd. USB extension for lossy channel
WO2019010233A1 (en) 2017-07-05 2019-01-10 Red. Com, Llc Video image data processing in electronic devices

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992010911A1 (en) * 1990-12-10 1992-06-25 Eastman Kodak Company Image compression with color interpolation for a single sensor image system

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS60204121A (en) 1984-03-29 1985-10-15 Fujitsu Ltd Phase synchronization circuit
US4675612A (en) 1985-06-21 1987-06-23 Advanced Micro Devices, Inc. Apparatus for synchronization of a first signal with a second signal
US4876660A (en) 1987-03-20 1989-10-24 Bipolar Integrated Technology, Inc. Fixed-point multiplier-accumulator architecture
US4823260A (en) 1987-11-12 1989-04-18 Intel Corporation Mixed-precision floating point operations from a single instruction opcode
US5142380A (en) 1989-10-23 1992-08-25 Ricoh Company, Ltd. Image data processing apparatus
US5253078A (en) 1990-03-14 1993-10-12 C-Cube Microsystems, Inc. System for compression and decompression of video data using discrete cosine transform and coding techniques
US5196946A (en) 1990-03-14 1993-03-23 C-Cube Microsystems System for compression and decompression of video data using discrete cosine transform and coding techniques
US5270832A (en) 1990-03-14 1993-12-14 C-Cube Microsystems System for compression and decompression of video data using discrete cosine transform and coding techniques
US5191548A (en) 1990-03-14 1993-03-02 C-Cube Microsystems System for compression and decompression of video data using discrete cosine transform and coding techniques
US5341318A (en) 1990-03-14 1994-08-23 C-Cube Microsystems, Inc. System for compression and decompression of video data using discrete cosine transform and coding techniques
US5218431A (en) 1990-04-26 1993-06-08 The United States Of America As Represented By The Secretary Of The Air Force Raster image lossless compression and decompression with dynamic color lookup and two dimensional area encoding
CA2062200A1 (en) 1991-03-15 1992-09-16 Stephen C. Purcell Decompression processor for video applications
US5515107A (en) 1994-03-30 1996-05-07 Sigma Designs, Incorporated Method of encoding a stream of motion picture data
US5528309A (en) 1994-06-28 1996-06-18 Sigma Designs, Incorporated Analog video chromakey mixer
JP2933487B2 (en) * 1994-07-15 1999-08-16 松下電器産業株式会社 How to convert chroma format
US5574572A (en) * 1994-09-07 1996-11-12 Harris Corporation Video scaling method and device
US5638130A (en) * 1995-05-25 1997-06-10 International Business Machines Corporation Display system with switchable aspect ratio
US5982459A (en) * 1995-05-31 1999-11-09 8×8, Inc. Integrated multimedia communications processor and codec
US5844617A (en) * 1995-10-05 1998-12-01 Yves C. Faroudja Method and apparatus for enhancing the vertical resolution of a television signal having degraded vertical chrominance transitions
US5832120A (en) * 1995-12-22 1998-11-03 Cirrus Logic, Inc. Universal MPEG decoder with scalable picture size
US5719511A (en) 1996-01-31 1998-02-17 Sigma Designs, Inc. Circuit for generating an output signal synchronized to an input signal
US6052744A (en) 1997-09-19 2000-04-18 Compaq Computer Corporation System and method for transferring concurrent multi-media streams over a loosely coupled I/O bus
US6184936B1 (en) 1997-10-06 2001-02-06 Sigma Designs, Inc. Multi-function USB capture chip using bufferless data compression

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992010911A1 (en) * 1990-12-10 1992-06-25 Eastman Kodak Company Image compression with color interpolation for a single sensor image system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TSAI Y T: "COLOR IMAGE COMPRESSION FOR SINGLE-CHIP CAMERAS" IEEE TRANSACTIONS ON ELECTRON DEVICES, vol. 38, no. 5, 1 May 1991, pages 1226-1232, XP000200683 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6614486B2 (en) 1997-10-06 2003-09-02 Sigma Designs, Inc. Multi-function USB video capture chip using bufferless data compression
US6687770B1 (en) 1999-03-08 2004-02-03 Sigma Designs, Inc. Controlling consumption of time-stamped information by a buffered system
US6654956B1 (en) 2000-04-10 2003-11-25 Sigma Designs, Inc. Method, apparatus and computer program product for synchronizing presentation of digital video data with serving of digital video data
JP2009153128A (en) * 2001-06-05 2009-07-09 Qualcomm Inc Selective chrominance decimation for digital images
US7965775B2 (en) 2001-06-05 2011-06-21 Qualcomm, Incorporated Selective chrominance decimation for digital images
EP1692655A2 (en) * 2003-12-11 2006-08-23 Infocus Corporation System and method for processing image data
EP1692655A4 (en) * 2003-12-11 2013-04-17 Seiko Epson Corp System and method for processing image data
EP1653371A2 (en) * 2004-11-02 2006-05-03 GILARDONI S.p.A. Electronic system for the transfer of digital data strictly in real time
EP1653371A3 (en) * 2004-11-02 2007-10-31 GILARDONI S.p.A. Electronic system for the transfer of digital data strictly in real time

Also Published As

Publication number Publication date
US6184936B1 (en) 2001-02-06
WO1999018714A9 (en) 1999-10-07
US6275263B1 (en) 2001-08-14
US20010043282A1 (en) 2001-11-22
WO1999018714A3 (en) 1999-09-02
US6614486B2 (en) 2003-09-02

Similar Documents

Publication Publication Date Title
US6184936B1 (en) Multi-function USB capture chip using bufferless data compression
US8098941B2 (en) Method and apparatus for parallelization of image compression encoders
AU676012B2 (en) Dual memory buffer scheme for providing multiple data streams from stored data
US5325126A (en) Method and apparatus for real time compression and decompression of a digital motion video signal
JP4138056B2 (en) Multi-standard decompression and / or compression device
CN1153451C (en) A multiple format video signal processor
KR101223983B1 (en) Bitrate reduction techniques for image transcoding
US5835145A (en) Conversion system using programmable tables for compressing transform coefficients
EP0493128A2 (en) Image processing apparatus
EP0651562A1 (en) An electronic camera utilizing image compression feedback for improved color processing
CN101990095B (en) Method and apparatus for generating compressed file, camera module associated therewith, and terminal including the same
US6256350B1 (en) Method and apparatus for low cost line-based video compression of digital video stream data
JPH05207460A (en) Multiplex transmitter and its system for picture signal
JPH10229562A (en) Reduction of memory required for decompression by storing compression information using technology based on dct
US5568278A (en) Image data coding and decoding method and apparatus with a plurality of DCT's, quantizers, and VLC's
EP0671102A4 (en) Picture-in-picture tv with insertion of a mean only frame into a full size frame.
US7209266B2 (en) Image processing apparatus
EP0634074A1 (en) Method and apparatus for compressing and decompressing a sequence of digital video images using sync frames
US6353685B1 (en) Method and apparatus for image compression
WO2007119171A2 (en) Video receiver providing video attributes with video data
KR100249235B1 (en) Hdtv video decoder
EP0843483B1 (en) A method for decoding encoded video data
JPH0678297A (en) Method for encoding of digital video signal
KR960013232B1 (en) Fixed rate video signal compressing encoding/decoding method
KR950008640B1 (en) Image compression coding method and decoding method for bit fixation

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): CA CN JP

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): CA CN JP

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

AK Designated states

Kind code of ref document: C2

Designated state(s): CA CN JP

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

COP Corrected version of pamphlet

Free format text: PAGES 1-23, DESCRIPTION, REPLACED BY NEW PAGES 1-14; PAGES 24-36, CLAIMS, REPLACED BY NEW PAGES 15-22; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA