|Publication number||US7684627 B2|
|Application number||US 10/954,647|
|Publication date||Mar 23, 2010|
|Filing date||Sep 29, 2004|
|Priority date||Sep 29, 2004|
|Also published as||US20060072839|
|Original Assignee||Intel Corporation|
Context-adaptive arithmetic coding is a technique for entropy coding of bi-tonal images, as evinced by a variety of current industry standards (e.g., Joint Bi-level Image Experts Group (JBIG, JBIG2), etc.). Essentially, pixels of an image are compressed into a series of bits that have a unique pattern. A compression algorithm maps a sequence of symbols embedded in the image being compressed into a real number interval. The compression algorithm successively divides the interval into sub-intervals based on input symbols parsed from the image and their associated probabilities (likelihoods that a parsed symbol matches a known symbol). The performance of the compression algorithm is directly proportional to how well it estimates the symbol probabilities, because as symbols become known the compression algorithm must adjust itself wherever the estimations were incorrect.
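As a rough sketch of the interval subdivision described above, the following pair of routines narrows a real interval [low, high) for each binary symbol and recovers the symbols from any number inside the final interval. The fixed probability `p1` is a simplification; a real context-adaptive coder, as described in this disclosure, maintains a separate adaptive estimate per context.

```python
def encode_bits(bits, p1=0.5):
    """Successively narrow [low, high) according to each symbol's
    estimated probability; the final interval identifies the whole
    bit sequence.  p1 (probability of a 1) is fixed here only for
    illustration."""
    low, high = 0.0, 1.0
    for b in bits:
        split = low + (high - low) * (1.0 - p1)  # boundary of the 0/1 sub-intervals
        if b == 0:
            high = split   # symbol 0 takes the lower sub-interval
        else:
            low = split    # symbol 1 takes the upper sub-interval
    return (low + high) / 2.0  # any value in [low, high) would do

def decode_bits(x, n, p1=0.5):
    """Inverse walk: recover n bits from the code value x by
    repeating the same subdivision."""
    low, high = 0.0, 1.0
    out = []
    for _ in range(n):
        split = low + (high - low) * (1.0 - p1)
        if x < split:
            out.append(0)
            high = split
        else:
            out.append(1)
            low = split
    return out
```

Because the decoder repeats exactly the encoder's subdivisions, any mismatch between the two sides' probability estimates degrades performance, which is the dependency the remainder of this disclosure addresses.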
A model is used by a compression algorithm to estimate symbol probabilities for input symbols based on contexts associated with the input symbols. The model maintains a state file for each context of each input symbol, and the model is updated as symbols are received and processed.
Decompression proceeds in a similar manner as compression. That is, compressed pixels represented as a series of bit values are acquired and their contexts are resolved. Based on contexts, a most likely symbol is determined and that most likely symbol is outputted as a portion of the decompressed image. Again, if the decompression algorithm does not estimate the symbols or pattern correctly, then the performance of decompression degrades as the decompression algorithm adjusts itself.
Because arithmetic decompression is heavily dependent on the contexts of previously processed compressed values and on the contexts of values that may not yet have been processed, decompression performance becomes restricted. Thus, portions of the decoding remain idle until enough information becomes available to predict a context. Stated another way, sequential dependencies inherent in a binary compressed image restrict the processing throughput of existing implementations of decompression algorithms to a tightly coupled process that progresses linearly.
As used herein an “image” may be considered a logical grouping of one or more pixels. Thus, an image may be segmented in any configurable manner into a series of blocks, swaths, or portions, such that each block, swath, or portion is itself an image. In this manner, any logical segmentation of pixels can combine with other segmentations to form a single or same image. Conversely, a single segmentation may be viewed as a single image.
A pixel may be bi-tonal, which means that it is represented as a single bit of information (0 or 1). Alternatively, a single pixel may be represented as a series of bits, such as words of varying and configurable lengths (e.g., 8, 16, 32, 64, 128, etc.) or nibbles (4 bits). For purposes of various embodiments presented herein, if a pixel is represented as a word, then each bit of that word may be treated and processed in a bi-tonal manner. For example, if a pixel includes 8 bits of information to represent a plurality of potential colors, then each bit of the pixel can be individually processed in the manners discussed herein and below as a single binary decision of 0 or 1. Additionally, if pixels are processed in lengths greater than 1 bit, then processing may be done in a planar fashion. For example, bit 1 of each pixel instance may be treated as image 1 and bit N of each pixel instance may be treated as image N.
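The planar treatment above can be sketched as splitting multi-bit pixels into bit planes, each of which is then a bi-tonal image in its own right. The function names are illustrative, not taken from the disclosure:

```python
def to_bit_planes(pixels, bits_per_pixel):
    """Split multi-bit pixels into bi-tonal planes: plane k holds
    bit k of every pixel, so each plane can be processed as an
    independent binary image."""
    return [[(p >> k) & 1 for p in pixels] for k in range(bits_per_pixel)]

def from_bit_planes(planes):
    """Reassemble multi-bit pixels from their bit planes."""
    n = len(planes[0])
    return [sum(planes[k][i] << k for k in range(len(planes)))
            for i in range(n)]
```

Each plane returned by `to_bit_planes` could be fed to the binary decompression pipeline described below as if it were a separate image.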
The term “piping” or “pipelining” is commonly used and understood in the programming arts and refers to the ability to transmit data as it is received or produced, without the need to buffer the data until some particular processing completes or until some configurable amount of data is accumulated. For example, suppose a particular set of data (D) is processed by a first algorithm A1, which produces another set of data (D′) from D, and D′ is in turn processed by another algorithm A2. With piping, D is sent to A1, and as soon as A1 produces any portion of D′, that portion is immediately transmitted to A2 for processing. A2 does not have to wait to begin processing until the entire set of D′ is produced by A1; rather, A2 initiates processing as soon as A1 produces any portion of D′. Moreover, the data may be instructions processed by a machine. For example, an instruction (data) within a machine may be in a fetch stage, a decode stage, or an execute stage, with the final result in a write-back stage. While one instruction moves into the decode stage, the next instruction can move into the fetch stage. Thus, as a first instruction (data) moves through the rest of the instruction stages, subsequent instructions (data) are also pulled through.
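The A1/A2 example above can be sketched with chained generators, where the transformations doubling and incrementing are arbitrary stand-ins for A1 and A2:

```python
def stage_a1(data):
    """A1: yields each transformed item of D' as soon as it is
    produced, rather than buffering the whole result set."""
    for d in data:
        yield d * 2

def stage_a2(items):
    """A2: begins work on each item of D' as it arrives from A1."""
    for d_prime in items:
        yield d_prime + 1

def run_pipeline(data):
    # Generators chain lazily: A2 pulls items through A1 one at a
    # time, mirroring instructions flowing through fetch/decode/
    # execute/write-back stages.
    return list(stage_a2(stage_a1(data)))
```

Neither stage ever holds the complete intermediate set D′, which is the property the decompression pipeline below exploits.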
An arithmetic decoder is any existing or custom-developed algorithm that performs image decompression against a binary compressed image. Arithmetic decoders are readily known and available in the industry. Any of these decoders or custom-developed decoders, which are modified such that their output and input are interfaced with the processing described herein, are intended to be included within the various embodiments presented with this invention. The decoder progresses through an image by recognizing patterns and speculating on what future patterns may be prior to actually processing some compressed pixels from the compressed image. Once pixels' contexts that were speculated on become known, the decoder updates itself as necessary to account for what the actual decompressed values for those pixels should be.
In various embodiments, the arithmetic decoder is context-adaptive. This means that the decoder adapts for context as contexts for the pixels become known during decompression. Decompression is driven by contexts associated with the pixels being decompressed.
Initially, at 110, a compressed pixel stream represented as bit values is acquired. Acquisition may be achieved via any buffer or buffering service that sequentially parses a compressed image (e.g., in raster order). At 120, the compressed pixels are decoded after contexts and probabilities are resolved, as discussed further herein and below. Additionally, once a particular pixel is decoded, that information is used to form contexts for other pixels that are not yet decoded. For the first pixel, an initial context may be assumed for purposes of processing. For subsequent processing (after the initial pixel), the method 100 progresses in the following manner.
A decoded pixel, which is initially decoded by a decoding algorithm, is piped to a process that forms its context as additional pixels are received, at 121. The context forming process outputs a context for the decoded pixel. The context may be formed based on a variety of information, such as, but not limited to, environmental variables and other decoded decisions (decoded pixels) produced by 120.
The probable context is then piped from 121 to a context state index process, at 122. The context state index process maintains an entry for each possible context of the pixels. Moreover, each entry includes an index into a probability estimation table (PET) and a sense. The sense stores the most probable symbol for a given context as either a 1 or a 0. Thus, the processing, at 122, returns an index into the PET and a sense.
In an embodiment, the PET may be implemented as a hard coded state machine that contains several states with associated probabilities. The probabilities are associated with estimates for current decisions being made. As the process of decoding progresses, state transitions (also known as renormalizations) are made and updated to the context state index process. Thus, at 122, probabilities for symbols of decoded pixels are acquired and the most probable sense (0 or 1) is fed to the decoding algorithm, at 120. Next, the decoding algorithm in response to the most probable sense selects a symbol from a symbol library to represent the decompressed pixel.
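A much-reduced sketch of the context state and PET interplay follows. The three-state table and its transitions are invented for illustration; a real PET (e.g., the JBIG-style state machine alluded to above) has many more states, and the entry layout here is an assumption:

```python
# Hypothetical, much-reduced probability estimation table (PET).
# Each state carries an estimate for the less probable symbol (LPS)
# and the states to transition to after an MPS or LPS decision
# (the "renormalizations" mentioned in the text).
PET = [
    # (lps_prob, next_state_mps, next_state_lps)
    (0.45, 1, 0),
    (0.30, 2, 0),
    (0.15, 2, 1),
]

def make_context_state(num_contexts):
    """One (PET index, sense) entry per context; the sense is the
    most probable symbol (MPS), 0 or 1."""
    return [[0, 0] for _ in range(num_contexts)]

def lookup(state, ctx):
    """Resolve a context to its LPS probability and MPS sense."""
    idx, sense = state[ctx]
    return PET[idx][0], sense

def update(state, ctx, was_mps, flip_sense=False):
    """State transition after a decision is known; optionally flip
    the sense when the estimate proved wrong often enough."""
    idx, sense = state[ctx]
    state[ctx][0] = PET[idx][1] if was_mps else PET[idx][2]
    if flip_sense:
        state[ctx][1] = 1 - sense
```

The decoding algorithm at 120 would consume the `(probability, sense)` pair from `lookup` and feed the resulting decision back through `update`.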
As contexts and decisions are resolved for decoded pixels, the resolved contexts are recycled, at 124, back to the processing at 122 and 123. Thus, the processing at 121, 122, and 123 all concurrently process different decoded pixels received from the decoding process, at 120. Typically, the processing at 122 and 123 would be idle until enough context of a current pixel being processed by 121 is known to resolve the index into the PET and the probability for the symbol. However, this is not the case with embodiments of this invention, because the decoding process continues to decode pixels and pipe them to 121; when a decision is known for each of those decoded pixels, at the conclusion of 123, the decisions are immediately piped back to the processing at 122 and 123. In this manner, several pixels may be processed at once and each pixel may be in a different stage of the processing. Therefore, no process remains idle and each process is more completely utilized with embodiments presented herein. As a result, the processing of the method 100 improves processing throughput associated with context-adaptive image decompression.
In an embodiment, the method 100 utilizes context formation templates. These context templates may be existing templates used in the industry for binary image compression and decompression. For example, several templates from the Joint Bi-level Image Experts Group (JBIG) exist and may be used with embodiments presented herein. A context template represents a group of pixels in a defined pattern. If the template pattern is matched to a series of decompressed pixels, then it provides a known context for a current pixel being processed. Some example templates from the JBIG2 standards include 16-pixel templates, 13-pixel templates, 3-row 10-pixel templates, and 2-row 10-pixel templates.
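A template-based context former can be sketched as below. The neighbourhood offsets shown are illustrative only and are not the exact pixel positions of any standard JBIG2 template; out-of-image neighbours read as 0, a common convention:

```python
def template_context(image, width, x, y, offsets):
    """Form a context for pixel (x, y) by concatenating the bits of
    previously decoded neighbours named by the template's (dx, dy)
    offsets.  Pixels outside the image contribute 0."""
    height = len(image) // width
    ctx = 0
    for dx, dy in offsets:
        nx, ny = x + dx, y + dy
        bit = 0
        if 0 <= nx < width and 0 <= ny < height:
            bit = image[ny * width + nx]
        ctx = (ctx << 1) | bit
    return ctx

# An illustrative (not standard-exact) 2-row, 10-pixel neighbourhood:
# five pixels from the row above, five already-decoded pixels to the
# left on the current row.
TEMPLATE_10 = [(-2, -1), (-1, -1), (0, -1), (1, -1), (2, -1),
               (-5, 0), (-4, 0), (-3, 0), (-2, 0), (-1, 0)]
```

With a 10-pixel template the returned context is a 10-bit value, matching the example worked through below in which 7 of those bits are known and 3 are still speculative.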
As further illustration of the processing associated with the method 100 consider the following scenario. Assume that a JBIG2 context template is associated with a given compressed image. The identity of the template may be included as header information within the compressed image. In the present example, assume also that the JBIG2 template is a 10-pixel template. This means that a current pixel being processed at any point during the processing of the method 100 may have its proper context resolved once 10 prior pixels' contexts are resolved to match a pattern in the 10-pixel template.
Continuing with the present example, at 121, during the first cycle through the method 100, the results of three previous operations (the processing of 122, 123, and 124) are not known. That is, 3 of the 10 pixels or bits needed to match the template and resolve a current pixel's context are unknown. Therefore, at any point during processing, the processes at 121, 122, and 123 will be working to resolve a current pixel's context, with 7 pixels resolved and 3 unresolved. It should be reiterated that the example is presented for purposes of illustration only, since in some embodiments more than 3 pixels may be unknown. This may occur if the pipelining is done in a different manner or if a machine processing the method 100 consumes more than 3 cycles in the pipe for the processing used in resolving prior pixels' contexts.
In the example, one of the three unknown bits (pixels) will be known during the next cycle through the method 100 before 122 processes, since that one bit will be resolved at 124 and recycled back to 122. Thus, in the first cycle, the processing at 121 forwards the known 7 bits to 122 and speculates on the probable contexts for the known 7 bits.
In an embodiment, the processing at 122 is a context state memory that resolves a sense and an index into the PET (depicted by the processing of 123). That context state memory may be segmented and executed in parallel as duplicated or replicated copies in memory. In one example, the context state memory is segmented into four banks in memory, such that the processing at 122 is capable of simultaneously performing four independent lookups within the context state.
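The banked context state memory can be sketched as replicated lookup structures, one per bank, so that several lookups proceed in the same cycle. The bank count of four follows the example above; the helper names are invented:

```python
def make_banks(entries, num_banks=4):
    """Replicate/segment the context-state memory into independent
    banks so several lookups can be serviced per cycle."""
    return [dict(entries) for _ in range(num_banks)]

def parallel_lookup(banks, contexts):
    """One lookup per bank per cycle: each candidate context is
    resolved against its own bank, independently of the others."""
    assert len(contexts) <= len(banks)
    return [banks[i][ctx] for i, ctx in enumerate(contexts)]
```

In hardware the replication costs memory but removes the serial bottleneck of a single-ported context state store.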
During a second cycle through the method 100, the processing at 122 has already received 7 known bits from the processing at 121 during the first cycle and also receives a new decision for a resolved context from the processing at 124. Thus, 8 bits of context are now known. Each lookup within the context state uses the 8 bits of information to select a sense and an index into the PET. Thus, the processing at 122 produces 4 potential PET indexes and senses during the second cycle (the possible combinations of the 2 bits that remain unknown).
Again, continuing with the present example, 4 sets of indexes and senses are provided to the processing at 123 at the conclusion of cycle 2. At the start of cycle 3, another decision for a context is supplied via 124 to the processing at 123. In an embodiment, that new decision is multiplexed or otherwise used to reduce the 4 sets of indexes and senses down to two sets. At this point, there are two speculative threads and two possible results or probable contexts for a yet unprocessed decision. Moreover, similar to the context state memory of the processing at 122, the processing at 123 (PET processing) may be replicated in memory to perform two or more simultaneous lookups into the PET. The lookups provide a probability estimate for the associated senses (symbols).
At the start of cycle 4, another decision or context is known and is recycled back to the processing just after 123. Here, in an embodiment, another multiplexer or selection algorithm may be deployed to select between the two available probabilities and senses, which were produced during cycle 3 by the processing of 123. Selection is based on a decision recycled or piped back into the processing. The final probability is then used to select a proper value for the symbol (sense) as a 0 or 1, since each sense (symbol) may have two versions or values. The proper version of the symbol is then selected by the decoder at 120 and outputted as a decompressed portion of the original compressed image.
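The speculate-then-multiplex pattern of cycles 2 through 4 can be sketched as follows: look up every completion of the still-unknown context bits in parallel, then halve the candidate set each time a late decision arrives. The data shapes here are illustrative:

```python
from itertools import product

def speculative_lookups(known_bits, num_unknown, table):
    """With some context bits unresolved, perform a lookup for every
    completion of the unknown bits, producing 2**num_unknown
    candidate results at once (4 candidates for 2 unknown bits)."""
    cands = {}
    for guess in product((0, 1), repeat=num_unknown):
        ctx = tuple(known_bits) + guess
        cands[guess] = table[ctx]
    return cands

def mux_reduce(cands, resolved_bit):
    """A recycled decision fixes the first unresolved bit, acting as
    a multiplexer that halves the candidate set; repeating until one
    candidate remains yields the final probability/sense."""
    return {guess[1:]: val for guess, val in cands.items()
            if guess[0] == resolved_bit}
```

With 1 known bit and 2 unknown bits, `speculative_lookups` yields 4 candidates; two successive `mux_reduce` calls, one per recycled decision, leave a single result, mirroring the 4-to-2-to-1 reduction in the example.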
It has been demonstrated that multiple processes, some processing in duplicate and some in parallel, may be arranged and interfaced in manners that improve image decompression throughput. This is achieved by pipelining and recycling (re-inputting) decisions (contexts) to different portions of the processing of the method 100 and by running each of the processes even when only limited context information is initially available. Thus, the processes are not idle, and each process works efficiently to resolve contexts for the decompression of the image.
Initially, a compressed image or some desired/configurable portion of the compressed image is acquired and made available for consumption by the processing of the method 200. Acquisition may occur in a variety of manners. For example, another buffering service may be interfaced to the method 200 and may provide configurable chunks of the compressed image to the method 200 when requested.
At 210A, the compressed pixels (binary values for bits that represent the compressed pixels in the compressed image) are piped to a context former, a context state indexer, and a probability estimation table. Simultaneously with 210A, and at various points during the processing of the method 200, decoded versions of some pixels are resolved, such that contexts for other pixels that are still being decoded may be resolved. Thus, at 210B, these decoded pixels or their resolved contexts are supplied or piped back into the processing.
In an embodiment, at 211, a context template may be acquired for the compressed image being decompressed. The template permits a decoder to resolve pixel contexts once a configurable number of decompressed pixels and their contexts are resolved and matched to the template. The identity of the template may be acquired from a header associated with the compressed image or may be configured as a parameter or environmental variable associated with the processing of the method 200.
At 220, the processing of the method 200 iterates until the compressed image is decompressed. During a single iteration, a variety of sub-processes execute. These sub-processes may include the context former, the context state indexer, and the probability estimation table (PET). Processing associated with these sub-processes along with a decoder was described in detail above with the processing of the method 100 depicted in
In a similar manner, at 222, the sub-process associated with the PET selects candidate probabilities for the PET entries identified by the reduced candidate indexes, which are supplied from the context state indexer, at 221. As 210B progresses and decodes pixels and their contexts, these decisions are supplied back to the PET, at 222. This permits the PET to reduce the candidate probabilities for candidate senses (symbols) to a single selection of choice.
Accordingly, at 223, a single candidate probability for a single candidate sense is used to acquire the proper symbol in response to the probability. Again, each symbol may have two values, one for 0 and one for 1 (binary). Thus, a single sense actually represents two different values for a single symbol. The probability permits the proper selection between the two values for the symbol. Once the correct symbol is known, that symbol is outputted, at 224, as a portion of the decompressed image.
In an embodiment, the processing of the method 200 is interfaced to a decoder algorithm (decoder). That is, the output decisions made by the decoder are intercepted and fed to the method at 210B. In this manner, existing decoders may be integrated and interfaced to the processing of the method 200. One technique for doing this is to intercept the output decisions of such a decoder and pipe those decisions back through the processing of the method 200.
In another embodiment, the method 200 processes bi-tonal pixels (represented as single bit values 0 or 1). In still another embodiment, the pixels may be configurable word lengths representing a plurality of bit values. In this latter embodiment, each bit of a multi-bit pixel is processed independently through the processing of the method 200.
At 310, a compressed image is received for decompression. At 320, a context template which assists in resolving contexts for the compressed pixels of the image is acquired. In an embodiment, the identity of the context template is acquired as an initial header embedded within the compressed image. Alternatively, the identity of the context template may be acquired as a processing parameter or environmental variable associated with the method 300.
At 330, the compressed image is parsed to acquire compressed pixel values. At 340, each compressed pixel is processed through the remainder of the method 300 in the following manners. It is noted that a same pixel may be concurrently and simultaneously processed by different portions of the processing during a single iteration and some different pixels may be processed concurrently and in parallel by different portions of the processing. Also, at the conclusion of a single iteration, at 340, additional pixels are parsed from the compressed image and supplied to the processing.
After a single iteration, a most probable symbol for a pixel is identified, at 341. At 342, the identified symbol is outputted as a portion of the decompressed image. In this manner, the method 300 may serially iterate the compressed image pixel by pixel and generate decompressed versions of each pixel as a symbol. The symbols may be buffered, piped, and/or streamed to memory, storage, other devices, or other applications for subsequent or immediate consumption.
During iterations, at 343, the pixels are piped through various portions of the processing, such that the portions of the processing remain active and are generally not idle during image decompression. Additionally, at 342, during iterations, as a particular context for a particular pixel is resolved, that resolved context is fed back into the processing for purposes of resolving the contexts of other pixels currently being processed.
Correspondingly, at 344, pixel contexts are resolved in response to matches occurring against the context template and in response to resolved contexts that are fed back into the processing. Some portions of the processing may be working on pixel P, while another portion works on pixel P+1, and still another portion works on pixel P+2. Thus, as contexts for each of these pixels become known, they are immediately fed back into the appropriate processing so that each of the pixels can have its context properly resolved as well.
In an embodiment, at 345, the processing may speculate on contexts of the pixels based on patterns that are emerging. Thus, one portion of the processing may identify two or more potential candidate contexts for a particular pixel being processed based on the pattern of decompression that it was supplied and based on the context template. The candidate contexts can be whittled or reduced once resolved contexts are fed back into the processing. In an embodiment, the processing may not wait for a resolved context; rather it produces candidate contexts which are buffered in an intermediate location and reduced once a resolved context is supplied.
In similar manners, and in another embodiment, at 346, the processing may resolve candidate index values from speculative contexts and associate those index values with probable symbols (senses). The indexes are associated with entries in a PET. The PET produces the candidate or speculative probabilities for the candidate contexts and senses. In an embodiment, the PET produces two or more candidate or speculative probabilities for two or more candidate symbols and provides the same to an intermediate buffer or multiplexer. The multiplexer then receives a resolved context used to pick the proper probability and sense when the resolved context is fed back into the processing. Thus, the PET does not idle waiting on the resolved context.
In an embodiment, the processing depicted at 344-346 may be replicated and processed as multiple instances of one another to achieve even more processing throughput during image decompression. In these embodiments, multiplexers may be used after the processing of 344-346 in order to reduce the number of probable contexts and symbols for iterations of the method 300.
At 410, compressed pixels are piped through a pixel decompression process. In an embodiment, the pixel decompression process may include a decoder, a context former, a context state indexer, and a PET. These sub-processes and their processing were described in detail above with methods 100, 200, and 300 of
The decompression process, at 411, may have selective portions of its process concurrently processed or processed in parallel. Moreover, in an embodiment, a context state indexer and a PET associated with the decompression process may be replicated as two or more duplicate instances of themselves and processed independently and concurrently within the decompression process.
At 420, as the decompression process progresses through the compressed pixels, contexts for some pixels become known or resolved. The resolved contexts are recycled or fed back into selective portions of the decompression process. For example, in an embodiment, resolved contexts for pixels are recycled back to the post processing of the context state indexer and the PET.
In an embodiment, at 421, context may be based at least in part on a context template associated with the compressed pixels. The context template assists in recognizing patterns of pixels, in configurable numbers and arrangements, against pixels being processed by the decompression process.
At 430, as contexts are resolved for the compressed pixels, symbols associated with the decompressed pixels are outputted as decompressed versions of the compressed pixels.
The image decompression apparatus 500 may include a pixel context former 501, a context state indexer 502, and a probability estimation table (PET) 503. The image decompression apparatus 500 may also include an image arithmetic decoder 504. In an embodiment, the image decompression apparatus 500 also may include a first multiplexer 502A and/or a second multiplexer 503A. The arrangement and connections depicted in
The pixel context former 501 is adapted to form contexts associated with compressed pixel values supplied to it. In some instances, the pixel context former 501 speculates on contexts for a given pixel based upon its existing state and what is presently known about previously processed pixels and their contexts. During operation of the image decompression apparatus 500, the pixel context former 501 supplies candidate contexts to the context state indexer 502.
The context state indexer 502 maintains an entry for each pixel and its perceived or potential candidate context, based on dependencies of the other pixels that surround a particular pixel being processed. Moreover, the context state indexer 502 is adapted to house, with each entry, an index into the PET 503 and a sense (symbol) associated with the perceived context. In an embodiment, the context state indexer 502 is adapted to process in parallel with duplicate instances of itself. In this manner, the context state indexer 502 may be segmented within a memory device, such that a single memory partition includes a duplicate and replicated version of the context state indexer 502. In another embodiment, the context state indexer 502 may also be split and partitioned into banks within the memory device. During operation of the image decompression apparatus 500, the context state indexer 502 supplies candidate indexes and candidate senses to the PET 503.
The PET 503 is adapted to determine candidate probabilities for a given candidate sense (symbol) based on known contexts for a pixel being decompressed. In an embodiment, the PET 503 may be replicated within memory as multiple and concurrent processing instances of itself; similar to embodiments described with the context state indexer 502. In an embodiment, the PET 503 may be split and partitioned within banks of memory. During operation of the image decompression apparatus 500, the PET 503 supplies a single sense and a probability to an image arithmetic decoder 504 or supplies multiple senses and their associated probabilities to the image arithmetic decoder 504.
The pixel context former 501, the context state indexer 502, and the PET 503 are adapted to process concurrently with one another and against the same or different ones of the compressed pixels. During operation of the image decompression apparatus 500, when a particular context for a particular pixel becomes known or resolved, that resolved context is fed back into the context state indexer 502 and the PET 503 in order to resolve contexts for other pixels that are being processed.
In an embodiment, the resolved contexts are fed back to a first multiplexer 502A and/or a second multiplexer 503A. The multiplexers 502A and 503A reduce the number of candidate indexes produced by the context state indexer 502 and reduce the number of candidate probabilities produced by the PET 503. In this manner, the context state indexer 502 and the PET 503 do not idle waiting on resolved contexts for pixels being processed. The first multiplexer 502A is adapted to receive candidate indexes from the context state indexer 502 and resolved contexts in order to reduce the number of potential indexes supplied to the PET 503. Similarly, the second multiplexer 503A is adapted to receive candidate probabilities for senses from the PET 503 and resolved contexts in order to reduce the number of potential probabilities and senses provided to an image arithmetic decoder 504.
In an embodiment, the image decompression apparatus 500 may include an image arithmetic decoder 504. The image arithmetic decoder 504 is adapted to make decisions that select symbols for a compressed pixel. The PET provides a probability and sense (symbol) to the arithmetic decoder 504. The arithmetic decoder in response to this information selects a proper value for the symbol (0 or 1) and acquires that value, which is then outputted as a decompressed version of the compressed pixel. This in turn resolves a context for that original compressed pixel, which is then fed back to the context former 501, the context state indexer 502, and the PET 503.
In an embodiment, the image decompression apparatus 500 is implemented as a specialized hardware accelerator within a microchip's architecture. In another embodiment, the hardware accelerator is implemented within a programmable processor engine of an image signal processor.
The image processing system 600 includes a pixel context former 601, a context state indexer 602, a probability estimation table (PET) 603, and a printer 604. The adaptations and operations associated with the pixel context former 601, the context state indexer 602, and the PET 603 were described in detail above with respect to methods 100, 200, 300, and 400 of
The printer 604 is adapted to print decompressed versions of the compressed pixels, which are produced by an image arithmetic decoder interfaced to the pixel context former 601, the context state indexer 602, and the PET 603. That is, the printer 604 is adapted to receive symbols streamed from an arithmetic decoder or from a buffer which accumulates the symbols. The symbols represent a decompression of a compressed image. The printer 604 may spool the symbols for subsequent printing to a print media or may directly print the symbols on the print media. The print media may be any medium capable of being printed to, such as, but not limited to, paper, plastic, metal, glass, cardboard, food, etc.
In an embodiment, the image processing system 600 also includes a display 605 adapted to present the decompressed versions of the compressed pixels as a decompressed image. The display 605 may be integrated into another device or machine or may be a stand-alone device or machine. Thus, the display 605 may include, but is not limited to, a printer, a monitor interfaced to a processor, a television, a phone, an appliance, a vehicle, a boat, or an aircraft.
In another embodiment, the image processing system 600 may also include a memory device, a storage device, or media 606. Thus, the decompressed versions of the compressed pixels may be housed in memory, storage, and/or on media 606. For example, a decompressed image may be indexed and stored within a memory location, a storage location (e.g., fixed disk, etc.), and/or on removable storage media (e.g., Compact Disks (CDs), Digital Versatile Disks (DVDs), diskettes, etc.).
The above description is illustrative, and not restrictive. Many other embodiments will be apparent to those of skill in the art upon reviewing the above description. The scope of embodiments of the invention should therefore be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The Abstract is provided to comply with 37 C.F.R. §1.72(b) in order to allow the reader to quickly ascertain the nature and gist of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.
In the foregoing description of the embodiments, various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments of the invention have more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may lie in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the Description of the Embodiments, with each claim standing on its own as a separate exemplary embodiment.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US4862167 *||Feb 24, 1987||Aug 29, 1989||Hayes Microcomputer Products, Inc.||Adaptive data compression method and apparatus|
|US5297220 *||Aug 24, 1992||Mar 22, 1994||Ricoh Company, Ltd.||Image processing system for image compression and decompression|
|US5309381 *||Apr 8, 1992||May 3, 1994||Ricoh Company, Ltd.||Probability estimation table apparatus|
|US5313203 *||Aug 26, 1992||May 17, 1994||Kabushiki Kaisha Toshiba||Coding apparatus and method for coding information symbol strings by assigning fixed length codes thereto|
|US5471206 *||Dec 5, 1994||Nov 28, 1995||Ricoh Corporation||Method and apparatus for parallel decoding and encoding of data|
|US5583500 *||Dec 23, 1993||Dec 10, 1996||Ricoh Corporation||Method and apparatus for parallel encoding and decoding of data|
|US5710562 *||Aug 31, 1995||Jan 20, 1998||Ricoh Company Ltd.||Method and apparatus for compressing arbitrary data|
|US5717394 *||Dec 17, 1996||Feb 10, 1998||Ricoh Company Ltd.||Method and apparatus for encoding and decoding data|
|US5781136 *||Dec 18, 1996||Jul 14, 1998||Mitsubishi Denki Kabushiki Kaisha||Digital information encoding device, digital information decoding device, digital information encoding/decoding device, digital information encoding method, and digital information decoding method|
|US5809176 *||Oct 18, 1995||Sep 15, 1998||Seiko Epson Corporation||Image data encoder/decoder system which divides uncompressed image data into a plurality of streams and method thereof|
|US6055338 *||Aug 21, 1997||Apr 25, 2000||Sumitomo Metal Industries Limited||Bi-level adaptive coding using a dual port memory and a context comparator|
|US6094151 *||Jan 5, 1998||Jul 25, 2000||Ricoh Company, Ltd.||Apparatus and method for finite state machine coding of information selecting most probable state subintervals|
|US6351569 *||Jan 14, 1998||Feb 26, 2002||Mitsubishi Denki Kabushiki Kaisha||Coding method, decoding method, coding device and decoding device|
|US6882751 *||Jun 14, 2001||Apr 19, 2005||Canon Kabushiki Kaisha||Arithmetic decoding method and device and storage medium|
|US7095344 *||Apr 10, 2003||Aug 22, 2006||Mitsubishi Denki Kabushiki Kaisha||Digital signal encoding device, digital signal decoding device, digital signal arithmetic encoding method and digital signal arithmetic decoding method|
|US7286710 *||Oct 1, 2003||Oct 23, 2007||Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V.||Coding of a syntax element contained in a pre-coded video signal|
|US20050179572 *||Feb 9, 2004||Aug 18, 2005||Lsi Logic Corporation||Method for selection of contexts for arithmetic coding of reference picture and motion vector residual bitstream syntax elements|
|1||*||Tarui, M.; Oshita, M.; Onoye, T.; Shirakawa, I., "High-speed implementation of JBIG arithmetic coder", TENCON 99. Proceedings of the IEEE Region 10 Conference, vol. 2, Sep. 15-17, 1999, pp. 1291-1294.|
|2||Tompkins, Dave , et al., "Coding of Still Pictures, Additional Extension Segments for JBIG2", ISO/IEC JTC 1/SC 29/WG1 N1318, Jul. 1999 Meeting-Vancouver, (Jul. 6, 1999), 5 pgs.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US7826672 *||Sep 4, 2009||Nov 2, 2010||Asml Netherlands B.V.||Lithographic apparatus and device manufacturing method utilizing a multiple dictionary compression method for FPD|
|US8681893||Oct 7, 2009||Mar 25, 2014||Marvell International Ltd.||Generating pulses using a look-up table|
|US8761261||Jul 28, 2009||Jun 24, 2014||Marvell International Ltd.||Encoding using motion vectors|
|US8817771||Jul 13, 2011||Aug 26, 2014||Marvell International Ltd.||Method and apparatus for detecting a boundary of a data frame in a communication network|
|US8897393||Oct 16, 2008||Nov 25, 2014||Marvell International Ltd.||Protected codebook selection at receiver for transmit beamforming|
|US8902994||Jul 25, 2013||Dec 2, 2014||Marvell International Ltd.||Deblocking filtering|
|US8908754||Aug 14, 2013||Dec 9, 2014||Marvell International Ltd.||Decision feedback equalization for signals having unequally distributed patterns|
|US8942312||Aug 12, 2013||Jan 27, 2015||Marvell International Ltd.||WCDMA modulation|
|US8953661||Oct 21, 2013||Feb 10, 2015||Marvell International Ltd.||Wireless device communication in the 60 GHz band|
|US20090324111 *||Dec 31, 2009||Asml Netherlands B.V.||Lithographic Apparatus and Device Manufacturing Method Utilizing a Multiple Dictionary Compression Method for FPD|
|U.S. Classification||382/233, 382/234|
|Cooperative Classification||H04N19/91, H04N19/44, H04N1/411|
|European Classification||H04N7/26A4V, H04N1/411, H04N7/26D|
|Jan 18, 2005||AS||Assignment|
Owner name: INTEL CORPORATION, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:RANGANATHAN, SRIDHARAN;REEL/FRAME:015605/0596
Effective date: 20041203
|Nov 1, 2013||REMI||Maintenance fee reminder mailed|
|Mar 23, 2014||LAPS||Lapse for failure to pay maintenance fees|
|May 13, 2014||FP||Expired due to failure to pay maintenance fee|
Effective date: 20140323