CA2195110A1 - Method and apparatus for compressing images - Google Patents

Method and apparatus for compressing images

Info

Publication number
CA2195110A1
Authority
CA
Canada
Prior art keywords
image
data
compression
compressed
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002195110A
Other languages
French (fr)
Inventor
Stephen G. Johnson
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Johnson Grace Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Johnson Grace Co filed Critical Johnson Grace Co
Publication of CA2195110A1 publication Critical patent/CA2195110A1/en
Abandoned legal-status Critical Current


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/90 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/94 Vector quantisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 Quantisation
    • H04N19/126 Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 Quantisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding

Abstract

A system and method is disclosed that compresses and decompresses images. The compression system and method (126, 128, 130, 132) includes an encoder (130) which compresses images and stores such compressed images in a unique file format, and a decoder (110) which decompresses images. The encoder (130) optimizes the encoding process to accommodate different image types with fuzzy logic methods (152) that automatically analyze and decompose a source image, classify its components, select the optimal compression method for each component, and determine the optimal parameters of the selected compression methods. The encoding methods include: a Reed Spline Filter (138), a discrete cosine transform (136), a differential pulse code modulator (140), an enhancement analyzer (144), an adaptive vector quantizer (134) and a channel encoder (132) to generate a plurality of data segments that contain the compressed image. The plurality of data segments are layered in the compressed file (104) to optimize the decoding process. The first layer allows the decoder (110) to display the compressed image as a miniature or a coarse quality full sized image; the decoder (110) then adds additional detail and sharpness to the displayed image as each new layer is received. The decoder (110) uses optimal decompression methods to expand the compressed image file.

Description


METHOD AND APPARATUS FOR COMPRESSING IMAGES

Background of the Invention

Field of the Invention

This invention relates to the compression and decompression of digital data and, more particularly, to the reduction in the amount of digital data necessary to store and transmit images.
Background of the Invention

Image compression systems are commonly used in computers to reduce the storage space and transmittal times associated with storing, transferring and retrieving images. Due to the increased use of images in computer applications, and the increase in the transfer of images, a variety of image compression techniques have attempted to solve the problems associated with the large amounts of storage space (i.e., hard disks, tapes or other devices) needed to store images.
Conventional devices store an image as a two-dimensional array of picture elements, or pixels. The number of pixels determines the resolution of an image. Typically the resolution is measured by stating the number of horizontal and vertical pixels contained in the two-dimensional image array. For example, a 640 by 480 image has 640 pixels across and 480 from top to bottom, for a total of 307,200 pixels.
While the number of pixels represents the image resolution, the number of bits assigned to each pixel represents the number of available intensity levels of each pixel. For example, if a pixel is only assigned one bit, the pixel can represent a maximum of two values. Thus the range of colors which can be assigned to that pixel is limited to two (typically black and white). In color images, the bits assigned to each pixel represent the intensity values of the three primary colors of red, green and blue. In present "true color" applications, each pixel is normally represented by 24 bits where 8 bits are assigned to each primary color, allowing the encoding of 16.8 million (2^8 x 2^8 x 2^8) different colors.
Consequently, color images require large amounts of storage capacity. For example, a typical color (24 bits per pixel) image with a resolution of 640 by 480 requires approximately 922,000 bytes of storage. A larger 24-bit color image with a 2000 by 2000 pixel resolution requires approximately twelve million bytes of storage. As a result, image-based applications such as interactive shopping, multimedia products, electronic games and other image-based presentations require large amounts of storage space to display high quality color images.
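The storage figures above follow directly from pixels times bits per pixel; a minimal sketch of the arithmetic (the function name is illustrative, not part of the patent):

```python
def uncompressed_size_bytes(width, height, bits_per_pixel=24):
    """Raw storage for an image: pixels times bits per pixel, expressed in bytes."""
    return width * height * bits_per_pixel // 8

# 640 x 480 at 24 bits per pixel -> 921,600 bytes (the "approximately 922,000" above)
print(uncompressed_size_bytes(640, 480))
# 2000 x 2000 at 24 bits per pixel -> 12,000,000 bytes
print(uncompressed_size_bytes(2000, 2000))
```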
In order to reduce storage requirements, an image is compressed (encoded) and stored as a smaller file which requires less storage space. In order to retrieve and view the compressed image, the compressed image file is expanded (decoded) to its original size. The decoded (or "reconstructed") image is usually an imperfect or "lossy" representation of the original image because some information may be lost in the compression process. Normally, the greater the amount of compression, the greater the divergence between the original image and the reconstructed image. The amount of compression is often referred to as the compression ratio. The compression ratio is the amount of storage space needed to store the original (uncompressed) digitized image file divided by the amount of storage space needed to store the corresponding compressed image file.
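As a worked example of this definition, using the 640 x 480 image above and an assumed 10,000-byte compressed file (the 92-to-1 figure that appears later in the description):

```python
def compression_ratio(uncompressed_bytes, compressed_bytes):
    """Uncompressed file size divided by compressed file size."""
    return uncompressed_bytes / compressed_bytes

# A 640 x 480, 24-bit image (921,600 bytes) compressed to an assumed 10,000 bytes
print(round(compression_ratio(921_600, 10_000)))   # ~92, i.e. a 92-to-1 ratio
```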
By reducing the amount of storage space needed to store an image, compression is also used to reduce the time needed to transfer and communicate images to other locations. In order to transfer an image, the data bits that represent the image are sent via a data channel to another location. The sequence of transmitted bytes is called the data stream. Generally, the image data is encoded and the compressed image data stream is sent over a data channel; when received, the compressed image data is decoded to recreate the original image. Thus, compression speeds the transmission of image files by reducing their size.
Several processes have been developed for compressing the data required to represent an image. Generally, the processes rely on two methods: 1) spatial or time domain compression, and 2) frequency domain compression. In frequency domain compression, the binary data representing each pixel in the space or time domain are mapped into a new coordinate system in the frequency domain.
In general, the mathematical transforms, such as the discrete cosine transform (DCT), are chosen so that the signal energy of the original image is preserved, but the energy is concentrated in a relatively few transform coefficients. Once transformed, the data is compressed by quantization and encoding of the transform coefficients.
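To make the energy-compaction point concrete, the sketch below applies a plain orthonormal DCT-II (ordinary NumPy, not the patent's optimized DCT) to one 8 x 8 block; a smooth block collapses into a handful of significant low-frequency coefficients, which is what the subsequent quantization exploits:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / n)

def block_dct2(block):
    """Forward 2-D DCT of a square pixel block."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

# A smooth gradient block: most of the signal energy lands in a few
# low-frequency coefficients near the top-left corner.
block = np.add.outer(np.arange(8), np.arange(8)).astype(float)
print(np.round(block_dct2(block), 1))
```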
Optimization of the process of compressing an image includes increasing the compression ratio while maintaining the quality of the original image, reducing the time to encode an image, and reducing the time to decode a compressed image. In general, a process that increases the compression ratio or decreases the time to compress an image results in a loss of image quality. A process that increases the compression ratio and maintains a high quality image often results in longer encoding and decoding times. Accordingly, it would be advantageous to increase the compression ratio and reduce the time needed to encode and decode an image while maintaining a high quality image.
It is well known that image encoders can be optimized for specific image types. For example, different types of images may include graphical, photographic, or typographic information or combinations thereof. As discussed in more detail below, the encoding of an image can be viewed as a multi-step process that uses a variety of compression methods which include filters, mathematical transformations, quantization techniques, etc. In general, each compression method will compress different image types with varying comparative efficiency. These compression methods can be selectively applied to optimize an encoder with respect to a certain type of image. In addition to selectively applying various compression methods, it is also possible to optimize an encoder by varying the parameters (e.g., quantization tables) of a particular compression method.
Broadly speaking, however, the prior art does not provide an adaptive encoder that automatically decomposes a source image, classifies its parts, and selects the optimal compression methods and the optimal parameters of the selected compression methods, resulting in an optimized encoder that increases relative compression rates.
Once an image is optimally compressed with an encoder, the set of compressed data are stored in a file. The structure of the compressed file is referred to as the file format. The file format can be fairly simple and common, or the format can be quite complex and include a particular sequence of compressed data or various types of control instructions and codes.
The file format (the structure of the data in the file) is especially important when compressed data in the file will be read and processed sequentially and when the user desires to view or transmit only part of a compressed image file.
Accordingly, it would be advantageous to provide a file format that "layers" the compressed image components, arranging those of greatest visual importance first, those of secondary visual importance second, and so on. Layering the compressed file format in such a way allows the first segment of the compressed image file to be decoded prior to the remainder of the file being received or read by the decoder.
The decoder can display the first segment (layer) as a miniature version of the entire image or can enlarge the miniature to display a coarse or "splash" quality rendition of the original image. As each successive file segment or layer is received, the decoder enhances the quality of the displayed picture by selectively adding detail and correcting pixel values.
Like the encoding process, the decoding of an image can be viewed as a multi-step process that uses a variety of decoding methods which include inverse mathematical transformations, inverse quantization techniques, etc.
Conventional decoders are designed to have an inverse function relative to the encoding system. These inverse decoding methods must match the encoding process used to encode the image. In addition, where an encoder makes content-sensitive adaptations to the compression algorithm, the decoder must apply a matching content-sensitive decoding process.
Generally, a decoder is designed to match a specific encoding process. Prior art compression systems exist that allow the decoder to adjust particular parameters, but the prior art encoders must also transmit accompanying tables and other information. In addition, many conventional decoders are limited to specific decoding methods that do not accommodate content-sensitive adaptations.
Summary of the Invention

The problems outlined above are solved by the method and apparatus of the present invention. That is, the image compression system of the present invention includes a unique encoder which compresses images and a unique decoder which decompresses images. The unique compression system obtains high compression ratios at all image quality levels while achieving relatively quick encoding and decoding times.
A high compression ratio enables faster image transmission and reduces the amount of storage space required to store an image. When compared with conventional compression techniques, such as the Joint Photographic Experts Group (JPEG), the present invention significantly increases the compression ratio for color images which, when decompressed, are of comparable quality to the JPEG images.

The exact improvement over JPEG will depend on image content, resolution, and other factors.
Smaller image files translate into direct storage and transmission time savings. In addition, the present invention reduces the number of operations to encode and decode an image when compared to JPEG and other compression methods of a similar nature. Reducing the number of operations reduces the amount of time and computing resources needed to encode and decode an image, and thus improves computer system response times.
Furthermore, the image compression system of the present invention optimizes the encoding process to accommodate different image types. As explained below, the present invention uses fuzzy logic techniques to automatically analyze and decompose a source image, classify its components, select the optimal compression method for each component, and determine the optimal content-sensitive parameters of the selected compression methods. The encoder does not need prior information regarding the type of image or information regarding which compression methods to apply.
Thus, a user does not need to provide compression system customization or need to set the parameters of the compression methods.
The present invention is designed with the goal of providing an image compression system that reliably compresses any type of image with the highest achievable efficiency, while maintaining a consistent range of viewing qualities. Automating the system's adaptivity to varied image types allows for a minimum of human intervention in the encoding process and results in a system where the compression and decompression process are virtually transparent to the users.
The encoder and decoder of the present invention contain a library of encoding methods that are treated as a "toolbox." The toolbox allows the encoder to selectively apply particular encoding methods or tools that optimize the compression ratio for a particular image component. The toolbox approach allows the encoder to support many different encoding methods in one program, and accommodates the invention of new encoding methods without invalidating existing decoders. The toolbox approach thus allows upgradability for future improvements in compression methods and adaptation to new technologies.
A further feature of the present invention is that the encoder creates a file format that segments or "layers" the compressed image. The layering of the compressed image allows the decoder to display image file segments, beginning with the data at the front of the file, in a coherent sequence which begins with the decoding and display of the information that constitutes the core of the image as defined by human perception. This core information can appear as a good quality miniature of the image and/or as a full sized "splash" or coarse quality version of the image. Both the miniature and splash image enable the user to view the essence of an image from a relatively small amount of encoded data. In applications where the image file is being transmitted over a data channel, such as a telephone line or limited bandwidth wireless channel, display of the miniature and/or splash image occurs as soon as the first segment or layer of the file is received. This allows users to view the image quickly and to see detail being added to the image as subsequent layers are received, decoded, and added to the core image.
The decoder decompresses the miniature and the full sized splash quality image from the same information. User specified preferences and the application determine whether the miniature and/or the full sized splash quality image are displayed for any given image.
Whether the first layer is displayed as a miniature or a splash quality full size image, the receipt of each successive layer allows the decoder to add additional image detail and sharpness. Information from the previous layer is supplemented, not discarded, so that the image is built layer by layer. Thus a single compressed file with a layered file format can store both a thumbnail and a full size version of the image and can store the full size version at various quality levels without storing any redundant information.
The layered approach of the present invention allows the transmission or decoding of only the part of the compressed file which is necessary to display a desired image quality.
Thus, a single compressed file can generate a thumbnail and different quality full size images without the need to recompress the file to a smaller size and lesser quality, or store multiple files compressed to different file sizes and quality levels.
This feature is particularly advantageous for on-line service applications, such as shopping or other applications where the user or the application developer may want several thumbnail images downloaded and presented before the user chooses to receive the entire full size, high quality image.
In addition to conserving the time and transmission costs associated with viewing a variety of high quality images that may not be of interest, the user need only subsequently download the remainder of each image file to view the higher detail versions of the image.
The layered format also allows the storage of different layers of the compressed data file separate from one another.
Thus, the core image data (miniature) can be stored locally (e.g., in fast RAM memory for fast access), and the higher quality enhancement layers can be stored remotely in lower cost bulk storage.
A further feature of the layered file format of the present invention allows the addition of other compressed data information. The layered and segmented file format is extendable so that new layers of compressed information such as sound, text and video can be added to the compressed image data file. The extendable file format allows the compression system to adapt to new image types and to combine compressed image data with sound, text and video.
Like the encoder, the decoder of the present invention includes a toolbox of decoding methods. The decoding process can begin with the decoder first determining the encoding methods used to encode each data segment. The decoder determines the encoding methods from instructions the encoder inserts into the compressed data file.
Adding decoder instructions to the compressed image data provides several advantages. A decoder that recognizes the instructions can decode files from a variety of different encoders, accommodate content-sensitive encoding methods, and adjust to user specific needs. The decoder of the present invention also skips parts of the data stream that contain data that are unnecessary for a given rendition of the image, or ignores parts of the data stream that are in an unknown format. The ability to ignore unknown formats allows future file layers to be added while maintaining compatibility with older decoders.
In a preferred embodiment of the present invention, the encoder compresses an image using a first Reed Spline Filter, an image classifier, a discrete cosine transform, a second and third Reed Spline Filter, a differential pulse code modulator, an enhancement analyzer, and an adaptive vector quantizer to generate a plurality of data segments that contain the compressed image. The plurality of data segments are further compressed with a channel encoder.
The Reed Spline Filter includes a color space conversion transform, a decimation step and a least mean squared error (LMSE) spline fitting step. The output of the first Reed Spline Filter is then analyzed to determine an image type for optimal compression. The first Reed Spline Filter outputs three components which are analyzed by the image classifier.
The image classifier uses fuzzy logic techniques to classify the image type. Once the image type is determined, the first component is separated from the second and third components and further compressed with an optimized discrete cosine transform and an adaptive vector quantizer. The second and third components are further compressed with a second and third Reed Spline Filter, the adaptive vector quantizer, and a differential pulse code modulator. The enhancement analyzer enhances areas of an image determined to be the most visually important, such as text or edges. The enhancement analyzer determines the visual priority of pixel blocks. The pixel block dimensions typically correspond to 16 x 16 pixel blocks in the source image. In addition, the enhancement analyzer prioritizes each pixel block so that the most important enhancement information is placed in the earliest enhancement layers so that it can be decoded first. The output of the enhancement analyzer is compressed with the adaptive vector quantizer.
A user may set the encoder to compute a color palette optimized to the color image. The color palette is combined with the output of the discrete cosine transform, the adaptive vector quantizer, the differential pulse code modulator, and the enhancement analyzer to create a plurality of data segments. The channel encoder then interleaves and compresses the plurality of data segments.

Brief Description of the Drawings

These and other aspects, advantages, and novel features of the invention will become apparent upon reading the following detailed description and upon reference to the accompanying drawings in which:
FIG. 1 is a block diagram of an image compression system that encodes, transfers and decodes an image and includes a source image, an encoder, a compressed file, a first storage device, a data channel, a data stream, a decoder, a display, a second storage device, and a printer;
FIG. 2 illustrates the multi-step decoding process and includes the source image, the encoder, the compressed file, the data channel, the data stream, the decoder, a thumbnail image, a splash image, a panellized standard image, and the final representation of the source image;
FIG. 3 is a block diagram of the encoder showing the four stages of the encoding process;
FIG. 4 is a block diagram of the encoder showing a first Reed Spline Filter, a color space conversion transform, a Y miniature, a U miniature, an X miniature, an image classifier, an optimized discrete cosine transform, a discrete cosine transform residual calculator, an adaptive vector quantizer, a second and third Reed Spline Filter, a Reed Spline residual calculator, a differential pulse code modulator, an enhancement analyzer, a high resolution residual calculator, a palette selector, a plurality of data segments and a channel encoder;
FIG. 5 is a block diagram of the image formatter;
FIG. 6 is a block diagram of the Reed Spline Filter;
FIG. 7 is a block diagram of the color space conversion transform;
FIG. 8 is a block diagram of the image classifier;
FIG. 9 is a block diagram of the optimized discrete cosine transform;
FIG. 10 is a block diagram of the DCT residual calculator;
FIG. 11 is a block diagram of the adaptive vector quantizer;
FIG. 12 is a block diagram of the second and third Reed Spline Filters;
FIG. 13 is a block diagram of the Reed Spline residual calculator;
FIG. 14 is a block diagram of the differential pulse code modulator;
FIG. 15 is a block diagram of the enhancement analyzer;
FIG. 16 is a block diagram of the high resolution residual calculator;
FIG. 17 is the block diagram of the palette selector;
FIG. 18 is the block diagram of the channel encoder;


FIG. 19 is a block diagram of the vector quantization process;
FIGs. 20a and 20b show the segmented architecture of the data stream;
FIG. 21 illustrates the normal segment;
FIG. 22a, 22b, 22c and 22d illustrate the layering and interleaving of the plurality of data segments;
FIG. 23 is a block diagram of the decoder of the present invention;
FIG. 24 illustrates the multi-step decoding process and includes a Ym miniature, a Um miniature, an Xm miniature, the thumbnail miniature, the splash image and the standard image, and the enhanced image;
FIG. 25 is a block diagram of the decoder and includes an inverse Huffman encoder, an inverse DPCM, a dequantizer, a combiner, an inverse DCT, a demultiplexer, and an adder;
FIG. 26 is a block diagram of the decoder and includes the interpolator, interpolation factors, a scaler, scale factors, a replicator, and an inverse color converter;
FIG. 27 is a block diagram of the decoder that includes the inverse Huffman encoder, the combiner, the dequantizer, the inverse DCT, a pattern matcher, the adder, the interpolator, and an enhancement overlay builder;
FIG. 28 is a block diagram of the scaler with an input to output ratio of five-to-three in the one-dimensional case;
FIG. 29 illustrates the process of bilinear interpolation;
FIG. 30 is a block diagram of the process of optimizing the compression methods with the image classifier, the enhancement analyzer, the optimized DCT, the AVQ, and the channel encoder;
FIG. 31 is a block diagram of the image classifier;
FIG. 32 is a flow chart of the process of creating an adaptive uniform DCT quantization table;


FIG. 33 illustrates a table of several examples showing the mapping from input measurements to input sets to output sets;
FIG. 34 is a block diagram of image data compression;
FIG. 35 is a block diagram of a spline decimation/interpolation filter;
FIG. 36 is a block diagram of an optimal spline filter;
FIG. 37 is a vector representation of the image, processed image, and residual image;
FIG. 38 is a block diagram showing a basic optimization block of the present invention;
FIG. 39 is a graphical illustration of a one-dimensional bi-linear spline projection;
FIG. 40 is a schematic view showing periodic replication of a two-dimensional image;
FIGs. 41a, 41b and 41c are perspective and plan views of a two-dimensional planar spline basis;
FIG. 42 is a diagram showing representations of the hexagonal tent function;
FIG. 43 is a flow diagram of compression and reconstruction of image data;
FIG. 44 is a graphical representation of a normalized frequency response of a one-dimensional bi-linear spline basis;
FIG. 45 is a graphical representation of a one-dimensional eigenfilter frequency response;
FIG. 46 is a perspective view of a two-dimensional eigenfilter frequency response;
FIG. 47 is a plot of standard error as a function of frequency for a one-dimensional sinusoidal image;
FIG. 48 is a plot of original and reconstructed one-dimensional images and a plot of standard error;
FIG. 49 is a first two-dimensional image reconstruction for different compression factors;
FIG. 50 is a second two-dimensional image reconstruction for different compression factors;

FIG. 51 shows plots of standard error for representative images 1 and 2;
FIG. 52 is a compressed two-dimensional miniature using the optimized decomposition weights;
FIG. 53 is a block diagram of a preferred adaptive compression scheme in which the method of the present invention is particularly suited;
FIG. 54 is a block diagram showing a combined sublevel and optimal-spline compression arrangement;
FIG. 55 is a block diagram showing a combined sublevel and optimal-spline reconstruction arrangement;
FIG. 56 is a block diagram showing a multi-resolution optimized interpolation arrangement; and
FIG. 57 is a block diagram showing an embodiment of the optimizing process in the image domain.

Detailed Description of the Invention

FIG. 1 illustrates a block diagram of an image compression system that includes a source image 100, an encoder 102, a compressed file 104, a first storage device 106, a communication data channel 108, a decoder 110, a display 112, a second storage device 114, and a printer 116.
The source image 100 is represented as a two-dimensional image array of picture elements, or pixels. The number of pixels determines the resolution of the source image 100, which is typically measured by the number of horizontal and vertical pixels contained in the two-dimensional image array.
Each pixel is assigned a number of bits that represent the intensity level of the three primary colors: red, green, and blue. In the preferred embodiment, the full-color source image 100 is represented with 24 bits, where 8 bits are assigned to each primary color. Thus, the total storage required for an uncompressed image is computed as the number of pixels in the image times the number of bits used to represent each pixel (referred to as bits per pixel).

As discussed in more detail below, the encoder 102 uses decimation, filtering, mathematical transforms, and quantization techniques to concentrate the image into fewer data samples representing the image with fewer bits per pixel than the original format. Once the source image 100 is compressed with the encoder 102, the set of compressed data are assembled in the compressed file 104. The compressed file 104 is stored in the first storage device 106 or transmitted to another location via the data channel 108. If the compressed file 104 is transmitted to another location, the data stored in the compressed file 104 is transmitted sequentially via the data channel 108. The sequence of bits in the compressed file 104 that are transmitted via the data channel 108 is referred to as a data stream 118.
The decoder 110 expands the compressed file 104 to the original source image size. During the process of decoding the compressed file 104, the decoder 110 displays the expanded source image 100 on the display 112. In addition, the decoder 110 may store the expanded compressed file 104 in the second storage device 114 or print the expanded compressed file 104 on the printer 116.
For example, if the source image 100 comprises a 640 x 480, 24-bit color image, the amount of memory needed to store and display the source image 100 is approximately 922,000 bytes. In the preferred embodiment, the encoder 102 computes the highest compression ratio for a given decoding quality and playback model. The playback model allows a user to select the decoding mode, as is discussed in more detail below. The compressed data are then assembled in the compressed file 104 for transmittal via the data channel 108 or stored in the first storage device 106. For example, at a 92-to-1 compression ratio, the 922,000 bytes that represent the source image 100 are compressed into approximately 10,000 bytes. In addition, the encoder 102 arranges the compressed data into layers in the compressed file 104.


Referring to FIG. 2, it can be seen that the layering of the compressed file 104 allows the decoder 110 to display a thumbnail image and progressively improving quality versions of the source image 100 before the decoder 110 receives the entire compressed file 104. The first data expanded by the decoder 110 can be viewed as a thumbnail miniature 120 of the original image or as a coarse quality "splash" image 122 with the same dimensions as the original image. The splash image 122 is a result of interpolating the thumbnail miniature to the dimensions of the original image. As the decoder 110 continues to receive data from the data stream 118, the decoder 110 creates a standard image 124 by decoding the second layer of information and adding it to the splash image 122 data to create a higher quality image. The encoder 102 can create a user-specified number of layers in which each layer is decoded and added to the displayed image as data is received. Upon receiving the entire compressed file 104 via the data stream 118, the decoder 110 displays an enhanced image 105 that is the highest quality reconstructed image that can be obtained from the compressed data stream 118.
FIG. 3 illustrates a block diagram of the encoder 102 constructed in accordance with the present invention. The encoder 102 compresses the source image 100 in four main stages. In a first stage 126, the source image 100 is formatted, processed by a Reed Spline Filter and color converted. In a second stage 128, the encoder 102 classifies the source image 100 in blocks. In a third stage 130, the encoder 102 selectively applies particular encoding methods that optimize the compression ratio. Finally, the compressed data are interleaved and channel encoded in a fourth stage 132.
The encoder 102 contains a library of encoding methods that are treated as a toolbox. The toolbox allows the encoder 102 to selectively apply particular encoding methods that optimize the compression ratio for a particular image type. In the preferred embodiment, the encoder 102 includes at least one of the following: an adaptive vector quantizer (AVQ 134), an optimized discrete cosine transform (optimized DCT 136), a Reed Spline Filter 138 (RSF), a differential pulse code modulator (DPCM 140), a run length encoder (RLE 142), and an enhancement analyzer 144.
FIG. 4 illustrates a more detailed block diagram of the encoder 102. The first stage 126 of the encoder 102 includes a formatter 146, a first Reed Spline Filter 148 and a color space converter 150 which produces Y data 186, and U and X data 188. The second stage 128 includes an image classifier 152. The third stage includes an optimized discrete cosine transform and adaptive DCT quantization (optimized DCT 136), a DCT residual calculator 154, the adaptive vector quantizer (AVQ 134), a second and a third Reed Spline Filter 156, a Reed Spline residual calculator 158, the differential pulse code modulator (DPCM 140), a resource file 160, the enhancement analyzer 144, a high resolution residual calculator 162, and a palette selector 164. The fourth stage includes a plurality of data segments 166 and a channel encoder 168. The output of the channel encoder 168 is stored in the compressed file 104.
The formatter 146, as shown in more detail in FIG. 5, converts the source image 100 from its native format to a 24-bit red, green and blue pixel array. For example, if the source image 100 is an 8-bit palettized image, the formatter converts the 8-bit palettized image to a 24-bit red, green, and blue equivalent.
The first Reed Spline Filter 148, illustrated in more detail in FIG. 6, uses a two-step process to compress the formatted source image 100. The two-step process comprises a decimation step performed in block 170 and a spline fitting step performed in a block 172. As explained in more detail below, the decimation step in the block 170 decimates each color component of red, green, and blue by a factor of two along the vertical and horizontal dimensions using a Reed Spline decimation kernel. The decimation factor is called "tau." The R_tau2' decimated data 174 corresponds to the red component decimated by a factor of 2. The G_tau2' decimated data 176 corresponds to the green component decimated by a factor of 2. The B_tau2' decimated data 178 corresponds to the blue component decimated by a factor of 2.
In the spline fitting step in block 172, the first Reed Spline Filter 148 partially restores the source image detail lost by the decimation in block 170. The spline fitting step in block 172 processes the R_tau2' decimated data 174, the G_tau2' decimated data 176, and the B_tau2' decimated data 178 to calculate optimal reconstruction weights.
As explained in more detail below, the decoder 110 will interpolate the decimated data into a full sized image. In this interpolation, the decoder 110 uses the reconstruction weights which have been calculated by the Reed Spline Filter in such a way as to minimize the mean squared error between the original image components and the interpolated image components. Accordingly, the Reed Spline Filter 148 causes the interpolated image to match the original image more closely and increases the overall sharpness of the interpolated picture. In addition, reducing the error arising from the decimation step in block 170 reduces the amount of data needed to represent the residual image. The residual image is the difference between the reconstructed image and the original image.
The reconstruction weights output from the Reed Spline Filter 148 form a "miniature" of the original source image 100 for each primary color of red, green, and blue, wherein each red, green, and blue miniature is one-quarter the resolution of the original source image 100 when a tau of 2 is used.
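The Reed Spline decimation kernel and the spline-fitted weights are specified later in the patent; purely to illustrate how decimation by tau = 2 shrinks each color component to a one-quarter-resolution miniature, a naive block-averaging stand-in (not the patented kernel) would be:

```python
import numpy as np

def decimate_by_two(channel):
    """Naive 2x2 block-average decimation; a stand-in for the Reed Spline
    decimation kernel, used only to show the change in array dimensions."""
    h, w = channel.shape
    h, w = h - h % 2, w - w % 2                  # trim odd edges for simplicity
    c = channel[:h, :w].astype(float)
    return (c[0::2, 0::2] + c[1::2, 0::2] + c[0::2, 1::2] + c[1::2, 1::2]) / 4.0

red = np.random.randint(0, 256, size=(480, 640))
r_tau2 = decimate_by_two(red)
print(red.shape, '->', r_tau2.shape)             # (480, 640) -> (240, 320)
```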
More specifically, the preferred color space converter 150 transforms the R_tau2 miniature 180, the G_tau2 miniature 182 and the B_tau2 miniature 184 output by the first Reed Spline Filter 148 into a different color coordinate system in which one component is the luminance Y data 186 and the other two components are related to the chrominance U and X data 188. The color space converter 150 transforms the RGB to the YUX color space according to the following formulas:
Y = 0.29900R + 0.58700G + 0.11400B
U = 0.16870R + 0.33120G + 0.50000B
X = 0.50000R - 1.08216G + 0.91869B
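A minimal sketch of this conversion, using the coefficients exactly as printed above (the signs in the U row differ from the familiar YUV transform, so treat the matrix as reproduced from the text rather than verified):

```python
import numpy as np

# RGB -> YUX matrix with the coefficients as printed in the formulas above.
RGB_TO_YUX = np.array([
    [0.29900,  0.58700, 0.11400],   # Y (luminance)
    [0.16870,  0.33120, 0.50000],   # U (chrominance)
    [0.50000, -1.08216, 0.91869],   # X (chrominance)
])

def rgb_to_yux(rgb):
    """Convert an (h, w, 3) RGB miniature into its Y, U and X components."""
    yux = rgb.astype(float) @ RGB_TO_YUX.T
    return yux[..., 0], yux[..., 1], yux[..., 2]

y, u, x = rgb_to_yux(np.random.randint(0, 256, size=(240, 320, 3)))
```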
Referring to FIG. 6, it can be seen that a R_tau2 miniature 180 corresponds to a miniature that is decimated and spline fitted by a factor of 2. A G_tau2 miniature 182 corresponds to a green miniature that is decimated and spline fitted by a factor of 2. A B_tau2 miniature 184 corresponds to a blue miniature that is decimated and spline fitted by a factor of 2.
FIG. 7 illustrates the color space converter 150 of FIG. 4. The color space converter 150 transforms the R_tau2 miniature 180, the G_tau2 miniature 182 and the B_tau2 miniature 184 output by the first Reed Spline Filter 148 into a different color coordinate system in which one component is the luminance Y data 186 and the other two components are related to the chrominance U and X data 188 as shown in FIG. 4. Thus the color space converter 150 transforms the R_tau2 miniature 180, the G_tau2 miniature 182 and the B_tau2 miniature 184 into a Y_tau2 miniature 190, a U_tau2 miniature 192 and an X_tau2 miniature 194.
Referring to FIG. 8, it can be seen that the second stage 128 of the encoder 102 includes an image classifier 152 that determines the image type by analyzing the Y_tau2 miniature 190, the U_tau2 miniature 192 and the X_tau2 miniature 194. The image classifier 152 uses a fuzzy logic rule base to classify an image into one or more of its known classes. In the preferred embodiment, these classes include gray scale, graphics, text, photographs, high activity and low activity images. The image classifier 152 also decomposes the source image 100 into block units and classifies each block. Since the source image 100 includes a combination of different image types, the image classifier 152 sub-divides the source image 100 into distinct regions.
The image classifier 152 then outputs the control script 196 that specifies the correct compression methods for each region. The control script 196 specifies which compression methods to apply in the third stage 130, and specifies the channel encoding methods to apply in the fourth stage 132.
As shown in FIG. 4, during the third stage 130, the encoder 102 uses the control script 196 to select the optimal compression methods from its compression toolbox. The encoder 102 separates the Y data 186 from the U and X data 188. Thus, the encoder 102 separates the Y_tau2 miniature 190 from the U_tau2 miniature 192 and the X_tau2 miniature 194, and passes the Y_tau2 miniature 190 to the optimized DCT 136, and passes the U_tau2 miniature 192 and the X_tau2 miniature 194 to a second and third Reed Spline Filter 156.
As illustrated in FIG. 9, the optimized DCT 136 subdivides the Y_tau2 miniature 190 into a set of 8 x 8 pixel blocks and transforms each 8 x 8 pixel block into sixty-four DCT coefficients 198. The DCT coefficients include the AC terms 200 and the DC terms 201. The DCT coefficients 198 are analyzed by the optimized DCT 136 to determine optimal quantization step sizes and reconstruction values. The optimized DCT 136 stores the optimal quantization step sizes (uniform or non-uniform) in a quantization table Q 202 and outputs the reconstruction values to the CS data segment 204. The optimized DCT 136 then quantizes the DCT coefficients 198 according to the quantization table Q 202. Once quantized, the optimized DCT 136 outputs the DCT quantized values 206 to the DCT data segment 208.
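A minimal sketch of quantization against a step-size table; the step sizes below are uniform placeholders rather than the adaptively chosen values the optimized DCT 136 derives:

```python
import numpy as np

def quantize_block(dct_coeffs, q_table):
    """Quantize 8 x 8 DCT coefficients with a per-coefficient step-size table."""
    return np.rint(dct_coeffs / q_table).astype(int)

def dequantize_block(quantized, q_table):
    """Recover approximate coefficients; the information lost here is what the
    DCT residual calculator later measures and re-encodes."""
    return quantized * q_table

q_table = np.full((8, 8), 16.0)            # placeholder uniform step sizes
coeffs = np.random.randn(8, 8) * 100
restored = dequantize_block(quantize_block(coeffs, q_table), q_table)
print(np.max(np.abs(coeffs - restored)) <= 8.0)   # error bounded by half a step
```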
In order to preserve the image information lost by the optimized DCT 136, the DCT residual calculator 154 (shown in FIG. 10) computes and compresses the DCT residual. The DCT residual calculator 154 dequantizes in a dequantizer 209 the DCT quantized values 206 stored in the DCT data segment 208 by multiplying the reconstruction values in the CS data segment 204 with the DCT quantized values 206. The DCT residual calculator 154 then reconstructs the dequantized DCT components with an inverse DCT 210 to generate a reconstructed dY_tau2 miniature 211. The reconstructed dY_tau2 miniature 211 is subtracted from the original Y_tau2 miniature 190 to create an rY_tau2 residual 212.
Referring to FIG. 11, it can be seen that the rY_tau2 residual 212 is further compressed with the AVQ 134. The technique of vector quantization is used to represent a block of information as a single index that requires fewer bits of storage. As explained in more detail below, the AVQ 134 maintains a group of commonly occurring block patterns in a set of codebooks 214 stored in the resource file 160. The index references a particular block pattern within a particular codebook 214. The AVQ 134 compares the input block with the block patterns in the set of codebooks 214.
If a block pattern in the set of codebooks 214 matches or closely approximates the input block, the AVQ 134 replaces the input block pattern with the index.
Thus, the AVQ 134 compresses the input block information into a list of indexes. The indexes are decompressed by replacing each index with the block pattern each index references in the set of codebooks 214. The decoder 110, as explained in more detail below, also has a set of the codebooks 214. During the decoding process the decoder 110 uses the list of indexes to reference block patterns stored in a particular codebook 214. The original source cannot be precisely recovered from the compressed representation since the indexed patterns in the codebook will not match the input block exactly. The degree of loss will depend on how well the codebook matches the input block.
As shown in FIG. 11, the AVQ 134 compresses the rY_tau2 residual 212 by sub-dividing the rY_tau2 residual 212 into 4 x 4 residual blocks and comparing the residual blocks with codebook patterns as explained above. The AVQ 134 replaces the residual blocks with the codebook indexes that minimize the squared error. The AVQ 134 outputs the list of codebook indexes to the VQ1 data segment 224. Thus, the VQ1 data segment 224 is a list of codebook indexes that identify block patterns in the codebook. As explained in more detail below, the AVQ 134 of the preferred embodiment also generates new codebook patterns that the AVQ 134 outputs to the set of codebooks 214. The added codebook patterns are stored in the VQC3 data segment 223.
FIG. 12 illustrates a block diagram of the second Reed Spline Filter 225 and third Reed Spline Filter 227. Once the image classifier 152 determines the particular image type, the U_tau2 miniature 192 and the X_tau2 miniature 194 are further decimated and filtered by the second Reed Spline Filter 225. Like the first Reed Spline Filter 148 shown in FIG. 6, the second Reed Spline Filter 225 compresses the U_tau2 miniature 192 and the X_tau2 miniature 194 in a two-step process. First, the U_tau2 miniature 192 and the X_tau2 miniature 194 are vertically and horizontally decimated by a factor of two. The decimated data are then spline fitted to determine optimal reconstruction weights that will minimize the mean square error of the reconstructed decimated miniatures. Once complete, the second Reed Spline Filter 225 outputs the optimal reconstruction values to create a U_tau4 miniature 226 and an X_tau4 miniature 228.
The third Reed Spline Filter 227 decimates the U_tau4 miniature 226 and the X_tau4 miniature 228 vertically and horizontally by a factor of four. The decimated image data are again spline fitted to create a U_tau16 miniature 230 and an X_tau16 miniature 232.
In FIG. 13 the Reed Spline residual calculator 158 preserves the image information lost by the second Reed Spline Filter 225 and the third Reed Spline Filter 227 by computing and compressing the Reed Spline Filter residual. The Reed Spline residual calculator 158 reconstructs the U_tau4 miniature 226 and X_tau4 miniature 228 by interpolating the U_tau16 miniature 230 and the X_tau16 miniature 232. The interpolated U_tau16 miniature 230 is referred to as a dU_tau4 miniature 234. The interpolated X_tau16 miniature 232 is referred to as a dX_tau4 miniature 236. The dU_tau4 miniature 234 and dX_tau4 miniature 236 are subtracted from the actual U_tau4 miniature 226 and X_tau4 miniature 228 to create an rU_tau4 residual 238 and an rX_tau4 residual 240.
As illustrated in FIG. 11, the rU_tau4 residual 238 and the rX_tau4 residual 240 are further compressed with the AVQ 134. The AVQ 134 subdivides the rU_tau4 residual 238 and the rX_tau4 residual 240 into 4 x 4 residual blocks. The residual blocks are compared with blocks in the set of codebooks 214 to find the codebook patterns that minimize the squared error. The AVQ 134 compresses the residual block by assigning an index that identifies the corresponding block pattern in the set of codebooks 214. Once complete, the AVQ 134 outputs the compressed residual as the VQ3 data segment 242 and the VQ4 data segment 244.
The U_tau16 miniature 230 and the X_tau16 miniature 232 are also compressed with the DPCM 140 as shown in FIG. 14.
The DPCM 140 outputs the low-detail color components as the URCA data segment 246 and the XRCA data segment 248. The URCA data segment 246 and the XRCA data segment 248 form the low-detail color components that the decoder 110 uses to create the color thumbnail miniature 120 if this is included as a playback option in the compressed data stream 118.
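The DPCM step itself is not spelled out here; in its simplest, lossless form, differential pulse code modulation stores a starting value and the sample-to-sample differences, as in this sketch (the quantization a practical DPCM would apply to the differences is omitted):

```python
import numpy as np

def dpcm_encode(samples):
    """Keep the first sample, then only the difference from the previous sample."""
    samples = np.asarray(samples, dtype=int)
    return np.concatenate(([samples[0]], np.diff(samples)))

def dpcm_decode(deltas):
    """Rebuild the samples by accumulating the stored differences."""
    return np.cumsum(deltas)

row = [100, 102, 101, 105, 110]
assert list(dpcm_decode(dpcm_encode(row))) == row
```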
FIG. 15 illustrates the enhancement analyzer 144 of the preferred embodiment. The Y_tau2 miniature 190, the U_tau4 miniature 226, and the X_tau4 miniature 228 are analyzed to determine an enhancement list 250 that specifies the visual priority of every 16 x 16 image block. The enhancement analyzer 144 determines the visual priority of each 16 x 16 image block by convolving the Y_tau2 miniature 190, the U_tau4 miniature 226, and the X_tau4 miniature 228 and comparing the result of the convolution to a threshold value E 252. The threshold value E 252 is user defined. The user can set the threshold value E 252 from zero to 200. The threshold value E 252 determines how much enhancement information the encoder 102 adds to the compressed file 104. Thus, setting the threshold value E 252 to zero will suppress any image enhancement information.
If the result of convolving a particular 16 x 16 high resolution block is greater than the threshold value E 252, the 16 x 16 high-resolution block is prioritized and added to the enhancement list 250. Thus the enhancement list 250 identifies which 16 x 16 blocks are coded and prioritizes how the 16 x 16 coded blocks are listed.
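A rough sketch of the block-prioritization idea follows; a simple gradient-energy measure stands in for the convolution the patent describes, and the mapping from the threshold E to the retained blocks is simplified:

```python
import numpy as np

def enhancement_block_list(y_plane, threshold_e, block=16):
    """Return (activity, row, col) for 16 x 16 blocks whose activity exceeds the
    threshold, sorted so the most visually important blocks come first."""
    entries = []
    h, w = y_plane.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            tile = y_plane[r:r + block, c:c + block].astype(float)
            # Gradient energy as a stand-in for the convolution-based measure.
            activity = np.abs(np.diff(tile, axis=0)).sum() + np.abs(np.diff(tile, axis=1)).sum()
            if activity > threshold_e:
                entries.append((activity, r, c))
    return sorted(entries, reverse=True)

blocks = enhancement_block_list(np.random.randint(0, 256, size=(240, 320)), threshold_e=100)
```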
The high resolution residual calculator 162, as shown in FIG. 16, determines the high resolution residual for each 16 x 16 high resolution block identified in the enhancement list 250. The high resolution residual calculator 162 translates the VQ1 data segment 224 from the AVQ 134 into a reconstructed rY_tau2 residual 212 by mapping the indexes in the VQ1 data segment 224 to the patterns in the codebook. The reconstructed rY_tau2 residual is added to the dY_tau2 miniature 254 (dequantized DCT components). The result is interpolated by a factor of two in the vertical and horizontal dimensions and is subtracted from the original Y_tau2 miniature 190 to form the high resolution residual.
The high resolution residual calculator 162 then extracts high resolution 16 x 16 blocks from the high resolution residual according to the priorities in the enhancement list 250. As will be explained in more detail below, the high resolution residual calculator 162 outputs the highest priority blocks in the first enhancement layer, the next-highest priority blocks in the second enhancement layer, etc. The high resolution residual blocks are referred to as the xr_Y residual 256.
The xr_Y residual 256 is further compressed with the AVQ 134. The AVQ 134 subdivides the xr_Y residual 256 into 4 x 4 residual blocks. The residual blocks are compared with blocks in the codebook. If a residual block corresponds to a block pattern in the codebook, the AVQ 134 compresses the 4 x 4 residual block by assigning an index that identifies the corresponding block pattern in the codebook. Once complete, the AVQ 134 outputs the compressed high resolution residual to the VQ2 data segment 258.
FIG. 17 illustrates a block diagram of the palette selector 164. The palette selector 164 computes a "best-fit" 24-bit color palette 260 for the decoder 110. The palette selector 164 is optional and is user defined. The palette selector 164 computes the color palette 260 from the Y_tau2 miniature 190, the U_tau2 miniature 192 and the X_tau2 miniature 194. The user can select a number of palette entries N 262 to range from 0 to 255 entries. If the user selects a zero, no palette is computed. If enabled, the palette selector 164 adds the color palette 260 to a plurality of data segments 166.
The channel encoder 168, as shown in FIG. 18, interleaves and channel encodes the plurality of data segments 166. Based on the user defined playback model 261, the plurality of data segments 166 are interleaved as follows: 1) as a single layer, single-pass comprising the entire image, 2) as two layers comprising the thumbnail miniature 120 and the remainder of the image 122 with enhancement information interleaved into each data block (panel) in the second layer, and 3) as multiple layers comprising the thumbnail miniature 120, the standard image 124, the sharp image 105, and additional layers as specified by the user. For each playback model an option exists to interleave the data for panellized or non-panellized display.
The user defined playback model 261 is described in more detail below.
After interleaving the plurality of data segments 166, the channel encoder 168 compresses the plurality of data segments 166 in response to the control script 196. In the preferred embodiment, the channel encoder 168 compresses the plurality of data segments 166 with: 1) a Huffman encoding process that uses fixed tables, 2) a Huffman process that uses adaptive tables, 3) a conventional LZ1 coding technique or 4) a run-length encoding process. The channel encoder 168 chooses the optimal compression method based on the image type identified in the control script 196.
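As one concrete member of that toolbox, run-length encoding in its simplest form can be sketched as follows (the Huffman and LZ variants are not reproduced here):

```python
def run_length_encode(values):
    """Simplest form of run-length encoding: a list of (value, run length) pairs."""
    runs = []
    for value in values:
        if runs and runs[-1][0] == value:
            runs[-1] = (value, runs[-1][1] + 1)
        else:
            runs.append((value, 1))
    return runs

print(run_length_encode([0, 0, 0, 0, 7, 7, 3]))   # [(0, 4), (7, 2), (3, 1)]
```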
The Adaptive Vector Quantizer

The preferred embodiment of the AVQ 134 is illustrated in FIG. 19. More specifically, the AVQ 134 optimizes the vector quantization techniques described above. The AVQ 134 sub-divides the image data into a set of 4 x 4 pixel blocks 216. The 4 x 4 pixel blocks 216 include sixteen (16) elements X1, X2, X3,..., X16 218, that start at the upper left-hand corner and move left to right on every row to the bottom right-hand corner.
The codebook 214 of the present invention comprises M predetermined sixteen-element vectors, P1, P2, P3,..., PM 220, that correspond to common patterns found in the population of images. The indexes I1, I2, I3,..., IM 222 refer respectively to the patterns P1, P2, P3,..., PM 220.
Pinding a best-fit pattern ~rom the co~ho~k 214 requires comparing each input block with every pattern in the ro~hnnk 214 and aelecting the index that corresponds to the pattern with the minimum squared error summed over the 16 elements in the 4 x 4 block. The optimal code, C, for an input vector, X, i5 the index j such that pattern P~
satisiies:

r(X~~P~3)21-min ~ rtX~-P~
iol 16 ~ k l~O l 16 where: X, is the ith element of the input vector, X
and P1~ is the ith element of the VQ pattern P~ .
The , - ri snn equation finds the best match by selecting the minimum error term that results from comparing the input block with the no~honk patterns. In other words, the AVQ 134 calculates the mean sguared error term associated with each pattern in the cn~hook 214 in order to determine wog~/o~g~ 2 1 9 ~1 1 0 PCT~59~/08827 which pattern in the co~hor,k 214 has the minimum squared error (also referred to as the minimum error). The error term is the mean square error produced by subtracting the pattern element P~ from the input block element Xi, squaring the result and dividing by sixteen (16).
The process of searching for a matching pattern in the ~od~hork 214 is time-consuming. The AVQ 134 of the preferred ~mhc~; t accelerates the pattern - -trh;n5 process with a variety of techniques.
First, in order to find the optimal codebook pattern, the AVQ 134 compares each input block term X1 to the corr~qpon~inrj term in the cod~hook pattern P; being tested and calculates the total squared error for the first co~Phork pattern. This value is stored as the initial minimum error.
For each of the other patterns P~ = P2lP3l.. ~PM~ the AVQ 134 subtracts the X and Pl~ terms and squares the result. The AVQ 134 compares the resulting squared error to the minimum error. If the squared error value is less than the minimum error, the AVQ 134 continues with the next input term X2 and computes the squared error associated with X2 and P~. The AVQ 134 adds the result to the squared error of the first two terms. The AVQ 134 then compares the ar l~te~ squared error for Xl and Xz to the minimum error. If the A~ _ lAted squared error i9 less than the minimum error the squared error r~l cl-l ation continues until the AVQ 134 has evaluated all 16 terms.
If at any time in the comparison, the Acc~ ed squared error for the new pattern is greater than the minimum squared error, the current pattern is ; ~;~tPly rejected and the AVQ 134 discontinues calculating the squared error for the r~m~;n;nrj input block terms for that pattern. If the total squared error for the new pattern is less than the minimum error, the AVQ 134 replaces the minimum error with the squared error from the new pattern before making the comparisons for the re~~ining patterns.

Also, if the accumulated squared error for a particular codebook pattern is less than a pre-determined threshold, the codebook pattern is immediately accepted and the AVQ 134 quits testing other codebook patterns. Furthemore, the codebook patterns in the present invention are ordered according to the frequency of matches. Thus, the AVQ 134 begins by comparing the input block with patterns in the codebook 214 that are most likely to match. Still further, the codebook patterns are grouped by the sum of their squared amplitudes. Thus the AVQ 134 selects a group of similar codebook patterns by summing the squared amplitude of an input block in order to determine which group of codebook patterns to search.
Besides improving the time it takes for the AVQ 134 to find an optimal codebook pattern, the AVQ 134 includes a set of codebooks 214 that are adapted to the input blocks (i.e., codebooks 214 that are optimized for input blocks that contain DCT residual values, high resolution residual values, etc.). Finally, the AVQ 134 of the preferred embodiment, adapts a codebook 214 to the source image 100 by devising a set of new patterns to add to a codebook 214.
Therefore, the AVQ 134 of the preferred embodiment has three modes of operation: 1) the AVQ 134 uses a specified codebook 214, 2) the AVQ 134 selects the best-fit codebook 214, or 3) the AVQ 134 uses a combination of existing codebooks 214, and new patterns that the AVQ 134 creates. If the AVQ 134 creates new patterns, the AVQ 134 stores the new patterns in the VQCB data segment 223.
The Compressed File Format FIGs. 20a and 20b illustrate the segmented architecture of the data stream 118 that results from transmitting the compressed file 104. The segmented architecture of the compressed file 104 in the preferred embodiment allows layering of the compressed image data. Referring to FIG. 2, the layering of the compressed file 104 allows the decoder 110 to display the thumbnail miniature 120, the splash image wo 96/02Rg~ 2 1 9 5 1 1 0 rcT/Il~9~l08827 122 and the standard image 124 before the entire c~ Lcssed file 104 is transferred. As the decoder llO receives each successive layer of , -ntq, the decoder 110 adds additional detail to the displayed image.
In addition to layering the compressed data, the segmented architecture allows the decoder llO of the preferred: ' '; nt: 1) to move from one segment to the next in the stream without fully decoding segments of data, 2) to skip parts of the data stream 118 that contain data that is llnn~c~ss~ry for a given rendition of the image, 3) to ignore parts of the data stream 118 that are in an unknown format, 4) to process the data in an order that is configurable on the fly if the entire data stream 118 is stored locally, and 5) to store different layers of the compressed file 104 separately from one another.
As shown in FIG. 20a, the byte arrangement of the data stream 118 and the compressed file 104 includes a header segment 400 and a normal segment 402. The header segment 400 contains header information, and the normal segment 402 contains data. The header segment 400 is the first segment in the compressed file 104 and is the first segment transmitted with the data stream 118. In the preferred o~ , the header segment 400 is eight bytes long.
As shown in FIG. 20b, the byte arrangement of the header segment 400 includes a byte 0 406 and a byte 1 408 of the header segment 400. Byte 0 406 and byte 1 408 of the header segment 400 identify the data stream 118. Byte 1 408 also indicates if the data stream 118 contn;nq image data (indicated by a "G") or if it con~ i n~ resource data ~indicated by a "Gn). Resource data includes color lookup ~ tables, font information, and vector ~uantization tables.
Byte 2 410, byte 3 412, byte 4 414, byte 5 416, byte 6 418 and byte 7 420 of the header segment 400 specify which encoder 102 created the data stream 118. As new encoding methods are added to the encoder 102, new versions of the encoder 102 will be sold and distributed to decode the data wog6//l2xgs ') t q 5 1 ~ G ~._JIU..,~ I

encoded by the new methods. Thus, to remain comp~tible with prior encoders iO2, the decoder 110 needs to identify which encoder 102 generated the compressed data. In the preferred ~mhn~;-~nt~ byte 7 420 identifies the encoder 102 and byte 2 410, byte 3 412, byte 4 414, byte 5 416, and byte 6 418 are reserved for future enhallc~ -c to the encoder 102.
FIG. 21 illustrates the normal segment 402 as a ~e~a .Ice of bytes that are logically separated into two sections: an identifier section 422 and a data section 424. The identifier section 422 precedes the data sectio~ 424. The identifier section 422 specifies the size of the normal segment 402, and identifies a segment type. The data section 424 onnt~;nR information about the source image 100.
The identification section 422 i9 a sequence of one, two, or three bytes that identifies the length of the normal segment 402 and the segment type. The segment type is an integer number that specifies the method of data ~nro~ing.
The compressed file 104 ccnt~n~ 256 possible segment types.
The data in the normal segment 402 is forr-tt~d according to the segment type. In the preferred ~mh~ t, the normal 8~ c 402 are optimally formatted for the color palette, the Huffman bitstreams, the Huffman tables, the im~ge panels, the codebook information, the vector dequantization tables, etc.
For example, the file format of the preferred ~mho~;
allows the use of different Huffman bitstreams such as an 8-bit Huffman stream, a 10-bit Huffman stream, and a DCT
Huffman stream. The encoder 102 uses each Huffman bitstream to optimize the compressed file 104 in response to different image types. The identification section 422 identifies which Huffman encoder was used and the normal segment 402 cnnt~;
the compressed data.
FIG5. 22a, 22b, 22c, and 22d illustrate the layering and interlea~ing of the plurality of data ~_ tS 166 in the 3~ compressed file 104 of the preferred ~mhQ~;m~nt. The plurality of data segmenSs 166 in the compressed file 104 are W096~0289s 2 ~ 9 5 1 1 0 PCTI~S~108827 interleaved ba5ed on the user deiined playback model 261 as follows: 1~ as a single-pass, non-panellized image (FIG.
22a), 2) as a single-pass, p~nP~1;7~ image (FIG. 22b), 3) as two layers comprising the th~ n~;l miniature 120, and the sharp image 125 (~IG. 22c) and 4) as multiple layers comprising the tl ' 'il m;n;At~l~e 120, the standard image 124, and the sharp image 125 (FIG. 22d).
Block diagram 426 in FIG. 22a shows the compressed file format for the single-pass, non-panellized image. The compressed file 104 begins with the header, the optional color palette and the resource data such as the tables and Huffman Pnco~;ng information. The plurality of data se~ -- c 166 are not interleaved or layered. Thus, the decoder 110 must receive the entire compressed file 104 before any part of the source image 100 can be displayed.
Block diagram 428 in FIG. 22b shows the compressed file 104 for the single-pass, p~nelliz~d image. The plurality of data se_ ts 166 are interleaved panel-by-panel, so that all of the se_ R for each panel are contiguously transmitted.
The decoder 110 can expand and display a panel at a time until the entire compressed file 104 is ~Yr~n~d.
Block diagram 430 in FIG. 22c shows the compressed file format of the thn~hn~il miniature 120, the splash image 122 and the final or sharp image 125. The plurality of data ~__ '~ 166 are interleaved panel-by-panel and the resolution components for the th~mhn~;l miniature 120 and splash image 122 exist in the first layer, the panels for the final image exist in the second layer. The first layer includes selected portions of the plurality of data s _ R
166 that are needed to decode the panels of the ~
miniature 120 and splash image 122. Thus, the compressed - file 104 only stores the low detail color ~ o~n~5 (U~CA
data segment 246, the XRCA data segment 248), the DC terms 201 and as many as the first five AC terms 200 in the first layer. The number of AC terms 200 depends on the user-selected quality of the ~' ' -;1 miniature 120.

. _ ~096~0289~ ~l9 5 ll ~ PC~ ~/(t8827 The plurality of data s~_ ~c 166 in the ~irst layer are also interleaved panel-by-panel to allow the ~
miniature 120 and splash image 12Z to be decoded a panel at a time. The second layer c~n~;"C the L. -ining plurality of data segments 166 needed to expand the compressed ~ile 104 into the final image. The plurality of data segments 166 in the second layer are also interleaved panel-by-panel.
Block 43Z irl~FIG. 22d 8hows the ~ s3ed file format o~ the t~llr~n~;l image 120, the splash image 122, the layered standard image 124, and the sharp image 125. The ~h"mhn~il miniature 120 and splash image 122 are arranged in the ~irst layer as described above. The rPm~in;ns data segments 166 are layered at di~ferent quality levels. The multi-layering is accomplished by layering and interleaving panel infor~-ti~ soci~ted with the V22 data segment 258 (high resolution residual). The multiple layers allow the display of all the panels at a particular level of detail before ~co~;"g the panels in the next layer.
The Decoder FIG. 23 illustrates the decoder 110 of the present invention. The decoder 110 takes as input the compressed data stream 118 and expands or decode3 it into an image for viewing on the display 112. As ~P1A;n~d a~ove, the compressed file 104 and the transmitted data stream 118 include image __ s that are layered with a plurality of panels 433. The decoder 110 expands the plurality of panels 433 one at a time.
As illustrated in FICi. 24, the decoder 110 expand~ the compres~ed file 104 in ~our steps. In a ~irst step 434, the decoder 110 expands the first layer of image data in the comFressed ~ile 104 or the data strea~ 118 into a Ym miniature 436, a Um miniature 438, and an Xm miniature 440.
}n a second step 44a, the decoder 110 u~es the Ym miniature 436, the Um miniature 438, and an ~m miniature 440 to generate the th-~n~ ni~tll~e 120, and the Gplash image 122. In a third step 444, the decoder 110 receives a second Wos~/02xg5 2 1 9 ~ r ~ 7 layer of image data and generates the higher detail panels ' 445 needed to expand the ~h~ m;r;~t~re 120 into a L standard image 124, a fourth step 446 the decoder 110 ~ receives a third layer of image data to generate higher detail panels to enhance the detail of the standard image in order to create an enhanced image 105 that corresponds to the source image 100.
FIG. 25 illustrates the elements of the first step 434 in which the decoder 110 expands the AC terms 200, the DC
terms 201, the URCA data segment 246, and the XRCA data segment 248 into the Ym miniature 436, the Um miniature 438, and Xm m;ni~tnre 440. The first step 43g includes an inverse Huffman encoder 458, an inverse DPCM 476, a de~uantizer 450, a cmmhin~r 452, an inverse DCT 476, a demultiplexer 454, and an adder 456.
The decoder 110 then separates the DC terms 201 and the AC terms 200 from the URCA data segment 246 and the XRCA data segment 248. The inverse Xuffman encoder 458 decompresses the first layer of the data stream 118 which includes the AC
terms 200, the URCA data segment 246, and the XRCA data segment 248. The inverse DPCM 476 further expands the DC
terms 201 to output DC terms 201'. The dequantizer 450 further expands the AC terms 200 to output AC terms 200' by multiplying the output AC terms 200~ with the ~l~rt;7~ti~n factors 478 in the c,uantization table Q 202 to output 8 x 8 DCT coefficient blocks 482. The ~l~n~;z~tion table Q 202 is stored in the CS data segment 204 (not shown).
The c ~;n~r 452 c ~;n~c the output DC terms 201' with the 8 x 8 DCT coefficient blocks 482. The decoder 110 sets the inverse DCT factor 480, and the inverse DCT 476 outputs the DCT coefficient blocks 482 that correspond to the Ym ~ miniature 436 that is l/256th the size of the original image.
~ The demultiplexer 454 separates the inverse Huffman encoded URCA data segment 246 from the XRCA data segment 248.
The inverse DPCM 476 then expands the URCA data segment 246 and the XRCA data segment 248 to generate the blocks that .. . . .. .. _ , . .. _ .. . , , ... .. . .. .. _ _ _ _ _ _ _ _ r W0~6/028'35 2 t 9 5 ~ t ~ rcT~s~o~27 .

correspond to the Um miniature 438 and the Xm miniature 440.
Tke adder 456 translates the blocks CO1L-S~ ;n5 to the Um ~; ni n~nre 438 and the Xm miniature 440 into blocks that correspond to a Xm m; n; nt--re 460.
5FIG. 26 illustrate5 the second step 442 in which the deooder 110 expands the Ym miniature 436, the Um m; n; ~tnre 438, and the Xm miniature 460 that the decoder 110 further inoludes the interpolator 462 that operates on the Um miniature 436, the Um miniature 438 and the Xm m;niatnre 460.
10The interpolator 462 i9 controlled by a Ym interpolation factor 484, a Um interpolation factor 486, and a Xm in~erpolation factor 496. A scaler 466 is ccntrolled ~y a Ym scale factor 490, a Um scale ~actor 492, a Xm scale factor 494. The decoder 110 further includes the replicator 464 and 15the inverse color converter. The int~rrnl~to~ 462 uses a linear interpolation process to enlarge the Ym miniature 436, the Um miniature 438, and the Xm m;n;~t~re 460 ~y one, two or four times in both the horizontal and vertical directions.
The Ym interpnl~t~on factor 484, the Um interp~ nn 20factor 486, and the Xm interpolation factor 488 control the amount o~ interpolation. The size of the source image 100 in the compressed file 104 is fixed, thus the decoder 110 may need to enlarge or reduce the ~Ypan~d image ~efore display.
The decoder 110 sets the Ym interpol~tio~ factor 484 to a 25power of 2 ~i.e., 1, 2, 4, etc.~ in order to optimize the ~o~;ng process. However, in order to display an Pypnn~pd image at the proper size, the scaler 466 scales the interpolated image to .~ te different display formats.
The interpolator 462 also expands the Um miniature 438 30and the Xm miniature 440. Like the Ym interpolation factor 484, the decoder 110 sets the Um interpolation factor 486 and the Xm interpolation factor 496 to a power of two. The decoder 110 sets the Ym interpolation factor 484, and the Um interpolation factor 486 so that the Um miniature 438 and Xm 35mi n; atn~e 460 approximate the size of the interpolated and scaled Ym miniature 436.

WO96/0289~ ~c I /~ 1 J
2'951 1~

After interpnlation, the scaler 466 enlarges or reduces the interpolated Ym miniature based on the Ym scale factor 490. In the preferred ~ '-d; t, the decoder 110 sets t_e ~ Ym interpnlatinn factor 484 80 that the interpolated Ym miniature 436 is nearly twice the size of the ~' ' -il miniature 120. The decoder 110 then sets the Ym scale factor 490 to reduce the interpolated Ym min;ature 436 to the display size of the ~h~ nR;l miniature 120. The scaler 466 interpolates the Um miniature 458 and the Xw miniature 460 10with the Um scale factor 492, and the Xm scale factor 494.
The decoder 110 sets the Xm scale factor 494, the Um scale factor 492, as necP~s~ry to scale the image to the display size.
The inverse color converter 468 transforms the 15interpolated and scaled miniatures into a red, green, and blue pixel array or a palletized image as required by the display 112. When converting to a palletized image, the inverse color converter 468 also dithers the converted image.
The decoder 110 displays the interpolated, scaled and color 20converted miniatures as the ~ miniature 120.
In order to create the splash image 122, the decoder 110 expands the interpolated Ym m;ni~tllre 436, the interpolated Um miniature 438 and the interpolated Xm min;~tllre 440 with a second interpol~;nn process that uses a Ym splash 25interpnlR~.inn factor 498, a Um splash interpolation factor 500, and an Xm splash interpolation factor 502. Like the ~;l miniature 120, the decoder 110 also sets the splash interpolation factors to a power of two.
The interpolated data are then P~pRnAPd with the 30replicator 464. The replicator 464 enlarges the interpolated - data one or two times by replicating the pixel information.
- The replicator 464 enlarges the interpolated data based on a Ym replication factor 504, a Um replication factor 506, and an Xm replication factor 508. The decoder llo sets the Ym 35replication factor 504, the Um replication factor 506, and , . . . , . . ,, . . . .. , . , _ . , . _ , .... . _ .. , . . . , ... .. , .. _ .. , _ _ _ ~v096/~ s ~ 0 Pc~ sgslo8x27 .

the Xm replication factor 5Q8 so that the replicated image is one-fourth of the display size. ,~
The inverse color con~erter 468 transforms the replicated image data into red, green and blue image data.
The replicator 464 then again replicates the red, green, and blue image data to match the display size. The decoder 110 displays the rPstllting splash image 122 on the display 112.
F}G. 27 illustrates the third step 3 in which the decoder 110 generates the higher detail panels to expand the t' ' ~; 1 miniature 120 into a standard image 124. FIG. 27 also illustrates the fourth step 446 in which the decoder 110 generates generate higher detail panels to enhance the detail of the standard image in order to create an ~nh~nc~ image 105 that corresponds to the source image 100.
The ~rCo~ng of the standard image 124 and the ~nh~nced image 105 requires the inverse ~uffman encoder 458, the combiner 452, the dequantizer 450, the inverse DCT 476, a pattern matcher 524, the adder 456, the interpolator 462, and an edge overlay builder 516. The decoder 110 adds additional detail to the displayed image as thc decoder 110 receives new layers of compressed data. The additional layers include new panels of the DCT data segment 208 lcrnt~;n;n~ the rr-~;n;ng AC terms 200'~, the VQ1 data segment 224, the VQ2 data segment 7.58, the ~nh~nr - lor~ion data segment 510, the VQ3 data segme~t 242, and the VQ4 data segment 244.
The decoder 110 builds upon the Ym m;r;atnre 436, the Um m;ni~tnre 438 and the Xm miniature 440 calculated for the ~' ' .;l miniature 120 by PYr~n~;ng the next layer of image detail. The next layer c~n~;nc a portion of the DCT data segment 208, the VQ1 data segment 224, the VQ2 data segment 258, the Pnh~n~ ~ location data segment 51Q, the V~3 data segment 242, and the VQ4 data segment 244 that ~LLcb~u--d to the standard image.
The inverse Kuffman encoder 458 ~r csses the DCT
data segment 208 and the VQ1 data ~egment 224 ~the DCT
residual). The cC-~inpr 452 ~ in~c the ~CT infor~ n wos6/o~9~ 21951 19 r ~

from the inverse Huffman encoder 45& with the AC terms 200 and the DC terms 201. The dequantizer 450 reverses the quantization process by multiplying the DCT quantized values 206 with the quantization factors 478. The dequantizer obtains the correct quantization factors 478 from the quantization table Q 202. The dequantizer outputs 8 x 8 DCT
coefficient blocks 482 to the inverse DCT 476. The inverse DCT 476 in turn, outputs the 8 x 8 DCT coefficient blocks 482 that correspond to a Y image 509 that is 1~4th the size of the original image.
The pattern matcher 524 replaces the DCT residual blocks 512 by finding an index to a ~-trh;~g pattern block in the co~Phook 214. The adder 456 adds the DCT residual blocks 512 to the DCT coefficient blocks 482 on a pixel by pixel basis.
The interpolator 462 interpolates the output of the adder 456 by a factor of four to create a full size Y image 520. The interpolator 462 performs bilinear interpolation to enlarge the Y image 520 horizontally and vertically.
The inverse Huffman encoder 458 decompresses the VQ2 data segment 258 (the high rPcnl~ltinn residual) and the Pnh~n~ t location data segment 510. The pattern matcher 524 uses the cn~Ph~nk indexes to retrieve the matching pattern blocks stored in the no~PhQnk 214 to expand the VQ2 data segment 258 to create 16 x 16 high resolution residual blocks 514. An ~nh~n~ ~ overlay builder 516 inserts the 16 x 16 high r~sclu~jon residual blocks into a Y image overlay 518 specified by the edge location data segment 510.
The Y image overlay 518 is the size of the original image.
The adder 456 adds the Y image overlay 518 to the full sized Y image 520.
To calculate the full sized U image 522, the inverse Huffman encoder 458 expands the VQ3 data segment 242. The pattern matcher 524 uses the codebook indexes to retrieve the matching pattern blocks stored in the rodPhonk 214 to expand the VQ3 data segment 242 into 4 x 4 rU tau4 residual blocks 526. The interpolator 462 interpolates the Um miniature 438 2 ~
wo ~6/n2sss ~ c ~
.

by a factor of f~our and the adder 456 adds the 4 x 4 rU_tau4 residual blocks S26 to the interpolated Um m;n;~nre 438 in order to create a Um+r miniature 528. The interpolator 462 interpolates the Um+r miniature 528 by a factor of four to create the full sized U image 522.
To calculate the full 5ized X image 530, the inverse Huffman encoder 453 expands the VQ4 data segment 244. The pattern matcher 524 uses the co~ohock indexes to retrieve the matching pattern blocks stored in the ~o~ohon~ 214 to expand the VQ4 data segment 244 into 4 x 4 rX_tau4 residual ~locks.
The decoder 110 then translates the 4 x 4 rX tau4 residual blocks 532 into 4 x 4 r~_tau4 residual blocks 534. The interpolator 462 interpolates the Xm min~nture 460 by a factor of four, and the adder 456 adds the 4 x 4 rV tau4 residual blocks 534 to the interpolated Xm miniature 460 in order to create a Xm+r miniature 536. The interpolator 462 interpolates the Xm~r miniature 536 ~y a factor cf four to create the full sized X image 530.
The decoder stores the full sized Y image 520, the full sized U image 522, and the full sized X image 530 in local memory. The in~erse color converter 468 then converts the full s.ized Y image 520, the full sized U image 522, and the full sized X image 530 into a full sized red, green, and blue image. The panel is then added to the displayed image. This process is completed for each panel until the entire source image 100 is o~rnn~
In the forth step the decoder 110 receives the third image layer and ~uilds upon the full sized Y image 520, the full sized U image 522, and the full sized X image 530 stored in local memory to generate the onh~n~od image 105. Ihe third image data layer cnnt~in~ the L~ ~;n;n5 portion of the 3CT data negment 208, the VQl data segment 224, the VQ2 da~a segment 258, the onh~nl ' locatjnn data segment 510, the VQ3 data seg~ent 242, and the VQ4 data segment 244 that coLLc~vnd to the onh~nred image 105.

W096/02895 2 1 95 I f ~ PCT~S9Sl08827 The decoder 110 repeats the process illustrated in FIG. 27 to generate a new full sized Y image 520, a new full sized U
image 522, and a new full sized X image 530. The new full sized Y image 520 is added to the full sized Y image generated in the third step 444. The new full sized U image 522 is added to the full sized U image 522 generated in the third step 444. The new full sized X image 530 is added to the full sized X image generated in the third step 444.
The inverse color converter 468 converts the full sized Y image 520, the full sized U image 522, and the full sized X image 530 into a full sized red, green, and blue image.
The panel is then added to the displayed image. This process is completed for each panel until the entire enhanced image 105 is r-~Tt~n~td.
The inverse DCT 476 of the preferred r~~to~i -t is a mathematical transformation for mapping data in the time ~or spatial) domain to the frequency domain, based on the ~cosine" kernel. The two ~; ~;nn~l version operates on a block of 8 x 8 ~ s.
Referring to FIG. 9, the compressed DCT coefficients 198 are stored as DC terms 201 and AC terms 200. In the preferred '~ , the inverse DCT 476 as shown in FIGs.
25 and 27 ; n~5 the process of transformation and decimation in the fLeyu~ and spatial domains ~fley~ene~
and then spatial) into a single operation in the frequency domain. The inverse DCT 476 of the present invention provides at least a factor of 2 in implementation efficiency and is ut;l;7ed by the decoder 110 to expand the ~hl~mhn;t;l miniature 120 and splash image 122.
The inverse DCT 476 receives a seriu~nce of DC terms 201 and AC terms 200 which are frequency coefficients. The high frequency terms are arbitrarily discarded at a pr~ef; n~d frer~uency to prevent ~1;Ft~;ng. The discarding of the high ; frequency terms is er~uivalent to a low pass filter which passes everything below a predefine frey-uency while attPnn;~t;ng all the high frequencies to zero.

WO 961021J9~ 2, ~ O ~ PCT111895108827 The equation for an inverse DCT is:

fy~x = 4~ CU~ C~- ~V,u- cos~ 16 l~ u ~ cos(2 Y+

where u:eO7 v:=o7 x:=07 y:=07 Ca: =_~ (u=O) + [u,~O) The inverse DCT 476 generate3 an 8 x 8 output matrix that is decimated to a 4 x 4 matrix then to a 2 x 2 matrix.
The inver~e DCT 476 then decimates the output matrix by subsampling with a filter. After subsampling, an averaging filter smooths the output. Smoothing i8 accomplished by using a running average of the adjacent elements to form the output.
For example, for a 4 x 4 output matrix the 8 x 8 matrix from the inver~e DCT 476 is sub-divided into sixteen 2 x 2 regions, and adjacent elements within each 2 x 2 region i8 averaged to form the output. Thus the sixteen regions form a 4 x 4 ~atrix output.
Fcr a 2 x 2 output matrix, the 8 x 8 matrix from the inverse DCT 476 is sub-divided into four 4 x 4 regions. The adjacent ~l ~m~n~ ~ within each 4 x 4 matrix region are a~eL~_d to form the output. Thus, the four regions form a 2 x 2 matrix output.
In addition, since most of the AC coefficients are zero, the inverse DCT 476 i8 ~ ;ed by , ~ In;n~ the inverse DCT equation6 with the averaging and the ~eC~r~ n equations. Thus, the creation of a 2 x 2 output matrix where a given X is an 8 x 8 input matrix that consists of DC terms 201 and AC terms 200 is stated ~ormally as:

WOg6/02895 2, 9 5 1 1 ~ PC~/USg~l08827 XO O xO,l ~ ~ ~ ~ O O
Xl o Xl 1 ~ ~ ~ ~ ~ ~
o o o o o o o o X:- ~ ~ ~ ~ ~ ~ ~ ~
o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o o All elements with i or j greater than l are set to zero.
The setting of the high frequency index to zero is equivalent to filtering out the high frequency coefficients from the signal.
Assigning Y as the 2 x 2 output matrix, the decir~t~d output is thus equal to:

Yoo:=xo~o+(k~ ~xo~l))+(kl (xl~o))+(k2 (x~
Yo~l:=xo~c-(kl-(xo~l~)+(kl-(xl~o))-(k,-(xll)) y~,O eXO,O+ (kl- (Xo,l) ) ~ (kl- (Xl.O) ) - (ka- ~Xl 1) ) Y11 =XCO-(kl-(XO.1))-(k1 (Xl.O))+(k2 (Xll)) where k~ (c~l)+c(3)+c~5)~c(7)) ( 16) k2:=(kl)2 The creation of a 4 x 4 output matrix where a given X is an 8 x 8 input matrix that consists of DC terms 201 and AC
terms 200 is stated formally as:
All ,~ t~ with i or j greater than 3 are set to zero.
It is possible to ; l~- the calculations in the 2 x 2 case where the two dimensional e~uation is ~ c ~ sed downward; however, performing the one dimensional approach twice reduces complexity and decreases the calculation time.
In the preferred embodiment, the inverse DCT 476 computes an additional one-dimensional row inverse DCT, and then a one-2 l-q ~
WO 96/0~895 ~ PcT~Ts9llo882J
*

Xoa XOl ~ 2 Xo~ O ~ ~ ~
Xl, ~ Xl, 1 Xl, 2 X" ~ O O O O
X, C X2,1 X2.2 X2.3 ~ ~ ~ ~
X = X~o X~,l X,,z X3 ~ O O O O
O O O O O O O O
O O O O O O O O
O O O O O O O O
O O O O O O O O
-qinn~l column inverse DCT.
The equation for a one ~i cion~l case ig as follows:
~ldout~ are the elements of the one ~; ~inn~l case) ldoutO:=inO+(k~-inl~+~k2- in~ + ~k3-in3) ldoutl:=inO+(k4-inl)-lk~-in2)-tk5-in3 ldout~:=inO-~k4-in1~-lkz~ in2) + (k5-in~
ldout3:=inO-(k1- inl) + tkz ~ in~-(k3-in3~

k = c(l~+c(31 ~ = c~2~+c~6) k = c~31-ct71 k = c(5~+c(7l ~ = c~5~+c~1 where c~k) is defined as in the 2 x 2 output matrix.
The scaler 466 of the pre~erred ~h~ L is also shown in FIG. 27. More specifically the scaler 466 nt;7;7~q a g~n~ i7ed routine that gcales the image up or down while reducing ~ qins and reconstruction noise. ~icaling can ke descri~ed as a combination of decimation and interpolRt;on.
The decimation step consists of downRI l;ng and using an anti-aliasing filter; the interpolation step conaists o~
pixel filling using a reconstruction filter for any scale factor that can ~e ~ ed hy a rational number P/~
where P and Q are integers ~csoc;~ted with the interpolation and ~n;~-t;nn ratios.

WO9G~0289~ 2 1 9 5 1 1 0 The scaler 466 decimates the input data by dividing the source image into the desired number of output pixels and then radiometrically weights the input data to form the n~Cpsc~ry output. FIG. 28 illustrates the scaler 466 with an input to output ratio of five-to-three in the one ~;, c;nn~l case. Input pixel Pl 538, pixel P2 540, pixel 23 542, pixel P, 544, and pixel Ps 546 contain different data values. The output pixel Xl 548, pixel Xl 550, and pixel X3 552 are computed as follows:
G Pl + ( P3)(0.67) X2 = (P2)(0.33~ + P3 + (P,) (0.33) X3 ' (P,) (0.66) + Ps The ~cir~ted data is then filtered with a reconstruction filter and an area average filter. The reconstruction filter interpolates the input data by replicating the pixel data. The area average filter then area averages by integra~; n5 the area covered by the output pixel.
If the output ratio is less than 1 (i.e, interpolation is n~r~Cs~ry), the interpolator 462 utilizes bilinear interpolation. FIG. 29 illustrates the operation of the h; 1; n~ar interpolation. Input pixel A 554, input pixel B
556, input pixel C 558, and input pixel D 560, and reference point X 562 are interpolated to create output 564. For this example reference point X 562 is ~ to the right of pixel A
554 and 1-~ to the right of pixel C 558, and reference point X 562 is ~down from pixel A 554 and 1-~ up from pixel B 556.
Reference point X 562 is stated formally as:
X G (1-~) * ( (1-~ ~A~B) + ~*((l-~)~C+~D).
The Im~e Classifier The preferred ~ 'i~nt of the image classifier 152 is illustrated in FIG. 8. More specifically, the image classifier 152 uses fuzzy logic techniques to determine which compression methods will optimize the compression of various regions of the source image 100. The image classifier 152 adds intelligence ~o the encoder 102 by providing the means W0!~/02~9~ 2 ~ ~ 5 ! I Q PCT~IS9~/0~827 .

to decide, based on statistical characteristics of the image, what "tools" (combinations of compression methods~ will best U~ L~S the image.
The source image 100 may include a combination of different image types. For example, a photograph could show a person framed in a graphical border, wherein the person i8 wearing a shirt that C~nt~in~ printed text. In order to optimize the ~ , ession ratio for the regions of the image that contain different image types, the image classifier 152 subdivides the source image lO0 and then outputs the control script 196 that speci_ies the correct c , ~ssion methods for each region. Thus, the image classifier 152 provides a customized, "most-e_ficient" compression ratio for multiple image ty~es.
The image classifier 152 uses fuzzy logic to infer the correct -~saion steps from the image content. Image content is inherently "fuzzy" and is not amenable to simple discrete ~l~ccif;cat;on. Images will thus tend to belong to se~eral ~classes." For example, a classification scheme might include one class for textual images and a second class ~or photographic images. 9ince an image may comprise a photograph of a person wearing a shirt ~ont. ~; n i ng printed text, the image will belong to both classes to varying degrees. ~ikewise, the same image may be high contrast, ''grainy~U black and white andJor high activity.
Fuzzy logic is a set-theoretic approach to classification of objects that assigns degrees o_ - -rqhip in a particular class. In classical set theory, an object either belongs to a set or it does not; membership i8 either 100% or 0%. In fuzzy set theory, an object can be partly in one set and partly in another. The fuzziness is of greater significance when the content must be categorized for the purpose of applying a~Lu~Liate compression techni~ues.
Relevant categories in image compression include t photographic, graphical, noisy, and high-energy. Clearly the boundaries of these 3ets are not sharp. A scheme that W09C/02~95 ~1 9 51~ ~ PCT~lS95/1)~27 matches a~Lo~Liate compre3sion tools to image content must reliably distinguish between content types that require different compression techniques, and must also be able to judge how to blend tools when types re~uiring different tools overlap.
FIG. 30 illustrates the optimization of the c~ Lession process. The optimization process analyzes the input image 600 at different level9. In the top level analysis 602 the image classifier 152 ~: , Eea the image into a plurality of subimages 604 (regions) of relatively h~ Lq content as defined by a classification map 606. The image classifier 152 then outputs the control script 196 that specifies which compression methods or ~tools" to employ in compressing each region. The compression methods are further optimized in the second level analysis 608 by the ~-nh_nr~ t analyzer 144 which determines which areas of an image are the most visually important (for example, text and strong lnm;n-nre edgesi. The compression methods are then further optimized in the third level analysis 610 with the optimized DCT 156, AVQ 134, and adaptive methods in the channel encoder 168.
The second level analysis 608 and the third level analysis 610 determine how to adapt parameters and tables to a particular image.
The fuzzy logic image classifier 152 provides adaptive ~intelligent" branching to a~ pLiate compression methods with a high degree of computational simplicity. It is not feasible to provide the encoder 102 with an exhaustive mapping of all posc;hl~ rc~h;n-tirnR of inherently non-linear, discr,st~nu~~us, multi~ ion~1 inputs (image mea~uL, tR) onto desired control scripts 196. The fuzzy logic image classifier 152 reduces such an analysis.
Furthermore, the fuzzy logic image classifier 152 ensures that the encoder 102 makes a smooth transition from one compression method ~as defined by the control script 196) to another compression method. As image content becomes "more like" one class than another, the fuzzy controller ~096/~2xg~ ~ l 351 ~
.

avoids the discrete switching from one compression method to another compression method.
The fuzzy logic image ~1 ~CBj ~; Pr 152 receives the image data and de~Pr~i n~C a set of image mea~uL~ ~c which are mapped onto one or more input sets. The image classifier 152 in turn map5 the input sets to COLL 7~ U~in5 output sets that identify which compression method9 to apply. The output sets are then blended (U~fll7z;f;ed~) to generate a control script 196. The process of mapping the input image to a particular control script 196 thus require3 three sets of rules: 1) rules for mapping input mea~uL~ ~2 onto input sets ~e.g., degree of membership with the nhigh activity~ lnput set = Ft average of AC coefficients 56-63]); 2) rules for mapping input sets onto output sets (e.g., if graphical image, use DCT quantization table 5 and 3) rules for defuzzification that mediate between membership of several output sets, i.e., how the membership of more than one output sets, should be blended to generate a single control script 196 that controls the compression process.
Still further, the fuzzy logic rule base is easily ~-;n~A;n~. The rules are modular. Thus, the rules can be understood, researched, and modified independently of one another. In addition, the rule bases are easily modified allowing new rules to make the image classifier 152 more sensitive to different types of image content. Furthermore, the fuzzy logic rule base is P~Pn~hle to include additional imaye types specified by the user or learned using neural network or genetic P1U~L ! n~ methods.
FIG. 31 illustrates a block diagram oE the image classifier 152. In block 612 the image classifier 152 determines a set of input mea~u.~ ~ 614 that correspond to the source image 100. In order to determine the input mea~uLe s 614, the image classifiêr 152 sub-divides the source image 100 into a plurality of blocks. To conserve comp~ t;nnc, the user can enable the image classifier 152 to W096l0289~ 2 i 9 5 1 1 ~ PCT~iS9~108827 select a random sample of the plurality of blocks to use as the basis of the input mea~u" s 614.
The image cl~s~ r 152 determines the set of input mea~u~l-~ q 614 from the plurality of blocks using a variety lof methods. The image classifier 152 calculates the mean, the variance, and a hi3togram of all three color ~5.
The image classifier 152 performs a discrete cosine transform of the image blocks to derive a set of DCT c~m~n~ntq wherein each DCT coefficient is histoyr ~ to provide a frequency domain profile of the i~ uted image. The image classifier 152 performs special convolutions to gather information about edge content, texture content, and the efficacy of the Reed Spline Filter. The image classifier 152 derives spatial domain blocks and matches the spatial domain blocks with a special VQ-like pattern list to provide information about the types of activity con~i n~ in the picture. Finally, the image classifier scans the im.~age for common and possibly loc~l i7e~ features that bear on the compressibility of the image (such as typed text or scanning artifacts).
In block 616 the image classifier 152 analyzes the input meaauL, c 614 generated in block 612 to determine the extent to which the source image 100 belongs to one of the fuzzy input sets 618 within the input rule base 620. The input rule base 620 identifies the list of image types. In the preferred ~mhod;r ~, the image classifier 152 c~nt~;rC
input sets 618 for the following image types: scale, text, graphics, photographic, color depth, degrel of activity, and special features.
Membership in the activity input set ahd the scale image input set are determined by the input med~uL~ tc 614 for ! the DCT coefficient histogram, the spatial statistics, and the convolutions. r' '-rsh;p in the text image input set and the graphic input set correspond to the input mea~u~ q 614 for a linear combination of high frequency DCT
coefficients and gaps in the lnm;n~nce histogram. The W09~)/0289~ ~t95l la ~ PcTrUsg~8877 photographic input set is the complement of the graphic input set.
The color de~pth input set includes four classifi~ mnc gray sca}e images, 4-bit image9, 8-bit images and 24-bit images. The color depth input corresponds to the input meaa~l tS 614 for the Y, U and X co~or , _ -ntg. A
small dynamic range in the U and X color c. ~,nP~ indicate~
that the picture is likely to be a gray Dcale image, while gaps in the Y , L hi5togram reveals whether the image lo was once a palettized 4-bit or 8-bit image.
The special feature input set correspond9 to the input measurements 61~ for the common or lo~li 7~d fcatures that bear on the . ~C~ih1lity of the image. Thus the special feature input set identifies such artifacts as black borders caused by inaccurate scanning and graphical titling on a photographic image.
In block 622 the image clas5ifier 152 maps the input sets 618 onto autput sets 624 according to the output rule base 626. The image classiiier 152 applies the output rule base 626 to map each input set 618 onto membership of each fuzzy output set 624. The output sets 624 determine, for example, how many CS terms are stored in the CS data segment 204 and the optimization of the VQ1 data segment 224, the VQ2 data segment 258, the VQ3 data segment 242, the VQ4 data segment 244, and the number of VQ patterns to use. The output sets also determine whether the encoder 102 performs an optimized DCT 136 and which ~n~; 7~tion tables Q 202 to apply.
For the second Reed Spline Filter 225 and the third Reed Spline Filter 227, the output sets 62g adjust the ~ci~~ti~n factor tau and the orientation of the kernal function.
Finally, the output sets determine whether the channel encoder 168 utilizes a fixcd Huffman encoder, and adaptive Huffman encoder or an ~Z1. FIG. 33 illustrates se~eral ~Y~mp~c of mapping irom input measurements 614 to input sets 618 to output sets 624.

W096/02895 2, 9 5 1~0 PCT~iS95/tiS827 .

Referring to FIG. 31, in block 626 the image classifier constructs a classification map 628 based upon membership within the output sets. The r~ ACRif; cation map 628 identifie5 independent regions in the source image 100 that ~ 5 are ;nCi~p~ )ltly compressed. Thus the image classifier 152 - identifies the regions of the image that belong to c -tihle output sets 624. These are regions that contain relatively i..,n,~ n~,u5 image contrast and call for one method or set or complementary methods to be applied to the entire region.
In block 630 the image r1 ~qc; fier 152 converts (defuzzifies)l based on the defuzzification rule base 632, the '- ~hip of the fuzzy output sets 624 of each independent region in order to generate the control script 196. The control script 196 r'r~nt~;nq instructions for which compression methods to perform and what parameters, tables, and optimization levels to employ for a particular region of the source image 100.
The i~nh~nz nt Analvzer The preferred Pmhn,ii. -t of the enhancement analyzer 144 is illustrated in FIGs. 4, 15 and 30. More specifically, the Pnh~n~ t analyzer 144 P~m;nPc the Y tau2 miniature 190, the U_tau2 miniature 192, and the X_tau4 miniature 228 to determine the pnh~n, t priority of image blocks that coLLe~..d to 16 x 16 blocks in the original source image 100. The Pnh~nC t analyzer 144 prioritizes the image blockc by l) calcul~t;nr3 the mean of the Y_tau2 miniature 190, the U_tau2 miri~tl7re 192, and the X tau4 min;~tn~e 228, and 2) testing every color block against a normalized threshold value E 252 for the Y tau2 miniature 190, the U tau2 miniature 192, and the X_tau4 miniature 228. A list of blocks that exceed the threshold value E 252 are added to the Pnh~n~ t list 250.
' The Pnh~"c~ t ana}yzer 144 determines a threshold ~ value Ey for the Y tau2 miniature 190, a threshold value E~~ 35 for the U_tau2 miniature 192, and a threshold value Ex for the X_tau4 miniature 228. Once the enhancement analyzer 144 WO t36~0~895 2 1 9 ~ t 1 ~ 7~
.

computes the threshold value Ey~ the threshold value Ev anh the threshold value E~, the t~n~-~n~ ~r~lt analyzer 144 tests each 8 x 8 Y_tau2 block, each 4 x 4 U_tau4 block and each 4 x 4 X tau4 block ~each block colLes~oLlds to a 16 x 16 block in the source image lOo~ as follows:
Every pixel in the test block i5 convolved with the following filter masks:
M~ 1,-2,-1,0,0,0,1,2,1}
M2 = {1,0,-1,2,0,-2,1,0,-1~
to compute two statistics Sl and S2.
Masks M, and ~2 are convolved with a three by three block of pixels centered on the pixel being tested. The three by three block of pixels is represented as:
Xll XlZ X13 X2~ X2, X2~
X~ X3~ X~s where the pixel x22 is the pixel being tested. Thus the statistics are calculated with the following equations:
Sl = ~-1 'Xll) - (2'X12) - (1 'X13) + ~1-X31) -I (2 'X32) + (l'x33 S2 -- ~l xll)-~xl3)+(2~x2l) -(l x,3)t(1 .c3l)-~l~x~3) If Sl plus S2 is greater than the threshold value Ey for a particular 8 x 8 Y_tau2 block, the ~n~ -nt analyzer 144 adds the 8 x 8 Y_tau2 block to the ~nh~n~ t list 250. If Sl plus S, is greater than the threshold value E~ for a particular 4 x 4 U_tau4 block, the ~nl ~~ ~~~ analyzer 144 adds the 4 x 4 U_tau4 block to the ~nh~n' ~ list 250. If Sl plus S2 is~greater than the threshold value Ex for a particular 4 x 4 x tau4 block the ~nh~n~ ' analyzer 144 add~ the 4 x 4 X_tau4 block to the ~n~nt ~ list 250.
Tn ~At1;tion to thet-nh~nl t list 250, the t~nh~n, analyzer 144 also uses the DCT coefficients 198 to identify visually unimportant "texture" regions where the c~ L~ssion xatio can be increased without signi~icant loss to the image ~uality.

w096/02x95 ~ 1 ~ 5 1 1 ~
.

O~timized DCT
The preferred ~ho~i~cnt of the optimized DCT 136 is ~ illustrated in FIC. 9. More specifically, the optimized DCT
a 136 uses the quantization table Q 202 to assign the DCT
coeffi~i~nts (DC terms 200 and AC terms 201) quantization step values. In addition, the quantization step values in the qtltnt;7zttion table Q 202 vary ~ n~;ng on the optimized DCT 136 operation mode. The optimized DCT 136 operates in four DCT modes as follows: 1~ switched fixed uniform DCT
qnztnt;7~tion tables that correspond to image classification, 2~ optimal reconstruction values, 3) adaptive uniform DCT
quantization tables, and 4) adaptive non-uniform DCT
qnztnt;7ztti~n tables.
The fixed DCT quant;7at;~n tables are tuned to different image types, including eight standard tables corrospnn~;ng to images differing along three ~i -c;~nc photographic versus graphic, small-scale versus large-scale, and high-activity versus low-activity. In the preferred c '-'; t, additional tables can be added to the resource file 160 (not shown).
The control script 196 defines which standard table the optimized DCT 136 uses in the fixed-table DCT mode. In the fixed-table mode, quantized step values for each DCT
coefficient is obtained by linearly ~-ztnt;7.;ng each xl DCT
coefficient with the qn;tn~;7~tion value ql in qn~nt;7~t;~n table Q. The mathematical r~lztt;onch;p for the quantization .~ced~L~ is:
for i = 0, 1, ..., 63 if xl ~= 0, .. , c~=~
q~

if xl c o, W096"0289s ~-,.l~, ~!. /
.

cl=~

Reconstruction i8 also linear unless reconstruction values have bee~ comruted and stored in the CS data segment 204. ~etting r denote the de~l~n~;7~ DCT coeffici~n~q, the linear dequantization formula is:
S for i = 0, 1,...... ,63 rl - Cl ~ ql In the fixed-table DCT mode, the optimized DCT 136 can also compute the optimal reconstruction values stored in the CS data segment 204. While the DC term 201 is always calculated linearly, the CS reconstruction values represent the conditional expected value of each quantized level of each AC term 200. The CS reconstruction values are calculated for each AC term 200 by first r~lrl~lat;n~ an absolute value frequency histogra~, Hl for the ith coefficient (for i - 1, 2, ..., 63~ over all DCT blocks in the source image, N, as follows:
for ~ = o, 1, ..., N
Hi(k) = frequency (abs(x1~) = k) where xl~ - the value of the ith coefficient in the jth DCT block.
Second, the centroid of coefficient values is calculated between each quantization step. The formula for the centroid of the ith roPfficiPnt in the kth rl~nr~ ion interval is:

J'q ~ ~]

where W096/02895 2 ~ 9 5 1 7 ~ PCT~S95~0R827 ~5 q Tl~k) = ~ H( j~
q -- This provides a non-linear mapping of quantized coefficients onto reconstructed vaiues as follows:
rl = CSi(ql~ for i D 1, 2, ..., 63 In the adaptive uniform DCT quantization mode, the image the classifier 152 outputs the control script 196 that directs the optimized DCT 136 to adjust a given DCT uniform quantization table Q 202 to provide more efficient compression while holding the visual quality constant. This method adjusts the DCT q~l~rt;7atinn step sizes such that the compressed bit rate (entropy~ after quantizing the DCT
coefficients is minimized subject to the constraint that the visually-weighted mean squared error arising from the DCT
tization is held constant with respect to the base quantization table and the user-supplied qn~nti7atinn parameter L
The optimized DCT 136 uses marginal analysis to adjust the DCT qll=nti7nt;nn step sizes. A "marginal rate of transformation (MRT) n i8 C_, tPd for each DCT coefficient The MRT represents the rate at which bits are '~transformed"
into (a reduction of~ the visually weighted mean squared error (VMSE). The MRT of a coefficient is defined as the ratio of 1~ the marginal change in the encoded bit rate with respect to a quantization step value q to 2~ the marginal change in the visual mean square error with respect to the quantization step value q.
MRT (bits/VMSE~ ratio is calculated as follows:

MRT (bits~VMSE) = ((~bits/~q)/(~VMSE/~q)).

~ 30 Increasing the ~-~nt;7~t;on step value q will add more bits to the representation of the corresponding DCT
coefficient. ~owever, adding more bits to the representation ~7/~5~ i~
wo g~ro28g~ r ~ 7 of a DCT coefficient will reduce the VMSE. Since the bits added to the step value g are usually transformed into VM~iE
reduction, the MRT i3 generally negative.
The MRT is calculated for all of the DCT coefficients.
The adaptive method utilized by the optimized DCT 136 adjusts the quantization step values q of the ~uantized table Q 202 by reducing the quantization step value g corresponding to the maximum M~T and increasing the quantization step value q corresponding to the minimum ~RT. The optimized DCT 136 repeats the process until the MRT is equalized across all of the DCT coefficients while holding the ~MSE constant.
FIG. 32 shows a flow chart of the process of creating an adaptive uniform DCT quantization table. In a step 700 the optimized DCT 136 computes the MRT values for all DCT
coefficients i. In step 702 the optimlzed DCT 136 compares the MRT values, if the MRT values are the same, the optimized DCT 136 uses the resulting ~nt;7at;~n table Q 202. If the MRT values are not equal, the optimized DCT 136 finds the minimum MRT value and the maximum MRT value for the DCT
coefficients i in step 706.
In step 708l the optimized DCT 136 increases the quantization step value q~O~ corresponAins to the ~inimum MRT
value and decreases the quantization step ~alue q~g~
associated with the maximum MRT value. Increa~ing qlw which reduces the number of bits devoted to the cvLL~ inr~ DCT
coefficient but does not increase VMSE appreciably.
r;ng the ~ntizatirn step value q~ increases the nu~ber of bits devoted to the .oLL-~y-~n~;n~ dCT coefficient and reduces the VMS~ significantly. The optimized DCT 136 offsets the adjustments for the quantization step values and q~ in order to keep the VMSE constant.
The optimized DCT 136 returns to step 700, where the process is repeated until all MRT values are equal. Once all of the quantization step values q are determined the resulting quantization table Q 202 is c ~ete.
The Reed SPl;n~ Filter w09~0289s ~ t 1 0 PCT~895l08827 FIGs. 34-57 illustrate a preferred embodiment of the Reed Spline Filter 13a which is a~vantageously used for the first, second and third Reed Spline Filters 148, 225, and 227. The Reed Spline Filter described in FIG. 34 - 57 is in terms of a generic image format. In particular the image input data comprises Y image input which corresponds ~or example to the red, green and blue image data in the first Reed Spline Filter 148 in the foregoing discussion. In like manner the outputs of the Reed Spline Filter 138 described as reconstruction values should be understood to correspond, for example, to the R_tau2 miniature 180, the G_tau2 miniature 182 and the B_tau2 miniature 184 of the first Reed Spline Filter 138.
The Reed Spline Filter is based on the a least-mean-square error (LMS)-error spline approach, which is ~Yt~n~Ahle to N dimensions. One- and two-dimensional image data compression utilizing linear and planar splines, respectively, are shown to have compact, closed-form optimal solutions for convenient, effective compression. The computational efficiency of this new method is of special interest, because the compression/reconstruction algorithms proposed herein involve only the Fast Fourier Transform (FFT~
and inverse FFT types of processors or other high-speed direct convolution algorithms. Thus, the compression and reconstruction from the compressed image can be extremely fast and realized in existing hardware and software. Fven with this high computational efficiency, good image quality i8 obtained upon reconstruction. An important and practical consequence of the disclosed method is the convenience and versatility with which it is integrated into a variety of hybrid digital data compression systems.
I. SPLINE PILTER UVICKVl~ll The basic process of digital image coding entails transforming a source image X into a "compressed" image Y
such that the signal energy of Y is concentrated into fewer w096l0289~ 5l~0 r~ . 7 .

elements than the signal energy of X, with some provisions regarding error. ~5 depicted in FIG. 34, digital source image data 1002 represented by an appropriate N~ ncioncl array X is supplied to compression ~lock 1004, whereupon image data X is transformed to compressed data Y' via a first generalized process represented here as ~tXj=Y'. Compressed data may be stcred or transmitted (process block 1006~ to a ~remote~l reconstruction block 1008, whereupon a second generalized process, G'(Y')=X', operates to transform compressed data Y' into a reconstructed image X'.
G and G' are not n~cs~rily processes of mutual inversion, and the processes may not conserve the full information content of image data X. Consequently, X' will, in general, differ from X, and information is lost through the coding/reconstruction process. The residual image or so-called residue is generated by supplying compressed data Y' to a "local" reconstruction process 1005 followed by a difference process 1010 which ~om~stes the residue ~X=X-X' 1012. Preferably, X and X' are sufficiently close, so that the residue ~X 1012 is small and may be transmitted, stored along with the compressed data Y~, or discarded. ~nhce~lPnt to the remote reconstruction process 1008, the residue ~X
1012 and reconstructed image X~ are supplied to adding process lOC7 to generate a restored image X'+~X=X" 1003.
In practice, to reduce computational overhead associated with large images during compression, a decimating or sllhs ,l;ng process may be performed to reduce the number of samples. Decimation i8 commonly characterized by a reduction factor 7 ~tau), which indicates a measure of image data elements to _ ~ssed data Pl ~. ~owever, one skilled in the art will appreciate that image data X must be filtered in conjunction with decimation to avoid aliasing. As shown in FI~. 35, a low-pass input filter ~ay take the form of a pointwise convolution of image data X with a suitable convolution filter 1014, preferably implemented using a matrix filter kernel. A decimation process 1016 then wo g6/02NgS 2 t 9 5 1 1 0 Pcrlu~gsl08827 produces compressed data Y', which is substantially free of aliasing prior to subsequent process steps. While the ~ convolution or decimation filter 1014 attenuates aliasing - effects, it does so by reducing the number of bits required to represent the signal. It is "low-pass'~ in nature, reducing the information content of the reconstructed image X~. Consequently, the residue QX 1012 will be larger, and in part, will offset the compression attained through decimation.
The present invention disclosed herein solves this problem by providing a method of optimizing the compressed data such that the mean-square residue <ΔX²> is minimized, where "< >" shall herein denote an averaging process. As shown in FIG. 36, compressed data Y', generated in a manner similar to that shown in FIG. 35, is further processed by an optimization process 1018. Accordingly, the optimization process 1018 is dependent upon the properties of convolution filter 1014 and is constrained such that the variation of the mean-square residue is zero, δ<ΔX²>=0. The disclosed method of filter optimization "matches" the filter response to the image data, thereby minimizing the residue. Since the decimation filter 1014 is low-pass in nature, the optimization process 1018, in part, compensates by effectively acting as a "self-tuned" high-pass filter. A brief descriptive overview of the optimization procedure is provided in the following sections.
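By way of a hedged illustration only (none of the following code is part of the disclosure, and the function and variable names are hypothetical), the overall flow of FIGS. 34-36 for a one-dimensional signal might be arranged as in the sketch below, with the optimizer left abstract until it is derived in the following sections.

```python
import numpy as np

def compress_1d(x, kernel, tau, optimize):
    """Sketch of FIG. 36: low-pass convolution 1014, decimation 1016, optimization 1018.

    x        -- one-dimensional periodic source signal (length a multiple of tau)
    kernel   -- spline convolution filter (e.g., a triangular tent of half-width tau)
    tau      -- decimation factor
    optimize -- callable mapping raw decimated weights to LMS-optimal weights
    """
    # Circular low-pass filtering of the source with the spline kernel.
    filtered = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel, len(x))))
    y_raw = filtered[::tau]          # decimation by tau
    return optimize(y_raw)           # "self-tuned" high-pass compensation

def residue(x, x_reconstructed):
    """Residue of FIG. 34: dX = X - X', retained, transmitted or discarded."""
    return x - x_reconstructed
```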
A. Image Approximation by Spline Functions
As will become clear in the following detailed description, the input decimation filter 1014 of FIG. 36 may be regarded as a projection of an image data vector X onto a set of basis functions that constitute shifted, but overlapping, spline functions {φ_k(x)} such that

X ≈ X' = Σ_k X_k' φ_k(x),

where X' is the reconstructed image vector and X_k' is the decomposition weight. The image data vector X is thus approximated by an array of preferably computationally simple, continuous functions, such as lines or planes, allowing also an efficient reconstruction of the original image.
According to the method, the basis functions need not be orthogonal and are preferably chosen to overlap in order to provide a continuous approximation to image data, thereby rendering a non-diagonal basis correlation matrix:

A_jk = Σ_x φ_j(x) φ_k(x).
This property is exploited by the method of the present invention, since it allows the user to "adapt" the response of the filter by the nature and degree of cross-correlation.
Furthermore, the basis of spline functions need not be complete in the sense of spanning the space of all image data, but preferably generates a close approximation to image X. It is known that the decomposition of image vector X into components of differing spline basis functions {φ_k(x)} is not unique. The method herein disclosed optimizes the projection by adjusting the weights X_k' such that the differential variation of the average residue vanishes, δ<ε²>=0, or equivalently <ε²> = min. In general, it will be expected that a more complete basis set will provide a smaller residue and better compression, which, however, requires greater computational overhead. Accordingly, it is preferable to utilize a computationally simple basis set, which is easy to manipulate in closed form and which renders a small residual image. This residual image or residue ΔX is preferably retained for subsequent processing or reconstruction. In this respect there is a compromise between computational complexity, compression, and the magnitude of the residue.
In a schematic view, a set of spline basis functions S'={φ_k} may be regarded as a subset of vectors in the domain of possible image vectors S={X}, as depicted in FIG. 37. The decomposition or projection of X onto components of S' is not unique and may be accomplished in a number of ways. A preferable criterion set forth in the present description is a least-mean-square (LMS) error, which minimizes the overall difference between the source image X and the reconstructed image X'. Geometrically, the residual image ΔX can be thought of as a minimal vector in the sense that it is the shortest possible vector connecting X to X'. That is, ΔX might, for instance, be orthogonal to the subspace S', as shown in FIG. 37. As will be elaborated in the next section, the projection of image vector X onto S' is approximated by an expression of the form:
X ≈ X' = Σ_k X_k' φ_k(x).

The "best" X_k' is determined by the constraint that ΔX=X−X' is minimized with respect to variations in the weights X_k':

∂<ΔX²>/∂X_j' = ∂/∂X_j' <( X − Σ_k X_k' φ_k(x) )²> = 0,

which, by analogy to FIG. 37, describes an orthogonal projection of X onto S'.
Generally, the above system of equations which determines the optimal X_k' may be regarded as a linear transformation, which maps X onto S' optimally, represented here by:


A(X_k') = Σ_x X φ_k(x),

where A_jk is a transformation matrix having elements representing the correlation between basis vectors φ_j and φ_k.
The optimal weights X_k' are determined by the inverse operation A^{-1}:

X_k' = A^{-1}( Σ_x X φ_k(x) ),

rendering compression with the least residue. One skilled in the art of LMS criteria will know how to express the processes given here in the geometry of multiple dimensions.
Hence, the processes described herein are applicable to a variety of image data types.
The present brief and general description has direct processing counterparts depicted in FIG. 36. The operation Σ_x X φ_k(x) represents a convolution filtering process 1014, and A^{-1}( Σ_x X φ_k(x) ) represents the optimizing process 1018.
In addition, as will be demonstrated in the following sections, the inverse operation A^{-1} is equivalent to a so-called inverse eigenfilter when taken over to the conjugate image domain. Specifically,

DFT(X_k') = λ^{-1} DFT( Σ_x X φ_k(x) ),

where DFT is the familiar discrete Fourier transform and λ are the eigenvalues of A. The equivalent optimization block 1018, shown in FIG. 38, comprises three steps: (1) a discrete Fourier transformation (DFT) 1020; (2) inverse eigenfiltering 1022; and (3) an inverse discrete Fourier transformation (DFT^{-1}) 1024. The advantages of this embodiment, in part, rely on the fast coding/reconstruction speed, since DFT and DFT^{-1} are the primary computations, where now the optimization is a simple division. Greater elaboration into the principles of the method is provided in Section II, where the presently contemplated preferred embodiments are also derived as closed-form solutions for a one-dimensional linear spline basis and two-dimensional planar spline bases. Section III provides an operational description for the preferred method of compression and reconstruction utilizing the optimal procedure disclosed in Section II. Section IV discloses results of a reduction to practice of the preferred embodiments applied to one- and two-dimensional images. Finally, Section V discloses a preferred method of the filter optimizing process implemented in the image domain.
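A minimal numerical sketch of the three-step optimization block of FIG. 38 is given below, assuming the eigenvalues λ(e) of the correlation matrix A are already available (their closed form for the linear spline is derived in Section II); the names are illustrative and not part of the disclosure.

```python
import numpy as np

def optimize_weights(y_raw, eigenvalues):
    """Optimization block 1018 of FIG. 38:
    (1) DFT of the non-optimized weights,
    (2) inverse eigenfiltering, i.e. division by the eigenvalues of A,
    (3) inverse DFT back to the image domain."""
    Y = np.fft.fft(y_raw)                      # step (1)
    Y_optimized = Y / eigenvalues              # step (2)
    return np.real(np.fft.ifft(Y_optimized))   # step (3)
```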
II. IMAGE DATA COMPRESSION BY OPTIMAL SPLINE INTERPOLATION
A. One-Dimensional Data Compression by LMS-Error Linear Splines
For one-dimensional image data, bi-linear spline functions are combined to approximate the image data with a resultant linear interpolation, as shown in FIG. 39. The resultant closed-form approximating and optimizing process has a significant advantage in computational simplicity and speed.
Letting the decimation index τ and image sampling index t be fixed, positive integers τ,t=1,2,..., and letting X(t) be a periodic sequence of data of period nτ, where n is also an integer, consider a periodic, linear spline 1014 of period nτ of the type

F(t) = F(t+nτ),     (1)

where

F(t) = 1 − |t|/τ  for |t| ≤ τ, and F(t) = 0 elsewhere within one period,     (2)

as shown by the functions F(t) 1014 of FIG. 39.
The family of shifted linear splines F(t) is defined as follows:

φ_k(t) = F(t − kτ)  for (k = 0,1,2,...,(n−1)).     (3)

One object of the present embodiment is to approximate X(t) by the n-point sum:

S(t) = Σ_{k=0}^{n−1} X_k φ_k(t),     (4)

in a least-mean-squares fashion, where X_0, ..., X_{n−1} are n reconstruction weights. Observe that the two-point sum in the interval 0 ≤ t ≤ τ is:

X_0 φ_0(t) + X_1 φ_1(t) = X_0 (1 − t/τ) + X_1 (t/τ)
                        = X_0 + (X_1 − X_0) t/τ.     (5)

Hence, S(t) 1030 in Equation 4 represents a linear interpolation of the original waveform X(t) 1002, as shown in FIG. 39.
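The following short check (a sketch, assuming Equation 2 is the usual triangular tent) illustrates Equations 2-5: two overlapping shifted tents reproduce a straight line between the weights X0 and X1.

```python
import numpy as np

def tent(t, tau):
    """Linear spline of Eq. 2: value 1 at t = 0, falling linearly to 0 at |t| = tau."""
    return np.maximum(0.0, 1.0 - np.abs(t) / tau)

tau = 4
t = np.arange(tau + 1)                  # the interval 0 <= t <= tau
X0, X1 = 10.0, 30.0                     # two neighboring reconstruction weights
s = X0 * tent(t, tau) + X1 * tent(t - tau, tau)
# Eq. 5: the two-point sum is the linear interpolation X0 + (X1 - X0) * t / tau.
assert np.allclose(s, X0 + (X1 - X0) * t / tau)
```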
To find the "best" weights X_0,...,X_{n−1}, the quantity L(X_0,X_1,...,X_{n−1}) is minimized:

L(X_0,X_1,...,X_{n−1}) = Σ_t ( X(t) − Σ_k X_k φ_k(t) )²,     (6)

where the sum has been taken over one period plus τ of the data. L is minimized by differentiating as follows:

∂L/∂X_j = −2 [ Σ_t X(t) φ_j(t) − Σ_k X_k Σ_t φ_k(t) φ_j(t) ] = 0.     (7)

This leads to the system,

Σ_k A_jk X_k = Y_j,     (8)

of linear equations for X_k, where

A_jk = Σ_{t=−τ}^{nτ} φ_j(t) φ_k(t)  for (j,k = 0,1,...,n−1)     (9)

and

Y_j = Σ_{t=−τ}^{nτ} X(t) φ_j(t)  for (j = 0,1,...,n−1).     (10)

The term Y_j in Equation 10 is reducible as follows:

Y_j = Σ_{t=−τ}^{nτ} X(t) F(t − jτ)     (11)
    = Σ_{t=(j−1)τ}^{(j+1)τ} X(t) F(t − jτ).

Letting (t − jτ) = m, then:

Y_j = Σ_{m=−τ}^{τ} X(m + jτ) F(m)  for (j = 0,1,2,...,n−1).     (12)

The Y_j's in Equation 12 represent the compressed data to be transmitted or stored. Note that this encoding scheme involves n correlation operations on only 2τ−1 points.
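As a sketch (assuming periodic data of length n·τ and the triangular tent of Equation 2), the compressed data of Equation 12 can be computed with n short correlations, each touching only the 2τ−1 samples around jτ:

```python
import numpy as np

def compress_linear_spline(x, tau):
    """Compressed data of Eq. 12: Y_j = sum_m X(m + j*tau) F(m),
    where only the 2*tau - 1 points with F(m) != 0 contribute
    (periodic boundary conditions assumed)."""
    n_total = len(x)                    # must equal n * tau
    n = n_total // tau
    m = np.arange(-(tau - 1), tau)      # the 2*tau - 1 nonzero points of F
    f = 1.0 - np.abs(m) / tau           # tent values F(m)
    y = np.empty(n)
    for j in range(n):
        y[j] = np.sum(x[(m + j * tau) % n_total] * f)
    return y
```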
Since F(t) is assumed to be periodic with period nτ, the matrix form of A_jk in Equation 9 can be reduced by substituting Equation 3 into Equation 9 to obtain:

A_jk = Σ_m F(m + (j−k)τ) F(m)
     = Σ_m (F(m))² ≡ α     if j−k ≡ 0 mod n
     = Σ_m F(m ± τ) F(m) ≡ β     if j−k ≡ ±1 mod n
     = 0     otherwise.     (13)

By Equation 13, A_jk can be expressed also in circulant form in the following manner:

A_jk = a_{(k−j)_n},     (14)

where (k−j)_n denotes (k−j) mod n, and

a_0 = α, a_1 = β, a_2 = 0, ..., a_{n−1} = β.     (15)
Therefore, A_jk in Equations 14 and 15 has explicitly the following equivalent circulant matrix representations:

[A_jk] = [ A_{0,0}    A_{0,1}    ...  A_{0,n−1}
           A_{1,0}    A_{1,1}    ...  A_{1,n−1}
             ...        ...      ...     ...
           A_{n−1,0}  A_{n−1,1}  ...  A_{n−1,n−1} ]  =  [ a_{(k−j)_n} ]

       = [ a_0      a_1      a_2   ...  a_{n−1}
           a_{n−1}  a_0      a_1   ...  a_{n−2}
             ...      ...     ...  ...    ...
           a_1      a_2      a_3   ...  a_0     ]     (16)

       = [ α  β  0  ...  0  β
           β  α  β  ...  0  0
           ...
           β  0  0  ...  β  α ].

One skilled in the art of matrix and filter analysis will appreciate that the periodic boundary conditions imposed on the data lie outside the window of observation and may be defined in a variety of ways. Nevertheless, periodic boundary conditions serve to simplify the process implementation by insuring that the correlation matrix [A_jk] has a calculable inverse. Thus, the optimization process involves an inversion of [A_jk], of which the periodic boundary conditions and consequent circulant character play a preferred role. It is also recognized that for certain spline functions, symmetry rendered in the correlation matrix allows inversion in the absence of periodic image boundary conditions.
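For illustration (with α and β standing for whatever values Equation 13 produces for a given τ), the circulant matrix of Equation 16 can be built and its eigenvalues checked against the DFT of its first row, which is exactly the quantity the inverse eigenfilter divides by:

```python
import numpy as np

def circulant_A(n, alpha, beta):
    """Circulant correlation matrix of Eq. 16: alpha on the diagonal and
    beta on the two cyclic off-diagonals."""
    first_row = np.zeros(n)
    first_row[0], first_row[1], first_row[-1] = alpha, beta, beta
    return np.array([np.roll(first_row, j) for j in range(n)])

n, alpha, beta = 8, 1.0, 0.25           # illustrative values only
A = circulant_A(n, alpha, beta)
lam = np.real(np.fft.fft(A[0]))         # eigenvalues = DFT of the first row
assert np.allclose(np.sort(lam), np.sort(np.linalg.eigvalsh(A)))
```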
B. Two-Dimensional Data Compression by Planar Splines
For two-dimensional image data, multi-planar spline functions are combined to approximate the image data with a resultant planar interpolation. In FIG. 40, X(t1,t2) is a doubly periodic array of image data (e.g., a still image) of periods n1τ and n2τ, with respect to the integer variables t1 and t2, where τ is a multiple of both t1 and t2. The actual image 1002 to be compressed can be viewed as being repeated periodically throughout the plane as shown in FIG. 40. Each subimage of the extended picture is separated by a border 1032 (or gutter) of zero intensity of width τ. This border is one of several possible preferred "boundary conditions" to achieve a doubly-periodic image.
Consider now a doubly periodic planar spline, F(t1,t2), which has the form of a six-sided pyramid or tent, centered at the origin, and is repeated periodically with periods n1τ and n2τ with respect to integer variables t1 and t2, respectively. A perspective view of such a planar spline function 1034 is shown in FIG. 41a and may hereinafter be referred to as a "hexagonal tent." Following the one-dimensional case by analogy, letting:

φ_{k1k2}(t1,t2) = F(t1 − k1τ, t2 − k2τ)     (17)

for (k1 = 0,1,...,n1−1) and (k2 = 0,1,...,n2−1), the "best" weights X_{k1k2} are found such that:

"", ", r r~ L
~X~k) = ~ tl,t2~ ~ x~k~k,(t1,t2) ) ~18) T l k"k,.O

is a minimum.

W09610289~ 2 ~ PCT~S9~10882 A condition for L to be a ~inimum is tl ~ t2 ) ~ Xk,k ~k k ( tl ~ t2 ) ~ ( tl ~ t ) ) ~,;, t"t,.-r k"k,.o [t ~ ( tl~ t2) ~ ( tl, t2) " t,--- r n,-l,n,-l n,r,n,r Xk,k, ~ , ( tl ~ t2 ) ~/k k ( tl ~ t2 ) k"k,-O t" t,--r l-li O .

(19) The best coefficients Xklk2 are the solution of the 2nd-order tensor e~uation, A~,k,k,Xk,k, J.~. (20) where the summation is on kl and k2, n r,n,r A~ k k = ~ , ( tl ~ t2 ) ~k,k, (tl,t2) (21~
t" t,---W0 96t~1~89~ 0 PCl'tUS9~10882't .

and rl~r,r~,r y~ X ( tl ~ t2 ) ~ ( tl ~ t2 ) ~ 2 2 ) with the visua1 aid of FIG. 41a, the tensor Y~l~2 reduces as follows:
r.~T"i,T
y~ X(tl~ t21~ tlr t2) ~,......
= ~ X(tl~tz)F(tl-jlT~t2-jzr) (23) e,. t, ~- r T (~,-1~ 1 ~ X(tl,t2)F(t,-~ t~-j2T) C,~ r C~ ( j,-l) ~

~etting tk-jkr=mk for k = 1,2, then Y~J = ~ x(mll jlT~m2 Ij~r1F~,m2) ~24) for (il = o,l,...,nl-l) and (j, = O,l,...,n~-l), where F~m"m2) is the doubly periodic, six-sided pyramidal fur.ction, 6hown W096/02895 219~ r~ J~
~

in FIG. 41a. The tensor transform in E~uation 21 is treated in a similar fashion to obtain r~,~ ,n,r t"C,.-r ~ t2)~,k(tl,t,) ~ Ftm1+(jl-k1)r,m,+ ( j2-k2) T) F(ml,m~) r . l [F(ml,m~)]~ ~ a r-l if ~iL-k1) } 0 mod n1 A ~j2-k2) = O mod n, F(mliT~m2) F(~ m2) 9 ~, I,'-~l ~-1 if ~ k1) - il mod n1 A (jZ-k2) = O mod n, F(ml~m~i7)F(ml~m~) 9~y ~"1~'---1 r-l if (j~-k1) = 0 mod n1 A (j2-kl) = il mod n, F(mlir,m~iT)F(m1,m2) e ~
~4~-r~l if (j1-k1) = il mod n1 A (i,-k2) - il mod n, ~ ~ F(mliT,m~i~)F(ll~l,m~) 9 71 ~ if (jl-kl) = +1 mod n1 A (j2-k2) = il mod n2.

(25) The values of ~, ~, y, and ~ depend on T, and the shape and oriPnt~ti~n of the h~Yr~g~nr~l tent with respect to the image domain, where for example m1 and m~ represent row and column indices. For greater flPY;hility in tailoring the hexagonal tent function, it is possible to utilize all parameters of the [~1~2~1k2]~ However, to minimize calculational overhead it is preferable to employ symmetric hPY~g~n~, disposed over the image domain with a bi-directional period z Under these conditions, ~ and ~=0, simplifying [~1~2~1k2] considerably. Specifically, the hPY~gon~l tent depicted in FIG. 41a and having an orientation -6~-WV~J6102#9~ 2 ~ ~ ~ T ~ O ~ ~ ' P~

depicted in FIG. 41b is described by the preferred case in which ~=y=~ and ~=0. It will be appreciated that other orientations and shapes of the hexagonal tent are possible, as depicted, for example, in FIG. 41c. Combinations of hexagonal tents are also possible and embody specific preferable attributes. For example, a superposition of the hexagonal tents shown in FIG. 41b and 41c effectively ~symmetrizes" the ~ ssion process.
From Equation 25 above, A_{j1j2,k1k2} can be expressed in circulant form by the following expression:
A_{j1j2,k1k2} = a_{(k1−j1)_{n1}, (k2−j2)_{n2}},     (26)

where (k_e − j_e)_{n_e} denotes (k_e − j_e) mod n_e, e = 1,2, and

[a_{s1,s2}] = [ a_{0,0}     a_{0,1}     a_{0,2}    ...  a_{0,n2−1}
                a_{1,0}     a_{1,1}     a_{1,2}    ...  a_{1,n2−1}
                  ...         ...         ...      ...     ...
                a_{n1−1,0}  a_{n1−1,1}  a_{n1−1,2} ...  a_{n1−1,n2−1} ],     (27)

in which only the entries corresponding to the shifts identified in Equation 25 are nonzero (taking the values α, β, γ, and δ), where (s1 = 0,1,2,...,n1−1) and (s2 = 0,1,2,...,n2−1). Note that when [a_{s1,s2}] is represented in matrix form, it is "block circulant."
C. Compression-Reconstruction Algorithm
Because the objective is to apply the above-disclosed LMS-error linear spline interpolation techniques to image sequence coding, it is advantageous to utilize the tensor formalism during the course of the analysis in order to readily solve the linear systems in Equations 8 and 20. Here, the tensor summation convention is used in the analysis for one and two dimensions. It will be appreciated that such convention may readily apply to the general case of N
dimensions.
1. Linear Transformation of Tensors
A linear transformation of a 1st-order tensor is written as

Y_r = A_rs X_s  (sum on s),     (28)

where A_rs is a linear transformation, and Y_r, X_s are 1st-order tensors. Similarly, a linear transformation of a second-order tensor is written as:

Y_{r1r2} = A_{r1r2,s1s2} X_{s1s2}  (sum on s1,s2).     (29)

The product or composition of linear transformations is defined as follows. When the above Equation 29 holds, and

Z_{q1q2} = B_{q1q2,r1r2} Y_{r1r2},     (30)

then

Z_{q1q2} = B_{q1q2,r1r2} A_{r1r2,s1s2} X_{s1s2}.     (31)

Hence,

C_{q1q2,s1s2} = B_{q1q2,r1r2} A_{r1r2,s1s2}     (32)

is the composition or product of two linear transformations.
2. Circulant Transformation of 1st-Order Tensors
The tensor method for solving Equations 8 and 20 is illustrated for the 1-dimensional case below.
Letting A_rs represent a circulant tensor of the form:

A_rs = a_{(s−r)_n}  for (r,s = 0,1,2,...,n−1),     (33)

and considering the n special 1st-order tensors

W(e)_r = (w^e)^r  for (e = 0,1,2,...,n−1),     (34)

where w is the n-th root of unity, then

A_rs W(e)_s = λ(e) W(e)_r,     (35)

where

λ(e) = Σ_{j=0}^{n−1} a_j (w^e)^j     (36)

are the distinct eigenvalues of A_rs. The terms W(e) are orthogonal:

W(e)_r W(j)_r* = n  for e = j, and 0 otherwise.     (37)

At this point it is convenient to normalize these tensors as follows:
Ψ(e) = (1/√n) W(e)  for (e = 0,1,2,...,n−1).     (38)

Ψ(e) evidently also satisfies the orthonormal property, i.e.,

Ψ(e)_r Ψ(j)_r* = δ_ej,     (39)

where δ is the Kronecker delta function and * represents complex conjugation.
A linear transformation is formed by summing the n dyads Ψ(e)Ψ(e)* for e = 0,1,...,n−1 as follows:

A_rs = Σ_{e=0}^{n−1} λ(e) Ψ_r(e) Ψ_s(e)*.     (40)

Then

A_rs Ψ_s(j) = Σ_{e=0}^{n−1} λ(e) Ψ_r(e) Ψ_s(e)* Ψ_s(j) = λ(j) Ψ_r(j).     (41)

Since the transformation constructed in Equation 40 has, by a simple verification, the same eigenvectors and eigenvalues as the transformation A_rs has in Equations 9 and 33, the two transformations are equal.
3. Inverse Transformation of lst-Order Tensors.
The inverse transformation of A_rs is shown next to be

A^{-1}_rs = Σ_{e=0}^{n−1} λ(e)^{-1} Ψ_r(e) Ψ_s(e)*.     (42)

This is proven easily, as shown below:
n-l n-l Ar"A;tl = ~ ~ A ( e ) 1 'P~~
n 1 n-l A ( e ) n-~
= ~ ~ A(f)~ t'~ 143) ~ 1) rt = ;~ 1 ~6Jrt~ S t
4. Solving 1st-Order Tensor Equations
The solution of a 1st-order tensor equation Y_r = A_rs X_s is given by

A^{-1}_qr Y_r = A^{-1}_qr A_rs X_s = δ_qs X_s = X_q,     (44)

so that

X_r = A^{-1}_rk Y_k = Σ_{e=0}^{n−1} λ(e)^{-1} Ψ_r(e) [ Σ_k Ψ_k(e)* Y_k ]
    = DFT [ λ^{-1} DFT^{-1}(Y_k) ],     (45)

where DFT denotes the discrete Fourier Transform and DFT^{-1} denotes its inverse discrete Fourier Transform.
An alternative view of the above solution method is derived below for one dimension using standard matrix methods. A linear transformation of a 1st-order tensor can be represented by a matrix. For example, let A denote A_rs in matrix form. If A_rs is a circulant transformation, then A is also a circulant matrix. From matrix theory it is known that every circulant matrix is "similar" to a DFT matrix. If Q denotes the DFT matrix of dimension (n×n), Q' the complex conjugate of the DFT matrix, and Λ is defined to be the eigenmatrix of A, then:

A = QΛQ'.     (46)

The solution to y = Ax is then x = A^{-1}y = QΛ^{-1}(Q'y).
For the one-dimensional process described above, the eigenvalues of the transformation operators are:

λ(e) = Σ_{j=0}^{n−1} a_j (w^e)^j = DFT(a_j),     (47)

where a_0 = α, a_1 = β, a_2 = ... = a_{n−2} = 0, a_{n−1} = β, and w^n = 1. Hence:

λ(e) = α + β w^e + β w^{−e}     (48)
     = α + β (w^e + w^{−e}).
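The FFT-based solution of Equation 8 can be verified numerically against a direct solve, as in the sketch below (α and β are illustrative; the ordering of forward and inverse transforms is immaterial here because the eigenvalue sequence of Equation 48 is real and symmetric):

```python
import numpy as np

n, alpha, beta = 8, 1.0, 0.25
e = np.arange(n)
lam = alpha + 2 * beta * np.cos(2 * np.pi * e / n)   # Eq. 48: alpha + beta*(w^e + w^-e)

y = np.random.default_rng(0).normal(size=n)          # non-optimized weights Y_j

# Eq. 45: two transforms and a division recover the optimal weights X_k.
x = np.real(np.fft.ifft(np.fft.fft(y) / lam))

# Cross-check against a direct solve of the circulant system of Eq. 8.
first_row = np.array([alpha, beta] + [0.0] * (n - 3) + [beta])
A = np.array([np.roll(first_row, j) for j in range(n)])
assert np.allclose(A @ x, y)
```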
A direct extension of the 1st-order tensor concept to the 2nd-order tensor will be apparent to those skilled in the art. By solving the 2nd-order tensor equations, the results are extended to compress a 2-D image. FIG. 42 depicts three possible hexagonal tent functions for 2-dimensional image compression indices τ=2,3,4. The following table summarizes the relevant parameters for implementing the hexagonal tent functions:

Decimation Index (τ):      τ=2        τ=3             τ=4
Compression Ratio (τ²):    4          9               16
α:                         a²+6b²     a²+6b²+12c²     a²+6b²+12c²+18d²
β:                         b²         2(c²+bc)        2d²+2db+4dc+c²
gain:                      a+6b       a+6b+12c        a+6b+12c+18d

The algorithms for compressing and reconstructing a still image are explained in the succeeding sections.
III. DESCRIPTION OF CODING/RECONSTRUCTION SCHEME
A block diagram of the compression/reconstruction scheme is shown in FIG. 43. The signal source 1002, which can have dimension up to N, is first passed through a low-pass filter (LPF). This low-pass filter is implemented by convolving (in a process block 1014) a chosen spline filter 1013 with the input source 1002. For example, the normalized frequency response 1046 of a one-dimensional linear spline is shown in FIG. 44. Referring again to FIG. 43, it can be seen that immediately following the LPF, a subsampling procedure is used to reduce the signal size 1016 by a factor τ. The information contained in the subsampled source is not optimized in the least-mean-square sense. Thus, an optimization procedure is needed to obtain the best reconstruction weights. The optimization process can be divided into three consecutive parts. A DFT 1020 maps the non-optimized weights into the image conjugate domain. Thereafter, an inverse eigenfilter process 1022 optimizes the compressed data. The frequency response plots for some typical eigenfilters and inverse eigenfilters are shown in FIGS. 45 and 46. After the inverse eigenfilter 1022, a DFT^{-1} process block 1024 maps its input back to the original image domain. When the optimized weights are derived, reconstruction can proceed. The reconstruction can be viewed as oversampling followed by a reconstruction low-pass filter.
The embodiment of the optimized spline filter described above may employ DFT and DFT^{-1} type transform processes. However, those skilled in the art of digital image processing will appreciate that it is preferable to employ Fast Fourier Transform (FFT) and FFT^{-1} processes, which substantially reduce computation overhead associated with conjugate transform operations. Typically, such an improvement is given by the ratio of computation steps required to transform a set of N elements:

DFT/FFT = N² / (2N log₂(N)) = N / (2 log₂(N)),

which improves with the size of the image.
A. The Compression Method
The coding method is specified in the following steps:
1. A suitable value of τ (an integer) is chosen. The compression ratio is τ² for two-dimensional images.
2. Equation 23 is applied to find Y_{j1j2}, which is the compressed data to be transmitted or stored:

Y_{j1j2} = Σ_{t1,t2=−τ}^{n1τ,n2τ} X(t1,t2) φ_{j1j2}(t1,t2)
        = Σ_{t1,t2=−τ}^{n1τ,n2τ} X(t1,t2) F(t1 − j1τ, t2 − j2τ)
        = Σ_{t1=(j1−1)τ}^{(j1+1)τ} Σ_{t2=(j2−1)τ}^{(j2+1)τ} X(t1,t2) F(t1 − j1τ, t2 − j2τ).

B. The Reconstruction Method
The reconstruction method is shown below in the following steps:
1. Find the FFT^{-1} of Y_{j1j2} (the compressed data).
2. The results of step 1 are divided by the eigenvalues λ(e,m) set forth below. The eigenvalues λ(e,m) are found by extending Equation 48 to the two-dimensional case to obtain:

λ(e,m) = α + β ( w1^e + w1^{−e} + w2^m + w2^{−m} + w1^e w2^{−m} + w1^{−e} w2^m ),     (49)

where w1 is the n1-th root of unity and w2 is the n2-th root of unity.

3. The FFT of the results from step 2 is then taken. After computing the FFT, X_{k1k2} (the optimized weights) are obtained.

4. The recovered or reconstructed image is:

S(t1,t2) = Σ_{k1,k2=0}^{n1−1,n2−1} X_{k1k2} φ_{k1k2}(t1,t2).
5. Preferably, the residue is computed and retained with the optimized weights:

ΔX(t1,t2) = X(t1,t2) − S(t1,t2).

Although the optimizing procedure outlined above appears to be associated with an image reconstruction process, it may be implemented at any stage between the aforementioned compression and reconstruction. It is preferable to implement
the optimizing process immediately after the initial compression so as to minimize the residual image. The preferred order has an advantage with regard to storage, transmission and the incorporation of subsequent image processes.
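A compact sketch of reconstruction steps 1-3 follows. It presumes that α and β have been chosen according to the table above for the hexagonal tent in use; the function names are illustrative, and the eigenvalue array is simply Equation 49 written out with w1 and w2 as roots of unity.

```python
import numpy as np

def eigenvalues_2d(n1, n2, alpha, beta):
    """Eq. 49, using w1^e + w1^-e = 2*cos(2*pi*e/n1), etc.; the result is real
    because the six beta terms occur in conjugate pairs."""
    e = np.arange(n1)[:, None]
    m = np.arange(n2)[None, :]
    return alpha + 2 * beta * (np.cos(2 * np.pi * e / n1)
                               + np.cos(2 * np.pi * m / n2)
                               + np.cos(2 * np.pi * (e / n1 - m / n2)))

def optimize_weights_2d(y, alpha, beta):
    """Reconstruction steps 1-3: inverse FFT of the compressed data Y,
    division by the eigenvalues, forward FFT back to the image domain."""
    lam = eigenvalues_2d(*y.shape, alpha, beta)
    return np.real(np.fft.fft2(np.fft.ifft2(y) / lam))
```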
C. Response Considerations
The inverse eigenfilter in the conjugate domain is described as follows:

H(i,j) = 1 / Λ(i,j),     (51)

where Λ(i,j) can be considered as an estimation of the frequency response of the combined decimation and

interpolation filters. The optimization process H(i,j) attempts to "undo" what is done in the combined decimation/interpolation process. Thus, H(i,j) tends to restore the original signal bandwidth. For example, for τ=2, the decimation/interpolation combination is described as having an impulse response resembling that of the following 3x3 kernel:

R = [ 0  β  β
      β  α  β
      β  β  0 ].     (52)

Then, its conjugate domain counterpart, Λ(i,j), will be

Λ(i,j) = α + 2β [ cos(2πi/N) + cos(2πj/N) + cos(2π(i−j)/N) ],     (53)

where i,j are frequency indexes and N represents the number of frequency terms. Hence, the implementation accomplished in the image conjugate domain is the conjugate equivalent of the inverse of the above 3x3 kernel. This relationship will be utilized more explicitly for the embodiment disclosed in Section V.
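For the τ=2 case, the conjugate-domain counterpart of the 3x3 kernel R and the corresponding inverse eigenfilter H of Equation 51 can be obtained numerically, as sketched below with illustrative α and β values (the closed form of Equation 53 is simply the 2-D DFT of the circularly centered kernel):

```python
import numpy as np

def eigenfilter_from_kernel(kernel, N):
    """Lambda(i, j): 2-D DFT of the decimation/interpolation kernel, circularly
    centered on an N x N grid; the inverse eigenfilter of Eq. 51 is H = 1/Lambda."""
    padded = np.zeros((N, N))
    k = kernel.shape[0] // 2
    for di in range(-k, k + 1):
        for dj in range(-k, k + 1):
            padded[di % N, dj % N] = kernel[di + k, dj + k]
    lam = np.real(np.fft.fft2(padded))   # real, since the kernel is point-symmetric
    return lam, 1.0 / lam

alpha, beta = 1.0, 0.25                  # illustrative values only
R = np.array([[0.0,  beta, beta],
              [beta, alpha, beta],
              [beta, beta, 0.0]])        # Eq. 52
lam, H = eigenfilter_from_kernel(R, N=16)
```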
IV. NUMERICAL SIMULATIONS
A. One-Dimensional Case
For a one-dimensional implementation, two types of signals are demonstrated. A first test is a cosine signal, which is useful for observing the relationship between the standard error, the size of τ and the signal frequency. The standard error is defined herein to be the square root of the average squared error:

ε = [ (1/N) Σ_t (ΔX(t))² ]^{1/2}.

A second one-dimensional signal is taken from one line of a grey-scale still image, which is considered to be realistic data for practical image compression.

FIG. 47 shows the plots of standard error versus frequency of the cosine signal for different degrees of decimation τ. The general trend is that as the input signal frequency becomes higher, the standard error increases. In the low frequency range, smaller values of τ yield a better performance. One abnormal phenomenon exists for the τ=2 case and a normalized input frequency of 0.25.
For this particular situation, the linear spline and the cosine signal at discrete grid points can match perfectly so that the standard error is substantially equal to 0.
Another test example comes from one line of realistic still image data. FIGS. 48a and 48b show the reconstructed signal waveform 1060 for τ=2 and τ=4, respectively, superimposed on the original image data 1058. FIG. 48a shows a good quality of reconstruction for τ=2. For τ=4, in FIG. 48b, some of the high frequency components are lost due to the combined decimation/interpolation procedure. FIG. 48c presents the error plot 1062 for this particular test example. It will be appreciated that the non-linear error accumulation versus decimation parameter τ may be exploited to minimize the combination of optimized weights and image residue.
B. Two-Dimensional Case
For the two-dimensional case, realistic still image data are used as the test. FIGS. 49 and 50 show the original and reconstructed images for τ=2 and τ=4. For τ=2, the reconstructed image 1066, 1072 is substantially similar to the original. However, for τ=4, there are zig-zag patterns along specific edges in the images. This is due to the fact that the interpolation less accurately tracks the high frequency components. As described earlier, substantially complete reconstruction is achieved by retaining the minimized residue ΔX and adding it back to the approximated image. In the next section, several methods are proposed for implementing this process. FIG. 51 shows the error plots as functions of τ for both images.
An additional aspect of interest is to look at the optimized weights directly. When these optimal weights are viewed in picture form, high-quality miniatures 1080, 1082 of the original image are obtained, as shown in FIG. 52. Hence, the present embodiment is a very powerful and accurate method for creating a "thumbnail" reproduction of the original image.
V. ALTERNATIVE EMBODIMENTS
Video compression is a major component of high-definition television (HDTV). According to the present invention, video compression is formulated as an equivalent three-dimensional approximation problem, and is amenable to the technique of optimum linear or, more generally, hyperplanar spline interpolation. The main advantages of this approach are seen in its fast speed in coding/reconstruction, its suitability for a VLSI hardware implementation, and a variable compression ratio. A principal advantage of the present invention is the versatility with which it is incorporated into other compression systems. The invention can serve as a "front-end" compression platform from which other signal processes are applied. Moreover, the invention can be applied iteratively, in multiple dimensions, and in either the image or image conjugate domain. The optimizing method can, for example, apply to a compressed image and further be applied to a corresponding compressed residual image. Due to the inherent low-pass filtering nature of the interpolation process, some edges and other high-frequency features may not be preserved in the reconstructed images, but they are retained through the residue. To address this problem, the following procedures are set forth:
Procedure (a)


Since the theoretical formulation, derivation, and implementation of the disclosed compression method do not depend strongly on the choice of the interpolation kernel function, other kernel functions can be applied and their performances compared. So far, due to its simplicity and excellent performance, only the linear spline function has been applied. Higher-order splines, such as the quadratic spline or cubic spline, could also be employed. Aside from the polynomial spline functions, other more complicated function forms can be used.
Procedure (b)
Another way to improve the compression method is to apply certain adaptive techniques. FIG. 53 illustrates such an adaptive scheme. For a 2-D image 1002, the whole image can be divided into subimages of smaller size 1084. Since different subimages have different local features and statistics, different compression schemes can be applied to these different subimages. An error criterion is evaluated in a process step 1086. If the error is below a certain threshold determined in a process step 1088, a higher compression ratio is chosen for that subimage. If the error goes above this threshold, then a lower compression ratio is chosen in a step 1092 for that subimage. Both multi-kernel functions 1090 and multi-local-compression ratios provide good adaptive modification.
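A sketch of the adaptive selection loop of FIG. 53 is given below; `compress` and `reconstruct` stand in for any of the spline coders described above, and the block size, threshold and candidate ratios are illustrative assumptions rather than part of the disclosure.

```python
import numpy as np

def adaptive_compression_ratios(image, block, compress, reconstruct,
                                err_threshold, tau_high=4, tau_low=2):
    """For each subimage, evaluate the error at the higher compression ratio;
    keep it if the error is below the threshold, otherwise fall back to the
    lower ratio (per-block decision in the spirit of FIG. 53)."""
    choices = []
    for i in range(0, image.shape[0], block):
        for j in range(0, image.shape[1], block):
            sub = image[i:i + block, j:j + block]
            y = compress(sub, tau_high)
            err = np.sqrt(np.mean((sub - reconstruct(y, tau_high, sub.shape)) ** 2))
            tau = tau_high if err <= err_threshold else tau_low
            choices.append(((i, j), tau))
    return choices
```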
Procedure (c)
Subband coding techniques have been widely used in digital speech coding. Recently, subband coding has also been applied to digital image data compression. The basic approach of subband coding is to split the signal into a set of frequency bands, and then to compress each subband with an efficient compression algorithm which matches the statistics of that band. The subband coding techniques divide the whole frequency band into smaller frequency subbands. Then, when these subbands are demodulated into the baseband, the resulting equivalent bandwidths are greatly reduced. Since the subbands have only low frequency components, one can use the above described, linear or planar spline, data compression technique for coding these data. A 16-band filter compression system is shown in FIG. 54, and the corresponding reconstruction system in FIG. 55. There are, of course, many ways to implement this filter bank, as will be appreciated by those skilled in the art. For example, a common method is to exploit the Quadrature Mirror Filter structure.
V. IMAGE DOMAIN IMPLEMENTATION
The embodiments described earlier utilize a spline filter optimization process in the image conjugate domain using an FFT processor or equivalent thereof. The present invention also provides an equivalent image domain implementation of a spline filter optimization process which presents distinct advantages with regard to speed, memory and process application.
Referring back to Equation 45, it will be appreciated that the transform processes DFT and DFT^{-1} may be recast as an equivalent image domain convolution, shown here briefly:
X_j = DFT [ λ^{-1} DFT^{-1}(Y_k) ]
    = DFT [ DFT^{-1}( DFT(λ^{-1}) ) DFT^{-1}(Y_k) ].

If Ω = DFT(1/λ), then:

X_j = DFT [ DFT^{-1}(Ω) DFT^{-1}(Y_k) ]
    = Ω ⊛ Y_k.

Furthermore, with λ = DFT(a_j), the optimization process may be completely carried over to an image domain implementation knowing only the form of the input spline filter function. The transform processes can be performed in
.

advance to generate the image domain equivalent of the inverse eigenfilter. As shown in FIG. 57, the image domain spline optimizer Ω operates on compressed image data Y' generated by a first convolution process 1014 followed by a decimation process 1016, as previously described. Off-line, or perhaps adaptively, the tensor transformation A (as shown for example in Equation 25 above) is supplied to an FFT type processor 1032, which computes the transformation eigenvalues λ. The tensor of eigenvalues is then inverted at process block 1034, followed by FFT^{-1} process block 1036, generating the image domain tensor Ω. The tensor Ω is supplied to a second convolution process 1038, whereupon Ω is convolved with the non-optimized compressed image data Y' to yield optimized compressed image data Y''.
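The off-line and on-line halves of FIG. 57 might be sketched as follows (the names and the 7x7 size are illustrative assumptions; the eigenvalue array is whatever the spline and τ in use dictate):

```python
import numpy as np

def image_domain_optimizer(lam, size=7):
    """Off-line step: Omega = DFT^-1(1/lambda), truncated to a small kernel
    centered on the zero shift."""
    omega_full = np.real(np.fft.ifft2(1.0 / lam))
    k = size // 2
    idx = np.arange(-k, k + 1)
    return omega_full[np.ix_(idx % lam.shape[0], idx % lam.shape[1])]

def optimize_in_image_domain(y_raw, omega):
    """On-line step: circular convolution of the non-optimized compressed data Y'
    with the small kernel Omega (roughly size*size multiplies per element)."""
    k = omega.shape[0] // 2
    out = np.zeros_like(y_raw, dtype=float)
    for di in range(-k, k + 1):
        for dj in range(-k, k + 1):
            out += omega[di + k, dj + k] * np.roll(np.roll(y_raw, di, axis=0), dj, axis=1)
    return out
```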
In practice, there is a compromise between accuracy and economy with regard to the specific form of Ω. The optimizer tensor Ω should be of sufficient size for adequate approximation of:

Ω ≈ DFT^{-1}(λ^{-1}).

On the other hand, the term Ω should be small enough to be computationally tractable for the online convolution process 1038. It has been found that two-dimensional image compression using the preferred hexagonal tent spline is adequately optimized by a 5x5 matrix, and preferably a 7x7 matrix, for example, with the following form:
[7×7 kernel, symmetric about its center, with center element a, nearest elements b, and progressively smaller elements c, d, e, f, g and h toward the perimeter]
Additionally, to reduce computational overhead, the smallest elements (i.e., the elements near the perimeter) such as f, g, and h may be set to zero with little noticeable effect in the reconstruction. The principal advantages of the present preferred embodiment are in computational saving above and beyond that of the previously described conjugate domain inverse eigenfilter process (FIG. 38, 1018). For example, a two-dimensional FFT process may typically require about N² log₂N complex operations, or equivalently 6N² log₂N multiplications.
The total number of image conjugate filter operations is of order 10N² log₂N. On the other hand, the presently described (7×7) kernel with 5 distinct operations per image element will require only 5N² operations, lower by an important factor of log₂N. Hence, even for reasonably small images, there is a significant improvement in computation time.
Additionally, there is a substantial reduction in buffer demands because the image domain process 1038 requires only a 7×7 image block at a given time, in contrast to the conjugate process, which requires a full-frame buffer before processing. In addition to the lower demands on computation with the image domain process 1038, there is virtually no latency in transmission, as the process is done in pipeline.
Finally, "power of 2" constraints desirable for efficient FFT
processing is eliminated, allowing convenient application to a wider range of image ~; ci~nc The above detailed description i5 intended to be exemplary and not limiting. From this detailed description, taken in conjur.ction with the Apr~n~ drawings, the advantages of the present invention will be readily understood by one who is skilled in the relevant technology.
The present apparatus and method provides a unique encoder, compressed file format and decoder which compresses images and decodes compressed images. The unique compression system increases the compression ratios for comparable image quality while achieving relatively quick encoding and decoding times, optimizes the encoding process to accommodate different image types, selectively applies particular encoding methods for a

particular image type, layers the image quality improvements in the compressed image, and generates a file format that allows the addition of other compressed data information.
While the above detailed description has shown, described and pointed out the fundamental novel features of the invention as applied to various embodiments, it will be understood that various omissions, substitutions and changes in the form and details of the illustrated device may be made by those skilled in the art, without departing from the spirit of the invention.


Claims (108)

WHAT IS CLAIMED IS:
1. An encoder that compresses digital images, comprising:
a first data compressor that receives an input digital image comprising a plurality of pixels, each pixel having at least one component representing an intensity level, said pixels of said input digital image being represented by a first plurality of data bytes, said first data compressor outputting decimated data bytes, said decimated data bytes comprising a second plurality of data bytes, said second plurality of data bytes being less in number than said first plurality of data bytes, said second plurality of data bytes representing said plurality of pixels;
a second data compressor that receives data corresponding to said second plurality of data bytes, said second data compressor outputting further decimated data bytes, said further decimated data bytes comprising a third plurality of data bytes that represent a compressed coarse quality digital image;
a residual calculator that receives said first plurality of data bytes and said second plurality of data bytes, said residual calculator expanding said third plurality of data bytes to a fourth plurality of data bytes having a same number of data bytes as said second plurality bytes, said residual calculating a difference between said second plurality of data bytes and said fourth plurality of data bytes, said difference comprising a plurality of residual data bytes; and a compressor that compresses said plurality of residual data bytes to generate compressed residual data bytes, said encoder outputting said third plurality of data bytes and said compressed residual data bytes for storage as a compressed image, said third plurality of data bytes representing a coarse quality digital image, said compressed residual data bytes expandable to said plurality of residual data bytes and combinable with said third plurality of data bytes to convert said third plurality of data bytes to said second plurality of data bytes representing a higher quality image.
2. A decoder that receives compressed data input stream representing a digital image, said decoder outputting expanded data representing a reconstructed digital image, said decoder comprising:
a decompressor that receives a first portion of said compressed data input stream, said decompressor expanding said first portion of said compressed data stream to first expanded data and outputting said first expanding data as a first output data stream to be displayed as a coarse quality digital image;
a second compressor that receives a second portion of said compressed data input stream, said decompressor expanding said second portion of said data input stream to second expanded data; and an adder that combines said second expanded data with said first expanded data to generate a second output data stream to be displayed as a higher quality digital image, said first output data stream and said second output data stream independently selectable for display as a digital image.
3. A system that compresses and decompresses data representing a digital image so that different quality levels of said digital image can be selectably displayed on a video monitor, said system comprising:
an encoder that compresses said data into compressed data layers that comprise data having a plurality of data formats, wherein at least a first one of said data layers comprises data representing a low quality image and at least a second one of said data layers comprises data to convert said low quality image to a higher quality image; and a decoder that receives said compressed data layers, said decoder expanding said first one of said data layers to generate first expanded output data representing a low quality image, said decoder expanding said second one of said data layers to generate second expanded output data, said decoder combining said second expanded output data with said first expanded output data to generate output data representing a higher quality image.
4. A method of producing image information indicative of an original image to be sent over a channel in a way to receive the image information in progressively-rendered stages, comprising:
forming a first compressed version of the image, the first compressed version being compressed relative to the original image by a first compression technique, and the first compressed version being of a smaller overall file size than the original image, the first compressed version including sufficient information such that, when displayed, a reduced-resolution version of the original image can be seen; and forming a second compressed version of the image, said second compressed version including additional information about the image beyond that information produced by said first compressed version to produce a second image which has further resolution than said reduced resolution version, said second compressed version formed using a second compression technique which is different than said first compression technique; and producing an output file indicative of said first compressed version and said second compressed version.
5. A method as in claim 4, wherein said forming a second compressed version includes analyzing at least a portion of information indicative of the image to determine an optimal compression scheme which will optimally compress said information from among a plurality of different compression schemes; and forming said second compressed version using said optimal compression technique as part of said second compression technique.
6. A method as in claim 5, further comprising dividing said information into blocks, and classifying each block of the image according to a particular one of said plurality of compression schemes that optimize an amount of compression for each said block.
7. A method as in claim 4, further comprising producing a third compressed version of the image, said third compressed version comprising information providing a highest quality version of the image and including additional information beyond that information produced by said first and second compressed versions.
8. A method as in claim 4, further comprising:
initially selecting a number of stages in which said image will be compressed;
and further comprising:
compressing said image in said number of stages, said first and second compressed versions being the first two stages of said number of stages.
9. A method as in claim 4, wherein said forming a first compressed version comprises obtaining a thumbnail image of the original image, said thumbnail being a version of the image that is sized and intended to be displayed in a smaller scale than the original image, over a smaller number of pixels than are contained in the original image; and interpolating said thumbnail image into a full sized image as said first compressed version.
10. A method as in claim 9, further comprising fitting said thumbnail image to a function which increases its accuracy.
11. A method as in claim 9, further comprising processing information indicative of said thumbnail image to determine which of a plurality of compressing schemes will optimize a compression ratio for said second compressed version, and said forming a second compressed version comprises compressing said information using the compression scheme which will best compress said image.
12. A method as in claim 11, wherein said plurality of compression schemes include vector quantization, discrete cosine transform, pulse code modulation, and run length encoding.
13. A method as in claim 4, wherein said forming a first compressed version comprises decimating the original image by a predetermined factor along particular dimensions.
14. A method as in claim 13, wherein said decimating comprises decimating each of the color components of the image by a factor of 2 along vertical and horizontal dimensions.
15. An image compression apparatus, comprising:

a first element operating to receive a source image to be compressed;
a formatting element which changes said source image into a form which is susceptible of being processed;

a first compression device including a decimating element which decimates and compresses said source image using a first compression technique that produces information indicative of a reduced quality image;
a second image compressing device, which provides second image information about the source in addition to that contained in said first image information, said second image compressing device compressing using a second compression technique which is different than said first compression technique; and a message assembling element, assembling said first image information into a first part of a message to be transmitted and said second image information into a second part of the message, said first and second parts of the message being separately readable.
16. An apparatus as in claim 15, further comprising an image-classifying element which determines a most efficient compression technique to compress information indicative of said source image among a plurality of compression techniques and said second image compressing device compresses said image to form said second image information using said most efficient technique.
17. An apparatus as in claim 15, wherein said first image information is a decimated and re-interpolated image.
18. A method of transferring a progressively-rendered, compressed image, over a finite bandwidth channel, comprising:
producing a coarse quality compressed image at a source and transmitting said coarse quality compressed image over a channel as a first part of a transmission to a destination end;
receiving the coarse quality compressed image at a receiver at the destination end at a first time and displaying an image based on said coarse quality compressed image on a display system of the receiver when received at said first time;
creating additional information about the image, at the source end, from which a standard quality image can be displayed, said standard quality image being of a higher quality than said coarse quality image, and sending compressed information over said channel indicative of information for said standard quality image, said sending said standard quality image information occurring subsequent in time to said sending of all of said information for said coarse quality image;
receiving said standard quality image information at the receiver at a second time, subsequent to the first time, and decompressing said standard quality image information, to improve the quality of the image displayed on said display system, and to display said standard quality image;

obtaining further information about the image beyond the information in said standard quality image, to provide an enhanced quality image, and compressing said information for said enhanced quality image, said enhanced quality image having more image details than said standard quality image;
transmitting said information for said enhanced quality image, at a time subsequent to transmitting said information for said coarse quality image and said standard quality image; and receiving said enhanced quality image information at said receiver, at a third time subsequent to said first and second times, and updating a display on said display system to display the additional enhanced quality image.
19. A method as in claim 18, wherein said producing the coarse quality image uses a different compression technique than said creating additional information indicative of the standard quality image.
20. A method as in claim 18, wherein said coarse quality image includes information indicative of a miniature version of an original image, and said displaying the coarse quality image comprises interpolating said miniature to a size of the original image and displaying said image.
21. A method as in claim 19, wherein said creating additional information comprises determining a characteristic of the image, determining which of a plurality of different compression techniques will best compress the characteristic determined; and compressing said image using the determined technique.
22. A method as in claim 21, further comprising determining a plurality of areas in said image, and determining, for each area, which of the plurality of different compression techniques will optimize the compression ratio.
23. A method as in claim 22, further comprising interleaving and channel encoding different portions of the compressed image.
24. A method as in claim 22, wherein said compression techniques include vector quantization and discrete cosine transform.
25. A method as in claim 20, wherein said obtaining a miniature comprises decimating along vertical and horizontal axes.
26. A layered progressively-compressed image compression system, comprising:
a first image compression element obtaining a source image to be compressed and compressing said source image using a first image compression scheme to produce a first image layer;
a second image compression element, said second image compression element compressing information indicative of said source image using a different compression technique than said first image compression element to produce a second image layer; and an output message assembling element, said output message assembling element receiving said first and second image layers from said first and second image compressing elements, respectively, a first image layer stored in a first area which will be output first, said first image layer including information from which a coarse image can be reconstructed; and said second image layer including information from which a finer image, having more detail than said coarse image, can be reconstructed, and said second layer being stored in a location where it will be transmitted after said first layer is transmitted.
27. A system as in claim 26, wherein said first layer indicative of a coarse image is produced by obtaining a thumbnail miniature of the original information by decimating the source image, and interpolating said thumbnail miniature to a size of a full image display.
28. A system as in claim 26, wherein there are a plurality of layers, each layer including a complete set of information to be displayed at a decoding end, each layer progressively including more information than a previous layer.
29. A method of transmitting and displaying a compressed image comprising:
first obtaining and sending a first layer of information indicative of a compressed miniature image at a first time;
first receiving said first layer at said decoder end and decompressing and displaying a first coarse image indicative thereof;
second obtaining and sending information indicative of a compressed improved resolution image having more details than said first coarse image, and transmitting said information at a second time subsequent to said first time; and second receiving and decompressing said improved resolution image information to provide an updated display which improves the resolution of said first coarse image.
30. A method as in claim 29, wherein said obtaining coarse information comprises:
transmitting information indicative of a compressed miniature of the image;
receiving the compressed miniature of the image;
interpolating the compressed miniature of the image into a full sized image; and displaying the full sized image.
31. A method as in claim 30, wherein the first coarse image is compressed using a first compression technique and the second image is compressed using a second compression technique which is different from the first compression technique.
32. A method as in claim 31, further comprising determining which of a plurality of different image compression techniques will most efficiently code information indicative of said image.
33. A method as in claim 32, wherein said determining uses fuzzy logic techniques.
34. Image encoding system, comprising:

a first element, operating to receive a source image and to format the source image in a way to allow its coding;
a first compression coder, which filters the formatted source image, to form a first compressed and coarse quality image;
an image classifier, operating to classify the information contained in the image according to a characteristic thereof that is related to an amount by which the image can be compressed;
a compression encoder, determining one of a plurality of compression methods that will optimize the amount of compression based on a result of said image classifier, and encoding said information using the optimized compression method to produce a second compressed image;
a message assembling element interleaving information indicative of the first image and the second image into a desired form in message transmitting format, and transmitting said message to a channel.
35. A system as in claim 34, wherein said first element includes a decimating stage, and said compression encoder uses a different kind of compression than said decimating.
36. An image decoder system, comprising:

a first element, connected to a transmission channel to receive transmitted, compressed data indicative of an image therefrom, said compressed data received in layers;
a display interface which receives information to be displayed;
a first layer detector and decompression element, detecting a complete first layer, and decompressing said first layer when complete, to produce first information indicative of a reduced quality image, based on said first layer after decoding said first layer using a decompression technique and sending said first information to said display interface; and a second layer detector and decompression element, receiving a second layer of image information, compressed using a different compression technique than said first layer, and detecting that at least a unit of said second layer has been completely received, and decompressing said second layer to produce additional information which is coupled to said display interface to improve a displayed image resolution.
37. A system as in claim 36, further comprising a third layer detector, receiving and decompressing a third layer of information, forming a final display.
38. A system as in claim 37, wherein said units of said second layer are display panels, each panel displayed when completed, to form the second layer of the image in panels.
39. A method as in claim 31, wherein said first obtaining comprises decimating data on the image to form a reduced quality image, fitting the decimated data to a first model which partially restores source image detail lost by decimation, and calculating reconstruction values from the fitting.
40. A method as in claim 39, further comprising using said reconstruction weights to interpolate the decimated data into a full sized image while minimizing a mean squared error between original image components and interpolated image components.
41. A method as in claim 31, wherein said first step comprises forming miniature versions of the original source image for each of a plurality of primary colors.
42. A system as in claim 36, further comprising an image classifying module, that determines a characteristic of the image indicative of a best technique of compression, to output a measure indicative of said best technique.
43. A system as in claim 42, wherein said image classifying module uses fuzzy logic techniques.
44. A system as in claim 42, wherein said image types include gray scale, graphics, text, photographs, high activity and low activity images.
45. A method as in claim 29, wherein said first obtaining comprises obtaining a miniature image, and further comprising analyzing the miniature image to classify the image into one of a plurality of classes indicative of which of a plurality of compression techniques will best compress said image.
46. A method of encoding a source image, comprising:
obtaining a first compressed version of the image, said first compressed version of the image corresponding to a coarse version of the image indicative of coarse details only, said first compressed version of the image obtained using a first compression technique;
analyzing said coarse version of the image to determine which of a plurality of different compression techniques will best further compress said image; and further compressing said image to obtain further information indicative of a better rendering of said image, than said coarse version of said image, using the compression technique determined by said analyzing.
47. A method as in claim 46, wherein said first compression technique includes decimation of image components followed by interpolation.
48. A method as in claim 46, further comprising:
dividing information indicative of the image into a plurality of block units;
classifying each block unit according to a characteristic which will most efficiently compress said each block unit; and outputting a control script that specifies an optimized compression method for each said block unit.
49. A method as in claim 48, further comprising compressing, in a third stage, according to the control script.
50. A method as in claim 49, further comprising channel encoding according to the control script.
51. A method as in claim 46, further comprising evaluating information indicative of the coarse image, and determining discrete cosine transform coefficients of said information.
52. A method as in claim 51, further comprising obtaining a reconstructed coarse image from the discrete cosine transform coefficients, determining a residual between the reconstructed image and the coarse image and compressing the residual.
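A minimal sketch of the reconstruct-and-residual idea of claims 51-52, assuming 8x8 blocks and a single uniform quantization step (values the claims do not specify): the coarse data is transformed with an orthonormal DCT, quantized, dequantized and inverse-transformed, and the difference from the input block is kept as the residual to be compressed further.

```python
# Forward DCT, quantize, invert, and keep the difference for residual coding.
import numpy as np

N = 8
k = np.arange(N)
C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))  # DCT-II basis
C[0, :] *= 1 / np.sqrt(2)
C *= np.sqrt(2 / N)                                              # orthonormal

def dct2(block):  return C @ block @ C.T
def idct2(coef):  return C.T @ coef @ C

def coarse_residual(block, step=16.0):
    coef = dct2(block)
    q = np.round(coef / step)          # quantized DCT coefficients
    recon = idct2(q * step)            # reconstructed coarse block
    return q, block - recon            # residual to be compressed further

block = np.random.randint(0, 256, (8, 8)).astype(float)
q, residual = coarse_residual(block)
print("max |residual| =", np.abs(residual).max())
```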
53. An image encoding device, comprising:
a first stage, operating to produce first information indicating a reduced quality compressed version of the original image;
a second stage which analyzes the first information to determine an image classification thereof, and outputs a control script indicative of an efficient compression method based on said image classification;
a third stage, responsive to said control script, to select a compression method from among a plurality of compression methods based on said control script and compressing image information using said compression method to produce second information; and a fourth stage which assembles the first information and the second information into a message to be sent.
54. A device as in claim 53, wherein said second stage comprises a discrete cosine transform device, operating to obtain discrete cosine transform coefficients indicative of the image and to determine quantization step sizes therefrom.
55. A device as in claim 53, wherein said first stage comprises a component separator, separating chrominance components from luminance components; wherein said first stage decimates said chrominance components; and said third stage compresses said luminance components using a discrete cosine transform technique.
56. A device as in claim 55, wherein said second stage comprises a discrete cosine transform coefficient determining device, determining optimal quantization step sizes, and quantizing discrete cosine transform coefficients using said optimal step sizes.
57. A device as in claim 56, further comprising a dequantizer for dequantizing the discrete cosine transform quantized values to determine an error between the dequantized values and the original image to form a residual, and a fifth stage operating for compressing said residual.
58. A device as in claim 57 wherein said fifth stage comprises an adaptive vector quantizer operating to compress the residual by matching the residual against a group of commonly-occurring block patterns in a codebook.
59. A device as in claim 55, wherein said chrominance is compressed by decimating the color, and fitting the decimated data to a spline function to determine optimal reconstruction weights to minimize a mean squared error.
60. An apparatus as in claim 15, wherein said first and second parts of the message are totally separate.
61. A method of encoding an image, comprising:
processing the image according to a first technique to produce a processed image;
separating color components of the processed image from intensity components of the processed image;
compressing said color components of the image using a color component compression technique; and
further compressing the intensity components of the image using an intensity component compression technique different than the color component compression technique.
62. A method as in claim 61 wherein said first technique is a compression technique which includes a decimation technique, said color component compression technique includes a discrete cosine transform compression technique and said intensity component compression technique includes a differential pulse code modulation technique.
63. A method as in claim 61 wherein said first technique includes a decimation technique followed by a technique of reconstructing information from the decimated data obtained from the decimation technique.
64. A method as in claim 61 further comprising determining optimal compression techniques, and producing a control script indicative thereof, at least one of said compression techniques being chosen based on said control script.
65. A method as in claim 61 wherein said color component compression technique is an optimized discrete cosine transform.
66. A method as in claim 64 wherein said color component compression technique is an optimized discrete cosine transform.
67. A method as in claim 66 further comprising determining optimal quantization step sizes based on said control script.
68. A method as in claim 66 further comprising reverse discrete cosine transforming information obtained by said color component compression technique using the discrete cosine transform-compressed signal, and determining a difference between the reverse-discrete-cosine transformed signal and the original signal to determine an error signal therebetween.
69. A method as in claim 68 further comprising comparing said error to a codebook, and choosing a codebook entry which matches most closely with said error.
70. A method as in claim 61 wherein said further compression is by a differential pulse code modulation.
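For illustration of the differential pulse code modulation named in claims 62 and 70, a minimal DPCM round trip is sketched below; the step size and the choice of the previously reconstructed sample as the predictor are assumptions, not details taken from the claims.

```python
# Minimal DPCM sketch: each intensity sample is predicted from the previously
# reconstructed sample and only the quantized prediction error is kept.
def dpcm_encode(samples, step=4):
    codes, prev = [], 0
    for s in samples:
        code = int(round((s - prev) / step))   # quantized prediction error
        codes.append(code)
        prev += code * step                    # decoder-visible reconstruction
    return codes

def dpcm_decode(codes, step=4):
    out, prev = [], 0
    for code in codes:
        prev += code * step
        out.append(prev)
    return out

row = [10, 12, 15, 20, 30, 28, 27, 40]
codes = dpcm_encode(row)
print(codes, dpcm_decode(codes))
```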
71. A compression device, comprising:
a first element receiving an image to be compressed;

a second element carrying out an initial compression on said image received by said first element to produce an output initially-compressed image indicative thereof;

a third element which separates said output initially-compressed image into intensity components and color components;
a fourth element which compresses said color components using a first compression technique, and a fifth element which compresses said intensity components using a second compression technique, different than said first compression technique.
72. An encoding apparatus as in claim 71 wherein said first compression technique is a discrete cosine transform technique and said second compression technique is a differential PCM technique.
73. A system as in claim 72 wherein said second element is a decimating and curve fitting compressor.
74. A method of compressing data, comprising:
first compressing said data using a discrete cosine transform technique to produce an output signal indicative of a discrete cosine transform-compressed data;
reviewing said discrete cosine transform-compressed data, and re-converting said discrete cosine transform-compressed data to reconstructed data of the same form as the starting data, and determining differences between said starting data and said reconstructed data;
comparing said differences to a plurality of quantized differences from a codebook and choosing a closest match; and forming an output message that includes coefficients of said discrete cosine transform and an index associated with said codebook, as compressed data indicative of the data.
75. A method as in claim 74 wherein said data to be compressed is an original image which has been pre-compressed using a technique which is different than said discrete cosine transform technique and said codebook technique.
76. A method as in claim 74 wherein said first technique is a decimate and curve fitting technique.
77. A method of selectively coding an image, comprising:
dividing said image into a plurality of areas, each area representing a portion of the image;
comparing each said area with a value indicating whether said area should or should not be rendered in an enhanced mode;

adding a prioritized value to an enhancement list for each of said areas that will be rendered in the enhanced mode;
compressing values which are on said enhancement list using a high resolution compression technique; and compressing values which are not on said enhancement list using a different compression technique.
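The enhancement list of claim 77 might be built along the following lines; the block size, the standard-deviation activity measure used as the prioritized value, and the threshold are illustrative assumptions only.

```python
# Score each block, keep blocks above a threshold on a priority-ordered list,
# and route them to the higher-resolution coder.
import numpy as np

def build_enhancement_list(image, block=8, threshold=20.0):
    h, w = image.shape
    enhanced, plain = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = image[y:y + block, x:x + block]
            activity = tile.std()                   # proxy for visual importance
            if activity > threshold:
                enhanced.append((activity, (y, x)))  # prioritized entry
            else:
                plain.append((y, x))
    enhanced.sort(reverse=True)                      # highest priority first
    return enhanced, plain

img = np.random.randint(0, 256, (64, 64)).astype(float)
hi, lo = build_enhancement_list(img)
print(len(hi), "blocks for high-resolution coding;", len(lo), "blocks otherwise")
```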
78. A method as in claim 77 wherein said high resolution compression technique is a high resolution residual calculator.
79. A method as in claim 77 wherein said areas are blocks of a pre-compressed image.
80. A method as in claim 77 further comprising compressing said image using a discrete cosine transform, wherein said high resolution compression technique is a high resolution residual calculator, and said different compression technique uses a lower resolution residual calculator.
81. An element as in claim (control script) further comprising a channel encoder, obtaining a plurality of data segments each of which indicates data from one of said compression techniques, said encoding being done in accordance with the control script.
82. An image compression system, comprising:
a first compression element which pre-compresses an image to produce a pre-compressed image;
a color converter device which separates said first pre-compressed image into intensity components and color components;
an image classifier, which classifies said image to determine at least one compression technique which will most efficiently compress said image, and produces a control script indicative thereof;
a compression element, including elements for operating according to one of a plurality of different compression techniques, receiving said control script and compressing based on one of said plurality of compression techniques based on said control script; and a channel coder which interleaves and channel-codes information from said compression element, said channel coder being responsive to said control script, and channel coding in accordance therewith.
83. A system as in claim 82 wherein said channel coder includes a plurality of different kinds of encoding techniques, which are selected by said control script.
84. A system as in claim 82 wherein said compressing element includes a discrete cosine transform compressing element and a differential pulse code modulation compressing element.
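One possible reading of the script-driven channel coding of claims 81-84 is sketched below: the control script names an encoder per data segment, and the encoded segments are then interleaved in fixed-size chunks. The encoder table, chunk size, and segment names are assumptions, and a practical coder would also emit lengths so the decoder can de-interleave.

```python
import zlib

ENCODERS = {
    "raw":  lambda b: b,
    "zlib": lambda b: zlib.compress(b),   # stand-in for an entropy coder
}

def channel_encode(segments, script, chunk=32):
    coded = [ENCODERS[script[name]](segments[name]) for name in script]
    out = bytearray()
    offset = 0
    while any(offset < len(c) for c in coded):       # round-robin interleave
        for c in coded:
            out += c[offset:offset + chunk]          # lengths omitted here
        offset += chunk
    return bytes(out)

segments = {"dct": bytes(200), "vq_residual": bytes(90)}
script = {"dct": "zlib", "vq_residual": "raw"}       # produced by the classifier
print(len(channel_encode(segments, script)), "bytes on the channel")
```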
85. An adaptive vector quantizing compression device, comprising:
a first element for receiving data to be processed;
an image subdivider, operating to subdivide the image data into a set of pixel blocks of predetermined length;
a codebook, which includes a plurality of vectors that correspond to common patterns found in the population of data;
a processing element, operating to determine a best match between each said pixel block and said codebook by determining a minimum squared error summed over all elements in the block, and to produce an index indicative thereof; and transmitting the codebook value in place of the original image data.
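Purely as a sketch of the minimum-squared-error codebook match recited in claim 85 (the random codebook and the 4x4 block size are assumptions for the example):

```python
import numpy as np

def vq_encode(blocks, codebook):
    """blocks: (n, k) array; codebook: (m, k) array -> (n,) indices."""
    # squared error of every block against every codebook vector, summed over elements
    err = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return err.argmin(axis=1)

def vq_decode(indices, codebook):
    return codebook[indices]          # codebook vectors replace original data

rng = np.random.default_rng(0)
codebook = rng.integers(0, 256, (16, 16)).astype(float)   # 16 patterns of 4x4
blocks = rng.integers(0, 256, (10, 16)).astype(float)
idx = vq_encode(blocks, codebook)
print(idx, "-> reconstruction error",
      np.mean((vq_decode(idx, codebook) - blocks) ** 2))
```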
86. A compressed file format, comprising:
a header segment which describes overhead information about the system and an object being compressed;
a plurality of segments of image information, each segment separate from each other segment, and each segment including separate information therefrom;
each of said separate information being separately displayable information, and at least one segment of said information including an initial low resolution image.
87. A data format as in claim 86, wherein said header segment includes information about a following data stream, including an indication of whether the data stream includes image information or includes resource information.
88. The system as in claim 87, wherein said resource information includes look-up tables and vector quantization tables.
89. The system as in claim 86, wherein said header includes information indicative of a version of coding used for the image information.
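The header-plus-segments layout of claims 86-89 could be serialized in many ways; the byte layout below is one hypothetical example, with a version byte in the header and a per-segment flag distinguishing image data ('I') from resource data ('R') such as look-up or vector quantization tables.

```python
import struct

def pack_file(version, segments):
    out = struct.pack(">4sBH", b"IMGC", version, len(segments))  # header
    for kind, payload in segments:                                # b"I" or b"R"
        out += struct.pack(">cI", kind, len(payload)) + payload
    return out

def parse_file(data):
    magic, version, count = struct.unpack_from(">4sBH", data, 0)
    offset, segs = struct.calcsize(">4sBH"), []
    for _ in range(count):
        kind, size = struct.unpack_from(">cI", data, offset)
        offset += struct.calcsize(">cI")
        segs.append((kind, data[offset:offset + size]))
        offset += size
    return version, segs

blob = pack_file(1, [(b"R", b"vq-table..."), (b"I", b"low-res panel...")])
print(parse_file(blob))
```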
90. A decoder system which decodes a compressed file into an uncompressed file, comprising:
an element which receives the compressed file including a plurality of panels;

a decompression element which serially expands each panel to produce file information therefrom; and a file memory, storing information indicative of the file, and receiving more information from each panel to provide progressively more file information than that present in a previous panel.
91. A decoder as in claim 90, wherein said file is an image and a first panel of image data is decompressed to provide a coarse representation of the image.
92. A decoder as in claim 91, wherein said first, coarse layer of the image comprises a miniature version of the image, having a smaller size than an original image, and which is interpolated to a full size image.
93. A decoder as in claim 92, comprising a first decoding element which decompresses at least one panel comprising a thumbnail image, a second decoding element which decompresses at least one panel comprising a splash image, a third decoding element which decompresses at least one panel comprising information for a standard image and a fourth decoding element which decompresses at least one panel to provide information for a high detail image.
94. A decoder as in claim 92, wherein said interpolation includes an interpolator, controlled according to an interpolation factor, said interpolation factor controlling an amount of interpolation.
95. A decoder as in claim 91, further comprising an element for decompressing a discrete cosine transform ("DCT") data segment.
96. A decoder as in claim 95, further comprising an element for decompressing a vector quantized residual from the DCT data segment.
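A schematic of the progressive panel decoder of claims 90-93 follows; the four stage names mirror the claim language, but the decode stubs and the dictionary standing in for the file memory are placeholders for illustration.

```python
# Each arriving panel is expanded and merged into the image memory, so every
# completed panel yields a displayable, progressively better picture.
STAGES = ["thumbnail", "splash", "standard", "high_detail"]

def decode_panel(stage, payload, image_memory):
    # placeholder: a real decoder would interpolate, add DCT data, or add a
    # VQ residual depending on the stage
    image_memory[stage] = f"decoded {len(payload)} bytes"
    return image_memory

def display(image_memory):
    print("display now holds:", ", ".join(image_memory))

def progressive_decode(panels):
    image_memory = {}
    for stage, payload in zip(STAGES, panels):
        decode_panel(stage, payload, image_memory)
        display(image_memory)              # refresh after every complete panel
    return image_memory

progressive_decode([b"\x00" * n for n in (120, 800, 4000, 9000)])
```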
97. A method of compressing an image, comprising:
decomposing an initial image to be compressed into a plurality of sub-images, each sub-image having content which is homogeneous with respect to a particular feature;
analyzing each of said sub-images and determining which of said sub-images are visually important;
optimizing compression methods for each of said sub-images in a way such that visually important sub-images have more information associated therewith.
98. A method as in claim 97, wherein one of said compression methods is a discrete cosine transform ("DCT") and said optimizing includes setting a control script indicating a quantization step size of said DCT.
99. A method as in claim 97, wherein said compression technique is a vector quantization, and said optimizing includes setting a feature of a codebook used in said vector quantization.
100. A method as in claim 97, wherein said classification uses fuzzy logic to determine, among a plurality of classes, whether said image content is more like one class or more like another class.
101. A method as in claim 100, wherein said compression further comprises mapping input sets to corresponding output sets, said output sets indicating which of a plurality of compression methods to apply, and blending said output sets to provide a control script.
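As a rough sketch of the fuzzy blending described in claims 100-101, the fragment below maps a single block-activity feature through triangular input sets, weights each rule's suggested quantization step by its firing strength, and blends the results into one control-script value; the membership shapes and rule outputs are assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

RULES = [  # (input fuzzy set, output: quantization step to suggest)
    (lambda x: tri(x, -1, 0, 30), 4.0),     # "low activity"  -> fine steps
    (lambda x: tri(x, 10, 40, 70), 12.0),   # "medium activity"
    (lambda x: tri(x, 50, 100, 200), 32.0), # "high activity" -> coarse steps
]

def control_script(activity):
    weights = [mf(activity) for mf, _ in RULES]
    total = sum(weights) or 1.0
    step = sum(w * out for w, (_, out) in zip(weights, RULES)) / total
    return {"quant_step": round(step, 2)}   # blended output set

for a in (5, 35, 90):
    print(a, control_script(a))
```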
102. A method of classifying an image, comprising:
dividing said image into a plurality of sub-images, each said sub-image having a characteristic which is uniform within the sub-image by at least a predetermined amount;
carrying out a first kind of image compression on the sub-image;
obtaining a combinational overview on the results of the first kind of image compression to determine a profile of the image component; and comparing said combinational overview with a plurality of rules, using a plurality of fuzzy input sets having an input rule base, said input sets indicating a plurality of image types, to determine a type of said sub-image.
103. A method as in claim 102, wherein said first image compression is a DCT compression, and said combinational overview is a combination of each DCT component, histogrammed to provide a frequency domain profile.
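The frequency domain profile of claim 103 could be formed roughly as below: per-block DCT coefficients (assumed computed by an earlier stage) are pooled by their distance from DC and their average energies collected into a normalized profile for the fuzzy rule base to match; the 15-band split is an assumption.

```python
import numpy as np

def frequency_profile(dct_blocks):
    """dct_blocks: (n, 8, 8) array of per-block DCT coefficients."""
    u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
    band = u + v                                   # crude frequency ordering
    energy = (dct_blocks ** 2).mean(axis=0)        # average energy per coefficient
    profile = np.zeros(15)
    for b in range(15):
        profile[b] = energy[band == b].sum()
    return profile / profile.sum()

blocks = np.random.randn(100, 8, 8) * np.linspace(8, 1, 8)[None, None, :]
print(np.round(frequency_profile(blocks), 3))
```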
104. A method as in claim 103, further comprising determining a plurality of spatial domain blocks, and matching the spatial domain blocks with a special pattern list.
105. A method of identifying an optimal compression technique for a portion of an image, comprising:
determining a histogram of coefficients of at least a first compression technique; and matching said histogram, using fuzzy logic, to a closest match to determine an ideal compression technique; and determining components of said image.
106. A method of determining enhancement values for an image, comprising:
dividing said image into a plurality of sub-images, each said sub-image having a predetermined characteristic;
testing a parameter of each said sub-image against a threshold value, said threshold value indicating that said sub-image has a substantial amount of change from a previous sub-image; and
determining those parts of the image which exceed said threshold value as being enhanced portions of the image.
107. A method as in claim 106, wherein said parameter comprises values of color components, said color components being compared against a normalized threshold value for said color components.
108. A method as in claim 107, further comprising compressing said image using a discrete cosine transform technique and analyzing coefficients of the discrete cosine transform to determine regions where the compression ratio can be adjusted without affecting image quality.
CA002195110A 1994-07-14 1995-07-14 Method and apparatus for compressing images Abandoned CA2195110A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US27616194A 1994-07-14 1994-07-14
US08/276,161 1994-07-14

Publications (1)

Publication Number Publication Date
CA2195110A1 true CA2195110A1 (en) 1996-02-01

Family

ID=23055447

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002195110A Abandoned CA2195110A1 (en) 1994-07-14 1995-07-14 Method and apparatus for compressing images

Country Status (8)

Country Link
US (2) US5892847A (en)
EP (1) EP0770246A4 (en)
JP (1) JP2000511363A (en)
AU (1) AU698055B2 (en)
BR (1) BR9508403A (en)
CA (1) CA2195110A1 (en)
MX (1) MX9700385A (en)
WO (1) WO1996002895A1 (en)

Families Citing this family (266)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE19518705C1 (en) * 1995-05-22 1996-11-21 Siemens Ag Method for coding image sequences in a transmission unit
JP3516534B2 (en) * 1995-08-31 2004-04-05 シャープ株式会社 Video information encoding device and video information decoding device
US6169820B1 (en) * 1995-09-12 2001-01-02 Tecomac Ag Data compression process and system for compressing data
US5987181A (en) 1995-10-12 1999-11-16 Sharp Kabushiki Kaisha Coding and decoding apparatus which transmits and receives tool information for constructing decoding scheme
US5764921A (en) * 1995-10-26 1998-06-09 Motorola Method, device and microprocessor for selectively compressing video frames of a motion compensated prediction-based video codec
WO1997030551A1 (en) * 1996-02-14 1997-08-21 Olivr Corporation Ltd. Method and systems for progressive asynchronous transmission of multimedia data
EP0817121A3 (en) * 1996-06-06 1999-12-22 Matsushita Electric Industrial Co., Ltd. Image coding method and system
GB9613039D0 (en) * 1996-06-21 1996-08-28 Philips Electronics Nv Image data compression for interactive applications
JP3864400B2 (en) * 1996-10-04 2006-12-27 ソニー株式会社 Image processing apparatus and image processing method
JP3408094B2 (en) * 1997-02-05 2003-05-19 キヤノン株式会社 Image processing apparatus and method
US7284187B1 (en) * 1997-05-30 2007-10-16 Aol Llc, A Delaware Limited Liability Company Encapsulated document and format system
US5973734A (en) 1997-07-09 1999-10-26 Flashpoint Technology, Inc. Method and apparatus for correcting aspect ratio in a camera graphical user interface
EP0899960A3 (en) * 1997-08-29 1999-06-09 Canon Kabushiki Kaisha Digital signal coding and decoding
US6281873B1 (en) * 1997-10-09 2001-08-28 Fairchild Semiconductor Corporation Video line rate vertical scaler
US6216119B1 (en) * 1997-11-19 2001-04-10 Netuitive, Inc. Multi-kernel neural network concurrent learning, monitoring, and forecasting system
US6141017A (en) * 1998-01-23 2000-10-31 Iterated Systems, Inc. Method and apparatus for scaling an array of digital data using fractal transform
US6400776B1 (en) * 1998-02-10 2002-06-04 At&T Corp. Method and apparatus for high speed data transmission by spectral decomposition of the signaling space
EP0936813A1 (en) * 1998-02-16 1999-08-18 CANAL+ Société Anonyme Processing of digital picture data in a decoder
US6567119B1 (en) * 1998-03-26 2003-05-20 Eastman Kodak Company Digital imaging system and file format for storage and selective transmission of processed and unprocessed image data
US7453498B2 (en) * 1998-03-26 2008-11-18 Eastman Kodak Company Electronic image capture device and image file format providing raw and processed image data
US6377706B1 (en) * 1998-05-12 2002-04-23 Xerox Corporation Compression framework incorporating decoding commands
US6256415B1 (en) * 1998-06-10 2001-07-03 Seiko Epson Corporation Two row buffer image compression (TROBIC)
US6275614B1 (en) 1998-06-26 2001-08-14 Sarnoff Corporation Method and apparatus for block classification and adaptive bit allocation
US6583890B1 (en) * 1998-06-30 2003-06-24 International Business Machines Corporation Method and apparatus for improving page description language (PDL) efficiency by recognition and removal of redundant constructs
US6148106A (en) * 1998-06-30 2000-11-14 The United States Of America As Represented By The Secretary Of The Navy Classification of images using a dictionary of compressed time-frequency atoms
US6646761B1 (en) * 1998-08-12 2003-11-11 Texas Instruments Incorporated Efficient under color removal
US6266442B1 (en) * 1998-10-23 2001-07-24 Facet Technology Corp. Method and apparatus for identifying objects depicted in a videostream
US6298169B1 (en) * 1998-10-27 2001-10-02 Microsoft Corporation Residual vector quantization for texture pattern compression and decompression
US6624761B2 (en) 1998-12-11 2003-09-23 Realtime Data, Llc Content independent data compression method and system
US6260031B1 (en) * 1998-12-21 2001-07-10 Philips Electronics North America Corp. Code compaction by evolutionary algorithm
KR100313217B1 (en) * 1998-12-23 2001-12-28 서평원 Pipeline DCT device
US6317141B1 (en) 1998-12-31 2001-11-13 Flashpoint Technology, Inc. Method and apparatus for editing heterogeneous media objects in a digital imaging device
JP3496559B2 (en) * 1999-01-06 2004-02-16 日本電気株式会社 Image feature generation apparatus and image feature generation method
US6604158B1 (en) * 1999-03-11 2003-08-05 Realtime Data, Llc System and methods for accelerated data storage and retrieval
US6601104B1 (en) 1999-03-11 2003-07-29 Realtime Data Llc System and methods for accelerated data storage and retrieval
FR2792151B1 (en) * 1999-04-08 2004-04-30 Canon Kk METHODS AND DEVICES FOR ENCODING AND DECODING DIGITAL SIGNALS, AND SYSTEMS IMPLEMENTING THE SAME
US6788811B1 (en) * 1999-05-10 2004-09-07 Ricoh Company, Ltd. Coding apparatus, decoding apparatus, coding method, decoding method, amd computer-readable recording medium for executing the methods
US6421467B1 (en) * 1999-05-28 2002-07-16 Texas Tech University Adaptive vector quantization/quantizer
AU5776200A (en) * 1999-06-28 2001-01-31 Iterated Systems, Inc. Jpeg image artifacts reduction
JP3784583B2 (en) 1999-08-13 2006-06-14 沖電気工業株式会社 Audio storage device
US6779040B1 (en) * 1999-08-27 2004-08-17 Hewlett-Packard Development Company, L.P. Method and system for serving data files compressed in accordance with tunable parameters
AU7346700A (en) * 1999-09-03 2001-04-10 T. C. Cheng Fast and efficient computation of cubic-spline interpolation for data compression
US6768817B1 (en) 1999-09-03 2004-07-27 Truong, T.K./ Chen, T.C. Fast and efficient computation of cubic-spline interpolation for data compression
US6625308B1 (en) * 1999-09-10 2003-09-23 Intel Corporation Fuzzy distinction based thresholding technique for image segmentation
US6658399B1 (en) * 1999-09-10 2003-12-02 Intel Corporation Fuzzy based thresholding technique for image segmentation
EP1091576A1 (en) 1999-09-30 2001-04-11 Koninklijke Philips Electronics N.V. Picture signal processing
US7053944B1 (en) 1999-10-01 2006-05-30 Intel Corporation Method of using hue to interpolate color pixel signals
US6393154B1 (en) 1999-11-18 2002-05-21 Quikcat.Com, Inc. Method and apparatus for digital image compression using a dynamical system
US6992671B1 (en) * 1999-12-09 2006-01-31 Monotype Imaging, Inc. Method and apparatus for compressing Bezier descriptions of letterforms in outline fonts using vector quantization techniques
US7158178B1 (en) 1999-12-14 2007-01-02 Intel Corporation Method of converting a sub-sampled color image
US6628827B1 (en) 1999-12-14 2003-09-30 Intel Corporation Method of upscaling a color image
US7068854B1 (en) * 1999-12-29 2006-06-27 Ge Medical Systems Global Technology Company, Llc Correction of defective pixels in a detector
US6600495B1 (en) * 2000-01-10 2003-07-29 Koninklijke Philips Electronics N.V. Image interpolation and decimation using a continuously variable delay filter and combined with a polyphase filter
US20010047473A1 (en) 2000-02-03 2001-11-29 Realtime Data, Llc Systems and methods for computer initialization
US6700589B1 (en) * 2000-02-17 2004-03-02 International Business Machines Corporation Method, system, and program for magnifying content downloaded from a server over a network
WO2001069937A2 (en) * 2000-03-13 2001-09-20 Point Cloud, Inc. Two-dimensional image compression method
US6549674B1 (en) * 2000-10-12 2003-04-15 Picsurf, Inc. Image compression based on tiled wavelet-like transform using edge and non-edge filters
US6937362B1 (en) * 2000-04-05 2005-08-30 Eastman Kodak Company Method for providing access to an extended color gamut digital image and providing payment therefor
US6938024B1 (en) * 2000-05-04 2005-08-30 Microsoft Corporation Transmitting information given constrained resources
US20020029285A1 (en) * 2000-05-26 2002-03-07 Henry Collins Adapting graphical data, processing activity to changing network conditions
US6781608B1 (en) * 2000-06-30 2004-08-24 America Online, Inc. Gradual image display
AU2001273326A1 (en) * 2000-07-11 2002-01-21 Mediaflow, Llc System and method for calculating an optimum display size for a visual object
GB2364843A (en) * 2000-07-14 2002-02-06 Sony Uk Ltd Data encoding based on data quantity and data quality
US6754383B1 (en) * 2000-07-26 2004-06-22 Lockheed Martin Corporation Lossy JPEG compression/reconstruction using principal components transformation
US7194128B1 (en) 2000-07-26 2007-03-20 Lockheed Martin Corporation Data compression using principal components transformation
US6891960B2 (en) 2000-08-12 2005-05-10 Facet Technology System for road sign sheeting classification
US20020035566A1 (en) * 2000-09-20 2002-03-21 Choicepoint, Inc. Method and system for the wireless delivery of images
US9143546B2 (en) 2000-10-03 2015-09-22 Realtime Data Llc System and method for data feed acceleration and encryption
US7417568B2 (en) 2000-10-03 2008-08-26 Realtime Data Llc System and method for data feed acceleration and encryption
US8692695B2 (en) 2000-10-03 2014-04-08 Realtime Data, Llc Methods for encoding and decoding data
JP4627110B2 (en) * 2000-10-16 2011-02-09 富士通株式会社 Data storage
US6870961B2 (en) * 2000-11-10 2005-03-22 Ricoh Company, Ltd. Image decompression from transform coefficients
JP3893474B2 (en) * 2000-11-10 2007-03-14 富士フイルム株式会社 Image data forming method and image data recording apparatus
US7340676B2 (en) * 2000-12-29 2008-03-04 Eastman Kodak Company System and method for automatic layout of images in digital albums
US7386046B2 (en) 2001-02-13 2008-06-10 Realtime Data Llc Bandwidth sensitive data compression and decompression
US7162080B2 (en) * 2001-02-23 2007-01-09 Zoran Corporation Graphic image re-encoding and distribution system and method
DE60140654D1 (en) * 2001-03-15 2010-01-14 Honda Res Inst Europe Gmbh Simulation of convolution network behavior and display of internal states of a network
US6624769B2 (en) * 2001-04-27 2003-09-23 Nokia Corporation Apparatus, and associated method, for communicating content in a bandwidth-constrained communication system
US6944357B2 (en) * 2001-05-24 2005-09-13 Microsoft Corporation System and process for automatically determining optimal image compression methods for reducing file size
US6832006B2 (en) * 2001-07-23 2004-12-14 Eastman Kodak Company System and method for controlling image compression based on image emphasis
KR20040054743A (en) * 2001-10-26 2004-06-25 코닌클리케 필립스 일렉트로닉스 엔.브이. Spatial scalable compression
US7453936B2 (en) * 2001-11-09 2008-11-18 Sony Corporation Transmitting apparatus and method, receiving apparatus and method, program and recording medium, and transmitting/receiving system
NZ515527A (en) * 2001-11-15 2003-08-29 Auckland Uniservices Ltd Method, apparatus and software for lossy data compression and function estimation
US7181071B2 (en) * 2001-11-27 2007-02-20 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding key value data of orientation interpolator node
US7042942B2 (en) * 2001-12-21 2006-05-09 Intel Corporation Zigzag in-order for image/video encoder and decoder
US10277656B2 (en) * 2002-01-29 2019-04-30 FiveOpenBooks, LLC Method and system for delivering media data
US7085422B2 (en) * 2002-02-20 2006-08-01 International Business Machines Corporation Layer based compression of digital images
US6976026B1 (en) * 2002-03-14 2005-12-13 Microsoft Corporation Distributing limited storage among a collection of media objects
US7180943B1 (en) * 2002-03-26 2007-02-20 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Compression of a data stream by selection among a set of compression tools
US7451457B2 (en) * 2002-04-15 2008-11-11 Microsoft Corporation Facilitating interaction between video renderers and graphics device drivers
US7219352B2 (en) 2002-04-15 2007-05-15 Microsoft Corporation Methods and apparatuses for facilitating processing of interlaced video images for progressive video displays
EP1359773B1 (en) * 2002-04-15 2016-08-24 Microsoft Technology Licensing, LLC Facilitating interaction between video renderers and graphics device drivers
JP2003319184A (en) * 2002-04-18 2003-11-07 Toshiba Tec Corp Image processing apparatus, image processing method, image display method, and image storage method
AU2003237279A1 (en) 2002-05-29 2003-12-19 Pixonics, Inc. Classifying image areas of a video signal
US6928065B2 (en) * 2002-06-11 2005-08-09 Motorola, Inc. Methods of addressing and signaling a plurality of subscriber units in a single slot
MXPA05003984A (en) * 2002-10-15 2005-06-22 Digimarc Corp Identification document and related methods.
US20040093427A1 (en) * 2002-10-29 2004-05-13 Lopez Ricardo Jorge Service diversity for communication system
US7113185B2 (en) * 2002-11-14 2006-09-26 Microsoft Corporation System and method for automatically learning flexible sprites in video layers
US8769395B2 (en) * 2002-12-13 2014-07-01 Ricoh Co., Ltd. Layout objects as image layers
US8036475B2 (en) * 2002-12-13 2011-10-11 Ricoh Co., Ltd. Compression for segmented images and other types of sideband information
US7212676B2 (en) * 2002-12-30 2007-05-01 Intel Corporation Match MSB digital image compression
FR2850512B1 (en) * 2003-01-28 2005-03-11 Medialive AUTOMATIC AND ADAPTIVE ANALYSIS AND SCRAMBLING METHOD AND SYSTEM FOR DIGITAL VIDEO STREAMS
US20040150840A1 (en) * 2003-01-30 2004-08-05 Farrell Michael E. Methods and systems for structuring a raster image file for parallel streaming rendering by multiple processors
DK1600008T3 (en) * 2003-02-27 2008-06-30 T Mobile Deutschland Gmbh Method for compressed image data transfer to a three-dimensional rendering of scenes and objects
JP2004304469A (en) * 2003-03-31 2004-10-28 Minolta Co Ltd Regional division of original image, compression program, and regional division and compression method
US7212666B2 (en) * 2003-04-01 2007-05-01 Microsoft Corporation Generating visually representative video thumbnails
US20040202326A1 (en) * 2003-04-10 2004-10-14 Guanrong Chen System and methods for real-time encryption of digital images based on 2D and 3D multi-parametric chaotic maps
US7420625B1 (en) * 2003-05-20 2008-09-02 Pixelworks, Inc. Fuzzy logic based adaptive Y/C separation system and method
US7110606B2 (en) * 2003-06-12 2006-09-19 Trieu-Kien Truong System and method for a direct computation of cubic spline interpolation for real-time image codec
US7158668B2 (en) * 2003-08-01 2007-01-02 Microsoft Corporation Image processing using linear light values and other image processing improvements
US7643675B2 (en) * 2003-08-01 2010-01-05 Microsoft Corporation Strategies for processing image information using a color information data structure
US7657102B2 (en) * 2003-08-27 2010-02-02 Microsoft Corp. System and method for fast on-line learning of transformed hidden Markov models
US9614772B1 (en) 2003-10-20 2017-04-04 F5 Networks, Inc. System and method for directing network traffic in tunneling applications
EP1538844A3 (en) * 2003-11-26 2006-05-31 Samsung Electronics Co., Ltd. Color image residue transformation and encoding method
US7664763B1 (en) * 2003-12-17 2010-02-16 Symantec Operating Corporation System and method for determining whether performing a particular process on a file will be useful
US7302475B2 (en) * 2004-02-20 2007-11-27 Harris Interactive, Inc. System and method for measuring reactions to product packaging, advertising, or product features over a computer-based network
US7483577B2 (en) * 2004-03-02 2009-01-27 Mitsubishi Electric Research Laboratories, Inc. System and method for joint de-interlacing and down-sampling using adaptive frame and field filtering
US7948501B2 (en) * 2004-03-09 2011-05-24 Olympus Corporation Display control apparatus and method under plural different color spaces
US7590310B2 (en) 2004-05-05 2009-09-15 Facet Technology Corp. Methods and apparatus for automated true object-based image analysis and retrieval
US7664173B2 (en) 2004-06-07 2010-02-16 Nahava Inc. Method and apparatus for cached adaptive transforms for compressing data streams, computing similarity, and recognizing patterns
US7492953B2 (en) * 2004-06-17 2009-02-17 Smith Micro Software, Inc. Efficient method and system for reducing update requirements for a compressed binary image
US7764844B2 (en) * 2004-09-10 2010-07-27 Eastman Kodak Company Determining sharpness predictors for a digital image
US8024483B1 (en) 2004-10-01 2011-09-20 F5 Networks, Inc. Selective compression for network connections
KR100647294B1 (en) * 2004-11-09 2006-11-23 삼성전자주식회사 Method and apparatus for encoding and decoding image data
US7450723B2 (en) * 2004-11-12 2008-11-11 International Business Machines Corporation Method and system for providing for security in communication
US20060117268A1 (en) * 2004-11-30 2006-06-01 Micheal Talley System and method for graphical element selection for region of interest compression
US8230096B2 (en) 2005-01-14 2012-07-24 Citrix Systems, Inc. Methods and systems for generating playback instructions for playback of a recorded computer session
US7831728B2 (en) 2005-01-14 2010-11-09 Citrix Systems, Inc. Methods and systems for real-time seeking during real-time playback of a presentation layer protocol data stream
US20060159432A1 (en) 2005-01-14 2006-07-20 Citrix Systems, Inc. System and methods for automatic time-warped playback in rendering a recorded computer session
US8145777B2 (en) * 2005-01-14 2012-03-27 Citrix Systems, Inc. Method and system for real-time seeking during playback of remote presentation protocols
US8200828B2 (en) 2005-01-14 2012-06-12 Citrix Systems, Inc. Systems and methods for single stack shadowing
US8935316B2 (en) 2005-01-14 2015-01-13 Citrix Systems, Inc. Methods and systems for in-session playback on a local machine of remotely-stored and real time presentation layer protocol data
US8296441B2 (en) 2005-01-14 2012-10-23 Citrix Systems, Inc. Methods and systems for joining a real-time session of presentation layer protocol data
US8340130B2 (en) 2005-01-14 2012-12-25 Citrix Systems, Inc. Methods and systems for generating playback instructions for rendering of a recorded computer session
US20070183493A1 (en) * 2005-02-04 2007-08-09 Tom Kimpe Method and device for image and video transmission over low-bandwidth and high-latency transmission channels
JP2006236475A (en) * 2005-02-24 2006-09-07 Toshiba Corp Coded data reproduction apparatus
US8904463B2 (en) 2005-03-09 2014-12-02 Vudu, Inc. Live video broadcasting on distributed networks
US20080022343A1 (en) 2006-07-24 2008-01-24 Vvond, Inc. Multiple audio streams
US7698451B2 (en) 2005-03-09 2010-04-13 Vudu, Inc. Method and apparatus for instant playback of a movie title
US8219635B2 (en) 2005-03-09 2012-07-10 Vudu, Inc. Continuous data feeding in a distributed environment
US7191215B2 (en) * 2005-03-09 2007-03-13 Marquee, Inc. Method and system for providing instantaneous media-on-demand services by transmitting contents in pieces from client machines
US9176955B2 (en) 2005-03-09 2015-11-03 Vvond, Inc. Method and apparatus for sharing media files among network nodes
US20090025046A1 (en) * 2005-03-09 2009-01-22 Wond, Llc Hybrid architecture for media services
US7937379B2 (en) * 2005-03-09 2011-05-03 Vudu, Inc. Fragmentation of a file for instant access
US7583844B2 (en) * 2005-03-11 2009-09-01 Nokia Corporation Method, device, and system for processing of still images in the compressed domain
JP2006279850A (en) * 2005-03-30 2006-10-12 Sanyo Electric Co Ltd Image processing apparatus
JP4687216B2 (en) * 2005-04-18 2011-05-25 ソニー株式会社 Image signal processing apparatus, camera system, and image signal processing method
WO2006121986A2 (en) * 2005-05-06 2006-11-16 Facet Technology Corp. Network-based navigation system having virtual drive-thru advertisements integrated with actual imagery from along a physical route
US7756826B2 (en) 2006-06-30 2010-07-13 Citrix Systems, Inc. Method and systems for efficient delivery of previously stored content
US9621666B2 (en) 2005-05-26 2017-04-11 Citrix Systems, Inc. Systems and methods for enhanced delta compression
US8943304B2 (en) 2006-08-03 2015-01-27 Citrix Systems, Inc. Systems and methods for using an HTTP-aware client agent
US9407608B2 (en) 2005-05-26 2016-08-02 Citrix Systems, Inc. Systems and methods for enhanced client side policy
US9692725B2 (en) 2005-05-26 2017-06-27 Citrix Systems, Inc. Systems and methods for using an HTTP-aware client agent
US8121428B2 (en) * 2005-05-31 2012-02-21 Microsoft Corporation Accelerated image rendering
US8099511B1 (en) 2005-06-11 2012-01-17 Vudu, Inc. Instantaneous media-on-demand
JP4321496B2 (en) * 2005-06-16 2009-08-26 ソニー株式会社 Image data processing apparatus, image data processing method and program
KR100813258B1 (en) 2005-07-12 2008-03-13 삼성전자주식회사 Apparatus and method for encoding and decoding of image data
KR101045205B1 (en) * 2005-07-12 2011-06-30 삼성전자주식회사 Apparatus and method for encoding and decoding of image data
KR101138393B1 (en) * 2005-07-15 2012-04-26 삼성전자주식회사 Apparatus and method for encoding/decoding of color image and video using different prediction among color components according to coding modes
US7903306B2 (en) * 2005-07-22 2011-03-08 Samsung Electronics Co., Ltd. Sensor image encoding and/or decoding system, medium, and method
US7783781B1 (en) 2005-08-05 2010-08-24 F5 Networks, Inc. Adaptive compression
US20070030816A1 (en) * 2005-08-08 2007-02-08 Honeywell International Inc. Data compression and abnormal situation detection in a wireless sensor network
US8533308B1 (en) 2005-08-12 2013-09-10 F5 Networks, Inc. Network traffic management through protocol-configurable transaction processing
US8212823B2 (en) * 2005-09-28 2012-07-03 Synopsys, Inc. Systems and methods for accelerating sub-pixel interpolation in video processing applications
US8191008B2 (en) 2005-10-03 2012-05-29 Citrix Systems, Inc. Simulating multi-monitor functionality in a single monitor environment
WO2007044556A2 (en) * 2005-10-07 2007-04-19 Innovation Management Sciences, L.L.C. Method and apparatus for scalable video decoder using an enhancement stream
US8275909B1 (en) 2005-12-07 2012-09-25 F5 Networks, Inc. Adaptive compression
TWI261975B (en) * 2005-12-13 2006-09-11 Inventec Corp File compression system and method
CN101375238A (en) * 2005-12-14 2009-02-25 株式会社亚派 Image display device
US7882084B1 (en) 2005-12-30 2011-02-01 F5 Networks, Inc. Compression of data transmitted over a network
US7873065B1 (en) 2006-02-01 2011-01-18 F5 Networks, Inc. Selectively enabling network packet concatenation based on metrics
US8565088B1 (en) 2006-02-01 2013-10-22 F5 Networks, Inc. Selectively enabling packet concatenation based on a transaction boundary
CN100536523C (en) * 2006-02-09 2009-09-02 佳能株式会社 Method, device and storage media for the image classification
GB2435728A (en) * 2006-03-01 2007-09-05 Symbian Software Ltd A method for choosing a compression algorithm
US7660486B2 (en) * 2006-07-10 2010-02-09 Aten International Co., Ltd. Method and apparatus of removing opaque area as rescaling an image
US20080037880A1 (en) * 2006-08-11 2008-02-14 Lcj Enterprises Llc Scalable, progressive image compression and archiving system over a low bit rate internet protocol network
US8694684B2 (en) * 2006-08-21 2014-04-08 Citrix Systems, Inc. Systems and methods of symmetric transport control protocol compression
US9224145B1 (en) 2006-08-30 2015-12-29 Qurio Holdings, Inc. Venue based digital rights using capture device with digital watermarking capability
US9418450B2 (en) 2006-08-31 2016-08-16 Ati Technologies Ulc Texture compression techniques
US8296812B1 (en) 2006-09-01 2012-10-23 Vudu, Inc. Streaming video using erasure encoding
KR100820678B1 (en) * 2006-09-28 2008-04-11 엘지전자 주식회사 Apparatus and method for transmitting data in digital video recorder
US9356824B1 (en) 2006-09-29 2016-05-31 F5 Networks, Inc. Transparently cached network resources
KR100867131B1 (en) * 2006-11-10 2008-11-06 삼성전자주식회사 Apparatus and method for image displaying in portable terminal
US8417833B1 (en) 2006-11-29 2013-04-09 F5 Networks, Inc. Metacodec for optimizing network data compression based on comparison of write and read rates
DE102006057725B4 (en) * 2006-12-04 2010-01-14 Ralf Kopp Sports apparatus and method for designing its visual appearance
US8155462B2 (en) * 2006-12-29 2012-04-10 Fastvdo, Llc System of master reconstruction schemes for pyramid decomposition
US9106606B1 (en) 2007-02-05 2015-08-11 F5 Networks, Inc. Method, intermediate device and computer program code for maintaining persistency
US7477192B1 (en) 2007-02-22 2009-01-13 L-3 Communications Titan Corporation Direction finding system and method
US7865585B2 (en) 2007-03-12 2011-01-04 Citrix Systems, Inc. Systems and methods for providing dynamic ad hoc proxy-cache hierarchies
US7827237B2 (en) * 2007-03-12 2010-11-02 Citrix Systems, Inc. Systems and methods for identifying long matches of data in a compression history
US7532134B2 (en) 2007-03-12 2009-05-12 Citrix Systems, Inc. Systems and methods for sharing compression histories between multiple devices
US7460038B2 (en) 2007-03-12 2008-12-02 Citrix Systems, Inc. Systems and methods of clustered sharing of compression histories
US8255570B2 (en) 2007-03-12 2012-08-28 Citrix Systems, Inc. Systems and methods of compression history expiration and synchronization
US7619545B2 (en) 2007-03-12 2009-11-17 Citrix Systems, Inc. Systems and methods of using application and protocol specific parsing for compression
US7813588B2 (en) * 2007-04-27 2010-10-12 Hewlett-Packard Development Company, L.P. Adjusting source image data prior to compressing the source image data
WO2008147565A2 (en) * 2007-05-25 2008-12-04 Arc International, Plc Adaptive video encoding apparatus and methods
US8363264B2 (en) * 2007-06-29 2013-01-29 Brother Kogyo Kabushiki Kaisha Data transmission apparatus having data generation unit that outputs image data block including dot data lines made up of dot data elements
KR101345287B1 (en) * 2007-10-12 2013-12-27 삼성전자주식회사 Scalable video encoding method and apparatus and scalable video decoding method and apparatus
US20090110313A1 (en) * 2007-10-25 2009-04-30 Canon Kabushiki Kaisha Device for performing image processing based on image attribute
US20090179913A1 (en) * 2008-01-10 2009-07-16 Ali Corporation Apparatus for image reduction and method thereof
JP4697234B2 (en) * 2008-01-29 2011-06-08 セイコーエプソン株式会社 Image processing apparatus and image processing method
ES2306616B1 (en) 2008-02-12 2009-07-24 Fundacion Cidaut PROCEDURE FOR DETERMINATION OF THE LUMINANCE OF TRAFFIC SIGNS AND DEVICE FOR THEIR REALIZATION.
US8064408B2 (en) 2008-02-20 2011-11-22 Hobbit Wave Beamforming devices and methods
JP4900721B2 (en) * 2008-03-12 2012-03-21 株式会社メガチップス Image processing device
JP4919231B2 (en) * 2008-03-24 2012-04-18 株式会社メガチップス Image processing device
US20090290326A1 (en) * 2008-05-22 2009-11-26 Kevin Mark Tiedje Color selection interface for ambient lighting
US8270466B2 (en) * 2008-10-03 2012-09-18 Sony Corporation Adaptive decimation filter
WO2010042578A1 (en) 2008-10-08 2010-04-15 Citrix Systems, Inc. Systems and methods for real-time endpoint application flow control with network structure component
US8504716B2 (en) * 2008-10-08 2013-08-06 Citrix Systems, Inc Systems and methods for allocating bandwidth by an intermediary for flow control
EP2343878B1 (en) * 2008-10-27 2016-05-25 Nippon Telegraph And Telephone Corporation Pixel prediction value generation procedure automatic generation method, image encoding method, image decoding method, devices using these methods, programs for these methods, and recording medium on which these programs are recorded
JP5173873B2 (en) * 2008-11-20 2013-04-03 キヤノン株式会社 Image coding apparatus and control method thereof
JP5116650B2 (en) * 2008-12-10 2013-01-09 キヤノン株式会社 Image coding apparatus and control method thereof
US9723249B2 (en) * 2009-03-19 2017-08-01 Echostar Holdings Limited Archiving broadcast programs
US20100257174A1 (en) * 2009-04-02 2010-10-07 Matthew Dino Minuti Method for data compression utilizing pattern-analysis and matching means such as neural networks
JP5844263B2 (en) 2009-10-05 2016-01-13 ビーマル イメージング リミテッドBeamr Imaging Ltd. Apparatus and method for recompressing digital images
US20110122224A1 (en) * 2009-11-20 2011-05-26 Wang-He Lou Adaptive compression of background image (acbi) based on segmentation of three dimentional objects
US8315502B2 (en) * 2009-12-08 2012-11-20 Echostar Technologies L.L.C. Systems and methods for selective archival of media content
JP5875084B2 (en) 2010-04-29 2016-03-02 ビーマル イメージング リミテッドBeamr Imaging Ltd. Apparatus and method for recompression having a monotonic relationship between the degree of compression and the quality of the compressed image
JP2011249996A (en) * 2010-05-25 2011-12-08 Fuji Xerox Co Ltd Image processor, image transmitter, and program
US8774267B2 (en) * 2010-07-07 2014-07-08 Spinella Ip Holdings, Inc. System and method for transmission, processing, and rendering of stereoscopic and multi-view images
US20130188878A1 (en) * 2010-07-20 2013-07-25 Lockheed Martin Corporation Image analysis systems having image sharpening capabilities and methods using same
JP2013544451A (en) 2010-09-17 2013-12-12 アイ.シー.ブイ.ティー リミテッド Chroma downsampling error classification method
US9014471B2 (en) 2010-09-17 2015-04-21 I.C.V.T. Ltd. Method of classifying a chroma downsampling error
JP5903597B2 (en) * 2011-07-13 2016-04-13 パナソニックIpマネジメント株式会社 Image compression apparatus, image expansion apparatus, and image processing apparatus
US8615159B2 (en) 2011-09-20 2013-12-24 Citrix Systems, Inc. Methods and systems for cataloging text in a recorded session
US9154353B2 (en) 2012-03-07 2015-10-06 Hobbit Wave, Inc. Devices and methods using the hermetic transform for transmitting and receiving signals using OFDM
WO2013134506A2 (en) 2012-03-07 2013-09-12 Hobbit Wave, Inc. Devices and methods using the hermetic transform
US9542839B2 (en) 2012-06-26 2017-01-10 BTS Software Solutions, LLC Low delay low complexity lossless compression system
US10382842B2 (en) * 2012-06-26 2019-08-13 BTS Software Software Solutions, LLC Realtime telemetry data compression system
US11128935B2 (en) * 2012-06-26 2021-09-21 BTS Software Solutions, LLC Realtime multimodel lossless data compression system and method
US9953436B2 (en) * 2012-06-26 2018-04-24 BTS Software Solutions, LLC Low delay low complexity lossless compression system
KR101367777B1 (en) * 2012-08-22 2014-03-06 주식회사 핀그램 Adaptive predictive image compression system and method thereof
US9245714B2 (en) * 2012-10-01 2016-01-26 Kla-Tencor Corporation System and method for compressed data transmission in a maskless lithography system
US8824812B2 (en) * 2012-10-02 2014-09-02 Mediatek Inc Method and apparatus for data compression using error plane coding
JP2014078860A (en) * 2012-10-11 2014-05-01 Samsung Display Co Ltd Compressor, driving device, display device, and compression method
TWI502999B (en) * 2012-12-07 2015-10-01 Acer Inc Image processing method and electronic apparatus using the same
US9405015B2 (en) 2012-12-18 2016-08-02 Subcarrier Systems Corporation Method and apparatus for modeling of GNSS pseudorange measurements for interpolation, extrapolation, reduction of measurement errors, and data compression
US9250327B2 (en) * 2013-03-05 2016-02-02 Subcarrier Systems Corporation Method and apparatus for reducing satellite position message payload by adaptive data compression techniques
US10165227B2 (en) * 2013-03-12 2018-12-25 Futurewei Technologies, Inc. Context based video distribution and storage
US9123087B2 (en) * 2013-03-25 2015-09-01 Xerox Corporation Systems and methods for segmenting an image
JP6143521B2 (en) * 2013-04-01 2017-06-07 キヤノン株式会社 Information processing apparatus, information processing method, and program
US9654777B2 (en) 2013-04-05 2017-05-16 Qualcomm Incorporated Determining palette indices in palette-based video coding
US20150006390A1 (en) * 2013-06-26 2015-01-01 Visa International Service Association Using steganography to perform payment transactions through insecure channels
US9558567B2 (en) * 2013-07-12 2017-01-31 Qualcomm Incorporated Palette prediction in palette-based video coding
US11470339B2 (en) 2013-08-27 2022-10-11 Qualcomm Incorporated Residual prediction for intra block copying
US8879858B1 (en) * 2013-10-01 2014-11-04 Gopro, Inc. Multi-channel bit packing engine
US9531431B2 (en) 2013-10-25 2016-12-27 Hobbit Wave, Inc. Devices and methods employing hermetic transforms for encoding and decoding digital information in spread-spectrum communications systems
US9829568B2 (en) 2013-11-22 2017-11-28 VertoCOMM, Inc. Radar using hermetic transforms
US9877035B2 (en) * 2014-03-17 2018-01-23 Qualcomm Incorporated Quantization processes for residue differential pulse code modulation
US11304661B2 (en) 2014-10-23 2022-04-19 VertoCOMM, Inc. Enhanced imaging devices, and image construction methods and processes employing hermetic transforms
US9871684B2 (en) 2014-11-17 2018-01-16 VertoCOMM, Inc. Devices and methods for hermetic transform filters
KR102453803B1 (en) * 2015-09-10 2022-10-12 삼성전자주식회사 Method and apparatus for processing image
US10437918B1 (en) * 2015-10-07 2019-10-08 Google Llc Progressive image rendering using pan and zoom
US10305717B2 (en) 2016-02-26 2019-05-28 VertoCOMM, Inc. Devices and methods using the hermetic transform for transmitting and receiving signals using multi-channel signaling
US10552738B2 (en) * 2016-12-15 2020-02-04 Google Llc Adaptive channel coding using machine-learned models
CN207517054U (en) 2017-01-04 2018-06-19 意法半导体股份有限公司 Crossfire switchs
US20180189641A1 (en) 2017-01-04 2018-07-05 Stmicroelectronics S.R.L. Hardware accelerator engine
US10715177B2 (en) 2017-06-20 2020-07-14 Samsung Electronics Co., Ltd. Lossy compression drive
CN107318023B (en) * 2017-06-21 2020-12-22 西安万像电子科技有限公司 Image frame compression method and device
US11764940B2 (en) 2019-01-10 2023-09-19 Duality Technologies, Inc. Secure search of secret data in a semi-trusted environment using homomorphic encryption
US11049286B2 (en) * 2019-07-31 2021-06-29 Hewlett Packard Enterprise Development Lp Deep neural network color space optimization
US11593609B2 (en) 2020-02-18 2023-02-28 Stmicroelectronics S.R.L. Vector quantization decoding hardware unit for real-time dynamic decompression for parameters of neural networks
US20210361897A1 (en) * 2020-05-21 2021-11-25 Lawrence Neal Apparatus and method for selecting positive airway pressure mask interface
US11531873B2 (en) 2020-06-23 2022-12-20 Stmicroelectronics S.R.L. Convolution acceleration with embedded vector decompression
US11869222B2 (en) * 2021-03-10 2024-01-09 The United States Of America, As Represented By The Secretary Of The Navy Geospatial data abstraction library (GDAL) conversion tool for data compression
FR3124671B1 (en) * 2021-06-25 2023-07-07 Fond B Com Methods of decoding and encoding an image, devices and associated signal

Family Cites Families (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4122440A (en) * 1977-03-04 1978-10-24 International Business Machines Corporation Method and means for arithmetic string coding
US4222076A (en) * 1978-09-15 1980-09-09 Bell Telephone Laboratories, Incorporated Progressive image transmission
US4467317A (en) * 1981-03-30 1984-08-21 International Business Machines Corporation High-speed arithmetic compression coding using concurrent value updating
US4414580A (en) 1981-06-01 1983-11-08 Bell Telephone Laboratories, Incorporated Progressive transmission of two-tone facsimile
US4654484A (en) * 1983-07-21 1987-03-31 Interand Corporation Video compression/expansion system
US4903317A (en) * 1986-06-24 1990-02-20 Kabushiki Kaisha Toshiba Image processing apparatus
US4905297A (en) * 1986-09-15 1990-02-27 International Business Machines Corporation Arithmetic coding encoder and decoder system
US4891643A (en) * 1986-09-15 1990-01-02 International Business Machines Corporation Arithmetic coding data compression/de-compression by selectively employed, diverse arithmetic coding encoders and decoders
US4764805A (en) * 1987-06-02 1988-08-16 Eastman Kodak Company Image transmission system with line averaging preview mode using two-pass block-edge interpolation
US4849810A (en) * 1987-06-02 1989-07-18 Picturetel Corporation Hierarchial encoding method and apparatus for efficiently communicating image sequences
US4774562A (en) * 1987-06-02 1988-09-27 Eastman Kodak Company Image transmission system with preview mode
EP0314018B1 (en) * 1987-10-30 1993-09-01 Nippon Telegraph And Telephone Corporation Method and apparatus for multiplexed vector quantization
US4897717A (en) * 1988-03-30 1990-01-30 Starsignal, Inc. Computer-based video compression system
US4847677A (en) * 1988-04-27 1989-07-11 Universal Video Communications Corp. Video telecommunication system and method for compressing and decompressing digital color video data
US5187755A (en) * 1988-06-30 1993-02-16 Dainippon Screen Mfg. Co., Ltd. Method of and apparatus for compressing image data
US5353132A (en) 1989-02-06 1994-10-04 Canon Kabushiki Kaisha Image processing device
US5031053A (en) * 1989-06-01 1991-07-09 At&T Bell Laboratories Efficient encoding/decoding in the decomposition and recomposition of a high resolution image utilizing pixel clusters
US5299025A (en) * 1989-10-18 1994-03-29 Ricoh Company, Ltd. Method of coding two-dimensional data by fast cosine transform and method of decoding compressed data by inverse fast cosine transform
US5196946A (en) * 1990-03-14 1993-03-23 C-Cube Microsystems System for compression and decompression of video data using discrete cosine transform and coding techniques
US5270832A (en) * 1990-03-14 1993-12-14 C-Cube Microsystems System for compression and decompression of video data using discrete cosine transform and coding techniques
US5150209A (en) * 1990-05-11 1992-09-22 Picturetel Corporation Hierarchical entropy coded lattice threshold quantization encoding method and apparatus for image and video compression
US5189526A (en) * 1990-09-21 1993-02-23 Eastman Kodak Company Method and apparatus for performing image compression using discrete cosine transform
US5070532A (en) * 1990-09-26 1991-12-03 Radius Inc. Method for encoding color images
US5249053A (en) * 1991-02-05 1993-09-28 Dycam Inc. Filmless digital camera with selective image compression
US5148272A (en) * 1991-02-27 1992-09-15 Rca Thomson Licensing Corporation Apparatus for recombining prioritized video data
US5111292A (en) * 1991-02-27 1992-05-05 General Electric Company Priority selection apparatus as for a video signal processor
US5262958A (en) * 1991-04-05 1993-11-16 Texas Instruments Incorporated Spline-wavelet signal analyzers and methods for processing signals
US5157488A (en) * 1991-05-17 1992-10-20 International Business Machines Corporation Adaptive quantization within the jpeg sequential mode
US5262878A (en) * 1991-06-14 1993-11-16 General Instrument Corporation Method and apparatus for compressing digital still picture signals
DE69227217T2 (en) * 1991-07-15 1999-03-25 Canon Kk Image encoding
JP3116967B2 (en) * 1991-07-16 2000-12-11 ソニー株式会社 Image processing apparatus and image processing method
US5168375A (en) * 1991-09-18 1992-12-01 Polaroid Corporation Image reconstruction by use of discrete cosine and related transforms
US5335088A (en) * 1992-04-01 1994-08-02 Xerox Corporation Apparatus and method for encoding halftone images
US5325125A (en) * 1992-09-24 1994-06-28 Matsushita Electric Corporation Of America Intra-frame filter for video compression systems
JP2621747B2 (en) * 1992-10-06 1997-06-18 富士ゼロックス株式会社 Image processing device
JPH06180948A (en) * 1992-12-11 1994-06-28 Sony Corp Method and unit for processing digital signal and recording medium
US5586200A (en) * 1994-01-07 1996-12-17 Panasonic Technologies, Inc. Segmentation based image compression system

Also Published As

Publication number Publication date
EP0770246A1 (en) 1997-05-02
US5892847A (en) 1999-04-06
US20010019630A1 (en) 2001-09-06
WO1996002895A1 (en) 1996-02-01
AU3097995A (en) 1996-02-16
MX9700385A (en) 1998-05-31
JP2000511363A (en) 2000-08-29
BR9508403A (en) 1997-11-11
US6453073B2 (en) 2002-09-17
AU698055B2 (en) 1998-10-22
EP0770246A4 (en) 1998-01-14

Similar Documents

Publication Publication Date Title
CA2195110A1 (en) Method and apparatus for compressing images
WO1996002895A9 (en) Method and apparatus for compressing images
US5497435A (en) Apparatus and method for encoding and decoding digital signals
EP0663093B1 (en) Improved preprocessing and postprocessing for vector quantization
US5629780A (en) Image data compression having minimum perceptual error
US5128757A (en) Video transmission system using adaptive sub-band coding
US6151409A (en) Methods for compressing and re-constructing a color image in a computer system
US5272529A (en) Adaptive hierarchical subband vector quantization encoder
US5020120A (en) Methods for reducing quantization error in hierarchical decomposition and reconstruction schemes
JPH08502865A (en) Improved vector quantization
US7489827B2 (en) Scaling of multi-dimensional data in a hybrid domain
US7035472B2 (en) Image compression device and image uncompression apparatus utilizing the same
EP1324618A2 (en) Encoding method and arrangement
US7251370B2 (en) Method and device for compressing and/or indexing digital images
US6934420B1 (en) Wave image compression
Boucher et al. Color image compression by adaptive vector quantization
GB2266635A (en) Image data compression
US6580757B1 (en) Digital signal coding and decoding
Angelidis MR image compression using a wavelet transform coding algorithm
Waldemar et al. Subband coding of color images with limited palette size
US6256421B1 (en) Method and apparatus for simulating JPEG compression
US20030099466A1 (en) Image data recording and transmission
US20040136600A1 (en) Visually lossless still image compression for RGB, YUV, YIQ, YCrCb, K1K2K3 formats
Bartkowiak Improved interpolation of 4:2:0 colour images to 4:4:4 format exploiting inter-component correlation
JP2817843B2 (en) Image coding method

Legal Events

Date Code Title Description
FZDE Discontinued