US 20030165273 A1

Abstract

A method and apparatus is described for segmenting an image, for adaptively scaling an image, and for automatically scaling and cropping an image based on codestream header data. In one embodiment, a file that provides a header containing multi-scale entropy distribution information on blocks of an image is received. For each block, the block is assigned a scale from a set of scales that maximizes a cost function. The cost function is a product of a total likelihood and a prior, where the total likelihood is a product of the likelihoods of the blocks. The image is segmented by grouping together blocks that have been assigned equivalent scales. In one embodiment, the file represents an image in JPEG 2000 format.
Claims (96)

1. A method comprising:
generating a granular entropy distribution using information obtained from a header of a compressed bitstream; and applying one or more image processing operations based on the granular entropy distribution.
2. The method defined in
3. The method defined in
4. An article of manufacture having one or more recordable media with executable instructions stored thereon which, when executed by a system, cause the system to:
generate a granular entropy distribution using information obtained from a header of a compressed bitstream; and apply one or more image processing operations based on the granular entropy distribution.
5. The article of manufacture defined in
6. The article of manufacture defined in
7. An apparatus comprising:
means for generating a granular entropy distribution using information obtained from a header of a compressed bitstream; and means for applying one or more image processing operations based on the granular entropy distribution.
8. The apparatus defined in
9. The apparatus defined in
10. A method comprising:
performing image analysis on a codestream based on header information in the codestream; and decoding only coded data in one or more image portions specified by outputs of the image analysis.
11. The method defined in
12. The method defined in
13. The method defined in
14. The method defined in
15. An apparatus comprising:
means for performing image analysis on a codestream based on header information in the codestream; and means for decoding only coded data in one or more image portions specified by outputs of the image analysis.
16. The apparatus of
17. The apparatus of
18. The apparatus of
19. The apparatus of
20. An article of manufacture having one or more recordable media with executable instructions stored thereon which, when executed by a system, cause the system to:
perform image analysis on a codestream based on header information in the codestream; and decode only coded data in one or more image portions specified by outputs of the image analysis.
21. The article of manufacture of
22. The article of manufacture of
23. The article of manufacture of
24. An article of manufacture of
25. A method comprising:
extracting header information from a codestream having encoded image data; performing segmentation on the codestream based on the header information independent of decoding the encoded image data; and decoding the encoded image data necessary to represent a segmented image portion.
26. The method defined in
27. The method defined in
28. The method defined in
29. An apparatus comprising:
means for extracting header information from a codestream having encoded image data; means for performing segmentation on the codestream based on the header information independent of decoding the encoded image data; and means for decoding the encoded image data necessary to represent a segmented image portion.
30. The apparatus of
31. The apparatus of
32. The apparatus of
33. An article of manufacture having one or more recordable media with executable instructions stored thereon which, when executed by a system, cause the system to:
extract header information from a codestream having encoded image data; perform segmentation on the codestream based on the header information independent of decoding the encoded image data; and decode the encoded image data necessary to represent a segmented image portion.
34. The article of manufacture of
35. The article of manufacture of
36. The article of manufacture of
37. A method comprising:
receiving header information corresponding to a bit stream of multi-scale transform-based compressed data representing image data; generating a feature vector corresponding to image description bits in the bit stream from the header information; and performing one or more operations on at least a portion of the bit stream based on the feature vector.
38. The method defined in
39. The method defined in
40. The method defined in
41. The method defined in
42. The method defined in
43. The method defined in
44. The method defined in
45. An apparatus comprising:
means for receiving header information corresponding to a bit stream of multi-scale transform-based compressed data representing image data; means for generating a feature vector corresponding to image description bits in the bit stream from the header information; and means for performing one or more operations on at least a portion of the bit stream based on the feature vector.
46. The apparatus of
47. The apparatus of
48. The apparatus of
49. The apparatus of
50. The apparatus of
51. The apparatus of
52. The apparatus of
53. An article of manufacture having one or more recordable media with executable instructions stored thereon which, when executed by a system, cause the system to:
receive header information corresponding to a bit stream of multi-scale transform-based compressed data representing image data; generate a feature vector corresponding to image description bits in the bit stream from the header information; and perform one or more operations on at least a portion of the bit stream based on the feature vector.
54. A method for segmenting an image comprising:
receiving a header that contains multi-scale entropy distribution information on blocks of an image; for each block, assigning to the block a scale from a set of scales that maximizes a cost function, wherein the cost function is a product of a total likelihood and a prior, wherein the total likelihood is a product of likelihoods calculated using the header of the block; and segmenting the image by grouping together blocks that have been assigned equivalent scales.
55. The method of
56. The method of
57. The method of
58. A method for adaptively scaling an image comprising:
receiving a header that contains multi-scale entropy distribution information on blocks of an image; for each block, determining that the block retains significance at a scale upon determining that an entropy of a multi-scale coefficient of a block at the scale is greater than a mean entropy of multi-scale coefficients of blocks in at least one coarser scale; and scaling the image to a coarsest scale at which a threshold percentage of the blocks retain significance at the scale.
59. The method of
60. The method of
61. The method of
62. A method for automatically scaling and cropping an image, comprising:
receiving a file that contains a header that contains multi-scale entropy distribution information on blocks of an image; for each block and for each scale of a set of scales:
setting a cumulative entropy distribution for the block at a scale equal to a weighted summation of a number of bits spent to code the block for scales at and between a first scale and a maximum scale; and
for each width and height offset within a given image width and height, setting an indicator function of the block at the chosen scale and chosen width and height offsets to one upon determining that a width location of the block is not greater than a first minimum value and a height location of the block is not greater than a second minimum value, wherein the first minimum value is a minimum value of a set consisting of a chosen width offset and a sum of the chosen width offset with the display width scaled by the first scale, and wherein the second minimum value is a minimum value of a set consisting of a chosen height offset and a sum of the chosen height offset with the display height scaled by the first scale;
computing an optimal location and an optimal scale that together maximize a summation of the cumulative entropy distribution for the block at the optimal scale multiplied by an indicator function of the block and by a parameter; and cropping the image to the optimal location and down-sampling a resulting cropped image to the optimal scale.
63. The method defined in
64. The method of
65. A method comprising:
segmenting an image based on a multi-scale probability distribution; and generating a rectangular multi-scale partition of the image based on the multi-scale probability distribution.
66. The method in
67. The method defined in storing the rectangle; and
repeating the filling operation for at least one other rectangle.
68. The method defined in
69. An apparatus comprising:
means for segmenting an image based on a multi-scale probability distribution; and means for generating a rectangular multi-scale partition of the image based on the multi-scale probability distribution.
70. The apparatus defined in
71. The apparatus defined in means for storing the rectangle; and
means for repeating the filling operation for at least one other rectangle.
72. The apparatus defined in
73. An article of manufacture having one or more recordable media with executable instructions stored thereon which, when executed by a system, cause the system to:
segment an image based on a multi-scale probability distribution; and generate a rectangular multi-scale partition of the image based on the multi-scale probability distribution.
74. An article of manufacture having one or more recordable media with executable instructions stored thereon which, when executed by a machine, cause the machine to:
receive a header that contains multi-scale entropy distribution information on blocks of an image; for each block, assign to the block a scale from a set of scales that maximizes a cost function, wherein the cost function is a product of a total likelihood and a prior, wherein the total likelihood is a product of likelihoods of the blocks; and segment the image by grouping together blocks that have been assigned equivalent scales.
75. The article of manufacture of
76. The article of manufacture of
77. The article of manufacture of
78. An article of manufacture having one or more recordable media with executable instructions stored thereon which, when executed by a machine, cause the machine to:
receive a file that contains a header that contains multi-scale entropy distribution information on blocks of an image; for each block, determine that the block retains significance at a scale upon determining that an entropy of a multi-scale coefficient of a block at the scale is greater than a mean entropy of multi-scale coefficients of blocks in at least one coarser scale; and scale the image to a coarsest scale at which a threshold percentage of the blocks retain significance at the scale.
79. The article of manufacture of
80. The article of manufacture of
81. An article of manufacture having one or more machine-readable media storing executable instructions thereon which, when executed by a machine, cause the machine to:
receive a header that contains multi-scale entropy distribution information on blocks of an image; for each block and for each first scale of a set of scales:
set a cumulative entropy distribution for the block at the first scale equal to a summation of a number of bits spent to code the block for scales at and between the first scale and a maximum scale; and
set an indicator function of the block and the first scale to one upon determining that a width of the block is not greater than a first minimum value and a height of the block is not greater than a second minimum value and to zero otherwise, wherein the first minimum value is a minimum value of a set consisting of a width of the image and a sum of the width of the block plus one plus a desired height scaled by the first scale, and wherein the second minimum value is a minimum value of a set consisting of a height of the image and a sum of the height of the block plus one plus a desired width scaled by the first scale;
compute an optimal location and an optimal scale that together maximize a summation, for each block in the optimal location at the optimal scale, of the cumulative entropy distribution for the block at the optimal scale, multiplied by the indicator function of the block and the optimal scale, multiplied by a parameter; and crop the image to the optimal location and down-sample a resulting cropped image to the optimal scale.
82. The article of manufacture of
83. An apparatus comprising:
a receiving unit to receive a header that contains multi-scale entropy distribution information on blocks of an image; and a processing unit coupled with the receiving unit, the processing unit to
for each block, assign to the block a scale from a set of scales that maximizes a cost function, wherein the cost function is a product of a total likelihood and a prior, wherein the total likelihood is a product of likelihoods of the blocks; and
group together blocks that have been assigned equivalent scales to segment the image.
84. The apparatus of
85. The apparatus of
86. The apparatus of
87. An apparatus to adaptively scale an image, comprising:
a receiving unit to receive a header that contains multi-scale entropy distribution information on blocks of an image; and a processing unit coupled with the receiving unit, the processing unit to
for each block, determine that the block retains significance at a scale upon determining that an entropy of a multi-scale coefficient of a block at the scale is greater than a mean entropy of multi-scale coefficients of blocks in at least one coarser scale; and
scale the image to a coarsest scale at which a threshold percentage of the blocks retain significance at the scale.
88. The apparatus of
89. The apparatus of
90. An apparatus to automatically scale and crop an image, comprising:
a receiving unit to receive a header that contains multi-scale entropy distribution information on blocks of an image; and a processing unit coupled with the receiving unit, the processing unit to
for each block and for each first scale of a set of scales:
set a cumulative entropy distribution for the block at the first scale equal to a summation of a number of bits spent to code the block for scales at and between the first scale and a maximum scale; and
set an indicator function of the block and the first scale to one upon determining that a width of the block is not greater than a first minimum value and a height of the block is not greater than a second minimum value and to zero otherwise, wherein the first minimum value is a minimum value of a set consisting of a width of the image and a sum of the width of the block plus one plus a desired height scaled by the first scale, and wherein the second minimum value is a minimum value of a set consisting of a height of the image and a sum of the height of the block plus one plus a desired width scaled by the first scale;
compute an optimal location and an optimal scale that together maximize a summation, for each block in the optimal location at the optimal scale, of the cumulative entropy distribution for the block at the optimal scale, multiplied by the indicator function of the block and the optimal scale, multiplied by a parameter; and
crop the image to the optimal location and down-sample a resulting cropped image to the optimal scale.
91. The apparatus of
92. A method comprising:
obtaining an estimation of a low bit rate entropy distribution from a high bit rate granular entropy distribution using information obtained from a header of a compressed bitstream; and applying one or more image processing operations.
93. The method defined in
94. The method defined in
95. The method defined in
96. The method defined in

Description

[0001] This application is related to the co-pending application entitled Content And Display Device Dependent Creation Of Smaller Representations Of Images, concurrently filed on Jan. 10, 2002, U.S. patent application Ser. No. ______, assigned to the corporate assignee of the present invention.

[0002] The invention relates generally to the field of image processing. More specifically, the invention relates to processing images using multi-scale transforms.

[0003] Digital images can be represented and stored in a variety of formats. A common feature of digital image representation formats is that the bits constituting an image file are divided into image description bits and header bits. Image description bits describe the actual underlying image. Often the image description bits are divided into smaller units for convenience. Header bits provide organizational information about the image, such as image size in pixels, file size, length in bits of the various smaller image description units, etc.

[0004] Compressed image files contain a wide variety of organizational information in the header, primarily to facilitate convenient file management and interpretation. For example, in addition to conventional information such as width, height, color component information and other details, JPEG 2000 (ITU-T Rec. T.800 | ISO/IEC 15444-1:2000) image headers also provide information about the number of bits contained in smaller units, such as groups of wavelet coefficients (termed code-blocks), that constitute the compressed data for the image, and about the wavelet-domain locations of these small units of coefficients. Other image file formats can contain similar information.

[0005] In R. De Queiroz and R. Eschbach, "Fast segmentation of the JPEG compressed documents,"

[0006] Image analysis involves describing, interpreting, and understanding an image. Image analysis extracts measurements, data or information from an image. Image analysis techniques involve feature extraction, segmentation and classification. Image analysis may be referred to as computer vision, image data extraction, scene analysis, image description, automatic photointerpretation, region selection or image understanding. See W. Pratt,

[0007] Image processing produces a modified output image from an input image. Image processing techniques include cropping, scaling, point operations, filtering, noise removal, restoration, and enhancement. (Jain chapters 7 and 8; Pratt Part 4.)

[0008] In some applications, it is desirable to first perform image analysis on an image and then to use the analysis to control image processing on the image. For example, the program "pnmcrop" (http://www.acme.com/software/pbmplus/) first analyzes an image to find stripes of a background color (a single color value, for example white or black) on all four sides. Then it performs an image processing operation, cropping, on the image to remove the stripes.

[0009] A method and apparatus is disclosed herein for performing operations such as image segmentation, adaptive scale selection, and automatic region selection and scaling on the underlying image using only the image file header information.
The image files use a multi-scale image compression technique. A multi-scale bit allocation, which is used for processing, is estimated from the file header. The processing algorithms use the number of bits allocated by the image coder (or, in another embodiment, estimated to be allocated) as a quantitative measure for the visual importance of the underlying features.

[0010] The present invention will be understood more fully from the detailed description given below and from the accompanying drawings of various embodiments of the invention, which, however, should not be taken to limit the invention to the specific embodiments, but are for explanation and understanding only.

[0011] FIG. 1 illustrates a multi-scale entropy distribution for an image;
[0012] FIG. 2 is a flow diagram illustrating one embodiment of a process for segmenting an image;
[0013] FIG. 3 illustrates a segmentation map superimposed on an exemplary image of a woman;
[0014] FIG. 4 illustrates a segmentation map superimposed on an exemplary image of Japanese text;
[0015] FIG. 5 is a flow diagram of one embodiment of a process for adaptively scaling an image;
[0016] FIG. 6 illustrates adaptive scaling of an exemplary image of a woman;
[0017] FIG. 7 illustrates adaptive scaling of an exemplary image of Japanese text;
[0018] FIG. 8 is a flow diagram of one embodiment of a process for automatically scaling and cropping an image;
[0019] FIG. 9 illustrates automatic scaling and cropping of an exemplary image of a woman;
[0020] FIG. 10 illustrates automatic scaling and cropping of an exemplary image of Japanese text;
[0021] FIG. 11A is a block diagram of one embodiment of an apparatus to perform the processing described herein;
[0022] FIG. 11B is a block diagram of an alternative embodiment of an apparatus to perform the processing described herein; and
[0023] FIG. 12 is a block diagram of a computer system.

[0024] A method and apparatus for using file header information to process an underlying digital image is described. The file header information may be part of a bit stream that includes compressed data corresponding to the underlying digital image. The processing described herein uses the information in the header and processes it in a specific way to determine which portions of the compressed data to decode. In essence, the information in the header enables identification of a region or regions upon which further processing is to occur.

[0025] In one embodiment, the compressed data comprises an image representation format resulting from multi-scale transform-based compression; the compressed data consists of header and image description bits. That is, multi-scale transform-based compression is applied to image data as part of the process of generating the image description bits. From the header, the image coder's entropy distribution, or bit allocation, in the multi-scale domain may be estimated and used as a quantitative measure for the visual importance of the underlying image features. For example, from the header of a JPEG 2000 file, information such as the length of codeblocks, the number of zero bit planes, and the number of coding passes may be used to determine the entropy distribution. In this manner, the bit distribution in a multi-scale transform-based representation is used to perform one or more operations, including, but not limited to, image segmentation, adaptive scale/resolution selection for images, and automatic detection, selection, scaling and cropping of important image regions.
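To make this concrete, the following is a minimal sketch of how a per-codeblock bit allocation might be turned into a spatial entropy map. It assumes header parsing has already produced a list of records; the CodeblockInfo structure, its field names, and the grid convention are hypothetical illustrations, not JPEG 2000 syntax.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CodeblockInfo:
    """Hypothetical record distilled from packet headers (not JPEG 2000 syntax)."""
    level: int        # decomposition level j (1 = finest)
    row: int          # codeblock row index within its subband
    col: int          # codeblock column index within its subband
    num_bytes: int    # coded length of the codeblock, from the packet header

def entropy_map(blocks, grid_rows, grid_cols):
    """Accumulate codeblock byte counts onto a common spatial grid.

    The grid cell is the footprint of a level-1 codeblock; a level-j
    codeblock covers 2**(j-1) cells per axis, so its bits are spread
    uniformly over the covered cells.
    """
    emap = np.zeros((grid_rows, grid_cols))
    for b in blocks:
        span = 2 ** (b.level - 1)          # cells covered per axis
        r0, c0 = b.row * span, b.col * span
        emap[r0:r0 + span, c0:c0 + span] += b.num_bytes * 8 / span**2
    return emap

# usage: three codeblocks at two levels on an 8x8 grid
blocks = [CodeblockInfo(1, 0, 0, 120), CodeblockInfo(1, 0, 1, 15),
          CodeblockInfo(2, 0, 0, 300)]
print(entropy_map(blocks, 8, 8)[:2, :4])
```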
[0026] In one embodiment, information in the header is used to generate an entropy distribution map that indicates which portions of the compressed image data contain desirable data for subsequent processing. An example of such a map is given in FIG. 1. Other maps are possible and may indicate the number of layers (described below with the description of JPEG 2000) needed to obtain a desired bit rate (particularly for cases when layer assignment is related to distortion), or the entropy distribution for each of a number of bit rates. In the latter case, each rectangular area on the map has a vector associated with it. The vector might indicate values for multiple layers.

[0027] Image representation formats that utilize multi-scale transforms to compress the image description bits typically incorporate many organizational details in the header, so that the pixel-wise description of the digital image can be decoded correctly and conveniently. JPEG 2000 is an example of an image compression standard that provides multi-scale bit distributions in the file header. Often the image description bits are divided among smaller units, and the number of bits allocated by the encoder to these units is stored in the image header to facilitate features such as partial image access, adaptation to networked environments, etc. Following information-theoretic conventions, the allocated number of bits is referred to as the entropy of each small unit. Entropy distributions used by image coders provide an excellent quantitative measure for visual importance in the compressed images. For lossless compression, an image coder uses more bits to describe the high activity (lots of detail) regions, and fewer bits to convey the regions with little detail information. For lossy compression, the image coder typically strives to convey the best possible description of the image within the allocated bits. Hence, the coder is designed to spend the available few bits judiciously on describing visually important features in the image.

[0028] A multi-scale image coder does not code image pixels, but coefficients of the transformed image, where the transform performs a separation of image information into various frequency bands. Multi-scale image coders (e.g., a JPEG 2000 coder) provide the multi-scale distribution of entropy for the underlying image in the image header. Since such transform basis functions exhibit simultaneous spatial and frequency localization, the transform coefficients contain information about the frequency content at a specified location in the image.

[0029] The ability to process an image simply based on its header is desirable, because not only is the header information easily accessed using a small number of computations, but the condensed nature of the available image information also enables more efficient subsequent processing. Importantly, the header information, which is easy to access, indicates information about the image without decoding coefficients. Therefore, processing decisions can be made without having to expend a large amount of time decoding coefficients.

[0030] The techniques described herein have applications in areas such as, but not limited to, display-adaptive image representations, digital video surveillance, image database management, image classification, image retrieval, preprocessing for pattern analysis, and image filtering and sizing.

[0031] In the following description, numerous details are set forth.
It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention.

[0032] Some portions of the detailed descriptions which follow are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.

[0033] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as "processing" or "computing" or "calculating" or "determining" or "displaying" or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

[0034] The present invention also relates to apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.

[0035] The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
[0036] A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium includes read only memory ("ROM"); random access memory ("RAM"); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); etc.

[0037] FIG. 1 illustrates one multi-scale entropy distribution for an image. The image initially undergoes JPEG 2000 encoding. The underlying patterns are the wavelet coefficients of the image. The thin lines denote the JPEG 2000 division of the wavelet domain coefficients into code blocks, and the thick lines separate the different wavelet sub-bands. In JPEG 2000, the coder performing the encoding process divides the wavelet domain coefficients into small units called code blocks and allocates bits among them. The numbers shown in each square are the bits, or entropies, allocated to the respective code blocks by the JPEG 2000 coder operating at 0.5 bits per pixel using three levels of decomposition. These numbers represent the multiscale entropy distribution.

[0038] The entropy allocations, which are accessed using only the JPEG 2000 file header, provide a good measure for the visual importance of the different features at various scales and help distinguish between the different types of important image features characterized by different multiscale properties. For example, to describe the feather region in the image, a multi-scale image coder spends many bits coding the fine scale coefficients corresponding to that region and fewer bits on the coarse scale coefficients. On the other hand, to code the face region, a multi-scale image coder spends more bits coding the intermediate scale coefficients corresponding to the face region. The smooth background receives few bits. Thus, the multi-scale entropy distribution provides significant information about the underlying image features. Assuming knowledge of the multi-scale entropy distribution is obtained from headers, one or more operations may be performed. These operations may be, for example, image segmentation, automatic active region identification and scaling, and/or adaptive image scaling.
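As a small illustration of the codeblock geometry this implies, the sketch below computes how many codeblocks tile each subband at each level for the 512x512, 32x32-codeblock example used later in the Segmentation section (the helper function is illustrative, not part of the standard):

```python
def codeblock_grid(image_size=512, codeblock_size=32, levels=3):
    """Number of codeblocks per subband at each decomposition level.

    At level j the subband is image_size / 2**j samples on a side, so
    the codeblock grid shrinks by a factor of two per level.
    """
    for j in range(1, levels + 1):
        band = image_size >> j                 # subband side length at level j
        n = max(1, band // codeblock_size)     # codeblocks per side
        print(f"level {j}: {n}x{n} codeblocks of {codeblock_size}x{codeblock_size}")

codeblock_grid()   # level 1: 8x8, level 2: 4x4, level 3: 2x2
```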
[0039] JPEG 2000 is a standard to represent digital images in a coherent code-stream and file format (see, e.g., ITU-T Rec. T.800 | ISO/IEC 15444-1:2000, "JPEG 2000 image coding standard," at www.iso.ch). JPEG 2000 represents digital images by efficiently coding the wavelet coefficients of the image using the following steps. A typical image consists of one or more components (e.g., red, green, blue). Components are rectangular arrays of samples. These arrays are optionally divided further into rectangular tiles. On a tile-by-tile basis, the components are optionally decorrelated with a color space transformation. Each tile-component is compressed independently. Wavelet coefficients of each color component in the tile are obtained. The wavelet coefficients are separated into local groups in the wavelet domain. These are called code blocks. The code blocks are optionally ordered using precincts. Arithmetic coding is used to code these different wavelet-coefficient groups independently. The coded coefficients are optionally organized into layers to facilitate progression. Coded data from one layer of one resolution of one precinct of one component of one tile is stored in a unit called a packet. In addition to coded data, each packet has a packet header. After coding, a tile-component is optionally divided into tile-parts; otherwise, the tile-component consists of a single tile-part. A tile-part is the minimum unit in the code-stream that corresponds to the syntax. A JPEG 2000 codestream consists of syntax (main and tile-part headers, plus EOC) and one or more bitstreams. A bitstream consists of packets (coded data for codeblocks, plus any in-stream markers including in-stream packet headers). The organizational information needed to parse the coded data, the packet headers, may be stored in the main header, the tile headers, or in-stream.

[0040] JPEG 2000 has main headers and tile headers which contain marker segments. JPEG 2000 also has packet headers which may be contained in marker segments or be in-stream in the bit stream. Headers are read and used as inputs to processing which obtains a multiscale entropy distribution. Table 1 summarizes the information contained in various JPEG 2000 headers that is relevant to header-based processing.
[0041] In the case of the packet header (PPM, PPT, in-stream), it may be in either the main header, the tile header or in-stream, but not in a combination of any two or more of these at the same time. On the other hand, the packet length and tile-part length information may be in the main header or the tile headers, or in both at the same time.

[0042] Estimation of a Low Bit Rate Image From a High Bit Rate Image

[0043] The multi-scale entropy distribution at lower bit rates provides a robust measure for visual importance. At higher bit rates, the existence of image noise, which is present in digital images from any sensor or capture device, corrupts the overall entropy distribution. Depending on the application, images are encoded losslessly or lossily. The layering scheme in the JPEG 2000 standard could be used to order the codestream of a lossless or high bit rate encoded image into layers of visual or Mean-Squared-Error (MSE)-based importance. In this case, a low bit rate version of the image could be obtained by extracting information from only the packets in some layers and ignoring the packets in the other layers. If such layering is not employed by the encoder, the packet length information from the header can yield the multi-scale entropy distribution only at the bit rate chosen by the encoder, e.g., lossless, high bit rate or low bit rate.

[0044] If the encoder choice was lossless or high bit rate, an estimation of a low bit rate version of the image is obtained before applying any of the image processing algorithms explained later. One embodiment for performing such an estimation is described below. To determine the order in which bits are allocated, information on the maximum of the absolute values of coefficients and the number of coding passes in a codeblock from headers, as well as heuristic and statistical information on the visual or MSE-based importance of subbands at various resolution levels, is used.

[0045] The estimation successively subtracts bits from the total number of bits per codeblock until a given bit rate for the image is reached. The order of subtraction is the reverse of a bit allocation algorithm. The allocation algorithm may be the same as the one used by the encoder, but it is not required to be.

[0046] From the packet header of a JPEG 2000 file, the length of a codeblock, i.e. the number of bits "B", the number of zero bitplanes "NZ", and the number of coding passes "CP" used during encoding are available. From the number of zero bitplanes, an estimate of the maximum of the absolute values of the coefficients in the codeblock is obtained as

MaxB = 2^(MSB - NZ),   (1)

[0047] where MSB is the maximum number of bitplanes of the specific subband to which the codeblock belongs. MSB is defined by information in the appropriate QCC or QCD header entry for JPEG 2000. Based on visual or MSE-based weighting or statistical properties of images, an order of subbands and bitplanes can be derived that reflects the importance of a bit plane in a given subband. Based on, e.g., MSE importance, the ordering of importance of bit planes in a subband of a 5-level decomposition is given by the one displayed in Table 2.
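Equation 1 in code form, together with the related quantity it implies (the number of bitplanes the coder actually visits). The reconstruction of Equation 1 above is inferred from the surrounding text, and the field names follow the hypothetical header digest used in the other sketches:

```python
def max_abs_estimate(msb: int, nz: int) -> int:
    """Equation 1: estimated max |coefficient| in a codeblock, from the
    subband's bitplane count (QCD/QCC) and the packet header's zero planes."""
    return 2 ** (msb - nz)

def coded_bitplanes(msb: int, nz: int) -> int:
    """Bitplanes actually coded for this codeblock."""
    return max(0, msb - nz)

print(max_abs_estimate(msb=9, nz=3), coded_bitplanes(msb=9, nz=3))  # 64 6
```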
[0048] The estimation algorithm uses that order and computes, for each codeblock at order number i, the number of coding passes CP(b(i)) that contain the specific bitplane b(i) in the subband s(i) at the corresponding level l(i).

[0049] If that number is positive, a specific number of bits is subtracted from the codeblock bits. In one embodiment, the specific number of bits is computed as the average number of bits per coding pass in the specific subband, or the specific resolution. In the next step, order number (i+1), the derived number of bits is subtracted in a similar way from the codeblocks for bitplane b(i+1) of subband s(i+1) at level l(i+1). In pseudo code, an exemplary estimation algorithm for an example target rate of 0.5 bits/pixel is expressed as follows.
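The original pseudo code listing did not survive extraction. The following is a hedged Python reconstruction of the subtraction loop described in [0045]-[0049], under stated assumptions: the block records are the hypothetical header digest used earlier, the subtraction order is a simple stand-in for Table 2, and the "up to 3 passes per bitplane" rule is a common JPEG 2000 convention rather than a quote from the patent.

```python
def estimate_low_rate_bits(blocks, width, height, target_bpp=0.5,
                           levels=3, bitplanes=12):
    """Peel bits off codeblocks, least important (bitplane, subband) first,
    until the target rate is met, mirroring [0045]-[0049].

    blocks: list of dicts with hypothetical keys 'level', 'bits',
    'passes', 'msb', 'nz' distilled from packet and QCD/QCC headers.
    Returns new_B, the estimated per-codeblock bit counts at the low rate.
    """
    # subtraction order: reverse of an assumed coarse-first allocation,
    # i.e. least significant bitplanes first, fine levels before coarse
    order = [(lev, b) for b in range(bitplanes) for lev in range(1, levels + 1)]

    new_B = [float(b['bits']) for b in blocks]
    new_CP = [b['passes'] for b in blocks]
    budget = target_bpp * width * height            # total bit budget

    for level, bitplane in order:
        if sum(new_B) <= budget:                    # target rate reached
            return new_B
        for i, b in enumerate(blocks):
            if b['level'] != level or new_CP[i] <= 0:
                continue
            if b['msb'] - b['nz'] <= bitplane:      # plane not coded here
                continue
            avg = new_B[i] / new_CP[i]              # avg bits per coding pass
            removed = min(3, new_CP[i])             # up to 3 passes per plane
            new_B[i] = max(0.0, new_B[i] - avg * removed)
            new_CP[i] -= removed
    return new_B

blocks = [{'level': 1, 'bits': 5000, 'passes': 20, 'msb': 9, 'nz': 2},
          {'level': 2, 'bits': 2000, 'passes': 12, 'msb': 9, 'nz': 4}]
print(estimate_low_rate_bits(blocks, width=64, height=64))
```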
[0050] New_B and new_CP are arrays whose size is the number of codeblocks.

[0051] Once the target rate is reached, the new estimated bit values "new_B" are used in the entropy processing algorithms.

[0052] There are many alternatives for estimating a low bit rate image from a high bit rate image. In an alternative embodiment, another approach for estimation of low bit rate images may be used. This approach uses a model of the distribution of the wavelet coefficients of an image.

[0053] It is assumed that the distribution of the wavelet coefficients can be described by a Gaussian or Laplacian distribution. The latter is often used for modeling in the literature, since the distributions of many natural images have been tested to follow the exponential distribution approximately. The Laplacian distribution has density

p(x) = (lambda/2) e^(-lambda |x|).

[0054] The theoretical definition of the entropy is

H = -SUM_i p_i log2 p_i,

[0055] where p_i is the probability of the i-th quantization bin.

[0056] For the Laplacian distribution, this results in a closed-form expression for the entropy as a function of lambda and the quantizer step size Q.

[0057] If the parameter lambda could be estimated from the header data of a coding unit, then the pdf of the coefficients in that coding unit could be estimated and the entropy for any given quantizer Q determined.

[0058] The packet headers of a JPEG 2000 file include information on the number of zero bitplanes in a codeblock. From this information, an estimate of the maximum of the absolute values of the coefficients in that codeblock can be obtained via the variable MaxB from Equation 1. Using this variable, an estimate lambda* of the parameter lambda can be obtained.

[0059] By inserting this estimate into the formulas above, an estimate for the entropy given a specific quantization is obtained. The value H yields bits per pixel. Since the codeblock length is measured in bytes, the estimated value H has to be converted using the number of coefficients per codeblock and the factor of 8 bits per byte. A final algorithm may use the same order as the previously described method to reduce the number of bits in different subbands at different resolution levels successively. The reduction of bits is given by setting the quantizer to the bitplane parameter b(i) from Table 2.
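A sketch of this model-based alternative follows. The quantized-Laplacian entropy is a standard derivation (bin probabilities sum to one for the central-bin/side-bin split used below); lambda_from_max is a hypothetical stand-in for the patent's estimator, which did not survive extraction.

```python
import math

def quantized_laplacian_entropy(lam: float, Q: float, kmax: int = 200) -> float:
    """Entropy (bits/coefficient) of a Laplacian source with parameter lam,
    uniformly quantized with step Q (bins centered at multiples of Q)."""
    p0 = 1.0 - math.exp(-lam * Q / 2)              # central bin probability
    probs = [p0]
    s = math.sinh(lam * Q / 2)
    for k in range(1, kmax + 1):
        pk = math.exp(-lam * k * Q) * s            # bin at +kQ (same at -kQ)
        probs.extend([pk, pk])
    return -sum(p * math.log2(p) for p in probs if p > 0)

def lambda_from_max(max_b: float) -> float:
    """Hypothetical estimator tying lam to the header-derived MaxB;
    assumes mean |x| is roughly MaxB / 3 (illustration only)."""
    return 3.0 / max_b

lam = lambda_from_max(64)          # MaxB from Equation 1
print(quantized_laplacian_entropy(lam, Q=1.0))
```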
[0060] Image Analysis Processing Algorithms

[0061] By exploiting the multi-scale entropy distribution that is accessible from the header, techniques may be used to perform image analysis or computer vision and similar operations such as, for example, but not limited to, segmentation, automatic scaling, resolution selection, and automatic region selection and cropping on the underlying image. Common prior art techniques are described in W. Pratt,

[0062] As described herein, the use of multi-scale information from an image available in JPEG 2000 headers is demonstrated in the framework of several image analysis (or computer vision) algorithms. In one embodiment, the header parameters that are used are PPM, PPT, SIZ, COD, COC, QCC and QCD. From these parameters, the location of codeblocks in the wavelet domain and the number of bits used by the encoder to encode the corresponding coefficients can be extracted. These numbers can be used to derive a bit distribution of the multi-scale representation of the image. The scale and spatial localization of codeblocks, and the multi-scale bit distribution inferred from headers, lead to different image processing applications such as multiscale segmentation, automatic scaling, automatic scaling and cropping, and production of a multiscale collage.

[0063] Segmentation

[0064] A classification technique assigns a class label to each small area in an image. Such an area can be an individual pixel or a group of pixels, e.g. the pixels contained in a square block. Various image analysis techniques use the class assignments in different ways; for example, segmentation techniques separate an image into regions with homogeneous properties, e.g. the same class labels.

[0065] Using the multi-scale entropy distribution, a scale is assigned as the class label to each image region, so that even if the coefficients from the finer scales are ignored, the visually relevant information about the underlying region is retained at the assigned scale. Such labeling identifies the frequency bandwidth of the underlying image features. Segmentation is posed as an optimization problem, and a statistical approach is invoked to solve the problem.

[0066] The location of codeblocks in the wavelet domain is given by the two-dimensional (2D) spatial location (i,k) and the scale j. For example, if processing an image of size 512x512 having codeblocks of size 32x32, there are 8x8 codeblocks in each band of level 1, 4x4 codeblocks per band at level 2, and 2x2 codeblocks per band at level 3. The number of bits B_j(i,k) spent on the codeblock at scale j and location (i,k) is available from the header.

[0067] A scale j in {1 . . . J} is assigned to each block, so that a cost function Lambda is maximized,

S* = arg max_S Lambda(S, B),   (8)
[0068] where S denotes a segmentation map assigning a scale S(x, y) to each block.

[0069] In one embodiment, the prior art Maximum A Posteriori ("MAP") approach is adopted from statistics to solve the segmentation problem, because such an approach can be tuned to suit the final application. The basic ingredients used by MAP to set the cost function Lambda are the likelihood P(B|S), which is the probability of the image's entropy distribution B given the segmentation map S, and the prior P(S), which is the probability of the segmentation map S. The MAP cost function Lambda is given by

Lambda(S, B) = P(B|S) P(S).   (9)

[0070] The MAP segmentation solution corresponds to optimizing equation (8), using equation (9).

[0071] The coefficients contained in a codeblock at level 1 contain information about a block of approximately twice the size in the pixel domain. If the pixel domain is divided into blocks of a specific size, there are four times as many blocks in the pixel domain as codeblocks at level 1 of the wavelet decomposition, 16 times as many blocks in the pixel domain as codeblocks at level 2 of the wavelet decomposition, etc. Therefore, the bits of a codeblock B_j(i,k) are distributed over the corresponding pixel domain blocks.

[0072] In one embodiment, the number of level-j bits associated with a pixel domain block is defined by spreading the bits of the covering level-j codeblock uniformly over the 4^j pixel domain blocks that it covers (four at level 1, 16 at level 2, etc.).
[0073] The above calculation is equivalent to piecewise interpolation of the entropy values. Other interpolation algorithms, such as, for example, polynomial interpolation or other nonlinear interpolation, can be used as well to calculate the level-j bits.
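A minimal sketch of this piecewise spreading of codeblock bits onto a common pixel-domain block grid, under the uniform-split assumption of [0072]:

```python
import numpy as np

def level_bits_on_pixel_grid(B_j, j):
    """Spread level-j codeblock bits B_j (2-D array) onto the pixel-domain
    block grid. Each level-j codeblock covers a 2**j x 2**j patch of pixel
    blocks, so its bits are divided evenly among 4**j of them."""
    scale = 2 ** j
    return np.kron(B_j, np.ones((scale, scale))) / scale**2

B2 = np.array([[256.0, 64.0],
               [ 16.0,  4.0]])       # bits for 2x2 codeblocks at level 2
print(level_bits_on_pixel_grid(B2, 2)[:2, :6])  # each value split over 16 cells
```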
[0074] The cumulative weighted entropy over resolutions of a pixel block of size 2^n x 2^n at location (x, y), denoted B-hat(x, y), is formed as a weighted summation of the interpolated level-j bits over the levels j,

[0075] with level weights gamma_j,

[0076] taken over the locations i and k covered by the block,

[0077] with the w_j serving as normalization weights.

[0078] The likelihood for the entropy B-hat(x, y), given the assigned scale S(x, y), is then modeled for each block.

[0079] Under the assumption that the pixel domain blocks are independent, the total likelihood is given by the product over blocks,

P(B|S) = PROD_(x,y) P(B-hat(x, y) | S(x, y)),

[0080] where B-hat is the interpolated multi-scale entropy distribution.

[0081] Now the prior P(S) has to be determined. The following discussion reflects existing knowledge about typical segmentation maps. There are many possible ways to choose the prior. For example, other ways to choose the prior are described in R. Neelamani, J. K. Romberg, H. Choi, R. Riedi, and R. G. Baraniuk, "Multiscale image segmentation using joint texture and shape analysis," in Proceedings of Wavelet Applications in Signal and Image Processing VIII, part of SPIE's International Symposium on Optical Science and Technology, San Diego, Calif., July 2000; H. Cheng and C. A. Bouman, "Trainable context model for multiscale segmentation," in Proc. IEEE Int. Conf. on Image Proc.—ICIP '98, Chicago, Ill., Oct. 4-7, 1998; and H. Choi and R. Baraniuk, "Multiscale texture segmentation using wavelet-domain hidden Markov models," in Proc. 32nd Asilomar Conf. on Signals, Systems and Computers, Pacific Grove, Calif., Nov. 1-4, 1998.

[0082] Because the segmentation map is expected to have contiguous regions, a prior is set on each location (x, y) based on its immediate neighborhood N(x, y), which consists of nine blocks (using reflection at the boundaries). The individual prior is

P(S(x, y)) ∝ e^(alpha · #(N(x, y) = S(x, y))),   (12)
[0083] where #(N(x, y) = S(x, y)) is the number of neighbors having the same label as S(x, y), and alpha is a parameter that can be increased to favor contiguous regions; alpha = 0 implies that the segmentation map blocks are independent of each other. In one embodiment, the overall prior is chosen as the product of the individual priors,

P(S) = PROD_(x,y) P(S(x, y)).

[0084] In one embodiment, alpha equals 0.02 to 0.08. The desired segmentation map can now be obtained by optimizing the cost function Lambda(S, B). A number of prior art iterative techniques may be used to search for the local maxima. One iterative technique involves first calculating the initial segmentation map that optimizes the cost function using alpha = 0 in equation (12). The segmentation map maximizing the resulting cost function is obtained directly because the vector optimization decouples into a scalar optimization problem per block.

[0085] For all (x, y), the segmentation map at (x, y) is then updated iteratively,

[0086] where at each update N(x, y) is obtained from the segmentation map S of the previous iteration.

[0087] The actual segmentation output in terms of labeling of regions is then given by the maximization of the MAP cost function Lambda(S, B),

[0088] as stated in equation (8) above.

[0089] FIG. 2 is a flow diagram of one embodiment of a process for segmenting an image. Referring to FIG. 2, in process block

[0090] FIG. 3 illustrates a segmentation map superimposed on an exemplary image of a woman. In one embodiment, the segmentation process (set forth above) labels the face regions of the image

[0091] FIG. 4 illustrates a segmentation map superimposed on an exemplary image of Japanese text. Since the segmentation map

[0092] The results can be extended to color images. A linear or non-linear combination of the multi-scale entropy allocations among the different color components can be used for segmentation. Segmentation can be performed on only one component, such as luminance or green. A segmentation algorithm can be run on each component separately, and the results then combined by voting or by a MAP method.

[0093] In one embodiment, the resolution of the final results is limited by the granularity (coarseness) of the multi-scale entropy distribution; typically, the resolution of the final results with respect to the underlying image is limited to multiples of the code-block size. In one embodiment, when precincts are employed, better resolution can be obtained if the precinct boundaries cause the code blocks to be split.
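Pulling the pieces together, here is a compact sketch of the iterative MAP optimization of [0084]-[0086], in the style of iterated conditional modes. The per-scale log-likelihood is a placeholder assumption (the patent's exact likelihood model did not survive extraction); the neighborhood prior follows the e^(alpha·#matches) form of equation (12).

```python
import numpy as np

def segment_scales(B_hat, alpha=0.05, n_iter=10):
    """MAP scale labeling of pixel blocks.

    B_hat: (J, H, W) cumulative weighted entropies per candidate scale.
    Returns an (H, W) map of scale labels in {0..J-1}.
    """
    J, H, W = B_hat.shape

    def loglik(j, y, x):
        # placeholder likelihood: favor the scale with the largest
        # cumulative entropy for this block (assumption, see lead-in)
        return B_hat[j, y, x]

    # alpha = 0: decoupled, per-block maximization gives the initial map
    S = B_hat.argmax(axis=0)

    for _ in range(n_iter):
        Sp = np.pad(S, 1, mode='reflect')          # reflection at boundaries
        for y in range(H):
            for x in range(W):
                nbhd = Sp[y:y + 3, x:x + 3]        # nine-block neighborhood
                scores = [loglik(j, y, x) + alpha * np.sum(nbhd == j)
                          for j in range(J)]       # log P(B|S) + log prior
                S[y, x] = int(np.argmax(scores))
    return S

print(segment_scales(np.random.rand(3, 8, 8)))
```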
[0094] Automatic Resolution Selection

[0095] It is often desirable to know the best scale such that, even if all finer scale coefficients are thrown away, the retained coefficients contain sufficient information to identify the image. This may be used, for example, with digital cameras. Since entropy is a good measure for visual information, it may be used as a measure for the amount of visual information that is lost when an image is represented at scale j. Furthermore, the multi-scale representation helps to identify the approximate areas in the image that lose their visual information during image scaling. The best scale is estimated as follows. For each scale j, a given group of multi-scale coefficients S_j is labeled significant if its entropy exceeds the mean entropy of the multi-scale coefficients of blocks at at least one coarser scale.

[0096] For each scale j, measure the percentage P(j) of the image area that the significant coefficients at level j cover. P(j) measures the area that would lose a significant amount of information if the significant coefficients at level j were thrown away (when the image X is scaled down by a factor 2^j), where P* is a threshold parameter that sets the minimum percentage of area that needs to remain recognizable. In one embodiment, P* equals 35%. The best scale that retains sufficient information about the image is then the coarsest scale J at which at least the fraction P* of the blocks retains significance.

[0098] FIG. 5 is a flow diagram of one embodiment of a process for adaptively scaling an image. In process block

[0099] FIG. 6 illustrates adaptive scaling of an exemplary image of a woman. The size of the original image

[0100] Given the significance threshold beta, the labeling of a codeblock as significant or insignificant can also be performed by modeling the entropy of all the codeblocks in one resolution level as a mixture of two probability distributions, e.g., two Gaussian distributions with different means mu_1 and mu_2.

[0101] Given the significance threshold beta, the optimal scale J is then selected as described above.
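A minimal sketch of this selection rule, paraphrasing the significance test of claim 58; beta, the per-level entropy arrays, and the level-to-scale bookkeeping are assumptions for illustration:

```python
import numpy as np

def best_resolution(level_entropy, beta=1.0, p_star=0.35):
    """Pick the coarsest scale at which a threshold fraction of blocks
    retain significance (paraphrasing claim 58).

    level_entropy[j]: 2-D array of per-block entropies at level j+1,
    finest level first; beta is an assumed significance factor.
    """
    J = len(level_entropy)
    best = 0
    for j in range(J - 1):   # need at least one coarser level to compare
        coarser_means = [level_entropy[c].mean() for c in range(j + 1, J)]
        # significant if the block beats beta * mean of some coarser level
        thresh = beta * min(coarser_means)
        frac = float(np.mean(level_entropy[j] > thresh))
        if frac >= p_star:
            best = j + 1     # level j+1 (1-based) still carries information
    return best

levels = [np.random.rand(8, 8), 0.5 * np.random.rand(4, 4), 0.2 * np.random.rand(2, 2)]
print(best_resolution(levels))
```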
[0102] Fixed-Size-Window Automatic Cropping and Scaling

[0103] Often, an image is constrained to be represented within a fixed size in pixels. Under such constraints, it is desirable to choose the "best" representation of the image that satisfies the given size constraints. Since entropy is a good measure for visual information, an image representation is obtained that encompasses the maximum entropy while still satisfying the size constraints.

[0104] The weighted cumulative entropy B-hat, defined above in the Segmentation section, is used as the measure of visual information to be maximized.

[0105] A two-dimensional indicator function I is constructed with support dictated by the shape and size constraints of the application. For example, if the desired shape constraint is a rectangle and the size constraints are the pixel dimensions m x n, then the indicator function for a rectangle of size m x n located at position (x_0, y_0) is one inside the rectangle and zero outside.

[0106] The "best" location (a*, b*) of the rectangle placed at the "best" level j* is computed by maximizing the masked cumulative entropy over locations and levels,

[0107] where kappa_j denotes a weighting mask applied at resolution j. In one embodiment,

[0108] kappa_1 is derived from

[0109] mask1 = [(1.0 1.1 1.2 1.3 1.3 1.2 1.1 1.0) x (1.0 1.1 1.2 1.3 1.3 1.2 1.1 1.0)] / ||mask1||,

[0110] and kappa_2 from

[0111] mask2 = [1 1 1 1 1 1 1 1] x [1 1 1 1 1 1 1 1],

[0112] where ||mask1|| denotes the L-infinity norm of mask1.

[0113] Multiplying the cumulated weighted entropy at resolution j with mask1 means weighting the entropy values, linearly decreasing from 1 to 0.77 from the center towards the edges of the image at resolution j.

[0114] The best representation of the image is then obtained by theoretically computing the image at resolution j* and cropping out of that low-resolution image a rectangle of size m x n located with its lower left corner at position (a*/2^j*, b*/2^j*).
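A sketch of this fixed-window search, scanning all window positions at each level of a cumulative-entropy pyramid. The center-emphasis weighting follows the mask1 idea above; the pyramid itself is assumed to come from the earlier interpolation step, and stretching the 8-tap mask to arbitrary sizes is an illustrative choice:

```python
import numpy as np

def best_crop(B_hat_pyramid, m, n):
    """Find (level, y, x) of the m x n window with maximum masked entropy.

    B_hat_pyramid: list of 2-D arrays, B_hat_pyramid[j] holding the
    cumulative weighted entropy of the image at resolution level j.
    """
    center_w = np.array([1.0, 1.1, 1.2, 1.3, 1.3, 1.2, 1.1, 1.0])
    best, best_score = None, -np.inf
    for j, E in enumerate(B_hat_pyramid):
        H, W = E.shape
        if H < n or W < m:
            continue                            # window no longer fits
        # separable center-emphasis mask, normalized by its max (L-inf norm)
        wy = np.interp(np.linspace(0, 7, H), np.arange(8), center_w)
        wx = np.interp(np.linspace(0, 7, W), np.arange(8), center_w)
        Ew = E * np.outer(wy, wx) / 1.3
        for y in range(H - n + 1):
            for x in range(W - m + 1):
                score = Ew[y:y + n, x:x + m].sum()
                if score > best_score:
                    best, best_score = (j, y, x), score
    return best

pyr = [np.random.rand(16, 16), np.random.rand(8, 8)]
print(best_crop(pyr, m=4, n=4))
```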
[0115] FIG. 8 is a flow diagram of one embodiment of a process for automatically scaling and cropping an image. The process is performed by processing logic that may comprise hardware (e.g., circuitry, dedicated logic, etc.), software (such as is run on a general purpose computer system or a dedicated machine), or a combination of both.

[0116] Referring to FIG. 8, at processing block

[0117] At processing block

[0118] At processing block

[0119] At processing block

[0120] In one embodiment, the above process simultaneously chooses the region and its scaling factor for the images. FIG. 9 illustrates automatic scaling and cropping of an exemplary image of a woman according to one embodiment. The size of the original image

[0121] FIG. 10 illustrates automatic scaling and cropping of an exemplary image of Japanese text. The size of the original image

[0122] Display Constraints

[0123] Display space is often a constraint on any device. Under such circumstances, it is desirable to obtain a device-dependent, meaningful, condensed representation of images. By combining header-based processing with display adaptation techniques, a variety of meaningful and condensed image representations can be provided. The display device characteristics set an upper and a lower bound on the size of the image to be represented. Since the automatic scaling process set forth above suggests a scale which ensures that most of the image information is still retained in the scaled-down image, a scale can be chosen between the bounds dictated by the display device that is closest to the suggested scale.

[0124] Often, the size (e.g., in pixels) available to represent an image is fixed. In such a case, it is desirable to find the best representation of the image that can be accommodated within the available pixels. The automatic region selection and scaling technique set forth above can provide the best fixed-size representation of the image by exploiting the multi-scale entropy distribution. The parameters in the process can be chosen to tune the representation to specific display devices.

[0125] Applications

[0126] One approach to compressing digital video sequences is to compress each video frame independently using a multi-resolution image coder. For example, the Motion JPEG 2000 standard uses multi-scale transform-based compression on each video frame independently. Since the proposed algorithms can effectively process these frames, the aforementioned processing can be applied to Motion JPEG 2000 as well, for example, by setting segmentation process parameters such as alpha and gamma_j.

[0127] An aim of image classification is to automatically sort through an image database and group images of similar types, such as natural images, portraits, documents, uniform textures, etc. Segmentation maps obtained by processing the multi-scale entropy distributions can be exploited as a feature to perform broad classifications. The classification can be fine-tuned later using more intensive and specialized processing.

[0128] An aim of image retrieval is to identify images that are similar to some template image. Since good image retrieval algorithms are intensive and require the actual image to perform their analysis, header-based segmentation maps can be exploited to reduce the number of images that need to be decoded and fed to the specialized image-retrieval algorithms.

[0129] The segmentation process set forth above can be used to provide an approximate segmentation that splits the image into regions containing coarse scale features and regions containing fine scale features. For example, in document images, the segmentation algorithm can approximately distinguish the text regions from the image regions. The approximate segmentation can be input to a more intensive pattern analysis algorithm, such as optical character recognition ("OCR"), for further analysis.

[0130] The segmentation technique set forth above can be used to create an abstract collage representation of the image, where different regions of the image are scaled more (or less) depending on whether the features contained in the region are coarse or fine. Such an abstract representation of an image can be used in many graphical user interface ("GUI") image communication applications such as web browsers.

[0131] Multiscale Collage

[0132] For the calculation of a multiscale collage of an image, as a first step a segmentation as described in the Segmentation section above is performed. After this, rectangles are fitted to the segmented image in the following way.

[0133] A multi-scale probability distribution such as the MAP cost function Lambda(B-hat

[0134] FIG. 11A is a schematic diagram of an apparatus to segment an image, to adaptively scale an image, or to automatically scale and crop an image. Referring to FIG. 11A, the apparatus

[0135] In one embodiment, processing

[0136] In one embodiment, processing unit

[0137] Processing unit

[0138] Processing unit

[0139] In one embodiment, processing unit

[0140] Then, processing unit

[0141] FIG. 11B is a block diagram of one embodiment of a codestream processor for use in an image processing system. Referring to FIG. 11B, codestream

[0142] The value of header-based processing is demonstrated in the example of creating a good 128x128 thumbnail representation of a 1024x1024 image. An image analysis process described herein is the one for automatic cropping and scaling described above. The complexity of the processed data compared to traditional image processing of a JPEG 2000 image and a raster image is listed in Table 3. The advantage over an image in JPEG 2000 form is that only 1/1000 of the data must be used by the segmentation algorithm and less than 1/2 of the data must be decoded.
[0143] An Exemplary Computer System

[0144] FIG. 12 is a block diagram of an exemplary computer system that may perform one or more of the operations described herein. Referring to FIG. 12, computer system

[0145] System

[0146] Computer system

[0147] Computer system

[0148] Another device that may be coupled to bus

[0149] Note that any or all of the components of system

[0150] Whereas many alterations and modifications of the present invention will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description, it is to be understood that any particular embodiment shown and described by way of illustration is in no way intended to be considered limiting. Therefore, references to details of various embodiments are not intended to limit the scope of the claims, which in themselves recite only those features regarded as essential to the invention.