|Publication number||US7512277 B2|
|Application number||US 10/510,649|
|Publication date||Mar 31, 2009|
|Filing date||Apr 9, 2003|
|Priority date||Apr 19, 2002|
|Also published as||EP1497989A1, US20050169548, WO2003090471A1|
|Publication number||10510649, 510649, PCT/2003/1545, PCT/GB/2003/001545, PCT/GB/2003/01545, PCT/GB/3/001545, PCT/GB/3/01545, PCT/GB2003/001545, PCT/GB2003/01545, PCT/GB2003001545, PCT/GB200301545, PCT/GB3/001545, PCT/GB3/01545, PCT/GB3001545, PCT/GB301545, US 7512277 B2, US 7512277B2, US-B2-7512277, US7512277 B2, US7512277B2|
|Inventors||Paul Gerard Ducksbury, Margaret Jai Varga|
|Original Assignee||QinetiQ Limited|
This invention relates to a method, a computer program and an apparatus for data compression for colour images.
A colour image may contain a very large amount of data, which makes it difficult to transmit over a conventional digital communications link because of bandwidth limitations. One specific problem arises in connection with histopathological slides used in medical treatment: these slides are chemically treated to introduce colour into tissue for diagnostic purposes. On such a slide a tissue sample may be 1 cm square, and camera images may be produced at a magnification of ×40; a 5 Mbyte image may cover an area of 0.339 mm×0.25 mm, so digitising the entire tissue sample would require (10×10×5)/(0.339×0.25) Mbytes, i.e. approximately 5899 Mbytes. To transmit this over a 56 Kbit/second (7 Kbyte/second) telephone line would take approximately 239 hours, nearly ten days, and it is emphasised that this is merely for a single tissue sample. The problem rapidly worsens if large numbers of samples are required.
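The arithmetic above can be checked directly; this quick sketch treats the 56 Kbit/second line as a sustained 7 Kbytes/second with no protocol overhead:

```python
# Storage needed to digitise a 1 cm x 1 cm tissue sample at x40,
# given a 5 Mbyte image covering an area of 0.339 mm x 0.25 mm.
sample_mbytes = (10 * 10 * 5) / (0.339 * 0.25)   # ~5899 Mbytes

# Time to send it over a 56 Kbit/s (~7 Kbyte/s) telephone line.
seconds = sample_mbytes * 1024 / 7               # Mbytes -> Kbytes, then / rate
hours = seconds / 3600                           # ~239.7 hours, nearly ten days
```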
With increasing use of higher resolution cameras, image sizes and the requirement to transmit large images are increasing. There is also a requirement to preserve image quality as much as possible. When communications bandwidth is a significant limitation, images must be compressed prior to transmission and subsequently decompressed on receipt.
Known forms of compression can be divided into two categories, lossy and lossless. Lossy compression techniques can achieve very high rates of compression (e.g. 50:1, 100:1 or greater), but this is at the expense of loss of image information and image degradation. Lossless compression preserves image information and avoids degradation, but can only achieve a low degree of compression (e.g. 3:1).
Methods of compression of monochrome images have been published as follows:
However, none of the above references discloses compression of colour images.
Compression of colour images is disclosed in published U.S. patent application Ser. No. 2001/0024529 A1. It discloses use of a wavelet transformation to transform the image, thresholding wavelet coefficients, discarding sub-threshold wavelet coefficients and quantizing the remainder. This is a viable technique, but it is limited by the fact that all image data are compressed in the same way: therefore, at a sufficiently high degree of compression, image data which it is required to retain become compromised by compression on the same basis as unwanted image data.
Published U.S. patent application Ser. No. 2001/0024529 A1 also discloses compression of colour images by colour space transformation followed by wavelet transformation, quantisation and entropy reduction. Here again all image data are compressed in the same way, so compression compromises required image data in the same way as unwanted image data.
The present invention provides a method of data compression for colour images characterised in that it includes the following steps:
The invention provides the advantage that the reduced wavelet image is suitable for encoding, transmission over a digital communications link and production of a reconstituted colour image. Moreover, the invention implements a relatively low (e.g. zero) degree of compression in areas of an original colour image indicated to be of relatively higher importance, and a relatively high degree of compression in those indicated to be of relatively lower importance. In consequence the invention makes it possible to obtain a higher degree of compression in the overall image compared to lossless compression while preserving sufficient information in important image areas.
The invention may include the step of producing a reconstituted colour image by hierarchically encoding the reduced wavelet image to form an encoded image, transmitting the encoded image to another location, and implementing respective inverses of the steps of encoding, wavelet transformation, sub-sampling and colour image transformation. This step may comprise forming a progressive bitstream in which more important image features are encoded earlier, and which includes information on the number of image rows and columns, the number of scales and the filter type.
The step of distinguishing areas of relatively higher importance from those of relatively lower importance may comprise specifying a plurality of different levels of relatively lower importance, and the step of establishing a wavelet coefficient threshold and forming a reduced wavelet image then includes discarding progressively more wavelet coefficients as area importance level diminishes.
Relative importance of areas in an original colour image may be distinguished by associating differing binary digits therewith. The colour image may be transformed into a second image by a transformation into luminance, blue chrominance and red chrominance. Sub-sampling may reduce pixel number to one quarter that preceding, and wavelet transformation may employ a Daubechies-4 filter. The number of scales may be three, and a wavelet coefficient threshold may be established by forming a cumulative histogram of numbers of pixels not exceeding respective wavelet coefficient values.
In another aspect, the invention provides a computer program for use in data compression of colour images characterised in that it has instructions for implementing the following steps:
The computer program may have instructions for producing a reconstituted colour image by hierarchically encoding the reduced wavelet image to form an encoded image, transmitting the encoded image to another location, and implementing respective inverses of the steps of encoding, wavelet transformation, sub-sampling and colour image transformation. It may have instructions for producing an encoded image by forming a progressive bitstream in which more important image features are encoded earlier, and which includes information on the number of image rows and columns, the number of scales and the filter type. It may distinguish relative importance of areas in an original colour image by associating differing binary digits therewith.
The computer program may have instructions for distinguishing areas of relatively higher importance from those of relatively lower importance by specifying a plurality of different levels of relatively lower importance, and establishing a wavelet coefficient threshold and forming a reduced wavelet image by discarding progressively more wavelet coefficients as area importance level diminishes.
The computer program may have instructions for transforming the colour image into a second image by implementing a transformation into luminance, blue chrominance and red chrominance. Sub-sampling may reduce pixel number to one quarter that preceding. Wavelet transformation may employ a Daubechies-4 filter. The number of scales may be three. The wavelet coefficient threshold may be established by forming a cumulative histogram of numbers of pixels not exceeding respective wavelet coefficient values.
In a further aspect, the invention provides computer apparatus for use in data compression of colour images, the apparatus being arranged to run the computer program of the invention.
In order that the invention might be more fully understood, an embodiment thereof will now be described, by way of example only, with reference to the accompanying drawings, in which:
The method of the invention will first be described in outline and later in more detail. Referring to
Next, at 14, the Y, Cb and Cr image planes are subjected to a wavelet compression scheme. At 16 unimportant wavelet coefficients are discarded. The Y, Cb and Cr image planes are then in a suitably compressed form for encoding and transmission over a digital communications link in a much shorter time interval than would be required for the original red, green and blue image planes. The compressed image planes are encoded, transmitted, received and decoded at 18. Encoding and decoding are carried out in accordance with U.S. Pat. No. 5,764,807 to Pearlman et al. The compressed image planes are then subjected to inverse wavelet transform at 20. At 22 the Cb and Cr image planes are increased in size by upsampling, and then the resulting image planes and the Y image plane are reverse colour converted to produce an RGB image.
The method of the invention will now be described in more detail. It initially employs two input images: one such is a colour image in conventional RGB format, i.e. it has intensity values for the colours red, green and blue at each pixel in the image. The other input image is binary (each pixel value is 0 or 1) and referred to as a “mask”: it may be produced manually by an observer or by a scanning device: e.g. an observer might view the colour image on a computer monitor and use a mouse to draw boundaries around areas of interest. Pixels within each boundary would be assigned a binary 1 value and other pixels binary 0: the value 1 represents a pixel which is potentially of interest and 0 a pixel regarded as unworthy of further consideration. The input binary image mask acts as an object mask for use in accepting some and rejecting other RGB image regions in subsequent image processing: the mask indicates which parts of the colour image are of more importance than others. A human operator is also required to specify a required percentage removal of wavelet coefficients (to be defined later) and the required size to which the image is to be compressed expressed as a storage file size.
The original RGB video colour image referred to above is converted to a YCbCr colour space as described by K. Jack in ‘Video Demystified—a handbook for the digital engineer’, Hightext Publications, San Diego, 1996. This is carried out in step 12 for each pixel in the RGB image using Equations (1) to (3) below.
where “red”, “green” and “blue” represent respectively red, green and blue pixel intensities. This produces Y, Cb and Cr values for each pixel in the original RGB image, and consequently it generates a Y image plane, a Cb image plane and a Cr image plane. The effect of the transformation implemented by Equations (1) to (3) is that relatively more image information appears in the Y image and relatively less in the Cb and Cr images compared to the original RGB image.
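Equations (1) to (3) themselves are not reproduced in this text. The studio-swing YCbCr conversion given in the cited Jack reference has the following standard form; this sketch is illustrative (the function name is not from the patent):

```python
def rgb_to_ycbcr(red, green, blue):
    """Convert 8-bit R, G, B intensities to Y, Cb, Cr using the
    standard studio-swing coefficients (Jack, 'Video Demystified')."""
    y  =  0.257 * red + 0.504 * green + 0.098 * blue + 16.0
    cb = -0.148 * red - 0.291 * green + 0.439 * blue + 128.0
    cr =  0.439 * red - 0.368 * green - 0.071 * blue + 128.0
    return y, cb, cr
```

Applying this per pixel concentrates most of the image information in the Y plane, as the text notes.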
Also in step 12, the Cb and Cr images are then “sub-sampled” to reduce them by a factor of 2 in both width and height: sub-sampling involves dividing each of these two entire images into 2×2 groups of four pixels, and replacing each group by a single pixel having the value of the respective group's top left-hand pixel. Any one of the other three pixels in each group could be used instead, so long as the pixels selected in all groups are identically located. The image resulting from sub-sampling is a quarter of the size of the original in each case. The outputs of step 12 are three image planes Y, Cb and Cr, the first of which is full size and the second and third of which are one quarter size.
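The 2×2 top-left sub-sampling just described can be sketched as follows (a minimal illustration on nested lists; real code would operate on image arrays):

```python
def subsample(plane):
    """Halve width and height by keeping the top left-hand pixel of
    every 2x2 block, as described for the Cb and Cr planes."""
    return [row[0::2] for row in plane[0::2]]
```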
Referring now also to
Odd rows or columns: Kj = c3·aj − c2·aj+1 + c1·aj+2 − c0·aj+3 (4)

Even rows or columns: Kj = c0·aj + c1·aj+1 + c2·aj+2 + c3·aj+3 (5)
Equations (4) and (5) express the wavelet filtering operation as taking successive sets of four adjacent pixels (i.e. j=1 gives pixels a1 to a4, j=2 gives pixels a2 to a5, etc., where j to j+3 are row or column numbers of pixels in the set, and the convolution is a column or row convolution respectively). The filter coefficients c3, −c2, c1, −c0 in Equation (4) can be considered as providing a ‘not a smoothing filter’, whilst the coefficients c0 to c3 in Equation (5) can be considered as providing a ‘smoothing filter’. Convolution using Equation (4) for odd rows or columns yields a zero or insignificant response to a data vector that is smooth, and yields ‘detail’ for a data vector that is not smooth.
The convolutions expressed by Equations (4) and (5) are applied in a respective iterative process to each of the three image planes Y, Cb and Cr obtained earlier, Y being full size and Cb and Cr one quarter size. This yields three wavelet representations. In order to implement Equations (4) and (5), a mathematical function referred to as the “numerical recipes function pwt” is applied to the image planes Y, Cb and Cr; this function is disclosed in ‘Numerical Recipes in C’, 2nd Ed., Cambridge University Press, 1992. To use it, each row of pixel values in a Y, Cb or Cr image plane is treated as a one-dimensional (1D) vector: the function transforms the vector by taking four consecutive pixel values at a time and incrementing j in Equations (4) and (5) to move along the vector. Towards the end of each 1D vector, when fewer than four pixel values remain, wrap-around is used (i.e. additional pixel values are taken from the beginning of the vector to make up the four required). The values Kj computed in this way become coefficients of a new 1D vector: the coefficients are arranged so that this new vector has a first half representing ‘smooth’ information (from Equation (5)) and a second half representing ‘detail’ information (from Equation (4)). When all rows have been processed with the numerical recipes function pwt to provide a transformed image, columns in the transformed image are processed in the same way: i.e. each 1D vector is now a respective transformed image column.
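One pass of Equations (4) and (5) over a 1D vector can be sketched as follows, using the standard Daubechies-4 coefficients and wrap-around at the end of the vector. This is an illustrative reconstruction, not the actual pwt routine:

```python
# Standard Daubechies-4 filter coefficients.
S3 = 3.0 ** 0.5
R2 = 2.0 ** 0.5
C0 = (1 + S3) / (4 * R2)
C1 = (3 + S3) / (4 * R2)
C2 = (3 - S3) / (4 * R2)
C3 = (1 - S3) / (4 * R2)

def daub4_step(vec):
    """One forward Daubechies-4 pass over a 1D vector of even length.

    Returns a new vector whose first half holds the 'smooth' outputs
    (Equation (5)) and whose second half holds the 'detail' outputs
    (Equation (4)); indices wrap around near the end of the vector.
    """
    n = len(vec)
    half = n // 2
    out = [0.0] * n
    for k in range(half):
        j = 2 * k
        a = [vec[(j + m) % n] for m in range(4)]   # wrap-around
        out[k] = C0 * a[0] + C1 * a[1] + C2 * a[2] + C3 * a[3]          # smooth
        out[half + k] = C3 * a[0] - C2 * a[1] + C1 * a[2] - C0 * a[3]   # detail
    return out
```

A smooth (constant) input yields zero detail coefficients, illustrating the ‘not a smoothing filter’ behaviour of Equation (4).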
The procedure also requires the number of scales or magnifications employed in the process to be predefined by a user. In the example described with reference to
Here the expression “forward transform” means a transform from an image plane to wavelet coefficients. The effect of the above computer program is that successive rows are convolved using Equation (4) (odd rows) or (5) (even rows), and each resulting convolution of four pixel values yields a new pixel value for insertion in the new wavelet image. When all rows have been convolved to produce the new wavelet image, columns of this new wavelet image are subjected to the same procedure: this provides wavelet information at a largest scale. The row and column lengths x and y are then divided by two and the row and column convolution procedures are repeated to provide wavelet information at a next to largest scale. Division by two and row and column convolution is repeated until the prearranged number of scales has been processed, i.e. three in the above example, as indicated by x or y ceasing to be greater than minsizex or minsizey respectively.
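The halve-and-repeat control flow just described can be sketched as follows; `minsize` stands in for the minsizex/minsizey limits in the listing, and its value here is an assumption:

```python
def transform_scales(width, height, num_scales, minsize=4):
    """Yield the (x, y) region sizes convolved at each successive scale:
    the full size first, then halved, stopping after num_scales passes
    or when a dimension would fall to minsize or below."""
    x, y = width, height
    for _ in range(num_scales):
        if x <= minsize or y <= minsize:
            break
        yield x, y
        x //= 2
        y //= 2
```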
This convolution process provides three new filtered images containing wavelet coefficients. It produces resulting images which have the structure shown in
The next stage 16 in the process of this example is “intelligent” coefficient removal: firstly, the input binary mask image (the second of the two original input images) is taken and it is decomposed so that it has the same structure as the wavelet images previously computed for the Y, Cb and Cr image planes. The purpose is to provide a wavelet mask image which distinguishes significant and insignificant features of the image plane wavelet images.
The instruction “temporary array[i][j]=mask[2*i][2*j]” in the above computer program reduces the original binary input mask by a factor of two in both x and y dimensions, i.e. by a factor of four in area: it forms a sub-sampled mask or temporary array by replacing each square block of four contiguous pixels indicated by [2*i][2*j] in the original binary input mask by a single pixel indicated by [i][j] having the value of the top left hand pixel in the square block. The sub-sampled mask is then entered into the four quadrants of a new image by giving appropriate new pixel addresses to its pixels (e.g. by instructions such as mask[i][j]=temporary array[i−nx][j] which inverts pixel x co-ordinates); then the sub-sampled mask is sub-sampled once more and used to provide four 1/16 size images to overwrite the top left hand ¼ size image. For each execution of the loop the top-left quadrant becomes the next image to subsample and the process repeats. This procedure is carried out a number of times equal to the number of scales (three in the present example), so in this example the smallest sub-sampled mask is 1/64 of the area of the original. The output from this is now a transformed mask as shown in
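The mask decomposition above can be sketched as follows. This is a simplified illustration on a square, power-of-two mask held as nested lists; it omits the coordinate-inversion detail of the listing:

```python
def decompose_mask(mask, num_scales):
    """Give the binary mask the same quadrant structure as the wavelet
    images: at each scale the current top-left region is subsampled by
    two (top-left pixel of each 2x2 block) and the result is tiled into
    that region's four quadrants; the next scale then repeats on the
    new, smaller top-left quadrant."""
    out = [row[:] for row in mask]
    n = len(mask)                      # current region size (square mask)
    for _ in range(num_scales):
        half = n // 2
        sub = [[out[2 * i][2 * j] for j in range(half)] for i in range(half)]
        for i in range(half):
            for j in range(half):
                v = sub[i][j]
                out[i][j] = v                    # top-left quadrant
                out[i][j + half] = v             # top-right
                out[i + half][j] = v             # bottom-left
                out[i + half][j + half] = v      # bottom-right
        n = half
    return out
```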
Wavelet coefficients to be removed are now derived for each of the wavelet coefficient images obtained for the colour converted image planes (Y, Cb and Cr) respectively. This is carried out as follows: minimum and maximum pixel values are found for the wavelet coefficient image and a histogram is obtained. The histogram (referred to below as the original histogram) shows the number of pixels having each possible magnitude value, and is a vector of such values. It is used to form a cumulative histogram as follows. The first entry in the cumulative histogram is set equal to the first entry in the original histogram; every other entry in the cumulative histogram is the sum of the corresponding entry in the original histogram and all entries preceding it: i.e. the ith entry Ci in the cumulative histogram is the sum of entries O1 to Oi in the original histogram. This can be achieved by setting Ci=Ci−1+Oi, i.e. the cumulative histogram's ith entry is set equal to the sum of its preceding or (i−1)th entry and the ith entry in the original histogram. The cumulative histogram is then used in data compression to remove a percentage of the image data. The percentage is specified as an input parameter by a user or is otherwise predetermined: a typical value is in the range 75-95%. This can be written as a computer program as follows:
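The cumulative-histogram thresholding can be sketched as follows. This is an illustrative reconstruction with integer magnitude bins; the listing's variable names are not preserved:

```python
def threshold_from_histogram(coeff_magnitudes, removal_fraction):
    """Return the wavelet-coefficient magnitude below which the requested
    fraction of coefficients falls, via a cumulative histogram.
    Magnitudes are assumed non-negative."""
    max_mag = int(max(coeff_magnitudes))
    # Original histogram: count of coefficients per integer magnitude bin.
    hist = [0] * (max_mag + 1)
    for m in coeff_magnitudes:
        hist[int(m)] += 1
    # Cumulative histogram: C_i = C_(i-1) + O_i.
    cum = [0] * len(hist)
    cum[0] = hist[0]
    for i in range(1, len(hist)):
        cum[i] = cum[i - 1] + hist[i]
    # Smallest magnitude at which the cumulative count reaches the target.
    target = removal_fraction * len(coeff_magnitudes)
    for i, c in enumerate(cum):
        if c >= target:
            return i
    return max_mag
```

Coefficients whose magnitude falls at or below the returned threshold would then be discarded (set to zero) in the reduced wavelet image.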
This procedure is carried out for all three filtered wavelet coefficient images obtained as previously described. It provides three reduced wavelet images each containing a reduced set of wavelet coefficients.
The next stage 18 of the invention involves encoding the three reduced wavelet images produced in step 16, transmitting them in this data compressed form to another location, and then decoding them. In the present example, an encoding process is used which is an application of that disclosed in U.S. Pat. No. 5,764,807 to Pearlman et al. (although other encoding schemes may be used): in this process each reduced wavelet image is subjected to hierarchical encoding to transform it into a progressive bitstream: in this connection a progressive bitstream is one in which more important image features are encoded earlier. The bitstream also contains a header, which includes the number of rows, columns, scales and filter number (in this case Daubechies-4) for use in the decoding process. In addition to this a human operator specifies as a parameter the required output file size for the bitstream. Transmission of a progressive bitstream can be truncated prematurely while retaining the ability to reconstruct or decode an image from its truncated equivalent (albeit with image quality worsening as termination becomes progressively earlier). Encoding includes a sub-band decomposition of the reduced wavelet image to derive coefficients, followed by coding of the coefficients for transmission. During encoding, lists are used comprising a list of significant pixels (LSP), a list of insignificant pixels (LIP) and a list of insignificant sets of pixels (LIS). Pixels in the LIP are tested and significant ones are moved to the LSP. Similarly, pixel sets found to be significant are removed from the LIS and partitioned into subsets: subsets with more than one element are returned to the LIS, while single coordinate sets are added to the LIP if insignificant or to the LSP otherwise. Decoding is the inverse of encoding and will not be described further.
The decoding process provides three decoded bitstreams each corresponding to a respective reduced wavelet image. Wavelet decompression in step 20 is carried out by applying an inverse wavelet transform to each decoded bitstream as follows:
Here the expression “inverse transform” means a transform from wavelet coefficients to an image plane. The result of this process is a set of three decompressed image planes, i.e. Y, Cb and Cr image planes.
Upsampling and colour conversion is carried out in the next step 22, which is applied to the Y, Cb and Cr image planes from the preceding inverse wavelet transformation or decompression step 20. Firstly, the Cb and Cr image planes are upsampled (increased) by a factor of 2 in both width and height: this is done by replicating each image plane pixel into a 2×2 block of four pixels in the new image. This provides Cb and Cr image planes which are of the same dimensions as the Y image plane. An inverse colour conversion is then applied to convert YCbCr to RGB as disclosed by K Jack in ‘Video Demystified—a handbook for the digital engineer’, Hightext Publications, San Diego, 1996: it is as follows:
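The upsampling and the inverse conversion from the cited Jack reference can be sketched as follows; the inverse coefficients shown are the standard studio-swing values, and the function names are illustrative:

```python
def upsample(plane):
    """Double width and height by replicating each pixel into a 2x2 block."""
    doubled_rows = [[v for v in row for _ in (0, 1)] for row in plane]
    return [list(row) for row in doubled_rows for _ in (0, 1)]

def ycbcr_to_rgb(y, cb, cr):
    """Inverse of the YCbCr conversion (standard studio-swing inverse)."""
    red   = 1.164 * (y - 16) + 1.596 * (cr - 128)
    green = 1.164 * (y - 16) - 0.813 * (cr - 128) - 0.392 * (cb - 128)
    blue  = 1.164 * (y - 16) + 2.017 * (cb - 128)
    return red, green, blue
```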
This provides three RGB colour planes, which form the image for display on a colour monitor for example.
It has been shown that the invention can achieve 100:1 compression of colour images in the form of histopathological slides while preserving diagnostic information. This compares very favourably with about 3:1 for prior art lossless compression and about 50:1 for prior art lossy compression.
In the compression process of the invention, a user may initially prioritise regions of interest in accordance with their relative importance: a sliding scale (referred to as a ‘traffic light’) may be used from high importance through to low importance. The compression process is then adapted to discard an increasing percentage of wavelet coefficients as the importance of the regions of interest diminishes. In one example, 75% removal of background information is required and there are three regions of interest denoted by r1, r2 and r3: here r1 is most important, r2 is of lesser importance and r3 is of least importance. Wavelet coefficients derived from r1 are retained in full, r2 has 25% removal of wavelet coefficients and r3 has 50% removal. This compares with 75% removal of background wavelet coefficients. The removal of wavelet coefficients from prioritised regions of interest is arranged so that the least important region r3 has less removal than the background compared to which it is more important. In an earlier example of the invention, there were only two levels of importance, relatively high and relatively low (background). This later approach of more than two levels of importance corresponds to sub-dividing the former relatively low importance level into a plurality of importance levels with differing degrees of wavelet removal. A reduced wavelet image is then formed by discarding progressively more wavelet coefficients as area importance level diminishes.
Since inter alia examples of computer program code for implementing the invention have been given, the invention can clearly be implemented using an appropriate computer program comprising program instructions recorded on an appropriate carrier medium and running on a conventional computer system. The carrier medium may be a memory, a floppy or compact or optical disc or other hardware recordal medium, or an electrical signal. Such a program is straightforward for a skilled programmer to implement from the foregoing description without requiring invention, because it involves well known computational procedures.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5067015 *||Jun 18, 1990||Nov 19, 1991||British Aerospace Public Limited Company||Method of processing video image data for use in the storage or transmission of moving digital images|
|US5142360 *||Mar 5, 1991||Aug 25, 1992||Victor Company Of Japan, Ltd.||Motion vector detection circuit used in hierarchical processing of moving picture signal|
|US5321776 *||Feb 26, 1992||Jun 14, 1994||General Electric Company||Data compression system including successive approximation quantizer|
|US5347479 *||Dec 22, 1992||Sep 13, 1994||Nec Corporation||Small-size wavelet transform apparatus|
|US5477272 *||Jul 22, 1993||Dec 19, 1995||Gte Laboratories Incorporated||Variable-block size multi-resolution motion estimation scheme for pyramid coding|
|US5495292 *||Sep 3, 1993||Feb 27, 1996||Gte Laboratories Incorporated||Inter-frame wavelet transform coder for color video compression|
|US5602589 *||Aug 19, 1994||Feb 11, 1997||Xerox Corporation||Video image compression using weighted wavelet hierarchical vector quantization|
|US5619998||Mar 11, 1996||Apr 15, 1997||General Electric Company||Enhanced method for reducing ultrasound speckle noise using wavelet transform|
|US5764807 *||Sep 14, 1995||Jun 9, 1998||Primacomp, Inc.||Data compression using set partitioning in hierarchical trees|
|US5802369 *||Apr 22, 1996||Sep 1, 1998||The United States Of America As Represented By The Secretary Of The Navy||Energy-based wavelet system and method for signal compression and reconstruction|
|US6314452||Aug 31, 1999||Nov 6, 2001||Rtimage, Ltd.||System and method for transmitting a digital image over a communication network|
|US6359928 *||Sep 28, 1998||Mar 19, 2002||University Of Southern California||System and method for compressing images using multi-threshold wavelet coding|
|US7076108 *||May 1, 2002||Jul 11, 2006||Gen Dow Huang||Apparatus and method for image/video compression using discrete wavelet transform|
|US20010024529||Nov 30, 2000||Sep 27, 2001||Computer And Information Sciences, Inc.||Image compression and decompression based on an integer wavelet transform using a lifting scheme and a correction method|
|US20020006229 *||Nov 30, 2000||Jan 17, 2002||Computer And Information Sciences, Inc.||System and method for image compression and decompression|
|US20030016855 *||Jun 12, 2002||Jan 23, 2003||Hiroyuki Shinbata||Image processing apparatus, image processing method, storage medium, and program|
|EP0961494A1||Jan 29, 1998||Dec 1, 1999||Sharp Kabushiki Kaisha||Image coding device and image decoding device|
|WO1997016021A1||Oct 25, 1996||May 1, 1997||Sarnoff David Res Center||Apparatus and method for encoding zerotrees generated by a wavelet-based coding technique|
|WO1998011728A1||Jun 25, 1997||Mar 19, 1998||Wde Inc||Method, apparatus and system for compressing data|
|WO1998040842A1||Mar 11, 1998||Sep 17, 1998||Computer Information And Scien||System and method for image compression and decompression|
|WO2003090471A1 *||Apr 9, 2003||Oct 30, 2003||Qinetiq Ltd||Data compression for colour images using wavelet transform|
|1||Ducksbury et al. "Feature Detection and Fusion for Intelligent Compression", DERA/IEE workshop on intelligent sensor processing, Birmingham, Feb. 14, 2001.|
|2||Ducksbury, "Feature Detection and Fusion for Intelligent Compression", SPIE AeroSense 2001, Orlando, Apr. 16-20, 2001.|
|3||Ducksbury, "Target Detection and Intelligent Image Compression", SPIE Aerosense 2000, Orlando Apr. 24-28, 2000.|
|4||*||M. Rabbani and R. Joshi, An overview of the JPEG 2000 still image compression standard, Signal Processing: Image Communication, vol. 17, Issue 1, Jan. 2002, pp. 3-48.|
|5||*||Rege, P.P.; Jog, K.S., "A new statistical bit allocation system for subband coding of images," TENCON 99. Proceedings of the IEEE Region 10 Conference, vol. 1, no., pp. 666-669 vol. 1, 1999.|
|6||Varga et al. "The Application of Intelligent Compression to Telepathology", National Corrections Telemedicine Conf., Tucson, AZ, Nov. 18-21, 2000.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US20080170811 *||Dec 26, 2007||Jul 17, 2008||Prolific Technology Inc.||Image capture apparatus|
|US20110032552 *||Feb 10, 2011||Canon Kabushiki Kaisha||Data processing apparatus and data processing method|
|US20130155117 *||Aug 23, 2012||Jun 20, 2013||Samsung Electronics Co., Ltd.||Display apparatus and method and computer-readable storage medium|
|U.S. Classification||382/232, 375/240|
|International Classification||H03M7/30, H04N7/26, G06K9/46, H04N7/30, G06K9/36, H04N1/41, G06T9/00, H04N7/12, H04B1/66|
|Cooperative Classification||H04N19/167, H04N19/115, H04N19/146, H04N19/162, H04N19/186, H04N19/17, H04N19/63, H04N19/61|
|European Classification||H04N7/26A6C8, H04N7/26H30C3V, H04N7/26H30C3R, H04N7/26H30C2J, H04N7/26H30E5A|
|Aug 16, 2005||AS||Assignment|
Owner name: QINETIQ LIMITED, UNITED KINGDOM
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DUCKSBURY, PAUL GERARD;VARGA, MARGARET JAI;REEL/FRAME:016407/0606
Effective date: 20041001
|Jun 2, 2009||CC||Certificate of correction|
|Sep 20, 2012||FPAY||Fee payment|
Year of fee payment: 4