|Publication number||US7729543 B2|
|Application number||US 11/618,179|
|Publication date||Jun 1, 2010|
|Filing date||Dec 29, 2006|
|Priority date||Aug 8, 2006|
|Also published as||US20080037881|
|Inventors||Kimitaka Murashita, Masayoshi Shimizu, Shoji Suzuki, Yasuto Watanabe|
|Original Assignee||Fujitsu Limited|
1. Field of the Invention
The present invention relates to an imaging apparatus and in particular to a technique for correcting a displacement between images.
2. Description of the Related Art
Various techniques have conventionally been provided for correcting a displacement between images when a plurality of still images photographed by a digital camera are synthesized. Correcting a displacement caused by camera shake is particularly useful for images photographed in the dark, because the longer exposure time required for such photography increases the likelihood of camera shake. Camera shake can be corrected (a correction also noted as "image stabilization" hereinafter) by means of hardware or by means of software. Hardware-based image stabilization includes a lens shift system, which moves the lens in the direction opposite to the actual movement of the camera, and a CCD (charge coupled device) shift system.
As to software-based image stabilization, one provided technique reduces an input image in a plurality of steps, uses a global shift amount roughly obtained from a reduced image at a lower layer to set up a scan block when obtaining a local shift amount at a higher layer, and thereby minimizes the scan block set for a second image (refer to patent document 1, for example). Another provided technique speeds up the process of synthesizing a plurality of pieces of digital image data by creating and storing a thumbnail for each image to be synthesized (refer to patent document 2, for example).
At this point, a conventional method of calculating a displacement between images, among the software-based image stabilization techniques, is described in detail.
Additionally, as to software-based image stabilization, a technique of detecting camera shake by means of software has also been provided (refer to patent document 3, for example).
[Patent document 1] Laid-Open Japanese Patent Application Publication No. 2004-343483
[Patent document 2] Laid-Open Japanese Patent Application Publication No. 2004-234624
[Patent document 3] Laid-Open Japanese Patent Application Publication No. 2005-197911
The above noted lens shift system and CCD shift system both face the problem of increased cost, because hardware for moving components such as the lens or CCD must be added to the camera. Another problem associated with these methods is that such image stabilization mechanisms are vulnerable to shock and prone to failure.
As for a software-based image stabilization method, individual images must be aligned with each other before the first and second images are combined, as described above. A common practice is therefore to detect a feature point in the first image, obtain the corresponding position in the second image, and calculate a displacement amount from the difference between the two positions. Such a calculation requires processing the entirety of each image in order to detect feature points and search for the corresponding positions. However, as digital cameras gain more pixels, the memory capacity needed to store a plurality of images grows proportionally, which increases cost. Likewise, the amount of processing required for image stabilization increases, which creates the problem of a longer processing time.
The purpose of the present invention is to provide a technique that reduces the required memory capacity and enables high-speed image processing in an imaging apparatus.
In order to solve the above described problems, an imaging apparatus according to the present invention comprises: a compression unit for making compressed data by respectively compressing each of a plurality of images obtained from an imaging device; a compressed data retention unit for retaining each piece of compressed data; a thumbnail creation unit for creating a thumbnail from each piece of image data; a thumbnail retention unit for retaining the thumbnails; a detection unit for detecting a feature point from a thumbnail; a decode unit for respectively decoding zones including the feature point from each piece of compressed data; and a calculation unit for obtaining positional information of the feature point in each of the decoded zones and calculating a displacement amount of each piece of compressed data based on the positional information.
The present invention compresses and retains the plurality of images to be synthesized. The displacement amount is calculated by first detecting a feature point in a thumbnail, then decoding, from the compressed data, only the zone corresponding to the feature point detected from the thumbnail. Positional information of the feature point is then obtained for each piece of compressed data by using the decoded data, and the displacement amount is calculated from it. Because the calculation does not have to process the entirety of an image, and the data temporarily retained during the process is compressed, the memory capacity that must be secured for calculating a displacement amount can be kept small and the process speed increased.
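The flow described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names, the 128-by-96 thumbnail size (which the embodiment later mentions as an example) and the margin value are assumptions for the sketch. A feature point found in the small thumbnail is scaled up to full-image coordinates, and only a small zone around that position needs to be decoded from the compressed data.

```python
def thumb_to_full(pt, thumb_size, full_size):
    """Scale a thumbnail coordinate (x, y) up to full-resolution coordinates."""
    tx, ty = pt
    tw, th = thumb_size
    fw, fh = full_size
    return (tx * fw // tw, ty * fh // th)

def decode_zone(pt_full, margin):
    """Bounding box (left, top, right, bottom) around a scaled feature point;
    only this zone of the compressed data would be decoded."""
    x, y = pt_full
    return (x - margin, y - margin, x + margin, y + margin)

# A thumbnail feature point at (64, 48) in a 128x96 thumbnail maps to the
# centre of a 2816x2112 full image, and only a small box around it is decoded.
center = thumb_to_full((64, 48), (128, 96), (2816, 2112))
zone = decode_zone(center, 16)
```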
The thumbnail creation unit may be configured to create one thumbnail corresponding to a certain piece of compressed data among the plural pieces of compressed data. In this case, the decode unit is preferably configured so that the area it decodes from compressed data other than the piece corresponding to the thumbnail is larger than the area it decodes from the piece corresponding to the thumbnail.
In an alternative configuration, the thumbnail creation unit creates first and second thumbnails corresponding to the first and second compressed data, the detection unit detects feature points from the first and second thumbnails, respectively, and the decode unit decodes zones including the respective feature points from the first and second compressed data based on the displacement between the positions of the feature points detected from the two thumbnails.
Furthermore, a zone decoded by the decode unit may be made up of 8M by 8N pixels (where M and N are natural numbers). In the case of compressed data in the JPEG (Joint Photographic Experts Group) format, for instance, this speeds up the process by excluding extraneous processing.
The present invention makes it possible to reduce the memory capacity to be secured and to speed up the process of synthesizing a plurality of images in an imaging apparatus.
The following is a detailed description of the preferred embodiment of the present invention by referring to the accompanying drawings.
The CCD camera 2 photographs an object. The code unit 3 converts the raw data obtained by the CCD camera 2 into compressed data such as JPEG, PNG (Portable Network Graphics) or TIFF (Tagged Image File Format). The following describes the example of converting raw data to the JPEG format. The code retention unit 4 retains the data coded by the code unit 3.
The thumbnail creation unit 5 creates a thumbnail from the raw data. The thumbnail retention unit 6 retains the created thumbnail. The feature point detection unit 7 detects a feature point from the thumbnail.
The partial image decode unit 8 partially decodes the coded data based on information on the feature point within the thumbnail, which is detected by the feature point detection unit 7. The partial image retention unit 9 retains the decoded partial image data. The displacement amount calculation unit 10 uses the decoded partial image data to calculate a displacement amount between the images retained by the code retention unit 4.
As described above, the imaging apparatus 1 shown in
In the image shown on the left side of
Note that the descriptions have been provided for the case of the feature points being four in the examples shown in
Assuming that one frame of raw data is approximately 6 megabytes (MB) and the compression ratio is 1/10, the memory capacity required for synthesizing three images is 1.8 MB (=6*(1/10)*3). By comparison, the conventional technique temporarily retains the raw data in memory to create a synthesized image, requiring a memory capacity of 18 MB (=6*3). The present embodiment thus greatly reduces the required memory capacity. A thumbnail is small, for example 128 by 96 pixels, and therefore the memory capacity to be secured for the image stabilization process is greatly suppressed.
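The memory comparison above can be checked directly. This is only the arithmetic from the text (6 MB per raw frame, 1/10 compression, three frames); the variable names are illustrative.

```python
raw_mb = 6.0      # one frame of raw data, approx. 6 MB (from the text)
ratio = 1 / 10    # assumed JPEG compression ratio
frames = 3        # number of images to be synthesized

# Present embodiment: only compressed data is retained during synthesis.
compressed_total = raw_mb * ratio * frames   # 1.8 MB

# Conventional technique: all raw frames retained in memory.
raw_total = raw_mb * frames                  # 18 MB
```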
The first step extracts and decodes the first and second images in blocks of a predetermined size, then detects edges by using a known technique such as a Sobel filter. The next step determines whether or not a point corresponding to a feature point of the thumbnail exists on a detected edge, thereby detecting the feature point in the decoded data.
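A Sobel filter, mentioned above as one known edge detector, can be sketched as follows. This is a generic, pure-Python illustration (the patent does not give an implementation); the gradient magnitude here uses the common |Gx| + |Gy| approximation, which is an assumption of the sketch.

```python
# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, x, y):
    """Approximate edge strength |Gx| + |Gy| at interior pixel (x, y).

    `img` is a 2D list of grayscale values indexed as img[row][col].
    """
    gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
             for j in range(3) for i in range(3))
    return abs(gx) + abs(gy)

# A sharp vertical step between columns 1 and 2 yields a strong response.
edge_img = [[0, 0, 255, 255] for _ in range(4)]
flat_img = [[7] * 4 for _ in range(4)]
```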
For a given feature point, the position coordinates within the first and second images can be expected to be closer to each other than to a feature point detected from another edge. Taking advantage of this, the present embodiment detects a feature point in the second image by searching the part corresponding to that feature point once it has been detected in the first image.
The partial image around a feature point searched for in the second image is preferably set larger than the partial image extracted from the first image. To detect, in the second image, a feature point that was detected in a one-block-sized region of the first image, the second image would likewise have to be divided into blocks, partially decoded and searched. If camera shake has displaced the position of a figure between the first and second images, however, extracting exactly the same pixels may fail to capture the corresponding feature point. This is the reason for making the image element extracted from the second and subsequent images larger than the element extracted from the first image.
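The search of a larger second-image region for a block taken from the first image can be sketched as a simple block-matching loop. This is an illustrative sketch only: the patent does not specify the matching criterion, and the sum-of-absolute-differences (SAD) cost used here is an assumption.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-sized 2D blocks."""
    return sum(abs(p - q)
               for row_a, row_b in zip(a, b)
               for p, q in zip(row_a, row_b))

def find_in_region(block, region):
    """Slide `block` over the larger `region` taken from the second image;
    return the (dx, dy) offset of the best (lowest-cost) match."""
    bh, bw = len(block), len(block[0])
    rh, rw = len(region), len(region[0])
    best = None
    for dy in range(rh - bh + 1):
        for dx in range(rw - bw + 1):
            sub = [row[dx:dx + bw] for row in region[dy:dy + bh]]
            cost = sad(block, sub)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best[1], best[2]
```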
The first step extracts, from the coded data of the first and second images, the zone corresponding to the feature point R1. Here the zone of the partial image extracted from the second image is preferably larger than that of the partial image of the first image, because a displacement of the figure between the two images means the feature point will not necessarily lie in the same zone as in the first image.
The next step decodes the parts specified by these ranges and compares the first and second images. The same process is performed for the remaining three feature points R2, R3 and R4, after which the amount by which the second image must be moved to overlap the first image, that is, the displacement amount, is calculated.
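Combining the per-point results into one displacement amount can be sketched as follows. The patent does not state how the four feature points are combined; averaging a pure translation over the matched pairs is an assumption of this sketch.

```python
def displacement(matches):
    """Average (dx, dy) over matched feature-point pairs.

    `matches` is a list of ((x1, y1), (x2, y2)) pairs: the position of a
    feature point in the first image and its position in the second image.
    """
    n = len(matches)
    dx = sum(x2 - x1 for (x1, _), (x2, _) in matches) / n
    dy = sum(y2 - y1 for (_, y1), (_, y2) in matches) / n
    return (dx, dy)

# Four matched points (R1..R4) that all shifted by (+3, -2):
pairs = [((10, 10), (13, 8)), ((50, 40), (53, 38)),
         ((20, 70), (23, 68)), ((80, 30), (83, 28))]
```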
First, step S11 creates compressed data from the obtained plural image data and stores it in memory. Step S12 creates a thumbnail of the first image and stores it in memory.
Step S13 detects feature points (Rn) from the thumbnail. The processes of detecting edges with a Sobel filter and detecting feature points on those edges are the same as described above.
Step S14 partially decodes, from the compressed data of the first and second images, the vicinity of the position corresponding to a feature point Rn detected from the thumbnail. Step S15 detects a feature point (Tn) in the partial image decoded from the first image. Then step S16 searches the second image for the position of the feature point Tn detected from the first image.
Step S17 determines whether or not a position within the compressed data has been obtained for every feature point Rn detected from the thumbnail. If positions have not yet been obtained for all the feature points Rn, the process returns to step S14 and repeats until they have been.
If positions have been obtained for all the feature points Rn, step S17 proceeds to step S18, in which the displacement amount is calculated based on the results obtained in steps S15 and S16, and the process terminates.
First, step S21 inputs data obtained by the CCD camera 2 to the code unit 3 shown in
First, step S31 inputs the code data, and step S32 inputs, to the partial image decode unit 8, the address of the memory area storing feature point information that indicates the positions of the feature points Rn of the thumbnail.
Step S33 performs Huffman decoding block by block, step S34 carries out linear dequantization and step S35 performs a two-dimensional inverse DCT. Then, step S36 determines whether the decoded 8-by-8 block is close to a feature point stored in the area indicated by the address input in step S32, that is, whether or not the decoded block includes a feature point Tn corresponding to a feature point Rn detected from the thumbnail. If a feature point Tn is included in the block, step S37 outputs the decompressed image as a partial image; if no feature point Tn is detected, no processing is performed.
Then, step S38 determines whether or not the decoding process is complete for all the code data. If an undecoded block remains, the process returns to step S33 and carries out the series of processes of steps S33 through S37. Upon completing the decoding of all the blocks, the process terminates.
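The per-block filtering of steps S36 and S37 amounts to keeping only the 8-by-8 blocks that contain a feature point. A minimal sketch of that selection, with an illustrative function name not taken from the patent:

```python
def blocks_to_keep(feature_points, img_w, img_h, block=8):
    """Return the set of (bx, by) indices of 8x8 JPEG blocks that contain
    at least one feature point; all other decoded blocks are discarded."""
    keep = set()
    for x, y in feature_points:
        if 0 <= x < img_w and 0 <= y < img_h:   # ignore out-of-image points
            keep.add((x // block, y // block))
    return keep

# Two nearby feature points fall in the same 8x8 block, so only one
# block of the image needs to be retained as a partial image.
kept = blocks_to_keep([(17, 9), (17, 10)], 640, 480)
```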
Note that the coding and decoding processes respectively shown in
Furthermore, the origination point of processing the JPEG data is preferably identical with the start point of the feature point proximity zone, and the feature point proximity zone is preferably of a size expressed in multiples of the JPEG block size (8 by 8 pixels), i.e., 8M by 8N, where M and N are natural numbers. For instance, the data volume to be processed can be kept to a minimum, while still including the feature point in the decoded data, by making the vertical extent of the decoding zone equal to the JPEG block size and the horizontal extent a multiple of it. In other words, extraneous processing arises if the size of the decoding area is set independently of the JPEG block size, as exemplified in
Note that the range of a zone to be decoded at once is preferably made larger in the horizontal direction (i.e., the value of N described above). Calculating a displacement amount from a single decoded zone makes it possible to obtain a more accurate displacement amount than calculating it from a decoded zone divided into a plurality of pieces.
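Aligning a decode zone to the 8-by-8 JPEG grid, as recommended above, can be sketched as a rounding computation. The function name and the rectangle representation are illustrative assumptions; the alignment itself (floor the origin, ceil the far edge to multiples of 8) follows directly from the 8M-by-8N requirement.

```python
def aligned_zone(x, y, w, h, block=8):
    """Expand the rectangle (x, y, w, h) to the smallest enclosing zone
    whose origin and size are aligned to the 8x8 JPEG block grid."""
    left = (x // block) * block
    top = (y // block) * block
    right = -(-(x + w) // block) * block    # ceil to next block boundary
    bottom = -(-(y + h) // block) * block
    return left, top, right - left, bottom - top

# A 20x10 zone starting at (13, 5) grows to a 32x16 zone starting at (8, 0),
# i.e. M=4, N=2 blocks, so whole JPEG blocks can be decoded with no waste.
zone = aligned_zone(13, 5, 20, 10)
```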
The displacement amount calculation method according to the present embodiment uses compressed data, such as JPEG, as the image data to be synthesized for image stabilization. Therefore, the memory capacity to be secured for the image stabilization process can be minimized. The method detects feature points in advance by using a thumbnail, then calculates the displacement amount of the figure between the compressed images by decoding only the parts of the compressed data corresponding to those feature points. There is no need to decode the entirety of the compressed data, and therefore the memory area required for these processes can be minimized. Moreover, the zone of compressed data to be decoded and searched can be narrowed, enabling high-speed processing.
The above describes the method for calculating a displacement amount by detecting feature points using one thumbnail. What follows is a description of a displacement amount calculation method using a plurality of thumbnails.
That is, the present embodiment detects feature points Rn and Rn′ from thumbnails 1 and 2, which correspond to the first and second images, respectively. It then obtains feature points Tn corresponding to the feature points Rn detected from thumbnail 1 in the JPEG data of the first image, and feature points Tn′ corresponding to the feature points Rn′ detected from thumbnail 2 in the JPEG data of the second image.
Step S41 creates compressed data of the first and second images respectively and stores it in memory; these processes are the same as in step S11. Step S42 creates thumbnails (i.e., thumbnails 1 and 2) corresponding to the first and second images, respectively, and stores them in memory. Step S43 detects feature points Rn and Rn′ from thumbnails 1 and 2, respectively. Then, step S44 calculates the displacement amount of the feature points between thumbnails 1 and 2.
Step S45 decodes the corresponding zones from the respective compressed data of the first and second images based on the correlation between the feature points detected from the two thumbnails. The processes from step S46 onward correspond to the processes from step S15 onward shown in
As described above, the displacement amount calculation method according to the second embodiment creates a thumbnail for each piece of image data and calculates, in advance, the displacement amount between the thumbnails from the feature points detected in them. It then searches for the feature points in the compressed data based on the calculated displacement between the thumbnails. Since a thumbnail is relatively small, the accuracy of the displacement calculation process is improved without losing the memory-saving benefit of the first embodiment described above.
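The coarse thumbnail-to-thumbnail displacement that guides step S45 can be sketched as follows. The averaging of point displacements and the single scale factor (thumbnail-to-full-image ratio) are assumptions of this sketch, not specified by the patent.

```python
def coarse_offset(thumb_pts1, thumb_pts2, scale):
    """Average displacement between paired feature points of two thumbnails,
    scaled up to full-resolution coordinates to position the decode zones
    in the second image's compressed data."""
    n = len(thumb_pts1)
    dx = sum(x2 - x1 for (x1, _), (x2, _) in zip(thumb_pts1, thumb_pts2)) / n
    dy = sum(y2 - y1 for (_, y1), (_, y2) in zip(thumb_pts1, thumb_pts2)) / n
    return (dx * scale, dy * scale)

# Feature points Rn from thumbnail 1 and Rn' from thumbnail 2; a scale of 22
# would correspond to, e.g., a 128-wide thumbnail of a 2816-wide image.
offset = coarse_offset([(10, 10), (20, 30)], [(12, 9), (22, 29)], 22)
```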
Note that although the above description covers the method for correcting camera shake by calculating the displacement amount of a plurality of still images obtained from continuous shots, the present invention is not limited to this. For example, the technique is applicable to mosaicing for photographing a panoramic picture. Although data must be temporarily retained before the images are integrated for mosaicing, the use of compressed data makes it possible to minimize the memory capacity to be secured.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US6005978 *||Feb 7, 1996||Dec 21, 1999||Cognex Corporation||Robust search for image features across image sequences exhibiting non-uniform changes in brightness|
|US6195462 *||Mar 30, 1998||Feb 27, 2001||Eastman Kodak Company||Image compression|
|US6226414 *||Apr 20, 1995||May 1, 2001||Oki Electric Industry Co., Ltd.||Image encoding and decoding method and apparatus using edge synthesis and inverse wavelet transform|
|US6477279 *||Nov 6, 2001||Nov 5, 2002||Oki Electric Industry Co., Ltd.||Image encoding and decoding method and apparatus using edge synthesis and inverse wavelet transform|
|US7613363 *||Jun 23, 2005||Nov 3, 2009||Microsoft Corp.||Image superresolution through edge extraction and contrast enhancement|
|US20010016078 *||Mar 16, 2001||Aug 23, 2001||Oki Electric Industry Co., Ltd.||Image encoding and decoding method and apparatus using edge synthesis and inverse wavelet transform|
|US20020051582 *||Nov 6, 2001||May 2, 2002||Oki Electric Industry Co., Ltd.||Image encoding and decoding method and apparatus using edge synthesis and inverse wavelet transform|
|US20030026457 *||Aug 6, 2001||Feb 6, 2003||Mitutoyo Corporation||Systems and methods for correlating images in an image correlation system with reduced computational loads|
|JP2004234624A||Title not available|
|JP2004343483A||Title not available|
|JP2005197911A||Title not available|
|U.S. Classification||382/190, 382/232, 348/208.99, 348/94|
|International Classification||H04N5/228, G06K9/46|
|Cooperative Classification||H04N19/59, H04N5/232, H04N19/80, H04N19/60, H04N19/54, H04N5/23229, H04N5/23232|
|European Classification||H04N5/232L, H04N5/232L3, H04N7/26F, H04N7/26M2N2, H04N5/232, H04N7/46S, H04N7/30|
|Jan 3, 2007||AS||Assignment|
Owner name: FUJITSU LIMITED, JAPAN
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MURASHITA, KIMITAKA;SHIMIZU, MASAYOSHI;SUZUKI, SHOJI;ANDOTHERS;REEL/FRAME:018747/0667;SIGNING DATES FROM 20061213 TO 20061214
|Oct 30, 2013||FPAY||Fee payment|
Year of fee payment: 4