Publication number: US 5754226 A
Publication type: Grant
Application number: US 08/574,612
Filing date: Dec 19, 1995
Publication date: May 19, 1998
Priority date: Dec 20, 1994
Fee status: Paid
Inventors: Eiji Yamada, Tetsuo Iwaki
Original Assignee: Sharp Kabushiki Kaisha
Imaging apparatus for obtaining a high resolution image
US 5754226 A
Abstract
An imaging apparatus includes: an imaging plate having a light receiving face; focusing device for focusing light from a subject on the light receiving face of the imaging plate as the image formed on the light receiving face; image position displacing device for displacing a position of the image formed by the focusing device with respect to a reference position; image position displacement control device for controlling the image position displacing device; motion vector detecting device for detecting a motion vector of each image with respect to a reference image; and image synthesis device for displacing pixels constituting each image and for interpolating the displaced pixels constituting each image between adjacent pixels of the reference image, thereby synthesizing the images into a single image.
Images (10)
Claims (7)
What is claimed is:
1. An imaging apparatus comprising:
an imaging plate having a light receiving face, on which a plurality of light receiving elements are arranged at intervals of PH (PH is a positive real number) in a first direction and at intervals of Pv (Pv is a positive real number) in a second direction perpendicularly crossing the first direction, for imaging a received image formed on the light receiving face during a predetermined period of time as an image constituted by a plurality of pixels, the number of pixels being independent of the number of light receiving elements;
focusing means for focusing light from a subject on the light receiving face of the imaging plate as the received image formed on the light receiving face;
image position displacing means for displacing a position of the received image formed by the focusing means with respect to a reference position by approximately PH i/H (H is a predetermined integer of 1 or greater; and i is an integer: 0≦i<H) in the first direction and by approximately Pv j/V (V is a predetermined integer of 1 or greater; and j is an integer: 0≦j<V) in the second direction;
image position displacement control means for controlling the image position displacing means each time the imaging plate images the image so as to displace the received image to a position represented by a combination of i and j;
motion vector detecting means for detecting a motion vector of each of (N-1) images (N=HV) with respect to a reference image, using one of N images imaged by the imaging plate as the reference image; and
image synthesis means for displacing pixels constituting each of the (N-1) images by a degree obtained by synthesizing the motion vector of each of the (N-1) images detected by the motion vector detecting means and a displacement vector of the position of each of the (N-1) images with respect to the reference image, and for interpolating the displaced pixels constituting each of the (N-1) images between adjacent pixels of the reference image, thereby synthesizing the N images into a single image.
2. An imaging apparatus according to claim 1, wherein the motion vector detecting means detects a displacement of each of the (N-1) images with respect to the reference image as the motion vector.
3. An imaging apparatus according to claim 1, wherein the motion vector detecting means includes:
means for displacing pixels constituting each of the (N-1) images in the first direction and/or in the second direction so as to generate a plurality of displaced images of each of the (N-1) images; and
means for detecting a displacement of a displaced image having the highest correlation with the reference image among the plurality of displaced images of each of the (N-1) images, the motion vector detecting means detecting the displacement as the motion vector of each of the (N-1) images.
4. An imaging apparatus according to claim 3, wherein the means for displacing pixels further includes interpolation means for interpolating luminance of pixels.
5. An imaging apparatus according to claim 1, wherein the motion vector detecting means includes:
means for displacing a plurality of predetermined pixels among the pixels constituting each of the (N-1) images in the first direction and/or in the second direction so as to generate a displaced image constituted only by the displaced pixels; and
means for detecting a displacement of the displaced image having a minimum value obtained by accumulating a difference between each pixel of the plurality of displaced images having respectively different combinations of displacements in the first direction and the second direction and each pixel at a corresponding predetermined position in the reference image so as to detect the displacement as the motion vector of each of the (N-1) images.
6. An imaging apparatus according to claim 1, wherein the motion vector detecting means includes a filter for removing at least a high frequency component at a spatial frequency of a signal of each of the (N-1) images, and the signal is passed through the filter prior to the detection of the motion vector.
7. An imaging apparatus comprising:
an imaging plate having a light receiving face, on which a plurality of light receiving elements are arranged at intervals of PH (PH is a positive real number) in a first direction and at intervals of Pv (Pv is a positive real number) in a second direction perpendicularly crossing the first direction, for imaging a received image formed on the light receiving face during a predetermined period of time as an image formed of a plurality of pixels, the number of pixels being independent of the number of light receiving elements;
focusing means for focusing light from a subject on the light receiving face of the imaging plate as the received image formed on the light receiving face;
image position displacing means for displacing a position of the received image formed by the focusing means with respect to a reference position by approximately PH i/H (H is a predetermined integer of 1 or greater; and i is an integer: 0≦i<H) in the first direction and by approximately Pv j/V (V is a predetermined integer of 1 or greater; and j is an integer: 0≦j<V) in the second direction;
motion vector detecting means, with respect to each pair of images among N images imaged by the imaging plate, for detecting a displaced image having the highest correlation with one of the pair of images from displaced images obtained by displacing each pixel constituting the other of the pair of images in the first direction and/or the second direction; and
image synthesis means for displacing pixels constituting the one image by a degree obtained by synthesizing a motion vector obtained from the other image to the one image, which is detected by the motion vector detection means, and a displacement of a position of the one image from the other image, and for synthesizing the pair of images by repeating a process for interpolating each displaced pixel of the one image between adjacent pixels of the other image, thereby finally obtaining a single synthesized image.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention:

The present invention relates to an imaging apparatus capable of obtaining resolution surpassing resolution corresponding to the number of pixels of an imaging plate.

2. Description of the Related Art:

As shown in FIG. 9, an imaging apparatus for obtaining resolution surpassing resolution corresponding to the number of pixels of an imaging plate, which uses a pair of imaging plates 21 and 22, has been previously proposed, for example, in Japanese Patent Publication No. 56-40546.

In the imaging apparatus, image light L is split into two parts by a half mirror 23 so that one part of the image light L is incident on a light receiving face of one imaging plate 21 while the other part of the image light L is incident on a light receiving face of the other imaging plate 22 via a mirror 24. The light receiving faces of the imaging plates 21 and 22 have the same pixel arrangement, and form an image with the incident image light L at the same magnification. These light receiving faces are placed so that one part of image light L is incident on the position horizontally shifted by a half-pixel from the position on which the other part of incident image light L is incident. The imaging plates 21 and 22 operate in a synchronous manner with driving signals φ1 and φ2 respectively transmitted from a driving signal generating section 25. Phases of the driving signals φ1 and φ2 are shifted by 180 degrees from each other. Then, images output from the imaging plates 21 and 22 are alternately inserted for each pixel in a horizontal direction in a synthesis signal processing section 26, thereby synthesizing the images into a single image.

In the image thus synthesized in the synthesis signal processing section 26, horizontal resolution can be doubled as compared with that of each of the imaging plates 21 and 22 because a horizontal spatial sampling interval is reduced by half. By vertically shifting the positions, on which the two parts of incident image light L are respectively incident, by a half-pixel from each other, vertical resolution can likewise be doubled.

As another imaging apparatus for obtaining resolution surpassing the resolution corresponding to the number of pixels of an imaging plate, an imaging apparatus using a single imaging plate 31 in time division as shown in FIG. 10 has been previously proposed, for example, in Japanese Laid-Open Patent Publication No. 61-251380.

In this imaging apparatus, when the image light L is incident onto a light receiving face of the imaging plate 31 via an optical system 32, the position of the optical system 32 or the imaging plate 31 can be changed by function of an image position displacing section 33 so as to displace the position of the image. A control section 34 controls the image position displacing section 33. Specifically, the position of the image is set at a reference position by the control section 34, and the first image is formed on the imaging plate 31. Then, the position of the image is horizontally shifted by a half-pixel from the reference position by the control section 34, and the second image is formed on the imaging plate 31. In the same manner, the position of the image is vertically shifted by a half-pixel from the reference position, so that the third image is formed on the imaging plate 31. In the same manner, the position of the image is both vertically and horizontally shifted by a half-pixel from the reference position, so that the fourth image is formed on the imaging plate 31. The four images thus formed on the imaging plate 31 are transmitted to an A/D conversion section 35 to be sequentially converted into digital signals. Then, the sequential digital signals are stored in an image memory 36. A signal corresponding to each pixel is output from the four images stored in the image memory 36 in turn, thereby synthesizing the four images into a single image.

In the synthesized image output from the image memory 36, doubled horizontal and vertical resolution is obtained as compared with that of the imaging plate 31 because horizontal and vertical spatial sampling intervals are respectively reduced by half.
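The interleaving of the four half-pixel-shifted images described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction only; the function name and the assumption that each captured image arrives as a 2-D array are mine, not the publication's:

```python
import numpy as np

def synthesize_four(ref, h_shift, v_shift, hv_shift):
    """Interleave four images, each taken with a half-pixel offset, into
    one image with doubled horizontal and vertical resolution.

    ref      : image taken at the reference position
    h_shift  : image taken shifted horizontally by a half-pixel
    v_shift  : image taken shifted vertically by a half-pixel
    hv_shift : image taken shifted both horizontally and vertically
    """
    rows, cols = ref.shape
    out = np.empty((2 * rows, 2 * cols), dtype=ref.dtype)
    out[0::2, 0::2] = ref       # reference samples
    out[0::2, 1::2] = h_shift   # fills the horizontal half-pixel positions
    out[1::2, 0::2] = v_shift   # fills the vertical half-pixel positions
    out[1::2, 1::2] = hv_shift  # fills the diagonal half-pixel positions
    return out
```

Because each output row and column draws from a different capture, the spatial sampling interval is halved in both directions, which is exactly why the synthesized image has doubled resolution.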

As still another imaging apparatus for obtaining resolution surpassing resolution realized by the number of pixels of an imaging plate, an imaging apparatus which utilizes fluctuation spontaneously applied to an imaging plate 41 as shown in FIG. 11 has been previously proposed, for example, in Japanese Laid-Open Patent Publication No. 4-172778.

This imaging apparatus sequentially transmits images formed on the imaging plate 41 to an A/D conversion section 42 so as to convert the images into digital signals. Then, the digital signals are stored in an image memory 43. When a predetermined number (4 or more) of the images are stored in the image memory 43, one image at a time is selected and transmitted to an interpolation image generation section 44 as a reference image. Each time the interpolation image generation section 44 receives the reference image, the interpolation image generation section 44 generates three interpolation images: the first interpolation image is obtained by horizontally displacing each pixel of the reference image by a half-pixel; the second interpolation image is obtained by vertically displacing each pixel of the reference image by a half-pixel; and the third interpolation image is obtained by both horizontally and vertically displacing each pixel of the reference image by a half-pixel. These three interpolation images are transmitted to a highly correlative image detecting section 45. Each time the highly correlative image detecting section 45 receives an interpolation image, the highly correlative image detecting section 45 detects the image which is the most highly correlative with the interpolation image among all images stored in the image memory 43 except the reference image. In this way, the reference image transmitted to the interpolation image generation section 44 and the three images, which are detected by the highly correlative image detecting section 45 based on the reference image, are transmitted to a synthesis section 46. Then, each pixel of the three images is inserted in turn between pixels of the reference image in the synthesis section 46, thereby obtaining a single image.

The synthesized image obtained in the above synthesis section 46 has doubled horizontal and vertical resolution as compared with that of the imaging plate 41 because the horizontal and vertical spatial sampling intervals are respectively halved.
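The three half-pixel interpolation images described above can, for example, be generated by averaging adjacent pixels of the reference image. The publication does not specify which interpolation is used, so the linear (neighbor-averaging) scheme below is an assumption for illustration:

```python
import numpy as np

def interpolation_images(ref):
    """Generate three half-pixel interpolation images from a reference image
    by linear interpolation: each new pixel is the mean of its neighbors."""
    # horizontal half-pixel positions: mean of left/right neighbors
    h = 0.5 * (ref[:, :-1] + ref[:, 1:])
    # vertical half-pixel positions: mean of upper/lower neighbors
    v = 0.5 * (ref[:-1, :] + ref[1:, :])
    # diagonal half-pixel positions: mean of the four surrounding pixels
    hv = 0.25 * (ref[:-1, :-1] + ref[:-1, 1:] + ref[1:, :-1] + ref[1:, 1:])
    return h, v, hv
```

Each interpolation image is then compared against the stored captures to find the capture most highly correlated with it, as the conventional apparatus of FIG. 11 does.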

However, the above-mentioned conventional imaging apparatuses have the following disadvantages.

The conventional imaging apparatus shown in FIG. 9 requires the two imaging plates 21 and 22 so as to double the resolution, and needs the half mirror 23 and the like as an optical system. As a result, it becomes difficult to fabricate an inexpensive, compact and light-weighted apparatus.

In the conventional imaging apparatus shown in FIG. 10, unless the image position displacing section 33 precisely displaces the position of the image by a half-pixel for each image, distortion is generated in the resultant image. Therefore, the mechanism of the image position displacing section 33 should have high accuracy. As a result, the apparatus becomes disadvantageously expensive. In addition, since the imaging apparatus uses one imaging plate 31 in time division, a shift in the position of the image occurs when a subject moves or the imaging apparatus moves while taking the images due to a movement of a hand holding the apparatus, resulting in distortion in the synthesized image. Moreover, a large shift may bring a possibility that the order of pixels is changed during a synthesizing process. As a result, an untrue synthesized image is disadvantageously obtained in some cases.

Furthermore, since the conventional imaging apparatus shown in FIG. 11 utilizes fluctuation spontaneously applied to the imaging plate 41, there is no guarantee that three images, which are vertically, horizontally or both vertically and horizontally shifted by a half-pixel from the reference image, are obtained without fail from a predetermined number of images. Therefore, to ensure a high probability of obtaining such three images, the number of images stored in the image memory 43 should be increased. As a result, the capacity of the image memory 43 is increased, rendering the imaging apparatus expensive. In addition, in the case where the imaging apparatus is perfectly fixed, an increase in the number of images stored in the image memory 43 does not produce any improvement. Moreover, in the general case of taking images, a correlation between images is usually lowered with elapse of time due to movement of a subject or change in the degree of illuminance. Therefore, there is no guarantee that images with a higher correlation can be obtained as the number of images stored in the image memory 43 is increased. Furthermore, as the number of images stored in the image memory 43 is increased, the possibility that part of a subject moves while taking these images becomes high. When such images are synthesized into a single image, an untrue synthesized image may be obtained because the order of pixels changes in the moved part of the subject.

In the conventional example shown in FIG. 11, a correlation for each image formed on the imaging plate 41 is detected. However, Moiré fringes and the like appear in such unsynthesized images due to aliasing which occurs while spatially sampling the images by each pixel of the imaging plate 41. As a result, it becomes difficult to precisely determine a correlation. Furthermore, the resolution is disadvantageously lowered due to false detection of the correlation in some cases.

The reason why aliasing occurs in the unsynthesized images will be described below.

Since an imaging apparatus in general spatially samples an image formed on a light receiving face of the imaging plate by each pixel, a frequency band of a spatial frequency of the formed image should be previously limited, taking the sampling theorem into consideration. For this purpose, an optical lowpass filter such as a birefringence plate is used.

For comparison, the reason why aliasing does not occur in a general imaging apparatus having resolution corresponding to the number of pixels will be first described.

FIG. 12 illustrates, on the left, a waveform (spatial region) showing a variation in luminance of the image with respect to the horizontal position on a light receiving face of the imaging plate, and on the right, the variation in luminance as a frequency spectrum (frequency region) of a spatial frequency. A distance between adjacent pixels in a horizontal direction on the light receiving face of the imaging plate is denoted as PH. In this case, the optical lowpass filter such as a birefringence plate previously removes frequency components higher than a spatial frequency of 1/(2PH) in the image formed on the light receiving face. Therefore, when the image is spatially sampled at a sampling frequency of 1/PH (PH : sampling interval), which is twice as high as the spatial frequency of 1/(2PH), an aliasing component A indicated with a broken line in a frequency spectrum does not appear in a region of spatial frequency of 1/(2PH) and lower as shown in FIG. 13. Thus, aliasing does not occur. In the case of spatial sampling with such pixels as explained above, attenuation in a high frequency band occurs due to aperture effect as shown in FIG. 13 since a photosensitive region of the pixel has a certain length and a certain width.

Next, the reason why aliasing occurs in the case where horizontal resolution is doubled by synthesizing images will be described.

In this case, as shown in FIG. 14, and unlike the above general imaging apparatus, the birefringence plate and the like remove only frequency components of a spatial frequency of 1/PH or higher in an image formed on a light receiving face, and do not remove signal components in the range of spatial frequencies from 1/(2PH) to 1/PH. This is because resolution of PH or higher cannot be obtained from the image if the birefringence plate and the like remove frequency components of a spatial frequency of 1/(2PH) and higher, even when a sampling frequency is enhanced. Therefore, when the image, in which only frequency components of a spatial frequency of 1/PH and higher are removed, is spatially sampled at a sampling frequency of 1/PH (PH is a sampling interval), aliasing components indicated with a broken line in the frequency spectra of FIG. 15 appear in a region having a frequency of 1/(2PH) or lower. A hatched region in FIG. 15 appears as aliasing AN. In this case, the actual signal obtained is the one indicated by the solid line, to which the aliasing AN indicated by the hatched region is added.

Also in the case where an image is spatially sampled while horizontally shifting the position of the image by a half-pixel (PH/2), aliasing AN occurs in a hatched region as shown in frequency spectra of FIG. 16. Also in FIGS. 15 and 16, attenuation in a high frequency band commonly occurs during spatial sampling due to the aperture effect.

In the spatially sampled images shown in FIGS. 15 and 16, sample points of each image for spatial sampling are shifted by a half-pixel (PH/2) from each other. Therefore, a signal component having a center at a spatial frequency of 2n/PH (n is an integer) of FIG. 15 has the same phase as that of FIG. 16, while a signal component having a center at a spatial frequency of (2n+1)/PH shown in FIG. 15 has a phase shifted by 180 degrees from that of the corresponding signal component shown in FIG. 16. In other words, while the signal components having a center at a spatial frequency of 0 have the same phase as each other, the aliasing components having a center at a spatial frequency of 1/PH have the same amplitude of luminance and phases shifted by 180 degrees from each other.

Thus, as shown in FIG. 17, when pixels of the respective images shown in FIGS. 15 and 16 are alternately positioned so as to synthesize the images into an image having a sampling frequency of 2/PH (that is, a sampling interval of PH/2), aliasing AN does not occur in a region having a spatial frequency of 1/PH and lower because the aliasing components AN of both images are counterbalanced. Aliasing components A in a region of a spatial frequency of 1/PH and higher shown in FIG. 17 are signal components having a center at a spatial frequency of 2/PH, which are not counterbalanced.

As is understood from the above observation, when the frequency components to be removed by the birefringence plate and the like are limited to a high frequency band, aliasing AN does not occur in the synthesized image, but occurs in the unsynthesized images. As a result, Moiré fringes and the like appear in the image. Therefore, when a correlation is detected for such unsynthesized images as described above, there arises a possibility that the detection of the correlation is rendered imprecise due to the adverse effect of aliasing AN.
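This behavior can be verified numerically. The NumPy sketch below samples a cosine whose spatial frequency lies between 1/(2PH) and 1/PH at two positions offset by a half-pixel: each sampling alone shows only the alias, while interleaving the two recovers the true frequency. The specific frequency 0.7/PH and the sample count are arbitrary choices for illustration:

```python
import numpy as np

P = 1.0          # pixel pitch (arbitrary units)
f = 0.7 / P      # between the Nyquist limit 0.5/P and the sampling rate 1/P
n = np.arange(64)

s0 = np.cos(2 * np.pi * f * (n * P))          # sampled at the reference phase
s1 = np.cos(2 * np.pi * f * (n * P + P / 2))  # sampled half a pixel later

# Interleave the two samplings: the sampling interval becomes P/2.
merged = np.empty(2 * len(n))
merged[0::2] = s0
merged[1::2] = s1

# Spectrum of the merged signal peaks near the true frequency f = 0.7/P ...
spec = np.abs(np.fft.rfft(merged))
freqs = np.fft.rfftfreq(len(merged), d=P / 2)
peak = freqs[np.argmax(spec[1:]) + 1]          # skip the DC bin

# ... whereas a single sampling peaks near the alias 1/P - f = 0.3/P.
alias_spec = np.abs(np.fft.rfft(s0))
alias_freqs = np.fft.rfftfreq(len(s0), d=P)
alias_peak = alias_freqs[np.argmax(alias_spec[1:]) + 1]
```

The aliasing components of the two samplings are opposite in phase, so they cancel in the merged spectrum, exactly as described for FIGS. 15 to 17.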

SUMMARY OF THE INVENTION

According to one aspect of the invention the imaging apparatus includes: an imaging plate having a light receiving face, on which a plurality of light receiving elements are arranged at intervals of PH (PH is a positive real number) in a first direction and at intervals of PV (PV is a positive real number) in a second direction perpendicularly crossing the first direction, for imaging an image formed on the light receiving face during a predetermined period of time as an image constituted by a plurality of pixels; focusing means for focusing light from a subject on the light receiving face of the imaging plate as the image formed on the light receiving face; image position displacing means for displacing a position of the image formed by the focusing means with respect to a reference position by approximately PH i/H (H is a predetermined integer of 1 or greater; and i is an integer: 0≦i <H) in the first direction and by approximately PV j/V (V is a predetermined integer of 1 or greater; and j is an integer: 0≦j<V) in the second direction; image position displacement control means for controlling the image position displacing means each time the imaging plate images an image so as to displace the image to a position represented by a combination of i and j; motion vector detecting means for detecting a motion vector of each of (N-1) images (N=HV) with respect to a reference image, using one of N images imaged by the imaging plate as the reference image; and image synthesis means for displacing pixels constituting each of the (N-1) images by a degree obtained by synthesizing the motion vector of each of the (N-1) images detected by the motion vector detecting means and a displacement vector of the position of each of the (N-1) images with respect to the reference image, and for interpolating the displaced pixels constituting each of the (N-1) images between adjacent pixels of the reference image, thereby synthesizing the N images into a single image.

According to another aspect of the invention, the imaging apparatus includes: an imaging plate having a light receiving face, on which a plurality of light receiving elements are arranged at intervals of PH (PH is a positive real number) in a first direction and at intervals of PV (PV is a positive real number) in a second direction perpendicularly crossing the first direction, for imaging an image formed on the light receiving face during a predetermined period of time as an image formed of a plurality of pixels; focusing means for focusing light from a subject on the light receiving face of the imaging plate as the image formed on the light receiving face; image position displacing means for displacing a position of the image formed by the focusing means with respect to a reference position by approximately PH i/H (H is a predetermined integer of 1 or greater; and i is an integer: 0≦i<H) in the first direction and by approximately PV j/V (V is a predetermined integer of 1 or greater; and j is an integer: 0≦j<V) in the second direction; motion vector detecting means, with respect to each pair of images among N images imaged by the imaging plate, for detecting a displaced image having the highest correlation with one of the pair of images from displaced images obtained by displacing each pixel constituting the other of the pair of images in the first direction and/or the second direction; image synthesis means for displacing pixels constituting the one image by a degree obtained by synthesizing the motion vector from the other image to the one image, which is detected by the motion vector detection means, and a displacement of a position of the one image from the other image, and for synthesizing the pair of images by repeating a process for interpolating each displaced pixel of the one image between adjacent pixels of the other image, thereby finally obtaining a single synthesized image.

Thus, the invention described herein makes possible the advantage of providing an imaging apparatus for obtaining a high resolution image in time division by using an inexpensive imaging plate, without requiring the position of the image to be displaced with high precision, and without suffering from movement of a subject or of a hand holding the imaging apparatus.

This and other advantages of the present invention will become apparent to those skilled in the art upon reading and understanding the following detailed description with reference to the accompanying figures.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing the configuration of an imaging apparatus in accordance with an example of the present invention.

FIG. 2 is a plan view showing the relationship between a pixel arrangement of an imaging plate and images formed on a light receiving face in accordance with an example of the present invention.

FIG. 3 is a plan view showing the positional relationship among pixels of four images in accordance with an example of the present invention.

FIG. 4 is a block diagram showing the configuration of a motion vector detecting section in accordance with an example of the present invention.

FIG. 5 is a graph showing a waveform and a frequency spectrum of an image formed on a light receiving face in accordance with an example of the present invention.

FIG. 6 is a graph showing a waveform and a frequency spectrum of the image which is obtained by spatially sampling the image of FIG. 5 and then passing it through a digital filter in accordance with an example of the present invention.

FIG. 7 is a diagram illustrating a linear interpolation processing for detecting a motion vector in accordance with an example of the present invention.

FIG. 8 is a diagram illustrating an interpolation processing of pixels for synthesizing images in accordance with an example of the present invention.

FIG. 9 is a block diagram illustrating the configuration of an imaging apparatus in accordance with a first conventional example.

FIG. 10 is a block diagram illustrating the configuration of an imaging apparatus in accordance with a second conventional example.

FIG. 11 is a block diagram illustrating the configuration of an imaging apparatus, showing a third conventional example.

FIG. 12 is a graph illustrating a waveform and a frequency spectrum of an image formed on a light receiving face in a general imaging apparatus.

FIG. 13 is a graph illustrating a waveform and a frequency spectrum of an image obtained by spatially sampling the image of FIG. 12.

FIG. 14 is a graph illustrating a waveform and a frequency spectrum of an image formed on a light receiving face in an imaging apparatus capable of obtaining doubled horizontal resolution.

FIG. 15 is a graph illustrating a waveform and frequency spectra of an image obtained by spatially sampling the image of FIG. 14.

FIG. 16 is a graph illustrating a waveform and frequency spectra of an image obtained by spatially sampling the image of FIG. 14 while shifting the image by a half-pixel.

FIG. 17 is a graph showing a waveform and a frequency spectrum of an image obtained by synthesizing the images of FIGS. 15 and 16.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

In an imaging apparatus according to the present invention, positions of N images formed on an imaging plate are displaced horizontally by 1/H of a pixel and vertically by 1/V of a pixel within one pixel. An image position displacing means changes a refractive index or an incident angle of a transparent refractive plate placed in an optical system and/or changes a reflection angle of a reflective plate to shift an optical axis, thereby displacing the position of the image to be formed. Alternatively, the position of the image can be displaced by translating the imaging plate itself in a horizontal direction and/or a vertical direction. By imaging N images, the imaging plate can obtain a still image. Furthermore, by repeatedly imaging N images in a continuous manner, a video image can be obtained.

A motion vector detecting means detects a displacement of the displaced image having the highest correlation with a reference image from the displaced images obtained by horizontally and/or vertically displacing each pixel of (N-1) images except the reference image. The displaced images described herein can include the image itself which is not substantially displaced.

Since the imaging position of each of the (N-1) images is displaced from the imaging position of the reference image, the reference image and each of the other images are shifted from each other by less than one pixel. Moreover, in the case where the imaging position is imprecisely displaced by the image position displacing means, the shift of the imaging position appears as a deviation of the actual image position from the position expected relative to the reference image.

In addition, since each of (N-1) images is imaged at a different point of time from the time when the reference image is imaged, the image is shifted in the case where a subject moves or an imaging apparatus moves due to movement of a hand holding the apparatus.

The motion vector detecting means detects motion vectors based on the above-explained shifts of these images.

In the case where a displacement of each pixel is represented by an integral multiple of PH in a horizontal direction and an integral multiple of PV in a vertical direction, the image can be displaced only by shifting each real pixel. Normally, however, since the displacement is not represented by an integral multiple of a distance between pixels, interpolation processing is performed in order to displace each pixel. The interpolation processing can be realized in principle as follows.

A pixel having a density of 0 is interpolated between pixels so as to include a pixel at the displaced position of the sample points, and then is subjected to oversampling. Thereafter, a filtering process is conducted with a lowpass filter having a cutoff frequency which is half of a sampling frequency. Then, the pixels are thinned-out, leaving the pixel at the displaced position.
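The principle above can be sketched in one dimension as follows. This is an illustration only, assuming an oversampling factor U and a Hamming-windowed sinc lowpass filter; neither choice is specified by the text:

```python
import numpy as np

def shift_by_oversampling(x, frac, U=8, taps=81):
    """Displace the sample points of 1-D signal x by `frac` of a pixel
    (frac * U must be an integer): zero insertion, lowpass, thinning-out."""
    up = np.zeros(len(x) * U)
    up[::U] = x                            # interpolate zero-density samples
    n = np.arange(taps) - taps // 2
    h = np.sinc(n / U) * np.hamming(taps)  # windowed-sinc lowpass whose cutoff
    h *= U / h.sum()                       # is half the original sampling rate
    up = np.convolve(up, h, mode="same")   # band-limited oversampled signal
    return up[int(round(frac * U))::U]     # keep only the displaced samples
```

The cost of the oversampling and filtering is exactly why, as the next paragraph notes, linear or cubic convolution interpolation is preferred in practice.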

However, such a process requires oversampling with high speed and an enormous amount of calculations in the lowpass filtering. Therefore, it is generally appropriate to use linear interpolation based on four pixels around the interpolated pixel or cubic convolution interpolation based on sixteen pixels around the interpolated pixel. The linear interpolation corresponds to linear Lagrangian polynomial interpolation, and quadratic or higher degree Lagrangian polynomial interpolation or other interpolation processing can also be used.

The density of a pixel herein indicates the luminance of each pixel in the case of monochrome images, and indicates the gradation (gray scale) of each color of each pixel, or a luminance component (Y signal component) obtained by synthesizing the respective colors (converting a color image into a monochrome image), in the case of color images.

A correlation can be obtained, for example, based on a value obtained by accumulating the difference in density between each pixel of the displaced image and the corresponding pixel of the reference image. In this case, the displaced image having the minimum accumulated value has the highest correlation.

However, if a difference in density of each pixel of each displaced image is calculated with respect to all pixels of the reference image, the amount of calculation becomes enormous. Thus, it is possible in the present invention to adopt a representative point matching method for taking pixels at a plurality of predetermined positions as representative points and then performing calculations with respect to the pixels of the representative points alone.
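The representative point matching method can be sketched as follows, restricted to integer displacements for simplicity (the function name and the test pattern are hypothetical):

```python
import numpy as np

def correlation_at(ref, img, points, p, q):
    """Accumulate |img(xk + p, yk + q) - ref(xk, yk)| over the
    representative points only, rather than over every pixel."""
    return sum(abs(float(img[y + q, x + p]) - float(ref[y, x]))
               for y, x in points)

ref = (np.arange(100).reshape(10, 10) * 7 % 11).astype(float)
img = np.roll(ref, (1, 2), axis=(0, 1))        # ref displaced by (p, q) = (2, 1)
points = [(4, 4), (5, 6), (6, 3)]              # a few representative points
print(correlation_at(ref, img, points, 2, 1))  # → 0.0 (perfect match)
```

The accumulated value is zero exactly at the true displacement, which is why the displacement minimizing it is taken as the motion vector.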

Alternatively, the correlation between the reference image and each image can be calculated based on a cross-correlation function. In this case, however, since the product of a density of each pixel of the reference image and a density of each pixel of each of the displaced images is accumulated, the highest correlation is obtained with the highest accumulated value. In the case where calculation is performed based on the cross-correlation function, the amount of calculation can be also reduced by limiting the range of displacement of the displaced images and by obtaining the product of densities only for representative points.

The motion vector detecting means detects the displacement of the displaced image having the highest correlation as a motion vector. The motion vector indicates a direction of displacement with a direction of a vector, and a distance of displacement with an absolute value of the vector. Specifically, assuming that a displacement component in a horizontal direction is x and a displacement component in a vertical direction is y, the displacement is represented as a vector (x, y) in a vector quantity (i.e., in a list or an array). The displacement itself is also a vector.

The image synthesis means displaces, with respect to each of the (N-1) images except the reference image, each pixel of the particular image by the degree obtained by synthesizing a motion vector of the particular image and a displacement of the imaging position thereof. Also in this case, each pixel is displaced by interpolation processing similar to the above processing. A motion vector detected by the motion vector detecting means indicates a shift of the position of the image due to displacement of the imaging position even in the case where a subject or an imaging apparatus does not move. Therefore, in this case, since a motion vector of each image and a displacement of the imaging position are counterbalanced with each other by synthesizing vectors, each pixel of the image is not required to be displaced from its original position.

On the other hand, in the case where a subject or an imaging apparatus moves, the part corresponding to displacement of the imaging position is counterbalanced with each other in a motion vector of each image. Thus, it is possible to obtain an actual motion vector in the image by synthesizing the vectors.

In the case where the imaging position is imprecisely displaced by the image position displacing means, the displacement is treated in the same way as that for treating displacement due to movement of a subject or an imaging apparatus, since a shift of position of the image caused by the image position displacing means is not perfectly offset by displacement of the imaging position which is previously given as a design value. Specifically, an actual motion vector of the image contains a component of shift from such a design value of the imaging position. When each pixel is displaced by the actual motion vector for each of the (N-1) images, it is possible to obtain the image with the position displaced as indicated with a design value, excluding the movement of a subject or an imaging apparatus.
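Numerically, the counterbalancing described above is a simple vector synthesis (a sketch; `actual_motion` is a hypothetical helper, not a name from the text):

```python
def actual_motion(detected, design):
    """Remove the deliberate displacement of the imaging position (the
    design value) from a detected motion vector, leaving only the actual
    movement of the subject or the apparatus."""
    return (detected[0] - design[0], detected[1] - design[1])

# An image deliberately shifted by (-0.5, 0) whose detected motion vector is
# exactly (-0.5, 0) implies no actual movement; any residual is real motion
# (or imprecision of the image position displacing means).
print(actual_motion((-0.5, 0.0), (-0.5, 0.0)))  # (0.0, 0.0)
print(actual_motion((-0.3, 0.1), (-0.5, 0.0)))  # (0.2, 0.1)
```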

In the case where the position of an image is precisely displaced by the image position displacing means and the subject and the imaging apparatus do not move, each pixel is displaced by a synthesis vector (0, 0), resulting in no interpolation processing being required. In other cases as well, when the displacement is represented by an integral multiple of a pixel interval, it is only necessary to shift the position of a real pixel. Therefore, actual interpolation processing is not needed.

The image synthesizing means displaces pixels for each of the (N-1) images as described above, then synthesizes the images by interpolating these displaced pixels between the adjacent pixels of the reference image to obtain a single image. Thus, since the synthesized image has the number of pixels obtained by multiplying the number of pixels of the reference image by N (=HV), the synthesized image has a horizontal spatial sampling interval of PH/H and a vertical spatial sampling interval of PV/V.
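For H = V = 2, the interleaving step can be sketched as follows (a minimal illustration that assumes the pixels of each image have already been displaced as needed; the function name is hypothetical):

```python
import numpy as np

def interleave(i00, i10, i01, i11):
    """Synthesize four half-pixel-shifted images into one: the result has
    twice the pixels in each direction, so the spatial sampling intervals
    become PH/2 and PV/2."""
    nv, nh = i00.shape
    out = np.empty((2 * nv, 2 * nh), dtype=i00.dtype)
    out[0::2, 0::2] = i00  # reference pixels
    out[0::2, 1::2] = i10  # shifted by PH/2 horizontally
    out[1::2, 0::2] = i01  # shifted by PV/2 vertically
    out[1::2, 1::2] = i11  # shifted diagonally
    return out
```

The doubled pixel count in each direction is what doubles the horizontal and vertical sampling frequencies of the synthesized image.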

As a result, according to the present invention, by imaging N pictures while shifting the positions of the images, an image having horizontal resolution improved H times and vertical resolution improved V times can be obtained. Furthermore, even in the case where a subject moves or an imaging apparatus moves due to movement of a hand holding the apparatus, an image with high resolution can be obtained by correcting a shift of the image due to such movement. In addition, in the case where the imaging position is imprecisely displaced by the image position displacing means, a shift of the image can be corrected in the same way as that for correcting the movement of a subject or the like.

In the case where a motion vector detecting means does not obtain a displaced image with a sufficiently high correlation due to, for example, movement of a subject beyond the range of displacement of the displaced image, the imaging apparatus according to the present invention can perform processing so that an image synthesis means does not use the image with an insufficient correlation for synthesis.

In the case of video images (motion pictures), according to a method for producing one synthesized image from N images, it is necessary to take N pictures in a time period of one field. However, by taking each of the N pictures in turn as a reference image so that the images are synthesized based on each reference image, N synthesized images with high resolution arranged in time sequence can be obtained. As a result, it becomes sufficient to take one of the N pictures for each field.

Instead of determining a reference image, images can be paired off. A relative motion vector between each pair of images is detected, and then the pair of images is sequentially synthesized into one image. In this case, however, it is necessary to arrange the pairs of images in a tree configuration with one image as a root. In addition, it is necessary to repeat the process of displacing each pixel for the images obtained by displacing pixels and synthesizing images at the previous stage. As a result, the amount of calculation increases. Alternatively, a pair of images can be synthesized after synthesizing the relative motion vectors and the relative displacements of the image positions traced back through the tree to the root image.

In the case where horizontal and vertical resolutions are enhanced by synthesizing a plurality of images as described above, it is necessary to include a high frequency component of 1/(2PH) or higher in a horizontal spatial frequency and a high frequency component of 1/(2PV) or higher in a vertical spatial frequency in the image formed on the light receiving face even when each image is formed on the light receiving face. However, when the image containing such a high frequency component is spatially sampled at a sampling frequency of 1/PH and 1/PV, aliasing appears. When a motion vector is detected based on this result, it becomes difficult to precisely detect a motion vector due to the effect of aliasing. However, since aliasing is caused due to an aliasing component having a center at a horizontal spatial frequency of 1/PH and a vertical spatial frequency of 1/PV, the level of aliasing generally increases in a higher frequency band. Therefore, when a motion vector is detected after a filtering process for removing a high frequency component for each of (N-1) images, adverse effect due to aliasing can be reduced. Herein, since a high frequency component removed by a filter corresponds to an image having sampling frequencies of 1/PH and 1/PV, such a high frequency component represents a frequency region having spatial frequencies around, but smaller than, 1/(2PH) and 1/(2PV).
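A crude illustration of such a pre-filter is a difference of separable box blurs: the small blur suppresses the high-frequency band where aliasing dominates, and subtracting the large blur removes the low-frequency band. The kernel sizes here are arbitrary choices for the sketch; a real apparatus would use a designed band pass filter:

```python
import numpy as np

def bandpass(img, small=1, large=7):
    """Band-pass an image by subtracting a large box blur (removes low
    frequencies) from a small box blur (removes high frequencies)."""
    def box(a, r):
        k = np.ones(2 * r + 1) / (2 * r + 1)
        a = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, a)
        return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, a)
    return box(img, small) - box(img, large)
```

On a constant image the output is zero away from the borders, confirming that the DC (shading-like) component is rejected.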

Hereinafter, examples of the present invention will be described with reference to drawings.

FIGS. 1 through 8 show an example of the present invention. FIG. 1 is a block diagram showing the configuration of an imaging apparatus. FIG. 2 is a plan view showing the relationship between the pixel arrangement of an imaging plate and images formed on a light receiving face. FIG. 3 is a plan view showing the positional relationship among pixels of four images. FIG. 4 is a block diagram showing the configuration of a motion vector detecting section. FIG. 5 illustrates graphs showing a waveform and frequency spectra of an image formed on a light receiving face. FIG. 6 illustrates graphs showing a waveform and frequency spectra of an image which is spatially sampled and then output from a digital filter. FIG. 7 is a graph illustrating a linear interpolation processing of pixels for detecting a motion vector. FIG. 8 is a graph illustrating an interpolation processing of pixels for synthesizing images.

In this example, an imaging apparatus for synthesizing four images which are imaged while shifting their positions respectively by a half-pixel (PH /2, PV /2: H=2, V=2, N=HV=4) will be described.

As shown in FIG. 1, the imaging apparatus has such a configuration that imaging light L from a subject S is incident on an imaging plate 4 via a lens 1, a birefringence plate 2 and a transparent refracting plate 3. The lens 1 is an optical system for focusing the imaging light L on a light receiving face of the imaging plate 4. The birefringence plate 2 is an optical system for blocking only high frequency components in a spatial frequency of the imaging light L. In this case, a horizontal spatial frequency of 1/PH and a vertical spatial frequency of 1/PV serve as cutoff frequencies, so that frequency components higher than the cutoff frequencies are removed.

A spatial frequency component tends to attenuate more in a higher spatial frequency range due to the modulation transfer function (MTF) of the lens 1 or the aperture effect of the imaging plate 4. Therefore, when frequency components higher than the cutoff frequencies are sufficiently attenuated so as not to affect imaging, the birefringence plate 2 can be omitted.

The transparent refracting plate 3 is a transparent plate-shaped optical system having a certain thickness and a certain refractive index, and is movably supported so as to be inclined with respect to horizontal and vertical directions in the light receiving face of the imaging plate 4.

The imaging plate 4 is a CCD imaging device or other solid-state imaging device. As shown in FIG. 2, on the light receiving face thereof, a number of pixels 4a are arranged at intervals of PH in a horizontal direction and at intervals of PV in a vertical direction. These pixels 4a thus spatially sample the image formed on the light receiving face at a sampling frequency of 1/PH in a horizontal direction and at 1/PV in a vertical direction. Cutoff frequencies of the birefringence plate 2 are identical with these sampling frequencies. A light sensitive region of each pixel 4a is formed to have a width dH (0<dH ≦PH) in a horizontal direction and a length dV (0<dV ≦PV) in a vertical direction, and therefore, because of this width and length, attenuation in a high frequency band due to the aperture effect occurs during spatial sampling.
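The aperture-effect attenuation can be illustrated with the standard sinc-shaped MTF of a rectangular light sensitive region. This is a general sampling-optics fact used here for illustration, not a formula given in the text:

```python
import numpy as np

def aperture_mtf(f, d):
    """|sinc(f * d)|: response at spatial frequency f of a rectangular light
    sensitive region of width d (np.sinc is sin(pi*x)/(pi*x))."""
    return abs(np.sinc(f * d))
```

At f = 1/d the response falls to zero, so the wider the light sensitive region (dH approaching PH), the stronger the roll-off near the sampling frequency.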

As shown in FIG. 1, the transparent refracting plate 3 can be moved by an actuator 5. Specifically, for example, part of the transparent refracting plate 3 and the actuator 5 constitute a voice coil motor or a solenoid. The actuator 5 can vary an angle of inclination of the transparent refracting plate 3 in accordance with a current flowing through the actuator 5. The actuator 5 can also consist of a piezoelectric element, so that movement induced by distortion, which occurs upon application of a voltage, or movement of other mechanical structures is directly transmitted to the transparent refracting plate 3 so as to change its angle of inclination.

When an angle of inclination changes in this way, the incident image light L is refracted in accordance with the angle of inclination so as to shift the position of an image formed on the light receiving face of the imaging plate 4. The actuator 5 changes an angle of inclination of the transparent refracting plate 3, so that an image K10 horizontally shifted by a half-pixel (PH /2), an image K01 vertically shifted by a half-pixel (PV /2), and an image K11 both horizontally and vertically shifted by a half-pixel (PH /2, PV /2) can be formed with respect to a reference image K00 formed at a reference position.

As shown in FIG. 1, the actuator 5 and the imaging plate 4 can be controlled by a control signal from a control section 6. Specifically, the actuator 5 and the imaging plate 4 are controlled so as to perform the following process. First, when the reference image K00 is formed at a reference position of the imaging plate 4, the imaging plate 4 outputs a reference image I00. Then, when the images K10, K01 and K11 are successively formed, the imaging plate 4 outputs images I10, I01 and I11 in this order. It is assumed that the subject S keeps still, the imaging apparatus does not move due to slight movement of a hand holding the apparatus or the like, and the images K00 through K11 are respectively formed at accurate positions. When the four images I00, I10, I01 and I11 are overlapped regarding the images K00, K10, K01 and K11 as references, pixels of the respective images I00, I10, I01 and I11 are placed in turn at intervals of a half-pixel as shown in FIG. 3. When the images I00, I10, I01 and I11 are synthesized into a single image, horizontal and vertical sampling frequencies are respectively doubled. In FIG. 3, a circle, a square, a rhombus, and a triangle respectively indicate positions of pixels of the reference image I00, the image I10, the image I01 and the image I11.

Besides the use of the transparent refracting plate 3 and the actuator 5 described in this example, an angle of inclination of a reflection mirror can be varied, or an optical axis can be shifted by combining a birefringence plate and a polarizer as an image position displacing means. Alternatively, the position of the image can be shifted by translating the imaging plate 4 itself.

The images output from the imaging plate 4 are transmitted to an image memory 8 via an A/D conversion section 7 as shown in FIG. 1. The A/D conversion section 7 quantizes luminance data of the images spatially sampled by the imaging plate 4 for each pixel so as to convert the data into digital signals. The image memory 8 stores the four images I00, I10, I01 and I11 which are converted into the digital signals in the A/D conversion section 7. Then, the four images I00, I10, I01 and I11 stored in the image memory 8 are transmitted to a motion vector detecting section 9 and a synthesis section 10.

The motion vector detecting section 9 detects motion vectors of the images I10, I01 and I11 with respect to the reference image I00. The synthesis section 10 synthesizes the four images I00, I10, I01 and I11 based on the motion vectors detected in the motion vector detecting section 9. The image synthesized in the synthesis section 10 is externally output from an output terminal 11. The output image is recorded in a digital VTR or is subjected to a D/A conversion to be displayed on a display. The control section 6 outputs control signals (not shown) to the A/D conversion section 7, the image memory 8 and the motion vector detecting section 9, so that these sections operate in a synchronous manner.

The operation of the motion vector detecting section 9 will be described.

First, a plurality of displaced images for each of the three images I10, I01 and I11, except the reference image I00, are produced. The displaced images are obtained by appropriately horizontally and vertically displacing pixels of the images I10, I01 and I11. Then, the motion vector detecting section 9 detects a displaced image having the highest correlation with the reference image I00 from these displaced images.

Herein, it is assumed that each of the images I00, I10, I01 and I11 has NH pixels in the horizontal direction and NV pixels in the vertical direction. Moreover, each of i and j is 0 or 1, and the luminance of a pixel at a coordinate (x, y) in an image Iij is represented by Iij (x, y). In this case, x is an integer in the range of 1≦x≦NH, and y is an integer in the range of 1≦y≦NV. Therefore, a pixel corresponding to Iij (x, y) represents a real pixel on a sample point.

A correlation Rij (p, q) between the displaced image Iij obtained by displacing each pixel of the image Iij (excluding the image I00) by a vector (p, q) and the reference image I00 can be calculated based on, for example, Expression 1. ##EQU1##

If the difference in luminance were calculated for all pixels, the amount of calculation would be enormous. Therefore, the motion vector detecting section 9 selects 100 real pixels Iij (xk, yk) (1≦k≦100) as representative points from each image Iij. Then, a correlation Rij (p, q) is calculated by the representative point matching method expressed by Expression 2. ##EQU2##

In this way, for each image Iij, a correlation Rij (p, q) with respect to the reference image I00 is calculated for each of all combinations of p and q in a predetermined range so as to detect the combination of p and q giving the minimum correlation. The vector (p, q) having the minimum correlation Rij (p, q) is denoted as a motion vector Vij (VXij, VYij). The image Iij has the highest correlation with the reference image I00 when each pixel is displaced by the motion vector Vij (VXij, VYij). Thus, the motion vector detecting section 9 detects the motion vector Vij (VXij, VYij) for each of the three images I10, I01 and I11, except the reference image I00.
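The search over all combinations of p and q can be sketched as follows. For simplicity this sketch scans a whole-pixel pitch rather than a sub-pixel pitch with interpolation, and all names are hypothetical:

```python
import numpy as np

def detect_motion_vector(ref, img, points, search=2):
    """Return the (p, q) minimizing the representative-point correlation of
    Expression 2, scanned over a +/- `search` pixel range."""
    best_r, best_pq = None, (0, 0)
    for q in range(-search, search + 1):
        for p in range(-search, search + 1):
            r = sum(abs(float(img[y + q, x + p]) - float(ref[y, x]))
                    for y, x in points)
            if best_r is None or r < best_r:
                best_r, best_pq = r, (p, q)
    return best_pq
```

An image that is an exact displaced copy of the reference yields a correlation of zero at the true displacement, so the search recovers that displacement as the motion vector.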

For example, under the conditions in which the subject S keeps still, the imaging apparatus does not move due to slight movement of a hand or the like, and the position of the image is precisely displaced; motion vectors V10, V01 and V11 of the images I10, I01 and I11 are V10 (-0.5, 0), V01 (0, -0.5) and V11 (-0.5, -0.5), respectively. In other words, the motion vector represents only a displacement of the position of the image.

The number of representative points in the above representative point matching method is not limited to 100. With an increased number of representative points, the result approaches the result obtainable with the calculation expressed by Expression 1, and is hardly affected by the S/N ratio (signal to noise ratio) of the image. As a result, detection precision can be enhanced. However, the amount of calculation increases in proportion to the number of representative points. Thus, it is necessary to select an appropriate number of representative points, taking the balance between detection precision and the amount of calculation into consideration. Alternatively, a correlation can be calculated based on a cross-correlation function.

A specific example of the configuration of the motion vector detecting section 9 will be described with reference to FIG. 4.

Each of the images I00, I10, I01 and I11 from the image memory 8 is transmitted to a selector 9b via a digital filter 9a. The digital filter 9a is a band pass filter provided for improving the precision of detection of the correlation.

Specifically, a low frequency band of each of the images I00, I10, I01 and I11 is removed, thereby eliminating the effect of flicker or shading. By removing a high frequency band, the effect of noise as well as the effect of aliasing can be reduced.

More specifically, aliasing occurs in the images I00, I10, I01 and I11 before they are synthesized. In general, aliasing has a larger amplitude in a higher frequency band. By removing the high frequency band with the digital filter 9a, the effect of aliasing is reduced. In addition, degradation of the detection precision of a correlation due to moiré fringes can be prevented.

FIG. 5 shows a waveform and frequency spectra of the images I00, I10, I01 and I11 to be input to the digital filter 9a. In FIG. 5, the waveform and the frequency spectra only in a horizontal direction are shown.

These images I00, I10, I01 and I11 are obtained by spatially sampling, at a sampling frequency of 1/PH, the images K00, K10, K01 and K11 which have passed through the birefringence plate 2, in which signal components in a region having a spatial frequency in the range of 1/(2PH) to 1/PH are not removed. Since the images K00, K10, K01 and K11 are spatially sampled at a sampling frequency of 1/PH, an aliasing component centered at the sampling frequency of 1/PH appears in the region having a spatial frequency of 1/(2PH) and lower. As a result, aliasing AN occurs. As shown in FIG. 6, however, the luminance of the images I00, I10, I01 and I11 output from the digital filter 9a approaches 0 in the low frequency band because the low frequency band is removed. Then, by removing the high frequency band, the amplitude of aliasing is reduced. Therefore, when a correlation is calculated based on these images I00, I10, I01 and I11, detection can be performed with high precision.

As shown in FIG. 4, each of the images I00, I10, I01 and I11 passing through the digital filter 9a is selectively distributed to a representative point memory 9c or an interpolation processing section 9d. Specifically, the reference image I00 is transmitted to the representative point memory 9c via the selector 9b. The other three images I10, I01 and I11 are sent to the interpolation processing section 9d via the selector 9b. The representative point memory 9c selects 100 pixels I00 (xk, yk) as representative points from the reference image I00. The interpolation processing section 9d displaces 100 pixels Iij (xk, yk) by a vector (p, q), respectively, for each of the images I10, I01 and I11 to convert the pixels Iij (xk, yk) into pixels Iij (xk +p, yk +q). All combinations of vector (p, q) are sequentially input to the interpolation processing section 9d. Each time a new combination of a vector (p, q) is input, the image memory 8 repeatedly transmits the image I10 to the motion vector detecting section 9. When all vector combinations (p, q) are transmitted to the motion vector detecting section 9, the subsequent image I01 and then the last image I11 are repeatedly and sequentially transmitted to the motion vector detecting section 9 in a similar manner.

The experiments conducted by the inventors reveal that motion vector detection in this example requires a precision of 0.05 pixel. Therefore, the above vector (p, q) should have a pitch of 0.05 pixel. As a result, a pixel Iij (xk +p, yk +q) is not present as a real pixel because xk +p and yk +q include fractional parts. Therefore, the interpolation processing section 9d predicts the pixel Iij (xk +p, yk +q) by interpolation processing so as to displace the pixels.

This interpolation processing is realized in principle as follows. A pixel having a luminance of 0 is interpolated between real pixels so as to include the pixel Iij (xk +p, yk +q) in the sample points. Then, the pixels are subjected to oversampling. Thereafter, after being processed with a lowpass filter having a horizontal spatial frequency of 1/(2PH) and a vertical spatial frequency of 1/(2PV) as cutoff frequencies, the pixels are subjected to a digital-digital conversion performing a thinning-out process so as to leave the pixel Iij (xk +p, yk +q) alone. However, since such a process requires oversampling at high speed and an enormous amount of calculation in the lowpass filter, linear interpolation or cubic convolution interpolation is used in the interpolation processing section 9d of this example.

The linear interpolation is a method for interpolating a pixel based on the four pixels of Iij surrounding the pixel to be interpolated. As shown in FIG. 7, it is assumed that a horizontal distance and a vertical distance between adjacent pixels are respectively 1. In the case where a pixel Iij (x+RX, y+RY) displaced from a real pixel Iij (x, y) by a vector (RX, RY) is to be interpolated, the luminance of the pixel Iij (x+RX, y+RY) can be predicted by calculating Expression 3 based on the four real pixels Iij (x, y) through Iij (x+1, y+1) around the pixel Iij (x+RX, y+RY). ##EQU3##
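Expression 3 can be written out directly as follows (a sketch using 0-based indices and unit pixel spacing):

```python
def bilinear(img, x, y, rx, ry):
    """Predict the luminance at (x + rx, y + ry), with 0 <= rx, ry < 1,
    from the four surrounding real pixels, per Expression 3."""
    return ((1 - rx) * (1 - ry) * img[y][x]
            + rx * (1 - ry) * img[y][x + 1]
            + (1 - rx) * ry * img[y + 1][x]
            + rx * ry * img[y + 1][x + 1])

# Midway between four pixels the prediction is their average:
print(bilinear([[0, 1], [2, 3]], 0, 0, 0.5, 0.5))  # 1.5
```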

The linear interpolation corresponds to first-degree Lagrangian polynomial interpolation. Interpolation processing can therefore also be performed using Lagrangian polynomial interpolation of degree n, based on n+1 horizontally and vertically adjacent pixels, that is, quadratic or higher degree Lagrangian polynomial interpolation.

The cubic convolution is a method for performing interpolation using a cubic expression based on 16 pixels around a pixel to be interpolated. This method can be regarded as a simplified method of digital-digital conversion using oversampling and thinning out processing described above.

In addition, other interpolation methods can be used in the interpolation processing section 9d. The other interpolation methods are described in detail, for example, in "The Transactions of the Institute of Electronics and Communication Engineers of Japan", pp. 1617-1623, Vol. J69-D, No. 11, November 1986 and "IEEE Transactions on Medical Imaging", pp.31-39, Vol. MI-2, No. 1, March 1983.

After interpolation of the 100 pixels Iij (xk +p, yk +q) serving as representative points is performed in the interpolation processing section 9d, these pixels Iij (xk +p, yk +q) are sequentially transmitted to a subtracter 9e. The 100 pixels I00 (xk, yk) serving as representative points of the reference image I00 are also sequentially transmitted to the subtracter 9e from the representative point memory 9c. A difference in luminance between each pixel Iij (xk +p, yk +q) and each pixel I00 (xk, yk) is calculated. Then, the obtained absolute value is transmitted to an adder 9g via an absolute value circuit 9f. The adder 9g adds the absolute value representing the difference in luminance to an accumulated value read out from a correlation memory 9h. The result of calculation is held as an accumulated value again. Each time a new vector (p, q) is input to the interpolation processing section 9d, the accumulated value in the correlation memory 9h is cleared to 0. Therefore, the correlation memory 9h accumulates the calculation expressed by Expression 2 above, i.e., the differences in luminance between the 100 pixels Iij (xk +p, yk +q) and the 100 pixels I00 (xk, yk) for each vector (p, q), so as to obtain a correlation Rij (p, q) at the vector (p, q). Each value of the correlations Rij (p, q) after completion of accumulation is transmitted to a minimum value detecting section 9i. The minimum value detecting section 9i holds only the minimum correlation and its vector (p, q) among the correlations Rij (p, q) transmitted from the correlation memory 9h. Each time a different image from the images I10, I01 and I11 is transmitted to the interpolation processing section 9d, the minimum value detecting section 9i externally outputs the correlation and the vector which have been held therein by that time.

As a result, the motion vector detecting section 9 obtains the vector (p, q) with which the correlation Rij (p, q) becomes minimum for each of the three images I10, I01 and I11, and outputs these vectors (p, q) as motion vectors V10, V01 and V11. In the case where the minimum correlation value Rij (p, q) is greater than a predetermined value, the motion vector detecting section 9 can output an error value indicating that a sufficient correlation is not obtained, instead of the motion vector V10, V01 or V11.

The operation of the synthesis section 10 will be described.

The synthesis section 10 synthesizes the four images I00, I10, I01 and I11 transmitted from the image memory 8 based on the motion vectors V01, V10 and V11 transmitted from the motion vector detecting section 9, thereby obtaining an image I. Specifically, each pixel of the four images I00, I10, I01 and I11 is subjected to coordinate transformation based on Expressions 4 through 7, thereby obtaining each pixel of the image I.

Expression 4!

I(x,y)=I00(x,y)

Expression 5!

I(x+0.5,y)=I10(x+Vx10+0.5,y+Vy10)

Expression 6!

I(x,y+0.5)=I01(x+Vx01, y+Vy01+0.5)

Expression 7!

I(x+0.5,y+0.5)=I11(x+Vx11+0.5,y+Vy11+0.5)

As is apparent from Expression 4, each pixel I00 (x, y) of the image I00 is allocated to each pixel I (x, y) of the image I without performing coordinate transformation. However, each pixel I10 (x, y) of the image I10 is transformed into a pixel I10 (x+VX10 +0.5, y+VY10) based on the motion vector V10 (VX10, VY10), as expressed by Expression 5. The resultant pixel is allocated to each pixel I (x+0.5, y) of the image I. In this case, the pixel I10 (x+VX10 +0.5, y+VY10) is identical with a real pixel on the image I10 only when the motion vector V10 is (-0.5, 0). In other words, since the pixel I10 (x+VX10 +0.5, y+VY10) is not identical with a real pixel on the image I10 when the subject S moves or the imaging apparatus moves due to slight movement of a hand holding the apparatus, it is necessary to displace the pixels by means of interpolation processing similar to that performed in the interpolation processing section 9d of the motion vector detecting section 9. The same can be applied to each pixel of the images I01 and I11.
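Expressions 4 through 7, together with the interpolation step just described, can be sketched as follows. This is a simplified illustration: motion vectors are passed as (Vx, Vy) tuples, and the `bilinear` helper merely clamps at the upper border rather than handling arbitrary out-of-range coordinates:

```python
import numpy as np

def bilinear(img, y, x):
    """Sample img at a fractional (y, x), clamping at the upper border."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    ry, rx = y - y0, x - x0
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    return ((1 - rx) * (1 - ry) * img[y0, x0] + rx * (1 - ry) * img[y0, x1]
            + (1 - rx) * ry * img[y1, x0] + rx * ry * img[y1, x1])

def synthesize(i00, i10, i01, i11, v10, v01, v11):
    """Allocate I00 directly (Expression 4) and fetch the other images at
    motion-compensated sub-pixel positions (Expressions 5 through 7)."""
    nv, nh = i00.shape
    out = np.empty((2 * nv, 2 * nh))
    for y in range(nv):
        for x in range(nh):
            out[2 * y, 2 * x] = i00[y, x]                                   # Expr. 4
            out[2 * y, 2 * x + 1] = bilinear(i10, y + v10[1],
                                             x + v10[0] + 0.5)              # Expr. 5
            out[2 * y + 1, 2 * x] = bilinear(i01, y + v01[1] + 0.5,
                                             x + v01[0])                    # Expr. 6
            out[2 * y + 1, 2 * x + 1] = bilinear(i11, y + v11[1] + 0.5,
                                                 x + v11[0] + 0.5)          # Expr. 7
    return out
```

With the ideal motion vectors V10 = (-0.5, 0), V01 = (0, -0.5) and V11 = (-0.5, -0.5), every fetch lands exactly on a real pixel and the result reduces to a plain interleaving of the four images.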

The process for transforming each real pixel I10 (x,y) in the image I10 into each pixel I (x+0.5, y) of the image I will be described in detail.

FIG. 8 shows the image I. Since the reference image I00 is identical with the image I, each pixel I00(x, y) of the reference image I00 corresponds to the pixel I(x, y) in FIG. 8. In the case where neither the subject S moves nor the imaging apparatus moves due to movement of a hand holding it, each pixel I10(x, y) of the image I10 (not shown in FIG. 8) corresponds to the pixel I(x+0.5, y) in FIG. 8; in other words, the motion vector V10 (Vx10, Vy10) is (-0.5, 0) in this case. Therefore, by substituting these values of Vx10 and Vy10 into Expression 5, each pixel I10(x, y) is allocated to the pixel I(x+0.5, y) of the image I. Since each pixel I10(x, y) is then a real pixel, no interpolation processing is needed.

FIG. 8 exemplifies the case where the image has moved by a vector Vm (Rx, Ry), due to movement of a hand or the like, while the images I00, I01 and I10 are taken. Thus, in FIG. 8, the position of the pixel I10(x, y) of the image I10 is represented by the coordinate I10(x+0.5-Rx, y-Ry), which is obtained by displacing the pixel I10(x, y) horizontally by (0.5-Rx) and vertically by (-Ry).

When the motion vector of the pixel I10(x, y) is detected, the pixel obtained by displacing the pixel I10(x, y) horizontally by (-0.5+Rx) and vertically by (Ry) has the highest correlation with the pixel I00(x, y) of the image I00. Therefore, the motion vector V10 (Vx10, Vy10) is (-0.5+Rx, Ry). Since the pixel I(x+0.5, y) to be obtained is the pixel obtained by displacing the pixel I00(x, y) by a vector (0.5, 0), synthesizing the motion vector V10 (Vx10, Vy10) = (-0.5+Rx, Ry) with the vector (0.5, 0) yields (Vx10+0.5, Vy10) = (Rx, Ry). As a result, the vector Vm (Vx10+0.5, Vy10), which expresses the image movement Vm (Rx, Ry) in terms of the motion vector V10 (Vx10, Vy10), is obtained. In FIG. 8, the pixel I(x+0.5, y), displaced by the vector Vm from the pixel I10(x, y), can be represented as the pixel I10(x+Rx, y+Ry) with respect to the image I10, and can therefore be rewritten as I10(x+Vx10+0.5, y+Vy10). Thus, when the coordinate transformation expressed by Expression 5 is performed for each pixel I10(x, y) of the image I10, each pixel I(x+0.5, y) of the image I is obtained.

Since Rx equals Vx10+0.5 and Ry equals Vy10 at the pixel I10(x+Rx, y+Ry), the luminance of each pixel I10(x+Vx10+0.5, y+Vy10) on the image I10 can be obtained by performing the interpolation processing expressed by Expression 3, based on the four real pixels I10(x, y) to I10(x+1, y+1) surrounding the pixel I10(x+Vx10+0.5, y+Vy10).
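The four-pixel interpolation can be illustrated by bilinear weighting; a sketch under the assumption that Expression 3 (given earlier in the specification) weights the four surrounding real pixels by the fractional offsets fx and fy:

```python
def interpolate(i00, i10, i01, i11, fx, fy):
    """Bilinear weighting of the four real pixels around a fractional
    position: i00 top-left, i10 top-right, i01 bottom-left,
    i11 bottom-right; fx, fy are the fractional coordinate parts."""
    return ((1 - fx) * (1 - fy) * i00 + fx * (1 - fy) * i10
            + (1 - fx) * fy * i01 + fx * fy * i11)
```

For example, halfway between four pixels (fx = fy = 0.5) the result is simply their average, and at fx = fy = 0 the top-left real pixel is returned unchanged.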

Even in the case where neither the subject S moves nor the imaging apparatus moves due to movement of a hand, the position of the image may in some cases be shifted by an imprecise angle of inclination of the transparent refracting plate 3. As a result, each pixel I10(x, y) of the image I10 does not precisely correspond to the pixel I(x+0.5, y) in FIG. 8, and the motion vector V10 (Vx10, Vy10) is not identical with the vector (-0.5, 0). In this case, however, since the detected motion vector contains the positional error of the image, the error is automatically corrected by the above interpolation processing.

Thus, the image I synthesized by the synthesis section 10 has, based on the three images I10, I01 and I11, twice the number of pixels of the reference image I00 both vertically and horizontally. Thus, the horizontal and vertical resolutions are doubled. In the case where the above-mentioned motion vector detecting section 9 outputs an error value indicating that a sufficient correlation is not obtained, the synthesis section 10 does not use the image I10, I01 or I11 causing the error in the synthesis but, for example, fills in the corresponding pixels by interpolation processing based on the pixels of the reference image I00. In this case, however, no improvement in resolution can be expected from the images producing the error value.

As described above, according to the imaging apparatus of the present example, the four images I00, I10, I01 and I11, which are obtained by vertically and horizontally displacing the imaging position on the imaging plate 4 by a half-pixel, are synthesized, thereby obtaining a synthesized image with horizontally and vertically doubled resolutions. Moreover, the motion vectors V10, V01 and V11 of the respective images I10, I01 and I11 are detected, and the three images are synthesized in accordance with these motion vectors. Therefore, even if the subject S moves or the imaging apparatus moves due to movement of a hand holding it, the resulting shift of the image position is corrected. In the case where the displacement of the image position is imprecise, it is likewise corrected based on the motion vectors V10, V01 and V11.

Furthermore, in detection of the motion vectors V10, V01 and V11 in the motion vector detecting section 9, most of the aliasing AN appearing in each of the images I00, I10, I01 and I11 can be removed by the digital filter 9a. As a result, degradation of the detection precision due to Moire fringes is prevented.
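The attenuation of aliasing before correlation can be illustrated with a simple separable low-pass filter. The actual coefficients of the digital filter 9a are specified elsewhere in the patent, so the [1, 2, 1]/4 binomial kernel here is only an assumed example:

```python
import numpy as np

def lowpass(img):
    """Separable [1, 2, 1]/4 binomial filter, an assumed stand-in for
    the digital filter 9a: it attenuates the high spatial frequencies
    where aliasing appears, before the correlation is computed."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    pad = np.pad(img.astype(float), 1, mode="edge")
    # Horizontal pass, then vertical pass
    tmp = k[0] * pad[:, :-2] + k[1] * pad[:, 1:-1] + k[2] * pad[:, 2:]
    return k[0] * tmp[:-2, :] + k[1] * tmp[1:-1, :] + k[2] * tmp[2:, :]
```

This kernel passes a constant (DC) image unchanged while cancelling the alternating, Nyquist-frequency pattern that aliasing typically produces, which is why the subsequent correlation is less disturbed by moiré fringes.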

In the present example, the case where the imaging position is displaced in two steps both horizontally and vertically (H=2, V=2) is described. However, the imaging position can be divided into a larger number of parts. In the case where V=1, only the horizontal resolution is improved; in the case where H=1, only the vertical resolution is improved.

Although the present example is described regarding the case where a monochrome image is taken, the present invention is similarly applicable to a color imaging apparatus for imaging a color image. As the color imaging apparatus, a single-plate type color imaging apparatus, in which a color filter array is provided in front of a single imaging plate, or a three-plate type color imaging apparatus, in which the imaging light is separated by a color separating prism into the three primary color beams R, G and B incident on three respective imaging plates, can be used. In the above example, the value of each pixel is regarded as a scalar luminance; when the value of each pixel is regarded as a vector quantity consisting of R, G and B components, it can be processed similarly. Furthermore, when a motion vector is detected in the motion vector detecting section 9, the circuit size can be reduced and the operation simplified by first converting the color image into a monochrome image (Y signal) and detecting the motion vector from it.
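The conversion of a color image to a Y signal mentioned above can be sketched as follows, assuming the conventional ITU-R BT.601 luma weights; the patent does not fix a particular Y definition, so these coefficients are one common choice:

```python
import numpy as np

def to_luminance(rgb):
    """Convert an RGB image with shape (..., 3) to a monochrome Y image
    using the ITU-R BT.601 luma weights (an assumed, conventional choice)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights
```

Detecting motion vectors on this single-channel image instead of three color channels reduces both the amount of data and the circuit required.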

As described above, according to the present invention, an image with high resolution is obtained by synthesizing a plurality of images taken in time division at different positions on an imaging plate. As a result, an inexpensive imaging plate can be used. In addition, in the case where the subject moves, or the imaging apparatus moves due to slight movement of a hand holding it, while the plurality of images are taken, the error due to the shift of the image positions can be corrected by detecting motion vectors, so that an appropriate synthesized image is obtained. Moreover, in the case where the position of each image is imprecisely displaced, such an imprecise displacement is similarly corrected. Therefore, it is not necessary to provide a high-accuracy mechanism to displace the positions of the images.

Furthermore, by attenuating, with a filter, the aliasing which occurs in the unsynthesized images prior to the detection of the motion vectors, the correlation can be determined precisely.

Various other modifications will be apparent to and can be readily made by those skilled in the art without departing from the scope and spirit of this invention. Accordingly, it is not intended that the scope of the claims appended hereto be limited to the description as set forth herein, but rather that the claims be broadly construed.

Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US4633317 * | Jun 26, 1984 | Dec 30, 1986 | Bodenseewerk Geratetechnic GmbH | Electro-optical detector system
US4641038 * | Jun 22, 1983 | Feb 3, 1987 | Ferranti, Plc | Imaging device
US5012270 * | Mar 6, 1989 | Apr 30, 1991 | Canon Kabushiki Kaisha | Image shake detecting device
US5363136 * | Oct 7, 1992 | Nov 8, 1994 | Eastman Kodak Company | Cam actuated optical offset image sampling system
US5371539 * | Apr 7, 1994 | Dec 6, 1994 | Sanyo Electric Co., Ltd. | Video camera with electronic picture stabilizer
US5561460 * | Jun 2, 1994 | Oct 1, 1996 | Hamamatsu Photonics K.K. | Solid-state image pick up device having a rotating plate for shifting position of the image on a sensor array
JPH04172778 * | Title not available
JPS5640546 * | Title not available
JPS61251380 * | Title not available
Non-Patent Citations
Reference
1. J.A. Parker et al., IEEE Transactions on Medical Imaging, vol. MI-2, No. 1, pp. 31-39, 1983.
2. K. Enami et al., The Transactions of the Institute of Electronics and Communications Engineers of Japan, vol. J69-D, No. 11, pp. 1617-1623, 1986.
Classifications
U.S. Classification348/219.1, 348/155, 348/E05.091, 348/369, 348/264, 348/E03.031, 348/340, 348/E05.066
International ClassificationH04N5/349, H04N5/378, H04N5/351, H04N5/335, H04N5/357, H04N5/372, H04N5/14, G06T3/40, H04N5/232, H04N5/225
Cooperative ClassificationH04N3/1587, G06T3/4007, H04N5/145, H04N5/335
European ClassificationH04N5/335, H04N3/15H, G06T3/40B
Legal Events
Date | Code | Event | Description
Oct 21, 2009 | FPAY | Fee payment | Year of fee payment: 12
Oct 28, 2005 | FPAY | Fee payment | Year of fee payment: 8
Sep 27, 2001 | FPAY | Fee payment | Year of fee payment: 4
Apr 26, 1996 | AS | Assignment | Owner name: SHARP KABUSHIKI KAISHA, JAPAN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YAMADA, EIJI;IWAKI, TETSUO;REEL/FRAME:007954/0835; Effective date: 19960325