US20080231729A1 - Light source estimation device, light source estimation system, light source estimation method, device for super-resolution, and method for super-resolution - Google Patents

Light source estimation device, light source estimation system, light source estimation method, device for super-resolution, and method for super-resolution Download PDF

Info

Publication number
US20080231729A1
US20080231729A1 US12/080,228 US8022808A US2008231729A1 US 20080231729 A1 US20080231729 A1 US 20080231729A1 US 8022808 A US8022808 A US 8022808A US 2008231729 A1 US2008231729 A1 US 2008231729A1
Authority
US
United States
Prior art keywords
light source
imaging device
image
information
section
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/080,228
Inventor
Satoshi Sato
Katsuhiro Kanamori
Hideto Motomura
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sovereign Peak Ventures LLC
Original Assignee
Matsushita Electric Industrial Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co Ltd filed Critical Matsushita Electric Industrial Co Ltd
Assigned to MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD. reassignment MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: KANAMORI, KATSUHIRO, MOTOMURA, HIDETO, SATO, SATOSHI
Publication of US20080231729A1 publication Critical patent/US20080231729A1/en
Assigned to PANASONIC CORPORATION reassignment PANASONIC CORPORATION CHANGE OF NAME (SEE DOCUMENT FOR DETAILS). Assignors: MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD.
Priority to US12/630,884 priority Critical patent/US7893971B2/en
Assigned to SOVEREIGN PEAK VENTURES, LLC reassignment SOVEREIGN PEAK VENTURES, LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PANASONIC CORPORATION
Abandoned legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution

Definitions

  • the present invention relates to techniques that are important in capturing images, processing images and synthesizing images in an ordinary environment, i.e., a technique for estimating light source information such as the position, direction, luminance, spectrum and color of the light source at the time of image capturing, and a technique for super-resolution using the light source information.
  • image processing is increasing as the camera-equipped mobile telephones and digital cameras become widespread.
  • image processing including, for example, a process of super-resolution also known as “digital zooming”, an image recognition process of recognizing and focusing on a human face, and augmented reality in which computer-generated graphics images, being virtual objects, are laid over a real image.
  • image processing are based on the “appearance” of an object recorded on imaging elements by capturing an image.
  • the appearance of the object is obtained by the imaging elements receiving light from the light source after being reflected at the object surface.
  • light source information is very important in image processing. In other words, it is very effective to obtain light source information and use the information when capturing images and processing images.
  • Patent Document 1 virtual objects are laid over the real world, wherein the light source environment is measured so as to add the reflection of the illuminated light on the virtual objects and the shadows produced by the virtual objects.
  • the light source information is useful not only in image processing but also when a cameraman, being a person who captures an image, obtains an image by an imaging device.
  • a cameraman being a person who captures an image
  • the light source position is detected, and the information is used to display the front-lighted condition or the backlighted condition to the cameraman, thereby allowing an ordinary person with no special knowledge to easily take, for example, a backlighted (semi-backlighted) portrait effectively utilizing light and shadows.
  • Patent Document 1 Japanese Laid-Open Patent Publication No. 11-175762
  • Patent Document 2 Japanese Laid-Open Patent Publication No. 8-160507
  • a captured image of the light source i.e., a light source image
  • a light source image is an image that is:
  • an ordinary object such as a person
  • an illumination such as a streetlight or a room light.
  • the object may appear dark due to the backlighted condition. Therefore, a cameraman tends to avoid such a position where the object and the light source are present in the same image.
  • Patent Document 1 and Patent Document 2 each employ an imaging device provided with a fish-eye lens with the optical axis being directed in the zenith direction in order to obtain the light source information.
  • the imaging device for imaging the object and the imaging device for imaging the light source are separate from each other, an alignment (calibration) between the two imaging devices is necessary. This is due to the fact that the positional relationship between the object and the light source is particularly important for estimating the light source information. In other words, when an object and a light source are imaged with different imaging devices, they need to be associated with each other, which is a very complicated operation.
  • an object of the present invention is to make it possible for a device such as a camera-equipped mobile telephone to obtain a light source image being a captured image of a light source and to estimate the light source information with no additional imaging devices.
  • the present invention provides a light source estimation device and a light source estimation method, in which the process determines whether a condition of an imaging device is suitable for obtaining light source information, captures an image by the imaging device when it is determined to be suitable, obtains the captured image as a light source image, obtains first imaging device information representing the condition of the imaging device at a point in time when the light source image is obtained, obtains second imaging device information representing the condition of the imaging device at the time of image capturing when an image is captured by the imaging device in response to a cameraman's operation, and estimates light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information.
  • the process determines whether the condition of the imaging device is suitable for obtaining light source information, and obtains a light source image by the imaging device when it is determined to be suitable.
  • the process also obtains first imaging device information representing the condition of the imaging device.
  • the process obtains second imaging device information representing the condition of the imaging device at the time of image capturing when an image is captured by the imaging device in response to a cameraman's operation, and estimates light source information by using the light source image and the first and second imaging device information.
  • the light source image is obtained when it is determined, by using the imaging device which is used at the time of image capturing, that the condition of the imaging device is suitable for obtaining light source information. Therefore, it is possible to obtain a light source image and to estimate light source information with no additional imaging devices.
  • the present invention also provides a light source estimation device and a light source estimation method, in which the process captures an image by an imaging device to obtain the captured image as a light source image, obtains first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained, obtains second imaging device information representing the condition of the imaging device at the time of image capturing when an image is captured by the imaging device in response to a cameraman's operation, and estimates light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information, wherein a plurality of light source images are obtained while an optical axis direction of the imaging device is being varied by optical axis direction varying means.
  • the process obtains a plurality of light source images by the imaging device while the optical axis direction of the imaging device is being varied by the optical axis direction varying means.
  • the process also obtains first imaging device information representing the condition of the imaging device.
  • the process obtains second imaging device information representing the condition of the imaging device at the time of image capturing when an image is captured by the imaging device in response to a cameraman's operation, and estimates light source information by using the light source image and the first and second imaging device information.
  • light source images are obtained by using the imaging device which is used at the time of image capturing while the optical axis direction of the imaging device is being varied. Therefore, it is possible to obtain a light source image over a wide area and to estimate light source information with no additional imaging devices.
  • the present invention also provides a super-resolution device and a super-resolution method, in which the process captures an image by an imaging device, performs a light source information estimation operation of estimating light source information including at least one of a direction and a position of a light source illuminating an object by using the light source estimation method of the present invention, obtains as shape information surface normal information or three-dimensional position information of the object, and performs super-resolution of the captured image by using the light source information and the shape information.
  • the present invention it is possible to obtain a light source image around an object and to estimate light source information without providing additional imaging devices other than the imaging device for imaging the object in devices provided with imaging devices such as camera-equipped mobile telephones. Moreover, it is possible to realize a super-resolution process by using the estimated light source information.
  • FIG. 1 is a block diagram showing a configuration of a light source estimation device according to a first embodiment of the present invention.
  • FIG. 2 is a schematic diagram showing a configuration of a mobile telephone provided with a light source estimation device of the present invention.
  • FIG. 3 is a diagram showing a camera-equipped mobile telephone in a folded position.
  • FIG. 4 is a flow chart showing the flow of the processes of an imaging device condition determination section and a light source image obtaining section.
  • FIG. 4A is a flow chart showing the flow of the processes of an imaging device condition determination section and a light source image obtaining section.
  • FIG. 5 is a schematic diagram showing a portion of information stored in a memory.
  • FIG. 6 is a schematic diagram illustrating a roll-pitch-yaw angle representation.
  • FIG. 7 is a schematic diagram illustrating the process of extracting a light source pixel.
  • FIG. 8 is a schematic diagram illustrating the relationship between a camera coordinate system and an image coordinate system.
  • FIG. 9 is a schematic diagram illustrating the process of estimating a three-dimensional position of a light source by utilizing the movement of the imaging device.
  • FIG. 10 is a schematic diagram illustrating a method for detecting an optical axis direction by using a weight and a touch sensor.
  • FIG. 11 is a schematic diagram showing a folding-type camera-equipped mobile telephone provided with a weight and a touch sensor.
  • FIG. 12 is a schematic diagram showing a state where a folding-type camera-equipped mobile telephone of FIG. 11 is placed.
  • FIG. 13 is a diagram showing the relationship between the optical axis direction and the ON/OFF state of the touch sensors.
  • FIG. 14 is a schematic diagram showing a state where a digital still camera provided with a weight and a touch sensor is placed.
  • FIG. 15 is a block diagram showing a configuration of a light source estimation device according to a second embodiment of the present invention.
  • FIG. 16 is a schematic diagram illustrating a method for synthesizing a panoramic light source image.
  • FIG. 17 is a schematic diagram showing a process of combining together a plurality of light source images to thereby widen an apparent range of viewing field.
  • FIG. 18 is a schematic diagram illustrating a method for synthesizing a panoramic light source image in a case where a rectangular parallelepiped is used as a projection plane.
  • FIG. 19 is a block diagram showing a configuration of a light source estimation device according to a third embodiment of the present invention.
  • FIG. 20 is an external view of a folding-type camera-equipped mobile telephone provided with the light source estimation device according to the third embodiment of the present invention.
  • FIG. 21 is a schematic diagram showing the movement of the folding-type camera-equipped mobile telephone of FIG. 20 when an open/close switch is pressed.
  • FIG. 22 is a block diagram showing another configuration of the light source estimation device according to the third embodiment of the present invention.
  • FIG. 23 is a schematic diagram illustrating an angle of vibration by a vibration mechanism.
  • FIG. 24 is a block diagram showing another configuration of the light source estimation device according to the third embodiment of the present invention.
  • FIG. 25 is a block diagram showing a configuration of a light source estimation system according to a fourth embodiment of the present invention.
  • FIG. 26 is a diagram showing an example of how an image is separated into a diffuse reflection image and a specular reflection image.
  • FIG. 27 is a block diagram showing a configuration of a super-resolution device in one embodiment of the present invention.
  • FIG. 28 is a diagram showing a camera-equipped mobile telephone provided with a super-resolution device in one embodiment of the present invention.
  • FIG. 29 is a graph showing variations in a reflected light intensity when a polarizing filter is rotated under linearly-polarized light.
  • FIG. 30 is a flow chart showing the flow of a process of separating a specular reflection image and a diffuse reflection image from each other by using a polarizing filter.
  • FIG. 31 is a schematic diagram illustrating an imaging device in which a polarization direction is varied from one pixel to another.
  • FIG. 32 is a schematic diagram illustrating a process of obtaining a distance and a three-dimensional position of an object by using a photometric stereo method.
  • FIG. 33 is a schematic diagram illustrating a process of obtaining shape information by using polarization characteristics of reflected light.
  • FIG. 34 is a graph showing variations in a reflected light intensity when a polarizing filter is rotated under natural light.
  • FIG. 35 is a schematic diagram showing the concept of a texton-based super-resolution process.
  • FIG. 36 is a conceptual diagram illustrating a texton-based super-resolution process using a linear matrix transformation.
  • FIG. 37 is a PAD diagram showing the flow of a learning process in a texton-based super-resolution process.
  • FIG. 38 is a schematic diagram illustrating a learning process in a texton-based super-resolution process.
  • FIG. 39 is a diagram showing a process of a two-dimensional discrete stationary wavelet transformation.
  • FIG. 40 shows an exemplary image result when a two-dimensional discrete stationary wavelet transformation is performed on a test image.
  • FIG. 41 is a PAD diagram showing the flow of a texton-based super-resolution process being performed.
  • FIG. 42 is a schematic diagram illustrating a texton-based super-resolution process being performed.
  • FIG. 43 is a diagram showing a process of a two-dimensional discrete stationary inverse wavelet transformation.
  • FIG. 44 is a schematic diagram illustrating a constant Sr for representing a difference in the luminance value between a diffuse reflection component and a specular reflection component.
  • FIG. 45 is a diagram showing the flow of a parameter estimating process for a specular reflection image in a super-resolution process in one embodiment of the present invention.
  • FIG. 46 is a conceptual diagram illustrating parameters of an expression representing the incident illuminance.
  • FIG. 47 is a flow chart showing the flow of a parameter estimating process by a simplex method.
  • FIG. 48 is a flow chart showing the flow of a parameter updating process in a simplex method.
  • FIG. 49 is a schematic diagram illustrating a polar coordinates representation.
  • FIG. 50 is a diagram showing the flow of a parameter estimating process for a diffuse reflection image in a super-resolution process in one embodiment of the present invention.
  • FIG. 51 is a diagram showing data stored in a memory where a pseudo-albedo is used.
  • a first aspect of the present invention provides a light source estimation device, including: an imaging device condition determination section for determining whether a condition of an imaging device is suitable for obtaining light source information; a light source image obtaining section for capturing an image by the imaging device when it is determined to be suitable by the imaging device condition determination section, to thereby obtain the captured image as a light source image; a first imaging device information obtaining section for obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained by the light source image obtaining section; a second imaging device information obtaining section for obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation; and a light source information estimating section for estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information.
  • a second aspect of the present invention provides the light source estimation device of the first aspect, wherein the imaging device condition determination section detects a direction of an optical axis of the imaging device and determines it to be suitable when the optical axis is pointing upward.
  • a third aspect of the present invention provides the light source estimation device of the first aspect, wherein the light source image obtaining section obtains the light source image after confirming that an image is not being captured by the imaging device in response to a cameraman's operation.
  • a fourth aspect of the present invention provides the light source estimation device of the first aspect, wherein the light source information estimating section estimates, in addition to at least one of a direction and a position of the light source, at least one of luminance, color and spectrum information of the light source.
  • a fifth aspect of the present invention provides the light source estimation device of the first aspect, wherein: the light source image obtaining section obtains a plurality of the light source images; the first imaging device information obtaining section obtains the first imaging device information every time the light source image is obtained by the light source image obtaining section; the light source estimation device includes a light source image synthesis section for synthesizing a panoramic light source image from the plurality of light source images obtained by the light source image obtaining section by using the plurality of first imaging device information obtained by the first imaging device information obtaining section; and the light source information estimating section estimates the light source information by using the panoramic light source image and the second imaging device information.
  • a sixth aspect of the present invention provides the light source estimation device of the first aspect, including optical axis direction varying means for varying an optical axis direction of the imaging device, wherein a plurality of light source images are obtained by the light source image obtaining section while the optical axis direction of the imaging device is being varied by the optical axis direction varying means.
  • a seventh aspect of the present invention provides the light source estimation device of the sixth aspect, wherein: the light source estimation device is provided in a folding-type mobile telephone; and the optical axis direction varying means is an open/close mechanism for opening/closing the folding-type mobile telephone.
  • An eighth aspect of the present invention provides the light source estimation device of the sixth aspect, wherein the optical axis direction varying means is a vibration mechanism.
  • a ninth aspect of the present invention provides a light source estimation device, including: a light source image obtaining section for capturing an image by an imaging device to obtain the captured image as a light source image; a first imaging device information obtaining section for obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained by the light source image obtaining section; a second imaging device information obtaining section for obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation; a light source information estimating section for estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information; and optical axis direction varying means for varying an optical axis direction of the imaging device, wherein a plurality of light source images are obtained by the light source image obtaining section while the optical axis direction of the imaging device is being varied
  • a tenth aspect of the present invention provides a light source estimation system for estimating light source information, including: a communication terminal including the imaging device condition determination section, the light source image obtaining section, the first imaging device information obtaining section and the second imaging device information obtaining section of the first aspect, wherein the communication terminal transmits the light source image obtained by the light source image obtaining section, the first imaging device information obtained by the first imaging device information obtaining section, and the second imaging device information obtained by the second imaging device information obtaining section; and a server including the light source information estimating section of the first aspect, wherein the server receives the light source image and the first and second imaging device information transmitted from the communication terminal to give the light source image and the first and second imaging device information to the light source information estimating section.
  • An eleventh aspect of the present invention provides a light source estimation method, including: a first step of determining whether a condition of an imaging device is suitable for obtaining light source information; a second step of capturing an image by the imaging device when it is determined to be suitable in the first step, to thereby obtain the captured image as a light source image; a third step of obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained in the second step; a fourth step of obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation; and a fifth step of estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information.
  • a twelfth aspect of the present invention provides a light source estimation method, including: a first step of capturing an image by an imaging device to obtain the captured image as a light source image; a second step of obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained in the first step; a third step of obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation; and a fourth step of estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information, wherein in the first step, an optical axis direction of the imaging device is varied by optical axis direction varying means, and a plurality of light source images are obtained while the optical axis direction of the imaging device is being varied.
  • a thirteenth aspect of the present invention provides a super-resolution device, including: an image-capturing section for capturing an image by an imaging device; a light source information estimating section for estimating light source information including at least one of a direction and a position of a light source illuminating an object, by the light source estimation method of the eleventh or twelfth aspect; a shape information obtaining section for obtaining, as shape information, surface normal information or three-dimensional position information of the object; and a super-resolution section for super-resolution of the image captured by the image-capturing section by using the light source information and the shape information.
  • a fourteenth aspect of the present invention provides the super-resolution device of the thirteenth aspect, wherein the super-resolution section separates an image captured by the image-capturing section into a diffuse reflection component and a specular reflection component, and separately enhances resolutions of the diffuse reflection component and the specular reflection component separated from each other.
  • a fifteenth aspect of the present invention provides the super-resolution device of the thirteenth aspect, wherein the super-resolution section decomposes an image captured by the image-capturing section into parameters, and separately enhances resolutions of the decomposed parameters.
  • a sixteenth aspect of the present invention provides a super-resolution method, including: a first step of capturing an image by an imaging device; a second step of estimating light source information including at least one of a direction and a position of a light source illuminating an object, by the light source estimation method of the eleventh or twelfth aspect; a third step of obtaining, as shape information, surface normal information or three-dimensional position information of the object; and a fourth step of super-resolution of the image captured in the first step by using the light source information and the shape information.
  • FIG. 1 is a block diagram showing a configuration of a light source estimation device according to a first embodiment of the present invention.
  • 1001 denotes an imaging device using CCDs, CMOSes, or the like, and 1002 a shutter button by which a cameraman, being a person who captures an image, instructs the imaging device 1001 to capture an image.
  • the imaging device 1001 is provided with a 3-degree-of-freedom (3DOF) sensor 1025 .
  • 3DOF 3-degree-of-freedom
  • 101 denotes an imaging device condition determination section for determining whether the condition of the imaging device 1001 is suitable for obtaining light source information
  • 102 a light source image obtaining section for capturing an image by the imaging device 1001 to obtain the captured image as a light source image when it is determined by the imaging device condition determination section 101 to be suitable
  • 103 a first imaging device information obtaining section for obtaining first imaging device information representing the condition of the imaging device 1001 when the light source image is obtained by the light source image obtaining section 102
  • 104 a second imaging device information obtaining section for obtaining second imaging device information representing the condition of the imaging device at the time of image capturing when the image is captured by the imaging device 1001 in response to a cameraman's operation
  • 105 a light source information estimating section for estimating at least one of the direction and the position of the light source at the time of image capturing based on the light source image obtained by the light source image obtaining section 102 , the first imaging device information obtained by the first imaging device information obtaining section 103 ,
  • the imaging device condition determination section 101 , the light source image obtaining section 102 , the first imaging device information obtaining section 103 , the second imaging device information obtaining section 104 and the light source information estimating section 105 are implemented as a program or programs executed by a CPU 1029 . Note however that all or some of these functions may be implemented as hardware.
  • a memory 1028 stores the light source image obtained by the light source image obtaining section 102 , and the first imaging device information obtained by the first imaging device information obtaining section 103 .
  • FIG. 2 shows an exemplary configuration of a folding-type camera-equipped mobile telephone 1000 provided with the light source estimation device of the present embodiment.
  • the imaging device 1001 includes a polarizing filter 1016 , and also includes a motor 1026 a for rotating the polarizing filter 1016 and an encoder 1027 a for detecting the angle of rotation thereof.
  • a motor 1026 b for driving the folding mechanism, and an encoder 1027 b for detecting the angle of rotation thereof are also provided.
  • a vibration mode switch 1034 is also provided.
  • FIG. 3 is a diagram showing the folding-type camera-equipped mobile telephone 1000 of FIG. 2 in a folded position.
  • 1005 denotes the optical axis direction of the imaging device 1001
  • 1006 a field of view of the imaging device 1001 .
  • the imaging device condition determination section 101 determines whether the condition of the imaging device 1001 is suitable for obtaining light source information.
  • the most ordinary light source may be a lighting device in a house, and may be a streetlight or the sunlight in the outdoors. Therefore, if the imaging direction, i.e., the direction of the optical axis, of the imaging device 1001 is upward, it can be determined to be a suitable condition for the imaging device 1001 to obtain light source information.
  • the imaging device condition determination section 101 uses the output of the angle sensor 1025 provided in the imaging device 1001 to detect the direction of the optical axis of the imaging device 1001 so as to determine that it is a suitable condition for obtaining light source information when the optical axis is pointing upward. Then, the imaging device condition determination section 101 sends an image-capturing prompting signal to the light source image obtaining section 102 .
  • the light source image obtaining section 102 captures an image by the imaging device 1001 to obtain the captured image as a light source image.
  • the obtained light source image is stored in the memory 1028 .
  • the light source image obtaining section 102 obtains a light source image after confirming that an image is not being captured by a cameraman's operation.
  • a light source image may be captured after confirming that the shutter button 1002 is not being pressed.
  • the light source image obtaining section 102 captures a light source image in a period during which an image is not being captured, in view of the cameraman's intention of capturing an image.
  • a light source image is captured by using the imaging device 1001 , which is used for imaging an object. Therefore, if the process of capturing a light source image is performed when the cameraman is about to image an object, the cameraman will not be able to image the object at the intended moment, thus neglecting the cameraman's intention of capturing an image.
  • a light source image is captured in a period during which it can be assumed that the cameraman will not capture an image, e.g., in a period during which the device is left on a table, or the like.
  • the optical axis direction 1005 is upward. Under this condition, it is possible to capture an optimal light source image.
  • FIG. 4 is a flow chart showing exemplary processes of the imaging device condition determination section 101 and the light source image obtaining section 102 .
  • the imaging device condition determination section 101 detects the optical axis direction of the imaging device 1001 , and determines whether the optical axis direction is upward (step S 121 ). If the optical axis direction is not upward (No in step S 121 ), the optical axis direction is repeatedly checked until the optical axis direction is upward. If the optical axis direction is upward (Yes in step S 122 ), the light source image obtaining section 102 checks the shutter button 1002 (step S 122 ).
  • step S 122 When the shutter button 1002 is being pressed for performing a process such as auto-focusing (AF) (No in step S 122 ), it is likely that an image is being captured, and therefore the process of capturing a light source image is not performed.
  • the light source image obtaining section 102 captures an image by the imaging device 1001 to obtain a light source image (step S 123 ).
  • an acceleration sensor or the like, may be used, wherein a light source image is obtained when the imaging device 1001 is stationary. Specifically, when the imaging device 1001 is stationary, it can be determined that the imaging device 1001 is not being held by the cameraman but is left on a table, or the like. Therefore, in such a case, it is likely that the cameraman is not capturing an image. When the cameraman is holding the imaging device 1001 for capturing an image, the acceleration sensor senses the camera shake.
  • the light source image obtaining section 102 may be configured not to capture an image in such a case.
  • FIG. 4A is a flow chart showing exemplary processes of the imaging device condition determination section 101 and the light source image obtaining section 102 in a case where whether the cameraman has an intention of capturing an image is determined by using the vibration mode.
  • the imaging device condition determination section 101 detects the optical axis direction of the imaging device 1001 , and determines whether the optical axis direction is upward (step S 121 ). If the optical axis direction is not upward (No in step S 121 ), the optical axis direction is repeatedly checked at regular intervals until the optical axis direction is upward. If the optical axis direction is upward (Yes in step S 121 ), the light source image obtaining section 102 checks the vibration mode (step S 124 ).
  • step S 124 If the vibration mode switch 1034 is OFF (No in step S 124 ), a light source image is not captured because it is likely that an image is to be captured. If the vibration mode switch 1034 is ON (Yes in step S 124 ), the light source image obtaining section 102 captures an image by the imaging device 1001 to obtain a light source image (step S 123 ).
  • the drive mode is set as the vibration mode, it can be assumed that the cameraman is traveling, whereby the process may choose not to capture a light source image. In other words, a light source image is captured in the silent mode but not in the drive mode.
  • the first imaging device information obtaining section 103 obtains first imaging device information representing the condition of the imaging device 1001 .
  • the output of the angle sensor 1025 and the focal distance information of the imaging device 1001 are obtained as the first imaging device information, for example.
  • the obtained first imaging device information is stored in the memory 1028 .
  • FIG. 5 is a schematic diagram showing part of information stored in the memory 1028 .
  • the angle sensor output and the focal distance for a light source image are stored as the first imaging device information.
  • the orientation information of the imaging device 1001 is represented by the following 3 ⁇ 3 matrix Rlight by using the output of the angle sensor 1025 .
  • the 3 ⁇ 3 matrix Rlight representing the orientation information of the imaging device 1001 is referred to as a camera orientation matrix.
  • ( ⁇ , ⁇ , ⁇ ) are values of the output from the sensor attached to the camera in a roll-pitch-yaw angle representation, each being expressed in terms of the amount of movement from a reference point.
  • a roll-pitch-yaw angle representation is a representation where a rotation is represented by three rotations, including the roll being the rotation about the z axis, the pitch being the rotation about the new y axis, and the yaw being the rotation about the new x axis, as shown in FIG. 6 .
  • Rx( ⁇ ), Ry( ⁇ ) and Rz( ⁇ ) are matrices for converting the roll-pitch-yaw angles to the x-axis rotation, the y-axis rotation and the z-axis rotation, and are expressed as follows.
  • the zooming information is also obtained as the focal distance information.
  • the focal distance information is also obtained.
  • the focal distance information can be obtained by performing a camera calibration operation as widely used in the field of image processing.
  • the method for obtaining the orientation information of the camera from the angle sensor or the angular velocity sensor attached to the camera may be an existing method (for example, Takayuki Okatani, “3D Shape Recovery By Fusion Of Mechanical And Image Sensors”, Journal of Information Processing Society of Japan, 2005-CVIM-147, pp. 123-130, 2005).
  • the second imaging device information obtaining section 104 obtains second imaging device information representing the condition of the imaging device 1001 .
  • the output of the angle sensor 1025 and the focal distance information of the imaging device 1001 are obtained as the second imaging device information.
  • the orientation matrix Rnow obtained from the output ( ⁇ , ⁇ , ⁇ ) of the angle sensor 1025 is referred to as the current orientation matrix.
  • the light source information estimating section 105 estimates light source information at the time of image capturing in response to a cameraman's operation by using the light source image and the first imaging device information stored in the memory 1028 , and the second imaging device information obtained by the second imaging device information obtaining section 104 . It is assumed herein that the direction of the light source is estimated.
  • FIG. 7 is a schematic diagram illustrating this process.
  • the imaging device 1001 having the field of view 1006 is imaging a light source 1007 .
  • the luminance value of an area 1009 where the light source is imaged is very high.
  • a threshold operation is used, wherein a pixel having a luminance value higher than a predetermined threshold is extracted as the light source pixel.
  • the light source direction is estimated from the obtained light source pixel.
  • This process requires the relationship between the pixel position (u,v) of the imaging device and the spatial position (xf,yf) on the imaging elements referred to as the image coordinate system.
  • the relationship between the pixel position (u,v) and the spatial position (xf,yf) can be obtained as follows.
  • (Cx,Cy) is the pixel center position
  • s is the scale factor
  • (dx,dy) is the size [mm] of one pixel of an imaging element
  • Ncx is the number of imaging elements in the x direction
  • Nfx is the number of effective pixels in the x direction
  • ⁇ 1 and ⁇ 2 are distortion parameters representing the distortion of the lens.
  • the relationship between the camera coordinate system (x,y,z) wherein the focal point of the imaging device is at the origin and the optical axis direction thereof is along the Z axis and the image coordinate system (xf,yf) as shown in FIG. 8 can be obtained as follows.
  • f represents the focal distance of the imaging device.
  • the camera parameters (Cx,Cy), s, (dx,dy), Ncx, Nfx, f, ⁇ 1 and ⁇ 2 are known, the pixel position (u,v) and the camera coordinate system (x,y,z) can be converted to each other by (Expression 2) and (Expression 3).
  • Ncx and Nfx can be known as long as the imaging elements can be identified, and (Cx,Cy), s, (dx,dy), ⁇ 1, ⁇ 2 and f can be known by performing a so-called “camera calibration” (for example, Roger Y. Tsai, “An Efficient And Accurate Camera Calibration Technique For 3D Machine Vision”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach, Fla., 1986, pp. 364-374). These parameters do not change even when the position or the orientation of the imaging device changes. These parameters are referred to as “internal camera parameters”.
  • a camera calibration is performed to identify the internal camera parameters (Cx,Cy), s, (dx,dy), Ncx, Nfx, f, ⁇ 1 and ⁇ 2.
  • the default values as of when the imaging device is purchased may be used as these values.
  • the focal distance f for each step of zooming may be obtained in advance so that they can be selectively used as necessary. Then, the focal distance f may be stored together with a captured image.
  • the light source direction is estimated from the light source pixel by using information as described above.
  • the pixel position of the light source pixel is (ulight,vlight)
  • the light source direction Llight can be expressed as follows.
  • Llight Since Llight is represented by the camera coordinate system in which the light source image has been captured, it is converted to the current camera coordinate system Lnow. This can be expressed as follows.
  • the light source vector Lnow is estimated by performing these processes.
  • the direction of the light source is estimated as described above.
  • the process may also obtain the three-dimensional position of the light source in addition to the direction thereof.
  • FIG. 9 is a schematic diagram illustrating this process.
  • the light source should exist at the intersection between the extensions of the light source vectors 1010 A and 1010 B.
  • the three-dimensional position of the light source can be obtained as follows.
  • the orientation matrix of the imaging device, the relative three-dimensional position of the imaging device and the estimated light source vector at time t 1 are denoted as R 1 , P 1 and L 1 , respectively, and the orientation matrix of the imaging device and the estimated light source vector at time t 2 are denoted as R 2 and L 2 , respectively.
  • the position of the imaging device at time t 2 is assumed to be the origin O(0,0,0). Then, the light source position Plight satisfies.
  • the light source position Plight can be obtained by solving (Expression 5) and (Expression 6) as simultaneous equations in s and m. However, since there usually is noise, the light source position is obtained by using the method of least squares.
  • the light source position Plight is obtained by solving (Expression 7) and (Expression 8) as simultaneous equations in m and s, and substituting obtained s and m into (Expression 5) and (Expression 6).
  • the position of the light source is estimated as described above.
  • the relative three-dimensional position P 1 of the imaging device at time t 1 (the relative positional relationship between the imaging device at time t 1 and that at time t 2 ) is obtained by using an optical flow.
  • An optical flow is a vector extending between a point on an object in one image and the same point on the object in another temporally continuous image, i.e., a vector extending between corresponding points.
  • a geometric constraint expression holds between the corresponding points and the camera movement. Thus, if the corresponding points satisfy certain conditions, the movement of the camera can be calculated.
  • a method called an “8-point method”, for example, is known in the art (H. C. Longuet-Higgins, “A Computer Algorithm For Reconstructing A Scene From Two Projections”, Nature, vol. 293, pp. 133-135, 1981) as a method for obtaining the relative positional relationship of the imaging device at different points in time from an optical flow.
  • the camera movement is calculated from eight or more pairs of corresponding stationary points between two images.
  • Methods for obtaining such corresponding points between two images are widely known, and will not herein be described in detail (for example, Carlo Tomasi and Takeo Kanade, “Detection And Tracking Of Point Features”, Carnegie Mellon University Technical Report, CMU-CS-91-132, April 1991).
  • the luminance or the color of the light source can be obtained by obtaining the luminance value or the RGB values of the light source pixel.
  • the spectrum of the light source may be detected by obtaining an image by a multispectral camera. It is known that by thus obtaining the spectrum of the light source, it is possible to synthesize an image with high color reproducibility in the process of increasing the resolution of an image and in the augmented reality to be described later (for example, Toshio Uchiyama, Masaru Tshuchida, Masahiro Yamaguchi, Hideaki Haneishi, Nagaaki Ohyama “Capture Of Natural Illumination Environments And Spectral-Based Image Synthesis”, Technical Report of the Institute of Electronics, Information and Communication Engineers, PRMU2005-138, pp. 7-12, 2006).
  • the light source information estimating section 105 may be configured to obtain the illuminance information of the light source as the light source information. This can be done by using an illuminance meter whose optical axis direction coincides with that of the imaging device 1001 .
  • the illuminance meter may be a photocell illuminance meter, or the like, for reading the photocurrent caused by the incident light, wherein a microammeter is connected to the photocell.
  • the light source estimation device of the present embodiment obtains a light source image by the imaging device when it is determined that the condition of the imaging device is suitable for obtaining light source information, and estimates light source information at the time of image capturing by using the first imaging device information at the time of obtaining the light source image and the second imaging device information at the time of image capturing by a cameraman. Therefore, it is possible to estimate the light source information around the object with no additional imaging devices, in a camera-equipped mobile telephone, or the like.
  • the output of the angle sensor 1025 is used for the imaging device condition determination section 101 to detect the optical axis direction of the imaging device 1001 .
  • the present invention is not limited to this, and other existing methods may be employed, e.g., a method using a weight and touch sensors (see Japanese Laid-Open Patent Publication No. 4-48879), and a method using an acceleration sensor (see Japanese Laid-Open Patent Publication No. 63-219281).
  • FIG. 10 is a diagram showing a configuration of a weight and touch sensors.
  • 1003 denotes a weight hanging down with the base portion thereof rotatably supported so as to always keep the perpendicular direction
  • 1004 A and 1004 B touch sensors.
  • 1005 denotes the optical axis direction of the imaging device.
  • the touch sensors 1004 A and 1004 B come into contact with the weight 1003 when the optical axis direction 1005 is inclined from the horizontal position by predetermined angles ⁇ 1 and ⁇ 2.
  • FIG. 11 shows an exemplary configuration where a weight and touch sensors of FIG. 10 are provided in a folding-type camera-equipped mobile telephone.
  • the folding-type camera-equipped mobile telephone of FIG. 11 When the folding-type camera-equipped mobile telephone of FIG. 11 is placed with the imaging device 1001 facing down, the weight 1003 comes into contact with the touch sensor 1004 A, thus turning the touch sensor 1004 A ON ( FIG. 12( a )).
  • the weight 1003 comes into contact with the touch sensor 1004 B, thus turning the touch sensor 1004 B ON ( FIG. 12( b )).
  • FIG. 13 is a diagram showing the relationship between the optical axis direction and the ON/OFF state of the touch sensors.
  • the touch sensor 1004 A is ON and the touch sensor 1004 B is OFF, it can be assumed that the optical axis is facing downward with an inclination of + ⁇ 1 or more from the horizontal direction.
  • the touch sensor 1004 B is ON and the touch sensor 1004 A is OFF, it can be assumed that the optical axis is facing upward with an inclination of ⁇ 2 or more from the horizontal direction.
  • ⁇ 2 ⁇ 1 holds, and it can be assumed that the optical axis direction is substantially horizontal.
  • FIG. 14 shows an exemplary configuration where a weight and touch sensors are provided in a digital still camera.
  • a weight and touch sensors are provided in a digital still camera.
  • FIG. 14( a ) when the optical axis of the imaging device 1001 is facing downward, the weight 1003 is in contact with the touch sensor 1004 A.
  • FIG. 14( b ) when the optical axis of the imaging device 1001 is facing upward, the weight 1003 is in contact with the touch sensor 1004 B.
  • the imaging device condition determination section 101 determines whether the condition of the imaging device 1001 is suitable for obtaining light source information by detecting the optical axis of the imaging device 1001 .
  • the luminance value of the captured image may be detected, for example.
  • the pixel capturing the light source has a very high luminance value.
  • the imaging device 1001 may be used to capture an image, and if a luminance value greater than or equal to a threshold is present in the captured image, it can be determined that the light source is captured in the image and that the condition is suitable for obtaining light source information. In such a case, since it can be assumed that the light source has a very high luminance value, an image is preferably captured by the imaging device 1001 with as short an exposure time as possible.
  • whether there is a shading object within the range of viewing field of the camera may be detected so as to determine whether the condition of the imaging device 1001 is suitable for obtaining light source information. This is because if there is such a shading object, the light source will be shaded and it is likely that the light source cannot be captured.
  • the presence of a shading object can be detected by methods including a method using distance information and a method using image information.
  • the output of a distance sensor used in auto-focusing of a camera may be used so that if an object is present within 1 m, for example, the object is determined to be a shading object.
  • an image is captured by the imaging device 1001 , and a human is detected from within the image by an image processing, for example. If a human is in the captured image, it is determined that the human is a shading object. This is because it can be assumed that a most ordinary object that shades the light source in the vicinity of the camera is a human.
  • the detection of a human from within an image can be done by using image recognition techniques widely known in the art, e.g., by detecting a skin-colored region by using the color information.
  • the light source image obtaining section 102 obtains a light source image
  • the image is captured without using a flashlight. This is because if an object that causes specular reflection such as a mirror is present within the viewing field of the imaging device 1001 , the flashlight may be reflected, which may be erroneously assumed to be a light source pixel. Therefore, it is preferred to use an imaging device capable of capturing an image over a wide dynamic range, such as a cooled CCD camera or a multiple-exposure imaging.
  • the light source image obtaining section 102 obtains a light source image, if the amount of exposure is not sufficient, the exposure time may be elongated. This is particularly effective in a case where a light source image is obtained only when the imaging device 1001 is stationary by using an acceleration sensor, or the like.
  • FIG. 15 is a block diagram showing a configuration of a light source estimation device according to a second embodiment of the present invention.
  • like elements to those shown in FIG. 1 are denoted by like reference numerals, and will not be further described below.
  • a light source image synthesis section 106 is provided in addition to the configuration of FIG. 1 .
  • the light source image synthesis section 106 synthesizes a panoramic light source image from a plurality of light source images obtained by the light source image obtaining section 102 by using a plurality of pieces of first imaging device information obtained by the first imaging device information obtaining section 103 .
  • a panoramic light source image is a light source image capturing a scene over a wide range. By using a panoramic light source image, it is possible to obtain, at once, the light source information of a scene over a wide range.
  • the imaging device condition determination section 101 , the light source image obtaining section 102 and the first imaging device information obtaining section 103 repeatedly perform the processes to obtain a plurality of light source images and a plurality of pieces of first imaging device information corresponding respectively to the light source images.
  • the plurality of sets of the light source images and the first imaging device information are stored in the memory 1028 .
  • the first imaging device information obtaining section 103 may compare the new light source image with an already obtained light source image so that the first imaging device information is obtained only when the difference therebetween is significant. If the difference is insignificant, the new light source image may be discarded.
  • an acceleration sensor or an angle sensor may be used so that the light source image and the first imaging device information may be obtained when the imaging device 1001 is moved.
  • the light source image synthesis section 106 synthesizes a single wide-range panoramic light source image from a plurality of pairs of the light source images and the first imaging device information stored in the memory 1028 .
  • FIG. 16 is a schematic diagram illustrating a method for synthesizing a panoramic light source image.
  • 1001 denotes an imaging device, 1006 a field of view, 1011 a projection plane onto which an image is projected, and 1012 a projected image obtained by projecting the captured light source image.
  • a light source image stored in the memory 1028 is projected onto the projection plane 1011 using the corresponding first imaging device information. It is assumed herein that the projection plane is hemispherical.
  • the camera coordinate system (x,y,z) is (Xw,Yw,Zw) when the output ( ⁇ , ⁇ , ⁇ ) of the angle sensor 1025 provided in the imaging device 1001 is as follows:
  • the projection plane can be expressed as follows.
  • rprj is the radius of the hemisphere being the projection plane 1011 .
  • it may be set to 10 m assuming a streetlight for outdoor, and about 2.5 m assuming a lighting on the ceiling for indoor.
  • Such indoor/outdoor switching may be done by a cameraman switching between the indoor imaging mode and the outdoor imaging mode, for example.
  • R 0 is the orientation matrix obtained from (Expression 1) and (Expression 9).
  • all the captured light source images are projected onto the projection plane 1011 to produce the projected image 1012 .
  • two or more light source images may be projected onto the same area on the projection plane.
  • the projected image 1012 by the previously-captured light source image may be discarded while preferentially using the newly-captured light source image.
  • FIG. 17 is a schematic diagram showing how this is done.
  • 1001 denotes an imaging device, 1005 and 1006 the optical axis direction and the field of view, respectively, when the orientation of the imaging device 1001 is changed, 1013 the apparent field of view obtained by integrating together images captured while changing the orientation.
  • 1013 the apparent field of view obtained by integrating together images captured while changing the orientation.
  • the projection plane 1011 does not need to be a hemisphere, and may be a rectangular parallelepiped as shown in FIG. 18 for indoor, for example.
  • the camera coordinate system (x,y,z) is (Xw,Yw,Zw)
  • each plane of the rectangular parallelepiped being the projection plane can be expressed as follows.
  • a, b and c are constants, and this represents a projection plane on the rectangular parallelepiped of FIG. 18 . Therefore, as with (Expression 11), all pixels can be projected onto the projection plane 1011 as follows by obtaining the intersection between (Expression 4) and (Expression 59).
  • R 0 is the orientation matrix obtained from (Expression 1) and (Expression 9).
  • the indoor/outdoor switching may be done by using the color of the light source pixel. Specifically, by using the color obtained from the RGB components of the captured light source pixel, it is possible to determine whether it is the sunlight or a fluorescent light or a light bulb, whereby it is determined to be outdoor if it is the sunlight and to be indoor if it is a fluorescent light or a light bulb. In such a case, the wavelength characteristics of the imaging device for each of R, G and B and the gamma value being the ratio of the change in the converted voltage value with respect to the change in the brightness of the image may be stored.
  • the light source information estimating section 105 estimates the light source information by using the panoramic light source image synthesized by the light source image synthesis section 106 .
  • a method for estimating the light source position as the light source information will be described.
  • a light source pixel is extracted from the panoramic light source image by the method described above in the first embodiment.
  • the position of the extracted light source pixel is that in a case where the orientation of the imaging device 1001 is as represented by (Expression 9). Therefore, by using the second imaging device information obtained by the second imaging device information obtaining section 104 , the process estimates the light source position (X1_now, Y1_now, Z1_now) in terms of camera coordinates at the time of image capturing by a cameraman.
  • the light source information is estimated from the panoramic light source image, whereby it is possible to obtain, at once, the light source information of a scene over a wide range.
  • FIG. 19 is a block diagram showing a configuration of a light source estimation device according to a third embodiment of the present invention.
  • like elements to those shown in FIG. 1 are denoted by like reference numerals, and will not be further described below.
  • FIG. 19 presumes that it is provided in a folding-type mobile telephone. It includes an open/close mechanism 1031 for opening/closing the folding-type mobile telephone, and an open/close switch 1014 for instructing the open/close mechanism 1031 to perform the open/close action.
  • the open/close switch 1014 When the open/close switch 1014 is pressed and the open/close mechanism 1031 performs the open/close action, the optical axis direction of the imaging device 1001 provided in the folding-type mobile telephone changes due to the open/close action.
  • the open/close mechanism 1031 functions as optical axis direction varying means for varying the optical axis direction of the imaging device 1001 .
  • a light source image obtainment instructing section 110 first instructs the light source image obtaining section 102 to obtain a light source image when the open/close switch 1014 is pressed.
  • the imaging device 1001 preferably takes a series of consecutive shots or records a video.
  • the light source image obtainment instructing section 110 performs the open/close action by using the open/close mechanism 1031 . Therefore, the light source image obtaining section 102 obtains light source images while the open/close mechanism 1031 is performing the open/close action or, in other words, while the optical axis direction of the imaging device 1001 is being varied by the optical axis direction varying means.
  • FIG. 20 is an external view of the folding-type camera-equipped mobile telephone 1000 , which is provided with a light source estimation device of the present embodiment.
  • 1001 A and 1001 B denote imaging devices, 1002 a shutter button, and 1014 an open/close switch of a folding-type mobile telephone.
  • Arrows attached to the imaging devices 1001 A and 1001 B denote optical axis directions.
  • a folding-type mobile telephone is normally folded as shown in FIG. 20( b ) while it is not being used for a call or for capturing an image.
  • 1006 A and 1006 B denote fields of view to be captured by the two imaging devices ( 1001 A and 1001 B in FIG. 20) provided in the folding-type camera-equipped mobile telephone 1000 . It can be seen from FIG. 21 that the optical axis direction of the imaging devices 1001 A and 1001 B can be varied by using the open/close switch 1014 of the folding-type camera-equipped mobile telephone 1000 .
  • the open/close mechanism 1031 can be implemented by providing a spring or a lock mechanism (see, for example, Japanese Laid-Open Patent Publication No. 7-131850).
  • a motor may be provided in the hinge portion of the folding-type mobile telephone.
  • the orientation information of the imaging device 1001 can be obtained by using, as the angle sensor 1025 , a rotary encoder provided along with the motor.
  • the light source image obtainment instructing section 110 detects this and instructs the light source image obtaining section 102 to obtain light source images.
  • the open/close mechanism 1031 as the light source direction varying means performs the automatic open/close action of the folding-type mobile telephone.
  • the light source image obtaining section 102 captures a plurality of light source images by using the imaging device 1001 as instructed by the light source image obtainment instructing section 110 . Thereafter, the operation is similar to that of the first embodiment.
  • the exposure time is as short as possible so as to capture light source images while moving the imaging device 1001 .
  • camera shake compensation may be used so as to capture light source images without a blur even if the imaging device 1001 is moving.
  • the exposure time TE where camera shake compensation is not used can be expressed as follows.
  • M is the rotational velocity [deg/sec] of the optical axis which rotates as the open/close switch 1014 is pressed
  • ⁇ s is the viewing angle [deg] in the vertical direction of the imaging device 1001
  • Lx is the number of pixels in the vertical direction of the imaging device 1001 .
  • the open/close switch 1014 is often pressed immediately before image capturing.
  • the present embodiment is very effective because it is likely that light source images immediately before image capturing can be obtained.
  • Blurred images may intentionally be captured as light source images. By capturing blurred images, it is possible to capture light source images while keeping privacy of people in the captured scene. This can be realized by, for example, elongating the exposure time.
  • the exposure time and the rotational velocity may be determined so as to satisfy the following expression, for example.
  • the light source direction may be estimated by using a voting process from a plurality of light source images. This can improve the precision with which the light source direction is estimated. For example, if a light source position obtained from a light source image is significantly shifted from the light source position obtained from other light source images or if the corresponding light source is absent in other light source images, it can be determined that the estimation of the light source position is a failure and the obtained light source position can be discarded from the light source estimation result.
  • the image-capturing operation may be prompted by, for example, audibly outputting “Re-capture image of light source” or displaying “Re-capture image of light source” on the display.
  • FIG. 22 is a block diagram showing another configuration of the light source estimation device of the present embodiment.
  • like elements to those shown in FIGS. 1 and 19 are denoted by like reference numerals, and will not be further described below.
  • the configuration of FIG. 22 includes the imaging device condition determination section 101 shown in the first embodiment. It also includes a vibration mechanism 1026 to be used for realizing the vibration function of the mobile telephone, for example.
  • the vibration function is a function of indicating an incoming call by way of vibrations while the mobile telephone is in a vibration mode. If the vibration mechanism 1026 performs the vibration action while the vibration function is ON, the optical axis direction of the imaging device 1001 provided in the mobile telephone varies according to the vibration action.
  • the vibration mechanism 1026 functions as optical axis direction varying means for varying the optical axis direction of the imaging device 1001 .
  • the imaging device condition determination section 101 determines whether the condition of the imaging device 1001 is suitable for obtaining light source information.
  • the light source image obtainment instructing section 110 instructs the vibration mechanism 1026 to perform the vibration action and instructs the light source image obtaining section 102 to obtain light source images.
  • light source images are obtained by the light source image obtaining section 102 while the vibration mechanism 1026 is performing the vibration action or, in other words, while the optical axis direction of the imaging device 1001 is being varied by the optical axis direction varying means.
  • ⁇ s denotes the viewing angle of the imaging device 1001
  • ⁇ v denotes the angle by which the mobile telephone is vibrated by the vibration mechanism 1026 (the angle of vibration of FIG. 23 )
  • the enlarged viewing angle ⁇ t can be expressed as follows.
  • the required amount of vibration can be calculated if the viewing angle ⁇ s of the imaging device 1001 and the viewing angle ⁇ t required for the light source estimation are determined. For example, if the viewing angle ⁇ s of the imaging device 1001 is 80 degrees and a 90-degree viewing angle is required for the light source estimation, the angle of vibration is about 5 degrees. This is a value that can be realized by vibrating a mobile telephone having a height of 11 cm by about 9 mm.
  • vibration mechanism 1026 When the vibration mechanism 1026 is actuated for obtaining light source images, a different sound from that of an incoming call or that when receiving an email may be output from the speaker. Then, it is possible to distinguish the operation of obtaining light source images from when there is an incoming call or when receiving an email.
  • an LED or an interface liquid crystal display may be lit to notify the user.
  • the vibration mechanism 1026 may be operated to obtain light source images after giving the notification by a sound, an LED or a display.
  • FIG. 24 is a block diagram showing another exemplary configuration of a light source estimation device of the present embodiment.
  • like elements to those shown in FIGS. 1 and 22 are denoted by like reference numerals, and will not be further described below.
  • the configuration of FIG. 24 includes an email reception detecting section 1032 , while the imaging device condition determination section 101 is omitted.
  • the vibration mechanism 1026 performs the vibration action.
  • the light source image obtainment instructing section 110 instructs the light source image obtaining section 102 to obtain light source images.
  • light source images are obtained by the light source image obtaining section 102 while the vibration mechanism 1026 is performing the vibration action or, in other words, while the optical axis direction of the imaging device 1001 is being varied by the optical axis direction varying means.
  • light source images are obtained while the optical axis direction of the imaging device is being varied by the optical axis direction varying means, whereby it is possible to obtain light source images over a wide range around the object and thus to precisely estimate the light source information.
  • the light source image synthesis section 106 illustrated in the second embodiment may be provided so that a panoramic light source image is produced from a plurality of light source images.
  • the optical axis direction varying means is implemented by the open/close mechanism or the vibration mechanism of the folding-type mobile telephone.
  • the present invention is not limited to this, and it may be implemented by any means as long as it is capable of varying the optical axis direction of the imaging device.
  • a special driving mechanism may be provided in the imaging device itself.
  • FIG. 25 is a block diagram showing a configuration of a light source estimation system according to a fourth embodiment of the present invention.
  • like elements to those shown in FIG. 1 are denoted by like reference numerals, and will not be further described below.
  • a communication terminal 1100 being a camera-equipped mobile telephone, for example, is provided with those elements shown in FIG. 1 except for the light source information estimating section 105 .
  • the light source information estimating section 105 is provided in a server 1101 , which is an external device away from the communication terminal 1100 and connected to the communication terminal 1100 via a network.
  • the communication terminal 1100 does not perform all of the processes but only performs the process of obtaining light source images and imaging device information, with the light source information estimating process being performed by the server 1101 .
  • a light source image is obtained by the light source image obtaining section 102
  • the first imaging device information at the time of obtaining the light source image is obtained by the first imaging device information obtaining section 103
  • the second imaging device information at the time of actually capturing an image is obtained by the second imaging device information obtaining section 104 , as described above in the first embodiment.
  • the light source image and the first and second imaging device information are transmitted by an information transmitting section 108 . There may also be given an instruction as to how the light source information is estimated.
  • an information receiving section 109 receives information that is transmitted from the communication terminal 1100 via a network, i.e., the light source image and the first and second imaging device information.
  • the received light source image and the received first and second imaging device information are given to the light source information estimating section 105 .
  • the light source information estimating section 105 estimates the light source information as described above in the first embodiment. Where there is given an instruction as to how the light source information estimation, it estimates the light source information according to the instruction.
  • the light source information estimating section 105 is provided in the server 1101 so as to perform the light source information estimating process by the server 1101 , it is possible to reduce the computational burden on the communication terminal 1100 .
  • the light source estimation device of the present invention is particularly effective in the process of super-resolution also known as “digital zooming”.
  • the super-resolution process is important in the editing process after capturing an image since it makes it possible to arbitrarily enlarge captured-images.
  • Such a super-resolution process has been realized by an interpolation process, or the like, but it has a problem in that where an enlarged image with a factor of 2 ⁇ 2 or more is synthesized, the synthesized image will be blurred, deteriorating the image quality.
  • the light source estimation method of the present invention it is possible to realize a super-resolution process with reduced image quality deterioration. This method will now be described.
  • the super-resolution process of the present invention uses four pieces of input information as follows:
  • a diffuse reflection image is what is obtained by imaging only a diffuse reflection component, being a mat reflection component, of the input image.
  • a specular reflection image is what is obtained by imaging only a specular reflection component, being a shine, of the input image.
  • the diffuse reflection component is a component that is reflected at the surface of a mat object and scattered evenly in all directions.
  • the specular reflection component is a component that reflects strongly in the opposite direction to the direction of the incident light with respect to the normal as is a reflection at a mirror surface. Assuming a dichromatic reflection model, the luminance of an object is represented by the sum of a diffuse reflection component and a specular reflection component.
  • the specular reflection image and the diffuse reflection image can be obtained by imaging an object while rotating the polarizing filter, for example.
  • FIG. 26( a ) shows an image obtained by imaging an object (a tumbler) illuminated by a light source with an imaging device. It can be seen that a specular reflection, being a shine, appears in an upper portion of the figure.
  • FIGS. 26( b ) and ( c ) are the results of separating the image of FIG. 26( a ) into a diffuse reflection image and a specular reflection image by a method to be described later.
  • the shine has been removed to make clear the surface texture information, but the stereoscopic effect has been lost.
  • the specular reflection image detailed shape information appears clearly, but the texture information has been lost.
  • an input image is an image obtained by laying these two images containing totally different information on each other.
  • the super-resolution process uses a learning-based method.
  • Learning-based means preparing a pair of a low-resolution image and a high-resolution image in advance and learning the correspondence therebetween. In this process, the feature quantity extracted from the image, but not the image itself, is learned, so that the super-resolution process can be used also with images other than the prepared images.
  • FIG. 27 is a block diagram showing a configuration of a super-resolution device in one embodiment of the present invention.
  • the super-resolution device of FIG. 27 includes an image-capturing section 201 for capturing an image by using an imaging device, a light source information estimating section 203 for estimating the light source information such as the direction, position, luminance, color and spectrum of the light source illuminating the object by a light source estimation method as described above, a shape information obtaining section 204 for obtaining the surface normal information or the three-dimensional position information of the object as shape information, and a super-resolution section 217 for performing super-resolution of the image captured by the image-capturing section 201 by using the light source information estimated by the light source estimation section 203 and the shape information obtained by the shape information obtaining section 204 .
  • the super-resolution section 217 further separates the image captured by the image-capturing section 201 into a diffuse reflection component and a specular reflection component by means of a diffuse reflection/specular reflection separating section 202 and performs super-resolution of the diffuse reflection component and that of the specular reflection component separately.
  • the image-capturing section 201 images an object by using an imaging device such as a CCD or a CMOS.
  • an imaging device such as a CCD or a CMOS.
  • the specular reflection component where the luminance is very high and the diffuse reflection component are recorded at the same time without saturation. Therefore, it is preferred to use an imaging device capable of capturing an image over a wide dynamic range, such as a cooled CCD camera or a multiple-exposure imaging.
  • the diffuse reflection/specular reflection separating section 202 separates the image captured by the image-capturing section 201 into a diffuse reflection component and a specular reflection component.
  • the luminance of an object is represented as follows by the sum of a diffuse reflection component and a specular reflection component.
  • I is the luminance value of the object imaged by the imaging device
  • I a is an environmental light component
  • I d is a diffuse reflection component
  • I s is a specular reflection component.
  • the environmental light component refers to indirect light which is light from the light source being scattered by objects, etc. This is scattered to every part of the space, giving a slight brightness even to shaded areas where direct light does not reach. Therefore, normally, it is often treated as noise.
  • an image can be separated into a diffuse reflection component and a specular reflection component.
  • these components exhibit very different characteristics from each other, as the diffuse reflection component depends on texture information, whereas the specular reflection image depends on detailed shape information. Therefore, if the super-resolution is performed by separating an input image into a diffuse reflection image and a specular reflection image and performing super-resolution by different methods, it is possible to perform super-resolution with a very high definition. For this, it is first necessary to separate the diffuse reflection image and the specular reflection image from each other.
  • FIG. 28 shows the camera-equipped mobile telephone 1000 provided with a super-resolution device of the present embodiment.
  • the imaging device 1001 is provided with a linear polarizing filter 1016 A having a rotation mechanism (not shown).
  • the lighting device 1007 with a linear polarizing filter 1016 B attached thereto is also provided.
  • 1017 denotes a liquid crystal display as a user interface.
  • the imaging device 1001 captures a plurality of images of an object being illuminated by the lighting device 1007 with the linear polarizing filter 1016 B attached thereto while rotating the linear polarizing filter 1016 A by means of the rotation mechanism.
  • the reflected light intensity changes as shown in FIG. 29 with respect to the angle of rotation ⁇ of the polarizing filter 1016 A.
  • I d denotes the diffuse component of the reflected light
  • the maximum value I max and the minimum value I min of the reflection light luminance can be expressed as follows.
  • the diffuse component I d of the reflected light and the specular reflection component I s thereof are obtained as follows.
  • FIG. 30 shows the flow of this process.
  • the polarizing filter 1016 A is rotated by the rotation mechanism (step S 301 ), and images are captured and stored in a memory (step S 302 ). Then, it is determined whether a predetermined number of images have been captured and stored in the memory (step S 303 ). If a sufficient number of images for detecting the minimum value and the maximum value of the reflection light luminance have not been captured (No in step S 303 ), the polarizing filter is rotated again (step S 301 ) to repeat the image-capturing process.
  • step S 303 If a sufficient number of images have been captured (Yes in step S 303 ), the minimum value and the maximum value of the reflection light luminance are detected by using the captured image data (step S 304 ), and the diffuse reflection component and the specular reflection component are separated from each other by using (Expression 13) and (Expression 14) (step S 305 ). While this process may be done by obtaining the minimum value and the maximum value for each pixel from the plurality of images, fitting of a sin function is used herein. This process will now be described.
  • the reflection light luminance I for the polarizing filter angle ⁇ shown in FIG. 29 can be approximated by a sin function as follows.
  • A, B and C are constants, and the following expressions hold based on (Expression 13) and (Expression 14).
  • I i denotes the reflected light intensity for the polarizing filter angle ⁇ i .
  • the diffuse reflection component and the specular reflection component are separated from each other by using (Expression 16) to (Expression 23).
  • the number of unknown parameters is three, it is sufficient to capture at least three images with different angles of rotation of the polarizing filter.
  • FIG. 31 schematically shows pixels of such an imaging device.
  • 1022 denotes one pixel
  • straight lines in each pixel denote the polarization direction.
  • the imaging device has pixels of four different polarization directions of 0°, 45°, 90° and 135°. If pixels of four different kinds are treated to be a single pixel as in a Bayer arrangement as represented by a thick line 1023 in FIG. 31 , it is possible to simultaneously capture images with four different polarization directions.
  • Such an imaging device may be, for example, a photonic crystal device, or the like.
  • a polarized illumination e.g., a liquid crystal display
  • the liquid crystal display 1017 provided in the mobile telephone 1000 can be used.
  • it is preferred that the luminance value of the liquid crystal display 1017 is made higher than that when it is used as a user interface.
  • the polarizing filter 1016 B of the lighting device 1007 may be rotated instead of rotating the polarizing filter 1016 A of the imaging device 1001 .
  • a polarizing filter instead of providing a polarizing filter both for the imaging device 1001 and for the lighting device 1007 , a polarizing filter may be provided only for one of them, i.e., for the imaging device, and the diffuse reflection component and the specular reflection component may be separated from each other by using an independent component analysis (see, for example, Japanese Patent No. 3459981).
  • the light source information estimating section 203 obtains the position, color and illuminance information of the light source by using a light source estimation method as described above.
  • the shape information obtaining section 204 obtains the surface normal information or the three-dimensional position information of the object, which are shape information of the object.
  • Means for obtaining shape information of an object may be an existing method such as, for example, a slit-ray projection method, a pattern projection method or a laser radar method.
  • the shape information obtaining method is not limited to these methods.
  • the method may be a stereoscopic method using a plurality of cameras, a motion-stereo method using the motion of a camera, a photometric stereo method using images captured while varying the position of the light source, a method in which the distance from an object is measured using a millimeter wave or an ultrasonic wave, or a method using polarization characteristics of reflected light (for example, U.S. Pat. No. 5,028,138, and Daisuke Miyazaki, Katsushi Ikeuchi, “A Method To Estimate Surface Shape Of Transparent Objects By Using Polarization Raytracing Method”, Journal of the Institute of Electronics, Information and Communication Engineers, vol. J88-D-II, No. 8, pp. 1432-1439, 2005).
  • a photometric stereo method and a method using polarization characteristics will be described.
  • the photometric stereo method is a method for estimating the normal direction and the reflectance of an object by using three or more images of different light source directions.
  • H. Hayakawa “Photometric Stereo Under A Light Source With Arbitrary Motion”, Journal of the Optical Society of America A, vol. 11, pp. 3079-89, 1994 describes a method where six or more points of an equal reflectance are obtained from an image as known information and they are used as constraints, thereby estimating the following parameters even if the light source position information is unknown:
  • Diffuse reflection images of different light source directions are represented by the luminance matrix I d as follows.
  • I d [ i d ⁇ ⁇ 1 ⁇ ( 1 ) ⁇ i dF ⁇ ( 1 ) ⁇ ⁇ ⁇ i d ⁇ ⁇ 1 ⁇ ( P ) ⁇ i dF ⁇ ( P ) ] ( Expression ⁇ ⁇ 24 )
  • i df(p) denotes the luminance value of a pixel p in the diffuse reflection image of the light source direction f.
  • the number of pixels in the image is P, and the number of images captured with different light source directions is F.
  • the luminance value of a diffuse reflection image can be expressed as follows.
  • ⁇ p denotes the reflectance (albedo) of the pixel p
  • n p the normal direction vector of the pixel p
  • t f the incident illuminance of the light source f
  • L f the direction vector of the light source f.
  • R refers to a surface reflection matrix, N a surface normal matrix, L a light source direction matrix, T a light source intensity matrix, S a surface matrix, and M a light source matrix.
  • U′ is a P ⁇ 3 matrix
  • U′′ a P ⁇ (F ⁇ 3) matrix
  • ⁇ ′ a 3 ⁇ 3 matrix
  • V′ a 3 ⁇ F matrix
  • V′′ a (F-3) ⁇ F matrix
  • the shape information and the light source information can be obtained at once by solving (Expression 29), but the uncertainty of the 3 ⁇ 3 matrix A remains as follows.
  • A is a 3 ⁇ 3 matrix.
  • the matrix A needs to be obtained. This is satisfied if it is known that six or more points on the image have an equal reflectance. For example, if six points k 1 to k 6 have an equal reflectance, the following holds.
  • the matrix A can be solved by applying the singular value decomposition to (Expression 34).
  • the shape information and the light source information are obtained from (Expression 30) and (Expression 31).
  • the following information can be obtained by capturing three or more images of an object of which six or more pixels having an equal reflectance are known while changing the light source direction:
  • the reflectance of the object and the radiance of the light source obtained by the above process are relative values, and obtaining absolute values requires known information other than the above, such as the reflectance being known for six or more points on the image.
  • the distance or the three-dimensional position between the imaging device and the object may be obtained. This will now be described with reference to the drawings.
  • FIG. 32 is a schematic diagram illustrating this process.
  • 1001 denotes an imaging device
  • 1007 A and 1007 B light sources 1015 the object-observing point O
  • 1010 A and 1010 B the light source directions of the light sources at the object-observing point O
  • 1021 the viewing direction of the imaging device at the object-observing point O.
  • the three-dimensional positional relationships La and Lb between the imaging device 1001 and the light sources 1007 A and 1007 B are known.
  • the viewing direction 1021 of the imaging device 1001 is also known. Therefore, the object-observing point O 1015 exists on the viewing direction 1021 .
  • the light source directions 1010 A and 1010 B of the light sources at the object-observing point O are known.
  • the distance Lv between the imaging device 1001 and the observing point O 1015 is positive (Lv>0)
  • the positional relationship between the light source and the imaging device can be obtained from the design information.
  • the shape information obtaining section 204 may obtain the surface normal direction of the object by using the polarization characteristics of the reflected light. This process will now be described with reference to FIG. 33 .
  • 1001 denotes an imaging device, 1007 a light source, 1015 an observing point O, 1016 a linear polarizing filter having a rotation mechanism (not shown) such as a motor, and 1019 the normal direction.
  • a rotation mechanism such as a motor
  • the reflected light intensity will be a sin function of the period ⁇ , as shown in FIG. 34 .
  • the light source is a polarized light source
  • a reflected light component that has polarized characteristics is the specular reflection component reflected at the surface of the observing point O and a non-polarized component is the diffuse reflection component.
  • the observing point O at which there occurs an intensity difference between the maximum value I max and the minimum value I min of the reflected light intensity is an observing point where the specular reflection component is strong, i.e., where light is regularly reflected
  • the normal direction 1019 of the observing point O is a bisector between the light source direction from the observing point O and the imaging device direction from the observing point O). Therefore, the normal direction 1019 also exists within the plane of incidence.
  • ⁇ max or ⁇ min it can be assumed that the normal direction 1019 exists within the following plane:
  • ⁇ max or ⁇ min are estimated by performing the process of fitting a sin function.
  • the normal direction 1019 is estimated by obtaining the line of intersection between the two estimated planes. In this process, it is necessary to estimate the amount of movement of the imaging device 1001 , but it can be done by using the 8-point method, or the like.
  • an imaging device having a different polarization direction for each pixel may be used.
  • the normal direction 1019 may be obtained by providing a plurality of imaging devices, instead of changing the position of the imaging device 1001 .
  • the surface normal information is obtained as described above by the photometric stereo method and the method using polarization characteristics.
  • the three-dimensional position information of the object is obtained.
  • the object surface normal information is information on the gradient of the three-dimensional position of the object within a small space, and these are both object shape information.
  • the shape information obtaining section 204 obtains the surface normal information or the three-dimensional position information of the object, which are shape information of the object.
  • the shadow removing section 205 estimates shadow areas in an image and performs the shadow removing process. While various methods have been proposed for such a shadow removing and shadow area estimating process, it is possible for example to utilize the fact that a shadow area has a low luminance value, and to estimate that a pixel whose luminance value is less than or equal to a threshold is a shadow area.
  • ray tracing is a rendering method being widely used in the field of computer graphics. While a rendering process is done by calculating coordinate data of the object or data relating to the environment such as the position of the light source or the point of view, a ray tracing process is done by tracing backwards light rays that reach the point of view. Thus, it is possible with ray tracing to calculate where a shadow is formed and the degree of the shadow.
  • the resolutions of the diffuse reflection image and the specular reflection image separated by the diffuse reflection/specular reflection separating section 202 are separately increased by different methods.
  • the process for the diffuse reflection image will be described.
  • An albedo estimating section 206 estimates the albedo of the object by using the diffuse reflection image separated by the diffuse reflection/specular reflection separating section 202 . Since the albedo is not influenced by the light source information, it is possible to realize a process that is robust against light source variations by performing the process using an albedo image.
  • ⁇ i denotes the angle formed between the object normal direction vector and the light source vector.
  • the angle ⁇ i is known.
  • the incident illuminance t f of the light source can also be estimated as will be described later, the albedo r p of the object is obtained from (Expression 36).
  • r p ′ i df ⁇ ( p ) i sf_max ⁇ cos ⁇ ⁇ ⁇ i [ Formula ⁇ ⁇ 61 ]
  • i sf — max denotes the maximum luminance value of the specular reflection image.
  • a pseudo-albedo is effective in cases where the radiance (illuminance) of the light source cannot be obtained by the light source information estimating section 203 .
  • the maximum luminance value i sf — max of the specular reflection image used for the normalization is stored in a memory.
  • FIG. 51 is a diagram showing data to be stored in the memory in a case where the albedo estimating section 206 uses a pseudo-albedo. The produced pseudo-albedo images and the maximum luminance value i sf — max of the specular reflection image used for the normalization are stored.
  • the specular reflection parameter is uniform over a wide area of the object and there exist normals of various directions to the object surface, there exists a regular reflection pixel that causes regular reflection as long as the light source exists at such a position that it illuminates the object for the camera.
  • the maximum luminance value i sf — max of the specular reflection image is the luminance value of the regular reflection pixel.
  • the ratio between the luminance value of the regular reflection pixel at one light source position and that of the regular reflection pixel at another light source position is substantially equal to the light source radiance ratio between these light sources. Therefore, there remains the influence of the light source radiance if the luminance value i df(p) of the diffuse reflection image is simply divided by ⁇ i .
  • An albedo super-resolution section 207 performs the super-resolution of the albedo image estimated by the albedo estimating section 206 . This process will now be described in detail.
  • an albedo image is an image representing the reflectance characteristics that are inherent to the object and are not dependent on optical phenomena such as specular reflection of light and shading. Since object information is indispensable for the super-resolution process of the present embodiment, the process is based on learning the object in advance.
  • a super-resolution process based on the texton (the texture feature quantity of an image) is used.
  • FIG. 35 is a diagram showing the concept of the texton-based super-resolution process.
  • the low-resolution image LR (the number of pixels: N ⁇ N) input upon execution of the process is enlarged by interpolation by a factor of M ⁇ M so that the number of pixels is matched with the target number of pixels.
  • the image whose number of pixels is MN ⁇ MN is referred to as an “exLR image”.
  • the high-frequency component of the image is lost in the exLR image, and the exLR image will be a blurred image. Sharpening this blurred image is nothing but a super-resolution process.
  • the luminance value of the exLR image is transformed for each pixel to the T-dimensional texton based on multiple resolutions by using the multiple-resolution transformation WT.
  • This transformation uses a process such as a wavelet transformation or a pyramid structure decomposition.
  • a total of MN ⁇ MN T-dimensional texton vectors are produced for each pixel of the exLR image.
  • clustering is performed on the texton vectors to selectively produce L input representative texton vectors.
  • These L texton vectors are subjected to a transformation based on database information learned in advance to produce a T-dimensional resolution-increased texton vector.
  • the transformation uses a table lookup process, and a linear or non-linear transformation in the T-dimensional multidimensional feature vector space.
  • the resolution-increased texton vector is converted back to image luminance values by an inverse transformation IWT such as an inverse wavelet transformation or a pyramid structure reconstruction, thus forming a resolution-increased image HR.
  • FIG. 36 schematically illustrates the improvement above.
  • FIG. 37 is a PAD diagram illustrating the flow of the learning process
  • FIG. 38 is a diagram illustrating the relationship between a pixel to be processed and a cell to be processed in the processed image. The process will now be described referring to FIGS. 37 and 38 alternately.
  • the low-resolution image LR image, the high-resolution image HR image, and the enlarged image exLR image being a low-resolution image are input. These images are all produced from HR, and it is ensured that there is no pixel shifting at the time of image capturing. Bicubic interpolation is used for producing the exLR image from the LR image.
  • FIG. 38 three different images are provided, i.e., the high-resolution image HR (the number of pixels: 128 ⁇ 128), the low-resolution LR image (the number of pixels: 32 ⁇ 32), and the exLR image (the number of pixels: 128 ⁇ 128) that is obtained by matching LR to HR in terms only of the number of pixels.
  • SWT transformation discrete stationary wavelet transformation
  • a total of 1024 six-dimensional vectors of the textonized LRW image are clustered down to Cmax vectors.
  • the collection of the resulting 512 texton vectors is referred to as the “cluster C”. All of the 1024 textons may be used without clustering.
  • the process determines LR pixels identified to be the same cluster as the cluster C. Specifically, the pixel values of the LR image are replaced by the texton numbers of the cluster C.
  • the process searches for a pixel cell of exLR and a pixel cell of the HR image corresponding to the subject texton, and stores the subject cell number.
  • This searching process needs to be performed only for the number of pixels of the LR image, thus providing a significant reduction in the searching time in processes with high factors.
  • FIG. 38 The relationship between a pixel of the LR image, a pixel cell of the exLR image and a pixel cell of the HR image will be described with reference to FIG. 38 .
  • the numbers of the two cell positions are stored as having the subject texton.
  • these groups of pixel cells are textonized by pairs of exLR images and HR images. Specifically, a two-dimensional discrete stationary wavelet transformation is performed, thereby producing an exLRW image and an HRW image.
  • pairs of textons obtained from the HRW image and the exLRW image are integrated each in the form of a matrix.
  • Each one is in the form of a 6 ⁇ Data_num matrix.
  • the exLR and HR texton matrices integrated in S 319 and S 320 are denoted as Lf and Hf (size: 6 ⁇ Data_num), respectively, and the matrix to be obtained as M(6 ⁇ 6)
  • the method of least squares in S 322 can be performed as follows.
  • CMat is a group of 6 ⁇ 6 conversion matrices each defined for one cluster number.
  • FIG. 39 is a diagram showing the process of the two-dimensional discrete stationary wavelet transformation.
  • the image shrinks as the stage of decomposition progresses while the filter bank configuration remains the same.
  • the two filters i.e., the scaling function F and the wavelet function G
  • the two filters are upsampled (t) and elongated by a power of 2, thus realizing a multiple-resolution analysis.
  • the Haar basis the specific values of F and G and how the upsampling is performed are as shown in Table 1.
  • FIG. 40 shows an exemplary resulting image obtained when performing a two-dimensional discrete stationary wavelet transformation on a test image.
  • a texton vector is what is obtained by arranging corresponding values for each pixel of 1-STEP and 2-STEP transformed images of these wavelets, and is a seven-dimensional vector as follows.
  • the high-resolution transformation is performed by using only the six-dimensional vector portion, except for cA2 being the 2-STEP LL component, while the cA2 component is stored.
  • the number of steps of the wavelet transformation is set to 2-STEP both in S 314 and in S 318 .
  • 2-STEP is used in S 314 for clustering the LR image because it may not be possible with 1-STEP to obtain sufficient information for the surrounding pixels.
  • S 318 for producing textons used for increasing the resolution of the exLR image it has been experimentally confirmed that a better image can be obtained with 3-STEP than with 2-STEP for a factor of 8 ⁇ 8. Thus, it is preferred to determine the number of steps in view of the factor of magnification.
  • FIG. 41 is a PAD diagram illustrating the flow of the process being performed
  • FIG. 42 is a diagram showing the relationship of pixel cells when the process is performed.
  • an LR image and an exLR image obtained by enlarging the LR image are input.
  • the number of pixels of the LR image is 32 ⁇ 32 and the number of pixels of the exLR image is 128 ⁇ 128.
  • the exLR image is produced by a bicubic method as is the method for producing the exLR image, which is an image learned, in S 313 of FIG. 37 .
  • SWT transformation discrete stationary wavelet transformation
  • a texton vector of the shortest distance within the cluster C (Cmax textons) is searched for each texton to obtain the texton number (Ci).
  • This is equivalent to texton numbers of C 0 , C 1 , . . . , Cn being assigned to pixels 2011 , 2012 , . . . , 2013 along one line of the LR image in FIG. 42 .
  • the process proceeds to S 337 .
  • the process is to repeatedly process each cell of the HR image from one scanning line to another. Specifically, in FIG. 42 , as cells 2014 , 2015 , . . . , 2016 of the exLR image are processed, the resolutions of corresponding cells 2023 , 2024 , . . . , 2025 of the HR image are successively increased.
  • the subject cell region of the exLR image is textonized. Specifically, a two-dimensional discrete stationary wavelet transformation is performed to produce an exLRW image. Cells 2017 , 2018 , . . . , 2019 , etc., are produced.
  • the conversion matrix CMat is subtracted from the texton number to thereby determine the conversion matrix M in the subject cell.
  • the process is performed as shown in FIG. 42 .
  • a separate 6 ⁇ 6 conversion matrix M can be selected from Mat using C 0 , C 1 , . . . , Cn as texton numbers for the cells.
  • cells 2020 , 2021 , . . . , 2022 of the HRW image are produced from the cells 2017 , 2018 , . . . , 2019 of the exLRW image, respectively.
  • the seven-dimensional texton is produced by adding the LL component of 2-STEP of the exLRW image to the six-dimensional texton in these resolution-increased cells.
  • the inverse SWT (ISWT) transformation can be realized by the signal flow shown in FIG. 43 .
  • This is substantially the same representation as FIG. 39 .
  • the image is enlarged as the stage of decomposition progresses while the filter bank configuration remains the same.
  • the present inverse transformation the transformed image size remains unchanged as the stage of decomposition progresses, and the two filters, i.e., the scaling function F and the wavelet function G 1 , are downsampled ( ⁇ ) and shortened by a power of 2, thus realizing a multiple-resolution analysis.
  • the specific values of F and G 1 and how the downsampling is performed are as shown in Table 2.
  • the resolution of one component of an albedo image is increased as described above. By performing this process for the entire albedo image, a resolution-increased albedo image is synthesized.
  • the image may be normalized so that the process can be performed even if the size, orientation, direction, etc., of the object included in the albedo image change. It can be assumed that a texton-based super-resolution process may not exhibit a sufficient super-resolution precision when the size or the orientation in the albedo image are different from those of the learned data.
  • a plurality of pairs of albedo images are provided to solve this problem. Specifically, the process synthesizes images obtained by rotating an albedo image by 30 degrees, and the super-resolution process is performed on all of the images, so as to accommodate changes in the orientation or the direction. In such a case, in the process of searching for a texton of the shortest distance in step S 336 of FIG.
  • the process may search for a texton of the shortest distance for each of the textons of a plurality of LR images obtained from images resulting from the rotation process to thereby search for one with the shortest distance, thus obtaining the texton number (Ci).
  • the process may synthesize albedo images obtained while varying the image size.
  • an enlarging/shrinking process may be performed so that a 5 cm ⁇ 5 cm image is always turned to an 8 ⁇ 8 pixels, for example, and textons may be produced for such an image. Since the size of the object is known by the shape information obtaining section 204 , the size variations may be accommodated by producing textons from images of the same size for “Learning Process” and for “Super-resolution Process”.
  • a plurality of pairs of textons may be produced while rotating the albedo image “Learning Process” instead of rotating the albedo image “Super-resolution Process”, and the cluster C and the learned conversion matrix CMat may be stored in the albedo DB 208 .
  • the process may estimate what the input object is, and perform an orientation estimation to estimate how the estimated object is rotating.
  • a process can be realized by widely-used image recognition techniques. For example, this can be done by placing a tag such as RFID on the object so that the process can recognize the object by recognizing the tag information and further estimate the shape information of the object from the tag information, whereby an orientation estimation is performed based on the image or the shape information of the object (see, for example, Japanese Laid-Open Patent Publication No. 2005-346348).
  • a diffuse image super-resolution section 209 synthesizes a high-resolution diffuse image from a high-resolution albedo image synthesized by the albedo super-resolution section 207 . This process will now be described.
  • an albedo image is what is obtained by dividing the diffuse component image by the inner product between the light source vector and the normal vector of the object. Therefore, the process synthesizes a high-resolution diffuse image by multiplying the albedo image by the inner product between the light source vector estimated by the light source information estimating section 203 and the high-resolution normal direction vector of the object obtained by the parameter resolution increasing section to be described later. Where a plurality of light sources are estimated by the light source information estimating section 203 , the process synthesizes a high-resolution diffuse image for each of the light sources and combines together the images to synthesize a single super-resolution diffuse image.
  • the process multiplies the pseudo-albedo image by the inner product between the light source vector estimated by the light source information estimating section 203 and the high-density normal vector of the object obtained by a shape information resolution increasing section 211 , and further multiplies it by the maximum luminance value i sf — max of the specular reflection image used for normalization, thus synthesizing a super-resolution diffuse reflection image. Since the maximum luminance value i sf — max of the specular reflection image used in normalization is stored in the memory by the albedo estimating section 206 , the process can simply read out the stored information.
  • the process multiplies it by the maximum luminance value of the diffuse reflection image or the maximum luminance value of the input image used in normalization, instead of multiplying it by the maximum luminance value i sf — max of the specular reflection image.
  • the process described above it is possible to synthesize a super-resolution diffuse reflection image. While the super-resolution process is performed by using an albedo image, the process may directly perform super-resolution of a diffuse reflection image rather than the albedo image. In such a case, the learning process may be performed by using the diffuse reflection image.
  • a parameter estimating section 210 estimates parameters representing the object.
  • a method using the Cook-Torrance model which is widely used in the field of computer graphics, will be described.
  • E i denotes the incident illuminance, ⁇ s, ⁇ the bidirectional reflectance of the specular reflection component at the wavelength ⁇ , n the normal vector of the object, V the viewing vector, L the light source vector, H the halfway vector between the viewing vector and the light source vector, and ⁇ the angle between the halfway vector H and the normal vector n.
  • F ⁇ is the Fresnel coefficient being the ratio of the reflected light from the dielectric surface obtained from the Fresnel formula
  • D is the microfacet distribution function
  • G is the geometric attenuation factor representing the influence of shading by the irregularities on the object surface.
  • n ⁇ is the refractive index of the object
  • m is a coefficient representing the roughness of the object surface
  • I j is the radiance of the incident light
  • k s is a coefficient of the specular reflection component.
  • ⁇ d denotes the reflectance (albedo) of the diffuse reflection component, dpx and dpy the length of one pixel of the imaging device in the x direction and the y direction, respectively, and r the distance from the observing point O to the imaging device.
  • k d is a coefficient satisfying the following relationship.
  • FIG. 44 is a schematic diagram illustrating the constant Sr.
  • the diffuse reflection component energy reflected at the observing point O spreads hemispherically.
  • the ratio Sr between the energy reaching one imaging element of the imaging device and the total energy reflected at the observing point O is expressed by (Expression 48).
  • the parameter estimating section 210 estimates parameters from (Expression 37) to (Expression 45), (Expression 46), (Expression 47) and (Expression 48).
  • the coefficient k d of diffuse reflection component and the reflectance (albedo) ⁇ d of the diffuse reflection component are also unknown parameters, but these are not estimated so as to estimate only the parameters of the specular reflection component.
  • FIG. 45 is a flow chart showing the process of the parameter estimating section 210 .
  • the process includes the following two steps.
  • the incident illuminance E i is obtained by using the light source information (step S 351 ).
  • the process uses the light source position information obtained by the light source information estimating section 203 , the distance information between the imaging device and the object obtained by the shape information obtaining section 204 , and the light source illuminance obtained by the light source information obtaining section 203 . This is obtained from the following expression.
  • I i denotes the incident illuminance of the light source 1007 measured by an illuminance meter 1018 provided in the imaging device 1001 , R 1 the distance between the imaging device 1001 and the light source 1007 , R 2 the distance between the light source 1007 and the observing point O, ⁇ 1 the angle between the normal 1019 at the observing point O and the light source direction 1010 C, and ⁇ 2 the angle between the optical axis direction 1005 in the imaging device 1001 and the light source direction 1010 A (see FIG. 46 ).
  • the distance R 2 will be equal at all the observing points O on the object. Therefore, (R 1 /R 2 ) in (Expression 50) becomes a constant, and no longer needs to be actually measured.
  • the simplex method is a method in which variables are assigned to vertices of a shape called a “simplex”, and a function is optimized by changing the size and shape of the simplex (Noboru Ota, “Basics Of Color Reproduction Optics”, pp. 90-92, Corona Publishing Co., Ltd.).
  • a simplex is a collection of (n+1) points in an n-dimensional space.
  • n is an unknown number to be estimated and is herein “3”. Therefore, the simplex is a tetrahedron.
  • vectors x i representing the vertices of the simplex
  • new vectors are defined as follows.
  • ⁇ (>0), ⁇ (>1) and ⁇ (1> ⁇ >0) are coefficients.
  • the simplex method is based on the assumption that by selecting one of the vertices of the simplex that has the greatest function value, the function value in the reflection will be small. If this assumption is correct, it is possible to obtain the minimum value of the function by repeating the same process. Specifically, parameters given by initial values are updated by the three operations repeatedly until the error with respect to the target represented by the evaluation function becomes less than the threshold.
  • m, ⁇ ⁇ and k s are used as parameters, and the difference ⁇ I s between the specular reflection component image calculated from (Expression 37) and the specular reflection component image obtained by the diffuse reflection/specular reflection separating section 202 , represented by (Expression 56), is used as the evaluation function.
  • i s(i,j) ′ and i s(i,j) are the calculated specular reflection image estimate value I s ′, and the luminance value of the pixel (i,j) of the specular reflection component image I s obtained by the diffuse reflection/specular reflection separating section 202 , and M s(i,j) is a function that takes 1 when the pixel (i,j) has a specular reflection component and 0 otherwise.
  • FIG. 47 is a flow chart illustrating the flow of this process.
  • the counters n and k for storing the number of times the updating operation has been repeated are initialized to 0 (step S 361 ).
  • the counter n is a counter for storing the number of times the initial value has been changed
  • k is a counter for storing the number of times the candidate parameter has been updated by the simplex for an initial value.
  • step S 362 random numbers are used to determine the initial values of the candidate parameters m′, ⁇ ⁇ ′ and k s ′ of estimate parameters. Based on the physical constraint conditions of the parameters, the range of initial values was determined as follows.
  • the obtained candidate parameters are substituted into (Expression 37) to obtain the specular reflection image estimate value I s ′ (step S 363 ). Furthermore, the difference ⁇ I s between the calculated specular reflection image estimate value I s ′ and the specular reflection component image obtained by the diffuse reflection/specular reflection separating section 202 is obtained from (Expression 56), and this is used as the evaluation function of the simplex method (step S 364 ). If the obtained ⁇ I s is sufficiently small (Yes in step S 365 ), the candidate parameters m′, ⁇ ⁇ ′ and k s ′ are selected as the estimate parameters m, ⁇ ⁇ and k s , assuming that the parameter estimation has been succeeded, thus terminating the process. If ⁇ I s is large (No in step S 365 ), the candidate parameters are updated by the simplex method.
  • step S 366 the number of times update has been done is evaluated.
  • step S 367 the value of the counter k is judged. If the counter k is sufficiently great (No in step S 367 ), it is determined that the operation has been repeated sufficiently, but the value has dropped to the local minimum and the optimal value will not be reached by repeating the update operation, whereby the initial values are changed to attempt to escape from the local minimum. Therefore, 1 is added to the counter n and the counter k is set to 0 (step S 371 ).
  • step S 372 It is determined whether the value of the counter n is higher than the threshold to thereby determine whether the process is continued as it is or the process is terminated as being unprocessable (step S 372 ). If n is greater than the threshold (No in step S 372 ), the process is terminated determining that the image cannot be estimated. If n is smaller than the threshold (Yes in step S 372 ), initial values are re-selected from random numbers within the range of (Expression 57) (step S 362 ) to repeat the process.
  • a threshold for k may be, for example, 100, or the like.
  • step S 367 if the counter k is less than or equal to the threshold (Yes in step S 367 ), the candidate parameters are changed by using (Expression 53) to (Expression 55) (step S 368 ). This process will be described later.
  • the modified candidate parameters are meaningful as a solution (step S 369 ).
  • the modified parameters may become physically meaningless values (for example, the roughness parameter m being a negative value) as the simplex method is repeated, and such a possibility is eliminated.
  • the following conditions may be given so that a parameter is determined to be meaningful if it satisfies the condition and meaningless otherwise.
  • the refractive index ⁇ ⁇ is a value determined by the material of the object. For example, it is known to be 1.5-1.7 for plastic and 1.5-1.9 for glass, and these values can be used. Thus, if the object is plastic, the refractive index ⁇ ⁇ can be set to 1.5-1.7.
  • step S 369 If the modified parameters satisfy (Expression 58) (Yes in step S 369 ), it can be assumed that the candidate parameters are meaningful values, and they are set as new candidate parameters (step S 370 ), and the update process is repeated (step S 363 ). If the modified parameters do not satisfy (Expression 58) (No in step S 369 ), the update process for the initial values is canceled, and the update is performed with new initial values (step S 371 ).
  • FIG. 48 is a flow chart showing the flow of the process.
  • the candidate parameters m′, ⁇ ⁇ ′ and k s ′ are represented as a vector and it is used as the parameter x.
  • step S 382 If ⁇ I s (x r ) is smaller than ⁇ I s (x s ) (Yes in step S 382 ), the evaluation value ⁇ I s (x r ) having gone through the reflection operation and ⁇ I s (x l ) whose evaluation value is currently the best are compared with each other (step S 383 ). If ⁇ I s (x r ) is larger (No in step S 383 ), x h of which the evaluation value is worst is changed to x r (step S 384 ), and the process is terminated.
  • step S 383 If ⁇ I s (x r ) is smaller than ⁇ I s (x i ) (Yes in step S 383 ), (Expression 54) is used to perform the expansion process and to calculate the difference ⁇ I s (x e ) between the parameter x e and the specular reflection component image with x e (step S 385 ). Then, the obtained ⁇ I s (x r ) and ⁇ I s (x r ) obtained by the reflection operation are compared with each other (step S 386 ).
  • step S 386 If ⁇ I s (x e ) is smaller than ⁇ I s (x r ) (Yes in step S 386 ), x h of which the evaluation value has been worst is changed to x e (step S 387 ), and the process is terminated.
  • step S 386 If ⁇ I s (x e ) is greater than ⁇ I s (x r ) (No in step S 386 ), x h of which the evaluation value has been worst is changed to x r (step S 387 ), and the process is terminated.
  • step S 382 if ⁇ I s (x r ) is greater than ⁇ I s (x s ) (No in step S 382 ), the evaluation value ⁇ I s (x r ) having gone through the reflection operation and ⁇ I s (x h ) of which the evaluation value is currently worst are compared with each other (step S 388 ).
  • step S 388 If ⁇ I s (x r ) is smaller than ⁇ I s (x h ) (Yes in step S 388 ), x h of which the evaluation value has been worst is changed to x r (step S 389 ), and (Expression 55) is used to calculate the difference ⁇ I s (x c ) between the parameter x c having gone through the contraction operation and the specular reflection component image with x c (step S 390 ).
  • step S 388 If ⁇ I s (x r ) is greater than ⁇ I s (x h ) (No in step S 388 ), the difference ⁇ I s (x c ) between the parameter x c having gone through the contraction operation and the specular reflection component image with x c is calculated (step S 390 ) without changing x h .
  • step S 391 the obtained ⁇ I s (x c ) and ⁇ I s (x h ) of which the evaluation value is worst are compared with each other (step S 391 ). If ⁇ I s (x c ) is smaller than ⁇ I s (x h ) (Yes in step S 391 ), x h of which the evaluation value has been worst is changed to x c (step S 392 ), and the process is terminated.
  • the model used for the parameter estimation does not need to be the Cook-Torrance model, but may be, for example, the Torrance-Sparrow model, the Phong model, or the simplified Torrance-Sparrow model (for example, K. Ikeuchi and K. Sato, “Determining Reflectance Properties Of An Object Using Range And Brightness Images”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 11, pp. 1139-1153, 1991).
  • the parameter estimating method does not need to be the simplex method, but may be an ordinary parameter estimating method, such as, for example, the gradient method or the method of least squares.
  • the process described above may be performed for each pixel, or an equal set of parameters may be estimated for each of divided regions. Where the process is performed for each pixel, it is preferred to obtain samples in which known parameters such as the normal vector n of the object, the light source vector L or the viewing vector V are varied by moving the light source, the imaging device or the object. Where the process is performed for each region, it is preferred that the division of regions is changed so that variations in the parameters obtained for each region are little so as to realize an optimal parameter estimation.
  • the normal information resolution increasing section 211 increases the resolution of the surface normal information obtained by the shape information obtaining section 204 . This is realized as follows.
  • the surface normal information obtained by the shape information obtaining section 204 is projected onto the image obtained by the image-capturing section 201 to obtain the normal direction corresponding to each pixel in the image.
  • a process can be realized by performing a conventional camera calibration process (for example, Hiroki Unten, Katsushi Ikeuchi, “Texturing 3D Geometric Model For Virtualization Of Real-World Object”, CVIM-149-34, pp. 301-316, 2005).
  • the normal vector n p is represented by polar coordinates, and the values are denoted as ⁇ p and ⁇ p (see FIG. 49 ).
  • the images of ⁇ and ⁇ being the normal components are produced by the process described above.
  • the resolutions of the obtained ⁇ and ⁇ images are increased by a method similar to the albedo super-resolution section 207 described above to thereby estimate high-resolution normal information.
  • a learning process is performed before the resolution increasing process to store the cluster C for the normal ⁇ and ⁇ components and the learned conversion matrix CMat in a normal DB 212 .
  • the process described above is preferably performed only for those areas that are not removed by the shadow removing section 205 as being shadows. This is for preventing an error in the parameter estimating process from occurring due to the presence of shadows.
  • the parameter estimating section 210 may use a controllable light source provided in the vicinity of the imaging device.
  • the light source may be a flashlight of a digital camera.
  • a flashlighted image captured with a flashlight and a non-flashlighted image captured without a flashlight may be captured temporally continuously, and the parameter estimation may be performed by using the differential image therebetween.
  • the positional relationship between the imaging device and the flashlight being the light source is known, and the light source information of the flashlight such as the three-dimensional position, the color and the intensity can also be measured in advance. Since the imaging device and the flashlight are provided at positions very close to each other, it is possible to capture an image with little shadow. Therefore, parameters can be estimated for most of the pixels in the image.
  • a parameter resolution increasing section 213 increases the resolution of the parameter obtained by the parameter estimating section 210 .
  • a simple linear interpolation is performed for increasing the resolution of all the parameters.
  • a learning-based super-resolution method such as the albedo super-resolution section 207 described above may be used.
  • the resolution increasing method may be switched from one to another for different parameters. For example, it can be assumed that the value of the refractive index ⁇ ⁇ of the object being an estimate parameter will not change even if the resolution thereof is increased. Therefore, the resolution may be increased by simple interpolation for the refractive index ⁇ ⁇ of the object, whereas a learning-based super-resolution process may be performed for the diffuse reflection component coefficient k d , the specular reflection component coefficient k s and the reflectance (albedo) ⁇ d of the diffuse reflection component.
  • a specular reflection image super-resolution section 214 synthesizes a high-resolution specular reflection image by using the high-resolution normal information estimated by the normal information resolution increasing section 211 and parameters whose resolutions have been increased by the parameter resolution increasing section 214 .
  • the high-resolution specular reflection image is synthesized by substituting resolution-increased parameters into (Expression 37) to (Expression 45).
  • the roughness m of the object surface may be set to a greater value than the estimated value so as to synthesize a specular reflection image in which the shine is stronger than it actually is.
  • a shadow producing section 215 synthesizes a shadow image to be laid over the super-resolution diffuse reflection image and the super-resolution specular reflection image produced by a diffuse reflection image super-resolution section 209 and the specular reflection image super-resolution section 214 . This can be done by using ray tracing, which is used for the shadow removing section 205 .
  • the super-resolution section 217 has knowledge on the three-dimensional shape of the object of image capturing.
  • the shadow producing section 215 obtains the three-dimensional shape data of the object, and estimates the three-dimensional orientation and the three-dimensional position of the object based on the appearance of the object in the captured image.
  • An example of estimating the three-dimensional position and the three-dimensional orientation from the appearance in a case where the object is a human eye cornea is disclosed in K. Nishino and S. K. Nayar, “The World In An Eye”, in Proc. of Computer Vision and Pattern Recognition CVPR '04, vol. I, pp. 444-451, July, 2004.
  • a method of the above article can be applied to such an object.
  • the object surface normal information can be calculated at any point on the object.
  • the process described above is repeated for the captured images to calculate the object surface normal information.
  • it is possible to increase the resolution of the three-dimensional shape of the object by increasing the resolution of the object normal information by using the high-resolution normal information estimated by the normal information resolution increasing section 211 .
  • a high-resolution shadow image is estimated by performing ray tracing by using the high-resolution three-dimensional shape thus obtained and the parameters whose resolution has been increased by the parameter resolution increasing section 213 .
  • a rendering section 216 synthesizes a high-resolution output image by combining together the super-resolution diffuse reflection image synthesized by the diffuse reflection image super-resolution section 209 , the super-resolution specular reflection image synthesized by the specular reflection image super-resolution section 214 and the shadow image synthesized by the shadow producing section 215 .
  • the light source information is very important information that is needed for the shadow removing section 205 , the albedo estimating section 206 , the diffuse reflection image super-resolution section 209 , the parameter estimating section 210 , the specular reflection image super-resolution section 214 and the shadow producing section 215 . Therefore, the light source estimation method of the present invention capable of accurately obtaining light source information is very important for the image super-resolution process.
  • the parameter estimation may be performed also for the diffuse reflection image to perform super-resolution thereof.
  • FIG. 50 is a flow chart showing the flow of the parameter estimating process for the diffuse reflection image. After the process by the parameter estimating section 210 for the specular reflection image shown in FIG. 45 , two further steps as follows are performed.
  • the process estimates k d as follows by using (Expression 49) and k s obtained by the parameter estimation for the specular reflection image (step S 353 ).
  • the reflectance (albedo) ⁇ d of the diffuse reflection image is estimated as follows by using (Expression 47) (step S 354 ).
  • the super resolution of the diffuse reflection image can be performed by increasing the resolution of the obtained parameters by a method similar to the parameter resolution increasing section 213 .
  • the light source estimation method of the present invention is effective not only for image processes but also for image capturing, for example. Where a polarizing filter is used, for example, it can be installed at an optimal angle. This process will now be described.
  • Polarizing filters referred to as “PL filters” are used for removing the specular reflection light such as those from water surface or window glass.
  • the effect of a polarizing filter varies significantly depending on the polarization axis of the polarizing filter and the plane of incidence (a plane containing the line of incident light onto the object and the line of reflected light). Therefore, where an image is captured while rotating the polarizing filter 1016 A by a rotation mechanism as shown in FIG. 28 , the captured image will be significantly different depending on the angle of rotation. For example, it is most effective in a case where the polarization axis is parallel to the plane of incidence.
  • the plane of incidence on the object can be specified because the light source position can be known by using the light source estimation method of the present invention.
  • the rotation mechanism can be controlled so that the polarizing filter is parallel to the estimated plane of incidence.
  • the light source estimation method of the present invention it is possible to realize an image process such as a high-resolution digital zooming process and effective image capturing.
  • the present invention it is possible to obtain a light source image and to estimate light source information with no additional imaging devices. Therefore, the present invention is useful in performing an image process such as a super-resolution process in a camera-equipped mobile telephone, a digital camera or a digital video camera, for example.

Abstract

An imaging device condition determination section determines whether a condition of an imaging device is suitable for obtaining light source information. When it is determined to be suitable, a light source image obtaining section obtains a light source image by the imaging device. A first imaging device information obtaining section obtains first imaging device information representing the condition of the imaging device at a point in time when the light source image is obtained. A second imaging device information obtaining section obtains second imaging device information representing the condition of the imaging device at a time of actual image capturing. A light source information estimating section estimates light source information at the time of image capturing by using the light source image and the first and second imaging device information.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This is a continuation of Application PCT/JP2007/060833 filed on May 28, 2007. This Non-provisional application claims priority under 35 U.S.C. § 119(a) on Patent Application No. 2006-147756 filed in Japan on May 29, 2006, the entire contents of which are hereby incorporated by reference.
  • TECHNICAL FIELD
  • The present invention relates to techniques that are important in capturing images, processing images and synthesizing images in an ordinary environment, i.e., a technique for estimating light source information such as the position, direction, luminance, spectrum and color of the light source at the time of image capturing, and a technique for super-resolution using the light source information.
  • BACKGROUND ART
  • The importance of image processing is increasing as the camera-equipped mobile telephones and digital cameras become widespread. There are various such image processing including, for example, a process of super-resolution also known as “digital zooming”, an image recognition process of recognizing and focusing on a human face, and augmented reality in which computer-generated graphics images, being virtual objects, are laid over a real image.
  • These image processing are based on the “appearance” of an object recorded on imaging elements by capturing an image. The appearance of the object is obtained by the imaging elements receiving light from the light source after being reflected at the object surface.
  • Therefore, light source information is very important in image processing. In other words, it is very effective to obtain light source information and use the information when capturing images and processing images. For example, in Patent Document 1, virtual objects are laid over the real world, wherein the light source environment is measured so as to add the reflection of the illuminated light on the virtual objects and the shadows produced by the virtual objects.
  • The light source information is useful not only in image processing but also when a cameraman, being a person who captures an image, obtains an image by an imaging device. For example, in Patent Document 2, the light source position is detected, and the information is used to display the front-lighted condition or the backlighted condition to the cameraman, thereby allowing an ordinary person with no special knowledge to easily take, for example, a backlighted (semi-backlighted) portrait effectively utilizing light and shadows.
  • Patent Document 1: Japanese Laid-Open Patent Publication No. 11-175762
  • Patent Document 2: Japanese Laid-Open Patent Publication No. 8-160507
  • DISCLOSURE OF THE INVENTION Problems to be Solved by the Invention
  • In order to obtain such light source information, a captured image of the light source, i.e., a light source image, is needed. Preferably, a light source image is an image that is:
      • (1) obtained by capturing an image in a different direction than the object; and
      • (2) a near-all-sky, wide-range image.
      • This is for the following reason.
  • (1) an ordinary object, such as a person, is often illuminated from above by an illumination such as a streetlight or a room light. When the light source is located in the same direction as the object, the object may appear dark due to the backlighted condition. Therefore, a cameraman tends to avoid such a position where the object and the light source are present in the same image.
      • (2) The number and positions of light sources are unknown to the cameraman.
  • Therefore, Patent Document 1 and Patent Document 2 each employ an imaging device provided with a fish-eye lens with the optical axis being directed in the zenith direction in order to obtain the light source information.
  • However, capturing an image while directing a fish-eye lens in a direction away from the object requires a second imaging device separately from the imaging device for imaging the object. This imposes a significant burden cost-wise. Particularly with camera-equipped mobile telephones for which there is a strong demand for reducing the size, adding an imaging device imposes a significant problem in terms of size.
  • Moreover, since the imaging device for imaging the object and the imaging device for imaging the light source are separate from each other, an alignment (calibration) between the two imaging devices is necessary. This is due to the fact that the positional relationship between the object and the light source is particularly important for estimating the light source information. In other words, when an object and a light source are imaged with different imaging devices, they need to be associated with each other, which is a very complicated operation.
  • In view of the problem set forth above, an object of the present invention is to make it possible for a device such as a camera-equipped mobile telephone to obtain a light source image being a captured image of a light source and to estimate the light source information with no additional imaging devices.
  • Means for Solving the Problems
  • The present invention provides a light source estimation device and a light source estimation method, in which the process determines whether a condition of an imaging device is suitable for obtaining light source information, captures an image by the imaging device when it is determined to be suitable, obtains the captured image as a light source image, obtains first imaging device information representing the condition of the imaging device at a point in time when the light source image is obtained, obtains second imaging device information representing the condition of the imaging device at the time of image capturing when an image is captured by the imaging device in response to a cameraman's operation, and estimates light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information.
  • According to the present invention, the process determines whether the condition of the imaging device is suitable for obtaining light source information, and obtains a light source image by the imaging device when it is determined to be suitable. At this point in time, the process also obtains first imaging device information representing the condition of the imaging device. Then, the process obtains second imaging device information representing the condition of the imaging device at the time of image capturing when an image is captured by the imaging device in response to a cameraman's operation, and estimates light source information by using the light source image and the first and second imaging device information. Thus, the light source image is obtained when it is determined, by using the imaging device which is used at the time of image capturing, that the condition of the imaging device is suitable for obtaining light source information. Therefore, it is possible to obtain a light source image and to estimate light source information with no additional imaging devices.
  • The present invention also provides a light source estimation device and a light source estimation method, in which the process captures an image by an imaging device to obtain the captured image as a light source image, obtains first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained, obtains second imaging device information representing the condition of the imaging device at the time of image capturing when an image is captured by the imaging device in response to a cameraman's operation, and estimates light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information, wherein a plurality of light source images are obtained while an optical axis direction of the imaging device is being varied by optical axis direction varying means.
  • According to the present invention, the process obtains a plurality of light source images by the imaging device while the optical axis direction of the imaging device is being varied by the optical axis direction varying means. At this point in time, the process also obtains first imaging device information representing the condition of the imaging device. Then, the process obtains second imaging device information representing the condition of the imaging device at the time of image capturing when an image is captured by the imaging device in response to a cameraman's operation, and estimates light source information by using the light source image and the first and second imaging device information. Thus, light source images are obtained by using the imaging device which is used at the time of image capturing while the optical axis direction of the imaging device is being varied. Therefore, it is possible to obtain a light source image over a wide area and to estimate light source information with no additional imaging devices.
  • The present invention also provides a super-resolution device and a super-resolution method, in which the process captures an image by an imaging device, performs a light source information estimation operation of estimating light source information including at least one of a direction and a position of a light source illuminating an object by using the light source estimation method of the present invention, obtains as shape information surface normal information or three-dimensional position information of the object, and performs super-resolution of the captured image by using the light source information and the shape information.
  • EFFECTS OF THE INVENTION
  • According to the present invention, it is possible to obtain a light source image around an object and to estimate light source information without providing additional imaging devices other than the imaging device for imaging the object in devices provided with imaging devices such as camera-equipped mobile telephones. Moreover, it is possible to realize a super-resolution process by using the estimated light source information.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram showing a configuration of a light source estimation device according to a first embodiment of the present invention.
  • FIG. 2 is a schematic diagram showing a configuration of a mobile telephone provided with a light source estimation device of the present invention.
  • FIG. 3 is a diagram showing a camera-equipped mobile telephone in a folded position.
  • FIG. 4 is a flow chart showing the flow of the processes of an imaging device condition determination section and a light source image obtaining section.
  • FIG. 4A is a flow chart showing the flow of the processes of an imaging device condition determination section and a light source image obtaining section.
  • FIG. 5 is a schematic diagram showing a portion of information stored in a memory.
  • FIG. 6 is a schematic diagram illustrating a roll-pitch-yaw angle representation.
  • FIG. 7 is a schematic diagram illustrating the process of extracting a light source pixel.
  • FIG. 8 is a schematic diagram illustrating the relationship between a camera coordinate system and an image coordinate system.
  • FIG. 9 is a schematic diagram illustrating the process of estimating a three-dimensional position of a light source by utilizing the movement of the imaging device.
  • FIG. 10 is a schematic diagram illustrating a method for detecting an optical axis direction by using a weight and a touch sensor.
  • FIG. 11 is a schematic diagram showing a folding-type camera-equipped mobile telephone provided with a weight and a touch sensor.
  • FIG. 12 is a schematic diagram showing a state where a folding-type camera-equipped mobile telephone of FIG. 11 is placed.
  • FIG. 13 is a diagram showing the relationship between the optical axis direction and the ON/OFF state of the touch sensors.
  • FIG. 14 is a schematic diagram showing a state where a digital still camera provided with a weight and a touch sensor is placed.
  • FIG. 15 is a block diagram showing a configuration of a light source estimation device according to a second embodiment of the present invention.
  • FIG. 16 is a schematic diagram illustrating a method for synthesizing a panoramic light source image.
  • FIG. 17 is a schematic diagram showing a process of combining together a plurality of light source images to thereby widen an apparent range of viewing field.
  • FIG. 18 is a schematic diagram illustrating a method for synthesizing a panoramic light source image in a case where a rectangular parallelepiped is used as a projection plane.
  • FIG. 19 is a block diagram showing a configuration of a light source estimation device according to a third embodiment of the present invention.
  • FIG. 20 is an external view of a folding-type camera-equipped mobile telephone provided with the light source estimation device according to the third embodiment of the present invention.
  • FIG. 21 is a schematic diagram showing the movement of the folding-type camera-equipped mobile telephone of FIG. 20 when an open/close switch is pressed.
  • FIG. 22 is a block diagram showing another configuration of the light source estimation device according to the third embodiment of the present invention.
  • FIG. 23 is a schematic diagram illustrating an angle of vibration by a vibration mechanism.
  • FIG. 24 is a block diagram showing another configuration of the light source estimation device according to the third embodiment of the present invention.
  • FIG. 25 is a block diagram showing a configuration of a light source estimation system according to a fourth embodiment of the present invention.
  • FIG. 26 is a diagram showing an example of how an image is separated into a diffuse reflection image and a specular reflection image.
  • FIG. 27 is a block diagram showing a configuration of a super-resolution device in one embodiment of the present invention.
  • FIG. 28 is a diagram showing a camera-equipped mobile telephone provided with a super-resolution device in one embodiment of the present invention.
  • FIG. 29 is a graph showing variations in a reflected light intensity when a polarizing filter is rotated under linearly-polarized light.
  • FIG. 30 is a flow chart showing the flow of a process of separating a specular reflection image and a diffuse reflection image from each other by using a polarizing filter.
  • FIG. 31 is a schematic diagram illustrating an imaging device in which a polarization direction is varied from one pixel to another.
  • FIG. 32 is a schematic diagram illustrating a process of obtaining a distance and a three-dimensional position of an object by using a photometric stereo method.
  • FIG. 33 is a schematic diagram illustrating a process of obtaining shape information by using polarization characteristics of reflected light.
  • FIG. 34 is a graph showing variations in a reflected light intensity when a polarizing filter is rotated under natural light.
  • FIG. 35 is a schematic diagram showing the concept of a texton-based super-resolution process.
  • FIG. 36 is a conceptual diagram illustrating a texton-based super-resolution process using a linear matrix transformation.
  • FIG. 37 is a PAD diagram showing the flow of a learning process in a texton-based super-resolution process.
  • FIG. 38 is a schematic diagram illustrating a learning process in a texton-based super-resolution process.
  • FIG. 39 is a diagram showing a process of a two-dimensional discrete stationary wavelet transformation.
  • FIG. 40 shows an exemplary image result when a two-dimensional discrete stationary wavelet transformation is performed on a test image.
  • FIG. 41 is a PAD diagram showing the flow of a texton-based super-resolution process being performed.
  • FIG. 42 is a schematic diagram illustrating a texton-based super-resolution process being performed.
  • FIG. 43 is a diagram showing a process of a two-dimensional discrete stationary inverse wavelet transformation.
  • FIG. 44 is a schematic diagram illustrating a constant Sr for representing a difference in the luminance value between a diffuse reflection component and a specular reflection component.
  • FIG. 45 is a diagram showing the flow of a parameter estimating process for a specular reflection image in a super-resolution process in one embodiment of the present invention.
  • FIG. 46 is a conceptual diagram illustrating parameters of an expression representing the incident illuminance.
  • FIG. 47 is a flow chart showing the flow of a parameter estimating process by a simplex method.
  • FIG. 48 is a flow chart showing the flow of a parameter updating process in a simplex method.
  • FIG. 49 is a schematic diagram illustrating a polar coordinates representation.
  • FIG. 50 is a diagram showing the flow of a parameter estimating process for a diffuse reflection image in a super-resolution process in one embodiment of the present invention.
  • FIG. 51 is a diagram showing data stored in a memory where a pseudo-albedo is used.
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • A first aspect of the present invention provides a light source estimation device, including: an imaging device condition determination section for determining whether a condition of an imaging device is suitable for obtaining light source information; a light source image obtaining section for capturing an image by the imaging device when it is determined to be suitable by the imaging device condition determination section, to thereby obtain the captured image as a light source image; a first imaging device information obtaining section for obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained by the light source image obtaining section; a second imaging device information obtaining section for obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation; and a light source information estimating section for estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information.
  • A second aspect of the present invention provides the light source estimation device of the first aspect, wherein the imaging device condition determination section detects a direction of an optical axis of the imaging device and determines it to be suitable when the optical axis is pointing upward.
  • A third aspect of the present invention provides the light source estimation device of the first aspect, wherein the light source image obtaining section obtains the light source image after confirming that an image is not being captured by the imaging device in response to a cameraman's operation.
  • A fourth aspect of the present invention provides the light source estimation device of the first aspect, wherein the light source information estimating section estimates, in addition to at least one of a direction and a position of the light source, at least one of luminance, color and spectrum information of the light source.
  • A fifth aspect of the present invention provides the light source estimation device of the first aspect, wherein: the light source image obtaining section obtains a plurality of the light source images; the first imaging device information obtaining section obtains the first imaging device information every time the light source image is obtained by the light source image obtaining section; the light source estimation device includes a light source image synthesis section for synthesizing a panoramic light source image from the plurality of light source images obtained by the light source image obtaining section by using the plurality of first imaging device information obtained by the first imaging device information obtaining section; and the light source information estimating section estimates the light source information by using the panoramic light source image and the second imaging device information.
  • A sixth aspect of the present invention provides the light source estimation device of the first aspect, including optical axis direction varying means for varying an optical axis direction of the imaging device, wherein a plurality of light source images are obtained by the light source image obtaining section while the optical axis direction of the imaging device is being varied by the optical axis direction varying means.
  • A seventh aspect of the present invention provides the light source estimation device of the sixth aspect, wherein: the light source estimation device is provided in a folding-type mobile telephone; and the optical axis direction varying means is an open/close mechanism for opening/closing the folding-type mobile telephone.
  • An eighth aspect of the present invention provides the light source estimation device of the sixth aspect, wherein the optical axis direction varying means is a vibration mechanism.
  • A ninth aspect of the present invention provides a light source estimation device, including: a light source image obtaining section for capturing an image by an imaging device to obtain the captured image as a light source image; a first imaging device information obtaining section for obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained by the light source image obtaining section; a second imaging device information obtaining section for obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation; a light source information estimating section for estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information; and optical axis direction varying means for varying an optical axis direction of the imaging device, wherein a plurality of light source images are obtained by the light source image obtaining section while the optical axis direction of the imaging device is being varied by the optical axis direction varying means.
  • A tenth aspect of the present invention provides a light source estimation system for estimating light source information, including: a communication terminal including the imaging device condition determination section, the light source image obtaining section, the first imaging device information obtaining section and the second imaging device information obtaining section of the first aspect, wherein the communication terminal transmits the light source image obtained by the light source image obtaining section, the first imaging device information obtained by the first imaging device information obtaining section, and the second imaging device information obtained by the second imaging device information obtaining section; and a server including the light source information estimating section of the first aspect, wherein the server receives the light source image and the first and second imaging device information transmitted from the communication terminal to give the light source image and the first and second imaging device information to the light source information estimating section.
  • An eleventh aspect of the present invention provides a light source estimation method, including: a first step of determining whether a condition of an imaging device is suitable for obtaining light source information; a second step of capturing an image by the imaging device when it is determined to be suitable in the first step, to thereby obtain the captured image as a light source image; a third step of obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained in the second step; a fourth step of obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation; and a fifth step of estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information.
  • A twelfth aspect of the present invention provides a light source estimation method, including: a first step of capturing an image by an imaging device to obtain the captured image as a light source image; a second step of obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained in the first step; a third step of obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation; and a fourth step of estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information, wherein in the first step, an optical axis direction of the imaging device is varied by optical axis direction varying means, and a plurality of light source images are obtained while the optical axis direction of the imaging device is being varied.
  • A thirteenth aspect of the present invention provides a super-resolution device, including: an image-capturing section for capturing an image by an imaging device; a light source information estimating section for estimating light source information including at least one of a direction and a position of a light source illuminating an object, by the light source estimation method of the eleventh or twelfth aspect; a shape information obtaining section for obtaining, as shape information, surface normal information or three-dimensional position information of the object; and a super-resolution section for super-resolution of the image captured by the image-capturing section by using the light source information and the shape information.
  • A fourteenth aspect of the present invention provides the super-resolution device of the thirteenth aspect, wherein the super-resolution section separates an image captured by the image-capturing section into a diffuse reflection component and a specular reflection component, and separately enhances resolutions of the diffuse reflection component and the specular reflection component separated from each other.
  • A fifteenth aspect of the present invention provides the super-resolution device of the thirteenth aspect, wherein the super-resolution section decomposes an image captured by the image-capturing section into parameters, and separately enhances resolutions of the decomposed parameters.
  • A sixteenth aspect of the present invention provides a super-resolution method, including: a first step of capturing an image by an imaging device; a second step of estimating light source information including at least one of a direction and a position of a light source illuminating an object, by the light source estimation method of the eleventh or twelfth aspect; a third step of obtaining, as shape information, surface normal information or three-dimensional position information of the object; and a fourth step of super-resolution of the image captured in the first step by using the light source information and the shape information.
  • Embodiments of the present invention will now be described with reference to the drawings.
  • FIRST EMBODIMENT
  • FIG. 1 is a block diagram showing a configuration of a light source estimation device according to a first embodiment of the present invention. In FIG. 1, 1001 denotes an imaging device using CCDs, CMOSes, or the like, and 1002 a shutter button by which a cameraman, being a person who captures an image, instructs the imaging device 1001 to capture an image. The imaging device 1001 is provided with a 3-degree-of-freedom (3DOF) sensor 1025.
  • Moreover, 101 denotes an imaging device condition determination section for determining whether the condition of the imaging device 1001 is suitable for obtaining light source information, 102 a light source image obtaining section for capturing an image by the imaging device 1001 to obtain the captured image as a light source image when it is determined by the imaging device condition determination section 101 to be suitable, 103 a first imaging device information obtaining section for obtaining first imaging device information representing the condition of the imaging device 1001 when the light source image is obtained by the light source image obtaining section 102, 104 a second imaging device information obtaining section for obtaining second imaging device information representing the condition of the imaging device at the time of image capturing when the image is captured by the imaging device 1001 in response to a cameraman's operation, and 105 a light source information estimating section for estimating at least one of the direction and the position of the light source at the time of image capturing based on the light source image obtained by the light source image obtaining section 102, the first imaging device information obtained by the first imaging device information obtaining section 103, and the second imaging device information obtained by the second imaging device information obtaining section 104.
  • It is assumed herein that the imaging device condition determination section 101, the light source image obtaining section 102, the first imaging device information obtaining section 103, the second imaging device information obtaining section 104 and the light source information estimating section 105 are implemented as a program or programs executed by a CPU 1029. Note however that all or some of these functions may be implemented as hardware. A memory 1028 stores the light source image obtained by the light source image obtaining section 102, and the first imaging device information obtained by the first imaging device information obtaining section 103.
  • FIG. 2 shows an exemplary configuration of a folding-type camera-equipped mobile telephone 1000 provided with the light source estimation device of the present embodiment. In FIG. 2, like elements to those shown in FIG. 1 are denoted by like reference numerals. In the folding-type camera-equipped mobile telephone 1000 of FIG. 2, the imaging device 1001 includes a polarizing filter 1016, and also includes a motor 1026 a for rotating the polarizing filter 1016 and an encoder 1027 a for detecting the angle of rotation thereof. A motor 1026 b for driving the folding mechanism, and an encoder 1027 b for detecting the angle of rotation thereof are also provided. A vibration mode switch 1034 is also provided.
  • FIG. 3 is a diagram showing the folding-type camera-equipped mobile telephone 1000 of FIG. 2 in a folded position. In FIG. 3, 1005 denotes the optical axis direction of the imaging device 1001, and 1006 a field of view of the imaging device 1001.
  • The operation of each component of the light source estimation device of the present embodiment will now be described.
  • The imaging device condition determination section 101 determines whether the condition of the imaging device 1001 is suitable for obtaining light source information. The most ordinary light source may be a lighting device in a house, and may be a streetlight or the sunlight in the outdoors. Therefore, if the imaging direction, i.e., the direction of the optical axis, of the imaging device 1001 is upward, it can be determined to be a suitable condition for the imaging device 1001 to obtain light source information. Thus, the imaging device condition determination section 101 uses the output of the angle sensor 1025 provided in the imaging device 1001 to detect the direction of the optical axis of the imaging device 1001 so as to determine that it is a suitable condition for obtaining light source information when the optical axis is pointing upward. Then, the imaging device condition determination section 101 sends an image-capturing prompting signal to the light source image obtaining section 102.
  • When an image-capturing prompting signal is received from the imaging device condition determination section 101, i.e., when it is determined by the imaging device condition determination section 101 that the condition of the imaging device 1001 is suitable for obtaining light source information, the light source image obtaining section 102 captures an image by the imaging device 1001 to obtain the captured image as a light source image. The obtained light source image is stored in the memory 1028.
  • In this process, it is preferred that the light source image obtaining section 102 obtains a light source image after confirming that an image is not being captured by a cameraman's operation. For example, a light source image may be captured after confirming that the shutter button 1002 is not being pressed.
  • The light source image obtaining section 102 captures a light source image in a period during which an image is not being captured, in view of the cameraman's intention of capturing an image. With the light source estimation device of the present embodiment, a light source image is captured by using the imaging device 1001, which is used for imaging an object. Therefore, if the process of capturing a light source image is performed when the cameraman is about to image an object, the cameraman will not be able to image the object at the intended moment, thus neglecting the cameraman's intention of capturing an image.
  • Therefore, in the present embodiment, in order to reflect the cameraman's intention of capturing an image, a light source image is captured in a period during which it can be assumed that the cameraman will not capture an image, e.g., in a period during which the device is left on a table, or the like. For example, when the folding-type camera-equipped mobile telephone 1000 of FIG. 3 is left on a table, or the like, it can be assumed that the optical axis direction 1005 is upward. Under this condition, it is possible to capture an optimal light source image.
  • FIG. 4 is a flow chart showing exemplary processes of the imaging device condition determination section 101 and the light source image obtaining section 102. First, the imaging device condition determination section 101 detects the optical axis direction of the imaging device 1001, and determines whether the optical axis direction is upward (step S121). If the optical axis direction is not upward (No in step S121), the optical axis direction is repeatedly checked until the optical axis direction is upward. If the optical axis direction is upward (Yes in step S122), the light source image obtaining section 102 checks the shutter button 1002 (step S122). When the shutter button 1002 is being pressed for performing a process such as auto-focusing (AF) (No in step S122), it is likely that an image is being captured, and therefore the process of capturing a light source image is not performed. When the shutter button 1002 is not being pressed (Yes in step S122), the light source image obtaining section 102 captures an image by the imaging device 1001 to obtain a light source image (step S123).
  • While whether an image is being captured by a cameraman's operation is herein determined by checking the shutter button, the method for determining whether a cameraman has an intention of capturing an image is not limited to this. For example, a message “Capturing an image?” for checking whether an image is being captured may be shown on the display, wherein it is determined that the cameraman has no intention of capturing an image if the cameraman expressly indicates “No” or if there is no response at all.
  • Alternatively, an acceleration sensor, or the like, may be used, wherein a light source image is obtained when the imaging device 1001 is stationary. Specifically, when the imaging device 1001 is stationary, it can be determined that the imaging device 1001 is not being held by the cameraman but is left on a table, or the like. Therefore, in such a case, it is likely that the cameraman is not capturing an image. When the cameraman is holding the imaging device 1001 for capturing an image, the acceleration sensor senses the camera shake. The light source image obtaining section 102 may be configured not to capture an image in such a case.
  • It is also possible to determine whether the cameraman has an intention of capturing an image by using the vibration mode. This will be described in detail.
  • FIG. 4A is a flow chart showing exemplary processes of the imaging device condition determination section 101 and the light source image obtaining section 102 in a case where whether the cameraman has an intention of capturing an image is determined by using the vibration mode. First, the imaging device condition determination section 101 detects the optical axis direction of the imaging device 1001, and determines whether the optical axis direction is upward (step S121). If the optical axis direction is not upward (No in step S121), the optical axis direction is repeatedly checked at regular intervals until the optical axis direction is upward. If the optical axis direction is upward (Yes in step S121), the light source image obtaining section 102 checks the vibration mode (step S124). If the vibration mode switch 1034 is OFF (No in step S124), a light source image is not captured because it is likely that an image is to be captured. If the vibration mode switch 1034 is ON (Yes in step S124), the light source image obtaining section 102 captures an image by the imaging device 1001 to obtain a light source image (step S123).
  • If the drive mode is set as the vibration mode, it can be assumed that the cameraman is traveling, whereby the process may choose not to capture a light source image. In other words, a light source image is captured in the silent mode but not in the drive mode.
  • When a light source image is obtained by the light source image obtaining section 102, the first imaging device information obtaining section 103 obtains first imaging device information representing the condition of the imaging device 1001. Specifically, the output of the angle sensor 1025 and the focal distance information of the imaging device 1001 are obtained as the first imaging device information, for example. The obtained first imaging device information is stored in the memory 1028. FIG. 5 is a schematic diagram showing part of information stored in the memory 1028. The angle sensor output and the focal distance for a light source image are stored as the first imaging device information.
  • The orientation information of the imaging device 1001 is represented by the following 3×3 matrix Rlight by using the output of the angle sensor 1025.
  • [ Formula 1 ] R light = [ r l 0 , 0 r l 0 , 1 r l 0 , 2 r l 1 , 0 r l 1 , 1 r l 1 , 2 r l 2 , 0 r l 2 , 1 r l 2 , 2 ] = R x ( α ) R y ( β ) R z ( γ ) ( Expression 1 )
  • The 3×3 matrix Rlight representing the orientation information of the imaging device 1001 is referred to as a camera orientation matrix. In this expression, (α, β, γ) are values of the output from the sensor attached to the camera in a roll-pitch-yaw angle representation, each being expressed in terms of the amount of movement from a reference point. A roll-pitch-yaw angle representation is a representation where a rotation is represented by three rotations, including the roll being the rotation about the z axis, the pitch being the rotation about the new y axis, and the yaw being the rotation about the new x axis, as shown in FIG. 6.
  • Rx(α), Ry(β) and Rz(γ) are matrices for converting the roll-pitch-yaw angles to the x-axis rotation, the y-axis rotation and the z-axis rotation, and are expressed as follows.
  • [ Formula 2 ] R x ( α ) = [ 1 0 0 0 cos α - sin α 0 sin α cos α ] R y ( β ) = [ cos β 0 sin β 0 1 0 - sin β 0 cos β ] R z ( γ ) = [ cos β - sin β 0 sin β cos β 0 0 0 1 ]
  • If the imaging device 1001 is capable of zooming, the zooming information is also obtained as the focal distance information. In a case where the imaging device 1001 is a fixed-focus device, the focal distance information is also obtained. The focal distance information can be obtained by performing a camera calibration operation as widely used in the field of image processing.
  • The method for obtaining the orientation information of the camera from the angle sensor or the angular velocity sensor attached to the camera may be an existing method (for example, Takayuki Okatani, “3D Shape Recovery By Fusion Of Mechanical And Image Sensors”, Journal of Information Processing Society of Japan, 2005-CVIM-147, pp. 123-130, 2005).
  • At the time of image capturing when an image is captured by the imaging device 1001 in response to a cameraman's operation, the second imaging device information obtaining section 104 obtains second imaging device information representing the condition of the imaging device 1001. As with the first imaging device information obtaining section 103 described above, the output of the angle sensor 1025 and the focal distance information of the imaging device 1001 are obtained as the second imaging device information. In this process, the orientation matrix Rnow obtained from the output (α, β, γ) of the angle sensor 1025 is referred to as the current orientation matrix.
  • The light source information estimating section 105 estimates light source information at the time of image capturing in response to a cameraman's operation by using the light source image and the first imaging device information stored in the memory 1028, and the second imaging device information obtained by the second imaging device information obtaining section 104. It is assumed herein that the direction of the light source is estimated.
  • First, a pixel in the light source image that has a sufficiently high luminance value is extracted as a pixel capturing the light source, i.e., a light source pixel. FIG. 7 is a schematic diagram illustrating this process. In FIG. 7, the imaging device 1001 having the field of view 1006 is imaging a light source 1007. In a captured image 1008, the luminance value of an area 1009 where the light source is imaged is very high. In view of this, a threshold operation is used, wherein a pixel having a luminance value higher than a predetermined threshold is extracted as the light source pixel.
  • The light source direction is estimated from the obtained light source pixel. This process requires the relationship between the pixel position (u,v) of the imaging device and the spatial position (xf,yf) on the imaging elements referred to as the image coordinate system. In view of the influence of the distortion of the lens, the relationship between the pixel position (u,v) and the spatial position (xf,yf) can be obtained as follows.
  • [ Formula 3 ] x f = s · u dx + C x y f = v dy + C y dx = dx · N cx N fx u u = u + D u v u = v + D v D u = u ( κ 1 r 2 + κ 2 r 4 ) D v = v ( κ 1 r 2 + κ 2 r 4 ) r = u 2 + v 2 ( Expression 2 )
  • Note however that (Cx,Cy) is the pixel center position, s is the scale factor, (dx,dy) is the size [mm] of one pixel of an imaging element Ncx is the number of imaging elements in the x direction, Nfx is the number of effective pixels in the x direction, κ1 and κ2 are distortion parameters representing the distortion of the lens.
  • The relationship between the camera coordinate system (x,y,z) wherein the focal point of the imaging device is at the origin and the optical axis direction thereof is along the Z axis and the image coordinate system (xf,yf) as shown in FIG. 8 can be obtained as follows.
  • [ Formula 4 ] x f = f x z y f = f y z ( Expression 3 )
  • Herein, f represents the focal distance of the imaging device. Thus, if the camera parameters (Cx,Cy), s, (dx,dy), Ncx, Nfx, f, κ1 and κ2 are known, the pixel position (u,v) and the camera coordinate system (x,y,z) can be converted to each other by (Expression 2) and (Expression 3).
  • Normally, Ncx and Nfx can be known as long as the imaging elements can be identified, and (Cx,Cy), s, (dx,dy), κ1, κ2 and f can be known by performing a so-called “camera calibration” (for example, Roger Y. Tsai, “An Efficient And Accurate Camera Calibration Technique For 3D Machine Vision”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Miami Beach, Fla., 1986, pp. 364-374). These parameters do not change even when the position or the orientation of the imaging device changes. These parameters are referred to as “internal camera parameters”.
  • In view of this, before capturing an image, a camera calibration is performed to identify the internal camera parameters (Cx,Cy), s, (dx,dy), Ncx, Nfx, f, κ1 and κ2. The default values as of when the imaging device is purchased may be used as these values. In a case where the camera is not a fixed-focus camera but is capable of zooming, the focal distance f for each step of zooming may be obtained in advance so that they can be selectively used as necessary. Then, the focal distance f may be stored together with a captured image.
  • The light source direction is estimated from the light source pixel by using information as described above. Where the pixel position of the light source pixel is (ulight,vlight), the light source direction Llight can be expressed as follows.
  • [ Formula 5 ] L light = [ l x l y l z ] = 1 ( x f_light ) 2 + ( y f_light ) 2 + f 2 [ x f_light y f_light f ] [ Formula 6 ] x f_light = s · u light dx + C x y f_light = v light dy + C y
  • Since Llight is represented by the camera coordinate system in which the light source image has been captured, it is converted to the current camera coordinate system Lnow. This can be expressed as follows.
  • [Formula 7]

  • L now =R now −1 ·R light L light  (Expression 4)
  • The light source vector Lnow is estimated by performing these processes. The direction of the light source is estimated as described above.
  • By utilizing the movement of the imaging device 1001, the process may also obtain the three-dimensional position of the light source in addition to the direction thereof.
  • FIG. 9 is a schematic diagram illustrating this process. In FIG. 9, 1001A and 1010A denote the imaging device and the estimated light source vector at time t=t1, and 1001B and 1010B denote the imaging device and the estimated light source vector at time t=t2. Where the relative positional relationship between the imaging device at time t1 and that at time t2 and the orientations thereof are known, the light source should exist at the intersection between the extensions of the light source vectors 1010A and 1010B. Thus, the three-dimensional position of the light source can be obtained as follows.
  • The orientation matrix of the imaging device, the relative three-dimensional position of the imaging device and the estimated light source vector at time t1 are denoted as R1, P1 and L1, respectively, and the orientation matrix of the imaging device and the estimated light source vector at time t2 are denoted as R2 and L2, respectively. Note that the position of the imaging device at time t2 is assumed to be the origin O(0,0,0). Then, the light source position Plight satisfies the following expressions.
  • [Formula 8]

  • P light =m·R 2 −1 ·R 1 ·L 1 +P 1  (Expression 5)
  • [Formula 9]

  • P light =s·L 2  (Expression 6)
  • Note that s and m are each a constant. If all estimated values are correct and there is no noise, the light source position Plight can be obtained by solving (Expression 5) and (Expression 6) as simultaneous equations in s and m. However, since there usually is noise, the light source position is obtained by using the method of least squares.
  • First, the following function f(m,s) is considered.

  • f(m,s)={(m·R 2 −1 ·R 1 ·L 1 +P 1)−s·L 2}2  [Formula 10]
  • Herein, m and s satisfy the following relationship.
  • [ Formula 11 ] f m = 2 · ( R 2 - 1 · R 1 · L 1 ) T { ( m · R 2 - 1 · R 1 · L 1 + P 1 ) - s · L 2 } = 0 [ Formula 12 ] f s = - 2 · ( L 2 ) T { ( m · R 2 - 1 · R 1 · L 1 + P 1 ) - s · L 2 } = 0 Hence , [ Formula 13 ] ( R 2 - 1 · R 1 · L 1 ) 2 · m - ( R 2 - 1 · R 1 · L 1 ) T · L 2 · s + ( R 2 - 1 · R 1 · L 1 ) T · P 1 = 0 [ Formula 14 ] ( Expression 7 ) ( L 2 T · R 2 - 1 · R 1 · L 1 ) · m - L 2 2 · s + L 2 T · P 1 = 0 ( Expression 8 )
  • Thus, the light source position Plight is obtained by solving (Expression 7) and (Expression 8) as simultaneous equations in m and s, and substituting obtained s and m into (Expression 5) and (Expression 6). The position of the light source is estimated as described above.
  • The relative three-dimensional position P1 of the imaging device at time t1 (the relative positional relationship between the imaging device at time t1 and that at time t2) is obtained by using an optical flow. An optical flow is a vector extending between a point on an object in one image and the same point on the object in another temporally continuous image, i.e., a vector extending between corresponding points. A geometric constraint expression holds between the corresponding points and the camera movement. Thus, if the corresponding points satisfy certain conditions, the movement of the camera can be calculated.
  • A method called an “8-point method”, for example, is known in the art (H. C. Longuet-Higgins, “A Computer Algorithm For Reconstructing A Scene From Two Projections”, Nature, vol. 293, pp. 133-135, 1981) as a method for obtaining the relative positional relationship of the imaging device at different points in time from an optical flow. In this method, the camera movement is calculated from eight or more pairs of corresponding stationary points between two images. Methods for obtaining such corresponding points between two images are widely known, and will not herein be described in detail (for example, Carlo Tomasi and Takeo Kanade, “Detection And Tracking Of Point Features”, Carnegie Mellon University Technical Report, CMU-CS-91-132, April 1991).
  • Moreover, the luminance or the color of the light source can be obtained by obtaining the luminance value or the RGB values of the light source pixel. Alternatively, the spectrum of the light source may be detected by obtaining an image by a multispectral camera. It is known that by thus obtaining the spectrum of the light source, it is possible to synthesize an image with high color reproducibility in the process of increasing the resolution of an image and in the augmented reality to be described later (for example, Toshio Uchiyama, Masaru Tshuchida, Masahiro Yamaguchi, Hideaki Haneishi, Nagaaki Ohyama “Capture Of Natural Illumination Environments And Spectral-Based Image Synthesis”, Technical Report of the Institute of Electronics, Information and Communication Engineers, PRMU2005-138, pp. 7-12, 2006).
  • The light source information estimating section 105 may be configured to obtain the illuminance information of the light source as the light source information. This can be done by using an illuminance meter whose optical axis direction coincides with that of the imaging device 1001. The illuminance meter may be a photocell illuminance meter, or the like, for reading the photocurrent caused by the incident light, wherein a microammeter is connected to the photocell.
  • As described above, the light source estimation device of the present embodiment obtains a light source image by the imaging device when it is determined that the condition of the imaging device is suitable for obtaining light source information, and estimates light source information at the time of image capturing by using the first imaging device information at the time of obtaining the light source image and the second imaging device information at the time of image capturing by a cameraman. Therefore, it is possible to estimate the light source information around the object with no additional imaging devices, in a camera-equipped mobile telephone, or the like.
  • In the embodiment above, the output of the angle sensor 1025 is used for the imaging device condition determination section 101 to detect the optical axis direction of the imaging device 1001. However, the present invention is not limited to this, and other existing methods may be employed, e.g., a method using a weight and touch sensors (see Japanese Laid-Open Patent Publication No. 4-48879), and a method using an acceleration sensor (see Japanese Laid-Open Patent Publication No. 63-219281).
  • A method using a weight and touch sensors will now be described. FIG. 10 is a diagram showing a configuration of a weight and touch sensors. In FIG. 10( a), 1003 denotes a weight hanging down with the base portion thereof rotatably supported so as to always keep the perpendicular direction, and 1004A and 1004B touch sensors. Moreover, 1005 denotes the optical axis direction of the imaging device. As shown in FIG. 10( b), where the angle between the optical axis direction 1005 of the imaging device and the horizontal plane is θ, the touch sensors 1004A and 1004B come into contact with the weight 1003 when the optical axis direction 1005 is inclined from the horizontal position by predetermined angles θ1 and θ2.
  • FIG. 11 shows an exemplary configuration where a weight and touch sensors of FIG. 10 are provided in a folding-type camera-equipped mobile telephone. When the folding-type camera-equipped mobile telephone of FIG. 11 is placed with the imaging device 1001 facing down, the weight 1003 comes into contact with the touch sensor 1004A, thus turning the touch sensor 1004A ON (FIG. 12( a)). When it is placed with the imaging device 1001 facing up, the weight 1003 comes into contact with the touch sensor 1004B, thus turning the touch sensor 1004B ON (FIG. 12( b)).
  • FIG. 13 is a diagram showing the relationship between the optical axis direction and the ON/OFF state of the touch sensors. Thus, when the touch sensor 1004A is ON and the touch sensor 1004B is OFF, it can be assumed that the optical axis is facing downward with an inclination of +θ1 or more from the horizontal direction. When the touch sensor 1004B is ON and the touch sensor 1004A is OFF, it can be assumed that the optical axis is facing upward with an inclination of −θ2 or more from the horizontal direction. When the touch sensors 1004A and 1004B are both OFF, −θ2<θ<θ1 holds, and it can be assumed that the optical axis direction is substantially horizontal.
  • Thus, it is possible to detect the optical axis direction of the imaging device 1001 by using a weight and touch sensors.
  • While the illustrated example is directed to a folding-type camera-equipped mobile telephone, the optical axis direction of an imaging device can of course be detected by using a weight and touch sensors even with digital still cameras or digital video cameras. FIG. 14 shows an exemplary configuration where a weight and touch sensors are provided in a digital still camera. As shown in FIG. 14( a), when the optical axis of the imaging device 1001 is facing downward, the weight 1003 is in contact with the touch sensor 1004A. As shown in FIG. 14( b), when the optical axis of the imaging device 1001 is facing upward, the weight 1003 is in contact with the touch sensor 1004B.
  • In the embodiment above, the imaging device condition determination section 101 determines whether the condition of the imaging device 1001 is suitable for obtaining light source information by detecting the optical axis of the imaging device 1001. Instead of detecting the direction of the optical axis, the luminance value of the captured image may be detected, for example.
  • Where the light source is captured in the captured image, the pixel capturing the light source has a very high luminance value. In view of this, the imaging device 1001 may be used to capture an image, and if a luminance value greater than or equal to a threshold is present in the captured image, it can be determined that the light source is captured in the image and that the condition is suitable for obtaining light source information. In such a case, since it can be assumed that the light source has a very high luminance value, an image is preferably captured by the imaging device 1001 with as short an exposure time as possible.
  • Alternatively, whether there is a shading object within the range of viewing field of the camera may be detected so as to determine whether the condition of the imaging device 1001 is suitable for obtaining light source information. This is because if there is such a shading object, the light source will be shaded and it is likely that the light source cannot be captured.
  • The presence of a shading object can be detected by methods including a method using distance information and a method using image information. With the former, the output of a distance sensor used in auto-focusing of a camera, for example, may be used so that if an object is present within 1 m, for example, the object is determined to be a shading object. With the latter method of using image information, an image is captured by the imaging device 1001, and a human is detected from within the image by an image processing, for example. If a human is in the captured image, it is determined that the human is a shading object. This is because it can be assumed that a most ordinary object that shades the light source in the vicinity of the camera is a human. The detection of a human from within an image can be done by using image recognition techniques widely known in the art, e.g., by detecting a skin-colored region by using the color information.
  • When the light source image obtaining section 102 obtains a light source image, it is preferred that the image is captured without using a flashlight. This is because if an object that causes specular reflection such as a mirror is present within the viewing field of the imaging device 1001, the flashlight may be reflected, which may be erroneously assumed to be a light source pixel. Therefore, it is preferred to use an imaging device capable of capturing an image over a wide dynamic range, such as a cooled CCD camera or a multiple-exposure imaging. When the light source image obtaining section 102 obtains a light source image, if the amount of exposure is not sufficient, the exposure time may be elongated. This is particularly effective in a case where a light source image is obtained only when the imaging device 1001 is stationary by using an acceleration sensor, or the like.
  • SECOND EMBODIMENT
  • FIG. 15 is a block diagram showing a configuration of a light source estimation device according to a second embodiment of the present invention. In FIG. 15, like elements to those shown in FIG. 1 are denoted by like reference numerals, and will not be further described below.
  • In the configuration of FIG. 15, a light source image synthesis section 106 is provided in addition to the configuration of FIG. 1. The light source image synthesis section 106 synthesizes a panoramic light source image from a plurality of light source images obtained by the light source image obtaining section 102 by using a plurality of pieces of first imaging device information obtained by the first imaging device information obtaining section 103. A panoramic light source image is a light source image capturing a scene over a wide range. By using a panoramic light source image, it is possible to obtain, at once, the light source information of a scene over a wide range.
  • In the present embodiment, the imaging device condition determination section 101, the light source image obtaining section 102 and the first imaging device information obtaining section 103 repeatedly perform the processes to obtain a plurality of light source images and a plurality of pieces of first imaging device information corresponding respectively to the light source images. The plurality of sets of the light source images and the first imaging device information are stored in the memory 1028.
  • In this process, for example, when a new light source image is obtained by the light source image obtaining section 102, the first imaging device information obtaining section 103 may compare the new light source image with an already obtained light source image so that the first imaging device information is obtained only when the difference therebetween is significant. If the difference is insignificant, the new light source image may be discarded. Alternatively, an acceleration sensor or an angle sensor may be used so that the light source image and the first imaging device information may be obtained when the imaging device 1001 is moved.
  • The light source image synthesis section 106 synthesizes a single wide-range panoramic light source image from a plurality of pairs of the light source images and the first imaging device information stored in the memory 1028.
  • FIG. 16 is a schematic diagram illustrating a method for synthesizing a panoramic light source image. In FIG. 16, 1001 denotes an imaging device, 1006 a field of view, 1011 a projection plane onto which an image is projected, and 1012 a projected image obtained by projecting the captured light source image. First, a light source image stored in the memory 1028 is projected onto the projection plane 1011 using the corresponding first imaging device information. It is assumed herein that the projection plane is hemispherical. Assuming that the camera coordinate system (x,y,z) is (Xw,Yw,Zw) when the output (α, β, γ) of the angle sensor 1025 provided in the imaging device 1001 is as follows:
  • [Formula 15]

  • α=α0,β=β0,γ=γ0  (Expression 9)
  • then, the projection plane can be expressed as follows.
  • [Formula 16]

  • X w 2 +Y w 2 +Z w 2 =r prj 2  (Expression 10)
  • Herein, rprj is the radius of the hemisphere being the projection plane 1011. For example, it may be set to 10 m assuming a streetlight for outdoor, and about 2.5 m assuming a lighting on the ceiling for indoor. Such indoor/outdoor switching may be done by a cameraman switching between the indoor imaging mode and the outdoor imaging mode, for example.
  • From (Expression 4) and (Expression 10), all pixels can be projected onto the projection plane 1011 as follows.
  • [ Formula 17 ] [ X w Y w Z w ] = R 0 - 1 · R light [ r prj ( x f ) 2 + ( y f ) 2 + f 2 x f r prj ( x f ) 2 + ( y f ) 2 + f 2 y f r prj ( x f ) 2 + ( y f ) 2 + f 2 f ] ( Expression 11 )
  • Note however that R0 is the orientation matrix obtained from (Expression 1) and (Expression 9).
  • By using (Expression 11), all the captured light source images are projected onto the projection plane 1011 to produce the projected image 1012. In this process, two or more light source images may be projected onto the same area on the projection plane. In such a case, the projected image 1012 by the previously-captured light source image may be discarded while preferentially using the newly-captured light source image.
  • Thus, by synthesizing a projected image by integrating together a plurality of light source images, it is possible to widen the apparent field of view. FIG. 17 is a schematic diagram showing how this is done. In FIG. 17, 1001 denotes an imaging device, 1005 and 1006 the optical axis direction and the field of view, respectively, when the orientation of the imaging device 1001 is changed, 1013 the apparent field of view obtained by integrating together images captured while changing the orientation. Thus, it is possible to widen the field of view by integrating together images captured while changing the optical axis direction into a panoramic image.
  • The projection plane 1011 does not need to be a hemisphere, and may be a rectangular parallelepiped as shown in FIG. 18 for indoor, for example. In such a case, assuming that the camera coordinate system (x,y,z) is (Xw,Yw,Zw), each plane of the rectangular parallelepiped being the projection plane can be expressed as follows.
  • [Formula 18]

  • αX w +bY w +cZ w =d  (Expression 59)
  • Herein, a, b and c are constants, and this represents a projection plane on the rectangular parallelepiped of FIG. 18. Therefore, as with (Expression 11), all pixels can be projected onto the projection plane 1011 as follows by obtaining the intersection between (Expression 4) and (Expression 59).
  • [ Formula 19 ] [ X w Y w Z w ] = R 0 - 1 · R light [ d ax f + by f + cf x f d ax f + by f + cf y f d ax f + by f + cf f ]
  • Note that R0 is the orientation matrix obtained from (Expression 1) and (Expression 9).
  • The indoor/outdoor switching may be done by using the color of the light source pixel. Specifically, by using the color obtained from the RGB components of the captured light source pixel, it is possible to determine whether it is the sunlight or a fluorescent light or a light bulb, whereby it is determined to be outdoor if it is the sunlight and to be indoor if it is a fluorescent light or a light bulb. In such a case, the wavelength characteristics of the imaging device for each of R, G and B and the gamma value being the ratio of the change in the converted voltage value with respect to the change in the brightness of the image may be stored.
  • The light source information estimating section 105 estimates the light source information by using the panoramic light source image synthesized by the light source image synthesis section 106. Herein, a method for estimating the light source position as the light source information will be described.
  • First, a light source pixel is extracted from the panoramic light source image by the method described above in the first embodiment. The position of the extracted light source pixel is that in a case where the orientation of the imaging device 1001 is as represented by (Expression 9). Therefore, by using the second imaging device information obtained by the second imaging device information obtaining section 104, the process estimates the light source position (X1_now, Y1_now, Z1_now) in terms of camera coordinates at the time of image capturing by a cameraman.
  • [ Formula 20 ] [ X l_now Y l_now Z l_now ] = R now - 1 · R 0 [ X w Y w Z w ]
  • As described above, with the light source estimation device of the present embodiment, the light source information is estimated from the panoramic light source image, whereby it is possible to obtain, at once, the light source information of a scene over a wide range.
  • THIRD EMBODIMENT
  • FIG. 19 is a block diagram showing a configuration of a light source estimation device according to a third embodiment of the present invention. In FIG. 19, like elements to those shown in FIG. 1 are denoted by like reference numerals, and will not be further described below.
  • The configuration of FIG. 19 presumes that it is provided in a folding-type mobile telephone. It includes an open/close mechanism 1031 for opening/closing the folding-type mobile telephone, and an open/close switch 1014 for instructing the open/close mechanism 1031 to perform the open/close action. When the open/close switch 1014 is pressed and the open/close mechanism 1031 performs the open/close action, the optical axis direction of the imaging device 1001 provided in the folding-type mobile telephone changes due to the open/close action. Thus, the open/close mechanism 1031 functions as optical axis direction varying means for varying the optical axis direction of the imaging device 1001.
  • A light source image obtainment instructing section 110 first instructs the light source image obtaining section 102 to obtain a light source image when the open/close switch 1014 is pressed. In this process, the imaging device 1001 preferably takes a series of consecutive shots or records a video. After the start of the image-capturing operation of the imaging device 1001, the light source image obtainment instructing section 110 performs the open/close action by using the open/close mechanism 1031. Therefore, the light source image obtaining section 102 obtains light source images while the open/close mechanism 1031 is performing the open/close action or, in other words, while the optical axis direction of the imaging device 1001 is being varied by the optical axis direction varying means. Thus, it is possible to capture a plurality of light source images over a wide range of viewing field as shown in FIG. 17.
  • FIG. 20 is an external view of the folding-type camera-equipped mobile telephone 1000, which is provided with a light source estimation device of the present embodiment. In FIG. 20, 1001A and 1001B denote imaging devices, 1002 a shutter button, and 1014 an open/close switch of a folding-type mobile telephone. Arrows attached to the imaging devices 1001A and 1001B denote optical axis directions. In order to protect the liquid crystal display and to improve the portability, a folding-type mobile telephone is normally folded as shown in FIG. 20( b) while it is not being used for a call or for capturing an image.
  • If the open/close switch 1014 is pressed in the position of FIG. 20( b), the folding-type camera-equipped mobile telephone 1000 is automatically opened as shown in FIGS. 21( a) to (b), (c), (d) and (e). In FIG. 21, 1006A and 1006B denote fields of view to be captured by the two imaging devices (1001A and 1001B in FIG. 20) provided in the folding-type camera-equipped mobile telephone 1000. It can be seen from FIG. 21 that the optical axis direction of the imaging devices 1001A and 1001B can be varied by using the open/close switch 1014 of the folding-type camera-equipped mobile telephone 1000.
  • The open/close mechanism 1031 can be implemented by providing a spring or a lock mechanism (see, for example, Japanese Laid-Open Patent Publication No. 7-131850). A motor may be provided in the hinge portion of the folding-type mobile telephone. In such a case, the orientation information of the imaging device 1001 can be obtained by using, as the angle sensor 1025, a rotary encoder provided along with the motor.
  • The operation of the configuration of FIG. 19 will be described. First, when the open/close switch 1014 is pressed, the light source image obtainment instructing section 110 detects this and instructs the light source image obtaining section 102 to obtain light source images. Moreover, when the open/close switch 1014 is pressed, the open/close mechanism 1031 as the light source direction varying means performs the automatic open/close action of the folding-type mobile telephone. While the open/close mechanism 1031 is being actuated, the light source image obtaining section 102 captures a plurality of light source images by using the imaging device 1001 as instructed by the light source image obtainment instructing section 110. Thereafter, the operation is similar to that of the first embodiment.
  • In the present embodiment, it is preferred that the exposure time is as short as possible so as to capture light source images while moving the imaging device 1001. Moreover, camera shake compensation may be used so as to capture light source images without a blur even if the imaging device 1001 is moving.
  • Herein, the exposure time TE where camera shake compensation is not used can be expressed as follows.
  • [ Formula 21 ] T E · M < θ s 2 · L x ( Expression 60 )
  • Herein, M is the rotational velocity [deg/sec] of the optical axis which rotates as the open/close switch 1014 is pressed, θs is the viewing angle [deg] in the vertical direction of the imaging device 1001, and Lx is the number of pixels in the vertical direction of the imaging device 1001. For example, where M=180 [deg/sec], θs=80 [deg], and Lx=1000, the following holds:
  • [ Formula 22 ] T E < 1 4500
      • indicating that the exposure time can be set to about 1/4000 sec.
  • The rotational velocity M of the optical axis may be determined from (Expression 60). For example, where θs=40 [deg], Lx=1000 and TE=0.0005 [sec], the following holds:

  • M<40[deg/sec]
      • indicating that the open/close mechanism 1031 can be operated more slowly than in a normal open/close action.
  • Since the light source environment varies over time, it is preferred that light source images are obtained with a timing as close as possible to that with which an object is actually imaged. Normally, the open/close switch 1014 is often pressed immediately before image capturing. The present embodiment is very effective because it is likely that light source images immediately before image capturing can be obtained.
  • Blurred images may intentionally be captured as light source images. By capturing blurred images, it is possible to capture light source images while keeping privacy of people in the captured scene. This can be realized by, for example, elongating the exposure time.
  • As described above, if (Expression 60) is not satisfied, camera shake occurs as the optical axis is moved by the open/close mechanism 1031 thus capturing blurred images. In view of this, the exposure time and the rotational velocity may be determined so as to satisfy the following expression, for example.
  • [ Formula 95 ] T E · M = T B · θ s L x
  • Herein, TB is a constant that determines the amount of blur. For example, where TB=8, θs=40 [deg], Lx=1000 and M=40 [deg/sec], the exposure time TE can be set to 0.008 [sec].
  • The light source direction may be estimated by using a voting process from a plurality of light source images. This can improve the precision with which the light source direction is estimated. For example, if a light source position obtained from a light source image is significantly shifted from the light source position obtained from other light source images or if the corresponding light source is absent in other light source images, it can be determined that the estimation of the light source position is a failure and the obtained light source position can be discarded from the light source estimation result.
  • Where light source images have not been captured properly, there may be prompting for capturing the images again. For example, an image estimated from the estimated light source information may be compared with an actually captured light source image, and if the difference is significant, it may be determined that the light source has not been captured in the light source images and the light source estimation has failed. The image-capturing operation may be prompted by, for example, audibly outputting “Re-capture image of light source” or displaying “Re-capture image of light source” on the display.
  • FIG. 22 is a block diagram showing another configuration of the light source estimation device of the present embodiment. In FIG. 22, like elements to those shown in FIGS. 1 and 19 are denoted by like reference numerals, and will not be further described below.
  • The configuration of FIG. 22 includes the imaging device condition determination section 101 shown in the first embodiment. It also includes a vibration mechanism 1026 to be used for realizing the vibration function of the mobile telephone, for example. The vibration function is a function of indicating an incoming call by way of vibrations while the mobile telephone is in a vibration mode. If the vibration mechanism 1026 performs the vibration action while the vibration function is ON, the optical axis direction of the imaging device 1001 provided in the mobile telephone varies according to the vibration action. Thus, the vibration mechanism 1026 functions as optical axis direction varying means for varying the optical axis direction of the imaging device 1001.
  • As described in the first embodiment, the imaging device condition determination section 101 determines whether the condition of the imaging device 1001 is suitable for obtaining light source information. When it is determined by the imaging device condition determination section 101 to be suitable, the light source image obtainment instructing section 110 instructs the vibration mechanism 1026 to perform the vibration action and instructs the light source image obtaining section 102 to obtain light source images. Thus, light source images are obtained by the light source image obtaining section 102 while the vibration mechanism 1026 is performing the vibration action or, in other words, while the optical axis direction of the imaging device 1001 is being varied by the optical axis direction varying means. Thus, it is possible to capture a plurality of light source images over a wide range of viewing field as shown in FIG. 17. In this case, since the imaging device 1001 is not stationary, it is preferred to shorten the exposure time.
  • Where θs denotes the viewing angle of the imaging device 1001 and θv denotes the angle by which the mobile telephone is vibrated by the vibration mechanism 1026 (the angle of vibration of FIG. 23), the enlarged viewing angle θt can be expressed as follows.

  • θt2+2θv  [Formula 23]
  • From this expression, the required amount of vibration can be calculated if the viewing angle θs of the imaging device 1001 and the viewing angle θt required for the light source estimation are determined. For example, if the viewing angle θs of the imaging device 1001 is 80 degrees and a 90-degree viewing angle is required for the light source estimation, the angle of vibration is about 5 degrees. This is a value that can be realized by vibrating a mobile telephone having a height of 11 cm by about 9 mm.
  • When the vibration mechanism 1026 is actuated for obtaining light source images, a different sound from that of an incoming call or that when receiving an email may be output from the speaker. Then, it is possible to distinguish the operation of obtaining light source images from when there is an incoming call or when receiving an email. When light source images are being obtained, an LED or an interface liquid crystal display may be lit to notify the user.
  • Of course, the vibration mechanism 1026 may be operated to obtain light source images after giving the notification by a sound, an LED or a display.
  • FIG. 24 is a block diagram showing another exemplary configuration of a light source estimation device of the present embodiment. In FIG. 24, like elements to those shown in FIGS. 1 and 22 are denoted by like reference numerals, and will not be further described below.
  • As compared with FIG. 22, the configuration of FIG. 24 includes an email reception detecting section 1032, while the imaging device condition determination section 101 is omitted. Assuming that the vibration mode has been set, when the email reception detecting section 1032 detects the reception of an email, the vibration mechanism 1026 performs the vibration action. Receiving a signal from the email reception detecting section 1032, the light source image obtainment instructing section 110 instructs the light source image obtaining section 102 to obtain light source images. With such an operation, light source images are obtained by the light source image obtaining section 102 while the vibration mechanism 1026 is performing the vibration action or, in other words, while the optical axis direction of the imaging device 1001 is being varied by the optical axis direction varying means.
  • With this configuration, a plurality of light source images can be obtained while the optical axis direction of the imaging device is being varied, by using the vibration function that is actuated by the reception of an email, thus providing an advantage that no extra vibration is needed.
  • As described above, according to the present embodiment, light source images are obtained while the optical axis direction of the imaging device is being varied by the optical axis direction varying means, whereby it is possible to obtain light source images over a wide range around the object and thus to precisely estimate the light source information.
  • With each of the configurations described above, the light source image synthesis section 106 illustrated in the second embodiment may be provided so that a panoramic light source image is produced from a plurality of light source images.
  • In the present embodiment, the optical axis direction varying means is implemented by the open/close mechanism or the vibration mechanism of the folding-type mobile telephone. However, the present invention is not limited to this, and it may be implemented by any means as long as it is capable of varying the optical axis direction of the imaging device. For example, a special driving mechanism may be provided in the imaging device itself.
  • FOURTH EMBODIMENT
  • FIG. 25 is a block diagram showing a configuration of a light source estimation system according to a fourth embodiment of the present invention. In FIG. 25, like elements to those shown in FIG. 1 are denoted by like reference numerals, and will not be further described below.
  • In FIG. 25, a communication terminal 1100 being a camera-equipped mobile telephone, for example, is provided with those elements shown in FIG. 1 except for the light source information estimating section 105. The light source information estimating section 105 is provided in a server 1101, which is an external device away from the communication terminal 1100 and connected to the communication terminal 1100 via a network. Thus, in the present embodiment, the communication terminal 1100 does not perform all of the processes but only performs the process of obtaining light source images and imaging device information, with the light source information estimating process being performed by the server 1101.
  • In the communication terminal 1100, a light source image is obtained by the light source image obtaining section 102, the first imaging device information at the time of obtaining the light source image is obtained by the first imaging device information obtaining section 103, and the second imaging device information at the time of actually capturing an image is obtained by the second imaging device information obtaining section 104, as described above in the first embodiment. The light source image and the first and second imaging device information are transmitted by an information transmitting section 108. There may also be given an instruction as to how the light source information is estimated.
  • In the server 1101, an information receiving section 109 receives information that is transmitted from the communication terminal 1100 via a network, i.e., the light source image and the first and second imaging device information. The received light source image and the received first and second imaging device information are given to the light source information estimating section 105. The light source information estimating section 105 estimates the light source information as described above in the first embodiment. Where there is given an instruction as to how the light source information estimation, it estimates the light source information according to the instruction.
  • Thus, if the light source information estimating section 105 is provided in the server 1101 so as to perform the light source information estimating process by the server 1101, it is possible to reduce the computational burden on the communication terminal 1100.
  • (Super-Resolution Using Light Source Information)
  • The light source estimation device of the present invention is particularly effective in the process of super-resolution also known as “digital zooming”. The super-resolution process is important in the editing process after capturing an image since it makes it possible to arbitrarily enlarge captured-images. Such a super-resolution process has been realized by an interpolation process, or the like, but it has a problem in that where an enlarged image with a factor of 2×2 or more is synthesized, the synthesized image will be blurred, deteriorating the image quality. By using the light source estimation method of the present invention, it is possible to realize a super-resolution process with reduced image quality deterioration. This method will now be described.
  • First, the concept of this process will be described. The super-resolution process of the present invention uses four pieces of input information as follows:
      • Diffuse reflection image of object;
      • Specular reflection image of object;
      • Three-dimensional shape information of object; and
      • Light source position/color/illuminance.
  • A diffuse reflection image is what is obtained by imaging only a diffuse reflection component, being a mat reflection component, of the input image. Similarly, a specular reflection image is what is obtained by imaging only a specular reflection component, being a shine, of the input image. The diffuse reflection component is a component that is reflected at the surface of a mat object and scattered evenly in all directions. The specular reflection component is a component that reflects strongly in the opposite direction to the direction of the incident light with respect to the normal as is a reflection at a mirror surface. Assuming a dichromatic reflection model, the luminance of an object is represented by the sum of a diffuse reflection component and a specular reflection component. As will be described later, the specular reflection image and the diffuse reflection image can be obtained by imaging an object while rotating the polarizing filter, for example.
  • FIG. 26( a) shows an image obtained by imaging an object (a tumbler) illuminated by a light source with an imaging device. It can be seen that a specular reflection, being a shine, appears in an upper portion of the figure. FIGS. 26( b) and (c) are the results of separating the image of FIG. 26( a) into a diffuse reflection image and a specular reflection image by a method to be described later. In the diffuse reflection image, the shine has been removed to make clear the surface texture information, but the stereoscopic effect has been lost. On the other hand, in the specular reflection image, detailed shape information appears clearly, but the texture information has been lost. Thus, an input image is an image obtained by laying these two images containing totally different information on each other. By separating an image into a diffuse reflection image and a specular reflection image and processing these images separately, it is possible to realize a super-resolution process with a higher definition.
  • The super-resolution process uses a learning-based method. “Learning-based” means preparing a pair of a low-resolution image and a high-resolution image in advance and learning the correspondence therebetween. In this process, the feature quantity extracted from the image, but not the image itself, is learned, so that the super-resolution process can be used also with images other than the prepared images.
  • FIG. 27 is a block diagram showing a configuration of a super-resolution device in one embodiment of the present invention. The super-resolution device of FIG. 27 includes an image-capturing section 201 for capturing an image by using an imaging device, a light source information estimating section 203 for estimating the light source information such as the direction, position, luminance, color and spectrum of the light source illuminating the object by a light source estimation method as described above, a shape information obtaining section 204 for obtaining the surface normal information or the three-dimensional position information of the object as shape information, and a super-resolution section 217 for performing super-resolution of the image captured by the image-capturing section 201 by using the light source information estimated by the light source estimation section 203 and the shape information obtained by the shape information obtaining section 204. The super-resolution section 217 further separates the image captured by the image-capturing section 201 into a diffuse reflection component and a specular reflection component by means of a diffuse reflection/specular reflection separating section 202 and performs super-resolution of the diffuse reflection component and that of the specular reflection component separately. These processes will now be described.
  • The image-capturing section 201 images an object by using an imaging device such as a CCD or a CMOS. In the captured image, it is preferred that the specular reflection component where the luminance is very high and the diffuse reflection component are recorded at the same time without saturation. Therefore, it is preferred to use an imaging device capable of capturing an image over a wide dynamic range, such as a cooled CCD camera or a multiple-exposure imaging.
  • The diffuse reflection/specular reflection separating section 202 separates the image captured by the image-capturing section 201 into a diffuse reflection component and a specular reflection component.
  • First, reflection characteristics of an object will be described. Assuming a dichromatic reflection model, the luminance of an object is represented as follows by the sum of a diffuse reflection component and a specular reflection component.
  • [Formula 24]

  • I=I a +I d +I s  (Expression 12)
  • Herein, I is the luminance value of the object imaged by the imaging device, Ia is an environmental light component, Id is a diffuse reflection component, and Is is a specular reflection component. The environmental light component refers to indirect light which is light from the light source being scattered by objects, etc. This is scattered to every part of the space, giving a slight brightness even to shaded areas where direct light does not reach. Therefore, normally, it is often treated as noise.
  • Assuming that the environmental light component is sufficiently small and negligible as noise, an image can be separated into a diffuse reflection component and a specular reflection component. As described above, these components exhibit very different characteristics from each other, as the diffuse reflection component depends on texture information, whereas the specular reflection image depends on detailed shape information. Therefore, if the super-resolution is performed by separating an input image into a diffuse reflection image and a specular reflection image and performing super-resolution by different methods, it is possible to perform super-resolution with a very high definition. For this, it is first necessary to separate the diffuse reflection image and the specular reflection image from each other.
  • Various separation methods have been proposed in the art. For example, they include:
      • those using a polarizing filter utilizing the difference in degree of polarization between specular reflection and diffuse reflection (for example, Japanese Patent No. 3459981);
      • those using a multispectral camera while rotating an object so as to separate the specular reflection area (for example, Japanese Laid-Open Patent Publication No. 2003-85531); and
      • those using images of an object illuminated by the light source from various directions to synthesize a linearized image being an image in an ideal state where there is no specular reflection, and using the linearized image to separate specular reflection and shadow areas (for example, Yasunori Ishii, Koutaro Fukui, Yasuhiro Mukaigawa, Takeshi Shakunaga, “Classification of Photometric Factors Based on Photometric Linearization,” Journal of Information Processing Society of Japan, vol. 44, no. SIG5 (CVIM6), pp. 11-21, 2003).
  • Herein, a method using a polarizing filter is employed. FIG. 28 shows the camera-equipped mobile telephone 1000 provided with a super-resolution device of the present embodiment. As shown in FIG. 28, the imaging device 1001 is provided with a linear polarizing filter 1016A having a rotation mechanism (not shown). The lighting device 1007 with a linear polarizing filter 1016B attached thereto is also provided. Moreover, 1017 denotes a liquid crystal display as a user interface.
  • The imaging device 1001 captures a plurality of images of an object being illuminated by the lighting device 1007 with the linear polarizing filter 1016B attached thereto while rotating the linear polarizing filter 1016A by means of the rotation mechanism. In view of the fact that the illumination is linearly polarized, the reflected light intensity changes as shown in FIG. 29 with respect to the angle of rotation ψ of the polarizing filter 1016A. Where Id denotes the diffuse component of the reflected light and Is the specular reflection component thereof, the maximum value Imax and the minimum value Imin of the reflection light luminance can be expressed as follows.
  • [ Formula 25 ] I max = 1 2 I d + I s [ Formula 26 ] I min = 1 2 I d
  • In other words, the diffuse component Id of the reflected light and the specular reflection component Is thereof are obtained as follows.
  • [Formula 27]

  • I d=2I min  (Expression 13)
  • [Formula 28]

  • I s =I max −I min  (Expression 14)
  • FIG. 30 shows the flow of this process. First, the polarizing filter 1016A is rotated by the rotation mechanism (step S301), and images are captured and stored in a memory (step S302). Then, it is determined whether a predetermined number of images have been captured and stored in the memory (step S303). If a sufficient number of images for detecting the minimum value and the maximum value of the reflection light luminance have not been captured (No in step S303), the polarizing filter is rotated again (step S301) to repeat the image-capturing process. If a sufficient number of images have been captured (Yes in step S303), the minimum value and the maximum value of the reflection light luminance are detected by using the captured image data (step S304), and the diffuse reflection component and the specular reflection component are separated from each other by using (Expression 13) and (Expression 14) (step S305). While this process may be done by obtaining the minimum value and the maximum value for each pixel from the plurality of images, fitting of a sin function is used herein. This process will now be described.
  • The reflection light luminance I for the polarizing filter angle ψ shown in FIG. 29 can be approximated by a sin function as follows.
  • [Formula 29]

  • I=A·sin 2(ψ−B)+C  (Expression 15)
  • Herein, A, B and C are constants, and the following expressions hold based on (Expression 13) and (Expression 14).
  • [Formula 30]

  • I d=2(C−A)  (Expression 16)
  • [Formula 31]

  • Is=2A  (Expression 17)
  • Thus, it is possible to separate the diffuse reflection component and the specular reflection component from each other by obtaining A, B and C of (Expression 15) from the captured images.
  • (Expression 15) can be expanded as follows.

  • I=α·sin 2φ+b·cos 2φ+C  [Formula 32]
  • Note however,
  • [ Formula 33 ] A = a 2 + b 2 , sin ( - 2 B ) = b a 2 + b 2 , cos ( - 2 B ) = a a 2 + b 2 ( Expression 18 )
  • Thus, it is possible to separate the diffuse reflection component and the specular reflection component from each other by obtaining A, B and C that minimize the following evaluation expression.
  • f ( a , b , C ) = i = 0 N - 1 ( I i - a · sin 2 φ i - b · cos 2 φ i - C ) 2 [ Formula 34 ]
  • Herein, Ii denotes the reflected light intensity for the polarizing filter angle ψi. By using the method of least squares, the parameters are estimated as follows.
  • [ Formula 35 ] a = D E , b = F E , C = G E ( Expression 19 ) [ Formula 36 ] D = ( i = 0 N - 1 sin 2 φ i ) · ( i = 0 N - 1 cos 2 φ i ) · ( i = 0 N - 1 I i · cos 2 φ i ) + ( i = 0 N - 1 I i ) · ( i = 0 N - 1 cos 2 φ i ) · ( i = 0 N - 1 sin 2 φ i · cos 2 φ i ) + N · ( i = 0 N - 1 I i · sin 2 φ i ) · ( i = 0 N - 1 ( cos 2 φ i ) 2 ) - N · ( i = 0 N - 1 I i · cos 2 φ i ) · ( i = 0 N - 1 sin 2 φ i · cos 2 φ i ) - ( i = 0 N - 1 cos 2 φ i ) · ( i = 0 N - 1 cos 2 φ i ) · ( i = 0 N - 1 I i · sin 2 φ i ) - ( i = 0 N - 1 I i ) · ( i = 0 N - 1 sin 2 φ i ) · ( i = 0 N - 1 ( cos 2 φ i ) 2 ) ( Expression 20 ) [ Formula 37 ] E = 2 · ( i = 0 N - 1 sin 2 φ i ) · ( i = 0 N - 1 cos 2 φ i ) · ( i = 0 N - 1 sin 2 φ i · cos 2 φ i ) + N · ( i = 0 N - 1 ( sin 2 φ i ) 2 ) · ( i = 0 N - 1 ( cos 2 φ i ) 2 ) - ( i = 0 N - 1 sin 2 φ i ) 2 · ( i = 0 N - 1 ( cos 2 φ i ) 2 ) - ( i = 0 N - 1 cos 2 φ i ) 2 · ( i = 0 N - 1 ( sin 2 φ i ) 2 ) - N · ( i = 0 N - 1 sin 2 φ i · cos 2 φ i ) 2 ( Expression 21 ) [ Formula 38 ] F = N · ( i = 0 N - 1 I i · cos 2 φ i ) · ( i = 0 N - 1 ( sin 2 φ i ) 2 ) + ( i = 0 N - 1 sin 2 φ i ) · ( i = 0 N - 1 cos 2 φ i ) · ( i = 0 N - 1 I i · sin 2 φ i ) + ( i = 0 N - 1 I i ) · ( i = 0 N - 1 sin 2 φ i ) · ( i = 0 N - 1 sin 2 φ i · cos 2 φ i ) - ( i = 0 N - 1 sin 2 φ i ) 2 · ( i = 0 N - 1 I i · cos 2 φ i ) - ( i = 0 N - 1 I i ) · ( i = 0 N - 1 cos 2 φ i ) · ( i = 0 N - 1 ( sin 2 φ i ) 2 ) - N · ( i = 0 N - 1 I i · sin 2 φ i ) · ( i = 0 N - 1 sin φ i · cos 2 φ i ) ( Expression 22 ) [ Formula 39 ] G = ( i = 0 N - 1 sin 2 φ i ) · ( i = 0 N - 1 sin 2 φ i · cos 2 φ i ) · ( i = 0 N - 1 I i · cos 2 φ i ) + ( i = 0 N - 1 cos 2 φ i ) · ( i = 0 N - 1 sin 2 φ i · cos 2 φ i ) · ( i = 0 N - 1 I i · sin 2 φ i ) + ( i = 0 N - 1 I i ) · ( i = 0 N - 1 ( sin 2 φ i ) 2 ) · ( i = 0 N - 1 ( cos 2 φ i ) 2 ) - ( i = 0 N - 1 cos 2 φ i ) · ( i = 0 N - 1 ( sin 2 φ i ) 2 ) · ( i = 0 N - 1 I i · cos 2 φ i ) - ( i = 0 N - 1 I i ) · ( i = 0 N - 1 sin 2 φ i · cos 2 φ i ) 2 - ( i = 0 N - 1 sin 2 φ i ) · ( i = 0 N - 1 ( cos 2 φ i ) 2 ) · ( i = 0 N - 1 I i · sin 2 φ i ) ( Expression 23 )
  • As described above, the diffuse reflection component and the specular reflection component are separated from each other by using (Expression 16) to (Expression 23). In such a case, since the number of unknown parameters is three, it is sufficient to capture at least three images with different angles of rotation of the polarizing filter.
  • Therefore, instead of providing a rotation mechanism for the linear polarizing filter 1016A, one may employ an imaging device in which the polarization direction is varied from one pixel to another. FIG. 31 schematically shows pixels of such an imaging device. Herein, 1022 denotes one pixel, and straight lines in each pixel denote the polarization direction. Specifically, the imaging device has pixels of four different polarization directions of 0°, 45°, 90° and 135°. If pixels of four different kinds are treated to be a single pixel as in a Bayer arrangement as represented by a thick line 1023 in FIG. 31, it is possible to simultaneously capture images with four different polarization directions. Such an imaging device may be, for example, a photonic crystal device, or the like.
  • A polarized illumination, e.g., a liquid crystal display, may be used as the lighting device 1007. For example, the liquid crystal display 1017 provided in the mobile telephone 1000 can be used. In such a case, it is preferred that the luminance value of the liquid crystal display 1017 is made higher than that when it is used as a user interface.
  • Of course, the polarizing filter 1016B of the lighting device 1007 may be rotated instead of rotating the polarizing filter 1016A of the imaging device 1001. Moreover, instead of providing a polarizing filter both for the imaging device 1001 and for the lighting device 1007, a polarizing filter may be provided only for one of them, i.e., for the imaging device, and the diffuse reflection component and the specular reflection component may be separated from each other by using an independent component analysis (see, for example, Japanese Patent No. 3459981).
  • The light source information estimating section 203 obtains the position, color and illuminance information of the light source by using a light source estimation method as described above.
  • The shape information obtaining section 204 obtains the surface normal information or the three-dimensional position information of the object, which are shape information of the object. Means for obtaining shape information of an object may be an existing method such as, for example, a slit-ray projection method, a pattern projection method or a laser radar method.
  • Of course, the shape information obtaining method is not limited to these methods. For example, the method may be a stereoscopic method using a plurality of cameras, a motion-stereo method using the motion of a camera, a photometric stereo method using images captured while varying the position of the light source, a method in which the distance from an object is measured using a millimeter wave or an ultrasonic wave, or a method using polarization characteristics of reflected light (for example, U.S. Pat. No. 5,028,138, and Daisuke Miyazaki, Katsushi Ikeuchi, “A Method To Estimate Surface Shape Of Transparent Objects By Using Polarization Raytracing Method”, Journal of the Institute of Electronics, Information and Communication Engineers, vol. J88-D-II, No. 8, pp. 1432-1439, 2005). Herein, a photometric stereo method and a method using polarization characteristics will be described.
  • The photometric stereo method is a method for estimating the normal direction and the reflectance of an object by using three or more images of different light source directions. For example, H. Hayakawa, “Photometric Stereo Under A Light Source With Arbitrary Motion”, Journal of the Optical Society of America A, vol. 11, pp. 3079-89, 1994 describes a method where six or more points of an equal reflectance are obtained from an image as known information and they are used as constraints, thereby estimating the following parameters even if the light source position information is unknown:
      • the object information: the normal direction and the reflectance at each point of the image; and
      • the light source information: the light source direction and the illuminance at an object-observing point.
  • Herein, a photometric stereo method using only the diffuse reflection image separated by the diffuse reflection/specular reflection separating method described above is performed. Naturally, this method assumes that an object gives total diffuse reflection, and it therefore will result in a significant error with an object with specular reflection. Nevertheless, by using only the separated diffuse reflection image, it is possible to eliminate the estimation error due to the presence of specular reflection. Of course, the process may be performed on a diffuse reflection image from which shadow areas have been removed by a shadow removing section 205 as will be described later.
  • Diffuse reflection images of different light source directions are represented by the luminance matrix Id as follows.
  • [ Formula 40 ] I d = [ i d 1 ( 1 ) i dF ( 1 ) i d 1 ( P ) i dF ( P ) ] ( Expression 24 )
  • Herein, idf(p) denotes the luminance value of a pixel p in the diffuse reflection image of the light source direction f. The number of pixels in the image is P, and the number of images captured with different light source directions is F. Using a Lambertian model, the luminance value of a diffuse reflection image can be expressed as follows.
  • [Formula 41]

  • i f(p)=(ρp ·n p)·(t f ·L f)  (Expression 25)
  • Herein, ρp denotes the reflectance (albedo) of the pixel p, np the normal direction vector of the pixel p, tf the incident illuminance of the light source f, and Lf the direction vector of the light source f.
  • The following expression is derived from (Expression 24) and (Expression 25).
  • [ Formula 42 ] I = R · N · L · T = S · M Herein , ( Expression 26 ) R = [ ρ 1 0 0 ρ P ] [ Formula 43 ] N = [ n 1 n P ] T = [ n 1 x n 1 y n 1 z n Px n Py n Pz ] [ Formula 44 ] L = [ L 1 L F ] = [ l x 1 l x F l y 1 l y F l z 1 l z F ] [ Formula 45 ] T = [ t 1 0 0 t F ] [ Formula 46 ] [ Formula 47 ] S = [ s 1 s P ] T = [ s 1 x s 1 y s 1 z s Px s Py s Pz ] = R · N ( Expression 27 ) M = [ M 1 M F ] = [ m x 1 m 1 xF m y 1 m 1 yF m z 1 m 1 zF ] = L · T [ Formula 48 ]
  • Herein, R refers to a surface reflection matrix, N a surface normal matrix, L a light source direction matrix, T a light source intensity matrix, S a surface matrix, and M a light source matrix.
  • Using the singular value decomposition, (Expression 26) can be expanded as follows.
  • [ Formula 49 ] I = U · · V ( Expression 28 ) U = [ U U ] = [ 0 0 ] V = [ V V ] Herein , [ Formula 50 ] U T · U = V T · V = V · V T = E [ Formula 51 ]
  • and E denotes a unit matrix. Moreover, U′ is a P×3 matrix, U″ a P×(F−3) matrix, Σ′ a 3×3 matrix, Σ″ a (F-3)×(F-3) matrix, V′ a 3×F matrix, and V″ a (F-3)×F matrix. Herein, it can be assumed that U″ and V″ are orthogonal bases, i.e., noise components, of U′ and V′ being signal components. Using the singular value decomposition, (Expression 28) can be rearranged as follows.
  • [Formula 52]

  • Î=U′·Σ′·V′=Ŝ·{circumflex over (M)}  (Expression 29)

  • Ŝ=U′·(±[Σ′]1/2V′

  • {circumflex over (M)}=(±[Σ′]1/2V′  [Formula 53]
  • Thus, the shape information and the light source information can be obtained at once by solving (Expression 29), but the uncertainty of the 3×3 matrix A remains as follows.
  • [Formula 54]

  • S=Ŝ·A  (Expression 30)
  • [Formula 55]

  • M=A −1 ·{circumflex over (M)}  (Expression 31)
  • Herein, A is a 3×3 matrix. In order to obtain the shape information and the light source information, the matrix A needs to be obtained. This is satisfied if it is known that six or more points on the image have an equal reflectance. For example, if six points k1 to k6 have an equal reflectance, the following holds.
  • [Formula 56]

  • (s k1)2=(s k2)2=(s k3)2=(s k4)2=(s k5)2=(s k6)2=1  (Expression 32)
  • From (Expression 27), (Expression 30) and (Expression 32), the following holds.
  • [Formula 57]

  • (s ki)2=(ŝ ki T ·A)2=(ŝ ki T ·A)T·(ŝ ki T ·A)=(ŝ ki T ·A)·(ŝ ki T ·A)T ki T ·A·A T ·ŝ ki=1  (Expression 33)
  • Moreover, with
  • [Formula 58]

  • B=A·A T  (Expression 34)
  • (Expression 33) is rearranged as follows.
  • [Formula 59]

  • ŝ ki T ·B·ŝ ki=1  (Expression 35)
  • Herein, since the matrix B is a symmetric matrix from (Expression 34), the number of unknowns of the matrix B is six. Therefore, (Expression 35) can be solved if it is known that six or more points on the screen have an equal reflectance.
  • If the matrix B is known, the matrix A can be solved by applying the singular value decomposition to (Expression 34).
  • Moreover, the shape information and the light source information are obtained from (Expression 30) and (Expression 31).
  • Thus, the following information can be obtained by capturing three or more images of an object of which six or more pixels having an equal reflectance are known while changing the light source direction:
      • the object information: the normal direction vector and the reflectance of each point on the image; and
      • the light source information: the light source vector and the radiance at an object-observing point.
  • Note however that the reflectance of the object and the radiance of the light source obtained by the above process are relative values, and obtaining absolute values requires known information other than the above, such as the reflectance being known for six or more points on the image.
  • Where the positional relationship between the light source and the imaging device is known, the distance or the three-dimensional position between the imaging device and the object may be obtained. This will now be described with reference to the drawings.
  • FIG. 32 is a schematic diagram illustrating this process. In FIG. 32, 1001 denotes an imaging device, 1007A and 1007B light sources, 1015 the object-observing point O, 1010A and 1010B the light source directions of the light sources at the object-observing point O, and 1021 the viewing direction of the imaging device at the object-observing point O.
  • First, since the positional relationship between the light source and the imaging device is known, the three-dimensional positional relationships La and Lb between the imaging device 1001 and the light sources 1007A and 1007B are known. Assuming that the imaging device 1001 has been calibrated, the viewing direction 1021 of the imaging device 1001 is also known. Therefore, the object-observing point O 1015 exists on the viewing direction 1021. Moreover, by the photometric stereo method described above, the light source directions 1010A and 1010B of the light sources at the object-observing point O are known. Assuming that the distance Lv between the imaging device 1001 and the observing point O 1015 is positive (Lv>0), there exists only one observing point O that satisfies such a positional relationship. Therefore, the position of the observing point O 1015 can be known, and the distance Lv between the imaging device 1001 and the observing point O 1015 can be obtained.
  • In a case where a light source is provided in the imaging device, e.g., a flashlight of a digital camera, for example, the positional relationship between the light source and the imaging device can be obtained from the design information.
  • The shape information obtaining section 204 may obtain the surface normal direction of the object by using the polarization characteristics of the reflected light. This process will now be described with reference to FIG. 33.
  • In FIG. 33, 1001 denotes an imaging device, 1007 a light source, 1015 an observing point O, 1016 a linear polarizing filter having a rotation mechanism (not shown) such as a motor, and 1019 the normal direction. In a state where the natural light is illuminating as the light source, if images are captured while rotating the polarizing filter 1016 by means of the rotation mechanism, the reflected light intensity will be a sin function of the period π, as shown in FIG. 34.
  • Consider the angles ψmax and ψmin of the polarizing filter at which the maximum value Imax and the minimum value Imin of the reflected light intensity are measured. Assuming that a plane containing the imaging device 1001, the light source 1007 and the observing point O 1015 is the plane of incidence and the specular reflection component is dominant for the object, it is known that ψmax is such a direction that the polarization direction of the polarizing filter 1016 is perpendicular to the plane of incidence and ψmin is such a direction that the polarization direction of the polarizing filter 1016 is parallel to the plane of incidence.
  • As described above, where the light source is a polarized light source, a reflected light component that has polarized characteristics is the specular reflection component reflected at the surface of the observing point O and a non-polarized component is the diffuse reflection component. Thus, it can be seen that the observing point O at which there occurs an intensity difference between the maximum value Imax and the minimum value Imin of the reflected light intensity is an observing point where the specular reflection component is strong, i.e., where light is regularly reflected (the normal direction 1019 of the observing point O is a bisector between the light source direction from the observing point O and the imaging device direction from the observing point O). Therefore, the normal direction 1019 also exists within the plane of incidence. Thus, by estimating ψmax or ψmin, it can be assumed that the normal direction 1019 exists within the following plane:
      • a plane passing through the imaging device 1001 and containing the polarization direction ψmin of the polarizing filter 1016 (or the direction perpendicular to ψmax).
  • Herein, ψmax or ψmin are estimated by performing the process of fitting a sin function.
  • Moreover, it is possible to estimate two different planes containing the normal direction 1019 by performing a similar process while changing the position of the imaging device 1001. The normal direction 1019 is estimated by obtaining the line of intersection between the two estimated planes. In this process, it is necessary to estimate the amount of movement of the imaging device 1001, but it can be done by using the 8-point method, or the like.
  • Of course, as with the diffuse reflection/specular reflection separating section 202, an imaging device having a different polarization direction for each pixel may be used.
  • Of course, the normal direction 1019 may be obtained by providing a plurality of imaging devices, instead of changing the position of the imaging device 1001.
  • The surface normal information is obtained as described above by the photometric stereo method and the method using polarization characteristics. With a method such as the slit-ray projection method or the stereoscopic method, the three-dimensional position information of the object is obtained. The object surface normal information is information on the gradient of the three-dimensional position of the object within a small space, and these are both object shape information.
  • By the process described above, the shape information obtaining section 204 obtains the surface normal information or the three-dimensional position information of the object, which are shape information of the object.
  • By the process described above, the following information are obtained:
      • Diffuse reflection image of object;
      • Specular reflection image of object;
      • Three-dimensional shape information of object; and
      • Light source position/illuminance.
  • The shadow removing section 205 estimates shadow areas in an image and performs the shadow removing process. While various methods have been proposed for such a shadow removing and shadow area estimating process, it is possible for example to utilize the fact that a shadow area has a low luminance value, and to estimate that a pixel whose luminance value is less than or equal to a threshold is a shadow area.
  • Where the three-dimensional shape information has been obtained by the shape information obtaining section 204, one may employ ray tracing, which is a rendering method being widely used in the field of computer graphics. While a rendering process is done by calculating coordinate data of the object or data relating to the environment such as the position of the light source or the point of view, a ray tracing process is done by tracing backwards light rays that reach the point of view. Thus, it is possible with ray tracing to calculate where a shadow is formed and the degree of the shadow.
  • Then, the resolutions of the diffuse reflection image and the specular reflection image separated by the diffuse reflection/specular reflection separating section 202 are separately increased by different methods. First, the process for the diffuse reflection image will be described.
  • An albedo estimating section 206 estimates the albedo of the object by using the diffuse reflection image separated by the diffuse reflection/specular reflection separating section 202. Since the albedo is not influenced by the light source information, it is possible to realize a process that is robust against light source variations by performing the process using an albedo image.
  • This process will now be described. From (Expression 25), the following relationship holds for the diffuse reflection component.
  • [ Formula 60 ] r p = i f ( p ) t f · cos θ i ( Expression 36 )
  • Herein, θi denotes the angle formed between the object normal direction vector and the light source vector. With the light source information obtaining section 203 and the shape information obtaining section 204, the angle θi is known. Moreover, since the incident illuminance tf of the light source can also be estimated as will be described later, the albedo rp of the object is obtained from (Expression 36).
  • In this process, where cos θi has a value less than or equal to zero, i.e., where it is an attached shadow, (Expression 36) will be meaningless as the albedo becomes negative or a division by zero occurs. However, since such pixels have been removed by the shadow removing section 205 described above, such a problem does not occur.
  • Of course, it is possible to use a pseudo-albedo rp′ obtained by normalizing the albedo with the maximum luminance value of the specular reflection image by the following expression, instead of obtaining the albedo of the object.
  • r p = i df ( p ) i sf_max · cos θ i [ Formula 61 ]
  • Herein, isf max denotes the maximum luminance value of the specular reflection image. Such a pseudo-albedo is effective in cases where the radiance (illuminance) of the light source cannot be obtained by the light source information estimating section 203. Where a pseudo-albedo image is used, the maximum luminance value isf max of the specular reflection image used for the normalization is stored in a memory. FIG. 51 is a diagram showing data to be stored in the memory in a case where the albedo estimating section 206 uses a pseudo-albedo. The produced pseudo-albedo images and the maximum luminance value isf max of the specular reflection image used for the normalization are stored.
  • Assuming that the specular reflection parameter is uniform over a wide area of the object and there exist normals of various directions to the object surface, there exists a regular reflection pixel that causes regular reflection as long as the light source exists at such a position that it illuminates the object for the camera. Thus, the maximum luminance value isf max of the specular reflection image is the luminance value of the regular reflection pixel.
  • Where the reflection characteristics are uniform and the viewing direction 1021 is substantially uniform, the ratio between the luminance value of the regular reflection pixel at one light source position and that of the regular reflection pixel at another light source position is substantially equal to the light source radiance ratio between these light sources. Therefore, there remains the influence of the light source radiance if the luminance value idf(p) of the diffuse reflection image is simply divided by θi. However, by using a pseudo-albedo image obtained by further normalizing with the maximum luminance value isf max of the specular reflection image, which is the luminance value of the regular reflection pixel, it is possible to produce a diffuse component image that is not influenced by the light source even in a case where the radiance of the light source cannot be obtained.
  • It is also possible to produce a pseudo-albedo by normalizing with the maximum luminance value of the diffuse reflection image or the maximum luminance value of the input image, instead of normalizing with the maximum luminance value is isf max of the specular reflection image.
  • Next, the super-resolution process for an albedo image obtained as described above will be described.
  • An albedo super-resolution section 207 performs the super-resolution of the albedo image estimated by the albedo estimating section 206. This process will now be described in detail.
  • As described above, an albedo image is an image representing the reflectance characteristics that are inherent to the object and are not dependent on optical phenomena such as specular reflection of light and shading. Since object information is indispensable for the super-resolution process of the present embodiment, the process is based on learning the object in advance. Herein, a super-resolution process based on the texton (the texture feature quantity of an image) is used.
  • FIG. 35 is a diagram showing the concept of the texton-based super-resolution process. The low-resolution image LR (the number of pixels: N×N) input upon execution of the process is enlarged by interpolation by a factor of M×M so that the number of pixels is matched with the target number of pixels. The image whose number of pixels is MN×MN is referred to as an “exLR image”. The high-frequency component of the image is lost in the exLR image, and the exLR image will be a blurred image. Sharpening this blurred image is nothing but a super-resolution process.
  • Then, the luminance value of the exLR image is transformed for each pixel to the T-dimensional texton based on multiple resolutions by using the multiple-resolution transformation WT. This transformation uses a process such as a wavelet transformation or a pyramid structure decomposition. As a result, a total of MN×MN T-dimensional texton vectors are produced for each pixel of the exLR image. Then, in order to improve the generality, clustering is performed on the texton vectors to selectively produce L input representative texton vectors. These L texton vectors are subjected to a transformation based on database information learned in advance to produce a T-dimensional resolution-increased texton vector. The transformation uses a table lookup process, and a linear or non-linear transformation in the T-dimensional multidimensional feature vector space. The resolution-increased texton vector is converted back to image luminance values by an inverse transformation IWT such as an inverse wavelet transformation or a pyramid structure reconstruction, thus forming a resolution-increased image HR.
  • Since a very large amount of time is required for the searching in the process of clustering MN×MN T-dimensional texton vectors and for the table lookup process, it has been difficult with this process to realize a high processing speed for videos, and the like. In view of this, the following improvements have been introduced: 1) performing the clustering process on the LR image; and 2) replacing the table lookup process with a linear matrix transformation. With this process, by using the fact that one pixel of an LR image corresponds to a cell of M×M pixels of an HR image, the linear matrix transformation from T-dimensional to T-dimensional can be performed by cells, thereby maintaining the spatial continuity within a cell. The linear matrix to be used is optimally selected based on the result of clustering. In a case where the discontinuity at the cell boundary imposes a problem, there may be added a process such as partially overlapping matrix processing unit blocks with one another.
  • FIG. 36 schematically illustrates the improvement above. The LR image is WT-transformed into L (herein, L=3) representative feature vectors in the T-dimensional feature quantity space. Each feature vector is assigned a different linear matrix. This, when stored, is nothing but a resolution-increasing database.
  • The details of the image processing process will now be described with reference to an example where a 4×4 resolution-increasing process is performed on a low-resolution image of N=32 and M=4, i.e., 32×32 pixels. It is assumed that while the albedo image is an (RGB) color image, the color image is handled as independent color component images obtained by converting (RGB) to luminance/color difference (YCrCB). Normally, no awkwardness is introduced by using a high resolution only for the luminance Y component while the color component is the low-resolution color difference signal as it is, for a factor of about 2×2. For 4×4 or higher, however, it is necessary to also increase the resolution of the color signal, and the components are therefore treated similarly. A process for only one component of a color image will now be described.
  • (Learning Process)
  • FIG. 37 is a PAD diagram illustrating the flow of the learning process, and FIG. 38 is a diagram illustrating the relationship between a pixel to be processed and a cell to be processed in the processed image. The process will now be described referring to FIGS. 37 and 38 alternately.
  • First, in S311 to S313, the low-resolution image LR image, the high-resolution image HR image, and the enlarged image exLR image being a low-resolution image are input. These images are all produced from HR, and it is ensured that there is no pixel shifting at the time of image capturing. Bicubic interpolation is used for producing the exLR image from the LR image. In FIG. 38, three different images are provided, i.e., the high-resolution image HR (the number of pixels: 128×128), the low-resolution LR image (the number of pixels: 32×32), and the exLR image (the number of pixels: 128×128) that is obtained by matching LR to HR in terms only of the number of pixels.
  • In S314, the LR image is textonized. Specifically, a two-dimensional discrete stationary wavelet transformation (SWT transformation) using a Haar basis is performed. Assuming that the number of stages of the SWT transformation is two (2-step), there is produced a six-dimensional LRW image (the number of pixels: 32×32=1024). Naturally, a 2-step two-dimensional discrete stationary wavelet transformation yields a seven-dimensional feature vector. However, the LL component image of the lowest frequency is near the average luminance information of the image, and in order to store this, only the remaining six components are used.
  • In S315, a total of 1024 six-dimensional vectors of the textonized LRW image are clustered down to Cmax vectors. Herein, a K-means clustering is used to cluster them down to Cmax=512, for example. The collection of the resulting 512 texton vectors is referred to as the “cluster C”. All of the 1024 textons may be used without clustering.
  • In S316, the process determines LR pixels identified to be the same cluster as the cluster C. Specifically, the pixel values of the LR image are replaced by the texton numbers of the cluster C.
  • In S317, while repeatedly performing the process on all textons of the cluster C, the process searches for a pixel cell of exLR and a pixel cell of the HR image corresponding to the subject texton, and stores the subject cell number. This searching process needs to be performed only for the number of pixels of the LR image, thus providing a significant reduction in the searching time in processes with high factors.
  • The relationship between a pixel of the LR image, a pixel cell of the exLR image and a pixel cell of the HR image will be described with reference to FIG. 38. In FIG. 38, assume that two pixels 2001 and 2002 on the LR image are identified to be the same cluster as C (cluster number: Ci=0). Then, it can be assumed that they correspond to pixel cells 2003 and 2004 on the exLR image, which is obtained by simply enlarging the image while maintaining the positional relationship and that they correspond to pixels 2005 and 2006 on the HR image. Then, the numbers of the two cell positions are stored as having the subject texton. The number of pixels included in one pixel cell is equal to the factor of magnification, i.e., 4×4=16.
  • Then, in S318, these groups of pixel cells are textonized by pairs of exLR images and HR images. Specifically, a two-dimensional discrete stationary wavelet transformation is performed, thereby producing an exLRW image and an HRW image.
  • In S319 and S320, pairs of textons obtained from the HRW image and the exLRW image are integrated each in the form of a matrix. Each one is in the form of a 6×Data_num matrix. Herein, Data_num is (the number of pixels in one cell)×(the number of cells searched), and in the above example where Ci=0, it is 16×2=32 because two cells are searched.
  • In S321, the process calculates, by the method of least squares, a 6×6 matrix M from a total of 2×4×4=128 feature vectors belonging to these integrated matrices, and the calculated matrix is stored in the database CMat(K) together with the cluster number K=0 in S322. Where the exLR and HR texton matrices integrated in S319 and S320 are denoted as Lf and Hf (size: 6×Data_num), respectively, and the matrix to be obtained as M(6×6), the method of least squares in S322 can be performed as follows.
  • [Formula 62]

  • M=Hf·Lf T(Lf·Lf T)−1
  • Then, a similar process is repeated for the cluster number K=1, and this is repeated until K=511. Thus, CMat is a group of 6×6 conversion matrices each defined for one cluster number.
  • Finally, in S323 and S324, the cluster C used and the conversion matrix CMat learned are output. Thus, the obtained cluster C and the learned conversion matrix CMat are stored in an albedo DB 208.
  • FIG. 39 is a diagram showing the process of the two-dimensional discrete stationary wavelet transformation. With a normal wavelet transformation, the image shrinks as the stage of decomposition progresses while the filter bank configuration remains the same. However, with a two-dimensional discrete stationary wavelet transformation, the transformed image size remains unchanged as the stage of decomposition progresses, and the two filters, i.e., the scaling function F and the wavelet function G, are upsampled (t) and elongated by a power of 2, thus realizing a multiple-resolution analysis. With the Haar basis, the specific values of F and G and how the upsampling is performed are as shown in Table 1.
  • TABLE 1
    F G
    1-STEP(j = 0) ( 1 2 1 2 ) ( - 1 2 1 2 )
    2-STEP(j = 1) ( 1 2 0 1 2 0 ) ( - 1 2 0 1 2 0 )
  • Where the cA image being the LL component is subjected to wavelet decomposition one stage further, four different images are produced as shown in FIG. 39 by alternately one-dimensionally convoluting the F and G filters: 1) F in row direction and F in column direction: cA image (LL component); 2) F in row direction and G in column direction: Dh image (LH component); 3) G in row direction and F in column direction: cDv image (HL component); and 4) G in row direction and G in column direction: cDd image (HH component).
  • FIG. 40 shows an exemplary resulting image obtained when performing a two-dimensional discrete stationary wavelet transformation on a test image. A texton vector is what is obtained by arranging corresponding values for each pixel of 1-STEP and 2-STEP transformed images of these wavelets, and is a seven-dimensional vector as follows.

  • (cDh1,cDv1,cDd1,cDh2,cDv2,cDd2,cA2)  [Formula 63]
  • Note however that the high-resolution transformation is performed by using only the six-dimensional vector portion, except for cA2 being the 2-STEP LL component, while the cA2 component is stored.
  • The number of steps of the wavelet transformation is set to 2-STEP both in S314 and in S318. The larger the number of steps is, the more general features of the image can be represented by textons. While the number of steps is variable in the present invention, 2-STEP is used in S314 for clustering the LR image because it may not be possible with 1-STEP to obtain sufficient information for the surrounding pixels. In S318 for producing textons used for increasing the resolution of the exLR image, it has been experimentally confirmed that a better image can be obtained with 3-STEP than with 2-STEP for a factor of 8×8. Thus, it is preferred to determine the number of steps in view of the factor of magnification.
  • (Super-Resolution Process)
  • FIG. 41 is a PAD diagram illustrating the flow of the process being performed, and FIG. 42 is a diagram showing the relationship of pixel cells when the process is performed.
  • First, in S331 and S332, an LR image and an exLR image obtained by enlarging the LR image are input. As in the learning process, the number of pixels of the LR image is 32×32 and the number of pixels of the exLR image is 128×128. The exLR image is produced by a bicubic method as is the method for producing the exLR image, which is an image learned, in S313 of FIG. 37.
  • Then, in S333 and S334, the cluster C obtained during the learning process and the conversion matrix CMat are read out and input from the albedo DB 208.
  • In S335, the LR image is textonized. Specifically, a two-dimensional discrete stationary wavelet transformation (SWT transformation) using a Haar basis is performed, as shown in FIG. 42. Assuming that the number of stages of the SWT transformation is two (2-step), there is produced a six-dimensional LRW image (the number of pixels: 32×32=1024). Naturally, a 2-step two-dimensional discrete stationary wavelet transformation yields a seven-dimensional feature vector. However, the LL component image of the lowest frequency is near the average luminance information of the image, and in order to store this, only the remaining six components are used.
  • Then, in S336, a texton vector of the shortest distance within the cluster C (Cmax textons) is searched for each texton to obtain the texton number (Ci). This is equivalent to texton numbers of C0, C1, . . . , Cn being assigned to pixels 2011, 2012, . . . , 2013 along one line of the LR image in FIG. 42.
  • Then, the process proceeds to S337. From this step onward, the process is to repeatedly process each cell of the HR image from one scanning line to another. Specifically, in FIG. 42, as cells 2014, 2015, . . . , 2016 of the exLR image are processed, the resolutions of corresponding cells 2023, 2024, . . . , 2025 of the HR image are successively increased.
  • In S337, the subject cell region of the exLR image is textonized. Specifically, a two-dimensional discrete stationary wavelet transformation is performed to produce an exLRW image. Cells 2017, 2018, . . . , 2019, etc., are produced.
  • In S338, the conversion matrix CMat is subtracted from the texton number to thereby determine the conversion matrix M in the subject cell. The process is performed as shown in FIG. 42. In the LRW image, texton numbers are already assigned, i.e., the pixel 2011=C0, the pixel 2012=C1, . . . , the pixel 2013=Cn. With this being applied to the cells 2017, 2018, . . . , 2019 of the exLRW image for which the positional relationship is stored, a separate 6×6 conversion matrix M can be selected from Mat using C0, C1, . . . , Cn as texton numbers for the cells.
  • In S339, the conversion matrix M is applied to each cell. This can be done by applying the following expression for all of the textons LTi (i=1−16) in the cell.

  • HT i =M·LT i  [Formula 64]
  • By repeating this process, cells 2020, 2021, . . . , 2022 of the HRW image are produced from the cells 2017, 2018, . . . , 2019 of the exLRW image, respectively.
  • Then, the seven-dimensional texton is produced by adding the LL component of 2-STEP of the exLRW image to the six-dimensional texton in these resolution-increased cells.
  • In S340, the seven-dimensional texton in each cell is subjected to an inverse SWT transformation, thus converting the textons to an image. This is repeated for all the cells of the exLR image.
  • The inverse SWT (ISWT) transformation can be realized by the signal flow shown in FIG. 43. This is substantially the same representation as FIG. 39. With a normal wavelet inverse transformation, the image is enlarged as the stage of decomposition progresses while the filter bank configuration remains the same. In contrast, with the present inverse transformation, the transformed image size remains unchanged as the stage of decomposition progresses, and the two filters, i.e., the scaling function F and the wavelet function G1, are downsampled (↓) and shortened by a power of 2, thus realizing a multiple-resolution analysis. With the Haar basis, the specific values of F and G1 and how the downsampling is performed are as shown in Table 2.
  • TABLE 2
    F G
    1-STEP(j = 0) ( 1 2 0 1 2 0 ) ( 1 2 0 - 1 2 0 )
    2-STEP(j = 1) ( 1 2 1 2 ) ( 1 2 - 1 2 )
  • The resolution of one component of an albedo image is increased as described above. By performing this process for the entire albedo image, a resolution-increased albedo image is synthesized.
  • In this process, the image may be normalized so that the process can be performed even if the size, orientation, direction, etc., of the object included in the albedo image change. It can be assumed that a texton-based super-resolution process may not exhibit a sufficient super-resolution precision when the size or the orientation in the albedo image are different from those of the learned data. In view of this, a plurality of pairs of albedo images are provided to solve this problem. Specifically, the process synthesizes images obtained by rotating an albedo image by 30 degrees, and the super-resolution process is performed on all of the images, so as to accommodate changes in the orientation or the direction. In such a case, in the process of searching for a texton of the shortest distance in step S336 of FIG. 41 being a PAD diagram for “Super-resolution Process” as described above, the process may search for a texton of the shortest distance for each of the textons of a plurality of LR images obtained from images resulting from the rotation process to thereby search for one with the shortest distance, thus obtaining the texton number (Ci).
  • Moreover, in order to accommodate changes in size, the process may synthesize albedo images obtained while varying the image size.
  • Alternatively, based on the actual, size, an enlarging/shrinking process may be performed so that a 5 cm×5 cm image is always turned to an 8×8 pixels, for example, and textons may be produced for such an image. Since the size of the object is known by the shape information obtaining section 204, the size variations may be accommodated by producing textons from images of the same size for “Learning Process” and for “Super-resolution Process”.
  • Alternatively, a plurality of pairs of textons may be produced while rotating the albedo image “Learning Process” instead of rotating the albedo image “Super-resolution Process”, and the cluster C and the learned conversion matrix CMat may be stored in the albedo DB 208.
  • Moreover, the process may estimate what the input object is, and perform an orientation estimation to estimate how the estimated object is rotating. Such a process can be realized by widely-used image recognition techniques. For example, this can be done by placing a tag such as RFID on the object so that the process can recognize the object by recognizing the tag information and further estimate the shape information of the object from the tag information, whereby an orientation estimation is performed based on the image or the shape information of the object (see, for example, Japanese Laid-Open Patent Publication No. 2005-346348).
  • A diffuse image super-resolution section 209 synthesizes a high-resolution diffuse image from a high-resolution albedo image synthesized by the albedo super-resolution section 207. This process will now be described.
  • As described above, an albedo image is what is obtained by dividing the diffuse component image by the inner product between the light source vector and the normal vector of the object. Therefore, the process synthesizes a high-resolution diffuse image by multiplying the albedo image by the inner product between the light source vector estimated by the light source information estimating section 203 and the high-resolution normal direction vector of the object obtained by the parameter resolution increasing section to be described later. Where a plurality of light sources are estimated by the light source information estimating section 203, the process synthesizes a high-resolution diffuse image for each of the light sources and combines together the images to synthesize a single super-resolution diffuse image.
  • In a case where a pseudo-albedo image is used instead of an albedo image, the process multiplies the pseudo-albedo image by the inner product between the light source vector estimated by the light source information estimating section 203 and the high-density normal vector of the object obtained by a shape information resolution increasing section 211, and further multiplies it by the maximum luminance value isf max of the specular reflection image used for normalization, thus synthesizing a super-resolution diffuse reflection image. Since the maximum luminance value isf max of the specular reflection image used in normalization is stored in the memory by the albedo estimating section 206, the process can simply read out the stored information. Of course, in a case where normalization is done by using the maximum luminance value of the diffuse reflection image or the maximum luminance value of the input image, the process multiplies it by the maximum luminance value of the diffuse reflection image or the maximum luminance value of the input image used in normalization, instead of multiplying it by the maximum luminance value isf max of the specular reflection image.
  • By the process described above, it is possible to synthesize a super-resolution diffuse reflection image. While the super-resolution process is performed by using an albedo image, the process may directly perform super-resolution of a diffuse reflection image rather than the albedo image. In such a case, the learning process may be performed by using the diffuse reflection image.
  • Next, a super-resolution process for specular reflection images will be described. Herein, an image is decomposed into parameters, and the resolution of each parameter is increased separately. This process will be described step by step.
  • Using the normal information of the object obtained by the shape information obtaining section 204 and the diffuse reflection image and the specular reflection image separated by the diffuse reflection/specular reflection separating section 202, a parameter estimating section 210 estimates parameters representing the object. Herein, a method using the Cook-Torrance model, which is widely used in the field of computer graphics, will be described.
  • In the Cook-Torrance model, a specular reflection image is modeled as follows.
  • [ Formula 65 ] I s = K s ρ s , λ ( Expression 37 ) [ Formula 66 ] K s = 1 π E i k s ( Expression 38 ) [ Formula 67 ] ρ s , λ = F λ DG n · V ( Expression 39 ) [ Formula 68 ] F λ = 1 2 ( g λ - c ) 2 ( g λ + c ) 2 ( 1 + [ c ( g λ + c ) - 1 ] 2 [ c ( g λ - c ) + 1 ] 2 ) ( Expression 40 ) [ Formula 69 ] c = L · H ( Expression 41 ) [ Formula 70 ] g λ = n λ 2 - 1 + c 2 ( Expression 42 ) [ Formula 71 ] D = 1 4 m 2 cos 4 β exp { - tan 2 β m 2 } ( Expression 43 ) [ Formula 72 ] G = min { 1 , 2 ( n · H ) ( n · V ) ( V · H ) , 2 ( n · H ) ( n · L ) ( V · H ) } ( Expression 44 ) [ Formula 73 ] E i = j = 0 n - 1 I j n · L j ( Expression 45 )
  • Herein, Ei denotes the incident illuminance, ρs,λ the bidirectional reflectance of the specular reflection component at the wavelength λ, n the normal vector of the object, V the viewing vector, L the light source vector, H the halfway vector between the viewing vector and the light source vector, and β the angle between the halfway vector H and the normal vector n. Fλ is the Fresnel coefficient being the ratio of the reflected light from the dielectric surface obtained from the Fresnel formula, D is the microfacet distribution function, and G is the geometric attenuation factor representing the influence of shading by the irregularities on the object surface. Moreover, nλ is the refractive index of the object, m is a coefficient representing the roughness of the object surface, and Ij is the radiance of the incident light. Moreover, ks is a coefficient of the specular reflection component.
  • Furthermore, by using the Lambertian model of (Expression 25), (Expression 12) is expanded as follows.
  • [ Formula 74 ] I = I a + I d + I s = I a + K D + K s ρ s , λ Herein , ( Expression 46 ) [ Formula 75 ] K D = 1 π S r E i k d ρ d ( Expression 47 ) [ Formula 76 ] S r = dpx · dpy 2 π r 2 ( Expression 48 )
  • Herein, ρd denotes the reflectance (albedo) of the diffuse reflection component, dpx and dpy the length of one pixel of the imaging device in the x direction and the y direction, respectively, and r the distance from the observing point O to the imaging device. Moreover, kd is a coefficient satisfying the following relationship.
  • [Formula 77]

  • k d +k s=1  (Expression 49)
  • Sr is a constant representing the difference between the luminance value of the diffuse reflection component and that of the specular reflection component, indicating that the diffuse reflection component reflects energy in every direction from the object. FIG. 44 is a schematic diagram illustrating the constant Sr. In FIG. 44, the diffuse reflection component energy reflected at the observing point O spreads hemispherically. As the imaging device 1001 is spaced apart from the observing point O by r, the ratio Sr between the energy reaching one imaging element of the imaging device and the total energy reflected at the observing point O is expressed by (Expression 48).
  • As described above, the parameter estimating section 210 estimates parameters from (Expression 37) to (Expression 45), (Expression 46), (Expression 47) and (Expression 48).
  • Combining these relationships together, the known parameter for parameter estimation and parameters to be estimated are as follows:
  • (Known Parameters)
      • Environmental light component Ia;
      • Diffuse reflection component Id;
      • Specular reflection component Is;
      • Normal vector n of object;
      • Light source vector L;
      • Viewing vector V;
      • Halfway vector H;
      • Angle β between halfway vector H and normal vector n;
      • Lengths dpx and dpy of one pixel of imaging device 1001 in x and y directions;
      • Distance r between imaging device 1001 and observing point O; (Parameters to be Estimated)
      • Incident illuminance Ei;
      • Coefficient ks of specular reflection component;
      • Roughness m of object surface; and
      • Refractive index ηλ of object.
  • Herein, the coefficient kd of diffuse reflection component and the reflectance (albedo) ρd of the diffuse reflection component are also unknown parameters, but these are not estimated so as to estimate only the parameters of the specular reflection component.
  • FIG. 45 is a flow chart showing the process of the parameter estimating section 210. The process includes the following two steps.
  • First, the incident illuminance Ei is obtained by using the light source information (step S351). Herein, the process uses the light source position information obtained by the light source information estimating section 203, the distance information between the imaging device and the object obtained by the shape information obtaining section 204, and the light source illuminance obtained by the light source information obtaining section 203. This is obtained from the following expression.
  • [ Formula 78 ] E i = R 1 2 R 2 2 · cos θ 1 cos θ 2 · I i ( Expression 50 )
  • Herein, Ii denotes the incident illuminance of the light source 1007 measured by an illuminance meter 1018 provided in the imaging device 1001, R1 the distance between the imaging device 1001 and the light source 1007, R2 the distance between the light source 1007 and the observing point O, θ1 the angle between the normal 1019 at the observing point O and the light source direction 1010C, and θ2 the angle between the optical axis direction 1005 in the imaging device 1001 and the light source direction 1010A (see FIG. 46). Where it can be assumed that the size of the object is sufficiently smaller than the distance R2 between the light source 1007 and the observing point O, the distance R2 will be equal at all the observing points O on the object. Therefore, (R1/R2) in (Expression 50) becomes a constant, and no longer needs to be actually measured.
  • Next, the unknown parameters m, θλ and ks are estimated by using the simplex method (step S352). The simplex method is a method in which variables are assigned to vertices of a shape called a “simplex”, and a function is optimized by changing the size and shape of the simplex (Noboru Ota, “Basics Of Color Reproduction Optics”, pp. 90-92, Corona Publishing Co., Ltd.). A simplex is a collection of (n+1) points in an n-dimensional space. Herein, n is an unknown number to be estimated and is herein “3”. Therefore, the simplex is a tetrahedron. With vectors xi representing the vertices of the simplex, new vectors are defined as follows.
  • [ Formula 79 ] x h = arg max x i { f ( x i ) } , i = 1 , 2 , , n + 1 ( Expression 51 ) x s = arg max x i { f ( x i ) } , i h [ Formula 80 ] x l = arg min x i { f ( x i ) } , i = 1 , 2 , , n + 1 [ Formula 81 ] [ Formula 82 ] x 0 = x i n + 1 , i h , i = 1 , 2 , , n + 1 Herein , ( Expression 52 ) arg max x i { f ( x i ) } [ Formula 83 ] arg min x i { f ( x i ) } [ Formula 84 ]
  • denote xi that maximize and minimize the function f(xi), respectively. The three operations used in this method are defined as follows.
  • 1. Reflection:
  • [Formula 85]

  • x r=(1+α)x 0 −αx h  (Expression 53)
  • 2. Expansion
  • [Formula 86]

  • x e =βx r+(1−β)x h  (Expression 54)
  • 3. Contraction
  • [Formula 87]

  • x c =γx h+(1−γ)x 0  (Expression 55)
  • Herein, α(>0), β(>1) and γ(1>γ>0) are coefficients.
  • The simplex method is based on the assumption that by selecting one of the vertices of the simplex that has the greatest function value, the function value in the reflection will be small. If this assumption is correct, it is possible to obtain the minimum value of the function by repeating the same process. Specifically, parameters given by initial values are updated by the three operations repeatedly until the error with respect to the target represented by the evaluation function becomes less than the threshold. Herein, m, ηλ and ks are used as parameters, and the difference ΔIs between the specular reflection component image calculated from (Expression 37) and the specular reflection component image obtained by the diffuse reflection/specular reflection separating section 202, represented by (Expression 56), is used as the evaluation function.
  • [ Formula 88 ] Δ I s = j i M s ( i , j ) ( i s ( i , j ) - i s ( i , j ) ) 2 ( Expression 56 )
  • Herein, is(i,j)′ and is(i,j) are the calculated specular reflection image estimate value Is′, and the luminance value of the pixel (i,j) of the specular reflection component image Is obtained by the diffuse reflection/specular reflection separating section 202, and Ms(i,j) is a function that takes 1 when the pixel (i,j) has a specular reflection component and 0 otherwise.
  • This process will now be described in detail. FIG. 47 is a flow chart illustrating the flow of this process.
  • First, the counters n and k for storing the number of times the updating operation has been repeated are initialized to 0 (step S361). The counter n is a counter for storing the number of times the initial value has been changed, and k is a counter for storing the number of times the candidate parameter has been updated by the simplex for an initial value.
  • Then, random numbers are used to determine the initial values of the candidate parameters m′, ηλ′ and ks′ of estimate parameters (step S362). Based on the physical constraint conditions of the parameters, the range of initial values was determined as follows.
  • [Formula 89]

  • m≧0

  • ηλ≧1.0

  • 0≦ks≦1.0

  • 0≦Fλ≦1.0

  • 0≦D  (Expression 57)
  • Then, the obtained candidate parameters are substituted into (Expression 37) to obtain the specular reflection image estimate value Is′ (step S363). Furthermore, the difference ΔIs between the calculated specular reflection image estimate value Is′ and the specular reflection component image obtained by the diffuse reflection/specular reflection separating section 202 is obtained from (Expression 56), and this is used as the evaluation function of the simplex method (step S364). If the obtained ΔIs is sufficiently small (Yes in step S365), the candidate parameters m′, ηλ′ and ks′ are selected as the estimate parameters m, ηλ and ks, assuming that the parameter estimation has been succeeded, thus terminating the process. If ΔIs is large (No in step S365), the candidate parameters are updated by the simplex method.
  • Before the candidate parameters are updated, the number of times update has been done is evaluated. First, 1 is added to the counter k storing the number of times update has been done (step S366), and the value of the counter k is judged (step S367). If the counter k is sufficiently great (No in step S367), it is determined that the operation has been repeated sufficiently, but the value has dropped to the local minimum and the optimal value will not be reached by repeating the update operation, whereby the initial values are changed to attempt to escape from the local minimum. Therefore, 1 is added to the counter n and the counter k is set to 0 (step S371). It is determined whether the value of the counter n is higher than the threshold to thereby determine whether the process is continued as it is or the process is terminated as being unprocessable (step S372). If n is greater than the threshold (No in step S372), the process is terminated determining that the image cannot be estimated. If n is smaller than the threshold (Yes in step S372), initial values are re-selected from random numbers within the range of (Expression 57) (step S362) to repeat the process. Such a threshold for k may be, for example, 100, or the like.
  • In step S367, if the counter k is less than or equal to the threshold (Yes in step S367), the candidate parameters are changed by using (Expression 53) to (Expression 55) (step S368). This process will be described later.
  • Then, it is determined whether the modified candidate parameters are meaningful as a solution (step S369). Specifically, the modified parameters may become physically meaningless values (for example, the roughness parameter m being a negative value) as the simplex method is repeated, and such a possibility is eliminated. For example, the following conditions may be given so that a parameter is determined to be meaningful if it satisfies the condition and meaningless otherwise.
  • [Formula 90]

  • 0≦m

  • 1.0≦ηλ

  • 1.0≦ks≦1.0

  • 0.0≦D

  • 0.0≦Fλ≦1.0  (Expression 58)
  • These values can be obtained from the object. For example, the refractive index ηλ is a value determined by the material of the object. For example, it is known to be 1.5-1.7 for plastic and 1.5-1.9 for glass, and these values can be used. Thus, if the object is plastic, the refractive index ηλ can be set to 1.5-1.7.
  • If the modified parameters satisfy (Expression 58) (Yes in step S369), it can be assumed that the candidate parameters are meaningful values, and they are set as new candidate parameters (step S370), and the update process is repeated (step S363). If the modified parameters do not satisfy (Expression 58) (No in step S369), the update process for the initial values is canceled, and the update is performed with new initial values (step S371).
  • The modifying process in step S368 will now be described in detail. FIG. 48 is a flow chart showing the flow of the process. Herein, the candidate parameters m′, ηλ′ and ks′ are represented as a vector and it is used as the parameter x. Thus,
  • x = [ m η s , λ k s ] T [ Formula 91 ]
  • First, by using (Expression 51), (Expression 52) and (Expression 53), the parameter xr having gone through the reflection operation is calculated, and (Expression 56) is used to calculate the difference ΔIs(xr) with respect to the specular reflection component image with xr (step S381). Then, the obtained ΔIs(xr) and ΔIs(xs) of which the evaluation function was the second worst are compared with each other (step S382). If ΔIs(xr) is smaller than ΔIs(xs) (Yes in step S382), the evaluation value ΔIs(xr) having gone through the reflection operation and ΔIs(xl) whose evaluation value is currently the best are compared with each other (step S383). If ΔIs(xr) is larger (No in step S383), xh of which the evaluation value is worst is changed to xr (step S384), and the process is terminated.
  • If ΔIs(xr) is smaller than ΔIs(xi) (Yes in step S383), (Expression 54) is used to perform the expansion process and to calculate the difference ΔIs(xe) between the parameter xe and the specular reflection component image with xe (step S385). Then, the obtained ΔIs(xr) and ΔIs(xr) obtained by the reflection operation are compared with each other (step S386). If ΔIs(xe) is smaller than ΔIs(xr) (Yes in step S386), xh of which the evaluation value has been worst is changed to xe (step S387), and the process is terminated.
  • If ΔIs(xe) is greater than ΔIs(xr) (No in step S386), xh of which the evaluation value has been worst is changed to xr (step S387), and the process is terminated.
  • In step S382, if ΔIs(xr) is greater than ΔIs(xs) (No in step S382), the evaluation value ΔIs(xr) having gone through the reflection operation and ΔIs(xh) of which the evaluation value is currently worst are compared with each other (step S388). If ΔIs(xr) is smaller than ΔIs(xh) (Yes in step S388), xh of which the evaluation value has been worst is changed to xr (step S389), and (Expression 55) is used to calculate the difference ΔIs(xc) between the parameter xc having gone through the contraction operation and the specular reflection component image with xc (step S390). If ΔIs(xr) is greater than ΔIs(xh) (No in step S388), the difference ΔIs(xc) between the parameter xc having gone through the contraction operation and the specular reflection component image with xc is calculated (step S390) without changing xh.
  • Then, the obtained ΔIs(xc) and ΔIs(xh) of which the evaluation value is worst are compared with each other (step S391). If ΔIs(xc) is smaller than ΔIs(xh) (Yes in step S391), xh of which the evaluation value has been worst is changed to xc (step S392), and the process is terminated.
  • If ΔIs(xc) is greater than ΔIs(xh) (No in step S391), all the candidate parameters xi (i=1, 2, 3, 4) are changed as follows, and the process is terminated.
  • x i = 1 2 ( x i + x l ) [ Formula 92 ]
  • By repeating the process described above, m, ηλ and ks, being unknown parameters in the specular reflection image, are estimated.
  • By the process described above, it is possible to estimate all the unknown parameters.
  • The model used for the parameter estimation does not need to be the Cook-Torrance model, but may be, for example, the Torrance-Sparrow model, the Phong model, or the simplified Torrance-Sparrow model (for example, K. Ikeuchi and K. Sato, “Determining Reflectance Properties Of An Object Using Range And Brightness Images”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 11, pp. 1139-1153, 1991).
  • The parameter estimating method does not need to be the simplex method, but may be an ordinary parameter estimating method, such as, for example, the gradient method or the method of least squares.
  • The process described above may be performed for each pixel, or an equal set of parameters may be estimated for each of divided regions. Where the process is performed for each pixel, it is preferred to obtain samples in which known parameters such as the normal vector n of the object, the light source vector L or the viewing vector V are varied by moving the light source, the imaging device or the object. Where the process is performed for each region, it is preferred that the division of regions is changed so that variations in the parameters obtained for each region are little so as to realize an optimal parameter estimation.
  • The normal information resolution increasing section 211 increases the resolution of the surface normal information obtained by the shape information obtaining section 204. This is realized as follows.
  • First, the surface normal information obtained by the shape information obtaining section 204 is projected onto the image obtained by the image-capturing section 201 to obtain the normal direction corresponding to each pixel in the image. Such a process can be realized by performing a conventional camera calibration process (for example, Hiroki Unten, Katsushi Ikeuchi, “Texturing 3D Geometric Model For Virtualization Of Real-World Object”, CVIM-149-34, pp. 301-316, 2005).
  • In this process, the normal vector np is represented by polar coordinates, and the values are denoted as θp and φp (see FIG. 49). The images of θ and φ being the normal components are produced by the process described above. The resolutions of the obtained θ and φ images are increased by a method similar to the albedo super-resolution section 207 described above to thereby estimate high-resolution normal information. In this process, a learning process is performed before the resolution increasing process to store the cluster C for the normal θ and φ components and the learned conversion matrix CMat in a normal DB 212.
  • The process described above is preferably performed only for those areas that are not removed by the shadow removing section 205 as being shadows. This is for preventing an error in the parameter estimating process from occurring due to the presence of shadows.
  • Moreover, the parameter estimating section 210 may use a controllable light source provided in the vicinity of the imaging device. The light source may be a flashlight of a digital camera. In this case, a flashlighted image captured with a flashlight and a non-flashlighted image captured without a flashlight may be captured temporally continuously, and the parameter estimation may be performed by using the differential image therebetween. The positional relationship between the imaging device and the flashlight being the light source is known, and the light source information of the flashlight such as the three-dimensional position, the color and the intensity can also be measured in advance. Since the imaging device and the flashlight are provided at positions very close to each other, it is possible to capture an image with little shadow. Therefore, parameters can be estimated for most of the pixels in the image.
  • Moreover, a parameter resolution increasing section 213 increases the resolution of the parameter obtained by the parameter estimating section 210. Herein, a simple linear interpolation is performed for increasing the resolution of all the parameters. Of course, a learning-based super-resolution method such as the albedo super-resolution section 207 described above may be used.
  • The resolution increasing method may be switched from one to another for different parameters. For example, it can be assumed that the value of the refractive index ηλ of the object being an estimate parameter will not change even if the resolution thereof is increased. Therefore, the resolution may be increased by simple interpolation for the refractive index ηλ of the object, whereas a learning-based super-resolution process may be performed for the diffuse reflection component coefficient kd, the specular reflection component coefficient ks and the reflectance (albedo) ρd of the diffuse reflection component.
  • A specular reflection image super-resolution section 214 synthesizes a high-resolution specular reflection image by using the high-resolution normal information estimated by the normal information resolution increasing section 211 and parameters whose resolutions have been increased by the parameter resolution increasing section 214. The high-resolution specular reflection image is synthesized by substituting resolution-increased parameters into (Expression 37) to (Expression 45).
  • For example, only for the incident illuminance Ei, the estimated value may be multiplied by a coefficient l (e.g., l=2) so as to obtain a higher luminance value than the actual specular reflection image. This is for enhancing the texture of the object by increasing the luminance value of the specular reflection image. Similarly, the roughness m of the object surface may be set to a greater value than the estimated value so as to synthesize a specular reflection image in which the shine is stronger than it actually is.
  • A shadow producing section 215 synthesizes a shadow image to be laid over the super-resolution diffuse reflection image and the super-resolution specular reflection image produced by a diffuse reflection image super-resolution section 209 and the specular reflection image super-resolution section 214. This can be done by using ray tracing, which is used for the shadow removing section 205.
  • Herein, it is assumed that the super-resolution section 217 has knowledge on the three-dimensional shape of the object of image capturing. The shadow producing section 215 obtains the three-dimensional shape data of the object, and estimates the three-dimensional orientation and the three-dimensional position of the object based on the appearance of the object in the captured image. An example of estimating the three-dimensional position and the three-dimensional orientation from the appearance in a case where the object is a human eye cornea is disclosed in K. Nishino and S. K. Nayar, “The World In An Eye”, in Proc. of Computer Vision and Pattern Recognition CVPR '04, vol. I, pp. 444-451, July, 2004. Although objects of which the three-dimensional position and the three-dimensional orientation can be estimated from the appearance are limited, a method of the above article can be applied to such an object.
  • Once the three-dimensional orientation and the three-dimensional position of the object are estimated, the object surface normal information can be calculated at any point on the object. The process described above is repeated for the captured images to calculate the object surface normal information. Moreover, it is possible to increase the resolution of the three-dimensional shape of the object by increasing the resolution of the object normal information by using the high-resolution normal information estimated by the normal information resolution increasing section 211. A high-resolution shadow image is estimated by performing ray tracing by using the high-resolution three-dimensional shape thus obtained and the parameters whose resolution has been increased by the parameter resolution increasing section 213.
  • A rendering section 216 synthesizes a high-resolution output image by combining together the super-resolution diffuse reflection image synthesized by the diffuse reflection image super-resolution section 209, the super-resolution specular reflection image synthesized by the specular reflection image super-resolution section 214 and the shadow image synthesized by the shadow producing section 215.
  • As described above, a high-resolution digital zooming process is performed by using the light source estimation method as described above. Thus, in the super-resolution process, the light source information is very important information that is needed for the shadow removing section 205, the albedo estimating section 206, the diffuse reflection image super-resolution section 209, the parameter estimating section 210, the specular reflection image super-resolution section 214 and the shadow producing section 215. Therefore, the light source estimation method of the present invention capable of accurately obtaining light source information is very important for the image super-resolution process.
  • While only the super-resolution of specular reflection image is performed by using the parameter estimation in the above description, the parameter estimation may be performed also for the diffuse reflection image to perform super-resolution thereof.
  • This process will now be described. There are two unknown parameters of the diffuse reflection image as described above:
      • Diffuse reflection component coefficient kd; and
      • Reflectance (albedo) Pd of diffuse reflection component.
  • Therefore, these parameters are estimated. FIG. 50 is a flow chart showing the flow of the parameter estimating process for the diffuse reflection image. After the process by the parameter estimating section 210 for the specular reflection image shown in FIG. 45, two further steps as follows are performed.
  • First, the process estimates kd as follows by using (Expression 49) and ks obtained by the parameter estimation for the specular reflection image (step S353).

  • k d=1−k s  [Formula 93]
  • Moreover, the reflectance (albedo) ρd of the diffuse reflection image is estimated as follows by using (Expression 47) (step S354).
  • ρ d = π K D S r E i k d [ Formula 94 ]
  • By the process described above, it is possible to estimate all the unknown parameters. The super resolution of the diffuse reflection image can be performed by increasing the resolution of the obtained parameters by a method similar to the parameter resolution increasing section 213.
  • The light source estimation method of the present invention is effective not only for image processes but also for image capturing, for example. Where a polarizing filter is used, for example, it can be installed at an optimal angle. This process will now be described.
  • Polarizing filters referred to as “PL filters” are used for removing the specular reflection light such as those from water surface or window glass. However, the effect of a polarizing filter varies significantly depending on the polarization axis of the polarizing filter and the plane of incidence (a plane containing the line of incident light onto the object and the line of reflected light). Therefore, where an image is captured while rotating the polarizing filter 1016A by a rotation mechanism as shown in FIG. 28, the captured image will be significantly different depending on the angle of rotation. For example, it is most effective in a case where the polarization axis is parallel to the plane of incidence. The plane of incidence on the object can be specified because the light source position can be known by using the light source estimation method of the present invention. Thus, the rotation mechanism can be controlled so that the polarizing filter is parallel to the estimated plane of incidence.
  • As described above, by using the light source estimation method of the present invention, it is possible to realize an image process such as a high-resolution digital zooming process and effective image capturing.
  • INDUSTRIAL APPLICABILITY
  • According to the present invention, it is possible to obtain a light source image and to estimate light source information with no additional imaging devices. Therefore, the present invention is useful in performing an image process such as a super-resolution process in a camera-equipped mobile telephone, a digital camera or a digital video camera, for example.

Claims (16)

1. A light source estimation device, comprising:
an imaging device condition determination section for determining whether a condition of an imaging device is suitable for obtaining light source information;
a light source image obtaining section for capturing an image by the imaging device when it is determined to be suitable by the imaging device condition determination section, to thereby obtain the captured image as a light source image;
a first imaging device information obtaining section for obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained by the light source image obtaining section;
a second imaging device information obtaining section for obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation; and
a light source information estimating section for estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information.
2. The light source estimation device of claim 1, wherein the imaging device condition determination section detects a direction of an optical axis of the imaging device and determines it to be suitable when the optical axis is pointing upward.
3. The light source estimation device of claim 1, wherein the light source image obtaining section obtains the light source image after confirming that an image is not being captured by the imaging device in response to a cameraman's operation.
4. The light source estimation device of claim 1, wherein the light source information estimating section estimates, in addition to at least one of a direction and a position of the light source, at least one of luminance, color and spectrum information of the light source.
5. The light source estimation device of claim 1, wherein:
the light source image obtaining section obtains a plurality of the light source images;
the first imaging device information obtaining section obtains the first imaging device information every time the light source image is obtained by the light source image obtaining section;
the light source estimation device includes a light source image synthesis section for synthesizing a panoramic light source image from the plurality of light source images obtained by the light source image obtaining section by using the plurality of first imaging device information obtained by the first imaging device information obtaining section; and
the light source information estimating section estimates the light source information by using the panoramic light source image and the second imaging device information.
6. The light source estimation device of claim 1, comprising optical axis direction varying means for varying an optical axis direction of the imaging device, wherein a plurality of light source images are obtained by the light source image obtaining section while the optical axis direction of the imaging device is being varied by the optical axis direction varying means.
7. The light source estimation device of claim 6, wherein:
the light source estimation device is provided in a folding-type mobile telephone; and
the optical axis direction varying means is an open/close mechanism for opening/closing the folding-type mobile telephone.
8. The light source estimation device of claim 6, wherein the optical axis direction varying means is a vibration mechanism.
9. A light source estimation device, comprising:
a light source image obtaining section for capturing an image by an imaging device to obtain the captured image as a light source image;
a first imaging device information obtaining section for obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained by the light source image obtaining section;
a second imaging device information obtaining section for obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation;
a light source information estimating section for estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information; and
optical axis direction varying means for varying an optical axis direction of the imaging device,
wherein a plurality of light source images are obtained by the light source image obtaining section while the optical axis direction of the imaging device is being varied by the optical axis direction varying means.
10. A light source estimation system for estimating light source information, comprising:
a communication terminal including the imaging device condition determination section, the light source image obtaining section, the first imaging device information obtaining section and the second imaging device information obtaining section of claim 1, wherein the communication terminal transmits the light source image obtained by the light source image obtaining section, the first imaging device information obtained by the first imaging device information obtaining section, and the second imaging device information obtained by the second imaging device information obtaining section; and
a server including the light source information estimating section of claim 1, wherein the server receives the light source image and the first and second imaging device information transmitted from the communication terminal to give the light source image and the first and second imaging device information to the light source information estimating section.
11. A light source estimation method, comprising:
a first step of determining whether a condition of an imaging device is suitable for obtaining light source information;
a second step of capturing an image by the imaging device when it is determined to be suitable in the first step, to thereby obtain the captured image as a light source image;
a third step of obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained in the second step;
a fourth step of obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation; and
a fifth step of estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information.
12. A light source estimation method, comprising:
a first step of capturing an image by an imaging device to obtain the captured image as a light source image;
a second step of obtaining first imaging device information representing a condition of the imaging device at a point in time when the light source image is obtained in the first step;
a third step of obtaining second imaging device information representing a condition of the imaging device at a time of image capturing when an image is captured by the imaging device in response to a cameraman's operation; and
a fourth step of estimating light source information including at least one of a direction and a position of a light source at the time of image capturing by using the light source image and the first and second imaging device information,
wherein in the first step, an optical axis direction of the imaging device is varied by optical axis direction varying means, and a plurality of light source images are obtained while the optical axis direction of the imaging device is being varied.
13. A super-resolution device, comprising:
an image-capturing section for capturing an image by an imaging device;
a light source information estimating section for estimating light source information including at least one of a direction and a position of a light source illuminating an object, by the light source estimation method of claim 11 or 12;
a shape information obtaining section for obtaining, as shape information, surface normal information or three-dimensional position information of the object; and
a super-resolution section for super-resolution of the image captured by the image-capturing section by using the light source information and the shape information.
14. The super-resolution device of claim 13, wherein the super-resolution section separates an image captured by the image-capturing section into a diffuse reflection component and a specular reflection component, and separately performs super-resolution of the diffuse reflection component and the specular reflection component separated from each other.
15. The super-resolution device of claim 13, wherein the super-resolution section decomposes an image captured by the image-capturing section into parameters, and separately increases resolutions of the decomposed parameters.
16. A super-resolution method, comprising:
a first step of capturing an image by an imaging device;
a second step of estimating light source information including at least one of a direction and a position of a light source illuminating an object, by the light source estimation method of claim 11 or 12;
a third step of obtaining, as shape information, surface normal information or three-dimensional position information of the object; and
a fourth step of performing super-resolution of the image captured in the first step by using the light source information and the shape information.
US12/080,228 2006-05-29 2008-03-31 Light source estimation device, light source estimation system, light source estimation method, device for super-resolution, and method for super-resolution Abandoned US20080231729A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/630,884 US7893971B2 (en) 2006-05-29 2009-12-04 Light source estimation device that captures light source images when it is determined that the imaging device is not being used by the cameraman

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2006-147756 2006-05-29
JP2006147756 2006-05-29
PCT/JP2007/060833 WO2007139070A1 (en) 2006-05-29 2007-05-28 Light source estimation device, light source estimation system, light source estimation method, device having increased image resolution, and method for increasing image resolution

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/060833 Continuation WO2007139070A1 (en) 2006-05-29 2007-05-28 Light source estimation device, light source estimation system, light source estimation method, device having increased image resolution, and method for increasing image resolution

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/630,884 Continuation US7893971B2 (en) 2006-05-29 2009-12-04 Light source estimation device that captures light source images when it is determined that the imaging device is not being used by the cameraman

Publications (1)

Publication Number Publication Date
US20080231729A1 true US20080231729A1 (en) 2008-09-25

Family

ID=38778588

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/080,228 Abandoned US20080231729A1 (en) 2006-05-29 2008-03-31 Light source estimation device, light source estimation system, light source estimation method, device for super-resolution, and method for super-resolution
US12/080,230 Expired - Fee Related US7688363B2 (en) 2006-05-29 2008-03-31 Super-resolution device, super-resolution method, super-resolution program, and super-resolution system
US12/630,884 Expired - Fee Related US7893971B2 (en) 2006-05-29 2009-12-04 Light source estimation device that captures light source images when it is determined that the imaging device is not being used by the cameraman

Family Applications After (2)

Application Number Title Priority Date Filing Date
US12/080,230 Expired - Fee Related US7688363B2 (en) 2006-05-29 2008-03-31 Super-resolution device, super-resolution method, super-resolution program, and super-resolution system
US12/630,884 Expired - Fee Related US7893971B2 (en) 2006-05-29 2009-12-04 Light source estimation device that captures light source images when it is determined that the imaging device is not being used by the cameraman

Country Status (4)

Country Link
US (3) US20080231729A1 (en)
JP (2) JP4077869B2 (en)
CN (2) CN101356546B (en)
WO (2) WO2007139067A1 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110043723A1 (en) * 2007-06-11 2011-02-24 Commissariat A L'energie Atomique Lighting Device for Liquid Crystal Screen
US20110102551A1 (en) * 2008-07-30 2011-05-05 Masahiro Iwasaki Image generation device and image generation method
US20120113064A1 (en) * 2010-11-05 2012-05-10 White Kevin J Downsampling data for crosstalk compensation
US20130121567A1 (en) * 2008-08-29 2013-05-16 Sunil Hadap Determining characteristics of multiple light sources in a digital image
US20150049211A1 (en) * 2013-08-19 2015-02-19 Lg Electronics Inc. Mobile terminal and control method thereof
US20170046819A1 (en) * 2014-05-02 2017-02-16 Olympus Corporation Image processing apparatus and image acquisition apparatus
US20190246048A1 (en) * 2018-02-05 2019-08-08 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US10607404B2 (en) * 2015-02-16 2020-03-31 Thomson Licensing Device and method for estimating a glossy part of radiation
US20200356761A1 (en) * 2008-01-03 2020-11-12 Apple Inc. Personal computing device control using face detection and recognition
US11619991B2 (en) 2018-09-28 2023-04-04 Apple Inc. Device control using gaze information
US11755712B2 (en) 2011-09-29 2023-09-12 Apple Inc. Authentication with secondary approver
US11765163B2 (en) 2017-09-09 2023-09-19 Apple Inc. Implementation of biometric authentication
US11768575B2 (en) 2013-09-09 2023-09-26 Apple Inc. Device, method, and graphical user interface for manipulating user interfaces based on unlock inputs
US11809784B2 (en) 2018-09-28 2023-11-07 Apple Inc. Audio assisted enrollment
US11835997B2 (en) 2019-09-27 2023-12-05 Electronic Theatre Controls, Inc. Systems and methods for light fixture location determination
US11836725B2 (en) 2014-05-29 2023-12-05 Apple Inc. User interface for payments
US11875547B2 (en) 2018-11-07 2024-01-16 Kabushiki Kaisha Toshiba Image processing apparatus, image processing method, and storage medium
US11928200B2 (en) 2018-06-03 2024-03-12 Apple Inc. Implementation of biometric authentication

Families Citing this family (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8050512B2 (en) * 2004-11-16 2011-11-01 Sharp Laboratories Of America, Inc. High dynamic range images from low dynamic range images
EP1854458A1 (en) * 2006-05-08 2007-11-14 IMBA-Institut für Molekulare Biotechnologie GmbH Use of a compound with RANKL activity
US8131116B2 (en) * 2006-08-31 2012-03-06 Panasonic Corporation Image processing device, image processing method and image processing program
JP4274221B2 (en) * 2006-10-02 2009-06-03 ソニー株式会社 Information processing apparatus and method, program, and recording medium
US7894662B2 (en) * 2006-10-11 2011-02-22 Tandent Vision Science, Inc. Method for using image depth information in identifying illumination fields
CN102611896B (en) * 2007-06-15 2015-01-07 松下电器产业株式会社 Image processing device
WO2009019887A1 (en) * 2007-08-07 2009-02-12 Panasonic Corporation Image processing device and image processing method
CN101542232B (en) 2007-08-07 2011-10-19 松下电器产业株式会社 Normal information generating device and normal information generating method
JP4791595B2 (en) * 2008-03-06 2011-10-12 富士通株式会社 Image photographing apparatus, image photographing method, and image photographing program
US8025408B2 (en) * 2008-07-08 2011-09-27 Panasonic Corporation Method, apparatus and program for image processing and method and apparatus for image synthesizing
TW201017578A (en) * 2008-10-29 2010-05-01 Chunghwa Picture Tubes Ltd Method for rebuilding 3D surface model
JP2010140460A (en) * 2008-11-13 2010-06-24 Sony Corp Apparatus, method and program for processing image
US8508646B2 (en) * 2008-12-22 2013-08-13 Apple Inc. Camera with internal polarizing filter
US20100177095A1 (en) * 2009-01-14 2010-07-15 Harris Corporation Geospatial modeling system for reducing shadows and other obscuration artifacts and related methods
US8705855B2 (en) * 2009-01-27 2014-04-22 Nec Corporation Color image processing method, color image processing device, and color image processing program
WO2010087164A1 (en) * 2009-01-29 2010-08-05 日本電気株式会社 Color image processing method, color image processing device, and recording medium
WO2010122502A1 (en) * 2009-04-20 2010-10-28 Yeda Research And Development Co. Ltd. Super-resolution from a single signal
KR101557678B1 (en) * 2009-04-22 2015-10-19 삼성전자주식회사 Apparatus and method for calibration of portable terminal
US8442309B2 (en) * 2009-06-04 2013-05-14 Honda Motor Co., Ltd. Semantic scene segmentation using random multinomial logit (RML)
JP5316305B2 (en) * 2009-08-13 2013-10-16 ソニー株式会社 Wireless transmission system and wireless transmission method
JP4844664B2 (en) * 2009-09-30 2011-12-28 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
JP5310483B2 (en) * 2009-10-28 2013-10-09 株式会社リコー Imaging device
JP5562005B2 (en) * 2009-11-02 2014-07-30 キヤノン株式会社 Image processing apparatus and image processing method
WO2011103576A1 (en) * 2010-02-22 2011-08-25 Canfield Scientific, Incorporated Reflectance imaging and analysis for evaluating tissue pigmentation
JP5627256B2 (en) * 2010-03-16 2014-11-19 キヤノン株式会社 Image processing apparatus, imaging apparatus, and image processing program
ES2742264T3 (en) 2010-06-16 2020-02-13 Ultra Electronics Forensic Tech Inc Acquisition of 3D topographic images of tool marks using a non-linear photometric stereo method
US8760517B2 (en) * 2010-09-27 2014-06-24 Apple Inc. Polarized images for security
US9025019B2 (en) * 2010-10-18 2015-05-05 Rockwell Automation Technologies, Inc. Time of flight (TOF) sensors as replacement for standard photoelectric sensors
EP2650843A4 (en) * 2010-12-09 2018-03-28 Samsung Electronics Co., Ltd. Image processor, lighting processor and method therefor
US8503771B2 (en) * 2010-12-20 2013-08-06 Samsung Techwin Co., Ltd. Method and apparatus for estimating light source
TWI479455B (en) * 2011-05-24 2015-04-01 Altek Corp Method for generating all-in-focus image
WO2013089265A1 (en) * 2011-12-12 2013-06-20 日本電気株式会社 Dictionary creation device, image processing device, image processing system, dictionary creation method, image processing method, and program
US9578226B2 (en) * 2012-04-12 2017-02-21 Qualcomm Incorporated Photometric registration from arbitrary geometry for augmented reality
US9113143B2 (en) * 2012-06-29 2015-08-18 Behavioral Recognition Systems, Inc. Detecting and responding to an out-of-focus camera in a video analytics system
US8675999B1 (en) 2012-09-28 2014-03-18 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Apparatus, system, and method for multi-patch based super-resolution from an image
CN103136929B (en) * 2013-02-02 2014-12-17 深圳市格劳瑞电子有限公司 Method enabling network terminal with photosensitive components to initialize
JP5731566B2 (en) * 2013-04-23 2015-06-10 株式会社スクウェア・エニックス Information processing apparatus, control method, and recording medium
US9154698B2 (en) * 2013-06-19 2015-10-06 Qualcomm Technologies, Inc. System and method for single-frame based super resolution interpolation for digital cameras
US20150073958A1 (en) * 2013-09-12 2015-03-12 Bank Of America Corporation RESEARCH REPORT RECOMMENDATION ENGINE ("R+hu 3 +lE")
US9658688B2 (en) * 2013-10-15 2017-05-23 Microsoft Technology Licensing, Llc Automatic view adjustment
US9805510B2 (en) 2014-05-13 2017-10-31 Nant Holdings Ip, Llc Augmented reality content rendering via albedo models, systems and methods
JP5937661B2 (en) * 2014-11-13 2016-06-22 みずほ情報総研株式会社 Information prediction system, information prediction method, and information prediction program
CN104618646A (en) * 2015-01-22 2015-05-13 深圳市金立通信设备有限公司 Shooting method
GB201604345D0 (en) * 2016-03-14 2016-04-27 Magic Pony Technology Ltd Super resolution using fidelity transfer
GB2538095B (en) * 2015-05-07 2019-07-03 Thales Holdings Uk Plc Recognition of Objects from Shadow and Layover Effects in Synthetic Aperture Radar Images
CN105205782B (en) * 2015-09-06 2019-08-16 京东方科技集团股份有限公司 Supersolution is as method and system, server, user equipment and its method
US10839248B2 (en) * 2015-09-30 2020-11-17 Sony Corporation Information acquisition apparatus and information acquisition method
US9958267B2 (en) * 2015-12-21 2018-05-01 Industrial Technology Research Institute Apparatus and method for dual mode depth measurement
JP6894672B2 (en) * 2016-05-18 2021-06-30 キヤノン株式会社 Information processing equipment, information processing methods, programs
JP6682350B2 (en) 2016-05-18 2020-04-15 キヤノン株式会社 Information processing device, control device, information processing method, control method, and program
EP3465628B1 (en) * 2016-05-24 2020-07-08 E Ink Corporation Method for rendering color images
JP2018029279A (en) * 2016-08-18 2018-02-22 ソニー株式会社 Imaging device and imaging method
JP6772710B2 (en) * 2016-09-16 2020-10-21 富士通株式会社 Biological imaging device
JP6662745B2 (en) * 2016-10-04 2020-03-11 株式会社ソニー・インタラクティブエンタテインメント Imaging device, information processing system, and polarization image processing method
EP3544283B1 (en) * 2016-11-15 2023-07-19 Sony Group Corporation Image processing device
GB2561238A (en) * 2017-04-07 2018-10-10 Univ Bath Apparatus and method for monitoring objects in space
EP3633968A4 (en) * 2017-06-01 2020-06-03 FUJIFILM Corporation Imaging device, image processing device, imaging system, image processing method, and recording medium
CN110536125A (en) * 2018-05-25 2019-12-03 光宝电子(广州)有限公司 Image processing system and image treatment method
JP6800938B2 (en) * 2018-10-30 2020-12-16 キヤノン株式会社 Image processing equipment, image processing methods and programs
US10721458B1 (en) * 2018-12-14 2020-07-21 Ambarella International Lp Stereoscopic distance measurements from a reflecting surface
WO2021095672A1 (en) * 2019-11-15 2021-05-20 ソニーグループ株式会社 Information processing device and information processing method
WO2021117633A1 (en) * 2019-12-13 2021-06-17 ソニーグループ株式会社 Imaging device, information processing device, imaging method, and information processing method
US11348273B2 (en) * 2020-02-25 2022-05-31 Zebra Technologies Corporation Data capture system
KR20210126934A (en) 2020-04-13 2021-10-21 삼성전자주식회사 Method and apparatus of outputting light source information
US20220099824A1 (en) * 2020-09-25 2022-03-31 Rohde & Schwarz Gmbh & Co. Kg Radar target simulation system and radar target simulation method
WO2022102295A1 (en) * 2020-11-10 2022-05-19 ソニーグループ株式会社 Imaging device
KR102435957B1 (en) * 2020-11-27 2022-08-24 인하대학교 산학협력단 Probability-based object detector using various samples
IL279275A (en) * 2020-12-06 2022-07-01 Elbit Systems C4I And Cyber Ltd Device, systems and methods for scene image acquisition
US11663775B2 (en) * 2021-04-19 2023-05-30 Adobe, Inc. Generating physically-based material maps
CN113422928B (en) * 2021-05-28 2022-02-18 佛山市诚智鑫信息科技有限公司 Safety monitoring snapshot method and system
CN116245741B (en) * 2022-06-28 2023-11-17 荣耀终端有限公司 Image processing method and related device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5028138A (en) * 1989-05-23 1991-07-02 Wolff Lawrence B Method of and apparatus for obtaining object data by machine vision form polarization information
US7002623B1 (en) * 1998-09-08 2006-02-21 Olympus Optical Co., Ltd. Image processing apparatus for correcting a color and texture of an image and displaying the corrected image

Family Cites Families (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS63219281A (en) 1987-03-09 1988-09-12 Matsushita Electric Ind Co Ltd Video camera
JP2665388B2 (en) 1990-06-18 1997-10-22 富士写真フイルム株式会社 Signal processing method for abnormal posture
JP3278881B2 (en) 1991-12-13 2002-04-30 ソニー株式会社 Image signal generator
JPH05342368A (en) 1992-06-11 1993-12-24 Hitachi Ltd Method and device for generating three-dimensional picture
US5717789A (en) 1993-09-08 1998-02-10 California Institute Of Technology Image enhancement by non-linear extrapolation in frequency space
JP2551361B2 (en) 1993-10-30 1996-11-06 日本電気株式会社 Foldable portable radio
JPH07294985A (en) 1994-04-22 1995-11-10 Nikon Corp Camera provided with shake correcting function
JPH08160507A (en) 1994-12-07 1996-06-21 Canon Inc Camera
JPH1079029A (en) 1996-09-02 1998-03-24 Canon Inc Stereoscopic information detecting method and device therefor
US6075926A (en) 1997-04-21 2000-06-13 Hewlett-Packard Company Computerized method for improving data resolution
JP3834805B2 (en) 1997-06-12 2006-10-18 ソニー株式会社 Image conversion device, image conversion method, calculation device, calculation method, and recording medium
JPH1173524A (en) 1997-08-28 1999-03-16 Matsushita Electric Ind Co Ltd Rendering method
JPH11175762A (en) 1997-12-08 1999-07-02 Katsushi Ikeuchi Light environment measuring instrument and device and method for shading virtual image using same
JP2000258122A (en) 1999-03-12 2000-09-22 Mitsubishi Electric Corp Luminous position standardizing device
JP2001118074A (en) 1999-10-20 2001-04-27 Matsushita Electric Ind Co Ltd Method and device for producing three-dimensional image and program recording medium
JP2001166230A (en) * 1999-12-10 2001-06-22 Fuji Photo Film Co Ltd Optical device, image pickup device and image processor
JP3459981B2 (en) 2000-07-12 2003-10-27 独立行政法人産業技術総合研究所 Method for separating diffuse and specular reflection components
US6850872B1 (en) 2000-08-30 2005-02-01 Microsoft Corporation Facial image processing methods and systems
JP2002084412A (en) 2000-09-07 2002-03-22 Minolta Co Ltd Image processing unit and image processing method
US6707453B1 (en) 2000-11-21 2004-03-16 Hewlett-Packard Development Company, L.P. Efficient rasterization of specular lighting in a computer graphics system
US6766067B2 (en) 2001-04-20 2004-07-20 Mitsubishi Electric Research Laboratories, Inc. One-pass super-resolution images
JP4756148B2 (en) 2001-09-06 2011-08-24 独立行政法人情報通信研究機構 Gloss / color reproduction system, gloss / color reproduction program
US7428080B2 (en) * 2002-02-05 2008-09-23 Canon Kabushiki Kaisha Image reading apparatus, method of controlling image reading apparatus, program, and computer readable storage medium
JP2004021373A (en) * 2002-06-13 2004-01-22 Matsushita Electric Ind Co Ltd Method and apparatus for estimating body and optical source information
JP2004129035A (en) * 2002-10-04 2004-04-22 Casio Comput Co Ltd Imaging device and imaging method
US7414662B2 (en) * 2002-10-07 2008-08-19 Micron Technology, Inc. Multifunction lens
JP2005109935A (en) * 2003-09-30 2005-04-21 Nippon Hoso Kyokai <Nhk> Image data processor and image data processing program
JP4546155B2 (en) 2004-06-02 2010-09-15 パナソニック株式会社 Image processing method, image processing apparatus, and image processing program
JPWO2006033257A1 (en) * 2004-09-24 2008-05-15 松下電器産業株式会社 Image conversion method, image conversion apparatus, server client system, portable device, and program
CN100573579C (en) * 2004-12-07 2009-12-23 松下电器产业株式会社 Image conversion method and device, texture mapping method and device, server-client system
JP3996630B2 (en) * 2005-01-19 2007-10-24 松下電器産業株式会社 Image conversion method, texture mapping method, image conversion apparatus, server client system, image conversion program, shadow recognition method, and shadow recognition apparatus
US7319467B2 (en) * 2005-03-29 2008-01-15 Mitsubishi Electric Research Laboratories, Inc. Skin reflectance model for representing and rendering faces
US7340098B2 (en) * 2006-03-15 2008-03-04 Matsushita Electric Industrial Co., Ltd. Method and apparatus for image conversion

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5028138A (en) * 1989-05-23 1991-07-02 Wolff Lawrence B Method of and apparatus for obtaining object data by machine vision form polarization information
US7002623B1 (en) * 1998-09-08 2006-02-21 Olympus Optical Co., Ltd. Image processing apparatus for correcting a color and texture of an image and displaying the corrected image

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8675150B2 (en) * 2007-06-11 2014-03-18 Commissariat A L'energie Atomique Lighting device for liquid crystal screen
US20110043723A1 (en) * 2007-06-11 2011-02-24 Commissariat A L'energie Atomique Lighting Device for Liquid Crystal Screen
US20200356761A1 (en) * 2008-01-03 2020-11-12 Apple Inc. Personal computing device control using face detection and recognition
US11676373B2 (en) * 2008-01-03 2023-06-13 Apple Inc. Personal computing device control using face detection and recognition
US20110102551A1 (en) * 2008-07-30 2011-05-05 Masahiro Iwasaki Image generation device and image generation method
US9087388B2 (en) 2008-07-30 2015-07-21 Panasonic Corporation Image generation device and image generation method
US20130121567A1 (en) * 2008-08-29 2013-05-16 Sunil Hadap Determining characteristics of multiple light sources in a digital image
US8463072B2 (en) * 2008-08-29 2013-06-11 Adobe Systems Incorporated Determining characteristics of multiple light sources in a digital image
US20120113064A1 (en) * 2010-11-05 2012-05-10 White Kevin J Downsampling data for crosstalk compensation
US8913040B2 (en) * 2010-11-05 2014-12-16 Apple Inc. Downsampling data for crosstalk compensation
US11755712B2 (en) 2011-09-29 2023-09-12 Apple Inc. Authentication with secondary approver
US9538059B2 (en) * 2013-08-19 2017-01-03 Lg Electronics Inc. Mobile terminal and control method thereof
US20150049211A1 (en) * 2013-08-19 2015-02-19 Lg Electronics Inc. Mobile terminal and control method thereof
US11768575B2 (en) 2013-09-09 2023-09-26 Apple Inc. Device, method, and graphical user interface for manipulating user interfaces based on unlock inputs
US20170046819A1 (en) * 2014-05-02 2017-02-16 Olympus Corporation Image processing apparatus and image acquisition apparatus
US9965834B2 (en) * 2014-05-02 2018-05-08 Olympus Corporation Image processing apparatus and image acquisition apparatus
US11836725B2 (en) 2014-05-29 2023-12-05 Apple Inc. User interface for payments
US10607404B2 (en) * 2015-02-16 2020-03-31 Thomson Licensing Device and method for estimating a glossy part of radiation
US11765163B2 (en) 2017-09-09 2023-09-19 Apple Inc. Implementation of biometric authentication
US10863113B2 (en) * 2018-02-05 2020-12-08 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US20190246048A1 (en) * 2018-02-05 2019-08-08 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US11928200B2 (en) 2018-06-03 2024-03-12 Apple Inc. Implementation of biometric authentication
US11619991B2 (en) 2018-09-28 2023-04-04 Apple Inc. Device control using gaze information
US11809784B2 (en) 2018-09-28 2023-11-07 Apple Inc. Audio assisted enrollment
US11875547B2 (en) 2018-11-07 2024-01-16 Kabushiki Kaisha Toshiba Image processing apparatus, image processing method, and storage medium
US11835997B2 (en) 2019-09-27 2023-12-05 Electronic Theatre Controls, Inc. Systems and methods for light fixture location determination

Also Published As

Publication number Publication date
JP4082714B2 (en) 2008-04-30
US20080186390A1 (en) 2008-08-07
CN101356546B (en) 2011-10-12
CN101422035A (en) 2009-04-29
JPWO2007139067A1 (en) 2009-10-08
CN101422035B (en) 2012-02-29
WO2007139070A1 (en) 2007-12-06
CN101356546A (en) 2009-01-28
US7893971B2 (en) 2011-02-22
JPWO2007139070A1 (en) 2009-10-08
WO2007139067A1 (en) 2007-12-06
US20100079618A1 (en) 2010-04-01
US7688363B2 (en) 2010-03-30
JP4077869B2 (en) 2008-04-23

Similar Documents

Publication Publication Date Title
US7893971B2 (en) Light source estimation device that captures light source images when it is determined that the imaging device is not being used by the cameraman
US7948514B2 (en) Image processing apparatus, method and computer program for generating normal information, and viewpoint-converted image generating apparatus
JP4762369B2 (en) Image processing device
US8131116B2 (en) Image processing device, image processing method and image processing program
Schechner et al. Generalized mosaicing: Wide field of view multispectral imaging
JP4563513B2 (en) Image processing apparatus and pseudo-stereoscopic image generation apparatus
JP2008016918A (en) Image processor, image processing system, and image processing method
JP4469021B2 (en) Image processing method, image processing apparatus, image processing program, image composition method, and image composition apparatus
US20100290713A1 (en) System, method and apparatus for image processing and image format
US7486837B2 (en) Method, device, and program for image conversion, method, device and program for texture mapping, and server-client system
EP0735745B1 (en) Visual information processing method and apparatus
Matsuyama et al. Multi-camera systems for 3d video production

Legal Events

Date Code Title Description
AS Assignment

Owner name: MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SATO, SATOSHI;KANAMORI, KATSUHIRO;MOTOMURA, HIDETO;REEL/FRAME:021130/0348

Effective date: 20080326

AS Assignment

Owner name: PANASONIC CORPORATION, JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD.;REEL/FRAME:021858/0958

Effective date: 20081001

Owner name: PANASONIC CORPORATION,JAPAN

Free format text: CHANGE OF NAME;ASSIGNOR:MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD.;REEL/FRAME:021858/0958

Effective date: 20081001

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: SOVEREIGN PEAK VENTURES, LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PANASONIC CORPORATION;REEL/FRAME:047914/0675

Effective date: 20181012