Publication number: US 7912285 B2
Publication type: Grant
Application number: US 12/881,170
Publication date: Mar 22, 2011
Filing date: Sep 13, 2010
Priority date: Aug 16, 2004
Fee status: Paid
Also published as: DE602006000400D1, DE602006000400T2, DE602006008009D1, EP1800259A1, EP1800259B1, EP1918872A2, EP1918872A3, EP1918872B1, US7606417, US7796816, US8175385, US20060039690, US20100008577, US20100328486, US20110157408, WO2007025578A1
Inventors: Eran Steinberg, Yury Prilutsky, Peter Corcoran, Petronel Bigioi
Original Assignee: Tessera Technologies Ireland Limited
Foreground/background segmentation in digital images with differential exposure calculations
US 7912285 B2
Abstract
A digital segmentation method and apparatus determines foreground and/or background within at least one portion of a captured image. The determining includes comparing a captured image to a pre-captured or post captured reference image of nominally the same scene. One of the images is taken with flash and the other without. The system can be implemented as part of a digital camera acquisition chain having effective computation complexity.
Claims (56)
1. A digital image acquisition system having no photographic film, comprising:
(a) an apparatus for capturing digital images, including a lens, an image sensor and a processor;
(b) a flash unit for providing illumination during image capture;
(c) an analysis tool for comparing at least a portion of a captured image and a reference image of nominally the same scene, at least one of said captured and reference images being taken with flash, said analysis tool providing a measure of relative differences in illumination between regions of said captured image and said reference image of groups of pixels within said scene; and
(d) a classification tool for segmenting a foreground region from a background region within said scene based on said measure.
2. A system according to claim 1, further comprising an exposure equalizer for substantially equalizing an overall level of exposure of at least said portion of said captured and reference images prior to analysis by said analysis tool.
3. A system according to claim 1, further comprising a segmentation tool for determining one or more regions that are indicative of the foreground region, or of the background region, or of the background region and the foreground region, within at least one portion of a captured image, and wherein said analysis tool is arranged to analyse said foreground region or said background region, or both.
4. A system according to claim 1, wherein said classification tool is responsive to said measure exceeding a high threshold to classify a region as a foreground region, and responsive to said measure not exceeding a low threshold to classify a region as a background region.
5. A system according to claim 4, wherein said high and low threshold are coincident.
6. A system according to claim 1, further comprising a segmentation tool for determining one or more regions that are indicative of the foreground region, or of the background region, or of the background region and the foreground region, within at least one portion of the captured image, wherein said determining comprises comparing said captured image and the same reference image or a different reference image of nominally the same scene, or both.
7. A system according to claim 6, wherein the captured and reference images have different pixel resolutions, and wherein the system further comprises a pixel matching tool which is operative prior to application of the segmentation tool for matching the pixel resolutions of the captured and reference images at least in respect of said at least one portion.
8. A system according to claim 7, wherein said pixel matching tool utilizes up-sampling of the image of lower resolution or sub-sampling of the image of higher resolution, or both.
9. A system according to claim 6, further comprising an alignment tool which is operative prior to application of the segmentation tool for aligning said regions of said captured and reference images at least in respect of said at least one portion.
10. A system according to claim 6, further comprising an exposure equalizer for substantially equalizing an overall level of exposure of said regions or all of said captured and reference images at least in respect of said at least one portion.
11. A system according to claim 10, wherein the substantially equalising of the overall level of exposure of said regions or all of said captured and reference image comprises simulating an ambient exposure of the captured image on the reference image.
12. A system according to claim 11, wherein the simulating of the ambient exposure of the captured image on the reference image comprises digitally simulating one or a combination of aperture, acquisition speed, color transformations and gain of the captured image on the reference image.
13. A system according to claim 11, wherein the simulating of the ambient exposure of the captured image comprises individual, non-uniform manipulating of individual regions or color channels or combinations thereof.
14. A system according to claim 10, wherein the substantially equalising of the overall level of exposure of said captured and reference image comprises setting an ambient exposure of the reference image to match a calculated exposure of the captured image.
15. A system according to claim 10, wherein at least in respect of said at least one portion, the segmentation tool determines corresponding pixels in the captured and reference images whose values differ by less than a predetermined threshold, and designates segments of the captured image bounded by said determined pixels as foreground or background by comparing pixel values in a segment with pixel values in a corresponding segment of the reference image.
16. A system according to claim 10, wherein at least in respect of said at least one portion, the segmentation tool determines upper and lower thresholds based on a comparison of the overall level of exposure of the captured and reference images and designates pixels of the captured image as foreground or background according to whether their values are greater than the upper threshold or less than the lower threshold.
17. A system according to claim 10, wherein at least in respect of said at least one portion, the segmentation tool designates one or more segments of the captured image as foreground or background by comparing pixel values in the captured and reference images.
18. A system according to claim 10, wherein the reference image comprises a preview image having a lower pixel resolution than the captured image, and the captured image comprises an image taken with flash.
19. A system according to claim 6, further comprising a face detection module.
20. A system according to claim 6, further comprising a red-eye detection filter or a red eye correction filter or both, for selective application to the foreground region.
21. A system according to claim 20, further comprising a probability module for changing a probability of a redeye candidate region being an actual redeye region according to whether the candidate appears in the foreground or background of the captured image.
22. A system according to claim 1, further comprising a depth of focus module for reducing a perceived depth of focus according to whether a candidate region appears in the foreground or background of the captured image.
23. A system according to claim 1, wherein the reference image comprises a preview image.
24. A system according to claim 1, wherein the reference image comprises an image captured chronologically after said captured image.
25. A system according to claim 1, wherein the reference image comprises a combination of multiple reference-images.
26. A system according to claim 1, wherein said digital image acquisition system comprises a digital camera.
27. A system according to claim 1, wherein said digital image acquisition system comprises a combination of a digital camera and an external processing device.
28. A system as claimed in claim 27, wherein said segmentation tool is located within said external processing device.
29. A system according to claim 1, wherein said digital image acquisition system comprises a batch processing system including a digital printing device.
30. A system according to claim 1, wherein said digital image acquisition system comprises a batch processing system including a server computer.
31. A computer readable medium having code embodied therein for programming a processor to perform a method of analyzing a captured image, the computer readable medium comprising an analysis and classification tool for comparing at least a portion of a captured image and a reference image of nominally the same scene, at least one of said captured and reference images being taken with flash, said tool providing a measure of relative differences in illumination between regions of said captured image and said reference image, and said tool for segmenting a foreground region from a background region within said scene based on said measure.
32. The analysis and classification tool of claim 31, further comprising a digital segmentation tool for determining one or more regions that are indicative of the foreground region, or of the background region, or of the background region and the foreground region, within at least one portion of a captured image, wherein said determining comprises comparing said captured image and a reference image of nominally the same scene, and wherein at least one of said captured and reference images is taken with flash.
33. The analysis and classification tool of claim 32, wherein the captured and reference images have different pixel resolutions, and wherein the segmentation tool operates in conjunction with a pixel matching tool which is operative prior to application of the segmentation tool for matching the pixel resolutions of the captured and reference images at least in respect of said at least one portion.
34. The analysis and classification tool of claim 33, wherein said pixel matching tool utilizes up-sampling of the image of lower resolution or sub-sampling of the image of higher resolution, or both.
35. The analysis and classification tool of claim 32, wherein the segmentation tool operates in conjunction with an alignment tool which is operative prior to application of the segmentation tool for aligning said captured and reference images at least in respect of said at least one portion.
36. The analysis and classification tool of claim 32, wherein said segmentation tool operates in conjunction with an exposure equalizer for substantially equalizing an overall level of exposure of said captured and reference images at least in respect of said at least one portion.
37. The analysis and classification tool of claim 32, wherein said segmentation tool operates in conjunction with a face detection module or a red-eye filter, or both, for selective application to the foreground region.
38. The analysis and classification tool of claim 37, wherein said segmentation tool further operates in conjunction with a probability module for changing a probability of a redeye candidate region being an actual redeye region according to whether the candidate appears in the foreground or background of the captured image.
39. The analysis and classification tool of claim 37, wherein said segmentation tool further operates in conjunction with a depth of focus module for reducing a perceived depth of focus according to whether a candidate region appears in the foreground or background of the captured image.
40. The analysis and classification tool of claim 37, wherein said segmentation tool further operates in conjunction with a blurring module for blurring said regions indicative of background of the captured image.
41. The analysis and classification tool of claim 31, wherein the reference image comprises a preview image.
42. The analysis and classification tool of claim 31, wherein the reference image comprises an image captured chronologically after said captured image.
43. The analysis and classification tool of claim 31, wherein the reference image comprises a combination of multiple reference-images.
44. A method of analyzing a captured image, comprising:
using a processor;
comparing at least a portion of a captured image and a reference image of nominally the same scene, at least one of said captured and reference images being taken with flash,
providing a measure of relative differences in illumination between regions of said captured image and said reference image, and
segmenting a foreground region from a background region within said scene based on said measure.
45. The method of claim 44, further comprising determining one or more regions that are indicative of the foreground region, or of the background region, or of the background region and the foreground region, within at least one portion of a captured image, wherein said determining comprises comparing said captured image and a reference image of nominally the same scene, and wherein at least one of said captured and reference images is taken with flash.
46. The method of claim 45, wherein the captured and reference images have different pixel resolutions, and wherein the method further comprises matching the pixel resolutions of the captured and reference images at least in respect of said at least one portion.
47. The method of claim 46, wherein said matching of pixel resolutions comprises up-sampling of the image of lower resolution or sub-sampling of the image of higher resolution, or both.
48. The method of claim 45, further comprising aligning said captured and reference images at least in respect of said at least one portion.
49. The method of claim 45, further comprising approximately equalizing an overall level of exposure of said captured and reference images at least in respect of said at least one portion.
50. The method of claim 45, further comprising detecting a face or filtering red eye, or both, within the foreground region.
51. The method of claim 50, further comprising changing a probability of a redeye candidate region being an actual redeye region according to whether the candidate appears in the foreground or background of the captured image.
52. The method of claim 50, further comprising reducing a perceived depth of focus according to whether a candidate region appears in the foreground or background of the captured image.
53. The method of claim 50, further comprising blurring said regions indicative of background of the captured image.
54. The method of claim 44, wherein the reference image comprises a preview image.
55. The method of claim 44, wherein the reference image comprises an image captured chronologically after said captured image.
56. The method of claim 44, wherein the reference image comprises a combination of multiple reference-images.
Description
PRIORITY

This application is a Continuation of U.S. patent application Ser. No. 12/562,833, filed Sep. 18, 2009, now U.S. Pat. No. 7,796,816; which is a Continuation of U.S. patent application Ser. No. 11/217,788, filed Aug. 30, 2005, now U.S. Pat. No. 7,606,417; which is a Continuation-in-Part of U.S. patent application Ser. No. 10/919,226, filed Aug. 16, 2004, now U.S. Pat. No. 7,738,015; which is related to U.S. applications Ser. Nos. 10/635,918, filed Aug. 5, 2003, now abandoned, and 10/773,092, filed Feb. 4, 2004. Each of these applications is hereby incorporated by reference.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The invention relates to an image segmentation method and system, and in particular to a tool for determining regions indicative of foreground and background based on exposure analysis of captured and reference images.

2. Description of the Related Art

Image segmentation involves digital image processing wherein an image is broken down into regions based on some predefined criteria. These criteria may be contextual, numerical, or related to shape, size, color, gradient, and more. It is desired to have a technique for determining the foreground and background of digital images for numerous image processing operations. Such operations may include image enhancement, color correction, and/or object based image analysis. In the specific case of processing inside of an acquisition device, it is desired to perform such segmentation expeditiously, while utilizing suitable computations of relatively low complexity, for example, for performing calculations in-camera or in handset phones equipped with image acquisition capabilities.

SUMMARY OF THE INVENTION

A digital image acquisition system is provided having no photographic film. The system includes an apparatus for capturing digital images, a flash unit for providing illumination during image capture, and a segmentation tool for determining regions indicative of foreground and/or background within at least one portion of a captured image. The determining is effected as a function of a comparison of a captured image and a reference image of nominally the same scene. One of the captured and reference images is taken with flash and the other is taken without flash.

Available ambient light such as sunlight is in general more spatially uniform than strobe lighting, especially for point-and-shoot cameras (as opposed to studio settings with multiple strobe units), where the strobe originates from or close to the camera. Because the strobe energy falls off with the square of the distance, the closer an object is, the more strongly it will be illuminated. The overall light distribution will therefore vary between the two images, because one shot or subset of shots will be illuminated only with available ambient light while another will be illuminated with direct flash light.

A background/foreground segmented image can be used in numerous digital image processing algorithms such as algorithms to enhance the separation of the subject, which is usually in the foreground, from the background. This technique may be used to enhance depth of field, to enhance or eliminate the background altogether, or to extract objects such as faces or people from an image.

By reducing the area which is subjected to an image processing analysis, processing time is reduced substantially for many real-time algorithms. This is particularly advantageous for algorithms implemented within a digital image acquisition device where it is desired to apply image processing as part of the main image acquisition chain. Thus, the click-to-click time of a digital camera is improved. In certain embodiments it may advantageously allow multiple image processing techniques to be employed where previously only a single technique was applied. It can also serve to reduce occurrences of false positives for certain image processing algorithms where these are more likely to occur in either the background or foreground regions of an image.

The invention may be applied to embedded devices with limited computation capability. It can be used also to improve productivity, in particular where large amounts of images are to be processed, such as for security based facial detection, large volume printing systems or desktop analysis of a collection of images. The invention may be applied to still image capture devices, as well as for video or continuous capture devices with stroboscopic capability.

BRIEF DESCRIPTION OF DRAWINGS

Preferred embodiments will now be described, by way of example, with reference to the accompanying drawings, in which:

FIG. 1 is a block diagram of a camera apparatus operating in accordance with a preferred embodiment.

FIGS. 2(a), 2(b) and 2(c) illustrate a detailed workflow in accordance with preferred embodiments.

FIG. 3 is a graph illustrating the distributions in pixel intensities for a flash and non-flash version of an image.

FIG. 4 illustrates the alignment process used in the workflow of FIG. 2(a).

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

FIG. 1 shows a block diagram of an image acquisition device 20 operating in accordance with a preferred embodiment. The digital acquisition device 20, which in the present embodiment is a portable digital camera, includes a processor 120. It can be appreciated that many of the processes implemented in the digital camera may be implemented in or controlled by software operating in a microprocessor, central processing unit, controller, digital signal processor and/or an application specific integrated circuit, collectively depicted as block 120 labelled “processor”. Generically, the user interface and control of peripheral components such as buttons and display are handled by a μ-controller 122.

The processor 120, in response to a user input at 122, such as half pressing a shutter button (pre-capture mode 32), initiates and controls the digital photographic process. Ambient light exposure is determined using light sensor 40 in order to automatically determine if a flash is to be used. The distance to the subject is determined using focusing means 50 which also focuses the image on image capture component 60. If a flash is to be used, processor 120 causes the flash 70 to generate a photographic flash in substantial coincidence with the recording of the image by image capture component 60 upon full depression of the shutter button.

The image capture component 60 digitally records the image in color. The image capture component 60 is known to those familiar with the art and may include a CCD (charge coupled device) or CMOS to facilitate digital recording. The flash may be selectively generated either in response to the light sensor 40 or a manual input 72 from the user of the camera. The image I(x,y) recorded by image capture component 60 is stored in image store component 80, which may comprise computer memory such as dynamic random access memory or a non-volatile memory. The camera is equipped with a display 100, such as an LCD, for preview and post-view of images.

In the case of preview images P(x,y), which are generated in the pre-capture mode 32 with the shutter button half-pressed, the display 100 can assist the user in composing the image, as well as being used to determine focusing and exposure. A temporary storage space 82 is used to store one or a plurality of the preview images and can be part of the image store means 80 or a separate component. The preview image is usually generated by the image capture component 60. Parameters of the preview image may be recorded for later use when equating the ambient conditions with the final image. Alternatively, the parameters may be determined to match those of the consequently captured, full resolution image. For speed and memory efficiency reasons, preview images may be generated by subsampling a raw captured image using software 124, which can be part of a general processor 120 or dedicated hardware or a combination thereof, before displaying or storing the preview image. The subsampling may be horizontal, vertical or a combination of the two. Depending on the settings of this hardware subsystem, the pre-acquisition image processing may satisfy some predetermined test criteria prior to storing a preview image. Such test criteria may be chronological, such as to constantly replace the previously saved preview image with a new captured preview image every 0.5 seconds during the pre-capture mode 32, until the final full resolution image I(x,y) is captured by full depression of the shutter button. More sophisticated criteria may involve analysis of the preview image content, for example, testing the image for changes, or the detection of faces in the image before deciding whether the new preview image should replace a previously saved image. Other criteria may be based on image analysis such as sharpness or the detection of eyes, or on metadata analysis such as the exposure condition, whether a flash is going to happen, and/or the distance to the subjects.

If test criteria are not met, the camera continues by capturing the next preview image without saving the current one. The process continues until the final full resolution image I(x,y) is acquired and saved by fully depressing the shutter button.

Where multiple preview images can be saved, a new preview image will be placed on a chronological First In First Out (FIFO) stack, until the user takes the final picture. The reason for storing multiple preview images is that the last image, or any single image, may not be the best reference image for comparison with the final full resolution image. By storing multiple images, a better reference image can be selected, and a closer alignment between the preview and the final captured image can be achieved in an alignment stage discussed further in relation to FIGS. 2(a)-2(c) and 4. Other reasons for capturing multiple images are that a single image may be blurred due to motion, the focus not being set, and/or the exposure not being set.
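By way of illustration only, such a chronological FIFO of preview frames might be managed as in the following Python sketch; the class name, buffer depth and quality test shown here are illustrative assumptions rather than elements of the embodiment:

```python
from collections import deque

class PreviewBuffer:
    """Hypothetical FIFO store for preview frames (temporary storage 82)."""

    def __init__(self, depth=4):
        # A deque with maxlen drops the oldest frame automatically,
        # giving the First In First Out behaviour described above.
        self.frames = deque(maxlen=depth)

    def offer(self, frame, passes_test=True):
        # Store a frame only if it satisfies the predetermined test
        # criteria (e.g. sharpness, faces detected, exposure stable);
        # otherwise the previously saved previews are kept unchanged.
        if passes_test:
            self.frames.append(frame)

    def reference(self):
        # The most recent saved preview is a natural default reference
        # image; callers may instead combine several stored frames.
        return self.frames[-1] if self.frames else None
```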

In an alternative embodiment, the multiple images may be a combination of preview images, which are images captured prior to the main full resolution image, and post-view images, which are an image or images captured after said main image. In one embodiment, multiple preview images may assist in creating a single higher quality reference image, either of higher resolution or by taking different portions of different regions from the multiple images.

A segmentation filter 90 analyzes the stored image I(x,y) for foreground and background characteristics before forwarding the image along with its foreground/background segmentation information 99 for further processing or display. The filter 90 can be integral to the camera 20 or part of an external processing device 10 such as a desktop computer, a hand held device, a cell phone handset or a server. In this embodiment, the segmentation filter 90 receives the captured image I(x,y) from the full resolution image storage 80 as well as one or a plurality of preview images P(x,y) from the temporary storage 82.

The image I(x,y) as captured, segmented and/or further processed may be either displayed on image display 100, saved on a persistent storage 112 which can be internal or a removable storage such as CF card, SD card, USB dongle, or the like, or downloaded to another device, such as a personal computer, server or printer via image output component 110 which can be tethered or wireless. The segmentation data may also be stored 99 either in the image header, as a separate file, or forwarded to another function which uses this information for image manipulation.

In embodiments where the segmentation filter 90 is implemented in an external application in a separate device 10, such as a desktop computer, the final captured image I(x,y) stored in block 80 along with a representation of the preview image as temporarily stored in 82, may be stored prior to modification on the storage device 112, or transferred together via the image output component 110 onto the external device 10, later to be processed by the segmentation filter 90. The preview image or multiple images, also referred to as sprite-images, may be pre-processed prior to storage, to improve compression rate, remove redundant data between images, align or color compress data.

FIGS. 2(a)-2(b) illustrate a workflow of the segmentation filter 90 of this embodiment. Referring to FIG. 2(a), there are two input images into the filter, namely a full resolution flash image I(x,y), 510, which is the one that was captured by full depression of the shutter button, and a preview image P(x,y), 520, which is used as a reference image and is of nominally the same scene as the image I(x,y) but taken without the flash. The preview image may be a result of some image processing, 522, taking into account multiple preview images and creating a single image. Methods of improving image quality based on multiple images are familiar to those versed in the art of image processing. The resulting output from the analysis process of 522 is a single preview image.

As explained above, the reference image and the final image may have different resolutions. The preview image 520 is normally, but not necessarily, of lower resolution than the full resolution image 510, typically being generated by clocking out a subset of the image sensor cells of the image capture component 60 or by averaging the raw sensor data.

The discrepancy in resolution may lead to differences in content, or pixel values, even though no data was changed in the subject image. In particular, edge regions when down-sampled and then up-sampled may have a blurring or an averaging effect on the pixels. Thus direct comparison of different resolution images, even when aligned, may lead to false contouring.

Therefore, the two images need to be matched in pixel resolution, 530. In the present context “pixel resolution” is meant to refer to the size of the image in terms of the number of pixels constituting the image concerned. Such a process may be done by either up-sampling the preview image, 534, down-sampling the acquired image, 532, or a combination thereof. Those familiar with the art will be aware of several techniques that may be used for such sampling. The result of step 530 is a pair of images I′(x,y) and P′(x,y) corresponding to the original images I(x,y) and P(x,y), or relevant regions thereof, with matching pixel resolution.
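As a rough sketch of step 530, assuming for simplicity that the acquired image size is an integer multiple of the preview size (real code would interpolate), the two sampling routes might look as follows:

```python
import numpy as np

def match_resolution(acquired, preview):
    """Illustrative matching of pixel resolutions (step 530) for
    greyscale arrays; the integer-factor assumption is ours."""
    fy = acquired.shape[0] // preview.shape[0]
    fx = acquired.shape[1] // preview.shape[1]
    # Down-sample the acquired image by block averaging (step 532).
    h, w = preview.shape[0] * fy, preview.shape[1] * fx
    blocks = acquired[:h, :w].reshape(preview.shape[0], fy,
                                      preview.shape[1], fx)
    acquired_small = blocks.mean(axis=(1, 3))
    # Up-sample the preview by pixel replication (step 534).
    preview_big = np.kron(preview, np.ones((fy, fx)))
    return acquired_small, preview_big
```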

Where the foreground/background segmentation is done solely for the purpose of improving the detection of redeye artefacts, faces or other image features, the pixel matching as described above can be limited to those regions in the images containing or suspected to contain eyes, faces or other features, as the case may be, which can be determined by image processing techniques. In such a case the subsequent processing steps now to be described may be performed individually on each such region rather than on the images as a whole, and references to the “image” or “images” are to be interpreted accordingly.

The system and method of the preferred embodiment involves the segmentation of the image I(x,y) using exposure discrepancies between I′(x,y) and P′(x,y). It may also be advantageous to apply motion compensation 591 to one or both of the images I′(x,y) and P′(x,y). This can be achieved using two (or more) preview images 526, 527 to create a motion map 580 as described in U.S. application Ser. No. 10/985,657 and its corresponding PCT Application, which are hereby incorporated by reference, as well as other techniques for motion compensation that may be understood by those skilled in the art. In embodiments which incorporate motion compensation, the acquisition parameters for the main image I(x,y) will typically be used to determine if motion compensation is to be applied. Additionally, a user setting may be provided to enable or disable motion compensation. Alternatively, motion compensation may be applied, on a pixel by pixel basis, as part of alignment described below.

Motion compensation may be employed prior to the generation of a foreground/background map, e.g., where it is desired to eliminate a global motion of the image. However in certain embodiments it may be advantageous to perform a secondary motion compensation operation during the creation of the foreground/background map. This secondary motion compensation is not intended to eliminate a global motion of the image, but rather to compensate for small localized motions that may occur within the image. A good example is that of the leaves of a tree or bush which are fluttering in the wind while an image is being acquired. Such local motions can cause variations in luminance which should be compensated for after the initial foreground/background map is created 596 and segmented 597. Afterwards, a localized motion compensation may be employed to eliminate regions which exhibited localized motion or to subject such regions to more detailed analysis. This is illustrated in FIG. 2(c). In the case of this embodiment, morphological closing 592 and elimination of small regions 593 are included. Techniques to implement each of these are known to those skilled in the art of image segmentation.

Although nominally of the same scene, the preview image and the finally acquired full resolution image may differ spatially due to the temporal lag between capturing the two images. The alignment may be global, due to camera movement, or local, due to object movement, or a combination of the two. Therefore, the two images are advantageously aligned 540 in accordance with a preferred embodiment. Essentially, alignment involves transforming at least portions of one of the images, and in this embodiment the preview image P′(x,y), to obtain maximum correlation between the images based on measurable characteristics such as color, texture and edge analysis. U.S. Pat. No. 6,295,367 is hereby incorporated by reference as disclosing techniques for achieving alignment. The technique may align images that are initially misaligned due to object and camera movement. U.S. Pat. No. 5,933,546 is also hereby incorporated by reference. Multi-resolution data may be used for pattern matching. Alignment is also discussed further in relation to FIG. 4.

The images are then equalized for exposure and possibly in color space 550. Equalisation attempts to bring the preview image and the flash full resolution image to the same overall level of exposure. The equalization can be achieved in different manners. The goal is to ensure that both images, preview and final, have the same ambient conditions or a simulation of them. Specifically, the preview image is preferably adjusted to have the same overall exposure as the flash image. In most cases, when using flash, even in a fill-flash mode, the final image will use a lower ambient exposure, to prevent over exposure due to the flash; in other words, the overall ambient exposure of the flash image is lower. Put another way, the exposure on the foreground should remain constant after adding the flash light, and thus there is a need to use a smaller aperture or a shorter shutter speed. The equalization may be done analytically by matching the histograms of the images. Alternatively, if the overall ambient exposure, depicted as a function of aperture, shutter speed, and sensitivity (or gain), can be calculated, and if the exposures differ, the pixel values can be modified, up to clipping conditions, based on the ratio between the two. Note that the exposure might not be equal in all channels, and the process may also include a stage of color correction which compensates for different exposures for the various color channels. An example of this is when the ambient light is warm, such as incandescent, while the final image using flash is closer to daylight in terms of the overall color temperature.

In an alternative method, when the final ambient exposure is known, the preview image used as reference can be acquired with the same equivalent exposure. This can serve to eliminate the equalization stage. Note that in such a case, the preview image may not be optimal for ambient conditions, but it is equalized with the final flash image.

As can be seen from FIG. 3, an idealised non-flash image of a scene containing some foreground objects can be considered to have a generally unimodal distribution of luminance levels across all pixels. Where a scene is well lit, the peak of the distribution tends to be at a higher luminance level, whereas for dimly lit scenes, the peak will tend to be at a lower luminance level. In a flash version of the same scene, pixels corresponding to foreground objects will tend to have increased luminance levels due to the proximity to the flash source. However, pixels corresponding to background objects will tend to have relatively reduced luminance levels. Thus, in a preferred embodiment, pixel luminance levels for a flash version image of a scene are mapped to luminance levels which bring the non-flash (preview) image and the flash version image of the scene to the same overall level of exposure. This mapping g() can be represented as follows:

Σx Σy P″(x,y) = g(P′(x,y),x,y) · Σx Σy P′(x,y)

In the simplest case, the function g() is a constant, in general greater than 1, mapping exposure levels in a preview image P′(x,y) to produce an altered image P″(x,y) having the same overall exposure level as the flash version image I′(x,y). (Alternatively, the image I′(x,y) could be mapped to I″(x,y).) In the simplest implementation of this case, both images I′(x,y) and P′(x,y) are converted to greyscale and the mean luminance for each image is computed. The luminance values of one of the images are then adjusted so that the mean luminance values of the altered image P″(x,y) and the image I′(x,y) match.
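For concreteness, the constant-gain case just described might be sketched as follows (a simplified illustration assuming 8-bit greyscale float arrays; the function name is ours):

```python
import numpy as np

def equalize_exposure(flash, preview):
    """Sketch of the simplest equalisation: a constant gain g that
    matches the mean luminance of P' to that of I' (both greyscale)."""
    g = flash.mean() / max(preview.mean(), 1e-6)
    # Apply the constant mapping g() and clip to the valid range,
    # respecting the clipping conditions mentioned above.
    return np.clip(preview * g, 0.0, 255.0)
```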

However, the function g() can be dependent on the original exposure level of a pixel P′(x,y), for example, to prevent color saturation or loss of contrast. The function may also be dependent on a pixel's (x,y) location within an image, perhaps tending to adjust more centrally located pixels more than peripheral pixels.

Nonetheless, it will be seen from FIG. 3 that in a pixel by pixel comparison, or even a block-based comparison (each block comprising N×N pixels within a regular grid of M×M regions), the adjusted flash version of the image has a bimodal distribution between the exposure levels of background and foreground objects.

In preferred embodiments, during equalisation, one or more thresholds VH, VL, and possibly a block size n, are determined for later use in determining the background and foreground areas of the image I′(x,y). The threshold process is based on finding the optimal threshold values in a bimodal distribution, with the benefit of a reference unimodal non-flash image. Suitable techniques are described in the literature and are known to one familiar in the art of numerical classification. As an example, the upper threshold level VH could be taken as the cross-over luminance value of the upper bimodal peak and the unimodal distribution, whereas the lower threshold VL could be taken as the cross-over of the lower bimodal peak and the unimodal distribution. It will be appreciated that the distribution of pixel exposure levels may not in practice be smooth, and there may be several cross-over points in raw image data; some smoothing of the luminance distribution may therefore need to be performed before determining such cross-over points and thus the thresholds.
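One naive way to locate such cross-over points, offered here only as an illustration (the smoothing window and the choice of first and last crossings are our assumptions), is to smooth both luminance histograms and look for sign changes in their difference:

```python
import numpy as np

def find_thresholds(flash, preview, bins=256, win=9):
    """Illustrative search for VL and VH as cross-over points of the
    bimodal flash histogram and the unimodal non-flash histogram,
    after simple moving-average smoothing of both."""
    hf, edges = np.histogram(flash, bins=bins, range=(0, 256), density=True)
    hp, _ = np.histogram(preview, bins=bins, range=(0, 256), density=True)
    kernel = np.ones(win) / win
    hf = np.convolve(hf, kernel, mode="same")
    hp = np.convolve(hp, kernel, mode="same")
    # Cross-overs of the two distributions are sign changes of their
    # difference; raw data may yield several, hence the smoothing.
    crossings = np.where(np.diff(np.sign(hf - hp)) != 0)[0]
    if len(crossings) < 2:
        return None, None  # fall back to a threshold-free difference map
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[crossings[0]], centers[crossings[-1]]  # (VL, VH)
```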

After the thresholds VH, VL are determined, the image is processed via a segmenting tool, 590, to designate pixels or regions as background or foreground. In one embodiment, pixels whose values change by less than a threshold amount, say VH-VL (or some other empirically determined value), between the flash I′(x,y) and non-flash P″(x,y) versions of the image represent pixels in areas of a flash-image forming a boundary between background and foreground objects. When such individual pixels are linked, then segments of the image I′(x,y) substantially enclosed by linked boundary pixels and having pixel values on average brighter than in the corresponding segment of the non-flash image P″(x,y) are designated as foreground, whereas segments of the image substantially enclosed by boundary pixels and having pixel values on average darker than in the corresponding segment of the non-flash image are designated as background.

In a second embodiment, foreground pixels in a flash image are initially determined at step 596 as those with exposure levels greater than the upper exposure threshold value VH and background pixels in a flash image are those with exposure levels less than the lower exposure threshold value VL.

In a still further embodiment of step 596, thresholds are not employed and initial segmentation is achieved simply by subtracting the local exposure values for each image on a pixel by pixel or block by block (each block comprising n×n pixels) basis to create a difference map. Typically, foreground pixels will have a higher (brighter) value and background pixels will have a lower value.
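A compact sketch of these two variants of step 596, using the names introduced above (the encoding of the map as -1/0/1 is our own convention, not prescribed by the text), might read:

```python
import numpy as np

def initial_map(flash, v_low, v_high):
    """Threshold variant of step 596: pixels brighter than VH are
    provisionally foreground, darker than VL background; the rest
    are left undecided for the segmentation stage 597."""
    label = np.zeros(flash.shape, dtype=np.int8)  # 0 = undecided
    label[flash > v_high] = 1                     # foreground
    label[flash < v_low] = -1                     # background
    return label

def difference_map(flash, preview_eq, n=1):
    """Threshold-free variant: a signed difference map computed pixel
    by pixel (n=1) or over n x n blocks; brighter (higher) values
    suggest foreground, being closer to the flash."""
    f = flash.astype(np.float32)
    p = preview_eq.astype(np.float32)
    if n > 1:
        h, w = (f.shape[0] // n) * n, (f.shape[1] // n) * n
        f = f[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))
        p = p[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))
    return f - p
```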

One technique by which a block by block averaging can be advantageously achieved in a state-of-the-art digital camera is to employ a hardware subsampler 124 where available. This can very quickly generate a subsampled 1/n version of both images where each pixel of each image represents an average over an n×n block in the original image.

In certain embodiments, after an initial matching of size between preview and main image, further subsampling may be implemented prior to subtracting the local exposure values for each image on a pixel by pixel basis. After an initial foreground/background map is determined using the smallest pair of matched images, this map may be refined by applying the results to the next largest pair of subsampled images, each pixel now corresponding to an N×N block of pixels in the larger pair of images.

A refinement of said initial map may be achieved by performing a full pixel-by-pixel analysis of the larger pair of matched images only on the border regions of the initial foreground/background map. It will be appreciated that, where a hardware subsampler is available, generating multiple sets of matched subsampled images is relatively inexpensive in terms of computing resources. In certain embodiments, performing a series of such refinements on successively larger pairs of matched subsampled images can advantageously eliminate the need for alignment and registration of the images. The advantages of this technique must be balanced against the requirement to temporarily store a series of pairs of matched subsampled images of successively decreasing size.
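The coarse-to-fine refinement might be sketched as follows; this is purely illustrative and assumes each matched pair is exactly twice the size of the previous one, a binary 0/1 map, and a simple signed-difference test at the border pixels:

```python
import numpy as np

def refine_map(initial, pairs):
    """Hypothetical refinement loop: 'initial' is a 0/1 foreground map
    from the smallest matched pair; 'pairs' lists (flash, preview)
    images of increasing size, each 2x the previous in both axes."""
    fg = initial.astype(np.int8)
    for flash, preview in pairs:
        # Upsize the map so each pixel becomes a 2x2 block.
        fg = np.kron(fg, np.ones((2, 2), dtype=np.int8))
        # Border pixels are those with a differently-classified
        # 4-neighbour; interior pixels keep their classification
        # (wrap-around at the image edges is ignored for brevity).
        border = np.zeros(fg.shape, dtype=bool)
        for axis in (0, 1):
            for shift in (1, -1):
                border |= (np.roll(fg, shift, axis=axis) != fg)
        # Re-decide only the border pixels at the finer resolution.
        diff = flash.astype(np.float32) - preview.astype(np.float32)
        fg[border] = (diff[border] > 0).astype(np.int8)
    return fg
```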

Each of the processes involving threshold comparisons may also take into account neighbouring pixel operations where the threshold value or comparison is dependent on the surrounding pixel values to eliminate noise artefacts and slight shifts between the preview and the final image.

Nonetheless, it will be seen that the determination of background/foreground membership is not achieved with complete accuracy using a single pass pixel-based or block-based analysis alone. As an example, consider a person with a striped shirt. It may be that the corrected luminance of the dark stripes actually indicates they are background pixels even though they are in close proximity to a large collection of foreground pixels.

Accordingly, it is advantageous to incorporate additional analysis and so, following the creation of an initial foreground map, even if this has been performed on an n×n block rather than pixel basis, the foreground pixels/blocks are segmented and labelled 597. This step helps to eliminate artefacts such as a striped shirt and those due to image noise or statistical outliers in the foreground map. It is also advantageous to eliminate small segments.
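Using off-the-shelf connected-component tools, the closing, labelling and small-segment elimination steps (592, 597, 593) might be sketched as follows; the minimum segment size and structuring element are illustrative choices:

```python
import numpy as np
from scipy import ndimage

def clean_foreground(fg_map, min_size=64):
    """Sketch of steps 592/597/593 on a binary foreground map."""
    # Morphological closing (592) fills small holes and joins
    # fragmented foreground areas such as the stripes of a shirt.
    closed = ndimage.binary_closing(fg_map, structure=np.ones((3, 3)))
    # Label connected foreground segments (597) ...
    labels, n = ndimage.label(closed)
    # ... and eliminate small segments (593) that are likely image
    # noise or statistical outliers in the foreground map.
    sizes = ndimage.sum(closed, labels, index=range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_size
    return keep[labels]
```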

Thus a final map (mask) of foreground pixels is created 594. This may now be upsized to match the size of the main acquired image, 599-1, and can be advantageously employed for further image processing of the main image, 501. For example, although not shown, the system may include a face detector or redeye filter, and in such a case 501 can include techniques for applying these selectively to the foreground region defined by the mask, thus reducing the execution time for such algorithms by excluding the analysis of background segments. Alternatively, where the system includes a component for identifying redeye candidate regions 501, U.S. patent application Ser. No. 10/976,336 is hereby incorporated by reference. This component can refine the redeye analysis by increasing or decreasing the probability of a redeye candidate region being an actual redeye region according to whether the candidate appears in the foreground or background of the captured image.

As was already mentioned, in a preferred embodiment it may be advantageous to initially employ aggressive downsampling of the images 510, 520. This may eliminate the need for the alignment step 540 and, if the present invention is applied recursively and selectively on a regional basis, a full-sized foreground mask can be achieved without a great increase in computation time.

Referring back now to FIG. 2(b), it is assumed that during the size matching 530 of FIG. 2(a) several pairs of matching images are created or, alternatively, are created dynamically on each recursion through the loop of FIG. 2(b). For example, consider a main image of size 1024×768 with a preview of size 256×192. Suppose that three sets of matching images are created: at a resolution of 1024×768 (the preview is upsized by 4×), at 256×192 (the main image is downsized by 4×) and at 64×48 (the main image downsized by 16× and the preview downsized by 4×). Now assume that the initial analysis is performed on the 64×48 image as described in FIG. 2(a), as far as the segmentation tool step 590.

After the step 590, an additional step 517 determines if the comparison size (the image size used to generate the latest iteration of the foreground map) is equal to the size of the main flash image I(x,y). If not, then the foreground map is upsized to the next comparison size 599-2, in this case 256×192 pixels. Each pixel in the original map is now enlarged into a 4×4 pixel block. The regions of this enlarged map forming the boundary between foreground and background segments, which were single pixels at the lower map resolution, are next determined 570, and the downsampled images of this comparison size (256×192) are loaded 531. In this case, the technique may be applied to the entire image or a portion of the entire image at the higher resolution, as regions within foreground segments are determined to definitely be foreground regions. In this embodiment, it is only the boundary regions between background and foreground that are analyzed. The same analysis that was applied to the main image is now applied to these regions. They may be aligned 540 before being equalized 551, and the segmentation tool 590 is applied to each 16×16 region. The results are merged with the existing foreground map 515.

If the foreground map is now of the same size as the main flash image 517, then it can be directly applied to the main image 501. Alternatively, if it is still smaller, then it is upsampled to the next image comparison size 599-2 and a further recursion through the algorithm is performed.

The segmented data is stored 598 as a segmentation mask, as in FIG. 2(a). If necessary, in order to return to the original image size, the segmentation mask will need to be up-sampled 599 by the same factor by which the acquired image was down-sampled in step 532. The upsampling 599 should be sophisticated enough to investigate the edge information in the periphery of the mask, to ensure that the right regions in the upsampled map will be covered. Such techniques may include upsampling of an image or a mask while maintaining edge information.

FIG. 4 shows the workflow of the alignment function 540 of FIG. 2(a), where the inputs are the two images I′(x,y) and P′(x,y) as defined in relation to FIG. 2(a). The alignment may be global for the entire image or local for specific regions. Global movement may be caused by camera movement, while local movement may be caused by object movement during the exposure interval of the image. For example, the alignment may be a simple linear one, such as a shift in the horizontal direction by H pixels and/or in the vertical direction by V pixels. Mathematically, the shifted image, P″(x,y), can be described as:
P″(x,y)=P′(x−H,y−V)
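Such a global shift can be estimated, for example, by an exhaustive search over a small window of candidate offsets, as in the following illustrative sketch (the search radius, the mean-squared-error criterion and the axis convention are our assumptions; practical systems would use correlation or multi-resolution pattern matching instead):

```python
import numpy as np

def shift_align(preview, flash, search=8):
    """Estimate the global shift (H, V) minimising the mean squared
    error between P' and I', then return P''(x,y) = P'(x-H, y-V)."""
    p = preview.astype(np.float32)
    f = flash.astype(np.float32)
    best_hv, best_err = (0, 0), np.inf
    for v in range(-search, search + 1):
        for h in range(-search, search + 1):
            shifted = np.roll(np.roll(p, v, axis=0), h, axis=1)
            err = float(np.mean((shifted - f) ** 2))
            if err < best_err:
                best_hv, best_err = (h, v), err
    h, v = best_hv
    # Return the preview shifted by the best-scoring (H, V).
    return np.roll(np.roll(preview, v, axis=0), h, axis=1)
```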

However, a simple translation operation assumes shift invariance, which may not suffice in the aligning of the image. Even in the case of camera movement, such movement may include an Affine transformation that includes rotation and shear as well as translation. Therefore, there may be a need for X-Y shearing, which is a symmetrical shift of the object's points in the direction of the axis to correct for perspective changes; X-Y tapering, where the object is pinched by shifting its coordinates towards the axis, the greater the magnitude of the coordinate the further the shift; or rotation around an arbitrary point.

In general, the alignment process may involve an Affine transformation, defined as a special class of projective transformations that do not move any objects from the affine space to the plane at infinity or conversely, or any transformation that preserves collinearity (i.e. all points lying on a line initially still lie on a line after transformation) and ratios of distances (e.g., the midpoint of a line segment remains the midpoint after transformation). Geometric contraction, expansion, dilation, reflection, rotation, shear, similarity transformations, spiral similarities and translation are all affine transformations, as are their combinations. In general, the alignment 540 may be achieved via an affine transformation which is a composition of rotations, translations, dilations, and shears, all well-known to one familiar in the art of image processing.

If it is determined through a correlation process that a global transformation suffices (block 542=YES), one of the images, for simplicity the preview image, will undergo an Affine transformation, 544, to align itself with the final full resolution image. Mathematically, this transformation can be depicted as:
P″=AP′+q
where A is a linear transformation and q is a translation.
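Applying such a transform to the preview is straightforward with standard tools; the following sketch uses scipy, noting that scipy's affine_transform maps output coordinates through the inverse of the desired transform, hence the inversion (the function name is ours):

```python
import numpy as np
from scipy import ndimage

def affine_align(preview, A, q):
    """Apply the global affine transform P'' = A P' + q (step 544)."""
    A = np.asarray(A, dtype=np.float64)
    q = np.asarray(q, dtype=np.float64)
    A_inv = np.linalg.inv(A)
    # scipy computes output[o] = input[matrix @ o + offset]; to realise
    # P''(A p + q) = P'(p) we need matrix = A^-1 and offset = -A^-1 q.
    return ndimage.affine_transform(preview, A_inv, offset=-A_inv @ q)
```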

However, in some cases a global transformation may not work well, in particular for cases where the subject matter moved, as could happen when photographing animated objects. In such cases, in particular in images with multiple human subjects where the subjects move in independent fashion, the process of alignment 540 may be broken down, 546, into numerous local regions, each with its own affine transformation. In the case of the use of the present technique for redeye detection and correction, it is preferred to align the eyes between the images. Therefore, according to this alternative, one or multiple local alignments may be performed, 548, for regions in the vicinity surrounding the eyes, such as faces.

Only after the images are aligned are the exposure values between the images equalised, as in FIG. 2(a).

The preferred embodiments described above may be modified by adding or changing operations, steps and/or components in many ways to produce advantageous alternative embodiments. For example, the reference image can be a post-view image rather than a preview image, i.e. an image taken without flash immediately after the flash picture is taken.

Alternatively, the reference image could be the flash image and the full resolution captured image the non-flash image. An example of this is when the camera is set up in a special mode (similar to a portrait scene selection mode), so that the preview image is the one with the flash while the final image may be with no flash. In this case, the roles of the images reverse in terms of calculating the difference between the images. Additionally, the reference image may be either a preview image or a post-view image.

The preferred embodiments described herein may involve expanded digital acquisition technology that inherently involves digital cameras, but that may be integrated with other devices such as cell-phones equipped with an acquisition component or toy cameras. The digital camera or other image acquisition device of the preferred embodiment has the capability to record not only image data, but also additional data referred to as meta-data. The file header of an image file, such as JPEG, TIFF, JPEG-2000, etc., may include capture information including the preview image, a set of preview images or a single image that is processed to provide a compressed version of selected reference images, for processing and segmentation at a later post processing stage, which may be performed in the acquisition device or in a separate device such as a personal computer.

In these embodiments, in the comparison stages, the pixel values may be compared for lightness. Alternatively or additionally, they can be compared with other values such as color. An example of chromatic comparison is warm coloring, such as a yellow tint that may indicate incandescent light, or a blue tint that may indicate shade regions in a sunlit environment, or other colours indicative of a change between the ambient lighting and the flash lighting. The comparison may be absolute or relative. In the absolute case, the absolute value of the difference is recorded regardless of which of the images has the larger pixel value. In the relative case, not only the difference but also its direction is maintained. The two techniques may also assist in establishing the registration between the two images. In the case where the subject moves slightly, for example horizontally, the relative difference may indicate a reversal of the values on the left side of the object and the right side of the object.

In certain embodiments it may also prove advantageous to employ a “ratio map” rather than a “difference map”. In such embodiments a ratio between the pixel luminance values of the two images (flash and non-flash) is determined. This technique can provide better results in certain cases and may be employed either as an alternative to a simple subtraction, or in certain embodiments it may be advantageous to combine output regions derived from both techniques, using logical or statistical techniques or a combination thereof, to generate a final foreground/background map.
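As a final illustrative sketch, the ratio map is a per-pixel quotient rather than a difference; the epsilon guard, the threshold and the logical-AND combination shown are our assumptions, not prescribed by the text:

```python
import numpy as np

def ratio_map(flash, preview_eq, eps=1.0):
    """Per-pixel luminance ratio of flash to non-flash images;
    values well above 1 suggest foreground near the flash."""
    f = flash.astype(np.float32)
    p = preview_eq.astype(np.float32)
    return (f + eps) / (p + eps)

def combined_foreground(flash, preview_eq, ratio_thresh=1.2):
    """One possible combination: logically AND the foreground
    regions produced by the difference and ratio techniques."""
    diff_fg = (flash.astype(np.float32) - preview_eq.astype(np.float32)) > 0
    return diff_fg & (ratio_map(flash, preview_eq) > ratio_thresh)
```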

The present invention is not limited to the embodiments described above herein, which may be amended or modified without departing from the scope of the present invention as set forth in the appended claims, and structural and functional equivalents thereof. In addition, United States published patent application no. 2003/0103159 to Nonaka, Osamu, entitled “Evaluating the effect of a strobe light in a camera” is hereby incorporated by reference as disclosing an in-camera image processing method for correcting shadow regions in a flash image.

In methods that may be performed according to preferred embodiments herein and that may have been described above and/or claimed below, the operations have been described in selected typographical sequences. However, the sequences have been selected and so ordered for typographical convenience and are not intended to imply any particular order for performing the operations.

In addition, all references cited above herein, in addition to the background and summary of the invention sections, are hereby incorporated by reference into the detailed description of the preferred embodiments as disclosing alternative embodiments and components.

Patent Citations
Cited Patent | Filing date | Publication date | Applicant | Title
US4683496 | Aug 23, 1985 | Jul 28, 1987 | The Analytic Sciences Corporation | System for and method of enhancing images using multiband information
US5046118 | Feb 6, 1990 | Sep 3, 1991 | Eastman Kodak Company | Tone-scale generation method and apparatus for digital x-ray images
US5063448 | Jul 31, 1989 | Nov 5, 1991 | Imageware Research And Development Inc. | Apparatus and method for transforming a digitized signal of an image
US5086314 | Jan 22, 1991 | Feb 4, 1992 | Nikon Corporation | Exposure control apparatus for camera
US5109425 | Sep 30, 1988 | Apr 28, 1992 | The United States Of America As Represented By The United States National Aeronautics And Space Administration | Method and apparatus for predicting the direction of movement in machine vision
US5130935 | Sep 14, 1989 | Jul 14, 1992 | Canon Kabushiki Kaisha | Color image processing apparatus for extracting image data having predetermined color information from among inputted image data and for correcting inputted image data in response to the extracted image data
US5164993 | Nov 25, 1991 | Nov 17, 1992 | Eastman Kodak Company | Method and apparatus for automatic tonescale generation in digital radiographic images
US5329379 | Oct 22, 1992 | Jul 12, 1994 | International Business Machines Corporation | System and method of measuring fidelity of decompressed video signals and images
US5500685 | Oct 14, 1994 | Mar 19, 1996 | Avt Communications Limited | Wiener filter for filtering noise from a video signal
US5504846 | Dec 16, 1993 | Apr 2, 1996 | International Business Machines Corporation | Method and apparatus for improved area demarcation in bit mapped image derived from multi-color bit mapped image
US5534924 | Dec 13, 1993 | Jul 9, 1996 | Thomson Broadcast | Method and device to obtain an element of information on depth in the field seen by picture-shooting device
US5594816 | Aug 21, 1995 | Jan 14, 1997 | Eastman Kodak Company | Method processing an image of a scene discernible by the human eye
US5621868 | Apr 15, 1994 | Apr 15, 1997 | Sony Corporation | Generating imitation custom artwork by simulating brush strokes and enhancing edges
US5724456 | Mar 31, 1995 | Mar 3, 1998 | Polaroid Corporation | Brightness adjustment of images using digital scene analysis
US5812787 | Jun 30, 1995 | Sep 22, 1998 | Intel Corporation | Computer-implemented method for encoding pictures of a sequence of pictures
US5844627 | Sep 11, 1995 | Dec 1, 1998 | Minerya System, Inc. | Structure and method for reducing spatial noise
US5878152 | May 21, 1997 | Mar 2, 1999 | Cognex Corporation | Depth from focal gradient analysis using object texture removal by albedo normalization
US5880737 | Jun 27, 1996 | Mar 9, 1999 | Microsoft Corporation | Method and system for accessing texture data in environments with high latency in a graphics rendering system
US5949914 | Mar 17, 1997 | Sep 7, 1999 | Space Imaging Lp | Enhancing the resolution of multi-spectral image data with panchromatic image data using super resolution pan-sharpening
US5990904 | Jun 27, 1996 | Nov 23, 1999 | Microsoft Corporation | Method and system for merging pixel fragments in a graphics rendering system
US6005959 | Jul 21, 1997 | Dec 21, 1999 | International Business Machines Corporation | Produce size recognition system
US6008820 | Jun 27, 1996 | Dec 28, 1999 | Microsoft Corporation | Processor for controlling the display of rendered image layers and method for controlling same
US6018590 | Oct 7, 1997 | Jan 25, 2000 | Eastman Kodak Company | Technique for finding the histogram region of interest based on landmark detection for improved tonescale reproduction of digital radiographic images
US6061476 | Nov 24, 1997 | May 9, 2000 | Cognex Corporation | Method and apparatus using image subtraction and dynamic thresholding
US6069635 | Sep 19, 1997 | May 30, 2000 | Sony Corporation | Method of producing image data and associated recording medium
US6069982 | Dec 23, 1997 | May 30, 2000 | Polaroid Corporation | Estimation of frequency dependence and grey-level dependence of noise in an image
US6122408 | Apr 30, 1996 | Sep 19, 2000 | Siemens Corporate Research, Inc. | Light normalization method for machine vision
US6198505 | Jul 19, 1999 | Mar 6, 2001 | Lockheed Martin Corp. | High resolution, high speed digital camera
US6240217 | Feb 24, 1998 | May 29, 2001 | Redflex Traffic Systems Pty Ltd | Digital image processing
US6243070 | Nov 13, 1998 | Jun 5, 2001 | Microsoft Corporation | Method and apparatus for detecting and reducing color artifacts in images
US6292194 | Jul 29, 1999 | Sep 18, 2001 | Microsoft Corporation | Image compression method to reduce pixel and texture memory requirements in graphics applications
US6326964 | Nov 6, 1998 | Dec 4, 2001 | Microsoft Corporation | Method for sorting 3D object geometry among image chunks for rendering in a layered graphics rendering system
US6407777 | Oct 9, 1997 | Jun 18, 2002 | Deluca Michael Joseph | Red-eye filter method and apparatus
US6483521 | Feb 1, 1999 | Nov 19, 2002 | Matsushita Electric Industrial Co., Ltd. | Image composition method, image composition apparatus, and data recording media
US6526161 | Aug 30, 1999 | Feb 25, 2003 | Koninklijke Philips Electronics N.V. | System and method for biometrics-based facial feature extraction
US6535632 | Aug 30, 1999 | Mar 18, 2003 | University Of Washington | Image processing in HSI color space using adaptive noise filtering
US6538656 | Aug 18, 2000 | Mar 25, 2003 | Broadcom Corporation | Video and graphics system with a data transport processor
US6577762 | Oct 26, 1999 | Jun 10, 2003 | Xerox Corporation | Background surface thresholding
US6577821 | Jul 17, 2001 | Jun 10, 2003 | Eastman Kodak Company | Camera having oversized imager and method
US6593925 | Jun 22, 2000 | Jul 15, 2003 | Microsoft Corporation | Parameterized animation compression methods and arrangements
US6631206 | Mar 22, 2000 | Oct 7, 2003 | University Of Washington | Image filtering in HSI color space
US6670963 | Jan 17, 2001 | Dec 30, 2003 | Tektronix, Inc. | Visual attention model
US6678413 | Nov 24, 2000 | Jan 13, 2004 | Yiqing Liang | System and method for object identification and behavior characterization using video analysis
US6683992 | Dec 28, 2000 | Jan 27, 2004 | Matsushita Electric Industrial Co., Ltd. | Image decoding apparatus and image coding apparatus
US6744471 | Dec 3, 1998 | Jun 1, 2004 | Olympus Optical Co., Ltd | Electronic camera that synthesizes two images taken under different exposures
US6756993 | Jan 17, 2002 | Jun 29, 2004 | The University Of North Carolina At Chapel Hill | Methods and apparatus for rendering images using 3D warping techniques
US6781598 | Nov 24, 2000 | Aug 24, 2004 | Sony Computer Entertainment Inc. | Entertainment apparatus, image generation method, and storage medium
US6803954 | Oct 20, 2000 | Oct 12, 2004 | Lg Electronics Inc. | Filtering control method for improving image quality of bi-linear interpolated image
US6804408 | Dec 22, 1999 | Oct 12, 2004 | Eastman Kodak Company | Method for enhancing a digital image with noise-dependent control of texture
US6836273 | Nov 13, 2000 | Dec 28, 2004 | Matsushita Electric Industrial Co., Ltd. | Memory management method, image coding method, image decoding method, image display method, memory management apparatus, and memory management program storage medium
US6842196 | Apr 4, 2000 | Jan 11, 2005 | Smith & Nephew, Inc. | Method and system for automatic correction of motion artifacts
US6850236 | Dec 29, 2000 | Feb 1, 2005 | Sun Microsystems, Inc. | Dynamically adjusting a sample-to-pixel filter in response to user input and/or sensor input
US6930718 | Jul 17, 2001 | Aug 16, 2005 | Eastman Kodak Company | Revised recapture camera and method
US6952225 | Feb 2, 2000 | Oct 4, 2005 | Fuji Photo Film Co., Ltd. | Method and apparatus for automatic white balance adjustment based upon light source type
US6956573 | Nov 14, 1997 | Oct 18, 2005 | Sarnoff Corporation | Method and apparatus for efficiently representing, storing and accessing video information
US6987535 | Nov 8, 1999 | Jan 17, 2006 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and storage medium
US6989859 | Dec 22, 2000 | Jan 24, 2006 | Eastman Kodak Company | Camera having user interface ambient sensor viewer adaptation compensation and method
US6990252 | May 23, 2001 | Jan 24, 2006 | Adobe Systems, Inc. | System for manipulating noise in digital images
US7013025 | Nov 1, 2001 | Mar 14, 2006 | Minolta Co., Ltd. | Image correction apparatus
US7035477 | Dec 20, 2001 | Apr 25, 2006 | Hewlett-Packard Development Company, L.P. | Image composition evaluation
US7042505 | Jun 12, 2002 | May 9, 2006 | Fotonation Ireland Ltd. | Red-eye filter method and apparatus
US7054478 | Aug 18, 2003 | May 30, 2006 | Dynamic Digital Depth Research Pty Ltd | Image conversion and encoding techniques
US7064810 | Sep 15, 2003 | Jun 20, 2006 | Deere & Company | Optical range finder with directed attention
US7081892 | Apr 9, 2002 | Jul 25, 2006 | Sony Computer Entertainment America Inc. | Image with depth of field using z-buffer image data and alpha blending
US7102638 | Mar 19, 2003 | Sep 5, 2006 | Mitsubishi Electric Research Labs, Inc. | Reducing texture details in images
US7103227 | Mar 19, 2003 | Sep 5, 2006 | Mitsubishi Electric Research Laboratories, Inc. | Enhancing low quality images of naturally illuminated scenes
US7103357 | Jan 11, 2001 | Sep 5, 2006 | Lightsurf Technologies, Inc. | Media spooler system and methodology providing efficient transmission of media content from wireless devices
US7149974 | Apr 3, 2002 | Dec 12, 2006 | Fuji Xerox Co., Ltd. | Reduced representations of video sequences
US7206449 | Mar 19, 2003 | Apr 17, 2007 | Mitsubishi Electric Research Laboratories, Inc. | Detecting silhouette edges in images
US7218792 | Mar 19, 2003 | May 15, 2007 | Mitsubishi Electric Research Laboratories, Inc. | Stylized imaging using variable controlled illumination
US7295720 | Mar 19, 2003 | Nov 13, 2007 | Mitsubishi Electric Research Laboratories | Non-photorealistic camera
US7317843 | Apr 1, 2004 | Jan 8, 2008 | Microsoft Corporation | Luminance correction
US7359562 | Mar 19, 2003 | Apr 15, 2008 | Mitsubishi Electric Research Laboratories, Inc. | Enhancing low quality videos of illuminated scenes
US7394489 | Mar 19, 2004 | Jul 1, 2008 | Fuji Xerox Co., Ltd. | Comparative object shooting condition judging device, image quality adjustment device, and image shooting apparatus
US7469071 | Feb 10, 2007 | Dec 23, 2008 | Fotonation Vision Limited | Image blurring
US7606417 * | Aug 30, 2005 | Oct 20, 2009 | Fotonation Vision Limited | Foreground/background segmentation in digital images with differential exposure calculations
US7738015 * | Aug 16, 2004 | Jun 15, 2010 | Fotonation Vision Limited | Red-eye filter method and apparatus
US7796816 * | Sep 18, 2009 | Sep 14, 2010 | Fotonation Vision Limited | Foreground/background segmentation in digital images with differential exposure calculations
US20010000710 | Dec 7, 2000 | May 3, 2001 | Xerox Corporation | Method and apparatus for pre-processing mixed raster content planes to improve the quality of a decompressed image and increase document compression ratios
US20010012063 | Feb 5, 2001 | Aug 9, 2001 | Masamine Maeda | Image pickup apparatus
US20020028014 | Aug 23, 2001 | Mar 7, 2002 | Shuji Ono | Parallax image capturing apparatus and parallax image processing apparatus
US20020080261 | Aug 29, 2001 | Jun 27, 2002 | Minolta Co., Ltd. | Image processing apparatus and image sensing device
US20020093670 | Dec 7, 2000 | Jul 18, 2002 | Eastman Kodak Company | Doubleprint photofinishing service with the second print having subject content-based modifications
US20020180748 | Jan 17, 2002 | Dec 5, 2002 | Voicu Popescu | Methods and apparatus for rendering images using 3D warping techniques
US20020191860 | Dec 20, 2001 | Dec 19, 2002 | Cheatle Stephen Philip | Image composition evaluation
US20030038798 | Feb 28, 2002 | Feb 27, 2003 | Paul Besl | Method and system for processing, compressing, streaming, and interactive rendering of 3D color image data
US20030052991 | Sep 17, 2001 | Mar 20, 2003 | Stavely Donald J. | System and method for simulating fill flash in photography
US20030091225 | Jan 7, 2003 | May 15, 2003 | Eastman Kodak Company | Method for forming a depth image from digital image data
US20030103159 | Nov 26, 2002 | Jun 5, 2003 | Osamu Nonaka | Evaluating the effect of a strobe light in a camera
US20030169944 | Feb 27, 2003 | Sep 11, 2003 | Dowski Edward Raymond | Optimized image processing for wavefront coded imaging systems
US20030184671 | Mar 28, 2002 | Oct 2, 2003 | Robins Mark N. | Glare reduction system for image capture devices
US20040047513 | Jun 13, 2002 | Mar 11, 2004 | Tetsujiro Kondo | Image processing apparatus and method, and image pickup apparatus
US20040145659 | Jan 13, 2004 | Jul 29, 2004 | Hiromi Someya | Image-taking apparatus and image-taking system
US20040201753 | Dec 28, 2000 | Oct 14, 2004 | Tetsujiro Kondo | Signal processing device and method, and recording medium
US20040208385 | Apr 18, 2003 | Oct 21, 2004 | Medispectra, Inc. | Methods and apparatus for visually enhancing images
US20040223063 | Aug 5, 2003 | Nov 11, 2004 | Deluca Michael J. | Detecting red eye filter and apparatus using meta-data
US20050017968 | Jul 21, 2003 | Jan 27, 2005 | Stephan Wurmlin | Differential stream of point samples for real-time 3D video
US20050031224 | Aug 5, 2003 | Feb 10, 2005 | Yury Prilutsky | Detecting red eye filter and apparatus using meta-data
US20050041121 * | Aug 16, 2004 | Feb 24, 2005 | Eran Steinberg | Red-eye filter method and apparatus
US20050058322 * | Sep 16, 2003 | Mar 17, 2005 | Farmer Michael E. | System or method for identifying a region-of-interest in an image
US20050140801 * | Feb 4, 2004 | Jun 30, 2005 | Yury Prilutsky | Optimized performance and performance for red-eye filter method and apparatus
US20050213849 * | Feb 25, 2005 | Sep 29, 2005 | Accuimage Diagnostics Corp. | Methods and systems for intensity matching of a plurality of radiographic images
US20050243176 * | Apr 30, 2004 | Nov 3, 2005 | James Wu | Method of HDR image processing and manipulation
US20050271289 * | Jun 2, 2004 | Dec 8, 2005 | Anubha Rastogi | Image region of interest encoding
US20060008171 * | Jul 6, 2004 | Jan 12, 2006 | Microsoft Corporation | Digital photography with flash/no flash extension
US20060039690 * | Aug 30, 2005 | Feb 23, 2006 | Eran Steinberg | Foreground/background segmentation in digital images with differential exposure calculations
US20060104508 * | Sep 22, 2005 | May 18, 2006 | Sharp Laboratories Of America, Inc. | High dynamic range images from low dynamic range images
US20060153471 * | Jan 7, 2005 | Jul 13, 2006 | Lim Suk H | Method and system for determining an indication of focus of an image
US20060181549 * | Apr 19, 2006 | Aug 17, 2006 | Alkouh Homoud B | Image data processing using depth image data for realistic scene representation
US20060193509 * | Feb 25, 2005 | Aug 31, 2006 | Microsoft Corporation | Stereo-based image processing
US20070237355 * | Mar 31, 2006 | Oct 11, 2007 | Fuji Photo Film Co., Ltd. | Method and apparatus for adaptive context-aided human classification
EP1367538A2 * | Mar 28, 2003 | Dec 3, 2003 | Eastman Kodak Company | Image processing method and system
JP2000102040A * | Title not available
JP2000299789A * | Title not available
JP2001101426A * | Title not available
JP2001223903A * | Title not available
JP2002112095A * | Title not available
JP2003281526A * | Title not available
JP2004064454A * | Title not available
JP2004166221A * | Title not available
JP2004185183A * | Title not available
JP2006024206A * | Title not available
JP2006080632A * | Title not available
JP2006140594A * | Title not available
JPH0614193A | Title not available
JPH02281879A * | Title not available
JPH04127675A * | Title not available
JPH08223569A | Title not available
JPH10285611A * | Title not available
WO1994026057A1 | Apr 29, 1993 | Nov 10, 1994 | Colin Dennis Ager | Background separation for still and moving images
WO2002052839A2 | Dec 20, 2001 | Jul 4, 2002 | Hewlett Packard Co | Image composition evaluation
Non-Patent Citations
1. Adelson, E.H., "Layered Representations for Image Coding, http://web.mit.edu/persci/people/adelson/pub_pdfs/layers91.pdf", Massachusetts Institute of Technology, 1991, 20 pages.
2. Aizawa, K. et al., "Producing object-based special effects by fusing multiple differently focused images, http://rlinks2.dialog.com/NASApp/ChannelWEB/DialogProServlet?ChName=engineering", IEEE Transactions on Circuits and Systems for Video Technology, 2000, pp. 323-330, vol. 10, Issue 2.
3. Ashikhmin, Michael, "A tone mapping algorithm for high contrast images, http://portal.acm.org/citation.cfm?id=581916&coll=Portal&dl=ACM&CFID=17220933&CFTOKEN=89149269", ACM International Conference Proceeding Series, Proceedings of the 13th Eurographics workshop on Rendering, 2002, pp. 145-156, vol. 28.
4. Barreiro, R.B. et al., "Effect of component separation on the temperature distribution of the cosmic microwave background, Current Contents Search®. Dialog® File No. 440 Accession No. 23119677", Monthly Notices of the Royal Astronomical Society, 2006, pp. 226-246, vol. 368, Issue 1.
5. Beir, Thaddeus, "Feature-Based Image Metamorphosis", In Siggraph '92, Silicon Graphics Computer Systems, 2011 Shoreline Blvd, Mountain View CA 94043, http://www.hammerhead.com/thad/thad.html.
6. Benedek, C. et al., "Markovian framework for foreground-background-shadow separation of real world video scenes, Proceedings v 3851 LNCS 2006, Ei Compendex®. Dialog® File No. 278 Accession No. 11071345", 7th Asian Conference on Computer Vision, 2006.
7. Boutell, M. et al., "Photo classification by integrating image content and camera metadata", Pattern Recognition, Proceedings of the 17th International Conference, 2004, pp. 901-904, vol. 4.
8. Braun M. et al., "Information Fusion of Flash and Non-Flash Images, retrieved from the Internet: URL: http://graphics.stanford.edu/~georgp/vision.htm", 2002, pp. 1-12.
9. C. Swain and T. Chen, "Defocus-based image segmentation, INSPEC, http://citeseer.ist.psu.edu/swain95defocusbased.html", Proceedings ICASSP-95, vol. 4, pp. 2403-2406, Detroit, MI, May 1995, IEEE.
10. Chen, Shenchang et al., "View interpolation for image synthesis, ISBN:0-89791-601-8, http://portal.acm.org/citation.cfm?id=166153&coll=GUIDE&dl=GUIDE&CFID=680-9268&CFTOKEN=82843223", International Conference on Computer Graphics and Interactive Techniques, Proceedings of the 20th annual conference on Computer graphics and interactive techniques, 1993, pp. 279-288, ACM Press.
11. Eisemann, E. et al., "Flash Photography Enhancement via Intrinsic Relighting, ACM Transactions on URL: http://graphics.stanford.edu/~georgp/vision.htm", 2002, pp. 1-12.
12. Eriksen, H.K. et al., "Cosmic microwave background component separation by parameter estimation, INSPEC. Dialog® File No. 2 Accession No. 9947674", Astrophysical Journal, 2006, pp. 665-682, vol. 641, Issue 2.
13. European Patent Office, Communication pursuant to Article 94(3) EPC for EP Application No. 06776529.7, dated Jan. 30, 2008, 3 pages.
14. European Patent Office, extended European Search Report for EP application No. 07024773.9, dated Jun. 3, 2008, 5 pages.
15. European Patent Office, extended European Search Report for EP application No. 07756848.3, dated May 27, 2009, 4 pages.
16. Favaro, Paolo, "Depth from focus/defocus, http://homepages.inf.ed.ac.uk/rbf/Cvonline/LOCAL_COPIES/FAVARO1/dfdtutorial.html", 2002.
17. Final Office Action mailed Feb. 4, 2009, for U.S. Appl. No. 11/319,766, filed Dec. 27, 2005.
18. Final Office Action mailed Jun. 24, 2009, for U.S. Appl. No. 11/744,020, filed May 3, 2007.
19. Final Office Action mailed Sep. 15, 2010, for U.S. Appl. No. 11/744,020, filed May 3, 2007.
20. Final Office Action mailed Sep. 18, 2009, for U.S. Appl. No. 11/319,766, filed Dec. 27, 2005.
21. Haneda, E., "Color Imaging XII: Processing, Hardcopy, and Applications", Proceedings of Society of Optical Engineers, 2007, vol. 6493.
22. Hashi Yuzuru et al., "A New Method to Make Special Video Effects. Trace and Emphasis of Main Portion of Images, Japan Broadcasting Corp., Sci. and Technical Res. Lab., JPN, Eizo Joho Media Gakkai Gijutsu Hokoku, http://rlinks2.dialog.com/NASApp/ChannelWEB/DialogProServlet?ChName=engineering", 2003, pp. 23-26, vol. 27.
23. Heckbert, Paul S., "Survey of Texture Mapping, http://citeseer.ist.psu.edu/135643.html", Proceedings of Graphics Interface '86, IEEE Computer Graphics and Applications, 1986, pp. 56-67 & 207-212.
24. Homayoun Kamkar-Parsi, A., "A multi-criteria model for robust foreground extraction, http://portal.acm.org/citation.cfm?id=1099410&coll=Portal&dl=ACM&CFID=17220933&CFTOKEN=89149269", Proceedings of the third ACM international workshop on Video surveillance & sensor networks, 2005, pp. 67-70, ACM Press.
25. Jin, Hailin et al., "A Variational Approach to Shape from Defocus, {ECCV} (2), http://citeseer.ist.psu.edu/554899.html", 2002, pp. 18-30.
26. Jin, J., "Medical Imaging, Image Processing, Murray H. Loew, Kenneth M. Hanson, Editors", Proceedings of SPIE, 1996, pp. 864-868, vol. 2710.
27. Kelby, Scott, "The Photoshop Elements 4 Book for Digital Photographers, XP002406720, ISBN: 0-321-38483-0, Section: Tagging Images of People (Face Tagging)", 2005, New Riders.
28. Kelby, Scott, "Photoshop Elements 3: Down & Dirty Tricks, ISBN: 0-321-27835-6, One Hour Photo: Portrait and studio effects", 2004, Chapter 1, Peachpit Press.
29. Khan, E.A., "Image-based material editing, http://portal.acm.org/citation.cfm?id=1141937&coll=GUIDE&dl=GUIDE&CFID=68-09268&CFTOKEN=82843223", International Conference on Computer Graphics and Interactive Techniques, 2006, pp. 654-663, ACM Press.
30. Komatsu, Kunitoshi et al., "Design of Lossless Block Transforms and Filter Banks for Image Coding, http://citeseer.ist.psu.edu/komatsu99design.html".
31. Leray et al., "Spatially distributed two-photon excitation fluorescence in scattering media: Experiments and time-resolved Monte Carlo simulations", Optics Communications, 2007, pp. 269-278, vol. 272, Issue 1.
32. Leubner, Christian, "Multilevel Image Segmentation in Computer-Vision Systems, http://citeseer.ist.psu.edu/565983.html".
33. Li, Han et al., "A new model of motion blurred images and estimation of its parameter", Acoustics, Speech, and Signal Processing, IEEE International Conference on ICASSP '86, 1986, pp. 2447-2450, vol. 11.
34. Li, Liyuan et al., "Foreground object detection from videos containing complex background, http://portal.acm.org/citation.cfm?id=957017&coll=Portal&dl=ACM&CFID=17220933&CFTOKEN=89149269", Proceedings of the eleventh ACM international conference on Multimedia, 2003, pp. 2-10, ACM Press.
35. Li, S. et al., "Multifocus image fusion using artificial neural networks, DOI= http://dx.doi.org/10.1016/S0167-8655(02)00029-6", Pattern Recogn. Lett., 2002, pp. 985-997, vol. 23.
36. McGuire, M. et al., "Defocus video matting, DOI= http://doi.acm.org/10.1145/1073204.1073231", ACM Trans. Graph., 2005, pp. 567-576, vol. 24, Issue 3.
37. Neri, A. et al., "Automatic moving object and background separation, Ei Compendex®. Dialog® File No. 278 Accession No. 8063256", Signal Processing, 1998, pp. 219-232, vol. 66, Issue 2.
38. Non-Final Office Action mailed Aug. 6, 2008, for U.S. Appl. No. 11/319,766, filed Dec. 27, 2005.
39. Non-Final Office Action mailed Jul. 13, 2009, for U.S. Appl. No. 11/421,027, filed May 30, 2006.
40. Non-Final Office Action mailed Mar. 10, 2009, for U.S. Appl. No. 11/217,788, filed Aug. 30, 2005.
41. Non-Final Office Action mailed Mar. 31, 2010, for U.S. Appl. No. 11/744,020, filed May 3, 2007.
42. Non-Final Office Action mailed Nov. 25, 2008, for U.S. Appl. No. 11/217,788, filed Aug. 30, 2005.
43. Non-Final Office Action mailed Sep. 11, 2008, for U.S. Appl. No. 11/744,020, filed May 3, 2007.
44. Office Action in co-pending European Application No. 06 776 529.7-2202, entitled "Communication Pursuant to Article 94(3) EPC", dated Sep. 30, 2008, 3 pages.
45. Owens, James, "Method for depth of field (DOF) adjustment using a combination of object segmentation and pixel binning", Research Disclosure, 2004, vol. 478, No. 97, Mason Publications.
46. Pavlidis et al., "A Multi-Segment Residual Image Compression Technique, http://citeseer.ist.psu.edu/554555.html".
47. PCT International Preliminary Report on Patentability, for PCT Application No. PCT/EP2006/007573, dated Jul. 1, 2008, 9 pages.
48. PCT International Preliminary Report on Patentability, for PCT Application No. PCT/EP2006/008229, dated Aug. 19, 2008, 15 pages.
49. PCT International Preliminary Report on Patentability, for PCT Application No. PCT/US2007/061956, dated Oct. 27, 2008, 3 pages.
50. PCT International Preliminary Report on Patentability, for PCT Application No. PCT/US2007/068190, dated Nov. 4, 2008, 8 pages.
51. PCT International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT application No. PCT/US2007/068190, dated Sep. 29, 2008, 10 pages.
52. PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration (PCT/EP2006/007573), dated Nov. 27, 2006.
53. PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT application No. PCT/EP2006/008229, dated Jan. 14, 2008, 18 pages.
54. PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT application No. PCT/US2007/061956, dated Mar. 14, 2008, 9 pages.
55. PCT Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for PCT Application No. PCT/EP2006/005109, Oct. 4, 2006, 14 pages.
56. Petschnigg, G. et al., "Digital Photography with Flash and No Flash Image Pairs", The Institution of Electrical Engineers, 2004, pp. 664-672.
57. Potmesil, Michael et al., "A lens and aperture camera model for synthetic image generation, ISBN:0-89791-045-1, http://portal.acm.org/citation.cfm?id=806818&coll=GUIDE&dl=GUIDE&CFID=680-9268&CFTOKEN=82843222", International Conference on Computer Graphics and Interactive Techniques, Proceedings of the 8th annual conference on Computer graphics and interactive techniques, 1981, pp. 297-305, ACM Press.
58. Rajagopalan, A.N. et al., "Optimal recovery of depth from defocused images using an MRF model, http://citeseer.ist.psu.edu/rajagopalan98optimal.html", In Proc. International Conference on Computer Vision, 1998, pp. 1047-1052.
59. Reinhard, E. et al., "Depth-of-field-based alpha-matte extraction, http://doi.acm.org/10.1145/1080402.1080419", In Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization, 2005, pp. 95-102, vol. 95.
60. Sa, A. et al., "Range-Enhanced Active Foreground Extraction, XP010851333", Image Processing, IEEE International Conference, 2005, pp. 81-84.
61. Saito, T. et al., "Separation of irradiance and reflectance from observed color images by logarithmical nonlinear diffusion process, Ei Compendex®. Dialog® File No. 278 Accession No. 10968692", Proceedings of Society for Optical Engineering Computational Imaging IV, Electronic Imaging, 2006, vol. 6065.
62. Schechner, Y.Y. et al., "Separation of transparent layers using focus, http://citeseer.ist.psu.edu/article/schechner98separation.html", Proc. ICCV, 1998, pp. 1061-1066.
63. Serrano, N. et al., "A computationally efficient approach to indoor/outdoor scene classification, XP010613491, ISBN: 978-0-7695-1695-0", Pattern Recognition, 2002 Proceedings, 16th International Conference, IEEE Comput. Soc, 2002, pp. 146-149, vol. 4.
64. Simard, Patrice Y. et al., "A foreground/background separation algorithm for image compression, Ei Compendex®. Dialog® File No. 278 Accession No. 9897343", Data Compression Conference Proceedings, 2004.
65. Subbarao, M. et al., "Depth from Defocus: A Spatial Domain Approach, Technical Report No. 9212.03, http://citeseer.ist.psu.edu/subbarao94depth.html", Computer Vision Laboratory, SUNY.
66. Subbarao, Murali et al., "Noise Sensitivity Analysis of Depth-from-Defocus by a Spatial-Domain Approach, http://citeseer.ist.psu.edu/subbarao97noise.html".
67. Sun, J. et al., "Flash Matting", ACM Transactions on Graphics, 2006, pp. 772-778, vol. 25, Issue 3.
68. Szummer, M. et al., "Indoor-outdoor image classification", Content-Based Access of Image and Video Database, Proceedings, IEEE International Workshop, IEEE Comput. Soc, 1998, pp. 42-51.
69. Television Asia, "Virtual sets and chromakey update: superimposing a foreground captured by one camera onto a background from another dates back to film days, but has come a long way since", Television Asia, vol. 13, No. 9, p. 26, Nov. 2006, Business & Industry®. Dialog® File No. 9 Accession No. 4123327.
70. Tzovaras, D. et al., "Three-dimensional camera motion estimation and foreground/background separation for stereoscopic image sequences, INSPEC. Dialog® File No. 2 Accession No. 6556637", Optical Engineering, 1997, pp. 574-579, vol. 36, Issue 2.
71. U.S. Appl. No. 10/772,767, filed Feb. 4, 2004, by inventors Michael J. DeLuca, et al.
72. Utpal, G. et al., "On foreground-background separation in low quality document images, INSPEC. Dialog® File No. 2 Accession No. 9927003", International Journal on Document Analysis and Recognition, 2006, pp. 47-63, vol. 8, Issue 1.
73. Watanabe, Masahiro et al., "Rational Filters for Passive Depth from Defocus", 1995.
74. Yu, Jingyi et al., "Real-time reflection mapping with parallax, http://portal.acm.org/citation.cfm?id=1053449&coll=Portal&dl=ACM&CFID=17220933&CFTOKEN=89149269", Symposium on Interactive 3D Graphics, Proceedings of the 2005 symposium on Interactive 3D graphics and games, 2005, pp. 133-138, ACM Press.
75. Ziou, D. et al., "Depth from Defocus Estimation in Spatial Domain, http://citeseer.ist.psu.edu/ziou99depth.html", CVIU, 2001, pp. 143-165, vol. 81, Issue 2.
Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US8175385 * | Mar 4, 2011 | May 8, 2012 | DigitalOptics Corporation Europe Limited | Foreground/background segmentation in digital images with differential exposure calculations
US8587665 | Feb 15, 2011 | Nov 19, 2013 | DigitalOptics Corporation Europe Limited | Fast rotation estimation of objects in sequences of acquired digital images
US8587666 | Feb 15, 2011 | Nov 19, 2013 | DigitalOptics Corporation Europe Limited | Object detection from image profiles within sequences of acquired digital images
US8705894 | Feb 15, 2011 | Apr 22, 2014 | Digital Optics Corporation Europe Limited | Image rotation from local motion estimates
US20110044544 * | Oct 29, 2010 | Feb 24, 2011 | PixArt Imaging Incorporation, R.O.C. | Method and system for recognizing objects in an image based on characteristics of the objects
US20110157408 * | Mar 4, 2011 | Jun 30, 2011 | Tessera Technologies Ireland Limited | Foreground/Background Segmentation in Digital Images with Differential Exposure Calculations
US20130050395 * | Aug 29, 2011 | Feb 28, 2013 | DigitalOptics Corporation Europe Limited | Rich Mobile Video Conferencing Solution for No Light, Low Light and Uneven Light Conditions
EP2515526A2 | Apr 6, 2012 | Oct 24, 2012 | DigitalOptics Corporation Europe Limited | Display device with image capture and analysis module
Classifications
U.S. Classification: 382/173
International Classification: G06T7/00, H04N1/62, G06K9/34, G06K9/00
Cooperative Classification: G06K9/0061, H04N1/624, G06T7/0081, G06T2207/10152, G06K9/00664, G06K9/38, G06K9/00791, G06T5/005, G06T2207/10024, G06T2207/20224, G06T7/0097, G06K9/00248, G06T7/40, G06T7/0002, G06T2207/30216, G06T2207/20144
European Classification: G06T5/00D, G06K9/00S2, G06K9/00F1L, G06T7/40, G06K9/00V6, G06T7/00B, G06K9/38, G06T7/00S1, H04N1/62C, G06K9/00V2, G06T7/00S9
Legal Events
Date | Code | Event | Description
Aug 27, 2014 | FPAY | Fee payment | Year of fee payment: 4
Dec 5, 2011 | AS | Assignment | Owner name: DIGITALOPTICS CORPORATION EUROPE LIMITED, IRELAND; Free format text: CHANGE OF NAME;ASSIGNOR:TESSERA TECHNOLOGIES IRELAND LIMITED;REEL/FRAME:027326/0593; Effective date: 20110713
Jan 20, 2011 | AS | Assignment | Owner name: TESSERA TECHNOLOGIES IRELAND LIMITED, IRELAND; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FOTONATION VISION LIMITED;REEL/FRAME:025669/0455; Effective date: 20101001
Sep 14, 2010 | AS | Assignment | Owner name: FOTONATION VISION LTD., IRELAND; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:STEINBERG, ERAN;PRILUTSKY, YURY;CORCORAN, PETER;AND OTHERS;SIGNING DATES FROM 20050907 TO 20050922;REEL/FRAME:024979/0355