Publication number: US 20030002870 A1
Publication type: Application
Application number: US 09/894,380
Publication date: Jan 2, 2003
Filing date: Jun 27, 2001
Priority date: Jun 27, 2001
Inventors: John Baron
Original Assignee: Baron John M.
System for and method of auto focus indications
US 20030002870 A1
Abstract
The present invention includes a system for and method of highlighting, to the photographer, the portions of the image displayed in a camera which are in focus. The highlighted portion includes all focused portions of objects within the depth of field.
Claims (28)
What is claimed is:
1. A method of automatically highlighting focused objects within a preview window comprising the steps of:
receiving a digital representation of an image;
determining a near focus distance;
identifying near portions of objects within said image at said near focus distance;
determining a far focus distance;
identifying far portions of objects within said image at said far focus distance; and
highlighting said near portions and said far portions of said objects within said image.
2. The method of claim 1 further including the step of:
displaying a digital image including said highlighted near and far portions.
3. The method of claim 2 further comprising the step of:
performing said steps of receiving, determining a near focus distance, identifying near portions, determining a far focus distance, identifying far portions, highlighting and displaying within a digital camera.
4. The method of claim 1 further comprising the steps of:
determining focused portions of objects between said near portions and said far portions; and
highlighting said focused portions.
5. The method of claim 4 further including the step of:
displaying said highlighted focused portions on said digital image.
6. A camera comprising:
an image sensor responsive to a light image projected onto said image sensor for providing image data;
an adjustable focus lens configured to project said light image onto said image sensor;
a controller configured to adjust a focus of said adjustable focus lens and receive said image data from said image sensor, said controller further configured to distinguish portions of said image data that represent focused portions of said light image from portions that are not in focus; and
a display configured to display said image data together with highlighting distinguishing said portions of said image data that represent said focused portions of said light image from said portions that are not in focus.
7. The camera according to claim 6 further comprising a memory storing a contrast evaluation procedure executable by said controller for distinguishing said portions of said image data that represent said focused portions of said light image from said portions that are not in focus.
8. The camera according to claim 6 wherein said image sensor comprises a two-dimensional array of light detectors.
9. The camera according to claim 6 wherein said adjustable focus lens includes a focusing motor connected to adjust a configuration of optical elements of said adjustable focus lens in response to a control signal from said controller.
10. The camera according to claim 6 wherein said controller is configured to determine contrast values of said light image.
11. The camera according to claim 6 wherein said controller is further configured to process said image data for storage in a memory.
12. The camera according to claim 6 wherein said controller implements a lossy compression algorithm on said image data to form compressed image data and stores said compressed image data in a memory.
13. The method of claim 1 further comprising the step of:
disabling said highlighting of said near and said far portions.
14. The method of claim 2 further comprising the steps of:
compressing said digital image to provide compressed image data; and
storing said compressed image data in a memory.
15. The method of claim 1 wherein said identifying said near and said far portions is performed from identified edges of objects contained within the digital representation of an image.
16. The method of claim 1 wherein said highlighting comprises blinking said near and far portions of said image in focus.
17. A focus highlighting system comprising:
a processor for highlighting focused portions of an image;
an autofocus mechanism configured to determine portions of an image which are in focus;
a display configured to display a digital image including highlighting; and
a memory configured to store a digital representation of said image.
18. The focus highlighting system of claim 17 wherein:
said autofocus mechanism calculates a near focus distance and determines near portions of objects using said near focus distance.
19. The focus highlighting system of claim 18 wherein:
said autofocus mechanism calculates a far focus distance and determines far portions of objects using said far focus distance.
20. The focus highlighting system of claim 19 wherein:
said portions of said image include said near portions and said far portions.
21. The focus highlighting system of claim 17 wherein said highlighting includes blinking.
22. The focus highlighting system of claim 17 further including:
a disable feature which disables highlighting when selected by a user.
23. A camera comprising:
an image sensor;
an image processor configured to determine portions of objects which appear in focus and to highlight said portions; and
a memory configured to store an image captured by said image sensor.
24. The camera according to claim 23 further comprising:
a display connected to display an image captured by said image sensor including said highlighting.
25. The camera according to claim 23 further comprising:
an image compressor configured to perform compression of said image captured by said image sensor.
26. The camera according to claim 25 wherein said image compressor implements a lossy image compression algorithm.
27. The camera according to claim 23 further comprising a housing containing said image sensor, display, image processor and memory.
28. The camera according to claim 23 wherein said objects which appear in focus include objects at different distances from said camera.
Description
BACKGROUND

[0001] Cameras, and other image capturing devices, have been used by individuals to record visual images for many years. Earlier cameras used a light-sensitized emulsion coated on a plate or film onto which a latent image was captured and, once captured and developed, used to create visual images which portrayed the original photographed scene. More recently, digital cameras have become available, with their popularity increasing over the last several years. Digital cameras typically record captured images as bitmap images in a storage device such as a 3½ inch magnetic disk or similar storage media. These stored images may be processed or modified by a computer user and may be printed out and used accordingly. While the original digital cameras included only basic functionality, today's digital cameras include numerous features, in some instances features which cannot be implemented with conventional film-based cameras. For instance, storage techniques have evolved such that digital cameras may store hundreds of low resolution images. Additionally, digital camera users may select the resolution desired for images being captured and stored, choosing among low, medium, or high resolution modes. Because the amount of memory dedicated to storing an image increases with its resolution, appropriate selection of picture resolution allows faster image capture when only low resolution is required, with a corresponding reduction in image processing and storage requirements. Digital photography also allows modifications of captured digital images heretofore unavailable in conventional film photography.

[0002] Some types of cameras, both digital and conventional film cameras, include built-in automatic focusing. Simple cameras sometimes use Infra Red (IR) detectors to determine range to a subject; others use sonic transducers to provide distance information. In contrast, single lens reflex (SLR) cameras typically include autofocusing systems which may be classified as contrast measurement or phase matching systems. While phase matching systems are typically used in conventional film cameras, contrast measurement is the preferred method for digital cameras. Thus, most digital cameras achieve a focused image by maximizing the contrast between objects within an image. An object in exact focus is one which is at the precise focusing distance; whether an object appears in focus depends on image format (e.g., aspect ratio), lens focal length, aperture size, focus distance, and the tolerable circle of confusion for the final image.

[0003] The depth of field is an indication of the range of distances from the camera within which objects will appear in focus at a given time. Typically, one-third of the depth of field lies in front of the subject at the precise focusing distance and two-thirds of the depth of field lies behind the subject. One of ordinary skill in the art would appreciate that ultimate print size also affects the depth of field. Typically, for conventional photography, an 8×10 inch print viewed at a distance of 24 inches is used to determine acceptable depth of field guide marks on lenses. The depth of field is related to the circle of confusion, which indicates how large the blur circle of an object not at the exact focus distance may become without appearing distorted (e.g., “fuzzy”) to the human eye.
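
The depth of field described above can be computed with the standard thin-lens approximations. The following sketch is illustrative only and is not taken from the patent; the parameter values are assumptions:

```python
def depth_of_field(f, N, c, s):
    """Return the (near, far) limits of acceptable sharpness, in the same
    units as s, for focal length f, f-number N, circle of confusion c,
    and subject distance s. Standard thin-lens approximations."""
    H = f * f / (N * c) + f                  # hyperfocal distance
    near = H * s / (H + (s - f))             # near limit of depth of field
    far = H * s / (H - (s - f)) if s - f < H else float("inf")
    return near, far

# Example: a 50 mm lens at f/8 with a 0.03 mm circle of confusion,
# focused on a subject 3 m (3000 mm) away.
near, far = depth_of_field(f=50.0, N=8.0, c=0.03, s=3000.0)
```

For these example values the in-focus zone runs from roughly 2.34 m to 4.18 m: the portion in front of the subject (about 0.66 m) is roughly half the portion behind it (about 1.18 m), consistent with the one-third/two-thirds rule of thumb above.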

[0004] Fully automatic focusing devices incorporated into prior art cameras may be implemented to adjust depth of field and aperture to bring, for example, a large portion of the viewed image into focus. Alternatively, many prior art cameras include spot focusing, which allows the user to identify to the camera a specific portion of the image the photographer desires to be focused. One of ordinary skill in the art understands and appreciates these focusing techniques.

[0005] All of these prior art devices require the user to determine which portion of the image is being focused by the camera.

SUMMARY OF THE INVENTION

[0006] The present invention is directed to a system for and method of indicating to the photographer the specific portion or portions of the image which are in focus. The method of the present invention includes the steps of receiving a digital representation of an image, examining the digital representation to determine the portions of the image which are in focus, and highlighting those focused portions to the photographer. For larger depths of field, far and near focused objects, and objects positioned between the far and near focused objects, may be highlighted, or all focused objects in the digital image may be highlighted.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] FIG. 1 is a flow diagram of a procedure implemented by a system to determine the focused areas of an image;

[0008] FIG. 2 is a flow diagram of a procedure implemented by a system which displays and highlights the focused areas of an image to a user;

[0009] FIG. 3 is a hardware block diagram of a camera which incorporates the present invention; and

[0010] FIGS. 4A-4C contain sample images, as viewed by the photographer through the viewfinder, which illustrate an embodiment of the present invention.

DETAILED DESCRIPTION

[0011] Generally, the present invention relates to a system for and method of unambiguously highlighting to a photographer the portions of an image contained in the viewfinder of a camera that are in focus. By highlighting the portions of the image which are in focus, confusion and uncertainty are eliminated and expected photographic images will result.

[0012] FIG. 1 is a flow diagram of a procedure implemented by a system to determine the focused area of an image. In step 101, a first region of the captured image is selected. In step 102, the first region selected is analyzed to determine the contrast between objects or pixels within the region. As one of ordinary skill in the art appreciates, as the contrast increases so does the focus of the region. In step 103, the contrast of the region calculated in step 102 is compared to a contrast threshold value. If the calculated contrast is greater than the threshold value, the region is considered to be in focus. For regions in which the calculated contrast is greater than the threshold contrast, step 104 marks the region as in focus. For regions in which the calculated contrast is equal to or less than the threshold contrast, step 105 marks the region as out-of-focus. In step 106, a determination is made as to whether additional regions remain to be checked. If additional regions remain, the next region is selected in step 107 and the process flow is returned to step 102. When each region has been checked and each region has been marked as in-focus or out-of-focus, the process is completed.
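
The region-by-region loop of FIG. 1 might be sketched as follows. The max-minus-min contrast metric, region size, and threshold are illustrative assumptions; the patent does not specify them:

```python
import numpy as np

def mark_focused_regions(image, region_size=16, threshold=20.0):
    """Mark each square region of a grayscale image as in-focus (True) or
    out-of-focus (False) by comparing its contrast to a threshold
    (steps 101-107 of FIG. 1, with an assumed contrast metric)."""
    h, w = image.shape
    marks = {}
    for y in range(0, h, region_size):          # steps 101/107: select regions
        for x in range(0, w, region_size):
            region = image[y:y + region_size, x:x + region_size]
            # step 102: a simple contrast measure (max minus min intensity)
            contrast = float(region.max()) - float(region.min())
            # steps 103-105: mark in-focus when contrast exceeds the threshold
            marks[(y, x)] = contrast > threshold
    return marks
```

The returned dictionary of in-focus/out-of-focus marks is the input the highlighting pass of FIG. 2 would consume.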

[0013] FIG. 2 is a flow diagram of a procedure used by a system to highlight the regions or areas of the image which are in focus. In step 201, a first region is selected for analysis. In step 202, the region is checked to determine if it has been marked as in-focus or out-of-focus. If the region is not marked, flow returns to FIG. 1. If the analyzed region is marked as in-focus, step 203 determines the edge of the region. If edges are found in step 204, these edges are highlighted in step 205. Highlighting may include blinking the identified portion of the object, reversing its color scheme, enclosing the focused section within a box, or similar highlighting techniques. If edges are not detected in step 204, or after the detected edges are highlighted in step 205, the procedure continues by determining if additional regions remain to be checked for in-focus markings. If additional regions are available, step 207 selects the next region and the process continues in step 202. If, however, all regions have been checked, the procedure is completed.
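
A minimal sketch of the highlighting pass of FIG. 2, using a gradient-magnitude edge detector and a constant red overlay as the highlight. The detector, the color, and the thresholds are assumptions, not details from the patent:

```python
import numpy as np

def highlight_focused_edges(image, marks, region_size=16, edge_threshold=30.0):
    """Return an RGB copy of a grayscale image with edges inside regions
    marked in-focus painted red (steps 202-207 of FIG. 2, sketched)."""
    rgb = np.stack([image] * 3, axis=-1).astype(np.uint8)
    # Step 203: a simple gradient-magnitude edge detector over the image.
    gy, gx = np.gradient(image.astype(float))
    edges = np.hypot(gx, gy) > edge_threshold
    for (y, x), in_focus in marks.items():      # step 202: check each mark
        if in_focus:
            sub = edges[y:y + region_size, x:x + region_size]
            ys, xs = np.nonzero(sub)
            rgb[y + ys, x + xs] = (255, 0, 0)   # step 205: highlight edges
    return rgb
```

Blinking or enclosing the section in a box, as the paragraph mentions, would replace the constant overlay; `marks` is the in-focus/out-of-focus map produced by the FIG. 1 pass.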

[0014] FIG. 3 is a hardware block diagram of a camera which incorporates the present invention. Processor 301 is electrically connected to User Input 302, Image Sensor 303, Focus Motor 304, memory 305 and Viewfinder Display 306.

[0015] User Input 302 ensures that input from the user, such as turning the highlighting feature on or off, is accepted by the system. Image Sensor 303 converts the light image into a suitable signal and/or image data for analysis. Once the image data is available to processor 301, processor 301 may determine areas of the image which are within the near focus range, the far focus range, and everything which is in focus between the near focus and the far focus. Focus motor 304 works with processor 301 to present a focused image to the user. Once processor 301 determines which portions of the image are in focus using the procedure of FIG. 1 and highlights the appropriate portions using the procedure of FIG. 2, the image, including highlighted portions, is presented on Viewfinder Display 306. When selected by the user, for instance by depressing the shutter, the captured image is recorded, without the associated highlighting, in memory 305.

[0016] Note that contrast measurements may be taken during focusing so as to distinguish near focus objects from far focus objects. For example, as focus motor 304 adjusts the lens system from an infinity focus towards a near focus configuration, objects in the far field will increase in contrast until precisely focused, and then decrease in contrast as the lens system continues to be adjusted. The system keeps track of when each object reaches maximum contrast to determine a range to the object based on the focus setting of the lens system. Further, the lens system may be caused to pass through the preferred focus so as to allow mapping of all objects and/or portions of the image, i.e., determining the range to each object based on when the object achieves maximum contrast. Because distinct objects will tend to be at varying distances from the camera, this technique can also be used to identify the bounds, outline, and extent of image areas representing individual objects. This technique may be used in lieu of, or in addition to, the edge recognition previously described.
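
The sweep described above amounts to recording, for every region of the image, the lens setting at which its contrast peaks. A hedged sketch with invented data structures (the patent does not specify any):

```python
def map_region_distances(frames):
    """frames is a list of (focus_distance, contrast_map) pairs captured as
    the lens sweeps from infinity toward near focus, where contrast_map
    maps a region identifier to its measured contrast at that setting.
    Returns each region's estimated range: the focus distance at which
    its contrast peaked."""
    best = {}   # region -> (peak contrast seen so far, distance at that peak)
    for distance, contrast_map in frames:
        for region, contrast in contrast_map.items():
            if region not in best or contrast > best[region][0]:
                best[region] = (contrast, distance)
    return {region: dist for region, (_, dist) in best.items()}

# Example sweep: a far object peaks at 10 m, a near object at 2 m.
frames = [
    (float("inf"), {"tree": 40, "face": 10}),
    (10.0, {"tree": 90, "face": 30}),
    (2.0, {"tree": 50, "face": 80}),
]
ranges = map_region_distances(frames)
```

Regions whose contrast peaks at nearby sweep positions can then be grouped to estimate the bounds of individual objects, as the paragraph suggests.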

[0017] FIG. 4A shows a representative image as it would be displayed on the viewfinder without the benefit of the present invention. While a photographer presented with this image can clearly tell which portion of the image is in focus, one of ordinary skill in the art would understand that other images contain objects which the average photographer cannot be assured are in focus. FIG. 4B shows the image of FIG. 4A after the in-focus section has been highlighted by the present invention, in this case by outlining. Additional contrast can be obtained by “greying out” or de-emphasizing the portions of the image which are not in focus, as determined by contrast measurement, as shown in FIG. 4C.

[0018] The present invention can also be applied to a manual focusing camera which lacks a focus motor in the lens. For implementation in a manual focusing camera, software is included which enables the processor to interface with an encoder included within the lens and determine where in the focus travel the lens is currently positioned. This information is used to determine the portions of the view which are in focus, and these portions are highlighted.
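
One way the processor might turn encoder readings into focus distances is interpolation over a per-lens calibration table. Everything below, including the table, the counts, and the distances, is a hypothetical illustration; the patent specifies no such format:

```python
import bisect

# Hypothetical (encoder_count, focus_distance_mm) calibration pairs for a
# manually focused lens, sorted by count; float("inf") means infinity focus.
CALIBRATION = [(0, 300.0), (512, 1000.0), (768, 3000.0), (1023, float("inf"))]

def encoder_to_distance(count):
    """Linearly interpolate the current focus distance from an encoder count."""
    counts = [c for c, _ in CALIBRATION]
    i = max(0, bisect.bisect_right(counts, count) - 1)
    # Return a calibration point directly when we land on or beyond one.
    if i >= len(CALIBRATION) - 1 or count == counts[i]:
        return CALIBRATION[i][1]
    (c0, d0), (c1, d1) = CALIBRATION[i], CALIBRATION[i + 1]
    return d0 + (d1 - d0) * (count - c0) / (c1 - c0)
```

The resulting distance would then be compared against the depth of field limits to decide which portions of the view to highlight.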

Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US6812969* | Jan 31, 2003 | Nov 2, 2004 | Minolta Co., Ltd. | Digital camera
US6922527* | Dec 2, 2003 | Jul 26, 2005 | Fuji Photo Film Co., Ltd. | Image display apparatus and print system
US7076119* | Sep 23, 2002 | Jul 11, 2006 | Fuji Photo Film Co., Ltd. | Method, apparatus, and program for image processing
US7248301* | May 2, 2003 | Jul 24, 2007 | Hewlett-Packard Development Company, L.P. | System and method for providing camera focus feedback
US7978247* | Aug 13, 2007 | Jul 12, 2011 | Seiko Epson Corporation | Focusing information visualization device, and corresponding method, program and recording medium
US8212916* | Sep 27, 2010 | Jul 3, 2012 | Canon Kabushiki Kaisha | Image display device, image pickup apparatus, and image display method that allow focus assistant display
US20110096220* | Sep 27, 2010 | Apr 28, 2011 | Canon Kabushiki Kaisha | Image display device, image pickup apparatus, and image display method that allow focus assistant display
US20120075495* | Sep 23, 2011 | Mar 29, 2012 | Sanyo Electric Co., Ltd. | Electronic camera
EP2430827A1* | Apr 20, 2010 | Mar 21, 2012 | Canon Kabushiki Kaisha | Image pickup apparatus
WO2008041158A2* | Sep 26, 2006 | Apr 10, 2008 | Nokia Corp | Emphasizing image portions in an image
WO2012092246A2* | Dec 27, 2010 | Jul 5, 2012 | 3Dmedia Corporation | Methods, systems, and computer-readable storage media for identifying a rough depth map in a scene and for determining a stereo-base distance for three-dimensional (3d) content creation
WO2013123983A1* | Feb 22, 2012 | Aug 29, 2013 | Sony Ericsson Mobile Communications AB | Method and device relating to image content
Classifications
U.S. Classification: 396/147, 348/E05.047, 348/E05.045
International Classification: G03B3/00, G02B7/36, H04N5/232, H04N101/00, G02B7/28
Cooperative Classification: H04N5/23293, H04N5/23212
European Classification: H04N5/232F, H04N5/232V
Legal Events
Date | Code | Event | Description
Sep 30, 2003 | AS | Assignment
Owner name: HEWLETT-PACKARD DEVELOPMENT COMPANY L.P., TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HEWLETT-PACKARD COMPANY;REEL/FRAME:014061/0492
Effective date: 20030926
Jan 18, 2002 | AS | Assignment
Owner name: HEWLETT-PACKARD COMPANY, COLORADO
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BARON, JOHN M.;REEL/FRAME:012535/0189
Effective date: 20010914