
Publication number: US 20010012062 A1
Publication type: Application
Application number: US 09/121,760
Publication date: Aug 9, 2001
Filing date: Jul 23, 1998
Priority date: Jul 23, 1998
Also published as: US6914625, US7567276, US20050231611, WO2000005875A1
Inventors: Eric C. Anderson
Original Assignee: Eric C. Anderson
System and method for automatic analysis and categorization of images in an electronic imaging device
US 20010012062 A1
Abstract
A system and method for the automatic analysis and categorization of images in an electronic imaging device comprises one or more analysis modules that examine captured image files for selected criteria. The analysis modules then responsively generate and store appropriate category tags with the image file to enable the imaging device to subsequently access the stored category tags and automatically access desired categories of captured images.
Images (10)
Claims (42)
What is claimed is:
1. A system for categorizing images, comprising:
an analysis module configured to analyze said images;
a processor coupled to said system for controlling said analysis module; and
category tags attached by said analysis module to each of said images, thereby enabling said processor to sort said images into different categories.
2. The system of claim 1 wherein said processor automatically sorts said images into said different categories.
3. The system of claim 1 wherein said images are captured by an electronic imaging device.
4. The system of claim 3 wherein said electronic imaging device is a digital camera.
5. The system of claim 1 wherein said analysis module includes one or more analysis algorithms for identifying said different categories.
6. The system of claim 5 wherein said analysis module includes combination logic for combining analysis results from said analysis algorithms.
7. The system of claim 1 wherein said analysis module includes parametric controls for controlling said analysis module.
8. The system of claim 1 wherein said analysis module is selectively loaded into a volatile memory from a removable memory.
9. The system of claim 1 further comprising a plurality of analysis modules.
10. The system of claim 1 wherein said images each are stored as image data contained in individual image files.
11. The system of claim 10 wherein said category tags are stored with said image data in said individual image files.
12. The system of claim 1 further comprising an image processing backplane communicating with image processing modules.
13. The system of claim 12 further comprising one or more insertion points between said image processing modules for inserting said analysis module to analyze said images.
14. The system of claim 13 wherein a selectable plurality of analysis modules are inserted into said one or more insertion points.
15. The system of claim 13 further comprising an RGB insertion point and a YCC insertion point.
16. The system of claim 1 wherein said analysis module is configured to recognize and label said images that match predetermined criteria.
17. The system of claim 1 wherein said analysis module is configured to access and categorize said images after said images have been processed and stored into a storage device.
18. The system of claim 1 wherein said processor sorts said images by accessing and analyzing said category tags attached to each of said images.
19. The system of claim 1 wherein said different categories include human images and nature images.
20. The system of claim 1 wherein said different categories include city images and water images.
21. A method for automatically categorizing images, comprising the steps of:
analyzing said images with an analysis module;
controlling said analysis module with a processor coupled to said system; and
attaching category tags to each of said images with said analysis module, thereby enabling said processor to sort said images into different categories.
22. The method of claim 21 wherein said processor automatically sorts said images into said different categories.
23. The method of claim 21 wherein said images are captured by an electronic imaging device.
24. The method of claim 23 wherein said electronic imaging device is a digital camera.
25. The method of claim 21 wherein said analysis module includes one or more analysis algorithms for identifying said different categories.
26. The method of claim 25 wherein said analysis module includes combination logic for combining analysis results from said analysis algorithms.
27. The method of claim 21 wherein said analysis module includes parametric controls for controlling said analysis module.
28. The method of claim 21 wherein said analysis module is selectively loaded into a volatile memory from a flash disk.
29. The method of claim 21 further comprising a plurality of analysis modules.
30. The method of claim 21 wherein said images each are stored as image data contained in individual image files.
31. The method of claim 30 wherein said category tags are stored with said image data in said individual image files.
32. The method of claim 21 further comprising an image processing backplane communicating with image processing modules.
33. The method of claim 32 further comprising one or more insertion points between said image processing modules for inserting said analysis module to analyze said images.
34. The method of claim 33 wherein a selectable plurality of analysis modules are inserted into said one or more insertion points.
35. The method of claim 33 further comprising an RGB insertion point and a YCC insertion point.
36. The method of claim 21 wherein said analysis module is configured to initially recognize and label said images that match predetermined criteria immediately upon capture of said images.
37. The method of claim 21 wherein said analysis module is configured to access and categorize said images after said images have been processed and stored into a storage device.
38. The method of claim 21 wherein said processor sorts said images by accessing and analyzing said category tags attached to each of said images.
39. The method of claim 21 wherein said different categories include human images and nature images.
40. The method of claim 21 wherein said different categories include city images and water images.
41. A system for automatically categorizing images, comprising:
means for analyzing said images;
means for controlling said means for analyzing; and
means for attaching category tags to each of said images, thereby enabling said means for controlling to sort said images into different categories.
42. A computer-readable medium comprising program instructions for automatically categorizing images by performing the steps of:
analyzing said images with an analysis module;
controlling said analysis module with a processor coupled to said system; and
attaching category tags to each of said images with said analysis module, thereby enabling said processor to sort said images into different categories.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application is related to co-pending U.S. patent application Ser. No. 08/735,705, entitled “System And Method For Correlating Processing Data And Image Data Within A Digital Camera Device,” filed on Oct. 23, 1996, which is hereby incorporated by reference. The related applications are commonly assigned.

BACKGROUND OF THE INVENTION

[0002] 1. Field of the Invention

[0003] This invention relates generally to electronic data processing, and relates more particularly to a system and method for the automatic analysis and categorization of images in an electronic imaging device.

[0004] 2. Description of the Background Art

[0005] The efficient manipulation of captured image data is a significant consideration for designers, manufacturers, and users of electronic imaging devices. Contemporary imaging devices such as digital cameras effectively enable users to capture images, assemble or edit the captured images, exchange the captured images electronically, or print a hard copy of the captured images.

[0006] As a camera user captures a number of digital images, it typically becomes necessary to sort and categorize the digital images. In some systems, a camera user must resort to the cumbersome and time-consuming task of individually viewing each captured image, identifying various groupings of image categories, and somehow manually tagging each image to specify the particular image category. For example, in Parulski, U.S. Pat. No. 5,633,678, a camera user manually selects a category for a group of images prior to the capture of the images. The camera user must select a new category for each new group of images. Such a manual categorization system is awkward to use and, therefore, does not provide as efficient an imaging device as a camera that features an automatic categorization system.

[0007] In other systems, software programs are available to permit the user to create thumbnails (smaller renditions of the captured image) and to place the thumbnails, with references to the original images, into various libraries or category systems. This process may also become very time consuming, especially as the number of captured images or the variety of category types increases.

[0008] From the preceding discussion, it becomes apparent that an electronic imaging system that manually analyzes and categorizes any significant number of captured images does not achieve an acceptable degree of efficiency. Therefore, an electronic imaging device that automatically analyzes captured images, and then responsively categorizes the analyzed images into one or more selected image groupings, would clearly provide a significant improvement in efficient functionality for various contemporary electronic imaging technologies.

[0009] For all the foregoing reasons, an improved system and method are needed for the automatic analysis and categorization of images in an electronic imaging device.

SUMMARY OF THE INVENTION

[0010] The present invention comprises a system and method for the automatic analysis and categorization of images in an electronic imaging device, such as a digital camera. In the preferred embodiment, a digital camera captures a selected image as CCD raw data, stores the raw data as image data into an individual image file, and then propagates the image file through the digital camera for processing and formatting of the image data.

[0011] In the preferred embodiment, the image data is initially converted into an RGB format, and then, selected analysis modules may connect through an RGB insertion point to advantageously analyze the image data at an RGB transition point, in accordance with the present invention. Once a particular analysis module analyzes the final line of the image data, then that analysis module preferably generates any appropriate category tags and stores the generated category tags into a blank category tag location in the image file. The digital camera may then subsequently access the stored category tags to automatically categorize and utilize the individual stored images (which each correspond to a separate image file).

[0012] Next, another image processing module preferably performs gamma correction and color space conversion on the image data. The image processing module also preferably converts the color space format of the image data. In the preferred embodiment, the image data is converted into YCC 444 format.

[0013] After the image data is converted into YCC 444 format, selected analysis modules may be plugged into a YCC insertion point to analyze the image data at a YCC transition point, in accordance with the present invention. As discussed above, once a particular analysis module analyzes the final line of the image data, then that analysis module preferably generates any appropriate category tags and stores the generated category tags into a blank category tag location in the image file for subsequent use by the camera to automatically categorize captured images. In other embodiments of the present invention, analysis modules may readily analyze image data at any other time or insertion point within the camera.

[0014] Next, an image processing module preferably performs a sharpening procedure on the image data, and also may perform a variety of other processing options. Then, an image processing module preferably decimates the image data, and the image data is compressed into a final image format (preferably JPEG). Next, a file formatter preferably formats the compressed image file, and the resulting image file is finally saved into a removable memory device.

[0015] The image file thus includes any appropriate category tags, and the camera may then subsequently utilize the category tags to automatically access selected images, in accordance with the present invention. The present invention therefore provides an efficient system and method for the automatic analysis and categorization of captured images in an electronic imaging device.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] FIG. 1 is a block diagram of one embodiment for a digital camera, according to the present invention;

[0017] FIG. 2 is a block diagram of one embodiment for the imaging device of FIG. 1, according to the present invention;

[0018] FIG. 3 is a block diagram of one embodiment for the camera computer of FIG. 1;

[0019] FIG. 4 is a rear elevation view of one embodiment for the FIG. 1 digital camera;

[0020] FIG. 5 is a diagram of one embodiment for the non-volatile memory of FIG. 3, according to the present invention;

[0021] FIG. 6 is a diagram of one embodiment for the dynamic random-access memory of FIG. 3, according to the present invention;

[0022] FIG. 7 is a diagram of one embodiment for a single analysis module of FIG. 6, according to the present invention;

[0023] FIG. 8 is a diagram of one embodiment for an image file, in accordance with the present invention;

[0024] FIG. 9 is a diagram of one embodiment for the image tags of FIG. 8; and

[0025] FIG. 10 is a flowchart for one embodiment of method steps to automatically analyze and categorize images, according to the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

[0026] The present invention relates to an improvement in digital imaging devices, including digital cameras. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Although the present invention will be described in the context of a digital camera, various modifications to the preferred embodiment will be readily apparent to those skilled in the art, and the generic principles herein may be applied to various other embodiments. That is, any imaging device that captures image data could incorporate the features described hereinbelow, and that device would be within the spirit and scope of the present invention. Thus, the present invention is not intended to be limited to the embodiment shown, but is to be accorded the widest scope consistent with the principles and features described herein.

[0027] The present invention comprises one or more analysis modules that examine captured image files for selected criteria. The analysis modules then responsively generate and store appropriate category tags along with the image file to advantageously enable the imaging device to subsequently access the stored category tags and thereby automatically access desired categories of captured images.

[0028] Referring now to FIG. 1, a block diagram of one embodiment for a digital camera 110 is shown. Camera 110 preferably comprises an imaging device 114, a system bus 116, and a camera computer 118. Imaging capture device 114 may be optically coupled to an object 112 and electrically coupled via system bus 116 to camera computer 118. Once a user has focused imaging capture device 114 on object 112 and instructed camera 110 to capture an image of object 112, camera computer 118 commands imaging capture device 114 via system bus 116 to capture raw image data representing object 112. The captured raw image data is transferred over system bus 116 to camera computer 118, which performs various image-processing functions on the image data. System bus 116 also passes various status and control signals between imaging capture device 114 and camera computer 118.

[0029] Referring now to FIG. 2, a block diagram of one embodiment for imaging device 114 of FIG. 1 is shown. Imaging device 114 preferably comprises a lens 220 having an iris (not shown), a filter 222, an image sensor 224, a timing generator 226, an analog signal processor (ASP) 228, an analog-to-digital (A/D) converter 230, an interface 232, and one or more motors 234 to adjust focus of lens 220.

[0030] Imaging capture device 114 captures an image of object 112 via reflected light impacting image sensor 224 along optical path 236. Image sensor 224, which is preferably a charge-coupled device (CCD), responsively generates a set of raw image data in CCD format representing the captured image of object 112. The raw image data is then routed through ASP 228, A/D converter 230, and interface 232. Interface 232 has outputs for controlling ASP 228, motors 234, and timing generator 226. From interface 232, the raw image data passes over system bus 116 to camera computer 118.

[0031] Referring now to FIG. 3, a block diagram of one embodiment for camera computer 118 of FIG. 1 is shown. System bus 116 provides communication between imaging capture device 114, electrically-erasable programmable read-only memory (EEPROM) 341, optional power manager 342, central processing unit (CPU) 344, dynamic random-access memory (DRAM) 346, camera input/output (I/O) 348, non-volatile memory 350, and buffers/connector 352. Removable memory 354 connects to system bus 116 via buffers/connector 352. In alternate embodiments, camera 110 may also readily be implemented without removable memory 354 or buffers/connector 352.

[0032] Power manager 342 communicates via line 366 with power supply 356 and coordinates power management operations for camera 110. CPU 344 preferably includes a processor device for controlling the operation of camera 110. In the preferred embodiment, CPU 344 is capable of concurrently running multiple software routines to control the various processes of camera 110 within a multi-threading environment. DRAM 346 is a contiguous block of dynamic memory, which may be selectively allocated to various storage functions. LCD controller 390 accesses DRAM 346 and transfers processed image data to LCD screen 302 for display.

[0033] Camera I/O 348 is an interface device allowing communications to and from camera computer 118. For example, camera I/O 348 permits an external host computer (not shown) to connect to and communicate with camera computer 118. Camera I/O 348 may also interface with a plurality of buttons and/or dials 304, and an optional status LCD 306, which, in addition to LCD screen 302, are the hardware elements of the camera's user interface 308.

[0034] Non-volatile memory 350, which preferably comprises a conventional read-only memory or flash memory, stores a set of computer-readable program instructions to control the operation of camera 110. Removable memory 354 serves as an additional image data storage area and is preferably a non-volatile device, readily removable and replaceable by a camera user via buffers/connector 352. Thus, a user who possesses several removable memories 354 may replace a full removable memory 354 with an empty removable memory 354 to effectively expand the picture-taking capacity of camera 110. In the preferred embodiment of the present invention, removable memory 354 is preferably implemented using a flash disk.

[0035] Power supply 356 provides operating power to the various components of camera 110 via main power bus 362 and secondary power bus 364. The main power bus 362 provides power to imaging capture device 114, camera I/O 348, non-volatile memory 350 and removable memory 354, while secondary power bus 364 provides power to power manager 342, CPU 344 and DRAM 346.

[0036] Power supply 356 is connected to main batteries 358 and also to backup batteries 360. A user of camera 110 may also connect power supply 356 to an optional external power source. During normal operation of power supply 356, main batteries 358 provide operating power to power supply 356, which then provides the operating power to camera 110 via both main power bus 362 and secondary power bus 364. During a power failure mode, where main batteries 358 have failed (i.e., when their output voltage has fallen below a minimum operational voltage level), backup batteries 360 provide operating power to power supply 356, which then provides operating power only to the secondary power bus 364 of camera 110.
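For illustration, the power-routing rule described above may be sketched as follows. The interface and the voltage threshold are hypothetical; the patent specifies only the behavior, not an implementation.

```python
# Sketch of the power-routing rule: during normal operation the main
# batteries feed both power buses; when the main battery voltage falls
# below the minimum operational level, the backup batteries sustain only
# the secondary bus (power manager, CPU, and DRAM). The threshold value
# is an assumption for illustration only.
MIN_OPERATIONAL_VOLTAGE = 4.5  # assumed threshold, volts

def powered_buses(main_voltage):
    if main_voltage >= MIN_OPERATIONAL_VOLTAGE:
        # Normal operation: main batteries power both buses.
        return {"source": "main", "buses": ["main", "secondary"]}
    # Power failure mode: backup batteries power only the secondary bus.
    return {"source": "backup", "buses": ["secondary"]}
```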

[0037] Referring now to FIG. 4, a rear elevation view of one embodiment for camera 110 of FIG. 1 is shown. The FIG. 4 representation depicts hardware components of user interface 308 of camera 110, including LCD screen 302, a four-way navigation control button 409, an overlay button 412, a menu button 414, and a set of programmable soft keys 416.

[0038] User interface 308 includes several operating modes for supporting various camera functions. In the preferred embodiment, operating modes may include capture mode, review mode, play mode, and PC-connect mode. Within capture mode, menu options are available to set-up the categories used during image capture. The user preferably switches between the camera modes by selecting a mode dial (not shown).

[0039] Referring now to FIG. 5, a diagram of one embodiment for the non-volatile memory 350 of FIG. 3 is shown. The FIG. 5 diagram includes control application 500, toolbox 502, drivers 504, kernel 506, and system configuration 508. Control application 500 comprises program instructions for controlling and coordinating the various functions of camera 110. Toolbox 502 contains selected function modules including image processing backplane 510, image processing modules 512, menu and dialog manager 514, and file formatter 516.

[0040] Image processing backplane 510 includes software routines that coordinate the functioning and communication of various image processing modules 512 and handle the data flow between the various modules. Image processing modules 512 preferably include selectable plug-in software routines that manipulate captured image data in a variety of ways, depending on the particular modules selected. Menu and dialog manager 514 includes software routines which provide information for controlling access to camera control menus and camera control menu items for access to features in camera 110. File formatter 516 includes software routines for creating an image file from the processed image data.
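The backplane's coordination role may be sketched minimally as follows. The class and method names are hypothetical; the sketch shows only the general pattern of routing image data through plug-in modules in order.

```python
# Minimal sketch of a backplane coordinating plug-in image processing
# modules: each module is modeled as a callable that accepts image data
# and returns the transformed data, and the backplane handles the data
# flow between the registered modules in sequence.
class Backplane:
    def __init__(self):
        self.modules = []  # plug-in processing modules, in pipeline order

    def register(self, module):
        self.modules.append(module)

    def process(self, image_data):
        # Pass the image data through each module in turn.
        for module in self.modules:
            image_data = module(image_data)
        return image_data
```

For example, registering two stub modules (standing in for white balance and gamma correction) causes `process` to apply them in registration order.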

[0041] Drivers 504 control various hardware devices within camera 110 (for example, motors 234). Kernel 506 provides basic underlying services for the camera 110 operating system. System configuration 508 performs initial start-up routines for camera 110, including the boot routine and initial system diagnostics.

[0042] Now referring to FIG. 6, a diagram of one embodiment for dynamic random-access memory (DRAM) 346 is shown. DRAM 346 includes RAM disk 532, system area 534, analysis modules 540, and working memory 530.

[0043] In the preferred embodiment, RAM disk 532 is a memory area used for storing raw and compressed image data and is organized in a “sectored” format similar to that of conventional hard disk drives. A conventional and standardized file system permits external host computer systems, via I/O 348, to recognize and access the data stored on RAM disk 532. System area 534 stores data regarding system errors (e.g., why a system shutdown occurred) for use by CPU 344 to restart computer 118.

[0044] Working memory 530 includes stacks, data structures and variables used by CPU 344 while executing the software routines used within camera computer 118. Working memory 530 also includes input buffers 538 for initially storing sets of image data received from imaging device 114 for image conversion, and frame buffers 536 for storing data to display on LCD screen 302.

[0045] In accordance with the present invention, analysis modules 540 preferably each include one or more software routines for automatically analyzing and categorizing images. In the FIG. 6 embodiment, analysis modules 540 may be loaded into DRAM 346 from removable memory 354 or another external source. Analysis modules 540 are further discussed below in conjunction with FIGS. 7 through 10.

[0046] Referring now to FIG. 7, a diagram of one embodiment for a single analysis module 540 of FIG. 6 is shown. Analysis module 540 includes text category list 610, combination logic 615, analysis algorithms 630, and parametric control 635.

[0047] Text category list 610 is a listing of the various possible image categories available for a given analysis module 540. Combination logic 615 determines how to resolve the results of the image analysis when multiple analysis algorithms 630 are utilized. Parametric control 635 is used to control settable parameters for analysis module 540. For example, analysis module 540 may be turned on or off, or sensitivity settings for analysis module 540 may be controlled with parametric control 635.
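The relationship between the category list, combination logic, and parametric controls may be sketched as follows. All names are hypothetical, and the combination logic shown (one algorithm per category, thresholded by the sensitivity setting) is one simple choice among many the patent leaves open.

```python
# Hypothetical sketch of an analysis module's structure: a text category
# list, parametric controls (enable flag and sensitivity), and simple
# combination logic that resolves algorithm results into category tags.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ParametricControl:
    enabled: bool = True       # the module may be turned on or off
    sensitivity: float = 0.5   # threshold applied to algorithm scores

@dataclass
class AnalysisModule:
    categories: List[str]                        # text category list
    algorithms: List[Callable[[object], float]]  # each returns a 0..1 score
    control: ParametricControl = field(default_factory=ParametricControl)

    def analyze(self, image) -> List[str]:
        if not self.control.enabled:
            return []
        # Combination logic: report a category when its algorithm's score
        # exceeds the current sensitivity setting.
        tags = []
        for category, algorithm in zip(self.categories, self.algorithms):
            if algorithm(image) > self.control.sensitivity:
                tags.append(category)
        return tags
```

Raising the sensitivity parameter suppresses weaker detections; disabling the module suppresses all of them.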

[0048] Analysis algorithms 630 are a series of software routines ranging from analysis algorithm 1 (620) through analysis algorithm n (625). Analysis algorithms 630 are each designed to allow analysis module 540 to access and analyze images at various stages in the processing chain of camera 110, in order to gather information about the image for later categorization.

[0049] Typically, each analysis algorithm 630 is designed to detect at least one image category. For example, individual analysis algorithms 630 may be designed to detect a person or groups of people based on characteristics like substantial amounts of flesh tones within the image. Individual analysis algorithms 630 may likewise be designed to detect nature scenes from characteristics like substantial green content in the image combined with the relative lack of hard edges. Similarly, categories like city images, water images or indoor images may be detected by characteristic features contained in those images. Once the last line of image data from a given image is processed, analysis module 540 then preferably generates one or more category tags that correspond to the particular image, and the generated category tags are stored as part of the image file. A user of camera 110 may thus readily utilize the category tags to efficiently access and sort images into selected categories.
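An illustrative heuristic in the spirit of the algorithms described above is sketched below. The pixel test and threshold are deliberately simplistic stand-ins; the patent does not specify how the detection is performed.

```python
# Illustrative stand-in for a nature-scene analysis algorithm: classify
# an RGB image as a "nature" candidate when a substantial fraction of its
# pixels are predominantly green. The threshold is a hypothetical value.
def green_fraction(pixels):
    """pixels: list of (r, g, b) tuples, each channel 0-255."""
    if not pixels:
        return 0.0
    green = sum(1 for r, g, b in pixels if g > r and g > b)
    return green / len(pixels)

def detect_nature(pixels, threshold=0.5):
    # Tag the image as a nature scene when more than half of its pixels
    # are predominantly green.
    return green_fraction(pixels) > threshold
```

A real flesh-tone or water detector would work analogously, substituting the appropriate color characteristics (and, per the patent, features such as the relative lack of hard edges).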

[0050] Referring now to FIG. 8, a diagram of one embodiment for an image file 835 is shown, in accordance with the present invention. In the FIG. 8 embodiment, image file 835 includes a header 805, image data 810, a screennail 815, a thumbnail 820, and image tags 825.

[0051] Header 805 preferably includes information that identifies and describes the various contents of image file 835. Image data 810 contains the actual captured image data. Image data 810 exists in whichever format is appropriate for the current location of image file 835 within the image processing chain of camera 110. Screennail 815 and thumbnail 820 are each different versions of image data 810 with varying degrees of reduced resolution for a number of special viewing applications.

[0052] Image tags 825 include various types of information that correspond to the particular captured image data 810. Image tags 825 are further discussed below in conjunction with FIG. 9.

[0053] Referring now to FIG. 9, a diagram of one embodiment for the image tags of FIG. 8 is shown. In the FIG. 9 embodiment, image tags 825 include capture information tags 710, user tags 715, product tags 720, and category tags 735.

[0054] Capture information tags 710 preferably include various types of information that correlate with the captured image data 810 (FIG. 8). For example, capture information tags 710 may indicate focus setting, aperture setting, and other relevant information that may be useful for effectively processing or analyzing the corresponding image data 810. User tags 715 and product tags 720 typically contain various other information that may be needed for use with camera 110.

[0055] Category tags 735 are each preferably generated by analysis modules 540 after analysis modules 540 individually examine image data 810 from image file 835, in accordance with the present invention. Camera 110 may thus advantageously access and utilize category tags 735 to identify one or more categories to which a given set of image data 810 may likely relate. As discussed above in conjunction with FIG. 7, category tags 735 may correspond to a wide variety of possible image categories. In the preferred embodiment, image tags 825 initially contains sixteen empty locations to which various analysis modules 540 may write appropriate category tags 735 for automatically categorizing the corresponding image data 810, in accordance with the present invention.
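The sixteen-slot category tag area described above may be sketched as follows. The in-memory representation and function names are assumptions; the patent states only that the tag area initially contains sixteen empty locations to which analysis modules write.

```python
# Sketch of the category-tag area: sixteen initially empty slots into
# which analysis modules may write category tags for an image file.
NUM_CATEGORY_SLOTS = 16
EMPTY = None

def make_category_tags():
    # The tag area starts with all sixteen locations empty.
    return [EMPTY] * NUM_CATEGORY_SLOTS

def store_category_tag(slots, tag):
    """Write tag into the first blank slot; return True on success."""
    for i, slot in enumerate(slots):
        if slot is EMPTY:
            slots[i] = tag
            return True
    return False  # all sixteen slots already used
```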

[0056] Referring now to FIG. 10, a flowchart is shown for one embodiment of method steps to automatically analyze and categorize images, according to the present invention. FIG. 10 also details the operation of a series of plug-in image processing modules 512 for processing and formatting image data 810. However, in other embodiments of camera 110, various other modules may readily be substituted for or added to those modules discussed below in conjunction with the FIG. 10 embodiment.

[0057] Initially, in step 910, camera 110 preferably captures a selected image as CCD raw data, stores the raw data as image data 810 into image file 835, and then propagates image file 835 through camera 110 for processing and formatting of the image data 810. In step 920, an image processing module 512 preferably replaces any defective pixels in image data 810, and also performs white balance and color correction on image data 810.

[0059] Next, in step 925, another image processing module 512 preferably performs interpolation (edge enhancement) on image data 810, and then converts image data 810 into an intermediate format. In the preferred embodiment, step 925 converts image data 810 into an RGB (Red, Green, Blue) format.

[0059] In the FIG. 10 embodiment, following step 925, selected analysis modules 540 may be plugged into an RGB insertion point 940 to advantageously analyze image data 810 at RGB transition point 930, in accordance with the present invention. One, some, or all of the analysis modules 540 may analyze image data 810 at RGB transition point 930. Preferably, analysis modules 540 are selected for optimal compatibility and effectiveness with the current format of image data 810 at RGB transition point 930. Once a particular analysis module 540 analyzes the final line of image data 810, then that analysis module 540 preferably generates any appropriate category tags 735 and stores the generated category tags 735 into a blank category tag location in image file 835. Then, camera 110 may subsequently access the stored category tags 735 to automatically categorize and utilize the individual stored images (which each correspond to a separate image file 835).
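The insertion-point behavior described above, where each plugged-in module observes the image line by line and emits category tags only after the final line, may be sketched as follows. The module interface and the toy example module are hypothetical.

```python
# Sketch of an insertion point: each plugged-in analysis module observes
# the image data line by line; after a module has seen the final line, it
# may generate category tags, which are stored with the image file.
def run_insertion_point(modules, image_lines, image_file_tags):
    for module in modules:
        for line in image_lines:
            module.observe_line(line)
        # Only after the final line does the module report its tags.
        for tag in module.finish():
            image_file_tags.append(tag)

class LineCountModule:
    """Toy module: tags images taller than a threshold as 'large'."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0
    def observe_line(self, line):
        self.count += 1
    def finish(self):
        return ["large"] if self.count > self.threshold else []
```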

[0060] Next, in step 945, another image processing module 512 preferably performs gamma correction on image data 810, and also converts the color space format of image data 810. In the FIG. 10 embodiment, image data 810 is converted to YCC 444 (Luminance, Chrominance-red, and Chrominance-blue) format.
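The patent names the YCC 444 target format but not the conversion matrix. A common choice for such a conversion is the ITU-R BT.601 matrix, sketched here as an assumption rather than as the patent's method.

```python
# Sketch of the RGB -> YCC 444 color-space conversion in step 945, using
# the standard ITU-R BT.601 coefficients (an assumption; the patent does
# not specify the matrix). Inputs are 8-bit R, G, B values.

def rgb_to_ycc(r, g, b):
    y  =  0.299 * r + 0.587 * g + 0.114 * b          # luminance
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128    # chrominance-blue
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128    # chrominance-red
    return round(y), round(cb), round(cr)
```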

[0061] In the FIG. 10 embodiment, following step 945, selected analysis modules 540 may be plugged into a YCC insertion point 960 to analyze image data 810 at YCC transition point 950, in accordance with the present invention. One, some, or all of the analysis modules 540 may analyze image data 810 at YCC transition point 950. As discussed above, once a particular analysis module 540 analyzes the final line of image data 810, then that analysis module 540 preferably generates any appropriate category tags 735 and stores the generated category tags 735 into a blank category tag location in image file 835 for subsequent use by camera 110 to automatically categorize captured images.

[0062] This discussion of the FIG. 10 embodiment specifically refers only to RGB insertion point 940 and YCC insertion point 960. However, in other embodiments of the present invention, analysis modules 540 may readily analyze image data 810 at any other time or insertion point within camera 110. For example, in an alternate embodiment, analysis modules 540 may readily be configured to examine image data 810 at capture time, and to specifically recognize and identify the capture of any image that matches one or more selectable parameters.

[0063] Furthermore, in another embodiment, analysis modules 540 may advantageously access image files 835 that have been processed and stored onto removable memory 354. Analysis modules 540 may then automatically categorize the image files 835 by analyzing image data 810 and responsively generating corresponding category tags 735, in accordance with the present invention.

[0064] In step 965, an image processing module 512 preferably performs a sharpening procedure on image data 810, and also may perform a variety of other processing options. Then, in step 970, an image processing module 512 preferably decimates image data 810. In the preferred embodiment, the decimation process reduces image resolution by decimating the YCC 444 image data to produce YCC 422 or YCC 411 image data.
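The YCC 444 to YCC 422 decimation of step 970 can be sketched as chroma subsampling: the luminance plane is kept at full resolution while each pair of horizontally adjacent chrominance samples is averaged into one. The data layout and averaging here are illustrative assumptions, not the patent's stated method.

```python
# Illustrative sketch of the step-970 decimation from YCC 444 to YCC 422.
# Luminance (Y) stays at full resolution; Cb and Cr are halved horizontally
# by averaging adjacent pairs. (YCC 411 would average groups of four.)

def decimate_422(ycc_line):
    """ycc_line: list of (Y, Cb, Cr) tuples for one image line.
    Returns full-resolution Y plus half-resolution Cb and Cr lists."""
    ys  = [p[0] for p in ycc_line]
    cbs = [(ycc_line[i][1] + ycc_line[i + 1][1]) // 2
           for i in range(0, len(ycc_line) - 1, 2)]
    crs = [(ycc_line[i][2] + ycc_line[i + 1][2]) // 2
           for i in range(0, len(ycc_line) - 1, 2)]
    return ys, cbs, crs
```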

[0065] In step 975, the image data 810 is preferably compressed into a final image format (preferably JPEG). Next, in step 980, file formatter 516 preferably formats the compressed image file 835, and the resulting image file 835 is finally saved into removable memory 354 in step 985. As discussed above, image file 835 thus includes any appropriate category tags which camera 110 may then subsequently automatically access to sort selected images, in accordance with the present invention.
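The subsequent sorting by camera 110 using the stored category tags 735 can be sketched as a simple grouping over the saved image files. The dictionary-based file representation and the "untagged" fallback are hypothetical; the patent does not define a file layout.

```python
# Illustrative sketch of how firmware might later use stored category tags
# 735 to automatically sort saved image files 835. The file structures and
# the "untagged" fallback are assumptions for illustration.

def sort_by_category(image_files):
    """Group image file names by their stored category tags."""
    categories = {}
    for f in image_files:
        tags = f.get("category_tags") or ["untagged"]
        for tag in tags:
            categories.setdefault(tag, []).append(f["name"])
    return categories
```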

[0066] The invention has been explained above with reference to a preferred embodiment. Other embodiments will be apparent to those skilled in the art in light of this disclosure. For example, the present invention may readily be implemented using configurations other than those described in the preferred embodiment above. Additionally, the present invention may effectively be used in conjunction with systems other than the one described above as the preferred embodiment. Therefore, these and other variations upon the preferred embodiments are intended to be covered by the present invention, which is limited only by the appended claims.

Referenced by
Citing Patent | Filing date | Publication date | Applicant | Title
US6538698 * | Aug 28, 1998 | Mar 25, 2003 | Flashpoint Technology, Inc. | Method and system for sorting images in an image capture unit to ease browsing access
US6833865 * | Jul 29, 1999 | Dec 21, 2004 | Virage, Inc. | Embedded metadata engines in digital capture devices
US6877134 | Jul 29, 1999 | Apr 5, 2005 | Virage, Inc. | Integrated data and real-time metadata capture system and method
US7092564 * | Apr 30, 2001 | Aug 15, 2006 | Hewlett-Packard Development Company, L.P. | Automatic generation of frames for digital images
US7184573 | Jan 4, 2006 | Feb 27, 2007 | Myport Technologies, Inc. | Apparatus for capturing information as a file and enhancing the file with embedded information
US7567276 | Jun 17, 2005 | Jul 28, 2009 | Scenera Technologies, LLC | Method and apparatus for managing categorized images in a digital camera
US7602424 | Jun 21, 2005 | Oct 13, 2009 | Scenera Technologies, LLC | Method and apparatus for automatically categorizing images in a digital camera
US8279319 * | Feb 15, 2006 | Oct 2, 2012 | Sony Corporation | Information processing apparatus, information processing method, and information processing system
US8301995 * | Jun 22, 2006 | Oct 30, 2012 | Csr Technology Inc. | Labeling and sorting items of digital data by use of attached annotations
US8350928 | Oct 9, 2009 | Jan 8, 2013 | KDL Scan Designs LLC | Method and apparatus for automatically categorizing images in a digital camera
US8405711 * | Oct 29, 2007 | Mar 26, 2013 | Capso Vision, Inc. | Methods to compensate manufacturing variations and design imperfections in a capsule camera
US8477228 * | Jun 30, 2008 | Jul 2, 2013 | Verizon Patent And Licensing Inc. | Camera data management and user interface apparatuses, systems, and methods
US8531555 | Feb 21, 2012 | Sep 10, 2013 | KDL Scan Designs LLC | Method and apparatus for automatically categorizing images in a digital camera
US8558920 * | Aug 31, 2010 | Oct 15, 2013 | Fujifilm Corporation | Image display apparatus and image display method for displaying thumbnails in variable sizes according to importance degrees of keywords
US8558921 | May 26, 2009 | Oct 15, 2013 | Walker Digital, LLC | Systems and methods for suggesting meta-information to a camera user
US8621047 | Oct 13, 2010 | Dec 31, 2013 | Intellectual Ventures Fund 83 LLC | System and method for managing images over a communication network
US8743261 * | Jun 28, 2013 | Jun 3, 2014 | Verizon Patent And Licensing Inc. | Camera data management and user interface apparatuses, systems, and methods
US8836819 | Sep 3, 2013 | Sep 16, 2014 | KDL Scan Designs LLC | Method and apparatus for automatically categorizing images in a digital camera
US20060209089 * | Feb 15, 2006 | Sep 21, 2006 | Sony Corporation | Information processing apparatus, information processing method, and information processing system
US20080165248 * | Oct 29, 2007 | Jul 10, 2008 | Capso Vision, Inc. | Methods to compensate manufacturing variations and design imperfections in a capsule camera
US20080166072 * | Jan 9, 2007 | Jul 10, 2008 | Kang-Huai Wang | Methods to compensate manufacturing variations and design imperfections in a capsule camera
US20110050726 * | Aug 31, 2010 | Mar 3, 2011 | Fujifilm Corporation | Image display apparatus and image display method
Classifications
U.S. Classification: 348/222.1, 348/E05.047
International Classification: G06F17/30, H04N5/76, H04N5/225, H04N5/228, H04N1/21, H04N5/232
Cooperative Classification: H04N2201/325, H04N2201/3226, H04N1/2112, G06F17/3025, H04N2101/00, H04N5/23293
European Classification: G06F17/30M1C, H04N1/21B3, H04N5/232V
Legal Events
Date | Code | Event
Jan 10, 2012 | AS | Assignment
    Owner name: KDL SCAN DESIGNS LLC, DELAWARE
    Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FOTOMEDIA TECHNOLOGIES, LLC;REEL/FRAME:027511/0401
    Effective date: 20111212
Mar 5, 2004 | AS | Assignment
    Owner name: FLASHPOINT TECHNOLOGY, INC., NEW HAMPSHIRE
    Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:IPAC ACQUISITION SUBSIDIARY I, LLC;REEL/FRAME:014402/0231
    Effective date: 20031203
Aug 26, 2003 | AS | Assignment
    Owner name: FLASHPOINT TECHNOLOGY, INC., NEW HAMPSHIRE
    Free format text: SECURITY AGREEMENT;ASSIGNOR:IPAC ACQUISITION SUBSIDIARY I, LLC;REEL/FRAME:013913/0138
    Effective date: 20020617
Sep 25, 2002 | AS | Assignment
    Owner name: IPAC ACQUISITION SUBSIDIARY I, LLC, NEW HAMPSHIRE
    Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE RECEIVING PARTY, FILED ON 07/05/2002, RECORDED ON REEL 013066 FRAME 0062;ASSIGNOR:FLASHPOINT TECHNOLOGY, INC.;REEL/FRAME:013258/0731
    Effective date: 20020614
    Free format text: ASSIGNMENT CORRECTION,REEL/FRAME;ASSIGNOR:FLASHPOINT TECHNOLOGY, INC.;REEL/FRAME:013323/0484
    Free format text: ASSIGNMENT CORRECTION,REEL/FRAME:013056/0062;ASSIGNOR:FLASHPOINT TECHNOLOGY, INC.;REEL/FRAME:013323/0484
Jul 5, 2002 | AS | Assignment
    Owner name: IPAC SUB, NEW HAMPSHIRE
    Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:FLASHPOINT TECHNOLOGY, INC.;REEL/FRAME:013056/0062
    Effective date: 20020614
Jul 23, 1998 | AS | Assignment
    Owner name: FLASHPOINT TECHNOLOGY, INC., CALIFORNIA
    Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ANDERSON, ERIC C.;REEL/FRAME:009351/0311
    Effective date: 19980713