US20040223649A1 - Composite imaging method and system - Google Patents

Composite imaging method and system

Info

Publication number
US20040223649A1
US20040223649A1 (application US10/431,057)
Authority
United States
Prior art keywords
image
information
imaging information
imaging
elements
Prior art date
Legal status: Abandoned (status assumed; not a legal conclusion)
Application number
US10/431,057
Inventor
Carolyn Zacks
Michael Telek
Dan Harel
Frank Marino
Current Assignee: Eastman Kodak Co (listed assignee may be inaccurate)
Original Assignee: Eastman Kodak Co
Application filed by Eastman Kodak Co filed Critical Eastman Kodak Co
Priority to US10/431,057 priority Critical patent/US20040223649A1/en
Assigned to EASTMAN KODAK COMPANY reassignment EASTMAN KODAK COMPANY ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HAREL, DAN, MARINO, FRANK, TELEK, MICHAEL J., ZACKS, CAROLYN A.
Publication of US20040223649A1 publication Critical patent/US20040223649A1/en
Priority to US11/681,499 priority patent/US20070182829A1/en
Priority to US12/975,720 priority patent/US8600191B2/en
Assigned to PAKON, INC., KODAK AVIATION LEASING LLC, EASTMAN KODAK INTERNATIONAL CAPITAL COMPANY, INC., KODAK (NEAR EAST), INC., FAR EAST DEVELOPMENT LTD., LASER-PACIFIC MEDIA CORPORATION, EASTMAN KODAK COMPANY, KODAK PHILIPPINES, LTD., CREO MANUFACTURING AMERICA LLC, QUALEX INC., NPEC INC., KODAK REALTY, INC., KODAK IMAGING NETWORK, INC., FPC INC., KODAK AMERICAS, LTD., KODAK PORTUGUESA LIMITED reassignment PAKON, INC. PATENT RELEASE Assignors: CITICORP NORTH AMERICA, INC., WILMINGTON TRUST, NATIONAL ASSOCIATION

Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 11/00 - 2D [Two Dimensional] image generation
        • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
              • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
                • G06V 40/174 - Facial expression recognition
    • H - ELECTRICITY
      • H04 - ELECTRIC COMMUNICATION TECHNIQUE
        • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
          • H04N 1/00 - Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; details thereof
            • H04N 1/00127 - Connection or combination of a still picture apparatus with another apparatus, e.g. for storage, processing or transmission of still picture signals or of information associated with a still picture
              • H04N 1/00132 - in a digital photofinishing system, i.e. a system where digital photographic images undergo typical photofinishing processing, e.g. printing ordering
                • H04N 1/00167 - Processing or editing
            • H04N 1/387 - Composing, repositioning or otherwise geometrically modifying originals
              • H04N 1/3872 - Repositioning or masking
              • H04N 1/3876 - Recombination of partial images to recreate the original image
          • H04N 5/00 - Details of television systems
            • H04N 5/222 - Studio circuitry; studio devices; studio equipment
              • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
                • H04N 5/2621 - Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
          • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; control thereof
            • H04N 23/60 - Control of cameras or camera modules
              • H04N 23/61 - Control of cameras or camera modules based on recognised objects
                • H04N 23/611 - where the recognised objects include parts of the human body
            • H04N 23/95 - Computational photography systems, e.g. light-field imaging systems
              • H04N 23/951 - by using two or more images to influence resolution, frame rate or aspect ratio

Definitions

  • the present invention relates to imaging systems and more particularly to imaging systems that are adapted to form images having multiple objects therein.
  • photographers address this problem by capturing multiple images of the group of elements and selecting from the multiple images a group image that shows all of the elements in the group image having a generally acceptable appearance. Even where this is done, it is often the case that one or more elements has a less than optimal appearance.
  • image editing software can be used to attempt to improve the appearance of elements in a group image.
  • Such editing software typically includes automatic image correction algorithms that can resolve common image problems such as the so-called red-eye problem that can occur in images of people. See for example, commonly assigned U.S. Pat. Publ. No. 2003-0053663 entitled “Method and Computer Program Products for Locating Facial Features” filed by Chen et al. on Nov. 26, 2001.
  • advanced users of such image editing software can use manual image editing tools such as Adobe Photoshop™ software, sold by Adobe Systems Inc., San Jose, Calif., USA, to manually alter images.
  • a method for forming a group image.
  • a set of imaging information is obtained depicting a scene over a period of time. Elements in the set of imaging information are distinguished and attributes of the elements in the set of image information are examined. Imaging information is selected from the set of imaging information depicting each element with the selection being made according to the attributes for that element.
  • a group image is formed based upon the set of imaging information with the group image incorporating the selected image information.
  • a method for forming an image is provided.
  • a set of imaging information is obtained depicting a scene over a period of time.
  • a base image is provided based on the set of image information.
  • Elements are identified in the base image and portions of the set of imaging information depicting each of the elements are ordered according to an attribute of each element.
  • Imaging information from the set of imaging information is selected depicting each element according to the ordering.
  • An image is formed based upon the set of imaging information with the base image incorporating the selected image information.
  • a method for forming an image is provided.
  • images of a scene are obtained at different times.
  • Elements in the images are identified. Attributes for each of the elements in each of the images are determined and it is determined for each element which image shows the element having preferred attributes.
  • An image is prepared of the scene with each element having an appearance that corresponds to the appearance of the element in the image that shows the preferred attributes for the element.
  • a computer program product having data stored thereon for causing an imaging system to perform a method for forming a group image.
  • a set of imaging information is obtained depicting a scene over a period of time. Elements in the set of imaging information are distinguished and attributes of the elements in the set of image information are examined. Imaging information is selected from the set of imaging information depicting each element with the selection being made according to the attributes for that element.
  • a group image is formed based upon the set of imaging information with the group image incorporating the selected image information.
  • a computer program product having data stored thereon for causing the imaging system to perform a method for forming an image.
  • a set of imaging information is obtained depicting a scene over a period of time.
  • a base image is provided based on the set of image information.
  • Elements are identified in the base image and portions of the set of imaging information depicting each of the elements are ordered according to an attribute of each element.
  • Imaging information from the set of imaging information is selected depicting each element according to the ordering.
  • An image is formed based upon the set of imaging information with the base image incorporating the selected image information.
  • a computer program product having data stored thereon for causing imaging system to perform a method for forming a group image.
  • images of a scene are obtained at different times. Elements in the images are identified. Attributes for each of the elements in each of the images are determined and it is determined for each element which image shows the element having preferred attributes.
  • An image is prepared of the scene with each element having an appearance that corresponds to the appearance of the element in the image that shows the preferred attributes for the element.
  • an imaging system has a source of a set of image information and a signal processor adapted to receive the set of image information, to identify elements in the set of image information, to distinguish elements in the set of image information, and to examine the attributes of the elements in the set of image information.
  • the signal processor is further adapted to select imaging information from the set of imaging information depicting each element, with the selection being made according to the attributes for that element, and to form a group image based upon the set of imaging information with the group image incorporating the selected image information.
  • an imaging system has a source of imaging information and a signal processor adapted to obtain a set of imaging information from the source of imaging information depicting a scene over a period of time.
  • the signal processor provides a base image based upon the set of imaging information and identifies elements in the base image.
  • the signal processor orders the portions of the set of imaging information depicting each of the elements according to an attribute of each element.
  • the processor selects imaging information from the set of imaging information depicting each element according to the ordering and forms a group image incorporating the selected image information.
  • an imaging system comprising a source of images of a scene captured at different times.
  • the signal processor is adapted to obtain images from the source, to identify elements in the images, to determine attributes for each of the elements in each of the images and to determine for each element which image shows an element having preferred attributes wherein the signal processor prepares an image of the scene with each element having an appearance that corresponds to the appearance of the element in the image that shows the preferred attributes for the element.
  • FIG. 1 shows one embodiment of a composite imaging system of the present invention.
  • FIG. 2 shows a back view of the embodiment of FIG. 1.
  • FIG. 3 shows a flow diagram of one embodiment of a method for forming a group image in accordance with the present invention.
  • FIG. 4 shows an illustration of objects and elements within an image.
  • FIG. 5 shows an illustration of the use of a set of image information to form an image.
  • FIG. 6 shows an illustration of the use of a desired facial expression or mood selection to influence the appearance of an image.
  • FIG. 7 shows a flow diagram of one embodiment of a method for approving and ordering an image in accordance with the present invention.
  • FIG. 8 shows an illustration depicting the operation of another embodiment of the method of the present invention.
  • FIG. 1 shows a block diagram of one embodiment of an imaging system 10 .
  • FIG. 2 shows a top, back, right side perspective view of the imaging system 10 of FIG. 1.
  • imaging system 10 comprises a body 20 containing an image capture system 22 having a lens system 23 , an image sensor 24 , a signal processor 26 , an optional display driver 28 and a display 30 .
  • lens system 23 can have one or more elements.
  • Lens system 23 can be of a fixed focus type or can be manually or automatically adjustable.
  • Lens system 23 is optionally adjustable to provide a variable zoom that can be varied manually or automatically. Other known arrangements can be used for lens system 23 .
  • Light from the scene that is focused by lens system 23 onto image sensor 24 is converted into image signals I representing an image of the scene.
  • Image sensor 24 can comprise a charge coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, or any other electronic image sensor known to those of ordinary skill in the art.
  • Image signals I can be in digital or analog form.
  • Signal processor 26 receives image signals I from image sensor 24 and transforms image signal I into a set of imaging information S.
  • Set of image information S can comprise a set of still images or other image information in the form of a video stream of apparently moving images.
  • the set of image information S can comprise image information in an interleaved or interlaced image form.
  • Signal processor 26 can also apply image processing algorithms to image signals I in the formation of the set of image information S. These can include but are not limited to color and exposure balancing, interpolation and compression. Where image signals I are in the form of analog signals, signal processor 26 converts these analog signals into a digital form.
  • a controller 32 controls the operation of image capture system 22 , including lens system 23 , image sensor 24 , signal processor 26 , and a memory such as memory 40 during imaging operations. Controller 32 causes image sensor 24 , signal processor 26 , display 30 and memory 40 to capture, store and display images in response to signals received from a user input system 34 , data from signal processor 26 and data received from optional sensors 36 . Controller 32 can comprise a microprocessor such as a programmable general purpose microprocessor, a dedicated micro-processor or micro-controller, or any other system that can be used to control operation of imaging system 10 .
  • User input system 34 can comprise any form of transducer or other device capable of receiving an input from a user and converting this input into a form that can be used by controller 32 in operating imaging system 10 .
  • user input system 34 can comprise a touch screen input, a 4-way switch, a 6-way switch, an 8-way switch, a stylus system, a trackball system, a joystick system, a voice recognition system, a gesture recognition system or other such systems.
  • user input system 34 includes a shutter trigger button 60 that sends a trigger signal to controller 32 indicating a desire to capture an image.
  • user input system 34 also includes a wide-angle zoom button 62 and a telephoto zoom button 64 that control the zoom settings of lens system 23, causing lens system 23 to zoom out when wide-angle zoom button 62 is depressed and to zoom in when telephoto zoom button 64 is depressed.
  • Wide-angle zoom lens button 62 and telephoto zoom button 64 can also be used to provide signals that cause signal processor 26 to process image signal I to provide a set of image information that appears to have been captured at a different zoom setting than that actually provided by the optical lens system. This can be done by using a subset of the image signal I and interpolating a subset of the image signal I to form the set of image information S.
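  • The crop-and-interpolate behavior described above can be sketched in a few lines. The following Python/OpenCV snippet illustrates the general technique only, not the patent's implementation; the function name and the choice of cubic interpolation are assumptions.

```python
import cv2
import numpy as np

def digital_zoom(image: np.ndarray, zoom: float) -> np.ndarray:
    """Simulate a tighter zoom setting by cropping a centered subset of
    the image signal and interpolating it back to full resolution.
    Assumes zoom >= 1.0."""
    h, w = image.shape[:2]
    crop_h, crop_w = int(h / zoom), int(w / zoom)
    y0 = (h - crop_h) // 2
    x0 = (w - crop_w) // 2
    subset = image[y0:y0 + crop_h, x0:x0 + crop_w]
    # Interpolate the cropped subset up to the original dimensions.
    return cv2.resize(subset, (w, h), interpolation=cv2.INTER_CUBIC)
```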
  • User input system 34 can also include other buttons including the Fix-It button 66 shown in FIG. 2 and the Select-It button 68 shown in FIG. 2, the function of which will be described in greater detail below.
  • Sensors 36 are optional and can include light sensors, range finders and other sensors known in the art that can be used to detect conditions in the environment surrounding imaging system 10 and to convert this information into a form that can be used by controller 32 in governing operation of imaging system 10 .
  • Controller 32 causes a set of image information S to be captured when a trigger condition is detected.
  • the trigger condition occurs when a user depresses shutter trigger button 60 , however, controller 32 can determine that a trigger condition exists at a particular time, or at a particular time after shutter trigger button 60 is depressed. Alternatively, controller 32 can determine that a trigger condition exists when optional sensors 36 detect certain environmental conditions.
  • Controller 32 can also be used to generate metadata M in association with each image.
  • Metadata M is data that is related to the set of image information S or to a portion of the set of image information S but that is not necessarily observable in the image data itself.
  • controller 32 can receive signals from signal processor 26 , camera user input system 34 and other sensors 36 and, optionally, generates metadata M based upon such signals.
  • Metadata M can include but is not limited to information such as the time, date and location that the archival image was captured, the type of image sensor 24 , mode setting information, integration time information, taking lens unit setting information that characterizes the process used to capture the archival image and processes, methods and algorithms used by imaging system 10 to form the archival image.
  • Metadata M can also include but is not limited to any other information determined by controller 32 or stored in any memory in imaging system 10, such as information that identifies imaging system 10, and/or instructions for rendering or otherwise processing the archival image with which metadata M is associated, such as an instruction to incorporate a particular message into the image. Metadata M can further include image information such as a set of display data, a set of image information S or any part thereof. Metadata M can also include any other information entered into imaging system 10.
  • Set of image information S and optional metadata M can be stored in a compressed form.
  • the still images can be stored in a compressed form such as by using the JPEG (Joint Photographic Experts Group) ISO 10918-1 (ITU-T.81) standard.
  • This JPEG compressed image data is stored using the so-called “Exif” image format defined in the Exchangeable Image File Format version 2.2, published by the Japan Electronics and Information Technology Industries Association as JEITA CP-3451.
  • other compression systems such as the MPEG-4 (Moving Picture Experts Group) or Apple QuickTime™ standard can be used to store a set of image information that is received in a video form.
  • Other image compression and storage forms can be used.
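  • As one concrete illustration, a still from the set could be written out as an Exif-tagged JPEG with the Pillow library. This is a minimal sketch under that assumption; the tag values shown are illustrative only.

```python
from PIL import Image

def store_still(frame, path):
    """Store one still from the set of image information S as a JPEG
    with Exif metadata. `frame` is assumed to be an H x W x 3 uint8
    array such as a decoded image signal."""
    img = Image.fromarray(frame)
    exif = Image.Exif()
    exif[0x010F] = "Eastman Kodak"        # Make tag (0x010F); illustrative
    exif[0x0132] = "2003:05:07 12:00:00"  # DateTime tag (0x0132); illustrative
    img.save(path, "JPEG", quality=85, exif=exif.tobytes())
```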
  • the set of image information S can be stored in a memory such as memory 40 .
  • Memory 40 can include conventional memory devices including solid state, magnetic, optical or other data storage devices. Memory 40 can be fixed within imaging system 10 or it can be removable. In the embodiment of FIG. 1, imaging system 10 is shown having a memory card slot 46 that holds a removable memory 48 such as a removable memory card and has a removable memory interface 50 for communicating with removable memory 48 .
  • the set of image information can also be stored in a remote memory system 52 that is external to imaging system 10 such as a personal computer, computer network or other imaging system.
  • imaging system 10 has a communication module 54 for communicating with the remote memory system.
  • the communication module 54 can be for example, an optical, radio frequency or other transducer that converts image and other data into a form that can be conveyed to the remote imaging system by way of an optical signal, radio frequency signal or other form of signal.
  • Communication module 54 can also be used to receive a set of image information and other information from a host computer or network (not shown).
  • Controller 32 can also receive information and instructions from signals received by communication module 54 including but not limited to, signals from a remote control device (not shown) such as a remote trigger button (not shown) and can operate imaging system 10 in accordance with such signals.
  • Signal processor 26 optionally also converts image signals I into a set of display data DD that is in a format that is appropriate for presentation on display 30 .
  • Display 30 can comprise, for example, a color liquid crystal display (LCD), organic light emitting display (OLED) also known as an organic electroluminescent display (OELD) or other type of video display.
  • Display 30 can be external as is shown in FIG. 2, or it can be internal for example used in a viewfinder system 38 .
  • imaging system 10 can have more than one display with, for example, one being external and one internal.
  • display 30 has a lower imaging resolution than image sensor 24. Accordingly, signal processor 26 reduces the resolution of image signal I when forming the set of display data DD adapted for presentation on display 30. Down sampling and other conventional techniques for reducing the overall imaging resolution can be used. For example, resampling techniques such as those described in commonly assigned U.S. Pat. No. 5,164,831 “Electronic Still Camera Providing Multi-Format Storage Of Full And Reduced Resolution Images” filed by Kuchta et al. on Mar. 15, 1990, can be used.
  • the set of display data DD can optionally be stored in a memory such as memory 40 .
  • the set of display data DD can be adapted to be provided to an optional display driver 28 that can be used to drive display 30 .
  • the display data can be converted into signals that can be transmitted by signal processor 26 in a form that directly causes display 30 to present a set of display data DD. Where this is done, display driver 28 can be omitted.
  • Imaging system 10 can obtain a set of image information in a variety of ways.
  • imaging system 10 can capture a set of image information S using image sensor 24 .
  • Imaging operations that can be used to obtain a set of image information S from image capture system 22 include a capture process and can optionally also include a composition process and a verification process.
  • controller 32 causes signal processor 26 to cooperate with image sensor 24 to capture image signals I and present a set of display data DD on display 30 .
  • controller 32 enters the image composition phase when shutter trigger button 60 is moved to a half depression position.
  • other methods for determining when to enter a composition phase can be used.
  • one of the controls of user input system 34, for example the Fix-It button 66 shown in FIG. 2, can be depressed by a user of imaging system 10 and can be interpreted by controller 32 as an instruction to enter the composition phase.
  • the set of display data DD presented during composition can help a user to compose the scene for the capture of a set of image information S.
  • the capture process is executed in response to controller 32 determining that a trigger condition exists.
  • a trigger signal is generated when shutter trigger button 60 is moved to a full depression condition and controller 32 determines that a trigger condition exists when controller 32 detects the trigger signal.
  • controller 32 sends a capture signal causing digital signal processor 26 to obtain image signals I and to process the image signals I to form a set of image information S.
  • a set of display data DD corresponding to the set of image information S is optionally formed for presentation on display 30.
  • the corresponding display data DD is supplied to display 30 and is presented for a period of time. This permits a user to verify that the set of image information S is acceptable.
  • signal processor 26 converts each image signal I into the set of imaging information S and then modifies the set of imaging information S to form a set of display data DD.
  • a method will be described. However, in another embodiment, the methods described hereinafter can take the form of a computer program product for forming a group image.
  • the computer program product for performing the described methods can be stored in a computer readable storage medium.
  • This medium may comprise, for example: magnetic storage media such as a magnetic disk (such as a hard drive or a floppy disk) or magnetic tape; optical storage media such as an optical disc, optical tape, or machine readable bar code; solid state electronic storage devices such as random access memory (RAM), or read only memory (ROM); or any other physical device or medium employed to store a computer program.
  • the computer program product for performing the described methods may also be stored on a computer readable storage medium that is connected to imaging system 10 by way of the internet or other communication medium. Those skilled in the art will readily recognize that the equivalent of such a computer program product can also be constructed in hardware.
  • the computer program product embodiment can be utilized by any well-known computer system, including but not limited to the computing systems incorporated in imaging system 10 described above.
  • many other types of computer systems can be used to execute the computer program embodiment. Examples of such other computer systems include personal computers, personal digital assistants, workstations, internet appliances and the like. Consequently, the computer system will not be discussed in further detail herein.
  • the method of forming a group image begins with imaging system 10 entering a mode for forming a group image (step 70 ).
  • the group image forming mode can be entered automatically with controller 32 entering the mode as a part of an initial start up operation that is executed when imaging system 10 is activated.
  • the group image mode can be entered automatically when signal processor 26 determines that a set of image information contains an arrangement of image elements that suggests that the scene can be beneficially processed using a set of image information.
  • the group image mode can also be entered when controller 32 detects a user selection at user input system 34 such as striking Fix-It button 66 shown in FIG. 2.
  • a set of image information S is then obtained (step 72 ).
  • the set of imaging information S can be obtained using the imaging operations described above.
  • controller 32 can be adapted to receive a trigger signal from user input system 34.
  • controller 32 causes a set of image information S to be obtained from image sensor 24 depicting the scene over a period of time.
  • This set of image information S can comprise, for example, a sequence of archival still images captured over the period of time.
  • This set of image information S can also comprise interlaced or other forms of video image information captured over the period of time. The period of time can begin at the moment that the trigger condition is detected.
  • controller 32 can cause a set of image information S to be stored in a first in, first out buffer in a memory such as memory 40 during composition. Where this is done, the set of image information S can be captured during composition and fed into the buffer, so that at the time controller 32 determines that a trigger condition exists, the buffer contains imaging information depicting the scene for a period of time prior to the point in time where controller 32 determines that the trigger condition exists. In this way, the set of imaging information S obtained can include imaging information obtained prior to the detected trigger condition.
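  • A minimal sketch of this pre-trigger buffering, using a fixed-length first-in, first-out buffer; the capacity is an assumed value.

```python
from collections import deque

# Frames captured during composition are fed into a fixed-length FIFO
# buffer, so that when the trigger condition is detected the buffer
# already holds imaging information from before the trigger.
PRE_TRIGGER_FRAMES = 30            # assumed capacity, e.g. ~1 s of video
frame_buffer = deque(maxlen=PRE_TRIGGER_FRAMES)

def on_new_frame(frame):
    frame_buffer.append(frame)     # the oldest frame is discarded automatically

def on_trigger(capture_more):
    # Combine pre-trigger frames with frames captured after the trigger
    # to form the set of imaging information S.
    return list(frame_buffer) + capture_more()
```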
  • a set of imaging information S can be obtained from any memory in imaging system 10 .
  • the set of imaging information S can be obtained from a removable memory 48 having the set of imaging information recorded therein by another image capture device (not shown). Further, the set of imaging information can be obtained from an external source by way of communication module 54 .
  • Objects and elements within a base image are distinguished within the set of imaging information (step 74 ). In the embodiment shown, this can be done by selecting a base image from the image stream and identifying objects and elements within the base image.
  • the base image can be selected by selecting the first image in the set of imaging information S, automatically selecting an image that corresponds to scene conditions at the time that the trigger condition is detected, or automatically selecting any other image in the set of imaging information S based on some other selection strategy.
  • the base image contains objects such as a region, person, place, or thing.
  • An object can contain multiple elements for example, where the object is a face of a person, elements can comprise the eyes and mouth of the person.
  • Objects and/or elements can be detected in the imaging information using a variety of detection algorithms and methods including but not limited to human body detection algorithms such as those disclosed in commonly assigned U.S. Pat. Pub. No. 2002-0076100 entitled “Image processing method for detecting human figures in a digital image” filed by Luo on Dec. 14, 2000, and human face recognition algorithms such as those described in commonly assigned U.S. Pat. Pub. No. 2003-0021448 entitled “Method For Detecting Eye and Mouth Positions in a Digital Image” filed by Chen et al. on May 1, 2001.
  • the step of sorting the image for objects can simplify the process of distinguishing elements within the objects by reducing the set of elements that are likely to be within certain areas of the image. However, this is optional and elements can also be identified in a base image without first distinguishing objects. Further, in certain embodiments, objects and/or elements can be distinguished within the set of imaging information S without first forming a base image.
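  • To make step 74 concrete, here is a minimal sketch of distinguishing face objects and eye elements. The patent cites Kodak's own detection publications; OpenCV's stock Haar cascades are used here as stand-ins for illustration, and the dictionary layout is an assumption.

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def distinguish_objects_and_elements(base_image):
    """Return face objects found in the base image, each with the eye
    elements detected inside its region."""
    gray = cv2.cvtColor(base_image, cv2.COLOR_BGR2GRAY)
    objects = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = gray[y:y + h, x:x + w]
        # Report eye positions in whole-image coordinates.
        eyes = [(x + ex, y + ey, ew, eh)
                for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi)]
        objects.append({"face": (x, y, w, h), "eyes": eyes})
    return objects
```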
  • Attributes of each of the elements are then examined (step 76 ).
  • Each element has variable attributes that can change over the period of time captured in the set of image information. For example, the eyes of a face can open and close during the period of time, or a mouth can shift from smiling to not smiling. Time variable attributes of elements such as eyes or a mouth can be identified automatically, as they are easily recognizable as being of interest in facial images.
  • the user of imaging system 10 can identify, manually, elements and attributes of interest. Where this is to be done, the base image is presented on display 30 and the user of imaging system 10 can use user input system 34 to identify objects, elements and attributes in the base image that are of interest. Such objects, elements and attributes can be identified for example by name, icon, image, outline, arrow, or other visual or audio symbol or signal. For convenience, the identifier used for the element can be presented on display 30 .
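  • A crude sketch of examining the eyes-open and mouth-smiling attributes (step 76) follows. It assumes the stock OpenCV eye and smile cascades, which are trained on open eyes and smiling mouths respectively, so detection counts serve as rough attribute scores; the scoring scale is an assumption.

```python
import cv2

eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def element_attributes(gray_face):
    """Rough attribute scores for one face object in one image.
    `gray_face` is a grayscale crop of the face region."""
    eyes = eye_cascade.detectMultiScale(gray_face, 1.1, 5)
    h = gray_face.shape[0]
    mouth_region = gray_face[h // 2:, :]        # lower half of the face
    smiles = smile_cascade.detectMultiScale(mouth_region, 1.7, 20)
    return {
        "eyes_open": min(len(eyes), 2) / 2.0,   # 0.0, 0.5 or 1.0
        "mouth_smiling": 1.0 if len(smiles) else 0.0,
    }
```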
  • FIG. 4 shows a drawing of one example of a base image 90 from a set of image information, its objects, elements, and attributes.
  • Image 90 is composed of two objects: face 92 and face 94 .
  • Face 92 is composed of two elements: eye element 96 and mouth element 98 .
  • Face 94 is likewise composed of two elements: eye element 100 and mouth element 102 .
  • the attribute of eye element 96 is eyes open.
  • the attribute of eye element 100 is eyes closed.
  • the attribute of mouth element 98 is mouth not smiling.
  • the attribute of mouth element 102 is mouth smiling.
  • Objects and elements distinguished in base image 90 are then distinguished in the other images over the set of imaging information S in like manner (step 78 ). Attributes of each of the elements are then determined in the remaining portions of the set of imaging information S (step 80 ).
  • the imaging information depicting each element in the set of imaging information is ordered in decreasing attribute level across the available imaging information in the set of imaging information (step 82). This ordering is performed by comparing the appearance of each of the elements in the stream of imaging information to preferred attributes for the element. For example, if the best group image of a group of people is desired, then the attributes of eyes open and mouth smiling are of high priority. The imaging information associated with the eye element can therefore be ordered with imaging information depicting the eye element having the attribute of an open eye at the top of the ordered list and imaging information depicting the eye element having a closed or partially closed eye at the bottom of the ordered list. Similarly, imaging information depicting a mouth element having the attribute of smiling would be at the top of the ordered list for the mouth element, and imaging information depicting a mouth element having a non-smiling arrangement would be at the bottom of the ordered list.
  • the preferred attributes used for the ordering scheme can be determined automatically by analysis of the set of imaging information S to determine what attributes are to be preferred for the image.
  • the attributes to be used for ordering the image can be set by the user, for example by using user input system 34 .
  • Other imaging information ordering criteria can be used where other subjects are involved. For example, where the objects in the image include a group of show dogs posing for an image or performing an acrobatic activity such as jumping, the head direction of each dog can be determined and a preference can be shown for the attribute of each dog facing in the general direction of the camera.
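  • The ordering step itself reduces to a sort per element. The sketch below assumes the attribute levels have already been mapped to numeric scores (high = 3, medium = 2, low = 1); the example data corresponds to Table 1 below and reproduces the ordering of Table 2.

```python
def order_imaging_information(scores):
    """Order the images for each element by decreasing attribute level.
    `scores` maps image id -> {element id -> attribute score}."""
    elements = {e for per_image in scores.values() for e in per_image}
    ordering = {}
    for element in elements:
        # Negating the score keeps the sort stable for tied attribute
        # levels, so earlier images stay ahead of later ones.
        ordering[element] = sorted(
            scores, key=lambda img: -scores[img][element])
    return ordering

# The attribute levels of Table 1 mapped to numbers:
table1 = {
    112: {124: 2, 128: 3, 126: 3, 130: 2},
    114: {124: 2, 128: 3, 126: 2, 130: 3},
    116: {124: 2, 128: 2, 126: 1, 130: 3},
    118: {124: 3, 128: 2, 126: 2, 130: 3},
}
# order_imaging_information(table1)[124] -> [118, 112, 114, 116]
```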
  • FIG. 5 illustrates an example of the operation of the method of FIG. 3.
  • FIG. 5 shows a set of imaging information 110 comprising a sequence of four images 112 , 114 , 116 , 118 captured over a period of time.
  • each of the images 112 , 114 , 116 and 118 contains two objects, i.e., faces 120 and 122 .
  • the eye elements 124 and 126 , and mouth elements 128 and 130 are identified as important image elements.
  • the attributes of eye elements 124 and 126 in image 112 are examined first and then the attributes of eye elements 124 and 126 in images 114 , 116 , and 118 are examined.
  • the preferred element attributes are then automatically determined: in this case open eyes and smiling mouth.
  • TABLE 1: attribute level of each element (columns 124, 128, 126, 130) in each image:

        Image    124       128       126       130
        112      medium    high      high      medium
        114      medium    high      medium    high
        116      medium    medium    low       high
        118      high      medium    medium    high
  • An ordered list of imaging information depicting eye elements 124 and 126 is formed based upon the closeness of the attributes of the eye elements in each of images 112, 114, 116 and 118 to the preferred attributes of the eye elements. See Table 2.
  • the imaging information depicting the eye element having highest ordered attributes on the ordered list is used to form a group image 132 .
  • the mouth elements 128 and 130 in images 112 are examined and compared to the mouth elements 128 and 130 in images 114 , 116 and 118 , and an ordered list of imaging information having preferred mouth attributes is determined. See Table 2.
  • the mouth elements 128 and 130 that are highest on the ordered list of mouth attributes are used to form the group image 132 .
  • TABLE 2: ordering (priority) of images 112 to 118 for each element, best first:

        Priority   124   128   126   130
        1          118   112   112   114
        2          112   114   114   116
        3          114   116   118   118
        4          116   118   116   112
  • the group image is then automatically composed (step 84 ). This can be done in a variety of ways.
  • controller 32 and signal processor 26 select an interim image for use in forming group image 132. This can be done by selecting the base image.
  • controller 32 can cooperate with signal processor 26 to determine which of the images available in the set of imaging information S has the highest overall combined attribute ranking for the various elements examined.
  • controller 32 can cooperate with signal processor 26 to determine which of the images available in set of imaging information S requires the least number of image processing steps to form a group image therefrom.
  • image 118 can be selected, as only one step needs to be performed: fixing the appearance of the smile attribute of face 120.
  • the step of selecting an interim image can comprise selecting the image that can most efficiently or most quickly be improved.
  • image 114 requires processing of both eye elements 124 and 126, thus requiring more than one correction; however, the processing effort required to correct the appearance of an eye can be substantially less than the processing effort required to improve the appearance of the mouth element 128 of face object 120 in image 118.
  • Other stratagems can also be used for selecting the interim image.
  • controller 32 and signal processor 26 extract imaging information from the set of imaging information S that corresponds to the highest ordered appearance of each element and insert that imaging information into the interim image in place of the imaging information in the interim image associated with that element.
  • imaging information is formed with each object in the image having elements with preferred attributes.
  • such attributes are based upon actual scene imaging information such as actual facial expressions and are not based upon imaging information manufactured during the editing process. This provides a group image having a more natural and a more realistic appearance.
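  • A simplified sketch of the composing step 84: pick the interim image with the best combined ranking, then paste in the top-ranked imaging information for each element. It assumes the frames are registered and that each element occupies a fixed rectangle, which the patent does not require.

```python
import numpy as np

def compose_group_image(frames, ordering, element_regions):
    """Assemble the group image. `frames` maps image id -> ndarray,
    `ordering` is the per-element ordering (best image first, as in
    Table 2), and `element_regions` maps element id -> (x, y, w, h)."""
    # Interim image: the one whose summed position across the
    # per-element ordered lists is lowest (best combined ranking).
    def combined_rank(img):
        return sum(order.index(img) for order in ordering.values())
    interim_id = min(frames, key=combined_rank)
    group_image = frames[interim_id].copy()

    # For each element, paste in the imaging information from the image
    # at the top of that element's ordered list.
    for element, order in ordering.items():
        best = order[0]
        if best == interim_id:
            continue                      # interim image already best here
        x, y, w, h = element_regions[element]
        group_image[y:y + h, x:x + w] = frames[best][y:y + h, x:x + w]
    return group_image
```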
  • group photos and other group images can be captured of scenes and circumstances wherein it is not preferred that each member of the group smiles. Rather for certain group photos a different mood or facial expression can be preferred.
  • a user of imaging system 10 can use user input system 34 to define such expressions. For example, a desired facial expression of “scared” can be selected by the user of imaging system 10 .
  • FIG. 6 shows how the selection of the desired facial expression or mood affects the composite picture outcome.
  • a base image 140 is selected and elements are identified.
  • base image 140 is composed of two objects: a first face 142 and a second face 144.
  • First face 142 has a neutral expression and second face 144 has a neutral expression.
  • eye elements 146 and 148 and mouth elements 150 and 152 are examined over the time period captured in the set of imaging information and ordered as described above.
  • the ordering is done with eye elements 146 and 148 and mouth elements 150 and 152 ordered in accordance with their similarity to eye elements 164 and mouth element 166 associated with a scared expression template 162 stored within a template library 154 that also contains other expression templates such as a happy template 156 , an angry template 158 and cynical template 160 .
  • the template library 154 can comprise, for example, a library of template images or other imaging information such as image analysis and processing algorithms that associate the attributes of elements in an object, such as a mouth or eye elements in a facial image, with an expression.
  • the templates in library 154 therefore can comprise images, algorithms, models and other data that can be used to evaluate the attributes of the elements detected in each object in the scene.
  • the templates can be based on typical human facial expressions generally, or the templates can be derived from previous photographs or other images of first face 142 and second face 144.
  • the templates can be stored in imaging system 10 or in a remote device that can be contacted by way of communication module 54 .
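  • One way to order elements against an expression template is sketched below, with normalized cross-correlation as the similarity metric; the patent leaves the comparison method open, so both the metric and the function names are assumptions. The patches are assumed to be single-channel uint8 crops.

```python
import cv2

def expression_similarity(element_patch, template_patch):
    """Score how closely an element's appearance matches an expression
    template (e.g. the eye or mouth of scared template 162)."""
    patch = cv2.resize(element_patch, (template_patch.shape[1],
                                       template_patch.shape[0]))
    # With equal-size inputs, matchTemplate returns a single score.
    result = cv2.matchTemplate(patch, template_patch, cv2.TM_CCOEFF_NORMED)
    return float(result.max())

def order_by_expression(patches, template):
    """Order image ids by similarity of the element to the template.
    `patches` maps image id -> element patch from that image."""
    return sorted(patches, key=lambda img: expression_similarity(
        patches[img], template), reverse=True)
```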
  • a group image 170 is then composed as is described above, with each object, e.g. faces 142 and 144, having scared eye elements 172 and 174, respectively, and scared mouth elements 176 and 178, each having an appearance associated with the highest ordered attributes of the elements.
  • the set of imaging information S may not contain an expression that appropriately represents the desired expression, or may suggest the desired expression only to a limited extent.
  • a threshold test can optionally be used.
  • the ordering process can be performed so that the attributes of the features in the set of imaging information S are compared to scared template 162 and ordered according to a scoring scale.
  • the overall score for second face 144 for example, can be compared to a threshold score. Where the score is below the threshold, it can be determined that the set of imaging information S does not contain sufficient information for an appearance of the expression desired.
  • controller 32 can use communication module 54 to obtain imaging information depicting the elements having desired attributes from a remote memory system 52 having a database or template library depicting second face 144. Controller 32 incorporates this remotely obtained imaging information into group image 170 in order to more closely adapt the appearance of eye element 148 and mouth element 152 of second face 144 to the desired “scared” expression, yielding the group image 170 wherein second face 144 has the scared eye element 174 and scared mouth element 178 that correspond to the desired appearance.
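  • The threshold test and remote fallback can be sketched as follows; the cut-off value and the `fetch_remote` callable (a hypothetical stand-in for a lookup through communication module 54 and remote memory system 52) are assumptions.

```python
SCORE_THRESHOLD = 0.6  # assumed cut-off; the patent does not fix a value

def select_or_fetch(patches, score, fetch_remote):
    """Return the best-scoring element patch from the set of imaging
    information S, or fall back to remotely obtained imaging
    information when no patch represents the desired expression well
    enough. `score` maps a patch to its similarity to the desired
    expression template."""
    best = max(patches, key=score)
    if score(best) >= SCORE_THRESHOLD:
        return best
    # Below threshold: obtain imaging information depicting the element
    # with the desired attributes from a remote database or template
    # library instead.
    return fetch_remote()
```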
  • the selection of a desired expression can be made in a variety of ways. For example, the selection can be made on an image by image basis with the selection made once for each image and applied to all elements in the image. Alternatively, the selection of the desired expression can be made on an element by element basis with each element having an individually selected desired expression or other desired attribute. For example, certain persons may feel that their appearance is optimized under circumstances where they have a big smile while other persons may feel that their appearance is optimized with a more subdued expression. In such circumstances, desired expressions can be selected for each person in an image.
  • FIG. 7 shows an alternative embodiment of the present invention in which the ordering process is performed in a semi-automatic fashion.
  • a set of imaging information S is obtained (step 180 ).
  • Controller 32 can obtain the set of imaging information to be sent by capturing archival images or a set of archival imaging information such as a video stream as is described above.
  • Controller 32 can also obtain a set of images to be sent by extracting the digital images from a memory, such as a removable memory 48 .
  • a set of imaging information can also be obtained using communication module 54 .
  • the set of imaging information S is provided to one or more decision makers (step 182 ). Controller 32 can provide the set of imaging information S to each decision maker such as for example a person whose image is incorporated into the set of imaging information S. This can be done, for example, by presenting the set of imaging information S to the person using display 30 or by using communication module 54 to transmit the set of imaging information S to a remote terminal, personal digital assistant, personal computer or other display device.
  • each decision maker reviews the set of imaging information and provides an indication of which image in the set of imaging information has objects with elements having desired attributes (step 184 ).
  • This can be done in a variety of ways. For example, where an image includes a group of elements, a decision can be made for each element in the set of imaging information S as to which portion of the set of imaging information S depicts the element as having favored attributes. For example, one or more elements can be isolated for example by highlighting the element in a base image and a decision maker can then select from the imaging information that portion of the imaging information that depicts that element as having favored attributes. This selection can be made using user input system 34 for example by depressing the select-it button 68 shown in FIG. 2.
  • When a selection is made, user input system 34 generates a signal that indicates which segment of the set of imaging information S has imaging information that depicts that person with elements having the attributes preferred by that person. Controller 32 interprets the signals from user input system 34 as indicating that the selected imaging information contains desired attributes. It will be appreciated that circumstances can arise where more than one decision maker makes recommendations as to which portion of a set of imaging information S contains a preferred attribute. Such conflicts can be prevented by limiting certain decision makers to providing input only on selected elements. For example, where a group image comprises an image of a group of people, each person in the image can act as a decision maker for the elements associated with that person but not for others.
  • Such conflicts can be resolved by providing each person in the image with a different group image tailored to the preferences of that person.
  • the user input information can be used to help form the group image in two ways. In one way a user preference can be used in place of the ordering step described above. Alternatively, the ordering steps described above in previous embodiments can be used and the user preference information can be used to adjust the ordering performed on the imaging information.
  • Controller 32 then forms a group image based at least in part upon the input received from user input system 34 (step 186 ).
  • a single group image can be formed based upon the input from all of the decision makers.
  • controller 32 can monitor the inputs from each decision maker, with the group image formed for each decision maker using the input made by the other decision makers to adjust the ordering of attributes of the elements.
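  • A sketch of folding decision-maker selections into the automatic ordering, per the adjustment option described above; the data layout is an assumption.

```python
def adjust_ordering_with_preferences(ordering, preferences):
    """Fold decision-maker selections into the automatic ordering.
    `ordering` maps element id -> list of image ids (best first);
    `preferences` maps element id -> the image id a decision maker
    picked for that element. Limiting each person to the elements
    associated with that person avoids conflicting inputs."""
    adjusted = {}
    for element, order in ordering.items():
        picked = preferences.get(element)
        if picked in order:
            # Promote the selected image to the top of the ordered list.
            order = [picked] + [img for img in order if img != picked]
        adjusted[element] = order
    return adjusted
```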
  • FIG. 8 shows an illustration of another embodiment of the method of the present invention.
  • an interim image 200 is generated by the imaging system 10 as described above.
  • Interim image 200 contains imaging information that depicts an object 202 which is shown in FIG. 8 as a face object having a mouth element 204 .
  • the set of imaging information depicting the mouth element is extracted from the set of imaging information and incorporated into the interim image as metadata 206 associated with interim image 200 .
  • the image and metadata are transmitted by imaging system 10 to a home unit 208 such as a personal computer, personal digital assistant, or other device by way of a communication network 210 .
  • metadata 206 is incorporated into interim image 200 in a way that permits a user of the home receiver to access metadata 206 using software such as image editing software, image processing software or a conventional web browser.
  • the home user receives interim image 200 and, if desired, indicates that the user wishes to change or consider the option for changing the attributes of one of the elements. This can be done, for example, by hovering a mouse cursor over mouth element 204 of face object 202 in interim image 200 or otherwise indicating that an area of interim image 200 contains an element.
  • home unit 208 extracts the set of imaging information associated with mouth element 204 from metadata 206 and provides imaging information based upon the set of imaging information from which the home user can select attributes that are preferable to the home user.
  • a slide bar 212 appears on home unit 208 . By sliding slide bar 212 the user can move through the available set of imaging information associated with that image and select imaging information having preferred attributes.
  • the home receiver records an indication of which attributes are found to be preferable by the home user and adjusts the image to include those attributes. This allows each person captured in an image to adjust the attributes for that person in the archival image in order to optimize their appearance.
  • the adjusted group image can be adapted so that the adjusted group image and any copies of the image made from the adjusted group image will contain the preferred image attributes.
  • each recipient of the group image is provided with a copy of the group image that contains metadata for each image element and can select attributes for each element to form a local group image that is customized to the preferences of the recipient.
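  • As one possible encoding of metadata 206, the alternative imaging information for each element could be attached to the interim image in a PNG text chunk using Pillow; the chunk name and JSON layout below are assumptions, since the patent leaves the metadata container unspecified.

```python
import base64
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_element_alternatives(interim_path, out_path, alternatives):
    """Attach the alternative imaging information for each element to
    the interim image as metadata so a home unit can later re-select
    attributes. `alternatives` maps element id -> list of encoded patch
    bytes; `out_path` should end in .png."""
    payload = {
        str(element): [base64.b64encode(p).decode("ascii") for p in patches]
        for element, patches in alternatives.items()
    }
    info = PngInfo()
    info.add_text("element-alternatives", json.dumps(payload))
    Image.open(interim_path).save(out_path, pnginfo=info)
```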
  • the home receiver also provides a feedback signal by way of communication network 210 to imaging system 10 or to some other device 214, such as a storage device, server or printer containing the interim image, with the feedback signal indicating adjustments made by home unit 208.
  • This information can be received by imaging system 10 or other storage device 214 and then used to form an adjusted archival image having a user selected and optimized appearance. It will be appreciated that such editing can also be performed using user input system 34 to select desirable attributes for the adjusted archival image.
  • imaging system 10 has been shown generally in the form of a digital still or motion image camera type imaging system. It will be appreciated, however, that imaging system 10 of the present invention can be incorporated into, and the methods and computer program product described herein can be used by, any device that is capable of processing a set of imaging information, examples of which include: cellular telephones; personal digital assistants; hand held, tablet, desktop, notebook and other personal computers; and image processing appliances such as internet appliances and kiosks.
  • imaging system 10 can comprise a film or still image scanning system with lens system 23 and image sensor 24 adapted to scan imaging information from a set of images on a photographic film or prints and can even be adapted to obtain image information from a set of film image negatives.
  • imaging system 10 can comprise for example a personal computer, workstation, or other general purpose computing system having such an imaging system.
  • imaging system 10 can also comprise a scanning system such as those employed in conventional photofinishing systems such as the photographic processing apparatus described in commonly assigned U.S. Pat. No. 6,476,903 entitled “Image Processing” filed by Slater et al. on Jun. 21, 2000.

Abstract

In a first aspect of the invention, a method is provided for forming a group image. In accordance with the method, a set of imaging information is obtained depicting a scene over a period of time. Elements in the set of imaging information are distinguished and attributes of the elements in the set of image information are examined. Imaging information is selected from the set of imaging information depicting each element with the selection being made according to the attributes for that element. A group image is formed based upon the set of imaging information with the group image incorporating the selected image information.

Description

    FIELD OF THE INVENTION
  • The present invention relates to imaging systems and more particularly to imaging systems that are adapted to form images having multiple objects therein. [0001]
  • BACKGROUND OF THE INVENTION
  • Professional and amateur photographers often capture images of groups of people such as images of families and athletic teams. Such group images are typically used for commemorative purposes. A common problem with such group images is that often one or more members of the group will have an appearance at the time that the group image is captured that the member does not prefer. For example, group members can blink, look away, make a comment or otherwise compose their facial attributes in a non-preferable way. Similar problems can occur whenever images are captured that include more than one element. Examples of such elements include people, as described above, animals, objects, areas such as a background of a scene, and/or any other photographic subject that can change over time. Typically, photographers address this problem by capturing multiple images of the group of elements and selecting from the multiple images a group image that shows all of the elements in the group image having a generally acceptable appearance. Even where this is done, it is often the case that one or more elements has a less than optimal appearance. [0002]
  • Various forms of image editing software can be used to attempt to improve the appearance of elements in a group image. Such editing software typically includes automatic image correction algorithms that can resolve common image problems such as the so-called red-eye problem that can occur in images of people. See for example, commonly assigned U.S. Pat. Publ. No. 2003-0053663 entitled “Method and Computer Program Products for Locating Facial Features” filed by Chen et al. on Nov. 26, 2001. Further, advanced users of such image editing software can use manual image editing tools such as Adobe Photoshop™ software, sold by Adobe Systems Inc., San Jose, Calif., USA, to manually alter images. It will be appreciated, however, that the use of such image editing tools to correct a group image is time consuming and can yield results that have a less than authentic appearance. What is needed therefore is an imaging system and method that can effectively form optimal group images with an authentic appearance in a less time consuming manner. [0003]
  • SUMMARY OF THE INVENTION
  • In a first aspect of the invention, a method is provided for forming a group image. In accordance with the method, a set of imaging information is obtained depicting a scene over a period of time. Elements in the set of imaging information are distinguished and attributes of the elements in the set of image information are examined. Imaging information is selected from the set of imaging information depicting each element with the selection being made according to the attributes for that element. A group image is formed based upon the set of imaging information with the group image incorporating the selected image information. [0004]
  • In another aspect of the invention, a method for forming an image is provided. In accordance with this method, a set of imaging information is obtained depicting a scene over a period of time. A base image is provided based on the set of image information. Elements are identified in the base image and portions of the set of imaging information depicting each of the elements are ordered according to an attribute of each element. Imaging information from the set of imaging information is selected depicting each element according to the ordering. An image is formed based upon the set of imaging information with the base image incorporating the selected image information. [0005]
  • In still another aspect of the invention, a method for forming an image is provided. In accordance with this method, images of a scene are obtained at different times. Elements in the images are identified. Attributes for each of the elements in each of the images are determined and it is determined for each element which image shows the element having preferred attributes. An image is prepared of the scene with each element having an appearance that corresponds to the appearance of the element in the image that shows the preferred attributes for the element. [0006]
  • In another aspect of the invention, a computer program product is provided having data stored thereon for causing an imaging system to perform a method for forming a group image. In accordance with the method, a set of imaging information is obtained depicting a scene over a period of time. Elements in the set of imaging information are distinguished and attributes of the elements in the set of image information are examined. Imaging information is selected from the set of imaging information depicting each element, with the selection being made according to the attributes for that element. A group image is formed based upon the set of imaging information, with the group image incorporating the selected image information. [0007]
  • In another aspect of the invention, a computer program product is provided having data stored thereon for causing an imaging system to perform a method for forming an image. In accordance with this method, a set of imaging information is obtained depicting a scene over a period of time. A base image is provided based on the set of image information. Elements are identified in the base image and portions of the set of imaging information depicting each of the elements are ordered according to an attribute of each element. Imaging information from the set of imaging information is selected depicting each element according to the ordering. An image is formed based upon the set of imaging information with the base image incorporating the selected image information. [0008]
  • In another aspect of the invention, a computer program product is provided having data stored thereon for causing an imaging system to perform a method for forming a group image. In accordance with this method, images of a scene are obtained at different times. Elements in the images are identified. Attributes for each of the elements in each of the images are determined and it is determined for each element which image shows the element having preferred attributes. An image is prepared of the scene with each element having an appearance that corresponds to the appearance of the element in the image that shows the preferred attributes for the element. [0009]
  • In another aspect of the invention, an imaging system is provided. The imaging system has a source of a set of image information and a signal processor adapted to receive the set of image information, to identify elements in the set of image information, to distinguish elements in the set of image information, and to examine the attributes of the elements in the set of image information. The signal processor is further adapted to select imaging information from the set of imaging information depicting each element, with the selection being made according to the attributes for that element, and to form a group image based upon the set of imaging information with the group image incorporating the selected image information. [0010]
  • In still another aspect of the invention, an imaging system is provided. In accordance with this aspect, the imaging system has a source of imaging information and a signal processor adapted to obtain a set of imaging information from the source of imaging information depicting a scene over a period of time. The signal processor provides a base image based upon the set of imaging information and identifies elements in the base image. The signal processor orders the portions of the set of imaging information depicting each of the elements according to an attribute of each element. The processor selects imaging information from the set of imaging information depicting each element according to the ordering and forms a group image incorporating the selected image information. [0011]
  • In accordance with a further aspect of the invention, an imaging system is provided comprising a source of images of a scene captured at different times and a signal processor. The signal processor is adapted to obtain images from the source, to identify elements in the images, to determine attributes for each of the elements in each of the images, and to determine for each element which image shows the element having preferred attributes, wherein the signal processor prepares an image of the scene with each element having an appearance that corresponds to the appearance of the element in the image that shows the preferred attributes for the element. [0012]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows one embodiment of a composite imaging system of the present invention. [0013]
  • FIG. 2 shows a back view of the embodiment of FIG. 1. [0014]
  • FIG. 3 shows a flow diagram of one embodiment of a method for forming a group image in accordance with the present invention. [0015]
  • FIG. 4 shows an illustration of objects and elements within an image. [0016]
  • FIG. 5 shows an illustration of the use of a set of image information to form an image. [0017]
  • FIG. 6 shows an illustration of the use of a desired facial expression or mood selection to influence the appearance of an image. [0018]
  • FIG. 7 shows a flow diagram of one embodiment of a method for approving and ordering an image in accordance with the present invention. [0019]
  • FIG. 8 shows an illustration depicting the operation of another embodiment of the method of the present invention.[0020]
  • DETAILED DESCRIPTION OF THE INVENTION
  • [0021] FIG. 1 shows a block diagram of one embodiment of an imaging system 10. FIG. 2 shows a top, back, right side perspective view of the imaging system 10 of FIG. 1. As is shown in FIGS. 1 and 2, imaging system 10 comprises a body 20 containing an image capture system 22 having a lens system 23, an image sensor 24, a signal processor 26, an optional display driver 28 and a display 30. In operation, light from a scene is focused by lens system 23 to form an image on image sensor 24. Lens system 23 can have one or more elements. Lens system 23 can be of a fixed focus type or can be manually or automatically adjustable. Lens system 23 is optionally adjustable to provide a variable zoom that can be varied manually or automatically. Other known arrangements can be used for lens system 23.
  • [0022] Light from the scene that is focused by lens system 23 onto image sensor 24 is converted into image signals I representing an image of the scene. Image sensor 24 can comprise a charge-coupled device (CCD), a complementary metal oxide semiconductor (CMOS) sensor, or any other electronic image sensor known to those of ordinary skill in the art. Image signals I can be in digital or analog form.
  • [0023] Signal processor 26 receives image signals I from image sensor 24 and transforms image signal I into a set of imaging information S. Set of image information S can comprise a set of still images or other image information in the form of a video stream of apparently moving images. In such embodiments, the set of image information S can comprise image information in an interleaved or interlaced image form. Signal processor 26 can also apply image processing algorithms to image signals I in the formation of the set of image information S. These can include but are not limited to color and exposure balancing, interpolation and compression. Where image signals I are in the form of analog signals, signal processor 26 converts these analog signals into a digital form.
  • [0024] A controller 32 controls the operation of image capture system 22, including lens system 23, image sensor 24, signal processor 26, and a memory such as memory 40 during imaging operations. Controller 32 causes image sensor 24, signal processor 26, display 30 and memory 40 to capture, store and display images in response to signals received from a user input system 34, data from signal processor 26 and data received from optional sensors 36. Controller 32 can comprise a microprocessor such as a programmable general purpose microprocessor, a dedicated micro-processor or micro-controller, or any other system that can be used to control operation of imaging system 10.
  • [0025] User input system 34 can comprise any form of transducer or other device capable of receiving an input from a user and converting this input into a form that can be used by controller 32 in operating imaging system 10. For example, user input system 34 can comprise a touch screen input, a 4-way switch, a 6-way switch, an 8-way switch, a stylus system, a trackball system, a joystick system, a voice recognition system, a gesture recognition system or other such systems. In the embodiment shown in FIGS. 1 and 2 user input system 34 includes a shutter trigger button 60 that sends a trigger signal to controller 32 indicating a desire to capture an image.
  • [0026] As shown in FIGS. 1 and 2, user input system 34 also includes a wide-angle zoom button 62 and a tele zoom button 64 that control the zoom settings of lens system 23, causing lens system 23 to zoom out when wide-angle zoom button 62 is depressed and to zoom in when tele zoom button 64 is depressed. Wide-angle zoom button 62 and tele zoom button 64 can also be used to provide signals that cause signal processor 26 to process image signal I to provide a set of image information that appears to have been captured at a different zoom setting than that actually provided by the optical lens system. This can be done by using a subset of the image signal I and interpolating that subset to form the set of image information S. User input system 34 can also include other buttons, including the Fix-It button 66 and the Select-It button 68 shown in FIG. 2, the functions of which will be described in greater detail below.
  • [0027] Sensors 36 are optional and can include light sensors, range finders and other sensors known in the art that can be used to detect conditions in the environment surrounding imaging system 10 and to convert this information into a form that can be used by controller 32 in governing operation of imaging system 10.
  • [0028] Controller 32 causes a set of image information S to be captured when a trigger condition is detected. Typically, the trigger condition occurs when a user depresses shutter trigger button 60, however, controller 32 can determine that a trigger condition exists at a particular time, or at a particular time after shutter trigger button 60 is depressed. Alternatively, controller 32 can determine that a trigger condition exists when optional sensors 36 detect certain environmental conditions.
  • [0029] Controller 32 can also be used to generate metadata M in association with each image. Metadata M is data that is related to a set of image information, or to a portion of set of image information S, but that is not necessarily observable in the image data itself. In this regard, controller 32 can receive signals from signal processor 26, camera user input system 34 and other sensors 36 and, optionally, generates metadata M based upon such signals. Metadata M can include but is not limited to information such as the time, date and location that the archival image was captured, the type of image sensor 24, mode setting information, integration time information, and taking lens unit setting information that characterizes the process used to capture the archival image, as well as processes, methods and algorithms used by imaging system 10 to form the archival image. Metadata M can also include but is not limited to any other information determined by controller 32 or stored in any memory in imaging system 10, such as information that identifies imaging system 10, and/or instructions for rendering or otherwise processing the archival image with which metadata M is associated, such as an instruction to incorporate a particular message into the image. Metadata M can further include image information such as a set of display data, a set of image information S, or any part thereof. Metadata M can also include any other information entered into imaging system 10.
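  • By way of illustration only, the following minimal Python sketch shows one way that metadata M of the kind described above might be represented as a data structure. The class and field names (CaptureMetadata, capture_time, and so on) are editorial assumptions for illustration and are not part of the original disclosure.

    # A minimal sketch of a metadata record of the kind controller 32 might
    # associate with a set of imaging information. All names are illustrative
    # assumptions, not the disclosed format.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class CaptureMetadata:
        capture_time: str                  # time and date of capture
        location: Optional[str] = None     # capture location, if known
        sensor_type: str = "CCD"           # type of image sensor 24
        mode_setting: str = "auto"         # camera mode setting information
        integration_time_ms: float = 0.0   # sensor integration time
        lens_setting: str = ""             # taking lens unit setting
        rendering_instructions: list = field(default_factory=list)

    meta = CaptureMetadata(capture_time="2003-05-07T10:15:00",
                           rendering_instructions=["incorporate message"])
    print(meta)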
  • Set of image information S and optional metadata M can be stored in a compressed form. For example, where set of image information S comprises a sequence of still images, the still images can be stored in a compressed form such as by using the JPEG (Joint Photographic Experts Group) ISO 10918-1 (ITU-T T.81) standard. This JPEG compressed image data can be stored using the so-called “Exif” image format defined in the Exchangeable Image File Format version 2.2 published by the Japan Electronics and Information Technology Industries Association, JEITA CP-3451. Similarly, other compression systems such as the MPEG-4 (Moving Picture Experts Group) standard or the Apple QuickTime™ standard can be used to store a set of image information that is received in a video form. Other image compression and storage forms can be used. [0030]
  • [0031] The set of image information S can be stored in a memory such as memory 40. Memory 40 can include conventional memory devices including solid state, magnetic, optical or other data storage devices. Memory 40 can be fixed within imaging system 10 or it can be removable. In the embodiment of FIG. 1, imaging system 10 is shown having a memory card slot 46 that holds a removable memory 48, such as a removable memory card, and has a removable memory interface 50 for communicating with removable memory 48. The set of image information can also be stored in a remote memory system 52 that is external to imaging system 10, such as a personal computer, computer network or other imaging system.
  • [0032] In the embodiment shown in FIGS. 1 and 2, imaging system 10 has a communication module 54 for communicating with remote memory system 52. The communication module 54 can be, for example, an optical, radio frequency or other transducer that converts image and other data into a form that can be conveyed to the remote imaging system by way of an optical signal, radio frequency signal or other form of signal. Communication module 54 can also be used to receive a set of image information and other information from a host computer or network (not shown). Controller 32 can also receive information and instructions from signals received by communication module 54, including but not limited to signals from a remote control device (not shown) such as a remote trigger button (not shown), and can operate imaging system 10 in accordance with such signals.
  • [0033] Signal processor 26 optionally also converts image signals I into a set of display data DD that is in a format that is appropriate for presentation on display 30. Display 30 can comprise, for example, a color liquid crystal display (LCD), an organic light emitting display (OLED), also known as an organic electroluminescent display (OELD), or another type of video display. Display 30 can be external as is shown in FIG. 2, or it can be internal, for example as used in a viewfinder system 38. Alternatively, imaging system 10 can have more than one display, with, for example, one being external and one internal.
  • [0034] Typically, display 30 has less imaging resolution than image sensor 24. Accordingly, signal processor 26 reduces the resolution of image signal I when forming the set of display data DD adapted for presentation on display 30. Down sampling and other conventional techniques for reducing the overall imaging resolution can be used. For example, resampling techniques such as are described in commonly assigned U.S. Pat. No. 5,164,831, “Electronic Still Camera Providing Multi-Format Storage Of Full And Reduced Resolution Images,” filed by Kuchta et al. on Mar. 15, 1990, can be used. The set of display data DD can optionally be stored in a memory such as memory 40. The set of display data DD can be adapted to be provided to an optional display driver 28 that can be used to drive display 30. Alternatively, the display data can be converted by signal processor 26 into signals that can be transmitted in a form that directly causes display 30 to present the set of display data DD. Where this is done, display driver 28 can be omitted.
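  • For illustration, the resolution reduction described above can be sketched as follows. This is a minimal sketch using simple decimation; the cited resampling techniques would typically use filtered resampling rather than nearest-neighbor selection, and the sizes shown are arbitrary assumptions.

    # Sketch of reducing sensor-resolution image data (image signal I) to
    # display resolution (set of display data DD) by keeping every n-th pixel.
    import numpy as np

    def downsample(image: np.ndarray, factor: int) -> np.ndarray:
        # Nearest-neighbor decimation: keep every `factor`-th pixel per axis.
        return image[::factor, ::factor]

    sensor_image = np.zeros((1536, 2048, 3), dtype=np.uint8)
    display_data = downsample(sensor_image, 8)
    print(display_data.shape)  # (192, 256, 3), suitable for a small display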
  • [0035] Imaging system 10 can obtain a set of image information in a variety of ways. For example, imaging system 10 can capture a set of image information S using image sensor 24. Imaging operations that can be used to obtain a set of image information S from image capture system 22 include a capture process and can optionally also include a composition process and a verification process.
  • [0036] During the optional composition process, controller 32 causes signal processor 26 to cooperate with image sensor 24 to capture image signals I and present a set of display data DD on display 30. In the embodiment shown in FIGS. 1 and 2, controller 32 enters the image composition phase when shutter trigger button 60 is moved to a half depression position. However, other methods for determining when to enter a composition phase can be used. For example, a control of user input system 34, for example the “Fix-It” button 66 shown in FIG. 2, can be depressed by a user of imaging system 10 and can be interpreted by controller 32 as an instruction to enter the composition phase. The set of display data DD presented during composition can help a user to compose the scene for the capture of a set of image information S.
  • [0037] The capture process is executed in response to controller 32 determining that a trigger condition exists. In the embodiment of FIGS. 1 and 2, a trigger signal is generated when shutter trigger button 60 is moved to a full depression condition, and controller 32 determines that a trigger condition exists when controller 32 detects the trigger signal. During the capture process, controller 32 sends a capture signal causing digital signal processor 26 to obtain image signals I and to process the image signals I to form a set of image information S. A set of display data DD corresponding to the set of image information S is optionally formed for presentation on display 30.
  • [0038] During the verification phase, the corresponding display data DD is supplied to display 30 and is presented for a period of time. This permits a user to verify that the set of image information S is acceptable. In one alternative embodiment, signal processor 26 converts each image signal I into the set of imaging information S and then modifies the set of imaging information S to form a set of display data DD.
  • [0039] The group image forming features of imaging system 10 of FIGS. 1 and 2 will now be described with reference to FIGS. 3, 4, 5, 6, 7, and 8. FIG. 3 shows a flow diagram of an embodiment of a method for composing an image. FIG. 4 shows an illustration of objects and elements within an image. FIG. 5 shows an illustration of the use of a set of image information to form an image. FIG. 6 shows an illustration of the use of a desired facial expression or mood selection to influence the appearance of an image. FIG. 7 shows a flow diagram of an embodiment of a method for approving and ordering an image. FIG. 8 shows an illustration depicting the operation of another embodiment of the method of the present invention. In the following description, a method will be described. However, in another embodiment, the methods described hereinafter can take the form of a computer program product for forming a group image.
  • [0040] The computer program product for performing the described methods can be stored in a computer readable storage medium. This medium may comprise, for example: magnetic storage media such as a magnetic disk (such as a hard drive or a floppy disk) or magnetic tape; optical storage media such as an optical disc, optical tape, or machine readable bar code; solid state electronic storage devices such as random access memory (RAM) or read only memory (ROM); or any other physical device or medium employed to store a computer program. The computer program product for performing the described methods may also be stored on a computer readable storage medium that is connected to imaging system 10 by way of the internet or other communication medium. Those skilled in the art will readily recognize that the equivalent of such a computer program product can also be constructed in hardware.
  • [0041] In describing the following methods, it should be apparent that the computer program product embodiment can be utilized by any well-known computer system, including but not limited to the computing systems incorporated in imaging system 10 described above. However, many other types of computer systems can be used to execute the computer program embodiment. Examples of such other computer systems include personal computers, personal digital assistants, workstations, internet appliances and the like. Consequently, the computer system will not be discussed in further detail herein.
  • [0042] Turning now to FIG. 3, the method of forming a group image begins with imaging system 10 entering a mode for forming a group image (step 70). The group image forming mode can be entered automatically, with controller 32 entering the mode as a part of an initial start up operation that is executed when imaging system 10 is activated. Alternatively, the group image mode can be entered automatically when signal processor 26 determines that a set of image information contains an arrangement of image elements that suggests that the scene can be beneficially processed using a set of image information. The group image mode can also be entered when controller 32 detects a user selection at user input system 34, such as striking Fix-It button 66 shown in FIG. 2.
  • [0043] A set of image information S is then obtained (step 72). The set of imaging information S can be obtained using the imaging operations described above. For example, controller 32 can be adapted to receive a trigger signal from user input system 34. When the trigger signal is received, controller 32 causes a set of image information S to be obtained from image sensor 24 depicting the scene over a period of time. This set of image information S can comprise, for example, a sequence of archival still images captured over the period of time. This set of image information S can also comprise interlaced or other forms of video image information captured over the period of time. The period of time can begin at the moment that the trigger condition is detected.
  • [0044] Alternatively, where an image composition phase is used to capture images, controller 32 can cause a set of image information S to be stored in a first-in first-out buffer in a memory such as memory 40 during composition. Where this is done, the set of image information S can be captured during composition and fed into the buffer, so that at the time controller 32 determines that a trigger condition exists, the buffer contains imaging information depicting the scene for a period of time prior to the point in time at which controller 32 determines that the trigger condition exists. In this way, the set of imaging information S obtained can include imaging information obtained prior to the detected trigger condition. In another alternative embodiment of the present invention, a set of imaging information S can be obtained from any memory in imaging system 10. For example, the set of imaging information S can be obtained from a removable memory 48 having the set of imaging information recorded therein by another image capture device (not shown). Further, the set of imaging information can be obtained from an external source by way of communication module 54.
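  • The pre-trigger buffering described above can be illustrated with the following minimal Python sketch. The class name and frame representation are assumptions for illustration; the disclosure specifies only a first-in first-out buffer in a memory such as memory 40.

    # Sketch of a first-in first-out buffer that is filled during composition,
    # so that when the trigger condition is detected it already holds imaging
    # information from before the trigger.
    from collections import deque

    class PreTriggerBuffer:
        def __init__(self, capacity: int):
            self.frames = deque(maxlen=capacity)  # oldest frames drop first

        def add(self, frame):
            self.frames.append(frame)

        def drain(self):
            # Return the buffered pre-trigger imaging information.
            return list(self.frames)

    buf = PreTriggerBuffer(capacity=30)       # e.g. about 1 s at 30 frames/s
    for t in range(100):                      # frames arriving during composition
        buf.add(f"frame-{t}")
    pre_trigger = buf.drain()                 # frames 70..99 remain
    print(pre_trigger[0], pre_trigger[-1])    # frame-70 frame-99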
  • [0045] Objects and elements within a base image are distinguished within the set of imaging information (step 74). In the embodiment shown, this can be done by selecting a base image from the image stream and identifying objects and elements within the base image. The base image can be selected by selecting the first image in the set of imaging information S, by automatically selecting an image that corresponds to scene conditions at the time that the trigger condition is detected, or by automatically selecting any other image in the set of imaging information S based on some other selection strategy. The base image contains objects such as a region, person, place, or thing. An object can contain multiple elements; for example, where the object is a face of a person, elements can comprise the eyes and mouth of the person. Objects and/or elements can be detected in the imaging information using a variety of detection algorithms and methods including but not limited to human body detection algorithms such as those disclosed in commonly assigned U.S. Pat. Pub. No. 2002-0076100 entitled “Image processing method for detecting human figures in a digital image” filed by Luo on Dec. 14, 2000, and human face recognition algorithms such as those described in commonly assigned U.S. Pat. Pub. No. 2003-0021448 entitled “Method For Detecting Eye and Mouth Positions in a Digital Image” filed by Chen et al. on May 1, 2001.
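  • The object and element hierarchy that results from step 74 can be sketched as follows. The detector shown here is a hypothetical stand-in; an actual system would use the body detection and face recognition algorithms cited above.

    # Sketch of the data produced by distinguishing objects and elements in a
    # base image: objects (e.g. faces) contain elements (e.g. eyes, mouth).
    from dataclasses import dataclass

    @dataclass
    class Element:
        name: str      # e.g. "eyes" or "mouth"
        bbox: tuple    # (x, y, width, height) within the base image

    @dataclass
    class DetectedObject:
        name: str      # e.g. "face"
        elements: list  # the elements that compose the object

    def detect_faces(image) -> list:
        # Hypothetical detector returning one face with two elements.
        return [DetectedObject("face", [Element("eyes", (40, 30, 60, 20)),
                                        Element("mouth", (55, 70, 30, 15))])]

    for obj in detect_faces(image=None):  # the base image would be passed here
        print(obj.name, [e.name for e in obj.elements])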
  • It will be appreciated that first searching the image for objects can simplify the process of distinguishing elements within the objects by reducing the set of elements that are likely to be found within certain areas of the image. However, this is optional, and elements can also be identified in a base image without first distinguishing objects. Further, in certain embodiments, objects and/or elements can be distinguished within the set of imaging information S without first forming a base image. [0046]
  • [0047] Attributes of each of the elements are then examined (step 76). Each element has variable attributes that can change over the period of time captured in the set of image information. For example, the eyes of a face can open and close during the period of time, or a mouth can shift from smiling to not smiling. Time variable attributes of elements such as eyes or a mouth can be identified automatically, as they are easily recognizable as being of interest in facial images. However, in certain circumstances the user of imaging system 10 can manually identify elements and attributes of interest. Where this is to be done, the base image is presented on display 30 and the user of imaging system 10 can use user input system 34 to identify objects, elements and attributes in the base image that are of interest. Such objects, elements and attributes can be identified, for example, by name, icon, image, outline, arrow, or other visual or audio symbol or signal. For convenience, the identifier used for the element can be presented on display 30.
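  • As an illustration of examining a time-variable attribute, the following sketch scores how "open" an eye element appears. The dark-pixel heuristic is purely an editorial assumption; the disclosure does not specify a particular measurement.

    # Sketch of scoring the "eyes open" attribute of an eye element region.
    import numpy as np

    def eye_openness(eye_region: np.ndarray) -> float:
        # Assumed heuristic: more dark (pupil/iris) pixels -> eye more open.
        gray = eye_region.mean(axis=2)       # collapse RGB to intensity
        return float((gray < 80).mean())     # fraction of dark pixels, 0..1

    region = np.random.randint(0, 256, (20, 60, 3))   # stand-in eye region
    print(f"openness score: {eye_openness(region):.2f}")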
  • [0048] FIG. 4 shows a drawing of one example of a base image 90 from a set of image information, its objects, elements, and attributes. Image 90 is composed of two objects: face 92 and face 94. Face 92 is composed of two elements: eye element 96 and mouth element 98. Face 94 is likewise composed of two elements: eye element 100 and mouth element 102. The attribute of eye element 96 is eyes open. The attribute of eye element 100 is eyes closed. The attribute of mouth element 98 is mouth not smiling. The attribute of mouth element 102 is mouth smiling.
  • [0049] Objects and elements distinguished in base image 90 are then distinguished in like manner in the other images of the set of imaging information S (step 78). Attributes of each of the elements are then determined in the remaining portions of the set of imaging information S (step 80).
  • [0050] The imaging information depicting each element in the set of imaging information is ordered in decreasing attribute level across the available imaging information in the set of imaging information (step 82). This ordering is performed by comparing the appearance of each of the elements in the stream of imaging information to preferred attributes for the element. For example, if the best group image of a group of people is desired, then the attributes of eyes open and mouth smiling are of high priority. Therefore, the imaging information associated with an eye element can be ordered with imaging information depicting the eye element having the attribute of an open eye at the top of an ordered list and imaging information depicting the eye element having a closed or partially closed eye at the bottom of the ordered list. Similarly, imaging information depicting a mouth element having the attribute of smiling would be at the top of an ordered list for the mouth element, and imaging information depicting the mouth element having a non-smiling arrangement would be at the bottom of the ordered list.
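  • In effect, step 82 reduces to a sort. The following minimal sketch orders the imaging information for one element by a numeric attribute score; the scores are illustrative assumptions chosen to reproduce the eye element 124 column of Table 2 below.

    # Sketch of ordering imaging information in decreasing attribute level.
    eye_scores = {112: 0.6, 114: 0.6, 116: 0.5, 118: 0.9}  # score per image
    ordered = sorted(eye_scores, key=eye_scores.get, reverse=True)
    print(ordered)  # [118, 112, 114, 116] -- cf. the 124 column of Table 2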
  • [0051] The preferred attributes used for the ordering scheme can be determined automatically by analysis of the set of imaging information S to determine what attributes can be preferred for the image. Alternatively, the attributes to be used for ordering the image can be set by the user, for example by using user input system 34. Other imaging information ordering criteria can be used where other subjects are involved. For example, where the objects in the image include a group of show dogs posing for an image or performing an acrobatic activity such as jumping, the head direction of each dog can be determined and a preference can be shown for the attribute of each dog facing in the general direction of the camera.
  • [0052] FIG. 5 illustrates an example of the operation of the method of FIG. 3. FIG. 5 shows a set of imaging information 110 comprising a sequence of four images 112, 114, 116, 118 captured over a period of time. As is shown, each of the images 112, 114, 116 and 118 contains two objects, i.e., faces 120 and 122. The eye elements 124 and 126 and mouth elements 128 and 130 are identified as important image elements. The attributes of eye elements 124 and 126 in image 112 are examined first, and then the attributes of eye elements 124 and 126 in images 114, 116, and 118 are examined. The preferred element attributes are then automatically determined: in this case, open eyes and smiling mouth. The attribute levels determined for each element in each image are summarized in Table 1.
    Table 1. Attribute levels of each element in images 112-118

    Image    Eye element 124    Mouth element 128    Eye element 126    Mouth element 130
    112      medium             high                 high               medium
    114      medium             high                 medium             high
    116      medium             medium               low                high
    118      high               medium               medium             high
  • [0053] An ordered list of imaging information depicting eye elements 124 and 126 is formed based upon the closeness of the attributes of the eye elements in each of images 112, 114, 116 and 118 to the preferred attributes of the eye elements. See Table 2. The imaging information depicting the eye element having the highest ordered attributes on the ordered list is used to form a group image 132. Similarly, the mouth elements 128 and 130 in image 112 are examined and compared to the mouth elements 128 and 130 in images 114, 116 and 118, and an ordered list of imaging information having preferred mouth attributes is determined. See Table 2. The mouth elements 128 and 130 that are highest on the ordered list of mouth attributes are used to form the group image 132.
    Table 2. Imaging information ordered by attribute level (highest priority first)

    Priority    Eye element 124    Mouth element 128    Eye element 126    Mouth element 130
    1           118                112                  112                114
    2           112                114                  114                116
    3           114                116                  118                118
    4           116                118                  116                112
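  • Under these assumptions, composing group image 132 from the ordered lists of Table 2 amounts to taking the top-priority source image for each element, as the following minimal sketch shows; dictionaries stand in for the imaging data.

    # Worked example using Table 2: each element is supplied from its
    # top-priority source image when forming group image 132.
    ordered = {                      # element -> source images, best first
        124: [118, 112, 114, 116],   # eye element of face 120
        128: [112, 114, 116, 118],   # mouth element of face 120
        126: [112, 114, 118, 116],   # eye element of face 122
        130: [114, 116, 118, 112],   # mouth element of face 122
    }
    selection = {element: images[0] for element, images in ordered.items()}
    print(selection)  # {124: 118, 128: 112, 126: 112, 130: 114}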
  • Other stratagems can also be used in forming an ordered list of imaging information. [0054]
  • [0055] The group image is then automatically composed (step 84). This can be done in a variety of ways. In one embodiment, controller 32 and signal processor 26 select an interim image for use in forming group image 132. This can be done by selecting the base image. Alternatively, controller 32 can cooperate with signal processor 26 to determine which of the images available in the set of imaging information S has the highest overall combined attribute ranking for the various elements examined. Alternatively, controller 32 can cooperate with signal processor 26 to determine which of the images available in the set of imaging information S requires the least number of image processing steps to form a group image therefrom. For example, where the fewest number of image processing steps is the criterion for selecting the interim image, then image 118 can be selected, as only one step needs to be performed: fixing the appearance of the smile attribute of face 120. Alternatively, the step of selecting an interim image can comprise selecting the image that can most efficiently or most quickly be improved. For example, in FIG. 5, image 114 requires processing of both eye elements 124 and 126, thus requiring more than one correction; however, the processing effort required to correct the appearance of the eyes can be substantially less than the processing effort required to improve the appearance of the mouth element 128 of face object 120 in image 118. Other stratagems can also be used for selecting the interim image.
  • [0056] The attributes of the interim image are then examined to determine whether each of the elements of the objects in the interim image has attributes of the highest order for that element. Where an element is found that does not have attributes of the highest order, controller 32 and signal processor 26 extract imaging information from the set of imaging information S that corresponds to the highest ordered appearance of that element and insert that imaging information into the interim image in place of the imaging information in the interim image associated with that element. In this way, a multi-element image is formed with each object in the image having elements with preferred attributes. Further, such attributes are based upon actual scene imaging information, such as actual facial expressions, and are not based upon imaging information manufactured during the editing process. This provides a group image having a more natural and more realistic appearance.
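  • The replacement of imaging information described above can be sketched as a region copy between aligned images. This minimal sketch assumes the images are registered and represents element locations as simple bounding boxes, neither of which the disclosure requires.

    # Sketch of forming the group image: regions of the interim image whose
    # elements are not highest-ordered are overwritten with the corresponding
    # regions from the best-ranked source images.
    import numpy as np

    def composite(interim: np.ndarray, sources: dict, plan: dict) -> np.ndarray:
        # plan maps an element bbox (x, y, w, h) to the id of its best source.
        out = interim.copy()
        for (x, y, w, h), src_id in plan.items():
            out[y:y + h, x:x + w] = sources[src_id][y:y + h, x:x + w]
        return out

    # Flat color arrays stand in for images 112, 114 and 118.
    sources = {i: np.full((100, 100, 3), i, dtype=np.uint8) for i in (112, 114, 118)}
    plan = {(10, 10, 20, 10): 118,   # eye element region taken from image 118
            (10, 40, 20, 10): 112}   # mouth element region taken from image 112
    group = composite(sources[114], sources, plan)   # image 114 as interim image
    print(group[15, 15], group[45, 15])              # [118 ...] and [112 ...]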
  • [0057] It will be appreciated that group photos and other group images can be captured of scenes and circumstances wherein it is not preferred that each member of the group smiles; rather, for certain group photos a different mood or facial expression can be preferred. A user of imaging system 10 can use user input system 34 to define such expressions. For example, a desired facial expression of “scared” can be selected by the user of imaging system 10.
  • [0058] FIG. 6 shows how the selection of the desired facial expression or mood affects the composite picture outcome. In this example, a base image 140 is selected and elements are identified. In the illustration of FIG. 6, base image 140 comprises two objects: a first face 142 and a second face 144. First face 142 has a neutral expression and second face 144 has a neutral expression. To compose an image having the desired scared expression, eye elements 146 and 148 and mouth elements 150 and 152 are examined over the time period captured in the set of imaging information and ordered as described above. However, in this example, the ordering is done with eye elements 146 and 148 and mouth elements 150 and 152 ordered in accordance with their similarity to eye element 164 and mouth element 166 associated with a scared expression template 162 stored within a template library 154 that also contains other expression templates such as a happy template 156, an angry template 158 and a cynical template 160. The template library 154 can comprise, for example, a library of template images or other imaging information, such as image analysis and processing algorithms, that associates the attributes of elements in an object, such as mouth or eye elements in a facial image, with an expression. The templates in library 154 therefore can comprise images, algorithms, models and other data that can be used to evaluate the attributes of the elements detected in each object in the scene. The templates can be based on typical human facial expressions, or the templates can be derived from previous photographs or other images of first face 142 and second face 144. The templates can be stored in imaging system 10 or in a remote device that can be contacted by way of communication module 54.
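  • The comparison against an expression template can be sketched as a similarity score over which the imaging information is then ordered. The pixel-difference measure used here is an editorial assumption; the disclosure permits templates that are images, algorithms or models.

    # Sketch of ordering element imaging information by similarity to a
    # template (e.g. scared template 162) from template library 154.
    import numpy as np

    def template_score(element_region: np.ndarray, template: np.ndarray) -> float:
        # Higher score means a closer match to the template.
        return -float(np.abs(element_region.astype(float) - template).mean())

    scared_template = np.random.randint(0, 256, (20, 60))          # stand-in
    candidates = {t: np.random.randint(0, 256, (20, 60)) for t in range(4)}
    ranking = sorted(candidates,
                     key=lambda t: template_score(candidates[t], scared_template),
                     reverse=True)
    print("imaging information ordered by template similarity:", ranking)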
  • [0059] As is shown in FIG. 6, after ordering, a group image 170 is then composed as described above, with each object, e.g., faces 142 and 144, having scared eye elements 172 and 174, respectively, and scared mouth elements 176 and 178, each having an appearance associated with the highest ordered attributes of the elements.
  • [0060] It will be appreciated that, in certain circumstances, the set of imaging information S may not contain an expression that appropriately represents the desired expression, or may not suggest the desired expression to the extent desired. Accordingly, a threshold test can optionally be used. For example, in the embodiment shown in FIG. 6, the ordering process can be performed so that the attributes of the features in the set of imaging information S are compared to scared template 162 and ordered according to a scoring scale. When this is done, the overall score for second face 144, for example, can be compared to a threshold score. Where the score is below the threshold, it can be determined that the set of imaging information S does not contain sufficient information for an appearance of the expression desired. When this occurs, controller 32 can use communication module 54 to obtain imaging information depicting the elements having desired attributes from a remote memory system 52 having a database or template library depicting second face 144. Controller 32 incorporates this remotely obtained imaging information into group image 170 in order to more closely adapt the appearance of eye element 148 and mouth element 152 of second face 144 to the desired “scared” expression, to yield the group image 170 wherein second face 144 has the scared eye element 174 and scared mouth element 178 that correspond to the desired appearance.
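  • The threshold test and the remote fallback can be sketched as follows. fetch_remote_element is a hypothetical stand-in for communication module 54 querying remote memory system 52, and the threshold value is an arbitrary assumption.

    # Sketch of the optional threshold test: if no local imaging information
    # scores high enough for the desired expression, remote imaging
    # information depicting the face is obtained instead.
    THRESHOLD = 0.75

    def fetch_remote_element(face_id: str, expression: str) -> str:
        return f"remote imaging information: {face_id}, {expression}"

    def best_or_remote(scores: dict, face_id: str, expression: str):
        best_frame, best_score = max(scores.items(), key=lambda kv: kv[1])
        if best_score >= THRESHOLD:
            return ("local", best_frame)
        return ("remote", fetch_remote_element(face_id, expression))

    print(best_or_remote({112: 0.40, 114: 0.60}, "face-144", "scared"))
    # ('remote', 'remote imaging information: face-144, scared')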
  • The selection of a desired expression can be made in a variety of ways. For example, the selection can be made on an image by image basis with the selection made once for each image and applied to all elements in the image. Alternatively, the selection of the desired expression can be made on an element by element basis with each element having an individually selected desired expression or other desired attribute. For example, certain persons may feel that their appearance is optimized under circumstances where they have a big smile while other persons may feel that their appearance is optimized with a more subdued expression. In such circumstances, desired expressions can be selected for each person in an image. [0061]
  • [0062] FIG. 7 shows an alternative embodiment of the present invention in which the ordering process is performed in a semi-automatic fashion. In this embodiment, a set of imaging information S is obtained (step 180). Controller 32 can obtain the set of imaging information to be sent by capturing archival images or a set of archival imaging information such as a video stream, as described above. Controller 32 can also obtain a set of images to be sent by extracting the digital images from a memory, such as removable memory 48. A set of imaging information can also be obtained using communication module 54.
  • [0063] The set of imaging information S is provided to one or more decision makers (step 182). Controller 32 can provide the set of imaging information S to each decision maker, such as, for example, a person whose image is incorporated into the set of imaging information S. This can be done, for example, by presenting the set of imaging information S to the person using display 30 or by using communication module 54 to transmit the set of imaging information S to a remote terminal, personal digital assistant, personal computer or other display device.
  • [0064] After the set of imaging information S has been provided to the decision makers, each decision maker reviews the set of imaging information and provides an indication of which image in the set of imaging information has objects with elements having desired attributes (step 184). This can be done in a variety of ways. For example, where an image includes a group of elements, a decision can be made for each element in the set of imaging information S as to which portion of the set of imaging information S depicts the element as having favored attributes. For example, one or more elements can be isolated, for example by highlighting the element in a base image, and a decision maker can then select from the imaging information that portion of the imaging information that depicts that element as having favored attributes. This selection can be made using user input system 34, for example by depressing the Select-It button 68 shown in FIG. 2.
  • [0065] When a selection is made, user input system 34 generates a signal that indicates which segment of the set of imaging information S has imaging information that depicts that person with elements having the attributes preferred by that person. Controller 32 interprets the signals from user input system 34 as indicating that the selected imaging information contains desired attributes. It will be appreciated that circumstances can arise where more than one decision maker makes recommendations as to which portion of a set of imaging information S contains a preferred attribute. Such conflicts can be prevented by limiting certain decision makers to providing input only on selected elements. For example, where a group image comprises an image of a group of people, each person in the image can act as a decision maker for the elements associated with that person but not for others. Alternatively, such conflicts can be resolved by providing each person in the image with a different group image tailored to the preferences of that person. The user input information can be used to help form the group image in two ways. In one way, a user preference can be used in place of the ordering step described above. Alternatively, the ordering steps described in previous embodiments can be used and the user preference information can be used to adjust the ordering performed on the imaging information.
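  • One simple way the decision-maker input could adjust the automatic ordering, offered here only as an illustrative assumption, is to promote the selected portion of the imaging information to the head of the element's ordered list:

    # Sketch of adjusting an element's ordered list using a decision maker's
    # selection: the preferred frame is moved to highest priority.
    def apply_preference(ordering: list, preferred_frame: int) -> list:
        rest = [f for f in ordering if f != preferred_frame]
        return [preferred_frame] + rest

    auto_order = [118, 112, 114, 116]          # automatic ordering, element 124
    print(apply_preference(auto_order, 114))   # [114, 118, 112, 116]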
  • [0066] Controller 32 then forms a group image based at least in part upon the input received from user input system 34 (step 186). There are a number of ways that this can be done. For example, a single group image can be formed based upon the input from all of the decision makers. Alternatively, controller 32 can monitor the inputs from each decision maker, with the group image selected by each decision maker being combined with the inputs made by the other decision makers to adjust the ordering of the attributes of the elements.
  • [0067] FIG. 8 shows an illustration of another embodiment of the method of the present invention. In this embodiment, an interim image 200 is generated by imaging system 10 as described above. Interim image 200 contains imaging information that depicts an object 202, which is shown in FIG. 8 as a face object having a mouth element 204. In this embodiment, the set of imaging information depicting the mouth element is extracted from the set of imaging information and incorporated into the interim image as metadata 206 associated with interim image 200. The image and metadata are transmitted by imaging system 10 to a home unit 208, such as a personal computer, personal digital assistant, or other device, by way of a communication network 210. In this embodiment, metadata 206 is incorporated into interim image 200 in a way that permits a user of home unit 208 to access metadata 206 using software such as image editing software, image processing software or a conventional web browser. The home user receives interim image 200 and, if desired, indicates that the user wishes to change, or to consider the option of changing, the attributes of one of the elements. This can be done, for example, by hovering a mouse cursor over mouth element 204 of face object 202 in interim image 200 or by otherwise indicating that an area of interim image 200 contains an element.
  • [0068] When this occurs, home unit 208 extracts the set of imaging information associated with mouth element 204 from metadata 206 and provides imaging information, based upon the set of imaging information, from which the home user can select attributes that are preferable to the home user. In the embodiment illustrated, when the home user indicates a desire to change the appearance of mouth element 204, a slide bar 212 appears on home unit 208. By sliding slide bar 212, the user can move through the available set of imaging information associated with that element and select imaging information having preferred attributes. Home unit 208 records an indication of which attributes are found to be preferable by the home user and adjusts the image to include those attributes. This allows each person captured in an image to adjust the attributes for that person in the archival image in order to optimize their appearance. The adjusted group image can be adapted so that the adjusted group image, and any copies made from it, will contain the preferred image attributes. In another alternative of this type, each recipient of the group image is provided with a copy of the group image that contains metadata for each image element and can select attributes for each element to form a local group image that is customized to the preferences of the recipient.
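  • The exchange between interim image 200, metadata 206 and slide bar 212 can be sketched as follows. The dictionary layout and field names are editorial assumptions; the disclosure specifies only that per-element imaging information travels with the image as metadata.

    # Sketch of an interim image carrying alternative imaging information for
    # an element as metadata, and of applying the user's slider selection.
    interim_image = {
        "pixels": "interim image 200",
        "metadata": {
            "mouth-204": ["variant from image 112", "variant from image 114",
                          "variant from image 116", "variant from image 118"],
        },
    }

    def apply_selection(image: dict, element: str, slider_pos: int) -> str:
        # Return the imaging information chosen with slide bar 212.
        return image["metadata"][element][slider_pos]

    print(apply_selection(interim_image, "mouth-204", slider_pos=2))
    # 'variant from image 116'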
  • [0069] Optionally, home unit 208 also provides a feedback signal by way of communication network 210 to imaging system 10, or to some other device 214 such as a storage device, server or printer containing the interim image, with the feedback signal indicating adjustments made by home unit 208. This information can be received by imaging system 10 or other device 214 and then used to form an adjusted archival image having a user selected and optimized appearance. It will be appreciated that such editing can also be performed at imaging system 10, using user input system 34 to select desirable attributes for the adjusted archival image.
  • [0070] Although imaging system 10 has been shown generally in the form of a digital still or motion image camera type imaging system, it will be appreciated that imaging system 10 of the present invention can be incorporated into, and the methods and computer program product described herein can be used by, any device that is capable of processing a set of imaging information, examples of which include: cellular telephones; personal digital assistants; hand held, tablet, desktop, notebook and other personal computers; and image processing appliances such as internet appliances and kiosks. Further, imaging system 10 can comprise a film or still image scanning system, with lens system 23 and image sensor 24 adapted to scan imaging information from a set of images on a photographic film or prints, and can even be adapted to obtain image information from a set of film image negatives. In such an application, imaging system 10 can comprise, for example, a personal computer, workstation, or other general purpose computing system having such a scanning system.
  • [0071] Alternatively, imaging system 10 can also comprise a scanning system such as those employed in conventional photofinishing systems, for example the photographic processing apparatus described in commonly assigned U.S. Pat. No. 6,476,903 entitled “Image Processing” filed by Slater et al. on Jun. 21, 2000.
  • The invention has been described in detail with particular reference to preferred embodiments thereof, but it will be understood that variations and modifications can be effected within the spirit and scope of the invention. [0072]
  • Parts List
  • [0073] 10 imaging system
  • [0074] 20 body
  • [0075] 22 image capture system
  • [0076] 23 lens system
  • [0077] 24 image sensor
  • [0078] 26 signal processor
  • [0079] 28 display driver
  • [0080] 30 display
  • [0081] 32 controller
  • [0082] 34 user input system
  • [0083] 36 sensors
  • [0084] 38 viewfinder
  • [0085] 40 memory
  • [0086] 46 removable memory slot
  • [0087] 48 removable memory
  • [0088] 50 removable memory interface
  • [0089] 52 remote memory system
  • [0090] 54 communication module
  • [0091] 60 shutter trigger button
  • [0092] 62 “wide” angle zoom button
  • [0093] 64 “tele” zoom button
  • [0094] 66 fix-it button
  • [0095] 68 select-it button
  • [0096] 70 enter group image forming mode step
  • [0097] 72 obtain set of imaging information step
  • [0098] 74 distinguish objects and elements step
  • [0099] 76 examine attributes of elements step
  • [0100] 78 distinguish objects and elements over set of imaging information step
  • [0101] 80 examine attributes of elements over set of imaging information step
  • [0102] 82 order imaging information step
  • [0103] 84 compose image step
  • [0104] 90 base image
  • [0105] 92 face
  • [0106] 94 face
  • [0107] 96 eye element
  • [0108] 98 mouth element
  • [0109] 100 eye element
  • [0110] 102 mouth element
  • [0111] 110 set of imaging information
  • [0112] 112 image
  • [0113] 114 image
  • [0114] 116 image
  • [0115] 118 image
  • [0116] 120 face object
  • [0117] 122 face object
  • [0118] 124 eye element
  • [0119] 126 eye element
  • [0120] 128 mouth element
  • [0121] 130 mouth element
  • [0122] 132 group image
  • [0123] 140 base image
  • [0124] 142 face
  • [0125] 144 face
  • [0126] 146 eye element
  • [0127] 148 eye element
  • [0128] 150 mouth element
  • [0129] 152 mouth element
  • [0130] 154 template library
  • [0131] 156 happy template
  • [0132] 158 angry template
  • [0133] 160 cynical template
  • [0134] 162 scared template
  • [0135] 164 eye element
  • [0136] 166 mouth element
  • [0137] 170 group image
  • [0138] 172 scared eye element
  • [0139] 174 scared eye element
  • [0140] 176 scared mouth element
  • [0141] 178 scared mouth element
  • [0142] 180 obtain set of imaging information step
  • [0143] 182 provide set of imaging information to decision maker(s) step
  • [0144] 184 receive selection signal step
  • [0145] 186 form image based on selection signal step
  • [0146] 200 interim image
  • [0147] 202 face object
  • [0148] 204 mouth element
  • [0149] 206 metadata
  • [0150] 208 home unit
  • [0151] 210 communication network
  • [0152] 212 slide bar
  • [0153] 214 other device

Claims (42)

What is claimed is:
1. A method for forming a group image, the method comprising the steps of:
obtaining a set of imaging information depicting a scene over a period of time;
distinguishing elements in the set of image information;
examining attributes of the elements in the set of image information;
selecting imaging information from the set of imaging information depicting each element, with the selection being made according to the attributes for that element; and,
forming a group image based upon the set of imaging information with the group image incorporating the selected image information.
2. The method of claim 1 wherein the step of examining attributes of the elements comprises comparing the portions of the set of imaging information associated with the element to a template for that element.
3. The method of claim 1 wherein the step of examining attributes of the elements comprises obtaining an input identifying preferred attributes for an element and comparing the portions of the set of imaging information associated with the element to the preferred attributes.
4. The method of claim 1 wherein the step of examining attributes of the elements comprises comparing the portions of the set of imaging information associated with the element to a template for that element and determining a score based upon the comparison.
5. The method of claim 4, wherein the scores are compared to a threshold and, further comprising the step of obtaining imaging information for the element from a source other than the set of imaging information when no score meets the threshold.
6. The method of claim 4, further comprising the step of determining a preferred expression and wherein scores are based upon correspondence of facial features of elements with the preferred expression.
7. The method of claim 1, wherein the step of examining attributes of the elements comprises comparing the portions of the set of imaging information associated with the element to a template for that element, wherein the attributes for the element are examined based upon a preferred expression for the group image.
8. A method for forming an image, the method comprising the steps of:
obtaining a set of imaging information depicting a scene over a period of time;
providing a base image based upon the set of image information;
identifying elements in the base image;
ordering portions of the set of imaging information depicting each of the elements according to an attribute of each element;
selecting imaging information from the set of imaging information depicting each element according to the ordering; and,
forming an image based upon the set of imaging information with the image incorporating the selected image information.
9. The method of claim 8, wherein the step of providing a base image comprises automatically selecting an image from the set of image information.
10. The method of claim 8, wherein the step of providing a base image comprises receiving a manual input selecting the base image.
11. The method of claim 8 wherein the step of ordering the portions of the set of imaging information depicting each of the elements according to an attribute of each element is based at least in part upon a user input.
12. The method of claim 8, wherein the step of ordering the portions of the set of imaging information depicting each of the elements according to an attribute of each element is based at least in part upon user inputs from more than one user.
13. The method of claim 8, wherein the step of ordering the portions of the set of imaging information depicting each of the elements according to an attribute of each element comprises comparing the appearance of the imaging information depicting each of the elements to a template over the period of time that the element is depicted in the set of imaging information and ordering the imaging information associated with the element over the set of imaging information based upon the attributes for the element.
14. A method for forming an image comprising the steps of:
obtaining images of a scene at different times;
identifying elements in the images;
determining attributes of each of the elements in each of the images;
determining for each element which image shows the element having preferred attributes; and,
preparing an image of the scene with each element having an appearance that corresponds to the appearance of the element in the image that shows the preferred attributes for the element.
15. The method of claim 14, wherein the images contain a group of more than one person and wherein the step of determining for each element which image shows the element having preferred attributes comprises soliciting an input from each person indicating a preferred attribute for elements associated with that person.
16. The method of claim 14, wherein the images contain a group of more than one element and wherein the step of determining for each element which image shows the element having preferred attributes comprises the steps of transmitting imaging information including imaging information depicting the appearance of the elements over the set of imaging information for review and receiving a response from users having authority to provide attribute preference information.
17. A computer program product having data stored thereon for causing an imaging system to perform a method of forming a group image, the method comprising the steps of:
obtaining a set of imaging information depicting a scene over a period of time;
distinguishing elements in the set of image information;
examining attributes of the elements in the set of image information;
selecting imaging information from the set of imaging information depicting each element, with the selection being made according to the attributes for that element; and,
forming a group image based upon the set of imaging information with the group image incorporating the selected image information.
18. A computer program product having data stored thereon for causing an imaging system to perform a method for forming an image, the method comprising the steps of:
obtaining a set of imaging information depicting a scene over a period of time;
providing a base image based upon the set of image information;
identifying elements in the base image;
ordering the portions of the set of imaging information depicting each of the elements according to an attribute of each element;
selecting imaging information from the set of imaging information depicting each element according to the ordering; and,
forming an image based upon the set of imaging information with the image incorporating the selected image information.
19. A computer program product having data stored thereon for causing an imaging system to perform a method for forming a group image, the method comprising the steps of:
obtaining images of a scene at different times;
identifying elements in the images;
determining attributes of each of the elements in each of the images;
determining for each element which image shows the element having preferred attributes; and,
preparing an image of the scene with each element having an appearance that corresponds to the appearance of the element in the image that shows the preferred attributes for the element.
20. An imaging system comprising:
a source of a set of image information; and
a signal processor adapted to receive the set of image information, to identify and distinguish elements in the set of image information, and to examine the attributes of the elements in the set of image information;
wherein the signal processor is further adapted to select imaging information from the set of imaging information depicting each element, with the selection being made according to the attributes for that element; and, to form a group image based upon the set of imaging information with the group image incorporating the selected image information.
21. The imaging system of claim 20, wherein the source of a set of image information comprises an image capture system for capturing the set of image information.
22. The imaging system of claim 20, wherein the source of image information comprises a scanner for obtaining imaging information from film, paper, or other tangible medium of expression.
23. The imaging system of claim 20, further comprising a communication module for obtaining the set of imaging information from a separate source.
24. The imaging system of claim 20, further comprising a display adapted to present the set of imaging information and a user input adapted to receive input that can be used to identify preferred attributes for each element, and wherein the signal processor examines attributes of the elements by obtaining an input identifying imaging information for an element having preferred attributes and by comparing the portions of the set of imaging information associated with elements to the preferred attributes.
25. The imaging system of claim 20, further comprising a remote imaging system, said remote imaging system having a remote communication module adapted to receive imaging information from the set of imaging information and to provide feedback to the signal processor that the signal processor can use to select image information.
26. The imaging system of claim 20, wherein the signal processor examines attributes of the elements by comparing the portions of the set of imaging information associated with the element to a template for that element.
27. The imaging system of claim 20, wherein the signal processor examines attributes of the elements by comparing the portions of the set of imaging information associated with the element to a template for that element and determining a score based upon the comparison.
28. The imaging system of claim 27, wherein the imaging system comprises a communication module and the signal processor compares the scores to a threshold and uses the communication module to obtain imaging information for the element from a different source when no score meets the threshold.
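The following is a sketch of the score-and-threshold behavior described in claims 27 and 28, reusing the hypothetical template_score helper from the earlier sketch; fetch_remote is a stand-in for the communication module obtaining imaging information for the element from a different source, and the threshold value is an assumption of the example.

```python
def select_or_fetch(patches, template, threshold, fetch_remote):
    """patches: one candidate region per frame for a single element.
    fetch_remote: stand-in for obtaining the element's imaging
    information from a different source via a communication module."""
    scores = [template_score(p, template) for p in patches]  # score per comparison
    best = max(range(len(scores)), key=scores.__getitem__)
    if scores[best] >= threshold:
        return patches[best]
    return fetch_remote()   # no score met the threshold; use the other source
```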
29. The imaging system of claim 20, wherein the signal processor examines attributes of the elements by comparing the portions of the set of imaging information associated with the element to a template for that element, and wherein the attribute for the element is analyzed based upon a mood setting for the group image.
30. An imaging system comprising:
a source of a set of image information;
a signal processor adapted to obtain a set of imaging information depicting a scene over a period of time, to provide a base image based upon the set of imaging information, to identify elements in the base image, and to order the portions of the set of imaging information depicting each of the elements according to an attribute of each element, wherein the processor selects imaging information from the set of imaging information depicting each element according to the ordering; and forms an archival image based upon the set of imaging information with the archival image incorporating the selected image information.
31. The imaging system of claim 30, wherein the signal processor automatically selects a base image from the set of image information.
32. The imaging system of claim 30, further comprising a display for presenting the set of imaging information and a user input adapted to receive an input selecting a base image from the set of image information.
33. The imaging system of claim 30, further comprising a user input, wherein the signal processor orders the portions of the set of imaging information depicting each of the elements according to an attribute of each element based at least in part upon a user input.
34. The imaging system of claim 30, further comprising a user input, wherein the signal processor orders the portions of the set of imaging information depicting each of the elements according to an attribute of each element based at least in part upon inputs from more than one user.
35. The imaging system of claim 30, wherein the signal processor orders the portions of the set of imaging information depicting each of the elements according to an attribute of each element by comparing the appearance of the imaging information depicting each of the elements to a template over the period of time that the element is depicted in the set of imaging information and ordering the imaging information associated with the element over the set of imaging information based upon the attributes for the element.
36. An imaging system comprising:
a source of images of a scene captured at different times;
a signal processor adapted to obtain images from the source, to identify elements in the images, to determine attributes of each of the elements in each of the images, and to determine for each element which image shows the element having preferred attributes, wherein the signal processor prepares an image of the scene with each element having an appearance that corresponds to the appearance of the element in the image that shows the preferred attributes for the element.
37. The imaging system of claim 36, further comprising a display and a user input, wherein the images contain more than one person and wherein the signal processor determines for each element which image shows the element having preferred attributes by using the display and user input to solicit an input from at least one person indicating a preferred attribute for at least one element.
38. The imaging system of claim 36, wherein the images contain a group of more than one element and wherein the signal processor determines for each element which image shows the element having preferred attributes by transmitting imaging information, including imaging information depicting the appearance of the elements over the set of imaging information, for review and by receiving a response from users having authority to provide attribute preference information.
39. The imaging system of claim 36, wherein the source of the set of image information comprises a scanning system.
40. The imaging system of claim 36, wherein the source of the set of image information comprises an image capture system adapted to capture imaging information from a scene.
41. The imaging system of claim 36, wherein the source of the set of image information comprises a communication module.
42. The imaging system of claim 36, further comprising a memory interface adapted to receive a removable memory, wherein the source of the set of image information comprises a removable memory received in the memory interface.
US10/431,057 2003-05-07 2003-05-07 Composite imaging method and system Abandoned US20040223649A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US10/431,057 US20040223649A1 (en) 2003-05-07 2003-05-07 Composite imaging method and system
US11/681,499 US20070182829A1 (en) 2003-05-07 2007-03-02 Composite imaging method and system
US12/975,720 US8600191B2 (en) 2003-05-07 2010-12-22 Composite imaging method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/431,057 US20040223649A1 (en) 2003-05-07 2003-05-07 Composite imaging method and system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/681,499 Division US20070182829A1 (en) 2003-05-07 2007-03-02 Composite imaging method and system

Publications (1)

Publication Number Publication Date
US20040223649A1 true US20040223649A1 (en) 2004-11-11

Family

ID=33416376

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/431,057 Abandoned US20040223649A1 (en) 2003-05-07 2003-05-07 Composite imaging method and system
US11/681,499 Abandoned US20070182829A1 (en) 2003-05-07 2007-03-02 Composite imaging method and system
US12/975,720 Expired - Fee Related US8600191B2 (en) 2003-05-07 2010-12-22 Composite imaging method and system

Family Applications After (2)

Application Number Title Priority Date Filing Date
US11/681,499 Abandoned US20070182829A1 (en) 2003-05-07 2007-03-02 Composite imaging method and system
US12/975,720 Expired - Fee Related US8600191B2 (en) 2003-05-07 2010-12-22 Composite imaging method and system

Country Status (1)

Country Link
US (3) US20040223649A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005303991A (en) * 2004-03-15 2005-10-27 Fuji Photo Film Co Ltd Imaging device, imaging method, and imaging program
JP2008035328A (en) * 2006-07-31 2008-02-14 Fujifilm Corp Template generating device, image locating device, change template generating device and program thereof
JP5055939B2 (en) * 2006-10-12 2012-10-24 株式会社ニコン Digital camera
US8558913B2 (en) * 2010-02-08 2013-10-15 Apple Inc. Capture condition selection from brightness and motion
US8866847B2 (en) * 2010-09-14 2014-10-21 International Business Machines Corporation Providing augmented reality information
US9283484B1 (en) * 2012-08-27 2016-03-15 Zynga Inc. Game rhythm
US9247129B1 (en) * 2013-08-30 2016-01-26 A9.Com, Inc. Self-portrait enhancement techniques
CN111413877A (en) * 2020-03-24 2020-07-14 珠海格力电器股份有限公司 Method and device for controlling household appliance

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5057091A (en) * 1989-07-31 1991-10-15 Corpak, Inc. Enteral feeding tube with a flexible bolus and feeding bolus
US5550928A (en) * 1992-12-15 1996-08-27 A.C. Nielsen Company Audience measurement system and method
US6345274B1 (en) * 1998-06-29 2002-02-05 Eastman Kodak Company Method and computer program product for subjective image content similarity-based retrieval
JP3805145B2 (en) 1999-08-02 2006-08-02 富士写真フイルム株式会社 Imaging apparatus and group photo image forming method
JP3955170B2 (en) * 2000-07-03 2007-08-08 富士フイルム株式会社 Image search system
WO2003009216A1 (en) * 2001-07-17 2003-01-30 Yesvideo, Inc. Automatic selection of a visual image based on quality

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5057019A (en) * 1988-12-23 1991-10-15 Sirchie Finger Print Laboratories Computerized facial identification system
US5164831A (en) * 1990-03-15 1992-11-17 Eastman Kodak Company Electronic still camera providing multi-format storage of full and reduced resolution images
US5638502A (en) * 1992-12-25 1997-06-10 Casio Computer Co., Ltd. Device for creating a new object image relating to plural object images
US5715325A (en) * 1995-08-30 1998-02-03 Siemens Corporate Research, Inc. Apparatus and method for detecting a face in a video image
US6366316B1 (en) * 1996-08-30 2002-04-02 Eastman Kodak Company Electronic imaging system for generating a composite image using the difference of two images
US6661906B1 (en) * 1996-12-19 2003-12-09 Omron Corporation Image creating apparatus
US6476903B1 (en) * 1998-05-29 2002-11-05 Eastman Kodak Company Image processing
US6272231B1 (en) * 1998-11-06 2001-08-07 Eyematic Interfaces, Inc. Wavelet-based facial motion capture for avatar animation
US20010046330A1 (en) * 1998-12-29 2001-11-29 Stephen L. Shaffer Photocollage generation and modification
US6778703B1 (en) * 2000-04-19 2004-08-17 International Business Machines Corporation Form recognition using reference areas
US7024053B2 (en) * 2000-12-04 2006-04-04 Konica Corporation Method of image processing and electronic camera
US20020076100A1 (en) * 2000-12-14 2002-06-20 Eastman Kodak Company Image processing method for detecting human figures in a digital image
US6897880B2 (en) * 2001-02-22 2005-05-24 Sony Corporation User interface for generating parameter values in media presentations based on selected presentation instances
US20030021448A1 (en) * 2001-05-01 2003-01-30 Eastman Kodak Company Method for detecting eye and mouth positions in a digital image
US20030053663A1 (en) * 2001-09-20 2003-03-20 Eastman Kodak Company Method and computer program product for locating facial features
US20040076313A1 (en) * 2002-10-07 2004-04-22 Technion Research And Development Foundation Ltd. Three-dimensional face recognition

Cited By (109)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7684630B2 (en) 2003-06-26 2010-03-23 Fotonation Vision Limited Digital image adjustable compression and resolution using face detection information
US7853043B2 (en) 2003-06-26 2010-12-14 Tessera Technologies Ireland Limited Digital image processing using face detection information
US7693311B2 (en) 2003-06-26 2010-04-06 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US8224108B2 (en) 2003-06-26 2012-07-17 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US8498452B2 (en) 2003-06-26 2013-07-30 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US8131016B2 (en) 2003-06-26 2012-03-06 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US9053545B2 (en) 2003-06-26 2015-06-09 Fotonation Limited Modification of viewing parameters for digital images using face detection information
US7912245B2 (en) 2003-06-26 2011-03-22 Tessera Technologies Ireland Limited Method of improving orientation and color balance of digital images using face detection information
US8989453B2 (en) 2003-06-26 2015-03-24 Fotonation Limited Digital image processing using face detection information
US8948468B2 (en) 2003-06-26 2015-02-03 Fotonation Limited Modification of viewing parameters for digital images using face detection information
US8675991B2 (en) 2003-06-26 2014-03-18 DigitalOptics Corporation Europe Limited Modification of post-viewing parameters for digital images using region or feature information
US8126208B2 (en) 2003-06-26 2012-02-28 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US8326066B2 (en) 2003-06-26 2012-12-04 DigitalOptics Corporation Europe Limited Digital image adjustable compression and resolution using face detection information
US8005265B2 (en) 2003-06-26 2011-08-23 Tessera Technologies Ireland Limited Digital image processing using face detection information
US7860274B2 (en) 2003-06-26 2010-12-28 Fotonation Vision Limited Digital image processing using face detection information
US7809162B2 (en) 2003-06-26 2010-10-05 Fotonation Vision Limited Digital image processing using face detection information
US7844135B2 (en) 2003-06-26 2010-11-30 Tessera Technologies Ireland Limited Detecting orientation of digital images using face detection information
US7844076B2 (en) 2003-06-26 2010-11-30 Fotonation Vision Limited Digital image processing using face detection and skin tone information
US7848549B2 (en) 2003-06-26 2010-12-07 Fotonation Vision Limited Digital image processing using face detection information
US7702136B2 (en) 2003-06-26 2010-04-20 Fotonation Vision Limited Perfecting the effect of flash within an image acquisition devices using face detection
US8055090B2 (en) 2003-06-26 2011-11-08 DigitalOptics Corporation Europe Limited Digital image processing using face detection information
US7539353B2 (en) * 2004-07-06 2009-05-26 Fujifilm Corporation Image processing apparatus and image processing program
US20060007501A1 (en) * 2004-07-06 2006-01-12 Fuji Photo Film Co., Ltd. Image processing apparatus and image processing program
US8210848B1 (en) * 2005-03-07 2012-07-03 Avaya Inc. Method and apparatus for determining user feedback by facial expression
US7962629B2 (en) 2005-06-17 2011-06-14 Tessera Technologies Ireland Limited Method for establishing a paired connection between media devices
US7659923B1 (en) * 2005-06-24 2010-02-09 David Alan Johnson Elimination of blink-related closed eyes in portrait photography
US20070070181A1 (en) * 2005-07-08 2007-03-29 Samsung Electronics Co., Ltd. Method and apparatus for controlling image in wireless terminal
US8908906B2 (en) * 2005-07-27 2014-12-09 Canon Kabushiki Kaisha Image processing apparatus and image processing method, and computer program for causing computer to execute control method of image processing apparatus
US20130039583A1 (en) * 2005-07-27 2013-02-14 Canon Kabushiki Kaisha Image processing apparatus and image processing method, and computer program for causing computer to execute control method of image processing apparatus
EP1840834A3 (en) * 2006-03-27 2008-01-23 General Electric Company Article inspection apparatus
US8368749B2 (en) 2006-03-27 2013-02-05 Ge Inspection Technologies Lp Article inspection apparatus
EP1847960A3 (en) * 2006-03-27 2008-01-23 General Electric Company Inspection apparatus for inspecting articles
EP1847960A2 (en) * 2006-03-27 2007-10-24 General Electric Company Inspection apparatus for inspecting articles
CN101267546B (en) * 2006-03-27 2013-05-29 通用电气检查技术有限合伙人公司 Inspection apparatus for inspecting articles
US8310533B2 (en) 2006-03-27 2012-11-13 GE Sensing & Inspection Technologies, LP Inspection apparatus for inspecting articles
US20070225931A1 (en) * 2006-03-27 2007-09-27 Ge Inspection Technologies, Lp Inspection apparatus for inspecting articles
US7965875B2 (en) 2006-06-12 2011-06-21 Tessera Technologies Ireland Limited Advances in extending the AAM techniques from grayscale to color images
US8055067B2 (en) 2007-01-18 2011-11-08 DigitalOptics Corporation Europe Limited Color segmentation
US8482651B2 (en) * 2007-02-15 2013-07-09 Sony Corporation Image processing device and image processing method
US20100066840A1 (en) * 2007-02-15 2010-03-18 Sony Corporation Image processing device and image processing method
US8224039B2 (en) 2007-02-28 2012-07-17 DigitalOptics Corporation Europe Limited Separating a directional lighting variability in statistical face modelling based on texture space decomposition
US8509561B2 (en) 2007-02-28 2013-08-13 DigitalOptics Corporation Europe Limited Separating directional lighting variability in statistical face modelling based on texture space decomposition
US8503800B2 (en) 2007-03-05 2013-08-06 DigitalOptics Corporation Europe Limited Illumination detection using classifier chains
US8649604B2 (en) 2007-03-05 2014-02-11 DigitalOptics Corporation Europe Limited Face searching and detection in a digital image acquisition device
US8923564B2 (en) 2007-03-05 2014-12-30 DigitalOptics Corporation Europe Limited Face searching and detection in a digital image acquisition device
US9224034B2 (en) 2007-03-05 2015-12-29 Fotonation Limited Face searching and detection in a digital image acquisition device
US8515138B2 (en) 2007-05-24 2013-08-20 DigitalOptics Corporation Europe Limited Image processing method and apparatus
US7916971B2 (en) 2007-05-24 2011-03-29 Tessera Technologies Ireland Limited Image processing method and apparatus
WO2008150285A1 (en) * 2007-05-24 2008-12-11 Fotonation Vision Limited Image processing method and apparatus
US9025837B2 (en) 2007-05-24 2015-05-05 Fotonation Limited Image processing method and apparatus
US8494232B2 (en) 2007-05-24 2013-07-23 DigitalOptics Corporation Europe Limited Image processing method and apparatus
USRE47775E1 (en) * 2007-06-07 2019-12-17 Sony Corporation Imaging apparatus, information processing apparatus and method, and computer program therefor
US11470241B2 (en) 2008-01-27 2022-10-11 Fotonation Limited Detecting facial expressions in digital images
US11689796B2 (en) 2008-01-27 2023-06-27 Adeia Imaging Llc Detecting facial expressions in digital images
US9462180B2 (en) 2008-01-27 2016-10-04 Fotonation Limited Detecting facial expressions in digital images
US8750578B2 (en) 2008-01-29 2014-06-10 DigitalOptics Corporation Europe Limited Detecting facial expressions in digital images
US8494286B2 (en) 2008-02-05 2013-07-23 DigitalOptics Corporation Europe Limited Face detection in mid-shot digital images
US7855737B2 (en) 2008-03-26 2010-12-21 Fotonation Ireland Limited Method of making a digital camera image of a scene including the camera user
US8243182B2 (en) 2008-03-26 2012-08-14 DigitalOptics Corporation Europe Limited Method of making a digital camera image of a scene including the camera user
US20090322926A1 (en) * 2008-06-25 2009-12-31 Tetsuo Ikeda Image processing apparatus and image processing method
EP2138979A1 (en) * 2008-06-25 2009-12-30 Sony Corporation Image processing
US8106991B2 (en) 2008-06-25 2012-01-31 Sony Corporation Image processing apparatus and image processing method
US9007480B2 (en) 2008-07-30 2015-04-14 Fotonation Limited Automatic face and skin beautification using face detection
US8345114B2 (en) 2008-07-30 2013-01-01 DigitalOptics Corporation Europe Limited Automatic face and skin beautification using face detection
US8384793B2 (en) 2008-07-30 2013-02-26 DigitalOptics Corporation Europe Limited Automatic face and skin beautification using face detection
US10942964B2 (en) 2009-02-02 2021-03-09 Hand Held Products, Inc. Apparatus and method of embedding meta-data in a captured image
US11042793B2 (en) 2009-06-12 2021-06-22 Hand Held Products, Inc. Portable data terminal
US9519814B2 (en) 2009-06-12 2016-12-13 Hand Held Products, Inc. Portable data terminal
US9959495B2 (en) 2009-06-12 2018-05-01 Hand Held Products, Inc. Portable data terminal
US20110013038A1 (en) * 2009-07-15 2011-01-20 Samsung Electronics Co., Ltd. Apparatus and method for generating image including multiple people
US8964066B2 (en) 2009-07-15 2015-02-24 Samsung Electronics Co., Ltd Apparatus and method for generating image including multiple people
US8411171B2 (en) * 2009-07-15 2013-04-02 Samsung Electronics Co., Ltd Apparatus and method for generating image including multiple people
US9792012B2 (en) 2009-10-01 2017-10-17 Mobile Imaging In Sweden Ab Method relating to digital images
US10032068B2 (en) 2009-10-02 2018-07-24 Fotonation Limited Method of making a digital camera image of a first scene with a superimposed second scene
US9319640B2 (en) * 2009-12-29 2016-04-19 Kodak Alaris Inc. Camera and display system interactivity
US20110157221A1 (en) * 2009-12-29 2011-06-30 Ptucha Raymond W Camera and display system interactivity
US9396569B2 (en) 2010-02-15 2016-07-19 Mobile Imaging In Sweden Ab Digital image manipulation
US9196069B2 (en) 2010-02-15 2015-11-24 Mobile Imaging In Sweden Ab Digital image manipulation
US8928770B2 (en) * 2010-04-26 2015-01-06 Kyocera Corporation Multi-subject imaging device and imaging method
US20110261219A1 (en) * 2010-04-26 2011-10-27 Kyocera Corporation Imaging device, terminal device, and imaging method
CN103548034A (en) * 2011-05-23 2014-01-29 微软公司 Automatically optimizing capture of images of one or more subjects
US20120300092A1 (en) * 2011-05-23 2012-11-29 Microsoft Corporation Automatically optimizing capture of images of one or more subjects
US9344642B2 (en) 2011-05-31 2016-05-17 Mobile Imaging In Sweden Ab Method and apparatus for capturing a first image using a first configuration of a camera and capturing a second image using a second configuration of a camera
US9432583B2 (en) 2011-07-15 2016-08-30 Mobile Imaging In Sweden Ab Method of providing an adjusted digital image representation of a view, and an apparatus
WO2013012370A1 (en) * 2011-07-15 2013-01-24 Scalado Ab Method of providing an adjusted digital image representation of a view, and an apparatus
US20130141605A1 (en) * 2011-12-06 2013-06-06 Youngkoen Kim Mobile terminal and control method for the same
US8885068B2 (en) * 2011-12-06 2014-11-11 Lg Electronics Inc. Mobile terminal and control method for the same
US20130201366A1 (en) * 2012-02-03 2013-08-08 Sony Corporation Image processing apparatus, image processing method, and program
WO2013131536A1 (en) * 2012-03-09 2013-09-12 Sony Mobile Communications Ab Image recording method and corresponding camera device
CN104471596A (en) * 2012-07-18 2015-03-25 株式会社日立制作所 Computer, guide information provision device, and recording medium
US20140153832A1 (en) * 2012-12-04 2014-06-05 Vivek Kwatra Facial expression editing in images based on collections of images
US20140176764A1 (en) * 2012-12-21 2014-06-26 Sony Corporation Information processing device and recording medium
US9432581B2 (en) * 2012-12-21 2016-08-30 Sony Corporation Information processing device and recording medium for face recognition
US9876507B2 (en) * 2013-02-22 2018-01-23 Sap Se Semantic compression of structured data
US20140244602A1 (en) * 2013-02-22 2014-08-28 Sap Ag Semantic compression of structured data
US9560271B2 (en) * 2013-07-16 2017-01-31 Samsung Electronics Co., Ltd. Removing unwanted objects from photographed image
WO2015018244A1 (en) * 2013-08-07 2015-02-12 Microsoft Corporation Augmenting and presenting captured data
US10817613B2 (en) 2013-08-07 2020-10-27 Microsoft Technology Licensing, Llc Access and management of entity-augmented content
US10255253B2 (en) 2013-08-07 2019-04-09 Microsoft Technology Licensing, Llc Augmenting and presenting captured data
US10776501B2 (en) 2013-08-07 2020-09-15 Microsoft Technology Licensing, Llc Automatic augmentation of content through augmentation services
US10217222B2 (en) 2013-10-28 2019-02-26 Google Llc Image cache for replacing portions of images
US9478056B2 (en) 2013-10-28 2016-10-25 Google Inc. Image cache for replacing portions of images
US20150332123A1 (en) * 2014-05-14 2015-11-19 At&T Intellectual Property I, L.P. Image quality estimation using a reference image portion
US10026010B2 (en) * 2014-05-14 2018-07-17 At&T Intellectual Property I, L.P. Image quality estimation using a reference image portion
US20180204097A1 (en) * 2017-01-19 2018-07-19 Adobe Systems Incorporated Automatic Capture and Refinement of a Digital Image of a Group of People without User Intervention
US10176616B2 (en) * 2017-01-19 2019-01-08 Adobe Inc. Automatic capture and refinement of a digital image of a group of people without user intervention
US10475222B2 (en) * 2017-09-05 2019-11-12 Adobe Inc. Automatic creation of a group shot image from a short video clip using intelligent select and merge
US10991141B2 (en) 2017-09-05 2021-04-27 Adobe Inc. Automatic creation of a group shot image from a short video clip using intelligent select and merge
US20190289225A1 (en) * 2018-03-19 2019-09-19 Panasonic Intellectual Property Management Co., Ltd. System and method for generating group photos

Also Published As

Publication number Publication date
US20070182829A1 (en) 2007-08-09
US20110193972A1 (en) 2011-08-11
US8600191B2 (en) 2013-12-03

Similar Documents

Publication Publication Date Title
US8600191B2 (en) Composite imaging method and system
US7869658B2 (en) Representative image selection based on hierarchical clustering
US7822233B2 (en) Method and apparatus for organizing digital media based on face recognition
US10839199B2 (en) Image selecting device, image selecting method, image pickup apparatus, and computer-readable medium
US8938100B2 (en) Image recomposition from face detection and facial features
US9025836B2 (en) Image recomposition from face detection and facial features
US9336442B2 (en) Selecting images using relationship weights
US20040257380A1 (en) Imaging method and system
JP7076974B2 (en) Image processing equipment, control methods and programs
US20130108168A1 (en) Image Recomposition From Face Detection And Facial Features
US20130108164A1 (en) Image Recomposition From Face Detection And Facial Features
JP6887816B2 (en) Image processing equipment, control methods, and programs
JP2007096405A (en) Method, device and program for judging direction of camera shake
WO2008045236A1 (en) Differential cluster ranking for image record access
US20130108171A1 (en) Image Recomposition From Face Detection And Facial Features
US20130108170A1 (en) Image Recomposition From Face Detection And Facial Features
US20130108166A1 (en) Image Recomposition From Face Detection And Facial Features
US9025835B2 (en) Image recomposition from face detection and facial features
JP2002258682A (en) Image forming device
JP2010225082A (en) Image data management system and image data management method
JP5556330B2 (en) Image processing apparatus, image processing method, and program thereof
US20130108157A1 (en) Image Recomposition From Face Detection And Facial Features
US20130108167A1 (en) Image Recomposition From Face Detection And Facial Features
US20130050744A1 (en) Automated photo-product specification method
JP2006180403A (en) Information processing apparatus

Legal Events

Date Code Title Description
AS Assignment

Owner name: EASTMAN KODAK COMPANY, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZACKS, CAROLYN A.;TELEK, MICHAEL J.;HAREL, DAN;AND OTHERS;REEL/FRAME:014054/0342

Effective date: 20030506

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: FPC INC., CALIFORNIA

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: NPEC INC., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK IMAGING NETWORK, INC., CALIFORNIA

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK AVIATION LEASING LLC, NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: FAR EAST DEVELOPMENT LTD., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK PHILIPPINES, LTD., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK (NEAR EAST), INC., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK PORTUGUESA LIMITED, NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: EASTMAN KODAK INTERNATIONAL CAPITAL COMPANY, INC.,

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: CREO MANUFACTURING AMERICA LLC, WYOMING

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: PAKON, INC., INDIANA

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: EASTMAN KODAK COMPANY, NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK REALTY, INC., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: LASER-PACIFIC MEDIA CORPORATION, NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: QUALEX INC., NORTH CAROLINA

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201

Owner name: KODAK AMERICAS, LTD., NEW YORK

Free format text: PATENT RELEASE;ASSIGNORS:CITICORP NORTH AMERICA, INC.;WILMINGTON TRUST, NATIONAL ASSOCIATION;REEL/FRAME:029913/0001

Effective date: 20130201