|Publication number||US7806604 B2|
|Application number||US 11/163,497|
|Publication date||Oct 5, 2010|
|Filing date||Oct 20, 2005|
|Priority date||Oct 20, 2005|
|Also published as||US20070092245, WO2007047719A2, WO2007047719A3|
|Publication number||11163497, 163497, US 7806604 B2, US 7806604B2, US-B2-7806604, US7806604 B2, US7806604B2|
|Inventors||Michael E. Bazakos, Vassilios Morellas|
|Original Assignee||Honeywell International Inc.|
|Patent Citations (36), Non-Patent Citations (38), Referenced by (22), Classifications (6), Legal Events (2)|
The present invention relates generally to the field of face detection and tracking. More specifically, the present invention relates to face detection and tracking in a wide field of view.
Surveillance systems are being used with increasing frequency to detect and track individuals within an environment. In security applications, for example, such systems are often employed to detect and track individuals entering or leaving a building facility or security gate, or to monitor individuals within a store, hospital, museum or other such location where the health and/or safety of the occupants may be of concern. More recent trends in the art have focused on the use of facial detection and tracking methods to determine the identity of individuals located within a field of view. In the aviation industry, for example, such systems have been installed in airports to acquire a facial scan of individuals as they pass through various security checkpoints, which are then compared against images contained in a facial image database to determine whether an individual poses a security threat.
Current facial detection and tracking systems typically rely on the use of one or more pan-tilt-zoom (PTZ) cameras to track individuals located within a wide field of view. Such devices can include an optical system operatively coupled to a number of drive motors that permit the operator to zoom-in on the details of an individual, or to monitor a larger area from multiple camera angles. In certain designs, each of the cameras within the system can be connected to a computer equipped with image processing software and/or hardware that can be used to process images received from the cameras in order to detect the identity of the individual.
Due to the high resolution often necessary to accurately detect facial features, many prior-art facial detection and tracking systems lack the ability to both detect and track individuals within a wide field of view while simultaneously acquiring information sufficient to perform facial recognition. In systems employing PTZ cameras, for example, the ability of the camera to effectively track motion within a wide field of view is often limited by the speed and accuracy of the positioning mechanism employed. If, for example, the individual is located within a moving vehicle or is otherwise moving quickly through the image field, such cameras may not be able to adequately cover the entire image field while still providing sufficient resolution to extract features from the individual's face. In some cases, the inability of the camera to accurately track individuals moving through the image field can also prevent multiple individuals from being detected and/or tracked simultaneously within a wide field of view.
The present invention relates generally to face detection and tracking systems and methods in a wide field of view. A facial detection and tracking system in accordance with an illustrative embodiment of the present invention can include a wide field of view camera for detecting one or more objects within a wider field of view, and at least one narrower field of view camera for obtaining a higher-resolution image of at least one object located within a subset space of the wider field of view. The narrower field of view cameras can, in some embodiments, be arranged in an array or pattern that, when seamed together, covers the entire field of view without the need for a positioning and/or zoom mechanism. In certain embodiments, the narrower field of view cameras can be overlapped slightly to facilitate the detection of objects moving from one subset space to the next.
In some illustrative embodiments, the face detection and tracking system can employ one or more tri-band imaging (TBI) cameras to detect and analyze various facial features utilizing a combination of low band near-IR light, high band near-IR light, and/or visible light. A near-IR illuminator can be provided to generate near-IR light on the individual, which can then be sensed by the one or more TBI cameras to determine the presence of skin and/or to detect various facial features. In certain embodiments, an adjustment module can also be provided for adjusting the amount of luminance emitted from the near-IR illuminator, if desired.
An illustrative method for detecting and tracking an individual within a wide field of view can include the steps of detecting an object using a wide field of view camera, determining the subset space location of the object within the wide field of view, tasking one or more narrower field of view cameras covering the subset space location to acquire one or more higher-resolution images of the object, and then processing the higher-resolution images to obtain one or more parameters relating to the object. In certain illustrative embodiments, the one or more narrower field of view cameras can be configured to obtain facial images of a tracked individual, which can then be compared against a facial image database to determine the identity of the individual. Various processing routines can be employed to detect and confirm the presence of skin and/or to detect one or more facial features related to the individual.
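The detect, locate, task, and process steps of the illustrative method above can be sketched in Python. The camera interfaces, class names, and the recognition callback below are hypothetical stand-ins for illustration only, not part of this disclosure:

```python
# Hypothetical sketch of the illustrative method: detect an object in the wide
# field of view, locate its subset space, task the covering narrower field of
# view camera, and process the higher-resolution image it acquires.
from dataclasses import dataclass


@dataclass
class SubsetCamera:
    """A fixed narrower field of view camera covering one subset space."""
    x_range: tuple  # (x_min, x_max) in wide-FOV coordinates
    y_range: tuple  # (y_min, y_max) in wide-FOV coordinates

    def covers(self, x, y):
        return (self.x_range[0] <= x <= self.x_range[1]
                and self.y_range[0] <= y <= self.y_range[1])


def track_and_identify(wide_camera, subset_cameras, recognize):
    """One pass of the detect -> locate -> task -> process loop."""
    detection = wide_camera.detect_object()   # low-resolution detection
    if detection is None:
        return None
    x, y = detection                          # location within the wide FOV
    for cam in subset_cameras:                # find the covering subset space
        if cam.covers(x, y):
            image = cam.acquire()             # task camera: higher-res image
            return recognize(image)           # e.g. facial recognition
    return None
```

Because each narrower field of view camera is fixed, the lookup above stands in for the pan, tilt, and zoom steps a PTZ camera would otherwise have to perform.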
The following description should be read with reference to the drawings, in which like elements in different drawings are numbered in like fashion. The drawings, which are not necessarily to scale, depict illustrative embodiments and are not intended to limit the scope of the invention. Although examples of various steps are illustrated in the various views, those skilled in the art will recognize that many of the examples provided have suitable alternatives that can be utilized. Moreover, while several illustrative applications are described throughout the disclosure, it should be understood that the present invention could be employed in other applications where facial detection and tracking is desired.
To detect one or more facial features of the individual 14 as they move through the wide field of view in the general direction indicated, for example, by arrow 24, the PTZ camera 12 can be configured to pan and/or tilt in a direction towards the individual's face 22 and initiate an optical-zoom or telephoto mode, wherein the PTZ camera 12 zooms-in on the area surrounding the individual's face 22. In certain designs, for example, the PTZ camera 12 can include a vari-focus optical lens that can be adjusted to concentrate the PTZ camera 12 on a particular space within the wide field of view in order to provide a higher-resolution image of the face 22 sufficient to perform facial recognition of the individual 14. In other designs, digital techniques can also be employed to adjust the resolution of the PTZ camera 12, such as, for example, by altering the resolution of a charge coupled device (CCD) or other such optical device within the PTZ camera 12.
The PTZ camera 12 can be configured to monitor the wide field of view until an object of interest has been detected, or, in the alternative, can be configured to scan various subset spaces within the wide field of view until such motion is detected. In the latter case, the PTZ camera 12 can be programmed to scan an area in some predefined or random path until an object of interest is detected. Once an individual 14 or other object of interest has been detected, the PTZ camera 12 can then be configured to focus on the individual 14 and acquire an image of the individual's face 22 in the higher-resolution, telephoto mode.
Because of the time required for the positioning mechanism to pan and/or tilt towards the individual 14 and to zoom in on the individual's face 22, many PTZ cameras 12 are limited in their ability to track individuals moving quickly through a wide field of view. If, for example, the individual 14 is positioned inside a moving vehicle or is otherwise moving through the image field at a rapid rate, the PTZ camera 12 may not be able to adequately track the individual while still providing the steady image necessary to perform facial recognition. In those systems in which the PTZ camera 12 is configured to scan the environment in a predefined or random path, the particular path traveled by the individual 14 through the wide field of view may even escape detection by the PTZ camera 12 altogether.
As can be further seen by reference to dashed lines 18′ and 20′ in
The wide field of view camera 28 can be configured to continuously operate in a wide-angle mode to constantly track objects of interest within a wide field of view. As can be seen in
In certain embodiments, the wide field of view camera 28 can be configured to operate in a low-resolution mode sufficient to detect and/or track an object of interest within the wide field of view while conserving power. The resolution capability of the wide field of view camera will depend on a number of factors, including, for example, the viewing angle of the camera, the pixel density of the optical system employed, and the various characteristics of the surrounding environment. While the illustrative wide field of view camera 28 depicted in
Each of the narrower field of view cameras 30,32 can be directed and/or focused on a subset space of the wide field of view for obtaining a facial image of the individual 34. As shown in
In use, the narrower field of view cameras 30,32 can be configured to provide a higher-resolution image of the individual's face 56 to detect and analyze various facial features of the individual 34 that cannot be resolved with a wider field of view camera. As with the wide field of view camera 28, each of the narrower field of view cameras 30,32 can be fixed in position, covering a subset field of view that does not change significantly as the individual 34 moves from one field of view to another. In operation, this arrangement permits each narrower field of view camera 30,32 to track the individual 34 without having to first pan, tilt, and/or zoom in on the individual 34. Moreover, since each of the narrower field of view cameras 30,32 remains fixed during tracking, the ability of the system to accurately track objects of interest is not limited by the accuracy and/or speed of the positioning mechanism employed.
In certain embodiments, the narrower field of view cameras 30,32 can be overlapped slightly to facilitate the detection and tracking of objects as they move from one subset space 44,50 to the next. In the illustrative embodiment of
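The slight overlap between adjacent subset spaces can be sketched as a grid layout in which each cell is grown by a small margin on every side. The grid shape and overlap fraction below are illustrative assumptions, not values taken from the disclosure:

```python
# Hypothetical layout of narrower fields of view over the wide field of view,
# with each grid cell grown by a fraction of its size so that adjacent subset
# spaces share a margin, facilitating hand-off as an object crosses between them.

def subset_spaces(wide_width, wide_height, cols, rows, overlap=0.1):
    """Return (x_min, x_max, y_min, y_max) for each subset space in a
    cols x rows grid, each cell expanded by `overlap` of its dimensions
    and clamped to the wide field of view."""
    cell_w = wide_width / cols
    cell_h = wide_height / rows
    spaces = []
    for r in range(rows):
        for c in range(cols):
            x0 = max(0.0, c * cell_w - overlap * cell_w)
            x1 = min(wide_width, (c + 1) * cell_w + overlap * cell_w)
            y0 = max(0.0, r * cell_h - overlap * cell_h)
            y1 = min(wide_height, (r + 1) * cell_h + overlap * cell_h)
            spaces.append((x0, x1, y0, y1))
    return spaces
```

With `overlap=0`, the same layout yields the discrete (non-overlapping) arrangement the disclosure also contemplates.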
The various cameras 28,30,32 forming the facial tracking and detection system 26 can be physically separated from each other at various locations within the environment, or can comprise a single camera unit including multiple cameras. In the latter case, for example, each of the cameras 28,30,32 can be disposed within a housing or inset within a wall, ceiling or other desired structure. In the illustrative embodiment of
While the illustrative embodiment of
As can be further seen in
In certain embodiments, the narrower field of view cameras 64 can comprise tri-band imaging (TBI) cameras, which use low band near-IR light, high band near-IR light, and visible light to analyze, detect, and match an individual's face. Such devices typically utilize the near-infrared light spectrum to scan facial images by sensing the IR light reflected from the individual's face. The ability to detect such reflected IR light avoids a characteristic problem inherent in many conventional visible-spectrum systems, which attempt to analyze non-facial portions of the image during facial recognition. Moreover, since TBI cameras also utilize IR spectrum light to detect the presence of the individual's face, such devices are not as susceptible to environmental conditions such as glare through a window or windshield, inclement weather (e.g. fog, haze, rain, etc.), nighttime conditions, etc. that can affect the ability of the system to acquire clear image signals.
A near-IR illuminator 76 can be provided for generating near-IR light in both the low and high near-IR spectrums, if desired. In certain applications, for example, the near-IR illuminator 76 can be utilized to direct light towards the individual to obtain a clearer image of the individual's face during nighttime, or when other conditions exist. Since the near-IR light is outside of the visible spectrum, such light is not detectable by the naked eye, and therefore does not alert the individual that he or she is being detected and/or tracked.
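The adjustment module described earlier, which varies the luminance emitted from the near-IR illuminator, might be sketched as a simple proportional control step. The target brightness, gain, and normalized power scale below are purely illustrative assumptions; the disclosure does not specify a control law:

```python
# Hypothetical proportional adjustment of near-IR illuminator output: raise
# power when the imaged face region is underexposed, lower it when overexposed.
# All quantities are normalized to [0, 1] for illustration.

def adjust_illuminator(current_power, mean_brightness,
                       target=0.5, gain=0.8, max_power=1.0):
    """Return a new illuminator power level nudged toward the target
    mean image brightness, clamped to the valid power range."""
    error = target - mean_brightness          # positive when underexposed
    new_power = current_power + gain * error  # proportional correction
    return min(max_power, max(0.0, new_power))
```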
As can be seen in
In the illustrative embodiment of
The narrower field of view cameras 80 can each be configured to recognize various facial features within a particular range of X-coordinates and Y-coordinates that covers a subset space within the wide field of view. In some embodiments, the narrower field of view cameras 80 can be configured to cover the entire space covered by the wide field of view camera 78, allowing the system 76 to acquire higher-resolution images of individuals and/or objects at all locations within the wide field of view. The ranges covered by each narrower field of view camera 80 can be discrete (i.e. with no overlap between adjacent fields), or, in the alternative, can be overlapped by some desired amount. In certain embodiments, each of the narrower field of view camera elements 80 can comprise a tri-band imaging (TBI) camera or other such device for detecting and analyzing facial features using multiple light spectrums (e.g. near-IR light, visible light, UV light, etc.).
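For the discrete (non-overlapping) arrangement, mapping an object's X-Y coordinates to the narrower field of view camera whose range covers them reduces to a grid lookup. The grid shape and coordinate conventions below are assumptions for illustration:

```python
# Hypothetical lookup: which narrower field of view camera in a cols x rows
# grid covers wide-FOV point (x, y), assuming discrete, evenly sized ranges.

def covering_camera(x, y, wide_width, wide_height, cols, rows):
    """Return the (row, col) index of the camera whose discrete subset
    space contains (x, y), or None if the point is out of bounds."""
    if not (0 <= x < wide_width and 0 <= y < wide_height):
        return None
    col = int(x * cols / wide_width)
    row = int(y * rows / wide_height)
    return (row, col)
```

A constant-time lookup of this kind is what lets the system task the correct camera immediately, rather than waiting on a positioning mechanism.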
The TBI cameras 92,94,96,98 can be configured to operate simultaneously in a coordinated fashion to track and detect individuals 91 as they move from one subset space to the next. As can be seen in
As can be further seen in
The narrower field of view cameras 92,94,96,98 can each be configured to cover a discrete subset space within the wide field of view, or can be overlapped by some desired amount. In the latter case, the narrower field of view cameras 92,94,96,98 can be tasked to focus on different facial features of the individual 91. In certain embodiments, for example, one of the narrower field of view cameras (e.g. camera 92) could be tasked to provide a general scan of the individual's face whereas an adjacent narrower field of view camera (e.g. camera 94) could be tasked to provide a retinal scan of the individual 91. The various images acquired by each of the narrower field of view cameras 92,94,96,98 can then be processed via the computer 100 to determine the identity of the individual 91 and/or to compute various other parameters relating to the individual 91 (e.g. velocity, direction of travel, height, orientation, etc.).
Turning now to
Once one or more higher-resolution images are obtained from the narrower field of view cameras 80, an image processing routine or algorithm can be initiated to extract various features from the acquired images, as indicated generally by reference to block 128. At this stage, facial features related to the individual's nose, eyes, mouth, skin color, eyebrows, facial size, etc. may be obtained to perform facial recognition on the individual, or to determine some other desired parameter related to the individual. As indicated generally by reference to block 130, one or more parameters relating to the individual can then be outputted and further analyzed, if desired. As indicated generally by return arrow 132, the system 76 can be configured to update the X-Y coordinates and repeat the image processing as the individual moves through the wide field of view, allowing the system 76 to task different narrower field of view cameras 80 to track the individual, if necessary.
Once an image input is received from each narrower field of view TBI camera, a series of operations can then be performed to isolate the skin in the images from other surface elements, as indicated generally by reference to block 140. This skin detection step 140, for example, can be performed to verify that the tracked individual is not wearing a mask or other such covering that would prevent the system from accurately recognizing the individual's face.
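The skin isolation at block 140 can be sketched using the low-band/high-band near-IR contrast the TBI cameras provide: human skin is reflective in the low near-IR band but strongly absorbent in the high band, so a large normalized difference between the two pixel values suggests skin. The threshold value below is an illustrative assumption:

```python
# Hypothetical two-band skin classifier for registered low/high near-IR images,
# using the normalized difference between the bands at each pixel.

def is_skin(low_band, high_band, threshold=0.3):
    """Classify one pixel pair from the low/high near-IR images as skin."""
    total = low_band + high_band
    if total == 0:
        return False
    ratio = (low_band - high_band) / total   # normalized band difference
    return ratio > threshold

def skin_mask(low_image, high_image, threshold=0.3):
    """Per-pixel skin map for two registered near-IR images (nested lists)."""
    return [[is_skin(l, h, threshold) for l, h in zip(lrow, hrow)]
            for lrow, hrow in zip(low_image, high_image)]
```

A masked or covered face would yield few skin pixels in the resulting map, which is one way the verification described above could be realized.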
As can be further seen with respect to block 142, a face detection step may also be performed to acquire various facial features that can be later used to determine the identity of the tracked individual as well as other desired parameters relating to the individual. Upon detecting various features of the individual's face at step 142, the information obtained can then be compared against a facial image database containing a number of previously stored facial images, as indicated generally by reference to block 144. If a match is found, as indicated generally by reference to block 146, the result can be outputted at block 148, informing the operator that a match has been obtained along with the identification of that individual. Alternatively, if no match is found, the system can be configured to alert the operator that the individual is not recognized within the facial image database.
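The comparison against the facial image database at blocks 144 through 148 can be sketched as a nearest-match search over stored feature vectors. The Euclidean metric and threshold below are assumptions for illustration; the disclosure does not specify a particular matching algorithm:

```python
# Hypothetical database match: return the identity of the closest stored
# feature vector if it is within the acceptance threshold, otherwise None
# (signalling that the individual is not recognized).
import math

def match_face(features, database, threshold=0.5):
    """Nearest-neighbour search over a {identity: feature_vector} mapping."""
    best_id, best_dist = None, float("inf")
    for identity, stored in database.items():
        dist = math.dist(features, stored)   # Euclidean distance
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist <= threshold else None
```

A `None` result corresponds to the alternative branch above, in which the operator is alerted that the individual is not recognized within the database.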
As can be further seen by reference to block 158, one or more feature images can also be extracted from the two near-IR images of steps 152 and 154 using a multi-band feature extraction scheme. In certain embodiments, for example, the two near-IR images obtained from steps 152 and 154 can be utilized in conjunction with visible light, UV light, radar, or some other desired wavelength spectrum to extract various features from the individual's face (e.g. nose, eyes, mouth, skin color, eyebrows, facial size, etc.) that can be later used to perform facial recognition on the individual.
Next, as indicated generally with reference to block 160, the images acquired from the skin detection step 156 and multi-band feature extraction step 158 can be processed to determine the identity of the individual. In certain embodiments, for example, a series of generalized Hough transforms or model-based algorithms can be performed, providing an approximation of features such as the location of the eyes, eyebrows, nose, and/or mouth. From this processing step 160, a final video facial image can be produced, as indicated generally by reference to block 162. Other parameters such as the identity of the individual can also be outputted at this step 162, if desired.
Having thus described the several embodiments of the present invention, those of skill in the art will readily appreciate that other embodiments may be made and used which fall within the scope of the claims attached hereto. Numerous advantages of the invention covered by this document have been set forth in the foregoing description. It will be understood that this disclosure is, in many respects, only illustrative. Changes can be made with respect to various elements described herein without exceeding the scope of the invention.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5396284||Aug 20, 1993||Mar 7, 1995||Burle Technologies, Inc.||Motion detection system|
|US6049281||Sep 29, 1998||Apr 11, 2000||Osterweil; Josef||Method and apparatus for monitoring movements of an individual|
|US6215519||Mar 4, 1998||Apr 10, 2001||The Trustees Of Columbia University In The City Of New York||Combined wide angle and narrow angle imaging system and method for surveillance and monitoring|
|US6370260||Sep 3, 1999||Apr 9, 2002||Honeywell International Inc.||Near-IR human detector|
|US6437819||Jun 25, 1999||Aug 20, 2002||Rohan Christopher Loveland||Automated video person tracking system|
|US6445298||Dec 21, 2000||Sep 3, 2002||Isaac Shepher||System and method for remotely monitoring movement of individuals|
|US6483935||Oct 29, 1999||Nov 19, 2002||Cognex Corporation||System and method for counting parts in multiple fields of view using machine vision|
|US6499025||Jun 1, 1999||Dec 24, 2002||Microsoft Corporation||System and method for tracking objects by fusing results of multiple sensing modalities|
|US6504482||Jan 4, 2001||Jan 7, 2003||Sanyo Electric Co., Ltd.||Abnormality detection apparatus and method|
|US6611206||Mar 15, 2001||Aug 26, 2003||Koninklijke Philips Electronics N.V.||Automatic system for monitoring independent person requiring occasional assistance|
|US6678413||Nov 24, 2000||Jan 13, 2004||Yiqing Liang||System and method for object identification and behavior characterization using video analysis|
|US6714665||Dec 3, 1996||Mar 30, 2004||Sarnoff Corporation||Fully automated iris recognition system utilizing wide and narrow fields of view|
|US6718049||Dec 3, 2002||Apr 6, 2004||Honeywell International Inc.||Near-infrared disguise detection|
|US6738073 *||Apr 30, 2002||May 18, 2004||Imove, Inc.||Camera system with both a wide angle view and a high resolution view|
|US6970576 *||Jul 19, 2000||Nov 29, 2005||Mbda Uk Limited||Surveillance system with autonomic control|
|US20020063711||Nov 16, 2001||May 30, 2002||Imove Inc.||Camera system with high resolution image inside a wide angle view|
|US20020075258||Nov 23, 2001||Jun 20, 2002||Imove Inc.||Camera system with high resolution image inside a wide angle view|
|US20020076087||Jul 13, 2001||Jun 20, 2002||Korea Institute Of Science And Technology||Visual tracking method by color information|
|US20020105578||Dec 19, 2001||Aug 8, 2002||Andrew Arthur Hunter||Tracking system|
|US20020140822||Mar 28, 2002||Oct 3, 2002||Kahn Richard Oliver||Camera with visible and infra-red imaging|
|US20020180759||Apr 30, 2002||Dec 5, 2002||Imove Inc.||Camera system with both a wide angle view and a high resolution view|
|US20030040815||Jun 21, 2002||Feb 27, 2003||Honeywell International Inc.||Cooperative camera network|
|US20030053658||Dec 27, 2001||Mar 20, 2003||Honeywell International Inc.||Surveillance system and methods regarding same|
|US20030053659||Dec 27, 2001||Mar 20, 2003||Honeywell International Inc.||Moving object assessment system and method|
|US20030053664||Feb 15, 2002||Mar 20, 2003||Ioannis Pavlidis||Near-infrared method and system for use in face detection|
|US20030076417||Aug 6, 2002||Apr 24, 2003||Patrick Thomas||Autonomous monitoring and tracking of vehicles in a parking lot to enforce payment rights|
|US20030095186||Nov 20, 2001||May 22, 2003||Aman James A.||Optimizations for live event, real-time, 3D object tracking|
|US20030123703||Dec 27, 2001||Jul 3, 2003||Honeywell International Inc.||Method for monitoring a moving object and system regarding same|
|US20030209893||Apr 14, 2003||Nov 13, 2003||Breed David S.||Occupant sensing system|
|US20040030531||Jan 10, 2003||Feb 12, 2004||Honeywell International Inc.||System and method for automated monitoring, recognizing, supporting, and responding to the behavior of an actor|
|US20040105004||Nov 30, 2002||Jun 3, 2004||Yong Rui||Automated camera management system and method for capturing presentations using videography rules|
|US20040240711||May 27, 2003||Dec 2, 2004||Honeywell International Inc.||Face identification verification using 3 dimensional modeling|
|US20050007450||Dec 12, 2003||Jan 13, 2005||Duane Hill||Vehicle mounted system and method for capturing and processing physical data|
|US20050055582||Sep 5, 2003||Mar 10, 2005||Bazakos Michael E.||System and method for dynamic stand-off biometric verification|
|US20050110610||Nov 3, 2004||May 26, 2005||Bazakos Michael E.||System and method for gate access control|
|WO1997021188A1||Dec 4, 1996||Jun 12, 1997||David Sarnoff Research Center, Inc.||Wide field of view/narrow field of view recognition system and method|
|1||"Burt Adelson Pyramid," 3 pages, on or before Mar. 4, 2005.|
|2||"CBVBS '01 Final Program" 2 pages, 2001.|
|3||Albiol et al., "Robust Motion Detection for Video Surveillance Applications," IEEE, International Conference on Image Processing, 5 pages, Sep. 2003.|
|4||Bazakos et al., Fast Access Control Technology Solutions (FACTS), IEEE, pp. 312-317, 2005.|
|5||Burt, et al., "The Laplacian Pyramid as a Compact Image Code," IEEE Transactions on Communications, vol. COM-31, No. 4, pp. 532-540, Apr. 1983.|
|6||Caspi, et al., "Alignment of Non-Overlapping Sequences," 8 pages, Oct. 19, 2005.|
|7||Chen et al., "Comparison and Combination of Visible and IR Image Face Recognition," 18 pages, Oct. 19, 2005.|
|8||Cockshott, et al., "Microscopic Volumetric Image Data Compression Using Vector Quantization and 3D Pyramid," 5 pages, Oct. 19, 2005.|
|9||Cutler, "Face Recognition Using Infrared Images and Eigenfaces," pp. 1-5, Apr. 26, 1996.|
|10||Dowdall et al., "A Face Detection Method Based on Multi-Band Feature Extraction in the Near-IR Spectrum," 9 pages, Oct. 19, 2005.|
|11||Dowdall et al., "Face Detection in the Near-IR Spectrum," 15 pages, Oct. 19, 2005.|
|12||Fromherz et al., "A Survey of Face Recognition," pp. 1-18.|
|13||Gibbons et al., "IrisNet: An Architecture for Internet-Scale Sensing," pp. 1-10, Oct. 19, 2005.|
|14||http://esdl.computer.org/comp/proceedings/fg/1996/7713/00/77131082abs.htm, "Comparison of visible and infra-red imagery for face recognition," 1 page, printed Jan. 20, 2004.|
|15||http://www.cc.gatech.edu/classes/AY200/cs7495-fall/participants/iwc/paperpres/Visible . . . "Comparison of Visible and Infra-Red Imagery for Face Recognition," 3 pages, printed Jan. 20, 2004.|
|17||http://www.merl.com/projects/MultiCamera/, "Multi-Camera Systems," 2 pages, printed Jan. 19, 2004.|
|18||Javed, et al., "KnightM: A Real Time Surveillance System for Multiple Overlapping and Non-Overlapping Cameras," 4 pages, Oct. 19, 2005.|
|19||Javed, et al., "Tracking Across Multiple Cameras with Disjoint Views," Proceedings of the Ninth IEEE International Conference on Computer Vision, 6 pages, 2003.|
|20||Kettnaker, et al., "Bayesian Multi-Camera Surveillance," IEEE, 7 pages, 1999.|
|21||Khan, et al., "Consistent Labeling of Tracked Objects in Multiple Cameras with Overlapping Fields of View," 27 pages, Apr. 25, 2002.|
|22||Khan, et al., "Human Tracking in Multiple Cameras," 6 pages, Oct. 19, 2005.|
|23||Khan, et al., "Tracking in Uncalibrated Cameras with Overlapping Field of View," 8 pages, Oct. 19, 2005.|
|24||Kogut, et al., "A Wide Area Tracking System for Vision Sensor Networks," 9th World Congress on Intelligent Transport Systems, 11 pages, 2002.|
|25||Kong et al, "Recent advances in visual and infrared face recognition-a review," Elsevier, Science Direct, Computer Vision and Image Understanding 97, pp. 103-135, 2005.|
|27||Morimoto et al., "Pupil Detection and Tracking Using Multiple Light Sources," Image and Vision Computing, 18, pp. 331-335, 2000.|
|28||Nath et al., "IrisNet: An Architecture for Enabling Sensor-Enriched Internet Service," pp. 1-15, Dec. 2002.|
|29||Pavlidis et al., "A Near-Infrared Fusion Scheme for Automatic Detection of Vehicle Passengers," pp. 1-8, Oct. 19, 2005.|
|30||Pavlidis et al., "Automatic Passenger Counting in the High Occupancy Vehicle (HOV) Lanes," 19 pages, Oct. 19, 2005.|
|31||Pavlidis et al., "Urban Surveillance Systems: From the Laboratory to the Commercial World," pp. 1-18, Oct. 19, 2005.|
|32||Porikli, et al., "Multi-Camera Calibration, Object Tracking and Query Generation," Mitsubishi Electric Research Labs, 4 pages, Oct. 19, 2005.|
|33||Quian et al., "Structure From Motion Using Sequential Monte Carlo Methods," Kluwer Academic Publishers, pp. 1-54, 2004.|
|34||Selinger et al., "Appearance-Based Facial Recognition Using Visible and Thermal Imagery: A Comparative Study," 28 pages, Oct. 19, 2005.|
|35||Socolinsky et al., "Illumination Invariant Face Recognition Using Thermal Infrared Imagery," 8 pages, Oct. 19, 2005.|
|36||TNO TPD, "Parkeerwachter," Intelligent videosurveillance-AFGOS-SDK, 1 page, 2003.|
|38||Xu et al., "Pedestrian Detection and Tracking with Night Vision," 10 pages, Oct. 19, 2005.|
|Citing Patent||Filing date||Publication date||Applicant||Title|
|US8248448 *||Aug 21, 2012||Polycom, Inc.||Automatic camera framing for videoconferencing|
|US8320617 *||Nov 27, 2012||Utc Fire & Security Americas Corporation, Inc.||System, method and program product for camera-based discovery of social networks|
|US8395653||Mar 12, 2013||Polycom, Inc.||Videoconferencing endpoint having multiple voice-tracking cameras|
|US8630503 *||Jun 16, 2009||Jan 14, 2014||Canon Kabushiki Kaisha||Image processing apparatus, image processing method, and computer program|
|US8842161||Aug 20, 2012||Sep 23, 2014||Polycom, Inc.||Videoconferencing system having adjunct camera for auto-framing and tracking|
|US8965063||Sep 21, 2007||Feb 24, 2015||Eyelock, Inc.||Compact biometric acquisition system and method|
|US9076212||Sep 23, 2013||Jul 7, 2015||The Queen's Medical Center||Motion tracking system for real time adaptive imaging and spectroscopy|
|US9138175||Apr 28, 2015||Sep 22, 2015||The Queen's Medical Center||Motion tracking system for real time adaptive imaging and spectroscopy|
|US9177193 *||Feb 4, 2014||Nov 3, 2015||Bally Gaming, Inc.||Safe illumination for computerized facial recognition|
|US9269012||Aug 22, 2013||Feb 23, 2016||Amazon Technologies, Inc.||Multi-tracker object tracking|
|US9305365||Mar 14, 2013||Apr 5, 2016||Kineticor, Inc.||Systems, devices, and methods for tracking moving targets|
|US9392221||Mar 5, 2013||Jul 12, 2016||Polycom, Inc.||Videoconferencing endpoint having multiple voice-tracking cameras|
|US20070106797 *||Aug 31, 2006||May 10, 2007||Nortel Networks Limited||Mission goal statement to policy statement translation|
|US20090324129 *||Dec 31, 2009||Canon Kabushiki Kaisha||Image processing apparatus, image processing method, and computer program|
|US20100245567 *||Mar 27, 2009||Sep 30, 2010||General Electric Company||System, method and program product for camera-based discovery of social networks|
|US20100296888 *||May 18, 2010||Nov 25, 2010||Kouichi Katoh||Method of placing bottom block, block-transferring tool and machine tool provided with the tool|
|US20110007158 *||Apr 13, 2009||Jan 13, 2011||Alex Holtz||Technique for automatically tracking an object|
|US20130136298 *||Nov 29, 2011||May 30, 2013||General Electric Company||System and method for tracking and recognizing people|
|US20140147024 *||Feb 4, 2014||May 29, 2014||Bally Gaming, Inc.||Safe illumination for computerized facial recognition|
|US20140247361 *||May 16, 2014||Sep 4, 2014||Apple Inc.||Polarized Images for Security|
|US20150077323 *||Sep 17, 2013||Mar 19, 2015||Amazon Technologies, Inc.||Dynamic object tracking for user interfaces|
|WO2015026902A1 *||Aug 20, 2014||Feb 26, 2015||Amazon Technologies, Inc.||Multi-tracker object tracking|
|U.S. Classification||396/427, 348/153|
|International Classification||H04N7/18, G03B17/00|
|Oct 20, 2005||AS||Assignment|
Owner name: HONEYWELL INTERNATIONAL INC., NEW JERSEY
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BAZAKOS, MICHAEL E.;MORELLAS, VASSILIOS;REEL/FRAME:016667/0333
Effective date: 20051019
|Mar 26, 2014||FPAY||Fee payment|
Year of fee payment: 4