|Publication number||US20060152593 A1|
|Application number||US 11/347,764|
|Publication date||Jul 13, 2006|
|Filing date||Feb 3, 2006|
|Priority date||Mar 29, 2002|
|Also published as||CA2478666A1, EP1493127A1, US8411151, US20030193570, US20140098251, WO2003085598A1|
|Inventors||Gregory Bone, Jonathan Walton|
|Original Assignee||Bone Gregory A, Walton Jonathan M|
This is a divisional application of 10/113,480 (attorney's file IQINV-58963) filed in the United States Patent and Trademark Office on Mar. 29, 2002.
This invention relates to a system for, and method of, processing an acquired image. More particularly, the invention relates to a system and method for processing an image in a wide variety of ways not previously possible, to provide results enhanced over those achievable in the prior art.
Systems are now in use for processing an acquired image. For example, systems are now in use for processing an acquired image to determine the entrance into, and the departure of individuals from, a defined area such as an enclosure. Systems are also in use for determining the identity of individuals and objects in an enclosure. Systems are further in use for tracking the movement and variations in the positioning of individuals in an enclosure. These are only a few examples of different types of processing and uses of acquired images.
As of now, different processing and uses of acquired images require different types of systems to be constructed. For example, the same system cannot be used both to identify an individual in a crowd and to track the movement of the identified individual in the crowd, particularly the movement of the individual in a defined area such as an enclosure or from one defined area to another. Nor can the same system be used to magnify a particular portion of an acquired image and process that magnified portion. Since different systems are required to perform different functions, costs to individuals or organizations have increased, available space has become limited and complexities in operation have become magnified.
A software development kit prioritizes certain aspects of an acquired image and introduces the prioritized aspects to a main processor. Alternatively, a coprocessor, or the coprocessor and the development kit, manipulate(s) the acquired image and introduce(s) the manipulated image to the processor. The reprogramming of either one of the development kit and the coprocessor may be initiated by either one of them and by the processor and the programming may be provided by the main processor.
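The prioritization described above can be pictured as a filtering stage ahead of the main processor. The following is a minimal sketch, assuming a hypothetical scoring of image regions; the names, the scoring field and the top-N policy are all illustrative and are not taken from the specification:

```python
# Illustrative sketch: a development kit stage that ranks regions of an
# acquired image by a priority score and forwards only the highest-ranked
# regions to the main processor. All names and values are hypothetical.

def prioritize_regions(regions, top_n=2):
    """Rank image regions by score (e.g. motion or face likelihood)
    and return the highest-priority ones for the main processor."""
    return sorted(regions, key=lambda r: r["score"], reverse=True)[:top_n]

def main_processor(regions):
    """Stand-in for the main processor: consume prioritized regions."""
    return [r["label"] for r in regions]

regions = [
    {"label": "background", "score": 0.1},
    {"label": "face",       "score": 0.9},
    {"label": "motion",     "score": 0.6},
]
print(main_processor(prioritize_regions(regions)))  # highest-priority first
```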
A central station and a gate array may also be individually reprogrammable by the main processor which sets up, programs and controls a camera in accordance with the individual reprogrammings.
A reprogramming of an audio acquisition stage may also be initiated by that stage and any of the other stages and the processor and may be provided by the processor. The audio information may be related to the acquired image.
In the drawings:
The hardware section in
The audio signals preferably pass through a bus 35 between the audio codec or acquirer 32 and the field reprogrammable gate array 34, although they could instead pass through a bus between the acquirer 32 and the coprocessor 36. The system is more flexible when the audio signals pass to the gate array 34 than when they pass to the coprocessor 36. The ability of the signals from the audio acquirer 32 to pass to either the gate array 34 or the coprocessor 36 may be seen by the extension of the bus 35 to the audio/video interface for the hardware section 28 a.
Signals pass between the hardware section 28 a and the software section 24 through a bus 38. Signals also pass between a miscellaneous input/output stage 40 (considered as hardware) and the software section 24 through a bus 42. Signals also pass between the hardware section 28 b and the software section 24 through a bus 44. The hardware section 28 b includes a compact flash card interface 46, a PC card interface 48 and a PCI interface 50. The hardware section 28 b provides information storage and includes a capacity for providing information storage expansion and other (non-storage) expansion.
The software section 24 includes a video manipulator 52, an audio manipulator 54, an event generator 56, an event responder 58, a platform user interface 60 and a kernel operating system 62. Each of these stages has an arrow 64 disposed in an oblique direction at the bottom right corner of the stage. The oblique arrow 64 indicates that the stage is capable of being reprogrammed. The reprogramming of any stage with the arrow 64 can be initiated by any other stage, whether or not that other stage has the arrow 64 to indicate its own capability of being reprogrammed. For example, the reprogramming of any of the stages 34, 36 and 52-62 (even numbers only) can be self initiated and can be initiated by any of the other stages 34, 36 and 52-62 and by other stages such as the stages 70, 72 and 74. The stages receiving such reprogramming thus have an enhanced flexibility in operation in comparison to the stages which do not. The reprogramming of each reprogrammable stage, including the stages 34, 36 and 52-62 (even numbers only), can be initiated by almost any stage in the system, except for the image acquirer 30, the audio acquirer or codec 32, the miscellaneous input/output 40 and the storage and expansion stage 28 b.
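The reprogramming relationships described above can be sketched as a simple model: reprogrammable stages may have their reprogramming initiated by themselves or by other stages, while the programming itself is supplied by the main processor, and non-reprogrammable stages (the acquirers, the miscellaneous I/O and the storage section) are excluded. The class names and method are invented for illustration; only the stage designations follow the specification:

```python
# Hypothetical model of the reprogramming scheme: any stage may initiate
# reprogramming of a reprogrammable stage, and the main processor 66
# supplies the new programming. The API here is illustrative only.

class Stage:
    def __init__(self, name, reprogrammable):
        self.name = name
        self.reprogrammable = reprogrammable
        self.program = "default"

class MainProcessor:
    """Supplies the programming whenever a reprogramming is initiated."""
    def reprogram(self, target, program):
        if not target.reprogrammable:
            raise ValueError(f"{target.name} cannot be reprogrammed")
        target.program = program

gate_array     = Stage("field programmable gate array 34", True)
image_acquirer = Stage("image acquirer 30", False)
main = MainProcessor()

# Any stage (including the gate array itself) may initiate the request;
# the main processor provides the actual programming.
main.reprogram(gate_array, "sharpen-for-faces")
```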
A software development kit 65 is indicated by a cloud designated as “platform configuration” with an arrow 64 in the upper left corner. The output from the software development kit 65 is introduced to a main processor 66 to control the operation of the main processor. The software development kit may be considered to be within the main processor 66. The main processor 66 reprograms individual ones of the stages 34, 36 and 52-62 (even numbers only) and the software development kit 65 to control the image acquired by the stage 30 from the camera and the audio acquired by the stage 32 from the camera.
The field programmable gate array 34 provides reprogrammable arrays of gates to clarify and sharpen the video data acquired from the image acquisition stage 30 and introduces the clarified image to the coprocessor 36. The coprocessor 36 manipulates the audio and video data depending upon the results desired to be obtained from the system 10. For example, different manipulations may be provided by the coprocessor 36 when the image is targeted on a single person, a group of people or an inanimate object. The miscellaneous input/output stage 40 provides such information as motion sensing to indicate to an alarm panel that the camera has observed and detected motion in a scene. The stage 40 can also indicate to the camera that some external device has detected motion and wishes to inform the camera that an event worth observing is taking place. In addition, the stage 40 may signal the lens to change the size of its iris. It will be appreciated that the stage 40 may perform a considerable number of functions other than motion detection.
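The specification does not disclose the gate array's internal logic, but "clarify and sharpen" is the kind of fixed-window operation a gate array pipelines well. As one plausible illustration, a 3x3 sharpening convolution can be sketched in pure Python (the kernel and clamping are assumptions, not the patented method):

```python
# Hedged sketch of a "clarify and sharpen" step: a standard 3x3 sharpen
# kernel applied to a 2-D grid of pixel values, with results clamped to
# the 0-255 range. The actual gate-array processing is not disclosed.

SHARPEN = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]

def sharpen(image):
    """Apply the 3x3 sharpen kernel to a 2-D list of pixel values,
    leaving a one-pixel border unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = sum(SHARPEN[j][i] * image[y + j - 1][x + i - 1]
                      for j in range(3) for i in range(3))
            out[y][x] = max(0, min(255, acc))  # clamp to valid pixel range
    return out
```

A flat region is left unchanged (the kernel sums to 1), while isolated bright pixels are amplified, which is the edge-emphasizing behavior one expects from sharpening.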
The video manipulator stage 52 may manipulate an image to clarify it, as by correcting for color or extracting facial features. This is especially important when faces are in the image and are to be matched against a database identifying a particular face. A similar type of manipulation is provided by the stage 54 with respect to audio information, such as when a person is speaking. The event generator 56 matches the image from the stage 52 against the images in the database. This is important when the images are faces. The event responder stage 58 provides a response depending on the matching, or lack of matching, of the acquired image from the stage 52 against the images in the database. Although the matching has been discussed with reference to faces, the matching can be with respect to any physical object or any perceived state independent of a physical object.
The event responder 58 acts upon the output from the event generator 56 in accordance with the processing which is provided to obtain the desired results. The platform user interface 60 provides a mechanism for taking the information that the camera intelligent imaging platform 18 sees, processing that information and presenting it to the user. It also allows the user to adjust the settings of the camera. The platform configuration 65 allows the user of the system to write code for customizing the camera to provide the desired result. The kernel operating system 62 provides the basic operation of the camera and is well known in the art.
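The generator/responder pair described above can be sketched as two small functions: the generator compares extracted features against a database and emits a match event, and the responder acts on the match or its absence. The feature representation, distance metric and threshold below are all assumptions made for illustration:

```python
# Hedged sketch of the event generator 56 / event responder 58 pair.
# Features are modeled as numeric tuples and matching as a nearest-
# neighbor comparison; these specifics are invented for illustration.

def event_generator(features, database, threshold=10.0):
    """Return a (matched, identity) event by nearest-feature comparison."""
    best_id, best_dist = None, float("inf")
    for identity, ref in database.items():
        dist = sum((a - b) ** 2 for a, b in zip(features, ref)) ** 0.5
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return (best_dist <= threshold, best_id)

def event_responder(event):
    """Act on the generator's output: report a match or raise an alert."""
    matched, identity = event
    return f"match: {identity}" if matched else "no match: alert"

db = {"alice": (1.0, 2.0), "bob": (9.0, 9.0)}
print(event_responder(event_generator((1.2, 2.1), db)))
```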
Although the stages 52-62 (even numbers only) and 65 constitute software, they may be disposed in the hardware section 28 c, since they control the operation of the main processing hardware 66. The main processing hardware 66 is sometimes referred to in this application as a “main processor”. The main processor 66 is connected by the bus 25 to communication stages or channels in the intelligent imaging platform 18. The intelligent imaging platform 18 includes a subset of communication channels 70 (ethernet), 72 (serial) and 74 (firewire) in the communications arrangement 20. The channel 70 receives information from an Ethernet source. The channel 72 receives serial information from an external source. The channel 74 receives high speed information through a FireWire (IEEE 1394) interface and communicates this information to the hardware. The channels 70, 72 and 74 are representative of the different types of information that may be acquired by the currently active communication channels in the intelligent imaging platform 18. The representative channels such as the channels 70, 72 and 74 also receive information from the main processor 66 and supply information to the main processor.
The intelligent imaging platform 18 in turn communicates through the communications network 76 to the central station 14. As shown in
The stage 30 acquires the image from the camera and introduces the acquired image to the field programmable gate array 34. The gate array 34 clarifies the image in accordance with the desired processing of the image and introduces the signals representing the clarified image to the coprocessor 36. The coprocessor 36 manipulates the clarified image dependent upon the desired result to be obtained from the system shown in
The signals from the coprocessor 36 are further manipulated by the stages 52 and 54. The video manipulator 52 further enhances the quality of the acquired image. For example, the video manipulator 52 may identify individual faces in a crowd and may extract facial features of an individual. The event generator 56 may match the facial features against a database to identify the individual on the basis of this matching against the database.
The system 10 shown in
The degradation of the signal resolution with increases in distance is particularly troublesome when analog signals are processed, and many of the camera systems of the prior art processed analog signals. In contrast, the system of this invention operates on a digital basis. Coupled with the disposition of the camera controls in the camera, the digital operation of the system of this invention enhances the sensitivity and the reliability of the operation of the system 10.
The system 10 also has other advantages. This results in part from the flexibility in the construction and operation of the system. For example, all of the stages 34, 36, 52-62 (even numbers only) and 65 are reprogrammable. Furthermore, each of the stages 34, 36, 52-62 and 65 can be reprogrammed on the basis of a decision from that stage or any of the other of these stages. This flexibility in reprogramming provides for an enhanced sensitivity and reliability in the adjustments that can be provided in the operation of the camera, thereby providing an enhanced performance of the camera.
The output of the analyzer 92 is stored or archived as at 96 in
The miscellaneous input/output stage 40 in
The video and audio signals then flow through a bus 124 to the coprocessor 36. The output from the coprocessor 36 is provided to a bus 126. These signals then pass through the gate array 34 to the PCI bus 38. The signals on the buses 120 and 122 may also be by-passed through the field reprogrammable gate array 34 to the PCI bus 38 without passing through the coprocessor 36. The signals on the PCI bus 38 pass through the main processor 66 and through a communications bus 129 to the communications network 76 in
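The two signal paths just described, through the coprocessor 36 or bypassing it via the gate array 34 to the PCI bus 38, amount to a routing decision. A minimal sketch, with an invented bypass flag and a stand-in transformation for the coprocessor's manipulation:

```python
# Illustrative routing sketch: a frame either passes through the
# coprocessor 36 before the gate array 34 delivers it to the PCI bus 38,
# or bypasses the coprocessor entirely. The flag and the stand-in
# transformation are assumptions, not disclosed details.

def coprocessor(frame):
    """Stand-in for the manipulation performed by the coprocessor 36."""
    return [px * 2 for px in frame]

def gate_array_to_pci(frame, bypass_coprocessor=False):
    """Route a frame to the PCI bus 38, optionally bypassing
    the coprocessor 36."""
    if not bypass_coprocessor:
        frame = coprocessor(frame)
    return frame  # delivered onto PCI bus 38
```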
The different blocks are defined and determined by the interfaces of applicants' assignee. These interfaces are as follows:
The second column in
The third column in
Column 5 in
Column 10 of
Although this invention has been disclosed and illustrated with reference to particular embodiments, the principles involved are susceptible of use in numerous other embodiments which will be apparent to persons of ordinary skill in the art. The invention is, therefore, to be limited only as indicated by the scope of the claims.
|Cited Patent||Filing date||Publication date||Applicant||Title|
|US5868666 *||May 6, 1996||Feb 9, 1999||Olympus Optical Co., Ltd.||Endoscope apparatus using programmable integrated circuit to constitute internal structure thereof|
|US6188381 *||Dec 31, 1997||Feb 13, 2001||Sarnoff Corporation||Modular parallel-pipelined vision system for real-time video processing|
|US6710799 *||Dec 28, 2000||Mar 23, 2004||Umech Technologies||Microscopic motion measuring|
|US6753925 *||Mar 30, 2001||Jun 22, 2004||Tektronix, Inc.||Audio/video processing engine|
|US6900777 *||Jan 3, 2001||May 31, 2005||Stryker Corporation||Infrared audio/video interface for head-mounted display|
|US20010036322 *||Mar 9, 2001||Nov 1, 2001||Bloomfield John F.||Image processing system using an array processor|
|US20040150734 *||Jan 23, 2004||Aug 5, 2004||Arthur Sobel||Interpolation of edge portions of a digital image|
|US20040201743 *||Nov 9, 2001||Oct 14, 2004||Amling Marc R.||Programmable and reconfigurable camera control unit for video systems|
|International Classification||H04N5/232, G06T1/00|
|Cooperative Classification||G06T1/0007, H04N5/23203, G06T1/00|