WO2011028380A2 - Foreground object detection in a video surveillance system - Google Patents

Foreground object detection in a video surveillance system Download PDF

Info

Publication number
WO2011028380A2
Authority
WO
WIPO (PCT)
Prior art keywords
foreground
video frame
patch
foreground patch
detected
Prior art date
Application number
PCT/US2010/045224
Other languages
French (fr)
Other versions
WO2011028380A3 (en)
Inventor
Wesley Kenneth Cobb
Ming-Jung Seow
Tao Yang
Original Assignee
Behavioral Recognition Systems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Behavioral Recognition Systems, Inc. filed Critical Behavioral Recognition Systems, Inc.
Priority to EP10814147.4A priority Critical patent/EP2474163A4/en
Priority to BR112012004568A priority patent/BR112012004568A2/en
Publication of WO2011028380A2 publication Critical patent/WO2011028380A2/en
Publication of WO2011028380A3 publication Critical patent/WO2011028380A3/en

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 - Burglar, theft or intruder alarms
    • G08B13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 - Actuation using passive radiation detection systems
    • G08B13/194 - Actuation using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 - Actuation using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 - Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608 - Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and/or velocity to predict its new position
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/215 - Motion-based segmentation
    • G06T7/254 - Analysis of motion involving subtraction of images
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30232 - Surveillance
    • G06T2207/30236 - Traffic on road, railway or crossing
    • G06T2207/30241 - Trajectory

Definitions

  • Embodiments of the invention provide techniques for computationally analyzing a sequence of video frames. More specifically, embodiments of the invention relate to techniques for detecting a foreground object in a scene depicted in the sequence of video frames.
  • a video surveillance system may be configured to classify a group of pixels (referred to as a "foreground object") in a given frame as being a particular object (e.g., a person or vehicle). Once identified, a foreground object may be tracked from frame-to-frame in order to follow the foreground object moving through the scene over time, e.g., a person walking across the field of vision of a video surveillance camera.
  • a foreground object is a group of pixels in a given frame classified as depicting a particular object (e.g., a person or vehicle).
  • the background model may return fragmented foreground objects in a very complex environment, presenting an additional challenge to the tracker.
  • Embodiments of the invention relate to techniques for detecting a foreground object in a scene captured by a video camera or other recorded video.
  • the embodiment includes a computer-implemented method for detecting a foreground patch that depicts a foreground object in a sequence of video frames captured by a video camera.
  • the method itself may generally include receiving a video frame, extracting a first foreground patch from the video frame to produce an extracted first foreground patch, and computing a motion flow field for the video frame that includes an angle and magnitude value corresponding to a feature in the video frame and the feature in a previous video frame.
  • the motion flow field is filtered to produce a consistent motion flow field for the video frame that includes the extracted first foreground patch as a detected first foreground patch including the feature.
  • the first foreground patch is used to track the foreground object in the sequence of video frames to follow the foreground object over time.
  • Another embodiment of the invention includes a computer-readable storage medium containing a program, which when executed on a processor, performs an operation for detecting a foreground patch that depicts a foreground object in a sequence of video frames captured by a video camera.
  • the operation itself may generally include receiving a video frame, extracting a first foreground patch from the video frame to produce an extracted first foreground patch, and computing a motion flow field for the video frame that includes an angle and magnitude value corresponding to a feature in the video frame relative to the feature in a previous video frame.
  • the motion flow field is filtered to produce a consistent motion flow field for the video frame that includes the extracted first foreground patch as a detected first foreground patch including the feature.
  • the first foreground patch is used to track the foreground object in the sequence of video frames to follow the foreground object over time.
  • Still another embodiment of the invention provides a system.
  • the system itself may generally include a video input source configured to provide a sequence of video frames, each depicting a scene, a processor and a memory containing a program, which, when executed on the processor is configured to perform an operation for detecting a foreground patch that depicts a foreground object in a sequence of video frames captured by a video camera.
  • the operation itself may generally include receiving a video frame, extracting a first foreground patch from the video frame to produce an extracted first foreground patch, and computing a motion flow field for the video frame that includes an angle and magnitude value corresponding to a feature in the video frame relative to the feature in a previous video frame.
  • the motion flow field is filtered to produce a consistent motion flow field for the video frame that includes the extracted first foreground patch as a detected first foreground patch including the feature.
  • the first foreground patch is used to track the foreground object in the sequence of video frames to follow the foreground object over time.
  • a video frame includes one or more appearance values (e.g., RGB color values) for each of a plurality of pixels and one or more foreground objects are extracted from the video frame.
  • a motion flow field for each video frame in a sequence is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered out before the detected foreground objects are provided to the tracking stage, producing a consistent motion flow field that includes the detected foreground objects.
  • the consistent motion flow field is used by the tracking stage to improve the performance of the tracking as needed for real time surveillance applications.
  • Figure 1 illustrates components of a video analysis system, according to one embodiment of the invention.
  • Figure 2 further illustrates components of the video analysis system shown in Figure 1, according to one embodiment of the invention.
  • Figure 3 illustrates an example of a detector component of the video analysis system shown in Figure 2, according to one embodiment of the invention.
  • Figures 4A and 4B illustrate examples of a frame and motion flow fields, according to one embodiment of the invention.
  • Figure 4C illustrates a method for detecting foreground patches, according to one embodiment of the invention.
  • Figure 4D illustrates a method for performing a step shown in Figure 4C to produce detected foreground patches, according to one embodiment of the invention.
  • Figure 5 illustrates an example of a tracker component of the video analysis system shown in Figure 2, according to one embodiment of the invention.
  • Figure 6A illustrates examples of detected foreground patches and corresponding motion flow fields, according to one embodiment of the invention.
  • Figures 6B, 6C, and 6D illustrate examples of detected foreground patches and existing tracks, according to one embodiment of the invention.
  • Figure 7 illustrates a method for tracking detected foreground patches, according to one embodiment of the invention.
  • Embodiments of the invention provide techniques for generating a background model for a complex and/or dynamic scene over a period of time.
  • an array of Adaptive Resonance Theory (ART) networks is used to generate a background model of the scene.
  • the background model may include a two-dimensional (2D) array of ART networks, where each pixel is modeled using one of the ART networks in the 2D array.
  • the 2D array of ART networks observes the image for regular (or periodic) patterns occurring in the pixel color values.
  • an ART network may contain multiple clusters, each described by means and variances. The means and the variances for the clusters are updated in each successive video frame.
  • each cluster in an ART network may represent a distinct background state for the corresponding pixel. Additionally, each cluster may be monitored for maturity. When a cluster in the ART network for pixel (x, y) has matured, it is used to classify that pixel as depicting either foreground or background; namely, if the RGB values for a pixel map to a mature cluster, then that pixel is presumed to depict scene background.
  • each ART network in the 2D array models one of the pixels over multiple frames of video by creating new clusters, modifying, merging, and removing clusters from the network, based on the pixel color values for that pixel observed over time.
  • Classification is applied using choice tests and vigilance tests.
  • the choice test measures the distance between two points (the learned cluster prototype vs. the test point) in the RGB space.
  • the vigilance test measures the angle between two points in the RGB space.
  • the similarity measure used for the vigilance test helps prevent the background model from classifying weak shadow as foreground.
  • the creation of a new cluster may indicate either a valid change of a pixel or a noisy pixel.
  • the modification of an existing cluster reinforces the significance/importance of a cluster.
  • the merging of multiple clusters maintains the accuracy, stability, and scalability of the background model.
  • the deletion of a cluster removes a weak belief of a new background/foreground state for the corresponding pixel.
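  • As a rough illustration of the per-pixel ART model described above, the sketch below ranks existing clusters with a Euclidean-distance choice test, admits an input with a cosine-angle vigilance test, and creates, reinforces, or deletes clusters accordingly. The class name, thresholds, and update rules are illustrative assumptions rather than the patent's exact formulation.

```python
import numpy as np

class PixelARTNetwork:
    """Illustrative ART-style model for one pixel's RGB values (not the patent's exact math)."""

    def __init__(self, vigilance_deg=10.0, decay_frames=100):
        self.clusters = []                # each: {"mean", "var", "count", "last_seen"}
        self.vigilance_deg = vigilance_deg
        self.decay_frames = decay_frames
        self.frame = 0

    def observe(self, rgb):
        self.frame += 1
        rgb = np.asarray(rgb, dtype=float)

        # Choice test: rank existing clusters by Euclidean distance in RGB space.
        ranked = sorted(self.clusters, key=lambda c: np.linalg.norm(rgb - c["mean"]))

        for c in ranked:
            # Vigilance test: angle between the input and the cluster prototype,
            # measured relative to the <0, 0, 0> origin of RGB space.
            cos = np.dot(rgb, c["mean"]) / (np.linalg.norm(rgb) * np.linalg.norm(c["mean"]) + 1e-6)
            if np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) <= self.vigilance_deg:
                # Modification: reinforce the matching cluster by updating its statistics.
                c["count"] += 1
                delta = rgb - c["mean"]
                c["mean"] += delta / c["count"]
                c["var"] += (np.abs(delta) - c["var"]) / c["count"]
                c["last_seen"] = self.frame
                break
        else:
            # No cluster passed the vigilance test: create a new cluster from the input.
            self.clusters.append({"mean": rgb.copy(), "var": np.zeros(3),
                                  "count": 1, "last_seen": self.frame})

        # Deletion: remove clusters that are not reinforced (weak beliefs about a state).
        self.clusters = [c for c in self.clusters
                         if self.frame - c["last_seen"] < self.decay_frames]
```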
  • the door 'open' and 'close' states could be considered as layers in the proposed background model and therefore be treated as background.
  • noise in the scene may be modeled using multiple clusters in an ART network and therefore be treated as background.
  • a random car driving through a camera's field of vision does not result in a new background state because any clusters generated for a pixel depicting the car over a small number of frames are unstable and eventually deleted when not reinforced. Consequently, the proposed background model is adaptive to complex and dynamic environments in a manner that does not require any supervision; thus, it is suitable for long-term observation in a video surveillance application.
  • the computer vision engine may compare the pixel values for a given frame with the background image and identify objects as they appear and move about the scene. Typically, when a region of pixels in the scene (referred to as a "blob" or "patch") is classified as depicting scene foreground, the patch itself is identified as a foreground object.
  • the object may be evaluated by a classifier configured to determine what is depicted by the foreground object (e.g., a vehicle or a person).
  • the computer vision engine may identify features (e.g., height/width in pixels, average color values, shape, area, and the like) used to track the object from frame-to-frame.
  • the computer vision engine may derive a variety of information while tracking the object from frame-to-frame, e.g., position, current (and projected) trajectory, direction, orientation, velocity, acceleration, size, color, and the like.
  • the computer vision engine outputs this information as a stream of "context events" describing a collection of kinematic information related to each foreground object detected in the video frames.
  • Data output from the computer vision engine may be supplied to the machine- learning engine.
  • the machine-learning engine may evaluate the context events to generate "primitive events" describing object behavior.
  • Each primitive event may provide some semantic meaning to a group of one or more context events. For example, assume a camera records a car entering a scene, and that the car turns and parks in a parking spot. In such a case, the computer vision engine could initially recognize the car as a foreground object, classify it as being a vehicle, and output kinematic data describing the position, movement, speed, etc., of the car in the context event stream.
  • a primitive event detector could generate a stream of primitive events from the context event stream such as "vehicle appears," "vehicle turns," "vehicle slowing," and "vehicle stops" (once the kinematic information about the car indicated a speed of 0).
  • the machine-learning engine may create, encode, store, retrieve, and reinforce patterns representing the events observed to have occurred, e.g., long-term memories representing a higher-level abstraction of a car parking in the scene - generated from the primitive events underlying the higher-level abstraction. Further still, patterns representing an event of interest may result in alerts passed to users of the behavioral recognition system.
  • One embodiment of the invention is implemented as a program product for use with a computer system.
  • the program(s) of the program product defines functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media.
  • Examples of computer-readable storage media include (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM or DVD-ROM disks readable by an optical media drive) on which information is permanently stored; (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive) on which alterable information is stored.
  • Such computer-readable storage media when carrying computer-readable instructions that direct the functions of the present invention, are embodiments of the present invention.
  • Other example media include communications media through which information is conveyed to a computer, such as through a computer or telephone network, including wireless communications networks.
  • routines executed to implement the embodiments of the invention may be part of an operating system or a specific application, component, program, module, object, or sequence of instructions.
  • the computer program of the present invention typically comprises a multitude of instructions that are translated by the native computer into a machine-readable format and hence into executable instructions.
  • programs are comprised of variables and data structures that either reside locally to the program or are found in memory or on storage devices.
  • various programs described herein may be identified based upon the application for which they are implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature that follows is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature.
  • FIG. 1 illustrates components of a video analysis and behavior-recognition system 100, according to one embodiment of the present invention.
  • the behavior-recognition system 100 includes a video input source 105, a network 110, a computer system 115, and input and output devices 118 (e.g., a monitor, a keyboard, a mouse, a printer, and the like).
  • the network 110 may transmit video data recorded by the video input 105 to the computer system 115.
  • the computer system 115 includes a CPU 120, storage 125 (e.g., a disk drive, optical disk drive, floppy disk drive, and the like), and a memory 130 containing both a computer vision engine 135 and a machine-learning engine 140.
  • Network 110 receives video data (e.g., video stream(s), video images, or the like) from the video input source 105.
  • the video input source 105 may be a video camera, a VCR, DVR, DVD, computer, web-cam device, or the like.
  • the video input source 105 may be a stationary video camera aimed at a certain area (e.g., a subway station, a parking lot, a building entry/exit, etc.), which records the events taking place therein.
  • the video input source 105 may be configured to record the scene as a sequence of individual video frames at a specified frame-rate (e.g., 24 frames per second), where each frame includes a fixed number of pixels (e.g., 320 x 240). Each pixel of each frame may specify a color value (e.g., an RGB value) or grayscale value (e.g., a radiance value between 0-255). Further, the video stream may be formatted using known formats, e.g., MPEG2, MJPEG, MPEG4, H.263, H.264, and the like.
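  • As a point of reference, a frame stream of this kind can be read frame by frame with a few lines of OpenCV; the library choice, file name, and color conversion below are illustrative assumptions rather than anything prescribed by the patent.

```python
import cv2

# Hypothetical input: a recorded file, or a camera index such as 0 for a live camera.
capture = cv2.VideoCapture("surveillance.mp4")

while True:
    ok, frame = capture.read()                    # H x W x 3 array of BGR values in [0, 255]
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # per-pixel RGB appearance values
    # ... pass `rgb` to the detector component for background/foreground classification ...

capture.release()
```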
  • the computer vision engine 135 may be configured to analyze this raw information to identify active objects in the video stream, classify the objects, derive a variety of metadata regarding the actions and interactions of such objects, and supply this information to a machine-learning engine 140.
  • the machine-learning engine 140 may be configured to evaluate, observe, learn and remember details regarding events (and types of events) that transpire within the scene over time.
  • the machine-learning engine 140 receives the video frames and the data generated by the computer vision engine 135.
  • the machine- learning engine 140 may be configured to analyze the received data, build semantic representations of events depicted in the video frames, detect patterns, and, ultimately, to learn from these observed patterns to identify normal and/or abnormal events.
  • data describing whether a normal/abnormal behavior/event has been determined and/or what such behavior/event is may be provided to output devices 118 to issue alerts, for example, an alert message presented on a GUI interface screen.
  • the computer vision engine 135 and the machine-learning engine 140 both process video data in real-time.
  • time scales for processing information by the computer vision engine 135 and the machine-learning engine 140 may differ.
  • the computer vision engine 135 processes the received video data frame-by-frame, while the machine-learning engine 140 processes data every N-frames.
  • while the computer vision engine 135 analyzes each frame in real-time to derive a set of information about what is occurring within a given frame, the machine-learning engine 140 is not constrained by the real-time frame rate of the video input.
  • Figure 1 illustrates merely one possible arrangement of the behavior-recognition system 100.
  • although the video input source 105 is shown connected to the computer system 115 via the network 110, the network 110 is not always present or needed (e.g., the video input source 105 may be directly connected to the computer system 115).
  • various components and modules of the behavior-recognition system 100 may be implemented in other systems.
  • the computer vision engine 135 may be implemented as a part of a video input device (e.g., as a firmware component wired directly into a video camera). In such a case, the output of the video camera may be provided to the machine-learning engine 140 for analysis.
  • the output from the computer vision engine 135 and machine-learning engine 140 may be supplied over computer network 110 to other computer systems.
  • the computer vision engine 135 and machine-learning engine 140 may be installed on a server system and configured to process video from multiple input sources (i.e., from multiple cameras).
  • a client application 250 running on another computer system may request (or receive) the results of the analysis over network 110.
  • Figure 2 further illustrates components of the computer vision engine 135 and the machine-learning engine 140 first illustrated in Figure 1, according to one embodiment of the invention.
  • the computer vision engine 135 includes a detector component 205, a tracker component 210, an estimator/identifier component 215, and a context processor component 220.
  • the components 205, 210, 215, and 220 provide a pipeline for processing an incoming sequence of video frames supplied by the video input source 105 (indicated by the solid arrows linking the components). Additionally, the output of one component may be provided to multiple stages of the component pipeline (as indicated by the dashed arrows) as well as to the machine-learning engine 140.
  • the components 205, 210, 215, and 220 may each provide a software module configured to provide the functions described herein.
  • the components 205, 210, 215, and 220 may be combined (or further subdivided) to suit the needs of a particular case.
  • the detector component 205 may be configured to separate each frame of video provided by the video input source 105 into a stationary or static part (the scene background) and a collection of volatile parts (the scene foreground).
  • the frame itself may include a two-dimensional array of pixel values for multiple channels (e.g., RGB channels for color video or grayscale channel or radiance channel for black and white video).
  • the detector component 205 may model the background states for each pixel using a corresponding ART network. That is, each pixel may be classified as depicting scene foreground or scene background using an ART network modeling a given pixel.
  • the detector component 205 may be configured to generate a mask used to identify which pixels of the scene are classified as depicting foreground and, conversely, which pixels are classified as depicting scene background. The detector component 205 then identifies regions of the scene that contain a portion of scene foreground (referred to as a foreground "blob" or "patch") and supplies this information to subsequent stages of the pipeline. In one embodiment, a patch may be evaluated over a number of frames before being forwarded to other components of the computer vision engine 135.
  • the detector component 205 may evaluate features of a patch from frame-to-frame to make an initial determination that the patch depicts a foreground agent in the scene as opposed to simply a patch of pixels classified as foreground due to camera noise or changes in scene lighting. Additionally, pixels classified as depicting scene background may be used to update a background image modeling the scene.
  • the tracker component 210 may receive the foreground patches produced by the detector component 205 and generate computational models for the patches.
  • the tracker component 210 may be configured to use this information, and each successive frame of raw-video, to attempt to track the motion of the objects depicted by the foreground patches as they move about the scene.
  • the estimator/identifier component 215 may receive the output of the tracker component 210 (and the detector component 205) and classify each tracked object as being one of a known category of objects. For example, in one embodiment, estimator/ identifier component 215 may classify a tracked object as being a "person,” a "vehicle,” an "unknown,” or an "other.” In this context, the classification of "other" represents an affirmative assertion that the object is neither a "person” nor a "vehicle.” Additionally, the estimator/ identifier component may identify characteristics of the tracked object, e.g., for a person, a prediction of gender, an estimation of a pose (e.g., standing or sitting) or an indication of whether the person is carrying an object.
  • the machine learning engine 140 may classify foreground objects observed by the vision engine 135.
  • the machine-learning engine 140 may include an unsupervised classifier configured to observe and distinguish among different agent types (e.g., between people and vehicles) based on a plurality of micro- features (e.g., size, speed, appearance, etc.).
  • the context processor component 220 may receive the output from other stages of the pipeline (i.e., the tracked objects, the background and foreground models, and the results of the estimator/identifier component 215). Using this information, the context processor 220 may be configured to generate a stream of context events regarding objects tracked (by tracker component 210) and classified (by estimator identifier component 215). For example, the context processor component 220 may evaluate a foreground object from frame-to-frame and output context events describing that object's height, width (in pixels), position (as a 2D coordinate in the scene), acceleration, velocity, orientation angle, etc.
  • the computer vision engine 135 may take the outputs of the components 205, 210, 215, and 220 describing the motions and actions of the tracked objects in the scene and supply this information to the machine-learning engine 140.
  • the primitive event detector 212 may be configured to receive the output of the computer vision engine 135 (i.e., the video images, the object classifications, and context event stream) and generate a sequence of primitive events - labeling the observed actions or behaviors in the video with semantic meaning. For example, assume the computer vision engine 135 has identified a foreground object and classified that foreground object as being a vehicle and the context processor component 220 estimates the kinematic data regarding the car's position and velocity.
  • this information is supplied to the machine-learning engine 140 and the primitive event detector 212.
  • the primitive event detector 212 may generate a semantic symbol stream providing a simple linguistic description of actions engaged in by the vehicle.
  • a sequence of primitive events related to observations of the computer vision engine 135 occurring at a parking lot could include formal language vectors representing the following: "vehicle appears in scene," "vehicle moves to a given location," "vehicle stops moving," "person appears proximate to vehicle," "person moves," "person leaves scene," "person appears in scene," "person moves proximate to vehicle," "person disappears," "vehicle starts moving," and "vehicle disappears."
  • the primitive event stream may be supplied to excite the perceptual associative memory 230.
  • the machine-learning engine 140 includes a long-term memory 225, a perceptual memory 230, an episodic memory 235, a workspace 240, codelets 245, and a mapper component 211.
  • the perceptual memory 230, the episodic memory 235, and the long-term memory 225 are used to identify patterns of behavior, evaluate events that transpire in the scene, and encode and store observations.
  • the perceptual memory 230 receives the output of the computer vision engine 135 (e.g., the context event stream) and a primitive event stream generated by primitive event detector 212.
  • the episodic memory 235 stores data representing observed events with details related to a particular episode, e.g., information describing time and space details related to an event. That is, the episodic memory 235 may encode specific details of a particular event, i.e., "what and where" something occurred within a scene, such as a particular vehicle (car A) moved to a location believed to be a parking space (parking space 5) at 9:43AM.
  • the long-term memory 225 may store data generalizing events observed in the scene.
  • the long-term memory 225 may encode information capturing observations and generalizations learned by an analysis of the behavior of objects in the scene such as "vehicles tend to park in a particular place in the scene,” “when parking vehicles tend to move a certain speed,” and “after a vehicle parks, people tend to appear in the scene proximate to the vehicle,” etc.
  • the long-term memory 225 stores observations about what happens within a scene with much of the particular episodic details stripped away.
  • memories from the episodic memory 235 and the long-term memory 225 may be used to relate and understand a current event, i.e., the new event may be compared with past experience, leading to reinforcement, decay, and adjustments to the information stored in the long-term memory 225 over time.
  • the long-term memory 225 may be implemented as a binary ART network and a sparse-distributed memory data structure.
  • the mapper component 211 may receive the context event stream and the primitive event stream and parse information to multiple ART networks to generate statistical models of what occurs in the scene for different groups of context events and primitive events.
  • the workspace 240 provides a computational engine for the machine-learning engine 140.
  • the workspace 240 may be configured to copy information from the perceptual memory 230, retrieve relevant memories from the episodic memory 235 and the long-term memory 225, select and invoke the execution of one of codelets 245.
  • each codelet 245 is a software program configured to evaluate different sequences of events and to determine how one sequence may follow (or otherwise relate to) another (e.g., a finite state machine).
  • the codelet may provide a software module configured to detect interesting patterns from the streams of data fed to the machine-learning engine.
  • the codelet 245 may create, retrieve, reinforce, or modify memories in the episodic memory 235 and the long-term memory 225.
  • the machine-learning engine 140 performs a cognitive cycle used to observe and learn about patterns of behavior that occur within the scene.
  • Figure 3 illustrates an example of a detector component 205 of the computer vision engine 135 shown in Figure 2, according to one embodiment of the invention.
  • the detector component 205 includes a background/foreground component 310, a motion flow field component 315, a merge/split component 325, and a consistent motion flow field component 330.
  • the background image output by the background/foreground component 310 generally provides an RGB (or grayscale) value for each pixel in a scene being observed by the computer vision engine 135.
  • the RGB values in the background image specify a color value expected when the background of the scene is visible to the camera. That is, the color values observed in a frame of video when not occluded by a foreground object.
  • the background/foreground component 310 may update the color values of pixels in the background image dynamically while the computer vision engine observes a sequence of video frames.
  • the detector component 205 is configured to receive a current frame of video from an input source (e.g., a video camera) as image data 305. And in response, the background/foreground component 310 classifies each pixel in the frame as depicting scene background or scene foreground. For example, the RGB values for a given pixel may be passed to an input layer of a corresponding ART network in an ART network array. Each ART network in the array provides a set of clusters modeling the background states observed for the corresponding pixel.
  • Each cluster in an ART network may be characterized by a mean and a variance from a prototype input representing that cluster (i.e., from an RGB value representing that cluster).
  • the prototype is generated first, as a copy of the input vector used to create a new cluster (i.e., from the first set of RGB values used to create the new cluster). Subsequently, as new input RGB values are mapped to an existing cluster, the prototype RGB values (and the mean and variance for that cluster) may be updated using the input RGB values.
  • the background/foreground component 310 may track how many input vectors (e.g., RGB pixel color values) map to a given cluster. Once a cluster has "matured," the background/foreground component 310 classifies a pixel mapping to that cluster as depicting scene background. In one embodiment, a cluster is "matured" once a minimum number of input RGB values have mapped to that cluster. Conversely, the background/foreground component 310 may classify pixels mapping to a cluster that has not matured (or pixels that result in a new cluster) as depicting an element of scene foreground.
  • an ART network receives a vector storing the RGB color values of a pixel in a frame of image data 305.
  • the particular ART network receives the RGB pixel color values for that same pixel from frame-to-frame.
  • the ART network may either update an existing cluster or create a new cluster, as determined using a choice and a vigilance test for the ART network.
  • the choice and vigilance tests are used to evaluate the RGB input values passed to the ART network.
  • the choice test may be used to rank the existing clusters, relative to the vector input RGB values.
  • the choice test may compute a Euclidian distance in RGB space between each cluster and the input RGB value, and the resulting distances can be ranked by magnitude (where smaller distances are ranked higher than greater distances).
  • the vigilance test evaluates the existing clusters to determine whether to map the RGB input to one of the ranked clusters.
  • the vigilance test may compute a cosine angle between the two points (relative to a <0, 0, 0> origin of RGB space).
  • the ART networks may provide dynamic cluster sizes. For example, each cluster may be given an initial shape and size, such as a radius of 5-10.
  • each new input to a given ART network in array 315 is then used to update the size of a cluster for each dimension of input data (or create a new cluster).
  • the ART networks may also be configured to provide for cluster decay. For example, each ART network may be configured to remove a cluster that has not been reinforced. In such a case, if a new cluster is created, but no new inputs (e.g., RGB values) map to that cluster for a specified period, then that ART network may simply remove the cluster. Doing so prevents transient elements (namely, foreground objects which occlude the background) from being misclassified as scene background.
  • the background/foreground component 310 may evaluate the ART networks to classify each pixel in the input video frame as depicting scene foreground or scene background. Additionally, the background/foreground component 310 may be configured to update the background image using the RGB values of pixels classified as depicting scene background.
  • the current background image may be updated using the input frame as follows. First, each pixel appearance value (e.g., RGB values) is mapped to the ART network corresponding to that pixel. If a given pixel maps to a cluster determined to model a background state, then that pixel is assigned a color value based on that cluster. Namely, each cluster has a mean which may be used to derive a set of RGB color values. In particular, the RGB values that would map directly to the mean value in the cluster.
  • the mean for the closest cluster may be used to select an RGB value.
  • otherwise, if a pixel does not map to a mature cluster, the RGB values in the current background image may remain unchanged.
  • this latter approach leaves the background image in the last observed state.
  • the last observed pixel RGB values may correspond to the color of the closed elevator door.
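  • A simplified sketch of this background-image update, assuming the per-pixel ART models from the earlier sketch (cluster dictionaries with "mean" and "count" fields and a maturity threshold are illustrative assumptions; a practical implementation would be vectorized):

```python
import numpy as np

def update_background_image(background, frame, pixel_models, maturity_threshold=50):
    """Update the running background image in place.

    background, frame: H x W x 3 float arrays of RGB values.
    pixel_models: H x W grid of per-pixel ART-style models (see the earlier sketch).
    """
    height, width = frame.shape[:2]
    for y in range(height):
        for x in range(width):
            model = pixel_models[y][x]
            if not model.clusters:
                continue
            rgb = frame[y, x]
            # Find the closest cluster to the observed value (choice-test style ranking).
            closest = min(model.clusters, key=lambda c: np.linalg.norm(rgb - c["mean"]))
            if closest["count"] >= maturity_threshold:
                # The pixel maps to a mature (background) cluster: use that cluster's mean RGB value.
                background[y, x] = closest["mean"]
            # Otherwise the pixel is foreground; leave the background image in its last observed state.
    return background
```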
  • the background/foreground component 310 may be configured to identify contiguous regions of pixels classified as foreground. Such regions identify a foreground patch that is included in FG/BG image data 335 and output to the motion flow field component 315 along with the image data 305. As noted, the background/foreground component 310 may evaluate a patch over a number of frames before forwarding a patch to other elements of the computer vision engine 135, e.g., to ensure that a given foreground patch is not the result of camera noise or changes in scene lighting. Additionally, the current background image may be provided to other components of the computer vision engine 135 or machine-learning 140, after being updated with each successive frame.
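  • A minimal sketch of that patch-identification step, assuming the per-pixel foreground/background decisions have already been made; OpenCV's connected-components routine and the minimum-area filter are used here purely for illustration.

```python
import cv2
import numpy as np

def extract_foreground_patches(pixel_is_foreground, min_area=20):
    """Group contiguous foreground pixels into patches.

    pixel_is_foreground: H x W boolean array from the per-pixel background model
    (True where the pixel did not map to a mature background cluster).
    Returns a list of bounding boxes (x, y, w, h), one per foreground patch.
    """
    mask = pixel_is_foreground.astype(np.uint8)
    num_labels, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)

    patches = []
    for label in range(1, num_labels):       # label 0 is the background
        x, y, w, h, area = stats[label]
        if area >= min_area:                 # drop tiny regions likely caused by noise
            patches.append((x, y, w, h))
    return patches
```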
  • the foreground patches that are extracted by the background/foreground component 310 from the background model may be spurious and/or unreliable depending on the characteristic of the camera and/or the scene environment.
  • some of the foreground patches may be fragments of a single object and those foreground patches should be merged together for more efficient tracking.
  • the motion flow field component 315, merge/split component 325, and consistent motion flow field component 330 are used to produce validated foreground patches by filtering out spurious foreground patches extracted by the background/foreground component 310 and to reduce the tracking workload for the tracker component 210.
  • the detected foreground patches depict foreground objects that are tracked over time.
  • the motion flow field component 315 receives the image data 305 and computes the motion flow field for each frame using optical flow processing techniques.
  • the motion flow field specifies vectors including an angle and magnitude for features in the frame.
  • the features correspond to one or more foreground patches that exist in multiple frames in a sequence.
  • the motion flow field that is computed for each frame by the motion flow field component 315 is output to the merge/split component 325.
  • the difference in color or grayscale values for each pixel between two sequential frames is the motion history data.
  • a single angle and magnitude may be computed for feature vectors that are included within an extracted foreground patch by averaging the angle and magnitude values or combining the angle and magnitude values using another function to produce a feature flow field.
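  • The patent does not name a particular optical-flow algorithm; as one possible realization, the sketch below computes a dense motion flow field with OpenCV's Farnebäck method and then averages the per-pixel vectors inside each extracted patch to form the feature flow field described above. The parameter values are assumptions.

```python
import cv2
import numpy as np

def compute_patch_flow(prev_gray, curr_gray, patches):
    """Return the dense flow field plus one averaged (angle, magnitude) vector per extracted patch."""
    # Dense motion flow field for the whole frame: one (dx, dy) vector per pixel.
    # Positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    patch_flow = []
    for (x, y, w, h) in patches:
        # Combine the per-pixel vectors inside the patch into a single feature flow vector.
        mean_dx = float(np.mean(flow[y:y + h, x:x + w, 0]))
        mean_dy = float(np.mean(flow[y:y + h, x:x + w, 1]))
        patch_flow.append((float(np.arctan2(mean_dy, mean_dx)),    # angle in radians
                           float(np.hypot(mean_dx, mean_dy))))     # magnitude in pixels per frame
    return flow, patch_flow
```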
  • the motion flow field component 315 outputs the motion flow field and the extracted foreground patches to the merge/split component 325 which merges and/or splits the extracted foreground patches to produce detected foreground patches.
  • the merge/split component 325 may also discard spurious extracted foreground patches based on the motion flow field.
  • An extracted foreground patch may be split into multiple foreground patches or merged with another extracted foreground patch, as described in conjunction with Figures 4A and 4B.
  • One or more of the feature vectors in the motion flow field are associated with each detected foreground patch.
  • the consistent motion flow field component 330 receives the detected foreground patches and the motion flow field and produces the consistent motion flow field.
  • a detected foreground patch is determined to be reliable (not spurious) by the merge/split component 325 when a consistent flow field is observed for the detected foreground patch over a predetermined number of sequential frames. Detected foreground patches and feature vectors that are not reliable are removed by the consistent motion flow field component 330. In some embodiments the predetermined number of sequential frames is three.
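  • One way to realize that consistency check is sketched below: a detected patch is promoted to the consistent motion flow field only after it has been matched across sequential frames (here by simple bounding-box overlap, an assumption) with a stable flow direction, for three frames by default as in the embodiment above.

```python
import math

class ConsistencyFilter:
    """Keep a detected patch only after its motion flow stays consistent for N sequential frames."""

    def __init__(self, required_frames=3, max_angle_delta=math.pi / 4):
        self.required_frames = required_frames
        self.max_angle_delta = max_angle_delta
        self.history = []   # list of dicts: {"box", "angle", "streak"}

    def update(self, patches_with_flow):
        """patches_with_flow: list of ((x, y, w, h), (angle, magnitude)) for the current frame."""
        new_history, reliable = [], []
        for box, (angle, magnitude) in patches_with_flow:
            prev = next((h for h in self.history if _overlaps(h["box"], box)), None)
            if prev is not None and abs(_angle_diff(prev["angle"], angle)) <= self.max_angle_delta:
                streak = prev["streak"] + 1
            else:
                streak = 1
            new_history.append({"box": box, "angle": angle, "streak": streak})
            if streak >= self.required_frames:
                reliable.append((box, (angle, magnitude)))   # part of the consistent motion flow field
        self.history = new_history
        return reliable

def _overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def _angle_diff(a, b):
    d = (a - b) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d
```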
  • the consistent motion flow field component 330 outputs the consistent motion flow field to the tracker component 210 as part of the FG/BG image data 335.
  • Figure 4A illustrates examples of a frame and motion flow fields 406, 407, 408, and 409 for extracted foreground patches 401, 402, 403, and 404, respectively, in a frame 400, according to one embodiment of the invention.
  • the merge/split component 325 joins extracted foreground patches that have been fragmented using the image data 305 to distinguish between the foreground and background color. Additionally, the merge/split component 325 joins extracted foreground patches that have similar direction and magnitude according to the feature flow field. As shown in Figure 4A, the direction (angle) and magnitude of the motion flow fields 406, 407, 408, and 409 are similar. Therefore, extracted foreground patches 401, 402, 403, and 404 are merged by the merge/split component 325 to produce a single detected foreground patch.
  • Figure 4B illustrates examples of frame and motion flow fields for an extracted foreground patch that is split into two detected foreground patches, in a frame 410, according to one embodiment of the invention.
  • the merge/split component 325 splits extracted foreground patches that include separate objects based on the motion flow fields. Specifically, an extracted foreground patch is split when the motion flow field for the extracted foreground patch includes flows in opposing directions.
  • the extracted foreground patch may also be split based on flow fields that have different magnitudes and that indicate a fast moving object. As shown in Figure 4B, the direction (angle) and magnitude of the motion flow fields 412 and 416 are opposing. Therefore, an extracted foreground patch is split into two different detected foreground patches 411 and 415 by the merge/split component 325.
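  • The merge and split decisions of Figures 4A and 4B might be expressed as the two tests below; the angle and magnitude thresholds, and the opposing-fraction rule, are illustrative assumptions.

```python
import math

def should_merge(flow_a, flow_b, max_angle_delta=math.pi / 6, max_magnitude_ratio=1.5):
    """Merge two extracted patches when their feature flow vectors are similar (Figure 4A case)."""
    angle_a, mag_a = flow_a
    angle_b, mag_b = flow_b
    angle_close = abs(_angle_diff(angle_a, angle_b)) <= max_angle_delta
    ratio = max(mag_a, mag_b) / (min(mag_a, mag_b) + 1e-6)
    return angle_close and ratio <= max_magnitude_ratio

def should_split(pixel_angles, opposing_threshold=math.pi * 3 / 4, min_fraction=0.2):
    """Split an extracted patch when its motion flow field contains opposing directions (Figure 4B case).

    pixel_angles: iterable of per-pixel flow angles inside the patch.
    Returns True when a significant fraction of pixels move roughly opposite to the dominant direction.
    """
    angles = list(pixel_angles)
    if not angles:
        return False
    # Dominant direction: circular mean of the per-pixel angles.
    mean_angle = math.atan2(sum(math.sin(a) for a in angles),
                            sum(math.cos(a) for a in angles))
    opposing = sum(1 for a in angles if abs(_angle_diff(a, mean_angle)) >= opposing_threshold)
    return opposing / len(angles) >= min_fraction

def _angle_diff(a, b):
    d = (a - b) % (2 * math.pi)
    return d - 2 * math.pi if d > math.pi else d
```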
  • Figure 4C illustrates a method for detecting foreground patches and producing a consistent motion flow field, according to one embodiment of the invention.
  • the detector component 205 receives the image data for a frame of video.
  • the motion flow field component 315 generates a motion flow field for the frame and performs foreground patch extraction to produce extracted foreground patches that depict foreground objects, as described in further detail in conjunction with Figure 4D.
  • the merge/split component 325 receives the image data, extracted foreground patches, and motion flow field from the motion flow field component 315.
  • the merge/split component 325 merges any extracted foreground patches that have been fragmented.
  • the merging may occur to combine non-overlapping foreground patches that have consistent motion flow fields. That is, distinct foreground patches with consistent motion flow features may be treated as depicting different elements of a common foreground object.
  • the merge/split component 325 splits any extracted foreground patches that correspond to separate objects.
  • the detector component 205 outputs a consistent motion flow field that specifies the detected foreground patches and feature flow vectors following the merging and/or splitting of the extracted foreground patches.
  • Figure 4D illustrates a method for performing step 425 of Figure 4C to produce detected foreground patches, according to one embodiment of the invention.
  • the background model is determined by the background/foreground component 310.
  • the background/foreground component 310 uses the background model to extract foreground patches from each frame of the image data 305.
  • the motion flow field component 315 receives the image data 305 and produces detected foreground patches. Steps 435, 440, and 442 are independent of steps 430 and 432 and may be performed in parallel with steps 430 and 432. At step 435 the motion flow field component 315 computes the feature flow field using the motion flow field. At step 440 the motion flow field component 315 computes the motion history data using the motion flow field. Steps 435 and 440 may be performed in parallel by the motion flow field component 315.
  • the detector component 205 filters the foreground patches to produce detected foreground patches using a motion flow field since some of the extracted foreground patches may be spurious and/or unreliable depending on the characteristic of the camera and/or the scene environment. During the filtering process some of the extracted foreground patches may be merged together and other extracted foreground patches may be split for more efficient tracking.
  • Figure 5 illustrates an example of the tracker component 210 of the computer vision engine 135 shown in Figure 2, according to one embodiment of the invention.
  • Reliable tracking is essential for any surveillance system to perform well and a tracker should provide reliable tracking data for the estimator/identifier component 215 to consume. Due to the heavy computational workload of tracking operations, a conventional tracker may not be able to provide reliable tracking data fast enough for a practical real-time surveillance system.
  • the tracker component 210 uses a hybrid technique that relies on a combination of the motion flow field and covariance matrices to provide reliable tracking data with the performance needed for real-time surveillance.
  • the tracker component 210 includes a sort component 510, a patch management component 525, and a covariance matcher component 520.
  • the tracker component 210 receives the FG/BG image data 335 that includes the detected foreground patches and the consistent motion flow field from the detector component 205 and produces reliable tracking data 500 that is output to the estimator/identifier component 215.
  • the sort component 510 sorts the detected foreground patches (or detected rectangles) into discovered foreground patches and existing foreground patches.
  • the discovered foreground patches do not correspond with any existing foreground patches that were present in previous frames.
  • the discovered foreground patches are used during the processing of subsequent frames to differentiate between discovered and existing foreground patches.
  • the existing foreground patches were present in previous frames.
  • the patch management component 525 receives the detected foreground patches and categorizes the detected foreground patches into one of several categories based on the consistent motion flow field and, if needed, the covariance matrices.
  • the categories include N tracked foreground patches to one detected foreground patch (N to 1), one tracked foreground patch to K detected foreground patches (1 to K), one tracked foreground patch to one detected foreground patch (1 to 1), and one tracked foreground patch to zero detected foreground patches (1 to 0).
  • the covariance matcher component 520 computes the covariance matrix for each frame as instructed by the patch management component 525. The covariance matrix is used by the patch management component 525 to resolve difficult detected foreground patches.
  • the covariance matcher component 520 uses a number of features of the image data 305 and solves a generalized eigenvalue problem to produce a covariance matrix that measures the features of a foreground patch. In most cases, three features, e.g., the R, G, and B color components, are used to produce the covariance matrix.
  • the covariance matrix is particularly useful for associating a detected foreground patch with a tracked foreground patch when the foreground object is not moving.
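  • To illustrate, the sketch below builds a covariance descriptor from the R, G, and B values inside a patch and compares two descriptors with the widely used generalized-eigenvalue (log-eigenvalue) metric; the patent only states that a generalized eigen solution is used, so the exact metric, the SciPy call, and the regularization are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def patch_covariance(frame, box):
    """3 x 3 covariance matrix of the R, G, B values inside a patch box = (x, y, w, h)."""
    x, y, w, h = box
    pixels = frame[y:y + h, x:x + w].reshape(-1, 3).astype(float)
    return np.cov(pixels, rowvar=False)

def covariance_distance(cov_a, cov_b, eps=1e-6):
    """Distance between two covariance descriptors via generalized eigenvalues."""
    a = cov_a + eps * np.eye(cov_a.shape[0])   # regularize to keep the matrices positive definite
    b = cov_b + eps * np.eye(cov_b.shape[0])
    eigenvalues = eigh(a, b, eigvals_only=True)
    return float(np.sqrt(np.sum(np.log(eigenvalues) ** 2)))
```

  • A detected foreground patch can then be associated with the tracked foreground patch whose descriptor gives the smallest distance, which remains meaningful even when the foreground object is not moving and the motion flow field carries little information.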
  • Figure 6A illustrates examples of detected foreground patches 603, 604, and 607 and corresponding motion flow fields 602, 605, and 608 and 609, respectively, according to one embodiment of the invention.
  • the detected foreground patch 603 is received by the sort component 510 and the motion flow field 602 is used to associate the detected foreground patch 603 with the existing track 601 since the motion flow field 602 points from the existing track 601 to the detected foreground patch 603.
  • Detected foreground patch 603 is identified as an existing foreground patch and is associated with the existing track 601.
  • the motion flow field 605 that corresponds to the detected foreground patch 604 points from no existing track to the detected foreground patch 604.
  • Detected foreground patch 604 is identified as a discovered foreground patch and added to the tracking data 500.
  • the motion flow fields 608 and 609 are used by the sort component 510 to associate the detected foreground patches 607 with the existing track 606 since the motion flow fields 608 and 609 point from the existing track 606 to the detected foreground patches 607. Therefore, detected foreground patches 607 are identified as an existing foreground patch and are associated with the existing track 606.
  • FIG. 6B illustrates an example of the N tracked foreground patches to one detected foreground patch (N to 1) tracking category, according to one embodiment of the invention.
  • the detected foreground patch 610 is associated with the two existing tracks 611 and 612 by the patch management component 525 using the motion flow field.
  • the detected foreground patch 610 replaces both of the existing tracks 611 and 612.
  • the tracking data 500 is updated by the patch management component 525 to include two tracked foreground patches with the same size and two different identifiers corresponding to the existing tracks 611 and 612.
  • FIG. 6C illustrates an example of the one tracked foreground patch to K detected foreground patches (1 to K) tracking category, according to one embodiment of the invention.
  • the patch management component 525 instructs the covariance matcher component 520 to compute the covariance matrix and determine whether or not detected foreground patches 631, 632, and 633 are associated with the existing track 630 since the motion flow field is not adequate.
  • the detected foreground patches 631, 632, and 633 may replace the existing track 630 to update the tracking data 500.
  • FIG. 6D illustrates an example of the one tracked foreground patch to one detected foreground patch (1 to 1) tracking category, according to one embodiment of the invention.
  • Existing track 640 is the one tracked foreground patch and the detected foreground patch 645 is the one detected foreground patch.
  • the patch management component 525 determines if the size of the foreground patch corresponding to the existing track 640 is similar to the size of the detected foreground patch 645, and, if so, the detected foreground patch 645 is associated with the existing track 640.
  • the size may be considered to be similar when the area of the detected foreground patch 645 is within 60% to 120% of the area of the foreground patch corresponding to the existing track 640.
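  • Expressed in code, that area comparison might look like the following sketch; the 60%-120% band comes directly from the embodiment above, and everything else is illustrative.

```python
def sizes_similar(tracked_box, detected_box, lower=0.6, upper=1.2):
    """Return True when the detected patch area is within 60%-120% of the tracked patch area."""
    _, _, tracked_w, tracked_h = tracked_box
    _, _, detected_w, detected_h = detected_box
    tracked_area = tracked_w * tracked_h
    if tracked_area == 0:
        return False
    ratio = (detected_w * detected_h) / tracked_area
    return lower <= ratio <= upper
```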
  • the patch management component 525 instructs the covariance matcher component 520 to compute the covariance matrix for the detected foreground patch 645 and determine whether or not the detected foreground patch 645 is associated with the existing track 640.
  • the detected foreground patch 645 may replace the existing track 640 to update the tracking data 500.
  • when the patch management component 525 does not receive a detected foreground patch and there is one existing track, the one tracked foreground patch to zero detected foreground patches (1 to 0) tracking category is used.
  • the motion history data may be used by the patch management component 525 to identify a stabilized stationary foreground object corresponding to the one tracked foreground patch.
  • the patch management component 525 instructs the covariance matcher component 520 to compute the covariance matrix for the frame to determine whether or not the foreground patch corresponding to the one tracked foreground patch exists.
  • the tracking data 500 may be updated to remove a tracked foreground object that no longer exists in the scene.
  • Figure 7 illustrates a method for tracking detected foreground patches, according to one embodiment of the invention.
  • the tracker component 210 receives a consistent motion flow field that includes the detected foreground patches and the motion history data.
  • the tracker component 210 also receives the image data 305.
  • the sort component 510 sorts the detected foreground patches into the discovered track and the existing track.
  • the patch management component 525 uses the motion flow field for a frame to categorize the detected foreground patches for the frame by associating the existing tracks and detected foreground patches and identifying any new foreground patches that will be tracked.
  • If, at step 715, the patch management component 525 has successfully categorized all of the detected foreground patches, then at step 730 the patch management component 525 outputs the tracking data 500 with any updates. Otherwise, when the motion flow field does not provide enough information to successfully categorize the first detected foreground patch, at step 720 the patch management component 525 instructs the covariance matcher component 520 to compute the covariance matrix for the frame. At step 725 the covariance matcher component 520 uses the covariance matrix to associate the detected foreground patches with the existing tracks. The patch management component 525 may update the tracking data 500 to remove existing tracks for foreground patches that no longer exist or to add newly detected foreground patches that will be tracked. At step 730 the tracker component 210 outputs the updated tracking data 500.
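  • Tying the steps of Figure 7 together, the loop below sketches one possible control flow: flow-based association first, with a covariance fallback only for detections the flow cannot resolve. The association rule (projecting a detection backwards along its flow vector), the data structures, and the optional fallback hook are illustrative assumptions, not the patent's implementation.

```python
import itertools
import math

_track_ids = itertools.count(1)

def track_frame(detections, existing_tracks, covariance_match=None):
    """One illustrative iteration of the hybrid tracker outlined in Figure 7.

    detections: list of (box, (angle, magnitude)) from the consistent motion flow field,
                where box = (x, y, w, h).
    existing_tracks: list of dicts with keys "id" and "box".
    covariance_match: optional fallback, a function (box, tracks) -> track or None, invoked
                      only when the motion flow field cannot categorize a detection.
    """
    updated, unmatched_tracks = [], list(existing_tracks)

    for box, (angle, magnitude) in detections:
        # Step 710: flow-based association. Project the detected box backwards along its
        # flow vector and look for an overlapping existing track.
        projected = (box[0] - magnitude * math.cos(angle),
                     box[1] - magnitude * math.sin(angle), box[2], box[3])
        track = next((t for t in unmatched_tracks if _overlaps(t["box"], projected)), None)

        # Steps 720-725: covariance fallback for detections the flow could not resolve.
        if track is None and covariance_match is not None:
            track = covariance_match(box, unmatched_tracks)

        if track is not None:
            track["box"] = box
            unmatched_tracks.remove(track)
            updated.append(track)
        else:
            updated.append({"id": next(_track_ids), "box": box})   # discovered foreground patch

    # Step 730: output the updated tracking data; tracks left unmatched (the 1 to 0 case)
    # could be aged out here once the motion history confirms the object is gone.
    return updated + unmatched_tracks

def _overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
```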
  • the tracker component 210 uses a hybrid technique that relies on a combination of the motion flow field and covariance matrices to provide reliable tracking data with the performance needed for real-time surveillance. Reliable tracking is essential for any surveillance system to perform well, and the tracker component 210 provides reliable tracking data at a real-time performance level for processing by the estimator/identifier component 215.

Abstract

Techniques are disclosed for detecting foreground objects in a scene captured by a surveillance system and tracking the detected foreground objects from frame to frame in real time. A motion flow field is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered before the detected foreground objects are provided to the tracking stage. The motion flow field is also used by the tracking stage to improve the performance of the tracking as needed for real-time surveillance applications.

Description

FOREGROUND OBJECT DETECTION IN A VIDEO SURVEILLANCE SYSTEM
BACKGROUND OF THE INVENTION
Field of the Invention
[0001] Embodiments of the invention provide techniques for computationally analyzing a sequence of video frames. More specifically, embodiments of the invention relate to techniques for detecting a foreground object in a scene depicted in the sequence of video frames.
Description of the Related Art
[0002] Some currently available video surveillance systems provide simple object recognition capabilities. For example, a video surveillance system may be configured to classify a group of pixels (referred to as a "foreground object") in a given frame as being a particular object (e.g., a person or vehicle). Once identified, a foreground object may be tracked from frame-to-frame in order to follow the foreground object moving through the scene over time, e.g., a person walking across the field of vision of a video surveillance camera.
[0003] However, such surveillance systems typically rely on a background model to extract foreground object(s) from the scene. The foreground object(s) that are extracted from the background model may be spurious and/or unreliable depending on
characteristics of the camera or the particular environment. Tracking spurious foreground object(s) is undesirable. To further complicate things, the background model may return fragmented foreground objects in a very complex environment, presenting an additional challenge to the tracker. In order for any surveillance system to identify objects, events, behaviors, or patterns as being "normal" or
"abnormal," the foreground objects should be correctly detected and tracked.
Accordingly, what is needed is accurate foreground object detection and tracking that produces reliable results in real time.
SUMMARY OF THE INVENTION
[0004] Embodiments of the invention relate to techniques for detecting a foreground object in a scene captured by a video camera or other recorded video. One
embodiment includes a computer-implemented method for detecting a foreground patch that depicts a foreground object in a sequence of video frames captured by a video camera. The method itself may generally include receiving a video frame, extracting a first foreground patch from the video frame to produce an extracted first foreground patch, and computing a motion flow field for the video frame that includes an angle and magnitude value corresponding to a feature in the video frame relative to the feature in a previous video frame. The motion flow field is filtered to produce a consistent motion flow field for the video frame that includes the extracted first foreground patch as a detected first foreground patch including the feature. The first foreground patch is used to track the foreground object in the sequence of video frames to follow the foreground object over time.
[0005] Another embodiment of the invention includes a computer-readable storage medium containing a program, which when executed on a processor, performs an operation for detecting a foreground patch that depicts a foreground object in a sequence of video frames captured by a video camera. The operation itself may generally include receiving a video frame, extracting a first foreground patch from the video frame to produce an extracted first foreground patch, and computing a motion flow field for the video frame that includes an angle and magnitude value corresponding to a feature in the video frame relative to the feature in a previous video frame. The motion flow field is filtered to produce a consistent motion flow field for the video frame that includes the extracted first foreground patch as a detected first foreground patch including the feature. The first foreground patch is used to track the foreground object in the sequence of video frames to follow the foreground object over time.
[0006] Still another embodiment of the invention provides a system. The system itself may generally include a video input source configured to provide a sequence of video frames, each depicting a scene, a processor and a memory containing a program, which, when executed on the processor is configured to perform an operation for detecting a foreground patch that depicts a foreground object in a sequence of video frames captured by a video camera. The operation itself may generally include receiving a video frame, extracting a first foreground patch from the video frame to produce an extracted first foreground patch, and computing a motion flow field for the video frame that includes an angle and magnitude value corresponding to a feature in the video frame relative to the feature in a previous video frame. The motion flow field is filtered to produce a consistent motion flow field for the video frame that includes the extracted first foreground patch as a detected first foreground patch including the feature. The first foreground patch is used to track the foreground object in the sequence of video frames to follow the foreground object over time.
[0007] A video frame includes one or more appearance values (e.g., RGB color values) for each of a plurality of pixels, and one or more foreground objects are extracted from the video frame. A motion flow field for each video frame in a sequence is used to validate foreground object(s) that are extracted from the background model of a scene. Spurious foreground objects are filtered out of the extracted foreground objects to produce a consistent motion flow field, including the detected foreground objects, that is provided to the tracking stage. The consistent motion flow field is used by the tracking stage to improve the performance of the tracking as needed for real-time surveillance applications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] So that the manner in which the above recited features, advantages, and objects of the present invention are attained and can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to the embodiments illustrated in the appended drawings.
[0009] It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
[0010] Figure 1 illustrates components of a video analysis system, according to one embodiment of the invention.
[0011] Figure 2 further illustrates components of the video analysis system shown in Figure 1, according to one embodiment of the invention.
[0012] Figure 3 illustrates an example of a detector component of the video analysis system shown in Figure 2, according to one embodiment of the invention.
[0013] Figures 4A and 4B illustrate examples of a frame and motion flow fields, according to one embodiment of the invention.
[0014] Figure 4C illustrates a method for detecting foreground patches, according to one embodiment of the invention.
[0015] Figure 4D illustrates a method for performing a step shown in Figure 4C to produce detected foreground patches, according to one embodiment of the invention.
[0016] Figure 5 illustrates an example of a tracker component of the video analysis system shown in Figure 2, according to one embodiment of the invention.
[0017] Figure 6A illustrates examples of detected foreground patches and corresponding motion flow fields, according to one embodiment of the invention.
[0018] Figures 6B, 6C, and 6D illustrate examples of detected foreground patches and existing tracks, according to one embodiment of the invention.
[0019] Figure 7 illustrates a method for tracking detected foreground patches, according to one embodiment of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0020] Embodiments of the invention provide techniques for generating a background model for a complex and/or dynamic scene over a period of
observations without supervision. The approaches described herein allow a background model generated by a computer vision engine to adapt to recognize different background states observed in the scene over time. Thus, the computer vision engine may more accurately distinguish between novel objects (foreground) present in the scene and elements of scene background, particularly for scenes with dynamic or complex backgrounds.
[0021] In one embodiment, an array of Adaptive Resonance Theory (ART) networks is used to generate a background model of the scene. For example, the background model may include a two-dimensional (2D) array of ART networks, where each pixel is modeled using one of the ART networks in the 2D array. When the background model is initiated, the 2D array of ART networks observes the image for regular (or periodic) patterns occurring in the pixel color values. As described in greater detail herein, an ART network may contain multiple clusters, each described by means and variances. The means and the variances for the clusters are updated in each successive video frame. In the context of the present invention, each cluster in an ART network may represent a distinct background state for the corresponding pixel. Additionally, each cluster may be monitored for maturity. When a cluster in the ART network for pixel (x, y) has matured, it is used to classify that pixel as depicting either foreground or background; namely, if the RGB values for a pixel map to a mature cluster, then that pixel is presumed to depict scene background.
[0022] Thus, each ART network in the 2D array models one of the pixels over multiple frames of video by creating new clusters, modifying, merging, and removing clusters from the network, based on the pixel color values for that pixel observed over time. Classification is applied using choice tests and vigilance tests. The choice test measures the distance between two points (the learned point of a cluster vs. the test point) in the RGB space. The vigilance test measures the angle between two points in the RGB space. The similarity measure used for the vigilance test helps prevent the background model from classifying weak shadows as foreground. The creation of a new cluster may indicate either a valid change of a pixel or a noisy pixel. The modification of an existing cluster reinforces the significance/importance of a cluster. The merging of multiple clusters maintains the accuracy, stability, and scalability of the background model. The deletion of a cluster removes a weak belief of a new background/foreground state for the corresponding pixel.
[0023] For example, in a scene where a door is generally always open or closed, the door 'open' and 'close' states could be considered as layers in the proposed background model and therefore be treated as background. Furthermore, noise in the scene may be modeled using multiple clusters in an ART network and therefore be treated as background. Moreover, a random car driving through a camera's field of vision does not result in a new background state because any cluster generated for a pixel depicting the car over a small number of frames is unstable and is eventually deleted when not reinforced. Consequently, the proposed background model is adaptive to complex and dynamic environments in a manner that does not require any supervision; thus, it is suitable for long-term observation in a video surveillance application.
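The per-pixel background model described in paragraphs [0021]-[0023] can be pictured with a short sketch. The Python below is illustrative only and is not the patent's implementation: the PixelModel class is a simplified stand-in for a full ART network (it uses a single distance radius and a maturity count and omits the separate choice and vigilance tests discussed later), and the maturity and radius values are assumptions.

```python
# Illustrative stand-in for one per-pixel ART network: clusters of RGB prototypes with
# a sample count; a pixel is background when its RGB value maps to a mature cluster.
import numpy as np

class PixelModel:
    def __init__(self, maturity=30, radius=25.0):
        self.clusters = []                      # each entry: [prototype_rgb, count]
        self.maturity = maturity                # samples needed before "background"
        self.radius = radius                    # assumed cluster radius in RGB space

    def classify(self, rgb):
        """Update the model with one RGB sample; return True if it maps to background."""
        rgb = np.asarray(rgb, dtype=np.float64)
        for cluster in self.clusters:
            if np.linalg.norm(rgb - cluster[0]) < self.radius:
                cluster[1] += 1
                cluster[0] += (rgb - cluster[0]) / cluster[1]   # running mean update
                return cluster[1] >= self.maturity              # mature -> background
        self.clusters.append([rgb.copy(), 1])   # new cluster: treated as foreground
        return False

# One model per pixel of a 320 x 240 frame, mirroring the 2D array described above.
HEIGHT, WIDTH = 240, 320
models = [[PixelModel() for _ in range(WIDTH)] for _ in range(HEIGHT)]

def foreground_mask(frame):
    """frame: (HEIGHT, WIDTH, 3) RGB array. Returns a boolean mask, True = foreground."""
    mask = np.zeros((HEIGHT, WIDTH), dtype=bool)
    for y in range(HEIGHT):
        for x in range(WIDTH):
            mask[y, x] = not models[y][x].classify(frame[y, x])
    return mask
```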
[0024] Once the background model for a scene has matured, the computer vision engine may compare the pixel values for a given frame with the background image and identify objects as they appear and move about the scene. Typically, when a region of pixels in the scene (referred to as a "blob" or "patch") is classified as depicting
foreground, the patch itself is identified as a foreground object. Once identified, the object may be evaluated by a classifier configured to determine what is depicted by the foreground object (e.g., a vehicle or a person). Further, the computer vision engine may identify features (e.g., height/width in pixels, average color values, shape, area, and the like) used to track the object from frame-to-frame. Further still, the computer vision engine may derive a variety of information while tracking the object from frame-to-frame, e.g., position, current (and projected) trajectory, direction, orientation, velocity, acceleration, size, color, and the like. In one embodiment, the computer vision engine outputs this information as a stream of "context events" describing a collection of kinematic information related to each foreground object detected in the video frames.
[0025] Data output from the computer vision engine may be supplied to the machine-learning engine. In one embodiment, the machine-learning engine may evaluate the context events to generate "primitive events" describing object behavior. Each primitive event may provide some semantic meaning to a group of one or more context events. For example, assume a camera records a car entering a scene, and that the car turns and parks in a parking spot. In such a case, the computer vision engine could initially recognize the car as a foreground object, classify it as being a vehicle, and output kinematic data describing the position, movement, speed, etc., of the car in the context event stream. In turn, a primitive event detector could generate a stream of primitive events from the context event stream such as "vehicle appears," "vehicle turns," "vehicle slowing," and "vehicle stops" (once the kinematic information about the car indicated a speed of 0). As events occur, and re-occur, the machine-learning engine may create, encode, store, retrieve, and reinforce patterns representing the events observed to have occurred, e.g., long-term memories representing a higher-level abstraction of a car parking in the scene - generated from the primitive events underlying the higher-level abstraction. Further still, patterns representing an event of interest may result in alerts passed to users of the behavioral recognition system.
[0026] In the following, reference is made to embodiments of the invention.
However, it should be understood that the invention is not limited to any specifically described embodiment. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the invention. Furthermore, in various embodiments the invention provides numerous advantages over the prior art. However, although embodiments of the invention may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the invention. Thus, the following aspects, features, embodiments and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s).
Likewise, reference to "the invention" shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).
[0027] One embodiment of the invention is implemented as a program product for use with a computer system. The program(s) of the program product defines functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Examples of computer-readable storage media include (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM or DVD-ROM disks readable by an optical media drive) on which information is permanently stored; (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive) on which alterable information is stored. Such computer-readable storage media, when carrying computer-readable instructions that direct the functions of the present invention, are embodiments of the present invention. Other examples of media include communications media through which information is conveyed to a computer, such as through a computer or telephone network, including wireless communications networks.
[0028] In general, the routines executed to implement the embodiments of the invention may be part of an operating system or a specific application, component, program, module, object, or sequence of instructions. The computer program of the present invention is comprised typically of a multitude of instructions that will be translated by the native computer into a machine-readable format and hence executable instructions. Also, programs are comprised of variables and data structures that either reside locally to the program or are found in memory or on storage devices. In addition, various programs described herein may be identified based upon the application for which they are implemented in a specific embodiment of the invention. However, it should be appreciated that any particular program nomenclature that follows is used merely for convenience, and thus the invention should not be limited to use solely in any specific application identified and/or implied by such nomenclature.
[0029] Figure 1 illustrates components of a video analysis and behavior-recognition system 100, according to one embodiment of the present invention. As shown, the behavior-recognition system 100 includes a video input source 105, a network 110, a computer system 115, and input and output devices 118 (e.g., a monitor, a keyboard, a mouse, a printer, and the like). The network 110 may transmit video data recorded by the video input 105 to the computer system 115. Illustratively, the computer system 115 includes a CPU 120, storage 125 (e.g., a disk drive, optical disk drive, floppy disk drive, and the like), and a memory 130 containing both a computer vision engine 135 and a machine-learning engine 140. As described in greater detail below, the computer vision engine 135 and the machine-learning engine 140 may provide software applications configured to analyze a sequence of video frames provided by the video input 105.
[0030] Network 110 receives video data (e.g., video stream(s), video images, or the like) from the video input source 105. The video input source 105 may be a video camera, a VCR, DVR, DVD, computer, web-cam device, or the like. For example, the video input source 105 may be a stationary video camera aimed at a certain area (e.g., a subway station, a parking lot, a building entry/exit, etc.), which records the events taking place therein. Generally, the area visible to the camera is referred to as the "scene." The video input source 105 may be configured to record the scene as a sequence of individual video frames at a specified frame-rate (e.g., 24 frames per second), where each frame includes a fixed number of pixels (e.g., 320 x 240). Each pixel of each frame may specify a color value (e.g., an RGB value) or grayscale value (e.g., a radiance value between 0-255). Further, the video stream may be formatted using known formats, e.g., MPEG2, MJPEG, MPEG4, H.263, H.264, and the like.
[0031] As noted above, the computer vision engine 135 may be configured to analyze this raw information to identify active objects in the video stream, classify the objects, derive a variety of metadata regarding the actions and interactions of such objects, and supply this information to a machine-learning engine 140. And in turn, the machine-learning engine 140 may be configured to evaluate, observe, learn and remember details regarding events (and types of events) that transpire within the scene over time.
[0032] In one embodiment, the machine-learning engine 140 receives the video frames and the data generated by the computer vision engine 135. The machine- learning engine 140 may be configured to analyze the received data, build semantic representations of events depicted in the video frames, detect patterns, and, ultimately, to learn from these observed patterns to identify normal and/or abnormal events.
Additionally, data describing whether a normal/abnormal behavior/event has been determined and/or what such behavior/event is may be provided to output devices 118 to issue alerts, for example, an alert message presented on a GUI interface screen. In general, the computer vision engine 135 and the machine-learning engine 140 both process video data in real-time. However, time scales for processing information by the computer vision engine 135 and the machine-learning engine 140 may differ. For example, in one embodiment, the computer vision engine 135 processes the received video data frame-by-frame, while the machine-learning engine 140 processes data every N-frames. In other words, while the computer vision engine 135 analyzes each frame in real-time to derive a set of information about what is occurring within a given frame, the machine-learning engine 140 is not constrained by the real-time frame rate of the video input.
[0033] Note, however, that Figure 1 illustrates merely one possible arrangement of the behavior-recognition system 100. For example, although the video input source 105 is shown connected to the computer system 115 via the network 110, the network 110 is not always present or needed (e.g., the video input source 105 may be directly connected to the computer system 115). Further, various components and modules of the behavior-recognition system 100 may be implemented in other systems. For example, in one embodiment, the computer vision engine 135 may be implemented as a part of a video input device (e.g., as a firmware component wired directly into a video camera). In such a case, the output of the video camera may be provided to the machine-learning engine 140 for analysis. Similarly, the output from the computer vision engine 135 and machine-learning engine 140 may be supplied over computer network 110 to other computer systems. For example, the computer vision engine 135 and machine-learning engine 140 may be installed on a server system and configured to process video from multiple input sources (i.e., from multiple cameras). In such a case, a client application 250 running on another computer system may request (or receive) the results over network 110.
[0034] Figure 2 further illustrates components of the computer vision engine 135 and the machine-learning engine 140 first illustrated in Figure 1, according to one
embodiment of the present invention. As shown, the computer vision engine 135 includes a detector component 205, a tracker component 210, an estimator/identifier component 215, and a context processor component 220. Collectively, the components 205, 210, 215, and 220 provide a pipeline for processing an incoming sequence of video frames supplied by the video input source 105 (indicated by the solid arrows linking the components). Additionally, the output of one component may be provided to multiple stages of the component pipeline (as indicated by the dashed arrows) as well as to the machine-learning engine 140. In one embodiment, the components 205, 210, 215, and 220 may each provide a software module configured to provide the functions described herein. Of course one of ordinary skill in the art will recognize that the components 205, 210, 215, and 220 may be combined (or further subdivided) to suit the needs of a particular case.
[0035] In one embodiment, the detector component 205 may be configured to separate each frame of video provided by the video input source 105 into a stationary or static part (the scene background) and a collection of volatile parts (the scene foreground). The frame itself may include a two-dimensional array of pixel values for multiple channels (e.g., RGB channels for color video or grayscale channel or radiance channel for black and white video). As noted above, the detector component 205 may model the background states for each pixel using a corresponding ART network. That is, each pixel may be classified as depicting scene foreground or scene background using an ART network modeling a given pixel.
[0036] Additionally, the detector component 205 may be configured to generate a mask used to identify which pixels of the scene are classified as depicting foreground and, conversely, which pixels are classified as depicting scene background. The detector component 205 then identifies regions of the scene that contain a portion of scene foreground (referred to as a foreground "blob" or "patch") and supplies this information to subsequent stages of the pipeline. In one embodiment, a patch may be evaluated over a number of frames before being forwarded to other components of the computer vision engine 135. For example, the detector component 205 may evaluate features of a patch from frame-to-frame to make an initial determination that the patch depicts a foreground agent in the scene as opposed to simply a patch of pixels classified as foreground due to camera noise or changes in scene lighting. Additionally, pixels classified as depicting scene background may be used to update a background image modeling the scene.
[0037] The tracker component 210 may receive the foreground patches produced by the detector component 205 and generate computational models for the patches. The tracker component 210 may be configured to use this information, and each successive frame of raw-video, to attempt to track the motion of the objects depicted by the foreground patches as they move about the scene.
[0038] The estimator/identifier component 215 may receive the output of the tracker component 210 (and the detector component 205) and classify each tracked object as being one of a known category of objects. For example, in one embodiment, the estimator/identifier component 215 may classify a tracked object as being a "person," a "vehicle," an "unknown," or an "other." In this context, the classification of "other" represents an affirmative assertion that the object is neither a "person" nor a "vehicle." Additionally, the estimator/identifier component may identify characteristics of the tracked object, e.g., for a person, a prediction of gender, an estimation of a pose (e.g., standing or sitting) or an indication of whether the person is carrying an object. In an alternative embodiment, the machine-learning engine 140 may classify foreground objects observed by the computer vision engine 135. For example, the machine-learning engine 140 may include an unsupervised classifier configured to observe and distinguish among different agent types (e.g., between people and vehicles) based on a plurality of micro-features (e.g., size, speed, appearance, etc.).
[0039] The context processor component 220 may receive the output from other stages of the pipeline (i.e., the tracked objects, the background and foreground models, and the results of the estimator/identifier component 215). Using this information, the context processor 220 may be configured to generate a stream of context events regarding objects tracked (by the tracker component 210) and classified (by the estimator/identifier component 215). For example, the context processor component 220 may evaluate a foreground object from frame-to-frame and output context events describing that object's height, width (in pixels), position (as a 2D coordinate in the scene), acceleration, velocity, orientation angle, etc.
[0040] The computer vision engine 135 may take the outputs of the components 205, 210, 215, and 220 describing the motions and actions of the tracked objects in the scene and supply this information to the machine-learning engine 140. In one embodiment, the primitive event detector 212 may be configured to receive the output of the computer vision engine 135 (i.e., the video images, the object classifications, and context event stream) and generate a sequence of primitive events - labeling the observed actions or behaviors in the video with semantic meaning. For example, assume the computer vision engine 135 has identified a foreground object and classified that foreground object as being a vehicle and the context processor component 220 estimates the kinematic data regarding the car's position and velocity. In such a case, this information is supplied to the machine-learning engine 140 and the primitive event detector 212. In turn, the primitive event detector 212 may generate a semantic symbol stream providing a simple linguistic description of actions engaged in by the vehicle. For example, a sequence of primitive events related to observations of the computer vision engine 135 occurring at a parking lot could include formal language vectors representing the following: "vehicle appears in scene," "vehicle moves to a given location," "vehicle stops moving," "person appears proximate to vehicle," "person moves," "person leaves scene," "person appears in scene," "person moves proximate to vehicle," "person disappears," "vehicle starts moving," and "vehicle disappears." As described in greater detail below, the primitive event stream may be supplied to excite the perceptual associative memory 230.
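As a toy illustration of how kinematic context events might be reduced to primitive events, consider the following sketch. The event strings, the speed threshold, and the per-object speed history are illustrative assumptions rather than the patent's actual vocabulary or interfaces.

```python
# Toy primitive event detector: emit "starts moving" / "stops moving" symbols from a
# per-frame speed history for one classified object (names and threshold are assumed).
def primitive_events(object_class, speed_history, stop_threshold=0.0):
    events = []
    previously_moving = False
    for speed in speed_history:
        moving = speed > stop_threshold
        if moving and not previously_moving:
            events.append(f"{object_class} starts moving")
        elif previously_moving and not moving:
            events.append(f"{object_class} stops moving")
        previously_moving = moving
    return events

# Example: a vehicle drives through the scene and then parks.
print(primitive_events("vehicle", [5.0, 4.2, 1.1, 0.0, 0.0]))
# ['vehicle starts moving', 'vehicle stops moving']
```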
[0041] Illustratively, the machine-learning engine 140 includes a long-term memory 225, a perceptual memory 230, an episodic memory 235, a workspace 240, codelets 245, and a mapper component 211. In one embodiment, the perceptual memory 230, the episodic memory 235, and the long-term memory 225 are used to identify patterns of behavior, evaluate events that transpire in the scene, and encode and store observations. Generally, the perceptual memory 230 receives the output of the computer vision engine 135 (e.g., the context event stream) and a primitive event stream generated by primitive event detector 212. The episodic memory 235 stores data representing observed events with details related to a particular episode, e.g., information describing time and space details related to an event. That is, the episodic memory 235 may encode specific details of a particular event, i.e., "what and where" something occurred within a scene, such as a particular vehicle (car A) moved to a location believed to be a parking space (parking space 5) at 9:43AM.
[0042] The long-term memory 225 may store data generalizing events observed in the scene. To continue with the example of a vehicle parking, the long-term memory 225 may encode information capturing observations and generalizations learned by an analysis of the behavior of objects in the scene such as "vehicles tend to park in a particular place in the scene," "when parking vehicles tend to move a certain speed," and "after a vehicle parks, people tend to appear in the scene proximate to the vehicle," etc. Thus, the long-term memory 225 stores observations about what happens within a scene with much of the particular episodic details stripped away. In this way, when a new event occurs, memories from the episodic memory 235 and the long-term memory 225 may be used to relate and understand a current event, i.e., the new event may be compared with past experience, leading to reinforcement, decay, and adjustments to the information stored in the long-term memory 225 over time. In a particular embodiment, the long-term memory 225 may be implemented as a binary ART network and a sparse-distributed memory data structure.
[0043] The mapper component 211 may receive the context event stream and the primitive event stream and parse information to multiple ART networks to generate statistical models of what occurs in the scene for different groups of context events and primitive events.
[0044] Generally, the workspace 240 provides a computational engine for the machine-learning engine 140. For example, the workspace 240 may be configured to copy information from the perceptual memory 230, retrieve relevant memories from the episodic memory 235 and the long-term memory 225, select and invoke the execution of one of codelets 245. In one embodiment, each codelet 245 is a software program configured to evaluate different sequences of events and to determine how one sequence may follow (or otherwise relate to) another (e.g., a finite state machine).
More generally, the codelet may provide a software module configured to detect interesting patterns from the streams of data fed to the machine-learning engine. In turn, the codelet 245 may create, retrieve, reinforce, or modify memories in the episodic memory 235 and the long-term memory 225. By repeatedly scheduling codelets 245 for execution, copying memories and percepts to/from the workspace 240, the machine-learning engine 140 performs a cognitive cycle used to observe, and learn, about patterns of behavior that occur within the scene.
FOREGROUND OBJECT DETECTION
[0045] Figure 3 illustrates an example of a detector component 205 of the computer vision engine 135 shown in Figure 2, according to one embodiment of the invention. As shown, the detector component 205 includes a background/foreground component 310, a motion flow field component 315, a merge/split component 325, and a consistent motion flow field component 330.
[0046] The background image output by the background/foreground component 310 generally provides an RGB (or grayscale) value for each pixel in a scene being observed by the computer vision engine 135. The RGB values in the background image specify a color value expected when the background of the scene is visible to the camera. That is, the color values observed in a frame of video when not occluded by a foreground object. The background/foreground component 310 may update the color values of pixels in the background image dynamically while the computer vision engine observes a sequence of video frames.
[0047] In one embodiment, the detector component 205 is configured to receive a current frame of video from an input source (e.g., a video camera) as image data 305. And in response, the background/foreground component 310 classifies each pixel in the frame as depicting scene background or scene foreground. For example, the RGB values for a given pixel may be passed to an input layer of a corresponding ART network in an ART network array. Each ART network in the array provides a
specialized neural network configured to create clusters from a group of inputs (e.g., RGB pixel color values received from frame-to-frame). Each cluster in an ART network may be characterized by a mean and a variance from a prototype input representing that cluster (i.e., from an RGB value representing that cluster). The prototype is generated first, as a copy of the input vector used to create a new cluster (i.e., from the first set of RGB values used to create the new cluster). Subsequently, as new input RGB values are mapped to an existing cluster, the prototype RGB values (and the mean and variance for that cluster) may be updated using the input RGB values.
[0048] Additionally, the background/foreground component 310 may track how many input vectors (e.g., RGB pixel color values) map to a given cluster. Once a cluster has "matured," the background/foreground component 310 classifies a pixel mapping to that cluster as depicting scene background. In one embodiment, a cluster is "matured" once a minimum number of input RGB values have mapped to that cluster. Conversely, the background/foreground component 310 may classify pixels mapping to a cluster that has not matured (or pixels that result in a new cluster) as depicting an element of scene foreground.
[0049] For example, in the context of the present invention, an ART network receives a vector storing the RGB color values of a pixel in a frame of image data 305. The particular ART network receives the RGB pixel color values for that same pixel from frame-to-frame. In response, the ART network may either update an existing cluster or create a new cluster, as determined using a choice and a vigilance test for the ART network. The choice and vigilance tests are used to evaluate the RGB input values passed to the ART network. The choice test may be used to rank the existing clusters, relative to the vector input RGB values. In one embodiment, the choice test may compute a Euclidean distance in RGB space between each cluster and the input RGB value, and the resulting distances can be ranked by magnitude (where smaller distances are ranked higher than greater distances). Once ranked, the vigilance test evaluates the existing clusters to determine whether to map the RGB input to one of the ranked clusters. In one embodiment, the vigilance test may compute a cosine angle between the two points (relative to a <0, 0, 0> origin of RGB space).
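A minimal sketch of the choice and vigilance tests as just described, with the Euclidean distance used for ranking and the cosine of the angle (relative to the RGB origin) used for acceptance. The vigilance threshold and the cluster representation (prototype RGB means) are assumptions.

```python
# Choice test: rank clusters by Euclidean distance in RGB space (closest first).
# Vigilance test: accept a cluster only if the angle between the input and the cluster
# prototype (measured from the <0, 0, 0> origin) is small enough.
import numpy as np

def choice_rank(rgb, prototypes):
    rgb = np.asarray(rgb, dtype=np.float64)
    distances = [np.linalg.norm(rgb - np.asarray(p, dtype=np.float64)) for p in prototypes]
    return list(np.argsort(distances))

def passes_vigilance(rgb, prototype, vigilance=0.95):
    a = np.asarray(rgb, dtype=np.float64)
    b = np.asarray(prototype, dtype=np.float64)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    return cos_angle >= vigilance      # a small angle also tolerates weak shadows

def map_to_cluster(rgb, prototypes):
    """Return the index of the cluster to update, or None to create a new cluster."""
    for idx in choice_rank(rgb, prototypes):
        if passes_vigilance(rgb, prototypes[idx]):
            return idx
    return None
```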
[0050] If no cluster is found to update using the RGB values supplied to the input layer (evaluated using the ranked clusters) then a new cluster is created. Subsequent input vectors that most closely resemble the new cluster (also as determined using the choice and vigilance tests) are then used to update that cluster. As is known, the vigilance parameter has considerable influence on an ART network; higher vigilance values produce more fine-grained clusters, while lower vigilance values result in fewer, more general clusters. In one embodiment, the ART networks may provide dynamic cluster sizes. For example, each cluster may be given an initial shape and size, such as a radius of 5-10. Each new input to a given ART network in array 315 is then used to update the size of a cluster for each dimension of input data (or create a new cluster).
[0051] Additionally, in one embodiment, the ART networks may also be configured to provide for cluster decay. For example, each ART network may be configured to remove a cluster that has not been reinforced. In such a case, if a new cluster is created, but no new inputs (e.g., RGB values) map to that cluster for a specified period, then that ART network may simply remove the cluster. Doing so prevents transient elements (namely, foreground objects which occlude the background) from being misclassified as scene background.
[0052] As clusters emerge in the ART networks, the background/foreground component 310 may evaluate the ART networks to classify each pixel in the input video frame as depicting scene foreground or scene background. Additionally, the
background/foreground component 310 may be configured to update the background image using the RGB values of pixels classified as depicting scene background. For example, in one embodiment, the current background image may be updated using the input frame as follows. First, each pixel appearance value (e.g., RGB values) is mapped to the ART network corresponding to that pixel. If a given pixel maps to a cluster determined to model a background state, then that pixel is assigned a color value based on that cluster. Namely, each cluster has a mean which may be used to derive a set of RGB color values. In particular, the RGB values that would map directly to the mean value in the cluster.
[0053] For pixels in a frame of image data 305 with appearance values that do not map to a cluster classified as background, the mean for the closest cluster (determined using a Euclidean distance measure) may be used to select an RGB value.
Alternatively, as the background elements in the scene may have been occluded by a foreground agent, the RGB values in the current background image may remain unchanged. For a scene with multiple background states, this latter approach leaves the background image in the last observed state. For example, consider a person standing in front of a closed elevator door. In such a case, the last observed pixel RGB values may correspond to the color of the closed elevator door. When the person (a foreground object) occludes the door (e.g., while waiting for the elevator doors to open), the occluded pixels retain the last observation (state) while other pixels in the frame mapping to background clusters in the ART networks are updated.
[0054] Once each pixel in the input frame is classified, the background/foreground component 310 may be configured to identify contiguous regions of pixels classified as foreground. Such regions identify a foreground patch that is included in FG/BG image data 335 and output to the motion flow field component 315 along with the image data 305. As noted, the background/foreground component 310 may evaluate a patch over a number of frames before forwarding a patch to other elements of the computer vision engine 135, e.g., to ensure that a given foreground patch is not the result of camera noise or changes in scene lighting. Additionally, the current background image may be provided to other components of the computer vision engine 135 or the machine-learning engine 140, after being updated with each successive frame.
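One way to realize the "contiguous regions of pixels classified as foreground" step is a connected-component pass over the foreground mask. The sketch below leans on OpenCV purely for convenience; the library choice and the minimum-area filter are assumptions, not part of the patent.

```python
# Group contiguous foreground pixels into candidate patches (bounding boxes).
import cv2
import numpy as np

def extract_patches(mask, min_area=50):
    """mask: boolean array, True where foreground. Returns a list of (x, y, w, h) boxes."""
    mask_u8 = mask.astype(np.uint8)
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask_u8, connectivity=8)
    patches = []
    for label in range(1, num_labels):          # label 0 is the background component
        x, y, w, h, area = stats[label]
        if area >= min_area:                    # drop tiny regions likely due to noise
            patches.append((int(x), int(y), int(w), int(h)))
    return patches
```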
[0055] The foreground patches that are extracted by the background/foreground component 310 from the background model may be spurious and/or unreliable depending on the characteristics of the camera and/or the scene environment.
Additionally, some of the foreground patches may be fragments of a single object and those foreground patches should be merged together for more efficient tracking. The motion flow field component 315, merge/split component 325, and consistent motion flow field component 330 are used to produce validated foreground patches by filtering out spurious foreground patches extracted by the background/foreground component 310 and to reduce the tracking workload for the tracker component 210. The detected foreground patches depict foreground objects that are tracked over time.
[0056] The motion flow field component 315 receives the image data 305 and computes the motion flow field for each frame using optical flow processing techniques. The motion flow field specifies vectors including an angle and magnitude for features in the frame. The features correspond to one or more foreground patches that exist in multiple frames in a sequence. The motion flow field that is computed for each frame by the motion flow field component 315 is output to the merge/split component 325. The difference in color or grayscale values for each pixel between two sequential frames is the motion history data. A single angle and magnitude may be computed for feature vectors that are included within an extracted foreground patch by averaging the angle and magnitude values or combining the angle and magnitude values using another function to produce a feature flow field.
[0057] The motion flow field component 315 outputs the motion flow field and the extracted foreground patches to the merge/split component 325, which merges and/or splits the extracted foreground patches to produce detected foreground patches. The merge/split component 325 may also discard spurious extracted foreground patches based on the motion flow field. An extracted foreground patch may be split into multiple foreground patches or merged with another extracted foreground patch, as described in conjunction with Figures 4A and 4B. One or more of the feature vectors in the motion flow field are associated with each detected foreground patch.
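The per-frame quantities in paragraph [0056] can be sketched as follows. OpenCV's Farneback optical flow stands in for the unspecified "optical flow processing techniques," the motion history is the per-pixel frame difference, and averaging the flow vectors inside a patch is one of the combining functions mentioned above; all parameter values are assumptions.

```python
# Motion flow field, motion history data, and a per-patch feature flow (angle, magnitude).
import cv2
import numpy as np

def motion_flow_field(prev_gray, curr_gray):
    """Dense optical flow: one (dx, dy) vector per pixel between two grayscale frames."""
    return cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def motion_history(prev_gray, curr_gray):
    """Motion history data: per-pixel difference between two sequential frames."""
    return cv2.absdiff(curr_gray, prev_gray)

def feature_flow(flow, patch):
    """Collapse the flow inside a patch (x, y, w, h) into a single angle and magnitude."""
    x, y, w, h = patch
    region = flow[y:y + h, x:x + w]
    dx = float(np.mean(region[..., 0]))
    dy = float(np.mean(region[..., 1]))
    angle = np.degrees(np.arctan2(dy, dx)) % 360.0
    magnitude = float(np.hypot(dx, dy))
    return angle, magnitude
```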
[0058] The consistent motion flow field component 330 receives the detected foreground patches and the motion flow field and produces the consistent motion flow field. A detected foreground patch is determined to be reliable (not spurious) by the merge/split component 325 when a consistent flow field is observed for the detected foreground patch over a predetermined number of sequential frames. Detected foreground patches and feature vectors that are not reliable are removed by the consistent motion flow field component 330. In some embodiments, the predetermined number of sequential frames is three. The consistent motion flow field component 330 outputs the consistent motion flow field to the tracker component 210 as part of the FG/BG image data 335.
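A sketch of the reliability filter: a patch only survives once its feature flow has pointed in roughly the same direction for three sequential frames (the example count given above). The per-patch bookkeeping and the angle tolerance are assumptions.

```python
# Keep a detected patch only after its flow direction has been consistent for
# REQUIRED_FRAMES sequential frames; otherwise it is treated as spurious.
from collections import defaultdict

REQUIRED_FRAMES = 3
ANGLE_TOLERANCE = 45.0                      # degrees; assumed consistency tolerance

flow_history = defaultdict(list)            # patch identifier -> recent flow angles

def angular_difference(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def is_consistent(patch_id, angle):
    history = flow_history[patch_id]
    history.append(angle)
    if len(history) > REQUIRED_FRAMES:
        history.pop(0)
    if len(history) < REQUIRED_FRAMES:
        return False
    # Consistent when every retained angle stays close to the most recent observation.
    return all(angular_difference(a, history[-1]) <= ANGLE_TOLERANCE for a in history)
```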
[0059] Figure 4A illustrates examples of frame and motion flow fields 406, 407, 408, and 409 for extracted foreground patches 401, 402, 403, and 404, respectively, in a frame 400, according to one embodiment of the invention. The merge/split component 325 joins extracted foreground patches that have been fragmented using the image data 305 to distinguish between the foreground and background color. Additionally, the merge/split component 325 joins extracted foreground patches that have similar direction and magnitude according to the feature flow field. As shown in Figure 4A, the direction (angle) and magnitude of the motion flow fields 406, 407, 408, and 409 are similar. Therefore, extracted foreground patches 401, 402, 403, and 404 are merged by the merge/split component 325 to produce a single detected foreground patch.
[0060] Figure 4B illustrates examples of frame and motion flow fields for an extracted foreground patch that is split into two detected foreground patches, in a frame 410, according to one embodiment of the invention. The merge/split component 325 splits extracted foreground patches that include separate objects based on the motion flow fields. Specifically, an extracted foreground patch is split when the motion flow field for the extracted foreground patch includes flows in opposing directions. The extracted foreground patch may also be split based on flow fields that have different magnitudes and that indicate a fast moving object. As shown in Figure 4B, the direction (angle) and magnitude of the motion flow fields 412 and 416 are opposing. Therefore, an extracted foreground patch is split into two different detected foreground patches 411 and 415 by the merge/split component 325.
[0061] Figure 4C illustrates a method for detecting foreground patches and producing a consistent motion flow field, according to one embodiment of the invention. At step 420 the detector component 205 receives the image data for a frame of video. At step 425 the motion flow field component 315 generates a motion flow field for the frame and performs foreground patch extraction to produce extracted foreground patches that depict foreground objects, as described in further detail in conjunction with Figure 4D. At step 450 the merge/split component 325 receives the image data, extracted foreground patches, and motion flow field from the motion flow field component 315. At step 455 the merge/split component 325 merges any extracted foreground patches that have been fragmented. In one embodiment, the merging may occur to combine non-overlapping foreground patches that have consistent motion flow fields. That is, distinct foreground patches with consistent motion flow features may be treated as depicting different elements of a common foreground object. At step 460 the merge/split component 325 splits any extracted foreground patches that correspond to separate objects. At step 465 the detector component 205 outputs a consistent motion flow field that specifies the detected foreground patches and feature flow vectors following the merging and/or splitting of the extracted foreground patches.
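The merge and split decisions of steps 455 and 460 can be sketched in terms of the per-patch feature flow, assuming each patch's flow is summarized as an (angle in degrees, magnitude) pair as in the earlier sketch. The angle and magnitude thresholds below are illustrative assumptions; the patent only requires "similar" flows for merging and "opposing" flows for splitting.

```python
# Merge patches whose feature flows agree; split a patch whose internal flows oppose.
def _angle_diff(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def should_merge(flow_a, flow_b, angle_tol=30.0, mag_ratio=0.5):
    """flow_a, flow_b: (angle_degrees, magnitude) for two extracted patches."""
    (angle_a, mag_a), (angle_b, mag_b) = flow_a, flow_b
    magnitudes_close = min(mag_a, mag_b) >= mag_ratio * max(mag_a, mag_b)
    return _angle_diff(angle_a, angle_b) <= angle_tol and magnitudes_close

def should_split(flows_within_patch, opposing_angle=150.0):
    """Split when any two flow vectors inside one extracted patch point in opposing
    directions (as with the opposing flows 412 and 416 in Figure 4B)."""
    for i in range(len(flows_within_patch)):
        for j in range(i + 1, len(flows_within_patch)):
            a = flows_within_patch[i][0]
            b = flows_within_patch[j][0]
            if _angle_diff(a, b) >= opposing_angle:
                return True
    return False
```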
[0062] Figure 4D illustrates a method for performing step 425 of Figure 4C to produce detected foreground patches, according to one embodiment of the invention. At step 430 the background model is determined by the background/foreground component 310. At step 432 the background/foreground component 310 uses the background model to extract foreground patches from each frame of the image data 305.
[0063] The motion flow field component 315 receives the image data 305 and produces detected foreground patches. Steps 435, 440, and 442 are independent of steps 430 and 432 and may be performed in parallel with steps 430 and 432. At step 435 the motion flow field component 315 computes the feature flow field using the motion flow field. At step 440 the motion flow field component 315 computes the motion history data using the motion flow field. Steps 435 and 440 may be performed in parallel by the motion flow field component 315.
[0064] The detector component 205 filters the foreground patches to produce detected foreground patches using a motion flow field since some of the extracted foreground patches may be spurious and/or unreliable depending on the characteristic of the camera and/or the scene environment. During the filtering process some of the extracted foreground patches may be merged together and other extracted foreground patches may be split for more efficient tracking.
FOREGROUND OBJECT TRACKING
[0065] Figure 5 illustrates an example of the tracker component 210 of the computer vision engine 135 shown in Figure 2, according to one embodiment of the invention. Reliable tracking is essential for any surveillance system to perform well and a tracker should provide reliable tracking data for the estimator/identifier component 215 to consume. Due to the heavy computational workload of tracking operations, a conventional tracker may not be able to provide reliable tracking data fast enough for a practical real-time surveillance system. In contrast with conventional trackers, the tracker component 210 uses a hybrid technique that relies on a combination of the motion flow field and covariance matrices to provide reliable tracking data with the performance needed for real-time surveillance.
[0066] The tracker component 210 includes a sort component 510, a patch management component 525, and a covariance matcher component 520. The tracker component 210 receives the FG/BG image data 335 that includes the detected foreground patches and the consistent motion flow field from the detector component 205 and produces reliable tracking data 500 that is output to the estimator/identifier component 215. The sort component 510 sorts the detected foreground patches (or detected rectangles) into discovered foreground patches and existing foreground patches. The discovered foreground patches do not correspond with any existing foreground patches that were present in previous frames. The discovered foreground patches are used during the processing of subsequent frames to differentiate between discovered and existing foreground patches. The existing foreground patches were present in previous frames.
[0067] The patch management component 525 receives the detected foreground patches and categorizes the detected foreground patches into one of several categories based on the consistent motion flow field and, if needed, the covariance matrices. The categories include N tracked foreground patches to one detected foreground patch (N to 1), one tracked foreground patch to K detected foreground patches (1 to K), one tracked foreground patch to one detected foreground patch (1 to 1), and one tracked foreground patch to zero detected foreground patches (1 to 0). The covariance matcher component 520 computes the covariance matrix for each frame as instructed by the patch management component 525. The covariance matrix is used by the patch management component 525 to resolve difficult detected foreground patches.
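Purely as an illustration, the four categories can be assigned from simple association counts produced by the flow-field matching step; the count-based interface below is an assumption about how that bookkeeping might look.

```python
# Assign one of the four tracking categories from flow-field association counts.
from enum import Enum

class TrackCategory(Enum):
    N_TO_1 = "N tracked foreground patches to one detected foreground patch"
    ONE_TO_K = "one tracked foreground patch to K detected foreground patches"
    ONE_TO_ONE = "one tracked foreground patch to one detected foreground patch"
    ONE_TO_ZERO = "one tracked foreground patch to zero detected foreground patches"

def categorize(tracks_for_patch, patches_for_track):
    """tracks_for_patch: number of existing tracks associated with a detected patch;
    patches_for_track: number of detected patches associated with an existing track."""
    if patches_for_track == 0:
        return TrackCategory.ONE_TO_ZERO
    if tracks_for_patch > 1:
        return TrackCategory.N_TO_1
    if patches_for_track > 1:
        return TrackCategory.ONE_TO_K
    return TrackCategory.ONE_TO_ONE
```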
[0068] The covariance matcher component 520 uses a number of features of the image data 305 to produce a covariance matrix that measures the features of a foreground patch, and solves a generalized eigenvalue problem to compare covariance matrices. In most cases, three features, e.g., R, G, and B color components, are used to produce the covariance matrix. The covariance matrix is particularly useful for associating a detected foreground patch with a tracked foreground patch when the foreground object is not moving.
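A sketch of a covariance-based appearance match, assuming the three features are simply the R, G, and B values of the pixels in a patch (the common case noted above) and that patch dissimilarity is measured through the generalized eigenvalues of the two covariance matrices, a standard choice for comparing covariance descriptors. The regularization term and the decision threshold are assumptions.

```python
# Covariance descriptor of a patch over its RGB features, compared via the generalized
# eigenvalues of the two covariance matrices (distance 0 when the matrices are equal).
import numpy as np
from scipy.linalg import eigh

def patch_covariance(patch_rgb):
    """patch_rgb: (h, w, 3) array. Returns the 3x3 covariance of the RGB features."""
    features = patch_rgb.reshape(-1, 3).astype(np.float64)
    return np.cov(features, rowvar=False) + 1e-6 * np.eye(3)   # regularized for stability

def covariance_distance(cov_a, cov_b):
    eigenvalues = eigh(cov_a, cov_b, eigvals_only=True)        # generalized eigenvalues
    return float(np.sqrt(np.sum(np.log(eigenvalues) ** 2)))

def appearance_match(track_patch_rgb, detected_patch_rgb, threshold=1.0):
    d = covariance_distance(patch_covariance(track_patch_rgb),
                            patch_covariance(detected_patch_rgb))
    return d < threshold
```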
[0069] Figure 6A illustrates examples of detected foreground patches 603, 604, and 607 and corresponding motion flow fields 602, 605, and 608 and 609, respectively, according to one embodiment of the invention. The detected foreground patch 603 is received by the sort component 510 and the motion flow field 602 is used to associate the detected foreground patch 603 with the existing track 601 since the motion flow field 602 points from the existing track 601 to the detected foreground patch 603. Detected foreground patch 603 is identified as an existing foreground patch and is associated with the existing track 601. In contrast, the motion flow field 605 that corresponds to the detected foreground patch 604 points from no existing track to the detected foreground patch 604. Detected foreground patch 604 is identified as a discovered foreground patch and added to the tracking data 500. The motion flow fields 608 and 609 are used by the sort component 510 to associate the detected foreground patches 607 with the existing track 606 since the motion flow fields 608 and 609 point from the existing track 606 to the detected foreground patches 607. Therefore, detected foreground patches 607 are identified as existing foreground patches and are associated with the existing track 606.
[0070] Figure 6B illustrates an example of the N tracked foreground patches to one detected foreground patch (N to 1) tracking category, according to one embodiment of the invention. Existing tracks 611 and 612 are the N=2 tracked foreground patches, and detected foreground patch 610 is the one detected foreground patch. The detected foreground patch 610 is associated with the two existing tracks 611 and 612 by the patch management component 525 using the motion flow field. The detected foreground patch 610 replaces both of the existing tracks 611 and 612. The tracking data 500 is updated by the patch management component 525 to include two tracked foreground patches with the same size and two different identifiers corresponding to the existing tracks 611 and 612.
[0071] Figure 6C illustrates an example of the one tracked foreground patch to K detected foreground patches (1 to K) tracking category, according to one embodiment of the invention. Existing track 630 is the one tracked foreground patch and the detected foreground patches 631, 632, and 633 are the K=3 detected foreground patches. The patch management component 525 instructs the covariance matcher component 520 to compute the covariance matrix and determine whether or not detected foreground patches 631, 632, and 633 are associated with the existing track 630 since the motion flow field is not adequate. The detected foreground patches 631, 632, and 633 may replace the existing track 630 to update the tracking data 500.
[0072] Figure 6D illustrates an example of the one tracked foreground patch to one detected foreground patch (1 to 1) tracking category, according to one embodiment of the invention. Existing track 640 is the one tracked foreground patch and the detected foreground patch 645 is the one detected foreground patch. The patch management component 525 determines if the size of the foreground patch corresponding to the existing track 640 is similar to the size of the detected foreground patch 645, and, if so, the detected foreground patch 645 is associated with the existing track 640. The size may be considered to be similar when the area of the detected foreground patch 645 is within 60% to 120% of the area of the foreground patch corresponding to the existing track 640. When the size is not similar, the patch management component 525 instructs the covariance matcher component 520 to compute the covariance matrix for the detected foreground patch 645 and determine whether or not the detected foreground patch 645 is associated with the existing track 640. The detected foreground patch 645 may replace the existing track 640 to update the tracking data 500.
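The 1 to 1 size test can be written directly from the 60% to 120% range given above; the (x, y, w, h) box representation is an assumption.

```python
# Size similarity for the 1 to 1 category: the detected patch area must fall within
# 60% to 120% of the tracked patch area.
def similar_size(track_box, detected_box, low=0.60, high=1.20):
    track_area = track_box[2] * track_box[3]          # width * height
    detected_area = detected_box[2] * detected_box[3]
    if track_area == 0:
        return False
    ratio = detected_area / track_area
    return low <= ratio <= high
```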
[0073] When the patch management component 525 does not receive a detected foreground patch and there is one existing track, the one tracked foreground patch to zero detected foreground patches (1 to 0) tracking category is used. The motion history data may be used by the patch management component 525 to identify a stabilized stationary foreground object corresponding to the one tracked foreground patch. When there is no motion history data, the patch management component 525 instructs the covariance matcher component 520 to compute the covariance matrix for the frame to determine whether or not the foreground patch corresponding to the one tracked foreground patch exists. The tracking data 500 may be updated to remove a tracked foreground object that no longer exists in the scene.
[0074] Figure 7 illustrates a method for tracking detected foreground patches, according to one embodiment of the invention. At step 700 the tracker component 210 receives a consistent motion flow field that includes the detected foreground patches and the motion history data. The tracker component 210 also receives the image data 305. At step 705 the sort component 510 sorts the detected foreground patches into the discovered track and the existing track. At step 710 the patch management component 525 uses the motion flow field for a frame to categorize the detected foreground patches for the frame by associating the existing tracks and detected foreground patches and identifying any new foreground patches that will be tracked. If, at step 715 the patch management component 525 has successfully categorized all of the detected foreground patches, then at step 730 the patch management component 525 outputs the tracking data 500 with any updates. Otherwise, when the motion flow field does not provide enough information to successfully categorize the first detected foreground patch, at step 720 the patch management component 525 instructs the covariance matcher component 520 to compute the covariance matrix for the frame. At step 725 the covariance matcher component 520 uses the covariance matrix to associate the detected foreground patches with the existing tracks. The patch management component 525 may update the tracking data 500 to remove existing tracks for foreground patches that no longer exist or to add newly detected foreground patches that will be tracked. At step 730 the tracker component 210 outputs the updated tracking data 500.
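At a high level, the Figure 7 loop tries the inexpensive flow-field categorization first and falls back to the covariance matcher only for the patches the flow field cannot resolve. The sketch below is an assumption about the surrounding data structures: tracks and patches are plain dictionaries, and the two matcher callables stand in for steps 710 and 720-725.

```python
# Hybrid update: flow-field categorization first, covariance matching only as a fallback.
def update_tracking_data(tracks, detected_patches,
                         categorize_with_flow, match_with_covariance):
    """tracks: list of {'id': int, 'box': (x, y, w, h)}; detected_patches: list of
    {'box': (x, y, w, h)}. Each matcher returns (associations, leftovers), where an
    association is a (track_or_None, patch) pair and None marks a discovered patch."""
    associations, leftovers = categorize_with_flow(tracks, detected_patches)  # step 710
    if leftovers:                                                             # step 715
        resolved, _ = match_with_covariance(tracks, leftovers)                # steps 720-725
        associations += resolved
    updated, next_id = [], 1 + max((t["id"] for t in tracks), default=0)
    for track, patch in associations:                                         # step 730
        if track is None:
            track, next_id = {"id": next_id}, next_id + 1   # newly discovered patch
        updated.append({"id": track["id"], "box": patch["box"]})
    return updated              # tracks with no surviving association are dropped here
```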
[0075] The tracker component 210 uses a hybrid technique that combines the motion flow field with covariance matrices to provide reliable tracking data at the performance level required for real-time surveillance. Reliable tracking is essential for a surveillance system to perform well, and the tracker component 210 delivers that tracking data in real time for further processing by the estimator/identifier component 215.
[0076] While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims

WHAT IS CLAIMED IS:
1. A computer-implemented method for detecting a foreground patch that depicts a foreground object in a sequence of video frames captured by a video camera, the method comprising:
receiving a video frame in the sequence of video frames;
extracting a first foreground patch from the video frame to produce an extracted first foreground patch;
computing a motion flow field for the video frame that includes an angle and magnitude value corresponding to a feature in the video frame relative to the feature in a previous video frame;
filtering the motion flow field to produce a consistent motion flow field for the video frame that includes the extracted first foreground patch as a detected first foreground patch that includes the feature; and
tracking the foreground object in the sequence of video frames based on the detected first foreground patch in order to follow the foreground object over time.
2. The method of claim 1, wherein the motion flow field includes only a single angle and only a single magnitude value for the feature.
3. The method of claim 2, wherein the single angle is an average of the angles in the motion flow field within the feature and the single magnitude value is an average of the magnitude values in the motion flow field within the feature.
4. The method of claim 1, wherein filtering the motion flow field includes
determining that the extracted first foreground patch is detected for multiple video frames in the sequence of video frames.
5. The method of claim 1, further comprising generating motion history data for the video frame as a difference between a color component for each pixel in the video frame and the color component for each pixel in the previous video frame.
6. The method of claim 1, wherein the filtering includes merging an extracted second foreground patch with the extracted first foreground patch to produce the first detected foreground patch.
7. The method of claim 1, wherein the filtering includes splitting the extracted first foreground patch into a first portion that is the first detected foreground patch and a second portion that is a second detected foreground patch.
8. A computer-readable storage medium containing a program which, when executed on a processor, performs an operation for detecting a foreground patch that depicts a foreground object in a sequence of video frames captured by a video camera, the operation comprising:
receiving a video frame in the sequence of video frames;
extracting a first foreground patch from the video frame to produce an extracted first foreground patch;
computing a motion flow field for the video frame that includes an angle and magnitude value for each pixel corresponding to a feature in the video frame relative to the feature in a previous video frame;
filtering the motion flow field to produce a consistent motion flow field for the video frame that includes the extracted first foreground patch as a detected first foreground patch that includes the feature; and
tracking the foreground object in the sequence of video frames based on the detected first foreground patch in order to follow the foreground object over time.
9. The computer-readable storage medium of claim 8, wherein the motion flow field includes only a single angle and only a single magnitude value for the feature.
10. The computer-readable storage medium of claim 9, wherein the single angle is an average of the angles within the feature and the single magnitude value is an average of the magnitude values within the feature.
11. The computer-readable storage medium of claim 8, wherein the filtering of the motion flow field includes determining that the extracted first foreground patch is detected for multiple video frames in the sequence of video frames.
12. The computer-readable storage medium of claim 8, further comprising generating motion history data for the video frame as a difference between a color component for each pixel in the video frame and the color component for each pixel in the previous video frame.
13. The computer-readable storage medium of claim 8, wherein the filtering includes merging an extracted second foreground patch with the extracted first foreground patch to produce the first detected foreground patch.
14. The computer-readable storage medium of claim 8, wherein the filtering includes splitting the extracted first foreground patch into a first portion that is the first detected foreground patch and a second portion that is a second detected foreground patch.
15. A system, comprising:
a video input source configured to provide a sequence of video frames;
a processor; and
a memory containing a program which, when executed on the processor, is configured to perform an operation for detecting a foreground patch that depicts a foreground object in the sequence of video frames, the operation comprising:
receiving a video frame in the sequence of video frames;
extracting a first foreground patch from the video frame to produce an extracted first foreground patch;
computing a motion flow field for the video frame that includes an angle and magnitude value for each pixel corresponding to a feature in the video frame relative to the feature in a previous video frame;
filtering the motion flow field to produce a consistent motion flow field for the video frame that includes the extracted first foreground patch as a detected first foreground patch that includes the feature; and
tracking the foreground object in the sequence of video frames based on the detected first foreground patch in order to follow the foreground object over time.
16. The system of claim 15, wherein the motion flow field includes only a single angle and only a single magnitude value for the feature.
17. The system of claim 16, wherein the single angle is an average of the angles within the feature and the single magnitude value is an average of the magnitude values within the feature.
18. The system of claim 15, further comprising generating motion history data for the video frame as a difference between a color component for each pixel in the video frame and the color component for each pixel in the previous video frame.
19. The system of claim 15, wherein the filtering includes merging a second foreground patch with the extracted first foreground patch to produce the first detected foreground patch.
20. The system of claim 15, wherein the filtering includes splitting the extracted first foreground patch into a first portion that is the first detected foreground patch and a second portion that is a second detected foreground patch.
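Purely as a non-authoritative illustration of the computations recited in claims 2-3, 9-10, and 16-17 (a single averaged angle and magnitude per feature) and in claims 5, 12, and 18 (motion history data as a per-pixel color-component difference), the operations might be sketched as follows; the NumPy usage and array shapes are assumptions, not part of the claims.

import numpy as np

def feature_flow(angles, magnitudes, feature_mask):
    """Collapse the motion flow field to a single angle and a single
    magnitude for one feature by averaging the values inside the
    feature's boolean mask (claims 2-3, 9-10, and 16-17)."""
    return angles[feature_mask].mean(), magnitudes[feature_mask].mean()

def motion_history(frame, previous_frame):
    """Motion history data as the per-pixel difference of a color
    component between the current and previous video frame
    (claims 5, 12, and 18). Inputs are HxW arrays of one color component."""
    return frame.astype(np.int16) - previous_frame.astype(np.int16)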
PCT/US2010/045224 2009-09-01 2010-08-11 Foreground object detection in a video surveillance system WO2011028380A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP10814147.4A EP2474163A4 (en) 2009-09-01 2010-08-11 Foreground object detection in a video surveillance system
BR112012004568A BR112012004568A2 (en) 2009-09-01 2010-08-11 "foreground object detection in a video surveillance system"

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/552,210 US8218819B2 (en) 2009-09-01 2009-09-01 Foreground object detection in a video surveillance system
US12/552,210 2009-09-01

Publications (2)

Publication Number Publication Date
WO2011028380A2 true WO2011028380A2 (en) 2011-03-10
WO2011028380A3 WO2011028380A3 (en) 2011-06-03

Family

ID=43624971

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/045224 WO2011028380A2 (en) 2009-09-01 2010-08-11 Foreground object detection in a video surveillance system

Country Status (4)

Country Link
US (1) US8218819B2 (en)
EP (1) EP2474163A4 (en)
BR (1) BR112012004568A2 (en)
WO (1) WO2011028380A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2011253910B2 (en) * 2011-12-08 2015-02-26 Canon Kabushiki Kaisha Method, apparatus and system for tracking an object in a sequence of images

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2569411T3 (en) 2006-05-19 2016-05-10 The Queen's Medical Center Motion tracking system for adaptive real-time imaging and spectroscopy
US8218818B2 (en) * 2009-09-01 2012-07-10 Behavioral Recognition Systems, Inc. Foreground object tracking
JP5570176B2 (en) * 2009-10-19 2014-08-13 キヤノン株式会社 Image processing system and information processing method
KR101355974B1 (en) * 2010-08-24 2014-01-29 한국전자통신연구원 Method and devices for tracking multiple object
US8817087B2 (en) * 2010-11-01 2014-08-26 Robert Bosch Gmbh Robust video-based handwriting and gesture recognition for in-car applications
AU2010257454B2 (en) * 2010-12-24 2014-03-06 Canon Kabushiki Kaisha Summary view of video objects sharing common attributes
DE102011014081A1 (en) * 2011-03-16 2012-09-20 GM Global Technology Operations LLC (n. d. Gesetzen des Staates Delaware) Method for detecting a turning maneuver
US9606209B2 (en) 2011-08-26 2017-03-28 Kineticor, Inc. Methods, systems, and devices for intra-scan motion correction
US9131163B2 (en) 2012-02-07 2015-09-08 Stmicroelectronics S.R.L. Efficient compact descriptors in visual search systems
US11470285B2 (en) * 2012-02-07 2022-10-11 Johnson Controls Tyco IP Holdings LLP Method and system for monitoring portal to detect entry and exit
US9111353B2 (en) * 2012-06-29 2015-08-18 Behavioral Recognition Systems, Inc. Adaptive illuminance filter in a video analysis system
US10327708B2 (en) 2013-01-24 2019-06-25 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9305365B2 (en) 2013-01-24 2016-04-05 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US9717461B2 (en) 2013-01-24 2017-08-01 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
WO2014120734A1 (en) 2013-02-01 2014-08-07 Kineticor, Inc. Motion tracking system for real time adaptive motion compensation in biomedical imaging
KR101491461B1 (en) * 2013-08-02 2015-02-23 포항공과대학교 산학협력단 Method for recognizing object using covariance descriptor and apparatus thereof
AU2013242830B2 (en) * 2013-10-10 2016-11-24 Canon Kabushiki Kaisha A method for improving tracking in crowded situations using rival compensation
CN106572810A (en) 2014-03-24 2017-04-19 凯内蒂科尔股份有限公司 Systems, methods, and devices for removing prospective motion correction from medical imaging scans
WO2016014718A1 (en) 2014-07-23 2016-01-28 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9460522B2 (en) 2014-10-29 2016-10-04 Behavioral Recognition Systems, Inc. Incremental update for background model thresholds
US9471844B2 (en) 2014-10-29 2016-10-18 Behavioral Recognition Systems, Inc. Dynamic absorption window for foreground background detector
US9349054B1 (en) 2014-10-29 2016-05-24 Behavioral Recognition Systems, Inc. Foreground detector for video analytics system
US9613273B2 (en) 2015-05-19 2017-04-04 Toyota Motor Engineering & Manufacturing North America, Inc. Apparatus and method for object tracking
CN106651901B (en) * 2015-07-24 2020-08-04 株式会社理光 Object tracking method and device
US9943247B2 (en) 2015-07-28 2018-04-17 The University Of Hawai'i Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan
US9898665B2 (en) 2015-10-29 2018-02-20 International Business Machines Corporation Computerized video file analysis tool and method
WO2017091479A1 (en) 2015-11-23 2017-06-01 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US10643076B2 (en) 2016-07-01 2020-05-05 International Business Machines Corporation Counterfeit detection
US11068721B2 (en) * 2017-03-30 2021-07-20 The Boeing Company Automated object tracking in a video feed using machine learning
US10157476B1 (en) * 2017-06-15 2018-12-18 Satori Worldwide, Llc Self-learning spatial recognition system
US20190304102A1 (en) * 2018-03-30 2019-10-03 Qualcomm Incorporated Memory efficient blob based object classification in video analytics
US20190332522A1 (en) * 2018-04-27 2019-10-31 Satori Worldwide, Llc Microservice platform with messaging system
CN111115400B (en) 2018-10-30 2022-04-26 奥的斯电梯公司 System and method for detecting elevator maintenance behavior in an elevator hoistway
IT201900007815A1 (en) * 2019-06-03 2020-12-03 The Edge Company S R L METHOD FOR DETECTION OF MOVING OBJECTS
CN110335432A (en) * 2019-06-24 2019-10-15 安徽和润智能工程有限公司 A kind of exhibition room security system based on recognition of face
US11120675B2 (en) * 2019-07-24 2021-09-14 Pix Art Imaging Inc. Smart motion detection device
CN113129249B (en) * 2019-12-26 2023-01-31 舜宇光学(浙江)研究院有限公司 Depth video-based space plane detection method and system and electronic equipment
US11468676B2 (en) 2021-01-08 2022-10-11 University Of Central Florida Research Foundation, Inc. Methods of real-time spatio-temporal activity detection and categorization from untrimmed video segments
EP4156098A1 (en) * 2021-09-22 2023-03-29 Axis AB A segmentation method
KR20230077562A (en) * 2021-11-25 2023-06-01 한국전자기술연구원 Method and system for detecting complex image-based anomaly event
CN116704268B (en) * 2023-08-04 2023-11-10 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Strong robust target detection method for dynamic change complex scene

Family Cites Families (53)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4679077A (en) * 1984-11-10 1987-07-07 Matsushita Electric Works, Ltd. Visual Image sensor system
US5113507A (en) * 1988-10-20 1992-05-12 Universities Space Research Association Method and apparatus for a sparse distributed memory system
JP3123587B2 (en) * 1994-03-09 2001-01-15 日本電信電話株式会社 Moving object region extraction method using background subtraction
US6252974B1 (en) * 1995-03-22 2001-06-26 Idt International Digital Technologies Deutschland Gmbh Method and apparatus for depth modelling and providing depth information of moving objects
US7076102B2 (en) * 2001-09-27 2006-07-11 Koninklijke Philips Electronics N.V. Video monitoring system employing hierarchical hidden markov model (HMM) event learning and classification
US5969755A (en) * 1996-02-05 1999-10-19 Texas Instruments Incorporated Motion based event detection system and method
US5751378A (en) * 1996-09-27 1998-05-12 General Instrument Corporation Scene change detector for digital video
US6263088B1 (en) * 1997-06-19 2001-07-17 Ncr Corporation System and method for tracking movement of objects in a scene
US6711278B1 (en) * 1998-09-10 2004-03-23 Microsoft Corporation Tracking semantic objects in vector image sequences
US6570608B1 (en) * 1998-09-30 2003-05-27 Texas Instruments Incorporated System and method for detecting interactions of people and vehicles
AU1930700A (en) * 1998-12-04 2000-06-26 Interval Research Corporation Background estimation and segmentation based on range and color
US7136525B1 (en) * 1999-09-20 2006-11-14 Microsoft Corporation System and method for background maintenance of an image sequence
US6674877B1 (en) * 2000-02-03 2004-01-06 Microsoft Corporation System and method for visually tracking occluded objects in real time
US6940998B2 (en) * 2000-02-04 2005-09-06 Cernium, Inc. System for automated screening of security cameras
US7868912B2 (en) * 2000-10-24 2011-01-11 Objectvideo, Inc. Video surveillance system employing video primitives
US6678413B1 (en) * 2000-11-24 2004-01-13 Yiqing Liang System and method for object identification and behavior characterization using video analysis
US20030107650A1 (en) * 2001-12-11 2003-06-12 Koninklijke Philips Electronics N.V. Surveillance system with suspicious behavior detection
US20060165386A1 (en) * 2002-01-08 2006-07-27 Cernium, Inc. Object selective video recording
US7436887B2 (en) * 2002-02-06 2008-10-14 Playtex Products, Inc. Method and apparatus for video frame sequence-based object tracking
US6856249B2 (en) * 2002-03-07 2005-02-15 Koninklijke Philips Electronics N.V. System and method of keeping track of normal behavior of the inhabitants of a house
US7203356B2 (en) * 2002-04-11 2007-04-10 Canesta, Inc. Subject segmentation and tracking using 3D sensing technology for video compression in multimedia applications
US7227893B1 (en) * 2002-08-22 2007-06-05 Xlabs Holdings, Llc Application-specific object-based segmentation and recognition system
US7200266B2 (en) * 2002-08-27 2007-04-03 Princeton University Method and apparatus for automated video activity analysis
US20040113933A1 (en) * 2002-10-08 2004-06-17 Northrop Grumman Corporation Split and merge behavior analysis and understanding using Hidden Markov Models
US7095786B1 (en) * 2003-01-11 2006-08-22 Neo Magic Corp. Object tracking using adaptive block-size matching along object boundary and frame-skipping when object motion is low
US6999600B2 (en) * 2003-01-30 2006-02-14 Objectvideo, Inc. Video scene background maintenance using change detection and classification
US7026979B2 (en) * 2003-07-03 2006-04-11 Hrl Laboratories, Llc Method and apparatus for joint kinematic and feature tracking using probabilistic argumentation
US7127083B2 (en) * 2003-11-17 2006-10-24 Vidient Systems, Inc. Video surveillance system with object detection and probability scoring based on object class
US20060018516A1 (en) * 2004-07-22 2006-01-26 Masoud Osama T Monitoring activity using video information
JP4928451B2 (en) * 2004-07-30 2012-05-09 ユークリッド・ディスカバリーズ・エルエルシー Apparatus and method for processing video data
US7606425B2 (en) * 2004-09-09 2009-10-20 Honeywell International Inc. Unsupervised learning of events in a video sequence
US7620266B2 (en) * 2005-01-20 2009-11-17 International Business Machines Corporation Robust and efficient foreground analysis for real-time video surveillance
US20060190419A1 (en) * 2005-02-22 2006-08-24 Bunn Frank E Video surveillance data analysis algorithms, with local and network-shared communications for facial, physical condition, and intoxication recognition, fuzzy logic intelligent camera system
US20080181453A1 (en) * 2005-03-17 2008-07-31 Li-Qun Xu Method of Tracking Objects in a Video Sequence
AU2006230361A1 (en) * 2005-03-30 2006-10-05 Cernium Corporation Intelligent video behavior recognition with multiple masks and configurable logic inference module
US7825954B2 (en) * 2005-05-31 2010-11-02 Objectvideo, Inc. Multi-state target tracking
CA2628611A1 (en) * 2005-11-04 2007-05-18 Clean Earth Technologies, Llc Tracking using an elastic cluster of trackers
CN101410855B (en) * 2006-03-28 2011-11-30 爱丁堡大学评议会 Method for automatically attributing one or more object behaviors
CN101443789B (en) * 2006-04-17 2011-12-28 实物视频影像公司 video segmentation using statistical pixel modeling
US8467570B2 (en) * 2006-06-14 2013-06-18 Honeywell International Inc. Tracking system with fused motion and object detection
KR100846498B1 (en) * 2006-10-18 2008-07-17 삼성전자주식회사 Image analysis method and apparatus, motion segmentation system
US7916944B2 (en) * 2007-01-31 2011-03-29 Fuji Xerox Co., Ltd. System and method for feature level foreground segmentation
EP2118864B1 (en) * 2007-02-08 2014-07-30 Behavioral Recognition Systems, Inc. Behavioral recognition system
WO2008103929A2 (en) * 2007-02-23 2008-08-28 Johnson Controls Technology Company Video processing systems and methods
US8086036B2 (en) * 2007-03-26 2011-12-27 International Business Machines Corporation Approach for resolving occlusions, splits and merges in video images
US20090016610A1 (en) * 2007-07-09 2009-01-15 Honeywell International Inc. Methods of Using Motion-Texture Analysis to Perform Activity Recognition and Detect Abnormal Patterns of Activities
US8064639B2 (en) * 2007-07-19 2011-11-22 Honeywell International Inc. Multi-pose face tracking using multiple appearance models
US8300924B2 (en) * 2007-09-27 2012-10-30 Behavioral Recognition Systems, Inc. Tracker component for behavioral recognition system
WO2009049314A2 (en) 2007-10-11 2009-04-16 Trustees Of Boston University Video processing system employing behavior subtraction between reference and observed video image sequences
US8452108B2 (en) * 2008-06-25 2013-05-28 Gannon Technologies Group Llc Systems and methods for image recognition using graph-based pattern matching
US9633275B2 (en) * 2008-09-11 2017-04-25 Wesley Kenneth Cobb Pixel-level based micro-feature extraction
US8121968B2 (en) * 2008-09-11 2012-02-21 Behavioral Recognition Systems, Inc. Long-term memory in a video analysis system
US8218818B2 (en) * 2009-09-01 2012-07-10 Behavioral Recognition Systems, Inc. Foreground object tracking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of EP2474163A4 *


Also Published As

Publication number Publication date
EP2474163A2 (en) 2012-07-11
WO2011028380A3 (en) 2011-06-03
BR112012004568A2 (en) 2017-05-23
EP2474163A4 (en) 2016-04-13
US8218819B2 (en) 2012-07-10
US20110052003A1 (en) 2011-03-03

Similar Documents

Publication Publication Date Title
US8218819B2 (en) Foreground object detection in a video surveillance system
US8374393B2 (en) Foreground object tracking
US9959630B2 (en) Background model for complex and dynamic scenes
US8270733B2 (en) Identifying anomalous object types during classification
US8175333B2 (en) Estimator identifier component for behavioral recognition system
US10796164B2 (en) Scene preset identification using quadtree decomposition analysis
US8416296B2 (en) Mapper component for multiple art networks in a video analysis system
US8285060B2 (en) Detecting anomalous trajectories in a video surveillance system
US8705861B2 (en) Context processor for video analysis system
US8300924B2 (en) Tracker component for behavioral recognition system
US9208675B2 (en) Loitering detection in a video surveillance system
US8270732B2 (en) Clustering nodes in a self-organizing map using an adaptive resonance theory network
US9373055B2 (en) Hierarchical sudden illumination change detection using radiance consistency within a spatial neighborhood
US20110043689A1 (en) Field-of-view change detection
JP2011059898A (en) Image analysis apparatus and method, and program
Schuster et al. Multi-cue learning and visualization of unusual events

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10814147

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2010814147

Country of ref document: EP

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112012004568

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 112012004568

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20120229