WO2008103206A1 - Surveillance systems and methods


Info

Publication number
WO2008103206A1
Authority
WO
WIPO (PCT)
Prior art keywords
score
module
data
decision making
sensor data
Application number
PCT/US2007/087566
Other languages
French (fr)
Other versions
WO2008103206B1 (en)
Inventor
Hasan Timucin Ozdemir
Sameer Kibey
Lipin Liu
Kuo Chu Lee
Supraja Mosali
Namsoo Joo
Hongbing Li
Juan Yu
Original Assignee
Panasonic Corporation
Application filed by Panasonic Corporation filed Critical Panasonic Corporation
Priority to JP2009549578A (patent JP5224401B2)
Publication of WO2008103206A1
Publication of WO2008103206B1

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B31/00Predictive alarm systems characterised by extrapolation or other computation using updated historic data
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02Alarms for ensuring the safety of persons
    • G08B21/04Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0407Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
    • G08B21/0423Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis detecting deviation from an expected pattern of behaviour or schedule

Definitions

  • the present invention relates to methods and systems for automated detection and prediction of the progression of behavior and threat patterns in a real-time, multi-sensor environment.
  • the surveillance system generally includes a data capture module that collects sensor data.
  • a scoring engine module receives the sensor data and computes at least one of an abnormality score and a normalcy score based on the sensor data, at least one dynamically loaded learned data model, and a learned scoring method.
  • a decision making module receives the at least one of the abnormality score and the normalcy score and generates an alert message based on the at least one of the abnormality score and the normalcy score and a learned decision making method to produce progressive behavior and threat detection.
  • Figure 1 is a block diagram illustrating an exemplary surveillance system according to various aspects of the present teachings.
  • FIG. 2 is a dataflow diagram illustrating exemplary components of the surveillance system according to various aspects of the present teachings.
  • Figure 3 is a dataflow diagram illustrating an exemplary model builder module of the surveillance system according to various aspects of the present teachings.
  • Figure 4 is an illustration of an exemplary model of the surveillance system according to various aspects of the present teachings.
  • Figure 5 is a dataflow diagram illustrating an exemplary camera of the surveillance system according to various aspects of the present teachings.
  • Figure 6 is a dataflow diagram illustrating an exemplary decision making module of the camera according to various aspects of the present teachings.
  • Figure 7 is a dataflow diagram illustrating another exemplary decision making module of the camera according to various aspects of the present teachings.
  • Figure 8 is a dataflow diagram illustrating an exemplary alarm handling module of the surveillance system according to various aspects of the present teachings.
  • Figure 9 is a dataflow diagram illustrating an exemplary learning module of the surveillance system according to various aspects of the present teachings.
  • Figure 10 is a dataflow diagram illustrating an exemplary system configuration module of the surveillance system according to various aspects of the present teachings.
  • module or sub-module can refer to: a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, and/or other suitable components that can provide the described functionality and/or combinations thereof.
  • FIG. 1 depicts an exemplary surveillance system 10 implemented according to various aspects of the present teachings.
  • the exemplary surveillance system 10 includes one or more sensory devices 12a-12n.
  • the sensory devices 12a-12n generate sensor data 14a-14n corresponding to information sensed by the sensory devices 12a-12n.
  • a surveillance module 16 receives the sensor data 14a-14n and processes the sensor data 14a-14n according to various aspects of the present teachings. In general, the surveillance module 16 automatically recognizes suspicious behavior from the sensor data 14a-14n and generates alarm messages 18 to a user based on a prediction of abnormality scores.
  • a single surveillance module 16 can be implemented and located remotely from each sensory device 12a-12n as shown in Figure 1.
  • multiple surveillance modules can be implemented, one for each sensory device 12a-12n.
  • the functionality of the surveillance module 16 may be divided into sub-modules, where some sub-modules are implemented on the sensory devices 12a-12n, while other sub-modules are implemented remotely from the sensory devices 12a-12n as shown in Figure 2.
  • FIG. 2 is a dataflow diagram illustrating a more detailed exemplary surveillance system 10 implemented according to various aspects of the present teachings. For exemplary purposes, the remainder of the disclosure will be discussed in the context of using one or more cameras 20a-20n as the sensory devices 12a-12n (Figure 1). As shown in Figure 2, each camera 20a-20n includes an image capture module 22, a video analysis module 80, a scoring engine module 24, a decision making module 26, and a device configuration module 28.
  • the image capture module 22 collects the sensor data 14a-14n as image data corresponding to a scene and the video analysis module 80 processes the image data to extract object meta data 30 from the scene.
  • the scoring engine module 24 receives the object meta data 30 and produces a measure of abnormality or normality also referred to as a score 34 based on learned models 32.
  • the decision making module 26 collects the scores 34 and determines an alert level for the object data 30.
  • the decision making module 26 sends an alert message 36n that includes the alert level to external components for further processing.
  • the decision making module 26 can exchange scores 34 and object data 30 with other decision making modules 26 of other cameras 20a, 20b to generate predictions about objects in motion.
  • the device configuration module 28 loads and manages various models 32, scoring engine methods 52, decision making methods 50, and/or decision making parameters 51 that can be associated with the camera 20n.
  • the surveillance system 10 can also include an alarm handling module 38, a surveillance graphical user interface (GUI) 40, a system configuration module 42, a learning module 44, and a model builder module 46.
  • Such components can be located remotely from the cameras 20a-20n.
  • the alarm handling module 38 re-evaluates the alert messages 36a-36n from the cameras 20a-20n and dispatches the alarm messages 18.
  • the alarm handling module 38 interacts with the user via the surveillance GUI 40 to dispatch the alarm messages 18 and/or collect miss-classification data 48 during alarm acknowledgement operation.
  • the learning module 44 adapts the decision making methods 50 and parameters 51, and/or the scoring engine methods 52 for each camera 20a-20n by using the miss-classification data 48 collected from the user.
  • the decision making methods 50 are automatically learned and optimized for each scoring method 52 to support the prediction of potential incidents, increase the detection accuracy, and reduce the number of false alarms.
  • the decision making methods 50 fuse the scores 34 as well as previous scoring results, object history data, etc., to reach a final alert decision.
  • the model builder module 46 builds models 32 representing normal and/or abnormal conditions based on the collected object data 30.
  • the system configuration module 42 manages the models 32, the decision making methods 50 and parameters 51 , and the scoring engine methods 52 for the cameras 20a-20n and uploads the methods and data 32, 50, 51 , 52 to the appropriate cameras 20a-20n.
  • FIG. 3 is a more detailed exemplary model builder module 46 according to various aspects of the present teachings.
  • the model builder module 46 includes a model initialization module 60, a model initialization graphical user interface 62, a model learn module 64, an image data datastore 66, a model methods datastore 68, and a model data datastore 70.
  • the model initialization module 60 captures the domain knowledge from users, and provides initial configuration of system components (i.e., optimized models, optimized scoring functions, optimized decision making functions, etc.).
  • the model initialization module 60 builds initial models 32 for each camera 20a-20n ( Figure 2) based on input 74 received from a user via the model initialization GUI 62.
  • the model initialization GUI 62 displays a scene based on image data from a camera, thus providing an easy-to-understand context for the user to describe expected motions of objects within the camera field of view.
  • the image data can be received from the image data datastore 66.
  • the user can enter motion parameters 72 to simulate random trajectories of moving objects in the given scene.
  • the trajectories can represent normal or abnormal conditions.
  • the model initialization module 60 then simulates the trajectories and extracts data from the simulated trajectories in the scene to build the models 32.
  • the generated simulated metadata corresponds to an expected output of a selected video analysis module 80 ( Figure 2).
  • the model initialization module 60 builds the optimized models 32 from predefined model builder methods stored in the model methods datastore 68. In various aspects of the present teachings, the model initialization module 60 builds the optimal configuration according to a model builder method that selects particular decision making methods 50 ( Figure 2) , the configuration parameters 51 ( Figure 2) of decision making methods 50, a set of scoring engine methods 52 ( Figure 2), and/or configuration parameters of scoring engine methods. [0031] In various aspects of the present teachings, the model initialization GUI 62 can provide an option to the user to insert a predefined object into the displayed scene. The model initialization module 60 then simulates the predefined object along the trajectory path for verification purposes. If the user is satisfied with the trajectory paths, the model 32 is stored in the model data datastore 70.
  • the model learn module 64 can automatically adapt the models 32 for each camera 20a-20n ( Figure 2) by using the collected object data 30 and based on the various model builder methods stored in the model methods datastore 68.
  • the model learn module 64 stores the adapted models 32 in the model data datastore 70.
  • various model building methods can be stored to the model methods datastore 68 to allow the model builder module 46 to build a number of models 32 for each object based on a model type.
  • the various models can include, but are not limited to, a velocity model, an acceleration model, an occurrence model, an entry/exit zones model, a directional speed profile model, and a trajectory model. These models can be built for all observed objects as well as different types of objects.
  • the data for each model 32 can be represented as a multi-dimensional array structure 71 (i.e., a data cube) in which each element refers to a specific spatial rectangle (in 3D it is hyper-rectangle) and time interval.
  • the models 32 are represented according to a Predictive Model Markup Language (PMML) and its extended form for surveillance systems.
  • the occurrence model describes the object detection probabilities in space and time dimensions.
  • Each element of the occurrence data cube represents the probability of detecting an object at the particular location in the scene at the particular time interval.
  • a time plus three dimensional occurrence data cube can be obtained from multiple cameras 20a-20n ( Figure 2).
  • the velocity model can be similarly built, where each cell of the velocity data cube can represent a Gaussian distribution of (dx,dy) or a mixture of Gaussian distributions. These parameters can be learned with recursive formulae. Similar to the velocity data cube, each cell of an acceleration data cube stores the Gaussian distribution of ((dx)',(dy)').
  • the entry/exit zones model models regions of the scene in which objects are first detected and last detected. These areas can be modeled by a mixture of Gaussian models. Their location can be generated from first and last track points of each detected object by the application of clustering methods such as K-means, Expectation Maximization (EM) methods, etc.
  • the trajectory models can be built by using the entry and exit regions with the object meta data 30 obtained from the video analysis module 80 ( Figure 2).
  • each entry-exit region defines a segment in the site used by the observed objects in motion.
  • a representation of each segment can be obtained by applying methods such as curve fitting, regression, etc. to object data collected from a camera in real time or simulated. Since each entry and exit region includes a time interval, the segments also include an associated time interval.
  • the directional models represent the motion of an object with respect to regions in a site.
  • each cell contains a probability of following a certain direction in the cell and a statistical representation of measurements in a spatio temporal region (cell), such as speed and acceleration.
  • a cell can contain links to entry regions, exit regions, trajectory models, and global data cube model of site under surveillance.
  • a cell can contain spatio temporal region specific optimized scoring engine methods as well as user specified scoring engine methods. Although the dimensions of the data cube are depicted as a uniform grid structure, it is appreciated that non-uniform intervals can be important for optimal model representation.
  • variable length intervals as well as clustered/segmented non-rigid spatio temporal shape descriptors (i.e., 3D/4D shape descriptions), can be used for model reduction.
  • the storage of the model 32 can utilize multi-dimensional indexing methods (such as R-tree, X-tree, SR-tree, etc.) for efficient access to cells.
  • the data cube structure supports predictive modeling of the statistical attributes in each cell so that the motion trajectory of an observed object can be predicted based on the velocity and acceleration attributes stored in the data cube.
  • any object detected in location (X1 , Y1) may be highly likely to move to location (X2, Y2) after T seconds based on historical data.
  • when a new object is observed at location (X1, Y1), it is likely to move to location (X2, Y2) after T seconds.
  • the camera 20, as shown, includes the image capture module 22, a video analyzer module 80, the scoring engine module 24, the decision making module 26, the device configuration module 28, an object history datastore 82, a camera models datastore 92, a scoring engine scores history datastore 84, a parameters datastore 90, a decision methods datastore 88, and a scoring methods datastore 86.
  • the image capture module 22 captures image data 93 from the sensor data 14.
  • the image data 93 is passed to the video analyzer module 80 for the extraction of objects and properties of the objects.
  • the video analyzer module 80 can produce object data 30 in the form of an object detection vector (δ) that includes: an object identifier (a unique key value per object); a location of the center of the object in the image plane (x,y); a timestamp; a minimum bounding box (MBB) in the image plane (x.low, y.low, x.upper, y.upper); a binary mask matrix that specifies which pixels belong to the detected object; image data of the detected object; and/or some other properties of detected objects, such as visual descriptors specified by a metadata format (i.e., the MPEG7 Standard and its extended form for surveillance).
  • the video analyzer module 80 can access the models 32 of the camera models datastore 92, for example, for improving accuracy of the object tracking methods.
  • the models 32 are loaded to the camera models datastore 92 of the camera 20 via the device configuration module 28.
  • the device configuration module also instantiates the scoring engine module 24 and the decision making module 26, and prepares a communication channel between the modules involved in the processing of object data 30 for progressive behavior and threat detection.
  • the scoring engine module 24 produces one or more scores 34 for particular object traits, such as, an occurrence of the object in the scene, a velocity of the object, and an acceleration of the object.
  • the scoring engine module includes a plurality of scoring engine sub-modules that perform the following functionality.
  • the scoring engine module 24 selects a particular scoring engine method 52 from the scoring methods datastore 86 based on the model type and the object trait to be scored. Various exemplary scoring engine methods 52 can be found in the attached Appendix A. The scoring engine methods 52 are loaded to the scoring methods datastore 86 via the device configuration module 28
  • the scores 34 of each detected object can be accumulated to obtain progressive threat or alert levels at location (X0, Y0) in real time. Furthermore, using the predictive model stored in the data cube, one can calculate the score 34 of the object in advance by first predicting the motion trajectory of the object and then calculating the score of the object along the trajectory. As a result, the system can predict a change in threat levels before it happens to support preemptive alert message generation.
  • the forward prediction can include the predicted properties of an object in the near future (such as its location, speed, etc.) as well as the trend analysis of scoring results.
  • the determination of the score 34 can be based on the models 32, the object data 30, the scores history data 34, and in some cases object history data from the object history datastore 82, some regions of interest (defined by the user), and various combinations thereof.
  • the score 34 can be a scalar value representing the measure of abnormality.
  • the score 34 can include two or more scalar values.
  • the score 34 can include a measure of normalcy and/or a confidence level, and/or a measure of abnormality and/or a confidence level.
  • the score data 34 is passed to the decision making module 26 and/or stored in the SE scores history datastore 84 with a timestamp.
  • the decision making module 26 then generates the alert message 36 based on a fusing of the scores 34 from the scoring engine modules 24 for a given object detection event data (δ).
  • the decision making module can use the historical score data 34, and object data 30 during fusion.
  • the decision making module 26 can be implemented according to various decision making methods 50 stored to the decision methods datastore 88. Such decision making methods 50 can be loaded to the camera 20 via the device configuration module 28.
  • the alert message 36 is computed as a function of a summation of weighted scores, as shown by equation (1) in the detailed description below.
  • w represents a weight for each score based on time (t) and spatial dimensions (XY).
  • the dimensions of the data cube can vary in number for example, XYZ spatial dimensions.
  • the weights (w) can be pre-configured or adaptively learned and loaded to the parameters datastore 90 via the device configuration module 28.
  • the alert message 36 is determined based on a decision tree based method as shown in Figure 7. The decision tree based method can be adaptively learned throughout the surveillance process.
  • since the decision making module 26 can be implemented according to various decision making methods 50, the decision making module is preferably defined in a declarative form by using, for example, an XML based representation such as an extended form of the Predictive Model Markup Language. This enables the learning module 44 to improve the decision making module accuracy, since the learning module 44 can change various parameters (such as the weights and the decision tree explained above) as well as the decision making method itself.
  • the decision making module 26 can generate predictions that produce early-warning alert messages for progressive behavior and threat detection. For example, the decision making module 26 can generate predictions about objects in motion based on the trajectory models 32. A prediction of a future location of an object in motion enables the decision making module 26 to identify whether two objects in motion will collide. If the collision is probable, the decision making module 26 can predict where and when the objects will collide, as well as generate the alert message 36 to prevent a possible accident.
  • the decision making module 26 can exchange data with other decision making modules 26, such as decision making modules 26 running in other cameras 20a, 20b (Figure 2) or devices.
  • the object data 30 and the scores 34 of suspicious objects detected by other cameras 20a, 20b ( Figure 2) can be stored to the object history datastore 82 and the SE scores history datastore 84, respectively.
  • this provides a history of the suspicious object to improve the analysis by the decision making module 26.
  • a dataflow diagram illustrates a more detailed exemplary alarm handling module 38 of the surveillance system 10 according to various aspects of the present teachings.
  • the alarm handling module 38 collects alert messages 36 and creates a "threat" structure for each new detected object.
  • the threat structure maintains the temporal properties associated with the detected object as well as associates other pre-stored properties and obtained properties (such as the result of face recognition) with the detected object.
  • the alarm handling module 38 re-evaluates the received alert messages 36 by using the collected properties of objects in the threat structure and additional system configuration to decide the level of alarm.
  • the alarm handling module can filter the alert message without generating any alarm, as well as increase the alarm level if desired.
  • the alarm handling module 38 can include a threats data datastore 98, a rule based abnormality evaluation module 94, a rules datastore 100, and a dynamic rule based alarm handling module 96.
  • the rule based abnormality evaluation module 94 can be considered another form of a decision making module 26 ( Figure 2) defined within a sensor device. Therefore, all explanations/operations associated with the decision making module 26 are applicable to the rule based abnormality evaluation module 94.
  • the decision making for the rule based abnormality evaluation module 94 can be declaratively defined in an extended form of Predictive Model Markup Language for surveillance.
  • the threats data datastore 98 stores the object data, the scores 34, and additional properties that can be associated with an identified object. Such additional properties can be applicable to identifying a particular threat and may include, but are not limited to: identity recognition characteristics of a person or item, such as facial recognition characteristics or a license plate number; and object attributes such as an employment position or a criminal identity.
  • the rules datastore 100 stores rules that are dynamically configurable and that can be used to further evaluate the detected object.
  • evaluation rules can include, but are not limited to, rules identifying permissible objects even though they are identified as suspicious; rules associating higher alert levels with recognized objects; and rules recognizing an object as suspicious when the object is present in two different scenes at the same time.
  • the rule based abnormality evaluation module 94 associates the additional properties with the detected object based on the object data from the threats data datastore 98. The rule based abnormality evaluation module 94 then uses this additional information and the evaluation rules to re-evaluate the potential threat and the corresponding alert level. For example, the rule based abnormality evaluation module 94 can identify the object as a security guard traversing the scene during off-work hours. Based on the configurable rules and actions, the rule based abnormality evaluation module 94 can disregard the alert message 36 and prevent the alarm messages 18 from being dispatched even though a detection of a person at off-work hours is suspicious.
  • the dynamic rule based alarm handling module 96 dispatches an alert event 102 in the form of the alarm messages 18 and its additional data to interested modules, such as, the surveillance GUI 40 ( Figure 2) and/or an alarm logging module (not shown).
  • when the dynamic rule based alarm handling module 96 dispatches the alarm messages 18 via the surveillance GUI 40, the user can provide additional feedback by agreeing or disagreeing with the alarm.
  • the feedback is provided by the user as miss-classification data 48 to the learning module 44 ( Figure 2) in the form of agreed or disagreed cases. This allows the surveillance system 10 to collect a set of data for further optimization of system components (i.e., models 32, scoring engine methods 52, decision making methods 50, rules, etc. ( Figure 2)).
  • FIG. 9 a dataflow diagram illustrates a more detailed exemplary learning module 44 of the surveillance system 10 according to various aspects of the present teachings.
  • the learning module 44 optimizes the scoring engine methods 52, the decision making methods 50, and the associated parameters 51 , such as, the spatio-temporal weights based on the learned miss-classification data 48.
  • the learning module 44 retrieves the decision making methods 50, the models 32, the scoring engine methods 52, and the parameters 51 from the system configuration module 42.
  • the learning module 44 selects one or more appropriate learning methods from a learning method datastore 106.
  • the learning methods can be associated with a particular decision making method 50.
  • the learning module 44 re-examines the decision making method 50 and the object data 30 from a camera against the miss-classification data 48.
  • the learning module can adjust the parameters 51 to minimize the error in the decision making operation.
  • the learning module 44 performs the above re-examination for each method 50 and uses a best result or some combination thereof to adjust the parameters 51.
  • FIG. 10 a dataflow diagram illustrates a more detailed exemplary system configuration module 42 of the surveillance system 10 according to various aspects of the present teachings.
  • the system configuration module 42 includes a camera configuration module 110, an information upload module 112, and a camera configuration datastore 114.
  • the camera configuration module 110 associates the models 32, the scoring engine methods 52, and the decision making methods 50 and parameters 51 with each of the cameras 20a-20n ( Figure 2) in the surveillance system 10.
  • the camera configuration module 110 can accept and associate additional system configuration data from the camera configuration datastore 114, such as, user accounts and network level information about devices in the system (such as cameras, encoders, recorders, IRIS recognition devices, etc.).
  • the camera configuration module 110 generates association data 116.
  • the information upload module 112 provides the models 32, the scoring engine methods 52, and the decision making methods 50 and parameters 51 to the device configuration module 28 (Figure 2) based on the association data 116 of the cameras 20a-20n (Figure 2) upon request.
  • the information upload module 112 can be configured to provide the models 32, the scoring engine methods 52, the decision making methods 50 and parameters 51 to the device configuration module 28 ( Figure 2) of the cameras 20a-20n at scheduled intervals.
  • Occurrence Model summarizes whether a detection of an object in [t,x,y] (time and space) is expected or not.
  • 1. An object is detected at a location [t,x,y] where there should not be such activity at cell [t,x,y].
  • 2. The same object track is used in two different time intervals (in one time interval it is OK, in another time interval it is NOT OK, or at least requires a human to investigate the activity).
  • the algorithm assigns an abnormality score by using the distance from the mean value (a runnable sketch of these occurrence based scoring methods appears after this list):
  • ThreatScore = floor(CombinedOccurence(x, y, t) / QuantizationValue) + 1; end
  • 1.1 SE_OSE1 Method: the algorithm assigns an abnormality score by using the distance from the mean value divided by the standard deviation of occurrence probabilities:
  • ThreatScore = floor(threatDistance / Std); ThreatScore = min(MAX_THREAT_SCORE, ThreatScore); end
  • 1.2 SE_OSE3 Method: OSE3 uses the mean calculation algorithm used in OSE1 but uses a different algorithm to assign a threat score:
  • ThreatScore = floor(threatDistance / QuantVal) + 1; end
  • the algorithm assigns a threat score by using the distance from the mean value divided by the standard deviation of occurrence probabilities:
  • ThreatScore = deltay / stdy + deltax / stdx; ThreatScore = min(MAX_THREAT_SCORE, ThreatScore); end
  • The threat score value can also be obtained from a 2D Gaussian function.
  • % Obj is a matrix containing the last n observations [oid, ti, xi, yi, dxi, dyi]; % k : controls the threshold value (k*std)
  • ThreatScore = max(P(:,2)); end
  • ThreatScore = mode(P(:,2));
  • ThreatScore = (average(P(:,2)) + mode(P(:,2))) / 2;
  • ThreatScore = (average(P(:,2)) + median(P(:,2))) / 2;
  • ThreatScore = (median(P(:,2)) + mode(P(:,2))) / 2;
  • Let ts(i) be the time stamp of the i-th object flow vector, and assume that the object flow vectors are in decreasing time stamp order (ts(1) > ts(2) > ... > ts(n)).
  • Let score(i) be the threat score associated with the i-th object flow vector. For a given n, the final threat score is the weighted sum over i = 1..n of w(i) * score(i).
  • The weight of each score depends on the distance (in the time dimension) between the current time and the time stamp of the instance. The weights are linear with respect to that distance.
  • the non-linear weight assignments can use sigmoid function, double sigmoid function, exponential decay functions, logistic functions, Gaussian distribution function, etc. to express the weights based on their distance to the current time. Their parameters could be adjusted by learning algorithms for fine tuning.
  • [ThreatScore] = VM_SE_ALG_X(Ofirst, o, Vaverage)
  • ThreatScore = floor(MAX_THREAT_SCORE * ((threshold - curr_speed) / threshold));
  • This algorithm detects that an object is wandering around (not moving much or moving very slowly).
  • The calculation of the object's speed uses the first position and the current position.
  • The speed of an object can be calculated from the distance between the first and current positions divided by the elapsed time.
  • Threat Score in [0..MAX_THREAT_SCORE]. Let AMDC(t,x,y) denote the acceleration model and o = [oid, t, x, y, ax, ay] denote the object's acceleration flow vector, where object o is detected at location (x,y) at time t. The threat score for this observation would be:
  • ThreatScore = min(MAX_THREAT_SCORE, ThreatScore); end
  • 3.2 SE_ASE1N Method:
  • % Obj is a matrix containing the last n observations [oid, ti, xi, yi, axi, ayi]
  • ThreatScore = max(P(:,2)); end
  • ThreatScore = average(P(:,2));
  • ThreatScore = median(P(:,2));
  • ThreatScore = (average(P(:,2)) + median(P(:,2))) / 2;
  • ThreatScore = (median(P(:,2)) + mode(P(:,2))) / 2;
  • 4. Speed Profile Based Algorithms
  • a scoring algorithm using the Directional Speed Profile Data Cube accepts the object detection vectors {(o, t_i, x_i, y_i), (o, t_(i-1), x_(i-1), y_(i-1)), ...}
  • ThreatLevel = abs(ObservedSpeed - ExpectedSpeed), where ExpectedSpeed is the expected speed stored in the corresponding cell
  • the above function is one example of obtaining the threat level associated with an object.
  • the threat level determination function can be described by using an exponential function that will produce a non-linear threat measure with respect to the distance between the ObservedSpeed and the expected speed.
  • Directional Speed Profile Data Cube can use some recent positions of an object to obtain such measure with weighted sum formula.
  • a variation of such algorithm can use all the track data and build a normal distribution N( ⁇ , ⁇ ) for threat level data.
  • TargetDef = [[x0,y0],[x1,y1]] specifies a region in the camera view (camera image coordinates)
  • the target region of interest is defined in the field of view of a camera.
  • the scoring algorithm generates the threat scores based on the distance between object and the center of target region.
  • ThreatScore = MAX_THREAT_SCORE * threatDistance
  • The target description can be a circle (described by a center and radius), as well as an arbitrary shape defined by a polygon representation (the MPEG7 Region descriptor can be utilized).
  • The target description can be associated with a time interval [t_begin, t_end] during which it can be used.
  • The threat distance is calculated with a linear model. The threat distance could also be calculated by using a 2D Gaussian function centered at (x_c, y_c).
  • TargetDef = [[x0,y0],[x1,y1]] specifies a region in the camera view (camera image coordinates)
  • TargetDef: Target Definition, [[x0 y0] [x1 y1]]
  • threatDistance = 1 - (objectDistance / MAX_DIST);
  • ThreatScore = MAX_THREAT_SCORE * threatDistance * DC(tidx, xidx, yidx, 2);
  • the threat score calculation uses the combination of the occurrence probability and the proximity to the target measures to find the final threat score. When the object is too close but it is in the frequently visited places, the threat score is reduced. When the object is too close but not in the frequently visited places, the threat score is increased.
  • TargetDef = [[x0,y0],[x1,y1]] specifies a region in the camera view (camera image coordinates)
  • Threat Score in [0..MAX_THREAT_SCORE]
  • (dx/dt) is the instantaneous velocity in x direction
  • (dy/dt) is the instantaneous velocity in y direction
  • θ is the angle between the direction of the velocity and the line joining the target and the object.
  • TargetDef: Target Definition, [[x0 y0] [x1 y1]]
  • Nr is the number of points that are within a radius R (WANDER_RADIUS) of the current point.
  • N is the WANDERING_ORDER. This is the number of past samples used to determine if there is loitering.
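
As a rough illustration of the occurrence based scoring methods listed above, the following Python sketch shows the quantized (SE_OSE/OSE3 style) and standard-deviation normalized (SE_OSE1 style) variants. MAX_THREAT_SCORE, the function names, and the default quantization value are illustrative assumptions, not the patent's implementation.

    # Minimal sketch of occurrence-based threat scoring: the score grows with
    # the distance of the observed cell's occurrence probability from the mean
    # occurrence, and is capped at MAX_THREAT_SCORE (assumed to be 10 here).
    MAX_THREAT_SCORE = 10

    def ose_score(occurrence_prob, mean_prob, quantization=0.1):
        # SE_OSE / SE_OSE3 style: quantized distance from the mean value.
        threat_distance = abs(mean_prob - occurrence_prob)
        return min(MAX_THREAT_SCORE, int(threat_distance // quantization) + 1)

    def ose1_score(occurrence_prob, mean_prob, std_prob):
        # SE_OSE1 style: distance from the mean divided by the standard deviation.
        if std_prob <= 0:
            return 0
        threat_distance = abs(mean_prob - occurrence_prob)
        return min(MAX_THREAT_SCORE, int(threat_distance // std_prob))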

Abstract

A surveillance system generally includes a data capture module that collects sensor data. A scoring engine module receives the sensor data and computes at least one of an abnormality score and a normalcy score based on the sensor data, at least one dynamically loaded learned data model, and a learned scoring method. A decision making module receives the at least one of the abnormality score and the normalcy score and generates an alert message based on the at least one of the abnormality score and the normalcy score and a learned decision making method to produce progressive behavior and threat detection.

Description

SURVEILLANCE SYSTEMS AND METHODS
FIELD
[0001] The present invention relates to methods and systems for automated detection and prediction of the progression of behavior and threat patterns in a real-time, multi-sensor environment.
BACKGROUND
[0002] The statements in this section merely provide background information related to the present disclosure and may not constitute prior art.
[0003] The recent trend in video surveillance systems is to provide video analysis components that can detect potential threats from live streamed video surveillance data. The detection of potential threats assists a security operator, who monitors the live feed from many cameras, to detect actual threats.
[0004] Conventional surveillance systems detect potential threats based on predefined patterns. To operate, each camera requires an operator to manually configure abnormal behavior detection features. When the predetermined abnormal pattern is detected, the system generates an alarm. It often requires substantial effort to adjust the sensitivity of the multiple detection rules defined to detect specific abnormal patterns, such as speeding, moving against the flow, or abnormal flow.
[0005] Such systems are inefficient in their operation. For example, the proper configuration of each camera is time consuming, requires professional help, and increases deployment costs. In addition, defining and configuring every possible abnormal behavior is not realistic, because there may simply be too many behaviors to enumerate, study, and develop a satisfying solution for in all possible contexts.
SUMMARY
[0006] Accordingly, a surveillance system is provided. The surveillance system generally includes a data capture module that collects sensor data. A scoring engine module receives the sensor data and computes at least one of an abnormality score and a normalcy score based on the sensor data, at least one dynamically loaded learned data model, and a learned scoring method. A decision making module receives the at least one of the abnormality score and the normalcy score and generates an alert message based on the at least one of the abnormality score and the normalcy score and a learned decision making method to produce progressive behavior and threat detection.
[0007] Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present teachings in any way.
[0009] Figure 1 is a block diagram illustrating an exemplary surveillance system according to various aspects of the present teachings.
[0010] Figure 2 is a dataflow diagram illustrating exemplary components of the surveillance system according to various aspects of the present teachings.
[0011] Figure 3 is a dataflow diagram illustrating an exemplary model builder module of the surveillance system according to various aspects of the present teachings.
[0012] Figure 4 is an illustration of an exemplary model of the surveillance system according to various aspects of the present teachings.
[0013] Figure 5 is a dataflow diagram illustrating an exemplary camera of the surveillance system according to various aspects of the present teachings.
[0014] Figure 6 is a dataflow diagram illustrating an exemplary decision making module of the camera according to various aspects of the present teachings. [0015] Figure 7 is a dataflow diagram illustrating another exemplary decision making module of the camera according to various aspects of the present teachings.
[0016] Figure 8 is a dataflow diagram illustrating an exemplary alarm handling module of the surveillance system according to various aspects of the present teachings.
[0017] Figure 9 is a dataflow diagram illustrating an exemplary learning module of the surveillance system according to various aspects of the present teachings. [0018] Figure 10 is a dataflow diagram illustrating an exemplary system configuration module of the surveillance system according to various aspects of the present teachings.
DETAILED DESCRIPTION [0019] The following description is merely exemplary in nature and is not intended to limit the present teachings, their application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features. As used herein, the term module or sub-module can refer to: a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, and/or other suitable components that can provide the described functionality and/or combinations thereof.
[0020] Referring now to Figure 1 , Figure 1 depicts an exemplary surveillance system 10 implemented according to various aspects of the present teachings. The exemplary surveillance system 10 includes one or more sensory devices 12a-12n. The sensory devices 12a-12n generate sensor data 14a-14n corresponding to information sensed by the sensory devices 12a-12n. A surveillance module 16 receives the sensor data 14a-14n and processes the sensor data 14a-14n according to various aspects of the present teachings. In general, the surveillance module 16 automatically recognizes suspicious behavior from the sensor data 14a-14n and generates alarm messages 18 to a user based on a prediction of abnormality scores. [0021] In various aspects of the present teachings, a single surveillance module 16 can be implemented and located remotely from each sensory device 12a-12n as shown in Figure 1. In various other aspects of the present teachings, multiple surveillance modules (not shown) can be implemented, one for each sensory device 12a-12n. In various other aspects of the present teachings, the functionality of the surveillance module 16 may be divided into sub-modules, where some sub-modules are implemented on the sensory devices 12a-12n, while other sub-modules are implemented remotely from the sensory devices 12a-12n as shown in Figure 2. [0022] Referring now to Figure 2, a dataflow diagram illustrates a more detailed exemplary surveillance system 10 implemented according to various aspects of the present teachings. For exemplary purposes, the remainder of the disclosure will be discussed in the context of using one or more cameras 20a- 2On as the sensory devices 12a-12n (Figure 1 ). As shown in Figure 2, each camera 20a-20n includes an image capture module 22, a video analysis module 80, a scoring engine module 24, a decision making module 26, and a device configuration module 28.
[0023] The image capture module 22 collects the sensor data 14a-14n as image data corresponding to a scene and the video analysis module 80 processes the image data to extract object meta data 30 from the scene. The scoring engine module 24 receives the object meta data 30 and produces a measure of abnormality or normality also referred to as a score 34 based on learned models 32.
[0024] The decision making module 26 collects the scores 34 and determines an alert level for the object data 30. The decision making module 26 sends an alert message 36n that includes the alert level to external components for further processing. The decision making module 26 can exchange scores 34 and object data 30 with other decision making modules 26 of other cameras 20a, 20b to generate predictions about objects in motion. The device configuration module 28 loads and manages various models 32, scoring engine methods 52, decision making methods 50, and/or decision making parameters 51 that can be associated with the camera 2On. [0025] The surveillance system 10 can also include an alarm handling module 38, a surveillance graphical user interface (GUI) 40, a system configuration module 42, a learning module 44, and a model builder module 46. As shown, such components can be located remotely from the cameras 20a- 2On. The alarm handling module 38 re-evaluates the alert messages 36a-36n from the cameras 20a-20n and dispatches the alarm messages 18. The alarm handling module 38 interacts with the user via the surveillance GUI 40 to dispatch the alarm messages 18 and/or collect miss-classification data 48 during alarm acknowledgement operation. [0026] The learning module 44 adapts the decision making methods
50 and parameters 51 , and/or the scoring engine methods 52 for each camera 20a-20n by using the miss-classification data 48 collected from the user. As will be discussed further, the decision making methods 50 are automatically learned and optimized for each scoring method 52 to support the prediction of potential incidents, increase the detection accuracy, and reduce the number of false alarms. The decision making methods 50 fuse the scores 34 as well as previous scoring results, object history data, etc., to reach a final alert decision.
[0027] The model builder module 46 builds models 32 representing normal and/or abnormal conditions based on the collected object data 30. The system configuration module 42 manages the models 32, the decision making methods 50 and parameters 51 , and the scoring engine methods 52 for the cameras 20a-20n and uploads the methods and data 32, 50, 51 , 52 to the appropriate cameras 20a-20n.
[0028] Referring now to Figures 3 through 10, each Figure provides a more detailed exemplary illustration of the components of the surveillance system 10. More particularly, Figure 3 is a more detailed exemplary model builder module 46 according to various aspects of the present teachings. As shown, the model builder module 46 includes a model initialization module 60, a model initialization graphical user interface 62, a model learn module 64, an image data datastore 66, a model methods datastore 68, and a model data datastore 70. [0029] The model initialization module 60 captures the domain knowledge from users, and provides initial configuration of system components (i.e., optimized models, optimized scoring functions, optimized decision making functions, etc.). In particular, the model initialization module 60 builds initial models 32 for each camera 20a-20n (Figure 2) based on input 74 received from a user via the model initialization GUI 62. For example, the model initialization GUI 62 displays a scene based on image data from a camera thus, providing easy to understand context for user to describe expected motions of objects within the camera field of view. The image data can be received from the image data datastore 66. Using the model initialization GUI 62, the user can enter motion parameters 72 to simulate random trajectories of moving objects in the given scene. The trajectories can represent normal or abnormal conditions. The model initialization module 60 then simulates the trajectories and extracts data from the simulated trajectories in the scene to build the models 32. The generated simulated metadata corresponds to an expected output of a selected video analysis module 80 (Figure 2).
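A rough sketch of the trajectory simulation step described in paragraph [0029] follows. The motion parameters used here (entry point, heading, mean speed, noise level) are illustrative assumptions about what a user might supply via the model initialization GUI 62; the output mimics the (timestamp, x, y) samples a video analysis module would produce.

    # Sketch of simulating a random trajectory from user-supplied motion
    # parameters; all parameter names and the noise model are assumptions.
    import math
    import random

    def simulate_trajectory(entry_xy, heading_deg, mean_speed, n_steps=50, noise=2.0, dt=1.0):
        x, y = entry_xy
        heading = math.radians(heading_deg)
        track = []
        for i in range(n_steps):
            speed = max(0.0, random.gauss(mean_speed, noise))
            x += speed * math.cos(heading) * dt + random.gauss(0, noise)
            y += speed * math.sin(heading) * dt + random.gauss(0, noise)
            track.append((i * dt, x, y))
        return track  # simulated metadata: (timestamp, x, y) samples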
[0030] The model initialization module 60 builds the optimized models 32 from predefined model builder methods stored in the model methods datastore 68. In various aspects of the present teachings, the model initialization module 60 builds the optimal configuration according to a model builder method that selects particular decision making methods 50 (Figure 2) , the configuration parameters 51 (Figure 2) of decision making methods 50, a set of scoring engine methods 52 (Figure 2), and/or configuration parameters of scoring engine methods. [0031] In various aspects of the present teachings, the model initialization GUI 62 can provide an option to the user to insert a predefined object into the displayed scene. The model initialization module 60 then simulates the predefined object along the trajectory path for verification purposes. If the user is satisfied with the trajectory paths, the model 32 is stored in the model data datastore 70. Otherwise, the user can iteratively adjust the trajectory parameters and thus, the models 32 until the user is satisfied with the simulation. [0032] Thereafter, the model learn module 64 can automatically adapt the models 32 for each camera 20a-20n (Figure 2) by using the collected object data 30 and based on the various model builder methods stored in the model methods datastore 68. The model learn module 64 stores the adapted models 32 in the model data datastore 70.
[0033] As can be appreciated, various model building methods can be stored to the model methods datastore 68 to allow the model builder module 46 to build a number of models 32 for each object based on a model type. For example, the various models can include, but are not limited to, a velocity model, an acceleration model, an occurrence model, an entry/exit zones model, a directional speed profile model, and a trajectory model. These models can be built for all observed objects as well as different types of objects. As shown in Figure 4, the data for each model 32 can be represented as a multi-dimensional array structure 71 (i.e., a data cube) in which each element refers to a specific spatial rectangle (in 3D it is hyper-rectangle) and time interval. In various aspects of the present teachings, the models 32 are represented according to a Predictive Model Markup Language (PMML) and its extended form for surveillance systems.
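The data cube of paragraph [0033] can be pictured roughly as in the sketch below, which only illustrates how a cell is addressed by a time interval and a spatial rectangle of the scene; the 24 time bins, the 16 by 16 spatial grid, and the per-cell statistic are assumptions of this sketch, not the patent's layout.

    # Illustrative sketch of a model "data cube": a multi-dimensional array
    # whose cells are addressed by a time interval and a spatial rectangle.
    import numpy as np

    class DataCube:
        def __init__(self, t_bins=24, x_bins=16, y_bins=16, width=640, height=480):
            self.t_bins, self.x_bins, self.y_bins = t_bins, x_bins, y_bins
            self.width, self.height = width, height
            self.cells = np.zeros((t_bins, x_bins, y_bins))  # one statistic per cell

        def index(self, t_seconds, x, y):
            # Map a timestamp (seconds of day) and image location to a cell index.
            t_idx = int(t_seconds // (24 * 3600 / self.t_bins)) % self.t_bins
            x_idx = min(int(x * self.x_bins / self.width), self.x_bins - 1)
            y_idx = min(int(y * self.y_bins / self.height), self.y_bins - 1)
            return t_idx, x_idx, y_idx

        def update(self, t_seconds, x, y, value=1.0):
            self.cells[self.index(t_seconds, x, y)] += value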
[0034] In various aspects of the present teachings, the occurrence model describes the object detection probabilities in space and time dimensions. Each element of the occurrence data cube represents the probability of detecting an object at the particular location in the scene at the particular time interval. As can be appreciated, a time plus three dimensional occurrence data cube can be obtained from multiple cameras 20a-20n (Figure 2). The velocity model can be similarly built, where each cell of the velocity data cube can represent a Gaussian distribution of (dx,dy) or a mixture of Gaussian distributions. These parameters can be learned with recursive formulae. Similar to the velocity data cube, each cell of an acceleration data cube stores the Gaussian distribution of ((dx)',(dy)'). The entry/exit zones model models regions of the scene in which objects are first detected and last detected. These, areas can be modeled by a mixture of Gaussian models. Their location can be generated from first and last track points of each detected object by the application of clustering methods, such as, K-means, Expectation Maximization (EM) methods, etc.
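The recursive learning of per-cell velocity statistics mentioned in paragraph [0034] might look roughly like the following sketch, in which each cell keeps a running Gaussian estimate of (dx, dy); the exponential-forgetting update with rate alpha is an assumption chosen for illustration, not the patent's recursive formulae.

    # Sketch of a per-cell velocity statistic learned recursively: a running
    # Gaussian estimate of (dx, dy) with exponential forgetting (assumed).
    class VelocityCell:
        def __init__(self, alpha=0.05):
            self.alpha = alpha
            self.mean = [0.0, 0.0]   # running mean of (dx, dy)
            self.var = [1.0, 1.0]    # running variance of (dx, dy)
            self.count = 0

        def update(self, dx, dy):
            self.count += 1
            for i, v in enumerate((dx, dy)):
                delta = v - self.mean[i]
                self.mean[i] += self.alpha * delta
                self.var[i] = (1 - self.alpha) * (self.var[i] + self.alpha * delta * delta)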
[0035] The trajectory models can be built by using the entry and exit regions with the object meta data 30 obtained from the video analysis module 80 (Figure 2). In various aspects, each entry-exit region defines a segment in the site used by the observed objects in motion. A representation of each segment can be obtained by applying methods such as curve fitting, regression, etc. to object data collected from a camera in real time or simulated. Since each entry and exit region includes a time interval, the segments also include an associated time interval.
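As one illustration of obtaining a segment representation by curve fitting, the sketch below fits simple polynomials x(t) and y(t) to collected track points between an entry and an exit region; the polynomial form and degree are assumptions of this sketch rather than the patent's choice of fitting method.

    # Sketch of building a trajectory segment by curve fitting collected track
    # points; requires at least degree + 1 samples.
    import numpy as np

    def fit_segment(track_points, degree=2):
        # track_points: list of (t, x, y) samples for objects moving from one
        # entry region to one exit region.
        t = np.array([p[0] for p in track_points], dtype=float)
        x = np.array([p[1] for p in track_points], dtype=float)
        y = np.array([p[2] for p in track_points], dtype=float)
        return {
            "x_of_t": np.polyfit(t, x, degree),   # coefficients of x(t)
            "y_of_t": np.polyfit(t, y, degree),   # coefficients of y(t)
            "time_interval": (float(t.min()), float(t.max())),
        }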
[0036] The directional models represent the motion of an object with respect to regions in a site. Specifically, each cell contains a probability of following a certain direction in the cell and a statistical representation of measurements in a spatio temporal region (cell), such as speed and acceleration. A cell can contain links to entry regions, exit regions, trajectory models, and global data cube model of site under surveillance. A cell can contain spatio temporal region specific optimized scoring engine methods as well as user specified scoring engine methods. Although the dimensions of the data cube are depicted as a uniform grid structure, it is appreciated that non-uniform intervals can be important for optimal model representation. The variable length intervals, as well as clustered/segmented non-rigid spatio temporal shape descriptors (i.e., 3D/4D shape descriptions), can be used for model reduction. Furthermore, the storage of the model 32 can utilize multi-dimensional indexing methods (such as R-tree, X-tree, SR-tree, etc.) for efficient access to cells. [0037] As can be appreciated, the data cube structure supports predictive modeling of the statistical attributes in each cell so that the a motion trajectory of an observed object can be predicted based on the velocity and acceleration attributes stored in the data cube. For example, based on a statistical analysis of the past history of motion objects, any object detected in location (X1 , Y1) may be highly likely to move to location (X2, Y2) after T seconds based on historical data. When a new object is observed in location (X1 , Y1), it is likely to move to location (X2, Y2) after T seconds. [0038] Referring now to Figure 5, a diagram illustrates a more detailed exemplary camera 20 of the surveillance system 10 according to various aspects of the present teachings. The camera 20, as shown, includes the image capture module 22, a video analyzer module 80, the scoring engine module 24, the decision making module 26, the device configuration module 28, an object history datastore 82, a camera models datastore 92, a scoring engine scores history datastore 84, a parameters datastore 90, a decision methods datastore 88, and a scoring methods datastore 86.
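The predictive use of the velocity and acceleration attributes described in paragraphs [0036] and [0037], where an object detected at (X1, Y1) is expected at (X2, Y2) after T seconds, can be sketched as a constant-acceleration extrapolation; the cell_stats field names below are illustrative assumptions about what a data cube cell might store.

    # Sketch of predicting a future location from the mean velocity and
    # acceleration stored in the data cube cell covering (x1, y1) at time t.
    def predict_location(cell_stats, x1, y1, T):
        vx, vy = cell_stats["mean_velocity"]                 # pixels per second
        ax, ay = cell_stats.get("mean_acceleration", (0.0, 0.0))
        x2 = x1 + vx * T + 0.5 * ax * T * T
        y2 = y1 + vy * T + 0.5 * ay * T * T
        return x2, y2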
[0039] As discussed above, the image capture module 22 captures image data 93 from the sensor data 14. The image data 93 is passed to the video analyzer module 80 for the extraction of objects and properties of the objects. More particularly, the video analyzer module 80 can produce object data 30 in the form of an object detection vector (δ) that includes: an object identifier (a unique key value per object); a location of the center of the object in the image plane (x,y); a timestamp; a minimum bounding box (MBB) in the image plane (x.low, y.low, x.upper, y.upper); a binary mask matrix that specifies which pixels belong to the detected object; image data of the detected object; and/or some other properties of detected objects, such as visual descriptors specified by a metadata format (i.e., the MPEG7 Standard and its extended form for surveillance). The object data 30 can be sent to the scoring engine (SE) modules 24 and saved into the object history datastore 82.
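A possible container for the object detection vector of paragraph [0039] is sketched below; the field names are illustrative and do not reproduce the patent's exact metadata schema.

    # Sketch of an object detection vector as a plain data structure.
    from dataclasses import dataclass, field
    from typing import List, Optional, Tuple

    @dataclass
    class ObjectDetection:
        object_id: str                              # unique key value per object
        center: Tuple[float, float]                 # (x, y) in the image plane
        timestamp: float
        mbb: Tuple[float, float, float, float]      # (x.low, y.low, x.upper, y.upper)
        mask: Optional[List[List[int]]] = None      # binary mask of object pixels
        image_patch: Optional[bytes] = None         # cropped image data of the object
        descriptors: dict = field(default_factory=dict)  # e.g., MPEG-7 visual descriptors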
[0040] In various aspects of the present teachings, the video analyzer module 80 can access the models 32 of the camera models datastore 92, for example, for improving accuracy of the object tracking methods. As discussed above, the models 32 are loaded to the camera models datastore 92 of the camera 20 via the device configuration module 28. The device configuration module also instantiates the scoring engine module 24, the decision making module 26, and prepares a communication channel between modules involved in the processing of object data 30 for progressive behavior and threat detection. [0041] The scoring engine module 24 produces one or more scores 34 for particular object traits, such as, an occurrence of the object in the scene, a velocity of the object, and an acceleration of the object. In various aspects, the scoring engine module includes a plurality of scoring engine sub-module that performs the following functionality. The scoring engine module 24 selects a particular scoring engine method 52 from the scoring methods datastore 86 based on the model type and the object trait to be scored. Various exemplary scoring engine methods 52 can be found in the attached Appendix A. The scoring engine methods 52 are loaded to the scoring methods datastore 86 via the device configuration module 28
[0042] The scores 34 of each detected object can be accumulated to obtain progressive threat or alert levels at a location (X0, Y0) in real time. Furthermore, using the predictive model stored in the data cube, the score 34 of an object can be calculated in advance by first predicting the motion trajectory of the object and then calculating the score of the object along that trajectory. As a result, the system can predict changes in threat levels before they occur, to support preemptive alert message generation. The forward prediction can include the predicted properties of an object in the near future (such as its location, speed, etc.) as well as a trend analysis of the scoring results.
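For illustration only, the preemptive scoring idea of [0042] can be sketched in Python as follows; predict_location and score_cell stand for the prediction and per-cell scoring steps and are hypothetical helpers, not functions defined by the patent.

def preemptive_score(score_cell, predict_location, x, y, t, horizon=5):
    # accumulate per-cell scores along the predicted path of an object
    total = 0.0
    for step in range(1, horizon + 1):
        x, y = predict_location(x, y, t, steps=1)
        total += score_cell(x, y, t + step)
    return total / horizon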
[0043] The determination of the score 34 can be based on the models 32, the object data 30, the scores history data 34, and in some cases object history data from the object history datastore 82, user-defined regions of interest, and various combinations thereof. As can be appreciated, the score 34 can be a scalar value representing the measure of abnormality. In various other aspects of the present teachings, the score 34 can include two or more scalar values. For example, the score 34 can include a measure of normalcy and/or a confidence level, and/or a measure of abnormality and/or a confidence level. The score data 34 is passed to the decision making module 26 and/or stored in the SE scores history datastore 84 with a timestamp.
[0044] The decision making module 26 then generates the alert message 36 based on a fusing of the scores 34 from the scoring engine modules 24 for a given object detection event data (δ). The decision making module can use the historical score data 34 and the object data 30 during fusion. The decision making module 26 can be implemented according to various decision making methods 50 stored in the decision methods datastore 88. Such decision making methods 50 can be loaded to the camera 20 via the device configuration module 28. In various aspects of the present teachings, as shown in Figure 6, the alert message 36 is computed as a function of a summation of weighted scores as shown by the following equation:
Σ_i w_i(t, XY) · SE_i(o)    (1)
where w represents a weight for each score based on time (t) and the spatial dimensions (XY). In various aspects of the present teachings, the dimensions of the data cube can vary in number, for example, XYZ spatial dimensions. The weights (w) can be pre-configured or adaptively learned, and are loaded to the parameters datastore 90 via the device configuration module 28. In various other aspects of the present teachings, the alert message 36 is determined based on a decision tree based method as shown in Figure 7. The decision tree based method can be adaptively learned throughout the surveillance process. [0045] Since the decision making module 26 can be implemented according to various decision making methods 50, the decision making module is preferably defined in a declarative form, for example, by using an XML based representation such as an extended form of the Predictive Model Markup Language. This enables the learning module 44 to improve the accuracy of the decision making module, since the learning module 44 can change various parameters (such as the weights and the decision tree explained above) as well as the decision making method itself.
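For illustration only, equation (1) can be sketched in Python as a weighted sum of the scoring engine outputs followed by a threshold test; the weight values and the alert threshold below are assumptions, not values taken from the patent.

def fuse_scores(scores, weights, alert_threshold=50.0):
    # scores: list of SE_i(o) values; weights: matching w_i(t, XY) values
    fused = sum(w * s for w, s in zip(weights, scores))
    return fused, fused >= alert_threshold

fused, alert = fuse_scores([10.0, 80.0, 30.0], [0.2, 0.5, 0.3])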
[0046] In various aspects of the present teachings, the decision making module 26 can generate predictions that enable early-warning alert messages for progressive behavior and threat detection. For example, the decision making module 26 can generate predictions about objects in motion based on the trajectory models 32. A prediction of a future location of an object in motion enables the decision making module 26 to identify whether two objects in motion will collide. If a collision is probable, the decision making module 26 can predict where and when the objects will collide, as well as generate the alert message 36 to prevent a possible accident. [0047] As discussed above, to allow for co-operative decision making between cameras 20a-20n in the surveillance system 10, the decision making module 26 can exchange data with other decision making modules 26, such as decision making modules 26 running in other cameras 20a, 20b (Figure 2) or devices. The object data 30 and the scores 34 of suspicious objects detected by other cameras 20a, 20b (Figure 2) can be stored to the object history datastore 82 and the SE scores history datastore 84, respectively, thus providing a history of the suspicious object to improve the analysis by the decision making module 26. [0048] Referring now to Figure 8, a dataflow diagram illustrates a more detailed exemplary alarm handling module 38 of the surveillance system 10 according to various aspects of the present teachings. The alarm handling module 38 collects alert messages 36 and creates a "threat" structure for each newly detected object. The threat structure maintains the temporal properties associated with the detected object and also associates other pre-stored properties and obtained properties (such as the result of face recognition) with the detected object. The alarm handling module 38 re-evaluates the received alert messages 36 by using the collected properties of objects in the threat structure and additional system configuration to decide the level of alarm. The alarm handling module can filter the alert message without generating any alarm, as well as increase the alarm level if desired.
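For illustration only, the collision prediction described in [0046] can be sketched in Python under a constant-velocity assumption: find the time of closest approach of two tracked objects and test whether they pass within a collision radius. The radius parameter is hypothetical.

import math

def predict_collision(p1, v1, p2, v2, radius=10.0):
    # p*, v* are (x, y) position and velocity tuples in image coordinates per second
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    speed2 = dvx * dvx + dvy * dvy
    t_star = 0.0 if speed2 == 0 else max(0.0, -(dx * dvx + dy * dvy) / speed2)
    rx, ry = dx + dvx * t_star, dy + dvy * t_star     # relative offset at closest approach
    min_dist = math.hypot(rx, ry)
    where = (p1[0] + v1[0] * t_star, p1[1] + v1[1] * t_star)   # first object's position at that time
    return min_dist <= radius, t_star, where, min_dist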
[0049] More particularly, the alarm handling module 38 can include a threats data datastore 98, a rule based abnormality evaluation module 94, a rules datastore 100, and a dynamic rule based alarm handling module 96. As can be appreciated, the rule based abnormality evaluation module 94 can be considered another form of a decision making module 26 (Figure 2) defined within a sensor device. Therefore, all explanations/operations associated with the decision making module 26 are applicable to the rule based abnormality evaluation module 94. For example, the decision making for the rule based abnormality evaluation module 94 can be declaratively defined in an extended form of the Predictive Model Markup Language for surveillance. The threats data datastore 98 stores the object data, the scores 34, and additional properties that can be associated with an identified object. Such additional properties can be applicable to identifying a particular threat and may include, but are not limited to: identity recognition characteristics of a person or item, such as facial recognition characteristics or a license plate number; and object attributes such as an employment position or a criminal identity.
[0050] The rules datastore 100 stores rules that are dynamically configurable and that can be used to further evaluate the detected object. Such evaluation rules, for example, can include, but are not limited to, rules identifying permissible objects even though they are identified as suspicious; rules associating higher alert levels with recognized objects; and rules recognizing an object as suspicious when the object is present in two different scenes at the same time.
[0051] The rule based abnormality evaluation module 94 associates the additional properties with the detected object based on the object data from the threats data datastore 98. The rule based abnormality evaluation module 94 then uses this additional information and the evaluation rules to re-evaluate the potential threat and the corresponding alert level. For example, the rule based abnormality evaluation module 94 can identify the object as a security guard traversing the scene during off-work hours. Based on the configurable rules and actions, the rule based abnormality evaluation module 94 can disregard the alert message 36 and prevent the alarm messages 18 from being dispatched even though a detection of a person at off-work hours is suspicious.
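For illustration only, the rule-based re-evaluation of [0049]-[0051] can be sketched in Python as a chain of dynamically configurable rules that may lower or raise the alarm level; the rule signatures and the example rules are assumptions made for this example.

def whitelist_security_guard(threat, level):
    # suppress the alarm for a recognized guard, per the example in [0051]
    return 0 if threat.get("identity_role") == "security_guard" else level

def same_object_two_scenes(threat, level):
    # raise the level when the same object appears in two scenes at the same time
    return level + 1 if threat.get("simultaneous_scenes", 1) > 1 else level

RULES = [whitelist_security_guard, same_object_two_scenes]

def evaluate_alarm_level(threat, initial_level):
    level = initial_level
    for rule in RULES:
        level = rule(threat, level)
    return level   # 0 means the alert is filtered without dispatching an alarm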
[0052] The dynamic rule based alarm handling module 96 dispatches an alert event 102 in the form of the alarm messages 18 and its additional data to interested modules, such as the surveillance GUI 40 (Figure 2) and/or an alarm logging module (not shown). When the dynamic rule based alarm handling module 96 dispatches the alarm messages 18 via the surveillance GUI 40, the user can provide additional feedback by agreeing or disagreeing with the alarm. The feedback is provided by the user as misclassification data 48 to the learning module 44 (Figure 2) in the form of agreed or disagreed cases. This allows the surveillance system 10 to collect a set of data for further optimization of system components (i.e., models 32, scoring engine methods 52, decision making methods 50, rules, etc. (Figure 2)).
[0053] Referring now to Figure 9, a dataflow diagram illustrates a more detailed exemplary learning module 44 of the surveillance system 10 according to various aspects of the present teachings. The learning module 44 optimizes the scoring engine methods 52, the decision making methods 50, and the associated parameters 51, such as the spatio-temporal weights, based on the learned misclassification data 48.
[0054] For example, the learning module 44 retrieves the decision making methods 50, the models 32, the scoring engine methods 52, and the parameters 51 from the system configuration module 42. The learning module 44 selects one or more appropriate learning methods from a learning method datastore 106. The learning methods can be associated with a particular decision making method 50. Based on the learning method, the learning module 44 re-examines the decision making method 50 and the object data 30 from a camera against the misclassification data 48. The learning module can adjust the parameters 51 to minimize the error in the decision making operation. As can be appreciated, if more than one learning method is associated with the decision making method 50, the learning module 44 performs the above re-examination for each method 50 and uses the best result, or some combination thereof, to adjust the parameters 51.
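For illustration only, one simple way to adjust the fusion weights from agreed/disagreed feedback, in the spirit of [0053]-[0054], is a perceptron-style update, sketched below in Python; the learning rate, threshold, and update rule are assumptions and not the patent's learning method.

def adjust_weights(weights, cases, threshold=50.0, lr=0.01):
    # cases: list of (scores, should_alarm) pairs built from user feedback
    for scores, should_alarm in cases:
        fused = sum(w * s for w, s in zip(weights, scores))
        if (fused >= threshold) != should_alarm:
            sign = 1.0 if should_alarm else -1.0
            weights = [w + sign * lr * s for w, s in zip(weights, scores)]
    return weights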
[0055] Referring now to Figure 10, a dataflow diagram illustrates a more detailed exemplary system configuration module 42 of the surveillance system 10 according to various aspects of the present teachings. The system configuration module 42, as shown, includes a camera configuration module 110, an information upload module 112, and a camera configuration datastore 114.
[0056] The camera configuration module 110 associates the models 32, the scoring engine methods 52, and the decision making methods 50 and parameters 51 with each of the cameras 20a-20n (Figure 2) in the surveillance system 10. The camera configuration module 110 can accept and associate additional system configuration data from the camera configuration datastore 114, such as, user accounts and network level information about devices in the system (such as cameras, encoders, recorders, IRIS recognition devices, etc.). The camera configuration module 110 generates association data 116.
[0057] The information upload module 112 provides the models 32, the scoring engine methods 52, and the decision making methods 50 and parameters 51 to the device configuration module 28 (Figure 2) based on the association data 116 of the cameras 20a-20n (Figure 2) upon request. In various aspects of the present teachings, the information upload module 112 can be configured to provide the models 32, the scoring engine methods 52, the decision making methods 50 and parameters 51 to the device configuration module 28 (Figure 2) of the cameras 20a-20n at scheduled intervals.
[0058] Those skilled in the art can now appreciate from the foregoing description that the broad teachings of the present disclosure can be implemented in a variety of forms. Therefore, while this disclosure has been described in connection with particular examples thereof, the true scope of the disclosure should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and the following claims.
APPENDIX A
1. Occurrence Based Scoring Methods
The occurrence model summarizes whether the detection of an object at [t,x,y] (time and space) is expected or not.
Expected results:
1. An object is detected at a location [t,x,y] where there should not be such activity at cell [t,x,y].
2. The same object track is observed in two different time intervals (in one time interval it is OK, in another time interval it is NOT OK, or at least requires a human to investigate the activity).
SE_ALG1 Method:
Compare the current occurrence probability associated with the current location of an object with the mean value of the occurrence probabilities obtained from the last 3 time slices.
When the occurrence probability associated with the current location of an object is less than the mean value of the occurrence probabilities, the algorithm assigns an abnormality score by using the distance from the mean value.
Input: δ = [oid, t, x, y]
Output: Threat Score in [0..MAX_THREAT_SCORE]

Calculation of the combined occurrence model for δ = [oid, t, x, y]:
Select the occurrence time slices for (t-2), (t-1), and t from the data cube (O(t-2), O(t-1), O(t))
CombinedOccurrence = O(t-2) + O(t-1) + O(t);
Calculation of the mean occurrence probability for δ = [oid, t, x, y]:
Find the mean value (meanValue) of the non-zero entries of CombinedOccurrence

Function [isThreat ThreatScore] = SE_ALG1(DC, o)
  QuantizationValue = meanValue / (MAX_THREAT_SCORE - 1);
  isThreat = 0; ThreatScore = 0;
  if CombinedOccurrence(x, y, t) > meanValue then
    ThreatScore = 0;   % No threat
  else
    isThreat = 1;
    ThreatScore = floor(CombinedOccurrence(x, y, t) / QuantizationValue) + 1;
  end

1.1 SE_OSE1 Method:
Compare the current occurrence probability associated with the current location of an object with the mean value of the occurrence probabilities in the current time slice.
When the occurrence probability associated with the current location of an object is less than the mean value of the occurrence probabilities, the algorithm assigns an abnormality score by using the distance from the mean value divided by the standard deviation of the occurrence probabilities.
Input: δ = [oid, t, x, y]
Output: Threat Score in [0..MAX_THREAT_SCORE]

Function [isThreat ThreatScore] = OSE1(DC, o)
  %
  % DC : Multi Modal Data Cube
  % o  : Object detection vector [oid, t, x, y]
  % Find the corresponding tidx, yidx, xidx index in DC by using (t, x, y)
  tidx = FindTimeIndex(t); yidx = FindYGridIndex(y); xidx = FindXGridIndex(x);
  % Here "v" is the vector of non-zero values
  [m, n, v] = find(DC(tidx, :, :, 2));   % select non-zero occurrence probabilities
  Mean = mean(v);   % mean
  Std  = std(v);    % standard deviation
  Min  = min(v);    % minimum
  Max  = max(v);    % maximum
  isThreat = 0; ThreatScore = 0;
  threshold = Mean;
  if (DC(tidx, yidx, xidx, 2) < threshold)
    isThreat = 1;
    threatDistance = abs(DC(tidx, yidx, xidx, 2) - Mean);
    ThreatScore = floor(threatDistance / Std);
    ThreatScore = min(MAX_THREAT_SCORE, ThreatScore);
  end

1.2 SE_OSE3 Method:
Compare the current occurrence probability associated with the current location of an object with the mean value of the occurrence probabilities in the current time slice.
When the occurrence probability associated with the current location of an object is less than the mean value of the occurrence probabilities, the algorithm assigns a threat score by using the distance from the mean value divided by the standard deviation of the occurrence probabilities. Comparison with OSE1: OSE3 uses the mean calculation algorithm used in OSE1, but it uses a different algorithm to assign a threat score.
Input: δ = [oid, t, x, y]
Output: Threat Score in [0..MAX_THREAT_SCORE]

Function [isThreat ThreatScore] = OSE3(DC, o)
  %
  % DC : Multi Modal Data Cube
  % o  : Object detection vector [oid, t, x, y]
  % Find the corresponding tidx, yidx, xidx index in DC by using (t, x, y)
  tidx = FindTimeIndex(t); yidx = FindYGridIndex(y); xidx = FindXGridIndex(x);
  [m, n] = find(DC(tidx, :, :, 2) > 0);   % select non-zero occurrence probabilities
  nonZeroCount = length(m);
  p_max = max(max(DC(tidx, :, :, 2)));
  p_mu  = sum(sum(DC(tidx, :, :, 2))) / nonZeroCount;
  QuantVal = p_mu / (MAX_THREAT_SCORE - 1);
  isThreat = 0; ThreatScore = 0;
  threshold = p_mu;
  if (DC(tidx, yidx, xidx, 2) < threshold)
    isThreat = 1;
    threatDistance = p_mu - DC(tidx, yidx, xidx, 2);
    ThreatScore = floor(threatDistance / QuantVal) + 1;
  end
Alternative threshold values could be mode(), median(), (mode()+mean())/2, mean - k*std, etc.

[Figure 1: The threat score increases linearly (in steps of QuantValue, up to MAX_THREAT_SCORE) with respect to the distance (ThreatDistance) from the mean (p_mu).]
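For illustration only, the alternative threshold values listed above could be computed from the non-zero occurrence probabilities of a time slice as in the Python sketch below; the histogram-based mode estimate is an assumption made for this example.

import numpy as np

def occurrence_thresholds(occ_slice, k=1.0):
    # occ_slice: 2-D array of per-cell occurrence probabilities for one time slice
    v = occ_slice[occ_slice > 0]
    if v.size == 0:
        return {}
    mean, std, median = v.mean(), v.std(), np.median(v)
    hist, edges = np.histogram(v, bins=10)            # crude mode estimate
    mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
    return {"mean": mean, "median": median, "mode": mode,
            "(mode+mean)/2": 0.5 * (mode + mean), "mean-k*std": mean - k * std}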
1.3 SE_OSE6 Method:
Compare the current occurrence probability associated with the current location of an object with a threshold value obtained from the occurrence probabilities in the current time slice:
Threshold = Mean - Std
When the occurrence probability associated with the current location of an object is less than the threshold, the algorithm assigns a threat score by using the distance from the mean value divided by the standard deviation of the occurrence probabilities.
Comparison with OSE3: OSE6 uses a different threshold value.
Input: δ = [oid, t, x, y]
Output: Threat Score in [0..MAX_THREAT_SCORE]

Function [isThreat ThreatScore] = OSE6(DC, o)
  %
  % DC : Multi Modal Data Cube
  % o  : Object detection vector [oid, t, x, y]
  % Find the corresponding tidx, yidx, xidx index in DC by using (t, x, y)
  tidx = FindTimeIndex(t); yidx = FindYGridIndex(y); xidx = FindXGridIndex(x);
  % Here "v" is the vector of non-zero values
  [m, n, v] = find(DC(tidx, :, :, 2));
  Mean = mean(v); Std = std(v); Min = min(v); Max = max(v);
  isThreat = 0; ThreatScore = 0;
  threshold = Mean - Std;
  if (DC(tidx, yidx, xidx, 1) < 1 || DC(tidx, yidx, xidx, 2) < threshold)
    isThreat = 1;
    threatDistance = abs(DC(tidx, yidx, xidx, 2) - Mean);
    % ThreatScore = floor(abs(DC(tidx, yidx, xidx, 2) - Mean)^2 / Std)
    ThreatScore = floor(threatDistance / Std);
    ThreatScore = min(MAX_THREAT_SCORE, ThreatScore);
  end

2. Velocity Profile Based Scoring Methods
2.1 SE_VSE1 Method:
Compare the current velocity profile associated with the current location of an object with the threshold value obtained from the velocity profile in the current time slice
Input: δ = [oid, t, x, y, Δx, Δy] denotes the object's flow vector at the current time t
k = threshold control parameter (k*std)
Output: Threat Score in [0..MAX_THREAT_SCORE]
Function [isThreat ThreatScore] = VM_SE_ALG1(VMDC, o, k)
  %
  % VMDC : Multi Modal Data Cube
  % o    : [oid, t, x, y, Δx, Δy] vector
  % k    : controls the threshold value (k*std)
  %
  % Find the corresponding tidx, yidx, xidx index in VMDC by using (t, x, y)
  tidx = FindTimeIndex(t); yidx = FindYGridIndex(y); xidx = FindXGridIndex(x);
  meany = VMDC(tidx, yidx, xidx, 3);
  meanx = VMDC(tidx, yidx, xidx, 4);
  stdy  = VMDC(tidx, yidx, xidx, 5);
  stdx  = VMDC(tidx, yidx, xidx, 6);
  count = VMDC(tidx, yidx, xidx, 7);
  deltay = abs(vy - meany);   % vy, vx are the observed flow components Δy, Δx
  deltax = abs(vx - meanx);
  isThreat = 0; ThreatScore = 0;
  if (deltay > (k*stdy)) || (deltax > (k*stdx))
    isThreat = 1;
    ThreatScore = deltay/stdy + deltax/stdx;
    ThreatScore = min(MAX_THREAT_SCORE, ThreatScore);
  end
The threat score value can also be obtained from a 2D Gaussian function.

2.2 SE_VSE1N Method:
Decide the threat score by using last n samples with the velocity model
Input:
Obj = {[oid, t_i, x_i, y_i, Δx_i, Δy_i], [oid, t_(i-1), x_(i-1), y_(i-1), Δx_(i-1), Δy_(i-1)], ..., [oid, t_(i-n+1), x_(i-n+1), y_(i-n+1), Δx_(i-n+1), Δy_(i-n+1)]} denotes the object's last n flow vectors, where t_i is the current time
k = threshold control parameter (k*std)
Output: Threat Score in [0..MAX_THREAT_SCORE]
Function [isThreat ThreatScore] = VM_SE_ALG1N(VMDC, Obj, k, n)
  %
  % VMDC : Multi Modal Data Cube
  % Obj  : a matrix containing the last n observations [oid, t_i, x_i, y_i, Δx_i, Δy_i]
  % k    : controls the threshold value (k*std)
  isThreat = 0; ThreatScore = 0;
  for i = 1:min(length(Obj), n)   % length() gives the number of rows in the matrix Obj
    P(i,:) = VM_SE_ALG1(VMDC, Obj(i), k);   % P(i,:) holds [isThreat ThreatScore] for the ith sample
  end
  isThreat = sum(P(:,1));
  if (isThreat > 1)
    isThreat = 1;
  end
  ThreatScore = max(P(:,2));
The above algorithm uses the max() function to obtain the final threat score. There are many different ways to assign the threat score, such as:
ThreatScore = average(P(:,2));
ThreatScore = median(P(:,2));
ThreatScore = mode(P(:,2));
ThreatScore = (average(P(:,2)) + mode(P(:,2)))/2;
ThreatScore = (average(P(:,2)) + median(P(:,2)))/2;
ThreatScore = (median(P(:,2)) + mode(P(:,2)))/2;
The individual scores for the previous points can be combined by using weights. Let ts(i) be the time stamp of the ith object flow vector, and assume that the object flow vectors are in decreasing time stamp order (ts(1) > ts(2) > ... > ts(n-1) > ts(n)). Let score(i) be the threat score associated with the ith object flow vector. For a given n, the final threat score is

ThreatScore = [ Σ_{i=1..n} w(Δt_i, Δt) * score(i) ] / [ Σ_{i=1..n} w(Δt_i, Δt) ]

where Δt_i = ts(i) - ts(n), Δt = ts(1) - ts(n), and w(Δt_i, Δt) = Δt_i / Δt. The weight of each score depends on the distance (in the time dimension) between the current time and the time stamp of the instance. The weights are linear with respect to that distance.
The non-linear weight assignments can use sigmoid function, double sigmoid function, exponential decay functions, logistic functions, Gaussian distribution function, etc. to express the weights based on their distance to the current time. Their parameters could be adjusted by learning algorithms for fine tuning.
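For illustration only, the weighted combination above can be sketched in Python with the linear weights w(Δt_i, Δt) = Δt_i/Δt and, as one of the non-linear alternatives just mentioned, an exponential decay; the decay constant is an assumption made for this example.

import math

def combine_scores(timestamps, scores, weighting="linear", tau=2.0):
    # timestamps are in decreasing order: timestamps[0] is the most recent sample
    ts_n = timestamps[-1]
    dt = timestamps[0] - ts_n
    weights = []
    for ts_i in timestamps:
        if weighting == "linear":
            weights.append((ts_i - ts_n) / dt if dt > 0 else 1.0)
        else:                                   # exponential decay with age
            weights.append(math.exp(-(timestamps[0] - ts_i) / tau))
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, scores)) / total if total > 0 else 0.0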
2.3 SE_VSE_X Method:
Compare the observed velocity of an object with the configured velocity threshold
Input:
• o=[oid, t,x,y] denotes the object's detection vector at current time t
• Ofirst denotes the first detection vector associated with the object of interest
• Vaverage denotes the expected average speed (used as a threshold)
Output: Threat Score in [0..MAX_THREAT_SCORE]
Function [isThreat ThreatScore] = VM_SE_ALG_X(Ofirst, o, Vaverage)
  % Ofirst   : [oid, t0, x0, y0] vector, first detection
  % o        : [oid, t, x, y] vector
  % Vaverage : average speed threshold, pixels/sec
  % Calculate the speed of the object
  dist_x = Ofirst.x - o.x;
  dist_y = Ofirst.y - o.y;
  dist_t = sqrt(dist_x*dist_x + dist_y*dist_y);
  curr_speed = dist_t / convertToSeconds(o.t - Ofirst.t);
  isThreat = 0; ThreatScore = 0;
  threshold = Vaverage / 2;
  if (curr_speed < threshold)
    isThreat = 1;
    ThreatScore = floor(MAX_THREAT_SCORE * ((threshold - curr_speed) / threshold));
  end
This algorithm detects that an object is wandering around (not moving very much or moving very slowly).
The calculation of the object's speed uses the first position and the current position. Alternatively, the speed of an object can be calculated by:
• Obtaining the speed from all object detection vectors (average, median, mode)
• Obtaining the speed from the last n object detection vectors with a weighted average

3. Acceleration Profile Based Scoring Methods
3.1 SE_ASE0 Method:
Compare the current acceleration profile associated with the current location of an object with the threshold value obtained from the acceleration profile in the current time slice
Input:
• o = [oid, t, x, y, ax, ay] denotes the object's acceleration flow vector
• k = threshold control parameter (k*std)
Output: Threat Score in [0..MAX_THREAT_SCORE]
Let AMDC(t,x,y) denote the acceleration model and o = [oid, t, x, y, ax, ay] denote the object's acceleration flow vector, where object o is detected at location (x,y) at time t. The threat score for this observation would be:
Function [isThreat ThreatScore] = AM_SE_ALG1(AMDC, o, k)
  % o    : [oid, t, x, y, ax, ay] vector
  % AMDC : Multi Modal Data Cube
  % k    : threshold control parameter (k*std)
  % Find the corresponding tidx, yidx, xidx index in AMDC by using (t, x, y)
  tidx = FindTimeIndex(t); yidx = FindYGridIndex(y); xidx = FindXGridIndex(x);
  meany = AMDC(tidx, yidx, xidx, 8);
  meanx = AMDC(tidx, yidx, xidx, 9);
  stdy  = AMDC(tidx, yidx, xidx, 10);
  stdx  = AMDC(tidx, yidx, xidx, 11);
  count = AMDC(tidx, yidx, xidx, 12);
  deltax = abs(ax - meanx);
  deltay = abs(ay - meany);
  isThreat = 0; ThreatScore = 0;
  if (deltax > (k*stdx)) || (deltay > (k*stdy))
    isThreat = 1;
    ThreatScore = deltay/stdy + deltax/stdx;
    ThreatScore = min(MAX_THREAT_SCORE, ThreatScore);
  end

3.2 SE_ASE1N Method:
Decide the threat score by using the last n samples with the acceleration model
Input:
• Obj = {[oid, t_i, x_i, y_i, ax_i, ay_i], ..., [oid, t_(i-n+1), x_(i-n+1), y_(i-n+1), ax_(i-n+1), ay_(i-n+1)]} denotes the object's last n acceleration flow vectors, where t_i is the current time
• k = threshold control parameter (k*std)
• n = number of last observations to be used
Output: Threat Score in [0..MAX_THREAT_SCORE]
Function [isThreat ThreatScore] = AM_SE_ALG1N(AMDC, Obj, k, n)
  % AMDC : Multi Modal Data Cube
  % Obj  : a matrix containing the last n observations [oid, t_i, x_i, y_i, ax_i, ay_i]
  % k    : controls the threshold value (k*std)
  isThreat = 0; ThreatScore = 0;
  for i = 1:min(length(Obj), n)   % length() gives the number of rows in the matrix Obj
    P(i,:) = AM_SE_ALG1(AMDC, Obj(i), k);   % P(i,:) holds [isThreat ThreatScore] for the ith sample
  end
  isThreat = sum(P(:,1));
  if (isThreat > 1)
    isThreat = 1;
  end
  ThreatScore = max(P(:,2));
The above algorithm uses the max() function to obtain the final threat score. There are many different ways to assign the threat score, such as:
ThreatScore = average(P(:,2));
ThreatScore = median(P(:,2));
ThreatScore = mode(P(:,2));
ThreatScore = (average(P(:,2)) + mode(P(:,2)))/2;
ThreatScore = (average(P(:,2)) + median(P(:,2)))/2;
ThreatScore = (median(P(:,2)) + mode(P(:,2)))/2;

4. Speed Profile Based Algorithms
4.1 SE_SSE1 Method: Compare the observed speed of an object with the speed profile
Input: δ = [oid, t, x, y, Δx, Δy] denotes the object's velocity vector at the current time t
Output: Threat Score in [0..MAX_THREAT_SCORE]
[Scoring function shown as an image (imgf000029_0001) in the original document.]
Note: It will detect both 'slow' and 'fast' as a threat.

5. Directional Speed Profile Based Method
A scoring algorithm using the Directional Speed Profile Data Cube accepts the object detection vectors ({(o, t_i, x_i, y_i), (o, t_(i-1), x_(i-1), y_(i-1)), ...}) and:
1) Finds the M_txy cell in the model by using the last location, and finds the entry slice (i) and exit slice (j) within the cell
2) Compares the speed of the object against the [μ_ij ± σ_ij] interval
3) If the value is inside this interval, there is no threat; return
4) If the value is outside this interval, there is a threat, and the threat level is calculated by ThreatLevel = abs(ObservedSpeed - μ_ij)/σ_ij
The above function is one example of obtaining the threat level associated with an object. The threat level determination function can also be described by using an exponential function that produces a non-linear threat measure with respect to the distance between the ObservedSpeed and the expected speed.
Another scoring algorithm using the Directional Speed Profile Data Cube can use some recent positions of an object to obtain such a measure with a weighted sum formula. A variation of such an algorithm can use all of the track data and build a normal distribution N(μ,σ) for the threat level data.
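For illustration only, the per-cell comparison against the [μ_ij ± σ_ij] interval described above can be sketched in Python as follows; the clamping to a maximum score mirrors the other methods in this appendix and is an assumption here.

def directional_speed_score(observed_speed, mu_ij, sigma_ij, max_score=10.0):
    sigma_ij = max(sigma_ij, 1e-6)                     # guard against a zero deviation
    if abs(observed_speed - mu_ij) <= sigma_ij:
        return 0.0                                     # inside the interval: no threat
    level = abs(observed_speed - mu_ij) / sigma_ij     # ThreatLevel as defined above
    return min(max_score, level)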
6. Scoring methods for targets of interest
6.1 SE_CROSSOVER1 Method:
Compare the observed location of an object with the target region of interest
Input: δ = [oid,t,x, y] denotes the object's detection vector at current time t
TargetDef = [[x0,y0],[x1,y1]] specifies a region in the camera view (camera image coordinates)
Output: Threat Score in [0..MAX_THREAT_SCORE]
[Figure 2: Target region of interest, defined within the camera view (image coordinates extending to Xmax).]
In Figure 2, the target region of interest is defined in the field of view of a camera. The scoring algorithm generates the threat scores based on the distance between the object and the center of the target region.
Function [isThreat ThreatScore] = SE_CROSSOVER1(o, TargetDef)
  % o         : [oid, t, x, y] vector
  % TargetDef : Target Definition, [[x0 y0] [x1 y1]]
  x0 = TargetDef[1][1]; x1 = TargetDef[2][1];
  y0 = TargetDef[1][2]; y1 = TargetDef[2][2];
  center_x = (x0 + x1)/2;
  center_y = (y0 + y1)/2;
  diff_x = x1 - center_x;
  diff_y = y1 - center_y;
  maxDistance = sqrt(diff_x^2 + diff_y^2);
  % Calculate the distance between the object's location and
  % the center of the target region of interest
  diff_x = x - center_x;
  diff_y = y - center_y;
  objectDistance = sqrt(diff_x^2 + diff_y^2);
  isThreat = 0; ThreatScore = 0;
  if (objectDistance < maxDistance)
    isThreat = 1;
    threatDistance = 1 - (objectDistance/maxDistance);
    ThreatScore = MAX_THREAT_SCORE * threatDistance;
  end
Variations:
1) The target description can be a circle (described by a center and radius), as well as an arbitrary shape defined by a polygon representation (the MPEG-7 Region descriptor can be utilized)
2) There can be more than one target description per camera
3) A target description can be associated with a time interval [t_begin, t_end] during which it can be used
4) The threat distance is calculated with a linear model. The threat distance could instead be calculated by using a 2D Gaussian function centered at (xc, yc).
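For illustration only, variations 1 and 4 above can be sketched in Python as a circular target description and a 2D Gaussian threat distance; the sigma spread of the Gaussian is a hypothetical parameter.

import math

def circle_target_score(obj_xy, center_xy, radius, max_score=10.0):
    d = math.dist(obj_xy, center_xy)
    return max_score * (1.0 - d / radius) if d < radius else 0.0

def gaussian_threat_distance(obj_xy, center_xy, sigma):
    d2 = (obj_xy[0] - center_xy[0]) ** 2 + (obj_xy[1] - center_xy[1]) ** 2
    return math.exp(-d2 / (2.0 * sigma * sigma))       # 1 at the center, decaying outward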
6.2 SE_CROSSOVER2 Method:
Compare the observed location of an object with the target region of interest and the occurrence model
Input: δ = [oid, t, x, y] denotes the object's detection vector at the current time t
TargetDef = [[x0,y0],[x1,y1]] specifies a region in the camera view (camera image coordinates)
Output: Threat Score in [0..MAX_THREAT_SCORE]
ThreatScore = (1 - normalized_distance) * (1 - OccurrenceProb)
where normalized_distance = (Euclidean distance between the target and the object) / (maximum possible Euclidean distance between the target and the object)
Function [isThreat ThreatScore] = SE_CROSSOVER2(DC, o, TargetDef, MAX_DIST)
  %
  % DC        : Multi Modal Data Cube
  % o         : [oid, t, x, y] vector
  % TargetDef : Target Definition, [[x0 y0] [x1 y1]]
  % Find the corresponding tidx, yidx, xidx index in DC by using (t, x, y)
  tidx = FindTimeIndex(t); yidx = FindYGridIndex(y); xidx = FindXGridIndex(x);
  x0 = TargetDef[1][1]; x1 = TargetDef[2][1];
  y0 = TargetDef[1][2]; y1 = TargetDef[2][2];
  center_x = (x0 + x1)/2;
  center_y = (y0 + y1)/2;
  % Calculate the distance between the object's location and
  % the center of the target region of interest
  diff_x = x - center_x;
  diff_y = y - center_y;
  objectDistance = sqrt(diff_x^2 + diff_y^2);
  isThreat = 1; ThreatScore = 0;
  threatDistance = 1 - (objectDistance/MAX_DIST);
  % per the formula above: proximity factor times (1 - occurrence probability)
  ThreatScore = MAX_THREAT_SCORE * threatDistance * (1 - DC(tidx, yidx, xidx, 2));
Note that the threat score calculation uses the combination of the occurrence probability and the proximity-to-target measures to find the final threat score. When the object is close to the target but in frequently visited places, the threat score is reduced. When the object is close to the target but not in frequently visited places, the threat score is increased.
6.3 SE_APPROACH1 Method:
Compare the observed velocity and direction of an object with respect to the target region of interest
Input: o = [oid, t, x, y, Δx, Δy] denotes the object's velocity flow vector at the current time t
TargetDef = [[x0,y0],[x1,y1]] specifies a region in the camera view (camera image coordinates)
Output: Threat Score in [0..MAX_THREAT_SCORE]
If the object is approaching the target, this component is positive. For an object moving away from the target, the component would be negative. This information is used to determine the threat posed by an approaching object.
ApproachThreat = ||(dx/dt)i + (dy/dt)j|| · cos(θ)
where (dx/dt) is the instantaneous velocity in the x direction, (dy/dt) is the instantaneous velocity in the y direction, and θ is the angle between the direction of the velocity and the line joining the target and the object.
Function [isThreat ThreatScore] = SE_APPROACH1(o, TargetDef)
  %
  % o         : [oid, t, x, y, Δx, Δy] vector
  % TargetDef : Target Definition, [[x0 y0] [x1 y1]]
  %
  x0 = TargetDef[1][1]; x1 = TargetDef[2][1];
  y0 = TargetDef[1][2]; y1 = TargetDef[2][2];
  center_x = (x0 + x1)/2;
  center_y = (y0 + y1)/2;
  vect1 = [Δx Δy];                                   % object velocity, [o(5) o(6)]
  vect2 = [(center_x - o(3)) (center_y - o(4))];     % vector from the object to the target
  % angle between the vectors
  costh = dot(vect1, vect2) / (norm(vect1)*norm(vect2));
  velocity = sqrt(o(5)^2 + o(6)^2) * costh;          % approach component of the velocity
  isThreat = 0; ThreatScore = 0;
  if (velocity > 0)
    isThreat = 1;
    threatDistance = velocity * MAX_THREAT_SCORE;
    ThreatScore = min(MAX_THREAT_SCORE, threatDistance);
  end
7. Scoring methods for object motion data
7.1 SE_WANDER1 Method:
Compare the observed object locations and judge whether an object stays in a particular region for the given number of frames
Input:
• Obj = {[oid, t_i, x_i, y_i], ..., [oid, t_(i-n+1), x_(i-n+1), y_(i-n+1)]} denotes the object's last n detection vectors, where t_i is the current time
Output: Threat Score in [0..MAX_THREAT_SCORE]
WanderRatio = Nr/N    (3)
where:
Nr is the number of points that are within a radius R (WANDER_RADIUS) of the current point.
N is the WANDERING_ORDER, the number of past samples used to determine whether there is loitering.
Function [isThreat ThreatScore] = SE_WANDER1(Obj)
  %
  % Wandering factor - check the average distance between the current point and all
  % points observed so far; for a wandering object, this distance should be smaller
  last_x = Obj(1,3); last_y = Obj(1,4);
  Count = 0;
  for j = 1:min(length(Obj), WANDERING_ORDER)
    diff_x = last_x - Obj(j,3);
    diff_y = last_y - Obj(j,4);
    distval = sqrt(diff_x^2 + diff_y^2);
    if (distval < WANDER_RADIUS)
      Count = Count + 1;
    end
  end
  WanderRatio = Count / WANDERING_ORDER;
  isThreat = 0; ThreatScore = 0;
  if (WanderRatio > 0.5)
    isThreat = 1;
    threatDistance = WanderRatio;
    ThreatScore = WanderRatio * MAX_THREAT_SCORE;
  end

Claims

CLAIMS
What is claimed is:
1. A surveillance system, comprising: a data capture module that collects sensor data; a scoring engine module that receives the sensor data and computes at least one of an abnormality score and a normalcy score based on the sensor data, at least one dynamically loaded learned data model, and a learned scoring method; and a decision making module that receives the at least one of the abnormality score and the normalcy score and generates an alert message based on the at least one of the abnormality score and the normalcy score and a learned decision making method to produce progressive behavior and threat detection.
2. The surveillance system of claim 1 further comprising a device configuration module that automatically loads the learned scoring methods, the learned decision making methods, and the learned model to at least one of the scoring engine module and the decision making module.
3. The surveillance system of claim 1 further comprising a model builder module that adaptively learns the model and wherein the scoring engine module computes the at least one of the abnormality score and the normalcy score based on the adaptively learned models.
4. The surveillance system of claim 1 further comprising a model builder module that builds the model based on at least one of a simulation of the sensor data and accumulated sensor data.
5. The surveillance system of claim 1 wherein the learned scoring method calculates an observed property of objects in motion against the model stored in a data cube to obtain a set of scores that represent at least one of similarity and difference scores between an object in motion and the learned model.
6. The surveillance system of claim 5 wherein the at least one of the similarity and difference scores are accumulated and normalized for the object in motion, to represent the at least one of normalcy and abnormality scores.
7. The surveillance system of claim 4 further comprising a graphical user interface that accepts parameters from a user to generate the simulation.
8. The surveillance system of claim 1 further comprising a learning module that adaptively learns at least one of the scoring methods, the decision making methods, and the learned model.
9. The surveillance system of claim 1 further comprising an alarm handling module that receives the alert message and generates an alarm message based on a further examination of the alert message.
10. The surveillance system of claim 1 wherein the data capture module collects sensor data from an image sensor and extracts object data from the sensor data, and wherein the scoring engine module computes the at least one of the abnormality score and the normalcy score based on the object data.
11. The surveillance system of claim 1 wherein the decision making module receives at least one of an abnormality score and a normalcy score generated from other sensor data and generates an alert message based on the at least one of the abnormality score and the normalcy score generated from the other sensor data.
12. A surveillance system, comprising: a plurality of image sensing devices, wherein the image sensing devices each include: a data capture module that collects sensor data; a scoring engine module that receives the sensor data and computes at least one of an abnormality score and a normalcy score based on the sensor data, at least one dynamically loaded data model, and a learned scoring method; and a decision making module that receives the at least one of the abnormality score and the normalcy score and generates an alert message based on the at least one of the abnormality score and the normalcy score and a learned decision making method to produce progressive behavior and threat detection.
13. The surveillance system of claim 12 wherein the decision making module of a first image sensing device receives the at least one of the abnormality score and the normalcy score from a second image sensing device, and wherein the decision making module of the first image sensing device generates the alert message based on the at least one of the abnormality score and the normalcy score from the second image sensing device.
14. The surveillance system of claim 12 further comprising a model builder module that adaptively learns the predetermined models.
15. The surveillance system of claim 12 wherein the image sensing devices each further include a device configuration module that automatically loads updated scoring methods, decision making methods, and the learned models to the image sensing device.
16. The surveillance system of claim 12 further comprising a model builder module that builds models based on a simulation of the sensor data and accumulated real sensor data.
17. The surveillance system of claim 16 further comprising a graphical user interface that accepts motion parameters from a user to generate the simulation.
18. The surveillance system of claim 12 further comprising a learning module that adaptively learns a decision making method and wherein the decision making method is selectively loaded to at least one of the plurality of image sensing devices.
19. The surveillance system of claim 12 further comprising an alarm handling module that receives the alert messages from the plurality of image sensing devices and generates an alarm message based on a further examination of the alert messages.
20. A surveillance method, comprising: receiving sensor data; dynamically loading data models; computing at least one of an abnormality score and a normalcy score based on the sensor data and the dynamically loaded data models; and generating an alert message based on the at least one of the abnormality score and the normalcy score.
21. The surveillance method of claim 20 further comprising selectively loading at least one of scoring methods and decision making methods to be used by at least one of the computing and the generating.
22. The surveillance method of claim 20 further comprising: adaptively learning the data models, and wherein the computing comprises computing the at least one of the abnormality score and the normalcy score based on the adaptively learned data models.
23. The surveillance method of claim 20 further comprising building the model based on a simulation of the sensor data.
24. The surveillance method of claim 20 further comprising:
adaptively learning a decision making method, and wherein the generating comprises generating the alert message based on the adaptively learned decision making method.
25. The surveillance method of claim 20 further comprising: further examining the alert message; and generating an alarm message based on the further examining.
26. The surveillance method of claim 20 wherein the receiving comprises receiving sensor data from an image sensor.
27. The surveillance method of claim 26 further comprising: extracting object data from the sensor data, and wherein the computing further comprises computing the at least one of the abnormality score and the normalcy score based on the object data.
PCT/US2007/087566 2007-02-16 2007-12-14 Surveillance systems and methods WO2008103206A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009549578A JP5224401B2 (en) 2007-02-16 2007-12-14 Monitoring system and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/676,127 US7667596B2 (en) 2007-02-16 2007-02-16 Method and system for scoring surveillance system footage
US11/676,127 2007-02-16

Publications (2)

Publication Number Publication Date
WO2008103206A1 true WO2008103206A1 (en) 2008-08-28
WO2008103206B1 WO2008103206B1 (en) 2008-10-30

Family

ID=39272736

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/087566 WO2008103206A1 (en) 2007-02-16 2007-12-14 Surveillance systems and methods

Country Status (3)

Country Link
US (1) US7667596B2 (en)
JP (1) JP5224401B2 (en)
WO (1) WO2008103206A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170185872A1 (en) * 2015-12-28 2017-06-29 Qualcomm Incorporated Automatic detection of objects in video images
EP3557549A1 (en) 2018-04-19 2019-10-23 PKE Holding AG Method for evaluating a motion event

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5121258B2 (en) * 2007-03-06 2013-01-16 株式会社東芝 Suspicious behavior detection system and method
US9380256B2 (en) * 2007-06-04 2016-06-28 Trover Group Inc. Method and apparatus for segmented video compression
KR101187901B1 (en) * 2007-07-03 2012-10-05 삼성테크윈 주식회사 System for intelligent surveillance and method for controlling thereof
WO2009038561A1 (en) * 2007-09-19 2009-03-26 United Technologies Corporation System and method for threat propagation estimation
WO2009045218A1 (en) 2007-10-04 2009-04-09 Donovan John J A video surveillance, storage, and alerting system having network management, hierarchical data storage, video tip processing, and vehicle plate analysis
US8013738B2 (en) 2007-10-04 2011-09-06 Kd Secure, Llc Hierarchical storage manager (HSM) for intelligent storage of large volumes of data
US20100153146A1 (en) * 2008-12-11 2010-06-17 International Business Machines Corporation Generating Generalized Risk Cohorts
US7962435B2 (en) * 2008-02-20 2011-06-14 Panasonic Corporation System architecture and process for seamless adaptation to context aware behavior models
JP4615038B2 (en) * 2008-06-23 2011-01-19 日立オートモティブシステムズ株式会社 Image processing device
US8301443B2 (en) * 2008-11-21 2012-10-30 International Business Machines Corporation Identifying and generating audio cohorts based on audio data input
US8041516B2 (en) * 2008-11-24 2011-10-18 International Business Machines Corporation Identifying and generating olfactory cohorts based on olfactory sensor input
US9111237B2 (en) * 2008-12-01 2015-08-18 International Business Machines Corporation Evaluating an effectiveness of a monitoring system
US8749570B2 (en) 2008-12-11 2014-06-10 International Business Machines Corporation Identifying and generating color and texture video cohorts based on video input
US8190544B2 (en) 2008-12-12 2012-05-29 International Business Machines Corporation Identifying and generating biometric cohorts based on biometric sensor input
US8417035B2 (en) * 2008-12-12 2013-04-09 International Business Machines Corporation Generating cohorts based on attributes of objects identified using video input
US20100153174A1 (en) * 2008-12-12 2010-06-17 International Business Machines Corporation Generating Retail Cohorts From Retail Data
US20100153147A1 (en) * 2008-12-12 2010-06-17 International Business Machines Corporation Generating Specific Risk Cohorts
US20100153597A1 (en) * 2008-12-15 2010-06-17 International Business Machines Corporation Generating Furtive Glance Cohorts from Video Data
US20100153133A1 (en) * 2008-12-16 2010-06-17 International Business Machines Corporation Generating Never-Event Cohorts from Patient Care Data
US11145393B2 (en) 2008-12-16 2021-10-12 International Business Machines Corporation Controlling equipment in a patient care facility based on never-event cohorts from patient care data
US8493216B2 (en) * 2008-12-16 2013-07-23 International Business Machines Corporation Generating deportment and comportment cohorts
US20100153180A1 (en) * 2008-12-16 2010-06-17 International Business Machines Corporation Generating Receptivity Cohorts
US8219554B2 (en) 2008-12-16 2012-07-10 International Business Machines Corporation Generating receptivity scores for cohorts
CN102576412B (en) * 2009-01-13 2014-11-05 华为技术有限公司 Method and system for image processing to classify an object in an image
US8253564B2 (en) * 2009-02-19 2012-08-28 Panasonic Corporation Predicting a future location of a moving object observed by a surveillance device
US20110055895A1 (en) * 2009-08-31 2011-03-03 Third Iris Corp. Shared scalable server to control confidential sensory event traffic among recordation terminals, analysis engines, and a storage farm coupled via a non-proprietary communication channel
US20110205359A1 (en) 2010-02-19 2011-08-25 Panasonic Corporation Video surveillance system
MX2012011118A (en) * 2010-03-26 2013-04-03 Fortem Solutions Inc Effortless navigation across cameras and cooperative control of cameras.
KR101746453B1 (en) * 2010-04-12 2017-06-13 삼성전자주식회사 System and Method for Processing Sensory Effect
KR20110132884A (en) * 2010-06-03 2011-12-09 한국전자통신연구원 Apparatus for intelligent video information retrieval supporting multi channel video indexing and retrieval, and method thereof
US8457354B1 (en) * 2010-07-09 2013-06-04 Target Brands, Inc. Movement timestamping and analytics
US10318877B2 (en) 2010-10-19 2019-06-11 International Business Machines Corporation Cohort-based prediction of a future event
US9158976B2 (en) 2011-05-18 2015-10-13 International Business Machines Corporation Efficient retrieval of anomalous events with priority learning
US20130027561A1 (en) * 2011-07-29 2013-01-31 Panasonic Corporation System and method for improving site operations by detecting abnormalities
GB2501542A (en) * 2012-04-28 2013-10-30 Bae Systems Plc Abnormal behaviour detection in video or image surveillance data
US8712100B2 (en) 2012-05-30 2014-04-29 International Business Machines Corporation Profiling activity through video surveillance
US9471300B2 (en) 2012-07-26 2016-10-18 Utc Fire And Security America Corporation, Inc. Wireless firmware upgrades to an alarm security panel
US9208676B2 (en) 2013-03-14 2015-12-08 Google Inc. Devices, methods, and associated information processing for security in a smart-sensored home
KR101747218B1 (en) * 2012-12-03 2017-06-15 한화테크윈 주식회사 Method for operating host apparatus in surveillance system, and surveillance system adopting the method
US20140372183A1 (en) * 2013-06-17 2014-12-18 Motorola Solutions, Inc Trailer loading assessment and training
US20140372182A1 (en) * 2013-06-17 2014-12-18 Motorola Solutions, Inc. Real-time trailer utilization measurement
US20150082203A1 (en) * 2013-07-08 2015-03-19 Truestream Kk Real-time analytics, collaboration, from multiple video sources
US9201581B2 (en) * 2013-07-31 2015-12-01 International Business Machines Corporation Visual rules for decision management
US9984345B2 (en) * 2014-09-11 2018-05-29 International Business Machine Corporation Rule adjustment by visualization of physical location data
US10719717B2 (en) 2015-03-23 2020-07-21 Micro Focus Llc Scan face of video feed
US10007849B2 (en) 2015-05-29 2018-06-26 Accenture Global Solutions Limited Predicting external events from digital video content
US9940730B2 (en) 2015-11-18 2018-04-10 Symbol Technologies, Llc Methods and systems for automatic fullness estimation of containers
US10713610B2 (en) 2015-12-22 2020-07-14 Symbol Technologies, Llc Methods and systems for occlusion detection and data correction for container-fullness estimation
SG10201510337RA (en) 2015-12-16 2017-07-28 Vi Dimensions Pte Ltd Video analysis methods and apparatus
US9965683B2 (en) 2016-09-16 2018-05-08 Accenture Global Solutions Limited Automatically detecting an event and determining whether the event is a particular type of event
US10795560B2 (en) * 2016-09-30 2020-10-06 Disney Enterprises, Inc. System and method for detection and visualization of anomalous media events
CN108024088B (en) * 2016-10-31 2020-07-03 杭州海康威视系统技术有限公司 Video polling method and device
JP6675297B2 (en) * 2016-12-09 2020-04-01 Dmg森精機株式会社 Information processing method, information processing system, and information processing apparatus
WO2018150270A1 (en) * 2017-02-17 2018-08-23 Zyetric Logic Limited Augmented reality enabled windows
US11093927B2 (en) * 2017-03-29 2021-08-17 International Business Machines Corporation Sensory data collection in an augmented reality system
GB2569555B (en) * 2017-12-19 2022-01-12 Canon Kk Method and apparatus for detecting deviation from a motion pattern in a video
GB2569557B (en) 2017-12-19 2022-01-12 Canon Kk Method and apparatus for detecting motion deviation in a video
GB2569556B (en) * 2017-12-19 2022-01-12 Canon Kk Method and apparatus for detecting motion deviation in a video sequence
US10417500B2 (en) 2017-12-28 2019-09-17 Disney Enterprises, Inc. System and method for automatic generation of sports media highlights
DE102018201570A1 (en) * 2018-02-01 2019-08-01 Robert Bosch Gmbh Multiple target object tracking method, apparatus and computer program for performing multiple target object tracking on moving objects
CA3040367A1 (en) * 2018-04-16 2019-10-16 Interset Software, Inc. System and method for custom security predictive models
US10783656B2 (en) 2018-05-18 2020-09-22 Zebra Technologies Corporation System and method of determining a location for placement of a package
US10733457B1 (en) 2019-03-11 2020-08-04 Wipro Limited Method and system for predicting in real-time one or more potential threats in video surveillance
US11158174B2 (en) 2019-07-12 2021-10-26 Carrier Corporation Security system with distributed audio and video sources
CN112801468A (en) * 2021-01-14 2021-05-14 深联无限(北京)科技有限公司 Intelligent management and decision-making auxiliary method for intelligent community polymorphic discrete information

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1403817A1 (en) * 2000-09-06 2004-03-31 Hitachi, Ltd. Abnormal behavior detector
WO2005066912A1 (en) * 2004-01-12 2005-07-21 Elbit Systems Ltd. System and method for identifying a threat associated person among a crowd
US20060059557A1 (en) * 2003-12-18 2006-03-16 Honeywell International Inc. Physical security management system

Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5091780A (en) * 1990-05-09 1992-02-25 Carnegie-Mellon University A trainable security system emthod for the same
US5261041A (en) 1990-12-28 1993-11-09 Apple Computer, Inc. Computer controlled animation system based on definitional animated objects and methods of manipulating same
US5594856A (en) 1994-08-25 1997-01-14 Girard; Michael Computer user interface for step-driven character animation
US5666157A (en) 1995-01-03 1997-09-09 Arc Incorporated Abnormality detection and surveillance system
US7076102B2 (en) 2001-09-27 2006-07-11 Koninklijke Philips Electronics N.V. Video monitoring system employing hierarchical hidden markov model (HMM) event learning and classification
US6985172B1 (en) 1995-12-01 2006-01-10 Southwest Research Institute Model-based incident detection system with motion classification
US5966074A (en) 1996-12-17 1999-10-12 Baxter; Keith M. Intruder alarm with trajectory display
US5937092A (en) 1996-12-23 1999-08-10 Esco Electronics Rejection of light intrusion false alarms in a video security system
US5956424A (en) 1996-12-23 1999-09-21 Esco Electronics Corporation Low false alarm rate detection for a video image processing based security alarm system
US6088042A (en) 1997-03-31 2000-07-11 Katrix, Inc. Interactive motion data animation system
US7023913B1 (en) * 2000-06-14 2006-04-04 Monroe David A Digital security multimedia sensor
US6587574B1 (en) 1999-01-28 2003-07-01 Koninklijke Philips Electronics N.V. System and method for representing trajectories of moving objects for content-based indexing and retrieval of visual animated data
US6678413B1 (en) 2000-11-24 2004-01-13 Yiqing Liang System and method for object identification and behavior characterization using video analysis
US6441734B1 (en) 2000-12-12 2002-08-27 Koninklijke Philips Electronics N.V. Intruder detection through trajectory analysis in monitoring and surveillance systems
US7095328B1 (en) 2001-03-16 2006-08-22 International Business Machines Corporation System and method for non intrusive monitoring of “at risk” individuals
US7110569B2 (en) 2001-09-27 2006-09-19 Koninklijke Philips Electronics N.V. Video based detection of fall-down and other events
US6823011B2 (en) 2001-11-19 2004-11-23 Mitsubishi Electric Research Laboratories, Inc. Unusual event detection using motion activity descriptors
US6856249B2 (en) * 2002-03-07 2005-02-15 Koninklijke Philips Electronics N.V. System and method of keeping track of normal behavior of the inhabitants of a house
WO2003087929A1 (en) * 2002-04-10 2003-10-23 Pan-X Imaging, Inc. A digital imaging system
US20050104960A1 (en) 2003-11-17 2005-05-19 Mei Han Video surveillance system with trajectory hypothesis spawning and local pruning
US7136507B2 (en) 2003-11-17 2006-11-14 Vidient Systems, Inc. Video surveillance system with rule-based reasoning and multiple-hypothesis scoring
US20050104959A1 (en) 2003-11-17 2005-05-19 Mei Han Video surveillance system with trajectory hypothesis scoring based on at least one non-spatial parameter
US7127083B2 (en) 2003-11-17 2006-10-24 Vidient Systems, Inc. Video surveillance system with object detection and probability scoring based on object class
US7088846B2 (en) 2003-11-17 2006-08-08 Vidient Systems, Inc. Video surveillance system that detects predefined behaviors based on predetermined patterns of movement through zones
US7148912B2 (en) 2003-11-17 2006-12-12 Vidient Systems, Inc. Video surveillance system in which trajectory hypothesis spawning allows for trajectory splitting and/or merging
US7109861B2 (en) 2003-11-26 2006-09-19 International Business Machines Corporation System and method for alarm generation based on the detection of the presence of a person
US20050285937A1 (en) 2004-06-28 2005-12-29 Porikli Fatih M Unusual event detection in a video using object and frame features
US7426301B2 (en) 2004-06-28 2008-09-16 Mitsubishi Electric Research Laboratories, Inc. Usual event detection in a video using object and frame features
US7339607B2 (en) * 2005-03-25 2008-03-04 Yongyouth Damabhorn Security camera and monitor system activated by motion sensor and body heat sensor for homes or offices
US20070008408A1 (en) * 2005-06-22 2007-01-11 Ron Zehavi Wide area security system and method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1403817A1 (en) * 2000-09-06 2004-03-31 Hitachi, Ltd. Abnormal behavior detector
US20060059557A1 (en) * 2003-12-18 2006-03-16 Honeywell International Inc. Physical security management system
WO2005066912A1 (en) * 2004-01-12 2005-07-21 Elbit Systems Ltd. System and method for identifying a threat associated person among a crowd

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170185872A1 (en) * 2015-12-28 2017-06-29 Qualcomm Incorporated Automatic detection of objects in video images
US10083378B2 (en) * 2015-12-28 2018-09-25 Qualcomm Incorporated Automatic detection of objects in video images
EP3557549A1 (en) 2018-04-19 2019-10-23 PKE Holding AG Method for evaluating a motion event

Also Published As

Publication number Publication date
US20080201116A1 (en) 2008-08-21
US7667596B2 (en) 2010-02-23
JP2010519608A (en) 2010-06-03
WO2008103206B1 (en) 2008-10-30
JP5224401B2 (en) 2013-07-03

Similar Documents

Publication Publication Date Title
WO2008103206A1 (en) Surveillance systems and methods
US9875648B2 (en) Methods and systems for reducing false alarms in a robotic device by sensor fusion
KR101850286B1 (en) A deep learning based image recognition method for CCTV
US11217088B2 (en) Alert volume normalization in a video surveillance system
US11615623B2 (en) Object detection in edge devices for barrier operation and parcel delivery
US9524426B2 (en) Multi-view human detection using semi-exhaustive search
US9251598B2 (en) Vision-based multi-camera factory monitoring with dynamic integrity scoring
US10007850B2 (en) System and method for event monitoring and detection
US20150294143A1 (en) Vision based monitoring system for activity sequency validation
US20150294496A1 (en) Probabilistic person-tracking using multi-view fusion
KR101877294B1 (en) Smart cctv system for crime prevention capable of setting multi situation and recognizing automatic situation by defining several basic behaviors based on organic relation between object, area and object&#39;s events
US20180341814A1 (en) Multiple robots assisted surveillance system
US20120328153A1 (en) Device and method for monitoring video objects
CN111488803A (en) Airport target behavior understanding system integrating target detection and target tracking
US10210392B2 (en) System and method for detecting potential drive-up drug deal activity via trajectory-based analysis
CN112149491A (en) Method for determining a trust value of a detected object
US20180032817A1 (en) System and method for detecting potential mugging event via trajectory-based analysis
Mohan et al. Anomaly and activity recognition using machine learning approach for video based surveillance
CN114218992A (en) Abnormal object detection method and related device
Ko et al. Rectified trajectory analysis based abnormal loitering detection for video surveillance
KR102585665B1 (en) Risk Situation Analysis and Hazard Object Detection System
CN114266804A (en) Cross-sensor object attribute analysis method and system
KR102286229B1 (en) A feature vector-based fight event recognition method
JP6678706B2 (en) Type determination program, type determination device and type determination method
Kumari et al. Dynamic scheduling of an autonomous PTZ camera for effective surveillance

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07865683

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2009549578

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07865683

Country of ref document: EP

Kind code of ref document: A1